Google: "There are bound to be some oddities and errors" in system that told people to eat rocks.
> aims to provide search users with summarized answers to questions by using an AI model integrated with Google's web ranking systems

Who wants this? Besides Google?
AI Overviews work very differently than chatbots and other LLM products that people may have tried out. They’re not simply generating an output based on training data. While AI Overviews are powered by a customized language model, the model is integrated with our core web ranking systems and designed to carry out traditional “search” tasks, like identifying relevant, high-quality results from our index.
> Gravel companies?

Glue pizza is the next big thing, I tells ya
> Glue pizza is the next big thing, I tells ya

Can I get my pie with half glue and half rocks?
> While addressing the "nonsensical searches" angle in the post, Reid uses the example search, "How many rocks should I eat each day," which went viral in a tweet on May 23. Reid says, "Prior to these screenshots going viral, practically no one asked Google that question."

What a very Muskian response. Is this Reid person new to the Internet?
Perhaps unsurprisingly, the company is forgiving of itself for the failures so far. "At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors. We’ve learned a lot over the past 25 years about how to build and maintain a high-quality search experience, including how to learn from these errors to make Search better for everyone."
> Garbage In, Garbage Out

Even worse: the LLMs are now being trained on their own garbage output, which gets put back on the web through these SEO spam sites, further reinforcing the cycle of shit.
> Glue pizza is the next big thing, I tells ya

Still better than the mayonnaise pizza some people eat.
> I think the phrase "flawed by design" is actually used wrongly here. It usually means "intentionally flawed," whereas I think the author wants to say that there exists a "flaw in the design."

What evidence do you have that this isn't intentional?
> Can I get my pie with half glue and half rocks?

Don't be ridiculous. No one would get a pizza half covered with rocks.
> Who wants this? Besides Google?

I suspect the longer-term vision is to evolve this into a voice assistant kind of product - Alexa-like - where you can just ask your phone, or watch, or car, or robot dog a question about anything, and have it speak the short and authoritative answer back at you.
> Don't be ridiculous. No one would get a pizza half covered with rocks.

It could be 1 very large rock.
You are only supposed to eat one rock per day.
> I've recently seen two AI summary products that are superficially the same, and yet totally different in value.
>
> The first was an AI summary of reviews for hiking trails. In this case, it summarized the reviews and had been tweaked to understand that recency and seasonality mattered, so it could say things like "In January 2024, a reviewer posted that a downed tree was blocking the trail" or "Reviewers report that this trail is muddy and difficult in winter." No official human was generating an authoritative summary of each trail, so this is helpful in getting the user to information that would otherwise be hard to find.
>
> By contrast, Amazon has started adding a review summary for fiction books to its listings. In this case, there is always at least one authoritative human-written summary of the book, which is more useful than the AI's. If you want a human review, even reading a single highly rated review gives you better information than the AI summary does. "Customers report this book is funny and engaging" doesn't do anything - you already knew that, and the AI is just getting in your way.

I feel like Amazon are part of the "make it work at all costs" crowd that, ironically, aren't actually trying to show this technology in the best light.
> Don't be ridiculous. No one would get a pizza half covered with rocks.

Surely like those one glass of wine things - one is for health, the rest is just for me.
> Even worse. The LLMs are now being trained on their own garbage output being put on the web through these SEO spam sites. Thus further reinforcing the cycle of shit.

Garbage In, Repackaged Garbage Out
> So Google is no longer a reliable source of information. Got it.

It never was. The people providing the information were the reliable source. Too bad the money was in spamming SEO results and in plausible misinformation, not in accuracy.
> Who wants this? Besides Google?

The entire AI push over the last year makes a lot more sense when you realize no one really knows what AI is actually useful for, or how to make money off it (or even how it works, but that's not really all that important: most people don't know how the Internet itself works, after all). All pretty much everyone knows is that it's very popular, and might one day be extremely powerful and useful. So companies are throwing AI everywhere they can, because they're hoping they'll create the next Facebook or Netflix in the process. It's not actually irrational once you realize management has absolutely no idea how and why some companies succeed and others fail; they just know they need to be seen doing something, because doing nothing (even if that is the successful move) looks like a failure of vision.