Google: "There are bound to be some oddities and errors" in system that told people to eat rocks.
> It never was. The people providing the information were a reliable source of information. Too bad the money was in spamming SEO results and plausible misinformation dissemination and not in accuracy.

Probably doesn’t help that Google made SEO like reading tea leaves.
> While addressing the "nonsensical searches" angle in the post, Reid uses the example search, "How many rocks should I eat each day," which went viral in a tweet on May 23. Reid says, "Prior to these screenshots going viral, practically no one asked Google that question." And since there isn't much data on the web that answers it, she says there is a "data void" or "information gap" that was filled by satirical content found on the web, and the AI model found it and pushed it as an answer, much like Featured Snippets might. So basically, it was working exactly as designed.

Okay, so your product, when it is "working exactly as designed," takes satirical, sarcastic, and other non-serious content, removes it from its original context, and presents it as if it's just as authoritative as any other answer.
> Garbage In, Garbage Out

It's adding a multiplier to the dis/misinformation shoveled onto the internet by expletive expletives.
> What evidence do you have that this isn't intentional?
>
> It seems to be functioning exactly per Google's design if this Reid person is to be trusted.

I see plenty of evidence in the article of erroneous assumptions, wrong thinking, and the like (on Google’s behalf) — but I don’t see the claim that Google is deliberately putting out a crap AI-based system because it likes crap products but because it won’t relinquish these bad assumptions. To me, this is “flawed in design” not “flawed by design”.
> Don't be ridiculous. No one would get a pizza half covered with rocks.

Dwayne Johnson Memes have entered the chat.
You are only supposed to eat one rock per day.
> It never was. The people providing the information were a reliable source of information. Too bad the money was in spamming SEO results and plausible misinformation dissemination and not in accuracy.

I mostly agree: Pre-AI, Google did provide reliable, accurate info: the site domain, a snippet of text extracted from the site, and a link. I could very quickly use the domain as a first filter, the text as a second filter, and click the link to read the full source. I had to wade through an ever-increasing pile of crap to get to the usable stuff, but I never held Google responsible for the accuracy of the source, just the accuracy of the snippet and link.
Throwing an AI summary over the junk search results that Google currently generates was never going to work well. Google really has lost the plot. My guess is that their market research told the executives that people are finding Google searches less useful, some executive said "Could we use AI to fix the search results?", and others shouted "Brilliant! Give that person a bonus!"
> Of course it's wrong, the users were supposed to be told to pound sand.

Or maybe pound those rocks into sand before eating them? Might be safer for your teeth.
> Do people at Google genuinely not know that the quality of search results has been going down in recent years (beyond what they're actively doing to make them worse)? Do they... not use Google? Do they just run unit tests and give a thumbs-up without ever actually checking? The explanation is plausible, but it just leads to more questions.

If you believe this, not only do they know, but they do it on purpose because money, basically.
The blog post said:

> While AI Overviews are powered by a customized language model, the model is integrated with our core web ranking systems and designed to carry out traditional “search” tasks, like identifying relevant, high-quality results from our index.
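What the blog post describes is essentially retrieval-augmented generation: rank documents first, then summarize only what the ranker returns. Here is a minimal sketch of that shape — every name in it (`rank_results`, `overview`, the toy index) is a hypothetical placeholder, not Google's actual system — that also shows the failure mode the thread is complaining about:

```python
# Toy retrieval-augmented "overview": summarize only top-ranked results.
# The ranker has no notion of whether a "top result" is serious or satire,
# so the summary inherits whatever the index contains.

def rank_results(query: str, index: list[dict]) -> list[dict]:
    """Naive 'core web ranking': score documents by query-term overlap."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc["text"].lower().split())), doc) for doc in index]
    return [doc for score, doc in sorted(scored, key=lambda s: -s[0]) if score > 0]

def overview(query: str, index: list[dict]) -> str:
    top = rank_results(query, index)[:3]
    if not top:
        return "No overview available."
    # Grounded in "top web results" -- which is no help when the top
    # result for a data-void query is a satire site.
    return " ".join(doc["text"] for doc in top)

index = [
    {"source": "satire-site", "text": "You should eat one small rock per day, geologists say."},
    {"source": "cooking-blog", "text": "Preheat the pizza stone before baking."},
]
print(overview("how many rocks should I eat", index))
```

For a query that only satire answers, the satire wins the ranking and the "overview" repeats it verbatim, exactly the "data void" mechanism Reid describes.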
> What evidence do you have that this isn't intentional?
>
> It seems to be functioning exactly per Google's design if this Reid person is to be trusted.

I don't think that they intended for Google Search to start recommending people eat rocks or put glue on pizza. So I don't think it's intentional in that regard.
> Usually, when querying Google, section 230 kicks in on the actual content making the hosting party of the content not liable. How does that work when Google generates a summary? Is it creating content or is it more like a translation from a liability point of view?

I really wish this was being discussed in news articles more often than the headline "AI will revolutionize x".
> I see plenty of evidence in the article of erroneous assumptions, wrong thinking, and the like (on Google’s behalf) — but I don’t see the claim that Google is deliberately putting out a crap AI-based system because it likes crap products but because it won’t relinquish these bad assumptions. To me, this is “flawed in design” not “flawed by design”.

If, as I commented above, you believe these accusations, it would indeed be flawed by design.
> I think there's another reason it comes out with these hallucinations: it's not good at understanding what it's reading--indeed, it probably doesn't understand at all. But it's extremely good at writing decent English prose, which makes it look as if it's understanding.

That's the rub. None of these fancy autocompletes actually knows anything, other than this word usually comes after that word. But Google already had a thing that kinda knew what things were, or at least knew things about things, and what things were related to other things. It's Google Knowledge Graph. But it doesn't seem like they're using that with this at all.
> it could be 1 very large rock.

Pizza stone makers now advertising as zero cal, zero fat, gluten free.
> Pizza stone makers now advertising as zero cal, zero fat, gluten free.

Not true, rocks may contain carbon, which can oxidize, so that's theoretically a few calories...
> Do people at Google genuinely not know that the quality of search results has been going down in recent years (beyond what they're actively doing to make them worse)? Do they... not use Google? Do they just run unit tests and give a thumbs-up without ever actually checking? The explanation is plausible, but it just leads to more questions.

I'm sure they do. They just don't care as long as the ad revenue from user engagement keeps growing.
> I'm sure they do. They just don't care as long as the ad revenue from user engagement keeps growing.

Well I can think of nothing better to improve engagement than to force users to put up with a hallucinating thing to give you preferential search results that are, well, useless.
> AI should use curated content. Maybe it is time for directories like DMOZ and Yahoo to return. And also books should be used.

If you already have curated content why do you need the AI? Why does the curated content need to be run through a confabulation algorithm before being presented to an end user? What value does that add to the equation?
> Because accuracy is paramount in Search, AI Overviews are built to only show information that is backed up by top web results.

Holy fucking shitballs...
> I just noticed (maybe I'm slow) that Amazon's "ask about this product" is now AI powered. Yesterday while ordering a bike tool, I asked it a pretty simple question: "Is this compatible with Shimano tool TL-FC37". It said "No" (wrong), then went on to explain to me why it wasn't (all wrong). Sounded like a know-it-all uncle that knows nothing about a topic, but still feels the need to pose as an expert. It was icky.
>
> At least it gave me the opportunity to give feedback on the answer (which is more than I can say for most know-it-all uncles).

si.shimano.com
> Well I can think of nothing better to improve engagement than to force users to put up with a hallucinating thing to give you preferential search results that are, well, useless.
>
> Oh wait

Sure in the long run. But by that point these executives will have made their many millions in stock incentives (the only end goal in all this) and will have either left Google completely or moved on to wreck another product in a different division. And with the perverse way in which Google works, these executives will very rarely if ever face any negative consequences for their actions.
> Usually, when querying Google, section 230 kicks in on the actual content making the hosting party of the content not liable. How does that work when Google generates a summary? Is it creating content or is it more like a translation from a liability point of view?

I expect they'll try to have it both ways: when sued for plagiarism (by any other name), they'll say, "Oh, but we're creating new content!", and when sued for dispensing dangerously defective advice, defaming people, etc., they'll say, "Oh, but we're just passing along content from independent sources!"
> Even worse. The LLMs are now being trained on their own garbage output being put on the web through these SEO spam sites. Thus further reinforcing the cycle of shit.

So it’s an Artificial Intelligence Centipede and the user is on the end.
> So it’s an Artificial Intelligence Centipede and the user is on the end.

Or an Ouroboros. You're free to choose your own metaphor.
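The Ouroboros has a name in the research literature: model collapse. The mechanism is easy to demonstrate in miniature — fit a distribution, sample from it, retrain on the samples, repeat, and the rare categories disappear first. A toy sketch (the word counts are made-up illustrative numbers, and resampling stands in for an actual training loop):

```python
import random

# Toy model collapse: a "model" that is just the empirical distribution
# of its corpus. Each generation, the web is rewritten by sampling from
# the model, and the next model trains on that. Rare content dies first.
random.seed(0)
corpus = ["accurate"] * 60 + ["satire"] * 30 + ["glue-pizza"] * 10

for generation in range(6):
    # "Publish" 100 samples from the current model, then train on them.
    corpus = random.choices(corpus, k=100)
    print(generation, {w: corpus.count(w) for w in sorted(set(corpus))})
```

Sampling with replacement can only lose words, never invent them, so the vocabulary monotonically shrinks toward whatever was common — a crude picture of LLMs retraining on their own SEO-spam output.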
> I see plenty of evidence in the article of erroneous assumptions, wrong thinking, and the like (on Google’s behalf) — but I don’t see the claim that Google is deliberately putting out a crap AI-based system because it likes crap products but because it won’t relinquish these bad assumptions. To me, this is “flawed in design” not “flawed by design”.

They knew the design was inherently flawed, but they didn't care, designed it that way and put it out, because they think it will make them money. It's meant to give bad results, because they knew it would and didn't care.