Google’s AI Overview is flawed by design, and a new company blog post hints at why

It never was. The people providing the information were a reliable source of information. Too bad the money was in spamming SEO results and plausible misinformation dissemination and not in accuracy.
Probably doesn’t help that Google made SEO like reading tea leaves.

Lots of money in claiming you know how to exploit an algorithm.
 
Upvote
22 (22 / 0)

GenericAnimeBoy

Ars Tribunus Militum
1,836
Subscriptor++
While addressing the "nonsensical searches" angle in the post, Reid uses the example search, "How many rocks should I eat each day," which went viral in a tweet on May 23. Reid says, "Prior to these screenshots going viral, practically no one asked Google that question." And since there isn't much data on the web that answers it, she says there is a "data void" or "information gap" that was filled by satirical content found on the web, and the AI model found it and pushed it as an answer, much like Featured Snippets might. So basically, it was working exactly as designed.
Okay, so your product, when it is "working exactly as designed" takes satirical, sarcastic, and other non-serious content, removes it from its original context, and presents it as if it's just as authoritative as any other answer.

If you don't see why that's problematic, you are not qualified to lead Google Search.
 
Upvote
71 (72 / -1)

J.King

Ars Praefectus
4,424
Subscriptor
Do people at Google genuinely not know that the quality of search results has been going down in recent years (beyond what they're actively doing to make them worse)? Do they... not use Google? Do they just run unit tests and give a thumbs-up without ever actually checking? The explanation is plausible, but it just leads to more questions.
 
Upvote
50 (51 / -1)

s73v3r

Ars Legatus Legionis
25,731
"There are bound to be some oddities and errors"

I'm really not seeing why this should be considered an acceptable thing. Why should we have to accept things that are flat out broken, especially when Google had something that worked, and worked very well, not 10 years ago? Their old, normal search might have had this come up, but it would be presented in context, and people would realize it's a silly goof. And it would have been like the eighth result on the page, below real, actual results.

Quite frankly, I'm really sick and tired of MBAs and finance assholes ruining everything that's good.
 
Upvote
70 (74 / -4)
What evidence do you have that this isn't intentional? 🤷‍♂️

It seems to be functioning exactly per Google's design if this Reid person is to be trusted.
I see plenty of evidence in the article of erroneous assumptions, wrong thinking, and the like (on Google’s part), but I don’t see Google deliberately putting out a crap AI-based system because it likes crap products; rather, it’s that Google won’t relinquish these bad assumptions. To me, this is “flawed in design,” not “flawed by design.”
 
Upvote
-11 (8 / -19)

xizive

Wise, Aged Ars Veteran
128
It never was. The people providing the information were a reliable source of information. Too bad the money was in spamming SEO results and plausible misinformation dissemination and not in accuracy.
I mostly agree. Pre-AI, Google did provide reliable, accurate info: the site domain, a snippet of text extracted from the site, and a link. I could very quickly use the domain as a first filter and the text as a second filter, then click the link to read the full source. I had to wade through an ever-increasing pile of crap to get to the usable stuff, but I never held Google responsible for the accuracy of the source, just the accuracy of the snippet and link.

Now that Google has replaced that accurate info with a word salad created from an amalgam of random sites, the results are absolutely, 100% useless to me.
 
Upvote
51 (52 / -1)

meta.x.gdb

Ars Scholae Palatinae
1,353
Throwing an AI summary over the junk search results that Google currently generates was never going to work well. Google really has lost the plot. My guess is that their market research told the executives that people are finding Google searches less useful, some executive said, "Could we use AI to fix the search results?" and then others shouted, "Brilliant! Give that person a bonus!"
 
Upvote
6 (6 / 0)

WXW

Ars Scholae Palatinae
1,161
Do people at Google genuinely not know that the quality of search results has been going down in recent years (beyond what they're actively doing to make them worse)? Do they... not use Google? Do they just run unit tests and give a thumbs-up without ever actually checking? The explanation is plausible, but it just leads to more questions.
If you believe this, not only do they know, but they do it on purpose because money, basically.
 
Upvote
23 (23 / 0)
The blog post said:
While AI Overviews are powered by a customized language model, the model is integrated with our core web ranking systems and designed to carry out traditional “search” tasks, like identifying relevant, high-quality results from our index.

"Relevant, high-quality results" such as the contents of a forum called Shitty Food Porn?
 
Upvote
11 (11 / 0)

s73v3r

Ars Legatus Legionis
25,731
What evidence do you have that this isn't intentional? 🤷‍♂️

It seems to be functioning exactly per Google's design if this Reid person is to be trusted.
I don't think that they intended for Google Search to start recommending people eat rocks or put glue on pizza. So I don't think it's intentional in that regard.

But I really do think they never thought about the problem of the LLM picking up something like that. Never thought about how their project could go wrong, or recommend bad things like that. And to me, that level of naivete is worse. Like, how can you be on the internet for this long, and not know that the internet is full of bad and flat out stupid things?

I think we're lucky that someone didn't start asking it the best way to slip date rape drugs into a drink.
 
Upvote
22 (22 / 0)
Usually, when querying Google, Section 230 kicks in on the actual content, making the party hosting the content not liable. How does that work when Google generates a summary? From a liability point of view, is it creating content, or is it more like a translation?
I really wish this was being discussed in news articles more often than the headline "AI will revolutionize x".

Because the truth is, Google and all of these AI companies are telling two (untrue) stories: one that they want to be true in the market and one that they want to be true in court.

The market story is: "AI is actually intelligent: it's chatting with you! It knows the answer to your questions. It learns. It's real." So to your point, this story should get them in a lot of trouble under 230 if they're telling people to eat rocks. It's their content, according to this story.

The legal story is: "BTW, we train AI with public, fair use content. It's not ours. But it's not yours either." They don't have to pay anyone for anything and aren't responsible for any of the consequences. It's not their mistakes, after all, but YOUR mis-takes ("eat rocks"). But for sure they'll collect all the checks.
 
Upvote
40 (40 / 0)

WXW

Ars Scholae Palatinae
1,161
I see plenty of evidence in the article of erroneous assumptions, wrong thinking, and the like (on Google’s part), but I don’t see Google deliberately putting out a crap AI-based system because it likes crap products; rather, it’s that Google won’t relinquish these bad assumptions. To me, this is “flawed in design,” not “flawed by design.”
If, as I commented above, you believe these accusations, it would indeed be flawed by design.
 
Upvote
15 (15 / 0)

thrillgore

Ars Praefectus
4,090
Subscriptor
So they've begun the gaslighting as a way to get past the failures of their AI launch. I'm sorry, but Gemini was not ready for launch. It's never going to be ready for launch if this is the best they can do. LLMs are not going to be useful for search. Hallucination IS NOT A BUG. It's a feature, and it's working as designed.

It needs to be tabled indefinitely, they need to go back to DeepMind and put their AI research on meaningful things, and the entire chain of management has to go. Up to the CEO.

It won't happen. They killed their fucking SEARCH ENGINE. I'm going back to SearXNG until I get enough confidence to fuck off to Kagi.
 
Upvote
23 (26 / -3)
I suppose people immediately started misusing fire the second it was invented, too, and caveat emptor has been with us from the beginning. But if someone says fire will get rid of all the junk in your house, you can probably intuit that it will destroy the house itself as well, and politely decline. If someone says this New Magic Talking Thing can answer your questions accurately, the implicit destructive potential is less obvious because the underlying mechanism is utterly mysterious.
 
Last edited:
Upvote
15 (16 / -1)

s73v3r

Ars Legatus Legionis
25,731
I think there's another reason it comes out with these hallucinations: it's not good at understanding what it's reading--indeed, it probably doesn't understand at all. But it's extremely good at writing decent English prose, which makes it look as if it's understanding.
That's the rub. None of these fancy autocompletes actually knows anything other than that this word usually comes after that word. But Google already had a thing that kinda knew what things were, or at least knew things about things and which things were related to other things: the Google Knowledge Graph. But it doesn't seem like they're using that here at all.
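The "this word usually comes after that word" point can be made concrete with a toy bigram model. This is a deliberately minimal sketch with an invented corpus, nothing like a production LLM, but the failure mode is the same in kind: the model emits the statistically most common continuation seen in its training text, with no representation of whether the resulting claim is true.

```python
from collections import Counter, defaultdict

# Invented toy corpus for illustration: the satire outnumbers the correction.
corpus = (
    "you should eat rocks every day say the satirists "
    "you should eat rocks the joke went viral "
    "you should eat vegetables every day say the doctors"
).split()

# Count which word follows which.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_word(word: str) -> str:
    """Return the most frequent continuation; truth never enters into it."""
    return followers[word].most_common(1)[0][0]

print(next_word("eat"))  # prints "rocks": the popular continuation wins, true or not
```

The model "recommends" rocks simply because that string pattern dominates its inputs, which is the same mechanism by which satirical pages can fill a "data void."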
 
Upvote
25 (26 / -1)

Steve austin

Ars Scholae Palatinae
1,780
Subscriptor
It’s pretty much an admission both that their PageRank algorithm is now terrible (gamed by SEO players, and degraded in general) and that they never had a way to judge the accuracy of the information in their search results. Before, it was up to the reader to evaluate the found info (with the hope that a good ranking algorithm made it more likely that good info floated to the top and that bad info was fairly obvious), with the ever-present risk that whoever or whatever created the info was just wrong. But now they basically have to take it as gospel for their “summary,” because they have little or no automated way to make an informed evaluation.

There doesn’t seem to be a way to solve this, and I’m not sure why they felt the need to add the feature in the first place. They are paid (mostly) for click-throughs, and providing answers that reduce those clicks seems counterproductive for them. Fixing PageRank would seem more valuable to Google users, but unfortunately not to Google customers.
 
Upvote
22 (23 / -1)

xizive

Wise, Aged Ars Veteran
128
I have a weird analogy... I remember a deli counter where the butcher had formed a crude sculpture of a pig's head from ground pork. I found this to be deeply disturbing, especially from the pig's viewpoint.

AI Overview is the same thing, rendering a site down to symbols and then trying to reconstruct it without any real intelligence, just statistics. The result resembles the original content the same way ground pork resembles a pig.
 
Upvote
5 (9 / -4)
Do people at Google genuinely not know that the quality of search results has been going down in recent years (beyond what they're actively doing to make them worse)? Do they... not use Google? Do they just run unit tests and give a thumbs-up without ever actually checking? The explanation is plausible, but it just leads to more questions.
I'm sure they do. They just don't care as long as the ad revenue from user engagement keeps growing.
 
Upvote
9 (9 / 0)

thrillgore

Ars Praefectus
4,090
Subscriptor
I'm sure they do. They just don't care as long as the ad revenue from user engagement keeps growing.
Well, I can think of nothing better to improve engagement than forcing users to put up with a hallucinating thing that gives them preferential search results that are, well, useless.

Oh wait
 
Upvote
5 (5 / 0)
AI should use curated content. Maybe it is time for directories like DMOZ and Yahoo to return. And also books should be used.
If you already have curated content why do you need the AI? Why does the curated content need to be run through a confabulation algorithm before being presented to an end user? What value does that add to the equation?

Why not just present the original, unadulterated content directly to the user? 🤷‍♂️

This whole "everything needs to be run through an AI" thing feels very cargo-cultish, and no one can explain in a convincing way what value the AI is adding to the mix. You're just supposed to uncritically accept the premise.
 
Last edited:
Upvote
33 (34 / -1)

Fatesrider

Ars Legatus Legionis
25,260
Subscriptor
Because accuracy is paramount in Search, AI Overviews are built to only show information that is backed up by top web results.
Holy fucking shitballs...

I mean, if there was a better way to completely fuck up a search, that would be it. Populist shit is never correct, and always dangerous at some level.

I want the information to be accurate based on FUCKING FACTS! Not on groupthink bullshit mentality.
 
Upvote
21 (23 / -2)

dpjf123

Seniorius Lurkius
19
I just noticed (maybe I'm slow) that Amazon's "ask about this product" is now AI powered. Yesterday while ordering a bike tool, I asked it a pretty simple question: "Is this compatible with Shimano tool TL-FC37". It said "No" (wrong), then went on to explain to me why it wasn't (all wrong). Sounded like a know-it-all uncle that knows nothing about a topic, but still feels the need to pose as an expert. It was icky.

At least it gave me the opportunity to give feedback on the answer (which is more than I can say for most know-it-all uncles).
 
Upvote
30 (30 / 0)
I just noticed (maybe I'm slow) that Amazon's "ask about this product" is now AI powered. Yesterday while ordering a bike tool, I asked it a pretty simple question: "Is this compatible with Shimano tool TL-FC37". It said "No" (wrong), then went on to explain to me why it wasn't (all wrong). Sounded like a know-it-all uncle that knows nothing about a topic, but still feels the need to pose as an expert. It was icky.

At least it gave me the opportunity to give feedback on the answer (which is more than I can say for most know-it-all uncles).
si.shimano.com
productinfo.shimano.com
 
Upvote
-19 (0 / -19)
Well, I can think of nothing better to improve engagement than forcing users to put up with a hallucinating thing that gives them preferential search results that are, well, useless.

Oh wait
Sure in the long run. But by that point these executives will have made their many millions in stock incentives (the only end goal in all this) and will have either left Google completely or moved on to wreck another product in a different division. And with the perverse way in which Google works, these executives will very rarely if ever face any negative consequences for their actions.
 
Upvote
9 (10 / -1)
Usually, when querying Google, Section 230 kicks in on the actual content, making the party hosting the content not liable. How does that work when Google generates a summary? From a liability point of view, is it creating content, or is it more like a translation?
I expect they'll try to have it both ways: when sued for plagiarism (by any other name), they'll say, "Oh, but we're creating new content!", and when sued for dispensing dangerously defective advice, defaming people, etc., they'll say, "Oh, but we're just passing along content from independent sources!"
 
Upvote
30 (31 / -1)
What seems damning is not that their experimental feature is broken, but that they are so eager to keep the "engagement" on their site, rather than on the sites they stole the content from, that they are pushing it into production anyway, knowing full well that it's broken.

Honestly, the accuracy issues are more or less a sideshow compared to the intense ramp-up of efforts to move from providing sensible links based on user requests to attempting to copy enough of the rest of the internet in order to supplant it. That would be deeply problematic even if their bot were a veritable mentat reference librarian.
 
Upvote
26 (26 / 0)

marsilies

Ars Legatus Legionis
24,484
Subscriptor++
I see plenty of evidence in the article of erroneous assumptions, wrong thinking, and the like (on Google’s part), but I don’t see Google deliberately putting out a crap AI-based system because it likes crap products; rather, it’s that Google won’t relinquish these bad assumptions. To me, this is “flawed in design,” not “flawed by design.”
They knew the design was inherently flawed, but they didn't care, designed it that way and put it out, because they think it will make them money. It's meant to give bad results, because they knew it would and didn't care.
 
Upvote
16 (16 / 0)

rm

Ars Scholae Palatinae
1,272
I'll agree with some of the other comments that the root issue is the poor quality of search results, which predates this AI nonsense.

It's been obvious for quite a while that Google has lost the plot on accurately identifying which information is authoritative, so adding a layer of clueless AI on top is on-brand. But of course this is an advertising company, so if they can serve a relevant ad to match the search, and a user follows that rather than the actual search results, they have met their main objective.

It's been a steady, incremental creep toward improving the click-through ratio on ads and giving organic results less real estate on the page, so yeah, I am not very surprised.
 
Last edited:
Upvote
21 (21 / 0)

Brent Nordquist

Smack-Fu Master, in training
51
Subscriptor
I notice they said nothing about the question "How many U.S. presidents graduated from UW Madison?" I find it highly unlikely that the answer it gave (in which most of the presidents it listed supposedly graduated from there more than once, over many decades) was backed up by high-quality search results. The much more likely explanation is hallucination: it just linked presidential names with lists of graduates, different people who happened to share a president's name. And then it wasn't even smart enough to realize that graduating four times over fifty years makes no sense.
 
Upvote
23 (24 / -1)