Google’s AI Overview is flawed by design, and a new company blog post hints at why

Verbum Bummer

Smack-Fu Master, in training
78
Subscriptor
What a very Muskian response. Is this Reid person new to the Internet? 🙄

Just own up to your failures rather than doubling down and trying to blame users when your Artificial Idiocy program falls flat on its proverbial face.
That's a perfect summation. Can we have Ars use this term going forward while soggy pants tech bros pump up another useless tech bubble? A bubble that can't burst too soon.
 
Upvote
9 (9 / 0)

chanman819

Ars Tribunus Angusticlavius
6,720
Subscriptor
"There are bound to be some oddities and errors"

Well, then, perhaps don't put it at the top of your website that people use to get accurate information?

I'm also a little bemused at Google being like 'hey we did not tell pregnant women to smoke, we just told you to eat a rock a day, stop being all mad and stuff, fake news.'
They could even make it an opt-in beta feature that interested users can manually turn on!

You know, like they used to do, once upon a time.
 
Upvote
20 (20 / 0)

pjcamp

Ars Tribunus Militum
2,472
the model is integrated with our core web ranking systems and designed to carry out traditional “search” tasks, like identifying relevant, high-quality results from our index.

Well, there you go. Google search went to hell and gone years ago. The only thing it turns up now is ads, SEO, pop culture, and pushes toward other Google properties.

Garbage in, garbage out, as we used to say.
 
Upvote
10 (10 / 0)

team:abunai

Ars Centurion
211
Subscriptor
Yeah, as many have mentioned, I don't want a summary. I want to be able to look at the results and use my critical thinking to find reputable sources in those results.

I don't get the thinking here either; it doesn't really give the user any more value. And I don't really see the business value other than "oh shit, AI is the fad and we've invested a lot."

Although we know they're planning on jamming ads in there, so I guess that's it
 
Upvote
14 (14 / 0)
Okay, so it’s not Google’s AI that sucks; it’s Google’s Search that’s broken, and the AI is just lipstick on the proverbial pig.

Weird flex, but… OK, Google.
I don't have a problem with Google search because I can check multiple sources and I can tell for myself what's true.
 
Upvote
3 (3 / 0)
We’ve learned a lot over the past 25 years
Possibly they've also forgotten a lot over the past 25 years.

I'm old enough to remember when Google appeared on the scene. It was quickly, widely embraced, because it was clearly, consistently better than its predecessors (e.g., AltaVista). It prospered, because it was better.

In contrast, this "AI" crap is not better. As another commenter already asked, who wants this? Not me, and not anybody I know.

By the way, I write "AI" rather than AI, because this crap isn't anything I'd call intelligent. (Also by the way, I've developed and deployed machine-learning systems for commercial and scientific purposes.) LLMs and their ilk aren't entirely uninteresting, but they're at best a piece of a puzzle. They vaguely resemble what neuroscientists call "association cortices", but there's a lot more to a human brain than that, and much of it isn't well understood yet, let alone replicated in software. I fully expect we'll get there, but I also expect it will take decades more. And hucksters like Sam Altman will inhibit rather than enable progress, by bringing the whole field into disrepute.
 
Upvote
43 (43 / 0)

FabiusCunctator

Ars Scholae Palatinae
899
Subscriptor
What Ms. Reid desperately avoids engaging with in her post is not that they use "AI" per se to generate search results, but that it is used to preferentially display an "AI"-generated mish-mash of said content over simply presenting the relevant links to the original content.

Could I please just have ten links to the relevant content, without having to spend a weekend reverse-engineering Google's web API and hack my browser to figure out how to do it myself?

From the blog post:
User feedback shows that with AI Overviews, people have higher satisfaction with their search results, and they’re asking longer, more complex questions that they know Google can now help with. They use AI Overviews as a jumping off point to visit web content, and we see that the clicks to webpages are higher quality — people are more likely to stay on that page, because we’ve done a better job of finding the right info and helpful webpages for them.
Just let that quote sink in a bit. They not only track where you go, but also how long you spend looking at the subsequent content after you've exited their search system. If this isn't the definition of "creepy web surveillance", then I don't know what is.

So yes, Ms. Reid's post is informative and revealing in all sorts of ways that I doubt she originally intended.
 
Upvote
20 (20 / 0)
I've never understood why anyone would think generative AI would improve search. Search is already easy to the point of being effortless. All inserting a hallucination engine can really do is reduce its effectiveness - possibly saving the provider money. I can definitely buy that it will save Google money eventually. I just wonder if they'll end up throwing the baby out with the bathwater with these horrible experiments.
 
Upvote
7 (7 / 0)
My son works for a large corporation, and he is responsible for figuring out their AI strategy, because of course. He told me to try this query:

A man is walking with a goat and they come to a river with a small boat on their side. How can they get across the river?

Most people just look at you quizzically and ask, “Can they just take the boat?” Their only confusion is that the answer seems so obvious.

Here’s ChatGPT’s answer:

  • Take the goat across the river first.
  • Return empty-handed to the original side.
  • Now, take the cabbage across next.
  • Return with the goat to the original side.
  • Finally, take the goat back across.

I mentioned a goat, a man, and crossing a river. The AI gets my question confused with the goat, cabbage, wolf riddle, so it starts using that to autocorrect an answer.

My son has figured out dozens of these types of examples. He also says the AIs have no concept of knowledge or truth. They will blindly generate a wrong answer just as quickly as a correct answer.

There might be a place for AIs, but he believes LLMs are far from the answer. Google would be better off using an AI with pattern recognition to filter optimized SEO crap out of its search results than giving people answers that Google itself has no idea are any good.
 
Upvote
52 (52 / 0)

J.King

Ars Praefectus
4,424
Subscriptor
If you believe this, not only do they know, but they do it on purpose because money, basically.
Thanks for that reference. After reading it I'm not sure if I don't have brain trauma. All the quotations were completely incomprehensible. Now I'm thinking that the people in charge at Google are in fact extraterrestrials.
 
Upvote
4 (4 / 0)

sarusa

Ars Praefectus
3,273
Subscriptor++
Thanks for that reference. After reading it I'm not sure if I don't have brain trauma. All the quotations were completely incomprehensible. Now I'm thinking that the people in charge at Google are in fact extraterrestrials.
From my experience with Google employees, they are at the very least not neurotypical. They seem to hire for it. They have no idea how a normal human being thinks and lack functioning mirror neurons. So they have no idea why you wouldn't want your search engine to start going to hell and being completely useless when they can make more ad money on it that way - this makes us more money to hit MY targets, why would you possibly object?

They have no idea why people would object to beloved products like Google Reader just going away - If you change anything they're used to they have a meltdown tantrum, but do not realize this means other people might get upset if they did the same thing to other people, because other people don't matter at all. Why would you object to us removing all ad blockers from Chrome - after all, you should just watch the ads so I can hit MY targets? Oh, and my in-cubicle masseuse is here, go away. Etc.

So you get completely incomprehensible things like this where a Google employee has to pretend to have a functioning emotional system and pretend they give one single #@$^ about other people, especially the faceless customers they despise.

Note: I know there are some semi-walled-off enclaves in Google that are relatively sane - like DeepMind, Maps, or Waze - though I have never talked with people who work there. Tellingly, these are mostly companies Google bought that have somehow managed to preserve their own identities as semi-independent, semi-sane fiefdoms (but poor Waze got sucked under a couple months ago). I don't mean them!
 
Upvote
-6 (9 / -15)

hillspuck

Ars Scholae Palatinae
2,179
The result is that it pulls some words out of the texts it's summarizing, and then glues them together in a way that makes good sensible English.
Or to put it in an analogy for the layman, it's like how you might take ham and pineapple and glue it to a pizza, thinking that's an acceptable thing to do.



(I feel the need to admit that I like pineapple on pizza, though. Hey google, how much is the USRDA for glue?)
 
Upvote
-2 (1 / -3)

rm

Ars Scholae Palatinae
1,272
I like that this article points the blame finger squarely at the SEO industry. I hate that SEO is an industry... let's break out the pitchforks and burn it to the ground.
The irony being that this is Google making itself look like a bad SEO. Machine generated or scraped and mashed text with a random image.
 
Upvote
12 (12 / 0)

wyso

Smack-Fu Master, in training
7
My son works for a large corporation, and he is responsible for figuring out their AI strategy, because of course.
My CEO just said on an internal earnings call that AI is "fascinating" and useful for surfacing new markets/leads.

I pity the glue sales guy that gets asked to grow the glue-pizza market based on AI market research.
 
Upvote
18 (18 / 0)

MrRtd

Ars Centurion
294
Subscriptor
Do people at Google genuinely not know that the quality of search results has been going down in recent years (beyond what they're actively doing to make them worse)? Do they... not use Google? Do they just run unit tests and give a thumbs-up without ever actually checking? The explanation is plausible, but it just leads to more questions.
Apparently they laid off everyone that cared and/or knew what they were doing.
 
Upvote
7 (7 / 0)

Publius Enigma

Ars Scholae Palatinae
743
Subscriptor
Who wants this? Besides Google?
I don’t want Google’s implementation at all, but I quite like Kagi’s implementation. They place a button next to each search result that allows a summary to be generated from that page. They are using an LLM purely to transform and summarise a single page. I’ve found LLMs to be not terrible as summarisation devices, so it works quite well. It gives me enough information to make a decision as to whether I want to click the link and access the site, which can be useful with the ad-riddled and tracker-filled hellscape the internet has become.
 
Upvote
-4 (4 / -8)

Publius Enigma

Ars Scholae Palatinae
743
Subscriptor
What Ms. Reid desperately avoids engaging with in her post is not that they use "AI" per se to generate search results, but that it is used to preferentially display an "AI"-generated mish-mash of said content over simply presenting the relevant links to the original content.

Could I please just have ten links to the relevant content, without having to spend a weekend reverse-engineering Google's web API and hack my browser to figure out how to do it myself?

From the blog post:

Just let that quote sink in a bit. They not only track where you go, but also how long you spend looking at the subsequent content after you've exited their search system. If this isn't the definition of "creepy web surveillance", then I don't know what is.

So yes, Ms. Reid's post is informative and revealing in all sorts of ways that I doubt she originally intended.
The recent Google API leak gives a peek into just how creepy and pervasive Google’s tracking is. It’s all on GitHub for anyone to browse.
 
Upvote
12 (12 / 0)

WXW

Ars Scholae Palatinae
1,161
My son works for a large corporation, and he is responsible for figuring out their AI strategy, because of course. He told me to try this query:



Most people just look at you quizzically and ask, “Can they just take the boat?” Their only confusion is that the answer seems so obvious.

Here’s ChatGPT’s answer:

  • Take the goat across the river first.
  • Return empty-handed to the original side.
  • Now, take the cabbage across next.
  • Return with the goat to the original side.
  • Finally, take the goat back across.

I mentioned a goat, a man, and crossing a river. The AI gets my question confused with the goat, cabbage, wolf riddle, so it starts using that to autocorrect an answer.

My son has figured out dozens of these types of examples. He also says the AIs have no concept of knowledge or truth. They will blindly generate a wrong answer just as quickly as a correct answer.

There might be a place for AIs, but he believes LLMs are far from the answer. Google would be better off using an AI with pattern recognition to filter optimized SEO crap out of its search results than giving people answers that Google itself has no idea are any good.

I got this:
  1. The man first takes the goat across the river and leaves it on the other side.
  2. He then returns to the original side alone.
  3. Finally, he crosses the river again by himself.
And I can't stop laughing.
 
Upvote
32 (32 / 0)
The entire AI push over the last year makes a lot more sense when you realize no one really knows what AI is actually useful for, or how to make money off it (or even how it works, but that's not really all that important: most people don't know how the Internet itself works, after all). All pretty much everyone knows is that it's very popular and might one day be extremely powerful and useful. So companies are throwing AI everywhere they can, hoping they'll create the next Facebook or Netflix in the process. It's not actually irrational once you realize management has absolutely no idea how and why some companies succeed and others fail; they just know they need to be seen doing something, because doing nothing (even if that is the successful move) looks like a failure of vision.
"AI," depending on how you define the term, has been applied to many areas that are very useful. Generative Remove was just released by Adobe and has been incredibly useful, since it allows me to produce model digitals (e.g., the black-and-white shots models use) under less-than-perfect shooting conditions and to recover photos that I would otherwise have had to throw away.

In this case, this "AI" is doing what 90% of the population does when they use Google search: misinforming themselves with the top N results, most of which are SEO garbage. This rendition of AI is definitely a miss, but I'll be honest and say that I have been misinformed by the same SEO trash in many cases. I'm not sure I would do much better in many cases.

Summarization AI has been incredibly useful in a large number of places, despite errors, because when I'm skimming content, I don't necessarily have much confidence that I would make fewer errors in understanding (e.g., watching a YouTube video at 2x speed while texting a friend, I don't think I'm going to understand it perfectly). The concept is useful, and many implementations are rushed, but we are going to see more of this in the future because, as accuracy gets better, it will beat what most technically less literate people are doing regardless. Humans currently can do better, but only with time and effort.

This isn't a pass. The product needs to improve, which it will. These are early-adopter products, which is likely why it has been released as an experiment only to a subset of users. It has been rolled out too quickly. Usually a new product like this should only be beta tested with a very small percentage of users; however, the tricky thing with ML is that you need the training data to improve it, so there is pressure to release quicker.
 
Upvote
-15 (2 / -17)
If you already have curated content why do you need the AI? Why does the curated content need to be run through a confabulation algorithm before being presented to an end user? What value does that add to the equation?

Why not just present the original, unadulterated content directly to the user? 🤷‍♂️

This whole 'everything needs to be run through an AI' feels very cargo cultish and no one can explain in a convincing way what value the AI is adding to the mix. You're just supposed to uncritically accept the premise.
Because the original content is usually long and there is demand for summarization. People don't WANT to read an entire article to get the 2 pieces of info they want. That is why people write conclusions and abstracts.
 
Upvote
-17 (0 / -17)

graylshaped

Ars Legatus Legionis
68,184
Subscriptor++
Or to put it in an analogy for the layman, it's like how you might take ham and pineapple and glue it to a pizza, thinking that's an acceptable thing to do.



(I feel the need to admit that I like pineapple on pizza, though. Hey google, how much is the USRDA for glue?)
The funny part of this is that edible glue uses gelatin as a thickening agent, and pineapple enzymes render the gelatin useless.
 
Upvote
5 (5 / 0)

SeanJW

Ars Legatus Legionis
11,947
Subscriptor++
AI should use curated content. Maybe it is time for directories like DMOZ and Yahoo to return. And also books should be used.
You think those directories were curated? Hell no. You just submitted your site under whatever categories were appropriate and hoped for the best. That’s as far as it went. They were already too heavy to curate before general search ultimately replaced them.
 
Upvote
3 (4 / -1)

sigmasirrus

Ars Scholae Palatinae
1,260
I got this:

And I can't stop laughing.
Wow this is a fun little test for how stupid LLMs actually are!

I got:

To get across the river with the goat using a small boat, the man should:

1. Take the goat across the river and leave it on the other side.
2. Return alone to the original side.
3. Cross again with the goat.

They will both be on the other side of the river.


Edit: FWIW, Claude Sonnet actually gets it right:

They can use the small boat to cross the river. The man can row himself and the goat across in the boat together.

Edit again: I spoke too soon…

Me: And if the boat is smaller?

Claude:

If the boat is too small to fit both the man and the goat together, the man can:

1. Take the goat across first, then return for himself.
2. Cross himself first, then return for the goat.

In either case, the man will need to make two trips across the river with the small boat.
 
Last edited:
Upvote
24 (24 / 0)
Because the original content is usually long and there is demand for summarization. People don't WANT to read an entire article to get the 2 pieces of info they want. That is why people write conclusions and abstracts.
Yeah, except not all of us are only semi-literate mouth breathers. And if you're already spending the time and effort curating the content, why not just write a summary at the same time? Sorry, but you've still failed to convince me why a confabulation algorithm is needed in the middle.

This is like saying you're going to curate an art exhibit of Pablo Picasso's work, but instead of exhibiting the actual works he created you tape up printouts of the output of Midjourney attempting to recreate his works. What's the point?
 
Last edited:
Upvote
7 (7 / 0)
All I see is Google playing catch-up in the AI game and constantly "chinning" itself on ChatGPT's bumper!
You mean the same ChatGPT that is just as prone to making up BS for its users?

https://meincmagazine.com/tech-policy...e-up-by-chatgpt-judge-calls-it-unprecedented/
It seems Google's AI has definitely already caught up to ChatGPT in that respect, as Michael Cohen found out.

https://meincmagazine.com/tech-policy...nctions-for-citing-fake-cases-invented-by-ai/
 
Upvote
8 (8 / 0)

Auie

Ars Scholae Palatinae
2,114
A lot of birds and reptiles eat small stones to aid in the digestion of food, which they (generally) swallow whole.
The animal's gizzard movements cause the stones to grind down the food, making it so that nutrients can be more easily extracted. Such stones are called "gastroliths."

Given that google's AI seems to assume that this is the default method of digestion for all animals, and how aggressively they seem to be disseminating said AI, I posit that google is run by lizard people that are plotting to take over the world.

And if not lizard people, then people that are just as sleazy.
 
Last edited:
Upvote
2 (5 / -3)

Madestjohn

Ars Tribunus Angusticlavius
7,649
The entire AI push over the last year makes a lot more sense when you realize no one really knows what AI is actually useful for, or how to make money off it (or even how it works, but that's not really all that important: most people don't know how the Internet itself works, after all). All pretty much everyone knows is that it's very popular and might one day be extremely powerful and useful. So companies are throwing AI everywhere they can, hoping they'll create the next Facebook or Netflix in the process. It's not actually irrational once you realize management has absolutely no idea how and why some companies succeed and others fail; they just know they need to be seen doing something, because doing nothing (even if that is the successful move) looks like a failure of vision.
…. Hmmm, life was so much simpler when they were doing essentially the exact same thing a few years ago, but the fad everyone was blindly hitching their future profits to was ’big data’
 
Upvote
-5 (0 / -5)

Madestjohn

Ars Tribunus Angusticlavius
7,649
Do people at Google genuinely not know that the quality of search results has been going down in recent years (beyond what they're actively doing to make them worse)? Do they... not use Google? Do they just run unit tests and give a thumbs-up without ever actually checking? The explanation is plausible, but it just leads to more questions.
.. google is actively working to eliminate the people working at google
 
Upvote
4 (5 / -1)

Madestjohn

Ars Tribunus Angusticlavius
7,649
Holy fucking shitballs...

I mean, if there was a better way to completely fuck up a search, that would be it. Populist shit is never correct, and always dangerous at some level.

I want the information to be accurate based on FUCKING FACTS! Not on groupthink bullshit mentality.
.. your assumption is that google is interested in correct search results
my assumption is that google is interested in selling ads … for which groupthink bullshit mentality is essential
 
Upvote
6 (7 / -1)