Google will put more links to websites in AI Overviews

UserIDAlreadyInUse

Ars Tribunus Angusticlavius
7,800
Subscriptor
"We Can Source Your Sites For You Wholesale"

"Ten cents a day gets you cited as a source, or subscribe to our premium package to be listed as an authoritative source for our AI Overview. Get the View+ add-on and be one of the first three links displayed; research has shown that searchers click those links at a higher rate."
 
Upvote
95 (97 / -2)

Daniel

Ars Praefectus
3,679
Subscriptor
I think this is a good start. I occasionally have to use AI for work; I'm not in love with it, but I ask it every time to source the information, and then I go and read that for backup. Knowing that most people now seem to go just off headlines or social media, they're probably not going to actually read the sources, but at least make them available.
 
Upvote
9 (9 / 0)

JoHBE

Ars Praefectus
4,289
Subscriptor++
"Google also promises that AI answers will include more links generally. These will continue to appear as small pills at the end of paragraphs. Clicking on them will show a list of sources that supposedly formed the foundation of the AI output"

It's kinda important to know whether the AI created (or verified) its summary strictly from the referenced "sources", or whether the sources are just an after-the-fact collection of search results that happen to contain some of the concepts mentioned in the summary.

In the latter case, there could be some nasty surprises if linking to primarily reliable "sources" gives the false impression of trustworthiness.
 
Upvote
39 (39 / 0)
This makes me want to use Google even less than I already do.

I eagerly await the memes showing Google AI stating one thing and their "source" showing a completely different answer.
That was already happening a while ago.

YouTube summaries of videos critical of Google... well, they were gaslighting what was actually said into compliments to Google. And that was last year.
 
Upvote
4 (4 / 0)

JoHBE

Ars Praefectus
4,289
Subscriptor++
A friend of mine who runs a nonprofit research site for Bible study says that visits have dropped over 50% in one year. He thinks people just use the AI overview, derived from his and other sites, and move on to their next question.

Nobody could have foreseen that!
 
Upvote
12 (12 / 0)
"Google also promises that AI answers will include more links generally. These will continue to appear as small pills at the end of paragraphs. Clicking on them will show a list of sources that supposedly formed the foundation of the AI output"

It's kinda important to know whether the AI created (or verified) its summary strictly from the referenced "sources", or whether the sources are just an after-the-fact collection of search results that happen to contain some of the concepts mentioned in the summary.

In the latter case, there could be some nasty surprises if linking to primarily reliable "sources" gives the false impression of trustworthiness.

Good point. In fact, it does. I use DDG, and its AI summaries show references. This seems good until you realize the summary has included only half a sentence from the source, giving a bad answer.

edit: Rereading your comment, I think you meant something else. Multiple false impressions, then. ;)
 
Upvote
3 (4 / -1)

Sarty

Ars Tribunus Angusticlavius
7,924
Feels like we're training up an entire generation to depend on an executive summary.
Paragraphs and pages of information cannot be distilled into a single sentence that contains all you need to know. So they're making some tweaks, but those who were actually interested in learning more were already doing so.
It seems to be a somewhat popular viewpoint that it's easy to write a short blurb and hard to write a research paper. That's not to diminish the value of the research paper, but writing a concise, accurate, and scope-complete summary is hard. In many cases it requires a more comprehensive understanding of the topic than writing the long fine article itself.

I will continue to avoid Google's slopbot like the plague, but it's hard to say that including more links isn't at least a tiny step in the right direction.
 
Upvote
18 (18 / 0)
It seems to be a somewhat popular viewpoint that it's easy to write a short blurb and hard to write a research paper. That's not to diminish the value of the research paper, but writing a concise, accurate, and scope-complete summary is hard. In many cases it requires a more comprehensive understanding of the topic than writing the long fine article itself.

I will continue to avoid Google's slopbot like the plague, but it's hard to say that including more links isn't at least a tiny step in the right direction.

It's not just a summary, it's a different type of communication than the long form. Very difficult, especially if one is an engineer and over-thinker.
"Must. Add. Words." ;)
 
Upvote
1 (2 / -1)
The sources are clearly false when I've checked them: I've clicked through to the original site using the supposed "source" link, and the claim in the "summary" or quick answer is NOT on the source page. I usually use DuckDuckGo, and it's the same problem there. I've seen the same thing on Discord, where pasted links show up with an automated "summary"(?) of claims/formulations that are NOT attested by the linked source, while the presentation falsely suggests a direct excerpt. And I'm not talking about trivial differences of phrasing, but key assertions that were not at all supported by the source and/or 100% false (based on what the source actually says).

And none of this is surprising, because if the sourcing were accurate, this would just be a web search… which a lot of LLM cheerleaders pretend doesn't exist anymore, and which wouldn't get low-intelligence investors to throw money at it. Getting keyword results for a specific, coherent page that actually contains the phrases is a web search / corpus search as we've been doing for 30 years. LLMs function by doing mass theft and vomiting out an unreliable mash-up of keyword associations (also known as slop).

Technologically, it's a dead end and a scam.
 
Last edited:
Upvote
33 (36 / -3)

invertedpanda

Ars Tribunus Militum
2,857
Subscriptor
If Google AI is summarizing my content, then yes, they are literally stealing traffic from my websites, because people will read the summary instead of my content. And they'll often miss context, too.

Beyond that, the whole "Look, we're adding citations prominently at the bottom" is such bullshit, and they know it. Because once people get the answer they seek, they sure as shit aren't clicking through.

You could at least put the sources at the TOP of the AI summary.

Even if they changed it, though, it's too little, too late. I switched to Startpage for my search, and I focus more on social and direct interaction with folks to get my content in front of their eyeballs. I still don't get the traffic I had before AI both ruined the SERP and flooded the 'net with SEO-optimized slop, but it's better than no traffic at all.
 
Upvote
5 (6 / -1)

rockmuelle

Seniorius Lurkius
25
Subscriptor
Just, don't. The links are often worse than the summary.

Concrete example: Today I was looking to see if Jens Larsen, a popular Jazz Guitar YouTuber, had a video on the Wayne Shorter song "Footprints".

My google search was: "jens larsen footprints". Google's AI summary starts off with this text and a few links:
Jens Larsen provides comprehensive jazz guitar tutorials on Wayne Shorter's "Footprints," focusing on mastering the melody, chord voicings, and improvisation in 3/4 time. His lessons typically cover playing the melody in different neck positions, using 3-note quartal voicings for comping, and incorporating bluesy phrasing. [1, 2, 3, 4]
Key elements of Jens Larsen's "Footprints" lessons:
  • (bullet points that outline the "lesson")
Here's the fun part: per Google's real search results and Jens' YouTube page, Jens DOES NOT have a lesson on "Footprints". Those links? The first is to his channel and the other three are to other YouTubers who either play the song or have a tutorial (which, cool, but that's not what I asked for).

The "Key elements of Jens Larsen's "Footprints" lessons:"? Completely made up by Gemini.

Links aren't going to help the AI results if they don't actually support the summary.
 
Upvote
47 (47 / 0)

Resistance

Wise, Aged Ars Veteran
546
As long as they measure the quality of overviews as:
the percentage of statements of fact in an overview that are correct
and not as:
whether or not the query was answered correctly
then they are going to have lots of problems. Right now, if I ask:
"Who is the president of the US?" and it says:
"The president of the US is Joe Biden. Biden was elected as the 46th president in 2020, and was born in Scranton in 1942."
then Google will rate that output at 80%.
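The gap between those two scoring schemes can be sketched in a few lines of Python. The fact breakdown and the truth values below are my own illustrative assumptions, not Google's actual rubric:

```python
# Sketch: fact-level accuracy vs. query-level correctness for one overview.

def fact_level_score(fact_checks):
    """Fraction of individual factual claims that check out."""
    return sum(fact_checks) / len(fact_checks)

def query_level_score(answered_correctly):
    """1.0 if the user's actual question was answered correctly, else 0.0."""
    return 1.0 if answered_correctly else 0.0

# Claims extracted from the sample overview, checked independently
# (truth values are illustrative):
checks = [
    False,  # "The president of the US is Joe Biden" -- stale
    True,   # "elected as the 46th president"
    True,   # "in 2020"
    True,   # "born in Scranton"
    True,   # "in 1942"
]

print(fact_level_score(checks))      # 0.8 -- looks fine by this metric
print(query_level_score(checks[0]))  # 0.0 -- the question asked was answered wrong
```

Note how the fact-level score stays high even though the one claim the user actually asked about is the wrong one.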
 
Upvote
-2 (1 / -3)

Fatesrider

Ars Legatus Legionis
25,260
Subscriptor
I think this is a good start. I occasionally have to use AI for work; I'm not in love with it, but I ask it every time to source the information, and then I go and read that for backup. Knowing that most people now seem to go just off headlines or social media, they're probably not going to actually read the sources, but at least make them available.
Assuming the sources the AI cites are actually legitimate and not made-on-demand AI-hallucinated bullshit to support AI-hallucinated results.
 
Upvote
18 (18 / 0)

Smeghead

Ars Praefectus
4,640
Subscriptor
It's not just a summary, it's a different type of communication than the long form. Very difficult, especially if one is an engineer and over-thinker.
"Must. Add. Words." ;)
Tell that to the short ad for Odyssey that's running on youtube, which is basically an ad for the actual trailer. On-screen text and the (Cinemasins?) voice at the end literally both say:

Watch.
Trailer.
Now.

It makes my teeth hurt.
 
Upvote
-1 (0 / -1)

spacespektr

Ars Praetorian
598
Subscriptor
Sci-fi author John Scalzi regularly tests the accuracy of Gemini and other LLMs by asking a question about himself (since he’s pretty confident he knows the correct answer). Earlier today, he posted this:


View: https://bsky.app/profile/scalzi.com/post/3mle5esvjtk2h


And before anyone says that isn’t a fair question, Scalzi started blogging almost 30 years ago and is very public about personal details like that.
 
Upvote
29 (29 / 0)

andy o

Ars Scholae Palatinae
621
Sci-fi author John Scalzi regularly tests the accuracy of Gemini and other LLMs by asking a question about himself (since he’s pretty confident he knows the correct answer). Earlier today, he posted this:


View: https://bsky.app/profile/scalzi.com/post/3mle5esvjtk2h


And before anyone says that isn’t a fair question, Scalzi started blogging almost 30 years ago and is very public about personal details like that.

Even with simple, fact-based technical questions like "Does Bluetooth 5 support dual audio?" it's still extremely wrong, and I believe it traces back to one video: MKBHD's extremely wrong video about BT5 when it came out. (That video was also wrong about other widely misunderstood aspects of the standard, like getting both more range and more throughput -- it was either/or, and it only applied to the BT LE radio, not audio or the other BT Classic profiles.) I could find some small "news" sites that published the misinformation before that, but that video really amplified it.

Then the tech press did what the tech press does and just started regurgitating that video, and over time, each other. That was 9 years ago, when the Samsung S8 came out, and now "AI" is doing what it does.

At the time, the Bluetooth SIG had a press release, a white paper, and the spec available, none of which even mentioned dual audio, and none of these "tech" websites apparently read even the whole press release (let alone the white paper), which made clear in the body of the text that these enhancements were only for the LE radio.

Gary Sims made several genuinely informative videos about BT 5 in his "Gary Explains" series on YT, but they're all buried under the misinformation.
 

Attachments

  • Screenshot 2026-05-08 202735.png
Upvote
14 (14 / 0)

alinuxuser

Seniorius Lurkius
33
Subscriptor
When using Firefox, if I dare to expand Google's AI overview, I get 100% sustained utilization on 6 CPU cores (of 12 available) on my Linux desktop. A similar thing happens on my Linux server and on my other laptop running Windows 11 for the office, though in both of those cases it's "just" one CPU core.
This has been happening for a few weeks.
Keep an eye on it if your CPU fan starts spinning and your battery drains for no reason.
(Luckily, switching tabs stops it, unless I go back to the tab displaying the AI overview.)
Why? So far I haven't noticed any animations or dynamic content in those overviews.
 
Upvote
-1 (0 / -1)

Retrosal

Smack-Fu Master, in training
87
I think this is a good start. I occasionally have to use AI for work; I'm not in love with it, but I ask it every time to source the information, and then I go and read that for backup. Knowing that most people now seem to go just off headlines or social media, they're probably not going to actually read the sources, but at least make them available.
Are you checking more than one source? Because it frequently happens that AI chatbots pick a single source for their answer - a source which happens to be among the few that are completely wrong about the subject.
 
Upvote
3 (3 / 0)

cleek

Ars Scholae Palatinae
1,145
Just, don't. The links are often worse than the summary.

Concrete example: Today I was looking to see if Jens Larsen, a popular Jazz Guitar YouTuber, had a video on the Wayne Shorter song "Footprints".

My google search was: "jens larsen footprints". Google's AI summary starts off with this text and a few links:

Here's the fun part: per Google's real search results and Jens' YouTube page, Jens DOES NOT have a lesson on "Footprints". Those links? The first is to his channel and the other three are to other YouTubers who either play the song or have a tutorial (which, cool, but that's not what I asked for).

The "Key elements of Jens Larsen's "Footprints" lessons:"? Completely made up by Gemini.

Links aren't going to help the AI results if they don't actually support the summary.

but, like fake citations in a term paper, they do provide a veneer of authority to the text. and as long as the reader isn't interested in clicking links, or gets distracted by what they find down a link and doesn't come back to complain, the links have worked.
 
Upvote
5 (5 / 0)