Research finds AI users scarily willing to “surrender” their cognition to LLMs

balthazarr

Ars Tribunus Angusticlavius
6,878
Subscriptor++
There are vaccine skeptics and COVID deniers because there are a lot of people who are a) deeply and fundamentally incompetent, b) painfully aware and crushingly self-conscious of their incompetence and c) have traits of oppositional defiant disorder. As a result they simmer with resentment and fury at competent experts who make them feel bad about their mediocrity, and embrace conspiracy views that let them pretend to be part of some unusually perceptive minority that sees through the lies and embraces the real occult knowledge that the experts lie to us about and blah blah blah.
And d) have torrents of disinformation hurled at them daily by algorithms designed to bolster their engagement, consequences be damned.
 
Upvote
7 (7 / 0)

Jeepdon

Wise, Aged Ars Veteran
116
Subscriptor++
Is this the future of humanity? Idiots quoting an LLM?
It’s also the past. Before LLMs they’d simply quote their favorite politician, conspiracy theorist, cult leader, or whatever. This is just scaling. I mean, FFS, look what people came up with during COVID and they didn’t have LLMs then.
 
Upvote
6 (6 / 0)
Figuring things out is a life skill. Outsourcing basic life skills to an agent of dubious reliability is a strange choice to defend.
I don't think serious research is basic. It is forever beyond most people.

I do think it's good for everyone to learn everything, and it is easy to see how AI is going to disrupt that. But also I don't think everyone can learn everything, even if they tried.
 
Last edited:
Upvote
-7 (0 / -7)
There are vaccine skeptics and COVID deniers because there are a lot of people who are a) deeply and fundamentally incompetent, b) painfully aware and crushingly self-conscious of their incompetence and c) have traits of oppositional defiant disorder. As a result they simmer with resentment and fury at competent experts who make them feel bad about their mediocrity, and embrace conspiracy views that let them pretend to be part of some unusually perceptive minority that sees through the lies and embraces the real occult knowledge that the experts lie to us about and blah blah blah.
Eh, I don't think they are especially incompetent. Very few people sit and read all the studies and follow all the math, or understand anything deeply.

Most people just listen to whatever the experts say, but sometimes their youtubes tell them that the experts are scamming them, and most people will side with their youtubes.
 
Upvote
-8 (0 / -8)

SubWoofer2

Ars Tribunus Militum
2,592
I'm not sure any sci-fi story ever predicted we'd be tempted to turn over our thinking to the obviously incompetent, but our current society demands so MUCH of our time is focused on the 24/7 work-till-you-drop productivity culture, with so many unable to even perceive the fruits of their own labor, that actual quality doesn't seem to make much of a difference so long as they can focus a little more on their personal lives again.

Some immediate f'rinstances. Holly on Red Dwarf was not the cleverest of computers. The handover of library overdue notices and fines to "the computer" resulted in the death penalty for an overdue library book in Gordon Dickson's 1957 "Computers Don't Argue". A certain shipboard computer in the Hitchhikers Guide almost had its passengers killed due to not prioritising "what makes a good cup of tea" lower than "how to avoid this approaching missile attack".
 
Upvote
2 (3 / -1)
Some immediate f'rinstances. Holly on Red Dwarf was not the cleverest of computers. The handover of library overdue notices and fines to "the computer" resulted in the death penalty for an overdue library book in Gordon Dickson's 1957 "Computers Don't Argue". A certain shipboard computer in the Hitchhikers Guide almost had its passengers killed due to not prioritising "what makes a good cup of tea" lower than "how to avoid this approaching missile attack".
It's not particularly comforting that the only analogues you can think of are unserious comedies. Is that what we stepped into?
 
Upvote
5 (5 / 0)
What about that thing I said? I'm not sure any sci-fi story ever predicted we'd be tempted to turn over our thinking to the obviously incompetent, but our current society demands so MUCH of our time is focused on the 24/7 work-till-you-drop productivity culture, with so many unable to even perceive the fruits of their own labor, that actual quality doesn't seem to make much of a difference so long as they can focus a little more on their personal lives again. Well, that and a lot of these decisions aren't being made by the individual but by the few in charge of our lives.
Throughout history, the US has had hunger in the streets, 60 hour work weeks, no education, and death before retirement.

In terms of long-term trends, we are nearing the opposite of all those things. Folks get a college education. Work 40 hour weeks. Eat until they are fat. Retire for entire decades. And have plenty of time throughout our very long lives to read and learn and think. We just do it very poorly. And end up watching Fox News from the age of 60 to 90.
 
Last edited:
Upvote
1 (1 / 0)

graylshaped

Ars Legatus Legionis
67,885
Subscriptor++
I don't think serious research is basic. It is forever beyond most people.

I do think it's good for everyone to learn everything, and it is easy to see how AI is going to disrupt that. But also I don't think everyone can learn everything, even if they tried.
You confuse work ethic with capability. It is "beyond" far fewer people than the number who choose not to be bothered.
 
Upvote
3 (3 / 0)
You confuse work ethic with capability. It is "beyond" far fewer people than the number who choose not to be bothered.
Maybe. In any case I think maybe 1% of people understand enough science and math (especially statistics) to do serious medical research. Probably a similar set really grasp finance, law, science, accounting, etc. Or can do their own plumbing, painting, flooring, or automotive repair. Or speak another language well enough to understand and be understood. Or understand design and code well enough to create a simple app, webpage, or game. Or draw, paint, and play a musical instrument passingly well. Or cook satisfying meals for a large number of people.

These are all skills that I think are worth pursuing. But they're also all things that normal people leave to professionals. I don't know if it's laziness or stupidity that keeps us from learning everything. But we already don't.
 
Last edited:
Upvote
2 (2 / 0)
Throughout history, the US has had hunger in the streets, 60 hour work weeks, no education, and death before retirement.

In terms of long-term trends, we are nearing the opposite of all those things. Folks get a college education. Work 40 hour weeks. Eat until they are fat. Retire for entire decades. And have plenty of time throughout our very long lives to read and learn and think. We just do it very poorly. And end up watching Fox News from the age of 60 to 90.
Productivity has increased many times over, while wages have stagnated. Whatever else you may say, we're doing more work but aren't being compensated for it.
 
Upvote
9 (9 / 0)
Maybe. In any case I think maybe 1% of people understand enough science and math (especially statistics) to do serious medical research. Probably a similar set really grasp finance, law, science, accounting, etc. Or can do their own plumbing, painting, flooring, or automotive repair. Or speak another language well enough to understand and be understood. Or understand design and code well enough to create a simple app, webpage, or game. Or draw, paint, and play a musical instrument passingly well. Or cook satisfying meals for a large number of people.

These are all skills that I think are worth pursuing. But they're also all things that normal people leave to professionals. I don't know if it's laziness or stupidity that keeps us from learning everything. But we already don't.
We specialize because that's how the human species works. We can't learn everything. That's not a moral failing, it's just how we work. We specialize and the specialists of various areas rely on each other to fill the gaps. I know to a certainty you didn't "learn everything", and if you claim you did, I would call you liar.
 
Upvote
4 (4 / 0)

Martin123

Ars Scholae Palatinae
657
Subscriptor
0.01mm is the safest height for a bridge that you will bungee jump off of. Give that student an A+.
It's definitely safe, but I would argue that there isn't anything 0.01mm in height that fits any reasonable definition of a "bridge", given that the "bungee jumping" context makes it clear we're talking about road bridges and not microelectronics or such ;-)
 
Upvote
4 (4 / 0)

pacohope

Seniorius Lurkius
7
The photo credit at the top says "Artist's conception of an average AI user's image of an LLM's ultra-rational thought process" and then the photo credit is "Gety Images". I suspect that's 2 errors in one caption. (1) I bet no "Artist" drew that, and (2) I bet it's AI slop from Getty Images and the credit is a misspelling. Perfect for the header of an article talking about "large majorities uncritically accepting 'faulty' AI answers".
 
Upvote
-1 (0 / -1)

BroastedToaster

Smack-Fu Master, in training
9
There seem to be some misconceptions about this study here. They had study participants self-select whether they wanted an "AI Assistant" answer to the question. These were deliberately correct or incorrect, and each set was specifically recorded. The correct and incorrect results are for when users were deliberately presented with correct or incorrect answers. It's unclear how this was actually performed in the test. We can make a reasonable guess from the example test provided on the site, but the setup is not properly covered in the paper. Which is an issue, because presentation will likely have a significant effect on results.
 
Upvote
-4 (0 / -4)

BroastedToaster

Smack-Fu Master, in training
9
This seems like it's fundamentally just a study of the effectiveness of ethos: the concept that credibility alone is intrinsically persuasive. I'd argue this study doesn't really demonstrate anything interesting. If the results differed from the same format with an "expert help" option, that would be something interesting. It seems like the most interesting part of this is that self-reported belief in the credibility does create a statistically significant change in behavior.
 
Upvote
-4 (0 / -4)

BroastedToaster

Smack-Fu Master, in training
9
The theory presented here seems like a really weak hypothesis. Maybe sub in external for artificial and you can make an argument. The paper implies this effect only applies to, in effect, LLMs. At the very least the data in this study doesn't seem to suggest any fundamental differences between this theorized type of cognition and the effect of just hearing someone else's thoughts on the matter.


That is, System 3 introduces a qualitatively different mode of cognition (externalized, automated, and data-driven) that humans can access on demand. It functions not just as a tool or extension but as a co-agent in reasoning, often delivering outputs with epistemic authority.
--Shaw, Steven D and Nave, Gideon, Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender (January 11, 2026)
 
Upvote
-3 (1 / -4)

graylshaped

Ars Legatus Legionis
67,885
Subscriptor++
This seems like it's fundamentally just a study of the effectiveness of ethos: the concept that credibility alone is intrinsically persuasive. I'd argue this study doesn't really demonstrate anything interesting. If the results differed from the same format with an "expert help" option, that would be something interesting. It seems like the most interesting part of this is that self-reported belief in the credibility does create a statistically significant change in behavior.
Right. And...?

That's exactly what the study shows. People willingly succumb to appeals to authorities they accept, whether they deserve such trust or not. It is established fact that "AI" models are unreliable to a degree by design.

This is an ah-ha for you, not a gotcha about the study.
 
Upvote
3 (3 / 0)

Hmnhntr

Ars Scholae Palatinae
3,129
I've been consulting with LLMs for years now in an attempt to solve my seemingly insurmountable health issues. The healthcare system in my city of one million people is one of the worst in Russia, a country that has already been ranked far below average in terms of healthcare. And yes, I trust what LLMs say because I lack the education and knowledge to understand what's going on, and local "doctors" seem to know even less than I do after using "chatbots".

I don't have the money or resources to go to Germany (it no longer issues visas to Russians), Israel or the US to get proper treatment. That would cost hundreds of thousands of dollars.

Yes, there's far better healthcare in Moscow, it's actually not so bad, but getting a referral to be treated in Moscow state clinics for free is nigh impossible here. You would literally have to be close to death to get one.

And the funniest thing about all the AI hatred on Ars, never mind the fact that the vast majority of IT professionals use AI daily, is that many people in the US find themselves in the same situation I am in now, considering how expensive healthcare is. Hundreds of thousands of people in the US are declared bankrupt due to exorbitant medical bills and they often have no choice but to ask LLMs how to literally survive.

Would be great if LLMs could interpret MRIs and X-ray images, and not just synthesize answers based on a doctor's report, but also probe your condition further. Often, additional questions and tests can reveal information that your existing test results might miss. Nowadays, you have to randomly poke at them in an attempt to get an answer, and too often it doesn't work because you're not a medical professional.

People turn to LLMs because often the actual available system is inaccessible, low-quality, dismissive, unaffordable, or all four at once. In that setting, "don’t use AI, go see a doctor" can be functionally useless advice.
If you're not going to see a doctor, at least research it yourself instead of putting your life in the hands of a machine that constantly hallucinates. It's your health; surely it's worth the time investment.
 
Upvote
4 (4 / 0)

Hmnhntr

Ars Scholae Palatinae
3,129
I've seen far too many stories about people who have actually cured themselves after consulting LLMs, despite having seen numerous doctors who failed to understand what was wrong with them. And many such stories come from ... the US where healthcare is actually quite decent albeit very expensive.

I really don't understand the stance of the Ars audience on this issue. To me it looks like hubris compounded by the assumption that you can ... study chemistry, biology and physiology sufficiently well to become a better doctor than those you can afford to treat you, who have likely spent many years studying. How does this work? Please enlighten me. In my entire life, only one person has achieved that: my own mother. All others have perished. I'm on the same trajectory.

I have major issues with my intestines and digestion. I have seen the head of the gastroenterology department at the local medical university in my region. I have also seen the best local gastroenterologist. I have also seen two other "normal" gastroenterologists. Four specialists in total. Yet no one has given me any advice. Not one of them even asked me to be tested for H. pylori, which I did myself. I found out that I was infected with it and cured myself, all after consulting what's called here "AI slop". Sadly, my digestive issues haven't improved at all, but at least I'm H. pylori-free (not an issue in Western countries; 90% of Russians or so are infected with it).

I have other major health issues as well, including problems with my cervical spine and brain. I have not been able to find anyone who can explain what is going on with me.

It's all so easy for you, isn't it? You just get an appointment with a doctor and you're good to go, right? But why do famous American doctors say it's often not the case? Why is it that the vast majority of doctors in the healthcare system around the world treat your symptoms, but not the underlying disease?

'LLMs are bad', 'LLMs don't understand', 'LLMs hallucinate', 'don't use them' ... even if you're bankrupt or can't access decent healthcare. Is it groupthink? Confirmation bias? Hubris? It's up to you to decide. Don't forget to downvote me just because you earn $120k a year, have insurance and benefits, and can access one of the best healthcare systems in the world. I was made redundant a few days ago. I earned an above-average salary of ... ta-dam $1k a month in Russia. If you have a hammer, everything looks like a nail, right? And other nations and peoples who earn a tenth of what you do don't exist, right?

I could really use your resources to help me get healthy. It's a shame I was born in a forsaken Russian city and ruined my health through negligence and self-loathing caused by major depression and stress.
So it didn't help you cure your condition? It guessed that you have a condition that affects 90% of people in your country, and maybe helped you clear yourself of it. That's not a glowing recommendation; you certainly could have done that yourself. And if you never confirmed its information, how are you even sure you really had H. pylori and are now free of it?

You claim it is useful, but your example is the AI telling you something you could have looked up in 5 minutes, and ultimately doing nothing to help with the condition you went to it for. That sure sounds like the people here on Ars are right.

And as far as resources, people have already addressed that; I don't make much money myself. I have barely over $5k in savings. I lost most of mine to a medical emergency at the start of last year. No one is saying "just go to the doctor". You're fighting with a ghost. The community is just saying "maybe don't rely on an auto-complete that frequently makes up answers and ultimately just regurgitates what you could find out on your own".
 
Upvote
3 (3 / 0)

Hmnhntr

Ars Scholae Palatinae
3,129
While this might be true for you and me, most people can not do research on a PhD level, nor do they have a PhD friend who will sit for dozens of hours researching their disease. They don't have the science background or the math skills. They can't simply synthesize all the studies and apply them to their situation. Which is why there are so many vaccine skeptics and COVID deniers in the world.

If all you're doing is typing your symptoms into Google and clicking on the first link, then you're going to end up with a worse answer than LLM. (Or I suppose, these days, the first link will be an LLM answer.)
Pretty sure it's a better idea to look at Mayo Clinic and similar websites than trust an LLM. If it gives you an accurate answer, it will basically just be telling you what those sites would, anyway.

You think an LLM has the ability to synthesize studies and accurately apply them? You think it can actually understand the science or math behind its answers?
 
Upvote
5 (5 / 0)

Hmnhntr

Ars Scholae Palatinae
3,129
The craziest part is my daughter in junior college taking CS (it's one of the programs that transfers to a CSU program) - and the prof had to change assignments to count for 0 because so many kids were using AI and not even bothering to remove dead giveaway comments, etc. How the heck? Do they think that it's about "learning to use AI" when they're being taught C++?? Both my daughters finished high school in two years because they couldn't stand the pure stupidity of most of the students, and the older one is finding it almost as bad at a mid-level CSU. And just using AI - they can't even figure out deadlines, plan for something due next week, etc. It's ridiculous. They're all automatons, doing what their phone tells them.
People who go to school, but do everything they can to avoid doing schoolwork or learning anything baffle me. Do they really think that the knowledge doesn't matter? That they're just going to fake their way through life? Double that when they're paying to be there.

And I guarantee that when reality hits these people in the face, they'll blame school for 'failing them'. I took enough regular-level classes amidst my advanced classes to see a clear pattern. The only real difference was whether or not you were willing to pay attention and try. I found that the vast majority of 'regular' students did neither of those things. They bullied the teacher, talked through the whole class, and made no attempt to do homework or study - then always complained about how 'impossible' the tests were. I don't know what people like that think school is. Do they think the teacher can somehow psychically pass on knowledge without their own effort? At the very least, how can they not tell that they're the problem? Our school system definitely has serious issues. But in my experience one of the biggest is that so many kids don't respect the idea of an education in the first place. And that's a failure of the parents. I can only imagine that the mindset gets worse each generation....
 
Upvote
3 (3 / 0)

steelcobra

Ars Tribunus Angusticlavius
9,858
People who go to school, but do everything they can to avoid doing schoolwork or learning anything baffle me. Do they really think that the knowledge doesn't matter? That they're just going to fake their way through life? Double that when they're paying to be there.

And I guarantee that when reality hits these people in the face, they'll blame school for 'failing them'. I took enough regular-level classes amidst my advanced classes to see a clear pattern. The only real difference was whether or not you were willing to pay attention and try. I found that the vast majority of 'regular' students did neither of those things. They bullied the teacher, talked through the whole class, and made no attempt to do homework or study - then always complained about how 'impossible' the tests were. I don't know what people like that think school is. Do they think the teacher can somehow psychically pass on knowledge without their own effort? At the very least, how can they not tell that they're the problem? Our school system definitely has serious issues. But in my experience one of the biggest is that so many kids don't respect the idea of an education in the first place. And that's a failure of the parents. I can only imagine that the mindset gets worse each generation....
It's because they've been driven hard into believing that the degree at the end is the only part that matters.
 
Upvote
0 (0 / 0)
Pretty sure it's a better idea to look at Mayo Clinic and similar websites than trust an LLM. If it gives you an accurate answer, it will basically just be telling you what those sites would, anyway.

You think an LLM has the ability to synthesize studies and accurately apply them? You think it can actually understand the science or math behind its answers?

There have been a few studies here and there that have looked into it. Originally, in 2023 (I think), one compared GPT to WebMD and doctors: GPT did much better than WebMD and a little worse than doctors. Then in 2024 or 2025 there was a study showing that, given complicated case studies, GPT was better than doctors. Another showed that while LLMs were sometimes more accurate (depending on domain), their mistakes were worse. Another (2025?) argued that even though LLMs could diagnose conditions, lay people were bad at using LLMs to diagnose a condition, because they would just fail to type in all the details.

Anyway... that's a lot of details. But basically, I think if you're trying to research something complicated, an LLM might help you...

A comparable example might be if you are looking for an unpopular book. You don't remember the name of it. You might remember a plot point, or the feel of the story, or the first letter of a character's name and their job title. Google will not help you at all. A librarian might help if they read the book. An LLM will have a decent chance of catching it, but might have some dumb guesses first.

Anyway, I wouldn't trust an LLM, but I might ask it about my condition anyway, in the same way I might ask anyone.

Also, while Mayo Clinic is fine, a lot of folks are ensnared by complete bullshit, like vitamins and RFK and manosphere youtubes.
 
Last edited:
Upvote
0 (0 / 0)
You think an LLM has the ability to synthesize studies and accurately apply them? You think it can actually understand the science or math behind its answers?
I also realize I didn't reply directly. I think they can apply studies, math, and science okay. Scientists and mathematicians use LLMs as quasi research assistants. They may not understand anything at all, but they are still useful.

And certainly blows away the average person who does not understand probability at all.
 
Upvote
-2 (0 / -2)

k h

Ars Centurion
365
Subscriptor
It's definitely safe, but I would argue that there isn't anything 0.01mm in height that fits any reasonable definition of a "bridge", given that the "bungee jumping" context makes it clear we're talking about road bridges and not microelectronics or such ;-)
On the contrary! Nearly every bridge has at least two bungee safe zones that are at most 0.01mm above the adjacent territory.
 
Last edited:
Upvote
0 (0 / 0)

Hayano Chie

Smack-Fu Master, in training
2
A comparable example might be if you are looking for an unpopular book. You don't remember the name of it. You might remember a plot point, or the feel of the story, or the first letter of a character's name and their job title. Google will not help you at all. A librarian might help if they read the book. An LLM will have a decent chance of catching it, but might have some dumb guesses first.
I have done this - though not because of this post - and I can tell you that it doesn't work.

In particular I remembered a movie from my childhood, about a guy or a girl who does insurance fraud using their kid. The adult jumps under a car and then the kid runs in, "You drove over my dad / mom", and they ask for money. During one of the "hits" they get entangled with the "victim" romantically, and it ends with love, of course. Couldn't remember the name and hey, AI could help here, right?

Nope, it was awful. The thing went in exactly the same direction as Google Search, "fraud" questions directed me to "heist" movies, it offered some (many) movies, but no hits, then it offered some more, always subserviently trying to help.

Then it started hallucinating, offering me movies that clearly didn't exist: a Sandra Bullock movie from 1993 (fair, she had like six in that year alone), but with a movie name not listed in her filmography that the internet knows nothing about. It got worse and worse, not better!

In the end I went back to the Google and eventually found it out. For those interested, it's a somewhat unknown flop (trying to ride the success of Pretty Woman and failing, I assume) from 1991 called Curly Sue.
 
Upvote
2 (2 / 0)
I have done this - though not because of this post - and I can tell you that it doesn't work.

In particular I remembered a movie from my childhood, about a guy or a girl who does insurance fraud using their kid. The adult jumps under a car and then the kid runs in, "You drove over my dad / mom", and they ask for money. During one of the "hits" they get entangled with the "victim" romantically, and it ends with love, of course. Couldn't remember the name and hey, AI could help here, right?

Nope, it was awful. The thing went in exactly the same direction as Google Search, "fraud" questions directed me to "heist" movies, it offered some (many) movies, but no hits, then it offered some more, always subserviently trying to help.

Then it started hallucinating, offering me movies that clearly didn't exist: a Sandra Bullock movie from 1993 (fair, she had like six in that year alone), but with a movie name not listed in her filmography that the internet knows nothing about. It got worse and worse, not better!

In the end I went back to the Google and eventually found it out. For those interested, it's a somewhat unknown flop (trying to ride the success of Pretty Woman and failing, I assume) from 1991 called Curly Sue.
Me:
I'm looking for a movie from the 90s where someone uses his kid to commit insurance fraud? The adult jumps under a car and then the kid runs in, "You drove over my dad / mom" and then they ask for money. During one of the "hits" they get entangled with the "victim" romantically, ends with love, of course.

Claude:
Searched the web

That's it! The TV Tropes page directly mentions the movie you're thinking of. It's Curly Sue (1991), written and directed by John Hughes!
In Curly Sue, the scheme is played straight: the titular girl helps her father pull off this staged-pedestrian-accident scam with a rich female lawyer ("You've killed him!").



First try, though it did scribble some other movies first. Maybe your bot had a hard time because they don't commit insurance fraud, I don't know?
 
Last edited:
Upvote
-1 (0 / -1)
Google search AI also got it right on the first try. What did you actually say to it??

Search:
I'm looking for a movie from my childhood where someone uses his kid to commit insurance fraud? The adult jumps under a car and then the kid runs in, "You drove over my dad / mom" and then they ask for money. During one of the "hits" they get entangled with the "victim" romantically, ends with love, of course.

Google:
The movie you're thinking of is Curly Sue (1991).
 
Last edited:
Upvote
-1 (0 / -1)
In the end I went back to the Google and eventually found it out. For those interested, it's a somewhat unknown flop (trying to ride the success of Pretty Woman and failing, I assume) from 1991 called Curly Sue.
And I never saw Curly Sue, but it sounds to me like a reboot of Charlie Chaplin's The Kid, which is actually a good movie, by the way. Probably better than everything else at the time.

 
Last edited:
Upvote
0 (0 / 0)

Hayano Chie

Smack-Fu Master, in training
2
First try, though it did scribble some other movies first. Maybe your bot had a hard time because they don't commit insurance fraud, I don't know?

Or maybe the AI is quite bad if you put in a bad query (I didn't use this particular query at the time, of course) and it strays, for whatever reason, somewhat far from the target. This is a known thing: if an algorithm you're trying to point at something becomes focused on a wrong optimum, it's very hard to pull it away from it.

Which wouldn't be a bad thing per se - if it didn't pretend it does indeed know, and ask you for further and further details as it descends into hallucinations. This is different from, say, old-school internet searches - every search is pretty much independent (there's memorization stuff, but if you think it's interfering with your search, you can use an anonymized search), and if you get zero results, you search for a different term.

You can, of course, do this with AI as well: open a new instance to forget the context. It's just even more environmentally unfriendly and more annoying, and you have to notice first - which is hard for many people, exactly because the AI keeps nagging them, "Oh, that was not it, maybe if you add one more detail, I might finally remember", and people don't know that it's the worst thing they can do at that moment.

(Another possibility, of course, is that at the time of the search the movie wasn't yet in the lexicon of the stolen data of whatever model I was using at the time and now it is~)
 
Upvote
0 (0 / 0)