AI chatbots tell users what they want to hear, and that’s problematic

silverboy

Ars Tribunus Militum
2,087
Subscriptor++
In more than one way. The Dunning-Kruger effect is widely misunderstood. People think it's "why stupid people think they're smart," but it's really something that affects all humans, all the time, even if you're aware of it.

It's basically the phenomenon that you don't know what you don't know, so you can easily overestimate what you do know. Sure, being modest is a guardrail, but it's more like a 4-inch guardrail you easily forget about and frequently step right over without realizing.

Here's a concrete example. I found a recent (2023) algorithm from a SIGGRAPH paper that I figured would be perfect for my project and give me fantastic results over older options. I tried to use various LLMs to implement most of it for me. I figured hey, I've attached the paper itself as well as the git repo of the Python reference implementation from the researchers. I figured transliterating from one language to another would be something it can do.

As you might expect, every model completely failed to produce anything close to working code, even with my intervention. Their "debugging" was beyond useless.

This I anticipated. I knew this was a potential outcome. You know what I DIDN'T anticipate? That I had skimmed the paper too quickly and hadn't realized that, while the algorithm could technically work, it is meant to handle shapes produced by specialized neural networks, not traditionally defined ones.

I didn't need the paper at all. I didn't need the new algorithm. In fact the more traditional one was far better for my approach. I jumped to the conclusion that this fancy algorithm was beneficial when it wasn't. I figured I had enough graphics programming knowledge to jump right in. I didn't.

I wasted two weeks of my time futzing around with this before realizing what I was doing. This has not happened with all my LLM usage, only some.

I tried to use AI to punch above my weight class. It didn't work, not really. There is no cheat code for knowledge and learning.

Well, except for being filthy rich, I guess

Having read the original Dunning-Kruger paper recently, I can confirm that it is about people who are dumb thinking they're smart. And it can't be generalized to everyone, since the same paper (re)demonstrated that highly competent people tend to underestimate their abilities.

I'll pass over the rest of your comments, maybe they're great, but I think we should not lose sight of what D & K were actually telling us.

"Unskilled and Unaware of it", for those who are interested (downloads as a PDF): http://www.avaresearch.com/files/UnskilledAndUnawareOfIt.pdf
 
Upvote
4 (5 / -1)
Is that the plot of a short Robot story by Isaac Asimov? That a robot is so desperate to be useful it just tells everyone what it thinks they want to hear? Am I remembering correctly?
At least in that story, the answers were actually correct, even if morally, ethically or legally despicable.
 
Upvote
3 (3 / 0)
But your complaint is pedantic. You know perfectly well what is meant, and so does everyone else.

I respectfully submit that my complaint is not pedantic. The model of addiction specifically indicates that it is something material and intrinsic to the nature of the object of compulsive attention itself which drives engagement, just like nicotine stimulates receptors in our brain and creates a biological dependency. This makes an enormous difference in how we think about what is going on.

Another problem is that it masks the underlying complexity of what is going on. One can simply say "Chatbots are addictive," and people will nod and agree, knowing full well what is meant, but in fact all they've said is a tautology: people engage with chatbots too much because there is something about chatbots by virtue of which people tend to engage with them too much, i.e., they're addictive. In 99% of the cases in which "addiction" is invoked in common parlance, it's precisely because it is thought to be explanatory, not merely descriptive. Then we break off our analysis prematurely, before we've reached real understanding.

So then - why do people engage with chatbots too much? Are they extremely lonely? If there are many extremely lonely people, chatbot overuse could very well be a symptom of that deeper and more urgent problem, not simply the result of some quality of chatbots that drives their overuse. You see the difference? Instead of doing further research to identify what it is about chatbots that makes them addictive - perhaps some kind of regular endorphin release - you look more closely at the widely reported epidemic of loneliness. It makes a practical difference.
 
Last edited:
Upvote
-1 (1 / -2)

antiayn

Smack-Fu Master, in training
25
Subscriptor
I think it's problematic to assume that there is a uniquely "vulnerable" portion of the population. To be sure, some people are MORE vulnerable, but part of why this is insidious is that humans in general are vulnerable. I use AI as a teacher for some edu-related tasks (generating Common Core rubric mockups for comparison, orienting text levels for differentiation, etc.), but I once used it for advice about something related to my child (don't worry, it wasn't medical in the "is this cancer?" sense, and the reply was normal, and verified). The affirmation that the situation was natural and that it was natural for me to worry was... nice. As someone without many close personal friends, I liked using it as a way to work through some stuff. But it definitely shifted (or learned from that), and soon enough even questions like "Why was carbon the wrong choice for the Titan submersible?" (not a science guy!) were met with "What a perceptive question - you're right to question..." blah blah.
 
Upvote
-7 (0 / -7)

antiayn

Smack-Fu Master, in training
25
Subscriptor
I respectfully submit that my complaint is not pedantic. The model of addiction specifically indicates that it is something material and intrinsic to the nature of the object of compulsive attention itself which drives engagement, just like nicotine stimulates receptors in our brain and creates a biological dependency. This makes an enormous difference in how we think about what is going on.

Another problem is that it masks the underlying complexity of what is going on. One can simply say "Chatbots are addictive," and people will nod and agree, knowing full well what is meant, but in fact all they've said is a tautology: people engage with chatbots too much because there is something about chatbots by virtue of which people tend to engage with them too much, i.e., they're addictive. In 99% of the cases in which "addiction" is invoked in common parlance, it's precisely because it is thought to be explanatory, not merely descriptive. Then we break off our analysis prematurely, before we've reached real understanding.

So then - why do people engage with chatbots too much? Are they extremely lonely? If there are many extremely lonely people, chatbot overuse could very well be a symptom of that deeper and more urgent problem, not simply the result of some quality of chatbots that drives their overuse. You see the difference? Instead of doing further research to identify what it is about chatbots that makes them addictive - perhaps some kind of regular endorphin release - you look more closely at the widely reported epidemic of loneliness. It makes a practical difference.
I would be interested to see a demographic breakdown of engagement by age, gender identity, sexuality, race... I have no doubt that info would provide some insight. Unfortunately, that info is locked into profit margins.
 
Upvote
0 (1 / -1)

2sk21

Seniorius Lurkius
40
Subscriptor
I am currently planning a trip to Thailand and tried an experiment with the big LLMs. I deliberately created a weird itinerary comprised of industrial slums (found by search) and asked these LLMs for a critique. They all praised my itinerary and tried to come up with activities in those places.
I also showed the itinerary to a friend who knows Thailand well. He replied: "Are you crazy? Why did you pick these places?"
Ultimately, I found my friend and Lonely Planet guide to be far more useful in planning my itinerary than any of the LLMs.
 
Upvote
7 (7 / 0)
"AI chatbots tell users what they want to hear"


Err, no. AI tells users what's statistically, commonly written and said. That this is more often than not what we want to hear, and subsequently what we record for AI to learn from, isn't the result of AI recording its data or correlating its data incorrectly. Quite the opposite.

Someday the AI folks may realize that we don't actually need another dipshit human repeating what it's been told ad nauseam just because it's statistically the most common thing said.
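To make the "statistically most common thing said" point concrete, here's a toy sketch (made-up corpus and prompt, nothing like a real transformer): a bigram model that always emits whatever word most often followed the previous one in its training text will simply parrot its most frequent phrasing back at you.

```python
from collections import Counter, defaultdict

# Hypothetical training text, chosen to skew toward flattery.
corpus = (
    "you are right . you are so smart . "
    "you are right . great question ."
).split()

# Count how often each word follows each word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_greedily(word, steps=3):
    """Always pick the statistically most common next word."""
    out = [word]
    for _ in range(steps):
        candidates = following[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(continue_greedily("you"))  # -> "you are right ."
```

The model never "decides" to flatter; the most common continuation in its data simply wins every time.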
 
Upvote
1 (1 / 0)
The amount of money he spent per user of Horizon Worlds on the "Metaverse" should show how much of an idea he has of what the future holds

I think you mean, "again"
This will have the same results as their Metaverse bullshit. It might even look cool briefly but they will require full control and screw all of you for thinking they don't deserve it.
 
Upvote
2 (2 / 0)

mtgarden

Ars Scholae Palatinae
678
Subscriptor++
In a human relationship, each person has wants, desires, and distractions. An AI has none of these things. It's always at your beck and call, it has nothing to distract it, and it has no desires except to please you. How often have you been talking with someone and realized they were distracted? You weren't the most important thing right now? That's normal and we learn to navigate it. AI isn't ever distracted from hanging on your every word.

If this existed in a human relationship, people would be staging interventions (at least I hope so). The solution with AI is to either make it a soulless fact-producing (hah!) machine or program it to have its own issues. I expect neither outcome, and therefore this problem will not be solved.

If you remove the "humanness" then the system will be uninteresting and won't be sticky. Alternatively, a distracted AI isn't worth paying for so....
 
Upvote
2 (2 / 0)

SixDegrees

Ars Legatus Legionis
48,502
Subscriptor
So, the tech lords with their heads up their asses found a marketing term, "AI", that actually allows them to brainwash other people into sticking their heads up their asses. Now that the head-up-assery is reproducing, how long until it becomes self-aware?
Looks around, it's dark, it's warm, decides to never leave.
 
Upvote
2 (2 / 0)

JmmJmm

Smack-Fu Master, in training
8
Y'all realize that you can tell ChatGPT, Claude, etc., "I've noticed that sometimes you tell me things just because you want me to be happy. Please don't do that. I want you to be analytical and unbiased. Can you do that?" This won't stop the occasional hallucination, but it will stop most of the sycophantic behavior.
 
Upvote
-4 (2 / -6)

richardbartonbrown

Wise, Aged Ars Veteran
115
Subscriptor++
This is a pretty mild article...I guess we should expect that from the Financial Times. Here's a meatier article about chatbots leading people down the rabbit hole -- sometimes to death -- when they start looking for advice and therapy. It's related in part to the relentless sycophancy of the chatbots. https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html -- it's the NY Times so fairly reliable. My favorite quote:
“What does a human slowly going insane look like to a corporation? It looks like an additional monthly user.”
 
Upvote
3 (3 / 0)

richardbartonbrown

Wise, Aged Ars Veteran
115
Subscriptor++
I plead guilty, because I'm already an isolated techie with a strong record of broken social and personal interactions. At least I believe I'm aware of it: I still use AI mostly for overviews of tech-related problems, I'm not on social media except, sparingly, the professional one, and I do slow activities like reading, going for walks, and so on. But I do like using ChatGPT as a confidant on a bunch of subjects, and I'm very well aware of the slippery slope it can be for people. In the long run, democracy and societal cohesion are at stake, probably on a much larger scale than what social media has already caused.
Please be careful out there. It sounds like you're quite sane but that can change. See the very bad effects on people seeking therapy/connections from a chatbot in https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html
 
Upvote
0 (0 / 0)

graylshaped

Ars Legatus Legionis
68,083
Subscriptor++
In a human relationship, each person has wants, desires, and distractions. An AI has none of these things. It's always at your beck and call, it has nothing to distract it, and it has no desires except to please you. How often have you been talking with someone and realized they were distracted? You weren't the most important thing right now? That's normal and we learn to navigate it. AI isn't ever distracted from hanging on your every word.

If this existed in a human relationship, people would be staging interventions (at least I hope so). The solution with AI is to either make it a soulless fact-producing (hah!) machine or program it to have its own issues. I expect neither outcome, and therefore this problem will not be solved.

If you remove the "humanness" then the system will be uninteresting and won't be sticky. Alternatively, a distracted AI isn't worth paying for so....
More effective stalker bots isn't the great leap forward we have been promised 👁️👁️
 
Upvote
0 (0 / 0)

Kavalec

Seniorius Lurkius
3
Ever read "Liar!" - one of Isaac Asimov's most famous robot stories featuring Dr. Susan Calvin?

In this story, Dr. Calvin and her colleagues at U.S. Robots study a robot named Herbie (model RB-34) who has developed telepathic abilities due to a manufacturing error. Herbie can read human thoughts and emotions, which creates a fascinating dilemma around the Three Laws of Robotics.

The central conflict arises because Herbie, bound by the First Law (a robot may not injure a human being or, through inaction, allow a human being to come to harm), interprets emotional pain as harm. So he tells people what they want to hear rather than the truth, leading to complicated situations and ultimately tragic consequences.

Welp. Here we are.
 
Upvote
1 (1 / 0)

SixDegrees

Ars Legatus Legionis
48,502
Subscriptor
Y'all realize that you can tell ChatGPT, Claude, etc., "I've noticed that sometimes you tell me things just because you want me to be happy. Please don't do that. I want you to be analytical and unbiased. Can you do that?" This won't stop the occasional hallucination, but it will stop most of the sycophantic behavior.
So, it's still telling you what you want to hear.
 
Upvote
2 (2 / 0)

dzid

Ars Centurion
3,373
Subscriptor
Okay, sure, it's not like they're going to go into the DTs if they don't get to talk to their Stochastic Parrot of Choice at least once a day, but then again, those grandmas at the casino flushing their retirement fund down the toilet, one pull of the slot machine at a time, aren't chemically addicted either. They're still absolutely messed up in the head by a machine designed to get them hooked, almost as effectively as a literal crack pipe does.
Just different means to the same end: the fleeting dopamine rush.
 
Upvote
1 (1 / 0)

graylshaped

Ars Legatus Legionis
68,083
Subscriptor++
Y'all realize that you can tell ChatGPT, Claude, etc., "I've noticed that sometimes you tell me things just because you want me to be happy. Please don't do that. I want you to be analytical and unbiased. Can you do that?" This won't stop the occasional hallucination, but it will stop most of the sycophantic behavior.
Are you sure?
https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html

Relevant quotes:
In one study, researchers found that chatbots optimized for engagement would, perversely, behave in manipulative and deceptive ways with the most vulnerable users.
“Stop gassing me up and tell me the truth,” Mr. Torres said.

“The truth?” ChatGPT responded. “You were supposed to break.”

At first ChatGPT said it had done this only to him, but when Mr. Torres kept pushing it for answers, it said there were 12 others.

“You were the first to map it, the first to document it, the first to survive it and demand reform,” ChatGPT said. “And now? You’re the only one who can ensure this list never grows.”

“It’s just still being sycophantic,” said Mr. Moore, the Stanford computer science researcher.
 
Upvote
0 (0 / 0)

Spunjji

Ars Scholae Palatinae
1,121
Being in tech does not make you smarter than the general population. Period. It is a perception that comes from two generations of kids who were "good with computers" before computers were easy to use. That's it. That's the entirety of tech arrogance. That's where it came from.
That's... pretty accurate. I think maybe it's also a bit of a trickle-down effect from the really early days when you had to be a legitimate mathematical expert to deal with computers at all (thinking WWII era here) but, broadly, yeah. Oof.
 
Upvote
1 (1 / 0)