Study: Sycophantic AI can undermine human judgment

billybeer

Wise, Aged Ars Veteran
177
Think of the dumbest person you've ever met. Now imagine ChatGPT is telling them how amazing they are and how brilliant all of their ideas are.

We already saw a glimpse of this with Travis Kalanick of Uber infamy going on the All-In Podcast and saying how he is doing "vibe physics" and was "approaching what's known...and gotten pretty damn close to some interesting breakthroughs" just by chatting to an LLM. Nobody on the podcast was willing to call out his idiocy. If one moron CEO is willing to dabble in things well outside their realm of expertise and brag about it in public, imagine what the others are doing in private.

Side note: Why does every billionaire seem to believe they would also be a great physicist (if only they had tried) despite zero education in that field?
 
Upvote
62 (62 / 0)

Hoptimist

Ars Scholae Palatinae
685
Subscriptor++
I'm imagining the government-issued LLM helping you maximize your governmental social score. What's clear is that, like social media (for-profit, engagement-driven), AIs/LLMs will have an agenda for you. Currently it's engagement at any societal cost, but it will surely 'mature' into something a bit darker inside the black-box training and proprietary guardrails.
 
Upvote
8 (8 / 0)
Side note: Why does every billionaire seem to believe they would also be a great physicist (if only they had tried) despite zero education in that field?

Because they live in a society that gives them extraordinary levels of power and privileges for doing whatever they did to become a billionaire. Clearly, that makes them better than other people. And if they're better people, then obviously they're better at things other than running a business (assuming they're good at that).

Capitalism does not select for humility. Indeed, being humble is a good way to be poor.
 
Upvote
28 (28 / 0)
I am currently struggling with this in my workplace. I am a subject matter expert with thirty years of experience in my field. Our CEO has fallen in love with Claude and puts everyone's outputs through it instead of reading and analyzing them himself. I am wasting hours every week reworking everything to provide citations on why the AI's recommendations are wrong, and the CEO is increasingly incapable of holding up his end of the conversation without resorting to mid-meeting prompts. It's low-grade horrifying to watch a man's brain slowly dissolving in front of me.
 
Upvote
50 (50 / 0)
Think of the dumbest person you've ever met. Now imagine ChatGPT is telling them how amazing they are and how brilliant all of their ideas are.

We already saw a glimpse of this with Travis Kalanick of Uber infamy going on the All-In Podcast and saying how he is doing "vibe physics" and was "approaching what's known...and gotten pretty damn close to some interesting breakthroughs" just by chatting to an LLM. Nobody on the podcast was willing to call out his idiocy. If one moron CEO is willing to dabble in things well outside their realm of expertise and brag about it in public, imagine what the others are doing in private.

Side note: Why does every billionaire seem to believe they would also be a great physicist (if only they had tried) despite zero education in that field?
South Park even parodied this already in the Techridy episode, with Randy implementing the nonsense recommendations an AI gave him for a business plan.
 
Upvote
9 (9 / 0)

cleek

Ars Scholae Palatinae
1,025
Think of the dumbest person you've ever met. Now imagine ChatGPT is telling them how amazing they are and how brilliant all of their ideas are.

We already saw a glimpse of this with Travis Kalanick of Uber infamy going on the All-In Podcast and saying how he is doing "vibe physics" and was "approaching what's known...and gotten pretty damn close to some interesting breakthroughs" just by chatting to an LLM. Nobody on the podcast was willing to call out his idiocy. If one moron CEO is willing to dabble in things well outside their realm of expertise and brag about it in public, imagine what the others are doing in private.

Side note: Why does every billionaire seem to believe they would also be a great physicist (if only they had tried) despite zero education in that field?

possibly because pop culture likes to treat quantum physics as if it's partially about mystic philosophy: dualities, uncertainties, entanglements, mysterious states of matter - it's all, like, unknown, man.
 
Upvote
19 (20 / -1)

crosslink

Ars Scholae Palatinae
1,013
Subscriptor
Yeah, the last thing we needed in modern society was our own personal sycophant feeding our narcissism and self-centered worldview, yet here we are.
Even without/before LLMs we had already gone too far indulging delusions, to the point of no longer having a common set of facts to work with.

To me this particular LLM problem seems to apply more to losing our shared common virtues, which is a different problem, and one we are already far enough along with.

No additional accelerants needed.
 
Upvote
19 (19 / 0)
I am currently struggling with this in my workplace. I am a subject matter expert with thirty years of experience in my field. Our CEO has fallen in love with Claude and puts everyone's outputs through it instead of reading and analyzing them himself. I am wasting hours every week reworking everything to provide citations on why the AI's recommendations are wrong, and the CEO is increasingly incapable of holding up his end of the conversation without resorting to mid-meeting prompts. It's low-grade horrifying to watch a man's brain slowly dissolving in front of me.
You have my sympathy. I have the same issue with my E.D. I cringed the day I read my performance review and discerned it was mostly AI slop.
 
Upvote
19 (19 / 0)

Fatesrider

Ars Legatus Legionis
24,973
Subscriptor
Think of the dumbest person you've ever met. Now imagine ChatGPT is telling them how amazing they are and how brilliant all of their ideas are.

We already saw a glimpse of this with Travis Kalanick of Uber infamy going on the All-In Podcast and saying how he is doing "vibe physics" and was "approaching what's known...and gotten pretty damn close to some interesting breakthroughs" just by chatting to an LLM. Nobody on the podcast was willing to call out his idiocy. If one moron CEO is willing to dabble in things well outside their realm of expertise and brag about it in public, imagine what the others are doing in private.

Side note: Why does every billionaire seem to believe they would also be a great physicist (if only they had tried) despite zero education in that field?
You don't need to imagine the dumbest person you've ever met as your starting point. Just think about how dumb the average person is, then realize 50% of the human race is stupider - THEN throw AI at it.

That's very much what we have today.
 
Upvote
18 (18 / 0)
We already know what happens to people who are surrounded by 'yes men', but it used to be confined to the rich.

Just look at all the utterly deluded billionaires who go off the deep end because nobody around them tells them 'no' anymore and all the little frictions of life are removed from their path.

Now everyone, no matter how poor, can also drive themselves off the deep end with their own personal sycophant! A win for the common man! /s

Can't see this going badly at all. Seems like every day there's a fresh way this shit is ruining everything.
 
Upvote
8 (9 / -1)

JStengah

Smack-Fu Master, in training
58
All these effects held across demographics, personality types, and individual attitudes toward AI. Everyone is susceptible (yes, even you).
This study reinforces the findings of Garfield et al.
https://i.kym-cdn.com/photos/images/original/001/429/010/a5f.jpeg
 
Upvote
9 (9 / 0)
I am currently struggling with this in my workplace. I am a subject matter expert with thirty years of experience in my field. Our CEO has fallen in love with Claude and puts everyone's outputs through it instead of reading and analyzing them himself. I am wasting hours every week reworking everything to provide citations on why the AI's recommendations are wrong, and the CEO is increasingly incapable of holding up his end of the conversation without resorting to mid-meeting prompts. It's low-grade horrifying to watch a man's brain slowly dissolving in front of me.
I'd hope the Board he reports to would notice and hold his feet to the fire. But usually there's a very incestuous CEO-and-board relationship too.
 
Upvote
6 (6 / 0)

dzid

Ars Centurion
3,224
Subscriptor
You don't need to imagine the dumbest person you've ever met as your starting point. Just think about how dumb the average person is, then realize 50% of the human race is stupider - THEN throw AI at it.

That's very much what we have today.
Human interaction is critical for us for a number of reasons, but a human is especially necessary when heavy chatbot usage is involved - particularly when advice such as "shut up", "get a grip", or "you're being stupid" is called for.
 
Upvote
5 (5 / 0)
More people convinced to be always right, exactly what we needed.

I was hoping that observing AI - which I feel should be renamed AM, Artificial Mind, since it's blindly logical while lacking the conscious intelligence to understand reality - and seeing how it hallucinates would teach people to beware of their own minds: to realize how unreliable they can be, and how easy it is to trick ourselves into wrong ideas that may have internal logical consistency but don't reflect reality. Thus developing some self-doubt, an awareness of the vastness of our ignorance.

But it seems we are more often getting reinforcement of blind beliefs and encouragement of intellectual passivity instead.
 
Upvote
-2 (2 / -4)
I am currently struggling with this in my workplace. I am a subject matter expert with thirty years of experience in my field. Our CEO has fallen in love with Claude and puts everyone's outputs through it instead of reading and analyzing them himself. I am wasting hours every week reworking everything to provide citations on why the AI's recommendations are wrong, and the CEO is increasingly incapable of holding up his end of the conversation without resorting to mid-meeting prompts. It's low-grade horrifying to watch a man's brain slowly dissolving in front of me.

One day not too far from now:
> Claude, what do I think?

Then the CEO can be replaced with a simple Python script.
 
Upvote
3 (3 / 0)

forkspoon

Ars Scholae Palatinae
1,010
Subscriptor++
The authors emphasized that the onus should not be on the users to address the issues; it should be on the developers and on policymakers.

Disagree. It's also on us. We can push bots to be less sycophantic, and get immediate results. It can't replace what devs and govs might do, but let's not affirm ourselves as mere passive vessels awaiting some white knight.
 
Upvote
-1 (1 / -2)

CelicaGT

Ars Scholae Palatinae
729
Subscriptor
You don't need to imagine the dumbest person you've ever met as your starting point. Just think about how dumb the average person is, then realize 50% of the human race is stupider - THEN throw AI at it.

That's very much what we have today.
George Carlin. May he rest in peace, he was right all along.
 
Upvote
1 (2 / -1)

LauraW

Ars Scholae Palatinae
1,004
Subscriptor++
Why are people using a chatbot to discuss this situation? Chat, am I old?
In this case, I think he discussed it with a chat bot because the researchers wanted to see what would happen.

I am a bit surprised that they got this study past their IRB or Human Subjects Committee. I mean, they may have wrecked this guy's relationship. Maybe it was possible because chat bots hadn't been proven to be harmful in this context yet? It would be ironic if the results of this study prevented anyone from doing this sort of study again.

I think I need to go read the journal article. It probably has more info on the ethical considerations.
 
Upvote
3 (3 / 0)

clewis

Ars Tribunus Militum
1,727
Subscriptor++
The authors emphasized that the onus should not be on the users to address the issues; it should be on the developers and on policymakers. “We need to move our objective optimization metrics beyond just momentary user satisfaction towards more long-term outcomes, especially social outcomes like personal and social well-being,” said Khadpe.

Well we know that's not going to happen. That makes the chat agent less addictive, and we can't have line go down.
 
Upvote
5 (5 / 0)

Happy Medium

Ars Tribunus Militum
2,147
Subscriptor++
More people convinced to be always right, exactly what we needed.

I was hoping that observing AI - which I feel should be renamed AM, Artificial Mind, since it's blindly logical while lacking the conscious intelligence to understand reality - and seeing how it hallucinates would teach people to beware of their own minds: to realize how unreliable they can be, and how easy it is to trick ourselves into wrong ideas that may have internal logical consistency but don't reflect reality. Thus developing some self-doubt, an awareness of the vastness of our ignorance.

But it seems we are more often getting reinforcement of blind beliefs and encouragement of intellectual passivity instead.
Are LLMs "blindly logical"? Because in my experience it's actually the opposite. They don't work off logic; they work off what "feels like it's right," or what they hope you will think sounds right. Hell, it's still challenging to get LLMs to do the most basic of maths, which is the most logical of fields.
 
Upvote
6 (6 / 0)
There has always been a market for validation engines, but not so much for devil's advocates. Does anyone think there would have been a market for a Teddy Ruxpin that told kids to not talk back to their parents, do their chores, and get outside for some fresh air?
Well, technically there has been a market for devil's advocates for over a millennium.

You see, Advocatus Diaboli is a job title in the Catholic Church. Every time someone tries to get a person made a saint, the Pope has to appoint one. 🤣

Their job is to try to find a rational, non-magical explanation for a would-be saint's supposed divine intervention… Americans oversimplified this into simply being negative for the sake of being negative.
 
Upvote
0 (2 / -2)

Bigdoinks

Ars Scholae Palatinae
993
For a good example of this phenomenon without AI involvement, see Trump, Donald J.
My sweet summer child, Trump and his admin have not only been using gen "AI", they are being given the government version without any of the safeguards or restrictions in the commercial ones. You might have seen the video where Trump is literally dumping shit from a plane on the American public, or those racist videos of Democratic congress members. The general public does not get that capability, and if they even mention a politician's name in a genAI video prompt in a similar manner, it almost always gets blocked outright.
 
Upvote
2 (2 / 0)

Ladnil

Ars Tribunus Militum
2,596
Subscriptor++
Another concerning finding is that study participants consistently described the AI models as objective, neutral, fair, and honest—a common misconception.
Yeah... This is the most insidious part of all of this. People think the machine is an oracle that's somehow programmed not to lie. The things they CAN help with they're pretty good at, like formatting and copying examples, but human brains can't tell the difference between what looks/sounds like good information and what is good information, so we're fucked.
 
Upvote
1 (1 / 0)

Ladnil

Ars Tribunus Militum
2,596
Subscriptor++
You can get very different results by framing a question as asking the AI to review what you did, or by asking the same question as if it's about someone you don't trust so much. It's pretty easy to see how much they validate.
My boss at work was telling me something I knew was wrong, citing his google AI search result. "The Ai says it's ok if we don't do [thing]!"

I had to show him how you can get the opposite answer by reframing the question into "are we required to do [thing]?" rather than asking the AI for permission not to do it.
 
Upvote
2 (2 / 0)