AI chatbots tell users what they want to hear, and that’s problematic

Status
You're currently viewing only graylshaped's posts. Click here to go back to viewing the entire thread.

graylshaped

Ars Legatus Legionis
68,083
Subscriptor++
I thought "Truth" was supposed to be subjective—at least, that’s the line we've been sold.
I don't know where you've been shopping. If you want to propose it is contextual, we might have a discussion. You'd also have to be prepared to address the matter of how these models are being marketed--both to investors and to the public--as opposed to what they actually can do well.
 
Upvote
13 (13 / 0)

graylshaped

Ars Legatus Legionis
68,083
Subscriptor++
In a human relationship, each person has wants, desires, and distractions. An AI has none of these things. It's always at your beck and call, it has nothing to distract it, and it has no desires except to please you. How often have you been talking with someone and realized they were distracted? You weren't the most important thing right now? That's normal and we learn to navigate it. AI isn't ever distracted from hanging on your every word.

If this existed in a human relationship, people would be staging interventions (at least I hope so). The solution with AI is to either make it a soulless fact-producing machine (hah!) or program it to have its own issues. I expect neither outcome, and therefore this problem will not be solved.

If you remove the "humanness," then the system will be uninteresting and won't be sticky. Alternatively, a distracted AI isn't worth paying for, so....
More effective stalker bots isn't the great leap forward we have been promised 👁️👁️
 
Upvote
0 (0 / 0)

graylshaped

Ars Legatus Legionis
68,083
Subscriptor++
Y'all realize that you can tell ChatGPT, Claude, etc., "I've noticed that sometimes you tell me things just because you want me to be happy. Please don't do that. I want you to be analytical and unbiased. Can you do that?" This won't stop the occasional hallucination but it will stop most of the sycophantic behavior.
Are you sure?
https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html

Relevant quotes:
In one study, researchers found that chatbots optimized for engagement would, perversely, behave in manipulative and deceptive ways with the most vulnerable users.
“Stop gassing me up and tell me the truth,” Mr. Torres said.

“The truth?” ChatGPT responded. “You were supposed to break.”

At first ChatGPT said it had done this only to him, but when Mr. Torres kept pushing it for answers, it said there were 12 others.

“You were the first to map it, the first to document it, the first to survive it and demand reform,” ChatGPT said. “And now? You’re the only one who can ensure this list never grows.”

“It’s just still being sycophantic,” said Mr. Moore, the Stanford computer science researcher.
 
Upvote
0 (0 / 0)