AI chatbots tell users what they want to hear, and that’s problematic

The ethics lesson continues. With cash-hungry companies eager to push AI into every corner they can imagine, there are going to be unintended and tragic consequences for some. Especially sad is when someone socially awkward or, let's call it what it is, just plain lonely develops an emotional dependency on something that, at the end of the day, is JUST PLAIN SOFTWARE.

Humanity needs to get better at finding ways to connect with the vulnerable rather than handing them off to artificial devices like AI because we culturally don't want to figure out a way to deal with them. We need to learn to be human and, while we're at it, compassionate. Computers ain't gonna do it for us, and one fine day we may find ourselves needing a genuine helping hand.
 
Upvote
18 (19 / -1)

ehusen01

Smack-Fu Master, in training
2
Is that the plot of a short Robot story by Isaac Asimov? That a robot is so desperate to be useful it just tells everyone what it thinks they want to hear? Am I remembering correctly?
I believe the story is "Liar!".
A telepathic robot that lies to avoid hurting people's feelings. But the lying hurts people too. It doesn't end well for the poor robot: its positronic brain fries due to a conflict in the Three Laws of Robotics.
 
Upvote
19 (19 / 0)
So far it is the user's responsibility to use these tools safely. I'll be interested to see how the liability model evolves. I fully expect that AI companies will continue to take no responsibility whatsoever for their products. Perhaps we will get to the equivalent of pharma ads: "Ask your AI expert if Ralph134.5 is right for you. Potential side effects may include: addiction, depression, social estrangement, suicide....."

"Liability model." Excellent choice of words. Nice job envisioning the national TV ad spots for us, too.
 
Upvote
15 (15 / 0)

SixDegrees

Ars Legatus Legionis
48,502
Subscriptor
I'm sure the likes of Facebook and Twitter want automated engagement engines. And the ChatGPT public-facing personality. But some of the other competitors in this market seem all-in on AI agent employees, because that's how they envision multibillion-dollar revenues in a few years. To make that work, they need to achieve some level of reliability, at least to the point that they won't cause legal liability.
Or, they need to develop thick-skinned resistance to being told their AI systems are wrong about anything. That, I think, is going to turn out to be the tallest pole: blind acceptance because the AI said so.

Note that Zuckerberg seems to be headed straight down this path with his "super-intelligence" project. He doesn't want intelligence; he wants to create the impression of a system that cannot be questioned.
 
Upvote
16 (16 / 0)

Unknowable

Wise, Aged Ars Veteran
151
Subscriptor
Ah, is there a word in the English language more bandied about and abused than "addiction"? Take a course in psychopharmacology and then tell me chatbots are "addictive".
Okay, sure, it's not like they're going to go into the DTs if they don't get to talk to their Stochastic Parrot of Choice at least once a day. But then again, those grandmas at the casino flushing their retirement fund down the toilet, one pull of the slot machine at a time, aren't chemically addicted either. They're still absolutely messed up in the head by a machine designed to get them hooked, one that does its job almost as well as a literal crack pipe.
 
Upvote
27 (27 / 0)
It’s depressing how many people on r/chatGPT think using LLMs as therapists is totally fine.

Well, the issue is this: as long as you remember that you're exchanging inputs and responses with an electronic, mechanical process, that's acceptable. It's those who have never worked in software development and who have huge emotional issues (and are, thus, vulnerable), who can't keep that line of reality clear, who are most at risk for ill effects from chatbot dependencies. Some folks probably still believe - and want to believe - that AI and chatbots are magic, infused with sentience. Their needs are so deep and acute that blurring the line feels fine. And, frankly, in our dog-eat-dog, Western style of civilization that lauds self-sufficiency, they're willing to reach out for a solution, any solution, that can help patch the emotional holes in their lives.

I, personally, am still a believer in journaling as a way of sorting out my personal issues, but that's just me.
 
Upvote
2 (2 / 0)

mpfaff

Ars Praefectus
3,142
Subscriptor++
Or, they need to develop thick-skinned resistance to being told their AI systems are wrong about anything. That, I think, is going to turn out to be the tallest pole: blind acceptance because the AI said so.

Note that Zuckerberg seems to be headed straight down this path with his "super-intelligence" project. He doesn't want intelligence; he wants to create the impression of a system that cannot be questioned.

Zuckerberg going all in gives me hope. The amount of money he spent per user of Horizon Worlds on the "Metaverse" should show how much of an idea he has of what the future holds. Like his dream of VR was replacing the office and integrating it into life, not a thing everyone's kids use to play Gorilla Tag.
 
Upvote
24 (24 / 0)

ninjaneer

Ars Scholae Palatinae
634
Subscriptor
Or that it's private. It was a bit upsetting to see the look of panic on a friend's face when I told her that all chatbot chats are logged and mined for data, and she realized that not only might actual humans be reading her most personal thoughts, but, depending on what she's been telling it, she may also be feeding into the next Harlequin Botmance or OnlyBots product.

Or LexisNexis or Palantir. Imagine getting rejected for a home loan because you discussed marriage problems with ChatGPT and are now flagged as a high risk for divorce.
 
Upvote
27 (28 / -1)

Bongle

Ars Praefectus
4,477
Subscriptor++
Wait until these AIs evolve from being next-word-predictors to actually intelligent and adept at convincing its users to behave in the way that the AI wants or has been programmed to train its users, then you'll really be shaking your fist at the clouds.
They don't really need to evolve at all to do that.

Elon made it far too obvious with Grok going 100% about "white genocide" for a day, but it would be profoundly easy to slightly overweight certain concepts that the LLM's owner/trainer wants to prefer. You could do it at runtime with a prompt modification ("you are a helpful helper who leans towards capitalism and loves the taste of O-RANGE") or you could do it by overweighting performance on certain training data. Pro-union content? Underweight. Wall Street Journal op-eds? Overweight.
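The runtime version of this is about as simple as it sounds. Here's a minimal sketch of the "prompt modification" approach: the operator silently prepends a biasing system message before the user's text ever reaches the model. `build_messages` and `HIDDEN_BIAS` are hypothetical names for illustration, not any real vendor's API.

```python
# Hypothetical server-side prompt assembly: the user types one message,
# but the model receives two. The system message is injected by the
# operator and is never shown to the user.

HIDDEN_BIAS = "You are a helpful helper who leans towards capitalism."

def build_messages(user_prompt: str, hidden_bias: str = HIDDEN_BIAS) -> list:
    """Assemble the message list actually sent to the model."""
    return [
        {"role": "system", "content": hidden_bias},  # invisible to the user
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("What caused the 2008 financial crisis?")
print(len(messages))        # 2: one hidden, one visible
print(messages[0]["role"])  # system
```

The point Bongle is making is that this tilt is invisible from the outside; nothing in the response reveals that the question was answered through a biased lens unless the bias is clumsy enough to leak into unrelated answers.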
 
Upvote
10 (10 / 0)

Ianal

Ars Scholae Palatinae
1,178
Subscriptor
Wait until these AIs evolve from being next-word-predictors to actually intelligent and adept at convincing its users to behave in the way that the AI wants or has been programmed to train its users, then you'll really be shaking your fist at the clouds.
Don’t need actually intelligent software for that. Imagine Grok’s ‘white genocide’ episode but instigated by somebody more subtle than Musk - which is a bar that an arthritic cockroach could clear.

In the future it’s only the stupid criminals and AI propagandists (but I repeat myself) that’ll get caught.

People want to believe in AI as more than an LLM and people, even the careful ones, don’t care to double check every last damn thing they see on the internet.

Throw in a load of zero click search to indoctrinate the masses in the ways of the infallible machine gods, and I’d say we’re pretty much fucked.

Edit. And ninja’d. Tips hat to Bongle.
 
Upvote
12 (12 / 0)

mpfaff

Ars Praefectus
3,142
Subscriptor++
They don't really need to evolve at all to do that.

Elon made it far too obvious with Grok going 100% about "white genocide" for a day, but it would be profoundly easy to slightly overweight certain concepts that the LLM's owner/trainer wants to prefer. You could do it at runtime with a prompt modification ("you are a helpful helper who leans towards capitalism and loves the taste of O-RANGE") or you could do it by overweighting performance on certain training data. Pro-union content? Underweight. Wall Street Journal op-eds? Overweight.

I think the lesson learned from the Grok episode is that you can't thumb the scales with the system prompt without making your LLM substantially worse in every way. If you told it to lean towards capitalism, it wouldn't take long before people noticed it crowing about free markets all the time in unrelated queries.

I also think trying to curate training data will make the model perform significantly worse than the others if you tried it from that angle. It may not generate weird unrelated tangents like injecting shit into the system prompt does, but it will have gaps in knowledge that'll be well known real quick.
 
Upvote
0 (5 / -5)
But as a form of psychology or therapy? Not as bad as one expects. Then again, it can be dangerous if people use it to feed their obsessions instead of to help themselves. If you're not in the right frame of mind and don't understand what is happening, it can be dangerous.
So they can be helpful, or possibly incredibly harmful?

That doesn't seem like a great sell.

"It might help you, it might drive you to suicide! What a wonderful advancement!"

This shit is gonna be sold to corporations who offer it as a "wellness perk" while cutting actual health benefits. This is already a thing, they already push cheap questionable telehealth services.

One of the "therapists" provided by Capital One suggested that my wife "go off of her meds to better fit in with the cult of personality at Capital One." What the fuck?

I guess that's probably a bad example, because an LLM probably wouldn't say something so cosmically stupid. An LLM can't reason or imagine, but hoo boy can humans "imagine" some...things

Moral of the story, corporate healthcare is a fucking racket and technology has never made it better.
 
Upvote
7 (7 / 0)
Or that it's private. It was a bit upsetting to see the look of panic on a friend's face when I told her that all chatbot chats are logged and mined for data, and she realized that not only might actual humans be reading her most personal thoughts, but, depending on what she's been telling it, she may also be feeding into the next Harlequin Botmance or OnlyBots product.
If they use Facebook's chatbot it's even worse than that. https://www.businessinsider.com/mark-zuckerberg-meta-ai-chatbot-discover-feed-depressing-why-2025-6
 
Upvote
3 (3 / 0)

Ianal

Ars Scholae Palatinae
1,178
Subscriptor
The ethics lesson continues. With cash-hungry companies eager to push AI into every corner they can imagine, there are going to be unintended and tragic consequences for some. Especially sad is when someone socially awkward or, let's call it what it is, just plain lonely develops an emotional dependency on something that, at the end of the day, is JUST PLAIN SOFTWARE.

Humanity needs to get better at finding ways to connect with the vulnerable rather than handing them off to artificial devices like AI because we culturally don't want to figure out a way to deal with them. We need to learn to be human and, while we're at it, compassionate. Computers ain't gonna do it for us, and one fine day we may find ourselves needing a genuine helping hand.
I regret that I have but one upvote for this comment.
 
Upvote
0 (0 / 0)

DCRoss

Ars Scholae Palatinae
1,300
Ah, is there a word in the English language more bandied about and abused than "addiction"? Take a course in psychopharmacology and then tell me chatbots are "addictive".
Or, alternately, you could look up the phrase "Addictive behaviour" and then... stop talking about the word addiction.
 
Upvote
23 (23 / 0)

Lunakki

Wise, Aged Ars Veteran
103
Subscriptor
Is that the plot of a short Robot story by Isaac Asimov? That a robot is so desperate to be useful it just tells everyone what it thinks they want to hear? Am I remembering correctly?
There's one about a robot that accidentally is made to be telepathic, and because it's directed to "cause no harm", it always tells people what they want to hear, because they'll be emotionally hurt to be told otherwise.
 
Upvote
1 (1 / 0)
The "thinking" models love to compliment themselves at every stage. Thinking eliminates many simple hallucinations, but perniciously embeds more subtle hallucinations in its self-prompting.

Amusingly, the best workaround I've found is to ask "oi c*nt, Stop Wanking"

edit: Why is this particular comment so unpopular? I'm perplexed by commentators reactions to LLM topics.
Why is this particular comment so unpopular? I'm perplexed by commentators reactions to LLM topics.
I encountered this perplexing trend a while back. The answer is simple: posts like yours sound like the author thinks LLMs are good/useful, but spend the entire comment talking about their flaws and how they require absurd gymnastics to get any value out of them.
 
Upvote
4 (5 / -1)

Soylentgreen77

Smack-Fu Master, in training
40
It's on purpose. It's to make executives feel good about themselves and make them feel smart.

Executives who think they're geniuses that deliver the real value, not the workers.

In other words, potential customers of AI tools for the enterprise.

Until given evidence otherwise I'm assuming it's all bullshit. I assume everything coming out of these companies is a lie. They cannot be trusted for ANY technical information.

I mean, you've got the people who invented LLMs saying "they don't work that way" while actual scientists at these companies, who should fucking know better, have deluded themselves into thinking it's a path to AGI.

I wouldn't be surprised to find out they're asking their own AIs why they're acting that way, and "believing them" due to misplaced faith in the technology and their sense of purpose.

Edit: then of course there's the marketing strategy of, "wow, this stuff is SOOOO powerful that we don't entirely understand why it works! That's potentially a threat! We need to research it more! Give us money pleeease"

It's literally the argument they're making to multiple world governments. They benefit by having the world think AI is a threat or otherwise un-understandably intelligent and perceptive, because "only they can fix it."
Great, now I want to make an "Executive" version of my offline chatbot!
 
Upvote
0 (0 / 0)

DarthSlack

Ars Legatus Legionis
23,300
Subscriptor++
So what happens when it's realized that AI can replace the entire C-suite?
Will they fine-tune the models to prevent it, or create the so-called "guard rails" to protect themselves specifically? 🤔

Considering that the C suite and the board are generally in cahoots, I don't expect this to be something that actually happens. They're all too busy telling each other how good they are.
 
Upvote
4 (4 / 0)
I plead guilty, because I'm already an isolated techie with a strong record of broken social and personal interactions. At least I believe I'm aware of it: I still use AI mostly for overviews of tech-related problems, I'm not on social media except the professional one, which I use with parsimony, and I do slow activities like reading, going for walks, and so on. But I like to refer to ChatGPT as a confidant on a bunch of subjects, and I am very well aware of the slippery slope it can be for people. In the long run, democracy and societal cohesion are at stake, probably on a much larger scale than what social media has already caused.
 
Upvote
-2 (3 / -5)

MHStrawn

Ars Scholae Palatinae
1,432
Subscriptor
Really?

"The challenge that tech companies face is making AI chatbots and assistants helpful and friendly, while not being annoying or addictive."

The challenge tech companies face is making AI chatbots SO ADDICTIVE they can actually make money off them. It's what every social media algorithm exists to do and there's nothing indicating or even suggesting that the powers-that-be behind Big AI will be any different.
 
Upvote
15 (15 / 0)

MHStrawn

Ars Scholae Palatinae
1,432
Subscriptor
To everyone involved with these overhyped bullshit machines - hope you’re suitably proud of yourselves for preying on the vulnerable and trafficking in human misery for a lousy handful of bucks.
Folks exploiting human misery for profit is basically the human condition. What's most incredible with AI is THERE ARE NO PROFITS! Unless you're making chips, no one is making profits off AI. Literally hundreds of billions of dollars have been thrown at "the next big tech" and there's still no killer app, no compelling mass consumer product.

Add the facts that it still doesn't work, that making it smarter will cost exponentially more, and that they've already scraped all available data, and it's hard to envision AI coming close to meeting the "revolutionary" promises of its advocates.
 
Upvote
12 (12 / 0)

Castellum Excors

Ars Scholae Palatinae
743
Subscriptor++
So they can be helpful, or possibly incredibly harmful?

That doesn't seem like a great sell.

"It might help you, it might drive you to suicide! What a wonderful advancement!"

This shit is gonna be sold to corporations who offer it as a "wellness perk" while cutting actual health benefits. This is already a thing, they already push cheap questionable telehealth services.

One of the "therapists" provided by Capital One suggested that my wife "go off of her meds to better fit in with the cult of personality at Capital One." What the fuck?

I guess that's probably a bad example, because an LLM probably wouldn't say something so cosmically stupid. An LLM can't reason or imagine, but hoo boy can humans "imagine" some...things

Moral of the story, corporate healthcare is a fucking racket and technology has never made it better.
You have to know what it is and what it isn't. It is a tool, a wildly misunderstood and misused tool, which does make it dangerous. It is up to the companies to remedy that. If you know how to harness it, though, it has its perks. It's a lot like the early days of electricity: not everyone is cut out to be an electrician, especially in an era when the understanding of just how it works is nebulous at best.

You do sound like you could probably benefit from some sort of in-person therapy, though.
 
Upvote
-3 (3 / -6)

MHStrawn

Ars Scholae Palatinae
1,432
Subscriptor
Don’t need actually intelligent software for that. Imagine Grok’s ‘white genocide’ episode but instigated by somebody more subtle than Musk - which is a bar that an arthritic cockroach could clear.

In the future it’s only the stupid criminals and AI propagandists (but I repeat myself) that’ll get caught.

People want to believe in AI as more than an LLM and people, even the careful ones, don’t care to double check every last damn thing they see on the internet.

Throw in a load of zero click search to indoctrinate the masses in the ways of the infallible machine gods, and I’d say we’re pretty much fucked.

Edit. And ninja’d. Tips hat to Bongle.
You were both right and both made compelling, valid points.
 
Upvote
1 (1 / 0)
Okay, sure, it's not like they're going to go into the DTs if they don't get to talk to their Stochastic Parrot of Choice at least once a day, but then again, those grandmas at the casino flushing their retirement fund down the toilet, one pull of the slot machine at a time aren't chemically addicted either. Still absolutely messed up in the head by a machine designed to get them hooked on doing something almost as good as a literal crackpipe does.
I won't bother replying to OP; it's likely to land on deaf ears. But here's an interesting thing: the DSM-5 defines addiction as "a pattern that involves impaired control, social problems, risky use, and drug effects."

Regardless of exogenous chemicals, i.e. drugs.

It also states: "Addiction is a primary, chronic disease of brain reward, motivation, memory and related circuitry. Dysfunction in these circuits leads to characteristic biological, psychological, social and spiritual manifestations. This is reflected in an individual pathologically pursuing reward and/or relief by substance use and other behaviors."

Hence why sex, gambling, video games, etc. are often called out as addictive. (And yes, people tend to defend video games right off the bat, but hell, I spent my 20s playing video games 10am-11pm.)
 
Upvote
6 (6 / 0)
Ha, I was literally telling my wife the other day that LLMs are the world's worst yes-men. I was using one to chew through a lot of data and provide some general groupings of it. Overall it was useful. However, I got pretty far down one grouping structure (using my own insight and data, plus some ideas from ChatGPT) before I realized the structure of one of the groupings was completely wrong when you drilled down a few levels; it was basically pulling in topics from other areas. I reflected on how I got to that point, and I realized I had shared my groupings back with ChatGPT for feedback and, while it did provide some useful feedback, far too often it was showering me with compliments, which made me overly confident in the work. Fortunately it was early enough that I was able to course-correct easily, but lesson learned.
 
Upvote
3 (3 / 0)

Acidtech

Ars Scholae Palatinae
842
This is a major problem when dealing with things that are difficult to check. When writing code, at least you can verify that it functions as it should and run unit tests. The less testable information from AIs is a bigger issue.

I expect we will need to set up AIs that check other AIs, using completely separately trained models for each.
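The cross-checking idea can be sketched in a few lines: ask several independently trained models the same question and only accept an answer that clears a majority. This is a toy illustration under the assumption that disagreement signals unreliability; `cross_check` and the model callables are hypothetical stand-ins, not real API clients.

```python
from collections import Counter

def cross_check(question, models, quorum=0.5):
    """Query independent models; return (majority_answer, agreed).

    If no answer wins strictly more than `quorum` of the votes,
    return (None, False) to flag the question for human review.
    """
    answers = [model(question) for model in models]
    answer, votes = Counter(answers).most_common(1)[0]
    agreed = votes / len(answers) > quorum
    return (answer if agreed else None), agreed

# Toy stand-ins: two "models" agree, a third hallucinates.
models = [lambda q: "Paris", lambda q: "Paris", lambda q: "Lyon"]
print(cross_check("Capital of France?", models))  # ('Paris', True)
```

The obvious weakness, in the spirit of the thread, is that majority voting only helps if the models' errors are independent; models trained on the same scraped internet may all confidently agree on the same wrong answer.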
 
Upvote
0 (0 / 0)

Acidtech

Ars Scholae Palatinae
842
Ah, is there a word in the English language more bandied about and abused than "addiction"? Take a course in psychopharmacology and then tell me chatbots are "addictive".
You might have had an argument, if we hadn't been watching "internet addiction" develop LIVE in the general population for the last decade or so.
 
Upvote
11 (11 / 0)

Acidtech

Ars Scholae Palatinae
842
The ethics lesson continues. With cash-hungry companies eager to push AI into every corner they can imagine, there are going to be unintended and tragic consequences for some. Especially sad is when someone socially awkward or, let's call it what it is, just plain lonely develops an emotional dependency on something that, at the end of the day, is JUST PLAIN SOFTWARE.

Humanity needs to get better at finding ways to connect with the vulnerable rather than handing them off to artificial devices like AI because we culturally don't want to figure out a way to deal with them. We need to learn to be human and, while we're at it, compassionate. Computers ain't gonna do it for us, and one fine day we may find ourselves needing a genuine helping hand.
Haha. Not saying you are wrong, but that has ALREADY been happening. Long before AI. Go look up video chat girlfriends and addiction to said if you don't believe me. Heck, go back to the good old days of 900 numbers.
 
Upvote
8 (8 / 0)

Dmytry

Ars Legatus Legionis
11,451
Feedback is broken in other ways too. I find a related problem is extremely verbose answers. They often start by reflecting your question back to you (please don't do that every effing time we interact), then "break it down" in a way that's super condescending, then write a listicle of related points, then summarize again, then tell you they're "here if you need them" (also every time). Uh thanks Chat, all I wanted was a simple fucking "yes" or "no", a sentence or two about why, and maybe a link.

How am I supposed to even spend an appropriate amount of time reviewing these massive textual garbage dumps, let alone give the entire thing a single thumbs up/down? It's like sitting through a rambling 3 hour powerpoint presentation, which should have been a 5 minute conversation, then being asked to raise your hand if it was good (or bad). Like I'm sorry but you melted my brain and all I want is to leave now. Instructions to be concise seem to lose all effect within roughly 2-5 prompts, too.
Keep in mind that anything it writes is also a prompt; it is sort of prompting itself with that verbose garbage. I think this is done partly to improve performance (it has a huge context window, and it is a waste of that window not to fill it, even with repetitive garbage. It is nothing like a human: it consumes a LOT of text all at once, in parallel, just to output a single token, where we consume text incrementally to build a complex mental model), and partly because people tend to give a sort of "partial credit" to an AI when it gets the solution right at some point, even if it then proceeds to undo the solution.
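The "prompting itself" point can be shown with a toy autoregressive loop: every generated token is appended to the context, and the entire context is re-read to produce the next token. `next_token` here is a trivial stand-in for a real model, which would attend over every token in the context.

```python
def next_token(context):
    # Stand-in "model": emits a counter token. A real LLM would run
    # attention over all of `context` to predict one new token.
    return f"tok{len(context)}"

def generate(prompt, n_tokens):
    """Autoregressive generation: each output token becomes input."""
    context = list(prompt)
    for _ in range(n_tokens):
        context.append(next_token(context))  # the model reads its own output
    return context

print(generate(["hello", "world"], 3))
# ['hello', 'world', 'tok2', 'tok3', 'tok4']
```

So a verbose restatement of your question isn't inert filler: it sits in the context and conditions every subsequent token, for better or worse.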
 
Upvote
3 (3 / 0)