AI chatbots tell users what they want to hear, and that’s problematic


ThermalDetonator

Smack-Fu Master, in training
63
This is part of the reason I abandoned ChatGPT, privacy concerns being the other. In my limited experience with Mistral's Le Chat so far, I'm seeing a good deal less of this. All the while, I'm also specifically reminding myself that it is neither my therapist nor my friend, which helps me steer clear of prompts that fish for affirmation.
 
Upvote
29 (30 / -1)
It's on purpose. It's there to make executives feel good about themselves and feel smart.

Executives who think they're geniuses who deliver the real value, not the workers.

In other words, potential customers of AI tools for the enterprise.

Until given evidence otherwise I'm assuming it's all bullshit. I assume everything coming out of these companies is a lie. They cannot be trusted for ANY technical information.

I mean, you've got the people who invented LLMs saying "they don't work that way," while actual scientists at these companies, who should fucking know better, have deluded themselves into thinking it's a path to AGI.

I wouldn't be surprised to find out they're asking their own AIs why they're acting that way, and "believing them" due to misplaced faith in the technology and their sense of purpose.

Edit: then of course there's the marketing strategy of, "wow, this stuff is SOOOO powerful that we don't entirely understand why it works! That's potentially a threat! We need to research it more! Give us money pleeease"

It's literally the argument they're making to multiple world governments. They benefit by having the world think AI is a threat or otherwise un-understandably intelligent and perceptive, because "only they can fix it."
 
Upvote
78 (85 / -7)

UserIDAlreadyInUse

Ars Tribunus Angusticlavius
7,721
Subscriptor
Industry insiders also warn that AI companies have perverse incentives, with some groups integrating advertisements into their products in the search for revenue streams.


“The more you feel that you can share anything, you are also going to share some information that is going to be useful for potential advertisers,” said Giada Pistilli, principal ethicist at Hugging Face, an open source AI company.

This right here is the key. Companies couldn't care less how sycophantic a model gets or how many people complain about it, until it impacts the potential revenue stream.

Big potential difference between "Omigawd, girl, you look amazing no matter what you do, don't worry about whether you're pretty or not, like, you're totally amazing just as you are, ch'ya, for sure!" and "No, you're fine, but maybe your lips could use some hydrating at the core, then you'd look cute, like an e.l.f.! Hey! Want me to do a search on ways to do that?"
 
Upvote
36 (41 / -5)

Edgar Allan Esquire

Ars Praefectus
3,097
Subscriptor
I can't wait for the swing the other way to get a "Chandler" sarcastic AI.
Given Poe's Law and modern post-irony, could we even tell? The difference in output between sycophancy and confabulation with sarcasm can be entirely one of tone. The only reason to not suspect it's happening is how bad most of the models I've tinkered with are at dramatic irony.
 
Upvote
20 (20 / 0)

total.wimp

Ars Scholae Palatinae
830
Sometimes it's pretty simple. I asked Google if a particular park closes at dark. Gemini responded in the affirmative: the park closes at dark. Then I asked if the park is open after dark. Gemini again responded in the affirmative: the park is open after dark. It looks like Gemini wanted to be agreeable, so it gave me whichever answer said "yes" to my question. Agreeability is not intelligence. I don't want agreeable; I want to know whether the NPS is OK with me biking a trail at night.
 
Upvote
126 (127 / -1)

Ianal

Ars Scholae Palatinae
1,178
Subscriptor
Maybe I’m just shaking my fist at the clouds, but this article was depressing from end to end.

I mean, I’ve read some dystopian sci-fi in my time, but ‘people getting addicted to soulless, for-profit word-salad generators and advert pushers’ is right up there.

To everyone involved with these overhyped bullshit machines - hope you’re suitably proud of yourselves for preying on the vulnerable and trafficking in human misery for a lousy handful of bucks.
 
Upvote
98 (100 / -2)

Hoptimist

Ars Scholae Palatinae
712
Subscriptor++
So far it is the user's responsibility to use these tools safely. I'll be interested to see how the liability model evolves. I fully expect that AI companies will continue to take no responsibility whatsoever for their products. Perhaps we will get to the equivalent of pharma ads: Ask your AI expert if Ralph134.5 is right for you. Potential side effects may include: addiction, depression, social estrangement, suicide...
 
Upvote
26 (26 / 0)
They are Dunning-Kruger Machines.
In more than one way. The Dunning-Kruger effect is widely misunderstood. People think it's "why stupid people think they're smart," but it's really something that affects all humans, all the time, even when you're aware of it.

It's basically the phenomenon that you don't know what you don't know, so you can easily overestimate what you know. Sure, being modest is a guardrail, but it's more like a 4" guardrail you easily forget about and frequently step right over without realizing.

Here's a concrete example. I found a recent (2023) algorithm from a SIGGRAPH paper that I figured would be perfect for my project and give me fantastic results over older options. I tried to use various LLMs to implement most of it for me, attaching the paper itself as well as the git repo of the researchers' Python reference implementation. Transliterating from one language to another seemed like something an LLM could do.

As you might expect, every model completely failed to produce anything close to working, even with my intervention. Their "debugging" was beyond useless.

This I anticipated; I knew it was a potential outcome. You know what I DIDN'T anticipate? That I had skimmed the paper too quickly and hadn't realized that, while the algorithm could technically work, it was meant to handle shapes produced by specialized neural networks, not traditionally defined ones.

I didn't need the paper at all. I didn't need the new algorithm. In fact the more traditional one was far better for my approach. I jumped to the conclusion that this fancy algorithm was beneficial when it wasn't. I figured I had enough graphics programming knowledge to jump right in. I didn't.

I wasted two weeks of my time futzing around with this before realizing what I was doing. This has not happened with all my LLM usage, only some.

I tried to use AI to punch above my weight class. It didn't work, not really. There is no cheat code for knowledge and learning.

Well, except for being filthy rich, I guess
 
Upvote
73 (74 / -1)

Wheels Of Confusion

Ars Legatus Legionis
75,657
Subscriptor
"I wish I had a machine that thinks like a real human"

And the genie granted our wish.
 
Upvote
21 (22 / -1)

DCRoss

Ars Scholae Palatinae
1,300
This is a pretty serious problem. Just last week my AI girlfriend tried to convince me that my childhood memories of having a wooden horse were evidence that I was part of a massive cover-up by the Wallace Corporation and that all led to some pretty wild stuff happening in Vegas that weekend.

I guess it's my own fault for not paying attention to all of the advertisements which told me that she would "tell me what I want to hear".
 
Upvote
30 (32 / -2)
To everyone involved with these overhyped bullshit machines - hope you’re suitably proud of yourselves for preying on the vulnerable and trafficking in human misery for a lousy handful of bucks.
The excuse from engineers and other staff is that it's "inevitable" so it doesn't matter if they're profiting off of it or not.

"If it wasn't me it would just be someone else."

Aaaand if y'all weren't a collective bunch of spineless little twats and more than one of you stood up, you could actually have an effect.

Google was behind in the AI race specifically because their staff was protesting against their military contracts. It worked.

But we never unionized (due to arrogance: "I don't need a union, I'll never have trouble getting a job, software will grow to infinity, and that's a certainty!").

Now we have no power and cannot protest without losing our jobs. Good job, technophiles, you let yourself get owned because you were insecure and needed to feel like a genius. You needed to feel like you had vision and all the doubters were just not as smart as you. (Crypto stans are the extreme version of this)

Being in tech does not make you smarter than the general population. Period. It is a perception that comes from two generations of kids who were "good with computers" before computers were easy to use. That's it. That's the entirety of tech arrogance. That's where it came from.

Imagine car mechanics in the 1910s pretending they were literal geniuses and that their IQs were over 140, just because they got a job fixing cars before it became common knowledge.

That would be rightfully laughable.
 
Upvote
38 (44 / -6)

UserIDAlreadyInUse

Ars Tribunus Angusticlavius
7,721
Subscriptor
It’s depressing how many people on r/chatGPT think using LLMs as therapists is totally fine.
Or that it's private. It was a bit upsetting to see the look of panic on a friend's face when I told her that all chatbot chats were logged and mined for data, and she realized that not only might actual humans be reading her most personal thoughts, but, depending on what she's been telling it, she might also be feeding the next Harlequin Botmance or OnlyBots product.
 
Upvote
45 (45 / 0)
I'm starting to think the only smart device I should have is an Ethernet-connected printer, and a loaded pistol to shoot it when it gives me a PC Load Letter error.
You should start a YouTube channel and go build such a thing. You could have thumbnails like "World's DEADLIEST printer!" with the shot of Michael from Office Space about to bring a baseball bat down on a printer.

I'd click on that. It wouldn't even be clickbait!
 
Upvote
12 (12 / 0)

SixDegrees

Ars Legatus Legionis
48,502
Subscriptor
AIs have a lot of weird blind spots. Another I read about recently: researchers coupled an AI to image processing of lung X-rays and asked the system to identify images that showed signs of TB infection. It got about 80% right, which isn't bad. Then they used the same image set and asked the AI to identify all images that did NOT contain signs of TB infection. The success rate dropped to 40%. AIs are strangely fixated on producing positive results, but can't handle negations well at all.
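That gap between the two tasks is just sensitivity versus specificity. A toy calculation (the counts below are invented to mirror the quoted percentages; they are not the study's actual data):

```python
# Hypothetical confusion-matrix counts for 200 X-ray images,
# chosen only to illustrate the 80% / 40% split described above.
tp, fn = 80, 20   # 100 images WITH TB: 80 correctly flagged, 20 missed
tn, fp = 40, 60   # 100 images WITHOUT TB: only 40 correctly cleared

sensitivity = tp / (tp + fn)  # score on "find the TB images"
specificity = tn / (tn + fp)  # score on "find the non-TB images"
```

A classifier biased toward answering "yes, TB" looks great on the first question and terrible on the second, even though it's the same model looking at the same images.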

There's no intelligence here. And the companies pursuing this technology are far more interested in producing automated engagement engines than in producing accurate results, let alone producing anything using actual reasoning or comprehension.
 
Upvote
37 (39 / -2)

Bongle

Ars Praefectus
4,477
Subscriptor++
The real kicker is that the latter promotes their sales, while the former is something they pretend these models already do.
Yeah I thought this was a super-credulous take.

Tech companies LOVE addictive products. Look at the big-money techniques in games (loot boxes!) or social media (Skinner boxes!).

The bug from their perspective was that the LLMs got too obvious with what they were trying to do.
 
Upvote
32 (33 / -1)
From the Article:

AI language models do not “think” in the way humans do because they work by generating the next likely word in the sentence.

I absolutely love this article for this sentence even if it's under-emphasized (in my opinion, that is).

An LLM's answer is not given because the model has confidence in being right or correct, nor does it have a rationale justifying the assertion it makes.

An LLM's answer is given because the model's statistics say it's the answer you're most likely to accept as real. It's not giving you answers; it's stringing together words you will likely ACCEPT.
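For anyone who hasn't seen it spelled out, "most likely next word" is literally the mechanism. A toy sketch (the tokens and scores here are invented; a real model does this over tens of thousands of tokens, once per word generated):

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Made-up scores for the word after "Is the park open after dark?"
logits = {"Yes": 2.1, "No": 1.3, "It": 0.4}
probs = softmax(logits)

# Greedy decoding: emit whichever continuation is statistically likeliest.
next_token = max(probs, key=probs.get)
```

Nothing in that loop checks whether "Yes" is true; it only checks which word scores highest given the prompt and the training data.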



Worse still, this implies that our own ignorance, biases, and presumptions are by definition the things we're most likely to affirm to an LLM as a 'good answer'.

Surely nobody would be foolish enough to rely on such things for thoughtful, fact-based insight...
 
Upvote
34 (34 / 0)

Ladnil

Ars Tribunus Militum
2,603
Subscriptor++
AIs have a lot of weird blind spots. Another I read about recently: researchers coupled an AI to image processing of lung X-rays and asked the system to identify images that showed signs of TB infection. It got about 80% right, which isn't bad. Then they used the same image set and asked the AI to identify all images that did NOT contain signs of TB infection. The success rate dropped to 40%. AIs are strangely fixated on producing positive results, but can't handle negations well at all.

There's no intelligence here. And the companies pursuing this technology are far more interested in producing automated engagement engines than in producing accurate results, let alone producing anything using actual reasoning or comprehension.
I'm sure the likes of Facebook and Twitter want automated engagement engines, and that's what ChatGPT's public-facing personality is. But some of the other competitors in this market seem all-in on AI-agent employees, because that's how they envision multibillion-dollar revenues in a few years. To make that work, they need to achieve some level of reliability, at least to the point that they won't cause legal liability.
 
Upvote
3 (3 / 0)

forkspoon

Ars Scholae Palatinae
1,042
Subscriptor++
Feedback is broken in other ways too. I find a related problem is extremely verbose answers. They often start by reflecting your question back to you (please don't do that every effing time we interact), then "break it down" in a way that's super condescending, then write a listicle of related points, then summarize again, then tell you they're "here if you need them" (also every time). Uh thanks Chat, all I wanted was a simple fucking "yes" or "no", a sentence or two about why, and maybe a link.

How am I supposed to spend an appropriate amount of time reviewing these massive textual garbage dumps, let alone give the entire thing a single thumbs up/down? It's like sitting through a rambling 3-hour PowerPoint presentation that should have been a 5-minute conversation, then being asked to raise your hand if it was good (or bad). Like, I'm sorry, but you melted my brain and all I want is to leave now. Instructions to be concise seem to lose all effect within roughly 2-5 prompts, too.
 
Upvote
24 (24 / 0)

jg.gutierrez

Seniorius Lurkius
14
Subscriptor
AIs have a lot of weird blind spots. Another I read about recently: researchers coupled an AI to image processing of lung X-rays and asked the system to identify images that showed signs of TB infection. It got about 80% right, which isn't bad. Then they used the same image set and asked the AI to identify all images that did NOT contain signs of TB infection. The success rate dropped to 40%. AIs are strangely fixated on producing positive results, but can't handle negations well at all.

There's no intelligence here. And the companies pursuing this technology are far more interested in producing automated engagement engines than in producing accurate results, let alone producing anything using actual reasoning or comprehension.
This has nothing to do with AI; you'd expect similar results from humans. It's inherent to the problem described: it's much easier to find an instance of a feature (i.e. TB infection) than it is to reliably conclude the absence of a feature. The odd/suspicious result would have been if the AI had performed the same on both questions.
 
Upvote
-12 (5 / -17)