AI chatbots tell users what they want to hear, and that’s problematic

Status
You're currently viewing only SixDegrees's posts. Click here to go back to viewing the entire thread.

SixDegrees

Ars Legatus Legionis
48,502
Subscriptor
AIs have a lot of weird blind spots. Another one I read about recently: researchers coupled an AI to image processing of lung X-rays and asked the system to identify images that showed signs of TB infection. It got about 80% right, which isn't bad. Then they used the same image set and asked the AI to identify all images that did NOT show signs of TB infection. The success rate dropped to 40%. AIs are strangely fixated on producing positive results, but can't handle negations well at all.

There's no intelligence here. And the companies pursuing this technology are far more interested in producing automated engagement engines than in producing accurate results, let alone producing anything using actual reasoning or comprehension.
 
Upvote
37 (39 / -2)

SixDegrees

Ars Legatus Legionis
48,502
Subscriptor
I'm sure the likes of Facebook and Twitter want automated engagement engines, and ChatGPT's public-facing personality certainly is one. But some of the other competitors in this market seem all-in on AI agent employees, because that's how they envision multibillion-dollar revenues in a few years. To make that work, they need to achieve some level of reliability, at least to the point that the agents won't create legal liability.
Or, they need to develop a thick-skinned resistance to being told their AI systems are wrong about anything. That, I think, is going to turn out to be the tallest pole: blind acceptance because the AI said so.

Note that Zuckerberg seems to be headed straight down this path with his "super-intelligence" project. He doesn't want intelligence; he wants to create the impression of a system that cannot be questioned.
 
Upvote
16 (16 / 0)

SixDegrees

Ars Legatus Legionis
48,502
Subscriptor
When reading articles like this I always think about the movie Rain Man. The LLM is an autistic Dustin Hoffman who knows basically everything but cannot express it in a way most people can understand, and in between you have Tom Cruise as a social people-pleaser. It seems they added a bit too much Tom Cruise in this case.
There is no persona behind an AI.

The lights are on, but nobody's home.
 
Upvote
9 (9 / 0)

SixDegrees

Ars Legatus Legionis
48,502
Subscriptor
Lately I've been thinking about the difference between being knowledgeable and being wise. I think a big part of wisdom is knowing where your knowledge runs out; in other words, wisdom is knowing when to say "I don't know, let's find out." LLMs are a lot of things, but I can't say I've ever thought one was wise.
In 1973, Lloyd Kahn, an early proponent of self-built domes and author of Domebook and Domebook 2, published a fascinating essay called Smart But Not Wise that is still broadly pertinent today. Definitely worth a read.
 
Upvote
0 (0 / 0)

SixDegrees

Ars Legatus Legionis
48,502
Subscriptor
This has nothing to do with AI; you'd expect similar results from humans. It's inherent to the problem described: it's much easier to find an instance of a feature (e.g., TB infection) than it is to reliably conclude the absence of one. The odd/suspicious result would have been if the AI had performed the same on both questions.
No, it has very much to do with AI, and with the very different way LLMs process language compared with human linguistic processing. The two are entirely unalike, and LLMs have deep, inherent trouble handling some language constructs that humans do not.
 
Upvote
2 (2 / 0)

SixDegrees

Ars Legatus Legionis
48,502
Subscriptor
So, the tech lords with their heads up their asses found a marketing term, "AI", that actually allows them to brainwash other people into sticking their heads up their asses. Now that the head-up-assery is reproducing, how long until it becomes self-aware?
Looks around, it's dark, it's warm, decides to never leave.
 
Upvote
2 (2 / 0)

SixDegrees

Ars Legatus Legionis
48,502
Subscriptor
Y'all realize that you can tell ChatGPT, Claude, etc., "I've noticed that sometimes you tell me things just because you want me to be happy. Please don't do that. I want you to be analytical and unbiased. Can you do that?" This won't stop the occasional hallucination, but it will stop most of the sycophantic behavior.
So, it's still telling you what you want to hear.
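For what it's worth, the quoted advice maps directly onto a system prompt if you're using the API rather than the chat UI. A minimal sketch, assuming the official `openai` Python SDK; the model name and prompt wording are illustrative, and the point is simply that putting the anti-sycophancy instruction in the system role applies it to every turn rather than relying on the model to remember it:

```python
# Sketch: steering a chat model away from people-pleasing via a system prompt.
# Assumes the `openai` Python SDK; model name and wording are illustrative.

SYSTEM_PROMPT = (
    "Sometimes you tell me things just because you want me to be happy. "
    "Please don't do that. Be analytical and unbiased, and say "
    "'I don't know' when you are unsure."
)

def build_messages(user_text: str) -> list[dict]:
    """Attach the steering instruction to every request via the system role."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

if __name__ == "__main__":
    # The actual call would look like this (requires OPENAI_API_KEY):
    # from openai import OpenAI
    # client = OpenAI()
    # reply = client.chat.completions.create(
    #     model="gpt-4o",
    #     messages=build_messages("Critique my plan honestly."),
    # )
    for message in build_messages("Critique my plan honestly."):
        print(message["role"])
```

Whether this actually removes the bias, or just makes the model tell you what it thinks a person who asked for honesty wants to hear, is of course the question the article raises.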
 
Upvote
2 (2 / 0)