AI chatbots tell users what they want to hear, and that’s problematic

Status
You're currently viewing only pemmet's posts. Click here to go back to viewing the entire thread.
From the Article:

AI language models do not “think” in the way humans do because they work by generating the next likely word in the sentence.

I absolutely love this article for this sentence even if it's under-emphasized (in my opinion, that is).

Any answer given by an LLM is not given because the model has confidence in being right/correct, nor does it have a rationale justifying the assertion it gives.

An LLM answer is given because the model's statistics indicate it's the answer you're most likely to accept as real. It's not giving you answers, it's stringing together words you will likely ACCEPT.
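To make that concrete, here's a minimal sketch of next-token generation. The model and its probabilities are entirely made up for illustration; real LLMs operate over huge learned distributions, but the mechanism is the same: at each step, pick a continuation by likelihood, with no notion of truth anywhere in the loop.

```python
import random

# Toy "language model": for each context word, an invented probability
# distribution over the next token. No fact-checking exists anywhere here.
NEXT_TOKEN_PROBS = {
    "the":    {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat":    {"sat": 0.6, "ran": 0.4},
    "dog":    {"barked": 0.7, "sat": 0.3},
    "moon":   {"rose": 1.0},
    "sat":    {".": 1.0},
    "ran":    {".": 1.0},
    "barked": {".": 1.0},
    "rose":   {".": 1.0},
}

def next_token(context, greedy=True, rng=None):
    """Pick the next token: the single most probable one (greedy),
    or a random draw weighted by probability (sampling)."""
    probs = NEXT_TOKEN_PROBS[context]
    if greedy:
        return max(probs, key=probs.get)
    rng = rng or random.Random()
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(start, max_len=10):
    """String tokens together until '.' appears: the output is whatever
    continuation is statistically likely, not whatever is correct."""
    out = [start]
    while out[-1] != "." and len(out) < max_len:
        out.append(next_token(out[-1]))
    return " ".join(out)

print(generate("the"))  # greedily follows the highest-probability path
```

Nothing in `generate` ever asks "is this true?"; it only asks "what usually comes next?", which is exactly the point the article is making.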

Worse still, this implies that our own ignorance, biases, and presumptions are by definition the things we're most likely to affirm to an LLM as a 'good answer'.

Surely nobody would be foolish enough to rely on such things for thoughtful, fact-based insight...