Did ChatGPT help health officials solve a weird outbreak? Maybe.


bronskrat

Wise, Aged Ars Veteran
159
That's because an LLM has no motive, and we're used to automatically guessing people's motives in any conversation. Motives don't have to be nefarious; for most of us, posting on Ars is primarily motivated by boredom and killing time, along with an interest in the subject. If someone were always posting about how Bitcoin is the future, people would similarly make assumptions about their motivations.

LLMs have no motivations, so when we naturally try to guess them, the result comes across as fake and insincere in ways that are almost baffling, because we aren't used to a conversation without a motive or any theory of mind as we know it. And of course, the LLM cannot understand your motivations and won't respond to them as we expect.

LLMs are trained on data that does have motivations, all thrown into a giant bucket and mixed together. What comes out is unpredictable, but a different perspective, even one developed this way, is still useful as long as it's taken with a grain of salt.

But to all the people who say, "it's not thinking, it's just predicting the next word": who knows whether that isn't how our own brains work, having trained on different sets of data given to us by experience?