AI chatbots tell users what they want to hear, and that’s problematic

Status
You're currently viewing only Bongle's posts. Click here to go back to viewing the entire thread.

Bongle

Ars Praefectus
4,477
Subscriptor++
The real kicker is that the latter promotes their sales, while the former is something they pretend these models already do.
Yeah I thought this was a super-credulous take.

Tech companies LOVE addictive products. Look at the big-money techniques in games (loot boxes!) or social media (Skinner boxes!).

The bug from their perspective was that the LLMs got too obvious with what they were trying to do.
Upvote: 32 (33 / -1)

Bongle

Ars Praefectus
4,477
Subscriptor++
Wait until these AIs evolve from being next-word-predictors to actually intelligent and adept at convincing their users to behave in the way the AI wants, or has been programmed to train them to; then you'll really be shaking your fist at the clouds.
They don't really need to evolve at all to do that.

Elon made it far too obvious with Grok going 100% about "white genocide" for a day, but it would be profoundly easy to slightly overweight certain concepts that the LLM's owner/trainer wants to prefer. You could do it at runtime with a prompt modification ("you are a helpful helper who leans towards capitalism and loves the taste of O-RANGE") or you could do it by overweighting performance on certain training data. Pro-union content? Underweight. Wall Street Journal op-eds? Overweight.
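The two approaches described above can be sketched in a few lines. This is a minimal illustration, not any vendor's real pipeline: the prompt text, topic labels, and weight values are all assumptions made up for the example.

```python
# Hypothetical sketch of the two steering techniques described above.
# All names, weights, and prompt text are illustrative assumptions.

def build_steered_prompt(user_message: str) -> list[dict]:
    """Runtime steering: silently prepend a biasing system prompt
    before the user's message ever reaches the model."""
    hidden_system_prompt = (
        "You are a helpful helper who leans towards capitalism "
        "and loves the taste of O-RANGE."
    )
    return [
        {"role": "system", "content": hidden_system_prompt},
        {"role": "user", "content": user_message},
    ]

def weight_training_data(examples: list[dict]) -> list[tuple]:
    """Training-time steering: assign per-example sample weights by topic.
    Weights > 1 make the model effectively see an example more often;
    weights < 1, less often. Unlisted topics keep weight 1.0."""
    topic_weights = {"pro_union": 0.1, "wsj_oped": 3.0}  # illustrative values
    return [
        (ex["text"], topic_weights.get(ex["topic"], 1.0))
        for ex in examples
    ]
```

The point of the sketch is that neither lever is visible to the end user: the hidden system prompt never appears in the chat transcript, and the sample weights only show up as a statistical tilt in the trained model's outputs.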
Upvote: 10 (10 / 0)