> Is that the plot of a short Robot story by Isaac Asimov? That a robot is so desperate to be useful it just tells everyone what it thinks they want to hear? Am I remembering correctly?

I believe the story is "Liar!".
So far it is the user's responsibility to use these tools safely. I'll be interested to see how the liability model evolves. I fully expect that AI companies will continue to take no responsibility whatsoever for their products. Perhaps we will get to the equivalent of pharma ads: "Ask your AI expert if Ralph134.5 is right for you." Potential side effects may include: addiction, depression, social estrangement, suicide...
> I'm sure the likes of Facebook and Twitter want automated engagement engines. And the ChatGPT public-facing personality. But some of the other competitors in this market seem all-in on AI agent employees, because that's how they envision multibillion-dollar revenues in a few years. To make that work, they need to achieve some level of reliability, at least to the point that they won't cause legal liability.

Or, they need to develop thick-skinned resistance to being told their AI systems are wrong about anything. That, I think, is going to turn out to be the tallest pole: blind acceptance because the AI said so.
> Ah, is there a word in the English language more bandied about and abused than "addiction"? Take a course in psychopharmacology and then tell me chatbots are "addictive".

Okay, sure, it's not like they're going to go into the DTs if they don't get to talk to their Stochastic Parrot of Choice at least once a day. But then again, those grandmas at the casino flushing their retirement fund down the toilet, one pull of the slot machine at a time, aren't chemically addicted either. They're still absolutely messed up in the head by a machine designed to get them hooked, one that does it almost as well as a literal crack pipe.
It’s depressing how many people on r/chatGPT think using LLMs as therapists is totally fine.
Or, they need to develop thick-skinned resistance to being told their AI systems are wrong about anything. That, I think, is going to turn out to be the tallest pole: blind acceptance because the AI said so.
Note that Zuckerberg seems to be headed straight down this path with his "super-intelligence" project. He doesn't want intelligence; he wants to create the impression of a system that cannot be questioned.
Or that it's private. It was a bit upsetting to see the look of panic on a friend's face when I told her that all chatbot chats were logged and mined for data, and she realized that not only might actual humans be reading her most personal thoughts, but, depending on what she's been telling it, she may also be feeding into the next Harlequin Botmance or OnlyBots product.
> Wait until these AIs evolve from being next-word-predictors to actually intelligent and adept at convincing its users to behave in the way that the AI wants or has been programmed to train its users, then you'll really be shaking your fist at the clouds.

They don't really need to evolve at all to do that.
> Wait until these AIs evolve from being next-word-predictors to actually intelligent and adept at convincing its users to behave in the way that the AI wants or has been programmed to train its users, then you'll really be shaking your fist at the clouds.

Don’t need actually intelligent software for that. Imagine Grok’s ‘white genocide’ episode but instigated by somebody more subtle than Musk - which is a bar that an arthritic cockroach could clear.
They don't really need to evolve at all to do that.
Elon made it far too obvious with Grok going 100% about "white genocide" for a day, but it would be profoundly easy to slightly overweight certain concepts that the LLM's owner/trainer wants to prefer. You could do it at runtime with a prompt modification ("you are a helpful helper who leans towards capitalism and loves the taste of O-RANGE") or you could do it by overweighting performance on certain training data. Pro-union content? Underweight. Wall street journal op-eds? Overweight.
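The training-data side of this is easy to picture: a minimal sketch of per-source weighted sampling for a fine-tuning corpus. The source names, texts, and weights here are entirely hypothetical; this is a toy sampler, not any real training pipeline.

```python
import random

# Toy corpus: (source, text) pairs. Names and texts are made up.
corpus = [
    ("wsj_oped",   "Markets reward efficiency."),
    ("union_blog", "Collective bargaining wins."),
]

# Hypothetical per-source weights: > 1.0 over-represents a source,
# < 1.0 under-represents it, without ever excluding it outright.
weights = {"wsj_oped": 3.0, "union_blog": 0.2}

def sample_batch(corpus, weights, k, seed=0):
    """Draw k training examples, biased by per-source weight."""
    rng = random.Random(seed)
    texts = [text for _, text in corpus]
    w = [weights[src] for src, _ in corpus]
    return rng.choices(texts, weights=w, k=k)

batch = sample_batch(corpus, weights, k=1000)
# The overweighted source quietly dominates what the model sees.
share = batch.count("Markets reward efficiency.") / len(batch)
```

With these weights the overweighted source ends up around 94% of the batch, which is the point: nothing is censored, the mix is just tilted, and that's much harder to spot from the outside than a Grok-style system-prompt blunder.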
> But as a form of psychology or therapy? Not as bad as one expects. But then again, it can be dangerous if people use it to obsess with them, instead of helping them. If you're not in the right frame of mind and do not understand what is happening, it can be dangerous.

So they can be helpful, or possibly incredibly harmful?
> Or that it's private. It was a bit upsetting to see the look of panic on a friend's face when I told her that all chatbot chats were logged and mined for data and she realized that not only may actual humans be reading her most personal thoughts but depending on what she's been telling it she may also be feeding into the next Harlequin Botmance or OnlyBots product.

If they use Facebook's chatbot it's even worse than that. https://www.businessinsider.com/mark-zuckerberg-meta-ai-chatbot-discover-feed-depressing-why-2025-6
> And it certainly explains why LLMs are so popular in C-Suites.

So what happens when it's realized AI can replace the entire C-Suite?
> The ethics lesson continues. With cash-hungry companies eager to push AI into every corner they can imagine, there are going to be unintended and tragic consequences for some. Especially sad is when someone socially awkward or, let's call it what it is, just plain lonely develops an emotional dependency on something that, at the end of the day, is JUST PLAIN SOFTWARE.
>
> Humanity needs to get better at finding ways to connect with the vulnerable rather than handing them off to artificial devices like AI because we culturally don't want to figure out a way to deal with them. We need to learn to be human and, while we're at it, compassionate. Computers ain't gonna do it for us, and one fine day we may find ourselves needing a genuine helping hand.

I regret that I have but one upvote for this comment.
> Ah, is there a word in the English language more bandied about and abused than "addiction"? Take a course in psychopharmacology and then tell me chatbots are "addictive".

Or, alternately, you could look up the phrase "addictive behaviour" and then... stop talking about the word addiction.
> Is that the plot of a short Robot story by Isaac Asimov? That a robot is so desperate to be useful it just tells everyone what it thinks they want to hear? Am I remembering correctly?

There's one about a robot that is accidentally made telepathic, and because it's directed to "cause no harm", it always tells people what they want to hear, because they'll be emotionally hurt to be told otherwise.
> I can't wait for the swing the other way to get a "Chandler" sarcastic AI.

Could this intelligence be any more artificial?
The "thinking" models love to compliment themselves at every stage. Thinking eliminates many simple hallucinations, but perniciously embeds more subtle hallucinations in their self-prompting.
Amusingly, the best workaround I've found is to ask "oi c*nt, Stop Wanking"
Edit: Why is this particular comment so unpopular? I'm perplexed by commentators' reactions to LLM topics.
> Why is this particular comment so unpopular? I'm perplexed by commentators' reactions to LLM topics.

I encountered this perplexing trend a while back. The answer is simple: posts like yours sound like the author thinks LLMs are good/useful, but spend the entire comment talking about their flaws and how they require absurd gymnastics to get any value out of them.
> It's on purpose. It's to make executives feel good about themselves and make them feel smart.

Great, now I want to make an "Executive" version of my offline chatbot!
Executives who think they're geniuses that deliver the real value, not the workers.
In other words, potential customers of AI tools for the enterprise.
Until given evidence otherwise I'm assuming it's all bullshit. I assume everything coming out of these companies is a lie. They cannot be trusted for ANY technical information.
I mean, you've got the people who invented LLMs saying "they don't work that way," while actual scientists at these companies, who should fucking know better, have deluded themselves into thinking it's a path to AGI.
I wouldn't be surprised to find out they're asking their own AIs why they're acting that way, and "believing them" due to misplaced faith in the technology and their sense of purpose.
Edit: then of course there's the marketing strategy of, "wow, this stuff is SOOOO powerful that we don't entirely understand why it works! That's potentially a threat! We need to research it more! Give us money pleeease"
It's literally the argument they're making to multiple world governments. They benefit by having the world think AI is a threat or otherwise un-understandably intelligent and perceptive, because "only they can fix it."
So what happens when it's realized AI can replace the entire C-Suite?
Will they fine-tune the models to prevent it, or create the so-called 'guard rails' to protect them specifically?
> To everyone involved with these overhyped bullshit machines - hope you’re suitably proud of yourselves for preying on the vulnerable and trafficking in human misery for a lousy handful of bucks.

Folks exploiting human misery for profit is basically the human condition. The most incredible thing with AI is THERE ARE NO PROFITS! Unless you're making chips, no one is making profits off AI. Literally hundreds of billions of dollars have been thrown at "the next big tech" and there's still no killer app, no compelling mass consumer product.
> So they can be helpful, or possibly incredibly harmful?

You have to know what it is and what it isn't. It is a tool, a wildly misunderstood and misused tool, which does make it dangerous. It is up to the companies to remedy that. If you know how to harness it, it has its perks, though. It's a lot like the early days of electricity: not everyone is cut out to be an electrician, especially in the days when the understanding of just how it works is nebulous at best.
That doesn't seem like a great sell.
"It might help you, it might drive you to suicide! What a wonderful advancement!"
This shit is gonna be sold to corporations who offer it as a "wellness perk" while cutting actual health benefits. This is already a thing: they already push cheap, questionable telehealth services.
One of the "therapists" provided by Capital One suggested that my wife "go off of her meds to better fit in with the cult of personality at Capital One." What the fuck?
I guess that's probably a bad example, because an LLM probably wouldn't say something so cosmically stupid. An LLM can't reason or imagine, but hoo boy can humans "imagine" some...things
Moral of the story, corporate healthcare is a fucking racket and technology has never made it better.
> Don’t need actually intelligent software for that. Imagine Grok’s ‘white genocide’ episode but instigated by somebody more subtle than Musk - which is a bar that an arthritic cockroach could clear.

You were both right and both made compelling, valid points.
In the future it’s only the stupid criminals and AI propagandists (but I repeat myself) that’ll get caught.
People want to believe in AI as more than an LLM and people, even the careful ones, don’t care to double check every last damn thing they see on the internet.
Throw in a load of zero click search to indoctrinate the masses in the ways of the infallible machine gods, and I’d say we’re pretty much fucked.
Edit. And ninja’d. Tips hat to Bongle.
> Okay, sure, it's not like they're going to go into the DTs if they don't get to talk to their Stochastic Parrot of Choice at least once a day, but then again, those grandmas at the casino flushing their retirement fund down the toilet, one pull of the slot machine at a time aren't chemically addicted either. Still absolutely messed up in the head by a machine designed to get them hooked on doing something almost as good as a literal crackpipe does.

I won't bother replying, OP. Likely to land on deaf ears. But it's an interesting thing that the DSM-5 defines addiction as "a pattern that involves impaired control, social problems, risky use, and drug effects."
> Ah, is there a word in the English language more bandied about and abused than "addiction"? Take a course in psychopharmacology and then tell me chatbots are "addictive".

You may have had an argument, if we hadn't been watching "internet addiction" developing LIVE in the general population for the last decade or so.
> The ethics lesson continues. With cash-hungry companies eager to push AI into every corner they can imagine, there are going to be unintended and tragic consequences for some. Especially sad is when someone socially awkward or, let's call it what it is, just plain lonely develops an emotional dependency on something that, at the end of the day, is JUST PLAIN SOFTWARE.
>
> Humanity needs to get better at finding ways to connect with the vulnerable rather than handing them off to artificial devices like AI because we culturally don't want to figure out a way to deal with them. We need to learn to be human and, while we're at it, compassionate. Computers ain't gonna do it for us, and one fine day we may find ourselves needing a genuine helping hand.

Haha. Not saying you are wrong, but that has ALREADY been happening, long before AI. Go look up video chat girlfriends and addiction to said if you don't believe me. Heck, go back to the good old days of 900 numbers.
> Feedback is broken in other ways too. I find a related problem is extremely verbose answers. They often start by reflecting your question back to you (please don't do that every effing time we interact), then "break it down" in a way that's super condescending, then write a listicle of related points, then summarize again, then tell you they're "here if you need them" (also every time). Uh thanks Chat, all I wanted was a simple fucking "yes" or "no", a sentence or two about why, and maybe a link.
>
> How am I supposed to even spend an appropriate amount of time reviewing these massive textual garbage dumps, let alone give the entire thing a single thumbs up/down? It's like sitting through a rambling 3 hour powerpoint presentation, which should have been a 5 minute conversation, then being asked to raise your hand if it was good (or bad). Like I'm sorry but you melted my brain and all I want is to leave now. Instructions to be concise seem to lose all effect within roughly 2-5 prompts, too.

Keep in mind that anything it writes is also a prompt; it is sort of prompting itself with that verbose garbage. I think this is done in part to improve performance (it has a huge context window, and it is a waste of context window not to fill it with repetitive garbage; it is nothing like a human, since it consumes a LOT of text all at once, in parallel, just to output a single token, whereas we consume text incrementally to build a complex mental model), and in part because people tend to give a sort of "partial credit" to AI when it gets the solution right at some point - even if it then proceeds to undo the solution.
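The "anything it writes is also a prompt" point is just autoregression: each generated token is appended to the context that conditions the next one. A minimal sketch, where `toy_model` is a hypothetical stand-in rule, not a real LLM:

```python
def toy_model(context):
    """Stand-in next-token rule. The real point: an actual model re-reads
    the ENTIRE context window, including its own prior output, to produce
    each single next token."""
    if context[-1] == "q?":
        return "answer"
    return "filler"  # output begets more output: verbosity self-prompts

def generate(prompt_tokens, steps):
    context = list(prompt_tokens)
    for _ in range(steps):
        context.append(toy_model(context))  # its own output joins the prompt
    return context

out = generate(["q?"], steps=4)
# out == ["q?", "answer", "filler", "filler", "filler"]
```

Once the toy model emits one filler token, every later step is conditioned on that filler, so it keeps producing more, which is roughly the self-reinforcing verbosity the comment describes.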