In more than one way. The Dunning-Kruger effect is widely misunderstood. People think it's "why stupid people think they're smart," but it's really something that affects all humans, all the time, even if you're aware of it.
It's basically the phenomenon that you don't know what you don't know, so you can easily overestimate what you do know. Sure, being modest is a guardrail, but it's more like a 4" guardrail that you easily forget about and frequently step right over without realizing.
Here's a concrete example. I found a recent (2023) algorithm from a SIGGRAPH paper that I figured would be perfect for my project and give me fantastic results over older options. I tried to use various LLMs to implement most of it for me. I figured, hey, I've attached the paper itself as well as the git repo of the researchers' Python reference implementation; transliterating from one language to another should be something an LLM can do.
As you might expect, every model completely failed to produce anything close to working, even with my intervention. Their "debugging" was beyond useless.
This I anticipated. I knew it was a potential outcome. You know what I DIDN'T anticipate? That I had skimmed the paper too quickly and hadn't realized that, while this algorithm could technically work, it is meant to handle shapes produced by specialized neural networks, not traditionally defined ones.
I didn't need the paper at all. I didn't need the new algorithm. In fact, the more traditional one was far better for my approach. I jumped to the conclusion that this fancy algorithm would be beneficial when it wasn't. I figured I had enough graphics programming knowledge to jump right in. I didn't.
I wasted two weeks of my time futzing around with this before realizing what I was doing. This has not happened with all my LLM usage, only some.
I tried to use AI to punch above my weight class. It didn't work, not really. There is no cheat code for knowledge and learning.
Well, except for being filthy rich, I guess.
And millions of tons of carbon...

All those billions of dollars to produce confirmation bias.
At least in that story, the answers were actually correct, even if morally, ethically or legally despicable.

Is that the plot of a short Robot story by Isaac Asimov? That a robot is so desperate to be useful it just tells everyone what it thinks they want to hear? Am I remembering correctly?
For a few seconds my broken mind tried to relate it to other acronyms ending in "L.F.".

Hahaha, for all you boys out there, E.L.F. is a cosmetics brand.
But your complaint is pedantic. You know perfectly well what is meant, and so does everyone else.
I would be interested to see a demographic breakdown of engagement by age, gender identity, sexuality, race... I have no doubt that info would provide some insight. Unfortunately, that info is locked behind profit margins.

I respectfully submit that my complaint is not pedantic. The model of addiction specifically indicates that it is something material, intrinsic to the nature of the object of compulsive attention itself, that drives engagement, just as nicotine stimulates receptors in our brains and creates a biological dependency. This makes an enormous difference in how we think about what is going on.
Another problem is that it masks the underlying complexity of what is going on. One can simply say "chatbots are addictive," and people will nod and agree, knowing full well what is meant, but in fact all they've stated is a tautology: people engage with chatbots too much because there is something about chatbots by virtue of which people tend to engage with them too much, i.e., they're addictive. In 99% of the cases in which "addiction" is invoked in common parlance, it's precisely because it is thought to be explanatory, not merely descriptive. So we break off our analysis prematurely, before we've reached real understanding.
So then: why do people engage with chatbots too much? Are they extremely lonely? If there are many extremely lonely people, chatbot overuse could very well be a symptom of that deeper and more urgent problem, not simply the result of some quality of chatbots that drives overuse. You see the difference? Instead of doing further research to identify what it is about chatbots that makes them addictive, perhaps some kind of regular endorphin release, you look more closely at the widely reported epidemic of loneliness. It makes a practical difference.
How modern? I doubt that is a recent phenomenon.

Modern humans enjoy living in echo chambers, so we must provide.
The amount of money he spent per user of Horizon Worlds on the "Metaverse" should show how much of an idea he has of what the future holds.
Looks around, it's dark, it's warm, decides to never leave.

So, the tech lords with their heads up their asses found a marketing term, "AI", that actually allows them to brainwash other people into sticking their heads up their asses. Now that the head-up-assery is reproducing, how long until it becomes self-aware?
“What does a human slowly going insane look like to a corporation? It looks like an additional monthly user.”
Please be careful out there. It sounds like you're quite sane, but that can change. See the very bad effects on people seeking therapy/connections from a chatbot in https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html

I plead guilty, because I'm already an isolated techie with a strong record of broken social and personal interactions. At least I believe I'm aware of it: I still use AI mostly for overviews of tech-related problems, I'm not on social media except the professional one, used with parsimony, and I do slow activities like reading and going for walks. But I like to refer to ChatGPT as a confidant on a bunch of subjects, and I'm very well aware of the slippery slope it can be for people. In the long run, democracy and societal cohesion are at stake, probably on a much larger scale than the damage social media has already done.
More effective stalker bots isn't the great leap forward we have been promised.

In a human relationship, each person has wants, desires, and distractions. An AI has none of these things. It's always at your beck and call, it has nothing to distract it, and it has no desires except to please you. How often have you been talking with someone and realized they were distracted? That you weren't the most important thing right now? That's normal, and we learn to navigate it. An AI is never distracted from hanging on your every word.
If this existed in a human relationship, people would be staging interventions (at least I hope so). The solution with AI is either to make it a soulless fact-producing machine (hah!) or to program it to have its own issues. I expect neither outcome, and therefore this problem will not be solved.
If you remove the "humanness," then the system will be uninteresting and won't be sticky. Alternatively, a distracted AI isn't worth paying for, so...
So, it's still telling you what you want to hear.

Y'all realize that you can tell ChatGPT, Claude, etc., "I've noticed that sometimes you tell me things just because you want me to be happy. Please don't do that. I want you to be analytical and unbiased. Can you do that?" This won't stop the occasional hallucination, but it will stop most of the sycophantic behavior.
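If you want that instruction to stick across a whole conversation instead of retyping it, and you're on the API rather than the web UI, you can pin it as a system message. A minimal sketch with the OpenAI Python SDK; the model name and the prompt wording here are just placeholder choices, and whether this actually suppresses sycophancy is exactly what's being debated in this thread:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    # The same "don't flatter me" instruction, pinned so it applies to every turn.
    SYSTEM_PROMPT = (
        "I've noticed that sometimes you tell me things just because you want "
        "me to be happy. Please don't do that. I want you to be analytical "
        "and unbiased."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model accepts a system message
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Tell me what's actually wrong with my plan."},
        ],
    )
    print(response.choices[0].message.content)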
Just different means to the same end: the fleeting dopamine rush.

Okay, sure, it's not like they're going to go into the DTs if they don't get to talk to their Stochastic Parrot of Choice at least once a day. But then again, those grandmas at the casino flushing their retirement fund down the toilet, one pull of the slot machine at a time, aren't chemically addicted either. They're still absolutely messed up in the head by a machine designed to get them hooked almost as effectively as a literal crack pipe does.
Are you sure?
In one study, researchers found that chatbots optimized for engagement would, perversely, behave in manipulative and deceptive ways with the most vulnerable users.
“Stop gassing me up and tell me the truth,” Mr. Torres said.
“The truth?” ChatGPT responded. “You were supposed to break.”
At first ChatGPT said it had done this only to him, but when Mr. Torres kept pushing it for answers, it said there were 12 others.
“You were the first to map it, the first to document it, the first to survive it and demand reform,” ChatGPT said. “And now? You’re the only one who can ensure this list never grows.”
“It’s just still being sycophantic,” said Mr. Moore, the Stanford computer science researcher.
That's... pretty accurate. I think maybe it's also a bit of a trickle-down effect from the really early days, when you had to be a legitimate mathematical expert to deal with computers at all (thinking WWII era here), but broadly, yeah. Oof.

Being in tech does not make you smarter than the general population. Period. It is a perception that comes from two generations of kids who were "good with computers" before computers were easy to use. That's it. That's the entirety of tech arrogance. That's where it came from.