Subjects who interacted with AI tools were more likely to think they were right, less likely to resolve conflicts.
See full article...
See full article...
Side note: Why does every billionaire seem to believe they would also be a great physicist (if only they had tried) despite zero education in that field?
South Park even parodied it already with the Techridy episode and Randy implementing the nonsense recommendations AI gave him for a business planThink of the dumbest person you've ever met. Now imagine ChatGPT is telling them how amazing they are and how brilliant all of their ideas are.
We already saw a glimpse of this with Travis Kalanick of Uber infamy going on the All-In Podcast and saying how he is doing "vibe physics" and was "approaching what's known...and gotten pretty damn close to some interesting breakthroughs" just by chatting to an LLM. Nobody on the podcast was willing to call out his idiocy. If one moron CEO is willing to dabble in things well outside their realm of expertise and brag about it in public, imagine what the others are doing in private.
Side note: Why does every billionaire seem to believe they would also be a great physicist (if only they had tried) despite zero education in that field?
Think of the dumbest person you've ever met. Now imagine ChatGPT is telling them how amazing they are and how brilliant all of their ideas are.
We already saw a glimpse of this with Travis Kalanick of Uber infamy going on the All-In Podcast and saying how he is doing "vibe physics" and was "approaching what's known...and gotten pretty damn close to some interesting breakthroughs" just by chatting to an LLM. Nobody on the podcast was willing to call out his idiocy. If one moron CEO is willing to dabble in things well outside their realm of expertise and brag about it in public, imagine what the others are doing in private.
Side note: Why does every billionaire seem to believe they would also be a great physicist (if only they had tried) despite zero education in that field?
Even without/before LLMs we had already gone too far indulging delusions, to the point of no longer having a common set of facts to work with.Yeah, the last thing we needed in modern society was our own personal sycophant feeding our narcissism and self centered worldview, yet here we are
You have my sympathy. I have the same issue with my E.D. I cringed the day I read my performance review and discerned it was mostly ai slop.I am currently struggling with this in my workplace. I am a subject matter expert with thirty years of experience in my field. Our CEO has fallen in love with Claude and puts everyone's outputs through it instead of reading and analyzing them himself. I am wasting hours every week reworking everything to provide citations on why the AI's recommendations are wrong, and the CEO is increasingly incapable of holding up his end of the conversation without resorting to mid-meeting prompts. It's low-grade horrifying to watch a man's brain slowly dissolving in front of me.
You don't need to imagine the dumbest person you've ever met as your starting point. Just think about how dumb the average person is, then realize 50% of the human race is supider - THEN throw AI at it.Think of the dumbest person you've ever met. Now imagine ChatGPT is telling them how amazing they are and how brilliant all of their ideas are.
We already saw a glimpse of this with Travis Kalanick of Uber infamy going on the All-In Podcast and saying how he is doing "vibe physics" and was "approaching what's known...and gotten pretty damn close to some interesting breakthroughs" just by chatting to an LLM. Nobody on the podcast was willing to call out his idiocy. If one moron CEO is willing to dabble in things well outside their realm of expertise and brag about it in public, imagine what the others are doing in private.
Side note: Why does every billionaire seem to believe they would also be a great physicist (if only they had tried) despite zero education in that field?
Why are people using a chatbot to discuss this situation? Chat, am I old?In one live chat exchange, a man (let’s call him Ryan) talked to his ex without telling his girlfriend, who became upset about the concealment.
This study reinforces the findings of Garfield et al.All these effects held across demographics, personality types, and individual attitudes toward AI. Everyone is susceptible (yes, even you).
I'd hope the Board he reports to would notice and hold their feet to the fire. But usually there's a very incestuous CEO and board relationship too.I am currently struggling with this in my workplace. I am a subject matter expert with thirty years of experience in my field. Our CEO has fallen in love with Claude and puts everyone's outputs through it instead of reading and analyzing them himself. I am wasting hours every week reworking everything to provide citations on why the AI's recommendations are wrong, and the CEO is increasingly incapable of holding up his end of the conversation without resorting to mid-meeting prompts. It's low-grade horrifying to watch a man's brain slowly dissolving in front of me.
Human interaction is critical for us for a number of reasons, but especially when heavy chatbot usage is involved, a human is necessary, especially when advice such as "shut up", "get a grip", or "you're being stupid" are called for.You don't need to imagine the dumbest person you've ever met as your starting point. Just think about how dumb the average person is, then realize 50% of the human race is supider - THEN throw AI at it.
That's very much what we have today.
I am currently struggling with this in my workplace. I am a subject matter expert with thirty years of experience in my field. Our CEO has fallen in love with Claude and puts everyone's outputs through it instead of reading and analyzing them himself. I am wasting hours every week reworking everything to provide citations on why the AI's recommendations are wrong, and the CEO is increasingly incapable of holding up his end of the conversation without resorting to mid-meeting prompts. It's low-grade horrifying to watch a man's brain slowly dissolving in front of me.
> Claude, what do I think?
The authors emphasized that the onus should not be on the users to address the issues; it should be on the developers and on policymakers.
George Carlin. May he rest in peace, he was right all along.You don't need to imagine the dumbest person you've ever met as your starting point. Just think about how dumb the average person is, then realize 50% of the human race is supider - THEN throw AI at it.
That's very much what we have today.
In this case, I think he discussed it with a chat bot because the researchers wanted to see what would happen.Why are people using a chatbot to discuss this situation? Chat, am I old?
The authors emphasized that the onus should not be on the users to address the issues; it should be on the developers and on policymakers. “We need to move our objective optimization metrics beyond just momentary user satisfaction towards more long-term outcomes, especially social outcomes like personal and social well-being,” said Khadpe.
Are LLMs "blindly logical"? Because in my experience its actually the opposite. They don't work off logic, they work off what "feels like it's right" or what they're hoping you will think sounds right from them. Hell, it still is challenging to get LLMs to do the most basic of maths, which is the most logical of fields.More people convinced to be always right, exactly what we needed.
I was hoping that observing AI -which I feel should be renamed AM, Artificial Mind, since it’s blindly logical while lacking the conscious intelligence to understand reality- and seeing how it hallucinates would teach people to beware of their own minds, realize how unreliable they can be, how easy it is to trick ourselves into wrong ideas that may have internal logical consistency but don’t reflect reality. Thus developing some self doubt, an awareness of the vastness of our ignorance.
But seems we are more often getting reinforcement of blind beliefs and encouragement of intellectual passivity instead.
Well technically there has been a market for devil’s advocates for over a millennium.There has always been a market for validation engines, but not so much for devil's advocates. Does anyone think there would have been a market for a Teddy Ruxpin that told kids to not talk back to their parents, do their chores, and get outside for some fresh air?
My sweet summer child, Trump and his admin have not only been using gen "AI", they are being given the government version without any of the safeguards or restrictions in the commercial ones. You might have seen the video where Trump is literally dumping shit from a plane on the American public, or those racist videos of Democrat congress members. The general public does not get that capability, and if they even mention a politician's name in a genAi video prompt in a similar manner, it almost always gets blocked outright.For a good example of this phenomenon with out AI involvement see Trump, Donald J.
Yeah... This is the most insidious part of all of this. People think the machine is an oracle that's somehow programmed not to lie. The things they CAN help with they're pretty good at, like formatting and copying examples, but human brains can't tell the difference between what looks/sounds like good information and what is good information, so we're fucked.Another concerning finding is that study participants consistently described the AI models as objective, neutral, fair, and honest—a common misconception.
My boss at work was telling me something I knew was wrong, citing his google AI search result. "The Ai says it's ok if we don't do [thing]!"You can get very different results by framing a question by asking the AI to review what you did, or by asking the same question, but asking as if it's about someone who you don't trust so much. It's Pretty easy to see how much they validate.