Or, they need to develop thick-skinned resistance to being told their AI systems are wrong about anything. That, I think, is going to turn out to be the tallest pole: blind acceptance because the AI said so.

I'm sure the likes of Facebook and Twitter want automated engagement engines, and the ChatGPT public-facing personality. But some of the other competitors in this market seem all-in on AI agent employees, because that's how they envision multibillion-dollar revenues in a few years. To make that work, they need to achieve some level of reliability, at least to the point that they won't cause legal liability.
There is no persona behind an AI.

When reading articles like this I always think about the movie Rain Man. The LLM is an autistic Dustin Hoffman who knows basically everything but cannot express it in a way most people can understand, and in between you have Tom Cruise as a social people-pleaser. It seems they added a bit too much Tom Cruise in this case.
In 1973, Lloyd Kahn, an early proponent of self-built domes and author of Domebook and Domebook 2, published a fascinating essay called Smart But Not Wise that is still broadly pertinent today. Definitely worth a read.

Lately I've been thinking about the difference between being knowledgeable and being wise. I think that a big part of wisdom is knowing where your knowledge runs out; in other words, wisdom is knowing when to say "I don't know, let's find out." LLMs are a lot of things, but I can't say I've ever thought one was wise.
No, it has very much to do with AI and the very different way LLMs process language compared with human linguistic processing. They're entirely unalike, and LLMs have deep, inherent trouble handling some language constructs that humans do not.

This has nothing to do with AI; you'd expect similar results from humans. It's inherent to the problem described: it's much easier to find an instance of a feature (e.g., a TB infection) than it is to reliably conclude the absence of a feature. The odd/suspicious result would have been if the AI had performed the same on both questions.
Looks around, it's dark, it's warm, decides to never leave.

So, the tech lords with their heads up their asses found a marketing term, "AI", that actually allows them to brainwash other people into sticking their heads up their asses. Now that the head-up-assery is reproducing, how long until it becomes self-aware?
So, it's still telling you what you want to hear.

Y'all realize that you can tell ChatGPT, Claude, etc., "I've noticed that sometimes you tell me things just because you want me to be happy. Please don't do that. I want you to be analytical and unbiased. Can you do that?" This won't stop the occasional hallucination, but it will stop most of the sycophantic behavior.