> LLMs are also terrible at not knowing what they don't know. They have a serious drug problem of using shrooms and hallucinating garbage.

They are Dunning-Kruger Machines.
Industry insiders also warn that AI companies have perverse incentives, with some groups integrating advertisements into their products in search of revenue streams.
“The more you feel that you can share anything, you are also going to share some information that is going to be useful for potential advertisers,” said Giada Pistilli, principal ethicist at Hugging Face, an open-source AI company.
> I can't wait for the swing the other way to get a "Chandler" sarcastic AI.

Given Poe's Law and modern post-irony, could we even tell? The difference in output between sycophancy and sarcastic confabulation can be entirely one of tone. The only reason not to suspect it's already happening is how bad most of the models I've tinkered with are at dramatic irony.
> I can't wait for the swing the other way to get a "Chandler" sarcastic AI.

Who knew J.A.R.V.I.S.'s snarky remarks were actually baked in to keep Tony from going full Messiah Complex (... sooner than he did)?
> The challenge that tech companies face is making AI chatbots and assistants helpful and friendly, while not being annoying or addictive.

The real kicker is that the latter promotes their sales, while the former is something they pretend these models already do.
> They are Dunning-Kruger Machines.

In more than one way. The Dunning-Kruger effect is widely misunderstood. People think it's "why stupid people think they're smart," but it's really something that affects all humans, all the time, even if you are aware of it.
> They are Dunning-Kruger Machines.

"I wish I had a machine that thinks like a real human"
> I mean, I’ve read some dystopian sci-fi in my time, but ‘people getting addicted to soulless, for-profit word salad generators and advert pushers’ is right up there.

Rather like the interactive TVs from Fahrenheit 451?
> "I wish I had a machine that thinks like a real human"
And the genie granted our wish.
> To everyone involved with these overhyped bullshit machines - hope you’re suitably proud of yourselves for preying on the vulnerable and trafficking in human misery for a lousy handful of bucks.

The excuse from engineers and other staff is that it's "inevitable," so it doesn't matter whether they're profiting off of it or not.
> It’s depressing how many people on r/chatGPT think using LLMs as therapists is totally fine.

Or that it's private. It was a bit upsetting to see the look of panic on a friend's face when I told her that all chatbot chats are logged and mined for data. She realized that not only may actual humans be reading her most personal thoughts, but, depending on what she's been telling it, she may also be feeding into the next Harlequin Botmance or OnlyBots product.
> I'm starting to think the only smart device I should have is an Ethernet-connected printer, and a loaded pistol to shoot it when it gives me a PC Load Letter error.

You should start a YouTube channel and go build such a thing. You could have thumbnails like "World's DEADLIEST printer!" with the shot of Michael from Office Space about to come down on a printer with a baseball bat.
> The real kicker is that the latter promotes their sales, while the former is something they pretend these models already do.

Yeah, I thought this was a super-credulous take.
> LLMs are also terrible at not knowing what they don't know. They have a serious drug problem of using shrooms and hallucinating garbage.

So you are saying they are becoming more and more like humans?
AI language models do not “think” in the way humans do; they work by generating the next most likely word in a sentence, one word at a time.
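To make "generating the next likely word" concrete, here is a minimal sketch using a toy bigram model over a tiny hand-made corpus. Real LLMs use neural networks over subword tokens and far larger contexts, but the generation loop, repeatedly picking a likely continuation of what has been produced so far, is the same basic idea. The corpus and function names are illustrative, not from any real system.

```python
# Toy "language model": count which word follows which (a bigram table),
# then generate text by greedily picking the most likely next word.
from collections import Counter, defaultdict

corpus = ("the model predicts the next word and the next word "
          "follows the last word").split()

# Build the bigram table: following[w] counts the words seen after w.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=6):
    out = [start]
    for _ in range(length):
        counts = following[out[-1]]
        if not counts:
            break  # no known continuation
        # Greedy decoding: always take the single most likely next word.
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

Note that nothing in the loop checks whether the output is true; the model only knows which words tend to follow which, which is one intuition for why such systems confabulate so fluently.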
> AI have a lot of weird blind spots. Another I read about recently: researchers used AI coupled to image processing of lung x-rays and asked the system to identify images that showed signs of TB infection. It got about 80% right, which isn't bad. Then they used the same image set and asked the AI to identify all images that did NOT contain signs of TB infection. The success rate dropped to 40%. AIs are strangely fixated on producing positive results, but can't handle negations well at all.

I'm sure the likes of Facebook and Twitter want automated engagement engines, and that's what the public-facing ChatGPT personality is. But some of the other competitors in this market seem all-in on AI agent employees, because that's how they envision multibillion-dollar revenues in a few years. To make that work, they need to achieve some level of reliability, at least to the point that they won't cause legal liability.
There's no intelligence here. And the companies pursuing this technology are far more interested in producing automated engagement engines than in producing accurate results, let alone producing anything using actual reasoning or comprehension.
> All those billions of dollars to produce confirmation bias.

It's what humans have been striving for ever since the first king: ontologically loyal slaves.
> AI have a lot of weird blind spots. Another I read about recently: researchers used AI coupled to image processing of lung x-rays and asked the system to identify images that showed signs of TB infection. It got about 80% right, which isn't bad. Then they used the same image set and asked the AI to identify all images that did NOT contain signs of TB infection. The success rate dropped to 40%. AIs are strangely fixated on producing positive results, but can't handle negations well at all.

This has nothing to do with AI; you'd expect similar results from humans. It's inherent to the problem described: it's much easier to find an instance of a feature (i.e. TB infection) than it is to reliably conclude the absence of a feature. The odd/suspicious result would have been if the AI had performed the same on both questions.
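The asymmetry described here can be shown with a toy simulation (all numbers below are made-up parameters, not from the study mentioned): a per-patch detector with small error rates scans many patches per image. Declaring "feature present" only requires one true patch to fire, while declaring "feature absent" requires every patch to stay silent, so false positives accumulate and the absence question is answered far less reliably.

```python
# Toy simulation: presence detection vs. absence verification with the
# same noisy per-patch detector. Error rates and patch counts are
# arbitrary illustrative values.
import random

random.seed(0)
FALSE_POS = 0.02       # detector fires on a clean patch
FALSE_NEG = 0.30       # detector misses a patch that truly shows the feature
PATCHES = 100          # patches scanned per image
FEATURE_PATCHES = 5    # patches that truly show the feature, when present

def detect_image(has_feature):
    """True if the detector flags the image as containing the feature."""
    for i in range(PATCHES):
        truly_positive = has_feature and i < FEATURE_PATCHES
        p_fire = (1 - FALSE_NEG) if truly_positive else FALSE_POS
        if random.random() < p_fire:
            return True  # one firing patch is enough to flag the image
    return False         # every patch stayed silent

trials = 2000
present_right = sum(detect_image(True) for _ in range(trials)) / trials
absent_right = sum(not detect_image(False) for _ in range(trials)) / trials
print(f"accuracy on 'feature present' images: {present_right:.2f}")
print(f"accuracy on 'feature absent' images:  {absent_right:.2f}")
```

With these parameters, presence detection is nearly perfect (missing all five feature patches is vanishingly unlikely) while absence verification fails most of the time (roughly 0.98^100 of clean images survive all 100 patch checks), mirroring the drop the quoted comment describes.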