Will ChatGPT’s hallucinations be allowed to ruin your life?

Just another reason why unleashing unproven technology like generative AI is a terrible idea, especially in an Internet landscape already full of mis- and disinformation. Tech companies want to have it both ways: use AI to "improve" their search product but also not be responsible for the content it returns.
 
Upvote
203 (209 / -6)

aliksy

Ars Scholae Palatinae
1,081
I think I'm even more old man yells at cloud than I used to be, but a lot of this ai generated text seems kind of pointless. We already have human written and edited sources for a lot of things. Why do I need a script to spit out a mangled Wikipedia article for me?

Like the other day I asked a friend how a game plays, and he pasted me a barely readable summary from chatgpt. I just went to drivethrurpg and Wikipedia and read about it there.

Generating fiction might be interesting, if soulless and problematic in other ways.
 
Upvote
113 (118 / -5)

Sajuuk

Ars Legatus Legionis
13,088
Subscriptor++
I think I'm even more old man yells at cloud than I used to be, but a lot of this ai generated text seems kind of pointless. We already have human written and edited sources for a lot of things. Why do I need a script to spit out a mangled Wikipedia article for me?

Like the other day I asked a friend how a game plays, and he pasted me a barely readable summary from chatgpt. I just went to drivethrurpg and Wikipedia and read about it there.

Generating fiction might be interesting, if soulless and problematic in other ways.
Think bigger.

Why pay writers when you can ask a computer to spit out a whole new book, or show, or movie, or play? It's pointless for you, but not for business.
 
Upvote
94 (95 / -1)
The "it's not a publication" claim seems like both purest bullshit and like an exceptionally broad carve-out, were someone to fall for it.

There's basically no area of media delivery more personalized than print newspapers or broadcast TV where it wouldn't be relatively feasible to claim that you are just delivering 'draft content' based on the user's request (whether an explicit prompt or inferred from their activity and context) and shove a 'for entertainment purposes only; may be all lies; research it' somewhere into a EULA nobody reads.
 
Upvote
77 (83 / -6)

fenris_uy

Ars Tribunus Angusticlavius
9,113
LLMs aren't sources of truth. Whatever they say is a fabrication by definition of how they work. Why is that so hard for people to understand? They aren't an alternative to a Google search that sends you to a page. They create text/images/videos/audio based on mathematical models, not based on the truth.
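That point can be made concrete with a toy example. A real LLM is a neural network, not a bigram table, but this sketch (all strings invented for illustration) shows the same underlying property: the model samples whatever continuation is statistically plausible, with no notion of whether the resulting claim is true.

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then sample continuations word by word.
corpus = (
    "the mayor was convicted of fraud . "
    "the mayor was cleared of fraud . "
    "the journalist was cleared of wrongdoing ."
).split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=6, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# Every output is a statistically plausible word sequence, but the model
# can just as easily emit "the journalist was convicted of fraud" -- a
# claim that appears nowhere in its training data.
print(generate("the"))
```

Scaled up by many orders of magnitude and swapped for a neural network, this is still generation from learned statistics, which is why "hallucinated" claims aren't a bug to be patched out so much as the default behavior.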
 
Upvote
114 (127 / -13)

siliconaddict

Ars Legatus Legionis
13,048
Subscriptor++
"OpenAI’s Terms of Use make clear that ChatGPT is a tool that assists the user in the writing or creation of draft content and that the user owns the content they generate with ChatGPT,"

Yes, and social media shouldn't be used to get news... and yet people do. Just because you have some freaking disclaimer that defines something as beta... oh, I'm sorry, a draft... doesn't mean people aren't going to treat it as truth.


Move fast, break things... the technology company mantra. The only thing that keeps them in check and forces them to validate anything is the always-looming possibility of a company-shattering lawsuit. Without that, companies give zero craps. It's the same way antitrust laws have been watered down over the last 30 years. Companies don't care anymore, so they will do whatever they want. Same deal here. If companies aren't held liable for the code they are making and the output it generates, then they can do whatever they want, which is, IMHO, a serious issue.
 
Upvote
100 (101 / -1)

Wheels Of Confusion

Ars Legatus Legionis
75,564
Subscriptor
Think bigger.

Why pay writers when you can ask a computer to spit out a whole new book, or show, or movie, or play? It's pointless for you, but not for business.
Why pay the guy whose job it is to ask the computer to spit out the content? Seems easy to automate...
 
Upvote
23 (24 / -1)

Tridus

Ars Tribunus Militum
2,506
Subscriptor
The idea that megacorps can rake in money by flagrantly pushing lies and nonsense, and get away with it because it's a "draft" shows just how completely screwed up the legal system is.

Immunity from liability for the harms companies cause has gone far, far too far and needs to be reined in. Too bad it may take a full-on revolution to do it, because the government is complicit at best.
 
Upvote
77 (81 / -4)

arlinn

Smack-Fu Master, in training
83
Subscriptor++
LLMs aren't sources of truth. Whatever they say is a fabrication by definition of how they work. Why is that so hard for people to understand? They aren't an alternative to a Google search that sends you to a page. They create text/images/videos/audio based on mathematical models, not based on the truth.
Although almost everyone on Ars should agree with this, the general public doesn’t understand that.

That’s where the real problem lies. Not informed, tech-literate people using it, but the general public, which doesn’t understand how most technology actually works. If we want to fix the real problem, we should invest in education.
 
Upvote
89 (95 / -6)

SplatMan_DK

Ars Tribunus Angusticlavius
8,247
Subscriptor++
Related but not identical problem: Big tech generally builds their business models in ways that specifically replace humans with tech. In the process they accept flawed tech, as long as it works "well enough" to support whatever business case they're pursuing. It works with stuff like Google Maps, Uber, AirBnB, and pushing ads, because there is no significant consequence when the tech fails.

The problem with contemporary AI is: so many of the things people (and ignorant CEOs especially) dream of using AI for really don't allow flawed tech. You can't reduce this to a numbers game and accept 0.5% errors... because you don't want stuff like legal advice, economic analysis, medical diagnostics, or construction design to be flawed.

A short story from real life: I am a Google Local Guide, and I submit content to Google Maps. I have around 90 million views of my content. It's fun, and I can separate it from my personal data well enough to not worry about evil-Google.

Lately, many of my reviews have been taken offline for alleged violations of Google's terms. It's mostly positive reviews and all in my local language (Danish). I checked and double-checked but there is nothing wrong with these reviews. As a techie, I can confidently say the problem is a flawed AI detection algorithm, putting false flags on my content. Perhaps the niche language is the problem - who knows.

So now I am in "bad standing" at Google, because their content-AI is hallucinating something. Not hallucinating in the same sense as this article, but still, same category of problem.

Of course, there is nobody I can reach out to. Nobody to contact. Google has gone out of its way to ensure you can't ever reach a human for support, or to tell them they f*cked up.

Since it's just insignificant reviews and photos on Google Maps, my life will go on. I am unjustly marked as an evil geezer by Google's big AI apparatus, but I'll be fine. It's not significant. It's a bit of vanity at most.

But what if the same thing had happened to someone's credit scoring? Or their insurance data? Or the record of their drivers license? Their job performance, so they got fired? Criminal records?

I think there is every reason to fear faulty tech, and hallucinating AIs in particular. Because the owners of these machines not only run from their responsibility. They also make it impossible for the little guy to get in touch with anyone who might be able to help them.
 
Last edited:
Upvote
195 (197 / -2)
It's far from a hypothetical problem. ChatGPT once claimed I co-authored a book with John Steinbeck. While that's not terrible (in fact, my reaction was "I wish!"), it made me wonder "what if it accused me of ghostwriting Mein Kampf for Hitler?"

You might not be far off. The 1925 Mein Kampf has been re-issued, but the newer editions are heavily annotated, with roughly twice as much scholarly commentary as original text, so a reasonable person reading "co-authored" might take the bot to mean something to the effect that you were one of the scholars footnoting Hitler's assertions.

https://en.wikipedia.org/wiki/Mein_Kampf
 
Upvote
22 (24 / -2)

Dachannien

Ars Scholae Palatinae
1,145
Subscriptor
It's far from a hypothetical problem. ChatGPT once claimed I co-authored a book with John Steinbeck. While that's not terrible (in fact, my reaction was "I wish!"), it made me wonder "what if it accused me of ghostwriting Mein Kampf for Hitler?"

And a little bit of prompt engineering probably could have resulted in such a statement. Who bears the responsibility for defamation when someone purposefully develops a prompt that generates a defamatory statement when the prompt itself doesn't appear to be defamatory, and then shares that prompt online to help spread that disinformation with deniability for themselves?
 
Upvote
37 (40 / -3)

wagnerrp

Ars Legatus Legionis
31,760
Subscriptor
Although almost everyone on Ars should agree with this, the general public doesn’t understand that.

That’s where the real problem lies. Not informed, tech-literate people using it, but the general public, which doesn’t understand how most technology actually works. If we want to fix the real problem, we should invest in education.
Then there’s a simple solution. Add the same disclaimer as fictional television shows and movies. The output of this system is meant for entertainment purposes only. Any similarity to real individuals, places, or events is coincidental.

Of course that still leaves Microsoft in hot water, as they’ve tied AI responses into their real world search tools. Can’t have it both ways.
 
Upvote
57 (61 / -4)
LLMs aren't sources of truth. Whatever they say is a fabrication by definition of how they work. Why is that so hard for people to understand, they aren't an alternative to a Google search that sends you to a page. They create text/images/videos/audio based on mathematical models, not based on the truth.
Unfortunately, given the extreme and deliberate degradation of Google search, they are often superior at getting answers. I hate that Google search has become so utterly useless that I often have to turn to an LLM (I refuse to call them 'AI', because they're not) - but what other choice do I have? Search is genuinely incapable of providing useful information in 2023.
 
Upvote
-16 (21 / -37)
I kinda wish that AI services sold in a service model were put under the same scrutiny as other work-for-hire companies that are human-powered.

If the OpenHuman company started a service where you could chat with its employees for a fixed cost per word of output, and it passed itself off as a semi-reliable source of information, it would be pretty liable for defamation if its employees regularly defamed people while operating as agents of the company.

Not holding AI services to the same standard as non-AI services feels a bit unfair, like you are specifically choosing for AI services to win, since they do not need to bear the burden of training their agents (human employees or AI) to provide only non-defamatory information.
 
Upvote
53 (55 / -2)
LLMs aren't sources of truth. Whatever they say is a fabrication by definition of how they work. Why is that so hard for people to understand? They aren't an alternative to a Google search that sends you to a page. They create text/images/videos/audio based on mathematical models, not based on the truth.
Sounds to me like Battle's complaints are showing up in regular search results as the blurb about him at the top.
 
Upvote
21 (22 / -1)

Chmilz

Ars Tribunus Militum
1,539
When users sign up to use a chatbot, disclaimers warn them to fact check chatbot responses
We've moved past training AI to identify bicycles and pedestrians for autonomous driving under the guise of security, to asking us to curate AI's content base under the guise of "suggested content".

Late stage gaslighting.
 
Upvote
32 (33 / -1)

wagnerrp

Ars Legatus Legionis
31,760
Subscriptor
I hate that Google search has become so utterly useless that I often have to turn to an LLM (I refuse to call them 'AI', because they're not)
Eliza isn’t AI. Markov chains aren’t AI. Expert systems aren’t AI. OCR isn’t AI. NN speech identification isn’t AI. NN image identification isn’t AI. LLMs aren’t AI. One of these days, we’re going to figure out what intelligence actually is through process of elimination. (Edit: in case it wasn't clear, the preceding was an attempt at mockery, and not the belief of this poster)

There’s a reason we coined the term “general intelligence” a few decades ago.
 
Last edited:
Upvote
37 (41 / -4)

fredrum

Ars Scholae Palatinae
817
We can see how Facebook gets away with saying "Oh, so sorry! Yes, we really need to improve! Let us start straight away." And then nothing much improves.

These companies are already starting to provide their users with their own insurance to paper over this. They have so much money that they can just promise to pay for any damages that eventually get through a court system some years after the event.

When was the last time government stopped a gold rush?
 
Upvote
14 (15 / -1)

JoHBE

Ars Praefectus
4,218
Subscriptor++
I think I'm even more old man yells at cloud than I used to be, but a lot of this ai generated text seems kind of pointless. We already have human written and edited sources for a lot of things. Why do I need a script to spit out a mangled Wikipedia article for me?

Like the other day I asked a friend how a game plays, and he pasted me a barely readable summary from chatgpt. I just went to drivethrurpg and Wikipedia and read about it there.

Generating fiction might be interesting, if soulless and problematic in other ways.
Soon, almost all people like you (and me) will be filtered out as "not efficient enough". Too slow, not enough output, overthinking "content"...
 
Upvote
7 (11 / -4)

Atterus

Ars Tribunus Militum
2,335
Eliza isn’t AI. Markov chains aren’t AI. Expert systems aren’t AI. OCR isn’t AI. NN speech identification isn’t AI. NN image identification isn’t AI. LLMs aren’t AI. One of these days, we’re going to figure out what intelligence actually is through process of elimination.

There’s a reason we coined the term “general intelligence” a few decades ago.
Artificial Intelligence is the entire field, an umbrella term for everything from Deep Learning to Regressions. What you are thinking of is "Machine Learning", which is effectively the use of complex mathematics to emulate a decision-making process, but incapable of learning on its own. That's what nearly everything is right now, with few exceptions among the "lesser" models.

The issue is, and has been, people pretending a tool can be used for things beyond its purpose. LLMs' sole role and design is to look true, not actually be true. That these tech bro morons are pushing for the latter should come with liability, because it has for the actual scientists since the '70s. A model doesn't train itself, and someone has to market it. Do it wrong, and it is fraud at best.

Strict certifications should be required to work with AI tools. How many more bogus law cases do lazy lawyers need to promote before this is understood? The public is far from capable of handling these tools appropriately, and that is a very dangerous problem. But hey! Ignore the folks who invented the field/models. See what happens... again. Congress did.
 
Upvote
34 (41 / -7)
An "AI" assistant on a home server that is your responsibility sounds great to me. Far better than the things from Google, Apple, and Amazon. Let me train it on the sources that I choose
Like 4chan? I can sympathize. I mean if I was looking to wipe out Trump supporters at scale I would go with a language model tuned on custom misinformation.

They already chug bleach and horse paste. They already refuse vaccinations and other reasonable precautions against fatal disease. Maybe my pet Llama can help convince them rattlesnake venom cures COVID. You heard it here first, folks. Soon all across social media.

Clarification: I do not actually endorse wiping out Trump supporters at scale, but the Russians for sure don't have any qualms about it. Thanks, Hugging Face, for parameter-efficient fine-tuning and the whole rest of your stack making it easy to tune one's very own hate bot on a consumer GPU. Your ethics are just swell. Who needs GPT-4 when a comparatively stupid model will work just as well at targeting "vulnerable" communities with misinformation?
 
Upvote
10 (18 / -8)

wagnerrp

Ars Legatus Legionis
31,760
Subscriptor
Artificial Intelligence is the entire field, an umbrella term for everything from Deep Learning to Regressions. What you are thinking of is "Machine Learning", which is effectively the use of complex mathematics to emulate a decision-making process, but incapable of learning on its own. That's what nearly everything is right now, with few exceptions among the "lesser" models.
No. I’m mocking the need of some to continually redefine “intelligence” as always something beyond what we’ve currently developed. They won’t be satisfied until it’s on par with ourselves.
 
Upvote
-4 (11 / -15)
Don't need AI to make up things. People felt the same about others reading, the printing press, free speech, the internet, and thousands of other things.

An "AI" assistant on a home server that is your responsibility sounds great to me. Far better than the things from Google, Apple, and Amazon. Let me train it on the sources that I choose and an easy enough way to manage data stored locally.

If I have it change all the zombies in World War Z to Teletubbies, then that is for personal use. I can't legally profit from it, just like with everything else already. Or CNN would be made fun of, again probably, for using doctored pictures/video. It doesn't matter where it came from without verification, and even then you still want multiple sources for extraordinary claims.
Sure, but accelerating the proliferation of false information is definitely a bad thing that LLMs are doing right now. At least with traditional search you can rely on reputable sources like Wikipedia (major caveat here, since I remember an article about Wikipedia citations that don't actually support the claims made in articles).

But at the very least, reading from a trusted source is better than an LLM spitting out a "summary" of a reputable source that is not actually accurate, and is just the most mathematically probable output based on that source.
 
Upvote
41 (44 / -3)

fenris_uy

Ars Tribunus Angusticlavius
9,113
Sounds to me like Battle's complaints are showing up in regular search results as the blurb about him at the top.

Battle's complaint is different, but it also isn't about AI; it's about Bing enhanced search blurbs. Something that they need to stop doing, because it has already caused problems for both Bing and Google.
 
Upvote
27 (29 / -2)
There was a situation where I used Bing Chat (GPT-4) and it created an answer. I always go through the listed links to confirm the contents of the answer, and in this case, the answer didn't match the links. That's my way of verifying the answer content. I can also search the prompt again to see if the answer matches, which often produces more links to verify content. In any case, no answer should be trusted without some sort of verification, whether that answer comes from AI or a human being.
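That manual link-checking habit can even be roughed out in code. This is a deliberately naive sketch (the function name, threshold, and sample strings are all invented for illustration), not a real fact-checker: it just flags answer sentences that share few content words with the cited source, roughly the first-pass filter a human skimmer applies.

```python
# Flag answer sentences with little lexical overlap with the cited source.
# Crude by design: real verification needs semantics, not word matching.
def flag_unsupported(answer, source, threshold=0.5):
    source_words = set(source.lower().split())
    flagged = []
    for sentence in answer.split("."):
        # Ignore short function words; keep likely content words.
        words = [w for w in sentence.lower().split() if len(w) > 3]
        if not words:
            continue
        overlap = sum(w in source_words for w in words) / len(words)
        if overlap < threshold:
            flagged.append(sentence.strip())
    return flagged

source = "The museum opens at nine and closes at five on weekdays."
answer = "The museum opens at nine. It was founded by Napoleon."
print(flag_unsupported(answer, source))  # -> ['It was founded by Napoleon']
```

Anything a check this shallow can flag is exactly the kind of claim worth clicking through the cited links to confirm.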
 
Upvote
-10 (5 / -15)