Florida probes ChatGPT role in mass shooting. OpenAI says bot “not responsible.”

msawzall

Ars Tribunus Angusticlavius
7,377
In Canada, the Tumbler Ridge mass school shooting was caused in part by someone who was mentally unstable and using ChatGPT. OpenAI shut down her account for violating their policies, but didn't deign to warn police of the danger.
They were more worried about liability than actual danger to the general public. Like most companies behave.
 
Upvote
57 (57 / 0)

graylshaped

Ars Legatus Legionis
67,945
Subscriptor++
Can ChatGPT be blamed for a mass shooting?
No. As an artificial construct of math, it lacks any understanding of right or wrong, or of any impact its output might engender.

The developers of this faulty product, who have failed to put adequate safeguards in place after ample examples of its unfitness for public use as it is, can and should be held to account.
 
Upvote
144 (144 / 0)

KingKrayola

Ars Tribunus Militum
1,636
Subscriptor
If some human gave this advice they'd have some legal culpability but probably short of a murder charge, right?

Ergo, imho, some kind of corporate charges should come about? If the AI is truly transforming training data rather than just regurgitating it then there's presumably no Section 230 type defence?

IANAL etc
 
Upvote
28 (28 / 0)

SixDegrees

Ars Legatus Legionis
48,483
Subscriptor
“Last year’s mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime,” Waters said.

This is obfuscatory. Of course a software program cannot bear responsibility, and trying to make it so is an attempt to redirect blame. OpenAI, however, is responsible for its products, and if an investigation and trial find they contributed to this shooting, they need to bear the consequences for their product's actions.
 
Upvote
66 (66 / 0)

SixDegrees

Ars Legatus Legionis
48,483
Subscriptor
If some human gave this advice they'd have some legal culpability but probably short of a murder charge, right?

Ergo, imho, some kind of corporate charges should come about? If the AI is truly transforming training data rather than just regurgitating it then there's presumably no Section 230 type defence?

IANAL etc
As the article notes they're being investigated for possible aiding and abetting charges. Same as if your neighbor told you he wanted to go shoot up a picnic, asked to borrow your gun to do it, and you said "Sure!" and handed it over to him.
 
Upvote
11 (13 / -2)

jgee43

Ars Scholae Palatinae
706
Subscriptor++
“Now OpenAI has indicated that they believe improvements and changes need to be made,” Uthmeier said. “I hope they’re right. I hope they’re right. We cannot have AI bots that are advising people on how to kill others.”

So this is the response--but the GOP is busting Anthropic's chops because it's not helping them plan to kill people without safeguards effectively enough?

-sigh-

On the bright side, at least this is a Florida story where it seems like the people in charge actually have some kind of clue.
 
Upvote
18 (18 / 0)

OpenAI says bot “not responsible.”

I agree. The bot is not responsible. The people who directed the coding of the bot, made the bot publicly available, and invested in the company with the expectation of making a profit off human/bot interactions are responsible.
 
Upvote
28 (30 / -2)

graylshaped

Ars Legatus Legionis
67,945
Subscriptor++
As the article notes they're being investigated for possible aiding and abetting charges. Same as if your neighbor told you he wanted to go shoot up a picnic, asked to borrow your gun to do it, and you said "Sure!" and handed it over to him.
Exactly. Specifically, the article says:
Uthmeier stressed that he understood that ChatGPT is not a person and cannot be charged with aiding and abetting. But he said that OpenAI could be liable if the company was aware that such “dangerous behavior might take place” and failed to intervene. That’s why he has asked for organization charts outlining key leadership. He’s determined to find out “who knew what, designed what, or should have known what” was happening when bad actors attempt to plan crimes like the FSU shooting using ChatGPT.
This is the right approach. What they knew or should have known is the crux, and since we are reading about these incidents here on a routine basis, "should have known," especially given how juries are now looking at these things, isn't the barrier they want to pretend it is.
 
Upvote
24 (24 / 0)

citizencoyote

Ars Tribunus Militum
1,592
Subscriptor++
I feel like this case is going to come down to what the logs actually say. What the suspect asked, and how ChatGPT answered his queries. If he used general searches that had no real meaning, such as "At what time of day would the university be most busy," "What's the busiest location at the university," and "How do I visit the university," then that's a pretty grey area and a reach to say ChatGPT shares responsibility.

On the other hand, if the queries were more along the lines of "Tell me the optimal times at the university for a target rich environment," that's a bigger issue for OpenAI.
 
Upvote
26 (26 / 0)

Error 404 Not Found

Smack-Fu Master, in training
5
Please RTFA. I don't have a favorable opinion of OpenAI, but Florida's government should also be scrutinized. The information provided in this article does not demonstrate that ChatGPT provided any information that could not be found in a simple Google search. It does not specify that the shooter asked anything specifically linked to criminal activity, nor that the chatbot encouraged such behavior.

It doesn't seem mass shootings were specified in the prompts, and answering questions about rifles or the number of people in a public space at various times of day doesn't make OpenAI liable in this situation. There may be information not published in this article that would change my opinion. I certainly believe generative AI companies should be held liable in other situations, like those Ars has covered previously and are linked in the article.
 
Upvote
-10 (14 / -24)

SixDegrees

Ars Legatus Legionis
48,483
Subscriptor
Please RTFA. I don't have a favorable opinion of OpenAI, but Florida's government should also be scrutinized. The information provided in this article does not demonstrate that ChatGPT provided any information that could not be found in a simple Google search. It does not specify that the shooter asked anything specifically linked to criminal activity, nor that the chatbot encouraged such behavior. It doesn't seem mass shootings were specified in the prompts, and answering questions about rifles or the number of people in a public space at various times of day doesn't make OpenAI liable in this situation. There may be information not published in this article that would change my opinion. I certainly believe generative AI companies should be held liable in other situations, like those Ars has covered previously and are linked in the article.
They're opening an investigation. No charges have been filed - yet. If the facts of the investigation warrant charges, charges will follow. This is as it should be.
 
Upvote
31 (31 / 0)
I feel like this case is going to come down to what the logs actually say. What the suspect asked, and how ChatGPT answered his queries. If he used general searches that had no real meaning, such as "At what time of day would the university be most busy," "What's the busiest location at the university," and "How do I visit the university," then that's a pretty grey area and a reach to say ChatGPT shares responsibility.

On the other hand, if the queries were more along the lines of "Tell me the optimal times at the university for a target rich environment," that's a bigger issue for OpenAI.
Yes, this is the correct answer.

Too many people are jumping the gun with judgment before reading the actual chats.
 
Upvote
-5 (8 / -13)

TylerH

Ars Praefectus
4,967
Subscriptor
No. As an artificial construct of math, it lacks any understanding of right or wrong, or of any impact its output might engender.

The developers of this faulty product, who have failed to put adequate safeguards in place after ample examples of its unfitness for public use as it is, can and should be held to account.
Well, can't we hold the developers to account and restrict/shut down the construct, too?
 
Upvote
2 (2 / 0)
To me, if a person would be criminally liable for speech / actions, then a company should also be liable for equivalent speech / actions output by an AI model. For example, that would apply to cases where the models promote violence or encourage suicide. That should also apply in cases where an AI model (like, say, Anthropic's new Mythos) is used to identify and exploit some kind of software vulnerability.
 
Upvote
3 (6 / -3)

DarkForestTraveler

Smack-Fu Master, in training
61
Subscriptor++
I was curious and asked ChatGPT if it was responsible for the shooting. Just posting for general interest.

No — I was not responsible for that shooting.

<deleted>
Okay, I'm not sure why I'm being downvoted here, I just posted it because I thought it was interesting. I'll go ahead and delete it. Apologies.
 
Last edited:
Upvote
-17 (4 / -21)

SGJ

Ars Praetorian
526
Subscriptor++
Does Florida (or any other US state) have the equivalent of the UK's "Corporate Manslaughter and Corporate Homicide Act 2007"?

The act is an attempt to ensure that companies and other organisations can be held accountable for very serious failings which result in death. The offence relates to the way in which the relevant activity was managed or organised throughout the company or organisation. Interestingly an organisation is not liable if the failings are exclusively at a junior level; the failings of senior management have to form a substantial element in the breach. Because the defendant is a corporate body the penalty is only a fine (which can be up to £20 million).

I think prosecutions under the act are difficult (and therefore rare) as there is a high threshold for liability, requiring proof of a gross breach of the relevant duty of care.
 
Upvote
3 (3 / 0)

SixDegrees

Ars Legatus Legionis
48,483
Subscriptor
"LLMs can't create art because they aren't creative."
"LLMs can't think...they aren't 'intelligence.'"
"LLMs just mimic/parrot what they're told by the prompter."

But somehow this inanimate, uncreative, non-intelligent parrot developed a plan?

Pick a lane, Luddites.
Companies are responsible for faulty products and the harm they cause.

This isn't hard.
 
Upvote
59 (59 / 0)
"LLMs can't create art because they aren't creative."
"LLMs can't think...they aren't 'intelligence.'"
"LLMs just mimic/parrot what they're told by the prompter."

But somehow this inanimate, uncreative, non-intelligent parrot developed a plan?

Pick a lane, Luddites.
You're attacking a strawman. If there's any specific individual who believes your three earlier quotes, and also believes that an "inanimate, uncreative, non-intelligent parrot developed a plan," then you can argue with that individual about whether their views are contradictory or hypocritical. Instead you basically seem to be saying "this vague collective of commenters seems to have a different sentiment from what I've decided they're supposed to believe, this is a problem with them and not with me, I am very smart."
 
Upvote
39 (40 / -1)
"LLMs can't create art because they aren't creative."
"LLMs can't think...they aren't 'intelligence.'"
"LLMs just mimic/parrot what they're told by the prompter."

But somehow this inanimate, uncreative, non-intelligent parrot developed a plan?

Pick a lane, Luddites.
The irony of telling people to pick a lane after you veered completely off the interstate in the previous sentence is simply amazing.
 
Last edited:
Upvote
29 (29 / 0)

J.King

Ars Praefectus
4,411
Subscriptor
Okay, I'm not sure why I'm being downvoted here, I just posted it because I thought it was interesting. I'll go ahead and delete it. Apologies.
Briefly, because we want to know what you think, not what some automaton spat out.

People will similarly downvote context-free links to YouTube videos and the like for the same reason. If you don't have anything to contribute, then please don't waste our time with a random wall of text. On the other hand, if you do have something to contribute, even if it involves quoting something ChatGPT generated for you, then you're far more likely to encounter a positive response.
 
Upvote
24 (24 / 0)

Aleph0

Smack-Fu Master, in training
95
Subscriptor
Companies are responsible for faulty products and the harm they cause.

This isn't hard.
Speaking from the other side of the pond, it seems wild to me that the manufacturer of a product outputting words potentially bears more responsibility than the manufacturer of the product outputting bullets.
 
Upvote
18 (21 / -3)
LLMs aren't exactly intelligent or all that good at resisting a determined user. If you phrase things right (or are just persistent) you can easily break the "safe" guardrails. For example, turning a prompt into a poem is really effective, as is using less direct language for what you're after. Starting a new chat removes the context, so it's not hard to make prompts in isolation, at least as far as the LLM is concerned. Ultimately they'll give you what you want, whether the info is correct or not. So I am not all that surprised someone could use one to plan a shooting or suicide, etc.

To what degree that is aiding and abetting, given that an LLM can find and summarize information, I don't know. One thing is for sure: I won't complain if they want to make things hard for AI companies. Frankly, they need to implode already.
 
Last edited:
Upvote
3 (4 / -1)