> In Canada, the Tumbler Ridge mass school shooting was caused in part by someone who was mentally unstable and using ChatGPT. OpenAI shut down her account for violating their policies, but didn't deign to warn police of the danger.

They were more worried about liability than actual danger to the general public. Like most companies behave.
> Can ChatGPT be blamed for a mass shooting?

No. As an artificial construct of math, it lacks any understanding of right or wrong, or of any impact its output might engender.
“Last year’s mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime,” Waters said.
> If some human gave this advice they'd have some legal culpability but probably short of a murder charge, right?
>
> Ergo, imho, some kind of corporate charges should come about? If the AI is truly transforming training data rather than just regurgitating it then there's presumably no Section 230 type defence?
>
> IANAL etc

As the article notes, they're being investigated for possible aiding and abetting charges. Same as if your neighbor told you he wanted to go shoot up a picnic, asked to borrow your gun to do it, and you said "Sure!" and handed it over to him.
“Now OpenAI has indicated that they believe improvements and changes need to be made,” Uthmeier said. “I hope they’re right. I hope they’re right. We cannot have AI bots that are advising people on how to kill others.”
> OpenAI says bot "not responsible."

I agree. The bot is not responsible. The people who directed the coding of the bot, made the bot publicly available, and invested in the company with the expectation of making a profit off human/bot interactions are responsible.
> As the article notes, they're being investigated for possible aiding and abetting charges. Same as if your neighbor told you he wanted to go shoot up a picnic, asked to borrow your gun to do it, and you said "Sure!" and handed it over to him.

Exactly. Specifically, the article says:

> Uthmeier stressed that he understood that ChatGPT is not a person and cannot be charged with aiding and abetting. But he said that OpenAI could be liable if the company was aware that such "dangerous behavior might take place" and failed to intervene. That's why he has asked for organization charts outlining key leadership. He's determined to find out "who knew what, designed what, or should have known what" was happening when bad actors attempt to plan crimes like the FSU shooting using ChatGPT.

This is the right approach. And since we are reading about these incidents here on a routine basis, and seeing how juries are now looking at these things, "what they knew or should have known" isn't the barrier they want to pretend it is.
> Please RTFA. I don't have a favorable opinion of OpenAI, but Florida's government should also be scrutinized. The information provided in this article does not demonstrate that ChatGPT provided any information that could not be found in a simple Google search. It does not specify that the shooter asked anything specifically linked to criminal activity, nor that the chatbot provided encouragement of such behavior. It doesn't seem mass shootings were specified in the prompts, and answering questions about rifles or the number of people in a public space at various times of day doesn't make OpenAI liable in this situation. There may be information not published in this article that would change my opinion. I certainly believe generative AI companies should be held liable in other situations, like those Ars has covered previously and are linked in the article.

They're opening an investigation. No charges have been filed - yet. If the facts of the investigation warrant charges, charges will follow. This is as it should be.
> I feel like this case is going to come down to what the logs actually say: what the suspect asked, and how ChatGPT answered his queries. If he used general searches that had no real meaning, such as "At what time of day would the university be most busy," "What's the busiest location at the university," and "How do I visit the university," then that's a pretty grey area and a reach to say ChatGPT shares responsibility.
>
> On the other hand, if the queries were more along the lines of "Tell me the optimal times at the university for a target rich environment," that's a bigger issue for OpenAI.

Yes, this is the correct answer.
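The grey-area line the quoted comment draws could be sketched as a toy triage filter. This is purely hypothetical: the phrase list and the `triage` function are invented here for illustration and say nothing about how OpenAI's actual moderation systems work (real systems use trained classifiers, not keyword lists).

```python
# Hypothetical sketch of coarse, keyword-based triage over chat logs,
# separating innocuous queries from ones that might warrant human review.
# The phrase list and function are invented for illustration only.

RISK_PHRASES = [
    "target rich environment",
    "how to kill",
    "shoot up",
]

def triage(prompt: str) -> str:
    """Return 'review' if the prompt matches a risk phrase, else 'ok'."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in RISK_PHRASES):
        return "review"
    return "ok"

# The two example queries from the comment land on opposite sides:
print(triage("At what time of day would the university be most busy?"))   # ok
print(triage("Tell me the optimal times at the university "
             "for a target rich environment"))                            # review
```

The hard part, as the comment implies, is exactly the middle ground: "busiest location at the university" trips no such filter, which is why intent is so difficult to infer from logs alone.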
> They were more worried about liability than actual danger to the general public. Like most companies behave.

Well, sure! The general public, that's just some weird randos, but liability... that's stock prices!
> No. As an artificial construct of math, it lacks any understanding of right or wrong, or of any impact its output might engender.

Well, can't we hold the developers to account and restrict/shut down the construct, too? The developers of this faulty product, who have failed to put adequate safeguards in place after ample examples of its unfitness for public use as it is, can and should be held to account.
> I was curious and asked ChatGPT if it was responsible for the shooting. Just posting for general interest.
>
> No — I was not responsible for that shooting.

Okay, I'm not sure why I'm being downvoted here; I just posted it because I thought it was interesting. I'll go ahead and delete it. Apologies.

<deleted>
> "LLMs can't create art because they aren't creative."
>
> "LLMs can't think...they aren't 'intelligence.'"
>
> "LLMs just mimic/parrot what they're told by the prompter."
>
> But somehow this inanimate, uncreative, non-intelligent parrot developed a plan?
>
> Pick a lane, Luddites.

Companies are responsible for faulty products and the harm they cause.
> "LLMs can't create art because they aren't creative."
>
> "LLMs can't think...they aren't 'intelligence.'"
>
> "LLMs just mimic/parrot what they're told by the prompter."
>
> But somehow this inanimate, uncreative, non-intelligent parrot developed a plan?
>
> Pick a lane, Luddites.

You're attacking a strawman. If there's any specific individual who believes your three earlier quotes, and also believes that an "inanimate, uncreative, non-intelligent parrot developed a plan," then you can argue with that individual about whether their views are contradictory or hypocritical. Instead you basically seem to be saying "this vague collective of commenters seems to have a different sentiment from what I've decided they're supposed to believe, this is a problem with them and not with me, I am very smart."
> "LLMs can't create art because they aren't creative."
>
> "LLMs can't think...they aren't 'intelligence.'"
>
> "LLMs just mimic/parrot what they're told by the prompter."
>
> But somehow this inanimate, uncreative, non-intelligent parrot developed a plan?
>
> Pick a lane, Luddites.

The irony of telling people to pick a lane after you veered completely off the interstate in the previous sentence is simply amazing.
> Okay, I'm not sure why I'm being downvoted here; I just posted it because I thought it was interesting. I'll go ahead and delete it. Apologies.

Briefly, because we want to know what you think, not what some automaton spat out.
> Companies are responsible for faulty products and the harm they cause.
>
> This isn't hard.

Speaking from the other side of the pond, it seems wild to me that the manufacturer of a product outputting words bears potentially more responsibility than the manufacturer of a product outputting bullets.
> ChatGPT was just Standing Its Ground and feared for its life.

You just described everyone in Florida with a gun jaywalking across a busy street.