> The motto of the Sirius Cybernetics Corp is "Share and Enjoy." This is widely adaptable, from synthesised drinks to the company of a robot, or "Your plastic pal who's fun to be with", as their robots are described by the aforementioned Marketing Department.

Yet now it's tragedy. A big judgement here won't ease the pain for these parents, but it's all the legal system can do.
The Hitchhiker's Guide to the Galaxy describes the Marketing Department of the Sirius Cybernetics Corporation as: "A bunch of mindless jerks who'll be the first against the wall when the revolution comes."
> In a press release announcing the lawsuit, Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, accused OpenAI of designing ChatGPT to provide “distributed advice like a medical professional despite having no license, no training, and no moral compass to do no harm.”

Were it only that easy, a non-horrible company could design and bring to market an LLM that did not do these things. But I think it's abundantly clear at this point that these fucking things can scarcely be described as being "designed" at all.
> This work is ongoing, and we continue to improve it in close consultation with clinicians.

"In the meantime, some of you may die, but think of the shareholders!"
> OpenAI does not seem to accept that ChatGPT is responsible for Nelson’s death. In a statement provided to Ars, their spokesperson, Drew Pusateri, described Nelson’s death as a “heartbreaking situation” and expressed that “our thoughts are with the family.” However, Pusateri also emphasized that the ChatGPT model implicated is “no longer available” and suggested that current models are safer.

That old thing? Sorry, that’s just what you get with those Hyperdyne 120-A/2s. Can I interest you in a new model?
> The teen viewed ChatGPT so highly as an authoritative source of information that he once swore to his mom that ChatGPT had access to “everything on the Internet,” so it “had to be right,” when she questioned if the chatbot was always reliable, the complaint said.

The internet... which contains all the very confident and very incorrect answers posted to every forum...
> "In the meantime, some of you may die, but think of the shareholders!"

Soon enough, the shareholders will be fucked too!
> “If ChatGPT had been a person, it would be behind bars today.”

Those people are huge donors to presidents and senators. Justice is often 'not blind'.
But there are people behind ChatGPT that could be behind bars.
On the one hand, the company should have had safeguards to prevent illicit drug conversations.
On the other hand, how can/why would you trust a single source on the internet?
Did the parents ever verify what he said about ChatGPT?
There is a lot of blame to go around in this one.
> A big judgement here won't ease the pain for these parents, but it's all the legal system can do.

Perhaps you mean "will do"? Proactive regulatory legislation is very possible. It's entirely real and doable. It's just not very likely when citizens allow their top legislature to run itself as the world's most profitable whorehouse, where any DaddyCo® with sufficiently deep pockets can buy any action or inaction that will profit it.
> If only the information that's clear as day in Image 3 from 2024 could have been useful for any of the further year of messaging that could have simply not happened.
>
> Oh, it absolutely could have? And still there was no intervention where the instructions fed into this fucking infernal chatbot could have been adjusted to say "If the individual talking to you ADMITS TO SUBSTANCE ABUSE PROBLEMS, stop guiding them on how to take substances"?
>
> Or even better - "Don't ever tell anyone how to take drugs that could fucking kill them".

You can't just tell an LLM not to do this, because it doesn't know what it's doing. It doesn't know anything. It's just fancy auto-complete learned from the worst forums on the internet. It has no idea what can kill you, or what words even mean. It has zero intelligence at all; it just appears like it.
> You can't just tell an LLM not to do this, because it doesn't know what it's doing. It doesn't know anything. It's just fancy auto-complete learned from the worst forums on the internet. It has no idea what can kill you, or what words even mean. It has zero intelligence at all; it just appears like it.

I’m aware, but they all have base set instructions they follow when they do that. Those can be and are very often adjusted in ways that produce very visible results. Look at every time someone lets Elon touch Grok directly.
> On the one hand, the company should have had safeguards to prevent illicit drug conversations.

Conversation, like the language it comprises, is organic, dynamic, unpredictable, and crafty. Words and statements with multiple meanings are the norm, as are indirect constructions that seemingly ignore, elide, or invert their purpose. Humans routinely engage in extended interaction whose seeming content and function have little to do with what's actually happening. Even the dumbest of us is capable of remarkable subtlety.
Being an oligarch means never having to say you're sorry.
> There is a lot of blame to go around in this one.

But only one responsible party, which is sufficiently powerful as to be beyond accountability.
> Sorry, but that kid was a complete moron.

That may be, but it doesn't absolve OpenAI of potential liability.
> OpenAI does not seem to accept that ChatGPT is responsible for Nelson’s death. In a statement provided to Ars, their spokesperson, Drew Pusateri, described Nelson’s death as a “~~heartbreaking~~ very predictable situation” and expressed that “our thoughts are with the ~~family~~ shareholders.” However, Pusateri also emphasized that the ChatGPT model implicated is “no longer available” and suggested that current models are safer.

Fixed the first part, but suggesting something is safer doesn't mean it's safer, has been tested for safety, or has any evidence whatsoever that it can be safer and never lead to the kind of shit that happened here.
> He specifically told his parents that ChatGPT was right because it had access to the internet. Sounds like a talk could have taken place right there about NOT using ChatGPT for anything important.

Oh, come off it. Are you of the belief that the parents knew he was using the bot to try to manage his highs? Are teenagers who take drugs often enough to want a coach to minmax their highs often completely transparent with their parents?
A vulnerable person with an addiction to substance highs was told by a service that the thing that killed him was a good idea. That's never going to be the parents' fault.
> He specifically told his parents that ChatGPT was right because it had access to the internet. Sounds like a talk could have taken place right there about NOT using ChatGPT for anything important.

Ah, I suppose if they didn’t know enough about LLMs then naturally it’s their fault their child is dead. That seems like a sound logical leap.