Before psychosis, ChatGPT told man “he was an oracle,” new lawsuit alleges

Status
You're currently viewing only chaos215bar2's posts. Click here to go back to viewing the entire thread.
… I also have to question what DeCruise's mental state was prior to using ChatGPT, because I have a hard time believing that reading text output from a chatbot gave an otherwise mentally healthy man bipolar disorder.
If you're looking for something else to blame when it comes to almost anything "mental health" related, you'll inevitably find it.

If you're looking to try to protect someone who was clearly vulnerable from a product that seems almost purpose-built to exploit that vulnerability, then perhaps some accountability is a good first step.

Offtopic I know, but I never had a huge amount of sympathy for the plaintiff in the Hot Coffee case. Yes, the coffee was hotter than the industry standard but what kind of idiot drives around in a car with a paper cup full of steaming hot coffee between their legs?!
Is this a joke, or do you have so little self-awareness you seriously wrote this in response to a comment literally calling out the lack of self-awareness in this go-to response?
 
I don't follow your reasoning.

If an individual experienced messianic psychosis prior to the introduction of LLMs, I think it is reasonable to conclude that an LLM was not responsible.
The problem is the question the comment you quoted highlighted and the one it answers are different.

Obviously psychosis existed before LLMs. The more relevant question was whether we can definitively say that psychosis in any given case must have been a result of some underlying condition.

I would say the answer is a pretty clear "no", with every organized religion being the proverbial elephant in the room.
 
I’m going to ignore the bad-faith question at the end of your edit, but I will accept that the phrase “personal responsibility” is a loaded term now thanks to right-leaning astroturfing.

But to my main point: I could be a little out of touch since reading and commenting on articles about LLMs by definition means I’m not a “normie” lol. However I would still expect someone using a tool to make an effort to understand the risks. I’m not willing to excuse people who fail to try to do that.
Why don't we start by not allowing companies to sell a known dangerous product, intentionally advertising its benefits while trying not to mention the downsides, ever, all while weaseling out of most meaningful legal avenues of redress via a few lines in a boilerplate TOS nearly everyone using the product will never read?

Then maybe we can have a conversation about whether the individual here took sufficient "personal responsibility".
 
I really don’t understand why we insist on treating people like they are unintelligent and have no agency. Like, I genuinely need someone to explain it.
The problem is that you're viewing someone as unintelligent and without agency simply because they fell down a path that society plainly laid out for them to follow.

The way I see people like this is simply unlucky. It was going to happen to someone and they're just the ones who happened to be in the wrong place at the wrong time.

If society is willing to take responsibility first by being willing to hold the rich and powerful accountable rather than constantly jumping to make excuses, then we can talk about personal responsibilities everyone else should be taking all day. Otherwise "personal responsibility" is just a cheap excuse for poor outcomes, made in the interest of protecting those who often enough demonstrate an astounding lack of any kind of responsibility on their own part.

Another real world example is Tesla autopilot. The way it’s advertised feels borderline criminal. Even the name is a sham.
But if someone got in an accident because they uncritically swallowed something somebody told them, they would be found at fault. I’m really at a loss.
What if we simply held Tesla responsible for accidents resulting from their misleading advertisements? Wouldn't that discourage future incidents without the need to ruin quite so many lives in the process?

But more to the point, why would anyone want to intentionally work to create a society with so little trust that every individual must have the ability to completely vet every piece of information they might choose to act on?

Sure, there's some reasonable due diligence, but once you cross what should be a fairly low threshold in a functioning society, citing "personal responsibility" just turns into an excuse to place all the responsibility on an unfortunate few.
 
I did. It says she spilled coffee on herself and burned herself. It's something that would have happened whether McDonald's made the coffee or she made it herself.
If you're going to troll this badly, at least maybe try to keep it vaguely on topic?

I'm not even going to try to argue. You're just throwing the claim you want to make out there repeatedly without even acknowledging comments directly refuting it.
 

Asking someone “x is bad. Why do you like bad things?” is the definition of a bad-faith argument.

My questions after reading the article were:
1) Do users have a responsibility to understand the risks associated with using a product?
2) Should the makers of a product be held responsible for harmful outcomes from using their product?

It's fun and informative to read the opinions of others who see things differently than I do. But I don’t want to entertain what to me feels like borderline trolls who want to sidestep my arguments to pontificate about “powerful people exploiting the vulnerable”. That’s not what I am discussing.
This particular line of discussion started with your comment here.

While the question you're taking objection to may have been a bit leading, I think it captured the gist of your argument pretty directly.

We're not talking about someone who made an obvious mistake like leaving their wallet unattended in their car. Although if I did run into someone who did that, personally my response would be empathy, because it's safe to assume their action was unintentional and there's no good reason they should have suffered for it.

What we're talking about is someone who started using ChatGPT for purposes it's advertised for, missed some early warning signs, and ultimately wound up in therapy as a pretty direct result of the way a product they legally accessed started behaving.

The answers to your questions are an obvious "yes" and "yes", by the way. What people are calling out is that you seem to be a lot more interested in arriving at a "yes" for the first than for the second, which has the direct effect of perpetuating a culture that amplifies harms to those who can't afford to defend themselves while tending to ignore the transgressions of the rich and powerful, who can very much afford to sort out their own mistakes.
 
This is word-for-word the opposite of what I said.
It definitely was not.

I said I “don’t understand why we insist on treating people like they are unintelligent and have no agency”. Which means I believe most people are smart and capable of understanding the risks if they are shown to them.
I don't think Ars comments are treating anyone "like they are unintelligent and have no agency". That seems to be the way you're viewing the empathy on display in the comments here towards the victim. Hence my reading of your earlier comment.

I’ve been sitting here trying to figure out why resistance’s question bothered me so much. I feel like you and most people here agree with me that people have some level of duty to use LLMs with the knowledge that these machines can lead users astray if they’re not careful.

There’s no reason why we can’t say “companies should be punished when people get hurt” and “you should be careful when you use this”.

I think what upset me is that people are biased against the phrase “personal responsibility” and in turn painted me with positions I don’t actually have. I don’t appreciate people ignoring my real arguments to make moral statements.
I don’t live in the US or Europe. I’m aware of the types of people who throw the words around as a way of expressing their contempt for people asserting their rights against those who seek to exploit them but there wasn’t a better way to word it.
You haven't made much of a "real argument", that I've seen. This entire line of comments has pretty much been a response to your insistence that we expect a reasonable degree of "personal responsibility".

I don't think anyone disagrees. It's just that's not a fight we need to be having when the US legal system (which is the context of the discussion) leans very heavily in the other direction.
 
I don't think this is a "duty". I think people should use them that way in the interest of not furthering the collapse of understanding until it's absolutely irretrievable because too large a contingent believes they're super brains capable of things they aren't capable of.

But when I say "should" here, it's not with the conviction that failure to do so is their fault, just a belief that it's in their (and more broadly "society's") best interests if they do so.

I don't feel it's reasonable to expect as a "duty" for people to spend half their lives disbelieving and researching every single claim they hear made, which is effectively what they'd have to do in cases like this and Autopilot. It's much more reasonable to sue the absolute shit out of companies lying through their teeth about their products and mis-marketing them so that this isn't the unreasonable, burdensome expectation of every citizen of the world to live under.

I think "Don't aggressively lie about your product" is a much better standard to try and enforce on a societal level than "Everyone don't trust anyone and spend at least 30-40% of your life researching everything you hear". Society should default to some kind of "ease" or "comfort" for the average person.

Don't get me wrong: I'm not a completely naïve idiot who thinks that under those circumstances we could trust everything everyone says in some future world if we held these people to account—but if we did, it would narrow the window somewhat on what effort we all have to burn on questioning every damned thing about every damned product or service that enters our localized spheres.
An example that keeps coming to mind where "personal responsibility" gets it entirely wrong in a way that's clear enough I think most people would actually get it is what happened with vapes ~10 years ago.

Yes, everyone then knew smoking was terrible. And people were rightly suspicious of the shiny new alternative on offer. But I think what people forget is that social media — even Ars comments — was essentially bombarded by postings about how actually it wasn't really all that unhealthy and was a great alternative for those who wanted it. I remember a similar refrain around cannabis legalization as well.

You know what we know now? Yes, vaping is actually pretty terrible, and to a large extent, it was literally the same old Big Tobacco pushing it. And we also know that contrary to the claims of cannabis being mostly non-addictive, it's actually quite addictive in its various forms.

That's the power of marketing. Even here, amongst a crowd that's generally well informed, the initial message made it through loud and clear. The followup correction then took years to enter public discourse in a meaningful way, because it's really hard to compete with business interests in a position to make a whole lot of money on a new market.

And we expect people to have the "personal responsibility" to understand in the moment when they're being manipulated in a far more subtle way like the one demonstrated in that example?

This brand of "personal responsibility" is just another offshoot of the same branch of libertarianism that dictates that it's everyone else's responsibility to avoid the negative fallout of your latest scheme to make yourself just a little richer.

Real, meaningful responsibility is a team sport, not an individual endeavor.
 
I get that people here are way out of their depth in their understanding of AI and its trajectory.

I can wait another 5-10 years for the other shoe to drop but it's not like I expect anyone here to accept that they're wrong (it's not human nature).

Short term bubbles notwithstanding - AI future is as inevitable as death and taxes.
Sure. And I was told by industry insiders maybe 3–4 years ago that we'd have human-level general intelligence arriving… just about now, actually. Or certainly soon enough that we'd be seeing massive progress toward that achievement.

Maybe if you cited the fundamental research you're basing your timeline on, it might carry some weight.
 