If you're looking for something else to blame when it comes to almost anything "mental health" related, you'll inevitably find it. … I also have to question what DeCruise's mental state was prior to using ChatGPT, because I have a hard time believing that reading text output from a chatbot gave an otherwise mentally-healthy man bipolar disorder.
Is this a joke, or do you have so little self-awareness you seriously wrote this in response to a comment literally calling out the lack of self-awareness in this go-to response?

Offtopic, I know, but I never had a huge amount of sympathy for the plaintiff in the Hot Coffee case. Yes, the coffee was hotter than the industry standard, but what kind of idiot drives around in a car with a paper cup full of steaming hot coffee between their legs?!
The problem is the question the comment you quoted highlighted and the one it answers are different.

I don't follow your reasoning.
If an individual experienced messianic psychosis prior to the introduction of LLMs, I think it is reasonable to conclude that an LLM was not responsible.
Why don't we start by not allowing companies to sell a known dangerous product, intentionally advertising its benefits while trying not to mention the downsides, ever, all while weaseling out of most meaningful legal avenues of redress via a few lines in a boilerplate TOS nearly everyone using the product will never read?

I’m going to ignore the bad-faith question at the end of your edit, but I will accept that the phrase “personal responsibility” is a loaded term now thanks to right-leaning astroturfing.
But to my main point: I could be a little out of touch, since reading and commenting on articles about LLMs by definition means I’m not a “normie” lol. However, I would still expect someone using a tool to make an effort to understand the risks. I’m not willing to excuse people who fail to try to do that.
The problem is that you're viewing someone as unintelligent and without agency simply because they fell down a path that society plainly laid out for them to follow.

I really don’t understand why we insist on treating people like they are unintelligent and have no agency. Like, I genuinely need someone to explain it.
What if we simply held Tesla responsible for accidents resulting from their misleading advertisements? Wouldn't that discourage future incidents without the need to ruin quite so many lives in the process?

Another real world example is Tesla autopilot. The way it’s advertised feels borderline criminal. Even the name is a sham.
But if someone got in an accident because they uncritically swallowed something somebody told them, they would be found at fault. I’m really at a loss.
If you're going to troll this badly, at least maybe try to keep it vaguely on topic?

I did. It says she spilled coffee on herself and burned herself. It's something that would have happened whether McDonald's made the coffee or she made it herself.
This particular line of discussion started with your comment here.
Asking someone “x is bad. Why do you like bad things?” is the definition of a bad-faith argument.
My questions after reading the article were:
1) Do users have a responsibility to understand the risks associated with using a product?
2) Should the makers of a product be held responsible for harmful outcomes from using their product?
It’s fun and informative to read the opinions of others who see things differently than I do. But I don’t want to entertain what to me feels like borderline trolls who want to sidestep my arguments to pontificate about “powerful people exploiting the vulnerable”. That’s not what I am discussing.
It definitely was not.

This is word-for-word the opposite of what I said.
I don't think Ars comments are treating anyone "like they are unintelligent and have no agency". That seems to be the way you're viewing the empathy on display in the comments here towards the victim. Hence my reading of your earlier comment.

I said I “don’t understand why we insist on treating people like they are unintelligent and have no agency”. Which means I believe most people are smart and capable of understanding the risks if they are shown to them.
You haven't made much of a "real argument", that I've seen. This entire line of comments has pretty much been a response to your insistence that we expect a reasonable degree of "personal responsibility".

I’ve been sitting here trying to figure out why resistance’s question bothered me so much. I feel like you and most people here agree with me that people have some level of duty to use LLMs with the knowledge that these machines can lead users astray if they’re not careful.
There’s no reason why we can’t say both “companies should be punished when people get hurt” and “you should be careful when you use this”.
I think what upset me is that people are biased against the phrase “personal responsibility” and in turn attributed to me positions I don’t actually hold. I don’t appreciate people ignoring my real arguments to make moral statements.
I don’t live in the US or Europe. I’m aware of the types of people who throw those words around as a way of expressing their contempt for people asserting their rights against those who seek to exploit them, but there wasn’t a better way to word it.
An example that keeps coming to mind where "personal responsibility" gets it entirely wrong, in a way that's clear enough I think most people would actually get it, is what happened with vapes ~10 years ago.

I don't think this is a "duty". I think people should use them that way in the interest of not furthering the collapse of understanding until it's absolutely irretrievable because too large a contingent believes they're super brains capable of things they aren't capable of.
But when I say "should" here, it's not with the conviction that failure to do so is their fault, just a belief that it's in their (and more broadly "society's") best interests if they do so.
I don't feel it's reasonable to expect as a "duty" for people to spend half their lives disbelieving and researching every single claim they hear made, which is effectively what they'd have to do in cases like this and Autopilot. It's much more reasonable to sue the absolute shit out of companies lying through their teeth about their products and mis-marketing them so that this isn't the unreasonable, burdensome expectation of every citizen of the world to live under.
I think "Don't aggressively lie about your product" is a much better standard to try and enforce on a societal level than "Everyone don't trust anyone and spend at least 30-40% of your life researching everything you hear". Society should default to some kind of "ease" or "comfort" for the average person.
Don't get me wrong: I'm not a completely naïve idiot who thinks that under those circumstances we could trust everything everyone says in some future world if we held these people to account—but if we did, it would narrow the window somewhat on what effort we all have to burn on questioning every damned thing about every damned product or service that enters our localized spheres.
Sure. And I was told by industry insiders maybe 3-4 years ago that we'd have human-level general intelligence arriving… just about now, actually. Or certainly soon enough that we'd be seeing massive progress towards that achievement.

I get that people here are way out of their depth in their understanding of AI and its trajectory.
I can wait another 5-10 years for the other shoe to drop but it's not like I expect anyone here to accept that they're wrong (it's not human nature).
Short-term bubbles notwithstanding, the AI future is as inevitable as death and taxes.