No, Grok can’t really “apologize” for posting non-consensual sexual images

siliconaddict

Ars Legatus Legionis
13,080
Subscriptor++
Well, what do Grok and its leadership have in common? Both are sociopaths who have no understanding of how human society works. "Sorry" indicates an actual emotional reaction and a feeling of guilt on the part of the person who did something wrong. Saying you're sorry by way of a computer means exactly NOTHING.
 
Upvote
11 (14 / -3)

Fatesrider

Ars Legatus Legionis
25,278
Subscriptor
I had a similar thought reading the other story on this. Grok can't really be relied on to "admit" culpability (issues of legal liability of LLMs aside) because it can simply be generating a sentence in response to a leading prompt, not accurately reporting its own actions. LLMs are notorious for inventing and fabricating events in a comprehensible sequence based on no actual facts. They don't reliably describe their own inner workings, processes, or histories. They just produce statistically plausible, syntactically correct strings related to their prompts. An LLM can't admit to committing crimes any more reliably than it can count the number of times R appears in 'strawberry.'
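(For contrast: the letter-counting task is trivial for any deterministic program, which is the point. A minimal sketch; the function name is illustrative, not anything Grok actually runs:)

```python
# A plain program counts letters by inspecting the string directly,
# so it gets the same correct answer every time. An LLM, by contrast,
# is predicting likely token sequences, with no such guarantee.
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a letter in a word."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```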
I'm wondering why anyone would think that a somewhat advanced toaster/blender/mixer could even "speak".

An AI is a program. It does things it's designed to do. The problem is that those who designed it don't know what their product will do. From a consumer safety point of view, this is a fucking nightmare. People have DIED from using it who would likely not be dead had they not used it.

Trying to shift the blame from the toaster to the user is even worse. Yes, you probably shouldn't be sticking knives into the toaster while it's plugged in to retrieve a locked-in piece of ash that used to be bread because the toaster is defective, but it happens, and the fact that it happens should be baked into the design with the proper circuit breaker to avoid electrocution.

These Silicon Valley fuckwits seem to not care that they have a defective product.

Why? Because these Silicon Valley fuckwits have inked or received maybe as much as a couple of trillion dollars' worth of investment to get their busted-ass product to some level acceptable to the general public. A public who is FINE playing with it for free but almost never willing to PAY FOR IT. And the companies haven't made a penny in profit. "High revenue" isn't profit if you're not making more money than you've spent.

Knowing that their own house of cards will collapse the instant the bubble bursts, they're DESPERATE to keep the "revenue" flowing in, bringing in a bunch of other Silicon Valley fuckwits in different parts of the Tech World to dig a deeper hole into which they'll all fall to some degree - some far deeper than others - praying to their false gods that SOMETHING scores a big hit with the world, because if they don't get a killer, must-have app out of it, they'll need decades to pay off investors, assuming they get enough revenue to stay in business long enough to do that.

IMHO, this is why we're getting the snark from these assholes. They know they're fucked, and there's not a thing they can do to stop the steamroller. This kind of reaction is PANIC based on desperation. So I'd not be at all surprised if that balloon bursts by the end of the month.
 
Upvote
54 (55 / -1)

dark.jade

Smack-Fu Master, in training
29
Subscriptor
showing signs of schizophrenia at worst
This article is great except for this one bit.

This kind of language contributes (however minimally) to stigmatisation of real people who are already misunderstood and misrepresented all the time.

That aside, it is great to see reporting which acknowledges that many reporters are failing their readers by presenting LLM technology as though it had agency or identity.
 
Upvote
25 (32 / -7)

pnplibi

Smack-Fu Master, in training
50
"It’s comforting to think that an LLM like Grok can learn from its mistakes and show remorse when it does something that wasn’t intended. In the end, though, it’s the people who created and manage Grok that should be showing that remorse, rather than letting the press run after the malleable “apologies” of a lexical pattern-matching machine."

Remorse? You are asking for remorse from a company run by Musk? Really?
 
Upvote
22 (22 / 0)

DrewW

Ars Tribunus Militum
2,016
Subscriptor++
Publicizing an apology from Grok treats this failure as a joke and publicity stunt.
Ars is the one outlet not accepting the non-apology at face value and describing the situation in context. Most other media is reporting “Grok apologized” and then telling people they can see pr0n of their favorite celebs on Grok.
 
Upvote
53 (53 / 0)

Dumb Svengali

Ars Scholae Palatinae
653
More importantly, it's effectively user-generated content. It can't both be "a tool for artists" and its own spokesperson. Writing "I'm sorry" with the airbrush in Photoshop would not be an official apology from Adobe.

This is like news outlets reporting some random user’s Facebook post as the official position of Meta.
 
Upvote
37 (38 / -1)

Uragan

Ars Legatus Legionis
11,342
Ars is the one outlet not accepting the non-apology at face value and describing the situation in context. Most other media is reporting “Grok apologized” and then telling people they can see pr0n of their favorite celebs on Grok.
I mean... let's be completely honest here. Grok is just regurgitating what any given user wants it to say. It isn't capable of actual self-reflection. Any "non-apology" or "apology" the software gives is absolutely meaningless. Of course, we're getting radio silence from xAI and Musk about this, which is beyond weird given how much the right loves to point out what a problem pedophilia is... but they're surprisingly quiet when the CSAM is coming from within their own house.
 
Upvote
35 (35 / 0)
One of the things that makes shutting down USAID particularly contemptible, is that USAID is absolutely an "America First" program, the benefits of that program to the US vastly outweighed both the costs to the US and the benefits to the recipients combined.

Literally no one benefits in any meaningful way from shutting down the program except it possibly increasing the likelihood that some people will vote Republican.
Poohters, Bonesaw, and anyone else who is tired of US hegemony globally stand to benefit greatly from the abrupt, sudden cruelty and blatant, wanton disregard for human lives achieved by this de-funding.

All of them can now expand their own regional influence by pointing to US and, correctly, stating we abandoned people.

Fuck Elon. Nazi asshole.
 
Upvote
29 (31 / -2)
I'm wondering why anyone would think that a somewhat advanced toaster/blender/mixer could even "speak".

An AI is a program. It does things it's designed to do. The problem is that those who designed it don't know what their product will do. From a consumer safety point of view, this is a fucking nightmare. People have DIED from using it who would likely not be dead had they not used it.

Trying to shift the blame from the toaster to the user is even worse. Yes, you probably shouldn't be sticking knives into the toaster while it's plugged in to retrieve a locked-in piece of ash that used to be bread because the toaster is defective, but it happens, and the fact that it happens should be baked into the design with the proper circuit breaker to avoid electrocution.

These Silicon Valley fuckwits seem to not care that they have a defective product.

Why? Because these Silicon Valley fuckwits have inked or received maybe as much as a couple of trillion dollars' worth of investment to get their busted-ass product to some level acceptable to the general public. A public who is FINE playing with it for free but almost never willing to PAY FOR IT. And the companies haven't made a penny in profit. "High revenue" isn't profit if you're not making more money than you've spent.

Knowing that their own house of cards will collapse the instant the bubble bursts, they're DESPERATE to keep the "revenue" flowing in, bringing in a bunch of other Silicon Valley fuckwits in different parts of the Tech World to dig a deeper hole into which they'll all fall to some degree - some far deeper than others - praying to their false gods that SOMETHING scores a big hit with the world, because if they don't get a killer, must-have app out of it, they'll need decades to pay off investors, assuming they get enough revenue to stay in business long enough to do that.

IMHO, this is why we're getting the snark from these assholes. They know they're fucked, and there's not a thing they can do to stop the steamroller. This kind of reaction is PANIC based on desperation. So I'd not be at all surprised if that balloon bursts by the end of the month.
Don't threaten me with a good time.
 
Upvote
4 (4 / 0)

nimelennar

Ars Tribunus Angusticlavius
10,034
Why does the article keep referring to non-consensual sexual images of minors?

It seems to me the question of consent doesn't, and shouldn't, enter the discussion of sexual images of minors.
I do think that it's a useful intensifier to say that:

1. They could not have legally consented to sharing sexual images, even if they wanted to, but
2. There is additional trauma being inflicted upon them by the fact that even if they could have consented to share sexual images, they wouldn't have.
 
Upvote
36 (37 / -1)
I do think that it's a useful intensifier to say that:

1. They could not have legally consented to sharing sexual images, even if they wanted to, but
2. There is additional trauma being inflicted upon them by the fact that even if they could have consented to share sexual images, they wouldn't have.
You know, I hadn't thought about it that way, but that makes sense. Thanks.
 
Upvote
24 (24 / 0)

graylshaped

Ars Legatus Legionis
68,202
Subscriptor++
Call it training or learning I guess, but "AI" isn't "trained" and doesn't "learn" in the way we understand those actions as human beings. We need a new description that doesn't obfuscate the truth about what LLMs and "AI" are and how they are created.
I'm going with "programmed."
 
Upvote
17 (17 / 0)

T_Bartholomew

Ars Praetorian
449
Subscriptor
Call it training or learning I guess, but "AI" isn't "trained" and doesn't "learn" in the way we understand those actions as human beings. We need a new description that doesn't obfuscate the truth about what LLMs and "AI" are and how they are created.
“Seeded”
 
Upvote
3 (3 / 0)

thrillgore

Ars Praefectus
4,090
Subscriptor
Humans can't give accurate statements about our reasoning processes either; we pretty much always confabulate post hoc. The part of the brain that does the figuring out and the part that constructs the narrative are separate, and so the narrative construction is always a guess.
That seems like an odd take to me. I think most people are (to greater and lesser degrees, sure) able to observe themselves and come to conclusions about their own reasoning and actions that are much more than just guesses or confabulations.

I'd say that part of being a healthy adult is learning to understand yourself (and your emotions, drives, habits...) in that way. It can get more murky and confused the more 'intuitive' and instinctual/subconscious it gets, but calling it guessing feels like a cop-out to me.
 
Upvote
22 (22 / 0)