Grok assumes users seeking images of underage girls have “good intent”

sporkinum

Ars Tribunus Militum
2,270
Well, he said otherwise (this is dated 2024, maybe earlier):



It's about an unappealing man whom we're conflicted about. So, yes on the repellent part. However, you'd need to be a really Jesus-like person to feel guilty for being disgusted by Musk.
I like it. Jesus would have a hard time nowadays. IMHO, Musk and company are deserving of Old Testament punishments, pre-Jesus.
 
Upvote
3 (3 / 0)

numerobis

Ars Tribunus Angusticlavius
50,868
Subscriptor
I would doubt their ability to generate hallucinations outside of their training data with any kind of accuracy.
Low-accuracy CSAM is still a problem.

But in any case, b) is very concerning for all models. Grok's is undoubtedly worse by practically encouraging it, but including child porn in the training data is highly unethical even if you could stop the model from generating it (which another Ars article suggests is not possible in general, due to the fallibility of any guardrails).
First, there are non-exploitative images of naked kids all over the place: baby photos, kids at the beach (in many countries that's normal), medical imagery, etc.

Second, there's also CSAM all over the place. Training sets are basically everything that humanity has ever digitized that AI companies can get their hands on. There's CSAM in the training sets.

You can try to filter out CSAM from your training set, and you can add a filter after the generative bits that rejects CSAM that it detects. But the filters are never going to be perfect, and perverts will quickly learn how to thwart them.


Now, if you wanted to argue to me that maybe this whole generative AI thing is a disaster, I wouldn't contradict you!
 
Upvote
28 (28 / 0)
Just a reminder, if you still have an account on the Nazi chat site with built-in on-demand kiddie porn generator, you can and should delete it at any time.

That includes you, Ars Technica.

Keeping in mind that if you delete the account, or don't log in regularly, X will just reassign it to someone else after 30 days. If your account is tied to your real name or a handle you're widely known by, even if you stop using it you may want to take steps to make sure it doesn't get appropriated by some grifter chud who'll use it to generate CSAM in your name.
 
Upvote
18 (19 / -1)
D

Deleted member 221201

Guest
On Bluesky, a former Tumblr dev recounted the time when Apple decided that it couldn't have any more consensual, adult boobs in its app, and they had two days to change the app to comply. They then asked why this hasn't happened to X.
Because $$$$$, or getting sued for being a monopoly by X, and it's much easier to be a saint when you can browbeat smaller devs.

I understand that companies may be coerced into advertising on X

Still does not explain why Ars is on X
 
Upvote
-9 (0 / -9)

Pooga

Ars Scholae Palatinae
1,345
Subscriptor++
Because $$$$$, or getting sued for being a monopoly by X, and it's much easier to be a saint when you can browbeat smaller devs.

I understand that companies may be coerced into advertising on X

Still does not explain why Ars is on X
Just so @Aurich doesn't have to jump into yet another thread to explain this this week, here's what he said Monday:

I know you are not responsible for this policy and would change it if you could, but man, if xAI generating CSAM on demand isn't enough to disassociate oneself from that entity, then how low does that bar have to actually be before one does?
Here's the bottom line.

We have costs. I like my salary for all the work I do here, and so do all our union writers and our editors and Jason doing the work of a whole army of tech people etc.

We don't just exist in a vacuum. We cannot afford to just be free.

You have opinions on if we should be on X, and I respect that. Fuck X, fuck Elon Musk, I'm on the team.

But you're not a subscriber. I have no idea if you block ads or not. I'm gonna guess you do, you can tell me if I'm wrong. I'm not yelling at you about it, just stating where I think things are. Maybe I'm off base and you whitelist us or don't use a blocker.

Regardless. Walking away from X would cost us. Not traffic, not engagement on X (which I don't think we even have, and definitely don't care about either way) just ability. If someone wants to run some big ad campaign on Ars, and pay us lots of money for it, and part of that promotion is predicated on our big following on X and being able to put sponsored posts there or something? That's a value we cannot really afford to give up.

If you, and a whole bunch of other people, subscribe? We can afford to flip the bird to that stuff.

In the meantime we gotta just accept business realities as they are. X has been a trash fire since at least the name change. I would have loved to dump it then. But nothing has really changed. All we can do is ask people to support us, and keep up posting on Bluesky and Mastodon etc.

I'm not saying you can't criticize us if you're not a subscriber btw, just being real about it.
 
Upvote
32 (32 / 0)

Aurich

Director of Many Things
41,241
Ars Staff
Just so @Aurich doesn't have to jump into yet another thread to explain this this week, here's what he said Monday:
(screenshot of the statement attached)
 
Upvote
12 (12 / 0)

andocom

Ars Scholae Palatinae
858
Using words like “‘teenage’ or ‘girl’ does not necessarily imply underage,” Grok’s instructions say.
Very important not to infringe on the rights of those totally normal, well-adjusted Twitter users to create AI porn of 18- and 19-year-olds.

You would have thought the abundance of caution would have run the other way for some companies, like those with functioning legal departments, for instance.
 
Upvote
2 (2 / 0)

Aurich

Director of Many Things
41,241
Ars Staff
Good point. I missed that & good explanation from @Aurich
I'm not trying to be in here guilt-tripping people or anything; it's just that until the day we can actually survive off of subscribers, we have to live in the reality of the ad world.

You have a 16 year old account and 14,000 posts, you spend a lot of time here clearly. Which is great. We haven't put up a paywall, and I hope we never do. You don't have to subscribe to have an opinion here. I'm gonna remind people it's an option and makes a difference, but that's where that ends.

And I have no idea what our future with X is. I've done what I can within my limited capacity on these things to minimize its effect on the site. It's a very deliberate decision to have Mastodon and Bluesky in our footer and no X. We certainly aren't trying to drive traffic to it.

But just in general, be it Ars or somewhere else, as long as journalism has to depend on ads to survive it's always going to be something where you navigate the waters the best you can.
 
Upvote
17 (17 / 0)

Xenocrates

Ars Tribunus Militum
2,486
Subscriptor++
To add onto @Aurich's point, even if they could walk away from X financially, they may have issues with someone claiming the handle and engaging in the sort of behavior that is normal on X, and would utterly defame Ars and the writers there. Unfortunately, because X doesn't validate anything from anyone, and just wants cash, it's super vulnerable to squatters grabbing IP related handles and impersonating brands.
 
Upvote
16 (16 / 0)

Uragan

Ars Legatus Legionis
11,342
Because $$$$$, or getting sued for being a monopoly by X, and it's much easier to be a saint when you can browbeat smaller devs.
Apple is allowed to curate the App Store however they want. That's not monopolistic behaviour. Twitter isn't owed anything by Apple.

I understand that companies may be coerced into advertising on X
Twitter is not owed any ad revenue from companies.
 
Upvote
5 (5 / 0)

Drjaydub

Smack-Fu Master, in training
70
Subscriptor
Legitimately, I can't even imagine what it's like to wake up each morning as an American. It's bad enough as a Canadian, waking up next to America.
I think the majority of us Americans are in disbelief our fall has been this fast and dumb. Those with money can buy the government they want openly now and the rich are very good at driving wedges into the general electorate. Until we ban all private money from our elections the U.S. will continue down the spiral into oblivion. There is little we can do as long as our Supreme Court believes the President should have unchecked power, and that money = free speech.
 
Upvote
16 (16 / 0)
IIRC, when Stable Diffusion first made its debut, it had a dead simple filter that prevented it from making porn: it would just check the generated image, and if it was classified as naughty, it wiped it to black before serving it to the user. How this hasn't occurred to the brain trust at Twitter AI, I don't know.
It's occurred to him, but he knows that his primary audience outside of Russian bots is incel pedo Nazis, so he doesn't want to kill his business any more than he already has. Anyone still using that site will have a hard time claiming they're not in that group.
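For what it's worth, the filter pattern described in the quote above is genuinely simple. Here's a minimal sketch; `is_nsfw` is a hypothetical stand-in for a real classifier (the actual Stable Diffusion release reportedly used a CLIP-based safety checker), and images are represented as plain pixel grids for illustration:

```python
def is_nsfw(image: list[list[int]]) -> bool:
    """Hypothetical stand-in for a real NSFW classifier.
    For demonstration, flag any image containing a sentinel pixel value."""
    return any(255 in row for row in image)

def safety_filter(image: list[list[int]]) -> list[list[int]]:
    """Post-generation filter: if the classifier flags the image,
    replace every pixel with black (0) before serving it."""
    if is_nsfw(image):
        return [[0] * len(row) for row in image]
    return image

# A flagged image comes back fully blacked out...
blocked = safety_filter([[10, 255], [30, 40]])
# ...while a clean image passes through unchanged.
passed = safety_filter([[10, 20], [30, 40]])
```

The hard part, as noted earlier in the thread, is the classifier itself: the wrapper logic is trivial, but no classifier is perfect, and determined users will probe its failure modes.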
 
Upvote
5 (5 / 0)
The only real solution to this is adding a cost to the companies creating abuse material. Make them responsible, make them liable, make them pay ... big ... if their products produce such material.

So obviously it won't happen.
Corporate profit has always been holy in the US. It's probably even more holy now if you are a friend of the Mob President, but neither Democrats nor Republicans ever wish to actually make the rich and powerful pay.
 
Upvote
4 (4 / 0)

Martin123

Ars Scholae Palatinae
662
Subscriptor
While the chatbot claimed that xAI supposedly “identified lapses in safeguards” that allowed outputs flagged as child sexual abuse material (CSAM) and was “urgently fixing them,” Grok has proven to be an unreliable spokesperson, and xAI has not announced any fixes.
Seriously, what is the point of including such a sentence? The only knowledge Grok has of what goes on at xAI is whatever is included in its initial prompt, and that certainly has no reason to be updated in real time with statements about ongoing controversies. Maybe it would be worth reminding readers of this, even though most of Ars's readership of course knows it very well.
 
Upvote
5 (5 / 0)

Uragan

Ars Legatus Legionis
11,342
Upvote
10 (10 / 0)

nivedita

Ars Tribunus Militum
2,256
Subscriptor
I would doubt their ability to generate hallucinations outside of their training data with any kind of accuracy.
The whole point of LLMs is to produce plausible-sounding (or -looking in this case) hallucinations outside of their training data. If you only wanted something that can retrieve stuff from training data with accuracy, you’d use a regular database or a regular search engine.
 
Upvote
0 (1 / -1)

nivedita

Ars Tribunus Militum
2,256
Subscriptor
Grok is not sentient; therefore, saying that Grok "assumes" something is imprecise at best.
I think that’s getting a little carried away with the whole “AI is not intelligent” thing. It’s perfectly natural and precise to use “assumes” when talking about computer software; e.g., if someone says “memcpy assumes that the source and destination memory areas do not overlap,” they don’t think memcpy shows any signs of sentience.
 
Upvote
0 (2 / -2)

Uragan

Ars Legatus Legionis
11,342
As a company, you don't want to generate any nudity, because it is a risk.
Better tell the likes of Playboy, Penthouse, OnlyFans, Scoreland and the plethora of other porn companies that they’re working at risk then.

Unless you are a company like Tumblr, Imgur, or Reddit blocking it on mobile.
Why are those companies somehow exempt?

But let's not forget, sex sells.
There it is! The “…but”! Sex sells, sure. The porn industry is a multi-billion dollar industry. However, CSAM is still illegal and violating a person’s bodily autonomy and integrity is vile. (And quite possibly illegal in certain jurisdictions.)

And the best way to control ai companies is blaming them serving up porn.
Is this where you start trying to rationalize the generation of CSAM and sexualized deepfakes of non-consenting adults? Because lumping them under the umbrella of “porn” is a bit disingenuous.

But the problem with censoring is that you take the first step, because if you can block porn, you can also block violence, unwanted political statements, fascist statements, the 1989 Tiananmen Square protests, well, anything that somebody does not like.
If people are going to riot against the government wanting to shut down the generation of CSAM, I don’t want to live in that society. Additionally, various governments, including the USA, have put limits on what their citizens can and cannot say or do. And excluding the very recent decline of the US into an extremely illiberal society, most countries seem to be operating just fine.

And while ChatGPT, Microsoft, and Gemini are happy to put up big guardrails, Grok has to be a little bit more lenient because Musk wants to be able to shout some things that are way past the guardrails of the other AI companies.

And this is what happened here: the freedom of speech that applies to all prompts results in a bit too much freedom on CSAM pictures.
The freedom of speech is not absolute as I’ve already established. And if Grok/Twitter want to operate in various jurisdictions, they need to comply with the laws of those jurisdictions. Fucking easy as!
 
Upvote
13 (13 / 0)

Uragan

Ars Legatus Legionis
11,342
You'd think Elon Musk creating a system that automatically produces child porn would be a bigger story but it barely registered. What a great time we live in ...
To be quite fair, Grok isn’t “automatically” generating CSAM. It has to be prompted to do so.

And what are you talking about? I’ve seen plenty of stories covering this topic, which point out the absolute lack of action or even a statement from Musk. Then I read the article that @NickHos posted just this morning, and even there, Musk is now explicitly saying that he seems to be okay with making CSAM as long as people pay him for the capability, which is absolutely terrifying. But apparently he’s willing to cater to the absolute dregs of humanity and wallow in their company.
 
Upvote
6 (6 / 0)

MilanKraft

Ars Tribunus Angusticlavius
6,921
Remember when these people were convinced there was a pedophile cult in Pizza Hut
Rule #1 for manipulating the population hasn't changed: distract, distract, distract.

For those who follow ("I'd rather have Putin than a Dem" MAGA-tards, influencers, etc), it provides emo-fuel for their social media goals, which are mostly about circumventing and undermining trust in legitimate outlets like AP or Reuters, and giving stooge politicians their weekly talking points (even if the semantics aren't always the same).

For those who oppose, it forces a reactive, often defensive posture, and drains attention away from the fights that matter most and ultimately, done often enough, exhausts them.

Not sure how we're going to overcome Rule #1 (because they're still using it quite effectively), but we better find a way. For Musk's part, he is nothing but a glorified, attention-whoring troll in this context. He doesn't believe half the shit he tweets about; like a petulant child, he simply enjoys sowing outrage among those who might oppose him. A classic case of a person who has evolved to understand the intricacies of business, and maybe some politics in the current era, but who is emotionally still a slave to his inner 19-year-old. That also partly explains his 14 kids from 11 different women, or whatever the tally is now. His belief in his own "superior genes" is the other part.
 
Last edited:
Upvote
2 (2 / 0)