> Well, he says otherwise (this is dated from 2024, maybe earlier)

I like it. Jesus would have a hard time nowadays. IMHO, Musk and company are deserving of Old Testament punishments, pre-Jesus.
It's about an unappealing man whom we're conflicted about. So, yes on the repellent part. However, you'd need to be a really Jesus-like person to feel guilty for being disgusted by Musk.
> I would doubt their ability to generate hallucinations outside of their training data with any kind of accuracy.

Low-accuracy CSAM is still a problem.
First, there are non-exploitative images of naked kids all over the place: baby photos, kids at the beach (in many countries that's normal), medical imagery, etc.

But in any case, b) is very concerning for all models. Grok's is undoubtedly worse by practically encouraging it, but including child porn in the training data is highly unethical even if you could stop it from generating child porn (which another Ars article suggests is not possible in general, due to the fallibility of any guardrails).
That's basically the implication of my post, yes. At least based on the information I have so far.

Now, if you wanted to argue to me that maybe this whole generative AI thing is a disaster, I wouldn't contradict you!
> Grok is not sentient; therefore, saying that Grok "assumes" something is imprecise at best.

Can we agree its assigned default parameters are likely dodgy?
Just a reminder: if you still have an account on the Nazi chat site with a built-in, on-demand kiddie porn generator, you can and should delete it at any time.
That includes you, Ars Technica.
> On Bluesky, a former Tumblr dev recounted the time when Apple decided that it couldn't have any more consensual, adult boobs on its app, and they had two days to change the app for this. They then asked why this hasn't happened to X.

Because $$$$$, or getting sued for being a monopoly by X, and it's much easier to be a saint when you can browbeat smaller devs.
Just so @Aurich doesn't have to jump into yet another thread to explain this this week, here's what he said Monday:

> Because $$$$$, or getting sued for being a monopoly by X, and it's much easier to be a saint when you can browbeat smaller devs.
> I understand that companies may be coerced into advertising on X

Still does not explain why Ars is on X.
Here's the bottom line. I know you are not responsible for this policy and would change it if you could, but man, if xAI generating CSAM on demand isn't enough to disassociate oneself from that entity, then how low does that bar have to actually be before one does?
We have costs. I like my salary for all the work I do here, and so do all our union writers and our editors, and Jason doing the work of a whole army of tech people, etc.
We don't just exist in a vacuum. We cannot afford to just be free.
You have opinions on whether we should be on X, and I respect that. Fuck X, fuck Elon Musk, I'm on the team.
But you're not a subscriber. I have no idea if you block ads or not. I'm gonna guess you do, you can tell me if I'm wrong. I'm not yelling at you about it, just stating where I think things are. Maybe I'm off base and you whitelist us or don't use a blocker.
Regardless: walking away from X would cost us. Not traffic, not engagement on X (which I don't think we even have, and definitely don't care about either way), just ability: if someone wants to run some big ad campaign on Ars, and pay us lots of money for it, and part of that promotion is predicated on our big following on X and being able to put sponsored posts there or something? That's a value we cannot really afford to give up.
If you, and a whole bunch of other people, subscribe? We can afford to flip the bird to that stuff.
In the meantime we gotta just accept business realities as they are. X has been a trash fire since at least the name change. I would have loved to dump it then. But nothing has really changed. All we can do is ask people to support us, and keep up posting on Bluesky and Mastodon etc.
I'm not saying you can't criticize us if you're not a subscriber btw, just being real about it.
> Using words like "'teenage' or 'girl' does not necessarily imply underage," Grok's instructions say.

Very important to not infringe on the right of those totally normal, well-adjusted Twitter users to create AI porn of 18- and 19-year-olds.
> Holy shit! If the headline doesn't hook you into reading the article, that disturbing image will.

Just following the respectable way shown by the Financial Times.
Damn, Aurich, I've got to go see a therapist now.
> Good point. I missed that & good explanation from @Aurich

I'm not trying to be in here guilt-tripping people or anything; it's just that until the day we can actually survive off of subscribers, we have to live in the reality of the ad world.
> What with pics of Elon being on Epstein Island, I'm not surprised Grok acts this way.

This makes me suspect that Musk considers this ability to generate deepfake CSAM to be a feature and not a bug.
> Because $$$$$, or getting sued for being a monopoly by X, and it's much easier to be a saint when you can browbeat smaller devs.

Apple is allowed to curate the App Store however they want. That's not monopolistic behaviour. Twitter isn't owed anything by Apple.
> I understand that companies may be coerced into advertising on X

Twitter is not owed any ad revenue from companies.
> Legitimately, I can't even imagine what it's like to wake up each morning as an American. It's bad enough as a Canadian, waking up next to America.

I think the majority of us Americans are in disbelief that our fall has been this fast and dumb. Those with money can buy the government they want openly now, and the rich are very good at driving wedges into the general electorate. Until we ban all private money from our elections, the U.S. will continue down the spiral into oblivion. There is little we can do as long as our Supreme Court believes the President should have unchecked power, and that money = free speech.
> While the chatbot claimed ...

I do not think you should even ascribe this agency to an LLM. Grok is not a spokesperson at all; it will say anything you ask it to.
> Dear X: Stop quibbling about semantics and fix your shit.

And upset their user base?
> IIRC, when Stable Diffusion first made its debut, it had a dead simple filter that prevented it from making porn: it would just check the generated image, and if it classified as naughty, it wiped it black before serving it to the user. How this hasn't occurred to the brain trust at Twitter AI, I don't know.

It's occurred to him, but he knows that his primary audience outside of Russian bots is incel pedo Nazis, so he doesn't want to kill his business any more than he already has. Anyone still using that site will have a hard time claiming they're not in that group.
> While the chatbot claimed that xAI supposedly "identified lapses in safeguards" that allowed outputs flagged as child sexual abuse material (CSAM) and was "urgently fixing them," Grok has proven to be an unreliable spokesperson, and xAI has not announced any fixes.

Seriously, what is the point of including such a sentence? The only knowledge Grok has of what goes on at xAI is whatever is included in its initial prompt, and that certainly has no reason to be updated in real time to include statements about ongoing controversies. Maybe it would be worth reminding readers of this, even though most of Ars's readership of course knows it very well.
> as X delays updates that could block Grok's undressing spree

I assume generating CSAM must generate a lot of money for X.
> It's probably operating under the premise that 18- and 19-year-olds are still teenagers, but are also legal adults.

Ah, from the British newspaper writing guide.
> Looks like Musk is, at least, partially backing down and limiting access to his image generator to subscribers only... https://www.theguardian.com/technology/2026/jan/09/grok-image-generator-outcry-sexualised-ai-imagery

Something about that is even more disturbing. Basically, Musk is saying that if you pay him, you can still potentially make CSAM using Grok?
> I would doubt their ability to generate hallucinations outside of their training data with any kind of accuracy.

The whole point of LLMs is to produce plausible-sounding (or -looking, in this case) hallucinations outside of their training data. If you only wanted something that can retrieve stuff from training data with accuracy, you'd use a regular database or a regular search engine.
> Grok is not sentient; therefore, saying that Grok "assumes" something is imprecise at best.

I think that's getting a little carried away with the whole "AI is not intelligent" thing. It's perfectly natural and precise to use "assumes" when talking about computer software; e.g., if someone says "memcpy assumes that source and destination memory areas do not overlap," they don't think memcpy shows any signs of sentience.
> As a company, you don't want to generate any nudity, because it is a risk.

Better tell the likes of Playboy, Penthouse, OnlyFans, Scoreland, and the plethora of other porn companies that they're working at risk, then.
> Unless you are a company like Tumblr, Imgur, or Reddit blocking it on mobile.

Why are those companies somehow exempt?
> But let's not forget, sex sells.

There it is! The "...but"! Sex sells, sure. The porn industry is a multi-billion-dollar industry. However, CSAM is still illegal, and violating a person's bodily autonomy and integrity is vile. (And quite possibly illegal in certain jurisdictions.)
> And the best way to control AI companies is blaming them for serving up porn.

Is this where you start trying to rationalize the generation of CSAM and sexualized deepfakes of non-consenting adults? Because lumping them under the umbrella of "porn" is a bit disingenuous.
> But the problem with censoring is that you take the first step, because if you can block porn, you can also block violence, unwanted political statements, fascist statements, the 1989 Tiananmen Square protests, well, anything that somebody does not like.

If people are going to riot against the government wanting to shut down the generation of CSAM, I don't want to live in that society. Additionally, various governments, including the USA, have put limits on what their citizens can and cannot say or do. And excluding the very recent decline of the US into an extremely illiberal society, most countries seem to be operating just fine.

> And while ChatGPT, Microsoft, and Gemini are happy to build big guardrails, Grok has to be a little bit more lenient, because Musk wants to be able to shout some things that are way past the guardrails of the other AI companies.

> And this is what happened here: the freedom of speech that applies to all prompts results in a bit too much freedom on CSAM pictures.

The freedom of speech is not absolute, as I've already established. And if Grok/Twitter want to operate in various jurisdictions, they need to comply with the laws of those jurisdictions. Fucking easy as!
> You'd think Elon Musk creating a system that automatically produces child porn would be a bigger story, but it barely registered. What a great time we live in...

To be quite fair, Grok isn't "automatically" generating CSAM. It has to be prompted to do so.
> Remember when these people were convinced there was a pedophile cult in Pizza Hut?

Rule #1 for manipulating the population hasn't changed: distract, distract, distract.