I'm surprised there is no explicit mention of the problem that resulted in the development of this policy: "AI makes mistakes / hallucinates and cannot on its own be trusted". It's only covered implicitly in the quote near the end.

If this policy is also handed to new Ars contributors, a clearer warning about the hallucination risk upfront might be a good thing.

> I'm surprised there is no explicit mention of the problem that resulted in the development of this policy: "AI makes mistakes / hallucinates and cannot on its own be trusted". It's only covered implicitly in the quote near the end.

In the end, the why of it doesn't really matter, and only distracts from the path forward.
I'll be around to engage with the discussion and I'll be straight with you about what I can and can't get into. There are areas, particularly anything touching on confidential personnel or legal matters, where I won't be able to comment.
> Say you sketch out a very half-assed version of an article, give it to an LLM and ask it to improve it, and unquestioningly accept every "suggestion" it gives you; can you not at least argue that all you were doing was accepting "style suggestions and structural feedback"?

You're right that any individual clause can be gamed if someone is determined to game it. But the policy doesn't work as isolated sentences. The authorship principle says humans make editorial decisions. The accountability section says you own the result. A reporter who sketches a half-assed draft and rubber-stamps whatever an LLM hands back hasn't exercised editorial judgment, and that's a violation of the policy's core principle, not a clever use of a loophole. The policy can't prevent bad faith, but it does make bad faith indefensible.
> I'm surprised there is no explicit mention of the problem that resulted in the development of this policy: "AI makes mistakes / hallucinates and cannot on its own be trusted".

From the doc:
> Good. Now enforce it.

I suspect they have.
> Good. Now enforce it.

It is and it has been. Our internal policy is over two years old.
"These standards aren’t new. They’ve governed our editorial work since AI tooling became available. What’s new is making them visible to you. You deserve to see the rules we hold ourselves to, not just trust that they exist."Have you, or will you, run an amnesty period for contributors to admit to any content partially written by AI that might currently be up on the web site, so it can be removed or flagged as not meeting these standards?
> Thank you for the transparency. If I recall correctly, after the incident with the AI quotes earlier this year, Ars staff said that a postmortem of that particular situation would be coming later. Are you still planning on doing that, or would it be one of the off-limits areas mentioned above?

I think this is that post. It lines up with the promise pretty well.
> Thank you for the transparency. If I recall correctly, after the incident with the AI quotes earlier this year, Ars staff said that a postmortem of that particular situation would be coming later. Are you still planning on doing that, or would it be one of the off-limits areas mentioned above?

"Ars Technica has completed its review of this matter. The appropriate internal steps have been taken. In the coming weeks, we’ll publish a reader-facing guide explaining how we use—and do not use—AI in our work."

"We do not comment on personnel decisions."

We have said all we can say on that matter.
I'm slightly confused about one aspect of this.

> Anyone who uses AI tools in our editorial workflow is responsible for the accuracy and integrity of the resulting work. This responsibility cannot be transferred to colleagues, editors, or the tools themselves. More broadly, maintaining the standards in this policy is a shared obligation across our editorial operation.

I see no explicit mention of secondary human review, such as by an editor? But that could fall under "shared obligation across our editorial operation."
> I'm surprised there is no explicit mention of the problem that resulted in the development of this policy: "AI makes mistakes / hallucinates and cannot on its own be trusted".

Edit: got it, this is just the policy, not the reasoning behind it.
> Serious question: what do you think about an AI moderation feature? I get the sense y'all are stretched a little thin. Keeping this community tight should be your top priority. Nobody has what Ars has here.

Not sure I love the idea, but if we did want to experiment with it I think all I would personally be interested in is seeing a bot that could generate reports, like users already do, so we could review posts.
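For anyone who wants to picture what that would look like, here is a minimal sketch of the report-only design in Python. Everything in it is invented for illustration; none of these names (Post, Report, ReviewQueue, scan_post) correspond to any real Ars or forum software API. The one idea it encodes is the constraint from the post above: the bot can only file reports into the same queue human moderators already review, and it never acts on a post itself.

```python
from dataclasses import dataclass


@dataclass
class Post:
    id: int
    author: str
    body: str


@dataclass
class Report:
    post_id: int
    reason: str
    reporter: str = "report-bot"  # bot-filed reports are clearly labeled as such


class ReviewQueue:
    """Stand-in for the existing human moderation queue."""

    def __init__(self) -> None:
        self.reports: list[Report] = []

    def file(self, report: Report) -> None:
        self.reports.append(report)


def scan_post(post: Post, queue: ReviewQueue) -> None:
    """Apply cheap heuristics; the only possible outcome is a filed report."""
    body = post.body.lower()
    if "http" in body and len(post.body) < 40:
        queue.file(Report(post.id, "short link-only post, possible spam"))
    if body.count("!") > 10:
        queue.file(Report(post.id, "excessive shouting, possible flamebait"))


if __name__ == "__main__":
    queue = ReviewQueue()
    scan_post(Post(1, "driveby", "http://example.com"), queue)
    scan_post(Post(2, "regular", "A thoughtful comment about the policy."), queue)
    for r in queue.reports:
        print(f"post {r.post_id}: {r.reason} (filed by {r.reporter})")
```

The design choice worth noting is that the bot's output is the same kind of object a user report is: a pointer and a reason, with every decision left to a human.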
> Better to have people flagging things so we can keep an eye on them even if they aren't over the line.

Reading between the lines here, I think Aurich greatly enjoys trolling through duplicate post reports - keep 'em coming!
> I'm slightly confused about one aspect of this. I see no explicit mention of secondary human review, such as by an editor? But that could fall under "shared obligation across our editorial operation."

You're reading the 'shared obligation' language correctly. The policy establishes two things: individual responsibility for the person using the tool, and institutional responsibility for maintaining standards and oversight across the editorial operation. That includes editorial review. We didn't spell out every step of our editorial process because this is a policy document, not an org chart, but the short answer to your question is yes, more than one human looks at our work before it publishes.
> For our April 1st post with the Moonshark I used stock images from Getty for the Moon and the shark. You’ll notice the shark’s nose is cut off, and the mouth is full of fish in the original image.
>
> I drew in the rest of the nose by hand. I removed the fish inside the mouth by hand (using a handful of Photoshop cloning tricks). I would not feel comfortable using generative fill to remove those fish or draw in the nose. That to me would be using AI features too directly in an image. It’s perhaps a subtle distinction vs. removing the letters on a sign, but for me personally I’d rather draw the line early.
>
> Also, you can always check my image credits for details. If it’s a collage with licensed photos it will say something like "Aurich Lawson | Getty Images". I don’t necessarily have to do that, but I prefer the transparency.

I just assumed one of the Artemis crew took the picture.
> I personally am somewhere between Aurich’s approach and “never”.
>
> However I might prefer Ars treat this generative “AI” and related technology, I respect the transparency and appreciate the commitment to continual work at earning trust.
>
> Keep it up and you’ll keep me around. I might grumble about this or that, but call me the loyal opposition.
>
> Cheers!

When you have the talent to do things properly and can use AI as a tool and not a crutch, I am not really offended. Just like the "fill" tool of old. Sure, you can fill by hand, but if there is a tool that doesn't essentially affect the end result, go for it.
> I have a basic philosophy when it comes to AI, in terms of my personal use but also just in how I find it personally interesting:

Thank you guys! I'd been waiting patiently for this (since The Article). I'm glad it's finally here.
I'd been struggling with some of the same questions, as I do amateur photography (and thus amateur post-production) as a hobby; some of the tools are very useful (e.g. subject selection and removal, where in the past I'd have had to spend much more time cloning out that space and fixing anomalies, etc.), but they inhabit a sort of grey area; as noted, you aren't really adding to the photo.
Plus stuff like "remove this speck of dust and do an infill" has been around a long while, and existed long before the AI craze.
This looks like a great starting point for me, too.
> I'd also like to see the forum posting guidelines updated to make clear when and how individual forum users may use AI in their forum or front page article comments.

You want someone to... police the forum for AI?
> I'd also like to see the forum posting guidelines updated to make clear when and how individual forum users may use AI in their forum or front page article comments.

I moderate people who copy and paste AI results as posts: "I asked ChatGPT and it said [big block of pasted text]."
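For what it's worth, the most blatant of those posts tend to announce themselves, so they are easy to flag mechanically. The sketch below is a hypothetical heuristic in Python, not anything Ars actually runs; the phrase list is invented for illustration, and a match would only flag a post for human review.

```python
import re

# Invented phrase list for illustration; real posts are messier, and a match
# here would only flag the post for a human moderator, never remove it.
PASTE_TELLS = [
    r"\bI asked (ChatGPT|Claude|Gemini)\b.{0,40}\bit said\b",
    r"\bas an AI language model\b",
    r"\bhere('s| is) what (ChatGPT|the AI) (said|gave me)\b",
]


def looks_like_pasted_ai(body: str) -> bool:
    """True if the post matches a known 'I pasted a chatbot answer' pattern."""
    return any(re.search(p, body, re.IGNORECASE) for p in PASTE_TELLS)


print(looks_like_pasted_ai("I asked ChatGPT and it said the Moon is cheese."))  # True
print(looks_like_pasted_ai("I think the policy strikes a fair balance."))       # False
```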