Our newsroom AI policy

Ken Fisher

Founder & Editor-in-Chief
19,422
Ars Staff
A few thoughts from me before the thread gets going.

I've been paying close attention to how other major outlets are approaching the same questions we are, and I've been pleasantly surprised. Many excellent newsrooms are arriving at the same principles independently, which gives me some confidence that the industry is landing in the right place (although there will be sad exceptions, I’m sure).

The principles behind this policy are settled. How we apply them will develop as the tools and the landscape evolve. If you see a real-world scenario this policy doesn't address, I want to hear about it. This document is better for being pressure-tested.

I'll be around to engage with the discussion and I'll be straight with you about what I can and can't get into. There are areas, particularly anything touching on confidential personnel or legal matters, where I won't be able to comment.

Also, I know some of you will argue that we shouldn't use these tools at all. I understand the concern, but I’m not going to debate this issue on that level. Like any tool, AI has strengths and weaknesses, and getting good results requires understanding both. What we're seeing across the industry is that the people who use these tools most successfully are domain experts working within their area of expertise, and within clear guardrails. Expertise alone isn't enough. That's why we have a policy, why we require training and approval, and why violations have consequences.

A journalism outlet like ours has to earn and re-earn trust every day. It's the nature of our business. We've made this policy as clear as possible to earn and keep earning that trust. We know that some people won't like every aspect of it, and that's fine. We have to judge what is best for us and our values, and that is a necessarily subjective thing to do. There is no policy that can smooth out the road ahead. Journalism didn't lack for challenges before the advent of AI. But a policy that centers humans first is a strong and, dare I say, obvious place to start.
 
Upvote
593 (597 / -4)

Aurich

Director of Many Things
41,099
Ars Staff
For transparency here is how I use AI tools when I make images.

tl;dr — I basically don’t.

I downgraded my Creative Cloud subscription with Adobe to their lower tier to avoid paying for AI features that I not only don't want but also don't want to support.

That said, I do use some of the basic tools in Photoshop, like automatic subject selection, or occasionally the remove tool, which quickly clones out an area.

The former is more machine learning, but the latter is technically generative. It does not use Adobe’s generative credit system though.

I might use generative fill to clone out the letters on a sign and replace them with new words for a gag image, for instance. I could hand-clone them out, but it's faster to use a tool. The end result is just continuing the background texture of something before I put something new over it.

I consider these basic time saving measures for non-creative purposes. A little noisy texture in the background isn’t important enough in my mind to stress over.

Otherwise the work you see from me is either original or a combination of collage and manipulation from licensed images. For example, the AI beef image the other day used licensed photos from Getty of Sam Altman, Elon Musk, and a package of meat.

collage.jpg


I did some subtle manipulation of the meat image to add a shrink wrap over it, made the AI BEEF sticker in Illustrator, and used the subject selection tool in Photoshop to quickly grab the men out of their backgrounds before doing some filtering to turn them into stickers. Dropped in a wood texture background, did a little general color grading, etc., and called it done.

That’s a pretty typical sort of workflow. Probably took me 15-20 minutes all told.

I’ll give you an example of what is, to me, beyond using those tools:

For our April 1st post with the Moonshark I used stock images from Getty for the Moon and the shark. You’ll notice the shark’s nose is cut off, and the mouth is full of fish in the original image.

shark0collage.jpg


I drew in the rest of the nose by hand. I removed the fish inside the mouth by hand (using a handful of Photoshop cloning tricks). I would not feel comfortable using generative fill to remove those fish or draw in the nose. That to me would be using AI features too directly in an image. It’s perhaps a subtle distinction vs. removing the letters on a sign, but for me personally I’d rather draw the line early.

Also, you can always check my image credits for details. If it’s a collage with licensed photos it will say something like "Aurich Lawson | Getty Images". I don’t necessarily have to do that, but I prefer the transparency.
 
Upvote
821 (825 / -4)

UserIDAlreadyInUse

Ars Tribunus Angusticlavius
7,636
Subscriptor
I'm surprised there is no explicit mention of the problem that resulted in the development of this policy: "AI makes mistakes / hallucinates and cannot on its own be trusted". It's only covered implicitly in the quote near the end.
In the end, the why of it doesn't really matter, and only distracts from the path forward.
 
Upvote
125 (132 / -7)

AdamWill

Ars Scholae Palatinae
947
Subscriptor++
It's great to see this; however, there may be devils in the details.

"AI-powered tools may be used to assist with editing and workflow in ways that don’t displace human authorship, including grammar checks, style suggestions, and structural feedback."

"Style suggestions and structural feedback" seems like it could be a truck-sized loophole, if an author wanted to wield it that way. Say you sketch out a very half-assed version of an article, give it to an LLM and ask it to improve it, and unquestioningly accept every "suggestion" it gives you; can you not at least argue that all you were doing was accepting "style suggestions and structural feedback"?
 
Upvote
71 (94 / -23)

Aurich

Director of Many Things
41,099
Ars Staff
I'm surprised there is no explicit mention of the problem that resulted in the development of this policy: "AI makes mistakes / hallucinates and cannot on its own be trusted". It's only covered implicitly in the quote near the end.

If this policy is also handed to new Ars contributors, a clearer warning about the hallucination risk upfront might be a good thing.

"When we attribute a statement, a position, or a quote to a named source, that material comes from direct engagement with interviews, transcripts, published statements, or documents reviewed by the reporter. AI tools must not be used to generate, extract, or summarize material that is then attributed to a named source, whether as a direct quote, a paraphrase, or a characterization of someone’s views."
 
Upvote
263 (263 / 0)
Kudos on a non-offensive policy. I don't want to read the work of an AI, and it saddens me that many tech executives think I do.

Also, I would really welcome a story that might show how a journalist could use AI on a big dataset. Not all of us are using these tools, so this sounds intriguing but also mysterious to me.

Serious question: what do you think about an AI moderation feature? I get the sense y'all are stretched a little thin. Keeping this community tight should be your top priority. Nobody has what Ars has here.
 
Upvote
111 (127 / -16)
I'll be around to engage with the discussion and I'll be straight with you about what I can and can't get into. There are areas, particularly anything touching on confidential personnel or legal matters, where I won't be able to comment.

Thank you for the transparency. If I recall correctly, after the incident with the AI quotes earlier this year, Ars staff said that a postmortem of that particular situation would be coming later. Are you still planning on doing that, or would it be one of the off-limits areas mentioned above?
 
Upvote
168 (174 / -6)

Hypatia

Ars Centurion
237
Subscriptor
I personally am somewhere between Aurich’s approach and “never”.
However much I might prefer Ars treat this generative “AI” and related technology differently, I respect the transparency and appreciate the commitment to continually earning trust.

Keep it up and you’ll keep me around. I might grumble about this or that, but call me the loyal opposition.

Cheers!
 
Upvote
107 (108 / -1)

Ken Fisher

Founder & Editor-in-Chief
19,422
Ars Staff
Say you sketch out a very half-assed version of an article, give it to an LLM and ask it to improve it, and unquestioningly accept every "suggestion" it gives you; can you not at least argue that all you were doing was accepting "style suggestions and structural feedback"?
You're right that any individual clause can be gamed if someone is determined to game it. But the policy doesn't work as isolated sentences. The authorship principle says humans make editorial decisions. The accountability section says you own the result. A reporter who sketches a half-assed draft and rubber-stamps whatever an LLM hands back hasn't exercised editorial judgment, and that's a violation of the policy's core principle, not a clever use of a loophole. The policy can't prevent bad faith, but it does make bad faith indefensible.
 
Upvote
451 (452 / -1)

Ken Fisher

Founder & Editor-in-Chief
19,422
Ars Staff
I'm surprised there is no explicit mention of the problem that resulted in the development of this policy: "AI makes mistakes / hallucinates and cannot on its own be trusted".
From the doc:
"AI output is never treated as an authoritative source. Everything must be verified" is saying exactly that. And "reporters may not represent material as reviewed unless they have examined it directly" is the operational consequence of not trusting AI output.
 
Upvote
198 (198 / 0)

Aurich

Director of Many Things
41,099
Ars Staff
Have you, or will you, run an amnesty period for contributors to admit to any content partially written by AI that might currently be up on the web site, so it can be removed or flagged as not meeting these standards?
"These standards aren’t new. They’ve governed our editorial work since AI tooling became available. What’s new is making them visible to you. You deserve to see the rules we hold ourselves to, not just trust that they exist."
 
Upvote
259 (259 / 0)
Thank you for the transparency. If I recall correctly, after the incident with the AI quotes earlier this year, Ars staff said that a postmortem of that particular situation would be coming later. Are you still planning on doing that, or would it be one of the off-limits areas mentioned above?
I think this is that post. It lines up with the promise pretty well.

Ars Technica has completed its review of this matter. The appropriate internal steps have been taken. In the coming weeks, we’ll publish a reader-facing guide explaining how we use—and do not use—AI in our work.

We do not comment on personnel decisions.
 
Upvote
184 (186 / -2)

Eric

Ars Legatus Legionis
19,168
Ars Staff
Thank you for the transparency. If I recall correctly, after the incident with the AI quotes earlier this year, Ars staff said that a postmortem of that particular situation would be coming later. Are you still planning on doing that, or would it be one of the off-limits areas mentioned above?
We have said all we can say on that matter.
 
Upvote
201 (207 / -6)

Wtcher

Ars Centurion
269
Subscriptor++
Thanks, you guys! I'd been waiting patiently for this (since The Article). I'm glad it's finally here.

I'd been struggling with some of the same questions, since I do amateur photography (and thus amateur post-production) as a hobby. Some of the tools are very useful (e.g., subject selection and removal, where in the past I'd have had to spend much more time cloning out that space and fixing anomalies), but they inhabit a sort of grey area; as noted, you aren't really adding to the photo.

Plus stuff like "remove this speck of dust and do an infill" has been around a long while, and existed long before the AI craze.

This looks like a great starting point for me, too.
 
Upvote
108 (108 / 0)

Jeff S

Ars Legatus Legionis
11,045
Subscriptor++
Anyone who uses AI tools in our editorial workflow is responsible for the accuracy and integrity of the resulting work. This responsibility cannot be transferred to colleagues, editors, or the tools themselves. More broadly, maintaining the standards in this policy is a shared obligation across our editorial operation.
I'm slightly confused about one aspect of this.

I see no explicit mention of secondary human review, such as by an editor. But that could fall under "shared obligation across our editorial operation."

One way that paragraph could be interpreted is that Ars leadership is saying that only individual authors are responsible for the accuracy of Ars reporting.

But as a reader, I kind of expect that more than one human will be looking at an article and checking it, because I expect Ars as an organization to be responsible for the content.

But again, that may be implied by the "shared obligation" language.
 
Last edited:
Upvote
9 (20 / -11)
I'm surprised there is no explicit mention of the problem that resulted in the development of this policy: "AI makes mistakes / hallucinates and cannot on its own be trusted".

Edit: got it, this is just the policy, not the reasoning behind it.

Confabulations are one obvious reason why AI should not be used, but not the only reason. You don't want to tie a policy too closely to any specific reasoning, because that can limit how you apply the policy if the assumptions behind the reasoning change. (Though, of course, you might want to revisit the policy in that case.)

Hypothetically, if a new AI model arose that was guaranteed to never fabricate output, would it then be okay to use? I don't think so. There are plenty of other issues with AI slop besides "hallucinations."
 
Upvote
71 (73 / -2)

Aurich

Director of Many Things
41,099
Ars Staff
Serious question: what do you think about an AI moderation feature? I get the sense y'all are stretched a little thin. Keeping this community tight should be your top priority. Nobody has what Ars has here.
Not sure I love the idea, but if we did want to experiment with it I think all I would personally be interested in is seeing a bot that could generate reports, like users already do, so we could review posts.

I don't know how valuable that would really be though. Our rules are not really complicated, but there's a lot of nuance in how we apply them.

Honestly our user base is pretty great about reporting problems already, and as it is we reject probably 75% of the reports because they don't require moderation. Which is fine! Better to have people flagging things so we can keep an eye on them even if they aren't over the line.

Having a bot add to that doesn't seem that helpful.
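If anyone's curious what I mean by a report-only bot, here's a purely hypothetical sketch. Every name and the trigger heuristic here are invented for illustration; this is not anything we run or plan to run:

```python
# Hypothetical sketch of a "report-only" moderation bot: it never removes
# or edits anything, it only files reports into a queue for human review,
# exactly the way user flags already work.
from dataclasses import dataclass, field


@dataclass
class Report:
    post_id: int
    reason: str


@dataclass
class ReportQueue:
    reports: list = field(default_factory=list)

    def flag(self, post_id: int, reason: str) -> None:
        # The bot's only capability: append a report. Humans decide the rest.
        self.reports.append(Report(post_id, reason))


def bot_review(post_id: int, text: str, queue: ReportQueue) -> None:
    # Stand-in heuristic; a real bot would use a model score, but the
    # design point is the same: the output is a report, never an action.
    if "I asked ChatGPT" in text:
        queue.flag(post_id, "possible pasted AI output")


queue = ReportQueue()
bot_review(1, "I asked ChatGPT and it said: [big block of pasted text]", queue)
bot_review(2, "Great policy, thanks!", queue)
print(len(queue.reports))  # prints 1: only the first post gets flagged
```

The whole design point is in that last step: the bot's output lands in the same queue as user reports, and a human reviews everything before anything happens.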
 
Upvote
193 (193 / 0)

Ken Fisher

Founder & Editor-in-Chief
19,422
Ars Staff
I'm slightly confused about one aspect of this.

I see no explicit mention of secondary human review, such as by an editor. But that could fall under "shared obligation across our editorial operation."
You're reading the 'shared obligation' language correctly. The policy establishes two things: individual responsibility for the person using the tool, and institutional responsibility for maintaining standards and oversight across the editorial operation. That includes editorial review. We didn't spell out every step of our editorial process because this is a policy document, not an org chart, but the short answer to your question is yes, more than one human looks at our work before it publishes.
 
Upvote
162 (162 / 0)

Person_Man

Ars Tribunus Militum
1,516
Subscriptor
For our April 1st post with the Moonshark I used stock images from Getty for the Moon and the shark. You’ll notice the shark’s nose is cut off, and the mouth is full of fish in the original image.


I drew in the rest of the nose by hand. I removed the fish inside the mouth by hand (using a handful of Photoshop cloning tricks). I would not feel comfortable using generative fill to remove those fish or draw in the nose. That to me would be using AI features too directly in an image. It’s perhaps a subtle distinction vs. removing the letters on a sign, but for me personally I’d rather draw the line early.

Also, you can always check my image credits for details. If it’s a collage with licensed photos it will say something like "Aurich Lawson | Getty Images". I don’t necessarily have to do that, but I prefer the transparency.
I just assumed one of the Artemis crew took the picture.
 
Upvote
261 (261 / 0)
I personally am somewhere between Aurich’s approach and “never”.
However much I might prefer Ars treat this generative “AI” and related technology differently, I respect the transparency and appreciate the commitment to continually earning trust.

Keep it up and you’ll keep me around. I might grumble about this or that, but call me the loyal opposition.

Cheers!
When you have the talent to do things properly and can use AI as a tool and not a crutch, I'm not really offended. Just like the "fill" tool of old: sure, you can fill by hand, but if there's a tool that doesn't essentially affect the end result, go for it.
 
Upvote
56 (57 / -1)

Aurich

Director of Many Things
41,099
Ars Staff
Thanks, you guys! I'd been waiting patiently for this (since The Article). I'm glad it's finally here.

I'd been struggling with some of the same questions, since I do amateur photography (and thus amateur post-production) as a hobby. Some of the tools are very useful (e.g., subject selection and removal, where in the past I'd have had to spend much more time cloning out that space and fixing anomalies), but they inhabit a sort of grey area; as noted, you aren't really adding to the photo.

Plus stuff like "remove this speck of dust and do an infill" has been around a long while, and existed long before the AI craze.

This looks like a great starting point for me, too.
I have a basic philosophy when it comes to AI, in terms of my personal use but also just in how I find it personally interesting:

AI should enhance human abilities, not replace them.

My go to metaphor is an exoskeleton, instead of a robot. Strapping into a suit that makes you stronger sounds pretty cool. Kicking back while a robot does your work, not so much.

So when it comes to Photoshop, I consider things like filling in some sand or grass or concrete or whatever with a generated texture that matches to be a useful enhancement. I could hand-clone that, but it would take time and effort to make it seamless, and the end result would be pretty invisible either way. It's grunt work, not creative work. Letting the computer help me lift feels okay.

Removing the fish in the shark's mouth ... I mean, it's kinda on the border. I'm not sure doing it with generative fill would be a terrible thing. But it starts to feel like a slide toward being lazy in ways I don't love. It was a specific thing, the inside of the mouth, not a stretch of sand where I'm trying to remove a shadow or some rocks. I chose to draw the line there.
 
Upvote
165 (165 / 0)
I'd also like to see the forum posting guidelines updated to make clear when and how individual forum users may use AI in their forum or front page article comments.
You want someone to... police the forum for AI?

If you want someone to go crazy, why don't you have them wait by a traffic light and see who runs a red light?
 
Upvote
35 (38 / -3)

Aurich

Director of Many Things
41,099
Ars Staff
I'd also like to see the forum posting guidelines updated to make clear when and how individual forum users may use AI in their forum or front page article comments.
I moderate people who copy and paste AI results as posts. "I asked ChatGPT and it said [big block of pasted text]"

That's about the limit though. Sometimes people flag a post for "it sounds like AI". I'm not the AI whisperer, and neither are you. We're generally not interested in trying to play that game.
 
Upvote
145 (146 / -1)