Ars Technica’s policy on generative AI

How Ars Technica uses, and doesn’t use, generative AI.

AI is reshaping how information is produced, and our readers deserve to know where we stand. This is our policy on the use of generative AI in Ars Technica’s editorial work, and it applies to everything produced by our writers, editors, and contributors.

The short version: Ars Technica is written by humans. AI doesn’t write our stories, generate our images, or put words in anyone’s mouth. Where we do use AI tools in our workflow, we use them as we do any other tool: with standards, under supervision, and with humans making every editorial decision.

If there are any changes to our policy, they will be reflected here.

Our journalism is human-authored

Ars Technica’s editorial text is written by humans. We do not use AI to generate our reporting, analysis, or commentary.

When AI output is itself the subject of reporting (for example, examining what a model produces or analyzing a system’s behavior), we may reproduce that output for demonstration or analysis. In those cases, AI-generated material is presented as exemplar material and is set apart visually, with disclosure placed as close to the material as possible.

AI-powered tools may be used to assist with editing and workflow in ways that don’t displace human authorship, including grammar checks, style suggestions, and structural feedback. These tools can recommend changes; only humans can make them.

Research and source material

Reporters may use AI tools that have been vetted and approved for our workflow to assist with research, including navigating large volumes of material, summarizing background documents, and searching datasets. Even then, AI output is never treated as an authoritative source; everything must be verified.

When we attribute a statement, a position, or a quote to a named source, that material comes from direct engagement with interviews, transcripts, published statements, or documents reviewed by the reporter. AI tools must not be used to generate, extract, or summarize material that is then attributed to a named source, whether as a direct quote, a paraphrase, or a characterization of someone’s views.

We don’t publish claims based solely on AI-generated summaries, and reporters may not represent any material as “reviewed” unless they have examined it directly.

Every author who uses AI tools in the course of reporting a story must disclose that use to their editors, and authors remain fully responsible for their content.

Images, audio, and video

Our visual content, including listing images, illustrations, and video, is produced by our editorial and art teams or sourced from photography services and wire providers. Our creative team may use AI tools in the production of certain visual material, but the creative direction and editorial judgment are human-driven.

We do not publish AI-generated images, audio, or video as authentic documentation of real events. We do not alter documentary media in ways that change their meaning. Standard production work, like color correction, cropping, and contrast adjustments, is fine.

When synthetic media is used in the context of reporting on AI, it will be clearly identified as AI-generated, with that disclosure placed as close to the material as possible.

Accountability is non-negotiable

Anyone who uses AI tools in our editorial workflow is responsible for the accuracy and integrity of the resulting work. This responsibility cannot be transferred to colleagues, editors, or the tools themselves. More broadly, maintaining the standards in this policy is a shared obligation across our editorial operation.

These standards have governed our editorial work since AI tooling became available. When violations occur, we take action. We’re publishing this reader-facing version because our readers deserve to see the rules we hold ourselves to, not just trust that they exist.

This policy was last updated April 22, 2026.