AI is reshaping how information is produced, and our readers deserve to know where we stand. This is our policy on the use of generative AI in Ars Technica's editorial work, and it applies to everything produced by our writers, editors, and contributors.
The short version: Ars Technica is written by humans. AI doesn’t write our stories, generate our images, or put words in anyone’s mouth. Where we do use AI tools in our workflow, we use them as we do any other tool: with standards, under supervision, and with humans making every editorial decision.
If there are any changes to our policy, they will be reflected here.
Our journalism is human-authored
Ars Technica’s editorial text is written by humans. We do not use AI to generate our reporting, analysis, or commentary.
When AI output is itself the subject of reporting (for example, examining what a model produces or analyzing a system's behavior), we may reproduce that output for demonstration or analysis. In those cases, the AI-generated material is presented as an exemplar, set apart visually, and disclosed as close to the material as possible.
AI-powered tools may be used to assist with editing and workflow in ways that don’t displace human authorship, including grammar checks, style suggestions, and structural feedback. These tools can recommend changes; only humans can make them.
Research and source material
Reporters may use AI tools vetted and approved for our workflow to assist with research, including navigating large volumes of material, summarizing background documents, and searching datasets. Even then, AI output is never treated as an authoritative source. Everything must be verified.