What happened is clear. One of the journalists used AI to extract info from a (VERY SHORT) blog post that he could not bother to read (why?), and the AI hallucinated.

I've been a Pro subscriber for quite a while now, and I want to preface what I say below by stating I'm not threatening my subscription over this. Journalism is extremely undervalued and it would be reactionary to dismiss the whole outfit over one or two writers.
I also don't expect Ars to fire writers on such short notice. I'm mad, but it'd be irresponsible to fire staff without at least investigating what happened. I don't want a head on a pike, I want to know how this happened and what Ars will do going forward to prevent it happening again.
That being said, I expect better from Ars' writers. If I found out one of Beth Mole's medical nightmare stories didn't actually happen, or some component of it was fabricated, I don't think I could ever enjoy their pieces again.
Unfortunately, until that happens I cannot trust these two journalists. I'm not interested in reading potential misinformation on hot topics. If I wanted that I'd still be using twitter. So, for their sake, please publish a follow-up to this.
I'm sorry to disappoint you, but...now that we're on page... 16!
It doesn't have to be a binary position because Ars is far bigger than any one of its staff. Someone can step back and evaluate their sense of value considering what does and doesn't seem worth their money. They may decide that there are enough good contributions that Ars isn't worth dropping completely, but also not feel comfortable supporting directly.

I had respect for your position right until you said you were going to read the articles you no longer trusted anyway.
Or they are "out for blood" because they want Ars to be trustworthy and reliable.

Too many people are clearly out for blood because of their personal feelings about AI
This is one of my biggest gripes. I've seen it happen on several occasions, some in real time and others I missed (what was the context? It only leaves me with questions), and what bothers me is that the process hasn't improved over time. Instead, it always seems to be the same: pull the article, lock the comments, and give a context-free or doubled-down mea culpa. They really need to do better.

The editorial & review process is still my biggest curiosity about Ars in general.
I wouldn't say it's "frequently," but a couple times a year we get those it's-Friday-and-a-new-writer-puts-out-something-clearly-not-up-to-Ars-expectations articles. Those articles get immediately roasted in the comments (rightfully so), then eventually an editor comes in to apologize and do damage control.
What exactly is the review process here? If there is oversight, why can't they predict these articles which crumble after 2 minutes of analysis by the readership? Who has the authority to push the button on posting a new story? How are accuracy and facts being confirmed?
Being sick sucks; being sick in the US sucks even more.

What I neither understand nor accept is why Benj was working even though he himself understood that he was in no state of mind to do his job properly. Why did Ars let someone work while sick?
No kidding. Their lack of oversight in the first place and then the subsequent damage control measures masquerading as an apology (after deleting the evidence and locking the comments) is pretty bad.

This is one of my biggest gripes. I've seen it happen on several occasions, some in real time and others I missed (what was the context? It only leaves me with questions), and what bothers me is that the process hasn't improved over time. Instead, it always seems to be the same: pull the article, lock the comments, and give a context-free or doubled-down mea culpa. They really need to do better.
Plus it was their AI topic reporter - someone who should be trustworthy on that very subject. I will avoid his reporting from now on, and/or wait for Ars to clarify this further.

I'll leave it to everyone's own judgement what to make of the fact that reporting on AI used AI to generate said reporting, and left it to an AI to fabricate quotes without the author so much as fact-checking them...
Newspapers don't delete articles; they just stick an editor's note up instead. Ars has gained decades of trust as THE IT/tech information source.

Two comments:
First, dropping your subscription because a single author violated Ars policy and wasn’t caught before publication seems excessive, particularly considering they acted aggressively over the weekend and even admitted precisely what was wrong.
Second, it gets tricky when possible employee discipline is involved. I think that has to be handled first, before additional public postmortem.
I don’t like it, but I’m also not aware of any publication acting more aggressively and publicly than Ars has (so far) in a similar case.
Can this be pinned as top comment!

I am gravely concerned that this happened not on a syndicated article, not from some random freelancer, but on an article by two of the most recognizable authors on the site. This has deeply shaken my trust in Ars.
I'm willing to allow some time, but I do expect a full postmortem. I want to know:

- how this happened
- what is being done to ensure that it never happens again
- whether or not disciplinary action is being taken, and all details that are safe to share about it

I can get AI slop anywhere. Ars is supposed to be better than that.
So the Ars staff writer who covers AI topics has been caught using AI tools to write text (quotes) for him? Not a great look, honestly, but I'm waiting for a follow-up that will explain what happened - and, if he is to blame, to what extent: did he fabricate stories, or mess up a tool that he, as an AI pro, should not have messed up?
The original article is here (again Ars should NOT delete content, just attach at top the editor's note - just like a newspaper would do - don't delete it):
https://web.archive.org/web/2026021...ent-published-a-hit-piece-on-someone-by-name/
“agents can research individuals, generate personalized narratives, and publish them online at scale,” Shambaugh wrote. “Even if the content is inaccurate or exaggerated, it can become part of a persistent public record.”
A "professional" writer (not a student using said tools to assist on an essay) who covers AI topics for Ars (THE IT/tech site of trust for decades) gets caught using AI that fabricated quotes - in an article about AI making shit up - that's unreal.

Here's my suggestion: Don't use AI tools to write, don't use them to "assist," don't even use them to summarize. A complete moratorium on AI writing or inquiries. Yes, of course I'd say this... but that this happened using AI was, frankly, inevitable. It's the nature of the tool, and it WILL happen again, even if its writing is "proofread." The work it takes to verify each claim AI makes is better spent just doing the initial research and writing it as a human... by a human. You could, I suppose, hire a whole other person if you wish; the HR department may object.
@Ken Fisher, ignore that noise above.

@Ken Fisher, I know that employee discipline processes take time, so I personally will be waiting two weeks to see the consequences of at least one of your writers lying to you, deceiving readers, and permanently damaging Ars' credibility. If everything is business as usual, with no public consequences, I will be cancelling my subscription on March 1st. I'm not going to pay for fabrications and lies.
Cory Doctorow has been promoting what I think is a nice illustration of this.

I work for an employer which is really pushing AI-based solutions to both its employees and its users.
At one of the information sessions, they (rightly) emphasized that anything the AI spat out had to be double-checked, and that the end product and responsibility belonged to the human being using the AI.
Someone at that session pointed out the many examples of professionals (lawyers, journalists, etc.) failing at that responsibility, and asked what guardrails were in place to protect the employer from the results of employees not verifying what the AI spat out. The response was to double down on it being the user's responsibility to double-check everything, and to otherwise avoid the question.
I guess what I'm trying to say is, if a tech journalist whose beat is Artificial Intelligence can't internalize that message, and AI keeps getting pushed by everyone as the solution to everything, then we are utterly screwed as a society.
You know, you take five minutes and then suddenly....

I'm sorry to disappoint you, but...
Benj Edwards said:
The difficulties I have with your criticism is that I use AI chat bots every day now to help me brainstorm and stuff, you know? It doesn't write my articles or anything 'cause it's... That's not what we do, it's not our policy, and it's not allowed, and it would suck, 'cause it's not a good writer, but... So I find use like AI models as sort of knowledge translators and framework translators, like, to... And, like, a sort of a memory augmentation that... Ever since I had... I had COVID, like, so many times, and I've had some brain fog issues. ChatGPT is great for, like... If I can't put my finger on what this thing is called and I can't remember it, I ask, like... You can describe it in a, like, a roundabout fuzzy way and get an answer pretty quickly, and then, you can verify it, but, you know, you would never search... [no audio] For... Or if you didn't... Could agree with you that AI... These AI models are not what they're billed to be, you know? They are not people, they are not replacements for labor, they are... Like, potentially at best, yeah, some kind of augmentation tool
The author’s excuses on his Bluesky account make this entire event even more pathetic.
It's not like Benj is shy of his AI usage as a tool to write better articles.
View: https://youtu.be/1nEph7-Viyc?t=255
In his interview with Ed Zitron, he is very candid, and he admits that GenAI is a good tool to help him fight against COVID Brain Fog. This is not a new excuse from him, the interview is from 4 months ago.
This is how everything gets enshittified.

I don't usually post, but I'm posting now to express appreciation for both authors' work and to observe that I can think of lots of scenarios consistent with the posted statements that would not make this close to a firing offense.
I know that there is a real sense of betrayal, given that (for me at least) Ars is generally a bastion of standards and sanity, but if it is verified as an isolated occurrence about which everyone is honest within a short, but non-zero, amount of time, I can't see it as a nefarious plot, and I'm a bit surprised by the instantaneous vehemence here.
You are not gay or bisexual then?

You can be the greatest guy in the world, but if you suck one c*ck you’ll always be known as a c*ck sucker.
In one article, ars destroyed itself.
One of the problems is: if it remains up, how do you prevent world+dog from continuing to reference it, and AIs from reading it forever - conveniently skipping the "Redacted" note at the top?

On the one hand this is obviously completely unacceptable.
On the other hand, I don't see how a journalist can report on AI without using AI; essentially, they have to dogfood it, and the problem here was that, due to a slip twixt cup and lip, the dogfood ended up in our collective bowl as readers.
If this was a writer with a different beat I would be in full 'burn the witch' mode, because they shouldn't be using these tools at all, but this situation is much more messy.
On the other hand the response from Ars here is ugly.
I had a similar problem with how Ars handled the tale of the Egyptian physicist, where they removed the erroneous information entirely without preserving the record so to speak.
- Memory holing the article entirely is messed up, they should preserve it even as they retract it.
- There should have been an indication that a full explanation of the hows and whys would be forthcoming.
- It should have identified the writer in question, even if that is harsh, even if he is sick - because otherwise we, the readers, are left having to do our own research. It isn't like it could have been kept secret, and what is not secret should be openly acknowledged.
...the US is wild
PTO and being sick have nothing to do with each other in any civilized country.
It does seem there is a contingent here expressing disappointment the initial announcement did not include labeling him a domestic terrorist. That people want heads on spikes has long been a thing; it's also long been known that type of accountability isn't as effective as the pitchfork wavers think it to be.
I will have to disagree with that. Even competent people make rare mistakes. It is human to err, and I want more humans at Ars, not fewer.

That's good to hear. But frankly, this is still the kind of "isolated incident" that should be considered an immediate firing offense. This was not a peccadillo; this was an utter abnegation of journalistic work, let alone standards and integrity.
If posting slop to the front page isn't a firing offense, I have to start questioning what the job is in the first place.