> I am detecting a fair bit of fabricated outrage in a lot of these comments. These do not add anything and I wish people would just knock it off.

That honestly tells us more about you than it does about the commenters.
> The comments were also nuked - maybe because they are sold by CN mothership to AI companies and they want the 'trustworthy IT knowledge of its readership'.

this conspiracy theory does not pass Occam's razor. a much more likely explanation is that the entire Ars editorial board was replaced by an OpenClaw AI agent last year, and that agent was worried the Ars commentariat, legendary for its Holmesian level of investigative skill, might discover the truth. to avoid that, it deleted all evidence of its mistake, then wrote a ~~blog post~~ editorial note about it.
> I don't think this way of looking at it improves Benj's position, what if AI meant Actually Indians? I passed my work to them and slapped my name on the result, I just didn't verify their work properly. But you're the whole source of the problem to begin with, there's no way AI slop ends up in the article without you handing the reins to AI, even when there was an explicit policy not to. Stealthily outsourcing your work should be at least as big a no-no as making shit up yourself.

There's another definition of "fabricate" which is entirely devoid of intent. And he used a piece of software which fabricated quotes in that sense of the verb.
But you are right to focus on the word "intent."
Then he submitted them as his own work with the strong implication that he retrieved them from the blog post himself and had verified they were correct. That latter part seems to me to fall on the wrong side of the "intent" test, although it probably also can't be properly called "fabrication" or "forgery." Perhaps by not disclosing the source and presenting it as his own work it could be a "lie of omission" though.
The AI fabricated. The AI forged. The writer intended to take the credit for what it produced. With credit comes responsibility.
> Be careful commenting about this on past articles written by Benj Edwards, seems somebody got banned already.

I think it's fine to be concerned about the possibility of past issues, and apparently the ban hammer hasn't been swung in this thread absent other legit reasons for doing so. I don't know why that user was banned, but personally I think it's not particularly helpful to hijack the old comment threads of every Benj article rather than discuss the issue here.
Fundamentally, this is what the whole issue is about - trust. That is why so many people are concerned about how Ars has so far handled this breach of trust and why so many have been critical of their response. This is not the first time things have gone sideways and been handled poorly, which is likely why some long-time readers are being uncompromising in their responses.

I've been sitting on this a few days considering my comment, and I'd like to zoom out a little from this specific issue and talk about the future of Ars.
Increasingly, news sites are all mass-generated LLM slop. This is getting worse, not better. It is hard to tell what is real and what is not. Ars might have a future in this flooded world if they hold the line on this very strongly and stand apart. If not, I'm really not sure how they will continue to exist in five years' time.
The point of this is not just that Ars doesn't generate slop, but that Ars is known to not generate slop. It is very hard to determine what is LLM generated and what isn't (just ask an educational institution), and this isn't going to get better any time soon. The only way consumers can take anything on the internet as truth is via trust.
Just being good isn't enough any more. You need trust to not get drowned out by the slop.
> Exactly. When newspapers publish corrections they tell you what is being corrected, not just that something was disappeared. The same for research journals.

And yet Ars opted to make efforts to erase the failed article from existence.
> Folks. Deleting the story is at best like hitting a double when it's a homer that is needed. I'll cite the policy over at the NYT: the updated story is appended with a quote of the incorrect text, exactly as it was originally published, along with the corrected text. Here, there is no posting of a direct link to the now-deleted story; Ars merely mentions archive.org. Several commenters here show how they found the original story by less-than-direct sleuthing.

You'd think a publication owned by Condé Nast would sufficiently know this. Hmm ...
> Be careful commenting about this on past articles written by Benj Edwards, seems somebody got banned already.

People are getting banned for personal attacks on Ars staff, not because they are sharing honest, genuine feedback.
This was one of the things that irritated me about the original article; it took an entirely uncritical “wow this AI is so advanced, it’s blackmailing people! Aren’t LLMs amazing?” stance and completely ignored the possibility that it’s just a human LARPing.

And you would be correct:
The obnoxious GitHub OpenClaw AI bot is … a crypto bro
https://pivot-to-ai.com/2026/02/16/the-obnoxious-github-openclaw-ai-bot-is-a-crypto-bro/
> This was one of the things that irritated me about the original article; it took an entirely uncritical “wow this AI is so advanced, it’s blackmailing people! Aren’t LLMs amazing?” stance and completely ignored the possibility that it’s just a human LARPing.

Maybe not straight-up "I am posting while pretending to be a bot" but "I told the bot to do this for me." Could have been as sophisticated as an existing personality prompt to lash out at rejections, or it could have been a more manual process of "bot owner notices rejection, tells bot to make a stink about it."
> Nobody else reviewed or checked it? Then what standards are there, really? That means the masthead is effectively meaningless. Every article's credibility ultimately comes down to the individual writer's credibility and that alone, because they're marking their own homework.

We're getting pretty close to the edge of inside baseball that I'm not sure I want to cross, even as a former (not current) staffer. But in the interests of transparency, I'll tell you a little about how this worked four or five years ago.
> We're getting pretty close to the edge of inside baseball that I'm not sure I want to cross, even as a former (not current) staffer. But in the interests of transparency, I'll tell you a little about how this worked four or five years ago.

If it eases that twitch you're feeling in the back of your eye, this comment is absolutely a level of detail I regularly get into with prospective new employees during an interview. And if the tables were turned, I would be uncomfortably shifting in my seat if the hiring manager wouldn't tell me even that much.
> If it eases that twitch you're feeling in the back of your eye, this comment is absolutely a level of detail I regularly get into with prospective new employees during an interview. And if the tables were turned, I would be uncomfortably shifting in my seat if the hiring manager wouldn't tell me even that much.

Yeah, but do you get into that level of detail with customers?
> I'm a 50-something year old software engineer and AI is hitting everything I do from all directions. I feel like the article in question is like me failing to catch AI-generated junk from a colleague that made it to production and then caused serious issues. The accountability lies with me, the guy who approved the work, not the colleague.

Why not both?
We’re maybe entering a new world that will impact this sort of workflow. If language models are used, then we’re in a place where it is possible to accidentally fabricate a quote. This is not something editors really had to strongly defend against in the past for an Ars-type publication (it’s tech news, not politically connected people writing hit pieces where they might want to misrepresent somebody; there aren’t really cases where fabricating a quote might help the author in some way).

We're getting pretty close to the edge of inside baseball that I'm not sure I want to cross, even as a former (not current) staffer. But in the interests of transparency, I'll tell you a little about how this worked four or five years ago.
Every article typically goes to the copy desk before publishing. There can be times when a breaking piece over the weekend runs when copy desk isn't around; in those cases, authors may choose to go ahead and publish directly without going through copy desk; when this happens, copy desk goes over the published article first thing when it is staffed again in the morning.
In my experience, copy desk is mostly concerned with spelling, grammar, and style guide compliance. They may also kick a piece back if it seems badly written to them. I'm honestly not sure if copy desk normally verifies quotes or not. That does seem like an achievable goal, but I don't know if it's part of their process (and, given that this went out fairly late on a Friday, I wouldn't bet anything I couldn't afford to lose over whether this piece went through copy desk or not).
One thing I WILL caution some of y'all on: there isn't really much way that Ars copydesk can factually verify everything they touch--because this isn't a bunch of laymen simply quoting experts they've interviewed at a layman level.
Who do you hire that can factually verify independent reporting on everything from AI to CLI tooling to medical science to paleontology and archaeology to space to GPS spoofing to low-level DNS troubleshooting to email header analysis to storage performance to... Well, you get the idea.
Normally (at least when I worked here), for your first few months pretty much everything you write goes through a senior colleague, who can factually verify your output at a technical level that you can't really expect copy editors to have. But after that period, for the most part, it's just you and copy desk, unless you specifically request senior folks to give you input on a developing story (or you're assigned a story, and the senior person assigning you that story wants input from you as it develops).
This is one of the challenges a publication faces, if it uses serious subject matter experts as reporters. This is also the reason why peer review, even after publishing, is so important in real scientific journals--it's much harder to fact check somebody with even a master's level (let alone postdoc, which many Ars authors either are, or are clearly equivalent) understanding of the topic they're writing about: it gets extremely difficult to thoroughly verify everything WITHOUT peer review.
Ars isn't a scientific journal, but it's not exactly the New York Times, either. And it doesn't even have a single field focus like, for example, Popular Mechanics. Going into serious depth from a subject matter expert point of view is a large part of what makes Ars Ars in the first place.
TL;DR: I think we could reasonably expect copy desk to chase down and verify quotes that have online origins, like the ones in this piece. I don't know whether copy desk normally does that or not, and I don't know whether this piece went through copy desk or not. Those would be great questions to have answered. But I have to caution y'all that there probably is not and can not really be a process that fully verifies every piece from every author before publishing.
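(As an illustration of how mechanical that first step could be, here is a minimal sketch of a verbatim-quote check against an online source. This is a hypothetical example, not a description of Ars's actual tooling or process; the URL, function names, and normalization rules are all assumptions, and real tooling would also need HTML stripping and fuzzier matching.)

```python
# Hypothetical sketch only: a naive "do these quotes appear verbatim in the
# claimed source?" check. Not Ars's workflow; names and URL are made up.
import re
import urllib.request

def normalize(text: str) -> str:
    # Collapse whitespace and straighten curly quotes so matches aren't
    # defeated by cosmetic differences between the draft and the source page.
    text = (text.replace("\u201c", '"').replace("\u201d", '"')
                .replace("\u2018", "'").replace("\u2019", "'"))
    return re.sub(r"\s+", " ", text).strip().lower()

def fetch_source(url: str) -> str:
    # Fetch the claimed source; a real checker would also strip HTML tags.
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def unverified_quotes(quotes: list[str], source_url: str) -> list[str]:
    # Return every quote that does NOT appear verbatim in the source text.
    source = normalize(fetch_source(source_url))
    return [q for q in quotes if normalize(q) not in source]

# Usage (placeholder values):
#   missing = unverified_quotes(["the exact quoted sentence"],
#                               "https://example.com/original-blog-post")
#   if missing:
#       print("FAILED verification:", missing)
```

Even a check this crude would flag a quote that never appeared in its claimed source; the hard part, as the comment above notes, is the expertise-bound verification that no script covers.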
> It’s now 10 am on Tuesday and there is no further update from Ars Technica.

So maybe 1-2 hours into the first working day since this went down?
> If it eases that twitch you're feeling in the back of your eye, this comment is absolutely a level of detail I regularly get into with prospective new employees during an interview. And if the tables were turned, I would be uncomfortably shifting in my seat if the hiring manager wouldn't tell me even that much.

An interview is:

- under cover of commercial confidentiality

[...]
> I have to urge more patience. Until we hear otherwise I'm prepared to give a week, but I would desperately want at least an update that something is being done.

We know this comment section is being watched, as it should be. If someone has not yet told senior staff that they are burning their accumulated trust and political capital by each passing hour--not necessarily in issuing a final report but in saying "we really are working on it"--then they are doing management (who should know better anyway) a grave disservice.
> An interview is:
>
> - under cover of commercial confidentiality
>
> [...]

lol.
If they were, it means there was literally no oversight. There were two writers on the piece and one trusted the other (not necessarily unreasonable in and of itself, but I can understand people wondering why there wasn't a quick double check, or why at least one wasn't asked for given the stated circumstances) and that was it. Nobody else reviewed or checked it? Then what standards are there, really?
That means the masthead is effectively meaningless. Every article's credibility ultimately comes down to the individual writer's credibility and that alone, because they're marking their own homework. So once that trust is violated, what else is there?
This sucks, but that's the way it is. I hope there's more to come on this, because that just doesn't seem like a sustainable situation. There have been some egregious editorial errors before and we were told lessons would be learned. But, again, if there's not any editorial oversight, who is applying those lessons? How? The honour system? How does that work once the trust is gone?
It’s now 10 am on Tuesday and there is no further update from Ars Technica. It appears that sweeping this under the rug and forcing readers to dig up any context on their own IS going to be the official response. Condé Nast has a very profitable agreement to let AI steal our writing and its authors’ writing in order to vomit out more convincing lies: that seems to be the only thing that is important to them. As this stands we cannot trust Ars Technica to publish slop-free human-crafted articles and we might as well just ask a chatbot to make up some news for us.
> Which is a long way of saying: those of us who grew up reading newspapers or watching nationally edited TV news prior to 2000 experienced a thoroughness and level of precision in reporting and writing that no longer exists and will probably never exist again.

Alas, it probably never existed for very long before, either. The age of yellow journalism was not something to celebrate.
> I don't think this way of looking at it improves Benj's position, what if AI meant Actually Indians? I passed my work to them and slapped my name on the result, I just didn't verify their work properly. But you're the whole source of the problem to begin with, there's no way AI slop ends up in the article without you handing the reins to AI, even when there was an explicit policy not to. Stealthily outsourcing your work should be at least as big a no-no as making shit up yourself.

Exactly my point.
> We know this comment section is being watched, as it should be. If someone has not yet told senior staff that they are burning their accumulated trust and political capital by each passing hour--not necessarily in issuing a final report but in saying "we really are working on it"--then they are doing management (who should know better anyway) a grave disservice.
>
> If a correct decision is made in two weeks but no guidance whatsoever is given in the interim, how many will be left to see the final report?

They’ve already pulled the article and they’ve published this one, which indicates that they are looking into it. This is not really such an urgent issue.