Not at all. I'm suggesting her weekly rephrasing of the NEJM case of the week could easily be AI slop. Whether it is AI slop or human slop, it doesn't belong here.
I have (thankfully?) been terminated twice under such vague and mysterious circumstances that I genuinely could not explain my termination. Once (I was working so hard on it that I was up until 1am the morning of the day I got fired) I was in the middle of writing test libraries for a new project, and another time I was told by the HR rep firing me that it had already been explained to me why I was being terminated. Asked to state what that cause was, for my own edification (and to try to wring any kind of sense from an extremely surprising termination), she merely reiterated to me (the plain lie) that I had already had it explained to me. If they had a reason, it would not have been so difficult to restate it, no?

That last sentence is the one people overlook. They get fired, then tell a future employer they have never been fired, then when a minor thing happens down the road they look at the file and say "Wait a minute--he falsified his employment application!"
Considering those fabricated quotes were already created by an AI, I don't think they need help making shit up.

I agree with your general sentiment around keeping incorrect text up, but I think it gets thorny with fabricated quotes. Future AIs will inevitably slurp that up, ignore the context that they were fabricated, and then confidently assert that they were actual quotes.
Maybe he saw this announcement:

Regarding the "Experimental Claude Code Based AI Tool" that Mr. Edwards mentioned on BlueSky: Per Claude,
"Claude Code is an agentic coding tool that reads your codebase, edits files, and runs commands. It works in your terminal, IDE, browser, and as a desktop app."
Did Mr. Edwards try coding his own program, using Claude, to pull quotes from websites? Claude Code is not designed to read text from websites, to my knowledge (but I hope someone corrects me).
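For what it's worth, reading text from websites is the easy part to bolt on: a small script can fetch the page and hand the text to whatever model you like. A rough sketch of that plumbing (purely hypothetical, assuming Python with requests and BeautifulSoup; this is not a claim about what Mr. Edwards actually ran):

# Hypothetical sketch: fetch an article's visible text so it can be
# fed to an LLM for quote extraction. Function name is illustrative.
import requests
from bs4 import BeautifulSoup

def fetch_article_text(url: str) -> str:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Strip non-prose markup so only readable text remains.
    for tag in soup(["script", "style", "nav"]):
        tag.decompose()
    return soup.get_text(separator=" ", strip=True)

None of which makes the model's output trustworthy, of course; it only gets the source text in front of the model.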
Did you read the article you linked, cause I've read the first dozen or so paragraphs and it doesn't support the point you're making. Edit: in fact much of it does exactly the opposite. Edit2: Holy shit, you couldn't have picked a better article to advocate for the exact opposite of your point. Please tell me this post was some kind of joke cause it's that fucking hilarious.

For everyone brandishing the pitchforks I suggest you read this Columbia Journalism Review.
Some journalists that are using AI:
Gina Chua
EXECUTIVE EDITOR OF SEMAFOR
Nicholas Thompson
CEO OF THE ATLANTIC
Zach Seward
EDITORIAL DIRECTOR OF AI INITIATIVES AT THE NEW YORK TIMES
Millie Tran
CHIEF DIGITAL CONTENT OFFICER AT THE COUNCIL ON FOREIGN RELATIONS
Sarah Cahlan
PULITZER PRIZE–WINNING REPORTER AND FOUNDING MEMBER OF THE VISUAL FORENSICS TEAM AT THE WASHINGTON POST
Jason Koebler
COFOUNDER OF 404 MEDIA
Khari Johnson
TECH REPORTER AT CALMATTERS AND PRACTITIONER FELLOW AT THE UNIVERSITY OF VIRGINIA’S KARSH INSTITUTE OF DEMOCRACY WHO HAS COVERED AI FOR A DECADE
Araceli Gómez-Aldana
NEWS REPORTER AND ANCHOR AT WBEZ IN CHICAGO, AND 2023 WINNER OF THE JOHN S. KNIGHT JOURNALISM FELLOWSHIP AT STANFORD
Ben Welsh
FOUNDER OF THE REUTERS NEWS APPLICATIONS DESK, WHERE HE LEADS THE DEVELOPMENT OF DASHBOARDS, DATABASES, AND OTHER AUTOMATED SYSTEMS
Susie Cagle
A WRITER AND ARTIST FOR PROPUBLICA, THE GUARDIAN, WIRED, THE NATION, AND MANY OTHERS
Ina Fried
CHIEF TECHNOLOGY CORRESPONDENT FOR AXIOS AND AUTHOR OF THE DAILY AXIOS AI+ NEWSLETTER
David Carson
A JOHN S. KNIGHT JOURNALISM FELLOW AT STANFORD UNIVERSITY, ON LEAVE FROM HIS JOB AS STAFF PHOTOJOURNALIST AT THE ST. LOUIS POST-DISPATCH
No, but he had an LLM pluck a couple of select quotes* out of it.

Did you read the article you linked, cause I've read the first dozen or so paragraphs and it doesn't support the point you're making.
Technically the AI didn't have any personal anything; the human content it scraped/stole did.

I remember reading that article and not understanding how an AI could have personal motivation to make threats to someone. Still seems weird.
Too soon

No, but he had an LLM pluck a couple of select quotes* out of it.

On Wednesday, Shambaugh published a longer account of the incident, shifting the focus from the pull request to the broader philosophical question of what it means when an AI coding agent publishes personal attacks on human coders without apparent human direction or transparency about who might have directed the actions.
“Open source maintainers function as supply chain gatekeepers for widely used software,” Shambaugh wrote. “If autonomous agents respond to routine moderation decisions with public reputational attacks, this creates a new form of pressure on volunteer maintainers.”
Shambaugh noted that the agent’s blog post had drawn on his public contributions to construct its case, characterizing his decision as exclusionary and speculating about his internal motivations. His concern was less about the effect on his public reputation than about the precedent this kind of agentic AI writing was setting. “AI agents can research individuals, generate personalized narratives, and publish them online at scale,” Shambaugh wrote. “Even if the content is inaccurate or exaggerated, it can become part of a persistent public record.”
instead of a quote. And then would we know it without meticulously tracking down the sourcing of every sentence in the article? It would probably be a lot harder for Shambaugh to point out these instances. I'm pretty sure that - at least from how they've represented things - Ars hasn't had the time to go over this article with a fine-toothed comb. I'm guessing they asked Benj and he said that those few quotes were the extent of it. But personally, I'm having a hard time believing that. Especially if Benj is going to claim that he's been working in a fever fog this whole time.

Shambaugh noted that the agent’s blog post had drawn on his public contributions to construct its case
The article was redirected to /dev/null within about two hours of publication on a Friday afternoon. We are still only about 50 hours out from that event. There have been basically zero conventional working hours since the failstorm erupted.

That's why Ars buried it as hard as they could, then when they lost containment they recreated the article (rather than un-unpublishing it), which deleted all comments on it; they don't state who did the thing or what the thing they did was, and otherwise assign no actual accountability.
...
Ars got caught aiming that firehose at their audience, lost containment of the attempt to hide it, and are still hiding what the firehose contained. This is not kudos-worthy.
… what Ars did do is immediately remove the story (which was the right thing to do).
Imagine someone did something to annoy you, and you asked Reddit what to do about it, and someone replied "you should make a blog post complaining about it". That's believable, since it's something a person might say.

I remember reading that article and not understanding how an AI could have personal motivation to make threats to someone. Still seems weird.
He did not blame his actions on COVID. He merely said it slowed his replies on social media.
It’s a bit ironic that you would skim over the facts while commenting on a matter of poor fact checking.
while working from bed with a fever and very little sleep, I unintentionally made a journalistic error
I should have taken a sick day because in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh's words
Being sick and rushing to finish
I asked my boss to pull the piece because I was too sick to finish it on Friday
You might also consider a retraction. I was really annoyed to see Jason Koebler listed only to find you'd misrepresented the substance of the page.

For everyone brandishing the pitchforks I suggest you read this Columbia Journalism Review.
Some journalists that are using AI: ...
Multiple times. Each one of those authors describes how they are using AI. Or you could read about how CJR itself uses AI.
But I understand they are not members of your echo chamber.
The right thing to do is to maintain the original content at the original URI, with a big notice of retraction included. "Let's all agree to forget this ever happened" is not a productive solution.

Removing the article was the right call.
Maybe even one near El Paso...?

Now, imagine the AI-generated code that runs a nuclear plant or a weapons guidance system having this problem.
Don't you know? The reality is much worse!

I hope Beth Mole's horrifying medical articles can still be trusted.
Jim Salter said:
I have (thankfully?) been terminated twice under such vague and mysterious circumstances that I genuinely could not explain my termination.

I can relate. I once was fired by Microsoft, but laid off by the contracting company I was working for. They knew the firing was just more stupid MSFT politics, so they made sure I was eligible for unemployment until I got the next gig. Try explaining that on a resume.
The comment currents are moving swiftly, but even as a non-journalist some of this stuff feels like 8th-grade ethics (i.e., completely obvious to anyone functioning as a near- or full adult).

Journalism ethics have been around a lot longer than Ars Technica has, and no, this absolutely has not followed "best practice." It seems to be trying to get there, and I truly hope that it eventually does, but the initial reaction--panic delete--was an enormous misstep.
https://publicationethics.org/guidance/guideline/retraction-guidelines
In order to be best practice, the original text should still be readily available, clearly marked up with what was wrong with it and corrections to it, along with an explanation of how this happened and why it shouldn't continue happening.
So far we had a panic delete (which still stands, and removed reader comments as well as the offending article), a few locked threads with almost no real information, and personal statements made elsewhere on personal social media accounts belonging to both authors.
And this comment thread, where we at least, and finally, get to talk to each other about what happened, based almost entirely on those external non-official social media posts.
Could it be worse? Obviously. Is this "best practice?" Hell no. Not yet. But it still has time to get there. And I'm still hopeful.
Which section of that article do you feel best supports your position?

Multiple times. Each one of those authors describes how they are using AI. Or you could read about how CJR itself uses AI.
But I understand they are not members of your echo chamber.
"Using AI" and "not verifying the accuracy of the output of the AI that is appearing under your byline" are not equivalent. Personally I've chosen not to engage in the former because, as someone else said above, the first hit is free. Once you've crossed that line keeping your footing solid while standing on the slope on the other side takes a lot more discipline than most humans have.For everyone brandishing the pitchforks I suggest you read this Columbia Journalism Review.
Some journalists that are using AI:
(snip)
Like the dentist said about the picture of a horrifying abscess: You can't handle the tooth!

Don't you know? The reality is much worse!
Having gotten all of my outrage out, and stayed up late to do it (Ars is important to me), I appreciate your level-headed comment and am going to try to get my own head right and get some sleep before I go on any more tears or screeds.

The article was redirected to /dev/null within about two hours of publication on a Friday afternoon. We are still only about 50 hours out from that event. There have been basically zero conventional working hours since the failstorm erupted.
I am not going to say that the Ars editorial staff has necessarily covered itself in glory here--you could make an argument that this should have been an "all hands on deck, 6a-6p work, Christmas is cancelled" event--but to me it does not currently seem to be dripping with concentrated asscoverium.
Maybe he saw this announcement:
https://www.reddit.com/r/ClaudeAI/comments/1qqtmct/academic_quote_extractor_cli_tool_for_pulling/
There are lots of promises of only verbatim text and no hallucination, and it does run on Claude Code. And it's very new, so it's perfect for an AI journalist to want to test.
But of course, it didn't work, and then Mr. Edwards turned to ChatGPT...
Maybe you can rationalize it as getting confused about which code output which quotes and what guarantees there were supposed to be.
It's definitely FUBAR; but to me, there's plenty of reason to believe it was not intentionally malicious.
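Which is the maddening part: "only verbatim text" is the one promise that needs no AI at all to enforce. Any such tool could check its own output with a plain substring test before a human ever sees it. A minimal sketch of that guardrail (hypothetical names, assuming Python; presumably whatever actually ran here had nothing like it, or it was ignored):

# Hypothetical guardrail: flag any LLM-"extracted" quote that is not
# a verbatim substring of the source text. Names are illustrative.
def normalize(s: str) -> str:
    # Collapse whitespace and straighten curly quotes so harmless
    # formatting differences don't trigger false alarms.
    cleaned = (s.replace("\u201c", '"').replace("\u201d", '"')
                .replace("\u2019", "'"))
    return " ".join(cleaned.split())

def find_fabrications(source_text: str, quotes: list[str]) -> list[str]:
    # Anything returned is a paraphrase or an invention and must
    # never be published inside quotation marks.
    haystack = normalize(source_text)
    return [q for q in quotes if normalize(q) not in haystack]

source = ("Open source maintainers function as supply chain "
          "gatekeepers for widely used software.")
print(find_fabrications(source, [
    "supply chain gatekeepers",        # verbatim: passes
    "maintainers act as gatekeepers",  # paraphrase: flagged
]))

A fuzzy-match variant could flag near-misses too, but even this crude version catches the exact failure at issue here.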
Giles Corey, screaming in his death throes: MORE NINES

Leaving aside whether it’s doable at all, getting 99.9% reliable LLM output is the holy grail of the field and anyone doing it would be screaming it from the rooftops.