He was referring to Berger.

Namingway, you've fabricated credentials. Neither of these two is a space editor.
Do you honestly think that's quality commenting?
Eh, no. I and many others read the article and know it was Benji Edwards and Kyle Orland. Both are equally responsible for the content.

I hope to see a fuller response from Ars as to what happened and what the full outcome is. But PLEASE, as readers, let’s not speculate on specific individuals and what their role might have been. It’s not fair to anyone to prematurely impugn individual reputations and doesn’t advance the situation in any way. There were two authors and we currently have no way of knowing what their relative responsibilities were.
I'm not sure reviewer #2 being replaced by an AI agent has risen to the level of science yet.

If anything, it should mean you insist on high standards to regain your trust. If you take science seriously, you should expect Ars's science writers to do the same.
So my question is: did the authors themselves source these fake quotes directly from a gen AI tool? Or did they source them from a third-party article which itself got them from a gen AI tool? I feel like this retraction statement could be taken either way, but it's a pretty significant difference in what the authors actually did. I agree with others that there should be a full moratorium on using any gen AI when writing articles here, period. But is that what they did, or did they just not adequately verify the source of the quotes they used?

On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them.
My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead, and no fact check was performed.
It's damning that he has to speculate at all. The authors should have replied to him quickly and let him know how things went wrong on their end. This is a story about him, for fuck's sake. Was no attempt made to reach him for comment while the story was being written?

From Shambaugh's blog about the retracted Ars article:
That's not a good look...
Agreed. Though, the blog could have been ingested by an LLM and it would still spit out fabricated quotes. It's not like LLMs are 100% accurate even when the source material is put directly into their context.

From Shambaugh's blog about the retracted Ars article:
That's not a good look...
Except root beer and the Federation are both net goods... unlike LLM AI. I'll also add that I've not touched the stuff. Well, to be more accurate, I've seen it pop up unbidden and found ways to block it in every case I could, and I even toyed around with it in a few chats to see just what I could expect, but came away unimpressed. I've never used it to write anything for me, never asked it any questions I wanted answered, never even used it for proofreading. I just prefer to do my own thinking. I know this comes across as a bit insulting to those who have found themselves relying on it, but I'm not really sure how else to phrase it. If nothing else, when I do "unload" my thinking, I'd prefer it to be to an entity that actually is thinking and not just doing statistical analysis.

That this has happened here shows how insidious the temptation to use AI as a shortcut is, like root beer, and the Federation.
There are a lot of people here who have no idea what article they are referring to because they pulled it. Instead, they should have left it up with a statement at the top that the content is under review and why. Locking comments seems appropriate under those circumstances.

I appreciate Ars being transparent and open about this process, and for quickly owning up to the issue. But let's be clear: this sort of thing cannot be allowed to recur. AI slop is everywhere, and free. There is no goddamn reason why I should be exposed to any of it from a service that I'm paying for. Content generated by AI shouldn't just be excluded from the final product of an article; AI should be excluded from the entire process.
I'd say the truth is... sigh... somewhere in between in this case. There ARE a large number of people in the U.S. who refuse to ever admit to a mistake, but I don't think it's the population at large, just a significant number.

Let me tweak this for you please... from "the population believes" to "U.S. federal and state government officials believe".
From what I can tell here, it seems Ars was using AI deliberately, either to write the entire article wholesale or at least to find and add quotes to it. That's basically straight-up fraud, and it demands more than a brief retraction and memory-holing of the article.

Thank you for upholding your journalistic standards.
And a note to our current administration in DC - this is what transparency looks like.
My initial reaction was also that Namingway was referring to Kyle Orland as "Senior Space Editor," which confused the heck out of me.

He was referring to Berger.
The "One (or two)" part that you quoted makes it highly relevant "how" it happened. Were both authors involved in the article and jointly wrote/edited the whole thing? Did one start and hand it off to the other to finish? Who introduced the hallucinated quotes? Also, was any other party involved in editing the article, and did they have a responsibility to check the quotes or sources? I don't know Ars' editing process.I feel like I'm taking crazy pills.
One (or two) of Ars's writers apparently fabricated material released as a story.
That is not "oopsies, there was a policy violation." The precise "how" of how the fabrication happened doesn't matter. It doesn't matter if the writer got the quotes from AI, from reading tea leaves, or from a floating, glowing octopus. A fabricated story is just about the worst thing that can happen to a media outlet.
Occam's razor says that this is probably the case. But given that this whole topic is a miasma of AI-generated bullshit, it's not beyond the realm of possibility that, when writing this article, the authors found an article with fake AI-generated quotes, used them, and said article has since been deleted.

The quotes only existed on Ars when the article was released. I searched for them elsewhere. Every indication is that they were fabricated by the author or authors, apparently using AI tools.
I wonder myself if the current owners of Ars have been putting pressure on the place to "integrate AI" into the workflow, like all too many companies have been.

My own curiosity wants to know why the authors felt they needed to use AI anyway. It's not like this was some breaking story that had to be published on Friday. Couldn't it have waited until Tuesday if it meant they didn't use AI? Or did they just rush to publish it so they could take off for the long holiday weekend?
A plausible, if dumb, explanation is that, since the article was a collaboration between two authors, both of them thought the section in question came from the other author, not recognizing that it had been generated, or that the other had not checked it.

From Shambaugh's blog about the retracted Ars article:
That's not a good look...

Funny thing is that the article is two authors' work and neither of them had bothered to check the site.
I don't work in journalism, so take my comment with however much salt you feel is appropriate. That said, I would argue that posting the retraction at the start of the article, and repeating it (or expanding on it) at the end, is the right thing to do. You don't want people reading the article, taking it in, and only right at the end finding out that it's been retracted. The retraction needs to be front and centre, not a side note (publishing an article detailing the errors with a direct link to the original) or an afterthought (posting the retraction at the end of the article).

Yeah, long-standing, ethical journalistic practice is to leave the article up and publish the retraction at the end of the article, or publish an article detailing the errors with a direct link to the article.
This remains nonsensical and is not how AIs work. It's not "deciding" anything.

It is the capability for AI to act in a self-deciding, malignant fashion. This is the first true, impactful (not very, but still meaningful) demonstration of an AI causing autonomous harm.
No, fabricating quotes is literally what got Jayson Blair turbofired from the NYT, in addition to the plagiarism.

This error is not a "for cause" event. It's a "trip to the principal's office" kind of level. Not even close to subscription cancellation.
There's a reason that Berger's SpaceX posts regularly generate 500+ comments.

he had decided he knew who was guilty already
What's being done to/about the authors?
From what I can see, they're both still Ars-affiliated:
https://meincmagazine.com/author/benjedwards/
https://meincmagazine.com/author/kyle-orland/
While both authors and the editor who reviewed and approved it for publication have responsibility, that doesn't mean that both authors contributed equally to how things went off the rails.

Eh, no. I and many others read the article and know it was Benji Edwards and Kyle Orland. Both are equally responsible for the content.
This seems to me to be an even and fair take on the issue. I've already submitted my suggestion for how to prevent this (ban the use of AI in researching, writing, and proofreading articles, except insofar as it must be done to research content made by AI itself).

I've been a Pro subscriber for quite a while now, and I want to preface what I say below by stating I'm not threatening my subscription over this. Journalism is extremely undervalued and it would be reactionary to dismiss the whole outfit over one or two writers.
I also don't expect Ars to fire writers on such short notice. I'm mad, but it'd be irresponsible to fire staff without at least investigating what happened. I don't want a head on a pike, I want to know how this happened and what Ars will do going forward to prevent it happening again.
That being said, I expect better from Ars' writers. If I found out one of Beth Mole's medical nightmare stories didn't actually happen, or some component of it was fabricated, I don't think I could ever enjoy their pieces again.
Unfortunately, until that happens I cannot trust these two journalists. I'm not interested in reading potential misinformation on hot topics. If I wanted that I'd still be using twitter. So, for their sake, please publish a follow-up to this.
Those of us who are already shaking our heads at the access journalism that goes on here at times agree with you.

This is where I'm at as well. The retraction is warranted and appreciated. However, this shouldn't have happened in the first place, and it shakes my trust in the reporting Ars does going forward.