I'm pleased by the retraction and notice, but displeased by the approach. I'll echo others in saying I expect the retraction itself to include the title of the original story being retracted (and ideally, a link as well); and I think it reasonable to also expect that the original story will not be removed, but rather altered with a prominent notice of the retraction at the top of the story, ideally with a link back to the retraction notice.
Anything less leaves one with the lingering suspicion, however undeserved, that the editorial staff is more interested in "burying" the mistake than correcting it.
Which is why I am giving them the benefit of the doubt. This was a really bad event for them. I can’t imagine anyone there being happy with this. I’d imagine they are going to crack down hard to make sure this doesn’t happen again, because it’s just about the most damaging thing they could do. I know they’ve lost subscribers over this and that many people will not give them another chance.
I’ve been there. I loved Washington Post until they shit the bed and I had to cancel. I don’t see that happening here, at least for me. I can’t imagine them doing anything other than taking this very seriously. They know this could be an extinction level event for a small publication focused on technology.
But should I be wrong and it happens again I’ll be lining up to cancel too. I just think they’ve earned a little bit of understanding and patience, more than any other media outlet I’ve ever interacted with at least in my view.
The situation isn’t strange. LLMs don’t know truth or reality. Why are people surprised when they get things wrong?

The whole situation is so strange. I have started using Gemini quite a bit in the last 3-4 weeks, and it is shocking how good it is and how much detailed information it can give me about obscure topics like particular revisions of automotive parts or configuration settings.
What's also eye-opening is just how often it's completely wrong.
...and when it's incorrect, it's confidently incorrect.
It has suggested parts that fulfill my requirements that simply don't exist, and it has explicitly told me which programming values to change to update the 12V battery configuration in my Mach-E; when I verified, those values were in the wrong location.
It even explains why those are the correct parts or correct values.
When I correct it, it says, "well spotted! Those are the correct values for the F-150 and x other vehicle. The correct value for your vehicle is 'y'".
It's often still incorrect.
It's so important to verify these things.
At first I agreed with you, upvoted, and moved on. But then I thought about the implications of doing this in today's world.

Indeed, the old saying "it's not the crime, it's the coverup" has stuck around for good reason. In this case, putting up the notice of retraction and a link to the retraction article(s), striking through the whole original article, and delisting it, but leaving it up with the comments thread as well (locked from further replies), seems like a better response. Nothing to hide, no links broken, people can see what happened, while also making it very clear that it's retracted.
Maybe with the ever-rising tide of AI, journalism organizations will have to start treating people more like aircraft pilots, where if someone is sick enough it's effectively a "safety risk" and they should be outright forbidden from any further work until they're recovered (enforce it technically too! disable the VPN, etc.). Or at least any public-facing work; maybe doing the equivalent of desktop cleanup is okay, but nothing in the hot seat. Both for them and for the org. It's true that a lot of people might just try to work normally from home, but perhaps enforcing a stronger work/life separation (you will take some time and relax, and you will like it) is worth it in this age of blurring lines.
While publishing a statement despite management instructions might be a fireable offense (if that is what happened; this article might be management’s response), it is also… basically good, right? I mean, management always tells you not to comment as a CYA measure for them.

Re: Benj Edwards' statement
First off, I don't buy the sick in bed excuse. It's lame.
Second, that you were using AI tools to help you write an article should be enough grounds for Ars to fire your ass.
Third, that you've published a statement when you were explicitly directed not to should also warrant your termination.
In summary, I don't think your statement will accomplish what you're hoping it will. You did exonerate your co-author, so at least you're willing to go down without taking someone else with you.
He had knowledge: the knowledge that these tools were not allowed by Ars policy. He had intent: he didn't disclose the use of these tools, so he obviously knew he would get in trouble if he did. A lie of omission is still a lie. If he thought he was doing the right thing, he would have said something BEFORE he got caught.

A lie requires knowledge and intent. This appears to be an honest (if stupid) mistake on the author's part, per their explanation.
I seem to remember them addressing it. And I can't imagine why they would want to dwell on it. It became national news, so there was plenty of reporting on it. And I mean... what could they have done differently in terms of hiring him? Asked in the interview if the guy was a pedo? It's a horrible look for any business to have someone like that on their staff, but it's silly to think they knew anything about it.

Every time something like this happens on this site, it reminds me they had a literal pedophile on staff for years, and after it was discovered and charged by the authorities their reaction was to pretend it never happened...
Funny that you mention that:

Hmm, unfortunately this does not surprise me at all. As an active curator I see a fair bit of slop on Stack Overflow and feel like I’ve gotten pretty good at spotting it. I’ve had questions about some of the author’s articles in the past, with sections in retro gaming pieces that sounded like AI to me. I chalked it up to my being paranoid, since he’s also “the AI guy,” but in light of this I hope past content is investigated as well.
It looks like negligence to me. If you want to play with it on your own system, fine. But putting a powerful, unpredictable agent on the net is negligent.

So at this point it seems a little bit like OpenClaw and the like are mainly written to be trolling tools. Like, seriously, set AI loose on the internet with all the fine examples of behavior, what do you expect? Is there a little bit of glee that things go off the rails?
I understand that, but Ars can still define their culture and try to break that preexisting conception. If there's one positive that can come out of this, I hope Ars takes this opportunity to do so.

I wouldn't attribute that to Ars. It's extremely common in white-collar work to just soldier through sickness, and that's only been made worse by remote work (ironic, given its origins in COVID). I know I've worked from home plenty of days when I probably should've just taken advantage of a sick day.
We know more, but only if we go digging through authors' personal social media accounts. Here is Benj's statement taking responsibility and exonerating Kyle.

We only know that Kyle was asked not to comment.
When Glass made up a fake company for his fake story, he produced a website, business cards, and flyers for the company participating in the conference he made up. When caught, he had his brother pose as an executive of that company to try to fool his editor. If that level of dishonesty were going on, I would hope for that level of response.

When Stephen Glass was found to have fabricated a story for the New Republic, there was a long investigation of prior stories and quite a bit of public disclosure. Will that be happening here?
First time?

It looks like negligence to me. If you want to play with it on your own system, fine. But putting a powerful unpredictable agent on the net is negligent.
I thought of that as well, though I should have covered it in my comment as I meant to. The problem, though, is that you're assuming the information can be vanished, and that people won't check. Which is precisely the whole issue under discussion! The information cannot be vanished; it's now out there permanently on various mirrors/archives/caches. And if any AI uses it, then just as we're asking here, we have to depend on the human to verify the source, right? Or it's pointless regardless.

At first I agreed with you, upvoted, and moved on. But then I thought about the implications of doing this in today's world.
Five years ago that would have been an ideal response, because a human encountering a page of strikethrough text bracketed with warnings could reliably be expected to interpret the article as intended.
However, AI cannot be trusted to do this, and therefore any misinformation contained in the article could be spread to unknowing humans by AI.
There needs to be a more careful solution.
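For what it's worth, the "leave it up but delist it" approach described above can be sketched in plain HTML. This is illustrative only; the banner text, class names, and paths are made up, not anything Ars actually uses:

```html
<!-- Sketch: a retracted article left online for humans but delisted. -->
<head>
  <!-- Asks search engines and compliant crawlers not to index or cache the page. -->
  <meta name="robots" content="noindex, noarchive">
</head>
<body>
  <!-- Prominent notice at the top, linking back to the retraction statement. -->
  <div class="retraction-banner">
    This article has been retracted.
    <a href="/retraction-notice">Read the retraction notice.</a>
  </div>
  <!-- Original text preserved, struck through so its status is unmistakable. -->
  <article><s>Original article text…</s></article>
</body>
```

Note that `noindex` only deters well-behaved crawlers; as the comments above point out, mirrors, archives, and scrapers that already copied the text are beyond this kind of control.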
Benj Edwards currently posts on twitter.

Also, of course everyone involved is only on Bluesky, also known as the place snowflakes go to have their own petty and annoying echo chamber.
No, of course there's no way to know AT THIS TIME. You can reliably bet that there's going to be a very serious set of discussions behind closed doors and that there will be a suitable write-up on this in due course.

I mean with other authors, not just this one. Failure of the editorial process.
From the explanation, it seems that he just thought he was using ChatGPT as a tool to help explain and extract data from a blog post.

I have very mixed thoughts about the "but I had COVID" excuse. Brain fog from a fever is VERY real, and I could understand a lapse in judgment in trusting a model's output due to brain fog... But there was also a double policy violation (posting AI-generated content, AND failing to clearly disclose that content).
Shout out to @Aurich and any of the other moderators (who I unfortunately don't know by name) working in the moderation mines on this. You undoubtedly have zero meaningful connection to what happened here beyond it being your employer, yet you're having to read through numerous (justifiable) comments on the story. I've been reading for a while now, and every time I hit next page, there are more pages...
I also would like this! In addition to AI, I suggest clarifying how Ars decides when to grant sources anonymity, and the measures in place to prevent access journalism. These are two issues that have cropped up from time to time previously.

Is Ars' written policy on AI use in articles available to the public? I tried to search for their policies and was unable to find them. Some other outlets that I support have their editorial standards publicly available.
That's a tad bit of an understatement on Benj's part, but okay...

... I inadvertently ended up with a paraphrased version of Shambaugh's words rather than his actual words.