Editor’s Note: Retraction of article containing fabricated quotations

Status
Not open for further replies.

Bernardo Verda

Ars Legatus Legionis
13,006
Subscriptor++
I'm pleased by the retraction and notice, but displeased by the approach. I'll echo others in saying I expect the retraction itself to include the title of the original story being retracted (and ideally, a link as well); and I think it reasonable to also expect that the original story will not be removed, but rather altered with a prominent notice of the retraction at the top of the story, ideally with a link back to the retraction notice.

Anything less leaves one with the lingering suspicion, however undeserved, that the editorial staff is more interested in "burying" the mistake than correcting it.

Which is why I am giving them the benefit of the doubt. This was a really bad event for them. I can’t imagine anyone there being happy with this. I’d imagine they’re going to crack down hard to make sure this doesn’t happen again, because it’s just about the most damaging thing they could do. I know they’ve lost subscribers over this and that many people will not give them another chance.

I’ve been there. I loved the Washington Post until they shit the bed and I had to cancel. I don’t see that happening here, at least for me. I can’t imagine them doing anything other than taking this very seriously. They know this could be an extinction-level event for a small publication focused on technology.

But should I be wrong and it happens again I’ll be lining up to cancel too. I just think they’ve earned a little bit of understanding and patience, more than any other media outlet I’ve ever interacted with at least in my view.

Seconded. Or third-ed, or whatever number I am in the chorus.
 
Upvote
-3 (15 / -18)

Soothsayer786

Ars Tribunus Militum
2,871
Subscriptor
I think all of this does give Ars some opportunity though to try and lead the way among the media in having zero tolerance for AI gimmicks. I mean these guys are the experts and it happened to them, so we all know that the rest of the media is doing it a million times worse.

Ars should publish some helpful articles pointed not just at regular readers of the publication, but at the media industry in general. I mean really break down what your editorial process is and how you will work to prevent it from happening again, and share any tips or whatever about the traps and pitfalls of AI tools.

There is going to need to be some sort of industry-wide framework established eventually to prevent this stuff, or the outrage will just keep growing as every outlet publishes AI BS, often probably without anyone catching it.

But I guess the media is gonna media. I don't have a lot of hope for it happening spontaneously. It's going to take events like this, where they publish outright wholly fabricated nonsense that lands them in serious hot water. It's just a matter of time until AI-hallucinated reporting gets someone harmed or killed in the real world by false statements. Then come the lawsuits. Then maybe they'll get serious about it. I sure as hell hope so.
 
Upvote
60 (60 / 0)

miken32

Ars Scholae Palatinae
861
Hmm unfortunately this does not surprise me at all. As an active curator I see a fair bit of slop on Stack Overflow and feel like I’ve got pretty good at spotting it. I’ve had questions about some of the author’s articles in the past, with sections in retro gaming pieces that sounded like AI to me. I chalked it up to my being paranoid since he’s also “the AI guy,” but in light of this I hope past content is investigated as well.
 
Upvote
70 (71 / -1)

stoattiep

Smack-Fu Master, in training
53
The whole situation is so strange. I have started using Gemini quite a bit in the last 3-4 weeks, and it is shocking how good it is and how much detailed information it can give me about obscure topics like particular revisions of automotive parts or configuration settings.

What's also eye-opening is just how often it's completely wrong.

...and when it's incorrect, it's confidently incorrect.

It has suggested parts that fulfill my requirements but simply don't exist, and it has explicitly told me which programming values to change to update the 12V battery configuration in my Mach-E; when I went to verify, those values were in the wrong location.

It even explains why those are the correct parts or correct values.

When I correct it, it says, "well spotted! Those are the correct values for the F-150 and x other vehicle. The correct value for your vehicle is 'y'".

It's often still incorrect.

It's so important to verify these things.
The situation isn’t strange. LLMs don’t know truth or reality. Why are people surprised when they get things wrong?
 
Upvote
81 (82 / -1)

Marlor_AU

Ars Tribunus Angusticlavius
7,670
Subscriptor
On top of everything, it's not a good look for Benj to copy-paste the same reply here, here, here, and here.
Why is that an issue? He's written a comprehensive response, and is individually replying to people to direct them to it.

There are problems here, but this isn't one of them.
 
Upvote
102 (105 / -3)

Resistance

Wise, Aged Ars Veteran
418
Indeed, the old saying "it's not the crime, it's the coverup" has stuck around for good reason. In this case, a better response would be: add the notice of retraction and a link to the retraction article(s), put a {strike} around the whole original article, and delist it, but leave it up with the comments thread as well (locked from further replies). Nothing to hide, no links broken, people can see what happened, while also making it very clear that the article is retracted.


Maybe with the ever-rising tide of AI, journalism organizations will have to start treating people more like aircraft pilots, where if someone is sick enough it's effectively a "safety risk" and they should be outright forbidden from any further work (enforce it technically too! disable the VPN, etc.) until they're recovered. Or at least any public-facing work; maybe doing some equivalent of desktop cleanup is OK, but nothing in the hot seat. Both for them and for the org. It's true that a lot of people might just try to work normally while remote, but perhaps enforcing stronger work/life separation (you will take some time and relax and you will like it ;)) is warranted in this age of blurring lines.
At first I agreed with you, upvoted, and moved on. But then I thought about the implications of doing this in today's world.

Five years ago that would have been an ideal response, because a human encountering a page of strikethrough text bracketed with warnings could reliably be expected to interpret the article as intended.

However, AI cannot be trusted to do this, and therefore any misinformation contained in the article could be spread to unknowing humans by AI.

There needs to be a more careful solution.
 
Upvote
108 (114 / -6)
Re: Benj Edwards' statement

First off, I don't buy the sick-in-bed excuse. It's lame.

Second, that you were using AI tools to help you write an article should be enough grounds for Ars to fire your ass.

Third, that you've published a statement when you were explicitly directed not to should also warrant your termination.

In summary, I don't think your statement will accomplish what you're hoping it will. You did exonerate your co-author, so at least you're willing to go down without taking someone else with you.
While publishing a statement despite management instructions might be a fireable offense (if that is what happened; this article might be management’s response), it is also… basically good, right? I mean, management always tells you not to comment as a CYA measure for them.

But, for us in the community he’s basically cleared up what happened. And IMO, stopping people from blaming his coworker is the human-level non-corporate-speak right thing to do.
 
Upvote
77 (82 / -5)

magao

Wise, Aged Ars Veteran
198
I'm not going to read all the comments right now (working), so this may already have been covered, but there are a lot of comments about not leaving the article up with a retraction. Personally, at this point in time, I think the article should have been named, but not linked.

Be as transparent about the process and results as you can, when you can. I expect that there was an urgent meeting with the lawyers where it was stated "this is the limit of what you can say at this point".

Whilst I agree that having the article + retraction is the preferred situation from a human PoV, we are also living in a world where that incorrect information is then being indexed and sucked up into LLMs, making the problem worse. So initial damage control is IMO probably the right move - esp. on a holiday weekend. Same thing that should be done with a bad production release - revert and contain the damage as best as possible in the shortest time possible, then do your root cause analysis and mitigations to whatever came out of that.
 
Upvote
55 (59 / -4)

josephhansen

Ars Centurion
287
Subscriptor
A lie requires knowledge and intent. This appears to be an honest (if stupid) mistake on the author's part, per their explanation.
He had knowledge: the knowledge that these tools were not allowed by Ars policy. He had intent: he didn't disclose the use of these tools, so he obviously knew he would get in trouble if he did. A lie of omission is still a lie. If he thought he was doing the right thing, he would have said something BEFORE he got caught.
 
Upvote
85 (90 / -5)

Soothsayer786

Ars Tribunus Militum
2,871
Subscriptor
Every time something like this happens on this site, it reminds me they had a literal pedophile on staff for years, and after he was discovered and charged by the authorities, their reaction was to pretend it never happened...
I seem to remember them addressing it. And I can't imagine why they would want to dwell on it. It became national news so there was plenty of reporting on it. And I mean... what could they have done differently in terms of hiring him? Asked in the interview if the guy was a pedo? It's a horrible look for any business to have someone like that on their staff but it's silly to think they knew anything about it.

Dennis Hastert, 1999 to 2007, Republican Speaker of the House. Convicted pedophile. The person third in line from President was a pedo. And there is a mountain of circumstantial evidence that our current President is one also and he isn't even really hiding it.

These people are everywhere at all levels.
 
Upvote
118 (119 / -1)

passivesmoking

Ars Tribunus Angusticlavius
8,530
This is the real danger of "AI". It poisons the well with so much crap that even MAGA's ability to spew complete and utter bullshit at an alarming rate pales in comparison, and with so much crap out there it's now all but impossible to determine what's real and what isn't.

This is the literal meaning of gaslighting. I really dislike how "gaslighting" has become a synonym for mere lying. It's not; it's far more insidious than that. It's not mere lying; it's spewing so much half-truth, revisionism, and flat-out fabrication that your ability to even tell what's true and what isn't is compromised.

If people whose entire career is built around distinguishing fact from fiction can be taken in, what hope do the rest of us have?
 
Upvote
57 (60 / -3)

Jensen404

Ars Scholae Palatinae
1,075
Hmm unfortunately this does not surprise me at all. As an active curator I see a fair bit of slop on Stack Overflow and feel like I’ve got pretty good at spotting it. I’ve had questions about some of the author’s articles in the past, with sections in retro gaming pieces that sounded like AI to me. I chalked it up to my being paranoid since he’s also “the AI guy,” but in light of this I hope past content is investigated as well.
Funny that you mention that:

View: https://bsky.app/profile/benjedwards.com/post/3memhhdd3ls2r


View: https://bsky.app/profile/benjedwards.com/post/3memhxj2ew22r
 
Upvote
27 (37 / -10)

LauraW

Ars Scholae Palatinae
1,005
Subscriptor++
This retraction is a good first step. But you also need to...
  1. Change the article's retraction notice to say why it was retracted.
  2. Conduct a post-mortem to determine exactly how this happened.
  3. Change Ars policies and/or personnel to prevent it from happening again.
  4. Be completely transparent about 2 and 3. If you aren't completely open, you will not regain the trust you've lost and many readers / subscribers will view Ars as just another outlet for AI slop.
  5. If your lawyers say "no" to #4, find new lawyers who will work with you to help minimize business risk while still coming clean.
 
Upvote
59 (60 / -1)
It's cool that the writer threw himself under the bus (even if you think his excuse is lame), but to be real, throwing oneself under the bus is the editor's job, because what gets published is ultimately their responsibility (regardless of what copy was initially written, since you don't see it until they sign off on it).

I know people have become all loosey-goosey with this concept in the new "we gotta compete with twitter!" days. But, sometimes old-school is the correct school. Just saying.

While I'm browsing the 'real talk' dept: Using AI quotes in an article about how AI use is becoming a living nightmare is some knee-slapping shit. I tip my cap.
 
Upvote
-3 (24 / -27)

rochefort

Ars Praefectus
5,245
Subscriptor
So at this point it seems a little bit like OpenClaw and the like are mainly written to be trolling tools. Like seriously, set AI loose on the internet with all the fine examples of behavior, what do you expect? Is there a little bit of glee that things go off the rails?
It looks like negligence to me. If you want to play with it on your own system, fine. But putting a powerful unpredictable agent on the net is negligent.
 
Upvote
34 (35 / -1)

User_E

Wise, Aged Ars Veteran
112
Subscriptor++
I wouldn't attribute that to Ars. It's extremely common in white-collar work to just soldier through sickness, and that's only been made worse by remote work (ironic, given its origins in COVID). I know I've worked from home plenty of days when I probably should've just taken advantage of a sick day.
I understand that, but Ars can still define their culture and try to break that preexisting conception. If there's one positive that can come out of this, I hope Ars takes this opportunity to do so.

I hope a more comprehensive postmortem from Ars will address this systemic aspect. No worker (at Ars or elsewhere) should hesitate for a second to call in sick when they have COVID. They should have adequate (unlimited, in my opinion, coordinated with disability and with guardrails as I mentioned before) paid sick leave available to them and the culture should be set from the top down to encourage its use (and discourage working while sick). It's a win for everyone: it avoids recurrences of costly, reputation-damaging situations like this, and more importantly, it's a win for workers' welfare.
 
Upvote
30 (35 / -5)

Jim Salter

Ars Legatus Legionis
17,141
Subscriptor++
We only know that Kyle was asked not to comment.
We know more, but only if we go digging through authors' personal social media accounts. Here is Benj's statement taking responsibility and exonerating Kyle.


View: https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p


I have very mixed thoughts about the "but I had COVID" excuse. Brain fog from a fever is VERY real, and I could understand a lapse in judgment in trusting a model's output due to brain fog... But there was also a double policy violation (posting AI generated content, AND failing to clearly disclose that content).

I'm not going to join in the crowd of hateful comments about Benj. I wish him well... Elsewhere. But not here. I don't think handwaving this issue would be in the best interests of either Ars or Benj.
 
Upvote
123 (131 / -8)

Tobold

Ars Tribunus Militum
1,973
Subscriptor++
When Stephen Glass was found to have fabricated a story for the New Republic, there was a long investigation of prior stories and quite a bit of public disclosure. Will that be happening here?
When Glass made up a fake company for his fake story, he produced a website, business cards and flyers for the company participating in the conference he made up. When caught, he had his brother pose as an executive of that company to try to fool his editor. If that level of dishonesty were going on, I would hope for that level of response.

This appears to have been a lazy and careless use of AI. While absolutely unacceptable and galling when writing for a site that has frequently cautioned that AI cannot be trusted, it is far from the level of dishonesty that Glass employed.
 
Upvote
99 (101 / -2)

xoa

Ars Legatus Legionis
12,364
Subscriptor++
At first I agreed with you, upvoted, and moved on. But then I thought about the implications of doing this in today's world.

Five years ago that would have been an ideal response, because a human encountering a page of strikethrough text bracketed with warnings could reliably be expected to interpret the article as intended.

However, AI cannot be trusted to do this, and therefore any misinformation contained in the article could be spread to unknowing humans by AI.

There needs to be a more careful solution.
I thought of that as well, and I meant to cover it in my comment. The problem, though, is that you're assuming the information can be vanished, and that people won't check. Which is precisely the whole issue under discussion! The information cannot be vanished; it's now out there permanently on various mirrors/archives/caches. And if any AI uses it, then, just as we're asking here, we have to depend on the human to verify the source, right? Or it's pointless regardless.

So basically if someone is checking sources then deleting the article does not help. And if someone isn't checking sources then deleting the article still doesn't help because it might as well be hallucinations anyway right? But deleting the original does prevent people from learning, does feel like a lack of transparency, and does remove any value the community added independent of the article itself. We can no longer directly see the original response made by the falsely "quoted" subject and the dialog that ensued from there.

So while I can see the first level of argument "hide this from AI", I both don't think it hides it from AI at all and I think it's a possible case of "the cure is worse than the disease".
 
Upvote
61 (61 / 0)
My thoughts:

Ars Technica doesn't owe anyone blood.

I don't have a problem with vanishing the original article. It was the Friday before a long weekend. Leaving it up unedited when you're not even sure how much of the article is valid is malpractice, and the authors best able to determine how the article should be changed are the reason the article is in question in the first place. Better to just take it down completely until you've decided what's to be done about both it and the authors who wrote it.

COVID's a bitch. I've had COVID far too many times for my liking, both pre-immunization and post-immunization, including a bout last year even though I've gotten every booster available to me, and I've belatedly realized that in the long term it's affected both my cognition and memory.

The danger of relinquishing the way we think and communicate about the real world to AI is that AI can never be held accountable. Humans can be held accountable, and a goal of accountability should be the opportunity of redemption.
 
Upvote
-7 (45 / -52)
Why was there pressure to work through Covid and get the article finished in the first place? It's a straightforward case of sick leave until recovery, which ideally should be paid sick leave.

If the piece had to come out, it could have been handed over to another member of staff to complete, instead of what appears to have happened here, which is trying to get it done ASAP and missing things.
 
Upvote
24 (39 / -15)

taxythingy

Ars Praetorian
573
Subscriptor
I mean with other authors, not just this one. Failure of the editorial process.
No, of course there's no way to know AT THIS TIME. You can reliably bet that there's going to be a very serious set of discussions behind closed doors and that there will be a suitable write up on this in due course.

What you will not be getting is a guarantee that the editorial process will double-check every single sentence, quote and attribution of every article going out. That kind of scrutiny just isn't going to happen and I don't think it is being asked for.

As a reader, what I want is for authors to have a clear policy around accuracy (particularly of quotes and attributions), to be given the authority to do their job, the expectation to do it well, and the responsibility to get it right. This framework enables most people to perform highly and to enjoy what they do, the combination of which leads to the typically high standard of articles on Ars Technica.
 
Upvote
49 (50 / -1)

Scriptor

Seniorius Lurkius
3
This feels like a full-circle moment for the site.

In June 2014, there was a story here claiming a "supercomputer" named Eugene Goostman was the first to pass the Turing test. That report relied on claims by Kevin Warwick, who has often been criticized for prioritizing media attention over scientific accuracy.

That story was technically incorrect: Eugene was a chatbot script, not a high-performance computer. Other programs, like PC Therapist in 1991 (on limited topics) and Cleverbot in 2011, had already achieved higher results (50% and 59%, respectively) than the 30% mark used for the 2014 claim.

It is a reminder that AI news can bypass basic skepticism. We've gone from reporting on a bot that tricked judges to being the judges tricked by the bots.
 
Upvote
37 (37 / 0)

Marlor_AU

Ars Tribunus Angusticlavius
7,670
Subscriptor
I have very mixed thoughts about the "but I had COVID" excuse. Brain fog from a fever is VERY real, and I could understand a lapse in judgment in trusting a model's output due to brain fog... But there was also a double policy violation (posting AI generated content, AND failing to clearly disclose that content).
From the explanation, it seems that he just thought he was using ChatGPT as a tool to help explain and extract data from a blog post.

I've seen people in my own company tripped up by this. When asked where dubious requirements come from, they'll say: "I used an LLM to extract the relevant requirements." That's not reliable; LLMs aren't deterministic. But less technically inclined staff don't know that, and it's usually treated as a learning experience.
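The non-determinism is easy to demonstrate without any real model. With the usual temperature-based decoding, the model's scores (logits) are converted to a probability distribution and a token is *sampled* from it, so identical input can yield different output on different runs; only temperature 0 (greedy argmax) is repeatable. A toy sketch of that mechanism (the three-way "vocabulary" and the logit values are made up for illustration, not taken from any actual system):

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from logits; temperature == 0 means greedy (argmax)."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature, then draw from the resulting distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.8, 0.5]  # three nearly tied candidate "next tokens"

# Greedy decoding (temperature 0): every run picks the same token.
greedy = {sample_token(logits, 0, random.Random(seed)) for seed in range(100)}

# Temperature 1.0: different seeds pick different tokens from the same input.
sampled = {sample_token(logits, 1.0, random.Random(seed)) for seed in range(100)}
```

Here `greedy` collapses to a single token while `sampled` contains several, which is the whole point: the same prompt, the same scores, but different "extracted requirements" on different runs.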

However, I'd expect an AI reporter to already be well across this.
 
Upvote
86 (87 / -1)

Aurich

Director of Many Things
40,906
Ars Staff
Shout out to @Aurich and any of the other moderators (who I unfortunately don't know by name) working in the moderation mines for this. You have undoubtedly zero meaningful connection to what happened here beyond it being your employer, yet you're having to read through numerous (justifiable) comments on the story. I've been reading for a while now, and every time I hit next page, there are more pages...
[reaction gif: GmaaDTA.gif]
 
Upvote
121 (124 / -3)

willdude

Ars Scholae Palatinae
760
Kinda crazy how many people here are acting like this is some sort of nefarious cover up and not an in-progress issue over a holiday weekend. Like... the original post was pulled less than 2 hours after it was posted because Benj was too sick to fix it. This isn't the NYT or WaPo, they don't have a 24/7 newsroom working this weekend. Maybe give them more than a Sunday afternoon to figure out the full story, and don't just assume the quick retraction note is the end of it?
 
Upvote
81 (93 / -12)
Is Ars' written policy on AI use in articles available to the public? I tried to search for their policies and was unable to find them. Some other outlets that I support have their editorial standards publicly available.
I also would like this! In addition to AI, I suggest clarifying how Ars decides when to grant sources anonymity, and the measures in place to prevent access journalism. These are two issues that have cropped up from time to time previously.
 
Upvote
15 (16 / -1)

saanaito

Ars Scholae Palatinae
1,305
I'm not angry, but I'm deeply disappointed in Mr. Edwards for his use of AI to "source" quotes. I'm also irritated with Ars for retracting the article by removing the entirety of its text from the website, instead of leaving it available in some form so that folks can be made more aware. I understand not wanting to leave up misinformation, but transparency is far more important IMO.

I'm so, so goddamned tired of generative AI. I've dabbled in using chatbots before, but I've been trying to avoid them because I am vehemently against all of their drawbacks and external negative consequences. It's just not reliable enough to offset the enormous strain AI puts on our society - from the misinformation it "makes" and spreads, to the erosion of artistic and journalistic integrity, to the theft of IP its makers partake in, to the disastrous level of resource consumption it takes to run it at scale, to the catastrophic consequences it's wreaking on the US economy and on computing hardware resources by proxy. And yet it feels like so much of the population, outside of spaces like Ars, just does. Not. Care.

Earlier today I was on the phone with a friend who has been developing a game in Godot, and who decided to take the plunge and install Linux Mint on his desktop today, after encountering a final straw with Windows 10. Through the course of the conversation, I learned that he has been using AI to "brainstorm" ideas for his game and to help him write code (in Godot's scripting language, apparently?); and he has been "talking" to a chatbot to help him prepare for the switch to Linux and identify pain points, particularly with discerning which software he uses would just work "out of the box" and which would have problems and/or need compatibility tools. (Fortunately for him, due to my influence over the years, he has already switched a huge chunk of his workflow to FOSS programs.) Me being helpful to a fault, I gave him some pointers to help him adapt to Mint a little quicker (and, if I may be smug, I did it without any use of genAI; I recited some details from memory and sat in front of one of my own computers, also running Mint, to check for and then describe the steps).

I knew I couldn't argue with him about any of his AI usage, so I did the voice call equivalent of smiling and nodding along until we could move on. And there are so many nerds in my older friend circles (which have already been shrinking since I began my gender transition) who just don't see or don't care about the concerns I've expressed. Same as him. It feels like we're each entrenching ourselves further into bubbles, in spite of our efforts to maintain the friendships and enjoy our mutual hobbies (D&D, etc.).

I'm sorry for the off-topic rant here. I'm just .... ugggggghhhhhhhhh. I kind of want to just get off the Internet forever.
 
Upvote
75 (80 / -5)