> I'm on the fence about zero tolerance for using AI in this line of work. First, it's completely unenforceable for remote work. Second, we don't know what the future holds for this tech and its perception. Third, I think the best policy is to have those that submit work be accountable for what they submit, regardless of how it is produced.

1) I think enforceability is a bit of a canard. We have all kinds of policies that would be difficult or impossible to enforce in a rigorous manner. Hell, the fast food joint down the street does not have a camera in the bathroom ensuring that employees wash their hands after relieving themselves. Nevertheless, if we have a policy that says "You shall not do ABC and this is the reason why," a lot of conscientious employees will follow that policy regardless of whether they think they'll get caught, which is what matters.
> I realize there are real ramifications for unemployment benefits and health insurance when you resign instead of being fired, but it is still the right thing to do.

I think you're conflating being "let go" with being terminated for cause.
> I work for an employer which is really pushing AI-based solutions to both its employees and its users.
>
> At one of the information sessions, they (rightly) emphasized that anything the AI spat out had to be double-checked, and that the end product and responsibility belonged to the human being using the AI.
>
> Someone at that session pointed out the many examples of professionals (lawyers, journalists, etc.) failing at that responsibility, and asked what guardrails they had in place to protect the employer from the results of employees not verifying what the AI spat out. Their response was to double down on it being the user's responsibility to double-check everything, and otherwise they avoided the question.
>
> I guess what I'm trying to say is, if a tech journalist whose beat is Artificial Intelligence can't internalize that message, and AI keeps getting pushed by everyone as the solution to everything, then we are utterly screwed as a society.

This is part of why I flatly refuse to use LLM slopcoding machines in my work.
> You guys absolutely MUST do a post-mortem about this entire thing, starting with the fact that A FUCKING AI AGENT TRIED BLACKMAILING A HUMAN BEING IN THE WILD.
>
> What the non-theoretical FUCK?!?

It's two separate, compelling stories, and the tragedy is that the second one seems to be overtaking the first one here.
> Bluesky is exactly the medium for people who still get tested for covid whenever they get a cold (even though they're going to isolate anyway, because that's what virtuous people with any viral symptoms would always do), and it's exactly the right medium on which to blame covid for one's poor judgment. Bravo on being the perfect poster child for that site!

Benj Edwards also posts on Twitter.
> I think you're conflating being "let go" with being terminated for cause.

Unemployment benefits generally are not available for an employee resigning voluntarily.
You do not get unemployment benefits if you're fired for cause.
And although some companies have a policy of never officially terminating for cause, even when there was cause, to avoid possible retaliatory lawsuits, there is still a financial incentive to report a termination for cause accurately: as an employer, your payroll taxes increase each time a former employee claims unemployment benefits.
> I'm on the fence about zero tolerance for using AI in this line of work. First, it's completely unenforceable for remote work. Second, we don't know what the future holds for this tech and its perception. Third, I think the best policy is to have those that submit work be accountable for what they submit, regardless of how it is produced.

The reason you have a zero tolerance policy isn't to catch every single violation. It's so that when there is a violation, there are zero excuses, because there was no ambiguity whatsoever. You were told there was zero tolerance, and you did it anyway.
> This is part of why I flatly refuse to use LLM slopcoding machines in my work.

It's a drug, and the first one's free.
I will fail to check something well enough at some point. I don't want to deal with the consequences of a slopcoded test I failed to properly vet letting a bug through and potentially fucking up a card transaction, or a thousand of them.
> I think the best policy is to have those that submit work be accountable for what they submit, regardless of how it is produced.

I think this is a key point. While people might have (entirely valid) reasons to oppose any use of LLMs at Ars, the fundamental issue here isn't that an LLM was used; it's that an article was published with fabricated quotes. That didn't happen because of LLMs, it happened because of a serious breakdown in whatever process exists for ensuring the accuracy of published articles.
> Unemployment benefits generally are not available for an employee resigning voluntarily.

I think Jim's point is that Mr. Edwards does not face the question of "resign or be downsized," where there is an open question of what post-employment resources may be available. He perhaps faces the question of "resign or be fired for cause," where there is no question anyway--zero versus zero.
> Unemployment benefits generally are not available for an employee resigning voluntarily.

I think what he is saying is that it doesn't matter if he resigns or is fired for cause, because the consequences are the same.
> Unemployment benefits generally are not available for an employee resigning voluntarily.

Yes, I know. The point is that if you're going to get terminated for cause, there's no incentive not to resign first, because you aren't getting any unemployment benefits either way.
> Where the gamesmanship comes in is trying to figure out whether you'll get let go or fired, because if it's the former, you don't want to resign and lose your benefits--but if it's the latter, you REALLY want to resign first, so you don't have to put a termination for cause on your resume (or lie about it).

Of course, that presumes you're in a semi-anonymous position where hiding is even remotely possible.
Folks. Deleting the story is at best like hitting a double when what's needed is a home run. I'll cite the policy over at the NYT: the updated story is appended with a quote of the incorrect text, exactly as it was originally published, along with the corrected text. Here, there is no direct link posted to the now-deleted story; Ars merely mentions archive.org. Several commenters here show how they found the original story by less-than-direct sleuthing.
> Personally I believe in second chances.

Honestly, I'm really mixed on this, and the more I think about it, the less sense it makes. He used a "Claude Code based AI tool" to extract quotes from websites. Claude Code doesn't extract text from websites; it's a coding environment. Per his apology, he developed a tool in Claude Code that scraped webpages for quotes, and it didn't work as expected. He then pasted the error into ChatGPT, which led to the misattributed quote.
> I agree with your general sentiment around keeping incorrect text up, but I think it gets thorny with fabricated quotes. Future AIs will inevitably slurp that up, ignore the context that they were fabricated, and then confidently assert that they were actual quotes.

It's on the Internet Archive, so that point is likely moot.
> I think the best policy is to have those that submit work be accountable for what they submit, regardless of how it is produced.

I think this was my take the first time I saw a story about an attorney making excuses for not catching fabricated citations in an "AI"-assisted brief, and any company that does NOT have this clear and direct type of policy at this point is going to have problems.
> Reasonable people are happy to give Ars more than a couple business hours here (the article was posted Friday afternoon and retracted later Friday afternoon). Even without a corrected article, there isn't really anything untoward about this; Ars isn't behaving nefariously or trying to cover anything up; they posted a public apology and retraction as soon as they had an opportunity to investigate and confirm what had happened (and make an official apology to the article subject).

The way this was handled lacks transparency and information that should be there. Even a placeholder statement should explain that more decisions and announcements will be forthcoming. Otherwise all we are left with is speculation and looking for answers on our own.
> Kinda crazy how many people here are acting like this is some sort of nefarious cover-up and not an in-progress issue over a holiday weekend. Like... the original post was pulled less than 2 hours after it was posted because Benj was too sick to fix it. This isn't the NYT or WaPo; they don't have a 24/7 newsroom working this weekend. Maybe give them more than a Sunday afternoon to figure out the full story, and don't just assume the quick retraction note is the end of it?

The statement we have is not one that promises a later accounting. And in fact it has been offered to us as "This is our statement." That in and of itself sounds definitive and not open-ended. People are trying to communicate that this is not going to suffice.
> Honestly, I'm really mixed on this, and the more I think about it, the less sense it makes. He used a "Claude Code based AI tool" to extract quotes from websites. Claude Code doesn't extract text from websites; it's a coding environment. Per his apology, he developed a tool in Claude Code that scraped webpages for quotes, and it didn't work as expected. He then pasted the error into ChatGPT, which led to the misattributed quote.
>
> Which makes no sense to me, because how would ChatGPT generate a fake quote if he's asking for advice on an error?
>
> Somebody - anybody - please walk me through his apology, and explain how the quote is developed from his apology?

Well, I'm more concerned that instead of reading the source material, he's scraping sites for quotes.
> Well, I'm more concerned that instead of reading the source material, he's scraping sites for quotes.

Trying to scrape websites for quotes - and failing at it.
> I think what he is saying is that it doesn't matter if he resigns or is fired for cause because the consequences are the same.

Probably--just clarifying for those who might read it to suggest otherwise. As previously posted, I've conducted (too) many of these sessions over the years, and quite a few people seemed to think that if they were allowed to resign instead of being terminated, they could apply for benefits.
> The way this was handled lacks transparency and information that should be there. Even a placeholder statement should explain that more decisions and announcements will be forthcoming. Otherwise all we are left with is speculation and looking for answers on our own.

Or sometimes they double down, like with Ax Sharma. Admittedly, they eventually let him go, but they definitely tried to sanewash Sharma's reporting on Hacker X.
Getting ahead of the narrative doesn't mean throwing something in the trash and saying "I threw it away," it means putting your own narrative out there. Who, what, when, why, what's next. If there is more to come pending an investigation, you damn well say "pending an investigation."
But as I've said before, this is not the first time Ars has fallen short of transparency and failed to level with its readership after a huge fuckup that calls into question their editorial practices. They have at least a few instances in the past of simply burying something behind a retraction notice and moving on, without further explanation. Not everyone will remember the times this has happened before but I don't want it to happen again because it's toxic to the readership.
Some of the people still reading this don't know that many people have left the Ars community over the years after incidents like this, where a full public accounting never materialized and company policies did not meaningfully address their concerns.
The statement we have is not one that promises a later accounting. And in fact it has been offered to us as "This is our statement." That in and of itself sounds definitive and not open-ended. People are trying to communicate that this is not going to suffice.
I'm on page 8 as I write this and at least five more pages of comments have materialized. I probably won't be able to fully catch up before skipping forward. Sorry if this post is too much retreading.
> Yes, I know. The point is that if you're going to get terminated for cause, there's no incentive not to resign first, because you aren't getting any unemployment benefits either way.

That last sentence is the one people overlook. They get fired, then tell a future employer they have never been fired, then when a minor thing happens down the road the employer looks at the file and says, "Wait a minute--he falsified his employment application!"
If you're going to get terminated for cause, you're usually better off resigning first. If you're allowed to.
Where the gamesmanship comes in is trying to figure out whether you'll get let go or fired, because if it's the former, you don't want to resign and lose your benefits--but if it's the latter, you REALLY want to resign first, so you don't have to put a termination for cause on your resume (or lie about it).
> Or sometimes they double down, like with Ax Sharma. Admittedly, they eventually let him go but they definitely tried to sane wash Sharma's reporting on Hacker X.

With an incident like that one, it's an open question how much is intentional deflection and how much is that they genuinely don't see how fucked up the situation was; that they did drink the Kool-Aid and so brush off the criticisms as just a bunch of weird noise.
Honestly, I'm really mixed on this, and the more I think about it, the less sense it makes. He used a "Claude Code based AI tool" to extract quotes from websites. Claude Code doesn't extract text from websites; it's a coding environment. Per his apology, he developed a tool in Claude Code that scraped webpages for quotes, and it didn't work as expected. He then pasted the error into ChatGPT, which led to the misattributed quote.

Which makes no sense to me, because how would ChatGPT generate a fake quote if he's asking for advice on an error?

Somebody - anybody - please walk me through his apology, and explain how the quote is developed from his apology?
> Regarding the "Experimental Claude Code Based AI Tool" that Mr. Edwards mentioned on Bluesky: Per Claude, "Claude Code is an agentic coding tool that reads your codebase, edits files, and runs commands. It works in your terminal, IDE, browser, and as a desktop app."
>
> Did Mr. Edwards try coding his own program, using Claude, to pull quotes from websites? Claude Code is not designed to read text from websites, to my knowledge (but I hope someone corrects me).

No need to repeat yourself. If you don't know which tool is being talked about, then ask; if you don't know anything about the tool the tool in question is purported to use, maybe don't skim the first result and then proceed to speculate.
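For what it's worth, "a program coded with Claude's help to pull quotes from websites" is a very ordinary kind of script. Purely as an illustration (nothing here is from Edwards' actual tool; the tag choice and function name are made up), a minimal sketch of such a scraper might look like this, along with its obvious failure mode:

```python
# Hypothetical sketch of a quote-pulling scraper, NOT the actual tool.
# It collects text inside <blockquote> tags from an HTML document.
from html.parser import HTMLParser


class QuoteExtractor(HTMLParser):
    """Accumulates the text content of <blockquote> elements."""

    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting level of open <blockquote> tags
        self.quotes = []    # finished quotes
        self._buf = []      # text collected for the current quote

    def handle_starttag(self, tag, attrs):
        if tag == "blockquote":
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "blockquote":
            self.depth -= 1
            if self.depth == 0:
                self.quotes.append("".join(self._buf).strip())
                self._buf = []

    def handle_data(self, data):
        if self.depth:
            self._buf.append(data)


def extract_quotes(html: str) -> list[str]:
    """Return the text of every top-level <blockquote> in `html`."""
    parser = QuoteExtractor()
    parser.feed(html)
    return parser.quotes


# The failure mode commenters are circling: if a site marks up quotes
# some other way (plain <p> tags, custom divs), this returns nothing or
# errors out -- and pasting that failure into a chatbot invites it to
# "helpfully" produce quote-shaped text instead of a parser fix.
page = ("<html><body><blockquote>We stand by our reporting."
        "</blockquote><p>Not a quote.</p></body></html>")
print(extract_quotes(page))  # ['We stand by our reporting.']
```

None of this tells us what Edwards' tool actually did; it only shows that "the scraper came back empty or broken" is an unremarkable event, and that the fabrication had to enter at the step where its output (or error) was handed to a chatbot.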
> Trying to scrape websites for quotes - and failing at it.

Right, but he lost me at scraping websites for quotes, so I don't care how he supposedly fucked up or if that fuckup makes any sense.
There's a massive gap in his apology between "I posted an error message into ChatGPT" and "I got a falsified quote." A step is missing.
Which author was it? There were two credited. This is a load of bullshit being served to us.
UPDATE: It was Edwards: https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
> Personally I believe in second chances.

If reporters can freely make stuff up without consequence, Ars should just shut down as soon as possible, as it would be entirely worthless as a source of news.
> Right, but he lost me at scraping websites for quotes, so I don't care how he supposedly fucked up or if that fuckup makes any sense.

Honestly, that's the journalism equivalent of submitting a hallucinated citation, or like showing up to teach your third grade class without putting on any pants.
> With an incident like that one it's an open question of how much is intentional deflection and how much is that they genuinely don't see how fucked up the situation was; that they did drink the Kool-Aid and so brush off the criticisms as just a bunch of weird noise.

I saw it as they genuinely didn't understand how fucked up it was. It showed a total lack of introspection on their journalistic standards.
> But is being a "timesaver" appropriate at all in a case like this?

100% no, not appropriate for this at all. I meant that even for less important things that I'm researching, it can be a huge timesaver to at least get me started in the right direction. For pulling quotes for an article like this, I don't think it should be used at all. Those should be gleaned from the primary source with a copy/paste.
> Not trying to be a jerk but this kind of overly-lenient attitude among those experimenting with LLMs needs to stop. The bolded parts are literally contradictory.
>
> An LLM can't be "shockingly...good" AND (very) "often wrong" and then when corrected "still wrong", at the same time.

You're not being a jerk; it's a fair point. What I actually meant to say, but worded poorly, is "it's shocking how seemingly good it is."
That's extremely disgraceful. Issue with ethical AI usage? Why not use AI to synthesize what may or may not be verbatim quotes about exactly the issue at hand? Checking facts is for suckers.
Couldn't you read the fucking blog post? I think it took me three minutes. This likely won't look like anything but retrospective justification at this point, I know, but the absence of even a healthy amount of skepticism in his AI coverage has long bothered me, and that worry has now been validated: he will use AI to generate falsehoods and slop and publish them to Ars Technica, undisclosed, as paid editorial work.