Editor’s Note: Retraction of article containing fabricated quotations

Status
Not open for further replies.

arikol

Ars Centurion
295
Subscriptor++
Could we please get an AI correspondent who isn't drinking the Kool-Aid? Someone who discusses AI more critically.

As can be seen from the comment thread there are quite a few readers who already had Benj on a bit of an ignore due to the "wow, AI is amazing" bent of his stories, as well as his prior admissions to using AI to "assemble" articles.
There are other authors on Ars whose content reads more like a PR release from manufacturers. (I am happy for Gitlin to get to slide a supercar on ice, and it is cool that the stability systems are pretty good, but WHY are they good? What are the systems doing? Even the computer game articles tend to go deeper into what makes a game good or bad, and the OS reviews are pretty deep. Product reviews should also do that. Or just run the corporate PR release unchanged.)

AI slop content farms are everywhere and Ars runs the risk of becoming one, unless the editorial team takes action.
 
Upvote
94 (97 / -3)

Resistance

Wise, Aged Ars Veteran
418
I think we're distracting ourselves. His point was irrelevant and IMHO you didn't need to make it sillier--probably any more than I needed to point out that's what you were doing.

Here... I'm putting down my dull butter knife and backing away from the argument :)
How dare you be the bigger person!
 
Upvote
11 (14 / -3)

counterpoint

Smack-Fu Master, in training
65
Subscriptor++
  • We know, for a fact, that AI output was used. It is IRRELEVANT to this analysis whether the quotes were fabricated. Those words were taken from the output of an AI and put directly into the article. EVEN IF THE QUOTES WERE ACCURATE, THE AUTHOR DIDN’T DO THAT WORK. That’s AI output. Period.

If nothing else, I suspect an outcome here is that they may need to tighten up the phrasing of their policy.

I strongly suspect, rightly or wrongly, that some people's definition of "publication of AI-generated material" isn't this literal, and they may assume it means something like "when an LLM is doing the writing for you." If the intention is the more literal "if it passes through an LLM at all and is included in the resulting output in whole or in part, regardless of what it is or what it's for, it must be disclaimed," they may want to clarify.

For example, the author here intended to use a "Claude Code-based AI tool" to extract quotes from a webpage verbatim (and this tool didn't work, leading to use of a general purpose LLM that provided the confabulation); does everyone have a shared understanding that any use of the original tool, even if it works flawlessly and the quotes are perfectly accurate, must be disclosed? I suspect, rightly or wrongly, some people may think "if the result is demonstrably the same as if I had done it by hand, but this just saves me time and it's all verified, it's not a problem." (Not my personal opinion, but for the sake of the argument.)

Especially as the use of LLMs has expanded to incorporate more "tools" (actual deterministic software code being called by an LLM), the line between "AI-generated" and "tool-generated" may continue to blur, at least in some people's perception. Some writers are using LLM-powered or assisted software for researching, outlining, proofreading/rewriting, and so on, even if they still do the final writing "by hand." I don't know to what degree other writers here do any of this, but it's likely important for the policy to clearly align everyone's understanding and expectations.
 
Upvote
29 (32 / -3)
It absolutely is, and I tried to convey this in my post. "Here's how I imagine that worked," would have been a better way to put it than, "Here's my mental model of how that worked".

My default assumption is that clearing recent work was not done cavalierly. If it wasn't done cavalierly, it probably involved a lot of toil that was not emphasized in the two sentences that shared its results.

My suggestion was to share the details of that toil. Personally, I'm a lot more inclined to accept a less than perfect answer (and let's be honest, perfect would be no event like this, so that horse is out of the barn) when I can see an honest effort to address the issue.

I don't really disagree in sentiment; I want more or less the same thing you do. I also have trouble keeping my own posts concise, but I'm not sure that this kind of speculation really helps. I think the most effective comments are going to be the ones that say concisely what we want, as Ars forum members.

In my case:

1. When articles are retracted, I want it to be very clear what is being retracted so I don't have to do extra detective work to find out.

2. I would like a reasonably detailed account of what Ars ended up doing to deal with the issue of publishing an article with fabricated quotes, how it will verify that other Benj Edwards stories are free of easily verifiable factual errors, and what, if any, changes will be made to prevent future problems of this type.

3. I don't have an opinion on whether Edwards should be fired or not. I think that's a decision that only Ars staff more familiar with his conduct as a professional journalist should make. But I do think that whether they decide to keep him on or let him go, they should provide some explanation of why they made that decision.
 
Upvote
34 (35 / -1)
If I've read it correctly, as soon as his head cleared he had an 'oh, fuck!' moment and requested the article be yanked.
We have ZERO information that his “head clearing” was what caused the retraction. What we know for certain is that after publication, the original author came to Ars and said in no uncertain terms that Ars was falsely quoting him. That’s when shit hit the fan. Benj says that he asked for the article to be pulled because he was too sick to fix it. Which seems to have been AFTER it became obvious it was a major problem.

That’s not “my head is cleared”. That’s “the shit already hit the fan and I’m turning off the power to a city block to avoid it blowing around too much.”

Not sure what else he could have done?
Not used AI to pull quotes to avoid the terrible labor of … reading a short blog post and selecting a sentence or two to quote.

Verified the quotes before publishing, like a good journalist would.

Not lied by omission in failing to disclose that he had used AI in the creation of his piece.

I mean, just off the top of my head “avoid one of the most serious journalistic errors imaginable” and “don’t violate an important company policy like getting content from AI without disclosing it” seem like things that obviously should have been done.

Edit: clarified we have zero indication his “head clearing” is what caused the retraction
 
Last edited:
Upvote
82 (83 / -1)
I strongly suspect, rightly or wrongly, that some people's definition of "publication of AI-generated material" isn't this literal, and they may assume it means something like "when an LLM is doing the writing for you."
And that’s exactly what happened. It’s ALSO what would have happened if it had generated a real quote.

Let’s say you and I do an interview, and I pull out several quotes to use. Now you may have said the words in the quote, but part of my job as an author is choosing which parts to quote. Which means choosing the parts to quote is part of the writing itself. If I instead tell an LLM to pick the best quotes the resulting selections ARE AI-generated material.

Again, even if the quotes were real, he had a duty to disclose that he had generated them this way. Because that’s HIS job as an author. If the AI had worked, it would have performed part of the author’s job: selecting the portions of a blog to quote IS generating material. Because we expect a human in the loop. Even if they were real quotes, Benj couldn’t tell you if the quotes were in context. He couldn’t tell you if they were the most interesting quotes. He couldn’t tell you why he picked those lines, or why they were more relevant than other quotes. Because he wouldn’t have generated that content. Those choices would have been AI generated even if they were real.
 
Upvote
71 (73 / -2)

Raijin

Seniorius Lurkius
48
One thing this has done is bring out the lurkers (looking up to the right, I see my years vs. post count, and that is me), and considering that when I started reading the comments there were 20 pages and now as I type this there are 28 pages, this is significant. Some people are saying it is messed up but it was a mistake; I would argue on scale alone that it is more than that. I am not demanding that Benj be fired, but obviously this is a critical issue... (I stepped away for a little while and forgot to hit post; now there are 31 pages)
 
Upvote
35 (35 / 0)

Jim Salter

Ars Legatus Legionis
17,141
Subscriptor++
Whether or not it's ironic for a former Ars journalist who has been describing why posting something a person didn't say as a direct quote is a violation of journalistic standards to be ejected for doing exactly that...
Not "exactly that" at all. I'm not going to complain overmuch about my 12h ejection--Aurich enforced a standing rule, after all, which is pretty much what I'm advocating in the first place.

But, point of order, I did not make a false quote. I very clearly used square brackets to indicate that what was inside the quote tag was NOT in fact a quote.

The reason I chose to do so is because I couldn't figure out a way to actually reply, and note the apparent GPT-gallop, with any short quotation--and the last thing the conversation needed was a repost of that entire multiple page screed. So I chose to bend a rule. And Aurich chose to enforce that rule.

In hindsight, I could probably have--for example--used a spoiler tag to accomplish the same thing, and without breaking that rule. We could quibble about the spirit vs the letter of that rule, or about its extremely selective enforcement. But I'm not going to. The larger points, to me, are:

1. I did not mislead anyone or genuinely attempt to pass off a false quote--but I did break a rule
2. Aurich enforced the rule I broke, and enforcing the rules is what I'm advocating in the first place
 
Upvote
106 (109 / -3)

AdrianS

Ars Tribunus Militum
3,739
Subscriptor
And that’s exactly what happened. It’s ALSO what would have happened if it had generated a real quote.

Let’s say you and I do an interview, and I pull out several quotes to use. Now you may have said the words in the quote, but part of my job as an author is choosing which parts to quote. Which means choosing the parts to quote is part of the writing itself. If I instead tell an LLM to pick the best quotes the resulting selections ARE AI-generated material.

Again, even if the quotes were real, he had a duty to disclose that he had generated them this way. Because that’s HIS job as an author. If the AI had worked, it would have performed part of the author’s job: selecting the portions of a blog to quote IS generating material. Because we expect a human in the loop. Even if they were real quotes, Benj couldn’t tell you if the quotes were in context. He couldn’t tell you if they were the most interesting quotes. He couldn’t tell you why he picked those lines, or why they were more relevant than other quotes. Because he wouldn’t have generated that content. Those choices would have been AI generated even if they were real.
Agreed.

If an author is too lazy to even read the short blog post he's writing about, what's his actual job?
 
Upvote
65 (65 / 0)
There's two situations here, and we don't know which one applies:

1 - he requested the article be yanked as soon as he was thinking clearly (good outcome)(still not great, mind).
2 - he requested the article to be yanked as a result of the shit hitting the fan (bad outcome).

It'll be up to Ars to determine which they think is true, and what action is applicable as a result. But as I say, when it comes to 'trust' issues with Ars articles I think there are bigger fish to fry.

I mean, I understand what you’re saying and why, but 100% of the evidence is on the side of “shit hit the fan because the person falsely quoted showed up and complained, they went to Benj to get his side, and then and only then did he tell them to pull the article”. There’s absolutely zero evidence his “head cleared”. He literally says himself that the reason he wanted it pulled is because he was too sick to fix it. How does that translate to “head cleared”?

There ARE other issues with reliability at Ars. I mean, to this day, the only article that covers:

1. Elon Musk violating company policy by starting a sexual relationship with an intern;

2. Elon Musk having this intern do work for a different company on SpaceX time, on SpaceX computers, while being paid by SpaceX;

3. Shotwell targeting this intern for termination because she erroneously believed her husband was cheating on her with the intern;

4. SpaceX HR taking the intern’s complaint directly to Shotwell rather than protecting the employee from retaliation; and

5. Elon Musk terminating the intern because she wanted a relationship, not just to be a booty call

is a blurb in the rocket report that ignores 95% of those details. That’s because Eric Berger chose to bury it there. That’s a serious issue! I agree! It’s not the reason I left, but it definitely affected how I felt about being here when there was no blowback from that choice.

But there’s no indication that the “he had it pulled when his head cleared” explanation here is remotely accurate. None. And I’ve been looking for details since I first posted about it on Friday.

There’s no reason to downplay this or invent excuses.
 
Upvote
72 (74 / -2)

counterpoint

Smack-Fu Master, in training
65
Subscriptor++
And that’s exactly what happened. It’s ALSO what would have happened if it had generated a real quote.

Let’s say you and I do an interview, and I pull out several quotes to use. Now you may have said the words in the quote, but part of my job as an author is choosing which parts to quote. Which means choosing the parts to quote is part of the writing itself. If I instead tell an LLM to pick the best quotes the resulting selections ARE AI-generated material.
For whatever it's worth, I wasn't trying to argue against this interpretation, just suggesting that if this is the intended interpretation, they need to make sure that's very clear in the policy. It might seem obvious that this is the intention, but I'm just not sure the current phrasing makes that as clear as it could be. (Then again, we haven't seen the actual, full policy.)
 
Upvote
11 (12 / -1)

Dr_Olerif

Ars Centurion
379
Subscriptor++
How refreshing to own a mistake and correct it - if only this was the societal standard instead of a rare instance in a particularly honest enclave.

Thanks Ars, renewal time is soon, i guess that decision will continue to be settled.
The only quibble I have is this is an ideal learning opportunity for people studying journalism in the (somewhat dystopian) era we now find ourselves in, and the 'what and why' of this error being more visible would be useful for that (apologies, I'm not expressing this sentiment as well as I would like, guess that's why I'm not a journalist).
 
Upvote
14 (14 / 0)
And on the flip side there's very little to say it isn't.
I’m sorry, but no, “his head magically cleared hours later, at the exact same time as the person he misquoted showed up in the comments thread, but then was also magically unclear enough that, per his own words, he was too sick to fix the article, despite literally having the author in the comments, who he could literally just ask for quotes” is not equally as plausible as “the person who was misquoted showed up, it caused a kerfuffle, and they reached out to Benj who was, as he said, too sick to fix it”.

These aren’t equally likely theories. One has common sense and matches the evidence we have. One requires magical thinking to analyze the data we do have and presumes a shitload of things we don’t have evidence for.

If Ars' investigation leads them to the 'he was never ill and he did this deliberately' conclusion then yes, yeet him onto the job market. It would be appropriate.
I’m not saying he wasn’t ill. I’m saying that the overwhelming likelihood is that the reason the article was pulled isn’t because his head magically cleared right around the time the person he misquoted was already in the comments explaining that those were false quotes.

I’m saying the facts are that:

1. the author came to the comments saying he was misquoted;
2. at least one staff member (Aurich) was in the comments seeing these claims;
3. Benj was, by his own words, too sick to fix the article;
4. that’s when the article was pulled.

And that the natural, common sense explanation for these facts is that because staff saw the claims, the issue was raised up the chain of command, Benj was contacted but was too sick to take action, and so he asked the article be pulled only after it had already come out.

My theory matches the facts and doesn’t rely on any flights of fancy. Yours requires facts not in evidence: Benj’s sickness would have to be in a superposition of clear-headed enough to realize he had failed to follow journalistic ethics or company policy, honest enough to disclose that, but also too sick to ask the person literally in the comments if he could get a quick quote, and not forthcoming enough to explain to that person what had happened.

There is no reason to pretend these are equivalent. It’s goofy.
 
Upvote
60 (62 / -2)

WohWonk

Seniorius Lurkius
11
Subscriptor++
I put my subscription on pause shortly after I read Mr. Scott Shambaugh's comments to the original article and I connected the dots like many others.

The way I see it, what I very much miss is Ars coming forward and explaining why their editorial standards let such a mistake pass through the system.
That is a serious failure of our standards. Direct quotations must always reflect what a source actually said

How can you have high standards if you don't have checks that tell you when you veer off said standard?

Now it's treated as a monstrous problem because the quotes were genAI-made; in my opinion it would have been just as bad if they had been produced entirely by the author's own mind instead.

By producing false quotes you're putting words in another person's mouth. It's unforgivable and can lead to disastrous outcomes.

I've not been around on Ars very long, but I've got used to quite high standards in the articles in general, and I read here because I then do not need to verify every single piece of information.

That's the whole point of reading a serious media, you don't have to check everything, you can rely on what you read.

If no action is taken to implement serious guardrails against publishing false information, and the reasons those guardrails should prevent incidents like the current one are not explained in a clear and concise way, I'm finished supporting Ars.

If Ars, however, shows proper action and takes the necessary steps to ensure that we can expect a higher level of confidence from this incident on, I'm ready to up my subscription/pay more.

I'm aware of the fact that nothing is produced from nothing, and I find it extremely important that we have media we can have confidence in. Therefore I support those media I find confidence in.

My wife fell ill to COVID-19 back in 2020 and had two blood clots in the basal ganglia.
It was before vaccination was an option. She was a healthcare worker and was ordered to do her job. More than 30 of her colleagues fell ill at the same time. Today she's living off her insurance (luckily), not able to produce much more than just being. One of her colleagues is still dragging around a bottle of oxygen everywhere.

In that light, I hope and wish that Benj recovers from both the COVID-19 induced problems and also from the article screwup; he's not the only one at fault, as I see it.

I hope and wish Ars recovers too!
 
Upvote
54 (54 / 0)
For whatever it's worth, I wasn't trying to argue against this interpretation, just suggesting that if this is the intended interpretation, they need to make sure that's very clear in the policy. It might seem obvious that this is the intention, but I'm just not sure the current phrasing makes that as clear as it could be. (Then again, we haven't seen the actual, full policy.)
My point is that even the most restrictive interpretation, that the author cannot use content generated by an LLM, is violated by having the LLM pull quotes. Because the LLM pulling quotes IS GENERATING CONTENT.

The rule, as articulated, is that LLM-generated content must be disclosed. If the LLM is pulling the quotes, that’s LLM generated content.

I mean, sure, they should make clear what stages of the writing process can be informed by AI use without disclosure. But “the quotes you put into the article” is a part of the content. If the AI selected those quotes rather than the author, then the quotes are de facto content generated by the AI. It shouldn’t need to be specified that content generated by an AI is covered by the requirement that AI content must be disclosed.
 
Upvote
49 (50 / -1)

Niles Gazic

Ars Praetorian
405
Subscriptor++
The only quibble I have is this is an ideal learning opportunity for people studying journalism in the (somewhat dystopian) era we now find ourselves in, and the 'what and why' of this error being more visible would be useful for that (apologies, I'm not expressing this sentiment as well as I would like, guess that's why I'm not a journalist).

You find this era to be "somewhat" dystopian?

I keep asking myself, on a near-daily basis, how much worse things can get before this economy and society completely collapses, and those of us who manage to survive are subsisting in communes, compounds, and shanty-towns.

I feel like I should be trying to identify a country that has rampant unemployment, violence and fundamentalism – like maybe Afghanistan – to try and establish a baseline for just how bad things can get, once these AI advancements render the human mind and body nearly economically useless.
 
Upvote
28 (31 / -3)
the-simpsons-ai.png
 
Upvote
54 (54 / 0)

ChrisSD

Ars Tribunus Angusticlavius
6,168
I think it's fair to say that Ars has had a number of controversies over the years, either because of things authors have done on the job or outside of it.

But this feels like it's in a different category. This is the first time I can recall where a journalist has failed to do the very very basics of journalism. Directly quoting sources accurately is literally journalism 101. And it damages trust in Ars, not just trust in the individual writer.

That AI enabled this failure is interesting but doesn't excuse anything, especially since it's the senior AI reporter who is (by the nature of their job) meant to understand AI and does know Ars' policies. And they definitely should understand journalism given their long experience on the job.

It's tempting to blame Ars for not scrupulously checking every writer's work before publication. But I do understand that is very costly and Ars absolutely should be able to have some trust in their writers. And even if they did catch this before publication, it's still a violation of trust. That isn't to say Ars is necessarily blameless. How they respond to this will affect how Ars is viewed (this "editors note" is more like a "response pending" placeholder than an actual response).
 
Upvote
69 (69 / 0)

HoorayForEverything

Ars Scholae Palatinae
892
Subscriptor
[...] This is not the first issue with integrity they've had and at some point as a reader you have to admit there is something broken with the culture regardless of how much you enjoy the content.
Can I ask which site you will be moving on to, which presumably you believe has a higher watermark for integrity?
 
Upvote
11 (15 / -4)

nash076

Wise, Aged Ars Veteran
199
Ars really should have either done something in conjunction with Edwards posting on Bluesky, or should have stopped Edwards posting on Bluesky, I agree with that. I also think everyone should remember that we are at the start of the first working day since this happened and even if Aurich & Ken are pulling 18 hour days since Friday, that's not going to help when there's nobody to answer the phone at Conde HQ.

As for this comparison? 404 Media are the most incredible gossips so this specific comparison isn't a surprise at all, and with the context I've given above, it says more about 404 Media than it does about Ars.
I mean, the Conde Nast publication has people searching anywhere but their own website to find out why there's a terse apology with 1200 comments and no real explanation as to what happened or regarding who. Well, unless you feel like sifting through 1200 comments.

On the other hand, the "incredible gossips" put together an article that covered the bases of explaining what happened, and in a professional manner.

https://www.404media.co/ars-technic...fabricated-quotes-about-ai-generated-article/

Oh, and Aftermath put together a roundup as well, albeit more as a blog than professional reporting.

https://aftermath.site/story-about-...ecause-journalist-used-ai-that-made-mistakes/

So that's two indie outlets keeping me better informed on this topic than Ars is, and they did it over a holiday weekend with no fuss or fanfare.
 
Upvote
71 (72 / -1)

anguisette

Wise, Aged Ars Veteran
120
There's another former Ars author who cannot be named, and compared to that, well this is absolutely nothing.
this is not a serious comment, right? that cannot possibly be the standard against which we judge Ars's editorial policies, else they could literally do no wrong. i appreciate people having different opinions on how we should respond to the current topic, but please try to live in the real world.
 
Upvote
60 (63 / -3)
I mean, the Conde Nast publication has people searching anywhere but their own website to find out why there's a terse apology with 1200 comments and no real explanation as to what happened or regarding who. Well, unless you feel like sifting through 1200 comments.

On the other hand, the "incredible gossips" put together an article that covered the bases of explaining what happened, and in a professional manner.

https://www.404media.co/ars-technic...fabricated-quotes-about-ai-generated-article/

Oh, and Aftermath put together a roundup as well, albeit more as a blog than professional reporting.

https://aftermath.site/story-about-...ecause-journalist-used-ai-that-made-mistakes/

So that's two indie outlets keeping me better informed on this topic than Ars is, and they did it over a holiday weekend with no fuss or fanfare.

"Indie outlet" is the difference yes? The Ars staff is pretty senior at this point. Too old to work weekends is what I'm saying.

Jokes aside, it's easy for an outsider to write about what's just available on the Internet, but for Ars it is a very serious internal issue. Pinned to Kyle Orland's Bsky is this:
1000046608.png


One can assume Conde Nast is involved (EDIT to add: as the owners), meaning Ars might not be allowed to do a writeup until things are straightened out.
 
Last edited:
Upvote
34 (34 / 0)

BruceLGL

Smack-Fu Master, in training
92
Subscriptor
I've been sitting on this a few days considering my comment, and I'd like to zoom out a little from this specific issue and talk about the future of Ars.

Increasingly news sites are all mass generated LLM slop. This is getting worse not better. It is hard to tell what is real and what is not. Ars might have a future in this flooded world if they hold the line on this very strongly and stand apart. If not, I'm really not sure how they will continue to exist in 5 years time.

The point of this is not just that Ars doesn't generate slop, but that Ars is known to not generate slop. It is very hard to determine what is LLM generated and what isn't (just ask an educational institution), and this isn't going to get better any time soon. The only way consumers can take anything on the internet as truth is via trust.

Just being good isn't enough any more. You need trust to not get drowned out by the slop.
 
Upvote
71 (71 / 0)

maverick

Ars Tribunus Militum
1,681
Subscriptor
I'm not sure how to reconcile Benj supposedly:

1) Being alert and aware enough to understand the events the article was about to co-write it,
2) Being so brain-fogged that he couldn't even manage to copy and paste a few quotes from a blog post he must already have read, surely the simplest part of writing the article?

If you're too brain-fogged to copy and paste some quotes, you're surely too brain-fogged to write an article to Ars Technica's standards?
Assuming COVID brain-fog is genuinely the cause, I'd have to think that the AI use started earlier, possibly by generating a summary that was easier for him to process, maybe by generating a "draft" version of the article for him to edit, etc...

Still, when you've got a co-author on a piece, and you're feeling that unwell, I can't understand not giving him a heads-up, maybe asking him to take over your part, or at the very least to double-check your work.
 
Upvote
63 (63 / 0)

Komarov

Ars Tribunus Militum
2,259
So to summarize:

Benj Edwards is not a programmer, so he wrote an article about kindergarten-level programming WITH AI! Whee. Incidentally duplicating little bits of many existing (open source) tools that he could have used with less hassle, less cost and better results.

Benj Edwards is purportedly a journalist but he can't be bothered to do his job properly so he writes articles WITH AI! Whee! Incidentally publishing the worst possible "mistake" any real journalist would gladly pay to stay away from. By the way, it wasn't a mistake, it was an intentional abdication of responsibility.

It all just proves that old maxim: the search for artificial intelligence implies a lack of natural intelligence.
 
Upvote
8 (22 / -14)

anguisette

Wise, Aged Ars Veteran
120
My opinion is nobody should lose their job over this. Very clearly many people disagree.
saying no one should lose their job over this is perfectly reasonable and you are not the only one to say that, but what you also said, and what i am responding to specifically (which is why i quoted it) is "There's another former Ars author who cannot be named, and compared to that, well this is absolutely nothing"; in making that comparison, you seem to be suggesting that any future editorial transgression should be compared to one of the most widely-condemned atrocities you could imagine an Ars author committing prior to deciding whether those responsible should face any consequences.

that doesn't work, because no form of journalistic malpractice could ever compare to what you're referring to. if that's the standard by which we're judging malpractice, Ars may as well have no editorial standards whatsoever, because no one would ever face any consequences for violating them. ("oh, you murdered someone to get that story? don't worry mate, wait until you hear what this other guy did!")

the reason i described this as not serious is because it seems like such an absurd form of whataboutism that it could only be intended to derail the discussion. on reflection, i accept that perhaps you were serious, but that does not paint the comment in a better light for me.
 
Upvote
53 (54 / -1)
So while I can see the first level of argument "hide this from AI", I both don't think it hides it from AI at all and I think it's a possible case of "the cure is worse than the disease".
Keep the information/article available; whatever the f'ing AI bots do with it later on, well, that's their responsibility. Shite in, shite out.

But here is my snarky comment: the comments were also nuked - maybe because they are sold by the CN mothership to AI companies, which want the 'trustworthy IT knowledge of its readership'. Was this an attempt to calm the waters in the comments or to keep it clean? I hope we get answers to the whole fiasco sooner rather than later - while smaller sites like 404 do publish/talk about what happened here.
 
Upvote
21 (21 / 0)