Editor’s Note: Retraction of article containing fabricated quotations

Status
You're currently viewing only Distraction's posts. Click here to go back to viewing the entire thread.
Not open for further replies.

Distraction

Ars Centurion
397
Subscriptor
Sure, as I said, "as explained by Benj in his BlueSky post" (implicit in that is the presumption that Benj is telling the truth, to which I'm happy to say 'trust but verify', but not 'assume he is lying') [snip]
There's no way to verify that the rest of the article was written by a human. Throw some AI slop in anywhere, and you might as well have put it everywhere.
 
Upvote
30 (36 / -6)

Distraction

Ars Centurion
397
Subscriptor
There's nothing wrong about a journalist using a tool to extract quotes, even if that tool uses LLM technologies. However, said tool needs to have deterministic code that always confirms the quote is accurate and exists.
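The deterministic check described above can be very simple: after the LLM extracts candidate quotes, ordinary code confirms each one appears verbatim in the source before it is ever used. A minimal sketch (the function names and the normalization step are illustrative assumptions, not any real tool's API):

```python
import re

def normalize(text: str) -> str:
    """Collapse runs of whitespace so line breaks in the source
    don't cause false mismatches."""
    return re.sub(r"\s+", " ", text).strip()

def verify_quotes(source: str, quotes: list[str]) -> dict[str, bool]:
    """Deterministically check that each extracted quote appears
    verbatim in the source text. Any quote mapped to False must be
    discarded or re-checked by hand before publication."""
    haystack = normalize(source)
    return {q: normalize(q) in haystack for q in quotes}

# Example: one genuine quote, one fabricated paraphrase.
source = "The model, she said, 'performs well on benchmarks but fails in deployment.'"
quotes = [
    "performs well on benchmarks but fails in deployment.",
    "performs great on benchmarks yet fails in production.",
]
results = verify_quotes(source, quotes)
```

Exact substring matching is deliberately strict: it cannot hallucinate, so anything that passes is guaranteed to exist in the source, and anything the LLM invented or paraphrased fails loudly.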

Even when quotes are accurate, a journalist should NOT ask an LLM-based tool directionless questions like "Provide the 5 most interesting quotes from this stack of articles." I don't need to know what Claude "thinks" of an article. I do not want every email, every piece of content I read to be curated by the same big three models.

But I have no problem with a journalist taking a stack of 100 articles and asking an LLM, "provide me with every quote that mentions something about topic X." You are guiding and directing it to perform a specific task. Yes, there is some "judgment" involved, but with proper articulation I think the risk of steering is outweighed by the fact that you can find more relevant content more quickly.

Then the actual reporting needs to be done after reading the source material.

In terms of Mr. Edwards putting Ars Technica at risk of libel suits (or whatever I am reading in every third post) ... that is ridiculous, at least under US law. The bigger risk of a lawsuit involves punishing an employee (W2 or 1099) prematurely because pitchfork-wielding mobs are demanding justice.

If y'all want to cancel subscriptions to make a point - excellent. That's how the system works. Personally I believe in grace and redemption, and while trust needs to be earned again, the idea that a single known lapse in judgment (one that in and of itself is largely inconsequential) should doom the rest of your career just to prove a point that LLMs are prone to abuse--well, that's pretty rough, and I hope nobody ever judges you that way.
Putting aside the fact that any results you get from asking an LLM to provide 'every quote from 100 articles that mentions something about topic X' would absolutely contain a lot of hallucinated bullshit, it is still selecting your arguments for you, whether you instructed it to or not.

You would also be left with a bunch of sentences stripped of any context, and any conclusions you drew from them might very well be the opposite of what the authors actually wrote. Frankly, if you can't even be bothered to read the sources you're using to bolster your own argument, you shouldn't be using them at all.

In this case, the author was summarizing one short blog article. It would have been faster to do it the ethical way.
 
Upvote
50 (50 / 0)