There's nothing wrong with a journalist using a tool to extract quotes, even if that tool uses LLM technology. However, that tool needs deterministic code that always confirms each quote is accurate and actually exists in the source.
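To be concrete about what "deterministic" means here, something like this minimal Python sketch would do it. The normalize() helper and the exact-match rule are my own assumptions about how such a check could work, not anyone's actual tool:

```python
import re

def normalize(text: str) -> str:
    # Collapse whitespace and straighten curly quotes so trivial
    # formatting differences don't cause false mismatches.
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    return re.sub(r"\s+", " ", text).strip()

def quote_exists(quote: str, source: str) -> bool:
    # Deterministic check: the candidate quote must appear verbatim
    # (modulo whitespace and quote style) in the source text.
    return normalize(quote) in normalize(source)

# Hypothetical usage: keep only LLM-extracted quotes that survive the check.
article = 'The mayor said, "We never approved that contract."'
candidates = ["We never approved that contract.", "We totally approved it."]
verified = [q for q in candidates if quote_exists(q, article)]
print(verified)  # ['We never approved that contract.']
```

No model "judgment" in the loop: either the quote is in the text or it isn't.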
Even when quotes are accurate, a journalist should NOT ask an LLM-based tool directionless questions like "Provide the 5 most interesting quotes from this stack of articles." I don't need to know what Claude "thinks" of an article. I do not want every email, every piece of content I read to be curated by the same big three models.
But I have no problem with a journalist taking a stack of 100 articles and asking an LLM, "provide me with every quote that mentions topic X." You are guiding and directing it to perform a specific task. Yes, there is some "judgment" involved, but with a well-articulated prompt I think the risk of steering is outweighed by the fact that you can find relevant content more quickly.
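Here's a rough sketch of what I mean by "guiding and directing," again in Python. The ask_llm() call is a stand-in for whatever model API you actually use; the point is that the prompt constrains the task, and the verbatim check from the earlier sketch still gates everything that comes back:

```python
def build_prompt(topic: str, articles: list[str]) -> str:
    # A directed task, not an open-ended "what's interesting" question.
    joined = "\n---\n".join(articles)
    return (
        f"From the articles below, return every direct quote that "
        f"mentions {topic}, verbatim, one quote per line. Do not "
        f"summarize or editorialize.\n\n{joined}"
    )

# ask_llm() is hypothetical: swap in whichever model wrapper you use.
# raw = ask_llm(build_prompt("topic X", articles))
# quotes = [q for q in raw.splitlines()
#           if quote_exists(q, "\n".join(articles))]
```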
Either way, the actual reporting still has to happen after you've read the source material yourself.
As for Mr. Edwards putting Ars Technica at risk of libel suits (or whatever I'm reading in every third post)... that is ridiculous, at least under US law. The bigger lawsuit risk comes from punishing an employee (W2 or 1099) prematurely because pitchfork-wielding mobs demand justice.
If y'all want to cancel subscriptions to make a point - excellent. That's how the system works. Personally I believe in grace & redemption, and while trust needs to be earned back, the idea that a single known lapse in judgment (one that is in and of itself largely inconsequential) should doom the rest of your career just to prove that LLMs are prone to abuse... well, that's pretty rough, and I hope nobody ever judges you that way.