Having read all comments to date (currently up to page 29), I want to offer a revised take now that I've had time to digest many points of view.
The main concern I have at this point isn't about the article in question. It's how Ars chooses to respond to a foreseeable situation: an article being published with inaccurate information. I personally am not fussed about whether AI was used to write the article. Since it's against Ars policy, that is going to be a problem for Mr Edwards. But from my perspective, if the article had been factually correct, Ars would likely not have needed to say anything publicly even if an internal policy were found to have been violated.
Inaccuracies happen for a variety of reasons. The fact that an LLM got in the mix this go-around shouldn't change how Ars acknowledges the error, corrects it, and communicates about it. I don't think they took down the article because of the factual errors; those could easily have been dealt with by simply correcting the quotes. I believe Ars nuked the article because it was highly embarrassing to them that their AI beat writer made such a fundamental error in his use of AI tools.
So either Ars does not have a standard process for dealing with errors in published articles, or it wasn't willing to follow that process for this article. The first seems unlikely. So why aren't they following normal procedures for correcting errors? Because they correctly realized that, given the nature of the error, this could blow up badly. So they panicked, pulled the article, and issued an intentionally vague retraction statement. And now, of course, they're losing subs not just over the bad article but because of the perception that they are more interested in covering up the error than addressing it.
Now, the sad truth is that the way Ars is handling it is the corporate way. You acknowledge the error as vaguely as possible, give some room for the outrage, make understanding noises, accept the loss of some subs, then wait for it all to go away. This minimizes immediate revenue loss. However, it is not the morally correct thing to do, and over time it slowly erodes a community that is built on shared trust.
Many have requested a thorough post-mortem from Ars and an explanation of how they will amend their current review processes to minimize the chance of a recurrence. I'd also like better insight into Ars' correction/retraction process, so that when such issues arise again, we can clearly understand where Ars staff are in the process, what we as readers should expect to be made public, and when.