Hospitals adopt error-prone AI transcription tools despite warnings

k h

Ars Centurion
349
Subscriptor
90% of the time? What about the other 10%? You realize that's a 1-in-10 failure rate, right? And you think that's acceptable?
Getting 90% of the transcript correct, when combined with additional data like medications/problem lists/physician orders, gives enough context that the final summarised note has a 99% accuracy rate.

And that note is ultimately what the physicians review. It isn't me saying this is acceptable; it's the doctor. That's where the rubber meets the road: the physician's final review of our product.

And we have been seeing an extremely enthusiastic adoption rate so far.
At tax time, 90% of the numbers that my AI puts down on the tax form are correct. When combined with all the other taxes I pay, I figure I'm 99% legal. Come to my seminar and learn how much time you'll save with this one simple trick.
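For what it's worth, the gap between these two accuracy claims is easy to see with a back-of-envelope sketch. The numbers below are purely illustrative and assume per-sentence errors are independent (which real transcription errors aren't), but they show why a per-sentence rate compounds badly over a whole note:

```python
# Back-of-envelope: chance an entire note is error-free, assuming each
# sentence is transcribed correctly with probability p and errors are
# independent. Illustrative only, not real-world transcription data.

def p_note_clean(p_sentence: float, n_sentences: int) -> float:
    """Probability that all n sentences in a note come out correct."""
    return p_sentence ** n_sentences

# A 20-sentence clinical note at 90% per-sentence accuracy:
print(round(p_note_clean(0.90, 20), 3))  # ~0.122 -> only ~12% of notes fully clean

# Even at the claimed 99% accuracy:
print(round(p_note_clean(0.99, 20), 3))  # ~0.818 -> ~1 in 5 notes still has an error
```

Which is to say: under these toy assumptions, even "99% accurate" leaves roughly one error per five notes, so the physician's final review is doing a lot of load-bearing work in that pipeline.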
 
Upvote
2 (3 / -1)
This constant push by seemingly every industry to say "fuck it, we'll test in production" is really making it difficult to maintain an open mind about potentially useful applications for AI. I've softened a bit on where AI-assisted tools might lead us, but if you have the entire tech industry constantly telling people that they can skip the "verify" part of "trust but verify," then people are absolutely going to do that, and they'll keep on doing it until something catastrophic occurs.

At which point the victim's family will have no legal recourse, because this company has decided to delete the original audio "for data security reasons", despite existing in an industry that desperately thirsts for any decent amount of saleable personal data. Like that doesn't instantly scream "we are being careful to cover our tracks here".

TL;DR - I've got nothing against AI per se, apart from the excessive energy use, and I can see how a reliable medical transcription tool would be extremely useful.

But I do have a very big problem with companies continually telling us that the AI Slop is 100% ready for primetime and attempting to force it into every single aspect of our lives. Just fuck off, nobody wants this.
 
Upvote
7 (8 / -1)
The idiots in this scenario are anyone and everyone pretending that LLMs are actually useful. Outside of some extremely limited situations, they're not. And medical transcription is absolutely, positively, 100% not one of those extremely limited situations.
This scenario looks like a solution in search of a problem for sure, but count me as an idiot because I've been blown away over the last few weeks using AI assistance for coding. I'm nearly 50 and have seen the wheel reinvented poorly many times, but this is different.

I've used copilot for about a year, but recently started discussing my projects with Claude and it's been amazing, like having an expert on every topic on tap. I've learned new patterns and written some of the best code of my 25 year career in record time with better testing and validation. It handles the mundane stuff with ease, understands complicated data structures, and damn near reads my mind. Saves me from RSI and helps me to focus on the big picture.

Really amazed at how down Ars is on LLMs. Maybe everyone is just old like me. Now get off my lawn!

Edit: Really fun to argue with Claude. Unlike most people, it demonstrates that it "understands" (yes, I know how LLMs actually work) and will readily admit when you are right and move ahead. Just as readily, it will kindly point out your mistakes and suggest solutions.
 
Upvote
-1 (2 / -3)

One off

Ars Tribunus Militum
1,547
Really amazed at how down Ars is on LLMs. Maybe everyone is just old like me. Now get off my lawn!
An awful lot of hot, visceral takes on any popular topic, and vocal ideologues, have emerged following the release of these general-use ML tools with applications that can be used by anyone. A lot of people are scared of the financial or economic implications, a few are scared of existential risks. Some people are offended that this tech even exists, some people feel very aggrieved by the use of scraped data for training. A few people seem genuinely politically opposed due to implications within our current political and economic milieu. After almost 3 years, I'd have hoped extreme commenters at either end of the debate would have accepted that their positions (this tech is no use to anyone for anything, or this tech is a baby AGI) are untenable, but... Thanks for one of the too-rare balanced posts.
 
Upvote
-3 (0 / -3)
Gotta love the medical field. On one hand you have humans who have to get through a decade of education and on-the-job experience before becoming doctors, machinery such as MRI machines that took decades of engineering and require constant calibration, and centuries of best practices learned and built from the suffering of millions. On the other hand you have tech bros and penny-pinching idiots saying fuck it, let's run AI (against ChatGPT's recommendation, mind you) to garble critical health information to save a bit of cash.

What’s insane about this is you typically get garbage in, garbage out with algorithms. AI is taking good data in and creating garbage out. Fantastic future we have ahead of us.
These aren't really opposing ideas. The $5 trillion health care market is going to waste and save money at the same time. It is a planet unto itself. It will employ every person and buy every product, including, of course, the penny-pinching products that have 500 trillion pennies to pinch.
 
Upvote
2 (2 / 0)
You must be crazy to think that lowering costs will result in lower premiums. Fortunately for us, mental health isn't included in your plan, so you'll just have to remain crazy.
You're right, they'd have to be at least as generous as Scrooge before the ghosts visited to do a thing like that.
 
Upvote
0 (0 / 0)

okojava

Smack-Fu Master, in training
2
It's been the opposite for me: I get my pathology reports out a lot faster now that I don't have to wait for a transcriptionist to type up my dictation, and the error/typo factor hasn't changed significantly. But I suspect that has a lot to do not only with specialty, but with which EMR is being used; my practice uses Cerner Millennium, which has a very nice AP module. Epic's Beaker AP module, on the other hand, is so painful to use that I honestly wonder if whoever designed it has even read an anatomic pathology report, much less written one!

But I'd be very nervous about using actual AI at this point instead of Dragon Medical One. The psychiatrist upthread who mentioned how "breathing technique" could be mistranscribed by Dragon as "breeding technique" has a point; but how much worse if the AI system then goes on to add that, after the patient mastered the new breeding technique, she went on to conceive and bear three healthy (nonexistent) children!
I completely understand your concerns about using AI instead of Dragon Medical One, especially considering the potential for errors like the one you mentioned with "breeding technique." It’s clear that AI still has a long way to go before it can fully match the accuracy needed for medical documentation.

As for Epic vs. Cerner, the differences between these systems can really impact the user experience. While Cerner Millennium's AP module seems to provide a much smoother workflow, Epic's Beaker AP module, as you pointed out, can be quite painful to use. The design and functionality of each system make a huge difference in how efficiently and accurately reports are generated, and I can see how this would influence your opinion on switching to AI. Hopefully, as these platforms continue to evolve, they'll address some of these pain points!
 
Upvote
0 (1 / -1)

Mimsey

Seniorius Lurkius
31
Sometimes close is good enough. If I'm using an AI browser plugin to summarize an hour-long support video (since nobody seems to want to write documentation anymore) before I commit to watching it, it doesn't need to be completely accurate. I'm not going to use the summary for anything other than deciding if it's worth my time to watch the video, and a more-or-less-accurate summary today is better than a guaranteed-accurate summary five years from now.
Yes, sometimes close is good enough. If you're just using it for fun, or to generate ideas for a person to expand on, or in a situation where a person is definitely going to compare the output to the original, then go for it. All we're saying is, it's worse than idiotic to use something like this when close isn't good enough.
 
Upvote
0 (1 / -1)

Mimsey

Seniorius Lurkius
31
It used to. In the last year it has gotten noticeably worse. "Cuz" isn't even a word, and it keeps changing my "because" to "cuz." That is only one of 20 different voice-to-text issues on Google's voice-to-text / keyboard. All of this has happened over the last year. [Insert your typical comment about enshittification here.]
Regarding Google having been good at transcription until recently, what are the odds they've stuck AI into it? It looks like we have a new subset of enshittification: AI-enhanced enshittification.
 
Upvote
-1 (0 / -1)
An awful lot of hot, visceral takes on any popular topic, and vocal ideologues, have emerged following the release of these general-use ML tools with applications that can be used by anyone. A lot of people are scared of the financial or economic implications, a few are scared of existential risks. Some people are offended that this tech even exists, some people feel very aggrieved by the use of scraped data for training. A few people seem genuinely politically opposed due to implications within our current political and economic milieu. After almost 3 years, I'd have hoped extreme commenters at either end of the debate would have accepted that their positions (this tech is no use to anyone for anything, or this tech is a baby AGI) are untenable, but... Thanks for one of the too-rare balanced posts.
Actually a pretty balanced take tbh, but you'd have done yourself a favour by formatting it into a few discrete paragraphs rather than a big block of text.

This is coming from someone who just loves brainsplurging a big ol' wall-o-text into the little box and then has to spend quite a while editing it down into what I'm actually trying to say. Cheers ; )
 
Upvote
0 (0 / 0)