OpenAI’s Whisper tool may add fake text to medical transcripts, investigation finds.
90% of the time? What about the other 10%? You realize that's like a 1 in 10 failure rate, right? And you think that's acceptable?
At tax time, 90% of the numbers that my AI puts down on the tax form are correct. When combined with all the other taxes I pay, I figure I'm 99% legal. Come to my seminar and learn how much time you'll save with this one simple trick.

Getting 90% of the transcript correct, when combined with additional data like medications, problem lists, and physician orders, gives enough context that the final summarised note has a 99% accuracy rate.
And that note is what is finally reviewed by the physicians, and it isn't me who is saying this is acceptable; it is the doctor. That's where the rubber meets the road - the final review from the physician of our product.
And we have been seeing an extremely enthusiastic adoption rate so far.
This scenario looks like a solution in search of a problem for sure, but count me as an idiot, because I've been blown away over the last few weeks using AI assistance for coding. I'm nearly 50 and have seen the wheel reinvented poorly many times, but this is different.

The idiots in this scenario are anyone and everyone pretending that LLMs are actually useful. Outside of some extremely limited situations, they're not. And medical transcription is absolutely, positively, 100% not one of those extremely limited situations.
An awful lot of hot, visceral takes on any popular topic, and vocal ideologues, have emerged following the release of these general-use ML tools with applications that can be used by anyone. A lot of people are scared of the financial or economic implications, a few are scared of existential risks. Some people are offended that this tech even exists, some people feel very aggrieved by the use of scraped data for training. A few people seem genuinely politically opposed due to implications within our current political and economic milieu. After almost 3 years, I'd have hoped extreme commenters at either end of the debate would have accepted that their positions (this tech is no use to anyone for anything, or this tech is a baby AGI) are untenable, but... Thanks for one of the too rare balanced posts.

Really amazed at how down Ars is on LLMs. Maybe everyone is just old like me. Now get off my lawn!
These aren't really opposing ideas. The $5 trillion health care market is going to waste and save money at the same time. It is a planet unto itself. It will employ every person and buy every product, including, of course, the penny-pinching products that have 500 trillion pennies to pinch.

Gotta love the medical field. On one hand you have training for humans who have to pass a decade of education and on-the-job experience before becoming a doctor, machinery such as MRI machines that took decades of engineering and constant calibration, and centuries of best practices learned and constructed from the suffering of millions. On the other hand you have tech bros and penny-pinching idiots saying fuck it, let's run AI (against ChatGPT's recommendation, mind you) to garble critical health information to save a bit of cash.
What’s insane about this is you typically get garbage in, garbage out with algorithms. AI is taking good data in and creating garbage out. Fantastic future we have ahead of us.
You're right, they'd have to be at least as generous as Scrooge before the ghosts visited to do a thing like that.

You must be crazy to think that lowering costs will result in lower premiums. Fortunately for us, mental health isn't included in your plan, so you'll just have to remain crazy.
I completely understand your concerns about using AI instead of Dragon Medical One, especially considering the potential for errors like the one you mentioned with "breeding technique." It's clear that AI still has a long way to go before it can fully match the accuracy needed for medical documentation.

It's been the opposite for me: I get my pathology reports out a lot faster now that I don't have to wait for a transcriptionist to type up my dictation, and the error/typo factor hasn't changed significantly. But I suspect that has a lot to do not only with specialty, but with which EMR is being used; my practice uses Cerner Millennium, which has a very nice AP module. Epic's Beaker AP module, on the other hand, is so painful to use that I honestly wonder if whoever designed it has even read an anatomic pathology report, much less written one!
But I'd be very nervous about using actual AI at this point instead of Dragon Medical One. The psychiatrist upthread who mentioned how "breathing technique" could be mistranscribed by Dragon as "breeding technique" has a point; but how much worse if the AI system then goes on to add that, after the patient mastered the new breeding technique, she went on to conceive and bear three healthy (nonexistent) children!
Yes, sometimes close is good enough. If you're just using it for fun, or to generate ideas for a person to expand on, or in a situation where a person is definitely going to compare the output to the original, then go for it. All we're saying is, it's worse than idiotic to use something like this when close isn't good enough.

Sometimes close is good enough. If I'm using an AI browser plugin to summarize an hour-long support video (since nobody seems to want to write documentation anymore) before I commit to watching it, it doesn't need to be completely accurate. I'm not going to use the summary for anything other than deciding if it's worth my time to watch the video, and a more-or-less summary today is better than a guaranteed-accurate summary five years from now.
Regarding Google having been good at transcription until recently, what are the odds they've stuck AI into it? It looks like we have a new subset of enshittification: AI-enhanced enshittification.

It used to be. In the last year it has gotten noticeably worse. "Cuz" isn't even a word, and it keeps changing my "because" to "cuz." That is only one of 20 different voice-to-text issues with Google voice to text / keyboard. All of this has happened over the last year. [Insert your typical comment about enshittification here.]
Actually a pretty balanced take tbh, but you'd have done yourself a favour by formatting it into a few discrete paragraphs rather than a big block of text.