> But that seemingly key "accuracy" metric was only responsible for about 4 percent of a vendor's overall score, making it easy to meet the minimum threshold for approval even if an AI scribe scored a "zero" on the accuracy metric (a separate metric measuring "domestic presence in Ontario" was worth 30 percent of the overall scoring).

Accuracy: 4%
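To make the arithmetic concrete, here's a minimal back-of-the-envelope sketch in Python. Only the 4% accuracy weight and 30% domestic-presence weight come from the article; the remaining weights, the per-criterion scores, and the 70% passing threshold are hypothetical placeholders, just to show how a vendor could score zero on accuracy and still clear an approval bar.

```python
# Back-of-the-envelope illustration of a weighted vendor score.
# Only the accuracy (4%) and domestic-presence (30%) weights are from the
# article; every other weight, the per-criterion scores, and the 70% passing
# threshold below are hypothetical placeholders.
weights = {
    "accuracy": 0.04,
    "domestic_presence": 0.30,
    "security": 0.20,       # hypothetical
    "privacy": 0.20,        # hypothetical
    "usability": 0.16,      # hypothetical
    "formatting": 0.10,     # hypothetical
}

# A vendor that scores zero on accuracy but full marks on everything else.
scores = {
    "accuracy": 0.0,
    "domestic_presence": 1.0,
    "security": 1.0,
    "privacy": 1.0,
    "usability": 1.0,
    "formatting": 1.0,
}

overall = sum(weights[c] * scores[c] for c in weights)
print(f"overall score: {overall:.0%}")   # 96% -- sails past a 70% threshold
assert overall >= 0.70                   # "approved" despite zero accuracy
```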
> why can't the doctors just do their job and write notes??

Just so it's clear, this is an audit of a simulated situation to make sure that the tools being advertised are up to snuff for Ontario doctors.
why can’t the doctors just do their job and write notes??
> why can't the doctors just do their job and write notes??

Short answer: it's an opportunity cost. Time spent writing notes is time that could be spent seeing and treating patients, and people (AKA prospective patients) already complain about wait times to see doctors. Still, the trade-off between hallucinating LLMs and longer patient wait times is... clearly problematic.
> So who is on the hook when these AI tools go wrong, in a field like healthcare, where consequences are life or death? Particularly when the hallucinating tools are actually recommended by government orgs?

The Doctor, ultimately. I work in radiology, and we have been using speech to text for years. It's up to them to proofread. If it is wrong, and there is a lawsuit, they will be hung out to dry.
> Aaaa whaaaaaaaaaat....

Problem Is Between Gurney And Chair. PIBGAC.
The patient probably isn't prompting the LLM correctly. User error. The patients need training
/s
> Why is AI involved at all rather than basic dictation software we already had?

Came here to ask this.
> why can't the doctors just do their job and write notes??

They can and do; it's why my wife, who is officially scheduled to work from 8 AM to 4 PM, routinely doesn't get home until midnight. Doctors get reimbursed for seeing patients, so their employers schedule them to see as many patients as possible and don't make any allowances for all of the ancillary work that has to be done around the patient encounters.
> It isn't the number in this article that will attract the most attention, but they evaluated twenty vendors of this crap? LLMs weren't quite invented yesterday. How is the marketplace that differentiated, when there aren't really equivalents to production or shipping bottlenecks? Some tool ought to out-compete most of the field, shouldn't it?

We're still in the "race to market share" stage of the bubble, where venture capital and speculation are propping up more options than will be viable. Consolidation and retrenchment will come eventually as the free money slows down and profit fails to materialize; that, or mergers, as bigger players in health or whatever industry look to snap up these products, which really belong as a feature rather than as a standalone offering.
I'm reminded of that saying in football: if you have two viable quarterbacks, you really have none. Same goes here. If you have twenty approvable LLM medical scribe tools available, you really have none.
> why can't the doctors just do their job and write notes??

In my experience, doctors are still responsible for the notes (whether self-written, AI-scribed, human-scribed, or dictated). The general idea is that these AI tools are sufficiently cheap and good that it frees up the doctor to spend less of their time writing notes and more of their time actually doctoring (either spending more time with each patient or seeing more patients).
> Came here to ask this.

Because it was never that good either; the gold standard for transcription is a person, often disabled/otherwise home bound, who are usually amazingly fast and accurate, but, you know, cost money.
> The Doctor, ultimately. I work in radiology, and we have been using speech to text for years. It's up to them to proofread. If it is wrong, and there is a lawsuit, they will be hung out to dry.

I wasn't sure if a sanctioning org, like actual high-level government, okaying this thing would change that or not.
> The Doctor, ultimately. I work in radiology, and we have been using speech to text for years. It's up to them to proofread. If it is wrong, and there is a lawsuit, they will be hung out to dry.

As someone else said, moral and legal crumple zones for both their employers and the providers of these models. Despicable.
> why can't the doctors just do their job and write notes??

Rhetorical question, but the issue is pretty straightforward.
> why can't the doctors just do their job and write notes??

Because they're already overscheduled and the cocaine and Adderall only last so long.
> Accuracy: 4%
> Domestic Presence in Ontario: 30%

This is what I came to note. It's insane that "does it actually work" is the lowest metric. Then again, that matches my experience with just about any other legal or corporate entity, so I'm not actually surprised either.
It is refreshing to see priorities spelled out so honestly. Here's the table from the linked PDF, if anyone else is curious. Domestic presence was the highest-weighted criterion, beating out trivialities such as accuracy, security, formatting, usability, and privacy.
[Attached: scoring-criteria weighting table from the linked PDF]
> Because it was never that good either; the gold standard for transcription is a person, often disabled/otherwise home bound, who are usually amazingly fast and accurate, but, you know, cost money.

Physician here who for years relied on transcriptionists. They were phenomenal, excellent at their jobs and many helped catch mistakes/improved clarity of medical jargon. When my institution switched to dragon, I changed to typing my own notes out. It was faster and more accurate than dragon ever was (hates my southern drawl). And now my institution has rolled out similar AI to what this article is addressing. I plan to never use it.
Why is AI involved at all rather than basic dictation software we already had?
> why can't the doctors just do their job and write notes??

Many cannot read their own handwriting?
> Buy my tool and make your note taking brainless sounds like a great offer.

The old adage "If it sounds too good to be true..." smiles smugly.
> So who is on the hook when these AI tools go wrong, in a field like healthcare, where consequences are life or death? Particularly when the hallucinating tools are actually recommended by government orgs?

Prediction: malpractice carriers will begin to exclude coverage for errors attributable to unproven models.
> Physician here who for years relied on transcriptionists. They were phenomenal, excellent at their jobs and many helped catch mistakes/improved clarity of medical jargon. When my institution switched to dragon, I changed to typing my own notes out. It was faster and more accurate than dragon ever was (hates my southern drawl). And now my institution has rolled out similar AI to what this article is addressing. I plan to never use it.

Props for this!
> This is what I came to note. It's insane that "does it actually work" is the lowest metric. Then again, that matches my experience with just about any other legal or corporate entity, so I'm not actually surprised either.

These days, I'm not sure I'd agree. Digital sovereignty is a serious issue. Let's say you do have an amazingly accurate solution, but it's supplied by a company in a different country, maybe even a competitor. Or maybe a country you thought was friendly, but then the populace elects a completely unfit person to lead the government. How confident are you that you'll be able to rely on that solution?