Did ChatGPT help health officials solve a weird outbreak? Maybe.

MilanKraft

Ars Tribunus Angusticlavius
6,713
Who needs knowledge when you can ask a bot and get 80% of an answer? It's almost better than Reddit!
/S
80% is being generous... I mean really generous.

If you put any of the brainless, definitely-not-reasoning LLMs through their paces with serious questions and not-low-bar expectations (whatever your domain might be), I suspect the percentage of fully accurate answers (i.e., both the right answer with reasonable context provided, and zero incorrect details added in) would be 55-60%.

(And no, to the wise guys out there who are about to chime in with "but there are many humans in leadership roles that are wrong 40% of the time, too!": that is neither a justification for relying on these unvetted pieces of crap, nor for promoting them. Put another way: if they had been advertised as "prose search engine results from an unaware, non-reasoning bot, only as good as the mostly unvetted data we scraped," I would not be calling them pieces of crap.

They are being touted, over and over again, as being ready for prime time in a variety of serious domains like medicine, the military, and business in general, so "some humans are wrong 40% of the time" is not an argument in their favor.)
 
Upvote
11 (11 / 0)

Psyact

Ars Tribunus Angusticlavius
8,405
Subscriptor
That's because an LLM has no motive, and we're used to automatically guessing people's motives in any conversation. Motives don't have to be nefarious; for most of us, posting on Ars is primarily motivated by boredom, killing time, and the like, as well as an interest in the subject. If someone were always posting about how Bitcoin is the future, people would similarly make some assumptions about their motivations.

LLMs have no motivations, so when we naturally try to guess, it comes across as being fake and insincere in ways that are almost baffling, because we aren't used to a conversation without a motive or any operating theory of mind as we know it. And of course, the LLM cannot understand your motivations and won't respond to them as we expect.
Of course LLMs (or rather, the people that develop them) have motives.

They are built to get you to use and trust them to the exclusion of their peers.

Humans aren't inherently capitalistic. We don't engage in social interactions for some quantitative purpose, at least not directly. We are drawn to other humans because it's wired in our DNA. It's a necessary precondition of our survival, and so we have evolved in that way (yes, this is a wild oversimplification).

LLMs are controlled by corporations that are spending billions of dollars to develop and push them into every aspect of your life because they want to control the market and make the most money. It is inherent to their DNA. It is a necessary precondition to their survival, so they will continue to evolve in that way.

There's no value judgment here, of course, but it would be foolish to pretend that this reality does not exist. It's no different than assuming that a for-profit corporation has morality. It's a tax entity that exists to make money, it cannot have morality. What it can have is value, and if there is market value in appearing to have morals, it will appear to demonstrate that.
 
Upvote
12 (12 / 0)

graylshaped

Ars Legatus Legionis
67,699
Subscriptor++
Not a single source can be trusted 100%. You ask your chatbot to cite its sources and you confirm its statements, just like you're supposed to do with any other scientific source. I see no issue in correctly using a chatbot for scientific or engineering purposes.
Why not just ask Bob?
 
Upvote
0 (0 / 0)

Astro-CCD

Ars Scholae Palatinae
1,251
“AI was effective in this rural setting for rapid situational awareness,”

Anything that keeps one from having to think too much seems to be useful in a rural setting these days. It might account for rural politics as well, come to think about it, (says someone who retired to a rural setting).
 
Upvote
2 (3 / -1)

thinkreal

Ars Scholae Palatinae
690
The literature also records many cases where public outbreaks can be from mutated pathogens. If any of the patients die it is likely they will revive with an urgent hunger for brains that is highly contagious. The most efficacious treatment is to sterilize the affected area with a tactical nuclear strike.

When training ChatGPT on pirated books, how did they distinguish literature from Literature from pulp?
 
Upvote
9 (9 / 0)

The Lurker Beneath

Ars Tribunus Militum
6,636
Subscriptor
You sound as if you know a great deal about this subject so I, for one, believe you.


shakes head

That's false.

There's no such place as Wyoming.

Think about it. Have you ever met anyone from Wyoming?



Well, there you are.


I saw it on a television segment in 1989.

For those of you who are sceptical about the accuracy and value of AI, here is an example. I recently read an article which was illustrated by an image of supposed beer cans inside a cooler filled with ice.

The can has a gold top and white sides.

I spent several minutes looking for cans matching that description but couldn't find anything. Finally, I broke down and asked Claude Haiku 4.5.





There you have it: a polite, succinct answer, instantly.


View attachment 129385

Welp, it's an almost human-like mistake.

Stella Artois over here has a gold top and white sides, though the sides have largish red labels.
 
Upvote
4 (4 / 0)

Veritas super omens

Ars Legatus Legionis
26,351
Subscriptor++
You sound as if you know a great deal about this subject so I, for one, believe you.


shakes head

That's false.

There's no such place as Wyoming.

Think about it. Have you ever met anyone from Wyoming?



Well, there you are.


I saw it on a television segment in 1989.

For those of you who are sceptical about the accuracy and value of AI, here is an example. I recently read an article which was illustrated by an image of supposed beer cans inside a cooler filled with ice.

The can has a gold top and white sides.

I spent several minutes looking for cans matching that description but couldn't find anything. Finally, I broke down and asked Claude Haiku 4.5.





There you have it: a polite, succinct answer, instantly.


View attachment 129385
That's not beer. Also the so called state of Oregon only exists in a video game about pioneers, which by happenstance must avoid contaminated water. Full circle.
 
Upvote
9 (9 / 0)

alansh42

Ars Praefectus
3,597
Subscriptor++
In my long and varied career I have worked in food service. There are a lot of people who think ice dispensers are dumb and we should just scoop ice out of a bucket like God intended. This is why not.
Does anyone really, truly need to ask ChatGPT "Is it OK to drink toilet water?" Because the abandonment of common sense here is appalling.
I'm not drinking toilet water! That stuff's expensive! (And another way to confuse AI.)


Decades of apocalyptic SF told us the intelligent machines would destroy us maliciously, with violence.

Turns out the machines will destroy us cheerfully, by making us stupid.
It's kind of amazing how prophetic "A Logic Named Joe" (1946) by Murray Leinster is. A "logic" ("computer" hadn't caught on yet) just starts answering any question. Any question, like how to kill someone and not be caught.
 
Last edited:
Upvote
6 (6 / 0)

graylshaped

Ars Legatus Legionis
67,699
Subscriptor++
That's not beer. Also the so called state of Oregon only exists in a video game about pioneers, which by happenstance must avoid contaminated water. Full circle.
I like Oregon. They were kind enough to put out all the fires and quickly rebuild all the devastation of the riots for the time we spent in Portland. On the other hand, I've also been to Oklahoma, and all those stories you hear about it?

[shudders] They're true.

Sticking with O states, parts of Ohio are nice.
 
Upvote
3 (3 / 0)

Veritas super omens

Ars Legatus Legionis
26,351
Subscriptor++
I like Oregon. They were kind enough to put out all the fires and quickly rebuild all the devastation of the riots for the time we spent in Portland. On the other hand, I've also been to Oklahoma, and all those stories you hear about it?

[shudders] They're true.

Sticking with O states, parts of Ohio are nice.
Shhhsh! I reiterate...there is no state of Oregon...and if there is, it rains all the time and it's too hot and dry...it's chock-a-block with Nazis...and hippies...and Californians. Whatever your politics are...they don't like them in Oregon. The taxes are horrendous, the roads are falling apart, and the schools have been going downhill since 1990...(those three are actually true). Property is way too expensive, as is gasoline (also true). The people are standoffish...and intrusive... If you do come and visit, please spend lots of money, but please don't stay. Better yet, just Venmo us some cash...
 
Upvote
3 (3 / 0)

norton_I

Ars Praefectus
5,776
Subscriptor++
Confirming hypotheses seems like a really rough use of LLMs. When it's a yes/no answer, then it's just predicting plausible words. Combined with the makers' tendency to make them as sycophantic as possible, it's not a good use of the tech.

Yeah. Just the other day I was having a technical chat with a colleague about whether we should design something a certain way. I thought he was misusing a feature, so I asked him (a human) "why not combine these into one more generic module?" Rather than think about it, he asked an LLM exactly that and then just pasted the resulting vague justifications into the chat window; they confirmed the framing but didn't provide any facts (actually, it provided a few wrong facts about security).

On the other hand, I was just explaining this article to my wife, and my 10-year-old daughter piped up from her video game: "they make them like that so that people will use them more, because people feel good when it agrees with them."

So at least 10-year-olds get it.
 
Upvote
25 (25 / 0)

GFKBill

Ars Tribunus Militum
2,864
Subscriptor
I'm not quite sure why you chose to challenge the narrative of my story. I simply stated that I used a tool to help me solve a specific problem. If the point is that these tools should be used cautiously and results should be verified, then I fully agree. If people don't want to use them, that’s cool, that’s their own business. But I don't really see why it's so controversial that I found value in a tool that helped me.
Hi
No challenging the narrative, I have no doubt that what you describe is exactly what happened. And I'm happy you got a solution to your issue. If you had stopped your narrative there, I likely wouldn't have said anything.

Your conclusion, on the other hand, I do disagree with: "as a supplement to medical professionals, there’s value"

I do not agree that for medical advice it is wise to consult an LLM. They are simply too unreliable. If you're an intelligent person with a good background in the basics of research, then maybe. But as a general principle? Hells no.

If symptoms persist, see your Doctor.
 
Upvote
18 (19 / -1)

The Lurker Beneath

Ars Tribunus Militum
6,636
Subscriptor
Hi
No challenging the narrative, I have no doubt that what you describe is exactly what happened. And I'm happy you got a solution to your issue. If you had stopped your narrative there, I likely wouldn't have said anything.

Your conclusion, on the other hand, I do disagree with: "as a supplement to medical professionals, there’s value"

I do not agree that for medical advice it is wise to consult an LLM. They are simply too unreliable. If you're an intelligent person with a good background in the basics of research, then maybe. But as a general principle? Hells no.

If symptoms persist, see your Doctor.

Well, that's the thing, there are plenty on these forums who CAN do their own research, and when it comes to annoying but plainly non-lethal skin rashes, an LLM might well usefully augment it IMO. [Seriously, would you hold out great hopes if you went to your doctor with something like that anyway?]
 
Upvote
-3 (2 / -5)

Mike Uchima

Wise, Aged Ars Veteran
115
Seems like in this sort of situation, it would be a good idea to spray the ice with a little food service sanitizer (e.g. iodophor) each time the ice is refilled. It would be cheap insurance against bacterial contamination of the cooler, and would help ensure that the hands of the people serving the beer are sanitary as well. I wonder if their new sanitation protocols include anything like that.
 
Upvote
-1 (1 / -2)

SixDegrees

Ars Legatus Legionis
48,308
Subscriptor
Seems like in this sort of situation, it would be a good idea to spray the ice with a little food service sanitizer (e.g. iodophor) each time the ice is refilled. It would be cheap insurance against bacterial contamination of the cooler, and would help ensure that the hands of the people serving the beer are sanitary as well. I wonder if their new sanitation protocols include anything like that.
Nah, that's expensive and sciency. Just squirt a tube or two of ivermectin in there and it's all good.

Or, you know, rent an actual refrigerated cooler for the weekend instead of swilling your beer out of an above-ground, jury-rigged septic tank.
 
Upvote
17 (17 / 0)

graylshaped

Ars Legatus Legionis
67,699
Subscriptor++
Yeah. Just the other day I was having a technical chat with a colleague about whether we should design something a certain way. I thought he was misusing a feature, so I asked him (a human) "why not combine these into one more generic module?" Rather than think about it, he asked an LLM exactly that and then just pasted the resulting vague justifications into the chat window; they confirmed the framing but didn't provide any facts (actually, it provided a few wrong facts about security).

On the other hand, I was just explaining this article to my wife, and my 10-year-old daughter piped up from her video game: "they make them like that so that people will use them more, because people feel good when it agrees with them."

So at least 10-year-olds get it.
I applaud how you are teaching her.
 
Upvote
10 (10 / 0)

Writer from Texas

Ars Centurion
313
Subscriptor
I've had a few ongoing, very minor medical issues that I've mentioned to doctors with no success (Seborrheic dermatitis is one I've had for years and years). They usually shrugged their shoulders and said, "That’s weird," and didn't offer a helpful suggestion. I gave the symptoms to ChatGPT, and it diagnosed the problem right away and suggested an over-the-counter treatment which worked. It was honestly pretty amazing. I’m not saying this is a substitute for real doctors, and I’m sure a specialist would have diagnosed the same thing. But as a supplement to medical professionals, there’s value, I reckon.
Why are commenters voting this down? She asked a question, got an answer that led to an OTC treatment. Presumably the treatment was low-risk, and if the issue continued to be bothersome she would have pursued it through the medical channel. If she could afford to.

Many doctors are unaware of many medical issues. I am delighted when I see Dr. Pol check his Merck Veterinary Manual. I have never had a doctor check Harrison's or the other major medical references that we had at the university library I worked for. These medical resources are expensive (horrendously so) and not available in most public library branches. Yet, for good or ill, the AI agents often are trained on them or acceptable surrogates.
 
Upvote
-13 (3 / -16)

SixDegrees

Ars Legatus Legionis
48,308
Subscriptor
Why are commenters voting this down? She asked a question, got an answer that led to an OTC treatment. Presumably the treatment was low-risk, and if the issue continued to be bothersome she would have pursued it through the medical channel. If she could afford to.

Many doctors are unaware of many medical issues. I am delighted when I see Dr. Pol check his Merck Veterinary Manual. I have never had a doctor check Harrison's or the other major medical references that we had at the university library I worked for. These medical resources are expensive (horrendously so) and not available in most public library branches. Yet, for good or ill, the AI agents often are trained on them or acceptable surrogates.
Probably because it smells of horseshit. OTC treatments are, in fact, the first recommendation for this condition, and there are quite a few to choose from - though not enough to keep one busy trying them for "years and years." Doctors will recommend one, or possibly a few to try if the first don't work, and will also prescribe other alternatives if those don't work. We're not told what the recommended MirAIcle was - again, most likely because the story is crap and naming the product would reveal it as snake oil or as an already-common recommendation.

These no-detail claims of miracle cures are tiresome attempts to shill for, in this case, AI services that never provided a solution.
 
Upvote
14 (14 / 0)

charliebird

Ars Tribunus Militum
2,356
Subscriptor++
Probably because it smells of horseshit. OTC treatments are, in fact, the first recommendation for this condition, and there are quite a few to choose from - though not enough to keep one busy trying them for "years and years." Doctors will recommend one, or possibly a few to try if the first don't work, and will also prescribe other alternatives if those don't work. We're not told what the recommended MirAIcle was - again, most likely because the story is crap and naming the product would reveal it as snake oil or as an already-common recommendation.

These no-detail claims of miracle cures are tiresome attempts to shill for, in this case, AI services that never provided a solution.

You caught me, I created an account 16 years ago just to plan for this moment where I could fabricate a story about using AI to solve a minor medical issue. Foiled.

My GP didn’t diagnose seborrheic dermatitis. They said having itchy ears isn’t uncommon and to try splashing a little water in my ears when I shower to loosen up the wax, or if it gets really bad, to occasionally use a bit of hydrocortisone cream on the edge of the ear canal. None of this worked, and it wasn’t just one GP who suggested things like this.

As I mentioned, I’m sure an ENT specialist would’ve been more insightful, but I wasn’t planning a visit to an ENT and I don’t have unlimited time and money to chase down every little medical issue I have. ChatGPT suggested it was likely seborrheic dermatitis and to occasionally use a dab of Head & Shoulders on the entrance of my ear canal when I shower. I looked it up and it seemed reasonable, so I gave it a shot. I got a positive result.

I get the hate people have towards AI companies and I have a lot of misgivings about them myself. But I’m logical enough to separate my feelings and also explore the tools to see if there is any utility in them.
 
Upvote
-3 (7 / -10)

graylshaped

Ars Legatus Legionis
67,699
Subscriptor++
Why are commenters voting this down? She asked a question, got an answer that led to an OTC treatment. Presumably the treatment was low-risk, and if the issue continued to be bothersome she would have pursued it through the medical channel. If she could afford to.

Many doctors are unaware of many medical issues.
As another responder suggested, a worthwhile doctor would have said "Would you like a referral to a dermatologist or an ENT?"

Nah. He didn't have time, so a less reliable Dr. Google it is!
 
Upvote
7 (7 / 0)
You caught me, I created an account 16 years ago just to plan for this moment where I could fabricate a story about using AI to solve a minor medical issue. Foiled.

My GP didn’t diagnose seborrheic dermatitis. They said having itchy ears isn’t uncommon and to try splashing a little water in my ears when I shower to loosen up the wax, or if it gets really bad, to occasionally use a bit of hydrocortisone cream on the edge of the ear canal. None of this worked, and it wasn’t just one GP who suggested things like this.

As I mentioned, I’m sure an ENT specialist would’ve been more insightful, but I wasn’t planning a visit to an ENT and I don’t have unlimited time and money to chase down every little medical issue I have. ChatGPT suggested it was likely seborrheic dermatitis and to occasionally use a dab of Head & Shoulders on the entrance of my ear canal when I shower. I looked it up and it seemed reasonable, so I gave it a shot. I got a positive result.

I get the hate people have towards AI companies and I have a lot of misgivings about them myself. But I’m logical enough to separate my feelings and also explore the tools to see if there is any utility in them.
The problem is the AI will confidently suggest a treatment that is not safe, or it will ignore symptoms that should send you to urgent care.

OpenAI Health - 50% wrong - like flipping a coin for your life - https://www.theguardian.com/technol...pt-health-fails-recognise-medical-emergencies
 
Upvote
18 (18 / 0)

charliebird

Ars Tribunus Militum
2,356
Subscriptor++
The problem is the AI will confidently suggest a treatment that is not safe, or it will ignore symptoms that should send you to urgent care.

OpenAI Health - 50% wrong - like flipping a coin for your life - https://www.theguardian.com/technol...pt-health-fails-recognise-medical-emergencies
I think we're allowed to have some room for judgment; otherwise our emergency rooms will be stuffed full of people with minor conditions (might be true anyway). This was a slightly itchy ear, not a medical emergency. I've made this pretty clear.
 
Upvote
-6 (3 / -9)

Wheels Of Confusion

Ars Legatus Legionis
75,398
Subscriptor
I think we're allowed to have some room for judgment; otherwise our emergency rooms will be stuffed full of people with minor conditions (might be true anyway).
It is true anyways, and if chatbots are used in lieu of actual medical access that will only exacerbate the problem, not alleviate it.
 
Upvote
13 (13 / 0)
A. You have had crap doctors
B. The AI could have sent you on a wild goose chase or made things worse. It guessed the right answer.
C. Why did you not go to a specialist!?
This is just a ridiculously ignorant take from someone who sounds like they've never had to deal with a bad medical system. (Remembering, sometimes that bad system is your only viable option, and you can't just go see a specialist outside that system without spending the kind of money a lot of people don't have.)

Yes, AI or any other kind of self diagnosis needs to be done carefully and keeping in mind that whatever you come up with, it's the start of a conversation, not the end. And of course anything a chatbot suggests should be thoroughly researched before taking it seriously.

But having personally dealt with doctors who just could not be bothered to take outright debilitating symptoms seriously, because they didn't fit into a neat box aligning with standard tests and screening, sometimes you need to do whatever works to get things moving. If a chatbot is what it takes to find a plausible diagnosis which either gets you straight to the right answer or gets a doctor to finally take things seriously, so be it.

You just have to remember that the chatbot does not have a medical degree, doesn't "know" anything, and is simply doing word association on a vast dataset. But it appears the OP here actually used AI responsibly and solved their problem, which is great.
 
Upvote
6 (10 / -4)

graylshaped

Ars Legatus Legionis
67,699
Subscriptor++
This is just a ridiculously ignorant take from someone who sounds like they've never had to deal with a bad medical system. (Remembering, sometimes that bad system is your only viable option, and you can't just go see a specialist outside that system without spending the kind of money a lot of people don't have.)

Yes, AI or any other kind of self diagnosis needs to be done carefully and keeping in mind that whatever you come up with, it's the start of a conversation, not the end. And of course anything a chatbot suggests should be thoroughly researched before taking it seriously.

But having personally dealt with doctors who just could not be bothered to take outright debilitating symptoms seriously, because they didn't fit into a neat box aligning with standard tests and screening, sometimes you need to do whatever works to get things moving. If a chatbot is what it takes to find a plausible diagnosis which either gets you straight to the right answer or gets a doctor to finally take things seriously, so be it.

You just have to remember that the chatbot does not have a medical degree, doesn't "know" anything, and is simply doing word association on a vast dataset. But it appears the OP here actually used AI responsibly and solved their problem, which is great.
No. The OP said he more or less didn't care about it, couldn't be bothered to seek a referral, and casually asked his doctor, who gave his lack of concern back to him, and now that he DiD HiS oWN ReSeaRCH has decided skepticism for "AI" can take a back seat because anecdote.

/rollseyes
 
Upvote
0 (4 / -4)

nosmadar2016

Ars Scholae Palatinae
922
Subscriptor
This is the incredible danger of current LLM models. They use incredibly compelling language to assert confidence where the system itself literally IS NOT CAPABLE OF. Yes, the LLM said that ice was a "credible and likely" source, but ChatGPT isn't really able to do that, what it is doing instead is predicting that the words credible and likely are the most appropriate next words in a response!

Even if you know this is the major flaw of LLMs, it's really easy to fail to correct for that false assertion of confidence. Humans are creatures of language, and we're "programmed" to interpret confident language as evidence of knowledge and expertise. Even experts in the field (and TBH a health department should be an expert in public health outbreaks) can obviously be fooled to rely on LLM assertions because of this.
Thank You, so much, for this response.

Up to this point it had not coagulated in my brain that the LLM is never doing anything other than "what's the next word" based on the prompt string. No search of any kind, other than through the current state of its tables and other mechanisms, to find that next word. So we all just hope and pray that there is some "knowledge" in the form of words that was previously inhaled and organized (or used to organize) those tables and other mechanisms, so that the truly correct string of words is returned to the user. (And, given that human language is context-sensitive, who determines what is truly "correct"?)
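The mechanism described above can be sketched as a toy next-word predictor. To be clear, this is a made-up illustration: the words and weights below are invented, and real LLMs use neural networks over token sequences, not lookup tables. But the core loop — repeatedly pick a plausible continuation, with no fact-checking step anywhere — is the same idea:

```python
# Toy "language model": a table mapping the last word to candidate
# next words with weights. In a real LLM these probabilities come
# from a trained network; here they are invented for illustration.
NEXT_WORD = {
    "ice": [("is", 5), ("credible", 3), ("likely", 2)],
    "is": [("credible", 4), ("a", 3), ("likely", 3)],
    "credible": [("and", 6), ("source", 4)],
    "and": [("likely", 7), ("credible", 3)],
    "likely": [("source", 8), ("and", 2)],
    "source": [(".", 10)],
}

def generate(start, steps=5):
    """Greedy decoding: always emit the highest-weight next word."""
    out = [start]
    for _ in range(steps):
        candidates = NEXT_WORD.get(out[-1])
        if not candidates:
            break
        word, _ = max(candidates, key=lambda wc: wc[1])
        out.append(word)
        if word == ".":
            break
    return " ".join(out)

print(generate("ice"))  # → "ice is credible and likely source"
```

Note that the output sounds confident ("credible and likely") purely because those were the highest-weight continuations — nothing in the loop ever checks whether the statement is true.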

YIKES !!!!

The only thing I am left wondering is how, in the case of Google, it manages to return a list of result options along with a source reference. (Which is a slightly saner type of behavior than what I described above.)

And now, imagine that sort of mechanism being used autonomously to decide whether to launch a nuclear weapon.

RUH-ROH!
 
Upvote
0 (1 / -1)

GFKBill

Ars Tribunus Militum
2,864
Subscriptor
ChatGPT suggested it was likely seborrheic dermatitis and to occasionally use a dab of Head & Shoulders on the entrance of my ear canal when I shower. I looked it up and it seemed reasonable, so I gave it a shot. I got a positive result.
Ok, with that context, I wouldn't have reacted as strongly as I did.

Might even try it myself since I use HnS already, though I think I'm just old and have hairy ears making them itch sometimes :flail:
(Now I'll only be able to see that emoji as someone desperately trying to itch their ears)
 
Upvote
11 (11 / 0)

SixDegrees

Ars Legatus Legionis
48,308
Subscriptor
Accusing someone of lying with no proof is a personal attack; there's no need for that.
Ejected from thread for 1 day – (Mar 2, 2026 at 11:15 AM)
Upvote
-1 (3 / -4)

jdale

Ars Legatus Legionis
18,261
Subscriptor
The problem in this particular case is that the OP is pretty much certainly lying about this experience.
Eh, it could be true. Doctors sometimes get things wrong, LLMs sometimes get things right. Even the statistic that they misdiagnose half of the cases that should be sent to the ER indicates that they got half of them right.

Is it relevant though? That we can find an anecdote of LLMs getting things right? The problem with LLMs is not that they are always wrong. The problem is that they are right often enough that people believe them the rest of the time.
 
Upvote
14 (14 / 0)