Did ChatGPT help health officials solve a weird outbreak? Maybe.

joelypolly

Smack-Fu Master, in training
55
I've had a few ongoing, very minor medical issues that I've mentioned to doctors with no success (Seborrheic dermatitis is one I've had for years and years). They usually shrugged their shoulders and said, "That’s weird," and didn't offer a helpful suggestion. I gave the symptoms to ChatGPT, and it diagnosed the problem right away and suggested an over-the-counter treatment which worked. It was honestly pretty amazing. I’m not saying this is a substitute for real doctors, and I’m sure a specialist would have diagnosed the same thing. But as a supplement to medical professionals, there’s value, I reckon.
My mother did the same, except it gave her some actual tests the doctor should run. Surprisingly, those tests confirmed a diagnosis of vitamin B toxicity that the doctors had missed.
 
Upvote
17 (18 / -1)

Simk

Smack-Fu Master, in training
94
Subscriptor++
People are just farming out their thinking to an outside (potentially unreliable) entity.

At work (I'm a developer) I was in a conversation with a few developers about an issue. I stated that there were two solutions, neither great. A colleague chimed in with "Google AI suggested these things, we should try them first" followed by copy and pasted text from the AI. A very cursory glance told me that the suggestions were impossible or just completely unrelated to our issue.

He didn't even bother reading it before posting.

I wrote back a one-liner: "None of these work because this is ...".

He wrote back "I asked a different AI, and it agreed with you".

Bloody hell.
 
Upvote
70 (71 / -1)
I've noticed AI agrees with me a lot more than I agree with myself
This is the incredible danger of current LLMs. They use incredibly compelling language to assert a confidence the system itself literally IS NOT CAPABLE of having. Yes, the LLM said that ice was a "credible and likely" source, but ChatGPT isn't really able to make that judgment; what it is doing instead is predicting that the words "credible" and "likely" are the most appropriate next words in a response!

Even if you know this is the major flaw of LLMs, it's really easy to fail to correct for that false assertion of confidence. Humans are creatures of language, and we're "programmed" to interpret confident language as evidence of knowledge and expertise. Even experts in the field (and TBH a health department should be an expert in public health outbreaks) can obviously be fooled into relying on LLM assertions because of this.
Not just this, I've noticed AI tends to agree with the user often if you give any hint of bias.
 
Upvote
27 (27 / 0)

1337 poster

Ars Scholae Palatinae
600
Who needs knowledge when you can ask a bot and get 80% of an answer. It's almost better than reddit!
/S

AI is excellent at always providing an answer to your question. The AI result on a Google search answers exactly what you wanted to know, and convincingly too. The only problem is that it's so often wrong, even on a yes/no question.

I just ignore it, but I have seen it assert that Pennsylvania is a made-up state and rave about the breakfasts at a pub that doesn't open until the afternoon.
 
Upvote
22 (22 / 0)

pnellesen

Ars Tribunus Militum
1,735
Subscriptor++
Thank you! Every time I hear about people wanting to use LLMs as a medical search engine I wonder if I've spent the past two decades hallucinating this tool that we already have!
Given the events of the past year or so, I'm not sure PubMed can be trusted as a source of reliable medical information any longer.
 
Upvote
-8 (3 / -11)
I admit to sometimes looking at the AI response to a Google search, but strictly as a means to get links to narrow the field of my search. I often find that the words I use in a search are too technical or too specific and the AI does a reasonable job (usually) of choosing better statistically-likely words for me to use in searching. As for the actual text of the AI response -- I may read that if I need a laugh.
 
Upvote
9 (9 / 0)

Simk

Smack-Fu Master, in training
94
Subscriptor++
AI is excellent at always providing an answer to your question. The AI result on a Google search answers exactly what you wanted to know, and convincingly too. The only problem is that it's so often wrong, even on a yes/no question.

I just ignore it, but I have seen it assert that Pennsylvania is a made-up state and rave about the breakfasts at a pub that doesn't open until the afternoon.
I searched for an issue I was having with a common API. I wanted to do something, but my search of the API wasn't showing any way of doing it.

The Google AI summary described the exact code solution that I needed, complete with API calls I hadn't seen.

"Wow!" I thought, "can't believe I missed that!"

I hadn't. The calls don't exist, not in the current version, not in past versions.

And I wasted my time trying to track down these invented calls.

First and last time.
 
Upvote
58 (58 / 0)

Fatesrider

Ars Legatus Legionis
24,979
Subscriptor
So, the health investigators then turned to ChatGPT for assurances.
Fire the fucking investigators for cause.

The cause? Egregiously unqualified. ANYONE claiming the title of "investigator" who used an AI AT ALL should be standing in an unemployment line scanning the job posts for mindless manual-labor jobs.
 
Upvote
2 (13 / -11)

1337 poster

Ars Scholae Palatinae
600
And just a reminder here about how good LLMs are for medical issues:

https://www.theguardian.com/technol...pt-health-fails-recognise-medical-emergencies

In 51.6% of cases where someone needed to go to the hospital immediately, the platform said stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation with University College London, described as “unbelievably dangerous”.

“If you’re experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it’s not a big deal,” she said. “What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could cost them their life.”

In one of the simulations, eight times out of 10 (84%), the platform sent a suffocating woman to a future appointment she would not live to see, Ruani said. Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care, said Ruani, who was not involved in the study.
I think this has to be a textbook example of the Torment Nexus.
 
Upvote
23 (23 / 0)

Uncivil Servant

Ars Scholae Palatinae
4,667
Subscriptor
Given the events of the past year or so, I'm not sure PubMed can be trusted as a source of reliable medical information any longer.

That shouldn't matter for the most part. PubMed is just a search engine and repository for medical journals. About the worst the administration could do is choose not to enforce the requirement that all federally funded research be published on PubMed Central going forward, but even that would be unlikely and pointless.

I would use PubMed to look for articles or systematic reviews on a specific subject, and then find the best sources available. This gets at the other problem with LLMs: not all source material is equal. The American College of Obstetricians and Gynecologists is a much better source than some religious-based non-profit. The American Academy of Child and Adolescent Psychiatry is a much better source on autism spectrum disorders than our current Secretary of Health and Human Services, etc.

This is why any such synthesis or analysis cannot and should not be automated. But hey, my current and last three employers by industry are Government, IT, IT, and Medicine. If I genuinely believed that this could be automated, I'd be going full steam ahead.
 
Upvote
30 (30 / 0)

jdale

Ars Legatus Legionis
18,261
Subscriptor
People are just farming out their thinking to an outside (potentially unreliable) entity.

At work (I'm a developer) I was in a conversation with a few developers about an issue. I stated that there were two solutions, neither great. A colleague chimed in with "Google AI suggested these things, we should try them first" followed by copy and pasted text from the AI. A very cursory glance told me that the suggestions were impossible or just completely unrelated to our issue.

He didn't even bother reading it before posting.

I wrote back a one-liner: "None of these work because this is ...".

He wrote back "I asked a different AI, and it agreed with you".

Bloody hell.
I'm not a fan of firing people and having AI take over their jobs, but really? Why are you even here if you can't contribute anything that doesn't come straight out of an AI? He is arguing to be replaced by AI, and at the current rate it really won't take long for his argument to succeed.
 
Upvote
35 (35 / 0)

silverboy

Ars Tribunus Militum
2,027
Subscriptor++
And LLMs remain a solution in search of a problem.

The common element was they drank beer kept in a jury-rigged cooler made from farm equipment that was not well cleaned and had leftover food of dubious provenance in it. Duh. We don't need Hercule Poirot here.

With due respect to what I am sure are the fine folks at the Brown County Health Department, all that "AI" did in this case was to damage their professional reputation when they used it in some weird attempt to justify their eminently reasonable conclusion. It would be one thing if it had told them something they hadn't thought of, but I prefer my paid professionals to have and exercise the sense Somebody gives a goose.
Thinking about it, they probably asked the LLM to see if their theory was good so they could stop looking for other causes. Given how LLMs work (extensively covered by others here!) that is a pretty risky choice and a pretty dumb one. A straight literature search would have been better. Or, you know, more actual investigation. Even if their theory is right (probable!), asking Clippy if they're done is beyond irresponsible.
 
Upvote
10 (11 / -1)

10Nov1775

Ars Scholae Palatinae
889
Confirming hypotheses seems like a really rough use of LLMs. When it's a yes/no answer, it's just predicting plausible words. Combined with the makers' tendency to make them as sycophantic as possible, it's not a good use of the tech.

I find them alright at generating hypotheses, as long as they're not too costly to evaluate.
I take great care, myself, not to poison the output—structuring questions very carefully so as not to suggest any possible answer, give it any information, or introduce any (knowable) source of bias.

I seem to get significantly higher quality output when using them in this way, along with other strategies. (e.g. Using jargon from the field you are asking about also helps, presumably because it biases the LLM towards more professional or expert sources.)

Open-ended questions like asking it to generate hypotheses do also seem to improve the output—and tend to be...hm...not necessarily more correct, but more useful to follow up on.
 
Upvote
-15 (3 / -18)

SubWoofer2

Ars Tribunus Militum
2,550
Who needs knowledge when you can ask a bot and get 80% of an answer. It's almost better than reddit!
/S

The AI was on topic and didn't talk about something else or ignore the original question. So bonus points on that; credit where due, in this regard it's likely to be rated better than Reddit.
 
Upvote
5 (6 / -1)

10Nov1775

Ars Scholae Palatinae
889
This is like including a word processor's autocomplete feature in the acknowledgements of a PhD thesis. Still, if current trends continue, crediting AI may become a prerequisite for getting federal and some corporate jobs, so I can see why a county health official might feel it was a good career move to do so.
Too cynical by half, I think.

Clinical medicine has both a culture of sharing cutting edge anecdotes—I tried this, it worked for my patient(s), might be worth trying if you run out of options—and a culture of being entirely transparent when reporting results, for ethical reasons.

I read their comments as someone who feels an ethical obligation to disclose that AI was used, and to describe in detail how it was used, while also adding their "case study" anecdote to a body of medical folk wisdom about whether AI can be a clinically useful tool.

The triviality of what they report supports this interpretation. They basically arrived at the answer themselves, asked the friendly AI what it thought, and then went to PubMed and double-checked their own hypothesis. The AI didn't play an important role in their decision making process, but because they did use it, they felt an ethical obligation to report in detail how it was used.
 
Upvote
27 (30 / -3)

10Nov1775

Ars Scholae Palatinae
889
Yoiks!! I don't even need a web search to recoil [!] from this. Even if it was fresh off the back lot of the local supply depot (and nothing in the article suggests that), someone presumably had to saw it in half (where? with what?) and hack in some dividers 'cause drainage tile sure doesn't come with "compartments".

Just. Don't. Do. This.
Someone believed that "everybody knows in Roman times they only drank alcoholic drinks because it kills germs" myth, lol.
 
Upvote
-9 (0 / -9)

10Nov1775

Ars Scholae Palatinae
889
There are plenty of fallacies in using LLMs, but that really isn't one of them. The big breakthrough with the attention mechanism (from the Google paper "Attention Is All You Need") that enabled LLMs to enter the mainstream in the first place was precisely that it enabled distinguishing different meanings of the same, or similar, words and fragments depending on the context they appear in. It is the reason why modern LLM-based translation software is much more likely to accurately preserve the meaning of texts when translating from one language to another than any previous machine-translation method.
It's incidentally why they're resistant to typos, and that's an easy example for anyone who would like to check whether what you said is true.

If everything about your request implies, say, that you're asking about a "hypersaline solution", but you accidentally write something like "hyposaline", current LLMs will generally answer the question you intended to ask.
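For anyone curious, the mechanism that post describes boils down to scaled dot-product attention: each token's output becomes a weighted mixture of the other tokens' representations, with the weights computed from context, which is how the same word can end up meaning different things in different sentences. A minimal NumPy sketch (toy dimensions and random vectors, not any real model's weights):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Each query token scores every key token; a high score means
    # "this context token matters for interpreting me"
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each row is a probability distribution
    # Each output row is a context-dependent blend of the value vectors
    return weights @ V, weights

# Toy example: 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
```

Real models add learned projection matrices, multiple heads, and many stacked layers on top of this, but the context-mixing idea is the same.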
 
Upvote
-18 (1 / -19)

norton_I

Ars Praefectus
5,776
Subscriptor++
In this case, the LLM got it right. But the answer here was seemingly so obvious it's a little mystifying why county health investigators needed to ask an LLM at all.

The LLM didn't really get it right. The health workers got it right. The LLM gave it the veneer of authenticity by just confirming that yes, that was a plausible explanation. And the impression I have from the article is that the prompts were heavily biasing it in that direction. I won't even say a chatbot couldn't be useful in similar situations, but here it seems like it was just used as an echo bot. This sounds like something ELIZA could have done; no need for ChatGPT.

If you want to use a chatbot as a rubber ducky to try to come up with alternative hypotheses, perhaps ones you haven't thought of, you need to be a lot more careful.
 
Upvote
19 (19 / 0)

MöbiusTrip1

Seniorius Lurkius
46
Subscriptor
Hell, why train and employ investigators at all if they're going to ask a CHATBOT for its opinion and leave it at that?
"Looks to me like it was the ice that got them. What do you think ChatGPT?"
"Sounds credible to me. Your hypothesis shows logical thinking and an attention to detail."
"Thanks buddy, case closed."
 
Upvote
24 (25 / -1)

GFKBill

Ars Tribunus Militum
2,864
Subscriptor
I've had a few ongoing, very minor medical issues that I've mentioned to doctors with no success (Seborrheic dermatitis is one I've had for years and years). They usually shrugged their shoulders and said, "That’s weird," and didn't offer a helpful suggestion. I gave the symptoms to ChatGPT, and it diagnosed the problem right away and suggested an over-the-counter treatment which worked. It was honestly pretty amazing. I’m not saying this is a substitute for real doctors, and I’m sure a specialist would have diagnosed the same thing. But as a supplement to medical professionals, there’s value, I reckon.
A. You have had crap doctors
B. The AI could have sent you on a wild goose chase or made things worse. It guessed the right answer.
C. Why did you not go to a specialist!?
 
Upvote
20 (22 / -2)

Fred Duck

Ars Tribunus Angusticlavius
7,166
Even if you know this is the major flaw of LLMs, it's really easy to fail to correct for that false assertion of confidence. Humans are creatures of language, and we're "programmed" to interpret confident language as evidence of knowledge and expertise. Even experts in the field (and TBH a health department should be an expert in public health outbreaks) can obviously be fooled to rely on LLM assertions because of this.
You sound as if you know a great deal about this subject so I, for one, believe you.

I just ignore it, but have seen it assert that Pennsylvania is a made up state, and raving about the breakfasts at a pub that does not open until the afternoon.
shakes head

That's false.

There's no such place as Wyoming.

Think about it. Have you ever met anyone from Wyoming?



Well, there you are.


I saw it on a television segment in 1989.

For those of you who are sceptical about the accuracy and value of AI, here is an example. I recently read an article which was illustrated by an image of supposed beer cans inside a cooler filled with ice.

The can has a gold top and white sides.

I spent several minutes looking for cans matching that description but couldn't find anything. Finally, I broke down and asked Claude Haiku 4.5.

Fred Duck said:
What beer can has a gold top and white sides?

Claude said:
I'm not immediately certain which specific beer can you're referring to based on that description alone. Several beers have gold and white color schemes, so it would help to know a bit more context.

Are you thinking of a Coors Banquet can? That's one of the most recognizable beers with a gold top and white sides design. However, other possibilities could include certain varieties of Michelob or other light lagers that use similar color schemes.

Do you remember anything else about it—like the brand name, whether it's a domestic or imported beer, the country it's from, or any text or logos on the can? That would help me narrow it down for you.

There you have it: a polite, succinct answer, instantly.


[Attached image: Coors-Banquet-443955063.jpg]
 
Upvote
1 (11 / -10)
A makeshift cooler that is not properly handled, so it gets easily contaminated, and yet people will just put shit in and take it out without a thought... Is there a better analogy for LLMs than that?
I would say this is more an analogy for the default state of humanity in general. Even the infrastructure and knowledge that allowed us to track down this one particular infection source are a relatively modern innovation.

Which of course is why we're currently busy tearing that all down. 🙄
 
Upvote
8 (8 / 0)

jdale

Ars Legatus Legionis
18,261
Subscriptor
For those of you who are sceptical about the accuracy and value of AI, here is an example. I recently read an article which was illustrated by an image of supposed beer cans inside a cooler filled with ice.

The can has a gold top and white sides.

I spent several minutes looking for cans matching that description but couldn't find anything. Finally, I broke down and asked Claude Haiku 4.5.

There you have it: a polite, succinct answer, instantly.


View attachment 129385
I suppose the point is that it gave you a can with a white top and gold sides, rather than a gold top and white sides? So, polite, succinct, instant... and wrong.

Of course the original image was just a stock photo from Getty:

View: https://www.gettyimages.com/detail/photo/cold-cans-inside-a-cooler-filled-with-ice-royalty-free-image/1904444791?phrase=beer%20can%20in%20ice&searchscope=image,film&adppopup=true


There's nothing in that picture to assure the cans are even beer. And it was taken in Thailand, so no assurance that it would even be a recognizable brand. But that said? Knowing it was in Thailand, it's not a big jump to guess Singha, which is the actual answer:

[Attached image: 1772343368608.png]


Aside from the gold top and white sides, it also has the gold curving element wrapping around the base.
 
Upvote
39 (39 / 0)

charliebird

Ars Tribunus Militum
2,356
Subscriptor++
A. You have had crap doctors
B. The AI could have sent you on a wild goose chase or made things worse. It guessed the right answer.
C. Why did you not go to a specialist!?
I'm not quite sure why you chose to challenge the narrative of my story. I simply stated that I used a tool to help me solve a specific problem. If the point is that these tools should be used cautiously and results should be verified, then I fully agree. If people don't want to use them, that’s cool, that’s their own business. But I don't really see why it's so controversial that I found value in a tool that helped me.
 
Upvote
-17 (4 / -21)

DarthSlack

Ars Legatus Legionis
23,060
Subscriptor++
Does anyone really, truly need to ask ChatGPT "Is it OK to drink toilet water?" Because the abandonment of common sense here is appalling.

The result of the last Presidential election (and the current state of the entire Republican party) suggests that yes, there is a large segment of the population that needs this level of help.
 
Upvote
20 (20 / 0)

SixDegrees

Ars Legatus Legionis
48,308
Subscriptor
Chatbots are great for bringing in new perspectives. The issue with health is that they agree too quickly with any crazy theory: this is bad for hypochondriacs.
That said, even without any mention of AI, this article would have been just as relevant. And just as gross...
Kennedy is proud that he used a chatbot to determine that Tylenol causes autism. He also asked it, "Am I smart?" and - after several intermediate prompts - it said, "Yes!"
 
Upvote
10 (10 / 0)