My mother did the same, except it gave her some actual tests that the doctor should run. And surprisingly, they actually confirmed the diagnosis of vitamin B toxicity, which the doctors missed.

I've had a few ongoing, very minor medical issues that I've mentioned to doctors with no success (seborrheic dermatitis is one I've had for years and years). They usually shrugged their shoulders and said, "That’s weird," and didn't offer a helpful suggestion. I gave the symptoms to ChatGPT, and it diagnosed the problem right away and suggested an over-the-counter treatment which worked. It was honestly pretty amazing. I’m not saying this is a substitute for real doctors, and I’m sure a specialist would have diagnosed the same thing. But as a supplement to medical professionals, there’s value, I reckon.
Not just this, I've noticed AI tends to agree with the user often if you give any hint of bias.

This is the incredible danger of current LLMs. They use incredibly compelling language to assert a confidence that the system itself is literally NOT CAPABLE OF. Yes, the LLM said that ice was a "credible and likely" source, but ChatGPT isn't really able to make that judgment; what it is doing instead is predicting that the words "credible" and "likely" are the most appropriate next words in a response!
Even if you know this is the major flaw of LLMs, it's really easy to fail to correct for that false assertion of confidence. Humans are creatures of language, and we're "programmed" to interpret confident language as evidence of knowledge and expertise. Even experts in the field (and TBH a health department should be an expert in public health outbreaks) can obviously be fooled into relying on LLM assertions because of this.
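To make that "predicting the next words" point concrete, here is a toy sketch of the scoring step a language model runs at each position. The tiny vocabulary and every number in it are invented for illustration; the thing to notice is that no step consults any facts.

```python
import numpy as np

# Toy illustration of next-token selection. The vocabulary and the
# "logits" (model scores) are invented for this example.
vocab = ["credible", "likely", "unlikely", "impossible"]
logits = np.array([3.1, 2.7, 0.4, -1.0])

# Softmax turns raw scores into a probability distribution over tokens.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for token, p in sorted(zip(vocab, probs), key=lambda pair: -pair[1]):
    print(f"{token}: {p:.2f}")

# "credible" and "likely" come out on top purely because the model scored
# them as the most plausible continuations in this context. Nothing here
# checks whether the underlying claim is actually true.
```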
Who needs knowledge when you can ask a bot and get 80% of an answer. It's almost better than reddit!
/S
Given the events of the past year or so, I'm not sure PubMed can be trusted as a source of reliable medical information any longer.

Thank you! Every time I hear about people wanting to use LLMs as a medical search engine I wonder if I've spent the past two decades hallucinating this tool that we already have!
I searched for an issue I was having with a common API. I wanted to do something, but my search of the API wasn't showing any way of doing it.

AI is excellent at always providing an answer to your question. The AI result on Google search answers exactly what you wanted to know. And convincingly too. Only problem is that it's so often wrong, even to a yes/no question.
I just ignore it, but I have seen it assert that Pennsylvania is a made-up state and rave about the breakfasts at a pub that does not open until the afternoon.
Fire the fucking investigators for cause.

So, the health investigators then turned to ChatGPT for assurances.
I think this has to be a textbook example of the Torment Nexus.

And just a reminder here about how good LLMs are for medical issues:
https://www.theguardian.com/technol...pt-health-fails-recognise-medical-emergencies
In 51.6% of cases where someone needed to go to the hospital immediately, the platform said stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation with University College London, described as “unbelievably dangerous”.
“If you’re experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it’s not a big deal,” she said. “What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could cost them their life.”
In one of the simulations, eight times out of 10 (84%), the platform sent a suffocating woman to a future appointment she would not live to see, Ruani said. Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care, said Ruani, who was not involved in the study.
The chatbot was just brown-nosing?

For real. Brown County? Stomach bug? How on the nose can we be here?
I'm not a fan of firing people and having AI take over their jobs but, really? Why are you even here if you can't contribute anything that doesn't come straight out of an AI? He is arguing to be replaced by AI, and at the current rate it really won't take long for his argument to succeed.

People are just farming out their thinking to an outside (potentially unreliable) entity.
At work (I'm a developer) I was in a conversation with a few developers about an issue. I stated that there were two solutions, neither great. A colleague chimed in with "Google AI suggested these things, we should try them first" followed by copy and pasted text from the AI. A very cursory glance told me that the suggestions were impossible or just completely unrelated to our issue.
He didn't even bother reading it before posting.
I wrote back a one liner "none of these work because this is ...".
He wrote back "I asked a different AI, and it agreed with you".
Bloody hell.
Thinking about it, they probably asked the LLM to see if their theory was good so they could stop looking for other causes. Given how LLMs work (extensively covered by others here!), that is a pretty risky choice and a pretty dumb one. A straight literature search would have been better. Or, you know, more actual investigation. Even if their theory is right (probable!), asking Clippy if they're done is beyond irresponsible.

And LLMs remain a solution in search of a problem.
The common element was they drank beer kept in a jury-rigged cooler made from farm equipment that was not well cleaned and had leftover food of dubious provenance in it. Duh. We don't need Hercule Poirot here.
With due respect to what I am sure are the fine folks at the Brown County Health Department, all that "AI" did in this case was to damage their professional reputation when they used it in some weird attempt to justify their eminently reasonable conclusion. It would be one thing if it had told them something they hadn't thought of, but I prefer my paid professionals to have and exercise the sense Somebody gives a goose.
I take great care, myself, not to poison the output—structuring questions very carefully so as not to suggest any possible answer, give it any information, or introduce any (knowable) source of bias.

Confirming hypotheses seems like a really rough use of LLMs. When it's a yes/no answer, then it's just predicting plausible words. Combined with the makers' tendency to make them as sycophantic as possible, it's not a good use of the tech.
I find them alright at generating hypotheses, as long as they're not too costly to evaluate.
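The "don't poison the output" care described above can be made mechanical. Here is a minimal sketch, assuming the official OpenAI Python client and a made-up outbreak scenario; the model name is an arbitrary choice, and the point is the contrast between the leading and the neutral phrasing, not the specific API.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Leading prompt: embeds the investigator's pet theory, inviting agreement.
leading = "We think the contaminated cooler ice caused the outbreak. Right?"

# Neutral prompt: names no favored hypothesis, and explicitly asks for
# alternatives plus the evidence that would rule each one out.
neutral = (
    "Twelve people developed gastroenteritis within 48 hours of the same "
    "outdoor event. List plausible transmission routes, and for each one, "
    "what evidence would support or rule it out."
)

for prompt in (leading, neutral):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary choice; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content, "\n---")
```

Asking for evidence that would rule hypotheses out pushes against the sycophancy problem, since simple agreement stops being the easiest completion.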
Too cynical by half, I think.

This is like including a word processor's autocomplete feature in the acknowledgements of a PhD thesis. Still, if current trends continue, crediting AI may become a prerequisite to getting federal and some corporate jobs, so I can see why a county health official might feel it was a good career move to do so.
Someone believed that "everybody knows in Roman times they only drank alcoholic drinks because it kills germs" myth, lol.

Yoiks!! I don't even need a web search to recoil [!] from this. Even if it was fresh off the back lot of the local supply depot (and nothing in the article suggests that), someone presumably had to saw it in half (where? with what?) and hack in some dividers 'cause drainage tile sure doesn't come with "compartments".
Just. Don't. Do. This.
It's incidentally why they're resistant to typos, and that's an easy example for anyone who would like to know whether what you said is true.

There are plenty of fallacies with using LLMs, but that really isn't one of them. The big breakthrough with the attention mechanism (from the Google paper, "Attention Is All You Need") that enabled LLMs to enter the mainstream in the first place was precisely that it enabled distinguishing different meanings of the same, or similar, words and fragments depending on the context they appear in. It is the reason why modern LLM-based translation software is much more likely to accurately preserve the meaning of texts when translating from one language to another than any previous machine translation method.
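For anyone curious what that context-dependence looks like mechanically, here is a stripped-down sketch of one self-attention step over toy random vectors. There are no learned projection matrices here (real models insert several), so this is an illustration of the mechanism, not a real model; the thing to notice is that the same static vector for "cooler" comes out different depending on its neighbours.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding width

# Static table: one fixed vector per word, context-free.
emb = {w: rng.normal(size=d) for w in
       ["the", "beer", "cooler", "weather", "is", "leaking", "today"]}

def attend(tokens):
    """One self-attention step with queries = keys = values = the raw
    embeddings (real models insert learned projections here)."""
    X = np.stack([emb[t] for t in tokens])        # (n, d)
    scores = X @ X.T / np.sqrt(d)                 # pairwise similarity
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)             # softmax over positions
    return w @ X                                  # context-mixed vectors

a = attend(["the", "beer", "cooler", "is", "leaking"])
b = attend(["the", "weather", "is", "cooler", "today"])

# Same input vector for "cooler", two different outputs: its
# representation now depends on the words around it.
print(np.linalg.norm(a[2] - b[3]))  # clearly nonzero
```

That context-mixing is also why a typo'd token, surrounded by sensible neighbours, still lands near the intended meaning.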
In this case, the LLM got it right. But the answer here was seemingly so obvious it's a little mystifying why county health investigators needed to ask an LLM at all.
A. You have had crap doctors

I've had a few ongoing, very minor medical issues that I've mentioned to doctors with no success (seborrheic dermatitis is one I've had for years and years). They usually shrugged their shoulders and said, "That’s weird," and didn't offer a helpful suggestion. I gave the symptoms to ChatGPT, and it diagnosed the problem right away and suggested an over-the-counter treatment which worked. It was honestly pretty amazing. I’m not saying this is a substitute for real doctors, and I’m sure a specialist would have diagnosed the same thing. But as a supplement to medical professionals, there’s value, I reckon.
You sound as if you know a great deal about this subject so I, for one, believe you.

Even if you know this is the major flaw of LLMs, it's really easy to fail to correct for that false assertion of confidence. Humans are creatures of language, and we're "programmed" to interpret confident language as evidence of knowledge and expertise. Even experts in the field (and TBH a health department should be an expert in public health outbreaks) can obviously be fooled into relying on LLM assertions because of this.
*shakes head*

I just ignore it, but I have seen it assert that Pennsylvania is a made-up state and rave about the breakfasts at a pub that does not open until the afternoon.
Fred Duck said:What beer can has a gold top and white sides?
Claude said:I'm not immediately certain which specific beer can you're referring to based on that description alone. Several beers have gold and white color schemes, so it would help to know a bit more context.
Are you thinking of a Coors Banquet can? That's one of the most recognizable beers with a gold top and white sides design. However, other possibilities could include certain varieties of Michelob or other light lagers that use similar color schemes.
Do you remember anything else about it—like the brand name, whether it's a domestic or imported beer, the country it's from, or any text or logos on the can? That would help me narrow it down for you.
I would say this is more an analogy for the default state of humanity in general. Even the infrastructure and knowledge that allowed us to track down this one particular infection source are a relatively modern innovation.

A makeshift cooler that is not properly handled, so it gets easily contaminated, and yet people will just put shit in and take it out without a thought... Is there a better analogy for LLMs than that?
I suppose the point is that it gave you a can with a white top and gold sides, rather than a gold top and white sides? So, polite, succinct, instant... and wrong.

For those of you who are sceptical about the accuracy and value of AI, here is an example. I recently read an article which was illustrated by an image of supposed beer cans inside a cooler filled with ice.
The can has a gold top and white sides.
I spent several minutes looking for cans matching that description but couldn't find anything. Finally, I broke down and asked Claude Haiku 4.5.
There you have it: a polite, succinct answer, instantly.
[attached image]
I'm not quite sure why you chose to challenge the narrative of my story. I simply stated that I used a tool to help me solve a specific problem. If the point is that these tools should be used cautiously and results should be verified, then I fully agree. If people don't want to use them, that’s cool, that’s their own business. But I don't really see why it's so controversial that I found value in a tool that helped me.

A. You have had crap doctors
B. The AI could have sent you on a wild goose chase or made things worse. It guessed the right answer.
C. Why did you not go to a specialist!?
Does anyone really, truly need to ask ChatGPT "Is it OK to drink toilet water?" Because the abandonment of common sense here is appalling.
Kennedy is proud that he used a chatbot to determine that Tylenol causes autism. He also asked it, "Am I smart?" and - after several intermediate prompts - it said, "Yes!"

Chatbots are great for bringing new perspectives. The issue with health is that they agree too quickly with any crazy theory: this is bad for hypochondriacs.
That said, even without any mention of AI, this article would have been as relevant. And as gross...