Who needs knowledge when you can ask a bot and get 80% of an answer. It's almost better than reddit!
/S
80% is being generous... I mean really generous.
That's because an LLM has no motive, and we're used to automatically guessing people's motives in any conversation. Motives don't have to be nefarious; for most of us, posting on Ars is motivated primarily by boredom, killing time, and the like, as well as an interest in the subject. If someone was always posting about how Bitcoin is the future, people would similarly make some assumptions about their motivations.
LLMs have no motivations, so when we naturally try to guess, it comes across as fake and insincere in ways that are almost baffling, because we aren't used to a conversation without a motive or any operating theory of mind as we know it. And of course, the LLM cannot understand your motivations and won't respond to them as we expect.
Of course LLMs (or rather, the people that develop them) have motives.
Not a single source can be trusted 100%. You ask your chatbot to cite its sources and you confirm its statements, just as you are supposed to do with any other scientific source. I see no issue in correctly using a chatbot for scientific or engineering purposes.
Why not just ask Bob?
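For what it's worth, the "confirm its statements" step can be partly scripted. Below is a minimal sketch, with a made-up claim and URL as placeholders (not output from any real chatbot), that fetches a cited page and checks whether the claim's distinctive terms even appear there. Think of it as a red-flag detector, not a fact checker:

```python
# Toy spot-check for chatbot citations: does the cited page even
# contain the distinctive words of the claim? A failing check is a
# red flag worth investigating, not proof either way.
import requests

def spot_check(claim: str, url: str, min_hits: int = 2) -> bool:
    page = requests.get(url, timeout=10).text.lower()
    # Crude heuristic: treat longer words as the "distinctive" terms.
    keywords = {w.strip('.,') for w in claim.lower().split() if len(w) > 6}
    hits = sum(1 for w in keywords if w in page)
    return hits >= min_hits

# Hypothetical usage -- both the claim and the URL are invented:
# spot_check("Legionella outbreaks have been traced to contaminated ice machines",
#            "https://example.org/cited-review")
```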
“AI was effective in this rural setting for rapid situational awareness,”
You sound as if you know a great deal about this subject, so I, for one, believe you.
shakes head
That's false.
There's no such place as Wyoming.
Think about it. Have you ever met anyone from Wyoming?
Well, there you are.
I saw it on a television segment in 1989.
For those of you who are sceptical about the accuracy and value of AI, here is an example. I recently read an article which was illustrated by an image of supposed beer cans inside a cooler filled with ice.
The can has a gold top and white sides.
I spent several minutes looking for cans matching that description but couldn't find anything. Finally, I broke down and asked Claude Haiku 4.5.
There you have it: a polite, succinct answer, instantly.
That's not beer. Also the so called state of Oregon only exists in a video game about pioneers, which by happenstance must avoid contaminated water. Full circle.
Does anyone really, truly need to ask ChatGPT "Is it OK to drink toilet water?" Because the abandonment of common sense here is appalling.
I'm not drinking toilet water! That stuff's expensive! (And another way to confuse AI.)
Decades of apocalyptic SF told us the intelligent machines would destroy us maliciously, with violence.
Turns out the machines will destroy us cheerfully, by making us stupid.
It's kind of amazing how prophetic "A Logic Named Joe" (1946) by Murray Leinster is. A "logic" ("computer" hadn't caught on yet) just starts answering any question. Any question, like how to kill someone and not be caught.
I like Oregon. They were kind enough to put out all the fires and quickly rebuild all the devastation of the riots for the time we spent in Portland. On the other hand, I've also been to Oklahoma, and all those stories you hear about it? [shudders] They're true.
what, no picture of someone throwing up, praying to the porcelain god, ralphing, etc.?
Shhhsh! I reiterate...there is no state of Oregon...and if there is, it rains all the time and it's too hot and dry...it's chock-a-block with Nazis...and hippies...and Californians. Whatever your politics are...they don't like them in Oregon. The taxes are horrendous, the roads are falling apart, and the schools have been going downhill since 1990...(those three are actually true). Property is way too expensive, as is gasoline (also true). The people are standoffish...and intrusive. If you do come and visit, please spend lots of money, but please don't stay. Better yet, just Venmo us some cash...
Sticking with O states, parts of Ohio are nice.
made of “a 10-ft length of non-food-grade corrugated black plastic farm drainage tile”
On reading that, my first thought was, "Was it new or used?" I can imagine some guy saying, "Hey! I got these old tiles in the barn we can use."
Is that Bob Dobbs of the Church of the SubGenius?
If we’re asking “AI”, do credentials matter anymore?
Confirming hypotheses seems like a really rough use of LLMs. When it's a yes/no answer, then it's just predicting plausible words. Combined with the makers' tendency to make them as sycophantic as possible, it's not a good use of the tech.
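To make the "predicting plausible words" point concrete, here's a toy sketch of my own (nothing like how real models are built, just an illustration of the principle): a purely frequency-based predictor answers a leading yes/no question with whatever continuation was most common after similar phrasing in its data, and truth never enters into it:

```python
from collections import Counter

# Invented mini-corpus of question openers and the answers that followed
# them. The only mechanism below is counting: the "answer" is whatever
# usually follows the phrasing, not what is actually true.
corpus = [
    ("could it be that", "yes"),
    ("could it be that", "yes"),
    ("could it be that", "possibly"),
    ("is there any chance", "yes"),
    ("is it false that", "no"),
]

def next_token(opener: str) -> str:
    counts = Counter(answer for q, answer in corpus if q == opener)
    return counts.most_common(1)[0][0]  # greedy: most frequent continuation wins

# A leading question gets agreement regardless of what follows the opener.
print(next_token("could it be that"))  # -> yes
```

Real models are vastly more sophisticated, but the training objective is still "most plausible continuation," which is part of why leading questions tend to get agreement.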
I'm not quite sure why you chose to challenge the narrative of my story. I simply stated that I used a tool to help me solve a specific problem. If the point is that these tools should be used cautiously and results should be verified, then I fully agree. If people don't want to use them, that’s cool, that’s their own business. But I don't really see why it's so controversial that I found value in a tool that helped me.
As I overheard the other day, "Common sense isn't a flower that grows in everyone's garden."
Hi
I'm not challenging the narrative; I have no doubt that what you describe is exactly what happened. And I'm happy you got a solution to your issue. If you had stopped your narrative there, I likely wouldn't have said anything.
Your conclusion, on the other hand, I do disagree with: "as a supplement to medical professionals, there’s value"
I do not agree that for medical advice it is wise to consult an LLM. They are simply too unreliable. If you're an intelligent person with a good background in the basics of research, then maybe. But as a general principle? Hells no.
If symptoms persist, see your Doctor.
Seems like in this sort of situation, it would be a good idea to spray the ice with a little food service sanitizer (e.g. iodophor) each time the ice is refilled. It would be cheap insurance against bacterial contamination of the cooler, and would help ensure that the hands of the people serving the beer are sanitary as well. I wonder if their new sanitation protocols include anything like that.
Nah, that's expensive and sciency. Just squirt a tube or two of ivermectin in there and it's all good.
The chatbot will always agree with your theory; that's quite convenient.
Bunnies!
Yeah. Just the other day I was having a technical chat with a colleague on whether we should design something a certain way. I thought he was using a feature just to use it, so I asked him (a human), "Why not combine these into one more generic module?" Rather than think about it, he asked an LLM exactly that and then just pasted the resulting vague justifications into the chat window. They confirmed the framing but didn't provide any facts (actually, it provided a few wrong facts about security).
On the other hand, I was just explaining this article to my wife, and my 10-year-old daughter just piped up from her video game: "they make them like that so that people will use them more, because people feel good when it agrees with them."
So at least 10-year-olds get it.
I applaud how you are teaching her.
I've had a few ongoing, very minor medical issues that I've mentioned to doctors with no success (seborrheic dermatitis is one I've had for years and years). They usually shrugged their shoulders and said, "That’s weird," and didn't offer a helpful suggestion. I gave the symptoms to ChatGPT, and it diagnosed the problem right away and suggested an over-the-counter treatment, which worked. It was honestly pretty amazing. I’m not saying this is a substitute for real doctors, and I’m sure a specialist would have diagnosed the same thing. But as a supplement to medical professionals, there’s value, I reckon.
Why are commenters voting this down? She asked a question and got an answer that led to an OTC treatment. Presumably the treatment was low-risk, and if the issue continued to be bothersome she would have pursued it through the medical channel. If she could afford to. Many doctors are unaware of many medical issues. I am delighted when I see Dr. Pol check his Merck Veterinary Manual. I have never had a doctor check Harrison's or the other major medical references that we had at the University Library I worked for. These medical resources are expensive (horrendously so) and not available in most public library branches. Yet, for good or ill, the AI agents are often trained on them or on acceptable surrogates.
Probably because it smells of horseshit. OTC treatments are, in fact, the first recommendation for this condition, and there are quite a few to choose from - though not enough to keep one busy trying them for "years and years." Doctors will recommend one, or possibly a few to try if the first don't work, and will also prescribe other alternatives if those don't work. We're not told what the recommended MirAIcle was - again, most likely because the story is crap and naming the product would reveal it as snake oil or as an already-common recommendation.
These no-detail claims of miracle cures are tiresome attempts to shill for, in this case, AI services that never provided a solution.
As another responder suggested, a worthwhile doctor would have said, "Would you like a referral to a dermatologist or an ENT?"
You caught me, I created an account 16 years ago just to plan for this moment where I could fabricate a story about using AI to solve a minor medical issue. Foiled.
My GP didn’t diagnose seborrheic dermatitis. They said having itchy ears isn’t uncommon and to try splashing a little water in my ears when I shower to loosen up the wax, or, if it gets really bad, to occasionally use a bit of hydrocortisone cream on the edge of the ear canal. None of this worked, and it wasn’t just one GP who suggested things like this.
As I mentioned, I’m sure an ENT specialist would’ve been more insightful, but I wasn’t planning a visit to an ENT, and I don’t have unlimited time and money to chase down every little medical issue I have. ChatGPT suggested it was likely seborrheic dermatitis and to occasionally use a dab of Head & Shoulders on the entrance of my ear canal when I shower. I looked it up and it seemed reasonable, so I gave it a shot. I got a positive result.
I get the hate people have towards AI companies, and I have a lot of misgivings about them myself. But I’m logical enough to separate my feelings and also explore the tools to see if there is any utility in them.
The problem is the AI will confidently suggest a treatment that is not safe, or it will ignore symptoms that should send you in to urgent care.
OpenAI Health - 50% wrong - like flipping a coin for your life - https://www.theguardian.com/technol...pt-health-fails-recognise-medical-emergencies
I think we're allowed to have some room for judgment; otherwise our emergency rooms would be stuffed full of people with minor conditions (which might be true anyways). This was a slightly itchy ear, not a medical emergency. I've made this pretty clear.
It is true anyways, and if chatbots are used in lieu of actual medical access, that will only exacerbate the problem, not alleviate it.
A. You have had crap doctors.
B. The AI could have sent you on a wild goose chase or made things worse. It guessed the right answer.
C. Why did you not go to a specialist!?
This is just a ridiculously ignorant take from someone who sounds like they've never had to deal with a bad medical system. (Remember, sometimes that bad system is your only viable option, and you can't just go see a specialist outside that system without spending the kind of money a lot of people don't have.)
Yes, AI or any other kind of self-diagnosis needs to be done carefully, keeping in mind that whatever you come up with, it's the start of a conversation, not the end. And of course anything a chatbot suggests should be thoroughly researched before taking it seriously.
But having personally dealt with doctors who just could not be bothered to take outright debilitating symptoms seriously, because they didn't fit into a neat box aligning with standard tests and screening, sometimes you need to do whatever works to get things moving. If a chatbot is what it takes to find a plausible diagnosis which either gets you straight to the right answer or gets a doctor to finally take things seriously, so be it.
You just have to remember that the chatbot does not have a medical degree, doesn't "know" anything, and is simply doing word association on a vast dataset. But it appears the OP here actually used AI responsibly and solved their problem, which is great.
No. The OP said he more or less didn't care about it, couldn't be bothered to seek a referral, and casually asked his doctor, who gave his lack of concern back to him, and now that he DiD HiS oWN ReSeaRCH has decided skepticism for "AI" can take a back seat because anecdote.
This is the incredible danger of current LLM models. They use incredibly compelling language to assert a confidence the system itself literally IS NOT CAPABLE OF. Yes, the LLM said that ice was a "credible and likely" source, but ChatGPT isn't really able to make that judgment; what it is doing instead is predicting that the words "credible" and "likely" are the most appropriate next words in a response!
Even if you know this is the major flaw of LLMs, it's really easy to fail to correct for that false assertion of confidence. Humans are creatures of language, and we're "programmed" to interpret confident language as evidence of knowledge and expertise. Even experts in the field (and TBH a health department should be an expert in public health outbreaks) can obviously be fooled into relying on LLM assertions because of this.
Thank you, so much, for this response.
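To illustrate the "most appropriate next words" mechanism described above, here's a deliberately crude sketch (my own toy over invented sentences, not ChatGPT's internals): a greedy bigram generator emits "credible and likely" simply because those words frequently follow this framing in its data, with no evidence assessment anywhere in the loop:

```python
from collections import Counter, defaultdict

# Invented snippets standing in for training text.
snippets = [
    "the ice is a credible and likely source",
    "the water is a credible and likely source",
    "the cooler is a plausible source",
]

# Count which word follows which (bigram counts).
bigrams = defaultdict(Counter)
for s in snippets:
    words = s.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def continue_from(word: str, steps: int = 5) -> str:
    out = [word]
    for _ in range(steps):
        if word not in bigrams:
            break  # no observed continuation
        word = bigrams[word].most_common(1)[0][0]  # greedy next word
        out.append(word)
    return " ".join(out)

print(continue_from("a"))  # -> "a credible and likely source"
```

The confident qualifiers come out because they were frequent, not because any evidence was weighed.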
ChatGPT suggested it was likely seborrheic dermatitis and to occasionally use a dab of Head & Shoulders on the entrance of my ear canal when I shower. I looked it up and it seemed reasonable, so I gave it a shot. I got a positive result.
Ok, with that context, I wouldn't have reacted as strongly as I did.

The AI was on topic and didn't talk about something else or ignore the original question. So bonus points on that, credit where due; in this regard it is likely to be rated better than reddit.
You sound like ars giving bonus points to shit AI for barely doing anything.
The problem is the AI will confidently suggest a treatment that is not safe, or it will ignore symptoms that should send you in to urgent care.
OpenAI Health - 50% wrong - like flipping a coin for your life - https://www.theguardian.com/technol...pt-health-fails-recognise-medical-emergencies
The problem in this particular case is that the OP is pretty much certainly lying about this experience.
Eh, it could be true. Doctors sometimes get things wrong, LLMs sometimes get things right. Even the statistic that they misdiagnose half of the cases that should be sent to the ER indicates that they got half of them right.