Kennedy is proud that he used a chatbot to determine that Tylenol causes autism. He also asked it, "Am I smart?" and - after several intermediate prompts - it said, "Yes!"

Chatbots are great for bringing new perspectives. The issue with health is that they agree too quickly with any crazy theory: this is bad for hypochondriacs.
That said, even without any mention of AI, this article would be just as relevant. And just as gross...
Nah, that's expensive and sciency. Just squirt a tube or two of ivermectin in there and it's all good.

Seems like in this sort of situation, it would be a good idea to spray the ice with a little food service sanitizer (e.g. iodophor) each time the ice is refilled. It would be cheap insurance against bacterial contamination of the cooler, and would help ensure that the hands of the people serving the beer are sanitary as well. I wonder if their new sanitation protocols include anything like that.
Probably because it smells of horseshit. OTC treatments are, in fact, the first recommendation for this condition, and there are quite a few to choose from - though not enough to keep one busy trying them for "years and years." Doctors will recommend one, or possibly a few to try if the first doesn't work, and will also prescribe other alternatives if those don't work. We're not told what the recommended MirAIcle was - again, most likely because the story is crap and naming the product would reveal it as snake oil or as an already-common recommendation.

Why are commenters voting this down? She asked a question and got an answer that led to an OTC treatment. Presumably the treatment was low-risk, and if the issue continued to be bothersome she would have pursued it through the medical channel. If she could afford to.
Many doctors are unaware of many medical issues. I am delighted when I see Dr. Pol check his Merck Veterinary manual. I have never had a doctor check Harrison's or the other major medical references that we had at the University Library I worked for. These medical resources are expensive (horrendously so) and not available in most public library branches. Yet, for good or ill, the AI agents often are trained on them or acceptable surrogates.
The problem in this particular case is that the OP is pretty much certainly lying about this experience.

The problem is the AI will confidently suggest a treatment that is not safe, or it will ignore symptoms that should send you in to urgent care.
OpenAI Health - 50% wrong - like flipping a coin for your life - https://www.theguardian.com/technol...pt-health-fails-recognise-medical-emergencies
I don't know how Copilot works, but Google's AI search results also have links - and they're most often to products or websites that pay for that placement. No different from their old "prioritized" search results, with ad customers nearer the top presumably based on how much they paid.

I agree that vetting the accuracy of AI answers can take as long as just searching for references by oneself.
One thing I like about the Copilot summaries on Bing search is the list of references it used to generate them, which I can peruse to gain more insight. I never trust the "AI" summary.
For subjects I already know something about, sometimes a few of the references contain incorrect information, so the summary is accordingly flawed.