You sound as if you know a great deal about this subject so I, for one, believe you.
shakes head
That's false.
There's no such place as Wyoming.
Think about it. Have you ever met anyone from Wyoming?
Well, there you are.
I saw it on a television segment in 1989.
For those of you who are sceptical about the accuracy and value of AI, here is an example. I recently read an article illustrated by an image of what were supposedly beer cans inside an ice-filled cooler.
The can had a gold top and white sides.
I spent several minutes looking for cans matching that description but couldn't find anything. Finally, I broke down and asked Claude Haiku 4.5.
There you have it: a polite, succinct answer, instantly.
View attachment 129385
what, no picture of someone throwing up, praying to the porcelain god, ralphing, etc.?
Hi
Not challenging the narrative; I have no doubt that what you describe is exactly what happened. And I'm happy you got a solution to your issue. If you had stopped your narrative there, I likely wouldn't have said anything.
Your conclusion, on the other hand, I do disagree with: "as a supplement to medical professionals, there’s value"
I do not agree that for medical advice it is wise to consult an LLM. They are simply too unreliable. If you're an intelligent person with a good background in the basics of research, then maybe. But as a general principle? Hells no.
If symptoms persist, see your Doctor.
Just for fun, I "asked Google" the question "will S. Agbeni grow in an improperly drained cooler?" and the AI overview said yes, and referenced this case as its source, and the first search result was the CDC announcement about this (https://www.cdc.gov/mmwr/volumes/75/wr/mm7507a1.htm).
First paragraph of AI response:
Feels bad, but I can't really articulate why. Maybe the single source being circularly referenced, with the AI overview citing the very case the question came from?
Indeed, this has always been a garbage analysis of how LLMs work.
Okay, it predicts the next word. Fine.
What if you gave Einstein the transcript of a whole conversation about relativity, but you cut it off halfway through and asked him to predict the next word?
Would people then complain that "all he did was predict the next word" as if that's some kind of useful f**king insight into Einstein's thought process?
Yup. Ridiculous.
Also, all the people who claim that LLMs just work by calculating the most "probable" subsequent word, as if the way those probabilities are calculated is somehow obvious and trivial.
"How does the weather forecaster come up with his predictions?"
"Oh, it's stupid, all he does is tell us the most probable forecast."
Awesome.
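To make the point concrete, here's a toy sketch of next-token prediction. The "predict the most probable next word" loop really is trivial; everything hard lives in how the probabilities get computed, which this sketch fakes with a hand-written table of scores (a real LLM computes them from billions of learned parameters).

```python
# Toy sketch of greedy next-token prediction. The LOGITS table below is
# entirely made up for illustration -- in a real LLM, producing these
# scores is the part that takes billions of parameters.
import math

# Hypothetical "model": raw scores for the next token given the last token.
LOGITS = {
    "the": {"cat": 2.0, "dog": 1.5, "<end>": 0.1},
    "cat": {"sat": 2.2, "ran": 1.0, "<end>": 0.5},
    "sat": {"<end>": 3.0, "the": 0.2},
    "dog": {"ran": 1.8, "<end>": 0.9},
    "ran": {"<end>": 2.5, "the": 0.3},
}

def softmax(logits):
    """Turn raw scores into a probability distribution that sums to 1."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def generate(start, max_tokens=10):
    """Greedy decoding: repeatedly pick the single most probable next token."""
    tokens = [start]
    for _ in range(max_tokens):
        probs = softmax(LOGITS[tokens[-1]])
        nxt = max(probs, key=probs.get)
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # → "the cat sat"
```

The decoding loop is maybe ten lines in any real implementation too; calling that loop "all the model does" is exactly the mistake being complained about above.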