Did ChatGPT help health officials solve a weird outbreak? Maybe.


The Lurker Beneath

Ars Tribunus Militum
6,636
Subscriptor
You sound as if you know a great deal about this subject, so I, for one, believe you.


shakes head

That's false.

There's no such place as Wyoming.

Think about it. Have you ever met anyone from Wyoming?



Well, there you are.


I saw it on a television segment in 1989.

For those of you who are sceptical about the accuracy and value of AI, here is an example. I recently read an article illustrated with an image of what appeared to be beer cans inside a cooler filled with ice.

The can has a gold top and white sides.

I spent several minutes looking for cans matching that description but couldn't find anything. Finally, I broke down and asked Claude Haiku 4.5.

[Screenshot: Claude's answer]

There you have it: a polite, succinct answer, instantly.



Welp, it's an almost human-like mistake.

Stella Artois over here has a gold top and white sides, though the sides have largish red labels.
 
Upvote
4 (4 / 0)

The Lurker Beneath

Ars Tribunus Militum
6,636
Subscriptor
Hi
Not challenging the narrative: I have no doubt that what you describe is exactly what happened. And I'm happy you got a solution to your issue. If you had stopped your narrative there, I likely wouldn't have said anything.

Your conclusion, on the other hand, I do disagree with: "as a supplement to medical professionals, there’s value"

I do not agree that for medical advice it is wise to consult an LLM. They are simply too unreliable. If you're an intelligent person with a good background in the basics of research, then maybe. But as a general principle? Hells no.

If symptoms persist, see your doctor.

Well, that's the thing: there are plenty on these forums who CAN do their own research, and when it comes to annoying but plainly non-lethal skin rashes, an LLM might well usefully augment it, IMO. [Seriously, would you hold out great hopes if you went to your doctor with something like that anyway?]
 
Upvote
-3 (2 / -5)

The Lurker Beneath

Ars Tribunus Militum
6,636
Subscriptor
Just for fun, I "asked Google" the question "will S. Agbeni grow in an improperly drained cooler?" The AI Overview said yes and referenced this case as its source, and the first search result was the CDC announcement about this (https://www.cdc.gov/mmwr/volumes/75/wr/mm7507a1.htm).

First paragraph of the AI response:

[Screenshot: first paragraph of the AI Overview]

Feels bad, but I can't really articulate why. Single-source being circularly referenced or something?

Autocitogenesis.
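
If you want to make the worry concrete: model citations as a directed graph, where an edge A -> B means "A cites B", and look for a cycle, i.e. a claim that is transitively its own evidence. A toy Python sketch (the graph and every name in it are invented for illustration, not taken from the actual case):

```python
def find_cycle(graph, start, path=None):
    # Depth-first walk of "A cites B" edges; returns the first path
    # that revisits a node, i.e. a claim that is its own evidence.
    path = path or [start]
    for cited in graph.get(start, []):
        if cited in path:
            return path + [cited]
        found = find_cycle(graph, cited, path + [cited])
        if found:
            return found
    return None

# Hypothetical graph loosely matching the anecdote above.
citations = {
    "AI overview": ["news article"],
    "news article": ["AI overview"],  # the article reports the AI's claim
}
print(find_cycle(citations, "AI overview"))
# ['AI overview', 'news article', 'AI overview']
```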
 
Upvote
-1 (1 / -2)

The Lurker Beneath

Ars Tribunus Militum
6,636
Subscriptor
Indeed, this has always been a garbage analysis of how LLMs work.

Okay, it predicts the next word. Fine.

What if you gave Einstein the transcript of a whole conversation about relativity, but cut it off halfway through and asked him to predict the next word?

Would people then complain that "all he did was predict the next word" as if that's some kind of useful f**king insight into Einstein's thought process?

Chess-playing computers just predict the next move of a winning game!

But the biggest issue is people thinking it's just a stochastic Markov chain. Those don't include the cloud of correlations that embodies real meaning.
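
For contrast, here is what a literal stochastic Markov chain looks like as a next-word predictor: a toy Python sketch (corpus and names invented for illustration). One word of context in, a co-occurrence lookup out, which is exactly why it can't tell 'hypothermia' from 'hyperthermia':

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    # Map each word to the list of words seen immediately after it;
    # repeated entries act as unnormalised counts.
    table = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def next_word(table, word):
    # Sample purely from what happened to follow `word` in training.
    # One word of context, no meaning, just co-occurrence statistics.
    return random.choice(table[word])

corpus = "the patient has hypothermia and the patient has hyperthermia"
table = train_bigrams(corpus)
print(next_word(table, "has"))  # 'hypothermia' or 'hyperthermia', a coin flip
```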
 
Upvote
1 (2 / -1)

The Lurker Beneath

Ars Tribunus Militum
6,636
Subscriptor
Yup. Ridiculous.

Also, all the people who claim that LLMs work by just calculating the most "probable" subsequent word talk as if the way those probabilities are calculated is somehow obvious and trivial.

"How does the weather forecaster come up with his predictions?"
"Oh, it's stupid, all he does is tell us the most probable forecast."

Awesome.

I think you could maybe formalise them as mathematically equivalent to Markov chains, based not on input statements but on a huge corpus of recursively-generated hypothetical statements.

So where a simple Markov chain built from all the text an LLM has read will often confuse, say, 'hypothermia' and 'hyperthermia' (as in the example somebody posted earlier), in the recursive set those confusions have been winnowed down to near zero. Even if you see the output as a Markov chain, it will only be sampling over the meaningful - or at least relatively meaningful - inputs.
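
To make that concrete, a toy Python sketch (the logits are invented for illustration): the "most probable next word" comes out of a distribution the whole network computes from the full context, so it can be sharply peaked on the contextually right word instead of a coin flip over raw co-occurrences:

```python
import math
import random

def softmax(logits):
    # Standard numerically-stable softmax: raw scores -> probabilities.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["hypothermia", "hyperthermia", "fever"]
# Hypothetical scores a model might assign after reading
# "left out in the snow overnight, he developed ..."
logits = [6.0, 0.5, 1.0]
probs = softmax(logits)
print({w: round(p, 3) for w, p in zip(vocab, probs)})
# {'hypothermia': 0.989, 'hyperthermia': 0.004, 'fever': 0.007}

# Sampling from this peaked distribution almost always picks the
# context-appropriate word, unlike the bigram coin flip above.
sampled = random.choices(vocab, weights=probs, k=1)[0]
```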
 
Last edited:
Upvote
0 (0 / 0)