Understanding epidemiology models


Deleted member 1

Guest
What's a non-scientist to do while that consensus is still emerging? When you see different numbers produced by the models, focus on the important questions: what were those models looking at, how do the conditions they were testing differ, and what assumptions went into them? It's not enough to know they're different numbers—it's important to have a sense of why they're different.

And remember that while any given model run shouldn't be viewed as the final word on what reality will look like, relying on a model will be a lot better than trying to set policy without any idea of what the outcome might be.


The above is the polite model for, "Unless you are a scientist/researcher involved in the relevant model creation, please keep your unknowledgeable and untrained opinion to yourself and follow directions."

I can’t tell if this is meant to be a serious exhortation, or a strawman parody in the mold of what a climate denier conjures up when they think of the word “scientism”.
 
Upvote
31 (38 / -7)

BigDH01

Ars Scholae Palatinae
898
And remember that while any given model run shouldn't be viewed as the final word on what reality will look like, relying on a model will be a lot better than trying to set policy without any idea of what the outcome might be.

I don't know if this is logically true. If the model's accuracy is poor then it's not really providing any idea either. If the model is extremely poor then it might even lead to false conclusions that result in outcomes worse than randomness.

In the early days, one model driving major policy decisions was the IHME model. When it was examined for accuracy, reality fell outside the 95% CI for 70% of states (at the time). https://www.sydney.edu.au/news-opinion/ ... model.html. I'm not sure that's enough accuracy to say relying on it is a lot better than nothing at all. Imagine a default position of wait and see (no modeling or movement of resources). Then the IHME model comes along and says state A is going to experience an outbreak, so state B sends resources. Because the model is inaccurate, it's actually state B that experiences an outbreak, so the resources need to be shipped back. This is all hypothetical obviously, but in that situation a wait-and-see approach would've been better than trusting a flawed model. The author of the Sydney paper makes the same point. I don't think it's true that bad data is better than no data.
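For what it's worth, checking that kind of claim is mechanically simple: count how often reality lands inside the published interval. A minimal sketch (numpy, with placeholder arrays rather than actual IHME output):

```python
# Minimal coverage check: what fraction of observed values fall inside a
# model's 95% interval? The numbers here are placeholders, not IHME output.
import numpy as np

observed = np.array([120, 340,  80, 510,  95, 230])    # e.g. deaths per state
lower_95 = np.array([100, 400,  90, 300,  60, 240])    # model's lower bounds
upper_95 = np.array([200, 600, 150, 450, 120, 400])    # model's upper bounds

inside = (observed >= lower_95) & (observed <= upper_95)
print(f"empirical coverage: {inside.mean():.0%}")       # a well-calibrated 95% interval covers ~95%
```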
 
Upvote
19 (27 / -8)
As a UK citizen, at the beginning of the pandemic, I was very happy to see the Government listening to scientists and following the Imperial Model. However, in 20/20 hindsight, I can see how we should've concentrated more on solving the actual problem with concrete actions, like Germany and South Korea did.

The Government's original analysis wasn't really followed through with concrete actions. Afterwards, they just copied whatever other countries were doing. Too little, too late.
 
Upvote
27 (30 / -3)

Jim Z

Ars Legatus Legionis
46,752
Subscriptor
unfortunately you can add years of social media inflating everyone's egos (by design) so you now have millions of people who think they're really damn important- apropos of nothing!- and just refuse to listen to anyone. You know, 'cos "ain't nobody gonna tell me what to do."

Twitter and FB should have been shut the fuck down years ago.
 
Upvote
6 (19 / -13)
What's a non-scientist to do while that consensus is still emerging? When you see different numbers produced by the models, focus on the important questions: what were those models looking at, how do the conditions they were testing differ, and what assumptions went into them? It's not enough to know they're different numbers—it's important to have a sense of why they're different.
Absolutely excellent advice, and a good article. Thanks.

Sadly, I somehow doubt the large media outlets will pay attention.
 
Upvote
17 (17 / 0)

Jim Z

Ars Legatus Legionis
46,752
Subscriptor
Consider your terminology. 'Model' is opaque and suggests a small but perfect version of something. Just say 'spreadsheet'.

only to a moron who doesn't know what a "model" is.

Consider what goes into your spreadsheet: What you 'know', and assumptions about how what you 'know' might react to create the future. Just say 'psychic reading'.

Consider that what you 'know' is only a fraction of what you don't 'know' and even that is likely to be wrong.

Boil that down to a headline: 'Three Million to Die'

Now, remember that you (you!) cannot produce an accurate budget for yourself from now until your death. You don't even know enough about yourself to predict your own future.

See, models are easy.

Only to people who have no clue what they're talking about. Case in point: your stupid comment.
 
Upvote
6 (28 / -22)

Jim Z

Ars Legatus Legionis
46,752
Subscriptor
What's a non-scientist to do while that consensus is still emerging? When you see different numbers produced by the models, focus on the important questions: what were those models looking at, how do the conditions they were testing differ, and what assumptions went into them? It's not enough to know they're different numbers—it's important to have a sense of why they're different.
Absolutely excellent advice, and a good article. Thanks.

Sadly, I somehow doubt the large media outlets will pay attention.

no, because people will not pay attention. The average person is an impatient BABY who wants an easy, simple answer RIGHT NOW, preferably one which says they don't have to do anything.

they see things like "based on new data we've revised our models" and interpret that as "WELL YOU WERE WRONG BEFORE SO YOU ARE ALWAYS WRONG FOREVER AND EVER DON'T TELL ME WHAT TO DO!"

We're beyond saving.
 
Upvote
-2 (18 / -20)
Some scattered thoughts while things are slow at work:

1) Models also differ based on what data the modellers think they can feasibly get and how much computational power they have at their disposal. A graph-based model is probably what every epidemiologist wants, but it doesn't mean anything if they don't have the data to construct a realistic interaction graph. Similarly, you may be able to construct a highly detailed mechanistic model based on detailed population data, but you won't be able to do anything with it if all you have are a couple of old laptops and some programming skill in R.

2) Besides the rules of the model, comparing model results with real world data can also tell us something about the conditions the model runs used. If you run the model under a range of assumed conditions and some match real world results closer than others, those are the ones that are more likely to be accurate.

3) Besides the things pointed out in the article, it's also important to look for a discussion of the model's robustness. As the actual conditions (parameters, to be technical) are never completely known, it behooves the modelers to run their models under different (but still realistic) conditions to see if their conclusions hold if their read on the actual situation is somewhat off. This discussion, if I recall correctly, was missing from the original Imperial College model pre-print.
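To make points 2 and 3 concrete, here's a minimal Python sketch, emphatically not anyone's production model: the observed series, the parameter range, and the 1%-of-population threshold are all made up for illustration. A toy SIR model is scored against observed data over a grid of assumed transmission rates (point 2), and a qualitative conclusion is then checked across the plausible range rather than only at the best fit (point 3).

```python
# Toy sketch only -- not anyone's actual model. All numbers are made up.
import numpy as np

def sir_daily_cases(beta, gamma=1/7, n=1_000_000, i0=500, days=60):
    """Discrete-time SIR; returns new infections per day."""
    s, i = n - i0, i0
    new_cases = []
    for _ in range(days):
        new = beta * s * i / n          # new infections today
        rec = gamma * i                 # recoveries today
        s, i = s - new, i + new - rec
        new_cases.append(new)
    return np.array(new_cases)

# A short "observed" series (placeholder numbers).
observed = np.array([110, 130, 155, 185, 220, 265, 320, 385, 465, 560])

# Point 2: which assumed transmission rate best reproduces the observed growth?
betas = np.linspace(0.15, 0.45, 31)
errors = [np.mean((sir_daily_cases(b, days=len(observed)) - observed) ** 2) for b in betas]
best = betas[int(np.argmin(errors))]
print(f"best-fitting beta ~ {best:.2f}")

# Point 3 (robustness): does the qualitative conclusion -- "daily cases peak above
# 1% of the population" -- hold across a band of plausible betas, not just at the
# single best-fitting value?
plausible = betas[np.abs(betas - best) <= 0.05]
holds = sum(sir_daily_cases(b, days=240).max() > 0.01 * 1_000_000 for b in plausible)
print(f"conclusion holds in {holds} of {len(plausible)} plausible runs")
```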
 
Upvote
29 (30 / -1)
Models have become yet another aspect of life embroiled in political controversy. And it's fair for the public to ask why different models—or even the same model run a few days apart—can produce dramatically different estimates of future fatalities.

What's much less fair is that the models and the scientists behind them have come under attack by people who don't understand why these different numbers are an expected outcome of the modeling process. And it's downright unfortunate that these attacks are often politically motivated—driven by a focus on whether the numbers are convenient from a partisan perspective.

Consider your terminology. 'Model' is opaque and suggests a small but perfect version of something. Just say 'spreadsheet'.

Consider what goes into your spreadsheet: What you 'know', and assumptions about how what you 'know' might react to create the future. Just say 'psychic reading'.

Consider that what you 'know' is only a fraction of what you don't 'know' and even that is likely to be wrong.

Boil that down to a headline: 'Three Million to Die'

Now, remember that you (you!) cannot produce an accurate budget for yourself from now until your death. You don't even know enough about yourself to predict your own future.

See, models are easy.

1) No, models don't suggest they are small but perfect versions of the system. Unless you think all models of Boeing 747s have miniaturized jet engines in them? What models should do is capture the relevant aspects of the original system. What aspects are relevant? Depends on what you are making the model for.
Absolutely no one even slightly competent runs models using spreadsheets.

2) Just because you don't know something exactly doesn't mean you can't put a reasonable range on it. I may not know exactly when Snapdragon apples will ripen in Geneva orchards, but I don't need to bother checking before August or after October.
Once the bounds are set, we can run the models many times under different conditions within the bounds. If a conclusion holds for almost all of the runs, then we can be pretty certain that it will hold up in the real world.

3) It's hard to predict an individual outcome, true. Luckily the uncertainty in a mean value decreases with group size: the standard error of the mean is inversely proportional to the square root of the group size (equivalently, its variance is inversely proportional to the group size).
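A quick numerical check of that last point, a toy sketch with made-up parameters and nothing to do with any epidemic model in particular:

```python
# Toy check: the standard error of a sample mean shrinks like 1/sqrt(n)
# (equivalently, its variance shrinks like 1/n).
import numpy as np

rng = np.random.default_rng(0)
sigma = 10.0                                     # spread of individual outcomes
for n in (10, 100, 1000):                        # group sizes
    sample_means = rng.normal(0.0, sigma, size=(5_000, n)).mean(axis=1)
    print(f"n={n:>4}: empirical SE = {sample_means.std():.3f}, sigma/sqrt(n) = {sigma/np.sqrt(n):.3f}")
```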
 
Upvote
49 (50 / -1)

MilleniX

Ars Tribunus Angusticlavius
7,840
Subscriptor++
If you want to read about a really sophisticated epidemic policy-response model that didn't get used this time around, have a look at the last paper on its computational performance. Very briefly, it could perform individual-agent-scale modeling of the entire US population, with individual 180-day scenarios running in 6 minutes each. It could run multiple scenarios concurrently, even sharing state until they diverge, and get even higher throughput, because each scenario run can't use every computer core 100% of the time.

Sadly, while the code was built and tuned in 2017, and the Blue Waters supercomputer that it was tuned for would have been available for it, the team of computer scientists that built it and the epidemiology researchers who drove its development lost their funding back in 2017, disbanded, and moved on to other jobs. This was at least partly funded by a contract from the NIH pandemic flu preparedness program. It was also used to help plan US and international aid response to the Ebola outbreak in West Africa - they were doing runs to decide where to send field hospitals while the material was being loaded on planes, with last-minute decisions about where to set them up.
 
Upvote
48 (48 / 0)

hoodafa-kizit

Smack-Fu Master, in training
72
What's a non-scientist to do while that consensus is still emerging? When you see different numbers produced by the models, focus on the important questions: what were those models looking at, how do the conditions they were testing differ, and what assumptions went into them? It's not enough to know they're different numbers—it's important to have a sense of why they're different.
Absolutely excellent advice, and a good article. Thanks.

Sadly, I somehow doubt the large media outlets will pay attention.

no, because people will not pay attention. The average person is an impatient BABY who wants an easy, simple answer RIGHT NOW, preferably one which says they don't have to do anything.

they see things like "based on new data we've revised our models" and interpret that as "WELL YOU WERE WRONG BEFORE SO YOU ARE ALWAYS WRONG FOREVER AND EVER DON'T TELL ME WHAT TO DO!"

We're beyond saving.

So maybe we should do scientific modelling of stupid public responses to epidemic models, and adjust the release schedule of model information accordingly. This way we could get an appropriate public response with stupidity factored in. I accept that this might not work for adjusting to political responses, since in this day and age it's almost impossible to predict the increasingly common "Holy fuck, he/she actually said that???" component.
 
Upvote
7 (8 / -1)

traumadog

Ars Tribunus Angusticlavius
8,229
What's a non-scientist to do while that consensus is still emerging? When you see different numbers produced by the models, focus on the important questions: what were those models looking at, how do the conditions they were testing differ, and what assumptions went into them? It's not enough to know they're different numbers—it's important to have a sense of why they're different.
Absolutely excellent advice, and a good article. Thanks.

Sadly, I somehow doubt the large media outlets will pay attention.

I've always called models our "Ghost of Christmas Yet To Come"... meaning that any outcome they show for the future depends on what we make of the warning and what we do now to change (or keep) it.
 
Upvote
5 (5 / 0)
Two additional problems I've noticed:

1) The preprint phenomenon.
This can have value, but there have definitely been cases where models have been published with fundamental flaws. Not just the inherent limitations alluded to above, but completely invalid results.
Here in Austin, we've been using a model from the University of Texas. The first version they published gave ranges of results (good), including potential numbers of infected that exceeded the population of Austin (bad; a toy illustration of that red flag follows after point 2). These are simple issues that certainly would have been caught in peer review.

They since corrected it, but some of the initial responses ended up being based on the invalid model.

2) This is probably just human nature, but there definitely seems to be an inclination on the part of researchers to dig in their heels rather than accept flaws in their model.

Again the aforementioned UT model. They started with certain assumptions (reasonable) and then after a time the real world results diverged from their model (again, expected). But they seemed hesitant to fundamentally revisit their initial assumptions. Instead, they stuck by their model and attributed most/all of the differences to behavior changes in the population.

Their conclusion was that Austin had achieved 95% social distancing (even though, as noted in the article, a bunch of businesses were considered essential and exempt).
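As a footnote to point 1: one common way a projection blows past the population is an unconstrained exponential extrapolation, which a compartmental or logistic model avoids by construction. A toy illustration with made-up numbers (not the UT model, whose internals I haven't seen):

```python
# Toy illustration: an unconstrained exponential projection sails past the whole
# population, while a logistic curve saturates at it. All numbers are invented.
import numpy as np

population = 1_000_000          # stand-in metro-area population
r = 0.2                         # assumed 20%/day growth, purely for illustration
days = np.arange(120)

exponential = 100 * np.exp(r * days)
logistic = population / (1 + (population / 100 - 1) * np.exp(-r * days))

print(f"day {days[-1]} exponential projection: {exponential[-1]:,.0f}  (population: {population:,})")
print(f"day {days[-1]} logistic projection:    {logistic[-1]:,.0f}")
```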
 
Upvote
5 (6 / -1)

jimlux

Ars Tribunus Militum
1,668
Models have become yet another aspect of life embroiled in political controversy. And it's fair for the public to ask why different models—or even the same model run a few days apart—can produce dramatically different estimates of future fatalities.

What's much less fair is that the models and the scientists behind them have come under attack by people who don't understand why these different numbers are an expected outcome of the modeling process. And it's downright unfortunate that these attacks are often politically motivated—driven by a focus on whether the numbers are convenient from a partisan perspective.

Consider your terminology. 'Model' is opaque and suggests a small but perfect version of something. Just say 'spreadsheet'.

Consider what goes into your spreadsheet: What you 'know', and assumptions about how what you 'know' might react to create the future. Just say 'psychic reading'.

Consider that what you 'know' is only a fraction of what you don't 'know' and even that is likely to be wrong.

Boil that down to a headline: 'Three Million to Die'

Now, remember that you (you!) cannot produce an accurate budget for yourself from now until your death. You don't even know enough about yourself to predict your own future.

See, models are easy.

Yes - seductively easy - and look what happened in 2008, when financial instrument risk models (erroneously) assumed low covariances between inputs.
 
Upvote
12 (12 / 0)

Perfectly Frank

Ars Centurion
298
Subscriptor
The fundamental problem with models (i.e. simulations) is, until they have demonstrated some degree of accuracy to the real world they are trying to model they are of no use, especially for formulating public policy. Saying their usefulness is in the relative results between outcomes is simply nonsense. Garbage in, garbage out.....

The question I haven't seen answered anywhere: Can any model accurately reproduce the results already seen? Only then would the model be useful for public policy formulation within the context and limits of that model.

All this talk about using inaccurate models reminds me of the question my Structures Professor in college posed on the first day of class. If my plane crashes and kills 10 people and your airplane crashes and kills 100, is my airplane better than yours?

The problem with waiting until a model accurately reproduces the results already seen is that it's a fast-moving situation, the results keep changing, and in broad principle the (not entirely accurate) models were right to show an exponential increase of infections followed a couple of weeks later by deaths. By which time you're left saying "oh dear, the model was right, but huge numbers have died and we're no nearer controlling it".


As Jules and James (linked earlier) showed, the Imperial Model was good; one of the factors was wrong, so the result was too optimistic and the death rate has been higher than that first forecast. That was fortunately offset to an extent by social distancing in the UK, but outcomes were worse than in other countries due to time wasted before implementing lockdown and track & trace, which still isn't ready. Your country may vary.
 
Upvote
26 (27 / -1)

Jim Z

Ars Legatus Legionis
46,752
Subscriptor
The fundamental problem with models (i.e. simulations) is, until they have demonstrated some degree of accuracy to the real world they are trying to model they are of no use, especially for formulating public policy. Saying their usefulness is in the relative results between outcomes is simply nonsense. Garbage in, garbage out.....

that's bullshit. when you're dealing with an unknown, all you can do is try to model it based on the closest real world data/experience you have.

when it comes to something like a viral pandemic, waiting too long means you'll probably be far too late to do anything about it.

this is what enrages me about some people. they seem to believe orgs like the WHO and CDC should be clairvoyant and should have known how things were going to turn out from the start.
 
Upvote
37 (39 / -2)

Veritas super omens

Ars Legatus Legionis
26,613
Subscriptor++
Models are a tool. In the pandemic situation a good model gives you the best information to anticipate needs. That information should inform policymakers. The proper action is still a decision of policymakers. But any model worth anything will predict that pretending the disease will magically disappear is likely NEVER going to be the best policy. The universe is very complex and to get really accurate models requires detailed understanding of the problem being addressed and a good handle on the inputs.
 
Upvote
16 (16 / 0)

jdale

Ars Legatus Legionis
18,384
Subscriptor
And remember that while any given model run shouldn't be viewed as the final word on what reality will look like, relying on a model will be a lot better than trying to set policy without any idea of what the outcome might be.

I don't know if this is logically true. If the model's accuracy is poor then it's not really providing any idea either. If the model is extremely poor then it might even lead to false conclusions that result in outcomes worse than randomness.

In the early days, one model driving major policy decisions was the IHME model. When examined for accuracy, for 70% of states (at the time) reality fell outside the 95% CI. https://www.sydney.edu.au/news-opinion/ ... model.html. I'm not sure that's enough accuracy to say relying on it is a lot better than nothing at all. Imagine a default position of wait and see (no modeling or movement of resources). Then IHME model comes along and says state A is going to experience an outbreak so state B sends resources. Because the model is inaccurate, it's actually state B that experiences an outbreak so the resources need to be shipped back. This is all hypothetical obviously, but in that situation a wait and see approach would've been better than trusting a flawed model. The author of the Sydney paper makes the same point. I don't think it's true that bad data is better than no data.

What you have to bear in mind is that all decisions are based on models. All of them. Formal models have the advantage of forcing you to explicitly define your terms and estimate the relevant factors, which is extremely useful in thinking through how things will work. But in the absence of a formal model, when you just guess how things are going to go, that's a model too, albeit one entirely in your head based on past expectations that you have not examined closely. For a novel situation, that intuition-based model is not likely to be very accurate. In the case where the data itself is false, that's of course going to lead to wrong model results, but it's also going to deceive your mental intuition-based model. The response of simply not doing any math because you don't know how accurate your data is, is not going to lead you to truth.

Of course for both formal and simple mental models, if you are dealing with unknowns and with chance, there is an opportunity for error, so you need to keep paying attention to the situation and update as things progress.

We definitely did see models predict higher or lower numbers of cases than actually resulted. In some cases that led to acquiring more resources than were needed, most notably ventilators. But I'm not aware of any actual instances where overprovisioning a state where cases were overestimated led to underprovisioning a different state where cases were drastically underestimated.

I would also argue that although models did in some cases predict more cases than occurred for a variety of reasons, they did not lead us to make decisions that were harmful. https://meincmagazine.com/science/2020/05 ... e-thought/ is a good case in point. They were wrong, but that error had the effect of saving lives, because other factors at that point were worse than predicted.
 
Upvote
34 (34 / 0)

Veritas super omens

Ars Legatus Legionis
26,613
Subscriptor++
The fundamental problem with models (i.e. simulations) is, until they have demonstrated some degree of accuracy to the real world they are trying to model they are of no use, especially for formulating public policy. Saying their usefulness is in the relative results between outcomes is simply nonsense. Garbage in, garbage out.....

The question I haven't seen answered anywhere: Can any model accurately reproduce the results already seen? Only then would the model be useful for public policy formulation within the context and limits of that model.

All this talk about using inaccurate models reminds me of the question my Structures Professor in college posed on the day of first class. If my plane crashes and kills 10 people and your airplane crashes and kills 100 is my airplane better than yours?
Your professor should have been fired. That is a really stupid question.
 
Upvote
-4 (10 / -14)

Grey Bird

Ars Scholae Palatinae
760
Subscriptor++
The fundamental problem with models (i.e. simulations) is, until they have demonstrated some degree of accuracy to the real world they are trying to model they are of no use, especially for formulating public policy. Saying their usefulness is in the relative results between outcomes is simply nonsense. Garbage in, garbage out.....

The question I haven't seen answered anywhere: Can any model accurately reproduce the results already seen? Only then would the model be useful for public policy formulation within the context and limits of that model.

All this talk about using inaccurate models reminds me of the question my Structures Professor in college posed on the day of first class. If my plane crashes and kills 10 people and your airplane crashes and kills 100 is my airplane better than yours?

Your professor didn't ask a good question there. The seemingly simple answer is no: both planes crashed, so both are equally bad. Any answer beyond that needs more data, and the correct answer is exactly that: we need more data. How many people were on each plane? What caused the crash? Etc. Without more data, the question simply isn't a good one.
 
Upvote
27 (28 / -1)

Borderliner

Smack-Fu Master, in training
83
I wonder if a model that was heavily weighted to answering the question "how many ICU beds will be needed" was the right one to shape the UK's response to COVID-19?

It seems to me the model missed some pretty important stuff, and using a model to demonstrate that one country is about 3 weeks behind another country's trajectory and then failing to do anything substantive to alter that trajectory smells like a failure.
 
Upvote
9 (9 / 0)

Perfectly Frank

Ars Centurion
298
Subscriptor
The fundamental problem with models (i.e. simulations) is, until they have demonstrated some degree of accuracy to the real world they are trying to model they are of no use, especially for formulating public policy. Saying their usefulness is in the relative results between outcomes is simply nonsense. Garbage in, garbage out.....

The question I haven't seen answered anywhere: Can any model accurately reproduce the results already seen? Only then would the model be useful for public policy formulation within the context and limits of that model.

All this talk about using inaccurate models reminds me of the question my Structures Professor in college posed on the day of first class. If my plane crashes and kills 10 people and your airplane crashes and kills 100 is my airplane better than yours?

Your professor didn't ask a good question there. The seemingly simple answer is no. Both planes crashed, so both are equally bad. Any other answer needs more data, as the correct answer is: we need more data. how many people were on each plane? What caused the crash? etc. Without more data, the question simply isn't a good one.

Nicely put. It's legitimate for a professor to ask that question, but it doesn't seem to have taught charlie s how to think about the issue, so it wasn't a successful teaching outcome.

As jdale noted above, all decisions are based on models, so it's silly to say "The fundamental problem with models (i.e. simulations) is, until they have demonstrated some degree of accuracy to the real world they are trying to model they are of no use, especially for formulating public policy. Saying their usefulness is in the relative results between outcomes is simply nonsense. Garbage in, garbage out....."

In formulating public policy, the epidemic models have been up against other models with more influence on decision making, such as Trump's model that calling it fake news will satisfy his base, or his model that the economic implications of protecting the public would lose him re-election.
 
Upvote
24 (24 / 0)

Perfectly Frank

Ars Centurion
298
Subscriptor
I wonder if a model that was heavily weighted to answering the question "how many ICU beds will be needed" was the right one to shape the UK's response to COVID-19?

It seems to me the model missed some pretty important stuff, and using a model to demonstrate that one country is about 3 weeks behind another country's trajectory and then failing to do anything substantive to alter that trajectory smells like a failure.

As the article says, models answer different questions in different ways. The failure isn't in the model; it's in the simplistic policy-making response to the answer.

In the UK, there was already failure in years of skimping on public sector provision ("austerity") and policies of passing public works to private organisations in the expectation that they'd be cheaper and better. That meant inadequate supplies for coronavirus, and the NHS already under stress.
When the model correctly showed that exponential growth of illness would soon overwhelm hospital beds, the government's response was to panic and get private organisations to swiftly convert conference centres etc. to emergency hospital beds. The ironically miscalled "Nightingale" hospitals. Past skimping meant they didn't have the nurses or staff for these beds. Florence Nightingale would have told them to check the data first.

Testing was put out to other private organisations with a track record of failures. Because the focus was on hospital beds before testing was organised, elderly patients were pushed back to care homes without being tested. Care homes with underpaid staff who were working several jobs to survive.

The government had failed to provide adequate PPE – after all, we'd closed unprofitable industries and could always get more from.... China? ... Turkey?? .... With hospital staff lacking PPE, there were also huge shortages for care home workers. And no testing. So many deaths in care homes.

Not failures of the models, but policy failures persistently masked by government claims that they were always just doing what the science said. Next step, blame the scientists.
 
Upvote
14 (15 / -1)

Dr. Jay

Editor of Sciency Things
9,829
Ars Staff
With most of these infectious disease models, we are oversimplifying human behavior and virus transmission and replacing both with a value R that indicates the exponential spread.

Very small inaccuracies in R will lead to large cascading errors by the nature of exponential math.
That's all completely false. Where did you get that information?
 
Upvote
32 (32 / 0)

crmarvin42

Ars Praefectus
3,173
Subscriptor
An aphorism from my discipline: “all models are wrong, but some are useful.” Asking “why does that model give that outcome?” is the point of them. None of them should ever claim to be perfect predictions of the future.

Edit: bad English aided by autocorrect 🤦🏼‍♂️
Fact is, no one who understands modeling is expecting them to be perfect, or is telling anyone else to expect perfection. It's the unfamiliar (if we are being generous) or the malicious (if we are not) who claim models are supposed to be unassailable.

By design, models are a simplification of the thing being modeled. Nothing models reality as well as reality itself, but reality itself is too complicated and slow to be useful for making predictions. Therefore we make simplified analogies to reality to help us understand or predict the more complicated system, and refine them over time.

Can't remember where I read it, but in one stats book or another (probably a popular press book like Numbers Rule Your World, or something similar) there was a bit about a Chinese emperor who was unhappy with the detail/resolution of the small paper maps he had to plan from. Then he moved to large paper maps, then to a small garden outside representing the map, then to larger and larger gardens with greater and greater fidelity to his empire until his advisors finally pointed out that the best model there could be would be a 1:1 scale version of his empire, which of course is the empire itself. That's a long allegorical way of saying "we don't want perfect models, because perfect models are too expensive to use".
 
Upvote
5 (5 / 0)
An aphorism from my discipline: “all models are wrong, but some are useful.” Asking “why does that model give that outcome?” is the point of them. None of them should ever claim to be perfect predictions of the future.

Edit: bad English aided by autocorrect 🤦🏼‍♂️
This is a misinterpretation of George Box, as the quote is not about the model being wrong. Rather, it is about asking "Is the model illuminating and useful?".

In terms of John's article, this Dilbert strip sums it all up:
[Dilbert strip: "Noble Bad Data"]
 
Upvote
2 (2 / 0)

real mikeb_60

Ars Tribunus Angusticlavius
13,104
Subscriptor
The fundamental problem with models (i.e. simulations) is, until they have demonstrated some degree of accuracy to the real world they are trying to model they are of no use, especially for formulating public policy. Saying their usefulness is in the relative results between outcomes is simply nonsense. Garbage in, garbage out.....

The question I haven't seen answered anywhere: Can any model >or group of models< accurately reproduce the results already seen?

All this talk about using inaccurate models reminds me of the question my Structures Professor in college posed on the day of first class. If my plane crashes and kills 10 people and your airplane crashes and kills 100 is my airplane better than yours?
Your airplanes are based on models. In the old days, a small model would be placed in a wind tunnel and its reactions would be measured; the forces could then be scaled up to predict how the real thing would work. Most of the time, adjustments were then needed to deal with things in the real world that were not seen in the model tests. Same thing still happens with computer-based models; we have enough experience and the models have been adjusted enough, now, that the plane will mostly work when it's built. Still need test flights, though, including operation under conditions that are well beyond "normal" as defined in the models.

Epidemiological and other models of natural systems, also, can be rough or refined, and if something new comes along it's hard to choose the right model and figure out what needs to be tweaked right away. Still, if you know what a model was designed for you can make an informed decision about how useful it might be, depending on how close its assumptions are to the going situation. Also, publishing stuff for other scientists can uncover errors: see discussion here of the Imperial model for Covid.

Unfortunately, outside of the scientific realm, snap judgements are the order of the day. What plays well on the evening news and grabs clicks is seldom: here's a model, and here's the confidence interval based on what we know and think we know. It's *millions dead* or *hundreds dead* or *model busted*. Meaning the model is good as long as it provides useful clickbait and soundbites, and as soon as Real Life intrudes it's busted and has to be thrown away. Actually, it seldom is busted; more often, something we didn't know or interpret properly showed up or, as usual, it's more complicated than our first model was able to deal with.
 
Upvote
5 (5 / 0)

real mikeb_60

Ars Tribunus Angusticlavius
13,104
Subscriptor
You scientists and your fancy "models".

All you need to do is use Excel to fit a cubic spline to the data and, boom, problem solved. It even shows a future negative infection rate! The dead will rise, clippy said so!
You don't even need Excel (I use LibreOffice) for that. A Sharpie on a board in a press conference will do. More Clikz!
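For what it's worth, the joke is easy to reproduce (with a plain cubic polynomial rather than a spline, and made-up case numbers, but the punchline is the same):

```python
# Fit a cubic polynomial to a rising-then-flattening case curve and extrapolate:
# the "model" confidently projects negative daily cases a few weeks out.
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(30)
cases = 1000 / (1 + np.exp(-(days - 15) / 4)) + rng.normal(0, 20, size=days.size)

coeffs = np.polyfit(days, cases, deg=3)                 # the infamous cubic fit
future = np.polyval(coeffs, np.arange(30, 75))
print(f"lowest projected daily cases by day 75: {future.min():,.0f}")   # well below zero
```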
 
Upvote
8 (8 / 0)