Not all models are the same, and not all of them are used to answer the same questions.
What's a non-scientist to do while that consensus is still emerging? When you see different numbers produced by the models, focus on the important questions: what were those models looking at, how do the conditions they were testing differ, and what assumptions went into them? It's not enough to know they're different numbers—it's important to have a sense of why they're different.
And remember that while any given model run shouldn't be viewed as the final word on what reality will look like, relying on a model will be a lot better than trying to set policy without any idea of what the outcome might be.
The above is the polite model for, "Unless you are a scientist/researcher involved in the relevant model creation, please keep your unknowledgeable and untrained opinion to yourself and follow directions."
Absolutely excellent advice, and a good article. Thanks.
Consider your terminology. 'Model' is opaque and suggests a small but perfect version of something. Just say 'spreadsheet'.
Consider what goes into your spreadsheet: What you 'know', and assumptions about how what you 'know' might react to create the future. Just say 'psychic reading'.
Consider that what you 'know' is only a fraction of what you don't 'know' and even that is likely to be wrong.
Boil that down to a headline: 'Three Million to Die'
Now, remember that you (you!) cannot produce an accurate budget for yourself from now until your death. You don't even know enough about yourself to predict your own future.
See, models are easy.
Sadly, I somehow doubt the large media outlets will pay attention.
Models have become yet another aspect of life embroiled in political controversy. And it's fair for the public to ask why different models—or even the same model run a few days apart—can produce dramatically different estimates of future fatalities.
What's much less fair is that the models and the scientists behind them have come under attack by people who don't understand why these different numbers are an expected outcome of the modeling process. And it's downright unfortunate that these attacks are often politically motivated—driven by a focus on whether the numbers are convenient from a partisan perspective.
Sadly, I somehow doubt the large media outlets will pay attention.
No, because people will not pay attention. The average person is an impatient BABY who wants an easy, simple answer RIGHT NOW, preferably one which says they don't have to do anything.
They see things like "based on new data we've revised our models" and interpret that as "WELL YOU WERE WRONG BEFORE SO YOU ARE ALWAYS WRONG FOREVER AND EVER DON'T TELL ME WHAT TO DO!"
We're beyond saving.
The fundamental problem with models (i.e. simulations) is that, until they have demonstrated some degree of accuracy against the real world they are trying to model, they are of no use, especially for formulating public policy. Saying their usefulness is in the relative results between outcomes is simply nonsense. Garbage in, garbage out.
The question I haven't seen answered anywhere: Can any model accurately reproduce the results already seen? Only then would the model be useful for public policy formulation, within the context and limits of that model.
All this talk about using inaccurate models reminds me of the question my Structures Professor in college posed on the first day of class. If my plane crashes and kills 10 people and your airplane crashes and kills 100, is my airplane better than yours?
And remember that while any given model run shouldn't be viewed as the final word on what reality will look like, relying on a model will be a lot better than trying to set policy without any idea of what the outcome might be.
I don't know if this is logically true. If the model's accuracy is poor, then it's not really providing any idea either. If the model is extremely poor, then it might even lead to false conclusions that result in outcomes worse than randomness.
In the early days, one model driving major policy decisions was the IHME model. When examined for accuracy, for 70% of states (at the time) reality fell outside the 95% CI. https://www.sydney.edu.au/news-opinion/ ... model.html. I'm not sure that's enough accuracy to say relying on it is a lot better than nothing at all. Imagine a default position of wait and see (no modeling or movement of resources). Then the IHME model comes along and says state A is going to experience an outbreak, so state B sends resources. Because the model is inaccurate, it's actually state B that experiences the outbreak, so the resources need to be shipped back. This is all hypothetical, obviously, but in that situation a wait-and-see approach would've been better than trusting a flawed model. The author of the Sydney paper makes the same point. I don't think it's true that bad data is better than no data.
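The coverage check behind that 70% figure is straightforward to run yourself. A minimal sketch: count how often the observed value actually lands inside a claimed 95% prediction interval. The per-state triples below are invented for illustration, not IHME's actual numbers.

```python
# Empirical coverage of claimed 95% prediction intervals.
# Each entry is (lower bound, upper bound, observed value) -- hypothetical data.
intervals = [
    (50, 120, 180),   # observed above the interval
    (200, 400, 350),  # inside
    (10, 40, 90),     # above
    (100, 220, 95),   # below
    (30, 60, 45),     # inside
]

# A 95% interval should contain the observation about 95% of the time.
covered = sum(lo <= obs <= hi for lo, hi, obs in intervals)
coverage = covered / len(intervals)
print(f"empirical coverage: {coverage:.0%}")  # here, well short of the claimed 95%
```

If a model's intervals claim 95% coverage but the observed values fall inside them far less often, the intervals are miscalibrated, which is exactly the complaint in the Sydney analysis.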
All this talk about using inaccurate models reminds me of the question my Structures Professor in college posed on the first day of class. If my plane crashes and kills 10 people and your airplane crashes and kills 100, is my airplane better than yours?

Your professor should have been fired. That is a really stupid question.
All this talk about using inaccurate models reminds me of the question my Structures Professor in college posed on the first day of class. If my plane crashes and kills 10 people and your airplane crashes and kills 100, is my airplane better than yours?
Your professor didn't ask a good question there. The seemingly simple answer is no: both planes crashed, so both are equally bad. Any other answer needs more data, because the correct answer is "we need more data." How many people were on each plane? What caused the crash? And so on. Without more data, the question simply isn't a good one.
I wonder if a model that was heavily weighted to answering the question "how many ICU beds will be needed" was the right one to shape the UK's response to COVID-19?
It seems to me the model missed some pretty important stuff, and using a model to demonstrate that one country is about 3 weeks behind another country's trajectory and then failing to do anything substantive to alter that trajectory smells like a failure.
With most of these infectious disease models, we are oversimplifying human behavior and virus transmission and replacing both with a value R that indicates the exponential spread. Very small inaccuracies in R will lead to large cascading errors by the nature of exponential math.

That's all completely false. Where did you get that information?
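Whatever one makes of collapsing behavior into a single number, the compounding claim itself is easy to check. A minimal sketch, with made-up growth factors and generation counts chosen purely for illustration:

```python
# How sensitive is cumulative spread to the per-generation growth factor r?
def total_infections(r, generations, seed=1):
    """Sum of a geometric series: seed * (1 + r + r^2 + ... + r^generations)."""
    infected = seed
    total = seed
    for _ in range(generations):
        infected *= r   # each generation multiplies the current case count by r
        total += infected
    return total

# Modest differences in r produce wildly different totals after 30 generations.
for r in (1.1, 1.2, 1.3):
    print(f"r = {r}: {total_infections(r, 30):,.0f} cumulative infections")
```

Going from r = 1.1 to r = 1.2 multiplies the 30-generation total several times over, which is why small estimation errors in R swamp everything else in these forecasts.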
Twitter and FB should have been shut the fuck down years ago.
An aphorism from my discipline: “all models are wrong, but some are useful.” Asking “why does that model give that outcome?” is the point of them. None of them should ever claim to be perfect predictions of the future.
Edit: bad English aided by autocorrect!

Fact is, no one who understands modeling is expecting models to be perfect, or telling anyone else to expect perfection. It's the unfamiliar (if we are being generous) or the malicious (if we are not) who claim models are supposed to be unassailable.
This is a misinterpretation of George Box, as the quote is not about the model being wrong. Rather, it is about asking "Is the model illuminating and useful?"
The question I haven't seen answered anywhere: Can any model >or group of models< accurately reproduce the results already seen?
All this talk about using inaccurate models reminds me of the question my Structures Professor in college posed on the first day of class. If my plane crashes and kills 10 people and your airplane crashes and kills 100, is my airplane better than yours?

Your airplanes are based on models. In the old days, a small model would be placed in a wind tunnel and its reactions would be measured; the forces could then be scaled up to predict how the real thing would work. Most of the time, adjustments were then needed to deal with things in the real world that were not seen in the model tests. The same thing still happens with computer-based models; we now have enough experience, and the models have been adjusted enough, that the plane will mostly work when it's built. Test flights are still needed, though, including operation under conditions well beyond "normal" as defined in the models.
You scientists and your fancy "models". All you need to do is use Excel to fit a cubic spline to the data and, boom, problem solved. It even shows a future negative infection rate! The dead will rise, Clippy said so!

You don't even need Excel (I use LibreOffice) for that. A Sharpie on a board in a press conference will do. More Clikz!
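The cubic-fit joke checks out numerically, by the way. Fit an exact cubic through a few days of slowing case counts and extrapolate it forward, and the polynomial happily forecasts negative new cases. The day labels, case counts, and `lagrange` helper below are all made up for illustration.

```python
# Fit the unique cubic through four daily case counts (Lagrange interpolation),
# then "forecast" by extrapolating it past the data -- spreadsheet curve-fitting.
def lagrange(xs, ys, t):
    """Value at t of the unique degree-(n-1) polynomial through points (xs, ys)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (t - xj) / (xi - xj)  # Lagrange basis factor
        total += term
    return total

days = [0, 1, 2, 3]
cases = [100, 200, 270, 290]   # growth is visibly slowing

print(lagrange(days, cases, 6))  # -> -150.0: the cubic predicts negative new cases
```

The polynomial reproduces the observed points exactly, then swings wildly outside them. That is the whole joke: goodness of fit on past data says nothing about extrapolation.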