YouTube: https://youtube.com/watch?v=TqzicvDVcKg

Statistics

View count: 162,149
Likes: 8,009
Comments: 722
Duration: 06:07
Uploaded: 2020-05-06
Last sync: 2024-03-09 20:30

Citation

Citation formatting is not guaranteed to be accurate.
MLA Full: "Why It's Good for COVID-19 Models to Be Wrong." YouTube, uploaded by SciShow, 6 May 2020, www.youtube.com/watch?v=TqzicvDVcKg.
MLA Inline: (SciShow, 2020)
APA Full: SciShow. (2020, May 6). Why It's Good for COVID-19 Models to Be Wrong [Video]. YouTube. https://youtube.com/watch?v=TqzicvDVcKg
APA Inline: (SciShow, 2020)
Chicago Full: SciShow. "Why It's Good for COVID-19 Models to Be Wrong." YouTube video, 06:07. May 6, 2020. https://youtube.com/watch?v=TqzicvDVcKg.
As we react to the predictions that epidemiological models make, changing the ways we act and go about our lives, those estimates can appear totally off. But if a model’s predictions end up being wrong, that might mean it's done exactly the job it was supposed to.

COVID-19 News & Updates: https://www.youtube.com/playlist?list=PLsNB4peY6C6IQediwz2GzMTNvm_dMzr47

Hosted by: Hank Green

SciShow has a spinoff podcast! It's called SciShow Tangents. Check it out at http://www.scishowtangents.org
----------
Support SciShow by becoming a patron on Patreon: https://www.patreon.com/scishow
----------
Huge thanks go to the following Patreon supporters for helping us keep SciShow free for everyone forever:

Kevin Bealer, Jacob, Katie Marie Magnone, D.A. Noe, Charles Southerland, Eric Jensen, Christopher R Boucher, Alex Hackman, Matt Curls, Adam Brainard, Scott Satovsky Jr, Sam Buck, Ron Kakar, Chris Peters, Kevin Carpentier, Patrick D. Ashmore, Piya Shedden, Sam Lutfi, Charles George, Christoph Schwanke, Greg

----------
Looking for SciShow elsewhere on the internet?
Facebook: http://www.facebook.com/scishow
Twitter: http://www.twitter.com/scishow
Tumblr: http://scishow.tumblr.com
Instagram: http://instagram.com/thescishow
----------
Sources:
https://www.nytimes.com/2020/03/17/world/europe/coronavirus-imperial-college-johnson.html
https://www.ft.com/content/67e6a4ee-3d05-43bc-ba03-e239799fa6ab
https://www.ons.gov.uk/peoplepopulationandcommunity/birthsdeathsandmarriages/deaths/bulletins/deathsregisteredweeklyinenglandandwalesprovisional/latest
http://www-sop.inria.fr/members/Ian.Jermyn/philosophy/writings/Boxonmaths.pdf
https://www.nature.com/articles/d41586-020-01003-6
https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-COVID19-NPI-modelling-16-03-2020.pdf
https://covid19.healthdata.org/united-states-of-america
https://wwwnc.cdc.gov/eid/article/26/7/20-0282_article
[♪ INTRO]

This video was filmed on April 28th. For our most recent episodes on COVID-19, check out the playlist in the description.

The COVID-19 pandemic has been dominated by numbers, especially the numbers of new cases and mortalities every day. But what's just as important as those numbers is how they're changing. Predictions on how they'll change over time depend on epidemiological modeling, which aims to mathematically describe how diseases spread.

And this is what leaders are using to figure out how drastic the effects of the pandemic could be and what measures we need to take. Unfortunately, these models aren't simple and they don't always give clear-cut answers, which has led to some confusion about how to interpret them. On top of that, even when a model does give a clear estimate, it often ends up being wrong weeks or months later, because our response to the model can change the course we're on.

So if you see a model that overestimated something, it doesn't necessarily mean we overreacted. It often means we did exactly the right thing. One study that got a lot of attention recently came out of Imperial College London in March, and it predicted that, without drastic measures, there could be up to half a million fatalities in the U.K.

It had a huge impact, and even shaped the U.K.'s response to the pandemic, but a few days later, the lead author told the British Parliament that he now expected only 20,000 fatalities.

Now, any number bigger than zero is bad news, and honestly, talking about fatalities at all these days can be really difficult, since we're still right in the middle of this, and these are real people. So forgive us if we sound a bit clinical here. But as bad as 20,000 deaths is, it's one twenty-fifth of half a million.

That made some people think the model was unreliable or wrong, but in reality, both numbers came out of the same model. It's just that the half-a-million figure assumed things continued as they were, while the 20,000 figure factored in the effects of the new protective measures. In fact, those circumstances, and the predictions, are still changing, because models are constantly being updated to reflect new information.

So, to quote the statistician George E. P. Box: “All models are wrong, but some are useful.” And that's because of how they're designed. The simplest epidemiological models have three groups of people: the “Susceptible,” who haven't caught the disease at all; the “Infected,” who have it; and the “Removed,” who already had it and either recovered or didn't survive. For that last group, the model assumes that once someone has stopped being Infected, they can't catch the virus again or give it to anyone else, although technically, we don't know for sure that that's true for COVID-19. Together, these form what's called an SIR model.

SIR models start with some number of people in each group, based on real-world data, and use computer simulations to step ahead in time. At each step, the computer simulates some number of Susceptibles catching the disease from the Infected. Meanwhile, some of the Infected become Removed if they've had the disease for long enough, either by recovering or by dying.
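To make that concrete, here's a minimal sketch of that stepping loop in Python. This isn't any research group's actual model; the population size, infectious period, and starting conditions are made-up values for illustration.

```python
# A minimal discrete-time SIR sketch (illustrative only, not a real
# research model). All parameter values below are assumptions.

def simulate_sir(population, initial_infected, r0, days_infectious, num_days):
    """Step an SIR model forward one day at a time."""
    s = population - initial_infected  # Susceptible
    i = float(initial_infected)        # Infected
    r = 0.0                            # Removed (recovered or died)

    gamma = 1.0 / days_infectious      # daily rate of leaving the Infected group
    beta = r0 * gamma                  # daily transmission rate implied by R0

    history = [(s, i, r)]
    for _ in range(num_days):
        new_infections = beta * s * i / population  # Susceptible -> Infected
        new_removals = gamma * i                    # Infected -> Removed
        s -= new_infections
        i += new_infections - new_removals
        r += new_removals
        history.append((s, i, r))
    return history

# Example run: R0 = 2.4 (the early Imperial College estimate discussed in
# this video), one initial case in a hypothetical population of one million.
history = simulate_sir(1_000_000, 1, r0=2.4, days_infectious=7, num_days=365)
peak = max(i for _, i, _ in history)
ever_infected = history[-1][1] + history[-1][2]
print(f"Peak simultaneous infections: {peak:,.0f}")
print(f"Total ever infected: {ever_infected:,.0f}")
```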

As the number of people in each group changes, that tells you how many people catch the disease over time. These models can be really helpful for making predictions, but they also come with challenges. For instance, epidemiologists have to determine, on average, how many susceptible people can catch the disease from one infected person.

You might have heard of this number; it's called R0. When Imperial College London did that first study that predicted half a million fatalities, researchers estimated that R0 was 2.4, based on data from cases in Wuhan, China. That's lower than the latest estimates, though, and it's just one reason modeling is so hard.

Models depend on inputs like this, and those numbers aren't perfectly known. Even where scientists have data, that data more often looks like a range of possible answers instead of one precise value. And since the numbers that go into the model are uncertain, the predictions are also uncertain, but that doesn't mean they can't be useful.

For instance, you can run the model with a few different values, and come up with a range of possible outcomes, from best case to worst. But narrowing down values like R0 isn't all there is to it. To make a model that predicts the complexity of the real world, you also have to add even more inputs that take into account things like how different groups interact and how the people in them vary by age and location.
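As a rough illustration of that best-case-to-worst-case idea, you could rerun the sketch above with a few different R0 values; the specific values here are arbitrary assumptions, not published estimates.

```python
# Sweep the uncertain R0 input across an assumed plausible range,
# reusing the simulate_sir sketch from above.
for r0 in (2.0, 2.4, 3.0):
    history = simulate_sir(1_000_000, 1, r0=r0, days_infectious=7, num_days=365)
    total = history[-1][1] + history[-1][2]
    print(f"R0 = {r0}: about {total:,.0f} people ever infected")
```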

So, basically, models get messy fast. Over time, as we gather more data, we can compare the actual number of new infections with the predicted numbers to pin down those inputs, and that gives us even better predictions of what's to come, even if they don't agree with the old ones. The models can also take into account our current behavior, like how well we're staying at home and social distancing.
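One simple way a model might represent a behavior change like that, sketched here as an assumption rather than anyone's actual method, is to cut the transmission rate on a chosen "lockdown day":

```python
# SIR sketch with a crude "lockdown": transmission drops by some fraction
# after a chosen day. The day and the 60% contact reduction are invented
# numbers for illustration.

def simulate_with_lockdown(population, initial_infected, r0, days_infectious,
                           num_days, lockdown_day=None, contact_reduction=0.0):
    s = population - initial_infected
    i = float(initial_infected)
    r = 0.0
    gamma = 1.0 / days_infectious
    history = [(s, i, r)]
    for day in range(num_days):
        beta = r0 * gamma
        if lockdown_day is not None and day >= lockdown_day:
            beta *= 1.0 - contact_reduction  # distancing lowers transmission
        new_infections = beta * s * i / population
        new_removals = gamma * i
        s -= new_infections
        i += new_infections - new_removals
        r += new_removals
        history.append((s, i, r))
    return history

no_action = simulate_with_lockdown(1_000_000, 1, 2.4, 7, 365)
lockdown = simulate_with_lockdown(1_000_000, 1, 2.4, 7, 365,
                                  lockdown_day=60, contact_reduction=0.6)
for label, hist in (("No action", no_action), ("Lockdown at day 60", lockdown)):
    print(f"{label}: {hist[-1][1] + hist[-1][2]:,.0f} ever infected")
```

With a 60 percent contact reduction, the effective R0 in this sketch drops from 2.4 to about 0.96, below one, which is why the simulated outbreak fizzles out instead of sweeping through the whole population.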

That, of course, changes R0, since social distancing leads to fewer infections, and that can change the outcome of the model. For instance, the Imperial College model predicted two million fatalities in the U.S. if no drastic action was taken.

But at that point, lockdowns hadn't begun, and social distancing hadn't been factored into that number. Now, at the time of filming, a new model from the University of Washington is expecting a figure of around 74,000. Which, again, is a big and bad number, but at least a smaller one.

That difference isn't because the Imperial model was totally wrong. It's largely because the reaction to the early predictions led us to change our behaviors, and the latest predictions reflect that. In a way, this is great.

We can actually see ourselves changing our future for the better. As we change the ways we act and go about our lives, scientists change their models to better reflect the new, safer path that we are on. So if a model's predictions end up being wrong, that could mean it has done exactly the job it was supposed to.

Thanks for watching this episode of SciShow News! And thank you especially to our patrons who make it possible for us to cover topics like this each week, especially in a world that is changing so quickly. We couldn't do it without your support.

And if you're not a patron but you'd like to support what we do, you can find out more at patreon.com/SciShow. [♪ OUTRO]