scishow
Could Scientists Predict the Next Political Crisis?
YouTube: | https://youtube.com/watch?v=UV42LFnus4c |
Previous: | Why Does Scratching Make Itching Worse? |
Next: | Killer Bees: The Real Zom-bee Apocalypse |
Categories
Statistics
View count: | 145,789 |
Likes: | 5,053 |
Comments: | 632 |
Duration: | 11:14 |
Uploaded: | 2018-08-22 |
Last sync: | 2024-10-28 22:45 |
Citation
Citation formatting is not guaranteed to be accurate.
MLA Full: | "Could Scientists Predict the Next Political Crisis?" YouTube, uploaded by SciShow, 22 August 2018, www.youtube.com/watch?v=UV42LFnus4c. |
MLA Inline: | (SciShow, 2018) |
APA Full: | SciShow. (2018, August 22). Could Scientists Predict the Next Political Crisis? [Video]. YouTube. https://youtube.com/watch?v=UV42LFnus4c |
APA Inline: | (SciShow, 2018) |
Chicago Full: | SciShow, "Could Scientists Predict the Next Political Crisis?", August 22, 2018, YouTube, 11:14, https://youtube.com/watch?v=UV42LFnus4c. |
Thanks to modern science and technology, we can predict what the weather will be like in 5 days, but it’s still a bit more challenging to predict what will happen to us and our societies.
Hosted by: Hank Green
Head to https://scishowfinds.com/ for hand selected artifacts of the universe!
----------
Support SciShow by becoming a patron on Patreon: https://www.patreon.com/scishow
----------
Dooblydoo thanks go to the following Patreon supporters: Lazarus G, Sam Lutfi, D.A. Noe, سلطان الخليفي, Piya Shedden, KatieMarie Magnone, Scott Satovsky Jr, Charles Southerland, Patrick D. Ashmore, Tim Curwick, charles george, Kevin Bealer, Chris Peters
----------
Looking for SciShow elsewhere on the internet?
Facebook: http://www.facebook.com/scishow
Twitter: http://www.twitter.com/scishow
Tumblr: http://scishow.tumblr.com
Instagram: http://instagram.com/thescishow
----------
Sources:
http://science.sciencemag.org/content/355/6324
http://science.sciencemag.org/content/sci/355/6324/481.full.pdf tetlock
https://hbr.org/2015/10/question-certainty
https://www.economist.com/books-and-arts/2015/09/26/unclouded-vision
http://science.sciencemag.org/content/sci/355/6324/474.full.pdf violence
http://freakonomics.com/2012/03/22/false-positive-science-why-we-cant-predict-the-future/
https://www.edge.org/conversation/win-at-forecasting
Superforecasting: The Art and Science of Prediction (book, by Tetlock & Gardner)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3270390/
http://www.abc.net.au/radionational/programs/scienceshow/the-power-and-limits-of-weather-models/6507508#transcript
http://newsroom.haas.berkeley.edu/harnessing-the-wisdom-of-the-crowd-to-forecast-world-events/
https://www.iarpa.gov/index.php/research-programs/ace
https://www.sas.upenn.edu/tetlock/publications
https://goodjudgment.com/science.html
http://mentalfloss.com/article/12997/why-cant-we-predict-earthquakes
http://www.bbc.co.uk/guides/zxyd2p3
http://www.planet-science.com/categories/over-11s/natural-world/2011/03/can-we-predict-earthquakes.aspx
https://www.usgs.gov/faqs/can-you-predict-earthquakes?qt-news_science_products=0#qt-news_science_products
For centuries, people have been trying to predict the future.
The Greeks had their oracles; the Romans had their soothsayers. And today we have… well, for some things, we have scientists.
Like, thanks to the laws of physics, I can tell you with near certainty when the sun will rise tomorrow, if you give me your exact location. Technically, it’s still not 100%, since some weird freak planetary collision could nudge Earth out of its typical orbit. But it’s a very reliable guess.
Seriously. Don’t lose sleep over that. We’re not getting hit by a planet.
As scientists have learned more about how the world works — and we’ve started feeding computers a lot of data — we’ve gotten better and better at making predictions about the future. In a way, it’s surprising how much we can predict. And yet, there are still these gaping holes, especially when it comes to human behavior.
Remember, for example, all of 2016? Psychologists, though, are actually making headway on figuring out how we can learn to be better predictors. So today, on SciShow, here’s what we can and can’t predict with much accuracy — and how science is moving the art of prediction forward. [♪ INTRO].
Perhaps the best success story — and cautionary tale — for prediction science is the weather. The forecast used to be not much more than a guess about what the next few days’ weather would hold. But by learning more about how clouds form, and how pressure interacts with temperature, meteorologists have dramatically improved their predictions in recent decades.
These days, they use complicated computer models that take into account the underlying physics. And by feeding those models reams of data from a variety of instruments all over the world, their five-day forecasts today are as accurate as three-day forecasts in 2005. That’s a huge improvement in a pretty short period of time.
And it’s not just helpful for planning your weekend cookout. Getting better with the weather also means we’ve been able to save more lives during natural disasters. But while we’ve improved, we’re still pretty bad about forecasting weather much beyond a week.
That’s because of inherent unpredictability in the way something like a cloud forms. We can know every detail of it, but its behavior still depends on its exact initial conditions. And according to chaos theory, tiny changes in those conditions, too small to detect, will change the outcome.
This is also known as the butterfly effect. So we can keep learning more and improving our measurements and models, but most of our progress will be incremental. There’s a limit to how accurate long-range forecasts can get.
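The butterfly effect is easy to demonstrate numerically. Here's a minimal sketch using the logistic map, a textbook chaotic system (not an actual weather model): two trajectories that start a billionth apart end up completely different within a few dozen steps.

```python
# Sensitivity to initial conditions: iterate x -> r * x * (1 - x) with
# r = 4, a standard chaotic regime of the logistic map.

def logistic_map(x0, r=4.0, steps=50):
    """Return the trajectory of the logistic map from starting value x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.400000000)
b = logistic_map(0.400000001)  # starts just 1e-9 away

# The tiny initial gap grows until the two "forecasts" disagree completely.
for step in (0, 10, 30, 50):
    print(step, abs(a[step] - b[step]))
```

This is exactly why adding more weather stations only buys incremental improvement: the measurement error can shrink, but it can never reach zero.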
All things considered, though, weather prediction is pretty darn good, as long as you go in with the right expectations. On the other hand, there are certain things that you’d think we’d be able to predict that we just haven’t been able to crack. Like earthquakes.
They’re largely a natural phenomenon, which, like weather, you’d think we’d be able to understand the basics of and then load in a bunch of data to model. But so far, we can’t — at least, not in the same way. We do know a lot about them, like where they’re most likely, based on fault lines and historical data.
But seismologists haven’t yet found a signal that reliably precedes a quake that we could follow for advance warning. You can detect rumbling just prior to one, but that’s not enough time to evacuate an entire city. We just don't understand the factors that determine how two tectonic plates will interact with each other.
So, the timing and magnitude of any single specific earthquake remains a mystery. Which is obviously bad for trying to keep people safe. Maybe, one day, seismologists will discover new basic phenomena that will allow us to forecast earthquakes with much better foresight.
But it’s also possible that we won’t. And without that foundation, earthquakes will remain an enigma that we can only loosely estimate. It’s a reminder that predicting the future depends on mountains of carefully collected data — which is great, but also sometimes hard to come by.
The vast majority of things that we have real trouble predicting, though, aren’t based on the physical world. There, at least, we can partially model things to get some unbiased idea of the probabilities. Instead, the real mystery is... you, and me, and us.
Elections, stock markets, political uprisings — things that hinge on people and societies — these are much more challenging, which is not that big of a surprise. The classic approach to these questions is to use experts. After all, if someone knows a lot about a specific country, they should be able to say with more accuracy whether a foreign leader will make a certain trade deal, right?
Well, it turns out that experts aren’t very good at economic and political forecasts. In one landmark experiment that collected these kinds of predictions from more than 280 experts over nearly two decades, the so-called ‘experts’ were only a tad better than random guessing. We’ll get back to why scientists think experts aren’t very good — it has a lot to do with how they think and common psychological biases we all fall prey to.
But don’t always assume that knowledge is power when it comes to the future. At least, when people are involved. The other main way to tackle these sorts of questions is to use data.
For something like an election, you can use polling data — and the more there is, the better. And if you know a bit about the quality of each poll, you can weight them accordingly and aggregate them together to get your best guess. This isn’t foolproof, but this type of analytical approach is usually much better than asking a single expert.
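That weighting-and-aggregating step can be sketched in a few lines. The poll numbers and weights below are made up purely for illustration:

```python
# Weighted poll aggregation: higher-quality polls (larger weight) pull the
# combined estimate more. All figures here are hypothetical.

polls = [
    # (candidate_share_pct, weight, e.g. from sample size / pollster rating)
    (52.0, 3.0),   # large, well-rated poll
    (48.5, 1.0),   # small online poll
    (51.0, 2.0),   # mid-size phone poll
]

total_weight = sum(w for _, w in polls)
estimate = sum(share * w for share, w in polls) / total_weight
print(f"aggregated estimate: {estimate:.1f}%")
```

The big poll-aggregation sites do essentially this, plus corrections for each pollster's historical house effects.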
Depending on the question you’re trying to answer, you can even use artificial intelligence and machine learning to make forecasts, although this is work in progress. In machine learning, computers use algorithms, which are basically just a set of rules, to teach themselves over time. The advantage here is if you don’t actually know how something works — like, say, what causes political violence — you can feed a computer a bunch of data, and see if it can find any patterns for you.
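As a toy sketch of the "feed it data, let it find patterns" idea, here's a one-nearest-neighbor classifier in plain Python: it labels a new situation by finding the most similar past case. The features and labels are invented; real models of political violence are far more elaborate.

```python
# Toy pattern-finding: 1-nearest-neighbor classification over made-up
# historical cases. Each case is ([unrest_index, econ_decline], outcome).
import math

history = [
    ([0.9, 0.8], "violence"),
    ([0.2, 0.1], "calm"),
    ([0.8, 0.6], "violence"),
    ([0.3, 0.2], "calm"),
]

def predict(features):
    """Return the outcome of the closest past case (Euclidean distance)."""
    def dist(case):
        return math.dist(features, case[0])
    return min(history, key=dist)[1]

print(predict([0.85, 0.7]))  # lands near the "violence" examples
```

The appeal is that no one has to write down a theory of why violence happens; the drawback is that the model is only as good as the historical data it's shown.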
So far, this method hasn’t pulled off any notable victories — at least, in those tricky human situations — but it’s something to keep tabs on in that future we’ve been talking about. It’s all pretty new, but as we keep trying out this technology and improving it, we’ll hopefully make some progress. Okay, so we know that experts aren’t as good as we’d think they’d be.
But the whole story is a little more complicated. Because one of the things that that long-running study found was that some experts are better than others. Experts who believed in big, grand ideas — like the idea that all governmental regulation is bad, or that the environment is doomed — generally didn’t do so well.
Those who were less wedded to these kinds of concepts, and were willing to change their opinions, did far better. This suggests that personality and styles of thinking are important for our ability to make good predictions. And that perhaps, if you’re willing, you can learn to get better at it, too.
The strongest case for this comes from a remarkable project sponsored by a US agency called the Intelligence Advanced Research Projects Activity, or IARPA. It’s kind of like DARPA, but for the intelligence community.
Back in 2011, IARPA realized that even well-trained intelligence officers weren’t so hot at predicting events, and that maybe they could find a better way. So they set up a four-year forecasting tournament for people to predict political or economic outcomes. It was a contest, but also an experiment.
Different teams tried out different ideas for coming up with a strategy to produce the most accurate predictions. And one team, called the Good Judgment Project, blew the other four out of the water — so much that the government stopped funding the others just to focus on the winner. The Good Judgment Project was actually led by the same psychologist behind the other study showing that experts are, on average, poor predictors.
But what he also realized was that a small number of people are remarkably good at answering certain questions — stuff like, ‘Will Serbia join the EU in the next six months?’ or, ‘Will this politician resign by March?’ It wasn’t just luck, and it wasn’t just that these people were smart or well-versed on international affairs, either. The participants were normal folks who volunteered; they had no particular expertise. And they outperformed intelligence analysts with access to classified material.
Which sounds pretty humbling for those intelligence analysts. What set these so-called superforecasters apart were certain shared personality traits, like an openness to consider new ideas, and a willingness to revise them in the face of new facts. They were intelligent, but not geniuses, and while they were usually comfortable with numbers, they weren’t using statistics or models to arrive at their answers.
Instead, the superforecasters were thinking through the problems probabilistically. In other words, they carefully assessed the likelihood of various things, and factored everything into their decision. This prevented them from being susceptible to a lot of biases, including our natural tendency to make quick, intuitive decisions by falling back on heuristics, or shortcuts.
For instance, forecasters who read a lot about terrorism, even in an effort to become more informed, might begin to think terrorism is more frequent than it actually is, simply because they’re exposed to it a lot. This is known as the availability heuristic. But by becoming aware of these pitfalls, and sticking to probabilistic thinking, these superforecasters could avoid it.
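That kind of probabilistic revision resembles Bayesian updating: start from a base-rate estimate, then shift it by how diagnostic each new piece of evidence is. The numbers in this sketch are hypothetical:

```python
# Bayesian updating: revise a probability estimate as evidence arrives,
# instead of jumping straight to a yes/no answer.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(event | evidence) via Bayes' rule."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

p = 0.30  # initial estimate that the politician resigns by March
# Hypothetical new report: this kind of story precedes 80% of actual
# resignations, but also appears 20% of the time when nothing happens.
p = update(p, 0.8, 0.2)
print(f"revised estimate: {p:.2f}")
```

Note the estimate moves, but doesn't leap to certainty: one moderately diagnostic report takes 30% to about 63%, not to 100%.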
Fortunately for us mere mortals, the Good Judgment Project was able to develop a short training program that can improve accuracy by 10% over a year. In it, participants learn about cognitive biases and are encouraged to break down big problems into smaller parts so they can think more easily about them. They’re also taught to think about problems from all sorts of perspectives.
They learned not to overreact or underreact to new information. And, most importantly, to learn from their mistakes. This is where most experts don’t put in the work.
But if you never pause to think about where you went wrong, you can’t learn how to be better. Which is true for a lot of things, come to think about it. To arrive at its winning predictions, the Good Judgment Project team also took advantage of the wisdom of the crowd, but added a tweak to traditional methods.
Basically, if you average everyone’s predictions, they’re usually fairly close. But this team didn’t stop there. Instead, they then gave extra weight to their group of 40 or so superforecasters, and finally, adjusted that number up or down a bit further, in what is called extremizing.
This technique worked really well. It’s still not perfect, of course, but it’s proof that sometimes, people can be fairly good at glimpsing the future. Well, as long as you ask a lot of them, and do some fancy math to bias things toward your most talented group.
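A rough sketch of that weight-then-extremize pipeline. The forecasts, weights, and extremizing exponent here are all invented for illustration, not the Good Judgment Project's actual parameters:

```python
# Crowd aggregation with extremizing: average individual probability
# forecasts (weighting the best forecasters more), then push the result
# away from 0.5 by scaling it in odds space.

def aggregate(forecasts, weights):
    """Weighted mean of probability forecasts."""
    total = sum(weights)
    return sum(p * w for p, w in zip(forecasts, weights)) / total

def extremize(p, a=2.5):
    """Raise the odds p/(1-p) to the power a > 1, pushing p from 0.5."""
    return p**a / (p**a + (1 - p)**a)

forecasts = [0.60, 0.70, 0.55, 0.80]   # individual probability estimates
weights   = [1.0,  1.0,  1.0,  3.0]    # last entry is a "superforecaster"

raw = aggregate(forecasts, weights)
final = extremize(raw)
print(f"weighted mean: {raw:.3f}, extremized: {final:.3f}")
```

The intuition behind extremizing: each forecaster hedges toward 50% because of their private uncertainty, so when many independent people lean the same way, the crowd's true confidence is higher than its simple average suggests.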
Like with any prediction, data is still really important. This method won’t work for everything, and many people think there are very rare, but still very important events that are too hard to predict — something like 9/11. They call these black swans.
But it’s possible breaking things down and learning more will allow us to get better at these, too. Ultimately, most experts agree that the best predictions about these sorts of tough questions will come from a combination of human and machine. Really, though, the only thing we can be certain of is that we won’t be able to predict everything.
Thank you for watching this episode of SciShow! If you want to learn more about how our minds work and influence the ways we think and respond to things, we have a channel called SciShow Psychology and you can check it out over at youtube.com/scishowpsych. [♪ OUTRO ].