SciShow
Can AI Evolve?
YouTube: | https://youtube.com/watch?v=o81QwXHbH5E |
Previous: | Can Gargling Salt Water Cure a Sore Throat? |
Next: | Fish Are Way Smarter Than You Think |
Categories
Statistics
View count: | 198,422 |
Likes: | 6,855 |
Comments: | 721 |
Duration: | 10:14 |
Uploaded: | 2019-04-17 |
Last sync: | 2024-11-14 21:45 |
Citation
Citation formatting is not guaranteed to be accurate.
MLA Full: | "Can AI Evolve?" YouTube, uploaded by SciShow, 17 April 2019, www.youtube.com/watch?v=o81QwXHbH5E. |
MLA Inline: | (SciShow, 2019) |
APA Full: | SciShow. (2019, April 17). Can AI Evolve? [Video]. YouTube. https://youtube.com/watch?v=o81QwXHbH5E |
APA Inline: | (SciShow, 2019) |
Chicago Full: | SciShow, "Can AI Evolve?", April 17, 2019, YouTube, 10:14, https://youtube.com/watch?v=o81QwXHbH5E.
AI can do some pretty amazing things, but if we want it to learn on its own, we're going to have to teach AI how to evolve.
Hosted by: Olivia Gordon
SciShow has a spinoff podcast! It's called SciShow Tangents. Check it out at https://www.scishowtangents.org
----------
Support SciShow by becoming a patron on Patreon: https://www.patreon.com/scishow
----------
Huge thanks go to the following Patreon supporters for helping us keep SciShow free for everyone forever:
Adam Brainard, Greg, Alex Hackman, Sam Lutfi, D.A. Noe, الخليفي سلطان, Piya Shedden, KatieMarie Magnone, Scott Satovsky Jr, Charles Southerland, Patrick D. Ashmore, charles george, Kevin Bealer, Chris Peters
----------
Looking for SciShow elsewhere on the internet?
Facebook: http://www.facebook.com/scishow
Twitter: http://www.twitter.com/scishow
Tumblr: http://scishow.tumblr.com
Instagram: http://instagram.com/thescishow
----------
Sources:
https://www.nytimes.com/2018/12/26/science/chess-artificial-intelligence.html
https://www.wired.com/2017/01/mystery-ai-just-crushed-best-human-players-poker/
https://qz.com/636637/the-beginning-of-the-end-googles-ai-has-beaten-a-top-human-player-at-the-complex-game-of-go/
https://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/
https://www.forbes.com/sites/bruceupbin/2013/02/08/ibms-watson-gets-its-first-piece-of-business-in-healthcare/#4920c68c5402
https://medium.com/@froger_mcs/moravecs-paradox-c79bf638103f
https://books.google.com/books?hl=en&lr=&id=56mb7XuSx3QC&oi=fnd&pg=PA1&dq=hans+moravec+mind+children&ots=ijpLMTHIxG&sig=7Dv1MIrzMAbtQ4zllzD5NMd6ET0#v=onepage&q&f=false
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3368062/
https://www.nature.com/articles/nature03687
https://www.cell.com/neuron/fulltext/S0896-6273(15)00559-0
https://hackaday.com/2017/06/23/what-if-learning-new-things-made-you-forget-the-old/
https://arxiv.org/abs/1612.00796
https://rylanschaeffer.github.io/content/research/overcoming_catastrophic_forgetting/main.html
https://www.pnas.org/content/115/44/E10467.short
https://arxiv.org/abs/1611.05763
https://medium.com/explore-artificial-intelligence/an-introduction-to-recurrent-neural-networks-72c97bf0912
http://web.mst.edu/~gosavia/neural_networks_RL.pdf
https://deepmind.com/blog/alphago-zero-learning-scratch/
https://arxiv.org/abs/1611.05763
https://deepmind.com/blog/prefrontal-cortex-meta-reinforcement-learning-system/
https://arxiv.org/abs/1312.5602
https://arxiv.org/abs/1705.05363
https://www.quantamagazine.org/clever-machines-learn-how-to-be-curious-20170919/
http://robotics.sciencemag.org/content/4/26/eaau9354
https://www.quantamagazine.org/clever-machines-learn-how-to-be-curious-20170919/
[ intro ].
Artificial intelligence is really, well, intelligent. It's beaten humans at chess and poker and AlphaGo famously beat one of the world's top players at the super-complex game Go.
IBM Watson even won Jeopardy, and now a different version of it is designing personalized cancer treatments. These are tasks humans spend decades mastering, and AI is beating them left and right. Except, for as good as we are at teaching AI to do complicated tasks, we're terrible at getting it to learn even the most basic, toddler-level skills.
Like, ask AlphaGo to recognize a cat — let alone a specific person — and it will have no idea what to do. This problem is called Moravec's paradox, after one computer scientist who studied it in the 1980s. And while it seems to have a pretty straightforward solution, it's definitely not an easy one.
Because to make a program that thinks like a toddler… we'll likely have to teach AI to evolve. On the most basic level, the reason for Moravec's paradox is simple:
We don't know how to program general intelligence. We're great at getting AI to do one thing, but most toddler-level skills — like facial recognition — involve learning new things and then transferring them to other contexts. Getting computers to do that is one goal of what's called general AI.
And in 1988, Hans Moravec pointed out that there's a simple reason it's so hard: evolution. His point was that things that seem really easy to us are actually the result of thousands of years of development. So even though most kids can easily tell the difference between yellow and blue, or a friend and a stranger, those aren't actually simple skills.
They only seem basic to us because our species has spent tens of thousands of years refining them. Meanwhile, we've only been making computers for about a century, tops. So there's no way we could have figured out general AI in that time.
Then again, Moravec didn't think time was the only issue: he also thought researchers were approaching the problem the wrong way. In the '80s, developers were mainly working from the top down, just trying to copy the mental processes of fully-formed human brains.
But Moravec believed that the most successful approach would be to work from the bottom up. In other words, instead of building a complex brain from scratch, he thought we should mimic evolution. Just like in nature, we would start small, then add complexity to our AI little by little, all the while challenging these programs to adapt.
We could even study how the human brain does this and apply those lessons to machines. Which, obviously, is not difficult whatsoever. Still, it does seem to be a solution that works.
Because the more computer scientists base their AI on our brains… the smarter they seem to get. A lot of this research is focused on neural networks. These are systems that can teach themselves to recognize patterns, and they're modeled after how our brains learn information.
When you learn something new, your brain strengthens the connections between its neurons. Over time, those connections grow stronger and stronger, and will likely stay in your brain for a while. On the flip side, if your brain realizes a piece of information isn't worth keeping — like where you parked that one time three months ago — it can remove receptors.
That way, the connection can get overwritten with something more helpful. These stable and dynamic connections allow your brain to keep what it needs, and get rid of the stuff it doesn't. And neural networks work in a similar way.
They start off with some basic framework for how to do a task, and then they practice that task to refine the connections between their artificial neurons. Like, say you want to train a network to identify dog breeds. To do it, you would first give the system some basic guidelines.
Then, you would feed it a bunch of pictures, and the AI would try to identify each one. At first, it would be terrible at this. But with each image, the network would make small tweaks to connections between its neurons, called weights.
For example, it might make size more important than paw color. After thousands or millions of pictures, those tweaks would eventually be good enough for the network to identify dog breeds accurately. The program would have strengthened the connections it needed, and scaled back the ones it didn't, just like your brain.
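If it helps to see that weight-tweaking in action, here's a rough Python sketch. Everything in it is invented for illustration (a single artificial neuron, two fake features standing in for size and paw color, and random data), so it's not any real dog-breed network, just the basic loop: guess, compare, nudge the weights.

```python
import numpy as np

# Toy sketch: one artificial "neuron" learning how much to weight two
# features (size vs. paw color) when telling two dog breeds apart.
# The feature vectors and labels are made up for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))         # columns: [size, paw_color] per image
y = (X[:, 0] > 0).astype(float)        # breed actually depends on size, not paw color

w = np.zeros(2)                        # the connection "weights"
b = 0.0
lr = 0.1

for epoch in range(20):
    for xi, yi in zip(X, y):
        pred = 1 / (1 + np.exp(-(xi @ w + b)))   # sigmoid guess between 0 and 1
        err = pred - yi                          # how wrong the guess was
        w -= lr * err * xi                       # small tweak to the weights per image
        b -= lr * err

print(w)   # the weight on "size" grows; the weight on "paw color" stays near zero
```

After enough images, the weight on the feature that actually predicts the breed grows while the irrelevant one stays near zero, which is the "strengthen some connections, scale back others" idea in miniature.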
Neural networks are a big step toward general AI, but they're not perfect. Actually, in a lot of ways, they're pretty narrow, because most of them can still only do one thing. While your brain can make connections about all kinds of information at once, many neural networks have connections that are too weak and dynamic.
That means all of their weights get adjusted with every new piece of data. So if you suddenly started feeding your network pictures of cats instead of dogs, it would adjust all of the weights you worked so hard to perfect. Every connection would now be about cats.
This problem even has a dramatic name: catastrophic forgetting. But the cool thing is, we can use other knowledge about our brains to solve it. One approach took inspiration from the fact that the brain doesn't just go for a grab-bag of whatever neurons are available.
Instead, it activates different sets of neurons for different tasks. In a 2018 study published in PNAS, researchers showed that you can do this in a neural network, too: you can make one task activate one set of neurons, and another task activate another set.
By combining this approach with previous methods from other teams, these researchers were able to program a network that achieved 90% accuracy on 500 tasks. Which isn't perfect, but is promising. And the more we learn about how our brains refine connections, the better these methods are going to get.
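If you want a picture of what that gating idea looks like in code, here's a toy sketch loosely inspired by that approach. The network size, the random masks, and the gating fraction are all made up for illustration, and the real study combined gating with other stabilization tricks.

```python
import numpy as np

# Rough sketch of context-dependent gating: each task only gets to use its
# own random subset of hidden units, so learning task B overwrites fewer of
# the weights that task A relies on. Sizes and fractions are illustrative.
rng = np.random.default_rng(1)
n_hidden = 100
n_tasks = 5

# One fixed binary mask per task, switching off roughly 80% of the hidden units.
masks = (rng.random((n_tasks, n_hidden)) < 0.2).astype(float)

def hidden_activity(x, W, task_id):
    """Forward pass in which only the current task's gated units stay active."""
    h = np.tanh(W @ x)
    return h * masks[task_id]

W = rng.normal(scale=0.1, size=(n_hidden, 10))
x = rng.normal(size=10)
print(hidden_activity(x, W, task_id=0))   # most entries are zeroed out by the mask
```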
Of course, catastrophic forgetting isn't the only barrier to general AI. Another challenge is getting systems to learn from more than just examples. Not every task has a huge dataset for a network to sort through, and anyway, who has time for that?
If a program is going to think like a human, it has to start grasping the rules that govern whether an answer is correct. And in 2016, UK researchers came up with a way to achieve that. They relied on two concepts: a recurrent neural network and reinforcement learning.
A recurrent neural network uses feedback loops to keep tabs on what just happened and how that should inform an AI's next move. They're used a lot in language processing. For example, if a program started a sentence with a noun, it would remember that.
Then, it would use the rules of English grammar to tell itself that the next word should probably be a verb. Reinforcement learning is how a network can figure out the best next move on its own. It guesses an answer and then immediately gets feedback in the form of a reward signal, or the lack of one.
Then, it uses that feedback to learn what to do next time. Going back to the language example, if an AI guessed that "Olivia dog" was a good sentence, it wouldn't get a reward signal. But if it said, "Olivia ran," it would.
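To make that reward-signal loop concrete, here's a tiny Python sketch. It isn't DeepMind's system: the word list, the hand-coded grammar check, and the preference scores are all invented for illustration. It just shows the core cycle of guess, get a reward (or not), and adjust.

```python
import numpy as np

# Toy reward-driven learner: the agent keeps a preference score for each
# candidate word to follow "Olivia", and nudges those scores toward the
# choices that earned a reward signal.
rng = np.random.default_rng(2)
words = ["ran", "slept", "dog", "blue"]
rewards = {"ran": 1.0, "slept": 1.0, "dog": 0.0, "blue": 0.0}   # stand-in grammar check

prefs = np.zeros(len(words))   # one preference score per word
baseline = 0.0                 # running average of the rewards seen so far
lr = 0.1

for step in range(500):
    probs = np.exp(prefs) / np.exp(prefs).sum()    # softmax over the choices
    choice = rng.choice(len(words), p=probs)       # guess a next word
    r = rewards[words[choice]]                     # reward signal, or lack thereof
    # REINFORCE-style nudge: make choices that beat the average more likely next time.
    grad = -probs * (r - baseline)
    grad[choice] += r - baseline
    prefs += lr * grad
    baseline += 0.05 * (r - baseline)              # slowly track the average reward

print(dict(zip(words, np.round(probs, 2))))   # the grammatical verbs end up most likely
```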
This is the approach the AI system AlphaGo used to beat the world's top Go player. In this 2017 study, the UK team trained a recurrent network using reinforcement learning, but they also got it to use a different, secondary reinforcement learning algorithm at the same time. In that way, one part of the AI learned how to respond to different examples, while the other part learned how those examples fell into a larger rule structure.
They called this deep meta-reinforcement learning, and their approach helped the network quickly learn and adapt to seven very different tasks. It could do things like navigate a labyrinth with a changing goal and pull a series of rigged slot machines to get the maximum reward. While it might not seem obvious, these systems are based on our brains, too.
When a system gets a reward in reinforcement learning, it's like how your brain uses chemicals like dopamine to give you a reward. That reinforcement encourages you to practice certain behaviors. So by building a similar system into neural networks, researchers are hoping to encourage them to keep learning and adapting.
Now, all of the techniques we've talked about are great. A program with general intelligence should totally be able to process multiple kinds of information, and should be able to learn new rules. But to build a truly evolving system, we'll also need to make AI curious.
Because having an intrinsic desire to figure out how stuff works and fits together is a big part of how we learn and explore. Think about kids. They'll go turn over a rock and poke at the bugs underneath it because exploring sounds fun, not because they were promised a reward for learning.
Of course, we can't just push AI out into the world and tell it to be home before supper. So instead, scientists are using video games to teach them curiosity. In 2017, Berkeley researchers managed to do this using Super Mario Brothers.
They trained their AI to predict what each frame of the game would look like as it explored. But instead of generating a reward for being right, this AI got a reward for being wrong. That is, the less reality matched its prediction, the bigger the reward it got.
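As a rough illustration of that reward-for-surprise idea, here's a minimal sketch. The observation vectors are just random numbers, and it only shows the scoring rule, not the Berkeley team's actual forward model.

```python
import numpy as np

# Curiosity bonus sketch: the agent predicts the next observation, and the
# prediction error itself becomes the reward, so surprising situations are
# worth seeking out. This mirrors the idea, not the real Mario experiment.
rng = np.random.default_rng(3)

def curiosity_reward(predicted_next_obs, actual_next_obs):
    """Bigger mismatch between prediction and reality means a bigger reward."""
    return float(np.mean((predicted_next_obs - actual_next_obs) ** 2))

predicted = rng.normal(size=16)                       # what the forward model expected
familiar = predicted + rng.normal(scale=0.01, size=16)  # almost exactly what it expected
novel = rng.normal(size=16)                           # something it has never seen

print(curiosity_reward(predicted, familiar))   # tiny reward: nothing new here
print(curiosity_reward(predicted, novel))      # large reward: surprise!
```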
Essentially, it was rewarded for being surprised. This led the system to explore new parts of the game, which means the team basically programmed it to be curious about its environment. Unfortunately, it could never even beat the first level.
But hey, it's a start. Now, other projects are trying different ways of getting AI to be self-sufficient. In January 2019, a team from Columbia University successfully got a robot arm to create an internal model of itself.
It figured out what it looked like and how it worked without any outside input. It did this by trying out a thousand different movements, recording each one to figure out which ones worked and which ones were physically impossible. Kind of like the robot version of a baby playing with its hands.
Once it was all done, the arm could successfully pick up and place small balls into containers and write with a marker — even though researchers never told the arm what it could do. The team even replaced one of the arm's parts with a deformed piece, and it quickly adapted to the change. The robot learned what it was on its own, and that made it easier for it to adjust to new situations.
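A very stripped-down version of that self-modeling idea might look like this in Python: a pretend arm with three motor commands, one measured outcome, and made-up linear physics, where random motor babbling plus a simple fit stands in for the real robot and its neural network.

```python
import numpy as np

# Toy self-modeling: try random "movements", record what actually happened,
# then fit a forward model that maps motor commands to outcomes. The 3-input
# linear "arm" is a stand-in for the real robot and its deep network.
rng = np.random.default_rng(4)

commands = rng.uniform(-1, 1, size=(1000, 3))      # a thousand random trial movements
true_map = np.array([0.5, -1.2, 0.8])              # the arm's hidden, made-up physics
outcomes = commands @ true_map + rng.normal(scale=0.01, size=1000)

# Fit the self-model purely from the recorded (command, outcome) pairs.
model, *_ = np.linalg.lstsq(commands, outcomes, rcond=None)

print(model)                            # close to the hidden true_map
print(commands[0] @ model, outcomes[0]) # predicted vs. actual outcome for one movement
```

The point is just that the "arm" is never told its own physics; it recovers them from its own recorded trial movements.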
By building programs that process information like our brains, and teaching these programs to be curious instead of just correct, scientists are heading down a road where AI might one day be able to evolve. Someday, we might make a machine that learns to treat diseases just by learning a little about biochemistry, or one that can design cars by studying engineering. Essentially, we'd be making the AI equivalent of students — programs that learn to synthesize and apply information.
But there's a lot to figure out, because in reality, we don't know everything about how our brains work — let alone how to apply those things to machines. We know a lot, sure, but to really make an AI that thinks like us… well, we're going to have to understand ourselves a little better first. And that's a whole different field of research.
Thanks for watching this episode of SciShow! If you want to keep exploring the universe with us, you can go to youtube.com/scishow and subscribe. [ outro ].