crashcourse
The Future of Artificial Intelligence: Crash Course AI #20
Statistics
View count: 212,080
Likes: 4,175
Comments: 191
Duration: 11:00
Uploaded: 2019-12-27
Last sync: 2024-10-23 08:45
Citation
Citation formatting is not guaranteed to be accurate.
MLA Full: "The Future of Artificial Intelligence: Crash Course AI #20." YouTube, uploaded by CrashCourse, 27 December 2019, www.youtube.com/watch?v=T7Rv4tGRlfc.
MLA Inline: (CrashCourse, 2019)
APA Full: CrashCourse. (2019, December 27). The Future of Artificial Intelligence: Crash Course AI #20 [Video]. YouTube. https://youtube.com/watch?v=T7Rv4tGRlfc
APA Inline: (CrashCourse, 2019)
Chicago Full: CrashCourse, "The Future of Artificial Intelligence: Crash Course AI #20," December 27, 2019, YouTube, 11:00, https://youtube.com/watch?v=T7Rv4tGRlfc.
Today, in our final episode of Crash Course AI, we're going to look towards the future. We've spent much of this series explaining how and why we don't have the Artificial General Intelligence (or AGI) that we see in movies like Blade Runner, Her, or Ex Machina. Siri frequently doesn't understand us, we probably shouldn't sleep in our self-driving cars, and those recommended videos on YouTube and Netflix often aren't what we really want to watch next. So let's talk about what we do know, how we got here, and where we think it's all headed. Thanks so much everyone for watching!
Don't forget to subscribe to Jabril's channel here!
http://youtube.com/c/jabrils
And you can find some more free resources to learn about AI below!
https://course.fast.ai/
https://www.coursera.org/learn/ai-for-everyone
https://www.coursera.org/learn/machine-learning
https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html
https://www.kaggle.com/learn/overview
https://www.kaggle.com/competitions?sortBy=grouped&group=general&page=1&pageSize=20&category=gettingStarted
Crash Course AI is produced in association with PBS Digital Studios:
https://www.youtube.com/pbsdigitalstudios
Crash Course is on Patreon! You can support us directly by signing up at http://www.patreon.com/crashcourse
Thanks to the following patrons for their generous monthly contributions that help keep Crash Course free for everyone forever:
Eric Prestemon, Sam Buck, Mark Brouwer, Efrain R. Pedroza, Matthew Curls, Indika Siriwardena, Avi Yashchin, Timothy J Kwist, Brian Thomas Gossett, Haixiang N/A Liu, Jonathan Zbikowski, Siobhan Sabino, Jennifer Killen, Nathan Catchings, Brandon Westmoreland, dorsey, Kenneth F Penttinen, Trevin Beattie, Erika & Alexa Saur, Justin Zingsheim, Jessica Wode, Tom Trval, Jason Saslow, Nathan Taylor, Khaled El Shalakany, SR Foxley, Yasenia Cruz, Eric Koslow, Caleb Weeks, Tim Curwick, DAVID NOE, Shawn Arnold, William McGraw, Andrei Krishkevich, Rachel Bright, Jirat, Ian Dundore
Want to find Crash Course elsewhere on the internet?
Facebook - http://www.facebook.com/YouTubeCrashCourse
Twitter - http://www.twitter.com/TheCrashCourse
Tumblr - http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse
CC Kids: http://www.youtube.com/crashcoursekids
#CrashCourse #MachineLearning #ArtificialIntelligence
Hey everyone, I'm Jabril and welcome to the final episode of Crash Course AI.
We've covered a lot of ground together, from the basics of neural networks to game playing, language modeling, and algorithmic bias. We've even experimented with code in labs!
And as we've been learning about different parts of artificial intelligence as a field, there have been a couple of themes that keep coming up. First, AI is in more places than ever before. The machine learning professor Andrew Ng says that "Artificial Intelligence is the new electricity." This is a pretty bold claim, but lots of governments are taking it seriously and planning to grow education, research, and development in AI. China's plan alone calls for over 100 billion U.S. dollars in funding over the next 10 years.
Second, AI is awesome. It can help make our lives easier and sort of gives us superpowers. Who knows what we can accomplish with the help of machine learning and AI?
And third, AI doesn't work that well yet. I still can't ask my phone or any "smart" device to do much, and we're far away from personal robot butlers. So what's next?
What's the future of AI?
[INTRO]
One way to think about the future of AI is to consider milestones AI hasn't reached yet.
Current soccer robots aren't quite ready to take on human professionals, and Siri still has a lot of trouble understanding exactly what I'm saying. For every AI system, we can try and list what abilities would take the current technology to the next level. In 2014, for example, the Society of Automotive Engineers attempted to do just that for self-driving cars.
They defined five levels of automation. For each additional level, they expected that the AI controlling the car could do more without human help. At level 1, cruise control automatically accelerates and decelerates to keep the car at a constant speed, but everything else is on the human driver.
At level 3, the car is basically on its own. It's driving, monitoring its surroundings, navigating, and so on... but a human driver will need to take over if something goes wrong, like really bad weather or a downed power line. And at level 5, the human driver can just sit back, have a smoothie, and watch Crash Course AI while the car takes them to work through rush-hour traffic. And obviously, we don't have cars with the technology to do all this yet. But these levels are a way to evaluate how far we've come, and how far our research still has to go.
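To make those distinctions concrete, here's a minimal Python sketch that treats the levels as a simple data structure. The level names and the level 2 and 4 descriptions are paraphrased from the SAE J3016 standard as I understand it (the video only walks through levels 1, 3, and 5), so treat this as an illustration rather than an official definition.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Driving automation levels, roughly following SAE J3016.
    Levels 1, 3, and 5 are the ones described in the video."""
    DRIVER_ASSISTANCE = 1       # e.g. cruise control; the human does everything else
    PARTIAL_AUTOMATION = 2      # car can steer and brake, human must monitor constantly
    CONDITIONAL_AUTOMATION = 3  # car drives itself, human must take over when asked
    HIGH_AUTOMATION = 4         # no human needed, but only in limited conditions/areas
    FULL_AUTOMATION = 5         # no human needed anywhere, in any conditions

def human_must_stay_alert(level: AutomationLevel) -> bool:
    """Below level 4, a person still has to be ready to take the wheel."""
    return level < AutomationLevel.HIGH_AUTOMATION

print(human_must_stay_alert(AutomationLevel.CONDITIONAL_AUTOMATION))  # True
print(human_must_stay_alert(AutomationLevel.FULL_AUTOMATION))         # False
```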
We can even think about other AIs using "levels of automation." Like, for example, maybe we have level 1 AI assistants right now that can set alarms for us, but we still need to double-check their work. But what are levels 2 through 5? What milestones would need to be achieved for AI to be as good as a human assistant?
What would be milestones for computer vision or recommender systems or any of the other topics in this course? We'd love to read your ideas in the comments! Sometimes it's useful to think about different kinds of AI on their own as we make progress on each very difficult problem.
But sometimes people try and imagine an ultimate AI for all applications: an Artificial General Intelligence, or AGI. To understand why there's such an emphasis on being "general," it can be helpful to remember where all this AI stuff first started.
For that, let's go to the Thought Bubble. Alan Turing was a British mathematician who helped break the German Enigma codes during World War II, and helped define the mathematical theory behind computers.
In his paper "Computing Machinery and Intelligence" from 1950, he introduced the now-famous "Turing Test," or "The Imitation Game." Turing proposed an adaptation of a guessing game. In his version, there's an "interrogator" in one room, and a human and a machine in the other.
The interrogator talks to the hidden players and tries to figure out which is a human and which is a machine. Turing even gave a series of talking points, like: Please write me a sonnet on the subject of the Forth Bridge.
Add 34,957 and 70,764. Do you play chess? I have K at K1 and no other pieces.
You have only K at K6 and R at R1. It's your move. What do you play?
The goal of The Imitation Game was to test a machine's intelligence about any human thing, from math to poetry. We wouldn't just judge how "real" a robot's fake human skin looks. As Turing put it: "We do not wish to penalise the machine for its inability to shine in beauty competitions, nor to penalise a man for losing in a race against an aeroplane." This idea suggests a unified goal for AI: an artificial general intelligence.
But over the last 70 years, AI researchers have focused on sub-fields like computer vision, knowledge representation, economic markets, planning, and so on. Thanks, Thought Bubble! And even though we're not sure if an Artificial General Intelligence is possible, many communities are doing interdisciplinary research, and many AI researchers are taking baby steps to combine specialized sub-fields.
This involves projects like teaching a robot to understand language, or teaching an AI system that models the stock market to read the news and better understand market fluctuations. To be clear, most of AI is still science fiction… we're nowhere near Blade Runner, Her, or any similar movies. Before we get too excited about combining everything we've built to achieve AGI, we should remember that we still don't know how to make specialized AIs for most problems.
Some sub-fields are making progress more quickly than others, and we're seeing AI systems pop up in lots of places with awesome potential. To understand how AI might be able to change our lives, AI professors Yolanda Gil and Bart Selman put together the Computing Research Association's AI Roadmap for the next 20 years.
They predict AI reducing healthcare costs, personalizing education, accelerating scientific discoveries, helping national defense, and more. Part of the reason they expect so much progress is that more people than ever (including us!) are learning how to build AI systems. And all of these problems have lots of data to train new algorithms.
It used to be hard to collect training data, going to libraries to copy facts and transcribe books. But now, a lot of data is already digital. If you want to know what's happening on the other side of the planet, you can download newspapers or grab tweets from the Twitter API.
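For instance, here's a rough sketch of grabbing recent tweets in Python with the tweepy library's Client interface. It assumes you've registered for developer credentials and have a bearer token; the API's access tiers and rules have changed quite a bit since this video came out, so treat it as a sketch of the idea rather than a guaranteed-to-work recipe.

```python
import tweepy

# Assumes you have a bearer token from the X/Twitter developer portal.
client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

# Recent-search endpoint: English tweets about flooding, excluding retweets.
response = client.search_recent_tweets(
    query="flooding lang:en -is:retweet",
    max_results=10,
)

for tweet in response.data or []:  # response.data is None if nothing matched
    print(tweet.text)
```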
Interested in hyperlocal weather prediction? You can combine free data from the weather service with personal weather stations to help know when to water your plants. And if you feed that data into a robot gardener, you could build a fully-automated weather-knowing plant-growing food-making garden!
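Here's a rough Python sketch of that garden idea, combining a forecast from the free U.S. National Weather Service API with a reading from a personal soil-moisture sensor. The api.weather.gov endpoints and the probabilityOfPrecipitation field follow the public NWS documentation as I recall it, and read_soil_moisture() is a made-up stand-in for whatever your own weather station exposes, so check the real responses before trusting it with your tomatoes.

```python
import requests

# NWS asks callers to send an identifying User-Agent.
HEADERS = {"User-Agent": "garden-bot (you@example.com)"}

def rain_chance_today(lat: float, lon: float) -> int:
    """Chance of rain (%) for the current forecast period from api.weather.gov."""
    point = requests.get(f"https://api.weather.gov/points/{lat},{lon}", headers=HEADERS).json()
    forecast = requests.get(point["properties"]["forecast"], headers=HEADERS).json()
    today = forecast["properties"]["periods"][0]
    # probabilityOfPrecipitation can be null or missing; treat that as 0%
    return (today.get("probabilityOfPrecipitation") or {}).get("value") or 0

def read_soil_moisture() -> float:
    """Hypothetical stand-in for a backyard sensor: 0.0 (bone dry) to 1.0 (soaked)."""
    return 0.18

def should_water(lat: float, lon: float) -> bool:
    """Water only if the soil is dry and rain looks unlikely."""
    return read_soil_moisture() < 0.25 and rain_chance_today(lat, lon) < 40

if __name__ == "__main__":
    print("Water the plants today?", should_water(38.8977, -77.0365))
```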
Maker communities around the globe are combining data, AI, and cheap hardware to create the future and personalize AI technologies. While imagining an AI/human utopia is exciting, we have to be realistic, too. In many industries, automation doesn't only enhance human activities; it can replace humans entirely.
Truck, delivery, and tractor driving are some of the most common jobs in the US as of 2014. If self-driving vehicles revolutionize transportation in the near future, will all those people lose their jobs? We can't know for sure, but Gödel Prize-winning computer science professor Moshe Vardi points out that this is already the trend in some industries.
For example, U.S. manufacturing output will likely keep rising, but manufacturing jobs have been decreasing a lot. Plus, computers use energy, and that means we're not getting any benefits from AI for free.
Massive amounts of machines running these algorithms can have a substantial carbon footprint. On top of that, as we've discussed, you have to be pretty careful when it comes to trusting AI systems, because they often end up with all kinds of biases you may not want. So we have to weigh the benefits of massive AI deployment against the costs.
In a now-famous story from a few years ago, Target figured out a woman was pregnant based on her shopping history, and they sent her maternity coupons. But she was still in high school, so her family saw the mail, even though she hadn't told them. Do we want our data being used like this, and potentially revealing personal details?
Or what about the government? Should it be allowed to track people with facial recognition installed on cameras at intersections? When we provide companies location data from our phones, we could help them build better traffic models so we can get to places faster.
Cities could improve bus routes, but it also means … someone … is … always … watching you. AI could also track your friends and family, where you shopped and ate, and who you hung out with. If statistics have shown that people who leave home late at night are more likely to commit a crime... and an AI knows you left (even though it's just for some late night cookie dough), should it call the police to watch you -- just in case?
Sooo, we can go down any number of scary thought experiments. And there's a lot to consider when it comes to the future of AI. AI is a really new tool and it's great that so many people have access to it, but that also means there are very few laws or protections about what they can and can't do.
Innovations in AI have awesome potential to make positive changes, but there are also plenty of risks, especially if the technology advances faster than the average person's understanding of it. It's probably most accurate to say that the future is... complicated. And the most important thing we can do is be educated and involved in AI as the field changes.
Which we're doing right now! In Crash Course AI labs, we used some of the same machine learning technologies that the biggest companies use in their products, and that universities rely on for cutting-edge research. So when we see a company or government rolling out a new technology, we know what questions to ask:
Where did they get their data? Is this even a situation where we want AI to help humans? Is this the right tool to use?
What privacy are we giving up for this cool new feature? Is anyone auditing this model? Is this AI really doing what the developers hoped it would?
We're also hopefully walking away from Crash Course AI with some basic tools to build different kinds of AI, from handwriting recognition to recommender systems. We're excited to see what future you decide to build. If you want to learn more about AI, we'll include more free learning resources in the description.
In the meantime, I've been making some pretty good progress with John-Green-bot. Oh, John-Green-bot? John-Green-bot, tell the audience: what is this?
John-Green-bot: Pizza!
Jabril: See, not just donuts and bagels anymore! I want to thank you all for watching Crash Course AI, and as they say in John-Green-bot's hometown:
John-Green-bot: Don't forget to be awesome.
Jabril: Crash Course AI is produced in association with PBS Digital Studios!
If you want to help keep Crash Course free for everyone, forever, you can join our community on Patreon. And if you want to keep up to date with my prototyping adventures check out my channel below.