crashcourse
Robotics: Crash Course AI #11
YouTube: | https://youtube.com/watch?v=_U21fT8VLp0 |
Previous: | Napoleon Bonaparte: Crash Course European History #22 |
Next: | The Congress of Vienna: Crash Course European History #23 |
Statistics
View count: | 140,126 |
Likes: | 2,701 |
Comments: | 77 |
Duration: | 10:12 |
Uploaded: | 2019-10-25 |
Last sync: | 2024-12-08 18:30 |
Citation
Citation formatting is not guaranteed to be accurate.
MLA Full: | "Robotics: Crash Course AI #11." YouTube, uploaded by CrashCourse, 25 October 2019, www.youtube.com/watch?v=_U21fT8VLp0. |
MLA Inline: | ("Robotics: Crash Course AI #11")
APA Full: | CrashCourse. (2019, October 25). Robotics: Crash Course AI #11 [Video]. YouTube. https://youtube.com/watch?v=_U21fT8VLp0 |
APA Inline: | (CrashCourse, 2019) |
Chicago Full: | CrashCourse, "Robotics: Crash Course AI #11," October 25, 2019, YouTube, 10:12, https://youtube.com/watch?v=_U21fT8VLp0.
Robots aren't like humans, who can do a lot of different things. They're designed for very specific tasks, like vacuuming our homes, assembling cars in a factory, or exploring the surface of other planets. So even though it may be a while before we have a general household robot that can do it all, robots are still really important, because they can do some things incredibly well, even better than humans. So today, we're going to take a look at the role of AI in overcoming three key challenges in the field of robotics: localization, planning, and manipulation.
Crash Course is produced in association with PBS Digital Studios:
https://www.youtube.com/pbsdigitalstudios
Crash Course is on Patreon! You can support us directly by signing up at http://www.patreon.com/crashcourse
Thanks to the following patrons for their generous monthly contributions that help keep Crash Course free for everyone forever:
Eric Prestemon, Sam Buck, Mark Brouwer, Indika Siriwardena, Avi Yashchin, Timothy J Kwist, Brian Thomas Gossett, Haixiang N/A Liu, Jonathan Zbikowski, Siobhan Sabino, Zach Van Stanley, Jennifer Killen, Nathan Catchings, Brandon Westmoreland, dorsey, Kenneth F Penttinen, Trevin Beattie, Erika & Alexa Saur, Justin Zingsheim, Jessica Wode, Tom Trval, Jason Saslow, Nathan Taylor, Khaled El Shalakany, SR Foxley, Sam Ferguson, Yasenia Cruz, Eric Koslow, Caleb Weeks, Tim Curwick, David Noe, Shawn Arnold, William McGraw, Andrei Krishkevich, Rachel Bright, Jirat, Ian Dundore
--
Want to find Crash Course elsewhere on the internet?
Facebook - http://www.facebook.com/YouTubeCrashCourse
Twitter - http://www.twitter.com/TheCrashCourse
Tumblr - http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse
CC Kids: http://www.youtube.com/crashcoursekids
#CrashCourse #ArtificialIntelligence #MachineLearning
(00:00) to (02:00)
John Green-Bot: Hi, I'm John Green-Bot and welcome to Crash Course: AI. Today, we're learning about me!
Jabril: Hey. This is my show.
JGB: Uh-oh.
J: It's okay, John Green-Bot, we can do the intro together. Robotics is a broad topic because it's the science of building a computer that moves and interacts with the world, or even beyond the world, in space.
JGB: So today, we're going to talk about robots, like me, and what makes us tick.
(Intro)
J: Some of the most exciting AIs are robots that can move through the world with us, gathering data and taking actions. Robots can have wings to fly, fins to swim, wheels to drive, or legs to walk, and they can explore environments that humans can't even survive in. But unlike humans, who can do many different things, robots are built to perform specific tasks, with different requirements for hardware and for learning.
Curiosity is a pretty amazing robot that has spent seven years exploring Mars for us, but it wouldn't be able to build cars like an industrial robot or clean your apartment like a Roomba. Robotics is such a huge topic that it's also part of computer science, engineering, and other fields. In fact, this is the third Crash Course video we've made about robots.
In the field of AI, robotics is full of huge challenges. In some cases, what's easy for computers, like doing millions of computations per second, is hard for humans. But with robotics, what's easy for humans, like making sense of a bunch of diverse data in complex environments, is really hard for computers. For example, in the reinforcement learning episode, we talked about walking and how hard it would be to precisely describe all the joints and small movements involved in a single step. But if we're going to build robots to explore the stars or get me a snack, we have to figure out all those details, from how to build an arm to how to use it to grab things.
(02:00) to (04:00)
So, we're gonna focus on three core problems in robotics: localization, planning, and manipulation. The most basic feature of a robot is that it interacts with the world. To do that, it needs to know where it is, which is localization, and how to get somewhere else, which is planning. So, localization and planning go hand-in-hand. We humans do localization and planning all the time. Let's say you go to a new mall and you want to find some shoes. What do you do? You start to build a map of the mall in your head by looking around at the walls, escalators, shops, and doors. As you move around, you can update your mental map and keep track of how you got there. That's localization. And once you have a mental map and know the way to the shoe store, you can get there more quickly next time, so maybe you plan to take the escalator because it's faster than the elevator.
The most common way we gather that data is with our eyes, through perception. Our eyes give us two slightly different views of the world, which lets us see how far away objects are in space. This is called stereoscopic vision. A mental map like this is key to what many robots do, too, if they need to move around the world. As they explore, they need to simultaneously track their position and update their mental map of what they see. This process is called simultaneous localization and mapping, which goes by the cool nickname SLAM. But instead of eyes, robots use all kinds of different cameras. Many robots use RGB cameras for perception, which gather color images of the world. Some robots, like John Green-bot, use two cameras to achieve stereoscopic vision, like us.
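The depth half of stereoscopic vision comes down to simple geometry: the closer an object is, the more it shifts between the two camera views. Here's a minimal Python sketch of that calculation for the curious; it's not from the episode, and the camera numbers are made up:

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 700.0,
                         baseline_m: float = 0.12) -> float:
    """Depth in meters of a point seen by two parallel cameras.

    disparity_px is how far (in pixels) the point shifts between the
    left and right images; a bigger shift means a closer object.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A point that shifts 35 pixels between the two views would be about
# 700 * 0.12 / 35 = 2.4 meters away.
print(depth_from_disparity(35.0))  # 2.4
```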
But robots can also have sensors that help them see the world in ways that humans can't. One example is infrared depth cameras. These cameras measure distances by shooting out infrared light, which is invisible to our eyes, and then seeing how long it takes to bounce back. Infrared depth cameras are how some video game motion sensors work, like how the Microsoft Kinect could figure out where a player is and how they're gesturing.
(04:00) to (06:00)
This is also how many self-driving cars work, using a technology called LiDAR, which emits over 100,000 laser pulses a second and measures when they bounce back. This generates a map of the world that marks out flat surfaces and the rough placement of 3D objects, like a street lamp, a mailbox, or a tree on the side of the road. Once robots know how close or far away things are, they can build maps of what they think the world looks like and navigate around objects more safely. With each observation, and by keeping track of its own path, a robot can update its mental map. But just keep in mind: most environments change, and no sensor is perfect.
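For a feel for the math, here's a minimal sketch of the two calculations behind every LiDAR pulse: turning the echo time into a distance, and placing that distance as a point on the robot's map. Again, this isn't from the episode, and the numbers are invented:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_echo(round_trip_s: float) -> float:
    """Distance to whatever bounced the pulse back.

    The pulse travels out and back, so we halve the round trip.
    """
    return SPEED_OF_LIGHT * round_trip_s / 2.0

def to_map_point(distance_m: float, beam_angle_rad: float,
                 robot_x: float, robot_y: float, robot_heading_rad: float):
    """Convert one range reading into an (x, y) obstacle on a 2D map."""
    angle = robot_heading_rad + beam_angle_rad
    return (robot_x + distance_m * math.cos(angle),
            robot_y + distance_m * math.sin(angle))

# An echo arriving after about 66.7 nanoseconds means the pulse hit
# something roughly 10 meters away.
d = range_from_echo(66.7e-9)
print(round(d, 2))                          # ~10.0
print(to_map_point(d, 0.5, 0.0, 0.0, 0.0))  # that obstacle's map coordinates
```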
So, a lot goes into localization. But after a robot learns about the world, it can plan paths and navigate through them. Planning is when an AI strings together a sequence of events to achieve some goal, and this is where robotics can tie into symbolic AI from the last episode. For example, let's say John Green-bot had been trained to learn a map of this office, and I wanted him to grab me a snack from the kitchen. He has localization covered, and now it's time to plan.
To plan, we need to define actions, or things that John Green-bot can do. Actions require preconditions, which describe how objects currently exist in the world, and actions have effects on those objects that change how they exist. So, if John Green-bot's mental map has a door between his current location and the kitchen, he might want to use an open-door action to go through it. This action requires a precondition of the door being closed, and the effect is that the door will be open so that John Green-bot can go through it.
John Green-bot's AI would need to consider different possible sequences of actions, including their preconditions and effects, to reason through all the routes to the kitchen in this building and choose which one to take.
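That precondition-and-effect model is often called STRIPS-style planning. Here's a minimal sketch of it (not from the episode; the facts and action names are invented stand-ins for the kitchen example), where the planner searches over action sequences until one reaches the goal:

```python
from collections import deque

# Each action lists what must be true first (pre), what it makes
# true (add), and what it makes false (delete).
ACTIONS = {
    "open_door":     {"pre": {"door_closed"}, "add": {"door_open"},  "delete": {"door_closed"}},
    "go_to_kitchen": {"pre": {"door_open"},   "add": {"at_kitchen"}, "delete": set()},
    "grab_snack":    {"pre": {"at_kitchen"},  "add": {"have_snack"}, "delete": set()},
}

def plan(start: frozenset, goal: str):
    """Breadth-first search over sequences of actions."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if goal in state:
            return steps
        for name, action in ACTIONS.items():
            if action["pre"] <= state:  # preconditions hold in this state
                nxt = frozenset((state - action["delete"]) | action["add"])
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None  # no sequence of actions reaches the goal

print(plan(frozenset({"door_closed"}), "have_snack"))
# ['open_door', 'go_to_kitchen', 'grab_snack']
```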
Searching through all these possibilities can be really challenging, and there are lots of different approaches we can use to help AIs plan, but those deserve a video of their own. Anyway, during planning, we run into the third core problem of robotics: manipulation.
(06:00) to (08:00)
What can John Green-bot's mechanical parts actually do? Can he reach out his arms to interact with objects in the world? Many humans can become great at manipulating things, and I'm talking about objects, not that force power stuff. Like, for example, I can do this, but it took me a while to get good at it. I mean, just look at babies. They're really clumsy by comparison.
Two traits that help us with manipulation, and that can help a robot, are proprioception and closed loop control. Proprioception is how we know where our body is and how it's moving, even if we can't see our limbs. Let's try an experiment. I'm gonna close my eyes, stretch my arms out wide, and point with both hands. Now, I'm gonna try to touch both index fingers without looking. Almost perfect! And I wasn't way off, because of proprioception.
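For a robot, the closest thing to that sense is arithmetic on joint-sensor readings. As a minimal sketch (not from the episode; the arm dimensions are made up), here's a two-link arm working out where its fingertip is from joint angles alone, no camera required:

```python
import math

def fingertip_position(shoulder_rad: float, elbow_rad: float,
                       upper_arm_m: float = 0.30, forearm_m: float = 0.25):
    """Where the arm's tip is, computed purely from joint-angle sensors:
    the robot equivalent of pointing with your eyes closed."""
    elbow_x = upper_arm_m * math.cos(shoulder_rad)
    elbow_y = upper_arm_m * math.sin(shoulder_rad)
    tip_x = elbow_x + forearm_m * math.cos(shoulder_rad + elbow_rad)
    tip_y = elbow_y + forearm_m * math.sin(shoulder_rad + elbow_rad)
    return tip_x, tip_y

# Shoulder at 45 degrees, elbow bent back 45 degrees: the tip ends up
# at roughly (0.46, 0.21) meters from the shoulder.
print(fingertip_position(math.pi / 4, -math.pi / 4))
```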
Our nervous system and muscles give our bodies that sense of proprioception, but most robots have motors and need sensors to figure out whether their machine parts are moving and how quickly. The second piece of the puzzle is closed loop control, or control with feedback. The loop we're talking about involves the sensors that perceive what's going on and whatever mechanical pieces control what's going on. If I tried that experiment again with my eyes open instead of closed, it would go even better. As my fingers get closer to each other, I can see their positions and make tiny adjustments.
I use my eyes to perceive, and I control my arms and fingers with my muscles, and there's a closed loop between them. They're all part of my body and connected to my brain. It'd be a totally different problem if there were an open loop, or control without feedback, like if I closed my eyes and tried to touch my fingers to someone else's. My brain can't perceive with their eyes or control their muscles, so I don't get any feedback and basically have to keep doing whatever I start doing.
We use closed loop control in lots of situations without even thinking about it. If a box we're picking up is heavier than expected, we feel it pull the skin on our fingers, so we tighten our grip. If it's even heavier, we might use our other hand. And if it's still too heavy, well, we'll call over our open loop example buddy. But this process has to be explicitly programmed when it comes to building robots.
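Programmed, roughly, like this: a minimal sketch (not from the episode) of that perceive-and-adjust loop, with a toy simulated box standing in for real hardware and a made-up slip sensor:

```python
class SimulatedBox:
    """Toy physics: the box keeps slipping until grip force reaches 5.0."""
    def __init__(self):
        self.grip_force = 1.0
    def slip(self) -> float:
        return max(0.0, 5.0 - self.grip_force)

def close_the_loop(box: SimulatedBox, gain: float = 0.5, max_steps: int = 50):
    """Feel how much the box is slipping, tighten in proportion, repeat.
    The sensor reading feeding back into the action is what closes the loop."""
    for _ in range(max_steps):
        slip = box.slip()              # perceive what's going on
        if slip <= 0.01:
            break                      # grip is secure; stop adjusting
        box.grip_force += gain * slip  # act on the feedback

box = SimulatedBox()
close_the_loop(box)
print(round(box.grip_force, 2))  # settles just under 5.0, enough to hold on

# Open loop control, by contrast, would set one grip force at the start
# and never check the sensor again, whatever the box actually weighs.
```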
(08:00) to (10:00)
Manipulation can look different depending on a robot's hardware and programming, but with enough work, we can get robots to perform specific tasks, like removing the creme from an Oreo. Beyond building capable robots that work on their own, we also have to consider how robots interact and coordinate with other robots, and even humans. In fact, there's a whole field of human-robot interaction that studies how to have robots work with or learn from humans. This means they have to understand our body language and spoken commands.
What's so exciting about robotics is that it brings together every area of AI into one machine. In the future, robots could give us superpowers, help people with disabilities, and even make the world a little more convenient by delivering snacks.
John Green-Bot: Here you go, Jabril.
J: Thanks, John Green-Bot. Go get me a spoon.
But we're still a long way from household robots that can do all of these things, and when we're building and training robots, we work in test spaces rather than the real world. For instance, a lot of work gets done on self-driving car AI before it ever reaches an actual road, because we don't want flawed systems to accidentally hurt humans. These test spaces can be anything from warehouses where robots practice walking, to virtual mazes that help an AI model learn to navigate. In fact, some of the most common virtual test spaces are programmed for human entertainment. Games.
So next week, we'll see how teaching AI to play games, even games like chess, can help us solve real world problems. See you then.
JGB: Crash Course: AI is produced in association with PBS Digital Studios. If you want to help keep Crash Course free for everyone forever, you can join our community on Patreon, and if you want to learn more about engineering robots, check out this video.