scishow
How Robotics Got Started: A Brief History
YouTube: | https://youtube.com/watch?v=uoC2ZGRI8a8 |
Previous: | Do Fish Drink Water? |
Next: | How to Make a Superbug, and an Even More Super-Collider! |
Categories
Statistics
View count: | 935,577 |
Likes: | 14,707 |
Comments: | 1,073 |
Duration: | 10:37 |
Uploaded: | 2015-03-05 |
Last sync: | 2024-12-06 18:45 |
Citation
Citation formatting is not guaranteed to be accurate.
MLA Full: | "How Robotics Got Started: A Brief History." YouTube, uploaded by SciShow, 5 March 2015, www.youtube.com/watch?v=uoC2ZGRI8a8. |
MLA Inline: | (SciShow, 2015) |
APA Full: | SciShow. (2015, March 5). How Robotics Got Started: A Brief History [Video]. YouTube. https://youtube.com/watch?v=uoC2ZGRI8a8 |
APA Inline: | (SciShow, 2015) |
Chicago Full: | SciShow. "How Robotics Got Started: A Brief History." YouTube video, 10:37. March 5, 2015. https://youtube.com/watch?v=uoC2ZGRI8a8. |
With vast technological advancements over the last few decades, why don't we have robots running everything by now? Join Hank Green for a brief history of the field of robotics—it might help you understand how hard it is to get machines to perform tasks, and how far we’ve actually come!
Hosted by: Hank Green
Human played by: Stefan Chin
----------
Message from our Subbable subscribers:
STEAM fields are AWESOME! Fluid dynamics ftw. ;) -OhoyoTohbi Chula
----------
Like SciShow? Want to help support us, and also get things to put on your walls, cover your torso and hold your liquids? Check out our awesome products over at DFTBA Records: http://dftba.com/scishow
Or help support us by subscribing to our page on Subbable: https://subbable.com/scishow
----------
Looking for SciShow elsewhere on the internet?
Facebook: http://www.facebook.com/scishow
Twitter: http://www.twitter.com/scishow
Tumblr: http://scishow.tumblr.com
Instagram: http://instagram.com/thescishow
Sources:
http://ed-thelen.org/comp-hist/robots.html
http://www.ifr.org/history/
http://loebner.net/Prizef/TuringArticle.html
http://www.bbc.com/news/technology-18475646
http://www.businessinsider.com/ibms-watson-may-soon-be-the-best-doctor-in-the-world-2014-4
http://www.medscape.com/viewarticle/466691
http://depts.washington.edu/givemed/magazine/2011/03/robotics-and-rehab/
http://www.businessinsider.com/military-exoskeletons-2014-8
http://www.defenseindustrydaily.com/ReconRobotics-throwbot-Micro-UGVs-07309/
http://www.army.mil/article/48456/robots-to-rescue-wounded-on-battlefield/
https://books.google.com/books?id=uY-Z3vORugwC&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false
http://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html?pagewanted=all&_r=0
http://www.economist.com/blogs/babbage/2013/02/computer-aided-medicine
http://www.biomedcentral.com/1471-2482/13/S2/S12
http://www.gizmag.com/airdog-auto-follow-action-sports-drone/32576/
http://www.theroboticschallenge.org/content/drc-finals-announcement
You'd think the Robot Revolution would have happened by now, with self-aware robots taking over the world, becoming our overlords, and making us work in the tungsten mines. Well, the thing is, the Robot Revolution already is happening. It's been happening for decades. It's just been a lot less bloody than the movies made us think it would be, and the robots are not self-aware... yet. But in all likelihood, your car was made by a robot. These days, there are robots helping spacecraft dock at the International Space Station, and you might even have one faithfully vacuuming your kitchen floor. Not exactly Skynet, but all of the convenience we already enjoy thanks to robots took decades of hard work by scientists, and some of the problems that robotics engineers were struggling with fifty years ago still haven't been solved. So there's a whole lot of history behind the robots we rely on today, and it just might help you understand why we don't have robots taking care of our every need, or forcing us to mine their tungsten.
[Intro]
If we're going to talk about the history of robotics, first, we need to talk about what a robot actually is, and for such a common term, it's surprisingly slippery to define, but technically speaking, a robot is just a machine designed to accomplish a task. That's it.
Now, it might sound like that would cover everything from a four-function calculator to NASA's Pleiades Supercomputer, but that is not what we're talking about here.
When we talk about robots, we're really talking about machines that use their programming to make decisions. For example, if you, a human, decide to pick up a coin from the ground, there are three main steps you have to go through. First, your eyes need to see the coin and then send that information to your brain. Then, your brain processes that input and uses things like previous experience to decide to pick it up. Finally, your brain sends messages to your body to grasp the coin.
Robots go through a similar process, just without the Slim Jim at the gas station. They can go through that process because, most of the time, they have the components that let them carry out each step: sensors for input, control systems for decision-making, and end effectors for output. Sounds simple enough, but developing each of these components can be challenging.
Sensors have to be able to detect things like images and sound accurately, effectors have to be flexible and fast enough to do what we need them to do, and the control system has to make all of the decisions necessary to get the sensors and effectors working together.
Of course, there are so many different kinds of robots that these components can vary considerably, but that's the basics of robot anatomy.
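Here's what that anatomy looks like as a minimal sense-decide-act loop, sketched in Python. Every class name and the stubbed sensor reading below are hypothetical, invented purely for illustration:

```python
import time

class Camera:
    """Sensor: provides input from the environment."""
    def capture(self):
        return {"coin_at": (0.3, 0.7)}  # stub; a real camera returns pixel data

class Controller:
    """Control system: turns input into decisions."""
    def decide(self, reading):
        return reading.get("coin_at")  # stub; real controllers detect and plan

class Gripper:
    """End effector: carries decisions out as physical action."""
    def execute(self, target):
        print(f"grasping at {target}")

def run(cycles=3, hz=10):
    camera, controller, gripper = Camera(), Controller(), Gripper()
    for _ in range(cycles):
        reading = camera.capture()           # 1. sense
        target = controller.decide(reading)  # 2. decide
        if target is not None:
            gripper.execute(target)          # 3. act
        time.sleep(1.0 / hz)  # real robots repeat this cycle continuously

if __name__ == "__main__":
    run()
```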
Now if we tried to talk about every advance that's been made in every subfield of robotics in just the past half-century, we'd be here all day. But to understand how we've gotten this far and why robots haven't taken over the world yet, we first have to talk about how the development of industrial, humanoid, and military robots got the field where it is today.
Industry is a good place to start, because that's where robots first became useful. Since factory work can be repetitive and often involves lifting lots of heavy stuff, it's a perfect fit for a machine. The world's first industrial robot, Unimate, was installed on a General Motors production line in New Jersey in 1961. Weighing in at nearly a metric ton, it was basically a giant robot arm. Its instructions, stored on a huge magnetic drum, told the arm to stack and weld hot pieces of metal over and over again.
Soon, other car companies got in on the game, installing their own robotic arms. But this first generation of robots was still in its awkward stage: the arms weren't particularly flexible, they were often powered by clunky hydraulics, and they were difficult to program. So when a robotic arm called the IRB-6 came along in 1974, it was a pretty big deal. It was the first all-electric industrial robot controlled by a microcomputer: it had 16 KB of RAM, it was programmable, and it could display four whole digits with its LEDs. Developed by the Swedish engineering firm ASEA, which later became part of ABB, it was used for unglamorous tasks like polishing steel tubes, but it was a crucial step toward robots that were easier to program.
But while controlling robotic arms was getting simpler, another issue came up. You can give a robot as much programming as you want, but if it can't see, it's not going to be able to do even seemingly simple things, like figure out which box should go where on a pallet. Crude visual scanners had been around since the '50s; they could only see black and white, and the resolution was worse than what you get from a flip phone camera. But to give vision to industrial robots, engineers had to tap into another field that would completely change the robotics game: artificial intelligence.
Now, artificial intelligence, or AI, is another broad, vague term used to describe any attempt to make a computer do something that we'd normally associate with human intelligence, like translate languages or play chess or recognize objects. In the '60s, the problem was that even though AIs were getting better at complex reasoning tasks, like playing chess and proving mathematical theorems, it was incredibly difficult to actually get the programs to interact with the real world. There's a difference, for example, between figuring out the ideal placement of wooden blocks in a theoretical model and actually moving those blocks into place, because moving them involves a whole series of discrete decisions and actions, and the robots at the time simply couldn't manage that.
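To see that gap concretely, here's a toy sketch of the symbolic half of the blocks-world problem. This is hypothetical illustration code, not any historical program; the point is how short the "thinking" part is compared to what each move hides.

```python
# A toy single-tower blocks-world planner (hypothetical illustration code).
# `state` and `goal` are lists of block names from bottom to top.
# Strategy: unstack everything onto the table, then rebuild the goal tower.
# Deliberately naive, but it always produces a valid plan.
def plan(state, goal):
    moves = []
    # Unstack the current tower onto the table, top block first.
    for block in reversed(state[1:]):
        moves.append(("move", block, "table"))
    # Rebuild: put each goal block on top of the one below it.
    for below, above in zip(goal, goal[1:]):
        moves.append(("move", above, below))
    return moves

# Reversing a three-block tower takes just four symbolic moves:
for step in plan(["A", "B", "C"], ["C", "B", "A"]):
    print(step)
```

The hard part in the '60s wasn't producing that four-move list; it was turning each "move" into the stream of sensor readings and motor commands needed to shift a physical block.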
For robots, vision isn't just about taking pictures, it's also about recognizing objects so that they can react to things and situations in real time. By the late 1970s, engineers had developed new algorithms that allowed cameras to recognize edges and shapes by using visual cues like highlights and shadows, but these programs were still just experimental, stuck in research labs. That all changed in 1981, when the first industrial robot got the gift of vision.
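The transcript doesn't name the specific algorithms, but the Sobel operator, published in 1968, is a classic example of finding edges from local brightness changes, very much in the spirit of the highlights-and-shadows cues described above. A minimal sketch, assuming a grayscale image stored as a 2D NumPy array:

```python
import numpy as np

def sobel_edges(gray):
    """Return edge strength for a 2D array of pixel brightness values."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # same kernel rotated: responds to horizontal edges
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Slide the 3x3 kernels over the image (naive loop, for clarity).
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    # Edge strength is the magnitude of the local brightness gradient.
    return np.hypot(gx, gy)

# Tiny test image: dark on the left half, bright on the right.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(sobel_edges(img).round(1))  # strong response along the vertical boundary
```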
A General Motors factory was once again the guinea pig, implementing a system called Consight, in which three separate robots could use visual sensors to pick out and sort six different kinds of auto parts as 1,400 parts per hour moved by on a conveyor belt. Progress! For the next two decades, technology kept improving -- industrial robots were able to see better, move faster, carry heavier loads, and handle more decisions.
These days, industrial robots are advanced enough that it's totally normal for a factory to install a robotic assembly line that handles nearly all of its production, and some industrial robots are heading in the direction of more general-purpose use, like Baxter, the humanoid industrial robot. "Humanoid" is, yet again, a pretty subjective term, but it just means human-like; for robots, it usually implies that they're designed to look and act as human-like as possible.
So Baxter, for instance, is nearly two meters tall, weighs 136 kg, and has a screen for a face. It also has a vaguely human-shaped torso and two arms. But more importantly, it can be quickly programmed to do practically anything. If a factory worker or researcher has a task to do, Baxter can probably handle it, as long as it's lifting less than 2.2 kg, because its arms are not industrial strength.
But it does need to be told what to do. If you want Baxter to stack some products in a box, for instance, you program it by manually guiding it through the task the first time, and then it imitates what you did. And its shape may be loosely based on a human's, but Baxter can't walk or climb stairs or talk. Getting a robot to act like a human has proved to be a whole other ballgame, one that researchers have been working on for decades, and progress has been slow.
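That "guide it through the task once, then it imitates" workflow is often called programming by demonstration. Here's a minimal record-and-replay sketch of the idea; this is not Baxter's actual interface (Rethink Robotics shipped a ROS-based SDK), and every function name below is hypothetical:

```python
import time

def record(read_joint_angles, duration_s=5.0, hz=20):
    """Phase 1: a human physically guides the arm while we sample its pose."""
    trajectory = []
    for _ in range(int(duration_s * hz)):
        trajectory.append(read_joint_angles())  # e.g., 7 angles for a 7-joint arm
        time.sleep(1.0 / hz)
    return trajectory

def replay(trajectory, command_joint_angles, hz=20):
    """Phase 2: the robot imitates by stepping back through the recorded poses."""
    for pose in trajectory:
        command_joint_angles(pose)
        time.sleep(1.0 / hz)

# Usage sketch, assuming a hypothetical `robot` object:
# trajectory = record(robot.read_joint_angles)   # human guides the arm once
# while parts_remaining():                       # then the robot repeats it
#     replay(trajectory, robot.command_joint_angles)
```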
The WABOT-1 is usually considered the first full-scale humanoid robot. Developed by researchers at Waseda University in Japan in 1973, it had arms, legs, and a vision system. It could walk, it could pick things up with its hands, it could even talk... except that it could only reply with pre-recorded responses to very specific statements, and it took 45 seconds to take one step. This bot and its successor, the WABOT-2, were a really big deal in their day, but they also pointed out an important fact: it's just much easier to design robots to do one task at a time.
So recently, the thinking has been: if general-purpose humanoid robots are out of our grasp, we might as well focus on making something that can do at least one useful task. That's why, in the past 10 years, there have been more household robots in use than ever, each programmed to perform a single function like vacuuming the floor, mowing the lawn, washing windows, or cleaning the pool. They're not quite Rosie from The Jetsons, but they were all made possible by the advances that came before them, like the ability to sense their surroundings and make decisions in order to navigate the world.
And it's not like researchers have given up on the humanoid front. There are humanoid robots in development that can perform some impressive feats. Honda's ASIMO, for example, can walk at speeds of almost 6 kilometers an hour, climb up and down stairs, carry a tray, and push a cart, among other things. So again, progress. You just can't buy one. For the time being, ASIMO is basically a research tool and a spokesmodel for what the future of robotics might look like.
But probably the most cutting-edge research going on in robotics today is being done by the military. Take BEAR, a military robot that's been in development since 2005. Unlike with humanoid or even single-task robots, in the military, robot design is more about function than form. So BEAR has some humanoid components, like a head and two arms, but instead of walking like a human, it has legs covered in treads, like a tank, with a joint that acts kind of like a knee and can rotate all the way around. Thanks to this special limb design, BEAR has proven really good at moving through rough terrain, including stairs. It can climb through debris, carry an injured soldier back to base, and haul light loads; tell it to go to a location, and it'll go there. That's a lot more than most civilian robots can do. But no discussion of military robots would be complete without talking about DARPA.
A division of the US Department of Defense, DARPA has spent the last 50 years turning even the wackiest concepts into working technology, and it's been one of the most active promoters of the field of robotics. DARPA has stoked innovation by holding robot competitions: it has held contests for autonomous vehicles, in which robotic cars raced hundreds of kilometers through the Mojave Desert, and in the summer of 2015, 11 teams will compete in the DARPA Robotics Challenge Finals, where human-supervised robots will try to carry out complicated and kind of dangerous tasks associated with disaster response, like scrambling over debris and carrying things to safety. The aim is to develop robots that can be sent into dicey situations without putting human lives at risk, and we're getting there.
So robot tech has come a long way since that first robotic arm in 1961. It's just that, as is often the case when it comes to humans and technology, we can dream up awesome designs and uses for robots faster than we can actually invent them. For now, I'm perfectly happy to have a clean kitchen floor and my freedom from the tungsten mines while science takes robotics to the next level.
Thanks for watching this SciShow Infusion. If you have questions or comments, be sure to let us know on Facebook, Twitter, Tumblr, or down below in the comments, and don't forget to go to YouTube.com/SciShow and subscribe.