YouTube: https://youtube.com/watch?v=39EdqUbj92U
Previous: The Olympics, FIFA, and why we love sports: Crash Course Games #17
Next: 100 Years of Solitude Part 1: Crash Course Literature 306

Statistics

View count: 1,001,324
Likes: 22,079
Comments: 1,995
Duration: 09:26
Uploaded: 2016-08-08
Last sync: 2024-04-12 15:15

Citation

Citation formatting is not guaranteed to be accurate.
MLA Full: "Artificial Intelligence & Personhood: Crash Course Philosophy #23." YouTube, uploaded by CrashCourse, 8 August 2016, www.youtube.com/watch?v=39EdqUbj92U.
MLA Inline: (CrashCourse, 2016)
APA Full: CrashCourse. (2016, August 8). Artificial Intelligence & Personhood: Crash Course Philosophy #23 [Video]. YouTube. https://youtube.com/watch?v=39EdqUbj92U
APA Inline: (CrashCourse, 2016)
Chicago Full: CrashCourse. "Artificial Intelligence & Personhood: Crash Course Philosophy #23." August 8, 2016. YouTube video, 09:26. https://youtube.com/watch?v=39EdqUbj92U.
Today Hank explores artificial intelligence, including weak AI and strong AI, and the various ways that thinkers have tried to define strong AI, including the Turing Test and John Searle's response to it, the Chinese Room. Hank also tries to figure out one of the more personally daunting questions yet: is his brother John a robot?

Curious about AI? Check out this playlist from Crash Course Artificial Intelligence: https://youtube.com/playlist?list=PL8dPuuaLjXtO65LeD2p4_Sb5XQ51par_b

--

All other images and video are either public domain, via VideoBlocks, or from Wikimedia Commons, licensed under Creative Commons BY 4.0: https://creativecommons.org/licenses/by/4.0/

--

Produced in collaboration with PBS Digital Studios: http://youtube.com/pbsdigitalstudios

Crash Course Philosophy is sponsored by Squarespace.
http://www.squarespace.com/crashcourse

--

Want to find Crash Course elsewhere on the internet?
Facebook - http://www.facebook.com/YouTubeCrashC...
Twitter - http://www.twitter.com/TheCrashCourse
Tumblr - http://thecrashcourse.tumblr.com
Support CrashCourse on Patreon: http://www.patreon.com/crashcourse

CC Kids: http://www.youtube.com/crashcoursekids

(PBS Digital Studios Intro)

Crash Course: Philosophy is brought to you by Squarespace. Squarespace, share your passion with the world.

Okay, guys, real talk: I'm kinda worried. I think my brother John might be a robot. I know it sounds ridiculous. He looks like a human, pretty much, and he acts like a human, most of the time. But how could I really, 100% for sure, know that he is what he looks like, at least without getting a close look at what's inside him: his head, his body, his inner workings? And keep in mind, I'm the younger brother. For all I know, Mom and Dad brought him home from RadioShack, not the hospital. So how can I tell whether my brother John Green is a human or just a really intelligent machine?

(Intro)

A couple of weeks ago, we talked about what it means to be a person, but a subject that we need to explore a little better is whether a non-living being, like a robot, could be a person too. This isn't just a concern for science fiction writers. It matters because technology is getting better all the time, and we need to figure out how we're going to treat potential new persons if we end up creating beings that we decide meet the threshold of personhood. I'm talking about robots, androids, Replicants, Cylons, whatever you call 'em. If you read and watch the right stuff, you know who I'm talking about.

Now, you might be thinking, "Don't we have artificial intelligence already, like on my phone?" Well, yeah, but the kind of AI that we use to send our texts, proofread our emails, and plot our commutes to work is pretty weak, in the technical sense. A machine or system that mimics some aspect of human intelligence is known as Weak AI. Siri is a good example, but similar technology has been around a lot longer than that: autocorrect, spell check, even old-school calculators are capable of mimicking portions of human intelligence. Weak AI is characterized by its relatively narrow range of thought-like abilities. Strong AI, on the other hand, is a machine or system that actually thinks like us: whatever it is that our brains do, Strong AI is an inorganic system that does the same thing. While Weak AI has been around for a long time and keeps getting stronger, we have yet to design a system with Strong AI.
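To make the distinction concrete, here's a minimal sketch of the kind of narrow mimicry Weak AI performs: a toy autocorrect built on fuzzy string matching. This example is mine, not from the video, and the word list is a hypothetical stand-in.

# A toy "Weak AI": naive autocorrect via fuzzy string matching.
# Hypothetical illustration; the dictionary is a made-up stand-in.
from difflib import get_close_matches

DICTIONARY = ["philosophy", "person", "robot", "intelligence", "machine"]

def autocorrect(word):
    # Return the closest dictionary word, or the input unchanged.
    matches = get_close_matches(word.lower(), DICTIONARY, n=1, cutoff=0.6)
    return matches[0] if matches else word

print(autocorrect("intelignece"))  # -> intelligence
print(autocorrect("robott"))       # -> robot

However well it corrects spelling, it will never do anything else; that narrowness is exactly what makes it weak.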

But what would it mean for something to have Strong AI? Would we even know when it happened? Way back in 1950, British mathematician Alan Turing was thinking about this very question, and he devised a test, now called the Turing Test, that he thought would be able to demonstrate when a machine had developed the ability to think like us. Turing's description of the test was a product of its time, a time in which there were really no computers to speak of, but if Turing were describing it today, it would probably go something like this: You're having a conversation via text with two individuals. One is a human and the other is a computer, or AI of some kind, and you aren't told which is which. You may ask both of your interlocutors anything you like, and they are free to answer however they like. They can even lie.

Do you think you'd be able to tell which one was the human? How would you tell? What sort of questions would you ask? And what kind of answers would you expect back? A machine with complex enough programming ought to be able to fool you into believing you're conversing with another human, and Turing said that if a machine can fool a human into thinking it's a human, then it has Strong AI. So in his view, all it means for something to think like us is for it to be able to convince us that it's thinking like us. If we can't tell the difference, there really is no difference.

It's a strictly behavior-based test, and if you think about it, isn't behavior really the standard we use to judge each other? I mean, really, I could be a robot; so could these guys who are helping me shoot this episode. The reason I don't think I'm working with a bunch of androids is that they act the way that I have come to expect people to act. At least, most of the time. And when we see someone displaying behaviors that seem a lot like ours, displaying things like intentionality and understanding, we assume that they have intentionality and understanding.
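The protocol itself is simple enough to sketch in code. This is my own hypothetical framing of the setup Turing describes, not anything from the video; the essential feature is that the judge sees only text, with the two respondents hidden behind anonymous labels.

# A minimal sketch of the imitation game. The machine respondent is a
# trivial placeholder; a real test would wire in an actual chatbot.
import random

def human_respondent(question):
    return input(f"(human, please answer) {question}\n> ")

def machine_respondent(question):
    return "That's a hard question. What do you think?"  # placeholder "AI"

def turing_test(questions):
    # Hide which respondent is which behind the anonymous labels A and B.
    players = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(players)
    for q in questions:
        print(f"Judge: {q}")
        for label, (_, respond) in zip("AB", players):
            print(f"{label}: {respond(q)}")
    guess = input("Which respondent was the machine, A or B? ").strip().upper()
    actual = "A" if players[0][0] == "machine" else "B"
    print("The machine fooled you." if guess != actual else "You caught it.")

turing_test(["What does a strawberry taste like?", "Why is lying wrong?"])

On Turing's view, once no line of questioning reliably separates A from B, the distinction between really thinking and merely passing as thinking has nothing left to hang on.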

Now, fast forward a few decades and meet contemporary American philosopher William Lycan. He agrees with Turing on many points and has the benefit of living in a time when artificial intelligence has advanced like crazy. But Lycan recognizes that a lot of people still think that, while you can make a person-like robot, you could never actually make a robot that's a person. For those people, Lycan would offer up this guy for consideration: Harry.

Harry is a humanoid robot with lifelike skin. He can play golf and the viola, he gets nervous, he makes love, he has a weakness for expensive gin. Harry, like John Green, gives every impression of being a person. He has intentions and emotions. You consider him to be your friend. So if Harry gets a cut and then motor oil rather than blood spills out, you would certainly be surprised. But Lycan says this revelation shouldn't cause you to downgrade Harry's cognitive state from person to person-like. If you would argue that Harry's not a person, then what's he missing?

One possible answer is that he's not a person because he was programmed. Lycan's response to that is: well, weren't we all? Each of us came loaded with a genetic code that predisposed us to all sorts of different things. You might have a short fuse like your mom or a dry sense of humor like your grandfather. And in addition to the coding you had at birth, you were programmed in all sorts of other ways by your parents and teachers: to use a toilet, to use silverware, to speak English rather than Portuguese. Unless, of course, you speak Portuguese, but if you do, you were still programmed. And what do you think I'm doing to you right now? I'm programming you. Sure, you have the ability to go beyond your programming, but so does Harry. That's Lycan's point.

Now, another distinction you might make between persons like us and Harry is that we have souls and Harry doesn't. You've probably seen enough Crash Course Philosophy by now to know how problematic this argument is, but let's suppose there is a God, and let's suppose that he gave each of us a soul. We, of course, have no idea what the process of ensoulment might look like, but suffice it to say, if God can zap a soul into a fertilized egg or a newborn baby, there's no real reason to suppose he couldn't zap one into Harry as well. Harry can't reproduce, but neither can plenty of humans, and we don't call them non-persons. He doesn't have blood, but really, do you think that that's the thing that makes you, you?

Lycan says Harry's a person. His origin and material constitution are different from yours and mine, but who cares? After all, there have been times and places in which having a different color of skin or different sex organs has caused someone to be labeled a non-person, but we know that that kind of thinking doesn't hold up to scrutiny.

Back in 1950, Turing knew no machine could pass his test, but he thought it would happen by the year 2000. It turns out, though, that because we can think outside of our programming in ways that computer programs can't, it's been really hard to design a program that can pass the Turing Test. But what will happen when something can?

Many argue that even if a machine does pass the Turing test, that doesn't tell us that it actually has Strong AI. These objectors argue that there's more to thinking like us than simply being able to fool us. Let's head over to the Thought Bubble for some flash philosophy.

Contemporary American philosopher John Searle constructed a famous thought experiment, called the Chinese Room, designed to show that passing for human isn't sufficient to qualify as Strong AI. Imagine you're a person who speaks no Chinese. You're locked in a room with boxes filled with Chinese characters and a codebook in English with instructions about which characters to use in response to which input. Native Chinese speakers pass written messages in Chinese into the room. Using the codebook, you figure out how to respond to the characters you receive, and you pass out the appropriate characters in return. You have no idea what any of it means, but you successfully follow the code. You do this so well, in fact, that the native Chinese speakers believe you know Chinese. You've passed the Chinese-speaking Turing Test, but do you know Chinese? Of course not. You just know how to manipulate symbols, with no understanding of what they mean, in a way that fools people into thinking you know something you don't. Likewise, according to Searle, the fact that a machine can fool someone into thinking it's a person doesn't mean it has Strong AI. Searle argues that Strong AI would require that the machine have actual understanding, which he thinks is impossible for a computer to ever achieve. Thanks, Thought Bubble.
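The room Searle describes is, at bottom, a lookup procedure, which makes it easy to caricature in a few lines. Here's a minimal sketch; the rule book and messages are my own hypothetical stand-ins, not Searle's.

# The Chinese Room as pure symbol manipulation: the program consults a
# rule book mapping input strings to output strings, with no
# representation anywhere of what the symbols mean.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会.",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(message):
    # Follow the rules; understanding never enters the picture.
    return RULE_BOOK.get(message, "请再说一遍.")  # "Please say that again."

print(chinese_room("你好吗?"))  # Convincing output, zero comprehension inside.

Searle's claim is that a digital computer, however sophisticated, is only ever doing what this dictionary lookup does: shuffling symbols by rule.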

One more point before we get out of here. Some people have responded to the Chinese Room thought experiment by saying, sure, you don't know Chinese, but no particular region of your brain knows English either. The whole system that is your brain knows English. Likewise, the whole system that is the Chinese Room, you, the codebook, the symbols, together know Chinese, even though the particular piece of the system that is you does not.

So, I've been thinking about it, and I'm still not convinced that John isn't a robot. In fact, Harry really drove home the point for me that we don't know what's actually going on inside any of us. But if it turned out that John, the John I've known my entire life, had motor oil instead of blood inside of him, well, he'd still be my brother.

Today we learned about artificial intelligence, including Weak AI and Strong AI, and the various ways thinkers have tried to define Strong AI. We considered the Turing Test and John Searle's response to the Turing Test, the Chinese Room. We also talked about William Lycan, Harry, and my brother, the still-possibly-but-probably-not android. Next time, we will look into an issue that's been lurking around this whole discussion of artificial intelligence: do any of us have free will?

This episode is brought to you by Squarespace. Squarespace helps you create websites, blogs, or online stores for your ideas. Websites look professionally designed regardless of skill level; no coding required. Try Squarespace at squarespace.com/crashcourse for a special offer. Squarespace, share your passion with the world.

Crash Course: Philosophy is produced in association with PBS Digital Studios. You can head over to their channel and check out a playlist of the latest episodes from shows like PBS Off Book, The Art Assignment, and Blank on Blank. This episode of Crash Course was filmed in the Dr. Cheryl C. Kinney Crash Course studio with the help of all of these awesome people and our equally fantastic graphics team is Thought Café.