


View count: 347,612


Citation formatting is not guaranteed to be accurate.
MLA Full: "Google in your Brain." YouTube, uploaded by vlogbrothers, 5 April 2010,
MLA Inline: (vlogbrothers, 2010)
APA Full: vlogbrothers. (2010, April 5). Google in your Brain [Video]. YouTube.
APA Inline: (vlogbrothers, 2010)
Chicago Full: vlogbrothers, "Google in your Brain," April 5, 2010, YouTube, 03:59,
In which Hank talks about the implications of 100% accurate speech recognition software as well as the unlikeliness of artificial intelligence and the possibilities of bionic implants.

He also talks about how bad Google is at understanding him.


Shirts and Stuff:
Hank's Music:
John's Books:


Hank's Twitter:
Hank's Facebook:
Hank's tumblr:

John's Twitter:
John's Facebook:
John's tumblr:


Other Channels
Crash Course:
Hank's Channel:
Truth or Fail:



A Bunny
( - -)
((') (')
"I'm still in my kid. Ask that it be that have to say I don't understand. Called grandpa, or I am going to say. The sky is because in space there are these giant birds like terry battle. That ended and over that atmosphere. Except when they are state with a gray. And that is why house are gray. There are signs. The disease. And I believe deep down in there. These birds that for leading the world in of the fact the color of our sky. And I will be a failure."

Good morning John, that was a dramatic reading of YouTube's auto-captioning of a video in which you implore me to teach your son about science. Because otherwise, he'll ask you why the sky is blue, and you'll tell him the story about pterodactyl poop, or something.

One thing is very clear: Google/YouTube's auto-captioning right now is very bad. You can turn it on by clicking down here on the CC button and sliding over, and you can see how badly it is auto-captioning me right now.

Generally, right now, Google's auto-captioning is "Donald, report back with an elephant" bad, but there's no denying that they're getting better at it, and Google, like it or not, has in-depth, analytical access to the world's largest database of recorded spoken words, which is YouTube. Kinda strange to think of it that way, but that is one of the things that YouTube is. They also have a very deep well of talent, very deep pockets, and strong economic incentives for being able to figure out the words that are coming out of my mouth. "Go to find the lady that held the brotherhood's web servers." If they can figure out what I'm talking about, they can run some ads down here, and over here, about exactly the stuff that I'm talking about.

This contextual ad thing is how Google makes all of its money, so they would like to be able to do that with video. They also want us to be able to talk into our phones, which I can do right now. "Google is sometimes kind of creepy." Google is the first kind of creepy. And that's pretty amazing. "Robot Nita, I am a role model. What are your genes?"

But there are broader implications of this technology that, ugh, kind of creep me out. Unfortunately for Google, understanding what words are being said and understanding what those words mean when strung together into a sentence are completely different tasks, and one of them, the one we're talking about right now, is quite difficult. "I have recently exciting sex of the senate last a couple days." But understanding the meaning of that sentence turns out to be even harder. Google has very strong economic incentives to figure out what sentences mean, and so far they have been profoundly unable to do that.

Even with those deep pockets, and all that talent, Google is completely unable to turn a sentence like the one I'm saying right now into useful information. The software would be required to do far too much analysis and synthesis of information on its own, and there's so much cultural context that goes into all the stuff that we say, and the tone of our voice. If the program could actually understand and synthesize data from these complex sentences, that would pretty much be artificial intelligence.

I think that we dramatically underestimate the complexity of our own brains. Artificial intelligence would require us to write the software that runs our brains, and we have no idea how our brains work. But that does not mean that 100% accurate speech recognition would not have huge implications for the world. If speech recognition were 100% accurate for two different languages, subtitling conversations using hand-held devices, or even in-ear or in-eye devices, would become trivial. The translations would of course be machine-based, and thus not entirely accurate, as anyone who has used Google Translate can tell you. But combining speech recognition with an actual bionic implant of a kind, which doesn't actually seem all that far off to me, would, I think, have really dramatic implications for what it means to be a person, and anybody who hasn't thought about that should read this book; it's called "Feed." So yes, these funny little snippets of wrongly translated text are funny.

They're cute, they're bad, and we talk way too fast for them to understand us. But that is just right now, and if we know anything about how technology works, it probably won't be too long before they figure it out, so it is probably very much worth you and me thinking about what it will mean when it does work. What it means when "Chinese cover President has heeded his advice from the beginning." John, I'll see you on Wednesday.