


Thanks to deepfakes, CGI can be more realistic and even amateurs can easily create some fun footage. But there are also sinister uses out there.

Hosted by: Hank Green

Jordan Peele's deepfake Obama video:

SciShow has a spinoff podcast! It's called SciShow Tangents. Check it out at
Support SciShow by becoming a patron on Patreon:
Huge thanks go to the following Patreon supporters for helping us keep SciShow free for everyone forever:

Kevin Carpentier, Eric Jensen, Matt Curls, Sam Buck, Christopher R Boucher, Avi Yashchin, Adam Brainard, Greg, Alex Hackman, Sam Lutfi, D.A. Noe, Piya Shedden, KatieMarie Magnone, Scott Satovsky Jr, Charles Southerland, Patrick D. Ashmore, charles george, Kevin Bealer, Chris Peters
Looking for SciShow elsewhere on the internet?

Image Sources:


Thanks to CGI, people are used to seeing something that looks incredibly real but is actually fake, like Captain America's body.  There's no way that's real.  The computer-generated images from TV shows and movies and video games require millions of dollars and months of work by trained professionals, but that could be changing thanks to deepfakes.  You probably already know about deepfakes.  They're based on machine learning and artificial intelligence and they're beginning to give amateurs the power to doctor footage quickly and on the cheap.

Deepfakes aren't perfect quality right now, but in the future, they could be pretty convincing, maybe even indistinguishable from reality.  And if that happens, well, people are already preparing.  A deepfake video is one where artificial intelligence has been used to make it appear that someone is doing or saying something that they didn't actually do or say, like a face swap: putting one person's face on another person's body.  You can use this for fun and enjoyment, like the person who changed clips of famous films so that they all starred, who else but Nicolas Cage.  But there are also more sinister uses out there.

For instance, in 2018, director Jordan Peele created a convincing proof-of-concept video of Barack Obama saying a bunch of funny swear words.  The video was obviously not real, and it said so in the video, but it was an important PSA about how easily political videos can be faked.  Also, unfortunately, we can't avoid mentioning what around 95% of deepfake videos are right now: pornography that's been nonconsensually edited to feature celebrities and even ordinary people.  So, bad stuff.

Deepfake technology is really new but it's evolving quickly.  By one measure, the number of these videos almost doubled in 2019, and that's likely because they're becoming pretty easy to make.  Deepfakes take advantage of new, powerful computing methods called generative adversarial networks, or GANs.  To make one, you start with neural networks, a type of teachable AI inspired by the connections between the neurons in your brain.  You take two of these networks and you give them lots of training data, which in this case means lots of pictures and/or videos of the person you want to fake.

Then, you get your networks to compete with each other.  One network is tasked with using the training data to make a deepfake, to take some person's face and animate it onto a template, and the other network is tasked with watching that deepfake, comparing it to the training data, and guessing whether or not the footage is real.  These are called the generative network and the discriminative network, respectively.  As these two sides compete with each other over multiple rounds, the generative network gets better and better at creating convincing deepfakes, and eventually, it reaches an equilibrium where the discriminative network can't tell real from fake.
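That back-and-forth can be sketched in code.  This is a deliberately tiny, illustrative toy, not a real deepfake system: instead of networks generating images, the "generator" here is a single number trying to mimic data clustered around 4.0, and the "discriminator" is a one-parameter logistic classifier.  The adversarial loop is the same idea, though: the discriminator learns to score real data high and fakes low, while the generator nudges its output in whatever direction raises the discriminator's score.

```python
# Toy generative adversarial setup in pure Python (illustrative only).
# Real GANs use deep neural networks on images; here both "networks"
# are single-parameter models on 1-D numbers.
import math
import random

random.seed(0)

REAL_MEAN = 4.0          # the "training data": numbers clustered near 4


def sigmoid(z):
    z = max(-30.0, min(30.0, z))      # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))


def train_gan(steps=4000, lr_d=0.05, lr_g=0.02):
    theta = 0.0          # generator's one parameter: where it puts its fakes
    w, b = 0.0, 0.0      # discriminator: d(x) = sigmoid(w*x + b)

    for _ in range(steps):
        real = random.gauss(REAL_MEAN, 0.5)
        fake = theta + random.gauss(0.0, 0.5)

        # Discriminator step: push d(real) toward 1 and d(fake) toward 0
        # (gradient descent on -log d(real) - log(1 - d(fake))).
        s_real = sigmoid(w * real + b)
        s_fake = sigmoid(w * fake + b)
        w += lr_d * ((1 - s_real) * real - s_fake * fake)
        b += lr_d * ((1 - s_real) - s_fake)

        # Generator step: move theta so the discriminator scores the
        # fake higher (gradient descent on -log d(fake)).
        s_fake = sigmoid(w * fake + b)
        theta += lr_g * (1 - s_fake) * w

    return theta, w, b


theta, w, b = train_gan()
print(f"generator settled near {theta:.2f} (real data centered at {REAL_MEAN})")
```

Even in this toy version you can see the equilibrium the transcript describes: once the generator's output overlaps the real data, the discriminator's scores for real and fake converge and neither side has much gradient left to exploit.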

Of course, your deepfake is only as good as your neural networks and just because the computer is fooled doesn't mean a human would be.  These days, you can make a cheap amateur deepfake video using about 250 images of the target and about two days of computer processing time, but those won't fool a person.  They're not perfect forgeries and are only good for like, the Nicolas Cage party trick or faking really low quality footage.  The best videos, like the Obama one, still involve professional touch-ups.  

Still, spotting deepfakes is getting harder.  Like, you used to be able to spot one by looking for someone who wasn't blinking enough, but lately, experts are saying that you can't rely on that anymore.  Human programmers keep improving the algorithms behind these fakes, just like the GANs improve themselves, so what works for detecting fakes today may not work tomorrow.  

So how close are we to deepfakes that are indistinguishable from the real thing?  That is still up for debate, but one way or another, faking the video doesn't seem to be the biggest hurdle here.  It's the audio.  We're getting pretty good at pasting faces onto bodies, and it's only a matter of time before GANs can generate something that really fools us, but making a person say something is a whole separate challenge.  

Generally, faking voices works in much the same way, with machine learning and GANs.  You just need a few recordings of a person, but the state-of-the-art voice-faking software isn't fooling anyone for now, likely because people just haven't studied it as much.  Still, that could change in the future, and while it would be really cool for movie makers, it could also make identifying trustworthy information that much harder.  

This is enough of a concern that some companies are already preparing for it.  For example, one called Truepic is trying to combat deepfakes by focusing on the metadata of photos and videos.  For smartphones, that means things like the GPS position of the phone when the video was filmed, plus hard-to-fake things like the phone's pressure readings and orientation.  The company has made a camera app that reads that data as you film the video and immediately sends it off to the company's servers.  That helps prove that the original is real and makes it harder to produce forged versions after it's taken.

Additionally, one group that makes fake audio clips has experimented with adding a watermark to their content, an artifact that will always identify the clip as being made by them.  Unfortunately, though, no matter how many safeguards people put in place, there will always be those who don't want to play by these rules.  But honestly, that's true for many kinds of content, and it's been possible to edit clips and make misleading videos for a very long time.  So no matter when the perfect deepfakes arrive, it's probably worth brushing up on your media literacy skills anyway.  If nothing else, it cannot hurt.

If you want to learn more about media literacy, we recommend checking out Crash Course Media Literacy.  It's produced by Complexly, the same studio that brings you SciShow, and it's a 12 episode series that talks all about understanding, analyzing, and accessing information.  You can find it at