YouTube: https://youtube.com/watch?v=Cb8v0C8ZbXU
Previous: What Actually Happens on the Full Moon? | 8 Full-Moon Myths & Facts
Next: Why You Think Your Phone Just Buzzed

Categories

Statistics

View count: 191,171
Likes: 7,386
Comments: 527
Duration: 06:03
Uploaded: 2020-01-13
Last sync: 2024-10-17 03:45

Citation

Citation formatting is not guaranteed to be accurate.
MLA Full: "How Close Are We to the Perfect Deepfake?" YouTube, uploaded by SciShow, 13 January 2020, www.youtube.com/watch?v=Cb8v0C8ZbXU.
MLA Inline: (SciShow, 2020)
APA Full: SciShow. (2020, January 13). How Close Are We to the Perfect Deepfake? [Video]. YouTube. https://youtube.com/watch?v=Cb8v0C8ZbXU
APA Inline: (SciShow, 2020)
Chicago Full: SciShow, "How Close Are We to the Perfect Deepfake?", January 13, 2020, YouTube, 06:03, https://youtube.com/watch?v=Cb8v0C8ZbXU.
Thanks to deepfakes, CGI can be more realistic and even amateurs can easily create some fun footage. But there are also sinister uses out there.

Hosted by: Hank Green

Jordan Peele's deepfake Obama video: https://www.youtube.com/watch?v=cQ54GDm1eL0

SciShow has a spinoff podcast! It's called SciShow Tangents. Check it out at http://www.scishowtangents.org
----------
Support SciShow by becoming a patron on Patreon: https://www.patreon.com/scishow
----------
Huge thanks go to the following Patreon supporters for helping us keep SciShow free for everyone forever:

Kevin Carpentier, Eric Jensen, Matt Curls, Sam Buck, Christopher R Boucher, Avi Yashchin, Adam Brainard, Greg, Alex Hackman, Sam Lutfi, D.A. Noe, Piya Shedden, KatieMarie Magnone, Scott Satovsky Jr, Charles Southerland, Patrick D. Ashmore, charles george, Kevin Bealer, Chris Peters
----------
Looking for SciShow elsewhere on the internet?
Facebook: http://www.facebook.com/scishow
Twitter: http://www.twitter.com/scishow
Tumblr: http://scishow.tumblr.com
Instagram: http://instagram.com/thescishow
----------
Sources:
https://www.wired.com/story/prepare-deepfake-era-web-video/
https://www.bbc.co.uk/news/technology-49961089
https://www.youtube.com/watch?v=g5wLaJYBAm4
https://www.youtube.com/watch?v=8LhI-e2B8Lg
https://www.ft.com/content/4bf4277c-f527-11e9-a79c-bc9acae3b654
https://ieeexplore.ieee.org/abstract/document/8253599
https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf
https://arxiv.org/abs/1701.00160
https://machinelearningmastery.com/what-are-generative-adversarial-networks-gans/
https://www.wired.com/video/watch/researcher-explains-deepfake-videos
https://thenextweb.com/insights/2018/01/22/i-trained-an-ai-to-copy-my-voice-and-scared-myself-silly/
https://arxiv.org/abs/1910.06711

Image Sources:
https://www.istockphoto.com/photo/online-movie-stream-with-mobile-device-gm880861434-245318394
https://commons.wikimedia.org/w/index.php?sort=relevance&search=generative+adversarial+networks&title=Special:Search&profile=advanced&fulltext=1&advancedSearch-current=%7B%7D&ns0=1&ns6=1&ns12=1&ns14=1&ns100=1&ns106=1#/media/File:Woman_4.jpg
https://commons.wikimedia.org/wiki/File:Boy_1.jpg
https://commons.wikimedia.org/wiki/File:Man_2.jpg
https://commons.wikimedia.org/wiki/File:Woman_1.jpg
https://www.istockphoto.com/photo/technology-abstract-gm1148091793-309936308
https://www.videoblocks.com/video/fashion-vlogger-recording-on-mobile-video-camera-r-9sdu_tnju7waot5

(Intro)

Thanks to CGI, people are used to seeing something that looks incredibly real but is actually fake, like Captain America's body.  There's no way that's real.  The computer-generated images from TV shows and movies and video games require millions of dollars and months of work by trained professionals, but that could be changing thanks to deepfakes.  You probably already know about deepfakes.  They're based on machine learning and artificial intelligence and they're beginning to give amateurs the power to doctor footage quickly and on the cheap.

Deepfakes aren't perfect quality right now, but in the future, they could be pretty convincing, maybe even indistinguishable from reality, and if that happens, well, people are already preparing.  A deepfake video is one where artificial intelligence has been used to make it appear that someone is doing or saying something that they didn't actually do or say, like a face swap, putting one person's face on another person's body.  You can use this for fun and enjoyment, like the person who changed clips of famous films so that they all starred, who else but Nicolas Cage.  But there are also more sinister uses out there.

For instance, in 2018, director Jordan Peele created a convincing proof-of-concept video of Barack Obama saying a bunch of funny swear words.  The video was obviously not real and it said so in the video, but it was an important PSA about how easily political videos can be faked.  Also, unfortunately, we can't not mention what around 95% of deepfake videos are right now, pornography that's been nonconsensually edited to feature celebrities and even ordinary people, so bad stuff.

Deepfake technology is really new, but it's evolving quickly.  By one measure, the number of these videos almost doubled in 2019, and that's likely because they're becoming pretty easy to make.

Deepfakes take advantage of powerful new computing methods called generative adversarial networks, or GANs.  To make one, you start with neural networks, a type of teachable AI inspired by the connections between the neurons in your brain.  You take two of these networks and you give them lots of training data, which in this case means lots of pictures and/or videos of the person you want to fake.  Then, you get your networks to compete with each other.  One network is tasked with using the training data to make a deepfake, taking some person's face and animating it onto a template, and the other network is tasked with watching that deepfake, comparing it to the training data, and guessing whether or not the footage is real.  These are called the generative network and the discriminative network, respectively.  As these two sides compete with each other over multiple rounds, the generative network gets better and better at creating convincing deepfakes, and eventually it reaches an equilibrium where the discriminative network can't tell real from fake.
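
To make that competition concrete, here's a minimal sketch of a GAN training loop in PyTorch.  It's a toy: the two tiny fully connected networks, the image size, and the data are placeholders rather than an actual face-swapping pipeline, but the structure, a generator trying to fool a discriminator that's trying not to be fooled, is exactly the one described above.

```python
# Toy GAN training step (PyTorch): a sketch of the generator-vs-discriminator
# competition, not a real deepfake pipeline. Sizes and data are placeholders.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # placeholder sizes for flattened toy images

# Generative network: turns random noise into a fake "image".
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminative network: guesses whether an image is real training data or a fake.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images):
    # real_images: a (batch, image_dim) tensor of flattened images scaled to [-1, 1]
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real training images from generated fakes.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fakes), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to produce fakes that the discriminator labels "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

When training goes well, the discriminator's guesses drift toward a coin flip, which is the equilibrium mentioned above: it can no longer tell the generator's output from the real training data.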

Of course, your deepfake is only as good as your neural networks and just because the computer is fooled doesn't mean a human would be.  These days, you can make a cheap amateur deepfake video using about 250 images of the target and about two days of computer processing time, but those won't fool a person.  They're not perfect forgeries and are only good for like, the Nicolas Cage party trick or faking really low quality footage.  The best videos, like the Obama one, still involve professional touch-ups.  

Still, spotting deepfakes is getting harder.  Like, you used to be able to spot one by looking for someone who wasn't blinking enough, but lately, experts are saying that you can't rely on that anymore.  Just like the GANs themselves, the human programmers behind them keep improving their algorithms over time, so what works for detecting fakes today may not work tomorrow.
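
For reference, that old blinking check is simple enough to sketch.  This version assumes you already have six (x, y) landmark points around one eye per frame from some face-landmark detector (the video doesn't name a tool, so that input is an assumption), and the threshold and typical blink rates here are rough, illustrative numbers.

```python
# Toy blink-rate check using the "eye aspect ratio" heuristic.
# Input landmarks are assumed to come from an external face-landmark detector.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) points around one eye, ordered corner, top, top,
    corner, bottom, bottom. Roughly 0.3 when open, near 0.05 when closed."""
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_rate(per_frame_ears, fps, closed_threshold=0.2):
    """Count open-to-closed transitions and return blinks per minute."""
    blinks, was_closed = 0, False
    for ear in per_frame_ears:
        closed = ear < closed_threshold
        if closed and not was_closed:
            blinks += 1
        was_closed = closed
    minutes = len(per_frame_ears) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# A real person blinks roughly 15 to 20 times a minute; early deepfakes often
# showed far lower rates, but, as noted above, newer fakes have closed that gap.
```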

So how close are we to deepfakes that are indistinguishable from the real thing?  That is still up for debate, but one way or another, faking the video doesn't seem to be the biggest hurdle here.  It's the audio.  We're getting pretty good at pasting faces onto bodies, and it's only a matter of time before GANs can generate something that really fools us, but making a person say something is a whole separate challenge.

Generally, faking voices works in much the same way, with machine learning and GANs.  You just need a few recordings of a person, but state-of-the-art voice-faking software isn't fooling anyone for now, likely because people just haven't studied it as much.  Still, that could change in the future, and while it would be really cool for movie makers, it could also make identifying trustworthy information that much harder.
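
To give a rough idea of what GAN-based voice synthesis looks like under the hood, here's a heavily simplified generator loosely inspired by GAN vocoders such as MelGAN: a stack of 1-D convolutions that upsamples mel-spectrogram frames into raw audio samples.  The layer sizes are illustrative, and the adversarial part, a discriminator judging real versus generated audio, is left out for brevity.

```python
# Greatly simplified GAN-vocoder-style generator: mel-spectrogram frames in,
# raw audio samples out. Illustrative only; the discriminator and training loop
# that make it "adversarial" are omitted.
import torch
import torch.nn as nn

class ToyVocoderGenerator(nn.Module):
    def __init__(self, n_mels=80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=7, padding=3),
            nn.LeakyReLU(0.2),
            # Each transposed conv upsamples in time; 16 * 16 = 256x overall,
            # so one mel frame becomes roughly 256 audio samples.
            nn.ConvTranspose1d(256, 128, kernel_size=16, stride=16),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose1d(128, 64, kernel_size=16, stride=16),
            nn.LeakyReLU(0.2),
            nn.Conv1d(64, 1, kernel_size=7, padding=3),
            nn.Tanh(),  # waveform samples in [-1, 1]
        )

    def forward(self, mel):           # mel: (batch, n_mels, frames)
        return self.net(mel)          # audio: (batch, 1, frames * 256)

mel = torch.randn(1, 80, 100)         # 100 made-up mel frames
audio = ToyVocoderGenerator()(mel)    # shape (1, 1, 25600) audio samples
```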

This is enough of a concern that some companies are already preparing for it.  For example, one called Truepic is trying to combat deepfakes by focusing on the metadata of photos and videos.  For smartphones, that means things like the GPS position of the phone when the video was filmed, and hard-to-fake things like the phone's pressure readings and orientation.  The company has made a camera app that reads that data as you film the video and immediately sends it off to the company's servers.  That helps prove that the original is real and makes it harder to produce forged versions after it's taken.
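
The general shape of that approach, as a hypothetical sketch and not Truepic's actual app or protocol, is to fingerprint the footage at capture time, bundle the fingerprint with the sensor readings, and register it with a server right away.  The endpoint, field names, and helper function below are all made up for illustration.

```python
# Hypothetical capture-time registration, in the spirit of metadata-based
# verification. This is NOT any company's real API; the URL and fields are
# placeholders. Any later edit changes the file's hash, so it no longer
# matches the registered original.
import hashlib
import json
import time
import urllib.request

def register_capture(video_path, gps, pressure_hpa, orientation_deg,
                     server_url="https://example.com/register"):  # placeholder URL
    # Fingerprint the exact bytes that came off the camera.
    with open(video_path, "rb") as f:
        video_hash = hashlib.sha256(f.read()).hexdigest()

    record = {
        "video_sha256": video_hash,
        "captured_at": time.time(),
        "gps": gps,                         # e.g. {"lat": 46.87, "lon": -113.99}
        "pressure_hpa": pressure_hpa,       # barometer reading
        "orientation_deg": orientation_deg, # device orientation sensors
    }

    # Send the record to the verification server immediately after capture.
    req = urllib.request.Request(
        server_url,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```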

Additionally, one group that makes fake audio clips has experimented with adding a watermark to their content, an artifact that will always identify the clip as being made by them.  Unfortunately, though, no matter how many safeguards people put in place, there will always be those who don't want to play by these rules.  But honestly, that's true for many kinds of content, and it's been possible to edit clips and make misleading videos for a very long time, so no matter when the perfect deepfakes arrive, it's probably worth brushing up on your media literacy skills anyway.  If nothing else, it cannot hurt.
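
To show the flavor of the watermarking idea, and only the flavor, here's a toy scheme that hides a short tag in the least-significant bits of 16-bit audio samples.  Real audio watermarks are designed to survive compression and editing, which this one would not, and this is not the method that any particular group actually uses.

```python
# Toy audio watermark: hide a short byte tag in the least-significant bits of
# int16 samples. Purely illustrative; real watermarks are far more robust.
import numpy as np

def embed_watermark(samples, tag=b"FAKE"):
    """Hide `tag` in the least-significant bits of int16 audio samples."""
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    marked = samples.copy()
    marked[: len(bits)] = (marked[: len(bits)] & ~1) | bits  # overwrite LSBs
    return marked

def read_watermark(samples, n_bytes=4):
    """Recover n_bytes of hidden tag from the samples' least-significant bits."""
    bits = (samples[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

audio = (np.random.randn(16000) * 3000).astype(np.int16)  # one second of noise
marked = embed_watermark(audio)
assert read_watermark(marked) == b"FAKE"
```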

If you want to learn more about media literacy, we recommend checking out Crash Course Media Literacy.  It's produced by Complexly, the same studio that brings you SciShow, and it's a 12 episode series that talks all about understanding, analyzing, and accessing information.  You can find it at youtube.com/crashcourse.

(Endscreen/Credits)