scishow
DeepDream: Inside Google's 'Daydreaming' Computers
YouTube: | https://youtube.com/watch?v=3hnWf_wdgzs |
Previous: | Inside The World's Most Powerful New Microscopes |
Next: | How Smells Trigger Memories |
Statistics
View count: | 375,852 |
Likes: | 11,084 |
Comments: | 587 |
Duration: | 04:39 |
Uploaded: | 2015-07-17 |
Last sync: | 2024-10-19 18:45 |
Citation
Citation formatting is not guaranteed to be accurate.
MLA Full: | "DeepDream: Inside Google's 'Daydreaming' Computers." YouTube, uploaded by SciShow, 17 July 2015, www.youtube.com/watch?v=3hnWf_wdgzs. |
MLA Inline: | (SciShow, 2015) |
APA Full: | SciShow. (2015, July 17). DeepDream: Inside Google's 'Daydreaming' Computers [Video]. YouTube. https://youtube.com/watch?v=3hnWf_wdgzs |
APA Inline: | (SciShow, 2015) |
Chicago Full: | SciShow, "DeepDream: Inside Google's 'Daydreaming' Computers," July 17, 2015, YouTube, 04:39, https://youtube.com/watch?v=3hnWf_wdgzs. |
It may produce creepy images with way too many dogs and eyeballs, but Google’s DeepDream program is actually a valuable window into artificial intelligence.
Installation instructions: https://www.reddit.com/r/deepdream/comments/3cawxb/what_are_deepdream_images_how_do_i_make_my_own/
Hosted by: Hank Green
----------
Dooblydoo thanks go to the following Patreon supporters -- we couldn't make SciShow without them! Shout out to Justin Ove, Chris Peters, John Szymakowski, Peso255, Fatima Iqbal, Justin Lentz, and David Campos.
----------
Like SciShow? Want to help support us, and also get things to put on your walls, cover your torso and hold your liquids? Check out our awesome products over at DFTBA Records: http://dftba.com/scishow
Or help support us by becoming our patron on Patreon:
https://www.patreon.com/scishow
----------
Looking for SciShow elsewhere on the internet?
Facebook: http://www.facebook.com/scishow
Twitter: http://www.twitter.com/scishow
Tumblr: http://scishow.tumblr.com
Instagram: http://instagram.com/thescishow
Sources:
http://arxiv.org/pdf/1409.4842.pdf
http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html
http://googleresearch.blogspot.com/2015/07/deepdream-code-example-for-visualizing.html
http://gizmodo.com/googles-dream-robot-is-running-wild-across-the-internet-1715839224
https://github.com/google/deepdream/blob/master/dream.ipynb
http://gizmodo.com/these-are-the-incredible-day-dreams-of-artificial-neura-1712226908
You may have been seeing some cool but mildly disturbing pictures on the internet recently. I mean pictures that are more disturbing than usual, and in a different way from, like, that fish with people teeth. The images are part of a project that Google's calling "DeepDream," as in inception-style digital daydreams. Which is actually not too far from what they really are, because when you teach computers how to recognize images, and then ask them to produce their own, apparently this is what you come up with.
DeepDream is the latest step towards solving a problem that researchers have been dealing with since the 1960s: how to get a computer to recognize objects in an image. And so far, the best way seems to involve using neural networks, a form of artificial intelligence where you link up a bunch of individual processing units, or nodes, and have them work together. Usually, the nodes are organized into interconnected layers. You start out with some input, say an image, and each layer of nodes is designed to modify that image in a particular way. By adjusting the parameters, or rules for each node, you change what it does. But what makes neural networks really powerful, especially for image recognition, is that you can train them, which is how Google developed DeepDream.
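The layers-of-nodes idea can be sketched in a few lines of code. This is a toy illustration only, not the real DeepDream network: the weights, sizes, and the three-pixel "image" are all made up, and real networks like the one behind DeepDream have millions of learned parameters.

```python
def node(inputs, weights, bias):
    """One processing unit: a weighted sum of its inputs, clipped at zero."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, total)

def layer(inputs, weight_rows, biases):
    """A layer is just many nodes looking at the same inputs."""
    return [node(inputs, w, b) for w, b in zip(weight_rows, biases)]

# A toy two-layer network on a 3-pixel "image". Changing the weights
# (the parameters) changes what each node does to the input.
image = [0.2, 0.8, 0.5]
hidden = layer(image, [[0.5, -0.2, 0.1], [0.3, 0.9, -0.4]], [0.0, 0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])
```

Each layer's output becomes the next layer's input, which is the chaining the video describes.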
First, computer scientists took a database of 1.2 million images and classified every single one according to about a thousand categories, like bird or building. And as you can probably imagine, that was a pretty big project all by itself. Then, to train the network, they fed it those images and asked it to classify them. So let's say one of the inputs was a picture of a dog. If the network said that it was a dog, then great, it moved on to the next image. But if it didn't know what it was, or it came up with the wrong category, then researchers changed the parameters of different nodes until it did identify the image as a dog. It's like the network was learning, "Oh, the arrangement of pixels in this picture of a chihuahua is a dog, and this arrangement in a picture of a St. Bernard is also a dog."
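The check-the-answer-then-adjust loop can be sketched like this. Real training uses backpropagation over millions of labeled images; this hand-rolled perceptron-style update on invented three-number "images" just illustrates the adjust-the-parameters-until-it's-right idea.

```python
def predict(weights, features):
    """Classify by a weighted score: positive means 'dog'."""
    score = sum(w * f for w, f in zip(weights, features))
    return "dog" if score > 0 else "not dog"

def train(examples, rounds=20, rate=0.1):
    weights = [0.0, 0.0, 0.0]
    for _ in range(rounds):
        for features, label in examples:
            if predict(weights, features) != label:
                # Wrong answer: nudge each weight toward (for dogs) or
                # away from (for non-dogs) these features.
                sign = 1 if label == "dog" else -1
                weights = [w + sign * rate * f
                           for w, f in zip(weights, features)]
    return weights

examples = [([0.9, 0.2, 0.1], "dog"), ([0.1, 0.8, 0.9], "not dog"),
            ([0.8, 0.3, 0.2], "dog"), ([0.2, 0.9, 0.7], "not dog")]
weights = train(examples)
```

After a few passes the weights settle on values that classify all four toy examples correctly, which is the "until it did identify the image as a dog" part of the description.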
The process of teaching neural networks to classify images isn't exactly new, but the specific architecture of the network used for DeepDream is new. And once they were done training it, the team wanted to test it out. They especially wanted to see what qualities the network used to distinguish between one category and another. So they fed the network some images and asked it to find certain other images in them using what it thought the defining characteristics of that category would be.
So say they gave DeepDream a picture that had nothing to do with dogs, and then told it to find dogs in there. To do that, it would use a process that developers likened to daydreaming, the way you look at the clouds and find shapes of animals in them. But DeepDream would actually modify the original image, highlighting the patterns that it found to fit the category. This allowed the developers to actually see what each individual layer was doing to the image, whether it was just enhancing certain lines, or if it was adding more specific details. And in the process, it showed the researchers how DeepDream defined categories like "dog" in its digital brain.
What they found was that the first layer of the network looked for hints of anything that might be a dog, just a vague outline or some edges of a shape. In reality, the curve of what the network thought was a dog's nose might be a human foot, but it enhanced that edge anyway and sent the new image off to the next layer for processing. Then the next layer took those edges and started to fill them in, looking for more details that it might be able to interpret as part of a dog, enhanced those, sent that picture on to the next layer, and so on.
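The "enhance whatever the layer responds to, then repeat" loop is, at its core, gradient ascent on the image. The actual released code does this with a Caffe network; this toy version stands in a made-up "layer response" (how closely the pixels match a target pattern) and nudges every pixel uphill a little each step. The pattern, step size, and four-pixel image are all invented for illustration.

```python
def layer_response(pixels, pattern):
    """Stand-in for a layer's activation: higher when pixels match the pattern."""
    return -sum((p - t) ** 2 for p, t in zip(pixels, pattern))

def dream_step(pixels, pattern, rate=0.1):
    # Move each pixel slightly in the direction that raises the response.
    # (The derivative of the squared-error term above is 2 * (t - p).)
    return [p + rate * 2 * (t - p) for p, t in zip(pixels, pattern)]

pattern = [0.9, 0.1, 0.8, 0.2]   # what this toy "layer" looks for
pixels = [0.5, 0.5, 0.5, 0.5]    # featureless input, like random static
for _ in range(50):
    pixels = dream_step(pixels, pattern)
```

Run enough steps and the featureless input drifts toward the pattern the "layer" responds to, which is how random static can end up looking like bananas.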
Using this system, the network was able to turn even a screen full of random static into a picture of bananas. Now the database happened to have more pictures in some categories than in others, which is why you see a lot of things like dogs and eyes in the final DeepDream images.
But beyond just creating trippy pictures, the results are useful insights into how the network, for lack of a better word, thinks. For example, the researchers noticed that when they asked the network to come up with pictures of dumbbells, it also included disembodied human hands grasping the dumbbells. That's when they realized that they should probably train it with some pictures of dumbbells without people holding them.
So these robot dreams are more than just pretty or creepy; they're also valuable research tools as scientists teach computers to be better at recognizing objects in images. And if you want to generate your own DeepDream pictures, you can. Google just released some of the code, and you can install a simulated version of the network on your computer. There are links to instructions in the description below. We're interested to see what you come up with, so tweet us your pictures.
Thanks for watching this episode of SciShow News, and thanks especially to our patrons on Patreon who make this channel possible. If you want to help us keep making this show and get some cool stuff, you can go to patreon.com/scishow. And if you want to keep getting smarter with us, just go to youtube.com/scishow and subscribe.