YouTube: https://youtube.com/watch?v=E-VqgkRYitk
Previous: Active Volcanoes: The Perfect Egg Incubators
Next: Kids, Kawasaki Disease, and COVID-19: What Parents Should Know

Categories

Statistics

View count: 223,288
Likes: 8,406
Comments: 214
Duration: 10:46
Uploaded: 2020-05-31
Last sync: 2024-11-25 18:30

Citation

Citation formatting is not guaranteed to be accurate.
MLA Full: "4 Algorithms We Borrowed from Nature." YouTube, uploaded by SciShow, 31 May 2020, www.youtube.com/watch?v=E-VqgkRYitk.
MLA Inline: (SciShow, 2020)
APA Full: SciShow. (2020, May 31). 4 Algorithms We Borrowed from Nature [Video]. YouTube. https://youtube.com/watch?v=E-VqgkRYitk
APA Inline: (SciShow, 2020)
Chicago Full: SciShow, "4 Algorithms We Borrowed from Nature," May 31, 2020, YouTube video, 10:46, https://youtube.com/watch?v=E-VqgkRYitk.
We use algorithms every day for things like image searches, predictive text, and securing sensitive data. Algorithms show up all over nature, too, in places like your immune system and schools of fish, and computer scientists have learned a lot from studying them. Here are four ways our technology has improved thanks to algorithms we swiped from nature!

Hosted by: Hank Green

SciShow has a spinoff podcast! It's called SciShow Tangents. Check it out at http://www.scishowtangents.org
----------
Support SciShow by becoming a patron on Patreon: https://www.patreon.com/scishow
----------
Huge thanks go to the following Patreon supporters for helping us keep SciShow free for everyone forever:

Kevin Bealer, Jacob, Katie Marie Magnone, D.A. Noe, Charles Southerland, Eric Jensen, Christopher R Boucher, Alex Hackman, Matt Curls, Adam Brainard, Jeffrey McKishen, Scott Satovsky Jr, Sam Buck, Ron Kakar, Chris Peters, Kevin Carpentier, Patrick D. Ashmore, Piya Shedden, Sam Lutfi, Charles George, Christoph Schwanke, Greg

----------
Looking for SciShow elsewhere on the internet?
Facebook: http://www.facebook.com/scishow
Twitter: http://www.twitter.com/scishow
Tumblr: http://scishow.tumblr.com
Instagram: http://instagram.com/thescishow
----------
Sources:
https://www.merriam-webster.com/dictionary/algorithm
https://www.pnas.org/content/108/46/18726
https://link.springer.com/chapter/10.1007/978-3-642-00267-0_9
Fly hashing:
https://www.slaney.org/malcolm/yahoo/Slaney2008-LSHTutorial.pdf
https://science.sciencemag.org/content/358/6364/793
https://science.sciencemag.org/content/suppl/2017/11/09/358.6364.793.DC1
https://medium.com/@jaiyamsharma/efficient-nearest-neighbors-inspired-by-the-fruit-fly-brain-6ef8fed416ee
Object recognition:
https://www.sciencedirect.com/science/article/abs/pii/S1361841517301135 (open copy at https://arxiv.org/pdf/1702.05747.pdf)
https://dev.to/clarifai/what-convolutional-neural-networks-look-at-when-they-see-nudity
http://www.cse.iitm.ac.in/~vplab/courses/CV_DIP/PDF/Feature_Detectors_and_Descriptors.pdf
http://www.cs.cmu.edu/~10701/slides/Perceptron_Reading_Material.pdf
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2718241/
https://knowingneurons.com/2014/10/29/hubel-and-wiesel-the-neural-basis-of-visual-perception/
http://www.scholarpedia.org/article/Neocognitron
https://www.frontiersin.org/articles/10.3389/frobt.2015.00036/full
https://www.rctn.org/bruno/public/papers/Fukushima1980.pdf
Anomaly detection:
https://blogs.oracle.com/datascience/introduction-to-anomaly-detection
ftp://ftp.dca.fee.unicamp.br/pub/docs/vonzuben/lnunes/ieee_tec01.pdf
https://dl.acm.org/doi/10.1145/283699.283742
https://link.springer.com/content/pdf/10.1007/s11047-006-9026-4.pdf
https://www.cs.unm.edu/~immsec/publications/virus.pdf
https://www.sciencedirect.com/science/article/abs/pii/S221421261630240X
https://link.springer.com/content/pdf/10.1007%2F978-3-642-59901-9.pdf
Swarm algorithms and TERMES:
https://www.sciencedirect.com/science/article/pii/B9780124051638000016
https://dash.harvard.edu/bitstream/handle/1/13068244/Petersen_gsas.harvard.inactive_0084L_11836.pdf
https://www.youtube.com/watch?v=t2e4GIZ3W1o
https://science.sciencemag.org/content/343/6172/754
Simulated annealing:
https://pdb101.rcsb.org/learn/guide-to-understanding-pdb-data/methods-for-determining-structure
https://www.annualreviews.org/doi/pdf/10.1146/annurev.pc.42.100191.001213
https://pubs.acs.org/doi/full/10.1021/acs.cgd.9b00517#
https://www.sciencedirect.com/book/9780124167438/nature-inspired-optimization-algorithms
https://www.sciencedirect.com/topics/materials-science/annealing
https://science.sciencemag.org/sites/all/libraries/pdfjs/web/viewer.html?file=/content/sci/220/4598/671.full.pdf
https://www.sciencedirect.com/topics/materials-science/simulated-annealing
https://link.springer.com/article/10.1007/s12559-015-9370-8
https://onlinelibrary.wiley.com/doi/abs/10.1111/itor.12001

Image Sources:
https://commons.wikimedia.org/wiki/File:Hubel%2Bwiesel.jpg
https://www.flickr.com/photos/niaid/29122039942
https://commons.wikimedia.org/wiki/File:Diagram_showing_the_position_of_the_thymus_gland_CRUK_362.svg
{♫Intro♫}

When you think about algorithms, you probably think of Google searches or YouTube recommendations, or predictive text—situations where powerful computers are coming up with information you're looking for. An algorithm, though, is basically any recipe of calculations that a computer can follow to produce a specific kind of information.

And algorithms aren't just for computers. They show up all over nature, too, in places like your immune system and in schools of fish. And just as engineers borrow ideas from nature's physical designs, some computer scientists look for inspiration in nature's algorithms.

Here are four ways our technology has improved thanks to algorithms we swiped from nature.

Say you're looking for the perfect fuzzy animal photo to send as a virtual hug to your friend. An image search pulls up some cuddling kittens that are almost right, if only you could find a slightly more zoomed-out version….

What you want in this situation is something called nearest-neighbors search: an algorithm that can quickly search a big database to find the items most similar to one you specify. That gets harder as the database gets bigger, and on the internet, there are way too many images for a search engine to compare your photo with every single one. So how do search engines pull off the feature that gives you "visually similar images"?

One technique is called locality-sensitive hashing. This is a type of algorithm that digests each image into a short digital fingerprint called a hash, with similar hashes for similar inputs. For example, if your inputs were essays, a decent hash might be the first letters of the first twenty sentences.

So if one essay was copied from another, their hashes would likely be very close. This method makes it easy to find similar inputs. Instead of comparing your kittens to every other image on the internet, Google can organize images by their hashes and just pull out the similar ones.
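To make that essay example concrete, here's a toy Python sketch. It isn't from the video, and the twenty-sentence scheme and the sample "essays" are just illustrations of the idea:

```python
def essay_hash(text, n=20):
    """Toy locality-sensitive hash: the first letter of each of the
    first n sentences, padded with '-' for short essays."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return "".join(s[0].lower() for s in sentences[:n]).ljust(n, "-")

def hash_distance(h1, h2):
    """Count the positions where two hashes disagree (Hamming distance)."""
    return sum(a != b for a, b in zip(h1, h2))

original  = "Dogs are loyal. Cats are aloof. Birds sing."
copied    = "Dogs are loyal. Cats are independent. Birds sing."
unrelated = "Stocks fell today. Analysts were surprised. Markets closed early."

# A near-copy lands close to the original; an unrelated essay lands far away.
print(hash_distance(essay_hash(original), essay_hash(copied)))     # 0
print(hash_distance(essay_hash(original), essay_hash(unrelated)))  # 3
```

Real systems hash image features rather than letters, but the point is the same: similar inputs get similar fingerprints, so you only have to compare items that share a bucket.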

The catch is that locality-sensitive hashing can still be kind of slow, and sometimes inaccurate. That is where fly brains come to the rescue. See, a fly can smell, but it doesn't differentiate every subtle variation of odor; it groups odors into categories so it can learn that, for instance, cheese smells often lead to fruit, but book smells don't.

In 2017, a team of computer scientists and biologists realized that fly brains group odors using a form of locality-sensitive hashing. Except in the flies' version, the brain boils a smell down to a few numbers by first expanding the smell data into a much larger collection of numbers. Only then does it select a few of those numbers as the hash.

It's sort of like expanding an essay by replacing each character with a random 10-character code, producing a string of gibberish ten times as long. Then you could find the hundred gibberish words that appear most frequently, take the first letter, and use that as the essay's hash. As strange as that strategy sounds, it turns out to work really well.
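That expand-then-select trick can be sketched in a few lines of NumPy. This is only a loose illustration of the 2017 fly-hashing idea: the dimensions, the 10% sparsity, and the choice of k here are arbitrary, not the paper's numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
d, expansion, k = 50, 20, 16

# A fixed, sparse random projection: each of the expansion*d "neurons"
# listens to roughly 10% of the input dimensions, like fly olfaction.
proj = (rng.random((expansion * d, d)) < 0.1).astype(float)

def fly_hash(x):
    """Expand the input into many more numbers, then keep only the
    indices of the k strongest responses (winner-take-all)."""
    y = proj @ x
    return set(np.argsort(y)[-k:])

def overlap(h1, h2):
    """Fraction of shared hash indices; similar inputs share more."""
    return len(h1 & h2) / len(h1 | h2)

x    = rng.random(d)
near = x + 0.01 * rng.random(d)  # a barely perturbed copy of x
far  = rng.random(d)             # an unrelated input

print(overlap(fly_hash(x), fly_hash(near)))  # overlaps heavily with x's hash
print(overlap(fly_hash(x), fly_hash(far)))   # overlaps far less
```

The hash is just the set of winning indices, so comparing two items means comparing two small sets instead of two full vectors.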

All the extra gobbledegook gives the algorithm more opportunities to find patterns that jump out strongly for one cluster of inputs but are conspicuously absent from others. When the computer scientists built their own fly-based hashing algorithm, it was up to twice as accurate as traditional methods, and also twenty times faster!

Computer vision is everywhere. Self-driving cars, MRI technology, facial recognition: they all use it. Most of these systems need to do some form of object recognition, meaning they need to identify the contents of an image. For decades, computer scientists used handcrafted algorithms to extract image features like edges and contiguous shapes.

Then they could build other algorithms that used those features to guess what was in each part of an image. But all these hand-tuned algorithms tend to be fragile: it's up to the cleverness of engineers to design the right kinds of analysis and tweak the parameters just so.

Now, engineers are pretty clever, but there's only so much subtlety and detail they can code up. In the background, though, a different approach was taking shape: convolutional neural networks, or CNNs. In artificial intelligence, most kinds of neural networks are based on nature only in a crude way.

Like, they're called neural networks because they kind of work like neurons. But convolutional neural networks are based on Nobel Prize-winning research on cat brains. Back in the 1950s, a pair of neuroscientists discovered that some neurons in a cat's visual cortex, called simple cells, would respond only to simple visual elements—like a line in a specific place at a specific orientation.

Those simple cells pass information to so-called complex cells, which aggregate the information across a wider area. In other words, these researchers discovered a hierarchy in the brain's visual processing: earlier layers detect basic features at different locations, then later layers add all that together to detect more complex patterns.

That structure directly inspired the first convolutional neural networks. In the first layer of a CNN, each simulated neuron looks only at one small patch of the image and checks how well that matches a simple template, like a spot of blue or an edge between light and dark. The neuron gives the patch a score depending on how well that patch matches the neuron's template.

Then, the next level looks at all the scores for edges and spots in a slightly bigger patch and matches them against a more complex template, and so on up the hierarchy until you're looking for cat paws and bicycle wheels. A CNN learns these templates automatically from data, saving engineers from manually specifying what to look for. Today, CNNs totally dominate computer vision.
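The first layer of that hierarchy, scoring each patch against a simple template, can be sketched with plain NumPy. This is a hand-rolled toy with a made-up template and image; real CNNs learn their templates from data and stack many such layers:

```python
import numpy as np

def conv_layer(image, template):
    """Slide a small template over the image; each output score says
    how strongly the patch at that location matches the template."""
    th, tw = template.shape
    h, w = image.shape
    scores = np.zeros((h - th + 1, w - tw + 1))
    for i in range(h - th + 1):
        for j in range(w - tw + 1):
            patch = image[i:i + th, j:j + tw]
            scores[i, j] = np.sum(patch * template)
    return scores

# A template for a vertical light-to-dark edge.
edge = np.array([[1.0, -1.0],
                 [1.0, -1.0]])

# A tiny image: bright on the left, dark on the right.
img = np.array([[1.0, 1.0, 0.0, 0.0],
                [1.0, 1.0, 0.0, 0.0],
                [1.0, 1.0, 0.0, 0.0]])

scores = conv_layer(img, edge)
print(scores)  # scores peak in the middle column, where the edge sits
```

Feeding these score maps into another round of template matching over bigger patches is exactly the simple-cell/complex-cell hierarchy the cat research suggested.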

And although they now have bells and whistles that have nothing to do with the brain, the visual hierarchy is still baked in.

Next: companies really hate getting hacked. There are lawsuits and bad press, and it's pretty inconvenient for them and for the people who rely on them.

So if a company's network starts getting hammered with unusual traffic, it might be a good idea to lock things down. But detecting what counts as unusual traffic isn't always easy. It's an example of what's called anomaly detection, or scanning for atypical data, which can be tricky.

See, you can't just lay out rules for what normal traffic looks like. For one thing, what is normal is always changing. And anyway, hard rules would be too rigid: You wouldn't want a red alert before every holiday just because a bunch of employees traveled early and logged in from home.

It might be tempting to try supervised machine learning, where you show an algorithm lots of good and bad examples, and it figures out how to tell them apart. But with anomaly detection, you often don't have many examples of the bad stuff you're trying to catch! Most of what a company has, of course, is logs of normal network traffic.

So how can it learn what abnormal traffic looks like? One particularly cool solution is based on our bodies. Because you know what's really good at detecting a few bad guys in a sea of things that belong?

Your immune system. To recognize and kill off invaders, your immune system uses cells called lymphocytes, which have little receptors that detect foreign proteins. But your body actually produces a huge variety of lymphocytes, with receptors that detect pretty much any random protein snippet—including bits of proteins that are supposed to be around.

You don't want to attack those, so before your body releases its lymphocytes, your thymus gland selectively kills off the ones that would detect familiar proteins. As a result, the only lymphocytes that survive are ones that detect foreign proteins. This is called negative selection, and anomaly-detection algorithms can use a similar concept to spot unusual traffic.
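Here's a toy negative-selection sketch in Python. The traffic "logs" are just strings, and the detector format is made up for illustration:

```python
import random

random.seed(42)

def random_detector(length=4):
    """A detector is just a random string pattern (our toy 'receptor')."""
    return "".join(random.choice("abc") for _ in range(length))

def matches(detector, log_entry):
    return detector in log_entry

# Logs of normal traffic: everything here is "self" and must be tolerated.
normal_logs = ["aabbaabb", "abababab", "bbaabbaa"]

# Step 1: generate lots of random detectors.
detectors = {random_detector() for _ in range(500)}

# Step 2: negative selection, playing the thymus: kill any detector
# that fires on the normal traffic logs.
survivors = [d for d in detectors
             if not any(matches(d, log) for log in normal_logs)]

# Step 3: the surviving detectors flag only unusual traffic.
print(any(matches(d, "abababab") for d in survivors))  # False: normal
print(any(matches(d, "ccccabca") for d in survivors))  # True: anomalous
```

Real immune-inspired systems run detectors over feature vectors of traffic statistics rather than raw strings, but the generate-then-censor step is the same.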

They can generate detectors for random sequences of traffic data, then delete any detectors that go off on normal traffic logs. The detectors that remain respond only to abnormal patterns.

Finally, in lots of situations, it's crucial to have multiple computers coordinate to divide up a task: for example, to carry out a robotic search-and-rescue mission, or to index the entire internet.

When you have just a few computers in a network, it's easy to have one central command computer coordinate them all. But if you're coordinating hundreds of thousands of machines, or the machines are cut off from one another, controlling them from one central computer becomes impractical. So all those machines need a process they can each follow independently that still gets the job done efficiently and without horrible mistakes.

Little machines… acting independently… getting big projects done… sounds… kind of like an insect colony! As it happens, there's a whole niche of what are called swarm intelligence algorithms that tackle problems like this, and many are based on insect behavior. For example, there are construction robots that collaborate by imitating termites.

We still don't know exactly how termites build their massive mounds. But we do know that each worker can only see its local environment—what's been built right there and where surrounding workers are. That means the only way for the termites to coordinate is by leaving indirect signals for each other in their shared environment.

Like, when one termite does a bit of construction work, it leaves the soil arranged as some kind of cue to other termites about what needs to be done next. This indirect coordination strategy is called stigmergy. Inspired by termite stigmergy, a system of robots called TERMES allows a fleet of little robots to build arbitrary structures with no central coordination.
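A tiny grid-world sketch of that idea, purely illustrative and nothing like the actual TERMES control rules: each "robot" reads only the blocks already placed and extends whatever structure it bumps into.

```python
import random

random.seed(1)

# Target structure: which grid cells should end up holding a block.
target = {(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)}
built = {(0, 0)}  # a seed block every robot can find

def next_moves(built):
    """Target cells adjacent to something already built. The arrangement
    of existing blocks is the only 'signal' the robots ever read."""
    frontier = set()
    for (r, c) in built:
        for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            cell = (r + dr, c + dc)
            if cell in target and cell not in built:
                frontier.add(cell)
    return frontier

# Independent "robots" take turns: each one just picks some buildable
# spot on the frontier, with no messages and no central planner.
while built != target:
    built.add(random.choice(sorted(next_moves(built))))

print(built == target)  # True: the structure emerges without a coordinator
```

Each robot does carry the blueprint, as the real TERMES robots do; what's decentralized is the decision of where to build next, which is read entirely off the half-finished structure, the essence of stigmergy.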

Just by sensing what's been built and following some basic traffic rules, each robot figures out what to do next to get closer to the target structure. The hope is for similar robots to one day build complex structures even in hostile environments, like war zones or on Mars, without depending on a centralized controller.

Now, nature-inspired algorithms can get a bit out of hand.

People have designed algorithms based on wolf pack behavior, virus evolution, lightning paths, and on and on. Nature-inspired computing has been criticized for encouraging cute metaphors that don't add insight or are unnecessarily complicated. But as you've seen, sometimes natural phenomena really can make for great inspiration.

Nature can be quite the computer scientist, if you just know where to look! Thanks for watching this episode of SciShow! And if you want to learn more about the amazing ways that nature is inspiring modern technology, you might like our episode on how animals are helping us build better drones.

You can watch that right after this. {♫Outro♫}