YouTube: https://youtube.com/watch?v=KP9HFcK6YOM
Previous: When Waking up After Decades Turned out to Be Temporary
Next: To Heal the Brain, Sometimes We Need to Damage It

Categories

Statistics

View count: 738
Likes: 88
Dislikes: 2
Comments: 18
Duration: 05:11
Uploaded: 2018-08-06
Last sync: 2018-08-06 15:20
Your brain relies a lot on context to tell you what sounds are bouncing around in your ears, and without enough of that context it can get a little confused.

Hosted by: Brit Garner
----------
Support SciShow by becoming a patron on Patreon: https://www.patreon.com/scishow
----------
Dooblydoo thanks go to the following Patreon supporters:
Lazarus G, Sam Lutfi, D.A. Noe, سلطان الخليفي, Piya Shedden, KatieMarie Magnone, Scott Satovsky Jr, Charles Southerland, Patrick D. Ashmore, Tim Curwick, charles george, Kevin Bealer, Chris Peters
----------
Looking for SciShow elsewhere on the internet?
Facebook: http://www.facebook.com/scishow
Twitter: http://www.twitter.com/scishow
Tumblr: http://scishow.tumblr.com
Instagram: http://instagram.com/thescishow
----------
Sources:
http://www.utdallas.edu/~assmann/hcs6367/werker89.pdf
https://www.jstor.org/stable/27855552?seq=1#page_scan_tab_contents
http://www.haskins.yale.edu/SR/sr114/SR114_03.pdf
https://www.ncbi.nlm.nih.gov/books/NBK11007/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2947444/
https://www.nature.com/articles/nn0998_351
https://pdfs.semanticscholar.org/b574/e49db968fbe495c01f1ccd7525969a6dcf8d.pdf
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4091305/
http://psycnet.apa.org/fulltext/1981-07020-001.pdf
http://science.sciencemag.org/content/167/3917/392
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3677062/#SD1
https://www.npr.org/sections/thetwo-way/2018/05/16/611701171/yanny-or-laurel-why-people-hear-different-things-in-that-viral-clip
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4122614/
http://ecs.utdallas.edu/loizou/cimplants/cdemos.htm
https://academic.oup.com/cercor/article/18/2/278/333699
https://dash.harvard.edu/bitstream/handle/1/12785971/4121538.pdf?sequence=1
http://www.jneurosci.org/content/30/7/2414
https://commons.wikimedia.org/wiki/File:Lindos1.svg
https://www.cdc.gov/ncbddd/hearingloss/sound.html
https://www.researchgate.net/publication/238716960_Precise_and_Full-range_Determination_of_Two-dimensional_Equal_Loudness_Contours
https://www.nytimes.com/interactive/2018/05/16/upshot/audio-clip-yanny-laurel-debate.html
http://www.wbur.org/npr/611701171/yanny-or-laurel-why-people-hear-different-things-in-that-viral-clip
https://www.researchgate.net/publication/12444336_Relative_loudness_perception_of_low_and_high_frequency_sounds_in_the_open_and_occluded_ear?_sg=XYcJ_OO2JMa3CKCeMwzzdznpYL6EJa6hV3OoP95YGOZ2h4xMzLk4XBcIY73JS9cYMZqWJz9weQ
https://www.researchgate.net/publication/266945616_Development_and_Current_Status_of_the_Cambridge_Loudness_Models?_sg=9JJ23CU_PHhwFOpklZ6r9PGAzw6wtb9DSsoOES9ZssY7R9uIf3u4Cj1bxImVIZJ-qw8ALU2g0Q
http://www.cns.nyu.edu/~david/courses/perception/lecturenotes/loudness/loudness.html
https://asa.scitation.org/doi/abs/10.1121/1.395554
https://biology.stackexchange.com/questions/27822/why-do-adults-lose-hearing-at-high-frequencies
[INTRO ♪].

Communicating is hard. And I don’t just mean figuring out what your dad really means when he says your new hair color is “a bold choice.” It takes a whole lot of processing to understand what words people are saying at all.

Like, say your cousin calls you on the phone to say how excited they are for you to meet their new friend Laurel. ...or was it Yanny? Ugh. How you hear language depends a lot on the context, like what you're expecting, visual cues, and even what culture you were raised in.

And without those cues, you can be tricked into hearing something you actually didn’t. In English, we use about 44 distinct sounds to communicate. Different letters can stand for the same sound, like C and K, and some sounds take two letters to write, like TH.

These units are called phonemes. And there are actually hundreds of phonemes used across all the languages on Earth. But if you only speak English, chances are you can only hear the difference between those 44.

For example, in the words "keep" and "cool," adult English speakers generally hear the same sound at the beginning. But a fluent Arabic speaker would likely hear two different sounds: a scratchier KHEE and a sharper COO. And the weird thing is: 6-month-old babies can also hear a meaningful difference.

This has been tested using habituation studies, which are based on the idea that when a baby is sucking on a pacifier and sees something new, their sucking slows down, so they can take it all in. Using this method, researchers have found that 6-month-old infants can tell apart pretty much any phonemes you play them through headphones. They seem to be processing all the sounds generally, not through the lens of any particular language.

But they lose this ability by 10-12 months. Then, they can really only discern sounds from whatever languages they've been surrounded by in that first year. It’s possible that this is because any extra neural connections that could discern other sounds get eliminated from lack of use.

Language gets lateralized in the brain, so after a year, we process it pretty much just with our left hemisphere. So these phoneme differences can make it tricky to learn new languages... but don't worry, there are plenty of chances for confusion in your native language, too. Take the McGurk effect.

It’s an illusion where people think they hear two different sounds based on what someone's face is doing— even though the audio is exactly the same. Like, try this: watch one side of the screen for a couple seconds, and then the other, and see if you hear a change. Ba ba ba ba ba ba ba.

What you really heard was just "ba" with a B. But for most people, that shifts from “ba” to “fa” with an F depending on the mouth movements you’re looking at. So what happens if you don't have a person's face to look at, like on a phone call?

Most of the time people use context to figure out ambiguous sounds. And one way to see this is called the Ganong effect. In one study, researchers artificially made a sound halfway between a D and a T, so most people couldn't tell the difference.

When they put that sound at the start of a word with A-S-H at the end, most people think it's a D— because "dash" is an English word, and "tash" isn't. But if they put A-S-K after it, most people think it's a T, because "task" is an English word, even though it's the same ambiguous sound. People’s brains are so good at this that you can even replace the sound with a short burst of static.

And this can be taken to some extremes. For example, listen to this: [heavily distorted audio clip]. Not a lot there, right?

If you don't have any context to guide your expectation, it can be really hard to understand noisy sounds. But if you know it's going to be someone saying "the football player lost a shoe"... [same heavily distorted audio clip]. It becomes easier to hear that message even though it’s garbled, and it’s super hard to unhear it.

It’s hard to say exactly why these illusions happen. But neuroimaging studies have found that they tend to involve parts of the temporal lobe, which is involved in hearing and language. For example, one fold of the temporal lobe, called the superior temporal sulcus, seems to help integrate auditory and visual information.

Now, even if you have context, there’s the added fact that people can't hear all frequencies of sound equally well. Most people can hear frequencies from about 20 Hertz to 20,000 Hertz. But we hear best in the range of human voices, usually around 250 to 6,000 Hertz.

Really low and really high frequencies need to be cranked up to a higher amplitude before people perceive them as equally loud. You can illustrate this with an equal loudness curve. This probably has to do with the anatomy of the inner ear and which frequencies can reverberate through it the most to stimulate nerves— although hearing is pretty complex.
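
The video doesn’t give the math, but one standard approximation of these equal-loudness contours is the A-weighting curve from the IEC 61672 sound-level standard. A minimal sketch (the formula is from that standard, not from the video) shows how much quieter a tone sounds relative to a 1,000 Hertz reference:

```python
import math

def a_weight_db(f):
    """A-weighting (IEC 61672): relative loudness correction, in dB,
    for a pure tone at frequency f in Hz. 0 dB at 1 kHz by construction;
    more negative = perceived as quieter at the same physical amplitude."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0

for f in (100, 1000, 8000):
    print(f"{f:>5} Hz: {a_weight_db(f):+6.1f} dB")
```

A 100 Hertz tone comes out around 19 dB quieter than the 1,000 Hertz reference, which is why low bass has to be cranked up so much to sound just as loud.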

So remember that “yanny” and “laurel” recording that went viral? The sound of "yanny" was above 4500 Hertz, which is near the top of where our brains expect spoken language to be. Some people just hear those frequencies a little better or worse.

The recording was of the word “laurel.” But on devices that can’t reproduce lower frequencies as loudly, some of those “yanny” frequencies could be louder by comparison. So it made an ambiguous noise— like the sound halfway between D and T— that you could use context to make sense of. And, like the shoeless football player, lots of people found they could change what they heard just by focusing their brains on one word or the other.
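
A toy model of that playback effect (not the actual viral clip’s processing — just a one-pole high-pass filter standing in for a small speaker, with made-up example frequencies) shows how low-frequency content gets attenuated while high-frequency content passes through almost untouched:

```python
import math

SR = 44100  # sample rate in Hz

def tone(freq, dur=0.1):
    """A pure sine tone at the given frequency."""
    return [math.sin(2 * math.pi * freq * n / SR) for n in range(int(SR * dur))]

def highpass(x, cutoff):
    """One-pole high-pass filter: a crude stand-in for a tiny speaker
    that can't reproduce low frequencies."""
    rc = 1.0 / (2 * math.pi * cutoff)
    alpha = rc / (rc + 1.0 / SR)
    y = [x[0]]
    for n in range(1, len(x)):
        y.append(alpha * (y[-1] + x[n] - x[n - 1]))
    return y

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

low, high = tone(300), tone(5000)   # "laurel-ish" vs "yanny-ish" frequencies
low_ratio = rms(highpass(low, 1000)) / rms(low)     # low tone: heavily attenuated
high_ratio = rms(highpass(high, 1000)) / rms(high)  # high tone: mostly passes
print(low_ratio, high_ratio)
```

With the low frequencies knocked down like this, the high-frequency “yanny” energy dominates the mix, and an ambiguous sound is all your brain has left to interpret.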

Basically, brains are really weird and amazing, so thanks for exploring them with us here on SciShow Psych. To learn about ways to trick your other senses, check out our video about the rubber hand illusion. And don’t forget to go to youtube.com/scishowpsych and subscribe! [OUTRO ♪].