crashcourse
Psycholinguistics: Crash Course Linguistics #11
YouTube: | https://youtube.com/watch?v=A5uNFKEn4_A |
Previous: | Alkene Redox Reactions: Crash Course Organic Chemistry #17 |
Next: | Language Acquisition: Crash Course Linguistics #12 |
Categories
Statistics
View count: | 193,347 |
Likes: | 6,381 |
Comments: | 134 |
Duration: | 11:03 |
Uploaded: | 2020-12-04 |
Last sync: | 2024-10-23 12:00 |
Citation
Citation formatting is not guaranteed to be accurate.
MLA Full: | "Psycholinguistics: Crash Course Linguistics #11." YouTube, uploaded by CrashCourse, 4 December 2020, www.youtube.com/watch?v=A5uNFKEn4_A. |
MLA Inline: | (CrashCourse, 2020) |
APA Full: | CrashCourse. (2020, December 4). Psycholinguistics: Crash Course Linguistics #11 [Video]. YouTube. https://youtube.com/watch?v=A5uNFKEn4_A |
APA Inline: | (CrashCourse, 2020) |
Chicago Full: | CrashCourse, "Psycholinguistics: Crash Course Linguistics #11," December 4, 2020, YouTube, 11:03, https://youtube.com/watch?v=A5uNFKEn4_A.
We couldn't have language without the brain, but our brains are a bit harder to study than the other parts of the body that we use to make language, like our mouths and hands. In this episode of Crash Course Linguistics, we'll learn about the field that studies where and how language happens in the brain, called psycholinguistics. We'll cover old and new research in the field, classic studies, and the methods psycholinguists use to uncover the connections between language and the brain.
Acknowledgment: Suzy Styles
Want even more linguistics? Check out the Lingthusiasm podcast, hosted by the writers of Crash Course Linguistics: https://lingthusiasm.com/
***
Watch our videos and review your learning with the Crash Course App!
Download here for Apple Devices: https://apple.co/3d4eyZo
Download here for Android Devices: https://bit.ly/2SrDulJ
Crash Course is on Patreon! You can support us directly by signing up at http://www.patreon.com/crashcourse
Thanks to the following patrons for their generous monthly contributions that help keep Crash Course free for everyone forever:
Eric Prestemon, Mark, DAVID MORTON HUDSON, Perry Joyce, Isaac Liu, Scott Harrison, Mark & Susan Billian, Junrong Eric Zhu, Alan Bridgeman, Jennifer Smith, Matt Curls, Tim Kwist, Jonathan Zbikowski, Jennifer Killen, Sarah & Nathan Catchings, Brandon Westmoreland, team dorsey, Trevin Beattie, Eric Koslow, Indika Siriwardena, Khaled El Shalakany, Shawn Arnold, Siobhán, Ken Penttinen, Nathan Taylor, William McGraw, Jirat, Brian Thomas Gossett, Ian Dundore, Jason A Saslow, Jessica Wode, Caleb Weeks
__
Want to find Crash Course elsewhere on the internet?
Facebook - http://www.facebook.com/YouTubeCrashCourse
Twitter - http://www.twitter.com/TheCrashCourse
Tumblr - http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse
CC Kids: http://www.youtube.com/crashcoursekids
Hi, I'm Taylor and welcome to Crash Course Linguistics!
Language happens thanks to the brain. This spongy thing sitting in our skulls is responsible for our ability to acquire complex linguistic skills like phonology, semantics, syntax, and reading.
It’s what helps us write poems and compose texts, or get the meaning of a raised eyebrow or a string of random emojis. Understanding the brain helps us understand how language is produced and interpreted. We can point directly to the parts of our mouths and hands that are used to make language, but you can’t reach in and feel your own brain.
Luckily, over the years, people have devised a whole range of ways of looking at the brain to figure out where and how language happens. This is the field of psycholinguistics. [THEME MUSIC] Interest in the relationship between language and the brain really picked up in the 19th century.
Researchers observed people with language disorders and then looked at their brains after they died. If damage to one part of the brain from a stroke, dementia, or a head injury correlated with a difference in the subject’s ability to understand or produce language, the scientists could infer that there was a relationship there. This kind of impairment of our language abilities caused by brain injury is known as aphasia.
Two famous kinds of aphasia discovered at the time are called Broca's aphasia and Wernicke's aphasia. The areas of the brain related to each kind of aphasia therefore became known as Broca’s area and Wernicke’s area. You may have heard about them in a Psychology or Anatomy class.
Broca's area is located around your left temple, and it was named after Paul Broca, a 19th century French physician who noticed that people who'd been injured in this part of their brain acted in similar ways. While they could still understand language, they could only produce maybe a handful of words, one at a time. Damage to Broca’s area affected their ability to speak or sign in a fluid, grammatical way.
In other words, it affected their ability to use morphosyntax. Right around the same time Broca was making his discovery, German physician Carl Wernicke discovered that if a different part of the brain was injured, there was a totally different effect. The people injured in this spot, located just above your left ear, tended to talk in a way that sounded fluent and grammatical but was nonsensical.
Wernicke’s area is associated with the meaning of language. But those 19th century studies were limited, and the brain is amazingly complex and flexible. More recent research has found that some people can sustain major damage to Broca’s area and never develop aphasia.
Other people can re-learn how to speak through extensive practice building on their ability to sing, which is controlled by a different part of the brain. These newer studies help us understand neuroplasticity, the brain's ability to flexibly build and reconnect its parts in response to injury or as part of learning. And though the language areas are usually located in the left hemisphere of the brain, some people’s language areas are found predominantly in the right hemisphere, or spread across both sides, especially for left-handed or ambidextrous people.
So the relationship between language and the brain is even more complicated than we first thought. Even now, errors and differences in language use can teach us about the different skills involved in language and how they're organized inside our minds. We all sometimes forget a word that we know perfectly well, or accidentally swap words, parts of words, or idioms, which you might encounter as spoonerisms, tip-of-the-tongue experiences, or mixed metaphors.
These production errors tell us valuable things about how the mind handles language. Like, you know when you just can't quite remember a word? You know it, you almost have it, it's right there... you just can't retrieve it.
This phenomenon is known as a Tip of the Tongue experience, and psycholinguists have found that people with a word on the tips of their tongues can often recall other information about it. They might remember its meaning, its first letter, and sometimes how many syllables it has, but they can't quite recall the complete word. Signed languages also have this phenomenon, which is known as Tip of the Fingers, naturally.
And signers experiencing Tips of the Fingers can also recall certain information about the sign they're seeking, especially the initial handshape and location of the hand. They just can't recall the movement they'd need to complete the sign. Tip of the Tongue and Finger experiences can show us how our thoughts are organized, because we can have access to the first letter or initial hand position without having access to the remaining sounds or movement.
Knowing a word isn't a binary state of "yes" or "no" like a computer. Our brains can also retain partial information. Production errors are so useful that psycholinguists have techniques for trying to get people to make even more of them, so they can study those errors in a laboratory setting.
Psycholinguists can induce Tip of the Tongue or Finger experiences by asking people to translate words or to recall proper nouns. Let’s head to the Thought Bubble to try another psycholinguistic experiment right here! In a moment two shapes are going to appear on the screen.
Let’s decide which one is called kiki, and which one is called bouba. Are you ready? It’s more than likely that you called the shape on the left ‘bouba’ and the shape on the right ‘kiki’.
About nine out of every ten people make that choice. Across many different languages, experiments have repeatedly shown that we think of voiceless stops like /k/ and high vowels like /i/ as small, sharp, crunchy, bright, and spiky, compared to voiced sounds like /b/ and rounded vowels like /u/, which we think of as big, lumpy, dark, and heavy. The kiki/bouba experiment shows us that language is related to other sensory experiences.
Yes, words are still made up of arbitrary smaller parts, but it’s not completely 100% arbitrary, and this mapping between senses can show up in some general tendencies for naming things across languages. One large study showed that words for "nose" were more likely to have a nasal sound like /m/ or /n/, across many different languages. Marketers are very aware of these links: bouba wouldn’t make a good name for a brand of crackers, but kiki would, because we generally want our crackers to be crispy.
But I’m sure bouba brand ice cream would taste much better — round and smooth and creamy. Despite these general tendencies, there are also language-specific differences. If your language doesn’t have a /b/ or /u/ sound, you might not think of ‘bouba’ as a possible word, so you might not associate it consistently with the blobby shape.
Different languages can also label the shapes differently depending on how their sound systems work. Tone can influence how Mandarin speakers label these shapes. The human brain doesn’t completely separate our linguistic knowledge from other knowledge of the world, and experiments like the kiki/bouba test help show that.
Thanks, Thought Bubble! Or should I say...thought bouba? That’s one kind of psycholinguistic experiment, but it’s far from the only one.
Psycholinguists might use a priming experiment to test how closely words are related in the brain. They “prime” the participant with one word and measure the speed of their responses to other words. If a subject is primed with the word “dog” and then has a faster response to “cat” than to other words, we might conclude that “cat” and “dog” are more closely related in the brain.
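To make that logic concrete, here's a minimal Python sketch of how a priming effect might be summarized. The reaction times and prime words are invented for illustration, not data from any real study:

```python
# Hypothetical priming data: reaction times (ms) to the target "cat"
# after a related prime ("dog") versus an unrelated prime ("chair").
# All numbers are made up for illustration.
related_rts = [512, 498, 530, 505, 521]    # prime: "dog"
unrelated_rts = [598, 610, 585, 602, 577]  # prime: "chair"

def mean(values):
    return sum(values) / len(values)

priming_effect = mean(unrelated_rts) - mean(related_rts)
print(f"Mean RT after related prime:   {mean(related_rts):.0f} ms")
print(f"Mean RT after unrelated prime: {mean(unrelated_rts):.0f} ms")
print(f"Priming effect: {priming_effect:.0f} ms faster after a related prime")
```

A faster average response after a related prime is what would suggest that the two words are stored close together.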
We can also use gating experiments, where we measure how much of a word a participant needs to hear or see until they know I’m saying, say, “park”, instead of “part”. Gating experiments show that sounds aren't always produced in discrete sequences like our alphabet makes them look. Like, most English speakers will produce the /k/ sound in "cube" a little bit differently than the /k/ sound in "calm".
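Here's a toy sketch of the gating idea, using letters as a stand-in for the audio fragments a real gating experiment would play. The candidate words are made up; the point is just to show how the set of possibilities shrinks as more of the word comes in:

```python
# Toy gating demo: how much of "park" do we need before it is the only
# remaining candidate? Real gating plays slices of the audio signal;
# letters are just a stand-in here, and the candidate set is invented.
candidates = ["park", "part", "party", "pardon"]

def gate(target, candidates):
    for n in range(1, len(target) + 1):
        heard = target[:n]
        remaining = [w for w in candidates if w.startswith(heard)]
        print(f"Heard '{heard}': still possible -> {remaining}")
        if remaining == [target]:
            return n
    return len(target)

segments_needed = gate("park", candidates)
print(f"'park' is uniquely identified after {segments_needed} segments")
```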
Psycholinguists have even looked into such mysteries as whether swearing helps us manage pain. In that study, psycholinguists compared how long people could hold their hand in a bucket of ice water when they were allowed to swear and when they were not. When people were allowed to swear, they could hold their hand in the iced water for longer.
Huh! I’m definitely going to find a practical application for this! Other ways of figuring out what's going on in the brain when we use language involve using various kinds of equipment.
Eyetracking studies try to figure out what we're thinking about based on what we're looking at. Let's say we're reading a sentence like this one: "The rabbit crouched on the cushion seemed friendly." Now, "The rabbit crouched on the cushion" is a totally reasonable English sentence, so that's where most people assume it's going at first.
But then when we get to the word "seemed", we need to re-evaluate. That's where eyetracking shows that a lot of people look back at the earlier portion of the sentence to figure out what's going on--in this case, a structure more like "The rabbit that was crouched on the cushion seemed friendly." Misleading sentences like these are called garden path sentences, because they seem to "lead you up a garden path" of one interpretation before you realize that there's actually a different structure going on. Eyetracking and garden path sentences show us that we process sentences as we're experiencing them; we don't wait until we've seen or heard a whole sentence before starting to figure out what's going on.
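As a rough illustration of that incremental processing, here's a toy Python sketch that walks through the episode's garden path sentence word by word and flags where the first-pass reading breaks down. The parse labels are illustrative stand-ins, not output from a real parser:

```python
# Toy garden-path walkthrough: process the sentence one word at a time
# and mark the point where the first-pass reading has to be revised.
# The parse labels below are illustrative, not from a real parser.
sentence = "The rabbit crouched on the cushion seemed friendly".split()

reading = "(still building)"
for i, word in enumerate(sentence, start=1):
    if word == "crouched":
        reading = "first pass: 'crouched' is the main verb"
    elif word == "seemed":
        reading = ("reanalysis: 'crouched on the cushion' is a reduced "
                   "relative clause and 'seemed' is the main verb")
    prefix = " ".join(sentence[:i])
    print(f"{prefix!r:55} -> {reading}")
```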
Electroencephalography, or EEG, records the electrical activity of firing neurons through a cap of small sensors on the scalp. A psycholinguist might hook a person up to an EEG and say a sentence like, “my favourite ice cream is chocolate and socks.” “Socks” is semantically unexpected information in a sentence that we assumed would be about food, so the brain reacts accordingly. And an EEG is especially good at indicating when a surge in electricity happens.
So here it might map a kind of surge, known as the N400, around 400 milliseconds after hearing “socks.” EEGs are quiet and relatively affordable, but they can be disturbed even by small things like blinking. Plus, they’re not that great at mapping where things happen in the brain. Functional magnetic resonance imaging, or fMRI, on the other hand, is relatively precise in locating brain activity, getting within a few millimeters of where the activity is happening.
It does this by measuring when there is increased oxygen in parts of the brain. The more neural activity, or thinking, the more blood goes to the area of the brain, and that blood brings lots of oxygen to help those busy neurons. For example, a psycholinguist might have someone learn, and recite back, a few words in a made-up language to see what happens in the brain when we try to learn a new language.
While fMRI is relatively precise in locating brain activity, it’s less precise about when that activity is happening. It only gets within a few seconds, while thoughts can happen in fractions of a second. fMRI machines are also very expensive and pretty dang uncomfortable to hang out in.
So there's sort of a tradeoff: EEG machines are precise about time, but imprecise about space, whereas fMRI machines are precise about space, but imprecise about time. These machines, with their precise data and complex graphs, might seem like just the thing 19th century researchers like Broca and Wernicke needed to understand the link between the brain and language.
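To illustrate the timing side of that tradeoff, here's a small Python sketch that averages simulated EEG trials and looks for the biggest negative dip in a 300 to 500 millisecond window after an unexpected word. Every number in it is invented; it's a sketch of the averaging logic, not a real analysis pipeline:

```python
import random

# Simulate 50 EEG trials sampled every 10 ms for 600 ms after word
# onset, with an N400-like negative dip added around 350-450 ms.
# All values are invented for illustration.
SAMPLE_MS = 10
N_SAMPLES = 60

def simulate_trial():
    trial = []
    for i in range(N_SAMPLES):
        t = i * SAMPLE_MS
        noise = random.gauss(0, 1.0)
        dip = -3.0 if 350 <= t <= 450 else 0.0  # N400-like deflection
        trial.append(noise + dip)
    return trial

trials = [simulate_trial() for _ in range(50)]

# Average across trials so the random noise cancels out (the ERP),
# then find the most negative point inside the 300-500 ms window.
erp = [sum(trial[i] for trial in trials) / len(trials) for i in range(N_SAMPLES)]
window = [(i * SAMPLE_MS, erp[i]) for i in range(N_SAMPLES) if 300 <= i * SAMPLE_MS <= 500]
peak_time, peak_amp = min(window, key=lambda pair: pair[1])
print(f"Most negative deflection at about {peak_time} ms ({peak_amp:.2f} microvolts)")
```

Averaging many trials is what makes the timing of that dip stand out so clearly, which is exactly the kind of temporal precision EEG offers and fMRI doesn't.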
But really, we need to approach them with just as much caution as those older experiments. There's still a lot of individual variation in how our brains get organized as we learn things, and lots of psycholinguistic work has been done with small numbers of people who speak dominant languages like English. So we only know a little about whether and how being fluent in other languages affects what happens in the brain.
There’s always more to learn. See you next time, when we talk about how we learn language in the first place! Thanks for watching this episode of Crash Course Linguistics.
If you want to help keep all Crash Course free for everybody, forever, you can join our community on Patreon.