YouTube: https://youtube.com/watch?v=xq7tSzfjolE
Previous: How to Write Directly on the Brain
Next: What Whistled Speech Tells Us About How the Brain Interprets Language

Statistics

View count:777,248
Likes:16,810
Comments:1,078
Duration:27:56
Uploaded:2020-08-06
Last sync:2024-04-11 09:00
There are some persistent myths about human psychology that appear on TV all the time. But people are complicated, and a lot of times, what we (and these shows) take to be true about human nature… may not be as accurate as we think.

Hosted by: Anthony Brown
----------
Support SciShow by becoming a patron on Patreon: https://www.patreon.com/scishow

SciShow has a spinoff podcast! It's called SciShow Tangents. Check it out at https://www.scishowtangents.org
----------
Huge thanks go to the following Patreon supporters for helping us keep SciShow free for everyone forever:

Bd_Tmprd, Jeffrey Mckishen, James Knight, Christoph Schwanke, Jacob, Matt Curls, Sam Buck, Christopher R Boucher, Eric Jensen, Lehel Kovacs, Adam Brainard, Greg, Sam Lutfi, Piya Shedden, Katie Marie Magnone, Scott Satovsky Jr, Charles Southerland, Charles george, Alex Hackman, Chris Peters, Kevin Bealer

----------
Looking for SciShow elsewhere on the internet?
Facebook: http://www.facebook.com/scishow
Twitter: http://www.twitter.com/scishow
Tumblr: http://scishow.tumblr.com
Instagram: http://instagram.com/thescishow
----------
Links to Original Episodes and Sources:

The Truth About Subliminal Messages
https://www.youtube.com/watch?v=dS_kKbIL4dY

Are Repressed Memories Real?
https://www.youtube.com/watch?v=ewc2U_eYnFY

Does Everyone Have a Midlife Crisis?
https://www.youtube.com/watch?v=Kis4Ziz0TPk

Is the Bystander Effect Real?
https://www.youtube.com/watch?v=Rv22TMtNNkI

Does IQ Really Measure how Smart You Are?
https://www.youtube.com/watch?v=7p2a9B35Xn0

[Intro music]

[Anthony] Even if you're a cynical person and you take the things you see on TV with a grain of salt, there are some myths about human psychology that run really deep. And they might just seem like facts of life, especially when they show up on TV all the time. 

But people are complicated and a lot of times what we take to be true about human nature isn't as true as we think. We've done plenty of episodes about these psychological myths over the last couple of years, and here we're going to take a look at some of the highlights.

Like, take the idea that subliminal messages can affect the way you act. In 1957, the first subliminal messages were supposedly hidden in a movie, and a year later US broadcasters banned that form of advertising. But can you really be influenced by signals you're totally unaware of? Brit has the scoop.


 The Truth About Subliminal Messages (00:47)



[Brit] In 1957, an advertising executive from New Jersey announced that he had convinced moviegoers at a local theater to buy more popcorn using subliminal messaging. He claimed that 45,000 moviegoers were exposed to flashes of the words "eat popcorn" and "drink Coca-Cola" on screen during a movie, with the words appearing and disappearing so quickly that viewers weren't even aware they were there.

And as a result, popcorn sales increased an average of 57.5% and Coke sales increased 18.1%. He got a lot of attention for this supposedly scientific test, because those numbers are huge. It was every advertiser's perfect fantasy and every consumer's worst nightmare.

The idea that we could be constantly influenced by messages we don't even realize we're getting freaked people out.

Except... the whole thing was a hoax! It turned out that there hadn't been an increase in popcorn or Coke sales at the theater in question. And according to the theater manager, there hadn't been an experiment at all.

But that hasn't stopped us from believing in the power of subliminal messages. Surveys from 1983, 1994, and 2004 show that about 3/4 of people who are familiar with subliminal messaging believe that companies use it. And a majority of those people think that it works.

Thankfully, research does not agree.

Subliminal perception is for sure a thing. We definitely can react to a stimulus even when we can't consciously perceive it.

Which is different from supraliminal perception: things that we do consciously perceive, even if we don't pay direct attention to them, like product placement. The line between those two is known as the subjective threshold. Then, below that, there's the objective threshold: the level at which we don't perceive or react to the thing.

Subliminal perception research dates back to a book published in 1898, when a psychologist was looking to confirm the idea of a so-called sub-waking self. In a few experiments, he asked about two dozen participants to read numbers or letters on cards. But he held the cards so far away that they could only see a blur or a small dot.

When forced to choose, participants could usually distinguish between numbers and letters. They got it right about two-thirds of the time. And they actually did better than chance at guessing exactly what was on each card. All of which suggested that they were perceiving the images on some level, even though they thought they were just guessing.

A 1951 study in the journal Psychological Review found even clearer evidence by conditioning people to associate certain nonsense words with an electric shock. Later, when the words were shown to them too briefly to be consciously seen, the researchers measured greater electrodermal activity for words associated with the shock.

That's a slight change in how well skin conducts electricity, which is associated with sweating. In other words, even though subjects believed they hadn't seen anything, their bodies still anticipated the jolt.

So, we know subliminal perception is real. But subliminal advertising doesn't really work.

Researchers have done plenty of studies, but no one seems to be able to show any real change in consumer behavior in response to subliminal ads.

For instance, a 1975 study flashed the words "Hershey's chocolate" over a movie, but the researchers found that none of the 33 subjects given the message bought Hershey's in the ten days after the exposure.

There's one exception though. A subliminal message can sort of work––but only if you're already motivated to follow it.

A 2002 study of 81 undergraduates found that they did drink more water when subliminally primed with words like "dry" and "thirsty"––but if and only if they were already thirsty. If they weren't, the hidden messages didn't do anything.

In a follow-up experiment in the same study, 35 undergraduates were asked to pick between sports drinks described as "thirst-quenching" or "electrolyte-restoring."

They were only more likely to prefer the "thirst-quenching" drink if they were subliminally primed with thirst-related words and were already thirsty.

So, subliminal messages aren't going to make you do things you don't want to, but they might nudge you gently in a direction you're already headed.

It makes sense that subliminal messaging wouldn't drastically impact your behavior. The message is so subtle that it only has a subtle effect.

Which is good, because subliminal advertising isn't technically illegal in the U.S. While Australia and the U.K. have laws against it, the U.S. doesn't forbid advertisers or networks from using it.

That said, a 1979 Supreme Court case ruled that it isn't protected by the First Amendment. And the Federal Communications Commission's official stance is that it's misleading and shouldn't be used. And, they reserve the right to yank the broadcasting license of anyone who does.

So nobody likes subliminal advertising; there just hasn't been that much effort put into stopping it. Probably because, in the end, it isn't that useful. Advertisers don't really care about sneakily targeting people who are already motivated to buy their stuff.

They'd rather invest in snagging new customers.

 Are Repressed Memories Real? (05:20)


[Anthony] Speaking of being influenced by your subconscious, you've probably heard plenty about repressed memories: memories of your past that you've buried deep down, but that still affect how you feel and act. You might even have seen TV characters haunted by memories that they don't remember at all. But can old, forgotten memories really explain your fears and desires today? Here's Hank with more on that.

[Hank] Alright, think back with me for a moment. Say like, ten years back. Ten years ago today, do you remember exactly what you were doing?

I definitely don't, and neither do most people. We might vaguely remember what life was like, but most of our daily memories from that long ago are just gone.

Still, some psychoanalysts would argue that you might have uncomfortable memories from that day hiding in the back of your mind, waiting to be rediscovered, also known as repressed memories.

That used to be a really popular idea, but now, we know these memories aren't always what they seem. The idea of repressed memories is sometimes thrown around in pop psychology but it has a pretty specific definition.

For one, a repressed memory isn't something you just haven't thought about for years, like your first elementary school art project.

And it's also not just, like, forgetting something. Like, how you probably can't remember what you had for breakfast three weeks ago. I can't remember what I had for breakfast this morning.

The real idea of a repressed memory comes from everybody's favorite misguided psychoanalyst, Sigmund Freud. His idea was that if you have thoughts or experiences that you don't want to deal with consciously, like memories of being abused, they'd get pushed into your unconscious mind. And Freud argued that everyone has all kinds of desires, motivations, and memories just waiting to be uncovered.

Back in the 1980s, it was common for therapists who were into Freud's ideas to suspect that their patients had repressed memories of trauma or abuse. But, unfortunately, some therapists might have been a bit overzealous in finding trauma when it wasn't actually there.

Many used guided imagery techniques with their patients, like imagining what a hypothetical abuse scenario might look like, to help them recall those supposedly hidden memories. Which sounds horrible, and today isn't seen as a useful therapy for dealing with abuse.

Aside from sounding just pretty unpleasant, it also looked a lot like how you can create false memories. Sometimes people can be bad at distinguishing their real memories from things that they just imagined happening to them. Like if you think that you remember something from when you were a baby because your family has told that story a bunch of times.

So telling people to imagine experiences makes them more likely to misremember them as true. For patients who had been abused, it was great that therapists were finally acknowledging how common it was and taking them seriously. But if a patient comes to your office saying that they've never been abused, you definitely don't want to accidentally convince them that they were.

Thanks to that imagery technique, it's likely that many supposedly repressed memories from around then were actually just things suggested by well-meaning therapists.

And research supports that idea. Some studies have shown that people who believe they've recovered repressed memories are more likely to get false memories.

For example, in a paper published in 2000 in Psychological Science, researchers studied 57 women. Some had always remembered abuse from earlier in their lives, but others had either supposedly recovered memories of abuse or suspected that they had repressed memories. They had all of these people and a control group, people who knew they had never been abused, take a memory test.

It involved remembering lists of related words and, in other research, most people end up creating false memories and accidentally remembering words that aren't on the list.

The results showed that people who had always remembered their traumatic memories were about as likely as the control group to have false memories of the missing words. But those who had recovered memories were about 20% more likely to have the false memories from the lists.

And this phenomenon happens with more significant events too, not only word lists. One study published in 1999 in the Journal of Traumatic Stress looked at 24 people and found that they could induce false memories of some unusual life events, like breaking a window with your hand, or getting stuck in a tree. They did this using the same guided imagery technique that therapists would use in finding recovered memories.

Another study from 1989 surveyed about 130 children whose school was attacked by a shooter. Several children remembered being at the scene of the shooting, but weren't actually anywhere near it.

One boy even remembered walking to school, turning back when he heard the shots, and seeing someone lying on the ground. Except, his parents confirmed that they were on vacation that day.

Now, it's important to remember that all of this research is correlational. No one researches repressed memories by randomly assigning some people to experience trauma to test their memory of it.

So we can't say repressed memories are always false. But we do know that it's really hard to demonstrate that they're reliable.

And for most people, it's really easy to get a false memory. Without corroborating evidence, it can be hard to distinguish a true recovered memory from a false one.

Still, when it comes to trauma and abuse, most people have continuous memories of it, so it's important to take them seriously.

And even if someone can't prove their repressed memories are real, having traumatic, troubling, or stressful thoughts is a good reason to talk to a professional anyway. No matter how much the internet and TV shows have to say about repressed memories, like most of Freud's ideas, they're definitely not as straightforward as they sound.

[Anthony] Well I guess that spoils a few plot lines. But on the bright side, it's good to know therapists are no longer planting traumatic memories in people. Now, as you get older and start forgetting the things that happened to you as a kid or young adult, you might enter a stage of your life that's tied to another common plot point in TV: the midlife crisis.

If you haven't hit your 40s yet, you might be wondering if you too will start dreaming of upending your life and buying a bright yellow Ferrari one day. But how worried should you really be about having a midlife crisis? I'll hand it over to Brit to explain.

 Does Everyone Have a Midlife Crisis? (11:05)


[Brit] If you haven't gotten to your midlife crisis yet, you're probably not looking forward to it. According to pop culture, people hit their 40s and suddenly become miserable. And to deal with it, they quit their jobs, buy sports cars they can't afford, and have affairs with much younger people.

Yikes. Still, if you think about it, it's pretty weird to think that turning a specific age would be enough to make us upend our lives. So is the midlife crisis really a thing?

Well, it's kind of complicated, but there's probably less to worry about than you think.

The term midlife crisis was coined by psychoanalyst Elliott Jaques in 1965. He believed that you had your crisis when you realized that you'd already lived more than half of your life.

He studied quote unquote "geniuses" like Bach, Shakespeare, and Mozart, most of whom either died tragically or became much more prolific after their late 30s. He thought that the fear of not accomplishing everything they wanted to either killed them or lit a fire under them.

Admittedly, he also thought that this didn't really apply to women, because they went through menopause instead.

That's clearly not accurate but, because of it, most midlife crisis stereotypes today are still about men. Of course, other thinkers at that time were also talking about developmental crises.

And the one who really popularized the idea of the midlife crisis was researcher Daniel Levinson. In 1975, he proposed that life was made up of a series of stable periods interspersed with crises known as transitional periods. He based his stages on work from previous psychologists and on his own study of 40 American men aged 35-45.

Levinson thought that the biggest transition, which happened in middle age, had to do with a sense of not accomplishing enough. He believed it could be dealt with by learning to set more reasonable goals. Still, tiny sample sizes of one group aren't always reliable.

So more recently, researchers have tried looking for the midlife crisis in bigger, more diverse samples. And they seem to have found it. One trend that has emerged is a U-curve in reported happiness levels. People seem to be happy early in life and at the end of it, but they slump in the middle.

This trend has been found in multiple studies, looking at over a million people in more than 50 countries. In 2013, one researcher proposed a possible explanation for the U-curve pattern after analyzing a 13-year long German study of 23,000 people.

He said it had to do with expectations. According to his hypothesis, young people expect to beat the average when it comes to careers and happy relationships––and when things don't quite work out that way, they're disappointed. They do eventually adjust their expectations, but not always fast enough to prevent that disappointment.

The result is pessimism and dissatisfaction, a double whammy of misery. But at some point, as they get older, those expectations do align with reality. Possibly because (according to some research) the aging brain is less prone to regret.

Life starts getting better, and because expectations are lower, it's a pleasant surprise that brings people back up the curve. Now, if this all sounds pretty depressing, it is worth mentioning that the U-curve isn't set in stone. 

It's still pretty hotly debated for a number of reasons.

For one, several recent studies have found that well-being simply increases as we age, without the middle-age dip. 

And there are some issues with the studies that showed the U-curve, too. Many of them are cross-sectional studies, meaning that they look at lots of different-aged people and use them to estimate trends over the lifespan. This is different from a longitudinal study, which follows the same subjects over a long period of time.

Longitudinal studies can be more accurate for long-term research, but not many have been done about midlife crises. Until recently, old age, childhood and adolescence were studied much more often than middle age.

Still, the longitudinal studies that have been done tend to show that steady increase in well-being. That could mean cross-sectional studies aren't entirely accurate. But we'll need to do more research to know for sure.

There's also an issue of definitions. You might call a "midlife crisis" a difficult transition that occurs around age 40. But different researchers have different criteria. Is it stressful? Is it eventful? Is it internally or externally driven?

Even when researchers do agree, the public's definition tends to be much broader. A 1992 study found that just 10% of people had had a midlife crisis when researchers determined whether they met the right criteria.

But, in a 2000 study, when people were directly asked if they'd had a midlife crisis, 26% of them said yes. The public's definition is similar to researchers', but it tends to include any stress or turmoil encountered between ages 30 and 65.

So the idea of the midlife crisis may prevail in pop culture partly because we take any stressful event in the middle of our lives and slap that label on it. 

One way or another, this is definitely a topic that needs more investigation. But the good news is that even if the U-curve does exist, it doesn't mean that middle-aged people are all miserable.

On average, studies so far have shown it's actually a pretty small decrease in happiness, not the life-altering angst we associate with the stereotype.

So don't worry about it too much––your job-quitting, Ferrari-buying phase may never arrive. 

[Anthony] So don't worry if you haven't started saving for that Ferrari: you might not need it after all.

Speaking of crises though, if you've ever thought about what would happen if you witnessed a crisis or even had a crisis in public, you might worry about something else you've likely seen on TV: the bystander effect.

You know, the idea that people are less likely to come to someone's rescue during a crisis if there are other people around. And that actually does sound like something to worry about.

But is the bystander effect really a thing? I'll stand back and let Brit handle that one.

 Is the Bystander Effect Real? (16:26)


[Brit] If you've ever taken a psychology class, you've probably heard the shocking story of Kitty Genovese.

As the story goes, she was murdered one night in 1964 with 38 witnesses, yet no one helped her or even called the police until it was too late. 

Reports about this horrible, bizarre event sparked research on what came to be known as the bystander effect. Despite what you'd think, it says that sometimes someone is actually less likely to help if there are others around. 

But even though it's talked about in every Intro Psych course, the bystander effect isn't as simple as more people = worse odds of getting help. 

Sometimes, more is better, and there are other factors that matter too. Oh, and also, that original story of Kitty's murder isn't entirely true. 

After the New York Times published their story about Kitty Genovese, scientists set to work trying to figure out why so many witnesses hadn't responded. The first major study was published in the Journal of Personality and Social Psychology in 1968.

In it, two researchers created a situation in the lab. They had 72 undergrads come in for what they thought was a study on common problems in students' lives. Each participant was seated alone in a room with an intercom to share their problems with one, two, or five other so-called "participants"––although they were actually recordings.

Then, one of these pre-recorded participants pretended to have a seizure, and the scientists filmed how long it took for the undergrad to get help. 

They found that the more bystanders there were, the longer it took for the victim to get help, if they got help at all. When participants thought they were alone with the victim, 85% of them helped. But in the largest group, with five others, only 31% did.

Admittedly, most people were concerned about the sick person, but they didn't know if they should do something. And so, the bystander effect was born.

Since then, multiple studies have confirmed this effect. But they've also found it isn't always as straightforward as it seems. 

Sometimes, people are more likely to help with bystanders, or simply aren't affected by their presence. One major influence on this is the bystanders themselves.

Not surprisingly, people who are in a hurry are typically less likely to stop and help someone. And people who are highly skilled in a certain emergency, like nurses trained to handle medical situations, are also more likely to try to help, whether bystanders are there or not.

Interestingly, though, making a commitment also matters. In a 2015 study in France, a man set down his bag and either asked one specific person to watch it, asked everyone in general to watch it, or said nothing, then headed to a nearby ATM.

Then, the researchers faked the backpack getting stolen. They repeated trials of this until they had a total of 150 different bystanders––50 for each scenario.

Ultimately, the more direct the commitment, the more likely people were to intervene when someone took the bag.

Other studies suggest that responses in situations like this have to do with a couple of things.

One is social influence. In general, if you aren't sure what's going on, you probably tend to look at other people for more information. And if no one else seems to be concerned, then maybe this guy's backpack isn't a big deal. So you don't do anything, just like everyone else. 

Another factor is diffusion of responsibility. If something happens when you're in a big group, like some participants in this backpack study, it isn't up to only you to help––other people could help too. So you don't feel as responsible and don't act and, suddenly, that man's out of a bag.

Besides the bystanders, another major factor in general is the specific situation. Sometimes, it's hard to tell if someone needs help or not. Many studies have found that when things are ambiguous, people are less likely to jump in. 

Which seems reasonable––after all, if it turns out someone is just playing around, it can be really embarrassing to be wrong. 

Research suggests that ambiguous situations can make people fear being judged negatively, which can stop them from acting. The good news is that, when it's clear that there is an emergency, the bystander effect doesn't usually happen.

A 2011 meta-analysis of more than 50 studies also showed that if the situation is dangerous, like if the perpetrator is still there, people are more likely to help if there are bystanders. 

And that makes sense: those situations are clearly an emergency, and it's safer if other people have your back.

Ultimately, although there are some trends, a lot of social and psychological factors determine whether or not someone will offer help. 

Today, research suggests that your best bet in an emergency is to make clear that you do need assistance, and to make individuals feel responsible for stepping in.

Really, though, it isn't that surprising that this effect isn't totally straightforward––humans aren't exactly clear-cut, so the bystander effect isn't either. 

Even the original Kitty Genovese story wasn't as black and white as the New York Times reported. The truth is, 38 people did not witness the murder.

When Kitty was first attacked on the street, many may have briefly heard something, but only a handful of people saw anything happening in the dark. And even then, it was the middle of the night, and it was hard to tell what was going on. In other words, it was ambiguous.

One person scared the attacker away by yelling out the window, and, injured, Kitty tried to get to her apartment. Then, unfortunately, in the building's entrance, where people couldn't see or hear her very well, the attacker came back.

Police were called but didn't arrive until it was too late to save her. The newspaper article wasn't published until 2 weeks after the event, so there was time for details to get a little fuzzy.

Thankfully, we have researchers studying this phenomenon to make sure that it's less likely to happen again. 

[Anthony] It seems like every time you try to figure out a rule for how people work, people turn out to be more complicated than we thought. 

And that's kind of what happened when we tried to figure out a scale to measure human intelligence. I'm talking about IQ tests, which are the hallmark of every supposed genius on television––I'm looking at you, Young Sheldon! Urkel! Doogie Howser!

Here's Hank for more on the trouble with these tests, and also why they're still around. 

 Does IQ Really Measure how Smart You Are? (21:40)


[Hank] When people talk about smarts, Intelligence Quotient––or IQ––always seems to come up. People love to bring up how Einstein had a genius-level IQ of 160. And to join Mensa, you need to have an IQ of at least 130.

But is IQ even a good way to measure intelligence? Well, that depends on how you define intelligence.

IQ scores may be a useful shorthand to talk about education strategies for big groups of people, like when discussing public policy. But IQ can be affected by a lot of factors, even things as subjective as your motivation while taking the test.

The first sort of IQ test was invented by the French psychologist Alfred Binet in the early 1900s. A law in 1882 aimed at egalitarianism said that any healthy child had to go to school and learn the basics, like reading, writing, arithmetic, history, public policy, and the natural sciences. 

The law even included a special consideration for children with disabilities, like deafness or blindness. The French government acknowledged that not every kid would be able to keep up with the normal curriculum for lots of possible reasons. 

So Binet and other psychologists were commissioned to create a standardized test to measure how different kids handled their schoolwork.

Along with Théodore Simon, Binet developed the Binet-Simon test in which children would answer a series of questions until they couldn't anymore. That way, kids could be grouped in classes with students with similar scores, instead of relying on their age or the subjective judgments of teachers.

In the next decade or so, the scale was revised for use with both kids and adults and renamed the Stanford-Binet Intelligence Scale. This popular IQ test is still used today, along with other standardized tests that are meant to measure learning ability, sometimes defined as how quickly and easily we learn new things.

On early versions of the test, IQ was calculated by taking a person's mental age (the age level their test score corresponded to), dividing it by their chronological age, and then multiplying the result by 100.
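Just as a rough illustration (with made-up numbers, not anything from the episode), that classic "ratio IQ" arithmetic looks something like this in Python:

# Classic "ratio IQ": mental age divided by chronological age, times 100.
def ratio_iq(mental_age, chronological_age):
    return (mental_age / chronological_age) * 100

# A hypothetical 10-year-old who performs like a typical 12-year-old:
print(ratio_iq(12, 10))  # 120.0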

In more modern versions, you're basically ranked against other test takers. The scores of a group of people are scaled so that 100 is the average, and your IQ score is determined based on where you are in relation to that average. 
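And here's a minimal sketch of how that modern "deviation" scoring can work, assuming the common convention of rescaling to a mean of 100 and a standard deviation of 15 points (the exact convention depends on the test, and all the scores below are invented):

from statistics import mean, stdev

def deviation_iq(raw_score, group_scores, sd_points=15):
    # How many standard deviations the raw score sits above or below the group average...
    z = (raw_score - mean(group_scores)) / stdev(group_scores)
    # ...rescaled so that the group average lands at 100.
    return 100 + sd_points * z

group = [38, 42, 45, 47, 50, 52, 55, 58, 61]  # hypothetical raw test scores
print(round(deviation_iq(58, group)))         # an above-average raw score gives an IQ above 100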

But here's the thing: whether or not IQ tests actually measure your intelligence depends on how you define intelligence. In simple definitions, intelligence is the ability to learn new things or adapt to new situations.

 But the definition could also include the ability to use logic or reason or to think abstractly.

These definitions are all focused on intellectual capacity, which is how intelligence is defined by the American Psychological Association. And they don't include other kinds of intelligence, like social, or emotional intelligence, or things like creativity, or self-awareness. 

The Stanford-Binet test, for instance, focuses on testing 5 main categories of information:


  • Baseline knowledge

  • Basic mathematics

  • Visualizing objects in space

  • Working memory

  • Fluid reasoning, or the ability to solve new problems


So, depending on what you're trying to understand about someone, IQ tests might be useful, or they might be a waste of time.

It turns out that your IQ score can be affected by a lot of different things and, because intelligence is so complex, we're not sure how strongly different factors might affect it. 

There is some evidence that says cognitive abilities are somewhat heritable, meaning there might be some kind of genetic component to IQ. But it's not that simple. 

Recent studies have shown that IQ tests are affected by motivation. For example, one 2011 meta-analysis found that people who are offered cash if they do well on an IQ test scored higher than people who weren't offered anything––like, up to 20 points higher for just a $10 reward. That is a huge effect!

And we know that motivation can play a role in other things, like your grades and your career path, that could be wrongly chalked up to just an IQ score.

IQ also seems to be affected by environmental factors. Cultural values can influence your IQ score; for example, a kid who grows up in a community that prizes storytelling might do better on the verbal parts of the test, or on problems that require you to remember and reuse information.

How much education you get (and the quality of that education) may have an effect too. Kids who miss school because it's hard for them to get there, or who attend schools without many resources, tend to score lower than their peers.

Even your family environment can affect your IQ, like whether you grow up in a low-income household or whether you experience a lot of trauma as a kid.

So, like a lot of things, IQ seems to result from a mix of nature and nurture. There are just so many factors that affect your learning ability as you grow up, from the environment you develop in before you're born to things like education opportunities and family dynamics. 

Psychologists do seem to agree that one thing that helps people with learning and academic achievement is thinking about intelligence as something that can change.

So IQ tests aren't anywhere near perfect or comprehensive, but they can help us predict how people might learn in the near future, which can make a difference in the support they receive. 

For instance, IQ scores can affect diagnosis of intellectual disability, which can inform public policy about education programs to support different students. 

It's understandable why it's valuable to have a standard way to sort of measure intelligence, like when it comes to making these general policy decisions.

But it's also easy to see why IQ tests have been surrounded by controversy, too. There's a lot we don't understand about intelligence, and a lot that an IQ score can't tell us about a person or group of people.

So while IQ can be useful shorthand in some cases, it is not something that is set in stone. Do not let a number define you!

[Anthony] So way more goes into an IQ score than your raw intelligence, and your IQ score can be influenced by lots of things.

In fact, hopefully you're a little smarter now than when you started this video and a bit more ready to debunk some myths the next time they pop up on Netflix. 

Thanks for watching this episode of SciShow Psych, and a special thanks to our patrons, who make all these videos possible! It takes a lot of people to make a SciShow episode, and we couldn't do it without your support.

If you're not yet a patron but would like to support what we do, you can find out more at patreon.com/SciShow.
[Outro music]