YouTube: https://youtube.com/watch?v=ZwqOoD17_LU
Previous: Online Advertising: Crash Course Media Literacy #7
Next: Moonlight: Crash Course Film Criticism #13

Categories

Statistics

View count: 168,061
Likes: 4,136
Comments: 156
Duration: 10:42
Uploaded: 2018-04-11
Last sync: 2024-12-08 14:15

Citation

Citation formatting is not guaranteed to be accurate.
MLA Full: "Science Journalism: Crash Course Statistics #11." YouTube, uploaded by CrashCourse, 11 April 2018, www.youtube.com/watch?v=ZwqOoD17_LU.
MLA Inline: (CrashCourse, 2018)
APA Full: CrashCourse. (2018, April 11). Science Journalism: Crash Course Statistics #11 [Video]. YouTube. https://youtube.com/watch?v=ZwqOoD17_LU
APA Inline: (CrashCourse, 2018)
Chicago Full: CrashCourse. "Science Journalism: Crash Course Statistics #11." April 11, 2018. YouTube video, 10:42. https://youtube.com/watch?v=ZwqOoD17_LU.
We’ve talked a lot in this series about how often you see data and statistics in the news and on social media - which is ALL THE TIME! But how do you know who and what you can trust? Today, we’re going to talk about how we, as consumers, can spot flawed studies, sensationalized articles, and just plain poor reporting. And this isn’t to say that all science articles you read on Facebook or in magazines are wrong, but that it's valuable to read those catchy headlines with some skepticism.

Crash Course is on Patreon! You can support us directly by signing up at http://www.patreon.com/crashcourse

Thanks to the following Patrons for their generous monthly contributions that help keep Crash Course free for everyone forever:

Mark Brouwer, Glenn Elliott, Justin Zingsheim, Jessica Wode, Eric Prestemon, Kathrin Benoit, Tom Trval, Jason Saslow, Nathan Taylor, Divonne Holmes à Court, Brian Thomas Gossett, Khaled El Shalakany, Indika Siriwardena, Robert Kunz, SR Foxley, Sam Ferguson, Yasenia Cruz, Eric Koslow, Caleb Weeks, Tim Curwick, Evren Türkmenoğlu, Alexander Tamas, D.A. Noe, Shawn Arnold, mark austin, Ruth Perez, Malcolm Callis, Ken Penttinen, Advait Shinde, Cody Carpenter, Annamaria Herrera, William McGraw, Bader AlGhamdi, Vaso, Melissa Briski, Joey Quek, Andrei Krishkevich, Rachel Bright, Alex S, Mayumi Maeda, Kathy & Tim Philip, Montather, Jirat, Eric Kitchen, Moritz Schmidt, Ian Dundore, Chris Peters, Sandra Aft, Steve Marshall
--

Want to find Crash Course elsewhere on the internet?
Facebook - http://www.facebook.com/YouTubeCrashCourse
Twitter - http://www.twitter.com/TheCrashCourse
Tumblr - http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse

CC Kids: http://www.youtube.com/crashcoursekids
Hi, I'm Adriene Hill, and welcome back to Crash Course Statistics.

In this series we talk a lot about how often you see data and statistics on the news and on social media. There are all sorts of studies and data sets promising the keys to a better life. Coffee's good for you! No, wait, wait, it's killing you. So, what do you trust?

[Intro Music]

Journalism has many goals: to inform, to expose, to help people make better decisions about their communities and their lives. But journalism only matters if people read it, or watch it, or listen to it. Journalists have to capture the attention of their audience and help them connect with the story.

Case studies and observational studies can be a great source of material for an article or a podcast, but it is always important to look at both the quality of the science and the quality of the journalism. If a study lacks a control group or uses a non-randomized design, it's gonna be less reliable. And if a journalist doesn't ask those questions, or just doesn't include the answers in the story, how are you gonna know?

Back in 2015, newspapers around the world ran stories heralding chocolate as a way to lose weight. It sounds great, but it turns out it wasn't good journalism or good science. A science journalist and PhD named John Bohannon created this story by doing a real, randomized study, but one that was intentionally riddled with flaws. It was meant to show that academic journals would publish the very flawed study, and so would news outlets.

The goal, writes Bohannon, was to demonstrate how easy it is to turn bad science into big headlines. And it worked. Once the study was published and the press release went out, journalists jumped on it. Bohannon says that many outlets ran the story without ever contacting him. Very few reporters asked about the number of subjects they tested-- only 16. And no one, he says, reported that number. Also, says Bohannon, the stories that ran didn't quote any outside researchers for corroboration.

And while it's disappointing that you can't add some Cadbury to your diet and lose weight, and maybe your aunt keeps quoting this study to you as she downs that fifth and sixth bon-bon, bad science and bad science journalism aren't always that harmless.

Most clinical studies base their conclusions on statistical tests that give researchers, and the rest of us, a quantifiable way to measure the evidence a study provides.

For example, when a reputable doctor claims that ibuprofen increases the risk of fertility issues in men, it's because there was a study with a group that took ibuprofen and a control group that didn't. And, the subjects taking ibuprofen showed a tangible increase in some measure of infertility. But which measure? And was the control group given a placebo? An article that you read on Yahoo! Health probably is not going to tell you. For that information, you have to go to the original academic article, and those can be kind of dense. It turns out that this study did have a placebo control group, and measured infertility in a clinically respectable way, by measuring levels of fertility-related hormones. But, these facts are important when considering how trustworthy the conclusion of a study is, and a lot of news articles don't have them. As a side note here, if this study had been done in rats instead of humans, the conclusion that ibuprofen increases the risk of fertility issues in men would not be as strongly supported.
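The episode doesn't walk through the study's actual numbers, so here's a rough sketch in Python of how a placebo-controlled comparison turns into that kind of quantifiable evidence. Everything in it is invented for illustration: the hormone values, the group sizes, and the choice of a two-sample t-test are assumptions, not details from the ibuprofen study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Made-up hormone measurements for the two arms of a placebo-controlled design.
placebo   = rng.normal(loc=10.0, scale=2.0, size=20)   # hypothetical hormone levels
ibuprofen = rng.normal(loc=11.5, scale=2.0, size=20)   # hypothetical shifted levels

# A two-sample t-test quantifies the evidence that the two groups really differ.
result = stats.ttest_ind(ibuprofen, placebo)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")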

But, let's go to the Thought Bubble. Imagine that you're going about your morning as usual, sipping your coffee and scrolling through the latest news, when you see an article with the title, "Miracle food causes weight loss!" You want to fit better in your jeans, so you click through. You see that the miracle food is called "targ," and the results were statistically significant. So, it seems legit. You jump in your car and drive down to the local grocery store, and see that targ is on sale. So, you pick up the largest pack, and start eating. You're already feeling stronger. But, then you begin to experience side effects, like heartburn, and stomach ulcers, and a weird desire to fight. The article you read didn't mention that when researchers considered the over twenty thousand subjects, the weight loss was only about one-tenth of a pound more for targ eaters. That's not very much. Maybe not worth these side effects.

When a study reports something is significant, you probably assume that means it's really going to matter. But, this isn't always the case, since significant means something different in statistics than in everyday English. And, science journalists can exploit this confusion by not mentioning how big of an effect was observed. Thanks, Thought Bubble.
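To put some rough numbers on that idea, here's a small sketch with invented summary statistics in the spirit of the targ story: with tens of thousands of subjects, a difference of a tenth of a pound can come out "statistically significant" even though it's far too small to matter. The means, standard deviation, and group sizes below are all made up.

from scipy.stats import ttest_ind_from_stats

# Invented summary statistics: targ eaters lost 0.1 lb more on average.
result = ttest_ind_from_stats(
    mean1=1.1, std1=2.0, nobs1=12_000,   # "targ" group: average pounds lost
    mean2=1.0, std2=2.0, nobs2=12_000,   # control group
)
print(result.pvalue)   # far below 0.05, yet the effect is only a tenth of a pound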

We don't have time to read all the academic articles on even one topic that affects us. Take what gets called text neck, a condition that includes sore neck muscles from looking down at your phone and laptop all the time. A Google Scholar search for academic articles about text neck returns over a hundred and eighty results, and that's only since 2013. There's no way that you could read all of those without exacerbating your already sore neck.

So, we need people, like science journalists, who can distill all those articles into digestible and engaging pieces for us to consume. It's helpful to be skeptical, but we should keep reading about science. When reading a science story, it's important to note a couple things: who wrote it, who published it, who did the science, and who funded the science. If an article that tells you that drinking Diet Coke is good for your teeth is on the Coke website, your suspicion should be raised, more so than if it was published by Scientific American.

You should also consider who funded and completed the research the article is based on. If you read an article that claims a rare fruit juice will reduce your blood pressure and stave off cancer, and you see that the study was funded by the juice company, be suspicious.

Not every study funded by a company is inherently flawed. Science costs money. It can be expensive, and, while there are sources of funding from governments and other neutral organizations, the reality is that often the people willing to pay to have the research done are the companies who have a vested interest in the results. Sometimes, to get the research done, researchers need to partner with these organizations. Privately funded research can be done well.

Another thing to watch for in science and health journalism is whether the claims made in the headline actually match the claims made in the story. You don't see many stories with a headline like "ketchup may have mild relationship with weight gain in men over 40," because who's going to read that? It might be accurate, but it's just not as flashy as "is ketchup making you fat?"

There are a number of reasons we get these flashy headlines. Media outlets, from BuzzFeed to Goop to the old-guard newspapers, are all fighting for audience these days. Alright, maybe not the same audience, but audience. And, that competition makes the super sexy headline really, really appealing. Sensational gets clicks. Content creators are under pressure to write and find what's going to get shared. The language of correlation is uncertain, and, as such, less catchy.

You'll also spot plenty of causation problems in science and health reporting. When you see an article that claims that doing yoga cures cancer, you should check to see whether it was an experimental study or whether the claim is based on a correlation, from a survey, between doing yoga and not having cancer. Only experimental studies with randomized designs and control groups have a shot at showing evidence of causation. Because, personally, I can think of a lot of confounding factors for that yoga and cancer study.
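Here's a toy simulation of that confounding worry, entirely invented rather than drawn from any real survey: a third factor, health consciousness, makes people in this made-up world both more likely to do yoga and less likely to get cancer, so a survey-style comparison shows a gap even though yoga itself does nothing.

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Hypothetical confounder: general health consciousness.
health = rng.normal(size=n)

# More health-conscious people are more likely to do yoga...
does_yoga = rng.random(n) < sigmoid(health)

# ...and less likely to get cancer. Yoga itself has no effect in this model.
gets_cancer = rng.random(n) < sigmoid(-3.0 - 0.5 * health)

print("cancer rate, yoga:   ", round(float(gets_cancer[does_yoga].mean()), 3))
print("cancer rate, no yoga:", round(float(gets_cancer[~does_yoga].mean()), 3))
# The gap is real in the data, but it comes entirely from the confounder;
# a randomized experiment would show no effect of yoga.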

Now, on to another way science can get mischaracterized. There are studies done on mice and rats that get reported as if they were studies on humans. And, while a lot of medical and health related studies get their start in mouse models, many of the treatments that work in mice don't end up being successful in their human counterparts.

Similarly, you are going to see clickbait-y headlines that say things like hydrogen peroxide kills cancer, and list all the ways you can now incorporate hydrogen peroxide into your daily life. But, what the title doesn't tell you is that these were in vitro studies, which means they're done on real cancer cells, but in a petri dish. In a very simplified sense, the cells were grown by themselves in a dish and the substance of interest was put into the dish, and it killed the cancer cells. But, in a dish, lots of things we consume every day will kill cancer cells, like coffee or alcohol. But they, even working in tandem, aren't going to cure cancer. Anyway, these misguided hydrogen peroxide kills cancer headlines get shared around online, and people come up with alternative therapies that involve consuming hydrogen peroxide, which can be really, really dangerous. Like, dead dangerous.

Science stories can make for great journalism, and they can give you something clever to say at your next dinner party. But, any time you hear a cable talk show host say the phrase, "scientists have found" or "a new study suggests," you should always look up that study to be sure, at least before you start spreading it around. And, if the results of a study will cause you to make any changes in your life or your family's life, you should really go back and check the science. No matter how reputable the source, it's always important to be aware of these issues, whether you see it in BuzzFeed or The Economist.

Articles often gloss over all kinds of details, like the kind of control group that was used or whether the study was done in mice or monkeys, all of which can make a huge difference in how seriously you can take the claims of the study. And, the bigger the life change you're thinking about making, the more in depth your search for information should be. Adding a square or two of dark chocolate is not going to be a big deal; trying to cure cancer with high doses of Vitamin K just because some study found it kills cancer cells in a dish, that is.

So, this doesn't mean all the science you read about on Reddit or watch on your favorite YouTube channel is wrong. It just means that you need to use statistical thinking to check which claims are reasonable and which aren't. In order to help us remember some of the rules of thumb we talked about today, our writer, Chelsea, came up with a limerick. And so, without further ado, CrashCourse's first original limerick:

When a study reports correlations
or has mice as its main population,
the results it declares
may not be quite fair.
So, be careful about generalizations.

[Laughing] Alright, let's see you do better.

Thanks for watching. I'll see you next time.

[Outro Music]

CrashCourse Statistics is filmed in the Chad & Stacey Emigholz Studio in Indianapolis, Indiana, and, it's made with the help of all these nice people. Our animation team is Thought Cafe. If you'd like to keep CrashCourse free for everyone, forever, you can support the series at Patreon, a crowdfunding platform that allows you to support the content you love. Thank you to all our patrons for your continued support. 

CrashCourse is a production of Complexly. If you like content designed to get you thinking, check out some of our other channels at complexly.com. Thanks for watching.