


SciShow News explains the science behind a psychological experiment performed on about seven hundred thousand Facebook users, although none of them knew that they were participating.



You've probably already heard about it. In fact, you might have been part of it. Facebook caused an uproar last week when scientists from Cornell University, working with Facebook's own data science team, released the results of a psychological experiment that they had performed on about 700,000 Facebook users, none of whom knew they were participating.

The team was studying what they call 'emotional contagion,' or how people influence each other's emotions. The experiment took place over one week in January 2012. In all, 683,000 English-speaking Facebook users were selected at random to take part; that's about 1 in 1,500 of all Facebook users.

Once they were chosen, the contents of those users' newsfeeds were filtered without their knowledge. The filter hid certain posts that contained terms deemed either positive, like 'happy,' 'good,' and 'yes,' or negative, like 'sad,' 'angry,' or even a frowny emoticon.

Posts were scanned for these terms using an automated program called Linguistic Inquiry and Word Count, or LIWC, so they weren't actually read by anyone on the Facebook staff.

The software was set up so that one group of subjects had between a 10% and a 90% chance of not seeing a given post if it contained a lot of negative terms. Another group had the same range of chances of not seeing posts with positive words. After the feeds were tweaked, the same program scanned what the subjects in the experiment posted themselves.
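As a rough sketch of how that kind of probabilistic filtering could work (the word lists, function names, and hide probability below are made up for illustration; the real study used LIWC's proprietary dictionaries and Facebook's internal systems):

```python
import random

# Hypothetical word lists -- the actual study used LIWC's dictionaries.
POSITIVE_WORDS = {"happy", "good", "yes", "awesome"}
NEGATIVE_WORDS = {"sad", "angry", ":("}

def contains_terms(post, terms):
    """Check whether any flagged term appears in the post (case-insensitive)."""
    words = post.lower().split()
    return any(term in words for term in terms)

def filter_feed(posts, terms_to_hide, hide_chance):
    """Drop each matching post with probability hide_chance.

    In the study, each user was assigned a hide_chance between 0.10 and 0.90.
    """
    visible = []
    for post in posts:
        if contains_terms(post, terms_to_hide) and random.random() < hide_chance:
            continue  # post is hidden from this user's feed
        visible.append(post)
    return visible

feed = ["I am so happy today", "Traffic made me angry", "Lunch was fine"]
# A user in the "reduced negativity" condition with a 50% hide chance:
print(filter_feed(feed, NEGATIVE_WORDS, 0.5))
```

Posts that match no flagged terms always get through; matching posts are dropped at random, so two users in the same condition could see different feeds.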

After scanning through 3 million posts, the psychologists found that exposing users to more positive or more negative content seemed to have only a small effect on what they posted. If a subject saw more posts with positive language, they were about 0.07% more likely to include positive words in their own posts. That's about 1/15 of 1 percent; like I said, a pretty small effect. You'd have to write a couple thousand words before you'd use even one more 'awesome' in your status updates.
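That "couple thousand words" figure checks out with some quick back-of-the-envelope arithmetic:

```python
# The reported effect: the rate of positive words shifted by roughly
# 0.07 percentage points, i.e. 0.0007 as a fraction.
effect = 0.07 / 100

# On average, how many words would you have to write before that shift
# adds up to one extra positive word?
words_per_extra_positive_word = 1 / effect
print(round(words_per_extra_positive_word))  # → 1429
```

So roughly 1,400 words per extra positive word, consistent with the "couple thousand words" ballpark in the narration.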

Interestingly, the findings themselves got less attention from the public than the fact that the study was done at all. A lot of users, somewhat understandably, thought it was creepy that Facebook was intentionally trying to manipulate their emotions, even if only by 1/15 of 1 percent.

Meanwhile, the psychological community and even some government regulators were more concerned with whether or not this was actually legal. In its defense, Facebook says that when users agree to its terms of use, they agree to take part in market research. But as any of you who watch Crash Course Psychology know, the first principle of research on human subjects is that you need their consent. Like, their explicit, written, informed consent. In the United States, this principle is codified in federal regulations known as the Common Rule.

So, the Office of the Data Protection Commissioner in Ireland and the Information Commissioner's Office in Britain are both asking Facebook some very pointed questions. But the fact is, experiments like these go on all the time. In 2013, the Facebook data science team conducted an experiment on about 250 million users. They wanted to see whether networks of friends create so-called echo chambers that prevent users from learning about, and therefore posting about, things outside the interests of their group. So for a period of seven weeks, Facebook just randomly stopped some users from seeing certain links shared by their friends. It turned out that the echo chamber effect wasn't very strong: the results showed that people get their news from a wide array of sources, and what your friends share has surprisingly little to do with what you end up posting about.

And it's not just Facebook that does tests like this. Google's data science team conducts more than 20,000 experiments per year on its users, to do things like refine its recommended-links algorithm or see which shade of blue people respond to more in an ad. Information like this is valuable in a concrete, cash-money sense, so companies are probably going to keep collecting it. But have we reached a point where this is something that needs to be regulated? I mean, sure, they're doing experiments on humans, but does it count as 'human experimentation'? I wouldn't be surprised if people are debating this a lot in the next few years.

Thank you for watching this episode of SciShow News, brought to you by Audible, which is giving away a free audiobook to SciShow viewers. If you head over to you can download Carl Jung's classic 'Psychology of the Unconscious,' or practically any other book you want to listen to, anytime, for free.