Facebook and Filter Bubbles

Yes, this is about the Facebook kerfuffle of the week, the probably-unethical and definitely-badly-managed study showing that changing users’ News Feeds would change what users posted themselves. Oddly enough, given my standing as a teacher of research methods and a practicing psychology researcher, I am not outraged about the ethics of the study itself; I am more concerned about just what this study should be showing us about how Facebook, Google, and any other social media site have the power to control what information we see, and perhaps ultimately how we feel and what we think.

Let’s start with the basics of the study, which tend to be glossed over in most reports. The topic at hand is “emotional contagion”, or whether you can catch someone else’s bad mood. We’re pretty confident that people can, although there are individual differences in who is susceptible, based on factors such as empathy. Most of this transmission has been demonstrated in person, when you can actually see the other person, so it is an open question whether emotional contagion can happen online, through Facebook. On the one hand, contagion may be a result of mirror neurons, which lead us to unconsciously mimic the facial expressions we see and therefore feel the emotions ourselves (see my post on “Which comes first: facial expressions or feelings” for more on this). On the other hand, the words in our feed alone might be enough to prime us to feel a certain way; perhaps we read about a friend’s fight with a boss, or the general bad news of the day, and that activates our own memories of our own bad day, altering what we would otherwise have posted. So the study did ask a legitimate question, and it does have the potential to help us understand how people interact and transmit emotion.

The article itself was published in the Proceedings of the National Academy of Sciences (better known as PNAS), which tends toward brevity and a lack of methodological detail in its articles; I could not find any supplemental files to help unpack exactly what was done. We do know that between January 11 and 18, 2012, some 680,000 people who viewed their Facebook News Feed in English had that News Feed tweaked. The posts that would have appeared in the News Feed were scanned by text-analysis software for words that were positive or negative. For some people, 10% of the News Feed items containing negative words were filtered out; for others, 20%; and so on, all the way up to 90%. Another group of people had 10%, 20%, up to 90% of the items with positive words filtered out. And then there was a control group, who had 10% to 90% of all of the updates filtered, regardless of content. (One has to assume that the people who had 90% of the feed filtered were wondering what everyone else was doing, since Facebook would have been a rather boring place.)
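To make the manipulation concrete, here is a minimal sketch in Python of what that kind of filtering amounts to. The word lists and function names are hypothetical stand-ins of my own; the study’s actual text-analysis software and dictionaries are far more elaborate than this.

```python
import random

# Hypothetical stand-ins for the study's positive/negative word lists;
# the real text-analysis dictionaries are far larger.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "angry", "awful", "terrible"}

def contains_any(post, words):
    """True if the post contains at least one word from the set."""
    return any(word in post.lower().split() for word in words)

def filter_feed(posts, target_words, omit_rate, rng=None):
    """Omit each post containing a targeted emotional word with
    probability omit_rate (10% to 90%, depending on the condition)."""
    rng = rng or random.Random(0)
    shown = []
    for post in posts:
        if contains_any(post, target_words) and rng.random() < omit_rate:
            continue  # hidden from this News Feed view; still on the poster's Wall
        shown.append(post)
    return shown

feed = [
    "I love this wonderful day",
    "so angry about my awful commute",
    "lunch was fine I guess",
]
print(filter_feed(feed, NEGATIVE_WORDS, omit_rate=0.90))
```

The important detail, which the sketch’s comment flags, is that a filtered post was only hidden from that view of the News Feed; it still existed on the poster’s own Wall.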

When people’s News Feeds were filtered of negative items, they tended to use more positive and fewer negative words… in a very technical, statistical sense. Positive words increased by less than 0.1%, maybe only 0.05%; negative words decreased by about the same amount. Only with such a massive sample of people could those differences be significant, and I have to question whether that really means anything out in the real world. Filtering out the positive words had a slightly more dramatic effect, reducing positive words by something in the 1.5% range, but the negative words barely changed at all. And in both groups, positive words always made up more than 5% of the status update, while negative words never made up more than 2%. There’s also no word on whether even the 10% filtering was enough to get these effects, or whether filtering out relatively more negative News Feed items led to even more upbeat status updates.
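To see why such tiny shifts can still come out statistically significant, here is a rough back-of-the-envelope simulation in Python. The numbers are mine, not the paper’s: the ~5% baseline and the ~0.05-point gap echo the magnitudes above, while the person-to-person spread is purely an assumption for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Percentage of positive words per user, simulated for two conditions.
# The ~5% baseline and ~0.05-point gap echo the magnitudes discussed above;
# the standard deviation (5) is my own assumption, for illustration only.
n = 340_000                      # roughly half of the ~680,000 users
control  = rng.normal(loc=5.00, scale=5.0, size=n)
filtered = rng.normal(loc=5.05, scale=5.0, size=n)

t, p = stats.ttest_ind(filtered, control)
diff = filtered.mean() - control.mean()
print(f"difference in means: {diff:.3f} percentage points")
print(f"t = {t:.2f}, p = {p:.2g}")      # p comes out tiny despite a negligible gap
print(f"Cohen's d = {diff / 5.0:.3f}")  # the effect size is still minuscule
```

With a few hundred thousand users per condition, the standard error is so small that a shift of a few hundredths of a percentage point easily clears p < .05; statistical significance here tells you almost nothing about whether the effect matters in the real world.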

Overall, I am not impressed with the size of the changes observed; I certainly wouldn’t qualify such a tiny change in the content of a post as manipulating anyone’s emotions. And on a purely methodological level, I have to take exception to one conclusion made toward the end of the paper, which was not sufficiently supported by data or statistics and seems to illustrate bias on the part of the authors:

People who were exposed to fewer emotional posts (of either valence) in their News Feed were less expressive overall on the following days…This observation, and the fact that people were more emotionally positive in response to positive emotion updates from their friends, stands in contrast to theories that suggest viewing positive posts by friends on Facebook may somehow affect us negatively, for example, via social comparison.

Reading this, I begin to wonder if Facebook’s goal with the study was to inject a counterpoint in the scientific literature to the rather well-publicized findings that Facebook could hurt our happiness.

Now, the ethics. Ethically speaking, Facebook and/or the researchers were in the wrong, but it was a nuanced wrong. Everything hinges on whether Facebook’s “Data Use Policy” counts as informed consent. For the purposes of internal research by Facebook itself to decide just what its filters should be, it does; for publicly sharing the research findings, it does not.

Researchers interested in what we call the “scholarship of teaching and learning” (deciding the best ways to teach) are familiar with this nuance. If I give my students special assignments or assessments and analyze them in the privacy of my own office, to decide what and how I should teach in the future, no institutional approval or consent forms are required. If I want to take those findings on the road to a conference or workshop and show them to other professors, out come the consent forms. Essentially, once you plan to make the data public – even just the overall averages, with no personally identifiable information – the ethical bar moves higher.

There are some great evaluations of this already out there; I recommend James Grimmelmann’s analysis on The Laboratorium for those interested in the research-ethics nitty-gritty, and Katy Waldman’s article on Slate for the layperson. In a nutshell, people are supposed to be told that they are in a study, given the option of not being in the study (in this case, we should be able to opt out of specific individual studies Facebook does), and warned of the potential risks – in my own research past, I have been mandated to include the potential for boredom and eyestrain. Messing with emotions would get additional special scrutiny.

However, I can see that people who do not conduct human research for a living would not understand this relatively subtle distinction between what’s okay for business and what’s okay for publishing. Facebook was (sadly) probably within its Data Use Policy to do the research itself, for internal purposes, just not to go out there and try to make a social psychology splash with it. Frankly, I’m more irked by the reviewers and editors, who should have known better – and who perhaps should have been more critical of whether these small differences merited publication in one of the more prestigious and competitive journals out there.

But finally, here is the thing that struck me the most about this study, and that I kind of wish people had gotten more up in arms about than the ethics: just what is Facebook filtering out of our awareness? I already have enough of a daily battle with the News Feed, as I try to insist that it show me the latest updates in chronological (“most recent”) order and not whatever it thinks is popular right now. Now we know that for one week, at least, Facebook filtered up to 90% of the positive posts out of people’s News Feeds. Yes, you could go and check each individual’s Wall to see the post – but who does that anymore? Even with the crazy filtering algorithms, I have trusted Facebook to always show me my sisters’ updates, which I make a point of liking or commenting on just to make sure those algorithms have something to work with. Facebook has alerted me to successes, illnesses, and even deaths in my extended family. Would those have been filtered out in this study, and will they be filtered out in some future instance because Facebook felt like studying its users… or because it decided everyone on Facebook needed to be a little more upbeat?

This broader question of filtering strikes me particularly because just a few weeks ago I watched Eli Pariser’s TED talk “Beware online filter bubbles”, a must-watch for anyone who gets information online. I knew Facebook used filtering algorithms, but I didn’t expect Google to change search results based on where you searched from, to such an extent that you might find out about political protests when you search for “Egypt”, or you might find nothing about them at all. (From my laptop, at my local bookstore, I am satisfied with the coverage my own search gets. But is there something I don’t know I should be seeing, but am not?)

I am actually relieved that the results Facebook got were so small; it suggests that we do not yet need to panic about Facebook being able to govern our emotions through what we read. But Facebook does try to influence what we think is important out there in the world, or in our friends’ lives, based on which updates it chooses to show us and what’s “trending”. Twitter, Google, and who knows what other sites do the same. That, to me, is now much more of a concern than what Facebook may be doing with my data.

Kramer AD, Guillory JE, & Hancock JT (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences of the United States of America, 111(24), 8788-8790. PMID: 24889601
