A study published in the June 17th issue of the Proceedings of the National Academy of Sciences is raising troubling ethical questions. Researchers manipulated the content of unknowing subjects’ Facebook feeds, then tracked whether the subjects’ subsequent posts showed an emotional change. Does that cross the threshold at which informed consent is required? The full paper is freely available as a PDF: Experimental evidence of massive-scale emotional contagion through social networks. I don’t want to rehash what others have written so well; take a quick look at one of the following articles from yesterday for more details. I particularly recommend the first one:
- The Atlantic: Everything We Knew About Facebook’s Secret Mood Manipulation Experiment
- Der Spiegel: Manipulierte Newsfeeds: Facebook-Nutzer empört über Psycho-Experiment (“Manipulated news feeds: Facebook users outraged over psych experiment”)
- Slate: Facebook’s Unethical Experiment
The editor of the paper said she had reservations until the authors assured her that their institutional review board had approved the research and that “Facebook apparently manipulates people’s News Feeds all the time.” She now says:
“It’s ethically okay from the regulations perspective, but ethics are kind of social decisions. There’s not an absolute answer. And so the level of outrage that appears to be happening suggests that maybe it shouldn’t have been done…I’m still thinking about it and I’m a little creeped out, too.”
The Atlantic piece has the best analysis so far of the problems with the research methodology, setting aside whether it’s acceptable to deliberately remove happy or sad posts from someone’s news feed without his knowledge. The way the content was evaluated calls the results into question, so don’t get too panicked about the alleged “emotional contagion” yet. The most defensible conclusion seems to be that people respond more to emotional content than to bland posts. Shocking.
As an independent researcher unaffiliated with a university, I am excruciatingly sensitive to issues of informed consent. My methods include extra steps for anonymity and protection of subjects precisely because I don’t have an institutional review board to check my methodology. I have to wonder whether the researchers on this project ever asked themselves how they would feel if their own feeds were manipulated this way, whether for study purposes or simply because Facebook wanted to do so. My gut instinct says that intentionally manipulating the overall emotional timbre of a person’s feed is not just creepy and unethical; it borders on evil.