By Jeff MacKie-Mason

There’s a good chance you heard about a firestorm in the social media research world this summer. How we researchers do our research generally isn’t front-page news…but this was. There have been at least three articles in the New York Times (see, e.g., “As Data Overflows Online, Researchers Grapple with Ethics”), as well as articles in The Atlantic, Slate, The Wall Street Journal, Forbes, and the Financial Times; here’s a list of over 100, and there are more links to stories at the end of this article.

What was the hullabaloo about? In short, a data scientist at Facebook and a faculty member and graduate student at Cornell were accused of manipulating what people saw on Facebook to intentionally make “thousands upon thousands of people sad”.¹ Upon a closer reading, most observers (including me) don’t think that’s what the research team did. However, they did run an experiment on human subjects, without explicit informed consent, in which they changed (very slightly) the number of “positive” and “negative” posts that some users saw, to see whether the positivity or negativity spread throughout their social networks (“contagiously”).
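For the technically minded, the manipulation itself was mechanically simple: for users in a given experimental condition, each post of the targeted sentiment had some chance of being omitted from their News Feed. Here is a minimal Python sketch of the idea; the data layout, the labels, and the 10% omission rate are my own illustrative assumptions, not the study’s actual code or parameters:

```python
import random

def filter_feed(posts, condition, omit_rate=0.1, rng=random):
    """Return a feed with a small fraction of targeted posts omitted.

    posts: list of dicts with a 'sentiment' key ('positive',
    'negative', or 'neutral'); condition: 'reduce_positive' or
    'reduce_negative'. All names and the default 10% rate are
    illustrative assumptions, not the study's actual parameters.
    """
    target = "positive" if condition == "reduce_positive" else "negative"
    return [post for post in posts
            if post["sentiment"] != target or rng.random() >= omit_rate]

# Example: a user in the 'reduce_positive' condition sees a feed in
# which each positive post has a 10% chance of being dropped.
feed = [{"id": 1, "sentiment": "positive"},
        {"id": 2, "sentiment": "negative"},
        {"id": 3, "sentiment": "neutral"}]
shown = filter_feed(feed, "reduce_positive")
```

The researchers then compared the words used in the subsequent posts of manipulated and unmanipulated users.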

They found strong statistical evidence of a small effect on the negativity of words people used in their subsequent Facebook posts. They did not attempt to measure whether people actually felt better or worse.

Many folks here at UMSI, and many of our alumni, engage in similar experiments.² Experiments are, of course, among the most powerful scientific tools we have for finding out whether A causes B. If we want to design online information resources and social networking platforms that will improve people’s lives – whether for education or health or civic discourse or social interaction – we need to know how different design features affect user behaviors and experiences.

The core controversy, in the end, is not about whether we should experiment to learn how people use and are affected by online information resources, but about the proper ethical conduct of such research. All research with human subjects at UMSI (and at universities generally) is regulated by “institutional review boards” (IRBs). IRBs have developed detailed standards that limit and shape what we can do in our experiments. But as information environments rapidly evolve, there is a never-ending stream of new, tough questions about just what should be permitted in the pursuit of science, no matter how well-intentioned the research.

One interesting issue brought to much wider visibility by the Facebook experiment controversy is that online environment providers (like Facebook, Google, Twitter, Microsoft, etc., etc.) are running this kind of experiment all the time, not for academic research but as basic business practice.

To figure out whether users will like a change to its search page, Google runs an “A/B” test: it delivers the new page to thousands of users and measures their use of and reactions to it against the old version. Apparently many (maybe most) users don’t realize that A/B experiments are happening every day on the services they use.
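If you haven’t seen one, an A/B test is conceptually simple: assign each user to one of two versions, then compare an outcome metric between the groups. Here is a minimal Python sketch under my own assumptions; the hash-based assignment scheme and the click-through metric are illustrative, not any company’s actual system:

```python
import hashlib
from statistics import NormalDist

def assign_variant(user_id: str, experiment: str = "new-search-page") -> str:
    """Deterministically bucket a user into 'A' or 'B' by hashing
    their ID, so the same user always sees the same version."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return "B" if digest[0] % 2 else "A"

def ab_p_value(clicks_a, n_a, clicks_b, n_b):
    """Two-sided p-value from a two-proportion z-test comparing the
    click-through rates of groups A and B."""
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (clicks_b / n_b - clicks_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))
```

One thing the sketch makes concrete: with thousands (or, at Facebook’s scale, hundreds of thousands) of users in each group, even a tiny difference in rates will register as statistically significant, which is exactly how a study can report strong evidence of a very small effect.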

Indeed, for many years Facebook has not shown all posts in a user’s News Feed: an algorithm makes choices, showing more of some types of posts and fewer of others. And Facebook is constantly testing different combinations to see which ones users like best – much like the academic experiment that slightly changed the ratio of positive to negative posts.

Why, some researchers have asked, shouldn’t independent scholars with no profit motive be able to manipulate what users see, when the companies providing the services do it all the time and no one blinks? What’s the harm? Others argue that academic research should adhere to higher ethical standards than corporate inquiry, in part to preserve confidence in, and respect for, what we do.

If you’d like to learn what other UMSI faculty and alumni have been writing about this ethical controversy, here are some pointers: Cliff Lampe; Christian Sandvig [1] and [2]; and Emilee Rader (PhD alum).

A selection of additional articles on Facebook’s social experiment:

The Atlantic

“Even the Editor of Facebook’s Mood Study Thought It Was Creepy”

“Everything We Know About Facebook’s Secret Mood Manipulation Experiment”

“The Test We Can–and Should–Run on Facebook”

Wall Street Journal

“Facebook Study Sparks Soul-Searching and Ethical Questions”

“Irish Data Privacy Watchdog to Probe Facebook’s Research Methods”

“Sandberg: Facebook Study Was ‘Poorly Communicated’”

Forbes

“Facebook Doesn’t Understand the Fuss about Its Emotion Manipulation Study”

“Facebook Added ‘Research’ To User Agreement 4 Months after Emotion Manipulation Study”

“The Outrage Over Facebook’s ‘Creepy’ Experiment Is Out-Of-Bounds — And This Study Proves It”

“Dear Facebook, Please Experiment on Me”

¹ Katy Waldman, “Facebook’s Unethical Experiment”, Slate, 28 June 2014.
² In fact, one of our recent PhD graduates and one of our former faculty colleagues both work in the same data science group at Facebook, though they did not participate in this particular study.