In defense of Facebook

[UPDATE July 1st: I’ve now posted some additional thoughts in a second post here.]

It feels a bit strange to write this post’s title, because I don’t find myself defending Facebook very often. But there seems to be some discontent in the socialmediaverse at the moment over a new study in which Facebook data scientists conducted a large-scale–over half a million participants!–experimental manipulation on Facebook in order to show that emotional contagion occurs on social networks. The news that Facebook has been actively manipulating its users’ emotions has, apparently, enraged a lot of people.

The study

Before getting into the sources of that rage–and why I think it’s misplaced–though, it’s worth describing the study and its results. Here’s a description of the basic procedure, from the paper:

The experiment manipulated the extent to which people (N = 689,003) were exposed to emotional expressions in their News Feed. This tested whether exposure to emotions led people to change their own posting behaviors, in particular whether exposure to emotional content led people to post content that was consistent with the exposure—thereby testing whether exposure to verbal affective expressions leads to similar verbal expressions, a form of emotional contagion. People who viewed Facebook in English were qualified for selection into the experiment. Two parallel experiments were conducted for positive and negative emotion: One in which exposure to friends’ positive emotional content in their News Feed was reduced, and one in which exposure to negative emotional content in their News Feed was reduced. In these conditions, when a person loaded their News Feed, posts that contained emotional content of the relevant emotional valence, each emotional post had between a 10% and 90% chance (based on their User ID) of being omitted from their News Feed for that specific viewing.
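
To make that procedure a bit more concrete, here is a minimal sketch of how such a per-user omission rule could be implemented. The paper only specifies that each user’s omission probability was tied to their User ID and fell between 10% and 90%; the hashing scheme, the post structure, and the function names below are my own illustration, not Facebook’s actual code.

```python
import hashlib
import random

def omission_probability(user_id: str) -> float:
    """Map a user ID to a stable omission probability in the 10%-90% range.

    Illustrative only: the paper says the probability was "based on their
    User ID" but does not describe the actual mapping.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    fraction = int(digest[:8], 16) / 0xFFFFFFFF   # stable pseudo-random value in [0, 1]
    return 0.1 + 0.8 * fraction                   # rescale to [0.1, 0.9]

def filter_feed(user_id: str, posts: list, target_valence: str) -> list:
    """Omit each post of the targeted valence with the user's omission probability.

    Assumes each post is a dict with a pre-computed "valence" label.
    """
    p_omit = omission_probability(user_id)
    return [
        post for post in posts
        if not (post["valence"] == target_valence and random.random() < p_omit)
    ]
```

Keying the probability to the User ID gives each user a consistent “dose” for the duration of the experiment, while the fresh random draw on each page load matches the paper’s statement that omission applied only “for that specific viewing”: a filtered post was not permanently hidden and could still appear on a later load.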

And here’s their central finding, summarized in the paper’s main figure:

What the figure shows is that, in the experimental conditions, where negative or positive emotional posts are censored, users produce correspondingly more positive or negative emotional words in their own status updates. Reducing the number of negative emotional posts users saw led those users to produce more positive, and fewer negative words (relative to the unmodified control condition); conversely, reducing the number of presented positive posts led users to produce more negative and fewer positive words of their own.

Taken at face value, these results are interesting and informative. For the sake of contextualizing the concerns I discuss below, though, two points are worth noting. First, these effects, while highly statistically significant, are tiny. The largest effect size reported had a Cohen’s d of 0.02–meaning that eliminating a substantial proportion of emotional content from a user’s feed had the monumental effect of shifting that user’s own emotional word use by two hundredths of a standard deviation. In other words, the manipulation had a negligible real-world impact on users’ behavior. To put it in intuitive terms, the effect of condition in the Facebook study is roughly comparable to a hypothetical treatment that increased the average height of the male population in the United States by about one twentieth of an inch (given a standard deviation of ~2.8 inches). Theoretically interesting, perhaps, but not very meaningful in practice.
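
For readers who want to check that height analogy, it follows directly from the definition of Cohen’s d, which is just the mean difference expressed in standard-deviation units:

```latex
% Cohen's d expresses a mean difference in standard-deviation units: d = \Delta / \sigma.
% Plugging in the reported effect size and an SD of ~2.8 inches for adult male height:
\[
\Delta = d \cdot \sigma \approx 0.02 \times 2.8\,\mathrm{in} \approx 0.06\,\mathrm{in},
\]
% i.e., roughly one twentieth of an inch, as described above.
```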

Second, the fact that users in the experimental conditions produced content with very slightly more positive or negative emotional content doesn’t mean that those users actually felt any differently. It’s entirely possible–and I would argue, even probable–that much of the effect was driven by changes in the expression of ideas or feelings that were already on users’ minds. For example, suppose I log onto Facebook intending to write a status update to the effect that I had an “awesome day today at the beach with my besties!” Now imagine that, as soon as I log in, I see in my news feed that an acquaintance’s father just passed away. I might very well think twice about posting my own message–not necessarily because the news has made me feel sad myself, but because it surely seems a bit unseemly to celebrate one’s own good fortune around people who are currently grieving. I would argue that such subtle behavioral changes, while certainly responsive to others’ emotions, shouldn’t really be considered genuine cases of emotional contagion. Yet given how small the effects were, one wouldn’t need very many such changes to occur in order to produce the observed results. So, at the very least, the jury should still be out on the extent to which Facebook users actually feel differently as a result of this manipulation.

The concerns

Setting aside the rather modest (though still interesting!) results, let’s turn to the criticism. Here’s what Katy Waldman, writing in a Slate piece titled “Facebook’s Unethical Experiment”, had to say:

The researchers, who are affiliated with Facebook, Cornell, and the University of California–San Francisco, tested whether reducing the number of positive messages people saw made those people less likely to post positive content themselves. The same went for negative messages: Would scrubbing posts with sad or angry words from someone’s Facebook feed make that person write fewer gloomy updates?

The upshot? Yes, verily, social networks can propagate positive and negative feelings!

The other upshot: Facebook intentionally made thousands upon thousands of people sad.

Or consider an article in The Wire, quoting Jacob Silverman:

“What’s disturbing about how Facebook went about this, though, is that they essentially manipulated the sentiments of hundreds of thousands of users without asking permission (blame the terms of service agreements we all opt into). This research may tell us something about online behavior, but it’s undoubtedly more useful for, and more revealing of, Facebook’s own practices.”

On Twitter, the reaction to the study has been similarly negative. A lot of people appear to be very upset at the revelation that Facebook would actively manipulate its users’ news feeds in a way that could potentially influence their emotions.

Why the concerns are misplaced

To my mind, the concerns expressed in the Slate piece and elsewhere are misplaced, for several reasons. First, they largely mischaracterize the study’s experimental procedures–to the point that I suspect most of the critics haven’t actually bothered to read the paper. In particular, the suggestion that Facebook “manipulated users’ emotions” is quite misleading. Framing it that way tacitly implies that Facebook must have done something specifically designed to induce a different emotional experience in its users. In reality, for users assigned to an experimental condition, Facebook simply removed a variable proportion of status messages that were automatically detected as containing positive or negative emotional words. Let me repeat that: Facebook removed emotional messages for some users. It did not, as many people seem to be assuming, add content specifically intended to induce specific emotions. Now, given that a large amount of content on Facebook is already highly emotional in nature–think about all the people sharing their news of births, deaths, break-ups, etc.–it seems very hard to argue that Facebook would have been introducing new risks to its users even if it had presented some of them with more emotional content. So it’s certainly not credible to suggest that replacing 10–90% of emotional content with neutral content constitutes a potentially dangerous manipulation of people’s subjective experience.

Second, it’s not clear what the notion that Facebook users’ experience is being “manipulated” really even means, because the Facebook news feed is, and has always been, a completely contrived environment. I hope that people who are concerned about Facebook “manipulating” user experience in support of research realize that Facebook is constantly manipulating its users’ experience. In fact, by definition, every single change Facebook makes to the site alters the user experience, since there simply isn’t any experience to be had on Facebook that isn’t entirely constructed by Facebook. When you log onto Facebook, you’re not seeing a comprehensive list of everything your friends are doing, nor are you seeing a completely random subset of events. In the former case, you would be overwhelmed with information, and in the latter case, you’d get bored of Facebook very quickly. Instead, what you’re presented with is a carefully curated experience that is, from the outset, crafted in such a way as to create a more engaging experience (read: keeps you spending more time on the site, and coming back more often). The items you get to see are determined by a complex and ever-changing algorithm that you make only a partial contribution to (by indicating what you like, what you want hidden, etc.). It has always been this way, and it’s not clear that it could be any other way. So I don’t really understand what people mean when they sarcastically suggest–as Katy Waldman does in her Slate piece–that “Facebook reserves the right to seriously bum you out by cutting all that is positive and beautiful from your news feed”. Where does Waldman think all that positive and beautiful stuff comes from in the first place? Does she think it spontaneously grows wild in her news feed, free from the meddling and unnatural influence of Facebook engineers?

Third, if you were to construct a scale of possible motives for manipulating users’ behavior–with the global betterment of society at one end, and something really bad at the other end–I submit that conducting basic scientific research would almost certainly be much closer to the former end than would the other standard motives we find on the web–like trying to get people to click on more ads. The reality is that Facebook–and virtually every other large company with a major web presence–is constantly conducting large controlled experiments on user behavior. Data scientists and user experience researchers at Facebook, Twitter, Google, etc. routinely run dozens, hundreds, or thousands of experiments a day, all of which involve random assignment of users to different conditions. Typically, these manipulations aren’t conducted in order to test basic questions about emotional contagion; they’re conducted with the explicit goal of helping to increase revenue. In other words, if the idea that Facebook would actively try to manipulate your behavior bothers you, you should probably stop reading this right now and go close your account. You also should definitely not read this paper suggesting that a single social message on Facebook prior to the last US presidential election may have single-handedly increased national voter turnout by as much as 0.6%. Oh, and you should probably also stop using Google, YouTube, Yahoo, Twitter, Amazon, and pretty much every other major website–because I can assure you that, in every single case, there are people out there who get paid a good salary to… yes, manipulate your emotions and behavior! For better or worse, this is the world we live in. If you don’t like it, you can abandon the internet, or at the very least close all of your social media accounts. But the suggestion that Facebook is doing something unethical simply by publishing the results of one particular experiment among thousands–and in this case, an experiment featuring a completely innocuous design that, if anything, is probably less driven by a profit motive than most of what Facebook does–seems kind of absurd.
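
To give a sense of how mundane this machinery is, here is roughly what the random assignment behind a garden-variety A/B test looks like. This is a generic Python sketch of standard industry practice (the experiment and arm names are made up), not anything specific to Facebook.

```python
import hashlib

def assign_condition(user_id: str, experiment: str, arms: list) -> str:
    """Deterministically assign a user to one arm of an experiment.

    Hashing the user ID together with the experiment name keeps each user in a
    stable condition for that experiment, while keeping assignments across
    different experiments effectively independent of one another.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

# Hypothetical example: a feed-ranking test with one control and two treatment arms.
arm = assign_condition("user_12345", "feed_ranking_test_v2",
                       ["control", "variant_a", "variant_b"])
```

Hashing (rather than storing a random assignment for every user) is a common design choice because it requires no extra bookkeeping and returns the same answer every time a given user shows up.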

Fourth, it’s worth keeping in mind that there’s nothing intrinsically evil about the idea that large corporations might be trying to manipulate your experience and behavior. Everybody you interact with–including every one of your friends, family, and colleagues–is constantly trying to manipulate your behavior in various ways. Your mother wants you to eat more broccoli; your friends want you to come get smashed with them at a bar; your boss wants you to stay at work longer and take fewer breaks. We are always trying to get other people to feel, think, and do certain things that they would not otherwise have felt, thought, or done. So the meaningful question is not whether people are trying to manipulate your experience and behavior, but whether they’re trying to manipulate you in a way that aligns with or contradicts your own best interests. The mere fact that Facebook, Google, and Amazon run experiments intended to alter your emotional experience in a revenue-increasing way is not necessarily a bad thing if in the process of making more money off you, those companies also improve your quality of life. I’m not taking a stand one way or the other, mind you, but simply pointing out that without controlled experimentation, the user experience on Facebook, Google, Twitter, etc. would probably be very, very different–and most likely less pleasant. So before we lament the perceived loss of all those “positive and beautiful” items in our Facebook news feeds, we should probably remind ourselves that Facebook’s ability to identify and display those items consistently is itself in no small part a product of its continual effort to experimentally test its offering by, yes, experimentally manipulating its users’ feelings and thoughts.

What makes the backlash on this issue particularly strange is that I’m pretty sure most people do actually realize that their experience on Facebook (and on other websites, and on TV, and in restaurants, and in museums, and pretty much everywhere else) is constantly being manipulated. I expect that most of the people who’ve been complaining about the Facebook study on Twitter are perfectly well aware that Facebook constantly alters its user experience–I mean, they even see it happen in a noticeable way once in a while, whenever Facebook introduces a new interface. Given that Facebook has over half a billion users, it’s a foregone conclusion that every tiny change Facebook makes to the news feed or any other part of its websites induces a change in millions of people’s emotions. Yet nobody seems to complain about this much–presumably because, when you put it this way, it seems kind of silly to suggest that a company whose business model is predicated on getting its users to use its product more would do anything other than try to manipulate its users into, you know, using its product more.

Why the backlash is deeply counterproductive

Now, none of this is meant to suggest that there aren’t legitimate concerns one could raise about Facebook’s more general behavior–or about the immense and growing social and political influence that social media companies like Facebook wield. One can certainly question whether it’s really fair to expect users signing up for a service like Facebook’s to read and understand user agreements containing dozens of pages of dense legalese, or whether it would make sense to introduce new regulations on companies like Facebook to ensure that they don’t acquire or exert undue influence on their users’ behavior (though personally I think that would be unenforceable and kind of silly). So I’m certainly not suggesting that we give Facebook, or any other large web company, a free pass to do as it pleases. What I am suggesting, however, is that even if your real concerns are, at bottom, about the broader social and political context Facebook operates in, using this particular study as a lightning rod for criticism of Facebook is an extremely counterproductive, and potentially very damaging, strategy.

Consider: by far the most likely outcome of the backlash Facebook is currently experiencing is that, in future, its leadership will be less likely to allow its data scientists to publish their findings in the scientific literature. Remember, Facebook is not a research institute expressly designed to further understanding of the human condition; it’s a publicly-traded corporation that exists to create wealth for its shareholders. Facebook doesn’t have to share any of its data or findings with the rest of the world if it doesn’t want to; it could comfortably hoard all of its knowledge and use it for its own ends, and no one else would ever be any wiser for it. The fact that Facebook is willing to allow its data science team to spend at least some of its time publishing basic scientific research that draws on Facebook’s unparalleled resources is something to be commended, not criticized.

There is little doubt that the present backlash will do absolutely nothing to deter Facebook from actually conducting controlled experiments on its users, because A/B testing is a central component of pretty much every major web company’s business strategy at this point–and frankly, Facebook would be crazy not to try to empirically determine how to improve user experience. What criticism of the Kramer et al article will almost certainly do is decrease the scientific community’s access to, and interaction with, one of the largest and richest sources of data on human behavior in existence. You can certainly take a dim view of Facebook as a company if you like, and you’re free to critique the way they do business to your heart’s content. But haranguing Facebook and other companies like it for publicly disclosing scientifically interesting results of experiments that it is already constantly conducting anyway–and that are directly responsible for many of the positive aspects of the user experience–is not likely to accomplish anything useful. If anything, it’ll only ensure that, going forward, all of Facebook’s societally relevant experimental research is done in the dark, where nobody outside the company can ever find out–or complain–about it.


111 thoughts on “In defense of Facebook”

  1. I don’t want to hear about how fb manipulates our feed all the time. Irrelevant. This is systematic research that, yes, manipulated an affective variable, and it should require informed consent. I also don’t want to hear that the users’ agreement counts as informed consent. A general warning that studies may be conducted is not informed consent for any particular study.

    My understanding is that the study was vetted by an IRB, who apparently waived consent because “fb manipulates our feed all the time.” If that was the true basis for the waiver of consent, then that IRB has major problems.

    1. One thing I’m not clear on is what you and others upset about the ethics of the study think should be different. I can’t think of changes that would address concerns without making things worse, which makes me wonder if I’m misunderstanding.

      Would you prefer that it be non-systematic? That designers at Facebook just say “yeah, I think the feed would be better if we had more/less positive emotion” and pick a value, without measuring possible effects?

      If the argument is for more of a consent process, how would you like consent handled for the thousands of experiments? And, since full details would make experiments ineffective, what should they say? “Hey, just a reminder, the Facebook feed is a designed environment. Sometimes we test changes and those changes might affect you.”? If that’s the solution, is there any website that shouldn’t carry that reminder?

      1. I would prefer that this research be held to the same standards as human subjects research conducted in the scientific community. I would prefer that this research be vetted by an independent (not internal) IRB. It still remains unclear whether or not this was the case.

        Yes, the main issue is consenting/debriefing. There are well-practiced ways for obtaining consent for such studies that do not ruin the study (e.g., As part of this study, your exposure to positive and negative information may be altered…).

        More to the point, convenience is not an acceptable excuse for neglecting informed consent or debriefing. If you can’t do the study without properly consenting subjects, then you can’t do it, period (assuming the potential effects of the study are greater than minimal risk that would be encountered on a daily basis).

        1. But it’s clear beyond a shadow of a doubt that the potential effects of this particular study are not greater than those that would be encountered on a daily basis. There is no sense in which cutting out a proportion of negative and positive items in one’s news feed poses any conceivable risk beyond the ordinary. On any day of the week, you might happen to log into Facebook and see news about several people dying, being diagnosed with cancer, getting hit by cars, and so on and so forth. It really strains credulity to argue that removing a portion of emotional posts (based on rather crude labeling methods, I might add) puts users at any risk that they don’t already face by, you know, being exposed to all of the trials and tribulations of their online friends and acquaintances. But once you accept that the risks are minimal, Facebook has a perfectly legitimate claim to a waiver of consent. And you’re wrong that convenience is not an acceptable reason for not providing informed consent. The HHS guidelines seem pretty clear on that point: if it’s minimal risk and not practical to do the study without consent, you can ask for a waiver. I’ve now successfully obtained waivers of consent for online studies at three separate institutions, always without any fuss from the IRB. So I don’t think your representation of how IRBs operate in general is accurate (though of course I can’t speak for your institution).

      2. You write that as if you knew it before the study was done and showed a small effect size. You didn’t. It is not at all inconceivable that manipulating someone’s emotional experience by messing with their feed could have a significant effect. Moreover, we still know nothing about potentially important moderators of the effect. What might the effect be on someone who is contemplating suicide? Are you confident enough to start pushing that person’s emotions around without at least offering him the option to opt out? Not me.

        Yes, as I said, convenience may matter if there is minimal risk. If there is minimal risk (as judged by an accredited IRB).

        1. By that reasoning, no study ever involves just minimal risk, because no one ever knows with reasonable certainty what the effect size will be before they conduct the study (or else there would be no point to doing the study). Can you come up with a scenario in which someone commits suicide because of this manipulation? Of course. But I can also come up with a scenario in which someone already suicidal gets so bored doing a Stroop task that it pushes them over the edge. Judgments of risk are not about what one can conceive of at the extremes (which is just about anything), but about what is actually likely.

          Remember: minimal risk is not defined in absolute terms, it’s defined entirely in relation to the risks already posed by the ordinary activities people are engaged in. There can be little doubt that items that show up in people’s news feeds on Facebook can and do precipitate suicide on occasion–very rarely, of course, but certainly more often than, say, a Stroop task. So, to answer your question: yes, I would be comfortable approving this particular manipulation without any hesitation.

          That said, I think we’re basically in agreement on everything except the actual relative risk posed by the manipulation in question here. I.e., I think we agree that it would be reasonable and ethical for Facebook to seek a waiver of consent from the IRB in future cases where its researchers intend to publish generalizable new knowledge. My contention is that many or most IRBs would approve the request; your contention is that few would. It’s an open empirical question, and I’m happy to agree to disagree on our predictions.

      3. No. By that reasoning, you must subject your protocol to an accredited IRB, whose job is to decide whether or not the risks are more than minimal. You cannot decide for yourself.

        In fact, mood studies are quite frequently deemed to be of sufficiently minor risk that they are granted “expedited” review, which means that they can be reviewed by a single member of the IRB, rather than by the whole committee. However, even when mood studies are expedited, I have never seen a case in which the risk was deemed sufficiently minimal to exempt the study from standard requirements of consent/debriefing. I do not believe you would find many IRBs willing to suspend consent/debriefing for a mood intervention.

    2. In fact, across the millions of facebook users, many people will be affected. That’s what the study showed.

    3. Actually, wrong. The IRB was only presented with what they were told was a “pre-existing data set.” To them, the research was presented as a fait accompli. The researchers did not apply for permission before the data were gathered even though the university researchers were part of the study design, as per the PNAS paper.

  2. “…Facebook would be crazy not to try to empirically determine how to improve user experience.” < agreed

  3. Great post as always, Tal. I think you make some excellent points and I particularly agree that the backlash risks dissuading private companies in future from sharing such data for public betterment.

    Still, I’m not sure where I stand on this because I think there are some problems with some of your arguments too.

    First, the defence only holds water if Facebook would have run the exact same experiment anyway. That way the academics could legitimately argue to their IRB that the study is essentially observational research – observing an intervention that they had no control over, as you might any other corporate behaviour or even a force of nature. But is that the case? Would Facebook have conducted this EXACT study regardless? I suspect not. We can infer this from the fact that the PNAS article lists the academics involved as contributing to the experimental design of the research. The moment the academics are changing the design, it raises the ethical standards in accordance with the Declaration of Helsinki, and for good reason.

    It isn’t a legitimate ethical defence to say that Facebook would have performed similar research anyway, because “similarity” is not sufficient to inform adherence to ethics. Similar experiments can very easily differ considerably in their ethical status.

    Second, while there is a risk of a pushback reducing corporate transparency in the future, there is an equal and opposite risk that such studies expose a loophole in current ethical standards. Imagine a future where, to bypass pesky IRBs, academics routinely team up with companies that have the ability to perform such research on unsuspecting participants. Maybe this scenario falls prey to a slippery slope fallacy. Maybe not.

    Third, your defence of Facebook in terms of the weakness of the intervention seems sound in this case (it was almost trivially small), but where would you draw the line in cases where it was more substantive? What if the intervention did involve potentially causing harm or risks to the participants?

    Fourth, there has been no explanation as yet from the authors or the PNAS editor as to why the IRB that apparently provided approval for the study was not listed in the published article, as required by PNAS editorial policy. I’m considering covering the study at our Guardian blog so I’ve put this question to them (currently awaiting a response).

  4. I’m not sure your defense of Facebook is consistent with ethical standards for human research. Consider the following:

    “Facebook removed emotional messages for some users. It did not, as many people seem to be assuming, add content specifically intended to induce specific emotions.”

    So withholding an intervention a user may be expecting is different than giving an intervention a user was not expecting? Does this mean randomizing a patient to not receive a drug they otherwise would receive is a different ethical standard than randomizing a patient to receive a drug they otherwise would not receive? In conventional medical research, these are considered the same.

    Also:

    “For the sake of contextualizing the concerns I discuss below, though, two points are worth noting. First, these effects, while highly statistically significant, are tiny.”

    A researcher cannot use the argument that the magnitude of the effect of their research on their subjects was small, in retrospect, as justification for not obtaining informed consent. An IRB should waive informed consent of subjects only if the researcher can demonstrate beyond reasonable doubt that the effect on the subjects will be zero, prospectively.

    Finally:

    “Second, the fact that users in the experimental conditions produced content with very slightly more positive or negative emotional content doesn’t mean that those users actually felt any differently.”

    Again, in order for an IRB to grant an exemption from informed consent based on this argument, this hypothesis would need to be proven beyond reasonable doubt prospectively. The burden of proof is on the researcher to prove lack of effect on the subject.

    For an example of an insightful use of large amounts of social media data for research (albeit much less scientifically rigorous research), without violating ethical standards for human experimentation, I recommend checking out the blog for OKCupid. It’s no longer updated, but old posts are still available to read.

    1. Here’s a core problem with the issue at hand when you write, “So withholding an intervention a user may be expecting is different than giving an intervention a user was not expecting?”:

      Given that Facebook is not open about its algorithmic filtering, we currently have no idea what content on a normal day-to-day basis is provided in individuals’ Facebook feeds. It’s completely possible that Facebook has manipulated the feed as it currently stands on a day-to-day basis to show us particular content. There should be absolutely no possible argument that Facebook’s filters were showing anyone “what they expected to see” regardless of individuals’ expectations.

  5. Spoken just like someone who’s not a psychologist. Are you a psychologist? You don’t sound like one. Experimenting on people requires informed consent. That’s not optional, even for facebook. It doesn’t matter how interesting you think the results are, they should have had informed consent from study participants. They broke the law.

  6. This defense is weaker than homeopathic tea. First, parsing the distinction between “removing happy sentiments” and “adding negative sentiments” is silly and does not stand up to serious thought. Second, while Facebook are of course allowed to do whatever market research they want, and publish it on a web page if they want, they do not get to call themselves scientists or publish in a science journal or use federal funds unless they play by the same rules as other scientists, viz., informed consent. Which was totally absent here. This is not just an academic question; Facebook, Twitter et al. have been billing themselves as the Big Data God’s Gift to the Social Sciences for years. The pattern of these web startups is consistently to gain an edge by insisting that innovative disrupters need to be free of the usual pesky regulatory constraints that the rest of us face. BS. The data may be useful; so were the data from experiments on humans in high & low pressure chambers from Nazi concentration camps; sure it’s best that data is made public; does not make it science, does not make the experiments ethical.

  7. The problem here is not what they did, but how they did it. PNAS, along with all other journals, states that research published there should be ethically approved by an independent research board and follow the Declaration of Helsinki guidelines. This research did not. It certainly did not use fully informed consent, nor sufficiently debrief participants as to how to get help if they were affected by the study (which, according to the results, they were). Facebook is potentially a brilliant source of scientific data; shame they have totally screwed this up, and even more shame on the editors and reviewers for allowing this to be published, especially in such a high-impact journal. Research papers are rejected every day for much much less.

  8. Meh. I prefer to look at this objectively. Hopefully Facebook will offer better filtering options for the user News Feed.

    I am fed up of my “friends” posting silly pictures of their cats and dogs. I see these “experiments” as a good thing.

    In fact, I bet if Facebook kept their mouth shut and simply announced a different News Feed filter was available to enhance the FBExperience project for grumps like me, EVERYBODY would go “Uh… Ok.” and keep scrolling on down to see more of Fluffy without knowing the difference.

    —-

    What a stupid complaint against Facebook programming.

    A bunch of people click on everything they “Like” on their News Feed.
    Facebook shows them more of what they “Like” and that is supposed to be a bad thing? That is manipulation?? Sheesh! What is next? Outlawing impulse items and the magazine rack at the checkout counter?

  9. I think everybody understood precisely what they did. And that’s why people are pissed.

    Not only did they actively increase the ratio of negativity in people’s lives, they also prevented people’s messages from being shown to their friends and loved ones.

    All of this without anyone’s consent.

  10. @Charles Anthony: You do not seem to grasp what this discussion is about, at all. Have you read the article?

    1. @Bob_q, no need to waste our time with Charles Anthony. He appears to show significant negative effects as a result of this experiment. His mood is in outlier territory 😉

  11. The real problem here is that most of the commenters apparently have never filled out an IRB application or were not paying attention when they did so. If they had, they would know that one can ask for a waiver of informed consent. Here are the federal regs on this issue.

    http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.html#46.116

    Here’s the pertinent section:

    (d) An IRB may approve a consent procedure which does not include, or which alters, some or all of the elements of informed consent set forth in this section, or waive the requirements to obtain informed consent provided the IRB finds and documents that:

    (1) The research involves no more than minimal risk to the subjects;

    (2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;

    (3) The research could not practicably be carried out without the waiver or alteration; and

    (4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

    So this is the actual law.

  12. @MEM:
    (a) Following the law (in all its technical details and loopholes) does not make something ethical.
    (b) The welfare of the subjects was affected by the alteration.
    (c) This research actually could be carried out without the alteration (letting people know they will participate in a study and requesting their consent, without letting them know about exactly what manipulation would occur, would be feasible and would not introduce significant additional doubt with regards to the results).
    (d) To my knowledge, no additional information was given to the unwilling, unknowing participants.

    So, really, you have no argument here.

    More importantly, have you actually carried out a psychology study or filled in an IRB application yourself? If so, our ethical practices differ enormously, and I would question your institute’s ability to match the most basic ethical research standards.

    1. Hi Bob_q, quite frankly I would question your training and abilities as a scientist based on your comments. To answer your question, not only have I filled out many IRB applications, but I have served as an IRB chair where we made our decisions based on the law and reason, not personal opinions about what is and is not ethical. If I had reviewed the application for this study, I would note that 1) the positivity or negativity of any Facebook feed varies from week to week; 2) the content of any individual’s Facebook feed is manipulated by Facebook for all kinds of reasons, and users know that or at least have agreed to it; and 3) the manipulation proposed by the experimenters is very weak, lasting only a week and consisting of removing a proportion of positive or negative content and replacing it with neutral content. I see no reasonable expectation of harm here, and your contention that the “welfare” of the participants was affected is not supported by the data. I would probably have asked the researchers to include a debriefing or some way to give participants direct access to the posts that were withheld from the feed, but perhaps there was some logical argument against that. I suggest you read Tal’s comments below carefully as he does bring up some excellent points about what we really should be concerned about.

  13. Thanks for the comments, all. I’ll try to address them in the aggregate rather than responding in detail to each one (though I’ve left additional comments above).

    First, it’s simply not correct to suggest that all human subjects research requires informed consent. At least in the US (where Facebook is based), the rules governing research explicitly provide for a waiver of informed consent. Directly from the HHS website:

    An IRB may approve a consent procedure which does not include, or which alters, some or all of the elements of informed consent set forth in this section, or waive the requirements to obtain informed consent provided the IRB finds and documents that:

    (1) The research involves no more than minimal risk to the subjects;

    (2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;

    (3) The research could not practicably be carried out without the waiver or alteration; and

    (4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

    Granting such waivers is a commonplace occurrence; I myself have had online studies granted waivers before for precisely these reasons. In this particular context, it’s very clear that conditions (1) and (2) are met (because this easily passes the “not different from ordinary experience” test). Further, Facebook can also clearly argue that (3) is met, because explicitly asking for informed consent is likely not viable given internal policy, and would in any case render the experimental manipulation highly suspect (because it would no longer be random). The only point one could conceivably raise questions about is (4), but here again I think there’s a very strong case to be made that Facebook is not about to start providing debriefing information to users every time it changes some aspect of the news feed in pursuit of research, considering that its users have already agreed to its User Agreement, which authorizes this and much more.

    Now, if you disagree with the above analysis, that’s fine, but what should be clear enough is that there are many IRBs (and I’ve personally interacted with some of them) that would have authorized a waiver of consent in this particular case without blinking. So this is clearly well within “reasonable people can disagree” territory, rather than “oh my god, this is clearly illegal and unethical!” territory.

    Let’s also keep in mind that these rules are themselves anachronistic, as they were not written with online data collection by major corporations in mind. I suspect that when HHS next revises their rules, they will treat this kind of situation explicitly, as it comes up quite often these days–and again, many IRBs routinely grant waivers of consent for online data collection (typically without the added benefit of any User Agreement that explicitly states what may or may not be done to participants).

    Second, Ian Holmes and Chris Chambers suggest that pointing out that the effect was small is not a principled defense of conducting a study, since that can’t be known in advance. I completely agree, and this was precisely why I pointed out that the effect was small when describing the study, and not when arguing that the backlash was unjustified. I was not arguing that we shouldn’t worry about the Facebook study because the effect size was small. I was simply observing that the effect size was very small in order to give some context to the backlash. It’s quite clear that many people who are up in arms about this are under the impression that Facebook is causing wild swings in its users’ emotions with this manipulation, which is patently not the case. The fact that the effect is very small may not affect principled concerns about research ethics, but it most certainly does matter to many naive readers who appear to have walked away from articles like the one in Slate believing that Facebook has made “thousands upon thousands of people sad”, and is carelessly toying with its users’ feelings (which of course is silly as soon as you realize how small these effects are compared to the huge amount of variance in emotions induced by simply watching the “normal” news feed scroll by). It’s important to quantify the effects we’re talking about here in intuitive terms for all of the people who care much less about the informed consent issue than about the possibility that using Facebook is actually dangerous (and I suspect that that’s actually the vast majority of non-scientists who were riled up by exaggerated headlines).

    Third, a couple of people have objected to my suggestion that Facebook’s removal of emotional information is less problematic from an ethical standpoint than a positive mood induction would be. E.g., Eric Strong says above:

    So withholding an intervention a user may be expecting is different than giving an intervention a user was not expecting? Does this mean randomizing a patient to not receive a drug they otherwise would receive is a different ethical standard than randomizing a patient to receive a drug they otherwise would not receive? In conventional medical research, these are considered the same.

    I think there is an important disanalogy here. What IRBs care about, first and foremost, is minimization of likely risks, not abstract logical equivalence. Of course it’s theoretically possible for an act of omission to produce results as bad as an act of commission (and in fact, in this particular case, that turns out to be the case!). But a priori, it’s very clear that removing emotional information that would otherwise be presented (of course “otherwise” is itself loaded here, because in the normal course of things, the Facebook news feed is constantly varying in much more dramatic ways) is much less likely to cause harm than adding new emotional information. A better analogy would be to compare a situation in which a researcher wants to partially withhold a drug that a person has already taken on and off for years without any health problems to a situation in which a researcher wants to significantly increase the dose beyond what the participant has previously experienced. There is little doubt to my mind that most IRBs would be much more concerned about the latter case than the former case–as they should be. IRBs are not principally in the business of defining terms like “omission” and “commission”, they’re in the business of pragmatically assessing risks and benefits.

    Fourth, some people seem to be arguing that PNAS should not have published the study, because publication in a scientific journal requires “that research published there should be ethically approved by an independent research board and follow the declaration of Helsinki guidelines”. People apparently fail to appreciate that, as the editor at PNAS herself noted, the study was approved by an IRB. Now, I agree entirely with Chris Chambers above that this should have been made explicit in the paper–and that’s an editorial failure on PNAS’s part that probably explains why some of these concerns arose in the first place–but that’s different from claiming that the study was not approved by an IRB, which in fact it was. Again, one can argue about whether IRB approval was appropriate or not in this case, but in that case one should at least acknowledge that this is hardly an extreme case, and similar decisions are made all the time by many IRBs across the country when dealing with online research. Put differently, if you’re unhappy that such a study could receive approval, you’re really arguing for a revision of the rules, since HHS currently allows IRBs to interpret the regulations as they think is appropriate, and a good number routinely do exactly as was done in this case.

    Fifth, Chris Chambers, as well as commentators on Twitter, pointed out that there is a risk of a slippery slope here, in that if we allow Facebook to publish this particular study simply because it falls well within the scope of ordinary business operations, we risk encouraging other scientists to find a similar loophole by collaborating with research partners in industry, or by conducting a study under a business pretext and then later claiming it was done for research purposes. I completely agree with this concern, and think we should take it very seriously. However, I would argue that the solution to the problem is categorically not to reject the Facebook study for this reason, because that would be a one-off response that does not address the core issue in a generalizable way. The fact of the matter is that there is no principled way to prevent people from abusing this kind of loophole. Consider the implications if we were to apply such a policy consistently: every time someone tried to publish a paper using archival data acquired in (say) a corporate setting, we would be forced to say, “well, we can’t prove that these data would really have been collected in exactly this way if not for the research, so we’re not going to publish it.” This would have a devastating effect on archival research and on academia/industry collaborations.

    Personally, I would argue that the right way to address this problem is to stop pretending that there is any meaningful separation between legal user agreements on websites and informed consent, and to instead strive to enforce clear and concise Terms of Service. As it stands, the separation between legal user agreements and informed consent forms is actually deeply problematic from an ethical standpoint, because it arguably violates the principle of autonomy, which is a primary reason for providing informed consent in the first place. In other words, if I’ve carefully read Facebook’s Terms of Service, and understand that I am going to be experimented on, and that my emotions and experiences can and will be routinely manipulated, it strikes me as grave condescension for someone to show up a year later and effectively say, “oh but you poor thing, you didn’t know what you were signing up for, and for your own protection, we need to have you read a new form every single time you participate in a study!” The problem, of course, is that in practice, users don’t read the Terms of Service carefully and clearly, because they’re full of dense legalese, and nobody has time for that. Personally, my preferred solution would be to do more or less what the US Congress did in the case of credit card agreements–namely, come up with a mandatory set of terms that all credit card companies must use, written in plain English, that take priority over the dense legalese. I think if we passed laws that forced companies like Facebook to say, in simple English, unobscured by pages upon pages of other text, something like “you accept that your user experience on Facebook can and will change periodically, and that Facebook may use you as a participant in research studies without explicitly informing you,” that would strike the optimal balance between ensuring that people understand what’s going to happen, and still allowing research to go forward in a way that can be regulated consistently and with minimal loopholes.

    1. “Personally, I would argue that the right way to address this problem is to stop pretending that there is any meaningful separation between legal user agreements on websites and informed consent”

      That would be a fallacious argument. Just because a person is capable of checking a box and clicking “I agree” does not ensure they’re capable of giving informed consent. In a random sample of 100k adults, a certain number are going to be mentally ill and incapable of giving informed consent. I’d like to see Facebook’s methodology for screening out this group from their research. This argument also fails to acknowledge the reality that over 99% of people who “agree” to the ToS never read it.

      1. exactly. anyone arguing that the user agreement suffices for informed consent does not understand informed consent.

        1. Jeff, did you read my comment? I largely agreed with that. But the reality is that it’s much more likely that Facebook will in future seek (and obtain) a waiver of consent than that it will ever start obtaining informed consent from participants–and many IRBs would argue that that’s a perfectly acceptable approach (as would I). What I think you’re missing here is a sense of perspective: Facebook scientists only publish a few studies a year, whereas they probably run literally thousands of studies a year internally. So it’s all well and good to get caught up in what happens to one PNAS paper, but the collective well-being of the people in that one study is dwarfed by that of the rest of the userbase in all of the other experiments we never hear about. What I’m arguing is that rather than trying to enforce much more stringent standards on corporate research–which aren’t going to be adopted anyway, and apply to only a tiny fraction of what corporations do–it would be much better to consolidate our views about how people and their data should be treated ethically, and then make sure that everyone has to abide by the same standard. That would very likely involve both (a) weakening the requirements for consent in certain kinds of research studies (which, frankly, seems like a no-brainer in the internet era) and (b) strengthening protections for users of services like Facebook.

      2. I agree that this is a legitimate problem, but I fail to see how obtaining informed consent online does much to solve it. Do you really think that presenting online participants with a 3-page form and forcing them to click “AGREE” before they can start an experiment will actually make them read that form, given that they don’t read the ToS? If it’s practically impossible to get student volunteers in lab studies to actually read the consent forms you give them, what makes you think online participants are going to bother? If your argument is that checking a box doesn’t constitute giving actual informed consent, then you’re essentially arguing that it’s impossible to do human subjects research online–and quite possibly offline too. What I’m suggesting is that it’s precisely because people are not going to bother to read multiple pages of information that we need to come up with a unified, concise, and readable approach to these problems–much like the new credit card regulations (which are much easier to read, understand, and consent to than the full legalese).

      3. So, because most people won’t take advantage of the opportunity for informed consent, it’s not important to offer it? Sorry, that is not a convincing argument against informed consent.

        No IRB I have ever worked with would offer a waiver for a study aiming to manipulate people’s emotional experiences. More to the point, from whom will Facebook seek their waiver? An internal IRB? Yikes.

        Your point about the number of studies Facebook conducts only suggests that they have been behaving unethically on many more occasions than we know. The argument that “they do it all the time” baffles me. It is irrelevant to the ethical question. Also, I do not believe that conducting research for the purpose of creating generalizable knowledge is something they do thousands of times a year. They do lots of research to improve their click rates, but that is not systematic research that triggers human subjects protocols.

        I agree that we should consolidate our views of ethical human research. I think the scientific community has provided a reasonably good set of guidelines that corporate entities should adopt. I cannot agree with you that it would be a good thing to weaken consent requirements.

        If Facebook continues to conduct research without obtaining consent, I suspect that, sooner or later, some lawmaker is going to decide that human subjects should have some fundamental rights, regardless of who is conducting this research. For this reason, I suspect that Facebook is probably done with allowing Adam Kramer to conduct research that produces generalizable knowledge. Certainly, they are done with allowing him to publish any of their results.

    2. A story just came out that the word “research” was not in the terms of service at the time of the experiment:

      http://www.forbes.com/sites/kashmirhill/2014/06/30/facebook-only-got-permission-to-do-research-on-users-after-emotion-manipulation-study/

      Even if you have “carefully read Facebook’s Terms of Service, and understand that I am going to be experimented on” after they added the word “research”, I don’t know how you would think that you would be “experimented on”. They mention use of data for research purposes to improve the user experience, but human subjects experimentation not directly related to an improved user experience is not at all what the terms imply. Also, the experiment itself worsened the user experience, so…

      1. dusanyu, I think your concern is a reasonable one, but note that it’s not a concern about Facebook in particular; it’s a general complaint about the fact that under current United States law, you have essentially no special protection in this kind of situation. Remember: this experiment was not technically considered research, because it was conducted for internal Facebook purposes, and Facebook is not under the jurisdiction of federal human subjects research regulations. As long as it’s covered under the Terms of Service, anything they do is perfectly legal.

        With respect to the change you mention, I think what it clearly shows is that Facebook anticipated this issue to some degree, and was trying to cover all of its legal bases. That said, most of the legally informed opinions I’ve read seem to agree that Facebook is still not at any real legal risk, despite this disclosure, because their ToS already clearly stated that Facebook could do pretty much anything it likes to your user experience in support of improvements to the service. Ultimately, for people’s complaints to have any force in this case, there will probably have to be some regulatory changes down the line.

  14. Bob_q, I don’t think you understand precisely what Facebook did if you think that they “prevented people’s messages from being shown to their friends and loved ones”. Facebook has always exerted a very strong filtering effect on your news feed. If you’re like the average person, and have hundreds of friends, you would simply drown in information if Facebook showed you literally everything everyone you know posted. So there is always curation involved. Which means that Facebook was already choosing for you what you get to see or don’t see. If you don’t like it, you shouldn’t use Facebook. The idea that there is some natural, unfiltered news feed that Facebook is mucking with is pure fiction.

    When you say that “they actively increase the ratio of negativity in people’s lives”, you should consider that the “prior ratio of negativity” is itself entirely an artifact of Facebook’s prior manipulation. For all you know, Facebook may have been actively suppressing negative events in your feed for years, so that the positivity in your feed is entirely an illusion. Would you then argue that Facebook should add more negativity to balance things out?

  15. First: thank you to the author for a thoughtful discussion of an overhyped issue. Second: all you folks who are so self-righteously spouting off about ethics… you clearly do not understand the concept of A/B testing or how prevalent it is (a minimal sketch of how such testing is typically set up follows below). This study was nothing more and nothing less than standard operating procedure for, gosh, just about every business of any size these days. If you are worried about emotional manipulation, that battle was lost a long, long, long time ago. Have you heard of marketing? I mean, really. Please get a sense of perspective.
    I think the real story behind this non-story is that a lot of people feel very uneasy about the notion that their thinking and behaviors are manipulated by forces they don’t really understand, for reasons they don’t really understand. Yep. This has been happening all along, for as long as people have been social animals, and probably was happening even to our proto-human ancestors. This is why scepticism, critical thinking and the scientific method are such useful tools for personal liberation. Channel your efforts into those productive pursuits, don’t waste your time barking up a tree among a vast forest of identical trees.
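
    For readers who have never seen one, here is a minimal, hypothetical sketch of how routine A/B assignment is commonly implemented: each user ID is hashed into a bucket, so the same user always lands in the same condition without any per-user bookkeeping. The function and experiment names are invented for illustration; this shows the general technique, not Facebook’s actual code.

    ```python
    import hashlib

    def assign_condition(user_id: str, experiment: str, treatment_fraction: float = 0.5) -> str:
        """Deterministically assign a user to 'treatment' or 'control' for an experiment.

        Hashing (experiment, user_id) yields a stable pseudo-random value in [0, 1),
        so the assignment is reproducible without storing any per-user state.
        """
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) / 0x100000000  # map the first 32 bits of the hash to [0, 1)
        return "treatment" if bucket < treatment_fraction else "control"

    # The same user always gets the same assignment for a given experiment.
    print(assign_condition("user_12345", "feed_ranking_tweak_v7"))
    print(assign_condition("user_12345", "feed_ranking_tweak_v7"))  # identical to the line above
    ```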

    1. Have you heard of marketing? YES. Have you heard of scientific research, the Nuremberg Code, etc.? Your comment reflects that you haven’t.

  16. How sure are we that words posted in a status update reflect true emotion? I think it’s more of a keeping-up-with-the-Joneses phenomenon. If you see positive posts, you are also going to post positive things, regardless of your true feelings.

  17. In a world where Facebook has a monopoly over a large chunk of many people’s social interactions, I wonder if it’s really an option for people who are uncomfortable with being experimented on to opt out. In that sense opting out of Facebook is very different from declining to visit your local psychology department to participate in an experiment.

  18. You have really missed the point, and here is where you demonstrate that you’ve lost the plot:

    >” Facebook doesn’t have to share any of its data or findings with the rest of the world if it doesn’t want to; it could comfortably hoard all of its knowledge and use it for its own ends, and no one else would ever be any wiser for it. The fact that Facebook is willing to allow its data science team to spend at least some of its time publishing basic scientific research that draws on Facebook’s unparalleled resources is something to be commended, not criticized.”<

    This is NOT about Facebook's willingness, or lack thereof, to share data. It isn't about badly designed studies that use tools so inappropriate to the purpose that they invalidate the entire concept and render the 'data' useless. It isn't, really, even about Facebook itself per se.
    It's about unethical research practices,
    Period.

    That Facebook lent itself to an unethical research project only says things about Facebook that we already knew – they don't care about the actual user, so long as their revenue stream is intact. But what it says about the researchers is screamingly loud – and what it says about a prestigious journal willing to publish such an unethical study is disturbing indeed.

  19. I believe that people have the choice to read or not to read something.
    We make ourselves upset and then blame the circumstances.
    Facebook can’t make someone sad… people who are already sad read the sad stories and then blame them for their state of mind.
    It’s the other way round…

  20. Can FB offer readers a way to manipulate their own level of positive/negative messages? Some days I would like to just turn up the happy dial. It would give the phrase “have a nice day” a whole new meaning.

  21. I didn’t find anything in Facebook’s Data Use Policy on the possibility that experimental manipulation of items that appear in their News Feeds (whether standard company policy or academic-influenced) would be published in peer-reviewed journals.

    In “Information we receive and how it is used,” they do declare:

    “…we also put together data from the information we already have about you, your friends, and others, so we can offer and suggest a variety of services and features. For example, we may make friend suggestions, pick stories for your News Feed, or suggest people to tag in photos.”

    So they do let people know their feed may be manipulated in this way, just not that they could be part of an experiment published in a place like PNAS.

    I fault the Editor for letting the manuscript be published without any discussion (or even mention) of informed consent and ethics.

    1. As people have pointed out, somewhere in the Terms of Service it clearly uses the word “research” to describe what may be done with one’s data. I’m not a lawyer, but I very much doubt that one could reasonably argue that the current policy does not also cover Facebook’s ability to publish its findings in the scientific literature. But more to the point, I don’t see how this changes anything from an ethical standpoint. I do agree that it was an oversight on the Editor’s part not to force the authors to mention the details of the IRB approval process. Though I also agree with Michelle Meyer’s take that it’s not at all clear that the authors even needed IRB approval given the way the study was conducted and the respective author contributions.

      1. Michelle Meyer’s post was indeed very informative:

        “Moreover, subjects should have been debriefed by Facebook and the other researchers, not left to read media accounts of the study and wonder whether they were among the randomly-selected subjects studied.”

        This is in reference to the Common Rule (MEM also pointed this out in an earlier comment):

        Section 46.116(d) of the Common Rule regulations

        (4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

        The fact that the research was externally published (rather than kept internally) has worried millions of users into (wrongly) thinking that their emotions were manipulated. No wonder. Besides media coverage, the authors of the paper called it “Emotional Contagion” themselves, which is highly misleading. This strikes me as ethically questionable. To me, the mere title “Experimental evidence of massive-scale emotional contagion through social networks” sounds like more than minimal risk.

        1. I agree with you on all counts, but since the choice of paper title is not within the IRB’s purview, that still doesn’t change anything. I’m pretty sure Adam Kramer regrets picking that title, so in that sense, the lesson has probably been learned.

          On Facebook providing a debriefing, I’m certainly in favor of that–though I suspect they would also be able to convincingly argue that it’s not practical for them to provide one in most cases.

      2. (Replying to the message below, but there wasn’t a reply link there for some reason – sorry.)

        I’m not sure how Facebook could convincingly argue it isn’t practical for them to debrief or provide information to subjects after the fact. They certainly have the data and ability to identify which accounts specifically were included in the study, and since they’ve already shown that they’re experts at manipulating people’s feeds, it should be no problem at all to push out a message to the affected users’ feeds providing a debriefing/explanation.

        They already do exactly this for things like anniversaries, etc., with a pinned message visible only to the user that remains the first item in the feed until dismissed.

  22. I wonder if people would have the same response if you wrote an article about Google’s recent Chrome ads featuring cute little kids and parents writing each other through GMail. Such ads, too, are entirely designed to manipulate people’s emotional state, and in turn to tweak it so that you’re more likely to spend more time in Google products. The same holds for every other bit of advertising ever released. Companies manipulate our emotions all the time, and we’re basically powerless to stop it. Even knowing it’s happening doesn’t make you immune to its effects. You could make a good argument that the biggest difference here is that Facebook released public results about what they’d done so that others could use them and learn from them – and yet we’re demonising them for that.

  23. Tal – would you be okay w/ FB running a similar experiment in which they had a group that saw mostly aggressive posts to see if they posted similarly aggressive sentiments? As in the case of a primarily negative-post feed, we have reason to believe that creating a primarily aggressive-post feed would affect people negatively (in this case, to behave aggressively) in real life (regardless of what they choose to post). Most tweaks FB & others make are designed to have relatively benign effects (e.g., spending more time on the site). Of course, other “collateral” effects of such tweaks occur but would likely cancel each other out (ads would make some people happy and others sad). But if you have empirical evidence that a particular kind of manipulation leads to a negative outcome (e.g., negative affect, or aggression), then it is, to me, not the same as tweaks designed to keep people on the site. Based on what we know, the type of manipulation used in this study was likely to affect many people in a negative (albeit minor) way – a risk that participants should know about up front.

    1. It would depend on the details, but in general, yes, I would be okay with that.

      I also think it’s very unlikely that the collateral effects of the “normal” tweaks you mention would really cancel each other out so tidily. For instance, it seems pretty clear that much of advertising is driven by fear (e.g., “I know you’ve always thought your counters are fine the way they are, but look at how anxious the guy in this ad is about picking up germs until he sprays his counters with our new product!”). It also seems very likely that the best way for Facebook to get people to spend more time on the site is not to create an entirely positive experience, but to create something much like a dependency–e.g., by inculcating a fear of missing out on what other people are doing. Optimizing for metrics like percent of stories read seems pretty much guaranteed to operate largely by manipulating people’s emotions, and it’s not at all clear that they’re the kind of emotions people would actually spontaneously choose to have (unless you think it’s a positive thing when people say things like “I don’t know why I keep spending time on Facebook, I just can’t help it”).

  24. Pardon my naivety, but is there any reason why we should trust that Facebook is telling the truth about these seemingly insignificant results? Perhaps this news release is itself just a follow-up to the experiment.

  25. I read this article more carefully and think it’s outrageous; I wondered a few times whether it is a joke or just brilliantly ironic – honestly. Just in case it’s not, here I go. The article argues the following:

    1. Removing specific posts from the feed is somehow much better than adding specific posts. To that I would say that this is the same as saying that not only is omitting the truth not lying, but that it should be considered acceptable behavior.

    2. The fact that FB already curates your experience for things like your interests, or things that you specifically selected or hid, means you should expect FB to curate it in any other way. I say that a) users might expect curation of one kind but not the other, and b) curating for the user’s benefit vs. FB’s benefit (even if that means the user’s harm) is clearly not the same thing.

    3. This study should be assumed to be more for “global betterment of society” than “trying to get people to click on more ads.” I don’t even know where to start with this one, but this psychology researcher somehow a) is not familiar with the fundamental connection between marketing and emotional manipulation, b) thinks that FB is not in the ad-clicking business, and c) thinks FB would not use the findings from this study to generate more revenue, no matter why it originally designed the study.

    4. “There’s nothing intrinsically evil about the idea that large corporations might be trying to manipulate your experience and behavior” because your FB friends are doing the same. He even says this is like “your mother wants you to eat more broccoli.” To that I can only say that the author of this blog has some terrible friends and family who he assumes will only exploit him financially, like his mother who apparently runs the broccoli industry.

  26. The main issue I have with the experiment is that they conducted it without telling us. Granted, that would have been counterproductive, but even a small adverse effect is still an adverse effect. I just don’t like the idea that corporations can do stuff to me without my consent. Just my opinion.

  27. There are two issues with this study. One is methodological. It may not be a particularly well designed study. See this review http://psychcentral.com/blog/archives/2014/06/23/emotional-contagion-on-facebook-more-like-bad-research-methods/.

    The second is more to the point of this article. We all expect Facebook to experiment with the news feed to improve the user experience. But this was not a study done to improve the user experience, so it is unexpected. To the extent that it is unexpected, we need more regulation of that kind of activity, whether it is good and useful science or not. This particular research may not be particularly dangerous, but the concern is one of a slippery slope.

  28. @Joe: A wireless carrier or the postal service is supposed to deliver all messages. Facebook is not supposed to show you all posts from all your friends. Facebook is constantly running little experiments that slightly change the way the news feed is set up.

  29. “Framing it that way tacitly implies that Facebook must have done something specifically designed to induce a different emotional experience in its users.”

    It does indeed tacitly imply that, and rightly so. If Facebook wanted to see how the emotional content affected what users were posting, there is no way to do so other than to attempt to manipulate the users’ emotions, and that is what is quite clearly implied by the reduction of positive or negative posts in the feed. If the users’ emotions were not being manipulated, then the hypotheses of the study simply could not be tested.

    Manipulating people to click ads is certainly not any more noble a pursuit, but clickbait has a much more simple function, which is to get people to click ads, not to intentionally induce a positive or negative state and then see how people respond to it, which is what the Facebook study purported to do. The latter could have much more severe repercussions on someone suffering from depression or suicidal ideation.

    1. Hi Susan, I respectfully disagree. I think it’s naive to think that efforts to increase metrics like percent of feed items read or number of ads clicked are operating via vehicles other than emotional change. When Facebook alters its algorithm to increase the amount of time you spend on the site, it’s manipulating your emotions directly: nothing more, nothing less. Facebook’s data scientists and engineers know this, as do essentially all marketers. You say:

      Manipulating people to click ads is certainly not any more noble a pursuit, but clickbait has a much more simple function, which is to get people to click ads, not to intentionally induce a positive or negative state and then see how people respond to it, which is what the Facebook study purported to do. The latter could have much more severe repercussions on someone suffering from depression or suicidal ideation.

      I disagree strongly with this. Actually, I think in many ways there’s significantly less to worry about when the goal of the manipulation is specifically to effect emotional change rather than to sell ads. For one thing, if I’m trying to manipulate your emotional states, I’m probably thinking very carefully about how I can do that in a way that alters your mood in a relatively circumscribed way, and with minimal influence. For instance, it’s likely that the reason the authors decided to reduce the proportion of emotional posts in the news feed rather than increasing it is because the latter would almost certainly be a much stronger manipulation, and given the sample size, there was no need for a strong manipulation. Now you may say, “oh, but they’re intentionally manipulating your emotions!”. And my reply is, “yeah, and if they weren’t, then they’d still be manipulating your emotions inadvertently, but they’d be doing it without giving any thought to the consequences.”

      Compare what happened in this particular experiment with what almost certainly happens whenever Facebook builds an algorithm that has the sole goal of getting users to stay on the site longer, or click on more ads. In such cases, there are much less likely to be any checks and balances in place to keep the algorithm from bending your emotions in potentially really dark ways. For example, if my only goal is to get you to click on more of the ads I’m showing you, it could very easily turn out that the best way to do that is by putting you in a really bad mood–so that you don’t feel like going out, and simply sit in front of your screen surfing Facebook. Or maybe it’s by introducing what behavioral psychologists call a variable reward schedule, where “good” items are injected into your news feed at random intervals to keep you hooked. Or maybe it’s by disproportionately showing you items in your feed in which people mention brand names, in order to induce the illusion that a product is used more often than it actually is. These aren’t hypothetical situations; we know, for example, that much of advertising in other mediums is fear-driven, and operates by creating a fear in people that they’re failing to do something important, or missing out on something good.

      Personally I think it’s a gigantic cop-out for someone who, say, writes ads for a living, or builds models to predict ad clicks, to say “well, I’m not intentionally trying to create fear in people’s minds; I’m just trying to sell more of a product, because that’s my job!” I would much rather have that person be honest and tell me “well, I know I manipulate people’s emotions, and I actively try to do my job in a way that lets me minimize the negative and accentuate the positive whenever I can help it.”

  30. Fantastic article. As a former research analyst myself, I have seen and heard many of these complaints before… the irony being that many people actually benefit from better choices. It’s like going to a restaurant and telling them you’d like to order dessert. The wait staff won’t then hand you breakfast, lunch, dinner, and dessert menus – they’ll cater to what you’ve already indicated you’d like, by giving you only the dessert menu.
    The part that worries me the most in the field is the manipulation of the data that IS presented, by the marketing teams. Not wanting to paint an exaggerated, sweeping generalization, but research analysts – or data scientists, or big data crunchers, or whatever buzzword title people want to give us – are usually taught how to avoid bias, while marketing teams have clear agendas. So when marketers get the data handed to them, I have actually seen them CHANGE IT to match what they wanted their message to be. Beyond the conduct of the study, research analysts often don’t have much power to critique the final presentation of the data. I found this horrifically unethical, and left the field after a particularly harrowing experience of it.

    Just a couple of editing errors I felt you might want to note: “Consider: by far the most likely outcome of the backlash Facebook is currently EXPERIENCE is that, in future, its leadership will be less likely to allow its data scientists to publish their findings in the scientific literature. Remember, Facebook is not a research institute expressly designed to further understanding of the human condition; it’s a publicly-TRADE corporation”

    1. Sorry to hear about your negative experiences, clio44. I do think explaining and selling one’s findings in an honest way is one of the toughest challenges for data analysts (both in industry and academia).

      Thanks for catching the typos; I’ve updated the post.

  31. Thanks for the good post!

    May I give a pointer to a related and interesting study involving Wikipedia: http://dx.doi.org/10.1371/journal.pone.0034358

    There the researchers split uninformed Wikipedians into two groups and gave one group social rewards in the form of so-called barnstars. The study was approved by an IRB because it was judged that the study “presented only minimal risks to subjects”. Surely the barnstar could have some emotional impact – as it indeed had an effect on productivity.

    It shows that manipulative studies on social media occur and have an effect. And in this case with no Data Use Policy to hide behind.

    1. But it went through full IRB review. This study skated IRB review. Even though the researchers designed the study, they reportedly didn’t submit it for IRB review until after the data was collected, so they could avoid any questions about having to secure consent, as they were then working with an “existing data set” (a data set whose collection they had themselves designed the study for, without IRB approval, in violation of stated Cornell IRB policy).

      1. “This study skated IRB review.”

        Well, the Cornell IRB looked at it and decided it did not need full IRB review.

        “they reportedly didn’t submit it to IRB review until after the data was collected”

        One reason it wasn’t submitted to the Cornell IRB until after the data was collected might be that the data was collected before the researcher began thinking about writing an article.

        Cornell IRB could have done two things with the existing data set:

        1) State that informed consent was required and the data set could not be used in a scientific writeup.

        2) State that informed consent was not required, e.g., because the study “presented only minimal risks to subjects”.

        Given that A/B testing occurs as a normal part of Facebook’s operation, I would say that this particular study presents only minimal extra risk to subjects. What would you say?

        It may be worth noting to Facebook users that they are continually used as guinea pigs, PNAS publication or not. Furthermore, the statement that Facebook did not see the actual text of the messages should not lead Facebook users to believe that Zuckerberg et al. do not have access to everything you have written and uploaded. A priori, you should assume that the operations and data scientists of Internet and mobile companies can potentially read all your data, unless you are using your own encryption.

      2. Why are the Cornell faculty listed as designers of the study on the PNAS paper if they didn’t come into the picture until after the research was conducted?

  32. If you’re letting yourself become negatively influenced by your Facebook feed to the point that it becomes a psychological issue for you: take a breath, look in the mirror, remember what real life is, and stop taking social networking so damn seriously!

  33. This post is full of logical inconsistencies.

    The worst is the assumption that all types of human manipulation are equivalent – there is NO EQUIVALENCY between them. It is your choice to say that Corporate Manipulation, or that of your Boss, is the same as that of your mother or your friends. And it is the wrong choice, one that shows a serious lack of thought put into the issue.

    Is your inner world so dead that you really think that everybody operates at the same mechanistic level as a Corporation?

    Basically you must have gotten that far and then just stopped thinking critically, because you had reached a point that supported your main argument. That is not good writing, that is not good thinking, and that is not good SCIENCE. Number studies do not justify lazy thought.

    Do you, Commenters, agree with the idea that there is an Equivalency in all manipulation/coercion? That a mother trying to make a child eat healthy is the same as a Corporation trying to make them buy something they don’t need?

    ABSURD!

  34. And your stance on the use of minors (ages 13–17) in this research? I presume that, as a professor of psychology, you are aware of their status as a vulnerable population, aware that they are able to have Facebook accounts, and aware that they were not excluded from the study?

    Also, I notice that you mostly discuss the ethics of manipulating the news feed, without even addressing the ethics of the lack of informed consent or IRB oversight, even though the research was designed with university faculty. When they designed the study, they didn’t ask for IRB approval. They waited until AFTER the data were collected, so they could circumvent the informed consent issue and work with an “existing data set” – which walks a thin and ethically problematic line (one that could cost a professor a job, even a tenure-protected one).

    So do you propose that as long as research only slightly manipulates people, produces only barely statistically significant effects, and replicates what a commercial entity does anyway, it no longer requires informed consent? Have you rewritten the Belmont Principles yourself?

    1. I’ll address my thoughts on the IRB issue in a follow-up post today or tomorrow. To summarize though, at the time I wrote this post, the common understanding was that (per an interview with the Editor of the article) the researchers had obtained IRB approval. Since then, it’s emerged that they probably only got approval after the fact, and only for archival research. From a legalistic standpoint, I agree with Michelle Meyer’s analysis to the effect that the authors are probably still technically within the letter of the law (though PNAS should certainly have made them include a better description of the IRB process in the text). However, I also think it’s pretty clear that they’re violating the spirit of the law.

      More generally, as I noted in earlier comments, I think the fact that loopholes like this can exist (i.e., get a non-covered entity to do the analysis so that it’s not considered research, then ask for archival permission from the IRB) points to a gaping hole in the regulations. I’m not a fan of rules that require one to know what the intent was in an investigator’s head when the study was done, and I think what we really need is a major overhaul of human subjects regulations, such that there is a single unified set of rules that apply to both academic and industry uses of data. What that looks like is a matter for debate, of course.

      The issue of minors is also a complicated one. It’s essentially impossible to prevent minors from participating in online studies; one can ask them to only continue if they’re 18 or older, but there’s no way to enforce this. So in the general case, I think the way this gets handled in practice is the right one: we do the best we can, and then live with the fact that there will be some data from minors that is impossible to filter out. As for this specific case, I think the IRB issue muddies the water. If you follow the letter of the law, since the data was not collected for research purposes, Facebook has no reason to remove minors. One could remove them post-hoc, presumably, but it’s not clear what that would achieve, since the point of excluding minors is not that they give bad data, but that they are not old enough to consent (and Facebook clearly is not legally required to obtain users’ consent for routine A/B tests). If you follow the spirit of the law, then you might want to hold Facebook accountable for not treating the study like research up front, as they presumably could have filtered out minors then. Personally I have mixed feelings about this, but ultimately I guess I don’t see it as a big deal for the same reason I don’t think consent is required here: there is no more than minimal risk involved, since the mere fact that 13-year-olds are on Facebook already exposes them to all of the same potential risks that this particular manipulation does.

        1. But the data WAS collected for research purposes. The faculty, working with the FB researcher, designed the study and gathered the data, and THEN applied for IRB approval, purposefully circumventing university IRB rules while going through Facebook’s research protocols. They were presumably aware that there were minors in their sample. They excluded non-English speakers. They didn’t even bother to exclude minors. Or they didn’t bother to mention it.

        Because perhaps to THEM the only thing that mattered was anonymity, WHICH THEY KEPT RE-ITERATING – not human subjects as actual HUMANS on the other end of the computer.

  35. “…meaning that eliminating a substantial proportion of emotional content from a user’s feed had the monumental effect of shifting that user’s own emotional word use by two hundredths of a standard deviation”

    What you’re saying implies the manipulation had the same effect size for every user in the experimental group. But that’s crazy: what the authors report is an average effect across individuals, with some people’s emotional content shifting much more and others’ much less.

    Your use of the word “monumental” here mocks the effect as being really small. But it couldn’t have been that small for everybody, or they would not have been able to measure it in the first place.

    1. I think Tal is, if anything, overstating the size of the effect.

      N = 689,003 people. They analyzed 3 million posts containing 122 million words. That works out to an average of about 177 words (roughly 4 posts) per person, with each post containing an average of about 41 words. The effect size in these terms was the following (I had to hand-estimate this by putting the figure into PowerPoint and using the ruler – seriously, PNAS, at least make them put the actual numbers in the supplementals; a code sketch of the arithmetic appears at the end of this comment):

      A group of subjects given a more positive feed had their positive words increase from 5.24% to 5.28%. In other words, your average post of 41 words contained about 2.15 positive words before the manipulation and about 2.16 afterwards – a difference of roughly 0.016 words per post. In other words, it would take you roughly 60 posts, containing about 2,500 total words, for this manipulation to make you type 1 more positive word.

      A different group of subjects (?!) who also got a positive feed had their negative words decrease from 1.75% to 1.68%. That means that in your average post of 41 words, 0.72 of them were negative before the manipulation, and 0.69 afterwards – a difference of about 0.03. In other words, it takes you roughly 35 posts, or about 1,400 words total, before you produce one less negative word as a result of seeing positive words in your feed. Even CBT is a better therapy (I kid, clinical friends, I kid…).

      Negative feeds had effects of similar size (on two more distinct groups of subjects). Negative feeds increased negative words from 1.72% to 1.77% (0.71 negative words per post before the manipulation, 0.73 after; about 50 posts of negative feed before you post one more negative word).

      The biggest effect was that of negative feeds on positive words: 5.27% positive words before the manipulation, 5.13% after. Or 2.16 positive words per post before, 2.10 after. After about 17 posts (and 700 words), you say one less happy word if you’ve had a negative feed.

      I like “Big Data” analyses. I do them for a living. But sometimes the interpretation matters as much as the data. And given their data, they could have just as easily written a story about how shockingly small of an effect the manipulation had, and how resilient people are to massive social pressure.
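
      For anyone who wants to check or extend this back-of-envelope arithmetic, here is a minimal sketch in Python. The percentages are the hand-estimated values above (read off the published figure) and the ~41-words-per-post average comes from the same rough division, so the outputs are approximations; small differences from the rounded numbers above are just rounding.

      ```python
      # Back-of-envelope translation of the hand-estimated percentage shifts into
      # "how many posts before the shift adds up to one whole emotional word".
      # All inputs are approximations read off the paper's figure, not exact values.

      AVG_WORDS_PER_POST = 41  # ~122 million words / ~3 million posts

      conditions = {
          # condition: (word rate before manipulation, rate after), in percent of words
          "positive feed, positive words": (5.24, 5.28),
          "positive feed, negative words": (1.75, 1.68),
          "negative feed, negative words": (1.72, 1.77),
          "negative feed, positive words": (5.27, 5.13),
      }

      for label, (before_pct, after_pct) in conditions.items():
          before = AVG_WORDS_PER_POST * before_pct / 100  # emotional words per post, before
          after = AVG_WORDS_PER_POST * after_pct / 100    # emotional words per post, after
          diff = abs(after - before)                      # change per post
          posts_needed = 1 / diff                         # posts until the change totals one word
          print(f"{label}: {before:.2f} -> {after:.2f} words/post; "
                f"~{posts_needed:.0f} posts (~{posts_needed * AVG_WORDS_PER_POST:.0f} words) per one-word change")
      ```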

      1. I should say, though, that I find what Tal says about the ethics of the situation quite convincing.

      2. Very good point. As epidemiologists, we are taught to consider not only statistical significance (eek) but also the clinical relevance of the effect size. This effect is so tiny as to be barely worth reporting! The ethical procedures used for a published research study are still very dubious, though, considering PNAS’s stipulation that studies follow the Declaration of Helsinki guidelines. And I am 99% sure that our IRB, at least, would not have approved it.

    1. Excellent article, Engin. It clarifies much of what has been bothering me about the various defenses of Facebook that I have been reading. I consider many of these defenses to be rationalizations, rather than legitimate arguments grounded in ethics.
