A lot of people are up in arms about a research study that was published in the latest issue of the Proceedings of the National Academy of Sciences (PNAS).
I’m talking about the Facebook social contagion study.
But Facebook didn’t do anything wrong.
And neither did PNAS by publishing it. (Although they might have violated their own editorial standards, and I wouldn’t bet against it. More on that below.)
First, all internet companies run experiments on their users. It’s how internet products get better. We try things, we measure results, we iterate. It’s that simple.
Some companies are more sophisticated at this than others.
At your company, you might make a change and look at whether usage went up or down. At another company, they might A/B test all of their changes. And at companies like Google and Facebook, they are running experiments on dozens (if not hundreds) of changes at once.
Regardless of the sophistication, it is all experimentation.
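To make that concrete, here is a bare-bones sketch of what an A/B test looks like under the hood. This is purely illustrative; the hashing trick, the metric, and the experiment name are stand-ins I made up for the example, not anything Google or Facebook actually runs.

```python
# Illustrative only: a minimal A/B test, not any company's real system.
import hashlib
import random
from statistics import mean

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant by hashing their ID."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def run_experiment(user_ids, experiment, measure):
    """Split users into buckets, measure each one, and compare the averages."""
    results = {"control": [], "treatment": []}
    for uid in user_ids:
        variant = assign_variant(uid, experiment)
        results[variant].append(measure(uid, variant))
    return {variant: mean(values) for variant, values in results.items()}

if __name__ == "__main__":
    # Fake engagement metric: pretend the treatment nudges usage up slightly.
    def fake_engagement(uid, variant):
        return random.gauss(10.0, 2.0) + (0.3 if variant == "treatment" else 0.0)

    users = [f"user_{i}" for i in range(10_000)]
    print(run_experiment(users, "new_button_color", fake_engagement))
```

That's the whole idea: try a change on part of your audience, measure, compare, iterate.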
But Facebook Didn’t Run This Experiment To Improve Their Product
Baloney! Sure they did.
Did you read the study? You can find it here.
Facebook omitted a percentage of stories (from 10% to 90%, depending on the user) that were either positive or negative (depending on which group the user fell into), and then measured what impact this change had on those users’ status updates.
To evaluate the users’ status updates, they looked at the percentage of positive words and the percentage of negative words. To be clear, they used an algorithm to count positive or negative words. The researchers did not read status updates.
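To make the mechanics concrete, here is a rough sketch of the two pieces described above: dropping a fraction of emotional posts from a feed, and scoring status updates by the share of positive and negative words they contain. The word lists, the omission rate, and the function names are all my own invention for illustration; the researchers used an established automated word-categorization tool, not a toy list like this.

```python
# A rough sketch of the study's two mechanics, with made-up word lists
# and a made-up omission rate. Not the researchers' actual code or tool.
import random

POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}   # hypothetical
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible", "angry"}        # hypothetical

def emotion_percentages(status_update: str) -> dict:
    """Return the share of positive and negative words in a status update."""
    words = status_update.lower().split()
    if not words:
        return {"positive": 0.0, "negative": 0.0}
    pos = sum(w.strip(".,!?") in POSITIVE_WORDS for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words)
    return {"positive": pos / len(words), "negative": neg / len(words)}

def filter_feed(posts, target="positive", omission_rate=0.5):
    """Drop a fraction of posts that contain words of the targeted emotion."""
    kept = []
    for post in posts:
        scores = emotion_percentages(post)
        if scores[target] > 0 and random.random() < omission_rate:
            continue  # omit this emotional post from the feed
        kept.append(post)
    return kept
```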
Here’s what they found:
- When they reduced emotional posts (either negative or positive) in the newsfeed, people produced fewer words in their own status updates.
- When they reduced negative posts in the newsfeed, the % of negative words in the user’s own status updates decreased and the % of positive words increased.
- Similarly, when they reduced positive posts in the newsfeed, the % of positive words in the user’s own status updates decreased, and the % of negative words increased.
Now, you can’t argue that this doesn’t impact Facebook’s product.
Removing posts that had positive or negative words reduced the number of words people produced in their own status updates. If people don’t update their statuses, Facebook has no product.
Facebook Is In the Emotion Business
The other findings are also important to Facebook.
If you have a positive experience on Facebook you are likely to return. If you have a negative experience, you are less likely to return.
This isn’t just about how easy or difficult the site is to use. It’s about the value you get out of the site.
If your friends make you happy, arguably, you get more value out of it.
But there’s more to it than that.
The “Alone Together” Syndrome
We’ve all heard the criticisms. Reading Facebook makes us unhappy.
Your friends all lead glamorous lives, taking grand vacations, their kids are always smiling, and they love their jobs.
Everything is perfect.
And this leaves you wondering why your own life is not so perfect.
I’ve heard this criticism countless times. And I’m sure so has Facebook.
If I worked there, I would want to know if this is true.
Are they making us worse off?
Or are they truly a “social utility” like they claim to be?
This study is a step in the right direction to better understand this.
And Finally, The Issue of Informed Consent
Much of the backlash against this study argues that Facebook didn’t get informed consent from its users.
This is a gray area at best. The PNAS article claims that Facebook’s Data Usage Policy acts as informed consent.
And Facebook’s Data Usage Policy, in surprisingly easy-to-read language, does state that one of the ways they will use the information you share on Facebook will be (emphasis mine):
“for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”
This means that Facebook stayed well within the bounds of its own terms of service and data usage policy, so technically it didn’t do anything wrong.
But this doesn’t necessarily meet the requirements of informed consent.
Informed consent requires that participants are informed of the purpose of the study, the procedures involved, and the alternatives to participation, among other things. And it’s hard to argue that Facebook’s participants were informed of any of this.
However, informed consent is not required by law. It’s an ethical standard required for academic publication, usually enforced by Institutional Review Boards to ensure that no harm is done to participants. And it isn’t always required even then: there are exceptions, such as when informing participants would interfere with the research and the risk of harm is low. I could see both of those conditions being argued in this case.
But let’s be clear. Regardless of whether or not this study meets the criteria of informed consent, tech companies run studies like this all the time. I know of no tech company (not one) that gets human subjects approval or informed consent for its A/B tests. If this were a requirement, the internet would not exist as we know it today. The only reason this is even a question is because PNAS published the study.
Should PNAS Have Published The Study?
PNAS’ own editorial policies state (emphasis theirs):
“Research involving Human and Animal Participants and Clinical Trials must have been approved by the author’s institutional review board.”
And:
“For experiments involving human participants, authors must also include a statement confirming that informed consent was obtained from all participants. All experiments must have been conducted according to the principles expressed in the Declaration of Helsinki.”
The first author is from the Facebook data science team. The second and third authors are affiliated with UCSF and Cornell University respectively. If this study was approved through one of those two universities’ Institutional Review Boards, then this gripe about informed consent is baseless.
If it wasn’t, I suppose you could make the argument that this study violates PNAS’ own editorial standards.
This Forbes article links to an image of an email from one of the article’s editors that does claim the study went through a university Institutional Review Board.
If the study didn’t go through an institutional review board, then I suppose you could argue that it was questionable for PNAS to publish it at all. But given the reputation of PNAS, I strongly suspect they don’t believe this study violates their editorial standards. And given the backlash, I’d love to see a response from their editor. Maybe we’ll see one in the next issue.
I’m Glad Facebook Published This Study
I’m the last person to trivialize the need for informed consent or human subject review boards. As an undergrad, I took Psych 1 from Philip Zimbardo, the man responsible for the Stanford Prison Experiment, the experiment that went so horribly wrong that it not only ended early but also contributed to the human subjects review process we have today.
Experiments can go wrong. Ethics matter.
But I believe the response to this study has been blown out of proportion.
First, Facebook has always determined through an algorithm which stories to show you in their newsfeed. I imagine they are running countless experiments every day to optimize your newsfeed. The only thing that makes this study notable is that they published the results.
Second, the argument that this study is manipulative is also overblown. This pales in comparison to the billions of dollars in advertising you are subjected to on a daily basis. Does that make it right? Not necessarily. But why is it wrong?
I suspect most people are reacting to the idea that their emotions are being controlled. Again, advertising is a much more overt example of this. But even putting that aside, how is this any different from a movie that makes you laugh or a novel that makes you cry?
And let’s remember. Facebook isn’t showing you subliminal messages. They aren’t populating your newsfeed with evil thoughts. They are merely selecting which of your friends’ posts to show you.
Nobody is forcing you to use Facebook. Nobody is forcing you to read your newsfeed. Facebook needs a mechanism for determining which stories to show you in your newsfeed. You are never going to agree with all of their choices. This isn’t manipulation. It’s problem solving.
Oh yeah, and one other thing. The effect of these changes on your status updates was actually teeny tiny. Read the study for the actual numbers.
Third, I’d rather Facebook do this research and understand the impact they are having on the world than have them just assume that the product they are building is doing good.
And fourth, Facebook didn’t have to share these study results. I’m really glad they did. It’s rare for a tech company to reveal the results from their experimentation. In this case, we all get to learn from Facebook’s experience.
This Does Raise Ethical Questions
And just to be clear, I do think there is a lot of gray area here.
I do think that as the internet produces more and more data about people, more companies are going to want to do this type of research. And we will run into gray areas. Companies should be cautious. And we should hold companies to high ethical standards.
I also think we are going to get some pretty amazing research findings out of this type of data and I’m pretty excited about that.
So before we all overreact, let’s take a deep breath and look at what’s really going on here. This study wasn’t nefarious. It wasn’t designed to manipulate. It was designed to answer a question: do emotional stories in the newsfeed impact the emotion in users’ status updates? That’s it. That’s a perfectly fair question to ask.
Journalists often do us a disservice by sensationalizing things. In my mind, this is just another example of that. Whether you agree or disagree, I’d love to hear your viewpoint in the comments.