Facebook’s trust engineers

13 Feb

If you’re interested in digital life and social science, the latest Radiolab podcast is fascinating. They start with an astounding perspective on the scale of Facebook, use an example of a problem that led to an engineering — and social engineering — solution, and then roll into the emotion manipulation study that caused a flurry of hand-wringing (including mine) last summer. The way that the Radiolab team unfolds this story is smart and interesting, and I recommend listening before reading my comments below.

While listening, I thought of the testing we used to do when I worked at Huge Internet Company. Much like in the podcast, I saw firsthand the differences between running tests that involved tens of millions of diverse people and academic experiments conducted with a few university undergraduates. We had facilities for eye-tracking studies, focus groups, and one-on-one testing, plus we did some in-home observation and beta tests. I led projects that were user-facing and host-based; they ran on our servers, not the users' machines, and therefore could be updated on the fly. We could try a product redesign with a couple million people before doing a full rollout, or split what we showed to users for A/B testing. It was a marvelous way to learn about human behavior online. Though we weren't doing things like skewing a person's feed to have more positive or negative stories, we certainly tested different ways of naming our products and writing copy to increase attention, long before clickbait headlines. We played with color and shape and screen layout, and things as small as "OK" vs "Ok" and whether that button should be to the left or right of "Cancel" on a warning. How many online friends does the average user have, so we know how big the default messaging list should be? How do people act and feel if their list of friends is smaller or larger than the default? Knowing what we had done, it was hard not to roll my eyes later when a professor would authoritatively cite a study with a test group of 80 American 18-to-22-year-olds.
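For readers who haven't seen how that kind of split testing works under the hood: the usual trick is to bucket users deterministically by hashing their ID, so each person always lands in the same variant without storing any per-user state. This is a minimal sketch of that idea, not the actual system from my old job or Facebook's; all the names and parameters here are hypothetical.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("A", "B"), rollout=1.0):
    """Deterministically bucket a user into an experiment variant.

    Hashing (experiment + user_id) gives a stable assignment: the same
    user always sees the same variant for a given experiment, and
    different experiments shuffle users independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    if bucket >= rollout:
        return None  # user not enrolled (e.g. a 1% rollout)
    index = int(bucket / rollout * len(variants))
    return variants[min(index, len(variants) - 1)]
```

With a scheme like this, trying "OK" vs "Ok" on a couple million people is one line of serving logic, which is part of why the scale gap with an 80-undergraduate study felt so large.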

The Facebook engineers in the podcast clearly think that they are trying to help users. However, the line between help and potential harm or manipulation is thin and blurred. The social scientists in the piece are used to dealing with Institutional Review Boards (IRBs) and to being part of academic professional organizations with codes of conduct designed to protect human test subjects from even the smallest potential harm and to give them a means of redress. The mindsets of the two groups are very different, yet both want useful information and a positive outcome for all.

If we start thinking about all the ways in which organizations manipulate us, I'm not sure that Facebook rises to the "most evil" level. Yet. Brand marketing is built on convincing people to buy one product over another, and it's often done by giving a false impression or obscuring negative truths. Politics and media; need I say more? It can be argued that those groups have done us great harm, manipulating us to sell a product or an idea rather than to help us solve a problem. Facebook has the reach and ability to manipulate billions of us without our awareness, and maybe an external ethics check would be useful. An IRB model, perhaps. (If you're unfamiliar with IRBs, a simple explanation is that before research begins, it has to be approved by a board at that institution. The board reviews the research question and methods, the protections for subjects, and conformance with ethical rules. If you have been a subject in research and have a question or concern after the experiment, the IRB acts as a contact point as well.) IRBs don't move at the pace of Internet companies, though (glaciers vs. lightning, in my experience), and who would judge which companies or which changes require approval?

As a user, I hate the idea of anyone manipulating me for their own purposes, fact of life or not. As a researcher, I see the potential for harm and want ethical oversight, but I’m also excited by the knowledge we could gain from large and diverse subject groups. As a former tech manager, I know the ceaseless push for improvement and results as well as the need to quickly fix things that anger or confuse users. In the end, I’m not sure which voice in my head is the loudest.


Posted by on February 13, 2015 in Research


