
Facebook's Top Content Cop on Censorship and Safety (Q&A)

Facebook policy head Monika Bickert explains Facebook's strategy.

Steve Maller/Mallermedia

On the morning of August 20, Monika Bickert, the head of global policy for Facebook, was alerted to a video showing the beheading of American journalist James Foley. It was 3:54 am, and Bickert leapt into action.

The gruesome portrayal — an act of terrorism by the militant Islamist group ISIS — had already started to proliferate across the Web, and Bickert warned Facebook’s Community Operations team to stay on alert. It was likely the video would crop up on the social network soon, and they’d have to take it down.

While the Foley video is but one extreme example, when it comes to picking out the bad apples on Facebook — inappropriate posts and photos — Bickert, 38, runs the show. A former federal criminal prosecutor, she and her team wrote the book outlining what’s allowed on the social network and, more importantly, what isn’t. In the Foley incident, Bickert’s preparation paid off: the video was expunged, and it never gained traction on Facebook.

Bickert’s team, working from five different offices around the globe, is expected to act immediately. In the past, Facebook has come under fire when certain violent videos were allowed to remain on the site. In other cases, free-speech advocates have criticized Facebook when posts were taken down. It’s Bickert’s job to walk the line between censorship and safety.

Re/code spoke with Bickert about Facebook’s content standards, ways it can improve and how it decides what’s appropriate for Facebook’s audience. The interview that follows was edited for length and clarity.

Re/code: Facebook has one Terms of Service that applies to people from cultures all over the world. How challenging is that?

Monika Bickert: It is a tremendous challenge to maintain a set of standards that meet the needs of a community as diverse as ours. The majority of people using Facebook are coming to share and connect with the people that are important to them and share very positive experiences, but because they are from different places in the world they’re going to have different ideas about what’s okay and what’s appropriate to share. You’re going to have content that makes people uncomfortable. That’s not always a bad thing. We want to make sure that people have that ability to express themselves in a safe place.

Have you ever considered trying to localize the terms of service to specific user groups?

Facebook’s really a tool for people to communicate without borders. I have friends in a number of different countries, and on a daily basis I have the experience where a friend in one country, say a friend in Norway, shares a photo, and a mutual friend of ours from France might comment on that photo, and somebody else in Thailand might “Like” the French person’s comment. So it only works if we have one set of standards for that community.

Users are encouraged to report inappropriate content to Facebook. What’s that reporting process like? What actually happens when I click the “Report” button?

You can think of our reporting system as the world’s largest neighborhood watch program. We rely heavily on people in the community to tell us when they see something that probably shouldn’t be on Facebook. You can report any piece of content. You will answer a couple short questions about what you’re seeing and why you don’t think it should be on Facebook. Then after you’ve reported it, it will be sent to a member of our operations team who will review it.


Are these reports all reviewed by humans, or do you use technology or automate the review process at all?

We use technology to help us triage reports, and we also use Microsoft’s PhotoDNA to help prevent images of child exploitation from being uploaded to the site, but human beings are responsible for reviewing content at Facebook. We take a lot of pride in that. We have people who specialize by topic area: a safety team, for example, has experts on everything from terrorism to self-harm. We also have language specialists, so if something is reported from Turkey, the person who reviews it will be a native Turkish speaker.

How many posts are reported in a given week or month? And what’s the turnaround time? I’m envisioning the world’s largest queue of reports.

With 1.3 billion people, we get a lot of reports every week. Our turnaround time varies. Speed is a priority for us, but accuracy is a higher one. We also prioritize reports where we feel there might be physical danger to a person. That means that if you’ve reported content such as child exploitation material, a threat of self-harm or a threat to harm somebody else, that report gets sent to the top of the list and we respond extremely quickly.

There have been a number of beheadings in the news over the last few weeks, and it’s something Facebook has dealt with before. When you see content of that nature, how do you decide when to pull something down and when to leave it up?

Our policy on [graphic content] has been in place for about a year now. Our basic approach is that we want Facebook to be a place where people can talk about what is important in their region and in their daily lives, and frankly, for people living in conflict zones that material is often going to be upsetting. It’s important to us to create that space, because many of the people in those regions don’t have the same access to news or means of sharing information that we have in other parts of the world. So we want to be sure we preserve that ability for people to raise awareness and share their experiences.

At the same time, we also want to be sure that when that content is shared on Facebook, it’s shared responsibly, and that means that it’s being shared to raise awareness with an appropriate audience but not being shared to celebrate or glorify violence. When we do see that material, we will prioritize it and remove it from the site if it’s being shared [for those reasons].

Is there ever a time when your team proactively seeks content to take down — for example, when the footage of the James Foley beheading was circulating — or is your strategy to wait for people to report it?

In a small subset of areas, we will use proactive tools such as Microsoft’s PhotoDNA to prevent safety-related policy violations. The classic example here is preventing images of child exploitation from being shared on Facebook. But by and large we rely on our community to tell us when they’re seeing something that’s not appropriate for Facebook.

Are there other areas besides child exploitation where Facebook is more proactive in terms of searching for inappropriate content?

In some [instances], if we find a violation, we’ll then use our special teams to do a deeper investigation into the account that was responsible for that violation. We’ll also use automated tools to try to find associated accounts or [inappropriate] content. An example of where we do this is our terrorism policies. We have taken a very strong stance against terrorism, and you can see this in our community standards, where we talk about our ban on violent organizations. We don’t allow terror groups or violent organizations to have a presence on Facebook. We also don’t allow other people to praise or support terror acts.

So if you feel you’ve found a terror organization on Facebook, you’ll shut that account down even if it hasn’t been reported?

If we were to identify a terror account through any means, we would definitely shut it down even if it had not been reported.

With the beheading video from last fall, you initially left the video up before ultimately deciding to take it down. Can you walk me through that thought process and why you changed your mind?

Our experience in dealing with graphic content on the site told us that most people are sharing it for good reasons. Most of them want to condemn the violence and bring the world’s attention to an atrocity being committed in another part of the world. That’s important speech, and we want to have space for it. Ideally, we want people to share that content with warning screens and the ability to narrowly control their audience. When we saw graphic content being shared on the site — not just the tragic beheadings we saw last year, but other graphic content as well — we felt we just didn’t have the tools in place for people to share it without surprising other people. That was when we added language to our community standards saying you have to share this content responsibly, and it matters how it is positioned.


You said you encourage people to provide a warning that content is graphic, but that’s not a tool Facebook currently offers. Is that something you plan to provide? Are you working on it?

We’re always working on creating more tools so that people can have the experience on Facebook that they want to have. We are just not there yet with that particular tool, but it’s definitely something we’re interested in.

There are other forms of graphic content on Facebook. For example, some people may find breastfeeding offensive while others see it as beautiful. How do you draw the line in terms of where Facebook steps in to be the police, and where Facebook steps back?

Let me first say that we definitely allow breastfeeding on Facebook. There has been some confusion about that externally, but for many years we’ve had a policy allowing breastfeeding on Facebook. We noticed earlier this year that the way we were communicating that policy to people was not very clear. We’ve since refined the messaging we send out and we’ve also provided additional guidance to our enforcement teams to ensure breastfeeding photos are not removed.

But you’ve raised an important issue. When it comes to nudity, you really have widely different views around the world about what’s acceptable. What somebody in Denmark thinks is appropriate would be very different from what somebody in Saudi Arabia might think is appropriate. It’s a challenge for us, and not only because we deal with a global community but because we’re dealing with review teams that have to consistently and efficiently apply policies to these pieces of content. For that reason, our nudity policies have to be blunt — more blunt than we’d like them to be — and that can unfortunately result in the removal of some photos that are innocently shared.

In addition to a potential warning tool, what else do you need to change about what Facebook is doing in this area?

One area we’re really focused on right now is improving the messaging we send to people about our policies. Most people come to Facebook for good reasons; most are trying to follow the rules. If somebody violates our policies, we need to send that person a message that provides clear guidance. I think we’ve got some room for improvement there. It’s an area we’re focusing on this year.

This article originally appeared on Recode.net.
