YouTube is finally banning content that promotes white supremacist views on its video platform. But it will still permit videos it classifies as “borderline,” which isn’t clearly defined, as well as a host of other problematic content.
The Google-owned platform said on Wednesday that it’s updating its hate speech policy to prohibit “videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.” The company specifically said this new policy would ban videos that promote Nazi ideology. It’s also axing content that denies well-documented violent events, such as the Holocaust and the Sandy Hook Elementary School shooting.
This is the latest development in an ongoing debate among social media companies about how to handle discriminatory and hateful content on their platforms. YouTube claims that since 2017 it has reduced views of supremacist videos by 80 percent by limiting recommendations, comments, and shares on those videos; until now, though, it has refused to remove such content altogether.
In March, Facebook banned white nationalist and white separatist content on Facebook and Instagram. Twitter is reportedly researching how white supremacists and white nationalists use its service in an effort to decide whether to allow them on the platform. Motherboard, which has been tracking the issue at Twitter closely, reported in April that Twitter is hesitant to change its policies in part because some Republican politicians might be flagged by algorithms that identify and remove supremacist content.
In January, YouTube started limiting recommendations in the US for content it considers harmful misinformation, such as videos that promote the flat Earth theory, promise a miracle cure for a disease, or spread 9/11 conspiracy theories. It claims the change reduced recommendations of such content by 50 percent. In May, it took down a video of House Speaker Nancy Pelosi that was altered so that she appeared to be drunkenly slurring her words. (Facebook, on the other hand, left it up.)
On Wednesday, YouTube said it is also continuing efforts to reduce “borderline content” and to more frequently recommend videos from more authoritative sources and voices. But what exactly “borderline” means to YouTube remains vaguely defined, which makes it likelier that questionable content, including hate speech, will remain on its platform.
YouTube has a long way to go in refining its policies and ensuring it is not propagating hate speech, supremacist views, and abuse.
Just last week, Carlos Maza, a Vox writer and host of the video series Strikethrough, tweeted a supercut of clips showing how conservative YouTube host Steven Crowder has harassed him on YouTube for two years without facing consequences, including by using racist and homophobic slurs in videos about Maza. Last fall, Maza was inundated with text messages calling for him to debate Crowder, and he has faced severe harassment online. (The Verge has complete coverage.)
Since I started working at Vox, Steven Crowder has been making video after video "debunking" Strikethrough. Every single video has included repeated, overt attacks on my sexual orientation and ethnicity. Here's a sample: pic.twitter.com/UReCcQ2Elj
— Carlos Maza (@gaywonk) May 31, 2019
After YouTube looked into the matter, it decided Crowder’s videos didn’t violate its policies severely enough to warrant removing his channel. It later said it had suspended the channel’s monetization because “a pattern of egregious actions has harmed the broader community and is against our YouTube Partner Program policies.” As The Verge lays out, that means Crowder’s videos won’t be eligible for ads through YouTube’s AdSense network, and it could also mean the channel’s content won’t be recommended.
(3/4) As an open platform, it’s crucial for us to allow everyone–from creators to journalists to late-night TV hosts–to express their opinions w/in the scope of our policies. Opinions can be deeply offensive, but if they don’t violate our policies, they’ll remain on our site.
— TeamYouTube (@TeamYouTube) June 4, 2019
Neal Mohan, the chief product officer at YouTube, spoke with Recode’s Peter Kafka in May about the challenges of regulating the content on its platform while making sure it remains a space for diverse voices to be heard. He acknowledged it’s a work in progress.
“That’s a combination of things, right? One is, does the video actually violate our policies? Are our policies drawn in the right way? We’re constantly looking at our policies, including our hate and harassment policies. The second part is are we detecting it quickly enough and are we having an enforcement action on it quickly enough? And so what I would say is that all three of those elements are evolving, and we’re not perfect,” Mohan said. “We get better every day, but we’re not perfect about them.”