
Why Intel and Vox Media Are Teaming Up to Stop Online Harassment

The next step in the diversity conversation.

The Verge

A year after pledging $300 million toward diversity initiatives, Intel is launching a new project that focuses on one element of the problem: online abuse.

The Hack Harassment initiative — launched in partnership with our sister site Re/code, our parent company Vox Media and Lady Gaga’s Born This Way Foundation — is an attempt to find solutions to Internet harassment, starting with a series of hackathons through the first half of 2016. Held both online and offline, the sessions will involve members of the tech industry, the media, the nonprofit world and academia. They’re designed to raise awareness and find potential technological solutions to harassment, which will be presented at the Code conference that starts May 31. For a problem that has inspired plenty of talk and few real solutions, the initiative is, so far, more talk — but its organizers promise that real change is coming.

Huge numbers of people have met with extreme hostility online, with women, people of color and other underrepresented groups being particularly vulnerable. According to a 2014 survey by the Pew Research Center, 40 percent of adult Internet users have personally experienced some form of harassment. “It really spawned out of our diversity discussion,” said Intel CEO Brian Krzanich. “As you kind of take the next step in diversity, you now need to make it a safe and comfortable place for those people to work in — so harassment was a natural next step to go work on.”

While about half the Pew respondents who had been harassed reported less severe behaviors like being called offensive names or purposely embarrassed, the other half had faced stalking, physical threats, sustained harassment or sexual harassment. For young users — those between 18 and 24 — the problems were especially pronounced. Seventy percent had been harassed in some way, almost a quarter had received physical threats and one in five had been sexually harassed.

And the Pew study didn’t address some specific, more extreme forms: the nonconsensual posting of nude photos or personal information, account hacking and “swatting” hoaxes. A Vox- and Intel-commissioned survey of 300 tech industry professionals found that 8 percent had some kind of experience with swatting calls, 15 percent had faced hacking attempts and 13 percent had had personal information exposed online. While both surveys broadly found that people think online harassment has negative effects, that doesn’t seem to capture the extent of the harm caused by this behavior.

“When people start to talk about topics, that’s when things begin to change.”

By far the most common venues for harassment, according to Pew’s survey, were social networking sites and apps. Two-thirds of people who had been harassed pointed to social media, while smaller numbers cited website comment sections, online gaming and email. But even when these platforms have consistent anti-harassment policies, they’ve struggled to enforce them. Twitter in particular has been singled out for its anemic response to attacks on users, especially because it’s so easy for harassers to create new accounts as soon as they’re banned.

For Re/code, Hack Harassment is a natural follow-up to last year’s Code conference, which made Silicon Valley diversity a major topic of discussion. “When people start to talk about topics, that’s when things begin to change,” said Re/code co-founder Kara Swisher. Especially after the Gamergate controversy made international news in 2014, though, it can be hard to believe that raising awareness is still necessary.

The past few years have seen companies, politicians and activists denounce harassment and put forward tentative strategies. State and national legislators are attempting to punish some of the most clearly illegal practices — members of Congress have introduced multiple anti-swatting bills, most recently in November by Rep. Katherine Clark, D-Mass., who has also pushed the Justice Department to investigate and prosecute online threats more heavily. After heavy criticism, Twitter attempted to make it easier for users to report harassment. Even Reddit, a platform famous for hands-off moderation, began banning its most notoriously vicious boards last year. An entire day of the upcoming SXSW Interactive festival is dedicated to anti-harassment talks.

“Everybody gives lip service to a lot of things and then nothing actually happens.”

But Swisher says that in the tech community, discussions often flare up around individual controversies and fade soon after, without meaningful change. “Everybody gives lip service to a lot of things and then nothing actually happens,” she said. “And the kind of stuff that happens during online harassment really damages people.” She suggests that it’s too easy for people to slip into an abstract debate that pits defenders of free speech against opponents of online harassment.

“I think one of the important things is to show people exactly what is happening instead of talking about the bigger issues,” she said. In one case, she recalls an interview in which “Girls” creator Lena Dunham described quitting Twitter because of misogynist abuse. When critics accused Dunham of being thin-skinned, Swisher responded with copies of the tweets Dunham was describing, including threats of rape and other violence. “I said ‘Okay, is this okay?’ And of course everyone was like — ‘Oh my god, I had no idea.’” People without Dunham’s privilege or visibility, meanwhile, may have their abuse outright ignored, or even be told they’ve brought it on themselves.

But Twitter, YouTube and other platforms are already taken to task so frequently that concrete solutions seem more important than awareness-raising. It’s not clear whether they’ll be participating in the hackathons, or what kind of solutions they might be willing to implement. But Hack Harassment is confident that there are technological solutions to at least some of the problems. Its early suggestions involve blocking the IP addresses of known harassers on sites and giving users more filtering tools, two options that survey participants judged effective.

Filtering has, in fact, proven one of the best solutions on Twitter so far. The crowdsourced tool Block Together lets users share lists of offenders or automatically stop seeing accounts that raise red flags, and Twitter later adopted shared block lists as an official feature. Some problems also seem to have clear technological causes — game developer and anti-harassment activist Zoe Quinn recently complained that YouTube and Facebook automatically lined her posts with links to tirades like “Zoe Quinn, a vapid idiot.” Even if there’s no surefire way to block abuse, platforms could avoid accidentally promoting it.
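The mechanics behind shared block lists are simple: subscribing to another user’s list amounts to merging their set of blocked accounts with your own, then hiding any post whose author appears in the combined set. Here is a toy illustration of that idea in Python — all names here (`Post`, `merge_block_lists`, `filter_timeline`) are hypothetical stand-ins, not any platform’s actual API:

```python
# Toy sketch of shared block-list filtering, the mechanism tools like
# Block Together popularized. Hypothetical names; no real platform API.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


def merge_block_lists(*lists: set) -> set:
    """Subscribing to shared lists is just a set union of blocked authors."""
    merged = set()
    for blocked in lists:
        merged |= blocked
    return merged


def filter_timeline(posts: list, blocked: set) -> list:
    """Hide any post whose author is on the merged block list."""
    return [p for p in posts if p.author not in blocked]


my_blocks = {"troll_a"}                 # accounts I blocked myself
shared_list = {"troll_b", "troll_c"}    # a list another user published
blocked = merge_block_lists(my_blocks, shared_list)

timeline = [Post("friend", "hi!"), Post("troll_b", "abuse")]
visible = filter_timeline(timeline, blocked)
print([p.author for p in visible])  # ['friend']
```

The appeal of this design is that the moderation burden is crowdsourced: one user’s work identifying abusers protects everyone who subscribes to their list.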

“If I can’t figure it out, the average teenager who’s getting pilloried on Facebook or Twitter or whatever has no hope of being able to deal with this except to sign off.”

Swisher believes the problem isn’t just that there aren’t enough options right now, but that the ones that exist are either hidden or hard to use. “If I can’t figure it out, the average teenager who’s getting pilloried on Facebook or Twitter or whatever has no hope of being able to deal with this except to sign off,” she said. “And that shouldn’t be the only choice you have, to sign off.”

Game studio Riot, creator of the hugely successful eSport League of Legends, has made some of the most promising breakthroughs in fighting toxic community elements. A dedicated “social systems” team finds and tests ways to make people act better online — sometimes getting results from things as simple as making voice chat opt-in instead of opt-out. Riot’s solutions don’t apply everywhere, but the studio has shown that basic structural changes can pay off.

But technological solutions aren’t the only thing that platforms have to offer, and they can only go so far. One of the most common complaints about online abuse is that even when it’s taken seriously in the world of tech and gaming, its impact is minimized or misunderstood elsewhere. The onus is on victims to gather evidence, explain basic technological concepts to police and convince friends and family that simply quitting the Internet isn’t an option — especially when, as with swatting, the problems reach into the real world. Groups like Quinn’s anti-harassment organization Crash Override have published guides to doing this, but an industry-wide initiative holds far more weight than any individual target.

Silicon Valley has become adept at raising public awareness about its chosen causes, using a combination of petitions, online tools and old-fashioned lobbying. If it can get people to understand and care about issues as seemingly dry as net neutrality and copyright law, harassment doesn’t seem like a bridge too far. Swisher believes that despite the pervasiveness of harassment, companies have an incentive to make things change. “They don’t want to create environments online that people are scared to be involved in,” she said. “They want to create a safe environment for people — and at the same time allow people to express themselves.”

Whatever happens in the next few months, Krzanich admits that it’s just the first step. “At the end of the day, I think we can make a lot of technology that can reduce the harassment levels,” he says. “But it’s going to be peer pressure — when it just becomes unacceptable societally, that’s when harassment will really change.”

https://www.youtube.com/watch?v=If1d-CwMoR8

This article originally appeared on Recode.net.