
Doing Something About the 'Impossible Problem' of Abuse in Online Games

We need to acknowledge that online harassment and toxicity are not an impossible problem, and that the problem is worth spending time on.


It’s often easier to turn a blind eye than to confront the ugliness of negative online behavior. However, online society has become an integral part of life, from conversations on Snapchat to networking on LinkedIn. As we spend more and more of our time online, we need to acknowledge that online harassment and toxicity are not an impossible problem, and that the problem is worth spending time on.

For the past three years, a team of game designers and cross-discipline scientists at Riot Games has been doing just that, combining efforts to study online behavior in its game League of Legends. It might surprise some people that a video game could shed light on what’s often seen as a hopeless cause, but with League’s highly competitive gameplay and more than 67 million players around the world giving it their all in-game, the team has uncovered a wealth of interactions that have led to remarkable insights.

Our team found that if you classified online citizens on a spectrum from negative to positive, the vast majority of negative behavior (which ranges from trash talk to non-extreme but still generally offensive language) did not originate from the persistently negative citizens; in fact, 87 percent of online toxicity came from neutral and positive citizens just having a bad day here or there.
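To make the arithmetic behind that figure concrete, here is a minimal Python sketch of attributing toxic incidents to player classes. The class names and incident counts are invented to mirror the 87 percent finding; they are not Riot’s data, and the real analysis drew on far richer behavioral histories.

```python
# Illustrative only: the player classes and incident counts below are
# invented to mirror the 87 percent finding, not taken from Riot's data.
incidents_by_class = {
    "persistently_negative": 13_000,  # players who are negative game after game
    "neutral": 61_000,                # ordinarily neutral players having a bad day
    "positive": 26_000,               # ordinarily positive players having a bad day
}

total = sum(incidents_by_class.values())
occasional = incidents_by_class["neutral"] + incidents_by_class["positive"]
print(f"Toxicity from neutral/positive citizens: {occasional / total:.0%}")  # -> 87%
```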

Given this finding, the team realized that pairing negative players against each other only creates a downward spiral of escalated negative behaviors. The answer had to be community-wide reform of cultural norms. We had to change how people thought about online society and change their expectations of what was acceptable.

But that led to a big question: How do you introduce structure and governance into a society that has never had them? The answer wasn’t as simple as abolishing anonymity. Privacy has become increasingly important online as data becomes more widely available, and numerous studies have shown that anonymity is not the strongest driver of online toxicity. While anonymity can be a catalyst, we focused on a more powerful factor: whether there are consequences (both negative and positive) for behavior.

To deliver meaningful consequences, we had to focus on the speed and clarity of feedback. At Riot, we built a system called the “Tribunal,” which automatically created “case files” of behaviors that players reported as unacceptable in the community. The system allowed players to review game data and chat logs and vote on whether the behaviors were acceptable. (Later this year, the system will also create positive “case files” so players can vote on the full spectrum of behaviors.) These cases were public, so players could see and discuss the behaviors, and the results were inspiring. The vast majority of online citizens were against hate speech of all kinds; in fact, in North America, homophobic slurs were the most rejected phrases in the English language.
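As a rough sketch of how a Tribunal-style case file might be structured, consider the following Python. The class shape, the vote format, and the two-thirds threshold are all assumptions made for illustration; Riot has not published the Tribunal’s internals.

```python
# A minimal sketch of a Tribunal-style case file and vote tally.
# The structure and the two-thirds threshold are assumptions for
# illustration, not Riot's published design.
from dataclasses import dataclass, field

@dataclass
class CaseFile:
    case_id: int
    chat_log: list[str]                              # excerpts shown to reviewers
    votes: list[bool] = field(default_factory=list)  # True = punish

    def cast_vote(self, punish: bool) -> None:
        self.votes.append(punish)

    def verdict(self, threshold: float = 2 / 3) -> str:
        """Return a verdict once reviewers have voted."""
        if not self.votes:
            return "pending"
        share = sum(self.votes) / len(self.votes)
        return "punish" if share >= threshold else "pardon"

case = CaseFile(case_id=101, chat_log=["report this troll", "uninstall the game"])
for vote in (True, True, True, False):
    case.cast_vote(vote)
print(case.verdict())  # -> "punish": 3 of 4 reviewers voted to punish
```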

It turns out that people just need a voice, a way to enact change.

One hundred million Tribunal votes later, we turned machine learning loose on the dataset to see if we could classify words and phrases in 15 different languages from negative to positive. Classifying individual words was easy, but what about more advanced linguistics, such as whether something was sarcastic or passive-aggressive? What about more positive concepts, like phrases that support conflict resolution?
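To give a flavor of the easy part, word- and phrase-level classification, here is a toy sketch using scikit-learn. The training phrases and the model choice (character n-grams feeding a logistic regression) are assumptions for illustration rather than Riot’s actual pipeline; character n-grams are one common way to cover many languages without building a tokenizer per language.

```python
# Toy sketch: classifying chat phrases from negative to positive.
# The phrases and model choice are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

phrases = ["uninstall the game", "you are garbage", "worst player ever",
           "nice play", "well played everyone", "good luck have fun"]
labels = ["negative", "negative", "negative",
          "positive", "positive", "positive"]

# Character n-grams sidestep word tokenization, which helps when the
# same model family must cover many languages.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(phrases, labels)
print(model.predict(["well played", "you are the worst"]))
```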

To tackle the more challenging problems, we wanted to collaborate with world-class labs, so we offered scientists the chance to work on these datasets and solve these problems with us. They leapt at the chance to make a difference, and the breakthroughs followed. We began to better understand collaboration between strangers, how language evolves over time, and the relationship between age and toxicity; surprisingly, there was no link between the two in online societies.

By opening our doors to the academic community, we’ve started collaborations that are redefining how this kind of research is conducted, and we hope other companies follow our lead.

In League of Legends, we’re now able to deliver feedback to players in near-real time. Every time a player “reports” another player for a negative act, the report informs the machine-learning system. Every time a player “honors” another player for a positive act, the honor likewise trains the system. As soon as we detect these behaviors in-game, we can deliver the appropriate consequence, whether it is a customized penalty or an incentive. Critically, players in the society are driving the decisions behind the machine-learning feedback system: their votes determine what is considered acceptable behavior in this online society.
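Schematically, that loop might look like the following Python sketch. The function name, scoring rule, and thresholds are hypothetical stand-ins; in the real system, the consequence is chosen by a machine-learned model trained on player votes, not a fixed counter.

```python
# Schematic sketch of the report/honor feedback loop described above.
# The scoring rule and thresholds are hypothetical stand-ins for the
# machine-learned model that actually decides consequences.
def handle_event(player_id: str, event: str, scores: dict[str, int]) -> str:
    """Update a running behavior score and return the consequence."""
    delta = {"report": -1, "honor": +1}[event]
    scores[player_id] = scores.get(player_id, 0) + delta

    if scores[player_id] <= -3:
        return "penalty: chat restriction"  # customized penalty
    if scores[player_id] >= +3:
        return "incentive: honor reward"    # positive reinforcement
    return "no action"

scores: dict[str, int] = {}
for event in ["report", "report", "report"]:
    outcome = handle_event("player42", event, scores)
print(outcome)  # -> "penalty: chat restriction" after repeated reports
```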

As a result of these governance systems changing online cultural norms, the incidence of homophobia, sexism and racism in League of Legends has fallen to a combined 2 percent of all games. Verbal abuse has dropped by more than 40 percent, and 91.6 percent of negative players change their act and never commit another offense after just one reported penalty.

These results have inspired us, because we realize that this isn’t an impossible problem after all.

In the office, I still have a copy of a letter a boy wrote me after receiving in-game feedback from his peers about his use of racial slurs: “Dr. Lyte, this is the first time someone told me that you should not say the ‘N’ word online. I am sorry and I will never say it again.” I remember forwarding this letter to the entire team, because this was the moment we realized that we had started a journey that would extend beyond games.

Is it our responsibility to make online society a better place? Of course it is, for all of us. It is our society. As we collaborate with those outside of games, we are realizing that the concepts we’re using in games can apply in any online context. We are at a pivotal point in the timeline of online platforms and societies, and it is time to make a difference.


Jeffrey “Lyte” Lin is lead game designer of social systems at Riot Games, where he is responsible for helping League of Legends build the most sportsmanlike community in online games. “Professor Doctor Lyte” and his team challenge the convention that online game communities are, and always will be, toxic environments; in fact, some of the team’s latest work suggests that the vast majority of online communities are positive or neutral. He runs experiments and data analyses, translating the results into viable game features that enhance engagement while amplifying the sportsmanlike behavior that already exists in the community. Before Riot, Lin was an experimental psychologist at Valve Software; he received his PhD in cognitive neuroscience from the University of Washington. Reach him @RiotLyte.

This article originally appeared on Recode.net.