European authorities have been champing at the bit to get social media companies to be more aggressive in targeting hate speech online. Consider the celebratory hashtag (#Brusselsisonfire) that surfaced after the March bombing in Belgium, or Twitter's announcement in February that it had deleted more than 125,000 ISIS-affiliated accounts.
Today, Google, Facebook, Microsoft and Twitter announced that they've inked an online "Code of Conduct" with the European Commission to deal with the problem. The agreement is a "self-regulatory" measure, meaning it isn't legally binding, and it follows through on an earlier promise by the commission to regulate hate online.
The code of conduct enumerates a few specific commitments that Twitter, Facebook, et al, have made to address the problem:
- The companies must have clear and accessible processes for identifying and removing hateful content, and they must review the majority of reported content within 24 hours.
- They must work more closely with "civil society organizations" (nonprofits, advocacy organizations, etc.) to target such content.
- They must train their staff "on current societal developments and to exchange views on the potential for further improvement."
Despite the agreement signed today, these companies — especially Google and Facebook — have a tense relationship with European regulators. The EU is currently pursuing antitrust charges against Google, and German regulators have begun looking into Facebook's practices.
You can read the full code of conduct here.
This article originally appeared on Recode.net.