To find out how 2020 Democratic candidates would use their presidential powers to address different aspects of technology, we sent seven key questions to every campaign. This post includes six candidates’ answers to the second question. You can find answers to the other six questions on the landing page.
How should platforms be held responsible for misinformation or hate speech on their sites?
Bernie Sanders: [I believe] that tech giants and online platforms should not be shielded from responsibility when they knowingly allow content on their platforms that promotes and facilitates violence. Section 230 was written well before the current era of online communities, expression, and technological development, so [I] will work with experts and advocates to ensure that these large, profitable corporations are held responsible when dangerous activity occurs on their watch, while protecting the fundamental right of free speech in this country and making sure right-wing groups don’t abuse regulation to advance their agenda.
Elizabeth Warren: I condemn hate speech, and I believe we should be able to hold people accountable for their words. I also support the American people’s right to free speech, which is critical to our democracy. But the First Amendment does not protect violence that accompanies speech. When white supremacists and bigots murder, attack, or attempt to harm others, they should be held accountable to the fullest extent of the law.
Today, the same technological changes that have allowed people to more easily find each other and unite have also made it easier to incite hatred and violence — a change that our current leadership continues to exploit. Big tech companies cannot continue to hide behind free speech while profiting off of hate speech and disinformation campaigns. That’s why I’ve called out Facebook for operating as a disinformation-for-profit machine and why I’m committed to unwinding Facebook’s anti-competitive mergers and cracking down on practices that allow the company to undermine our democracy.
Pete Buttigieg: The El Paso shooting, following on so many others, has highlighted the role of online platforms in spreading hate. We must demand more of our platforms. My administration will identify online platforms and other companies that refuse to take steps to curb use by hate groups. Many platforms exist as online spaces where extremist ideology can flourish. Sites that traffic in hate and encourage or fail to moderate abuse and hate should be called out as facilitating socially harmful speech.
In addition, my administration will engage with social media and other online platforms to advance new tools and best practices, including appropriate terms of service, for limiting the spread of hateful ideology and of targeted harassment of individuals. We must treat ads on platforms with the same degree of accountability as TV and radio ads. My administration will also push for new public-private partnerships and provide greater federal funding to develop tools that identify malicious actors and behavior online — including things like the use of falsified identities, problematic use of bots, and extremist behavior.
False political advertising has no place in our democracy, whether on the airwaves or online, and should be removed. For too long, we have exempted digital ads from important political advertising rules that apply to broadcast media and radio. That must end now. As president, I will also work to close the digital ad loophole, including by requiring clear disclosure of the purchaser of online political ads and any entities they are acting on behalf of. I will also work with Congress to require large digital platforms to keep a public repository of ads, which must include granular and comprehensive targeting information that goes beyond what most platforms currently provide. It’s time that we know what messages candidates and their supporters are sending, who is paying for them, and to whom they’re being sent.
All such initiatives must take place in ways that respect free speech, privacy, and our nation’s constitutional principles. We must proceed carefully in ways that tackle the harms posed by hateful speech online while also preserving core free speech protections and avoiding incentives to excessively monitor and restrict fair and open debate.
Andrew Yang: Last year, a research study found that 6 percent of Twitter accounts identified as bots were responsible for 31 percent of the misinformation on the website, and algorithms are designed to elevate polarizing and incendiary content. We need to address these issues with the tech companies in order to combat the rise of misinformation and hate speech.
Tom Steyer: A big problem here is that social media companies are not living up to their own terms of service. If Facebook and Twitter are going to restore the public’s trust, they should take swift action to make sure that hate groups do not find a home on their websites and misinformation campaigns are not spread through their platforms.
Michael Bennet: It is time to revisit the broad immunity provided by Section 230 of the Communications Decency Act, which in many cases has shielded tech companies from accountability for misinformation and hate speech on their platforms. Section 230 may have made sense in the earliest years of the internet, but it makes little sense for a time when tech companies are some of the wealthiest and most powerful on the planet. We should modernize Section 230 to reflect current realities.