
Who Should Decide?

In the aftermath of the assault on the Capitol, Jillian C. York, author of the forthcoming Silicon Values: The Future of Free Speech Under Surveillance Capitalism, argues that users, not tech executives, should decide what constitutes free speech online.

12 January 2021

On January 7, following the violent white supremacist riots that breached the US Capitol, Twitter and Facebook both suspended President Donald Trump from their platforms. The next day, Twitter made its suspension permanent. Many praised the decision for preventing the president from doing more harm at a time when his adherents are taking cues from his false claims that the election was rigged. Republicans criticized it as a violation of Trump’s free speech.

[book-strip index="1" style="buy"]

It wasn’t. Just as Trump has the First Amendment right to spew deranged nonsense, so too do tech companies have the First Amendment right to remove that content. While some pundits have called the decision unprecedented—or “a turning point for the battle for control over digital speech,” as Edward Snowden tweeted—it’s not. Not at all. Not only do Twitter and Facebook regularly remove all types of protected expression, but Trump’s case isn’t even the first time the platforms have removed a major political figure.

Following reports of genocide in Myanmar, Facebook banned the country’s top general and other military leaders who were using the platform to foment hate. The company also bans Hezbollah from its platform because of its status as a US-designated foreign terror organization, despite the fact that the party holds seats in Lebanon’s parliament. And it bans leaders in countries under US sanctions.

At the same time, both Facebook and Twitter have stuck to the tenet that content posted by elected officials deserves more protection than material from ordinary individuals, thus giving politicians’ speech more power than that of the people. This position is at odds with plenty of evidence that hateful speech from public figures has a greater impact than similar speech from ordinary users. 

Clearly, though, these policies aren’t applied evenly around the world. After all, Trump is far from the only world leader using these platforms to foment unrest. One need only look to the BJP, the party of India’s Prime Minister Narendra Modi, for more examples.

Though there are certainly short-term benefits—and plenty of satisfaction—to be had from banning Trump, the decision (and those that came before it) raises more foundational questions about speech. Who should have the right to decide what we can and can’t say? What does it mean when a corporation can censor a government official?

Facebook’s policy staff, and Mark Zuckerberg in particular, have for years shown themselves to be poor judges of what is or isn’t appropriate expression. From the platform’s ban on breasts to its tendency to suspend users for speaking back against hate speech, or its total failure to remove calls for violence in Myanmar, India, and elsewhere, there’s simply no reason to trust Zuckerberg and other tech leaders to get these big decisions right.

Repealing Section 230 isn’t the answer

To remedy these concerns, some are calling for more regulation. In recent months, demands to repeal or amend Section 230—the law that protects companies from liability for the decisions they make about the content they host—have abounded from both sides of the aisle, despite serious misrepresentations of how the law actually works from politicians who should know better.

The thing is, repealing Section 230 would probably not have forced Facebook or Twitter to remove Trump’s tweets, nor would it prevent companies from removing content they find disagreeable, whether that content is pornography or the unhinged rantings of Trump. It is companies’ First Amendment rights that enable them to curate their platforms as they see fit.

Instead, repealing Section 230 would hinder competitors to Facebook and the other tech giants, and place a greater risk of liability on platforms for what they choose to host. For instance, without Section 230, Facebook’s lawyers could decide that hosting anti-fascist content is too risky in light of the Trump administration’s attacks on antifa.

This is not a far-fetched scenario: Platforms already restrict most content that could be even loosely connected to foreign terrorist organizations, for fear that material-support statutes could make them liable. Evidence of war crimes in Syria and vital counter-speech against terrorist organizations abroad have been removed as a result. Similarly, platforms have come under fire for blocking any content seemingly connected to countries under US sanctions. In one particularly absurd example, Etsy banned a handmade doll, made in America, because the listing contained the word “Persian.”

It’s not difficult to see how ratcheting up platform liability could cause even more vital speech to be removed by corporations whose sole interest is not in “connecting the world” but in profiting from it.

Platforms needn’t be neutral, but they must play fair

Despite what Senator Ted Cruz keeps repeating, there is nothing requiring these platforms to be neutral, nor should there be. If Facebook wants to boot Trump—or photos of breastfeeding mothers—that’s the company’s prerogative. The problem is not that Facebook has the right to do so, but that—owing to its acquisitions and unhindered growth—its users have virtually nowhere else to go and are stuck dealing with increasingly problematic rules and automated content moderation.

The answer is not to repeal Section 230 (which, again, would hinder competition) but to create the conditions for more competition. This is where the Biden administration should focus its attention in the coming months. And those efforts must include reaching out to content moderation experts from advocacy and academia to understand the range of problems faced by users worldwide, rather than simply focusing on the debate inside the US.

As for platforms, they know what they need to do, because civil society has told them for years. They must be more transparent and ensure that users have the right to remedy when wrong decisions are made. The Santa Clara Principles on Transparency and Accountability in Content Moderation—endorsed in 2019 by most major platforms but adhered to by only one (Reddit)—offer minimum standards for companies on these measures. Platforms should also stick to their existing commitments to responsible decision-making. Most important, they should ensure that the decisions they make about speech are in line with global human rights standards, rather than making the rules up as they go.

Reasonable people can disagree on whether the act of banning Trump from these platforms was the right one, but if we want to ensure that platforms make better decisions in the future, we mustn’t look to quick fixes.

Jillian C. York is the Director for International Freedom of Expression at the Electronic Frontier Foundation. She is also a founding member of the feminist collective Deep Lab and a fellow at the Centre for Internet & Human Rights. She was named by Foreign Policy as one of the top 100 intellectuals on social media. She has written for the Guardian, Al Jazeera, and Vice. She is based in Berlin. You can pre-order her forthcoming SILICON VALUES: THE FUTURE OF FREE SPEECH UNDER SURVEILLANCE CAPITALISM here.

[book-strip index="2" style="display"]

This post was originally published by MIT Technology Review here.

Filed under: censorship, freedom-of-expression, trump, twitter, us-politics