Future Tense

The President and Congress Are Thinking of Changing This Important Internet Law

But they don’t understand it.

Alex Jones of Infowars outside a hearing where Google CEO Sundar Pichai testified before the House Judiciary Committee on December 11, 2018. Alex Wong/Getty Images

“No other sentence in the U.S. Code,” technology scholar David Post has written, “has been responsible for the creation of more value than” a little-known provision of the Communications Decency Act called Section 230. But in January, President Donald Trump’s technology adviser Abigail Slater suggested that Congress should consider changes to the law. It’s not crazy to consider amending the provision, no matter the trillions of dollars resting on it. But the law itself is being mischaracterized and therefore misunderstood—including by the very legislators who’d be responsible for amending it.

Section 230 has a simple, sensible goal: to free internet companies from the responsibilities of traditional publishers. Sites like Facebook and Twitter host comments and commentary that they don’t produce, edit, or even screen themselves, and Section 230 of the act ensures that those companies can’t be sued over content they host but haven’t assumed responsibility for. The law states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

But that’s not how the law’s most vocal interpreters see it. There are two primary camps here. One thinks that Section 230 demands neutrality from tech platforms as a predicate for immunity. The other thinks that the provision completely frees tech companies from responsibility for moderating content on their platforms.

Sen. Ted Cruz, who belongs to the first camp, has said, “The predicate for Section 230 immunity under the CDA is that you’re a neutral public forum”—meaning that, in Cruz’s view, a tech company’s efforts to remove alt-right content could void the company’s protection under that section. Think of Facebook’s recent decision to ban right-wing conspiracy theorist Alex Jones: By Cruz’s logic, that move could be just the sort of political slant that costs the company its Section 230 protection.

Others, like technology lawyer Cathy Gellis, say that “Section 230 is a federal statute that says that people who use the Internet are responsible for how they use it—but only those people are, and not those who provide the services that make it possible for people to use the Internet in the first place.” In other words, in her view, the law absolves tech companies of any responsibility to remove offensive content from their platforms. Think here of calls for Facebook to do more to prevent terrorist radicalization and foreign election interference: By Gellis’ logic, those are problems for people who use Facebook, not Facebook itself.

They’re both wrong. In reality, Section 230 empowers tech companies to experiment with new ways of imposing and enforcing norms on new sites of discourse, such as deleting extremist posts or suspending front accounts generated by foreign powers seeking to interfere with our elections. And with that freedom to experiment came a responsibility to do so: to find appropriate ways of setting the boundaries of acceptable discourse in the novel, messy world of online debate and discussion. The real question lawmakers now face isn’t whether companies are somehow forfeiting their protections by trying to tackle today’s online challenges. It’s whether companies are doing enough to deserve the protections that Section 230 bestows.

The Communications Decency Act was a reaction to Stratton Oakmont Inc. v. Prodigy Services Co., a landmark 1995 New York state court decision extending standards for liability, long imposed on publishers for their content, to new media. The decision found the early internet provider Prodigy susceptible to liability for allegedly defamatory posts on its message boards. Section 230 flipped that result: No longer would companies like Prodigy be held to a publisher’s standards for the posts, blogs, photos, videos, and other content that users were uploading to tech platforms with ever-increasing frequency.

Why? Some, like media law professor Frank LoMonte, now suggest that the law was intended to absolve tech companies of any responsibility for moderating their platforms—hence LoMonte’s observation that, with Section 230, “Congress elected to treat the Prodigies of the world—eventually including Facebook—as no more responsible for the acts of their users than the telephone company.” This view suggests that Section 230 was meant to put the full burden on users of Facebook, Twitter, and YouTube to self-police and, in turn, to free those sites of any responsibility for what’s uploaded to them. Legal analysts like Adam Candeub and Mark Epstein build on that view and go even further, suggesting that companies risk losing protection under Section 230 if they engage in robust moderation of content rather than adhering to strict neutrality: “Online platforms should receive immunity only if they maintain viewpoint neutrality, consistent with traditional legal norms for distributors of information.” For tech companies, this narrative has been convenient, offering an excuse when they’re accused of moving too slowly and too ineffectively to address harmful, even dangerous content posted by hostile actors ranging from ISIS to the Kremlin to vicious internet trolls.

This view of Section 230—that it was created to free companies of responsibility for moderating content on their platforms, and moreover that assuming such responsibility might cost them their protection—is now taking root among key lawmakers accusing tech companies of acting too aggressively against far-right extremist content. It’s the view espoused by Cruz, and it’s getting louder. In November, incoming Missouri Sen. Josh Hawley, a Republican, offered a similar characterization of Section 230 by suggesting that Twitter’s approach to moderation has jeopardized its immunity: “Twitter is exempt from liability as a ‘publisher’ because it is allegedly ‘a forum for a true diversity of political discourse.’ That does not appear to be accurate.”

But Section 230 is neither an excuse for failing to moderate content nor a reward for declining to do so. Indeed, its purpose was never to absolve tech companies of responsibility for moderating their platforms, nor to forbid them from doing so; it was to empower them to moderate.

It’s true that the first of Section 230’s two key provisions eliminates liability for content uploaded to tech platforms. But the second is equally important: It removes liability for tech companies’ own efforts to police their platforms. That portion of the law immunizes tech companies for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

So, even as Section 230 determined that tech companies wouldn’t face the same liability as publishers for what the companies didn’t remove, the law also protected the companies for what they did remove. This empowered them to experiment with forms of content moderation that would be most effective for the unprecedented types of content delivery vehicles they represented. That’s why Sen. Ron Wyden, a co-author of Section 230, has explained that it was intended as both a “shield” and a “sword” for tech companies, protecting them from liability for vast amounts of content for which they’re not assuming responsibility but also empowering them to do what they can to eliminate the worst of that content. Wyden has said that “because content is posted on [internet] platforms so rapidly, there’s just no way they can possibly police everything,” but he also clarified that Section 230 was intended “to make sure that internet companies could moderate their websites without getting clobbered by lawsuits.”

The idea that Section 230 absolves tech companies of any responsibility for what’s uploaded to their platforms speaks to a widespread libertarian impulse infused with optimism about new technologies: Let all speech flourish, and the best arguments will overcome the likes of terrorist extremism and white supremacist hate. That narrative seems to take the power to moderate global conversations out of the hands of the few in Silicon Valley and spread it among the many.

But these are no longer the halcyon early days of the internet. For all of the economic benefits and human connections that modern communications platforms have facilitated, they’ve also brought hate speech, trolling, paranoid conspiracies, terrorist recruitment, and foreign election interference. It’s simply too late in the day to maintain a “tweet and let tweet” ethos. The original wisdom of Section 230—provide tech companies with room to experiment, but expect them to do so responsibly, even aggressively—rings truer now than ever.

And that brings us to today’s emerging debate about Section 230. With the steady stream of content policy challenges flowing across tech platforms, the notion that Section 230 provides an excuse for tech companies not to moderate their platforms seems increasingly untenable. And the even bolder idea, propagated by the likes of Cruz and Hawley, that more aggressive moderation will cost the companies their immunity is simply an erroneous reading of the law. What we really need is a debate over whether companies are adequately using their immunity, and living up to the responsibility that comes with it, to find novel ways to moderate novel technologies. Otherwise, as Wyden has warned the companies, “If you’re not willing to use the sword, there are those who may try to take away the shield.”

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.