Finally, an Interesting Proposal for Section 230 Reform
By the end of last year, there were few better symbols of bad-faith politics than Section 230 of the Communications Decency Act, the law that gives online platforms legal immunity for user-generated content. After a fairly sleepy existence since its passage in 1996, Section 230 turned into an unlikely rallying cry for a subset of Republican politicians who disingenuously blamed it for letting social media platforms discriminate against conservatives. (In fact, the law has nothing to do with partisan balance, and if anything allows platforms to keep more right-wing content up than they otherwise would.) Down the home stretch of his reelection campaign, Donald Trump began dropping Section 230 references into his stump speeches. The whole thing culminated with a pair of depressing Senate hearings that, while nominally about Section 230, were little more than PR stunts designed for Ted Cruz to get clips of himself berating Twitter CEO Jack Dorsey. Senate Democrats didn’t quite cover themselves in glory either.
So it’s a bit of a surprise to see a legislative proposal on Section 230 that thoughtfully, if imperfectly, addresses some of the most glaring problems with the law. The SAFE TECH Act, a bill announced on Friday morning by Democratic senators Mark Warner, Mazie Hirono, and Amy Klobuchar, is an encouraging sign that members of Congress are paying attention to the smartest critiques of Section 230 and trying to craft appropriate solutions.
First, a brief refresher is in order. Section 230 was passed in 1996 in order to encourage interactive platforms on the nascent internet—message boards, at the time—to self-moderate. The first part of the law says that “interactive computer services” are not legally liable for user-generated content. The second part says that they are free to moderate that content without becoming liable for it. This solved a dilemma created by earlier court rulings, under which a company that proactively monitored harmful content put itself at greater legal risk than one that did nothing.
In recent years, the law has occasioned quite a bit of debate. Section 230’s defenders credit it with enabling the rise of the modern internet. They argue that interactive websites would be unimaginable without it, crushed under the threat of lawsuits from anyone offended by a comment, post, or customer review. The law’s detractors counter that Section 230 lets companies like Facebook and YouTube, along with shadier bottom-dwellers, profit off of hosting harmful content without having to bear the costs of cleaning it up.
Some of the questions raised in this debate are difficult to answer. But some are pretty easy. That’s because judges have interpreted Section 230 immunity so broadly that it has led to legal outcomes that seem obviously perverse. Today, Section 230 protects gossip sites that actively encourage users to submit nasty rumors and even revenge porn, essentially legalizing a harassment-based business model. Until Congress recently intervened, it protected sites like Backpage that were set up to facilitate prostitution. It lets companies off the hook even when they have been made aware that they are being used to inflict harm on people. In one now-notorious case, a man’s ex-boyfriend impersonated him on Grindr, the popular gay dating app, sending a stream of men to his home and work addresses looking for sex. Grindr ignored the victim’s pleas to do something about it. After the victim sued, a federal judge ruled that Section 230 protected Grindr from any responsibility.
The law is even applied to commercial transactions whose consequences are felt in the physical world. In 2012, a Wisconsin man murdered his wife and two of her coworkers using a gun he had bought from Armslist, a “firearms marketplace.” Because he was subject to a restraining order, he was legally prohibited from owning a gun. Armslist allowed him to get around that. The murdered woman’s daughter sued, and the Wisconsin Supreme Court eventually ruled that Section 230 made Armslist immune, because the ad for the gun was posted by a user.