Supreme Court Is Putting the Future of Section 230 Protections on Its Docket

Twitter, Facebook, YouTube and more all depend on being shielded from responsibility for user content via Section 230, but that could soon change.

Many conservatives have decried Section 230 for shielding apps' content moderation decisions, but any change in the law could have unexpected consequences for the billions of accounts across social media.
Photo: TY Lim (Shutterstock)

On Monday, the Supreme Court announced nine cases it intends to hear in its upcoming term, including Reynaldo Gonzalez v. Google. The case directly questions the protections afforded by Section 230 of the 1996 Communications Decency Act, which limits the legal liability of online web hosts for the content posted by their users. That law has essentially defined what users currently understand about the internet and has served as social media companies' main shield against lawsuits from lawmakers and citizens alike. Lawyers for Google have said changes to the provisions of Section 230 could “threaten the basic organizational decisions of the modern internet.”

The case goes back to 2015, when Nohemi Gonzalez, a U.S. citizen living in Paris, was one of 130 people shot and killed during a terror attack carried out by members of the Islamic State. The family of Gonzalez sued Google and said the company promoted ISIS-centric content, spreading the militant group’s message and helping it radicalize and recruit new members. The Supreme Court has also agreed to hear a similar case tied to an appeal from Twitter, Google, and the Meta-owned Facebook, where each faces claims it failed to remove IS-related materials from its platform.

At the heart of Gonzalez is the question of whether Section 230 still shields tech companies and websites when they algorithmically “recommend” content, specifically third-party content pushed to a user’s feed. Content recommendations are a cornerstone of how the largest tech companies operate, but the case could pin responsibility for recommended user content on those companies, completely upending the way most of them currently do business.


SCOTUS had previously declined to hear a separate but similar case revolving around Section 230, but the nation’s top court often steps in when lower courts disagree. As noted in the original petition, five appeals court judges have said that Section 230 creates immunity in cases involving recommended content, while three have argued, to varying degrees, that it doesn’t.

“[Internet companies] constantly direct such recommendations, in one form or another, at virtually every adult and child in the United States who uses social media,” Gonzalez’s attorneys argue in the original April appeal. “Application of section 230 to such recommendations removes all civil liability incentives for interactive computer services to eschew recommending such harmful materials, and denies redress to victims who could have shown that those recommendations had caused their injuries, or the deaths of their loved ones.”


But lawyers for Google have argued that the company regularly takes down flagged videos, and that the Paris attacker merely happened to be active on YouTube and once appeared in an IS propaganda video. The company said its various recommendation widgets are the best way to help users “navigate the vast amount of data online.”

Supreme Court Justice Clarence Thomas previously said of Section 230: “We should consider whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by internet platforms.”


Bills Challenging Section 230 Have Been Put Forward in Congress and Statehouses

Conservatives and liberals alike have attacked Section 230, though for very different reasons. California has passed a bill designed to protect kids under 18 from tech companies’ ongoing data collection. Some pro-tech groups have said such a bill could conflict with Section 230, though other bills put forward by Republican-controlled states are much more explicit in their antagonism toward websites’ speech immunity.


At the same time Gonzalez is heading for a final showdown in the Supreme Court, conservatives in Texas and Florida are putting much of their anti-Big Tech energy behind bills meant to restrict social media companies from banning accounts or moderating user content.

Florida’s anti-deplatforming law, put on hold by the courts in 2021, was struck down by the 11th Circuit Court of Appeals earlier this year. Last month, trade groups representing Big Tech and the Florida AG petitioned for a case regarding the bill to be heard by SCOTUS. On Sept. 23, Florida Attorney General Ashley Moody submitted an appeal to the Supreme Court, arguing in a 111-page document that online spaces are “the modern town square” and that social media companies are censoring content that could be considered political speech necessary for the “marketplace of ideas.”


Of course, Florida Governor Ron DeSantis has had his own difficulties with social media. His aides have been banned from Twitter for asking supporters to “drag” a journalist who covered the presidential hopeful.

Another bill, Texas’ H.B. 20, recently found new life after the 5th Circuit Court of Appeals declared that “platforms want to eliminate speech—not promote or protect it.” This is a common right-wing talking point that several legal scholars and tech company trade groups told Gizmodo is meant to have a chilling effect on tech’s ability to moderate hate speech or cut down on disinformation online. Texas had previously put the matter to the Supreme Court, but in a 5-4 decision the justices put a hold on the bill and sent it back down to the lower courts. Florida’s appeal directly referenced the Texas decision to extol the merits of its own anti-content-moderation bill.


Both Florida and Texas’ loose definitions of content moderation and their interpretations of Section 230 could have ramifications far beyond social media companies, as pointed out by Corbin Barthold, internet policy counsel for TechFreedom, a tech-minded free enterprise think tank. Texas’ law effectively targets any platform with more than 50 million active users, which could sweep in sites like Wikipedia.


The question of so-called “censorship” in both the Florida and Texas laws has come down to interpreting the 1985 case Zauderer v. Office of Disciplinary Counsel, which allowed states to require companies to disclose factual information about their services. In a phone interview, Barthold told Gizmodo that every time SCOTUS has referenced Zauderer, the justices have limited the scope of the ruling to speech in advertising, but absent firm precedent, lower courts have applied the case to other forms of speech.

And because the 5th and 11th Circuit courts have disagreed so sharply, Barthold said the Supreme Court will likely need to take up this case as well. Whatever the court decides next on Section 230 will likely have a vast impact on any future ruling regarding social media companies’ liability for the posts that appear on their platforms, and on whether deleting any of those posts would be considered censorship.


If a company like Twitter suddenly found that it was held liable for every post on its site, the company says its options would be limited to either folding entirely or conducting far more extensive vetting and content moderation than it already does. This, of course, isn’t exactly what conservatives want. Many, like Colorado Rep. Lauren Boebert—who has previously been banned from Twitter for posting disinformation—would much prefer it if social media companies were completely restricted from wielding the ban hammer.