For the past few years, we’ve seen companies like Facebook either endlessly shrug off the misinformation problems on their platforms or issue half-assed fixes meant to keep them in check. Now, they might have a little more motivation. On Thursday, two lawmakers introduced new legislation meant to strip back these platforms’ Section 230 protections in cases where they’re caught amplifying misinformation that’s harmful to public health.
The Health Misinformation Act—which was co-sponsored by Democratic Sens. Amy Klobuchar and Ben Ray Luján—sounds great on paper. Section 230 is a foundational, albeit highly politicized, section of the Communications Decency Act that protects website owners from being held liable for certain illegal content their users post. It’s the reason some of the biggest social networks on the web, from Twitter to Facebook to Reddit, are able to survive at all. But it also shields these companies from legal ramifications when some of those posts house misinformation that ends up, y’know, getting people killed. This bill aims to strip those Section 230 protections from platforms in those specific cases.
Where it gets tricky is when you look at the details. For one thing, this bill is only meant to apply to “health misinformation” related to an ongoing public health emergency, but it doesn’t define what “health misinformation” actually looks like. That responsibility, the bill states, falls on the shoulders of the sitting Secretary of Health and Human Services. Some critics have pointed out that this means whoever ends up in the Secretary role in 2024 could define “misinformation” however they want—even if they’re (somehow) working under another president like Donald Trump. Not only that, but the bill only applies to content that’s “algorithmically amplified,” which means misinformation that appears in your feeds chronologically or through any other “neutral mechanism” isn’t covered.
There’s also the fact that Section 230 only shields platforms from liability for content that’s already unlawful, like defamatory claims against another person. Misinformation isn’t illegal (yet, anyway), which means that even if users sue Facebook or Twitter over some harmful misinformation posts, it’s unclear what grounds they’d have to actually win.
Finally, there’s the fact that Section 230 doesn’t just protect websites from legal liability for things posted by their users; it also provides a legal basis for websites to remove or otherwise moderate content the companies find problematic, like pornography, hate speech, the glorification of violence—and yes, public health misinformation.
Still, unlike some of the other recent attempts at reforming Section 230, this bill stands out for one major reason: It doesn’t appear to focus solely on major internet companies, and would thus apply to virtually all website owners, big and small—all of whom currently enjoy Section 230 protections. For good or ill, this precedent could pave the way for new laws down the line that increase the liability faced by social media giants and tiny blogs alike for content promoted through their algorithms. And that could end up changing the web in ways we can’t predict.