In little more than a month, the Alphabet-owned tech giant Google will become the main face of the tech industry’s defense of how the internet currently works, for good and ill. The Supreme Court is gearing up to hear arguments that could completely redefine whether companies are liable for everything users upload to their sites.
Last year, the Supreme Court agreed to hear Gonzalez v. Google, a case that centers on Section 230 of the 1996 Communications Decency Act. That brief passage of law keeps tech companies from being held liable for the content people create on their sites and services; in effect, it stops courts from treating every website as the publisher of third-party content posted on its platform. Without that protection, tech companies would be open to far more lawsuits and even greater scrutiny.
Oral arguments in Gonzalez v. Google are set for Feb. 21.
In a response brief published Thursday, Google argued that ending 230 protections wouldn’t just make companies liable for speech that isn’t theirs; it would also force Google and other tech companies to become even more forceful in content moderation. The company argued that a lack of 230 would incentivize companies to remove “legal but controversial speech on some websites and lead other websites to close their eyes to harmful or even illegal content.”
The company said losing 230 would create a “dystopia” in which some providers face constant legal pressure to censor any kind of controversial content, while other apps and sites, facing the same stresses, simply leave up any and all content, no matter how objectionable.
More than that, Google argued that gutting 230 would return the internet to the “see-no-evil approach” of tech companies in the mid-1990s, which “risked a proliferation of pornography, hate speech, and illegality.” Of course, tech companies are still actively dealing with all those issues. Major platforms like Twitter have struggled to stem the proliferation of child sexual abuse material, and Meta’s Facebook constantly faces tough questions about what is and isn’t allowed on the platform.
Google’s second big argument is that without 230 protections, even individual users could be held liable for sharing or liking articles. The company argued that algorithm-based recommendation systems are the only way modern tech companies can possibly handle the volume of digital content published daily, so if plaintiffs can sue over how websites sort content, “the internet would devolve into a disorganized mess and a litigation minefield.”
The case stems from the 2015 terror attacks in Paris, France, which left 130 dead and many more injured. Nohemi Gonzalez, a U.S. citizen living in Paris, was killed in the attacks, and her family sued Google, arguing that YouTube was a main vehicle for radicalizing and recruiting new members to the Islamic State. The family has further argued that 230 has been stretched beyond the law’s original intent and used to shield companies from responsibility for algorithms that recommend harmful content.
Google has previously said it works to remove terrorist content and other harmful material, and it has used Section 230 protections as the basis for its defense.
Content moderation is one of the most pressing tech policy issues going into the new year. Texas and Florida have already passed laws sharply restricting how tech companies can moderate content, and legal challenges to those laws are also headed toward the Supreme Court.