Democrats Propose 'Section 230' Changes, Say Facebook Algorithms Cradle Violent Extremists

Mark Zuckerberg pauses while speaking at the Paley Center For Media on October 25, 2019 in New York City.
Photo: Drew Angerer (Getty Images)

A pair of Democratic lawmakers on Tuesday introduced the latest bill proposing to amend Section 230 of the Communications Decency Act on the grounds that algorithms used by social media platforms—namely, Facebook—have facilitated extremist violence across the country resulting in U.S. citizens being deprived of their constitutional rights.

The “Protecting Americans from Dangerous Algorithms Act,” authored by U.S. Representatives Anna Eshoo and Tom Malinowski, targets only companies with more than 50 million users. Companies that use “radicalizing” algorithms, the lawmakers say, should not be given immunity if they programmatically amplify content involved in cases alleging civil rights violations. The bill additionally targets algorithm-promoted content involved in acts of international terrorism.

In a statement, the lawmakers pointed specifically to a lawsuit brought last month by victims of violence during recent protests against racial injustice in Kenosha, Wisconsin. The suit, reported by BuzzFeed, accuses Facebook of abetting violence in Kenosha by “empowering right-wing militias to inflict extreme violence” and depriving the plaintiffs of their civil rights.

The suit cites Reconstruction-era statutes that the Supreme Court applied unanimously in 1971 against white defendants who had harassed and beaten a group of Black plaintiffs in Mississippi after mistaking them for civil rights organizers.

In Kenosha, a 17-year-old gunman, Kyle Rittenhouse, killed two men and wounded another after traveling across state lines with a semi-automatic weapon to confront demonstrators affiliated with the Black Lives Matter movement. Rittenhouse has been charged with six criminal counts in Wisconsin, including first-degree intentional homicide.

The civil suit, brought by, among others, the partner of one of the men Rittenhouse killed, also accuses the self-described militia group Kenosha Guard of taking part in a conspiracy to violate the plaintiffs’ constitutional rights. A Facebook event created by the Kenosha Guard, which had encouraged attendees to bring weapons, was flagged by users 455 times but was not taken down by Facebook.

In August, Facebook CEO Mark Zuckerberg labeled the company’s failure to take down the page “an operational mistake” during a companywide meeting, BuzzFeed reported.

“I was a conferee for the legislation that codified Section 230 into federal law in 1996, and I remain a steadfast supporter of the underlying principle of promoting speech online,” Congresswoman Eshoo said. “However, algorithmic promotion of content that radicalizes individuals is a new problem that necessitates Congress to update the law.”

“In other words, they feed us more fearful versions of what we fear, and more hateful versions of what we hate,” Congressman Malinowski said. “This legislation puts into place the first legal incentive these huge companies have ever felt to fix the underlying architecture of their services.”

Facebook did not respond to a request for comment.

Section 230 is one of the hottest topics in Washington right now. Passed in 1996 and known widely today as the “twenty-six words that created the internet,” the law is credited with fostering the rapid growth of internet technology in the early 2000s, most notably by extending certain legal protections to websites that host user-generated content.

More recently, lawmakers of both parties, motivated by a range of concerns and ideologies, have offered up numerous suggestions for amending Section 230. The law was intended to give companies both a shield and a sword: it ensures that they can host third-party content without exposing themselves to liability for the speech in that content, while also granting them the power to enforce community guidelines and remove harmful content without fear of legal reprisal.

Some have argued that Section 230 has been interpreted by courts far too broadly, granting companies such as Facebook legal immunity for content moderation decisions not explicitly covered by the law’s text. Others have tried using the law as a political bludgeon, claiming, falsely, that the legal immunity is preconditioned on websites remaining politically neutral. (There is no such requirement.)

Gizmodo reported exclusively last month that Facebook had been repeatedly warned about event pages advocating violence, and yet had taken no action.

Muslim Advocates, a national civil rights group involved in ongoing, years-long discussions with Facebook over its policies toward bigotry and hate speech, said it had warned Facebook about events encouraging violence no fewer than 10 times since 2015. The group’s director, Farhana Khera, personally warned Zuckerberg about the issue during a private dinner at his Palo Alto home, she said.

Facebook said in March 2019 that it was banning white nationalist organizations from its platform, but it has failed to keep that promise. In November 2019, the Guardian found that numerous white hate organizations had continued to operate on Facebook, including VDARE, an anti-immigrant website affiliated with numerous known white supremacists and anti-Semites. BuzzFeed reported this summer that Facebook had run an ad on behalf of a white supremacist group called “White Wellbeing Australia,” which railed against “white genocide.”

The company said in June it had removed nearly 200 accounts with white supremacist ties.

“Social media companies have been playing whack-a-mole trying to take down QAnon conspiracies and other extremist content, but they aren’t changing the designs of a social network that is built to amplify extremism,” Malinowski said. “Their algorithms are based on exploiting primal human emotions—fear, anger, and anxiety—to keep users glued to their screens, and thus regularly promote and recommend white supremacist, anti-Semitic, and other forms of conspiracy-oriented content.”

UC Berkeley professor Hany Farid, a senior advisor to the Counter Extremism Project, called the Eshoo-Malinowski bill “an important measure” that would “hold the technology sector accountable for irresponsibly deploying algorithms that amplify dangerous and extremist content.”

“The titans of tech have long relied on these algorithms to maximize engagement and profit at the expense of users,” he added, “and this must change.”

DISCUSSION

Why just section 230? Section 230 is entitled “Protection for private blocking and screening of offensive material.”

It may seem that 230 is a very broad law covering a wide variety of ‘offensive material’, but it really doesn’t. The only working mechanisms you’ll find in S.230 are these provisions:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

No provider or user of an interactive computer service shall be held liable on account of:

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

This is a fundamental working part of the internet. Without this law, none of us would be able to speak without Gizmodo/Kinja being held accountable. Comments, like this, would not be possible.

Modifying S.230 would only hurt the internet as a whole by hurting all hosts and preventing them from making decisions about their own webspace. If Facebook wanted to be a political site that favored Democrats or Republicans, it would have every right to, and as a result it could choose to show only stories that favor its views. Welcome to 90% of political websites.

If Congress (Dems or Reps) wanted to really make a fundamental difference in the accuracy and spread of information, they wouldn’t touch S.230 and hurt the web hosts; they would go after the users, which isn’t actually that far-fetched. Here is a good outline of the US Telecommunications code: https://www.law.cornell.edu/uscode/text/47

This part of US law covers a lot of ground, but Title 47, paired with other titles like 17, 18, and 21 (to name a few), covers much of what users can and cannot do over the internet (using common hosts): things like selling drugs, soliciting sex, libel, copyright infringement, child pornography, fraud, harassment, etc.

Instead of fiddling with S.230, other titles and sections of the US Code could be amended to cover the spread of misinformation. New topics for which there is no current US Code SHOULD be added: rules on fake news/misinformation, foreign information interference, deepfakes, hate speech, cyberbullying, revenge/nonconsensual porn, inappropriate advertisements (by age group), etc. There are a lot of hot-button issues that are completely missing from US law, and rules could be written to solve many problems and go after the people who write and publish the content, not the people who passively host it.

I’ll state again, fiddling with S.230 will do more harm than good, because it will change how we define “Private Property” and it will probably lead to less sharing of ideas and content.