
Democrats Propose 'Section 230' Changes, Say Facebook Algorithms Cradle Violent Extremists

We may earn a commission from links on this page.
Mark Zuckerberg pauses while speaking at the Paley Center For Media on October 25, 2019 in New York City.
Photo: Drew Angerer (Getty Images)

A pair of Democratic lawmakers on Tuesday introduced the latest bill proposing to amend Section 230 of the Communications Decency Act on the grounds that algorithms used by social media platforms—namely, Facebook—have facilitated extremist violence across the country resulting in U.S. citizens being deprived of their constitutional rights.

The “Protecting Americans from Dangerous Algorithms Act,” authored by U.S. Representatives Anna Eshoo and Tom Malinowski, targets only companies with more than 50 million users. Companies that use “radicalizing” algorithms, the lawmakers say, should not be given immunity if they programmatically amplify content involved in cases alleging civil rights violations. The bill additionally targets algorithm-promoted content involved in acts of international terrorism.


In a statement, the lawmakers pointed specifically to a lawsuit brought last month by victims of violence during recent protests against racial injustice in Kenosha, Wisconsin. The suit, reported by BuzzFeed, accuses Facebook of abetting violence in Kenosha by “empowering right-wing militias to inflict extreme violence” and depriving the plaintiffs of their civil rights.

The suit cites Reconstruction-era statutes that the Supreme Court applied unanimously in 1971 against white defendants who had harassed and beaten a group of Black plaintiffs in Mississippi after mistaking them for civil rights organizers.


In Kenosha, a 17-year-old gunman, Kyle Rittenhouse, killed two men and wounded another after traveling across state lines with a semi-automatic weapon to confront demonstrators affiliated with the Black Lives Matter movement. Rittenhouse has been charged with six criminal counts in Wisconsin, including first-degree intentional homicide.

The civil suit, brought by, among others, the partner of one of the men Rittenhouse killed, also accuses the self-described militia group Kenosha Guard of taking part in a conspiracy to violate the plaintiffs’ constitutional rights. A Facebook event started by the Kenosha Guard, which had encouraged attendees to bring weapons, was flagged by users 455 times but was not taken down by Facebook.


In August, Facebook CEO Mark Zuckerberg labeled the company’s failure to take down the page “an operational mistake” during a companywide meeting, BuzzFeed reported.


“I was a conferee for the legislation that codified Section 230 into federal law in 1996, and I remain a steadfast supporter of the underlying principle of promoting speech online,” Congresswoman Eshoo said. “However, algorithmic promotion of content that radicalizes individuals is a new problem that requires Congress to update the law.”

“In other words, they feed us more fearful versions of what we fear, and more hateful versions of what we hate,” Congressman Malinowski said. “This legislation puts into place the first legal incentive these huge companies have ever felt to fix the underlying architecture of their services.”


Facebook did not respond to a request for comment.

Section 230 is one of the hottest topics in Washington right now. Passed in 1996 and known widely today as the “twenty-six words that created the internet,” the law is credited with fostering the rapid growth of internet technology in the early 2000s, most notably by extending certain legal protections to websites that host user-generated content.


More recently, lawmakers of both parties, motivated by a range of concerns and ideologies, have offered up numerous suggestions on ways to amend Section 230. The law was intended to ensure that companies could host third-party content without exposing themselves to liability for the speech in said content—giving them a shield—while also granting them the power to enforce community guidelines and remove harmful content without fear of legal reprisal—a sword.

Some have argued that Section 230 has been interpreted by courts far too broadly, granting companies such as Facebook legal immunity for content moderation decisions not explicitly covered by the law’s text. Others have tried using the law as a political bludgeon, claiming, falsely, that the legal immunity is preconditioned on websites remaining politically neutral. (There is no such requirement.)


Gizmodo reported exclusively last month that Facebook had been repeatedly warned about event pages advocating violence, and yet had taken no action.


Muslim Advocates, a national civil rights group involved in ongoing, years-long discussions with Facebook over its policies toward bigotry and hate speech, said it had warned Facebook about events encouraging violence no fewer than 10 times since 2015. The group’s director, Farhana Khera, personally warned Zuckerberg about the issue during a private dinner at his Palo Alto home, she said.

Facebook announced in March 2019 that it was banning white nationalist organizations from its platform, but it has failed to keep that promise. London’s Guardian newspaper found in November 2019 that numerous white hate organizations had continued their operations on Facebook, including VDARE, an anti-immigrant website affiliated with numerous known white supremacists and anti-Semites. BuzzFeed reported this summer that Facebook had run an ad on behalf of a white supremacist group called “White Wellbeing Australia,” which had railed against “white genocide.”


The company said in June it had removed nearly 200 accounts with white supremacist ties.

“Social media companies have been playing whack-a-mole trying to take down QAnon conspiracies and other extremist content, but they aren’t changing the designs of a social network that is built to amplify extremism,” Malinowski said. “Their algorithms are based on exploiting primal human emotions—fear, anger, and anxiety—to keep users glued to their screens, and thus regularly promote and recommend white supremacist, anti-Semitic, and other forms of conspiracy-oriented content.”


UC Berkeley professor Dr. Hany Farid, a senior advisor to the Counter Extremism Project, called the Eshoo-Malinowski bill “an important measure” that would “hold the technology sector accountable for irresponsibly deploying algorithms that amplify dangerous and extremist content.”

“The titans of tech have long relied on these algorithms to maximize engagement and profit at the expense of users,” he added, “and this must change.”