Questioned over Facebook’s efforts to stop the spread of false information in the aftermath of the 2017 Las Vegas shooting, a senior Facebook official called to testify before the House Committee on Homeland Security on Wednesday said the company has since improved the way it handles viral acts of bloodshed and terrorism.
As evidence, the official pointed to actions taken by Facebook this spring during the Christchurch terrorist attack—a massacre of more than 50 Muslim worshipers in New Zealand that was streamed in real-time over the internet by a shooter using Facebook Live.
“Our systems didn’t work the way we wanted them to after Las Vegas,” said Monika Bickert, head of global policy management at Facebook, when she was pressed about the spread of misinformation and extremist content by Rep. Dina Titus, a Democrat whose congressional district occupies most of Las Vegas.
In October 2017, an individual armed with two dozen guns fatally shot 59 festivalgoers from a window on the 32nd floor of Las Vegas’ Mandalay Bay hotel, in what is considered the deadliest mass shooting in U.S. history. On Facebook, the shooter was misidentified in viral posts that continued to spread well after the real shooter was identified by police.
Accused of harboring hate and serving as a gateway to violence, Facebook and other major social networks are scrambling to devise new methods to stem the spread of extremist content. Earlier this month, YouTube announced plans to purge the site of thousands of videos and channels advocating hatred of individuals because of their race or religion. Facebook wants its platform to be a “hostile environment” for terrorists, a company official told reporters last month.
While the social giants often speak of the immense technical challenges involved in such undertakings, Facebook and YouTube have both faced allegations that executives simply ignored the deadly consequences of their failed moderation policies in pursuit of greater profit and growth. YouTube, especially, has been accused of knowingly radicalizing users through recommendation algorithms that encourage them to watch progressively more “edgy,” hate-filled, and caustic videos.
On Wednesday, Titus recalled how the Las Vegas shooting was followed by a wave of hoaxes, conspiracy theories, and misinformation, including widely shared Facebook posts that misidentified the gunman and his religious affiliation. “They peddled false information claiming the shooter was associated with some kind of anti-Trump army,” she said.
On Facebook’s “Safety Check” page, where users are encouraged to “connect with friends and family to find help after a crisis,” a blog listed as a top story identified the shooter as a woman who was described as a “Trump-hating Rachel Maddow fan.” Clark County Sheriff Joe Lombardo later told the press that the shooter—a 64-year-old man—was “happy with Trump” because of the stock market.
Seated alongside Google and Twitter officials on Capitol Hill, Bickert testified that Facebook’s crisis response had “gotten better” since the 2017 shooting, pointing specifically to Facebook’s response to the Christchurch terrorist attack this March.
“With Christchurch, you had these companies at the table and others communicating real-time, sharing with one another URLs, new versions of the video of the attack,” she said, noting that Facebook stopped 1.2 million versions of the video from being uploaded to the platform.
Facebook said in March that it had removed 1.5 million videos of the attack globally. Its systems, in other words, failed to prevent some 300,000 videos—or roughly 20 percent of them—from reaching users before they were taken down.
“So we’ve gotten a lot better technically,” Bickert told the committee.
The Christchurch massacre was streamed live on Facebook and viewed about 4,000 times before it was removed. Copies and edited versions of the video subsequently spread across the site and onto Twitter, YouTube, and other platforms.
“One of the things we saw after Christchurch that was concerning was people uploading content to prove the event had happened,” said Nick Pickles, senior strategist of public policy at Twitter.
Derek Slater, global director of information policy at Google, said it was YouTube’s policy to “raise up” authoritative sources of information. “And we will also seek to reduce exposure to content that is harmful misinformation including conspiracies and the like.”
Last week, Bloomberg reported that four Google employees privately admitted that they won’t allow their own children to watch YouTube unsupervised. The sentiment, they claimed, is widespread at the company. “One of these people said frustration with YouTube has grown so much that some have suggested the division spin off altogether to preserve Google’s brand,” the report said.