Representatives for Facebook, Twitter, and YouTube trekked up to Capitol Hill on Wednesday to let senators know how their personal war on terrorism is going. It turns out that things are going well, thanks for asking.
“We think we’re better prepared for this election than we’ve ever been,” Carlos Monje, Twitter’s director of public policy and philanthropy, told the members of the Senate Committee on Commerce, Science, and Transportation this afternoon.
The committee asked Monje and representatives from other social media firms to come to Washington to answer questions about terrorism, but content moderation, in general, was on everyone’s minds—minds that appear to be oddly copacetic compared to the veritable freak-out over online content from just a few months ago.
Sen. Brian Schatz, for example, wanted to know if Twitter is taking care of its fake news problem and if we can be sure that it is “going to get this right, and before the midterms.” For the record, Twitter is more prepared this time around than it’s ever been, Monje said. And that’s not surprising, because it had never really had to think about elections all that much until last year.
But the bulk of tough questions about election meddling were handled back in October when Facebook, Google, and Twitter appeared before the Senate Judiciary subcommittee. The newcomer today was YouTube, the video streaming service that has faced a firestorm of criticism for its approach to moderation for months. In her testimony, Juniper Downs, YouTube’s director of public policy and government relations, told the committee that her company has policies that prohibit “the spread of hate and incitement to violence,” and it’s getting better at using machine learning to detect “violent extremist videos.” As evidence of YouTube’s progress, Downs claimed that in June “40 percent of the videos we removed for violent extremism were identified by our algorithms.” At present, that number has grown to 98 percent, Downs said, and 70 percent of that content is taken down within eight hours of being uploaded.
It would be inaccurate to say that the senators gave anyone on the panel a tough time. Sen. Bill Nelson took up a decent chunk of time applauding social media’s ability to bring people together during tragic times, and it was a mostly genial session.
Senator John Thune did point out that one flagged video that describes how to build a bomb had been re-uploaded to YouTube numerous times, and he wondered how that’s possible. YouTube’s response was that it removed the video every time, and it’s getting better at wiping re-uploads from its site. This seemed to be good enough. And honestly, what else is there to say? YouTube obviously doesn’t want the headache of hosting content from terrorists. Logan Paul, a man who is not believed to be a terrorist, is enough of a handful as it is.
“We’re getting better,” was the theme of the day. Monika Bickert, Facebook’s head of product policy and counterterrorism, highlighted her company’s efforts to go beyond just removing a group that supports terrorism. “When we identify Pages, groups, posts, or profiles that support terrorism, we use AI to identify related material that may also support terrorism,” she told the committee. “As part of that process, we utilize a variety of signals, including whether an account is ‘friends’ with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account.”
But the elephant in the room was obviously the content that falls through the cracks and doesn’t explicitly get identified as terrorism. Several senators were curious about white supremacists on social media platforms and what’s being done to combat their activities. Arguably, white supremacists are terrorists, but they seem to thrive on social media as long as they stay away from posting violent imagery.
Senator Tammy Baldwin raised concerns about social media being used as an organizational and messaging platform for hate groups. Bickert’s response was representative of the panel: she said that Facebook’s policies “prohibit hate organizations” and that groups organized around hatred of other races are treated the same as any terrorist organization. But anyone who’s been paying attention knows that social media companies could do a lot more to target white supremacists. Their policies may prohibit known organizations like the KKK, but little is done at the individual level. If Facebook knows how to keep track of the scratches on your camera lens, it can probably recognize a swastika and hate speech.
Twitter’s big announcement today was that it’s working out how to alert users who were targeted with fake news and propaganda during the 2016 election. Without going into detail, Monje said Twitter is “working to identify and inform individually the users who have been exposed to IRA accounts during the election.” IRA is an acronym for Internet Research Agency, the Russian government’s infamous “troll farm.” Senator Richard Blumenthal applauded this decision and suggested that Facebook and YouTube follow suit.
Politicians in the EU have become increasingly comfortable with the idea of fining social media companies that fail to proactively remove terrorist content from their platforms, so it’ll likely be a priority for the foreseeable future. Here in the US, lawmakers seemed pretty chill about the whole thing. “I feel like the companies, by and large, were pretty responsive, and I think we got a better sense for the things that they’re already doing,” Senator Thune told reporters after the hearing. “I don’t know at this point it requires or necessitates any additional action.”