Twitter and Facebook have both explicitly banned terrorist content this year. Yet neither company will say how it defines terrorism. Are they using the FBI’s definition in the US Code? Or something else?
Twitter announced an update to its “Twitter Rules” today, changing the language in its “violence and threats” section to forbid “threatening or promoting terrorism.” Facebook has adopted a “zero-tolerance” policy on terrorism. But what does terrorism mean to the companies banning terrorist speech?
These changes were spurred by calls to crack down on ISIS internet activity, but neither company specifically mentions the Islamic State or jihadists. Twitter and Facebook have taken steps to sweep ISIS recruiters off their platforms, but they aren’t explaining which other groups’ speech they intend to curtail.
For both companies, the bans remain so broad—no terrorism!—that they’re meaningless. Terrorism can be whatever Facebook or Twitter wants it to be. There’s no globally accepted definition of the word “terrorism.” The UN can’t agree. It’s a contentious buzzword.
Is any post praising the Islamic State automatically considered a terrorist post? What about a tweet expressing admiration for Hamas? What about a status update about how great the Communist Party of India is? Is it “promoting terrorism” to write a tweet that says “I wish the Weather Underground would come back because they were AWESOME!!!!!”?
These communications platforms are deciding which groups are “terrorists” and which are “freedom fighters.” We should know the criteria. But we don’t.
Do Facebook and Twitter adhere to the US government’s definitions of terrorist groups and terrorist activities? Do they make their own determinations?
Does this tweet promote terrorism?

[embedded tweet]

How about this one?

[embedded tweet]
By adopting opaque and extremely flexible language about who gets to speak, these platforms give themselves wiggle room to ban whatever they want under the amorphous umbrella of “terrorism.”
This is a post full of questions, because these companies aren’t forthcoming with their answers. When I asked Facebook how it defined terrorism, I got a canned response from a spokesperson:
“There is no place for terrorists on Facebook. We work aggressively to ensure that we do not have terrorists or terror groups using the site, and we also remove any content that praises or supports terrorism. We have a community of more than 1.5 billion people who are very good at letting us know when something is not right. We make it easy for them to flag content for us and they do. We have a global team responding to those reports around the clock, and we prioritize any safety-related reports for immediate review. When we find terrorist related material, we look for and remove associated violating content and accounts.”
I asked Twitter similar questions in light of its new policy. A spokesperson emphasized that the update changed the language about “hateful content,” and noted that the company had instituted its terrorism ban back in April.
I’m not saying that Twitter and Facebook should allow users to threaten violence to promote a political agenda. I am saying that controlling who speaks is a political act, and one that deserves our attention. Twitter and Facebook are frequently portrayed as neutral platforms. But they are key communication tools in areas of political oppression and political unrest. They are active players in a propaganda war.