In the two months since a man brutally attacked Muslims in Christchurch, New Zealand, and livestreamed his shootings on Facebook, the social media company has faced scrutiny and pressure from advocates and politicians to stop the spread of violent and hateful content.
Immediately after the attack, New Zealand Prime Minister Jacinda Ardern told Parliament, “We cannot simply sit back and accept that these platforms just exist and what is said is not the responsibility of the place where they are published.”
Australia's Parliament then passed legislation that threatens tech executives with imprisonment if they allow violent content to remain on their platforms.
That same week, Facebook CEO Mark Zuckerberg, in an ABC interview, wouldn’t commit to changing Facebook’s livestreaming in the wake of the attack, leading New Zealand privacy commissioner John Edwards to condemn the company in a series of tweets.
“Facebook cannot be trusted. They are morally bankrupt pathological liars who enable genocide (Myanmar), facilitate foreign undermining of democratic institutions,” Edwards tweeted. “[They] allow the live streaming of suicides, rapes, and murders, continue to host and publish the mosque attack video, allow advertisers to target ‘Jew haters’ and other hateful market segments, and refuse to accept any responsibility for any content or harm.”
Weeks later, Facebook is finally making some changes. On Tuesday, the company announced it is putting restrictions on its livestream video platform. “Following the horrific terrorist attacks in New Zealand, we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate,” Facebook’s vice president of integrity, Guy Rosen, wrote in a post explaining the changes.
According to Rosen, users who violate Facebook’s “most serious policies” will be barred from Facebook Live for a “set period of time,” such as 30 days.
It’s unclear which policies count as the most serious, but one is Facebook’s “dangerous organizations and individuals” policy, which forbids users from promoting terrorism, organized hate, mass murder, and human trafficking. More specifically, Rosen writes that “someone who shares a link to a statement from a terrorist group with no context” will be blocked from using Facebook Live.
Notably, the announcement indicates that violations occurring outside the livestreaming service could bar someone from using Facebook Live.
The company said it intends to expand these restrictions in the coming weeks, starting by blocking people who violate these policies from creating ads.