Britain is scheduled to crash out of the European Union on Friday, potentially causing severe economic damage. But that hasn’t stopped the British government from attending to other issues. Like proposing new rules that could make social media companies legally responsible for some content uploaded by users.
The British government released a paper this morning outlining its concerns about “online harms” and expressing displeasure that “voluntary efforts have not led to adequate or consistent steps to protect British citizens online.” The paper argues that tech companies like Facebook, Twitter, and Google need to do more to ensure that “harmful” content doesn’t make its way to their platforms.
Parts of the paper were leaked last week, but this is the public’s first proper look at the kind of crackdown on social media companies that has become increasingly popular with governments around the world, especially in the wake of last month’s Christchurch massacre, in which a white supremacist terrorist killed 50 people in New Zealand and broadcast his attacks on Facebook.
“For too long these companies have not done enough to protect users, especially children and young people, from harmful content,” Prime Minister Theresa May said in a statement to CNN. “That is not good enough, and it is time to do things differently.”
The new government paper proposes that a UK regulatory body be set up to oversee online safety, though Prime Minister May’s government hasn’t decided whether to give that responsibility to an existing organization or to create a new government body. The paper will be revisited by the government in 12 weeks’ time, assuming that Britain is still a thing by then.
One of the big issues addressed in the paper is what responsibility technology companies have to ensure that dangerous content, including everything from child pornography to terrorist recruitment materials, is restricted. American law, under Section 230 of the Communications Decency Act, says that tech companies generally aren’t responsible for the content that’s posted on their sites; they only have to remove illegal content once they’re made aware of it. The UK has followed a similar tradition over the years, but that could soon change.
The paper explains that social media companies in the UK currently have no responsibility to ensure that “harmful” content isn’t on their platforms until they’re made aware of its existence. The new proposal seeks to change that.
From the paper:
[C]ompanies will be required to ensure that they have effective and proportionate processes and governance in place to reduce the risk of illegal and harmful activity on their platforms, as well as to take appropriate and proportionate action when issues arise. The new regulatory regime will also ensure effective oversight of the take-down of illegal content, and will introduce specific monitoring requirements for tightly defined categories of illegal content.
The proposal claims that harm is occurring on both public and private channels of communication, but acknowledges that it’s difficult to police private channels. The proposal explains that “any requirements to scan or monitor content for tightly defined categories of illegal content will not apply to private channels.”
One of the most alarming sections for free speech advocates will likely be the one on “disinformation.” The proposal includes measures that would “require users not to misrepresent their identity on social media in order to disseminate or amplify disinformation” and would aim at “maintaining a news environment where accurate content can prevail.” Obviously, both of those goals are highly subjective, and the former would require some kind of identity verification regime to enforce.
But governments around the world are increasingly frustrated by inaction in Silicon Valley, and some are willing to suppress lawful speech in order to shift the balance of power. In Singapore, for example, the government has moved to outlaw “fake news” and allow officials to demand the removal of speech it deems incorrect. The big tech companies have called that plan an “overreach” that threatens free speech.
And even the United States is seeing a major backlash against speech online. Republican Congressman Devin Nunes recently filed a $250 million lawsuit against both Twitter the company and Twitter parody accounts for content that he claims is “abusive, hateful and defamatory.”
Currently, content that qualifies as parody is protected speech in the U.S. but that could easily change under the Trump regime now that the courts have been packed with increasingly conservative judges. President Donald Trump himself has called for action against both social media companies and the news media in general, which he calls the enemy of the people.
But President Trump’s increasingly authoritarian rhetoric barely makes the nightly news anymore. And it seems the U.S. could easily be next to impose some kind of speech guidelines on tech platforms. Britain is just one of many countries looking hard at companies like Facebook.
“The era of self-regulation for online companies is over,” Jeremy Wright, Britain’s digital secretary, said in a statement to the New York Times.
“Technology can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However, those that fail to do this will face tough action.”