Facebook, YouTube, Twitter, and other big internet companies may soon have to tell you exactly how they’re managing fake news.
On Friday, European lawmakers got closer to finalizing and passing the Digital Services Act, a law focused on boosting the enforcement and transparency of big tech’s content moderation, according to a report by The New York Times. The legislation coincides with the recently agreed-upon Digital Markets Act, which aims to combat big tech’s anti-competitive practices, such as monopolization of the market.
“The two proposals serve one purpose: to make sure that we, as users, have access to a wide choice of safe products and services online,” said Margrethe Vestager, a Danish European Commissioner, in a 2020 press statement about the pair of policies.
The final text of the new Digital Services Act hasn’t yet been released, and likely won’t be for at least a few more weeks. However, the key goals of the proposed law include mandatory transparency reporting about website recommendation algorithms and company efforts to combat misinformation, as well as big changes in how advertisements can be targeted.
If passed, the law would require big companies to produce public, annual reports on content filtering and recommendation policies and practices on their sites. And the tech giants would no longer be allowed to target ads to users based on race, religion, political views, or union membership. Another aspect of the legislation aims to curb sales of illegal items through massive online marketplaces like Amazon, making the mega-retailer and its sellers subject to consumer protection laws.
As of this writing, Meta and TikTok had not responded to Gizmodo’s request for comment. Twitter said it had no statement. Amazon directed Gizmodo to a 2021 blog post on its website.
“The Digital Services Act significantly improves the mechanisms for the removal of illegal content and for the effective protection of users’ fundamental rights online,” reads the European Union website, outlining the policy.
If the law passes, it would go into effect next year, and it could have global reverberations in how tech companies manage the content on their sites. Lawmakers are also hoping it could serve as a model for other countries like India and Japan, according to The Times.
Or it could be another flop, like the E.U.’s General Data Protection Regulation, which some predicted would fundamentally shift online privacy protection worldwide and instead basically just gave us those insufferable cookie permission pop-ups.
However, unlike the GDPR, which left enforcement up to individual nations, the Digital Services Act and Digital Markets Act would both be enforced by the centralized European Commission, based in Brussels, according to The New York Times. It would still be up to individual countries, though, to define what sort of content is and isn’t allowed on online platforms: in other words, what counts as free speech and what counts as hate speech.
A similar bill, the Digital Services Oversight and Safety Act, has been introduced in the U.S. House; it would mandate more transparency and reporting from tech companies about how they moderate content. But in the U.S., the First Amendment and Section 230 of the Communications Decency Act shield tech companies from liability for what users say on their platforms. The yet-to-be-passed E.U. law aims to do just that: make tech companies responsible for hate speech and misinformation posted on their sites if they fail to address it quickly enough.
Under the law, fines for noncompliance could reach 6% of a company’s total annual revenue. And, like previous attempts to rein in companies at this scale, the proposed law is likely to face a coordinated fight from big tech.