Facebook announced Wednesday that it is rolling out new anti-misinformation features aimed at reducing the amplification of potentially harmful content spreading within Facebook Groups.
Users can now opt to automatically decline posts coming from sources Facebook fact-checkers have identified as containing false information. Facebook hopes rejecting these posts before other users ever get a chance to interact with them will ultimately “reduce the visibility of misinformation.”
The company said it would expand its “mute” function, adding the ability for group admins and moderators to temporarily suspend members from posting, commenting, or reacting in a group. The suspension function will also let admins and mods temporarily block certain users’ ability to access group chats or enter a room in a group. The new features will allow admins to automatically approve or decline member requests based on specific criteria of their choosing.
Finally, Facebook said it would also introduce new updates to its Admins Home functions, providing group admins more tools like an overview page and an insights summary feature meant to assist with community management. Combined, Facebook said it hopes these changes will put more enforcement power and judgment capability in the hands of group leaders. That empowering of admins and moderators appears to take a page from Reddit’s playbook, whose moderators have such wide discretion that the social network has become notorious for featuring different, sometimes wildly opposing, standards for content within disparate communities. Facebook did not immediately provide further details on the tools or the timing of their release.
Reached for comment by Gizmodo, Facebook provided a list of its previous efforts to combat misinformation in Groups. The company said in a statement, “We’ve been doing a lot to keep FB Groups safe over a number of years... To combat misinformation across Facebook, we take a ‘remove, reduce, inform’ approach that leverages a global network of independent fact-checkers.”
This isn’t the first time Facebook has tried to introduce tools to encourage Groups leaders to clean up their communities. Last year, the company introduced the ability for administrators to appoint designated “experts” within their groups. Those experts’ profiles would appear with official badges next to their names, meant to signal to other users that they were particularly knowledgeable on a given topic.
For some context, Facebook partners with around 80 different independent organizations, including the Associated Press, The Dispatch, USA Today, and others, all certified through the International Fact-Checking Network to review content. These fact-checkers identify and review questionable content to determine whether it rises to the level of misinformation. The fact-checking program began nearly six years ago.
Critics have long pointed to Facebook’s relatively hands-off approach to limiting content on Groups as instrumental in helping spawn mini incubators of misleading content all across the web. Others have blamed Groups specifically for contributing to the rise of fringe political elements like QAnon and the Stop The Steal movement that ultimately fueled the January 6 Capitol riot. A Washington Post analysis conducted earlier this year found at least 650,000 posts questioning the legitimacy of the election floating around on Facebook Groups between election night and the riots, averaging out to about 10,000 posts per day.
Facebook’s modest changes arrive amid heightened public fear over the risk of misinformation related to Russia’s invasion of Ukraine. Fake images and videos (some of video gameplay) supposedly showcasing fighting raging through the country spread like wildfire just hours after the invasion first began.