The Facebook Oversight Board overturned 75% of Meta's initial content moderation decisions in 12 high-profile cases disputed last year, per a new report.
Meta won't say why it suspended The Real Facebook Oversight Board after it criticized the platform's threat to cut off news links in Canada and California.
Twitter's head of trust and safety and the company's head of brand safety and ad quality reportedly resigned after Musk snubbed the moderation team.
The case of an American citizen who became a top propagandist for al-Qaeda looms large over our modern free speech battles.
Facebook's parent says it will provide more explanations for low-level violations and reserve 30-day suspensions for users with seven or more violations.
Nothing, Forever devs told users that they had mistakenly failed to employ OpenAI's content moderation tools, showing how bias can surface across multiple AI models.
The laborers reportedly looked through graphic accounts of child sexual abuse, murder, torture, suicide, and incest.
The tech giant argued that algorithms are the only possible way companies can handle the massive number of online users.
The two new tools could help smaller firms avoid penalties imposed by the Digital Services Act and other new EU policies.
Two Ethiopian researchers accuse the tech giant of not removing hateful content they say helped drive bloodshed during the nation's brutal civil war.
Meta is rolling out a new counter-terrorism software tool in January that will be available to a wide range of online companies.
The watchdog said that Meta’s "cross-check" systems on Facebook and Instagram gave celebrities and businesses more leeway in posting unmoderated content.
The platform's crowd-sourced fact-checking feature got a new algorithm adjustment, as advertisers continue to question Elon Musk's chaotic leadership.
The tech giant was forced to fully restore a Facebook post, removed earlier this year, that compared the war in Ukraine to World War II.
Oversight Board members advised Musk to adhere to a harm principle and left open the possibility of working with Twitter on content decisions.
Twitter, Facebook, YouTube and more all depend on being shielded from responsibility for user content via Section 230, but that could soon change.
Meta and its subcontractor Sama are accused of traumatizing overseas moderators. The next hearing in the case is set for Oct. 25.
Court judges wrote that Nazis and terrorists using online platforms to spread hate was "hypothetical" and compared content moderation to checking the mail.
The units are reportedly merging to increase efficiencies and cut down costs as Meta continues to reel from a tumultuous 2022.
The company is launching AutoMod, an autonomous keyword filter it says will help moderators detect, block, or receive alerts about harmful messages in a server.