
Facebook Says It Will Use AI to Police Revenge Porn, but It Won't Fully Explain How (Updated)


In the absence of proactive tools to prevent dirtbags from nonconsensually uploading intimate photos to the internet, victims of revenge porn have little surefire recourse to ensure the photos stop circulating online. It’s an exhausting and devastating game of whack-a-mole, and even when images are effectively scrubbed, the damage has been done. That’s why Facebook’s announcement today that it’s ramping up its program to police nonconsensual image-sharing feels like a glimmer of hope in the fight against revenge porn.

But Facebook’s latest effort, as promising as it is, remains frustratingly vague, leaving vulnerable users with key questions unanswered.

In a blog post on Friday, Facebook announced “new detection technology” that will help flag and remove intimate photos shared on its platform without the subject’s consent. “By using machine learning and artificial intelligence, we can now proactively detect near nude images or videos that are shared without permission on Facebook and Instagram,” Antigone Davis, Facebook’s global head of safety, wrote in the post. “This means we can find this content before anyone reports it, which is important for two reasons: often victims are afraid of retribution so they are reluctant to report the content themselves or are unaware the content has been shared.”

The technology Facebook announced on Friday will detect intimate photos that have already been uploaded to Facebook and Instagram. A “specially-trained member” of Facebook’s Community Operations team will then review the images and remove them from the platform if they are found in violation of the social network’s Community Standards. The team will also disable accounts that have shared said content “in most cases,” according to Davis.

Facebook’s new detection tech will work in tandem with the pilot program it announced last year, which lets users preemptively submit intimate photos they don’t want shared on Facebook, Instagram, or Messenger. The images are reviewed by a team of five Facebook employees and hashed, and any uploads matching those hashes are then blocked across the services.
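Facebook hasn’t disclosed which hashing scheme the pilot program uses, but the general idea of hash-based blocking can be sketched with a toy “average hash”: a submitted image is reduced to a compact fingerprint, and future uploads are compared against stored fingerprints rather than raw pixels. Everything below is an illustrative assumption, not Facebook’s actual implementation.

```python
# Toy sketch of hash-based upload blocking. All names and the hashing
# scheme (a simple 64-bit average hash) are hypothetical stand-ins for
# whatever Facebook actually uses, which it hasn't disclosed.

def average_hash(pixels):
    """Reduce a flat list of 64 grayscale values (an 8x8 grid) to a
    64-bit fingerprint: each bit records whether that pixel is brighter
    than the image's average brightness."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two fingerprints."""
    return bin(h1 ^ h2).count("1")

def is_blocked(upload_hash, blocked_hashes, threshold=5):
    """Block an upload if its fingerprint is within `threshold` bits of
    any hash submitted through the pilot program."""
    return any(hamming_distance(upload_hash, h) <= threshold
               for h in blocked_hashes)

# A victim pre-submits an image; only its hash is stored.
original = [10 * i % 256 for i in range(64)]
blocked = {average_hash(original)}

# A near-identical re-upload (uniform brightness shift) is caught,
# because the bright/dark pattern relative to the average is unchanged...
tweaked = [min(p + 3, 255) for p in original]
print(is_blocked(average_hash(tweaked), blocked))  # True

# ...while an unrelated image passes through.
other = [255 - (7 * i % 256) for i in range(64)]
print(is_blocked(average_hash(other), blocked))  # False
```

Storing only hashes is the point of the design: the service never has to retain the sensitive image itself, and a fuzzy distance threshold catches trivially altered re-uploads that an exact byte-for-byte comparison would miss.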

In addition to bolstering its efforts to scrub the site of revenge porn, Facebook is also launching a “support hub” for victims called Not Without My Consent, where “victims can find organizations and resources to support them, including steps they can take to remove the content from our platform and prevent it from being shared further — and they can access our pilot program,” Davis writes.

What remains unclear is how Facebook’s new AI tool will be able to identify whether the intimate photos it flags have been uploaded without someone’s consent. While it’s sometimes legitimate for tech companies to keep the intricate inner workings of anti-harassment tools under wraps so that bad actors can’t abuse or exploit them, in this case, we need to understand more about this new technology. Facebook doesn’t have to release a detailed blueprint of its detection tech, but it should be able to tell us how it determines the intent behind a user sharing an intimate photo. This is a tool meant to help some of Facebook’s most vulnerable users, and leaving them in the dark about crucial functionality undermines faith in the system.

With Facebook’s initial pilot program, it’s inherently obvious that a photo someone is trying to upload is being shared nonconsensually, given that the would-be victim has preemptively submitted the image precisely to keep it off the platform. That is not the case with Facebook’s new detection technology: these photos are flagged after they’ve already been uploaded, and it’s unclear whether they’re ones that have been reported to Facebook. How will the technology discern a lingerie photo that someone confidently uploads of themselves from a photo of a woman in lingerie, taken in private, that a former partner is now vindictively posting across his social media pages? And how will it differentiate revenge porn from a nude work of art or a historically significant photo?

Moderating large platforms like Facebook is difficult business, and the company’s efforts to better police revenge porn and protect victims are good and necessary, but it needs to ensure that transparency remains at the forefront.

We have reached out to Facebook for comment on how its new detection technology will be able to identify whether an intimate photo has been shared without someone’s consent, and will update when they respond.

Update 1:30pm ET: Facebook told Gizmodo in an email that its detection technology was trained on revenge porn so it could learn what these types of posts look like and identify whether an intimate or nude image or video has been shared without someone’s consent. The company didn’t provide any further details on how the system works. AI, as it stands, has yet to prove that it can understand even basic human nuances, but Facebook seems confident its system will know whether a post is vindictive.