At the end of 2017, Facebook announced a pilot program that lets users send the company intimate photos and videos that they don’t want shared on its main platform or on Instagram. Facebook said in March that it is using machine learning and artificial intelligence to detect that content, and that if someone nonconsensually tries to post it, it will be reviewed and removed. But the company isn’t informing users when someone attempts to upload revenge porn of them, a threat they should have the option of knowing about.
The Daily Dot first reported on Wednesday that Facebook doesn’t intend to notify potential victims if someone tries to upload content of them without their permission. “Once content is taken down from the site, we will hash it or create a digital fingerprint of it so we can catch any attempts to upload that image again,” a Facebook spokesperson told the Daily Dot.
A Facebook spokesperson confirmed in an email to Gizmodo that the company does not inform people if they are a victim of revenge porn on the platform and that there is no option to opt-in to receive such a notification.
Facebook’s system uses “detection technology” that is able to identify when photos that have been preemptively submitted to prevent uploading to the site are, in fact, uploaded to the site. “This means we can find this content before anyone reports it, which is important for two reasons: often victims are afraid of retribution so they are reluctant to report the content themselves or are unaware the content has been shared,” Antigone Davis, Facebook’s global head of safety, wrote in a blog post in March. This detection tech, announced this year, reportedly works together with Facebook’s expanded revenge porn pilot program announced last year.
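Facebook hasn’t disclosed the details of its detection technology, but the general technique it describes, fingerprinting a reported image and comparing future uploads against the stored fingerprints, can be sketched with a toy perceptual hash. The `dhash` function, pixel values, and bit threshold below are illustrative assumptions, not Facebook’s actual implementation:

```python
# Toy sketch of hash-and-match detection, using a simple "difference
# hash" over a grayscale thumbnail (represented here as rows of pixel
# brightness values). Real systems use far more robust fingerprints.

def dhash(pixels):
    """One bit per horizontally adjacent pixel pair, set when the
    left pixel is brighter than the right."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Fingerprints of previously reported images (the hash database).
blocked = {dhash([[10, 20, 30], [30, 20, 10]])}

def is_match(pixels, threshold=2):
    """Flag an upload whose hash lands within `threshold` bits of a
    blocked fingerprint, so near-duplicates survive light edits."""
    h = dhash(pixels)
    return any(hamming(h, b) <= threshold for b in blocked)
```

Because matching is done on hashes rather than the images themselves, the system can flag a re-upload without storing or re-inspecting the original photo, which is the property Facebook emphasizes in its description of the program.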
It’s important to highlight one aspect of Davis’ remark from earlier this year, that victims might be “unaware the content has been shared.” According to the Daily Dot report, even when using Facebook’s revenge porn system, potential victims will still remain in the dark as to whether someone is trying to or has successfully uploaded an intimate photo of them without their consent.
This is not inherently a bad thing—there is preliminary research suggesting revenge porn has traumatic mental health effects, and it would be understandably distressing to get an alert any time someone tries to post a nude photo of you online. But it should at least be an option for someone to be privy to that knowledge, should they want to be aware of any threats.
Since those types of notifications could, for many, add a layer of emotional distress, they don’t need to be automatic—just an opt-in feature. Meaning, if someone does want to be informed, they can check that option. If they don’t, they don’t have to do anything, and they don’t have to worry about receiving a distressing notification.
“I understand why Facebook wouldn’t want to be involved in proactively reaching out to victims,” Kelsey Bressler, who has been a victim of revenge porn and is now a member of Badass Army, a nonprofit dedicated to helping victims of revenge porn, told the Daily Dot, adding that such an interaction requires sufficient “bedside manner.”
But Facebook is already working alongside experts in online abuse and civil rights organizations to deploy these tools. It’s not hard to imagine that it could also lean on these organizations to help craft a notification system that wouldn’t be triggering for those who choose to opt in.
Given the complexity of the system and the technology already being deployed to flag and hash photos, it should be relatively easy for Facebook to create a function that notifies users if the photos they submitted were shared online. Potential victims are already dealing with the threat of not only a gross invasion of privacy but the loss of autonomy. Giving them the choice of whether to know how their photos might be circulating online is a small ask in return for volunteering their nude photos and putting their trust in a massive corporation.