Facebook Used an 'I Will Rape You' Image to Invite Users to Instagram

This is getting absolutely nuts. Just a week after Facebook got busted for allowing advertisers to target topics like “Jew haters” and “How to burn Jews,” a reporter found that the company used an image that said “I will rape you” to invite her friends to Instagram. There was a death threat in there, too.


The reporter in question is Olivia Solon, who covers tech for The Guardian. Here’s a screenshot of the Instagram ad and, be forewarned, it’s unsettling:

There’s a simple explanation for how this image ended up in an ad. Solon uploaded it to Instagram a year ago to illustrate the kind of hate mail she receives as a journalist. The photo drew over a dozen comments and just three likes. Nevertheless, Facebook flagged the post as “engaging,” one of the criteria the company uses to select images for promotional ads for subsidiaries like Instagram.

Instagram’s response left much to be desired. Here’s the statement as framed by The Guardian:

An Instagram spokesperson apologized and claimed that the image was not used in a “paid promotion”. “We are sorry this happened – it’s not the experience we want someone to have,” the statement said. “This notification post was surfaced as part of an effort to encourage engagement on Instagram. Posts are generally received by a small percentage of a person’s Facebook friends.”


So, according to Instagram, seeing rape and death threats is “not the experience we want someone to have.” That’s understating things. For a company with the resources of Facebook, it seems beyond insulting that something like this can’t be prevented. Like the “Jew hater” incident, this latest screw-up is the clear result of Facebook leaning too hard on algorithms without the human oversight to ensure those algorithms don’t select abusive content.

Once again, we’re seeing how the company relies on algorithms for things like account verification and ad sales, only to shrug when something inevitably goes wrong. This is more or less what happened after Facebook found out that Russian trolls spent $100,000 on ads during the 2016 election. “We know we have to stay vigilant to keep ahead of people who try to misuse our platform,” Facebook security chief Alex Stamos said in a blog post.


Based on this latest incident, however, it seems like the company itself is among those misusing the platform. At the very least, it’s obvious that Facebook’s embarrassing year is only getting more embarrassing. And to think, Congress is just now starting to intervene. For Facebook, it’s likely going to get worse before it gets better.

[The Guardian]


Senior editor at Gizmodo.

