The largest tech platforms in the world have yet to prove that they can effectively and expeditiously moderate their hellish platforms, and Instagram is no exception.
On Friday, Business Insider reported that IGTV—Instagram’s new standalone video service—recommended disturbing videos to users, “including what appeared to be child exploitation and genital mutilation.” Business Insider reports that it monitored the app’s recommendation sections for nearly three weeks, using both journalists’ accounts and an account set up as a 13-year-old with no activity history.
Instagram characterized the platform as “a new app for watching long-form, vertical video from your favorite Instagram creators” when it announced IGTV in June. According to Business Insider, IGTV’s algorithm recommended a video of a young girl in a bathroom who is about to take her top off before the video ends. The video—titled “Hot Girl Follow Me”—was reportedly suggested to both the journalist and the fake child account. The news outlet reports the account that posted the video also uploaded “Patli Kamar follow me guys plzz,” which was recommended to the fake child’s account and reportedly displayed a young girl “exposing her belly and pouting for the camera.”
The videos were reportedly uploaded by different accounts and stayed active on the app for five days, removed only after Business Insider reached out to Instagram’s media contact. According to Business Insider, the videos had racked up over one million views before Instagram scrubbed them from the platform. The news outlet reports the accounts were not deactivated because the content—not the accounts—violated Instagram’s policy.
IGTV also reportedly recommended a video of a penis being mutilated by a motorized saw to the fake child’s account—that video was also removed after Business Insider reported it. The account reportedly was not. An Instagram spokesperson told Gizmodo in an email that the company had removed all of the content from IGTV that Business Insider reported to it, and that one of the accounts had been disabled for violating Instagram’s community guidelines.
“We care deeply about keeping all of Instagram—including IGTV—a safe place for young people to get closer to the people and interests they care about,” the spokesperson said. “We have Community Guidelines in place to protect everyone using Instagram and have zero tolerance for anyone sharing explicit images or images of child abuse.”
The spokesperson said that the company works with law enforcement and the Child Exploitation and Online Protection Command (CEOP) to deal with the issue.
The spokesperson added that Instagram has a “trained team of reviewers who work 24/7 to remove anything which violates our terms.” They also said that both Instagram and Facebook (which owns the service) are amping up their safety and security teams—Facebook pledged last year to double this team to 20,000 by the end of 2018.
There are, of course, a number of technical efforts platforms like Facebook and Instagram deploy to try to tackle issues like child abuse, including algorithms that automatically scan for and remove photos of exploitation. But as we’ve seen time and time again, even thousands of human moderators and some of the most sophisticated algorithms can’t keep pace with the endless stream of content uploaded to these platforms. What’s increasingly clear is that the most effective line of defense shouldn’t be a reporter emailing the company’s press line.