Instagram announced on Sunday that it’s banning drawings, memes, videos, and comics that portray self-harm and suicide, as well as “other imagery that may not show self-harm or suicide, but does include associated materials or methods.” This adds to its preexisting policy of prohibiting content that promotes or encourages self-harm and suicide.
Advocates have implored the platform to take stronger measures, especially after the well-publicized death of 14-year-old Molly Russell. In January of this year, her father spoke publicly about finding images of depression, self-harm, and suicide in her social media feeds, even though she had displayed “no obvious signs” of preexisting mental health issues. Moved by Russell’s story, UK health secretary Matt Hancock sent a letter to Instagram, Facebook, Twitter, Google, Snapchat, Pinterest, and Apple urging them to remove graphic and triggering content. “It is appalling how easy it still is to access this content online and I am in no doubt about the harm this material can cause, especially for young people,” he wrote. “It is time for internet and social media providers to step up and purge this content once and for all.”
In February, Instagram head Adam Mosseri responded by announcing in the Telegraph that the platform would launch “sensitivity screens” blurring images of self-harm, which users could click through to view. The company also updated its policy to remove all graphic images of self-harm, shadowban non-graphic self-harm content including images of healed scars, and direct users who search for self-harm and suicide-related content to support networks.
Subsequently, some users found that their images of healed scars accompanying stories of recovery had been blurred or removed, which gave rise to the hashtag #youcantcensormyskin and raised the question of how to promote positive stories while removing triggering images.
Data on such imagery is “limited and complex,” Samaritans, a UK-based charity and crisis hotline, noted in a statement to Gizmodo. They continue:
While we know that some content can be helpful and supportive, there is also evidence that content can be harmful, and that some imagery can glorify, sensationalise and normalise self-harm. This is very concerning, as we know that self-harm is a strong risk factor for future suicide. We also know that in recent years both the self-harm and suicide rates in young people have been rising, which is incredibly worrying.
Samaritans go on to say that they will be collaborating with social media platforms, including Instagram, and hearing from creators in order to “maximise supportive content and minimise harmful content.”
Mosseri touched on that fine line in the policy announcement:
We understand that content which could be helpful to some may be harmful to others. In my conversations with young people who have struggled with these issues, I’ve heard that the same image might be helpful to someone one day, but triggering the next.
Some good examples of helpful content can be found on the Trevor Project’s Instagram, which features messages of hope and support from everyone from regular people to Jonathan Van Ness. The account welcomes people who’ve received support to share their stories without graphic imagery.
If you or someone you know is having a crisis, please call the National Suicide Prevention Lifeline at 800-273-8255 or text the Crisis Text Line at 741-741.
You can also reach The Trevor Project’s TrevorLifeline 24/7 at 1-866-488-7386. Counseling is also available 24/7 via chat at TheTrevorProject.org/Help, or by texting 678-678.
From the UK, you can reach the Samaritans’ discreet helpline for free at 116 123, or visit www.samaritans.org to find details of your nearest branch, where you can talk to trained volunteers.