Last month, Twitter apologized after users noticed that its automated tool for cropping photos appeared to favor subjects with light skin. The social network said that it was investigating bias in its existing machine learning algorithm, and on Thursday it announced that it has seen enough to know it’s time for a change.
The issue started after users began testing photos with odd aspect ratios that wouldn’t fit Twitter’s typical display parameters. Twitter uses machine learning to analyze these photos and predict which part of the image an average person would look at first. Time and again, users found that if a photo included a white person and a Black person, Twitter chose to display the white face in a preview of the image. This happened in images featuring Mitch McConnell and Barack Obama. It happened in images featuring Lenny and Carl from The Simpsons. It even happened in images featuring a golden retriever and a black Labrador.
There are countless examples of human biases being built into algorithmic systems, but this one was unusually easy to witness for yourself. Yesterday, Twitter Chief Design Officer Dantley Davis explained in a blog post that the company has been testing for bias in the machine learning system it uses for cropping photos, and, well, it hasn’t come to a conclusion yet. “While our analyses to date haven’t shown racial or gender bias, we recognize that the way we automatically crop photos means there is a potential for harm,” Davis wrote.
This time, Davis wanted to give a little more transparency into the process that Twitter has used to test for bias. Here’s that explanation in full:
The image cropping system relies on saliency, which predicts where people might look first. For our initial bias analysis, we tested pairwise preference between two demographic groups (White-Black, White-Indian, White-Asian and male-female). In each trial, we combined two faces into the same image, with their order randomized, then computed the saliency map over the combined image. Then, we located the maximum of the saliency map, and recorded which demographic category it landed on. We repeated this 200 times for each pair of demographic categories and evaluated the frequency of preferring one over the other.
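The procedure Davis describes can be sketched in a few lines of code. This is an illustrative reconstruction, not Twitter’s actual test: the `toy_saliency` function below is a stand-in (here it just uses pixel brightness), whereas Twitter’s real saliency model is a trained neural network that isn’t public.

```python
import numpy as np

def toy_saliency(image):
    # Stand-in for a real saliency model: treats brighter pixels as more salient.
    return image.astype(float)

def pairwise_preference(face_a, face_b, trials=200, rng=None):
    """Combine two faces side by side in randomized order, locate the
    maximum of the saliency map, and count which face it lands on."""
    rng = rng or np.random.default_rng(0)
    counts = {"a": 0, "b": 0}
    for _ in range(trials):
        # Randomize left/right placement so position doesn't bias the result.
        if rng.random() < 0.5:
            left, right, order = face_a, face_b, ("a", "b")
        else:
            left, right, order = face_b, face_a, ("b", "a")
        combined = np.concatenate([left, right], axis=1)
        sal = toy_saliency(combined)
        # Find the location of the saliency maximum and record which half it's in.
        _, col = np.unravel_index(np.argmax(sal), sal.shape)
        winner = order[0] if col < left.shape[1] else order[1]
        counts[winner] += 1
    return counts

# Two dummy 8x8 "faces": with a brightness-based saliency stand-in,
# the brighter patch wins every trial.
bright = np.full((8, 8), 200, dtype=np.uint8)
dark = np.full((8, 8), 50, dtype=np.uint8)
print(pairwise_preference(bright, dark))  # → {'a': 200, 'b': 0}
```

With a real saliency model, a roughly 50/50 split over the 200 trials would suggest no preference between the two demographic groups, while a lopsided count would flag potential bias, which is the frequency evaluation Davis describes.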
Davis said that Twitter is continuing its analysis, adding “further rigor” to the process, and will share the eventual findings. But in the meantime, it’s working on a solution that really shouldn’t be that hard: It’ll let users decide how a photo preview should be cropped.
Designers are still testing different approaches to the new cropping tool, but Davis said that the principle going forward will be “what you see is what you get.” In most cases, users will simply choose how the preview looks in the timeline, but Davis indicated that there could be a more elegant tool for presenting “the creator’s intended focal point” in some non-standard photos.
It’s unfortunate that, in tough situations, Twitter’s best option is often to pawn off responsibility on users. But there’s a difference between asking users to find all the hate speech because content moderation is hard and asking users to choose how their content looks or what they see in their timeline. Take note, designers: there’s a better way.