Twitter has never been an ideal platform for sharing photos—largely because users can’t be sure what portion of their image will show up in a tweet’s image preview. If the most important part of a photo is cropped out, followers probably won’t be inclined to click through to see the whole image.
Yesterday, Twitter announced that it is rolling out a tool that automatically determines which portion of an image should be displayed in a tweet’s preview. Two Twitter machine learning researchers, Lucas Theis and Zehan Wang, explained in a blog post how the technology works.
Since Twitter first let users post photos in 2011, the company has faced the challenge of automatically cropping images uploaded in all sorts of sizes and aspect ratios. Initially, Twitter’s algorithms simply cropped a square around the center of the image, or used face detection to crop around heads. But this could produce preview images that cut out the most impressive portion of a sunset above the horizon, or a dog’s derpy tongue at the bottom of a frame.
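For a sense of why the old approach failed, here is a minimal sketch of a naive center crop in Python, using the Pillow imaging library; it illustrates the general strategy, not Twitter’s actual code.

```python
from PIL import Image

def center_crop_square(image: Image.Image) -> Image.Image:
    """Crop the largest centered square from an image.

    A sketch of the naive center-crop strategy described above;
    it knows nothing about where the interesting content is.
    """
    width, height = image.size
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return image.crop((left, top, left + side, top + side))
```

Applied to a tall photo of a dog, this crop keeps the middle band of pixels and discards the top and bottom of the frame, tongue included.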
Now, Twitter’s photo-cropping tools determine the most “salient” part of a photo, the region viewers’ eyes are drawn to first, and crop based on that. Researchers gauge saliency by using eye trackers to record which pixels people look at first. Data from those experiments was used to train neural networks that predict which kinds of objects and features generally draw people’s attention.
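To make the idea concrete, here is a hedged sketch of how a crop window might be chosen once a saliency map exists. The saliency model itself is treated as a black box, and the map is assumed to be a NumPy array of per-pixel scores; this is an illustrative simplification, not Twitter’s implementation.

```python
import numpy as np

def crop_around_saliency_peak(saliency: np.ndarray, crop_h: int, crop_w: int):
    """Return the (top, left) corner of a crop_h x crop_w window centered
    on the most salient pixel, clamped to the image bounds.

    `saliency` is a 2-D array of per-pixel scores (higher means more
    eye-catching); crop dimensions are assumed to fit within the image.
    """
    h, w = saliency.shape
    peak_y, peak_x = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = min(max(peak_y - crop_h // 2, 0), h - crop_h)
    left = min(max(peak_x - crop_w // 2, 0), w - crop_w)
    return top, left
```

A production system would likely score whole candidate windows rather than a single peak pixel, but the principle is the same: crop where the predicted attention is.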
At their most advanced, these networks can scan an image and predict the exact pixels most people would look at first. But that level of analysis would take too long for the purposes of posting photos on Twitter, so the company created a stripped-down version that imitates the slow but precise neural network while working ten times faster, according to Twitter’s blog post.
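Training a small, fast network to imitate a large, accurate one is commonly called knowledge distillation, and the sketch below shows one step of such a training loop in PyTorch. The architectures, optimizer, and loss here are placeholders chosen for illustration; the blog post does not publish Twitter’s training code.

```python
import torch
import torch.nn as nn

def distillation_step(student: nn.Module, teacher: nn.Module,
                      images: torch.Tensor,
                      optimizer: torch.optim.Optimizer) -> float:
    """One training step in which a fast 'student' network learns to
    reproduce a slow 'teacher' network's saliency maps.

    Both models are assumed to map a batch of images to same-shaped
    saliency maps; mean-squared error is an illustrative loss choice.
    """
    teacher.eval()
    with torch.no_grad():
        target = teacher(images)      # slow but precise saliency maps
    prediction = student(images)      # fast approximation being trained
    loss = nn.functional.mse_loss(prediction, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```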
It seems like Twitter might also want to give users the option to choose how their photos are cropped in previews. But for now, we’ll just have to trust the machines.