What Can Google Lens Actually Do for You Right Now

Photo: David Nield (Gizmodo)

Few apps are getting as much hype from Google right now as Google Lens, which taps into the machine learning AI that Google is currently so fond of. That AI lets the app recognize all kinds of stuff through your phone’s camera, and take action on it. It’s available now through Google Photos for Android and iOS, so is it the future of phone apps? And what exactly can it do for you?

Lens was announced at last year’s Google I/O developer conference, and remains in a “preview” stage according to Google: In other words, this is by no means the finished version yet, and we’re still waiting for features Google was demoing back in 2017. Look for regular updates in the months and years ahead (and probably some major upgrades when Google I/O 2018 rolls around in May).

Getting started


Fire up Google Photos on your mobile device of choice, tap on an image to bring it fully into view, and you’ll see the Lens button down at the bottom—it’s an Instagram-esque dot with three-quarters of a square around it. Right now most of you will have to settle for applying the Lens magic to a picture you’ve already taken.

If you happen to have a Pixel phone, you can use Google Lens before you snap a picture via Google Assistant. Maybe the extra processing power of the Pixel is required for this, maybe Google wants to keep it as a Pixel exclusive, but this is definitely the future of Lens. Press and hold the Home button to bring up Google Assistant, then tap the Google Lens button (bottom right).

Whether you’re using Photos or Assistant, the functionality is more or less the same, though identifying something with Lens in Google Assistant is much quicker—you don’t have to snap a photo with the camera app first, then go to Google Photos, then load up Google Lens.

This being Google, of course, all your Lens search activity is saved for you to browse: You can view (and delete) it from Google’s My Activity page on the web.

Road testing Lens in Google Photos

Google Lens’ forte is identifying something in your shot, and it can recognize just about any common object at this point: We got it to correctly identify a cappuccino, a MacBook Pro, a daffodil, and a pint of lager with no problems at all (you can see what sort of day it’s been). Just in case you’re not sure what you’re looking at, Google Lens can let you know.
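
Google doesn’t say exactly what’s powering Lens under the hood, but its public Cloud Vision API performs very similar label detection, which gives a feel for what’s happening when Lens names your cappuccino. Here’s a minimal Python sketch, assuming you’ve installed the google-cloud-vision package and set up credentials (the filename is just a stand-in):

```python
# A rough analogy for Lens' object labeling via Google's public Cloud
# Vision API -- what actually runs inside Lens isn't documented.
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # uses GOOGLE_APPLICATION_CREDENTIALS

with open("cappuccino.jpg", "rb") as f:  # stand-in for any photo
    image = vision.Image(content=f.read())

for label in client.label_detection(image=image).label_annotations:
    # Prints lines like "Cappuccino 0.97" -- scores run from 0 to 1
    print(f"{label.description} {label.score:.2f}")
```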

Next we went for more specific photos. Google Lens also correctly identified pictures of the Cloud Gate sculpture in Chicago, an anteater, and the painting Going to the Match by LS Lowry. Typically you’ll get a snippet of search results: For a painting, the name of the artist and the date it was painted, for example. For public landmarks, you might see opening times with a brief description.
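
Landmark spotting works much the same way through Cloud Vision, which even returns coordinates you could hand off to a maps app. Again, this is an illustration of the technique, not Lens’ actual code, and the filename is our own:

```python
# Landmark recognition sketch using the public Cloud Vision API.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("cloud_gate.jpg", "rb") as f:  # hypothetical landmark photo
    image = vision.Image(content=f.read())

for lm in client.landmark_detection(image=image).landmark_annotations:
    spot = lm.locations[0].lat_lng
    print(f"{lm.description} ({spot.latitude:.4f}, {spot.longitude:.4f})")
```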

Snippets from Wikipedia occasionally pop up too, but usually there’s not a whole lot of information actually available inside Lens—just a quick hit on what the photo shows. Most of the time you’ll get the option to tap to progress to a matching list of Google search results, which will lead to extra details.

We’re not sure quite how often you’ll be looking at something that you need to identify like this, but it could come in handy for tourists and bird-spotters.

Google Lens also works very well for any kind of media. We tested it with vinyl record, CD, DVD, and book covers, and it got the result right every time—again, you probably know what album or novel you’re looking at if it’s right in front of you, but it can be useful for looking up extra information or running a related search.
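
Matching a photo of a cover to the right album or book looks a lot like Cloud Vision’s web detection feature, which returns a best-guess name for an image by comparing it against the web. A short sketch, with the filename our own invention:

```python
# "Best guess" lookup for a cover photo via Cloud Vision web detection --
# an analogy for what Lens seems to do with media, not its actual pipeline.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("album_cover.jpg", "rb") as f:  # hypothetical record-sleeve shot
    image = vision.Image(content=f.read())

web = client.web_detection(image=image).web_detection
for guess in web.best_guess_labels:
    print("Best guess:", guess.label)        # e.g. the album's title
for entity in web.web_entities[:3]:
    print(f"{entity.description} {entity.score:.2f}")
```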

Then there’s text. Google Lens can pick up email addresses, phone numbers, and street addresses very well, and launch the relevant app with another tap if needed. You can snap an address on a poster, then see where it is in Google Maps, or take a picture of a flyer and add the event to your calendar, if a date is detected.
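
The OCR half of this is the easy part to reproduce: Cloud Vision’s text detection pulls the raw text out of a photo, and a couple of regular expressions can fish out the emails and phone numbers, roughly the way Lens surfaces them as tappable actions. The regexes and filename here are ours, not Google’s:

```python
# OCR sketch: extract text from a photo with Cloud Vision, then pick out
# email addresses and phone numbers with simple regexes.
import re

from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("flyer.jpg", "rb") as f:  # hypothetical photo of a poster
    image = vision.Image(content=f.read())

annotations = client.text_detection(image=image).text_annotations
text = annotations[0].description if annotations else ""

print("Emails:", re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text))
print("Phones:", re.findall(r"\+?\d[\d\s().-]{7,}\d", text))
```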

Not everything works so well: Photograph a brand logo and you’ll see matching products from that brand rather than a match for the brand itself. While Lens can identify text and let you copy it to the clipboard, it can’t do anything like real-time translation (you’ll need the Lens version of Assistant for that, which currently means a Pixel device), and it’s not always obvious how to get text out of Lens and into something else.

The Lens search engine also seems to be too aggressive at times, occasionally finding landmarks or words in photos that weren’t actually there. Overall though, image recognition is pleasingly accurate, hitting the mark even if photos are slightly blurred or taken at an angle.

What about Lens in Google Assistant?

Google Lens in Google Assistant is smarter and smoother at this point. One feature that makes it better than the Lens tool built into Google Photos is the ability to recognize people, from politicians to movie stars—in some of our tests it could even name the movie as well as the actor (it probably depends on whether you’re using a well-known still from the flick).

Lens-in-Assistant can also do text translations on the fly, though it needs an extra tap to work (in other words, it’s not the instant translation through the camera that Google Translate has offered for several years now). We found the translations to be pretty hit-and-miss—enough to understand the gist, but not word-for-word accurate.
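
The translation step itself maps neatly onto Google’s Cloud Translation API; presumably Lens-in-Assistant chains its OCR into something similar, though that’s our assumption. A minimal sketch with an example German sign:

```python
# Translation sketch using the Cloud Translation API (v2 client) -- the
# kind of step that likely follows OCR in Lens, though Google doesn't say.
from google.cloud import translate_v2 as translate

client = translate.Client()
result = client.translate("Zutritt nur für Personal", target_language="en")
print(result["translatedText"])          # e.g. "Access for staff only"
print(result["detectedSourceLanguage"])  # "de"
```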

This Lens-in-Assistant functionality is coming to more “flagship devices” in the near future, Google says—as we’ve said, this is definitely going to become the norm eventually, and is a much more satisfying experience, combining the image recognition of Lens with the contextual know-how of Assistant.

The future of Lens

So what can’t you do with Lens yet? There’s still no sign of the magic fence removal demo we saw at Google I/O 2017, where the AI magic of Lens automatically wiped away a chain-link fence to reveal the scene behind. This kind of photo manipulation tech has been shown off for several years now, but it has yet to trickle into Google Lens (or Google Photos, where it may eventually end up).

Another demo we saw at last year’s I/O that isn’t live yet is the ability to snap a Wi-Fi code on a router, then automatically connect to that network—but you can see where this is going to go. As we collectively use Google Lens the idea is that it will get smarter and gain a better understanding of what’s being shown, and what to do with that information.

We can’t think of too many scenarios where this is going to be indispensable, but snapping a photo is certainly quicker and more convenient than tapping out search queries by hand. Shopping could be one area where this comes in handy—instantly showing you the best deals on something in a photo—and getting an accurate calorie count from just a picture of a meal would be really useful in diet tracking apps.

That kind of intelligent image processing is still some way off though: For the time being, you’d be lucky if Google Lens recognized you were having chicken instead of beef. You get the feeling that if something isn’t already in its library of training pictures, it’s going to struggle to recognize it, though AI should fill in those gaps over time.