Google just wrapped up the keynote address at its annual I/O developer conference. It was kind of short compared to some of the marathons we’ve been subjected to in the past! But there’s still plenty for us to praise—and pan.
We didn’t get a lot of totally new stuff at Google I/O this year, but sometimes making things better is the best thing. Google also managed to make some things worse. Let’s get started.
This one almost isn’t fair. We were blown away last year when Google showed how a new feature called Google Lens would offer a super smart visual search tool in Google Photos. Now, Google has moved Lens into the camera app and made it faster, more powerful, and even smarter.
Now, you can hold your camera up to, say, a dog, and Google Lens will tell you what kind of dog it is as well as provide links to learn more about the breed. You can also hold the camera up to text and then select, copy, and paste that text into other apps. Google Lens will also do an instantaneous search on a word you see in the real world. Perhaps most impressive is Google Lens’ ability to look at practically any object—clothes, books, furniture—and find similar objects on the internet, just based on what’s in the camera frame. The feature is still in beta but promises to blow us all away when it’s finally released.
Are you ever writing an email when you think to yourself, “Damn, I wish a robot would just write this for me”? It sounds nice at first, but of course it’s not. You are not a machine! You have feelings and emotions! Autocomplete is bad enough, but imagine if algorithms were predicting entire sentences of your prose.
That’s exactly what Google aims to do with Smart Compose in Gmail. You start typing a message, and the machine predicts what you’re going to say. It’s hard to tell how well it works based on the demo Google showed during the keynote, but the idea just sounds dreadful. Not everything on Earth needs to be automated.
The world is stuck with crappy lithium-ion batteries for its gadgets (for now), but bless Google for building software that makes battery life suck less. A new feature in Android P called Adaptive Battery promises to dedicate battery power to the apps you use the most and to predict what you use when to optimize battery life. So while the feature might not entirely solve your dead battery woes, at least it should make you run out of juice less often than you do now.
In addition to automating email writing, Google also wants to automate mundane phone calls. The company is developing a new feature called Duplex to do this very thing. Powered by Assistant, the software basically makes robocalls for everyday tasks like scheduling a hair appointment or making a restaurant reservation. A robot that sounds like a person literally talks to the person on the other end, all while pretending to be human.
The technology seems incredible. If it works as advertised, Google Duplex could make your life a lot better, especially if you don’t like making phone calls. Then again, it’s also deeply dystopian. Now we’re training artificially intelligent robots to pretend they’re us and talk to other people? You can guess the next plot twist in this Philip K. Dick reality, and it’s not good for the humans.
Waymo chief executive John Krafcik closed out the keynote with a quick update about his company’s self-driving cars. Long story short: they can sort of see in the snow now. Krafcik also spoke at length about how “a group of Google engineers, roboticists, and researchers set out on a crazy mission: to prove that cars could actually drive themselves.” While this is a true statement, it overlooks the fact that Carnegie Mellon researchers were building self-driving cars back in the 1980s. Google would love to take credit for the invention, but it’s just another big company peddling a mythical origin story for profit. Don’t believe the hype.
A lot of people overuse their devices. You know the feeling. You’re so used to checking your phone that sometimes you find yourself just staring at the screen, opening and closing apps, your brain melting out of your nose. This is bad, and Google just revealed some good ways to bring balance to the phone experience in Android P.
Four new Android features—Dashboard, App Timer, Do Not Disturb, and Wind Down—aim to help you use your phone in a healthy way. The Dashboard is a new screen that shows you how often you use particular apps. App Timer, as the name implies, gives you a set amount of time to use a certain app before reminding you of the limit and turning the icon grey. Do Not Disturb mode ensures you get no visual or sound alerts when you want to have quiet time. Finally, Wind Down helps you put down your damn phone and go to sleep by turning on Do Not Disturb and setting the whole screen to grayscale.
Will these new features actually help you use your phone less? It’s unclear. But at least Google is trying.
In an effort to make using Android a better experience, Google is adding two small but slick features called App Actions and Slices.
App Actions predict what you want to do on your phone next, based on what you’re currently doing. In the example Google provided, connecting your headphones pops up a little window offering to resume a phone call or listen to a song on Spotify.
The Slices API will allow developers to embed live, interactive pieces of one app inside another to make multitasking easier. For starters, it’ll be available in Android search; in one example given today, it was used to surface a piece of the Lyft app to help a user get a ride home.
As it regularly does, Google is making Android look a little bit more like iOS, this time with a new home button and swipe gestures that look an awful lot like the ones on the iPhone X.
This borrowing goes both ways, of course; Apple steals from Android constantly. But it still sucks to see something so unoriginal at an event that’s supposed to be all about new things.