Google CEO Sundar Pichai thinks we are now living in an “artificial intelligence-first world.” He’s probably right. Artificial intelligence is all the rage in Silicon Valley these days, as technology companies race to build the first killer app that utilizes machine learning and image recognition. Today, Google announced an AI-powered assistant built into its new Pixel phones. But there’s a critical downside to the company’s latest creation: Because of the very nature of artificial intelligence, our data is less secure than ever before, and technology companies are now collecting even more personal information about each one of us.
Google’s new assistant, which debuted in the company’s new messaging app Allo, works like this: Simply ask the assistant a question about the weather, nearby restaurants, or for directions, and it responds with detailed information right there in the chat interface. It is undoubtedly neat and useful. Pichai stressed at today’s Google event that this is just the beginning for artificial intelligence. Google’s artificial intelligence will only become smarter, faster, and more accurate. It will learn things about your habits and preferences to better serve you personalized results and to answer more specific questions.
But this is where the problems start.
Because Google’s assistant recommends things that are innately personal to you, like where to eat tonight or how to get from point A to B, it is amassing a huge collection of your most personal thoughts, visited places, and preferences. Google is pretty vague about what exactly the assistant is collecting. It can access information on your devices like contacts or storage (read: literally anything stored to your device), and it can also access “content on your screen.” For the AI to “learn,” it has to collect and analyze as much data about you as possible in order to serve you more accurate recommendations and suggestions.
In order for artificial intelligence to function, your messages have to be unencrypted. Computer scientists are trying to figure out a way to make “searchable encryption” work, but that’s a ways off. Besides, even standard encryption still has problems. Google offers state-of-the-art encryption within its Allo messaging app, but if you turn it on, say goodbye to your fancy AI assistant.
That means Google’s stuck between a rock and a hard place here. The security engineers at Google know, and cryptography experts agree, that automatic encryption is the best way to defend personal data and conversations from hackers and government surveillance. But in order to stay competitive against all the other technology companies that have (or will eventually have) AI-powered assistants, Google has no choice but to leave messages unencrypted by default. Kudos to Google for offering users the choice of encrypting their messages, but I wish we lived in a world where people could use Google’s cool new feature while keeping their messages secure at the same time.
Google isn’t alone in this push-and-pull. In fact, Facebook has pretty much the exact same problem. Facebook Messenger also has opt-in encryption, and uses what is widely regarded as the gold standard for encrypting messages, just like Google does. But in order for users to do things like call an Uber from the app or use a fun bot, their messages have to be unencrypted.
These new assistants are really cool, and the reality is that tons of people will probably use them and enjoy the experience. But at the end of the day, we’re sacrificing the security and privacy of our data so that Google can develop what will eventually become a new revenue stream. Lest we forget: Google and Facebook have a responsibility to investors, and an assistant that offers up a sponsored result when you ask it what to grab for dinner tonight could be a huge moneymaker.
Google is betting that people care more about convenience and ease than they do about a seemingly abstract notion of privacy, and it is increasingly correct in that assumption. Google’s job is to innovate and make money, and if nothing else, be glad the company is offering you a robust option to protect your data. But, you know, an option that means sacrificing some very helpful AI-powered assistance.