How Google's Leaner AI Could Move Speech Recognition Offline

Google’s artificial intelligence is getting better at a speedy (and maybe worrying?) clip, as its recent slam-dunk of a human Go champion demonstrated. That victory relied on highly efficient computation rather than just brute force, and Google thinks that same efficiency could help it move speech recognition offline.

The speech recognition we’re all used to in Siri and Google Now leans heavily on cloud computing to decipher and make sense of human speech. That’s necessary because the processing power and memory required are way beyond what most smartphones can offer.

In a recent paper, a team of Google engineers outlined how they used deep machine learning techniques to run a lightweight speech-recognition program on a smartphone. The paper is dense, but the gist is this: dictation and voice commands working with a 13.5 percent error rate (compared with the roughly 8 percent of Google’s cloud-based system), all running natively on a Nexus 5 with a 2.2GHz processor and 2GB of RAM.
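
For context, the “error rate” here is almost certainly word error rate (WER), the standard speech-recognition metric: the number of word substitutions, insertions, and deletions needed to turn the recognizer’s output into the correct transcript, divided by the transcript’s length in words. Here’s a minimal sketch of how that’s computed; it’s a generic illustration of the metric, not code from Google’s paper, and the example phrases are invented.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words, via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(
                d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),  # match/substitution
                d[i - 1][j] + 1,                               # deletion
                d[i][j - 1] + 1,                               # insertion
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six: WER = 1/6, about 16.7 percent.
print(wer("turn off the living room lights",
          "turn off the living lights"))
```

By this measure, the offline system gets roughly one word in seven wrong, versus one in twelve or so for the cloud version.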

The benefits are obvious: if speech recognition doesn’t need an always-on internet connection to work, building it into devices becomes cheaper, more power-efficient, and faster (no need to send all that audio to a server and back). Now, whether you want your fridge to talk to you is a whole separate thing.

[arXiv via Android Police]

DISCUSSION

Unfortunately, a lot of other smartphone services require an always-on internet connection, so removing that need for speech recognition won’t save much phone power... in fact, if it’s used frequently, it would be an added power draw on a smartphone. The only significant thing it will save is massive Google server computing resources, which is the clear goal. You also didn’t mention that local speech recognition would be more secure, which is an upside.