In recent months, it became clear that Google, Apple, and Amazon were all guilty of having humans review audio recordings collected by digital assistants. Today, Google’s trying to mitigate some of the backlash by updating and clarifying its policies on what it does with your audio data.
In July, a Google subcontractor leaked over a thousand Google Assistant recordings to VRT, a Belgian news organization. While it wasn’t exactly a secret that Google employed humans to review and transcribe recordings, the leak resurfaced concerns about accidental recordings in which the “Hey Google” wake word wasn’t used, and how securely Google stores sensitive audio data. In response, Google spun the leak as a security breach and defended human review as a necessary part of improving speech recognition across multiple languages. It then paused human transcription globally as it reviewed its policies.
The first change Google’s making directly deals with human review. In a blog post, it noted that customers were always able to opt in or out of its Voice & Audio Activity (VAA) setting during Assistant setup. However, it wasn’t necessarily clear from the previous language in its terms of service that humans would be reviewing audio recordings. To fix that, Google says it will highlight the fact “that when you turn on VAA, human reviewers may listen to your audio snippets to help improve speech technology.” Existing users will also have the option to review their VAA settings and reconfirm whether they still want to participate.
Google also said it plans to add an option to adjust how sensitive a Google Assistant device is to the “Hey Google” command. That means you could make it stricter to reduce accidental recordings, or temporarily more relaxed in a noisy setting.
Also on the agenda is automatically deleting more data and beefing up privacy protections for the transcription process—though Google didn’t give much detail on these fronts. With regard to privacy, Google merely reiterated that audio recordings were never associated with individual accounts and that it would add “an extra layer of privacy filters.” Google did not immediately respond to Gizmodo’s request for comment to clarify what that actually means.
As for data deletion, it said it would improve its process of identifying unintentional recordings. More concretely, Google noted it would update its policy “later this year” so that the audio data of VAA participants would be automatically deleted after a few months.
On the surface, these are all good things—especially the bit where Google says it will highlight human review in its VAA opt-in process. It bears repeating that right now, human review is still a necessary part of improving voice and speech recognition. Even with stricter auto-delete measures, you can’t be 100 percent sure that a digital assistant won’t accidentally record a conversation and send it off into the cloud for some underpaid contractor to listen to. If you want zero chance of that, you’re better off not opting into VAA at all, or eschewing voice assistants altogether.