Google’s Leaked Recordings Violate Data Security Policies
A report by the Belgium-based broadcaster VRT NWS revealed that Google employees routinely listened to audio recorded by Google Home smart speakers and by the Google Assistant app on smartphones.
According to ZDNet, the report explains how employees listen to snippets of recordings captured when users activate the device with the usual “OK Google” command.
After obtaining copies of several recordings, VRT NWS approached the users concerned and asked them to verify whether the voices in the clips were their own or their children’s, recorded while speaking to their digital assistants.
Google responded to the report by publishing a blog post titled “More information about our processes to safeguard speech data”.
Google acknowledged that it works with language experts around the world who “understand the nuances and accents of a particular language”, and who review and transcribe a small set of queries to help the company better understand those languages. Google’s terms and conditions indicate that users’ conversations may be recorded.
The blog post states that reviewing these interactions is an essential part of building speech technology for products like Google Assistant, and that various security measures are in place to protect users’ privacy during the review process.
David Monsees, Google’s product manager for Search, wrote in the blog post: “We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.”
According to Google, it applies a wide range of safeguards to protect user privacy throughout the entire review process. The blog further adds, “Language experts only review around 0.2% of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google.”
The company states that Google Assistant only sends audio data to Google after the device is activated. It also acknowledged that devices running Google Assistant can sometimes register a “false accept”, meaning the software interprets background noise or words as the activation keyword when no command was actually given.
Although Google stated that audio is only recorded after the command is heard, VRT NWS found that of the more than one thousand samples it reviewed, 153 should never have been recorded, because the “OK Google” command was not clearly given.
In February, Google detailed that its Nest Guard, the centerpiece of the Nest Secure home alarm system, would soon receive Google Assistant functionality — meaning the device needed to have both a speaker and microphone.
Users were not made aware that the Nest Guard had a microphone at all, however.
Google responded that failing to tell users about the Nest Guard’s microphone was nothing more than a mistake.
Earlier this year, Amazon was found to employ a team of people who listen to recordings from Alexa-powered speakers, similarly to Google, in order to improve the accuracy of its voice assistant.
The recordings sent to the human review team do not include the user’s full name, but they are linked to the account name, the device serial number, and the user name associated with the clip.
Some team members are tasked with transcribing commands and analyzing whether Alexa answered correctly. Others were asked to transcribe background noises and conversations mistakenly captured by the device.
Kevin Jones
Kevin Jones, Ph.D., is a research associate and a cybersecurity author with experience in penetration testing, vulnerability assessments, monitoring solutions, surveillance, and offensive technologies. He currently works as a freelance writer covering the latest security news and other happenings. He has authored numerous articles and exploits, which can be found on popular sites like hackercombat.com and others.