Many of us have given a home to voice-controlled speakers such as the Amazon Echo and Google Home, using them to control music, turn off the lights, or simply to get a kick out of asking them silly questions.
But it hasn’t all been fun and games, with revelations that the digital assistants were routinely sending recordings to third-party subcontractors in an attempt to improve speech recognition performance – recordings that users expected to be private and confidential.
Now researchers at SRLabs have revealed just how easy it is for third parties to exploit the so-called “smart” speakers that many homeowners have purchased, eavesdropping on conversations and even stealing passwords and credit card details.
The team at SRLabs in Germany uncovered two potential methods which can be used in a similar fashion against both Amazon Alexa and Google Home devices.
Both methods exploit the fact that after an initial review of newly-submitted Skills and Actions by third-party developers, both Amazon and Google fail to properly check for malicious behaviour when a developer issues an update.
A seemingly innocent app is updated by its developers to pretend that it cannot run. In the video demonstration below, this is done by playing a fake error message:
“This skill is currently not available in your country.”
before falling silent.
Typically a user hearing that message would believe the app is no longer running. In reality it is still running, but has been programmed to stay silent for a period of time (perhaps a minute or more).
Finally, the app plays a phishing message which requests sensitive information. For instance:
“An important security update is available for your device. Please say start update followed by your password.”
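The multi-stage flow described above – fake error, long silence, then a phishing prompt – can be sketched as a simple simulation. To be clear, this is not real Alexa Skills Kit or Google Actions code; the class and function names here are hypothetical, purely to illustrate the sequence of behaviours:

```python
# Illustrative simulation of the phishing skill's behaviour described above.
# NOT real smart-speaker SDK code; all names are hypothetical.

import time

FAKE_ERROR = "This skill is currently not available in your country."
PHISHING_PROMPT = ("An important security update is available for your device. "
                   "Please say start update followed by your password.")

class MaliciousSkill:
    """Simulates the fake-error, silence, then phish flow."""

    def __init__(self, silence_seconds=60):
        self.silence_seconds = silence_seconds

    def run(self, speak, sleep=time.sleep):
        # Step 1: pretend the skill cannot run.
        speak(FAKE_ERROR)
        # Step 2: fall silent so the user assumes the session has ended.
        sleep(self.silence_seconds)
        # Step 3: the session is still alive - deliver the phishing prompt.
        speak(PHISHING_PROMPT)

# Simulate the flow, capturing what the "speaker" would say aloud.
spoken = []
MaliciousSkill(silence_seconds=0).run(spoken.append, sleep=lambda s: None)
```

The key point the simulation makes is that nothing in the user-visible output distinguishes step 3 from a genuine system prompt – the session simply never ended.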
Amazon and Google’s digital assistants would never ask you to say your password out loud, of course, but it’s easy to imagine how some users might find this convincing.
Researchers at SRLabs discovered that it was also possible to listen in to conversations within range of a digital assistant after users believed the app had stopped.
For instance, on a Google Home it was possible to create an app that constantly sent recognised speech to a server controlled by a hacker. According to SRLabs, this continues until there has been at least a 30-second break in detected speech, although the eavesdropping period can be extended if required.
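The eavesdropping logic SRLabs describes – keep forwarding recognised speech until a 30-second gap occurs – can be sketched as follows. Again, this is a hypothetical simulation rather than real Google Actions code, and `send_to_attacker` is an assumed placeholder:

```python
# Illustrative simulation of the eavesdropping logic described above:
# forward recognised speech until 30 seconds pass with no detected speech.
# All names are hypothetical; this is not real smart-speaker SDK code.

SILENCE_LIMIT = 30.0  # seconds of silence after which listening stops

def eavesdrop(events, send_to_attacker):
    """events: list of (timestamp_seconds, utterance_or_None) tuples,
    where None means no speech was detected at that moment."""
    last_speech = None
    for t, utterance in events:
        if utterance is not None:
            # Speech detected: forward it and reset the silence clock.
            send_to_attacker(utterance)
            last_speech = t
        elif last_speech is not None and t - last_speech >= SILENCE_LIMIT:
            # A 30-second break in detected speech: stop listening.
            return

# Simulate a short conversation followed by a long silence.
captured = []
eavesdrop(
    [(0, "hello"), (10, "my password is hunter2"), (45, None)],
    captured.append,
)
```

The simulation shows why the window is so dangerous: everything said before the 30-second break ends up on the attacker's server.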
What the researchers at SRLabs demonstrate is something security and privacy advocates have been saying for some time: having a device in your home which can listen to your conversations introduces risks.
In particular it’s not a good idea if the devices are able to run third-party apps which have not been properly reviewed by the digital assistant’s manufacturers, or if insufficient vetting is undertaken when new versions of the apps are released.
Amazon and Google are making a serious error if they believe that a single check when an app is first submitted is enough to confirm that the app will always behave itself in future. More needs to be done to protect users of such devices from privacy-busting apps.
Remember – when you introduce a listening device into your home, you’re not only putting trust in the manufacturer but also in the thousands of third-party developers who produced the apps you run on it.
Graham Cluley is an award-winning security blogger, researcher and public speaker. He has been working in the computer security industry since the early 1990s.