Smart Assistants Are Falsely Triggered All the Time
Voice assistants are supposed to respond only to their wake words and ignore everything else, but new research shows that they misfire far more often than people assume, usually because their trigger detection is deliberately forgiving.
Most people can point to situations when their phone or smart assistant woke up, seemingly triggered by someone talking in the room or by the TV. Such mistakes are expected because the systems are not perfect, but the main problem lies elsewhere.
That smart assistants occasionally wake up uninvited is not in itself a big issue, and most people understand why it happens. The real problem is that after a false trigger, the audio that follows is uploaded to the cloud, where it is often reviewed by human employees who try to determine whether the device activated correctly.
Researchers from Ruhr-Universität Bochum (RUB) and the Bochum Max Planck Institute (MPI) for Cyber Security and Privacy investigated a host of smart devices from Amazon, Apple, Google, Microsoft, Deutsche Telekom, Xiaomi, Baidu and Tencent.
The devices received a steady stream of English, German and Chinese words from various TV shows and broadcasts. The setup also captured any data that the devices sent into the cloud after false triggers.
“Based on this data, the team created a list of over 1,000 sequences that incorrectly trigger speech assistants,” wrote the researchers.
“Depending on the pronunciation, Alexa reacts to the words ‘unacceptable’ and ‘election,’ while Google reacts to ‘OK, cool.’ Siri can be fooled by ‘a city,’ Cortana by ‘Montana,’ Computer by ‘Peter,’ Amazon by ‘and the zone,’ and Echo by ‘tobacco.’”
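Real assistants match wake words against acoustic models, not text, but the effect of a forgiving threshold can be sketched with a toy string-similarity matcher (this is purely illustrative and not any vendor's actual algorithm): a strict threshold rejects a near-miss like “OK cool,” while a permissive one accepts it.

```python
# Toy illustration of a forgiving wake-word matcher. Real systems compare
# acoustic features, not strings; the threshold effect is analogous.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1] between two phrases."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_wake_word(phrase: str, wake_word: str, threshold: float) -> bool:
    """Accept any phrase whose similarity to the wake word clears the threshold."""
    return similarity(phrase, wake_word) >= threshold

print(similarity("OK cool", "OK Google"))          # 0.75
print(is_wake_word("OK cool", "OK Google", 0.8))   # False: strict threshold rejects
print(is_wake_word("OK cool", "OK Google", 0.7))   # True: forgiving threshold misfires
```

Lowering the threshold reduces the chance of missing a genuine command but admits near-misses, which is the trade-off the researchers' list of more than 1,000 false triggers exposes.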
The privacy aspect is the most worrying, as the devices send snippets of conversation back to the companies to be analyzed after a false trigger. Anyone using these devices should know that private conversations may no longer be as private as they once were.