Virtual assistants seem to be virtually everywhere these days. They’re being embedded in phones and motor vehicles and used in more and more homes and workplaces. Despite their utility and convenience, however, virtual assistants present some security risks. Here are the six most often cited:
1) Virtual devices may be hearing too much.
What happens when you think they’re turned off and they’re not? Are they still “listening” to what’s being said in the background? If so, they could be gathering information that could be stored, accessed and replayed by or to a third party—possibly for nefarious purposes. One couple discovered that their virtual assistant, having heard certain keywords in a conversation between them, had recorded it and emailed it to the husband’s administrative assistant at work.
2) Virtual devices can’t apply context to words.
While virtual assistants are certainly “smart” in their ability to listen and produce an appropriate response, they lack the human capacity to apply context or nuance to what’s being said. So, it’s very possible that the device may misunderstand a comment—and the speaker may inadvertently be creating a permanent log that could be misconstrued and later used maliciously.
3) Users’ voice commands are stored in the cloud.
And users often have little visibility into those voice records and limited control over who can access them.
4) IoT devices controlled by virtual assistants can get lost, stolen or compromised.
This can become a problem if the devices fall into the hands of individuals who intend to cause harm.
5) Virtual assistants are susceptible to “dolphin attacks.”
These are cyberattacks driven by ultrasonic audio waves. These sound waves are difficult for humans to hear, but smart assistants interpret them as commands. The best way to prevent such attacks is to turn off the virtual assistant until it’s needed and introduce a confirmation protocol for certain commands. This protocol can be set up to work only in sensitive situations (as a sort of two-factor authentication for a certain subset of commands).
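The confirmation protocol described above can be sketched in a few lines. This is a minimal, hypothetical illustration—the command names, the `SENSITIVE_COMMANDS` set and the `handle_command` function are assumptions for this example, not part of any real assistant’s API:

```python
# Hypothetical sketch: gate a subset of "sensitive" voice commands behind an
# explicit confirmation step, so an inaudible ultrasonic command alone can't
# trigger them. Names here are illustrative, not a real assistant API.

SENSITIVE_COMMANDS = {"unlock_door", "purchase", "transfer_funds"}

def handle_command(command: str, confirm) -> str:
    """Run a command; sensitive ones require a positive confirmation callback."""
    if command in SENSITIVE_COMMANDS:
        if not confirm(command):
            return f"{command}: rejected (no confirmation)"
        return f"{command}: executed after confirmation"
    # Ordinary commands pass through without the extra step.
    return f"{command}: executed"

# Example confirmation callback: in a real device this might be a spoken PIN
# or a button press; here it only ever approves purchases.
approve_purchases = lambda cmd: cmd == "purchase"

print(handle_command("play_music", approve_purchases))
print(handle_command("purchase", approve_purchases))
print(handle_command("unlock_door", approve_purchases))
```

The point of the design is that the confirmation channel (a button, PIN or on-screen prompt) is one an ultrasonic attacker cannot easily reach, so the "dolphin" command fails even if the microphone accepts it.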
6) Since virtual assistants are used for shopping and other monetary transactions, they may have access to bank and financial records.
Conceivably, they could automatically order merchandise via voice commands—which could open up a number of potential problems. If a third party were to ask the device to order illegal goods, for example, the owner of the virtual assistant would be implicated. And since voice records are stored in the cloud, security is in the hands of the operator of the smart assistant and/or their cloud provider. The best way to minimize the financial risks is for users to turn off the microphone or the purchase feature on any IoT device when they’re shopping.
With the use of virtual assistants growing rapidly, everyone needs to be aware of these risks. Do your customers a favor and share this information with them. While the threats are real, taking common-sense precautions is a good first line of defense.
To learn more about cybersecurity and virtual assistants, contact Ingram Micro’s Thomas Norman.