
Researchers warn of growing voice AI vulnerabilities

Researchers are warning that with the rise of voice AI, there's a rise in vulnerabilities as well.

Tech giants across the globe are racing to build voice-controlled AI systems that make everyday life easier. Researchers, however, warn that as voice AI spreads, so do the vulnerabilities that come with it.

The latest research claims that attackers could exploit these vulnerabilities by issuing commands at frequencies beyond the range of human hearing, triggering actions such as sending messages or making purchases, all without the user realizing it.

UC Berkeley researchers Nicholas Carlini and David Wagner claim they were able to fool Mozilla's open-source DeepSpeech speech-to-text engine by hiding a secret, inaudible command within the audio of a completely different phrase, CNET reported.

The researchers also claim to have hidden the rogue command within brief snippets of music, with the same result.
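
Conceptually, attacks of this kind treat the speech-to-text model as a differentiable function and optimize a tiny perturbation of the waveform until the model outputs an attacker-chosen transcription. The PyTorch sketch below illustrates that general idea only; the model interface, token encoding, and hyperparameters are hypothetical placeholders, not the researchers' actual code or DeepSpeech's real API.

```python
import torch

def craft_adversarial_audio(model, waveform, target_tokens,
                            steps=1000, lr=1e-3, max_amp=0.005):
    """Optimize a tiny perturbation so the model transcribes target_tokens.

    `model` stands in for any differentiable speech-to-text network that
    maps a waveform to per-frame log-probabilities of shape
    (time, batch, vocab). This is an illustrative assumption, not
    DeepSpeech's interface.
    """
    delta = torch.zeros_like(waveform, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    ctc = torch.nn.CTCLoss(blank=0)

    for _ in range(steps):
        optimizer.zero_grad()
        log_probs = model(waveform + delta)                # (T, 1, vocab)
        loss = ctc(log_probs,
                   target_tokens.unsqueeze(0),             # (1, S)
                   torch.tensor([log_probs.size(0)]),      # input length
                   torch.tensor([target_tokens.numel()]))  # target length
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            # Clamp the perturbation so it stays quiet enough to go unnoticed.
            delta.clamp_(-max_amp, max_amp)

    return (waveform + delta).detach()
```

Because the perturbation is kept small, the doctored clip still sounds like the original phrase or music to a human listener, while the model hears the hidden command.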

This is not the first demonstration of voice AI's vulnerability. Last year, researchers in China used inaudible, ultrasonic transmissions to trigger popular voice assistants such as Siri, Alexa, Cortana, and Google Assistant. The method, dubbed 'DolphinAttack', requires the attacker to be within a short distance of the phone or smart speaker.

However, more recent studies claim the attack can be amplified and carried out from as far as 25 feet away.
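
The trick behind DolphinAttack is amplitude modulation: the voice command is shifted onto an ultrasonic carrier that humans cannot hear, and the non-linearity of the device's microphone demodulates it back into the audible band. Here is a minimal NumPy sketch of that modulation step, with illustrative parameters rather than the exact values from the DolphinAttack paper:

```python
import numpy as np

def modulate_ultrasonic(command, sample_rate=192_000, carrier_hz=25_000):
    """Amplitude-modulate a baseband voice command onto an ultrasonic carrier.

    `command` is assumed to be a float array in [-1, 1] sampled at
    `sample_rate`; the 25 kHz carrier is an illustrative choice just above
    the ~20 kHz limit of human hearing.
    """
    t = np.arange(len(command)) / sample_rate
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Classic AM: the command rides on the carrier as sidebands around
    # 25 kHz. The microphone's non-linear response effectively squares the
    # incoming signal, recreating an audible-band copy of the command that
    # the voice assistant then transcribes as if it were spoken aloud.
    return (1.0 + 0.5 * command) * carrier
```

Playing such a signal requires hardware capable of ultrasonic output, which is one reason the original attack worked only at close range; the longer-range variants reported since rely on more powerful transmitters.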


(Source: ANI)