A recent research paper from Durham University in the UK revealed a powerful AI-driven attack that can decipher keyboard inputs solely based on subtle acoustic cues from keystrokes.
Published on arXiv on Aug. 3, the paper “A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards” demonstrates how deep learning techniques can power remarkably accurate acoustic side-channel attacks, far surpassing the capabilities of traditional methods.
The researchers developed a deep neural network model utilizing Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) architectures. When tested in controlled environments on a MacBook Pro laptop, this model achieved 95% accuracy in identifying keystrokes from audio recorded via a smartphone.
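Before a model like this can classify anything, the individual keystrokes must first be isolated from the raw recording. The paper's exact pipeline isn't reproduced here, but a common first step in acoustic side-channel work is energy-based segmentation: split the audio into short frames and flag frames whose energy spikes above the background level. The sketch below is a minimal, hypothetical illustration of that idea using only NumPy; the frame length, threshold ratio, and synthetic "clicks" are all assumptions, not values from the study.

```python
import numpy as np

def segment_keystrokes(audio, sr=44100, frame_ms=10, threshold_ratio=4.0):
    """Find candidate keystroke onsets by comparing short-frame energy
    against the median background energy (a simplifying assumption)."""
    frame = int(sr * frame_ms / 1000)
    n = len(audio) // frame
    frames = audio[: n * frame].reshape(n, frame)
    energy = (frames ** 2).sum(axis=1)
    threshold = np.median(energy) * threshold_ratio
    loud = energy > threshold
    # Collapse runs of consecutive loud frames into single onset events.
    onsets = []
    prev = False
    for i, is_loud in enumerate(loud):
        if is_loud and not prev:
            onsets.append(i * frame)  # sample index where a keystroke starts
        prev = is_loud
    return onsets

# Synthetic demo: one second of faint noise with two loud "click" bursts.
rng = np.random.default_rng(0)
audio = rng.normal(0, 0.01, 44100)
audio[10000:10400] += rng.normal(0, 0.5, 400)  # fake keystroke 1
audio[30000:30400] += rng.normal(0, 0.5, 400)  # fake keystroke 2
print(len(segment_keystrokes(audio)))  # expect 2 detected events
```

In a full attack, each detected window would then be converted to a spectrogram and fed to the CNN-plus-LSTM classifier; this sketch covers only the segmentation step.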
Remarkably, even with the noise and compression introduced by VoIP applications like Zoom, the model maintained 93% accuracy – the highest reported for this medium. This contrasts sharply with previous acoustic attack methods, which have struggled to exceed 60% accuracy under ideal conditions.
The study leveraged an extensive dataset of over 300,000 keystroke samples captured across various mechanical and chiclet-style keyboards. The model demonstrated versatility across keyboard types, although performance could vary based on specific keyboard make and model.
According to the researchers, these results demonstrate the practical feasibility of acoustic side-channel attacks using only off-the-shelf equipment and algorithms. The ease of implementing such attacks raises concerns for industries like finance and cryptocurrency, where password security is critical.
While deep learning enables more powerful attacks, the study also explores mitigation techniques. The researchers suggest several safeguards users can employ to thwart these acoustic attacks:

- Enable two-factor authentication, so a recovered password alone is not enough to compromise an account.
- Play fake keystroke sounds during VoIP calls to mask genuine typing.
- Change typing behavior, such as adopting touch typing.
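The fake-keystroke mitigation is simple to picture in code: randomly timed decoy click sounds are mixed into the outgoing audio so an eavesdropper cannot tell real keystrokes from noise. The sketch below is a hypothetical illustration using NumPy; the decoy rate, decoy bank, and synthetic audio are assumptions for demonstration, not details from the paper.

```python
import numpy as np

def mask_with_decoys(audio, decoy_bank, sr=44100, rate_hz=8.0, seed=1):
    """Overlay randomly timed, randomly chosen decoy keystroke sounds
    onto an outgoing audio buffer, so genuine keystrokes are harder
    for an eavesdropper to isolate."""
    rng = np.random.default_rng(seed)
    out = audio.copy()
    n_decoys = int(len(audio) / sr * rate_hz)  # average decoys per second
    for _ in range(n_decoys):
        decoy = decoy_bank[rng.integers(len(decoy_bank))]
        start = rng.integers(0, len(out) - len(decoy))
        out[start:start + len(decoy)] += decoy
    return out

# Demo with synthetic decoys standing in for recorded click samples.
rng = np.random.default_rng(0)
mic = rng.normal(0, 0.01, 44100)                    # one second of mic audio
bank = [rng.normal(0, 0.3, 300) for _ in range(5)]  # five fake "click" bursts
masked = mask_with_decoys(mic, bank)
print(masked.shape)  # (44100,)
```

A real implementation would sit in the audio path of the VoIP client and use recorded keystroke samples as decoys, but the mixing principle is the same.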
This pioneering research spotlights acoustic emanations as a ripe and underestimated attack surface. At the same time, it lays the groundwork for fostering greater awareness and developing robust countermeasures. Continued innovation on both sides of the security divide will be crucial.
The post Protect against new AI attack vector using keyboard sounds to guess passwords over Zoom appeared first on CryptoSlate.