
New Deep Learning Model Can Steal Data from Sound of Keystrokes


August 07, 2023


Researchers from Durham University, the University of Surrey, and Royal Holloway, University of London have developed a deep learning model that can steal data from keyboard keystrokes with alarming accuracy.

In an era of unprecedented technological advancement, privacy and security have become prime concerns. The recent breakthrough by researchers Joshua Harrison, Ehsan Toreini and Maryam Mehrnezhad has pushed these concerns to the forefront, as they have trained a deep learning model that can steal data by listening to keystrokes, with 95% accuracy.

How Does It Work?

"CoAtNet," the model used by the team, was trained using recordings of 36 keys on a MacBook Pro pressed 25 times each. By generating waveforms and spectrograms, the researchers were able to highlight noticeable differences between distinct key presses. They used this data to identify individual keystrokes and feed them into the CoAtNet image classifier.
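The first stage of that pipeline, isolating individual key presses from a recording and turning each one into a spectrogram "image" for a classifier, can be sketched in a few lines of Python. This is a minimal illustration using numpy and scipy; the energy threshold, window sizes, and snippet length are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np
from scipy.signal import spectrogram

def segment_keystrokes(audio, sr, win_ms=10, energy_ratio=6.0, press_ms=300):
    """Split a recording into fixed-length snippets around energy peaks.
    The thresholds here are illustrative, not the paper's exact values."""
    win = int(sr * win_ms / 1000)
    n_windows = len(audio) // win
    energy = np.array([np.sum(audio[i * win:(i + 1) * win] ** 2)
                       for i in range(n_windows)])
    threshold = energy_ratio * np.median(energy) + 1e-12
    press_len = int(sr * press_ms / 1000)
    snippets, i = [], 0
    while i < n_windows:
        if energy[i] > threshold:
            start = i * win
            snippets.append(audio[start:start + press_len])
            i += press_len // win  # skip past the rest of this press
        else:
            i += 1
    return snippets

def to_log_spectrogram(snippet, sr):
    """Turn one keystroke snippet into a log-magnitude spectrogram 'image'
    that an image classifier like CoAtNet could consume."""
    _, _, sxx = spectrogram(snippet, fs=sr, nperseg=256, noverlap=192)
    return np.log(sxx + 1e-10)

# Synthetic demo: two sharp 'clicks' embedded in quiet background noise.
np.random.seed(0)
sr = 16000
audio = 0.001 * np.random.randn(sr)  # 1 second of low-level noise
for pos in (2000, 9000):             # two fake key presses
    audio[pos:pos + 400] += np.hanning(400) * np.sin(
        2 * np.pi * 3000 * np.arange(400) / sr)

snips = segment_keystrokes(audio, sr)
print(len(snips))                    # both presses detected
img = to_log_spectrogram(snips[0], sr)
print(img.shape)                     # a 2-D array, ready for classification
```

Each resulting spectrogram is just a 2-D array, which is why an off-the-shelf image classifier can be applied to what started as an audio problem.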

This approach yielded an impressive 95% prediction accuracy on smartphone audio recordings and 93% on keystrokes recorded over Zoom. Even over Skype, accuracy reached 91.7%, a figure that, while lower, is still highly effective.

Implications and Dangers

The potential implications of this discovery are terrifying. Not only could passwords be deciphered, but entire discussions and personal data could be leaked by merely listening to keystrokes. Unlike other side-channel attacks, this method doesn't require laboratory conditions, nor is it limited by data rate or distance.

This new attack could be executed through malware on a target's phone or laptop or by infiltrating a Zoom call. The accessibility and accuracy make it a formidable threat to privacy and security.

Mitigation Measures and Challenges

Mitigating this form of attack is complex. Randomizing passwords, altering typing style, or contaminating recordings with additional sound might help, but the adaptability of machine learning models could make these countermeasures ineffective.
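One of those countermeasures, contaminating the recording with additional sound, can be illustrated with a short numpy sketch. The mask amplitude below is an arbitrary assumption chosen so the keystroke ends up quieter than the masking noise:

```python
import numpy as np

rng = np.random.default_rng(1)
sr = 16000

# A single synthetic keystroke 'click' in an otherwise silent second of audio.
keystroke = np.zeros(sr)
keystroke[4000:4400] = np.hanning(400) * np.sin(
    2 * np.pi * 3000 * np.arange(400) / sr)

# Broadband masking noise mixed into the microphone feed.
mask = 0.5 * rng.standard_normal(sr)
contaminated = keystroke + mask

def snr_db(signal, noise):
    """Ratio of signal energy to noise energy, in decibels."""
    return 10 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

# A negative SNR means the keystroke is buried below the masking noise.
print(f"keystroke-to-mask SNR: {snr_db(keystroke, mask):.1f} dB")
```

As the researchers note, however, a model retrained on contaminated audio may learn to see through a fixed masking strategy, which is why such countermeasures are not considered reliable on their own.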

The researchers suggest that using password managers, enabling biometrics or deploying specialized software to filter out keystroke sounds could be more effective. Yet these are not foolproof solutions, especially if the attacker aims to "listen" to entire conversations rather than just passwords.


The paper, "A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards," has opened a new front in the battle for digital privacy. As technology expands, so do the threats to our security. This groundbreaking research illustrates how creative and devastating cyber-attacks can become.

Though the experiment was performed on a relatively quiet keyboard, and certain countermeasures might seem adequate, the reality is that the battle against this type of attack has only just begun. The full paper details the methodology and potential countermeasures.

The challenge lies in developing new, better defenses against this emerging threat, as our keyboards have become an unlikely but profound vulnerability in our digital lives.

Specialized software like Bitdefender Password Manager can help you avoid typing passwords out at all, leaving no keystroke sounds to capture. Key features include:

  • Password generator that can create secure, complex, unique passwords for every new account at the press of a button
  • Automatic password-capturing module that stores your passwords immediately after creating them
  • Intelligent password autofill module that automatically inputs your credentials on previously visited websites




Vlad's love for technology and writing created rich soil for his interest in cybersecurity to sprout into a full-on passion. Before becoming a Security Analyst, he covered tech and security topics.
