Team for Research in
Ubiquitous Secure Technology

Keyboard Acoustic Emanations Revisited
Li Zhuang, Feng Zhou

Citation
Li Zhuang, Feng Zhou. "Keyboard Acoustic Emanations Revisited". Talk or presentation, 27 April 2006; Poster given at Trust NSF Site Visit.

Abstract
I propose a new approach to analyzing keyboard acoustic emanations. The emanations produced by typing on a keyboard can be used as a source of attack. I present a novel attack that takes as input a couple of minutes of sound recording of a user typing English text on a keyboard and recovers the typed characters. There is no need for a labeled training recording. A recognizer bootstrapped this way can even recognize random text such as passwords. My attack uses the statistical constraints of the underlying content, English text, to reconstruct text from sound recordings without any labeled training data. It combines standard machine learning and speech recognition techniques, including cepstrum features, Hidden Markov Models, linear classification, and feedback-based incremental learning. Based on this experience of recovering text from keyboard acoustic emanations, I propose investigating a systematic way to compare the information leaked by typing on keyboards and the effectiveness of different attack techniques. Furthermore, I propose studying defenses in different practical scenarios.
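
The pipeline the abstract outlines begins with unsupervised steps: segment individual keystrokes out of the recording, extract cepstrum features for each keystroke, and cluster the feature vectors into key classes before any language-model correction. The Python sketch below illustrates only those first steps under stated assumptions; the file name, energy thresholds, segment length, and cluster count are all illustrative choices and not values from the work, and the Hidden Markov Model correction and feedback-based retraining stages described in the abstract are omitted.

    # Minimal sketch: keystroke detection, cepstrum (MFCC) features, clustering.
    # Assumptions: a mono recording "typing.wav" (hypothetical name), energy-spike
    # keystroke detection, and ~30 key classes for clustering.
    import numpy as np
    import librosa
    from sklearn.cluster import KMeans

    def detect_keystrokes(signal, sr, win=0.01, thresh_ratio=8.0, gap=0.1):
        """Return sample indices where short-time energy spikes above the noise floor."""
        hop = int(win * sr)
        frames = librosa.util.frame(signal, frame_length=hop, hop_length=hop)
        energy = (frames ** 2).sum(axis=0)        # energy per 10 ms frame
        floor = np.median(energy)                 # rough noise-floor estimate
        onsets, last = [], -gap * sr
        for i, e in enumerate(energy):
            t = i * hop
            # keep a spike only if it clears the threshold and is not too close
            # to the previous keystroke (debouncing by a minimum gap)
            if e > thresh_ratio * floor and t - last > gap * sr:
                onsets.append(t)
                last = t
        return onsets

    def keystroke_features(signal, sr, onsets, dur=0.04, n_mfcc=16):
        """One cepstrum (MFCC) feature vector per detected keystroke."""
        feats, n = [], int(dur * sr)
        for t in onsets:
            seg = signal[t:t + n]
            if len(seg) < n:
                continue
            mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=n_mfcc,
                                        n_fft=512, hop_length=128)
            feats.append(mfcc.mean(axis=1))       # average over frames in the segment
        return np.array(feats)

    signal, sr = librosa.load("typing.wav", sr=None, mono=True)
    onsets = detect_keystrokes(signal, sr)
    X = keystroke_features(signal, sr, onsets)
    labels = KMeans(n_clusters=30, n_init=10, random_state=0).fit_predict(X)
    print(f"{len(onsets)} keystrokes detected; first cluster labels: {labels[:20]}")

In the full attack, these cluster labels would then be mapped to characters using the statistical constraints of English text, and the resulting high-confidence output would be fed back to train a supervised classifier; this sketch deliberately stops at the clustering stage.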

Citation formats  
  • HTML
    Li Zhuang, Feng Zhou. <a
    href="http://www.truststc.org/pubs/92.html"
    ><i>Keyboard Acoustic Emanations
    Revisited</i></a>, Talk or presentation, 27
    April 2006; Poster given at Trust NSF Site Visit.
  • Plain text
    Li Zhuang, Feng Zhou. "Keyboard Acoustic Emanations
    Revisited". Talk or presentation, 27 April 2006;
    Poster given at Trust NSF Site Visit.
  • BibTeX
    @presentation{ZhuangZhou06_KeyboardAcousticEmanationsRevisited,
        author = {Li Zhuang and Feng Zhou},
        title = {Keyboard Acoustic Emanations Revisited},
        day = {27},
        month = {April},
        year = {2006},
        note = {Poster given at Trust NSF Site Visit.},
        abstract = {I propose a new approach to analyzing keyboard
                  acoustic emanations. The emanations produced by
                  typing on a keyboard can be used as a source of
                  attack. I present a novel attack that takes as
                  input a couple of minutes of sound recording of a
                  user typing English text on a keyboard and
                  recovers the typed characters. There is no need
                  for a labeled training recording. A recognizer
                  bootstrapped this way can even recognize random
                  text such as passwords. My attack uses the
                  statistical constraints of the underlying content,
                  English text, to reconstruct text from sound
                  recordings without any labeled training data. It
                  combines standard machine learning and speech
                  recognition techniques, including cepstrum
                  features, Hidden Markov Models, linear
                  classification, and feedback-based incremental
                  learning. Based on this experience of recovering
                  text from keyboard acoustic emanations, I propose
                  investigating a systematic way to compare the
                  information leaked by typing on keyboards and the
                  effectiveness of different attack techniques.
                  Furthermore, I propose studying defenses in
                  different practical scenarios.},
        URL = {http://www.truststc.org/pubs/92.html}
    }
    

Posted by Christopher Brooks on 4 May 2006.
Groups: trust
For additional information, see the Publications FAQ or contact the webmaster at www truststc org.

Notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.