Team for Research in Ubiquitous Secure Technology

Can Machine Learning Be Secure?
Marco Barreno, Blaine Nelson, Russell Sears, Anthony D. Joseph, J. D. Tygar

Citation
Marco Barreno, Blaine Nelson, Russell Sears, Anthony D. Joseph, J. D. Tygar. "Can Machine Learning Be Secure?". Talk or presentation, 27 April 2006; Poster Given at the NSF Trust Site Visit.

Abstract
Statistical learning is an invaluable tool that is increasingly being used in security-sensitive applications, but little attention has been paid to the possibility that new vulnerabilities may be introduced by learning systems. We investigate a broad class of statistical learning algorithms and show that their use creates new potential vulnerabilities that an attacker may be able to exploit. We discuss and analyze the range of potential attacks and their effects. We also explore defenses along the lines of adding robustness to the algorithms and selecting appropriate algorithms and model parameters in the first place. Finally, we present some theoretical analysis and experiments to evaluate the attacks and defenses.
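
Illustration (not part of the original poster): the abstract's central claims are that an attacker who can influence training data can shift what a learner treats as "normal," and that adding robustness to the algorithms is one line of defense. The Python sketch below is a hypothetical toy example of such a training-time (causative) attack on a one-dimensional anomaly detector; all values and names in it are illustrative assumptions, not taken from the paper. A detector that learns its center with the mean is pulled strongly toward injected points, while a median-based center, being outlier-robust, moves far less under the same contamination.

    import random

    random.seed(1)

    def mean(xs):
        return sum(xs) / len(xs)

    def median(xs):
        s = sorted(xs)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

    # Clean training data: 200 samples of a benign feature, roughly N(0, 1).
    clean = [random.gauss(0.0, 1.0) for _ in range(200)]

    # Causative attack: the adversary controls ~23% of the training set and
    # injects high-valued points to drag the learned "normal" center upward,
    # widening what the detector will later accept.
    poison = [5.0] * 60
    poisoned = clean + poison

    # Compare how far each estimator's center moves under the same attack.
    for estimator in (mean, median):
        before = estimator(clean)
        after = estimator(poisoned)
        print(f"{estimator.__name__:>6}: center {before:+.2f} -> {after:+.2f} "
              f"(shift {after - before:+.2f})")

In this sketch the mean shifts by roughly +1.15 while the median shifts by only about +0.39. The median resists as long as the attacker controls less than half of the training set (its breakdown point), which is one concrete sense in which adding robustness limits, but does not eliminate, such attacks.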

Citation formats  
  • HTML
    Marco Barreno, Blaine Nelson, Russell Sears, Anthony D. Joseph, J. D. Tygar. <a href="http://www.truststc.org/pubs/71.html"><i>Can Machine Learning Be Secure?</i></a>, Talk or presentation, 27 April 2006; Poster Given at the NSF Trust Site Visit.
  • Plain text
    Marco Barreno, Blaine Nelson, Russell Sears, Anthony D. Joseph, J. D. Tygar. "Can Machine Learning Be Secure?". Talk or presentation, 27 April 2006; Poster Given at the NSF Trust Site Visit.
  • BibTeX
    @presentation{BarrenoNelsonSearsJosephTygar06_CanMachineLearningBeSecure,
        author = {Marco Barreno and Blaine Nelson and Russell Sears
                  and Anthony D. Joseph and J. D. Tygar},
        title = {Can Machine Learning Be Secure?},
        day = {27},
        month = {April},
        year = {2006},
        note = {Poster Given at the NSF Trust Site Visit},
        abstract = {Statistical learning is an invaluable tool that is
                  increasingly being used in security-sensitive
                  applications, but little attention has been paid
                  to the possibility that new vulnerabilities may be
                  introduced by learning systems. We investigate a
                  broad class of statistical learning algorithms and
                  show that their use creates new potential
                  vulnerabilities that an attacker may be able to
                  exploit. We discuss and analyze the range of
                  potential attacks and their effects. We also
                  explore defenses along the lines of adding
                  robustness to the algorithms and selecting
                  appropriate algorithms and model parameters in the
                  first place. Finally, we present some theoretical
                  analysis and experiments to evaluate the attacks
                  and defenses.},
        URL = {http://www.truststc.org/pubs/71.html}
    }
    

Posted by Christopher Brooks on 4 May 2006.
Groups: trust
For additional information, see the Publications FAQ or contact webmaster at www truststc org.

Notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.