Team for Research in Ubiquitous Secure Technology

Open Problems in the Security of Learning.

Citation
"Open Problems in the Security of Learning.". M. Barreno, P. Bartlett, F. Chi, A. Joseph, B. Nelson, B. Rubinstein, U. Saini, and J. D. Tygar (eds.), Proceedings of the First ACM Workshop on AISec, 2008.

Abstract
Machine learning has become a valuable tool for detecting and preventing malicious activity. However, as more applications employ machine learning techniques in adversarial decision-making situations, increasingly powerful attacks become possible against machine learning systems. In this paper, we present three broad research directions towards the end of developing truly secure learning. First, we suggest that finding bounds on adversarial influence is important to understand the limits of what an attacker can and cannot do to a learning system. Second, we investigate the value of adversarial capabilities: the success of an attack depends largely on what types of information and influence the attacker has. Finally, we propose directions in technologies for secure learning and suggest lines of investigation into secure techniques for learning in adversarial environments. We intend this paper to foster discussion about the security of machine learning, and we believe that the research directions we propose represent the most important directions to pursue in the quest for secure learning.

Citation formats  
  • HTML
     <a href="http://www.truststc.org/pubs/744.html"><i>Open
    Problems in the Security of Learning.</i></a>, M. Barreno,
    P. Bartlett, F. Chi, A. Joseph, B. Nelson, B. Rubinstein,
    U. Saini, and J. D. Tygar, in Proceedings of the First ACM
    Workshop on AISec, 2008.
  • Plain text
     "Open Problems in the Security of Learning.".  M.
    Barreno, P. Bartlett, F. Chi, A. Joseph, B. Nelson, B.
    Rubinstein, U. Saini, and J. D. Tygar (eds.), Proceedings of
    the First ACM Workshop on AISec, 2008.
  • BibTeX
    @inproceedings{BarrenoBartlettChiJosephNelsonRubinsteinSainiTygar08_OpenProblemsInSecurityOfLearning,
        title = {Open Problems in the Security of Learning},
        author = {M. Barreno and P. Bartlett and F. Chi and A. Joseph and
                  B. Nelson and B. Rubinstein and U. Saini and J. D. Tygar},
        booktitle = {Proceedings of the First ACM Workshop on AISec},
        year = {2008},
        abstract = {Machine learning has become a valuable tool for
                  detecting and preventing malicious activity.
                  However, as more applications employ machine
                  learning techniques in adversarial decision-making
                  situations, increasingly powerful attacks become
                  possible against machine learning systems. In this
                  paper, we present three broad research directions
                  towards the end of developing truly secure
                  learning. First, we suggest that finding bounds on
                  adversarial influence is important to understand
                  the limits of what an attacker can and cannot do
                  to a learning system. Second, we investigate the
                  value of adversarial capabilities: the success of
                  an attack depends largely on what types of
                  information and influence the attacker has.
                  Finally, we propose directions in technologies for
                  secure learning and suggest lines of investigation
                  into secure techniques for learning in adversarial
                  environments. We intend this paper to foster
                  discussion about the security of machine learning,
                  and we believe that the research directions we
                  propose represent the most important directions to
                  pursue in the quest for secure learning.},
        URL = {http://www.truststc.org/pubs/744.html}
    }

Posted by Jessica Gamble on 4 May 2010.
For additional information, see the Publications FAQ or contact the webmaster at www.truststc.org.

Notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.