Team for Research in Ubiquitous Secure Technology (TRUST)

Open Problems in the Security of Learning
Marco Barreno, Peter L. Bartlett, Fuching Jack Chi, Anthony Joseph, Blaine Nelson, Benjamin I. Rubinstein, Udam Saini, Doug Tygar

Citation
Marco Barreno, Peter L. Bartlett, Fuching Jack Chi, Anthony Joseph, Blaine Nelson, Benjamin I. Rubinstein, Udam Saini, Doug Tygar. "Open Problems in the Security of Learning". Talk or presentation, 11 November 2008.

Abstract
Machine learning has become a valuable tool for detecting and preventing malicious activity. However, as more applications employ machine learning techniques in adversarial decision-making situations, increasingly powerful attacks become possible against machine learning systems. In this paper, we present three broad research directions towards the end of developing truly secure learning. First, we suggest that finding bounds on adversarial influence is important to understand the limits of what an attacker can and cannot do to a learning system. Second, we investigate the value of adversarial capabilities—the success of an attack depends largely on what types of information and influence the attacker has. Finally, we propose directions in technologies for secure learning and suggest lines of investigation into secure techniques for learning in adversarial environments. We intend this paper to foster discussion about the security of machine learning, and we believe that the research directions we propose represent the most important directions to pursue in the quest for secure learning.

Citation formats
  • HTML
    Marco Barreno, Peter L. Bartlett, Fuching Jack Chi, Anthony
    Joseph, Blaine Nelson, Benjamin I. Rubinstein, Udam Saini,
    Doug Tygar.
    <a href="http://www.truststc.org/pubs/488.html"><i>Open Problems
    in the Security of Learning</i></a>, Talk or presentation,
    11 November 2008.
  • Plain text
    Marco Barreno, Peter L. Bartlett, Fuching Jack Chi, Anthony
    Joseph, Blaine Nelson, Benjamin I. Rubinstein, Udam Saini,
    Doug Tygar. "Open Problems in the Security of Learning".
    Talk or presentation, 11 November 2008.
  • BibTeX
    @misc{BarrenoBartlettChiJosephNelsonRubinsteinSainiTygar08_OpenProblemsInSecurityOfLearning,
        author = {Marco Barreno and Peter L. Bartlett and Fuching
                  Jack Chi and Anthony Joseph and Blaine Nelson and
                  Benjamin I. Rubinstein and Udam Saini and Doug
                  Tygar},
        title = {Open Problems in the Security of Learning},
        howpublished = {Talk or presentation},
        day = {11},
        month = {November},
        year = {2008},
        abstract = {Machine learning has become a valuable tool for
                  detecting and preventing malicious activity.
                  However, as more applications employ machine
                  learning techniques in adversarial decision-making
                  situations, increasingly powerful attacks become
                  possible against machine learning systems. In this
                  paper, we present three broad research directions
                  towards the end of developing truly secure
                  learning. First, we suggest that finding bounds on
                  adversarial influence is important to understand
                  the limits of what an attacker can and cannot do
                  to a learning system. Second, we investigate the
                  value of adversarial capabilities---the success of
                  an attack depends largely on what types of
                  information and influence the attacker has.
                  Finally, we propose directions in technologies for
                  secure learning and suggest lines of investigation
                  into secure techniques for learning in adversarial
                  environments. We intend this paper to foster
                  discussion about the security of machine learning,
                  and we believe that the research directions we
                  propose represent the most important directions to
                  pursue in the quest for secure learning.},
        URL = {http://www.truststc.org/pubs/488.html}
    }

Posted by Jessica Gamble on 23 Jan 2009.
For additional information, see the Publications FAQ or contact the webmaster at www.truststc.org.

Notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.