Secure Learning in Adversarial Environments
Bo Li

Citation
Bo Li. "Secure Learning in Adversarial Environments". Talk or presentation, 23, August, 2017.

Abstract
Advances in machine learning have led to rapid and widespread deployment of software-based inference and decision making, resulting in various applications such as data analytics, autonomous systems, and security diagnostics. Current machine learning systems, however, assume that training and test data follow the same, or similar, distributions, and do not consider active adversaries manipulating either distribution. Recent work has demonstrated that motivated adversaries can circumvent anomaly detection or classification models at test time through evasion attacks, or can inject well-crafted malicious instances into training data to induce errors in classification through poisoning attacks. In addition, by undermining the integrity of learning systems, the privacy of users' data can also be compromised. In this talk, I will describe my recent research on physical attacks, especially poisoning attacks, to show the vulnerabilities of current machine learning systems. I will also give an example of a general generative adversarial network (GAN)-based malware generation and detection system to illustrate the adversarial learning idea.
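
The two attack classes named in the abstract can be illustrated concretely. Below is a minimal, self-contained Python/NumPy sketch (not the method from the talk) of a test-time evasion attack via a fast-gradient-sign perturbation and a training-time poisoning attack via label flipping, both against a toy logistic-regression model. The data, parameters, and helper names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy binary data: two 2-D Gaussian blobs, one per class.
    X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)),
                   rng.normal(+1.0, 1.0, (200, 2))])
    y = np.concatenate([np.zeros(200), np.ones(200)])

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train(X, y, steps=500, lr=0.1):
        # Plain gradient-descent logistic regression (the "victim" model).
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(steps):
            p = sigmoid(X @ w + b)
            w -= lr * X.T @ (p - y) / len(y)
            b -= lr * np.mean(p - y)
        return w, b

    def accuracy(w, b, X, y):
        return np.mean((sigmoid(X @ w + b) > 0.5) == y)

    w, b = train(X, y)

    # Evasion (test time): FGSM-style step in the loss-increasing direction.
    def fgsm(x, label, w, b, eps=0.5):
        p = sigmoid(x @ w + b)
        grad_x = (p - label) * w              # d(cross-entropy)/dx for this model
        return x + eps * np.sign(grad_x)

    x_clean = X[0]                            # a class-0 test point
    x_adv = fgsm(x_clean, 0.0, w, b)
    print("clean score:      ", sigmoid(x_clean @ w + b))   # near 0
    print("adversarial score:", sigmoid(x_adv @ w + b))     # pushed toward class 1

    # Poisoning (training time): inject points that look like class 0 but carry
    # label 1, then retrain; this typically degrades accuracy on the clean data.
    X_poison = rng.normal(-1.0, 0.3, (80, 2))
    y_poison = np.ones(80)
    w_p, b_p = train(np.vstack([X, X_poison]), np.concatenate([y, y_poison]))
    print("clean-model accuracy:   ", accuracy(w, b, X, y))
    print("poisoned-model accuracy:", accuracy(w_p, b_p, X, y))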

Electronic downloads


Internal. This publication has been marked by the author for FORCES-only distribution, so electronic downloads are not available without logging in.
Citation formats
  • HTML
    Bo Li. <a href="http://www.cps-forces.org/pubs/274.html"><i>Secure Learning in Adversarial Environments</i></a>, Talk or presentation, 23 August 2017.
  • Plain text
    Bo Li. "Secure Learning in Adversarial Environments". Talk or presentation, 23 August 2017.
  • BibTeX
    @presentation{Li17_SecureLearningInAdversarialEnvironments,
        author = {Bo Li},
        title = {Secure Learning in Adversarial Environments},
        day = {23},
        month = {August},
        year = {2017},
        abstract = {Advances in machine learning have led to rapid and
                  widespread deployment of software-based inference and
                  decision making, resulting in various applications
                  such as data analytics, autonomous systems, and
                  security diagnostics. Current machine learning
                  systems, however, assume that training and test data
                  follow the same, or similar, distributions, and do
                  not consider active adversaries manipulating either
                  distribution. Recent work has demonstrated that
                  motivated adversaries can circumvent anomaly
                  detection or classification models at test time
                  through evasion attacks, or can inject well-crafted
                  malicious instances into training data to induce
                  errors in classification through poisoning attacks.
                  In addition, by undermining the integrity of learning
                  systems, the privacy of users' data can also be
                  compromised. In this talk, I will describe my recent
                  research on physical attacks, especially poisoning
                  attacks, to show the vulnerabilities of current
                  machine learning systems. I will also give an example
                  of a general generative adversarial network
                  (GAN)-based malware generation and detection system
                  to illustrate the adversarial learning idea.},
        URL = {http://cps-forces.org/pubs/274.html}
    }
    

Posted by Carolyn Winter on 24 Aug 2017.
Groups: forces

Notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.