Team for Research in Ubiquitous Secure Technology

ANTIDOTE: Understanding and Defending against Poisoning of Anomaly Detectors

Citation
"ANTIDOTE: Understanding and Defending against Poisoning of Anomaly Detectors". B. Rubinstein, B. Nelson, L. Huang, A. Joseph, S. Lau, S. Rao, N. Taft, and J. D. Tygar (eds.), November, 2009.

Abstract
Statistical machine learning techniques have recently garnered increased popularity as a means to improve network design and security. For intrusion detection, such methods build a model of normal behavior from training data and detect attacks as deviations from that model. This process invites adversaries to manipulate the training data so that the learned model fails to detect subsequent attacks. We evaluate poisoning techniques and develop a defense in the context of a particular anomaly detector, namely the PCA-subspace method for detecting anomalies in backbone networks. For three poisoning schemes, we show how attackers can substantially increase their chance of successfully evading detection by adding only moderate amounts of poisoned data. Moreover, such poisoning throws off the balance between false positives and false negatives, thereby dramatically reducing the efficacy of the detector. To combat these poisoning activities, we propose an antidote based on techniques from robust statistics and present a new robust PCA-based detector. Poisoning has little effect on the robust model, whereas it significantly distorts the model produced by the original PCA method. Our technique substantially reduces the effectiveness of poisoning for a variety of scenarios and indeed maintains a significantly better balance between false positives and false negatives than the original method when under attack.
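
The detector at issue, the PCA-subspace method, flags a traffic measurement as anomalous when its residual, the component lying outside a low-dimensional "normal" subspace fitted to training traffic, is unusually large. The sketch below illustrates that mechanism; it is a simplified stand-in rather than the authors' implementation. The function names, the choice of k, and the mean-plus-three-standard-deviations cutoff (a crude substitute for the Q-statistic threshold typically used with such detectors) are all assumptions made for illustration, and the paper's antidote further replaces standard PCA with a robust subspace estimator and a more robust threshold, neither of which is reproduced here.

    # A minimal sketch of PCA-subspace anomaly detection in the spirit of
    # the detector studied in this paper. Everything here (names, k, the
    # threshold rule) is an illustrative assumption, not the authors'
    # implementation.
    import numpy as np

    def fit_pca_detector(Y, k=4, alpha=3.0):
        """Fit a detector from a traffic matrix Y (rows = time, cols = links).

        A measurement is flagged as anomalous when the energy of its
        residual (the component outside the top-k principal subspace)
        exceeds a threshold learned from the training residuals.
        """
        mean = Y.mean(axis=0)
        # Top-k principal directions of the centered training traffic.
        _, _, Vt = np.linalg.svd(Y - mean, full_matrices=False)
        V = Vt[:k].T              # links x k basis of the "normal" subspace
        proj = V @ V.T            # projector onto the normal subspace

        def residual_energy(y):
            r = (y - mean) - proj @ (y - mean)  # traffic the model cannot explain
            return float(r @ r)

        # Mean + alpha * std of training residual energies: a crude stand-in
        # for the Q-statistic cutoff used with PCA-subspace detectors.
        energies = np.array([residual_energy(row) for row in Y])
        threshold = energies.mean() + alpha * energies.std()
        return lambda y: residual_energy(y) > threshold

    # Example use with synthetic stand-in data.
    rng = np.random.default_rng(0)
    train = rng.normal(size=(500, 20))      # 500 time bins, 20 links
    is_anomalous = fit_pca_detector(train)
    print(is_anomalous(train[0]))           # typically False
    print(is_anomalous(train[0] + 10.0))    # large deviation: typically True

The poisoning described in the abstract targets the subspace-fitting step: chaff traffic added along a direction the attacker later uses inflates the variance in that direction, rotating the learned subspace so that subsequent attack traffic leaves little residual energy. A robust estimator limits how much influence such outlying training points can exert.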

Citation formats  
  • HTML
     <a href="http://www.truststc.org/pubs/725.html"><i>ANTIDOTE: Understanding and Defending against Poisoning of Anomaly Detectors</i></a>, B. Rubinstein, B. Nelson, L. Huang, A. Joseph, S. Lau, S. Rao, N. Taft, and J. D. Tygar, November 2009.
  • Plain text
     "ANTIDOTE: Understanding and Defending against Poisoning of Anomaly Detectors". B. Rubinstein, B. Nelson, L. Huang, A. Joseph, S. Lau, S. Rao, N. Taft, and J. D. Tygar, November 2009.
  • BibTeX
@inproceedings{RubinsteinNelsonHuangJosephLauRaoTaftTygar09_ANTIDOTEUnderstandingDefendingAgainstPoisoningOfAnomaly,
    title = {ANTIDOTE: Understanding and Defending against
              Poisoning of Anomaly Detectors},
    author = {B. Rubinstein and B. Nelson and L. Huang and A.
              Joseph and S. Lau and S. Rao and N. Taft and J.
              D. Tygar},
    month = {November},
    year = {2009},
    abstract = {Statistical machine learning techniques have
              recently garnered increased popularity as a means
              to improve network design and security. For
              intrusion detection, such methods build a model
              of normal behavior from training data and detect
              attacks as deviations from that model. This
              process invites adversaries to manipulate the
              training data so that the learned model fails to
              detect subsequent attacks. We evaluate poisoning
              techniques and develop a defense in the context
              of a particular anomaly detector, namely the
              PCA-subspace method for detecting anomalies in
              backbone networks. For three poisoning schemes,
              we show how attackers can substantially increase
              their chance of successfully evading detection by
              adding only moderate amounts of poisoned data.
              Moreover, such poisoning throws off the balance
              between false positives and false negatives,
              thereby dramatically reducing the efficacy of the
              detector. To combat these poisoning activities,
              we propose an antidote based on techniques from
              robust statistics and present a new robust
              PCA-based detector. Poisoning has little effect
              on the robust model, whereas it significantly
              distorts the model produced by the original PCA
              method. Our technique substantially reduces the
              effectiveness of poisoning for a variety of
              scenarios and indeed maintains a significantly
              better balance between false positives and false
              negatives than the original method when under
              attack.},
    URL = {http://www.truststc.org/pubs/725.html}
}
    

Posted by Jessica Gamble on 7 Apr 2010.
For additional information, see the Publications FAQ or contact the webmaster at www.truststc.org.

Notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.