Team for Research in Ubiquitous Secure Technology

Regret Minimizing Audits: A Learning-theoretic Basis for Privacy Protection
Jeremiah Blocki, Nicolas Christin, Anupam Datta, Arunesh Sinha

Citation
Jeremiah Blocki, Nicolas Christin, Anupam Datta, Arunesh Sinha. "Regret Minimizing Audits: A Learning-theoretic Basis for Privacy Protection". Proceedings IEEE CSF 2011, June, 2011.

Abstract
Audit mechanisms are essential for privacy protection in permissive access control regimes, such as in hospitals where denying legitimate access requests can adversely affect patient care. Recognizing this need, we develop the first principled learning-theoretic foundation for audits. Our first contribution is a game-theoretic model that captures the interaction between the defender (e.g., hospital auditors) and the adversary (e.g., hospital employees). The model takes pragmatic considerations into account, in particular, the periodic nature of audits, a budget that constrains the number of actions that the defender can inspect, and a loss function that captures the economic impact of detected and missed violations on the organization. We assume that the adversary is worst-case, as is standard in other areas of computer security. We also formulate a desirable property of the audit mechanism in this model based on the concept of regret in learning theory. Our second contribution is an efficient audit mechanism that provably minimizes regret for the defender. This mechanism learns from experience to guide the defender’s auditing efforts. The regret bound is significantly better than prior results in the learning literature. The stronger bound is important from a practical standpoint because it implies that the recommendations from the mechanism will converge faster to the best fixed auditing strategy for the defender.
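The mechanism described above learns from past audit outcomes so that its recommendations converge to the best fixed auditing strategy in hindsight. As a rough, hypothetical illustration of that general idea (not the paper's actual algorithm), the Python sketch below runs a standard multiplicative-weights update over a handful of made-up budget allocations; every name, allocation, and loss value in it is a placeholder.

    # Illustrative sketch only: a generic multiplicative-weights regret-minimization
    # loop over a small set of candidate audit allocations ("experts"). This is not
    # the paper's mechanism; the allocations, loss model, and parameters below are
    # hypothetical placeholders.
    import math
    import random

    BUDGET = 10     # inspections the defender can afford per audit cycle
    N_ROUNDS = 50   # number of audit cycles
    ETA = 0.3       # learning rate of the exponential-weights update

    # Hypothetical fixed ways to split the inspection budget across two task types.
    allocations = [(10, 0), (7, 3), (5, 5), (3, 7), (0, 10)]
    weights = [1.0] * len(allocations)

    def observed_loss(allocation):
        """Hypothetical per-round loss in [0, 1]: the fewer inspections a task type
        gets, the larger the expected cost of missed violations."""
        miss_a = random.uniform(0, 1) * (BUDGET - allocation[0]) / BUDGET
        miss_b = random.uniform(0, 1) * (BUDGET - allocation[1]) / BUDGET
        return (miss_a + miss_b) / 2.0

    cumulative_loss = 0.0
    for t in range(N_ROUNDS):
        # Pick an allocation with probability proportional to its current weight.
        total = sum(weights)
        probs = [w / total for w in weights]
        chosen = random.choices(range(len(allocations)), weights=probs, k=1)[0]

        # Observe this round's losses (full-information setting) and update weights.
        losses = [observed_loss(a) for a in allocations]
        cumulative_loss += losses[chosen]
        for i, loss in enumerate(losses):
            weights[i] *= math.exp(-ETA * loss)

    total = sum(weights)
    print("defender's cumulative loss:", round(cumulative_loss, 2))
    print("final distribution over allocations:",
          [round(w / total, 3) for w in weights])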

Electronic downloads


Internal. This publication has been marked by the author for TRUST-only distribution, so electronic downloads are not available without logging in.
Citation formats  
  • HTML
    Jeremiah Blocki, Nicolas Christin, Anupam Datta, Arunesh Sinha.
    <a href="http://www.truststc.org/pubs/783.html">Regret Minimizing
    Audits: A Learning-theoretic Basis for Privacy Protection</a>,
    Proceedings IEEE CSF 2011, June, 2011.
  • Plain text
    Jeremiah Blocki, Nicolas Christin, Anupam Datta, Arunesh
    Sinha. "Regret Minimizing Audits: A Learning-theoretic
    Basis for Privacy Protection". Proceedings IEEE CSF
    2011, June, 2011.
  • BibTeX
    @inproceedings{BlockiChristinDattaSinha11_RegretMinimizingAuditsLearningtheoreticBasisForPrivacy,
        author = {Jeremiah Blocki and Nicolas Christin and Anupam
                  Datta and Arunesh Sinha},
        title = {Regret Minimizing Audits: A Learning-theoretic
                  Basis for Privacy Protection},
        booktitle = {Proceedings IEEE CSF 2011},
        month = {June},
        year = {2011},
        abstract = {Audit mechanisms are essential for privacy
                  protection in permissive access control regimes,
                  such as in hospitals where denying legitimate
                  access requests can adversely affect patient care.
                  Recognizing this need, we develop the first
                  principled learning-theoretic foundation for
                  audits. Our first contribution is a game-theoretic
                  model that captures the interaction between the
                  defender (e.g., hospital auditors) and the
                  adversary (e.g., hospital employees). The model
                  takes pragmatic considerations into account, in
                  particular, the periodic nature of audits, a
                  budget that constrains the number of actions that
                  the defender can inspect, and a loss function that
                  captures the economic impact of detected and
                  missed violations on the organization. We assume
                  that the adversary is worst-case as is standard in
                  other areas of computer security. We also formulate
                  a desirable property of the audit mechanism in
                  this model based on the concept of regret in
                  learning theory. Our second contribution is an
                  efficient audit mechanism that provably minimizes
                  regret for the defender. This mechanism learns
                  from experience to guide the defender’s auditing
                  efforts. The regret bound is significantly better
                  than prior results in the learning literature. The
                  stronger bound is important from a practical
                  standpoint because it implies that the
                  recommendations from the mechanism will converge
                  faster to the best fixed auditing strategy for the
                  defender.},
        URL = {http://www.truststc.org/pubs/783.html}
    }
    

Posted by Nicolas Christin on 1 Oct 2011.
For additional information, see the Publications FAQ or contact the webmaster at www.truststc.org.

Notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.