Team for Research in
Ubiquitous Secure Technology

Privacy and Utility in Business Processes
Anupam Datta

Citation
Anupam Datta. "Privacy and Utility in Business Processes". Talk or presentation, 10, October, 2007.

Abstract
Privacy is an increasingly important business concern in health care, financial services, and other organizations. Hospitals, clinics, banks, credit card clearing houses, customer support centers, and academic institutions all maintain databases with sensitive information. These databases are used regularly by employees to carry out business-critical tasks. Organizations that collect and use personal information face the growing challenge of conducting their business effectively while managing privacy risks and compliance requirements. The risks are very real: the theft of 26 million veteran records in May 2006 demonstrated how easily sensitive information can fall into unauthorized hands. In the United States, privacy legislation, such as HIPAA for the health care sector and GLBA for financial institutions, has spurred many businesses, including 68% of Direct Marketing Association member companies as of 2001, to appoint Chief Privacy Officers whose primary responsibility is privacy issues and policies.

One of the biggest problems that privacy-sensitive organizations face is designing their internal activities and information practices to simultaneously serve their customers effectively and manage risks from disclosure of sensitive information. This fundamental problem arises in hospitals and clinics, where personal health information must be used to provide effective health care but must also be protected from indiscriminate sharing to respect the privacy of patients, a requirement made more precise by HIPAA. Financial institutions use sensitive financial information to decide whether to grant loans, for example, and suffer direct loss and brand erosion if sensitive information is lost. Retail enterprises use credit card details in resolving charge-back disputes (where the privacy concerns are exacerbated by the common practice of outsourcing this task). College admissions officers review confidential letters of recommendation and transcripts.
In all of these situations, the organization must carefully design the way it processes and uses information to balance the competing goals of privacy and the usefulness, or utility, of the business process.

Business process designs involve instructing individuals how and when to access and use information, coupled with access and use policies embedded in information processing systems. Because considering utility or privacy alone does not provide enough information to make meaningful management decisions, our goal is to develop a framework and model for designing, evaluating, and auditing business processes to achieve utility goals while minimizing privacy risks. We propose an abstract model of business processes, utility, and privacy; present some specific results; and illustrate our concepts using MyHealth@Vanderbilt, a web-based patient portal built and used at the Vanderbilt Medical Center. Examining the MyHealth portal led to many insights captured in our general theory.

Our approach builds on contextual integrity, a conceptual framework for understanding privacy expectations and their implications developed in the literature on law, public policy, and political philosophy. The primary tenet of contextual integrity is that people interact in society not simply as individuals in an undifferentiated social world, but as individuals in certain capacities or roles, in distinctive social contexts (e.g., health care or banking). For example, the individuals in MyHealth act as patients, doctors, nurses, or secretaries, according to a specific workflow for scheduling appointments, viewing lab results, and asking and answering health questions.

Each context is characterized by its business objectives, or utility goals, and its norms of transmission. For example, one utility goal for MyHealth is to respond to health questions from patients.
The norms of transmission identify conditions under which personal information can be communicated from one party to another. These norms are represented by the privacy goals of a workflow. A privacy goal for MyHealth is to restrict health information to doctors and nurses, the health care providers. We use a model of actions that transmit personal information from a sender in one role to a receiver in a possibly different role; agents may accumulate and send the different types of personal information they receive. These messages represent emails, web forms, database entries, workflow data structures, and arguments to actions. We assume that messages have associated tags (e.g., “health information”) to indicate their contents, but consider business processes in which human agents may tag messages incorrectly. Since agents may act independently, with different motives, we express privacy and utility goals using a form of alternating-time temporal logic, which we call the Logic of Privacy and Utility (LPU), interpreted over the concurrent game structure of agent actions. In this logical setting, privacy is a trace property expressible in LTL, while utility requires that agents have strategies to achieve certain useful outcomes, and is therefore expressed naturally using the stronger ATL* path quantifiers. We also formulate workflows in temporal logic, by associating a responsibility with each agent role. For example, in the patient portal workflow, doctors are responsible for answering health questions and secretaries are responsible for scheduling appointments. We consider both a general class of workflows presented abstractly by logical formulas and a more concrete subclass of practical workflows presented as labeled graphs or automata. Within this setting, we formulate and address design-time and run-time questions about whether a given workflow achieves its privacy and utility goals, without assuming that human agents always follow their assigned responsibilities.
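The trace-based view of privacy can be sketched concretely. The following is a minimal illustration, not the paper's LPU formalism: it models a trace of tagged role-to-role messages and checks the example MyHealth privacy goal that health information reaches only doctors and nurses, assuming (unlike the full model) that tags are accurate. The `Message` type, tag names, and the sample trace are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical roles from the MyHealth example; the privacy goal
# restricts health-tagged messages to the health care providers.
HEALTH_PROVIDERS = {"doctor", "nurse"}

@dataclass
class Message:
    sender_role: str
    receiver_role: str
    tags: frozenset  # e.g. {"health"} -- assumed accurate in this sketch

def privacy_violations(trace):
    """Evaluate the trace property: every message tagged "health"
    must be received by a doctor or nurse."""
    return [m for m in trace
            if "health" in m.tags and m.receiver_role not in HEALTH_PROVIDERS]

trace = [
    Message("patient", "doctor", frozenset({"health"})),       # allowed
    Message("patient", "secretary", frozenset({"appointment"})),  # allowed
    Message("nurse", "secretary", frozenset({"health"})),      # violation
]
print(privacy_violations(trace))
```

Utility, by contrast, cannot be read off a single trace: it asks whether agents in certain roles have a strategy to bring about useful outcomes, which is why the abstract turns to ATL* rather than LTL for those goals.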
1. Does a given workflow achieve privacy and utility if all agents act responsibly? We present algorithms for answering this question. Specifically, privacy properties may be evaluated using standard LTL model-checking over the traces generated by responsible executions of the concurrent game structure. Evaluating utility is more involved because of the ATL* path quantifiers and, in general, is undecidable because agents learn only of messages they send or receive. Because of this limitation, we present a sound decision procedure for a restricted, but useful, class of formulas.

2. Can irresponsible agents be detected and held accountable for violations? If the execution of a workflow satisfying our design criteria actually violates privacy, then some agent must have caused the violation by acting irresponsibly. These violations can be caught at run time, and the accountable agent determined using auditing algorithms we present. These algorithms are not fully automatic (or else they could be used for enforcement), but require an oracle (such as a human auditor) to determine the accuracy of message tags. We seek to minimize the number of oracle calls (reducing the human auditor’s work) by using classical causality ideas in distributed computing and a new notion of “suspicious events.”

Privacy advocates often recommend reconciling the competing interests of privacy and utility with the principle of minimum necessary disclosure: disclose the minimum information necessary to achieve the utility goal. This principle is included expressly in several influential privacy policies, including the HIPAA Privacy Rule and the Markle Connecting for Health Common Framework. We leverage our unified model of privacy and utility to provide a formal definition of this principle. We apply these concepts to the MyHealth patient portal and recommend several design changes to the MyHealth developers at Vanderbilt.
Message tags are themselves one such suggestion, enabling finer-grained message routing. Our auditing algorithms were developed in response to the MyHealth developers’ concern about incorrectly tagged messages. In this paper, we suggest further privacy improvements in the MyHealth workflow based on tagging and illustrate our auditing methods using a hypothetical execution of MyHealth with an irresponsible agent.
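As a rough illustration of the auditing idea, and not the algorithms the paper presents, the sketch below walks a violating trace backwards and calls an oracle (standing in for the human auditor) only on suspicious events: sends whose claimed tags could conceal health content. The roles, tags, trace, and oracle here are all hypothetical.

```python
# Hypothetical audit sketch: the privacy goal restricts health
# content to doctors and nurses. An agent is accountable either for
# routing health-tagged content to a non-provider, or for mistagging
# health content (which the oracle -- a human auditor inspecting the
# actual message -- must confirm).
HEALTH_PROVIDERS = {"doctor", "nurse"}

def audit(trace, oracle):
    calls = 0
    for sender, receiver, claimed_tags in reversed(trace):
        if "health" in claimed_tags and receiver not in HEALTH_PROVIDERS:
            # Correctly tagged but irresponsibly routed.
            return sender, calls
        if "health" not in claimed_tags:
            # Suspicious event: the tag may hide health content,
            # so spend an oracle call to check it.
            calls += 1
            if "health" in oracle(sender, receiver, claimed_tags):
                return sender, calls  # mistagged health content
    return None, calls

# A nurse mistags a health answer as "billing", so it is routed to a
# secretary, violating the privacy goal.
trace = [
    ("patient", "nurse", {"health"}),
    ("nurse", "secretary", {"billing"}),  # actually health content
]
truth = {("nurse", "secretary"): {"health"}}
oracle = lambda s, r, tags: truth.get((s, r), tags)
print(audit(trace, oracle))
```

Restricting oracle queries to suspicious events is what keeps the human auditor's workload small; the paper's causality-based analysis narrows that set further than this sketch does.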

Electronic downloads


Internal. This publication has been marked by the author for TRUST-only distribution, so electronic downloads are not available without logging in.
Citation formats  
  • HTML
    Anupam Datta. <a
    href="http://www.truststc.org/pubs/296.html"
    ><i>Privacy and Utility in Business
    Processes</i></a>, Talk or presentation,  10,
    October, 2007.
  • Plain text
    Anupam Datta. "Privacy and Utility in Business
    Processes". Talk or presentation,  10, October, 2007.
  • BibTeX
    @presentation{Datta07_PrivacyUtilityInBusinessProcesses,
        author = {Anupam Datta},
        title = {Privacy and Utility in Business Processes},
        day = {10},
        month = {October},
        year = {2007},
        URL = {http://www.truststc.org/pubs/296.html}
    }
    

Posted by Larry Rohrbough on 16 Oct 2007.
For additional information, see the Publications FAQ or contact webmaster at www truststc org.

Notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.