The concept of privacy has received attention for over a century, and its definition, let alone its understanding, has proven profoundly challenging. This is primarily attributed to the “incompatible” yet rich set of characteristics privacy comprises. As Brunk (2002) sharply states, “Privacy is a matter of intellectual and philosophical thought and retains few tangible characteristics, making it resistant to simple explanation.” Perhaps the first scholarly work on privacy was that of Warren and Brandeis (1890), who introduced the highly abstract yet popular definition of privacy as the “right to be let alone.” As privacy was recognized as a right, it primarily existed within a legal context. Legislation protecting one’s privacy exists in many countries, in some cases at a constitutional level (see, for example, the Fourth Amendment of the U.S. Constitution). In the information revolution era it was soon realized that privacy and information are closely coupled. More precisely, emerging privacy concepts and metrics relate to intentional or unintentional information flows. However, when it comes to studying, using, and investing in information, security has taken priority over privacy. Security and privacy seemingly operate under different agendas: privacy is about protecting one’s actions by offering anonymity, whereas security includes the notion of accountability, which implies that anonymity is waived. Still, security is a vital component of an information system, as it is needed to protect privacy itself. This contradictory relationship between security and privacy has caused considerable debate, both political and technical, resulting in a plethora of position and research papers.
Accepting that there may be no optimum solution to the problem of striking a balance between security and privacy, this article presents a recently developed methodology that could support policy decision making on a strategic level, thus allowing planners to macro-manage security and privacy.
A thorough overview of the economics of privacy is maintained by Acquisti (2008). The 1970s was a decade marked by economists’ aspirations to develop economic models to “decrypt” market forces. Although Hirshleifer (1971) introduced the value of information in relation to privacy in the early 1970s, economic tools were ported to the privacy domain in the late 1970s and early 1980s (e.g., Posner, 1978; Stigler, 1980). In the 1980s, information sharing and the Internet were showing signs of potential, only to be interrupted by the Morris Worm in 1988 (Seeley, 1989), and security was added to the agenda, initially at the expense of privacy. In the years that followed, information security received substantial attention: if the private sector was to invest in electronic communications and technologies, trust needed to be restored.
Formal treatment of information security initially belonged to the domain of cryptography, but soon expanded to access control models and intrusion detection systems. The security goals of confidentiality, integrity, and availability were defined. The escape from equating security with confidentiality was first realized within cryptography itself, reinforced by Rivest’s (1990) definition of cryptography as being “about communication in the presence of adversaries.” As such, the adversary is not necessarily interested in eavesdropping on a communication, but may elect to interrupt, modify, fabricate, or replay messages. Formally, this omnipotent adversary was first captured in Dolev and Yao’s (1981) threat model, spawning a line of research into cryptographic protocols.
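The adversarial capabilities listed above can be made concrete with a minimal sketch of a Dolev-Yao-style attacker. The sketch below is purely illustrative (the `Channel` class and its method names are hypothetical, not from any library or from the original text): the adversary owns the channel, so it sees every message and decides what, if anything, is delivered.

```python
# Illustrative sketch of a Dolev-Yao-style adversary (hypothetical names).
# The adversary fully controls the channel: it can eavesdrop, interrupt,
# modify, fabricate, and replay messages between honest parties.

class Channel:
    """A channel owned by the adversary, as in the Dolev-Yao threat model."""

    def __init__(self):
        self.log = []         # every message ever sent (eavesdropping)
        self.in_transit = []  # messages the adversary chooses to deliver

    def send(self, sender, receiver, payload):
        msg = {"from": sender, "to": receiver, "payload": payload}
        self.log.append(msg)        # the adversary observes all traffic
        self.in_transit.append(msg)

    # Adversarial capabilities:
    def intercept(self):
        # Interrupt: remove a message so it never reaches its receiver.
        return self.in_transit.pop(0)

    def modify(self, msg, new_payload):
        # Modify/fabricate: deliver a tampered variant instead.
        self.in_transit.append(dict(msg, payload=new_payload))

    def replay(self, index):
        # Replay: resend a previously observed message.
        self.in_transit.append(self.log[index])

    def deliver(self):
        return self.in_transit.pop(0)


chan = Channel()
chan.send("Alice", "Bob", "transfer $10 to Carol")
msg = chan.intercept()                        # Alice's message is interrupted
chan.modify(msg, "transfer $10 to Mallory")   # a tampered variant is delivered
delivered = chan.deliver()
assert delivered["payload"] == "transfer $10 to Mallory"
chan.replay(0)                                # the original message is replayed
```

The point of the model is that honest parties cannot assume anything about delivery: protocol security must hold even when the network itself is the attacker.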
Key Terms in this Chapter
Unobservability: The privacy goal of allowing a user to perform an action or use a system resource without others being able to observe that the action is performed or the resource is being used.
Adversarial Technologies: The technologies used in offensive security, such as hacking, penetration testing, and so forth.
Anonymity: The privacy goal of preventing a user’s identity from being determined when that user is performing an action or using a resource of a given system.
Side Channel: The unintentional flow of information through a probabilistic communication channel which facilitates information inference (or leakage).
Access Control: All security processes and technologies that are responsible for determining and managing legitimate user access to data and system resources.
Pseudonymity: The privacy goal of hiding a user’s identity by disguise, through the use of a pseudonym.
Unlinkability: The privacy goal of allowing a user to perform multiple actions without others being able to link these actions together.