Analysis and Text Classification of Privacy Policies From Rogue and Top-100 Fortune Global Companies

Martin Boldt, Kaavya Rekanar
Copyright: © 2019 | Pages: 20
DOI: 10.4018/IJISP.2019040104

Abstract

In the present article, the authors investigate to what extent supervised binary classification can be used to distinguish between legitimate and rogue privacy policies posted on web pages. Fifteen classification algorithms are evaluated using a data set that consists of 100 privacy policies from legitimate websites (belonging to companies that top the Fortune Global 500 list) as well as 67 policies from rogue websites. A manual analysis of all policy content was performed, and clear statistical differences were found in terms of both length and adherence to seven general privacy principles. Privacy policies from legitimate companies show 98% adherence to the seven privacy principles, which is significantly higher than the 45% observed for rogue companies. Out of the 15 evaluated classification algorithms, Naïve Bayes Multinomial is the most suitable candidate for the problem at hand. Its models show the best performance, with an AUC of 0.90 (0.08), outperforming most of the other candidates in the statistical tests used.
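As a rough, non-authoritative illustration of the kind of pipeline summarized above, the sketch below pairs a bag-of-words representation with Multinomial Naïve Bayes and reports cross-validated AUC using scikit-learn. It is not the authors' actual experimental setup; the vectorizer settings, fold count, and function name are assumptions made for this example.

```python
# Illustrative sketch only: bag-of-words features + Multinomial Naive Bayes,
# evaluated with cross-validated ROC AUC. Preprocessing choices are assumptions,
# not the article's exact configuration (which compared 15 algorithms in total).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def evaluate_policies(texts, labels):
    """texts: list of privacy-policy strings; labels: 1 = legitimate, 0 = rogue."""
    model = make_pipeline(
        TfidfVectorizer(lowercase=True, stop_words="english"),
        MultinomialNB(),
    )
    # 10-fold cross-validated AUC, comparable in spirit to the reported 0.90 (0.08)
    scores = cross_val_score(model, texts, labels, cv=10, scoring="roc_auc")
    return scores.mean(), scores.std()
```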
Article Preview

1. Introduction

In an increasingly interconnected world, we rely ever more on ubiquitous online services. These services incorporate more and more personal information about internet users when providing access to both static content, e.g. webpages, and dynamic content, e.g. online social networks. As a result, users share more and more personal information about themselves with service providers, which affects their privacy. Warren and Brandeis first defined the concept of privacy in 1890 as the “right to be let alone” (Warren, 1890). The most commonly used definition of privacy today is the one formulated by Alan Westin in 1967 (Westin, 1967). In his book, Privacy and Freedom, he defines privacy as “the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others”. This definition implies that individuals, groups, and institutions are constantly engaged in an adjustment process that balances their current degree of privacy.

Westin’s privacy definition is reflected in the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, published by the OECD in 1980, which establish principles for how to regulate the collection and use of personal data (OECD, 1980). Europe’s privacy laws build on principles quite similar to the OECD guidelines. The US, however, does not have comprehensive data protection laws; instead, a market-driven approach to privacy protection based on self-regulation by industry actors has been chosen. To allow US-based companies and organizations to store information about EU citizens, the EU-US Safe Harbour framework was declared in law in 2009 (European Commission, 2009). In February 2016, the Safe Harbour framework was replaced by the EU-US Privacy Shield. A key aspect of the EU-US framework is the concept of “notice and choice”, meaning that privacy is protected if companies provide notice of their privacy practices and their customers have some choice about whether or not to participate (Cranor, 2012). In the ecosystem that has developed around web services on the internet, this “notice and choice” concept is manifested in legal documents, known as privacy policies, that state companies’ privacy practices.

Web privacy is negotiated between users, who request services, and web service providers by means of these privacy policies, which are written by the service providers and published on their websites. Unfortunately, privacy policies are often both long and written in legal jargon that is hard for ordinary users to understand. This makes them even harder to use as the basis for informed decisions about whether or not to proceed in using a particular web service. In fact, most users do not read a single sentence of the privacy policy before using a web service. By using the service without reading the privacy policy, they implicitly accept the terms described in it (McDonald, Reader, Kelley, & Cranor, 2009), e.g. allowing the service provider to collect information about them as well as to share it with or forward it to third parties.

An average privacy policy is estimated to contain some 2,500 words and to require 10 minutes to read for an average user with a high school education (McDonald & Cranor, 2008). Taking into account the number of unique websites visited per year by an average user, it would require between 181 and 304 hours per year (with a point estimate of 244 hours, i.e. more than 10 full days) to read each unique website’s privacy policy on first site access (McDonald & Cranor, 2008). Scaling this up to the US national level, it would take somewhere between 40 and 67 billion hours yearly (with a point estimate of 53 billion hours) for all users to read the privacy policies of all unique sites on first access.
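As a rough illustration of this scaling, the sketch below reproduces the point estimates. The number of US internet users is an assumption chosen for this example (roughly the value implied by the quoted figures); it is not stated in this excerpt.

```python
# Back-of-envelope reproduction of the point estimates quoted above.
# NOTE: the number of US internet users is an assumption for this illustration,
# roughly the value implied by 53 billion hours / 244 hours per user.
minutes_per_policy = 10                  # average reading time per policy
hours_per_user_per_year = 244            # point estimate (McDonald & Cranor, 2008)
assumed_us_internet_users = 217_000_000  # assumed, not stated in the article

# Implied number of unique sites whose policies are read per user and year.
sites_per_year = hours_per_user_per_year * 60 / minutes_per_policy   # ~1,464

# National-level reading time per year, in billions of hours.
national_hours = hours_per_user_per_year * assumed_us_internet_users
print(f"~{sites_per_year:.0f} unique sites per user per year")
print(f"~{national_hours / 1e9:.0f} billion hours nationally per year")  # ~53
```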
