Optimizing Privacy-Accuracy Tradeoff for Privacy Preserving Distance-Based Classification

Dongjin Kim, Zhiyuan Chen, Aryya Gangopadhyay
Copyright: © 2012 | Pages: 18
DOI: 10.4018/jisp.2012040102

Abstract

Privacy concerns often prevent organizations from sharing data for data mining purposes. There has been a rich literature on privacy preserving data mining techniques that can protect privacy and still allow accurate mining. Many such techniques have some parameters that need to be set correctly to achieve the desired balance between privacy protection and quality of mining results. However, there has been little research on how to tune these parameters effectively. This paper studies the problem of tuning the group size parameter for a popular privacy preserving distance-based mining technique: the condensation method. The contributions include: 1) a class-wise condensation method that selects an appropriate group size based on heuristics and avoids generating groups with mixed classes, 2) a rule-based approach that uses binary search and several rules to further optimize the setting for the group size parameter. The experimental results demonstrate the effectiveness of the authors’ approach.
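As a rough, hedged illustration of the condensation idea the abstract refers to (a simplified sketch, not the authors' exact algorithm), the Python snippet below partitions the records of each class into groups of at least k members and releases each group as its centroid, so groups never mix classes; the function name, the sorting-based grouping heuristic, and the centroid-based release are all illustrative assumptions.

import numpy as np

# A minimal, hypothetical sketch of class-wise condensation (not the authors'
# exact method): records of each class are partitioned into groups of at least
# k members, and each group is released as its centroid repeated once per
# original record, so no group ever mixes classes and no raw record is released.
def class_wise_condense(X, y, k):
    """X: (n, d) numeric array; y: (n,) class labels; k: group size parameter."""
    X_out, y_out = [], []
    for label in np.unique(y):
        Xc = X[y == label]
        # Sort along the first feature so nearby records land in the same group
        # (a crude stand-in for a nearest-neighbor grouping heuristic).
        Xc = Xc[np.argsort(Xc[:, 0])]
        # Group boundaries of size >= k; fold any short tail into the last group.
        starts = list(range(0, len(Xc), k))
        if len(starts) > 1 and len(Xc) - starts[-1] < k:
            starts.pop()
        for i, start in enumerate(starts):
            end = starts[i + 1] if i + 1 < len(starts) else len(Xc)
            group = Xc[start:end]
            centroid = group.mean(axis=0)
            X_out.extend([centroid] * len(group))
            y_out.extend([label] * len(group))
    return np.array(X_out), np.array(y_out)

A distance-based classifier such as k-nearest neighbors could then be trained on the released data; in this simplified view, the group size k controls the tradeoff the paper studies: larger groups hide individual records better but blur the class structure more.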
Article Preview

Introduction

With the huge amount of data and its increasingly distributed sources across organizations, accurate, efficient, and fast analysis of data to extract knowledge has become a major challenge. In many cases, these demands force companies or organizations to outsource their data mining tasks to a third party. In such circumstances, the privacy of the outsourced data is a major concern because, without proper protection, the data is subject to misuse.

For example, revealing identity information such as social security number, name, address, and date of birth may lead to identity theft. Another type of privacy risk is that revealing sensitive information such as preexisting medical conditions may cause negative impacts such as denial of health insurance. Identity theft was the top concern among consumers contacting the Federal Trade Commission (Federal Trade Commission, 2007). According to a Gartner study (Gartner Inc., 2007), there were 15 million victims of identity theft in 2006. Another study showed that identity theft cost U.S. businesses and consumers $56.6 billion in 2005 (MacVittie, 2007). Therefore, legislation such as the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (also known as the Financial Services Modernization Act of 1999) requires that the privacy of medical and financial data be protected.

There has been a rich body of work on privacy preserving data mining (PPDM) techniques; two excellent surveys can be found in (Aggarwal & Yu, 2008; Vaidya, Zhu, & Clifton, 2005). The goal of privacy preserving data mining is two-fold: to protect the privacy of the original data while still preserving the utility of the sanitized data (often measured by the quality of the mining results). Note that these two goals conflict with each other because most PPDM techniques distort the original data (e.g., by adding random noise or making data values less accurate) to provide privacy protection. Obviously, the more distortion is introduced, the better the privacy protection but the lower the utility of the data. Most proposed PPDM techniques have tunable parameters that lead to different degrees of privacy protection and data utility. Thus, these parameters need to be set correctly to achieve the optimal privacy-utility tradeoff.
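To make the tradeoff concrete, the following generic sketch (an illustration of noise-based perturbation in general, not of the condensation method studied in this paper) perturbs a hypothetical numeric attribute with Gaussian noise at several noise levels and reports how far the released data drifts from the original: the per-record distortion stands in for privacy protection, and the error of a released statistic stands in for utility.

import numpy as np

# Generic illustration of the privacy-utility tension in perturbation-based PPDM:
# more noise hides the original values better but also distorts any statistic
# computed from the released data. The attribute and noise levels are made up.
rng = np.random.default_rng(0)
ages = rng.integers(18, 90, size=1000).astype(float)   # hypothetical sensitive attribute

for sigma in (1.0, 5.0, 20.0):                          # the tunable noise parameter
    noisy = ages + rng.normal(0.0, sigma, size=ages.shape)
    avg_distortion = np.mean(np.abs(noisy - ages))      # per-record distortion (privacy proxy)
    mean_error = abs(noisy.mean() - ages.mean())        # error of a released statistic (utility proxy)
    print(f"sigma={sigma:5.1f}  avg distortion={avg_distortion:6.2f}  mean error={mean_error:5.2f}")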

For example, K-anonymity is a very commonly used privacy protection model (Sweeney, 2002a), which makes each person in the data set indistinguishable from at least K-1 others so that individual identities are not revealed. A number of techniques have been proposed to implement this model (Bayardo & Agrawal, 2005; LeFevre, DeWitt, & Ramakrishnan, 2005, 2006a, 2006b; Samarati, 2001; Sweeney, 2002b; Xiao & Tao, 2006). However, all these techniques must set the value of K correctly. If K is too large, the data may be distorted so much that the quality of mining becomes very poor. If K is too small, the degree of privacy protection may not be sufficient. More recently, researchers have proposed several other privacy models such as L-diversity (Machanavajjhala, Kifer, Gehrke, & Venkitasubramaniam, 2007), t-closeness (Li, Li, & Venkatasubramanian, 2007), and differential privacy (Dwork, 2006). All of these models also require setting parameters, e.g., proper values for L in the L-diversity model, t in the t-closeness model, and ε (the degree of differential privacy) in the differential privacy model.
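For concreteness, the hedged sketch below checks whether a released table satisfies K-anonymity for a given K by counting how many records share each combination of quasi-identifier values; the helper name, column names, and records are made up for illustration.

from collections import Counter

# Hypothetical helper (illustrative column names and data) that checks K-anonymity:
# every combination of quasi-identifier values must occur at least K times, which
# is exactly the parameter K whose setting the text above discusses.
def satisfies_k_anonymity(records, quasi_identifiers, k):
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(c >= k for c in counts.values())

released = [
    {"zip": "212**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "212**", "age": "30-39", "diagnosis": "asthma"},
    {"zip": "210**", "age": "40-49", "diagnosis": "diabetes"},
]
print(satisfies_k_anonymity(released, ["zip", "age"], k=2))  # False: the last record is unique on (zip, age)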
