Privacy-Preserving Hybrid K-Means

Zhiqiang Gao, Yixiao Sun, Xiaolong Cui, Yutao Wang, Yanyu Duan, Xu An Wang
Copyright: © 2018 |Pages: 17
DOI: 10.4018/IJDWM.2018040101


This article describes how k-means, the most widely used clustering algorithm, is prone to falling into local optima. Notably, traditional clustering approaches operate directly on private data and fail to cope with malicious attacks in massive data mining tasks, even against attackers with arbitrary background knowledge. This can violate individuals' privacy and leak information through system resources and clustering outputs. To address these issues, the authors propose an efficient privacy-preserving hybrid k-means under Spark. In the first stage, particle swarm optimization is executed on resilient distributed datasets to initialize the selection of clustering centroids for k-means on Spark. In the second stage, k-means is executed with the privacy budget set to ε/2t, with Laplace noise added in each round of iterations. Extensive experiments on public UCI data sets show that, while guaranteeing both the utility of the private data and scalability, their approach outperforms state-of-the-art variants of k-means by combining swarm intelligence with the rigorous guarantees of differential privacy.
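The second stage described above — a k-means update that spends a per-round privacy budget by adding Laplace noise — can be sketched as follows. This is a minimal illustration of one noisy iteration, not the authors' Spark implementation; the function name, the NumPy-based layout, and the sensitivity of 1 (which assumes data scaled to [0, 1]^d) are assumptions for the sketch.

```python
import numpy as np

def dp_kmeans_iteration(points, centroids, eps_round, rng):
    """One k-means step with Laplace noise on the per-cluster sums and
    counts (the Laplace mechanism). Illustrative sketch only; sensitivity
    is assumed to be 1, i.e. data is pre-scaled to [0, 1]^d."""
    k, d = centroids.shape
    # Assign each point to its nearest centroid (Euclidean distance)
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    new_centroids = centroids.copy()
    for j in range(k):
        members = points[labels == j]
        # Noisy count and noisy coordinate sums, each perturbed with
        # Laplace noise of scale sensitivity / eps_round
        noisy_count = len(members) + rng.laplace(0.0, 1.0 / eps_round)
        noisy_sum = members.sum(axis=0) + rng.laplace(0.0, 1.0 / eps_round, size=d)
        if noisy_count > 0:
            new_centroids[j] = noisy_sum / noisy_count
    return labels, new_centroids
```

With a total budget ε and t iterations, each call would receive `eps_round = eps / (2 * t)`, matching the ε/2t schedule mentioned in the abstract, so the composed privacy cost over all rounds stays within ε.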
Article Preview

1. Introduction

Nowadays, big data is ubiquitous and abundant thanks to the booming growth of cloud computing and the mobile Internet (Xia et al., 2016; Li, Taniar & Indrawan-Santiago, 2017). However, this poses a rising challenge for individuals' raw data when it is mined or released by untrustworthy data analyzers. Individual privacy constantly faces threats from potential malicious attackers (Khan & Al-Yasiri, 2016; Sander, Teh & Sloka, 2017; Brocardo, Rolt, Dias, Custodio & Traore, 2017). Furthermore, with the massive deployment of cloud computing and the increasing demand for big data services, traditional data mining methods urgently need to be optimized and security-enhanced (Fu, Huang, Ren, Weng & Wang, 2017; Xiong et al., 2017). Consequently, privacy-preserving data mining (PPDM) as well as privacy-preserving data releasing (PPDR) have become extremely challenging problems. Overall, the research directions of privacy-preserving techniques are summarized in Table 1.

Table 1.
Existing research directions of privacy-preserving techniques

PPDR — k-anonymity (Sweeney, 2002), l-diversity (Machanavajjhala, Kifer, Gehrke & Venkitasubramaniam, 2007), t-closeness (Li, Li & Venkatasubramanian, 2007). Characteristics: based on background knowledge; managed by a centralized data curator; unable to provide a strictly mathematical guarantee.

PPDM — Differential privacy (Dwork, McSherry, Nissim & Smith, 2006). Characteristics: strong privacy guarantee; centralized and decentralized models.

PPDM — (Miyajima et al., 2017). Characteristics: computation overheads; strict limitation on involved parties.

PPDM — (Jain, Rasmussen & Sahai, 2017). Characteristics: computation overheads; far from large-scale production.
