Internal and External Threat Analysis of Anonymized Dataset

Saurav Jindal (Punjab Engineering College, India) and Poonam Saini (Punjab Engineering College, India)
Copyright: © 2020 |Pages: 14
DOI: 10.4018/978-1-7998-2242-4.ch009

Abstract

In recent years, data collection and data mining have emerged as fast-paced computational processes as the amount of data from different sources has increased manifold. With the advent of such technologies, a major concern is the exposure of an individual's private information. To confront this situation, a dataset is anonymized before being released to the public for further usage. The chapter discusses various existing anonymization techniques. Thereafter, a novel redaction technique for generalization is proposed to minimize the overall cost (penalty) of the process, which is inversely proportional to the utility of the generated dataset. To validate the proposed work, the authors assume a pre-processed dataset and compare their algorithm with existing techniques. Lastly, the proposed technique is made scalable, further minimizing the generalization cost and improving the overall utility of the released information.

Introduction

Over the last decade, data has grown substantially in almost every field. According to a report from IDC (International Data Corporation), the volume of data created and stored worldwide reached almost 2 ZB (nearly 10²¹ bytes) in 2011 and is expected to double every two years (Gantz & Reinsel, 2011). The rapid expansion is driven by social networking applications, such as Twitter, Facebook, Snapchat, etc., which allow users to create content freely, thus adding to the already existing volume of web data. Other than social media, mobile phones generate a lot of data through GPS sensors, health-tracking applications, Call Data Records (CDRs) and others. Moreover, technologies like cloud computing and the Internet of Things (IoT) promote the sharp growth of data. Currently, big data analytics (Laney, 2001) has emerged as a computational process that provides better solutions for understanding and analysing such huge volumes of data, thereby enabling improvements in healthcare, law enforcement, financial trading and many other real-time applications. The use of big data poses major challenges, such as data representation (making data more meaningful for analysis), redundancy (high levels of duplicate data), expandability and scalability (analytical algorithms must be able to process expanding and increasingly complex datasets), information sharing and data privacy (security of sensitive information). Although most of these challenges have been addressed to a large extent with improved data analytic techniques, privacy remains a major concern. The privacy of data can be maintained in the following two ways:

  • 1.

    Restrict access to the data, thereby, providing limited access to sensitive information.

  • 2.

    Anonymize data fields such that sensitive information cannot be identified by anyone, thereby maintaining trustworthiness.

As the first approach has limitation in terms of data accessibility, anonymization may result in better utilization of data while maintaining data privacy.

Basic Terminology in Data Privacy

The data anonymization process comprises three essential entities, namely, participants, operations and attributes. Together, these entities cover the process from initial data collection through anonymization to the final release of data to the public with privacy preserved. There are four participants in the big data process, each with a different role (Yu, 2016):

  • 1.

    Data Generator: Individuals and organizations who generate the original raw data.

  • 2.

    Data Curator: Organizations that collect, store and release the data.

  • 3.

    Data User: People who require the released data for some purposes.

  • 4.

    Data Attacker: People who try to use the data for malicious purposes.

Further, there are three main operations on a dataset in privacy-preserving models (Figure 1):

  • 1.

    Collection: Data curators collect data (raw data) from various sources.

  • 2.

    Anonymization: Data curators anonymize the collected data in order to be released in public.

  • 3.

    Communication: Data users retrieve information from the released data (anonymized data).

Figure 1.

Operations and participants in big data model

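The three operations above can be sketched as a minimal pipeline. Everything in the sketch (the field names, the generalization and suppression rules) is an illustrative assumption, not the chapter's actual method:

```python
# Minimal sketch of the collect -> anonymize -> release flow: the curator
# gathers raw records, suppresses explicit identifiers, coarsens
# quasi-identifiers, and releases only the anonymized view to data users.

def collect(sources):
    """Data curator gathers raw records from several data generators."""
    return [record for source in sources for record in source]

def anonymize(records):
    """Curator drops explicit identifiers and generalizes quasi-identifiers."""
    released = []
    for r in records:
        low = (r["age"] // 10) * 10
        released.append({
            "age": f"{low}-{low + 9}",    # generalize age to a decade range
            "zip": r["zip"][:3] + "**",   # partially suppress the ZIP code
            "disease": r["disease"],      # sensitive attribute kept for utility
        })                                # note: "name" is never released
    return released

raw = collect([
    [{"name": "Alice", "age": 34, "zip": "16001", "disease": "flu"}],
    [{"name": "Bob",   "age": 37, "zip": "16002", "disease": "cold"}],
])
public = anonymize(raw)  # data users only ever see this released view
print(public[0])         # {'age': '30-39', 'zip': '160**', 'disease': 'flu'}
```

The point of the sketch is the separation of roles: generators never hand data directly to users, and the curator's anonymization step is the only gate between the raw and released tables.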
Furthermore, there are three types of attributes in privacy study:

  • 1.

    Explicit Identifiers: Attributes which uniquely represent an individual or an organization, for example, social security number, email ID, etc.

  • 2.

    Quasi Identifiers: Attributes which alone cannot identify an individual but, when combined with other quasi-identifiers or with external information, may do so.

  • 3.

    Sensitive Attributes: Attributes which are private to an individual, such as disease, salary, etc.
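One practical consequence of this classification: rows that share the same quasi-identifier values form an equivalence class, and a common way to judge a release (k-anonymity, a standard notion in this literature rather than something defined in this chapter) is by the size of its smallest class. A hedged sketch with hypothetical columns:

```python
from collections import Counter

# Hypothetical released table: quasi-identifiers already generalized,
# "disease" is the sensitive attribute. Values are illustrative only.
released = [
    {"age": "30-39", "zip": "160**", "disease": "flu"},
    {"age": "30-39", "zip": "160**", "disease": "cold"},
    {"age": "40-49", "zip": "161**", "disease": "flu"},
]

QUASI = ("age", "zip")  # attributes an attacker may link to external data

def smallest_class(table, quasi=QUASI):
    """Size of the smallest equivalence class over the quasi-identifiers;
    the table is k-anonymous exactly for this k."""
    classes = Counter(tuple(row[q] for q in quasi) for row in table)
    return min(classes.values())

print(smallest_class(released))  # 1: the lone "40-49" record is still linkable
```

The lone record in the "40-49"/"161**" class shows why quasi-identifiers matter even after generalization: that row is still uniquely re-identifiable by anyone holding matching external information.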
