Introduction
In 1921, Yevgeny Zamyatin, a Russian author of science fiction and political satire, wrote a novel named “We” in which he imagined a future world where every building is made of glass. Today, with the rapid development of big data, this vision has in a sense become reality. Big data now occupies a wide range of our lives. Since its emergence, big data has promised to tease out previously unknown correlations, and this new paradigm has proved highly effective in helping the research community pursue several objectives. However, it is well known that data can reveal information that is highly relevant to individuals, as well as information that is not. Furthermore, correlative analysis of personal data in aggregated form can predict user behaviour with nearly the same accuracy as the data in its original form. A simple example is a commercial company’s statistical survey of the most purchased product by client age group (child, adult, etc.). In this case, replacing clients’ real ages with intervals (22 years can be represented as [20...30]) can yield the same results as using the exact values. This is a simple example, however; in real data, privacy concerns go much further. Addressing them requires hiding not only the visible data but also the sensitive knowledge within it, especially in light of the observed advancement of data mining techniques. A clear example is Terry Gross’s height, presented in (Dwork, 2008): Suppose one’s exact height were considered a highly sensitive piece of information, and that revealing the exact height of an individual were a privacy breach. Assume that a database yields the average heights of women of different nationalities.
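The age-generalization idea above can be sketched in a few lines. The interval width and bracket formatting here are illustrative choices, not a prescribed scheme:

```python
def generalize_age(age, width=10):
    """Map an exact age to a coarser interval, e.g. 22 -> '[20-30)'."""
    low = (age // width) * width
    return f"[{low}-{low + width})"

# Exact ages are replaced by intervals; counts per interval are preserved.
ages = [22, 47, 8, 35]
print([generalize_age(a) for a in ages])
# ['[20-30)', '[40-50)', '[0-10)', '[30-40)']
```

A survey grouped by these intervals produces the same per-group totals as one grouped by a manually chosen child/adult split, while no longer exposing any exact age.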
An adversary who has access to the statistical database and the auxiliary information ‘Terry Gross is two inches shorter than the average Lithuanian women’ learns Terry Gross’s height, while anyone learning only the auxiliary information learns relatively little.
Medical records and physician notes concerning patients are routinely recorded in hospitals and clinics around the world. This ever-increasing collection of personal medical information represents an important source for studies of human disease. With the development and emergence of big data techniques, the study of real-life data in real time has become a reality. As Kate Greene noted in her essay in (Smolan, 2013), “your next phone could help you figure out you’re sick before you are even aware of a problem”. This huge volume of data contains information that is considered highly sensitive and cannot be shared with anyone, which creates a real obstacle for researchers in this domain. This kind of problem led to the birth of a new research field in information protection known as privacy preserving.
Privacy preserving denotes the set of algorithms and approaches used to detect and hide sensitive information. One specific branch of privacy preserving is data perturbation, whose aim is to alter the data in order to prevent any identity disclosure. The main challenge perturbation tries to handle is achieving an acceptable level of privacy without compromising data utility.
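One common perturbation technique, shown here purely as an illustration (it is not the method proposed in this paper), is additive noise: each numeric value is shifted by zero-mean random noise, masking individual records while leaving aggregate statistics roughly intact.

```python
import random

def perturb_numeric(values, scale=2.0, seed=0):
    """Add zero-mean Gaussian noise to each value so that individual
    records are masked while aggregates stay close to the originals."""
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, scale) for v in values]

# Hypothetical heights in centimetres.
original = [170.0, 165.0, 180.0, 175.0]
perturbed = perturb_numeric(original)

mean = lambda xs: sum(xs) / len(xs)
print(round(mean(original), 1), round(mean(perturbed), 1))
```

The tension the paragraph describes is visible directly in the `scale` parameter: larger noise gives stronger privacy but drags the perturbed statistics further from the true ones.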
Nature constantly offers solutions to a wide range of problems. It is an immense source of inspiration for methods and approaches, exhibiting extremely diverse, dynamic, and robust solutions to complex and hard problems, from evolutionary algorithms such as genetic algorithms to swarm-based methods such as the PSO algorithm. Bio-inspired algorithms have proved highly effective on problems from many domains, especially in optimisation theory.
This paper proposes a new bio-inspired algorithm, derived from the apoptosis behaviour of human cells, for the perturbation of sensitive attributes in medical records. The algorithm first isolates the sensitive attributes, then splits them into two sets, nominal and numerical, and finally perturbs each set. We evaluate our work by applying supervised classification to both the original (private) and the perturbed data, using several evaluation measures such as precision, recall, and F-measure. For further credibility, we compare our approach with several conventional works.
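The three-step flow just described (isolate sensitive attributes, split into nominal and numerical sets, perturb each) can be sketched as follows. The record fields, the choice of sensitive attributes, and the per-set perturbation rules (noise addition for numeric values, suppression for nominal ones) are all illustrative placeholders, not the apoptosis-based operators proposed in the paper:

```python
import random

# Hypothetical medical records; attribute names are illustrative only.
records = [
    {"age": 34, "blood_pressure": 128.0, "diagnosis": "flu", "city": "Oran"},
    {"age": 57, "blood_pressure": 141.0, "diagnosis": "asthma", "city": "Algiers"},
]
SENSITIVE = {"age", "blood_pressure", "diagnosis"}

def split_sensitive(record):
    """Step 1-2: isolate sensitive attributes and split them into
    numerical and nominal sets."""
    numerical, nominal = {}, {}
    for key in SENSITIVE & record.keys():
        target = numerical if isinstance(record[key], (int, float)) else nominal
        target[key] = record[key]
    return numerical, nominal

def perturb(record, rng):
    """Step 3: perturb each set with a rule suited to its type."""
    numerical, nominal = split_sensitive(record)
    out = dict(record)
    for key, value in numerical.items():
        out[key] = value + rng.gauss(0.0, 1.0)  # noise for numeric attributes
    for key in nominal:
        out[key] = "*"                          # suppression for nominal ones
    return out

rng = random.Random(1)
perturbed = [perturb(r, rng) for r in records]
print(perturbed)
```

Non-sensitive attributes such as `city` pass through unchanged, so a classifier can be trained on both versions of the data and its precision, recall, and F-measure compared, which is the evaluation protocol described above.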