Privacy in Pervasive and Affective Computing Environments

Jeremy Pitt, Arvind Bhusate
DOI: 10.4018/978-1-61520-975-0.ch011

Abstract

Pervasive computing aims to saturate ambient environments with sensors and processors; affective computing aims to infer emotive and intentional states from physiological readings and physical actions. The convergence of pervasive and affective computing offers genuine promise for creating ambient environments which (re-)configure themselves according to users' emotive states. On the downside, there is a real risk of privacy invasion if emotions, behaviours, and even intentions are recorded and subject to the same content-access rules as telephone calls, IP logs, and so on. Based on an experiment to enhance Quality of Experience (QoE) during a visit to a public collection augmented with pervasive and affective computing, this chapter discusses the subtle interactions and requirements of enhanced service provision vis-à-vis privacy rights. The outcome contributes to the discussion on ensuring an effective relationship between technologists and application developers on the one hand, and those concerned with privacy rights, ethical computing and the formulation of social policy on the other, so as to promote and protect the rights and interests of citizens.


1 Introduction

Miniaturisation and Moore's Law have combined to make ubiquitous or pervasive computing (ambient environments and artefacts saturated with sensors and processors) a reality, while advances in intelligent software (machine learning, autonomic systems, etc.) make adaptation of those pervasive computing environments correspondingly possible. This opens up a wide range of interesting and beneficial applications in health, commerce and entertainment; it also opens up the possibility of every behaviour and preference, and even emotions and intentions, being sensed and recorded digitally. That, in turn, raises the prospect of the data being used in less desirable ways: for surveillance, invasions of privacy, reduction or removal of rights, unwanted advertising, or even unanticipated uses following inadvertent loss of the data.

Let us then assume a ubiquitous/pervasive computing infrastructure that provides location-based applications and delivers context-aware services to nomadic users. Clearly, both applications and services offer added value to those users if they can be customised and/or personalised to each user. Such customisation can be achieved by transmitting immediate personal information to the infrastructure, adding expressed preferences stored in a user profile, and delivering services according to user-defined policies. It is evident that users are willing to trade personal information in return for value-added services. Increasingly, though, as sensor technology improves, physiological signals, from galvanic skin response to brain activity (via electroencephalograms), will become a type of personal information traded in return for such services, using the ideas of affective computing.
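
To make this arrangement concrete, the sketch below (in Python; the names DisclosurePolicy, UserProfile and the 'museum-guide' service are hypothetical illustrations, not part of the chapter's system) shows one way expressed preferences and user-defined disclosure policies might be represented and enforced:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DisclosurePolicy:
    """A user-defined rule: which data may go to which service, and why."""
    data_type: str          # e.g. "location", "gsr", "eeg"
    recipients: List[str]   # service identifiers permitted to receive it
    purpose: str            # the value-added service traded for disclosure

@dataclass
class UserProfile:
    """Expressed preferences plus the policies that gate disclosure."""
    user_id: str
    preferences: Dict[str, str] = field(default_factory=dict)
    policies: List[DisclosurePolicy] = field(default_factory=list)

    def may_disclose(self, data_type: str, recipient: str) -> bool:
        # Release a datum only if some policy explicitly permits it.
        return any(p.data_type == data_type and recipient in p.recipients
                   for p in self.policies)

# Example: a museum visitor shares location with the guide service only.
profile = UserProfile(
    user_id="visitor-42",
    preferences={"language": "en", "pace": "leisurely"},
    policies=[DisclosurePolicy("location", ["museum-guide"], "route planning")],
)
assert profile.may_disclose("location", "museum-guide")
assert not profile.may_disclose("gsr", "museum-guide")
```

Under such a scheme the infrastructure receives only what a policy explicitly permits; everything else remains with the user.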

The premise of affective computing is that interaction, whether at the desktop or in ubiquitous settings, can be improved by sensing and responding to the user's emotive and intentional state (with respect to completion of, or engagement with, a task). If we seek to integrate ubiquitous computing with affective computing, an essential component of the 'immediate personal information' to be transmitted is the set of physiological signals which can be processed (including fusion with sensed behaviour data) to infer a user's emotive and intentional state, which can then be used as an input parameter to delivering the customised service. There are, however, well-documented security risks associated with revealing such personal information, and integrating the two therefore exacerbates the risk, which now becomes primarily a matter of privacy.
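
As a rough illustration of this pipeline (a sketch only, with made-up thresholds and the hypothetical functions infer_state and customise_service; the chapter does not prescribe a particular fusion method):

```python
from statistics import mean
from typing import List

# Hypothetical sketch of the data flow described above: fuse a window of
# galvanic skin response (GSR) readings with a behavioural signal (dwell
# time at an exhibit) into a coarse emotive-state estimate, which then
# parameterises service delivery. Real systems would use calibrated,
# learned models rather than fixed thresholds.

def infer_state(gsr_window: List[float], dwell_seconds: float) -> str:
    arousal = mean(gsr_window)          # crude proxy for arousal in [0, 1]
    if arousal > 0.7 and dwell_seconds > 60:
        return "engaged"
    if arousal < 0.3 and dwell_seconds < 15:
        return "bored"
    return "neutral"

def customise_service(state: str) -> str:
    # The inferred state is just another input parameter to the service.
    return {"engaged": "offer deeper content on this exhibit",
            "bored": "suggest moving to the next exhibit",
            "neutral": "keep the current presentation"}[state]

print(customise_service(infer_state([0.8, 0.75, 0.9], dwell_seconds=90)))
```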

The transmission of affective data (physiological signals) is unlike, for example, the transmission of personal data such as credit card information, or context data such as location and device parameters. We want to protect this information from eavesdroppers who would misuse it, and to 'trust' the intended recipient of the signal not to misuse it. In any case, intercepted raw sensor data is unlikely to be of much use unless one has the user's 'personal key' to interpret it, i.e. the calibration needed to make a sufficiently reliable estimate of that user's emotive state. This is not to deny that eavesdropping on a personal transmission is a potential invasion of privacy, and steps should be considered to avoid it where necessary. However, the situation is distinctly more subtle and more serious than this, since the most likely exploiter is not an eavesdropper on the channel but the intended recipient itself.
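
The 'personal key' idea can be illustrated as follows (a hypothetical sketch; the PersonalKey structure and the calibration scheme are assumptions for illustration, not the chapter's design):

```python
from dataclasses import dataclass

# Hypothetical illustration of the 'personal key': the same raw GSR value
# means different things for different users, so intercepted raw data is
# of little use without the per-user calibration needed to interpret it.

@dataclass
class PersonalKey:
    baseline: float     # the user's resting GSR level (from calibration)
    dyn_range: float    # the user's dynamic range (from calibration)

def estimate_arousal(raw_gsr: float, key: PersonalKey) -> float:
    """Normalise a raw reading into [0, 1] using the user's calibration."""
    return max(0.0, min(1.0, (raw_gsr - key.baseline) / key.dyn_range))

intercepted = 4.2   # the raw value an eavesdropper might capture
alice = PersonalKey(baseline=2.0, dyn_range=4.0)
bob = PersonalKey(baseline=4.0, dyn_range=1.0)

print(estimate_arousal(intercepted, alice))  # ~0.55: moderately aroused
print(estimate_arousal(intercepted, bob))    # ~0.2: near baseline
```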

It is therefore too simplistic to respond that user-relevant contextual information should simply be transmitted over secure communication channels to assure user trust: no amount of security on the channel will protect the transmitter of information if the recipient intends to act in bad faith (e.g. a corporation intent on creating customer lock-in) or is under pressure from an external authority to reveal data (e.g. ISPs in the UK required by law to reveal IP logs). In short, transmitting raw sensor data over encrypted channels is not, by itself, a solution for assuring user trust and confidence in the system.
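
The distinction between channel security and recipient trust can be seen in a few lines (a minimal sketch assuming the third-party Python 'cryptography' package; the payload format is invented):

```python
from cryptography.fernet import Fernet

# Encryption protects the data in transit, but the intended recipient
# necessarily decrypts it, so channel security alone cannot constrain
# what the recipient does with the plaintext afterwards.

key = Fernet.generate_key()               # shared between user and service
channel = Fernet(key)

raw_signal = b"gsr=4.2;hr=88"             # affective data leaving the user
ciphertext = channel.encrypt(raw_signal)  # an eavesdropper sees only this

# ... at the service provider ...
plaintext = channel.decrypt(ciphertext)   # the recipient holds the raw data
# Nothing in the protocol stops the recipient from logging, profiling, or
# being compelled by an external authority to disclose `plaintext`.
```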
