Formal Analysis of Security in Interactive Systems

Antonio Cerone
DOI: 10.4018/978-1-60566-132-2.ch025

Abstract

Reducing the likelihood of human error in the use of interactive systems is increasingly important. Human errors can not only hinder the correct use and operation of such systems; they can also compromise their safety and security. Hence the need for formal methods to analyse and verify the correctness of interactive systems, particularly with regard to human interaction. This chapter examines the formal modelling and analysis of security properties of interactive systems. The reader is introduced to some basic concepts in security and human-computer interaction, followed by the formal modelling of human cognitive behaviour and the analysis of such systems. The authors highlight the use of model-checking to drive system development towards the design of secure user actions and sequences of actions. They also analyse the patterns of user behaviour that may lead to security violations. Finally, particular areas of security protocol design in which human interaction plays a key role are discussed.

Introduction

Reducing the likelihood of human error in the use of interactive systems is increasingly important: such systems are becoming widespread in applications that demand high reliability for safety, security, financial or similar reasons. Consequently, the use of formal methods in verifying the correctness of interactive systems should also include the analysis of human behaviour when interacting with the interface.

The study of human behaviour in operating machines emerged in the 1980s, when Human Reliability Assessment (HRA) techniques were applied to the analysis of safety-critical systems (Kirwan, 1990). These techniques soon proved to lack objectivity (Svenson, 1989) and failed to cope with the complex behaviours emerging in human-machine interaction (Kirwan, 1992).

In the 1990s, formal methods began to be introduced with the aim of achieving a more objective analysis (Dix, 1991). However, the gain in objectivity was offset by an oversimplified model of human behaviour, which was often modelled exactly as prescribed by the interface requirements. In reality, a user interacting with the system does not necessarily behave as expected when the interface was designed, and errors are often the very result of unexpected user behaviour that emerges through the interaction.

To best capture such emergent behaviour, a model of the user must specify the cognitively plausible behaviour, that is, all the possible behaviours that may occur and that involve different cognitive processes (Butterworth et al, 2000). The model must take into account all relationships between the user's actions, the user's goals and the environment. Following this approach, a number of researchers have explored the use of formal models to understand how cognitive errors can affect user performance. Mode confusion is often blamed for human errors in interactive systems: the user's mental model of how the system works gets out of step with how the system is actually working. This phenomenon has been analysed by developing a plausible (but simplified) mental model, such as might represent the user's understanding of the system, and comparing it with a user interface model to look for sources of cognitive dissonance (Butler et al, 1998; Rushby, 2002). Rushby (2002) models the behaviour of a forgetful user who follows a warning display light, or a non-forgetful user without warning lights, and checks for emergent mode confusion. Curzon and Blandford (2004) focus on goal-based interactions and model the behaviour of a user who assumes all tasks are completed when the goal is achieved, but forgets to complete some important subsidiary tasks (post-completion error). Cerone et al (2005) investigate the application of formal methods to the classification and analysis of task failure caused by the systematic recurrence of human errors. This recurrence produces complex patterns of human behaviour, which are the result of bad habits, poor training or deficient mental models, or are associated with cognitive processes such as habituation. The error is modelled by the behavioural pattern itself, rather than by specific error states as for mode confusion and post-completion error. In particular, different categories of behavioural patterns are formally verified to be a sound and complete decomposition of an air traffic control task failure.
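As an illustration of this style of modelling, a post-completion error of the kind studied by Curzon and Blandford (2004) can be sketched in CSP, the process algebra defined among the key terms below. The fragment uses the machine-readable CSPM dialect and hypothetical event names based on the classic cash-machine example; it is a minimal sketch under those assumptions, not a reconstruction of their actual model.

    channel insertCard, enterPin, requestCash, takeCash, takeCard

    -- The intended task: the card must always be retrieved.
    TASK = insertCard -> enterPin -> requestCash -> takeCash ->
             takeCard -> SKIP

    -- A cognitively plausible user: once the goal (obtaining cash) is
    -- achieved, the internal choice |~| allows termination without
    -- taking the card, modelling the post-completion error.
    USER = insertCard -> enterPin -> requestCash -> takeCash ->
             ((takeCard -> SKIP) |~| SKIP)

    -- In the failures model the check fails: after takeCash the user
    -- may refuse takeCard, and a tool such as FDR reports this
    -- behaviour as a counterexample.
    assert TASK [F= USER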

These formal approaches all address safety-critical aspects of interactive systems and cognitive behaviour. Only at the beginning of the new millennium did security aspects of interactive systems begin to be systematically analysed (Cerone and Curzon, 2007; Yee, 2002; Zurko, 2005). In fact, security violations in human-computer interaction may be due to systematic causes such as cognitive overload, lack of security knowledge, and mismatches between the behaviour of the computer system and the user's mental model (Yee, 2002). The user who causes the violation may or may not be the attacker: the user may either become aware of the violation and exploit it, or unwillingly allow a malicious user to exploit it. The latter situation is more complex for two reasons. First, the violation often emerges as a consequence of a complex behavioural pattern (Cerone et al, 2005). Secondly, the security properties need to capture the conditions that enable a malicious user to exploit the violation (Cerone and Elbegbayan, 2007).

Key Terms in this Chapter

Human Error: A class of errors that, in the context of interactive systems, refers to any undesired user action or user-system interaction leading to the violation of a system goal. Such an interaction may also violate a security goal of the system, for example by leaking confidential information or by facilitating unauthorised access to the system.

Model-Checking: An approach to verifying whether a model (of a software or hardware system) satisfies a set of formal assertions and constraints. This usually involves a tool that allows the model to be specified in a particular formal language (such as CSP), together with the property it is to be checked against, and then checks whether the model satisfies the property in all of its executions. The check relies on a state exploration technique to ensure that all reachable states of the system are examined. If the model fails to satisfy the stated property, the tool returns an execution path demonstrating the failure.
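As a minimal sketch (with hypothetical event names), the following CSPM fragment phrases a safety property as the kind of trace refinement check supported by the FDR model checker: the specification allows the secret to be read only between a login and a logout, while the system model wrongly permits it beforehand.

    channel login, logout, readSecret

    -- Specification: readSecret may occur only between login and logout.
    SPEC   = login -> INSIDE
    INSIDE = readSecret -> INSIDE [] logout -> SPEC

    -- A deliberately faulty system that also allows the secret to be
    -- read before any login has taken place.
    SYSTEM = readSecret -> SYSTEM [] login -> logout -> SYSTEM

    -- The check fails, and the tool reports the counterexample
    -- execution path <readSecret>.
    assert SPEC [T= SYSTEM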

Empirical Channel: A formalisation of a medium of communication that results from human-to-human or human-to-device interaction. Information communicated verbally from one person to another, read from a device's display panel, or typed on a computer keyboard are all examples of communication over empirical channels. Such channels are considered low bandwidth but reliable.
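A minimal CSPM sketch of this characterisation, with hypothetical channel names, might model the empirical channel as faithfully delivering messages drawn from a small set, in contrast to an ordinary network channel on which an attacker may substitute arbitrary messages.

    -- Messages are drawn from a small set, reflecting low bandwidth.
    datatype Msg = Code.{0..9}

    channel empSend, empRecv, netSend, netRecv : Msg

    -- Empirical channel: reliable; what is received is what was sent.
    EMP = empSend?m -> empRecv!m -> EMP

    -- Network channel under attacker control: any message of the
    -- attacker's choosing may be delivered.
    NET = netSend?m -> (|~| m' : Msg @ netRecv!m' -> NET)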

CSP: Communicating Sequential Processes, or CSP, is a formal language for describing patterns of interaction in concurrent systems. First described by Tony Hoare at Oxford in 1978, it is a member of the family of mathematical theories of concurrency known as process algebras. It is traditionally written in a fairly mathematical notation, with special symbols representing its operators, and is typeset more like mathematics than a typical programming language; the machine-readable dialect CSPM, supported by tools such as FDR, renders the same operators in ASCII.
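For illustration, the classic vending-machine example (hypothetical events, written in the ASCII CSPM dialect) shows three core operators: prefixing, external choice and synchronised parallel composition.

    channel coin, coffee, tea

    -- Prefixing (->) and external choice ([]): after a coin, the
    -- environment chooses coffee or tea.
    VM = coin -> (coffee -> VM [] tea -> VM)

    -- A customer who always chooses coffee.
    CUST = coin -> coffee -> CUST

    -- Parallel composition synchronising on the shared events.
    SYSTEM = VM [| {| coin, coffee, tea |} |] CUST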

Electronic-Voting: A voting infrastructure that uses electronic means to support the casting and counting of votes. This usually involves an underlying computer system that allows voters to cast their votes and that outputs the result of the election.

Coercion-Resistance: A property of electronic voting systems whereby a voter is unable to prove to a coercer how he or she voted. This is an important property, as it prevents voters from being bought or blackmailed by those who intend to unlawfully influence the result of the election.

Security Protocol: A security protocol is essentially a communication protocol – an agreed sequence of actions performed by two or more communicating entities in order to accomplish some mutually desirable goal – that makes use of cryptographic techniques, allowing the communicating entities to achieve a security goal. A particular protocol may enable the communicating parties to achieve one or more such goals. Common security goals include (data and entity) authentication, confidentiality and integrity.
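As a minimal sketch (hypothetical names, with encryption treated symbolically), a one-way challenge-response exchange can be modelled in CSPM: A issues a fresh nonce, B returns it encrypted under a shared key, and A treats a correct reply as authenticating B.

    datatype Nonce = N1 | N2
    -- Enc.n stands symbolically for n encrypted under the shared key.
    datatype Msg = Plain.Nonce | Enc.Nonce

    channel comm : Msg
    channel authenticated

    -- A picks a nonce, sends it, and accepts only a matching reply.
    A = |~| n : Nonce @ comm!Plain.n -> comm?Enc.m ->
          (if m == n then authenticated -> SKIP else STOP)

    -- B echoes the received nonce back, encrypted.
    B = comm?Plain.n -> comm!Enc.n -> SKIP

    SYSTEM = A [| {| comm |} |] B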
