Towards a More Systematic Approach to Secure Systems Design and Analysis
Simon Miller (Intelligent Modelling and Analysis Research Group, School of Computer Science, University of Nottingham, Nottingham, UK), Susan Appleby (Communications-Electronics Security Group, Cheltenham, Gloucestershire, UK), Jonathan M. Garibaldi (Intelligent Modelling and Analysis Research Group, School of Computer Science, University of Nottingham, Nottingham, UK) and Uwe Aickelin (Intelligent Modelling and Analysis Research Group, School of Computer Science, University of Nottingham, Nottingham, UK)
Copyright: © 2013 | Pages: 20
DOI: 10.4018/jsse.2013010102

Abstract

The task of designing secure software systems is fraught with uncertainty, as data on uncommon attacks is limited, costs are difficult to estimate, and technology and tools are continually changing. Consequently, experts may interpret the security risks posed to a system in different ways, leading to variation in assessment. This paper presents research into measuring the variability in decision making between security professionals, with the ultimate goal of improving the quality of security advice given to software system designers. A set of thirty-nine cyber-security experts took part in an exercise in which they independently assessed a realistic system scenario. This study quantifies agreement in the opinions of experts, examines methods of aggregating opinions, and produces an assessment of attacks from ratings of their components. The authors show that when aggregated, a coherent consensus view of security emerges that can be used to inform decisions made during systems design.

Introduction

Today, an ever-growing number of sensitive data transactions take place online (e.g., e-government, internet banking and e-commerce), and cybercrime has become prevalent. One consequence is that the cybersecurity of information systems has become an increasing concern. Assessing the level of risk posed by specific events is an area of ongoing interest for most (if not all) organisations, leading to a requirement for scientific methods of validating the cybersecurity of software systems.

This special issue asks two questions:

  • 1. What are the foundational, measurable, and repeatable scientific elements applicable to assuring the cybersecurity of software systems?

In the real world, the subjective opinions of cybersecurity experts are used to assess the security of software systems at the design stage. Indeed, this is often the only way to make such an assessment. However, the human element introduces the potential for inconsistency both between experts and within individual experts. In this paper we show how the opinions of experts can be elicited and measured, and how we can ameliorate (and measure) the effects of their inherent variation through aggregation to produce a consistent and repeatable assessment.

  • 2. How should we verify and validate software systems in terms that will provide indisputable scientific evidence that they are secure?

We contend that ‘indisputable evidence’ of security is not a practical concept in the real world, as no system can be guaranteed to be without security risks for any length of time. Instead, we argue that independent, measured, ‘proven’ expert opinion should be used to verify and validate software systems. Repeated successful assessments by cybersecurity experts provide the historical scientific evidence that their opinions are a valid and proven method of assuring the security of software systems. The job of security practitioners is to make an informed judgement as to whether the system is secured to an appropriate degree for the threat environment it faces.

The work described in this paper addresses two key topics of interest:

  • Measuring Human Factors in Security: In the method we present, human experts are used to validate software systems. Perceptions of security vary both between experts, and within an individual expert. Our method allows us to explicitly measure this variation, and produce an assessment that accounts for it.

  • Quantitative Security Management: The outcome of the proposed method is a quantification of expert opinion of the security of a system, including a measurement of uncertainty. These values can be used to make decisions regarding the implementation of a software system, and the management of its security framework.

In the proposed method we use two different types of survey to elicit the opinions of a group of security experts. The first involves ranking a series of technical attacks on a system in order of how difficult they are to carry out undetected by the system or its operators. The second requires experts to rate components of attacks in terms of aspects that are thought to contribute to their overall difficulty. In practice, a system is only as secure as its weakest element, i.e., the easiest way in. Identifying the weakest aspects of a system, i.e., the ‘easiest’ ways of attacking it, is thus a highly relevant component of system security assessment, though it obviously does not provide all of the answers.
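The aggregation idea can be sketched in a few lines of code. This is a minimal illustration, not the paper's actual procedure or data: the attack names, ratings, and the choice of median-plus-spread as the aggregate are all invented here for the purpose of showing how a consensus difficulty and a measure of expert disagreement might be computed, and how the ‘weakest element’ falls out as the attack judged easiest.

```python
from statistics import median, stdev

# Hypothetical ratings (invented for illustration): each expert rates
# the difficulty of carrying out each attack undetected, on a 1-10 scale.
ratings = {
    "phishing an operator":   [3, 4, 2, 5, 3],
    "exploit web front-end":  [6, 7, 5, 7, 6],
    "compromise data centre": [9, 8, 9, 10, 9],
}

# Consensus difficulty per attack, with the standard deviation across
# experts as a simple measure of disagreement (uncertainty).
consensus = {attack: median(r) for attack, r in ratings.items()}
spread = {attack: stdev(r) for attack, r in ratings.items()}

# The 'weakest element' is the attack with the lowest consensus difficulty.
weakest = min(consensus, key=consensus.get)
print(weakest, consensus[weakest])
```

In this toy example the phishing attack receives the lowest median difficulty, so it would be flagged as the easiest way in; the spread values indicate where the experts disagree most and where further elicitation might be worthwhile.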

We have applied the method to measure variation within a set of thirty-nine highly experienced expert practitioners, including system and software architects, security consultants, penetration testers, vulnerability researchers and specialist systems evaluators.

This paper shows how we are able to use expert opinion to produce a consistent assessment, and identify outliers. We also discuss the meaning of the results, and how the approach could be applied in future work.
