Benchmarking Untrustworthiness: An Alternative to Security Measurement

Afonso Araújo Neto, Marco Vieira
DOI: 10.4018/jdtis.2010040102

Abstract

Benchmarking security is hard and, although many security metrics have been proposed in the literature, no consensus on a quantitative security metric has emerged. A key difficulty is that security is usually more influenced by what is unknown about a system than by what is known. In this paper, the authors propose the use of an untrustworthiness metric for benchmarking security. This metric, based on the idea of quantifying and exposing the trustworthiness relationship between a system and its owner, represents a powerful alternative to traditional security metrics. As an example, the authors propose a benchmark for Database Management Systems (DBMS) that can be easily used to assess and compare alternative database configurations based on minimum untrustworthiness, a low-cost and high-reward trust-based metric. The practical application of the benchmark in four real large database installations shows that untrustworthiness is a powerful metric that helps administrators make informed security decisions while taking into account the specific needs and characteristics of the environment being managed.

Introduction

The problem of security quantification is a longstanding one. In fact, Enterprise-Level Security Metrics were included in the 2005 Hard Problems List prepared by the INFOSEC Research Council, which identifies key research problems related to information security (INFOSEC, 2005). However, so far, no consensual security metric has been proposed (Jansen, 2009).

A useful security metric must portray the degree to which security goals are met in a given system (Payne, 2006), allowing the system administrator to make informed decisions. However, one of the biggest difficulties in designing such a metric is that security is usually much more dependent on what is unknown about the system than on what is known about it. For example, the vulnerabilities that exist in an application, but that are not known by the developer/administrator, are the ones that should be portrayed by a security metric; otherwise the metric is of limited usefulness. This becomes even more evident if we consider complex environments where security vulnerabilities may exist due to the combination of several distinct characteristics of the system, including the environment around it and how it is used (e.g., a database accessed by several applications and users).

Insecurity metrics based on risk (Jelen & Williams, 1998) try to cope with the uncertainty associated with security goals by incorporating the probability of attacks. Risk is usually defined as the product of the likelihood of an attack and the damage expected if it happens. This metric can be used to decide whether the risks are acceptable and which ones have to be mitigated first. The problem is that it is very easy to underestimate or overestimate these values, which is obviously a major concern when they are used to support security-related decisions.
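To make the estimation problem concrete, the following is a minimal sketch (not from the paper; all figures are hypothetical) of the classic risk computation, showing how an error in a single estimated factor propagates directly into the metric and can flip the resulting decision:

```python
# Illustrative sketch only: risk as likelihood x damage, with hypothetical
# figures, showing how an estimation error scales the result linearly.

def risk(likelihood: float, damage: float) -> float:
    """Classic risk metric: expected loss = P(attack) * damage."""
    return likelihood * damage

# Hypothetical figures for a single threat scenario.
estimated = risk(likelihood=0.02, damage=500_000)  # $10,000 expected loss

# A 5x underestimation of the attack likelihood (easy to make in practice)
# changes the "is this risk acceptable?" conclusion entirely.
actual = risk(likelihood=0.10, damage=500_000)     # $50,000 expected loss

print(f"estimated: ${estimated:,.0f}  actual: ${actual:,.0f}")
```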

Traditional security and insecurity metrics are hard to define and compute (Torgerson, 2007) because they involve making isolated estimations about the ability of an unknown individual (e.g., a hacker) to discover and maliciously exploit an unknown system characteristic (e.g., a vulnerability). In practice, it is assumed that such metrics can be computed using information about the system itself, and that they depend only on the system's properties. They are therefore universal and have the same value when seen from different perspectives (e.g., the administrator's versus the attacker's point of view). In spite of the usefulness of such metrics, they are not necessarily the only way of quantifying security aspects.

Consider the definition of a useful security metric: “the degree to which security goals are met in a given system allowing an administrator to make informed decisions”. An interesting alternative would be a metric that systematizes and summarizes the knowledge and control that a particular administrator has over his own system. Such a metric would still fit the security metric definition. Basically, the idea is not to measure just the system characteristics, but to extend the measurement to the relationship between the system and the person (or persons) in charge of it (defined here as the system administrator). Such a metric would allow the administrator to become aware of the security characteristics of the system, gathering knowledge to back up decisions. It would be even more useful for administrators who are not security experts and have to manage a complex environment with too many distinct security aspects to consider at once. This kind of metric is what we call a trust-based metric, in the sense that it exposes and quantifies the trustworthiness relationship between an administrator and the system he manages.

In this work we argue that a highly useful trust-based metric can be based on the evaluation of how much active effort the administrator puts into his system to make it more secure. Note that effort is used broadly, including not only direct effort (e.g., testing an application) but also effort put into becoming aware of the state of the system (e.g., identifying that the server currently loads insecure processes). This effort can be summarized as the level of trust (or rather distrust) that can be justifiably put in a given system as not being susceptible to attacks. As an instantiation, we propose a trust-based metric called minimum untrustworthiness, which expresses the minimum level of distrust one should put in a given system or component to act according to its specification.
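To illustrate how such a metric might be operationalized, the sketch below aggregates a checklist of security best practices into a single distrust score. The practice names, weights, and aggregation rule are hypothetical simplifications for illustration, not the benchmark actually defined in the paper:

```python
# Illustrative sketch only: one plausible way to turn a checklist of
# security best practices into a "minimum untrustworthiness" score.
# Practice names and weights are hypothetical, not taken from the paper.

from dataclasses import dataclass

@dataclass
class Practice:
    name: str
    weight: float   # relative importance of the practice
    applied: bool   # did the administrator verifiably apply it?

def minimum_untrustworthiness(practices: list[Practice]) -> float:
    """Fraction of total weight left uncovered by applied practices.

    0.0 means every listed practice was applied (maximum justifiable
    trust); 1.0 means none were (maximum distrust).
    """
    total = sum(p.weight for p in practices)
    uncovered = sum(p.weight for p in practices if not p.applied)
    return uncovered / total if total else 1.0

# Hypothetical assessment of one DBMS configuration.
config = [
    Practice("Remove default accounts and passwords", 3.0, applied=True),
    Practice("Encrypt client-server communication", 2.0, applied=False),
    Practice("Enable auditing of privileged actions", 2.0, applied=True),
    Practice("Restrict file permissions on data files", 1.0, applied=False),
]
print(f"minimum untrustworthiness: {minimum_untrustworthiness(config):.2f}")
```

Under this toy scoring rule, two alternative configurations can be compared directly: the one with the lower score is the one in which the administrator has invested more verifiable security effort.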
