Introduction
The problem of security quantification is a longstanding one. In fact, Enterprise-Level Security Metrics were included in the 2005 Hard Problems List prepared by the INFOSEC Research Council, which identifies key research problems related to information security (INFOSEC, 2005). However, no consensus on a security metric has been reached so far (Jansen, 2009).
A useful security metric must portray the degree to which security goals are met in a given system (Payne, 2006), allowing the system administrator to make informed decisions. However, one of the biggest difficulties in designing such a metric is that security usually depends much more on what is unknown about the system than on what is known about it. For example, the vulnerabilities that exist in an application but are not known by the developer/administrator are precisely the ones that should be portrayed by a security metric; otherwise, the metric is of reduced usefulness. This becomes even more evident if we consider complex environments where security vulnerabilities may arise from the combination of several distinct characteristics of the system, including the environment around it and how it is used (e.g., a database accessed by several applications and users).
Insecurity metrics based on risk (Jelen & Williams, 1998) try to cope with the uncertainty associated with security goals by incorporating the probability of attacks. Risk is usually defined as the product of the likelihood of an attack and the damage expected if it happens. This metric can be used to decide whether the risks are acceptable and which ones have to be mitigated first. The problem is that it is very easy to underestimate or overestimate these values, which is obviously a major issue when they are used to support security-related decisions.
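The risk definition above can be sketched in a few lines. The threat names, likelihoods, and damage values below are hypothetical, chosen purely to illustrate how risk scores would be computed and used to prioritize mitigation:

```python
# Sketch of the classic risk formula: risk = likelihood * expected damage.
# All threats and numbers are hypothetical, for illustration only.

def risk(likelihood: float, damage: float) -> float:
    """Risk as the product of attack likelihood (0..1) and expected damage."""
    return likelihood * damage

# Hypothetical threat catalog: (likelihood per year, expected damage in $).
threats = {
    "sql_injection": (0.30, 100_000),
    "phishing":      (0.60, 20_000),
    "dos_attack":    (0.10, 50_000),
}

# Rank threats by risk, highest first, to decide which to mitigate first.
ranked = sorted(threats.items(), key=lambda kv: risk(*kv[1]), reverse=True)
for name, (p, d) in ranked:
    print(f"{name}: risk = {risk(p, d):,.0f}")
```

The sketch also makes the article's criticism concrete: the ranking is only as good as the likelihood and damage estimates, and small errors in either input shift which threat appears most urgent.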
Traditional security and insecurity metrics are hard to define and compute (Torgerson, 2007) because they involve making isolated estimations about the ability of an unknown individual (e.g., a hacker) to discover and maliciously exploit an unknown system characteristic (e.g., a vulnerability). In practice, it is assumed that such metrics can be computed using information about the system itself, and that they depend only on the system's properties. Therefore, they are universal and have the same value when seen from different perspectives (e.g., the administrator's versus the attacker's point of view). In spite of the usefulness of such metrics, they are not necessarily the only way of quantifying security aspects.
Consider the definition of a useful security metric: "the degree to which security goals are met in a given system, allowing an administrator to make informed decisions". An interesting alternative would be a metric that systematizes and summarizes the knowledge and control that a particular administrator has over his own system. This metric would still fit the security metric definition. Basically, the idea is not to measure just the system characteristics, but to extend the measurement to the relationship between the system and the person (or persons) in charge of it (defined here as the system administrator). Such a metric would allow the administrator to become aware of the security characteristics of the system, gathering knowledge to back up decisions. This metric would be even more useful for administrators who are not security experts and have to manage a complex environment, with too many distinct security aspects to consider at once. This kind of metric is what we call a trust-based metric, in the sense that it exposes and quantifies the trustworthiness relationship between an administrator and the system he manages.
In this work we argue that a highly useful trust-based metric can be based on the evaluation of how much active effort the administrator puts into his system to make it more secure. Note that effort is used broadly, including not only direct effort (e.g., testing an application) but also effort put into becoming aware of the state of the system (e.g., identifying that the server currently loads insecure processes). This effort can be summarized as the level of trust (or rather distrust) that can be justifiably put in a given system as not being susceptible to attacks. As an instantiation, we propose a trust-based metric called minimum untrustworthiness, which expresses the minimum level of distrust one should put in a given system or component to act according to its specification.
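To make the effort-based idea more tangible, the following is a minimal, hypothetical sketch, not the article's actual metric: it assumes each component has a list of applicable security checks, scores untrustworthiness as the fraction of checks the administrator has not yet performed, and takes a weakest-link view at the system level. The component names, check counts, and aggregation rule are all assumptions for illustration:

```python
# Hypothetical sketch of an effort-based untrustworthiness score.
# The check counts and the weakest-link aggregation are assumptions;
# the metric proposed in the article may be defined differently.

def untrustworthiness(checks_done: int, checks_total: int) -> float:
    """Fraction of applicable security checks not yet performed:
    0.0 = all verification effort invested, 1.0 = no effort invested."""
    return 1.0 - checks_done / checks_total

# Hypothetical components: (checks performed, checks applicable).
components = {
    "web_server": (4, 5),
    "database":   (2, 4),
    "firewall":   (3, 3),
}

scores = {name: untrustworthiness(done, total)
          for name, (done, total) in components.items()}

# Weakest-link view: the system deserves at least as much distrust
# as its least-verified component.
system_min_untrustworthiness = max(scores.values())
```

The point of the sketch is that the score measures the administrator's relationship with the system (verification effort actually invested) rather than an intrinsic, observer-independent property of the system itself.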