Security Assurance Evaluation and IT Systems’ Context of Use Security Criticality


Moussa Ouedraogo (Public Research Center Henri Tudor, Luxembourg), Haralambos Mouratidis (University of East London, England), Eric Dubois (Public Research Center Henri Tudor, Luxembourg) and Djamel Khadraoui (Public Research Center Henri Tudor, Luxembourg)
DOI: 10.4018/978-1-4666-2785-7.ch005

Abstract

Today’s IT systems are ubiquitous and increasingly take the form of small portable devices, to the convenience of users. However, reliance on this technology is growing faster than the ability to deal with the correspondingly increasing threats to information security. This paper proposes metrics and a methodology for evaluating the security assurance of operational systems that account for both the measured correctness of a safeguarding measure and the security criticality of the context in which the system operates (i.e., where the system is used and/or for what purpose). To that end, the paper also proposes a novel classification scheme for determining the security criticality level of an IT system. The advantage of this approach is that the fluctuation of the assurance level, driven by the correctness of deployed security measures and the criticality of the context of use of the IT system or device, can guide users without a security background on which activities they may or may not perform under given circumstances. The work is illustrated with an application based on a case study of a Domain Name Server (DNS).

Introduction

Evolution is an inherent characteristic of IT systems. IT systems’ models evolve with their context, either because of new business or user requirements or because of changes in the system’s operating environment (new threats, for instance). However, as is well known, different contexts may harbor different risks and hence call for different security requirements.

The list of recent high-profile security breaches is daunting; headlines have exposed major leaks at some of the largest organizations, resulting in loss of customer trust, potential fines, and lawsuits (Le Grand, 2005). Vulnerable systems pose a serious risk to successful business operations, so managing that risk is a necessary board- and executive-level concern. Executives must ensure appropriate steps are taken to audit and address IT flaws that may leave critical systems open to attack (Le Grand, 2005). As revealed by a study conducted by Wool (2004) on firewall configurations, a common but sometimes overlooked source of IT risk for large, distributed, and open information systems is the improper deployment of security measures after a risk assessment has been completed. The term security measures in this paper refers to security controls. In fact, risk countermeasures may be properly elucidated during risk assessment, but their actual deployment may fall short, or unidentified hazards in the system environment may render them less effective. How good, for instance, is a fortified door if the owner inadvertently leaves it unlocked? Or, to take a more technical example, how relevant is a firewall for a critical system linked to the Internet if it is configured to allow any incoming connection? Therefore, monitoring and reporting on the security status or posture of IT systems can be carried out to determine compliance with security requirements (Jansen, 2009) and to gain assurance of their ability to adequately protect system assets. This remains one of the fundamental tasks of security assurance, defined here as the ground for confidence that deployed security measures meet their objectives. In this respect, our understanding of security assurance is in line with the Common Criteria’s definition of assurance (Common Criteria Sponsoring Organizations, 2006).
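The firewall example above can be made concrete with a small sketch (not from the chapter): a rule checker that flags a firewall whose effective policy admits any inbound connection. The `Rule` class and the first-match semantics are illustrative assumptions.

```python
# Hypothetical illustration of the firewall example: a deployed firewall
# whose rule set permits any incoming connection offers little protection,
# however well the countermeasure was specified at risk assessment time.
from dataclasses import dataclass


@dataclass
class Rule:
    action: str   # "allow" or "deny"
    source: str   # CIDR block, or "any"
    port: str     # port number as a string, or "any"


def is_effectively_open(rules: list[Rule]) -> bool:
    """Return True if the first rule matching all inbound traffic allows it.

    Rules are evaluated first-match; if no rule matches, assume default-deny.
    """
    for rule in rules:
        if rule.source == "any" and rule.port == "any":
            return rule.action == "allow"
    return False


# Deployed but ineffective: the measure is present yet allows everything.
misconfigured = [Rule("allow", "any", "any")]
# A hardened policy: one narrow allow rule, then an explicit default-deny.
hardened = [Rule("allow", "203.0.113.0/24", "443"), Rule("deny", "any", "any")]

print(is_effectively_open(misconfigured))  # True
print(is_effectively_open(hardened))       # False
```

This is exactly the gap the chapter targets: the countermeasure exists, so a checklist audit would pass, yet its configuration renders it ineffective.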
Unfortunately, most of what has been written so far about security assurance is definitional. Published works either provide guidelines for identifying metrics (e.g., Vaughn et al., 2002; Seddigh et al., 2004; Savola, 2007) without indicating how to combine them into the quantitative or qualitative indicators needed for a meaningful understanding of the security posture of an IT component; target end products (e.g., the Common Criteria, Common Criteria Sponsoring Organizations, 2006); or target the software development stage (e.g., assurance cases, Strunk & Knight, 2006; UMLSec, Jürjens, 2005; Secure Tropos, Mouratidis & Giorgini, 2007).

Our approach. We argue that evaluation of an operational system’s security assurance only makes sense when placed within a risk management context. To reflect this, our method takes place after the risk assessment has been completed and the countermeasures deployed. Figure 1 shows the security assurance evaluation model and how it relates to the risk assessment stage, whose concepts are depicted in bold. The security requirements identified for risk mitigation may come either in the form of security functions deployed on the system or in the form of guidelines for security-relevant parameters, i.e., parameters that are not directly linked to security but which, when altered, could induce a security issue. According to NIST Special Publication 800-33 (Stoneburner, 2009), the assurance that the security objectives (integrity, availability, confidentiality, and accountability) will be adequately met by a specific implementation depends on whether the required security functionality is present, correctly implemented, and effective. Heeding that call, our approach to evaluating the security assurance of a security measure is founded on key verifications that aim to:

Figure 1. Security assurance evaluation model
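The three verification dimensions cited above from NIST SP 800-33 (presence, correct implementation, effectiveness) can be sketched as a minimal scoring routine. This is an illustrative assumption, not the chapter’s actual metric: the field names and the weakest-dimension aggregation are invented for the example.

```python
# Hypothetical sketch: a security measure contributes assurance only if it
# is present, correctly implemented, and effective (NIST SP 800-33).
# The min() aggregation is an assumption for illustration.
from dataclasses import dataclass


@dataclass
class MeasureCheck:
    present: bool        # is the security function deployed at all?
    correctness: float   # 0..1, how well configuration matches requirements
    effectiveness: float # 0..1, observed ability to counter the threat


def assurance_score(check: MeasureCheck) -> float:
    """An absent measure yields zero assurance; otherwise the score is
    capped by the weakest of the remaining dimensions."""
    if not check.present:
        return 0.0
    return min(check.correctness, check.effectiveness)


# A well-configured firewall that nonetheless performs poorly in practice:
firewall = MeasureCheck(present=True, correctness=0.9, effectiveness=0.4)
print(assurance_score(firewall))  # 0.4
```

The point the sketch makes is the same as the chapter’s: a measure that is present and even correctly configured still yields low assurance when its effectiveness in the operational context is low.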
