Formative User-Centered Evaluation of Security Modeling: Results from a Case Study

Sandra Trösterer, Elke Beck, Fabiano Dalpiaz, Elda Paja, Paolo Giorgini, Manfred Tscheligi
Copyright: © 2012 |Pages: 19
DOI: 10.4018/jsse.2012010101

Abstract

Developing a security modeling language is a complex activity. It becomes particularly challenging for Security Requirements Engineering (SRE) languages, where social/organizational concepts are used to represent high-level business aspects, while security aspects are typically expressed in technical jargon at a lower level of abstraction. In order to reduce this socio-technical mismatch and reach a high-quality outcome, appropriate evaluation techniques need to be chosen and carried out throughout the development process of the modeling language. In this article, the authors present and discuss the formative user-centered evaluation approach, namely an evaluation technique that starts in the early design stages and actively involves end-users. The authors demonstrate the approach in a real case study, presenting the results of the evaluation. From the empirical evidence gained, they conclude that formative user-centered evaluation is highly recommended for investigating any security modeling language.

Introduction

Modeling languages are fundamental elements of any software engineering methodology. They are used to capture requirements, design systems, and specify desired properties. However, the role of security modeling is not yet clearly defined in mainstream software engineering. As pointed out by Devanbu and Stubblebine (2000), a major challenge in security software engineering is the unification of the conceptual abstractions, methodologies, and tools adopted differently in security engineering and system engineering.

In SRE (McDermott & Fox, 1999; Sindre & Opdahl, 2005; Giorgini, Massacci, & Mylopoulos, 2003), security concerns are considered from the beginning of the engineering process, alongside stakeholders’ needs and constraints. Here, unifying security with system aspects is inherently more complex than in later phases of the software development process. System aspects are expressed and modeled at a social/business level, so as to allow mutual understanding between requirements engineers and stakeholders. Security aspects, on the other hand, are expressed by security experts at a technical level. SRE modeling languages are thus very likely to suffer from a socio-technical mismatch that hinders the effective conduct of this development phase.

Even for expert designers, accommodating these orthogonal perspectives and minimizing the socio-technical mismatch can be a very complex task. This calls for specific evaluation techniques that involve end-users throughout the development of a security modeling language. The choice of adequate evaluation techniques, a sound set-up of the evaluation study, and an accurate analysis of the results are therefore essential to assess a security modeling language.

Many evaluations of (security) modeling languages exist in the literature, e.g., Opdahl and Sindre (2009), Recker et al. (2009), and Kärnä, Tolvanen, and Kelly (2009). However, such studies were conducted after the language was adopted (summative evaluation). To the best of our knowledge, there are no publicly available evaluations of security modeling languages prior to the language release. In this paper, we report on our experience with the formative evaluation of a security modeling language, called STS-ml (Socio-Technical Security modeling language), which is currently under development in the context of the EU-sponsored project Aniketos. Aniketos (http://www.aniketos.eu) is concerned with ensuring trustworthiness and security in composite services. STS-ml is a security requirements engineering modeling language expressly conceived to model and analyze security concerns for composite services (Casati, Sayal, & Shan, 2001). STS-ml is being developed according to an iterative development paradigm in which internal releases are followed by evaluation studies.

The objective of conducting a formative evaluation before the language is released is to gather feedback from a variety of end-users (requirements engineers, security experts, and domain experts), so as to refine the language in subsequent development iterations. This way of considering the needs of end-users throughout the development process is known as the user-centered design approach. Our evaluation focuses on the adequacy of the modeling primitives for expressing security concerns, as well as an initial assessment of the usability of the language and its support tool. In order to identify weaknesses of both the language and its support tool, we framed the evaluation around the following research questions:

  • How usable are the modeling language and its support tool?

  • Are there missing concepts that would be essential to model security aspects?

  • Is the graphical representation adequate and easy to understand?

  • Are there concepts whose semantics are unclear or underspecified?

  • Are there technical issues that limit the usability of the tool?
