Introduction
Modeling languages are fundamental elements of any software engineering methodology. They are used to capture requirements, design systems, and specify desired properties. However, the role of security modeling is not yet clearly defined in mainstream software engineering. As pointed out by Devanbu and Stubblebine (2000), a major challenge in security software engineering is the unification of the conceptual abstractions, methodologies, and tools adopted in security engineering with those of system engineering.
In security requirements engineering (SRE) (McDermott & Fox, 1999; Sindre & Opdahl, 2005; Giorgini, Massacci, & Mylopoulos, 2003), security concerns are considered from the beginning of the engineering process, alongside stakeholders’ needs and constraints. Here, unifying security with system aspects is inherently more complex than in later phases of the software development process. System aspects are expressed and modeled at a social/business level, so as to allow mutual understanding between requirements engineers and stakeholders. Security aspects, on the other hand, are expressed by security experts at a technical level. SRE modeling languages are thus very likely to suffer from a socio-technical mismatch that hinders the effective conduct of this development phase.
Even for expert designers, accommodating orthogonal perspectives and minimizing the socio-technical mismatch can be a very complex task. This calls for specific evaluation techniques involving end-users throughout the development of a security modeling language. Thus, the choice of adequate evaluation techniques, a sound set-up of the evaluation study, and an accurate analysis of the results are essential to assess a security modeling language.
Many evaluations of (security) modeling languages exist in the literature, e.g., Opdahl and Sindre (2009), Recker et al. (2009), and Kärnä, Tolvanen, and Kelly (2009). However, such studies were conducted after the language was adopted (summative evaluation). To the best of our knowledge, there are no publicly available evaluations of security modeling languages conducted prior to the language’s release. In this paper, we report on our experience with the formative evaluation of a security modeling language, called STS-ml (Socio-Technical Security modeling language), which is currently under development in the context of the EU-sponsored project Aniketos (http://www.aniketos.eu), whose goal is to ensure trustworthiness and security in composite services. STS-ml is a security requirements engineering modeling language expressly designed to model and analyze security concerns for composite services (Casati, Sayal, & Shan, 2001). STS-ml is being developed according to an iterative development paradigm in which internal releases are followed by evaluation studies.
The objective of conducting a formative evaluation before the language is released is to gather feedback from a variety of end-users (requirements engineers, security experts, and domain experts), so as to refine the language in subsequent development iterations. This practice of considering the needs of end-users throughout the development process is known as user-centered design. Our evaluation focuses on the adequacy of the modeling primitives for expressing security concerns, as well as an initial assessment of the usability of the language and its support tool. To identify weaknesses of both the language and its support tool, we centered the evaluation on the following research questions:
- How usable are the modeling language and its support tool?
- Are there missing concepts that would be essential to model security aspects?
- Is the graphical representation adequate and easy to understand?
- Are there concepts whose semantics is unclear or underspecified?
- Are there technical issues that limit the usability of the tool?