Applying AHP for Collaborative Modeling Evaluation: Experiences from a Modeling Experiment

Denis Ssebuggwawo, Stijn Hoppenbrouwers, Henderik A. Proper
Copyright: © 2013 | Pages: 24
DOI: 10.4018/jismd.2013010101

Abstract

Collaborative modeling is one of the approaches used to enhance productivity in many enterprise modeling and system development projects. Determining the success of such a collaborative effort requires evaluating a number of factors that affect the quality not only of the end products (the models) but also of the other modeling artifacts: the modeling language, the modeling procedure, and the support tool. Although a number of quality frameworks have been developed, few have received practical validation, and many offer little guidance on how the evaluation is to be operationalized. The Collaborative Modeling Evaluation (COME) framework presented in this paper offers a holistic approach to the evaluation of the four modeling artifacts. It employs the Analytic Hierarchy Process (AHP), a well-established method from Operations Research, to score the artifacts’ quality dimensions and to aggregate the modelers’ priorities and preferences. Results from a modeling experiment demonstrate both the theoretical and practical significance of the framework.
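
As a rough illustration of the AHP machinery the framework relies on (a sketch under our own assumptions, not the paper's exact procedure or data), the following Python snippet derives priority weights for the four modeling artifacts from a single pairwise comparison matrix and checks judgment consistency with Saaty's consistency ratio. The comparison values are hypothetical.

import numpy as np

# Saaty's random consistency index, used to judge how consistent the
# pairwise comparisons are (standard values for matrix sizes 1..9).
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_priorities(pairwise):
    """Return (priority weights, consistency ratio) for a reciprocal
    pairwise comparison matrix, using the principal eigenvector."""
    n = pairwise.shape[0]
    eigenvalues, eigenvectors = np.linalg.eig(pairwise)
    k = np.argmax(eigenvalues.real)       # principal (Perron) eigenvalue
    weights = np.abs(eigenvectors[:, k].real)
    weights = weights / weights.sum()     # normalize so weights sum to 1
    lambda_max = eigenvalues.real[k]
    ci = (lambda_max - n) / (n - 1) if n > 1 else 0.0
    ri = RANDOM_INDEX.get(n, 1.49)
    cr = ci / ri if ri > 0 else 0.0
    return weights, cr

# Hypothetical comparison of the four artifacts on Saaty's 1-9 scale:
# model, modeling language, modeling procedure, support tool.
A = np.array([[1.0, 3.0, 5.0, 7.0],
              [1/3, 1.0, 3.0, 5.0],
              [1/5, 1/3, 1.0, 3.0],
              [1/7, 1/5, 1/3, 1.0]])
weights, cr = ahp_priorities(A)
print(np.round(weights, 3))   # roughly [0.56, 0.26, 0.12, 0.06]
print(round(cr, 3))           # below the usual 0.10 acceptability threshold

In a full AHP hierarchy, such local weights would be combined across levels, so the quality dimensions of each artifact can be scored in the same way and rolled up into overall artifact scores.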
Article Preview

Introduction

Collaborative Modeling (Renger et al., 2008; Rittgen, 2007), which is closely related to Group Model Building (Vennix, 1996), is a process that can enhance productivity in Information Systems Design and Business Process Re-engineering. During the collaborative effort of system development, stakeholders “move through a process in which they combine their expertise, their insights and their resources to bring them to bear for the task at hand” (de Vreede & Briggs, 2005, p. 1). The importance of involving representatives from different hierarchical levels in a (re-)engineering process is recognized by Dean et al. (1994). However, the bulk of the literature emphasizes the tools and techniques stakeholders use to achieve the desired model quality (completeness and correctness). Yet it has been argued that model quality alone, especially the clarity and completeness that are so often emphasized, is no longer enough (Mendling & Recker, 2007). Following that observation, it is our contention that if we are concerned with the quality of the final model, we also need to evaluate the other modeling artifacts that are used in, and produced during, the modeling session.

Rather than taking the end products (models) to the so-called “modeling expert(s),” we advocate that the evaluation of such models and the other modeling artifacts, which include the modeling language, the modeling procedure and the support tool (Ssebuggwawo et al., 2010), be done by the collaborative modelers themselves. Integrating the evaluation of the models and the process within the modeling session essentially guarantees stakeholders’ satisfaction: after all, it is their model and it is their process. This is complicated, however, by the fact that modelers possess different knowledge, skills and expertise, and often lack the required competencies (Frederiks et al., 2005), which may affect not only the modeling process but also the evaluation of the modeling artifacts. Moreover, they often have different priorities and preferences regarding the modeling artifacts to be evaluated and their associated quality dimensions. One way of overcoming the limitations encountered during the modeling process and its evaluation is to position both within the communicative process (Hoppenbrouwers et al., 2005). This fits well because the modeling process is collaborative in nature, and exchanges among the modelers are expected, and assumed, to eventually lead to agreement and consensus about the final quality of the modeling artifacts.

Communication plays a vital role in system development and in conceptual modeling (Veldhuijzen et al., 2004). The communicative process should render consensus and agreement transparent to the modelers. This is, however, not always the case, since many stakeholders with varying skills, expertise, knowledge, priorities, and preferences are involved in system development. The heterogeneity of the group makes it hard for them to agree on each and every issue. Yet agreement and consensus are key pillars in such an interactive and collaborative environment. To achieve them, participants need to engage in various types of conversation during the creation of agreed models. Such conversations involve negotiation, which results in accepts, rejects, modifications, etc. (Rittgen, 2007). This communicative, argumentative and negotiative process is vital for reaching agreement and consensus about the quality of the different modeling artifacts. Because the modelers differ in the knowledge stored in their mental models, in their skills, competencies and expertise, and in their priorities and preferences, there is always some bias and subjectivity, a fact that makes the overall decision-making process subjective (Saaty, 2008b) and eventually overflows into the evaluations. This raises the question of whether there exists an evaluation framework that can help us evaluate the four modeling artifacts while at the same time reducing the subjectivity and aggregating the modelers’ priorities and preferences. We describe, in this paper, a framework that can help us achieve this. The major contribution of this paper is the COME framework, which participants in the modeling effort can use to collaboratively evaluate the different modeling artifacts without the guidance of a facilitator.
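
One common way AHP-based approaches aggregate the judgments of several modelers is the element-wise geometric mean of their individual comparison matrices. Whether COME aggregates raw judgments or derived priorities in exactly this way is not detailed in this preview, so the sketch below, with hypothetical matrices for two modelers, is only illustrative.

import numpy as np

def aggregate_judgments(matrices):
    """Element-wise geometric mean of reciprocal comparison matrices;
    the result is again a reciprocal matrix representing the group."""
    stacked = np.stack(matrices)
    return np.exp(np.log(stacked).mean(axis=0))

# Two hypothetical modelers comparing three quality dimensions of the
# model (say completeness, correctness, clarity) on Saaty's 1-9 scale.
modeler_1 = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 3.0],
                      [1/5, 1/3, 1.0]])
modeler_2 = np.array([[1.0, 5.0, 3.0],
                      [1/5, 1.0, 1/3],
                      [1/3, 3.0, 1.0]])
group = aggregate_judgments([modeler_1, modeler_2])
print(np.round(group, 3))

The aggregated matrix can then be fed to the same priority computation sketched after the abstract to obtain consensus weights, which is one standard way of dampening individual bias and subjectivity in AHP.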
