Introduction
Collaborative Modeling (Renger et al., 2008; Rittgen, 2007), which is closely related to Group Model Building (Vennix, 1996), is a process that can enhance productivity in Information Systems Design and Business Process Re-engineering. During the collaborative effort of system development, stakeholders “move through a process in which they combine their expertise, their insights and their resources to bring them to bear for the task at hand” (de Vreede & Briggs, 2005, p. 1). The importance of involving representatives from different hierarchical levels in a (re-)engineering process is recognized by Dean et al. (1994). However, the bulk of the literature emphasizes the tools and techniques used by the stakeholders to achieve the desired model quality (completeness and correctness). Yet it has been argued that model quality alone, especially the often-emphasized dimensions of clarity and completeness, is no longer enough (Mendling & Recker, 2007). Following that observation, it is our contention that if we are concerned with the quality of the final model, we also need to evaluate the other modeling artifacts that are used in, and produced during, the modeling session.
Rather than taking the end-products (models) to the so-called “modeling expert(s),” we advocate that the evaluation of such models and the other modeling artifacts - which include the modeling language, the modeling procedure and the support tool (Ssebuggwawo et al., 2010) - be done by the collaborative modelers themselves. Integrating the evaluation of the models and the process within the modeling session essentially guarantees stakeholders’ satisfaction: after all, it is their model and it is their process. This task, however, is complicated by the fact that modelers possess different knowledge, skills and expertise, and often lack the required competencies (Frederiks et al., 2005), which may affect not only the process of modeling but also the evaluation of the modeling artifacts. Moreover, they often have different priorities and preferences regarding the modeling artifacts to be evaluated and their associated quality dimensions. One way of overcoming the limitations encountered during the modeling process and its evaluation is to position both within the communicative process (Hoppenbrouwers et al., 2005). This fits well because the modeling process is collaborative in nature, and exchanges between and among the modelers are expected and assumed to eventually lead to agreement and consensus about the final quality of the modeling artifacts.
Communication plays a vital role in system development and in conceptual modeling (Veldhuijzen et al., 2004). The communicative process should render consensus and agreement transparent to the modelers. This is, however, not always the case, since many stakeholders (with varying skills, expertise, knowledge, priorities, and preferences) are involved in system development. The heterogeneity of the group makes it hard for them to agree on each and every issue. Yet agreement and consensus are key pillars in such an interactive and collaborative environment. To achieve them, participants need to engage in various types of conversation during the creation of agreed models. Such conversations involve negotiation, which results in accepts, rejects, modifications, etc. (Rittgen, 2007). This communicative, argumentative and negotiative process is vital for reaching agreement and consensus about the quality of the different modeling artifacts. Because of the differences in the knowledge stored in the modelers’ mental models, and in their skills, competencies, expertise, priorities and preferences, there is always some bias and subjectivity, a fact that makes the overall decision-making process subjective (Saaty, 2008b) and eventually overflows into the evaluations. This raises the question of whether an evaluation framework exists that can help us evaluate the four modeling artifacts while at the same time reducing this subjectivity and aggregating the modelers’ priorities and preferences. In this paper, we describe a framework that can help us achieve this. The major contribution of this paper is the COME framework, which participants in the modeling effort can use to collaboratively evaluate the different modeling artifacts without the guidance of a facilitator.