The call for the integration of program evaluation into the development of computer-supported learning environments is ever-increasing. This trend is driven not only by demands from policy groups and grant makers who desire greater accountability in lean times, but also by the fact that the outcomes of computer-supported learning environment projects often fall short of the expectations held by the project teams. The discrepancy between the targets set by the project staff and the outcomes achieved suggests a need for formative evaluation approaches (as opposed to summative approaches) that yield information that can be used to improve a program while it is still in development (see Worthen, Sanders, & Fitzpatrick, 1997). Yet despite the known benefits of integrating evaluation into the project development process, we note a lack of theoretical frameworks that reflect the peculiarities of computer-supported learning projects and the ways they evolve (see Keil-Slawik, 1999). This is of crucial importance, because formative evaluation will only be an accepted and effective part of a project if it provides information useful to the project staff. The purpose of this chapter is to outline the obstacles to integrating evaluation into computer-supported learning projects and then to discuss two promising approaches that can be used to address these challenges.
According to Worthen, Sanders, and Fitzpatrick (1997), evaluation is “the identification, clarification and application of defensible criteria to determine an evaluation object’s value (worth or merit), quality, utility, effectiveness or significance in relation to those criteria.” In this regard, evaluation can serve different purposes. Patton (1997) distinguishes among judgment-, knowledge-, and improvement-oriented evaluations. We focus on improvement-oriented evaluation approaches, and we stress that evaluation can facilitate decision making and reveal information that can be used to improve not only the project itself but also outcomes within the project’s target population. The conceptualization of evaluation as an improvement-oriented, formative activity reveals its proximity to design activities. Indeed, this kind of evaluative activity is an integral part of any design process, whether explicitly acknowledged or not. Accordingly, the question is not whether one should evaluate, but which evaluation methods generate the most useful information for improving the program. This question can only be answered by confronting the characteristics of, and obstacles to, designing computer-supported learning environments.
Keil-Slawik (1999) points out that one of the main challenges in evaluating computer-supported learning environments is that some goals and opportunities arise spontaneously in the course of the development process and thus cannot be specified in advance. We believe this is because design, in this context, addresses ill-structured and situated problems. The design and implementation of a computer-supported learning environment, which can be viewed as a response to a perceived problem, also generates new problems as the work proceeds. Furthermore, every computer-supported learning experience takes place in a unique social context that either contributes to the success of an intervention or prevents it. Evaluation therefore requires that designers pay attention to evolutionary and cyclic processes as well as to situational factors. As Weiss notes, “much evaluation is done by investigating outcomes without much attention to the paths by which they were produced” (1998, p. 55).
For developers designing projects at the intersection of information and communication technology (ICT) and the learning sciences, evaluation is difficult. Evaluation efforts are often undermined by a myriad of confounding variables, leading to a “garbage in, garbage out” effect: the evaluation can be no better than the parameters that were built into the project from the start (Nash, Plugge, & Eurlings, 2001). The omission of key elements of evaluative thinking from computer-supported learning projects is exacerbated by the fact that many investigators lack the tools and expertise necessary to cope with the complexity they face in addressing the field of learning.