Improving Evaluations in Computer-Supported Learning Projects

John B. Nash, Christoph Richter, Heidrun Allert
Copyright: © 2009 | Pages: 6
DOI: 10.4018/978-1-60566-198-8.ch162

Abstract

The call for the integration of program evaluation into the development of computer-supported learning environments is ever-increasing. This trend is driven not only by demands from policy groups and grant makers who desire greater accountability in lean times, but also by the fact that the outcomes of computer-supported learning environment projects often fall short of the expectations held by the project teams. The discrepancy between the targets set by the project staff and the outcomes achieved suggests a need for formative evaluation approaches (as opposed to summative approaches) that yield information that can be used to improve a program while it is still in its development stage (see Worthen, Sanders, & Fitzpatrick, 1997). Yet in spite of the known benefits of integrating evaluation into the project development process, we note a lack of theoretical frameworks that reflect the peculiarities of computer-supported learning projects and the ways they evolve (see Keil-Slawik, 1999). This is of crucial importance, as formative evaluation will only be an accepted and effective part of a project if it provides information useful to the project staff. The purpose of this chapter is to outline the obstacles to integrating evaluation into computer-supported learning projects and then to discuss two promising approaches that can be used to address these challenges.
Chapter Preview

Background

According to Worthen, Sanders and Fitzpatrick (1997), evaluation is “the identification, clarification and application of defensible criteria to determine an evaluation object’s value (worth or merit), quality, utility, effectiveness or significance in relation to those criteria.” In this regard, evaluation can serve different purposes. Patton (1997) distinguishes between judgment-, knowledge- and improvement-oriented evaluations. We focus on improvement-oriented evaluation approaches. We stress that evaluation can facilitate decision making and reveal information that can be used to improve not only the project itself but also outcomes within the project’s target population. The conceptualization of evaluation as an improvement-oriented and formative activity reveals its proximity to design activities. In fact, this kind of evaluative activity is an integral part of any design process, whether explicitly mentioned or not. Accordingly, the question is not whether one should evaluate, but which evaluation methods generate the most useful information for improving the program. This question can only be answered by confronting the characteristics of, and obstacles to, designing computer-supported learning environments.

Keil-Slawik (1999) points out that one of the main challenges in evaluating computer-supported learning environments is that some goals and opportunities arise spontaneously in the course of the development process and thus cannot be specified in advance. We believe that this is because design, in this context, addresses ill-structured and situated problems. The design and implementation of a computer-supported learning environment, which can be viewed as a response to a perceived problem, also generates new problems as it unfolds. Furthermore, every computer-supported learning experience takes place in a unique social context that either contributes to the success of an intervention or prevents it. Therefore, evaluation requires that designers pay attention to evolutionary and cyclic processes and to situational factors. As Weiss notes, “much evaluation is done by investigating outcomes without much attention to the paths by which they were produced” (1998, p. 55).

For developers designing projects at the intersection of information and communication technology (ICT) and the learning sciences, evaluation is difficult. Evaluation efforts are often subverted by a myriad of confounding variables, leading to a “garbage in, garbage out” effect; the evaluation can be no better than the parameters that were built into the project from the start (Nash, Plugge, & Eurlings, 2001). The tendency to leave key elements of evaluative thinking out of computer-supported learning projects is exacerbated by the fact that many investigators lack the tools and expertise necessary to cope with the complexity they face in addressing the field of learning.

Key Terms in this Chapter

Program Theory: A set of assumptions underlying a program that explains why the planned activities should lead to the predefined goals and objectives. The program theory includes activities directly implemented by the program as well as the activities that are generated as a response to the program by the context in which it takes place.

Evolutionary Processes: Processes of change in a certain direction that occur regardless of any external planning exerted upon them. Related to the notion that a project can “take on a life of its own.”

Socio-Technical Systems: Systems, such as computer-supported learning projects, in which the technical decisions made during tool development interact with the social interactions that arise as people use the tool.

Summative Evaluation: The elicitation of information that can be used to determine if a program should be continued or terminated.

Computer-Supported Learning (of which CSCL is one part): Learning processes that take place in an environment that includes computer-based tools and/or electronically stored resources.

Scenarios: Narrative descriptions of sequences of (inter-)actions performed by one or more persons in a particular context. Scenarios include information about goals, plans, interpretations, values, and contextual conditions and events.

Evaluation: The systematic determination of the merit or worth of an object.

Cyclical Processes: Events or operations within a project that recur and lead project teams back to points that resemble the project’s starting point.

Formative Evaluation: The elicitation of information that can be used to improve a program while it is in the development stage.

Program: A social endeavor to reach some predefined goals and objectives. A program draws on personal, social and material resources to alter or preserve the context in which it takes place.
