This chapter examines the approach taken in the evaluation of a large-scale feasibility trial of the production, distribution, and use of learning objects (LOs). The trial was carried out by partners in several European countries as part of the Context E-Learning with Broadband Technologies (CELEBRATE) project, coordinated by European Schoolnet. The project produced a large number of LOs and linked commercial and ministry producers of LOs to make their products available to teachers in six countries. The chapter examines what it means to evaluate learning objects, given that they are both particular objects and a general idea, a question made especially important by the dearth of empirical studies of the use of LOs. It then explores how this was tackled strategically and tactically, bearing in mind a European context of distributed locations, different languages, and different education systems.
Can LOs Be Evaluated?
When we presented our preliminary findings of the evaluation at the European Association for Research on Learning and Instruction (EARLI) annual conference in 2005 (Ilomäki, Lakkala, & Paavola, 2005; Jaakkola & Nurmi, 2005; McCormick & Li, 2005), our discussant, Wouter van Joolingen, rightly posed the question of whether, and in what way, it was possible to evaluate learning objects in the general way we were apparently doing. He drew a parallel with trying to evaluate “pills” (rather than a specific drug), and argued that the concept of an LO applied to a form of packaging and the metadata, not the content, in which case the whole process of production, storage, selection, and use had to be part of the evaluation. As he graphically put it, “Just evaluating learning objects does not say anything.” At that conference we were reporting only the results of the “use” of LOs, and his comment was an important reminder of the limits of what can be claimed, and of the importance of reporting our general approach to evaluating LOs in the context of the project. Here I will examine how we answered his justifiable question, which of course also contains within it the definition of what constitutes an LO.
The definition of an LO we used was rather general: any entity, digital or nondigital, that can be used or re-used or referenced during technology-supported learning. This makes it difficult to answer the question that Wouter van Joolingen posed, as in this sense an LO has no special characteristics. There are, however, a number of characteristics usually associated with LOs, namely that they are:
Interoperable, that is, that they will operate in any technical environment;
Reusable, that is, that they can be used by any teacher in any context;
Modifiable, that is, that a teacher can alter some features of the LO to suit their situation;
Adaptable, that is, that they will adapt to the learner’s needs.
Key Terms in this Chapter
Modifiability: The condition for a learning object that a teacher can alter some of its features to suit his or her situation.
Interoperability: The condition for a learning object to operate in any technical environment.
Experimental Approach to Evaluation: An approach that requires an experimental group of respondents to receive a treatment (e.g., a new approach to teaching) and to be compared to a control group (who receive the traditional treatment). Ideally, individuals should be randomly assigned to the experimental or control group, or the two groups should be matched on significant variables (e.g., prior attainment).
Routine Data: Data that is collected automatically by a learning object distribution system.
Reusability: The condition for a learning object to be used by any teacher in any context.
Illuminative Evaluation: An approach to evaluation that seeks to illuminate the conditions of an educational programme mainly through a qualitative evaluation approach (e.g., ethnography).
Adaptability: The condition for a learning object that it will adapt to the learner’s needs.
Learning Object: Any entity, digital or nondigital, that can be used or re-used or referenced during technology-supported learning.
Bureaucratic Evaluation: An evaluation approach that seeks to serve the needs of those who control education and which accepts their values and helps them accomplish their policy objectives. The methods must be credible to them and not leave them open to public criticism.
Democratic Evaluation: An approach to evaluation that provides information to the community about an educational programme, adopting a pluralistic approach and serving all the stakeholders in the programme.
Metadata: Data used to describe a learning object in ways that a computer or computer system can read and work with.
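To make the metadata definition concrete, the sketch below builds and reads back a small machine-readable metadata record for a learning object. The element names loosely follow the IEEE LOM standard's "general" category; they are illustrative assumptions only, not the actual CELEBRATE metadata schema.

```python
# Illustrative sketch: a LOM-style metadata record that a repository
# system could read and search. Element names are assumptions, not the
# CELEBRATE schema.
import xml.etree.ElementTree as ET

def build_metadata(title: str, language: str, keywords: list) -> ET.Element:
    """Build a small machine-readable metadata record for a learning object."""
    lom = ET.Element("lom")
    general = ET.SubElement(lom, "general")
    ET.SubElement(general, "title").text = title
    ET.SubElement(general, "language").text = language
    for kw in keywords:
        ET.SubElement(general, "keyword").text = kw
    return lom

def get_titles(records) -> list:
    """A distribution system can search records by reading the same fields back."""
    return [r.findtext("general/title") for r in records]

record = build_metadata("Fractions: an introduction", "en",
                        ["mathematics", "fractions"])
print(ET.tostring(record, encoding="unicode"))
print(get_titles([record]))
```

Because the record describes the object's packaging rather than its content, a system can store, index, and retrieve it without interpreting the learning material itself, which is precisely the point van Joolingen raised about what an evaluation of "learning objects" can and cannot address.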