Collaborative Argumentation in Learning Resource Evaluation

John C. Nesbit, Tracey L. Leacock
DOI: 10.4018/978-1-59904-861-1.ch028

Abstract

The Learning Object Review Instrument (LORI) is an evaluation framework designed to support collaborative critique of multimedia learning resources. In this chapter, the interactions among reviewers using LORI are framed as a form of collaborative argumentation. Research on collaborative evaluation of learning resources has found that reviewers’ quality ratings tend to converge as a result of their interactions. Also, novice instructional designers have reported that collaborative evaluation is valuable preparation for undertaking resource design projects. The authors reason that collaborative evaluation is effective as a professional development method to the degree that it sustains argumentation about the application of evidence-based design principles.

Collaborative Argumentation in Learning Resource Evaluation and Design

There are several reasons why producing high-quality multimedia learning resources is challenging. Many types of media, media features, and design models are available to resource developers, yet there are few standards to guide their selection. Relevant research on multimedia learning has expanded, yet many developers are unaware of its full scope and value. Personnel are available who specialize in media development, instructional design, usability design, subject knowledge, and teaching, yet they are rarely coordinated so that their expertise can be effectively brought to bear. Learners usually have opinions about the resources they use, yet their opinions are rarely heard by developers.

The challenge is seen most clearly when design decisions are informed by conflicting recommendations from different specializations. Decisions about text layout are a case in point. Psychologists and educational researchers who have studied on-screen reading of text with a fixed number of alphabetic characters per line have observed that more characters per line (possibly up to 100) may be optimal for rapid reading, but that as few as 40 or 50 characters per line may be optimal for reading comfort and comprehension (Dyson, 2004). Ling and van Schaik (2006, p. 403) concluded that “longer line lengths should be used when information is presented that needs to be scanned quickly…. [and] shorter line lengths should be used when text is to be read more thoroughly, rather than skimmed.” Specialists familiar with this research who are designing the text components of a resource for a defined learning activity might choose a fixed line length of, say, 70 characters. On the other hand, many Web developers advocate a “liquid design” for Web pages in which the number of characters per line varies according to the width of the browser window, character size, and presence of images (Weiss, 2006). They argue that readers can resize the browser window to the optimal width for normal reading, or to a much wider width that minimizes scrolling when scanning through a large document. Because neither the fixed nor the liquid approach to line length is likely to be the best choice in all design situations, an analysis of how specific circumstances play into the decision seems necessary, and that process requires knowledge of both strategies. Finding the best design solutions and evaluating existing designs requires an exchange of specialist knowledge in relation to situated learner needs. The nature and requirements of this exchange are the concern of the present chapter.
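To make the contrast concrete, a front-end developer could switch between the two strategies with a few lines of code. The sketch below is only illustrative: the article selector, the function names, and the 70-character cap are our assumptions, not prescriptions from the research cited above. It relies on the CSS ch unit, which approximates the width of one character, to cap characters per line.

```typescript
// Hypothetical sketch contrasting the two line-length strategies.
// The CSS "ch" unit approximates one character width, so a max-width
// expressed in ch roughly caps the number of characters per line.

function applyFixedLineLength(el: HTMLElement, chars = 70): void {
  // Fixed strategy: hold every line to roughly `chars` characters,
  // regardless of how wide the reader makes the browser window.
  el.style.maxWidth = `${chars}ch`;
}

function applyLiquidLineLength(el: HTMLElement): void {
  // Liquid strategy: let line length track the window width, so the
  // reader can widen the window to scan or narrow it to read closely.
  el.style.maxWidth = "none";
  el.style.width = "100%";
}

const article = document.querySelector<HTMLElement>("article");
if (article) {
  applyFixedLineLength(article, 70); // or applyLiquidLineLength(article)
}
```

One possible compromise, under the same assumptions, is to expose this choice to the reader, letting a single resource serve both rapid scanning and careful reading.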

Any approach to ensuring quality in learning objects that is built around rigid standards for technologies or implementation will quickly become obsolete. Instead, what is needed is a system for evaluating learning objects that applies design principles, recognizes that the best way to operationalize these principles will change from context to context, and has a mechanism for continued interpretation and clarification of how these principles relate to specific learning objects. We maintain that continued interpretation of quality standards requires reasoned discussion or argumentation among learning object stakeholders—media developers, instructional designers, instructors, students, and so on—and that this argumentation can also serve as a form of professional development for the stakeholders. Such dialogue gives professionals and students the opportunity to test their ideas against the views of other stakeholders who may be approaching the same object from different professional perspectives.

The purpose of this chapter is to present theory and evidence that collaborative argumentation can be a powerful method for the design and evaluation of multimedia learning resources. We describe how a model of collaborative argumentation that we have developed, convergent participation, has been used to evaluate learning resources and provide professional development for learning resource designers. Before taking up this main theme we introduce an instrument for evaluating multimedia learning resources that offers substantive guidance to collaborating reviewers.

Key Terms in this Chapter

Learning by Evaluation: A process in which students learn design principles by critiquing existing objects. In the course of forming and explaining their evaluation, students gain a deeper understanding of design principles than they would by only reading about them. Learning by evaluation complements learning by design, in which students must create their own objects and may often be distracted by technical matters.

eLera (E-Learning Research and Assessment Network): A Web site featuring tools for evaluating learning resource quality. Members can register the metadata for any learning object and then use evaluation tools within eLera to rate the object individually or collaboratively. The goals of eLera are (1) to improve the quality of online learning resources through better design and evaluation; (2) to develop effective pedagogical models that incorporate learning objects; and (3) to help students, teachers, professors, instructional designers, and others to select pedagogical models and digital resources that meet their requirements.

Collaborative Argumentation: A form of productive critical thinking characterized by evaluation of claims and supporting evidence, consideration of alternatives, weighing of costs and benefits, and exploration of implications.

Convergent Participation: An evaluation protocol in which individuals first rate learning objects independently and then discuss the reasons for their ratings in a structured, moderated discussion. Participants may choose to change their ratings during the group discussion.

Learning Object Review Instrument (LORI): A nine-item heuristic quality rating tool for digital learning resources developed by the E-Learning Research and Assessment Network (Available from: www.elera.net). The nine items are: content quality, learning goal alignment, feedback and adaptation, motivation, presentation design, interaction usability, accessibility, reusability, and standards compliance.
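As an illustration of how these last two terms fit together, the sketch below models a LORI review and the two phases of convergent participation. Only the nine item names come from LORI; every type and function name here is hypothetical, and the 1-to-5 rating scale in the comments is an assumption of the sketch.

```typescript
// Illustrative sketch (names are hypothetical): a LORI review holds up
// to nine item ratings, and convergent participation records both the
// independent rating and any revision made during the group discussion.

const LORI_ITEMS = [
  "contentQuality", "learningGoalAlignment", "feedbackAndAdaptation",
  "motivation", "presentationDesign", "interactionUsability",
  "accessibility", "reusability", "standardsCompliance",
] as const;

type LoriItem = (typeof LORI_ITEMS)[number];

// A reviewer may decline to rate an item, so every entry is optional.
type Rating = Partial<Record<LoriItem, number>>; // e.g., values on a 1-5 scale

interface Review {
  reviewer: string;
  independent: Rating;     // phase 1: rated before the group discussion
  postDiscussion?: Rating; // phase 2: items the reviewer chose to revise
}

// Mean rating for one item, preferring each reviewer's revised value.
function aggregate(reviews: Review[], item: LoriItem): number | undefined {
  const values = reviews
    .map((r) => r.postDiscussion?.[item] ?? r.independent[item])
    .filter((v): v is number => v !== undefined);
  if (values.length === 0) return undefined;
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// Example: two reviewers converge on presentation design after discussion.
const reviews: Review[] = [
  { reviewer: "A", independent: { presentationDesign: 3 },
    postDiscussion: { presentationDesign: 4 } },
  { reviewer: "B", independent: { presentationDesign: 4 } },
];
console.log(aggregate(reviews, "presentationDesign")); // 4
```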

Learning Objects: Digital multimedia learning resources that combine text, images, and other media and are intended for re-use across educational settings. Learning objects typically require a few minutes to perhaps an hour of a learner’s time for initial study, and usually focus on one topic or a small set of closely related elements, which can then be integrated with other objects and activities in a particular teaching context to form a full course.
