Practical Strategies for Assessing the Quality of Collaborative Learner Engagement

John LeBaron, Carol Bennett
Copyright: © 2009 | Pages: 16
DOI: 10.4018/978-1-60566-410-1.ch015

Abstract

Teachers and designers of computer-networked settings increasingly acknowledge that active learner engagement poses unique challenges, especially for instructors weaned on traditional site-based teaching, and that such engagement is essential to the progressive construction of learner knowledge. “Learner engagement” can mean several things: engagement with material, engagement with instructors, and, perhaps most important, peer engagement. Many teachers of computer-networked courses, who are quite diligent about incorporating activities and procedures to promote human interactivity, are confronted with the challenge of assessing the efficacy of their efforts. How do they discern whether the strategies and tactics woven into their “e-settings” are achieving the desired ends? This chapter outlines issues of self-assessment, including ethical questions. It lays out recommendations for self-assessment in a manner that respects student trust and confidentiality, distinguishing the demands of practical self-assessment from scholarly course research. The institutional pressures from which such assessment emerges are also examined.
Chapter Preview

Introduction

The framework for computer-supported collaborative learning (CSCL) outlined by Orvis and Lassiter (2006) makes a case for the active engagement of students in their own learning. These authors identify challenges unique to computer-networked settings, as distinct from face-to-face ones. Their commentary raises the question, “How do we know whether our intentions work?” If we are truly committed to active student engagement and peer collaboration, how do we gauge whether those intentions are achieved? Orvis and Lassiter suggest that cognitive growth depends on the successful social construction of knowledge. If this is true, online instructors and designers need to devise techniques to discern the effectiveness of the tactics and strategies incorporated into their course settings.

Personal interaction is crucial to the success of all forms of teaching and learning (Laurillard, 2000; Swan, 2002; Vrasidas & McIsaac, 1999). Computer-supported learning allows for many kinds of interaction: one-to-one, one-to-many, or many-to-many. By itself, however, technology does not promote interaction. Technology requires human intervention in design and instruction to assure strong student engagement in networked settings (Harasim, 1993; Harasim, Hiltz, Teles, & Turoff, 1995; Kearsley & Shneiderman, 1999). Roblyer and Wiencke (2003) add that specific, deliberate activities are necessary to promote and support interaction among course participants.

Inquiry into the questions of self-assessment in computer-networked learning environments has progressed little since the days when research concentrated on direct efficacy comparisons between computer-mediated and traditional classroom teaching. As computer-networked education was just emerging, Verduin and Clark (1991) reviewed 56 studies comparing the academic achievement of students in conventional classrooms with that of “distance learning” students. Focusing on student performance as measured by grades, they found little or no distinction. Continuing this “no significant difference” stream of research, Russell’s growing compendium of studies (2001) revealed no significant difference in student performance between learners in conventional classrooms and those enrolled in various types of “distance learning” courses. Based on such “no significant difference” research, findings to date have indicated that distance learning, in a variety of modalities, typically matches or exceeds conventional classroom teaching, at least when effectiveness is gauged by student perceptions or by performance measured through, say, course grades.

These studies, however, provide little insight beyond what survey results or student transcripts indicate. They reveal little about the qualitative nature of the compared learning environments, and they leave unanswered questions such as: Do different teaching modalities transform traditional instructional media into significantly different learning experiences? What tactics and strategies do particular teaching and learning settings enable to promote the kinds of student growth sought by course designers and instructors?

Several scholars have decried the persistent failure of scholarly research to analyze academic practice deeply or to improve it (Brown & Johnson-Shull, 2000; Phipps & Merisotis, 1999). Ehrmann (1997) suggests that most research comparing technology-infused with traditional teaching fails to address important substantive questions about distance education. Indeed, comparisons between alternative modes of instruction are of limited value because they typically fail to account for the innumerable, complex variables that distinguish different types of learning environments. As Ramage (2002) points out, deficiencies in research on effective higher education teaching are by no means limited to the analysis of education-at-a-distance; research on classroom practice is similarly weak.
