Practical Strategies for Assessing the Quality of Collaborative Learner Engagement

John LeBaron (Western Carolina University, USA) and Carol Bennett (WRESA Elementary and Middle Grades Curriculum Coordinator, USA)
Copyright: © 2009 | Pages: 16
DOI: 10.4018/978-1-60566-410-1.ch015

Teachers and designers of computer-networked settings increasingly acknowledge that active learner engagement poses unique challenges, especially for instructors weaned on traditional site-based teaching, and that such engagement is essential to the progressive construction of learner knowledge. “Learner engagement” can mean several things: engagement with material, engagement with instructors, and, perhaps most important, peer engagement. Many teachers of computer-networked courses, who are quite diligent about incorporating activities and procedures to promote human interactivity, are confronted with the challenge of assessing the efficacy of their efforts. How do they discern whether the strategies and tactics woven into their “e-settings” are achieving the desired ends? This chapter outlines issues of self-assessment, including ethical questions. It lays out recommendations for self-assessment in a manner that respects student trust and confidentiality, distinguishing the demands of practical self-assessment from scholarly course research. The institutional pressures from which such assessment emerges are also examined.
Chapter Preview

The case for computer-supported collaborative learning (CSCL) outlined by Orvis and Lassiter (2006) argues for the active engagement of students in their own learning. These authors identify challenges unique to computer-networked, as opposed to face-to-face, settings. Their commentary raises the question, “How do we know if our intentions work?” If we are truly committed to active student engagement and peer collaboration, how do we gauge the achievement of those intentions? Orvis and Lassiter suggest that cognitive growth depends on the successful social construction of knowledge. If so, online instructors and designers need techniques to discern the effectiveness of the tactics and strategies incorporated into their course settings.

Personal interaction is crucial to the success of all forms of teaching and learning (Laurillard, 2000; Swan, 2002; Vrasidas & McIsaac, 1999). Computer-supported learning allows for many kinds of interaction: one-to-one, one-to-many, and many-to-many. By itself, however, technology does not promote interaction; it requires human intervention, in both design and instruction, to ensure strong student engagement in networked settings (Harasim, 1993; Harasim, Hiltz, Teles, & Turoff, 1995; Kearsley & Shneiderman, 1999). Roblyer and Wiencke (2003) add that specific, deliberate activities are necessary to promote and support interaction among course participants.

Inquiry into questions of self-assessment in computer-networked learning environments has progressed little since the era when research concentrated on direct efficacy comparisons between computer-mediated and traditional classroom teaching. As computer-networked education was just emerging, Verduin and Clark (1991) reviewed 56 studies comparing the academic achievement of students in conventional classrooms with that of “distance learning” students. Focusing on student performance as measured by grades, they found little or no difference. Continuing this “no significant difference” stream of research, Russell’s growing compendium of studies (2001) likewise revealed no significant difference in performance between learners in conventional classrooms and those enrolled in various types of “distance learning” courses. Such research indicates that distance learning, in a variety of modalities, typically matches or exceeds conventional classroom teaching, at least when effectiveness is gauged by student perceptions or by performance measures such as course grades.

These studies, however, provide little insight beyond that indicated by survey results or student transcripts. They fail to reveal much about the qualitative nature of the compared learning environments, and leave unanswered such other questions as: Do different teaching modalities transform traditional instructional media into significantly different learning experiences? What tactics and strategies do particular teaching and learning settings enable to promote the kinds of student growth sought by the course designers and instructors?

Several scholars have decried the persistent failure of scholarly research to analyze academic practice deeply or to improve it (Brown & Johnson-Shull, 2000; Phipps & Merisotis, 1999). Ehrmann (1997) suggests that most research comparing technology-infused with traditional teaching fails to address important substantive questions about distance education. Indeed, comparisons between alternative modes of instruction are often meaningless because they typically fail to account for the innumerable, complex variables that distinguish different types of learning environments. As Ramage (2002) points out, deficiencies in research on effective higher-education teaching are by no means limited to the analysis of education at a distance; research on classroom practice is similarly weak.

Table of Contents

Gary Poole
Christine Spratt, Paul Lajbcygier
Chapter 1: Re-Assessing Validity and Reliability in the E-Learning Environment
Selby Markham, John Hurt
Reliability and validity have a well-established place in the development and implementation of educational assessment devices. With the advent of...

Chapter 2: Assessing Teaching and Students' Meaningful Learning Processes in an E-Learning Course
Päivi Hakkarainen, Tarja Saarelainen, Heli Ruokamo
In this chapter the authors report on the assessment framework and practices that they applied to the e-learning version of the Network Management...

Chapter 3: Collaborative E-Learning Using Wikis: A Case Report
Charlotte Brack
Within the notion of Web 2.0, social software has characteristics that make it particularly relevant to E-Learning, aligning well with a social...

Chapter 4: Learning and Assessment with Virtual Worlds
Mike Hobbs, Elaine Brown, Marie Gordon
This chapter provides an introduction to learning and teaching in the virtual world Second Life (SL). It focuses on the nature of the environment...

Chapter 5: A Faculty Approach to Implementing Advanced, E-Learning Dependent, Formative and Summative Assessment Practices
Paul White, Greg Duncan
This chapter describes innovative approaches to E-Learning and related assessment, driven by a Faculty Teaching and Learning Technologies Committee...

Chapter 6: Ensuring Security and Integrity of Data for Online Assessment
Christine Armatas, Bernard Colbert
Two challenges with online assessment are making sure data collected is secure and authenticating the data source. The first challenge relates to...

Chapter 7: Issues in Peer Assessment and E-Learning
Robyn Benson
This chapter addresses some issues relating to the use of e-learning tools and environments for implementing peer assessment. It aims to weigh up...

Chapter 8: The Validity of Group Marks as a Proxy for Individual Learning in E-Learning Settings
Paul Lajbcygier, Christine Spratt
This chapter presents recent research on group assessment in an e-learning environment as an avenue to debate contemporary issues in the design of...

Chapter 9: Validation of E-Learning Courses in Computer Science and Humanities: A Matter of Context
Robert S. Friedman, Fadi P. Deek, Norbert Elliot
In order to offer a unified framework for the empirical assessment of e-learning (EL), this chapter presents findings from three studies conducted...

Chapter 10: Designing, Implementing and Evaluating a Self-and-Peer Assessment Tool for E-Learning Environments
Richard Tucker, Jan Fermelis, Stuart Palmer
There is considerable evidence of student scepticism regarding the purpose of team assignments and high levels of concern for the fairness of...

Chapter 11: Identifying Latent Classes and Differential Item Functioning in a Cohort of E-Learning Students
Andrew Sanford, Paul Lajbcygier, Christine Spratt
A differential item functioning analysis is performed on a cohort of E-Learning students undertaking a unit in computational finance. The motivation...

Chapter 12: Is Learning as Effective When Studying Using a Mobile Device Compared to Other Methods?
Christine Armatas, Anthony Saliba
A concern with E-Learning environments is whether students achieve superior or equivalent learning outcomes to those obtained through traditional...

Chapter 13: Evaluation Strategies for Open and Distributed Learning Environments
Thomas C. Reeves, John G. Hedberg
Evaluation falls into the category of those often neglected human practices such as exercise and eating right. All of us involved in education or...

Chapter 14: Introducing Integrated E-Portfolio Across Courses in a Postgraduate Program in Distance and Online Education
Madhumita Bhattacharya
This chapter presents a description and analysis of salient issues related to the development of an integrated e-portfolio application implemented...

Chapter 15: Practical Strategies for Assessing the Quality of Collaborative Learner Engagement
John LeBaron, Carol Bennett
Teachers and designers of computer-networked settings increasingly acknowledge that active learner engagement poses unique challenges, especially...

Chapter 16: Afterword: Learning-Centred Focus to Assessment Practices
Som Naidu
Many teachers commonly use assessment as the starting point of their teaching activities because they believe that assessment drives learning and...
About the Contributors