A Case Study of Peer Assessment in a Composition MOOC: Students' Perceptions and Peer-grading Scores versus Instructor-grading Scores
Lan Vu (Southern Illinois University at Carbondale, USA)
DOI: 10.4018/978-1-5225-1851-8.ch009

Abstract

Enrollments of many thousands of students in MOOCs appear to exceed the assessment capacity of instructors; this inability to grade so many papers is likely why MOOCs have turned to peer assessment. However, there has been little empirical research on peer assessment in MOOCs, especially composition MOOCs. This study addresses issues in peer assessment in a composition MOOC, particularly students' perceptions and peer-grading scores versus instructor-grading scores. The findings provide evidence that peer assessment was well received by the majority of students, although many also expressed negative feelings about the activity. Statistical analysis shows significant differences between the grades given by students and those given by the instructors: the grades students awarded to their peers tended to be higher than the instructor-assigned grades. Based on these results, the study concludes with implications for implementing peer assessment in a composition MOOC context.

Introduction

The evolution from traditional online learning, or online learning 1.0, to online learning 2.0 has created both opportunities and challenges for higher education (Sloan C, 2013; Grosseck, 2009; McLoughlin & Lee, 2007). In traditional online learning, online courses are quite similar to traditional face-to-face courses in terms of the ratio of students to instructors. In online learning 2.0, however, of which massive open online courses (MOOCs), including MOOCs in composition, are a typical representative, an online instructor can have up to several thousand students in a single course. Grading in such massive open online courses becomes a burden, if not an impossible mission, for even the most dedicated professors, even with an army of equally dedicated teaching assistants. Since not all assignments can be designed in auto-graded formats, and artificial intelligence grading programs are not well regarded by educators and researchers (Condon, 2013; Deane, 2013; Bridgeman, Trapani, & Yigal, 2012; Byrne, Tang, Truduc, & Tang, 2010; Chen & Cheng, 2008; Cindy, 2007; Benett, 2006; Cheville, 2004; Chodorow & Burstein, 2004), online peer grading is utilized, especially for composition and other courses in the humanities. This online peer-grading practice shifts the traditional grading authority from the instructor to the learners and poses many unanswered questions about the reliability and validity of online peer-reviewed grades in an open online learning setting. In the literature, findings in a few studies on peer grading (e.g., Cho et al., 2006; Sadler & Good, 2006; Bouzidi & Jaillet, 2009) show high consistency among grades assigned by peers and a high correlation between peer grading and teacher grading, which indicates that peer grading has been found to be a reliable and valid assessment tool. However, these findings are generally based on the context of college courses with small or moderate enrollments.
By the time this study was conducted, there had been only one empirical study on peer grading in a MOOC context. Lou, Robinson, and Park (2014) examined peer-graded assignments from a Coursera MOOC and found that the scores given by peer students were fairly consistent and highly similar to the scores given by instructors. Nevertheless, their results refer to a Coursera MOOC named Maps and the Geospatial Revolution, not a composition MOOC. Given that research on peer assessment in MOOCs is limited, the present study looks into certain issues of peer assessment in composition in a MOOC context. For this study, I collected surveys, conducted interviews, and accumulated statistical data on students' and instructors' grades and comments from a seven-week MOOC-based composition course, with the purpose of examining aspects of peer assessment in this context, particularly the students' perceptions and the grades given by the students and the instructors.