Collaborative Calibrated Peer Assessment in Massive Open Online Courses

Asma Boudria, Yacine Lafifi, and Yamina Bordjiba (LabSTIC Laboratory, University 8 May 1945 Guelma, Guelma, Algeria)
Copyright: © 2018 | Pages: 27
DOI: 10.4018/IJDET.2018010105

Abstract

The free and open nature of courses in Massive Open Online Courses (MOOCs) makes it easy to disseminate information to a large number of participants. However, the "massive" property can generate many pedagogical problems, such as the assessment of learners, which is considered the major difficulty facing MOOCs. In fact, the immense number of learners, which in some MOOCs exceeds hundreds of thousands, makes instructor evaluation of students' productions practically impossible. In this work, the authors present a new approach for assessing learners' productions in MOOCs. This approach combines peer assessment with collaborative learning and the calibration method, and it aims at increasing the degree of trust in peer assessment. To evaluate the proposed approach, the authors implemented a MOOC dedicated to learning algorithmics. In addition, a two-month experiment was conducted to study the effects of the proposed approach. The results obtained, presented in this paper, are judged very interesting and encouraging.
Article Preview

1. Introduction and Motivation

In recent years, MOOCs (Massive Open Online Courses) have become the modern trend in e-learning. They have become important tools for supporting the learning of several thousand learners or apprentices simultaneously and have opened several new areas of research. According to Siemens (2013), MOOCs are a continuation of the innovation in using technology to support distance and online learning, providing learning opportunities to a large number of learners.

MOOCs are usually limited in time, organized entirely online (courses, activities, homework, exams, etc.), focused on a precise theme, and open to a large public regardless of origin, level of education, or other criteria; they can accommodate thousands or tens of thousands of participants (Cisel & Bruillard, 2013). They include a comprehensive set of educational resources with pedagogical objectives, interaction modalities, exercises, and exams possibly leading to certification. Furthermore, they involve a teaching staff responsible for supervising learners and for the smooth running of the course (Cisel & Bruillard, 2013).

The free nature and open access of the courses make it easy to disseminate information to a massive number of participants around the world. Anyone with an Internet connection can watch the videos of a course; furthermore, they can download the study materials and benefit from the high-quality education of prestigious universities such as Harvard, MIT, and Stanford. However, the massive number of learners, which in some MOOCs exceeds several thousand, generates many pedagogical problems, such as limited interaction with teachers, an increased abandonment rate, and the difficulty of assessing learners' productions (Hone & El Said, 2016).

The assessment of learners is one of the major difficulties encountered in MOOCs (Bachelet & Cisel, 2013). This difficulty is due mainly to the massive number of participants: the instructors and the administrative staff become unable to assess all learners' productions (Sandeen, 2013). The type of questions asked is another difficulty encountered in MOOCs. For some kinds of exercises, such as open-ended questions or essays, automatic assessment is impossible because the questions require human reflection to understand the solutions (Sandeen, 2013). So, who can evaluate the works of thousands of learners in MOOCs?

Recently, assessing learners in MOOCs has attracted the interest of many researchers (Admiraal et al., 2015; Balfour, 2013; Staubitz et al., 2016; Ren et al., 2016). They propose that it is better to delegate the assessment task to the learners themselves: learners evaluate the productions of their peers (peer assessment) (Luaces, 2015). Peer assessment represents an important solution for this new form of learning because it is the only situation where the number of correctors can equal the number of candidates.

Bachelet and Cisel (2013) argue that "…peer assessment is one of the major challenges of MOOCs because it is the main mechanism for assessing participants' production at the scale of MOOCs when automatic evaluation cannot be applied…" For Miao and Koper (2007), peer assessment is "a special form of collaborative learning in which peer learners learn through assessing others' work". Several studies have shown that peer assessment can also benefit the quality of learning: according to Mirielli (2007), "…it is a powerful method for leveraging the learning processes in a variety of settings…" However, peer assessment is often seen as unreliable. On one side, a participant may not have the competencies required to assess the work of his or her peers; on the other side, learners do not trust their peers' judgments. So, an important question arises: how can peer assessment in MOOCs be improved?
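The calibration idea underlying calibrated peer assessment can be illustrated with a minimal sketch. This is not the authors' exact algorithm: the function names, the mean-absolute-error measure, and the linear weighting scheme below are assumptions chosen for illustration. The principle is that each grader first scores a few "calibration" submissions whose instructor grades are known, and the grader's agreement with the instructor then determines how much weight his or her peer grades receive.

```python
def grader_weight(calib_scores, instructor_scores, max_score=10.0):
    """Weight in [0, 1] from the grader's mean absolute error on
    calibration submissions: perfect agreement gives weight 1.0."""
    errors = [abs(g, ) if False else abs(g - i)
              for g, i in zip(calib_scores, instructor_scores)]
    mae = sum(errors) / len(errors)
    return max(0.0, 1.0 - mae / max_score)


def calibrated_grade(peer_scores, weights):
    """Weighted mean of peer scores; falls back to a plain mean
    if every grader has zero weight."""
    total_w = sum(weights)
    if total_w == 0:
        return sum(peer_scores) / len(peer_scores)
    return sum(s * w for s, w in zip(peer_scores, weights)) / total_w


# A grader who scored two calibration essays 8 and 6 while the
# instructor gave 9 and 5 has MAE = 1.0, hence weight 0.9.
w = grader_weight([8, 6], [9, 5])          # 0.9
# A submission graded 7 by that grader and 9 by a weaker one (0.5):
final = calibrated_grade([7, 9], [w, 0.5])  # ≈ 7.71, leaning toward 7
```

The effect is that an unreliable grader cannot pull a learner's final grade far from the scores given by well-calibrated peers, which is one way to raise trust in peer-assigned grades.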
