Evaluation of Multi-Peer and Self-Assessment in Higher Education: A Brunei Case Study

David Hassell, Kok Yueh Lee
DOI: 10.4018/IJITLHE.2020010104

Abstract

This article presents an evaluation of the use of peer and self-assessment as part of the learning process in a public speaking coursework assessment, with students from two departments taking part. Students were assessed by themselves, their peers, and the lecturer using a set of rubrics administered through an online platform, Google Forms. The marks were compared across markers to identify similarities and differences. After the process, student feedback on the experience was obtained using a questionnaire with a seven-point Likert scale to rate the different questions. Analysis of the marks awarded found that, whilst correlations between particular pairs of markers (e.g. peer and self) were found for certain subsections of the work, there was no overall correlation between the marks. Student perceptions of the exercise indicated that the use of rubrics was well received; students considered it a fair assessment method that provided clear information on how to perform well in the assessment.
Article Preview

Introduction

Self and peer assessment are increasingly used within higher education for both formative and summative assessment, with the former providing timely feedback to students on their performance during a particular exercise. This assessment can take many forms, from informal verbal feedback based on student experience to student evaluation using model answers or assessment rubrics. In the latter case, self or peer assessment using rubrics has been shown to improve performance if implemented effectively (Arendt, Trego, & Allred, 2016). Among the benefits of rubrics are that students are able to understand the tutor’s expectations, the specific intended learning outcomes of the assignment or task, and the assessment criteria, and that students receive feedback informing their achievement and performance skills (Andrade, 2005; Andrade & Du, 2005; Reddy & Andrade, 2010). Asikainen, Virtanen, Postareff and Heino (2014, p. 202) suggested that “long-term pedagogical training is not the only way to develop the university teaching and learning”, implying that the use of rubrics and peer assessment can be an effective teaching approach to improve student learning. However, initial work indicated resistance to this shift from lecturer assessment to peer assessment among both staff and students (Liu & Carless, 2006), and it has subsequently been proposed that a number of different approaches are required to mitigate student reluctance (Sendziuk, 2010).

Work on the use of rubrics has been reported across a wide range of disciplines and academic levels (Andrade, Du, & Wang, 2008; Cho, Schunn, & Wilson, 2006; Moni & Moni, 2008; Tierney & Simon, 2004), but their use poses challenges for the lecturer, including rubric reliability and validity (Andrade, 2005; Moskal & Leydens, 2000). Analysis of published work within engineering education (Davey, 2011; Davey & Palmer, 2012) identifies considerable scatter between assessor and assessee, with later work (Davey, 2015) reporting that students undertaking self-assessment of predominantly quantitative calculations marked their work on average 16% higher than the tutor did. Rater reliability in this case is likely a consequence of the type of question posed as well as any training the raters receive, with the greatest deviation observed in the more open-ended questions. Other studies report similar over-marking for more qualitative activities, such as self-assessment in report writing (Bringula & Moraga, 2017) and peer assessment in oral presentations (Langan et al., 2005). One approach to improving agreement between assessor and assessed is the use of multiple assessors (Cho, Schunn, & Wilson, 2006); however, the need for multiple assessors could increase the workload and resource burden on academic staff. These studies indicate that the success of the approach depends on numerous factors, including the method of implementation, the student cohort and the nature of the exercise being assessed.

Collecting and processing numerous marking rubrics would introduce an additional burden on teaching staff when undertaking assessment, whilst increasing the possibility of inaccurate data entry and hence errors when compiling marks. Technology is increasingly utilised within education (Palenque, 2016), and it is possible that it can also simplify the peer assessment process, providing a straightforward way to engage multiple assessors for an individual piece of work, increasing reliability and reducing the lecturer's assessment burden. This paper presents an evaluation of the use of technology, specifically Google Forms, for self and peer assessment using rubrics as part of the learning process for public speaking in a module studied by two different student cohorts, Civil Engineering and Computer Science. Google Forms is a simple online platform that allows the quick and easy capture of participant responses for subsequent data analysis, and this work addresses the following two questions:
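As a rough illustration of the kind of analysis this workflow enables, the sketch below shows how rubric responses exported from Google Forms (e.g. as a CSV file) might be aggregated and compared across marker types. It is a minimal sketch, not the authors' analysis script: the file name and column names ("student_id", "marker_type", "total_mark") are assumptions introduced for illustration only.

```python
# Minimal sketch: compare marks across marker types (self, peer, lecturer)
# from rubric responses exported from Google Forms as a CSV file.
# File name and column names are hypothetical, not from the study.
from itertools import combinations

import pandas as pd
from scipy.stats import spearmanr

# Each row is one completed rubric for one student by one marker type.
responses = pd.read_csv("rubric_responses.csv")

# Average the (possibly multiple) peer marks so each student has
# one mark per marker type, then pivot marker types into columns.
marks = (responses
         .groupby(["student_id", "marker_type"])["total_mark"]
         .mean()
         .unstack("marker_type")
         .dropna())  # keep only students marked by every marker type

# Rank correlation between each pair of marker types (e.g. peer vs. self).
for a, b in combinations(marks.columns, 2):
    rho, p = spearmanr(marks[a], marks[b])
    print(f"{a} vs {b}: rho = {rho:.2f}, p = {p:.3f}")
```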
