Computer Assisted Evaluation Using Rubrics for Reduction of Errors and Inter and Intra Examiner Heterogeneity

Kissan G. Gauns Dessai, Venkatesh V. Kamat
DOI: 10.4018/IJICTE.2018100104

Abstract

Educational institutions worldwide conduct summative examinations to evaluate the academic performance of students. In higher education institutions, such summative examinations are normally subjective in nature and need manual evaluation. However, the manual evaluation of subjective answer-scripts often suffers from evaluation anomalies and from the impact of 'examiner variability' or 'examiner subjectivity'. Examiner variability arises mainly from differences in the perception and expectations of each examiner, coupled with lapses/errors in evaluation. Most currently employed methods only partly address the problem of evaluation errors/lapses and examiner subjectivity, relying on extra checks such as re-checking, re-verification, and re-evaluation. A pragmatic and unified approach is needed to ensure uniform, error-free evaluation. In this article, the authors present a method of computer-aided evaluation of subjective answer-scripts using rubrics. The proposed approach focuses on improving evaluation by reducing or eliminating errors and examiner variability.
Article Preview

1. Introduction

Academic institutions mainly use summative examinations at the end of a course of study to evaluate students (Boud, 2000; Timmins, Vernon, & Kinealy, 2005). Such summative examinations are normally subjective in nature in higher education institutions and need manual evaluation. Manual evaluation of answer-scripts involves a variety of human-intensive tasks, as listed below (a minimal sketch of the tallying steps follows the list):

  • Reading answer-scripts;

  • Deciding the marks based on answer content;

  • Recording the marks;

  • Calculating the subtotal of marks for each main question;

  • Calculating a grand total of marks;

  • Preparing the statement of marks.
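
To make the tallying steps above concrete, here is a minimal Python sketch; it is illustrative only, not from the article, and the `AnswerScript` record, its fields, and the sample marks are all hypothetical. It holds per-sub-question marks and computes the per-question subtotals and the grand total that the examiner would otherwise tally by hand.

```python
from dataclasses import dataclass, field

# Hypothetical record mirroring the manual tally steps: marks are
# recorded per sub-question, subtotalled per main question, and
# summed into a grand total for the statement of marks.

@dataclass
class AnswerScript:
    roll_no: str
    # main question number -> marks for its sub-questions
    marks: dict[int, list[float]] = field(default_factory=dict)

    def subtotal(self, question: int) -> float:
        """Subtotal of marks for one main question."""
        return sum(self.marks.get(question, []))

    def grand_total(self) -> float:
        """Grand total of marks across all main questions."""
        return sum(self.subtotal(q) for q in self.marks)


script = AnswerScript("R1023", {1: [4.0, 3.5], 2: [6.0], 3: [2.5, 2.0, 3.0]})
print(script.subtotal(1))    # 7.5
print(script.grand_total())  # 21.0
```

Once marks are captured in such a structure, the subtotals and grand total are derived rather than re-entered, which removes the arithmetic steps where manual errors typically creep in.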

A typical examination involves a large number of answer-books, often requiring multiple examiners for each course paper. In inter- and intra-examiner evaluation of subjective answer-scripts, each examiner applies his or her own yardstick. These independent evaluation scales result in large variations in the allotment of marks, that is, wide differences in the average and range of marks for a particular course (Brooks, 2012; O'Hagan & Wigglesworth, 2015). The large number of answer-scripts to evaluate, the subjectivity associated with the answers, and the lack of uniform evaluation guidelines are among the major causes of variation in marks allotment. Besides this, errors can creep in while totaling and recording marks. Errors in recording of marks can occur during any or all of the following situations (a simple consistency check after the list illustrates how such mismatches can be flagged):

  1. Transferring marks from the inside of the answer-book to its front page;

  2. Recording marks from the front page of the answer-book to the course statement of marks;

  3. Entering marks from the course statement of marks into a computerized system for the final compilation of results.
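
As an illustration of how such transcription errors could be caught mechanically, the following Python sketch compares the three recordings of one candidate's total and flags any mismatch before results are compiled. This is an assumption of this rewrite, not the authors' system; the function name `reconcile` and its parameters are hypothetical.

```python
# Illustrative consistency check across the three recording stages:
# the same total travels from inside the answer-book to its front
# page, then to the course statement of marks. Any disagreement
# between consecutive stages indicates a transcription error.

def reconcile(inside_total: float, front_page: float, statement: float) -> list[str]:
    """Compare the three recordings of one candidate's total marks."""
    issues = []
    if front_page != inside_total:
        issues.append("front page differs from inside total")
    if statement != front_page:
        issues.append("course statement differs from front page")
    return issues

# A digit-transposition error (58 -> 85) at the marks-entry stage:
print(reconcile(58.0, 58.0, 85.0))
# ['course statement differs from front page']
```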

The ultimate victims of defective evaluation are the students. The serious flaws in current evaluation practice are apparent from the significant changes in students' overall marks during verification/re-evaluation of answer-scripts. Table 1 highlights some of the current approaches to compensate for errors/lapses (Reason, 1990; Wiseman, Cairns, & Cox, 2011) occurring at different stages of evaluation. These approaches try to minimize errors in evaluation and result compilation at the cost of additional effort, such as moderation, re-checking, re-verification, and re-evaluation.

Table 1.
Current approaches for controlling evaluation/marks-entry errors

| Task | Issue | Approaches Used |
| --- | --- | --- |
| Evaluation | Inter- and intra-examiner variation in the assignment of marks. | Provide an answer key; moderate evaluated answer-scripts; have answer-scripts evaluated by an experienced examiner. |
| Evaluation | Errors in totals and sub-totals, in selecting the best marks among optional questions, in recording marks on the front of the answer-script, and in transferring them from the front of the answer-script to the main statement of marks. | Verify assessed answer-scripts by assigning them to an experienced verifier. |
| Marks entry (computer system) | Keeping track of roll numbers vis-à-vis the corresponding marks. | Use a dictation process in which one person reads the marks and another enters them. |
| Marks entry (computer system) | Data-entry operator errors. | Employ one person to check the entered marks while another reads them from the course statement of marks. |
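
The rubric-based evaluation the article proposes can be pictured with a small sketch. The Python below is a minimal illustration under assumed design choices; the criteria, levels, and the `score` helper are hypothetical, not the authors' implementation. The key idea it shows is that each criterion offers only fixed achievement levels, so every examiner selects from the same scale instead of applying a private yardstick, and the computer performs the totaling.

```python
# A minimal rubric sketch (assumed design): every examiner must pick
# one of the predefined levels per criterion, which constrains inter-
# and intra-examiner variation, and totaling is done by the computer,
# which removes arithmetic and transcription errors.

RUBRIC = {
    "correctness":  {0: "wrong", 2: "partially correct", 4: "fully correct"},
    "completeness": {0: "missing parts", 1: "most parts", 2: "all parts"},
    "presentation": {0: "unclear", 1: "clear and well organised"},
}

def score(selections: dict[str, int]) -> int:
    """Total the marks for the rubric levels an examiner selects."""
    total = 0
    for criterion, level in selections.items():
        if level not in RUBRIC[criterion]:
            raise ValueError(f"{level} is not a defined level for {criterion!r}")
        total += level
    return total

print(score({"correctness": 4, "completeness": 1, "presentation": 1}))  # 6
```

Because out-of-scale marks are rejected outright and the total is computed rather than copied, this style of tool addresses both columns of Table 1 at once: the rubric constrains examiner subjectivity, and the automation removes the totaling and transcription errors.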
