1. Introduction
Academic institutions mainly use summative examinations at the end of a course of study to evaluate students (Boud, 2000; Timmins, Vernon, & Kinealy, 2005). In higher education institutions, such summative examinations are normally subjective in nature and require manual evaluation. Manual evaluation of answer-scripts involves a variety of human-intensive tasks, as listed below:
- Reading answer-scripts;
- Deciding the marks based on answer content;
- Recording the marks;
- Calculating the subtotal of marks for each main question;
- Calculating a grand total of marks;
- Preparing the statement of marks.
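The totalling steps above amount to a simple aggregation over per-question marks. A minimal sketch, assuming a hypothetical layout in which each main question maps to the marks awarded for its sub-questions (all names and numbers here are illustrative, not taken from any real examination data):

```python
# Hypothetical answer-book: each main question maps to the marks
# awarded for its sub-questions, e.g. Q1(a), Q1(b), Q1(c).
answer_book = {
    "Q1": [4, 3, 5],
    "Q2": [6, 2],
    "Q3": [5, 5, 4],
}

# Subtotal of marks for each main question.
subtotals = {q: sum(marks) for q, marks in answer_book.items()}

# Grand total of marks across the answer-book.
grand_total = sum(subtotals.values())

print(subtotals)    # {'Q1': 12, 'Q2': 8, 'Q3': 14}
print(grand_total)  # 34
```

In manual evaluation, each of these sums is computed and copied by hand, which is precisely where totalling and recording errors can creep in.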
A typical examination system comprises a large number of answer-books for evaluation, often requiring multiple examiners for each course paper. In subjective answer-script evaluation, each examiner applies his or her own yardstick, and this independent evaluation scale results in large-scale variation in the allotment of marks, that is, wide differences in the average and range of marks within a particular course (Brooks, 2012; O'Hagan & Wigglesworth, 2015). The large number of answer-scripts to be evaluated, the subjectivity of the answers, and the lack of uniform evaluation guidelines are some of the major causes of this variation. In addition, errors can creep in while totalling and recording marks. Errors in recording marks can occur during any or all of the following situations:
1. Transferring marks from the inside of the answer-book to its front page;
2. Recording marks from the front page of the answer-book to the course statement of marks;
3. Entering marks from the course statement of marks into a computerized system for the final compilation of results.
The ultimate victims of defective evaluation are the students. The serious flaws in current evaluation practice are apparent from the significant changes in students' overall marks during verification or re-evaluation of answer-scripts. Table 1 highlights some of the current approaches used to compensate for errors and lapses (Reason, 1990; Wiseman, Cairns, & Cox, 2011) occurring at different stages of evaluation. These approaches try to minimize errors in evaluation and result compilation, but only at the cost of additional effort: moderation, re-checking, re-verification, and re-evaluation.
Table 1. Current approaches for controlling evaluation/marks entry errors
| Task | Issue | Approach Used |
| --- | --- | --- |
| Evaluation | Inter- and intra-examiner variation in assignment of marks. | Provision of an answer key / moderation of evaluated answer-scripts / evaluation of answer-scripts by an experienced examiner. |
| Evaluation | Errors in totals, sub-totals, selection of the best marks among optional questions, recording on the front of the answer-script, and transfer from the front of the answer-script to the main statement of marks. | Verification of assessed answer-scripts by an experienced verifier. |
| Marks entry (computer system) | Keeping track of roll numbers vis-à-vis corresponding marks. | A dictation process in which one person reads the marks and another enters them. |
| Marks entry (computer system) | Data entry operator errors. | One person checks the entered marks while another reads them from the course statement of marks. |
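The dictation and re-checking approaches in Table 1 are, in effect, double-entry verification: the same marks are recorded twice through independent channels and the two records are compared. A minimal sketch of such a cross-check, using hypothetical roll numbers and marks (this is an illustration of the idea, not the system described in this article):

```python
# Double-entry cross-check: marks entered independently by two operators
# are compared, and any mismatching roll numbers are flagged for review.
entry_by_operator_1 = {"R101": 72, "R102": 65, "R103": 58}
entry_by_operator_2 = {"R101": 72, "R102": 56, "R103": 58}

mismatches = {
    roll: (mark, entry_by_operator_2.get(roll))
    for roll, mark in entry_by_operator_1.items()
    if entry_by_operator_2.get(roll) != mark
}

print(mismatches)  # {'R102': (65, 56)}
```

The flagged entries (here, a likely digit transposition for roll R102) would then be resolved against the original answer-script, which is essentially what the manual verification step does at far greater cost in human effort.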