This chapter focuses on the use of computers for online summative assessment, in particular for objectively marked items. It aims to address the concerns of individuals wishing to pilot the introduction of online summative assessments in their own institutions. A five-stage development life cycle of online summative assessment—item development, quality assurance, item selection, examination delivery, and results analysis—is presented and discussed.
Many institutions are already using computers for online formative assessment, but in a review of medical education, Cantillon, Irish, and Sales (2004) found the application of computers to summative assessment to be much more limited. Factors limiting wider adoption of online summative assessment included lack of space and perceived security risks. The publication of failures (Harwood, 2005; Heintz & Jemison, 2005; Smailes, 2002) also does little to reassure the unconverted.
Although the rationale for online assessment has been well rehearsed, it is nevertheless useful to recap some pertinent arguments that support its use in the summative area. Students entering higher education today arrive with broad experience of technology in both their school and home lives. They expect interaction, a visual experience, and rapid feedback from their activities (Oblinger, 2006). Moreover, as online assessment becomes increasingly common both in secondary education before students enter university and in the workplace after they leave, universities that do not keep up with this trend risk their courses appearing outdated to students (Sim, Holifield, & Brown, 2004). Online examination also broadens the assessment arsenal and creates a more holistically challenging assessment environment: it is no longer enough simply to be good at written examinations.
From the point of view of teaching and administration staff, the move to assessing students online also offers a number of advantages. As student numbers in the United Kingdom increase, and as staff face mounting pressure to produce research alongside their teaching, a system that can reduce marking loads is highly attractive. Results can be available as soon as an examination finishes, allowing an examination board to review them immediately and release them to students. A number of quality checks can also be performed as the results come in, enabling the early detection of problematic questions. These issues are covered in detail below.
The chapter concentrates on the specific topic of computer-based assessment delivered via a client-server architecture such as the World Wide Web. The field of computer-assisted assessment is very wide and conceptually encompasses any form of assessment activity assisted wholly or in part by a computer. This includes endeavours such as student submission of coursework into virtual learning environments (VLEs), the use of online plagiarism detection systems such as Turnitin (http://www.turnitin.com), and methods for marking free-text prose automatically. The current chapter concentrates on the use of computer-based assessment for objectively marked items. This should be of interest to curriculum managers, educationalists, and module coordinators who may have built up experience with paper-based examinations marked automatically through optical mark recognition (OMR) systems. OMR is a form of computer-assisted assessment: since the computer already does the marking, there is growing interest in using computers to present the assessments to students as well. Table 1 contains a comparison of the two approaches to using computers in assessment.