Enhancing the IMS QTI to Better Support Computer Assisted Marking

Damien Clark (Central Queensland University, Australia) and Penny Baillie-de Byl (University of Southern Queensland, Australia)
DOI: 10.4018/978-1-60566-342-5.ch014

Abstract

Computer aided assessment is a common approach used by educational institutions, with benefits extending into the design of teaching, learning, and instructional materials. While some such systems implement fully automated marking for multiple choice and fill-in-the-blank questions, they are insufficient when human critiquing is required. Current systems, developed in isolation, pay little regard to scalability and interoperability across courses, computer platforms, and learning management systems. The IMS Global Learning Consortium’s open specifications for interoperable learning technology lack the functionality needed to support computer assisted marking. This article presents an enhanced set of these standards to address the issue.
Chapter Preview

Introduction

Computer aided assessment (CAA), one of the more recent trends in education technology, has become commonplace in educational institutions as part of delivering course materials, particularly for large classes. This trend has been driven by many factors, such as:

  • The need to reduce educational staff workloads (Dalziel, 2000; Jacobsen & Kremer, 2000; Jefferies, Constable et al., 2000; Pain & Heron, 2003; Peat, Franklin et al., 2001);

  • A push for more timely feedback to students (Dalziel, 2001; Jefferies, Constable et al., 2000; Merat & Chung, 1997; Sheard & Carbone, 2000; Woit & Mason, 2000);

  • Reduction in educational material development and delivery costs (Jefferies, Constable et al., 2000; Muldner & Currie, 1999); and,

  • The proliferation of online education (White, 2000).

Internet-based technologies in CAA can be broadly categorised into three system types: online quiz systems, fully automated marking systems, and semi-automated (computer assisted) marking systems. Online quizzes, the most common form of CAA, typically consist of multiple choice questions (MCQs) (IMS, 2000), as these can be marked automatically. Yet there is much conjecture over the effectiveness of MCQs, particularly for assessing the higher-order learning outcomes of Bloom’s taxonomy (1956) such as analysis, synthesis, and evaluation (Davies, 2001). This limits the range of abilities that can be assessed. Short response and essay questions are commonly used to assess the higher-order skills of Bloom’s taxonomy, but these types of assessment are time consuming to mark manually (Davies, 2001; White, 2000).
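To make the first category concrete, the sketch below shows how an online quiz system might auto-mark a single multiple choice response once the correct choice is declared within the item. The XML fragment and the Python helper are illustrative assumptions loosely modelled on the IMS QTI choiceInteraction; they are not an excerpt from the specification or from any system discussed in this chapter.

import xml.etree.ElementTree as ET

# Illustrative QTI-style item: the correct choice ("choiceB") is declared
# inside the item itself, so the quiz engine can mark responses with no
# human involvement. Element names follow the QTI choiceInteraction model,
# but this fragment is an assumption for illustration only.
ITEM_XML = """
<assessmentItem identifier="q1" title="Capital cities">
  <responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
    <correctResponse><value>choiceB</value></correctResponse>
  </responseDeclaration>
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" maxChoices="1">
      <prompt>Which city is the capital of Australia?</prompt>
      <simpleChoice identifier="choiceA">Sydney</simpleChoice>
      <simpleChoice identifier="choiceB">Canberra</simpleChoice>
      <simpleChoice identifier="choiceC">Melbourne</simpleChoice>
    </choiceInteraction>
  </itemBody>
</assessmentItem>
"""

def mark_mcq(item_xml, candidate_choice):
    """Score 1.0 if the candidate picked the declared correct choice, else 0.0."""
    root = ET.fromstring(item_xml)
    correct = root.find("./responseDeclaration/correctResponse/value").text
    return 1.0 if candidate_choice == correct else 0.0

print(mark_mcq(ITEM_XML, "choiceB"))  # 1.0 -- marked automatically
print(mark_mcq(ITEM_XML, "choiceA"))  # 0.0

Because marking reduces to matching a declared identifier, no human judgement is involved, which is precisely why MCQs dominate online quizzes and why they struggle with higher-order outcomes.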

A more ambitious approach to CAA involves fully automated marking systems. These can be defined as systems that mark electronically submitted assignments such as essays (Palmer, Williams et al., 2002) via online assignment submission management (OASM) (Benford, Burke et al., 1994; Darbyshire, 2000; Gayo, Gil et al., 2003; Huizinga, 2001; Jones & Behrens, 2003; Jones & Jamieson, 1997; Mason & Woit, 1999; Roantree & Keyes, 1998; Thomas, 2000; Trivedi, Kar et al., 2003) and automatically generate a final grade with little to no interaction with a human marker. The obvious benefit of this approach is the ability to assess some higher-order thinking, as per Bloom’s taxonomy (1956), in a completely automated manner, thus improving marking turnaround times for large classes. Fully automated systems include MEAGER, which automatically marks Microsoft Excel spreadsheets (Hill, 2003); the automatic essay marking systems evaluated by Palmer, Williams et al. (2002); and English and Siviter’s system (2000) for assessing student hypertext mark-up language (HTML) Web pages, to name a few. Unfortunately, this approach is not suitable for all assessment types and can require significant time to develop the model solution. In addition, most of the automated functionality compares students’ solutions against model solutions, which can compromise marking quality when the assessment creator cannot anticipate every valid solution.
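The following sketch illustrates, under assumed names and a deliberately simple comparison, the model-solution approach described above: a student's function is graded by checking its output against a reference implementation over fixed test inputs. None of the cited systems is reproduced here; the point is only that any correct answer the model solution fails to anticipate will be marked wrong.

# A deliberately simple "model solution" marker: the student's submission is
# graded by comparing its output with a reference implementation over a
# fixed set of test inputs. Function names and the grading rule are
# assumptions for illustration only.

def model_solution(x):
    """Marker's reference implementation of the exercise (square a number)."""
    return x * x

def automark(student_fn, test_inputs):
    """Return (passed, total) by comparing student output with the model's."""
    passed = total = 0
    for x in test_inputs:
        total += 1
        try:
            if student_fn(x) == model_solution(x):
                passed += 1
        except Exception:
            pass  # a crashing submission simply fails that test case
    return passed, total

# Example student submission graded without any human marker.
student_submission = lambda x: x ** 2
print(automark(student_submission, range(-3, 4)))  # (7, 7)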
