Automatic Item Generation

Mark Gierl (University of Alberta, Canada), Hollis Lai (University of Alberta, Canada) and Xinxin Zhang (University of Alberta, Canada)
Copyright: © 2018 | Pages: 11
DOI: 10.4018/978-1-5225-2255-3.ch206

Abstract

Changes to the design and development of educational tests are resulting in the unprecedented demand for a large supply of content-specific test items. One way to address this growing demand is with automatic item generation. Automatic item generation is the process of using models to create test items with the aid of computer technology. The purpose of this chapter is to describe and illustrate a method for generating test items. The method is also illustrated using an example from the medical health sciences.
Chapter Preview

Introduction

As the importance of technology in society continues to increase, countries require skilled workers who can produce new ideas, make new products, and provide new services. The ability to create these ideas, products, and services will be determined by the effectiveness of our educational programs. Education provides students with the knowledge and skills required to think, reason, communicate, and collaborate in a world that is shaped by knowledge services, information, and communication technologies (e.g., Binkley, Erstad, Herman, Raizen, Ripley, Miller-Ricci, & Rumble, 2012; Darling-Hammond, 2014). Educational testing has an important role to play in helping students acquire these foundational skills. Educational tests, once developed almost exclusively to satisfy demands for accountability and outcomes-based summative testing, are now expected to provide teachers and students with timely, detailed, formative feedback that directly supports teaching and learning.

To meet these teaching and learning directives, formative principles are beginning to guide our educational testing practices. Formative principles can include any assessment-related activities that yield constant, specific feedback used to modify teaching and improve learning, including administering tests more frequently (Black & Wiliam, 1998, 2010). But when testing occurs frequently, more test items are required, and these additional items must be created efficiently and economically while maintaining a high standard of quality. Fortunately, this requirement for frequent and timely educational testing coincides with the dramatic changes occurring in instructional technology. Developers of local, national, and international educational tests are now implementing computerized tests at an extraordinary rate (Beller, 2013). Computerized testing offers many important benefits that support and promote key principles of formative assessment.
Computers permit testing on-demand thereby allowing students to take the test at any time during instruction; items on computerized tests are scored immediately thereby providing students with instant feedback; computerized tests permit continuous administration thereby allowing students to have more choices about when they write their exams. In short, computers are helping infuse formative principles into our testing practices to support teaching and learning.

Despite these important benefits, the advent of computerized testing has also raised formidable challenges, particularly in the area of test item development. Educators must have access to large numbers of diverse, high-quality test items to implement computerized testing because items are continuously administered to students. Hundreds of items are needed to develop the item banks necessary for computerized testing. Unfortunately, test items as they are currently created are time-consuming and expensive to develop, because each item is first written by a content specialist and then reviewed, edited, and revised by groups of content specialists (Gierl & Lai, 2016a; Rudner, 2010). Hence, item development has been identified as one of the most important problems that must be solved before we can fully migrate to computerized testing, because large numbers of high-quality, content-specific test items are required (Haladyna & Rodriguez, 2013; Webb, Gibson, & Forkosh-Baruch, 2013).

Key Terms in this Chapter

Cognitive Model: A representation that highlights the knowledge, skills, problem-solving processes and/or content an examinee requires to answer test items.

Systematic Distractor Generation: A method for generating distractors where specific information related to errors and misconceptions is used to create plausible but incorrect options.

Item Model: A template that highlights the features in an item that must be manipulated to generate new items.

Distractor Pool Method with Random Selection: A method for creating distractors in which the distractors are first compiled into a list, and plausible but erroneous content is then randomly selected from that list for each generated item.

Systematic Generation with Rationales Method: A method for systematically creating distractors in which rationales are used to produce a list of incorrect options.

Elements: Variables in the item model that can be modified to create new test items.

Key Features Cognitive Model: A model used to guide item generation based on the relationships among the key features specified in the cognitive model, in which the attributes or features of a task are systematically combined to produce meaningful outcomes across the item feature set.

Automatic Item Generation: A process of using item models to generate test items with the aid of computer technology.
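Taken together, these terms describe a generation workflow: an item model supplies a stem template, elements supply the values that are systematically combined, and a distractor pool supplies randomly selected incorrect options. The sketch below is illustrative only; the stem, element values, answer key, and distractor pool are hypothetical stand-ins rather than content from the chapter, and the function names are the author's own.

```python
import itertools
import random

# Hypothetical item model: a stem template whose bracketed elements are
# replaced with values to generate new items (all content is illustrative).
STEM = "A patient presents with {symptom} and {finding}. What is the most likely diagnosis?"

# Elements: variables in the item model that can be modified to create new items.
ELEMENTS = {
    "symptom": ["chest pain", "shortness of breath"],
    "finding": ["an elevated troponin level", "a widened mediastinum on x-ray"],
}

# Distractor pool method with random selection: plausible but erroneous
# options are drawn at random from a prepared list.
DISTRACTOR_POOL = ["stable angina", "pneumonia", "costochondritis", "pericarditis"]

def generate_items(stem, elements, key, pool, n_distractors=3, seed=0):
    """Systematically combine element values into items, each with the
    correct answer (key) plus randomly selected distractors."""
    rng = random.Random(seed)
    items = []
    # Cross every combination of element values to produce the item set.
    for combo in itertools.product(*elements.values()):
        values = dict(zip(elements.keys(), combo))
        options = [key] + rng.sample(pool, n_distractors)
        rng.shuffle(options)
        items.append({"stem": stem.format(**values), "options": options, "key": key})
    return items

items = generate_items(STEM, ELEMENTS, key="acute myocardial infarction",
                       pool=DISTRACTOR_POOL)
print(len(items))  # 2 symptoms x 2 findings = 4 generated items
```

Even this toy sketch shows why the chapter stresses cognitive models and systematic distractor generation: crossing element values scales item production multiplicatively, but the quality of the generated items depends entirely on how carefully the elements and distractor content are specified.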
