The Implementation of a Server-based Computerized Adaptive Testing on Mobile Devices (CAT-MD)

Evangelos Triantafillou (Center of Educational Technology, Greece), Elissavet Georgiadou (Center of Educational Technology, Greece) and Anastasios A. Economides (University of Macedonia, Department of Computer Networks, Greece)
DOI: 10.4018/978-1-60566-938-0.ch020


The introduction of mobile devices into learning pedagogy can complement e-learning and e-testing by creating an additional channel of assessment. This study describes the design issues considered in the development and implementation of a CAT on mobile devices, the CAT-MD (Computerized Adaptive Testing on Mobile Devices). The system was implemented in two phases: first, a standalone prototype application was developed to implement the CAT-MD architecture; then, after a formative evaluation, a server-based application was developed to add new functionality so that CAT-MD could serve as an effective and efficient assessment tool that adds value to the educational process. The mobility of CAT-MD eliminates the need for a specialized computer lab, as it can be used anywhere, including a traditional classroom.
Chapter Preview

Computerized Adaptive Testing

In recent years, Computer Based Testing (CBT) has been widely used in education and training, as there are a number of perceived benefits in using computers to assess performance: (a) large numbers of tests can be marked quickly and accurately, (b) student responses can be monitored, (c) assessment can be offered in an open-access environment, (d) assessments can be stored and reused, (e) immediate feedback can be given, and (f) assessment items can be randomly selected to provide a different paper to each student (Harvey & Mogey, 1999). A further benefit of CBT is that it brings the assessment environment closer to the learning environment: software tools and web-based resources are frequently used to support the learning process, so it seems reasonable to use similar computer-based technologies in the assessment process (Baklavas, Economides & Roumeliotis, 1999; Lilley & Barker, 2002).

Most types of CBT are based on fixed-length computerized assessments that present the same items to every examinee in a specified order; the score usually depends on the number of items answered correctly, giving little or no attention to the ability of the individual examinee. In Computerized Adaptive Testing (CAT), a special case of computer-based testing, each examinee instead takes a unique test tailored to his/her ability level. Rather than giving every examinee the same fixed test, CAT item selection adapts to the ability level of the individual examinee: after each response the ability estimate is updated, and the next item is selected to have optimal properties at the new estimate (van der Linden & Glas, 2003). A CAT first presents an item of moderate difficulty in order to obtain an initial assessment of the individual's level. During the test, each answer is scored immediately; if the examinee answers correctly, the test statistically estimates his/her ability as higher and presents an item matching this higher ability. The opposite occurs if the item is answered incorrectly. The computer continuously re-evaluates the examinee's ability until the accuracy of the estimate reaches a statistically acceptable level or some limit is reached, such as a maximum number of test items. The score is determined from the difficulty level of the items answered, so although all examinees may answer the same percentage of questions correctly, high-ability examinees obtain better scores because they answer more difficult items correctly.
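The adapt-and-update loop described above can be sketched in code. The following is a minimal illustration, not the CAT-MD implementation: it assumes a simple Rasch (1PL) item response model, maximum-information item selection, an expected a posteriori (EAP) ability estimate computed on a grid, and a stopping rule based on either the standard error of the estimate or a maximum number of items. All function names and parameter values are hypothetical choices for this sketch.

```python
import math

def p_correct(theta, b):
    """Rasch (1PL) probability that an examinee of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta;
    it is largest when the item difficulty matches theta."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def eap_estimate(responses, grid):
    """EAP ability estimate and its standard error, computed on a
    grid of theta values with a standard-normal prior."""
    posterior = []
    for theta in grid:
        weight = math.exp(-theta * theta / 2.0)  # N(0, 1) prior
        for b, correct in responses:
            p = p_correct(theta, b)
            weight *= p if correct else (1.0 - p)
        posterior.append(weight)
    total = sum(posterior)
    mean = sum(t * w for t, w in zip(grid, posterior)) / total
    var = sum((t - mean) ** 2 * w for t, w in zip(grid, posterior)) / total
    return mean, math.sqrt(var)

def run_cat(item_bank, answer_fn, se_target=0.4, max_items=10):
    """Administer a CAT: start at moderate difficulty (theta = 0),
    re-estimate ability after every response, and stop once the
    standard error is acceptable or a limit is reached."""
    grid = [i / 10.0 for i in range(-40, 41)]  # ability grid on [-4, 4]
    theta, se = 0.0, float("inf")
    remaining = list(item_bank)                # item difficulties
    responses = []
    while remaining and len(responses) < max_items and se > se_target:
        # select the unused item with maximum information at current theta
        b = max(remaining, key=lambda d: item_information(theta, d))
        remaining.remove(b)
        correct = answer_fn(b)                 # administer the item
        responses.append((b, correct))
        theta, se = eap_estimate(responses, grid)
    return theta, se, len(responses)
```

For example, simulating an examinee who answers correctly whenever the item difficulty is below 1.0, `run_cat` first presents the moderate item, then climbs toward harder items as correct answers raise the estimate, and finally settles on a positive ability estimate after only a handful of items.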

Despite some disadvantages reported in the literature – for example, the high cost of development, item calibration, item exposure (Eggen, 2001; Boyd, 2003), the effect of a flawed item (Abdullah, 2003), or the use of CAT for summative assessment (Lilley & Barker, 2002) – CAT has several advantages. Testing on demand can be facilitated, so an examinee can take the test whenever and wherever s/he is ready. Multiple media can be used to create innovative item formats and more realistic testing environments. Other possible advantages are flexibility of test management, immediate availability of scores, increased test security, and increased motivation. However, the main advantage of CAT over any other computer-based test is efficiency: since fewer items are needed to achieve a statistically acceptable level of accuracy, significantly less time is needed to administer a CAT than a fixed-length CBT (Rudner, 1998; Linacre, 2000).

Since the mid-1980s, when the first CAT systems became operational – e.g., the Armed Services Vocational Aptitude Battery, which used adaptive techniques to administer multiple-choice items for the US Department of Defense (van der Linden & Glas, 2003) – much research and many technical advances have made new assessment tools possible. Currently, there are several tools for developing computerized adaptive tests, such as FastTEST (Assessment Systems Corporation, 2009), QuestionMark (2009), Webassesor (2009), SIETTE (Conejo, Guzmán, Millán, Trella, Pérez-De-La-Cruz & Ríos, 2004), and Test++ (Barra, Lannaccone, Palmieri & Scarano, 2002).
