Implementing Distributed Architecture of Online Assessment Tools Based on IMS QTI ver.2

Vladimir Tomberg, Mart Laanpere
DOI: 10.4018/978-1-61692-789-9.ch003

Abstract

This chapter addresses a decade of development and the state of the art in the domain of online testing of learning outcomes. The authors focus on the changes and implementation scenarios of the latest versions of IMS QTI, the major technical specification that has become the de facto standard in the domain. Standardization of content and applications used for online testing is partly driven by the paradigm shifts taking place in the fields of pedagogy and Web technology. The chapter pays special attention to the increasing use of Web 2.0 technology in education, especially Mash-up Personal Learning Environments, and their impact on architectural decisions when developing the next generation of online assessment tools.
Chapter Preview

Evolution Towards Standards

During the second half of the 1990s, Computer Based Training (CBT) systems became mainstream software applications in the educational domain. These systems had been developed in universities but, because of their high efficiency, were quickly adopted by business organizations as well. In such systems, assessment is often implemented by means of automated testing. There are many examples of well-known systems, especially in the domain of vocational education: the Cisco Networking Academy, Microsoft certification courses, and almost any other CBT system have included this functionality. Over time, this functionality was also distinguished as an independent type of application and received the name Computer Based Assessment (CBA). CBA functionality could either be included in CBT systems (and later in Learning Management Systems, LMS) or released as a separate software package. From the beginning of the 21st century, teachers in universities and schools began to widely prepare and use online tests for the assessment of learning outcomes.

However, the first compatibility problems appeared quite soon. Preparing test questions is manual work that is difficult to automate. Instructors wanted to be able to reuse questions and tests repeatedly and to transfer them between different software systems. In the mid-nineties, two main obstacles were revealed: incompatibilities between different systems at the file level and at the question level. CBT systems of that period were usually commercial software with proprietary, closed source code. Their architecture was monolithic and tightly coupled (Wills et al., 2009, p. 354). Such a system usually saved its data in a self-developed database or in its own file format. The internal structure of these files was proprietary and closed, so it was impossible to open and use a file with questions from one software application in another.

Paper-and-pencil tests became popular during the twentieth century. A multitude of testing methodologies was developed in different research areas, especially in psychology and sociology, and they were used separately, without any need for interoperability. As a consequence, there was no common view on which types of questions could be used in computer-based assessment. The absence of standards in the area of testing led to the appearance of a multitude of incompatible systems, each with its own set of supported question types, and data interchange between them was practically impossible. In these circumstances, the appearance in 1999 of a technical specification for Question and Test Interoperability (QTI) was a logical step.

QTI describes a data model for the representation of question and test data and for the reporting of test results. The specification enables the exchange of questions, tests, and results data between authoring tools, item banks, test construction tools, learning systems, and assessment delivery systems (IMS, 2006). This standard has been developed by the Instructional Management Systems Global Learning Consortium (IMS GLC, or IMS).
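To make the nature of the exchanged data more concrete, the sketch below shows a minimal single-choice item in QTI 2.x XML together with a short Python routine that reads its prompt, candidate choices, and keyed answer. The item content and the helper function are illustrative only and are not taken from the chapter; the element names follow the publicly documented QTI 2.1 schema (assessmentItem, responseDeclaration, itemBody, choiceInteraction, simpleChoice).

import xml.etree.ElementTree as ET

# A minimal, illustrative QTI 2.x single-choice item (not from the chapter).
QTI_ITEM = """<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
    identifier="q1" title="Interoperability" adaptive="false" timeDependent="false">
  <responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
    <correctResponse><value>A</value></correctResponse>
  </responseDeclaration>
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="false" maxChoices="1">
      <prompt>Which specification defines question and test interoperability?</prompt>
      <simpleChoice identifier="A">IMS QTI</simpleChoice>
      <simpleChoice identifier="B">A proprietary CBT file format</simpleChoice>
    </choiceInteraction>
  </itemBody>
</assessmentItem>"""

NS = {"qti": "http://www.imsglobal.org/xsd/imsqti_v2p1"}

def summarize_item(xml_text: str) -> dict:
    """Extract the prompt, the candidate choices, and the keyed answer."""
    root = ET.fromstring(xml_text)
    prompt = root.find(".//qti:choiceInteraction/qti:prompt", NS).text
    choices = {c.get("identifier"): c.text
               for c in root.findall(".//qti:simpleChoice", NS)}
    correct = root.find(".//qti:correctResponse/qti:value", NS).text
    return {"title": root.get("title"), "prompt": prompt,
            "choices": choices, "correct": correct}

if __name__ == "__main__":
    print(summarize_item(QTI_ITEM))

Because every QTI-aware tool reads the same element vocabulary, an item file like this can in principle be moved between an authoring tool, an item bank, and a delivery system without format conversion, which is exactly the interoperability problem the specification was created to solve.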
