Crowdsourcing and Education with Relation to the Knowledge Economy

Kathleen Scalise
Copyright: © 2013 |Pages: 14
DOI: 10.4018/978-1-4666-2023-0.ch009

Abstract

Crowdsourcing in the development and use of educational materials involves Web 2.0 tools to leverage collaboration and produce materials from user groups and stakeholders. Such a community-based design, sometimes called a participatory design, can help capture, refine, carry out, systematize, or evaluate aspects of online learning materials. Here, the use of crowdsourcing in educational assessments is discussed. This paper presents new evidence on how examinees respond to the use of crowdsourcing. It shows how a “modify” option in the content can lead to the generation of new materials, and new knowledge, by tapping into the wisdom of the group.
Chapter Preview

Intermediate Constraint Taxonomy for E-Learning Assessment Questions and Tasks

Computers and electronic technology today offer myriad ways to enrich educational assessment, both in the classroom and in large-scale testing situations. The question type currently dominating much of large-scale computer-based testing and many e-learning assessments is the standard multiple-choice question, which generally includes a prompt followed by a small set of responses from which students are expected to select the best choice. This kind of task is readily scorable by a variety of electronic means and offers some attractive features as an assessment format. However, if developers adopt this format alone as the focus of assessment in this emerging field, much of the computer platform’s potential for rich and embedded assessment could be sacrificed. Thus the need for more innovative assessment approaches in education is being investigated by many scholars.

Questions, tasks, activities and other methods of eliciting student responses are often called items in the assessment process.1 In the computer-based platform, almost any type of interaction with a user can be considered an assessment item. Note that a working definition we have proposed for an assessment item is any designed interaction with a respondent from which data is collected with the intent of making an inference about the respondent.
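
As a rough illustration of this working definition, the sketch below (in Python, with hypothetical names and fields not taken from the chapter) treats an item as a designed interaction whose captured response data can later support an inference about the respondent.

from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class AssessmentItem:
    """Hypothetical sketch of an 'item' as a designed interaction:
    a stimulus is presented, a response is captured, and the captured
    data are retained so an inference about the respondent can be made later."""
    item_id: str
    prompt: str                              # stimulus presented to the respondent
    response_format: str                     # e.g. "multiple_choice", "reordering", "essay"
    media: Dict[str, str] = field(default_factory=dict)  # optional sound/graphics/video
    captured_response: Any = None            # raw data collected from the interaction

    def record(self, response: Any) -> None:
        """Store whatever the respondent did; scoring and inference happen elsewhere."""
        self.captured_response = response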

Given this definition, there are many ways in which assessment items can be innovative when delivered by computer, and in which crowdsourcing can contribute to the innovations.

One organizational scheme describes innovative features for computer-administered items, such as the technological enhancements of sound, graphics, animation, video or other new media incorporated into the item stem, response options or both (Parshall, Davey, & Pashley, 2000). But other classification possibilities are myriad, including how items function. For some innovative formats, students can, for instance, click on graphics, drag or move objects, re-order a series of statements or pictures, or construct a graph or other representation. Or the innovation may not be in any single item, but in how the items flow, as in branching through a changing series of items contingent on an examinee’s responses.
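
To make the branching idea concrete, here is a minimal sketch, with an invented routing table and item identifiers, of how the next item presented could be made contingent on the examinee's response to the current one.

from typing import Dict, Optional

# Illustrative routing rules only: which item to show next, given the
# examinee's response to the current item; "default" covers unlisted responses.
ROUTING: Dict[str, Dict[str, Optional[str]]] = {
    "item_01": {"A": "item_02_easier", "B": "item_02_harder", "default": "item_02"},
    "item_02": {"default": None},  # no further branching after this item
}

def next_item(current_item: str, response: str) -> Optional[str]:
    """Return the identifier of the next item, or None when the branch ends."""
    rules = ROUTING.get(current_item, {})
    return rules.get(response, rules.get("default"))

print(next_item("item_01", "B"))  # -> item_02_harder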

Much of the literature on item types is concerned with innovations of the observation—the stimulus and response—that focus on the degree of construction versus selection, or constraint versus openness, in the response format. A number of characteristics are common to most constructed-response and performance formats:

“First and perhaps most obvious, these alternative formats require an examinee to supply, develop, perform, or create something. And, typically, these tasks attempt to be more engaging to the examinee than conventional multiple-choice items. Often, they employ real-world problems that people of a comparable age and peer status may encounter in daily life, such as asking school-age children to calculate from a grocery store purchase, or for high schoolers, to complete a driver’s license application or examine an insurance policy. [They] are generally scored by comparing and contrasting an examinee’s responses to some developed criteria, sometimes elucidated in lengthy descriptions called ‘rubrics’” (Bennett, 1993; Osterlind, 1998).
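
As a toy illustration of the scoring idea described in the quotation, comparing a response to developed criteria, the sketch below checks an already-extracted set of response features against a made-up partial-credit rubric. Real rubric scoring is carried out by trained raters or far more sophisticated automated systems; only the compare-to-criteria logic is shown here.

from typing import List

# Made-up rubric: each score level lists the criteria a response must satisfy.
RUBRIC = {
    2: ["correct total", "work shown"],   # full credit
    1: ["correct total"],                 # partial credit
}

def score(response_features: List[str]) -> int:
    """Award the highest score level whose criteria are all present."""
    for points in sorted(RUBRIC, reverse=True):
        if all(criterion in response_features for criterion in RUBRIC[points]):
            return points
    return 0

print(score(["correct total"]))                 # -> 1
print(score(["correct total", "work shown"]))   # -> 2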

So-called open-ended items cover a lot of territory, however, and organizing schemes for the degree of constraint and other measurement aspects of items can be helpful (Bennett, 1993). To help frame the discussion of what new item types may be useful and practical in computer-based testing (CBT), we have introduced a taxonomy, or categorization, of 28 innovative item types (Scalise & Gifford, 2006). Organized along the degree of constraint on the respondent's options for answering or interacting with the assessment item or task, the proposed taxonomy, shown in Figure 1, describes a set of iconic item types termed “intermediate constraint” items. These item types have responses that fall somewhere between fully constrained responses (i.e., the conventional multiple-choice question), which can be too limiting to tap much of the potential of new information technologies, and fully constructed responses (i.e., the traditional essay), which can be a challenge for computers to analyze meaningfully even with today’s sophisticated tools. The 28 example types discussed in this paper fall into seven categories of successively decreasing response constraint, from fully selected to fully constructed; each category of constraint includes four iconic examples.

Figure 1.

Intermediate constraint taxonomy for e-learning assessments and tasks


References for the taxonomy were drawn from a review of 44 papers and book chapters on item types and item designs—many of them classic references regarding particular item types—with the intent of consolidating considerations of item constraint for use in assessment designs. An organization by Bennett called the “Multi-faceted Organization Scheme” (Bennett, 1993, p. 47) was used to develop most of the column headings in the Intermediate Constraint Taxonomy.

Intermediate constraint tasks can be used alone for complex assessments or readily combined. This paper describes the use of such item types bundled together and analyzed with bundle, or testlet, measurement models (see the section on Measurement Models).
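
The chapter's own measurement models are described later; as one common, and here purely illustrative, testlet-style formulation, a Rasch model can be extended with a person-by-testlet effect that absorbs the local dependence among items in the same bundle. The sketch below shows that form; the names, parameter values, and the choice of this particular model are assumptions, not the chapter's specification.

import math

def rasch_testlet_prob(theta: float, b: float, gamma: float) -> float:
    """Probability of a correct response under a Rasch-type testlet model:
    P(X = 1) = exp(theta - b + gamma) / (1 + exp(theta - b + gamma)),
    where theta is person ability, b is item difficulty, and gamma is the
    person-specific effect for the testlet (bundle) containing the item.
    Illustrative only; not necessarily the model used in this chapter."""
    logit = theta - b + gamma
    return 1.0 / (1.0 + math.exp(-logit))

print(round(rasch_testlet_prob(theta=0.5, b=0.0, gamma=0.2), 3))  # -> 0.668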

At one end of the spectrum, the most constrained selected response items require an examinee to select one choice from among a few alternatives, represented by the conventional multiple-choice item. At the other end of the spectrum, examinees are required to generate and present a physical performance under real or simulated conditions. Five intermediary classes fall between these two extremes in the taxonomy and are classified as selection/identification, reordering/rearrangement, substitution/correction, completion, and construction types.
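
For reference, the seven constraint categories just described can be written out in order, from most to least constrained, as a simple list; this is a sketch only, and the labels paraphrase the column headings of the taxonomy rather than quote them.

# Seven constraint categories, ordered from fully selected to fully
# constructed responses; each holds four iconic example types in Figure 1.
CONSTRAINT_CATEGORIES = [
    "multiple choice (fully selected)",
    "selection/identification",
    "reordering/rearrangement",
    "substitution/correction",
    "completion",
    "construction",
    "physical performance / presentation (fully constructed)",
]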

Note that all item types in the taxonomy can incorporate new response actions and new media. Thus, by combining intermediate constraint types and varying the response actions and media employed, a vast array of innovative assessments can be developed, arguably matching the assessment needs and evidence requirements of many educational objectives.

This figure shows twenty-eight item examples organized into a taxonomy based on the level of constraint in the item/task response format. The most constrained item types, at left in Column 1, use fully selected response formats. The least constrained item types, at right in Column 7, use fully constructed response formats. In between are “intermediate constraint items,” organized with decreasing degrees of constraint from left to right. There is also an ordering within each type: innovations tend to become increasingly complex from top to bottom when progressing down each column.
