Classroom Critical Incidents

John M. Carroll, Dennis C. Neale, Philip L. Isenhour
Copyright: © 2009 | Pages: 7
DOI: 10.4018/978-1-60566-198-8.ch040

Abstract

Evaluating the quality and effectiveness of user interaction in networked collaborative systems is difficult. There is more than one user, and often the users are not physically proximal. The “session” to be evaluated cannot be comprehensively observed or monitored at any single display, keyboard, or processor. Typically, none of the human participants has an overall view of the interaction (a common source of problems for such interactions), and the users are not easily accessible either to evaluators or to one another. In this article we describe an evaluation method that recruits the already-pervasive medium of Web forums to support collection and discussion of user critical incidents. We describe a Web forum tool created to support this discussion, the Collaborative Critical Incident Tool (CCIT). The notion of “critical incident” is adapted from Flanagan (1954), who debriefed test pilots in order to gather and analyze episodes in which something went surprisingly well or badly. Flanagan’s method has become a mainstay of human factors evaluation (Meister, 1985). In our method, users can post a critical incident report to the forum at any time. Subsequently, other users, as well as evaluators and system developers, can post threaded replies. This improves the critical incident method by permitting follow-up questions and other conversational elaboration and refinement of original reports.
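
The chapter itself does not include code, but as a rough sketch of the report-and-reply structure the abstract describes, the following Python model shows how a critical incident post and its threaded replies might be represented. All names here (Post, CriticalIncident, reply) are illustrative assumptions, not the actual CCIT implementation.

# Minimal sketch of a threaded critical-incident forum: an in-memory
# model with illustrative names, not the actual CCIT code.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Post:
    author: str                 # student, evaluator, or developer
    body: str
    posted_at: datetime = field(default_factory=datetime.now)
    replies: List["Post"] = field(default_factory=list)  # nested thread

    def reply(self, author: str, body: str) -> "Post":
        # Attach a threaded reply, e.g., an evaluator's follow-up question.
        child = Post(author, body)
        self.replies.append(child)
        return child

@dataclass
class CriticalIncident(Post):
    severity: str = "neutral"   # surprisingly good or surprisingly bad

# Usage: a student reports an incident; an evaluator asks a follow-up.
incident = CriticalIncident("student1", "Shared whiteboard froze mid-session.",
                            severity="bad")
followup = incident.reply("evaluator", "Which school site were you working from?")
followup.reply("student1", "The remote site, over the county network.")

Because replies nest under the original report, follow-up questions and the answers they elicit stay attached to the incident they refine, which is the conversational elaboration the method depends on.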
Chapter Preview

Multifaceted Evaluation

Some of the greatest challenges in the LiNC Project pertained to evaluation. In the situations of greatest interest, students were working together while located at different school sites, some more than 15 miles apart. Usability engineering and human factors engineering provide many techniques for evaluating traditional single-user sessions—observing and classifying portions of user activity, non-directively prompting think-aloud protocols, logging session events, interviewing, and surveying. The problem in the case of the Virtual School is that the “session” is distributed over the whole county.
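
To make concrete what collating such a distributed session might involve, the sketch below merges timestamped records from several data sources, such as session logs, field observations, and video annotations, into a single chronological record. The source names and record shape are assumptions for illustration, not the project's actual tooling.

# Sketch: collating timestamped records from multiple sites and sources
# into one timeline; source names and record shape are assumed.
from datetime import datetime
from typing import List, Tuple

Record = Tuple[datetime, str, str]   # (timestamp, source, note)

def collate(*streams: List[Record]) -> List[Record]:
    # Interleave per-source records into a single chronological record.
    merged = [r for stream in streams for r in stream]
    return sorted(merged, key=lambda r: r[0])

session_log = [(datetime(2000, 5, 2, 10, 3), "log@siteA", "chat message sent")]
field_notes = [(datetime(2000, 5, 2, 10, 4), "observer@siteB",
                "students waiting on a reply that never arrived")]
video_marks = [(datetime(2000, 5, 2, 10, 2), "video@siteA", "whiteboard opened")]

for ts, source, note in collate(session_log, field_notes, video_marks):
    print(ts.isoformat(), source, note)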

Key Terms in this Chapter

Synchronous Collaboration: Collaborative interactions, for example, over the Internet, carried out in real time, such as chat, video/audio conferencing, and shared applications.

Critical Incident: An observed or experienced episode in which things go surprisingly well or badly.

Asynchronous Collaboration: Collaborative interactions, for example, over the Internet, that are not synchronized in real time, such as e-mail exchanges, browser-based shared editing (wikis), and postings to newsgroups.

Multifaceted Evaluation Record: Evaluation data record collating and synchronizing multiple types of data, for example, video recordings, user session logs, and field observations.

Threaded Discussion: An asynchronous collaboration medium in which participants contribute to topics and to contributions under topics, creating a nested discourse structure.

Participatory Evaluation: Evaluation methods in which the subjects of the evaluation actively participate in planning and carrying out observations.

Think-Aloud Protocol: A research protocol in which participants verbalize their task-relevant thoughts as they perform.
