When teachers integrate online discussions into courses, they face the challenge of deciding how to evaluate the postings. This chapter discusses a study that used a discussion board rubric to evaluate online discussions. The study tested the reliability of the instrument (rubric) for assessing the quality of the content of Web-based discourse. To obtain the rubric's interrater reliability, researchers used the rubric to evaluate the discussion postings of preservice teachers enrolled in six different sections of an English language arts methods course. Six hundred sixty-two (662) postings from 165 preservice teachers were analyzed using the rubric. The study utilized the scorings from six judges. When measured with Cronbach's alpha intraclass coefficient, the findings indicated substantial agreement between judges in two of the four rubric criteria: evocative (.8742) and reference-resource (.8209). The other two rubric criteria, rumination (.7256) and storytelling (.5984), scored at the moderate and fair levels, respectively.
Case studies have been found to be a powerful pedagogical tool for teacher education (Moore & Kearsley, 1996; Risko & Kinzer, 1997). Discussions about cases fostered thoughtful engagement (Dawson, Mason, & Molebash, 2000; Silverman & Welty, 1996). Specifically, video case studies provided a realistic, yet controlled, context that considerably enhanced textbook readings by bringing descriptions of actual classroom settings to life (Shulman, 1992). Further, computer-mediated discussions increased time for reflection in formulating thoughtful dialog (Daiute, 2000).
In a pilot study of OVCS, we (Larson, Boyd-Batstone, & Cox, 2004–2005) reported on the nature of online discourse according to the discourse audience and the discourse functions utilized by a group of 98 preservice teachers in a university language arts methods course. The rubrics used (Flynn & Polin, 2003) provided useful categories for content analysis, but the researchers found that content analysis was limited in determining the function of the dialog. A persistent question was raised about the nature of quality dialog online. In other words, how can one determine whether learning was taking place and knowledge was being constructed?
Key Terms in this Chapter
Online Video Case Studies (OVCS): An OVCS refers to a Web-based case study model that is composed of video clips of teachers (written about in an accompanying textbook) actually teaching, teacher interviews, responses of preservice teachers to video-clips posted online, and online interaction between students and with the instructor.
Posting: A posting is a message or response that is uploaded, or "posted" (hence the term), to an electronic discussion board.
Prompt: A prompt is a statement or group of statements about a specific topic, constructed to stimulate reflective thought. In the case of OVCS, the online discussions usually begin with a teacher posted prompt.
Rubric: A rubric is a scoring instrument that lists the criteria for a piece of work or artifact. In the case of OVCS, the rubric for a quality posting will list the content the student must include to receive a certain score or rating. Rubrics help the student understand how discussion board postings are evaluated. Generally, rubrics specify the level of performance expected for several levels of quality.
Discussant: A discussant is one of several people participating in an online discussion group. Groups may or may not have an appointed group leader.
Critical Reflection: Critical reflection refers to a person's ability to reflect critically on his/her experiences, integrate the knowledge acquired from these experiences with previous knowledge, and then be able to make an informed decision based on the insights he/she gained from the new and previous experiences.
Storytelling: Storytelling is a narrative that draws on personal examples or narrative details.
Rumination: Rumination is a long thoughtful consideration of an idea or thought.
Discussion Board: The discussion board allows students to post threads (comments or responses) to forums usually created by the instructor. The posted threads (comments, responses) can be viewed and responded to by the instructor and other students enrolled in the course.
Interrater Reliability: Interrater reliability is the degree of agreement among judges when they are rating rubric criteria. The score shows how closely the judges' ratings agree. Scores can range from 0.00 to 1.00: a score of 1.00 indicates 100% agreement among the judges, and a score of 0.00 indicates 0% agreement. A 0.00 score could mean either that the rating scale is defective or that the raters need to be re-trained in the meaning of the rubric criteria.
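To make the interrater reliability statistic concrete, the following is a minimal Python sketch of Cronbach's alpha as it might be applied here, treating each judge as an "item" and each rated posting as an observation. The function name and the sample scores are illustrative assumptions for this sketch, not data or code from the study.

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha for interrater agreement.

    ratings: list of rows, one row per rated posting;
    each row holds one score per judge (same judge order in every row).
    """
    n_judges = len(ratings[0])

    def variance(values):
        # Sample variance (n - 1 denominator).
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    # Variance of each individual judge's scores across postings.
    judge_variances = [variance([row[j] for row in ratings])
                       for j in range(n_judges)]
    # Variance of the per-posting totals (all judges' scores summed).
    total_variance = variance([sum(row) for row in ratings])

    # alpha = k/(k-1) * (1 - sum of judge variances / total variance)
    return (n_judges / (n_judges - 1)) * (1 - sum(judge_variances) / total_variance)


# Hypothetical scores from three judges rating three postings on one criterion.
sample = [[1, 1, 1],
          [2, 2, 2],
          [3, 3, 3]]
print(cronbach_alpha(sample))  # perfect agreement yields 1.0
```

When the judges' scores diverge, the alpha value drops toward 0, which is why the study could interpret .8742 as substantial agreement but .5984 as only fair.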