Do media influence learning? This is a historical debate in the field of educational technology, which started when Clark (1983, 1994) argued that media are “mere vehicles” and it is the content and pedagogical methods that are the “active ingredients” influencing student learning. Others (e.g., Kozma, 1994; Cobb, 1997) disagreed and argued that special media attributes can make certain types of learning more effective or cognitively efficient. In this chapter, I will first review the key arguments for and against media effects in distance education (DE). I will then review several meta-analyses that attempted to analyze the effects of media and pedagogy based on quantitative syntheses of the empirical research in DE. Finally, I will discuss directions for future research.
Typically, distance education research has evidenced a preoccupation with comparisons of the face-to-face versus the distance education classroom (Gunawardena & McIsaac, 2004). This research reflects a tradition of media comparison studies which, argue McIsaac and Gunawardena (1996), have resulted in “very little useful guidance for distance education practice” (Research related to media in distance education section, ¶ 2). Lockee, Moore, and Burton (2001) remind us why we should be cautious about drawing conclusions based on media comparisons:
Comparing a face-to-face course to a Web-based course doesn’t tell us anything about what the teacher or students did in a face-to-face class, or what strategies the Web-based event employed. Perhaps a Web-based event succeeded because students engaged in collaborative problem-solving compared to students in the face-to-face setting who simply received information through lectures. (Instructional Strategies section)
The lack of usefulness of this focus in research may be due to its underlying assumption that the technology, and not the teaching, is the determinant of effectiveness. Sabelli (2004) observed that studies of distance education need to concentrate, not on whether distance education is effective, but on why. Likewise, Morrison (2001) argued that investigation of the effectiveness of distance education should not compare distance and face-to-face delivery, but should focus on quality instruction that results in student achievement equivalent to courses delivered by other means. According to Cavanaugh et al. (2004), in the context of virtual schooling, effectiveness can be assessed in terms of teacher quality.
Despite the need to investigate teacher quality in this context, “there is little empirical research specifically focused on K-12 teachers and teaching in distance education courses” (Clark, 2003, p. 692). Cavanaugh et al. (2004) argued in relation to distance education that “teacher effectiveness is a strong determiner of differences in student learning” (p. 20). Likewise, Sherry (1996) explained that the most important factor for successful distance education is a “caring, concerned teacher who is confident, experienced, at ease with the equipment, uses the media creatively, and maintains a high level of interactivity with the students” (Systems of distance education section, ¶ 4). If we want to learn about the effectiveness of this growing form of education, we need to focus more research efforts on understanding the practice of the e-teacher. Specifically, argue Cavanaugh et al., what is needed is “research that guides practitioners in refining practice so the most effective methods are used” (p. 6).
Key Terms in this Chapter
Media: Delivery media, such as television, the Internet, and video, used in teaching and learning.
Systematic Instructional Design: Conventional instructional design practices and principles used in developing the course and course materials.
Synchronous Communication: Real-time communication between the instructor and students, or among students, using two-way communication media such as telephone, video or audio conferencing, and chat.
Asynchronous Communication: Delayed-time communication between the instructor and students, or among students, using media such as discussion boards and email.
Effect Size: A measure of standardized mean difference between the experimental and control conditions.
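One common formulation of this measure (Cohen’s d; other standardized variants exist) divides the difference between the group means by the pooled standard deviation:

```latex
d = \frac{\bar{X}_{E} - \bar{X}_{C}}{SD_{pooled}}
```

where \(\bar{X}_{E}\) and \(\bar{X}_{C}\) are the mean outcome scores of the experimental and control conditions. Expressing differences in standard deviation units is what allows a meta-analysis to combine results from studies that used different outcome measures.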
Computer-Based Instruction: Instruction such as tutorials, drill-and-practice, and simulations that are provided through a computer.
Method: Instructional and pedagogical strategies used in instruction.
Meta-Analysis: A literature review method that quantitatively synthesizes the effects of an experimental treatment across multiple studies.