Multimedia Design of Assistive Technology for Those with Learning Disabilities

Boaventura DaCosta (Solers Research Group, USA) and Soonhwa Seok (eLearning Design Lab, University of Kansas, USA)
DOI: 10.4018/978-1-61520-817-3.ch003
Abstract

This is the final of three chapters serving as the introduction to this handbook, which addresses the relationship between human cognition and assistive technologies and their design for individuals with cognitive disabilities. In this chapter the authors build upon the last two chapters and focus specifically on research investigating the visual and auditory components of working memory. The authors present the cognitive theory of multimedia learning, a learning theory proposing a set of instructional principles grounded in human information processing research that provide best practices for designing efficient multimedia learning environments. Much like the last chapter, the instructional principles presented are grounded in empirical study and consolidate nearly twenty years of research to highlight the best ways to increase learning. Altogether, the authors stress the common thread found throughout this three-chapter introduction—that technology for learning should be created with an understanding of design principles empirically supported by how the human mind works. They argue that the principles emerging from the cognitive theory of multimedia learning may have potential benefits in the design of assistive technologies for those with learning disabilities.
Chapter Preview

Introduction

Multimedia, Assistive Technology, and Those with Learning Disabilities

Unlike early theories, which viewed short-term memory as a single store capable of performing numerous operations (Sweller, 2005a), working memory is assumed to be composed of multiple stores (Baddeley, 1986, 1998, 2002; Paivio, 1990; Penney, 1989; Sweller, 2005a). Baddeley’s model of working memory accounts for these numerous operations by handling visual and acoustic information separately in the visuospatial sketchpad and phonological loop subsystems. This partial autonomy for processing visual and auditory information is believed to be one way of addressing the limitations of working memory. For example, Frick (1984) investigated the idea of separate visual and auditory memory stores, showing how digit-span recall could be increased, and Penney (1989), in a review, provided evidence that appropriate use of the visual and auditory stores can maximize working memory capacity. Although researchers seem to disagree on a common nomenclature, using terms such as stores, channels, bisensory, dual-coding, and dual-processing (e.g., Allport, Antonis, & Reynolds, 1972; Baddeley, 1986, 1998; Jones, Macken, & Nicholls, 2004; Mayer & Anderson, 1991; Paivio, 1971; Penney, 1989) to represent the components of working memory, they do seem to agree on the premise that dual-processing is vital to overcoming the limitations of working memory.

This dual-processing assertion is best represented in Paivio’s dual-coding theory (Clark & Paivio, 1991; Paivio, 1971, 1990), which proposes that cognition is composed of verbal and non-verbal subsystems. These two subsystems are considered distinct but interrelated. The verbal subsystem favors organized, linguistically based information, stressing verbal associations; examples include words, sentences, and stories. The non-verbal subsystem organizes information in nested sets, processed either synchronously or in parallel; examples include pictures and sounds (Paivio, 1971, 1990; Paivio, Clark, & Lambert, 1988). Multimodal instructional material, which can be coded in both subsystems rather than just one, is more easily recalled. By leveraging both the verbal and non-verbal subsystems, more information can be processed.

Studies examining dual-coding have shown that greater performance can be achieved when learners are presented with instructional material that takes advantage of both the verbal and non-verbal subsystems (e.g., Frick, 1984; Gellevij, Van Der Meij, De Jong, & Pieters, 2002; Leahy, Chandler, & Sweller, 2003; Mayer & Moreno, 1998; Moreno & Mayer, 1999). These findings are promising, as they suggest the limited capacity of working memory can be addressed by presenting instruction in both a verbal and a non-verbal manner (Mayer, 2001, 2005e; Sweller, van Merrienboer, & Paas, 1998). Just as importantly, the converse has also been shown. The verbal and non-verbal subsystems are believed to draw on the same pool of processing resources; as such, multimodal information that is not interrelated can negatively impact working memory performance (Morey & Cowan, 2004). Thus, the non-verbal presentation of information should relate to the verbal (textual) presentation, for this relationship has a significant impact on working memory and learning.

This is the final of three chapters serving as the introduction to this handbook, which addresses the relationship between human cognition and assistive technologies (ATs) and their design for individuals with cognitive disabilities. In this chapter we build upon the last two chapters and focus specifically on research investigating the visual and auditory components of working memory. We present the cognitive theory of multimedia learning (CTML), a learning theory proposing a set of instructional principles grounded in human information processing research that provide best practices for designing efficient multimedia learning environments. Much like the last chapter, the instructional principles presented are grounded in empirical study and consolidate nearly twenty years of research to highlight the best ways to increase learning. Altogether, we stress the common thread found throughout this three-chapter introduction—that technology for learning should be created with an understanding of design principles empirically supported by how the human mind works. We argue that the principles emerging from the CTML may have potential benefits in the design of ATs for those with learning disabilities (LDs).

Before we delve into the principles composing the CTML, we begin by defining multimedia learning itself. We then provide a brief explanation of the theory and discuss its theoretical foundation.

Key Terms in this Chapter

Multimedia: Broadly speaking, the presentation of both words and pictures to a learner in a variety of ways.

Cognitive Theory of Multimedia Learning (CTML): A theory credited to Richard E. Mayer and his colleagues focused on best practices in the use of visual and auditory information in multimedia-based instruction.

Signaling Principle: An instructional principle proposing that learners learn more deeply when cues are added to highlight the organization of the essential material (Mayer, 2005d).

Site Map Principle: An instructional principle proposing that learners learn more deeply when appropriately structured site maps are used, because these maps provide learners with an overarching view of the information to be learned (Shapiro, 2005).

Redundancy Principle: An instructional principle proposing that learners learn more deeply when identical information is not presented in more than one format (Mayer, 2005a).

Modality Principle: An instructional principle proposing that presenting information in dual modalities spreads the total induced load across the visual and auditory channels of working memory, thereby reducing cognitive load (Low & Sweller, 2005; Sweller & Chandler, 1994; Sweller et al., 1998).

Multimedia Learning: The building of mental representations from the amalgamation of words and pictures, which promotes meaningful learning (Mayer, 2001, 2005b).

Guided-discovery Principle: An instructional principle proposing that learners learn more deeply when using the strategy of directing the learner toward discovery (Jong, 2005).

Limited Capacity Assumption: One of the three theoretical assumptions underpinning the cognitive theory of multimedia learning; proposes that working memory is limited in how much information can be processed within each channel.

Self-Explanation Principle: An instructional principle proposing that learners learn more deeply when engaged in self-explanation, a strategy which aids attention and promotes meaningful learning through knowledge construction and integration activities (Roy & Chi, 2005).

Pre-training Principle: An instructional principle proposing that learners learn more deeply when they are made aware of the names and behaviors of main concepts in the lesson before they are presented with the main lesson itself (Mayer, 2005a; Mayer & Moreno, 2003).

Cognitive Aging Principle: An instructional principle focused on helping older learners by effectively managing working memory resources (Mayer, 2005b). Subscribing to the idea that working memory capability declines with age (Paas et al., 2005; Van Gerven et al., 2006), the principle suggests that instructional material presented in multiple modalities may be more efficient than instructional material presented in a single modality.

Animation and Interactivity Principles: A set of instructional principles providing guidance on the design of multimedia that incorporates sophisticated animated graphics while at the same time taking into account learner interactivity (Betrancourt, 2005).

Prior Knowledge Principle: An instructional principle focused on the effects of learners’ prior knowledge on the cognitive theory of multimedia learning principles (Kalyuga, 2005). The principle stems from consistent research findings suggesting that instructional principles may not benefit, or may even adversely impact, learners with high prior knowledge of the content to be learned.

Personalization, Voice, and Image Principles: Three instructional principles providing recommendations based on social cues. According to Mayer (2005e), the personalization principle proposes that learners learn more deeply when words are presented in a conversational style as opposed to a formal one; the voice principle proposes that learners learn more deeply when words are spoken in a human voice devoid of accent, as opposed to an accented voice or a machine voice; and the image principle proposes that learners learn more deeply when the speaker’s image can be seen on screen by the learner.

Worked-out Example Principle: An instructional principle centered on worked examples—step-by-step demonstrations of how a task is performed or how a problem is solved (R. Clark et al., 2006); the principle proposes that learners learn more deeply when studying worked examples than when studying practice problems (Sweller, 2005a).

Dual-channels Assumption: One of the three theoretical assumptions underpinning the cognitive theory of multimedia learning; proposes that the human information processing system is composed of separate processing channels for visually and auditorily represented material.

Spatial Contiguity Principle: An instructional principle proposing that learners learn more deeply when related words and pictures are presented near one another than far apart (Mayer, 2005d).

Cognitive Load Theory: A theory proposed by John Sweller and his colleagues focused on the limitations of working memory during instruction.

Temporal Contiguity Principle: An instructional principle proposing that learners learn more deeply when related animation and narration are presented concurrently rather than consecutively (Mayer, 2005d).

Active Processing Assumption: One of the three theoretical assumptions underpinning the cognitive theory of multimedia learning; proposes that humans must actively engage in cognitive processing for learning to occur.

Segmentation Principle: An instructional principle proposing that learners learn more deeply when a lesson is presented in learner-controlled segments rather than continuous units (Mayer, 2005a; Mayer & Moreno, 2003).

Meaningful Learning: The remembering and deep understanding of instructional material; occurs when important aspects of the material are cognitively recognized, when the material is organized into a coherent structure, and then integrated with relevant existing knowledge (Marshall, 1996; Mayer, 2001; Mayer & Moreno, 2003; Wittrock, 1990).

Navigation Principles: A variety of instructional principles providing recommendations on the use of navigational aids, a broad category of visual and auditory devices ranging from local cues (e.g., headings and subheadings) to global content (e.g., tables and outlines) (Rouet & Potelle, 2005).

Collaboration Principle: An instructional principle proposing a variety of recommendations that support collaborative learning (Jonassen et al., 2005).

Dual-Coding Theory: A theory proposed by Allan Paivio, which proposes that cognition is composed of verbal and non-verbal subsystems.

Multimedia Principle: An instructional principle proposing that learners learn more deeply from words and pictures than from words only.

Coherence Principle: An instructional principle proposing that learners learn more deeply when extraneous information is excluded (Mayer, 2005d).
