A Next Gen Interface for Embodied Learning: SMALLab and the Geological Layer Cake

David Birchfield, Mina Johnson-Glenberg
DOI: 10.4018/jgcms.2010010105


Emerging research from the learning sciences and human-computer interaction supports the premise that learning is effective when it is embodied, collaborative, and multimodal. In response, we have developed a mixed-reality environment called the Situated Multimedia Arts Learning Laboratory (SMALLab). SMALLab enables multiple students to interact with one another and digitally mediated elements via 3D movements and gestures in real physical space. It uses 3D object tracking, real time graphics, and surround-sound to enhance learning. We present two studies from the earth science domain that address questions regarding the feasibility and efficacy of SMALLab in a classroom context. We present data demonstrating that students learn more during a recent SMALLab intervention compared to regular classroom instruction. We contend that well-designed, mixed-reality environments have much to offer STEM learners, and that the learning gains transcend those that can be expected from more traditional classroom procedures.


SMALLab is a mixed-reality environment where students collaborate and interact with sonic and visual media using full body movements in an open physical space. It represents a new breed of technology-based, student-centered learning environment (Bransford, Brown, & Cocking, 2000), one that incorporates multimodal sensing, modeling, and feedback while still addressing the financial and logistical constraints of a real-world classroom. Physically, SMALLab is a cubic space measuring 15 feet wide x 15 feet deep x 12 feet high. It includes a vision-based object tracking system, a top-mounted video projector providing real time visual feedback (typically projected onto the floor), four audio speakers for surround-sound feedback, and an array of tracked physical objects called glowballs (Birchfield et al., 2006). In addition, a set of common wireless video game interfaces (e.g., gamepad, Wii Remote; see Shirai, Geslin, & Richir, 2007) can be added. The glowballs are handheld, lit from within, and the size of grapefruits. Students move freely in the space; they are not dragging wires or attaching sensing equipment to their bodies or clothes. They are able to immediately SEE the impact of their movements, FEEL the results kinesthetically, and HEAR immersive sound as they engage elements of the feedback apparatus. All of this is done collaboratively, with all students in a typical classroom participating. Video documentation of student learning in SMALLab is available at: http://ame2.asu.edu/projects/emlearning. See Figure 1 for an example of a typical classroom layout.

Figure 1.

Students construct a layer cake structure in SMALLab
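The sensing-and-feedback loop described above can be illustrated with a minimal sketch. This is a hypothetical illustration, not the actual SMALLab implementation: the function names, the linear position-to-pan mapping, and the 15-foot space dimension used here are assumptions introduced for clarity.

```python
# Hypothetical sketch of a SMALLab-style sensing/feedback cycle:
# tracked glowball positions drive floor-projected visuals and
# surround-sound panning. Illustrative only, not the real system.

def pan_for_position(x, width=15.0):
    """Map a tracked x position (feet, 0..width across the space)
    to an audio pan value in [-1.0, 1.0]."""
    return max(-1.0, min(1.0, 2.0 * x / width - 1.0))

def feedback_frame(tracked_objects, width=15.0):
    """Compute one frame of feedback for each tracked glowball:
    where the top-mounted projector draws it, and how its sound
    is panned across the speaker array."""
    frame = []
    for name, (x, y) in tracked_objects.items():
        frame.append({
            "object": name,
            "project_at": (x, y),                  # floor projection point
            "audio_pan": pan_for_position(x, width),
        })
    return frame

# Example: two glowballs tracked in the 15 x 15 foot space.
balls = {"glowball_1": (3.75, 7.5), "glowball_2": (11.25, 2.0)}
for f in feedback_frame(balls):
    print(f["object"], f["project_at"], round(f["audio_pan"], 2))
```

The point of the sketch is the tight coupling the article emphasizes: every physical movement updates both the visual and sonic feedback in the same frame, so students see, feel, and hear the consequences of their actions at once.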


A growing body of evidence supports the theory that cognition is “embodied” and grounded in the sensorimotor system (Barsalou, 2008; Fauconnier & Turner, 2002; Glenberg, Gutierrez, Levin, Japuntich, & Kaschak, 2004; Hestenes, 2006; Hutchins, 1995). This research posits that the way we think is a function of our body, its physical and temporal location, and our interactions with the world around us. In the domains of interactive media and digital learning, the concept of embodiment has been applied in numerous ways. The River City virtual environment exemplifies one conception of embodiment (Dede & Ketelhut, 2003; Dede, Ketelhut, & Ruess, 2002; Ketelhut, 2007; Nelson et al., 2007). In River City, student actions and observations are embodied as virtual avatars in a virtual world. Dourish (2001) offers another conception of embodiment that is grounded in the domain of social and tangible computing. He writes, “embodied phenomena are those that by their very nature occur in real time and real space.” Our work draws upon both of these concepts of virtual and physical embodiment to generate learning experiences that directly couple physical action with interactive digital media. By embodiment in SMALLab we mean interaction that engages students both in mind and in body, encouraging them to physically explore concepts and systems by moving within and acting upon a mediated environment. Importantly, this interaction is multimodal: it engages the full visual, sonic, and kinesthetic capabilities of students.

This paper presents two experiments pertaining to SMALLab learning in the earth science domain. In the first pilot study, we asked: can SMALLab be effectively integrated into a conventional high school context, and is there preliminary evidence of student learning? In the second, controlled experiment we examined student achievement more closely and asked: does the SMALLab learning experience yield greater student gains than regular classroom instruction?
