SMALLab is a mixed-reality environment where students collaborate and interact with sonic and visual media using full-body movements in an open physical space. It represents a new breed of technology-based, student-centered learning environment (Bransford, Brown, & Cocking, 2000), one that incorporates multimodal sensing, modeling, and feedback while still addressing the financial and logistical constraints of a real-world classroom. Physically, SMALLab is a space measuring 15 feet wide x 15 feet deep x 12 feet high. It includes a vision-based object tracking system, a top-mounted video projector providing real-time visual feedback (typically projected onto the floor), four audio speakers for surround-sound feedback, and an array of tracked physical objects called glowballs (Birchfield et al., 2006). In addition, a set of common wireless video game interfaces (e.g., gamepad, Wii Remote; see Shirai, Geslin, & Richir, 2007) can be added. The glowballs are handheld, lit from within, and the size of grapefruits. Students move freely in the space: they do not drag wires or attach sensing equipment to their bodies or clothes. They can immediately SEE the impact of their movements, FEEL the results kinesthetically, and HEAR immersive sound as they engage elements of the feedback apparatus. All of this occurs collaboratively, with all students in a typical classroom participating. Video documentation of student learning in SMALLab is available at: http://ame2.asu.edu/projects/emlearning. See Figure 1 for an example of a typical classroom layout.
Figure 1. Students construct a layer cake structure in SMALLab.
A growing body of evidence supports the theory that cognition is “embodied” and grounded in the sensorimotor system (Barsalou, 2008; Fauconnier & Turner, 2002; Glenberg, Gutierrez, Levin, Japuntich, & Kaschak, 2004; Hestenes, 2006; Hutchins, 1995). This research posits that the way we think is a function of our body, its physical and temporal location, and our interactions with the world around us. In the domains of interactive media and digital learning, the concept of embodiment has been applied in numerous ways. The River City virtual environment exemplifies one conception of embodiment (Dede & Ketelhut, 2003; Dede, Ketelhut, & Ruess, 2002; Ketelhut, 2007; Nelson et al., 2007). In River City, student actions and observations are embodied as virtual avatars in a virtual world. Dourish (2001) offers another conception of embodiment, grounded in the domain of social and tangible computing. He writes, “embodied phenomena are those that by their very nature occur in real time and real space.” Our work draws upon both of these concepts of virtual and physical embodiment to generate learning experiences that directly couple physical action with interactive digital media. By embodiment in SMALLab we mean interaction that engages students both in mind and in body, encouraging them to physically explore concepts and systems by moving within and acting upon a mediated environment. Importantly, this interaction is multimodal: it engages the full visual, sonic, and kinesthetic capabilities of students.
This paper presents two experiments pertaining to SMALLab learning in the earth science domain. In the first pilot study, we asked whether SMALLab can be effectively integrated into a conventional high school context and whether there is preliminary evidence of student learning. In the second, controlled experiment, we examined student achievement more closely, asking whether the SMALLab learning experience yields greater learning gains than regular classroom instruction.