Tele-Media-Art: Feasibility Tests of Web-Based Dance Education for the Blind Using Kinect and Sound Synthesis of Motion


José Rodrigues Dias (INESC TEC, Porto, Portugal), Rui Penha (INESC TEC and Faculty of Engineering, University of Porto, Porto, Portugal), Leonel Morgado (Universidade Aberta, INESC TEC, CIAC, & LE@D, Lisbon, Portugal), Pedro Alves da Veiga (CIAC and Universidade Aberta, Lisbon, Portugal), Elizabeth Simão Carvalho (Universidade Aberta and CIAC, Lisbon, Portugal) and Adérito Fernandes-Marcos (Universidade Aberta, CIAC, INESC TEC, & LE@D, Lisbon, Portugal)
Copyright: © 2019 | Pages: 18
DOI: 10.4018/IJTHI.2019040102

Abstract

Tele-media-art is a web-based asynchronous e-learning platform, enabling blind students to have dance and theatre classes remotely, using low-cost motion tracking technology feasible for home use. Teachers and students submit dance recordings augmented with sound synthesis of their motions. Sound synthesis is generated by processing Kinect motion capture data, enabling blind students to compare the audio feedback of their motions with the audio generated by the teacher's motions. To study the feasibility of this approach, the authors present data on early testing of the prototype, performed with blindfolded users.
Article Preview

1. Introduction

Distance education and e-learning platforms have changed the way people learn, enabling interactions between learners and instructors, or among learners, free from time or geographical limitations, through asynchronous and synchronous learning models (Sun et al., 2008). Dance education on such platforms has drawn on a varied range of materials, from traditional combinations of text-based descriptions, images, recorded video, and video streaming (Brooks, 2013) to three-dimensional approaches such as stereoscopic video (Lee, Lee, & Goo, 2012), animations (Karkou, Bakogianni, & Kavakli, 2008), and motion tracking with immersive participation in full-body virtual environments (Kyan et al., 2015).

Such platforms should be inclusive for the visually impaired, and while this has been feasible for text-based materials, more visual and especially animated content has lacked solutions for inclusion in distance learning. As early as 1968, researchers noted that dance education was visually minded and relied on examples and saying, “Do it like this” (Dugger, 1968), emphasizing that for the blind “…verbal descriptions for action have to be more carefully established…” (Dugger, 1968). Since then, various efforts have contributed to better linguistic descriptions of dance for the blind, often combining them with touch-assisted demonstrations (e.g., Duquette et al., 2012). Touch assistance is, however, unavailable in distance education. Dance descriptions can also resort to symbolic notations such as Labanotation (Guest, 2014), but their visual nature makes them unsuitable for blind users. The same reliance on visual content is present in efforts to use video annotation in dance education (e.g., Tsiatsos et al., 2010). Some systems automatically analyze a student’s motion and provide feedback, but these are also visual in nature: “So much emphasis is placed on the technique of mimicking the dance teacher that quantitative measures and feedback are crude or nonexistent, essentially requiring the students to follow the virtual teacher…” (Kyan et al., 2015).

Our speculative approach is that automated, low-cost auditory feedback may be a feasible substitute for touch assistance for blind distance learners, inspired by studies reported in the related work section. In this paper, we report on the early evaluation of our prototype, which provides sound synthesis feedback of motion through a low-cost device: the Kinect. An algorithm generates harmonic sound sequences from the stream of coordinates produced by the Kinect’s motion capture. This sound synthesis feedback accompanies the teacher’s recorded examples of motion and is generated dynamically as the learner follows the teacher’s recorded directions. Our rationale is that this instant feedback will enable blind users to judge whether their motions are similar to the teacher-provided examples (and also to gain a better grasp of other students’ motions). Learners can then use this judgment for a variety of educational goals, such as training the replication of motions, debating the motions, or proposing alternatives.
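The preview does not detail how coordinates are turned into harmonic sound. As a purely illustrative sketch of the general idea of sonifying motion-capture data, the fragment below assumes a hypothetical joint coordinate (a hand height normalized to [0, 1]) and maps it onto a harmonic series over an assumed fundamental; the actual Tele-Media-Art mapping may differ entirely.

```python
# Hypothetical sketch: sonify a stream of Kinect joint coordinates by
# mapping each sample onto a harmonic series. All constants and the
# mapping itself are illustrative assumptions, not the paper's method.

BASE_FREQ = 110.0    # assumed fundamental (A2), in Hz
NUM_HARMONICS = 8    # assumed number of selectable harmonics

def motion_to_harmonic(y_normalized: float) -> float:
    """Map a joint's vertical position (0.0 = low, 1.0 = high)
    to a frequency in the harmonic series of BASE_FREQ."""
    y = min(max(y_normalized, 0.0), 1.0)            # clamp to [0, 1]
    harmonic = 1 + round(y * (NUM_HARMONICS - 1))   # pick harmonic 1..8
    return BASE_FREQ * harmonic

# A short stream of hand heights, as might be sampled from Kinect frames
stream = [0.0, 0.25, 0.5, 0.75, 1.0]
freqs = [motion_to_harmonic(y) for y in stream]
```

Under this assumed mapping, an upward hand motion produces an ascending sequence of consonant (harmonically related) pitches, which is the kind of instantly interpretable audio cue the approach relies on.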

The teaching method follows an asynchronous model: a teacher first provides audio and/or text instructions and records the movements. Students then listen to or read the instructions and hear the synthesized audio feedback generated from the teacher’s movements; as they attempt to mimic those motions, they receive real-time synthesized audio feedback on their own movements.

We implemented our prototype integrated with the Moodle learning management system to streamline subsequent deployment for large-scale testing, but its overall operation is independent of the Moodle platform.
