ZatLab Gesture Recognition Framework: Machine Learning Results

André Baltazar (Catholic University of Portugal, Center for Science and Technology in the Arts, Porto, Portugal)
DOI: 10.4018/IJCICG.2016070102

The main problem this work addresses is the real-time recognition of gestures, particularly in the complex domain of artistic performance. By recognizing the performer's gestures, one is able to map them to diverse controls, from lighting control to the creation of visuals, sound control, or even music creation, thus allowing performers real-time manipulation of creative events. The work presented here takes on this challenge with a multidisciplinary approach, combining some of the known principles of how humans recognize gesture with computer science methods to complete the task. This paper builds on previous publications and presents in detail the Gesture Recognition Module of the ZatLab Framework and the results obtained by its Machine Learning (ML) algorithms. A brief review of previous work in the area is provided, followed by a description of the framework design and the results of the recognition algorithms.
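To make the gesture-to-control mapping described above concrete, the sketch below shows one way a dispatch layer might route recognized gesture labels to control callbacks (lighting, visuals, sound). All names here are hypothetical illustrations, not the ZatLab Framework's actual API:

```python
from typing import Callable, Dict, List

class GestureDispatcher:
    """Routes recognized gesture labels to registered control callbacks."""

    def __init__(self) -> None:
        # Map from gesture label to the list of control handlers bound to it.
        self._handlers: Dict[str, List[Callable[[float], None]]] = {}

    def register(self, gesture: str, handler: Callable[[float], None]) -> None:
        """Bind a control handler (e.g. a lighting cue) to a gesture label."""
        self._handlers.setdefault(gesture, []).append(handler)

    def dispatch(self, gesture: str, confidence: float) -> int:
        """Invoke every handler bound to `gesture`; return how many fired."""
        handlers = self._handlers.get(gesture, [])
        for handler in handlers:
            handler(confidence)
        return len(handlers)

# Usage: map a "wave" gesture to a (simulated) lighting control.
events = []
dispatcher = GestureDispatcher()
dispatcher.register("wave", lambda conf: events.append(("lights_up", conf)))
dispatcher.dispatch("wave", 0.92)
```

In a real performance setting the handlers would typically send OSC or MIDI messages to lighting and audio software rather than append to a list; the list here just makes the example self-contained.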
Article Preview


The field of human movements and gesture analysis has, for a long time now, attracted the interest of many researchers, choreographers and dancers. Thus, since the end of the last century, a significant corpus of work has been conducted relating movement perception with music (Fraisse, 1982).

Among the research community on this subject, some works stand out as important references on how video analysis technologies have provided interesting means of movement-music interaction: the early works of the composers Todd Winkler (Winkler, 1995) and Richard Povall (Povall, 1998), and the choreographer Robert Wechsler's work with Palindrome. Mark Coniglio also continued development of his Isadora programming environment, alongside the groundbreaking work Troika Ranch has done in interactive dance.

Another example of research in this field is the seminal work of Camurri, with several published studies, including an approach to recognizing acted emotional states based on the analysis of body movement and gesture expressivity (Castellano, Villalba, & Camurri, 2007) and one of the most remarkable and recognized works, the EyesWeb software (Camurri et al., 2000).
