A Collaborative Pointing Experiment for Analyzing Bodily Communication in a Virtual Immersive Environment

Divesh Lala (Graduate School of Informatics Kyoto University, Japan) and Toyoaki Nishida (Graduate School of Informatics Kyoto University, Japan)
DOI: 10.4018/jssci.2012070101


Virtual environments are a medium in which humans can effectively interact; however, until recently, research on body expression in these worlds has been sparse. This has changed with the recent development of markerless motion capture. This paper is a first step toward using this technology as part of an investigation into a collaborative task in the virtual world. In this task, participants used a pointing gesture both to complete the task and to communicate with their partner. The results of the experiment were inconclusive, but they did show that its effectiveness depends largely on the algorithm used to detect gestures and thereby influence the virtual world. Additionally, the benefits of the experimental system are shown. This research demonstrates the potential of examining body expression in collaborative virtual environments.

1. Introduction

Body expression as a means of communication pervades everyday life. Using our upper body to indicate, explain, and even show emotion is ubiquitous, even though many of these actions occur almost subconsciously. Similarly, communication in the virtual world using avatars is no less rich, despite the comparatively few methods with which users can express themselves. In fact, it is remarkable that communicative and collaborative acts in the online world can be established using avatars with only a few preset animations. However, the question remains as to the mechanism of virtual interaction when users can directly manipulate their own avatars. This domain is still at an early stage of development, but it is of interest to researchers in human-computer interaction, virtual reality, and anthropology, among others. This paper takes a step into this field and shows its potential for future contributions.

From the perspective of cognitive informatics, simulating cognitive processes during communication is a worthy goal. However, the actual process of human-human communication should first be observed. If the goal is to make online communication natural (i.e., close to the real world) for human beings, then much work remains to be done. Although connectivity technology is now abundant, online communication still cannot provide the richness of real-world interaction. This research also contributes to the field of cognitive informatics by providing both a system and a scenario through which these interaction processes can be studied. The advantage of doing this through information technology is that the data can also be captured and acted upon in real time.

The analysis of non-verbal behavior of human beings is a rich and mature research field, with many avenues of study. Early work by Mehrabian and Ferris (1967) highlighted the importance of non-verbal expression for communication. Following on from this research, many types of non-verbal modalities have been investigated, such as facial expression (Ekman et al., 1987), proxemics (Hall, 1990), gesture (Kendon, 2004), and body expression (de Gelder, 2009). While these are all valuable fields in their own right, it is body expression within which this research will be grounded.

While previous studies have focused primarily on real-world analysis, this paper takes a different perspective: that of body expression communication in the virtual world. There are relatively few examples of prior research which address this. A recent paper by Allmendinger (2010) argued that body language has been under-utilized in desktop environments, perhaps because expressing body language in the virtual world is far more difficult than in reality. On the other hand, analysis of social interactions in virtual worlds could well depend largely on body expression from avatars. A pertinent question is whether the social rules that govern body expression in the real world also hold in virtual worlds. This paper aims to take a first step towards investigating some of these issues by using an environment in which a human's real-world motions influence a virtual-world avatar.

There are two major methods through which an avatar can utilize body expression. The first is the most common, especially for the average user. It is a simple point-and-click method such as that used by Guye-Vuillème, Capin, Pandzic, N. Thalmann, and D. Thalmann (1999), where the user chooses the expression with which they wish to communicate. The obvious drawback of this method is that the user is limited in the number of communicative features they can execute. Furthermore, there is no flexibility in the way the avatar performs an action, as it is determined by preset animations. The second method is for the users themselves to become the input of the system and utilize their own natural movement, as in Peinado et al. (2009) and Jovanov et al. (2009). Wearable markers enable motion to be captured in this fashion. This method allows the user much greater flexibility when interacting through their body, and it is the conceptual basis for the technique used in this paper. However, rather than requiring wearable markers, a Kinect device will be used for markerless body motion recognition.
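To make the second method concrete, a markerless capture device such as the Kinect exposes 3D positions of skeleton joints, and a pointing gesture can be detected from them with simple geometry. The sketch below is an illustrative assumption, not the algorithm used in this paper: it treats the arm as pointing when the elbow angle is nearly straight, and derives a pointing ray from shoulder to wrist. The joint coordinates, function names, and the 160-degree threshold are all hypothetical.

```python
import math

def _angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    norm = math.sqrt(sum(x * x for x in v1)) * math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against floating-point drift outside acos's domain.
    cos_theta = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_theta))

def is_pointing(shoulder, elbow, wrist, straight_thresh=160.0):
    """Heuristic: the arm counts as pointing when the elbow is nearly straight."""
    return _angle(shoulder, elbow, wrist) >= straight_thresh

def pointing_direction(shoulder, wrist):
    """Unit vector from shoulder to wrist, usable as a ray into the scene."""
    v = [wrist[i] - shoulder[i] for i in range(3)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]
```

Running such a check on every captured frame, and intersecting the resulting ray with objects in the virtual scene, is one plausible way a pointing gesture could both complete the task and serve as a communicative signal to a partner.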
