Approaches and Applications of Virtual Reality and Gesture Recognition: A Review

Sudha M. R., Sriraghav K., Sudar Abisheck S., Shomona Gracia Jacob, Manisha S.
DOI: 10.4018/978-1-5225-5469-1.ch009


Interaction with a computer has been a center of innovation ever since the advent of input devices. From simple punch cards to keyboards, there have been a number of novel ways of interacting with computers that influence the user experience. Communicating using gestures is perhaps one of the most natural forms of interaction. Gesture recognition as a tool for interpreting signs constitutes a pivotal area of gesture recognition research, where the accuracy of the algorithm and its ease of use determine the effectiveness of the system. Introducing gesture-based interaction in virtual reality applications has not only helped solve problems commonly reported in traditional virtual reality systems, but also gives the user a more natural and enriching experience. This paper concentrates on comparing different systems and identifying their similarities, differences, advantages, and demerits, which can play a key role in designing a system using such technologies.

Concepts in Virtual Reality

Since the inception of virtual reality and its conception by Ivan Sutherland, different perceptions of virtual reality have been developed and presented. While each of them differs from the others in its method of implementation, level of immersion, and sensory revelation, they all share some qualities that are unique to the concept of virtual reality. A well-known method proposed by Zeltzer (Zeltzer, 1992) is the AIP cube, which characterizes the complexity and quality of a virtual reality environment using three parameters, namely Autonomy, Interaction, and Presence, as shown in Figure 1. These parameters are expressed as the coordinate axes of a cube. “Autonomy is defined as a qualitative measure of the ability of a computational model to act and react to simulated events and stimuli, ranging from 0 for the passive geometric model to 1 for the most sophisticated, physically based virtual agent. Interaction is the degree of access to model parameters at runtime (i.e., the ability to define and modify states of a model with immediate response). The range is from 0 for “batch” processing in which no interaction at runtime is possible, to 1 for comprehensive, real-time access to all model parameters. Presence provides a rough (dimensionless) measure of the number and fidelity of available input and output channels” (Kalawsky, 2000).
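The AIP cube above can be made concrete with a minimal sketch: each system is a point in a unit cube whose axes are autonomy, interaction, and presence, all constrained to [0, 1]. The class name `AIPPoint` and the example coordinate values are illustrative assumptions, not from the source.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AIPPoint:
    """A point in Zeltzer's AIP cube; each axis ranges from 0 to 1."""
    autonomy: float     # 0 = passive geometric model, 1 = physically based virtual agent
    interaction: float  # 0 = "batch" processing, 1 = real-time access to all model parameters
    presence: float     # rough measure of number/fidelity of input and output channels

    def __post_init__(self):
        # Validate that every coordinate lies inside the unit cube.
        for name in ("autonomy", "interaction", "presence"):
            value = getattr(self, name)
            if not 0.0 <= value <= 1.0:
                raise ValueError(f"{name} must lie in [0, 1], got {value}")


# Illustrative placements (assumed values): a conventional batch animation
# system sits near the origin, while an idealized immersive VR system
# approaches the corner (1, 1, 1).
batch_animation = AIPPoint(autonomy=0.0, interaction=0.0, presence=0.1)
ideal_vr = AIPPoint(autonomy=1.0, interaction=1.0, presence=1.0)
```

This framing makes comparisons between systems in the review explicit: two systems can be contrasted axis by axis rather than by an overall impression of "immersiveness."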
