Human-Computer Interaction in Games Using Computer Vision Techniques

Vladimir Devyatkov, Alexander Alfimtsev
Copyright: © 2013 | Pages: 22
DOI: 10.4018/978-1-4666-3994-2.ch061

Abstract

A primary goal of virtual environments is to support natural, efficient, powerful, and flexible human-computer interaction. But the traditional two-dimensional, keyboard- and mouse-oriented graphical user interface is not well-suited for virtual environments. This chapter considers the most popular approaches for the simultaneous capture, tracking, and recognition of different modalities to create an intelligent human-computer interface for games. Given the large variability of gestures and their important role in creating intuitive interfaces, the approaches focus on gestures, although they may also be applied to other modalities. The approaches are user-independent and do not require large training samples.

Introduction

A primary goal of virtual environments is to support natural, efficient, powerful, and flexible human-computer interaction. If the interaction technology is awkward or constraining, the user's experience with the synthetic environment is severely degraded. If the interaction itself draws attention to the technology rather than the task at hand, it becomes an obstacle to a successful virtual environment experience.

The traditional two-dimensional, keyboard- and mouse-oriented graphical user interface (GUI) is not well-suited for virtual environments. Instead, synthetic environments provide the opportunity to utilize several different sensing modalities and integrate them into the user experience. The cross product of communication modalities and sensing devices begets a wide range of unimodal and multimodal interface techniques. The potential of these techniques to support natural and powerful interfaces makes them the future of game construction and design.

To support natural communication more fully, a system must not only track human movement but also interpret that movement in order to recognize semantically meaningful gestures. Tracking the user's head position or hand configuration may be quite useful for directly controlling objects or inputting parameters, but people naturally express communicative acts through higher-level constructs such as gesture or speech.
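As a concrete illustration of the direct-control case, a game can segment candidate skin-toned pixels in each camera frame and map their centroid straight to a cursor or object position. The sketch below is an illustrative assumption, not the authors' method: the thresholds, rule of thumb, and function names are mine.

```python
import numpy as np

def skin_mask(frame_rgb):
    """Crude skin-tone detector on an (H, W, 3) uint8 RGB frame.
    Thresholds follow a common red-dominance rule of thumb and are
    illustrative, not tuned values from the chapter."""
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20)
            & (r > g) & (r > b)
            & (r - np.minimum(g, b) > 15))

def track_centroid(frame_rgb):
    """Return the (x, y) centroid of skin-toned pixels,
    or None if no such pixels are found."""
    ys, xs = np.nonzero(skin_mask(frame_rgb))
    if xs.size == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))
```

A real tracker would add temporal smoothing and lighting-robust color spaces, but even this crude centroid is enough to drive a cursor, which is exactly the "directly controlling objects" use the paragraph above describes.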

In this chapter, we shall consider the most popular approaches for the simultaneous capture, tracking, and recognition of different modalities to create an intelligent human-computer interface for games. Given the large variability of gestures and their important role in creating intuitive interfaces, the approaches focus on gestures, although they may also be applied to other modalities. The approaches are user-independent and do not require large training samples.
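One family of recognizers with exactly these properties is template matching over gesture trajectories, in the spirit of unistroke recognizers: a single recorded example per gesture class serves as a template, and a new stroke is resampled, normalized, and compared against each template, so no large per-user training set is needed. The sketch below is an illustrative assumption, not the chapter's specific algorithm; all function names are mine, and rotation invariance is deliberately omitted for brevity.

```python
import math

def resample(points, n=32):
    """Resample a stroke to n points equally spaced along its arc length."""
    pts = [tuple(p) for p in points]
    total = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    if total == 0:
        return [pts[0]] * n
    step = total / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # q becomes a new vertex; continue from it
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Translate the centroid to the origin and scale by the larger
    bounding-box side, making comparison position- and size-invariant."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    w = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / w, (y - cy) / w) for x, y in points]

def recognize(stroke, templates):
    """Return the name of the template whose normalized resampled form
    has the smallest mean point-to-point distance to the input stroke."""
    cand = normalize(resample(stroke))
    best, best_d = None, float("inf")
    for name, tmpl in templates.items():
        ref = normalize(resample(tmpl))
        d = sum(math.dist(p, q) for p, q in zip(cand, ref)) / len(cand)
        if d < best_d:
            best, best_d = name, d
    return best
```

Because normalization removes position and scale, one template per gesture generalizes across users, which is how such recognizers stay user-independent without large training samples.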
