On Vision-Based Human-Centric Virtual Character Design: A Closer Look at the Real World from a Virtual One

Eugene Borovikov (PercepReal, USA), Ilya Zavorin (PercepReal, USA) and Sergey Yershov (PercepReal, USA)
Copyright: © 2016 |Pages: 34
DOI: 10.4018/978-1-5225-0454-2.ch001

Abstract

Enabling cognition in a Virtual Character (VC) can be an exciting endeavor for both its designer and the character itself. A typical VC interacts primarily with its virtual world, but given some sensory capabilities (e.g. vision or hearing), it could be expected to explore parts of the real world and interact with the intelligent beings there. A virtual character should therefore be equipped with algorithms to localize and track humans (e.g. via 2D or 3D models), recognize them (e.g. by their faces), and communicate with them. Such perceptual capabilities call for a sophisticated Cognitive Architecture (CA) to be integrated into the design of a virtual character, enabling the VC to learn from intelligent beings and reason like one. To seem natural, this CA needs to be fairly seamless, reliable, and adaptive. Hence a vision-based, human-centric approach to VC design is explored here.
Chapter Preview

Introduction

A purely virtual character (VC) is typically limited to interactions with, and reasoning about, its virtual world. However, given the ability to perceive and explore some of the real world and to interact with the intelligent beings there, can a VC evolve into an intelligent virtual being? Let us equip a VC with visual sensors, include some algorithms for object recognition and tracking, and provide some ability to learn and reason. Then such a virtual character, much like Alice stepping through the looking-glass (as in Figure 1) and becoming aware of the other world, should eventually discover some intelligent characters there, observe their traits, and, by interacting with them, learn and reason about that world and its beings. Such perceptual capabilities evidently call for a sophisticated cognitive architecture (CA) to be integrated into the design of a virtual character; to seem natural, this CA needs to be fairly seamless, reliable, and adaptive on both sides of the virtual looking glass. Thus, enabling cognition in a virtual character may truly be an exciting endeavor for the VC designers and, hopefully, for the VCs themselves.

Figure 1.

Alice Through the Looking Glass sculpture by Jeanne Argent at Guildford Castle, Surrey, UK

In general, there is a difference between cognitive-architecture approaches and Artificial Intelligence (AI) approaches to intelligent-agent design: the latter are usually optimized for maximum task performance, while the former are optimized for human-like performance. This chapter focuses on a human-centric CA that enables a perception-capable VC to learn and imitate the traits of the intelligent agents it observes and interacts with, ultimately striving toward human-like performance, while also allowing certain abilities to be developed and optimized to the point where they may eventually surpass those of humans, e.g. very fast and accurate content-based image retrieval.
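To make the content-based image retrieval example concrete, the following is a minimal sketch of the idea: images are indexed by a coarse color histogram, and a query is answered with the nearest stored histogram. The `HistogramIndex` class, the toy 2x2 "images", and the L1 distance are illustrative assumptions, not methods from the chapter.

```python
# Minimal content-based image retrieval (CBIR) sketch: index images by a
# coarse, normalized color histogram and retrieve the nearest match.
# All names and the toy pixel lists below are illustrative.

def color_histogram(pixels, bins=4):
    """Quantize RGB pixels into a normalized (bins**3)-bin histogram."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def l1_distance(h1, h2):
    """Sum of absolute per-bin differences between two histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

class HistogramIndex:
    """Tiny in-memory index mapping image ids to histogram features."""
    def __init__(self):
        self.features = {}

    def add(self, image_id, pixels):
        self.features[image_id] = color_histogram(pixels)

    def query(self, pixels):
        q = color_histogram(pixels)
        return min(self.features, key=lambda i: l1_distance(q, self.features[i]))

# Toy usage: two "images" as flat pixel lists; a reddish query retrieves
# the red image because their histograms fall into the same bins.
index = HistogramIndex()
index.add("red_img", [(250, 10, 10)] * 4)
index.add("blue_img", [(10, 10, 250)] * 4)
best = index.query([(240, 20, 20)] * 4)
print(best)  # -> red_img
```

A production system would of course use richer descriptors and an approximate nearest-neighbor index, but the index/query split above is the essential shape of fast CBIR.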

A virtual character’s perceptual abilities would naturally rely on its given sensory capabilities, e.g. video cameras for its eyes or microphones for its ears. Those sensory streams should be synchronized and carry enough signal resolution to distinguish the important features of the objects and beings a VC needs to interact with. These features would be extracted by the signal- and image-processing algorithms accompanying the sensors, and are therefore treated here as basic perceptual abilities that the virtual character does not need to develop itself. A perceptually capable VC, however, would need to use its evolving cognitive architecture to decide on a combination of important features characterizing the real-world objects and beings it must reason about and interact with.
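Deciding on a combination of features can be sketched as simple weighted fusion of per-sensor similarity scores. The feature names (`face`, `voice`), the weights, and the weighted-sum rule below are illustrative assumptions, not the chapter's method; they only show how a CA might trade one sense off against another.

```python
# Sketch of fusing basic perceptual features (e.g. face and voice similarity
# scores in [0, 1]) into a single identity confidence via a weighted sum.
# Feature names and weights are illustrative.

def fuse_features(scores, weights):
    """Combine per-feature scores into one confidence, normalized by weight."""
    assert set(scores) == set(weights), "each score needs a weight"
    total_w = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_w

# Example: vision is weighted more heavily than audio (say, a noisy room).
scores = {"face": 0.9, "voice": 0.4}
weights = {"face": 0.7, "voice": 0.3}
confidence = fuse_features(scores, weights)
print(round(confidence, 2))  # -> 0.75
```

An adaptive CA could update such weights from experience, e.g. lowering the voice weight whenever the audio stream proves unreliable.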

Communication between a VC and humans is of the most interest to this study. A virtual character should therefore be able to localize and track humans (e.g. via non-rigid 2D or 3D models), recognize them (e.g. by their faces and/or voices), and communicate with them, preferably via interfaces natural for both parties, e.g. a human-like virtual reality (VR) avatar. This is the purpose of our human-centric approach, which puts humans at the center of the VC’s attention with the intent of learning some human behavioral traits via the given senses, especially vision. It means that such a VC needs to work in visually unconstrained environments, perform its perceptual sub-tasks in real time, and constantly learn from its experiences with both the virtual and real worlds while interacting with their inhabitants. Such real-time interactions between a VC and the real world should result in the gradual development of that virtual character, ultimately yielding a highly realistic virtual or mixed reality experience for the humans involved.
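The localize-and-track requirement can be illustrated with a minimal centroid tracker: per-frame person detections (bounding-box centers) are associated with persistent track ids by nearest-centroid matching. The greedy matcher and its distance threshold below are an illustrative baseline, not the non-rigid 2D/3D models the text refers to.

```python
# Minimal centroid tracker sketch: keep a persistent id for each tracked
# person by matching new detections to the nearest previous centroid.
# Greedy matching with a distance gate; purely illustrative.

import math

class CentroidTracker:
    def __init__(self, max_distance=50.0):
        self.next_id = 0
        self.tracks = {}            # track id -> last known (x, y) centroid
        self.max_distance = max_distance

    def update(self, centroids):
        """Match detections to existing tracks; spawn new ids for the rest."""
        assigned = {}
        free = dict(self.tracks)    # tracks not yet matched this frame
        for c in centroids:
            if free:
                tid, pos = min(free.items(), key=lambda kv: math.dist(kv[1], c))
                if math.dist(pos, c) <= self.max_distance:
                    assigned[tid] = c
                    del free[tid]
                    continue
            assigned[self.next_id] = c   # unmatched detection: new track
            self.next_id += 1
        self.tracks = assigned           # unmatched old tracks are dropped
        return assigned

tracker = CentroidTracker()
print(tracker.update([(10, 10), (100, 100)]))  # frame 1: new ids 0 and 1
print(tracker.update([(12, 11), (103, 98)]))   # frame 2: same ids persist
```

Real-time performance is what makes even this simple association step useful: the detector runs per frame, while identity continuity across frames is what lets the VC reason about a particular human over time.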

The authors propose a vision-based, human-centric approach to the design of a virtual character equipped with visual sensors. A general vision-based solution to the VC design problem is beyond the scope of this discussion, so we focus on arguably the most visually expressive and natural real-world manifestations of a human: the face and the body. The main contribution of this work is a set of methods for several visual perception tasks that we believe are essential for a flexible, real-time, and continuously learning human-centric VC development system, namely:
