How Do We Interact in Collaborative Virtual Reality?: A Nonverbal Interaction Perspective

Adriana Peña Pérez Negrón
Copyright: © 2021 |Pages: 27
DOI: 10.4018/978-1-7998-7552-9.ch013

Abstract

Nonverbal interaction includes most of what we do; it is the interaction that results from means other than words or their meaning. Computer-mediated interaction has not yet achieved the richness of face-to-face interaction. However, multiuser virtual reality, a computer-generated environment that allows users to share virtual spaces and virtual objects through their graphic representation, is a highly visual technology in which nonverbal interaction is better supported than in other media. Still, as in any technological medium, interaction is accomplished distinctively due to technical and design issues. In collaborative virtual reality, the analysis of nonverbal interaction represents a helpful mechanism to support feedback in teaching or training scenarios, to understand collaborative behavior, and to improve the technology itself. This chapter discusses the characteristics of nonverbal interaction in virtual reality, presenting advances in the automatic interpretation of users' nonverbal interaction while a spatial task is collaboratively executed.
Chapter Preview

Introduction

Virtual reality (VR) is a three-dimensional (3D) computer-generated scenario with which the user can interact, navigating through it or modifying objects. Furthermore, in a multiuser situation, the user interacts not only with the virtual environment but also with other users through different channels, both verbal and nonverbal actions. While verbal communication is easily achieved, nonverbal interaction in VR is constrained by design and technological issues. It is therefore important to understand how users accomplish interaction through a graphical representation, both to improve the user experience in general and to design for particular situations. In this context, the automatic analysis of nonverbal interaction in VR is a nontrivial task that can enhance the comprehension of such interactions and provide stakeholders with immediate feedback.

The primary purpose of VR is to produce a feeling of presence by generating the user's perceptual transfer into the virtual environment (VE). Although immersion, as a state of mind, depends on the user's willingness, the interaction design plays a significant role in the sensation of immersion. In turn, the interaction design is largely based on the computer's input/output devices.

From a technical point of view, desktop-based VR is considered the least immersive because the user can simultaneously interact with the real world. At the other end is immersive virtual reality (IVR), which surrounds the user with the VE so that the user can interact only with the virtual scenario. The two leading technologies to achieve IVR are the CAVE™ (see Figure 1), a 10'×10'×10' theater made up of three rear-projection screens for walls and a down-projection screen for the floor (Cruz-Neira, Sandin, & Defanti, 1993), and the head-mounted display (HMD), a device that displays the scenario with a different perspective for each eye (see Figure 2). Between desktop VR and IVR lies semi-immersive VR, a variety of technologies that can include large screens, semicircular displays, or tactile gadgets such as virtual gloves as input devices.

Figure 1.

Cave Automatic Virtual Environment (CAVE) (Cruz-Neira, Sandin, & Defanti, 1993)

Figure 2.

Head-mounted display (HMD)


The user's interaction with the VE falls within the Human-Computer Interaction (HCI) field of study, and it is composed of four basic actions (Mine, 1995):

  1. Navigation: displacement in the virtual space.

  2. Selection: pointing at or grabbing an object.

  3. Manipulation: modification of the state of an object (e.g., moving or rotating it).

  4. System control: access to the application's features, usually through menus.
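The four actions above can be sketched as a simple classification of raw input events. The event names and the mapping below are illustrative assumptions, not part of the chapter or of any particular VR toolkit; they only show how a system might route device input into Mine's four categories:

```python
from enum import Enum, auto

class VRAction(Enum):
    NAVIGATION = auto()      # displacement in the virtual space
    SELECTION = auto()       # pointing at or grabbing an object
    MANIPULATION = auto()    # modifying an object's state
    SYSTEM_CONTROL = auto()  # application features, usually via menus

# Hypothetical mapping from controller events to the four basic actions
INPUT_MAP = {
    "thumbstick_move": VRAction.NAVIGATION,
    "trigger_press":   VRAction.SELECTION,
    "grip_drag":       VRAction.MANIPULATION,
    "menu_button":     VRAction.SYSTEM_CONTROL,
}

def classify(event: str) -> VRAction:
    """Classify a raw input event into one of Mine's four basic actions."""
    return INPUT_MAP[event]
```

In a real application, each action would dispatch to its own subsystem (camera movement, picking, object transforms, or menu handling).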

HCI in VR can follow approaches similar to real life, such as grabbing an object with a hand-arm movement. Mostly, however, it follows a metaphor, a representative action that performs the interaction with the VE. A metaphor can in turn mimic real life, but this is not always possible or advisable. Continuing with the grabbing example: in VR, objects are usually grabbed or selected through the mouse or a game controller rather than by extending the arm to touch them. Moreover, people can grab real objects only within their reach, a limitation that does not apply in virtual situations, where the user can select objects that appear far away. Common metaphors for grabbing remote objects are a ray or a gun-target pointer that indicates the object to be grabbed, with a click most likely triggering the selection; afterward, the object has to present some indicative change so that the user is aware of the selection. A metaphor for user interaction thus comprises the user's actions and the design mechanism that supports the user's awareness.
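The ray metaphor for remote selection can be illustrated with a minimal sketch. Assuming objects are approximated by bounding spheres (the `Sphere` class and function names here are illustrative, not from the chapter), selection reduces to finding the nearest sphere intersected by the pointing ray and flagging it so the interface can render the indicative change:

```python
import math
from dataclasses import dataclass

@dataclass
class Sphere:
    cx: float
    cy: float
    cz: float
    r: float
    selected: bool = False  # the "indicative change" flag

def ray_pick(origin, direction, objects):
    """Return the nearest object hit by the pointing ray, or None.

    `direction` is assumed to be a normalized (x, y, z) vector.
    """
    best, best_t = None, math.inf
    ox, oy, oz = origin
    dx, dy, dz = direction
    for obj in objects:
        # Vector from the ray origin to the sphere center
        lx, ly, lz = obj.cx - ox, obj.cy - oy, obj.cz - oz
        t = lx * dx + ly * dy + lz * dz      # projection onto the ray
        if t < 0:
            continue                         # object is behind the user
        d2 = (lx * lx + ly * ly + lz * lz) - t * t  # squared ray-center distance
        if d2 <= obj.r * obj.r and t < best_t:
            best, best_t = obj, t
    if best is not None:
        best.selected = True                 # highlight the picked object
    return best
```

The click event would call `ray_pick` with the controller's pose; the renderer then uses the `selected` flag to draw the highlight that makes the user aware of the selection.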

Key Terms in this Chapter

Nonverbal Interaction: The interaction accomplished by other means than words or their meaning.

Avatar: The graphic representation of an actor in a virtual reality environment.

MUVE (Multiuser Virtual Environment): A computer-generated scenario in which multiple users interact.
