Visiting Tourist Landmarks in Virtual Reality Systems by Real-Walking

F. Steinicke (Westfälische Wilhelms-Universität Münster, Germany), G. Bruder (Westfälische Wilhelms-Universität Münster, Germany), J. Jerald (University of North Carolina at Chapel Hill, USA) and H. Frenz (Westfälische Wilhelms-Universität Münster, Germany)
DOI: 10.4018/978-1-60566-818-5.ch011


In recent years, virtual environments (VEs) have become increasingly popular and widespread due to the requirements of numerous application areas, in particular the 3D city visualization domain. Virtual reality (VR) systems, which make use of tracking technologies and stereoscopic projections of three-dimensional synthetic worlds, support better exploration of complex datasets. However, due to the limited interaction space usually provided by the range of the tracking sensors, users can explore only a portion of the VE. Redirected walking allows users to walk through large-scale immersive virtual environments (IVEs), such as virtual city models, while physically remaining in a reasonably small workspace, by intentionally injecting scene motion into the IVE. With redirected walking, users are guided on physical paths that may differ from the paths they perceive in the virtual world. The authors have conducted experiments to quantify how much humans can unknowingly be redirected. In this chapter they present the results of this study and the implications for virtual locomotion user interfaces that allow users to view arbitrary real-world locations before they actually travel there in a natural environment.
Chapter Preview


Walking is the most basic and intuitive way of moving within the real world.

Navigating through large-scale immersive virtual environments (IVEs) can be used in interesting ways in the e-tourism domain: landmarks, historical areas, hotels, and similar sites can be viewed in an IVE before going there physically.

Many domains are inherently three-dimensional, and advanced visual simulations often provide a good sense of locomotion, but exclusively visual stimuli cannot address the vestibular-proprioceptive system, which provides us with the ability to know where we are and how we move in space.

Real walking through IVEs is often not possible (Whitton et al., 2005). An obvious approach is to transfer the user's tracked head movements to changes of the virtual camera in the virtual world by means of a one-to-one mapping, i.e., a one-meter movement in the real world is mapped to a one-meter movement in the virtual one. This technique has the drawback that the user's movements are restricted by the limited range of the tracking sensors and the rather small workspace in the real world. Therefore, virtual locomotion methods are needed that enable walking over large distances in the virtual world while the user remains within a relatively small space in the real world. Various prototypes of interface devices have been developed to prevent displacement in the real world, such that users remain at almost the same physical position even while they walk. These devices include torus-shaped omni-directional treadmills, motion foot pads, robot tiles, and motion carpets (Bouguila & Sato, 2002; Iwata, Yano, Fukushima, & Noma, 2006).
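The one-to-one mapping described above can be sketched in a few lines. This is a minimal illustration, not code from the chapter; the function and parameter names are our own:

```python
def one_to_one_mapping(tracked_pos, tracked_yaw):
    """Map a tracked real-world head pose directly onto the virtual camera.

    Identity (one-to-one) mapping: a one-meter movement in the real
    world becomes a one-meter movement in the virtual world, and
    rotations are passed through unchanged. Names are illustrative,
    not taken from the chapter.
    """
    virtual_pos = tuple(tracked_pos)   # 1:1 translation
    virtual_yaw = tracked_yaw          # 1:1 rotation
    return virtual_pos, virtual_yaw
```

Because the mapping is the identity, the reachable virtual space is exactly as large as the tracked physical workspace, which is the limitation the chapter sets out to overcome.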

Although these hardware systems represent enormous technological achievements, they are still very expensive and will not be generally accessible in the foreseeable future. Hence there is a tremendous demand for more accessible approaches. As a solution to this challenge, traveling by means of walk-like gestures has been proposed in many different variants, giving the user the impression of walking. For example, in the walking-in-place approach the user performs stepping gestures to travel through an IVE while remaining physically at nearly the same position (Feasel et al., 2008).

However, real walking has been shown to be a more presence-enhancing locomotion technique than other navigation methods.

Cognition and perception research suggests that cost-efficient as well as natural alternatives exist. It is known from perceptual psychology that vision often dominates proprioceptive and vestibular sensation when they disagree. When, in perceptual experiments, human participants can use only vision to judge their motion through a virtual scene, they can successfully estimate their momentary direction of self-motion but are much less proficient in perceiving their paths of travel (Lappe, Bremmer, & van den Berg, 1999). Since users tend to unwittingly compensate for small inconsistencies during walking, it is therefore possible to guide them along paths in the real world that differ from the paths perceived in the virtual world. This redirected walking enables users to explore a virtual world that is considerably larger than the tracked workspace (Razzaque, 2005) (see Figure 1).

Figure 1:

Redirected walking scenario for a user walking in the real environment.

In this chapter we present a series of experiments in which we have quantified how much humans can be redirected without observing inconsistencies between real and virtual motions. The remainder of this chapter is structured as follows.
