Effects of Volumetric Augmented Reality Displays on Human Depth Judgments: Implications for Heads-Up Displays in Transportation

Lee Lisle, Coleman Merenda, Kyle Tanous, Hyungil Kim, Joseph L. Gabbard, Doug A. Bowman
Copyright © 2019 | Pages: 18
DOI: 10.4018/IJMHCI.2019040101

Abstract

Many driving scenarios involve correctly perceiving road elements in depth and manually responding as appropriate. Of late, augmented reality (AR) head-up displays (HUDs) have been explored to assist drivers in identifying road elements, using a myriad of AR interface designs that include world-fixed graphics perceptually placed in the forward driving scene. Volumetric AR HUDs purportedly offer increased accuracy of distance perception through natural presentation of oculomotor cues as compared to traditional HUDs. In this article, the authors quantify participant performance in matching virtual objects to real-world counterparts at egocentric distances of 7-12 meters while using both volumetric and fixed-focal plane AR HUDs. The authors found the volumetric HUD to be associated with faster and more accurate depth judgments at far distances, and that participants performed depth judgments more quickly as the experiment progressed. The authors observed no differences between the two displays in terms of reported simulator sickness or eye strain.

Introduction

Today’s cars collect vast amounts of data from various sources that must be condensed and presented to drivers in a salient format. Accordingly, there are multiple ways to present these data, from console displays to dashboard displays and, recently, augmented reality (AR) head-up displays (HUDs). While console displays and dashboard displays require users to look away from the road scene, HUDs present information contextually overlaid onto the forward driving scene. Because these displays do not require the user to glance away from the scene, they offer the opportunity for increased performance in visual and identification tasks without impacting primary task (driving) performance (Smith et al., 2017; Rusch et al., 2013; Tran et al., 2013).

AR displays, however, can be affected by several factors that impede users’ visual perception. Some AR HUDs, for example, can cause eye strain (Banks et al., 2013) or simulator sickness (Kennedy et al., 1993). These effects may be overcome by presenting correct binocular focal cues, which further afford more accurate perception of virtual objects. Traditionally, however, AR displays employ transparent or video see-through technology with a fixed-focal display. Unfortunately, this design suffers from the vergence-accommodation mismatch (Hoffman et al., 2008), which hinders depth perception. Common driving scenarios, such as collision hazards or pedestrian detection, often require depth perception to comprehend the situation and execute a response effectively. When paired with an AR HUD, these scenarios demand correct depth cues so that the driver can identify the distance of any hazard and react accordingly. Traditional fixed-focal plane AR HUDs must overcome these depth perception issues to become more effective displays for road hazards.
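To make the mismatch concrete, it is commonly quantified in diopters (the reciprocal of distance in meters): the eyes converge on the virtual object's intended depth while accommodating to the display's fixed focal plane. The worked numbers below are illustrative assumptions, not values from this study (a focal plane near 2.5 m is typical of automotive HUDs):

```latex
% Vergence-accommodation conflict C, in diopters (1 D = 1 m^{-1}).
% d_f = focal-plane distance, d_v = intended vergence (object) distance.
C = \left|\frac{1}{d_f} - \frac{1}{d_v}\right|
  = \left|\frac{1}{2.5\,\mathrm{m}} - \frac{1}{10\,\mathrm{m}}\right|
  = \left|0.4 - 0.1\right| = 0.3\ \mathrm{D}
```

By this measure, a display that places graphics at (or near) an object's true depth drives the conflict toward zero, which is the advantage claimed for the volumetric approach discussed next.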

Volumetric AR displays purport to improve depth perception through the use of voxels: illuminated points in three-dimensional space that create depth cues naturally because they occupy true depth locations across variable focal planes. This approach eliminates the need for other specialized technology, such as stereoscopic glasses or head-tracking systems, to create three-dimensional depth cues like motion parallax or binocular disparity (Jones et al., 2008). As such, volumetric displays afford consistent oculomotor vergence and accommodation cues, helping to overcome the mismatch that afflicts many other augmented reality displays (Swan et al., 2015). Moreover, volumetric displays can support natural depth cues and improve depth perception at arbitrary distances compared to traditional fixed-focal plane designs. Supporting natural depth cues is especially important in driving, where, for example, AR graphics should guide drivers’ visual attention to hazards, and presenting AR graphics at the same depth as a hazard may improve detection and subsequent reactions.

This study evaluates a swept-volume volumetric AR HUD to better understand human performance gains relative to a traditional fixed-focal plane AR HUD. We focused specifically on the quality of depth judgments (both time and accuracy), speed/accuracy tradeoffs, and practice effects over time. We also collected self-reported measures of eye strain and simulator sickness to gauge the effect of each display on drivers.

Our study found that depth perception does improve with volumetric AR HUD technology. This can be partially attributed to the fixed-focal display anchoring users’ depth perception at a single distance, which may hinder judgments more than the volumetric display's variable focal planes help them. Further statistical analysis revealed a weak correlation between response time and judgment, as well as an autocorrelation of trial number with both judgment and response time. With the volumetric display, there was also a negative multivariate correlation between trial number and both judgment and response time: since accuracy did not improve, this indicates a practice effect in which users became quicker at perceiving distances. This effect was not present in the traditional fixed-focal display condition, suggesting that users became more confident in their depth judgments specifically with the volumetric display.
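As a minimal sketch of how such a practice-effect analysis can be run, the snippet below correlates trial number with response time and with judgment error for each display condition. The file name and column names are hypothetical illustrations, not the authors' actual pipeline:

```python
# Illustrative practice-effect analysis (not the authors' code).
# Assumes a per-trial log with hypothetical columns:
#   trial (1..N), response_time_s, judgment_error_m,
#   display ("volumetric" or "fixed")
import pandas as pd
from scipy import stats

trials = pd.read_csv("depth_judgment_trials.csv")  # hypothetical log file

for display, group in trials.groupby("display"):
    # Does response time fall as the experiment progresses (learning)?
    r_time, p_time = stats.pearsonr(group["trial"], group["response_time_s"])
    # Does judgment error change over the same trials?
    r_err, p_err = stats.pearsonr(group["trial"], group["judgment_error_m"])
    print(f"{display}: trial vs. time r={r_time:.2f} (p={p_time:.3f}), "
          f"trial vs. error r={r_err:.2f} (p={p_err:.3f})")
```

Under this framing, the pattern reported above would appear as a significant negative trial-time correlation for the volumetric condition with a flat trial-error trend, and no such correlation for the fixed-focal condition.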
