Touchless Selection Schemes for Intelligent Automotive User Interfaces With Predictive Mid-Air Touch

Bashar I. Ahmad (University of Cambridge, Cambridge, UK), Chrisminder Hare (Jaguar Land Rover, Coventry, UK), Harpreet Singh (Jaguar Land Rover, Coventry, UK), Arber Shabani (Jaguar Land Rover, Coventry, UK), Briana Lindsay (Jaguar Land Rover, Coventry, UK), Lee Skrypchuk (Jaguar Land Rover, Coventry, UK), Patrick Langdon (University of Cambridge, Cambridge, UK) and Simon Godsill (University of Cambridge, Cambridge, UK)
Copyright: © 2019 | Pages: 22
DOI: 10.4018/IJMHCI.2019070102

Abstract

Predictive touch technology aims to improve the usability and performance of in-vehicle displays under the perturbations induced by the road and driving conditions. It fundamentally relies on predicting, early in the freehand pointing movement, the interface item the user intends to select, using a novel Bayesian inference framework. This article focuses on evaluating facilitation schemes for selecting the predicted interface component whilst driving, without physically touching the display, hence touchless. Initially, several viable schemes were identified in a brainstorming session followed by an expert workshop with 12 participants. A simulator study with 24 participants using a prototype predictive touch system was then conducted. The collected quantitative and qualitative measures show that immediate mid-air selection, where the system autonomously selects the predicted interface component, may be the most promising strategy for predictive touch.

Introduction

Predictive touch is an emerging HMI technology that employs a probabilistic Bayesian framework and novel algorithms to predict the interface component the user intends to select, notably early in the pointing-selection task (Ahmad, Murphy, Godsill, Langdon & Hardy, 2017). It infers the user intent from the available freehand pointing movements in 3D, for example from gesture trackers, which are increasingly commonplace in vehicles (Zhang & Angell, 2014; Ohn-Bar & Trivedi, 2014), and potentially from other available sensory data such as eye-gaze. The pointing-selection task is simplified and expedited by the system via a suitable selection facilitation scheme. This can significantly reduce the effort and distraction associated with using in-vehicle displays whilst driving (Jæger, Skov & Thomassen, 2008). Figure 1 depicts the system block diagram, including the sensory data sources utilized by a Bayesian predictor to estimate, early in the pointing task, the probability of each of the selectable interface items being the intended on-screen destination. Predictive touch was originally developed to mitigate the effects of perturbations on the user input, for example vibrations and accelerations due to the road and driving conditions. Such perturbations can have a detrimental impact on the performance of interactive displays, such as touchscreens (Goode, Lenné & Salmon, 2012; Ahmad et al., 2015), which often act as the gateway to in-vehicle infotainment systems and are an integrated part of modern vehicles (Harvey & Stanton, 2016).
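To give a rough flavour of the Bayesian predictor in Figure 1, the sketch below scores each selectable interface item by how consistently the observed partial pointing trajectory heads towards it, then normalises these scores into a posterior distribution over candidate destinations. This is an illustrative toy model, not the authors' published framework: the heading-based Gaussian angular-noise likelihood, the `sigma` value, and the uniform prior are all assumptions made here for demonstration only.

```python
import numpy as np

def destination_posterior(trajectory, targets, sigma=0.2, prior=None):
    """Posterior probability of each candidate on-screen target being the
    intended destination, given a partial 3D pointing trajectory.

    Toy likelihood: each observed movement step is assumed to point towards
    the intended target, with Gaussian angular noise of std `sigma` radians.
    """
    traj = np.asarray(trajectory, dtype=float)   # shape (T, 3): tracked fingertip positions
    tgts = np.asarray(targets, dtype=float)      # shape (N, 3): selectable item locations
    n = len(tgts)
    log_post = np.log(np.full(n, 1.0 / n) if prior is None else np.asarray(prior, dtype=float))
    for k, step in enumerate(np.diff(traj, axis=0)):
        step_norm = np.linalg.norm(step)
        if step_norm < 1e-9:
            continue                             # skip stationary samples
        to_target = tgts - traj[k]               # direction from current position to each target
        cos = (to_target @ step) / (np.linalg.norm(to_target, axis=1) * step_norm)
        angle = np.arccos(np.clip(cos, -1.0, 1.0))
        log_post += -0.5 * (angle / sigma) ** 2  # Gaussian angular likelihood per step
    post = np.exp(log_post - log_post.max())     # normalise in a numerically stable way
    return post / post.sum()
```

As the partial trajectory accumulates more steps aimed at one item, that item's posterior probability grows, which is what allows the predicted component to be identified early in the pointing gesture.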

Predictive touch is not a pointing/ray-casting or conventional "symbolic" gesture-recognition solution; as will be discussed, the user does not need to physically touch a display to select an interface component. This touchless technology can therefore not only improve the usability and performance of in-vehicle interactive displays, but also enable interaction, via intuitive free-hand pointing, with new automotive display technologies that lack a physical surface, such as head-up displays and 3D projections (Bark et al., 2014; Broy et al., 2015). It also offers additional design flexibility in terms of display placement and size, which are otherwise limited by the reach of the driver/passenger. This can promote inclusive design practices by tailoring the display operation to the user's capabilities through adequate configuration of the "software-based" intent prediction algorithms and pointing facilitation schemes.

Figure 1.

System block diagram showing an in-car touchscreen, partial 3D pointing-finger trajectory (black solid line) available at the current time instant tk with tracked locations (crosses), future pointing trajectory (red dotted line) and intended on-screen destination (red circle)


In this paper, we address the problem of identifying the most suitable scheme for facilitating the selection of the predicted interface component whilst driving. This human-factors aspect is crucial for the deployment of the predictive touch technology in the automotive domain. The selection facilitation functionality belongs to the Facilitation Scheme module in Figure 1. It involves altering the interface, for instance highlighting the predicted item, and then triggering the selection action. The users receive visual feedback on their input, as the interface typically changes and the Graphical User Interface (GUI) page updates with each selection action. Other feedback modalities can be explored, e.g. audible (Ahmad et al., 2016b) or mid-air haptic (Shakeri, Williamson & Brewster, 2018); however, this is outside the scope of this article.
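The "immediate mid-air selection" scheme evaluated in the study can be caricatured as a simple thresholding rule applied to the predictor's output: highlight the current best guess as visual feedback, and autonomously select it once the predictor's confidence is high enough. The function below is an illustrative sketch only; the `threshold` value and the dictionary-based return format are assumptions made here, not details of the published system.

```python
def midair_autoselect(posterior, labels, threshold=0.9):
    """Illustrative selection facilitation rule for predictive touch:
    once the predictor's confidence in its best guess exceeds `threshold`,
    autonomously select that interface item mid-air (no physical touch);
    otherwise only highlight the current best guess as visual feedback."""
    best = max(range(len(posterior)), key=lambda i: posterior[i])
    if posterior[best] >= threshold:
        return {"action": "select", "item": labels[best]}
    return {"action": "highlight", "item": labels[best]}
```

Other facilitation schemes discussed in the article would correspond to different triggering conditions (e.g. requiring an explicit confirmation gesture instead of crossing a confidence threshold), with the same underlying predictor supplying the posterior probabilities.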
