From Touchpad to Smart Lens: A Comparative Study on Smartphone Interaction with Public Displays

Matthias Baldauf, Peter Fröhlich, Jasmin Buchta, Theresa Stürmer
Copyright: © 2013 | Pages: 20
DOI: 10.4018/jmhci.2013040101

Abstract

Today’s smartphones provide the technical means to serve as interfaces for public displays in various ways. Even though recent research has identified several new approaches for mobile-display interaction, inter-technique comparisons of the respective methods are scarce. The authors conducted an experimental user study on four currently relevant mobile-display interaction techniques (‘Touchpad’, ‘Pointer’, ‘Mini Video’, and ‘Smart Lens’) and learned that their suitability strongly depends on the task and use case at hand. The study results indicate that mobile-display interactions based on a traditional touchpad metaphor are time-consuming but highly accurate in standard target acquisition tasks. The direct interaction techniques Mini Video and Smart Lens achieved comparably good completion times, and Mini Video in particular appeared to be best suited for complex visual manipulation tasks such as drawing. Smartphone-based pointing turned out to be generally inferior to the other alternatives. Examples of the application of these differentiated results to real-world use cases are provided.

Introduction

Digital signage technology such as public displays and projections is becoming omnipresent in today's urban surroundings. According to ABI Research (2011), the global market for such installations will triple in the next few years and reach almost $4.5 billion in 2016, indicating their increasing potential. However, typical public displays in the form of LCD flat screens are a passive medium and do not provide any interaction possibilities for an interested passerby. As our steady companions, smartphones have been identified as promising input devices for such remote systems. With their steadily expanding set of features, such as built-in sensors, high-quality cameras, and increasing processing power, they enable several advanced techniques for interacting with large public displays.

Ballagas et al. (2006) investigated the available input design space and proposed several dimensions for classifying existing mobile/display interaction techniques. For example, they suggest distinguishing between relative and absolute input commands as well as between continuous and discrete techniques: a continuous technique may change an object's position continually, whereas with a discrete technique the object's position changes only at the end of the task. Another commonly used dimension is the directness of a technique. A direct technique allows for the immediate selection of a favored point on the screen through the mobile device, traditionally using a graphical approach. In contrast, indirect approaches make use of a mediator, typically an on-screen mouse cursor that can be controlled through the mobile device.

Following an early classification of interaction techniques (Foley et al., 1984), we extend this smartphone/display interaction design space by the dimension of orientation-awareness, taking into account the increasing popularity of mobile gesture-based applications. In the case of an orientation-aware technique, the position and/or orientation of the mobile device affects the interaction with the screen. In contrast, orientation-agnostic approaches are not sensitive to device movement.
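To make the resulting design space concrete, the following Python sketch encodes the four techniques studied below along the directness and orientation-awareness dimensions. The class and enum names are illustrative choices for this example, not part of the original classification; the per-technique assignments follow the descriptions in the text.

from dataclasses import dataclass
from enum import Enum, auto

class Directness(Enum):
    DIRECT = auto()      # immediate selection of a screen point via the device
    INDIRECT = auto()    # mediated by an on-screen cursor

class OrientationAwareness(Enum):
    AWARE = auto()       # device position/orientation affects the interaction
    AGNOSTIC = auto()    # insensitive to device movement

@dataclass(frozen=True)
class Technique:
    name: str
    directness: Directness
    orientation: OrientationAwareness

TECHNIQUES = [
    Technique("Touchpad",   Directness.INDIRECT, OrientationAwareness.AGNOSTIC),
    Technique("Pointer",    Directness.INDIRECT, OrientationAwareness.AWARE),
    Technique("Mini Video", Directness.DIRECT,   OrientationAwareness.AGNOSTIC),
    Technique("Smart Lens", Directness.DIRECT,   OrientationAwareness.AWARE),
]

# Example: list all orientation-aware techniques.
print([t.name for t in TECHNIQUES if t.orientation is OrientationAwareness.AWARE])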

To learn more about upcoming orientation-aware interaction techniques and to evaluate their suitability for spontaneous interaction with public displays in comparison to established techniques, we selected four recent techniques for an in-depth comparative study. We chose two novel orientation-aware interaction techniques that are gaining increasing attention in industry and academia. These techniques became feasible on smartphones only recently due to advances in mobile device technology. Their implementations have not yet been scientifically compared with existing, more established techniques, so their actual benefits in terms of performance and user acceptance remain unproven.

The first orientation-aware technique, the Pointer (Figure 2), is made possible by the gyroscopes integrated into mobile devices of the latest generation. Inspired by a laser pointer, this technique enables control of the mouse cursor by tilting the mobile device and thus literally pointing towards the favored display location. The second orientation-aware, yet direct, Smart Lens technique (Figure 4) enables screen interaction through the smartphone's live video. By targeting the respective area of the remote screen through the built-in camera, users can select a specific screen point by touching the mobile device's display. Since this technique works directly on the device's live video, it inherently offers a zoom feature: moving the device closer to the display zooms in, and moving it away zooms out.
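As a rough illustration of the laser-pointer metaphor behind the Pointer technique, the following Python sketch maps device tilt angles to a cursor position by intersecting the pointing ray with the display plane. It assumes yaw and pitch angles relative to the display centre, a known user-to-display distance, and known display dimensions; the authors' actual implementation is not described here and may differ.

# Minimal sketch of laser-pointer-style cursor mapping (illustrative assumptions,
# not the authors' implementation). Assumes yaw/pitch in radians relative to the
# display centre, a planar display, and a known device-to-display distance.
import math

def pointer_to_cursor(yaw: float, pitch: float,
                      distance_m: float,
                      display_w_m: float, display_h_m: float,
                      display_w_px: int, display_h_px: int):
    """Project the pointing ray onto the display plane and return pixel coordinates."""
    # Intersection of the pointing ray with the display plane, in metres
    # relative to the display centre (x to the right, y upwards).
    x_m = distance_m * math.tan(yaw)
    y_m = distance_m * math.tan(pitch)

    # Pointing outside the physical display yields no cursor position.
    if abs(x_m) > display_w_m / 2 or abs(y_m) > display_h_m / 2:
        return None

    # Convert to pixel coordinates (origin at the top-left corner).
    px = int((x_m / display_w_m + 0.5) * display_w_px)
    py = int((0.5 - y_m / display_h_m) * display_h_px)
    return px, py

# Example: pointing 5 degrees right and 2 degrees up, 3 m from a 2 m x 1.2 m display.
print(pointer_to_cursor(math.radians(5), math.radians(2), 3.0, 2.0, 1.2, 1920, 1080))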

Figure 2. Indirect techniques: Pointer. Pointer and Smart Lens are orientation-aware techniques.

Figure 4. Direct techniques: Smart Lens. Pointer and Smart Lens are orientation-aware techniques.
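One common way to realize the Smart Lens mapping from a touch on the live video to a display coordinate is a plane-to-plane homography. The Python sketch below assumes the display's four corners have already been detected in the camera frame (e.g., via markers or screen-outline detection, which is outside its scope); it is an illustrative approach, not necessarily the authors' implementation.

# Minimal sketch (assumed approach): map a touch on the camera preview to display
# coordinates via a homography between the camera frame and the display plane.
import numpy as np

def homography_from_corners(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from four point pairs (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null space of this matrix (smallest singular vector) holds the 9 entries of H.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def touch_to_display(touch_xy, corners_in_frame, display_w_px, display_h_px):
    """Map a touch point on the live video to display pixel coordinates."""
    display_corners = [(0, 0), (display_w_px, 0),
                       (display_w_px, display_h_px), (0, display_h_px)]
    h = homography_from_corners(corners_in_frame, display_corners)
    x, y = touch_xy
    u, v, w = h @ np.array([x, y, 1.0])
    return u / w, v / w

# Example: display corners as detected in a 1280x720 camera frame (clockwise from
# top-left), and a touch in the middle of the preview.
corners = [(300, 150), (980, 180), (960, 560), (320, 540)]
print(touch_to_display((640, 360), corners, 1920, 1080))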
