Investigating Serendipitous Smartphone Interaction with Public Displays

Matthias Baldauf (Vienna University of Technology, Austria) and Peter Fröhlich (Austrian Institute of Technology (AIT), Austria & FTW Telecommunications Research Center, Austria)
DOI: 10.4018/978-1-4666-8583-3.ch011


Today's smartphones provide the technical means to serve as interfaces for public displays in various ways. Even though recent research has identified several approaches for mobile-display interaction, inter-technique comparisons of respective methods are scarce. In this chapter, the authors present an experimental user study on four currently relevant mobile-display interaction techniques ('Touchpad', 'Pointer', 'Mini Video', and 'Smart Lens'). The results indicate that mobile-display interactions based on a traditional touchpad metaphor are time-consuming but highly accurate in standard target acquisition tasks. The direct interaction techniques Mini Video and Smart Lens achieved comparably good completion times, and Mini Video in particular appeared to be best suited for complex visual manipulation tasks like drawing. Smartphone-based pointing turned out to be generally inferior to the other alternatives. Finally, the authors introduce state-of-the-art browser-based remote controls as one promising way towards more serendipitous mobile interactions and outline future research directions.
Chapter Preview


Digital signage technology such as public displays and projections is becoming omnipresent in today's urban surroundings. According to ABI Research (2011), the global market for such installations will reach almost $4.5 billion in 2016, indicating their increasing potential. However, typical public displays in the form of LCD flat screens are a passive medium and do not offer any interaction possibilities to an interested passerby. As our constant companions, smartphones have been identified as promising input devices for such remote systems. With their steadily expanding set of features, such as built-in sensors, high-quality cameras, and increasing processing power, they enable several advanced techniques for interacting with large public displays.

Ballagas, Borchers, Rohs, and Sheridan (2006) investigated the available input design space and proposed several dimensions for classifying existing mobile/display interaction techniques. For example, they suggest distinguishing between relative and absolute input commands as well as between continuous and discrete techniques: a continuous technique changes an object's position continually, whereas with a discrete technique the object's position changes only at the end of the task. Another commonly used dimension is the directness of a technique. A direct technique allows the immediate selection of a desired point on the screen through the mobile device, traditionally using a graphical approach. In contrast, indirect approaches make use of a mediator, typically an on-screen mouse cursor controlled through the mobile device.
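To make the relative/absolute distinction concrete, the following sketch (not taken from the chapter; resolutions, gain factor, and function names are illustrative assumptions) contrasts an indirect, relative mapping in the style of the Touchpad technique with a direct, absolute mapping in the style of Mini Video:

```python
# Illustrative sketch of two input mappings; SCREEN_W/SCREEN_H and the
# gain value are assumptions, not values from the chapter.

SCREEN_W, SCREEN_H = 1920, 1080  # assumed public-display resolution


def relative_move(cursor, delta, gain=2.0):
    """Indirect, relative input (Touchpad-style): a touch drag moves the
    on-screen cursor by a scaled delta from its current position."""
    x, y = cursor
    dx, dy = delta
    nx = min(max(x + gain * dx, 0), SCREEN_W - 1)
    ny = min(max(y + gain * dy, 0), SCREEN_H - 1)
    return (nx, ny)


def absolute_select(touch, phone_w=360, phone_h=640):
    """Direct, absolute input (Mini Video-style): a tap on the phone's
    scaled-down view maps proportionally to a display coordinate."""
    tx, ty = touch
    return (tx / phone_w * SCREEN_W, ty / phone_h * SCREEN_H)
```

The relative mapping depends on the cursor's current state (and is naturally continuous), while the absolute mapping depends only on the tap position, which is what makes direct techniques suitable for spontaneous, one-shot selections.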

Following an early classification of interaction techniques (Foley, Wallace, & Chan, 1984), we extend this smartphone/display interaction design space with the dimension of orientation-awareness, taking into account the increasing popularity of mobile gesture-based applications. With an orientation-aware technique, the position and/or orientation of the mobile device affects the interaction with the screen. In contrast, orientation-agnostic approaches are not sensitive to device movement.

To learn more about upcoming orientation-aware interaction techniques and to evaluate their suitability for spontaneous interaction with public displays in comparison to established techniques, we selected four recent techniques for an in-depth comparative study. We chose two novel orientation-aware interaction techniques that are gaining increasing attention in industry and academia. These techniques became feasible on smartphones only recently, owing to advances in mobile device technology. Their implementations have not yet been scientifically compared with more established techniques, so their actual benefits in terms of performance and user acceptance remain unproven.

The first orientation-aware technique, the Pointer (Figure 1, top right), is made possible by the gyroscopes integrated into the latest generation of mobile devices. Inspired by a laser pointer, it lets users control the mouse cursor by tilting the device and thus literally pointing towards the desired display location. The second orientation-aware technique, the Smart Lens (Figure 1, bottom right), enables screen interaction via the smartphone's live video. By targeting areas of the remote screen through the built-in camera, users can directly select a specific screen point by touching the mobile device's display. Since this technique works on the device's live video, it inherently offers a zoom feature: moving the device closer to the display zooms in, and moving it away zooms out.
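The laser-pointer metaphor behind the Pointer technique can be sketched as a simple geometric projection: the device's orientation angles (as reported by a gyroscope) define a ray that is intersected with the display plane. The geometry below is a hypothetical simplification, not the chapter's implementation; the viewing distance, pixel density, and function name are assumptions.

```python
import math

# Hypothetical sketch of the Pointer technique: gyroscope-derived yaw
# and pitch (radians) are projected onto the display plane. Assumes the
# user stands at a known distance, facing the display centre.

def pointer_position(yaw, pitch, distance_m=2.0,
                     screen_w=1920, screen_h=1080, px_per_m=1000):
    """Map a pointing direction to a clamped display pixel coordinate.

    yaw: horizontal rotation (positive = right)
    pitch: vertical rotation (positive = up)
    """
    x = screen_w / 2 + math.tan(yaw) * distance_m * px_per_m
    y = screen_h / 2 - math.tan(pitch) * distance_m * px_per_m
    return (min(max(x, 0), screen_w - 1),
            min(max(y, 0), screen_h - 1))
```

In practice such a mapping is typically calibrated once (e.g., the user points at the display centre to zero the angles) and filtered to suppress hand tremor, which is one reason pointing techniques can underperform touch-based ones.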

Figure 1.

The four compared interaction techniques include two indirect techniques, Touchpad (top left) and Pointer (top right), and two direct techniques, Mini Video (bottom left) and Smart Lens (bottom right). While Pointer and Smart Lens are orientation-aware techniques, Touchpad and Mini Video are not sensitive to device movement.

