Studying Natural Interaction in Multimodal, Multi-Surface, Multiuser Scenarios

Carlos Duarte, Andreia Ribeiro, Rafael Nunes
DOI: 10.4018/978-1-4666-4623-0.ch008

Abstract

Current technology has made it possible for natural input systems to reach our homes, businesses, and learning sites. However, even though some of these systems are already commercially available, there is still a pressing need to better understand how people interact with them, given the whole array of intervening contextual factors. This chapter presents two studies of how people interact with systems supporting gesture and speech on different interaction surfaces: one supporting touch, the other pointing. These studies identified the naturally occurring commands for both modalities and both surfaces. Furthermore, the studies show how the surfaces are used and which modalities are employed, depending on factors such as the number of people collaborating on the tasks and the placement of the objects appearing in the system, thus contributing to the future design of such systems.

Background

Gestural interaction is becoming pervasive. It can be found in tablets and smartphones, which offer their users touch-based interfaces supporting direct manipulation and semaphoric gestures (Quek et al., 2002). Microsoft Kinect and other entertainment systems also support deictic and semaphoric gestures. While people interact naturally with each other through gestures, gesture dictionaries are still required for HCI. This has been acknowledged in several works that tried to understand how people interact with computers through gestures (Kurdyukova, Redlin, & André, 2012; Miki, Miyajima, Nishino, Kitaoka, & Takeda, 2008; Wobbrock, Morris, & Wilson, 2009; Yin & Davis, 2010).

While some gestures have become standard for particular actions (e.g., pinch for zooming), there is still a need to characterize the way people perform general actions in a computing environment (Dang, Straub, & André, 2009; Epps, Lichman, & Wu, 2006; Neca & Duarte, 2011). These studies show that people exhibit variability not only in the gestures they choose for each command, but also in how they perform them (e.g., by using different hand postures). This affects the interaction design of applications that want to make use of gestures, as well as the way gesture recognizers need to perform. Two of the major problems identified are: (1) people perform the same gesture for different actions; and (2) people find it very difficult to come up with gestures for actions that cannot be addressed through direct manipulation (e.g., deleting an object in an interactive space without a recycle bin to drop the object into).
