Using Information Retrieval for Interaction with Mobile Devices


Kamer Ali YUKSEL
Copyright: © 2014 | Pages: 17
DOI: 10.4018/978-1-4666-4446-5.ch015

Abstract

Future environments will be sensitive and responsive to the presence of people, supporting them in carrying out their everyday activities, tasks, and rituals in an easy and natural way. Such interactive spaces will use information and communication technologies to bring computation into the physical world and enhance the ordinary activities of their users. The fields of human-computer interaction (HCI) and information retrieval (IR) have both developed innovative techniques to address the challenge of navigating complex information spaces, but their insights have often failed to cross disciplinary borders. Human-computer information retrieval (HCIR) has emerged in academic research and industry practice to bring together research in IR and HCI in order to create new kinds of search systems that depend on continuous human control of the search process. HCIR is the study of information retrieval techniques that bring human intelligence into the search process. This chapter describes search-based interaction techniques using two human-computer information retrieval systems: (1) a speech-based multimedia retrieval system that presents relevant video-podcast (vodcast) footage in response to spontaneous speech and conversations during daily life activities, and (2) a novel shape retrieval technique that allows 3D modeling of indoor/outdoor environments using multi-view sketch input from a mobile device.
Chapter Preview

Introduction

Recent interaction techniques have begun to expose the challenges and problems posed by traditional graphical user interface (GUI) designs. GUI mechanisms such as cursors, windows, icons, menus, and drag-and-drop no longer meet users' expectations, especially in the upcoming age of ubiquitous computing. Novel sensing mechanisms have widened the key-pressing, point-and-click interaction bottleneck and allowed systems to accept a far wider range of input. According to Norman's theory of action, there are seven stages of action with respect to system interaction: forming the goal, forming the intention, specifying an action, executing the action, perceiving the state of the world, interpreting the state of the world, and evaluating the outcome (Norman, 2002).

Ubiquitous computing (ubicomp) is a post-desktop model of human-computer interaction in which information processing has been thoroughly integrated into everyday objects and activities. In the course of ordinary activities, someone “using” ubiquitous computing engages many computational devices and systems simultaneously, and may not necessarily even be aware of doing so. This model is considered an advancement over the older desktop paradigm. More formally, ubiquitous computing is defined as “machines that fit the human environment instead of forcing humans to enter theirs” (York, 2004).

Contemporary human-computer interaction models, whether command-line, menu-driven, or GUI-based, are inappropriate and inadequate for the ubiquitous case. The “computer vanishes into the background” notion of ubicomp raises the problem of communicating to the user which objects embed the potential for action. How can users establish what action they wish the system to perform, control its extent (if it has one), and specify the targets (if any) of that action? This suggests that the “natural” interaction paradigm appropriate to fully robust ubiquitous computing has yet to emerge.

Bellotti et al. have expanded Norman's theory to inform sensing user interfaces within the ubicomp context by focusing on the communicative aspects of interaction and borrowing ideas from social science. In contrast to Norman's theory, their approach highlights communicative, rather than cognitive, aspects of interaction and focuses attention on the joint accomplishments of the user and the system rather than on the user's mental model. According to them, humans and systems must manage and repair their communications, and must be able to establish a shared topic, much as in human-human interaction (HHI), where signals are used to communicate the intention to initiate, availability for communication, or that a listener understands what is being said (Bellotti, 2002).

In this work, we have addressed the challenges exposed in ubiquitous systems when defining what is to be done with the system (Norman's Gulf of Execution), allowing users to effect a meaningful action, control its extent, and possibly specify a target or targets for that action. The proposed method enables users to handle complex operations by identifying and selecting abstract objects for their actions using information retrieval technologies. In HHI, we often describe nonconcrete actions and objects using verbal (e.g., text and speech) and non-verbal (e.g., gestures and sketches) means of communication and let the listener figure out and refine the precise intention. Mistakes and misunderstandings are repaired through mutual feedback, and a shared topic is established between both parties. For that reason, we propose utilizing search mechanisms to define abstract objects and demonstrate the proposed method on two ubicomp usage scenarios. In the first scenario, the system recommends relevant objects (multimedia contents) to the user using automatic speech recognition, as sketched below. In the second, the user creates an augmented reality environment by defining virtual objects through sketch-based input.
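To make the first scenario concrete, the following minimal Python sketch treats a speech-recognition transcript as a free-text query and ranks a toy vodcast catalogue with TF-IDF weighting and cosine similarity. The catalogue, the function names, and the choice of scikit-learn are illustrative assumptions; the chapter's actual system may use a different recognizer, index, and ranking model.

    # Hedged sketch of scenario 1: conversation transcript -> vodcast ranking.
    # The toy catalogue and all names are hypothetical, not the chapter's system.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    vodcasts = {  # title -> textual description, standing in for a real index
        "Cooking pasta basics": "boiling pasta tomato sauce kitchen recipe",
        "Fixing a bike tire": "bicycle tire puncture repair pump tools",
        "Intro to photography": "camera lens exposure aperture light",
    }

    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(vodcasts.values())

    def recommend(transcript, top_k=2):
        """Rank vodcasts against an ASR transcript of spontaneous speech."""
        query_vec = vectorizer.transform([transcript])
        scores = cosine_similarity(query_vec, doc_matrix).ravel()
        ranked = sorted(zip(vodcasts, scores), key=lambda p: p[1], reverse=True)
        return ranked[:top_k]

    # E.g., speech recognized while the user chats in the kitchen:
    print(recommend("let's boil the pasta and make a tomato sauce"))

In a deployed system, the transcript would stream continuously from the recognizer and the index would cover real vodcast metadata rather than a hand-written dictionary.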

In the following section, we summarize the information retrieval measures utilized throughout this work to evaluate the performance of the proposed methodologies on both usage scenarios. We then explain the background and architecture of the methodology proposed for each scenario and present experimental results using the measures mentioned.
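As a hedged illustration of such measures, the short Python sketch below computes precision at k and average precision over a ranked result list. The preview does not enumerate the chapter's exact metric set, so treating these two standard measures as representative is an assumption.

    # Two standard IR evaluation measures; assumed, not confirmed, to be
    # among those the chapter summarizes.
    def precision_at_k(ranked, relevant, k):
        """Fraction of the top-k retrieved items that are relevant."""
        return sum(1 for item in ranked[:k] if item in relevant) / k

    def average_precision(ranked, relevant):
        """Mean precision over the ranks at which relevant items appear."""
        hits, total = 0, 0.0
        for rank, item in enumerate(ranked, start=1):
            if item in relevant:
                hits += 1
                total += hits / rank
        return total / len(relevant) if relevant else 0.0

    ranking = ["v2", "v7", "v1", "v9"]                 # retrieved, best first
    print(precision_at_k(ranking, {"v1", "v2"}, k=2))  # 0.5
    print(average_precision(ranking, {"v1", "v2"}))    # (1/1 + 2/3) / 2 = 0.833...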
