Towards a Taxonomy of Display Styles for Ubiquitous Multimedia


Florian Ledermann
Copyright: © 2009 |Pages: 15
DOI: 10.4018/978-1-60566-046-2.ch063

Abstract

In this chapter, a domain-independent taxonomy of sign functions, rooted in an analysis of physical signs found in public space, is presented. This knowledge is necessary for the construction of future multimedia systems capable of automatically generating complex yet legible graphical responses from an underlying abstract information space, such as a semantic network. The authors take the presence of a sign in the real world as an indication of a demand for the information encoded in that sign, and identify the fundamental types of information needed to fulfill various tasks. For each information type listed in the taxonomy, strategies for rendering the information to the user in digital mobile multimedia systems are discussed.

Introduction

Future mobile and ubiquitous multimedia systems will be an even more integrated part of our everyday reality than is the case today. A digital layer of information will be available in everyday situations and tasks, displayed on mobile devices and blended with the existing contents of the real, physical world. Such an “augmented reality” (Azuma et al., 2001) will put into practice recent developments in the areas of mobile devices, wireless networking, and ubiquitous information spaces, to provide the right information to the right person at the right time.

The envisioned applications for these kinds of systems are manifold; the scenarios we are thinking of are based on a dense, spatially distributed information space which can be browsed by the user either explicitly (by using navigation interfaces provided by hardware or software) or implicitly (by moving through space or changing one’s intentions, triggering changes in the application’s model of the user’s context). Examples of the information stored in such an information space would be historical anecdotes, routes, and wayfinding information for a tourist guide, or road and building information for wayfinding applications. The question of how to encode this information in a suitable and universal way is the subject of ongoing research in the area of semantic modeling (Chen, Perich, Finin, & Joshi, 2004; Reitmayr & Schmalstieg, 2005). For the applications we envision, we will require the information space not only to carry suitable abstract meta-information, but also multimedia content in various forms (images, videos, 3D models, text, sound) that can be rendered to the user on demand.
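Such an information space can be pictured as a graph of spatially anchored nodes carrying both semantic annotations and renderable media. The following is a minimal sketch of that idea; all names (`InfoNode`, `MediaAsset`) and the example entry are illustrative assumptions, not structures defined in the chapter.

```python
from dataclasses import dataclass, field

@dataclass
class MediaAsset:
    # One piece of renderable content; kind is e.g. "image",
    # "video", "3d-model", "text", or "sound".
    kind: str
    uri: str

@dataclass
class InfoNode:
    # One node of the spatially distributed information space:
    # an anchor in physical space, abstract semantic metadata,
    # media to render on demand, and links to related nodes.
    label: str
    position: tuple              # (latitude, longitude) anchor
    metadata: dict               # abstract semantic annotations
    media: list = field(default_factory=list)
    links: list = field(default_factory=list)

# A hypothetical tourist-guide entry: a historical anecdote
# anchored to a building in the city.
anecdote = InfoNode(
    label="Cathedral anecdote",
    position=(48.2085, 16.3731),
    metadata={"type": "historical-anecdote", "language": "en"},
    media=[MediaAsset("text", "content/cathedral_anecdote.txt")],
)
```

A browsing interface, whether explicit or driven by the user's movement, would traverse `links` and decide which of a node's media assets to render in the current situation.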

Besides solving the remaining technical problems of storage, querying, distribution, and display of that information, which are the subject of some of the other chapters in this book, we have to investigate the consequences of such an omnipresent, ubiquitous computing scenario for the user interfaces of future multimedia applications. Up to now, most research applications have been prototypes targeted at a specific technical problem or use case; commercial applications mostly focus on, and present an interface optimized for, a single task (for example, wayfinding). In the mobile and ubiquitous multimedia applications we envision, the user’s task, and therefore the information that should be displayed, cannot be determined in advance, but will be inferred at runtime from various aspects of the user’s spatio-temporal context, selecting information and media content from the underlying information space dynamically. To communicate relevant data to the user, determined by her profile, task, and spatio-temporal context, we have to create legible representations of the abstract data retrieved from the information space. A fundamental problem here is that little applicable systematic knowledge exists about the automatic generation of graphical representations of abstract information.

If we want to take this opportunity to clarify rather than obscure by adding another layer of information, the following questions arise: Can we find ways to render the vast amounts of abstract data potentially available in an understandable, meaningful way, without the possibility of designing each possible response or state of such a system individually? Can we replace a part of the existing signs in the real world, which already lead to “semiotic pollution” (Posner & Schmauks, 1998) in today’s cities, with adaptive displays that deliver the information the user needs or might want to have? Can we create systems that will work across a broad range of users, diverse in age, gender, and cultural and socio-economic background?

A first step towards versatile systems that can display a broad range of context-sensitive information is to get an overview of which types of information could possibly be communicated. Up to now, researchers have focused on single aspects of applications and user interfaces, such as navigation, but to our knowledge there is no comprehensive overview of what kinds of information can generally occur in mobile information systems. In this chapter, we present a study that yields such an overview. The study results in a taxonomy that can be used in various ways.

Key Terms in this Chapter

Augmented Reality: Augmented reality (AR) is a field of research in computer science which tries to blend sensations of the real world with computer-generated content. While most AR applications use computer graphics as their primary output, they are not by definition constrained to visual output; audible or tangible representations could also be used. A widely accepted set of requirements for AR applications is given by Azuma et al. (2001):
• AR applications combine sensations of the real world with virtual content.
• AR applications are interactive in real time.
• AR applications are registered in the three-dimensional space of the real world.

Taxonomy: A taxonomy is a classification of things or concepts, often in a hierarchical manner.

Ubiquitous Computing: The term ubiquitous computing (UbiComp) captures the idea of integrating computers into the environment rather than treating them as distinct objects, which should result in more “natural” forms of interaction with a “smart” environment than current, screen-based user interfaces.

Recently, several mobile AR systems have been realized as research prototypes, using laptop computers or handheld devices as mobile processing units.
