Two Frameworks for the Adaptive Multimodal Presentation of Information

Yacine Bellik, Christophe Jacquet, Cyril Rousseau
DOI: 10.4018/978-1-60566-978-6.ch007

Abstract

Our work aims at developing models and software tools that can intelligently exploit all the modalities available to a system at a given moment in order to communicate information to the user. In this chapter, we present the outcomes of two research projects addressing this problem in two different areas: the first concerns the contextual presentation of information in a “classical” interaction situation, while the second deals with the opportunistic presentation of information in an ambient environment. The first project proposes a conceptual model for the intelligent multimodal presentation of information. This model, called WWHT, is based on four concepts: “What,” “Which,” “How,” and “Then.” The first three concepts cover the design of the initial presentation, while the last concerns its evolution. On the basis of this model, we present the ELOQUENCE software platform for the specification, simulation, and execution of output multimodal systems. The second project deals with the design of multimodal information systems in the framework of ambient intelligence. We propose a ubiquitous information system capable of providing personalized information to mobile users, with a particular focus on multimodal information presentation. The proposed system architecture is based on KUP, an alternative to traditional software architecture models for human-computer interaction. The KUP model takes three logical entities into account: Knowledge, Users, and Presentation devices. It is accompanied by algorithms for dynamically choosing and instantiating interaction modalities. The model and the algorithms have been implemented within a platform called PRIAM (PResentation of Information in AMbient environment), with which we have performed experiments at pseudo-real scale. After comparing the results of the two projects, we define the characteristics of an ideal multimodal output system and discuss some perspectives on the intelligent multimodal presentation of information.
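As an illustration of the four-step decision cycle just described, the following Python sketch walks a piece of information (“What”) through hypothetical “Which,” “How,” and “Then” steps. All names, signatures, and the noise threshold are assumptions made for illustration only; the chapter’s ELOQUENCE platform defines its own concrete formalism for these steps.

```python
# A minimal sketch of the WWHT decision cycle, under hypothetical
# names (Context, Presentation, which/how/then). Not ELOQUENCE's API.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Context:
    """Snapshot of the interaction state (user, system, environment)."""
    ambient_noise_db: float
    user_facing_screen: bool

@dataclass(frozen=True)
class Presentation:
    modality: str    # e.g. "speech" or "text"
    medium: str      # e.g. "loudspeaker" or "screen"
    content: str
    volume: int = 0  # concrete attribute filled in by the "How" step

def which(info: str, ctx: Context) -> Presentation:
    """'Which': select a (modality, medium) pair suited to the context."""
    if ctx.ambient_noise_db < 60.0:  # assumed threshold, for illustration
        return Presentation("speech", "loudspeaker", info)
    return Presentation("text", "screen", info)

def how(p: Presentation, ctx: Context) -> Presentation:
    """'How': instantiate the concrete attributes of the presentation."""
    if p.modality == "speech":
        # Speak louder in a noisier environment.
        return replace(p, volume=min(10, int(ctx.ambient_noise_db / 10)))
    return p

def then(p: Presentation, new_ctx: Context) -> Presentation:
    """'Then': re-evaluate the presentation as the context evolves."""
    if p.modality == "speech" and new_ctx.ambient_noise_db >= 60.0:
        # Too noisy for audio: rebuild the presentation from scratch.
        return how(which(p.content, new_ctx), new_ctx)
    return how(p, new_ctx)

# "What": the information to present, here a simple string.
info = "Your train leaves from platform 3."
ctx = Context(ambient_noise_db=45.0, user_facing_screen=False)
p = how(which(info, ctx), ctx)    # initial design: speech on a loudspeaker
p = then(p, Context(75.0, True))  # context change: switch to on-screen text
print(p)
```

The point of the sketch is the separation of concerns: modality selection (“Which”), attribute instantiation (“How”), and runtime adaptation (“Then”) are distinct steps that each consult the current context, which is what allows the presentation to evolve after it has been issued.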
Chapter Preview

Multimodality was first explored on the input side (user to system). The first multimodal interface was developed in 1980 by Richard Bolt (Bolt, 1980), who introduced the famous “Put That There” paradigm, which demonstrated some of the power of multimodal interaction. Research work on output multimodality is more recent (Elting, 2001-2003). Contextualizing the interaction requires new concepts and new mechanisms to build multimodal presentations that are well adapted to the user, the system, and the environment.
