Hasselt: Rapid Prototyping of Multimodal Interactions with Composite Event-Driven Programming

Fredy Cuenca, Jan Van den Bergh, Kris Luyten, Karin Coninx
Copyright © 2016 | Pages: 20
DOI: 10.4018/IJPOP.2016010102

Abstract

Implementing multimodal interactions with event-driven languages results in a ‘callback soup’: source code littered with a multitude of flags that have to be maintained in a self-consistent manner across different event handlers. Prototyping multimodal interactions adds to this complexity and error sensitivity, since the program code has to be refined iteratively as developers explore different possibilities and solutions. The authors present a declarative language for the rapid prototyping of multimodal interactions: Hasselt permits declaring composite events, i.e., sets of events that are logically related because of the interaction they support, which can easily be bound to dedicated event handlers for separate interactions. The authors' approach describes multimodal interactions at a higher level of abstraction than event-driven languages, which saves developers from the typical ‘callback soup’ and thereby yields a gain in programming efficiency and a reduction in errors when writing event-handling code. They compared Hasselt against a traditional programming language with strong support for events in a study with 12 participants, each with a solid background in software development. When performing equivalent modifications to a multimodal interaction, the use of Hasselt led to higher completion rates, lower completion times, and less code testing than the use of a mainstream event-driven language.

Introduction

Rapid prototyping of multimodal interactive systems consists of implementing, evaluating, and refining different types of multimodal interactions in an iterative fashion. These progressive refinements enable developers to gain a proper understanding of the strengths and weaknesses of different possible solutions, until they arrive at the set of interactions that the final system needs to support. Rapid prototyping must be inexpensive in effort, since the goal is to quickly explore a wide variety of possible types of interaction; this involves building, evaluating, and throwing away many prototypes without remorse (Beaudouin-Lafon, 2003). In the remainder of this article, we use the term developers to indicate developers of multimodal interactive systems who participate in rapid prototyping activities.

It is commonly accepted that the event-driven paradigm is a good match for implementing interactive systems (Lewis & Rieman, 1993). For multimodal interactive systems, however, this paradigm can significantly slow down and raise the cost of the rapid prototyping phase. When implementing multimodal interactions, the use of event-driven languages results in code that is largely dedicated to managing the interaction state. This code is plagued with a multitude of flags that developers have to update in a self-consistent manner across different event handlers (Spano, Cisternino, Paternò, & Fenu, 2013; Kin, Hartmann, DeRose, & Agrawala, 2012; Cuenca, Van den Bergh, Luyten, & Coninx, 2014). The resulting ‘callback soup’ makes the source code of the multimodal system difficult to understand and change, and this complexity has to be faced in each iteration of the prototyping phase.
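
To make the problem concrete, consider a ‘put-that-there’-style interaction: the user says ‘put’, clicks an object, says ‘there’, and clicks a destination. The following is a minimal sketch in plain Java; the speech recognizer that would call onWordRecognized is assumed rather than taken from any concrete library, and the flag names are ours. Note how the interaction state is smeared across two handlers as boolean flags that must be kept mutually consistent.

import java.awt.Point;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.JPanel;

class PutThatTherePanel extends JPanel {
    // Interaction state, smeared across handlers as flags.
    private boolean putHeard = false;     // heard "put"
    private boolean sourcePicked = false; // source object was clicked
    private boolean thereHeard = false;   // heard "there"
    private Point source;

    PutThatTherePanel() {
        addMouseListener(new MouseAdapter() {
            @Override public void mousePressed(MouseEvent e) {
                if (putHeard && !sourcePicked) {
                    source = e.getPoint();            // first click: select the object
                    sourcePicked = true;
                } else if (sourcePicked && thereHeard) {
                    moveObject(source, e.getPoint()); // second click: drop it
                    putHeard = sourcePicked = thereHeard = false; // reset all flags
                }
            }
        });
    }

    // Invoked by a speech recognizer (assumed) whenever a word is recognized.
    void onWordRecognized(String word) {
        switch (word) {
            case "put":   putHeard = true; sourcePicked = false; thereHeard = false; break;
            case "there": if (sourcePicked) thereHeard = true; break;
            default:      break; // each extra word or modality means more cases and flags
        }
    }

    private void moveObject(Point from, Point to) { /* application logic */ }
}

Even in this toy example, changing the interaction, for instance allowing the words and clicks to arrive in a different order, means revisiting every handler and every flag.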

Several (mostly visual) languages have been proposed with the aim of facilitating the creation of multimodal prototypes (Bourguet, 2002; Dragicevic & Fekete, 2004; De Boeck, Vanacken, Raymaekers, & Coninx, 2007; Lawson, Al-Akkad, Vanderdonckt, & Macq, 2009; Navarre, Palanque, Ladry, & Barboni, 2009; König, Rädle, & Reiterer, 2010; Hoste, Dumas, & Signer, 2011; Dumas, Signer, & Lalanne, 2014). These languages allow the developer to describe multimodal interactions at a high level of abstraction, bypassing the manual maintenance of interaction state that event-driven languages require. To a greater or lesser extent, the aforementioned languages have accomplished their main goal of simplifying the creation of multimodal prototypes. For many of them, however, abstraction also means giving up the fine-grained control of dealing with events directly. In other words, these approaches set aside the programming experience of developers and replace it with a formalism that hides details and introduces more abstract terminology. Abstraction by means of visual models may not be the method of choice for many developers, who instead use textual languages, or at least access and modify the code that drives the interactive system. Since familiarity with a language is a factor with a strong, positive influence on programming-language adoption (Meyerovich & Rabkin, 2013), we created a language that saves developers from the ‘callback soup’ problem while building upon familiar concepts and well-known programming practices.
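
Hasselt's concrete syntax is introduced in the body of the article; purely to fix the idea in familiar terms, the following hypothetical Java sketch declares the same ‘put-that-there’ sequence once and binds it to a single handler. Every name in it (CompositeEvent, then, bind, dispatch) is ours, not Hasselt's; the point is that the progress-tracking state machine is derived from the declaration, so no hand-maintained flags remain in application code.

import java.awt.Point;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Hypothetical composite-event combinator, not Hasselt syntax: the event
// sequence is declared once and the progress tracking is handled internally.
final class CompositeEvent {
    private final List<Predicate<Object>> steps = new ArrayList<>();
    private final List<Object> captured = new ArrayList<>();
    private int position = 0;                      // progress through the sequence
    private Consumer<List<Object>> handler = c -> {};

    // Append one step: a predicate the next relevant event must satisfy.
    CompositeEvent then(Predicate<Object> step) {
        steps.add(step);
        return this;
    }

    // Attach the single handler that fires once the whole sequence is seen.
    CompositeEvent bind(Consumer<List<Object>> handler) {
        this.handler = handler;
        return this;
    }

    // Feed every low-level event (speech word, mouse click, ...) in here.
    void dispatch(Object event) {
        if (steps.get(position).test(event)) {
            captured.add(event);
            if (++position == steps.size()) {      // sequence complete: fire once
                handler.accept(new ArrayList<>(captured));
                position = 0;
                captured.clear();
            }
        }
    }

    public static void main(String[] args) {
        CompositeEvent putThatThere = new CompositeEvent()
                .then(e -> "put".equals(e))        // speech: "put"
                .then(e -> e instanceof Point)     // click: source object
                .then(e -> "there".equals(e))      // speech: "there"
                .then(e -> e instanceof Point)     // click: destination
                .bind(events -> System.out.println(
                        "move " + events.get(1) + " to " + events.get(3)));

        // Simulated event stream; in a real system these calls would come
        // from the speech recognizer and the mouse listener.
        putThatThere.dispatch("put");
        putThatThere.dispatch(new Point(10, 20));
        putThatThere.dispatch("there");
        putThatThere.dispatch(new Point(300, 40)); // handler fires here
    }
}

The composite event reads as a single declaration of the interaction, which is the level of abstraction at which Hasselt lets developers work.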
