Introduction
Rapid prototyping of multimodal interactive systems consists of implementing, evaluating, and refining different types of multimodal interactions in an iterative fashion. These progressive refinements enable developers to gain a proper understanding of the strengths and weaknesses of different candidate solutions, until they arrive at the set of interactions that the final system needs to support. Rapid prototyping must be inexpensive in effort, since the goal is to quickly explore a wide variety of possible types of interaction. This involves building, evaluating, and throwing away many prototypes without remorse (Beaudouin-Lafon, 2003). In the remainder of this article we use the term developers to refer to developers of multimodal interactive systems who participate in rapid prototyping activities.
It is commonly accepted that the event-driven paradigm is a good match for implementing interactive systems (Lewis & Rieman, 1993). In the case of multimodal interactive systems, however, this paradigm can significantly increase the cost and slow down the pace of the rapid prototyping phase. When implementing multimodal interactions, event-driven languages yield code that is dedicated in large part to the management of the interaction state. This code is plagued with a multitude of flags that developers must update in a self-consistent manner across different event handlers (Spano, Cisternino, Paternò, & Fenu, 2013; Kin, Hartmann, DeRose, & Agrawala, 2012; Cuenca, Van den Bergh, Luyten, & Coninx, 2014). The resulting ‘callback soup’ makes the source code of a multimodal system difficult to understand and to change, and this complexity must be faced anew in each iteration of the prototyping phase.
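To illustrate the flag-management problem described above, consider the following minimal sketch of an event-driven handler set for a simple multimodal command (point at an object, then say "delete"). All class, method, and flag names here are hypothetical and chosen purely for illustration; they do not come from the article or from any particular toolkit.

```python
class MultimodalPrototype:
    """Illustrative sketch of event-driven multimodal code.

    The interaction state is scattered across flags that every
    event handler must keep mutually consistent -- the root of
    the 'callback soup' problem.
    """

    def __init__(self):
        self.pointing_active = False   # is the pointer currently down?
        self.pointed_object = None     # object selected by pointing
        self.awaiting_speech = False   # may a spoken command still fuse?

    def on_pointer_down(self, obj):
        self.pointing_active = True
        self.pointed_object = obj
        # Forgetting to set this flag here would silently break fusion
        # with the speech handler below.
        self.awaiting_speech = True

    def on_pointer_up(self):
        self.pointing_active = False
        # awaiting_speech deliberately stays True, so a spoken command
        # arriving shortly after the pointer lifts is still accepted.

    def on_speech(self, utterance):
        if utterance == "delete" and self.awaiting_speech:
            deleted, self.pointed_object = self.pointed_object, None
            self.awaiting_speech = False
            return f"deleted {deleted}"
        return None  # speech without a preceding pointing act is ignored
```

Even in this toy example, the state of a single two-step interaction is spread over three flags and three handlers; adding a second multimodal command multiplies the flag combinations each handler must account for.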
Several (mostly visual) languages have been proposed with the aim of facilitating the creation of multimodal prototypes (Bourguet, 2002; Dragicevic & Fekete, 2004; De Boeck, Vanacken, Raymaekers, & Coninx, 2007; Lawson, Al-Akkad, Vanderdonckt, & Macq, 2009; Navarre, Palanque, Ladry, & Barboni, 2009; König, Rädle, & Reiterer, 2010; Hoste, Dumas, & Signer, 2011; Dumas, Signer, & Lalanne, 2014). These languages allow the developer to describe multimodal interactions at a high level of abstraction, bypassing the need to manually maintain the interaction state, as is required with event-driven languages. To a greater or lesser extent, the aforementioned languages have accomplished their main goal of simplifying the creation of multimodal prototypes. For many of them, however, abstraction also means giving up the fine-grained control afforded by dealing with events directly. In other words, these approaches set aside the programming experience of developers and replace it with a formalism that hides details and introduces a more abstract terminology. Abstraction by means of visual models may not be the method of choice for many developers, who instead use textual languages, or at least access and modify the code that drives the interactive system. Since familiarity with a language is a factor with a strong, positive influence on programming language adoption (Meyerovich & Rabkin, 2013), we created a language that saves developers from the ‘callback soup’ problem while building upon familiar concepts and well-known programming practices.