1. Introduction
The Oxford Advanced Learner's Dictionary defines "interaction" as the reciprocal action or influence of two objects: a kind of action that occurs when two or more objects have an effect upon one another. The idea of a two-way effect is essential to the concept of interaction, as opposed to a one-way causal effect. A familiar example is the feedback loop that arises when operating a machine such as a vehicle: by steering, the driver influences the vehicle's position on the road, and through visual observation information about that position returns to the driver.
Multimodal interaction in HCI focuses on the essence of interaction by which users carry out tasks on an interactive system through various modalities, such as visual and/or aural tools or elements; the system then provides feedback by representing the results of those tasks haptically, visually, and/or aurally. Multi-sensory interaction can also be referred to as "multimodal interaction", which can be inferred from Dumas, Lalanne, and Oviatt (2009) and Sarter (2002) as equipping users with a choice of multiple modalities to interact with a system that interprets and reacts to inputs from more than one sensory and interaction channel, be it aural, gestural, gaze, facial expression, body movement, touch, and so on. Oviatt (2003) explained that "Multimodal interfaces process two or more combined user input modes (such as speech, pen, touch, manual gesture, gaze, and head and body movements) in a coordinated manner with multimedia system output. They are a new class of interfaces that aim to recognize naturally occurring forms of human language and behaviour, and which incorporate one or more recognition-based technologies (e.g. speech, pen, vision)". Reeves et al. (2004) explained that the two main aims of multimodal interaction are to achieve an interaction closely similar or identical to the natural human-human interaction style, and to increase the interaction's robustness through the use of redundant or complementary information.
Visual interaction is the most common and most important modality of interaction in HCI, and its significance cannot be overemphasized. For instance, operating the Microsoft Windows calculator with a mouse is surely very user-friendly and easy. Now try the same operation with your eyes closed: only then would you begin to comprehend how frustratingly difficult computational life can get without visual interaction, even though this task is not so different from dialling a touch-tone phone, which most of us can complete comfortably with both eyes closed.
Haptic/touch is the most vital and dominant interaction involved in most complementary massage therapies that promote relaxation and stress relief. These complementary massage therapies, which include acupuncture, lomi-lomi, reflexology, and so on, are mostly haptic-dominated. Several studies (Okere, Sulaiman, Awang, & Foong, 2014a; Sherman, Dixon, Thompson, & Cherkin, 2006) have highlighted the dominance and significance of haptic interaction in these therapies. But for the therapy in question, reflexology, is haptic interaction the only modality involved that influences the therapeutic effects users perceive? The literature has paid little or no attention to this question. This paper therefore seeks to identify the visual and aural interactive elements of the therapy that influence the relaxation and stress relief users perceive.
The remainder of this paper is organized as follows. Section 2 presents the literature review, covering the relevant domains, their significance, relevance, and application. Section 3 presents the study, method, analysis, and results. This is followed by the discussion, conclusion, and future work.