Advances in computational and device technologies, combined with the commercial success and acceptance of 3D, haptic, and various other media presentation devices, have increased commercial interest in engaging additional senses within the multimedia experience. This integration is leading to a paradigm shift away from traditionally defined multimedia systems and towards more interactive Multiple Sensorial Media (MulSeMedia) systems.
Multiple Sensorial Media Advances and Applications: New Developments in MulSeMedia provides a comprehensive compilation of knowledge covering state-of-the-art developments, research, and current innovative activities in MulSeMedia. This book focuses on the importance of Multiple Sensorial Media in media design, with an emphasis on applicability to real-world integration, and provides a broad perspective on the future of the technology across a variety of cohesive topic areas.
The many academic areas covered in this publication include, but are not limited to:
- 3D Environments
- FIVIS Simulation Environment
- Fluid Dynamics Simulation
- Haptic Rendering
- Multi-Sensory Entertainment
- Multimodal Data
- Non-Visual Programming
- Olfactory Display
- Virtual Reality
Reviews and Testimonials
We believe we have exciting, high quality contributions from a wide variety of researchers involved in the area – mulsemedia is happening in all corners of the world, and it is happening now.
– George Ghinea, Brunel University, UK; Frederic Andres, National Institute of Informatics, Japan; and Stephen Gulliver, University of Reading, UK
The term ‘multimedia’ is widely used; however, formal definitions of ‘multimedia’ vary significantly. In the fields of art and education, multimedia is the act of ‘using more than one medium of expression or communication’ (Oxford Dictionary Online). In the domain of computing it is the ‘incorporating of audio and video, especially interactively’ (Oxford Dictionary Online). The problem is that, as well as conflicting with each other, such definitions provide limited distinction from traditional methods of information dissemination.
Multimedia involves the issues of media and content, and relates to the use of a combination of different content forms, yet it is unclear how ‘multimedia’, a term coined in 1966 by Bob Goldstein, is really distinct from many other communication methods. Traditional communication methods, such as theatre, a lecture or even an open debate, involve the use of multiple forms of communication, e.g. the inclusion of music, drama, speech, text, demonstration, non-verbal expression, etc. Some claim that ‘multimedia’ is distinct because it does not relate to traditional forms of printed or hand-produced material, yet most modern newspapers include text and still images, which taken literally is also the use of multiple content forms. It may be argued that multimedia must be multimodal, i.e. include multiple communication modes; however, this argument is hard to justify when multiple content forms (e.g. video and text) can be contained in a single file and will be perceived using the same user senses.
The conflict in definitions raises a concerning question: what value does limited-sensory multimedia provide? This overly dismissive question is clearly without merit, since research under the banner of ‘multimedia’ has brought significant progress and development. In the creative industries, ‘multimedia’ development has supported the creation of new art forms and the instant dissemination of news, social comment, art and media. Digital video technologies now monopolise the dissemination of television and have changed the future of video and film distribution and business models. In the area of commerce, ‘multimedia’ has supported globalisation and customer interaction, and has changed work practices via the increased use of remote and pervasive working. In education, ‘multimedia’ has supported the visualisation and teaching of multidimensional teaching material and the development of remote computer-based training, which increases both the flexibility and the accessibility of educational structures. ‘Multimedia’ interfaces are commonplace, and have been introduced into all areas of our lives - from the latest smartphone in your pocket to the self-checkout in your local supermarket. The increasing dependence on online and automated services means that the business of the future will depend significantly on user interaction via computer-based interactive and ‘multimedia’ systems. Although it is clear that multimedia has provided considerable benefits to mankind, the common range of sensory interaction with users is surprisingly limited.
Human senses are the physiological capacities that provide input for perception. Traditionally humans were defined as having five senses (i.e. sight, hearing, taste, smell and touch); however, increased research in the field of perception has extended this list to include senses such as balance, acceleration, temperature, pain, proprioception, etc. In 1955 Morton Heilig wrote a paper entitled ‘The Cinema of the Future’, in which he detailed his vision of multi-sensory theatre. In 1962 he built a prototype of his vision, dubbed the Sensorama, which was able to display wide-angled stereoscopic 3D images, provide body tilting, supply stereo sound, and also provide users with wind and aroma feedback during the film. Heilig was unable to obtain financial backing, so work on the Sensorama was halted. Multimedia applications and research have subsequently focused primarily on two human senses, i.e. the audio and visual senses. This consistent reliance on visual and audio media has diverted investment and interest away from the complex interplay between physical senses and has confined multisensory research to a narrow range of high-cost research domains.
Interestingly, advances in computational and device technologies, combined with the commercial success and acceptance of 3D and haptic media presentation devices (e.g. 3D cinema and the Nintendo Wii), have increased commercial interest in engaging additional senses within the multimedia experience. This integration is leading to a paradigm shift away from traditionally defined ‘multimedia’ systems towards more interactive mulsemedia (Multiple Sensorial Media) systems. Many would define mulsemedia systems as ‘multi-modal’ systems; however, this is not accurate: not all multimodal systems are by definition multi-sensory, yet mulsemedia must be multimodal by definition.
Mulsemedia is, therefore, a new term which recognizes that traditional multimedia or multimodal applications need to be thought of in a new light in the 21st century. One might even argue that for the vast majority of cases, the term ‘multimedia’ is, in actual fact, a misnomer, since only two types of media (video and audio) are actually used; bi-media might be a more accurate reflection of the situation. A similar comment applies to multi-modal applications, as most employ a binary combination of visual, audio or haptic modalities; again, bi-modal is probably more suitable in these cases.
We are, of course, interested in mulsemedia, and, with the publication of this book, wish to highlight the variety of work that is done in the area. We believe we have exciting, high quality contributions from a wide variety of researchers involved in the area – mulsemedia is happening in all corners of the world, and it is happening now.
This book is structured into four key areas, which: 1. introduce the need for and benefits of multisensory systems; 2. expand upon the importance of multisensory systems when interacting with a range of individuals, giving specific consideration to individual differences, physical perception and user culture; 3. introduce research concerning haptic and olfactory mulsemedia interfaces; and 4. investigate a range of practical mulsemedia applications, i.e. entertainment, education, research, semantic interpretation and media modelling. The structure of sections and book chapters is defined below:
1. Multiple senses and consideration of presence.
• Chapter 1 - Multisensory Presence in Virtual Reality: Possibilities & Limitations.
• Chapter 2 - Multiple Sensorial Media and Presence in 3D Environments.
2. Individual difference, perception and culture.
• Chapter 3 - Appreciating Individual Differences: Exposure time requirements in Virtual Space.
• Chapter 4 - Non-Visual Programming, Perceptual Culture and Mulsemedia: Case studies.
• Chapter 5 - Multiculturality and Multimodal Languages.
3. Mulsemedia interfaces.
• Chapter 6 - Haptic rendering of HTML components and 2D maps included in web pages.
• Chapter 7 - Olfactory Display Using Solenoid Valves and Fluid Dynamics Simulation.
4. Mulsemedia applications.
• Entertainment: Chapter 8 - Entertainment Media Arts with Multi-Sensory Interaction.
• Education: Chapter 9 - Thinking Head Mulsemedia – A Storytelling Environment for Embodied Language Learning.
• Research: Chapter 10 - User Perception of Media Content Association in Olfaction-Enhanced Multimedia.
• Research: Chapter 11 - Multimedia Sensory Cue Processing in the FIVIS Simulation Environment.
• Semantics: Chapter 12 - Cross-modal Semantic-associative Labelling, Indexing and Retrieval of Multimodal Data.
• Modelling: Chapter 13 - The MultiPlasticity of New Media.
It is essential that the reader understands the reasons we push for the development of mulsemedia technologies. Mulsemedia provides enormous potential for user immersion, an essential element of presence, which is critical to the user’s perception of enjoyment and to memory. The first section in this book introduces the reader to the critical need to include multiple senses in media design, i.e. the development of mulsemedia systems, when trying to achieve user presence.
Human perception, as described in the introduction, is inherently multisensory, involving visual, auditory, tactile, olfactory, gustatory, nociceptive (i.e. pain) and other inputs. With this multisensory bombardment it is unsurprising that the vast majority of life’s most enjoyable experiences involve the stimulation of several senses simultaneously, e.g. sight and touch, sound and smell, etc. A virtual rose may seem visually appealing; however, it is little substitute for the emotional and olfactory impact of a full bunch of flowers. The true multisensory nature of life has been considered by only a small number of people, mostly in the amusement industry, with the majority of computer and virtual reality (VR) applications considering, including and/or stimulating only one, or at most two, senses. Typically vision (sight), audition (sound) and, on occasion, haptics (touch) have been included in the development of interactive environments, yet focus on these senses does not reflect the real interplay of multisensory inputs from the real world. The research conducted to date has clearly shown that increasing the number of senses stimulated in a VR simulator dramatically enhances a user’s sense of immersion, and therefore the development of user presence. Immersion is defined as the subjective impression that one is participating in a comprehensive, realistic experience, and is seen as a necessary condition for the creation of ‘presence’, which is a psychologically emergent property of immersion, is directly related to user enjoyment, and influences memory formation during the encounter/experience. Research shows that the greater the quantity of sensory information provided by a virtual environment, the higher the sense of presence, and that as more sensory modalities are stimulated, presence similarly increases.
It can therefore be expected that mulsemedia, by engaging a range of senses, enhances presence and subsequently enjoyment and/or user information assimilation.
Given that mulsemedia technology has recently gained increasing levels of commercial success, due to improved functionality and reduced costs associated with VR systems, the likelihood is that truly multisensory VR will be with us soon.
It is important to note that there are both theoretical and practical limitations to the stimulation of certain senses in VR. This is well demonstrated by Heilig’s 1962 Sensorama experience, which showed the huge potential of multisensory systems, yet ran out of funding due to limited appreciation of its commercial application.
Chapter 1, entitled ‘Multisensory Presence in Virtual Reality: Possibilities and Limitations’ by Alberto Gallace, Mary K. Ngo, John Sulaitis, and Charles Spence, highlights some of the most exciting potential applications associated with engaging more of a user’s senses in a simulated environment. They review the key technical challenges associated with stimulating multiple senses within a VR setting and focus on the particular problems associated with the stimulation of traditional mulsemedia senses (i.e. touch, smell, and taste). Gallace et al. highlight the problems associated with the limited bandwidth of human sensory perception and the psychological costs associated with users having to divide their attention between multiple sensory modalities simultaneously; a negative implication of information overload. Finally, Gallace et al. discuss how the findings provided by extant research in the field of cognitive neuroscience might help to overcome some of the cognitive and technological limitations affecting the development of multisensory VR systems.
Chapter 2, entitled “Multiple Sensorial Media and Presence in 3D Environments” by Helen Farley and Caroline Steel, looks at the characteristics of the mulsemedia experience that facilitate user immersion within three-dimensional virtual environments, including discussion of Multi-user Virtual Environments (MUVEs), such as Second Life, and Massively Multiplayer Online Role-playing Games (MMORPGs), such as World of Warcraft. Evidence, extracted from the extensive literature pertaining to gaming and/or work surrounding user interfaces enabling haptic feedback, tactile precision and the engagement of other sensory modalities, indicates that although multiple factors impede and facilitate immersion, the practical ability to interact with and engage multiple senses is clearly one of the key factors. Farley and Steel begin chapter 2 by unravelling the relationship between ‘immersion’, with a special emphasis on ‘sensory immersion’, and ‘presence’ in relation to mulsemedia systems. In addition, they look at the nature of the sensory stimulation provided by mulsemedia systems in relation to the amount of immersion it engenders and the value it provides; e.g. a sound that is directional will have a positive effect on immersion, and sensory feedback that is not conflicting will further enhance the immersive experience. Farley and Steel conclude by discussing some of the challenges that mulsemedia systems will face in order to deliver on their considerable promise.
2. Individual difference, perception and culture
Mulsemedia relates to any multi-sensory interactive user experience, and is commonly defined as requiring a combination of at least one continuous (e.g. sound or video) and one discrete (e.g. text or images) medium. Mulsemedia facilitates an infotainment duality, meaning that it not only transfers information to the user, but also provides the user with a subjective experience (relating to preferences, self-assessment, satisfaction and/or enjoyment). The ability to understand mulsemedia content, and the perception of enjoyment derived from that content, is ultimately dependent on user perception, pre-knowledge and individual differences.
There has been an abundance of research dealing with how perceptual (i.e. physical and cultural), pre-knowledge (i.e. education and experience), and individual user differences (i.e. physical and cognitive) impact our perception and use of computer-based technologies. It has long been one of the central tenets of our research philosophy that ‘multimedia’ should be user-centric, allowing personalised interaction as a result of an understanding of user needs. Mulsemedia technology, unsurprisingly, is not an exception; more than ever it requires user focus to manage the interplay of sensory inputs. If users’ individual characteristics are not taken into account in the development of mulsemedia systems and applications, and if their perception of media quality (i.e. the user experience) is ignored, then we run the risk of designing niche mulsemedia applications which, although intrinsically novel and alluring, lack wider user appeal. The second section in this book introduces the reader to the need to consider user differences in the development, use and perception of mulsemedia systems. Physical ability, cognitive style, age, gender, culture, systems experience, pre-knowledge, etc. all play an essential part in how the user perceives mulsemedia, and must be understood to ensure relevant perception of content.
Accordingly, chapter 3, entitled “Appreciating Individual Differences: Exposure Time Requirements in Virtual Space” by Markos Kyritsis and Stephen Gulliver, looks at the experience of learning the spatial layout of environments. Although focusing on single-media perception, the research examines the impact of gender, orientation skill, cognitive style, system knowledge and environmental knowledge when users are learning a virtual space. The chapter makes a strong case for including individual user differences in the development of mulsemedia, as the results show that user characteristics significantly influence the training time required to ensure effective virtual-environment spatial knowledge acquisition.
Mulsemedia holds considerable potential for people with physical or learning limitations. By routing information away from limited sensory inputs (e.g. audio captions for dyslexic users interacting with text-heavy media), mulsemedia provides assistive help that allows easier user interaction. Although enabling access, the interplay of, and reliance on, more senses also risks the loss of information. Chapter 4, entitled “Non-Visual Programming, Perceptual Culture and Mulsemedia: Case Studies of Five Blind Computer Programmers” by Simon Hayhoe, describes an investigation into the premise that blind programmers and web developers can create modern Graphical User Interfaces (GUIs) through perceptions of mulsemedia, and into whether perceptual culture has a role in this understanding. Since mulsemedia is inherently multi-modal, it comes as no surprise that research has been undertaken addressing the use of mulsemedia for people who have limited perception in one (or more) senses. To this end, Hayhoe explores whether mulsemedia can inform accessible interface design, and probes the boundaries of accessing computer interfaces in a mulsemedia manner (in this particular case, through non-visual perceptions and memories). The chapter shows that programmers who had been introduced to, and educated using, a range of visual, audio and/or tactile devices could adapt to produce code with GUIs, but programmers who were educated using only tactile and audio devices preferred to shun visual references in their work.
The perception of mulsemedia is critical to its ultimate acceptance; however, people in different countries and cultures, using different languages and semiotic perceptual structures, will ultimately understand information differently. To consider the impact of cultural aspects we have included chapter 5, entitled “Multiculturality and Multimodal Languages” by Maria Chiara Caschera, Arianna D’Ulizia, Fernando Ferri and Patrizia Grifoni, which recognises that the way in which people communicate changes according to their culture. This chapter focuses on this challenging mulsemedia topic, and proposes a grammatical approach for the representation of multicultural issues in multimodal languages. The approach is based on a grammar that is able to produce a set of structured sentences, composed of gestural, vocal, audio and graphical symbols, along with the meaning that these symbols have in the different cultures.
3. Mulsemedia interfaces
In the field of computing, an interface is a point of interaction between information components, and its design involves consideration of the relevant hardware and software technologies. The inclusion of multi-sensory input has been shown to be essential to increased user immersion and presence. Moreover, experience of, and ability to interact with, the sensory interface has been shown to be an individual difference that impacts user experience. It is therefore essential that MulSeMedia interaction is considered in this book. Section three relates to research concerning the proposed use of haptic (touch) and olfactory (smell) interfaces.
Haptic technology, or haptics, is a tactile feedback technology that takes advantage of the user's sense of touch. By applying force feedback, vibrations and/or motion to interaction hardware, the user gets a physical feeling of virtual object presence. Haptics has been applied in fields including virtual reality, medicine and robotics, allowing for the sensation of touch; this is especially useful to those who rely on it due to their remote location (e.g. a bomb disposal team) or because of physical limitation (e.g. blind or partially sighted users).
Chapter 6, entitled “Haptic rendering of HTML components and 2D maps included in web pages” by Nikolaos Kaklanis, Konstantinos Moustakas and Dimitrios Tzovaras, describes an interaction technique in which web pages are automatically parsed in order to generate a corresponding 3D virtual environment with haptic feedback. A web page rendering engine was developed to automatically create 3D scenes composed of “hapgets” (haptically-enhanced widgets): three-dimensional widgets that provide semantic behaviour analogous to that of the original HTML components, but enhanced with haptic feedback. For example, 2D maps included in the original web page are represented by a corresponding multimodal (haptic-aural) map, automatically generated from the original. The proposed interaction technique enables haptic navigation through the internet, as well as haptic exploration of conventional 2D online maps. This interface technology offers great potential for visually impaired users, and demonstrates how individual and physical differences can be supported via access to information that is currently totally inaccessible using existing assistive technologies.
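The parsing stage of such an engine can be sketched as follows. This is a hypothetical illustration only: the profile names, fields and class structure below are assumptions for exposition, not the authors' actual API. The idea is simply that the HTML is walked and each recognised component is mapped to a widget description carrying a haptic profile.

```python
from html.parser import HTMLParser

# Illustrative (assumed) mapping from HTML tags to haptic rendering
# profiles; a real hapget engine would carry far richer geometry.
HAPTIC_PROFILES = {
    "a":      {"shape": "raised_bar",   "stiffness": 0.3},
    "button": {"shape": "raised_box",   "stiffness": 0.8},
    "input":  {"shape": "recessed_box", "stiffness": 0.5},
    "img":    {"shape": "relief_map",   "stiffness": 0.4},
}

class HapgetExtractor(HTMLParser):
    """Walk a page and emit one widget description per recognised tag."""
    def __init__(self):
        super().__init__()
        self.hapgets = []

    def handle_starttag(self, tag, attrs):
        profile = HAPTIC_PROFILES.get(tag)
        if profile:
            # Record the component plus its haptic profile for the 3D scene.
            self.hapgets.append({"tag": tag, "attrs": dict(attrs), **profile})

parser = HapgetExtractor()
parser.feed('<a href="/home">Home</a><input type="text">')
# parser.hapgets now holds one hapget description per recognised component
```

A 3D scene generator would then place one haptically rendered widget per entry, preserving the page's layout and semantic behaviour.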
The olfactory system is the sensory system used for olfaction, i.e. the sense of smell. The sense of smell is linked directly to the limbic system of the brain, uncontrollably linking the sensation of certain smells to memory and emotion. Despite regular attempts to incorporate smell into virtual reality simulations, olfactory interfaces and systems remain limited. Olfaction technologies, due to recent progress in olfactory displays, are becoming available, and their inclusion in Multiple Sensorial Media is therefore fast becoming a commercially viable possibility. Interestingly, one of the important functions of an olfactory display is to blend multiple odour components to create a variety of odours.
Chapter 7, entitled “Olfactory Display Using Solenoid Valves and Fluid Dynamics Simulation” by Takamichi Nakamoto, Hiroshi Ishida and Haruka Matsukura, describes an olfactory display that blends up to 32 odour components using solenoid valves. High-speed switching of a solenoid valve enables the efficient blending of a range of odours almost instantaneously, even though each valve has only two states, ON and OFF. Since the developed system is compact and easy to use, it has so far been used to scent a movie, an animation and a game. At present the multisensory media content developer must manually adjust the concentration sequence, since the concentration varies from place to place; Nakamoto et al. discuss how manually determined concentration sequences are inaccurate and time-consuming to produce. Because the spread of odour through space is very complicated, simple isotropic diffusion from the odour source is not a valid model; the authors therefore use a computational fluid dynamics (CFD) simulation. Since the simulated odour distribution resembles the distribution actually measured in a real room, the CFD simulation enabled Nakamoto et al. to reproduce the spatial variation in odour intensity that a user would experience in the real world. In the study reported in this chapter, most users successfully perceived the intended change in odour intensity when they watched a scented movie in which they approached an odour source hindered by an obstacle.
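The switching principle behind such a display can be illustrated with a simple duty-cycle model. This is a hypothetical sketch, not code from the chapter; the function and component names are illustrative. Because each valve is binary, the time-averaged concentration of a component is proportional to the fraction of each short switching period for which its valve is held open.

```python
def valve_schedule(target_mix, period_ms=100.0):
    """Compute per-valve ON times for one switching period.

    target_mix maps an odour component name to a duty cycle in [0, 1];
    each solenoid valve is only ON or OFF, so a blend is achieved by
    opening each valve for the corresponding fraction of the period.
    Returns a dict of component name -> ON time in milliseconds.
    """
    for name, duty in target_mix.items():
        if not 0.0 <= duty <= 1.0:
            raise ValueError(f"duty cycle for {name!r} out of range")
    return {name: duty * period_ms for name, duty in target_mix.items()}

# Example: a hypothetical three-component blend.
schedule = valve_schedule({"pine": 0.5, "earth": 0.3, "citrus": 0.1})
# The 'pine' valve stays open 50 ms out of every 100 ms period.
```

With a switching period far shorter than the nose's temporal resolution, the user perceives the blend rather than the individual pulses, which is what makes two-state valves sufficient for continuous-seeming concentration control.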
4. Mulsemedia Applications
The editors have defined a number of important application areas where consideration of mulsemedia systems holds considerable potential. Although far from comprehensive, the following list shows areas where mulsemedia applications are practically being developed: context-aware systems; adaptive and personalized media delivery; distributed communication and use in risk environments; sensory integration; cognitive retraining; virtual reality immersion; enhanced media; quality of service and quality of perception considerations; emotional interaction and consideration; user emotional modelling; e-learning and education; and interactive e-commerce. In the fourth and final section of our text we highlight a number of application areas, i.e. areas where the use of multisensory systems is practically impacting user experience. Although this section covers only a few short chapters, these application areas offer a glimpse of the potential that mulsemedia systems hold.
The entertainment industries have always been very interested in the area of mulsemedia; however, device availability and cost have historically restricted common acceptance. With the rare exception of specialist entertainment systems at arcades or purpose-built theatres, mulsemedia entertainment was limited to the use of visual and audio media. With the introduction of home surround sound, the Wii, the Wii Fit board and the iPhone, interactive visual, audio and haptic devices have become commercially available, thus fuelling consumer desire for mulsemedia interactive entertainment. Compounded by the push for, and standardisation of, 3D television and the ubiquitous introduction of touch-screen technologies, entertainment is seen by the editors as a driving force in the development of new mulsemedia technologies.
In Chapter 8, Masataka Imura and Shunsuke Yoshimoto address the topic of mulsemedia-based entertainment. Recognising the allure of combining different sensorial experiences in the realm of entertainment, the authors introduce and discuss three mulsemedia applications for representing and improving the reality of virtual user worlds: (1) Haptic Canvas, an entertainment system with a dilatant-fluid-based haptic device; (2) Fragra, an entertainment system with a hand-mounted olfaction display; and (3) Invisible, an entertainment system that represents the existence of virtual creatures through indirect information.
The use of ‘multimedia’ technology in schools has been widely researched. Results are often mixed in focus, highlighting the benefit of sensory reinforcement, yet also stressing concerns about divided attention and reduced student focus. Despite these concerns, multimedia systems have facilitated computer-based training, which increases both the flexibility and the accessibility of educational structures. Mulsemedia systems offer an extension of interactive and sensory training that has only recently become available to all.
Chapter 9, entitled “Thinking Head MulSeMedia – A Storytelling Environment for Embodied Language Learning”, broadly focuses on the uses of mulsemedia in education. Recognising that traditional tutoring systems use only one or at most two modalities, which opens up new possibilities to be exploited by mulsemedia application and content designers alike, the chapter authors (Tom A. F. Anderson, Zhi-Hong Chen, Yean-Fu Wen, Marissa Milne, Adham Atyabi, Kenneth Treharne, Takeshi Matsumoto, Xi-Bin Jia, Martin Luerssen, Trent Lewis, Richard Leibbrandt and David M. W. Powers) present the reader with the Thinking Head: a conversational agent that plays the role of a tutor/teacher in an individual learning environment employing multi-sensorial interaction.
Building on previous work by Tom Anderson, which explored the use of speech synthesis/recognition as an added interaction modality, the authors incorporate these within the Thinking Head. Moreover, they attempt to bridge the virtual-physical divide through physical manipulation achieved via visual input recognition. It is noteworthy that the authors achieve this by employing an inherently affordable and widely available device - a webcam. Whilst the webcam is not traditionally thought of as a mulsemedia device, the work described in the chapter shows that any single-media, single-modality device can become a mulsemedia input if employed in a multi-sensorial context. The world of mulsemedia devices is therefore much larger than one would think at first sight (or smell, or touch, …).
Current Virtual Reality (VR) and Augmented Reality (AR) environments have advanced visual and audio outputs, but the use of smell is either very limited or completely absent. Adding olfactory stimulation to current virtual environments should greatly enhance the sense of presence, or ‘realness’, of the environment - or will it? The first recorded attempt at combining artificially generated smell with audiovisual media occurred in 1906, when the audience of the Rose Bowl football game was sprayed with the scent of roses. In 1943, Hans Laube released selected odours at specific times and for specific durations, which led to the development of a 35-minute ‘smell-o-drama’ movie called Mein Traum, in which 35 different odours were released to accompany the drama presentation. Despite continued efforts to introduce smell into mulsemedia entertainment experiences, limited research has been conducted concerning the actual perceptual benefit.
Chapter 10, entitled “User Perception of Media Content Association in Olfaction-Enhanced Multimedia” by Gheorghita Ghinea and Oluwakemi Adeoye, empirically asks the question: does the correct association of scent and content enhance the user experience of multimedia applications? Ghinea and Adeoye present the results of an empirical study that varied olfactory media content association by combining video excerpts with semantically related and unrelated scents, and subsequently measured the impact that this variation had on participants’ perception of the olfaction-enhanced multimedia experience. Results show a significant difference in opinions with respect to the ability of olfactory media to heighten the sense of reality. Interestingly, the use of unrelated scents was not found to significantly affect the degree to which participants found the olfactory media annoying.
In chapter 11, entitled “Multimedia Sensory Cue Processing in the FIVIS Simulation Environment”, the authors (Rainer Herpers, David Scherfgen, Michael Kutz, Jens Bongartz, Ulrich Hartmann, Oliver Schulzyk, Sandra Boronas, Timur Saitov, Holger Steiner and Dietmar Reinert) expand on the research being carried out in the FIVIS project at the Bonn-Rhein-Sieg University of Applied Sciences in Sankt Augustin, Germany, where an immersive bicycle simulator has been developed. It allows researchers to simulate dangerous traffic situations within a safe, controlled laboratory environment. The system has been successfully used for the road safety education of school children, as well as to conduct multimedia perception research experiments. The mulsemedia simulator features a bicycle equipped with steering and pedalling sensors, an electrical motor brake, a panoramic back-projection-based visualization system, optical tracking cameras and an optional motion platform. The FIVIS simulation system has proven to be extensible (due to the use of the Python scripting language and XML) and is well suited for perceptual and stress-related research. The visualization system’s projection screens occupy a high percentage of the rider’s visual field, which, together with the adapted 3D rendering process, contributes to a high degree of immersion.
Semantics and Modelling
Semantics is the study of meaning. Modelling is the act of representing something. Both are essential to ensure relevant capture of mulsemedia content and context-relevant user perception. To look at some of the application domains relating to media semantics and modelling, we have included two chapters that examine the capture and representation of media content.
Chapter 12, entitled “Cross-modal Semantic-associative Labelling, Indexing and Retrieval of Multimodal Data”, by Meng Zhu and Atta Badii, looks at how digitised media and information are typically represented using different modalities and then distributed through the relevant sensory channels. The use of, and interaction with, such a huge amount of data is therefore highly dependent on effective and efficient cross-modal labelling, indexing and retrieval of information in real time. Zhu and Badii focus on combining the primary and collateral modalities of information resources in an intelligent and effective way, in order to provide better media understanding, classification, labelling and retrieval. Image and text are the two modalities used by the authors; however, the approach is applicable to a wider range of media forms. The authors propose, and subsequently implement, a novel framework for semantic-based collaterally cued image labelling that automatically assigns linguistic keywords to regions of interest in an image. A visual vocabulary was constructed from manually labelled image segments, and the authors use Euclidean distance and a Gaussian distribution to map the low-level region-based image features to the high-level visual concepts defined in the visual vocabulary. Both collateral content and context knowledge were extracted from the collateral textual modality to bias the mapping process. A semantic-based high-level image feature vector model was constructed from the labelling results; in terms of its capability for combining both the perceptual and conceptual similarity of image content, image retrieval using this feature vector model appears to outperform both content-based and text-based approaches.
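The core mapping step described above can be sketched as follows. This is a toy illustration under stated assumptions, not the authors' actual implementation: the vocabulary, the two-dimensional features, the shared standard deviation, and the text-prior weighting are all invented for the example.

```python
import math

# Toy visual vocabulary: each concept maps to the mean feature vector
# of its manually labelled image segments (2-D features for illustration).
VOCABULARY = {
    "sky":   [0.2, 0.9],
    "grass": [0.7, 0.3],
}
SIGMA = 0.25  # assumed common standard deviation for the Gaussian model

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def gaussian_score(distance, sigma=SIGMA):
    """Likelihood of a region matching a concept, falling off with distance."""
    return math.exp(-(distance ** 2) / (2 * sigma ** 2))

def label_region(region_features, text_prior=None):
    """Assign the best-matching visual concept to an image region.

    text_prior: optional {concept: weight} mapping standing in for the
    collateral content/context knowledge extracted from the textual
    modality, used to bias the visual mapping.
    """
    scores = {}
    for concept, mean in VOCABULARY.items():
        score = gaussian_score(euclidean(region_features, mean))
        if text_prior:
            score *= text_prior.get(concept, 1.0)
        scores[concept] = score
    return max(scores, key=scores.get)

# A region close to the "sky" prototype is labelled visually:
print(label_region([0.25, 0.85]))                     # sky
# A region equidistant from both prototypes is resolved by the text bias:
print(label_region([0.45, 0.6], {"grass": 3.0}))      # grass
```

The second call shows the role of the collateral modality: when the visual evidence alone is ambiguous, the text-derived prior tips the decision.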
Chapter 13, entitled “The MultiPlasticity of New Media”, by Gianluca Mura, looks at the problem of modelling interactive mulsemedia systems. Interactive systems engineering is an interdisciplinary field that normally involves communication between experts in computer and systems engineering, interaction design, software development, aesthetics, ethnography, psychology and usability. Accordingly, mulsemedia interactive systems, which must account for complex user needs, demand a suitable conceptual and multisensorial media definition model. This study analyses the social and conceptual evolution of digital media and proposes an interactive mixed-space media model, which communicates media content effectively in order to enhance the user experience between the interactive space of physical objects and the online virtual space. Feedback from the interactive mixed-space media model provides information concerning user performance when using a range of multisensorial interfaces. The research widens previous findings and gives the reader a specific definition of cognitive and emotional perception using a metaplastic multimedia model. The chapter describes how the interaction quality within its conceptual media space, through an action-making loop, facilitates the creation of new information content within its metaplastic metaspace configurations.
We are delighted and excited, in equal measure, to have edited this first book on mulsemedia – multiple sensorial media. We have had quality contributions from a range of researchers and practitioners. We hope its readers share our feelings and enjoy the book.
George Ghinea, Frederic Andres, and Stephen Gulliver
Author(s)/Editor(s) Biography

George Ghinea received the B.Sc. and B.Sc. (Hons) degrees in Computer Science and Mathematics, in 1993 and 1994, respectively, and the M.Sc. degree in Computer Science, in 1996, from the University of the Witwatersrand, Johannesburg, South Africa; he then received the Ph.D. degree in Computer Science from the University of Reading, United Kingdom, in 2000. He is a Reader in the School of Information Systems, Computing and Mathematics at Brunel University, United Kingdom. Dr Ghinea has over 100 refereed publications and currently leads a team of 8 research students in his fields of interest, which span perceptual multimedia, semantic media management, human computer interaction, and network security. He has co-edited three books, including Digital Multimedia Perception and Design for IGI.

Frederic Andres has been an Associate Professor in the Digital Content and Media Sciences Research Division at the National Institute of Informatics (NII) since 2000, and at The Graduate University for Advanced Studies since 2002. He received his Ph.D. from the University of Paris VI and his Doctor Habilitate (specialisation: information engine) from the University of Nantes, in 1993 and 2000 respectively. For more than 10 years Dr. Andres has been pursuing basic and applied research in semantic management, semantic digital libraries, knowledge clustering and Topic Maps. Since 2003, Dr. Andres has continued to develop and refine innovative learning based on procedural memory and related to semantic and pedagogical/didactic enrichment. His research interests include digital ecosystems, semantic digital libraries, image learning ontology, immersive knowledge, and MulSeMedia. He has published more than 100 scientific papers and has produced various patents on cost evaluation and modelling. In the past 5 years, he has participated in innovative digital ecosystem projects in Luxembourg (CVCE), France (Bourgogne University), Thailand (KU, NECTEC), India (Bishop Heber College (Autonomous)) and Nepal. He is project leader of the Geomedia and Myscoper projects. Dr. Frederic Andres is a senior member of ACM, a member of IEEE, of the IEEE Technical Committee on Semantic Computing, and of IPSJ. He is also an observer in the ISO SC29/WG11/MPEG and SC34/W3 (Topic Maps) working groups.

Stephen Gulliver received a BEng. (Hons) degree in Microelectronics, an MSc. degree (Distributed Information Systems) and a PhD in 1999, 2001, and 2004 respectively. Stephen worked within the Human Factors Integration Defence Technology Centre (HFI DTC) before taking a post as a lecturer at Brunel University (2005-2008). Now a lecturer within the Informatics Research Centre (IRC), a core part of Henley Business School (University of Reading), his personal research relates to the area of user and pervasive informatics. Dr. Gulliver has published in a number of related fields, including multimedia and information assimilation, usability, key performance indicators and user acceptance. Dr. Gulliver supervises research relating to topics including VR information acquisition, extensible modelling frameworks, CRM and ERP acceptance, intelligent building systems, eye-tracking technologies and multimedia content personalisation.