Contextual Inquiry for a Climate Audio Interface

Visda Goudarzi
DOI: 10.4018/978-1-4666-6228-5.ch001

Abstract

This chapter presents a contextual inquiry of climate scientists during data analysis tasks. Eighteen scientists volunteered for requirement-gathering interviews and focus groups. The interviews were analyzed to determine the implications for a complementary audio interface based on sonification. Results show that climate scientists depend heavily on visualizations and that the amount and complexity of the data to be displayed are substantial. Climate metaphors are assessed to inform an intuitive sound design for the interface, and the outline and basic properties of the audio tool were determined. Furthermore, users' sound preferences for the auditory display were evaluated: the volunteers rated the sounds aesthetically and associated them with climate parameters. The stimuli rated as most appealing and most consistently associated with the same parameter are considered optimal for the auditory interface.

Introduction

For centuries, throughout the evolution of modern science itself, data visualization has been the main tool for displaying data and the basis for numerous analysis methods. Today, with the growth of information technology, the amount of data available for exploration has expanded and calls for innovative methods of scanning it. Auditory displays have been explored as a complementary tool and can potentially help scientists, depending on the amount of data, the data structures, and the tasks within the research context. Sonification, the use of non-speech audio to convey information (Kramer, 1999), still has unexplored potential for application in science. Numerous sonification tools have been developed for specific scientific problems (e.g., sonification for EEG data analysis (Hermann, 2002), sonification of data from computational physics (Vogt, 2008), or sonification of earthquake data (Aiken et al., 2012)), but to date few have been adopted within the scientific domains they are intended for.

Within the field of human-computer interaction (HCI), auditory displays have not been explored as much as other interfaces, primarily graphical ones. Frauenberger (2009) analyzed 23 proceedings of the International Conference on Auditory Display (ICAD) along four themes: design process, guidance, rationale, and evaluation. He found that although all papers introduce their application domain, contextual information plays little role in the design process. After this in-depth review of design issues, he examined sonification design from the HCI community's point of view using an online survey. The results show that the design process for auditory displays is mostly unstructured and provides limited support for reusing the design knowledge created. Another issue is that methodologies and existing guidance in the audio domain are often tied to a specific context and can only be reused within that restricted context (Flowers et al., 1996).

The research project (syson.kug.ac.at) aims to incorporate a user-centered design process for developing sonifications. Therefore, an extensive investigation of the scientists' day-to-day research work has been performed, as described in this chapter. The project focuses on data from climate models and measurements. Climate data are a good model domain for sonification because of the typically large and multivariate data sets, which are difficult to visualize completely. Furthermore, the time-based nature of the data implies a straightforward reading direction for sound, which also evolves in time. General advantages of the human auditory system, e.g., extremely precise resolution in the time and frequency domains (Bregman, 1990), can be utilized in the data display. Other advantages of using auditory displays in the context of climate research were identified through the contextual inquiry discussed in this chapter.
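To make this time-to-sound mapping concrete, the following sketch (illustrative only, not the interface developed in this project) shows a minimal parameter-mapping sonification in Python: each value of a hypothetical climate time series sets the pitch of a short tone, and the resulting sequence is written to a WAV file. The data, frequency range, and tone duration are assumptions chosen for the example.

# Illustrative sketch only, not the project's tool: minimal parameter-mapping
# sonification. Each value of a hypothetical climate time series is mapped to
# the pitch of a short tone; the tones are concatenated and written to a WAV file.
import wave
import numpy as np

SAMPLE_RATE = 44100      # audio samples per second
TONE_DURATION = 0.15     # seconds of sound per data point

def sonify(series, fmin=220.0, fmax=880.0):
    """Map each data value linearly onto a pitch between fmin and fmax (Hz)."""
    lo, hi = float(np.min(series)), float(np.max(series))
    t = np.linspace(0.0, TONE_DURATION, int(SAMPLE_RATE * TONE_DURATION), endpoint=False)
    # Short linear fade-in/out (about 5 ms) to avoid clicks between tones.
    ramp = np.minimum(1.0, np.minimum(np.arange(t.size), np.arange(t.size)[::-1]) / 200.0)
    tones = []
    for value in series:
        norm = (value - lo) / (hi - lo) if hi > lo else 0.5
        freq = fmin + norm * (fmax - fmin)
        tones.append(0.5 * np.sin(2.0 * np.pi * freq * t) * ramp)
    return np.concatenate(tones)

if __name__ == "__main__":
    # Hypothetical stand-in for a climate series (e.g., monthly temperature anomalies).
    anomalies = np.cumsum(np.random.default_rng(0).normal(0.0, 0.1, 120))
    signal = sonify(anomalies)
    with wave.open("anomalies.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)              # 16-bit PCM
        f.setframerate(SAMPLE_RATE)
        f.writeframes((signal * 32767).astype(np.int16).tobytes())

Such a direct value-to-pitch mapping is only the simplest case; the contextual inquiry described below informs which parameters and mappings are actually meaningful to climate scientists.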

One example of sonification in the context of climate research is the “rain prediction auditory icon” of Halim et al. (2006), who used auditory icons to display the probability of rain based on the weather conditions of the previous 48 hours. Another example is Bearman (2011), who used sound to represent uncertainty in UK climate projection data and compared different visual and sonic methods of representing uncertainty in spatial data. He shows that, when handling large volumes of spatial data, users are limited in how much can be displayed at once due to visual saturation (the point at which no more data can be shown without obscuring existing data). Bearman demonstrated that using sound in combination with visual methods can help represent uncertainty in spatial data, an idea that can be extended to other data sets when visual representations are insufficient. In addition to scientific examples, many projects have used sonification of climate data in an artistic context, e.g., Polli's (2004) sonification of storm data from weather models.
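As a rough illustration of Bearman's idea (a hypothetical sketch, not his implementation), uncertainty could be carried by a second sound parameter while the projected value itself sets the pitch: in the sketch below, broadband noise is blended into a tone in proportion to each data point's uncertainty, so more uncertain projections sound noisier.

# Hypothetical sketch, not Bearman's method: blend noise into a tone in
# proportion to a data point's uncertainty, so certain values sound clean
# and uncertain values sound rough.
import numpy as np

SAMPLE_RATE = 44100

def uncertain_tone(freq_hz, uncertainty, duration=0.3, seed=0):
    """Tone at freq_hz; uncertainty in [0, 1] controls how much noise is mixed in."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    tone = np.sin(2.0 * np.pi * freq_hz * t)
    noise = rng.uniform(-1.0, 1.0, t.size)
    u = float(np.clip(uncertainty, 0.0, 1.0))
    return 0.5 * ((1.0 - u) * tone + u * noise)

# The same projected value sounds "cleaner" when its uncertainty is low.
confident = uncertain_tone(440.0, 0.1)
uncertain = uncertain_tone(440.0, 0.8)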
