Composing by Listening: A Computer-Assisted System for Creating Emotional Music

Lena Quinto, William Forde Thompson
Copyright © 2012 | Pages: 20
DOI: 10.4018/jse.2012070103

Abstract

Most people communicate emotion through their voice, facial expressions, and gestures. However, it is often assumed that only “experts” can communicate emotion through music. The authors developed a computer-based system that enables musically untrained users to select relevant acoustic attributes to compose emotional melodies. Nonmusicians (Experiment 1) and musicians (Experiment 3) were progressively presented with pairs of melodies that each differed in one acoustic attribute (e.g., intensity: loud vs. soft). For each pair, participants chose the melody that most strongly conveyed a target emotion (anger, fear, happiness, sadness, or tenderness). Once all decisions were made, a final melody incorporating all choices was generated. The system allowed both untrained and trained participants to compose a range of emotional melodies. New listeners successfully decoded the emotional melodies of nonmusicians (Experiment 2) and musicians (Experiment 4). The results indicate that human-computer interaction can facilitate the composition of emotional music by musically untrained and trained individuals.
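The progressive paired-comparison procedure described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the attribute names, their two-option values, and the callback interface are assumptions chosen for clarity.

```python
# Sketch of the "composing by listening" loop: for each acoustic attribute,
# the listener hears two melody variants and keeps the one that better
# conveys the target emotion. The final "melody" is the set of chosen
# attribute settings. Attribute names/values below are illustrative only.

ATTRIBUTES = {
    "intensity": ("loud", "soft"),
    "tempo": ("fast", "slow"),
    "pitch_register": ("high", "low"),
    "articulation": ("staccato", "legato"),
}

def compose_by_listening(choose):
    """Run one pass of paired choices.

    `choose(attribute, option_a, option_b)` stands in for the listener:
    it returns whichever option more strongly conveys the target emotion.
    """
    melody = {}
    for attribute, (option_a, option_b) in ATTRIBUTES.items():
        melody[attribute] = choose(attribute, option_a, option_b)
    return melody

# Example: a listener targeting "sadness" might consistently prefer the
# second (softer/slower) option of each pair.
sad_melody = compose_by_listening(lambda attr, a, b: b)
print(sad_melody)
```

In the actual system each choice would trigger audio playback of the two candidate melodies; the dictionary returned here corresponds to the parameter set used to render the final composed melody.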

Introduction

Emotional communication in music has been conceptualized as a transmission process. Composers and performers encode or transmit emotion through the acoustic signals in music. Listeners hear the music and are then able to decode the emotion in these signals. The ability to create powerfully emotional music is usually the result of extensive training or experience, whereas listeners do not require training to perceive emotion in music (Gabrielsson & Lindström, 2010; Juslin & Timmers, 2010). Nonmusicians acquire knowledge of emotional cues implicitly, following long-term exposure to the music of their culture (Robazza, Macaluso, & D’Urso, 1994). Accurate emotion perception occurs because musicians and nonmusicians share a common understanding of the emotional connotations of acoustic attributes (Juslin & Laukka, 2003; Kendall & Carterette, 1990; Seashore, 1923). Once this understanding is acquired, it can be used to decode the emotional qualities of novel music, including stylistically unfamiliar music from a different culture (Balkwill & Thompson, 1999; Fritz et al., 2009). Even young children are capable of interpreting the emotional qualities of music (Dalla Bella, Peretz, Rousseau, & Gosselin, 2001; Kastner & Crowder, 1990; Terwogt & Van Grinsven, 1991).

It is likely that, through passive exposure to music, nonmusicians acquire extensive musical knowledge but lack the capacity to express it verbally (for a review, see Bigand & Poulin-Charronnat, 2006). This implicit knowledge helps nonmusicians perceive and appreciate complex musical structures such as tonal and harmonic relationships (Krumhansl, 1990; Bigand & Poulin-Charronnat, 2006; Tillmann, 2005). These findings imply that nonmusicians have considerable knowledge of music that may even extend to composition. However, the abilities of nonmusicians in compositional tasks have not been evaluated, because such tasks demand technical skills that nonmusicians lack, including performance ability and musical literacy. Musicians, by contrast, possess both greater technical facility on their instrument and explicit knowledge of musical notation, which allow them to realize musical ideas and to focus on aesthetic goals (Lehmann & Gruber, 2006).

One way to probe the knowledge of individuals is through the study of human-computer interaction. Human interaction with technology can often reveal implicit creative and cognitive abilities within an individual (Clark & Chalmers, 1998). For example, Coughlan and Johnson (2008) developed compositional software to probe the processes of composition and creativity. Using this software, users were able to identify and define musical patterns, rules, and boundaries. Individuals varied in musical training, with some having no training at all and others having extensive experience as composers, yet all participants could explore musical ideas with the assistance of the software. Thus, technology facilitated the development of musical thought in a way that would otherwise not have been possible. In music pedagogy, technology has also been important in shaping curricula (Cain, 2004). The form that the technology takes has varied, ranging from electronic keyboards to audio recording and sequencing. Research shows that the use of technology and computers in music education facilitates students' ability to work collaboratively, experiment with new ideas, and reliably edit their work (Mills & Murray, 2000).
