Expressiveness in Music Performance: Analysis, Models, Mapping, Encoding

Sergio Canazza, Giovanni De Poli, Antonio Rodà, Alvise Vidolin
DOI: 10.4018/978-1-4666-2497-9.ch008

Abstract

During the last decade, in the fields of both systematic musicology and cultural musicology, considerable research effort (using methods borrowed from music informatics, psychology, and the neurosciences) has been spent to connect two worlds that seemed very distant or even antithetic: machines and emotions. Within the Sound and Music Computing framework of human-computer interaction in particular, interest has grown in finding ways to allow machines to communicate expressive, emotional content through a nonverbal channel. This interest is justified by the goal of enhanced interaction between humans and machines, exploiting communication channels that are typical of human-human communication and that can therefore be easier and less frustrating for users, in particular for non-technically skilled users (e.g. musicians, teachers, students, laypeople). While research on emotional communication has found its way into more traditional fields of computer science such as Artificial Intelligence, new fields have also emerged around these issues: examples are Affective Computing in the United States, KANSEI Information Processing in Japan, and Expressive Information Processing in Europe. This chapter presents the state of the art in the computational study of music performance. It then presents analysis methods and synthesis models of expressive content in music performance developed by the authors. Finally, an XML-based system for encoding expressiveness in music performance is detailed.

Background

Humans interact with music in complex and varied ways. Depending on the context, music can assume different functions: the result of a creative process, a score to interpret, structured sound to listen to, or an object to study from a historical and cultural perspective. For these reasons, it is difficult to design effective systems for technology-mediated access to music that cover all the requirements of human-music interaction. Concerning software platforms for access to music content, a still open issue is which music-related information needs to be encoded and how this information can be appropriately structured. In recent years, several works have addressed this issue, each focusing on a specific application context.
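As a purely illustrative sketch (the element and attribute names below are assumptions, not the chapter's actual schema), expressive-performance information could be structured in XML by recording, for each performed note, its deviations from the nominal score values together with a performance-level expressive label. The following Python fragment builds such a structure with the standard xml.etree.ElementTree module:

# Hypothetical sketch, not the authors' actual encoding: one <performance>
# element with an expressive-intention label, containing <note> elements
# that record deviations of the performed note from its nominal score values.
import xml.etree.ElementTree as ET

def build_performance_annotation(intention, notes):
    """Build an XML tree describing one expressive performance.

    intention: free-text expressive label (e.g. "bright", "heavy").
    notes: list of dicts with score pitch and performed deviations.
    All element and attribute names are illustrative assumptions.
    """
    perf = ET.Element("performance", attrib={"intention": intention})
    for n in notes:
        note = ET.SubElement(perf, "note", attrib={"pitch": n["pitch"]})
        # Deviations of the performed note from its nominal score values.
        ET.SubElement(note, "onset_deviation").text = str(n["onset_dev"])  # seconds
        ET.SubElement(note, "duration_ratio").text = str(n["dur_ratio"])   # performed/nominal
        ET.SubElement(note, "velocity").text = str(n["velocity"])          # MIDI-like loudness
    return ET.ElementTree(perf)

if __name__ == "__main__":
    tree = build_performance_annotation(
        "bright",
        [{"pitch": "C4", "onset_dev": -0.02, "dur_ratio": 0.9, "velocity": 96}],
    )
    ET.dump(tree.getroot())  # print the XML for inspection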
