The Physics of Music

Jyri Pakarinen (Aalto University, Finland)
DOI: 10.4018/978-1-4666-2497-9.ch002


This chapter discusses the central physical phenomena involved in music. The aim is to explain the related issues at an understandable level, without delving unnecessarily deep into the underlying mathematics. The chapter is divided into two main sections: musical sound sources and sound transmission to the observer. The first section starts from the definition of sound as wave motion, and then guides the reader through the vibration of strings, bars, membranes, plates, and air columns, that is, the oscillating sources that create the sound for most musical instruments. Resonating structures, such as instrument bodies, are also reviewed, and the section ends with a discussion of potential physical markup parameters for musical sound sources. The second section starts with an introduction to the basics of room acoustics, and then explains the acoustic effect that the human observer causes in the sound field. The end of the second section discusses which sound transmission parameters could be used in a general music markup language. Finally, a concluding section is presented.

Introduction And Background

This chapter intends to serve as an introduction to the different physical phenomena involved in music. It is aimed at an audience with a basic understanding of mathematics and physics, but without a formal education in acoustics. This chapter does not attempt to serve as a thorough tutorial or textbook on musical acoustics or the physics of musical instruments. For these purposes, other excellent works can be found in the literature. The well-known The Science of Sound (Rossing, 1990) provides a broad and easily understandable introduction to acoustics and speech in general, and is used as a primer on acoustics in several universities. Works by Reynolds (1981), Fahy (2001), and Blauert and Xiang (2008) provide perhaps a more thorough engineering approach to acoustics and the vibrational physics of fluids and solids. The book The Physics of Musical Instruments (Fletcher & Rossing, 1991) gives a rigorous explanation of the various physical phenomena involved in musical instruments, and is the main basis for the section on musical sound sources below.

Aside from purely educational purposes, a basic understanding of the physical phenomena that enable the creation of music is useful when deciding which musical features to mark up in a general music markup language. Using physical quantities, such as frequency, as markup parameters has some important advantages. Firstly, physical quantities are universal. Since higher-level musical concepts are derived from physical phenomena, a system using physical variables as its lowest-level representation would adapt well to changes higher up in the system hierarchy. For example, if pitch is represented as a frequency in Hertz rather than as a note name in Common Western Notation (CWN), transporting the system between different notation systems becomes trivial.
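As a minimal illustration of this point, the mapping from a CWN note name to a frequency in Hertz is a simple formula under twelve-tone equal temperament with the common A4 = 440 Hz reference (the function name and reference pitch here are illustrative assumptions, not part of the chapter):

```python
# Convert a CWN note name (e.g. "A4", "C#3") to its fundamental
# frequency in Hz, assuming equal temperament and A4 = 440 Hz.
NOTE_OFFSETS = {"C": -9, "C#": -8, "D": -7, "D#": -6, "E": -5,
                "F": -4, "F#": -3, "G": -2, "G#": -1, "A": 0,
                "A#": 1, "B": 2}

def note_to_frequency(note: str) -> float:
    name, octave = note[:-1], int(note[-1])
    # Distance in semitones from the A4 reference.
    semitones = NOTE_OFFSETS[name] + 12 * (octave - 4)
    return 440.0 * 2.0 ** (semitones / 12.0)

print(round(note_to_frequency("A4"), 2))  # 440.0
print(round(note_to_frequency("C4"), 2))  # 261.63 (middle C)
```

A markup system storing the frequency itself would need no such table; the note-name layer becomes a thin, replaceable view on top of the physical quantity.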

Secondly, physical parameters can be used to control sound synthesizers. For physics-based sound synthesizers, physical quantities can in many cases be used directly as control parameters. This enables more explicit control over the timbre of the synthetic instruments: instead of feeding only the note name, velocity, and instrument name to a synthesizer, the user could, for example, explicitly state the length and material of a guitar string, as well as the plucking location and whether the string is plucked with a finger or a plectrum. Of course, such a system should provide default parameter values or compound variables, so that the user would not always have to define an extensive list of parameters, but would still have the freedom to do so when searching for a specific musical timbre. For an extensive review of physics-based sound synthesis methods, see Välimäki et al. (2006). For abstract sound synthesis techniques (such as additive synthesis or frequency modulation), the physical parameters can be used, for example, to define the desired spectral form or amplitude envelope of the sound. This can be achieved by creating mapping rules from the physical parameters to the control parameters of the particular abstract synthesis method. Creating these mappings might not be as difficult a task as it seems at first, since many synthesis techniques already have control parameters (such as pitch, amplitude, and spectral density) that are indirectly related to the physical properties of the source.
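One simple mapping of this kind follows from the standard physics of an ideal string (Mersenne's law), which relates the physical markup parameters of length, tension, and linear mass density to the pitch control of a synthesizer. The sketch below uses rough, illustrative values for a guitar string; the function name and parameter values are assumptions for demonstration, not from the chapter:

```python
import math

def string_fundamental(length_m: float, tension_n: float,
                       lin_density_kg_m: float) -> float:
    """Fundamental frequency of an ideal string (Mersenne's law):
    f1 = (1 / 2L) * sqrt(T / mu)."""
    return math.sqrt(tension_n / lin_density_kg_m) / (2.0 * length_m)

# Roughly plausible values for a steel high-E guitar string:
# 0.65 m vibrating length, 70 N tension, 0.4 g/m linear density.
f1 = string_fundamental(0.65, 70.0, 0.0004)
print(round(f1, 1))  # ~322 Hz, near the nominal 329.6 Hz of E4
```

A physics-based synthesizer could consume the string parameters directly, while an abstract method (say, additive synthesis) would receive only the derived frequency through a mapping rule like this one.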

The objective of this chapter is to introduce the reader to the physical processes that enable us to create and hear music. Furthermore, the chapter should raise ideas on which features to mark up in a general music markup language. The depth of coverage is determined by the potential markup parameters: the physics is discussed to the extent required for understanding those parameters.
