Universal Information Architecture of Acoustic Music Instruments


Jacques Steyn (Monash University, South Africa)
DOI: 10.4018/978-1-4666-2497-9.ch010


The properties of a wide range of musical instruments are considered, from ancient acoustic instruments to modern ones, across many music cultures. Following a logical analysis and synthesis of previous research, rather than acoustic lab results, a high-level, generic, and universal model of the information architecture of acoustic music instruments is constructed.

Demarcating The Field

Music acoustics has been studied for a long time, at least since Hermann von Helmholtz (1821-1894) introduced the Helmholtz resonator in the 1860s, although his interest was more in the physics of perception. It could also be argued that the Ancient Greeks had some conceptualization of acoustics, given the observation of Pythagoras of Samos (ca. 570 – ca. 495 BCE) that the pitch of a string seems to double at half its length. The Greek interest, however, was more in the mathematical ratios of the music of the spheres (James, 1993) than in understanding acoustics. In modern times, the acoustic properties of many artifacts have been researched, ranging from building spaces and the metal and concrete used in bridge construction to solids, liquids, and many other phenomena. Although much is known, a whole range of properties remains unknown, such as the acoustic behavior of the different topographical shapes and thicknesses of the many types of timber, metal, plastic, lacquer, and other materials used in the construction of acoustic music instruments. Until all the variants of acoustic behavior are known, a detailed model cannot be constructed. The model proposed here is thus a high-level, tentative one, constructed on the basis of currently available knowledge.

The first uses of computers were for military purposes and in large organizations, such as government and corporate environments. Almost simultaneously, music was among the first applications to which computers were put. Much later, after the MIDI standard was introduced in the 1980s, along with novel methods of creating sound synthetically, there was a boom in music technology, not only in the design of playback devices but also in the design of sound and music creation devices. Strangely, despite the very long history of acoustics, and the interest of music technologists in computers, there has not yet been a complete information-architectural description of the acoustics of non-electronic music instruments. Extensive literature (such as the Journal of the Acoustical Society of America) describes their acoustic properties, but not in a format that computers could readily use in an information exchange environment, and there is no formal approach to such a description. It is this gap that this chapter addresses.

The chapter on The Architecture of Music in this volume includes a brief discussion of ontology and related concepts. Here it suffices to state that an ontological analysis of music focuses on the information required for sound synthesis, not on the engineering aspects of its physical synthesis or on synthesis models. Approaches to music synthesis are often more concerned with the hardware, the chip design, the algorithms, and the methods used to generate sound. Relatively little attention is paid to the data structures the system uses, except in the form of algorithms, or in the analysis of the final output sound. The focus seems to be on a sampled soundclip as a whole, with no interest in its intrinsic properties. Most descriptions of sounds concern the physics of wave propagation and wave properties. Synthesizers typically use the sinusoidal wave as a basic starting point. This wave is then modified and manipulated through filters and by adding other waves to serve as harmonics. Several waves are added together to obtain the required sound, hence the name additive synthesis. The other main method, subtractive synthesis, starts with a wave rich in harmonics and reduces the complexity of that wave, for example with filters. Wavetable synthesis is a variation of the additive method. The proposed model is not a synthesis model, but an information model of the architectural components of the acoustic music instrument. The approach is thus very different from approaches such as, for example, the parametric piano synthesizer of Rauhala et al. (2008).
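To make the contrast concrete, the additive method described above can be sketched in a few lines: a tone is built by summing sine partials at integer multiples of a fundamental frequency. This is a minimal illustration of the general technique, not any particular synthesizer's implementation; the function name, sample rate, and harmonic amplitudes are illustrative choices.

```python
import math

SAMPLE_RATE = 44100  # samples per second

def additive_tone(freq, harmonics, duration=0.5):
    """Sum sine partials at integer multiples of `freq` (additive synthesis).

    `harmonics` maps harmonic number (1 = fundamental) to amplitude.
    Returns a list of raw sample values.
    """
    n_samples = int(SAMPLE_RATE * duration)
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        value = sum(amp * math.sin(2 * math.pi * k * freq * t)
                    for k, amp in harmonics.items())
        samples.append(value)
    return samples

# A crude sawtooth-like timbre on A4: harmonic k at amplitude 1/k.
tone = additive_tone(440.0, {k: 1.0 / k for k in range(1, 8)})
```

Subtractive synthesis would invert this logic: start from a spectrally rich source (such as the sawtooth approximated here) and filter harmonics away rather than summing them in.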

The information-architectural model in this chapter does not attempt to create a physical model from an algorithmic perspective, but from the perspective of the information architecture of the physical music instrument. The proposed model assumes that the music instrument's information architecture can be described independently of algorithms, synthesis models, or even the low-level characteristics of music instruments. In fact, the information architecture is algorithm-ignorant: various algorithms could be written for any component it describes.
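The separation argued for above can be sketched as code: the instrument is described purely as data, and any algorithm that understands the description can consume it. This is a hypothetical illustration only; the component names ("exciter", "vibrator", "resonator") and the violin values are illustrative assumptions, not the chapter's actual model.

```python
# Hypothetical, simplified instrument description as pure data.
# No synthesis algorithm is baked into the description itself.
violin_description = {
    "exciter": {"type": "bowed string"},
    "vibrator": {"type": "string", "count": 4,
                 "tuning_hz": [196.0, 293.7, 440.0, 659.3]},
    "resonator": {"type": "wooden body", "material": "spruce/maple"},
}

def synthesize(description, algorithm):
    """Render an instrument description with any supplied algorithm.

    The description stays algorithm-ignorant: swapping `algorithm`
    (additive, subtractive, physical modeling, ...) needs no change here.
    """
    return algorithm(description)

# Trivial stand-in "algorithm": report the fundamental of each string.
fundamentals = synthesize(violin_description,
                          lambda d: d["vibrator"]["tuning_hz"])
```

The design point is that the data structure, not the algorithm, is the unit of information exchange, which is what an information-architectural description in a computer-readable format would enable.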
