Music Content Analysis in MP3 Compressed Domain

Antonello D’Aguanno (Università degli Studi di Milano, Italy)
DOI: 10.4018/978-1-61692-859-9.ch014

Abstract

Nowadays, more and more audio content is stored in compressed formats; MP3 music in particular has become very popular with the availability of powerful computation and wide-bandwidth connectivity. This chapter is therefore devoted to presenting techniques and algorithms for content analysis that work directly on compressed audio. Since content analysis in the compressed domain is an innovative field of application, the literature review is extended to methods that extract music content from MP3 even when the algorithms are not focused on music information retrieval. The authors focus on a number of algorithms dealing with common tasks of the MIR field, such as tempo induction, tempo tracking, and automatic music synchronization. They also present an overview of the MusicXML and IEEE 1599 languages for representing scores and synchronization results, since they have chosen these formats to represent the score in their synchronization algorithm. The chapter ends by showing applications, conclusions, and future work in the field of direct content analysis in the compressed domain.

Background

MP3

MPEG audio defines a set of standards for lossy audio compression. The algorithms are classified into three layers, sorted by increasing complexity and efficiency, and are contained in both MPEG-1 (MPEG1) and MPEG-2 (MPEG2). These standards work at high (32, 44.1, 48 kHz) and low (16, 22.05, 24 kHz) sampling frequencies, respectively.

MP3 codecs typically apply non-uniform quantization in the frequency domain, driven by a perceptual model, to compress a PCM audio signal into a standard bit stream at various bit rates.

The time-to-frequency transform is built by cascading a polyphase filter bank with an MDCT (hybrid filter bank). The polyphase filter bank takes samples from the PCM stream and represents them, for long windows, in 32 frequency sub-bands, each further subdivided into 18 finer sub-bands by the MDCT.
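The structure of this hybrid filter bank can be illustrated with a minimal sketch: 32 sub-bands, each refined by an 18-point MDCT, yield 576 frequency lines per granule. This is an assumption-laden toy (random placeholder samples, a textbook MDCT, no windowing or aliasing cancellation), not the normative MP3 analysis chain.

```python
import numpy as np

def mdct(x):
    """Plain MDCT of a 2N-sample block into N coefficients (textbook form)."""
    N = len(x) // 2
    n = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
    return basis @ x

# One granule for long windows: 32 sub-bands x 18 samples from the polyphase
# stage (values here are placeholders; a real encoder derives them from PCM).
subband_samples = np.random.randn(32, 18)
prev_samples = np.random.randn(32, 18)  # the MDCT uses 50% overlap

# Each sub-band's 18 new samples, overlapped with the previous 18,
# are transformed into 18 finer spectral lines.
lines = np.array([mdct(np.concatenate([prev_samples[b], subband_samples[b]]))
                  for b in range(32)])
print(lines.shape)  # (32, 18): 32 x 18 = 576 frequency lines per granule
```

The 50% overlap between consecutive granules is what lets the decoder's inverse MDCT cancel time-domain aliasing on reconstruction.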

The psychoacoustic model generates the SMR (Signal-to-Mask Ratio); this index tells the quantization block how many bits should be allocated to each frequency sub-band so that the quantization noise remains inaudible (Zwicker, 2001).
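The role of the SMR can be sketched with a hypothetical greedy bit-allocation loop: bits are repeatedly given to the band whose quantization noise is least masked, using the rule of thumb that each extra bit buys roughly 6.02 dB of SNR. This is illustrative only; the real Layer 3 loop also iterates over scalefactors and the Huffman-coded frame size.

```python
def allocate_bits(smr_db, total_bits, n_bands=32):
    """Greedy allocation: feed bits to the band with the worst NMR."""
    bits = [0] * n_bands
    while total_bits > 0:
        # Noise-to-mask ratio per band: noise is audible where NMR > 0.
        nmr = [smr_db[b] - 6.02 * bits[b] for b in range(n_bands)]
        worst = max(range(n_bands), key=lambda b: nmr[b])
        if nmr[worst] <= 0:       # all quantization noise is already masked
            break
        bits[worst] += 1          # one more bit ~ 6.02 dB less noise
        total_bits -= 1
    return bits

# Toy example with 4 bands and made-up SMR values (in dB):
print(allocate_bits([12.0, 6.0, 0.0, -6.0], 10, n_bands=4))  # [2, 1, 0, 0]
```

Note how bands whose SMR is already non-positive receive no bits at all: their quantization noise is masked from the start, which is exactly the saving a perceptual coder exploits.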

The outputs of the filter banks and of the perceptual model feed the non-uniform quantization process, which decides how to quantize every frequency sub-band so as to respect the SMR values. Huffman lossless compression is then performed before bit-stream packing. Whereas an MPEG-2 Layer 3 frame contains only one granule, an MPEG-1 Layer 3 frame is made of two granules.
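The non-uniform character of the quantizer comes from a 3/4 power law applied to the spectral values before rounding, which compresses large values relative to small ones. The sketch below shows only this core idea; the actual Layer 3 formula also includes a small rounding offset and per-band scalefactors omitted here.

```python
def quantize(x, step):
    """Power-law quantization: large values get coarser treatment."""
    return round((abs(x) / step) ** 0.75)   # the 3/4 exponent used by Layer 3

def dequantize(ix, step):
    """Decoder side: invert the power law with the 4/3 exponent."""
    return (ix ** (4.0 / 3.0)) * step

# A value of 8.0 with unit step maps to index round(8 ** 0.75) = 5,
# which dequantizes back to roughly 8.55 (lossy, as expected).
print(quantize(8.0, 1.0), dequantize(quantize(8.0, 1.0), 1.0))
```

The reconstruction error grows with magnitude, matching the perceptual observation that louder spectral components tolerate proportionally more quantization noise.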

Further information about MPEG standards can be found in (MPEG1; MPEG2; Noll,1997; Pan,1995).
