A Psychoacoustic Model Based on the Discrete Wavelet Packet Transform


DOI: 10.4018/978-1-61520-925-5.ch008


8.1 Introduction

Psychoacoustic modeling has made important contributions to the development of recent high-quality audio compression methods (ISO/IEC 11172-3, 1993; Painter and Spanias, 2000; Pan, 1995) and has enabled the introduction of effective audio watermarking techniques (Swanson et al., 1998; Liu, 2004; Cox et al., 2002). In audio analysis and coding, it is used to reduce the signal information rate in lossy compression while maintaining transparent quality. This is achieved by accounting for auditory masking effects, which make it possible to keep quantization and processing noise inaudible. In speech and audio watermarking, the use of auditory masking makes it possible to embed information unrelated to the signal in a manner that remains imperceptible yet can be reliably recovered during the identification process.
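
To illustrate the masking principle in isolation, the following Python sketch picks a uniform quantization step so that the expected quantization noise power stays near an assumed masking threshold. The band data and threshold value are hypothetical and are not taken from the model developed in this chapter.

import numpy as np

def quantize_band(coeffs, mask_threshold_power):
    # Uniform quantizer; the step is chosen so that the expected noise power,
    # roughly step**2 / 12, matches the assumed masking threshold power.
    step = np.sqrt(12.0 * mask_threshold_power)
    return step * np.round(coeffs / step)

rng = np.random.default_rng(0)
band = rng.normal(scale=1.0, size=1024)    # stand-in for one analysis band
threshold = 1e-4                           # hypothetical masked noise power
quantized = quantize_band(band, threshold)
noise_power = np.mean((quantized - band) ** 2)
print(noise_power, threshold)              # noise power sits near the mask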

Most psychoacoustic models used in audio compression or watermarking have so far relied on the short-time Fourier transform (STFT) to construct a time-varying spectral representation of the signal (Painter and Spanias, 2000; Cox, 1997; Bosi and Goldberg, 2003). A window sequence of fixed length is used to capture a signal section, resulting in a fixed spectral resolution. The STFT is applied to windowed sections of the signal, thus providing an analysis profile at regular time instances. However, the STFT can provide only averaged frequency information of the signal and lacks the flexibility of arbitrary time-frequency localization (Polikar, online, 2006). Such a rigid analysis regime is in striking contrast with the unpredictably dynamic spectral-temporal profile of information-carrying audio signals. Signal characteristics would be analyzed and represented more accurately by a more versatile description whose time-frequency resolution adapts to the signal dynamics. The approaches included in the MPEG-1 standard and elsewhere allow switching between two analysis window sizes depending on the signal entropy (ISO/IEC 11172-3, 1993) or on changes in the estimated signal variance (Lincoln, 1998). Greater flexibility, however, is needed. The wavelet transform presents an attractive alternative by providing frequency-dependent resolution, which can better match the hearing mechanism (Polikar, online, 2006). Specifically, long windows analyze low-frequency components and achieve high frequency resolution, while progressively shorter windows analyze higher-frequency components to achieve better time resolution. Wavelet analysis has found numerous signal processing applications, including video and image compression (Abbate et al., 2002; Jaffard et al., 2001), perceptual audio coding (Veldhuis et al., 1998), and high-quality audio compression with psychoacoustic model approximation (Sinha and Tewfik, 1993).
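
As a rough numerical illustration of this contrast (using Python with NumPy, SciPy, and PyWavelets as assumed tooling, not the implementation described in this chapter), the sketch below analyzes the same signal with a fixed-window STFT and with a multi-level discrete wavelet decomposition, whose subbands trade frequency resolution for time resolution as the frequency increases.

import numpy as np
import pywt
from scipy.signal import stft

fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 4000 * t)

# STFT: a single window length gives the same resolution at every frequency.
f, frames, Zxx = stft(x, fs=fs, nperseg=512)
print("STFT grid (freq bins x time frames):", Zxx.shape)

# DWT: each level halves the analyzed band and doubles the effective window,
# so low frequencies get fine frequency resolution and high frequencies get
# fine time resolution.
coeffs = pywt.wavedec(x, 'db8', level=5)
for level, c in enumerate(coeffs):
    print("subband", level, ":", len(c), "coefficients")

The wavelet packet transform named in the chapter title generalizes this decomposition by allowing the detail bands to be split further as well, so the analysis tree can be shaped rather than fixed by the dyadic cascade.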

Wavelet-based approaches have been previously proposed for perceptual audio coding. Sinha and Tewfik (1993) used the masking model proposed in (Veldhuis et al., 1998) to first calculate masking thresholds in the frequency domain by means of the fast Fourier transform (FFT). Those thresholds were then used to derive a constraint on the reconstruction error caused either by quantization or by approximation of the wavelet coefficients used in the analysis. As long as the reconstruction errors were kept below those thresholds, no perceptible distortion was introduced. The constraints were then translated into the wavelet domain to ensure transparent wavelet audio coding.
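
The following sketch reproduces only the shape of that argument, with a deliberately simplistic stand-in for the masking model (a fixed offset below the local FFT spectrum) and hard thresholding of wavelet coefficients as the approximation step; none of the specific values come from the cited work.

import numpy as np
import pywt

fs = 16000
rng = np.random.default_rng(1)
t = np.arange(0, 0.128, 1.0 / fs)                       # one analysis frame
x = np.sin(2 * np.pi * 440 * t) + 0.01 * rng.normal(size=t.size)

# Crude frequency-domain "masking threshold": 20 dB below the local spectral
# power (a placeholder for a proper psychoacoustic model).
X = np.fft.rfft(x)
threshold_power = (np.abs(X) ** 2) * 10.0 ** (-20 / 10)

# Approximate the signal by zeroing small wavelet coefficients.
coeffs = pywt.wavedec(x, 'db8', level=4)
approx = [pywt.threshold(c, value=0.01, mode='hard') for c in coeffs]
x_hat = pywt.waverec(approx, 'db8')[: x.size]

# Translate the reconstruction error to the frequency domain and compare it
# against the threshold bin by bin, mirroring the constraint-checking step.
E = np.fft.rfft(x - x_hat)
ratio = np.abs(E) ** 2 / np.maximum(threshold_power, 1e-12)
print("worst-case error-to-threshold ratio:", ratio.max())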
