A Proposed Intelligent Denoising Technique for Spatial Video Denoising for Real-Time Applications

Amany M. Sarhan (Mansoura University, Egypt), Mohamed T. Faheem (Tanta University, Egypt) and Rasha Orban Mahmoud (Nile Institute of Commerce & Computer Technology, Egypt)
DOI: 10.4018/978-1-4666-0119-2.ch010

Abstract

With the widespread use of video in many fields of our lives, developing new techniques for video denoising has become increasingly important. Spatial video denoising using the wavelet transform has been the focus of recent research, as it requires less computation and is more suitable for real-time applications. Two spatial denoising techniques based on the wavelet transform are considered in this work: the 2D Discrete Wavelet Transform (2D DWT) and the 2D Dual-Tree Complex Wavelet Transform (2D DTCWT). A comparative analysis of the two techniques shows that each has its advantages and disadvantages: the first yields lower quality at high noise levels but consumes less time, whereas the second produces high-quality video at a much greater computational cost. In this work, we introduce an intelligent denoising system that trades off the quality of the denoised video against the time required for denoising. The system first estimates the noise level in each video frame and then chooses the appropriate denoising technique to apply to that frame. Simulation results show that the proposed system is well suited to real-time applications where time is critical, while still delivering high-quality video at low to moderate noise levels.

Introduction

Recent advances in multimedia technology have prompted an enormous amount of research in image and video processing. Alongside applications such as compression, enhancement, and target recognition, preprocessing for noise removal plays a central role: it is one of the most common and important processing steps in image and video systems. Because of its importance and ubiquity, a great deal of research has been devoted to noise removal, and many different mathematical tools have been proposed (Balster, Zheng & Ewing, 2006).

Noise refers to unwanted stochastic variations, as opposed to deterministic distortions such as shading or lack of focus. It can be added to or multiplied with the video signal, and it can be signal-dependent or signal-independent (Ghazal, Amer & Ghrayeb, 2007). Based on its spectral properties, noise is further classified as white or colored. Many types of noise affect charge-coupled device (CCD) cameras, such as photon shot noise and readout noise. Photon shot noise is due to the random arrival of photons at the sensor, which is governed by a Poisson distribution. Other sources of noise, including output amplifier noise, camera noise, and clock noise, can be combined into a single equivalent Gaussian source called readout noise. Because photon arrivals occur in very large numbers, the central limit theorem implies that the aggregate noise is well approximated by a Gaussian distribution. Consequently, an Additive White Gaussian Noise (AWGN) model is assumed in this chapter. This choice is also motivated by AWGN being the most common noise model for TV broadcasting (Ghazal et al., 2007).
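The AWGN model described above can be simulated by adding zero-mean Gaussian samples to each pixel. The following minimal sketch (not taken from the chapter; the function name and clipping range are our own assumptions for 8-bit grayscale frames) shows how a test frame might be corrupted at a given noise level sigma:

```python
import numpy as np

def add_awgn(frame, sigma, seed=0):
    """Corrupt a grayscale frame with zero-mean additive white Gaussian noise.

    The seed is fixed only to make the sketch reproducible; the clip to
    [0, 255] assumes 8-bit intensities, which the chapter does not specify.
    """
    rng = np.random.default_rng(seed)
    noisy = frame.astype(np.float64) + rng.normal(0.0, sigma, frame.shape)
    return np.clip(noisy, 0, 255)

# Example: a flat mid-gray frame corrupted with sigma = 10
frame = np.full((8, 8), 128.0)
noisy = add_awgn(frame, sigma=10)
```

Because the noise is additive and signal-independent, the same routine applies to every frame of a sequence regardless of content.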

Spatial video denoising techniques operate on individual frames using transforms such as the two-dimensional Dual-Tree Complex Wavelet Transform (2D DTCWT) and the three-dimensional Dual-Tree Complex Wavelet Transform (3D DTCWT) (Lilly & Olhede, 2008; Selesnick & Li, 2003); temporal video denoising techniques use temporal filtering only (Zlokolica, 2006); and spatio-temporal techniques combine spatial and temporal denoising (Zlokolica, 2006).

The need for fast and accurate video noise estimation algorithms arises from the fact that many fundamental video processing algorithms, such as compression, segmentation, motion estimation, and format conversion, adapt their parameters and improve their performance when the noise level is known. The effectiveness of video processing methods can be significantly reduced in the presence of noise. When information about the noise becomes available, processing can be adapted to the amount of noise, yielding more stable processing methods (Francois, Amer & Wang, 2006).

A noise estimation technique calculates the level of white Gaussian noise, the most commonly assumed noise type in video processing applications, contained in a corrupted video signal. Once the noise variance is available, video denoising algorithms (e.g., the 2D Discrete Wavelet Transform (2D DWT) and the 2D Dual-Tree Complex Wavelet Transform (2D DTCWT)) can be adapted to the amount of noise for improved performance.
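To make the wavelet-domain denoising step concrete, the sketch below implements a single-level 2D Haar DWT with soft thresholding of the detail subbands. This is a simplified stand-in, not the chapter's actual filter bank or threshold rule: we assume a Haar basis, even-sized frames, and the universal threshold sigma * sqrt(2 ln N) as one common way to adapt the filter to the noise variance.

```python
import numpy as np

def haar2d(x):
    """Single-level 2D Haar DWT (columns then rows); x must have even sides."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)   # approximation subband
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)   # detail subbands
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    rows, cols = ll.shape
    lo = np.empty((2 * rows, cols)); hi = np.empty((2 * rows, cols))
    lo[0::2] = (ll + lh) / np.sqrt(2); lo[1::2] = (ll - lh) / np.sqrt(2)
    hi[0::2] = (hl + hh) / np.sqrt(2); hi[1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((2 * rows, 2 * cols))
    x[:, 0::2] = (lo + hi) / np.sqrt(2); x[:, 1::2] = (lo - hi) / np.sqrt(2)
    return x

def soft(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def dwt_denoise(frame, sigma):
    """Threshold only the detail subbands, scaled by the noise level sigma."""
    ll, lh, hl, hh = haar2d(frame)
    t = sigma * np.sqrt(2.0 * np.log(frame.size))  # universal threshold
    return ihaar2d(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

A production system would use multiple decomposition levels and longer filters (or the complex dual tree for shift invariance), but the structure, decompose, shrink details, reconstruct, is the same.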

Video noise can be estimated spatially or temporally. A widely used spatial noise estimation method calculates the variance (as a measure of homogeneity) over a set of image blocks and averages the smallest block variances as an estimate of the image noise variance. Spatial variance-based methods tend to overestimate the noise in less noisy images and underestimate it in highly noisy and textured images. Therefore, measures other than the variance were introduced in (Francois et al., 2006) to determine homogeneous blocks. Temporal noise estimation evaluates noise using motion information (Song & Chun, 2005). Such an approach is very expensive for hardware implementations, while its estimation accuracy is not significantly better than that of spatial methods.
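The block-variance method described above can be sketched in a few lines. The block size, the fraction of smallest-variance blocks kept, and the selection threshold below are illustrative assumptions, not the chapter's tuned parameters:

```python
import numpy as np

def estimate_noise_sigma(frame, block=8, keep_frac=0.1):
    """Spatial noise estimate: tile the frame into blocks, compute each
    block's variance, and average the smallest keep_frac of them on the
    assumption that the most homogeneous blocks contain mostly noise."""
    h, w = frame.shape
    block_vars = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            block_vars.append(frame[i:i + block, j:j + block].var())
    block_vars = np.sort(np.array(block_vars))
    k = max(1, int(len(block_vars) * keep_frac))
    return float(np.sqrt(block_vars[:k].mean()))

def choose_denoiser(sigma_est, threshold=10.0):
    """Hypothetical selection rule in the spirit of the proposed system:
    fast 2D DWT at low noise, higher-quality 2D DTCWT at high noise.
    The threshold value is illustrative only."""
    return "2D DWT" if sigma_est < threshold else "2D DTCWT"
```

On textured frames this estimator inherits the bias noted above (the smallest-variance blocks may still contain image detail), which is why (Francois et al., 2006) replaces the plain variance with other homogeneity measures.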
