Denoising and Contrast Enhancement in Dental Radiography

N.A. Borghese, I. Frosio
DOI: 10.4018/978-1-60566-292-3.ch006

Abstract

This chapter shows how large improvements in image quality can be obtained when radiographs are filtered using adequate statistical models. In particular, it shows that impulsive noise, which appears as a random pattern of light and dark pixels on raw radiographs, can be efficiently removed. A switching median filter is used for this purpose: failed pixels are identified first and then corrected through local median filtering. The critical stage is the correct identification of the failed pixels. We show here that a great improvement can be obtained by considering an adequate sensor model and a principled noise distribution, constituted of a mixture of photon-counting noise and uniformly distributed impulsive noise. It is then shown that the contrast of cephalometric images can be largely increased by applying different grey-level stretching to bone and soft tissue. The two tissues are identified through an adequate mixture model derived from histogram analysis, composed of two Gaussians and one inverted log-normal. Results show that both soft and bone tissue become clearly visible in the same image under a wider range of conditions. Both filters work in quasi-real time on images larger than five megapixels.

Introduction

Principled statistical models have been introduced in the imaging field as an effective alternative to classical linear and non-linear filtering. The first efficient statistical algorithms appeared in the fields of astrophysics (Lucy 1974, Richardson 1974) and PET imaging (Shepp and Vardi 1982). These pioneering works were the first to take the Poisson nature of the noise explicitly into account in the filtering stage. This leads to the formulation of a non-linear problem, whose cost function is the Kullback-Leibler divergence, also known as the Csiszár divergence (Csiszár 1991). Its minimization can be carried out efficiently through the Expectation Maximization (EM) algorithm, an iterative approach for estimating the parameters of a statistical model. EM alternates between the computation of an Expectation function, which describes the expected value of the negative log-likelihood over a set of latent variables, and the Maximization of this function, which adjusts the values of the parameters (Bishop 2006).
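The EM iteration described above can be illustrated on a toy problem. The following sketch fits a two-component 1-D Gaussian mixture by alternating the E-step (computing the responsibilities, the latent variables here) and the M-step (re-estimating the parameters); it is a minimal illustration of the algorithm, not the chapter's formulation, and all parameter names are ours.

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture to the samples in x
    by alternating the Expectation and Maximization steps."""
    # Crude initialisation from the data quantiles.
    mu = np.percentile(x, [25, 75]).astype(float)
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        dens = (pi / (sigma * np.sqrt(2 * np.pi)) *
                np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate the parameters from the responsibilities.
        n_k = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
        pi = n_k / len(x)
    return pi, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])
pi, mu, sigma = em_gaussian_mixture(x)
```

The same alternation, with a different likelihood, underlies the histogram mixture (two Gaussians plus one inverted log-normal) used later for tissue separation.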

It soon became clear that, by considering an adequate model of the noise, much better results could be obtained. The price to pay was a large increase in computational time. This was an obstacle to extending more principled statistical approaches to real problems, and only a few attempts were made in this direction (Geman and Geman 1984) until the last decade, when computational power made this approach feasible.

In this chapter, it is shown how, using principled statistical models, two of the major problems in radiographic imaging can be reliably solved: impulsive noise removal, a common problem for any digital radiograph, and contrast enhancement in cephalometric radiography, where the anatomical structures of both soft and bone tissue must be clearly visible in the same image (Figure 1).

Figure 1.

Panel (a) shows a raw cephalometric image. The same image is shown after the application of UM (mask size 26 × 26, gain 3) in panel (b); after GC (γ = 0.5) + UM in panel (c); after HE + UM in panel (d); and after STF (γBone = 0.25, γSoft = 1.25, TP = 52) + UM in panel (e). The rectangle highlighted in panel (e) is shown at a higher magnification in panel (f) (100 × 100 pixels); notice the presence of impulsive noise in this area.


The method described in the first part of this chapter is based on the observation that two main noise components affect a radiograph. The first, called impulsive noise, shows up as a random pattern of light and dark pixels that changes from image to image; it may be attributed to transient failures of the A/D converter or of the communication over the sensor bus. The second component is due to the emission statistics of the X-ray photons, which is typically considered Poisson (Webb 1988). Impulsive noise badly affects both the readability of the images and any further processing. Therefore, before the radiograph is presented to the clinician, this noise component has to be corrected, often transparently at the driver level.
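This two-component noise model can be simulated directly: Poisson photon-counting noise everywhere, plus a small fraction of pixels replaced by a uniformly distributed grey level. The sketch below is illustrative only; the pulse probability and grey-level range are our assumptions, not values from the chapter.

```python
import numpy as np

def corrupt_radiograph(clean, pulse_prob=0.01, max_val=255, seed=None):
    """Apply the two noise components described above to a clean image:
    Poisson photon-counting noise on every pixel, plus impulsive noise
    that replaces a fraction pulse_prob of the pixels with a uniformly
    distributed value (illustrative parameters, not the chapter's)."""
    rng = np.random.default_rng(seed)
    # Photon-counting noise: the clean intensity acts as the Poisson rate.
    noisy = rng.poisson(clean).astype(float)
    # Impulsive noise: failed pixels take a uniform random grey level.
    failed = rng.random(clean.shape) < pulse_prob
    noisy[failed] = rng.uniform(0, max_val, size=failed.sum())
    return np.clip(noisy, 0, max_val), failed

clean = np.full((200, 200), 100.0)
noisy, failed = corrupt_radiograph(clean, pulse_prob=0.02, seed=1)
```

The returned `failed` mask is the ground truth that a pulse detector should recover, which makes the simulation useful for measuring false positives and negatives.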

The classical approach to impulsive noise removal is based on a two-stage procedure: pulses are detected first, and the image is then filtered only at the failed pixels; a local median filter is used so that no pixel but the failed ones is modified. This approach has been named switching median filtering (Alparone, Baronti and Carla 1995).
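The two stages can be sketched as follows, using a simple fixed-threshold detector for stage one. The threshold-based detection rule is a deliberate simplification: the chapter's contribution is precisely to replace it with a detector derived from the sensor and noise model.

```python
import numpy as np

def switching_median(img, threshold=40.0):
    """Two-stage switching median filter: flag pixels that deviate
    strongly from their local 3x3 median, then correct only those
    pixels. The fixed threshold is a simplification of the chapter's
    statistically derived pulse-detection rule."""
    # Pad by one pixel so every pixel has a full 3x3 neighbourhood.
    padded = np.pad(img.astype(float), 1, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    med = np.median(windows, axis=(2, 3))
    # Stage 1: pulse detection.
    failed = np.abs(img.astype(float) - med) > threshold
    # Stage 2: correct only the flagged pixels with the local median.
    out = img.astype(float).copy()
    out[failed] = med[failed]
    return out, failed
```

Because only the flagged pixels are replaced, unflagged image detail passes through untouched; the quality of the result therefore hinges entirely on the detection stage, as the text discusses next.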

However, as shown in Figure 2 and Figure 3, the number of false positives identified by traditional switching median filters (consider, for instance, the Rank Conditioned Filter (RCF) of Alparone, Baronti and Carla 1995) is quite large, with a consequent significant low-pass filtering effect and loss of detail. This has motivated the development of better schemes for pulse identification.
