Bayesian Neural Networks for Image Restoration

Radu Mutihac
Copyright: © 2009 | Pages: 8
DOI: 10.4018/978-1-59904-849-9.ch035

Abstract

Numerical methods commonly employed to convert experimental data into interpretable images and spectra rely on straightforward transforms, such as the Fourier transform (FT), or on more elaborate emerging classes of transforms, like wavelets (Meyer, 1993; Mallat, 2000), wedgelets (Donoho, 1996), ridgelets (Candes, 1998), and so forth. Yet experimental data are incomplete and noisy due to the limiting constraints of digital data recording and the finite acquisition time. The pitfall of most transforms is that imperfect data are carried directly into the transform domain along with the signals of interest. The traditional approach to data processing in the transform domain is to ignore any imperfections in the data, set to zero any unmeasured data points, and then proceed as if the data were perfect. In contrast, the maximum entropy (ME) principle proceeds from the frequency domain to the space (time) domain. The ME techniques are used in data analysis mostly to reconstruct positive distributions, such as images and spectra, from blurred, noisy, and/or corrupted data. The ME methods may be developed on axiomatic foundations based on the probability calculus, which has a special status as the only internally consistent language of inference (Skilling, 1989; Daniell, 1994). Within this framework, positive distributions ought to be assigned probabilities derived from their entropy.

Bayesian statistics provides a unifying and self-consistent framework for data modeling. Bayesian modeling deals naturally with uncertainty in data, which is handled by marginalization when making predictions of other variables. Data overfitting and poor generalization are alleviated by incorporating the principle of Occam’s razor, which controls model complexity and sets a preference for simple models (MacKay, 1992). Bayesian inference satisfies the likelihood principle (Berger, 1985) in the sense that inferences depend only on the probabilities assigned to the data that were actually measured, not on the properties of admissible data that were never acquired.

Artificial neural networks (ANNs) can be conceptualized as highly flexible multivariate regression and multiclass classification nonlinear models. However, over-flexible ANNs may discover non-existent correlations in data. Bayesian decision theory provides the means to infer how much model flexibility is warranted by the data and suppresses the tendency to read spurious structure into the data. Any probabilistic treatment of images depends on knowledge of the point spread function (PSF) of the imaging equipment and on assumptions about noise, image statistics, and prior knowledge. In contrast, the neural approach requires only relevant training examples in which the true scenes are known, irrespective of our inability or bias to express prior distributions. Trained ANNs provide much faster image restoration, especially in the presence of strong implicit priors in the data, nonlinearity, and nonstationarity. The most remarkable work in Bayesian neural modeling was carried out by MacKay (1992, 2003) and Neal (1994, 1996), who established the theoretical framework of Bayesian learning for adaptive models.
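The abstract's contrast between probabilistic restoration and trained networks hinges on how Bayesian methods control model flexibility. As a minimal sketch of that mechanism, under assumed details not taken from the chapter, consider MacKay's evidence framework applied to a linear-in-parameters model: the prior precision alpha and the noise precision beta are re-estimated from the data, which automatically limits the effective number of parameters (the Occam's razor effect mentioned above). The helper evidence_fit and the toy polynomial example below are illustrative only.

# A minimal, assumed sketch of the evidence framework for a linear-in-parameters
# model: hyperparameters alpha (prior precision) and beta (noise precision) are
# iteratively re-estimated from the data (Bishop-style update rules).
import numpy as np

def evidence_fit(Phi, t, iters=50):
    """Bayesian linear regression with evidence-based alpha/beta updates."""
    N, M = Phi.shape
    alpha, beta = 1.0, 1.0                       # crude initial hyperparameters
    PtP = Phi.T @ Phi
    eig = np.linalg.eigvalsh(PtP)                # eigenvalues of Phi^T Phi
    for _ in range(iters):
        A = alpha * np.eye(M) + beta * PtP       # posterior precision of the weights
        m = beta * np.linalg.solve(A, Phi.T @ t) # posterior mean of the weights
        gamma = np.sum(beta * eig / (alpha + beta * eig))  # effective no. of parameters
        alpha = gamma / (m @ m)                  # re-estimate prior precision
        beta = (N - gamma) / np.sum((t - Phi @ m) ** 2)    # re-estimate noise precision
    return m, alpha, beta

# Toy usage: fit a noisy line with a 10-term polynomial; the evidence keeps the
# effective number of parameters (gamma) small instead of overfitting.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
t = 2.0 * x + 0.1 * rng.standard_normal(x.size)
Phi = np.vander(x, 10, increasing=True)
m, alpha, beta = evidence_fit(Phi, t)

Here gamma, the number of well-determined parameters, plays the role of the complexity measure that Occam's razor trades off against the fit to the data.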
Chapter Preview

Background

The Bayesian approach to image restoration is based on the assumption that all of the relevant image information can be stated in probabilistic terms and that the prior probabilities are known. The ME principle optimally sets prior probabilities for positive, additive distributions. Yet Bayes’ theorem and the ME principle share one common feature: the updating of a state of knowledge. In some cases, running Bayes’ theorem in one hypothesis space and applying the ME principle in another lead to similar calculations.
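In the standard entropic-prior formulation of Bayesian restoration (the notation below is generic rather than quoted from the chapter), the two ingredients combine as follows: a positive image f, observed through the instrument response R (the PSF) with noise levels sigma_k, is assigned an entropic prior with default model m and regularization constant alpha,

\[
P(f \mid d) \;\propto\; P(d \mid f)\, P(f), \qquad
P(f) \;\propto\; \exp\{\alpha S(f)\}, \qquad
S(f) = \sum_i \Big( f_i - m_i - f_i \ln \frac{f_i}{m_i} \Big),
\]
\[
P(d \mid f) \;\propto\; \exp\!\Big(-\tfrac{1}{2}\chi^2\Big), \qquad
\chi^2 = \sum_k \frac{\big(d_k - (R f)_k\big)^2}{\sigma_k^2},
\]

so the restored image maximizes \(\alpha S(f) - \tfrac{1}{2}\chi^2\). Assigning the prior is the province of the ME principle, while updating it with the measured data is Bayes’ theorem; these are the two complementary updates of a state of knowledge referred to above.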

Key Terms in this Chapter

Entropy: A measure of the uncertainty associated with a random variable. Entropy quantifies the expected information content of a piece of data.

Probabilistic Inference: An effective approach to approximate reasoning and empirical learning in AI.

Deconvolution: An algorithmic method for eliminating noise and improving the resolution of digital data by reversing the effects of convolution on recorded data.

Digital Image: A representation of a 2D/3D image as a finite set of digital values called pixels/voxels typically stored in computer memory as a raster image or raster map.

Point Spread Function (PSF): The output of the imaging system for an input point source.

Artificial Neural Networks (ANNs): Highly parallel nets of interconnected simple computational elements, which perform elementary operations like summing the incoming inputs (afferent signals) and amplifying/thresholding the sum.

Bayesian Inference: An approach to statistics in which all forms of uncertainty are expressed in terms of probability.

Image Restoration: The process of improving a blurred image, typically by deconvolving the PSF of the imaging system, so that the result is a sharper and more detailed image (a minimal deconvolution sketch follows these definitions).
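As a concrete illustration of the deconvolution step behind image restoration, here is a minimal sketch of the Richardson-Lucy iteration for a known, normalized PSF and a non-negative image (one standard choice, not necessarily the method used in the chapter), using NumPy and SciPy:

# A minimal sketch of PSF deconvolution via the Richardson-Lucy iteration.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
    """Iteratively restore a non-negative image blurred by a known, normalized PSF."""
    estimate = np.full_like(blurred, blurred.mean())    # flat initial guess
    psf_flip = psf[::-1, ::-1]                          # mirrored PSF (adjoint)
    for _ in range(iterations):
        predicted = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (predicted + eps)             # data / model prediction
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate

# Toy usage on a synthetic star field blurred by a normalized Gaussian PSF.
rng = np.random.default_rng(0)
x = np.arange(-3, 4)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 2.0)
psf /= psf.sum()
scene = (rng.random((64, 64)) < 0.01).astype(float)     # sparse point sources
blurred = fftconvolve(scene, psf, mode="same")
restored = richardson_lucy(blurred, psf)

For routine use, scikit-image provides an equivalent ready-made routine, skimage.restoration.richardson_lucy.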
