Edge Detection by Maximum Entropy: Application to Omnidirectional and Perspective Images

Ibrahim Guelzim, Ahmed Hammouch, El Mustapha Mouaddib, Driss Aboutajdine
Copyright © 2011 | Pages: 15
DOI: 10.4018/ijcvip.2011070101

Abstract

In edge detection, the classical operators based on derivatives are sensitive to noise, which causes detection errors. The errors are even greater for omnidirectional images, because of the geometric distortions introduced by the sensors used. This paper proposes a statistical edge detection method, invariant to image resolution and based on an entropy measure, applied to omnidirectional images without preliminary processing. The authors compare its behavior with that of existing methods on omnidirectional and perspective images, using the parameters of Fram and Deutsch as comparison criteria. For omnidirectional images, the authors use two types of neighborhood: fixed, and adapted to the parameters of the sensor. The detection results are compared visually. All tests are performed on grayscale images.
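The exact formulation of the entropy-based detector is not given in this preview; the following is only a minimal Python sketch of the general idea, assuming 8-bit grayscale input: the Shannon entropy of the gray-level distribution in a sliding window rises in transition zones between textures. The window radius and the number of histogram bins are illustrative choices, not values from the paper.

```python
import numpy as np

def local_entropy(image, radius=3, bins=32):
    """Shannon entropy of the gray-level histogram in a sliding window
    (brute force, written for clarity rather than speed)."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    # Quantize 8-bit gray levels so the window histogram is not too sparse.
    q = np.clip((img / 256.0 * bins).astype(int), 0, bins - 1)
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = q[max(0, y - radius):y + radius + 1,
                    max(0, x - radius):x + radius + 1]
            p = np.bincount(win.ravel(), minlength=bins) / win.size
            p = p[p > 0]
            out[y, x] = -np.sum(p * np.log2(p))
    return out  # high values indicate transition (edge) zones
```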

1. Introduction

Edge detection is an essential step in computer vision systems because it influences the result of the processing that follows: segmentation, image registration, 3D reconstruction, etc.

An edge is a transition zone separating two different textures, within each of which the local statistical characteristics of the image vary only slightly (Keskes et al., 1979).

In the literature, much research on edge detection based on derivatives has been presented. The proposed detectors can be divided into two large families. The first is based on finding local maxima of the first derivative; the gradient operator is often used. The second is based on the zero-crossings of the second derivative; in this case, the Laplace operator is commonly used.
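As an illustration of the two families, here is a minimal Python sketch using scipy.ndimage operators; these are textbook detectors, not the method proposed in the paper.

```python
import numpy as np
from scipy import ndimage

def gradient_magnitude(image):
    """First family: edge strength as the local gradient magnitude."""
    img = np.asarray(image, dtype=float)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy)  # edges are the local maxima of this map

def laplacian_zero_crossings(image):
    """Second family: mark sign changes (zero-crossings) of the Laplacian."""
    lap = ndimage.laplace(np.asarray(image, dtype=float))
    signs = lap > 0
    zc = np.zeros_like(signs)
    zc[:-1, :] |= signs[:-1, :] != signs[1:, :]   # vertical neighbors
    zc[:, :-1] |= signs[:, :-1] != signs[:, 1:]   # horizontal neighbors
    return zc
```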

To detect an edge, some authors propose computing an approximation of the derivative applied directly to the pixels (Roberts, 1965; Prewitt, 1970; Pingle & Tenenbaum, 1971). Others compute it from a local least-squares fit (Haralick, 1984; Lindeberg, 1998). In Modestino and Fries (1977), the authors propose representing the edge by a stochastic model and use the minimization of the mean squared error of a spatial filter as the optimization criterion. Shanmugam et al. (1979) propose a filter that maximizes signal energy near the edges; however, the resulting filter does not localize edges well, because the asymptotic approximation used was incorrect (Bourennane et al., 1993). Shen and Castan (1986) overcame this localization problem by proposing a filter of exponential form to detect unit-step edges. Torre and Poggio (1986) proved the need for a regularization filter before differentiation, presenting a complete theory of the differentiation of a digital signal. Canny (1986) was the first to propose analytical expressions for the optimization criteria of edge detection, introducing the concepts of non-maxima suppression and hysteresis thresholding (Bourennane et al., 1993). An extension of Canny's band-limited filter was suggested by Deriche, who proposed an efficient implementation using recursive filters (Deriche, 1987; Lindeberg, 1998).
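Canny's hysteresis thresholding can be sketched as follows; a connected-component formulation is one common way to implement it, and the two thresholds are illustrative parameters, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def hysteresis(magnitude, low, high):
    """Keep weak edge pixels only if connected to a strong edge pixel."""
    weak = magnitude > low      # candidate edge pixels
    strong = magnitude > high   # confident edge pixels (subset of weak)
    # Label the 8-connected components of the weak map, then keep every
    # component that contains at least one strong pixel.
    labels, n = ndimage.label(weak, structure=np.ones((3, 3)))
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False             # label 0 is the background
    return keep[labels]
```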

The problem common to these derivative-based operators is that, by themselves, they do not give good results on real images, where intensity changes are rarely sharp and abrupt (Deriche, 1987). Because of the differentiation, they are very sensitive to noise (Qian & Huang, 1996). To mitigate the effect of noise, detection is preceded by smoothing and followed by thresholding.
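This classical mitigation pipeline, smoothing then differentiation then thresholding, might be sketched as below; sigma and the threshold are illustrative values.

```python
import numpy as np
from scipy import ndimage

def classical_edges(image, sigma=1.5, threshold=20.0):
    """Smooth, differentiate, threshold: the classical pipeline."""
    smoothed = ndimage.gaussian_filter(np.asarray(image, dtype=float), sigma)
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    return np.hypot(gx, gy) > threshold
```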

The detection problem is not completely solved, however, because smoothing introduces undesirable effects such as loss of information or displacement of important structures in the image (Ziou & Tabbone, 1998). These effects are most pronounced in omnidirectional images, where conventional edge detectors cannot provide good results (Daniilidis et al., 2002).

Omnidirectional vision is a vision process that provides a sphere of view of the world observed from its center. It enlarges the field of view so as to collect the maximum amount of information. In artificial systems, omnidirectional vision is obtained by combining a camera with a mirror of revolution which, by reflecting the light rays coming from all directions, forms an omnidirectional image once they are projected onto the sensor.
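For central catadioptric sensors of this kind, image formation is often described with the unified sphere model of Geyer and Daniilidis. The sketch below follows that model; the mirror parameter xi and the camera intrinsics are placeholders, since the paper's sensor parameters are not given in this preview.

```python
import numpy as np

def project_unified(point, xi=0.9, f=300.0, u0=320.0, v0=240.0):
    """Project a 3D point to omnidirectional image coordinates."""
    # 1. Project the scene point onto the unit sphere around the viewpoint.
    xs, ys, zs = np.asarray(point, dtype=float) / np.linalg.norm(point)
    # 2. Re-project from a point at distance xi above the sphere center onto
    #    the image plane; this second step produces the radial distortion
    #    and non-uniform resolution discussed below.
    u = f * xs / (zs + xi) + u0
    v = f * ys / (zs + xi) + v0
    return u, v
```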

The resulting images have non-uniform resolution and contain geometric distortions, which are at the origin of the poor performance of classical edge detectors.

To apply these detectors to omnidirectional images, a prior adaptation of the neighborhood is needed. In Strauss and Comby (2007) and Jacquey et al. (2007), the authors propose computing the neighborhood by projecting it onto a cylinder. Other authors have chosen to use the equivalence sphere for the detection of straight lines and rectangles (Fiala & Basu, 2002; Bazin et al., 2007).
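As a rough illustration of such a neighborhood adaptation (not the construction used in any of the cited works), one can lift a pixel to the equivalence sphere, take fixed angular steps there, and re-project each displaced point; xi and the intrinsics below are the same placeholders as above.

```python
import numpy as np

def lift_to_sphere(u, v, xi=0.9, f=300.0, u0=320.0, v0=240.0):
    """Inverse unified projection: image point -> point on the unit sphere."""
    x, y = (u - u0) / f, (v - v0) / f
    r2 = x * x + y * y
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])

def adapted_neighborhood(u, v, step_deg=0.5, xi=0.9, f=300.0,
                         u0=320.0, v0=240.0):
    """3x3 neighborhood at fixed angular offsets on the sphere,
    re-projected to the image (its spacing varies with image position)."""
    s = lift_to_sphere(u, v, xi, f, u0, v0)
    a = np.radians(step_deg)
    neighbors = []
    for t in (-a, 0.0, a):          # rotation about the y axis
        ry = np.array([[np.cos(t), 0.0, np.sin(t)],
                       [0.0, 1.0, 0.0],
                       [-np.sin(t), 0.0, np.cos(t)]])
        for p in (-a, 0.0, a):      # rotation about the x axis
            rx = np.array([[1.0, 0.0, 0.0],
                           [0.0, np.cos(p), -np.sin(p)],
                           [0.0, np.sin(p), np.cos(p)]])
            q = ry @ rx @ s         # displaced point, still on the sphere
            neighbors.append((f * q[0] / (q[2] + xi) + u0,
                              f * q[1] / (q[2] + xi) + v0))
    return neighbors
```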
