Edge Detection on Light Field Images: Evaluation of Retinal Blood Vessels Detection on a Simulated Light Field Fundus Photography


Yessaadi Sabrina, Laskri Mohamed Tayeb
DOI: 10.4018/978-1-6684-7544-7.ch014

Abstract

Digital fundus imaging is becoming an important task in computer-aided diagnosis and has gained an important position in the digital medical imaging domain. One of its applications is the extraction of retinal blood vessels. Object detection in machine vision and image processing has gained increasing interest due to its social and security potential. Plenoptic imaging is a promising optical technique: it captures the location and the propagation direction of the light leaving the object, which serve as efficient descriptors for detecting and tracking object displacement. In this chapter, the authors use an edge detection technique to extract and segment blood vessels in the retinal image. They propose a novel approach to detect vessels in a simulated light field fundus image, based on representing the image with its first- and second-order derivatives, well known as the gradient and Laplacian image descriptors. Given the difficulty of obtaining a light field image of the retinal fundus, the authors test their model on the image provided by Sha Tong et al.
Chapter Preview

Extracting visual information from a frame image, such as a discrete object and, more specifically, its edges, is an important step in accomplishing the object detection task. This technique is called edge detection.

Edge detection is a long-standing problem in computer vision. It is an important phase in several computer vision and image processing techniques, such as pattern recognition, image segmentation, image matching, and object detection and tracking. The method aims to locate pixels whose intensity varies strongly with respect to their neighboring pixels (Canny, 1983; Ziou, & Tabbone, 1998).
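
As an illustration only, and not the authors' implementation, the following Python sketch locates such high-variation pixels with the first-order (gradient) and second-order (Laplacian) derivatives that the chapter builds on; the file name "fundus.png" and the threshold factors are placeholder assumptions.

import cv2
import numpy as np

# Load a grayscale fundus image (placeholder file name).
img = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)

# First-order derivative: gradient magnitude from horizontal and vertical Sobel responses.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
grad_mag = np.sqrt(gx ** 2 + gy ** 2)

# Second-order derivative: Laplacian response; edges lie near its zero crossings.
lap = cv2.Laplacian(img, cv2.CV_64F, ksize=3)

# Crude edge map: strong gradient combined with a near-zero Laplacian,
# a rough stand-in for detecting zero crossings (thresholds are illustrative).
edges = (grad_mag > 0.2 * grad_mag.max()) & (np.abs(lap) < 0.05 * np.abs(lap).max())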

In the graphics community, successful approaches have been proposed to perform object detection based on edge detection, under a variety of scene conditions and camera constraints; see Ziou and Tabbone (1998) for an interesting overview of research in this field. Some of these methods achieve very good accuracy in the two-dimensional image plane; however, they suffer from object motion and background changes.

However, existing solutions rest on many underlying assumptions, due to complex scene properties (geometry, illumination, etc.). Recently, the computer vision community has converged on using light field imaging as a new representation of the image scene. A light field image, or "Lumigraph" as Gortler, Grzeszczuk, Szeliski, and Cohen (1996) called it, is a 4-dimensional image containing both the orientation and the position information of each point of the object. In effect, light field imaging records the 3D information of the scene in the image plane; this information is represented by the location of the individual light rays, defined by the position coordinates and the propagation direction of the incoming light, given by the incidence angles (Adelson, & Bergen, 1991; Levoy, & Hanrahan, 1996; Levoy, 2006).
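
As a minimal sketch of this 4-dimensional representation, assuming the usual two-plane parameterization L(u, v, s, t) with angular coordinates (u, v) and spatial coordinates (s, t), the following Python fragment shows how direction and position are coupled in a light field array; the array shape is an arbitrary placeholder, not taken from the chapter.

import numpy as np

# Simulated light field: 9x9 angular samples, 256x256 spatial samples (placeholder sizes).
U, V, S, T = 9, 9, 256, 256
light_field = np.zeros((U, V, S, T), dtype=np.float32)

# Fixing the direction (u, v) yields one sub-aperture (pinhole) view of the scene.
center_view = light_field[U // 2, V // 2, :, :]

# Fixing the position (s, t) yields the angular distribution of the rays
# through that scene point, i.e. how its radiance varies with incidence angle.
angular_patch = light_field[:, :, S // 2, T // 2]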
