Image Analysis for Exudate Detection in Retinal Images

Gerald Schaefer, Albert Clos
DOI: 10.4018/978-1-60566-768-3.ch013

Abstract

Diabetic retinopathy is recognised as one of the most common causes of blindness. Early diagnosis is important and is based on detection of features such as exudates during eye fundus image screening. In this chapter it is shown how areas corresponding to exudates can be automatically detected using a neural network that, following contrast enhancement and vessel and optic disc extraction steps, classifies each image pixel as exudate or non-exudate. Experimental results on an image set with known ground truth verify the usefulness of the presented approach.

Introduction

Diabetic retinopathy is recognised as one of the most common causes of blindness (Aiello et al., 1998). It can, however, be diagnosed early based on various features that can be detected in eye fundus images. One of these indicators is exudates, which typically form in groups or rings surrounding leakages of plasma and often appear yellowish in the image. An example of a retinal image with exudates is given in Figure 1.

Figure 1. Sample retinal image with exudates

Various approaches for detecting exudates have been presented in the literature. In (Gardner et al., 1996) a neural network for retinal image analysis was developed whose algorithm was able to identify vessels, exudates and haemorrhages. A retinal image was divided into disjoint 20×20 pixel regions, and each region was labelled by an expert as either exudate or non-exudate. Each pixel of a region corresponded to one input of a backpropagation network, giving a total of 400 inputs. A sensitivity of 93.1% in detecting exudates was reported.
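As a rough illustration of this region-based scheme, the sketch below splits a greyscale fundus image into disjoint 20×20 windows and feeds the 400 raw pixel values of each window to a small feedforward network; the data handling, labels and network size are assumptions for illustration only, not Gardner et al.'s actual implementation.

import numpy as np
from sklearn.neural_network import MLPClassifier

def extract_regions(image, size=20):
    """Split a greyscale fundus image into disjoint size x size regions."""
    h, w = image.shape
    regions = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            # each region is flattened to size*size values (400 for 20x20)
            regions.append(image[y:y + size, x:x + size].ravel())
    return np.array(regions)

# hypothetical training data: 'images' and expert 'labels' (1 = exudate region)
# X = np.vstack([extract_regions(img) for img in images])
# clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=500)
# clf.fit(X, labels)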

Osareh et al. (2003) used histogram specification as a pre-processing step to eliminate colour variations and then segmented the images using a fuzzy c-means clustering technique. Each segmented region was characterised by 18 visual features and classified as either exudate or non-exudate. A two-layer perceptron network was trained with these features, achieving a sensitivity of 93% and a specificity of 94.1%.
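The routine below is a minimal, generic fuzzy c-means clustering of pixel colour vectors, of the kind such a segmentation step could build on; it is not Osareh et al.'s exact pipeline, and the number of clusters, fuzziness exponent and stopping criteria are illustrative assumptions.

import numpy as np

def fuzzy_cmeans(X, c=4, m=2.0, iters=100, tol=1e-5, seed=0):
    """X: (N, d) array of pixel colour vectors; returns cluster centres and memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships sum to 1 per pixel
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))  # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centres, U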

Walter et al. (2002) utilise morphological image processing to isolate exudate regions. First, candidate regions are found based on high local contrast variation; the exact contours of the exudate regions are then extracted using morphological operators. They report a sensitivity of 92.8%.
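A rough sketch of this two-stage idea is given below: candidate regions are flagged where the local intensity variation is high, and a morphological closing then smooths their contours. The window size, threshold and structuring element are illustrative assumptions rather than the parameters used by Walter et al.

import numpy as np
from scipy.ndimage import generic_filter
from skimage.morphology import binary_closing, disk

def exudate_candidates(green_channel, var_thresh=0.01, selem_radius=3):
    """green_channel: float image in [0, 1]; the green channel is assumed here
    because bright lesions tend to show good contrast in it."""
    local_std = generic_filter(green_channel, np.std, size=7)   # local contrast measure
    candidates = local_std > var_thresh                         # high-variation pixels
    return binary_closing(candidates, disk(selem_radius))       # smooth candidate contours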

In (Sinthanayothin et al., 2002) a recursive region growing technique is employed which groups similar pixels together. Following a thresholding step, the resulting binary image shows the extracted exudate areas. Using this approach a sensitivity of 88.5% and specificity of 99.7% were achieved.
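The sketch below illustrates generic region growing of this kind, written iteratively rather than recursively to avoid deep call stacks; the similarity threshold is an assumption for illustration and not a value reported by Sinthanayothin et al.

import numpy as np

def region_grow(image, seed, thresh=0.05):
    """Grow a region from 'seed' (row, col), adding 4-connected neighbours
    whose intensity is within 'thresh' of the seed value."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    stack = [seed]
    seed_val = image[seed]
    while stack:
        y, x = stack.pop()
        if mask[y, x] or abs(image[y, x] - seed_val) > thresh:
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                stack.append((ny, nx))
    return mask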

The approach that we present in this chapter is also centred around the use of neural networks for exudate classification, but in contrast to earlier work it proceeds on a pixel-by-pixel basis and utilises only the colour information at each pixel location (Clos, Schaefer & Nolle, 2007). We first pre-process the images in order to emphasise the differences between exudate and non-exudate regions and to reduce the variation of colours in the images. We then perform vessel tracking and optic disc detection and discard the associated areas. The remaining regions are passed to a backpropagation neural network based on a sliding window data extraction mechanism. Principal component analysis (PCA) is applied in order to reduce the dimensionality of the data and speed up the training of the network. The network is then trained to differentiate exudate from non-exudate regions and hence to detect the locations of exudates in the images. Experimental results based on a ground truth dataset with known exudate locations confirm the efficacy of our technique.
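As an illustration of this pipeline, the sketch below gathers the colour values in a small window around each pixel, reduces them with PCA, and trains a feedforward network on the resulting features. The window size, number of principal components and network layout shown are illustrative choices only, not necessarily the settings used in our experiments, and the data-gathering step is only indicated in comments.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def window_features(rgb, y, x, half=2):
    """Flatten the RGB values of a (2*half+1) x (2*half+1) window centred on (y, x)."""
    return rgb[y - half:y + half + 1, x - half:x + half + 1, :].ravel()

# hypothetical training samples gathered from images with ground-truth masks,
# after contrast enhancement and removal of vessel / optic disc pixels
# X = np.array([window_features(img, y, x) for (img, y, x) in samples])
# y = np.array(labels)                       # 1 = exudate pixel, 0 = non-exudate
# pca = PCA(n_components=10).fit(X)
# clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500)
# clf.fit(pca.transform(X), y)
# classify a new pixel: clf.predict(pca.transform([window_features(im, r, c)]))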


Image Pre-Processing

Because lighting conditions are difficult to control, and because of variations in ethnic background and iris pigmentation, retinal images usually exhibit large colour and contrast variations, on both global and local scales. Figure 2 shows four different fundus images, and it is apparent that the colour variations between them need to be taken into account in order to successfully analyse the structures in the images.

Figure 2. Colour variations in different retinal images

Goatman et al. (2003) evaluated three pre-processing techniques for reducing the variation in background colour among retinal images: greyworld normalisation, histogram equalisation, and histogram specification. They found that histogram specification performed best, followed by histogram equalisation.
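As an example of histogram specification used as a normalisation step, the sketch below maps the colour histograms of one fundus image onto those of a chosen reference image using scikit-image's match_histograms; the file names are placeholders and the choice of reference image is left to the user.

from skimage import io
from skimage.exposure import match_histograms

# hypothetical file names for a reference fundus image and an image to normalise
reference = io.imread("reference_fundus.png")
image = io.imread("fundus_to_normalise.png")

# match each colour channel's histogram to that of the reference image
# (channel_axis=-1 treats the last axis as the colour channels)
matched = match_histograms(image, reference, channel_axis=-1)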
