LVQ Neural Networks in Color Segmentation

Erik Cuevas (Universidad de Guadalajara, México), Daniel Zaldivar (Universidad de Guadalajara, México), Marco Perez-Cisneros (Universidad de Guadalajara, México) and Marco Block (Freie Universität Berlin, Germany)
DOI: 10.4018/978-1-61520-893-7.ch004

Abstract

Segmentation of color images is a complex and challenging task, in particular because of changes in light intensity caused by noise and shadowing. Most segmentation algorithms do not tolerate variations in the color hue of a single object. By means of Learning Vector Quantization (LVQ) networks, neighboring neurons learn to recognize close sections of the input space; such neighboring neurons thus correspond to color regions illuminated in different ways. This chapter presents an image segmentation approach based on LVQ networks which treats the segmentation process as color-based pixel classification. The segmentator operates directly upon the image pixels using the classification properties of LVQ networks. The algorithm is applied to sampled images, showing its capacity to segment color satisfactorily despite remarkable illumination differences.

Introduction

Color discrimination plays an important role in how humans identify individual objects. Humans usually do not search a bookcase for a previously known book solely by its title: we remember the color of the cover (e.g., blue) and then search among all books with a blue cover for the one with the correct title. The same applies to recognizing an automobile in a parking lot. In general, we do not search for model A of company B, but rather look for a red car. Only when a red vehicle is spotted do we decide, according to its geometry, whether it is the one of the required kind.

Image segmentation is the first step in image analysis and pattern recognition. It is a critical and essential component, but also one of the most difficult tasks in image processing, and the quality of the overall image analysis depends on how well it is performed.

Color image segmentation is the process of extracting from the image domain one or more connected regions satisfying a uniformity (homogeneity) criterion (Ridder & Handels, 2002) which is derived from spectral components (Cheng et al., 2001; Gonzalez & Woods, 2000). These components are defined within a given color space model, most commonly the RGB model, in which a color point is defined by the color component levels of the corresponding pixel: red (R), green (G), and blue (B). Other color spaces can also be employed, considering that the performance of an image segmentation procedure is known to depend on the choice of the color space. Many authors have sought to determine the best color space for their specific color image segmentation problems. Unfortunately, there is no ideal color space that provides satisfactory results for segmenting all kinds of images.
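The influence of the color space can be made concrete with a small sketch. Using Python's standard colorsys module, the following (illustrative) example shows why a hue-based space can be more robust to illumination than RGB: two RGB triples for the same surface under different lighting differ strongly in all three channels, yet share the same hue. The specific RGB values are assumed for illustration only.

```python
import colorsys

def hue(rgb):
    """Hue component (in [0, 1)) of an RGB triple with channels in [0, 1]."""
    return colorsys.rgb_to_hsv(*rgb)[0]

bright_red = (0.80, 0.10, 0.10)  # object under direct light (assumed values)
shadow_red = (0.40, 0.05, 0.05)  # the same object in shadow (assumed values)

# The RGB components differ by a factor of two, but the hue is identical,
# so a hue-based criterion groups both pixels into the same color class.
print(hue(bright_red), hue(shadow_red))
```

This is one reason segmentation methods that must cope with shadowing often work in HSV-like spaces rather than raw RGB.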

Image segmentation has been the subject of considerable research activity over the last two decades, and many algorithms have been elaborated for gray-scale images. However, segmentation of color images, which carry much more information about the objects in a scene, has received much less attention from the scientific community. Although color information allows a more complete representation of images and more reliable segmentations, processing color images requires considerably more computation than gray-level images, and color is very sensitive to illumination changes.

This chapter considers color image segmentation as a pixel classification problem. By means of LVQ neural networks and their classification schemes, classes of pixels are detected by analyzing the similarities between pixel colors.
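The chapter does not give implementation details at this point, but the underlying idea can be sketched with a minimal LVQ1 classifier over RGB pixels: each prototype (neuron) holds a color vector, the winning prototype moves toward a training pixel when their class labels agree and away otherwise, and segmentation then labels every pixel with the class of its nearest prototype. All data, prototypes, and learning-rate values below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    """LVQ1 rule: for each training pixel, the nearest (winning) prototype
    moves toward the pixel if their labels agree, and away if they differ."""
    P = np.asarray(prototypes, dtype=float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            w = np.linalg.norm(P - x, axis=1).argmin()   # winning neuron
            step = lr if proto_labels[w] == label else -lr
            P[w] += step * (x - P[w])
    return P

def lvq_classify(pixels, prototypes, proto_labels):
    """Label each pixel with the class of its nearest prototype (segmentation
    as color-based pixel classification)."""
    d = np.linalg.norm(pixels[:, None, :] - prototypes[None, :, :], axis=2)
    return np.asarray(proto_labels)[d.argmin(axis=1)]
```

In a full segmentator there would be several prototypes per class, so that neighboring neurons can cover the same object color under different illumination, as described in the abstract.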

In particular, the color image segmentation techniques described in the literature can be categorized into four main approaches: histogram thresholding and color-space clustering; region-based approaches; edge detection; and probabilistic and soft-computing techniques. The following sections discuss each technique, summarizing its main features.

Histogram Thresholding and Color Space Clustering

Histogram thresholding is one of the most widely used techniques for monochrome image segmentation. It assumes that an image is composed of regions with different gray levels: the histogram can then be separated into a number of peaks (modes), each corresponding to one region, with a threshold value at the valley between two adjacent peaks. For color images the situation differs because several features are involved. Multiple histogram-based thresholding divides the color space by thresholding each component histogram.
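A minimal sketch of this multiple-thresholding idea, assuming one threshold per channel (in practice the thresholds would be read off the valleys of each component histogram): thresholding R, G, and B independently yields three binary masks, and bit-coding them partitions the color space into up to eight classes.

```python
import numpy as np

def multithreshold(img, thresholds):
    """Threshold each color channel of an (H, W, 3) image independently and
    bit-code the three binary masks into one class id per pixel (0..7)."""
    img = np.asarray(img)
    classes = np.zeros(img.shape[:2], dtype=np.uint8)
    for c, t in enumerate(thresholds):
        # Channel c contributes bit c of the class id.
        classes |= (img[..., c] >= t).astype(np.uint8) << c
    return classes
```

For example, with thresholds (128, 128, 128) a saturated red pixel falls in class 1 (binary 001), a green one in class 2 (010), and a blue one in class 4 (100). Choosing the thresholds from the valleys of each histogram, rather than fixing them, is what makes this a histogram-based method.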

The classes for color segmentation are built by means of a cluster identification scheme, performed either by an analysis of the color histogram (Park et al., 2001) or by a cluster analysis procedure (Chen & Lu, 2002). Once the classes are constructed, each pixel is assigned to one of them by a decision rule and then mapped back to the original image plane to produce the segmentation. The regions of the segmented image are composed of connected pixels assigned to the same class. When the distribution of color points is analyzed in the color space, such procedures generally lead to a noisy segmentation with small regions scattered through the image, so a spatial post-processing step is usually performed to reconstruct the actual regions (Nikolaev & Nikolaev, 2004).
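The cluster-analysis route can be illustrated with plain k-means in RGB space; this is a generic sketch of the family of methods cited above, not the specific procedure of Chen & Lu. The cluster centroids play the role of the color classes, and the nearest-centroid assignment is the decision rule that maps each pixel back to the image plane.

```python
import numpy as np

def kmeans_colors(pixels, k, iters=20, init=None, seed=0):
    """Lloyd's k-means on an (N, 3) array of RGB pixels.

    Returns (centers, labels): k centroids in color space and, for every
    pixel, the index of its nearest centroid."""
    pixels = np.asarray(pixels, dtype=float)
    rng = np.random.default_rng(seed)
    if init is None:
        init = pixels[rng.choice(len(pixels), size=k, replace=False)]
    centers = np.asarray(init, dtype=float).copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Decision rule: assign each pixel to its nearest centroid.
        dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Move each centroid to the mean color of its assigned pixels.
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels
```

Because this clustering looks only at color-space distances and ignores pixel positions, it exhibits exactly the weakness noted above: small mislabeled regions scatter through the image, which is why a spatial post-processing pass usually follows.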
