Image Segmentation Using Rough Set Theory: A Review

Payel Roy, Srijan Goswami, Sayan Chakraborty, Ahmad Taher Azar, Nilanjan Dey
Copyright: © 2017 | Pages: 13
DOI: 10.4018/978-1-5225-0571-6.ch059

Abstract

In the domain of image processing, image segmentation has become one of the key applications, involved in most image-based operations. Image segmentation refers to the process of partitioning an image into its constituent regions. Like several other image processing operations, however, image segmentation faces problems and issues as the segmentation task grows more complicated. A considerable body of earlier work has shown that rough set theory can be a useful method for overcoming such complications during image segmentation. Rough set theory supports very fast convergence and helps avoid the local-minima problem, thereby enhancing the performance of the EM algorithm and yielding better results. During rough-set-theoretic rule generation, each band is individualized by using fuzzy-correlation-based gray-level thresholding. The use of rough sets in image segmentation can therefore be very useful. In this paper, previous rough-set-based image segmentation methods are summarized in detail and categorized accordingly. Rough-set-based image segmentation provides a stable and superior framework for image segmentation.
Chapter Preview

1. Introduction

Image segmentation is the process of splitting an image space into non-overlapping, meaningful, homogeneous regions. The success of image analysis depends on the quality of this segmentation. The two major approaches to segmenting remotely sensed images are pixel classification and gray-level thresholding. In gray-level thresholding, a set of thresholds is obtained such that all pixels with gray values within a given threshold interval constitute one region type. In pixel classification, by contrast, homogeneous regions are determined by clustering the feature space of multiple image bands. Both thresholding and pixel classification algorithms may be either local (context dependent) or global (blind to the position of a pixel). The multispectral character of most remotely sensed images makes pixel classification the natural choice for their segmentation.
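
To make the thresholding idea concrete, the following is a minimal sketch of global gray-level thresholding, assuming a single-band image held in a NumPy array; the threshold values and the tiny synthetic image are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def threshold_segment(image, thresholds):
    """Partition a gray-level image into regions using a set of global
    thresholds: pixels whose gray values fall between consecutive
    thresholds receive the same region label."""
    # np.digitize maps each gray value to the index of the interval
    # defined by the sorted thresholds, i.e. its region label.
    return np.digitize(image, sorted(thresholds))

# Illustrative use: a synthetic 4x4 gray-level image and two thresholds
# splitting the gray scale into three region types.
image = np.array([[ 10,  20, 200, 210],
                  [ 15, 120, 130, 220],
                  [ 12, 125, 135, 215],
                  [ 11,  18, 205, 230]])
labels = threshold_segment(image, thresholds=[64, 180])
print(labels)  # 0 = dark, 1 = mid-gray, 2 = bright region
```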

Statistical methods are widely used in the unsupervised pixel classification framework because of their ability to manage the uncertainties arising from both the presence of mixed pixels and measurement error. In most statistical approaches, an image is modelled as a "random field", a collection of two random variables: the first takes values in the field of "classes", while the second takes values in the field of "observations" or "measurements". The segmentation problem then reduces to a standard statistical clustering problem. Statistical clustering describes the probability density function of the data as a mixture model, which asserts that the data is a combination of individual component densities (generally Gaussians), each corresponding to a cluster. The task is to identify, given the data, a set of populations within it and to provide a model for every population. The Expectation Maximization (EM) algorithm is a popular and effective technique for estimating the mixture model parameters. It iteratively refines an initial cluster model to fit the data better and terminates at a solution that is locally optimal for the underlying clustering criterion (Mobahi, et al. 2011). An advantage of EM is its capability to handle uncertainties due to mixed pixels, which also helps in designing multivalued recognition systems. The EM algorithm nevertheless has some limitations (a minimal sketch of the algorithm follows the list below):

  1. The number of clusters needs to be known in advance.

  2. It can only model convex clusters.

  3. The solution depends strongly on the initial conditions.
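
The sketch below illustrates the E-step/M-step loop described above for a one-dimensional Gaussian mixture over gray values. It is not the specific EM or SEM variant the chapter reviews; the function name em_gmm_1d and the synthetic data are assumptions made for demonstration.

```python
import numpy as np

def em_gmm_1d(x, k, iters=50, seed=0):
    """Minimal EM for a 1-D Gaussian mixture: alternate E-steps
    (posterior responsibilities) with M-steps (parameter updates)."""
    rng = np.random.default_rng(seed)
    # Initialization: EM is sensitive to these values (limitation 3),
    # and k must be supplied by the user (limitation 1).
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Illustrative use on synthetic gray values drawn from two populations.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(60, 10, 500), rng.normal(180, 15, 500)])
print(em_gmm_1d(x, k=2))
```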

The first limitation is a serious handicap in satellite image processing, since it is difficult to determine a priori the number of classes in real images. To overcome the third problem, several methods have been suggested for determining "good" initial parameters for EM, mainly based on two-stage clustering (Rao, et al. 2009) and subsampling with voting. However, most of these methods are sensitive to noise and/or have high computational requirements. The Stochastic EM (SEM) algorithm (Stockman & Shapiro, 2001) for image segmentation is another attempt in this direction; it provides robustness to initialization, fast convergence, and an upper bound on the number of classes. Rough set theory (Ohlander, et al. 1978) provides an effective analysis of data by constructing or synthesizing approximations of set concepts from the acquired data. Its key notions are "reducts" and "information granules". An information granule formalizes the concept of finite-precision representation of objects in real-life situations, and reducts represent the core of an information system (both in terms of objects and features) in a granular universe. An important use of rough set theory and granular computing has been in generating logical rules for association and classification (Lindeberg & Li, 1997). These logical rules correspond to different important regions of the feature space, which represent data clusters.
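
To make the granule vocabulary concrete, the following is a minimal sketch of rough-set lower and upper approximations of a target set under an indiscernibility relation. The helper approximations and the granulation of pixels by coarse gray-level bins are hypothetical illustrations, not the chapter's method.

```python
def approximations(universe, target, key):
    """Rough-set lower/upper approximations of `target` under the
    indiscernibility relation induced by `key` (objects with equal
    keys belong to the same information granule)."""
    granules = {}
    for obj in universe:
        granules.setdefault(key(obj), set()).add(obj)
    target = set(target)
    lower, upper = set(), set()
    for g in granules.values():
        if g <= target:   # granule certainly inside the target set
            lower |= g
        if g & target:    # granule possibly inside the target set
            upper |= g
    return lower, upper

# Illustrative use: pixels as (id, gray value), granulated by coarse
# gray-level bins, approximating an assumed set of "object" pixels.
pixels = [(i, v) for i, v in enumerate([10, 12, 60, 62, 200, 205])]
object_pixels = {pixels[2], pixels[4], pixels[5]}
lower, upper = approximations(pixels, object_pixels,
                              key=lambda p: p[1] // 64)
print(sorted(lower))  # certain region: granules wholly in the set
print(sorted(upper))  # possible region: granules touching the set
```

The gap between the two approximations is the boundary region, which is where rough-set-based segmentation methods defer or refine decisions about ambiguous pixels.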
