Hybrid Data Mining Approach for Image Segmentation Based Classification


Mrutyunjaya Panda (Department of Computer Science, Utkal University, Bhubaneswar, India), Aboul Ella Hassanien (Faculty of Computers & Information, Cairo University, Giza, Egypt) and Ajith Abraham (Machine Intelligence Research Labs (MIR Labs), Auburn, WA, USA & VSB Technical University of Ostrava, Ostrava, Czech Republic)
Copyright: © 2016 |Pages: 17
DOI: 10.4018/IJRSDA.2016040105

Abstract

The evolutionary harmony search algorithm is valued for its ability to search the solution space both locally and globally. Wavelet-based feature selection, in turn, provides localized frequency information about a signal, which makes it promising for efficient classification. Research in this direction indicates that a wavelet-based neural network may become trapped in a local minimum, whereas a fuzzy harmony search based algorithm addresses that problem effectively and can reach a near-optimal solution. In this paper, a hybrid wavelet-based radial basis function (RBF) neural network (WRBF) and a feature-subset harmony search based fuzzy discernibility classifier (HSFD) are proposed as data mining techniques for image segmentation based classification. The authors use the Lena RGB image, a magnetic resonance (MR) image, and a computed tomography (CT) image for analysis. The simulation results show that the wavelet-based RBF neural network outperforms the harmony search based fuzzy discernibility classifier.

1. Introduction

Image segmentation divides an image into regions in order to identify homogeneous areas, with the goal of simplifying the representation of the image into one that is easier to interpret. In the segmented image, each object is labelled so as to reflect the “actual structure” of the data and to provide a detailed description of the original image for further processing. Whether image segmentation is a classification problem or a clustering problem depends heavily on whether the objects are spatially separated. When the features describing each pixel correspond to a pattern, and each image region (i.e., a segment) corresponds to a cluster (Jain, Murty and Flynn, 1999), image segmentation can be treated as a clustering problem and solved with methods such as K-Means (Tou and Gonzalez, 1974), FCM (Trivedi and Bezdek, 1986), or ISODATA (Ball and Hall, 1967).
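The clustering view of segmentation can be sketched with a plain NumPy K-means on gray-level intensities (the paper cites K-Means among others; the toy image, parameter names, and two-cluster setup here are illustrative, not the authors' experimental configuration):

```python
import numpy as np

def kmeans_segment(image, k=2, iters=20, seed=0):
    """Cluster pixel intensities into k segments; return a label image."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 1).astype(float)
    # Initialise centres from randomly chosen pixels.
    centres = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest centre.
        labels = np.argmin(np.abs(pixels - centres.T), axis=1)
        # Recompute each centre as the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean()
    return labels.reshape(image.shape)

# Toy image: dark background (values near 10) with a bright square (near 200).
img = np.full((8, 8), 10.0)
img[2:6, 2:6] = 200.0
seg = kmeans_segment(img, k=2)
```

Each pixel is treated as a one-dimensional pattern (its intensity), so the cluster labels directly form the segment map; richer patterns (color, texture) would simply widen the feature vector.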

On the other hand, the intensity values of patches from the same subject are rarely independent; they often share information about the inherent structure of the images, which is very useful for image classification. Recently, authors have extracted correlated features from patches (Liu, Zhang and Shen, 2013) or ROIs (Wee, Yap and Shen, 2012) to capture the relationships among patches of the same subject, which has been shown to improve classification accuracy.
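One simple way to realise this correlated-feature idea is to represent a subject by the pairwise Pearson correlations between its patches' intensity vectors, so that shared structure among patches enters the feature set. The sketch below assumes non-overlapping square patches; the patch size and layout are illustrative choices, not the cited authors' setup:

```python
import numpy as np

def patch_correlation_features(image, patch=4):
    """Split the image into non-overlapping patches and return the
    upper-triangular entries of their correlation matrix as a vector."""
    h, w = image.shape
    patches = [image[y:y + patch, x:x + patch].ravel()
               for y in range(0, h - patch + 1, patch)
               for x in range(0, w - patch + 1, patch)]
    corr = np.corrcoef(np.stack(patches))   # patch-by-patch correlations
    iu = np.triu_indices(len(patches), k=1)
    return corr[iu]                          # feature vector for a classifier

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
feats = patch_correlation_features(img)      # 4 patches -> 6 features
```

The resulting vector can be fed to any classifier; because each feature couples two patches, the relationships among patches of the same subject are inherited by the representation rather than discarded.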

Looking at how an image is partitioned into different regions, and at the discontinuities that form the boundaries between those regions, segmentation methods can be classified into several types. First, pixel-based (or point-based) segmentation is among the simplest approaches (Ruz, Estevez and Perez, 2005). Its drawback is a bias in the size of segmented objects when the objects vary in their gray values: darker objects become too small, while brighter objects become too large. This variation arises from the gradual change in gray value from the background to the object. The bias can be remedied, however, by taking the mean of the object and background gray values as the threshold. Second, edge-based segmentation avoids the size bias without requiring a complex thresholding scheme; it works whether the objects need different thresholds or share the same gray value. This approach locates an edge at an extremum of the first-order derivative or at a zero crossing of the second-order derivative (Robinson, 1977; Canny, 1986). Third, region-based methods focus on features of the original image, where a feature represents not a single pixel but a small neighbourhood, depending on the mask sizes of the operators used (Moghaddamzadeh and Bourbakis, 1997). It is worth noting that at the edges of objects the mask includes pixels from both the object and the background, so no useful feature can be computed there. Hence, at the edges, the mask should be limited to points of either the object or the background alone. This raises the question of whether the object and the background remain distinguishable after the feature is computed.
To solve this, one may extract the features without taking any object boundaries into consideration and then perform segmentation, limiting the masks of the neighbourhood operations at object edges to either object or background pixels depending on the location of the centre pixel. Repeated experiments are advisable in order to obtain stable results from this approach. Fourth, model-based segmentation is applied when specific knowledge about the geometrical shape of the objects is available, which is not assumed in the other cases.
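The feature-first strategy just described can be sketched as follows: a small-neighbourhood feature (here a 3x3 local mean) is computed for every pixel without reference to object boundaries, and segmentation is then performed on the feature image. The mask radius, the midpoint threshold, and the toy image are illustrative assumptions:

```python
import numpy as np

def local_mean_feature(image, radius=1):
    """Per-pixel local mean over a (2*radius+1)^2 mask, edge-padded."""
    padded = np.pad(image, radius, mode="edge")
    h, w = image.shape
    feat = np.zeros_like(image, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            feat += padded[radius + dy : radius + dy + h,
                           radius + dx : radius + dx + w]
    return feat / (2 * radius + 1) ** 2

# Toy image: background near 30 with a brighter object near 150.
img = np.full((8, 8), 30.0)
img[2:6, 2:6] = 150.0
feat = local_mean_feature(img)
# Segment the feature image with a simple midpoint threshold.
seg = feat > 0.5 * (feat.min() + feat.max())
```

Near object edges the mask mixes object and background pixels, which blurs the feature values there; in practice this is where one would restrict the mask to one side, as the text suggests.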
