A Novel Fuzzy Rule Guided Intelligent Technique for Gray Image Extraction and Segmentation

Koushik Mondal (IIT Indore, India)
Copyright: © 2013 |Pages: 19
DOI: 10.4018/978-1-4666-3994-2.ch017

Abstract

Image segmentation and subsequent extraction from a noise-affected background has long remained a challenging task in the field of image processing. Various methods have been reported in the literature to this effect, including Artificial Neural Network (ANN) models (primarily supervised in nature), Genetic Algorithm (GA) based techniques, and intensity histogram based methods. Providing an extraction solution that works in unsupervised mode is an even more interesting problem. Fuzzy systems offer a fundamental methodology for representing and processing uncertainty and imprecision in linguistic information. Fuzzy systems that use fuzzy rules to represent the domain knowledge of the problem are known as Fuzzy Rule Based Systems (FRBS). The literature suggests that effort in this respect remains quite rudimentary. This chapter proposes a novel fuzzy rule guided technique that functions without any external intervention during execution. Experimental results suggest that this approach is efficient in comparison to other techniques extensively addressed in the literature. To justify the performance of the proposed technique against its competitors, the author takes recourse to established metrics such as Mean Squared Error (MSE), Mean Absolute Error (MAE), and Peak Signal to Noise Ratio (PSNR).
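For reference, the three evaluation metrics named above have standard definitions for 8-bit gray images and can be computed as follows (a minimal sketch using NumPy; the function names are illustrative, not from the chapter):

```python
import numpy as np

def mse(a, b):
    """Mean Squared Error between two same-sized gray images."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def mae(a, b):
    """Mean Absolute Error between two same-sized gray images."""
    return np.mean(np.abs(a.astype(np.float64) - b.astype(np.float64)))

def psnr(a, b, max_val=255.0):
    """Peak Signal to Noise Ratio in dB; higher means closer images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)
```

Lower MSE/MAE and higher PSNR indicate that a segmented or denoised result is closer to the reference image.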

Introduction

In traditional computing methodology, the prime considerations are precision, certainty, and rigor. By contrast, in soft computing the principal notion is that precision and certainty carry a cost, and that computation, reasoning, and decision-making should exploit, wherever possible, the tolerance for imprecision, uncertainty, approximate reasoning, and partial truth to obtain low-cost solutions. This tolerance underlies the remarkable human ability to understand distorted speech, decipher sloppy handwriting, comprehend the nuances of natural language, summarize text, recognize and classify images, drive a vehicle in dense traffic and, more generally, make rational decisions in an environment of uncertainty and imprecision. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation that lead to an acceptable solution at low cost. This, in essence, is the guiding principle of soft computing (Zadeh et al., 1994). Soft computing is a consortium of methodologies that work synergistically and provide, in one form or another, flexible information processing capability for handling ambiguous real-life situations. Its aim is to achieve tractability, robustness, and low-cost solutions by seeking an approximate solution to an imprecisely or precisely formulated problem. In the past few decades, fuzzy logic, one principal component of soft computing, has been used in a wide range of problem domains. Fuzzy logic provides an inference morphology that enables approximate human reasoning capabilities to be applied to knowledge-based systems. The theory of fuzzy logic (Zadeh et al., 1965) provides the mathematical strength to capture the uncertainties associated with human cognitive processes, such as thinking and reasoning.
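The core idea of fuzzy sets — partial membership rather than a crisp true/false — can be illustrated with a triangular membership function (an illustrative sketch only; the shape and parameters are assumptions, not taken from the chapter):

```python
def triangular_membership(x, a, b, c):
    """Degree to which x belongs to a fuzzy set shaped as a triangle
    with feet (membership 0.0) at a and c and peak (membership 1.0) at b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Example fuzzy set "medium gray" over the 0-255 intensity range:
# intensity 128 is fully "medium"; 64 and 192 are not "medium" at all,
# and intensities in between belong to the set to a partial degree.
```

Fuzzy rules combine such membership degrees instead of hard thresholds, which is what lets them tolerate imprecise intensity boundaries.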
The conventional approaches to knowledge representation lack the means for representing the meaning of fuzzy concepts. As a consequence, approaches based on first order logic and classical probability theory do not provide an appropriate conceptual framework for representing commonsense knowledge, since such knowledge is by its nature both lexically imprecise and non-categorical. Fuzzy logic is usually regarded as a formal way to describe how human beings perceive everyday concepts. In fuzzy image processing, fuzzy set theory (Zadeh et al., 1973) is applied to image processing tasks; it depends upon membership values (Zadeh et al., 1968), a rule base, and an inference engine. According to (Bezdek et al., 1999), fuzzy approaches to image segmentation can be categorized into four classes: segmentation via thresholding, segmentation via clustering, supervised segmentation, and rule-based segmentation. Among these categories, rule-based segmentation is able to take advantage of application-dependent heuristic knowledge and model it in the form of a fuzzy rule base. In this chapter, the heuristic knowledge is gathered from existing threshold segmentation methods, which helped build the rule base. Feature extraction is the process of generating features to be used in the selection and classification tasks. Feature selection reduces the number of features provided to the classification task: features that are likely to assist in discrimination are selected and used, while the rest are discarded (Fu et al., 1981). After the features are extracted, a suitable classifier must be chosen. A number of classifiers are in use, and each is found suitable for a particular kind of feature vector depending on its characteristics. The most commonly used classifier is the nearest neighbor classifier.
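As a rough illustration of rule-based segmentation of the kind described above, the sketch below labels each pixel via fuzzy memberships for "dark" and "bright" built around a heuristic threshold. The threshold, the membership shapes, and the single rule are all illustrative assumptions; they are not the chapter's actual rule base.

```python
import numpy as np

def segment_with_fuzzy_rule(img, t=128, spread=64):
    """Label each pixel object (1) or background (0) using two fuzzy sets.

    Membership in "bright" rises linearly from 0 at intensity t - spread
    to 1 at t + spread; "dark" is its complement. The single rule
    "IF pixel is bright THEN pixel is object" fires when the
    bright-membership exceeds 0.5.
    """
    x = img.astype(np.float64)
    bright = np.clip((x - (t - spread)) / (2.0 * spread), 0.0, 1.0)
    return (bright > 0.5).astype(np.uint8)
```

A real FRBS would hold many such rules (e.g. combining intensity with local contrast) and aggregate their firing strengths through an inference engine before defuzzifying to a label.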
The nearest neighbor classifier compares the feature vector of the prototype with image feature vectors stored in the database. High-level feature extraction concerns finding shapes in computer images. To recognize faces automatically, for example, one approach is to extract the component features. This requires extraction of, say, the eyes, the ears, and the nose, which are the major face features. To find them, we can use their shape: the white part of the eyes is ellipsoidal; the mouth can appear as two lines, as do the eyebrows. Shape extraction implies finding their position, their orientation, and their size. This feature extraction process can be viewed as similar to the way we perceive the world: many books for babies describe basic geometric shapes such as triangles, circles, and squares, and more complex pictures can be decomposed into a structure of simple shapes. Modular approaches partition the classification task into sub-classification tasks, solve each sub-task, and eventually integrate the results to obtain the final classification result. In other words, the classification task is partitioned such that each sub-problem can be solved in a module by exploiting local uncertainties, and the results of all the modules can then be combined by exploiting global uncertainties. The performance of each module can be improved by weighting the features according to their ability to discriminate among the output classes present in the module. In many applications, analysis can be guided by the way the shapes are arranged. The task of the pattern classifier is to search for this structure; the search becomes complicated because of the uncertainties associated with the structure. Thus, the whole pattern classification process involves manipulation of the information supplied by the instances.
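The nearest-neighbor comparison of feature vectors described above can be sketched as follows (a minimal Euclidean-distance version; the chapter does not specify the distance measure, so that choice, like all names here, is an assumption):

```python
import numpy as np

def nearest_neighbor_label(query, feature_db, labels):
    """Return the label of the stored feature vector closest to query.

    feature_db is an (n, d) array of prototype feature vectors and
    labels holds the class of each row; Euclidean distance is used.
    """
    dists = np.linalg.norm(feature_db - query, axis=1)
    return labels[int(np.argmin(dists))]
```

Because the classifier keeps every prototype, classification cost grows with the database size; that is why feature selection, which shrinks d, matters for this scheme.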
The instances contain information about the process generating them, and the extracted features reflect this information. The structures present within the features represent the information in an organized manner, so that the relationships among the variables in the classification process can be identified. Finally, in the last step, a search process recovers the information from the structure. Now, if a new pattern is encountered, the machine detects the structure to which the input pattern belongs and classifies the pattern based on that structure. Therefore, once the structure is found, the machine is capable of dealing with new situations to some extent. The choice of features to be extracted should be guided by the following concerns:
