A Neuro-Fuzzy Rule-Based Classifier Using Important Features and Top Linguistic Features

Saroj Kr. Biswas, Monali Bordoloi, Heisnam Rohen Singh, Biswajit Purkayastha
Copyright © 2016 | Pages: 13
DOI: 10.4018/IJIIT.2016070103

Abstract

Efficient feature selection for predictive and accurate classification is highly desirable in many application domains. Most attempts at neuro-fuzzy classification lose information while building an interpretable neuro-fuzzy classification model. This paper proposes an interpretable neuro-fuzzy classification model that uses only significant features without loss of knowledge, extending an existing interpretable neuro-fuzzy classification model. In the proposed model, feature importance is determined by the frequency of linguistic features, and rules are then formed from the important features. The knowledge acquired in the network can therefore be expressed as logical rules over the important features alone. The proposed model finally performs the classification task with a rule-based approach. Average accuracy under 10-fold cross validation shows that the proposed model improves the performance of an already proven neuro-fuzzy system on classification tasks.
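
As a rough illustration of the idea described above (not the paper's actual design), the following sketch ranks features by how often their linguistic terms appear in the antecedents of a set of fuzzy rules. The rule representation and function name are illustrative assumptions.

    # Hypothetical sketch: rank features by the frequency of their
    # linguistic terms (e.g. LOW/MEDIUM/HIGH) across extracted fuzzy rules.
    # Rule format (list of (feature, linguistic_term) antecedents) is an
    # assumption for illustration, not the model's actual representation.
    from collections import Counter

    def rank_features_by_linguistic_frequency(rules):
        counts = Counter(feature for rule in rules for feature, _term in rule)
        return [feature for feature, _count in counts.most_common()]

    rules = [
        [("petal_length", "HIGH"),   ("petal_width", "HIGH")],
        [("petal_length", "LOW"),    ("sepal_width", "MEDIUM")],
        [("petal_length", "MEDIUM"), ("petal_width", "LOW")],
    ]
    print(rank_features_by_linguistic_frequency(rules))
    # ['petal_length', 'petal_width', 'sepal_width']

Rules would then be rebuilt over only the top-ranked features, so the network's knowledge remains expressible as logical rules.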

Introduction

Machine learning algorithms in pattern recognition, image processing and data mining are mainly concerned with classification and clustering. These algorithms operate on huge amounts of data with many dimensions, much of which is insignificant to a specific domain. An important concept that aids classification, clustering and better understanding of the domain is feature selection (Kohavi and George 1997). Feature selection is the process of selecting a subset of features from a set of features without losing the characteristics and identity of the original object. Two kinds of features affect feature selection: irrelevant features, which provide no useful information in a given context, and redundant features, which provide no more information than the currently selected features; a toy example follows.
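
To make these two notions concrete, here is a small hypothetical example (not from the paper): a target that depends on only one feature, a near-duplicate of that feature, and pure noise. Simple correlation checks expose the redundant and the irrelevant feature.

    # Illustrative sketch: detecting irrelevant and redundant features
    # in synthetic data via correlation. Thresholds and data are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    x1 = rng.normal(size=200)                     # informative feature
    x2 = 2.0 * x1 + 0.01 * rng.normal(size=200)   # redundant: near-copy of x1
    x3 = rng.normal(size=200)                     # irrelevant: unrelated noise
    y = (x1 > 0).astype(int)                      # target depends only on x1

    print(abs(np.corrcoef(x1, y)[0, 1]))   # high -> x1 is relevant
    print(abs(np.corrcoef(x3, y)[0, 1]))   # near zero -> x3 is irrelevant
    print(abs(np.corrcoef(x1, x2)[0, 1]))  # near one -> x2 is redundant given x1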

Numerous studies have shown feature selection to be an essential part of a classifier. In real-world scenarios, many candidate features are introduced to better represent the domain, which results in features that are irrelevant or redundant to the target concept (Dash and Liu 1997). In many classification problems, the sheer size of the data makes it difficult to build good classifiers before these unwanted features are removed. Reducing the number of irrelevant/redundant features can drastically cut the running time of the learning algorithms and yields a more general classifier. Feature selection facilitates data visualization and data understanding, reduces training and utilization times, lowers measurement and storage requirements, and defies the curse of dimensionality, all of which help to improve classification performance (Guyon and Elisseeff 2003).

Feature selection can be performed with various techniques, such as mutual information (Battiti 1994; Chandrashekar and Sahin 2014), genetic algorithms (Chandrashekar and Sahin 2014; Sun, Babbs and Delp 2005; Puch, Goodman, Pei, Chia-Shun, Hovland and Enbody 1993), Bayesian networks (Inza, Larranaga and Sierra 2001) and Artificial Neural Networks (ANNs) (Ledesma, Cerda, Avina, Hernandez and Torres 2008). All of these techniques have certain limitations. Mutual information is hard to compute between features that have continuous values. In Bayesian networks, the number of candidate structures grows super-exponentially with the number of features, and the emphasis falls on dependencies among features rather than on their importance. Genetic algorithms involve some randomness, and it is very hard to assign more importance to the more significant features. Among these techniques, ANNs are the most widely used for feature selection and classification; they are well-known massively parallel computing models that exhibit excellent behavior in input-output mapping and in resolving complex artificial intelligence problems in classification tasks.
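
As a concrete illustration of the mutual-information approach (not part of this paper), scikit-learn estimates mutual information between continuous features and a class label with a k-nearest-neighbour estimator, which sidesteps the difficulty with continuous values noted above:

    # Hedged example: filter-style feature ranking by estimated mutual
    # information, using scikit-learn's k-NN based estimator on Iris.
    from sklearn.datasets import load_iris
    from sklearn.feature_selection import mutual_info_classif

    data = load_iris()
    mi = mutual_info_classif(data.data, data.target, random_state=0)
    for name, score in sorted(zip(data.feature_names, mi), key=lambda t: -t[1]):
        print(f"{name}: {score:.3f}")  # higher score -> more informative feature

A filter method like this ranks features independently of any classifier, whereas wrapper methods (e.g. genetic algorithms) evaluate candidate subsets by training a classifier on each.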
