Fuzzification of Euclidean Space Approach in Machine Learning Techniques

Mostafa A. Salama, Aboul Ella Hassanien
DOI: 10.4018/ijssmet.2014100103

Abstract

Euclidean calculations are a cornerstone of many machine learning techniques, such as Fuzzy C-Means (FCM) and the Support Vector Machine (SVM). The FCM technique calculates the Euclidean distance between data points, and the SVM technique calculates the dot product of two points in Euclidean space. These calculations do not consider the degree of relevance of the selected features to the target class labels. This paper proposes a modification of the Euclidean space calculations in the FCM and SVM techniques based on feature rankings obtained by evaluating the features. The authors treat a feature's rank as its membership value in the fuzzified Euclidean calculations, rather than using the crisp concept of feature selection, which keeps some features and discards the rest. Experimental results show that applying these fuzzy membership values to the Euclidean calculations in the FCM and SVM techniques yields better accuracy than both the ordinary calculation method and simply ignoring the unselected features.

1. Introduction

Fuzzification algorithms have been applied in most machine learning techniques to provide more human-like behavior, and they have succeeded in increasing the performance and accuracy of classification results. Fuzzy logic gives machine learning techniques a framework for dealing quantitatively, mathematically, and logically with semantic and ambiguous concepts (Kyoomarsi et al., 2009). The membership of a data point in a set or class label is not crisp but can be specified as a degree of membership. The machine learning techniques under investigation in this paper are C-Means clustering and the support vector machine (SVM); fuzzifying these techniques yields fuzzy C-Means (FCM) clustering and fuzzy SVM, respectively. In instance-based techniques such as C-Means clustering, fuzzy logic is used to determine the proximity of a given instance to the instances of the training set (Kotsiantis, 2007). Fuzzy logic allows a data instance to belong to two or more clusters, with the assignment based on minimizing an objective or dissimilarity function (Geweniger et al., 2010). With FCM, the centroid of a cluster is computed as the mean of all points, weighted by their degree of belonging to the cluster according to their proximity in the feature space; the degree of belonging to a cluster is inversely related to the distance from that cluster's centroid.

SVM is a non-linear binary classification algorithm based on the theory of structural risk minimization. SVM solves complex classification tasks without suffering the over-fitting problems that affect other classification algorithms. Computationally, the SVM training problem is a convex quadratic programming problem, so local minima are not an issue (Cortes et al., 1995). In fuzzy SVM, the fuzzy membership values are calculated from the distribution of the training vectors, with outliers given proportionally smaller membership values than the other training vectors (Lee et al., 2006; Shilton et al., 2007).
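To make the FCM mechanics above concrete, the following Python sketch implements the standard (unmodified) FCM update equations; the function names and NumPy usage are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fcm_memberships(X, centroids, m=2.0, eps=1e-9):
    """Standard FCM membership update: a point's degree of belonging
    to a cluster is inversely related to its distance from that
    cluster's centroid. m > 1 is the fuzzifier; larger m gives
    softer memberships."""
    # d[i, k] = Euclidean distance from point i to centroid k
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    d = np.maximum(d, eps)  # guard against division by zero
    # u[i, k] = 1 / sum_j (d[i, k] / d[i, j]) ** (2 / (m - 1))
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

def fcm_centroids(X, U, m=2.0):
    """Each centroid is the mean of all points, weighted by their
    membership degree raised to the fuzzifier m."""
    W = U ** m  # (n_points, n_clusters)
    return (W.T @ X) / W.sum(axis=0)[:, None]
```

Alternating these two updates until the memberships stabilize gives the usual FCM iteration; the modification proposed in this paper changes only the distance computation inside it.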

The problem with such fuzzified techniques is that they apply the fuzzy logic concept at the level of objects and ignore the features composing those objects. Each object has a degree of membership in the class labels of the learning problem. For multivariate objects, however, the features have different degrees of relevance to the target class labels. When the number of features is reduced, either by selecting the best features or by extracting a lower-dimensional representation, the remaining features still differ in their relevance to the classification problem, while the irrelevant features are discarded entirely (Janecek et al., 2008). Consequently, classifiers treat features crisply, without considering their degree of relevance: either the full data set is used or only the selected features. The degree of relevance can be calculated with feature evaluation techniques such as ChiMerge (Abdelwadood et al., 2007), which successfully ranks both continuous and discrete features. Using these degrees of relevance inside the classification techniques themselves could enhance their accuracy.
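One plausible reading of the proposed modification, sketched below in Python, normalizes per-feature relevance scores to [0, 1] and uses them as membership weights inside the Euclidean distance (for FCM) and the dot product (for SVM). The exact weighting scheme is the paper's contribution and is only approximated here; scikit-learn's chi2 scorer stands in for the ChiMerge ranking, and all names are illustrative.

```python
import numpy as np
from sklearn.feature_selection import chi2

def feature_memberships(X, y):
    """Rank features by relevance to the class labels and normalize
    the scores to [0, 1] so they can serve as fuzzy membership
    values. chi2 (which requires non-negative features) is used
    here as a stand-in for the ChiMerge ranking."""
    scores, _ = chi2(X, y)
    return scores / scores.max()

def weighted_euclidean(x, y, w):
    """Fuzzified Euclidean distance: each squared feature difference
    is scaled by that feature's membership value w[i]."""
    return np.sqrt(np.sum(w * (x - y) ** 2))

def weighted_dot(x, y, w):
    """Fuzzified dot product for SVM-style kernels: each feature
    contributes in proportion to its membership value."""
    return np.sum(w * x * y)
```

Crisp feature selection is the special case where every weight is 0 or 1; the fuzzified calculations instead let partially relevant features contribute partially.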
