MLVQ: A Modified Learning Vector Quantization Algorithm for Identifying Centroids of Fuzzy Membership Functions

Kai Keng Ang, Chai Quek
DOI: 10.4018/978-1-60960-551-3.ch019

Abstract

The Learning Vector Quantization (LVQ) algorithm and its variants have been employed in some fuzzy neural networks to automatically derive membership functions from training data. Although several improvements to the LVQ algorithm have been proposed, its problematic areas include: the selection of the number of clusters, initial weights, proper training parameters, and forced termination. These problematic areas in the derivation of centroids of one-dimensional data are illustrated with an artificially generated experimental data set on LVQ, GLVQ, and FCM. A Modified Learning Vector Quantization (MLVQ) algorithm is presented in this chapter to address these problematic areas for one-dimensional data. MLVQ models the development of the nervous system in two stages: a first stage where the basic architecture and coarse connection patterns are laid out, and a second stage where the initial architecture is refined in activity-dependent ways. MLVQ determines the learning constant parameter and modifies the terminating condition of the LVQ algorithm so that convergence can be achieved and easily detected. Experiments on the MLVQ algorithm are performed and contrasted against LVQ, GLVQ, and FCM. Results show that MLVQ determines the number of clusters and converges to the centroids. Results also show that MLVQ is insensitive to the sequence of the training data, able to identify centroids of overlapping clusters, and able to ignore outliers without identifying them as separate clusters. Results using the MLVQ algorithm and Gaussian membership functions with the Pseudo Outer-Product Fuzzy Neural Network using Compositional Rule of Inference and Singleton fuzzifier (POPFNN-CRI(S)) on pattern classification and time series prediction are also provided to demonstrate the effectiveness of the fuzzy membership functions derived using MLVQ.

1 Introduction

The main rationale for integrating fuzzy logic and neural networks in Neural Fuzzy Systems is to create a logical framework based on a linguistic model through the training and learning of connectionist neural networks (Lin & Lee, 1996). Please refer to (Gupta & Rao, 1994) for the principles and architecture of fuzzy neural networks. The notion of a linguistic variable from fuzzy set theory and fuzzy logic is used extensively in Neural Fuzzy Systems. The linguistic labels of each linguistic variable are usually defined as fuzzy sets with appropriate membership functions. These membership functions have to be predefined to enable fuzzy inference rules to map numerical data into linguistic labels.

Membership functions are usually predefined by human experts or experienced users. Several methods of automatically deriving membership functions from training data have been proposed, and among these, the Learning Vector Quantization (LVQ) algorithm and its variants were employed in some fuzzy neural networks (Ang, Quek, & Pasquier, 2003; Li, Mukaidono, & Turksen, 2002; Lin, 1995; Zhou & Quek, 1996) to derive fuzzy membership functions. The use of LVQ in these fuzzy neural networks is not for pattern classification, but to utilise the centroids obtained from LVQ to derive Gaussian-, triangular- or trapezoidal-shaped membership functions for each individual dimension of the network’s input and output. After the membership functions and fuzzy rules are derived, a supervised learning algorithm is often employed to fine-tune the rules and membership functions.
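
As a concrete illustration, the following minimal Python sketch builds one Gaussian membership function per centroid for a single input dimension. The width heuristic (each sigma set to half the distance to the nearest neighbouring centroid) and the function names are illustrative assumptions, not settings prescribed by the chapter.

    import numpy as np

    def gaussian_mf(x, centre, sigma):
        """Membership value of x in a Gaussian fuzzy set centred at `centre`."""
        return np.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))

    def mfs_from_centroids(centroids):
        """Build one Gaussian membership function per centroid of one input dimension.

        Width heuristic (an assumption for illustration): each sigma is half the
        distance to the nearest neighbouring centroid, so adjacent labels overlap."""
        centroids = np.sort(np.asarray(centroids, dtype=float))
        mfs = []
        for i, c in enumerate(centroids):
            others = np.delete(centroids, i)
            sigma = 0.5 * np.min(np.abs(others - c)) if others.size else 1.0
            mfs.append((c, sigma))
        return mfs

    # Three centroids found by a clustering step on one normalised input dimension
    labels = mfs_from_centroids([0.2, 0.5, 0.9])
    print([round(gaussian_mf(0.45, c, s), 3) for c, s in labels])

Each (centre, sigma) pair then defines one linguistic label of that input dimension; a supervised learning stage can later fine-tune these parameters, as described above.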

Although several improvements to LVQ have been proposed in (Kohonen, 1990), problematic areas of LVQ exist, which include the selection of the number of clusters, the selection of initial weights, the selection of proper training parameters, and forced termination. Variants of LVQ were proposed in the literature to address some, but not all, of these problems, namely the Soft Competition Scheme (SCS) (Yair, Zeger, & Gersho, 1992), Fuzzy Learning Vector Quantization (FLVQ) (Tsao, Bezdek, & Pal, 1994), Generalised Learning Vector Quantization (GLVQ) (Pal, Bezdek, & Tsao, 1993) and GLVQ-F (Karayiannis, Bezdek, Pal, Hathaway, & Pai, 1996).

The variants of LVQ in the literature did not address the selection of the number of clusters, which is a crucial parameter in determining the number of linguistic labels for the membership functions of fuzzy neural networks. The learning constant parameter in LVQ and its variants is usually decremented with time to force the termination of the training process. This guarantees termination, but does not necessarily converge to the means of the training data. The training parameters must also be varied from one data set to another to achieve good results. Furthermore, the final weights obtained after training depend on the initial weights and on the sequence of the training data.
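
The following Python sketch illustrates this behaviour for one-dimensional data: an unsupervised, LVQ-style competitive update whose learning constant is decremented with time. The linear decay schedule and the parameter values are illustrative assumptions; the sketch terminates because the step size shrinks to zero, not because the weights are known to have reached the cluster means, and its result depends on the initial weights and the presentation order of the samples.

    import numpy as np

    def lvq_centroids_1d(data, n_clusters, epochs=50, alpha0=0.3, seed=0):
        """Unsupervised, LVQ-style competitive update of 1-D prototypes with a
        learning constant decremented with time (illustrative schedule and values)."""
        rng = np.random.default_rng(seed)
        # Initial weights drawn from the data; the final result depends on this
        # choice and on the order in which the samples are presented.
        weights = rng.choice(np.asarray(data, dtype=float), size=n_clusters, replace=False)
        total_steps = epochs * len(data)
        step = 0
        for _ in range(epochs):
            for x in data:
                alpha = alpha0 * (1.0 - step / total_steps)  # decays to zero: forced termination
                winner = np.argmin(np.abs(weights - x))      # nearest prototype
                weights[winner] += alpha * (x - weights[winner])
                step += 1
        return np.sort(weights)                              # terminated, not proven converged

    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(0.2, 0.05, 100), rng.normal(0.8, 0.05, 100)])
    print(lvq_centroids_1d(data, n_clusters=2))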

In contrast to LVQ and its variants, the Fuzzy C-Means (FCM) algorithm (Bezdek, 1981) has established convergence. FCM is a batch-learning optimisation algorithm that updates the weights after iterating through the entire data set, so FCM is independent of the sequence of the data (Bezdek, Hathaway, & Tucker, 1987). However, the iterative nature of FCM is computationally and memory intensive due to the large number of feature vectors involved (Cheng, Goldgof, & Hall, 1998), see (14) and (15). FCM is also unable to perform on-line learning since it is a batch-learning scheme (Rhee & Oh, 1996). Furthermore, the performance of FCM depends on a good choice of the weighting exponent m and of the initial pseudo partition. Although guidelines for a suitable choice of m are provided in (Choe & Jordan, 1992), this choice is still largely heuristic (Tsao et al., 1994).
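
For comparison, the following sketch implements the standard FCM batch updates for one-dimensional data, namely the usual membership and centroid update equations. The random centroid initialisation stands in for the initial pseudo partition and, like the parameter values, is an assumption of this illustration rather than a setting from the chapter.

    import numpy as np

    def fcm_1d(data, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
        """Minimal sketch of the standard Fuzzy C-Means batch updates for 1-D data.

        Every iteration recomputes the full membership matrix U and then the
        centroids V from the entire data set, which is why FCM is independent of
        sample order but computationally and memory intensive for large data sets."""
        rng = np.random.default_rng(seed)
        x = np.asarray(data, dtype=float)
        v = rng.choice(x, size=n_clusters, replace=False)        # initial centroids
        for _ in range(max_iter):
            d = np.abs(x[None, :] - v[:, None]) + 1e-12          # distances d[i, k]
            # Membership update: u[i, k] = 1 / sum_j (d[i, k] / d[j, k]) ** (2 / (m - 1))
            u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
            # Centroid update: v[i] = sum_k u[i, k]**m * x[k] / sum_k u[i, k]**m
            v_new = (u ** m) @ x / np.sum(u ** m, axis=1)
            if np.max(np.abs(v_new - v)) < tol:
                return np.sort(v_new)
            v = v_new
        return np.sort(v)

    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(0.2, 0.05, 100), rng.normal(0.8, 0.05, 100)])
    print(fcm_1d(data, n_clusters=2))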
