Self-Organizing Map Convergence

Robert Tatoian, Lutz Hamel
DOI: 10.4018/IJSSMET.2018040103

Abstract

Self-organizing maps are artificial neural networks designed for unsupervised machine learning. In this article, the authors introduce a new quality measure called the convergence index. The convergence index is a linear combination of map embedding accuracy and estimated topographic accuracy; because it reports a single, statistically meaningful number, it is perhaps more intuitive to use than other quality measures. The authors study the convergence index in the context of the clustering problems proposed by Ultsch as part of his fundamental clustering problem suite, as well as on real-world datasets. They first demonstrate that the convergence index captures the notion that a SOM has learned the multivariate distribution of a training data set by looking at the convergence of its marginals. The convergence index is then used to study the convergence of SOMs with respect to the different parameters that govern self-organizing map learning. One result is that the constant neighborhood function produces better self-organizing map models than the popular Gaussian neighborhood function.

1. Introduction

Self-organizing maps (SOMs) are artificial neural networks designed for unsupervised machine learning. They are powerful data analysis tools applied in many different areas, including biomedicine, bioinformatics, proteomics, and astrophysics (Kohonen, 2001; Yang & Lee, 2003; Hadzic et al., 2007; Tan et al., 2002; da Silva et al., 2011; Kihara, 2012; Alakhdar, 2013; Sonnenber et al., 2013; Bilgihan et al., 2013; Alouane-Ksouri et al., 2015). We maintain a data analysis package in R called popsom (Hamel et al., 2016) based on self-organizing maps. The package supports efficient statistical measures that enable the user to gauge the quality of a generated map (Hamel, 2016). Here we introduce a new quality measure called the convergence index. The convergence index is a linear combination of map embedding accuracy and estimated topographic accuracy. It reports a single, statistically meaningful number between 0 and 1, with 0 representing a poorly fitted model and 1 representing a well-fitted model, and is therefore perhaps more intuitive to use than other quality measures. Here we study the convergence index in the context of the clustering problems proposed by Ultsch as part of his fundamental clustering problem suite (Ultsch, 2005) as well as real-world datasets. In particular, we are interested in how well the convergence index captures the notion that a SOM has learned the multivariate distribution of a training data set. We then use our convergence index to study the convergence of SOMs with respect to the different parameters that govern self-organizing map learning. One perhaps surprising result is that the constant neighborhood function produces better self-organizing map models than the popular Gaussian neighborhood function.
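As a rough illustration of how such an index might be computed, the following sketch (in R, the language of the popsom package) combines the two component accuracies with equal weights. The equal weighting and the function name are assumptions made for illustration only; this is not the popsom API.

convergence.index <- function(embed.accuracy, topo.accuracy, alpha = 0.5) {
  # Both inputs lie in [0, 1]; alpha = 0.5 assumes equal weighting of the
  # embedding and topographic components (an illustrative choice).
  alpha * embed.accuracy + (1 - alpha) * topo.accuracy
}

# For example, a map with embedding accuracy 0.95 and estimated
# topographic accuracy 0.90 would receive:
# convergence.index(0.95, 0.90)   # 0.925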

Over the years a number of different quality measures for self-organizing maps have been proposed. Good overviews of common SOM quality measures appear in (Pölzlbauer, 2004) and (Mayer et al., 2009). Our convergence index distinguishes itself from many of the other measures in that it is statistical in nature. This is particularly true for the part of the convergence index based on embedding (or coverage), which is essentially a two-sample test between the training data and the set of self-organizing map neurons viewed as populations. The two-sample test measures how similar these two populations are. For a fully embedded map, the population of neurons should be statistically indistinguishable from the population of training data instances. This statistical view of embedding is interesting because it makes the standard visualization of SOMs using a U-matrix (Ultsch, 1993) statistically meaningful. That is, the cluster and distance interpretations of the U-matrix now have a statistical foundation based on the fact that the distribution of the map neurons is indistinguishable from the distribution of the training data.
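A minimal sketch of one way such a feature-wise two-sample test could be carried out is shown below: for each feature it compares the means and variances of the training data and the neuron population, and reports the fraction of features for which neither difference is statistically significant. The function and its arguments are illustrative assumptions, not the popsom interface.

embedding.accuracy <- function(data, neurons, conf = 0.95) {
  # data and neurons are numeric matrices with the same columns (features).
  # A feature counts as "embedded" if neither the two-sample t-test on the
  # means nor the F-test on the variances rejects the null hypothesis.
  embedded <- sapply(seq_len(ncol(data)), function(j) {
    mean.ok <- t.test(data[, j], neurons[, j])$p.value   > (1 - conf)
    var.ok  <- var.test(data[, j], neurons[, j])$p.value > (1 - conf)
    mean.ok && var.ok
  })
  mean(embedded)  # fraction of embedded features, in [0, 1]
}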

The other part of our convergence index, the estimated topographic accuracy, is an efficient statistical approach to the usual topographic error quality measure (Pölzlbauer, 2004), whose exact computation can be expensive. In our approach, we use a sample of the training data to estimate the topographic accuracy. Experiments have shown that only a fairly small sample of the training data is needed to obtain very accurate estimates of the topographic accuracy (Hamel, 2016).
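The sketch below illustrates this sampling idea under the usual definition of topographic error: a data point is counted as correctly mapped if its best-matching and second-best-matching neurons are neighbors on the map grid. The names, default sample size, and neighborhood test are illustrative assumptions rather than the popsom implementation.

estimate.topo.accuracy <- function(data, neurons, grid, k = 50) {
  # data and neurons are numeric matrices; grid holds the (x, y) map
  # position of each neuron row; k is the (small) sample size.
  idx <- sample(nrow(data), min(k, nrow(data)))
  hits <- sapply(idx, function(i) {
    d <- apply(neurons, 1, function(w) sum((data[i, ] - w)^2))
    best <- order(d)[1:2]                       # BMU and second BMU
    # count as accurate if the two BMUs sit on adjacent grid cells
    all(abs(grid[best[1], ] - grid[best[2], ]) <= 1)
  })
  mean(hits)  # estimated topographic accuracy, in [0, 1]
}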
