1. Introduction
Color quantization is the process of reducing the number of distinct colors used to represent a digitally sampled image. It consists of selecting a small but representative set of indexed colors (a codebook) with which to code the original digital image with minimum perceptual distortion. As a useful lossy compression method for finding an acceptable set of colors to represent a digital image, it has been employed to adapt images to video adapters with limited color display capabilities, as well as to reduce storage requirements and transmission bandwidth while maintaining acceptable image fidelity.
The quality of a codebook is determined by the error between the original image and the resulting quantized image. An optimal codebook minimizes this error, which is usually measured by a mean squared error criterion.
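To make this quality criterion concrete, the following sketch (with hypothetical helper names, not taken from the paper) maps each pixel of an image, represented as a flat list of RGB tuples, to its nearest codebook color and computes the mean squared error between the original and quantized pixels:

```python
def quantize(pixels, codebook):
    """Map each pixel to its nearest codebook color (squared Euclidean distance)."""
    def nearest(p):
        return min(codebook, key=lambda c: sum((pi - ci) ** 2 for pi, ci in zip(p, c)))
    return [nearest(p) for p in pixels]

def mse(original, quantized):
    """Mean squared error between two equally sized lists of RGB pixels."""
    total = 0.0
    for p, q in zip(original, quantized):
        total += sum((a - b) ** 2 for a, b in zip(p, q))
    return total / len(original)
```

For example, quantizing the pixels `[(0, 0, 0), (255, 255, 255)]` against the codebook `[(10, 10, 10), (250, 250, 250)]` maps each pixel to the nearby codebook entry and yields an MSE of 187.5.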
There are several well-known codebook design algorithms, such as the k-means algorithm (Linde, Buzo, & Gray, 1980), fuzzy c-means (Bezdek, 1981), competitive learning (Hertz, Krogh, & Palmer, 1991), the self-organizing map (Kohonen, 1990), and their variants. The Self-Organizing Map (SOM) (Kohonen, 1982) was the starting point for the development of many self-organizing models (López-Rubio, 2010a), (López-Rubio, 2010b), (López-Rubio, Palomo-Ferrer, Ortiz-de-Lazcano-Lobato, & Vargas-González, 2011). Some of them address drawbacks of the original SOM regarding its pre-established network architecture, i.e., its topology and number of neurons (Kohonen, 2013), (López-Rubio, 2011). The Growing Hierarchical Self-Organizing Map (GHSOM) (Rauber, Merkl, & Dittenbach, 2002) is a hierarchical extension of the SOM that reflects the hierarchical structure of the data, where the entire architecture of the neural network is automatically determined during the unsupervised learning process. The Growing Neural Gas (GNG) (Fritzke, 1995) is a successful self-organizing neural network model that solves the fixed-architecture problem of the SOM. The GNG is based on the Neural Gas (NG) model (Martinetz, 1991), but provides a neuron growth and removal mechanism that automatically determines the number of neurons during the unsupervised learning process according to the input data.
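As a simple illustration of codebook design, the sketch below implements a minimal Lloyd-style k-means loop in plain Python; the function name and parameters are illustrative assumptions, not the formulation of any of the cited works. It alternates assigning each pixel to its nearest codebook color and recomputing each codebook color as the centroid of its assigned pixels:

```python
import random

def kmeans_codebook(pixels, k, iters=20, seed=0):
    """Design a k-color codebook by Lloyd-style k-means on a list of RGB tuples."""
    rng = random.Random(seed)
    codebook = rng.sample(pixels, k)  # initialize centroids from the data
    for _ in range(iters):
        # Assignment step: group pixels by nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in pixels:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, codebook[j])))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for j, cluster in enumerate(clusters):
            if cluster:  # keep the old centroid if the cluster is empty
                codebook[j] = tuple(sum(c) / len(cluster) for c in zip(*cluster))
    return codebook
```

On well-separated pixel clusters, the returned codebook converges to one representative color per cluster.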
Initially proposed in (Palomo & López-Rubio, 2016b), the Growing Neural Forest (GNF) model improves on the GNG by computing a spanning tree for each connected component (subgraph) of the overall graph. This way, only those units which are connected to the winning unit in a spanning tree are updated. Hence, the GNF learns a set of trees (a forest) in which each tree represents a connected data cluster, yielding a better adaptation to the input data than the GNG.
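The per-component spanning-tree idea can be sketched as follows. This is a simplified illustration with hypothetical names, using plain BFS trees rather than the GNF's actual learning procedure; it only shows how each connected component of an undirected unit graph yields one tree of the forest:

```python
from collections import deque

def spanning_forest(nodes, edges):
    """Return one BFS spanning tree (as a set of edges) per connected component."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    visited, forest = set(), []
    for root in nodes:
        if root in visited:
            continue  # this node already belongs to an earlier tree
        tree, queue = set(), deque([root])
        visited.add(root)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in visited:
                    visited.add(v)
                    tree.add((u, v))  # tree edge discovered by the BFS
                    queue.append(v)
        forest.append(tree)
    return forest
```

For a graph whose units form a 3-node cycle, a 2-node pair, and an isolated unit, this returns three trees with 2, 1, and 0 edges respectively; in the GNF, only the units sharing a tree with the winning unit would then be updated.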
Since many self-organizing models have been successfully applied to color quantization in the past (Dekker, 1994), (Papamarkos, 1999), (Xiao, Leung, Lam, & Ho, 2012), (Palomo & Domínguez, 2013), in this work the GNF is compared against them in order to demonstrate its validity for this application.
The remainder of the paper is organized as follows. The GNF model is explained in Section 2. Experimental results on color quantization are detailed in Section 3. Section 4 concludes this paper.