Many research efforts have been made to improve the K-means computation time and its convergence. Approaches similar to our proposed method have been developed using the HSV color space (Tse-Wei et al., 2008; Zadeh, 2013) for grayscale and colored images, achieving very fast computation times with good clustering efficiency. Another family of approaches changes the distance metric used within clusters. The Euclidean distance (Danielsson, 1980) is the most commonly used, as in (Wang et al., 2005); it produces spherical or ball-shaped clusters, is usually applied to data in two or three dimensions (Anil, 2010; Su et al., 2001), and gives good results when the clusters are compact or outlying (Anil et al., 1999). Further variations on how to calculate the distance have been developed, such as the Minkowski distance metric (Ridder, 1992; Ichino et al., 1994), which generalizes the Euclidean distance, as in (Archana et al., 2013; Anil, 2010; De Amorim et al., 2012). The Manhattan distance (Pieterse & Paul, 2006), another special case of the Minkowski distance, can also be used as a distance metric, as in (Archana et al., 2013; Anil, 2010; Kahkashan & Sunita, 2013). The Mahalanobis distance metric (De Maesschalck et al., 2000) is used in (Xiang et al., 2008) for high-dimensional data. The Itakura–Saito distance (Enqvist & Karlsson, 2008) is also used in vector quantization for speech processing (Anil, 2010). All these distance metrics aim to reduce the computation time of the K-means algorithm and to make it converge faster.
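To make the role of the distance metric concrete, the following sketch shows how the assignment step of K-means can be parameterized by the metric. The function name, the Minkowski order `p`, and the inverse covariance matrix `VI` for the Mahalanobis case are illustrative assumptions, not part of any cited method.

```python
import numpy as np

def assign_clusters(points, centroids, metric="euclidean", p=3, VI=None):
    """Assign each point to its nearest centroid under the chosen metric.

    Illustrative sketch: `p` is the Minkowski order and `VI` the inverse
    covariance matrix used by the Mahalanobis distance (both assumed names).
    """
    # Pairwise differences, shape (n_points, n_centroids, n_dims).
    diff = points[:, None, :] - centroids[None, :, :]
    if metric == "euclidean":            # Minkowski with p = 2
        dist = np.sqrt((diff ** 2).sum(axis=2))
    elif metric == "manhattan":          # Minkowski with p = 1
        dist = np.abs(diff).sum(axis=2)
    elif metric == "minkowski":          # general L_p norm
        dist = (np.abs(diff) ** p).sum(axis=2) ** (1.0 / p)
    elif metric == "mahalanobis":        # scale-aware, uses data covariance
        if VI is None:
            VI = np.linalg.inv(np.cov(points, rowvar=False))
        dist = np.sqrt(np.einsum("nkd,de,nke->nk", diff, VI, diff))
    else:
        raise ValueError(f"unknown metric: {metric}")
    # Each point joins the cluster whose centroid is nearest.
    return dist.argmin(axis=1)

points = np.array([[0.0, 0.0], [1.0, 1.0], [9.0, 9.0]])
centroids = np.array([[0.5, 0.5], [8.0, 8.0]])
print(assign_clusters(points, centroids, metric="manhattan"))  # → [0 0 1]
```

Only the assignment step changes with the metric; the centroid-update step (and hence convergence behavior) may also need adjusting for metrics other than the Euclidean one.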