Comparative Performance Analysis of Optimization Techniques on Vector Quantization for Image Compression

Karri Chiranjeevi (Veer Surendra Sai University of Technology, Department of Electronics and Telecommunication Engineering, Odisha, India), Umaranjan Jena (Veer Surendra Sai University of Technology, Department of Electronics and Telecommunication Engineering, Odisha, India) and Sonali Dash (Veer Surendra Sai University of Technology, Department of Electronics and Telecommunication Engineering, Odisha, India)
Copyright: © 2017 |Pages: 25
DOI: 10.4018/IJCVIP.2017010102

Abstract

Linde-Buzo-Gray (LBG) vector quantization (VQ) generates a locally optimal codebook after many runs on different sets of training images for image compression, whereas the key goal of VQ is to generate a globally optimal codebook. In this paper, we present a comparative performance analysis of different optimization techniques. The Firefly algorithm (FA) and Cuckoo Search (CS) generate near-global codebooks, but FA struggles when no brighter fireflies are available in the search space, and CS suffers from very high convergence time. A Hybrid Cuckoo Search (HCS) algorithm was developed and tested on four benchmark functions; it optimizes the LBG codebook with faster convergence by adopting a Lévy flight based on McCulloch's algorithm and variable search parameters. Experimentally, we observed that the Bat algorithm (BA) achieves a better peak signal-to-noise ratio (PSNR) than LBG, FA, CS, and HCS for codebook sizes between 8 and 256, and that BA converges 2.4452, 2.734, and 1.5126 times faster than HCS, CS, and FA, respectively.
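The PSNR figures quoted above follow the standard definition for 8-bit images, PSNR = 10·log10(255²/MSE). A minimal sketch (the function name and flat-list input format are illustrative, not from the paper):

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-sized images,
    given here as flat sequences of 8-bit pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return math.inf  # identical images: infinite PSNR
    return 10.0 * math.log10(peak ** 2 / mse)
```

A higher PSNR indicates that the codebook reconstructs the image with less distortion, which is how the BA, HCS, CS, FA, and LBG codebooks are compared.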
Article Preview

Introduction

Image data compression is concerned with minimizing the number of information-carrying units used to represent an image. Due to advances in various aspects of digital electronics, such as image acquisition, data storage, and display, many new applications of digital imaging have emerged over the past decade. However, many of these applications are not widespread because of the large storage space they require. As a result, image compression has grown tremendously over the last decade. Image compression plays a significant role in multimedia applications such as mobile telephony, internet browsing, fax, and so on. Nowadays, establishing image compression techniques with excellent reconstructed image quality is a crucial and challenging task for researchers. Image compression aims to transmit the image at lower bitrates. The route to image compression lies in identifying redundancies in the image, better encoding techniques, and transformation techniques. The first widely adopted image compression standard was JPEG, introduced by the Joint Photographic Experts Group (Karri et al., 2015). Today, quantization is a powerful and efficient tool for image compression; it is a non-transform compression technique introduced for lossy compression. Quantization is of two types: scalar quantization and vector quantization. The main aim of vector quantization is to design an efficient codebook. A codebook contains a group of codewords, to which each input image vector is assigned based on the minimum Euclidean distance. The earliest and most widely used vector quantization technique is the Linde-Buzo-Gray (LBG) algorithm (Linde et al., 1980). The LBG algorithm is simple, adaptable, and flexible, but it suffers from a local optimality problem: it produces a locally optimal solution and does not guarantee the best global solution. The final LBG solution depends on the initial codebook, which is generated randomly (Patane & Russo, 2002).
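The nearest-codeword assignment and LBG iteration described above can be sketched in plain Python. This is a minimal illustration of the generalized Lloyd iteration under the paper's description (random initial codebook, minimum-Euclidean-distance assignment, centroid update); the function name, defaults, and stopping rule are assumptions, not the authors' implementation:

```python
import random
from math import inf

def lbg_codebook(vectors, codebook_size, max_iters=50, tol=1e-5):
    """Train a VQ codebook with the LBG iteration.

    vectors: list of equal-length training vectors, e.g. flattened image blocks.
    Returns a list of codebook_size codewords.
    """
    # Random initial codebook: the final LBG solution depends on this
    # choice, which is why LBG converges only to a local optimum.
    random.seed(0)
    codebook = [list(v) for v in random.sample(vectors, codebook_size)]
    prev_distortion = inf
    for _ in range(max_iters):
        # Assign each training vector to the nearest codeword
        # (minimum squared Euclidean distance).
        clusters = [[] for _ in range(codebook_size)]
        distortion = 0.0
        for v in vectors:
            dists = [sum((a - b) ** 2 for a, b in zip(v, c)) for c in codebook]
            k = min(range(codebook_size), key=dists.__getitem__)
            clusters[k].append(v)
            distortion += dists[k]
        distortion /= len(vectors)
        # Move each codeword to the centroid of the vectors assigned to it.
        for k, members in enumerate(clusters):
            if members:
                codebook[k] = [sum(col) / len(members) for col in zip(*members)]
        # Stop when the mean distortion no longer improves meaningfully.
        if prev_distortion - distortion < tol:
            break
        prev_distortion = distortion
    return codebook
```

The optimization techniques compared in the paper (FA, CS, HCS, BA) replace the random initialization and local centroid updates with population-based search so as to escape this local optimum.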
A quad-tree (QT) decomposition algorithm allows VQ with variable block sizes by observing the homogeneity of local regions (Hu & Chang, 2000). However, Sasazaki et al. observed that the complexity of local regions of an image is more essential than their homogeneity (Kazuya et al., 2008). They proposed vector quantization of images with variable block sizes by quantifying the complex regions of the image using local fractal dimensions (LFDs). Tsolakis et al. proposed a fuzzy vector quantization for image compression based on competitive agglomeration and a novel codeword migration strategy (Dimitrios et al., 2012). George et al. proposed an improved batch fuzzy learning vector quantization for image compression (George et al., 2008). Dorin & Richard (2002) proposed image coding using transform vector quantization, with a training-set synthesis based on best-fit parameters between input vectors and the codebook. Wang & Juan (2008) performed image compression with transformed vector quantization, in which the image to be quantized is transformed to the wavelet domain with the discrete wavelet transform (DWT). A novel VQ technique has been proposed to encode a wavelet-decomposed image using a Modified Artificial Bee Colony (ABC) optimization algorithm (Lakshmi et al., 2016). Image compression has also been achieved by using vector quantization together with a hybrid wavelet transform: the image is converted to the transform domain and only a few low-frequency coefficients are retained to achieve good compression (Kekre et al., 2016).
