Fast Vector Quantization Encoding Algorithms for Image Compression

Ahmed Swilem (Minia University,Egypt)
DOI: 10.4018/978-1-60960-563-6.ch012

Abstract

Vector quantization (VQ) is a well-known compression method. In the encoding phase, given a block represented as a vector, searching for the closest codeword in the codebook is a time-consuming task. In this paper, two fast encoding algorithms for VQ are proposed. To reduce the search area and accelerate the search process, the first algorithm exploits three significant features of a vector: its norm and its projection angles to two projection axes. The second algorithm uses the first two of these features together with the projection value of the vector onto the second projection axis. Both algorithms significantly accelerate the encoding process. Experimental results on image block data confirm the effectiveness of the proposed algorithms.

1. Introduction

Thanks to the rapid advancement of the Internet, online information communication, in which the delivery and storage of digital images of all kinds have become mainstream, plays an increasingly important part in our lives. However, for massive quantities of multimedia data to be transmitted at an acceptable speed, or to be stored without consuming too much memory, especially when network bandwidth and hardware storage are limited, a good digital image compression scheme is essential. Image compression methods can be categorized into two types: lossless compression and lossy compression. In a lossless compression scheme, the original image can be reconstructed exactly from the compressed data, whereas a lossy compression scheme introduces some distortion between the original image and the reconstructed image. Vector quantization (VQ) (Gray, 1984; Linde, Buzo, & Gray, 1980) has long been a well-established lossy compression technique that achieves a satisfactory balance between image quality and compression ratio (Chen, Chang, & Hwang, 1998). At the same time, because of its simple and easy implementation, VQ has been popular in a variety of research fields such as speech recognition and face detection (Garcia & Tziritas, 1999). Even in real-time video-based event detection (Liao, Chen, Su, & Tyan, 2006) and anomaly intrusion detection systems (Zheng & Hu, 2006), VQ has recently been exploited to learn representative patterns and then to identify similar feature vectors or detect unusual activities. Next, the procedure of vector quantization is described from the perspective of image compression.

VQ can be defined as a mapping from a k-dimensional Euclidean space R^k to a finite set of vectors C = {c_1, c_2, ..., c_N}, called the codebook. Each representative vector c_i in the codebook is called a codeword. Generally, VQ comprises three procedures: codebook design, encoding, and decoding. The codebook design procedure is executed before the other two. Its goal is to construct a codebook from a set of training vectors using clustering algorithms such as the generalized Lloyd algorithm (GLA) (Linde, Buzo, & Gray, 1980). This codebook is used in both the image encoding and decoding procedures. In the encoding procedure, for each input vector x, we find the index i of the codeword c_i that is closest to x, i.e., the codeword that gives the minimum distortion and satisfies d(x, c_i) ≤ d(x, c_j) for all j = 1, 2, ..., N, where

d(x, c_i) = Σ_{j=1}^{k} (x_j − c_{ij})², (1)
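Assuming Eq. (1) is the usual squared Euclidean distortion between an input vector and a codeword, the full-search encoding step described above can be sketched as follows. The function name `encode_vector` and the NumPy representation of the codebook are illustrative choices, not part of the original chapter:

```python
import numpy as np

def encode_vector(x, codebook):
    """Full-search VQ encoding.

    Returns the index i of the codeword c_i in `codebook` (an N x k
    array, one codeword per row) that minimizes the squared Euclidean
    distortion d(x, c_i) of Eq. (1) to the input vector `x` (length k).
    """
    # d(x, c_i) = sum_j (x_j - c_ij)^2, computed for every codeword at once
    distortions = np.sum((codebook - x) ** 2, axis=1)
    return int(np.argmin(distortions))
```

For example, with the codebook `[[0, 0], [1, 2], [5, 5]]`, the vector `[1.1, 1.9]` is encoded as index 1. This exhaustive scan over all N codewords is exactly the costly step that the fast algorithms proposed in the chapter aim to avoid, by pruning the search with norm and projection-angle features.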

so the codeword c_i now represents the vector x. The decoding procedure is simply a table look-up that uses the received index i to retrieve the reproduction codeword c_i, and then uses c_i to represent the input vector x.
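The table look-up nature of decoding can be made concrete with a short sketch; the function name `decode_indices` is an illustrative choice:

```python
import numpy as np

def decode_indices(indices, codebook):
    """VQ decoding: a pure table look-up that maps each received
    index i back to its reproduction codeword c_i (a row of the
    N x k `codebook` array)."""
    return codebook[np.asarray(indices)]
```

Because decoding is just indexing, its cost is negligible compared with encoding; this asymmetry is why the fast-search literature, including this chapter, concentrates on the encoder.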
