Image Compression Technique Based on Some Principal Components Periodicity

Wilmar Hernandez, Alfredo Mendez
DOI: 10.4018/978-1-5225-9924-1.ch009

Abstract

In this chapter, the almost periodicity of the first principal components is used to carry out the reconstruction of images. First, the principal component analysis technique is applied to an image. Then, the periodicity of its principal components is analyzed. Next, this periodicity is used to build periodic vectors of the same length as the original principal components, and finally, these periodic vectors are used to reconstruct the original image. The proposed method was compared against the JPEG (Joint Photographic Experts Group) compression technique. The mean square error and peak signal-to-noise ratio were used to perform this comparison. The experimental results showed that the proposed method performed better than JPEG when the original image was reconstructed using the principal components modified by periodicity.
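As a rough illustration of the pipeline described above, the following Python sketch applies PCA to a grayscale image, replaces the leading principal components with periodic versions of themselves, and reconstructs the image. The function name, the choice of n_components, and the way the period is imposed are illustrative assumptions, not the chapter's exact algorithm.

    import numpy as np

    def pca_periodic_reconstruct(img, n_components=8, period=64):
        # Illustrative sketch only: `n_components` and `period` are
        # hypothetical parameters, not values taken from the chapter.
        X = img.astype(float)
        mu = X.mean(axis=0)                     # column means
        Xc = X - mu                             # center the data
        cov = np.cov(Xc, rowvar=False)          # covariance of the columns
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]       # sort by decreasing variance
        W = eigvecs[:, order[:n_components]]    # first principal directions
        scores = Xc @ W                         # principal components (scores)

        # Replace each score vector by a periodic version: repeat its first
        # `period` samples to the full length, mimicking the exploitation of
        # the near-periodicity of the first components.
        n = scores.shape[0]
        reps = int(np.ceil(n / period))
        periodic = np.vstack([np.tile(scores[:period, k], reps)[:n]
                              for k in range(n_components)]).T

        # Reconstruct the image from the periodic components
        return periodic @ W.T + mu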

Introduction

The same information can be represented by different data sets, and sometimes it is represented through redundant ones. Image compression methods are procedures used to reduce this redundant data and represent images more economically (Gonzalez et al., 2008).

These methods have experienced considerable growth and adoption for several years, and the outlook for digital communications indicates that image compression is a field of study with great potential. As an example, some figures taken from Gonzalez et al. (2008, p. 525) are shown below:

“Think about the volume of data that is necessary to store a two-hour standard-definition television movie that uses 720×480 arrays of 24-bit pixels. Due to the fact that the digital movie is a sequence of video frames, in which each frame is a full-color photograph and 30 frames per second are used, 2.24×10¹¹ bytes are required to store the digital movie, which is approximately equal to 224 GB (gigabytes) of data. Therefore, it is necessary to use 27 dual-layer DVDs of 8.5 GB to store the movie. In order to have a two-hour movie on a single DVD, the user must compress each frame by a factor of 26.3, on average” (Gonzalez et al., 2008, p. 525).
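The arithmetic behind these figures can be checked directly (a small Python calculation; GB is taken here as 10⁹ bytes):

    frame_bytes = 720 * 480 * 3                # 24-bit pixels = 3 bytes each
    movie_bytes = frame_bytes * 30 * 2 * 3600  # 30 frames/s for two hours
    print(movie_bytes)                         # 223948800000, i.e. 2.24 x 10^11 bytes
    print(movie_bytes / 8.5e9)                 # about 26.3, hence 27 dual-layer DVDs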

Image compression methods are also used in digital cameras, high-definition television, medical imaging (where reconstructed images help doctors and researchers interpret and improve the diagnosis of rare illnesses), video-based surveillance systems, traffic control on highways and inside cities, and the interpretation of satellite images, among other applications.

Due to the large number of people who use communication devices today, large volumes of images are shared over the internet. Storing these images, which often have high resolution, requires large numbers of bits, and they are transmitted over networks with limited bandwidth. This entails excessive bandwidth consumption and justifies the need for procedures that compress images using few bits, so that they can be transmitted over the network quickly and efficiently.

The image compression process is based on reducing the amount of data needed to represent an image. Compression processes eliminate data that do not provide relevant information about the content of the image, which causes losses in visual quality. However, as long as it remains tolerable, this degradation is not significant compared to the reduction in the size of the file that contains the image, as the metrics sketched below make precise.
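This trade-off is commonly quantified with the mean square error and the peak signal-to-noise ratio, the same metrics used in the comparison described in the abstract. A minimal Python sketch, assuming 8-bit images stored as NumPy arrays:

    import numpy as np

    def mse(original, reconstructed):
        # Mean square error between two equally sized images
        diff = original.astype(float) - reconstructed.astype(float)
        return np.mean(diff ** 2)

    def psnr(original, reconstructed, max_val=255.0):
        # Peak signal-to-noise ratio in dB; max_val is the peak pixel value
        m = mse(original, reconstructed)
        return float('inf') if m == 0 else 10 * np.log10(max_val ** 2 / m)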

Before the internet, people worked with high-resolution images, and compressing files was only a requirement to take into account when transferring data from one site to another; at that time, storage media did not have great capacity. With the widespread use of the internet, a committee of experts called the Joint Photographic Experts Group set to work in 1986 to create a standard procedure for image compression and coding, and the JPEG format emerged in 1992 (JPEG, 2019).

The JPEG format is based on the DCT (discrete cosine transform) (CCITT, 1992) and is, in general, a lossy format, which means that not every pixel that forms the bitmap is saved. When the compressed image is reopened, the discarded pixels are redrawn based on their resemblance to the surrounding pixels. This procedure supports different levels of compression; that is to say, quality remains very high if the image is compressed only a little, and it decreases as the image is compressed more heavily.
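A minimal Python sketch of the DCT-based idea on a single 8×8 block follows, using SciPy's dctn/idctn; the uniform quantization step is a simplification for illustration, not the standard's quantization tables:

    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # level shift
    coeffs = dctn(block, norm='ortho')       # 2-D DCT-II of the block
    q = 16                                   # crude uniform quantization step
    quantized = np.round(coeffs / q) * q     # larger q -> more compression, more loss
    restored = idctn(quantized, norm='ortho') + 128  # approximate block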

Due to the fact that this compression always implies a loss of information, if an image stored in the JPEG format is opened and then saved again in this format, after this operation has been performed several times the image will be visibly degraded (Lifewire, 2019).
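This generation loss can be observed with a short experiment, sketched here with the Pillow library; the number of iterations and the quality setting are arbitrary choices:

    from io import BytesIO
    from PIL import Image

    def resave_jpeg(img, times=20, quality=75):
        # Re-encode the image `times` times at the given JPEG quality;
        # each decode/encode cycle discards a little more information.
        img = img.convert('RGB')
        for _ in range(times):
            buf = BytesIO()
            img.save(buf, format='JPEG', quality=quality)
            buf.seek(0)
            img = Image.open(buf).convert('RGB')
        return img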
