Information Compression

Manjunath Ramachandra (MSR School of Advanced Studies, Philips, India)
DOI: 10.4018/978-1-60566-888-8.ch008


If large data transactions happen in the supply chain over the web, resources are strained, choking the network and raising transfer costs. To use the available resources over the internet effectively, data is often compressed before transfer. This chapter presents the different methods and levels of data compression. A separate section is devoted to multimedia data compression, where a certain loss of data during compression is tolerable owing to the limitations of human perception.
Chapter Preview


The compression of data is closely linked to the architecture of its storage. The advantage of keeping data in compressed form is that, even if the network is slow and the infrastructure is poor, there is little impact on the retrieval of the data. The JFFS2 file system is optimized to support compression of both data and metadata in flash memory. However, it does not address a mix (A. Kawaguchi, S. Nishioka, and H. Motoda, 1995) of flash and disk storage (T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, 2001). When the flash memory available to store the data is slow, it is advantageous to compress even small data objects, as this reduces the latency.
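The trade-off described above, paying a small compression cost to shrink the bytes written to slow storage, can be sketched with a minimal example. This is an illustrative sketch using Python's standard `zlib` module; the `store_compressed`/`load_compressed` helper names are hypothetical and do not come from JFFS2 or any cited work.

```python
import zlib

def store_compressed(obj_bytes: bytes, level: int = 6) -> bytes:
    """Compress a small data object before writing it to slow storage."""
    return zlib.compress(obj_bytes, level)

def load_compressed(blob: bytes) -> bytes:
    """Decompress an object read back from storage."""
    return zlib.decompress(blob)

# A small but repetitive object compresses well, so fewer bytes
# cross the slow storage interface on every read and write.
payload = b"sensor-reading:42;" * 50
blob = store_compressed(payload)
assert load_compressed(blob) == payload
assert len(blob) < len(payload)
```

For repetitive records like the one above, the compressed blob is a fraction of the original size, which is exactly why compressing even small objects can lower latency when the storage medium is the bottleneck.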

Cache Compression

In the compressed cache architecture proposed by Douglis, an intermediate layer of virtual memory sits between the physical memory and the secondary memory. It provides the requisite docking space for compressed pages. The implementation yields a reasonable improvement in performance (T. Cortes, Y. Becerra, and R. Cervera, 2000). However, the performance may be improved further with a compressed page cache on Linux (S. F. Kaplan, 1999; R. S. de Castro, 2003).

Metadata Compression

In practice, a number of compression mechanisms are used for handling the metadata. These comprise well-known techniques such as Shannon-Fano coding, Huffman coding with a pre-computed tree, gamma compression, and similar prefix encodings. They provide different degrees of compression as well as different computational complexities. In Linux systems, block- or stream-based compression mechanisms are used.
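Of the prefix encodings mentioned above, the Elias gamma code is the simplest to illustrate: an integer n >= 1 is written as floor(log2 n) zeros followed by the binary representation of n, so small values get short codewords. The sketch below shows the standard scheme; the function names are chosen for illustration.

```python
def gamma_encode(n: int) -> str:
    """Elias gamma code: floor(log2 n) zeros, then n in binary (n >= 1)."""
    if n < 1:
        raise ValueError("gamma code is defined for integers >= 1")
    b = bin(n)[2:]               # binary representation, no '0b' prefix
    return "0" * (len(b) - 1) + b

def gamma_decode(bits: str) -> int:
    """Decode a single gamma codeword back to an integer."""
    zeros = 0
    while bits[zeros] == "0":    # count the leading zeros
        zeros += 1
    # The next (zeros + 1) bits are the binary value itself.
    return int(bits[zeros:zeros + zeros + 1], 2)

assert gamma_encode(1) == "1"
assert gamma_encode(5) == "00101"
assert gamma_decode("00101") == 5
```

Because no codeword is a prefix of another, a stream of such codewords can be decoded unambiguously without length markers, which is what makes prefix codes attractive for compact metadata.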
