Principles of Digital Video Coding

Harilaos Koumaras (University of the Aegean, Greece), Evangellos Pallis (Technological Educational Institute of Crete, Greece), Anastasios Kourtis (National Centre for Scientific Research “Demokritos”, Greece) and Drakoulis Martakos (National and Kapodistrian University of Athens, Greece)
DOI: 10.4018/978-1-60566-026-4.ch497
Multimedia applications and services already account for a major portion of today's traffic over communication networks. The revolution and evolution of the World Wide Web have enabled the wide provision of multimedia content over the Internet and other autonomous networks. Among the various types of multimedia, video services (transmission of moving images and sound) have proven dominant for present and future communication networks. Although the available network bandwidth and the corresponding supported bit rates continue to increase, raw video data impose bandwidth requirements that exceed what current commercial communication networks can handle in real time, even when low spatial and temporal resolution (i.e., frame size and frame rate) is selected. To alleviate the network bandwidth requirements for efficient transmission of audiovisual content, coding techniques are applied to raw video data, performing compression by exploiting both temporal and spatial redundancy in video sequences. Video coding is defined as the process of compressing and decompressing a raw digital video sequence. It results in lower data volumes and enables the transmission of video signals over bandwidth-limited channels, where uncompressed video signals could not be transmitted at all. The use of coding and compression techniques therefore leads to better exploitation and more efficient management of the available bandwidth. Video compression algorithms exploit the fact that a video signal consists of a series of frames with high similarity in the spatial, temporal, and frequency domains.
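To make the bandwidth claim concrete, the following back-of-the-envelope calculation (illustrative resolutions and sampling values, not figures from the article) shows why uncompressed video is impractical to transmit in real time:

```python
# Raw (uncompressed) video bit-rate estimate. Even modest resolutions
# exceed the throughput of typical access networks without compression.

def raw_bitrate(width, height, fps, bits_per_pixel):
    """Bit rate in bit/s of an uncompressed video stream."""
    return width * height * fps * bits_per_pixel

# CIF resolution (352x288), 25 frames/s, 4:2:0 chroma sampling (12 bits/pixel)
cif = raw_bitrate(352, 288, 25, 12)
# Standard definition (720x576), 25 frames/s, 4:2:0 chroma sampling
sd = raw_bitrate(720, 576, 25, 12)

print(f"CIF raw: {cif / 1e6:.1f} Mbit/s")  # ~30.4 Mbit/s
print(f"SD  raw: {sd / 1e6:.1f} Mbit/s")   # ~124.4 Mbit/s
```

Even the small CIF format requires roughly 30 Mbit/s uncompressed, which is why practical video services depend on compression ratios of two orders of magnitude or more.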
Thus, by removing the redundancy in these three domains, it is possible to achieve high compression of the resulting data at the cost of a certain amount of visual information; this loss, however, is not highly noticeable, because the human visual system is not sensitive to this type of visual degradation (Richardson, 2003). The research area of video compression has consequently been very active in recent years, producing various algorithms and techniques for video coding (International Telecommunications Union [ITU], 1993; ITU, 2005a, 2005b; Moving Picture Experts Group [MPEG], 1998; MPEG, 2005a, 2005b). In general, video compression techniques can be classified into two classes: (1) lossy and (2) information-preserving (lossless). The lossless methods, although maintaining the video quality of the original/uncompressed signal, do not achieve high compression ratios, while the lossy ones compress the initial raw video signal far more efficiently at the cost of degrading the perceived quality of the video service. Lossy video coding techniques are widely used, in contrast to lossless ones, due to this better compression performance. More specifically, by enhancing their encoding algorithms, the latest coding methods strive both to compress the data more efficiently and to maintain the perceived quality of the encoded signal at high levels. In this framework, many of these coding techniques and algorithms have been standardized, thereby encouraging interoperability between products designed and developed by different manufacturers. This article deals with the fundamentals of the lossy video coding process that are common to the great majority of today's video coding standards and techniques.
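As a minimal sketch of how temporal redundancy is exploited (a simplified illustration, not the algorithm of any particular standard), an encoder can transmit the difference between consecutive frames instead of each full frame; for largely static scenes the residual is mostly zeros and compresses far better:

```python
# Illustrative frame-differencing sketch: exploit temporal redundancy by
# coding each frame as its difference from the previous one.

import numpy as np

def temporal_residuals(frames):
    """Yield the first frame intact, then frame-to-frame differences."""
    prev = None
    for frame in frames:
        yield frame if prev is None else frame - prev
        prev = frame

# Two nearly identical 4x4 "frames": only one pixel changes between them.
f0 = np.zeros((4, 4), dtype=np.int16)
f1 = f0.copy()
f1[2, 2] = 5

residuals = list(temporal_residuals([f0, f1]))
# The second residual is all zeros except the single changed pixel, so an
# entropy coder needs far fewer bits for it than for the full frame.
print(np.count_nonzero(residuals[1]))  # 1
```

Real standards refine this idea with motion-compensated prediction, transform coding of the residual, and entropy coding, but the underlying principle is the same: transmit only what changed.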
Chapter Preview


The majority of the compression standards have been proposed by the ITU and the International Organization for Standardization (ISO), which have introduced standards including H.261, H.263, H.263+, H.263++, MPEG-1, MPEG-2, MPEG-4, and H.264/MPEG-4 Advanced Video Coding (AVC).

Some of the aforementioned standards were developed in partnership between the ITU and MPEG, building on similar coding techniques that each body had developed separately.

Key Terms in this Chapter

Pixel: A pixel is the smallest sample of a digital image or video frame.

Video Codec: A video codec is the device or software that performs the compression and decompression of digital video.

Moving Picture Experts Group (MPEG): MPEG is a working group of ISO charged with the development of audiovisual encoding standards. MPEG includes many members from various industries and universities related to audiovisual coding research.

Integrated Services Digital Network (ISDN): ISDN is a type of circuit-switched telephone network system designed to allow digital transmission of voice and data over ordinary copper telephone wires, resulting in better quality and higher speeds than available with analog systems.

Multimedia: Multimedia is content that combines several different media types (e.g., text, audio, graphics, animation, video).

Video Coding: Video coding is the process of compressing and decompressing a raw digital video sequence.

Frame: A frame is one of the many still images that, shown in sequence, compose a video signal.

Bit Rate: Bit rate is the rate at which bits are transferred over a given physical medium. It is quantified in bits per second (bit/s).
