A Survey on Video Coding Principles and Standards

H. Koumaras (Institute of Informatics and Telecommunications, Greece) and M.A. Kourtis (Athens University of Economic and Business, Greece)
Copyright: © 2013 | Pages: 27
DOI: 10.4018/978-1-4666-2660-7.ch001

Abstract

This chapter presents a general overview of video coding principles and standards. It begins with a brief explanation of the video coding process and analyzes its discrete components using specific examples. It then proceeds to an overall outline of the most important MPEG video coding standards and their individual profiles/versions, and of the novel High Efficiency Video Coding (HEVC) standard. Additionally, the chapter describes the objective and subjective video quality evaluation methods that are widely used in video quality assessment. The chapter concludes with a report on future trends and developments in the field of video processing.
Chapter Preview

Principles Of Video Coding

This section presents the basic principles of the video coding procedure, which the reader must understand in order to follow the rest of this book. All forms of video coding that have compression as a primary goal try to minimize redundancy in the media. A video consists of a number of frames, i.e., separate pictures, which, when projected one after the other at a particular rate, give the human eye the impression of continuous movement. It follows that there are two kinds of redundancy: spatial redundancy and temporal redundancy. Spatial redundancy is exploited by intraframe coding techniques, which predict each pixel from similar neighboring pixels of the same frame. Temporal redundancy is exploited by interframe coding, which uses past and future frames to encode the current frame.
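
As an illustration of the two redundancy types, the following Python sketch computes a simple intraframe prediction residual (each pixel predicted from its left neighbor) and an interframe residual (each pixel predicted from the co-located pixel of the previous frame). The frame contents, the mid-grey fallback value, and the absence of motion search are illustrative assumptions, not part of any standard.

```python
# Toy illustration of spatial vs. temporal redundancy removal.
# The frame data, the mid-grey fallback (128) and the lack of motion search
# are illustrative assumptions, not taken from any coding standard.
import numpy as np

def spatial_residual(frame: np.ndarray) -> np.ndarray:
    """Intraframe prediction: predict each pixel from its left neighbour
    within the same frame and keep only the prediction error."""
    predicted = np.empty_like(frame)
    predicted[:, 0] = 128                   # no left neighbour: assume mid-grey
    predicted[:, 1:] = frame[:, :-1]        # predict from the pixel to the left
    return frame.astype(np.int16) - predicted.astype(np.int16)

def temporal_residual(current: np.ndarray, previous: np.ndarray) -> np.ndarray:
    """Interframe prediction: predict each pixel from the co-located pixel
    in the previous frame (no motion estimation in this sketch)."""
    return current.astype(np.int16) - previous.astype(np.int16)

# In a slowly changing scene the residuals are mostly zeros, which the later
# quantization and entropy stages can compress very efficiently.
prev = np.full((4, 8), 100, dtype=np.uint8)
curr = prev.copy()
curr[2, 3] += 5                             # one pixel changed between frames
print(temporal_residual(curr, prev))        # almost entirely zeros
print(spatial_residual(curr))               # prediction errors within one frame
```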

Therefore, video compression techniques are divided into two categories based on the redundancy type. The temporal stage exploits the similarities between successive frames in order to reduce the temporal redundancy in a video sequence. The spatial stage exploits similarities located within the same frame, thereby reducing the spatial redundancy. The output parameters of the temporal and spatial stages are then transformed, quantized, and compressed by an entropy encoder, which removes the statistical redundancy in the data and produces an even more compressed video stream. Thus, all video coding standards are based on the same basic coding scheme, which briefly consists of the following phases: the temporal, the spatial, the transform, the quantization, and the entropy coding phase, as sketched below.
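
The following Python sketch shows how the transform, quantization, and entropy phases fit together for a single residual block. The 8×8 block size, the flat quantization step of 16, and the toy run-length stage standing in for a real entropy coder are assumptions made for illustration; this is a minimal sketch of the chain, not the algorithm of any particular standard.

```python
# Minimal sketch of the transform / quantization / entropy chain.
# Block size, quantization step and the run-length "entropy" stage are
# illustrative assumptions, not the parameters of any real codec.
import numpy as np
from scipy.fft import dctn

def encode_block(residual_block: np.ndarray, q_step: int = 16):
    # Transform stage: the 2-D DCT concentrates block energy in few coefficients.
    coeffs = dctn(residual_block.astype(float), norm="ortho")
    # Quantization stage: the only lossy step; small coefficients collapse to zero.
    quantized = np.round(coeffs / q_step).astype(int)
    # Entropy stage (stand-in): (run_of_zeros, value) pairs mimic how real
    # entropy coders exploit the long runs of zero coefficients.
    symbols, zero_run = [], 0
    for v in quantized.flatten():
        if v == 0:
            zero_run += 1
        else:
            symbols.append((zero_run, v))
            zero_run = 0
    return quantized, symbols

block = np.random.default_rng(0).integers(-5, 6, size=(8, 8))  # toy residual block
q, syms = encode_block(block)
print(len(syms), "non-zero symbols out of", q.size)
```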

Finally, it should also be noted that in simpler systems every frame is coded separately; such purely intraframe coding is the most robust approach, because the loss of one frame does not affect the decoding of the other frames. Due to the simplicity of these systems, this frame-by-frame methodology is not analyzed separately in this chapter, since it is already part of the more complex coding systems that use both spatial and temporal techniques in order to achieve a higher video compression ratio with greater effectiveness and efficiency.
