Data Partitioning: A Video Source-Coding Technique for Layered Video and Error Resilience

Martin Fleury, Laith Al-Jobouri
Copyright: © 2016 | Pages: 41
DOI: 10.4018/978-1-4666-8850-6.ch004

Abstract

Data partitioning is a source-coding technique that has existed in one form or another in the standardized hybrid video codecs up to recent times. In essence, it is a method of prioritizing coding data, resulting in video layers that can be separately communicated across an error-prone network. The Chapter includes the background that led to data partitioning being included in the standardized codecs. As this Chapter discusses, it differs from scalable video because the output from conventional, single-layer encoders can be converted to multi-layer form, rather than requiring specialist codec extensions. It is shown that the methods of forming the partitions so far employed are: dividing transformed, residual coefficients into two or more layers; and dividing coded data by function into headers, intra-, and inter-coded residuals to form three or more layers. It is also shown how layering naturally combines with protection by channel coding. Used as an error-resilience tool, data partitioning presents a low-overhead method, suitable for benign as well as bad channels. In the three-layer variety, error concealment at the decoder can significantly aid the reconstruction of damaged video frames. The Chapter will be of particular interest to developers charged with making a mobile, low-latency, or interactive video streaming application robust, as they can select from the data-partitioning methods and apply them to open-source code of the recent High Efficiency Video Coding (HEVC) codec standard. Broadcast TV can also benefit from data partitioning. Developers of codecs will additionally find in this Chapter a guide to research and ideas about data partitioning that could be incorporated into future codecs.

Introduction

This Chapter will explore data partitioning, a video source-coding technique, through a comprehensive review of its variations and by reference to a number of research papers by the authors and many others, which serve as reference or illustrative case studies. Compressed video streams are vulnerable to errors owing to the substantial removal of temporal redundancy across a video sequence of frames, as well as the sequential nature of entropy coding, which occurs as the last processing stage of hybrid video codecs (Richardson, 2002; Ghanbari, 2003; Rao et al., 2013). Data partitioning involves arranging a compressed video stream according to the reconstruction priority of the coding data contained in the stream. If the data are placed in different parts of a network packet or in completely separate network packets, then error resilience is provided to these fragile video streams at a low cost in transmission overhead (Wang et al., 2000; Stockhammer & Zia, 2007). The technique is particularly valuable in error-prone wireless networks, where it can be combined (Al-Jobouri et al., 2012) with application-layer Forward Error Correction (FEC) (Morelos-Zaragoza, 2006) or with physical-layer prioritized transmission (Barmada et al., 2005).
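To make the idea concrete, the following Python sketch (a hypothetical illustration, not code from any standardized codec) separates the syntax elements of one coded slice into the three functional partitions used by H.264/AVC-style data partitioning, namely partition A (headers and motion vectors), partition B (intra-coded residuals), and partition C (inter-coded residuals), and then emits each partition as its own packet. The `SyntaxElement` record and the element kinds are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical record for one coded syntax element; a real entropy coder
# emits such elements as its final processing stage.
@dataclass
class SyntaxElement:
    kind: str       # 'header', 'mv', 'intra_residual' or 'inter_residual'
    payload: bytes

# Map each element kind to an H.264/AVC-style partition:
#   A = headers and motion vectors, B = intra residuals, C = inter residuals.
PARTITION_OF = {
    'header': 'A',
    'mv': 'A',
    'intra_residual': 'B',
    'inter_residual': 'C',
}

def partition_slice(elements):
    """Group the coded elements of one slice into partitions A, B and C."""
    partitions = {'A': bytearray(), 'B': bytearray(), 'C': bytearray()}
    for element in elements:
        partitions[PARTITION_OF[element.kind]].extend(element.payload)
    return partitions

def packetize(partitions):
    """Yield one (partition id, payload) packet per non-empty partition,
    highest priority first, so each can be sent and protected separately."""
    for part_id in 'ABC':
        payload = bytes(partitions[part_id])
        if payload:
            yield part_id, payload
```

Placing each partition in its own packet means that the loss of a low-priority packet does not prevent decoding of the higher-priority data it was separated from.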

Another way to view data partitioning is as a form of video layering in which, instead of layering by video quality, spatial resolution, and/or temporal scalability, the video data are layered according to their priority when reconstructing the video at the decoder. This form of layering allows prioritized network communication or transmission, possibly supported by Unequal Error Protection (UEP) (Heinzelman et al., 1999). Layering by data partitioning is not a scalable form of coding. In fact, it requires only a single-layer encoder/decoder pair (codec) rather than a specialized scalable codec or an extension to a single-layer codec. Scalable coding, by contrast, normally incurs a bit-rate overhead (refer to the next Section), and its computational performance should also be checked. Consequently, owing also to possible software complexity, it seems that the majority of deployed codec software or hardware will always be single-layer, though for a deployed, multi-layer video-conferencing counter-example see Eleftheriadis et al. (2006).
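As a rough illustration of UEP over data-partitioned layers, the sketch below assigns a stronger, lower-rate channel code to the highest-priority partition, so that the redundancy budget is spent where a loss would be most damaging. The code rates are illustrative assumptions, not values from the cited work.

```python
import math

# Illustrative FEC code rates per partition (lower rate = more parity).
# These rates are assumptions for the sketch, not drawn from the Chapter.
FEC_RATE = {'A': 1 / 2, 'B': 2 / 3, 'C': 4 / 5}

def parity_bytes(part_id: str, payload_len: int) -> int:
    """Parity bytes a rate-k/n channel coder would add to this partition."""
    coded_len = math.ceil(payload_len / FEC_RATE[part_id])
    return coded_len - payload_len

# A 1000-byte payload gains 1000 parity bytes in partition A (rate 1/2),
# 500 in partition B (rate 2/3), but only 250 in partition C (rate 4/5).
for pid in 'ABC':
    print(pid, parity_bytes(pid, 1000))
```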

There is also a challenge to error-resilience provision implied if scalable video is employed. Stemming from JPEG2000 (Taubman & Marcellin, 2002), the Scalable Video Coding (SVC) extension (Schwarz et al., 2007) to the H.264/Advanced Video Coding (AVC) standard codec (Sullivan et al., 2004) embedded scalability into the bitstream, allowing the level of scalability to be determined after the video has been compressed. This implies that the complete embedded scalable bitstream is transmitted over a network until it reaches an intermediate device, which then extracts the part of the bitstream that is suitable for a targeted end device in terms of display resolution, required video quality, or processing capability. Consequently, this form of scalability (in H.264/SVC) primarily aims at flexibility rather than at error resilience through the ability to send different layers over different channels. In contrast, data-partitioned layering allows separate layers to be easily sent over different communication channels.
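A minimal sketch of that last point, assuming UDP transport and a hypothetical port-per-partition mapping (so that a network could, for instance, carry or prioritize the partition-A flow separately from the others):

```python
import socket

# Hypothetical mapping of each partition to its own UDP port ('channel').
CHANNEL_PORT = {'A': 50000, 'B': 50001, 'C': 50002}

def send_partitions(packets, host='192.0.2.1'):
    """Send each (partition id, payload) packet over its own channel.
    Loss on the B or C channel leaves the vital A-channel data untouched."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for part_id, payload in packets:
            sock.sendto(payload, (host, CHANNEL_PORT[part_id]))
```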
