Wireless Video Sensor Networks: Advances in Distributed Video Coding

Abdelrahman Elamin (Universiti Teknologi PETRONAS, Malaysia), Varun Jeoti (Universiti Teknologi PETRONAS, Malaysia) and Samir Belhouari (Universiti Teknologi PETRONAS, Malaysia)
DOI: 10.4018/978-1-61350-153-5.ch003
Wireless Video Sensors Networks (WVSNs) generally suffer from the constraint that their sensor nodes must consume very little power. In this rapidly emerging video application, the traditional video coding architecture cannot be used due to its high encoding complexity. Thankfully, some theorems from Information Theory suggest that this problem can be solved by shifting the encoder tasks, partially or totally, to the decoder. These theorems are employed in the design of so-called Distributed Video Coding (DVC) solutions, the subject matter of this chapter. The chapter not only introduces the DVC but also reviews some important developments of the popular Stanford Wyner-Ziv coding architecture and caps it with latest research trends highlighting a Region-Based-Wyner-Ziv video codec that enables low-complexity encoding while achieving high compression efficiency.
Introduction: WVSNs and DVC

Wireless video sensor networks (WVSNs) are fast finding many applications, chief among them surveillance in ad-hoc deployment scenarios. Examples of such surveillance applications abound: monitoring a soccer stadium, a traffic route, or a gathering of people in a public place. The video sensor nodes are deployed in an ad-hoc manner and are often disposable. Such video sensors cannot afford the encoding complexity of traditional video compression, yet they still require good compression efficiency.

Today’s digital video coding paradigm, represented by the ITU-T and MPEG standards, relies mainly on a hybrid of block-based transform and inter-frame predictive coding. In this framework, the encoder exploits both the temporal and spatial redundancies present in the video sequence, a complex process that requires a noticeable amount of resources (power and memory). As a result, all standard video encoders have much higher computational complexity than their decoders (typically five to ten times more complex) (Girod et al., 2005), mainly because of the motion estimation task used to exploit temporal correlation. Traditional video coding is therefore no longer applicable to these WVSN applications; an appropriate video coding paradigm for them must have low encoding complexity. Lower encoding complexity can be achieved by moving some of the encoder tasks, particularly the complex motion estimation process, to the decoder.

Two notable theorems from information theory have paved the way for a new video coding paradigm known in the literature as Distributed Video Coding (DVC), which allows low encoding complexity while approaching the efficiency of traditional video coding schemes. These are the Slepian-Wolf theorem and the Wyner-Ziv theorem (Slepian & Wolf, 1973; Wyner & Ziv, 1976). They suggest that, for correlated sources X and Y, a separate-encoding/joint-decoding system can approach the efficiency of a joint encoding-decoding system when information about the correlation between X and Y is available at the decoder. The practical application of DVC (Aaron & Girod, 2002; Aaron, Zhang, & Girod, 2002; Brites, 2005; Fowler, 2005; Puri & Ramchandran, 2003) is referred to as Wyner-Ziv (WZ) video coding, in which an estimate of the original frame, herein called the “side information,” is available at the decoder. Compression is achieved by sending only the extra information needed to correct this estimate. An error-correcting code is often used under the assumption that the estimate is a noisy version of the original frame, so the correction can be made with a few extra parity bits, which determine the rate. For modeling purposes, a virtual channel is assumed to represent the estimation noise in the side information (Westerlaken, Gunnewiek, & Lagendijk, 2005).
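The correction principle above can be sketched with a toy example. The sketch below uses syndrome coding with a Hamming(7,4) code: the encoder transmits only the 3-bit syndrome of a 7-bit source block, and the decoder combines it with side information that differs from the source in at most one bit. This is a minimal illustration of the Slepian-Wolf idea, not the powerful turbo or LDPC codes used in practical WZ codecs; all names here are illustrative.

```python
# Parity-check matrix of the Hamming(7,4) code. Column j encodes the
# integer j+1 in binary (row 0 = least significant bit), so a single-bit
# error at position j+1 produces that integer as its syndrome.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def sw_encode(x):
    """Encoder: transmit only the 3-bit syndrome of the 7-bit block."""
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

def sw_decode(s, y):
    """Decoder: correct side information y (assumed to differ from the
    source in at most one bit) using the received syndrome s."""
    sy = sw_encode(y)                          # syndrome of the side information
    diff = [a ^ b for a, b in zip(sy, s)]      # syndrome of the virtual-channel error
    x_hat = list(y)
    if any(diff):
        pos = diff[0] + 2 * diff[1] + 4 * diff[2]   # 1-indexed error position
        x_hat[pos - 1] ^= 1                         # flip the erroneous bit
    return x_hat

x = [1, 0, 1, 1, 0, 0, 1]        # source block at the sensor node
y = list(x); y[3] ^= 1           # side information: x with one bit flipped
print(sw_decode(sw_encode(x), y) == x)   # prints True
```

The encoder does no search or prediction at all; the 7 source bits are recovered from only 3 transmitted bits because the decoder already holds a correlated estimate, which is exactly the complexity shift that WZ video coding exploits at frame scale.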

This chapter mainly aims to introduce the WZ video codec as a lightweight distributed video compression scheme. Section 1 covers the theoretical foundation of the distributed source coding (DSC) principle and the two major information-theory theorems, namely the Slepian-Wolf and Wyner-Ziv theorems. Section 2 explains how source coding, channel coding, and estimation interplay to construct a distributed video coding solution. Section 3 presents a detailed review of the state-of-the-art solution, known as feedback-channel DVC, developed by Girod’s group at Stanford University (Girod, 2003); this system represents a good solution for WVSNs due to its low encoding complexity. Section 4 presents some of the architectural developments derived from the initial Stanford architecture. Section 5 first highlights some performance issues in the state-of-the-art solution, then analyzes various research efforts in the literature, and is capped with a solution called Region-Based DVC. Finally, since WVSNs produce multiview video sequences, the chapter introduces how to tailor the DVC system to match the features of such sequences.
