Bitrate Adaptation of Scalable Bitstreams in a UMTS Environment

Nicolas Tizon (TELECOM ParisTech, France) and Béatrice Pesquet-Popescu (TELECOM ParisTech, France)
DOI: 10.4018/978-1-61692-831-5.ch007

Abstract

In this chapter, we propose a complete streaming framework for a wireless network environment, in particular 3G/UMTS. We describe the choices one can make in terms of network architecture and then focus on the input parameters used to evaluate network conditions and to perform the adaptation. Particular attention is devoted to the protocol information one can exploit in order to infer the channel state. Throughout, each implementation choice is presented as a compromise between industrial feasibility and adaptation efficiency.
Chapter Preview

Introduction

Bitrate adaptation is a key issue for streaming applications over throughput-limited networks with error-prone channels, such as wireless networks (Girod, Kalman, Liang & Zhang, 2004). Concerning the availability of real-time values of network parameters, such as resource allocation among users or channel error rate, it is worth noticing that today, in the majority of practical cases, the media server is far away from the bottleneck links, which prevents real-time adaptation. Classically, only long-term feedbacks like RTCP reports can be used to perform estimations. At the same time, the emergence of recent source coding standards like the scalable extension of H.264/AVC (Wiegand, Sullivan, Bjontegaard & Luthra, 2003; ISO/IEC & ITU-T Rec., 2003), namely Scalable Video Coding (SVC) (Schwarz, Marpe & Wiegand, 2006; Reichel, Schwarz & Wien, 2007), which allows a wide range of spatio-temporal and quality layers to be encoded in the same bitstream, offers new adaptation facilities (Wang, Hannuksela & Pateux, 2007; Amonou, Cammas, Kervadec & Pateux, 2007). The standardized scalability dimensions (spatial, temporal and SNR) can also be complemented by a Region Of Interest (ROI) approach (Tizon & Pesquet-Popescu, 2007; Tizon & Pesquet-Popescu, 2008) in order to improve the perceived quality. The concept of scalability, when exploited for dynamic channel adaptation, raises at least two kinds of issues: how to measure network conditions, and how to differentiate transmitted data in terms of their contribution to distortion?
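The layer-selection side of this idea can be illustrated with a minimal sketch: given an estimate of the available throughput, keep the base layer and greedily add enhancement layers in dependency order while the cumulative bitrate fits. The layer names and bitrates below are purely illustrative assumptions, not values from the chapter.

```python
# Hypothetical SVC layer table in dependency order (bitrates in kbps are
# illustrative assumptions; a real bitstream carries this in its high-level syntax).
LAYERS = [
    ("base",       128),
    ("temporal+1",  64),
    ("SNR+1",      128),
    ("spatial+1",  256),
]

def select_layers(estimated_kbps):
    """Greedily accumulate layers while the cumulative bitrate stays within
    the estimated throughput; the base layer is always kept."""
    chosen, total = [], 0
    for name, kbps in LAYERS:
        if total + kbps > estimated_kbps and chosen:
            break
        chosen.append(name)
        total += kbps
    return chosen, total
```

For instance, with an estimated throughput of 400 kbps the sketch keeps the base layer plus the first two enhancement layers and stops before the spatial layer would exceed the budget.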

Firstly, in order to select the appropriate subset of scalable layers, a specific network entity must be able to parse the SVC high-level syntax and to derive the transmission conditions from protocol feedbacks. Hence, we need to define an architecture specifying where the adaptation operations are located and which protocols are used to infer the channel characteristics. In this chapter, we propose and compare two different network architectures, which comply with different practical requirements. The first approach leaves the existing network infrastructure unmodified and keeps the adaptation operations in the server, which exploits long-term feedbacks from the client, such as RTCP reports (Wenger, Sato, Burmeister & Rey, 2006). To our knowledge, the framework proposed by Baldo, Horn, Kampmann and Hartung (2004) is one of the most developed solutions of this kind: it controls the sent data rate and infers network and client parameters from RTCP feedbacks. The main purpose of the described algorithm is to avoid packet congestion in the network and client buffer starvation. This first solution can be integrated very easily into an existing video streaming framework, but its adaptation abilities are quite limited.
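To make the kind of long-term feedback involved concrete, the sketch below shows two standard quantities a server can derive from RTCP receiver reports: the round-trip time computed from the LSR/DLSR fields (per RFC 3550) and a smoothed loss estimate from the "fraction lost" byte. The smoothing factor is an assumption for illustration, not a value from the chapter.

```python
def rtt_from_rr(lsr, dlsr, arrival_ntp):
    """Classic RTCP round-trip estimate (RFC 3550): the sender subtracts the
    last-SR timestamp (LSR) and the receiver's reporting delay (DLSR) from the
    arrival time of the receiver report. All inputs are in NTP 'middle 32 bits'
    format, i.e. units of 1/65536 s; the result is in seconds."""
    return (arrival_ntp - lsr - dlsr) / 65536.0

def smoothed_loss(prev_estimate, fraction_lost_byte, alpha=0.3):
    """EWMA over the RR 'fraction lost' field (fixed point, x/256), giving a
    simple long-term channel-quality indicator (alpha is an assumed choice)."""
    sample = fraction_lost_byte / 256.0
    return (1 - alpha) * prev_estimate + alpha * sample
```

Since such reports typically arrive only every few seconds, estimates of this type track slow trends (congestion building up, a user moving to a worse cell) rather than instantaneous channel state, which is precisely why the server-side approach adapts coarsely.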

On the other hand, the second approach is a video streaming system that uses SVC coding to adapt the input stream at the radio link layer as a function of the available bandwidth, thanks to a Media Aware Network Element (MANE) (Wenger, Wang & Schierl, 2009) that assigns priority labels to video packets. A generic wireless multi-user video streaming system of this kind was proposed by Liebl, Schierl, Wiegand and Stockhammer (2006). In their approach, a drop-priority-based (DPB) radio link buffer management strategy (Liebl, Jenkac, Stockhammer & Buchner, 2005) is used to keep a finite queue before the bottleneck link. The main drawback of this method is that the efficiency of the source bitrate adaptation depends on buffer dimensioning; moreover, video packets are transmitted without considering their reception deadlines. In this chapter, we will discuss where to introduce the MANE in order to enable both network condition estimation and SVC bitstream parsing. This approach is no longer compliant with the OSI layering, but it provides finer adaptation capabilities.
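The drop-priority idea can be sketched as a finite queue that, when full, evicts the least important packet rather than the newest arrival. This is a simplified illustration of the principle, not the cited authors' implementation; the priority labels would come from the MANE based on the SVC layer each packet belongs to (base layer = lowest drop priority).

```python
import heapq
import itertools

class DropPriorityBuffer:
    """Finite radio-link queue sketch: on overflow, evict the queued packet
    with the highest drop priority (least important SVC layer)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []                 # entries: (-drop_priority, seq, packet)
        self._seq = itertools.count()   # tie-breaker so packets never compare

    def enqueue(self, packet, drop_priority):
        """Return True if the packet was queued, False if it was dropped."""
        if len(self._heap) >= self.capacity:
            worst = -self._heap[0][0]   # highest drop priority currently queued
            if drop_priority >= worst:
                return False            # new packet no more important: drop it
            heapq.heappop(self._heap)   # evict the least important packet
        heapq.heappush(self._heap, (-drop_priority, next(self._seq), packet))
        return True

    def contents(self):
        return sorted(p for _, _, p in self._heap)
```

Note that eviction here looks only at importance, not at playout deadlines, which mirrors the drawback mentioned above: a base-layer packet may sit in the queue past the point where the client could still use it.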
