Techniques and Tools for Adaptive Video Streaming

Martin Fleury, Laith Al-Jobouri
DOI: 10.4018/978-1-4666-2833-5.ch004


Adaptive video streaming is becoming increasingly necessary as quality expectations rise, while congestion persists and the extension of the Internet to mobile access creates new sources of packet loss. This chapter considers several techniques for adaptive video streaming, including live HTTP streaming, bitrate transcoding, scalable video coding, and rate controllers. It also includes case studies of congestion control over the wired Internet using fuzzy logic, statistical multiplexing to fit constant-bitrate streams to the available bandwidth, and adaptive error correction for the mobile Internet. To guide the reader, the chapter makes a number of comparisons between the main techniques, for example explaining why, at present, pre-encoded video may be better streamed by adaptive simulcast than by transcoding or scalable video coding.
Chapter Preview


The scope of video streaming is currently expanding in two directions: towards delivery to mobile devices (Kumar, 2007; Schaar & Chou, 2007; Zhang, et al., 2008; Rupp, 2009), and towards streaming of High-Definition (HD) video (Park, et al., 2006; Zhu, et al., 2007; Bing, 2010). Both developments imply adaptive streaming (Ortega & Wang, 2007) to cope with error-prone wireless channels and with bandwidth fluctuating due to congestion. In fact, the two conditions may be combined, as laptops now exist with HD displays, which compensate for the reduced viewing distance when reading a typical mobile display. In the case of streaming over wireless channels, packet corruption can seriously damage video quality because of the temporal dependency of video compression. In the case of HD video, changes in bandwidth may make it impossible to stream at full resolution (whether spatial resolution, temporal resolution, or Signal-to-Noise Ratio [SNR], i.e. the video quality) while congestion persists. A further possibility (apart from spatial, temporal, and SNR adaptation) is to adapt the level of error protection (Al-Jobouri, et al., 2012) according to channel conditions. Thus, adaptive video streaming needs to be addressed in today's environment; the question is which form of adaptive streaming to adopt.
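The idea of adapting error protection to channel conditions can be illustrated with a minimal sketch. The function below (a hypothetical helper, not the scheme of Al-Jobouri et al.) chooses a number of parity packets per block of source packets from a measured packet-loss rate, with a 2x headroom factor as an assumed safety margin:

```python
import math

def fec_redundancy(loss_rate, k=16, max_parity=16):
    """Choose parity packets per k source packets so that expected
    losses are covered with headroom (illustrative sketch of
    channel-adaptive error protection, not the chapter's scheme)."""
    expected_losses = loss_rate * k
    parity = math.ceil(2 * expected_losses)  # 2x headroom: an assumption
    return min(max_parity, max(1, parity))

print(fec_redundancy(0.05))  # 5% loss over 16 packets -> 2 parity packets
```

A real scheme would also weigh the bandwidth cost of the parity packets against the rate left for the video payload, which is exactly the trade-off that makes error-protection adaptation a form of adaptive streaming.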

In respect of pre-encoded video, the commercial world has largely opted for simulcast (Conklin, et al., 2001) as the means of matching the streaming rate delivered across access networks to the user's display. The server stores the same content encoded at several rates and, according to the available bandwidth, can switch between the streams at anchor frames. However, if the user chooses just one rate at streaming start-up, this method can result in service interruptions when conditions change. To cope with changing conditions, variants of HTTP live streaming (Stockhammer, 2011) have been introduced and are on the point of standardization, for example as Dynamic Adaptive Streaming over HTTP (DASH) (Stockhammer, et al., 2011) (notice that, despite its marketing name, 'Live Streaming' is not suitable for live video). In that form of streaming, the streaming rate can be changed dynamically, though the granularity of rate changes is coarse (Cicco, et al., 2010).
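The coarse-grained rate selection of HTTP adaptive streaming can be sketched as follows. This is a minimal illustration, assuming a fixed ladder of pre-encoded bitrates and a throughput estimate from previous segment downloads; the function name and the 0.8 safety margin are assumptions, and deployed DASH clients use considerably richer heuristics (buffer occupancy, trend estimation, and so on):

```python
def select_representation(bitrates, throughput_bps, safety=0.8):
    """Pick the highest pre-encoded bitrate that fits within a
    safety margin of the measured throughput; fall back to the
    lowest rate when even that does not fit."""
    usable = safety * throughput_bps
    candidates = [b for b in sorted(bitrates) if b <= usable]
    return candidates[-1] if candidates else min(bitrates)

# Example ladder (bps), as stored for simulcast/DASH delivery
ladder = [400_000, 800_000, 1_500_000, 3_000_000]
print(select_representation(ladder, 2_000_000))  # -> 1500000
```

Because the client can only jump between rungs of the ladder at segment boundaries, adaptation is coarse in both rate granularity and reaction time, which is the point made by Cicco et al. (2010).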

Rather than switching at anchor frames (I-frames), streaming with switching frames, as introduced in the H.264/Advanced Video Coding (AVC) standard codec (Karczewicz & Kurceren, 2003), allows a more flexible form of rate switching. However, switching frames are not compatible with widely deployed multimedia players. Depending on the latency demands of the live streaming application, it may also be possible to change the streaming rate at the codec by altering the quantization parameter. In comparison, bitrate transcoders (Assunção & Ghanbari, 1997; Ahmad, et al., 2005; Sun, et al., 2005) are able to change the rate of both live and pre-encoded video. However, most bitrate transcoders were developed for the MPEG-2 codec, while non-linearities within an H.264/AVC codec are a deterrent to bitrate transcoding. It is an interesting coincidence that the in-loop deblocking filter of H.264/AVC, one source of such non-linearity, was made mandatory in the standard, and that soon afterwards the scalable extension of H.264/AVC was developed with the aim of removing the need for transcoders. Nevertheless, transcoding continues to be used: H.264/AVC spatial or temporal resolution transcoding remains entirely possible, and the cost of a transcoder bank can be traded against the need to store differing versions of the stream in the simulcast method. Transcoders can also translate between different codec formats and are not limited by the multimedia player format at the receiver device.
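Changing the streaming rate at the codec by altering the quantization parameter (QP) can be sketched with a simple controller. The sketch below relies on the well-known rule of thumb that in H.264/AVC an increase of 6 in QP roughly halves the output bitrate; the function name and the rounding policy are assumptions, and real encoder rate control is far more elaborate (per-macroblock decisions, rate-distortion models, buffer constraints):

```python
import math

def adjust_qp(qp, measured_bps, target_bps, qp_min=0, qp_max=51):
    """Nudge the quantization parameter so the encoder output rate
    tracks a target: since +6 QP roughly halves the bitrate in
    H.264/AVC, the correction is logarithmic in the rate ratio.
    QP is clamped to the standard's 0..51 range."""
    delta = 6.0 * math.log2(measured_bps / target_bps)
    return int(min(qp_max, max(qp_min, round(qp + delta))))

print(adjust_qp(30, 2_000_000, 1_000_000))  # output double the target -> QP 36
```

Because this adapts the rate at encode time, it suits live streaming; for pre-encoded content the same effect requires a transcoder, which is the comparison the paragraph above draws.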
