Free-Viewpoint 3DTV: View Interpolation, Coding, and Streaming

S. Zinger (Eindhoven University of Technology, The Netherlands), L. Do (Eindhoven University of Technology, The Netherlands), P. H. N. de With (Eindhoven University of Technology, The Netherlands), G. Petrovic (Eindhoven University of Technology, The Netherlands) and Y. Morvan (Eindhoven University of Technology, The Netherlands)
Copyright: © 2013 |Pages: 19
DOI: 10.4018/978-1-4666-2660-7.ch009


Free-ViewPoint (FVP) interpolation creates a new view between existing reference views. Applied to 3D multi-view video sequences, it enables two important applications: (1) an FVP service that lets the user interactively select the viewing point of the scene; (2) improved compression of multi-view video sequences by using view prediction for inter-view coding. In this chapter, the authors provide an overview of the essential steps for 3D free-view video communication: free-viewpoint interpolation techniques, a concept for free-view coding, and a scalable free-view video streaming architecture. To facilitate free-viewpoint viewing for the user, the chapter introduces free-viewpoint interpolation techniques and the concept of warping. The authors assume that 3D video is represented by texture and depth images available for each view, so that Depth Image Based Rendering (DIBR) can be applied, which uses the depth signal as an important cue for geometry information and 3D reconstruction. The authors analyze the interpolation problems that arise in FVP interpolation, such as cracks, ghost contours, and disocclusions, and propose several solutions to improve the image quality of the synthesized view. They then present a standard approach to FVP rendering currently used by the research community, alongside their own FVP interpolation. Additionally, the authors show the use of FVP rendering for multi-view coding and streaming and discuss its gains and trade-offs. The chapter concludes with the state-of-the-art achievements and challenges of FVP rendering and a vision for the development of free-viewpoint services.
Chapter Preview


Three-dimensional (3D) video is nowadays broadly considered the successor to existing 2D HDTV technologies (Smolic, 2011). The depth in a 3D scene can be created with, e.g., stereo images, or by explicitly sending a depth signal or map in addition to the texture image. This means that the viewer can perceive depth while looking at a stereoscopic screen. Many movies are already recorded in a stereoscopic format today, and commercially available stereoscopic displays are strongly emerging. It is expected that stereoscopic 3D video will first establish its place in the market while standards for 3D extensions emerge in parallel, thereby paving the way for more advanced forms of 3D imaging. One of these interesting advanced forms is to virtually move through the scene in order to create different viewpoints. This feature, called multi-view video, has become a popular topic in coding and 3D research. Viewing a scene from different angles is an attractive feature for applications such as medical imaging (Zinger et al., 2009; Ruijters & Zinger, 2009), multimedia services (Kubota et al., 2007), and 3D reconstruction (Leung & Lovell, 2003). Since the number of cameras, and consequently also the number of viewing angles, is practically limited, research has been devoted to interpolating views between the cameras.

The chosen free viewpoint may not only be one of the available multi-view camera views, but also any viewpoint between these cameras. It will be possible to watch a soccer game or a concert from the viewpoint preferred by the user, where the viewpoint can be changed at any time. This interactivity adds complexity to the 3DTV system because it requires a smart rendering algorithm that allows free-viewpoint view interpolation.

To create an interactive free-viewpoint 3D TV system, several challenges have to be addressed: multi-view texture and depth acquisition, multi-view coding, transmission and decoding, and multi-view rendering (see Figure 1).

Figure 1.

Block diagram of a free-viewpoint 3DTV system


Each block of the diagram in Figure 1 represents an essential processing stage of active research in 3D vision. Let us briefly discuss these stages. With respect to data generation, there are various ways to create multi-view 3D video content. For example, a set of stereo-cameras may be installed around the scene. Multi-view 3D video can also be acquired by cameras that can produce depth maps for their views.

In multi-view video, a scene is captured by several cameras, which are typically positioned along an arc with their optical axes pointing to the center of the scene. For accurate free-viewpoint generation, the parameters that define the position and orientation of each camera (intrinsic and extrinsic) need to be extracted. This well-known process is called camera calibration.
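These calibration parameters are exactly what DIBR-style warping consumes: given a reference camera's intrinsics and pose plus a per-pixel depth value, a pixel can be back-projected to a 3D point and re-projected into a virtual camera. The following minimal sketch illustrates this per-pixel warp; the intrinsic matrix, poses, and the 5 cm baseline are illustrative assumptions, not values from the chapter, and a practical renderer would warp whole images and handle the resampling artifacts discussed later.

```python
import numpy as np

# Hypothetical camera parameters for illustration; in practice these
# come from camera calibration of the multi-view rig.
K = np.array([[500.0,   0.0, 320.0],   # focal lengths and principal point
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R_ref, t_ref = np.eye(3), np.zeros(3)   # reference camera at the world origin
R_virt = np.eye(3)
t_virt = np.array([0.05, 0.0, 0.0])     # virtual camera displaced along the baseline

def warp_pixel(u, v, depth):
    """Back-project a reference pixel with known depth to a 3D point,
    then re-project it into the virtual camera (DIBR warping)."""
    # Back-project: pixel -> 3D point in reference-camera coordinates
    p_cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Reference camera frame -> world frame -> virtual camera frame
    p_world = R_ref.T @ (p_cam - t_ref)
    p_virt = R_virt @ p_world + t_virt
    # Project onto the virtual image plane (perspective division)
    uvw = K @ p_virt
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# A pixel at the image center with 2 m depth lands at a horizontally
# shifted position in the virtual view, as expected for a lateral baseline.
print(warp_pixel(320, 240, 2.0))
```

Note that the horizontal displacement of the warped pixel is inversely proportional to its depth, which is why nearby objects shift more than the background and why gaps (disocclusions) open up behind foreground objects.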

The positioning of the cameras is an open question, because it influences not only the viewing experience, but also the approach to multi-view coding and view interpolation. This influence changes per scene, as it depends on the scene complexity and geometry. When the captured views are sufficiently far apart, the redundancy in the information they contain decreases, so the performance of coding and interpolation techniques that rely on inter-view redundancy is reduced accordingly.

3D display technology has been under development for several years and is now being introduced to consumers. Various technologies are applied in 3D displays: stereoscopic vision with active or passive glasses, and autostereoscopy with varying numbers of views presented to the user. Even though the algorithms for virtual view creation discussed in this chapter aim at a single virtual view, it is easy to extend them to the stereo images that better fit current 3DTV displays. These extensions to stereo output are introduced by Do et al. (2010b). The remaining processing stages – free-viewpoint (multi-view) interpolation, multi-view coding, and streaming – will be the subject of discussion in the sequel of this chapter.
