Tele-Immersive Collaborative Environment with Tiled Display Wall

Yasuo Ebara (Osaka University, Japan)
DOI: 10.4018/978-1-61520-871-5.ch007


In intellectual collaborative work between participants at remote sites over a WAN, CSCW systems have been used as general communication tools. In particular, the sharing of various high-quality digital contents, such as documents, computer graphics and visualization contents, and video streaming, between remote places is important so that participants can easily refer to, recognize, and analyze these contents. However, images magnified by a general projector or large-sized display are low-resolution, and sufficient content quality is not obtained. In this research, the author has constructed a tele-immersive collaborative environment with a tiled display wall. In this environment, the author has implemented an application that displays high-resolution live video streaming from a remote place on the tiled display wall. Using this application, the author displayed a clear video image of the remote place over a wide area. The author then conducted an experimental verification of the effect on eye-to-eye contact of changing the camera position on the frames of the LCDs in the tiled display wall, and collected substantial knowledge. Moreover, the author has attempted realistic display processing of high-resolution astronomical observation images and movie data, which has enabled observation of the entire observation data set across the whole tiled display wall.
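To display one high-resolution video stream across a tiled display wall, each frame must be partitioned into the regions shown by the individual LCDs. The sketch below illustrates that partitioning step only; the 4x3 wall geometry and the resolutions are illustrative assumptions, not the configuration described in the chapter.

```python
def tile_regions(frame_w, frame_h, cols, rows):
    """Map each LCD position (col, row) to its (x, y, w, h) crop
    rectangle within the full-resolution frame."""
    tile_w = frame_w // cols
    tile_h = frame_h // rows
    regions = {}
    for r in range(rows):
        for c in range(cols):
            # Each display node crops and shows only its own rectangle.
            regions[(c, r)] = (c * tile_w, r * tile_h, tile_w, tile_h)
    return regions

# Example: a 4096x2304 frame on a hypothetical 4x3 wall of 1024x768 LCDs.
regions = tile_regions(4096, 2304, 4, 3)
print(regions[(0, 0)])  # (0, 0, 1024, 768)
print(regions[(3, 2)])  # (3072, 1536, 1024, 768)
```

In practice the cropping is done per node after the stream is distributed, so each machine decodes or receives only the pixels it actually displays.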
Chapter Preview

Tele-immersion is defined as a new type of telecommunication medium in which virtual reality has been incorporated into video-conference systems (Towles, 2002; Sadagic, 2001; Park, 2000; Schreer, 2005; Craig, 2009). The goal of tele-immersion is to enable users in physically remote spaces to interact with one another in a shared space that mixes both local and remote realities, and to allow participants to share a mutual sense of presence.

In a 3D tele-immersion system, a user wears polarized glasses and a head tracker while a view-dependent scene is rendered in real time in 3D on a large stereoscopic display (Gibbs, 1999; Kauff, 2002; Kelshikar, 2003; Towles, 2003; Blundell, 2005). Ideally, there exists a seamless continuum between the users' experience of local and remote space within the application. As an example of research on eye-to-eye contact in remote communication with video images, a technique has been developed that creates a 3D model of a participant from images captured by multi-viewpoint cameras and renders, in real time, video images that depend on each remote participant's gaze direction (Sadagic, 2001). Another technique reconstructs eye-to-eye contact in a virtual communication environment based on HMDs (Head-Mounted Displays) by merging virtual head images, generated with computer graphics, onto video images of the participants' heads (Takemura, 2005). However, these technologies must obtain head position information in real time in order to present an independent video image to each participant, so attaching sensors to the head is fundamental. This makes it difficult to hold a conversation face to face and interferes with smooth communication between participants.
