Scalable Multi-Platform Distribution of Spatial 3D Contents


Jan Klimke (Hasso-Plattner-Institute, University of Potsdam, Potsdam, Germany), Benjamin Hagedorn (Hasso-Plattner-Institute, University of Potsdam, Potsdam, Germany) and Jürgen Döllner (Hasso-Plattner-Institute, University of Potsdam, Potsdam, Germany)
Copyright: © 2014 |Pages: 15
DOI: 10.4018/ij3dim.2014070103

Abstract

Virtual 3D city models provide powerful user interfaces for the communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, the software and hardware configurations of target systems differ significantly, which makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes and textures, to be delivered from server to client, which severely limits the size and complexity of the models they can handle. This paper introduces a new approach for the provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for the high-quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted away from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from the data transfer complexity, (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side, and (c) 3D city models can easily be deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be implemented compactly for various devices and platforms.

1. Introduction

Virtual 3D city models provide powerful user interfaces for the communication of 2D and 3D geoinformation. Through “the continuing desire for more detail and realism, the model complexity of common scenes has not reached its peak by far” (Jeschke, Wimmer, & Purgathofer, 2005). The ongoing development of remote sensing and data processing technologies produces 3D model data in ever-increasing amounts and in continuously improving quality, which makes providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner a challenging task.

Today's systems for mobile or web-based visualization of virtual 3D city models, e.g., Google Earth, Apple Maps, or here.com, mostly rely on streaming 3D geometry and corresponding textures to client devices. Consequently, the applications running on those devices need to implement the entire rendering part of the visualization pipeline (Haber & McNabb, 1990), i.e., the rasterization of images from computer graphics primitives. Rasterization is a resource-intensive task that requires specialized rendering hardware and software components. Rendering performance and the requirements regarding CPU, GPU, main memory, and disk space strongly depend on the complexity of the model to be rendered. Furthermore, high-quality rendering techniques, e.g., for realistic illumination, shadowing, or water rendering, increase the resources necessary to provide users with interactive frame rates (more than 10 frames per second). This makes it very hard to develop applications that adapt to different hardware and software platforms while still providing high-quality rendering and an acceptable frame rate, also for large 3D datasets. Building a fast, stable application that renders large 3D models in high quality on a variety of heterogeneous platforms and devices requires substantial effort in software development and maintenance as well as data processing, and this effort usually rises with the number of platforms to be supported.

Recently introduced approaches for image-based 3D portrayal (Doellner, Hagedorn, & Klimke, 2012) tackle these problems by shifting the complex and resource-intensive task of image synthesis to the server side, which allows for interactive thin-client applications on various end-user devices. Such clients can reconstruct lightweight representations of the server-side 3D model from server-generated G-Buffer images (i.e., multi-layer raster images that encode not only color values but also other information such as depth). Image-based portrayal provides two major advantages over geometry-based approaches: (a) it decouples the rendering complexity on the client side from the model complexity on the server side, and (b) it delivers a homogeneously high rendering quality to all end-user platforms and devices, regardless of their 3D capabilities. Nevertheless, operating such visualization applications requires a 3D rendering service to generate these image-based representations of the underlying 3D model. This makes scaling the applications to many simultaneously used clients a complex and relatively expensive task, since each service instance can effectively serve only a limited number of clients.
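To illustrate how a thin client can recover geometric information from a G-Buffer layer, consider the following minimal Python sketch. The 24-bit RGB packing scheme and the near/far clipping-plane values are assumptions chosen for illustration, not the encoding actually used by the rendering service described in the paper:

```python
def decode_depth(r, g, b, near=1.0, far=5000.0):
    """Decode a depth value packed into an RGB triple (8 bits per
    channel, r being the most significant byte) back to a linear
    distance between the assumed near and far clipping planes."""
    normalized = (r * 65536 + g * 256 + b) / (2 ** 24 - 1)  # in [0, 1]
    return near + normalized * (far - near)

# Black pixels lie on the near plane, white pixels on the far plane:
print(decode_depth(0, 0, 0))        # 1.0
print(decode_depth(255, 255, 255))  # 5000.0
```

A client decoding such a depth layer per pixel can approximate the 3D position under the cursor without ever receiving the original triangle meshes, which is the core of the image-based reconstruction idea.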

In this paper, we introduce a novel approach for the provisioning of massive virtual 3D city models on different platforms (web browsers, smartphones, tablets) by means of an interactive map showing synthetic, tiled images of the 3D city model (oblique map). The key concept is to synthesize these oblique views in a preprocessing step by a 3D rendering service and to store the corresponding tiles, e.g., on a web server, so that they are easily accessible and usable by client applications (Figure 1). Different stylizations, combinations of thematic layers, map layers, and viewing directions can be specified and applied during tile generation, leading to multiple tile sets, each storing not only RGB images but also additional information such as world coordinates, depth information, surface normals, object identities, or thematic information (G-Buffer). Client applications can use this additional information, e.g., for object-based interaction or for additional client-side rendering effects such as object highlighting. In particular, object identity information allows client applications to implement application-specific functionality, e.g., retrieving object-based information from a remote server.
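As a sketch of how a client might exploit the object-identity layer for object-based interaction, the following Python snippet maps a click position in the assembled oblique map to a tile index and reads the object id at the corresponding pixel. The tile size, the regular-grid tile layout, and the representation of an id tile as a nested list are illustrative assumptions, not the paper's actual implementation:

```python
def tile_for_pixel(px, py, tile_size=256):
    """Map a pixel coordinate in the assembled oblique map to the
    (column, row) index of the containing tile and the pixel offset
    within that tile."""
    col, ox = divmod(px, tile_size)
    row, oy = divmod(py, tile_size)
    return (col, row), (ox, oy)

def object_id_at(id_tile, offset):
    """Read the object identity stored at a pixel of an id tile,
    here modeled as a row-major nested list of integer object ids."""
    ox, oy = offset
    return id_tile[oy][ox]

# A click at map pixel (300, 10) falls into tile (1, 0) at offset (44, 10);
# the id found there could then key a request for thematic information.
tile_index, offset = tile_for_pixel(300, 10)
print(tile_index, offset)  # (1, 0) (44, 10)
```

Because only the single id tile covering the click needs to be fetched and inspected, this kind of lookup stays cheap on the client regardless of the overall model size.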
