1. Introduction
Virtual 3D city models provide powerful user interfaces for the communication of 2D and 3D geoinformation. Through "the continuing desire for more detail and realism, the model complexity of common scenes has not reached its peak by far" (Jeschke, Wimmer, & Purgathofer, 2005). Ongoing developments in remote sensing and data processing technologies produce 3D model data in ever increasing amounts and in continuously improving quality, which makes providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner a persistently challenging task.
Today's systems for mobile or web-based visualization of virtual 3D city models, e.g., Google Earth, Apple Maps, or here.com, mostly rely on streaming 3D geometry and corresponding textures to client devices. Consequently, the applications running on those devices need to implement the whole rendering part of the visualization pipeline (Haber & McNabb, 1990), i.e., the rasterization of images from computer graphics primitives. Rasterization is a resource-intensive task that requires specialized rendering hardware and software components. Rendering performance, as well as the requirements regarding CPU, GPU, main memory, and disk space, strongly depends on the complexity of the model to be rendered. Furthermore, high-quality rendering techniques, e.g., for realistic illumination, shadowing, or water rendering, increase the resource consumption necessary to provide users with interactive frame rates (more than 10 frames per second). This makes it very hard to develop applications that adapt to different hardware and software platforms while still providing high-quality rendering and an acceptable frame rate for large 3D datasets. Building a fast, stable application that renders large 3D models in high quality on a variety of heterogeneous platforms and devices requires substantial effort in software development and maintenance as well as in data processing, an effort that usually grows with the number of platforms to be supported.
Recently introduced approaches for image-based 3D portrayal (Doellner, Hagedorn, & Klimke, 2012) tackle these problems by shifting the more complex and resource-intensive task of image synthesis to the server side, which allows for interactive thin-client applications on various end-user devices. Such clients can reconstruct lightweight representations of the server-side 3D model from server-generated G-Buffer images (i.e., multi-layer raster images that encode not only color values but also other information such as depth). Image-based portrayal provides two major advantages compared to geometry-based approaches: a) it decouples the rendering complexity on the client side from the model complexity on the server side, and b) it delivers a homogeneously high rendering quality to all end-user platforms and devices, regardless of their 3D capabilities. Nevertheless, operating such visualization applications requires a 3D rendering service to generate these image-based representations of the underlying 3D model. This makes scaling the applications for many concurrent clients a complex and relatively expensive task, since each service instance can only effectively serve a limited number of clients.
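To illustrate how a thin client can reconstruct geometry from a G-Buffer depth layer, the following minimal sketch unprojects a single depth sample back to a camera-space 3D point. It assumes a pinhole camera looking down the negative Z axis with linear eye-space depth values; the function and parameter names are illustrative and not taken from the paper.

```python
import math

def unproject_pixel(px, py, depth, width, height, fov_y_deg=60.0):
    """Reconstruct a camera-space 3D point from one G-Buffer depth sample.

    Assumes a pinhole camera looking down -Z and a linear (eye-space)
    depth layer; names and conventions are illustrative assumptions.
    """
    aspect = width / height
    tan_half = math.tan(math.radians(fov_y_deg) / 2.0)
    # Normalized device coordinates of the pixel centre in [-1, 1].
    x_ndc = 2.0 * (px + 0.5) / width - 1.0
    y_ndc = 1.0 - 2.0 * (py + 0.5) / height
    # Scale by depth to undo the perspective divide.
    return (x_ndc * tan_half * aspect * depth,
            y_ndc * tan_half * depth,
            -depth)
```

Applied per pixel, this yields a point cloud (or, with the normal layer, an oriented surface approximation) that a client can re-render locally without access to the original 3D model.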
In this paper we introduce a novel approach for provisioning massive virtual 3D city models on different platforms (web browsers, smartphones, tablets) by means of an interactive map showing synthetic, tiled images of the 3D city model (oblique map). The key concept is to synthesize these oblique views in a preprocessing step using a 3D rendering service and to store the resulting tiles, e.g., on a web server, where they are easily accessible and usable by client applications (Figure 1). Different stylizations, combinations of thematic layers, map layers, and viewing directions can be specified and applied during the tile generation process, leading to multiple tile sets, each storing not only RGB images but also additional information such as world coordinates, depth information, surface normals, object identities, or thematic information (G-Buffer). Client applications can use this additional information, e.g., for object-based interaction or additional client-side rendering effects such as object highlighting. In particular, object identity information enables application-specific functionality in client applications, e.g., retrieving object-based information from a remote server.
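The tile access and object-identity lookup described above can be sketched as follows. The slippy-map-style `{layer}/{z}/{x}/{y}` addressing and the 24-bit RGB packing of object identities are common conventions and are assumptions here, not the specific scheme used by the system described in this paper.

```python
def tile_url(base, layer, z, x, y, ext="png"):
    """Compose a tile URL for one layer of a tile set.

    Hypothetical slippy-map-style scheme ({layer}/{z}/{x}/{y});
    the actual addressing used by the system is an assumption.
    """
    return f"{base}/{layer}/{z}/{x}/{y}.{ext}"

def decode_object_id(rgba):
    """Decode a 24-bit object identity packed into the RGB channels
    of an ID tile (a common G-Buffer encoding, assumed here)."""
    r, g, b, _alpha = rgba
    return (r << 16) | (g << 8) | b
```

A client would fetch the ID tile alongside the RGB tile, read the pixel under the cursor, decode the object identity, and use it, e.g., to highlight the picked building or to query a remote server for its thematic attributes.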