An Algorithm for Occlusion-Free Texture Mapping from Oriented Images


Mattia Previtali, Marco Scaioni, Luigi Barazzetti, Raffaella Brumana, Daniela Oreni
DOI: 10.4018/978-1-4666-4490-8.ch004

Abstract

The possibility of deriving digital reconstructions of real objects has given new emphasis to numerous research domains. The growing interest in accurate and detailed models has also increased the demand for realistic visualizations and data management methods. The aim of this work is the implementation of an algorithm able to map digital images onto 3D polygonal models of terrestrial objects. In particular, the authors focus on two different aspects: the geometric issues during the texture mapping phase with convergent images (e.g., self-occlusions) and the color brightness correction when multiple images are used to texture the same portion of the mesh.

Introduction

Nowadays, different instruments and techniques allow expert operators to obtain accurate 3D models of complex objects. This has raised customers' expectations, making realistic reconstructions important in many research domains (e.g., cultural heritage documentation and preservation, virtual reality, the game and movie industries, reverse engineering).

Modern data acquisition and processing techniques are mainly based on strategies that use active or passive sensors. The former rely on range-based systems that emit a signal and detect its reflection (typical examples of such expensive instruments are laser scanning and structured-light technology). The latter employ sets of digital images that are processed to obtain a three-dimensional reconstruction of the scene. Both methods have pros and cons, and their combined use is often mandatory for real applications.

Images and laser scans are today exploited in both Computer Vision (Hartley & Zisserman, 2004) and Photogrammetry (Kraus, 2007), whose researchers have developed different algorithms and methods for data processing. Although the goal (in this case a digital reconstruction) is basically the same, attention is paid to particular requisites such as automation, accuracy, and completeness: in a few words, the final results can be very different.

In this chapter we focus on the last step of the data processing pipeline, called texture mapping, which assigns the original color to the model. We therefore assume that a 3D digital model of the object (represented by a mesh) is already available, whereas the color information must be applied using a set of oriented images. This means that mesh and images are already registered in a common reference system.
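
As an illustration of what registration in a common reference system implies in practice, the following minimal sketch (not taken from the chapter) projects a mesh vertex into an oriented image with a pinhole model, using the interior parameters (focal length f, principal point cx, cy) and exterior parameters (rotation R, camera centre C) estimated during orientation. The function name and the computer-vision sign convention (camera looking along +Z) are assumptions made for the example.

import numpy as np

def project_vertex(X, R, C, f, cx, cy):
    """Project a 3D vertex X (world coordinates) into image coordinates."""
    Xc = R @ (X - C)              # transform into the camera frame
    if Xc[2] <= 0:                # point behind the camera: not visible
        return None
    x = f * Xc[0] / Xc[2] + cx    # pinhole (collinearity) projection
    y = f * Xc[1] / Xc[2] + cy
    return np.array([x, y])

Once every mesh vertex can be projected in this way, the texture mapping problem reduces to deciding, for each triangle, which image (or combination of images) should supply its color.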

The final aim is the creation of realistic, detailed, and accurate models with a correct correspondence between the geometric part and its visual appearance: the final model must represent the material and its real color.

The literature reports several examples of applications where a realistic 3D model was needed (e.g., Guidi et al., 2009; Pesci, Bonali, Galli, & Boschi, 2011). The market offers some packages (e.g., 3D Studio Max, Geomagic Studio, PhotoModeler Scanner, ShapeTexture, etc.) that are clearly based on different algorithmic implementations, even though they can often be used only as “black boxes”.

The scientific literature has addressed the texture mapping problem with different geometric and radiometric strategies. Rocchini, Cignoni, and Montani (1999) developed a complete pipeline divided into several steps (vertex-to-image binding, patch growing, patch boundary smoothing, texture patch packing) in which multiple views are stitched onto the mesh and partially fused to generate a single textured model. El-Hakim, Gonzo, Picard, Girardi, and Simoni (2003) considered the problem of multiple images overlapping the same triangles: their approach generates larger groups of triangles mapped from the same image and eliminates isolated texture mappings. Two procedures are then run to compensate for radiometric differences. The first is an iterative procedure based on a least squares adjustment that minimizes gray-value differences at both local and global levels. The second, an alternative to the first, performs image blending (Lensch, Heidrich, & Seidel, 2000; Niem & Broszio, 1995) through a weighted average of the textures of different images; this can however lead to blurred results, since with small triangles blending may not be sufficient to smooth the color transition, while with overly large triangles a blurring effect can occur. Alternative approaches for color correction are instead based on corresponding points [9-10] extracted from overlapping areas, where the aim is the analytic estimation of the color brightness variation.
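
As a concrete illustration of the weighted-average blending mentioned above, the following sketch favours images that see a triangle frontally by weighting each image's color sample with the cosine of its viewing angle. The function name and the cosine weighting are assumptions chosen for the example, not the implementation of the cited works.

import numpy as np

def blend_triangle_color(normal, view_dirs, colors):
    """Blend per-image color samples for one triangle.

    normal    : (3,) unit normal of the triangle
    view_dirs : (k, 3) unit vectors from the triangle towards each camera
    colors    : (k, 3) RGB samples of the triangle in each image
    """
    weights = np.clip(view_dirs @ normal, 0.0, None)  # cosine of viewing angle
    if weights.sum() == 0:
        return None                                   # triangle seen by no image
    weights /= weights.sum()                          # normalize the weights
    return weights @ colors                           # weighted-average RGB

Images whose cameras are occluded with respect to the triangle should of course be excluded before blending, which is precisely the occlusion problem addressed in this chapter.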
