3D Modeling for Environmental Informatics: Parametric Manifold of an Object under Different Viewing Directions

Xiaozheng Zhang (Ladbrokes, Australia) and Yongsheng Gao (Griffith University, Australia)
DOI: 10.4018/978-1-4666-9435-4.ch010

Abstract

3D modeling plays an important role in the field of computer vision and image processing. It provides a convenient tool set for many environmental informatics tasks, such as taxonomy and species identification. This chapter discusses a novel way of building 3D models of objects from their varying 2D views. The appearance of a 3D object depends on both the viewing direction and the illumination conditions. What is the set of images of an object under all viewing directions? In this chapter, a novel image representation is proposed, which transforms any n-pixel image of a 3D object into a vector in a 2n-dimensional pose space. In this pose space, it is proven that the transformed images of a 3D object under all viewing directions form a parametric manifold in a 6-dimensional linear subspace. In particular, for in-depth rotations about a single axis, this manifold is an ellipse. Furthermore, it is shown that this parametric pose manifold of a convex object can be estimated from a few images in different poses and used to predict the object's appearance under unseen viewing directions. These results immediately suggest a number of approaches to object recognition, scene detection, and 3D modeling, applicable to environmental informatics. Experiments on both synthetic data and real images are reported, demonstrating the validity of the proposed representation.
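The single-axis case can be illustrated with a toy numerical sketch. This is not the chapter's actual 2n-dimensional pose-space transform; it only demonstrates the underlying geometric fact that, for a rigid point set rotated about one axis under orthographic projection, every stacked coordinate varies as A cos θ + B sin θ + C, so the trajectory is an ellipse that can be fitted from a few poses and extrapolated to unseen viewing angles. All names and the random point set are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.standard_normal((50, 3))  # hypothetical rigid point set standing in for a convex object

def view_vector(theta):
    """Stacked coordinates after rotating about the y-axis and projecting
    orthographically: x' = x*cos(theta) + z*sin(theta), y' = y."""
    x, y, z = pts.T
    return np.concatenate([x * np.cos(theta) + z * np.sin(theta), y])

# Each coordinate obeys v(theta) = A*cos(theta) + B*sin(theta) + C, so a few
# sample poses suffice to recover the ellipse by linear least squares.
thetas = np.array([0.0, 0.7, 1.9, 3.1])
V = np.stack([view_vector(t) for t in thetas])                  # (4, 2n) samples
D = np.column_stack([np.cos(thetas), np.sin(thetas), np.ones_like(thetas)])
coef, *_ = np.linalg.lstsq(D, V, rcond=None)                    # rows: A, B, C

# Predict the appearance at an unseen viewing angle.
t_new = 2.5
pred = np.array([np.cos(t_new), np.sin(t_new), 1.0]) @ coef
err = np.linalg.norm(pred - view_vector(t_new))                 # near zero
```

Because the model is exact for rigid rotation under orthographic projection, four sample poses (three unknown vectors per coordinate) already reproduce unseen views to numerical precision; with real images, noise and non-idealities would make this a least-squares approximation instead.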

Introduction

Insects are the most diverse animal group, with over one million described species. Since most pests are insects, their identification is important in pest control and food biosecurity. They are also of great interest to the public, as insects are pervasive in our environment. Due to the increasing shortage of expertise in insect identification (Gaston & O’neill, 2004), computer-assisted taxonomy (CAT) is desirable for both pest control systems and electronic field guides. It may rely on 1) morphological characteristics such as shape information, 2) molecular signatures such as 16s DNA (Ball & Armstrong, 2008), 3) mass spectrometry, 4) behavioral traits, and 5) sound (Gaston & O’neill, 2004). The first has the longest history and remains the most natural and widely used method in taxonomy, although DNA-based methods can provide more accurate results. Morpho-taxonomy will continue to be at least the backbone of taxonomic work, especially for an electronic field guide system for insect identification.

With recent advances in computer vision and pattern recognition research, a few attempts have been made towards computer-assisted taxonomy based on insects’ morphological characteristics. Weeks et al. (1997) applied image-based pattern recognition techniques to insect wings to classify Ichneumonidae. This image-based strategy was later developed into a CAT system known as DAISY (Digital Automated Identification SYstem) (Weeks et al., 1999). More recently, Mayo and Watson (2007) applied automatic image-based feature extraction and machine learning techniques to the identification of live moths and achieved about 85% accuracy on a dataset of 774 individuals over 35 species. These holistic image-based approaches focus primarily on the identification of closely related species, because holistic image comparison is often sensitive to image variations caused by species differences. Larios et al. (2008) proposed extracting local image features to form concatenated histograms for the recognition of deformable stoneflies. The use of local image features provides better tolerance to body deformation, as well as to certain kinds of image variation due to species differences. These techniques are 2D image-based and suffer from viewing angle variations and in-depth self-occlusions.

To overcome these limitations, it is desirable to acquire 3D insect models for both description and identification. An identification system benefits greatly even from a rough 3D insect structure, because it can help rectify viewing angles and compensate for articulated body parts, especially wings, legs, and antennae. Due to self-occlusion and the small body size of insects, traditional 3D scanning cannot work properly.

Previous reconstruction methods model 3D structures from shading (Zhang et al., 1999), contour (Ulupinar & Nevatia, 1995), and texture (Samaras & Metaxas, 2003), using 2D images. Recently, a few interactive modeling techniques have been proposed to infer 3D object structures from single 2D images or sketches. Gingold et al. (2009) developed a user annotation system based on geometric primitives for 3D modeling. For different body parts, users control and adjust the parameters of 3D geometric primitives to best fit the input images or sketches. Wu et al. (2007) transferred reference normal distributions to target 2D images based on image edges and user inputs. Given a simple reference geometry (typically a sphere), users draw paths on both the object image and the sphere image whose normals they believe are identical. Pixel-wise interpolation then transfers normals from the reference object, yielding a normal map, and hence the structure, of the entire object.
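The sphere is a convenient reference because its normals can be read off analytically from image coordinates. The minimal sketch below, which is not Wu et al.'s implementation, shows the standard lookup under an assumed orthographic projection: a target pixel that the user matches to sphere point (u, v) inherits that sphere normal, and unmarked pixels would be filled in by interpolating between the marked paths.

```python
import numpy as np

def sphere_normal(u, v):
    """Surface normal of a unit reference sphere at image coordinates
    (u, v) in [-1, 1], assuming orthographic projection: the visible
    surface point is (u, v, sqrt(1 - u^2 - v^2)), which is its own normal."""
    w2 = 1.0 - u * u - v * v
    if w2 < 0:
        raise ValueError("(u, v) lies outside the sphere's silhouette")
    return np.array([u, v, np.sqrt(w2)])

# A target pixel matched by the user to sphere point (u, v) inherits that
# normal; normals for unmarked pixels come from pixel-wise interpolation
# between the user-drawn paths (not shown here).
front = sphere_normal(0.0, 0.0)   # front-facing normal [0, 0, 1]
tilted = sphere_normal(0.6, 0.0)  # normal leaning toward +x: [0.6, 0, 0.8]
```

Integrating the resulting normal map (e.g., by solving a Poisson equation for depth) then recovers the object's surface up to the usual shape-from-normals ambiguities.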
