Widely-Separated Stereo Views Turn into 3D Objects: An Application

Rimon Elias (German University in Cairo, Egypt)
DOI: 10.4018/978-1-4666-0113-0.ch012


This chapter discusses the representation of obstacles in an environment with a planar ground, using a wide-baseline set of images in the context of teleoperation. The camera parameters are assumed to be known approximately, within ranges determined by the error margins of the sensors used, such as inertial devices. The technique proposed in this chapter detects junctions in all images using the JUDOCA operator; correlation is then applied through a homographic transformation to establish point correspondences. The match set is triangulated to obtain a set of 3D points, which are clustered to yield a bounding box for each obstacle; the bounding boxes may be used on their own for localization. Finally, a voxel occupancy scheme is applied to obtain a volumetric representation of the obstacles.
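The later stages of the pipeline above (triangulating matched points into 3D and mapping them to occupied voxels) can be sketched as follows. This is a minimal plain-NumPy illustration, not the chapter's implementation: the camera intrinsics, the 1 m baseline, and the voxel size are invented for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence.
    P1, P2: 3x4 projection matrices; x1, x2: 2D image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # homogeneous 3D point
    return X[:3] / X[3]              # inhomogeneous 3D point

def voxel_occupancy(points, voxel=0.25):
    """Map 3D points to the set of occupied voxel indices on a regular grid."""
    return {tuple(np.floor(p / voxel).astype(int)) for p in points}

# Two hypothetical cameras: identity pose and a 1 m baseline along x.
K = np.diag([800.0, 800.0, 1.0]); K[0, 2] = K[1, 2] = 320.0
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views, then recover it.
X_true = np.array([0.4, 0.2, 5.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X_hat = triangulate(P1, P2, x1, x2)
```

In the chapter's setting the correspondences would come from the junction-matching stage rather than from a synthetic point, and the occupied voxels of each cluster would form the volumetric model of one obstacle.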
Chapter Preview


Since our proposed system deals with detecting and reconstructing objects in 3D space, we review the key concepts in the literature concerning these two problems.

Many researchers have tackled the problem of detecting objects residing on a ground plane (or, alternatively, detecting the ground plane itself). Different approaches have been developed to solve this problem in different settings. Color information can be used for ground detection as in (Hoffmann et al., 2005). Stereo pairs can be used for detection as in (Sabe et al., 2004; Mandelbaum et al., 1998; Bertozzi et al., 1996). Monocular vision approaches have also been developed. Optical flow can be used in this case as in (Kim & Kim, 2004), where the surface normals of different image areas are computed and grouped to identify the ground plane. Homography can also be used with a monocular sequence of images to identify the ground plane as in (Zhou & Li, 2006); however, this approach is restricted to a camera fixed on a mobile robot that can rotate only horizontally.
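The homography-based ground-plane idea mentioned above rests on the fact that points lying on a plane transfer between two views through a single 3x3 homography, while points off the plane generally do not. The principle can be sketched with a plain-NumPy DLT estimate (a hypothetical illustration, not the cited authors' method; the homography entries and point coordinates are invented):

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 homography mapping src -> dst (>= 4 points,
    no 3 collinear). Points are (x, y) tuples in the two images."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def transfer_error(H, p, q):
    """One-way transfer error |H p - q| in the second image."""
    ph = H @ np.array([p[0], p[1], 1.0])
    return np.linalg.norm(ph[:2] / ph[2] - q)

# Synthetic ground-plane homography and correspondences on the plane.
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [0.001, 0.002, 1.0]])
def proj(p):
    v = H_true @ np.array([p[0], p[1], 1.0])
    return v[:2] / v[2]

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 3.0)]
dst = [proj(p) for p in src]
H = estimate_homography(src, dst)
```

A point whose transfer error under the estimated H is small is consistent with the ground plane; a large error flags a potential obstacle point. In practice one would estimate H robustly (e.g., with RANSAC) rather than from exact correspondences as here.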
