Adapted Approach for Omnidirectional Egomotion Estimation

A. Radgui, C. Demonceaux, E. Mouaddib, M. Rziza, D. Aboutajdine
DOI: 10.4018/978-1-4666-3906-5.ch001

Abstract

Egomotion estimation is based principally on the estimation of the optical flow in the image. Recent research has shown that omnidirectional systems with large fields of view can overcome the limitations of planar-projection imagery in motion analysis. For omnidirectional images, however, the 2D motion is often estimated with methods developed for perspective images. This paper computes the 2D motion field with an adapted method that takes into account the distortions present in omnidirectional images. The resulting motion field is then used as input to the egomotion estimation process, which relies on a spherical representation of the motion equation. Experimental results and comparisons of error measures confirm that camera motion is estimated successfully when the optical flow is computed with an adapted method.

Introduction

The progress of techniques for autonomous robot navigation constitutes one of the major trends in current research on mobile robotics (Kim & Suga, 2007; Yoshizaki et al., 2008; Wang et al., 2006; Bunschoten & Krose, 2003; Winters et al., 2000). The objective is to make a robot able to plan its path and execute its plan without human intervention. One way to do this is to enable the robot to estimate its egomotion from images of the environment. This estimate can, for example, be used by the robot to reconstruct the scene in three-dimensional space. The egomotion estimation problem consists in recovering the camera motion relative to the environment, taking an image sequence as input.

In the past, various approaches were proposed to estimate motion from perspective images, which offer only a limited field of view. It has also been shown that estimation methods based on such images have difficulty distinguishing small pure translations from small pure rotations. Recently developed omnidirectional cameras with a large field of view overcome the limitations introduced by planar projections (Gluckman & Nayar, 1998). Omnidirectional images with a hemispherical field of view contain global information about motion, since the focus of expansion (FOE) and/or the focus of contraction (FOC) appear in the image. If a whole spherical field of view is available, both the FOE and the FOC are guaranteed to be in the image. Consequently, optical flow field analysis from omnidirectional images is more effective, even under smooth camera motion.
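To make the FOE/FOC argument concrete, recall the standard motion field on the unit sphere (a generic formulation, not a result specific to this chapter). For a scene point $\mathbf{X}$ observed along the unit direction $\mathbf{p} = \mathbf{X}/\|\mathbf{X}\|$, a camera translating with velocity $\mathbf{t}$ and rotating with angular velocity $\boldsymbol{\omega}$ induces the spherical flow

$$\dot{\mathbf{p}} = -\,\boldsymbol{\omega} \times \mathbf{p} \;-\; \frac{1}{\|\mathbf{X}\|}\left(\mathbf{t} - (\mathbf{t}^{\top}\mathbf{p})\,\mathbf{p}\right).$$

For a pure translation the flow vanishes exactly at $\mathbf{p} = \pm\,\mathbf{t}/\|\mathbf{t}\|$, two antipodal points on the sphere: the FOE and the FOC. A closed hemispherical field of view therefore always contains at least one of them, and a full spherical field of view contains both.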

The process of egomotion estimation naturally consists in estimating the optical flow (Gluckman & Nayar, 1998; Vassallo et al., 2002; Shakernia et al., 2003; Lim & Barnes, 2008; Gandhi & Trivedi, 2005) or feature correspondences (Svoboda et al., 1998; Lee et al., 2000; Thanh et al., 2008), and then extracting the 3D camera motion from the 2D information computed in the images. In omnidirectional vision, several methods proposed in the last few years address egomotion estimation from the motion field. Gluckman and Nayar (1998) showed that good egomotion results can be obtained with omnidirectional images. They project the image motion onto a spherical surface using Jacobians of transformations to determine the motion of the camera; a different Jacobian must be derived for each particular projection model of the camera used. This approach was later generalized by Vassallo et al. (2002), who present a general Jacobian as a function of the parameters of the central panoramic projection model, covering a wide variety of omnidirectional cameras. In another direction, Shakernia et al. (2003) show that egomotion algorithms proposed for perspective images by Tian et al. (1996) can be applied directly to the back-projection flow, which is obtained by lifting the optical flow from the image plane onto a virtual curved surface, instead of a spherical surface, in order to simplify the Jacobians. They showed that the unified projection model for central panoramic cameras, developed by Geyer and Daniilidis (2000), can be considered a projection onto a virtual curved retina that is intrinsic to the camera geometry. Recently, Lim et al. (2008, 2009) presented a geometric constraint on the flow at antipodal points, showing that it allows the direction of motion to be estimated using a spherical representation of images.
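As a rough illustration of the lifting step described above, the sketch below maps an image-plane flow vector onto the unit sphere through the unified projection model of Geyer and Daniilidis (2000). It is a minimal sketch under stated assumptions, not the implementation used by any of the cited methods: the mirror parameter xi, the use of pre-normalized image coordinates, and the finite-difference Jacobian (standing in for the analytic Jacobians discussed above) are all assumptions made here for brevity.

```python
import numpy as np

def lift_to_sphere(x, y, xi):
    """Lift normalized image coordinates (x, y) to a unit viewing
    direction under the unified projection model of Geyer and
    Daniilidis (2000); xi is the mirror parameter (assumed known)."""
    r2 = x * x + y * y
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])

def lift_flow(x, y, u, v, xi, eps=1e-6):
    """Map an image-plane flow vector (u, v) at (x, y) to the
    corresponding tangent flow on the unit sphere, using a
    finite-difference Jacobian of the lifting map (an illustrative
    stand-in for the analytic Jacobians discussed in the text)."""
    p = lift_to_sphere(x, y, xi)
    dp_dx = (lift_to_sphere(x + eps, y, xi) - p) / eps
    dp_dy = (lift_to_sphere(x, y + eps, xi) - p) / eps
    return u * dp_dx + v * dp_dy  # spherical flow, tangent to p

# Example: lift one flow vector for a paracatadioptric camera (xi = 1).
p_dot = lift_flow(x=0.2, y=-0.1, u=0.01, v=0.005, xi=1.0)
print(p_dot)
```

In practice the closed-form Jacobian of the lifting map would be used; the finite-difference version merely keeps the sketch short while producing the same tangent vector up to first order.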
