3D Camera Tracking for Mixed Reality using Multi-Sensors Technology

Fakhreddine Ababsa, Iman Maissa Zendjebil, Jean-Yves Didier
DOI: 10.4018/978-1-4666-2038-4.ch128

Abstract

The concept of Mixed Reality (MR) aims at completing our perception of the real world by adding fictitious elements that are not naturally perceptible, such as computer-generated images, virtual objects, text, symbols, graphics, sounds, and smells. One of the major challenges for an efficient Mixed Reality system is to ensure the spatiotemporal coherence of the augmented scene between the virtual and the real objects. The quality of the real/virtual registration depends mainly on the accuracy of the 3D camera pose estimation. The goal of this chapter is to provide an overview of recent multi-sensor fusion approaches used in Mixed Reality systems for 3D camera tracking. We describe the main sensors used in these approaches and detail the issues surrounding their use (calibration process, fusion strategies, etc.). We also describe some Mixed Reality techniques developed in recent years that rely on multi-sensor technology. Finally, we highlight new directions and open problems in this research field.

State of the Art

The idea of combining several kinds of sensors is not recent. The first multi-sensor systems appeared in robotics, where, for example, Vieville et al. (1993) proposed combining a camera with an inertial sensor to automatically correct the path of an autonomous mobile robot. This idea has been taken up in recent years by the Mixed Reality community. Several works have proposed fusing vision and inertial sensor data using a Kalman filter (You et al., 1999; Ribo et al., 2002; Hol et al., 2006; Reitmayr & Drummond, 2006; Bleser & Stricker, 2008) or a particle filter (Ababsa et al., 2003; Ababsa & Mallem, 2007). The strategy consists of merging the data from all sensors to localize the camera following a prediction/correction model: the data provided by the inertial sensors (gyroscopes, magnetometers, etc.) are generally used to predict the 3D motion of the camera, which is then adjusted and refined using vision-based techniques. The Kalman filter is the most common choice for performing this data fusion. It is a recursive filter that estimates the state of a linear dynamic system from a series of noisy measurements; recursive estimation means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state, so no history of observations or estimates is required. A minimal sketch of this prediction/correction scheme is given below.
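To make the prediction/correction model concrete, the following Python sketch implements a linear Kalman filter in which inertial (accelerometer) readings drive the prediction step and a vision-based position estimate drives the correction step. This is not the implementation of any of the cited systems; the one-axis state, the sampling rates, and all noise levels are illustrative assumptions.

import numpy as np

dt = 0.01  # assumed inertial sample period (100 Hz)

# State x = [position, velocity] along a single axis, for brevity.
F = np.array([[1.0, dt],
              [0.0, 1.0]])          # constant-velocity motion model
B = np.array([[0.5 * dt**2],
              [dt]])                # inertial acceleration enters as a control input
H = np.array([[1.0, 0.0]])          # the vision system measures position only
Q = 1e-3 * np.eye(2)                # process noise covariance (assumed)
R = np.array([[1e-2]])              # vision measurement noise covariance (assumed)

x = np.zeros((2, 1))                # state estimate
P = np.eye(2)                       # state covariance

def predict(accel):
    """Prediction step driven by an inertial (accelerometer) reading."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def correct(z_vision):
    """Correction step using a vision-based position measurement."""
    global x, P
    y = np.array([[z_vision]]) - H @ x     # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# Example run: inertial predictions at 100 Hz, vision corrections at 10 Hz.
rng = np.random.default_rng(0)
for k in range(100):
    predict(accel=0.1 + rng.normal(0, 0.05))            # noisy inertial input
    if k % 10 == 0:
        correct(z_vision=x[0, 0] + rng.normal(0, 0.1))  # simulated vision fix
print("estimated position/velocity:", x.ravel())

In a real tracking system the state would contain the full 6-DOF camera pose, and the nonlinear rotation dynamics would call for an extended or unscented Kalman filter, but the predict/correct structure remains the same.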
