DBN Models for Visual Tracking and Prediction

Qian Diao (Intel China Research Center, China), Jianye Lu (Intel China Research Center, China), Wei Hu (Intel China Research Center, China), Yimin Zhang (Intel China Research Center, China) and Gary Bradski (Microprocessor Research Lab / Intel Research, USA)
Copyright: © 2007 |Pages: 18
DOI: 10.4018/978-1-59904-141-4.ch009


In a visual tracking task, the object may exhibit rich dynamic behavior in complex environments, where background clutter and occlusion can corrupt target observations. Such dynamics and backgrounds induce nonlinear, non-Gaussian, and multimodal observation densities, which are difficult to model with traditional methods such as Kalman filter models (KFMs) because of their Gaussian assumptions. Dynamic Bayesian networks (DBNs) provide a more general framework in which to solve these problems: DBNs generalize KFMs by allowing arbitrary probability distributions rather than only (unimodal) linear-Gaussian ones. Under the DBN umbrella, a broad class of learning and inference algorithms for time-series models can be applied to visual tracking. Furthermore, DBNs provide a natural way to combine multiple vision cues. In this chapter, we describe several DBN models for tracking in nonlinear, non-Gaussian, and multimodal situations, and present a prediction method that assists the feature extraction stage by hypothesizing the new observations.
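To make the abstract's claim concrete, the following is a minimal sketch (not the chapter's actual models) of the particle-filter style of DBN inference it alludes to: state particles are propagated through a motion model, and the predicted particle cloud serves as the hypothesis handed to feature extraction before the new observation reweights and resamples it. All names (`predict`, `weight`, `resample`) and the 1-D Gaussian models are illustrative assumptions.

```python
import random
import math

def predict(particles, motion_std=1.0):
    # Propagate each particle through the (possibly nonlinear) motion model.
    return [x + random.gauss(0.0, motion_std) for x in particles]

def weight(particles, z, obs_std=1.0):
    # Likelihood of observation z for each particle; in a real tracker this
    # could be any non-Gaussian, multimodal density (e.g. over image cues).
    return [math.exp(-0.5 * ((z - x) / obs_std) ** 2) for x in particles]

def resample(particles, weights):
    # Draw a new particle set proportionally to the weights (SIR step).
    total = sum(weights)
    return random.choices(particles, weights=[w / total for w in weights],
                          k=len(particles))

random.seed(0)
particles = [random.gauss(0.0, 5.0) for _ in range(500)]
true_state = 0.0
for t in range(20):
    true_state += 1.0                        # object drifts right
    z = true_state + random.gauss(0.0, 1.0)  # noisy measurement
    particles = predict(particles)
    # The predicted cloud is the hypothesis offered to feature extraction.
    hypothesis = sum(particles) / len(particles)
    particles = resample(particles, weight(particles, z))

estimate = sum(particles) / len(particles)
```

Because the belief is represented by samples rather than a single Gaussian, this scheme accommodates the multimodal observation densities that defeat a KFM.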
