A Real-Time Compressive Tracking System for Amphibious Spherical Robots


Shuxiang Guo (Beijing Institute of Technology, China) and Liwei Shi (Beijing Institute of Technology, China)
Copyright: © 2018 |Pages: 20
DOI: 10.4018/978-1-5225-2993-4.ch017


Given the special working environments and application functions of the amphibious robot, an improved RGB-D visual tracking algorithm with dual trackers is proposed and implemented in this chapter. Compressive tracking (CT) was selected as the basis of the proposed algorithm to process colour images from an RGB-D camera, and a Kalman filter with a second-order motion model was added to the CT tracker to predict the state of the target, select candidate patches or samples, and reinforce the tracker's robustness to high-speed moving targets. In addition, a variance ratio features shift (VR-V) tracker with a Kalman prediction mechanism was adopted to process depth images from the RGB-D camera. A visible-infrared fusion mechanism, or feedback strategy, is introduced in the proposed algorithm to enhance its adaptability and robustness. To evaluate the effectiveness of the algorithm, the Microsoft Kinect, which combines colour and depth cameras, was adopted in a prototype of the robotic tracking system.
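The second-order Kalman prediction described above can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the state layout `[x, y, vx, vy, ax, ay]`, the constant-acceleration transition matrix, and the noise settings `q` and `r` are all illustrative assumptions.

```python
import numpy as np

def make_kalman(dt=1.0, q=1e-2, r=1.0):
    # Constant-acceleration (second-order) motion model in x and y.
    # State = [x, y, vx, vy, ax, ay]; only position (x, y) is measured.
    F = np.eye(6)
    for i in (0, 1):
        F[i, i + 2] = dt             # position += velocity * dt
        F[i, i + 4] = 0.5 * dt ** 2  # position += 0.5 * accel * dt^2
        F[i + 2, i + 4] = dt         # velocity += accel * dt
    H = np.zeros((2, 6))
    H[0, 0] = H[1, 1] = 1.0
    Q = q * np.eye(6)                # process noise (assumed isotropic)
    R = r * np.eye(2)                # measurement noise
    return F, H, Q, R

def predict(x, P, F, Q):
    # Predicted (x, y) centres the search window for candidate patches.
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, H, R):
    # Correct with the tracker's measured target centre z = [x, y].
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P
```

In each frame, the tracker would call `predict` to centre the sampling region on the expected target position, run the CT classifier on candidates drawn around it, then call `update` with the detected centre.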
Chapter Preview

Organization Background

To execute various tasks autonomously in ever-changing environments, a robot must sense and detect its surroundings; this is critical work. As feasible sensors with the advantages of low cost, low power consumption and strong adaptability (Fabian et al., 2014; Habib, 2011; Shi et al., 2015; Pan et al., 2015), digital cameras have been widely used in robotics to guide electromechanical devices and realize intelligent methods. In a machine-vision-based robot, the visual tracking system plays a role of great importance in realizing diverse robotic functions such as autonomous navigation (Capi et al., 2010; Wirbel et al., 2013), path planning (Bischoff et al., 2012; Lin et al., 2013), visual servoing (Siradjuddin et al., 2012; Wang, P. et al., 2014) and robot-human interaction (Oonishi et al., 2013; Gupta et al., 2014), among others. The content of this chapter is based entirely on the work of Pan et al. (2015).

In general, existing tracking algorithms can be categorized as estimation-based or classification-based (Liu, Q. et al., 2014). Estimation-based, or generative, algorithms model the target on the basis of appearance features and then search for it in each frame of the visual stream (Salti et al., 2012). This category is mainly motivated by innovations in appearance models and includes MeanShift (Liu, Y. F. et al., 2014), particle filter-based algorithms (Zhang et al., 2013), IVT (Incremental Visual Tracking) (Ross et al., 2008) and optical flow-based algorithms (Liu et al., 2013), among others. Classification-based, or discriminative, algorithms treat tracking as a binary pattern recognition problem and try to separate the target from the background (Pirzada et al., 2014). This category is usually built upon pattern recognition algorithms such as the SVM (Support Vector Machine) (Avidan, 2004), the Bayes classifier (Chen & Wu, 2013) and K-Means (Qi et al., 2011). Most studies on visual tracking are performed on high-performance computers and evaluated with standard benchmark sequences, each of which contains controlled disturbances from the target or environment.
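The discriminative idea underlying compressive tracking can be sketched briefly: high-dimensional features are compressed by a very sparse random projection, and a naive Bayes classifier with online-updated Gaussian parameters separates target samples from background samples. The dimensions, sparsity `s`, and learning rate `lr` below are illustrative assumptions for clarity, not the chapter's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_projection(n_high=1000, n_low=50, s=3):
    # Very sparse random matrix: entries are +1/-1 with probability
    # 1/(2s) each, and 0 otherwise, so projection is cheap to compute.
    return rng.choice([1.0, 0.0, -1.0], size=(n_low, n_high),
                      p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])

class NaiveBayes:
    # Per-dimension Gaussian models for target (pos) and background (neg),
    # updated online with learning rate lr.
    def __init__(self, n_low=50, lr=0.85):
        self.mu_pos = np.zeros(n_low); self.sig_pos = np.ones(n_low)
        self.mu_neg = np.zeros(n_low); self.sig_neg = np.ones(n_low)
        self.lr = lr

    def _blend(self, mu, sig, v):
        # Blend old parameters with the new batch's mean and std.
        m, s = v.mean(axis=0), v.std(axis=0) + 1e-6
        new_mu = self.lr * mu + (1 - self.lr) * m
        new_sig = np.sqrt(self.lr * sig ** 2 + (1 - self.lr) * s ** 2
                          + self.lr * (1 - self.lr) * (mu - m) ** 2)
        return new_mu, new_sig

    def update(self, v_pos, v_neg):
        self.mu_pos, self.sig_pos = self._blend(self.mu_pos, self.sig_pos, v_pos)
        self.mu_neg, self.sig_neg = self._blend(self.mu_neg, self.sig_neg, v_neg)

    def score(self, v):
        # Sum of per-dimension log-likelihood ratios; higher = more target-like.
        def logpdf(v, mu, sig):
            return -0.5 * ((v - mu) / sig) ** 2 - np.log(sig)
        return (logpdf(v, self.mu_pos, self.sig_pos)
                - logpdf(v, self.mu_neg, self.sig_neg)).sum(axis=-1)
```

In a tracking loop, each candidate patch's feature vector would be projected through the sparse matrix and scored; the highest-scoring candidate is taken as the target, and the classifier is then updated with samples near (positive) and away from (negative) that location.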

Compared to typical studies of tracking algorithms run on commercial computers in controlled lab environments, designing a reliable visual tracking system for mobile robots is even more challenging, for two reasons. On the one hand, the appearance of a target varies with pose or scale changes, random motion and occlusion, which makes it difficult to establish a reliable appearance model (Wang, K. et al., 2014a). On the other hand, the ambient environment of a robot is easily disturbed by illumination variation, camera vibration and outside interference, which may cause drift or loss of the target (Wang, K. et al., 2014b). Moreover, most state-of-the-art studies have aimed at improving tracking precision, adaptability and robustness, while the real-time performance and computational cost of the algorithms were often overlooked. As a result, most existing algorithms are unsuitable for mobile robots equipped with embedded microprocessors and a limited power source.
