Navigation by Image-Based Visual Homing

Matthew Szenher
Copyright © 2009 | Pages: 6
ISBN13: 9781599048499 | ISBN10: 1599048493 | EISBN13: 9781599048505
DOI: 10.4018/978-1-59904-849-9.ch173

Abstract

Almost all autonomous robots need to navigate. We define navigation as do Franz & Mallot (2000): "Navigation is the process of determining and maintaining a course or trajectory to a goal location" (p. 134). We allow that this definition may be more restrictive than some readers are used to; it does not, for example, include problems like obstacle avoidance and position tracking, but it suits our purposes here.

Most algorithms published in the robotics literature localise in order to navigate (see e.g. Leonard & Durrant-Whyte (1991a)). That is, they determine their own location and the position of the goal in some suitable coordinate system. This approach is problematic for several reasons. Localisation requires a map of available landmarks (i.e. a list of landmark locations in some suitable coordinate system) and a description of those landmarks. In early work, the human operator provided the robot with a map of its environment. More recently, researchers have developed simultaneous localisation and mapping (SLAM) algorithms which allow robots to learn environmental maps while navigating (Leonard & Durrant-Whyte (1991b)). Of course, autonomous SLAM algorithms must choose which landmarks to map and must sense those landmarks from a variety of positions and orientations. Given a map, the robot must then associate sensed landmarks with those on the map. This data association problem is difficult in cluttered real-world environments and is an area of active research.

In this chapter we describe an alternative approach to navigation, called visual homing, which makes no explicit attempt to localise and thus requires no landmark map. There are broadly two types of visual homing algorithms: feature-based and image-based. The feature-based algorithms, as the name implies, attempt to extract the same features from multiple images and use the change in the appearance of corresponding features to navigate. Feature correspondence is, like data association, a difficult, open problem in real-world environments. We argue that image-based homing algorithms, which provide navigation information based on whole-image comparisons, are better suited to real-world environments in contemporary robotics.
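To make the whole-image idea concrete, the sketch below illustrates one simple image-based homing strategy: treat the root-mean-square (RMS) pixel difference between the robot's current panoramic view and a snapshot stored at the goal as an error surface, and move so as to descend it. This is a minimal illustration of the general technique under stated assumptions, not the specific algorithm presented in the chapter; the `robot` object with `move()` and `capture()` methods is a hypothetical interface, and images are assumed to be equal-sized greyscale NumPy arrays.

```python
# Minimal sketch of image-based visual homing by descent in image
# distance. Hypothetical robot interface: move(dx, dy) and capture().
import numpy as np

def image_distance(current, snapshot):
    """RMS pixel difference between two equal-sized greyscale images."""
    diff = current.astype(float) - snapshot.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def homing_step(robot, snapshot, step=0.05, n_probes=8):
    """Probe small moves in several directions and commit to the one
    that most reduces the image distance to the goal snapshot."""
    best_dir = None
    best_dist = image_distance(robot.capture(), snapshot)
    for angle in np.linspace(0.0, 2 * np.pi, n_probes, endpoint=False):
        dx, dy = step * np.cos(angle), step * np.sin(angle)
        robot.move(dx, dy)                      # exploratory probe move
        dist = image_distance(robot.capture(), snapshot)
        robot.move(-dx, -dy)                    # undo the probe
        if dist < best_dist:
            best_dir, best_dist = (dx, dy), dist
    if best_dir is not None:
        robot.move(*best_dir)                   # commit to downhill move
    return best_dist                            # ~0 when home is reached
```

Because no features are extracted or matched, this strategy sidesteps the correspondence problem entirely; its cost is the exploratory probe moves (or, in more sophisticated variants, a predictive model of how the image changes with motion) needed to estimate the downhill direction.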
