A Review of Registration Methods on Mobile Robots


Vicente Morell-Gimenez (University of Alicante, Spain), Sergio Orts-Escolano (University of Alicante, Spain), José García-Rodríguez (University of Alicante, Spain), Miguel Cazorla (University of Alicante, Spain) and Diego Viejo (University of Alicante, Spain)
Copyright: © 2013 |Pages: 13
DOI: 10.4018/978-1-4666-3994-2.ch029


Registering three-dimensional data sets under rigid motion is a fundamental problem in many areas, such as computer vision, medical imaging, and mobile robotics, arising whenever two or more 3D data sets must be aligned in a common coordinate system. This chapter reviews the main registration methods in the literature, with a focus on mobile robotics. These methods can be broadly classified as distance-based or feature-based. Distance-based methods, of which the classical Iterative Closest Point (ICP) algorithm is the most representative, have many variants that achieve better results under particular noise, time, or accuracy constraints. Feature-based methods reduce the large number of points provided by current sensors by combining a feature detector with a descriptor; the resulting correspondences can then be used to compute the final transformation with a method such as RANSAC or genetic algorithms.
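To make the distance-based idea concrete, the following is a minimal ICP sketch: at each iteration, every source point is matched to its nearest destination point, and the rigid motion that minimizes the squared distances between matched pairs is obtained in closed form via SVD (the Kabsch solution). This is an illustrative sketch rather than any particular implementation from the literature; the function names are our own, the nearest-neighbor search is brute force for clarity, and a practical system would use a k-d tree and outlier rejection.

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Closed-form least-squares rotation R and translation t mapping src -> dst
    # (Kabsch/SVD solution on centered point sets).
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # reflection guard
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    # Iterate: match each source point to its nearest destination point
    # (brute-force here), then solve for the rigid motion and apply it.
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        # Accumulate the composed transform: x -> R_total @ x + t_total.
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Because the closed-form step is optimal for the current correspondences, each iteration cannot increase the matching error; the variants surveyed in this chapter mostly differ in how correspondences are chosen, weighted, or rejected.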
Chapter Preview

2D/3D Data Acquisition

We have used several robot platforms, depending on the perception system. Two of these platforms are shown in Figure 1. The left one is a Magellan Pro from iRobot, used for indoor experiments. For outdoor experiments we have used a PowerBot from ActivMedia, which can also carry heavy payloads such as the 3D sweeping laser unit. Both robots come with an onboard computer.

Figure 1.

Mobile robots used for experiments. From left to right: the Magellan Pro unit used indoors, and the PowerBot used outdoors. The SR4000 camera is used with both robots; the Kinect sensor is used indoors.


In our research, we manage 3D data that can come from different sensor devices. For outdoor environments we use a 3D sweeping laser unit: an LMS-200 Sick laser mounted on a sweeping unit. Its range is 80 meters, with an error of 1 mm per meter. The main disadvantage of this unit is the data capture time: it takes about one minute to acquire a complete frame. For indoor environments we use two other sensors. The first is an SR4000 camera from Mesa Imaging, a time-of-flight camera based on infrared light. Its range is limited to 5 or 10 meters, and it provides gray-level images from the infrared spectrum. Finally, a Kinect sensor has been included. This sensor provides 3D data together with RGB data, with a maximum range of 10 meters.
