Mobile Robot Path Planning Using Continuous Laser Scanning

Mykhailo Ivanov (Universidad Autónoma de Baja California, Mexico), Lars Lindner (Universidad Autónoma de Baja California, Mexico), Oleg Sergiyenko (Universidad Autónoma de Baja California, Mexico), Julio Cesar Rodríguez-Quiñonez (Universidad Autónoma de Baja California, Mexico), Wendy Flores-Fuentes (Universidad Autónoma de Baja California, Mexico) and Moises Rivas-Lopez (Universidad Autonoma de Baja California, Mexico)
Copyright © 2019 | Pages: 35
DOI: 10.4018/978-1-5225-5751-7.ch012

Abstract

The main objective of this book chapter is to introduce and present mobile robot path planning using continuous laser scanning, which offers significant advantages over discrete laser scanning. A general introduction to laser scanning systems is given, and a novel technical vision system (TVS) using the dynamic triangulation measurement method for 3D coordinate determination is found suitable for the task of mobile robot path planning. Furthermore, methods and algorithms for mobile robot road maps and path planning are presented and compared.

Introduction

Robots are used to carry out mechanical work for humans and are designed as stationary machines or mobile vehicles. Autonomous mobile robots (AMRs) are mobile machines that can move independently in their surroundings and carry out specific tasks. Robotics is the scientific branch that deals with the construction and design of robots; it is strongly related to electrical engineering, computer science, and mechanics, and mechatronics has developed from these three disciplines. Different versions of mobile robots exist, each using the actuators required for its terrain: on flat terrain, wheels are mostly used for locomotion, while on uneven terrain chains or legs are usually used.

The hardware used for mobile robots can be divided mainly into two groups: sensors and actuators. The sensors measure physical parameters of the environment as well as the current axis positions and speeds of the robot. They can further be divided into internal and external sensors, whereby the internal sensors measure current status data about the mobile robot itself (odometers, accelerometers, gyrocompasses, etc.) and the external sensors record data from the environment (cameras, laser scanners, ultrasonic sensors, etc.). The actuators represent the counterpart of the sensors and are used to manipulate the mobile robot's position in space or the robot's environment. The actuators can likewise be classified into internal and external actuators: internal actuators are primarily used to alter the state of the robot, while external actuators drive the mobile robot or move external objects.
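This two-by-two classification (sensors versus actuators, internal versus external) can be sketched as a small data model. The `Device` and `Scope` types and the listed device names are illustrative assumptions for this sketch, not part of the chapter:

```python
from dataclasses import dataclass
from enum import Enum

class Scope(Enum):
    INTERNAL = "internal"  # concerns the robot's own state
    EXTERNAL = "external"  # concerns the environment or external objects

@dataclass(frozen=True)
class Device:
    name: str
    scope: Scope

# Hypothetical inventory illustrating the classification above.
sensors = [
    Device("odometer", Scope.INTERNAL),       # measures robot status data
    Device("laser scanner", Scope.EXTERNAL),  # records environment data
]
actuators = [
    Device("gripper joint servo", Scope.INTERNAL),  # alters the robot's state
    Device("drive wheel motor", Scope.EXTERNAL),    # drives the robot itself
]

def internal(devices):
    """Filter a device list down to the internal group."""
    return [d for d in devices if d.scope is Scope.INTERNAL]
```

Such a model makes the grouping explicit in software, e.g. when deciding which sensor readings belong to robot state estimation and which to environment mapping.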

The environment of a mobile robot is typically measured with CCD cameras and/or laser scanning systems. In Ohnishi and Imiya (2013), for example, a mobile robot is navigated using a “visual potential”, which is computed from a sequence of images captured by a camera mounted on the robot. The work of Correal, Pajares, and Ruz (2014) uses an automatic expert system for 3D terrain reconstruction, which captures the robot environment with two cameras stereoscopically, similar to human binocular vision. Laser scanning systems, as a remote sensing technology, are instead known as Light Detection and Ranging (Lidar) systems, which are widely used in many areas, including mobile robot navigation. Kumar, McElhinney, Lewis, and McCarthy (2013), for example, use an algorithm and terrestrial mobile Lidar data to compute the left and right road edges of a route corridor. In Hiremath, van der Heijden, van Evert, Stein, and ter Braak (2014), a mobile robot equipped with a Lidar system navigates in a cornfield using the time-of-flight principle.
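The two ranging principles mentioned here can be sketched in a few lines: time-of-flight Lidar halves the measured round-trip path of a laser pulse, while triangulation (the principle behind the TVS mentioned in the abstract) solves the emitter–receiver–spot triangle via the law of sines. This is a simplified planar sketch with illustrative function names, not the chapter's actual implementation:

```python
import math

C_LIGHT = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Time-of-flight ranging: a laser pulse travels to the target and
    back, so the range is half the round-trip path length."""
    return C_LIGHT * round_trip_s / 2.0

def triangulation_distance(baseline_m: float, beta: float, gamma: float) -> float:
    """Planar triangulation ranging: an emitter and a receiver, separated
    by a known baseline, observe the projected laser spot under angles
    beta and gamma (radians, measured from the baseline). The law of
    sines yields the perpendicular distance from the baseline to the spot."""
    return baseline_m * math.sin(beta) * math.sin(gamma) / math.sin(beta + gamma)
```

For example, with a 1 m baseline and both angles at 45°, the spot lies half a meter from the baseline; the contrast between the two functions also hints at why triangulation accuracy depends on geometry (angles and baseline) rather than on timing resolution.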

However, other sensors and methods are also used to measure the mobile robot's environment. The paper by Benet, Blanes, Simo, and Perez (2002), for example, uses infrared (IR) and ultrasonic (US) sensors for map building and object location on a mobile robot prototype. One rotary ultrasonic sensor is installed on top, and a ring of 16 infrared sensors is distributed in eight pairs around the perimeter of the robot. These IR sensors are based on direct measurement of the magnitude of IR light back-scattered from a surface placed in front of the sensor. The typical response time of these IR sensors for a distance measurement is about . Distance measurement with this sensor can be realized from a few centimeters to , which represents one limitation of this approach; the range of coordinate measurements by triangulation can be far over . The work of Volos, Kyprianidis, and Stouboulos (2013) even experiments with a chaotically controlled mobile robot, which uses only an ultrasonic distance sensor for short-range measurement to avoid obstacle collisions. The experimental results show the applicability of chaotic systems to real autonomous mobile robots.
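Intensity-based IR ranging of the kind described by Benet et al. can be approximated under an idealized inverse-square falloff calibrated at a single reference distance. The function name, the calibration scheme, and the inverse-square model are illustrative simplifications (real sensors also depend on surface reflectivity and incidence angle, one reason such sensors are limited to short ranges):

```python
import math

def ir_distance_estimate(intensity: float,
                         ref_intensity: float,
                         ref_distance_m: float) -> float:
    """Estimate range from back-scattered IR intensity, assuming an
    idealized inverse-square falloff I ~ 1/d^2 calibrated at one
    reference point: I / I_ref = (d_ref / d)^2, hence
    d = d_ref * sqrt(I_ref / I)."""
    return ref_distance_m * math.sqrt(ref_intensity / intensity)
```

For instance, a reading at one quarter of the reference intensity corresponds to twice the reference distance under this model.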
