Vision Based Localization for Multiple Mobile Robots Using Low-cost Vision Sensor

Seokju Lee, Girma Tewolde, Jongil Lim, Jaerock Kwon
Copyright: © 2020 | Pages: 15
DOI: 10.4018/978-1-7998-1754-3.ch029

Abstract

This paper presents an efficient approach for vision-based localization of multiple mobile robots in an indoor environment using a low-cost vision sensor. The proposed vision sensor system, which uses a single camera mounted over the mobile robots' field of operation, offers the advantages of small size, low energy consumption, and high flexibility, allowing it to play an important role in the field of robotics. An nRF24L01 RF transceiver is connected to the vision system to enable wireless communication with multiple devices through six different data pipes. The downward-facing camera can identify a number of objects based on color codes; these colored landmarks provide the mobile robots with useful image information for localization in the image view, which is then transformed to real-world coordinates. Experimental results are given to show that the proposed method achieves good localization performance in a multi-robot setting.
Chapter Preview
Introduction

An essential and one of the most challenging components of a mobile robot is its localization and navigation system. All mobile robots must be able to estimate their position in a given environment in order to navigate autonomously. Since the early days of mobile robotics localization research, vision-based methods have been among the techniques explored by researchers. Vision sensors are among the most versatile sensors, as they can be used in many environments, including indoor, outdoor, and even underwater applications (Sun, Yu, & Xu, 2013). They can provide useful image information about detected shapes or colors in their field of view, which is particularly helpful for localizing a mobile robot. Therefore, the authors propose a localization method that identifies each mobile robot and obtains its relative position and heading direction within a bounded field using a vision sensor. Two important papers (DeSouza & Kak, 2002), (Bonin-Font, Ortiz, & Oliver, 2008) provide a good survey of the progress made so far in vision-based navigation and localization methods.

Unfortunately, vision-based localization requires complex algorithms and high-quality hardware resources when it relies on general environment features. However, using simple landmarks can dramatically reduce the cost and complexity of the recognition system. We propose to use color codes, each a combination of two or more color tags placed close together, as simple landmarks to differentiate the mobile robots. A camera called “Pixy,” which implements a hue-based filtering algorithm, is used to detect objects of a specified color. This camera is low cost, fully programmable, and suitable for real-time processing, with enough flexibility to be connected directly to most microcontroller-based systems without any additional electronics. Its small size and real-time performance make it possible to apply embedded vision to robotics.
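As a minimal sketch of how such a camera can be read from a microcontroller, the following Arduino-style code polls the detected color-code blocks. It assumes the original Pixy (CMUcam5) Arduino library over SPI; the baud rate, polling interval, and printed format are illustrative only, not the authors' actual implementation.

```cpp
// Minimal sketch for reading color-code blocks from a Pixy camera
// (assumes the original Pixy/CMUcam5 Arduino library over SPI).
#include <SPI.h>
#include <Pixy.h>

Pixy pixy;

void setup() {
  Serial.begin(115200);
  pixy.init();                      // initialize the link to the camera
}

void loop() {
  uint16_t n = pixy.getBlocks();    // fetch all detected blocks / color codes
  for (uint16_t i = 0; i < n; i++) {
    // For color codes, 'signature' encodes the combination of color tags,
    // so it can serve as the ID of an individual robot's landmark.
    Serial.print("id=");     Serial.print(pixy.blocks[i].signature);
    Serial.print(" x=");     Serial.print(pixy.blocks[i].x);       // center (pixels)
    Serial.print(" y=");     Serial.print(pixy.blocks[i].y);
    Serial.print(" w=");     Serial.print(pixy.blocks[i].width);
    Serial.print(" h=");     Serial.print(pixy.blocks[i].height);
    Serial.print(" angle="); Serial.println(pixy.blocks[i].angle); // heading of the color code
  }
  delay(20);
}
```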

Image information about the ground is obtained by the downward-facing camera mounted over the field, and color codes are placed on the mobile robots, as shown in Fig. 1 (A). The main advantage of the vision system in this paper is that the vision sensor can be interfaced directly with an inexpensive microcontroller, which results in a compact and lightweight vision system. The proposed vision sensor is used to identify and track the mobile robots that move within the field of view of the camera. The built-in image processing unit of the vision system provides, for each identified color code, its ID, the x and y coordinates of its center, its width and height, and its angle. This information can be read by a microcontroller in real time and used to estimate the real-world position of the mobile robot.
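As a concrete illustration of that last step, the sketch below converts a color code's image-plane center and angle into a real-world pose; because the camera faces straight down over a known rectangular field, a simple linear scaling is enough. The field dimensions, image resolution, and the `toWorldPose` helper are placeholder assumptions for illustration, not values from the chapter.

```cpp
// Illustrative pixel-to-world conversion for a downward-facing camera.
// The image is assumed to span a known rectangular field; the field size
// and frame resolution below are placeholder values, not from the chapter.
#include <stdint.h>

const float FIELD_WIDTH_CM  = 200.0f;   // real-world width covered by the image (assumed)
const float FIELD_HEIGHT_CM = 150.0f;   // real-world height covered by the image (assumed)
const float IMAGE_WIDTH_PX  = 320.0f;   // assumed camera frame width in pixels
const float IMAGE_HEIGHT_PX = 200.0f;   // assumed camera frame height in pixels

struct Pose {
  float x_cm;       // real-world x position of the robot
  float y_cm;       // real-world y position of the robot
  float theta_deg;  // heading, taken from the color-code angle
};

// Convert a detected block's image coordinates and angle into a world pose.
Pose toWorldPose(uint16_t px, uint16_t py, int16_t angle) {
  Pose p;
  p.x_cm = (px / IMAGE_WIDTH_PX) * FIELD_WIDTH_CM;
  // Flip the y axis so world y increases "up" while image y increases "down".
  p.y_cm = (1.0f - py / IMAGE_HEIGHT_PX) * FIELD_HEIGHT_CM;
  p.theta_deg = (float)angle;
  return p;
}
```

In the reading sketch shown earlier, calling toWorldPose(pixy.blocks[i].x, pixy.blocks[i].y, pixy.blocks[i].angle) would yield the pose estimate that is later sent to the corresponding robot.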

Figure 1.

(A) Illustration of how the camera should be placed in real environment. (B) Block diagram illustrating the system architecture. Vision sensor system provides actual positions and orientations to mobile robots through wireless communication.


The component layout depicted in Fig. 1 (B) shows the overall system implementation. Multiple nRF24L01 wireless transceivers are used to establish communication between the vision sensor system and the mobile robots. These transceivers make the implementation easy, inexpensive, and well suited to this work because they allow point-to-multipoint communication. Therefore, the robots can broadcast their own information to their teammates using these wireless modules, which makes them an effective option for multi-robot systems (Carpin & Parker, 2002), (Pugh & Martinoli, 2006) that require broadcast communication.
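A minimal transmitter-side sketch of this idea is shown below, sending each robot its estimated pose over an nRF24L01 module. It assumes the widely used RF24 Arduino library; the CE/CSN pins, pipe addresses, `Pose` payload, and update rate are placeholder assumptions rather than the authors' actual implementation.

```cpp
// Illustrative transmitter-side sketch: the vision system sends each robot its
// estimated pose over nRF24L01 (assumes the RF24 Arduino library; pins,
// addresses, and payload layout are assumptions for illustration).
#include <SPI.h>
#include <nRF24L01.h>
#include <RF24.h>

RF24 radio(9, 10);                              // CE, CSN pins (assumed wiring)

// One 5-byte address per robot; an nRF24L01 receiver can listen on up to
// six pipes, which is what enables communication with multiple devices.
const uint8_t robotAddr[3][6] = { "ROBT1", "ROBT2", "ROBT3" };

struct Pose { float x_cm, y_cm, theta_deg; };   // 12-byte payload (< 32-byte limit)

void setup() {
  radio.begin();
  radio.setPALevel(RF24_PA_LOW);
  radio.stopListening();                        // this node only transmits
}

void sendPose(uint8_t robotId, const Pose &p) {
  radio.openWritingPipe(robotAddr[robotId]);    // address one robot at a time
  radio.write(&p, sizeof(p));                   // transmit the pose payload
}

void loop() {
  Pose example = {50.0f, 75.0f, 90.0f};         // placeholder pose from the vision system
  for (uint8_t id = 0; id < 3; id++) {
    sendPose(id, example);
  }
  delay(100);                                   // assumed update rate
}
```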

The rest of the paper is organized as follows: in Section “RELATED WORKS,” we begin with an overview of the related literature. The main hardware components and the localization method for multiple mobile robots are then detailed in Section “IMPLEMENTATION DETAILS.” Section “RESULTS” demonstrates the performance of the proposed localization method, using the NDI 3D Investigator optical instrument for evaluation. Section “CONCLUSION” presents concluding remarks and possible future work.
