Visual Feedback Control Through Real-Time Movie Frames for Quadcopter With Object Count Function and Pick-and-Place Robot With Orientation Estimator

Lu Shao, Fusaomi Nagata, Maki K. Habib, Keigo Watanabe
DOI: 10.4018/978-1-7998-8686-0.ch005

Abstract

Previous studies have successfully demonstrated the detection of a desired object in real-time movie frames using a template image prepared by color or shape. At every sampling period of 40 ms, the center of gravity (COG) of the detected object can be calculated consecutively. Hence, visual feedback (VF) control of a quadrotor is possible by treating the change of the COG as the relative velocity with respect to the desired object. In this chapter, a useful function is developed that detects and counts desired objects in real time, allowing the quadrotor to monitor the number of individually selected objects, such as cars or animals, in the frames of a movie. The function supports high-speed counting while avoiding errors caused by overlapping objects. The proposed VF controller is also successfully applied to a pick-and-place robot, in which a transfer-learning-based convolutional neural network estimates object orientation for smooth picking.
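The two core operations described above — computing the COG of a detected object every frame and differencing consecutive COGs to obtain a relative velocity, plus counting separated objects — can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the binary mask is assumed to come from the color- or template-based detection step, the 40 ms sampling period is taken from the chapter, and `count_objects` uses a simple 4-connected flood fill in place of whatever overlap-aware method the chapter proposes.

```python
import numpy as np

SAMPLE_TIME = 0.04  # 40 ms frame period, as stated in the chapter


def center_of_gravity(mask):
    """Return the (x, y) COG of a binary detection mask, or None if empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return np.array([xs.mean(), ys.mean()])


def relative_velocity(cog_prev, cog_curr, dt=SAMPLE_TIME):
    """Approximate the target's image-plane velocity (px/s) from two COGs."""
    return (cog_curr - cog_prev) / dt


def count_objects(mask):
    """Count 4-connected foreground regions via an iterative flood fill."""
    mask = mask.astype(bool)
    visited = np.zeros_like(mask)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                count += 1  # found an unvisited object; flood-fill it
                stack = [(i, j)]
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return count


# Example: the target shifts 2 px right and 1 px down between frames
frame1 = np.zeros((10, 10), dtype=np.uint8)
frame1[4:6, 4:6] = 1
frame2 = np.zeros((10, 10), dtype=np.uint8)
frame2[5:7, 6:8] = 1

v = relative_velocity(center_of_gravity(frame1), center_of_gravity(frame2))
# v is the apparent target velocity in pixels per second along (x, y)
```

Because the COG averages over all detected pixels, it is less sensitive to per-pixel detection noise than any single feature point, which is one reason it is a convenient feedback signal at a 25 fps frame rate.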

Introduction

There are growing demands for robot technologies that enable advanced functions, such as enhanced flexibility, agility, and friendly interaction with the environment, other robots, and human beings across many application domains. During the last decade, the technologies supporting the development of Unmanned Aerial Vehicles (UAVs), such as quadrotors, have progressed remarkably, and many promising applications have contributed new results. For example, Lu et al. developed a remote control and monitoring system that enables an operator to remotely control a quadcopter and monitor its operational surroundings using an iOS device. Primary handlers for obtaining compass information, controlling a gimbal, autopilot functions for landing or emergency intervention, photo and video preview, photo shooting, and movie recording were developed, implemented, evaluated, and confirmed through experiments (Lu et al., 2017a). Koziar et al. discussed a quadrotor design for outdoor air quality monitoring, describing that monitoring air quality to prevent environmental pollution and improve human quality of life is essential; one of the most promising directions in ecological applications is measurement with UAVs, which have significant advantages over ground vehicles, such as high maneuverability and easy deployment (Koziar et al., 2019). Rosario et al. applied an image processing technique to detect cars in a specific parking lot using a UAV hovering above it (Rosario et al., 2020). Also, Allen and Mazumder proposed an integrated system concept for autonomously surveying areas impacted by natural disasters and planning the emergency response, composed of a network of ground stations and autonomous aerial vehicles interconnected by an ad hoc emergency communication network (Allen & Mazumder, 2020).
Further research has covered the use of multiple UAVs as a subset of multi-robot systems, an important step toward the communication, coordination, and cooperation needed to apply the technology to a wide range of tasks. The advantages of this research direction include flexibility, reliability, cooperation, complementarity, time efficiency, and cost.

As for the image processing technology on which this chapter focuses, various applications using consumer cameras and smartphones are already implemented and in use. For example, people detection by surveillance cameras has been realized, and image processing techniques supported by machine learning are developing rapidly to support autonomous driving of cars. An effective autonomous navigation system must recognize the road environment around the automobile and avoid any possible collision; hence, image recognition technology is applied to movies obtained by a camera used as a primary sensor (Hattori, 2008). Cameras, movie frames, and image processing techniques are also used together with UAVs for various purposes. Serizawa et al. developed a video system that rotates and translates a stereo camera mounted on a flying robot in synchronization with the operator's head movement (Serizawa et al., 2019). Otsuka et al. developed a localization technique for a small UAV in a GPS-unavailable indoor environment using dropping-type AR (Augmented Reality) markers (Otsuka et al., 2015). Saito et al. proposed a method to capture and recognize the movement of a person: by setting multiple markers on the ground, a UAV's position can be estimated by capturing the markers' images with a camera integrated into the UAV body, while the trajectory of a person on the ground is concurrently monitored by estimating his or her position in the same images. Saito's team focused on reducing the position measurement error, which was achieved by using a Kalman filter (Saito et al., 2018). Lu et al. developed two software programs, one for rounding around a target object and one for executing a mission planning function with multiple flight tasks.
These functions were implemented and integrated with a quadrotor (Lu et al., 2017b; Lu et al., 2018).
