Autonomous Unmanned Aerial Vehicle for Post-Disaster Management With Cognitive Radio Communication

Raja Guru R., Naresh Kumar P.
Copyright: © 2021 | Pages: 24
DOI: 10.4018/IJACI.2021010102

Abstract

Unmanned aerial vehicles (UAVs) play a significant role in locating victims in a post-disaster zone, where a human cannot risk entering under the critical conditions of the disaster environment. The proposed design incorporates autonomous vision-based navigation through the disaster environment based on general graph theory, with dynamic changes to the length between two or more nodes, where each node is a pathway. A camera fixed on the UAV continuously captures the surrounding footage, which is processed frame by frame on-site on a system on chip (SoC) using image-processing techniques to identify victims in the zone and the pathways available for traversal. The UAV uses an ultrasonic rangefinder to avoid collisions with obstacles. The system alerts the rescue team when a victim is detected and transmits the frames to the off-site console over a cognitive radio network (CRN). The UAV learns a navigation policy that achieves high accuracy in real-time environments, and communication over the CRN remains uninterrupted and useful during such emergencies.

1 Introduction

Post-disaster management refers to a situation in which the society or community of a disaster-affected zone must take measures to clear debris and save lives, which is the highest priority after an emergency occurs. In dealing with this top-priority task, people may lose their own lives while saving others in hazardous smoke, a burning building, or a collapsed structure; these are, in general, conditions in which a person cannot remain safe.

Machines, and drones in particular, can handle this type of situation. However, such drones are usually controlled by a human, i.e., remote-controlled (Bhattarai et al., 2018). Remote-controlled drones work on channel signals sent from the user's remote, where different channel signals represent different directions and speeds. A small mistake in sending the correct channel can therefore cause losses such as UAV damage, improper detection, or damage to the environment. This has to be overcome by making the UAV operate on its own, analyzing the situation it is in and navigating accordingly. This can be done using image processing and computer vision, sensors, and a proper path-planning algorithm that depends on multiple factors (Padhy et al., 2018).

When considering image processing, a trained model should meet a standard, i.e., it should be both fast and accurate, although it is sometimes not possible to have both (Keerthana & Kala, 2019); still, a model can be standardized (Hartawan et al., 2019). Detection accuracy depends on the images the camera captures, so the camera should produce images similar to those with which the model was trained, meaning identical pixel densities, dimensions, coloring, and contrast. If the training images are of high quality, then the camera should generate images of the same quality, because training on high-quality pictures and then passing a low-quality image for detection can make the screening inappropriate or incorrect. The training images and the captured images should therefore have similar quality. For fast detection, the trained model should operate at a high frame rate, i.e., detect at a rate of at least ten frames per second.
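As an illustration of matching the captured frames to the training data, the following minimal Python/OpenCV sketch resizes and color-converts each frame to the detector's expected input before inference. The 300×300 input size, camera index, and 10 FPS target are assumptions for illustration, not values confirmed by the article.

```python
import cv2

# Assumed target geometry for the detector (300x300 is the usual
# MobileNetv1-SSD input; adjust to match the actual training set).
MODEL_INPUT_SIZE = (300, 300)

def preprocess(frame):
    """Bring a captured frame to the same size/colour space as the
    training images so detection quality does not degrade."""
    frame = cv2.resize(frame, MODEL_INPUT_SIZE, interpolation=cv2.INTER_AREA)
    # OpenCV captures BGR; convert if the model expects RGB.
    return cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

cap = cv2.VideoCapture(0)          # on-board camera index is hardware-specific
cap.set(cv2.CAP_PROP_FPS, 10)      # aim for at least ten frames per second
while cap.isOpened():
    ok, raw = cap.read()
    if not ok:
        break
    frame = preprocess(raw)
    # ... pass `frame` to the detector here ...
```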

For detection to happen, the UAV must navigate, and for it to be autonomous, the navigation should be dynamic or adaptive according to the parameters of the situation. A geo-mapping UAV cannot participate in saving the lives of people under debris, and a debris-clearing UAV cannot participate in surveying geolocation; both tasks must go hand in hand. Most importantly, the UAV has to be energy efficient.

Communication should be uninterruptible when dealing with post-disaster zones, as video streaming from one place to another can be interrupted by damage to the communication infrastructure. Such losses cannot be repaired on the spot; they have to be managed seamlessly.

This paper presents a system to overcome the challenges faced during a disaster, in which the UAV navigates autonomously based on general graph theory: distances are calculated from the movement of the UAV from the root node, where nodes are the entry points leading to a pathway, and edge lengths change dynamically according to the measured traveled distance and can be adjusted. Image processing also aids navigation by analyzing the environment with computer vision. The technique used here runs the model on a system on chip (SoC). To balance speed, accuracy, and frames per second (FPS), we use MobileNet version 1 with a Single Shot multibox Detector (MobileNetv1-SSD). To detect assets, we use an image dataset of approximately 10,000 images. The captured data are processed on-site by the UAV's processor, and the results are streamed to the console. To transmit and receive data (images or manual commands), we use a cognitive radio network (CRN), which communicates over unlicensed bands, thus saving time and enabling a fast rescue.
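A minimal sketch of how a MobileNetv1-SSD forward pass could be run on-site with OpenCV's DNN module is shown below. The Caffe file names, the 0.007843 scale factor, 127.5 mean, and 300×300 input are the conventional MobileNet-SSD settings and are assumptions here; the article's own model would be trained on its roughly 10,000-image dataset.

```python
import cv2
import numpy as np

# Assumed file names for a Caffe-format MobileNetv1-SSD model.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

def detect(frame, conf_threshold=0.5):
    """Run one SSD forward pass and return (class_id, confidence, box) tuples."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 scalefactor=0.007843,
                                 size=(300, 300),
                                 mean=127.5)
    net.setInput(blob)
    detections = net.forward()           # shape: (1, 1, N, 7)
    results = []
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence < conf_threshold:
            continue
        class_id = int(detections[0, 0, i, 1])
        # Boxes are normalised; scale back to the original frame size.
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        results.append((class_id, confidence, box.astype(int)))
    return results
```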

As novelties of our proposed system, the following are considered:

  • Detection of multiple cues, namely doors and path holes, for navigation, together with the distances between objects inside the disaster zone (a path-planning sketch follows this list).

  • Autonomous behavior using image recognition and Open Source Computer Vision (OpenCV), with real-time distances between objects and paths, and with the camera FPS kept in sync with the detection speed.

  • A CRN simulation showing how communication between the UAV and the console in a disaster zone can be made uninterruptible.
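To illustrate the graph-based navigation idea described above (nodes as pathway entry points, edge lengths revised as the UAV measures traveled distance), here is a hypothetical Python sketch using plain Dijkstra; all node names, lengths, and the update function are illustrative assumptions, not the article's implementation.

```python
import heapq

# Hypothetical pathway graph: adjacency list of node -> {neighbour: length}.
graph = {
    "root": {"door_A": 4.0, "door_B": 6.5},
    "door_A": {"corridor_1": 3.0},
    "door_B": {"corridor_1": 2.0},
    "corridor_1": {},
}

def update_edge(u, v, measured_length):
    """Replace the planned edge length with the distance the UAV actually traveled."""
    graph[u][v] = measured_length

def shortest_path(start, goal):
    """Plain Dijkstra over the current (possibly updated) edge lengths."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, length in graph[node].items():
            if nxt not in visited:
                heapq.heappush(queue, (dist + length, nxt, path + [nxt]))
    return float("inf"), []

update_edge("root", "door_A", 5.2)        # rangefinder reports a longer run
print(shortest_path("root", "corridor_1"))
```

The point of the sketch is that replanning is cheap: after each segment the measured length overwrites the planned one, and the next shortest-path query already reflects the changed environment.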
