1 Introduction
Post-disaster management refers to the measures that the society or community of a disaster-affected zone must take to clear debris and save lives, which is the highest priority after an emergency occurs. While carrying out this top-priority task, rescuers may lose their own lives saving others in hazardous smoke, a burning building, or a collapsed structure, in environments where, in general, no person can remain safe.
To handle this type of situation, machines, in particular drones, can be deployed. These drones are typically remote-controlled by a human operator (Bhattarai et al., 2018). Remote-controlled drones operate on channel signals sent from the user's remote; different channel signals represent different directions and speeds. A small mistake in sending the correct channel can therefore cause losses such as UAV damage, improper detection, or damage to the environment. This can be overcome by making the UAV operate on its own, analyzing the situation it is in and navigating accordingly. This can be achieved using image processing and computer vision, sensors, and a suitable path-planning algorithm that takes multiple factors into account (Padhy et al., 2018).
For image processing, the trained model should meet a standard, i.e., it should be both fast and accurate; it is not always possible to achieve both (Keerthana & Kala, 2019), although a model can be standardized (Hartawan et al., 2019). Accuracy is calculated on the images the camera captures, so the camera should produce images similar to those the model was trained on, with comparable pixel densities, dimensions, colors, and contrast. If the training images are of high quality, the camera should generate images of matching quality: training on high-quality pictures and then passing a low-quality image for detection can make the screening inappropriate or incorrect. Thus the training images and the captured images should have similar quality. For fast detection, the trained model should run at a high frame rate, i.e., detect at a minimum of ten frames per second.
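The ten-frames-per-second requirement can be checked directly from measured per-frame detection latencies. A minimal sketch (the function name, threshold constant, and input format are our own illustration, not part of the described system):

```python
MIN_FPS = 10  # threshold from the text: detect at no less than ten frames per second

def meets_fps_target(frame_times_s, min_fps=MIN_FPS):
    """Return True if the average per-frame detection time sustains min_fps.

    frame_times_s: list of measured detection latencies in seconds per frame.
    """
    avg_latency = sum(frame_times_s) / len(frame_times_s)
    return (1.0 / avg_latency) >= min_fps
```

For example, latencies averaging 50 ms per frame correspond to 20 FPS and pass the check, while 200 ms per frame gives only 5 FPS and fails it.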
For detection to happen, the UAV must navigate, and for it to be autonomous, the navigation must be dynamic, adapting to the relevant parameters of the situation. A geo-mapping UAV cannot participate in saving the lives of people trapped under debris, and a debris-clearing UAV cannot survey geolocation; the two roles must go hand in hand. Most importantly, the UAV has to be energy efficient.
Communication must be uninterruptible when dealing with post-disaster zones: video streaming from one place to another can be cut off by damage to the communication infrastructure, and such losses cannot be repaired on the spot, so they have to be managed seamlessly.
This paper presents a system to overcome the challenges faced during a disaster, in which the UAV navigates autonomously based on general graph theory: distances are calculated from the UAV's movement away from the root node, where nodes are entry points leading to pathways that change dynamically according to the measured traveled distance and can be adjusted. Image processing also aids navigation by analyzing the environment using computer vision. The technique runs as a model on a System on Chip (SoC). To manage speed, accuracy, and Frames Per Second (FPS), we use MobileNet version 1 with a Single Shot multibox Detector (MobileNetv1-SSD). To detect assets, we use an image dataset of approximately 10,000 images. The captured data are processed on-site by the UAV's processor, and the results are streamed to the console. To transmit and receive data (images or manual commands), we use a Cognitive Radio Network (CRN), which communicates over unlicensed bands, thus saving time and speeding up rescue.
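The distance bookkeeping described above, accumulating traveled distance from the root node along a dynamically updated graph of entry points, can be sketched with a standard shortest-path pass (Dijkstra's algorithm). The function name, the adjacency format, and the example graph are our own assumptions for illustration:

```python
import heapq

def distances_from_root(graph, root):
    """Shortest accumulated traveled distance from the root node to every
    reachable node. graph maps a node to a list of (neighbor, distance)
    pairs, where distance is the measured length of that path segment."""
    dist = {root: 0.0}
    pq = [(0.0, root)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter route was already found
        for neighbor, seg_len in graph.get(node, []):
            nd = d + seg_len
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(pq, (nd, neighbor))
    return dist
```

As segment lengths are re-measured during flight, the graph's edge weights can be updated and the distances recomputed, which matches the dynamic adjustment of pathways described in the text.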
As novelties of our proposed system, the following are considered:
- Detection of multiple parameters, namely doors and path holes, for navigation, together with the distance between objects inside a disaster zone.
- Autonomous behavior using image recognition and Open Source Computer Vision (OpenCV), with real-time distances between objects and paths, and with camera FPS synchronized to detection speed.
- A CRN simulation showing how communication between the UAV and the console can be made uninterruptible in a disaster zone.