Virtual Intelligent Autonomous Agents: Robot Navigation for Disaster Intervention

Akindele Segun Afolabi (University of Ilorin, Ilorin, Nigeria) and Olubunmi Adewale Akinola (Federal University of Agriculture, Abeokuta, Nigeria)
Copyright: © 2025 | Pages: 52
DOI: 10.4018/979-8-3693-7832-8.ch014

Abstract

The intersection of Artificial Intelligence (AI) and robotics has helped to promote the idea of enhancing the performance of real-world autonomous systems through their digital twins. In this context, reinforcement learning (RL) emerges as a key technology used by digital twins to learn policies that result in enhanced performance. RL problems are typically modelled within the Markov Decision Process (MDP) framework, in which an agent learns to take actions in a stochastic environment so as to maximize rewards. This chapter considers and simulates a scenario in which an intervention robot is trained to detect and navigate to a gas pipe leakage while avoiding collisions with objects in its path. The robot was trained on the Unity platform, which was selected mainly for its support for RL and its realistic physics simulation engine. Simulation experiments revealed that the robot required a minimum of approximately 300,000 training steps to master the task. After this, the robot intelligently navigated to the gas leakage by following the shortest path while avoiding collisions.
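
The abstract frames the navigation task as an MDP solved with RL. The sketch below is a minimal, hypothetical illustration of that framing in Python (tabular Q-learning on a toy grid world), not the chapter's actual Unity-based implementation; the grid layout, obstacle positions, reward values, and hyperparameters are all assumptions chosen for illustration only.

```python
import random
from collections import defaultdict

# Hypothetical toy grid world: the robot starts at (0, 0) and must reach the
# gas-leak cell while avoiding obstacle cells (layout assumed for illustration).
GRID = 5
LEAK = (4, 4)
OBSTACLES = {(1, 1), (2, 3), (3, 1)}
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """One MDP transition: returns (next_state, reward, done)."""
    nxt = (min(max(state[0] + action[0], 0), GRID - 1),
           min(max(state[1] + action[1], 0), GRID - 1))
    if nxt in OBSTACLES:
        return state, -1.0, False   # collision penalty; robot stays in place
    if nxt == LEAK:
        return nxt, 10.0, True      # reward for reaching the leak
    return nxt, -0.1, False         # small step cost encourages short paths

def train(episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning: learn a policy that maximizes discounted reward."""
    q = defaultdict(float)
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            # Epsilon-greedy action selection balances exploration and exploitation.
            if random.random() < epsilon:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[(state, i)])
            nxt, reward, done = step(state, ACTIONS[a])
            best_next = max(q[(nxt, i)] for i in range(len(ACTIONS)))
            q[(state, a)] += alpha * (reward + gamma * best_next - q[(state, a)])
            state = nxt
    return q

if __name__ == "__main__":
    q_table = train()
    print("Q-value of moving right from the start cell:", q_table[((0, 0), 0)])
```

The step cost, collision penalty, and terminal reward play the same roles as in the chapter's scenario: they push the learned policy toward the shortest collision-free path to the leak, although the chapter itself trains the agent in Unity rather than on a hand-coded grid.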