A Reinforcement Learning Integrating Distributed Caches for Contextual Road Navigation

Jean-Michel Ilié, Ahmed-Chawki Chaouche, François Pêcheux
Copyright: © 2022 | Pages: 19
DOI: 10.4018/IJACI.300792
Abstract

Due to contextual traffic conditions, the computation of optimized or shortest paths is a very complex problem for both drivers and autonomous vehicles. This paper introduces a reinforcement learning mechanism that efficiently evaluates path durations based on an abstraction of the available traffic information. The authors demonstrate that a cache data structure allows permanent access to the results, while a lazy policy for taking new data into account increases the viability of those results. As a client of the proposed learning system, the authors consider a contextual path planning application and additionally show the benefit of integrating a client cache at this level. Measurements highlight the performance of each mechanism under different learning and caching strategies.
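As a rough illustration of the idea described in the abstract, the following is a minimal sketch (not the authors' implementation) of a duration estimator that caches learned segment-duration estimates and folds in new traffic observations lazily, only when a cached entry is next queried. All names and parameters (DurationCache, alpha, the default duration) are illustrative assumptions.

```python
from collections import defaultdict

class DurationCache:
    """Cached travel-time estimates per road segment, updated lazily."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha                  # learning rate for incremental updates
        self.estimates = {}                 # (src, dst) -> estimated travel time (s)
        self.pending = defaultdict(list)    # buffered observations per segment

    def observe(self, segment, measured_duration):
        """Buffer a new traffic observation; do not recompute immediately (lazy policy)."""
        self.pending[segment].append(measured_duration)

    def duration(self, segment, default=60.0):
        """Return the cached estimate, applying any pending observations first."""
        est = self.estimates.get(segment, default)
        for obs in self.pending.pop(segment, []):
            # exponential-recency update, in the spirit of temporal-difference learning
            est += self.alpha * (obs - est)
        self.estimates[segment] = est
        return est

if __name__ == "__main__":
    cache = DurationCache()
    cache.observe(("A", "B"), 95.0)         # fresh traffic report for segment A->B
    print(cache.duration(("A", "B")))       # estimate is only recomputed on access
```

A client such as a path planner would sum `duration()` over the segments of a candidate route; because updates are deferred until lookup, estimates remain permanently available while still absorbing new data over time.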
Article Preview

Literature Review

Reinforcement learning is a paradigm well suited to the problem of autonomous learning in dynamic environments, assuming for instance no prior information about the actions to perform (Stafylopatis and Blekas, 1998; Veres and Moussa, 2019). Compared to the model-based approach, which specifies an a priori time-dependent behavior for a real-time system, e.g., (Boukharrou et al., 2017), it is better suited to capturing dynamicity on the fly and handling immediate adaptation of execution parameters.
