Self-Adaptation Through Reinforcement Learning Using a Feature Model

Selma Ouareth, Soufiane Boulehouache, Mazouzi Smaine
Copyright: © 2022 | Pages: 20
DOI: 10.4018/IJOCI.312226

Abstract

Typically, self-adaptation is achieved by implementing the MAPE-K control loop. Researchers agree that multiple control loops should be used when the system is complex and large-scale. Hierarchical control has proven to be a good compromise for achieving SAS goals, as it retains a degree of centralization while still allowing some decentralization. The decentralized entities must be coordinated to ensure the consistency of the adaptation processes. However, the high cost of data transfer between coordinating entities can be an obstacle to system scalability, so coordination should be defined only between entities that actually need to communicate. Moreover, most current SASs rely on a static MAPE-K structure. In this article, the authors present a new method that allows the structure and behavior of the loops to be changed at runtime. The approach builds on exploration strategies for online reinforcement learning, using a feature model to define the adaptation space.
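The idea of learning which loop configuration to adopt can be illustrated with a minimal sketch: online epsilon-greedy value learning over a small adaptation space. In the article this space would be derived from the feature model's constraints; here it is listed by hand, and the configuration names, reward function, and parameters are illustrative assumptions rather than the authors' actual implementation.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Hypothetical adaptation space: valid loop configurations that a feature
# model would normally generate. Listed by hand for this sketch.
ADAPTATION_SPACE = [
    ("centralized",),
    ("decentralized", "coordinated"),
    ("decentralized", "uncoordinated"),
]

def epsilon_greedy(q_values, epsilon):
    """Explore a random configuration with probability epsilon, else exploit."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])

def learn(reward_fn, episodes=500, alpha=0.1, epsilon=0.2):
    """Online value learning over the (stateless) adaptation space."""
    q = [0.0] * len(ADAPTATION_SPACE)
    for _ in range(episodes):
        i = epsilon_greedy(q, epsilon)
        r = reward_fn(ADAPTATION_SPACE[i])  # observed utility of this configuration
        q[i] += alpha * (r - q[i])          # incremental value update
    return q

# Toy reward: coordinated decentralization performs best in this sketch.
def toy_reward(config):
    base = {"centralized": 0.5, "decentralized": 0.7}[config[0]]
    bonus = 0.2 if "coordinated" in config else 0.0
    return base + bonus + random.gauss(0, 0.05)

q = learn(toy_reward)
best = ADAPTATION_SPACE[max(range(len(q)), key=lambda i: q[i])]
```

The key point the sketch captures is that exploration is confined to configurations the feature model deems valid, so the learner never proposes an inconsistent loop structure.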

Background

In this section, the authors introduce the background knowledge needed to understand the contribution.
