SiMAMT: A Framework for Strategy-Based Multi-Agent Multi-Team Systems

D. Michael Franklin, Xiaolin Hu
DOI: 10.4018/IJMSTR.2017010101

Abstract

Multi-agent multi-team systems are commonly seen in environments where hierarchical layers of goals are at play, for example, theater-wide combat scenarios where multiple levels of command and control are required for the proper execution of goals, from the general down to the foot soldier. Similar structures can be seen in game environments, where agents work together as teams to compete with other teams. The different agents within the same team must, while maintaining their own “personality”, work together and coordinate with each other to achieve a common team goal. This paper develops strategy-based multi-agent multi-team systems, where strategy is framed as a team-level instrument for coordinating a team's multiple agents in a cohesive way. The authors present SiMAMT, a framework for strategy-based multi-agent multi-team systems. The components of the framework, including strategy simulation, strategy inference, strategy evaluation, and strategy selection, are described. A formal specification of strategy and of strategy-based multi-agent multi-team systems is provided. An example and experimental results illustrate the proposed framework and its efficacy.

1. Introduction

When multi-agent scenarios move beyond singular, short-term goals and into the realm of multi-layered strategies, the complexity quickly scales beyond the practical. Much of the research in multi-agent systems revolves around single-goal systems in which multiple agents each work independently toward the same goal. This does not accurately model the real-world scenarios found in larger systems, where each independent agent has its own initiatives but still works with others to achieve team goals. These teams are also part of larger teams (e.g., units make up regiments, regiments make up battalions, etc.) that have their own large-scale goals. As the hierarchy builds, the strategy grows larger and extends further. The authors propose a framework to address this particular issue. The SiMAMT framework is designed to allow a hierarchical strategy structure in which each level enforces the policies appropriate to that level. Each sub-level of the hierarchy then issues orders at its own level while considering the orders filtered down from the levels above. In this manner, the entire structure incorporates a multi-level strategy without resorting to a single, monolithic policy (such large policies arise from applying small-scale solutions to much larger problems). When policies are allowed to grow in scale with the number of agents and the complexity of the system, they become too computationally complex to be applied, recognized, and changed in real time. Strategy-based systems instead aggregate the policies of individual agents into a larger team policy, which the authors refer to as a strategy. Each of these strategies can in turn be grouped into a larger strategy at the next level. The SiMAMT framework provides a system to set up, model, control, and analyze such multi-level strategies.
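To make the hierarchy concrete, the following is a minimal structural sketch of how individual policies can be aggregated into team strategies and nested level upon level, with higher-level orders filtered down. The Policy and Strategy classes, the issue method, and the military unit names are illustrative assumptions, not SiMAMT's actual API.

```python
# A minimal sketch of a hierarchical strategy structure; all names here
# are illustrative assumptions, not SiMAMT's published interface.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Action = str
Observation = Dict[str, float]

@dataclass
class Policy:
    """An individual agent's mapping from observations to actions."""
    name: str
    act: Callable[[Observation], Action]

@dataclass
class Strategy:
    """A team-level aggregation: member policies plus nested sub-strategies."""
    name: str
    policies: List[Policy] = field(default_factory=list)
    sub_strategies: List["Strategy"] = field(default_factory=list)
    directive: str = ""  # the order filtered down from the level above

    def issue(self, directive: str) -> None:
        # Pass higher-level orders downward; each level still acts at its own scope.
        self.directive = directive
        for sub in self.sub_strategies:
            sub.issue(directive)

# Units make up regiments, regiments make up battalions, as in the text.
unit = Strategy("unit", policies=[Policy("scout", lambda obs: "advance")])
regiment = Strategy("regiment", sub_strategies=[unit])
battalion = Strategy("battalion", sub_strategies=[regiment])
battalion.issue("hold the ridge")   # the directive propagates to every level
```

The key design point is that no level stores a monolithic joint policy; each node holds only its own members and the directive it received, so the representation grows with the hierarchy rather than with the full joint action space.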

SiMAMT seeks to model more complex multi-agent systems in which multiple teams are involved. Strategic interactions at this level are challenging and must be handled carefully; accounting for the actions of other agents is foundational to strategy.

The classic game theoretic question asked of any particular multi-agent encounter is: What is the best - most rational - thing an agent can do? In most multi-agent encounters, the overall outcome will depend critically on the choices made by all agents in the scenario. This implies that in order for an agent to make the choice that optimises (sic) its outcome, it must reason strategically. That is, it must take into account the decisions that other agents may make, and must assume that they will act so as to optimise (sic) their own outcome. Game theory gives us a way of formalising (sic) and analyzing such concerns. [Parsons, 2002]

…an agent...must (a) recognize that there are other agents, (b) compute some of the other agent’s possible plans, (c) compute how the other agent’s plans interact with its own plans, and (d) decide on the best action in view of these interactions. So [both competition and cooperation] require a model of the other agent’s plans. [Russell, 2003]
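The four steps in the quote above amount to computing a best response against hypothesized opponent plans. The sketch below is one simple, hedged instantiation, a worst-case (maximin) best response; the function name, signature, and payoff values are illustrative assumptions, not anything specified in the source.

```python
# Steps (a)-(d) from the Russell quote: enumerate other agents and their
# possible plans, score how those plans interact with ours, then pick the
# plan with the best worst-case outcome. Illustrative sketch only.
from itertools import product
from typing import Callable, Dict, List

Plan = str

def best_response(
    my_plans: List[Plan],
    others_plans: Dict[str, List[Plan]],               # (a) other agents, (b) their plans
    payoff: Callable[[Plan, Dict[str, Plan]], float],  # (c) plan interactions
) -> Plan:
    """(d) Choose the plan maximizing our worst-case payoff."""
    agents = list(others_plans)
    def worst_case(mine: Plan) -> float:
        combos = product(*(others_plans[a] for a in agents))
        return min(payoff(mine, dict(zip(agents, c))) for c in combos)
    return max(my_plans, key=worst_case)

# Usage with a hypothetical two-agent payoff table:
payoffs = {("attack", "defend"): 1.0, ("attack", "attack"): -1.0,
           ("hold", "defend"): 0.2, ("hold", "attack"): 0.5}
choice = best_response(
    ["attack", "hold"],
    {"opponent": ["defend", "attack"]},
    lambda mine, others: payoffs[(mine, others["opponent"])],
)
print(choice)  # "hold": its worst case (0.2) beats attack's (-1.0)
```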

It has been shown that strategies offer significant performance enhancement to artificially intelligent agents, that strategies can be recognized in real time when complexity is limited, and that AIs utilizing strategy inference can outperform initially superior opponents (Franklin, 2015). In contrast, classical machine learning requires repetitive trials and numerous iterations to begin to form a hypothesis about the intended actions of a given agent. Numerous methodologies have been employed in attempts to reduce the number of examples needed to form a meaningful hypothesis; the challenge arises from the diversity of possible scenarios in which the machine learning algorithm is placed. Given enough time and stability, a machine learning algorithm can learn reasonably well in a fixed environment, but this does not replicate the real world very accurately. Strategies, by contrast, offer an opportunity to encapsulate much of this policy space in a compact representation. They must be learned as well, but they are transferable to other instances of similar problems. Additionally, they can be pre-built and then modified to suit the exact situation.
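One common way to realize real-time strategy recognition over a pre-built library of compact strategies is a Bayesian belief update over candidate strategies, treating each strategy as a distribution over observable actions. The following is a hedged sketch of that idea; it is one plausible reading, not the published SiMAMT inference algorithm, and all names and probabilities are hypothetical.

```python
# Illustrative strategy inference: maintain a belief over a pre-built
# strategy library and update it as the opponent's actions are observed.
from typing import Dict, List

def infer_strategy(
    prior: Dict[str, float],                  # belief over candidate strategies
    likelihood: Dict[str, Dict[str, float]],  # P(action | strategy)
    observed_actions: List[str],
) -> Dict[str, float]:
    belief = dict(prior)
    for action in observed_actions:
        # Bayes rule: scale each hypothesis by how well it predicts the action.
        for s in belief:
            belief[s] *= likelihood[s].get(action, 1e-9)
        total = sum(belief.values())
        belief = {s: p / total for s, p in belief.items()}
    return belief

# Usage: observing "flank" twice shifts belief toward the aggressive strategy.
prior = {"aggressive": 0.5, "defensive": 0.5}
lik = {"aggressive": {"flank": 0.7, "hold": 0.3},
       "defensive": {"flank": 0.2, "hold": 0.8}}
print(infer_strategy(prior, lik, ["flank", "flank"]))
```

Because the library is small relative to the full policy space, each update is cheap, which is consistent with the claim above that strategies can be recognized in real time when complexity is limited.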
