Distributed Multi-Agent Systems for a Collective Construction Task based on Virtual Swarm Intelligence


Yan Meng (Stevens Institute of Technology, USA) and Yaochu Jin (University of Surrey, UK)
Copyright: © 2010 | Pages: 22
DOI: 10.4018/jsir.2010040104

Abstract

In this paper, a virtual swarm intelligence (VSI)-based algorithm is proposed to coordinate a distributed multi-robot system for a collective construction task. Three phases are involved in a construction task: search, detect, and carry. Initially, robots are randomly located within a bounded area and begin a random search for building blocks. Once building blocks are detected, agents need to share this information with their local neighbors. A distributed virtual pheromone-trail (DVP) based model is proposed for local communication among agents. If multiple building blocks are detected in a local area, agents need to decide which agent(s) should carry which block(s). To this end, a virtual particle swarm optimization (V-PSO)-based model is developed for multi-agent behavior coordination. Furthermore, a quorum sensing (QS)-based model is employed to balance the tradeoff between exploitation and exploration, so that an optimal overall performance can be achieved. Extensive simulation results on a collective construction task have demonstrated the efficiency and robustness of the proposed VSI-based framework.
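The V-PSO model itself is developed later in the paper; purely as background, the canonical particle swarm optimization update that such models build on can be sketched as follows. All names and parameter values here (inertia weight w, acceleration coefficients c1 and c2) are standard PSO conventions, not the paper's specific design.

```python
import random

def pso_minimize(f, n_particles=20, iters=100, lo=-10.0, hi=10.0,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize a 1-D function f with the canonical global-best PSO update."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]  # positions
    vs = [0.0] * n_particles                                # velocities
    pbest = xs[:]                       # each particle's best position so far
    gbest = min(pbest, key=f)           # best position seen by the whole swarm
    for _ in range(iters):
        for i in range(n_particles):
            # velocity: inertia + pull toward personal best + pull toward swarm best
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i]
    return gbest

# Toy usage: the swarm converges near the minimum of (x - 3)^2 at x = 3.
best = pso_minimize(lambda x: (x - 3.0) ** 2)
```

In a V-PSO-style coordination setting, the "fitness" would instead score candidate agent-to-block assignments, and gbest would be the best assignment known within an agent's local neighborhood rather than a global optimum.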
Article Preview

Introduction

One main challenge for multi-agent systems is to create intelligent agents that are able to adapt their behaviors to changing environments and to improve their skills in performing tasks over time. Such abilities are crucial for agents operating in dynamic environments, where unpredictable task scenarios may arise, or for agents required to perform the same action repeatedly over a large area, where they need to work together to achieve a global task.

Centralized control methods for multi-agent systems usually require maintaining a complete overview of the situation and a plan of action for every agent (Koenig et al., 2001), which is impractical. In contrast, distributed control methods are more attractive and feasible for these systems due to their robustness, flexibility, and adaptability. Distributed control methods have been proposed for a wide range of real-world applications, such as foraging (Krieger et al., 2000), box-pushing (Mataric et al., 1995), aggregation and segregation (Martinoli et al., 1999), shape formation (Balch & Arkin, 1998; Guo et al., 2009), cooperative mapping (Yamauchi, 1999), soccer tournaments (Weiger et al., 2002), collective cleaning (Meng & Gan, 2007; Wagner et al., 1999; Wagner et al., 2008), site preparation (Parker & Zhang, 2006), sorting (Holland & Melhuish, 1999), collective construction and assembly (Stewart & Russell, 2006; Werfel & Nagpal, 2006; Matthey et al., 2009), and cooperative searching, exploration, and coverage (Correll & Martinoli, 2007; Franchi et al., 2007; Hsiang et al., 2007; Mclurkin & Smith, 2004; Rutishauser et al., 2009). All these systems consist of multiple robots or embodied simulated agents acting autonomously based on their own individual decisions.

In this paper, we focus on developing a distributed control algorithm to coordinate a multi-agent system for a collective construction task. In this task, a few building blocks are randomly distributed in a closed area. Agents are required to search, detect, and carry these building blocks to a predefined base.

A number of challenges must be addressed to develop a distributed, self-adaptive multi-agent system for collective construction tasks. First, since blocks are randomly distributed within a large-scale environment, it is nontrivial to search for and detect the blocks efficiently. Second, since small, inexpensive agents have limited communication capability, each agent can only share information with its local neighbors and must make its decisions individually. It is thus challenging to control the communication costs in a large-scale system while maximizing information sharing among agents. Finally, it is not straightforward to achieve optimal global performance, i.e., to accomplish the construction task as quickly as possible, by means of local control of the individual agents.

To address the above-mentioned challenges, researchers have turned to biological systems (Pfeifer et al., 2007; Bonabeau et al., 1997; Camazine et al., 2001; Chialvo & Millonas, 1995). For example, swarm intelligence was inspired by the behaviors of social insects. Among other applications, swarm intelligence has been shown to be successful in controlling multi-agent systems (Doctor et al., 2004; Koenig et al., 2001; Payton et al., 2001; Wagner et al., 1999; Krieger et al., 2000; Kumar & Sahin, 2003; Meng & Gan, 2007; Meng & Gan, 2008; Pugh & Martinoli, 2007). However, these works mainly focus on mimicking the behaviors of biological systems by treating each agent in a multi-agent system as an artificial ant or an artificial particle, which usually involves extensive random movement of agents. Furthermore, in some of these works, agents either need to deposit artificial pheromone trails physically in the environment for information sharing (which is usually not feasible in real-world applications unless special sensors are developed), or rely on global information such as virtual pheromone gradients (which is not feasible for distributed systems). In collective construction tasks, efficient distributed local communication and dynamic task allocation mechanisms are critical to the success of the tasks.
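To make the virtual-pheromone idea concrete, the bookkeeping an agent-local scheme requires can be sketched as below. This is an illustrative toy, not the paper's DVP model: each agent keeps its own pheromone table, deposits on visited cells, evaporates everything at a fixed rate each step, and merges tables with neighbors in communication range by taking the stronger trail. The class and parameter names are assumptions for illustration only.

```python
class VirtualPheromoneMap:
    """Agent-local virtual pheromone table (illustrative sketch)."""

    def __init__(self, width, height, evaporation=0.05):
        self.grid = [[0.0] * width for _ in range(height)]
        self.evaporation = evaporation

    def deposit(self, x, y, amount=1.0):
        """Mark a cell, e.g. where a building block was detected."""
        self.grid[y][x] += amount

    def step(self):
        """Evaporate all pheromone by a fixed rate, so stale trails fade."""
        for row in self.grid:
            for x in range(len(row)):
                row[x] *= 1.0 - self.evaporation

    def merge_from(self, other):
        """Local information sharing: adopt a neighbor's stronger trails."""
        for y, row in enumerate(other.grid):
            for x, v in enumerate(row):
                if v > self.grid[y][x]:
                    self.grid[y][x] = v

# Toy usage: agent a detects a block, evaporates once, then a nearby
# agent b learns of the trail through a local merge.
a = VirtualPheromoneMap(4, 4)
b = VirtualPheromoneMap(4, 4)
a.deposit(1, 2)
a.step()          # 1.0 -> 0.95
b.merge_from(a)   # b now knows the (faded) trail at cell (1, 2)
```

Because merging only happens between agents within communication range, no agent ever needs a global gradient or a physical marker in the environment, which is the property the paragraph above identifies as essential for distributed operation.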
