Multi-Agent Applications with Evolutionary Computation and Biologically Inspired Technologies: Intelligent Techniques for Ubiquity and Optimization

Shu-Heng Chen (National Chengchi University, Taiwan), Yasushi Kambayashi (Nippon Institute of Technology, Japan) and Hiroshi Sato (National Defense Academy, Japan)
Indexed In: SCOPUS
Release Date: July, 2010|Copyright: © 2011 |Pages: 384
ISBN13: 9781605668987|ISBN10: 1605668982|EISBN13: 9781605668994|DOI: 10.4018/978-1-60566-898-7


Biologically inspired computation methods are growing in popularity in intelligent systems, creating a need for more research and information.

Multi-Agent Applications with Evolutionary Computation and Biologically Inspired Technologies: Intelligent Techniques for Ubiquity and Optimization compiles numerous ongoing projects and research efforts in the design of agents in light of recent development in neurocognitive science and quantum physics. This innovative collection provides readers with interdisciplinary applications of multi-agents systems, ranging from economics to engineering.

Topics Covered

The many academic areas covered in this publication include, but are not limited to:

  • Agent-based knowledge acquisition
  • Ant colony clustering method
  • Autonomous mobile sensors
  • Computations for financial applications
  • Coordinating collective behavior
  • Decentralized pervasive systems
  • Evolving artificial neural network
  • Exploitation oriented learning
  • Financial hybrid systems
  • Man-machine system
  • Neural-like assumptions
  • Simple artificial market

Reviews and Testimonials

This volume covers six subjects, which of course are not exhaustive but are sufficiently representative of the current important developments of MAS and, in the meantime, point to directions for the future. As computer science becomes more social and the social sciences become more computational, publications that can facilitate the dialogue between the two disciplines are in demand. This edited volume demonstrates our efforts to work this out.

– Shu-Heng Chen (National Chengchi University, Taiwan); Yasushi Kambayashi (Nippon Institute of Technology, Japan); Hiroshi Sato (National Defense Academy, Japan)



From a historical viewpoint, the development of multi-agent systems demonstrates how computer science has become more social, and how the social sciences have become more computational. With this cross-fertilization, an understanding of multi-agent systems that focuses only on computer science, or only on the social sciences, is necessarily partial. This book, with its 17 chapters, intends to give a balanced sketch of the research frontiers of multi-agent systems. We trace the origins of its central idea, a biologically inspired approach to multi-agent systems, to John von Neumann, and this volume continues his legacy.


A multi-agent system (MAS) is now an independent, but highly interdisciplinary, scientific subject. It offers scientists a new research paradigm for studying existing complex natural systems, for understanding their underlying mechanisms by simulating them, and for drawing the inspiration to design artificial systems that can solve highly complex (difficult) problems or create commercial value. From a historical viewpoint, the development of multi-agent systems itself demonstrates how computer science has become more social and, in the meantime, how the social sciences have become more computational. With this cross-fertilization, an understanding of multi-agent systems that focuses only on computer science, or only on the social sciences, is necessarily partial. A balanced view is therefore desirable and is the main pursuit of this edited volume. In this volume, we attempt to give a balanced sketch of the research frontiers of multi-agent systems, ranging from computer science to the social sciences.

While the MAS has many intellectual origins, the book “Theory of Self-Reproducing Automata” by John von Neumann (1903-1957) certainly underlies a significant part of the later development of MAS (von Neumann, 1966). In particular, it gave rise to a special class of MAS, called cellular automata, which motivated a number of pioneering applications of MAS to the social sciences in the early 1970s (Albin, 1975). In that book, von Neumann suggested that an appropriate principle for designing artificial automata can be productively inspired by the study of natural automata. Von Neumann himself spent a great deal of time on the comparative study of the nervous system or the brain (natural automata) and the digital computer (artificial automata). In “The Computer and the Brain”, von Neumann demonstrates the fruitful interaction between the study of natural automata and the design of artificial automata.

This biologically inspired principle has been further extended by Arthur Burks, John Holland, and many others. Following this legacy, this volume takes the biologically inspired approach to multi-agent systems as its focus. The difference is that we are now endowed with far more natural observations to draw inspiration from, ranging from evolutionary biology and neuroscience to ethology and entomology. The main purpose of this book is to ground the design of multi-agent systems in biologically inspired tools, such as evolutionary computation, artificial neural networks, reinforcement learning, swarm intelligence, stigmergic optimization, ant colony optimization, and ant colony clustering.

Given the two well-articulated goals above, this volume covers six subjects, which of course are not exhaustive but are sufficiently representative of the current important developments of MAS and, in the meantime, point to directions for the future. The six subjects are multi-agent financial decision systems (Chapters 1-2), neuro-inspired agents (Chapters 3-4), bio-inspired agent-based financial markets (Chapters 5-8), multi-agent robots (Chapters 9-10), multi-agent games and simulation (Chapters 11-12), and multi-agent learning (Chapters 13-15). Fifteen contributions to this volume are grouped by these subjects into six parts. In addition to these six parts, a “miscellaneous” part includes two further contributions, each of which addresses an important dimension of the development of MAS. In the following, we give a brief introduction to each of these subjects.


We start with multi-agent financial systems. The idea of using multi-agent systems to process information has a long tradition in economics, even though in the early days the term MAS did not exist. In this regard, Hayek (1945) is an influential work. Hayek considered the market and the associated price mechanism as a way of pooling, or aggregating, the market participants’ limited knowledge of the economy. While the information owned by each market participant is imperfect, pooling it can generate prices that support an efficient allocation of resources. This assertion was later coined the Hayek Hypothesis by Vernon Smith (Smith, 1982) in his double auction market experiments. The intensive study of the Hayek Hypothesis in experimental economics has further motivated and strengthened the idea of prediction markets. A prediction market essentially creates an artificial market environment in which the forecasts of crowds can be pooled so as to generate better forecasts. Predicting election outcomes via what are known as political futures markets has become one of the most prominent applications.

On the other hand, econometricians tend to pool the forecasts made by different forecasting models so as to improve forecasting performance. In the literature, this is known as combined forecasts (Clement, 1989). Like prediction markets, combined forecasts tend to enhance forecast accuracy. The difference between the two is that agents in prediction markets are heterogeneous in both data (the information acquired) and models (the way information is processed), whereas agents in combined forecasts are heterogeneous in models only. Hybrid systems in machine learning or artificial intelligence can be regarded as a further extension of combined forecasts; see, for example, Kooths, Mitze, and Ringhut (2004). The difference lies in the way they integrate the intelligence of the crowd. Integration in a combined forecast is much simpler, most of the time consisting of just a weighted combination of the forecasts made by different agents. This type of integration can function well because the market price, under certain circumstances, is just such a simple linear combination of a pool of forecasts, a property demonstrated by recent agent-based financial markets. A hybrid system, by contrast, is more sophisticated in its integration: it involves not just the horizontal combination of the pool but also its vertical integration. In this way, heterogeneous agents do not simply behave independently, but work together as a team (Mumford and Jain, 2009).
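To make the contrast concrete, the horizontal integration of a combined forecast is just a weighted linear pool, which a few lines of code can sketch. The model names, forecast values, and weights below are purely illustrative, not taken from any chapter in this volume:

```python
# Combined forecast as a weighted linear pool of heterogeneous models.
# All numbers here are hypothetical, for illustration only.
forecasts = {"neural_net": 102.5, "genetic_prog": 98.0, "regression": 100.3}

# Weights might, e.g., reflect each model's past accuracy; these values are made up.
weights = {"neural_net": 0.5, "genetic_prog": 0.2, "regression": 0.3}

combined = sum(weights[m] * forecasts[m] for m in forecasts)
print(round(combined, 2))  # 100.94
```

A hybrid system would go beyond this single weighted sum by feeding the pooled output back into further stages of model building, i.e., vertical integration.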

Chapter One “A Multi-Agent System Forecast of the S&P/Case-Shiller LA Home Price Index” authored by Mak Kaboudan provides an illustration of hybrid systems. He builds an agent-based forecasting system for real estate. The system is composed of three types of agents, namely, artificial neural networks, genetic programming, and linear regression. The system “aggregates” the dispersed forecasts of these agents through a cyclic competition-cooperation phase. In the competition phase, the best individual forecasting models are chosen from each type of agent. In the cooperation phase, hybrid systems (reconciliatory models) are constructed by combining artificial neural networks with genetic programming, or artificial neural networks with regression models, based on the solutions of the first phase. Finally, individual models and reconciliatory models compete once more.

Chapter Two “An Agent-based Model for Portfolio Optimization Using Search Space Splitting” authored by Yukiko Orito, Yasushi Kambayashi, Yasuhiro Tsujimura and Hisashi Yamamoto proposes a novel version of genetic algorithms to solve the portfolio optimization problem. Genetic algorithms are population-based search algorithms; hence, they can naturally be considered an agent-based approach if we treat each individual in the population as an agent. In Orito et al.’s case, each agent is an investor with a portfolio over a set of assets. However, the authors do not use the standard single-population genetic algorithm to drive the evolutionary dynamics of the portfolios. Instead, the whole society is divided into many sub-populations (clusters of investors), each of which has a leader. The interactions of agents are determined by their behavioral characteristics, such as being leaders, obedient followers, or disobedient followers. These clusters and behavioral characteristics can change constantly during the evolution: new leaders with new clusters may emerge to replace the existing ones. Like the previous chapter, this chapter shows that the wisdom of crowds emerges from complex social dynamics rather than from a static weighted combination.


Our brain is itself a multi-agent system; it is therefore natural to study the brain as one (de Garis, 2008). In this direction, MAS is applied to neuroscience. The other direction also exists, however. One recent development in multi-agent systems is to make software agents more human-like. Various human factors, such as cognitive capacity, intelligence, personality attributes, emotion, and cultural differences, have become new working dimensions for software agents. Since these human factors and their neural correlates are now intensively studied in neuroscience, it is not surprising that the design of autonomous agents, under this influence, will be grounded more deeply in neuroscience. Hence, the progress of neuroscience can impact the design of autonomous agents in MAS. The next two chapters are written to feature this future.

Chapter Three “Neuroeconomics: A Viewpoint from Agent-Based Computational Economics” by Shu-Heng Chen and Shu G. Wang reviews how recent progress in neuroeconomics may shed light on different components of autonomous agents, including preference formation, the valuation of alternatives, choice making, risk perception and risk preferences, choice making under risk, and learning. The last part of their review covers the well-known dual-system conjecture, which is now the centerpiece of neuroeconomic theory.

Chapter Four “Agents in Quantum and Neural Uncertainty” authored by Germano Resconi and Boris Kovalerchuk raises a very fundamental issue: does our brain fuzzify the received signals, even when they are presented in a crisp way? The authors then inquire into the nature of uncertainty and propose a neural-theoretic notion of it. A two-layered neural network is proposed that can transform crisp signals into multi-valued (fuzzy) outputs. In this way, the source of fuzziness is the conflicting evaluations of the same inputs made by different neurons, somewhat like Minsky’s society of mind (Minsky, 1998). Using various brain-imaging technologies, neuroscience has already explored the neural correlates of subjects being presented with vague, incomplete, and inconsistent information. This mounting evidence may put modal logic under close examination and motivate us to consider alternatives, such as dynamic logic.


The third subject of this volume is bio-inspired agent-based artificial markets. The market is another natural demonstration of a multi-agent system. In fact, over the last decade, the market mechanism has inspired the design of MAS, in what are known as market-based algorithms. To some extent, it has also revolutionized the research paradigm of artificial intelligence by motivating distributed AI. In the reverse direction, MAS provides economists with a powerful tool to explore and to test the market mechanism. This research helps them to learn when markets may fail and hence how to design markets. Nevertheless, the function of markets is not just about institutional design (the so-called structuralism); a significant number of studies of artificial markets have found that institutional design is not behavior-free or culture-free. This behavioral and cultural awareness has now also become a research direction in experimental economics and agent-based computational economics.

The four chapters contributing to this part all adopt a behavioral approach to the study of artificial markets. Chapter 5 “Bounded Rationality and Market Micro-Behaviors: Case Studies Based on Agent-Based Double Auction Markets” authored by Shu-Heng Chen, Ren-Jie Zeng, Tina Yu and Shu G. Wang can be read as an example of the recent attempt to model agents with different cognitive capacities. Human agents are clearly heterogeneous in cognitive capacity (intelligence), and the effect of this heterogeneity on their economic and social status has been documented in many recent studies ranging from psychology and sociology to economics; nevertheless, conventional agent-based models have paid little attention to this development, and in most cases agents are explicitly or implicitly assumed to be equally smart. By using genetic programming parameterized with different population sizes, this chapter provides a pioneering study of the effect of cognitive capacity on the discovery of trading strategies. It is found that larger cognitive capacity can contribute to the discovery of more complex but more profitable strategies. It is also found that different cognitive capacities may coordinate on different matches of players’ strategies in a co-evolutionary fashion, even though these matches are not necessarily Nash equilibria.

Chapter 6 “Social Simulation with both Human Agents and Software Agents: An Investigation into the Impact of Cognitive Capacity on Their Learning Behavior” authored by Shu-Heng Chen, Chung-Ching Tai, Tzai-Der Wang and Shu G. Wang can be considered a continuation of the cognitive agent-based models. What differs from the previous chapter is that this one considers not only software agents with different cognitive capacities, manipulated in the same way as before, but also human agents with different working memory capacities. A test borrowed from psychology is employed to measure the working memory capacity of the human subjects. By placing software agents and human agents separately in a similar environment (double auction markets, in this case) to play against the same group of opponents (Santa Fe program agents), the authors are able to examine whether the economic significance of intelligence observed in human agents is comparable to that observed in software agents, and hence to evaluate how well the artificial cognitive capacity mimics the human one.

Chapter 7 “Evolution of Agents in a Simple Artificial Market” authored by Hiroshi Sato, Masao Kubo and Akira Namatame is devoted to the growing literature on agent-based artificial stock markets. As Chen, Chang and Du (2010) have surveyed, from the viewpoint of agent engineering there are two major classes of agent-based artificial stock markets: the H-type agent-based financial models, and the Santa-Fe-like agent-based financial models. In the former, the agents’ behavioral rules are known and, to some extent, fixed and simple; in the latter, the agents are basically autonomous and their behavior can, in general, be quite complex. This chapter belongs to the former and considers two types of agents: rational investors and imitators. It uses the standard stochastic utility function as the basis for deriving the Gibbs-Boltzmann distribution as the agents’ learning mechanism, and shows the evolving microstructure (the fraction of each of the two types of agents) and its connection to the complex dynamics of financial markets.
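The Gibbs-Boltzmann (softmax) choice rule that such a stochastic utility framework typically yields can be sketched in a few lines. The utilities and the inverse-temperature parameter below are hypothetical, not values taken from the chapter:

```python
import math

def gibbs_boltzmann(utilities, beta=1.0):
    """Map utilities to choice probabilities via the Gibbs-Boltzmann (softmax) rule.

    beta is the inverse temperature: the larger beta is, the more
    deterministically the agent picks the highest-utility option.
    """
    weights = [math.exp(beta * u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical utilities for two behavioral types (e.g., rational investor vs. imitator).
probs = gibbs_boltzmann([1.0, 0.5], beta=2.0)
```

The fraction of each agent type then evolves as agents resample their type according to such probabilities, which is how a simple microstructure can generate complex market dynamics.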

Chapter 8 “Agent-Based Modeling Bridges Theory of Behavioral Finance and Financial Markets” authored by Hiroshi Takahashi and Takao Terano is another contribution to agent-based artificial stock markets. It shares some similarities with the previous chapter; mainly, both belong to the H-type agent-based financial markets categorized in Chen, Chang and Du (2010). However, this chapter distinguishes itself by incorporating ingredients of behavioral finance into agent-based financial models, a research trend perceived in Chen and Liao (2004). Specifically, it considers passive and active investors, overconfident investors, and prospect-theory-based investors (Kahneman-Tversky investors). Within this framework, the authors address two frequently raised issues in the literature. The first pertains to survival analysis: among the different types of agents, who can survive, and under what circumstances? The second pertains to how well the market price tracks the fundamental price: how far, and for how long, can the market price deviate from the fundamental price? Their results, and many others in the literature, seem to indicate that the inclusion of behavioral factors can cause the market price to deviate quite strongly and persistently from the fundamental price, and that this persistent deviation can exist even after agents are allowed to learn.


Part IV comes to one of the most prominent applications of multi-agent systems, namely multi-agent robotics. RoboCup (robotic soccer), initiated in 1997, provides one of the exemplary cases (Kitano, 1998). Here, one has to build a team of agents that can play a soccer game against a team of robotic opponents. The motivation of RoboCup is that playing soccer successfully demands a range of different skills, such as real-time dynamic coordination over limited communication bandwidth. Obviously, a formidable task in this research area is how to coordinate these autonomous agents (robots) coherently so that a common goal can be achieved. This requires each autonomous robot to follow a set of behavioral rules such that, when the robots are placed in a distributed interacting environment, the individual operation of these rules collectively generates a desirable pattern. This issue is so basic that it was already present at the very beginning of MAS, in the form of pattern formation in cellular automata. Simple cellular automata are homogeneous in the sense that all automata follow the same set of rules, and there is a mapping from these sets of rules to the emergent patterns. Wolfram (2002) has worked this out in quite some detail.
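A homogeneous cellular automaton of the kind Wolfram studied fits in a few lines of code; the rule number and the single-seed initial condition below are chosen only for illustration:

```python
def step(cells, rule=110):
    """One synchronous update of an elementary (1-D, binary) cellular automaton.

    Every cell applies the same rule to its three-cell neighborhood
    (periodic boundary), so any emergent pattern is collective, not local.
    """
    n = len(cells)
    out = []
    for i in range(n):
        # Encode the neighborhood (left, center, right) as a 3-bit rule index.
        idx = 4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

# A single live cell; iterating `step` unfolds the emergent pattern row by row.
cells = [0, 0, 0, 1, 0, 0, 0]
cells = step(cells)  # -> [0, 0, 1, 1, 0, 0, 0]
```

The mapping here runs from rule to pattern; the coordination problems of multi-robot systems must, in effect, run it in the other direction.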

Multi-robot systems can be considered an extension of simple cellular automata. The issue pursued here is an inverse problem: instead of asking what pattern emerges given a set of rules, we now ask what set of rules is required to generate a certain kind of pattern. This is the coordination problem, not only for multi-agent robots but also for other kinds of MAS. Given the complex structure of this problem, it is not surprising that evolutionary computation has been applied to tackle it. In this part, we shall see two such studies.

Chapter 9 “Autonomous Specialization in a Multi-Robot System using Evolving Neural Networks” authored by Masanori Goka and Kazuhiro Ohkura gives a concrete coordination problem for robots: ten autonomous mobile robots have to push three packages to a goal line. Each autonomous robot is controlled by a continuous-time recurrent artificial neural network. Their coordination is solved using evolution strategies and genetic algorithms. In the former case, the network structure is fixed and only the connection weights evolve; in the latter, the network structure evolves along with the connection weights. It is shown that in the latter case, in the later stage, the team of robots develops a kind of autonomous specialization, dividing the entire team into three sub-teams that take care of the three packages separately.

Chapter 10 “A Multi-Robot System Using Mobile Agents with Ant Colony Clustering” authored by Yasushi Kambayashi, Yasuhiro Tsujimura, Hidemi Yamachi, and Munehiro Takimoto presents another coordination problem for multi-robot systems. In their case, the robots are the luggage carts used in airports. These carts are picked up by travelers at designated points and left in arbitrary places; they must then be collected manually one by one, which is very laborious. An intelligent design is therefore concerned with how these carts can gather themselves at designated points, and how these gathering places are determined. The authors apply the idea of mobile agents to this problem. Mobile agents are programs that can transmit themselves across an electronic network and recommence execution at a remote site (Cockayne and Zyda, 1998). In this chapter, mobile agents are employed as the medium between the host computer (a simulating agent) and the scattered carts, via RFID (Radio Frequency Identification) devices. A mobile software agent first collects information on the initial distribution of the luggage carts and sends it back to the host computer, which then uses ant colony clustering, an idea motivated by the corpse-gathering and brood-sorting behavior of ants, to figure out the places to which the carts should return. The designated place for each cart is then transmitted back to it, again via the mobile software agent.

The two chapters in this part stand in interesting contrast. The former involves the physical movement of robots during the coordination process, whereas the latter involves no physical movement until the coordination problem has been solved via simulation. In addition, the former uses a bottom-up (decentralized) approach to the coordination problem, whereas the latter uses a top-down (centralized) approach, even though the ant colony clustering it employs is itself decentralized in nature. It has been argued that distributed systems can coordinate themselves well, as, for example, in the well-known El Farol problem (Arthur, 1994). In intelligent transportation systems, it has also been proposed that software drivers be designed that can learn and can assist human drivers in avoiding congested routes, provided that these software drivers can be properly coordinated first (Sasaki, Flann and Box, 2005). Certainly, this development may continue and, after reading these two chapters, readers may be motivated to explore more on their own.


The analysis of social group dynamics through gaming, for the sharing of understanding, for problem solving, and for education, can be closely tied to MAS. This idea has recently been demonstrated in Arai, Deguchi and Matsui (2006). In this volume, we include two chapters contributing to gaming simulation.

Chapter 11 “Agile Design of Reality Games Online” authored by Robert Reynolds, John O’Shea, Farshad Fotouhi, James Fogarty, Kevin Vitale and Guy Meadows is a contribution to the design of online games. The authors introduce agile programming as an alternative to the conventional waterfall model. In the waterfall model, software development goes through a sequential process that demands that every phase of the project be completed before the next phase can begin. Yet very little communication occurs during the hand-offs between the specialized groups responsible for each phase of development. Hence, when a waterfall project wraps up, this heads-down style of programming may produce a product that is not actually what the customer wants. Agile programming is then proposed as an alternative that helps software development teams react to the instability of building software through an incremental and iterative work cycle, which is detailed in this chapter. The chapter then shows how this work cycle has been applied to develop an agent-based hunter-deer-wolf game. Here the agents are individual hunters, deer, and wolves. Each individual can act on its own, but each also belongs to a group, a herd, or a pack, so that it may learn socially. Social intelligence (swarm intelligence) can therefore be placed into this game; for example, agents can learn via cultural algorithms (Reynolds, 1994, 1999). The results of these games are provided to a group of archaeologists as an inspiration for their search for evidence of ancient human activity.

Chapter 12 “Management of Distributed Energy Resources Using Intelligent Multi-Agent System” authored by T. Logenthiran and Dipti Srinivasan is a contribution to the young but rapidly growing literature on the agent-based modeling of electric power markets (Weidlich, 2008). The emergence of this research area is closely related to the recent trend of deregulating electricity markets, which may introduce competition to each constituent of the originally vertically integrated industry, from generation and transmission to distribution. Hence, not surprisingly, multi-agent systems have been applied to model the competitive ecology of this industry. Unlike the chapters in Part III, this chapter is not directly concerned with the competitive behavior of buyers and sellers in electricity markets. Instead, it provides an in-depth description of the development of simulation software for electricity markets. It clearly specifies each agent of the electricity market, in addition to the power-generating companies and consumers. The authors then show how this knowledge can be functionally integrated into simulation software using a multi-agent platform such as JADE (Java Agent DEvelopment Framework).


The sixth subject of the book is learning in the context of MAS. Since the publication of Bush and Mosteller (1955) and Luce (1959), reinforcement learning is no longer just a subject of psychology, but has proved to be important for many other disciplines, such as economics and game theory. Since the seminal work of Samuel (1959) on the checkers-playing program, the application of reinforcement learning to games is already 50 years old. The recent influential work by Sutton and Barto (1998) has pushed these ideas further, so that they are now widely used in artificial intelligence and control theory. The advancement of various brain-imaging technologies, such as fMRI and positron emission tomography, has enabled us to see how our brain has the built-in mechanism required for the implementation of reinforcement learning. The description of reinforcement learning systems actually matches the behavior of specific neural systems in the mammalian brain, one of the most important being the dopamine system and the role it plays in learning about rewards and directing the choices that lead us to them (Dow, 2003; Montague, 2007).
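The core of reinforcement learning, updating value estimates from rewards, can be sketched with a single tabular Q-learning step. The two-state toy problem, its actions, the reward, and the learning parameters below are all hypothetical:

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference step: move Q(s, a) toward r + gamma * max over a' of Q(s_next, a')."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# A hypothetical two-state problem with actions "stay" and "move".
Q = {s: {"stay": 0.0, "move": 0.0} for s in ("A", "B")}
q_learning_update(Q, "A", "move", r=1.0, s_next="B")  # Q["A"]["move"] becomes 0.1
```

Every such update hinges on the reward signal r, which is why the design of the reward function is so central to the chapters in this part.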

However, as with other multi-disciplinary developments, challenging issues also exist in reinforcement learning. A long-standing fundamental issue is the design, or determination, of the reward function, i.e., reward as a function of state and action. Chapter 13 “Effects of Shaping a Reward on Multiagent Reinforcement Learning” by Sachiyo Arai and Nobuyuki Tanaka addresses two situations that can make the reward function exceedingly difficult to design. In the first, the task is continuously ongoing and it is not clear when to reward. The second involves the learning of team members as a whole instead of individually: to make the team achieve its common goal, it may not be desirable to distribute rewards evenly among team members, but the situation can be worse if the rewards are not properly attributed to the few deserving individuals. Arai and Tanaka address these two issues in the context of keepaway soccer, in which a team tries to maintain ball possession by avoiding the opponent’s interceptions.

Trust has long been a central issue in multi-agent systems, because in many situations agents have to decide with whom to interact and what strategies to use. By all means, they want to be able to manage the risk of interacting with malicious agents; hence, evaluating the trustworthiness of “strangers” becomes crucial. People in daily life are willing to invest in information to deal with this uncertainty, and various social systems, such as rating agencies and social networks, have been constructed to facilitate the acquisition of agents’ reputations. Chapter 14 “Swarm Intelligence Based Reputation Model for Open Multiagent Systems” by Saba Mahmood, Assam Asar, Hiroki Suguri and Hafiz Ahmad deals with the dissemination of agents’ updated reputations. After reviewing the existing reputation models (both centralized and decentralized), the authors propose their own construction using ant colony optimization.

Chapter 15 “Exploitation-oriented Learning XoL—A New Approach to Machine Learning Based on Trial-and-Error Searches” by Kazuteru Miyazaki is also a contribution to reinforcement learning. As we said earlier, a fundamental challenge for reinforcement learning is the design of the reward function. In this chapter, Miyazaki proposes a novel version of reinforcement learning based on many of his earlier works on the rationality theorem of profit sharing. This new version, called XoL, differs from the usual one in that reward signals only require an order of importance among the actions, which facilitates reward design. In addition, XoL is a Bellman-free method, since it can work on classes of problems beyond Markov decision processes. XoL can also learn fast because it traces successful experiences very strongly. While the resultant solution can be biased, a cure is available through the multi-start method proposed by the author.


The last part of the book is titled “Miscellaneous”. There are two chapters in this part. While both could be related to and re-classified into some of the previous parts, we prefer to let them “stand out” here so as not to blur their unique coverage. The first (Chapter 16) is related to multi-agent robotics and also to multi-agent learning, but it is the only chapter devoted to simulating the behavior of insects, namely, the ant war. It is an application of MAS to entomology, or computational entomology: a biologically inspired approach fed back into the study of biology. The last chapter of the book (Chapter 17) is devoted to cellular automata, an idea widely shared in many other chapters of the book, but it is the only chapter that deals exclusively with this subject through an in-depth review. As we have mentioned earlier, one of the origins of multi-agent systems is von Neumann’s cellular automata. It is indeed fitting that the book closes with the most recent developments on this subject.

Chapter 16, “Pheromone-style Communication for Swarm Intelligence” by Hidenori Kawamura and Keiji Suzuki, simulates two teams of ants competing for food. What concerns the authors is how ants communicate effectively with their teammates so that the food collected can be maximized. In a sense, this chapter is similar to the coordination problems observed in RoboCup. The difference is that insects like ants or termites are cognitively even more limited than the robots in RoboCup. Their decisions and actions are rather random, requiring no memory, no prior knowledge, and no explicit learning. Individually speaking, they are comparable to what economists know as zero-intelligence agents (Gode and Sunder, 1993). Yet entomologists have found that they can communicate well. The communication, however, is not necessarily direct but rather indirect, partially due to their poor visibility. Their reliance on indirect communication was noticed by the French biologist Pierre-Paul Grasse (1895-1985), who termed this style of communication or interaction stigmergy (Grosan and Abraham, 2006).

He defined stigmergy as the “stimulation of workers by the performance they have achieved.” Stigmergy is a method of communication in which individuals communicate with one another by modifying their local environment. The price mechanism familiar to economists is an example of stigmergy: it does not require market participants to interact directly, only indirectly via price signals. In this case the environment is characterized by the price, which is constantly changed by market participants and hence constantly invites others to take further actions.
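The price-mechanism analogy can be made concrete with a minimal tâtonnement-style sketch (the demand and supply functions are hypothetical linear examples): traders never meet; each only reads, and nudges, the shared price.

```python
# Minimal sketch of price adjustment as stigmergy: agents interact only
# through the shared price signal, never directly.

def demand(p):
    return 10.0 - p          # hypothetical linear demand

def supply(p):
    return 2.0 * p - 2.0     # hypothetical linear supply

price = 1.0
for _ in range(100):
    excess = demand(price) - supply(price)
    price += 0.1 * excess    # each round, the shared environment (price) is modified

# The price settles where demand meets supply (here p = 4).
```

The price plays exactly the role of the pheromone trail: a piece of the environment that every agent modifies and every agent reads.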

In this chapter, Kawamura and Suzuki use genetic algorithms to simulate the co-evolution processes of the emergent stigmergic communication among ants.  While this study is specifically placed in a context of an ant war, it should not be hard to see its potential in a more general context, such as the evolution of language, norms and culture.

In Chapter 17, “Evolutionary Search for Cellular Automata with Self-Organizing Properties toward Controlling Decentralized Pervasive Systems”, Yusuke Iwase, Reiji Suzuki and Takaya Arita bring us back to where we began in this introductory chapter, namely, cellular automata. As we have noticed, from a design perspective the fundamental issue is an inverse problem, i.e., to find rules of automata by which our desired patterns can emerge. This chapter deals with essentially this issue, but in an environment different from that of conventional cellular automata. Cellular automata are normally run in a closed system. In this chapter, the authors consider an interesting extension by exposing them to an open environment, or pervasive system, in which each automaton probabilistically receives external perturbations. These perturbations change the operating rules of the affected cells, which in turn may have global effects. Having anticipated these properties, the authors then use genetic algorithms to search for rules that best cope with these perturbations while achieving a given task.
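The setting can be caricatured in a few lines (the task, sizes and rates below are our assumptions, not the chapter’s): an elementary cellular automaton whose cell outputs are probabilistically perturbed, and a small genetic algorithm searching for an 8-bit rule table that still drives the row toward a target pattern, here all ones.

```python
import random

# Caricature of evolutionary rule search under perturbation (sizes,
# rates and the all-ones target task are hypothetical assumptions).

WIDTH, STEPS, PERTURB = 16, 8, 0.05

def step(cells, rule, rng):
    out = []
    for i in range(len(cells)):
        idx = 4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % len(cells)]
        bit = rule[idx]
        if rng.random() < PERTURB:  # external perturbation flips the output
            bit ^= 1
        out.append(bit)
    return out

def fitness(rule, rng, trials=10):
    score = 0
    for _ in range(trials):
        cells = [rng.randint(0, 1) for _ in range(WIDTH)]
        for _ in range(STEPS):
            cells = step(cells, rule, rng)
        score += sum(cells)  # reward final rows close to all ones
    return score

def evolve(generations=30, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(8)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: fitness(r, rng), reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(8)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(8)] ^= 1    # point mutation
            children.append(child)
        pop = parents + children
    return pop[0]

best = evolve()
```

Because fitness is evaluated under perturbation, the search favors rule tables that are robust to the noise rather than merely optimal in a closed system, which is the essence of the chapter’s extension.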

The issues and simulations presented in this chapter can have applications to social dynamics. For example, citizens interact with each other in a relatively closed system, but each citizen may travel abroad once in a while. When they return, their behavioral rules may have changed through cultural exchange; hence they will have an effect on their neighbors that may even have a global impact on the social dynamics. In the same vein, the other city, which hosts these visitors, may experience similar changes. In this way, the two systems (cities) are coupled together. People in cultural studies may be inspired by the simulation presented in this chapter.


When computer science becomes more social and the social sciences become more computational, publications that facilitate dialogue between the two disciplines are in demand. This edited volume represents our effort toward that end. It is our hope that more books and edited volumes will emerge as joint efforts between computer scientists and social scientists, and that, eventually, computer science will help social scientists piece together their “fragmented” social sciences, while the social sciences constantly provide computer scientists with fresh inspiration in defining and forming new and creative research paradigms. The dialogue between artificial automata and natural automata will then continue and thrive.

Shu-Heng Chen, AI-ECON Research Center, Department of Economics, National Chengchi University, Taipei, Taiwan

Yasushi Kambayashi, Department of Computer and Information Engineering, Nippon Institute of Technology, Saitama, Japan

Hiroshi Sato, Department of Computer Science, National Defense Academy, Yokosuka, Japan


Albin, P. (1975). The Analysis of Complex Socioeconomic Systems. Lexington, MA: Lexington Books.

Arai, K., Deguchi, H., & Matsui, H. (2006). Agent-Based Modeling Meets Gaming Simulation. Springer.

Arthur, B. (1994). Inductive reasoning and bounded rationality. American Economic Review, 84(2), 406–411.

Bush, R.R., & Mosteller, F. (1955). Stochastic Models for Learning. New York: John Wiley & Sons.

Hayek, F. (1945). The use of knowledge in society. American Economic Review, 35(4), 519-530.

Chen, S.-H., & Liao, C.-C. (2004). Behavioral finance and agent-based computational finance: Toward an integrating framework. Journal of Management and Economics, 8.

Chen, S.-H., Chang, C.-L., & Du, Y.-R. (in press). Agent-based economic models and econometrics. Knowledge Engineering Review.

Clement, R. (1989). Combining forecasts: A review and annotated bibliography. International Journal of Forecasting, 5, 559-583.

Cockayne, W., & Zyda, M. (1998). Mobile Agents. Prentice Hall.

De Garis, H. (2008). Artificial brains: An evolved neural net module approach. In J. Fulcher & L. Jain (Eds.), Computational Intelligence: A Compendium. Springer.

Daw, N. (2003). Reinforcement Learning Models of the Dopamine System and Their Behavioral Implications. Doctoral dissertation, Carnegie Mellon University.

Grosan, C., & Abraham, A. (2006). Stigmergic optimization: Inspiration, technologies and perspectives. In A. Abraham, C. Grosan, & V. Ramos (Eds.), Stigmergic Optimization (pp. 1-24). Springer.

Gode, D., & Sunder, S. (1993). Allocative efficiency of markets with zero intelligence traders: Market as a partial substitute for individual rationality. Journal of Political Economy, 101, 119-137.

Kitano, H. (Ed.). (1998). RoboCup-97: Robot Soccer World Cup I. Springer.

Kooths, S., Mitze, T., & Ringhut, E. (2004). Forecasting the EMU inflation rate: Linear econometric versus non-linear computational models using genetic neural fuzzy systems. Advances in Econometrics, 19, 145-173.

Luce, R. D. (1959). Individual Choice Behavior: A Theoretical Analysis. Wiley.

Minsky, M. (1988). The Society of Mind. Simon & Schuster.

Montague, R. (2007). Your Brain Is (Almost) Perfect: How We Make Decisions. Plume.

Mumford, C., & Jain, L. (2009). Computational Intelligence: Collaboration, Fusion and Emergence. Springer.

Reynolds, R. (1994). An introduction to cultural algorithms. In Proceedings of the 3rd Annual Conference on Evolutionary Programming (pp. 131-139). World Scientific Publishing.

Reynolds, R. (1999). An overview of cultural algorithms. In D. Corne, F. Glover, & M. Dorigo (Eds.), New Ideas in Optimization (pp. 367-378). McGraw-Hill.

Samuel, A. (1959). Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3(3), 210-229.

Sasaki, Y., Flann, N., & Box, P. (2005). The multi-agent games by reinforcement learning applied to on-line optimization of traffic policy. In S.-H. Chen, L. Jain, & C.-C. Tai (Eds.), Computational Economics: A Perspective from Computational Intelligence (pp. 161-176). Idea Group Publishing.

Smith, V. (1982). Markets as economizers of information: Experimental examination of the “Hayek Hypothesis”. Economic Inquiry, 20(2), 165-179.

Sutton, R.S., & Barto, A.G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.

von Neumann, J. (1958). The Computer and the Brain. Yale University Press.

von Neumann, J. (1966). Theory of Self-Reproducing Automata (edited and completed by A. Burks). University of Illinois Press.

Weidlich, A. (2008).  Engineering Interrelated Electricity Markets: An Agent-Based Computational Approach. Physica-Verlag.

Wolfram, S. (2002). A New Kind of Science. Wolfram Media.

Author(s)/Editor(s) Biography

Shu-Heng Chen is a professor in the Department of Economics and Director of the Center of International Education and Exchange at National Chengchi University. He also serves as the Director of the AI-ECON Research Center, National Chengchi University, the editor-in-chief of the Journal of New Mathematics and Natural Computation (World Scientific), an associate editor of the Journal of Economic Behavior and Organization, and an editor of the Journal of Economic Interaction and Coordination. Dr. Chen holds an M.A. in mathematics and a Ph.D. in economics from the University of California at Los Angeles. He has more than 150 publications in international journals, edited volumes and conference proceedings, and has been invited to give keynote speeches and plenary talks at many international conferences. He is the editor of the volumes "Evolutionary Computation in Economics and Finance" (Physica-Verlag, 2002) and "Genetic Algorithms and Genetic Programming in Computational Finance" (Kluwer, 2002), and the co-editor of Volumes I & II of "Computational Intelligence in Economics and Finance" (Springer-Verlag, 2002 & 2007), "Multi-Agent for Mass User Support" (Springer-Verlag, 2004), "Computational Economics: A Perspective from Computational Intelligence" (Idea Group, 2005), and "Simulated Evolution and Learning," Lecture Notes in Computer Science (LNCS 4247) (Springer, 2006), as well as the guest editor of a special issue on genetic programming of the International Journal on Knowledge-Based Intelligent Engineering Systems (2008). His research interests are mainly in the applications of computational intelligence to agent-based computational economics and finance as well as experimental economics.
Yasushi Kambayashi is an associate professor in the Department of Computer and Information Engineering at the Nippon Institute of Technology. His research interests include theory of computation, theory and practice of programming languages, and political science. He received his PhD in Engineering from the University of Toledo, his MS in Computer Science from the University of Washington, and his BA in Law from Keio University.
Hiroshi Sato is an assistant professor in the Department of Computer Science at the National Defense Academy in Japan. He was previously a research associate in the Department of Mathematics and Information Sciences at Osaka Prefecture University in Japan. He holds a degree in physics from Keio University, and Master's and Doctoral degrees in engineering from the Tokyo Institute of Technology. His research interests include agent-based simulation, evolutionary computation, and artificial intelligence. He is a member of the Japanese Society for Artificial Intelligence and the Society for Economic Science with Heterogeneous Interacting Agents.