Swarm Intelligence Based Reputation Model for Open Multi agent Systems

Saba Mahmood (School of Electrical Engineering and Computer Science (NUST-SEECS), Pakistan), Azzam ul Asar (Department of Electrical and Electronics Eng NWFP University of Engineering and Technology, Pakistan), Hiroki Suguri (Miyagi University, Japan) and Hafiz Farooq Ahmad (School of Electrical Engineering and Computer Science (NUST-SEECS), Pakistan)
DOI: 10.4018/978-1-60566-898-7.ch014

In open multiagent systems, individual components act in an autonomous and uncertain manner, making it difficult for the participating agents to interact with one another in a reliable environment. Trust models have been devised to create a level of certainty for the interacting agents. However, trust requires reputation information, which essentially captures an agent’s former behaviour. A reputation model has two aspects: reputation creation and reputation distribution. Disseminating this reputation information in a highly dynamic environment is an open issue and calls for a better approach. We propose a swarm intelligence based mechanism whose self-organizing behaviour not only provides an efficient way of distributing reputation but also draws on various sources of information to compute the reputation value of the participating agents. We evaluate the system in a simulation showing the utility gain of agents that use the swarm based reputation system. The results for the reputation model were computed with an AntNet simulator written in C.
Chapter Preview


Interactions in Human Societies

Agent based systems share a number of characteristics with human societies in terms of interactions, communication and various other factors. Human beings form a perception of one another based on several factors. For example, if someone buys a product from a seller and that product proves good on various parameters, the buyer will rate that seller higher than other available sellers of the same product. In future shopping, the buyer will keep this rating in mind when deciding from whom to buy. If, however, a buyer is interacting with a certain seller for the first time and has no prior knowledge, the knowledge of peers can be used to rate that seller. For example, if Mr. X bought a product from a seller and was satisfied, another buyer who knows nothing about that seller’s products can use this information. Thus, human beings use the notions of trust and reputation about the people with whom they want to interact.

Trust in Computer Systems

Multiagent systems (MAS) are composed of individual agents working towards a certain goal. These agents need to interact with one another in order to achieve that goal. In open systems, however, it is very difficult to predict the behaviour of the agents, since open systems are characterized by a high degree of dynamism. Interactions among the agents therefore require some degree of certainty. In recent years, the notion of trust has been used in computer systems to predict the behaviour of agents based on certain factors. The term reputation is also used, and sometimes trust and reputation are used interchangeably, but they differ from one another. Reputation is defined as collected and processed information about one entity’s former behaviour as experienced by others, while trust is a measure of the willingness to proceed with an action (decision) that places the parties at risk of harm, based on an assessment of the risks, rewards and reputations associated with all the parties involved in the given situation.

Several computational and empirical models have been suggested in recent years to address various issues of open multiagent systems. Earlier work involved models and mechanisms developed for centralized multiagent systems. However, with the evolution of distributed computing of a decentralized nature, those models proved incomplete in addressing certain issues. Models like REGRET and FIRE, designed specifically to address the issues of open MAS, take into account various sources of information to compute the final reputation value of the agent under observation.


Trust is a fundamental concern in open distributed systems, as it forecasts the outcome of interactions among the agents in the system. There are basically two approaches to trust in multiagent systems. In the first, agents are endowed with some knowledge in order to calculate the trust value of an interacting agent; a high degree of trust makes that agent the most probable choice for interaction. The second approach revolves around the design of protocols and mechanisms of interaction, i.e. the rules of encounter. These interaction mechanisms must be devised so that those involved can be sure they will gain some utility if they rightly deserve it, and so that malicious agents cannot tamper with the correct payoff allocation of the mechanism (Schlosser, Voss, & Bruckner, 2004). The definitions of reputation and trust above show that trust management requires reputation. Reputation creation and distribution is an issue, especially in open multiagent systems; the two approaches to trust can thus be seen as addressing reputation creation and reputation distribution respectively.

Trust has emerged as a recent area of interest in the domain of information systems, and a wide variety of trust and reputation models have been developed in the past few years. We divide these models into two categories: centralized and decentralized.

Centralized Reputation Mechanism

Online electronic communities manage the reputation of all their users in a centralized manner; eBay and SPORAS are examples.
