How Can We Trust Agents in Multi-Agent Environments? Techniques and Challenges

Kostas Kolomvatsos (National and Kapodistrian University of Athens, Greece) and Stathes Hadjiefthymiades (National and Kapodistrian University of Athens, Greece)
Copyright: © 2009 |Pages: 22
DOI: 10.4018/978-1-59904-576-4.ch008


The field of multi-agent systems (MAS) has been active for many years due to the importance of agents to many areas of computer science research. MAS are open and dynamic systems in which a number of autonomous software components, called agents, communicate and cooperate in order to achieve their goals. In such systems, trust plays an important role: an agent needs a way to determine whether it can trust another entity that is a potential partner. Without trust, agents cannot cooperate effectively, and without cooperation they cannot fulfill their goals. Trust is often based on reputation, which serves as an indication that an entity may be trusted. This book chapter investigates this important research area. We discuss the main issues concerning reputation and trust in MAS, present research efforts, and give formalizations useful for understanding the two concepts.
Chapter Preview


The technology of multi-agent systems (MAS) offers many advantages in computer science, and more specifically in the domain of cooperative problem solving. MAS are systems that host a number of autonomous software programs called agents. Agents act on behalf of their owners, giving them easy and efficient access to information resources. Users state their requirements, and agents are responsible for fulfilling them. Hence, MAS include many entities trying to solve problems that are beyond their individual capabilities. For this reason, agents must in many cases cooperate with others in order to find the information and services needed to achieve their goals.

It is obvious that MAS are dynamic, distributed environments where agents may cooperate and communicate with others in order to complete their tasks. A key challenge arises from this nature of MAS: in such open systems, entities change their behavior dynamically. Thus, there is a requirement for trust between agents when they must exchange information. The basic question in such cases is: how and when can we trust an agent? Agents, in the majority of cases, are selfish, and their intentions and beliefs change continually.
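The question of when to trust an agent can be made concrete with a simple formalization. One common approach in the trust literature aggregates an agent's past interaction outcomes into a reputation score. The sketch below uses a beta-reputation-style estimate; the class and names are illustrative, not taken from this chapter:

```python
class ReputationRecord:
    """Tracks positive and negative interaction outcomes with one agent."""

    def __init__(self):
        self.positive = 0
        self.negative = 0

    def update(self, cooperated: bool) -> None:
        """Record the outcome of one interaction."""
        if cooperated:
            self.positive += 1
        else:
            self.negative += 1

    def trust(self) -> float:
        """Trust estimate in [0, 1]: the mean of a Beta(positive+1, negative+1)
        distribution. An unknown agent starts at 0.5; evidence shifts it."""
        return (self.positive + 1) / (self.positive + self.negative + 2)


rec = ReputationRecord()
for outcome in [True, True, False, True]:  # three cooperations, one defection
    rec.update(outcome)
print(round(rec.trust(), 3))  # (3+1)/(4+2) -> 0.667
```

A threshold on this score (e.g. cooperate only when trust exceeds 0.6) then gives an agent a simple, evidence-based partner-selection rule, while the default of 0.5 captures the uncertainty about agents with no interaction history.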

We try to address this dilemma throughout this chapter. Specifically, we cover the fields of reputation and trust in MAS. This is an active and important research area, as the two concepts are already used in commercial applications. However, many open issues remain, as it is difficult to characterize an agent as reliable or not.

In our work, we try to provide a detailed overview of reputation and trust models, highlighting their importance to open environments. Due to the abundance of relevant models, only their basic characteristics are discussed. We first cover basic concepts concerning MAS, reputation, and trust. We then present efforts, formalizations, and models related to these concepts. Finally, we discuss trust engineering issues and present future challenges and our conclusions.
