A Trust Case-Based Model Applied to Agents Collaboration

Felipe Boff (Lutheran University of Brazil (ULBRA), Brazil) and Fabiana Lorenzi (Lutheran University of Brazil (ULBRA), Brazil)
Copyright: © 2018 |Pages: 13
DOI: 10.4018/978-1-5225-2255-3.ch416

Abstract

This paper presents a method that uses Case-Based Reasoning (CBR) to calculate trust levels in agent collaboration. In the proposed approach, every interaction between agents is taken into account when the general trust level is updated, and each collaboration is also stored as a case in a CBR case base. For every new interaction, a situational trust level is calculated from similar past cases. This situational level is then weighed against the general trust level, producing an indicator tailored to the requested collaboration without discarding the collaboration history.
Chapter Preview

Introduction

The interaction among people and the interaction among systems can, at times, be viewed from similar perspectives. Both people and systems build their relationship networks on information. People retain information about the world and use it to pursue their goals. Computational systems can behave in a similar manner, although this depends heavily on how they were designed. Over time, a person may lose contact with someone or develop stronger ties with those whose relationship they find favorable. Likewise, systems and agents can stop relating to other systems or start new relationships, but this may require actions external to the agent or system.

One of the feelings that can bring comfort to interpersonal relationships is trust. Based on this feeling, people allow others they trust to participate more frequently or intensely in their lives. The relationship between two systems is treated here as collaboration, in which each agent, according to its computational purpose, executes operations requested by other agents. Computational systems, however, can have difficulty evaluating their collaboration with other systems: quantifying a collaboration as appropriate or detrimental can require complex algorithms and several exceptions to any given rule. The proposal presented here is a method that evaluates the relationship between two agents in order to generate a rating, called the general trust rating.
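The general trust rating described above can be sketched as a simple running average of past collaboration outcomes. This is a minimal illustration, not the chapter's actual formula; the class and method names, and the use of an unweighted mean over outcomes in [0, 1], are assumptions made here.

```python
class GeneralTrust:
    """Tracks one agent's overall trust in a specific interlocutor.

    Trust is kept as a running average of past collaboration outcomes
    (1.0 = successful, 0.0 = failed), so every interaction between the
    two agents updates the general rating.
    """

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def record(self, outcome: float) -> None:
        """Record the outcome of one collaboration (a value in [0, 1])."""
        self.total += outcome
        self.count += 1

    def rating(self) -> float:
        """Current general trust rating; 0.5 (neutral) when no history exists."""
        return self.total / self.count if self.count else 0.5
```

Under this sketch, two successes and one failure would yield a general trust rating of 2/3, and the rating keeps reflecting the full interaction history as new outcomes arrive.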

All collaborations between agents are unique and generate equally unique results. These results can be seen as the history of the collaboration between agents, a history rich in information that can underpin the generation of a trust rating. For this, the information has to be stored, and the more information is stored, the more complex its extraction becomes. For that reason, a Case-Based Reasoning approach is used to store it: each collaboration between agents generates a new case in the case base, which together comprise the history of that collaboration. Whether a collaboration is beneficial to the agent can be concluded by reviewing the whole collaboration history, and with the method developed here this history can be correlated and quantified, generating a trust rating.
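A minimal case memory along these lines could look as follows. The `Case` attributes, the attribute-matching similarity measure, and the nearest-neighbour retrieval are illustrative assumptions; the chapter does not specify its case representation or similarity function in this preview.

```python
from dataclasses import dataclass


@dataclass
class Case:
    features: dict   # descriptive attributes of the requested collaboration
    outcome: float   # 1.0 = successful collaboration, 0.0 = failed


class CaseBase:
    """Minimal case memory: every finished collaboration is stored as a
    new case, and past cases similar to a new request can be retrieved."""

    def __init__(self):
        self.cases: list = []

    def add(self, case: Case) -> None:
        self.cases.append(case)

    @staticmethod
    def similarity(a: dict, b: dict) -> float:
        # Fraction of attribute values that match (simple nearest-neighbour style).
        keys = set(a) | set(b)
        if not keys:
            return 0.0
        return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

    def retrieve(self, query: dict, k: int = 3) -> list:
        # Return the k stored cases most similar to the query features.
        ranked = sorted(self.cases,
                        key=lambda c: self.similarity(query, c.features),
                        reverse=True)
        return ranked[:k]
```

For example, a new request described as `{"task": "search", "domain": "web"}` would rank a stored case with identical features above one that matches only on `task`, and retrieval of the top-k cases provides the material from which a situational trust rating can be derived.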

This paper presents a method developed to quantify the relationship between agents through the generation of a rating. As in human relations, an agent may have a high degree of trust in another agent when the relationship is analyzed from a wider perspective, yet hold a different trust rating for specific activities, due to the environment it is exposed to or to its purpose. To allow this differentiation, the proposed method applies case-based reasoning to each collaboration request: it selects from the base of previous collaboration cases those similar to the current demand and generates a situational trust rating from them. Weighing the situational rating against the general trust rating makes it possible to evaluate the collaboration in light of the particular situation as well as the agents' collaboration history.
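The weighting step can be sketched as a blend of the two ratings. The function names, the averaging of similar-case outcomes, and the blending factor `alpha` are assumptions for illustration; the chapter preview does not state the actual weighting scheme.

```python
from typing import List, Optional


def situational_trust(outcomes: List[float]) -> Optional[float]:
    """Situational trust rating: average outcome (each in [0, 1]) of the
    similar cases retrieved for the current request; None if no similar
    case was found."""
    return sum(outcomes) / len(outcomes) if outcomes else None


def combined_trust(general: float,
                   situational: Optional[float],
                   alpha: float = 0.5) -> float:
    """Weigh the situational rating against the general rating.

    alpha is an assumed blending weight. With no similar cases, the
    indicator falls back to the general trust rating alone, so the
    collaboration history is never discarded.
    """
    if situational is None:
        return general
    return alpha * situational + (1 - alpha) * general
```

For instance, an agent with high general trust (0.8) but poor outcomes in similar past situations (0.4) would receive an intermediate indicator (0.6 with `alpha=0.5`), reflecting both the specific situation and the overall history.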

In the following sections, we first present related work and how its key points inform the proposed method. We then describe how the method was implemented, with its functionality separated into modules to keep concerns isolated, and present the tests that were run along with the results that corroborate the method's efficiency. Finally, we present our conclusions and planned future work.

Key Terms in this Chapter

Trust Rating: The level of trust between agents.

General Trust: Represents the trust of an agent in a specific interlocutor, considering all the interactions between these two.

Case-Based Reasoning: An approach that solves new problems using solutions applied in previous cases.

Multi-Agent Systems: A system composed of agents that are able to interact with one another and with an environment.

Trust: A measurable probability, as estimated by one agent, that another agent will perform a particular action.

Collaboration: The process by which two or more agents work together to perform a task.

Situational Trust: Reflects the trust of an agent in an interlocutor in a specific situation.
