Background Review for Neural Trust and Multi-Agent System


Gehao Lu (University of Huddersfield, UK & Yunnan University, China) and Joan Lu (University of Huddersfield, UK)
DOI: 10.4018/978-1-5225-1884-6.ch016


This chapter provides a systematic background study of neural trust and multi-agent systems. Theoretical models are discussed in detail, the key concepts are explained, and existing systems are analyzed. The strengths and limitations of previous research are discussed, and about 59 references are cited to support the investigation. The study addresses the importance and significance of the research and, finally, proposes future directions for the work undertaken.
Chapter Preview

1. Introduction

Multi-agent systems were initially introduced as a branch of distributed artificial intelligence. In 1986, Minsky and Murata proposed the concept of agents, arguing that some problems could be solved through negotiation among social individuals (Minsky & Murata, 2004). These social individuals are agents, and agents should be highly interactive and intelligent. Hewitt argued that defining an agent is as difficult as defining intelligence (Hewitt, 1985). Wooldridge and Jennings held that agents should be autonomous, socially interactive, proactive, and reactive (Wooldridge & Jennings, 2002). Eberhart and Shi noted that agents can react to the outside environment through sensing (Eberhart & Shi, 2007).

With the increasing size and complexity of computer systems, it is nearly impossible to design a system from scratch and control every detail of that system purely by human effort (Simon, 1996). It is difficult to control the millions of transactions occurring in a large-scale E-market. It is also difficult to monitor an enterprise information system that encompasses huge numbers of heterogeneous devices spread across thousands of geographical locations (Rothkopf, 2003; Coulouris, Dollimore & Kindberg, 2000a). Grid computing, autonomous computing, pervasive computing, and multi-agent systems all address the challenge of designing large-scale distributed systems (Coulouris et al., 2000b). Computational trust enables an intelligent agent to trust another agent and delegate part of its tasks to that agent in a heterogeneous, distributed multi-agent environment. Delegation of action is the result of trust, and it also forms the foundation of future large-scale cooperative computer systems. Generally, trust toward a specific agent is generated through recognition and experience accumulated over repeated transactions with that agent. Reputation is socialized trust, which can be propagated through a social network of agents; it helps an agent trust a target agent without any direct interaction with it. The benefits of introducing trust and reputation into multi-agent systems include:

  • As a lubricant, trust can eliminate much of the unnecessary communication currently required by many interaction protocols, thus greatly improving the performance of multi-agent systems.

  • An agent can make decisions more easily based upon an evaluation of the trustworthiness of another agent. Computational trust is also a beneficial addition to traditional decision theory.

Trust is a kind of soft security that complements traditional hard security mechanisms such as encryption, authorization, and authentication. An agent operating in a complex, heterogeneous environment must possess both forms of security in order to be safe and effective.
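The two mechanisms described above (direct trust built from repeated transactions, and reputation propagated through witnesses) can be illustrated with a toy sketch. The class names, the exponential-moving-average update, and the witness-weighted aggregation below are illustrative assumptions for exposition only, not the model proposed in this chapter.

```python
class Agent:
    """Minimal agent holding direct trust values toward other agents in [0, 1]."""

    def __init__(self, name):
        self.name = name
        self.trust = {}  # direct trust toward other agents, keyed by name

    def record_transaction(self, other, outcome, rate=0.2):
        """Update direct trust from one transaction outcome in [0, 1],
        using an exponential moving average over repeated interactions."""
        prior = self.trust.get(other.name, 0.5)  # neutral prior before any experience
        self.trust[other.name] = (1 - rate) * prior + rate * outcome

    def reputation_of(self, target, witnesses):
        """Estimate the target's reputation from witnesses' direct trust,
        weighting each witness's opinion by how much we trust that witness."""
        num = den = 0.0
        for w in witnesses:
            weight = self.trust.get(w.name, 0.5)
            opinion = w.trust.get(target.name)
            if opinion is not None:
                num += weight * opinion
                den += weight
        return num / den if den else 0.5  # neutral value when no witness has an opinion


# Example: agent a has never dealt with c, but can consult witness b.
a, b, c = Agent("a"), Agent("b"), Agent("c")
b.record_transaction(c, 1.0)   # b had a successful transaction with c
print(a.reputation_of(c, [b])) # a derives an indirect trust estimate for c
```

This captures the intuition in the text: direct trust grows or decays with each transaction outcome, while reputation lets an agent form an estimate about a stranger without any direct interaction, by combining the opinions of agents it already trusts.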
