Trust-Privacy Tradeoffs in Distributed Computing

Rima Deghaili (American University of Beirut, Lebanon), Ali Chehab (American University of Beirut, Lebanon) and Ayman Kayssi (American University of Beirut, Lebanon)
Copyright: © 2010 |Pages: 11
DOI: 10.4018/978-1-60566-414-9.ch011


In distributed computing environments, trust must often be established before entities interact. This trust establishment process requires each entity to ask the other for credentials, which implies some privacy loss for both parties. The authors present a system for achieving the right privacy-trust tradeoff in distributed environments. Each entity joins a group in order to protect its privacy, and interactions between entities are then replaced by interactions between groups on behalf of their members. Data sent between groups is protected from dissemination by a self-destruction process. Simulations of the system, implemented on the Aglets platform, show that entities requesting a service must give up more private information when their past experiences are poor, or when the requesting entity is of a paranoid nature. In all cases, the privacy loss is quantified and controlled.

2. Previous Work

Among the different trust models, (Abdul-Rahman & Hailes, 1998) present a decentralized approach to trust management aiming at reducing ambiguity by using explicit trust statements and defining a recommendation protocol to exchange trust-related information.

Other trust models are based on reputation (Abdul-Rahman & Hailes, 2000; Mui, Mohtashemi & Halberstadt, 2002; Ramchurn, Jennings, Sierra & Godo, 2003). Agents are able to reason about trust and to form opinions based on other agents’ recommendations as well as on previous experiences.

In (Damiani, Samarati, De Capitani di Vimercati, Paraboschi & Violante, 2002), the model is a self-regulating system where the peer-to-peer network is used to implement a reputation mechanism, while preserving anonymity. Reputation is computed using a distributed polling algorithm whereby resource requestors can find out about the reliability of another entity.
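A distributed polling scheme of the kind described above can be sketched as follows. This is a minimal illustration, not the paper's actual protocol: the function name, the boolean-vote representation, and the majority quorum are assumptions made for clarity.

```python
def poll_reputation(votes: list[bool], quorum: float = 0.5) -> bool:
    """Aggregate peers' votes on a servent's reliability.

    Each vote is True (reliable) or False (unreliable); the servent is
    accepted only if the fraction of positive votes exceeds the quorum.
    """
    if not votes:
        # No peer has an opinion: treat the servent as unknown/untrusted.
        return False
    return sum(votes) / len(votes) > quorum

# Three of four polled peers vouch for the servent:
print(poll_reputation([True, True, False, True]))  # True (3/4 > 0.5)
```

In the actual model, votes would arrive over the peer-to-peer network and be weighted and verified to preserve anonymity; the sketch only shows the aggregation step.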

In (Tan, 2003), a trust matrix model is used to build trust for conducting first trade transactions in electronic commerce. The model aims at finding a relation between anonymous procedural trust and personal trust based on past experience, in order to model online trust between trading partners that have never traded before.

The authors of (Jiang, Xia, Zhong & Zhang, 2004) present an autonomous trust management system for mobile agents, where agents build trust relationships based on trust path searching or trust negotiation and exchange trust information to achieve global trust management without the need for a central trust authority.

In (Gummadi & Yoon, 2004), security issues in peer-to-peer file sharing applications are considered. These include “peer selection,” where peers with malicious tendencies are banned, and “request resolution,” where a peer has to choose the peer that exhausts its capabilities the least. The concept of reputation is introduced as a collective measure of all peers’ experiences with a particular peer.

The TRUMMAR model (Derbas, Kayssi, Artail & Chehab, 2004) is based on reputation and aims to protect mobile agent systems from malicious hosts. TRUMMAR takes into account the concepts of reputation, first impression, loss of reputation with time, and host's sociability. The model was later enhanced as the PATROL model (Tajeddine, Kayssi, Chehab & Artail, 2006).
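The loss-of-reputation-with-time concept in TRUMMAR can be illustrated with a small sketch. The exponential decay form, the neutral first-impression baseline, and all parameter names below are illustrative assumptions, not TRUMMAR's exact equations.

```python
import math

def decayed_reputation(rep: float, neutral: float,
                       elapsed: float, rate: float = 0.1) -> float:
    """Fade a stored reputation toward a neutral first-impression value.

    The weight on the old reputation shrinks exponentially with the time
    elapsed since it was last updated, so stale opinions matter less.
    """
    w = math.exp(-rate * elapsed)
    return w * rep + (1 - w) * neutral

# A strong reputation (0.9) observed 10 time units ago fades toward 0.5:
print(round(decayed_reputation(0.9, 0.5, 10.0), 3))
```

The same structure accommodates the model's other inputs (recommendations from neighboring hosts, sociability) as additional weighted terms in the update.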

The authors of (Wang & Vassileva, 2004) simulate a file sharing system in a peer-to-peer network where trust is defined using attributes such as the reliability, honesty, and competence of a trusted agent.

FIRE (Huynh, Jennings & Shadbolt, 2004) is a decentralized model for trust evaluation in open multi-agent systems where each agent should be responsible for storing trust information and evaluating trust itself. FIRE deals with open multi-agent systems in which agents are owned by many stakeholders and can enter and leave the system at any time.

The TRAVOS model (Patel, Teacy, Jennings & Luck, 2005) computes trust using probability theory and takes into account past interactions between agents.
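TRAVOS's probabilistic treatment of past interactions is commonly expressed as the expected value of a Beta distribution over interaction outcomes. The sketch below shows that computation; the function name and the success/failure encoding are illustrative, and the full model additionally weighs third-party reputation reports, which are omitted here.

```python
def travos_trust(successes: int, failures: int) -> float:
    """Expected probability that the next interaction succeeds.

    With n successes and m failures observed, the posterior over the
    partner's reliability is Beta(n + 1, m + 1), whose mean is
    (n + 1) / (n + m + 2). No history yields a neutral 0.5.
    """
    alpha = successes + 1
    beta = failures + 1
    return alpha / (alpha + beta)

# A partner with 8 good and 2 bad past interactions:
print(travos_trust(8, 2))  # 0.75
```

Note how the prior pulls the estimate toward 0.5 when evidence is scarce: a single success gives 2/3, not certainty.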
