Trust Modeling and Computational Trust: Digitalizing Trust

DOI: 10.4018/978-1-4666-4765-7.ch002

A trust model is a method to specify, evaluate, set up, and ensure trust relationships; it supports the process of trust in a digital system. This chapter introduces the technologies of trust modeling. The authors classify existing trust models according to different criteria and introduce a number of theories and technologies applied in trust evaluation. Furthermore, the current problems and challenges in the literature are discussed.
1. Introduction

A trust model is a method to specify, evaluate, set up, and ensure trust relationships among entities (Yan & Holtmanns, 2008); it is the means by which trust is calculated (Yang, Sun, Kay & Yang, 2009). A trust model supports the digital processing and control of trust. Most existing trust models are grounded in an understanding of trust characteristics and account for the factors that influence trust. Current work covers a wide range of areas, including ad hoc networks, ubiquitous computing, Peer-to-Peer (P2P) systems, multi-agent systems, web services, e-commerce, component software, and so on (Yan & Holtmanns, 2008).

Trust modeling has a history of about two decades. One of the earliest formalizations of trust in computing systems was done by Marsh (1994), whose approach integrated the various facets of trust from the disciplines of economics, psychology, philosophy, and sociology. Since then, many trust models have been constructed for various computing paradigms such as ubiquitous computing, Peer-to-Peer (P2P) networks, and multi-agent systems. In almost all of these studies, trust is treated as a subjective notion, which raises a problem: how can trust be measured? Translating this subjective concept into a machine-readable form is the main objective of trust modeling. Abdul-Rahman and Hailes (2000) proposed a trust model based on the work of Marsh (1994); their model focuses on online virtual communities in which every agent maintains a large data structure representing a version of global knowledge about the entire network. Gil and Ratnakar (2002) described a feedback mechanism (i.e., a reputation mechanism) that assigns credibility and reliability values to sources based on the averages of feedback received from individual users.

There are various methodologies for trust modeling, applied for different purposes. Some trust models are based on cryptographic technologies; for example, a Public Key Infrastructure (PKI) serves as the foundation of the trust model of Perlman (1999). A large number of trust models target specific trust properties, such as reputation, recommendation, and risk, as studied by Xiong and Liu (2004) and Liang and Shi (2005). Seldom do they support the multiple properties of trust needed to account for the factors of the trustee, the trustor, and the context. Many trust models have been constructed for various computing paradigms such as GRID computing, ad hoc networks, and P2P systems. These models use computational, linguistic, or graphical methods. For example, Maurer (1996) described an entity's opinion about the trustworthiness of a certificate as a value in the range [0, 1]. Theodorakopoulos and Baras (2006) used a two-tuple in [0, 1]2 to describe a trust opinion (refer to section 3.5). In Jøsang (1999), the metric is a triplet in [0, 1]3, where the elements of the triplet represent belief, disbelief, and uncertainty, respectively. Abdul-Rahman and Hailes (2000) used discrete integer numbers to describe the degree of trust; simple mathematical operations, such as minimum, maximum, and weighted average, are then used to calculate unknown trust values through concatenation and multi-path trust propagation. Jøsang and Ismail (2002) and Ganeriwal and Srivastava (2004) used a Bayesian model that takes binary ratings as input and computes reputation scores by statistically updating beta probability density functions. Manchala (2000) used linguistic trust metrics to reason about trust with supplied rules. In the context of the "Web of Trust," many trust models (e.g., Reiter and Stubblebine (1998)) are built upon a graph in which the resources or entities are nodes and the trust relationships are edges.
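To make these metrics concrete, the following is a minimal Python sketch of two of the approaches mentioned above: a Bayesian beta-reputation score computed from binary ratings (in the style of Jøsang and Ismail (2002), using the mean of a Beta(r+1, s+1) distribution), and simple concatenation/multi-path propagation via the minimum and weighted-average operators. The function names are illustrative, not from any cited system.

```python
def beta_reputation(r, s):
    """Expected reputation from r positive and s negative binary ratings,
    taken as the mean of a Beta(r + 1, s + 1) distribution."""
    return (r + 1) / (r + s + 2)

def opinion_triplet(r, s):
    """(belief, disbelief, uncertainty) triplet in [0, 1]^3 derived from
    the same binary evidence; the three components sum to 1."""
    n = r + s + 2
    return (r / n, s / n, 2 / n)

def propagate_min(path_values):
    """Concatenation along a trust chain: the chain is only as strong
    as its weakest link (minimum operator)."""
    return min(path_values)

def combine_weighted(values, weights):
    """Multi-path fusion: weighted average of trust values obtained
    over independent paths."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)
```

For instance, an entity with 8 positive and 2 negative ratings receives a reputation of (8+1)/(8+2+2) = 0.75, and a chain of trust values 0.9, 0.6, 0.8 propagates to 0.6 under the minimum operator. Real systems refine this sketch with aging of old ratings and discounting of second-hand opinions.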
