A Context-Aware Model of Trust for Facilitating Secure Ad Hoc Collaborations

Indrajit Ray (Colorado State University, USA), Indrakshi Ray (Colorado State University, USA) and Sudip Chakraborty (Valdosta State University, USA)
DOI: 10.4018/978-1-61520-682-7.ch011


Ad hoc collaborations often necessitate the impromptu sharing of sensitive information or resources between member organizations. Each member of the resulting collaboration needs to carefully assess and trade off the requirements of protecting its own sensitive information against the requirements of sharing some or all of it. The challenge is that no policies for such secure sharing have been agreed upon in advance (since the collaboration has been formed in an ad hoc manner). Sharing decisions must therefore be based on an evaluation of the trustworthiness of the recipient of the information or resources. In this chapter, the authors discuss some previously proposed trust models to determine whether they can be effectively used to compute trustworthiness for such sharing purposes in ad hoc collaborations. Unfortunately, none of these models appears to be completely satisfactory. Almost all of them fail to satisfy one or more of the following requirements: (i) well-defined techniques and procedures to evaluate and/or measure trust relationships, (ii) techniques to compare and compose trust values, which are needed in the formation of collaborations, and (iii) techniques to evaluate trust in the face of incomplete information. This prompts the authors to propose a new vector model of trust (the term "vector" is used loosely here to mean a tuple) that is suitable for reasoning about the trustworthiness of systems built from the integration of multiple subsystems, such as ad hoc collaborations. They identify three parameters on which trust depends and formulate how to evaluate trust relationships. The trust relationship between a truster and a trustee is associated with a context and depends on the experience, knowledge, and recommendations that the truster has with respect to the trustee in that context. The authors show how their model can measure trust in a given context. Sometimes, however, not enough information is available about a given context to calculate the trust value.
To address this, the authors show how the relationships between different contexts can be captured using a context graph. Formalizing the relationships between contexts allows values from related contexts to be extrapolated to approximate the trust value of an entity even when not all of the information needed to calculate it is available. Finally, the authors develop formalisms to compare two trust relationships and to compose two or more of them, features that are invaluable in ad hoc collaborations.
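The trust-as-tuple idea can be illustrated with a minimal sketch. The class below is hypothetical and not the authors' actual formulation: it assumes each of the three parameters (experience, knowledge, recommendation) is normalized to [-1, 1], and collapses the tuple to a scalar with an illustrative weighted sum so that two trust relationships in the same context can be compared.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustVector:
    """Hypothetical sketch of a context-specific trust tuple.

    Each component is assumed to lie in [-1, 1], where -1 denotes
    complete distrust, 0 neutrality, and 1 complete trust. The weights
    and combination rule are illustrative assumptions only.
    """
    experience: float       # truster's direct interactions with the trustee
    knowledge: float        # truster's direct or indirect knowledge of the trustee
    recommendation: float   # aggregated third-party recommendations

    def value(self, weights=(0.5, 0.3, 0.2)) -> float:
        """Collapse the tuple to a scalar via a weighted sum (illustrative)."""
        we, wk, wr = weights
        return we * self.experience + wk * self.knowledge + wr * self.recommendation

# Comparing two trust relationships held by the same truster in one context:
t_ab = TrustVector(experience=0.8, knowledge=0.6, recommendation=0.4)
t_ac = TrustVector(experience=0.2, knowledge=0.9, recommendation=0.5)
print(t_ab.value() > t_ac.value())  # True: the truster trusts B more than C
```

Any monotone combination rule would support such comparisons; the weighted sum is chosen only for simplicity. The chapter's actual model additionally ties each tuple to a context and extrapolates across related contexts via a context graph, which this sketch does not attempt to reproduce.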
Chapter Preview


When two or more organizations collaborate, each of them needs to properly assess and trade off the requirements of protecting its own sensitive information and resources against the requirements of sharing some or all of them with others. Traditionally, organizational information security policies are formulated to specify who to share information and resources with, what individual pieces to share, under what circumstances, and any other restrictions on the sharing of such sensitive information and/or resources. When conventional collaborations are formed, the security policies of the individual organizations are compared against each other. Any conflict between policies needs to be resolved, giving rise to a new set of security policies for the collaboration as a whole. Unfortunately, ad hoc collaborations by their very nature preclude such premeditated security policies. Such collaborations are very dynamic: they can form and break down within very short periods of time. A typical example of an ad hoc collaboration is a virtual sensor network that is formed during an earthquake to monitor disturbances in chemical plumes caused by the earthquake. The virtual sensor network is sustained for a short period of time by the cooperation of two special-purpose sensor networks, one for monitoring chemical plumes and the other for monitoring seismic activity. During regular times, each of these sensor networks is administered by a different entity and nothing is shared between the two. During the earthquake, each sensor network needs to update its own security policies on the spur of the moment, at the time the collaboration is formed, to adjust to possibly conflicting goals, which is a challenging situation. It appears that the concept of trust can be used to support such ad hoc adaptation of security policies.
This is because the sharing of information and resources can be guided to a considerable extent by questions such as who to trust, why to trust, and how much to trust. However, even today there are no well-accepted formalisms or techniques for specifying trust in such collaborative environments or for reasoning about trust relationships. Secure collaborations are often built on the premise that concepts like "trustworthiness" or "trusted" are well understood, unfortunately without agreement on what "trust" means, how to measure it, how to compare two trust values, or how to compose them. This creates a number of inferential ambiguities in building secure systems, particularly those composed from several different components.

Consider the following example. Let us assume two financial organizations have decided to join hands to fight financial fraud. Each organization has previously built its own information base about fraud perpetrators, their activities, their ways and means, fraud-level ratings, and so on. As part of this collaboration, these information bases need to be merged. Typically, each organization would have created its information base by accumulating information from several sources. Some of these sources are under the direct administrative control of the organization and thus are considered trustworthy. Other sources are "friendly" sources, and information originating directly from them is also considered trustworthy. However, these "friendly" organizations may, in turn, have obtained information from their own sources, about which the current organization may have no firsthand knowledge. If such third-hand information is made available to the organization, the organization has no real basis for determining its quality. It would be rather naïve for the organization to trust this information to the same extent that it trusts information from sources under its direct control. Similarly, not trusting this information at all may severely constrain the functionality of the organization. Let us assume that somehow each organization has been able to rate the trustworthiness of various pieces of information in its own information base in terms of qualitative measures such as high, medium, or low. The question then remains how to compare a "high" trust rating in one organization with a "high" rating in the other, or what the trust level of the merged information should be. Note that this is not a limitation of qualitative measures: the same problem arises when existing quantitative measures of trust are used.
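The incomparability problem can be made concrete with a small sketch. The numeric mappings below are purely hypothetical: each organization is assumed to map its qualitative labels onto its own underlying scale, so identical labels need not denote identical trust, and any rule for merging ratings (here, a conservative minimum) is arbitrary without shared semantics.

```python
# Hypothetical per-organization mappings from qualitative labels to an
# assumed underlying numeric trust scale in [0, 1]. These values are
# illustrative assumptions, not taken from the chapter.
org_a_scale = {"low": 0.2, "medium": 0.5, "high": 0.8}
org_b_scale = {"low": 0.4, "medium": 0.7, "high": 0.9}

# The same label can denote different underlying trust values:
print(org_a_scale["high"] == org_b_scale["high"])  # False

# Merging a piece of information rated "high" by A and "medium" by B:
# without agreed semantics, even a conservative rule like taking the
# minimum is an arbitrary choice.
merged = min(org_a_scale["high"], org_b_scale["medium"])
print(merged)  # 0.7
```

The point is not that the minimum rule is wrong, but that no combination rule can be justified until the two organizations' trust scales are given a common, well-defined interpretation, which is exactly what the requirements below demand of a trust model.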

The above example leads us to observe the following minimum requirements for any trust model used to evaluate the trustworthiness of entities in ad hoc collaborations:
