Introduction
Computers and computer networks underpin many vital aspects of modern life, such as voting systems, electronic banking and trading, cloud computing, and the internal know-how of businesses large and small. These systems, however, are both critical and open to observation by potentially malicious parties, which makes it crucial to secure the information they use and exchange.
It is important, then, to guarantee that no confidential information, private data, or actions can be seen or deduced by an external observer. Opacity is a privacy property that formalizes a system's ability to keep a secret hidden from a passive but knowledgeable observer.
Since its introduction in (Mazaré, 2004) and its subsequent generalization to transition systems (Bryans, Koutny, Mazaré, & Ryan, 2008), opacity has been applied and discussed many times in the literature, including the study of its timed variant in (Cassez, 2009). These studies have given rise to numerous definitions, types, and applications of the concept of opacity, as well as many methods for verifying and enforcing it (Bryans et al., 2008; Cassez, 2009; Dubreil, Darondeau, & Marchand, 2010; Lin, 2011; Mullins & Yeddes, 2013, 2014). In this paper we continue the study of opacity in three of its variants, namely simple opacity, K-step weak opacity, and K-step strong opacity, in the context of finite Labeled Transition Systems (LTS).
A secret subset of a system’s behavior is “opaque” if a passive, knowledgeable observer is unable to deduce the occurrence of the secret from his or her observation of the system (Mullins & Yeddes, 2014). Assuming the system is modeled by a Labeled Transition System (LTS), an observer (or intruder) has full knowledge of the system (i.e., the LTS), but during execution he or she has access only to a limited subset of the system’s actions, called observable events (or actions). Given an LTS with a subset of states designated as secret states and an intruder observing the system through a subset of observable events, the secret is said to be “opaque” if, for every execution leading to a secret state, there exists another execution with the same projection onto the observable events that does not end in a secret state. In this case, we say that the intruder is unable to know whether the system has reached (ended in) a secret state or not.
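The definition above can be sketched as a bounded check on a finite LTS. The code below is an illustrative sketch, not an implementation from the paper: the LTS encoding, the function name, and the run-length bound are all assumptions, and runs are enumerated only up to a fixed depth since the LTS may contain cycles.

```python
from collections import defaultdict

def is_opaque(transitions, initial, secret, observable, max_len=6):
    """Bounded check of state-based opacity (illustrative sketch).

    transitions: dict mapping state -> list of (event, next_state)
    The secret states are opaque if every observation produced by a
    run ending in a secret state is also produced by some run ending
    in a non-secret state, so the intruder can never be certain the
    system has reached the secret.
    """
    # Group reachable end states by the intruder's observation,
    # i.e. the run's projection onto the observable events.
    ends_by_obs = defaultdict(set)
    frontier = {(s, ()) for s in initial}
    for state, obs in frontier:
        ends_by_obs[obs].add(state)
    for _ in range(max_len):
        nxt = set()
        for state, obs in frontier:
            for event, succ in transitions.get(state, []):
                # Unobservable events leave the projection unchanged.
                new_obs = obs + (event,) if event in observable else obs
                nxt.add((succ, new_obs))
                ends_by_obs[new_obs].add(succ)
        frontier = nxt
    # Opaque iff no observation is produced only by secret-ending runs.
    return all(ends - secret
               for ends in ends_by_obs.values() if ends & secret)
```

For instance, if two transitions on the same observable event lead respectively to a secret and a non-secret state, the intruder cannot tell the runs apart and the secret is opaque; if the two events are distinguishable, observing one reveals the secret. Exhaustive run enumeration is exponential in general; practical verifiers instead build a determinized observer automaton.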
We note that opacity can be formulated in two different ways: either by considering a subset of states as the secret, in which case we speak of state-based opacity, or by considering a subset of sequences of events (traces) as the secret, in which case we speak of trace-based opacity. In this paper we are solely interested in the formalization of state-based opacity.