Math. Comput. Appl. 2019, 24(2), 64; https://doi.org/10.3390/mca24020064
Article
Bisimulation for Secure Information Flow Analysis of Multi-Threaded Programs
Department of Computer Science, University of Tabriz, Tabriz 5166616471, Iran
* Author to whom correspondence should be addressed.
Received: 5 May 2019 / Accepted: 3 June 2019 / Published: 17 June 2019
Abstract
Preserving the confidentiality of information is a growing concern in software development. Secure information flow is intended to maintain the confidentiality of sensitive information by preventing it from flowing to attackers. This paper discusses how to ensure confidentiality for multithreaded programs through a property called observational determinism. The operational semantics of multithreaded programs are modeled using Kripke structures. Observational determinism is formalized in terms of divergence weak low-bisimulation. Bisimulation is an equivalence relation associating executions that simulate each other. The new property is called bisimulation-based observational determinism. Furthermore, a model checking method is proposed to verify the new property and ensure that secure information flow holds in a multithreaded program. The method successively refines the Kripke model of the program until the quotient of the model with respect to divergence weak low-bisimulation is reached. Then, bisimulation-based observational determinism is checked on the quotient, which is a minimized model of the concrete Kripke model. The time complexity of the proposed method is polynomial in the size of the Kripke model. The proposed approach has been implemented on top of PRISM, a probabilistic model checking tool. Finally, a case study is discussed to show the applicability of the proposed approach.
Keywords:
information security; secure information flow; bisimulation; multithreaded programs
1. Introduction
The increase in the number and variety of security attacks on computing systems amplifies the need for improved protection mechanisms. Most of these attacks target the confidentiality of sensitive information.
Cryptography, access control and firewalls are common protection mechanisms against attacks on confidentiality. These mechanisms allow access (read or write) to secret information for authorized users only. However, once access to secret information is granted to an authorized user, they provide no control over how the information is used: there is no way to verify and prevent the flow of secret information to unauthorized users. For example, consider an Android game with in-app purchase capability. During installation, the app asks for permission to access the Internet and credit card information (for in-app purchases, such as buying additional coins). Once an Android user grants these permissions, they have no way to ensure that the credit card information is used legitimately and is not sent to an unauthorized server. This is an example of an information flow, which can result in dangerous attacks such as code injection or sensitive information leakage [1].
To avoid information flows and stop information leakages, secure information flow has been introduced. Secure information flow is a security mechanism for verifying and ensuring that secret information in a program does not flow to unauthorized users. It assigns two security levels to program variables: some variables are assigned public (low) security level and some are assigned secret (high) security level. The attacker, in secure information flow context, is supposed to know the source code of the program and is able to execute the program and observe values of the public variables. Secure information flow seeks to detect information flows from secret variables to public ones [2,3].
For example, consider the program l:=h, where h is a secret variable and l is a public variable. This program has a direct flow, since the attacker can infer secret information (h) by observing the public variable (l). As another example, consider the program if h>0 then l:=5 else l:=6, which has an indirect flow. The attacker can infer some secret information (whether h is positive or not) by observing the value of l. Verifying the confidentiality of multithreaded programs and ensuring secure information flow is the main motivation of this paper.
To ensure secure information flow in multithreaded programs, a confidentiality property needs to be formalized and a verification method is needed to check whether the program satisfies the property or not.
A commonly used confidentiality property to ensure secure information flow for concurrent programs is observational determinism [4,5,6]. It requires the program to appear deterministic to an attacker capable of observing values of the public variables during program execution. There are various definitions of observational determinism [6,7,8,9,10,11]. However, the time complexity of verifying most of these properties is exponential in the size of the state space of the program [10]. Furthermore, in some security contexts they are not restrictive enough to thoroughly avoid information leakage.
In this paper, an automatic approach is proposed to analyze secure information flow for multithreaded programs. The proposed approach consists of two main parts: (1) a new formalization, Bisimulation-based Observational Determinism (BOD), for specifying secure information flow for multithreaded programs (Section 4.1) and (2) a polynomial-time algorithm for verifying that BOD holds in a multithreaded program (Section 4.2).
Kripke structures are used to model program behavior. Any path in the Kripke structure corresponds to an execution of the program being modeled. The sequence of public values along each path denotes a public behavior that is observable by the attacker. We define an equivalence relation, called divergence weak low-bisimulation, between paths of the Kripke structure. Divergence weak low-bisimulation requires paths of the program to affect the public variables in the same way. We formalize BOD using divergence weak low-bisimulation and show that a program satisfies BOD if and only if all paths that have the same initial public value are divergence weak low-bisimilar. Thus, paths that have the same initial public value pass through the same equivalence classes of states, but not necessarily at the same time; that is, BOD allows some paths to run slower than others. BOD is defined in a language- and scheduler-independent manner.
To verify BOD, a sound model checking method is proposed. For each set of paths having low-equivalent initial states, the method chooses an arbitrary representative path. BOD requires divergence weak low-bisimilarity between each representative path and the corresponding paths in the program. The method computes a divergence weak low-bisimulation quotient of the program, which is an abstract model of the program consisting of equivalence classes under divergence weak low-bisimulation. We show that BOD is satisfied if and only if the initial states of each representative path and the corresponding paths in the program are in the same equivalence classes. The time complexity of computing the quotient, and hence of verifying BOD, is polynomial in the size of the program model.
In summary, our contributions are (1) a formalization of observational determinism on a bisimulation foundation and (2) a verification algorithm for checking observational determinism.
The remainder of the paper is structured as follows. Section 2 discusses preliminaries and assumptions made throughout the paper. Section 3 discusses previous work. In Section 4, the proposed approach is explained: the bisimulation-based confidentiality property is formally defined using low-bisimulation and then the verification is explained in detail. Finally, Section 5 concludes with a discussion of future work.
2. Preliminaries and Assumptions
A security analysis should include a model for program, attacker and property [12]. Program and attacker models along with preliminaries for formalizing the property are explained in this section. The confidentiality property for specifying requirements is defined in Section 4.1.
2.1. Program Model
We model a program as a set of concurrently running threads with a memory shared across the threads. Kripke structures are utilized to model multithreaded programs and paths are used to model program behavior (executions).
Definition 1
(Kripke structure). A Kripke structure $\mathcal{K}$ is a tuple $(S,\to ,I,AP,V)$ where S is a set of states, $\to \subseteq S\times S$ is a transition relation, $I\subseteq S$ is a set of initial states, $AP$ is the set of possible values of the public variables and $V:S\to {2}^{AP}$ is a labeling function.
$\mathcal{K}$ is called finite if $S$ and $AP$ are finite. The set of successors of a state $s$ is defined as $Post(s)=\{{s}^{\prime}\in S \mid s\to {s}^{\prime}\}$. A state $s$ is called terminal if $Post(s)=\varnothing$. For a Kripke structure modeling a sequential program, terminal states represent the termination of the program.
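As a running illustration of Definition 1, a finite Kripke structure and the $Post$ operator can be sketched in Python (the class and method names are ours, not part of the paper):

```python
from dataclasses import dataclass

@dataclass
class Kripke:
    """Finite Kripke structure (S, ->, I, AP, V).
    labels plays the role of V, mapping each state to its set of public values."""
    states: set       # S
    transitions: set  # -> as a set of (s, s') pairs
    initial: set      # I, a subset of S
    labels: dict      # V: state -> frozenset of public values

    def post(self, s):
        """Post(s) = {s' in S | s -> s'}."""
        return {t for (u, t) in self.transitions if u == s}

    def is_terminal(self, s):
        """A state is terminal iff it has no successors."""
        return not self.post(s)
```

For instance, a two-state structure with a self-loop on its second state has `post('s0') == {'s1'}` and no terminal states.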
Definition 2
(Path). A path fragment $\widehat{\pi}$ of $\mathcal{K}$ is a finite state sequence ${s}_{0}{s}_{1}\dots {s}_{n}$ such that ${s}_{i}\in Post({s}_{i-1})$ for all $0<i\le n$, or an infinite state sequence ${s}_{0}{s}_{1}{s}_{2}\dots$ such that ${s}_{i}\in Post({s}_{i-1})$ for all $0<i$. A path of $\mathcal{K}$ is a path fragment that starts in an initial state and is either finite, ending in a terminal state, or infinite.
The first state of the path $\pi ={s}_{0}{s}_{1}{s}_{2}\dots$ is extracted by $\pi[0]$, that is, $\pi[0]={s}_{0}$. $Paths(s)$ denotes the set of paths starting in $s$ and $Paths(\mathcal{K})$ the set of paths of the initial states of $\mathcal{K}$: $Paths(\mathcal{K})={\cup}_{s\in I}Paths(s)$. For a subset ${I}^{\prime}\subseteq I$, $Paths({I}^{\prime})={\cup}_{s\in {I}^{\prime}}Paths(s)$.
A trace of a path $\pi ={s}_{0}{s}_{1}\cdots$ is defined as $T=trace(\pi)=V({s}_{0})V({s}_{1})\cdots$. Two traces ${T}_{1}$ and ${T}_{2}$ over ${2}^{AP}$ are stutter equivalent, denoted ${T}_{1}\triangleq {T}_{2}$, if they are both of the form ${A}_{0}^{+}{A}_{1}^{+}{A}_{2}^{+}\cdots$ for ${A}_{0},{A}_{1},{A}_{2},\cdots \subseteq AP$, where ${A}_{i}^{+}$ is the Kleene plus operation on ${A}_{i}$, defined as ${A}_{i}^{+}=\{{x}_{1}{x}_{2}\cdots {x}_{k} \mid k>0\ \mathrm{and\ each}\ {x}_{j}={A}_{i}\}$. A finite trace ${t}_{1}$ is called a prefix of a trace $t$ if there exists another infinite trace ${t}_{2}$ such that ${t}_{1}{t}_{2}=t$. Two traces are prefix and stutter equivalent if one is stutter equivalent to a prefix of the other.
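On finite trace prefixes, the stutter-equivalence test above amounts to collapsing consecutive repetitions of a label set and comparing the results; a minimal sketch (function names are ours):

```python
def collapse(trace):
    """Remove consecutive duplicate label sets: A A B B B C -> A B C."""
    out = []
    for a in trace:
        if not out or out[-1] != a:
            out.append(a)
    return out

def stutter_equivalent(t1, t2):
    """Finite traces are stutter equivalent iff their collapsed forms coincide."""
    return collapse(t1) == collapse(t2)
```

For example, `{0}{0}{1}` and `{0}{1}{1}` are both of the form `{0}+{1}+` and hence stutter equivalent.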
Two Kripke structures can be combined into a single composite Kripke structure.
Definition 3
(Composite Kripke structure ${\mathcal{K}}_{1}$ ⊕ ${\mathcal{K}}_{2}$). For ${\mathcal{K}}_{i}=({S}_{i},{\to}_{i},{I}_{i},AP,{V}_{i})$, $i=1,2$: ${\mathcal{K}}_{1}\oplus {\mathcal{K}}_{2}=({S}_{1}\uplus {S}_{2},{\to}_{1}\uplus {\to}_{2},{I}_{1}\uplus {I}_{2},AP,V)$ where ⊎ stands for disjoint union and $V\left(s\right)={V}_{i}\left(s\right)$ if $s\in {S}_{i}$.
Let us assume $\mathcal{K}=(S,\to ,I,AP,V)$ is a Kripke structure that models executions of a multithreaded program, with all possible interleavings of the threads. A state of $\mathcal{K}$ indicates the current values of all variables together with the current value of the program counter that indicates the next program statement to be executed. Execution steps of the program are modeled by the transition relation. In case a state has more than one outgoing transition, the next execution step (transition) is chosen in a purely nondeterministic fashion. In consequence, executions result from the resolution of the possible nondeterminism in the program. This resolution is performed by a scheduler. A scheduler chooses in any state s one of the enabled transitions according to a scheduling policy. Probabilistic choices are not considered, that is, the scheduler is possibilistic. $AP$ is the set of values of the public variables and the function V labels each state with these values. If $\mathcal{K}$ has a terminal state ${s}_{n}$, we include a transition ${s}_{n}\to {s}_{n}$, that is, a selfloop, ensuring that the Kripke structure has no terminal state. Therefore, all paths of $\mathcal{K}$ are infinite. It is assumed that the state space of the Kripke structure of the program and the shared memory used by the threads are finite.
As an example of program modeling, consider the following program which consists of two threads
l:=0; l:=1 || if l=1 then l:=h (P1)
Suppose that h is a one-bit secret variable, l is a one-bit public variable and || is the parallel operator. Each thread is secure if executed separately, but concurrent execution of the two threads might leak the value of h into l. The Kripke structure of the program is depicted in Figure 1.
In this Kripke structure, the state space is $S=\{{s}_{0},{s}_{1},\dots ,{s}_{8}\}$. The set of initial states consists of two states, that is, $I=\{{s}_{0},{s}_{5}\}$. The set of transitions is $\{{s}_{0}\to {s}_{1},{s}_{1}\to {s}_{3},\dots ,{s}_{7}\to {s}_{8},{s}_{8}\to {s}_{8}\}$. The set of possible values of the public variable is $AP=\{0,1\}$. States ${s}_{0}$, ${s}_{1}$, ${s}_{4}$, ${s}_{5}$ and ${s}_{6}$ are labeled with $\left\{0\right\}$ and states ${s}_{2}$, ${s}_{3}$, ${s}_{7}$ and ${s}_{8}$ are labeled with $\left\{1\right\}$. The set of paths of the program is $\{{s}_{0}{s}_{1}{s}_{3}^{\omega},{s}_{0}{s}_{2}{s}_{4}^{\omega},{s}_{5}{s}_{6}{s}_{8}^{\omega},{s}_{5}{s}_{7}{s}_{8}^{\omega}\}$, where $\omega $ denotes infinite iteration.
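To make the leak concrete, the labels and paths above can be encoded directly (the path set is taken from the text; this encoding is ours): collapsing stuttering shows that the two paths from the initial state ${s}_{0}$ produce different public behaviors, which an attacker can distinguish.

```python
# State labels (public value of l), as listed in the text.
labels = {'s0': 0, 's1': 0, 's4': 0, 's5': 0, 's6': 0,
          's2': 1, 's3': 1, 's7': 1, 's8': 1}

# The four paths of the program; trailing self-loop states written once.
paths = [('s0', 's1', 's3'), ('s0', 's2', 's4'),
         ('s5', 's6', 's8'), ('s5', 's7', 's8')]

def collapsed_trace(path):
    """Public trace of a path with consecutive stuttering steps collapsed."""
    out = []
    for s in path:
        if not out or out[-1] != labels[s]:
            out.append(labels[s])
    return out
```

Here `collapsed_trace(('s0','s1','s3'))` and `collapsed_trace(('s0','s2','s4'))` differ, so the interleavings from ${s}_{0}$ are observationally distinguishable, while the two paths from ${s}_{5}$ collapse to the same trace.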
2.2. Attacker Model
Secure information flow is a security mechanism for establishing program confidentiality. The adequacy of a secure information flow analysis depends on the attacker model. This model defines the capabilities of the attacker, such as being able to observe program output, read program code or even inject code into the program. These capabilities may lead to information flow channels that transfer secret information to the attacker. Examples of such channels are direct, indirect, termination behavior, probabilistic, internally observable timing, externally observable timing [13], injection and power channels [14].
Direct flows are caused when the value of a secret variable is directly assigned to a public variable. Indirect flows are caused by the control structure of the program. Termination flows leak information through the termination or nontermination of a program execution [3]. Probabilistic channels signal secret information through the probabilistic behavior of the program. Internally observable timing flows are created when secret information influences the timing behavior of a thread, which, through the scheduler, influences the execution order of assignments to public variables [15]. Externally observable timing flows are caused when secret information influences the timing behavior of the program. Injection flows signal information when an attacker is able to inject her desired code in the program [14]. Power channels leak information via the power consumed to execute an action dependent on secret information, assuming the attacker can measure this consumption [3].
Which of these channels is a concern depends strongly on the specific context and on the power of the attacker, that is, on the observations she is able to make. We suppose that the attacker knows the program's source code and is able to choose a scheduling policy, run the program and observe program traces.
We assume a shared memory for the multithreaded program, where public values are stored and to which the attacker has read access. Stated in terms of Kripke structures, the attacker can only observe state labels. We also assume that the attacker cannot access other memory areas, including secret variables (i.e., access control and memory protection work correctly). We aim to detect direct, indirect and internal timing leaks.
2.3. Low-Bisimulation
In this section, low-bisimulation and its variations are formally defined. These definitions will be used in the next section to formalize observational determinism. Execution (path) indistinguishability is a key requirement for preventing information leaks in concurrent programs [6]. Bisimulation is a widely used foundation for execution indistinguishability and thus for characterizing security for multithreaded programs (e.g., References [13,16,17,18]).
We assume the attacker can observe the state labels and cannot distinguish those states that have the same label but might have different secret values. This observational capability of the attacker is formalized by the following relation.
Definition 4
(Low-equivalent states). A state $s$ is low-equivalent to another state ${s}^{\prime}$, written $s{=}_{L}{s}^{\prime}$, if $V(s)=V({s}^{\prime})$.
We consider a program secure if all executions (paths) having low-equivalent initial states are divergence weak low-bisimilar. Thus, we define observational determinism in terms of divergence weak low-bisimulation. To clarify the concept of divergence weak low-bisimulation, we first define weak low-bisimulation. Weak low-bisimulation is defined as a relation between states of a Kripke structure and requires, for low-equivalent states, that each transition is matched by a (suitable) path fragment.
Definition 5
(Weak low-bisimulation ≈_{L}). Let $\mathcal{K}=(S,\to ,I,AP,V)$ be a Kripke structure. A weak low-bisimulation for $\mathcal{K}$ is a binary relation $R$ on $S$ such that for all ${s}_{1}\ R\ {s}_{2}$, the following three conditions hold:
 1.
 ${s}_{1}{=}_{L}{s}_{2}$.
 2.
 If ${s}_{1}^{\prime}\in Post\left({s}_{1}\right)$ with $({s}_{1}^{\prime},{s}_{2})\notin R$, then there exists a finite path fragment ${s}_{2}{u}_{1}\dots {u}_{n}{s}_{2}^{\prime}$ with $n\ge 0$ and ${s}_{1}\phantom{\rule{4pt}{0ex}}R\phantom{\rule{4pt}{0ex}}{u}_{i}$, $i=1,\dots ,n$ and ${s}_{1}^{\prime}\phantom{\rule{4pt}{0ex}}R\phantom{\rule{4pt}{0ex}}{s}_{2}^{\prime}$.
 3.
 If ${s}_{2}^{\prime}\in Post({s}_{2})$ with $({s}_{1},{s}_{2}^{\prime})\notin R$, then there exists a finite path fragment ${s}_{1}{v}_{1}\dots {v}_{n}{s}_{1}^{\prime}$ with $n\ge 0$ and ${v}_{i}\ R\ {s}_{2}$, $i=1,\dots ,n$ and ${s}_{1}^{\prime}\ R\ {s}_{2}^{\prime}$.
States ${s}_{1}$ and ${s}_{2}$ are weak low-bisimilar, denoted ${s}_{1}{\approx}_{L}{s}_{2}$, if there exists a weak low-bisimulation $R$ for $\mathcal{K}$ with ${s}_{1}\ R\ {s}_{2}$.
Weak low-bisimulation is defined as a relation between states within a single Kripke structure. An alternative perspective is to consider low-bisimulation as a relation between two Kripke structures, which enables comparing different Kripke structures. Take Kripke structures ${\mathcal{K}}_{1}$ and ${\mathcal{K}}_{2}$ and combine them into a single composite Kripke structure ${\mathcal{K}}_{1}\oplus {\mathcal{K}}_{2}$ (see Definition 3). Then, ${\mathcal{K}}_{1}{\approx}_{L}{\mathcal{K}}_{2}$ if and only if, for every initial state ${s}_{1}$ of ${\mathcal{K}}_{1}$, there exists a weak low-bisimilar initial state ${s}_{2}$ of ${\mathcal{K}}_{2}$, and vice versa.
Two Kripke structures can be weak low-bisimilar (i.e., have weak low-bisimilar initial states) but still not produce indistinguishable paths. This is caused by stutter paths, that is, paths that stay forever in an equivalence class without performing any visible step. This behavior is called divergent. We would like to adapt weak low-bisimulation such that states may only be related if either both exhibit divergent paths or neither does. This yields a variant of weak low-bisimulation called divergence weak low-bisimulation.
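The lifting of weak low-bisimulation to whole Kripke structures can be sketched as a simple check on initial states, assuming some `bisimilar(s1, s2)` oracle over the composite structure (both names are ours):

```python
def structures_bisimilar(init1, init2, bisimilar):
    """K1 ~=_L K2 iff every initial state of K1 has a weak low-bisimilar
    initial state in K2, and vice versa (states taken from K1 (+) K2)."""
    return (all(any(bisimilar(s1, s2) for s2 in init2) for s1 in init1) and
            all(any(bisimilar(s1, s2) for s1 in init1) for s2 in init2))
```

Note the check is symmetric: both directions are required, mirroring the "and vice versa" in the text.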
Definition 6
(Divergence weak low-bisimulation ${\mathbf{\approx}}_{\mathit{L}}^{\mathit{div}}$ [19]). Let $R$ be an equivalence relation on $S$. A state $s\in S$ is $R$-divergent if there exists an infinite path fragment $\pi =s{s}_{1}{s}_{2}\dots \in Paths(s)$ such that $s\ R\ {s}_{j}$ for all $j>0$. Stated in words, a state $s$ is $R$-divergent if there is an infinite path starting in $s$ that only visits states in ${[s]}_{R}$, the equivalence class of $s$ under the equivalence relation $R$. $R$ is divergence-sensitive if for any ${s}_{1}\ R\ {s}_{2}$: if ${s}_{1}$ is $R$-divergent, then ${s}_{2}$ is $R$-divergent. States ${s}_{1}$, ${s}_{2}$ are divergence weak low-bisimilar, denoted ${s}_{1}{\approx}_{L}^{div}{s}_{2}$, if there exists a divergence-sensitive weak low-bisimulation $R$ such that ${s}_{1}\ R\ {s}_{2}$.
We defined ${\approx}_{L}^{div}$ between two states. It can also be defined between two paths.
Definition 7
(Divergence weak low-bisimilar paths [19]). For infinite path fragments ${\pi}_{i}={s}_{0,i}{s}_{1,i}{s}_{2,i}\dots$, $i=1,2$ in $\mathcal{K}$, ${\pi}_{1}$ is divergence weak low-bisimilar to ${\pi}_{2}$, denoted ${\pi}_{1}{\approx}_{L}^{div}{\pi}_{2}$, if and only if there exist infinite sequences of indices $0={j}_{0}<{j}_{1}<{j}_{2}<\dots$ and $0={k}_{0}<{k}_{1}<{k}_{2}<\dots$ with:
$${s}_{j,1}{\approx}_{L}^{div}{s}_{k,2}\quad \text{for all}\ {j}_{r-1}\le j<{j}_{r}\ \text{and}\ {k}_{r-1}\le k<{k}_{r}\ \text{with}\ r=1,2,\dots$$
The following lemma asserts that divergence weak low-bisimilar states produce paths that are divergence weak low-bisimilar, and that the initial states of divergence weak low-bisimilar paths are themselves divergence weak low-bisimilar. The fact that ${\approx}_{L}^{div}$ can be lifted from states to paths and vice versa will be used in proving the correctness of the verification algorithm in Section 4.2.
Lemma 1.
Divergence weak low-bisimilar states have divergence weak low-bisimilar paths and vice versa:
$${s}_{1}{\approx}_{L}^{div}{s}_{2}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}iff\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\forall {\pi}_{1}\in Paths\left({s}_{1}\right)\phantom{\rule{4pt}{0ex}}(\exists {\pi}_{2}\in Paths\left({s}_{2}\right).\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}{\pi}_{1}{\approx}_{L}^{div}{\pi}_{2})$$
Proof.
See Baier and Katoen [19], page 550. □
Divergence weak low-bisimulation is an equivalence relation and partitions the set of states of a Kripke structure into equivalence classes. The result is called the quotient Kripke structure. For a state set $S$ and an equivalence relation $R$, ${[s]}_{R}=\{{s}^{\prime}\in S \mid (s,{s}^{\prime})\in R\}$ defines the equivalence class of the state $s\in S$ under $R$. The quotient Kripke structure with respect to ${\approx}_{L}^{div}$ is defined as follows.
Definition 8
(Divergence weak low-bisimulation quotient $\mathcal{K}/{\approx}_{L}^{div}$). The divergence weak low-bisimulation quotient of a Kripke structure $\mathcal{K}$ is defined by $\mathcal{K}/{\approx}_{L}^{div}=(S/{\approx}_{L}^{div},{\to}^{\prime},{I}^{\prime},AP,{V}^{\prime})$, where $S/{\approx}_{L}^{div}=\{{[s]}_{{\approx}_{L}^{div}} \mid s\in S\}$, ${V}^{\prime}({[s]}_{{\approx}_{L}^{div}})=V(s)$, ${I}^{\prime}=\{{[s]}_{{\approx}_{L}^{div}} \mid s\in I\}$ and ${\to}^{\prime}$ is defined by
$$\frac{s\to {s}^{\prime}\ \wedge\ s\ {\not\approx}_{L}^{div}\ {s}^{\prime}}{{[s]}_{{\approx}_{L}^{div}}\ {\to}^{\prime}\ {[{s}^{\prime}]}_{{\approx}_{L}^{div}}}\qquad \text{and}\qquad \frac{s\ \text{is}\ {\approx}_{L}^{div}\text{-divergent}}{{[s]}_{{\approx}_{L}^{div}}\ {\to}^{\prime}\ {[s]}_{{\approx}_{L}^{div}}}$$
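Computing such a quotient presupposes the equivalence classes themselves. As a simplified sketch of the underlying partition-refinement idea, the following computes strong-bisimulation classes starting from label equality; the paper's divergence weak low-bisimulation additionally requires closure over stutter steps and divergence sensitivity, which this sketch deliberately omits:

```python
def bisim_classes(states, post, label):
    """Naive partition refinement: start from blocks of equally-labeled states
    and split a block whenever two of its states reach different sets of
    blocks in one step. Terminates on finite state spaces."""
    # Initial partition: states with equal labels share a block.
    blocks = {}
    for s in states:
        blocks.setdefault(label(s), set()).add(s)
    partition = list(blocks.values())

    changed = True
    while changed:
        changed = False

        def block_of(s):
            return next(i for i, b in enumerate(partition) if s in b)

        new_partition = []
        for b in partition:
            # Signature of a state: the set of blocks its successors hit.
            sig = {}
            for s in b:
                key = frozenset(block_of(t) for t in post(s))
                sig.setdefault(key, set()).add(s)
            new_partition.extend(sig.values())
            if len(sig) > 1:
                changed = True
        partition = new_partition
    return partition
```

Each refinement round is polynomial in the number of states and transitions, which matches the overall polynomial complexity claimed for the verification method.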
3. Related Work
In the literature, there are various definitions of observational determinism. These definitions formalize determinism in terms of various forms of stutter equivalence of the program executions, such as prefix and stutter equivalence on traces of each public variable [6], stutter equivalence on traces of each public variable [7], prefix and stutter equivalence on traces of all public variables together [8], stutter equivalence on traces of all public variables [9,20], scheduler-specific stutter equivalence on traces of each public variable and also on traces of all public variables together [10] and equivalence to public operations [11].
For most of these definitions, model checking algorithms have been presented [7,9,10]. Huisman and Blondeel [9] specify observational determinism in the modal $\mu$-calculus. They use the Concurrency Workbench (CWB) as the model checking tool, encoding the self-composed program and the modal $\mu$-calculus formulation of observational determinism in CWB's specification language. Dabaghchian and Abdollahi [20] specify observational determinism in Linear Temporal Logic (LTL). They encode the self-composed program and the LTL formula in the PROMELA specification language, using the SPIN model checker. Ngo et al. [10] present two bisimulation-based algorithms to verify observational determinism. These model checking methods have exponential time complexity, while verifying BOD requires polynomial time. BOD is slightly more restrictive than these definitions, as it uses divergence weak low-bisimulation (divergence weak low-bisimulation implies stutter equivalence, but the reverse does not necessarily hold; see Baier and Katoen [19], page 549). This restriction is acceptable, as most applications require strict confidentiality properties in order to avoid leakage of sensitive information.
Giffhorn and Snelting [21] define observational determinism in terms of low-equivalence on public operations and present a program analysis algorithm for verifying it. The algorithm models concurrent programs as dependence graphs. In a program dependence graph, nodes represent program statements and edges represent data and control dependencies. Giffhorn and Snelting's definition of observational determinism is based on traces which consist of read/write operations and memory values and are enriched with control and data dependencies. This differs from our definition of BOD and many other definitions of observational determinism, in which traces are defined using public variable values.
Another widely used scheduler-independent property is strong security, introduced by Sabelfeld and Sands [13]. They define a partial equivalence relation, called strong low-bisimulation, that relates two multithreaded programs with the same number of threads only if they execute in lockstep and affect the public variables in the same manner. A multithreaded program then satisfies strong security if it is related to itself. Strong security is too strong: it requires step-by-step indistinguishability, which means that many intuitively secure programs are rejected. Compared to strong security, BOD is much more permissive on harmless programs.
Mantel and Sudbrock [16] propose Flexible Scheduler-Independent (FSI) security for multithreaded programs. They partition the threads of the program into high threads, which definitely do not modify public variables, and public threads, which potentially modify public variables. An order-preserving bijection, called a low matching, maps the positions of public threads in two multithreaded programs. They then define a partial equivalence relation, called low-bisimulation modulo low matching, which relates two multithreaded programs with the same number of threads if each step of a public thread in one execution is matched by a step of the matching public thread in the other execution. Thus, a multithreaded program is FSI-secure if it is related to itself. FSI security is less restrictive than strong security but is language-dependent. Note that BOD is language-independent.
Type systems have been widely used for verifying secure information flow properties [8,13,16,18,22,23,24]. They are language-dependent, in the sense that a simple language is considered and the confidentiality property is defined using the semantics of this language. Type systems are useful because they support automated, compositional verification. However, they are not extensible: for each change in the programming language or the security property, a new type system needs to be defined and proven sound [25]. On the other hand, security requirements are subject to dynamic changes. Accordingly, we use algorithmic verification and model checking instead of type systems to verify secure information flow.
Another verification approach for information flow properties is to extend temporal logics and introduce new logics to specify these properties. HyperLTL [26] is a recently introduced temporal logic that extends Linear-time Temporal Logic (LTL) [27]. Runtime verification of information flow properties, including observational determinism, using HyperLTL is discussed in [28,29]. Verifying BOD using HyperLTL would be interesting future work.
4. The Proposed Approach
The proposed approach for verifying observational determinism consists of two main parts: (1) a new formalization, Bisimulation-based Observational Determinism (BOD), for specifying secure information flow and (2) an algorithm for verifying BOD. BOD is discussed in Section 4.1 and the verification is explained in Section 4.2.
4.1. Bisimulation-Based Observational Determinism
Observational determinism needs to ensure that as secret inputs change, the public behavior of the program remains unchanged. Thus, it requires paths with low-equivalent initial states to be indistinguishable [6]. We characterize indistinguishability of paths by divergence weak low-bisimulation.
Definition 9
(BOD). A multithreaded program $MT$ satisfies BOD with respect to all public variables, if and only if
$$\forall\ \pi ,{\pi}^{\prime}\in Paths(\mathcal{K}).\ \ \pi[0]{=}_{L}{\pi}^{\prime}[0]\Longrightarrow \pi {\approx}_{L}^{div}{\pi}^{\prime}$$
where $\mathcal{K}$ denotes the Kripke structure of the program $MT$, ${=}_{L}$ is the state low-equivalence relation and ${\approx}_{L}^{div}$ is the divergence weak low-bisimulation relation. BOD is an acronym for Bisimulation-based Observational Determinism.
Stated in words, BOD requires two initial states that have the same public data but different secret data to produce paths that visit the same sequence of ${\approx}_{L}^{div}$-equivalence classes and affect the public variables in the same order. This implies weak stepwise indistinguishability, by which direct, indirect and internally observable timing flows are detected. Termination behavior flows are eliminated by adding self-loops to terminal states. As discussed earlier, paths of the program result from the resolution of the possible nondeterminism in the program and the probability of a path is not considered; this research hence does not consider probabilistic flows. In order to handle externally observable timing flows, a confidentiality property would have to be highly restrictive, ensuring that the execution time does not depend on secret inputs. Requiring such a high degree of restriction causes rejection of many intuitively secure programs and harms the precision of the property. That is why BOD does not impose it. A confidentiality property should be restrictive enough that it does not accept leaky programs, yet permissive enough that it does not reject intuitively secure programs.
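Once a quotient (or any other representation of the ${\approx}_{L}^{div}$-equivalence classes) is available, the BOD check of Definition 9 reduces to a simple test on initial states: any two initial states with the same public label must lie in the same equivalence class. A hedged sketch (function and parameter names are ours):

```python
def satisfies_bod(initial_states, label, class_of):
    """BOD check against a precomputed equivalence: all initial states that
    share a public label must fall into the same bisimulation class."""
    rep = {}  # label -> class of the first initial state seen with that label
    for s in initial_states:
        c = class_of(s)
        if rep.setdefault(label(s), c) != c:
            return False  # two low-equivalent initial states in different classes
    return True
```

For program P1, the two initial states carry the same label but (as argued above) are not divergence weak low-bisimilar, so this check rejects it.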
As an example of information flow analysis using BOD, consider the following program
l:=0; h:=l+3; l:=l+1 (P2)
The program is intuitively secure because h is not read. The Kripke structure of the program is depicted in Figure 2.
It has just one path ${s}_{0}{s}_{1}{s}_{2}^{\omega}$. Thus, P2 satisfies BOD. As another example, consider the following secure program in which l is not updated
if l>0 then h:=h+1 else h:=0 (P3)
This program is clearly BOD-secure, because for each value of l, the Kripke structure has one state with a self-loop.
For insecure examples, consider the program P (Figure 1). The two threads are each secure when executed separately. However, concurrent execution of the two threads might leak the value of h into l. The paths of this program are not divergence weak low-bisimilar, and hence the program is not BOD-secure. As another example, consider the following program
l:=0; while h>0 do {l++; h--} (P4)
This program is insecure and leaks the value of h into l. The Kripke structure of the program is shown in Figure 3.
The set of paths is $\{{s}_{0}^{\omega},{s}_{1}{s}_{2}^{\omega},{s}_{3}{s}_{4}{s}_{5}^{\omega},\dots \}$. These paths are not divergence weak low-bisimilar and thus BOD correctly recognizes the program as insecure. For an example of externally observable timing flow, consider the following program
l:=0; if h>0 then l:=1 else {sleep 100; l:=1} (P5)
where sleep 100 abbreviates 100 consecutive skip commands. The Kripke structure of the program is depicted in Figure 4.
The paths produced are ${s}_{0}{s}_{1}^{\omega}$ and ${s}_{2}{s}_{3}$ …${s}_{102}{s}_{103}^{\omega}$, which are divergence weak low-bisimilar. Therefore, BOD (incorrectly) classifies the program as secure, while the program has an external timing leak. As discussed earlier, BOD (and all other definitions of observational determinism) does not attempt to detect or prevent external timing leaks.
For an example of internally observable timing flow, consider the following insecure program
l:=0 || l:=2 || (if h=1 then sleep 100); l:=1 (P6)
A part of the Kripke structure is depicted in Figure 5.
Since, for example, the two paths ${s}_{0}{s}_{1}{s}_{2}{s}_{3}^{\omega}$ and ${s}_{0}{s}_{4}{s}_{5}{s}_{6}^{\omega}$ are not divergence weak low-bisimilar, BOD correctly recognizes the program as insecure.
4.2. Verifying BOD
Here we discuss the proposed model checking algorithm for BOD verification. First, for each set of low-equivalent initial states, an arbitrary path ${\pi}_{i}$ is extracted, and ${\approx}_{L}^{div}$ is checked between ${\pi}_{i}$ and each path starting in these low-equivalent initial states.
A main step of the verification algorithm is to construct the quotient structure of the program w.r.t. ${\approx}_{L}^{div}$. The divergence weak low-bisimulation quotient is an abstraction of the Kripke structure, and by considering it, enormous state-space reductions may be obtained [19]. We use an abstraction refinement technique to compute the quotient structure.
4.2.1. The Algorithm
The main steps of the verification algorithm are outlined in Algorithm 1.
Algorithm 1 Verification of BOD 

The input of the algorithm is a finite Kripke structure $\mathcal{K}=(S,\to ,I,AP,V)$ modeling the program and the output is true or false. $I$ is partitioned into sets of low-equivalent initial states, called initial state blocks $ISC_0,\dots,ISC_m$, and $ISC=\{ISC_0,\dots,ISC_m\}$ is defined. An arbitrary path ${\pi}_{i}\in Paths(ISC_i)$ is extracted from $\mathcal{K}$ for each $ISC_i$ and a Kripke structure ${\mathcal{KP}}_{i}$ is built from ${\pi}_{i}$. The Kripke structures ${\mathcal{KP}}_{i}$ ($i=0,\dots,|ISC|-1$) are combined into a single Kripke structure ${\mathcal{KP}}_{min}=({S}_{{\mathcal{KP}}_{min}},{\to}_{{\mathcal{KP}}_{min}},{I}_{{\mathcal{KP}}_{min}},A{P}_{{\mathcal{KP}}_{min}},{V}_{{\mathcal{KP}}_{min}})$, where ${\mathcal{KP}}_{min}={\mathcal{KP}}_{0}\oplus {\mathcal{KP}}_{1}\oplus \dots \oplus {\mathcal{KP}}_{|ISC|-1}$. The Kripke structure ${\mathcal{K}}^{\prime}=({S}^{\prime},{\to}^{\prime},{I}^{\prime},AP,{V}^{\prime})$ is also constructed, where ${\mathcal{K}}^{\prime}=\mathcal{K}\oplus {\mathcal{KP}}_{min}$, and the quotient of ${\mathcal{K}}^{\prime}$ w.r.t. ${\approx}_{L}^{div}$ is computed. Then, $\mathcal{K}$ satisfies BOD iff
$$\forall i\in \{0,\dots,|ISC|-1\}.\quad ISC_i \subseteq {[\pi_i[0]]}_{\approx_L^{div}}$$
where ${[\pi_i[0]]}_{\approx_L^{div}}$ denotes the equivalence class of $\pi_i[0]$ in ${\mathcal{K}}^{\prime}/{\approx}_{L}^{div}$. Stated in words, $\mathcal{K}$ satisfies BOD if, after computing the divergence weak low-bisimulation quotient of ${\mathcal{K}}^{\prime}=\mathcal{K}\oplus {\mathcal{KP}}_{min}$, all initial states of an initial state block and the initial state of the corresponding arbitrary path belong to the same equivalence class.
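The final containment check can be sketched as follows. The set-based representation of blocks and equivalence classes is an assumption made for illustration; it is not the data structure of the actual implementation:

```python
# Sketch of the final check: K satisfies BOD iff every initial state
# block ISC_i is contained in the equivalence class of pi_i[0] in the
# quotient K'/~_L^div.

def satisfies_bod(initial_blocks, pi_initials, classes):
    """initial_blocks: the sets ISC_i; pi_initials: the states pi_i[0];
    classes: the equivalence classes of the quotient, as disjoint sets."""
    for block, s0 in zip(initial_blocks, pi_initials):
        cls = next(c for c in classes if s0 in c)   # the class [pi_i[0]]
        if not block <= cls:                        # ISC_i must be a subset
            return False
    return True

# one block whose states all share a class with pi_0[0]: BOD holds
classes = [{"a", "b", "c"}, {"d", "e"}]
assert satisfies_bod([{"a", "b"}], ["c"], classes)
```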
Now we discuss the main steps of Algorithm 1.
Taking an arbitrary path. To take an arbitrary path from $\mathcal{K}$, a depth-first search is done. The search starts from an initial state and stops at a terminal state (a state with a self-loop).
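A minimal sketch of this step, assuming the transition relation is stored as a successor map (a representation chosen for illustration):

```python
# Follow one successor at a time, stopping at a terminal state, i.e., a
# state whose only transition is the self-loop added to terminal states.
def arbitrary_path(transitions, init):
    path, current, seen = [init], init, {init}
    while True:
        succs = transitions[current]
        if succs == [current]:            # terminal state: self-loop only
            return path
        # pick any successor that makes progress
        nxt = next(s for s in succs if s != current)
        if nxt in seen:                   # guard: stop if a state repeats
            return path
        seen.add(nxt)
        path.append(nxt)
        current = nxt

# the single path of program P2 (Figure 2): s0 s1 s2^omega
assert arbitrary_path({"s0": ["s1"], "s1": ["s2"], "s2": ["s2"]}, "s0") \
    == ["s0", "s1", "s2"]
```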
Computing the divergence weak low-bisimulation quotient. Before explaining how to compute the quotient with respect to ${\approx}_{L}^{div}$, some definitions are provided.
Definition 10
(Stutter cycle). A stutter cycle is a cycle ${s}_{0}{s}_{1}\dots {s}_{n}$ in $\mathcal{K}$ such that ${s}_{0}{=}_{L}{s}_{i}$ for $i=1,\dots ,n$.
Definition 11
(Divergence-sensitive Expansion $\overline{\mathcal{K}}$). The divergence-sensitive expansion of a finite Kripke structure $\mathcal{K}=(S,\to ,I,AP,V)$ is $\overline{\mathcal{K}}=(S\cup \left\{{s}_{div}\right\},\to ,I,AP\cup \left\{div\right\},\overline{V})$, where ${s}_{div}\notin S$, → extends the transition relation of $\mathcal{K}$ by the transitions ${s}_{div}\to {s}_{div}$ and $s\to {s}_{div}$ for every state $s\in S$ on a stutter cycle in $\mathcal{K}$, and $\overline{V}\left(s\right)=V\left(s\right)$ if $s\in S$ and $\overline{V}\left({s}_{div}\right)=\left\{div\right\}$.
In order to compute the quotient with respect to ${\approx}_{L}^{div}$, ${\mathcal{K}}^{\prime}$ is transformed into its divergence-sensitive expansion $\overline{{\mathcal{K}}^{\prime}}$, such that the equivalence classes under ${\approx}_{L}$ in $\overline{{\mathcal{K}}^{\prime}}$ coincide with the equivalence classes under ${\approx}_{L}^{div}$ in ${\mathcal{K}}^{\prime}$. To construct the divergence-sensitive expansion, all states on a stutter cycle must be determined, as a transition from each of these states to ${s}_{div}$ will be added. This is done by finding the strongly connected components (SCCs) in ${\mathcal{K}}^{\prime}$ that only contain stutter steps (a transition $s\to {s}^{\prime}$ is a stutter step if $s{=}_{L}{s}^{\prime}$), which can be carried out using a depth-first search algorithm. Computation of the quotient with respect to ${\approx}_{L}^{div}$ is thereby reduced to the problem of computing the quotient with respect to ${\approx}_{L}$. The algorithm proposed by Groote and Vaandrager [30] is used to compute the quotient of $\overline{{\mathcal{K}}^{\prime}}$ with respect to ${\approx}_{L}$. The algorithm starts by partitioning the state space based on the low-equivalence relation; thus, the first partition contains blocks of low-equivalent states. Each block is then refined based on the set of states reachable from the block. Refinement continues until no further refinement is possible and the blocks are stable. For more details of the algorithm, please refer to Groote and Vaandrager [30] and Baier and Katoen [19]. Finally, ${\mathcal{K}}^{\prime}/{\approx}_{L}^{div}$ is obtained from $\overline{{\mathcal{K}}^{\prime}}/{\approx}_{L}$ by replacing the transitions $s\to {s}_{div}$ in $\overline{{\mathcal{K}}^{\prime}}$ with self-loops ${\left[s\right]}_{{\approx}_{L}^{div}}\to {\left[s\right]}_{{\approx}_{L}^{div}}$ and removing the state ${s}_{div}$.
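The expansion step can be sketched as follows. The successor-map representation is assumed for illustration, and states on stutter cycles are found with a naive reachability check instead of the linear-time SCC computation described above:

```python
# Sketch of the divergence-sensitive expansion: add a fresh state s_div
# and a transition s -> s_div for every state s on a stutter cycle.
def expand(states, transitions, low_eq):
    # keep only stutter steps: transitions between low-equivalent states
    stutter = {s: {t for t in transitions[s] if low_eq(s, t)} for s in states}

    def on_stutter_cycle(s):
        # s lies on a stutter cycle iff s reaches itself via stutter steps
        seen, stack = set(), [s]
        while stack:
            u = stack.pop()
            for t in stutter[u]:
                if t == s:
                    return True
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return False

    expanded = {s: set(transitions[s]) for s in states}
    expanded["s_div"] = {"s_div"}          # fresh state s_div with self-loop
    for s in states:
        if on_stutter_cycle(s):            # add s -> s_div
            expanded[s].add("s_div")
    return expanded
```

Replacing `on_stutter_cycle` with an SCC decomposition restricted to stutter steps yields the linear-time variant used in the algorithm.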
4.2.2. Correctness of the Algorithm
Correctness of Algorithm 1 is established by the following theorem: the algorithm is correct if it returns true exactly when the input Kripke structure satisfies BOD, and false otherwise.
Theorem 1.
A Kripke structure $\mathcal{K}$ satisfies BOD iff
$$\forall i\in \{0,\dots,|ISC|-1\}.\quad ISC_i \subseteq {[\pi_i[0]]}_{\approx_L^{div}}$$
Proof.
Given $\mathcal{K}=(S,\to ,I,AP,V)$, BOD requires that all paths starting in low-equivalent initial states are divergence weak low-bisimilar. To prove BOD, for each $ISC_i$ ($i=0,\dots,|ISC|-1$) an arbitrary path ${\pi}_{i}\in Paths(ISC_i)$ is extracted and divergence weak low-bisimilarity between ${\pi}_{i}$ and each path $\sigma \in Paths(ISC_i)$ is checked. ${\pi}_{i}$ is divergence weak low-bisimilar to $\sigma$ if and only if ${\pi}_{i}\left[0\right]$ is divergence weak low-bisimilar to $\sigma \left[0\right]$; this follows from Lemma 1. Then, ${\pi}_{i}\left[0\right]$ is divergence weak low-bisimilar to each $\sigma \left[0\right]$ if and only if
$$\forall C\in S'/{\approx}_{L}^{div}.\quad \pi_i[0]\in C \Longleftrightarrow \sigma[0]\in C$$
where ${S}^{\prime}/{\approx}_{L}^{div}$ is the quotient of ${S}^{\prime}$ w.r.t. ${\approx}_{L}^{div}$ and $C$ is an equivalence class. Note that all states of an initial state block should have divergence weak low-bisimilar paths and hence belong to the same equivalence class; the above condition ensures the latter too.
Thus, BOD requires that all initial states of an initial state block $ISC_i$ and the initial state of the corresponding arbitrary path ${\pi}_{i}$ belong to the same block in ${\mathcal{K}}^{\prime}/{\approx}_{L}^{div}$. Then, $\mathcal{K}$ satisfies BOD iff
$$\forall i\in \{0,\dots,|ISC|-1\}.\quad ISC_i \subseteq {[\pi_i[0]]}_{\approx_L^{div}}$$
where ${[\pi_i[0]]}_{\approx_L^{div}}$ denotes the equivalence class of $\pi_i[0]$ in ${\mathcal{K}}^{\prime}/{\approx}_{L}^{div}$. □
4.2.3. Complexity of the Algorithm
Assume $t$ is the number of transitions of $\mathcal{K}$. The time complexity of taking an arbitrary path is $O(t+|S|)$. Thus, the time complexity of constructing ${\mathcal{KP}}_{min}$ is $O(|I|\cdot (t+|S|))$. The time complexity of determining the SCCs in ${\mathcal{K}}^{\prime}$ is $O(t'+|S'|)$, where $t'$ denotes the number of transitions of ${\mathcal{K}}^{\prime}$. The quotient space of ${\mathcal{K}}^{\prime}$ under ${\approx}_{L}^{div}$ can be computed in time $O((|S'|+t')+|S'|\cdot (|AP|+t'))$, under the assumption that $t'\ge |S'|$. Thus, the cost of verifying BOD is dominated by the costs of finding the SCCs and computing the quotient space under ${\approx}_{L}^{div}$, both of which are polynomial-time.
4.2.4. Implementation and Case Study
We have implemented the proposed approach on top of the PRISM model checker [31]. PRISM is a tool for modeling and analyzing concurrent and probabilistic systems. It includes a state-based language, called the PRISM language, for specifying systems and programs. The PRISM tool uses binary decision diagrams to build a state model of a PRISM program and to store the states and transitions of the model. A PRISM program can contain a set of modules which run in parallel; the overall model of the program contains all possible transitions and interleavings of the modules. A PRISM program can also contain global variables, which can be accessed and modified by all modules of the program.
We use PRISM modules to represent the threads of a multithreaded program. Global variables represent the shared memory of the multithreaded program. Then, the state model (Kripke structure) is built as the parallel composition of the modules, containing all possible interleavings. We modified the PRISM parser, adding two reserved keywords, observable and secret, to mark public (observable) and secret variables.
For evaluation, we take a case study (described below) and specify it in the PRISM language. We run PRISM to build the Kripke structure of the program. We extract the set of reachable states and create a sparse matrix containing the transitions. We then traverse the Kripke structure, take an arbitrary path for each initial state block, and add the path to the Kripke structure with new state numbers. The divergence weak low-bisimulation quotient of the new model is computed and BOD is checked.
As a case study, consider the following program, which is borrowed from [23,32]. The program—which we call the Smith-Volpano program—consists of three threads.
Thread α:
    while mask != 0 do
        while trigger0 = 0 do; /* busy waiting */
        result := result | mask; /* bitwise 'or' */
        trigger0 := 0;
        maintrigger := maintrigger + 1;
        if maintrigger = 1 then trigger1 := 1

Thread β:
    while mask != 0 do
        while trigger1 = 0 do; /* busy waiting */
        result := result & ~mask; /* bitwise 'and' with the complement of mask */
        trigger1 := 0;
        maintrigger := maintrigger + 1;
        if maintrigger = 1 then trigger0 := 1

Thread γ:
    while mask != 0 do
        maintrigger := 0;
        if (PIN & mask) = 0 then trigger0 := 1 else trigger1 := 1;
        while maintrigger != 2 do; /* busy waiting */
        mask := mask / 2;
    trigger0 := 1;
    trigger1 := 1
Assume that PIN is a secret variable and result is a public variable. If the scheduling of the threads is fair, that is, each thread gets its turn infinitely often, and the program starts in an initial state where maintrigger=0, trigger0=0, trigger1=0 and result=0, the program leaks some bits of PIN into result. If PIN is n bits long and the initial value of mask is equal to ${2}^{k}$ ($k<n$), then the Smith-Volpano program leaks $k+1$ bits of PIN into result.
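The net effect of the protocol can be reproduced with a short sequential simulation. This is a sketch that collapses the scheduling into the order in which thread γ signals α and β (which is exactly what the fair schedule achieves); the function name is ours:

```python
# For each mask value 2^k, 2^(k-1), ..., 1, thread gamma schedules alpha
# (which sets the bit: result |= mask) and beta (which clears it:
# result &= ~mask) in an order depending on the corresponding bit of PIN,
# so the bit that survives in result equals the bit of PIN.
def smith_volpano(pin, k):
    result, mask = 0, 2 ** k
    while mask != 0:
        if pin & mask == 0:          # gamma signals alpha first
            result |= mask           # alpha sets the bit ...
            result &= ~mask          # ... then beta clears it -> bit is 0
        else:                        # gamma signals beta first
            result &= ~mask          # beta clears the bit ...
            result |= mask           # ... then alpha sets it -> bit is 1
        mask //= 2
    return result

# the k+1 low-order bits of PIN leak into the public variable:
assert all(smith_volpano(pin, 1) == pin % 4 for pin in range(8))
```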
The PRISM description for the Smith-Volpano program is given in Appendix A. It is composed of three modules, Alpha, Beta and Gamma, each modeling a thread. For the sake of brevity, we only allow modules to change turn when they are in the busy-waiting state. The global variables result, mask, pin, trigger0, trigger1, maintrigger and turn encode the shared variables (memory) of the program. States of the model are represented by the values of its variables, that is, (result, mask, pin, trigger0, trigger1, maintrigger, turn, c1, c2, c3).
For $n=2$, the Kripke structure of the Smith-Volpano program has 228 states and 236 transitions. It contains 4 initial states, in which result (the public variable) is 0 and PIN (the secret variable) varies between 0 and 3. Thus, there is one initial state block, and a path containing 46 states is extracted and added to the Kripke structure. Then, the quotient is computed; it contains 16 blocks. Since the initial state of the extracted path and the initial states of the Kripke structure do not all belong to the same block, the approach labels the program as BOD-insecure.
Each increase in n (i.e., the bit size of PIN) doubles the number of states and transitions of the Kripke structure. However, in all cases the computed quotient contains 16 blocks and the program is labeled as BOD-insecure.
Since the approach uses binary decision diagrams to construct the Kripke model and a sparse matrix is utilized to access the Kripke’s transitions, it computes the quotient and verifies BOD in seconds.
To our knowledge, no other evaluations of BOD have been published, so we cannot perform a quantitative comparison of our implementation with other algorithms. However, we compare our approach to closely related work by Ngo [32]. Ngo discusses the Smith-Volpano program as a case study for his work on observational determinism. He formalizes observational determinism by stutter equivalence on the traces of each public variable, and also on the traces of all public variables together. He then presents two algorithmic verification methods for checking observational determinism, implemented on the LTSmin toolset [33]. As a case study, he specifies the Smith-Volpano program in the PRISM language and uses the PRISM tool to export its state machine into three text files. His method then reads the state machine from the text files and uses the LTSmin toolset to verify observational determinism. His verification algorithms have exponential time complexity, while our algorithm is polynomial. Furthermore, Ngo's method reads the input model from three text files, while our method builds the model on-the-fly using binary decision diagrams. This makes our method considerably faster than Ngo's methods.
5. Conclusions and Future Work
Aiming at a widely applicable, scheduler-independent analysis for secure information flow, a bisimulation-based foundation was proposed in terms of the semantics of state transition systems. Concretely, BOD was formalized to specify secure information flow by requiring indistinguishability of executions of the program. Indistinguishability was characterized in terms of divergence weak low-bisimulation between paths of the transition system of the program. Then, a model checking algorithm was proposed to verify BOD. The algorithm constructs an abstraction of the input model and then checks BOD on it. We proved the correctness of the algorithm and discussed its time complexity. Finally, the implementation and a case study were discussed. We showed how the proposed approach can be used to model and analyze secure information flow of multithreaded programs.
As future work, we plan to develop a compositional analysis for multithreaded programs, as compositionality can be important in making the analysis scale. We believe thread-modular verification [34] is a good candidate for such an analysis. One could also use temporal logics, such as HyperLTL [26], to logically specify BOD and then use model checking methods and tools to check it.
Author Contributions
Supervision, J.K. and A.I.; Writing, review and editing, A.A.N.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A
The PRISM description for the Smith-Volpano program, in the case where the initial value of mask equals 2, is given below.
dtmc

const int n = 3; // number of bits of the pin variable

global result : [0..pow(2,n)-1];
global mask : [0..pow(2,n)-1];
global pin : [0..pow(2,n)-1];
global trigger0 : [0..1];
global trigger1 : [0..1];
global maintrigger : [0..2];
global turn : [1..3];

module Alpha
    c1 : [0..5];
    [] turn=1 & c1=0 & mask!=0 -> (c1'=1);
    [] turn=1 & c1=1 & trigger0=0 & trigger1=1 -> (turn'=2);
    [] turn=1 & c1=1 & trigger0=0 & trigger1!=1 -> (turn'=3);
    [] turn=1 & c1=1 & trigger0!=0 -> (c1'=2);
    [] turn=1 & c1=2 & mod(floor(result/mask),2)=0 & result+mask<=pow(2,n)-1 -> (result'=result+mask) & (c1'=3);
    [] turn=1 & c1=2 & mod(floor(result/mask),2)=1 -> (c1'=3);
    [] turn=1 & c1=3 -> (trigger0'=0) & (c1'=4);
    [] turn=1 & c1=4 & maintrigger<2 -> (maintrigger'=maintrigger+1) & (c1'=5);
    [] turn=1 & c1=5 & maintrigger=1 -> (trigger1'=1) & (c1'=0);
    [] turn=1 & c1=5 & maintrigger!=1 -> (c1'=0);
endmodule

module Beta
    c2 : [0..5];
    [] turn=2 & c2=0 & mask!=0 -> (c2'=1);
    [] turn=2 & c2=1 & trigger1=0 & trigger0=1 -> (turn'=1);
    [] turn=2 & c2=1 & trigger1=0 & trigger0!=1 -> (turn'=3);
    [] turn=2 & c2=1 & trigger1!=0 -> (c2'=2);
    [] turn=2 & c2=2 & mod(floor(result/mask),2)=1 -> (result'=result-mask) & (c2'=3);
    [] turn=2 & c2=2 & mod(floor(result/mask),2)=0 -> (c2'=3);
    [] turn=2 & c2=3 -> (trigger1'=0) & (c2'=4);
    [] turn=2 & c2=4 & maintrigger<2 -> (maintrigger'=maintrigger+1) & (c2'=5);
    [] turn=2 & c2=5 & maintrigger=1 -> (trigger0'=1) & (c2'=0);
    [] turn=2 & c2=5 & maintrigger!=1 -> (c2'=0);
endmodule

module Gamma
    c3 : [0..6];
    [] turn=3 & mask!=0 & c3=0 -> (c3'=1);
    [] turn=3 & mask=0 & c3=0 -> (c3'=5);
    [] turn=3 & c3=1 -> (maintrigger'=0) & (c3'=2);
    [] turn=3 & c3=2 & mod(floor(pin/mask),2)=0 -> (trigger0'=1) & (c3'=3);
    [] turn=3 & c3=2 & mod(floor(pin/mask),2)=1 -> (trigger1'=1) & (c3'=3);
    [] turn=3 & c3=3 & maintrigger=2 -> (c3'=4);
    [] turn=3 & c3=3 & maintrigger!=2 -> 0.5:(turn'=1) + 0.5:(turn'=2);
    [] turn=3 & c3=4 -> (mask'=floor(mask/2)) & (c3'=0);
    [] turn=3 & c3=5 -> (trigger0'=1) & (c3'=6);
    [] turn=3 & c3=6 -> (trigger1'=1);
endmodule

init
    mask=2 & result=0 & maintrigger=0 & trigger0=0 & trigger1=0 & c1=0 & c2=0 & c3=0 & turn=3
endinit
References
 Mastroeni, I.; Pasqua, M. Statically Analyzing Information Flows: An Abstract Interpretation-based Hyperanalysis for Noninterference. In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, SAC ’19, Limassol, Cyprus, 8–12 April 2019; pp. 2215–2223. [Google Scholar] [CrossRef]
 Smith, G. Principles of secure information flow analysis. In Malware Detection; Springer: Berlin/Heidelberg, Germany, 2007; pp. 291–307. [Google Scholar]
 Sabelfeld, A.; Myers, A.C. Language-based information-flow security. IEEE J. Sel. Areas Commun. 2003, 21, 5–19. [Google Scholar] [CrossRef]
 McLean, J. Proving noninterference and functional correctness using traces. J. Comput. Secur. 1992, 1, 37–57. [Google Scholar] [CrossRef]
 Roscoe, A.W. CSP and determinism in security modelling. In Proceedings of the 1995 IEEE Symposium on Security and Privacy, Oakland, CA, USA, 8–10 May 1995; pp. 114–127. [Google Scholar]
 Zdancewic, S.; Myers, A.C. Observational determinism for concurrent program security. In Proceedings of the 16th IEEE Computer Security Foundations Workshop, CSFW’03, Pacific Grove, CA, USA, 30 June–2 July 2003; pp. 29–43. [Google Scholar]
 Huisman, M.; Worah, P.; Sunesen, K. A temporal logic characterisation of observational determinism. In Proceedings of the 19th IEEE workshop on Computer Security Foundations, CSFW’06, Venice, Italy, 5–7 July 2006. [Google Scholar]
 Terauchi, T. A type system for observational determinism. In Proceedings of the 21st IEEE Computer Security Foundations Symposium, CSF’08, Pittsburgh, PA, USA, 23–25 June 2008; pp. 287–300. [Google Scholar]
 Huisman, M.; Blondeel, H.C. Model-checking secure information flow for multithreaded programs. In Proceedings of the Joint Workshop on Theory of Security and Applications, TOSCA’11, Saarbrücken, Germany, 31 March–1 April 2011; Springer: Berlin/Heidelberg, Germany, 2012; pp. 148–165. [Google Scholar]
 Ngo, T.M.; Stoelinga, M.; Huisman, M. Effective verification of confidentiality for multithreaded programs. J. Comput. Secur. 2014, 22, 269–300. [Google Scholar] [CrossRef]
 Bischof, S.; Breitner, J.; Graf, J.; Hecker, M.; Mohr, M.; Snelting, G. Low-deterministic security for low-nondeterministic programs. J. Comput. Secur. 2018, 1–32. [Google Scholar]
 Datta, A.; Franklin, J.; Garg, D.; Jia, L.; Kaynar, D. On Adversary Models and Compositional Security. IEEE Secur. Priv. 2011, 9, 26–32. [Google Scholar] [CrossRef]
 Sabelfeld, A.; Sands, D. Probabilistic noninterference for multithreaded programs. In Proceedings of the 13th IEEE Workshop on Computer Security Foundations, CSFW’00, Cambridge, UK, 3–5 July 2000; pp. 200–214. [Google Scholar]
 Balliu, M. Logics for Information Flow Security: From Specification to Verification. Ph.D. Thesis, KTH Royal Institute of Technology, Stockholm, Sweden, 2014. [Google Scholar]
 Russo, A.; Hughes, J.; Naumann, D.; Sabelfeld, A. Closing internal timing channels by transformation. In Proceedings of the Annual Asian Computing Science Conference, Doha, Qatar, 9–11 December 2006; pp. 120–135. [Google Scholar]
 Mantel, H.; Sudbrock, H. Flexible schedulerindependent security. In Proceedings of the 15th European Conference on Research in Computer Security, ESORICS’10, Athens, Greece, 20–22 September 2010; pp. 116–133. [Google Scholar]
 Boudol, G.; Castellani, I. Noninterference for concurrent programs and thread systems. Theor. Comput. Sci. 2002, 281, 109–130. [Google Scholar] [CrossRef]
 Smith, G. Probabilistic noninterference through weak probabilistic bisimulation. In Proceedings of the 16th IEEE Workshop on Computer Security Foundations, CSFW’03, Pacific Grove, CA, USA, 30 June–2 July 2003; pp. 3–13. [Google Scholar]
 Baier, C.; Katoen, J.P. Principles of Model Checking; MIT Press: Cambridge, MA, USA, 2008. [Google Scholar]
 Dabaghchian, M.; Abdollahi Azgomi, M. Model checking the observational determinism security property using PROMELA and SPIN. Form. Asp. Comput. 2015, 27, 789–804. [Google Scholar] [CrossRef]
 Giffhorn, D.; Snelting, G. A new algorithm for low-deterministic security. Int. J. Inf. Secur. 2015, 14, 263–287. [Google Scholar] [CrossRef]
 Volpano, D.; Irvine, C.; Smith, G. A sound type system for secure flow analysis. J. Comput. Secur. 1996, 4, 167–187. [Google Scholar] [CrossRef]
 Smith, G.; Volpano, D. Secure information flow in a multithreaded imperative language. In Proceedings of the 25th ACM SIGPLANSIGACT Symposium on Principles of Programming Languages, POPL’98, San Diego, CA, USA, 19–21 January 1998; pp. 355–364. [Google Scholar]
 Volpano, D.; Smith, G. Probabilistic noninterference in a concurrent language. J. Comput. Secur. 1999, 7, 231–253. [Google Scholar] [CrossRef]
 Barthe, G.; D’Argenio, P.R.; Rezk, T. Secure information flow by self-composition. In Proceedings of the 17th IEEE Workshop on Computer Security Foundations, CSFW’04, Washington, DC, USA, 28–30 June 2004; pp. 100–114. [Google Scholar]
 Clarkson, M.R.; Finkbeiner, B.; Koleini, M.; Micinski, K.K.; Rabe, M.N.; Sánchez, C. Temporal logics for hyperproperties. In Proceedings of the Third International Conference on Principles of Security and Trust, POST’14, Grenoble, France, 5–13 April 2014; pp. 265–284. [Google Scholar]
 Pnueli, A. The temporal logic of programs. In Proceedings of the 18th Annual Symposium on Foundations of Computer Science (SFCS 1977), Providence, RI, USA, 30 September–31 October 1977; pp. 46–57. [Google Scholar] [CrossRef]
 Finkbeiner, B.; Hahn, C.; Stenger, M.; Tentrup, L. Monitoring Hyperproperties. In Runtime Verification; Lahiri, S., Reger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 190–207. [Google Scholar]
 Hahn, C.; Stenger, M.; Tentrup, L. Constraint-Based Monitoring of Hyperproperties. In Tools and Algorithms for the Construction and Analysis of Systems; Vojnar, T., Zhang, L., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 115–131. [Google Scholar]
 Groote, J.F.; Vaandrager, F. An efficient algorithm for branching bisimulation and stuttering equivalence. In Proceedings of the 17th International Colloquium on Automata, Languages and Programming, Coventry, UK, 16–20 July 1990; pp. 626–638. [Google Scholar]
 Kwiatkowska, M.; Norman, G.; Parker, D. PRISM 4.0: Verification of Probabilistic Real-time Systems. In Proceedings of the 23rd International Conference on Computer Aided Verification, CAV’11, Snowbird, UT, USA, 14–20 July 2011; Gopalakrishnan, G., Qadeer, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6806, pp. 585–591. [Google Scholar]
 Ngo, T.M. Qualitative and Quantitative Information Flow Analysis for Multi-Threaded Programs. Ph.D. Thesis, University of Twente, Enschede, The Netherlands, 2014. [Google Scholar]
 Blom, S.; van de Pol, J.; Weber, M. LTSmin: Distributed and Symbolic Reachability. In Computer Aided Verification; Touili, T., Cook, B., Jackson, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 354–359. [Google Scholar]
 Flanagan, C.; Freund, S.N.; Qadeer, S. Thread-modular verification for shared-memory programs. In Proceedings of the 11th European Symposium on Programming Languages and Systems, ESOP’02, Grenoble, France, 8–12 April 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 262–277. [Google Scholar]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).