
Introduction

Many widely used systems, such as web browsers and web applications, may be modelled as reactive programs, that is, programs that continuously listen and react to their environment by means of events. Since the environment may include mutually distrusting parties, such as a local user and a remote web server, reactive programs should be able to protect the confidentiality of the data they manipulate, by ensuring a secure information flow from the inputs they receive from one party to the outputs they release to another party.

Secure information flow is often formalised via the notion of noninterference (NI), expressing the absence of dependency between secret inputs and public outputs (or more generally, between inputs of some confidentiality level and outputs of a lower or incomparable level). Originally introduced in [12], NI has been studied for a variety of languages, ranging from standard imperative and functional languages [16, 18] to process calculi based on CCS or the pi-calculus [11]. On the other hand, little attention has been paid to noninterference for reactive programs, with the notable exceptions of [2, 13] and [7].

We shall focus here on a particular brand of reactive programming, namely the synchronous one, which was first embodied in the synchronous language SL [9], an offspring of Esterel [6], and later incorporated into various programming environments, such as C, Java, Caml and Scheme. In the synchronous paradigm, the parallel execution of programs is regulated by a notion of instant. The model of \(SL\) departs from that of Esterel in that it assumes the reaction to the absence of an event to be postponed until the end of the instant. This assumption helps to disambiguate programs and to simplify the implementation of the language. It is also essential to ensure the monotonicity of programs and their reactivity to the environment.

In this work, we will not explicitly model the interaction of a reactive program with the environment (this could be easily done but it would not bring any further insight). Instead, we concentrate on the interaction within a reactive program, making sure it regularly converges to a stable state (end of instant), in which the program is ready to interact with the environment. We call this property cooperativeness [1] or internal reactivity. In the sequel, we shall abandon the distinction between internal reactivity (among the components of a program) and external reactivity (towards the environment), to focus on the former.

This paper attempts to explore “secure reactive programming in a nutshell”. To this end, we concentrate on a minimal reactive language without memory, consisting of standard sequential operators, an asymmetric parallel operator \(\not \mid \) (formalising a kind of coroutine parallelism under a deterministic scheduling), together with four typical reactive constructs, which we briefly describe next.

In our Core Reactive Language \(CRL\), programs are made of parallel components \(s,s'\) – also called “threads” for simplicity in the following – combined with the operator \({s} \not \mid {s'}\,\) and communicating by means of broadcast events. Threads may emit events, via a \({\mathtt{generate}\,{ev}}\) instruction, and get suspended while waiting for events to be emitted by other threads, through an \({\mathtt{await}\,{ev}}\) instruction. They may also explicitly yield the control to the scheduler, via a cooperate instruction, thereby suspending themselves until the end of the current instant. An instant is therefore a period of time during which all threads compute until termination or suspension. Clearly, this is a logical rather than a physical notion of instant, since the termination of instants is determined by the collective behaviour of threads rather than by some physical clock. At the end of an instant, all threads are inactive and share the same view of emitted events. At instant change, a preemption construct \({\mathtt{do}\,{s}\,\mathtt{watching}\,{ev}}\) allows some suspended parts of threads to be pruned off, thus implementing a time-out mechanism. Interaction with the environment is limited to the start and the end of instants: the environment injects events at the start of instants and collects them at the end.

The starting point of our work is the paper [2], which laid the basis for the study of noninterference in a synchronous reactive language. The present work improves on [2] in several respects, which we summarise below.

The language examined in [2] is similar to \(CRL~\)but strictly more expressive, including imperative constructs, local declarations and a memory. Indeed, our asymmetric parallel operator \(\not \mid \,\), which gives priority to its left component, is inspired by that of [2]. Here, however, we adopt a slightly different semantics for \({s}\,\not \mid \,{s'}\), which preserves the position of threads within a program, while the semantics of [2] swapped the positions of \(s\) and \(s'\) in \({s}\,\not \mid \,{s'}\) in case \(s\) was suspended, reducing it to \({s'}\,\not \mid \,{s}\). This simple change forces the scheduler in \(CRL~\) to serve the same thread at the start of each instant, thus avoiding the so-called scheduling leaks of [2], and allowing for a more relaxed typing rule for \(\not \mid \), which is just the standard rule for symmetric parallel composition.

Moreover, reactivity was not a concern in [2]: as soon as they contained while loops, programs were not guaranteed to terminate or suspend within an instant. Hence, it only made sense to consider a fine-grained notion of noninterference. By contrast, in \(CRL~\)all programs are reactive, thanks to a clear separation between the loop construct \({\mathtt{loop}\,{s}}\) and the iteration construct \({\mathtt{repeat}\,{exp}\,\mathtt{do}\,{s}}\), and to our semantics for loops, which requires them to yield the control at each iteration of their body. This makes it possible to define a notion of coarse-grained reactive noninterference (RNI), which accounts only for the I/O behaviour of programs within each instant. The coarse-grained RNI property has an advantage over the fine-grained one: it exploits in a more direct way the structure of reactive computations, and it recovers the flavour of big-step semantics within each instant, offering a more abstract NI notion for reactive programs.

Finally, our type system is more permissive than that of [2], thanks to the relaxed typing rule for parallel composition and to refined typing rules for the conditional. Both improvements are made possible by design choices of \(CRL\).

The main contributions of this paper are: (1) the reactivity result; (2) the definition of two bisimulation equivalences for synchronous reactive programs, of different granularity (to our knowledge, semantic equivalences for reactive programs have only been studied previously by Amadio [4]); (3) the proposal of two properties of reactive noninterference, based on the above bisimulations; and (4) the presentation of a type system ensuring both noninterference properties.

The rest of the paper is organised as follows. Sections 2 and 3 present the syntax and the semantics of the language \(CRL\). Section 4 is devoted to proving reactivity of \(CRL~\)programs. Section 5 introduces the two bisimulation equivalences and gives some properties of them. In Sect. 6 we define our two NI properties. Section 7 presents the security type system and the proof of its soundness. Finally, future and related work are briefly discussed in Sect. 8.

The proofs of the results are mostly omitted and may be found in [5].

Syntax

In this section we introduce the syntax of \(CRL\). Let \({Val}\) be a set of values, ranged over by \(v, v'\), \({Var}\) a set of variables, ranged over by \(x, y,z\), and \(Events\) a set of events, ranged over by \(ev, ev'\). A fixed valuation function \(V: {Var} \rightarrow {Val}\) for open terms is assumed, which however will be left implicit until Sect. 6.

Expressions. An expression \(exp \in Exp\) may be a basic value, a variable, or the value returned by a function. Letting \(\overrightarrow{exp}\) denote a tuple of expressions \(exp_1, \ldots , exp_n\), the syntax of expressions is:

$$exp\in {Exp}::= v\, ~{{|}}~\, x\, ~{{|}}~\, f({\small {\overrightarrow{exp}}})$$

The evaluation of a function call \(\,f{(\overrightarrow{exp})}\) is assumed to be instantaneous, and therefore so is the evaluation of an expression \(exp\), denoted by \(exp \leadsto v\), which is formally defined by the three rules (one per expression form, using the valuation function \(V\)):

$$\frac{}{\,v \leadsto v\,} \qquad \quad \frac{}{\,x \leadsto V(x)\,} \qquad \quad \frac{exp_1 \leadsto v_1 \ \cdots \ exp_n \leadsto v_n \qquad f(v_1, \ldots , v_n) = v}{f({\overrightarrow{exp}}) \leadsto v}$$

Programs. We now present the syntax of \(CRL~\)programs. Alongside typical sequential operators, \(CRL~\)includes four operators that are commonly found in reactive languages, \({\mathtt{cooperate}}, {\mathtt{generate}\,{ev}}, {\mathtt{await}\,{ev}} \) and \({\mathtt{do}\,{s}\,\mathtt{watching}\,{ev}} \), as well as a binary asymmetric parallel operator, denoted by \(\not \mid \), which performs a deterministic scheduling on its components. This operator is very close to that used in [2] and, earlier on, in the implementation of SugarCubes [10]. However, while in [2] and [10] each parallel component executed for as long as possible, our operator \(\not \mid \) implements a form of prioritised scheduling, where the first component yields the control only when terminating or suspending (late cooperation), while the second yields it as soon as it generates an event that unblocks the first component (early cooperation). The syntax of programs is given by:

$$ \begin{array} {ll} s \in {Programs}::= &{}{\mathtt{nothing}}\ | \ s;s \ |\ (s \not \mid s) \ | \ \\ &{}{\mathtt{cooperate}}\ | \ {\mathtt{generate}\,{ev}} \ | \ {\mathtt{await}\,{ev}} \ | \ {\mathtt{do}\,{s}\,\mathtt{watching}\,{ev}} \ | \ \\ &{}({\mathtt{loop}\,{s}}) \ |\ ({\mathtt{repeat}\,{exp}\,\mathtt{do}\,{s}}) \ |\ ({\mathtt{if}\,{exp}\,\mathtt{then}\,{s}\,\mathtt{else}\,{s}}) \end{array} $$

Note that our language includes two different constructs for loops and iteration, replacing the standard while loop operator. This allows for a clear separation between nonterminating behaviours and iterative behaviours.
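For illustration, the abstract syntax above admits a direct concrete encoding; the sketch below represents \(CRL~\)programs as nested Python tuples. The constructor names and the encoding are our own conventions (the paper fixes only the abstract grammar), and expressions are assumed pre-evaluated to values.

```python
# Illustrative encoding of CRL abstract syntax as nested Python tuples.
# All constructor names are our own; expressions (in repeat/if) are
# assumed already evaluated to an integer or a boolean.

NOTHING = ("nothing",)            # terminated program
COOPERATE = ("cooperate",)        # yield until the end of the instant

def seq(s1, s2):     return ("seq", s1, s2)        # s1 ; s2
def par(s1, s2):     return ("par", s1, s2)        # asymmetric parallel
def generate(ev):    return ("generate", ev)       # broadcast event ev
def await_(ev):      return ("await", ev)          # suspend until ev present
def watching(s, ev): return ("watching", s, ev)    # do s watching ev
def loop(s):         return ("loop", s)
def repeat(n, s):    return ("repeat", n, s)       # n >= 1, pre-evaluated
def if_(b, s1, s2):  return ("if", b, s1, s2)      # b pre-evaluated

# Two threads communicating by broadcast: the left thread waits for ev1,
# which the right thread generates, then answers with ev2.
example = par(seq(await_("ev1"), generate("ev2")), generate("ev1"))
```

This flat encoding makes the structural definitions of the paper (suspension, reconditioning, instantaneous size) easy to express by recursion on the first tuple component.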

Semantics

This section presents the operational semantics of \(CRL\). Programs proceed through a succession of instants, transforming sets of events. There are two transition relations, both defined on configurations of the form \(\langle s, E\rangle \), where \(s\) is a program and \(E\subseteq Events\) is an event environment, i.e. a set of present events.

Let us first give the general idea of these two transition relations:

  1.

    The small-step transition relation describes the step-by-step execution of a configuration within an instant. The general format of a transition is:

    $$\begin{aligned} {{\langle }s,E{\rangle }{\,\rightarrow \,}{\langle }s',E'{\rangle }} \end{aligned}$$

    where:

    • \(s\) is the program to execute and \(s'\) is the residual program;

    • \(E\) is the starting event environment and \(E'\) is the resulting event environment: \(E'\) coincides with \(E\) if the transition does not generate any new event; otherwise \(E'= E \cup \{ev\}\), where \(ev\) is the new generated event.

  2.

    The tick transition relation describes the passage from one instant to the next, and applies only to suspended configurations. A transition of this kind always has the form:

    $$\begin{aligned} \langle s, E\rangle ~{\hookrightarrow }~ \langle [s]_E, \emptyset \rangle \end{aligned}$$

    where the resulting event environment is empty and \([s]_E\) is a “reconditioning” of program \(s\) for the next instant, possibly allowing it to resume execution at the next instant even without the help of new events from the environment.

Before formally defining \(\rightarrow \) and \(\hookrightarrow \,\), we introduce the suspension predicate \({\langle }s,E{\rangle }\ddagger \), which holds when \(s\) is suspended in the event environment \(E\), namely when all threads in \(s\) are waiting for events not contained in \(E\), or have deliberately yielded the control for the current instant by means of a cooperate instruction.

The rules defining the predicate \(\ddagger \) and the relations \(\rightarrow \) and \(~{\hookrightarrow }~\) are given in Fig. 1. The reconditioning function \([s]_E\) prepares \(s\) for the next instant: it erases all guarding cooperate instructions, as well as all guarding \({\mathtt{do}\,{s'}\,\mathtt{watching}\,{ev}}\) instructions whose time-out event \(ev\) belongs to \(E\) (i.e. has been generated).

Fig. 1. Operational Semantics of \(CRL~\)
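As the defining rules appear only in Fig. 1, the following Python sketch reconstructs the suspension predicate \({\langle }s,E{\rangle }\ddagger \) and the reconditioning function \([s]_E\) from their textual description; programs are encoded as nested tuples, and the individual clauses are our plausible reconstruction rather than the paper's exact rules.

```python
# Reconstruction of the suspension predicate and the reconditioning
# function from their textual description (the exact rules are in
# Fig. 1). Programs are nested Python tuples such as ("seq", s1, s2)
# or ("await", "ev"); the clauses below are our own sketch.

def suspended(s, E):
    """True iff every thread in s waits on an absent event or has
    executed cooperate for the current instant."""
    tag = s[0]
    if tag == "cooperate":
        return True
    if tag == "await":
        return s[1] not in E                  # waiting on an absent event
    if tag == "seq":                          # s1 ; s2 suspends with s1
        return suspended(s[1], E)
    if tag == "par":                          # both branches must be stuck
        return suspended(s[1], E) and suspended(s[2], E)
    if tag == "watching":
        return suspended(s[1], E)
    return False                              # nothing, generate, loop, ... are not stuck

def recondition(s, E):
    """[s]_E: prepare a suspended program for the next instant."""
    tag = s[0]
    if tag == "cooperate":                    # erase a guarding cooperate
        return ("nothing",)
    if tag == "seq":                          # recondition the guard, simplify nothing; s2
        r1 = recondition(s[1], E)
        return s[2] if r1 == ("nothing",) else ("seq", r1, s[2])
    if tag == "par":
        return ("par", recondition(s[1], E), recondition(s[2], E))
    if tag == "watching":                     # time-out: prune body if ev was generated
        return ("nothing",) if s[2] in E else ("watching", recondition(s[1], E), s[2])
    return s                                  # await ev keeps waiting, etc.
```

The seq clause simplifies a reconditioned \({\mathtt{nothing}}\,;s_2\) to \(s_2\), matching the example \(\llcorner {{{\mathtt{cooperate}}\,;\,{\mathtt{loop}\,{{\mathtt{nothing}}}}}}\lrcorner _{E} = {\mathtt{loop}\,{{\mathtt{nothing}}}}\) used later in the paper.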

We assume programs are well-typed with respect to a standard type system that ensures that in the commands \({\mathtt{if}\,{exp}\,\mathtt{then}\,{s_1}\,\mathtt{else}\,{s_2}}\) and \({\mathtt{repeat}\,{exp}\,\mathtt{do}\,{s}}\) the expression \(exp\) evaluates respectively to a boolean and to an integer \(n\ge 1\).

Let us comment on the most interesting transition rules. The execution of a parallel program always starts with its left branch (Rule (\(par_1\))). Once the left branch is over, the program reduces to its right branch (Rule (\(par_2\))). If the left branch is suspended, then the right branch executes (Rule (\(par_3\))) until it unblocks the left branch. Thus early cooperation is required in the right branch. To avoid nondeterminism, a terminated right branch can only be eliminated if the left branch is suspended (Rule (\(par_4\))). A \({\mathtt{loop}\,{s}}\) program executes its body cyclically: a cooperate instruction is systematically added in parallel to its body to avoid instantaneous loops, i.e. divergence within an instant (Rule (loop)). A \({\mathtt{do}\,{s}\,\mathtt{watching}\,{ev}}\) program executes its body until termination or suspension (Rule (\(watch_1\))), reducing to \({\mathtt{nothing}}\) when its body terminates (Rule (\(watch_2\))).
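The commented rules can be read off a small deterministic interpreter. The sketch below is our reconstruction from the text (the exact premises are in Fig. 1): programs are nested tuples, expressions in repeat (an integer \(n \ge 1\)) and if (a boolean) are assumed pre-evaluated, and step returns the unique successor configuration or None when the configuration is suspended or terminated.

```python
# Hedged executable sketch of the small-step relation; the rule names
# (par1-par4, loop, watch1, watch2) follow the text, and the clauses
# are our reconstruction, not the paper's exact rules.

NOTHING = ("nothing",)

def suspended(s, E):
    """The suspension predicate (reconstructed)."""
    tag = s[0]
    if tag == "cooperate": return True
    if tag == "await":     return s[1] not in E
    if tag == "seq":       return suspended(s[1], E)
    if tag == "par":       return suspended(s[1], E) and suspended(s[2], E)
    if tag == "watching":  return suspended(s[1], E)
    return False

def step(s, E):
    """One small step, or None if no rule applies."""
    tag = s[0]
    if tag == "generate":
        return NOTHING, E | {s[1]}
    if tag == "await":
        return (NOTHING, E) if s[1] in E else None
    if tag == "seq":
        if s[1] == NOTHING:
            return s[2], E
        r = step(s[1], E)
        return (("seq", r[0], s[2]), r[1]) if r else None
    if tag == "par":
        left, right = s[1], s[2]
        if left == NOTHING:                     # (par2): left branch is over
            return right, E
        if not suspended(left, E):              # (par1): priority to the left
            s1, E1 = step(left, E)
            return ("par", s1, right), E1
        if right == NOTHING:                    # (par4): drop terminated right branch
            return left, E
        r = step(right, E)                      # (par3): right runs while left is stuck
        return (("par", left, r[0]), r[1]) if r else None
    if tag == "loop":                           # (loop): body in parallel with cooperate,
        return ("seq", ("par", s[1], ("cooperate",)), s), E   # then loop again
    if tag == "repeat":
        n, body = s[1], s[2]
        return (("seq", body, ("repeat", n - 1, body)) if n > 1 else body), E
    if tag == "if":
        return (s[2] if s[1] else s[3]), E
    if tag == "watching":
        body, ev = s[1], s[2]
        if body == NOTHING:                     # (watch2)
            return NOTHING, E
        r = step(body, E)                       # (watch1)
        return (("watching", r[0], ev), r[1]) if r else None
    return None                                 # nothing, cooperate: no step

def run_instant(s, E):
    """Iterate the small step until the configuration is suspended
    or terminated (the end of the instant)."""
    while (r := step(s, E)) is not None:
        s, E = r
    return s, E
```

Note how run_instant on the producer/consumer pair below first serves the suspended left thread's unblocking event from the right thread (early cooperation), then resumes the left one.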

The small-step transition relation satisfies two simple properties.

Proposition 1

(Determinism).

Let \(s\in {Programs}\) and \(E\subseteq Events\). Then:

$$ s\ne {\mathtt{nothing}}\quad \,\Rightarrow \,\quad either \ {\langle }s,E{\rangle }\ddagger \ \, or \ \, \exists \, !\, s', E'\,. \ {{\langle }s,E{\rangle }{\,\rightarrow \,}{\langle }s',E'{\rangle }} $$

Proof

By inspecting the suspension and transition rules, it is immediate to see that at most one transition rule applies to each configuration \(\langle s, E\rangle \).

Proposition 2

(Event persistence).

Let \(s\in {Programs}\) and \(E\subseteq Events\). Then:    \({{\langle }s,E{\rangle }{\,\rightarrow \,}{\langle }s',E'{\rangle }} \,\Rightarrow \,E \subseteq E'\)

Proof

Straightforward, since the only transition rule that changes the event environment \(E\) is the rule for \({\mathtt{generate}\,{ev}}\), which adds the event \(ev\) to \(E\).

We define now the notion of instantaneous convergence, which is at the basis of the reactivity property of \(CRL~\)programs. Let us first introduce some notation.

The timed multi-step transition relation \(\langle s, E \rangle \Rightarrow _{n}\langle s', E'\rangle \) is defined by:

$$\begin{array}{l} {{\langle }s,E{\rangle }{\,\Rightarrow _{0}\,}{\langle }s,E{\rangle }}\\ {{\langle }s,E{\rangle }{\,\rightarrow \,}{\langle }s',E'{\rangle }} \ \wedge \ {{\langle }s',E'{\rangle }{\,\Rightarrow _{n}\,}{\langle }s'',E''{\rangle }} \quad \,\Rightarrow \,\ \, {{\langle }s,E{\rangle }{\,\Rightarrow _{n+1}\,}{\langle }s'',E''{\rangle }}\\ \end{array} $$

Then the multi-step transition relation \(\langle s, E \rangle \Rightarrow \langle s', E'\rangle \) is given by:

$${{\langle }s,E{\rangle }{\,\Rightarrow \,}{\langle }s',E'{\rangle }} \,\,\Leftrightarrow \,\,\exists \, n\,. \, {{\langle }s,E{\rangle }{\,\Rightarrow _{n}\,}{\langle }s',E'{\rangle }}$$

Note that the relation \(\Rightarrow \) could also be defined as \(\rightarrow ^\star \).
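Operationally, \(\Rightarrow _{n}\) simply iterates the (deterministic) small-step function \(n\) times. A generic sketch, where multi_step and the toy step function are our own illustrative helpers, not definitions from the paper:

```python
# The timed multi-step relation iterates a deterministic step function
# n times; it is undefined if execution stops earlier.

def multi_step(step, s, E, n):
    """Return the configuration reached after exactly n small steps,
    or None if the configuration stops (suspends or terminates)
    in fewer than n steps."""
    for _ in range(n):
        r = step(s, E)
        if r is None:
            return None
        s, E = r
    return s, E

# Toy deterministic step: a "program" is just a list of events to emit,
# one per step; the empty list can make no step.
def toy_step(s, E):
    return (s[1:], E | {s[0]}) if s else None
```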

The immediate convergence predicate is defined by:

$$\begin{aligned} \langle s, E\rangle \,{\mathop {{{\scriptstyle \curlyvee }}}\limits ^{{\scriptstyle \ddagger }}}\,\,\,\Leftrightarrow \,\,{\langle }s,E{\rangle }\ddagger \ \vee \ s = {\mathtt{nothing}}\end{aligned}$$

We may now define the relations and predicates of instantaneous convergence and instantaneous termination:

Definition 1

(Instantaneous convergence).

$$\begin{array}{ll} \langle s, E \rangle \Downarrow \langle s', E'\rangle \ \, &{}\mathrm{if}\, \ \ \langle s, E \rangle \Rightarrow \langle s', E'\rangle \ \wedge \ {\langle s', E'\rangle \,{\mathop {{{\scriptstyle \curlyvee }}}\limits ^{{\scriptstyle \ddagger }}}} \\ \langle s, E \rangle \Downarrow \ \, &{}\mathrm{if}\ \ \ \exists s', E'\,.\,\, \langle s, E \rangle \Downarrow \langle s', E'\rangle \end{array}$$

Definition 2

(Instantaneous termination).

Instantaneous termination is instantaneous convergence to the terminated program: \(\langle s, E \rangle \Downarrow \langle {\mathtt{nothing}}, E'\rangle \). The timed versions \(\Downarrow _{n}\) of \(\Downarrow \) and of instantaneous termination are defined in the expected way.

The relation \(\langle s, E \rangle \Downarrow \langle s', E'\rangle \ \) defines the overall effect of the program \(s\) within an instant, starting with the set of events \(E\). Indeed, \(\Downarrow \) may be viewed as defining the big-step semantics of programs within an instant. As an immediate corollary of Proposition 1, the relation \(\Downarrow \) is a function.

In the next section we prove that every configuration \(\langle s, E\rangle \) instantaneously converges. This property is called reactivity.

Reactivity

In this section we present our first main result, the reactivity of \(CRL~\)programs. In fact, we shall prove a stronger property than reactivity, namely that every configuration \(\langle s, E\rangle \) instantaneously converges in a number of steps which is bounded by the instantaneous size of \(s\), denoted by \( size (s)\). The intuition for \( size (s)\) is that the portion of \(s\) that sequentially follows a \({\mathtt{cooperate}}\) instruction should not be taken into account, as it will not be executed in the current instant. Moreover, if \(s\) is a loop, \( size (s)\) should cover a single iteration of its body.

To formally define the function \( size (s)\), we first introduce an auxiliary function \( dsize (s)\) (where "d" stands for "decorated") that assigns to each program an element of \((\mathbf{Nat}\times \mathbf{Bool})\). Then \( size (s)\) will be the first projection of \( dsize (s)\). Intuitively, if \( dsize (s) = (n,b)\), then \(n\) is an upper bound for the number of steps that \(s\) can execute within an instant; and \(b\) is \({ tt}\) or \({ ff}\) depending on whether or not a \({\mathtt{cooperate}}\) instruction is reached within the instant. For conciseness, we let \(n^{\wedge }\) stand for \((n, { tt})\), \(n\) stand for \((n, { ff})\), and \(n^{\circ }\) range over \(\{n^{\wedge }, n \}\).

The difference between \(n^{\wedge }\) and \(n\) will essentially show when computing the size of a sequential composition: if the decorated size of the first component has the form \(n^{\wedge }\), then a cooperate has been met and the counting will stop; if it has the form \(n\), then \(n\) will be added to the decorated size of the second component.

Definition 3

(Instantaneous size).

The function \( size : {Programs}\rightarrow \mathbf{Nat}\) is defined by:

$$\begin{aligned} size (s) = n\ \ \, if \ \ ( dsize (s) = n\ \vee \ dsize (s) = n^\wedge ) \end{aligned}$$

where the function \( dsize : {Programs}\rightarrow (\mathbf{Nat}\times \mathbf{Bool})\) is given inductively by:

$$ \begin{array}{l} dsize ({\mathtt{nothing}}) = 0 \qquad \qquad dsize ({\mathtt{cooperate}}) = 0^{\wedge }\\ dsize ({\mathtt{generate}\,{ev}}) = dsize ({\mathtt{await}\,{ev}}) = 1\\ dsize (s_1 ; s_2) = {\left\{ \begin{array}{ll} {n_1}^{\wedge } &{} if \; dsize (s_1) = {n_1}^{\wedge } \\ (1 + n_1 + n_2)^{\circ } &{} if \; dsize (s_1) = n_1 \ \wedge \ dsize (s_2) = {n_2}^{\circ } \\ \end{array}\right. } \\ \\ dsize ({s_1}\not \mid {s_2}) = {\left\{ \begin{array}{ll} (1 + n_1 + n_2)^{\wedge } &{} if \; dsize (s_1) = {n_1}^{\wedge } \ \wedge \ dsize (s_2) = n_2\\ (1 + n_1 + n_2)^{\wedge } &{} if \; dsize (s_1) = n_1 \ \wedge \ dsize (s_2) = {n_2}^{\wedge } \\ (1 + n_1 + n_2)^{\circ } &{} if \; dsize (s_1) = {n_1}^{\circ } \ \wedge \ dsize (s_2) = {n_2}^{\circ } \\ \end{array}\right. } \\ \\ dsize ({\mathtt{repeat}\,{exp}\,\mathtt{do}\,{s}}) = (m + (m\times n))^{\circ } \qquad if \ \, dsize (s) = n^{\circ } \ \wedge \ exp \leadsto m \\ dsize ({\mathtt{loop}\,{s}}) = (2+ n)^{\wedge } \qquad if \; dsize (s) = n^{\circ } \\ dsize ({\mathtt{do}\,{s}\,\mathtt{watching}\,{ev}}) = (1+ n)^{\circ } \qquad if \; dsize (s) = n^{\circ } \\ dsize ({\mathtt{if}\,{exp}\,\mathtt{then}\,{s_1}\,\mathtt{else}\,{s_2}}) = {\left\{ \begin{array}{ll} (1 + \max \{n_1,n_2\})^{\wedge } &{} if \; dsize (s_i) = {n_i}^{\wedge } \ \text {for} \ i = 1,2 \\ (1 + \max \{n_1, n_2\}) &{} if \; dsize (s_i) = n_i \, \wedge \, dsize (s_j) = {n_j}^{\circ } \ \text {for some} \ i\ne j \end{array}\right. } \end{array} $$
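Definition 3 transcribes almost literally into code. In the sketch below (using our own tuple encoding of programs), a decorated size \(n^{\circ }\) is the pair (n, decorated), and the expression of a repeat is assumed already evaluated to an integer m:

```python
# Executable transcription of Definition 3. Programs are nested tuples
# (our own encoding); a decorated size is the pair (n, decorated).

def dsize(s):
    tag = s[0]
    if tag == "nothing":
        return (0, False)
    if tag == "cooperate":
        return (0, True)                       # 0^: a cooperate is reached
    if tag in ("generate", "await"):
        return (1, False)
    if tag == "seq":
        n1, d1 = dsize(s[1])
        if d1:                                 # counting stops at a cooperate
            return (n1, True)
        n2, d2 = dsize(s[2])
        return (1 + n1 + n2, d2)
    if tag == "par":                           # decorated iff either side is
        (n1, d1), (n2, d2) = dsize(s[1]), dsize(s[2])
        return (1 + n1 + n2, d1 or d2)
    if tag == "repeat":                        # ("repeat", m, body), m pre-evaluated
        m, (n, d) = s[1], dsize(s[2])
        return (m + m * n, d)
    if tag == "loop":
        return (2 + dsize(s[1])[0], True)      # covers one iteration of the body
    if tag == "watching":
        n, d = dsize(s[1])
        return (1 + n, d)
    if tag == "if":                            # ("if", b, s1, s2), b pre-evaluated
        (n1, d1), (n2, d2) = dsize(s[2]), dsize(s[3])
        return (1 + max(n1, n2), d1 and d2)
    raise ValueError(f"unknown construct {tag}")

def size(s):
    return dsize(s)[0]
```

Note, for example, that a guarding cooperate makes everything that sequentially follows it invisible to size, reflecting that it cannot run in the current instant.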

The following lemma establishes that \( size (s)\) decreases at each step of a small-step execution:

Lemma 1

(Size reduction within an instant).

$$\forall s\, \forall E\ \ \, (\, {{\langle }s,E{\rangle }{\,\rightarrow \,}{\langle }s',E'{\rangle }} \,\Rightarrow \, size (s') < size (s) \,)$$

The proof of this result is not entirely straightforward because of the use of the decorated size dsize in the definition of \( size (s)\). The proof may be found in [5].

We are now ready to prove our main result, namely that every program \(s\) instantaneously converges in a number of steps that is bounded by \( size (s)\).

Theorem 1

(Reactivity).   \(\forall s, \forall E\ \, \ (\,\exists \, n \le size (s)\ \ \langle s, E\rangle \, {}\Downarrow _{\,n}\,)\)

The proof proceeds by simultaneous induction on the structure and on the size of \(s\). Induction on the size is needed for the case \(s= {s_1}\not \mid {s_2}\). The detailed proof may be found in [5].

Fine-Grained and Coarse-Grained Bisimilarity

We now introduce two bisimulation equivalences (aka bisimilarities) on programs, which differ in the granularity of the underlying notion of observation. The first bisimulation formalises a fine-grained observation of programs: the observer is viewed as a program, which is able to interact with the observed program at any point of its execution. The second reflects a coarse-grained observation of programs: here the observer is viewed as part of the environment, which interacts with the observed program only at the start and the end of instants.

Let us start with an informal description of the two bisimilarities:

  1.

    Fine-grained bisimilarity \(\approx ^{ \,fg }\). In the bisimulation game, each small step must be simulated by a (possibly empty) sequence of small steps, and each instant change must be simulated either by an instant change, in case the continuation is observable (in the sense that it affects the event environment), or by an unobservable behaviour otherwise.

  2.

    Coarse-grained bisimilarity \(\approx ^{ \,cg }\). Here, each converging sequence of steps must be simulated by a converging sequence of steps, at each instant. For instant changes, the requirement is the same as for fine-grained bisimulation.

As may be expected, the latter equivalence is more abstract than the former, as it only compares the I/O behaviours of programs (as functions on sets of events) at each instant, while the former also compares their intermediate results.

Let us move now to the formal definitions of the equivalences \(\approx ^{ \,fg }\) and \(\approx ^{ \,cg }\).

We first extend the reconditioning function to the program \({\mathtt{nothing}}\) as follows:

Notation. \(\ \llcorner {s}\lrcorner _{E} \mathop {=}\limits ^{\,\mathrm {def}}\,{\left\{ \begin{array}{ll} [s]_E &{} \text {if} \ {\langle }s,E{\rangle }\ddagger \\ s &{} {\text {if} \ s = {\mathtt{nothing}}}\\ \end{array}\right. }\)

Definition 4

(Fine-grained bisimulation).

A symmetric relation \({\mathcal R}\) on programs is a fg-bisimulation if \(s_1 \,{\mathcal R}\, s_2\) implies, for any \(E\subseteq Events\):

$$\begin{array}{l} 1) \ {{\langle }s_1,E{\rangle }{\,\rightarrow \,}{\langle }s'_1,E'{\rangle }} \ \, \Rightarrow \ \, \exists \, s'_2 \, . \,\, (\,{{\langle }s_2,E{\rangle }{\,\Rightarrow \,}{\langle }s'_2,E'{\rangle }} \ \wedge \ s'_1 \,{\mathcal R}\, \,s'_2\,) \\ 2) \ {\langle }s_1,E{\rangle }\ddagger \ \, \Rightarrow \ \, \exists \, s'_2 \, . \,\, (\, \langle s_2, E\rangle \Downarrow \langle s'_2, {E}\rangle \ \,\wedge \ \,\,\llcorner {s_1}\lrcorner _{E} \,\,{\mathcal R}\, \,\llcorner {s'_2}\lrcorner _{{E}} \,)\\ \end{array} $$

Then \(s_1, s_2\) are fg-bisimilar, \(s_1\approx ^{ \,fg }s_2\), if \(\, s_1 \,{\mathcal R}\, s_2\) for some fg-bisimulation \({\mathcal R}\).

The bisimilarity \(\approx ^{ \,fg }\) is weak, in the terminology of process calculi, since it allows a single small step to be simulated by a (possibly empty) sequence of small steps. In the terminology of language-based security, an equivalence that abstracts away from the number of steps, thus allowing internal moves to be ignored, is called time-insensitive. Typically we have:

$${{\mathtt{nothing}}\,;\,{\mathtt{generate}\,{ev}}} \ \ \approx ^{ \,fg }\ {\mathtt{generate}\,{ev}}$$
$$ \mathtt{if}\ { tt}\ \mathtt{then}\ s_1 \ \mathtt{else}\ s_2 \ \ \approx ^{ \,fg }\ \ s_1 $$

The equivalence \(\approx ^{ \,fg }\) is also termination-insensitive, as it cannot distinguish proper termination from suspension nor from internal divergence (recall that no divergence is possible within an instant, so the execution of a diverging program always spans infinitely many instants). For instance we have:

$${\mathtt{nothing}}\quad \approx ^{ \,fg }\quad {\mathtt{cooperate}}\quad \approx ^{ \,fg }\quad {\mathtt{loop}\,{{\mathtt{nothing}}}}$$

Indeed, for any \(E\) the suspended behaviour \({\langle }{\mathtt{cooperate}},E{\rangle }\ddagger \) of the middle program can be simulated by the empty computation \(\langle {\mathtt{nothing}}, E\rangle \Downarrow \langle {\mathtt{nothing}}, E\rangle \) of the left-hand program and by the two-step computation \(\langle {\mathtt{loop}\,{{\mathtt{nothing}}}}, E\rangle \rightarrow \langle {({{\mathtt{nothing}}}\not \mid {{\mathtt{cooperate}}})\,;\,{\mathtt{loop}\,{{\mathtt{nothing}}}}}, E\rangle \rightarrow \langle {{\mathtt{cooperate}}\,;\,{\mathtt{loop}\,{{\mathtt{nothing}}}}}, E\rangle \ddagger \) of the right-hand program, since \(\llcorner {{\mathtt{cooperate}}}\lrcorner _{E} = {\mathtt{nothing}}= \llcorner {{\mathtt{nothing}}}\lrcorner _{E}\), and \(\llcorner {{{\mathtt{cooperate}}\,;\,{\mathtt{loop}\,{{\mathtt{nothing}}}}}}\lrcorner _{E} = {\mathtt{loop}\,{{\mathtt{nothing}}}}\).

The last example shows that, while it weakly preserves small-step transitions, \(\approx ^{ \,fg }\) does not preserve tick transitions. On the other hand, it detects the instant in which events are generated. In other words, it is sensitive to the clock-stamp of events. For instance, we have:

$$\begin{array}{l} {\mathtt{nothing}}\, ;\,{\mathtt{generate}\,{ev}}\ \not \approx ^{ \,fg }\ {\mathtt{cooperate}}\, ;\, {\mathtt{generate}\,{ev}}\\ \end{array} $$

because in the left-hand program \(ev\) is generated in the first instant, while in the right-hand program it is generated in the second instant. Incidentally, this example shows that \(\approx ^{ \,fg }\) is not preserved by sequential composition (as was to be expected given that \(\approx ^{ \,fg }\) is termination-insensitive).

On the other hand, we conjecture that \(\approx ^{ \,fg }\) is compositional, that is, preserved by parallel composition, because in the bisimulation game the quantification on the event environment is renewed at each step, thus mimicking the generation of events by a parallel component.

Finally, \(\approx ^{ \,fg }\) is sensitive to the order of generation of events and to repeated emissions of the same event (“stuttering”). Typical examples are:

$$({{\mathtt{generate}\,{ev_1}}}\not \mid {{\mathtt{generate}\,{ev_2}}}) \quad \not \approx ^{ \,fg }\quad ({{\mathtt{generate}\,{ev_2}}}\not \mid {{\mathtt{generate}\,{ev_1}}})$$
$$ {\mathtt{generate}\,{ev}} \ \ \not \approx ^{ \,fg }\ \ ({{\mathtt{generate}\,{ev}}\,;\,{\mathtt{generate}\,{ev}}})$$

In the last example, note that after generating the first event \(ev\) the right-hand program may be launched again in the event environment \(E=\emptyset \), producing once more \(E'= \{ev\}\). This cannot be mimicked by the left-hand program.

Definition 5

(Coarse-grained bisimulation).

A symmetric relation \({\mathcal R}\) on programs is a cg-bisimulation if \(s_1 \,{\mathcal R}\, s_2\) implies, for any \(E\subseteq Events\):

$$\begin{array}{l} \langle s_1, E\rangle \Downarrow \langle s'_1, E'\rangle \ \, \Rightarrow \ \, \exists \, s'_2 \, . \,\, (\, \langle s_2, E\rangle \Downarrow \langle s'_2, E'\rangle \ \wedge \ \llcorner {s'_1}\lrcorner _{E'} \,\,{\mathcal R}\, \,\llcorner {s'_2}\lrcorner _{E'}\,) \end{array} $$

Then \(s_1, s_2\) are cg-bisimilar, \(s_1\approx ^{ \,cg }s_2\), if \(s_1 \,{\mathcal R}\, s_2\) for some cg-bisimulation \({\mathcal R}\).

The bisimilarity \(\approx ^{ \,cg }\) compares the overall effect of two programs at every instant. Therefore, one may argue that \(\approx ^{ \,cg }\) makes full sense when coupled with reactivity. Indeed, if \(\approx ^{ \,cg }\) were applied to programs that diverge within the first instant (or to programs that are bisimilar for the first \(k\) instants and diverge in the following instant), it would trivially equate all of them. In the absence of reactivity, it would seem preferable to focus on a fine-grained bisimilarity such as \(\approx ^{ \,fg }\), which is able to detect intermediate results of instantaneously diverging computations.

Like \(\approx ^{ \,fg }\), the bisimilarity \(\approx ^{ \,cg }\) is both time-insensitive and termination-insensitive. Indeed, as will be established by Theorem 2, \(\approx ^{ \,fg }\) implies \(\approx ^{ \,cg }\). Moreover, \(\approx ^{ \,cg }\) is generation-order-insensitive and stuttering-insensitive. Typically:

$$({{\mathtt{generate}\,{ev_1}}}\not \mid {{\mathtt{generate}\,{ev_2}}}) \quad \approx ^{ \,cg }\quad ({{\mathtt{generate}\,{ev_2}}}\not \mid {{\mathtt{generate}\,{ev_1}}})$$
$${\mathtt{generate}\,{ev}} \ \approx ^{ \,cg }\ ({{\mathtt{generate}\,{ev}}\,;\,{\mathtt{generate}\,{ev}}})$$

More generally, we can show that the equivalence \(\approx ^{ \,cg }\) views the left-parallel composition \(\not \mid \) as a commutative operator:

Proposition 3

(Commutativity of \(\not \mid \) up to \(\approx ^{ \,cg }\) ).

$$\forall s_1, s_2\,. \ \ \ {s_1}\not \mid {s_2} \, \approx ^{ \,cg }\,{s_2}\not \mid {s_1}$$

On the other hand, \(\not \mid \) is associative modulo both equivalences \(\approx ^{ \,fg }\) and \(\approx ^{ \,cg }\).

Proposition 4

(Associativity of \(\not \mid \) up to \(\approx ^{ \,fg }\) and \(\approx ^{ \,cg }\) ).

$$\forall s_1, s_2, s_3\,. \ \, \ {s_1}\not \mid {({s_2}\not \mid {s_3})} \ \ \begin{array}{c}\approx ^{ \,fg }\\ \approx ^{ \,cg }\end{array} \ \ {({s_1}\not \mid {s_2})}\not \mid {s_3}$$

Let us recall that the asymmetric parallel operator \(\,\Lsh \,\) of [2] was not associative up to fine-grained semantics (a simple example was given in [2]).

We show now that \(\approx ^{ \,fg }\) is strictly included in \(\approx ^{ \,cg }\) (the strictness of the inclusion being witnessed by the examples given above):

Theorem 2

(Relation between the bisimilarities).

$$\begin{aligned} \approx ^{ \,fg }\ \subset \ \approx ^{ \,cg }\end{aligned}$$

Proof

To prove \( \approx ^{ \,fg }\ \subseteq \ \approx ^{ \,cg }\), it is enough to show that \(\approx ^{ \,fg }\) is a cg-bisimulation. Let \(s_1 \approx ^{ \,fg }s_2\). Suppose that \(\langle s_1, E\rangle \Downarrow \langle s'_1, E'\rangle \). This means that there exists \(n\ge 0\) such that:

$$\begin{aligned} \langle s_1, {E}\rangle =\langle s^0_1,E^0\rangle \rightarrow \langle s^1_1,E^1\rangle \rightarrow \cdots \rightarrow \langle s^n_1, E^n\rangle = \langle s'_1, E'\rangle \,\ddagger \end{aligned}$$

Since \(s_1 \approx ^{ \,fg }s_2\), by Clauses 1 and 2 of Definition 4 we have correspondingly:

$$\begin{aligned} \langle s_2, E\rangle =\langle s^0_2, E^0\rangle \Rightarrow \langle s^1_2, E^1\rangle \Rightarrow \cdots \Rightarrow \langle s^n_2, E^n\rangle \ {\Downarrow }\ \langle s'_2, E'\rangle \quad \quad (*) \end{aligned}$$

where \(s^i_1 \approx ^{ \,fg }s^i_2\) for every \( i < n\) and \(\llcorner {s'_1}\lrcorner _{E'}\approx ^{ \,fg }\llcorner {s'_2}\lrcorner _{E'}\). Then we may conclude since \((*)\) can be rewritten as \(\langle s_2, E\rangle \Downarrow \langle s'_2, E'\rangle \).

Coarse-grained bisimilarity is very close to the semantic equivalence proposed by Amadio in [4] for a slightly different reactive language, equipped with a classical nondeterministic parallel operator. By contrast, the noninterference notion of [2] was based on a fine-grained bisimilarity (not explicitly introduced in [2], but de facto used to define noninterference), which was stronger than \(\approx ^{ \,fg }\) since it acted as a strong bisimulation on programs with an observable behaviour (i.e. programs affecting the event environment).

As argued previously, coarse-grained bisimilarity is a natural equivalence to adopt when reactivity is guaranteed. It allows one to recover the flavour of big-step semantics within instants. On the other hand, fine-grained bisimilarity seems a better choice when reactivity is not granted. Note that reactivity was not a concern in either [2] or [4]. Nevertheless, it had been thoroughly studied in previous work by Amadio et al. [3].

Finally, it should be noted that, since our left-parallel composition operator \(\,\not \mid \,\) is deterministic, we could as well have used trace-based equivalences rather than bisimulation-based ones. However, defining traces is not entirely obvious for computations proceeding through instants, as it requires annotating with clock-stamps the events or event sets that compose a trace (depending on whether the trace is fine-grained or coarse-grained). Moreover, bisimulation provides a convenient means for defining noninterference in our concurrent setting, allowing the notion of clock-stamp to remain implicit. Lastly, as we aim to extend our study to a fully-fledged distributed reactive language, including a notion of site and asynchronous parallelism between sites, for which determinism would not hold anymore, we chose to adopt bisimulation-based equivalences from the start.

This concludes our discussion on semantic equivalences. We turn now to the definition of noninterference, which is grounded in that of bisimulation.

Security Property

In this section we define two noninterference properties for programs, which are based on the two bisimilarities introduced in Sect. 5. As usual when dealing with secure information flow, we assume a finite lattice \((\mathcal {S},\le )\) of security levels, ranged over by \(\tau ,\sigma , \vartheta \). We denote by \(\sqcup \) and \(\sqcap \) the join and meet operations on the lattice, and by \(\bot \) and \(\top \) its minimal and maximal elements.

In \(CRL\), the objects that are assigned a security level are events and variables. An observer is identified with a downward-closed set of security levels (for short, a dc-set), i.e. a set \(\mathcal L\subseteq {\mathcal {S}}\) satisfying the property: (\(\tau \in \mathcal L\,\wedge \, \tau ' \le \tau \)) \(\,\Rightarrow \,\tau ' \in \mathcal L\).
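The lattice operations and the dc-set condition can be made concrete. The following Python fragment is a minimal sketch over a purely illustrative four-point "diamond" lattice; the level names and function names are assumptions for illustration, not part of \(CRL\):

```python
# Illustrative four-point diamond lattice:  bot <= left, right <= top.
LEVELS = {"bot", "left", "right", "top"}
LE = ({("bot", l) for l in LEVELS}
      | {(l, "top") for l in LEVELS}
      | {(l, l) for l in LEVELS})

def le(a, b):
    return (a, b) in LE

def join(a, b):
    """Least upper bound: the upper bound below all other upper bounds."""
    ubs = [l for l in LEVELS if le(a, l) and le(b, l)]
    return next(l for l in ubs if all(le(l, u) for u in ubs))

def meet(a, b):
    """Greatest lower bound: the lower bound above all other lower bounds."""
    lbs = [l for l in LEVELS if le(l, a) and le(l, b)]
    return next(l for l in lbs if all(le(u, l) for u in lbs))

def is_dc_set(L):
    """Downward-closed: tau in L and tau' <= tau must imply tau' in L."""
    return all(tp in L for t in L for tp in LEVELS if le(tp, t))
```

For example, `{"bot", "left"}` is a valid observer while `{"left"}` is not, since it omits the level below `left`.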

A type environment \(\varGamma \) is a mapping from variables and events to their types, which are just security levels \(\tau , \sigma \). Given a dc-set \(\mathcal L\), a type environment \(\varGamma \) and an event environment \(E\), the subset of \(E\) to which \(\varGamma \) assigns security levels in \(\mathcal L\) is called the \(\mathcal L\)-part of \(E\) under \(\varGamma \). Similarly, if \(V: {Var} \rightarrow {Val}\) is a valuation, the subset of \(V\) whose domain is given levels in \(\mathcal L\) by \(\varGamma \) is the \(\mathcal L\)-part of \(V\) under \(\varGamma \).

Two event environments \(E_1, E_2\) or two valuations \(V_1, V_2\) are \(=_{\mathcal L}^{\varGamma }\)-equal, or indistinguishable by an \(\mathcal L\)-observer, if their \(\mathcal L\)-parts under \(\varGamma \) coincide:

Definition 6

(\({\varGamma \mathcal L}\)-equality of event environments and valuations).

Let \(\mathcal L\subseteq \mathcal {S}\) be a dc-set, \(\varGamma \) a type environment and \(V\) a valuation. Define:

$$\begin{array}{lll} E_1 \,=_{\mathcal L}^{\varGamma }\, E_2 &{} \ { if}\ \, &{} \forall \,ev\in Events\ \, (\,\varGamma (ev) \in \mathcal L\ \Rightarrow \ (\, ev \in E_1 \Leftrightarrow ev \in E_2\,)\,)\\ V_1 \,=_{\mathcal L}^{\varGamma }\, V_2 &{} \ { if}\ \, &{}\forall \, x\in {Var}\ \, (\,\varGamma (x) \in \mathcal L\ \Rightarrow \ \, V_1(x) = V_2(x)\,) \end{array} $$
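Definition 6 admits a direct executable reading, sketched below under the assumption that an event environment is a set of event names, a valuation is a dictionary from variables to values, and \(\varGamma \) is a dictionary from names to levels (all representations and concrete names are illustrative):

```python
def env_eq(E1, E2, gamma, L, events):
    """E1 =_L^Gamma E2: an event whose level lies in L is in E1 iff in E2."""
    return all((ev in E1) == (ev in E2) for ev in events if gamma[ev] in L)

def val_eq(V1, V2, gamma, L):
    """V1 =_L^Gamma V2: a variable whose level lies in L has the same value."""
    return all(V1[x] == V2[x] for x in V1 if gamma[x] in L)
```

Events and variables whose level falls outside \(\mathcal L\) are simply ignored, which is exactly what makes two environments indistinguishable to an \(\mathcal L\)-observer.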

Let \(\rightarrow _V, \Rightarrow _V, \Downarrow _V\) denote our various semantic arrows under the valuation \(V\). Then we may define the indistinguishability of two programs by a fine-grained or coarse-grained \(\mathcal L\)-observer, for a given \(\varGamma \), by means of the following two notions of \({\varGamma \mathcal L}\)-bisimilarity:

Definition 7

(Fine-grained \(\varGamma \mathcal L\)-bisimilarity).

A relation \(\mathcal{{R}}\) on programs is a fg-\(\varGamma \mathcal L\)-\(V_1V_2\)-bisimulation if \(s_1 \,\mathcal{R}\, s_2\) implies, for any \(E_1,E_2\) such that \(E_1 \,=_{\mathcal L}^{\varGamma }\, E_2\):

$$\begin{array}{l} (1)\, \langle s_1, E_1\rangle \rightarrow _{V_1}\!\!\langle s'_1, E'_1\rangle \, \Rightarrow \exists \,s'_2, E'_2 \, . \, (\langle s_2, E_2\rangle \Rightarrow _{V_2}\! \!\langle s'_2, E'_2\rangle \wedge E'_1 \,{=_{\mathcal L}^{\varGamma }}\, E'_2 \wedge s'_1 \,{\mathcal R}\, s'_2) \\ (2)\, {\langle }s_1,E_1{\rangle }\ddagger \ \Rightarrow \, \exists \, s'_2, E'_2 \,. \, (\langle s_2, E_2\rangle \Downarrow _{V_2} \!\langle s'_2, E'_2\rangle \, \wedge \, E_1 \,{=_{\mathcal L}^{\varGamma }}\, E'_2 \, \wedge \, \llcorner {s_1}\lrcorner _{E_1} \,{\mathcal R}\,\,\llcorner {s'_2}\lrcorner _{E'_2} ) \\ (3) \ \mathrm{and}\ (4): \text{ Symmetric } \text{ to } (1) \text{ and } (2) \text{ for } \langle s_2, E_2\rangle \text{ under } \text{ valuation } V_2\text{. } \end{array} $$

Then programs \(s_1, s_2\) are fg-\(\varGamma \mathcal L\)-bisimilar, \(s_1 \approx ^{\, fg }_{\varGamma \mathcal L}s_2\), if for any \(V_1,V_2\) such that \(V_1 \,=_{\mathcal L}^{\varGamma }\, V_2\), \(s_1 \,\mathcal{R}\, s_2\) for some fg-\(\varGamma \mathcal L\)-\(V_1V_2\)-bisimulation \(\mathcal{R}\).

The fg-\(\varGamma \mathcal L\)-bisimilarity weakly preserves small-step transitions and convergence, while maintaining the \(\varGamma \mathcal L\)-equality on event environments. Note that, while in the definition of fg-bisimilarity it was possible to leave the valuation implicit, we need to make it explicit in the definition of fg-\(\varGamma \mathcal L\)-bisimilarity, as variables have security levels and are allowed to have different values if their level is not in \(\mathcal L\).

The reason why a fg-\(\varGamma \mathcal L\)-\(V_1V_2\)-bisimulation is parameterised on two valuations \(V_1\) and \(V_2\), and the quantification on valuations in \(\approx ^{\, fg }_{\varGamma \mathcal L}\) is only performed at the beginning of the bisimulation game, rather than at each step as for event environments, is that programs have no means to change the valuation. In a more expressive language where the valuation could change, it would be necessary to include the valuation in the environment that is quantified at each step.

Definition 8

(Coarse-grained \(\varGamma \mathcal L\)-bisimilarity).

A relation \(\mathcal{{R}}\) on programs is a cg-\(\varGamma \mathcal L\)-\(V_1V_2\)-bisimulation if \(s_1 \,\mathcal{R}\, s_2\) implies, for any \(E_1,E_2\) such that \(E_1 \,=_{\mathcal L}^{\varGamma }\, E_2\):

\( \begin{array}{ll} (1) \ \langle s_1, E_1\rangle \Downarrow _{V_1}\langle s'_1, E'_1\rangle \ \, \Rightarrow \ \, \exists \, s'_2, E'_2 \, . \,&{} (\,\, \langle s_2, E_2\rangle \Downarrow _{V_2}\langle s'_2, E'_2\rangle \, \wedge \, E'_1 \,{=_{\mathcal L}^{\varGamma }}\, E'_2 \,\ \wedge \\ &{}\quad \qquad \qquad \qquad \llcorner {s'_1}\lrcorner _{E'_1}\, \,\mathcal{R}\,\,\, \llcorner {s'_2}\lrcorner _{E'_2} \,)\\ \end{array} \)

\((2)\) Symmetric to (1) for \(\langle s_2, E_2\rangle \) under valuation \(V_2\).

Two programs \(s_1, s_2\) are cg-\(\varGamma \mathcal L\)-bisimilar, \(s_1 \approx ^{\, cg }_{\varGamma \mathcal L}s_2\), if for any \(V_1,V_2\) such that \(V_1 \,=_{\mathcal L}^{\varGamma }\, V_2\), \(s_1 \,\mathcal{R}\, s_2\) for some cg-\(\varGamma \mathcal L\)-\(V_1V_2\)-bisimulation \(\mathcal{R}\).

Our reactive noninterference (RNI) properties are now defined as follows:

Definition 9

(Fine-grained and Coarse-grained RNI).

A program \(s\) is fg-secure in \(\varGamma \) if \(s \approx ^{\, fg }_{\varGamma \mathcal L}s\) for every dc-set \(\mathcal L\).

A program \(s\) is cg-secure in \(\varGamma \) if \(s \approx ^{\, cg }_{\varGamma \mathcal L}s\) for every dc-set \(\mathcal L\).
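Since the lattice is finite, the quantification over dc-sets in this definition is effective: the dc-sets are exactly the downward closures of arbitrary subsets, so they can be enumerated. A small sketch (the `le` argument is the lattice order; the two-point lattice used in the example is illustrative):

```python
from itertools import chain, combinations

def dc_sets(levels, le):
    """All downward-closed subsets of a finite lattice (levels, le)."""
    closure = lambda S: frozenset(tp for t in S for tp in levels if le(tp, t))
    subsets = chain.from_iterable(
        combinations(sorted(levels), r) for r in range(len(levels) + 1))
    return {closure(S) for S in subsets}
```

For the two-point lattice \(\{\bot ,\top \}\) this yields \(\emptyset \), \(\{\bot \}\) and \(\{\bot ,\top \}\): checking security amounts to checking self-bisimilarity against each of these observers.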

The following example, where the superscripts indicate the security levels of variables and events, illustrates the difference between fg-security and cg-security.

Example 1

The following program is cg-secure but not fg-secure:

$$ \begin{array}{ll} s = \mathtt{if}\,{x^\top = 0} &{}\mathtt{then}\, {{\mathtt{generate}\,{ev^{\,\bot }_1}}}\not \mid {{\mathtt{generate}\,{ev^{\,\bot }_2}}}\\ &{}\mathtt{else}\,{{\mathtt{generate}\,{ev^{\,\bot }_2}}}\not \mid {{\mathtt{generate}\,{ev^{\,\bot }_1}}} \end{array} $$

If we replace the second branch of \(s\) by \({{\mathtt{generate}\,{ev^{\,\bot }_1}}\,;\,{\mathtt{generate}\,{ev^{\,\bot }_2}}}\), then we obtain a program \(s'\) that is both fg-secure and cg-secure.

In general, from all the equivalences and inequivalences on page 11 we may obtain secure or insecure programs for the corresponding RNI property, by plugging the two equivalent (respectively inequivalent) programs into the branches of a high conditional.

As expected, fine-grained security is stronger than coarse-grained security:

Theorem 3

(Relation between the RNI properties).

Let \(s\in {Programs}\). If \(s\) is fg-secure then \(s\) is cg-secure.

Proof

The proof consists in showing that for any \(\varGamma \) and \(\mathcal L\), we have \(\approx ^{\, fg }_{\varGamma \mathcal L}\ \subseteq \ \approx ^{\, cg }_{\varGamma \mathcal L}\). To this end, it is enough to show that for any pair of valuations \(V_1\) and \(V_2\), any fg-\(\varGamma \mathcal L\)-\(V_1V_2\)-bisimulation \(\,\mathcal{R}\,\) is also a cg-\(\varGamma \mathcal L\)-\(V_1V_2\)-bisimulation. The reasoning closely follows that of Theorem 2 and is therefore omitted.

We conclude this section with an informal discussion about scheduling leaks. We speak of a scheduling leak when the position of the scheduler at the start of an instant may depend on secrets tested in previous instants. We have mentioned already that, unlike the “swapping” operator \(\Lsh \,\) of [2], our operator \(\,\not \mid \,\) preserves the spatial structure of programs. As a consequence, the same parallel component is scheduled at the beginning of each instant, and the position of the scheduler is independent of any previous test. Thus the scheduling leaks arising with the operator \(\Lsh \,\), which implied a severe constraint in the type system of [2] (the addition of the condition \(\sigma _i\le \tau _j\) in Rule (Par)), cannot occur anymore with \(\,\not \mid \,\). In particular, it may be shown that the scheduling leak example given in [2] does not arise if we replace \(\Lsh \,\) by \(\,\not \mid \,\). This point is explained in detail in [5].

Type System

We present now our security type system for \(CRL\), which is based on those introduced in [8] and [17] for a parallel while language and already adapted to a reactive language in [2]. The originality of these type systems is that they associate pairs \((\tau , \sigma )\) of security levels with programs, where \(\tau \) is a lower bound on the level of “writes” and \(\sigma \) is an upper bound on the level of “reads”. This allows the level of reads to be recorded, and then to be used to constrain the level of writes in the remainder of the program. In this way, it is possible to obtain a more precise treatment of termination leaks than in standard type systems.

Recall that a type environment \(\varGamma \) is a mapping from variables and events to security levels \(\tau , \sigma \). Moreover, \(\varGamma \) associates a type of the form \( \overrightarrow{\tau } \rightarrow \tau \) to functions, where \( \overrightarrow{\tau }\) is a tuple of types \(\tau _1,\ldots , \tau _n\). Type judgements for expressions and programs have the form \(\varGamma \,\vdash \,{exp}\, : \, \tau \) and \(\varGamma \,\vdash \,{s}\, : \, (\tau ,\sigma )\) respectively.

The intuition for \(\varGamma \,\vdash \,{exp}\, : \, \tau \) is that \(\tau \) is an upper bound on the levels of variables occurring in \(exp\). According to this intuition, subtyping for expressions is covariant. The intuition for \(\varGamma \,\vdash \,{s}\, : \, (\tau ,\sigma )\) is that \(\tau \) is a lower bound on the levels of events generated in \(s\) (the “writes”of \(s\)), and \(\sigma \) is an upper bound on the levels of events awaited or watched in \(s\) and of variables tested in \(s\) (the “reads” or guards of \(s\), formally defined in Definition 11). Accordingly, subtyping for programs is contravariant in its first component, and covariant in the second.
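The subtyping discipline just described can be sketched as a one-line check, contravariant in the write component and covariant in the guard component. The `subtype` function and the two-point lattice below are illustrative assumptions, not part of the formal system:

```python
def subtype(t1, t2, le):
    """(tau, sigma) <= (tau', sigma') iff tau' <= tau and sigma <= sigma'."""
    (tau1, sigma1), (tau2, sigma2) = t1, t2
    return le(tau2, tau1) and le(sigma1, sigma2)
```

Intuitively, a program whose writes are all at level \(\top \) can safely be used where writes at level \(\bot \) or above are expected, while the bound on its reads may only be weakened upwards.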

The typing rules for expressions and programs are presented in Fig. 2. The rules that increase the guard type are (Await), (Watching), (Repeat) and (Cond1), and those that check it against the write type of the continuation are (Seq), (Repeat) and (Loop). Note that there are two more rules for the conditional, factoring out the cases where either both branches terminate in one instant or both branches are infinite: indeed, in these cases no termination leaks can arise and thus it is not necessary to increase the guard level. In Rule (Cond2), FIN denotes the set of programs terminating in one instant, namely those built without using the constructs \({\mathtt{await}\,{ev}}\), \({\mathtt{cooperate}}\) and loop. In Rule (Cond3), INF denotes the set of infinite or nonterminating programs, defined inductively as follows:

  • \({\mathtt{loop}\,{s}} \, \in \, INF\);

  • \(s\in INF \,\Rightarrow \,{\mathtt{repeat}\,{exp}\,\mathtt{do}\,{s}} \in INF\);

  • \(s_1\in INF \ \vee \ s_2\in INF \,\Rightarrow \,s_1;s_2 \in INF \ \wedge \ {s_1}\not \mid {s_2} \in INF\);

  • \(s_1\in INF \ \wedge \ s_2\in INF \,\Rightarrow \,{\mathtt{if}\,{exp}\,\mathtt{then}\,{s_1}\,\mathtt{else}\,{s_2}}\in INF \).

Note that \(FIN \cup INF \subset {Programs}\). Examples of programs that are neither in \(FIN\) nor in \(INF\) are: \({\mathtt{await}\,{ev}}\), \({\mathtt{if}\,{exp}\,\mathtt{then}\,{{\mathtt{nothing}}}\,\mathtt{else}\,{({\mathtt{loop}\,{s}})}}\), and \({\mathtt{do}\,{({\mathtt{loop}\,{s}})}\,\mathtt{watching}\,{ev}}\).
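The two predicates can be sketched over a toy abstract syntax. The tuple representation below (constructor names included) is an illustrative assumption, not the concrete syntax of \(CRL\):

```python
# Programs as tuples: ("nothing",), ("generate", ev), ("await", ev),
# ("cooperate",), ("seq", s1, s2), ("par", s1, s2), ("loop", s),
# ("repeat", x, s), ("if", x, s1, s2), ("watching", s, ev).

def in_fin(s):
    """FIN: built without await, cooperate and loop."""
    if s[0] in ("await", "cooperate", "loop"):
        return False
    return all(in_fin(c) for c in s[1:] if isinstance(c, tuple))

def in_inf(s):
    """INF: the inductive definition given in the text."""
    op = s[0]
    if op == "loop":
        return True
    if op == "repeat":
        return in_inf(s[2])
    if op in ("seq", "par"):
        return in_inf(s[1]) or in_inf(s[2])
    if op == "if":
        return in_inf(s[2]) and in_inf(s[3])
    return False
```

On the examples above, `("await", "ev")` and a conditional with one finite and one infinite branch fall outside both sets, matching \(FIN \cup INF \subset Programs\).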

Fig. 2. Security type system

Definition 10

(Safe conditionals).

A conditional \({\mathtt{if}\,{exp}\,\mathtt{then}\,{s_1}\,\mathtt{else}\,{s_2}}\) is safe if \(s_1, s_2\in FIN\) or \(s_1,s_2\in INF\).

The reason for calling such conditionals “safe” is that they cannot introduce termination leaks, since their two branches have the same termination behaviour.

Note that the two sets \(FIN\) and \(INF\) only capture two specific termination behaviours of \(CRL\) programs, namely termination in one instant and nontermination. We could have refined this classification further. Indeed, while only two termination behaviours are possible within each instant, due to reactivity (namely, proper termination and suspension), across instants there are infinitely many possible termination behaviours: termination in a finite number \(k\) of instants, for any possible \(k\), and nontermination. In other words, we could have defined a set \(FIN_k\) for each \(k\) and replaced Rule (Cond2) by a rule schema (Condk). The idea would remain the same: conditionals with uniform termination behaviour need not raise their guard level. For simplicity, we chose to focus on \(FIN\) and \(INF\), leaving the finer analysis for future work.

We now prove that typability implies security via the classical steps:

Lemma 2

(Subject Reduction).

Let \(\,\varGamma \,\vdash \,{s}\, : \, (\tau ,\sigma )\). Then \({{\langle }s,E{\rangle }{\,\rightarrow \,}{\langle }s',E'{\rangle }}\) implies \(\varGamma \,\vdash \,{s'}\, : \, (\tau ,\sigma )\), and \({\langle }s,E{\rangle }\ddagger \) implies \(\varGamma \,\vdash \,{\llcorner {s}\lrcorner _{E}}\, : \, (\tau ,\sigma )\).

Definition 11

(Guards and Generated Events).

(1) For any \(s\), \( Guards (s)\) is the union of the set of events \(ev\) such that \(s\) contains an \({\mathtt{await}\,{ev}}\) or a \({\mathtt{do}\,{s'}\,\mathtt{watching}\,{ev}}\) instruction (for some \(s'\)), together with the set of variables \(x\) that occur in \(s\) as argument of a function or in the control expression \(exp\) of an instruction \(\,{\mathtt{repeat}\,{exp}\,\mathtt{do}\,{s'}}\) or of an unsafe conditional \({\mathtt{if}\,{exp}\,\mathtt{then}\,{s_1}\,\mathtt{else}\,{s_2}}\,\) in \(s\).

(2) For any \(s\), \( Gen (s)\) is the set of events \(ev\) such that \({\mathtt{generate}\,{ev}}\) occurs in \(s\).
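Both sets are straightforward syntactic traversals. The sketch below reuses a toy tuple syntax (`("await", ev)`, `("watching", body, ev)`, `("repeat", x, body)`, `("if", x, s1, s2)`, `("seq", s1, s2)`, `("par", s1, s2)`, `("loop", s)`, `("generate", ev)`), which is an illustrative assumption; for brevity it omits variables passed as function arguments, and takes the safe-conditional test of Definition 10 as a hypothetical oracle:

```python
def guards(s, safe=lambda s1, s2: False):
    """Guards(s): events awaited or watched, plus variables tested in a
    repeat or in an unsafe conditional. `safe` is a hypothetical oracle."""
    op, acc = s[0], set()
    if op == "await":
        acc.add(s[1])
    elif op == "watching":
        acc.add(s[2])
        acc |= guards(s[1], safe)
    elif op == "repeat":
        acc.add(s[1])
        acc |= guards(s[2], safe)
    elif op == "if":
        if not safe(s[2], s[3]):
            acc.add(s[1])
        acc |= guards(s[2], safe) | guards(s[3], safe)
    elif op == "loop":
        acc |= guards(s[1], safe)
    elif op in ("seq", "par"):
        acc |= guards(s[1], safe) | guards(s[2], safe)
    return acc

def gen(s):
    """Gen(s): events ev such that generate ev occurs in s."""
    if s[0] == "generate":
        return {s[1]}
    out = set()
    for c in s[1:]:
        if isinstance(c, tuple):
            out |= gen(c)
    return out
```

Lemma 3 then says that a typing \((\tau ,\sigma )\) bounds \(\varGamma \) on these two sets: \(\sigma \) from above on `guards(s)`, \(\tau \) from below on `gen(s)`.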

The following lemma establishes that if \(\varGamma \,\vdash \,{s}\, : \, (\tau ,\sigma )\), then \(\tau \) is a lower bound on the levels of events in \( Gen (s)\) and \(\sigma \) is an upper bound on the levels of events and variables in \( Guards (s)\).

Lemma 3

(Guard Safety and Confinement).  

  1. If \(\varGamma \,\vdash \,{s}\, : \, (\tau ,\sigma )\), then \(\varGamma (g) \le \sigma \) for every \(g \in Guards (s)\);

  2. If \(\varGamma \,\vdash \,{s}\, : \, (\tau ,\sigma )\), then \(\tau \le \varGamma (ev)\) for every \(ev\in Gen (s)\).

We now state the main result of this section, namely the soundness of the type system for fine-grained reactive noninterference (and thus, by Theorem 3, also for coarse-grained reactive noninterference). The proof involves some additional definitions and preliminary results, which are not given here but reported in [5].

Theorem 4

(Typability \(\Rightarrow \) Fine-grained RNI).

Let \(s\in {Programs}\). If \(s\) is typable in \(\varGamma \) then \(s\) is fg-secure in \(\varGamma \).

Note that programs \(s,s'\) of Example 1 are not typable (although cg-secure).

Example 2

The following programs are neither typable nor secure, for either of the two security properties:

$$\begin{array}{l} {{\mathtt{await}\,{ev_1^\top }}\,;\,{\mathtt{generate}\,{ev_2^\bot }}}\\ {\mathtt{loop}\,{({{\mathtt{generate}\,{ev_2^\bot }}\,;\,{\mathtt{await}\,{ev_1^\top }}})}}\\ {\mathtt{repeat}\,{x^\top }\,\mathtt{do}\,{{\mathtt{generate}\,{ev^\bot }}}} \\ ({\mathtt{repeat}\,{x^\top }\,\mathtt{do}\,{{\mathtt{cooperate}}}})\ ; \ {\mathtt{generate}\,{ev^\bot }} \\ {\mathtt{do}\,{({{\mathtt{cooperate}}\,;\,{\mathtt{generate}\,{ev_2^\bot }}})}\,\mathtt{watching}\,{ev_1^\top }} \end{array} $$

The insecure flows in the first two programs are termination leaks, due to the possibility of suspension. The second program illustrates the need for the condition \(\sigma \le \tau \) in Rule (Loop) (to produce a similar example with \(\mathtt{repeat}\) we need at least three security levels). The fourth program shows why the guard level of \(\mathtt{repeat}\) should be raised in Rule (Repeat).

We conclude with some examples illustrating the use of the conditional rules.

Example 3

The following programs \(s_i\) and \(s\) are all typable, with the given type:

$$\begin{array}{lr} s_1 = {\mathtt{if}\,{(x^\top =0)}\,\mathtt{then}\,{{\mathtt{await}\,{ev^{\top }_1}}}\,\mathtt{else}\,{{\mathtt{cooperate}}}}&{}\text{ type } (\top ,\top )\\ s_2 = {\mathtt{if}\,{(x^\top = 0)}\,\mathtt{then}\,{{\mathtt{nothing}}}\,\mathtt{else}\,{{\mathtt{generate}\,{ev^\top }}}}&{}\text{ type } (\top ,\bot )\\ s_3 = {\mathtt{if}\,{(x^\top = 0)}\,\mathtt{then}\,{{\mathtt{nothing}}}\,\mathtt{else}\,{({\mathtt{loop}\,{{\mathtt{nothing}}}})}}&{}\text{ type } (\top ,\top )\\ s_4 = {\mathtt{if}\,{(x^\top = 0)}\,\mathtt{then}\,{({\mathtt{loop}\,{{\mathtt{nothing}}}})}\,\mathtt{else}\,{({\mathtt{loop}\,{{\mathtt{cooperate}}}})}}&{}\text{ type } (\top ,\bot )\\ s = {\mathtt{generate}\,{ev^{\,\bot }_2}}&{}\text{ type } (\bot ,\bot )\\ \end{array} $$

Indeed, for all programs \(s_i\) the first component of the type (the write type) must be \(\top \), because each of the Rules (Cond1), (Cond2) and (Cond3) prevents a “level drop” from the tested expression to the branches of the conditional, as in classical security type systems. On the other hand, the second component of the type (the guard type) will be \(\bot \) for the safe conditionals \(s_2\) and \(s_4\), typed respectively using Rules (Cond2) and (Cond3), and \(\top \) for the unsafe conditionals \(s_1\) and \(s_3\), typed using Rule (Cond1).

Then \(s_2;s\) and \(s_4;s\) are typable but not \(s_1;s\) nor \(s_3;s\).

Conclusion and Related Work

We have studied a core reactive language \(CRL\) and established a reactivity result for it, similar to those of [3, 10] but based on different design choices. We also provided a syntactic bound on the length of converging sequences.

We then proposed two RNI properties for the language, together with a security type system ensuring them. Our RNI properties rely on two bisimulation equivalences of different granularity. One of them, coarse-grained bisimilarity, is reminiscent of the semantic equivalence studied by Amadio in [4], which however was based on trace semantics. Our RNI properties also bear some analogy with the notions of reactive noninterference proposed in [7], and particularly with the termination-insensitive notion of ID-security (see also [15]), although the underlying assumptions of the model are quite different.

The model of cooperative threads of [1] is close in spirit to the model of \(CRL\), but it is not concerned with synchronous parallelism. We should stress here that, to be appropriate for the study of a global computing setting, our synchronous model is intended to be part of a more general GALS model (Globally Asynchronous, Locally Synchronous), where various “synchronous areas” coexist and evolve in parallel, interacting with each other in an asynchronous way.

The idea of “slowing down” loops by forcing them to yield the control at each iteration, which is crucial for our reactivity result, was already used in [10] for a similar purpose. A similar instrumentation of loops was proposed in [14]. However, while in our work and in [10] a cooperate instruction is added in parallel with each iteration of the body of the loop, in [14] it is added after each iteration of the body. In a language that allows a parallel program to be followed in sequence by another program (which is not the case in [14]), our solution is more efficient in that it avoids introducing an additional suspension in case the body of the loop already contains one.

As regards future work, we expect some of our results - determinism, reactivity - to carry over smoothly to \(CRL\) extended with memory. However, some other properties, such as the commutativity of \(\not \mid \), will no longer hold in such a setting, at least if the memory is freely shared among threads. Nevertheless, our bisimilarities and security properties would continue to make sense in such an extended language. In the longer run, we plan to extend our study to a fully-fledged distributed reactive language, where programs are executed on different sites and may migrate from one site to another. In this setting, execution would still be synchronous and reactive within each site (each site would be a “synchronous area” within a GALS model), but asynchronous among different sites. A migrating thread would be integrated into the destination site only when the latter becomes ready to react to its environment (whence the importance of local reactivity in each site). In a more expressive language with blocking I/O operations or other forms of abnormal termination, the enforcement of the reactivity property as well as the treatment of termination channels is likely to become more complex (although the time-out mechanism provided by the watching statement could be of some help here).