1 Introduction

The topic of self-adaptivity has emerged as a key research subject in various application domains, in response to the growing complexity of software systems operating in many different scenarios and in highly dynamic environments. To manage this complexity at a reasonable cost, novel approaches are needed in which a system can promptly react to crucial changes by reconfiguring its behaviour autonomously and dynamically, in accordance with evolving policies and objectives.

A precise definition of self-adaptivity is still a debated question, owing to the wide spectrum of features involved. In our opinion, a simple but rather deep characterisation is the one presented in [7]: "we define adaptation as the run-time modification of the control data [...] and a component is self-adaptive if it is able to modify its own control data at run-time". We follow [7] in claiming that we need to distinguish between standard data and control data: a change in the system behaviour is part of the application logic if it is based on standard data, and it is an adaptation if it is based on control data.

This paper injects the above notion of self-adaptivity into the formal framework of multiparty sessions [28], where each participant can access and modify the global state representing those (control) data whose values are critical for planning the adaptation steps, in such a way that the whole system can react to changes in the global data by reconfiguring itself. A system comprises four active parties: global types, monitors, processes and adaptation functions.

A global type represents the overall communication choreography [9]; its projections onto the participants generate the monitors, which are essentially local types and set up the communication protocols of the participants. The association of a monitor with a compliant process, dubbed a monitored process, incarnates a participant, where the process provides the implementation of the protocol prescribed by the monitor. Notably, we exploit intersection types, union types and subtyping to make this compliance relation flexible. Processes are able to follow different, even incompatible, computational paths. For instance, a process could contain both the code needed to buy a book and the code needed to arrange a meeting with a friend, the choice between the two being determined by the monitor controlling it.

The adaptation strategy is defined by global types and adaptation functions. The choreography decides when the adaptation takes place, since its monitors prescribe when some participants have to check the global data and then send an adaptation request to the other participants, together with an adaptation function. The adaptation functions encode the dynamic evolution policy, since they prescribe how the system needs to reconfigure itself based on the changes of the critical data.

When an adaptation flag is raised, new monitors are generated, according to a new choreography: indeed, the community involved in the session modifies both its set of participants and its internal communication patterns. Therefore, dynamic adaptations are essentially triggered by control data and monitors.

Most approaches to self-adaptive systems in the literature do not face the main challenge of including formal tools that ensure the correctness of dynamic adaptations. Some approaches address this issue by providing verification techniques for testing properties of the performed adaptation (e.g. model checking in [25] and a web services testbed in [36]). In contrast, the focus of the present paper is on the formal properties of the proposed framework, which ensure that adaptation steps are performed correctly, being controlled by global types, monitors and process types. The key results are the proofs of the Subject Reduction and Progress theorems: in any session, all outputs will eventually be consumed and all processes waiting for an input will eventually receive it.

Typical scenarios that can benefit from our self-adaptation framework are those characterised by the following features:

  • a community, established for a common task or mission, has many distributed entities which interact with each other according to a given operational plan,

  • the complex dynamic environment can present crucial events which require the community to modify its plan dynamically,

  • those critical events are registered in a separate component of the system: this component can be checked by the session participants, so that the whole system can react promptly by updating itself,

  • the dynamic changes need to be rather flexible: in each new phase, other participants can be introduced or some of the old participants are no longer involved (temporarily or permanently),

  • these dynamic changes need to be safe: interactions must proceed correctly to pursue the common task.

1.1 Example

As an example of such a scenario, let us consider a company which has various production units and sales organisations scattered around the world. Each factory has a number of machines and produces several products for nearby markets or for export. The state of the plants is checked periodically. Factories and sellers exchange data about products and prices, according to a given factory-seller combination for each product. The company chief supervises the whole organisation. In particular, she has equipped the company with an adaptation policy, which gives potential alternative plans for moving the production and/or sales of a product to different entities. All the interactions among these participants run under the control of the monitors that are generated from a global type. Finally, a global state contains crucial data, for instance, the performance of machinery, plants and sales organisations. Unforeseen circumstances, such as the catastrophic event of a fire incapacitating an entire plant, can require the company organisation to update itself: new production and sales plans have to be adopted to maintain uninterrupted supply to customers.

We simplify the above scenario in the case of a Company which has two factories, \(\mathsf {iF} \) (Italian factory) and \(\mathsf {aF} \) (American factory), and two sellers, \(\mathsf {iS} \) (Italian seller) and \(\mathsf {aS} \) (American seller). In order to give a preliminary intuition of our system, we use a simplified and incomplete syntax (w.r.t. the formal presentation of the next section). In Sect. 5, we will enrich and formalise this example.

To show how self-adaptation works, we consider the case when a fire incapacitates a factory. The global state contains either \(\mathsf {OK} \) or \(\mathsf {KO} \) for each of the two plants. When both plants are \(\mathsf {OK} \) the interaction takes place according to the following global type:

$$\begin{aligned} \mathsf{G}_1={\left\{ \begin{array}{ll} {\mathsf {iS}}\rightarrow {\mathsf {iF}}{ :}(\text {String},\text {Int}).\\ {\mathsf {aS}}\rightarrow {\mathsf {aF}}{ :}(\text {String},\text {Int}).\\ \mathsf {Ada} \rightarrow \{{\mathsf {iS}},{\mathsf {iF}},{\mathsf {aS}},{\mathsf {aF}}\}{ :}{\textit{check}} \end{array}\right. } \end{aligned}$$

Each seller requests from the corresponding factory a certain amount (of type \(\text {Int}\)) of an item (of type \(\text {String}\)); then the chief Ada sends a checking flag to all participants, as an alert for a possible adaptation. When the Italian factory is \(\mathsf {OK} \), while the American factory is \(\mathsf {KO} \), the global type is:

$$\begin{aligned} \mathsf{G}_2={\left\{ \begin{array}{ll} \mathsf {Ada} \rightarrow \mathsf {Bob} { :}\text {String}.\\ {\mathsf {iS}}\rightarrow {\mathsf {iF}}{ :}(\text {String},\text {Int}).\\ {\mathsf {aS}}\rightarrow {\mathsf {iF}}{ :}(\text {String},\text {Int}).\\ \mathsf {Ada} \rightarrow \{{\mathsf {iS}},{\mathsf {iF}},{\mathsf {aS}},\mathsf {Bob} \}{ :}{\textit{check}} \end{array}\right. } \end{aligned}$$

where Ada sends to Bob a contract (type \(\text {String}\)) for rebuilding the plant and both sellers send their requests to the Italian factory. In the symmetric case, the global type \(\mathsf{G}_3\) prescribes that both sellers send their requests to the American factory. Finally, when both factories are \(\mathsf {KO} \), \(\mathsf {Ada} \) just closes down the business by sending the label to both sellers:

The processes in Table 1 are implementations of the monitors generated by projection from all the above global types. For instance, the monitor of \(\mathsf {aS} \) from \(\mathsf{G}_1\) and \(\mathsf{G}_3\) is

$$\begin{aligned} {\mathsf {aF}}!(\text {String},\text {Int}).\mathsf {Ada}? {\textit{check}}, \end{aligned}$$

where \(!\) represents output and \(?\) represents input. The monitor of the American seller from \(\mathsf{G}_2\) is similar:

$$\begin{aligned} {\mathsf {iF}}!(\text {String},\text {Int}).\mathsf {Ada}? {\textit{check}}. \end{aligned}$$

Its monitor from \(\mathsf{G}_4\) is simpler: . The process code for the seller has only two alternative behaviours, since processes do not mention senders and receivers. The seller can send the item and the amount on channel \(y\), receive the check and then restart. Otherwise, he can receive and stop.

The control data can be modified by the factory, which writes \(\mathsf {KO} \) when it is incapacitated, and by Bob, who writes \(\mathsf {OK} \) when he has accomplished the rebuilding task. The adaptation function \(F\) in Ada’s process gives the new global type when applied to the pair \((\textit{state }{\mathsf {iF}},\textit{state }{\mathsf {aF}})\), i.e.

$$\begin{aligned} \begin{array}{lll}F({\mathsf {OK}},{\mathsf {OK}})=\mathsf{G}_1 &{}\qquad &{}F({\mathsf {OK}},{\mathsf {KO}})=\mathsf{G}_2\\ F({\mathsf {KO}},{\mathsf {OK}})=\mathsf{G}_3&{}&{}F({\mathsf {KO}},{\mathsf {KO}})=\mathsf{G}_4\end{array} \end{aligned}$$
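The adaptation function \(F\) above can be sketched as a finite map; this is an illustrative Python encoding, assuming the four global types are represented by opaque tags (the names `G1`..`G4` and `adapt` are ours, not part of the formal calculus):

```python
# The adaptation policy of the Company example: F maps the pair of plant
# states to the tag of the new global type (opaque tags, our own names).
F = {
    ("OK", "OK"): "G1",
    ("OK", "KO"): "G2",
    ("KO", "OK"): "G3",
    ("KO", "KO"): "G4",
}

def adapt(state_iF, state_aF):
    """Return the global type chosen for the current plant states."""
    return F[(state_iF, state_aF)]
```

For instance, after the fire at the American plant, `adapt("OK", "KO")` selects `G2`; once Bob has rebuilt the plant, `adapt("OK", "OK")` brings the system back to `G1`.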
Table 1 Processes for the Company example

A process can implement several different monitors, also thanks to the external choice constructor. For instance, the process Seller can implement all the monitors that are generated by projecting the above global types onto the participants \(\mathsf {iS} \) and \(\mathsf {aS} \).

Let us consider the system choreographed by \(\mathsf{G}_1\) with the global data \(({\mathsf {OK}},{\mathsf {OK}})\). The American factory changes its state to \(\mathsf {KO} \)  and then, when the chief checks the global data, the function \(F\) generates the adaptation step which produces the global type \(\mathsf{G}_2\). After this adaptation, Bob is a new participant, while the American factory is out. Then, the American seller, as prescribed by his monitor, sends his requests to the Italian factory. When process Bob writes \(\mathsf {OK} \) for the American factory and the Italian factory is still \(\mathsf {OK} \), the global type produced by the adaptation step is again \(\mathsf{G}_1\). Then, the American factory comes back into the scene.

The present paper is a revised and extended version of Coppo et al. [12]. Key additions with respect to that work are a full formalisation of the safety properties and further examples illustrating the characterising features of our framework.

1.2 Structure of the paper

Sections 2 and 3 present the syntax of our calculus and its type system, respectively. The formal semantics is given in Sect. 4. Examples in Sect. 5 illustrate key technical points of our approach. Formal properties are the content of Sect. 6. Related works are discussed in Sect. 7. Section 8 concludes.

Table 2 Projection of a global type onto a participant

2 Syntax

2.1 Global types

Following a widely common approach, the set-up of protocols starts from global types. Global types establish overall communication schemes. In our setting, they also control the reconfiguration phase, in which a system adapts itself to new environmental conditions.

Let \(\text {L}\) be a set of labels, ranged over by \(\ell \), which mark the exchanged values as in [19], and let \(\Lambda \) be a set of flags, ranged over by \(\lambda \), which transmit the adaptation information. We assume a collection of basic sorts, ranged over by \(S\), i.e.

$$\begin{aligned} S~~ {::=}~~ \mathsf {Bool} ~~\mathbf {|\!\!|}~ ~\mathsf {Int} ~~\mathbf {|\!\!|}~ ~\ldots \end{aligned}$$

Definition 1

Global types are defined by:

$$\begin{aligned} \mathsf{G}~~ {::=} ~~\mathsf{p}\rightarrow \varPi :\{\ell _i({S}_i). \mathsf{G}_i \}_{i \in I}~ ~\mathbf {|\!\!|}~ ~\mathsf{p}\rightarrow \varPi { :}\{\lambda _i \}_{i\in I} ~~\mathbf {|\!\!|}~ ~\mathsf {end} \end{aligned}$$

In writing \(\{\ell _i(S_i).\mathsf{G}_i\}_{i\in I}\) and \(\{\lambda _i\}_{i \in I}\), we implicitly assume that \(\ell _i \ne \ell _j\) and \(\lambda _i \ne \lambda _j\) for all \(i\ne j\). There are only two kinds of communications: value exchange and adaptation flag exchange. Each value exchange is characterised by a label, which makes it possible to represent choices. The sender is \(\mathsf{p}\), while \(\varPi \) is the set of receivers, which does not contain \(\mathsf{p}\) and cannot be empty. The participants of a global type \(\mathsf{G}\), ranged over by \(\mathsf{p},\mathsf{q},\ldots \), are all the senders and receivers in \(\mathsf{G}\). We denote by \(\mathsf {pa} (\mathsf{G})\) the set of all participants in \(\mathsf{G}\).

Global types can terminate in two ways: either with the usual \(\mathsf {end} \) or with the exchange of adaptation flags. In the latter case, the adaptation flags are sent by a participant to all the other ones. Adaptation flags can be seen as synchronisation points, interleaved in a conversation, at which different interaction paths can be taken. In the syntax of global types there is no recursion operator, but recursive protocols can be obtained by reconfiguring the system with the same global type. Recursion can then be considered as a particular case of reconfiguration. A recursion operator is instead included in the syntax of processes. For instance, all processes in the company example of the Introduction are recursive, and they implement monitors which are projections of non-recursive global types.

Notably, we do not allow parallel composition of global types, which is quite common in the literature [3, 9, 10, 28]. As a matter of fact, many papers [3, 9, 28] require that two global types can be put in parallel only if their sets of participants are disjoint, so parallel composition can be expressed by interleaving. Without this condition, parallel composition of global types requires some care [10]. This issue is orthogonal to the present framework, where each participant, in all reconfiguration steps, follows one global type only (see Tables 8 and 9).

2.2 Monitors

Monitors can be viewed as local types that are obtained as projections of global types onto individual participants, as in the standard approach of Honda et al. [28] and Bettini et al. [2]. The only syntactic differences are the presence of the adaptation flags and the absence of recursion and delegation. In our calculus, however, monitors are more than types: they have an active role in system dynamics, since they guide communications and adaptations.

Definition 2

The set of monitors is defined by:

$$\begin{aligned} \begin{array}{rcllllllr} \fancyscript{M}&{} {::=} &{} {\mathsf{p}}?\{\ell _i (S_i). \fancyscript{M}_i\}_{i \in I} &{} ~\mathbf {|\!\!|}~ &{} {\varPi }!\{\ell _i (S_i). \fancyscript{M}_i\}_{i \in I}&{} ~\mathbf {|\!\!|}~ \\ &{} &{} {\mathsf{p}}?\{{\lambda }_i\}_{i \in I} &{} ~\mathbf {|\!\!|}~ &{} {\varPi }!\{{\lambda }_i \}_{i \in I} &{} ~\mathbf {|\!\!|}~ \\ &{}&{} \mathsf {end} \end{array} \end{aligned}$$

The constructs in the first line correspond to input and output actions, respectively. An input monitor \({\mathsf{p}}?\{\ell _i (S_i). \fancyscript{M}_i\}_{i \in I}\) fits with a process that can receive, for each \(i\in I\), a value of sort \(S_i\), labelled by \(\ell _i\), having as continuation a process which agrees with \(\fancyscript{M}_i\). This corresponds to an external choice. Dually, an output monitor \({\varPi }!\{\ell _i (S_i). \fancyscript{M}_i\}_{i \in I}\) fits with a process which can send (by an internal choice) a value of sort \(S_i\), distinguished by the label \(\ell _i\) for each \(i\in I\), and then continues as prescribed by \(\fancyscript{M}_i\).

The projection of global types onto participants is given in Table 2. A projection is undefined when a participant not involved in a choice, as sender or receiver, has different projections in different branches (condition \(\mathsf{G}_i \! \upharpoonright \! \mathsf{q}\, = \mathsf{G}_j \! \upharpoonright \! \mathsf{q}\,\) for all \(i,j\in I\)). Monitors are the results of such projections.

A global type \(\mathsf{G}\) is well formed if its projections are defined for all participants and all occurrences of

$$\begin{aligned} \mathsf{p}\rightarrow \varPi :\{\lambda _i \}_{i\in I} \end{aligned}$$

are such that \(\varPi \cup \{\mathsf{p}\}=\mathsf {pa} (\mathsf{G})\): i.e. all participants are involved in each flag exchange. In the following, we assume that all global types are well formed.
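Since Table 2 is not reproduced here, the following Python sketch reconstructs projection and well-formedness from the textual description, under our own tuple encoding of global types (sorts are elided; all function and tag names are ours):

```python
# Global types as tuples (our own encoding, sorts elided):
#   ("comm",  p, receivers, {label: continuation})   -- p -> Pi : {l_i(S_i).G_i}
#   ("flags", p, receivers, flags_set)               -- p -> Pi : {lambda_i}
#   "end"

def pa(G):
    """pa(G): all senders and receivers occurring in G."""
    if G == "end":
        return set()
    kind, p, receivers, body = G
    parts = {p} | set(receivers)
    if kind == "comm":
        for cont in body.values():
            parts |= pa(cont)
    return parts

def project(G, q):
    """The monitor of participant q, or None when the projection is undefined."""
    if G == "end":
        return "end"
    kind, p, receivers, body = G
    if kind == "comm":
        if q == p or q in receivers:
            branches = {l: project(cont, q) for l, cont in body.items()}
            if None in branches.values():
                return None
            return ("out", receivers, branches) if q == p else ("in", p, branches)
        # q is not involved: all branches must project identically
        projs = [project(cont, q) for cont in body.values()]
        return projs[0] if all(pr == projs[0] for pr in projs) else None
    # flag exchange: in a well-formed global type it involves every participant
    if q == p:
        return ("outflag", receivers, body)
    return ("inflag", p, body) if q in receivers else None

def well_formed(G):
    """All projections are defined and every flag exchange involves pa(G)."""
    participants = pa(G)
    def flags_ok(G):
        if G == "end":
            return True
        kind, p, receivers, body = G
        if kind == "flags":
            return set(receivers) | {p} == participants
        return all(flags_ok(cont) for cont in body.values())
    return flags_ok(G) and all(project(G, q) is not None for q in participants)
```

As a usage example, a truncated version of \(\mathsf{G}_1\) from the Introduction (the label `req` is our own placeholder) projects onto \(\mathsf {aS} \) as an input of the `check` flag from Ada, and is well formed.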

2.3 Processes

Processes represent code that is associated with monitors in order to implement participants.

Unlike in session calculi [2, 3, 11, 19, 23, 24, 27, 28, 33, 34], processes do not specify the participants involved in sending and receiving actions: the associated monitors determine senders and receivers. Processes represent flexible code that can be associated with different monitors to incarnate different participants. Besides communicating, processes can access the global state to read or change it.

The communication actions of processes are performed through channels. Each process owns a unique channel. We use \(y\) to denote this channel in the user code. As usual, the user channel \(y\) will be replaced at run-time by a session channel \({\mathsf {s}}[\mathsf{p}]\) (where \({\mathsf {s}}\) is the session name and \(\mathsf{p}\) is the current participant). Let \(\mathsf {c}\) denote a user channel or a session channel. We could avoid writing the channel explicitly in the user syntax, but not in the run-time syntax; we have chosen to write all channels to simplify the definition of processes.

Definition 3

Processes are defined by:

$$\begin{aligned} \begin{array}{rclll} P&{} :{:=} &{} \mathbf {0}~~~~\mathbf {|\!\!|}~ ~~~ \mathsf {op}. P~~~~\mathbf {|\!\!|}~ ~~~ X~~~~\mathbf {|\!\!|}~ ~~~ \mu X.P&{}~\mathbf {|\!\!|}~ \\ &{} &{} \mathsf {c} ? \ell (x) .P&{} ~\mathbf {|\!\!|}~ \mathsf {c} ! \ell (\mathsf {e}). P&{}~\mathbf {|\!\!|}~ \\ &{} &{} \mathsf {c} ? (\lambda ,\mathsf {T}) .P&{} ~\mathbf {|\!\!|}~ \mathsf {c} ! (\lambda (F),\mathsf {T}). P&{}~\mathbf {|\!\!|}~ \\ &{} &{} \mathsf {if} ~ \mathsf {e} ~\mathsf {then} ~P ~\mathsf {else} ~P &{} ~\mathbf {|\!\!|}~ P+P\end{array} \end{aligned}$$

The syntax of processes is rather standard, in particular the operator \(+\) represents external choice. Notice that internal and external choices with many branches can easily be encoded in our calculus, which gains in simplicity. In writing processes, we assume the precedence: prefix, external choice and recursion. Note that in the sending and receiving actions, the involved participants are missing. For instance, \(\mathsf {c} ! \ell (\mathsf {e}). P\) denotes a process which sends via the channel \(\mathsf {c}\) the label \(\ell \) and the value of the expression \(\mathsf {e} \) and then has \(P\) as continuation. Notably, a system has a global state (see Definition 5), and the \(\mathsf {op} \) operator represents an action on this global state, for instance, a “read” or “write” operation. We leave the kind of actions unspecified, since we are only interested in the dynamic changes of this state, which plays the role of the control data for the self-reconfiguration of the whole system.

Types, which are statically assigned to processes, will be formally introduced in Sect. 3 (Definition 7). Types are mainly aimed at checking the matching between processes and monitors. It is convenient to include a type annotation in the syntax of the adaptation flag. The input flag \(\mathsf {c} ? (\lambda ,\mathsf {T}) .P\) represents a process that, after receiving the adaptation flag \(\lambda \), has a continuation of type \(\mathsf {T}\). Thus, the explicit annotation \(\mathsf {T}\) makes it easy to dynamically check whether, after the adaptation, the current process can continue with that type, inside the new monitor. The output flag \(\mathsf {c} ! (\lambda (F),\mathsf {T}). P\) also contains the adaptation function \(F\). The application of \(F\) to the global state will determine the new global type, which provides a new choreography for the system.

2.4 Network

A process is always controlled by a monitor, which ensures that all performed actions fit the protocol prescribed by the global type. Each monitor controls a single process. So, participants correspond to pairs of processes and monitors. We write \(\fancyscript{M}[P]\) to represent a process \(P\) controlled by a monitor \(\fancyscript{M}\), dubbed monitored process. In a reconfiguration phase, the monitor controlling the process is changed according to the new global type resulting from the application of the adaptation function to the global state. At this point, the processes whose type does not fit the new monitor must leave the system and new ones can enter it. The data exchange among the participants is done by means of runtime queues (one for each active session). We denote by \({\mathsf {s}}{ :}h\) the named queue associated with the session \({\mathsf {s}}\), where \(h\) is a message queue. The empty queue is denoted by ø. Messages in queues can be either value messages \((\mathsf{p}, \varPi ,\ell (v))\), indicating that the label \(\ell \) and the value \(v\) are sent by participant \(\mathsf{p}\) to all participants in \(\varPi \), or adaptation messages \((\mathsf{p}, \varPi ,\lambda (\mathsf{G}))\), indicating that the flag \(\lambda \) and the global type \(\mathsf{G}\) are sent by participant \(\mathsf{p}\) to all participants in \(\varPi \). Queue concatenation, denoted by “\(\cdot \)”, has ø  as neutral element. A queue is \(\lambda \) -free if it contains no flag.
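As a small illustration, run-time queues can be modelled as lists of messages; the tuple encoding below is our own (the payload tags `"val"` and `"flag"` are placeholder names):

```python
# Messages in a queue h (our own encoding):
#   value messages:      (p, receivers, ("val", label, v))
#   adaptation messages: (p, receivers, ("flag", lam, G))

def lambda_free(h):
    """A queue is lambda-free when it carries no adaptation flag."""
    return all(kind != "flag" for _p, _receivers, (kind, *_rest) in h)
```

For instance, a queue holding only a value message sent by \(\mathsf {iS} \) to \(\mathsf {iF} \) is \(\lambda \)-free, while appending Ada's `check` flag makes it not \(\lambda \)-free.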

The sessions are initiated by the “\(\mathsf {new} \)” constructor applied to a global type (session initiator), denoted by \(\mathsf {new} (\mathsf{G})\), which generates the monitors and associates them with adequate processes (see Definition 8).

The parallel composition of session initiators, monitored processes and run-time queues forms a network. Networks can be restricted on session names.

Definition 4

Networks are defined by:

$$\begin{aligned} N~ {::=} ~\mathsf {new} (\mathsf{G}) ~~\mathbf {|\!\!|}~ ~\fancyscript{M}[P] ~~\mathbf {|\!\!|}~ ~ {\mathsf {s}}{ :}h~~\mathbf {|\!\!|}~ ~ N~|~N~~\mathbf {|\!\!|}~ ~ (\nu {\mathsf {s}})N\end{aligned}$$

2.5 System

A system consists of a network and a global state, and assumes a collection of processes together with their types (according to the typing rules of Sect. 3). We use \(\sigma \) to range over global states, and we denote by \(\fancyscript{P}\) the collection of pairs (\(P\),\(\mathsf {T}\)). We represent systems as the composition (via “\(|\!|\)”) of a network and a global state, without mentioning the process collection, which is considered an implicit parameter.

Definition 5

Systems are defined by:

$$\begin{aligned} \fancyscript{S}~ {::=} ~ N~|\!|~\sigma \end{aligned}$$

3 Process types

Process types (called simply types where not ambiguous) describe process communication behaviours [27]. They have prefixes corresponding to the sending and receiving of labels and flags. In particular, an input type is a type whose first prefix corresponds to an input action, an output type is a type whose first prefix corresponds to an output action, and the continuation of a type is the type following its first prefix. A communication type is either an input or an output type. An external choice is typed by an intersection type, since an external choice offers both behaviours of the composing processes. Dually, a conditional is an internal choice, and so it is typed by a union type. Notice that union and intersection, being binary constructors, fit well with our binary internal and external choices. To formally define types, we start with the more liberal syntax of pre-types and then define some restrictions that characterise the types of processes.

Definition 6

The set of pre-types is inductively defined by:

$$\begin{aligned} T~ {::=} ~ ? \ell (S).T~\mathbf {|\!\!|}~ ! \ell (S). T~\mathbf {|\!\!|}~ { ? \lambda } ~\mathbf {|\!\!|}~ ! \lambda ~\mathbf {|\!\!|}~ T\wedge T~\mathbf {|\!\!|}~ T\vee T~\mathbf {|\!\!|}~ \mathsf {end} \end{aligned}$$

where \(\wedge \) and \(\vee \) are considered modulo idempotence, commutativity and associativity.

In pre-types and types, we assume that \(.\) has precedence over \(\wedge \) and \(\vee \).

In order to define types for processes, we have to avoid intersections between input types with the same first label, which would represent ambiguous external choices: indeed, the types following the same input prefix could be different, and this would lead to a communication mismatch, as illustrated in Example 1 of Sect. 5. For the same reason, process types cannot contain intersections between output types with the same label. Since we have to match types with monitors, where internal choices are always taken by participants sending a label or a flag, we force unions to take output types (possibly combined by intersections or unions) as arguments. Therefore, we formalise the above restrictions by means of two mappings from pre-types to sets of labels and flags (Table 3), and then we define types by using those mappings.

Table 3 The mappings \(lin\) and \(lout\)

Definition 7

A type is a pre-type satisfying the following constraints modulo idempotence, commutativity and associativity of unions and intersections:

  • all occurrences of the shape \(T_1 \wedge T_2\) are such that

    $$\begin{aligned} lin(T_1)\cap lin(T_2) = lout(T_1)\cap lout(T_2) = \emptyset \end{aligned}$$
  • all occurrences of the shape \(T_1 \vee T_2\) are such that

    $$\begin{aligned} lin(T_1) = lin(T_2) = lout(T_1) \cap lout(T_2) = \emptyset . \end{aligned}$$

We use \(\mathsf {T}\) to range over types, and \(\fancyscript{T}\) to denote the set of types. Note that, for example, \((T\wedge T)\vee T\) is a type, whenever \(T\) is a type, since types are considered modulo idempotence.
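Table 3 is not reproduced here; the sketch below reconstructs \(lin\) and \(lout\) as the sets of first input and output labels and flags, and checks the constraints of Definition 7, under our own tuple encoding of pre-types (sorts elided; all names are ours):

```python
# Pre-types as tuples (our own encoding, sorts elided):
#   ("?", l, T) input, ("!", l, T) output, ("?f", lam) / ("!f", lam) flags,
#   ("and", T, T), ("or", T, T), "end".

def lin(T):
    """First input labels and flags of T (our reconstruction of Table 3)."""
    if T == "end" or T[0] in ("!", "!f"):
        return set()
    if T[0] in ("?", "?f"):
        return {T[1]}
    return lin(T[1]) | lin(T[2])          # "and" / "or"

def lout(T):
    """First output labels and flags of T."""
    if T == "end" or T[0] in ("?", "?f"):
        return set()
    if T[0] in ("!", "!f"):
        return {T[1]}
    return lout(T[1]) | lout(T[2])

def is_type(T):
    """Definition 7: check the constraints on every intersection and union."""
    if T == "end" or T[0] in ("?f", "!f"):
        return True
    if T[0] in ("?", "!"):
        return is_type(T[2])
    T1, T2 = T[1], T[2]
    if not (is_type(T1) and is_type(T2)):
        return False
    if T[0] == "and":
        return not (lin(T1) & lin(T2)) and not (lout(T1) & lout(T2))
    # union: no input types and disjoint output labels
    return not lin(T1) and not lin(T2) and not (lout(T1) & lout(T2))
```

For instance, an intersection of two inputs with distinct labels is a type, while an intersection of two inputs with the same label is not.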

An environment \(\Gamma \) is a finite mapping from expression variables to sorts and from process variables to types:

$$\begin{aligned} \Gamma ~~ {::=}~~ \emptyset ~~\mathbf {|\!\!|}~ ~\Gamma , x{ :}S~~\mathbf {|\!\!|}~ ~\Gamma , X{ :}\mathsf {T}\end{aligned}$$

where the notation \(\Gamma , x{ :}S\) (\(\Gamma , X{ :}\mathsf {T}\)) means that \(x\) (\(X\)) does not occur in \(\Gamma \).

We assume that expressions are typed by sorts, as usual. The typing judgments for expressions are of the shape

$$\begin{aligned} \Gamma \vdash \mathsf {e}:S\end{aligned}$$

and the typing rules for expressions are standard.

Only processes with at most one channel can be typed. This choice is justified by the design of monitored processes as session participants and by the absence of delegation. Therefore, the typing judgments for processes have the form

$$\begin{aligned} \Gamma \vdash P \rhd \mathsf {c}{ :}\mathsf {T}. \end{aligned}$$

Typing rules for processes are given in Table 4. Observe that the type of a process after a reconfiguration is memorised in the (input or output) action in which the adaptation flag is exchanged (see Definition 3). In rules if and choice, we require that the applications of union and intersection to two types form a type (conditions \(\mathsf {T}_1 \vee \mathsf {T}_2\in \fancyscript{T}\) and \(\mathsf {T}_1 \wedge \mathsf {T}_2\in \fancyscript{T}\)). Adaptation allows us to avoid recursive types. A recursion variable is always preceded by an adaptation action, i.e. \(\mathsf {c} ? (\lambda ,\mathsf {T}) .X\) (rule rv1) and \(\mathsf {c} ! (\lambda (F),\mathsf {T}). X\) (rule rv2). In typing a recursive process \(\mu X.\,P\), rule rec ensures that the type of \(P\) is the same as the type associated with \(X\) in the environment. Note that \(\mu X. P\) is equivalent to \(P\{\mu X. P/X\}\), and so, unfolding the process, \(P\) will always be associated with all the reconfiguration flags which precede the occurrences of \(X\).

Table 4 Typing rules for processes

For example, writing the process Ada (considered in the Introduction) using the formal syntax, but leaving out labels, \(y!(\textit{check}(F),\mathsf {T}_{{ {Ada}}})\) replaces \(y!\textit{check}(F)\), where the type of the whole process \({ {Ada}}\) is:

The matching between process types and monitors (adequacy) is made rather flexible by using the subtyping relation defined as the reflexive and transitive closure of the relation shown in Table 5. Subtyping is monotone, for input/output prefixes, with respect to continuations, and it follows the usual set-theoretic inclusion of intersection and union. Notice that we use a weaker definition than standard subtyping on intersection and union types, since it is sufficient to define subtyping on types.

The intuitive meaning of subtyping is that a process with a smaller type has all the behaviours required by a bigger type and possibly more. Therefore, \(\mathsf {end} \) is the top type. An input monitor naturally corresponds to an external choice, while an output monitor naturally corresponds to an internal choice. So, intersections of input types are adequate for input monitors, and unions of output types are adequate for output monitors. Formally, we say that a type is adequate for a monitor if the conditions of the following definition hold.

Table 5 Subtyping

Definition 8

A type \(\mathsf {T}\) is adequate for a monitor \(\fancyscript{M}\) (notation \(\mathsf {T}\propto \fancyscript{M}\)) if \(\mathsf {T}\le |\fancyscript{M}|\), where the mapping \(|\;|\) is defined by:

$$\begin{aligned} \begin{array}{c} |{\mathsf{p}}?\{\ell _i (S_i). \fancyscript{M}_i\}_{i \in I}|=\bigwedge _{i \in I} ? \ell _i(S_i).|\fancyscript{M}_i|\\ |{\varPi }!\{\ell _i (S_i). \fancyscript{M}_i\}_{i \in I}|=\bigvee _{i \in I} ! \ell _i(S_i). |\fancyscript{M}_i|\\ |{\mathsf{p}}?\{{\lambda }_i\}_{i \in I}|\!=\!\bigwedge _{i \in I} ? \lambda _i\quad |{\varPi }!\{{\lambda }_i \}_{i \in I} |\!=\! \bigvee _{i \in I} ! \lambda _i \,\, |\mathsf {end} |\!=\!\mathsf {end} \end{array} \end{aligned}$$

For instance, the type \(\mathsf {T}_{\mathsf {Ada}}\) defined above is adequate for the monitor of \(\mathsf {Ada} \) obtained by projecting the global type \(\mathsf{G}_2\) discussed in the Introduction:

$$\begin{aligned} {\mathsf {Bob}!\text {String}.\{{\mathsf {iS}},{\mathsf {iF}},{\mathsf {aS}},\mathsf {Bob} \}!\textit{check}} \end{aligned}$$
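The mapping \(|\;|\) of Definition 8 can be sketched as follows, assuming monitors and types encoded as tuples (sorts elided; the tag names are ours); adequacy then reduces to a subtyping check against the resulting type:

```python
# Monitors as tuples (our own encoding, sorts elided):
#   ("in", p, {label: M}), ("out", Pi, {label: M}),
#   ("inflag", p, flags), ("outflag", Pi, flags), "end".
# Types use the prefixes ("?", l, T), ("!", l, T), ("?f", lam), ("!f", lam).

def mon_to_type(M):
    """|M|: intersection of inputs, union of outputs (Definition 8)."""
    if M == "end":
        return "end"
    kind, _peer, body = M
    if kind in ("in", "out"):
        atoms = [("?" if kind == "in" else "!", l, mon_to_type(cont))
                 for l, cont in sorted(body.items())]
    else:  # "inflag" / "outflag"
        atoms = [("?f" if kind == "inflag" else "!f", lam)
                 for lam in sorted(body)]
    op = "and" if kind in ("in", "inflag") else "or"
    t = atoms[0]
    for a in atoms[1:]:
        t = (op, t, a)
    return t
```

For instance, the monitor of \(\mathsf {aS} \) from \(\mathsf{G}_1\) shown above (with the placeholder label `req`) translates to the output prefix followed by the input of the `check` flag.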

Decidability of adequacy relies on decidability of subtyping, which we now prove. The proof exploits standard distributivity properties of intersections and unions.

Lemma 1

Subtyping is decidable.

Proof

A subtyping between two types is equivalent to a set of subtypings, in which no union occurs in the left type and no intersection occurs in the right type. In fact:

  • \(\mathsf {T}_1\vee \mathsf {T}_2 \le \mathsf {T}_3\) iff \(\mathsf {T}_1 \le \mathsf {T}_3\) and \(\mathsf {T}_2 \le \mathsf {T}_3\)

  • \((\mathsf {T}_1 \vee \mathsf {T}_2) \wedge \mathsf {T}_3 \le \mathsf {T}_4\) iff \(\mathsf {T}_1 \wedge \mathsf {T}_3 \le \mathsf {T}_4\) and \(\mathsf {T}_2 \wedge \mathsf {T}_3 \le \mathsf {T}_4\).

Notice that if \((\mathsf {T}_1 \vee \mathsf {T}_2) \wedge \mathsf {T}_3\) is a type, then both \(\mathsf {T}_1 \wedge \mathsf {T}_3 \) and \(\mathsf {T}_2 \wedge \mathsf {T}_3 \) are types. A similar argument can be used for erasing the intersections in the right type. Then, we have to decide only subtypings of the shape \(\mathsf {T}\le \mathsf {T}'\), where \(\mathsf {T}\) is an intersection of communication types and possibly \(\mathsf {end} \), while \(\mathsf {T}'\) is either a union of output types or a single input type, both possibly in union with \(\mathsf {end} \) (by Definition 7). Since \(\mathsf {end} \) is the top type:

  • If \(\mathsf {T}'\) contains \(\mathsf {end} \), then the subtyping holds.

  • If \(\mathsf {T}\) is \(\mathsf {end} \), then the subtyping fails, unless \(\mathsf {T}'\) contains \(\mathsf {end} \).

Otherwise, the occurrences of \(\mathsf {end} \) in \(\mathsf {T}\) can be erased. Thus, we are reduced to the cases in which \(\mathsf {T}\) is an intersection of communication types and \(\mathsf {T}'\) is either a union of output types or a single input type. In the first case, \(\mathsf {T}\le \mathsf {T}'\) holds if and only if at least one of the output prefixes of the types in \(\mathsf {T}'\) is equal to the output prefix of a type in \(\mathsf {T}\), and the corresponding continuations are in the subtype relation. In the second case, \(\mathsf {T}\le \mathsf {T}'\) holds if and only if \(\mathsf {T}\) contains a type with the same input prefix as \(\mathsf {T}'\), and the corresponding continuations are in the subtype relation. \(\square \)
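To make the procedure concrete, the following is a small executable sketch of the decision algorithm described in the proof. The tuple encoding of types, the label and sort names, and the purely syntactic comparison of prefixes are our own illustrative assumptions; participants in prefixes are omitted for brevity.

```python
# Types are tuples:
#   ('end',)                       the top type
#   ('out', label, sort, cont)     output communication type
#   ('in',  label, sort, cont)     input communication type
#   ('and', T1, T2)  /  ('or', T1, T2)

def flatten(t, op):
    """List the leaves of a nested 'and' / 'or'."""
    if t[0] == op:
        return flatten(t[1], op) + flatten(t[2], op)
    return [t]

def build(op, parts):
    r = parts[0]
    for p in parts[1:]:
        r = (op, r, p)
    return r

def subtype(t, tp):
    # T <= T1 /\ T2  iff  T <= T1 and T <= T2
    if tp[0] == 'and':
        return subtype(t, tp[1]) and subtype(t, tp[2])
    # T1 \/ T2 <= T  iff  T1 <= T and T2 <= T
    if t[0] == 'or':
        return subtype(t[1], tp) and subtype(t[2], tp)
    # distribute unions out of intersections on the left
    conj = flatten(t, 'and')
    for i, c in enumerate(conj):
        if c[0] == 'or':
            rest = conj[:i] + conj[i + 1:]
            return (subtype(build('and', [c[1]] + rest), tp) and
                    subtype(build('and', [c[2]] + rest), tp))
    # now: t is an intersection of atoms, tp a union of atoms
    right = flatten(tp, 'or')
    if ('end',) in right:          # end is the top type
        return True
    left = [a for a in conj if a != ('end',)]
    # some pair of atoms with equal prefixes and related continuations
    return any(a[:3] == b[:3] and subtype(a[3], b[3])
               for a in left for b in right)
```

The four terminal cases of the proof appear in order: erase intersections on the right, erase unions on the left, distribute, then compare prefixes and recurse on continuations.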

4 Semantics

The evolution of a system depends on the evolution of its network and global state. The basic components of networks are the openings of sessions (through the \(\mathsf {new} \) operator on global types) and the processes associated with monitors. So, we start by describing how processes can evolve inside monitors. Monitors guide the communications of processes by choosing the senders/receivers and by allowing only some actions among those offered by the processes. This is formalised by the following LTS for monitors:

$$\begin{aligned} \begin{array}{c} \mathsf{p}? \{\ell _i (S_i).\fancyscript{M}_i\}_{i \in I} \xrightarrow {\mathsf{p}? \ell _j} \fancyscript{M}_j ~~~~~~~ j\in I\\ {\varPi }!\{\ell _i (S_i). \fancyscript{M}_i\}_{i \in I} \xrightarrow {{\varPi ! \ell _j}} \fancyscript{M}_j ~~~~~~~ j\in I\\ \mathsf{p}? \{\lambda _i\}_{i\in I} \xrightarrow {\mathsf{p}? \lambda _j} \quad \varPi ! \{\lambda _i\}_{i\in I} \xrightarrow {\varPi ! \lambda _j}~~~~~~~ j\in I \end{array} \end{aligned}$$
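The LTS above can be rendered as a small transition function. The sketch below is a minimal illustration under our own hypothetical encoding: value sorts are omitted, flag branches would simply have no continuation, and monitors and actions are plain tuples.

```python
# A monitor is ('in', p, branches) or ('out', parties, branches),
# where branches maps each offered label to its continuation monitor.
# An action is ('?' | '!', party, label).

def step(monitor, action):
    """Return the continuation if the monitor allows `action`, else None."""
    kind, party, branches = monitor
    act_kind, act_party, label = action
    ok = (kind == 'in' and act_kind == '?') or (kind == 'out' and act_kind == '!')
    if ok and act_party == party and label in branches:
        return branches[label]
    return None
```

Returning `None` models the fact that the monitor simply blocks any action it does not offer: the choice of sender/receiver and of the admissible labels rests entirely with the monitor.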

Processes can communicate labels and values, flags, adaptation functions and types, or can read/modify the global state through \(\mathsf {op} \) operations. These behaviours are made explicit by the LTS in Table 6, where the treatment of recursions and conditionals is standard. In the rules for external choice, \(\alpha \) ranges over

$$\begin{aligned} {\mathsf {s}[\mathsf{p}]} ? \ell (v), {\mathsf {s}[\mathsf{p}]} ! \ell (v), {\mathsf {s}[\mathsf{p}]}? ( \lambda ,\mathsf {T}), {\mathsf {s}[\mathsf{p}]} !(\lambda (F),\mathsf {T}), \end{aligned}$$

and \(\delta \) ranges over \(\{\mathsf {op},\,\tau \}\). We omit the symmetric rules.

Table 6 LTS of processes

The choices are resolved by the communication actions, while the operations on the global state are transparent. This is needed since the operations on the memory are recorded neither in the process types nor in the monitors. An operation on the state occurring in an external choice can be performed even if a branch different from the one containing the operation is eventually executed. The rationale is that changes in the control data are independent of the communications among session participants. For example,

$$\begin{aligned} \begin{array}{l} \mathsf {op}.\mathsf {s}[\mathsf{p}] ? \ell _1(x) .P_1+ \mathsf {s}[\mathsf{p}] ? \ell _2(x) .P_2\xrightarrow {\mathsf {op}}\\ \mathsf {s}[\mathsf{p}] ? \ell _1(x) .P_1+ \mathsf {s}[\mathsf{p}] ? \ell _2(x) .P_2\xrightarrow {\mathsf {s}[\mathsf{p}] ? \ell _2 (v)}P_2\{v/x\}.\end{array} \end{aligned}$$

A recursion in an external choice can be unfolded independently of the chosen branch, since unfolding is an internal computation. Similarly, a conditional in an external choice can be evaluated even if a different branch is then executed. For example,

$$\begin{aligned} \begin{array}{l} (\mu X. \mathsf {if} ~ \mathsf {true} ~\mathsf {then} ~\mathsf {s}[\mathsf{p}] ? \ell _1(x) .P_1 ~\mathsf {else} ~X)+\mathsf {s}[\mathsf{p}] ? \ell _2(x) .P_2\xrightarrow {\tau }\\ (\mathsf {if} ~ \mathsf {true} ~\mathsf {then} ~\mathsf {s}[\mathsf{p}] ? \ell _1(x) .P_1 ~\mathsf {else} ~\mu X. \cdots )+\mathsf {s}[\mathsf{p}] ? \ell _2(x) .P_2\xrightarrow {\tau }\\ \mathsf {s}[\mathsf{p}] ? \ell _1(x) .P_1+\mathsf {s}[\mathsf{p}] ? \ell _2(x) .P_2\xrightarrow {\mathsf {s}[\mathsf{p}] ? \ell _2 (v)}P_2\{v/x\}.\end{array} \end{aligned}$$

We assume a standard structural equivalence on networks, in which the parallel operator is commutative and associative, and restrictions commute and enlarge their scopes without name clashes. Moreover, any monitored process with \(\mathsf {end} \) monitor behaves as the neutral element for the parallel and absorbs restrictions, that is:

$$\begin{aligned} \mathsf {end} [P]~|~N~\equiv ~N~~~\text {and} ~~~(\nu {\mathsf {s}})\mathsf {end} [P]~\equiv ~\mathsf {end} [P]. \end{aligned}$$

For message queues, we need an equivalence for commuting independent messages and another one for splitting a message to multiple receivers, see Table 7. The equivalence on message queues induces an equivalence on named queues in the obvious way:

$$\begin{aligned} h\equiv h' ~~ \text {implies} ~~{\mathsf {s}}:h\equiv {\mathsf {s}}:h'. \end{aligned}$$

We distinguish the transitions which do involve the global state from those which do not. For clarity, Table 8 lists the reduction rules of networks, and Table 9 lists the reduction rules of systems, all of which need the global state.

Table 7 Equivalence on message queues
Table 8 Network reduction
Table 9 System reduction

A session starts by reducing a network \(\mathsf {new} ~\mathsf{G}\) (rule Init). For each \(\mathsf{p}\) in the set \(\mathsf {pa} (\mathsf{G})\) of the participants in the global type \(\mathsf{G}\), we need to find a process \(P_\mathsf{p}\) in the collection \(\fancyscript{P}\) associated to the current system. The process \(P_\mathsf{p}\) must be such that its type \(\mathsf {T}_\mathsf{p}\) is adequate for the monitor which is the projection of \(\mathsf{G}\) onto \(\mathsf{p}\). Then, the process (where the channel \(y\) has been replaced by \(s[\mathsf{p}]\)) is associated to the corresponding monitor, and the empty queue \(\mathsf {s}\) is created. Lastly, the name \(\mathsf {s}\) is restricted. In this way, we ensure the privacy of the communications in a session (as standard in session calculi [28]). We are interested here in modelling the overall adaptation strategy, based on decoupling interfaces (i.e. monitors) and implementations (i.e. processes), rather than in the details related to the choice of processes associated to monitors. So, we have left this choice arbitrary, the only condition being type adequacy. Note, however, that a natural way of controlling the processes associated to the monitors is given by the choice of the labels and flags which relate them.

The rules In and Out define the exchange of messages through queues. The type assignment system ensures that both the type of \(P\) is adequate for \(\fancyscript{M}\) and the type of \(P'\) is adequate for \(\fancyscript{M}'\). Following [3, 11], the agreement between monitors and processes is required; a novelty is that only the monitors define the senders and the receivers of messages.

The rules AdaInCont and AdaInNew of Table 8 deal with adaptations, for the session participants which receive the adaptation flag with the new global type. The new global type is needed to compute the new monitor \(\fancyscript{M}'\) by projection. In the first rule, the continuation of the current process inside the monitor has a type which is adequate for \(\fancyscript{M}'\), so this process will fill \(\fancyscript{M}'\). In the second rule, instead, a different process with a type adequate for \(\fancyscript{M}'\) must be taken from the collection \(\fancyscript{P}\). In case such a process does not exist in \(\fancyscript{P}\), the system gets stuck.

Evaluation contexts are defined by

$$\begin{aligned} {\fancyscript{E}}::= [\;] ~~\mathbf {|\!\!|}~ ~ {\fancyscript{E}}~|~N~~\mathbf {|\!\!|}~ ~ (\nu s) {\fancyscript{E}}\end{aligned}$$

The reduction rules for networks can be used for reducing systems thanks to rule SN in Table 9. Rule Cxt is a standard contextual rule. Rule Op allows processes to read/modify the global state.

The most interesting rules are AdaOutCont and AdaOutNew. In both rules, participant \(\mathsf{p}\) sends an adaptation flag and an adaptation function, whose application to the global state gives a new global type \(\mathsf{G}\). The global type \(\mathsf{G}\) may involve new participants (in \(\varPi '\setminus (\varPi \cup \{\mathsf{p}\})\)), which are added to the network by taking processes in \(\fancyscript{P}\) as in rule Init. As regards participant \(\mathsf{p}\), the new monitor \(\mathsf{G} \! \upharpoonright \! \mathsf{p}\,\) will be associated with the current process or not, according to whether the type of that process is adequate for \(\mathsf{G} \! \upharpoonright \! \mathsf{p}\,\), as in rules AdaInCont and AdaInNew. In both rules AdaOutCont and AdaOutNew, the message with the reconfiguration flag and the global type \(\mathsf{G}\) is sent to all participants of the session before the reconfiguration. This is ensured by the well-formedness of global types.

The restriction to \(\lambda \)-free queues deserves some comments. It ensures that no new adaptation flag can be thrown until all the receivers of the previous adaptation flag have adapted themselves. A design choice of our framework is to allow a participant to skip an adaptation phase (since it does not appear in the new global type) and then to appear again in the following adaptation. This models a common scenario in which a component is temporarily unavailable, so that a new choreography is needed. In the introductory example, the American factory temporarily leaves the current choreography. Without this restriction, when the component becomes available again, we could have two monitored processes with the same session channel, thus losing channel linearity. Observe, however, that even with this restriction some participants may finish their communications before performing an adaptation while other participants have already self-adapted and started the new communications.

Note that the \(\lambda \)-freeness of queues can be implemented in several ways without breaking decentralisation, for example by semaphores on queues.

We use \(\longrightarrow ^*\) with the usual meaning, and \(\longrightarrow _\fancyscript{P}\), \(\longrightarrow ^*_\fancyscript{P}\) when we want to emphasise the use of \(\fancyscript{P}\) in rules Init, AdaInNew and AdaOutNew.

5 Examples

In this section, we discuss three examples. The first motivates the restrictions in the definition of process types (Definition 7). The second and third extend the example given in the Introduction, using the syntax of Sect. 2. Example 2 shows a use of different adaptation flags. Example 3 illustrates the possibility of changing the participant who is in charge of sending the adaptation flag. For readability, we omit the braces \(\{\,\}\) when writing global types and monitors.

Example 1

This example shows the necessity of the conditions on types given in Definition 7.

Suppose that any pre-type is a type, by removing the conditions from Definition 7. Then, \(\fancyscript{P}\) could contain \((P_1,\mathsf {T}_1)\) and \((P_2,\mathsf {T}_2)\) such that:

$$\begin{aligned} \begin{array}{c}P_1=y ! \ell (3). y ? \ell '(x) .\mathbf {0}+y ! \ell (\mathsf {true}). \mathbf {0}\\ P_2=y ? \ell (x') .y ! \ell '(- x'). \mathbf {0}\\ \mathsf {T}_1=\, ! \ell (\mathsf {int}). ? \ell '(\mathsf {int}).\mathsf {end} \wedge ! \ell (\mathsf {Bool}). \mathsf {end} \\ \mathsf {T}_2= ? \ell (\mathsf {int}). ! \ell '(\mathsf {int}). \mathsf {end} \end{array} \end{aligned}$$

In particular, we observe that \(\mathsf {T}_1\) has an intersection between output types with the same label. Take then the global type \(\mathsf{G}=1\rightarrow 2: \ell (\mathsf {int}). 2 \rightarrow 1: \ell '(\mathsf {int}).\mathsf {end} \), whose projections on participants \(1\) and \(2\) are

$$\begin{aligned} \fancyscript{M}_1=2!\ell (\mathsf {int}).2?\ell '(\mathsf {int}).\mathsf {end}, \fancyscript{M}_2=1?\ell (\mathsf {int}).1!\ell '(\mathsf {int}).\mathsf {end}, \end{aligned}$$

respectively. According to our definition of adequacy, \(\mathsf {T}_1\) and \(\mathsf {T}_2\) are adequate for \(\fancyscript{M}_1\) and \(\fancyscript{M}_2\), respectively. It is easy to verify that the network \(\mathsf {new} (\mathsf{G})\) can reduce to

which is stuck. On the other hand, taking

$$\begin{aligned} P'_1=y ! \ell (3). y ? \ell '(x) .\mathbf {0}+y ? \ell (x) .\mathbf {0}\end{aligned}$$

with type \(\mathsf {T}'_1=\, ! \ell (\mathsf {int}). ? \ell '(\mathsf {int}).\mathsf {end} \wedge ? \ell (\mathsf {Bool}).\mathsf {end} \), we still have that \(\mathsf {T}'_1\) and \(\mathsf {T}_2\) are adequate for \(\fancyscript{M}_1\) and \(\fancyscript{M}_2\), but the network \(\mathsf {new} (\mathsf{G})\) smoothly terminates the computation. Note that \(\mathsf {T}'_1\) satisfies the conditions of Definition 7, so there is no possible ambiguity on which branch of the external choice must be chosen.

Example 2

Ada wants to consider the possibility of keeping the business going even when both factories are out of use. In this case, she can choose either to stop all activities or to start the reconstruction of both factories. In the former case, she sends a stop adaptation flag, and in the latter a go adaptation flag. The global type \(\mathsf{G}_4\) of the example in the Introduction must then be replaced by:

$$\begin{aligned} \mathsf{G}_4= \mathsf {Ada} \rightarrow \{{\mathsf {iS}},{\mathsf {aS}}\}{ :}\{{stop},{go}\} \end{aligned}$$

If Ada decides to go on with the business when both factories are out, both sellers send their requests to Ada. So, we need two new global types:

where is the label used to ask the amount of an item, and two adaptation functions:

$$\begin{aligned} F_g({\mathsf {KO}},{\mathsf {KO}})=\mathsf{G}_g\qquad \qquad F_s({\mathsf {KO}},{\mathsf {KO}})=\mathsf{G}_s \end{aligned}$$

A new process for implementing Ada can be:

where

$$\begin{aligned} \begin{array}{lll}\mathsf {T}_{{ {Ada}}'} &{}=&{} ! {\textit{check}}\wedge \\ &{}&{}!\text {String}. (! \textit{check} \wedge ? (\text {String}, \text {Int}).? (\text {String}, \text {Int}). !\textit{check})\\ &{}&{}\wedge (! \textit{stop} \vee !\textit{go})\end{array} \end{aligned}$$

is the new type of Ada. No modification is required for the processes representing the sellers and the factories.

Note that the process \({ {Ada}}\) of the Introduction could work as well as \({ {Ada}}'\) until both factories are out. At that point, \({ {Ada}}\) would no longer agree with the monitor corresponding to the projection of the above \(\mathsf{G}_4\), and rule \(\textsc {AdaOutNew}\) would replace \({ {Ada}}\) with \({ {Ada}}'\).

Example 3

Assume that Ada has to take maternity leave. She then decides to transfer the job of monitoring the factories to one of the two sellers, who will play the role of deputy chief of the company. To this end, Ada writes \({\mathsf {iS}}\) (Italian seller) or \({\mathsf {aS}}\) (American seller) in the global data. The deputy chief is only in charge of keeping the business working, unless both factories are out, in which case he also closes down the business. The deputy chief checks the state of the factories by sending, as Ada did, the reconfiguration flag check. When Ada is back, the standard management policy is restored.

In this example, we use the same global types and adaptation function as in the Introduction, but we add eight global types and another adaptation function.

The first four global types consider the case in which the American seller is the deputy chief. The global type \(\mathsf{G}^a_1\) is the communication protocol when both factories are working:

If the Italian factory is \(\mathsf {OK} \), but the American one is \(\mathsf {KO} \), the communication protocol becomes:

If the American factory is \(\mathsf {OK} \), but the Italian one is \(\mathsf {KO} \), the global type \(\mathsf{G}^a_3\) is as expected. When both factories are \(\mathsf {KO} \), the deputy chief is forced to close the business:

The other four global types \(\mathsf{G}^i_1\)-\(\mathsf{G}^i_4\) prescribe that the adaptation flag is sent by the Italian rather than by the American seller.

The new adaptation function \(F'\), which has a third argument corresponding to the deputy chief in charge, is defined as:

$$\begin{aligned} \begin{array}{lll}F'({\mathsf {OK}},{\mathsf {OK}},{\mathsf {aS}})=\mathsf{G}^a_1 &{}\qquad &{}F'({\mathsf {OK}},{\mathsf {KO}},{\mathsf {aS}})=\mathsf{G}^a_2\\ F'({\mathsf {KO}},{\mathsf {OK}},{\mathsf {aS}})=\mathsf{G}^a_3&{}&{}F'({\mathsf {KO}},{\mathsf {KO}},{\mathsf {aS}})=\mathsf{G}^a_4 \\ F'({\mathsf {OK}},{\mathsf {OK}},{\mathsf {iS}})=\mathsf{G}^i_1 &{}&{}F'({\mathsf {OK}},{\mathsf {KO}},{\mathsf {iS}})=\mathsf{G}^i_2\\ F'({\mathsf {KO}},{\mathsf {OK}},{\mathsf {iS}})=\mathsf{G}^i_3&{}&{}F'({\mathsf {KO}},{\mathsf {KO}},{\mathsf {iS}})=\mathsf{G}^i_4 \end{array} \end{aligned}$$
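As the table suggests, \(F'\) is a finite map from the state readings to global types. A possible rendering in Python, with global types represented by opaque names of our own choosing, since only the dispatch matters here:

```python
# The adaptation function F' of this example as a plain lookup table.
# Arguments: state of the Italian factory, state of the American factory,
# and the deputy chief currently in charge.
F_PRIME = {
    ('OK', 'OK', 'aS'): 'G^a_1', ('OK', 'KO', 'aS'): 'G^a_2',
    ('KO', 'OK', 'aS'): 'G^a_3', ('KO', 'KO', 'aS'): 'G^a_4',
    ('OK', 'OK', 'iS'): 'G^i_1', ('OK', 'KO', 'iS'): 'G^i_2',
    ('KO', 'OK', 'iS'): 'G^i_3', ('KO', 'KO', 'iS'): 'G^i_4',
}

def f_prime(italian, american, deputy):
    return F_PRIME[(italian, american, deputy)]
```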

A process \({ {Ada}}''\) implementing Ada includes a conditional to take into account the maternity leave:

where is the label for the contract with Bob and

$$\begin{aligned} \begin{array}{lll}P &{}=&{} \mathsf {if~} {\ldots }~\mathsf {then~} {! (\textit{check}(F), \mathsf {T}_{{ {Ada}}''})}.X\\ &{}&{} \mathsf {else~} {\mathsf {write} ~\mathsf {{d}e{p}}. !(\textit{check}(F'), \mathsf {end}).\mathbf {0}}\end{array} \end{aligned}$$

with \(\mathsf {{d}e{p}} ={\mathsf {aS}}\) or \(\mathsf {{d}e{p}} ={\mathsf {iS}}\) and

A process implementing sellers uses the adaptation function \(F'\) or \(F\) according to whether or not Ada is on leave:

where:

Table 10 Projection of generalised types onto participants

No modification is needed, instead, for the processes representing the other participants, i.e. the two factories and Bob.

Notice that \(\mathsf {T}_{{ {Ada}}''}\) differs from \(\mathsf {T}_{{ {Ada}}}\) (see page 6) only in the label (since in the Introduction we used a simplified syntax). Instead, \( \mathsf {T}_{\textit{Seller}'}\) offers more choices than the type of the process \(\textit{Seller}\) in the Introduction. As a consequence, the process \({ {Ada}}\) of the Introduction is an implementation of Ada also in the present scenario, one which never exercises the right to maternity leave. Instead, the process Seller is no longer adequate, since it does not implement the deputy chief’s behaviour.

6 Safety

In this section, we show subject reduction and progress theorems, following essentially the proof pattern of similar results for multiparty sessions, see e.g. [13]. Indeed, the main innovation of our calculus is that global types, with the corresponding monitors, are reconfigured at each adaptation step. Furthermore, participants of two different global types can coexist inside the same session. This happens when some participants have already performed the adaptation and then they are following the new global type, while other participants are still completing the interactions prescribed by the old global type. These are the crucial technical difficulties in proving that monitored well-typed processes always behave in a type-safe way. Therefore, we need to introduce typing rules for systems, which associate types to session channels.

The type of channel \(\mathsf {s}[\mathsf{p}]\) is formed by taking into account the monitor, which controls the process owning \(\mathsf {s}[\mathsf{p}]\), and the messages in the queue of the session \(\mathsf {s}\). The type of a monitored process is the association of the monitor to the session channel owned by the process. For each value message \((\mathsf{p}, \varPi ,\ell (v))\) in the queue of session \(\mathsf {s}\), we associate the type \(\varPi ! \ell (S)\) to \(\mathsf {s}[\mathsf{p}]\), where \(S\) is the sort of the value \(v\), preserving the order of the messages in the queue. The types of session channels are thus lists of types of this shape.
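The extraction of a message type from a queue can be sketched as follows. The tuple representation of queue entries and the use of Python type names as stand-ins for sorts are illustrative assumptions:

```python
# For each value message (p, Pi, l(v)) sent by p, the type of s[p]
# records Pi ! l(S), in queue order; S is the sort of v.
def message_type(queue, p):
    """Queue entries are (sender, receivers, label, value) tuples."""
    return [(receivers, label, type(value).__name__)
            for (sender, receivers, label, value) in queue
            if sender == p]
```

The per-sender filtering reflects that the type of \(\mathsf {s}[\mathsf{p}]\) records only the messages that \(\mathsf{p}\) itself has placed in the queue.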

A queue can also contain adaptation messages. Note that, thanks to the condition of \(\lambda \)-freeness in rules AdaOutCont and AdaOutNew (Table 9), at most one adaptation message can occur in a queue (modulo structural equivalence, see Table 7). If the queue of session \(\mathsf {s}\) contains the message \((\mathsf{p}, \varPi ,\lambda (\mathsf{G}))\), then \(\varPi ! \lambda \) occurs in the type of \(\mathsf {s}[\mathsf{p}]\). A message type is then a list of types of the shape \(\varPi ! \ell (S)\), possibly containing a type of the shape \(\varPi ! \lambda \). After receiving the message \((\mathsf{p}, \varPi ,\lambda (\mathsf{G}))\), each participant \(\mathsf{q}\in \varPi \) of session \(\mathsf {s}\) will behave according to the monitor \(\mathsf{G} \! \upharpoonright \! \mathsf{q}\,\). Therefore, the type of \(\mathsf {s}[\mathsf{q}]\) can involve two monitors. One (explicit) monitor (dubbed active monitor) is the monitor of the monitored process owning \(\mathsf {s}[\mathsf{q}]\). The other (implicit) monitor (dubbed virtual monitor) is the projection onto \(\mathsf{q}\) of the global type contained in the adaptation message waiting to be received by \(\mathsf {s}[\mathsf{q}]\) (and to become active). A missing virtual monitor is denoted by “\(-\)”. In particular, the virtual monitor of the sender of the adaptation message is always missing. So, typing rules for queues associate types of the shape \(\langle \text {message type},\text {virtual monitor}\rangle \) (corresponding to the sent messages and the virtual monitor) to session channels.

To sum up, a type of a session channel is either an active monitor, or a pair consisting of a message type and a virtual monitor, or a triple consisting of a message type, an active monitor and a virtual monitor.

Definition 9

Message types, queue types, virtual monitors, and generalised types are defined by:

where “\(\,;\,\)” is associative, \(\epsilon \) is the type of the empty sequence of messages, such that \(\epsilon \,;\,\mathsf{m}=\mathsf{m}\,;\,\epsilon =\mathsf{m}\), and “\(-\)” denotes a missing monitor.

The typing judgements for systems are of the shape

$$\begin{aligned} \vdash _{\Sigma } \fancyscript{S} \rhd \varDelta \end{aligned}$$

where \(\Sigma \) is a set of session names (the names of the queues which occur free in the network) and \(\varDelta \) is a session typing. Session typings associate session channels to generalised types:

$$\begin{aligned} \varDelta ::=\emptyset ~\mathbf {|\!\!|}~ \varDelta ,\mathsf {s}[\mathsf{p}]{ :}\chi \end{aligned}$$

We apply to the session typings the same conventions used for environments. In particular, a session typing \(\varDelta _1, \varDelta _2\) is defined only if the domains of \(\varDelta _1\) and \(\varDelta _2\) are disjoint.

To ensure type safety, it is essential that the communications are performed in a consistent way, i.e. that data are exchanged in the right order and with the right type. Consistency of session typings is defined using projection of generalised types and duality, given in Tables 10 and 11, respectively. The projection of a generalised type onto a participant \(\mathsf{q}\) represents the communications offered to \(\mathsf{q}\).

The projection of generalised types uses the projection of message types and virtual monitors. We denote these projections by \(\chi \! \upharpoonright \! \mathsf{q}\,\), \(\mathsf{m} \! \upharpoonright \! \mathsf{q}\,\) and \({\fancyscript{V}} \! \upharpoonright \! \mathsf{q}\,\), respectively. The conditions on the equalities of projections correspond to the similar conditions in Table 2. The projection of a generalised type of the shape \(\langle \mathsf{m},\fancyscript{M},{\fancyscript{V}}\rangle \) is the concatenation of the projections of \(\mathsf{m}\), \(\fancyscript{M}\) and \({\fancyscript{V}}\). This is meaningful since \(\mathsf{m}\) represents the messages already sent, \(\fancyscript{M}\) guides the behaviour of the participant before its adaptation and \({\fancyscript{V}}\) will guide its behaviour after the adaptation. Projections of generalised types (ranged over by \(\Theta \)) are defined by the following syntax

$$\begin{aligned} \Theta {::=}!\ell (S)~~\mathbf {|\!\!|}~ ~!\lambda ~~\mathbf {|\!\!|}~ ~ \Theta .\Theta ~~\mathbf {|\!\!|}~ ~?\{\Theta _i\}_{i\in I} ~~\mathbf {|\!\!|}~ ~!\{\Theta _i\}_{i\in I} ~\mathbf {|\!\!|}~ ~ \epsilon \end{aligned}$$

We assume \(\epsilon .\Theta =\Theta .\epsilon =\Theta \), since \(\epsilon \) represents no communication.

We write \(\Theta \bowtie \Theta '\) when \(\Theta \) and \(\Theta '\) are dual according to the definition of Table 11. Note that duality is defined only on \(\Theta \) which are projections of generalised types.

We can now define consistency as duality of projections.

Definition 10

A session typing \(\varDelta \) is consistent for the session \(\mathsf {s}\), notation \(\mathsf {con}({\varDelta },{\mathsf {s}})\), if \( \mathsf {s}[\mathsf{p}]:\chi \in \varDelta \) and \( \mathsf {s}[\mathsf{q}]:\chi '\in \varDelta \) with \(\mathsf{p}\not =\mathsf{q}\) imply \(\chi \! \upharpoonright \! \mathsf{q}\,\bowtie \chi ' \! \upharpoonright \! \mathsf{p}\,\). A session typing is consistent if it is consistent for all sessions which occur in it.

Table 11 Duality between projections of generalised types onto participants

It is easy to check that projections of the same global type are always dual.

Proposition 1

Let \(\mathsf{G}\) be a global type and \(\mathsf{p}\ne \mathsf{q}\). Then, \((\mathsf{G} \! \upharpoonright \! \mathsf{p}\,) \! \upharpoonright \! \mathsf{q}\,\bowtie (\mathsf{G} \! \upharpoonright \! \mathsf{q}\,) \! \upharpoonright \! \mathsf{p}\,\).

This proposition ensures that session typings obtained by projecting global types are consistent.

Table 12 gives the typing rules for systems. A session initiator is typed with the empty set of session names and with the empty session typing (rule New). To type a monitored process, we distinguish two cases. If the monitor is \(\mathsf {end} \), then the session typing is empty for any process \(P\) (rule endP). Otherwise, the channel owned by the process is associated to the monitor, provided that the type of the process (Table 4) is adequate for the monitor, according to Definition 8 (rule MP).

The next three rules type named queues. In these rules, the turnstile is decorated with the name of the queue. An empty queue ø is typed with the empty session typing (QInt). Two queue types can be composed only if at most one of them contains a virtual monitor; the concatenation of their message types is again a message type. We then define the operator \(\sharp \) by:

$$\begin{aligned} \langle \mathsf{m},{\fancyscript{V}}\rangle \sharp \langle \mathsf{m}',-\rangle =\langle \mathsf{m},-\rangle \sharp \langle \mathsf{m}',{\fancyscript{V}}\rangle =\langle \mathsf{m}\,;\,\mathsf{m}',{\fancyscript{V}}\rangle \end{aligned}$$

Rules QSendV and QSendG use the extension of \(\sharp \) to session typings:

$$ \begin{aligned} \begin{array}{lll}\varDelta \sharp \varDelta '&{} = &{}\{\mathsf {s}[\mathsf{p}]{ :}\chi \sharp \chi '\mid \mathsf {s}[\mathsf{p}]{ :}\chi \in \varDelta ~ \& ~\mathsf {s}[\mathsf{p}]{ :}\chi '\in \varDelta '\}\cup \\ &{}&{}\{\mathsf {s}[\mathsf{p}]{ :}\chi \mid \mathsf {s}[\mathsf{p}]{ :}\chi \in \varDelta \cup \varDelta '~ \& \\ &{}&{}\mathsf {s}[\mathsf{p}]\not \in dom(\varDelta )\cap dom(\varDelta ')\}\end{array} \end{aligned}$$
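A small executable sketch of \(\sharp \) and of its pointwise extension to session typings, under an assumed representation of a queue type as a pair (list of message types, virtual monitor), with `None` standing for the missing monitor “\(-\)”:

```python
def sharp(qt1, qt2):
    """Compose two queue types; undefined if both carry a virtual monitor."""
    (m1, v1), (m2, v2) = qt1, qt2
    if v1 is not None and v2 is not None:
        raise ValueError('undefined: both queue types carry a virtual monitor')
    return (m1 + m2, v1 if v1 is not None else v2)

def sharp_typing(d1, d2):
    """Pointwise # on channels in both domains; keep the others unchanged."""
    both = d1.keys() & d2.keys()
    out = {c: sharp(d1[c], d2[c]) for c in both}
    out.update({c: t for c, t in {**d1, **d2}.items() if c not in both})
    return out
```

The partiality of \(\sharp \) noted below corresponds to the raised exception: composition is defined only when at most one side carries a virtual monitor.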

Notice that in rules QSendV and QSendG the session typings only contain queue types. The queue type \(\langle \varPi !\ell (S),-\rangle \) is pushed onto the queue type of \(\mathsf {s}[\mathsf{p}]\) for a value message \((\mathsf{p}, \varPi ,\ell (v))\), where \(S\) is the sort of \(v\) (rule QSendV). The queue type \(\langle \varPi !\lambda ,-\rangle \) is pushed onto the queue type of \(\mathsf {s}[\mathsf{p}]\) for an adaptation message \((\mathsf{p}, \varPi ,\lambda (\mathsf{G}))\), while the queue type of \(\mathsf {s}[\mathsf{q}]\) has the projection of \(\mathsf{G}\) onto \(\mathsf{q}\) as virtual monitor, for all \(\mathsf{q}\in \varPi \) (rule QSendG).

For typing the parallel composition of networks, rule NPar prescribes that each named queue does not occur twice (condition \(\Sigma _1\cap \Sigma _2=\emptyset \)) and composes session typings forming a generalised type out of a queue type and a monitor. We define the composition \(*\) between queue types and monitors as:

$$\begin{aligned} \langle \mathsf{m},{\fancyscript{V}}\rangle *\fancyscript{M}=\fancyscript{M}*\langle \mathsf{m},{\fancyscript{V}}\rangle =\langle \mathsf{m},\fancyscript{M},{\fancyscript{V}}\rangle \end{aligned}$$

We extend \(*\) to generalised types and to session typings as expected:

$$ \begin{aligned} \begin{array}{lll}\varDelta *\varDelta '&{} = &{}\{\mathsf {s}[\mathsf{p}]{ :}\chi *\chi '\mid \mathsf {s}[\mathsf{p}] { :}\chi \in \varDelta ~ \& ~\mathsf {s}[\mathsf{p}]{ :}\chi '\in \varDelta '\}\cup \\ &{}&{}\{\mathsf {s}[\mathsf{p}]{ :}\chi \mid \mathsf {s}[\mathsf{p}]{ :}\chi \in \varDelta \cup \varDelta '~ \& \\ &{}&{}~\mathsf {s}[\mathsf{p}]\not \in dom(\varDelta )\cap dom(\varDelta ')\}\end{array} \end{aligned}$$

For example, if \(\vdash _{\Sigma } N \rhd \varDelta \), then \(\vdash _{\Sigma } \mathsf {end} [P]~|~N \rhd \emptyset *\varDelta \) (by rules endP and NPar) and \(\emptyset *\varDelta ~=~\varDelta \); this fits with the structural equivalence \(\mathsf {end} [P]~|~N~\equiv ~N\).

Notice that both \(\sharp \) and \(*\) are partial operators on session typings, since they can be undefined when applied to arbitrary generalised types.

In order to take into account the structural congruence between queues (see Table 7), we consider message types modulo the equivalence relation \(\approx \) induced by the following rules (where \(Z\) stands for either \(\ell (S)\) or \(\lambda \)):

$$\begin{aligned} \begin{array}{l} \mathsf{m};\varPi ! Z;\varPi ' ! Z';\mathsf{m}'\approx \mathsf{m};\varPi ' ! Z';\varPi ! Z;\mathsf{m}'\\ \hbox {if} \ \varPi \cap \varPi '=\emptyset \\ \mathsf{m};\varPi !Z;\mathsf{m}'\approx \mathsf{m};\varPi _1!Z;\varPi _2!Z;\mathsf{m}'\\ \hbox {if} \ \varPi =\varPi _1\cup \varPi _2, \varPi _1\cap \varPi _2=\emptyset \end{array} \end{aligned}$$

This equivalence relation on message types extends to generalised types by:

$$\begin{aligned} \begin{array}{l}\mathsf{m}\approx \mathsf{m}'\text { implies }\\ \langle \mathsf{m},{\fancyscript{V}}\rangle \approx \langle \mathsf{m}',{\fancyscript{V}}\rangle \text { and }\langle \mathsf{m},\fancyscript{M},{\fancyscript{V}}\rangle \approx \langle \mathsf{m}',\fancyscript{M},{\fancyscript{V}}\rangle \end{array} \end{aligned}$$

We say that two session typings \(\varDelta \) and \(\varDelta '\) are equivalent (notation \(\varDelta \approx \varDelta '\)) if \(\mathsf {s}[\mathsf{p}]:\chi \in \varDelta \) implies \(\mathsf {s}[\mathsf{p}]:\chi '\in \varDelta '\) with \(\chi \approx \chi '\) and vice versa. Rule Equiv allows the use of this equivalence relation.
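A candidate check for the equivalence on message types can be sketched as follows. We assume, as an illustration only, that \(\approx \) coincides with equality of per-receiver projections; both rules above clearly preserve these projections, since commutation is restricted to disjoint receiver sets and splitting keeps each receiver's sequence intact.

```python
def per_receiver(m):
    """m is a list of (receivers, payload); project onto each receiver."""
    proj = {}
    for receivers, z in m:
        for p in receivers:
            proj.setdefault(p, []).append(z)
    return proj

def equiv(m1, m2):
    """Equal per-receiver projections: a candidate test for m1 ~ m2."""
    return per_receiver(m1) == per_receiver(m2)
```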

Rule Res requires the session typing to be consistent for the session \(\mathsf {s}\) in order to type the restriction on \(\mathsf {s}\).

A system can be typed if the network can be typed, while the global state is arbitrary, see rule System.

A crucial observation is that virtual monitors occur in generalised types only if queues contain adaptation flags. In other words, using the condition of \(\lambda \)-freeness (which is a premise of rules AdaOutCont and AdaOutNew), we get:

It is standard to prove an inversion lemma for networks and systems by induction on derivations (Table 12).

Lemma 2

(Inversion Lemma)

  1. 1.

    If \(\vdash _{\Sigma } \mathsf {new} (\mathsf{G}) \rhd \varDelta \), then \(\Sigma =\varDelta =\emptyset \).

  2. 2.

    If \(\vdash _{\Sigma } \mathsf {end} [P] \rhd \varDelta \), then \(\Sigma =\varDelta =\emptyset \).

  3. 3.

    If \(\vdash _{\Sigma } \fancyscript{M}[P] \rhd \varDelta \) and \(\fancyscript{M}\not =\mathsf {end} \), then \(\Sigma =\emptyset \) and \(\varDelta =\{\mathsf {s}[\mathsf{p}]{ :}\fancyscript{M}\}\) and \( \vdash P \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}\) and \(\mathsf {T}\propto \fancyscript{M}\).

4. If \(\vdash _{\Sigma } \mathsf {s}{ :}\varnothing \rhd \varDelta \), then \(\Sigma =\{\mathsf {s}\}\) and \(\varDelta =\emptyset \).

5. If \(\vdash _{\Sigma } \mathsf {s}{ :}h \cdot (\mathsf{p}, \varPi ,\ell (v)) \rhd \varDelta \), then \(\Sigma =\{\mathsf {s}\}\) and \(\varDelta \approx \varDelta '\sharp \{ \mathsf {s}[\mathsf{p}]{ :}\langle \varPi !\ell (S),-\rangle \}\) and \(\vdash _{\{\mathsf {s}\}} \mathsf {s}{ :}h \rhd \varDelta '\) and \(\vdash v{ :}S\).

6. If \(\vdash _{\Sigma } \mathsf {s}{ :}h \cdot (\mathsf{p}, \varPi ,\lambda (\mathsf{G})) \rhd \varDelta \), then \(\Sigma =\{\mathsf {s}\}\) and \(\varDelta \approx \varDelta '\sharp (\{ \mathsf {s}[\mathsf{p}]{ :}\langle \varPi !\lambda ,-\rangle \}\cup \{ \mathsf {s}[\mathsf{q}]{ :}\langle \epsilon ,\mathsf{G} \! \upharpoonright \! \mathsf{q}\,\rangle \mid \mathsf{q}\in \varPi \})\) and \(\vdash _{\{\mathsf {s}\}} \mathsf {s}{ :}h \rhd \varDelta '\).

7. If \(\vdash _{\Sigma } N_1 ~|~N_2 \rhd \varDelta \), then \(\Sigma =\Sigma _1\cup \Sigma _2\) and \(\varDelta =\varDelta _1 *\varDelta _2\) and \(\vdash _{\Sigma _1} N_1 \rhd \varDelta _1\) and \(\vdash _{\Sigma _2} N_2 \rhd \varDelta _2\) and \(\Sigma _1\cap \Sigma _2 = \emptyset \).

8. If \(\vdash _{\Sigma } (\nu \,\mathsf {s})N \rhd \varDelta \), then \(\Sigma =\Sigma '\setminus \{\mathsf {s}\}\) and \(\varDelta =\varDelta '\setminus \mathsf {s}\) and \(\vdash _{\Sigma '} N \rhd \varDelta '\) and \(\mathsf {con}({\varDelta '},{\mathsf {s}})\).

9. If \(\vdash _{\Sigma } N~|\!|~\sigma \rhd \varDelta \), then \(\vdash _{\Sigma } N \rhd \varDelta \).

We also need a lemma stating how the typing depends on the first message on the queue. The proof follows immediately from the typing rules of queues.

Lemma 3

1. If \(\vdash _{\Sigma } \mathsf {s}{ :}(\mathsf{p}, \varPi ,\ell (v)) \cdot h \rhd \varDelta \), then \(\Sigma =\{\mathsf {s}\}\) and \(\varDelta \approx \{ \mathsf {s}[\mathsf{p}]{ :}\langle \varPi !\ell (S),-\rangle \}\sharp \varDelta '\) and \(\vdash _{\{\mathsf {s}\}} \mathsf {s}{ :}h \rhd \varDelta '\) and \(\vdash v{ :}S\).

2. If \(\vdash _{\Sigma } \mathsf {s}{ :}(\mathsf{p}, \varPi ,\lambda (\mathsf{G})) \cdot h \rhd \varDelta \), then \(\Sigma =\{\mathsf {s}\}\) and \(\varDelta \approx (\{ \mathsf {s}[\mathsf{p}]{ :}\langle \varPi !\lambda ,-\rangle \}\cup \{ \mathsf {s}[\mathsf{q}]{ :}\langle \epsilon ,\mathsf{G} \! \upharpoonright \! \mathsf{q}\,\rangle \mid \mathsf{q}\in \varPi \})\sharp \varDelta '\) and \(\vdash _{\{\mathsf {s}\}} \mathsf {s}{ :}h \rhd \varDelta '\).

Table 12 Typing rules for networks and systems

Monitor LTS transitions reveal the monitor shapes, as detailed in the next lemma, which can be proved by a straightforward case analysis.

Lemma 4

1. If \(\fancyscript{M}\xrightarrow {\mathsf{p}? \ell } \fancyscript{M}'\), then \(\fancyscript{M}=\mathsf{p}? \{\ell _i (S_i).\fancyscript{M}_i\}_{i \in I}\) and \(\ell =\ell _j\) and \(\fancyscript{M}'=\fancyscript{M}_j\) for some \(j\in I\).

2. If \(\fancyscript{M}\xrightarrow {\varPi ! \ell } \fancyscript{M}'\), then \(\fancyscript{M}={\varPi }!\{\ell _i (S_i). \fancyscript{M}_i\}_{i \in I}\) and \(\ell =\ell _j\) and \(\fancyscript{M}'=\fancyscript{M}_j\) for some \(j\in I\).

3. If \(\fancyscript{M}\xrightarrow {\mathsf{p}? \lambda }\), then \(\fancyscript{M}=\mathsf{p}? \{\lambda _i\}_{i\in I}\) and \(\lambda =\lambda _j\) for some \(j\in I\).

4. If \(\fancyscript{M}\xrightarrow {\varPi ! \lambda }\), then \(\fancyscript{M}=\varPi ! \{\lambda _i\}_{i\in I}\) and \(\lambda =\lambda _j\) for some \(j\in I\).
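The case analysis of Lemma 4 can be sketched as executable code. The following Python fragment is a minimal illustration only, not the authors' formalisation: monitors are encoded as tagged tuples, and `step` mirrors the four shapes of the lemma, returning the continuation monitor for label transitions, a marker for flag transitions (which, as in items (3) and (4), constrain only the shape of \(\fancyscript{M}\)), and `None` when no transition exists.

```python
# Illustrative encoding of monitors as tagged tuples (not the paper's syntax).

END = ("end",)

def inp(p, branches):            # p ? {ell_i(S_i).M_i}
    return ("in", p, branches)   # branches: {label: (sort, continuation)}

def out(pi, branches):           # Pi ! {ell_i(S_i).M_i}
    return ("out", frozenset(pi), branches)

def inp_flag(p, flags):          # p ? {lambda_i}
    return ("inflag", p, frozenset(flags))

def out_flag(pi, flags):         # Pi ! {lambda_i}
    return ("outflag", frozenset(pi), frozenset(flags))

def step(monitor, action):
    """One LTS transition; `action` mirrors the labels of Lemma 4,
    e.g. ("in", p, ell) for p?ell, or ("outflag", pi, lam) for Pi!lam,
    where `pi` is a frozenset of participants."""
    kind = action[0]
    if kind in ("in", "out") and monitor[0] == kind and monitor[1] == action[1]:
        branch = monitor[2].get(action[2])   # ell = ell_j for some j (items 1, 2)
        return branch[1] if branch else None
    if kind in ("inflag", "outflag") and monitor[0] == kind and monitor[1] == action[1]:
        return "adapt" if action[2] in monitor[2] else None   # items 3, 4
    return None                              # shapes do not match: no transition

# Example: the monitor 2?ell(Bool).2?ell'(Int).end used later in the text.
M = inp(2, {"ell": ("Bool", inp(2, {"ell'": ("Int", END)}))})
M1 = step(M, ("in", 2, "ell"))
assert M1 == inp(2, {"ell'": ("Int", END)})
assert step(M1, ("in", 2, "ell'")) == END
assert step(M, ("in", 3, "ell")) is None     # wrong sender: no transition
```

Flag transitions deliberately yield no continuation monitor: in the calculus, the continuation after an adaptation flag is installed by the adaptation step, not by the monitor itself.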

The following lemma relates the communications offered by processes (as LTS transitions) with their types.

Lemma 5

1. If \(P\xrightarrow {\mathsf {s}[\mathsf{p}] ? \ell (v)} P'\) and \( \vdash P \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}\), then either \(P=\mathsf {s}[\mathsf{p}] ? \ell (x).P_0\) and \(\mathsf {T}=\, ? \ell (S).\mathsf {T}'\) or \(P=\mathsf {s}[\mathsf{p}] ? \ell (x).P_0+P''\) and \(\mathsf {T}=\, ? \ell (S).\mathsf {T}'\wedge \mathsf {T}''\), and in both cases \( \vdash \mathsf {s}[\mathsf{p}] ? \ell (x).P_0 \rhd \mathsf {s}[\mathsf{p}]{ :} ? \ell (S).\mathsf {T}'\) and \(P'=P_0\{v/x\}\).

2. If \(P\xrightarrow {\mathsf {s}[\mathsf{p}] ! \ell (v)} P'\) and \( \vdash P \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}\), then either \(\mathsf {T}=\, ! \ell (S). \mathsf {T}'\) or \(\mathsf {T}=\, ! \ell (S). \mathsf {T}'\wedge \mathsf {T}''\), and \( \vdash P' \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}'\) and \(\vdash v:S\).

3. If \(P\xrightarrow {\mathsf {s}[\mathsf{p}]? ( \lambda ,\mathsf {T}')} P'\) and \( \vdash P \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}\), then either \(\mathsf {T}=\, ? \lambda \) or \(\mathsf {T}=\, ? \lambda \wedge \mathsf {T}''\), and \( \vdash P' \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}'\).

4. If \(P\xrightarrow {\mathsf {s}[\mathsf{p}]! (\lambda (F),\mathsf {T}')} P'\) and \( \vdash P \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}\), then either \(\mathsf {T}=\, ! \lambda \) or \(\mathsf {T}=\, ! \lambda \wedge \mathsf {T}''\), and \( \vdash P' \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}'\).

Proof

The proof is by structural induction on \(P\). We show only Point (1), the proof for the other points being simpler. If \(P\xrightarrow {\mathsf {s}[\mathsf{p}] ? \ell (v)} P'\), then either \(P=\mathsf {s}[\mathsf{p}] ? \ell (x).P_0\) or \(P=P_1+P_2\) and \(P_i \xrightarrow {\mathsf {s}[\mathsf{p}] ? \ell (v)} P'\) for \(i=1\) or \(i=2\). In the first case \(P\) is typed by rule rcv and \(\mathsf {T}=\, ? \ell (S).\mathsf {T}'\). In the second case \(P\) is typed by rule choice. Then \( \vdash P_i \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}_i\) and \(\mathsf {T}=\mathsf {T}_1\wedge \mathsf {T}_2\) for \(i=1,2\). By induction, either \(P_i=\mathsf {s}[\mathsf{p}] ? \ell (x).P_0\) and \(\mathsf {T}_i=\, ? \ell (S).\mathsf {T}'\) or \(P_i=\mathsf {s}[\mathsf{p}] ? \ell (x).P_0+P_i'\) and \(\mathsf {T}_i=\, ? \ell (S).\mathsf {T}'\wedge \mathsf {T}''\) for \(i=1\) or \(i=2\). In both cases \( \vdash \mathsf {s}[\mathsf{p}] ? \ell (x).P_0 \rhd \mathsf {s}[\mathsf{p}]{ :} ? \ell (S).\mathsf {T}'\) and \(P'=P_0\{v/x\}\). \(\square \)

As usual, session types are not preserved under system reduction: they evolve according to the actions performed by the corresponding participants. This evolution is formalised by the reduction rules given in Table 13, where message types are considered modulo the equivalence relation defined above. The rules in the first line allow us to create monitors and queue types. The rules in the second line get rid of types carrying no information. The subsequent four rules deal with outputs and inputs of labels with sorts and with flags. In particular, the rule in the penultimate line shows how a virtual monitor becomes the current monitor when a participant adapts itself.

Table 13 Reduction of session typings

Notice that not all the left-hand sides of the reduction rules for networks and systems are typed by consistent session typings. For example,

$$\begin{aligned} \begin{array}{l}\vdash _{\{\mathsf {s}\}} \fancyscript{M}[\mathsf {s}[1] ? \ell (x) .\mathsf {s}[1] ? \ell '(y) .\mathbf {0}]~|~\mathsf {s}{ :}(2, 1,\ell (\mathsf {true})) \rhd \varDelta \end{array} \end{aligned}$$

where \(\fancyscript{M}=2?\ell (\mathsf {Bool}).2?\ell '(\mathsf {Int}).\mathsf {end} \) and \(\varDelta =\{\mathsf {s}[1]{ :}\fancyscript{M}, \mathsf {s}[2]{ :}\langle 1 !\ell (\mathsf {Bool}),-\rangle \}\). Observe that

$$\begin{aligned} \fancyscript{M}[\mathsf {s}[1] ? \ell (x) .\mathsf {s}[1] ? \ell '(y) .\mathbf {0}]~|~\mathsf {s}{ :}(2, 1,\ell (\mathsf {true})) \end{aligned}$$

matches the left-hand side of the reduction rule In, and \(\varDelta \) is not consistent. The network obtained by putting this network in parallel with \(1 !\ell '(\mathsf {Int}).\mathsf {end} [\mathsf {s}[2] ! \ell '(7). \mathbf {0}]\) has a consistent session typing. It is then crucial to show that if the left-hand side of a reduction rule is typed by a session typing, which is consistent when composed with some session typing, then the same property holds for the right-hand side too. It is sufficient to consider the reduction rules which do not contain network and system reductions as premises, i.e. those which are the leaves of the reduction trees. This is formalised in the following lemma, which is the key step for proving the Subject Reduction Theorem.

Lemma 6

(Key Lemma)

1. Let \(\vdash _{\Sigma } N \rhd \varDelta \), and \( N\longrightarrow {N'}\) be obtained by any reduction rule different from Equiv, and \(\varDelta *\varDelta _0\) be consistent for some \(\varDelta _0\). Then there is \(\varDelta '\) such that \(\vdash _{\Sigma } N' \rhd \varDelta '\) and \({\varDelta }~\Longrightarrow ^*~{\varDelta '}\) and \(\varDelta '*\varDelta _0\) is consistent.

2. Let \(\vdash _{\Sigma } \fancyscript{S} \rhd \varDelta \), and \( \fancyscript{S}\longrightarrow {\fancyscript{S}'}\) be obtained by any reduction rule different from SN, CTX, and \(\varDelta *\varDelta _0\) be consistent for some \(\varDelta _0\). Then there is \(\varDelta '\) such that \(\vdash _{\Sigma } \fancyscript{S}' \rhd \varDelta '\) and \({\varDelta }~\Longrightarrow ^*~{\varDelta '}\) and \(\varDelta '*\varDelta _0\) is consistent.

Proof

(1). The proof is by cases on the network reduction rules. The cases of rules Init and Tau are trivial, since \(\varDelta =\varDelta '\).

Rule In. By Lemma 2(7),

$$\begin{aligned} \vdash _{\Sigma } \fancyscript{M}[P] ~|~\mathsf {s}{ :}(\mathsf{q}, \mathsf{p},\ell (v))\cdot h \rhd \varDelta \end{aligned}$$

implies \(\Sigma =\Sigma _1\cup \Sigma _2\)

$$\begin{aligned}&{\varDelta =\varDelta _1 *\varDelta _2} {}\end{aligned}$$
(1)
$$\begin{aligned}&{\vdash _{\Sigma _1} \fancyscript{M}[P] \rhd \varDelta _1}{}\end{aligned}$$
(2)
$$\begin{aligned}&{\vdash _{\Sigma _2} \mathsf {s}{ :}(\mathsf{q}, \mathsf{p},\ell (v))\cdot h \rhd \varDelta _2}{} \end{aligned}$$
(3)

By Lemma 4(1) \(\fancyscript{M}\!\xrightarrow {\mathsf{q}? \ell }\! \fancyscript{M}'\) implies \(\fancyscript{M}\!=\!\mathsf{q}? \{\ell _i (S_i).\fancyscript{M}_i\}_{i \in I}\) and \(\ell =\ell _j\) and \(\fancyscript{M}'=\fancyscript{M}_j\) for some \(j\in I\). By Lemma 2(3), the judgment (2) gives \(\Sigma _1=\emptyset \) and

$$\begin{aligned} {\varDelta _1=\{\mathsf {s}[\mathsf{p}]{ :}\fancyscript{M}\}}{} \end{aligned}$$
(4)

and \( \vdash P \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}\) and \(\mathsf {T}\propto \fancyscript{M}\). By Lemma 3(1), the judgment (3) gives \(\Sigma _2=\{\mathsf {s}\}\) and

$$\begin{aligned} \varDelta _2\approx \{ \mathsf {s}[\mathsf{q}]{ :}\langle \mathsf{p}!\ell (S),-\rangle \}\sharp \varDelta _3{} \end{aligned}$$
(5)

and

$$\begin{aligned} \vdash _{\{\mathsf {s}\}} \mathsf {s}{ :}h \rhd \varDelta _3{} \end{aligned}$$
(6)

and \(\vdash v{ :}S\).

By Lemma 5(1) \(P\xrightarrow {\mathsf {s}[\mathsf{p}] ? \ell (v)} P'\) and \( \vdash P \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}\) imply either \(P=\mathsf {s}[\mathsf{p}] ? \ell (x).P_0\) and \(\mathsf {T}=\, ? \ell (S').\mathsf {T}'\) or \(P=\mathsf {s}[\mathsf{p}] ? \ell (x).P_0+P_1\) and \(\mathsf {T}=\, ? \ell (S').\mathsf {T}'\wedge \mathsf {T}''\). In both cases \( \vdash \mathsf {s}[\mathsf{p}] ? \ell (x).P_0 \rhd \mathsf {s}[\mathsf{p}]{ :} ? \ell (S').\mathsf {T}'\) and \(P'=P_0\{v/x\}\). The shapes of \(\mathsf {T}\), \(\fancyscript{M}\) and \(\mathsf {T}\propto \fancyscript{M}\) imply \(S'=S_j\) and \(\mathsf {T}' \propto \fancyscript{M}'\). The consistency of \(\varDelta *\varDelta _0\) implies \(S=S'\). Then we can derive \( \vdash P' \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}'\) and

$$\begin{aligned} \vdash _{\emptyset } \fancyscript{M}'[P'] \rhd \{\mathsf {s}[\mathsf{p}]{ :}\fancyscript{M}'\} \end{aligned}$$
(7)

Applying rule NPar to (6) and (7) we derive

$$\begin{aligned} \vdash _{\{\mathsf {s}\}} \fancyscript{M}'[P']~|~\mathsf {s}: h \rhd \{\mathsf {s}[\mathsf{p}]{ :}\fancyscript{M}'\}*\varDelta _3 \end{aligned}$$

Then \(\varDelta '=\{\mathsf {s}[\mathsf{p}]{ :}\fancyscript{M}'\}*\varDelta _3\). From (1), (4), and (5) we get

$$\begin{aligned} \begin{array}{lll}\varDelta &{}\approx &{}\{\mathsf {s}[\mathsf{p}]{ :}\langle \mathsf{m},\mathsf{q}? \{\ell _i(S_i).\fancyscript{M}_i\}_{i\in I},{\fancyscript{V}}\rangle ,\mathsf {s}[\mathsf{q}]{ :}\langle \mathsf{p}!\ell (S);\mathsf{m}',{\fancyscript{V}}'\rangle \}\\ &{}&{}\cup \varDelta '_3\qquad \qquad \text {where } \ell =\ell _j \text { and } S=S_j \text { and } j\in I\end{array} \end{aligned}$$

for some \(\mathsf{m},{\fancyscript{V}},\mathsf{m}',{\fancyscript{V}}',\varDelta '_3\) such that

$$\begin{aligned} \varDelta _3=\{ \mathsf {s}[\mathsf{p}] { :}\langle \mathsf{m},{\fancyscript{V}}\rangle ,\mathsf {s}[\mathsf{q}] { :}\langle \mathsf{m}',{\fancyscript{V}}'\rangle \}\cup \varDelta '_3. \end{aligned}$$

Since

$$\begin{aligned} \begin{array}{l}{\{\mathsf {s}[\mathsf{q}] { :}\langle \mathsf{p}!\ell (S)\,;\,{\mathsf{m}'},{\fancyscript{V}}'\rangle , \mathsf {s}[\mathsf{p}] { :}\langle \mathsf{m},\mathsf{q}? \{\ell _i(S_i).\fancyscript{M}_i\}_{i\in I},{\fancyscript{V}}\rangle \}}\Longrightarrow \\ {\{\mathsf {s}[\mathsf{q}] { :}\langle \mathsf{m}',{\fancyscript{V}}'\rangle , \mathsf {s}[\mathsf{p}] { :}\langle \mathsf{m},\fancyscript{M}',{\fancyscript{V}}\rangle \}}\end{array} \end{aligned}$$

then we get \({\varDelta }~\Longrightarrow ^*~{\varDelta '}\). The only differences between \(\varDelta \) and \(\varDelta '\) are:

  • the erasure of the message \(\mathsf{p}!\ell (S)\) in the type of \(\mathsf {s}[\mathsf{q}]\);

  • the replacement of the monitor \(\mathsf{q}? \{\ell _i(S_i).\fancyscript{M}_i\}_{i\in I}\) by the monitor \(\fancyscript{M}'\) in the type of \(\mathsf {s}[\mathsf{p}].\)

It is then easy to check that the consistency of \(\varDelta *\varDelta _0\) implies the consistency of \(\varDelta '*\varDelta _0\).

Rule AdaInNew. By Lemma 2(7),

$$\begin{aligned} \vdash _{\Sigma } \fancyscript{M}[P] ~|~\mathsf {s}{ :}(\mathsf{q}, \mathsf{p},\lambda (\mathsf{G}))\cdot h \rhd \varDelta \end{aligned}$$

implies \(\Sigma =\Sigma _1\cup \Sigma _2\)

$$\begin{aligned}&{\varDelta =\varDelta _1 *\varDelta _2} {}\end{aligned}$$
(8)
$$\begin{aligned}&{\vdash _{\Sigma _1} \fancyscript{M}[P] \rhd \varDelta _1}{}\end{aligned}$$
(9)
$$\begin{aligned}&{\vdash _{\Sigma _2} \mathsf {s}{ :}(\mathsf{q}, \mathsf{p},\lambda (\mathsf{G}))\cdot h \rhd \varDelta _2}{} \end{aligned}$$
(10)

By Lemma 4(3) \(\fancyscript{M}\xrightarrow {\mathsf{q}? \lambda }\) implies

$$\begin{aligned} {\fancyscript{M}=\mathsf{q}? \{\lambda _i\}_{i\in I}}{} \end{aligned}$$
(11)

and \(\lambda =\lambda _j\) for some \(j\in I\). By Lemma 2(3), the judgment (9) gives \(\Sigma _1=\emptyset \) and

$$\begin{aligned} {\varDelta _1=\{\mathsf {s}[\mathsf{p}]{ :}\fancyscript{M}\}}{} \end{aligned}$$
(12)

and \( \vdash P \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}\) and \(\mathsf {T}\propto \fancyscript{M}\). By Lemma 3(2), the judgment (10) gives \(\Sigma _2=\{\mathsf {s}\}\) and

$$\begin{aligned} {\varDelta _2\approx \{ \mathsf {s}[\mathsf{q}]{ :}\langle \mathsf{p}!\lambda ,-\rangle , \mathsf {s}[\mathsf{p}]{ :}\langle \epsilon ,\fancyscript{M}'\rangle \}\sharp \varDelta _3}{} \end{aligned}$$
(13)

(taking into account that \(\mathsf{G} \! \upharpoonright \! \mathsf{q}\,=\fancyscript{M}'\)) and

$$\begin{aligned} {\vdash _{\{\mathsf {s}\}} \mathsf {s}{ :}h \rhd \varDelta _3}{} \end{aligned}$$
(14)

We can obtain

$$\begin{aligned} {\vdash _{\emptyset } \fancyscript{M}'[Q\{\mathsf {s}[\mathsf{p}]/y\}] \rhd \{\mathsf {s}[\mathsf{p}]{ :}\fancyscript{M}'\}}{ } \end{aligned}$$
(15)

by using rule MP, since \((Q,\mathsf {T}') \in \fancyscript{P}\), \(\mathsf {T}'\propto \fancyscript{M}'\) and \(\fancyscript{M}'\not =\mathsf {end} \) (which is implied by the premise of the rule AdaInNew). Hence, we apply rule NPar to the judgments (14) and (15) and we derive:

$$\begin{aligned} \vdash _{\{\mathsf {s}\}} \fancyscript{M}'[Q\{\mathsf {s}[\mathsf{p}]/y\}] ~|~\mathsf {s}{ :}h \rhd \{\mathsf {s}[\mathsf{p}]{ :}\fancyscript{M}'\}*\varDelta _3 \end{aligned}$$

Then \(\varDelta '=\{\mathsf {s}[\mathsf{p}]{ :}\fancyscript{M}'\}*\varDelta _3\). Notice that (8), (12), (11) and (13) imply

$$\begin{aligned} \varDelta \approx \{\mathsf {s}[\mathsf{p}] { :}\langle \mathsf{m},\mathsf{q}? \{\lambda _i\}_{i\in I},\fancyscript{M}'\rangle \}\cup (\{\mathsf {s}[\mathsf{q}] { :}\langle \mathsf{p}!\lambda ;\mathsf{m}',-\rangle \}\sharp \varDelta '_3) \end{aligned}$$

for some \(\mathsf{m},\mathsf{m}',\varDelta '_3\) such that

$$\begin{aligned} \varDelta _3\approx \{ \mathsf {s}[\mathsf{p}] { :}\langle \mathsf{m},-\rangle ,\mathsf {s}[\mathsf{q}] { :}\langle \mathsf{m}',-\rangle \}\cup \varDelta '_3. \end{aligned}$$

Note that \(\mathsf {s}[\mathsf{p}]\) has \(\fancyscript{M}'\) as virtual monitor in \(\varDelta _2\) and then no virtual monitor in \(\varDelta _3\). Instead, \(\mathsf {s}[\mathsf{q}]\), being the sender of the adaptation, has no virtual monitor. Since

$$\begin{aligned} \begin{array}{l}{\{\mathsf {s}[\mathsf{q}] { :}\langle \mathsf{p}!\lambda \,;\,{\mathsf{m}'},-\rangle , \mathsf {s}[\mathsf{p}] { :}\langle \mathsf{m},\mathsf{q}? \{\lambda _i\}_{i\in I},\fancyscript{M}'\rangle \}}\Longrightarrow \\ {\{\mathsf {s}[\mathsf{q}] { :}\langle \mathsf{m}',-\rangle , \mathsf {s}[\mathsf{p}] { :}\langle \mathsf{m},\fancyscript{M}',-\rangle \}}\end{array} \end{aligned}$$

we get \({\varDelta }~\Longrightarrow ^*~{\varDelta '}\). The only differences between \(\varDelta \) and \(\varDelta '\) are:

  • the erasure of the message \(\mathsf{p}!\lambda \) in the type of \(\mathsf {s}[\mathsf{q}]\);

  • the erasure of the monitor \(\mathsf{q}? \{\lambda _i\}_{i\in I}\) in the type of \(\mathsf {s}[\mathsf{p}]\);

  • the monitor \(\fancyscript{M}'\) is virtual in the type of \(\mathsf {s}[\mathsf{p}]\) in \(\varDelta \) and it is active in the type of \(\mathsf {s}[\mathsf{p}]\) in \(\varDelta '\).

It is then easy to check that the consistency of \(\varDelta *\varDelta _0\) implies the consistency of \(\varDelta '*\varDelta _0\).

The proofs for rules Out and AdaInCont are similar to, and simpler than, those for rules In and AdaInNew, respectively.

(2). The proof is by cases on system reduction rules. The case of rule OP is trivial, since \(\varDelta =\varDelta '\).

Rule AdaOutCont. Since \(h\) is \(\lambda \)-free, \(\varDelta \) does not contain virtual monitors. By Lemma 2(9) and (7), \(\vdash _{\Sigma } \fancyscript{M}[P] ~|~\mathsf {s}{ :}h~|\!|~\sigma \rhd \varDelta \) implies \(\Sigma =\Sigma _1\cup \Sigma _2\),

$$\begin{aligned}&{\varDelta =\varDelta _1 *\varDelta _2}{}\end{aligned}$$
(16)
$$\begin{aligned}&{\vdash _{\Sigma _1} \fancyscript{M}[P] \rhd \varDelta _1}{}\end{aligned}$$
(17)
$$\begin{aligned}&{\vdash _{\Sigma _2} \mathsf {s}{ :}h \rhd \varDelta _2}{} \end{aligned}$$
(18)

By Lemma 4(4), \(\fancyscript{M}\xrightarrow {\varPi ! \lambda }\) implies

$$\begin{aligned} {\fancyscript{M}=\varPi ! \{\lambda _i\}_{i\in I}}{} \end{aligned}$$
(19)

and \(\lambda =\lambda _j\) for some \(j\in I\). By Lemma 2(3), the judgment (17) gives \(\Sigma _1=\emptyset \) and

$$\begin{aligned}&{\varDelta _1=\{\mathsf {s}[\mathsf{p}]{ :}\fancyscript{M}\}}{}\end{aligned}$$
(20)
$$\begin{aligned}&{ \vdash P \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}}{} \end{aligned}$$
(21)

and \(\mathsf {T}\propto \fancyscript{M}\).

By Lemma 5(4), the judgment (21) and \(P\! \xrightarrow {\mathsf {s}[\mathsf{p}]! (\lambda (F),\mathsf {T}')}\! P'\) imply \( \vdash P' \rhd \mathsf {s}[\mathsf{p}]{ :}\mathsf {T}'\). We consider only the case \(\fancyscript{M}_\mathsf{p}\not =\mathsf {end} \), the proof for the case \(\fancyscript{M}_\mathsf{p}=\mathsf {end} \) being similar but for the use of the rule \({\{\mathsf {s}[\mathsf{p}] { :}\langle \mathsf{m},\mathsf {end},-\rangle \}}~\Longrightarrow ~{\{\mathsf {s}[\mathsf{p}]{ :}\langle \mathsf{m},-\rangle \}}\). Since \(\mathsf {T}'\propto \fancyscript{M}_\mathsf{p}\), we derive

$$\begin{aligned} {\vdash _{\emptyset } \fancyscript{M}_\mathsf{p}[P'] \rhd \{\mathsf {s}[\mathsf{p}]{ :}\fancyscript{M}_\mathsf{p}\}} {} \end{aligned}$$
(22)

by rule MP. Similarly, for all \(\mathsf{q}\in \varPi '{\setminus }(\varPi \cup \{\mathsf{p}\})\) we can derive

$$\begin{aligned} {\vdash _{\emptyset } \fancyscript{M}_\mathsf{q}[P_\mathsf{q}\{\mathsf {s}[\mathsf{q}]/y_\mathsf{q}\}] \rhd \{\mathsf {s}[\mathsf{q}]{ :}\fancyscript{M}_\mathsf{q}\}}{ } \end{aligned}$$
(23)

by rule MP, since \((P_\mathsf{q},\mathsf {T}_\mathsf{q}) \in \fancyscript{P}\) and \(\mathsf {T}_\mathsf{q}\propto \fancyscript{M}_\mathsf{q}\).

By Lemma 2(4), (5) and (6), the judgment (18) gives \(\Sigma _2=\{\mathsf {s}\}\). Rule QSendG applied to the judgment (18) derives

$$\begin{aligned} {\vdash _{\{\mathsf {s}\}} \mathsf {s}{ :}h\cdot (\mathsf{p}, \varPi ,\lambda (\mathsf{G})) \rhd \varDelta _2\sharp \varDelta _3}{ } \end{aligned}$$
(24)

where \(\varDelta _3=\{ \mathsf {s}[\mathsf{p}]{ :}\langle \varPi !\lambda ,-\rangle \}\cup \{ \mathsf {s}[\mathsf{q}]{ :}\langle \epsilon ,\mathsf{G} \! \upharpoonright \! \mathsf{q}\,\rangle \mid \mathsf{q}\in \varPi \}\). Notice that \(\varDelta _2\sharp \varDelta _3\) is defined, since \(\varDelta _2\) does not contain virtual monitors. Applying rule NPar to judgments (22), (23) and (24) we conclude

$$\begin{aligned} \vdash _{\{\mathsf {s}\}} \fancyscript{M}_\mathsf{p}[P'] ~|~N~|~\mathsf {s}{ :}h\cdot (\mathsf{p}, \varPi ,\lambda (\mathsf{G})) \rhd \varDelta ' \end{aligned}$$

where \(N=\prod _{\mathsf{q}\in \varPi '\setminus (\varPi \cup \{\mathsf{p}\})} \fancyscript{M}_\mathsf{q}[P_\mathsf{q}\{\mathsf {s}[\mathsf{q}]/y_\mathsf{q}\}]\) and \(\varDelta '=\{\mathsf {s}[\mathsf{p}]{ :}\fancyscript{M}_\mathsf{p}\}*\{\mathsf {s}[\mathsf{q}]{ :}\fancyscript{M}_\mathsf{q}\mid \mathsf{q}\in \varPi '{\setminus }(\varPi \cup \{\mathsf{p}\})\}*(\varDelta _2\sharp \varDelta _3)\). Notice that (16), (20) and (19) imply

$$\begin{aligned} \varDelta =\{\mathsf {s}[\mathsf{p}] { :}\langle \mathsf{m},{\varPi }!\{{\lambda }_i \}_{i \in I},-\rangle \}*\varDelta _2' \end{aligned}$$

for some \(\mathsf{m}\) such that \(\varDelta _2\approx \{\mathsf {s}[\mathsf{p}] { :}\langle \mathsf{m},-\rangle \}\sharp \varDelta _2'\). Since \({\{\mathsf {s}[\mathsf{p}] { :}\langle \mathsf{m},{\varPi }!\{{\lambda }_i \}_{i \in I},-\rangle \}}~\Longrightarrow ~{\{\mathsf {s}[\mathsf{p}] { :}\langle \mathsf{m}\,;\,\varPi !\lambda ,\fancyscript{M}_\mathsf{p},-\rangle \}}\) for \(\lambda =\lambda _j\) with \(j\in I\)

\({\emptyset }~\Longrightarrow ~{\{\mathsf {s}[\mathsf{q}]{ :}\langle \epsilon ,\mathsf{G} \! \upharpoonright \! \mathsf{q}\,\rangle \}}\) for \(\mathsf{q}\in \varPi \)

\({\emptyset }~\Longrightarrow ~{\{\mathsf {s}[\mathsf{q}]{ :}\fancyscript{M}_\mathsf{q}\}}\) for \(\mathsf{q}\in \varPi '{\setminus }(\varPi \cup \{\mathsf{p}\})\)

we get \({\varDelta }~\Longrightarrow ^*~{\varDelta '}\). The session typing \(\varDelta '\) contains only monitors which are projections of the global type \(\mathsf{G}\). Therefore, the consistency of \(\varDelta *\varDelta _0\) implies the consistency of \(\varDelta '*\varDelta _0\).

The proof for rule AdaOutNew proceeds as in the previous case. \(\square \)

The next lemma shows that typings for systems are invariant under structural equivalence of networks, as expected.

Lemma 7

If  \(\vdash _{\Sigma } N~|\!|~\sigma \rhd \varDelta \) and \(N\!\equiv \! N'\), then \(\vdash _{\Sigma } N'~|\!|~\sigma \rhd \varDelta \).

Proof

By Lemma 2(9) and rule System, it is enough to show that \(\vdash _{\Sigma } N \rhd \varDelta \) and \( N\equiv N'\) imply \(\vdash _{\Sigma } N' \rhd \varDelta \). The proof is by induction on the definition of structural equivalence, observing that \(\vdash _{\emptyset } \mathsf {end} [P] \rhd \emptyset \) and using typing rule Equiv. \(\square \)

Theorem 1

(Subject Reduction) If  \(\vdash _{\Sigma } \fancyscript{S} \rhd \varDelta \) with \(\varDelta \) consistent and \( \fancyscript{S}\longrightarrow ^* \fancyscript{S}'\), then \(\vdash _{\Sigma } \fancyscript{S}' \rhd \varDelta '\) for some consistent \(\varDelta '\) such that \({\varDelta }~\Longrightarrow ^*~{\varDelta '}\).

Proof

It is enough to show the statement for the case \(\fancyscript{S}\equiv {\fancyscript{E}}[N]~|\!|~\sigma \) and \(\fancyscript{S}'\equiv {\fancyscript{E}}[N']~|\!|~\sigma '\), where either \(N\longrightarrow N'\) or \(N~|\!|~\sigma \longrightarrow N'~|\!|~\sigma '\) by one of the rules considered in Lemma 6. By the structural equivalence on networks, we can assume \({\fancyscript{E}}=(\overrightarrow{\nu \mathsf {s}})([\;]~|~N_0)\) without loss of generality. Lemma 7 and Lemma 2(9), (8) and (7) applied to \(\vdash _{\Sigma } \fancyscript{S} \rhd \varDelta \) give \(\vdash _{\Sigma _1} N \rhd \varDelta _1\) and \(\vdash _{\Sigma _0} N_0 \rhd \varDelta _0\), where \(\Sigma =(\Sigma _0\cup \Sigma _1)\setminus \overrightarrow{\mathsf {s}}\) and \(\varDelta =(\varDelta _0*\varDelta _1)\setminus \overrightarrow{\mathsf {s}}\). The consistency of \(\varDelta \) implies the consistency of \(\varDelta _0*\varDelta _1\) by Lemma 2(8). In the case \(N\!\longrightarrow \! N'\), by Lemma 6(1) there is \(\varDelta '_1\) such that \(\vdash _{\Sigma _1} N' \rhd \varDelta '_1\) and \({\varDelta _1}~\Longrightarrow ^*~{\varDelta _1'}\) and \(\varDelta _0*\varDelta '_1\) is consistent. In the case \(N~|\!|~\sigma \longrightarrow N'~|\!|~{\sigma '}\), by Lemma 6(2), there is \(\varDelta '_1\) such that \(\vdash _{\Sigma _1} N'~|\!|~\sigma ' \rhd \varDelta '_1\) (that is \(\vdash _{\Sigma _1} N' \rhd \varDelta '_1\) by Lemma 2(9)) and \({\varDelta _1}~\Longrightarrow ^*~{\varDelta _1'}\) and \(\varDelta _0*\varDelta '_1\) is consistent. Therefore, we derive \(\vdash _{\Sigma } \fancyscript{S}' \rhd \varDelta '\), where \(\varDelta '= (\varDelta _0*\varDelta '_1)\setminus \overrightarrow{\mathsf {s}}\) by applying typing rules NPar, Res, and System. Observe that \({\varDelta }~\Longrightarrow ^*~{\varDelta '}\) and \(\varDelta '\) is consistent. \(\square \)

We say that a system is initial when its network is a parallel composition of session initiators; such a network is always typeable. The type system can guarantee progress, provided that the collection of processes and types contains at least one process for each monitor created at run-time by the adaptations. This condition can be statically checked when the domains of the adaptation functions occurring in processes are finite. We say that a collection \(\fancyscript{P}\) is complete if, for every global type \(\mathsf{G}\) in the domain of an adaptation function occurring in a process of \(\fancyscript{P}\), there are processes in \(\fancyscript{P}\) whose types are adequate for the monitors obtained by projecting \(\mathsf{G}\) onto its participants.
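Since completeness is decidable when the adaptation functions have finite domains, the check can be sketched as follows. All names are illustrative assumptions, not the paper's definitions: `project(G)` stands for the projection of a global type onto its participants, and `adequate(T, M)` for the adequacy of a process type for a monitor.

```python
# Schematic completeness check for a collection P of (process, type) pairs,
# under the finiteness assumption on adaptation-function domains.
# `project` and `adequate` are parameters standing for the paper's
# projection and compliance notions (illustrative, not the formal ones).

def is_complete(adaptation_domains, P, project, adequate):
    """True iff every monitor obtained by projecting a global type in the
    domain of some adaptation function is implemented by a process in P."""
    for G in adaptation_domains:                      # finite by hypothesis
        for participant, monitor in project(G).items():
            if not any(adequate(T, monitor) for (_proc, T) in P):
                return False                          # no process for `monitor`
    return True

# Toy instance: one global type with two participants whose monitors are
# plain strings, and adequacy taken as equality.
P = [("Q1", "M1"), ("Q2", "M2")]
project = lambda G: {"p": "M1", "q": "M2"}
assert is_complete(["G0"], P, project, lambda T, M: T == M)
assert not is_complete(["G0"], [("Q1", "M1")], project, lambda T, M: T == M)
```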

Theorem 2

(Progress) If \(\fancyscript{P}\) is complete, \(\fancyscript{S}\) is an initial system and \(\fancyscript{S}\longrightarrow ^*_\fancyscript{P}\fancyscript{S}'\), then \(\fancyscript{S}'\) has progress, i.e.

1. every input monitored process will eventually receive a message, and

2. every message in a queue will eventually be received by an input monitored process.

Proof

Rewriting the result of Coppo et al. [13] in our setting, we get that an initial system without adaptation flags has progress.

With respect to the framework of Coppo et al. [13], each adaptation step in our model can be seen as the start of a new interaction protocol, whose participants can all be implemented thanks to the completeness of \(\fancyscript{P}\). Therefore, the following key features ensure the progress property in our case:

1. all the interaction protocols prescribed by a global type are terminating, either by exchanging adaptation flags or by reaching \(\mathsf {end} \);

2. a participant is a process, monitored by a projection of a global type, with at most one channel, so there are no communications among participants monitored by different global types, even when they are in the same session;

3. by Subject Reduction, systems are well typed and reduction preserves the consistency of session typings, thus all communications take place in the order prescribed by the global types.

Notice that Point (2) holds thanks to the condition of \(\lambda \)-freeness in rules AdaOutCont and AdaOutNew. In conclusion, the computation can be seen as a succession of independent terminating communication protocols, each of which has the progress property; hence the whole computation has it. \(\square \)

7 Related work

The literature includes several works aimed at studying adaptive systems in different application contexts and from different perspectives on the conceptual notion of adaptation. The paper [7] provides a valuable discussion of this issue and an interesting classification of the various approaches. The state of the art in service choreography adaptation is analysed in [31]. We focus here on the papers which are most closely related to the distinguishing features of our approach.

7.1 Adaptable processes

In [5], Bravetti et al. present a calculus in which adaptable processes can be modified by “update patterns”. Run-time adaptation of structured communications is approached in [20] by combining the constructors for adaptable processes of Bravetti et al. [5] with the session type system of Garralda et al. [22] for the Boxed Ambient calculus [8]. Session behaviours are never disrupted by adaptation actions, since processes engaged in active sessions cannot be updated. This calculus deals with adaptations of single processes, not with adaptations of the choreography of communicating processes, and only considers dyadic sessions and synchronous communications.

7.2 Adaptable choreographies

The paper most similar to ours is Anderson and Rathke [1], where global and session types are used to guarantee deadlock-freedom in a calculus of multiparty sessions with asynchronous communications. Only part of the running code is updated. Two different conditions are given for ensuring liveness. The first condition requires that all channel queues are empty before updating. The second condition requires a partial order between the session participants with a unique minimal element. The participants are then updated following this order. Our adaptation framework allows the progress property to be guaranteed without assuming such conditions.

The paper [14], building on [16, 17, 30] and [15], proposes a rule-based approach in which all interactions, under all possible changes produced by the adaptation rules, proceed as prescribed by an abstract model. In particular, the system is deadlock-free by construction. The adaptive system is composed of interacting participants deployed on different locations, each executing its own code. Adaptation is performed by distributed adaptation servers, which are repositories of adaptation rules. Rules can be added or removed at any moment while the system is running. Applicability depends on the execution environment and on properties of the code region to be replaced. If a rule is applied, it replaces part of the code of (some of) the participants with a newer version, better able to meet the requirements. Adaptations of different participants are coordinated, ensuring coherent behaviour. Data and control-flow statements are written in a Java-style syntax. Central to the technical development are the notions of adaptive interaction-oriented choreography and adaptive process-oriented choreography, which resemble our global types and monitors. Although there are many analogies between that paper and ours, there are important differences. In [14], auxiliary communications are needed to ensure that all participants take the same branch in conditionals, and new participants cannot be added by an adaptation. Moreover, in [14] adaptation involves only a part of the choreography and can be applied at any moment, while in our calculus the interaction protocols contain the adaptation points and the reconfiguration step applies to the whole system.

The language of service choreographies defined in [6] is extended in [4] with two operators that take adaptation into account. One operator allows for the specification of adaptable scopes that can be dynamically modified, while the other dynamically updates the code in one such scope. A similar extension is given for the service contract language of Bravetti and Zavattaro [6], so that projection can relate the two adaptable languages. The main difference with our proposal is that, in the adaptable scopes, the set of possible participants in the modified choreographies is fixed once and for all.

7.3 Monitors

In the literature, there are many calculi in which process behaviour is statically and/or dynamically controlled by means of monitors, for example [21, 26]. The works that most influenced the present paper are Chen et al. [11] and Bocchi et al. [3]. The calculus in those papers is a multiparty session calculus with assertions, and it is therefore much more expressive than ours: the monitors in [3, 11] prescribe not only the types of the exchanged data, but also that the values of these data satisfy some predicates. Another main difference is that those monitors contain information on the behaviours of all session participants, whereas our monitors represent the behaviour of single participants.

7.4 Intersection and union types

In the present paper, we type processes with intersections and unions, taking inspiration from [33]. The type syntax in that paper is more liberal than ours, for example in not requiring that the labels in an intersection or in a union be different, so more processes can be typed. In [33] too, the most interesting processes correspond to external choices between inputs and internal choices between outputs.
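Schematically, and in the style of [33], this correspondence can be illustrated as follows (the concrete notation is illustrative, not the exact syntax of either paper): an external choice offering two input branches is typed by an intersection, and an internal choice between two output branches by a union,

\[
\underbrace{?\ell_1(S_1).\mathsf{T}_1 \,\wedge\, ?\ell_2(S_2).\mathsf{T}_2}_{\text{external choice of inputs}}
\qquad\qquad
\underbrace{!\ell_1(S_1).\mathsf{T}_1 \,\vee\, !\ell_2(S_2).\mathsf{T}_2}_{\text{internal choice of outputs}}
\]

intuitively, a process with the intersection type supports both input behaviours and lets the environment choose, while a process with the union type commits internally to one of the output behaviours.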

Subtyping for intersections and unions is naturally inspired by their set-theoretic interpretation. Under the mapping between monitors and types of Definition 8, the subtyping we give in this paper is the opposite of the one considered in [33]. Both subtypings have been widely used [9, 18, 23, 24, 29, 32, 34, 35, 37]. The main reason for this difference is that, in typing processes, one can either assume or derive the types of channels. In the simple case of a process \(P\) with only one channel \(y\), the typing judgments have the shapes \(y:\mathsf {T}\vdash P\) and \(\vdash P\rhd \{y:\mathsf {T}\}\), respectively. This is why the subtypings of [23] and [32] are defined in opposite ways: choices between fewer inputs are smaller in the subtyping of Gay and Hole [23] and bigger in the subtyping of Mostrous et al. [32], and choices between outputs behave dually.
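The two opposite orientations can be summarised as follows (writing \(\&\) for an external choice between inputs and \(\oplus\) for an internal choice between outputs; this is a sketch of the two directions, not the exact syntax of [23, 32]). In the subtyping of Gay and Hole [23]:

\[
\&\{\ell_1\colon \mathsf{T}_1\} \;\leqslant\; \&\{\ell_1\colon \mathsf{T}_1,\ \ell_2\colon \mathsf{T}_2\}
\qquad\qquad
\oplus\{\ell_1\colon \mathsf{T}_1,\ \ell_2\colon \mathsf{T}_2\} \;\leqslant\; \oplus\{\ell_1\colon \mathsf{T}_1\}
\]

whereas in the subtyping of Mostrous et al. [32] both inequalities are reversed. Which orientation is the natural one depends on whether the channel type is an assumption on the left of the judgment, as in \(y:\mathsf{T}\vdash P\), or a derived conclusion, as in \(\vdash P\rhd \{y:\mathsf{T}\}\).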

8 Conclusion

We have presented a formal model of self-adaptation in multiparty sessions. The framework is based on self-adaptive monitors and global types. In the service-oriented context, global types can be exploited as choreographies of service interactions and monitors as local protocol specifications of services. From this perspective, the present paper provides a formal model for assessing the impact of highly evolving environments, which demand dynamic self-reconfiguration of the whole system: interactions among services may be added or removed, and new services may be required. We use a type discipline to ensure that service collaborations behave safely after dynamic adaptations.

Unlike approaches that focus on adaptation as code modification in software systems, our approach is choreography-centred (similarly to [14] in this respect). When dynamic conditions demand a change, the global choreography updates itself together with the new monitors, which prescribe the new behaviours to the participants. A process fills (implements) a given monitor if its type is adequate for that monitor; otherwise a different implementation (process) needs to be found. Notably, we proved that, once the adaptation has been performed, all monitored processes behave correctly and interact with each other in a safe way.

As a main feature, we achieve decentralised control of the adaptation and notable flexibility in the dynamics of self-reconfiguration. According to its monitor, any participant can be in charge of checking the global data and sending the adaptation request, instead of devoting a centralised mechanism to this task. Furthermore, the dynamic reconfiguration can add new participants, while some of the old participants are no longer involved. Finally, processes, which are simply implementation code, can follow different incompatible computational paths, so each participant can be implemented differently in the various adaptation steps.

One apparent limitation of our calculus is that processes can operate on a single channel only. This limitation could be addressed by extending the process language and its typing rules, without major consequences for the rest of the development. Adding session delegation, instead, would require further investigation.

We plan to experiment with implementations of our approach, to evaluate its feasibility.

We are working towards a quantitative version of our model, in which the global state also contains dynamically evolving semantic information about processes, such as reputation or performance rates. Using this information, adaptation functions will be able to choose, among all the processes matching a monitor, one of the best implementations for that participant. In the present calculus, this choice is arbitrary: processes are selected solely on the basis of their compatibility with monitors, from the point of view of safe adaptations. In a realistic application, it would instead be interesting to take further requirements concerning quantitative aspects into account.