1 Introduction

Self-adaptive systems have recently attracted considerable interest in research, in particular in connection with systems of (cyber-physical) systems [9]. Adaptivity refers to the ability of a system to change its own behaviour. In programming, this concept, known as linguistic reflection, appears already in LISP [11], where programs and data are both represented uniformly as lists, so that programs represented as data can be executed dynamically by means of an evaluation operator. Run-time and compile-time linguistic reflection in programming and database research has been investigated in general by Stemple, Van den Bussche and others in [12, 13].

This raises the question of how the development of adaptive systems can be supported by state-based rigorous methods such as Abstract State Machines (ASMs) [5]. These methods come with a promise of genericity, i.e. they can be applied universally to a large class of systems, supporting rigorous specification at any level of abstraction, seamless step-wise refinement from high levels of abstraction down to implemented code, validation and tracing of requirements through the refinement process, and verification of specifications against the requirements on the grounds of dedicated logics. However, reflective algorithms are not yet covered.

Gurevich’s celebrated sequential ASM thesis [8] states that sequential algorithms are captured by sequential ASMs. A key contribution of this thesis is the language-independent characterisation of a sequential algorithm by a small set of intuitively understandable postulates, by means of which a definition of a sequential algorithm at an arbitrary level of abstraction is given. It can then be shown that every sequential algorithm as stipulated by the postulates can be simulated step-by-step by a sequential ASM.

Based on this thesis, the notion of a behavioural theory has been introduced for a triple comprising (1) a set of postulates that characterises a class of algorithms or systems, (2) an abstract machine model together with a plausibility proof that the abstract machines satisfy the postulates, and (3) a characterisation proof that all algorithms stipulated by the postulates are captured by the abstract machine model. That is, the sequential ASM thesis provides the behavioural theory of sequential algorithms. Other examples include the behavioural theory of parallel algorithms developed by Blass and Gurevich [1, 2], its simplification by Ferrarotti et al. [6] using a different set of postulates, the behavioural theory of concurrent algorithms [4], and the behavioural theory of non-deterministic database transformations [10].

In this paper we investigate a behavioural theory for reflective, sequential algorithms, which was conjectured in [7]. In light of the significantly increased technical difficulties that arise when unbounded parallelism is permitted (compare the proofs in [6] with those in the sequential ASM thesis [8]), we first restrict our attention to sequential algorithms, where parallelism is a priori bounded and does not depend on the state.

We first develop a set of postulates characterising reflective sequential algorithms (RSAs). The key issue is that an RSA must have some representation of itself, but this representation has to be left completely abstract. We argue that this is possible, which leads to extended states, in which abstract terms that appear in the description of the algorithm are used as values; this requires a distinction concerning their interpretation in a state. The tricky problem is the generalisation of bounded exploration, for it is clear that all means of an algorithm to change itself must somehow appear in the algorithm’s description. We argue that there is still a bounded exploration witness, i.e. a set of ground terms that determines the update sets yielded in a state, but the bounded exploration postulate will nonetheless require some sophisticated differentiation concerning the interpretation of terms. The postulates for RSAs are discussed in Sect. 2.

In Sect. 3 we proceed with the definition of reflective sequential ASMs (rASMs), which are a straightforward extension of ASMs using a dedicated location self that captures the (syntax of the) sequential ASM to be applied in the current state. This determines the runs of an rASM, with the difference that in each step a possibly different ASM may be used to determine the updates that mark the state changes. We also briefly sketch the plausibility theorem, though it addresses, as usual, the simpler proof direction.

Section 4 addresses the proof of the characterisation theorem, which is again accomplished by a sequence of lemmata, the key problem being that there is a theoretically unbounded number of different algorithms that nonetheless have to be handled uniformly. This section will be the technical key contribution of this paper. We conclude with a brief summary and outlook in Sect. 5.

2 Reflective Algorithms and Their Axiomatisation

The celebrated sequential ASM thesis needs only three simple, intuitive postulates to define sequential algorithms (for details see the in-depth discussion in [8]):

  • Sequential time: Each sequential computation proceeds by means of a transition function \(\tau :\mathcal {S} \rightarrow \mathcal {S}\), which maps a state \(S \in \mathcal {S}\) to its successor state \(\tau (S)\).

  • Abstract state: Each state \(S \in \mathcal {S}\) is a Tarski structure defined over a signature \(\varSigma \), i.e. a set of function symbols, by means of interpretation in a base set \(B_S\). States, initial states and transitions are closed under isomorphisms.

  • Bounded exploration: There is a fixed, finite set of ground terms W called bounded exploration witness such that whenever two states coincide on W, the update sets that determine the changes in the transition to the respective successor states are equal.
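To make the three postulates concrete, here is a minimal illustrative sketch in Python. The encoding is an assumption of ours, not the paper's formalism: a state is a dict from locations to values, and an algorithm is given by its update-set function.

```python
# Minimal illustrative model of the three postulates (assumed encoding):
# states are dicts from locations to values, and the algorithm is given by
# its update-set function delta.

def delta(state):
    """Update set of the toy algorithm 'if x = 0 then y := 1 else y := 2'."""
    if state["x"] == 0:
        return {("y", 1)}
    return {("y", 2)}

def step(state):
    """Sequential time: the successor state is obtained by firing delta."""
    successor = dict(state)
    for location, value in delta(state):
        successor[location] = value
    return successor

# Bounded exploration: W = {x, 0} is a witness -- two states that agree on
# the values of the terms in W yield the same update set, regardless of the
# values of any other locations.
s1 = {"x": 0, "y": 7, "z": "anything"}
s2 = {"x": 0, "y": -1, "z": 42}
assert delta(s1) == delta(s2)
```

The point of the final assertion is exactly the bounded exploration postulate: the two states differ everywhere except on the witness, yet yield equal update sets.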

The postulates imply that sequential algorithms can only check agreement between states on a fixed and finite set of ground terms (i.e., the bounded exploration witness in the bounded exploration postulate for sequential algorithms). Reflective algorithms, however, do not satisfy this principle, as the following simple Example 1 shows: the RSA in the example does not satisfy the bounded exploration postulate for sequential algorithms. Note that the question is NOT whether a different, non-reflective algorithm exists that solves the same problem, but whether such an algorithm would also be behaviourally equivalent.

Example 1

We describe an RSA that takes as input a search term t, a perfect binary tree T, i.e., a binary tree in which all interior nodes have two children and all leaves have the same depth, and a function \(\textit{label}\) which maps the set of nodes to an arbitrary set of labels. The algorithm traverses the tree in breadth-first order starting from its root r. If the term t appears as the label of some node in the input tree, then the algorithm updates \( result \) to the lowest level in the tree which contains a node labelled with t.

We assume that in every initial state \( level = 0\), \(\textit{currentNode} = r\) and \( result = undef \). Let \(\textit{cond}\) be the following function from the natural numbers to Boolean terms:

$$\begin{aligned} cond (0)&{:}{:}= label ( r ) = t \\ cond (1)&{:}{:}= label ( leftChild ( r )) = t \vee label ( rightChild ( r )) = t \\ cond (2)&{:}{:}= label ( leftChild ( leftChild ( r ))) = t \vee label ( rightChild ( leftChild ( r ))) = t \;\vee \\&\quad \; label ( leftChild ( rightChild ( r ))) = t \vee label ( rightChild ( rightChild ( r ))) = t\\ \vdots&\qquad \qquad \qquad \vdots \\ cond (n)&{:}{:}= \underbrace{ label ( leftChild (\ldots leftChild ( r )\ldots )) = t \vee \cdots \vee label ( rightChild (\ldots rightChild ( r )\ldots )) = t}_{2^n \text{ disjuncts}}. \end{aligned}$$

The algorithm works as follows:

figure a

Notice that during a run or computation of this algorithm not only the state of the algorithm evolves, but also the algorithm itself. In fact, the Boolean term in the if-statement in line 2 changes with every state transition until either the searched term t is found or all the levels of the input tree have been exhausted.

As we consider input trees of arbitrary size, this means that there is no fixed and finite bounded exploration witness for this algorithm, as we would need to either include all Boolean terms in the infinite set \(\{\textit{cond}(0), \textit{cond}(1), \ldots \}\), or include a different Boolean term \(\textit{cond}(level+1)\) depending on the interpretation of \(level+1\) in the current state.    \(\square \)
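Example 1 can be simulated in a few lines of Python. The encodings below are assumptions of ours (the paper leaves the representation abstract): the tree is a dict from each node to its pair of children, and the condition checked in each step is rebuilt as a string from the current level, so the "program text" visibly grows as the run proceeds.

```python
# Illustrative simulation of Example 1 (assumed encodings): the if-condition
# used in each step is regenerated from the current level, so the algorithm's
# own description changes from step to step.

def nodes_at_level(root, children, level):
    """All nodes reachable from root by exactly `level` child steps."""
    frontier = [root]
    for _ in range(level):
        frontier = [c for n in frontier for c in children[n]]
    return frontier

def cond_term(level):
    """The Boolean term cond(level): a disjunction of 2^level label tests."""
    paths = [""]
    for _ in range(level):
        paths = [p + d for p in paths for d in ("leftChild(", "rightChild(")]
    return " v ".join(f"label({p}r{')' * p.count('(')}) = t" for p in paths)

def run_rsa(root, children, label, t, depth):
    """Run the RSA; returns (result, list of per-step condition terms)."""
    level, result, trace = 0, None, []
    while result is None and level <= depth:
        trace.append(cond_term(level))        # the condition used in this step
        if any(label[n] == t for n in nodes_at_level(root, children, level)):
            result = level
        level += 1                            # the condition changes next step
    return result, trace
```

Running this on a depth-2 tree whose only occurrence of t sits at the leaves produces a trace of three pairwise distinct condition terms, matching the observation above that no fixed finite witness covers all input trees.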

2.1 Reflective Sequential Time Postulate

Clearly, when extending the notion of a sequential algorithm to include reflection, we think of pairs \((S_i,P_j)\) comprising a state \(S_i\) (as in the sequential thesis) and a sequential algorithm \(P_j\). Thus, we can consider transition functions \(\tau _j : (S_i,P_j) \mapsto (S_{i+1},P_j)\) that do not change the sequential algorithm \(P_j\). Likewise, we may consider transition functions \(\sigma _i : (S_i, P_j) \mapsto (S_i, P_{j+1})\) changing only the algorithm. In general, a transition of an RSA can then involve both: updates to the state and updates to the algorithm.

Postulate 1

(Reflective Sequential Time Postulate). Let \(\mathcal {S}_{P}\) and \(\mathcal {I}_{P}\) denote the set of states and initial states of a sequential algorithm P, respectively (\(\mathcal {I}_{P} \subseteq \mathcal {S}_{P}\)). An RSA \(\mathcal {A}\) consists of the following:

  • A non-empty set \(\mathcal {P}_{\mathcal {A}}\) of sequential algorithms;

  • An initial algorithm \(P_0 \in {\mathcal {P}}_{\mathcal {A}}\);

  • A non-empty set \(\mathcal {S}_{\mathcal {A}} = \bigcup _{P_i \in \mathcal {P}_{\mathcal {A}}} \mathcal {S}_{P_i}\) of states;

  • A non-empty set \(\mathcal {I}_{\mathcal {A}} = \mathcal {I}_{P_0} \subseteq \mathcal {S}_{\mathcal {A}}\) of initial states;

  • A set of extended-states \(\mathcal {E}_{\mathcal {A}} = \mathcal {S}_\mathcal {A} \times \mathcal {P}_{\mathcal {A}}\);

  • A one-step transformation function \(\tau _{\mathcal {A}} : \mathcal {E}_{\mathcal {A}} \rightarrow \mathcal {E}_{\mathcal {A}}\) such that \(\tau _{\mathcal {A}}((S, P)) = (S', P')\) only if \(\tau _{P}(S) = S'\) for the one-step transformation function \(\tau _{P}\) of the sequential algorithm P.

Then a run or computation of a reflective algorithm corresponds to a sequence of pairs \((S_0,P_0)\), \((S_1, P_1)\), \((S_2,P_2)\), \(\ldots \), where \(S_0\) is an initial state in \(\mathcal {I}_{\mathcal {A}}\), \(P_0\) is the initial algorithm, and \((S_{i+1}, P_{i+1}) = \tau _\mathcal{A}((S_i,P_i))\) holds for every \(i \ge 0\).

This leads to the following three fundamental questions, which we try to answer in the remaining part of this section:

Q1. How can we finitely represent a sequential algorithm P without having to adopt a concrete language for the specification of P?

Q2. How can we finitely characterise changes to the representation of the sequential algorithms in all states?

Q3. How can we define behavioural equivalence of RSAs independently from the representation of the sequential algorithms in each state?

2.2 Reflective Abstract Extended-State Postulate

Concerning Q1 we observe that according to the sequential ASM thesis it suffices to represent a sequential algorithm P by a set of pairs \((S,\varDelta (P,S))\) comprising a state S and the update set of P in that state. A consequence of the proof of the sequential ASM thesis in [8] is that the update sets \(\varDelta (P,S_i)\) (\(i=1,2\)) are equal if the states \(S_1\) and \(S_2\) are W-equivalent for a fixed bounded exploration witness W. We have \(S_1 \sim _W S_2\) iff \(E_{S_1} = E_{S_2}\), where \(E_S\) is the equivalence relation on W defined by \(E_S(t_1,t_2) \equiv val_S(t_1) = val_S(t_2)\) (see Footnote 1). It is therefore sufficient to replace the state S by a condition \(\varphi _{[S]}\), which evaluates to true on states that are W-equivalent to S. As there can only be finitely many W-equivalence classes, we obtain an abstract finite representation by a finite set of pairs \((\varphi _i,\varDelta _i)\) (\(i=1,\dots ,k\)).

Therefore, we conclude that we can capture the state-algorithm pairs in an RSA by an extension \(\varSigma _{ext}\) of the signature \(\varSigma \) using additional function symbols to represent the sequential algorithm, e.g. capturing in the state the signature of the algorithm as well as some syntactic description of it. For this, we must further permit new function symbols to be created, which can be done by exploiting the concept of “reserve”. We also conclude that the representation of algorithms in a state requires terms that are used by the algorithms to appear as values. So we have to allow terms over \(\varSigma \) (including the dormant function symbols in the reserve) to be at the same time values in an extended base set. In order to distinguish the interpretation of such terms t as values \( val _{(S,P)}(t)\) of the base set of an extended-state (S, P) (in which they evaluate to themselves) from their interpretation as terms over \(\varSigma \), we use \(\textit{raise}_{(S,P)}(t)\) to denote the latter case. We may further assume that \( raise \) results in a proper term over \(\varSigma \) not containing any extra-logical constructs that are needed in the representation of an algorithm, such as keywords.
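The finite representation by pairs \((\varphi _i,\varDelta _i)\) discussed above can be sketched directly. This is an illustrative encoding of ours: each W-equivalence class contributes one (guard, update set) pair, and the guard is a predicate on states.

```python
# Sketch of the finite representation of a sequential algorithm (assumed
# encoding): finitely many pairs (guard, update set), one per W-equivalence
# class of states.

# The toy algorithm 'if x = y then z := 1 else z := 0' with witness W = {x, y}
# has two W-equivalence classes: val(x) = val(y) and val(x) != val(y).
representation = [
    (lambda S: S["x"] == S["y"], {("z", 1)}),   # (phi_1, Delta_1)
    (lambda S: S["x"] != S["y"], {("z", 0)}),   # (phi_2, Delta_2)
]

def delta_from_representation(rep, state):
    """Recover the update set of the algorithm in a state from the pairs."""
    for guard, updates in rep:
        if guard(state):
            return updates
    return set()
```

Since the guards partition the states by W-equivalence, exactly one pair fires in every state, and the update set of the original algorithm is recovered.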

Postulate 2

(Reflective Abstract Extended-State Postulate). Let \(\mathcal A\) be an RSA. Fix a finite signature \(\varSigma _{ algo }\) of function symbols so that every algorithm \(P \in \mathcal {P}_{\mathcal {A}}\) can be finitely represented as some first-order structure of signature \(\varSigma _{ algo }\).

  • Every P in \(\mathcal{P}_\mathcal{A}\) is a first-order structure of signature \(\varSigma _{ algo }\) which encodes a finite representation of a sequential algorithm.

  • Every state S in \(\mathcal{S}_\mathcal{A}\) is a first-order structure of some signature \(\varSigma _S\) such that \(\varSigma _S \cap \varSigma _{ algo } = \emptyset \).

  • Every extended-state (S, P) in \(\mathcal {E}_\mathcal{A}\) is a first-order structure of (extended) signature \(\varSigma _{ ext } = \varSigma _S \cup \varSigma _{ algo }\).

  • The one-step transformation function \(\tau _{\mathcal {A}}\) does not change the base set of any extended-state of \(\mathcal A\).

  • The sets \(\mathcal {S}_\mathcal {A}\) and \(\mathcal {I}_\mathcal {A}\) are closed under isomorphisms.

  • If \((S_1,P_1), (S_2,P_2) \in \mathcal {E}_\mathcal{A}\), \(S_1\) and \(S_2\) are isomorphic, \(P_1\) and \(P_2\) are behaviourally equivalent sequential algorithms (see Footnote 2), and further \(\tau _{ \mathcal {A}}(S_1,P_1) = (S_1', P_1')\) and \(\tau _{ \mathcal {A}}(S_2,P_2) = (S_2', P_2')\), then \(S_1'\) and \(S_2'\) are also isomorphic and \(P_1'\) and \(P_2'\) are behaviourally equivalent.

As in the sequential ASM thesis, we need some minimal background of computation. Therefore, for every extended-state (S, P), we assume that S includes a binary function “\(=\)” for equality, nullary functions true, false and undef with \(\texttt {true} \ne \texttt {false}\) and \(\texttt {true} \ne \texttt {undef}\), the usual Boolean functions, the set of all ordered pairs, and an infinite reserve of elements. As explained before, we further assume that P includes, as values in its base set, the set of all possible ground terms (including the dormant function symbols in the reserve).

2.3 Reflective Bounded Exploration Postulate

Concerning Q2, the problem is that in general we must expect that each sequential algorithm \(P_i\) represented in an extended-state \((S_i, P_i)\) has its own bounded exploration witness \(W_i\). However, we know from the sequential ASM thesis that \(W_i\) is somehow contained in the finite representation of \(P_i\). For instance, the sequential ASM rule constructed in the proof of the sequential ASM thesis only contains subterms of terms in \(W_i\), and this holds analogously for any other representation of \(P_i\). This implies that the terms in \(W_i\) result by interpretation from terms that appear in the representation of any sequential algorithm. Thus, there must exist a finite set of terms W such that its interpretation in an extended state yields both values and terms, and the latter represent \(W_i\). We will continue to call W a bounded exploration witness. Consequently, the interpretation of W together with the interpretation of the terms it yields suffices to determine the update set in an extended state. This leads to our bounded exploration postulate for RSAs.

Definition 1

(Strong Coincidence). Let (S, P) and \((S',P')\) be extended-states of an RSA. Let \(W = W_{st} \cup W_{wt}\) be a set of ground terms. We say that (S, P) and \((S',P')\) strongly coincide over W iff the following holds:

  • For every \(t \in W_{st}\), \( val _{(S,P)}(t) = val _{(S',P')}(t)\).

  • For every \(t \in W_{wt}\),

    1. \( val _{(S, P)}(t) = val _{(S', P')}(t)\).

    2. \( val _{(S, P)}(\textit{raise}_{(S, P)}(t)) = val _{(S',P')}(\textit{raise}_{(S', P')}(t))\).
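Definition 1 can be rendered as a small predicate. The interfaces are assumptions of ours: `val(es, t)` gives the value of ground term t in extended-state es, and `raise_(es, t)` gives the term over the state signature that t denotes there.

```python
# A direct rendering of Definition 1 (assumed interfaces for val and raise_).

def strongly_coincide(es1, es2, W_st, W_wt, val, raise_):
    """True iff the two extended-states strongly coincide over W_st u W_wt."""
    if any(val(es1, t) != val(es2, t) for t in W_st):
        return False
    for t in W_wt:
        if val(es1, t) != val(es2, t):                            # condition 1
            return False
        if val(es1, raise_(es1, t)) != val(es2, raise_(es2, t)):  # condition 2
            return False
    return True

# Toy usage: extended-states as dicts from terms to values; the value of a
# weak term is itself a term, which raise_ exposes for re-interpretation.
val = lambda es, t: es[t]
raise_ = lambda es, t: es[t]
es1 = {"level": 1, "self": "cond1", "cond1": True}
es2 = {"level": 1, "self": "cond1", "cond1": True}
assert strongly_coincide(es1, es2, ["level"], ["self"], val, raise_)
```

Note how two extended-states can satisfy condition 1 on a weak term (same term stored in self) and still fail condition 2 (that term evaluates differently): this is exactly the extra discriminating power the reflective postulate needs.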

In our third and last postulate we use \(\varDelta (\mathcal{A}, (S,P))\) to denote the set of updates produced by an RSA \(\mathcal A\) in an extended-state (S, P).

Postulate 3

(Reflective Bounded Exploration Postulate). For every RSA \(\mathcal A\), there is a finite set \(W= W_{st} \cup W_{wt}\) of ground terms such that \(\varDelta (\mathcal{A}, (S,P)) = \varDelta (\mathcal{A}, (S',P'))\) whenever extended-states (S, P) and \((S', P')\) of \(\mathcal A\) strongly coincide on W.

If a set of ground terms \(W = W_{st} \cup W_{wt}\) satisfies the reflective bounded exploration postulate, we call it a reflective bounded exploration witness (R-witness for short) for \(\mathcal A\).

2.4 Reflective Sequential Algorithms and Behavioural Equivalence

Our three postulates give us the following machine independent definition of RSAs.

Definition 2

A reflective sequential algorithm (RSA) is an algorithm satisfying the Reflective Sequential Time, Reflective Abstract Extended-State and Reflective Bounded Exploration Postulates.

Example 2

Let us consider the algorithm in Example 1. The reflective sequential time and reflective abstract extended-state postulates are clearly satisfied by this algorithm. Let

$$\begin{aligned} W_{st} = \{\textit{currentNode}, \textit{undef}, \textit{result}, \textit{level}, \textit{level}+1, \textit{leftChild}(\textit{currentNode})\} \end{aligned}$$

and \(W_{wt} = \{cond(\textit{level})\}\). It is not difficult to see that if two extended-states strongly coincide on \(W = W_{st} \cup W_{wt}\), then the algorithm considered in this example produces the same set of updates in both extended-states. Thus, it also satisfies the reflective bounded exploration postulate, and consequently our definition of RSA.    \(\square \)

Next, we turn our attention to our final fundamental question Q3. The problem here is that the notion of behavioural equivalence of two sequential algorithms presupposes that both have the same signature, on the grounds of which we can request that the sets of runs be identical. This cannot be carried over to RSAs in a straightforward way. However, we should be able to obtain a bijection between runs \((S_0,P_0) \rightarrow (S_1,P_1) \rightarrow (S_2,P_2) \rightarrow \dots \) and \((S_0^\prime ,P_0^\prime ) \rightarrow (S_1^\prime ,P_1^\prime ) \rightarrow (S_2^\prime ,P_2^\prime ) \rightarrow \dots \) for two RSAs \(\mathcal {A}\) and \(\mathcal {A}^\prime \). Then we should clearly have that \(S_i = S_i^\prime \) holds for all i, and that \(P_i\) and \(P_i^\prime \) are behaviourally equivalent as non-reflective, sequential algorithms. This is not yet satisfactory, as \(P_i\) and \(P_i^\prime \) may still operate on different signatures.

We can argue that it is sufficient to consider the restrictions of \(P_i\) and \(P_i^\prime \) on the “standard” part of the signatures, i.e. the functions that do not take terms as values. This would allow the algorithms \(P_i\) and \(P_i^\prime \) to differ in their changes to themselves, but these differences have de facto no effect, as the updates yielded by these algorithms produce the same state transition and result in modified, yet behaviourally equivalent algorithms throughout the complete run. In other words, the possibly differing changes to the algorithm may extend the signature by functions or integrate fragments of “code” that are never used and thus have no effect on the updates.

Definition 3

(Behaviourally Equivalent RSAs). Let \(r_1 = (S_0,P_0),(S_1,P_1),\) \((S_2,P_2),\ldots \) and \(r_2 = (S_0',P_0')\), \((S_1',P_1'),\) \((S_2',P_2'),\ldots \) be runs of RSAs. We say that \(r_1\) and \(r_2\) are essentially equivalent runs if for every \(i \ge 0\) the following holds:

  1. \(S_i = S_i'\).

  2. The restrictions \(P_i|_{\varSigma }\) and \(P_i'|_{\varSigma }\) of, respectively, \(P_i\) and \(P_i'\) to the signature \(\varSigma \) of \(S_i\) and \(S_i'\) constitute behaviourally equivalent non-reflective sequential algorithms.

Two RSAs \(\mathcal{A}\) and \(\mathcal{A}'\) are behaviourally equivalent RSAs iff they have essentially equivalent runs. More precisely, iff there is a bijection \(\zeta \) between the runs of \(\mathcal A\) and the runs of \(\mathcal{A}'\) such that r and \(\zeta (r)\) are essentially equivalent for every run r.

3 Reflective Abstract State Machines

In this section we define a model of reflective ASMs (rASMs for short) and show that every rASM is an RSA in the precise sense of Definition 2. Given a signature \(\varSigma \), i.e. a set of function symbols, a sequential ASM rule over \(\varSigma \) is defined as follows [5]:

  • assignments. \(f(t_1,\dots ,t_{ar_f}) := t_0\) (with terms \(t_i\) built over \(\varSigma \)) is a rule.

  • branching. If \(r_+\) and \(r_-\) are rules and \(\varphi \) is a Boolean term, then also if \(\varphi \) then \(r_+\) else \(r_-\) endif is a rule.

  • bounded parallel composition. If \(r_1, \dots , r_n\) are rules, then also par \(r_1 \dots r_n\) endpar is a rule.

Each rule can be interpreted in a state, and doing so yields an update set. In general, a location is a pair \(\ell = (f,(a_1,\dots ,a_k))\) with a function symbol \(f \in \varSigma \) and a k-tuple (k being the arity of f) of values from the fixed base set B, and an update is a pair \((\ell ,a_0)\) with a value \(a_0 \in B\).
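The application of a consistent update set to a state can be sketched directly. The encoding is an assumption of ours: a location is a pair (f, args) and a state is a dict from locations to values.

```python
# Illustrative encoding of locations and updates: a location is (f, args),
# an update is (location, value), and a state maps locations to values.

def apply_updates(state, updates):
    """Return the successor state obtained by firing a consistent update set."""
    # Consistency: no location may receive two different values.
    assert len({loc for loc, _ in updates}) == len(updates), "clashing updates"
    successor = dict(state)
    for (f, args), value in updates:
        successor[(f, args)] = value
    return successor
```

All locations not mentioned in the update set keep their old values, which is exactly the standard ASM semantics of firing an update set.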

The rules of an rASM are also sequential ASM rules, and the interpretation of these rules in terms of update sets coincides with that of sequential ASMs as defined in [5]. The key difference is that rASMs work over extended-states, where each extended-state includes a finite representation of the rule that determines the update set produced by the machine in the current extended-state. In this way, we also allow an rASM to produce updates to its current rule.

Let (S, R) be an extended-state of an rASM \(\mathcal M\). We assume that the sub-structure S includes the following background of computation:

  • An infinite reserve of values and function names.

  • All ordered pairs of elements in the base set.

  • The usual Boolean functions and usual constants true, false and undef.

  • The “program” functions \( update \), \( par \), \( if \).

The “program” functions are static and interpreted as follows:

  • \( update (f(t_1, \ldots , t_n), t_0) = (t_0, t_1, \ldots , t_n)\)

  • \( par (t_1, t_2) = ( val _S(t_1), val _S(t_2))\)

  • \( if (t_1, t_2) = (t_1, val _S(t_2))\).

Notice that the following function induces a one-to-one correspondence between ASM rules and “program” terms, so that every ASM rule can be represented as a “program” term.

  • \(\textit{progToFunction}(f(t_1, \ldots , t_n) := t_0) = update (f(t_1, \ldots , t_n), t_0)\).

  • \(\textit{progToFunction}(\mathbf{if}\, \varphi \, \mathbf{then} \, R \, \mathbf{endif}) = if (\varphi ,\textit{progToFunction}(R))\).

  • \(\textit{progToFunction}(\mathbf{par}\, R_1\, R_2\, \mathbf{endpar}) = par (\textit{progToFunction}(R_1),\) progToFunction \((R_2))\).
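The three clauses of progToFunction can be sketched over an assumed tuple encoding of rules (the paper leaves the concrete encoding open): a rule is a nested tuple tagged "assign", "if" or "par", and a "program" term is a nested tuple headed by the function names update, if and par.

```python
# Sketch of progToFunction (assumed AST encoding): rules are nested tuples
# ("assign", f_term, t0), ("if", phi, R), ("par", R1, R2); "program" terms
# are nested tuples headed by update, if, par.

def prog_to_function(rule):
    """Map an ASM rule to the 'program' term that represents it."""
    tag = rule[0]
    if tag == "assign":                      # f(t1,...,tn) := t0
        _, f_term, t0 = rule
        return ("update", f_term, t0)
    if tag == "if":                          # if phi then R endif
        _, phi, body = rule
        return ("if", phi, prog_to_function(body))
    if tag == "par":                         # par R1 R2 endpar
        _, r1, r2 = rule
        return ("par", prog_to_function(r1), prog_to_function(r2))
    raise ValueError(f"unknown rule: {tag}")
```

Since each clause is invertible, the mapping is one-to-one, mirroring the correspondence between rules and "program" terms noted above.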

The sub-structure R of the extended-state (S, R) (i.e., the structure which contains the encoding of the “current” ASM rule) includes:

  • The set of all ground terms.

  • A distinguished location \( self \) interpreted as a “program” term (the current ASM rule).

  • A finite alphabet A (the alphabet of the ground terms) and all strings in \(A^*\).

  • A constant \(s_i\) for each symbol \(s_i \in A\) and a constant \(\lambda \) for the empty string.

  • The usual string manipulation functions, including the concatenation function “\(\cdot \)”.

  • A total injective function \( TermToString \) from the set of all terms of vocabulary \(\varSigma \) to \(A^*\).

  • A partial function \( StringToTerm \) defined as the inverse of \( TermToString \).

  • A function \( argumentNo (t,n)\) which returns the n-th argument of the term t.

  • A function \( insertArgument (s,n,t)\) which returns a copy of t with its n-th argument replaced by s.

Since in each extended-state (S, R) of an rASM the sub-structure R represents a uniquely determined sequential ASM rule, we usually refer to it as a rule rather than as a structure, meaning the rule corresponding to the “program” term in the location \( self \).

Definition 4

An rASM \(\mathcal {M}\) is formed by:

  • A non-empty set \(\mathcal {R}_{\mathcal {M}}\) of sequential ASM rules (represented as first-order structures).

  • An initial rule \(R_0 \in {\mathcal {R}}_{\mathcal {M}}\).

  • A non-empty set \(\mathcal {S}_{\mathcal {M}}\) of states (i.e., first-order structures) closed under isomorphisms.

  • A non-empty set \(\mathcal {I}_{\mathcal {M}} \subseteq \mathcal {S}_{\mathcal {M}}\) of initial states, also closed under isomorphisms.

  • A set of extended-states \(\mathcal {E}_{\mathcal {M}} = \mathcal {S}_\mathcal {M} \times \mathcal {R}_{\mathcal {M}}\).

  • A transition function \(\tau _{\mathcal {M}}\) over \(\mathcal {E}_{\mathcal {M}}\) such that \(\tau _{\mathcal {M}}((S,R)) = (S,R)+\varDelta (R, (S,R))\) for every \((S,R) \in \mathcal {E}_{\mathcal {M}}\), where \(R = val_S(\textit{self})\) is the closed ASM rule in location self in the extended-state (S, R), \(\varDelta (R,(S,R))\) is the update set yielded by this rule in S, and \((S,R)+\varDelta (R,(S,R))\) denotes the extended-state obtained by applying the update set \(\varDelta (R,(S,R))\) to (S, R).

A run or computation of a reflective sequential ASM is a finite or infinite sequence of extended states \((S_0, R_0), (S_1,R_1),(S_2,R_2), \ldots \), where \(S_0\) is a state in \(\mathcal {I}_{\mathcal {M}}\), \(R_0\) is the initial rule, and \((S_{i+1},R_{i+1})=\tau _{\mathcal {M}}((S_i,R_i))\) holds for every \(i \ge 0\).

Notice that for every \(R \in \mathcal {R}_{\mathcal {M}}\), the functions in R allow us to examine and modify the “program” term stored in self. For instance, assume that the current value stored in self is the term \( update (f(t), s)\) and that we want to change it to \( update (f(t), s+1)\). Assuming the alphabet A includes the symbols “\(+\)” and “1”, the following sequential ASM rule updates \( self \) to the desired “program” term: \( self := insertArgument ( StringToTerm ( TermToString (argumentNo( self ,2)) \cdot + \cdot 1), 2, self )\).
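The background functions used in this rule can be sketched over a simple term encoding. The representations are assumptions of ours (the paper keeps them abstract): terms are nested tuples (name, arg1, ..., argn), leaf terms are plain strings, and StringToTerm is the identity on leaves, so the concatenated string can be re-inserted directly.

```python
# Sketch of the term-manipulation background (assumed encodings): terms are
# nested tuples; leaf terms are strings, so StringToTerm is trivial here.

def term_to_string(t):
    """TermToString over the tuple encoding, using a LISP-like surface syntax."""
    if isinstance(t, tuple):
        return t[0] + "(" + ",".join(term_to_string(a) for a in t[1:]) + ")"
    return t

def argument_no(t, n):
    """The n-th argument (1-based) of term t."""
    return t[n]

def insert_argument(s, n, t):
    """A copy of t with its n-th argument replaced by s."""
    return t[:n] + (s,) + t[n + 1:]

# The self-update discussed above: turn update(f(t), s) into update(f(t), s+1)
# by rewriting the second argument at the string level.
self_term = ("update", ("f", "t"), "s")
new_second = term_to_string(argument_no(self_term, 2)) + "+" + "1"
self_term = insert_argument(new_second, 2, self_term)
assert self_term == ("update", ("f", "t"), "s+1")
```

This illustrates why the string machinery suffices in principle: every edit of the current rule reduces to extracting an argument, manipulating its string image, and re-inserting the result.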

Of course, it is quite cumbersome to update the rule in self using only the small set of background functions provided here. Nevertheless, this is enough to show that our approach works. In practice, we can use more convenient representations, for instance complex values such as syntax trees, as well as more sophisticated functions to inspect and modify the ASM rules. Note that the kind of reflection used by the RRM (reflective relational machine) is somewhat different from the one we propose in this work. We could call it “partial reflection”, since the sequence of actions performed in each transition, except for the queries to the relational store, never changes. We could then think of a different definition of the reflective ASM to represent partial reflection, where we only add to the sequential ASM a rule \(\mathbf{eval} \; t\), which takes a “program” term t as its argument, interprets it as a sequential ASM rule (other than \(\mathbf{eval}\)), and then executes it.

The next result shows the plausibility of our reflective ASM thesis.

Theorem 1

Every reflective ASM \(\mathcal {M}\) is an RSA.

Proof

(Sketch). We need to show that \(\mathcal M\) satisfies the reflective sequential time, reflective abstract extended-state and reflective bounded exploration postulates. The first two postulates are already built into the definition of an rASM, and the preservation of isomorphisms is straightforward.

In order to show that \(\mathcal {M}\) also satisfies the reflective bounded exploration postulate, we let \(W_{st} = \emptyset \) and \(W_{wt} =\{ self \}\). We show next that if two extended-states (S, R) and \((S',R')\) of \(\mathcal {M}\) strongly coincide over \(W_{wt}\), then \(\varDelta (R,(S,R))=\varDelta (R',(S',R'))\). Since the extended-states strongly coincide over \(W_{wt}\), we have that:

  1. \( val _{(S, R)}( self ) = val _{(S', R')}( self )\).

  2. \( val _{(S, R)}(\textit{raise}_{(S, R)}( self )) = val _{(S',R')}(\textit{raise}_{(S', R')}( self ))\).

Let \(W_{r} = \{r\}\) and \(W_{r'} = \{r'\}\), where r and \(r'\) are the tuples of terms that result from the evaluation of \( self \) in (S, R) and \((S',R')\), respectively. From our definition of the “program” functions and the proof of the plausibility theorem of the sequential ASM thesis, we get that \(W_r\) and \(W_{r'}\) constitute, respectively, bounded exploration witnesses for the sequential ASM rules R and \(R'\). In turn, by (1), we further have that \(W_r = W_{r'}\). Finally, by (2) we get that (S, R) and \((S', R')\) coincide on \(W_r = W_{r'}\). Hence, by Gurevich’s bounded exploration postulate for sequential algorithms, we get that \(\varDelta (R,(S,R))=\varDelta (R',(S',R'))\). The plausibility theorem for RSAs then follows.     \(\square \)

4 The Reflective Sequential ASM Thesis

We start by analysing an arbitrary RSA \(\mathcal A\). Let \(W_{st} \cup W_{wt}\) be a bounded exploration witness for \(\mathcal A\) and let (S, P) be an extended-state of \(\mathcal A\). We define the set of terms generated by \(W_{wt}\) in (S, P) as follows: \(G^{(S,P)}_{W_{wt}} = \{ raise _{(S,P)}(t) \mid t \in W_{wt}\}\). We assume that \(W_{st} \cup G^{(S,P)}_{W_{wt}}\) is closed under sub-terms and call it the set of critical terms of (S, P).

The following lemma can be proven using the same argument as in the proof of the analogous Lemma 6.2 in the sequential ASM thesis [8].

Lemma 1

If \((f, (v_1, \ldots , v_n), v_0)\) is an update in \(\varDelta (\mathcal{A}, (S, P))\), then \(v_0, v_1, \ldots , v_n\) are values of critical terms of (S, P).

Lemma 1 implies that every update in \(\varDelta (\mathcal{A}, (S, P))\) can be programmed by an update rule of the form \(f(t_1, \ldots , t_n) := t_0\), where the terms \(t_0, t_1, \ldots , t_n\) are critical terms of (S, P). To program the whole of \(\varDelta (\mathcal{A}, (S, P))\), we define a sequential ASM rule \(r_{(S,P)}\) which is the parallel combination (by means of \(\mathbf {par}\) rules) of all update rules in the following finite set:

$$\begin{aligned} \{f(t_1, \ldots , t_n) :=&t_0 \mid t_0, t_1, \ldots , t_n \in W_{st} \cup G^{(S, P)}_{W_{wt}} \text { and }\\&(f, ( val _{(S,P)}(t_1), \ldots , val _{(S,P)}(t_n)), val _{(S,P)}(t_0)) \in \varDelta (\mathcal{A}, (S,P))\}. \end{aligned}$$

As \(W_{st} \cup G^{(S, P)}_{W_{wt}}\) is finite and the signature of (S, P) is also finite, \(r_{(S,P)}\) is well defined.
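The construction of \(r_{(S,P)}\) can be sketched computationally, reusing the tuple encodings assumed in the earlier sketches: by Lemma 1 every value occurring in an update is the value of some critical term, so for each update we can pick matching critical terms and emit one assignment rule, combining them with par.

```python
# Sketch of the construction of r_(S,P) (assumed encodings): one assignment
# rule per update in delta, built from critical terms whose values match the
# update, combined under a single par.

def build_rule(critical_terms, val, delta):
    """Assignment rules f(t1,...,tn) := t0 covering every update in delta."""
    assignments = []
    for (f, args), v0 in sorted(delta):
        # By (the analogue of) Lemma 1, each argument value and the new value
        # are values of critical terms, so these searches succeed.
        arg_terms = tuple(next(t for t in critical_terms if val(t) == a)
                          for a in args)
        t0 = next(t for t in critical_terms if val(t) == v0)
        assignments.append(("assign", (f,) + arg_terms, t0))
    return ("par",) + tuple(assignments)
```

Because both the set of critical terms and the update set are finite, the search always terminates and the resulting rule is finite, mirroring the well-definedness argument above.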

Corollary 1

For every \((S, P) \in \mathcal{E}_\mathcal{A}\) there is a rule \(r_{(S,P)}\) such that:

  1. \(r_{(S,P)}\) uses only critical terms, i.e., terms in \(W_{st} \cup G^{(S, P)}_{W_{wt}}\).

  2. \(\varDelta (r_{(S,P)}, (S, P)) = \varDelta (\mathcal{A}, (S, P))\).

From now on, \(r_{(S,P)}\) is as in the previous corollary.

Lemma 2

If two extended-states \((S,P)\) and \((S', P')\) of \(\mathcal A\) strongly coincide over \(W_{st} \cup W_{wt}\), then \(\varDelta (r_{(S,P)}, (S',P')) = \varDelta (\mathcal{A}, (S', P'))\).

Proof

As \((S,P)\) and \((S', P')\) strongly coincide over \(W_{st} \cup W_{wt}\), we have that \(G^{(S, P)}_{W_{wt}} = G^{(S', P')}_{W_{wt}}\) and that, for every \(t \in W_{st} \cup G^{(S, P)}_{W_{wt}}\), \( val _{(S,P)}(t) = val _{(S', P')}(t)\). As \(r_{(S,P)}\) only involves critical terms of \((S,P)\), i.e., terms in \(W_{st} \cup G^{(S, P)}_{W_{wt}}\), we have that \(\varDelta (r_{(S,P)}, (S, P)) = \varDelta (r_{(S,P)}, (S', P'))\). By Corollary 1, \(\varDelta (r_{(S,P)}, (S,P)) = \varDelta (\mathcal{A}, (S, P))\). Finally, we obtain \(\varDelta (\mathcal{A}, (S, P)) = \varDelta (\mathcal{A}, (S', P'))\) by the reflective bounded exploration postulate.     \(\square \)

Let \((S,P)\) and \((S',P')\) be extended-states of \(\mathcal A\). We say that \((S', P')\) is relative \(W[(S,P)]\)-equivalent to \((S,P)\) if \(G^{(S', P')}_{W_{wt}} = G^{(S,P)}_{W_{wt}}\), and that they coincide over \(W[(S,P)]\) (in the sense of the sequential ASM thesis [8]) if \(\textit{val}_{(S, P)}(t) = \textit{val}_{(S',P')}(t)\) for all \(t \in W_{st} \cup G^{(S,P)}_{W_{wt}}\) (i.e., for all critical terms of \((S,P)\)).
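Under the simplifying assumption that an extended-state is represented by its generated term set \(G^{(S,P)}_{W_{wt}}\) together with a valuation of the critical terms, these two notions can be sketched as predicates. The function names below are hypothetical, not taken from the paper:

```python
def relative_equivalent(G1, G2):
    """Sketch: (S',P') is relative W[(S,P)]-equivalent to (S,P) iff
    W_wt generates the same set of terms in both extended-states."""
    return set(G1) == set(G2)

def coincide(critical_terms, val1, val2):
    """Sketch: two extended-states coincide over W[(S,P)] iff every
    critical term takes the same value in both."""
    return all(val1[t] == val2[t] for t in critical_terms)
```

Note that relative equivalence compares the generated term sets only, while coincidence additionally compares values, and only over the critical terms of \((S,P)\).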

The following is a straightforward corollary of Lemma 2, obtained by restricting the sets of updates to the locations in the “standard” sub-structure of the extended-states. Here \(\varDelta _{st}\) denotes the subset of updates whose function names do not appear in \(\varSigma _{algo}\).

Corollary 2

If two extended-states \((S,P)\) and \((S',P')\) are relative \(W[(S,P)]\)-equivalent and coincide over \(W[(S,P)]\), then we have \(\varDelta _{st}(r_{(S,P)}, (S',P')) = \varDelta _{st}(\mathcal{A}, (S', P'))\).

Consider the class \(\mathcal{C}[(S,P)]\) of relative \(W[(S,P)]\)-equivalent states of \(\mathcal A\). Two states \((S_1, P_1)\) and \((S_2, P_2)\) of \(\mathcal A\) are W-equivalent relative to \(\mathcal{C}[(S,P)]\) iff \((S_1, P_1), (S_2, P_2) \in \mathcal{C}[(S,P)]\) and \(E_{(S_1,P_1)} = E_{(S_2,P_2)}\), where (for \(i = 1, 2\)) \(E_{(S_i,P_i)}(t_1, t_2) \equiv val _{(S_i, P_i)}(t_1) = val _{(S_i, P_i)}(t_2)\) is an equivalence relation on the set of critical terms of \((S,P)\).
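Since \(E_{(S_i,P_i)}\) identifies exactly those critical terms with equal values, W-equivalence relative to \(\mathcal{C}[(S,P)]\) amounts to the two states inducing the same partition of the critical terms. A minimal illustrative sketch, with valuations modelled as dictionaries (our own encoding, not the paper's):

```python
def value_partition(critical_terms, val):
    """Sketch of E_(S,P): t1 ~ t2 iff they have the same value in the
    state; the relation is represented as a partition into blocks."""
    blocks = {}
    for t in critical_terms:
        blocks.setdefault(val[t], set()).add(t)
    return frozenset(frozenset(b) for b in blocks.values())

def w_equivalent(critical_terms, val1, val2):
    """Two states of the class are W-equivalent iff they induce the
    same equivalence relation, i.e. the same partition."""
    return value_partition(critical_terms, val1) == value_partition(critical_terms, val2)
```

Note that the actual values may differ between the two states; only the pattern of equalities among critical terms must agree.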

Lemma 3

If two extended-states \((S_1, P_1)\) and \((S_2,P_2)\) of \(\mathcal A\) are W-equivalent relative to \(\mathcal{C}[(S,P)]\), then \(\varDelta _{st}(r_{(S_1,P_1)}, (S_2, P_2)) = \varDelta _{st}(\mathcal{A}, (S_2, P_2))\).

Proof

(Sketch). Note that if we assume \(\varDelta _{st}(r_{(S_1,P_1)}, (S_3, P_3)) = \varDelta _{st}(\mathcal{A},(S_3, P_3))\) for a state \((S_3, P_3) \in \mathcal{C}[(S,P)]\) with \(S_3\) isomorphic to \(S_2\), then we get that \(\varDelta _{st}(r_{(S_1,P_1)}, (S_2, P_2)) = \varDelta _{st}(\mathcal{A}, (S_2, P_2))\). This fact is analogous to Lemma 6.8 of the sequential ASM thesis [8] and can be proven in the same way. Thus, we just need to find an extended-state \((S_3, P_3) \in \mathcal{C}[(S, P)]\) with \(S_3\) isomorphic to \(S_2\) and such that \(\varDelta _{st}(r_{(S_1,P_1)}, (S_3, P_3)) = \varDelta _{st}(\mathcal{A},(S_3, P_3))\).

Assume w.l.o.g. that the base sets of \(S_1\) and \(S_2\) are disjoint. Let \(S_3\) be the structure isomorphic to \(S_2\) which is obtained by replacing \( val _{S_2}(t)\) with \( val _{S_1}(t)\) for all critical terms \(t\) of \((S,P)\). This is well defined because \((S_1, P_1)\) and \((S_2, P_2)\) are W-equivalent relative to \(\mathcal{C}[(S,P)]\). Take \(P_3 = P_2\); then \((S_3, P_3) \in \mathcal{C}[(S,P)]\). By the reflective abstract state postulate, \((S_3, P_3)\) is an extended-state of \(\mathcal A\). Since \((S_1, P_1)\) and \((S_3, P_3)\) coincide over the set of critical terms of \((S,P)\), Corollary 2 gives \(\varDelta _{st}(r_{(S_1,P_1)}, (S_3,P_3)) = \varDelta _{st}(\mathcal{A}, (S_3, P_3))\).    \(\square \)

Let \(\varphi _{(S,P)}\) be the Boolean term that characterises the equivalence relation \(E_{(S,P)}\) on the critical terms of \((S,P)\), i.e., the conjunction of all equations \(t_1 = t_2\) with \(E_{(S,P)}(t_1, t_2)\) and of all negated equations \(\lnot (t_1 = t_2)\) with \(\lnot E_{(S,P)}(t_1, t_2)\), for critical terms \(t_1, t_2\) of \((S,P)\).

As the set of critical terms of an extended-state \((S,P)\) (i.e., \(W_{st} \cup G^{(S,P)}_{W_{wt}}\)) is finite, there is a finite set \(\{(S_1, P_1), \ldots , (S_n, P_n)\}\) of states in \(\mathcal{C}[(S,P)]\) (the class of relative \(W[(S,P)]\)-equivalent states of \(\mathcal A\)) such that every state in \(\mathcal{C}[(S,P)]\) is W-equivalent relative to \(\mathcal{C}[(S,P)]\) to one of the states \((S_i, P_i)\). Construct the rule \(r_{[(S,P)]}\)

$$\begin{aligned} \mathbf{par } \; \mathbf{if } \; \varphi _{(S_1,P_1)} \; \mathbf{then } \; r_{(S_1,P_1)} \; \mathbf{endif } \; \ldots \; \mathbf{if } \; \varphi _{(S_n,P_n)} \; \mathbf{then } \; r_{(S_n,P_n)} \; \mathbf{endif } \; \mathbf{endpar }. \end{aligned}$$

Then the following result follows from the previous lemmata.
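The effect of such a guarded \(\mathbf{par}\) rule can be sketched operationally: evaluate each guard in the current state and collect the updates of every branch that fires. In the illustrative Python sketch below, guards and rules are modelled as functions from states to Booleans and update sets respectively (an encoding of ours, not the paper's); since the guards \(\varphi _{(S_i,P_i)}\) describe disjoint W-equivalence classes, at most one branch contributes in any given state.

```python
def guarded_par(cases, state):
    """Sketch of 'par if phi_1 then r_1 ... if phi_n then r_n endpar':
    take the union of the update sets of all branches whose guard
    holds in the given state."""
    updates = set()
    for guard, rule in cases:
        if guard(state):
            updates |= rule(state)
    return updates
```

If no guard holds, the rule yields the empty update set; with mutually exclusive guards there is never a clash between branches.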

Lemma 4

\(\varDelta _{st}(r_{[(S,P)]}, (S_i,P_i)) = \varDelta _{st}(\mathcal{A}, (S_i,P_i))\) for every extended-state \((S_i,P_i) \in \mathcal{C}[(S,P)]\), i.e., for every extended-state that is relative \(W[(S,P)]\)-equivalent to \((S,P)\).

Thus, for every class \(\mathcal{C}[(S_i,P_i)]\) of extended-states of \(\mathcal A\), we have a corresponding rule \(r_{[(S_i,P_i)]}\) such that Lemma 4 holds. Now we need to extend this result to all extended-states which belong to some run of \(\mathcal A\), not just to the extended-states in the class \(\mathcal{C}[(S_i,P_i)]\). This is where the power of reflection becomes apparent.

Fix an arbitrary initial extended-state \((S,P)\) of \(\mathcal A\). We define \(\mathcal M\) as the reflective ASM machine with \(\mathcal{E}_\mathcal{M} = \{(S_i,P_i') \mid (S_i,P_i) \in \mathcal{E}_\mathcal{A} \text { and } P_i'\) is the “self” representation of \(r_{[(S_i,P_i)]}\}\) and \(\mathcal{I}_\mathcal{M} = \{(S_i,P') \mid S_i \in \mathcal{I}_\mathcal{A} \text { and } P'\) is the “self” representation of \(r_{[(S,P)]}\}\).

Lemma 5

For every run of \(\mathcal A\) of the form \((S_0, P_0), (S_1, P_1), \ldots \) and corresponding run of \(\mathcal M\) of the form \((S_0', P_0'), (S_1', P_1'), \ldots \) with \(S_0 = S_0'\), it holds that \(\varDelta _{st}(r_{[(S_i,P_i')]}, (S_i',P_i')) = \varDelta _{st}(\mathcal{A}, (S_i,P_i))\).

Proof

(Sketch). We proceed by induction on an arbitrary run of \(\mathcal A\). By the reflective sequential time postulate, every initial extended-state \((S_0, P_0)\) of every run of \(\mathcal A\) is relative \(W[(S,P)]\)-equivalent to the initial extended-state \((S,P)\) used in the construction of \(\mathcal M\). Thus, we get from Lemma 4 that \(\varDelta _{st}(r_{[(S_0,P_0')]}, (S_0,P_0)) = \varDelta _{st}(\mathcal{A}, (S_0,P_0))\). Given the restriction to “standard” updates, which do not involve updates to the algorithm, we have

$$\begin{aligned} \varDelta _{st}(r_{[(S_0,P_0')]}, (S_0',P_0')) = \varDelta _{st}(\mathcal{A}, (S_0,P_0)) . \end{aligned}$$

For the inductive step, note that as \(P_i\) is a sequential algorithm, it is captured by a sequential ASM \(M_{i}\). Moreover, due to Gurevich’s proof of the sequential ASM thesis [8], its rule has the form

$$\begin{aligned} \mathbf{par } \; \mathbf{if } \; \psi _1 \; \mathbf{then } \; r_1 \; \mathbf{endif } \; \ldots \; \mathbf{if } \; \psi _k \; \mathbf{then } \; r_k \; \mathbf{endif } \; \mathbf{endpar }, \end{aligned}$$

where each \(r_j\) is a par block of assignment rules. All \(\psi _j\) and \(r_j\) involve only critical terms defined by a bounded exploration witness of \(P_{i}\), namely \(W_{st} \cup G^{(S_{i},P_{i})}_{W_{wt}}\).

By construction of \(r_{[(S_i',P_i')]}\), we have that \(W_{st} \cup G^{(S_{i},P_{i})}_{W_{wt}}\) is a bounded exploration witness of the “self” representation \(P_i'\) of \(r_{[(S_i',P_i')]}\). In turn, by construction of \(\mathcal M\) it can be shown that \(W_{st} \cup W_{wt}\) is a bounded exploration witness for \(\mathcal M\). Thus, the updates in \(\varDelta _{st}(r_{[(S_i',P_i')]}, (S_i',P_i'))\) transform the “self” representation \(P'_i\) of \(r_{[(S_i',P_i')]}\) into the “self” representation \(P'_{i+1}\) of \(r_{[(S_i',P_{i+1}')]}\). Since from the inductive hypothesis it can be shown that \(S'_i = S_i\), we get that \(\varDelta _{st}(r_{[(S_{i+1},P_{i+1}')]}, (S_{i+1}',P_{i+1}')) = \varDelta _{st}(\mathcal{A}, (S_{i+1},P_{i+1}))\).    \(\square \)

Using the previous key lemma, it is not difficult to show that every run of \(\mathcal A\) of the form \((S_0, P_0), (S_1, P_1), \ldots \) is essentially equivalent to the corresponding run of \(\mathcal M\) of the form \((S_0', P_0'), (S_1', P_1'), \ldots \) with \(S_0 = S_0'\), i.e., that \(S_i = S_i'\) and that the restrictions of \(P_i\) and \(P_i'\) to the signature \(\varSigma \) of \(S_i\) and \(S_i'\) result in non-reflective algorithms which are behaviourally equivalent. This implies our main result.

Theorem 2

For every RSA \(\mathcal A\) there is a behaviourally equivalent rASM \(\mathcal M\).

5 Conclusion

In this paper we investigated a behavioural theory for reflective sequential algorithms (RSAs), following our conjecture in [7]. Grounded in related work on behavioural theories for sequential algorithms [8], (synchronous) parallel algorithms [6], non-deterministic algorithms [10] and concurrent algorithms [4], we developed a set of abstract postulates characterising RSAs, extended ASMs to reflective Abstract State Machines (rASMs), and formally sketched the proof that any RSA as stipulated by the postulates can be step-by-step simulated by an rASM. The key contributions are the postulates themselves, as they provide a language-independent definition of RSAs, and the characterisation proof.

With this behavioural theory we lay the foundations for the rigorous development of reflective algorithms and thus self-adaptive systems. However, several open tasks still have to be addressed before a general behavioural theory of evolving concurrent systems (ECS) is reached. The behavioural theory developed in this paper needs to be combined with the one for parallel algorithms, thus yielding a behavioural theory of reflective parallel algorithms, and with the theory of concurrency, thus yielding a behavioural theory of concurrent reflective systems, i.e. ECS. In view of the similarity of arguments in the separate behavioural theories, this integration appears plausible, but it nonetheless constitutes a mathematically challenging problem. Furthermore, for rigorous development, extensions to the refinement method for ASMs [3] and to the logic used for verification [14] will be necessary. These will be addressed in follow-on research.