1 Introduction

The title above is inspired by the title of a paper I co-authored with Paul Voda more than 15 years ago: Programming languages capturing complexity classes [10]. In that paper we related the computational power of fragments of programming languages to complexity classes defined by imposing time and space constraints on Turing machines. Around that time, I authored and co-authored a number of related papers, e.g. [8, 9, 11], all of which were clearly inspired by work in implicit computational complexity theory from the 1990s, e.g., Bellantoni and Cook [2], Leivant [12, 13] and, particularly, Jones [5, 6].

Complexity classes like \(\text {P}\), \(\text {FP}\), \(\text {NP}\), \(\text {LOGSPACE}\), \(\text {EXPTIME}\), and so on, are defined by imposing explicit resource bounds on a particular machine model, namely the Turing machine. E.g., \(\text {FP}\) is defined as the class of functions computable in polynomial time on a deterministic Turing machine. The definition puts constraints on the resources available to the Turing machines, but no constraints on the algorithms available to them. A Turing machine may compute a function in the class by any imaginable algorithm as long as it works in polynomial time. Implicit computational complexity theory studies classes of functions (problems, languages) that are defined without imposing explicit resource bounds on machine models, but rather by imposing linguistic constraints on the way algorithms can be formulated. When we explicitly restrict our language for formulating algorithms, that is, our programming language, then we may implicitly restrict the computational resources needed to execute algorithms. If we manage to find a restricted programming language that captures a complexity class, then we have a so-called implicit characterization. A seminal example is Bellantoni and Cook’s [2] characterization of \(\text {FP}\). They give a functional programming language (which they call a function algebra). This language consists of a few initial functions and two definition schemes (safe composition and safe primitive recursion) which allow us to define new functions. These schemes put rather severe syntactical restrictions on how we can define functions, but they do not refer to polynomially bounded Turing machines or any other kind of resource-bounded computing machinery. It is not easy to write programs when we have to stick to these schemes (even experienced programmers might find it hard to multiply two numbers), but, be that as it may, this is a programming language that yields an implicit characterization of a complexity class. It turns out that a function can be computed by a program written in Bellantoni and Cook’s language if and only if it belongs to the complexity class \(\text {FP}\).
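
To give a rough flavor of such schemes (our rendering; see [2] for the precise definitions): every argument of a function is classified as either normal (written before a semicolon) or safe (written after it), and safe primitive recursion on notation has the form

$$\begin{aligned} f(0, \bar{x}; \bar{a}) = g(\bar{x}; \bar{a}) \qquad f(S_i(y), \bar{x}; \bar{a}) = h_i(y, \bar{x}; \bar{a}, f(y, \bar{x}; \bar{a})) \end{aligned}$$

where the recursive call may only occupy a safe argument position. Recursion can only be driven by normal arguments, and safe values can never flow into normal positions, so a previously computed value can never control a new recursion; this is, roughly, what keeps the definable functions within \(\text {FP}\).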

There is an obvious link between implicit computational complexity and reversible computing. A programming language based on natural reversible operations will impose restrictions on the way algorithms can be formulated, and thus, also restrictions on the computational resources needed to execute algorithms. Hence, the following question knocks at the door: Will it be possible to find reversible programming languages that capture some of the standard complexity classes? The answer turns out to be YES. We will present a reversible language that captures, or if you like, gives an implicit characterization of, the (maybe not very well-known) complexity class \(\text {ETIME}\). A few small modifications of this language yield a reversible language that captures the very well-known complexity class \(\text {P}\).

Our languages are based on a couple of naturally reversible operations. To increase, or decrease, a natural number by 1 modulo a base b is such an operation: \(\dots 0,1,2, \ldots , b-2, b-1, 0, 1, 2 \ldots \). The successor of \(b-1\) becomes 0, and then \(b-1\) becomes the predecessor of 0. Thus, “increase” and “decrease” are the reverse of each other. To move an element from the top of one stack to the top of another stack is another such operation as we can simply move the element back to the stack it came from.
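
As a tiny sanity check (our own illustration, in Python), incrementing and decrementing modulo a base b are mutually inverse permutations of \(\{0,\ldots ,b-1\}\):

    b = 5                                  # any base b >= 2 behaves the same way
    inc = lambda x: (x + 1) % b            # the effect of "increase" on a digit
    dec = lambda x: (x - 1) % b            # the effect of "decrease" on a digit
    assert all(dec(inc(x)) == x and inc(dec(x)) == x for x in range(b))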

This paper addresses students and researchers interested in programming languages, reversible computation and computer science in general; they will not necessarily be experts in computability or complexity theory. We will give priority to readability over technical accuracy, but still this is a fairly technical paper, and we will assume that the reader is faintly acquainted with Turing machines and basic complexity theory (standard textbooks are Arora and Barak [1], Jones [7] and Sipser [16]).

Implicit computational complexity theory is definitely a broader and richer research area than our short discussion above may indicate. More on the subject can be found in Dal Lago [3].

2 Reversible Bottomless Stack (RBS) Programs

An infinite sequence of natural numbers \(s_1, s_2, s_3, \ldots \) is a bottomless stack if there exists k such that \(s_i=0\) for all \(i>k\). We use \(\langle x_1,\ldots , x_n, 0^*]\) to denote the bottomless stack \(s_1, s_2, s_3, \ldots \) where \(s_i= x_i\) when \(i\le n\), and \(s_i=0\) when \(i>n\). We say that \(x_1\) is the top element of \(\langle x_1,\ldots , x_n, 0^*]\). Observe that 0 is the top element of the stack \(\langle 0^*]\). Furthermore, observe that \(\langle 0,0^*]\) is the same stack as \(\langle 0^*]\) (since \(\langle 0,0^*]\) and \(\langle 0^*]\) denote the same sequence of natural numbers). We will refer to \(\langle 0^*]\) as the zero stack.

The syntax of the imperative programming language RBS is given in Fig. 1. Any element in the syntactic category Command will be called a program, and we will use the word command and the word program interchangeably throughout the paper. We will now explain the semantics of RBS.

Fig. 1. The syntax of the language RBS. The variable X in the loop command is not allowed to occur in the loop’s body.
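
Since the figure is not reproduced here, it may help to have the grammar spelled out. Judging from the commands explained below, it should be essentially the following (our reconstruction):

$$\begin{aligned} \textit{Command} \;\; {:}{:}{=}\;\; {\texttt {X}}_i^+ \;\mid \; {\texttt {X}}_i^- \;\mid \; ({\texttt {X}}_i \, \texttt {to}\, {\texttt {X}}_j) \;\mid \; \textit{Command}\, \texttt {;}\,\textit{Command} \;\mid \; \texttt {loop} \; {\texttt {X}}_i \; \texttt {\{} \, \textit{Command} \, \texttt {\}} \end{aligned}$$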

An RBS program manipulates bottomless stacks, and each program variable holds such a stack. The input to a program is a single natural number m. When the execution of the program starts, the input m will be stored at the top of the stack held by \({\texttt {X}}_1\), that is, we have \({\texttt {X}}_1 = \langle m, 0^*]\). All other variables occurring in the program hold the zero stack when the execution starts. A program is executed in a base b which is determined by the input: we have \(b=\max (m+1,2)\) if \({\texttt {X}}_1 = \langle m, 0^*]\) when the execution starts. The execution base b is kept fixed during the entire execution.

Let \({\texttt {X}}\) and \({\texttt {Y}}\) be program variables. We will now explain how the primitive commands work. The command \(({\texttt {X}} \, \texttt {to}\, {\texttt {Y}})\) pops off the top element of the stack held by \({\texttt {X}}\) and pushes it onto the stack held by \({\texttt {Y}}\), that is

$$\begin{aligned} \{{\texttt {X}}= \langle x_1, x_2, \ldots , x_n, 0^*] \; \wedge \; {\texttt {Y}}= \langle y_1 , \ldots , y_k, 0^*] \}\; ({\texttt {X}} \, \texttt {to}\, {\texttt {Y}}) \; \{ {\texttt {X}}= \langle x_2, \ldots , x_n, 0^*] \; \wedge \; {\texttt {Y}}= \langle x_1, y_1, \ldots , y_k, 0^*] \}. \end{aligned}$$

The command \({\texttt {X}}^+\) increases the top element of the stack held by \({\texttt {X}}\) by \(1 \;(\text {mod}\; b)\), that is

$$\begin{aligned} \{{\texttt {X}}= \langle x_1 , \ldots , x_n, 0^*] \}\; {\texttt {X}}^+ \; \{ {\texttt {X}}= \langle x_1 + 1 \;(\text {mod}\; b), x_2, \ldots , x_n, 0^*] \}. \end{aligned}$$

The command \({\texttt {X}}^-\) decreases the top element of the stack held by \({\texttt {X}}\) by \(1 \;(\text {mod}\; b)\), that is

$$\begin{aligned} \{{\texttt {X}}= \langle x_1 , \ldots , x_n, 0^*] \}\; {\texttt {X}}^- \; \{ {\texttt {X}}= \langle x_1 - 1 \;(\text {mod}\; b), x_2, \ldots , x_n, 0^*] \}\, . \end{aligned}$$

Observe that we have

$$\begin{aligned} \{{\texttt {X}}= \langle b-1, x_2, \ldots , x_n, 0^*] \}\; {\texttt {X}}^+ \; \{ {\texttt {X}}= \langle 0 , x_2, \ldots , x_n, 0^*] \} \end{aligned}$$

and

$$\begin{aligned} \{{\texttt {X}}= \langle 0 , x_2, \ldots , x_n, 0^*] \}\; {\texttt {X}}^- \; \{ {\texttt {X}}= \langle b-1 , x_2, \ldots , x_n, 0^*] \} \end{aligned}$$

when b is the base of the execution.

The semantics of the command \({\texttt {C}}_1\texttt {;}\,{\texttt {C}}_2\) is as expected. This is the standard composition of the commands \({\texttt {C}}_1\) and \({\texttt {C}}_2\), that is, first \({\texttt {C}}_1\) is executed, then \({\texttt {C}}_2\) is executed. The command \(\texttt {loop} \; {\texttt {X}}\; \texttt {\{} \, {\texttt {C}}\, \texttt {\}}\) executes the command \({\texttt {C}}\) repeatedly k times in a row where k is the top element of the stack held by \({\texttt {X}}\). Note that the variable \({\texttt {X}}\) is not allowed to occur in \({\texttt {C}}\) and, moreover, the command will not modify the stack held by \({\texttt {X}}\).
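
To make the semantics concrete, here is a minimal reference interpreter in Python (our own sketch, not part of the formal development). Commands are encoded as tuples, e.g. ("inc", i) for \({\texttt {X}}_i^+\); a stack is a list whose first element is the top, with the trailing zeros of a bottomless stack left implicit. The syntactic side condition that the loop variable must not occur in the loop’s body is not checked here.

    from collections import defaultdict

    def top(s): return s[0] if s else 0    # an empty list reads as the zero stack
    def pop(s): return s.pop(0) if s else 0

    def run(cmd, store, b):
        """Execute an encoded RBS command on store: variable index -> stack."""
        op = cmd[0]
        if op == "inc":                    # X+ : top := top + 1 (mod b)
            s = store[cmd[1]]
            s.insert(0, (pop(s) + 1) % b)
        elif op == "dec":                  # X- : top := top - 1 (mod b)
            s = store[cmd[1]]
            s.insert(0, (pop(s) - 1) % b)
        elif op == "to":                   # (X to Y): move the top element
            store[cmd[2]].insert(0, pop(store[cmd[1]]))
        elif op == "seq":                  # C1 ; C2
            run(cmd[1], store, b)
            run(cmd[2], store, b)
        elif op == "loop":                 # loop X { C }: top-of-X many iterations
            for _ in range(top(store[cmd[1]])):
                run(cmd[2], store, b)

    def execute(program, m):
        """Run a program on input m in the execution base max(m + 1, 2)."""
        store = defaultdict(list)
        store[1] = [m]                     # X1 = <m, 0*]
        run(program, store, max(m + 1, 2))
        return store

    # Example 1's program C2: loop X1 { X2+ }; X2+; (X2 to X1)
    C2 = ("seq", ("seq", ("loop", 1, ("inc", 2)), ("inc", 2)), ("to", 2, 1))
    print(execute(C2, 17)[1])              # [0, 17], i.e. X1 = <0, 17, 0*]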

Example 1

Let \({\texttt {C}}_1\) be the program \(\texttt {loop} \; {\texttt {X}}_1 \; \texttt {\{} \; {\texttt {X}}_2^{+} \; \texttt {\}}\texttt {;}\,({\texttt {X}}_2 \, \texttt {to}\, {\texttt {X}}_1)\). We have

$$\begin{aligned} \{{\texttt {X}}_1 = \langle 17, 0^*] \; \wedge \; {\texttt {X}}_2 = \langle 0^*] \}\; {\texttt {C}}_1 \; \{ {\texttt {X}}_1 = \langle 17, 17, 0^*] \; \wedge \; {\texttt {X}}_2 = \langle 0^*] \}. \end{aligned}$$

Let \({\texttt {C}}_2\) be the program \(\texttt {loop} \; {\texttt {X}}_1 \; \texttt {\{} \; {\texttt {X}}_2^{+} \; \texttt {\}}\texttt {;}\,{\texttt {X}}_2^{+}\texttt {;}\,({\texttt {X}}_2 \, \texttt {to}\, {\texttt {X}}_1)\). We have

$$\begin{aligned} \{{\texttt {X}}_1 = \langle 17, 0^*] \; \wedge \; {\texttt {X}}_2 = \langle 0^*] \}\; {\texttt {C}}_2 \; \{ {\texttt {X}}_1 = \langle 0, 17, 0^*] \; \wedge \; {\texttt {X}}_2 = \langle 0^*] \} \end{aligned}$$

since the execution base is 18. All numbers stored on stacks during an execution will be strictly less than the execution base, and thus, less than or equal to \(\max (m,1)\) where m is the input.    \(\square \)

Intuitively, it should be clear that \(\texttt {RBS}\) programs are reversible in a very strong sense. \(\texttt {RBS}\) is an inherently reversible programming language in the terminology of Matos [14]. If we like, we can of course state this insight more formally. The next definition and the following theorem will be a step in that direction.

Definition 2

We define the reverse command of C, written \(\mathtt{C}^R\), inductively over the structure of C:

  • \((\mathtt{X}_i^+)^R = \mathtt{X}_i^-\)

  • \((\mathtt{X}_i^-)^R = \mathtt{X}_i^+\)

  • \(({\mathtt{X}_i}\,\mathtt{to}\,{\mathtt{X}_j})^R = ({\mathtt{X}_j}\,\mathtt{to}\,{\mathtt{X}_i})\)

  • \((\mathtt{C}_1 \texttt {;}\,\mathtt{C}_2)^R = \mathtt{C}_2^R \texttt {;}\,\mathtt{C}_1^R\)

  • \((\mathtt{loop} \; \mathtt{X}_i \; \texttt {\{} \, \mathtt{C} \, \texttt {\}})^R = \mathtt{loop} \; \mathtt{X}_i \; \texttt {\{} \, \mathtt{C}^R \, \texttt {\}}\).

   \(\square \)
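
Definition 2 transcribes directly into code. The following Python function (our sketch) uses the tuple encoding from the interpreter sketched above:

    def reverse(cmd):
        """Return the reverse command of Definition 2 (tuple encoding as above)."""
        op = cmd[0]
        if op == "inc":  return ("dec", cmd[1])
        if op == "dec":  return ("inc", cmd[1])
        if op == "to":   return ("to", cmd[2], cmd[1])
        if op == "seq":  return ("seq", reverse(cmd[2]), reverse(cmd[1]))
        if op == "loop": return ("loop", cmd[1], reverse(cmd[2]))
        raise ValueError(f"not an RBS command: {cmd!r}")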

Theorem 3

Let \({\texttt {C}}\) be a program, and let \({\texttt {X}}_1,\ldots , {\texttt {X}}_n\) be the variables occurring in \({\texttt {C}}\). Furthermore, let m be any natural number. We have

$$\begin{aligned} \{ {\texttt {X}}_1 = \langle m, 0^*] \; \wedge \; \bigwedge _{i=2}^n \; {\texttt {X}}_i = \langle 0^*] \}\; {\texttt {C}}\texttt {;}\,{\texttt {C}}^R \; \{ {\texttt {X}}_1 = \langle m, 0^*] \; \wedge \; \bigwedge _{i=2}^n \; {\texttt {X}}_i = \langle 0^*] \}. \end{aligned}$$

It is a nice, and maybe even challenging, exercise to write up a decent proof of Theorem 3, even if it should be pretty clear that the theorem holds. We will offer a proof in the next section. The reader not interested in the details of the proof may skip that section.
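
Before we turn to the proof, the theorem is easy to test empirically with the interpreter and the reverse function sketched above (assuming both are in scope). For instance, on the program \({\texttt {C}}_2\) from Example 1:

    # C ; C^R restores the initial store; trailing zeros count as the zero stack.
    C2 = ("seq", ("seq", ("loop", 1, ("inc", 2)), ("inc", 2)), ("to", 2, 1))
    store = execute(("seq", C2, reverse(C2)), 17)
    assert top(store[1]) == 17                        # X1 = <17, 0*] again
    assert all(x == 0 for x in store[1][1:] + store[2])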

We will now define the set of problems that can be decided by \(\texttt {RBS}\) programs. To that end, we need to determine how an \(\texttt {RBS}\) program should accept, and how an \(\texttt {RBS}\) program should reject, its input. Any reasonable convention will do, and we will just pick a simple and convenient one.

Definition 4

An RBS program C accepts the natural number m if C executed with input m terminates with 0 at the top of the stack held by \(\mathtt{X_1}\); otherwise, C rejects m.

A problem is a set of natural numbers. An RBS program C decides the problem A if C accepts all m that belong to A and rejects all m that do not belong to A. Let \(\mathcal {S}\) denote the class of problems decidable by an RBS program.    \(\square \)

Let A be the set of even numbers. Then A is a problem. Figure 2 shows an RBS program that decides A.

Fig. 2. The program accepts every even number and rejects every odd number.
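
The figure is not reproduced here, but the following program (our own; it need not be identical to the one in Fig. 2) has the stated behavior. It uses \({\texttt {Z}}\) as scratch space and the auxiliary variables \({\texttt {X}}_3\) and \({\texttt {X}}_4\) to toggle a parity flag:

$$\begin{aligned} {\texttt {X}}_4^+\texttt {;}\,\texttt {loop}\;{\texttt {X}}_1\;\texttt {\{}\, ({\texttt {X}}_3 \, \texttt {to}\, {\texttt {Z}})\texttt {;}\,({\texttt {X}}_4 \, \texttt {to}\, {\texttt {X}}_3)\texttt {;}\,({\texttt {Z}} \, \texttt {to}\, {\texttt {X}}_4) \,\texttt {\}}\texttt {;}\,({\texttt {X}}_1 \, \texttt {to}\, {\texttt {Z}})\texttt {;}\,\texttt {loop}\;{\texttt {X}}_3\;\texttt {\{}\, {\texttt {X}}_1^+ \,\texttt {\}} \end{aligned}$$

The loop swaps the top elements of \({\texttt {X}}_3\) and \({\texttt {X}}_4\) once for each unit of the input m, so \({\texttt {X}}_3\) ends with \(m \;(\text {mod}\; 2)\) on top; the input is then discarded into \({\texttt {Z}}\), and the parity is copied into \({\texttt {X}}_1\). Hence the program terminates with 0 at the top of the stack held by \({\texttt {X}}_1\) exactly when m is even.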

Now, any RBS program decides a problem, and \(\mathcal {S}\) is obviously a well-defined class of computable (decidable) problems. We have defined \(\mathcal {S}\) by a reversible programming language. We have not defined \(\mathcal {S}\) by imposing resource bounds on Turing machines or any other machine model. What can we say about the computational complexity of the problems we find in \(\mathcal {S}\)? Might it be the case that \(\mathcal {S}\) equals a complexity class?

3 The Proof of Theorem 3

This section is dedicated to a detailed proof of Theorem 3 (readers not interested may jump ahead to Sect. 4). First, we need some terminology and notation: We will say that a (bottomless) stack is a b-stack if every number stored on the stack is strictly smaller than b. Furthermore, we will use \(\mathcal{V}({\texttt {C}})\) to denote the set of program variables occurring in the command \({\texttt {C}}\), and for any positive integer m and any command \({\texttt {C}}\), we define the command \({\texttt {C}}^m\) by \({\texttt {C}}^1\equiv {\texttt {C}}\) and \({\texttt {C}}^{m+1}\equiv {\texttt {C}}^m\texttt {;}\,{\texttt {C}}\).

Now, assume that \({\texttt {C}}\) is an \(\texttt {RBS}\) command with \(\mathcal{V}({\texttt {C}}) \subseteq \{{\texttt {X}}_1, \ldots , {\texttt {X}}_n\}\). Furthermore, assume that \({\texttt {C}}\) is executed in base b and that \(\alpha _1,\ldots , \alpha _n, \beta _1,\ldots , \beta _n\) are b-stacks. With these assumptions in mind, we make the following claim:

$$\begin{aligned} \text {If }\;\;\{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}\; {\texttt {C}}\; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \}\, ,\; \text {then }\;\; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \}\; {\texttt {C}}^R \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \} . \end{aligned}$$
(claim)

Theorem 3 follows straightforwardly from this claim. So all we need to do is to prove the claim.

We will of course carry out induction on the structure of the command \({\texttt {C}}\), and our proof will split into the three base cases (i) \({\texttt {C}}\equiv {\texttt {X}}_i^+\), (ii) \({\texttt {C}}\equiv {\texttt {X}}_i^-\) and (iii) \({\texttt {C}}\equiv ({\texttt {X}}_j \, \texttt {to}\, {\texttt {X}}_i)\), and the two inductive cases (iv) \({\texttt {C}}\equiv {\texttt {C}}_1\texttt {;}\,{\texttt {C}}_2\) and (v) \({\texttt {C}}\equiv \texttt {loop} \, {\texttt {X}}_i \, \texttt {\{} {\texttt {C}}_0 \texttt {\}}\) (see Fig. 1).

Case (i). Assume

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}\; {\texttt {X}}_i^+ \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \}. \end{aligned}$$

Then we also have \(\{ {\texttt {X}}_i = \alpha _i \}\; {\texttt {X}}_i^+ \; \{ {\texttt {X}}_i = \beta _i \}\) where

$$\begin{aligned} \alpha _i = \langle m_1,m_2,\ldots , m_k ,0^*]\;\;\;\text { and }\;\;\; \beta _i = \langle m_1 + 1 \;(\text {mod}\; b),m_2, \ldots , m_k ,0^*] \end{aligned}$$

for some \( m_1,\ldots , m_k<b\). We have \((m_1 + 1 \;(\text {mod}\; b)) - 1 \;(\text {mod}\; b) = m_1\) when \(m_1 < b\). Thus we have \(\{ {\texttt {X}}_i = \beta _i \}\; {\texttt {X}}_i^- \; \{ {\texttt {X}}_i = \alpha _i \}\). By Definition 2, we have \(\{ {\texttt {X}}_i = \beta _i \}\; ({\texttt {X}}_i^+)^R \; \{ {\texttt {X}}_i = \alpha _i \}\). Now, since neither \({\texttt {X}}_i^+\) nor \(({\texttt {X}}_i^+)^R\) will modify any stack held by a variable \({\texttt {X}}_j\) where \(j\ne i\), we also have

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \}\; ({\texttt {X}}_i^+)^R \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}. \end{aligned}$$

This concludes the proof of case (i). The proofs of the cases (ii) and (iii) are very similar to the proof of case (i). We leave the details to the reader and proceed with the inductive cases.

Case (iv). Assume

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}\; {\texttt {C}}_1\texttt {;}\,{\texttt {C}}_2 \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \}. \end{aligned}$$

Then there exist b-stacks \(\gamma _1,\ldots , \gamma _n\) such that

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}\; {\texttt {C}}_1 \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \gamma _\ell \}\;\;\;\text { and } \;\;\; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \gamma _\ell \}\; {\texttt {C}}_2 \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \}. \end{aligned}$$

We apply our induction hypothesis both to \({\texttt {C}}_1\) and to \({\texttt {C}}_2\) and conclude

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \gamma _\ell \}\; {\texttt {C}}_1^R \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}\;\;\;\text { and } \;\;\; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \}\; {\texttt {C}}_2^R \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \gamma _\ell \}. \end{aligned}$$

It follows that

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \}\; {\texttt {C}}_2^R \texttt {;}\,{\texttt {C}}_1^R \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}. \end{aligned}$$

Finally, as Definition 2 states that \(({\texttt {C}}_1\texttt {;}\,{\texttt {C}}_2)^R = {\texttt {C}}_2^R \texttt {;}\,{\texttt {C}}_1^R\), we have

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \}\; ({\texttt {C}}_1\texttt {;}\,{\texttt {C}}_2)^R \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}. \end{aligned}$$

This completes the proof of case (iv).

Case (v). Assume

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}\; \texttt {loop} \; {\texttt {X}}_i \; \texttt {\{} \, {\texttt {C}}_0 \, \texttt {\}} \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \} \end{aligned}$$
(*)

and let m be the top element of the stack \(\alpha _i\).

If \(m=0\), we have

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}\; \texttt {loop} \; {\texttt {X}}_i \; \texttt {\{} \, {\texttt {C}}_0 \, \texttt {\}} \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \} \end{aligned}$$

as the command \({\texttt {C}}_0\) will not be executed at all. Thus, we also have

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}\; \texttt {loop} \; {\texttt {X}}_i \; \texttt {\{} \, {\texttt {C}}_0^R \, \texttt {\}} \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \} \end{aligned}$$

and by Definition 2, we have

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}\; (\texttt {loop} \; {\texttt {X}}_i \; \texttt {\{} \, {\texttt {C}}_0 \, \texttt {\}})^R \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}. \end{aligned}$$

This proves that the claim holds when \(m=0\). We are left to prove that the claim holds when \(m>0\). Thus, in the remainder of this proof we assume that \(m>0\).

First we prove

$$\begin{aligned} \text {If }\;\;\{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}\; {\texttt {C}}_0^m \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \},\; \text {then }\;\; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \}\; ({\texttt {C}}_0^R)^m \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \} . \end{aligned}$$
(†)

by a secondary induction on m.

Let \(m=1\). Then we have \(C_0^m \equiv {\texttt {C}}_0\), and an application of our main induction hypothesis to \({\texttt {C}}_0\) yields (\(\dagger \)). Let \(m>1\). Then we have

$$\begin{aligned} C_0^m \equiv C_0^{m-1}\texttt {;}\,{\texttt {C}}_0 \;\;\; \text { and } \;\;\; ({\texttt {C}}_0^R)^m\equiv {\texttt {C}}_0^R\texttt {;}\,({\texttt {C}}_0^R)^{m-1} \end{aligned}$$

and (\(\dagger \)) holds by our induction hypothesis on m and case (iv) above. This concludes the proof of (\(\dagger \)).

We are now ready to complete our proof of the claim. By (*), we have

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}\; {\texttt {C}}_0^m \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \}. \end{aligned}$$

By (\(\dagger \)), we have

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \}\; ({\texttt {C}}_0^R)^m \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}. \end{aligned}$$

Since \({\texttt {X}}_i\not \in \mathcal{V}({\texttt {C}}_0)\), we have \(\beta _i=\alpha _i\), and thus, the top element of \(\beta _i\) is the same as the top element of \(\alpha _i\), namely m. It follows that

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \}\; \texttt {loop} \; {\texttt {X}}_i \; \texttt {\{} \, {\texttt {C}}_0^R \, \texttt {\}} \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}. \end{aligned}$$

Finally, as Definition 2 states that \(\texttt {loop} \; {\texttt {X}}_i \; \texttt {\{} \, {\texttt {C}}_0^R \, \texttt {\}}= ( \texttt {loop} \; {\texttt {X}}_i \; \texttt {\{} \, {\texttt {C}}_0 \, \texttt {\}} )^R\), we have

$$\begin{aligned} \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \beta _\ell \}\; (\texttt {loop} \; {\texttt {X}}_i \; \texttt {\{} \, {\texttt {C}}_0 \, \texttt {\}})^R \; \{ \bigwedge _{\ell =1}^n {\texttt {X}}_\ell = \alpha _\ell \}. \end{aligned}$$

This completes the proof of case (v).

4 Simulation of Turing Machines

4.1 A General Strategy

Let us first see how we can simulate a Turing machine in a standard way in a standard high-level language. Thereafter we will discuss how we can simulate a Turing machine in our rudimentary reversible language. In the standard language we will of course be able to simulate any Turing machine, no matter how much time and space the machine requires. In the reversible language we will only be able to simulate those Turing machines that run in time \(O(2^{kn})\) (where k is a constant and n is the length of the input).

We assume some familiarity with Turing machines. The reader is expected to know that a Turing machine computes by writing symbols from a finite alphabet \(a_1,\ldots ,a_A\) on an infinite tape which is divided into cells; know that one of the cells is scanned by the machine’s head; know that there is a finite number of states \(q_1,\ldots , q_Q\); and so on.

The input w will be available on the tape when a Turing machine M starts, and the actions taken by M will be governed by a finite transition table. Each entry of the table is a 5-tuple

$$\begin{aligned} a_i, q_k, a_j , D, q_\ell \end{aligned}$$
(*)

where \(a_i,a_j\) are alphabet symbols; \(q_k, q_\ell \) are states; and D is either “left” or “right”. Such a tuple is called a transition and tells M what to do when it scans the symbol \(a_i\) in state \(q_k\): in that case M should write the symbol \(a_j\), move its head one position in the direction given by D, and then proceed in state \(q_\ell \). We restrict our attention to deterministic Turing machines, and for each alphabet symbol \(a_i\) and each non-halting state \(q_k\), there will be one, and only one, transition that starts with \(a_i, q_k\). So a Turing machine knows exactly what to do until it reaches one of its halting states, and then it simply halts (if it halts in a dedicated state \(q_{\text {accept}}\), it accepts its input; if it halts in a dedicated state \(q_{\text {reject}}\), it rejects its input). This entails that we can simulate a Turing machine by a sequence of if-then statements embedded into a while-loop. We need one if-then statement \({\texttt {T}}_s\) for each transition; the program (not reproduced here) presumably has the shape

$$\begin{aligned} \texttt {while}\; \texttt {STATE}\;\text { is not a halting state }\; \texttt {\{}\; {\texttt {T}}_1\texttt {;}\,{\texttt {T}}_2\texttt {;}\,\ldots \texttt {;}\,{\texttt {T}}_r \;\texttt {\}}. \end{aligned}$$

At least one transition will be executed each time the loop’s body is executed, and the running time of M (on input w) will more or less be the number of times the body is executed. (It might happen that more than one transition is executed when the loop’s body is executed once, but that will not cause any trouble.) In order to simulate the actions taken by the transitions, we need a representation of the computing machinery. We need to keep track of the current state, we need to keep track of the symbols on the tape, and we need to identify the scanned cell. The current state can simply be stored in a register \(\texttt {STATE}\), but how should we deal with the tape? The tape is divided into an infinite sequence of cells

$$\begin{aligned} C_1,C_2,C_3,\ldots , C_{s-1},C_s, C_{s+1}, \ldots \end{aligned}$$

where one of the cells \(C_s\) is scanned by the head. Only finitely many of these cells will contain anything other than the blank symbol. Let us say that \(C_i\) contains the blank symbol when \(i>B_0\). In order to simulate the machine it will obviously be sufficient to store the symbols in the cells \(C_1,C_2, \ldots , C_B\) where \(B= \max (B_0,s)+1\). In addition we need to keep track of the scanned cell \(C_s\). A convenient way to deal with the situation will be to use a stack \(\texttt {STACK}_L\), a register SCAN, another stack \(\texttt {STACK}_R\), and store the tape content in the following way (the original figure is not reproduced here; presumably the layout is the following, with the cell immediately to the left of the head on top of \(\texttt {STACK}_L\) and the cell immediately to the right of the head on top of \(\texttt {STACK}_R\)):

$$\begin{aligned} \texttt {STACK}_L = \langle C_{s-1}, C_{s-2}, \ldots , C_1] \qquad \texttt {SCAN} = C_s \qquad \texttt {STACK}_R = \langle C_{s+1}, C_{s+2}, \ldots , C_B] \end{aligned}$$

Now we can mimic the movements of the head by pushing and popping alphabet symbols in the obvious way, and the transition (*) can be implemented by a program of the form (our reconstruction, shown for D = “right”; the case D = “left” is symmetric)

$$\begin{aligned} \texttt {if}\; \texttt {SCAN}\, \texttt {=}\, a_i \;\texttt {and}\; \texttt {STATE}\, \texttt {=}\, q_k \; \texttt {then}\; \texttt {\{}\, \texttt {SCAN}\,\texttt {:=}\, a_j\texttt {;}\,\texttt {push}(\texttt {STACK}_L, \texttt {SCAN})\texttt {;}\,\texttt {SCAN}\,\texttt {:=}\, \texttt {pop}(\texttt {STACK}_R)\texttt {;}\,\texttt {STATE}\,\texttt {:=}\, q_\ell \, \texttt {\}}. \end{aligned}$$
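
The whole strategy is small enough to run. Below is a miniature Python rendering of the two-stack simulation (the machine, its encoding and all names are our own toy example, not taken from the paper):

    BLANK, A = 0, 1                        # tape symbols as small integers
    Q0, ACCEPT, REJECT = 1, 2, 3           # states as small integers

    # transition table: (scanned symbol, state) -> (write, direction, new state);
    # this toy machine walks right over a's and accepts at the first blank
    TRANS = {
        (A, Q0): (A, "right", Q0),
        (BLANK, Q0): (BLANK, "right", ACCEPT),
    }

    def simulate(tape):
        stack_l, scan, stack_r = [], tape[0], list(tape[1:])
        state = Q0
        while state not in (ACCEPT, REJECT):        # the while-loop sketched above
            write, d, state = TRANS[(scan, state)]  # the unique applicable transition
            scan = write                            # write into the scanned cell
            if d == "right":
                stack_l.append(scan)                # top of stack_l: cell left of head
                scan = stack_r.pop(0) if stack_r else BLANK
            else:
                stack_r.insert(0, scan)             # top of stack_r: cell right of head
                scan = stack_l.pop() if stack_l else BLANK
        return state == ACCEPT

    print(simulate([A, A, A]))                      # -> True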

4.2 Can RBS Programs Simulate Turing Machines?

The input to an RBS program is a natural number, and we will thus discuss to what extent an RBS program can simulate a Turing machine that takes a single natural number as input.

We have seen that a program with only one while-loop can simulate a Turing machine (and we will for sure need at least one while-loop in order to simulate an arbitrary Turing machine). Now, while-loops are not available in RBS, and the best we can do in order to simulate a Turing machine is to use a fixed number of nested for-loops:

$$\begin{aligned} \texttt {loop}\,{\texttt {Y}}_1 \, \texttt {\{}\, \texttt {loop}\,{\texttt {Y}}_2 \, \texttt {\{}\, \ldots \texttt {loop}\,{\texttt {Y}}_k\, \texttt {\{}\, \langle \text {sequence of if-then statements}\rangle \, \texttt {\}} \ldots \texttt {\}} \texttt {\}}. \end{aligned}$$

Since an RBS program cannot increase the numerical value of its input, the body of each of these loops will be executed at most \(\max (m,1)\) times where m is the input to the RBS program (and to the Turing machine the program simulates). Thus it is pretty clear that we cannot simulate a Turing machine if its running time is not bounded by \(m^k\) for some constant k. This corresponds to a bound \(2^{k|m|}\) where k is a constant and |m| is the length of the input m, that is, |m| equals the number of symbols needed to represent the natural number m in binary notation. In the following we will see that any Turing machine that runs within such a time bound can be simulated by an RBS program.

It turns out that an RBS program can simulate the transitions of a Turing machine M in essentially the same way as the high-level program sketched above, given that the input to M is sufficiently large (on small inputs the simulation might fail). Stacks are directly available in RBS, and thus an RBS program can easily represent the tape and mimic the movements of the head. On the other hand, assignment statements and if-then statements are not directly available. This makes things a bit tricky. Let us first see how RBS programs can, to a certain extent, simulate programs written in a non-reversible programming language called \(\texttt {LOOP}^-\).

4.3 \(\texttt {LOOP}^-\) Programs

The syntax of \(\texttt {LOOP}^-\) is given in Fig. 3. Any element in the syntactic category Command will be called a program. A \(\texttt {LOOP}^-\) program manipulates natural numbers, and each program variable holds a single natural number. The command \({\texttt {X}}\,\texttt {:=}\,k\) assigns the fixed number k to the variable \({\texttt {X}}\). The command \({\texttt {X}}\,\texttt {:=}\,{\texttt {Y}}\) assigns the number held by the variable \({\texttt {Y}}\) to the variable \({\texttt {X}}\). The command \(\texttt {pred(}{\texttt {X}}\texttt {)}\) decreases the value held by the variable \({\texttt {X}}\) by 1 if the value is strictly greater than 0, and leaves the value held by \({\texttt {X}}\) unchanged if the value is 0. Furthermore, the command \({\texttt {C}}_1\texttt {;}\,{\texttt {C}}_2\) is the standard composition of the commands \({\texttt {C}}_1\) and \({\texttt {C}}_2\), and the command \(\texttt {loop} \; {\texttt {X}}\; \texttt {\{} \, {\texttt {C}}\, \texttt {\}}\) executes the command \({\texttt {C}}\) repeatedly k times in a row where k is the number held by \({\texttt {X}}\). Note that the variable \({\texttt {X}}\) is not allowed to occur in \({\texttt {C}}\) and that the command \(\texttt {loop} \; {\texttt {X}}\; \texttt {\{} \, {\texttt {C}}\, \texttt {\}}\) does not modify the value held by \({\texttt {X}}\).

Fig. 3. The syntax of the language \(\texttt {LOOP}^-\). The variable X in the loop command is not allowed to occur in the loop’s body.
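
Again the figure is not reproduced here; judging from the explanations above, the grammar should be essentially the following (our reconstruction):

$$\begin{aligned} \textit{Command} \;\; {:}{:}{=}\;\; {\texttt {X}}_i\,\texttt {:=}\,k \;\mid \; {\texttt {X}}_i\,\texttt {:=}\,{\texttt {X}}_j \;\mid \; \texttt {pred(}{\texttt {X}}_i\texttt {)} \;\mid \; \textit{Command}\, \texttt {;}\,\textit{Command} \;\mid \; \texttt {loop} \; {\texttt {X}}_i \; \texttt {\{} \, \textit{Command} \, \texttt {\}} \end{aligned}$$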

An RBS program can represent a \(\texttt {LOOP}^-\) variable \({\texttt {X}}\) holding the natural number k by a variable \({\texttt {X}}\) (we use the same variable name) holding the stack \(\langle k, 0^*]\). The command \({\texttt {X}}\,\texttt {:=}\,k\) can then be simulated by the program

$$\begin{aligned} ({\texttt {X}} \, \texttt {to}\, {\texttt {Z}})\texttt {;}\,\underbrace{\, {\texttt {X}}^+\texttt {;}\,{\texttt {X}}^+\texttt {;}\,\ldots {\texttt {X}}^+ \, }_{\text { increase }k\text { times}} \end{aligned}$$

where \({\texttt {Z}}\) is an auxiliary variable (\({\texttt {Z}}\) works as a trash bin). Now, observe that this will only work if the base of execution is strictly greater than k, but that will be good enough for us. The command \({\texttt {X}}\,\texttt {:=}\,{\texttt {Y}}\) can be simulated by the program

$$\begin{aligned} ({\texttt {X}} \, \texttt {to}\, {\texttt {Z}})\texttt {;}\,\texttt {loop}\;{\texttt {Y}}\; \texttt {\{}\, {\texttt {X}}^+ \, \texttt {\}} \end{aligned}$$

where \({\texttt {Z}}\) is an auxiliary variable (\({\texttt {Z}}\) works as a trash bin). Furthermore, the command \(\texttt {pred(}{\texttt {X}}\texttt {)}\) can be simulated by a program that uses auxiliary variables \({\texttt {Y}}\) and \({\texttt {Z}}\) (which represent natural numbers) and the simulations of the assignment statements given above, for instance (our reconstruction of the omitted figure)

$$\begin{aligned} {\texttt {Y}}\,\texttt {:=}\,0\texttt {;}\,{\texttt {Z}}\,\texttt {:=}\,0\texttt {;}\,\texttt {loop}\;{\texttt {X}}\;\texttt {\{}\, {\texttt {Y}}\,\texttt {:=}\,{\texttt {Z}}\texttt {;}\,{\texttt {Z}}^+ \,\texttt {\}}\texttt {;}\,{\texttt {X}}\,\texttt {:=}\,{\texttt {Y}} \end{aligned}$$

where the assignments abbreviate their RBS simulations. After the loop, \({\texttt {Y}}\) represents \(\max (x-1,0)\) where x is the number represented by \({\texttt {X}}\); since \(x<b\), the counter \({\texttt {Z}}\) never wraps around.

This shows how \(\texttt {RBS}\) programs can simulate all the primitive \(\texttt {LOOP}^-\) commands. It is easy to see that

  • the RBS command \({\texttt {C}}_1^\prime \texttt {;}\, {\texttt {C}}_2^\prime \) simulates the \(\texttt {LOOP}^-\) command \({\texttt {C}}_1\texttt {;}\, {\texttt {C}}_2\) if \({\texttt {C}}_1^\prime \) simulates \({\texttt {C}}_1\) and \({\texttt {C}}_2^\prime \) simulates \({\texttt {C}}_2\)

  • the RBS command \(\texttt {loop}\; {\texttt {X}}\; \texttt {\{} \, {\texttt {C}}^\prime \, \texttt {\}}\) simulates the \(\texttt {LOOP}^-\) command \(\texttt {loop}\; {\texttt {X}}\; \texttt {\{} \, {\texttt {C}}\, \texttt {\}}\) if \({\texttt {C}}^\prime \) simulates \({\texttt {C}}\).

Hence, any \(\texttt {LOOP}^-\) program can be simulated by an \(\texttt {RBS}\) program given that the input is sufficiently large. On small inputs simulations might fail since the simulation of the assignment \({\texttt {X}}\,\texttt {:=}\,k\) only works if the execution base is strictly greater than k.

The \(\texttt {LOOP}^-\) language turns out to be more expressive than one might expect at a first glance, and all sorts of conditional statements and if-then constructions are available in the language. As an example, let us see how we can implement the construction

$$\begin{aligned} \texttt {if} \; {\texttt {X}}\, \texttt {=}\, {\texttt {Y}}\; \texttt {then}\; {\texttt {C}}_1 \;\texttt {else}\; {\texttt {C}}_2. \end{aligned}$$

We will need some auxiliary variables \({\texttt {X}}'\), \({\texttt {Y}}'\), \({\texttt {Z}}\), \({\texttt {U}}\) which do not occur in any of the commands \({\texttt {C}}_1\) and \({\texttt {C}}_2\). First we execute the program (our reconstruction of the omitted figure)

$$\begin{aligned} {\texttt {X}}'\,\texttt {:=}\,{\texttt {X}}\texttt {;}\,{\texttt {Y}}'\,\texttt {:=}\,{\texttt {Y}}\texttt {;}\,\texttt {loop}\;{\texttt {X}}\;\texttt {\{}\, \texttt {pred(}{\texttt {Y}}'\texttt {)} \,\texttt {\}}\texttt {;}\,\texttt {loop}\;{\texttt {Y}}\;\texttt {\{}\, \texttt {pred(}{\texttt {X}}'\texttt {)} \,\texttt {\}}. \end{aligned}$$

This program sets both \({\texttt {X}}'\) and \({\texttt {Y}}'\) to 0 if \({\texttt {X}}\) and \({\texttt {Y}}\) hold the same number. If \({\texttt {X}}\) and \({\texttt {Y}}\) hold different numbers, one of the two variables \({\texttt {X}}', {\texttt {Y}}'\) will be set to a number strictly greater than 0. Then we execute the program (again our reconstruction, with \({\texttt {Z}}\) as an equality flag and \({\texttt {U}}\) as an inequality flag)

$$\begin{aligned} {\texttt {Z}}\,\texttt {:=}\,1\texttt {;}\,\texttt {loop}\;{\texttt {X}}'\;\texttt {\{}\, {\texttt {Z}}\,\texttt {:=}\,0 \,\texttt {\}}\texttt {;}\,\texttt {loop}\;{\texttt {Y}}'\;\texttt {\{}\, {\texttt {Z}}\,\texttt {:=}\,0 \,\texttt {\}}\texttt {;}\,{\texttt {U}}\,\texttt {:=}\,1\texttt {;}\,\texttt {loop}\;{\texttt {Z}}\;\texttt {\{}\, {\texttt {U}}\,\texttt {:=}\,0 \,\texttt {\}}\texttt {;}\,\texttt {loop}\;{\texttt {Z}}\;\texttt {\{}\, {\texttt {C}}_1 \,\texttt {\}}\texttt {;}\,\texttt {loop}\;{\texttt {U}}\;\texttt {\{}\, {\texttt {C}}_2 \,\texttt {\}}. \end{aligned}$$

The composition of these two programs executes the program \({\texttt {C}}_1\) exactly once (and \({\texttt {C}}_2\) will not be executed at all) if \({\texttt {X}}\) and \({\texttt {Y}}\) hold the same number. If \({\texttt {X}}\) and \({\texttt {Y}}\) hold different numbers, \({\texttt {C}}_2\) will be executed exactly once (and \({\texttt {C}}_1\) will not be executed at all). The reader should note that this implementation of the if-then-else construction does not contain any assignments of the form \({\texttt {X}}\,\texttt {:=}\, k\) where \(k>1\).
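
A quick Python model (ours) of this equality test shows why it works: the two difference variables are both zero exactly when the compared values are equal.

    def pred(v):
        return max(v - 1, 0)            # LOOP-'s pred: cut-off predecessor

    def eq_flags(x, y):
        xp, yp = x, y                   # X' := X ; Y' := Y
        for _ in range(x):              # loop X { pred(Y') }
            yp = pred(yp)
        for _ in range(y):              # loop Y { pred(X') }
            xp = pred(xp)
        return xp, yp                   # (0, 0) exactly when x == y

    assert eq_flags(3, 3) == (0, 0)
    assert eq_flags(5, 2) == (3, 0) and eq_flags(2, 5) == (0, 3)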

It is proved in Kristiansen [8] that \(\texttt {LOOP}^-\) captures the complexity class \(\text {LINSPACE}\), that is, the set of problems decidable in space O(n) on a deterministic Turing machine (n is the length of the input). Hence, the considerations above indicate that \(\text {LINSPACE}\subseteq \mathcal {S}\). However, we are on our way to proving a stronger result, namely that \(\text {LINSPACE}\subseteq \mathcal {S}= \text {ETIME}\). The equality \(\text {LINSPACE}{\mathop {=}\limits ^{\tiny ?}} \text {ETIME}\) is one of the many notorious open problems of complexity theory. The general opinion is that the equality does not hold.

4.4 RBS Programs that Simulate Time-Bounded Turing Machines

We have seen that RBS programs can (nearly) simulate \(\texttt {LOOP}^-\) programs. \(\texttt {LOOP}^-\) can assign constants to registers and perform if-then-else constructions. This helps us to see how an RBS program can simulate an arbitrary \(2^{k|m|}\) time Turing machine M. Such a program may be of the form

$$\begin{aligned} \begin{array}{l} \langle \text {initiate the tape with the input }m \rangle \texttt {;}\,\\ {\texttt {Y}}_1\,\texttt {:=}\, \langle \text {the input }m \rangle \texttt {;}\,{\texttt {Y}}_2\,\texttt {:=}\, \langle \text {the input }{} { m} \rangle \texttt {;}\,\ldots \texttt {;}\,{\texttt {Y}}_k\,\texttt {:=}\, \langle \text {the input }{} { m} \rangle \texttt {;}\,\\ \texttt {loop}\,{\texttt {Y}}_1 \, \texttt {\{}\, \texttt {loop}\,{\texttt {Y}}_2 \, \texttt {\{}\, \ldots \texttt {loop}\,{\texttt {Y}}_k\, \texttt {\{}\, {\texttt {T}}_1\texttt {;}\,{\texttt {T}}_2\texttt {;}\,\ldots \texttt {;}\,{\texttt {T}}_r \, \texttt {\}} \ldots \texttt {\}} \texttt {\}}. \end{array} \end{aligned}$$

We represent the symbols in M’s alphabet \(a_1, \ldots , a_A\) by the numbers \(1,\ldots , A\) and M’s states \(q_1, \ldots , q_Q\) by the numbers \(1,\ldots , Q\). We use two stacks to hold the content of the tape, and we use the registers \(\texttt {STATE}\) and SCAN to hold, respectively, the current state and the symbol in the scanned cell. Each \({\texttt {T}}_s\) will take care of a transition \( a_i, q_k, a_j , D, q_\ell \) and be of the form

$$\begin{aligned} \texttt {if}\; \texttt {SCAN}\, \texttt {=}\, i \;\texttt {and}\; \texttt {STATE}\, \texttt {=}\, k \; \texttt {then}\; \texttt {\{}\, \texttt {SCAN}\,\texttt {:=}\, j\texttt {;}\,\ldots \text { push and pop }\ldots \texttt {;}\,\texttt {STATE}\,\texttt {:=}\, \ell \, \texttt {\}}. \end{aligned}$$

We are left with a minor problem: this will not work for small inputs. The simulation only works if the base of execution \(b = \max (m+1,2)\) is strictly greater than \(\max (A,Q)\); only then will the simulating program be able to perform the necessary assignments of constants to variables. In some sense we cannot deal with this problem. An RBS program will not be able to simulate (in any reasonable sense of the word) an arbitrary \(2^{k|m|}\) time Turing machine M on small inputs, but still there will be an RBS program that decides the same problem as M.

We have seen that it suffices to assign the constants 0 and 1 to variables in order to implement the if-then-else construction in \(\texttt {LOOP}^-\). This entails that the if-then-else construction will work on small inputs, as the base of execution will always be strictly greater than 1. Hence, if the problem A is decided by a \(2^{k|m|}\) time Turing machine M, there will also be an RBS program that decides A. This program will be of the form (schematically; our reconstruction of the omitted figure, with \(c = \max (A,Q)\))

$$\begin{aligned} \texttt {if}\; {\texttt {X}}_1 \,\texttt {=}\, 0\; \texttt {then}\; \texttt {\{}\, \langle \text {answer for input } 0\rangle \,\texttt {\}}\; \texttt {else}\; \ldots \;\texttt {if}\; {\texttt {X}}_1 \,\texttt {=}\, c\; \texttt {then}\; \texttt {\{}\, \langle \text {answer for input } c\rangle \,\texttt {\}}\; \texttt {else}\; \texttt {\{}\, \langle \text {simulation of } M \rangle \,\texttt {\}} \end{aligned}$$

where each hard-coded answer assigns only 0 or 1 to \({\texttt {X}}_1\), and where the tests against the constants \(0, \ldots , c\) can be realized with \(\texttt {pred}\) and zero tests so that no constant greater than 1 is ever assigned.

5 Main Results

5.1 A Characterization of \(\text {ETIME}\)

Definition 5

Let |m| denote the number of digits required to write the natural number m in binary notation. For any natural number k, let \(\text {ETIME}_k\) be the class of problems decidable in time \(O(2^{k|m|})\) on a deterministic Turing machine. Let \(\text {ETIME}= \bigcup _{i\in \mathbb N} \text {ETIME}_i\).    \(\square \)

Theorem 6

\(\mathcal {S}= \text {ETIME}\).

Proof

The proof of the inclusion \(\mathcal {S}\subseteq \text {ETIME}\) should be straightforward to anyone experienced with Turing machines. Assume \(A\in \mathcal {S}\) (we will argue that \({A\in \text {ETIME}}\)). Then there is an \(\texttt {RBS}\) program \({\texttt {C}}\) that decides A. Let m be the input to \({\texttt {C}}\). The body of each loop in \({\texttt {C}}\) will be executed at most \(m+1\) times since the base of execution will be \(\max (m+1,2)\). Thus, there exist constants \(k_0, k_1\) (not depending on m) such that \(k_0(m+1)^{k_1}\) bounds the number of primitive commands executed by \({\texttt {C}}\) on input m. A Turing machine can simulate the execution of \({\texttt {C}}\) on input m with polynomial overhead. Thus there exist constants \(k_2,k_3\) such that \(k_2(m+1)^{k_3}\) bounds the number of steps a Turing machine needs to decide if m is in A. There exists k such that \(k_2(m+1)^{k_3} < 2^{k|m|}\). Hence, \({A\in \text {ETIME}}\). This proves the inclusion \(\mathcal {S}\subseteq \text {ETIME}\).

We turn to the proof of the inclusion \({\text {ETIME}\subseteq \mathcal {S}}\). Assume \({A\in \text {ETIME}}\) (we will argue that \(A\in \mathcal {S}\)). Then there is an \(O(2^{k|m|})\) time Turing machine M that decides A. Now, M will run in time \(2^{k_0|m|}\) for a sufficiently large constant \(k_0\). In the previous section we saw that there will be an RBS program that decides the same problem as M. Hence, \(A\in \mathcal {S}\). This proves the inclusion \(\text {ETIME}\subseteq \mathcal {S}\).   \(\square \)

5.2 A Characterization of \(\text {P}\)

Would it not be nice if we could find a reversible language that captures a complexity class that is a bit more attractive than \(\text {ETIME}\)? Now, \(\text {P}\) is, for a number of reasons which the reader might be aware of, one of the most popular and important complexity classes. Luckily, it turns out that a few modifications of RBS yield a characterization of \(\text {P}\).

First we modify the way \(\texttt {RBS}\) programs receive input. The input will now be a string over some alphabet. Any alphabet that contains at least two symbols will do and, for convenience, we will stick to the alphabet \(\{a,b\}\). The base of execution will at program start be set to the length of the input. Otherwise, nearly everything is kept as before: every variable still holds a bottomless stack storing natural numbers. All commands available in the original version of RBS will be available in the new version. A program will still accept its input by terminating with 0 at the top of the stack held by \({\texttt {X}}_1\); otherwise, the program rejects its input. Moreover, all variables, including \({\texttt {X}}_1\) (the variable that received the input in the original version), hold the zero stack when the execution of a program starts.

Next we extend \(\texttt {RBS}\) by two commands with the syntax

$$\begin{aligned} \texttt {case \; inp[}X\texttt {]=a\!:}\; \texttt {\{} \, com\, \texttt {\}} \;\;\; \text { and } \;\;\; \texttt {case \; inp[}X\texttt {]=b\!:}\; \texttt {\{} \, com\, \texttt {\}} \end{aligned}$$

where X is a variable and com is a command which does not contain X. These commands make it possible for a program to access its input. The input is a string \(\alpha _0\alpha _1 \cdots \alpha _{b-1}\) where b is the execution base and \(\alpha _i\in \{a,b\}\). Assume that \({\texttt {X}}_j\) holds a stack whose top element is k. The command

$$\begin{aligned} \texttt {case\; inp[}{\texttt {X}}_j\texttt {]=a\!:}\; \texttt {\{} \; {\texttt {C}}\; \texttt {\}} \end{aligned}$$

executes the command \({\texttt {C}}\) if \(\alpha _k = a\), otherwise, the command does nothing. The command

$$\begin{aligned} \texttt {case \; inp[}{\texttt {X}}_j\texttt {]=b\!:}\; \texttt {\{} \; {\texttt {C}}\; \texttt {\}} \end{aligned}$$

executes the command \({\texttt {C}}\) if \(\alpha _k = b\), otherwise, the command does nothing.

We still have a reversible language. The two new commands are reversible. The variable \({\texttt {X}}_j\) is not allowed to occur in \({\texttt {C}}\) and will consequently not be modified by \({\texttt {C}}\). Thus, for \(\texttt {x}\in \{\texttt {a}, \texttt {b}\}\), we may extend Definition 2 by

$$\begin{aligned} \big (\texttt {case \; inp[}{\texttt {X}}_j\texttt {]=x\!:}\; \texttt {\{} \, {\texttt {C}}\, \texttt {\}} \ \big )^R \; = \; \texttt {case \; inp[}{\texttt {X}}_j\texttt {]=x\!:}\; \texttt {\{} \, {\texttt {C}}^R \, \texttt {\}}. \end{aligned}$$

and Theorem 3 will still hold.

To avoid confusion we will use \(\texttt {RBS}'\) to denote our new version of RBS. We require that the input to an \(\texttt {RBS}'\) program is of length at least 2 (so we exclude the empty string and the one-symbol strings a and b). This is of course a bit artificial, but it seems to be the most convenient way to deal with a few annoying problems of a technical nature. Accordingly, we also require that every string in a language (see the definition below) is of length at least 2.

Fig. 4. The program accepts any string that starts with a nonempty sequence of a’s and ends with a single b (recall that the input to a program contains at least two symbols). The program rejects any string that is not of this form. The program accepts by terminating with \({\texttt {X}}_1=\langle 0^*]\) and rejects by terminating with \({\texttt {X}}_1=\langle 1, 0^*]\).
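
Since the figure is not reproduced here, we sketch a program with the stated behavior (our own; it need not coincide with the program of Fig. 4). Recall that \({\texttt {X}}_2^-\) applied to the zero stack puts \(b-1\) on top, which is the index of the last input symbol:

$$\begin{aligned} {\texttt {X}}_2^-\texttt {;}\,\texttt {loop}\;{\texttt {X}}_2\;\texttt {\{}\, \texttt {case\; inp[}{\texttt {X}}_3\texttt {]=b\!:}\;\texttt {\{}\, ({\texttt {X}}_1 \, \texttt {to}\, {\texttt {Z}})\texttt {;}\,{\texttt {X}}_1^+ \,\texttt {\}}\texttt {;}\,{\texttt {X}}_3^+ \,\texttt {\}}\texttt {;}\,\texttt {case\; inp[}{\texttt {X}}_2\texttt {]=a\!:}\;\texttt {\{}\, ({\texttt {X}}_1 \, \texttt {to}\, {\texttt {Z}})\texttt {;}\,{\texttt {X}}_1^+ \,\texttt {\}} \end{aligned}$$

The loop uses \({\texttt {X}}_3\) as a position counter and scans the positions \(0,\ldots ,b-2\): whenever it sees the symbol b, it sets the top of \({\texttt {X}}_1\) to 1 by the idiom \(({\texttt {X}}_1 \, \texttt {to}\, {\texttt {Z}})\texttt {;}\,{\texttt {X}}_1^+\) (with \({\texttt {Z}}\) as a trash bin). The final case command sets the flag if the last symbol is a. Hence the program terminates with \({\texttt {X}}_1=\langle 0^*]\) exactly on the strings of the form \(a^*ab\), and with \({\texttt {X}}_1=\langle 1, 0^*]\) otherwise.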

Definition 7

A language L is a set of strings over the alphabet \(\{a,b\}\); moreover, every string in L is of length at least 2.

An \(\texttt {RBS}'\) program C decides the language L if C accepts every string that belongs to L and rejects every string that does not belong to L. Let \(\mathcal {S}'\) be the class of languages decidable by an \(\texttt {RBS}'\) program.

Let |w| denote the length of the string w. For any natural number k, let \(\text {P}_k\) be the class of languages decidable in time \(O(|w|^k)\) on a deterministic Turing machine. Let \(\text {P}= \bigcup _{i\in \mathbb N} \text {P}_i\).   \(\square \)

Figure 4 shows an \(\texttt {RBS}'\) program which decides the language given by the regular expression \(a^*ab\).

The proof of the next theorem is very similar to the proof of Theorem 6, and the reader should be able to provide the details. Just recall that the execution base of an \(\texttt {RBS}'\) program is set to the length of the input. Hence, the number of primitive instructions executed by an \(\texttt {RBS}'\) program will be bounded by \(|w|^k\) where |w| is the length of the input w and k is a sufficiently large constant, and moreover, an \(\texttt {RBS}'\) program of the form

$$\begin{aligned} \begin{array}{l} \texttt {loop}\,{\texttt {Y}}_1 \, \texttt {\{}\, \texttt {loop}\,{\texttt {Y}}_2 \, \texttt {\{}\, \ldots \texttt {loop}\,{\texttt {Y}}_k\, \texttt {\{}\; \langle \ldots \text { a list of transitions }\ldots \rangle \; \texttt {\}} \ldots \texttt {\}} \texttt {\}}. \end{array} \end{aligned}$$

will execute \(\langle \ldots \text { a list of transitions }\ldots \rangle \) exactly \(|w|^k\) times if each of the variables \({\texttt {Y}}_1,\ldots , {\texttt {Y}}_k\) holds a stack whose top element is |w|.

Theorem 8

\(\mathcal {S}' = \text {P}\).

6 Some Final Remarks

We have argued that there is a link between implicit computational complexity theory and the theory of reversible computation, and we have shown that both \(\text {ETIME}\) and \(\text {P}\) can be captured by inherently reversible programming languages. In general, implicit characterizations are meant to shed light on the nature of complexity classes and the many notoriously hard open problems involving such classes. Implicit characterizations by reversible formalisms might yield some new insights in this respect. It is beyond the scope of this paper to discuss or interpret the theorems proved above any further, but one might start to wonder how different aspects of reversibility relate to time complexity, space complexity and nondeterminism.

The author is not aware of any work in reversible computing that is closely related to the work presented above, but some work of Matos [14] is at least faintly related. Matos characterizes the primitive recursive functions by an inherently reversible loop-language. Paolini et al. [15] also characterize the primitive recursive functions by a reversible formalism. Their work is of a recursion-theoretic nature and has a different flavor than ours, but it is possible that such studies might lead to interesting characterizations of complexity classes.

We finish off this paper by suggesting a small research project. It should be possible to extend \(\texttt {RBS}\) to an inherently reversible higher-order language. First-order programs will be like the ones defined and explained above. Second-order programs will manipulate stacks of stacks, third-order programs will manipulate stacks of stacks of stacks, and so on. This will induce a hierarchy: the class of problems decidable by a first-order \(\texttt {RBS}\) program, the class of problems decidable by a second-order \(\texttt {RBS}\) program, ...by a third-order \(\texttt {RBS}\) program, and so on. By the same token, \(\texttt {RBS}'\) will induce a hierarchy: the class of languages decidable by a first-order \(\texttt {RBS}'\) program, the class of languages decidable by a second-order \(\texttt {RBS}'\) program, and so on. These two hierarchies should be compared to the alternating time-space hierarchies studied in Goerdt [4], Jones [6], Kristiansen and Voda [10] and many other papers.