
1 Introduction

Edward Fredkin, Tommaso Toffoli, Norman Margolus, Elwyn Berlekamp, and John Conway are fully recognized founders of the field of collision-based computing. The term “collision-based computing” was first used in Adamatzky (2002a). There are, however, several equivalent but lesser-used terms such as “signal computing,” “ballistic computing,” “free space computing,” and “billiard ball computing.” The idea of collision-based computing grew out of studies of collisions of signals traveling along discrete chains, i.e., in one-dimensional cellular automata. The interaction of signals traveling along one-dimensional conductors had been a famous problem in physics, biology, and physiology for centuries, and the problem of interaction was interpreted in terms of finite state machines in the 1960s. The earliest computer science-related results on signal interaction can be attributed to:

  • Atrubin (Atrubin 1965), who designed the first-ever multiplier based on a one-dimensional cellular automaton in 1965;

  • Fischer (Fischer 1965), who developed a cellular automaton generator of prime numbers in 1965; and

  • Waksman (Waksman 1966), who initiated the very popular firing squad synchronization problem and provided an eight-state solution, in 1966.

Banks (1971) showed how to build wires and simple gates in configurations of a two-dimensional binary-state cellular automaton. This was not architecture-free computing, because a wire was represented by a particular stationary configuration of cell states (this was rather a simulation of a “conventional” electrical, or logical, circuit). However, Banks's design was a huge influence on the theory of computing in cellular automata and beyond.

In 1982, Elwyn Berlekamp, John Conway, and Richard Guy proved that Game of Life “can imitate computers” (Berlekamp et al. 1982).

They mimicked electric wires by lines “along which gliders travel” and demonstrated how to build a logical gate by crashing gliders into one another. Chapter 25 of their “Winning Ways” (Berlekamp et al. 1982) demonstrates computing designs that not only look fresh 20 years later but are still rediscovered again and again by Game of Life enthusiasts all over the Net.

Berlekamp, Conway, and Guy employed a vanishing reaction of gliders – two crashing gliders annihilate each other – to build a NOT gate. They adopted Gosper's eater to collect garbage and to destroy glider streams. They used combinations of glider guns and eaters to implement AND and OR gates, and the shifting of a stationary pattern (a block) by a mobile pattern (a glider) when designing auxiliary storage of information.

There is even the possibility that space–time itself is granular, composed of discrete units, and that the universe, as Edward Fredkin of M.I.T. and others have suggested, is a cellular automaton run by an enormous computer. If so, what we call motion may be only simulated motion. A moving particle in the ultimate microlevel may be essentially the same as one of our gliders, appearing to move on the macro-level, whereas actually there is only an alteration of states of basic space–time cells in obedience to transition rules that have yet to be discovered. – Berlekamp et al. (1982).

Meanwhile, in 1978, Edward Fredkin and Tommaso Toffoli submitted a 1-year project proposal to DARPA, which got funding and thus started a chain of remarkable events.

Originally, Fredkin and Toffoli aimed to “drastically reduce the fraction of” energy “that is dissipated at each computing step” (Fredkin and Toffoli 2002). To design a non-dissipative computer they constructed a new type of digital logic – conservative logic – that conserves both “the physical quantities in which the digital signals are encoded” and “the information present at any moment in a digital system” (Fredkin and Toffoli 2002).

Fredkin and Toffoli (1982) further developed these ideas in the seminal paper “Conservative Logic,” from which a concept of ballistic computers emerged. The Fredkin–Toffoli model of conservative computation – the billiard ball model – explores “elastic collisions involving balls and fixed reflectors.” Generally, they proved that “given a container with balls one can do any kind of computation.”

The billiard ball model became a masterpiece of cellular automaton theory, thanks to Norman Margolus who invented a cellular automaton (block cellular automata or partitioned cellular automata) implementation of the model. Norman published this result in 1984 (Margolus 1984). “Margolus neighborhood” and “billiard ball model cellular automata” are exploited widely nowadays.

This chapter does not provide a detailed account of collision-based computing; there is no excuse for avoiding the original sources (Berlekamp et al. 1982; Fredkin and Toffoli 1982; Margolus 1984). A comprehensive, self-contained report on the modern state of collision-based computing is given in the book by Adamatzky (2002a). The present chapter rather discusses personal experience in designing collision-based computing schemes in one- and two-dimensional cellular automata and in spatially extended nonlinear media. It also provides some hands-on examples of recently discovered collision-based computing devices, in the hope that the examples will help readers to experiment with their own designs.

The chapter is structured as follows. Principles of collision-based computing are outlined in Sect. 2. Section 3 shows how basic logical gates can be implemented by colliding localizations in natural systems – a simulated reaction–diffusion medium (Sect. 3.1) and a light-sensitive Belousov–Zhabotinsky medium (Sect. 3.2). The theoretical foundations of computing with signals in 1D cellular automata are presented in Sect. 4, including a collision-based implementation of a 1D Turing machine (Sect. 4.2.1) and of cyclic tag systems (CTS) (Sect. 4.2.2). The excursion into architecture-free computing is completed with abstract geometrical computation in Sect. 5, where time and space are continuous and particles/localizations are dimensionless.

2 Principles of Collision-Based Computing

This section attempts to summarize all types of collision-based computers. A collision-based computer is an empty space populated with mobile and stationary localizations. The mobile localizations used for computing so far are:

  • Billiard balls (Fredkin and Toffoli 1982)

  • Gliders in cellular automata models (Berlekamp et al. 1982; Adamatzky and Wuensche 2007; Delorme and Mazoyer 2002; Rennard 2002; Rendell 2002)

  • Solitons in nonlinear media (Jakubowski et al. 1996, 2001; Steiglitz 2001; Anastassiou et al. 2001; Rand et al. 2005; Rand and Steiglitz 2009), and

  • Localized wave fragments in excitable chemical media (Adamatzky 2004; Adamatzky and De Lacy Costello 2007)

Examples of stationary localizations are:

  • “Still lives,” blocks, and eaters in Conway's Game of Life (Berlekamp et al. 1982; Rendell 2002)

  • Standing waves in automaton models of reaction–diffusion systems (Adamatzky and Wuensche 2007) and

  • Breathers in computing devices based on polymer chains and oscillons in vibrating granular materials (Adamatzky 2002b)

Usually mobile localizations represent signals and stationary localizations are used to route the signals in the space; however, by allowing balls and mirrors to change their states, in addition to velocity vectors, one can, in principle, build multivalued logic circuits.

Classical examples of collision-based gates are shown in Fig. 1 . The interaction gate involves two balls representing the values of variables x and y (Fig. 1a ). If balls are present on both input trajectories – both variables have the value Truth – the balls collide and deflect. The deflected trajectories of the balls represent the conjunction of the input variables, xy. If only one ball – say the ball corresponding to the variable x – is present initially, then this ball travels along its original trajectory and does not change its velocity vector. The undisturbed trajectories of the balls represent the logical functions \(\overline{x}y\) and \(x\overline{y}\) (Fig. 1a ).

Fig. 1

Basics of billiard ball model: Fredkin–Toffoli interaction gate (a) and switch gate (b). (From Fredkin and Toffoli 1982.)

The switch gate (Fig. 1b ) is another famous example, which also demonstrates the role of mirrors (stationary localizations). In the switch gate, the signal x is conditionally routed by a control signal c. If the control signal c is not present, the ball x continues along its original trajectory, traveling southeast. If the signal c is present in the system, the ball x is delayed and its trajectory is shifted eastward. After the collision, the balls x and c are reflected, but their further propagation is restricted by mirrors: the ball c collides with the northern mirror and the ball x with the southern mirror (Fig. 1b ).
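In boolean terms, the behavior of these two gates can be sketched as follows (a minimal sketch: the function names are illustrative, and the output ordering of the switch gate, 〈c, cx, \(\overline{c}\)x〉, follows the usual conservative-logic convention and is an assumption here):

```python
def interaction_gate(x: bool, y: bool):
    """Fredkin-Toffoli interaction gate: the undisturbed trajectories
    carry x AND NOT y and NOT x AND y; the two deflected trajectories
    each carry the conjunction x AND y."""
    return (x and not y, x and y, x and y, (not x) and y)

def switch_gate(c: bool, x: bool):
    """Switch gate: the control ball c routes the signal ball x onto one
    of two output paths; the control ball itself is conserved."""
    return (c, c and x, (not c) and x)
```

In every input combination the number of True outputs equals the number of True inputs, reflecting the conservation of balls in the billiard ball model.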

3 Collision-Based Computing in Natural Systems

Those focused on implementation issues may wonder how a scheme of collision-based computing can be implemented in natural, chemical, physical, or biological materials. This section provides two examples of computing with localizations in spatially extended quasi-chemical (reaction–diffusion cellular automata) and excitable chemical (Belousov–Zhabotinsky reaction) systems.

3.1 Computing Schemes in Reaction–Diffusion Cellular Automata: Spiral Rule

The reaction–diffusion cellular automaton Spiral Rule (Adamatzky and Wuensche 2007) exhibits a wide range of mobile and stationary localizations, thus offering unique opportunities to employ both traveling and still patterns in a computation process.

The automaton is designed to emulate the nonlinearity of activator (A) and inhibitor (I) interactions for subthreshold concentrations of the activator. The following quasi-chemical reactions were used to construct the cell-state transition rules (Adamatzky and Wuensche 2007):

$$\begin{array}{lll}
A + 6S \rightarrow A & A + I \rightarrow I & A + 3I \rightarrow I \\
 & A + 2I \rightarrow S & 2A \rightarrow I \\
 & 3A \rightarrow A & \beta A \rightarrow I \\
 & I \rightarrow S &
\end{array}$$

For subthreshold concentration of the inhibitor and threshold concentrations of activator, the activator is suppressed by the inhibitor. For critical concentrations of the inhibitor, both inhibitor and activator dissociate, producing the substrate.

The quasi-chemical reactions are mapped to cellular automaton rules as follows. Take a totalistic hexagonal cellular automaton (CA), where a cell takes three states – substrate S, activator A, and inhibitor I – and the cell updates its state depending just on the numbers of each cell-state in its neighborhood. The update rule can be written as follows:

$$x^{t+1} = f\left(\sigma_{I}(x)^{t},\sigma_{A}(x)^{t},\sigma_{S}(x)^{t}\right)$$

where \(\sigma_{p}(x)^{t}\) is the number of cell x's neighbors (in the seven-cell neighborhood) with cell-state p ∈ {I, A, S} at time step t. The rule is compactly represented as a matrix \(M = (M_{ij})\), where 0 ≤ i, j ≤ 7, 0 ≤ i + j ≤ 7, and \(M_{ij}\) ∈ {I, A, S} (Adamatzky et al. 2006). The output state of each neighborhood is given by the row-index i (the number of neighbors in cell-state I) and the column-index j (the number of neighbors in cell-state A). One does not have to count the number of neighbors in cell-state S, because it is given by \(7 - (i + j)\). A cell with a neighborhood represented by indices i and j will update to cell-state \(M_{ij}\), which can be read off the matrix. In terms of the cell-state transition function, this can be written as \({x}^{t+1} = {M}_{{\sigma }_{I}{(x)}^{t}{\sigma }_{A}{(x)}^{t}}\). The exact structure of the transition matrix is as follows (Adamatzky and Wuensche 2007):

$$M = \left\{\begin{array}{cccccccc}
S & A & I & A & I & I & I & I \\
S & I & I & A & I & I & I & \\
S & S & I & A & I & I & & \\
S & I & I & A & I & & & \\
S & S & I & A & & & & \\
S & S & I & & & & & \\
S & S & & & & & & \\
S & & & & & & &
\end{array}\right\}$$

The entry \(M_{01} = A\) symbolizes the diffusion of the activator A, \(M_{11} = I\) represents the suppression of the activator A by the inhibitor I, and \(M_{z2} = I\) (z = 0, …, 5) is the self-inhibition of the activator at particular concentrations. \(M_{z0} = S\) (z = 1, …, 7) means that the inhibitor dissociates in the absence of the activator, and that the activator does not diffuse in subthreshold concentrations; \(M_{zp} = I\), p ≥ 4, is an upper-threshold self-inhibition.
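The table lookup just described is straightforward to code. The sketch below transcribes the transition matrix M and returns the next cell-state from the neighborhood counts (the function name and the string encoding of states are illustrative):

```python
# Spiral Rule transition matrix M[i][j]: the row index i counts neighbors
# in state I, the column index j counts neighbors in state A (i + j <= 7);
# the remaining 7 - (i + j) neighbors are in the substrate state S.
M = [
    list("SAIAIIII"),
    list("SIIAIII"),
    list("SSIAII"),
    list("SIIAI"),
    list("SSIA"),
    list("SSI"),
    list("SS"),
    list("S"),
]

def next_state(n_inhibitor: int, n_activator: int) -> str:
    """Next cell-state, read off the matrix from the neighborhood counts."""
    return M[n_inhibitor][n_activator]
```

For example, `next_state(0, 1)` returns `'A'` (diffusion of the activator) and `next_state(1, 1)` returns `'I'` (suppression of the activator by the inhibitor).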

Starting in a random initial configuration, the automaton will evolve towards a quasi-stationary configuration, with typically two types of stationary localizations, and a spiral generator of mobile localizations, or gliders (Fig. 2 ). The core of a glider-gun is a discrete analog of a “classical” spiral wave. However, at some distance from the spiral wave tip, the wave front becomes unstable and splits into localized wave fragments. The wave fragments continue traveling along their originally determined trajectories and keep their shape and velocity vector unchanged unless disturbed by other localizations. So, the wave fragments behave as in sub-excitable Belousov–Zhabotinsky systems.

Fig. 2

A typical quasi-stable configuration of the CA, which started its development in a random initial configuration (with 1 ∕ 3 probability of each cell-state). Cell-state I (inhibitor) is shown by a black disk, cell-state A (activator) by a circle, and cell-state S (substrate) by a dot. One can see two types of stationary localizations (glider eaters) and a spiral glider-gun, which emits six streams of gliders, at a frequency of one glider per six time steps in each stream. (From Adamatzky and Wuensche 2007.)

Basic gliders, those with one (activator) head, are found in five types (Fig. 3 ), which vary by the number of trailing inhibitors. Three types (G 34, G 24, G 43) alternate between two forms. Two types (G 4, G 5) have just one form. The spiral glider-gun in Fig. 2 emits G 34 gliders. An alternative, low frequency, spiral glider-gun (Wuensche and Adamatzky 2006) (not shown) releases G 4 gliders. These basic gliders, and also a variety of more complicated gliders including mobile glider-guns, are also generated by many other interactions. Stationary localizations, or eaters (Fig. 3i , j), are another important feature of the CA.

Fig. 3

The basic localizations in a Spiral Rule automaton: gliders (a–h) and eaters (i, j). (a–h) Five types of gliders, shown here traveling west, in the direction of their activator head (cell-state A), with a tail of trailing inhibitors made up of several cells in state I. The glider designator G ab refers to the numbers of trailing inhibitors. (a, b) Two forms of glider G 34. (c) Glider G 4. (d) Glider G 5. (e, f) Two forms of glider G 24. (g, h) Two forms of glider G 43. (i) Eater E 3. (j) Eater E 6. (From Adamatzky and Wuensche 2007.)

The principal components of any computing device are the input interface, memory, routers, and logical gates. Readers are referred to the paper Adamatzky and Wuensche (2007) to study possible input functions.

How does one implement a memory in the Spiral Rule CA? The eater E 6 can play the role of a 6-bit flip-flop memory device. The substrate-sites (bit-down) between inhibitor-sites (Fig. 3i , j) can be switched to an inhibitor-state (bit-up) by a colliding glider.

An example of writing one bit of information in E 6 is shown in Fig. 4 . Initially E 6 stores no information. The aim is to write one bit in the substrate-site between the northern and northwestern inhibitor-sites (Fig. 4a ). A glider G 34 is generated (Fig. 4b , c) that travels west. G 34 collides with (or brushes past) the north edge of E 6, resulting in G 34 being transformed to a different type of glider, G 43 (Fig. 4g , h). There is now a record of the collision – evidence that writing was successful. The structure of E 6 now has one site (between the northern and northwestern inhibitor-sites) changed to an inhibitor-state (Fig. 4j ) – a bit was saved.

Fig. 4

Write bit. (a) t. (b) t + 1. (c) t + 2. (d) t + 3. (e) t + 4. (f) t + 5. (g) t + 6. (h) t + 7. (i) t + 8. (j) t + 9. (From Adamatzky and Wuensche 2007.)

To read a bit from the E 6 memory device with one bit-up (Fig. 5a ), one sends glider G 34 to collide with (or brush past) the eater (Fig. 5b ). Following the collision, the glider G 34 is transformed into a different type of basic glider, G 4 (Fig. 5g ), and the bit is erased (Fig. 5j ).

Fig. 5

Read and erase bit. (a) t. (b) t + 5. (c) t + 7. (d) t + 8. (e) t + 9. (f) t + 10. (g) t + 11. (h) t + 12. (i) t + 13. (j) t + 14. (From Adamatzky and Wuensche 2007.)

To route signals, one can potentially employ other gliders to act as mobile reflectors. Figure 6 shows how a glider traveling northwest collides with a glider traveling west, and is reflected southwest as a result of the collision. However, both gliders are transformed to different types of gliders. This is acceptable on condition that both types of gliders represent the same signal, or signal modality.

Fig. 6

Glider reflection. (a) t. (b) t + 1. (c) t + 2. (d) t + 3. (e) t + 4. (f) t + 5. (g) t + 6. (h) t + 7. (i) t + 8. (j) t + 9. (From Adamatzky and Wuensche 2007.)

There are two more gates that are useful in designing practical collision-based computational schemes: the fanout gate and the erase gate. The fanout gate is based on glider multiplication. There are a few scenarios where one glider can be multiplied by another glider (for details see the original beehive rule (Wuensche 2005)); for example, one can make a fanout gate by colliding glider G 34 with glider G 24. The gliders almost annihilate as a result of the collision, but recover into a complicated glider, which splits into three G 5 gliders. To annihilate a glider, one can collide it with the central body of an eater or with another glider; head-on collisions, for example, usually lead to annihilation.

The asynchronous xor gate can be constructed from the memory device in Figs. 4 and 5 , employing the eater E 6 and the glider G 34. The incoming trajectories of the gliders are the inputs x and y of the gate, and the state of the cell that is switched to the inhibitor state by the gliders is the output z of the gate (this cell is shown by ⊗ in Fig. 7a ). As seen in Fig. 4 , when glider G 34 brushes by the eater E 6 it “adds” one inhibitor state to the eater configuration (Fig. 4 , t + 7), and transforms itself into glider G 43. If glider G 34 brushes by E 6 with an additional inhibitor state (Fig. 5 , t), it “removes” this additional state and transforms itself into glider G 4 (Fig. 5 , t + 11).

Fig. 7

Asynchronous xor gate. (a) Position of output cell is shown by ⊗. (b) Operation implemented by the gate, input state G 34 is logical True, output state S is False, output state I is True. (From Adamatzky and Wuensche 2007.)

Assume that the presence of glider G 34 symbolizes input logical True and its absence input False, while the inhibitor state I in cell ⊗ represents output True and the substrate state S output False. The result of the logical operation can be read directly from the configuration of E 6, or by sending a control glider to brush by E 6 and detecting how the glider is transformed. The structure then implements exclusive disjunction (Fig. 7b ). The gate constructed is asynchronous, because the output of the operation does not depend on the time interval between the signals but only on their values: when the inhibitor state is added to or removed from E 6, the configuration of E 6 remains stable and does not change until another glider collides with it.
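Logically, the gate reduces to a one-bit toggle: each glider brushing past E 6 flips the ⊗ cell between substrate and inhibitor, so the final state is the parity, i.e., the exclusive disjunction, of the inputs. A minimal sketch of this behavior (the class name and the boolean encoding of states S and I are illustrative assumptions):

```python
class EaterXor:
    """Models eater E6 as a one-bit toggle: the marked cell starts in
    substrate state S (False); every glider brushing past flips it
    between S and I, so the stored state is the XOR of all inputs."""

    def __init__(self):
        self.inhibitor = False  # substrate S = False, inhibitor I = True

    def glider(self, present: bool) -> bool:
        if present:  # an arriving glider flips the state of the cell
            self.inhibitor = not self.inhibitor
        return self.inhibitor

gate = EaterXor()
gate.glider(True)            # input x = True
result = gate.glider(True)   # input y = True; stored state is x XOR y
```

Asynchrony comes for free: the stored state depends only on how many gliders arrived, not on when they arrived.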

The eater E 6 can take four different configurations resulting from the interactions of gliders brushing past, and there are seven types of gliders produced in collisions with the eater (including some basic types flipped). One therefore can envisage (Adamatzky and Wuensche 2007) that a finite state machine can be implemented in the eater-glider system. The internal state of such a machine is represented by the configuration of the eater, the type of the incoming glider symbolizes the input symbol of the machine, and the type of the outgoing glider represents the output state of the machine.

To construct the full state transition table of the eater-glider machine, the seven types of gliders are collided with the four configurations of the eater and the results of the collisions are recorded. For the sake of compact representation, the configurations of the eater are encoded as shown in Fig. 8 . The gliders are denoted as follows: G 34 as a, G 43 as b, G 5 as c, G 4 as d, G 24 as e, the horizontally flipped glider G 4 as f, and the horizontally flipped glider G 43 as g. The state transition table is shown in Fig. 9 .

Fig. 8

Encoding the internal states of the glider-eater machine in the configuration of eater E 6. (a) α. (b) β. (c) χ. (d) δ.

Fig. 9

The state transition table of the eater-glider machine. Tuple xy, a pair made up of an eater state x and glider state y, at the intersection of the ith row and jth column, signifies that being in state i while receiving input j the machine takes state x and generates output y.

Consider the internal states of the eater-glider machine as unary operators on the set {a, b, c, d, e, f, g}; that is, assume the machine's state is reset to its initial state after each collision with a glider. For example, the unary operator α implements the following transformation: a → b, b → c, c → a, d → a, e → d, f → e, g → e. The operators have the following limit sets: operator α has the limit set {a, b, c}, β the set {c}, χ has two limit sets {a, d} and {b, c}, and operator δ two limit sets {a, b, c, d} and {e, f}. Considering unary operators a, …, g operating on the set {α, β, χ, δ}, one obtains the limit sets shown in Fig. 10 . Many of the operators have more than two limit sets, which may indicate a significant computational potential of the eater-glider machine.
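The limit set of such a unary operator on a finite set can be computed by iterating the operator on the whole set until the image stabilizes. A sketch, checked against operator α as described above (the function name is illustrative):

```python
def limit_set(op):
    """Iterate the operator on its whole domain; for a function on a
    finite set the image shrinks monotonically until it equals the
    union of the operator's limit cycles."""
    s = frozenset(op)
    while True:
        nxt = frozenset(op[x] for x in s)
        if nxt == s:
            return s
        s = nxt

# Operator alpha from the text: a -> b, b -> c, c -> a, d -> a,
# e -> d, f -> e, g -> e.
alpha = {'a': 'b', 'b': 'c', 'c': 'a', 'd': 'a', 'e': 'd', 'f': 'e', 'g': 'e'}
```

Here `limit_set(alpha)` yields {a, b, c}, in agreement with the limit set of α stated above.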

Fig. 10

Limit sets of unary operators a, …, g.

To characterize the eater-glider machine in more detail, a study was conducted to find out what output strings are generated by the machine when it receives the uniform infinite string s^∞, s ∈ {a, …, g}, on its input. These input-string to output-string transformations are shown in Fig. 11 .

Fig. 11

Input string to output string transformations implemented by the eater-glider machine. String s, at the intersection of the ith row and jth column, tells one that being initially in state i and receiving a uniform string j, the machine generates string s.

Input string abcdefg evokes the following output strings when fed into the machine. The machine starting in state α generates string begabac, in state β string dcgabac, in state χ string deccgae, and in state δ string bcccgae.

3.2 Collision-Based Computing in Excitable Chemical Media

This section gives a brief introduction to implementation of collision-based circuits in excitable chemical media, the Belousov–Zhabotinsky (BZ) system. Examples discussed here are based on numerical experiments; see the chapter Reaction–Diffusion Computing of this book for implementation of collision-based circuits in a BZ medium in chemical laboratory conditions. Now, computing is discussed with localized wave fragments in the Oregonator (Field and Noyes 1974; Tyson and Fife 1980) model adapted to a light-sensitive BZ reaction with applied illumination (Beato and Engel 2003; Krug et al. 1990):

$$\frac{\partial u}{\partial t} = \frac{1}{\varepsilon}\left(u - u^{2} - (fv + \phi)\,\frac{u - q}{u + q}\right) + D_{u}\nabla^{2}u, \qquad \frac{\partial v}{\partial t} = u - v$$

where the variables u and v represent local concentrations of bromous acid HBrO2 and the oxidized form of the catalyst ruthenium Ru(III), ε sets the ratio of the time scales of u and v, q is a scaling parameter depending on reaction rates, f is a stoichiometric coefficient, and φ is the rate of light-induced bromide production, proportional to the intensity of illumination (an excitability parameter: moderate light intensity facilitates the excitation process, while higher intensity produces excessive quantities of bromide, which suppress the reaction). It is assumed that the catalyst is immobilized in a thin layer of gel; therefore, there is no diffusion term for v. To integrate the system, one uses the Euler method with a five-node Laplacian operator, time step \(\Delta t = 1{0}^{-3}\), and grid point spacing Δx = 0.15, with the following parameters: \(\phi = {\phi }_{0} + A/2\), A = 0.0011109, φ 0 = 0.0766, ε = 0.03, f = 1.4, q = 0.002.

The chosen parameters correspond to a region of “higher excitability of the sub-excitability regime” outlined in Sendiña-Nadal et al. (2001), which supports propagation of sustained wave fragments (Fig. 12a ). These wave fragments are used as quanta of information in the design of collision-based logical circuits. The waves were initiated by locally disturbing the initial concentrations of species; for example, ten grid sites in a chain are each given the value u = 1.0. This generates two or more localized wave fragments, similar to the counter-propagating waves induced by temporary illumination in experiments. The traveling wave fragments keep their shape for around 4 ⋅ 10^3 to 10^4 steps of simulation (4–10 time units), then decrease in size and vanish. The wave's lifetime is sufficient, however, to implement logical gates; this also allows one not to worry about “garbage collection” in the computational medium.
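The numerical scheme described above can be sketched as follows (a sketch only: the grid size, the zero-flux boundary handling, and the diffusion coefficient D_u = 1 are assumptions not fixed by the text):

```python
import numpy as np

# Two-variable Oregonator for the light-sensitive BZ reaction,
# with the parameter values quoted in the text.
eps, f, q = 0.03, 1.4, 0.002
phi = 0.0766 + 0.0011109 / 2.0   # phi = phi_0 + A/2
dt, dx = 1e-3, 0.15
Du = 1.0                         # diffusion coefficient: an assumption

def laplacian(a):
    """Five-node Laplacian with edge-replicating (zero-flux) boundaries."""
    p = np.pad(a, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * a) / dx**2

def euler_step(u, v):
    """One explicit Euler step; the catalyst v is immobilized in gel,
    so there is no diffusion term for v."""
    du = (u - u**2 - (f * v + phi) * (u - q) / (u + q)) / eps + Du * laplacian(u)
    dv = u - v
    return u + dt * du, v + dt * dv

# Disturb a chain of ten grid sites to launch wave fragments.
u = np.zeros((64, 64))
v = np.zeros((64, 64))
u[32, 27:37] = 1.0
for _ in range(100):
    u, v = euler_step(u, v)
```

In a full simulation one would run for the 4 ⋅ 10^3 to 10^4 steps over which the fragments persist; the short loop here only illustrates the update scheme.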

Fig. 12

Basic operations with signals. Overlay of images taken every 0.5 time units. Exciting domains of impurities are shown in black, inhibiting domains of impurities are shown in gray. (a) Wave fragment traveling north. (b) Signal branching without impurities: a wave fragment traveling east splits into two wave fragments (traveling southeast and northeast) when it collides with a smaller wave fragment traveling west. (c) Signal branching with impurity: a wave fragment traveling west is split by an impurity (shown on the right) into two waves traveling northwest and southwest. (d) Signal routing (U-turn) with impurities: a wave fragment traveling east is routed north and then west by two impurities (shown on the right). An impurity-reflector consists of inhibitory (gray) and excitatory (black) chains of grid sites. (From Adamatzky 2004.)

Signals are modeled by traveling wave fragments (Sendiña-Nadal et al. 2001; Beato and Engel 2003): a sustainably propagating wave fragment (Fig. 12a ) represents the True value of a logical variable corresponding to the wave's trajectory (a momentary wire). To demonstrate that a physical system is logically universal, it is enough to implement negation and conjunction or disjunction in the spatio-temporal dynamics of the system. To realize a fully functional logical circuit, one must also know how to operate input and output signals in the system's dynamics, namely to implement signal branching and routing; delays can be realized via appropriate routing.

One can branch a signal using two techniques. Firstly, one can collide a smaller auxiliary wave with the wave fragment representing the signal; the signal-wave then splits into two signals (these daughter waves shrink slightly to a stable size and then travel with constant shape for a further 4 ⋅ 10^3 time steps of the simulation) and the auxiliary wave annihilates (Fig. 12b ). Secondly, one can temporarily and locally apply illumination impurities in a signal's way to change the properties of the medium and thus cause the signal to split (Fig. 12c ).

A control impurity, or reflector, consists of a few segments of sites for which the illumination level is slightly above or below the overall illumination level of the medium. Combining excitatory and inhibitory segments, one can precisely control the wave's trajectory, for example, determining a U-turn of a signal (Fig. 12d ).

A typical billiard ball model interaction gate (Fredkin and Toffoli 1982; Margolus 1984) has two inputs – x and y, and four outputs – \(x\overline{y}\) (ball x moves undisturbed in the absence of ball y), \(\overline{x}y\) (ball y moves undisturbed in the absence of ball x), and twice xy (balls x and y change their trajectories when they collide with each other). It was not possible to make wave fragments implement exact billiard-ball gates, because the interacting waves either fused or one of the waves was annihilated as a result of the collision with the other wave.

However, a BZ (nonconservative) version of a billiard-ball gate with two inputs and three outputs was implemented, with just one xy output instead of two. This BZ collision gate is shown in Fig. 13 .

Fig. 13

Two wave fragments undergo angle collision and implement the interaction gate \(\langle x,y\rangle \rightarrow \langle x\overline{y},xy,\overline{x}y\rangle\). (a) In this example x = 1 and y = 1, both wave fragments are present initially. Overlay of images taken every 0.5 time units. (b) Scheme of the gate. In the upper left and bottom left corners of (a) one sees domains of wave generation; two echo wave fragments are also generated, which travel outward from the gate area and thus do not interfere with the computation. (From Adamatzky 2004.)

The rich dynamics of the BZ medium also allow one to implement complicated logical operations in a single interaction event. An example of a composite gate with three inputs and six outputs is shown in Fig. 14 . As one sees in Fig. 14 , some outputs, for example \(\overline{x}yz\), are represented by gradually vanishing wave fragments. The situation can be dealt with either by using a very compact architecture of the logical gates or by installing temporary amplifiers made from excitatory fragments of illumination impurities.

Fig. 14

Implementation of \(\langle x,y,z\rangle \rightarrow \langle x\overline{y},\overline{y}z,xyz,\overline{x}yz,\overline{x}y\overline{z},xy\overline{z}\rangle\) interaction gate. Overlay of images of wave fragments taken every 0.5 time units. The following combinations of input configuration are shown: (a) x = 1, y = 1, z = 0, north–south wave collides with east–west wave. (b) x = 1, y = 1, z = 1, north–south wave collides with east–west wave, and with west–east wave. (c) x = 1, y = 0, z = 1, west–east and east–west wave fragments pass near each other without interaction. (d) x = 0, y = 1, z = 1, north–south wave collides with east–west wave. (e) Scheme of the gate. (From Adamatzky 2004.)

As known from results of computer simulations and experimental studies, classical excitation waves merge or annihilate when they collide with one another. This may complicate the implementation of nontrivial logical circuits in classical excitable media. Wave fragments, however, behave a bit differently, more like quasi-particles.

In computational experiments with exhaustive analysis of all possible collisions between localized wave fragments (Adamatzky and De Lacy Costello 2007), all the collisions are classified as (Fig. 15 ): quasi-elastic reflection of wave fragments (Fig. 15a ), pulling and pushing of a wave fragment by another wave fragment (Fig. 15b , c), sliding of one wave fragment along the refractory trail of another fragment (Fig. 15d ), and translation of a wave fragment along one axis by another wave fragment (Fig. 15e ). Examples of two types of collision are shown in Fig. 16 .

Fig. 15

Schematic representation of interaction between wave-fragments. (a) Reflection. (b) Attraction. (c) Repulsion. (d) Sliding. (e) Shifting-sliding. (From Adamatzky and De Lacy Costello 2007.)

Fig. 16

(a–d) Sliding: snapshots of a wave fragment moving southwest colliding with a wave fragment traveling southeast. The center of the initial excitation of the southwest wave fragment is shifted southward by 22 sites. (From Adamatzky and De Lacy Costello 2007.) (e–h) Slide-shift: snapshots of a wave fragment moving southeast colliding with a wave fragment traveling west.

4 One-Dimensional Cellular Automata

This section deals with cellular automata in general and discrete signals in particular. It is shown both how they compute in the classical sense and how they can be used to implement phenomena relevant to massively distributed computing that have no sequential counterpart.

Cellular automata (CA) were introduced as a discrete model for parallel, local, and uniform phenomena, whether of engineering, physical, or biological nature. They often provide a medium where signals naturally appear and are widely used to understand the global dynamics. On the other hand, correct handling of such signals is the key to computing and to designing special-purpose CA.

A cellular automaton works in the following way. Space is regularly partitioned into cells. All the cells are identical and arranged in an infinite array (with as many dimensions as the modeled space). Each cell can be in one of finitely many states and evolves according to its own state and the states of the surrounding cells – locality. All the cells obey the same rule – uniformity. The cells are all updated synchronously, like a massively parallel process sharing a single clock. In essence, CA form a discrete model: discrete time, discrete space, and discrete values.

Generally, the array is considered to extend infinitely in all directions so that there are no boundaries to deal with. However, borders can be encoded using special states that never change and make the two sides independent. Another way to simulate a CA on a computer is to consider that, outside a finite range, all cells are in the same stable state (called a quiescent state). Finite/periodic configurations can also be handled by arranging the cells on a ring (or torus): the last and first cells are then neighbors.
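The update scheme above is short enough to sketch in code. The following is a minimal illustration (not taken from the cited works) of an elementary two-state, radius-1 CA on a ring, so the first and last cells are neighbors and no border handling is needed:

```python
def step(cells, rule):
    """One synchronous update of elementary CA `rule` on a ring of cells."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Rule 54 (the rule filtered in Fig. 17) started from a single live cell.
cells = [0] * 15
cells[7] = 1
for _ in range(5):
    print("".join(".#"[c] for c in cells))
    cells = step(cells, 54)
```

The Wolfram rule number acts as a lookup table: the 3-bit neighborhood of each cell selects one bit of the number, which becomes the cell's next state.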

The collision-based systems presented previously are CA or, more accurately, are simulated by CA; some of them work on hexagonal lattices. Computation in dimension 2 and above can be done straightforwardly, as already presented, by bit encoding and implementation of logic gates.

From now on, only one-dimensional space is considered because, on the one hand, higher dimensions can – up to a point – be treated alike or correspond to what has been exemplified in previous sections, and, on the other hand, dimension 1 is particularly restrictive and needs special focus.

The reader interested in CA might consult Ilachinski (2001), Kari (2005), Sarkar (2000) for various topics not covered here.

4.1 Signals in CA

In space–time diagrams, or orbits, configurations are concatenated as iterations go by. They are very important for comprehending the dynamics. Since a temporal coordinate is added, this leads to an array with one more dimension than the underlying space.

A signal is anything periodic in the space–time diagram. If the periodicity is in two directions, then it can fill the whole space–time diagram and is referred to as a background, the substratum upon which signals move. The frontier between two backgrounds is a signal as long as it is periodic. Figure 17 provides an example with a complex background (the words 0111, 1000, 1101, and 0010 repeating in alternation) on the left. On the right, signals are revealed by an ad hoc filtering. This example illustrates the variety in width and pattern of signals. As can be seen, collisions between signals can be pretty complicated.

Fig. 17

Filtering to exhibit signals and backgrounds in rule 54 (inspired by Boccara et al. 1991, Fig. 7). Time increases upwards.

4.2 Computing in One-Dimensional Systems

In dimension two and above, as soon as it is possible to carry information around (with signals), to let information cross and be duplicated, and to process it (with collisions) in a way that yields a sufficiently large set of logical gates, computation is straightforwardly possible.

In dimension 1, this is not so easy, because there is no way for a signal to go around another one or around some static artifact. This means that signal propagation is a real issue and, for example, handling garbage signals can be cumbersome. One way to cope with this is to have various kinds of signals that are not identically affected by artifacts (some pass unaffected while others interact).

This means that to use the bits-and-gates scheme, one has to be very careful with the available signals and position the logic circuitry very cleverly, since the circuit is always thought of as two dimensional (time often provides the extra dimension for displaying it). For example, Ollinger (2002) provides a small CA able to compute via a circuit implementation in which laying out the circuit is not straightforward.

Another way to tackle computation is to use other formalisms. One classical way is to go back to Turing machines; another is to implement minimal rewriting systems such as cyclic tag systems.

4.2.1 Turing Machine

A Turing machine is a finite automaton acting on a potentially infinite tape of symbols. The computation starts with the automaton in its initial state and the input written on the tape. The tape is accessed through a read/write head. At each iteration, the automaton reads the symbol under the head, writes a symbol, changes its state, and moves the head left or right. A Turing machine is basically as one dimensional as its tape, so it naturally fits on a one-dimensional CA.

The simulation is rather simple: signals encoding the symbols are motionless, and one signal representing the state of the automaton moves around, updating the symbols. Collisions are as follows: a moving state signal meets a stationary symbol signal, leaving a new stationary symbol and a new moving state. (Figure 24 in the next section provides a clear illustration of this in a continuous setting.) This was implemented by, for example, Lindgren and Nordahl (1990).
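This signal picture can be sketched directly: stationary symbols are held in a dictionary while a single (state, position) pair plays the moving signal. The unary-incrementer rule table below is invented for illustration, not taken from Lindgren and Nordahl:

```python
def run_tm(tape, rules, state="s", pos=0, halt="halt"):
    """Run until `halt`; rules: (state, symbol) -> (new_symbol, move, new_state)."""
    tape = dict(enumerate(tape))          # stationary "symbol signals", blank = "_"
    while state != halt:
        sym = tape.get(pos, "_")
        new_sym, move, state = rules[(state, sym)]
        tape[pos] = new_sym               # the collision rewrites the stationary signal
        pos += {"L": -1, "R": 1}[move]    # the moving state signal steps on
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# Hypothetical machine: append one "1" to a unary number.
inc = {("s", "1"): ("1", "R", "s"),
       ("s", "_"): ("1", "R", "halt")}
print(run_tm("111", inc))
```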

4.2.2 Cyclic Tag Systems (CTS)

Another way to define computation is to rely on a word that is rewritten until the system halts. The input is given as the starting word and the output is the final word. The connection with Turing machines is straightforward when considering the tape as a finite word and the state of the automaton as an extra symbol at the head.

Cyclic tag systems (CTS) are a particular case of the many Turing-universal rewriting systems. A CTS operates on a binary word together with a circular list of binary words (appendants). At each iteration, the first bit is removed from the word; if it is 1, then the first appendant of the list is added at the end of the word; then the list is rotated. The computation stops when the word is empty or a special halting word (denoted h in the example) is activated. An example is provided in Fig. 18.

Fig. 18

Example of computation of a CTS (given on the first line).
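The rewriting loop just described is short enough to sketch; the word and appendants below are illustrative and are not the exact example of Fig. 18:

```python
from collections import deque

def run_cts(word, appendants, max_steps=100):
    """Return the trace of words; stop on empty word or step limit."""
    apps = deque(appendants)
    trace = [word]
    for _ in range(max_steps):
        if not word:
            break
        bit, word = word[0], word[1:]    # strip the first bit
        if bit == "1":
            word += apps[0]              # active production: append the appendant
        apps.rotate(-1)                  # rotate to the next appendant
        trace.append(word)
    return trace

for w in run_cts("1", ["10", "0"], max_steps=4):
    print(w)
```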

Cyclic tag systems were proven to be able to perform any computation (Cook 2004) and even to do so in polynomial time (Woods and Neary 2006), and they were used to build very small universal Turing machines (Neary and Woods 2009). They have been implemented in one-dimensional CA with collision/signal-based computing by Cook (2004) to produce a computation-universal CA with only two states (it is impossible to compute with fewer). The construction relies on moving and colliding signals that are identified and classified in order to work at a more abstract level.

As far as minimal CA are concerned, it is worth mentioning that four states are enough to get a one-dimensional CA, which is able to simulate any other one-dimensional CA (Richard and Ollinger 2008). This is not Turing universality since the simulation encompasses infinite configurations.

4.3 Signal-Specific Issues

Since CA form a model for physical phenomena and also for parallel computing architectures, other issues arise. One is to understand the underlying dynamics when used as a discrete model. The other is to design CA for special purposes.

4.3.1 Signals to Understand Dynamics

Once a phenomenon is simulated as a CA, one common way to understand the dynamics is to find regularities in the space–time diagram and observe them. They are often the key to the underlying dynamics and to predictions. (Although it might happen that what is observed is nothing but an artifact of the model and has no counterpart in reality.) This is made clear by considering two examples.

The first one models sand dripping in a one-dimensional space. The dynamics are quite simple: the initial configuration is empty, and at each iteration one grain is added to the first pile/cell. Each time a pile has at least two more grains than the next one, a grain falls.

If only grains dropped at odd times are colored black, then their positions after 10,000 iterations are as in Fig. 19a, and only the top grains will ever move. This strange arrangement, as well as the two different slopes and the precise long-term behavior, is explained by signals (Durand-Lose 1996, 1998). These signals can be identified in Fig. 19b, where the successive configurations (iterations 100–150) are set one after the other to form a volume. On this volume, triangles can be seen on the lower (as well as the upper) parts, with their frontiers as signals (which can be revealed with an appropriate filtering).

Fig. 19

One-dimensional sand dripping. (a) Dotting even grains (Durand-Lose 1996, Fig. 6) at iteration 10,000. (b) Successive configurations (Durand-Lose 1996, Fig. 3).
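The dripping dynamics can be sketched in a few lines; the synchronous update order below is an assumption, since the description does not fix the exact timing of falls:

```python
def drip(steps, width=40):
    """One-dimensional sand dripping: drop one grain per step on pile 0."""
    piles = [0] * width
    for _ in range(steps):
        piles[0] += 1
        # Synchronous update (an assumption): every pile at least two grains
        # higher than its right neighbor lets one grain fall this iteration.
        falling = [i for i in range(width - 1) if piles[i] - piles[i + 1] >= 2]
        for i in falling:
            piles[i] -= 1
            piles[i + 1] += 1
    return piles

print(drip(50)[:8])
```

Grains are conserved (each fall only moves a grain one cell to the right), so the profile slowly spreads while staying non-increasing.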

Another example is provided by Das et al. (1995). The aim is to generate a CA with two states and a five-closest-cell neighborhood such that, on a ring of any size, whatever configuration it is started from, it always ends up blinking: all cells are 0, then all cells are 1, alternately, forever. Instead of trying to build it directly, evolutionary programming is used: random CA are generated; they are ranked according to their “blinking capability”; then the best are kept, recombined, and mutated to form the next generation. The process cycles through ranking and generation until a “fully blinking” CA emerges.

The obtained CA is analyzed in terms of signals (again with some filtering), as illustrated in Fig. 20. (The apparent non-connectivity of some signals comes from the filtering and from the fact that cells can influence one another up to distance 2.) The evolutionary process goes through steps where various intermediate levels of “blinking” appear. Analyzing typical members of each level reveals the progressive appearance of signals and collision rules.

Fig. 20

Filtering to highlight and analyze signals (from Das et al. 1995, Fig. 1). Time increases downward. (a) Space–time diagram. (b) Filtered space–time diagram.

It is very important to notice how, in a context where signals were not explicitly asked for, they indeed appear and are the key to both the desired dynamics and the evolutionary process itself.

The previous example does not compute in the classical understanding; nevertheless, it provides a relevant dynamical global property.

4.3.2 Signals to Generate a Particular Behavior

Computing, in the usual understanding, is a special behavior. But CA can also be thought of as a computing model in their own right. Primitives can then be specially defined to take advantage of the huge parallelism. Signals are the key to managing it.

As already cited, the pioneering work of Fischer (1965) generates prime numbers using a parallel implementation of the sieve of Eratosthenes, where primes are indicated by the absence of a signal on the first cell at the corresponding times. Prime numbers can then be read off the space–time diagram by observing the state of the first cell.

Another complex behavior is the famous firing squad synchronization problem (FSS). Starting from all but one cell in a quiescent state, one wants all the cells to simultaneously enter the same state – one that has not been used before. It is as if they all blinked for the very first time, synchronously.

Each example in Fig. 21 shows the Euclidean conception and then the discrete implementation of an FSS solution. Both constructions rely on recursive cuts in half. In the discrete implementation, at some point the granularity of space is reached – a cell cannot be divided. Since CA are synchronous, this point is reached simultaneously everywhere, ensuring the global synchronization of the entire array of cells.

Fig. 21

FSS implementations. (a) Divide and conquer in 3n steps (Yunès 2007a, Fig. 2.3). (b) Eight states and 4n steps (Yunès 2007b, Fig. 1).

4.3.3 Signals as a Computational Model on Its Own

Mazoyer et al. developed a computing model where multiplication (as in Fig. 22a), composition (as in Fig. 22b), and even iteration are achieved graphically. The programming system uses a trellis of adaptable size that can be dynamically generated. The computation is carried out over the trellis. Composition is then achieved by having the next computation displayed on a new trellis. Recursion is provided by a scheme that dynamically starts another iteration, each time providing an adapted trellis.

Fig. 22

Mazoyer's system. Time is increasing upwards. (a) Multiplication (Mazoyer 1996, Fig. 4). (b) Composition of multiplications (Mazoyer 1996, Fig. 8).

5 Abstract Geometrical Computation

Abstract geometrical computation (AGC) is an idealization of collision-based computing where particles/signals are dimensionless and time and space are continuous. Although the number of signals in a bounded space at any given time is finite, it is unbounded. Another way to consider AGC is as a continuous counterpart of CA, as the limit when the size of the cells tends to zero. In CA, signals are almost always the key to understanding and designing, and, indeed, in the literature, the discreteness of CA and of their signals is often put aside to reason at a more abstract level and relegated to technical details.

In abstract geometrical computation, dimensionless signals move at constant speed. When they meet, they are replaced by others according to rewriting rules. A signal machine (SM) gathers the definitions of the available kinds of signals (called meta-signals), their speeds, and the collision rules. There are finitely many meta-signals, and each is assigned a constant speed (velocity and direction). The constant speed may seem surprising, but each discrete CA signal indeed has a constant speed.

The space–time diagrams generated are continuous in both space and time, and the traces of signals form line segments. For a given meta-signal, all these segments are parallel. In this section, only one-dimensional AGC is considered, so that the space–time diagram is always two dimensional, with time increasing upwards, as illustrated in Fig. 23.

Fig. 23

Example of a space–time diagram.
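Because every meta-signal has a constant speed, the next event of a signal machine can be found by elementary geometry. The sketch below, invented for illustration, computes where two signals given as (position, speed) pairs meet:

```python
def meeting(s1, s2):
    """Time and place where two (position, speed) signals meet, or None."""
    (x1, v1), (x2, v2) = s1, s2
    if v1 == v2:
        return None                  # same speed: parallel traces never meet
    t = (x2 - x1) / (v1 - v2)        # solve x1 + v1*t == x2 + v2*t
    return (t, x1 + v1 * t) if t > 0 else None

# A rightward signal from 0 at speed 1 and a leftward one from 3 at speed -2.
print(meeting((0, 1), (3, -2)))
```

A full simulator would repeatedly take the earliest pairwise meeting and apply the corresponding collision rule; only this geometric core is shown here.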

Space–time diagrams can be much more complicated and, as presented below, use continuity to implement the Zeno paradox and emulate the black hole model of computation.

5.1 Computing

Computing, in the classical understanding, is pretty easy. In fact, many constructions for CA translate directly to AGC and are even easier, since discreteness does not have to be taken into account. Two ways to achieve computability are presented: Turing machines and cyclic tag systems.

For Turing machines (as presented in Sect. 4.2.1), the implementation is quite straightforward: static signals encode the tape – one signal per symbol – and one signal encodes the state of the automaton together with the position of the head. The latter moves back and forth from symbol to symbol like the read/write head. This is depicted in Fig. 24. The construction is detailed in Durand-Lose (2009).

Fig. 24

Simulating a Turing machine. (a) TM evolution. (b) TM simulation.

Cyclic tag systems are presented in Sect. 4.2.2. The simulation is done by encoding, left to right, the word and then the circular list of appendants as parallel signals encoding the bits, as shown in Fig. 25a. At each iteration, the list is rotated to the right, but before that, if the erased bit of the word is 1, a copy of the first appendant is left. The copy is added directly at the right of the word, as in Fig. 25b, which presents one full iteration. The full simulation of the example of Fig. 18 is given in Fig. 25c. The rightward drift corresponds to the erasure of the word from the left and to the rightward rotation of the list. Each group of oblique lines corresponds to one rotation of the list.

Fig. 25

Simulating a cyclic tag system. (a) Initial configuration on 101 and 011 :: 10 :: 10 :: 01. (b) Initial configuration and first iteration on 101 and 011 :: 10 :: 10 :: 01. (c) Full simulation on 101 and 011 :: h :: 0110 :: 01011.

This simulation is detailed in Durand-Lose (2008a). The signal machine obtained is able to simulate any CTS, and is thus Turing universal. It is the smallest one known (it has only 13 meta-signals).

5.2 Geometric Primitives

As with CA, AGC can also be considered as a model in its own right, with particular operators that might not have any classical-computation counterpart (like FSS) or discrete counterpart.

All the following constructions involve adding meta-signals and rules to a given SM so that the desired capability exists. Signals have to be added to the initial configuration in order to trigger the effect. Rules can also be modified in order to trigger it dynamically.

Space and time are continuous and scaleless, and since signals have no dimension, there is no limit on scalability. If a configuration is scaled spatially by a given coefficient, the whole computation is also scaled temporally. The duration of a computation is therefore meaningless, and complexity measures such as the maximal number of causally linked collisions have to be used.

Spatial rescaling of the initial configuration is a static operation, but it is also possible to rescale during the computation. The construction relies on the ability to freeze a configuration.

Freezing a computation is quite simple. One signal is added on one side and crosses the entire configuration. Each time it meets a signal (or enters a previously existing collision), it replaces it with another signal encoding the meta-signal. All the encoding signals have the same speed: they are parallel, do not interact, and, moreover, the distance between them is preserved. The configuration is frozen and shifts. To unfreeze it, a signal comes from the side, crosses the configuration, and replaces each encoding signal by the encoded one (or the result of the collision). Freezing and unfreezing signals must have the same speed so that the configuration is restored exactly as it was, up to a translation. (Indeed, they correspond to the same meta-signal, toggle.) This is depicted in Fig. 26.

Fig. 26

Freezing and unfreezing. (a) Normal computation. (b) Translated computation. (c) Example.

While a configuration is frozen into parallel signals, it is possible to act on these signals. One simple trick is to change their direction. In Fig. 27a, this is done twice so as to restore the original direction. (An extra signal is used on the right to delete the structure.) As a result, since different slopes are used to change direction, the distances between signals are scaled (here by one half). By adding freezing and unfreezing signals (automatically fired in due time), a structure that scales down a whole configuration is generated, as can be seen in Fig. 27c.

Fig. 27

Scaling. (a) Principle. (b) Scaled computation. (c) Example.

The initial configuration has been modified in order to start the effect straightaway. It is also possible to fire it dynamically during the computation, when some special collision happens. This is used to iterate it.

5.3 Infinite Acceleration and Non-recursive Function

Scaling can be automatically restarted ad infinitum; since both space and time are continuous, this is not a problem – at least before the singularity. In Fig. 28, a computation producing an infinite trellis, extending indefinitely in both space and time, is folded into the structure. The right signal and the middle one are very important, since the folded computation must remain bounded in order to ensure that it is fully scaled.

Fig. 28

Folding or infinite rescaling. (a) Folding structure. (b) Example.

A singularity is an accumulation of infinitely many collisions and signals at a given location. Figure 28a shows that the structure alone already creates a singularity. This illustrates the Zeno paradox: the leftmost signals of Fig. 28a form an infinite sequence with infinitely (yet countably) many collisions, although the time elapsed, as well as the total distance crossed by the signals, is finite.
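The finiteness of the accumulated time can be checked numerically. Under the simplifying assumption that each successive collision interval is halved (a geometric-series caricature of the folding of Fig. 28a), infinitely many collisions fit into a finite duration:

```python
def accumulation_time(first_interval=1.0, ratio=0.5, n=60):
    """Partial sum of collision times when interval k lasts first_interval * ratio**k."""
    t, dt = 0.0, first_interval
    for _ in range(n):
        t += dt
        dt *= ratio
    return t

# The partial sums approach first_interval / (1 - ratio), here 2.
print(accumulation_time())
```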

All the signals and collisions that would have existed had there been no folding do indeed exist, but in a bounded portion of the space–time diagram. (An equivalence between the plane and a bounded part of it is, in some sense, implemented.) So, up to the presence of the structure, the frozen parts, and the different scales, the computation is the same.

Any computation starting with finitely many signals can be folded. Inside the structure, the rescaling provides a speedup (the closer signals are, the sooner they collide). This speedup is unbounded and goes to infinity: inside the structure there is a space–time whose time-lines are infinitely accelerated compared to the outside. This is the first step in emulating the black-hole model of computation (Hogarth 1994; Etesi and Németi 2002; Lloyd and Ng 2004).

The second step is to find a way for an atomic piece of information to leave the black hole, here the singularity. One meta-signal can be added and rules changed so that the new meta-signal is not affected by the structure but can be generated by the initial computation.

The final step is to add bounding signals on both sides of the folding (called here horizonLe and horizonRi). At their collision point, the folded computation is entirely in the causal past, as displayed in Fig. 29. Any signal leaving the folding will have been collected.

Fig. 29

Framing the folding to collect all signals exiting the folding.

This way the black-hole model is emulated: there are two time-lines, one for the machine and one for the observer. The machine's time-line is infinite. On the observer's time-line, after a finite duration, the whole machine time-line is entirely in the causal past. A single piece of information can be sent by the machine to the observer, and it has to be sent after a finite (machine) duration. The latest date at which the observer can receive anything from the machine is finite and known. To understand the power of this model, imagine that the machine sends a signal only if its computation ends. The observer then receives a signal only if the computation stops, and after the known duration the observer is certain that any signal sent would have been received. So, just by checking its clock, the observer knows that the computation did not halt, whenever that is the case. Altogether, the halting problem (whose undecidability is the cornerstone of Turing computability theory) is decidable!

This model, as well as AGC, clearly computes beyond Turing. In formal terms, it can decide \(\Sigma^0_1\) formulae in the arithmetical hierarchy, that is, recursive/computable predicates prefixed by an existential quantifier. This includes, for example, the consistency of Peano arithmetic and of set theory, and many conjectures such as the Collatz conjecture and Goldbach's conjecture.

Defining what happens at the singularity point is not easy, especially since the accumulating set can be a line segment, a fractal curve, or even a Cantor set! (To understand this, take a continuous look at the FSS: what would happen at the bottom of Fig. 21?) For a singularity at a single point, a careful continuation has been proposed in Durand-Lose (2009) that allows one to climb the arithmetical hierarchy. There is also another use of isolated singularities: as a way to provide limits for analog computation.

5.4 Analog Computation

Unlike collision-based computing and CA, with dimensionless signals in a continuous space it is possible to encode real numbers as distances between signals. In the AGC context, this distance is exact: a real number in exact precision. As long as the signals are parallel, the distance is preserved. This allows one to encode real numbers and to carry out some computations over them.

Since any real number can be encoded exactly, the model ipso facto falls outside classical computability (because of cardinality, there is no way to encode all the reals with natural numbers or finite strings). Therefore, an analog model of computation has to be sought.

With this encoding of real numbers, AGC is equivalent to the linear Blum–Shub–Smale (BSS) model (Blum et al. 1998). In the linear BSS model, variables hold (exact) real numbers, and the available operations are addition, multiplication by a constant, and branching according to the sign of a variable. This equivalence holds as long as the BSS machine has an unbounded number of variables (accessed through a context shift, like moving a window over an infinite array of variables), there are finitely many signals, and there is no singularity (Durand-Lose 2007).
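To make the linear BSS model concrete, here is a toy interpreter; the instruction names and format are invented, and exact rationals via `Fraction` stand in for the model's exact reals. The only operations are addition, multiplication by a machine constant, and branching on sign:

```python
from fractions import Fraction

def run_bss(prog, regs):
    """Interpret a linear BSS-style program over exact numbers.

    Instructions: ('add', i, j, k), ('mulc', i, c, k),
    ('brneg', i, target_if_negative, target_otherwise), ('halt',).
    """
    pc = 0
    while prog[pc][0] != "halt":
        ins = prog[pc]
        if ins[0] == "add":              # regs[k] = regs[i] + regs[j]
            _, i, j, k = ins
            regs[k] = regs[i] + regs[j]
            pc += 1
        elif ins[0] == "mulc":           # regs[k] = c * regs[i]
            _, i, c, k = ins
            regs[k] = c * regs[i]
            pc += 1
        else:                            # branch on the sign of regs[i]
            _, i, neg, other = ins
            pc = neg if regs[i] < 0 else other
    return regs

# Absolute value: negate register 0 when it is negative.
absolute = [("brneg", 0, 1, 2),
            ("mulc", 0, Fraction(-1), 0),
            ("halt",)]
print(run_bss(absolute, [Fraction(-3, 2)]))
```

Note that only multiplication by a *constant* appears, matching the linear model; multiplying two variables is exactly what requires singularities, as described next in the text.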

If singularities are used in a proper way, it becomes possible to multiply two variables. The classical BSS model can then be implemented in AGC (Durand-Lose 2008b). The simulation is no longer possible in the other direction since, for example, exact square roots can also be computed by AGC.

Just as in the discrete case, a proper handling of isolated singularities of any order can be used to decide predicates quantified over natural (but not real) numbers in the BSS model and to climb the BSS arithmetical hierarchy (Durand-Lose 2009).