Abstract
Collision-based computing is an implementation of logical circuits, mathematical machines, or other computing and information-processing devices in homogeneous, uniform, and unstructured media with traveling mobile localizations. A quantum of information is represented by a compact propagating pattern (gliders in cellular automata, solitons in optical systems, wave fragments in excitable chemical systems). Logical truth corresponds to the presence of a localization, logical falsity to its absence; logical values can also be represented by a particular state of a localization. When two or more traveling localizations collide, they change their velocity vectors and/or states. Post-collision trajectories and/or states of the localizations represent the results of logical operations implemented by the collision. One of the principal advantages of a collision-based computing medium – hidden in 1D systems but obvious in 2D and 3D media – is that the medium is architecture-less: nothing is hardwired, there are no stationary wires or gates, and the trajectory of a propagating information quantum can be seen as a momentary wire. The basics of collision-based computing are introduced, and collision-based computing schemes in 1D and 2D cellular automata and continuous excitable media are overviewed. A survey of collision-based schemes where particles/collisions are dimensionless is also provided.
1 Introduction
Edward Fredkin, Tommaso Toffoli, Norman Margolus, Elwyn Berlekamp, and John Conway are fully recognized founders of the field of collision-based computing. The term “collision-based computing” was first used in Adamatzky (2002a). There are, however, several equivalent but lesser-used terms such as “signal computing,” “ballistic computing,” “free space computing,” and “billiard ball computing.” The idea of collision-based computing grew out of studies dealing with collisions of signals traveling along discrete chains, that is, one-dimensional cellular automata. The interaction of signals traveling along one-dimensional conductors has been a famous problem in physics, biology, and physiology for centuries, and the problem of interaction was interpreted in terms of finite state machines in the 1960s. The earliest computer science-related results on signal interaction can be attributed to:
-
Atrubin (Atrubin 1965), who designed the first-ever multiplier based on a one-dimensional cellular automaton in 1965;
-
Fischer (Fischer 1965), who developed a cellular automaton generator of prime numbers in 1965; and
-
Waksman (Waksman 1966), who initiated the very popular firing squad synchronization problem and provided an eight-state solution, in 1966.
Banks (1971) showed how to build wires and simple gates in configurations of a two-dimensional binary-state cellular automaton. This was not architecture-free computing, because a wire was represented by a particular stationary configuration of cell states (this was rather a simulation of a “conventional” electrical, or logical, circuit). However, Banks's design was a huge influence on the theory of computing in cellular automata and beyond.
In 1982, Elwyn Berlekamp, John Conway, and Richard Guy proved that the Game of Life “can imitate computers” (Berlekamp et al. 1982).
They mimicked electric wires by lines “along which gliders travel” and demonstrated how to do a logical gate by crashing gliders into one another. Chapter 25 of their “Winning Ways” (Berlekamp et al. 1982) demonstrates computing designs that do not simply look fresh 20 years later but are still rediscovered again and again by Game of Life enthusiasts all over the Net.
Berlekamp, Conway, and Guy employed a vanishing reaction of gliders – two crashing gliders annihilate each other – to build a NOT gate. They adopted Gosper's eater to collect garbage and to destroy glider streams. They used combinations of glider guns and eaters to implement AND and OR gates, and the shifting of a stationary pattern, or block, by a mobile pattern, or glider, when designing auxiliary storage of information.
There is even the possibility that space–time itself is granular, composed of discrete units, and that the universe, as Edward Fredkin of M.I.T. and others have suggested, is a cellular automaton run by an enormous computer. If so, what we call motion may be only simulated motion. A moving particle in the ultimate microlevel may be essentially the same as one of our gliders, appearing to move on the macro-level, whereas actually there is only an alteration of states of basic space–time cells in obedience to transition rules that have yet to be discovered. – Berlekamp et al. (1982).
Meanwhile, in 1978, Edward Fredkin and Tommaso Toffoli submitted a 1-year project proposal to DARPA, which got funding and thus started a chain of remarkable events.
Originally, Fredkin and Toffoli aimed to “drastically reduce the fraction of” energy “that is dissipated at each computing step” (Fredkin and Toffoli 2002). To design a non-dissipative computer they constructed a new type of digital logic – conservative logic – that conserves both “the physical quantities in which the digital signals are encoded” and “the information present at any moment in a digital system” (Fredkin and Toffoli 2002).
Fredkin and Toffoli (1982) further developed these ideas in the seminal paper “Conservative Logic,” from which a concept of ballistic computers emerged. The Fredkin–Toffoli model of conservative computation – the billiard ball model – explores “elastic collisions involving balls and fixed reflectors.” Generally, they proved that “given a container with balls one can do any kind of computation.”
The billiard ball model became a masterpiece of cellular automaton theory, thanks to Norman Margolus who invented a cellular automaton (block cellular automata or partitioned cellular automata) implementation of the model. Norman published this result in 1984 (Margolus 1984). “Margolus neighborhood” and “billiard ball model cellular automata” are exploited widely nowadays.
A detailed historical account of collision-based computing is not provided here. There is no excuse to avoid reading the original sources (Berlekamp et al. 1982; Fredkin and Toffoli 1982; Margolus 1984). A comprehensive, self-contained account of the modern state of collision-based computing is provided in the book by Adamatzky (2002a). The present chapter rather discusses personal experience in designing collision-based computing schemes in one- and two-dimensional cellular automata and spatially extended nonlinear media. It also provides some hands-on examples of recently discovered collision-based computing devices, in the hope that these examples will help readers to experiment with their own designs.
The chapter is structured as follows. Principles of collision-based computing are outlined in Sect. 2. Section 3 shows how basic logical gates can be implemented by colliding localizations in natural systems – a simulated reaction–diffusion medium (Sect. 3.1) and a light-sensitive Belousov–Zhabotinsky medium (Sect. 3.2). The theoretical foundations of computing with signals in 1D cellular automata are presented in Sect. 4, including collision-based implementations of a 1D Turing machine (Sect. 4.2.1) and cyclic tag systems (CTS) (Sect. 4.2.2). The excursion into architecture-less computing is completed with abstract geometrical computation in Sect. 5, where time and space are continuous and particles/localizations are dimensionless.
2 Principles of Collision-Based Computing
This section attempts to summarize all types of collision-based computers. A collision-based computer is an empty space populated with mobile and stationary localizations. The mobile localizations used for computing so far are:
-
Billiard balls (Fredkin and Toffoli 1982)
-
Gliders in cellular automata models (Berlekamp et al. 1982; Adamatzky and Wuensche 2007; Delorme and Mazoyer 2002; Rennard 2002; Rendell 2002)
-
Solitons in nonlinear media (Jakubowski et al. 1996, 2001; Steiglitz 2001; Anastassiou et al. 2001; Rand et al. 2005; Rand and Steiglitz 2009), and
-
Localized wave fragments in excitable chemical media (Adamatzky 2004; Adamatzky and De Lacy Costello 2007)
Examples of stationary localizations are:
-
“Still lives,” blocks, and eaters in Conway's Game of Life (Berlekamp et al. 1982; Rendell 2002)
-
Standing waves in automaton models of reaction–diffusion systems (Adamatzky and Wuensche 2007) and
-
Breathers in computing devices based on polymer chains and oscillons in vibrating granular materials (Adamatzky 2002b)
Usually mobile localizations represent signals and stationary localizations are used to route the signals in the space; however, by allowing balls and mirrors to change their states, in addition to velocity vectors, one can, in principle, build multivalued logic circuits.
Classical examples of collision-based gates are shown in Fig. 1 . The interaction gate involves two balls representing the values of variables x and y (Fig. 1a ). If both balls are present at the input trajectories – both variables have the value Truth – they collide and are deflected. The deflected trajectories of the balls represent the conjunction of the input variables, xy. If only one ball, say the ball corresponding to the variable x, is present initially, then this ball travels along its original trajectory and does not change its velocity vector. The undisturbed trajectories of the balls represent the logical functions \(\overline{x}y\) and \(x\overline{y}\) (Fig. 1a ).
The switch gate (Fig. 1b ) is another famous example, which also demonstrates the role of mirrors (stationary localizations). In the switch gate, the signal x is conditionally routed by a control signal c. If the signal c is not present, the ball x continues along its original trajectory, traveling southeast. The ball x is delayed and its trajectory is shifted eastward if the signal c is present in the system. After the collision, balls x and c are reflected, but their further propagation is restricted by mirrors: the ball c collides with the northern mirror and the ball x with the southern mirror (Fig. 1b ).
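The logic of the two gates above can be captured in a few lines of code. The sketch below is illustrative only (the function names are not from the literature); it tabulates the output trajectories as Boolean functions of the inputs:

```python
def interaction_gate(x: bool, y: bool):
    """Fredkin-Toffoli interaction gate: two input balls x and y,
    four output trajectories (x AND NOT y, NOT x AND y, and xy twice)."""
    return (x and not y, not x and y, x and y, x and y)

def switch_gate(c: bool, x: bool):
    """Switch gate: control ball c routes signal ball x.
    Outputs: (c, x when c is present, x when c is absent)."""
    return (c, c and x, not c and x)
```

Note that both gates are conservative when inputs are balls: the number of True outputs always equals the number of True inputs.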
3 Collision-Based Computing in Natural Systems
Those focused on implementation issues may wonder how a scheme of collision-based computing can be implemented in natural, chemical, physical, or biological materials. This section provides two examples of computing with localizations in spatially extended quasi-chemical (reaction–diffusion cellular automata) and excitable chemical (Belousov–Zhabotinsky reaction) systems.
3.1 Computing Schemes in Reaction–Diffusion Cellular Automata: Spiral Rule
The reaction–diffusion cellular automaton Spiral Rule (Adamatzky and Wuensche 2007) exhibits a wide range of mobile and stationary localizations, thus offering unique opportunities to employ both traveling and still patterns in a computation process.
An automaton is designed that emulates the nonlinearity of activator (A) and inhibitor (I) interaction for subthreshold concentrations of the activator. A quasi-chemical reaction between the activator and the inhibitor was used to construct the cell-state transition rules (Adamatzky and Wuensche 2007).
For subthreshold concentration of the inhibitor and threshold concentrations of activator, the activator is suppressed by the inhibitor. For critical concentrations of the inhibitor, both inhibitor and activator dissociate, producing the substrate.
The quasi-chemical reactions are mapped to cellular automaton rules as follows. Take a totalistic hexagonal cellular automaton (CA), where a cell takes three states – substrate S, activator A, and inhibitor I – and the cell updates its state depending only on the numbers of different cell-states in its neighborhood. The update rule can be written as
\[{x}^{t+1} = {M}_{{\sigma }_{I}{(x)}^{t}\,{\sigma }_{A}{(x)}^{t}}\]
where σ p (x)t is the number of cell x's neighbors (in its seven-cell neighborhood) with cell-state p ∈ {I, A, S} at time step t. The rule is compactly represented as a matrix M = (M ij ), where 0 ≤ i, j ≤ 7, 0 ≤ i + j ≤ 7, and M ij ∈ {I, A, S} (Adamatzky et al. 2006). The output state of each neighborhood is given by the row index i (the number of neighbors in cell-state I) and the column index j (the number of neighbors in cell-state A). One does not have to count the number of neighbors in cell-state S, because it is given by \(7 - (i + j)\). A cell with a neighborhood represented by indices i and j will update to cell-state M ij , which can be read off the matrix. The key entries of the transition matrix are as follows (Adamatzky and Wuensche 2007):
The entry M 01 = A symbolizes the diffusion of activator A, M 11 = I represents the suppression of activator A by the inhibitor I, and M z2 = I (z = 0, …, 5) is the self-inhibition of the activator in particular concentrations. M z0 = S (z = 1, …, 7) means that the inhibitor is dissociated in the absence of the activator, and that the activator does not diffuse in subthreshold concentrations; M zp = I, p ≥ 4 is an upper-threshold self-inhibition.
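The matrix lookup described above can be sketched as follows. Only the entries explicitly mentioned in the text are filled in; all remaining entries default to the substrate state S, which is an assumption made purely for illustration (the complete matrix is given in Adamatzky and Wuensche 2007):

```python
# Hexagonal totalistic CA update via the transition matrix M.
S, A, I = 'S', 'A', 'I'

def build_matrix():
    # Entries not mentioned in the text default to substrate S;
    # this also covers M_z0 = S (inhibitor dissociates without activator).
    M = [[S] * 8 for _ in range(8)]          # M[i][j], meaningful for i + j <= 7
    M[0][1] = A                              # diffusion of activator
    M[1][1] = I                              # suppression of A by I
    for z in range(6):                       # z = 0, ..., 5
        M[z][2] = I                          # self-inhibition of A
    for i in range(8):
        for j in range(4, 8):
            if i + j <= 7:
                M[i][j] = I                  # upper-threshold self-inhibition
    return M

def update_cell(M, neighbourhood):
    """neighbourhood: the 7 states of a cell and its 6 hexagonal
    neighbours; the next state is M[#inhibitors][#activators]."""
    return M[neighbourhood.count(I)][neighbourhood.count(A)]
```

Note that the lookup is purely totalistic: only the counts of I and A states matter, not their positions around the cell.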
Starting in a random initial configuration, the automaton will evolve towards a quasi-stationary configuration, with typically two types of stationary localizations, and a spiral generator of mobile localizations, or gliders (Fig. 2 ). The core of a glider-gun is a discrete analog of a “classical” spiral wave. However, at some distance from the spiral wave tip, the wave front becomes unstable and splits into localized wave fragments. The wave fragments continue traveling along their originally determined trajectories and keep their shape and velocity vector unchanged unless disturbed by other localizations. So, the wave fragments behave as in sub-excitable Belousov–Zhabotinsky systems.
Basic gliders, those with one (activator) head, are found in five types (Fig. 3 ), which vary by the number of trailing inhibitors. Three types (G 34, G 24, G 43) alternate between two forms. Two types (G 4, G 5) have just one form. The spiral glider-gun in Fig. 2 emits G 34 gliders. An alternative, low frequency, spiral glider-gun (Wuensche and Adamatzky 2006) (not shown) releases G 4 gliders. These basic gliders, and also a variety of more complicated gliders including mobile glider-guns, are also generated by many other interactions. Stationary localizations, or eaters (Fig. 3i , j), are another important feature of the CA.
The principal components of any computing device are the input interface, memory, routers, and logical gates. Readers are referred to the paper Adamatzky and Wuensche (2007) to study possible input functions.
How does one implement a memory in the Spiral Rule CA? The eater E 6 can play the role of a 6-bit flip-flop memory device. The substrate-sites (bit-down) between inhibitor-sites (Fig. 3i , j) can be switched to an inhibitor-state (bit-up) by a colliding glider.
An example of writing one bit of information in E 6 is shown in Fig. 4 . Initially E 6 stores no information. The aim is to write one bit in the substrate-site between the northern and northwestern inhibitor-sites (Fig. 4a ). A glider G 34 is generated (Fig. 4b , c) that travels west. G 34 collides with (or brushes past) the north edge of E 6, resulting in G 34 being transformed to a different type of glider, G 43 (Fig. 4g , h). There is now a record of the collision – evidence that writing was successful. The structure of E 6 now has one site (between the northern and northwestern inhibitor-sites) changed to an inhibitor-state (Fig. 4j ) – a bit was saved.
To read a bit from the E 6 memory device with one bit-up (Fig. 5a ), one collides glider G 34 with it, or brushes it past (Fig. 5b ). Following the collision, the glider G 34 is transformed into a different type of basic glider, G 4 (Fig. 5g ), and the bit is erased (Fig. 5j ).
To route signals, one can potentially employ other gliders to act as mobile reflectors. Figure 6 shows how a glider traveling northwest collides with a glider traveling west, and is reflected southwest as a result of the collision. However, both gliders are transformed to different types of gliders. This is acceptable on condition that both types of gliders represent the same signal, or signal modality.
There are two more gates that are useful in designing practical collision-based computational schemes: the fanout gate and the erase gate. The fanout gate is based on glider multiplication. There are a few scenarios where one glider can be multiplied by another glider (for details see the original beehive rule (Wuensche 2005)); for example, one can make a fanout gate by colliding glider G 34 with glider G 24. The gliders almost annihilate as a result of the collision, but recover into a more complicated glider, which splits into three G 5 gliders. To annihilate a glider, one can collide it with the central body of an eater, or with another glider; for example, head-on collisions usually lead to annihilation.
The asynchronous xor gate can be constructed from the memory device in Figs. 4 and 5 , employing the eater E 6 and the glider G 34. The incoming trajectory of the gliders carries the inputs x and y of the gate, and the state of the cell that is switched to the inhibitor state by the gliders is the output z of the gate (this cell is shown by ⊗ in Fig. 7a ). As seen in Fig. 4 , when glider G 34 brushes by the eater E 6 it “adds” one inhibitor state to the eater configuration (Fig. 4 , t + 7) and transforms itself into glider G 43. If glider G 34 brushes by E 6 with an additional inhibitor state (Fig. 5 , t), it “removes” this additional state and transforms itself into glider G 4 (Fig. 5 , t + 11).
Assume that the presence of glider G 34 symbolizes input logical True and its absence input False, and that the inhibitor state I in cell ⊗ represents output True and the substrate state S output False. The result of this logical operation can be read directly from the configuration of E 6, or by sending a control glider to brush by E 6 and detecting how the glider is transformed. The structure then implements exclusive disjunction (Fig. 7b ). The gate is asynchronous, because the output of the operation does not depend on the time interval between the signals but only on their values: when the inhibitor state is added to or removed from E 6, the configuration of E 6 remains stable and does not change until another glider collides into it.
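Because each passing G 34 glider simply flips the designated site of E 6 between substrate (False) and inhibitor (True), the gate behaves as a toggle, which is exactly exclusive disjunction over the arriving gliders. A minimal sketch of this logic (names are illustrative, not from the original paper):

```python
def e6_xor(glider_stream):
    """glider_stream: iterable of booleans, True = a G34 glider arrives.
    Returns the final state of the output site ⊗: True = inhibitor (I),
    False = substrate (S).  Each arriving glider flips the site, so the
    result is the XOR of all inputs, independent of their timing."""
    site = False                 # substrate state S, logical False
    for glider in glider_stream:
        if glider:
            site = not site      # writing flips S -> I, reading flips I -> S
    return site
```

The timing-independence in the code mirrors the asynchrony of the gate: only the number of arriving gliders matters, not the intervals between them.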
The eater E 6 can take four different configurations resulting from the interactions of gliders brushing past, and there are seven types of gliders produced in collisions with the eater (including some basic types flipped). One therefore can envisage (Adamatzky and Wuensche 2007) that a finite state machine can be implemented in the eater-glider system. The internal state of such a machine is represented by the configuration of the eater, the type of the incoming glider symbolizes the input symbol of the machine, and the type of the outgoing glider represents the output state of the machine.
To construct the full state transition table of the eater-glider machine, seven types of gliders are collided into four configurations of the eater and the results of the collisions are recorded. For the sake of compact representation, the configurations of the eater are encoded as shown in Fig. 8 . The gliders are denoted as follows: G 34 as a, G 43 as b, G 5 as c, G 4 as d, G 24 as e, G 4 (glider G 4 flipped horizontally) as f, and G 43 (glider G 43 flipped horizontally) as g. The state transition table is shown in Fig. 9 .
Consider the internal states of the eater-glider machine as unary operators on the set {a, b, c, d, e, f, g}, that is, the machine's state is reset to its initial state after the collision with the glider. For example, the unary operator α implements the following transformation: a → b, b → c, c → a, d → a, e → d, f → e, g → e. The operators have the following limit sets: operator α has the limit set {a, b, c}, β – set {c}, χ has two limit sets {a, d} and {b, c}, and operator δ – two limit sets {a, b, c, d} and {e, f}. Considering unary operators a, …, g operating on set {α, β, χ, δ}, one obtains the limit sets shown in Fig. 10 . Many of the operators have more than two limit sets, which may indicate a significant computational potential of the eater-glider machine.
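A limit set of such an operator can be found by repeatedly applying it to the whole state set until the image stops shrinking. The sketch below reproduces the limit set {a, b, c} of operator α from the transition data quoted above; note that for operators with several disjoint cycles, such as χ, this procedure returns the union of all limit sets:

```python
def limit_set(op):
    """op: dict mapping each state to its successor.
    Repeatedly take the image of the current set under op; because the
    image of the full state set can only shrink, this converges to the
    set of states lying on cycles."""
    current = set(op)
    while True:
        image = {op[s] for s in current}
        if image == current:
            return current
        current = image

# Operator alpha from the text: a->b, b->c, c->a, d->a, e->d, f->e, g->e
alpha = {'a': 'b', 'b': 'c', 'c': 'a', 'd': 'a', 'e': 'd', 'f': 'e', 'g': 'e'}
```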
To characterize the eater-glider machine in more detail, a study was conducted to find out what output strings are generated by the machine when it receives a uniform infinite string s ∗, s ∈ {a, …, g}, on its input. These input-string-to-output-string transformations are shown in Fig. 11 .
Input string abcdefg evokes the following output strings when fed into the machine. The machine starting in state α generates string begabac, in state β string dcgabac, in state χ string deccgae, and in state δ string bcccgae.
3.2 Collision-Based Computing in Excitable Chemical Media
This section gives a brief introduction to the implementation of collision-based circuits in excitable chemical media, the Belousov–Zhabotinsky (BZ) system. Examples discussed here are based on numerical experiments; see the chapter Reaction–Diffusion Computing of this book for implementation of collision-based circuits in a BZ medium in chemical laboratory conditions. Computing is discussed with localized wave fragments in the Oregonator (Field and Noyes 1974; Tyson and Fife 1980) model adapted to a light-sensitive BZ reaction with applied illumination (Beato and Engel 2003; Krug et al. 1990):
\[\frac{\partial u} {\partial t} = \frac{1} {\epsilon }\left (u - {u}^{2} - (fv + \phi )\frac{u - q} {u + q}\right ) + {\nabla }^{2}u\]
\[\frac{\partial v} {\partial t} = u - v\]
where variables u and v represent local concentrations of bromous acid HBrO2 and the oxidized form of the catalyst ruthenium Ru(III), ε sets up a ratio of time scale of variables u and v, q is a scaling parameter depending on reaction rates, f is a stoichiometric coefficient, φ is a light-induced bromide production rate proportional to the intensity of illumination (an excitability parameter – moderate intensity of light will facilitate excitation process, higher intensity will produce excessive quantities of bromide which suppresses the reaction). It is assumed that the catalyst is immobilized in a thin layer of gel; therefore, there is no diffusion term for v. To integrate the system, one uses the Euler method with five-node Laplacian operator, time step \(\Delta t = 1{0}^{-3}\) and grid point spacing Δx = 0.15, with the following parameters: \(\phi = {\phi }_{0} + A/2\), A = 0.0011109, φ 0 = 0.0766, ε = 0.03, f = 1.4, q = 0.002.
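A single explicit-Euler step of this system, with the parameters quoted above, can be sketched as follows (the periodic boundaries implied by np.roll and the grid size are assumptions made for brevity, not specified in the text):

```python
import numpy as np

# parameters quoted in the text
eps, f, q = 0.03, 1.4, 0.002
phi0, A = 0.0766, 0.0011109
phi = phi0 + A / 2
dt, dx = 1e-3, 0.15

def euler_step(u, v):
    """One explicit-Euler step of the two-variable Oregonator.
    The five-point Laplacian acts on u only (catalyst v is immobilized)."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
    du = (u - u**2 - (f * v + phi) * (u - q) / (u + q)) / eps + lap
    dv = u - v
    return u + dt * du, v + dt * dv

# initiate a wave by setting ten grid sites in a chain to u = 1.0
u = np.zeros((50, 50)); v = np.zeros((50, 50))
u[25, 20:30] = 1.0
u, v = euler_step(u, v)
```

Iterating this step a few thousand times (4 time units at dt = 10^-3) reproduces the lifetime scale of the wave fragments discussed above.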
The chosen parameters correspond to a region of “higher excitability of the sub-excitability regime” outlined in Sendiña-Nadal et al. (2001), which supports propagation of sustained wave fragments (Fig. 12a ). These wave fragments are used as quanta of information in the design of collision-based logical circuits. The waves were initiated by locally disturbing initial concentrations of species; for example, ten grid sites in a chain are given the value u = 1.0 each. This generates two or more localized wave fragments, similar to counter-propagating waves induced by temporary illumination in experiments. The traveling wave fragments keep their shape for around \(4 \cdot 1{0}^{3}\)–\(1{0}^{4}\) steps of simulation (4–10 time units), then decrease in size and vanish. The wave's lifetime is sufficient, however, to implement logical gates; this also allows one not to worry about “garbage collection” in the computational medium.
Signals are modeled by traveling wave fragments (Sendiña-Nadal et al. 2001; Beato and Engel 2003): a sustainably propagating wave fragment (Fig. 12a ) represents the True value of a logical variable corresponding to the wave's trajectory (a momentary wire). To demonstrate that a physical system is logically universal, it is enough to implement negation and conjunction or disjunction in the spatio-temporal dynamics of the system. To realize a fully functional logical circuit, one must also know how to operate input and output signals in the system's dynamics, namely to implement signal branching and routing; delays can be realized via appropriate routing.
One can branch a signal using two techniques. First, one can collide a smaller auxiliary wave into the wave fragment representing the signal; the signal-wave will then split into two signals (these daughter waves shrink slightly to a stable size and then travel with constant shape for a further \(4 \cdot 1{0}^{3}\) time steps of the simulation) and the auxiliary wave will annihilate (Fig. 12b ). Second, one can temporarily and locally apply illumination impurities in a signal's way to change the properties of the medium and thus cause the signal to split (Fig. 12c ).
A control impurity, or reflector, consists of a few segments of sites for which the illumination level is slightly above or below the overall illumination level of the medium. Combining excitatory and inhibitory segments, one can precisely control the wave's trajectory, for example, determining a U-turn of a signal (Fig. 12d ).
A typical billiard ball model interaction gate (Fredkin and Toffoli 1982; Margolus 1984) has two inputs – x and y, and four outputs – \(x\overline{y}\) (ball x moves undisturbed in the absence of ball y), \(\overline{x}y\) (ball y moves undisturbed in the absence of ball x), and twice xy (balls x and y change their trajectories when they collide into each other). It was not possible to make wave fragments implement exact billiard-ball gates, because the interacting waves either fused or one of the waves was annihilated as a result of the collision with another wave.
However, a BZ (nonconservative) version of the billiard-ball gate with two inputs and three outputs was implemented: it has just one xy output instead of two. This BZ collision gate is shown in Fig. 13 .
The rich dynamics of the BZ medium also allow one to implement complicated logical operations in a single interaction event. An example of a composite gate with three inputs and six outputs is shown in Fig. 14 . As one can see in Fig. 14 , some outputs, for example \(\overline{x}yz\), are represented by gradually vanishing wave fragments. This can be dealt with either by using a very compact architecture for the logical gates or by installing temporary amplifiers made from excitatory fragments of illumination impurities.
As known from results of computer simulations and experimental studies, classical excitation waves merge or annihilate when they collide with one another. This may complicate the implementation of nontrivial logical circuits in classical excitable media. Wave fragments, however, behave a bit differently, more like quasi-particles.
In computational experiments with exhaustive analysis of all possible collisions between localized wave fragments (Adamatzky and De Lacy Costello 2007), all the collisions are classified as (Fig. 15 ): quasi-elastic reflection of wave fragments (Fig. 15a ), pulling and pushing of a wave fragment by another wave fragment (Fig. 15b , c), sliding of one wave fragment along the refractory trail of another fragment (Fig. 15d ), and translation of a wave fragment along one axis by another wave fragment (Fig. 15e ). Examples of two types of collision are shown in Fig. 16 .
4 One-Dimensional Cellular Automata
This section deals with cellular automata in general and discrete signals in particular. It is shown how they compute in the classical sense and how they can be used to implement phenomena relevant to massively distributed computing that have no sequential counterpart.
Cellular automata (CA) were introduced as a discrete model for parallel, local, and uniform phenomena, whether of engineering, physical, or biological nature. They often provide a medium where signals naturally appear and are widely used to understand the global dynamics. On the other hand, correct handling of such signals is the key to computing and to designing special-purpose CA.
A cellular automaton works in the following way. The space is regularly partitioned into cells. All the cells are identical and are arranged as an infinite array (having as many dimensions as the modeled space). Each cell can be in finitely many states and evolves according to its own state and the states of the surrounding cells – locality. All the cells are identical and behave similarly – uniformity. The cells are all updated synchronously, like a massively parallel process sharing a single clock. In essence, CA form a discrete model: discrete time, discrete space, and discrete values.
Generally, the array is considered to extend infinitely in all directions so that there are no boundaries to deal with. However, borders can be encoded using special states which are never changed and make the two sides independent. Another way to simulate a CA on a computer is to consider that outside of a finite range all cells are in the same stable state (called a quiescent state). Finite (periodic) configurations can also be handled by arranging the cells on a ring (or torus): the last and first cells are then neighbors.
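The ring arrangement is the easiest to program. A minimal one-dimensional CA on a ring can be sketched as follows, using the well-known elementary rule 110 purely as a familiar example of a rule supporting gliders:

```python
def step(cells, rule=110):
    """One synchronous update of an elementary CA on a ring.
    cells: list of 0/1; rule: Wolfram rule number, whose bit k gives
    the successor of the neighbourhood with binary code k."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4
                      + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]
```

The modular indices make the first and last cells neighbours, so no boundary states are needed.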
The collision-based systems presented in previous sections are CA or, more accurately, are simulated by CA; some of them work on hexagonal lattices. Computation in dimension 2 and above can be done straightforwardly, as already presented, by bit encoding and implementation of logic gates.
From now on only one-dimensional space is considered because, on the one hand, in higher dimensions it can – up to some point – be treated alike or correspond to what has been exemplified in previous sections, and, because, on the other hand, dimension 1 is particularly restrictive and needs special focus.
The reader interested in CA might consult Ilachinski (2001), Kari (2005), Sarkar (2000) for various topics not covered here.
4.1 Signals in CA
In space–time diagrams, or orbits, successive configurations are stacked as iterations proceed. Such diagrams are very important for comprehending the dynamics. Since a temporal coordinate is added, this yields an array with one more dimension than the underlying space.
A signal is anything periodic in the space–time diagram. If the periodicity is in two directions, then it can fill the whole space–time and is referred to as a background, the substratum upon which signals move. The frontier between two backgrounds is a signal as long as it is periodic. Figure 17 provides an example with a complex background (the patterns 0111, 1000, 1101, and 0010 repeating alternately) on the left. On the right, signals are revealed by an ad hoc filtering. This example illustrates the variety in width and pattern of signals. As can be seen, collisions between signals can be pretty complicated.
4.2 Computing in One-Dimensional Systems
In dimension two and above, as soon as it is possible to carry information around (with signals), to cross and duplicate information, and to process it (with collisions) in the proper way to yield a sufficiently large set of logical gates, computation is straightforwardly possible.
In dimension 1, this is not so easy, because there is no way for a signal to go around another one or some static artifact. This means that signal propagation is really an issue and, for example, handling garbage signals can be cumbersome. One way to cope with it is to have various kinds of signals not identically affected by artifacts (some would pass unaffected while others interact).
This means that to use the bits-and-gates scheme, one has to be very careful with the available signals and to position the logic circuitry very cleverly; the circuitry is always thought of as two-dimensional (time often provides the extra dimension for displaying it). For example, the construction of Ollinger (2002) provides a small CA able to compute via a circuit implementation in which laying out the circuit is far from straightforward.
Another way to tackle computation is to use other formalisms. One classical option is to go back to Turing machines; another is to implement minimal rewriting systems such as cyclic tag systems.
4.2.1 Turing Machine
A Turing machine is a finite automaton acting on a potentially infinite tape of symbols. The computation starts with the automaton in its initial state and the input written on the tape. The tape is accessed through a read/write head. At each iteration, the automaton reads the symbol under the head, writes a symbol, changes its state, and moves the head left or right. A Turing machine is essentially as one dimensional as its tape, so it naturally fits on a one-dimensional CA.
The simulation is rather simple: signals encoding the symbols are motionless, and one signal representing the state of the automaton moves around updating the symbols. Collisions are as follows: a moving state meets a stationary symbol, leaving a new stationary symbol and a new moving state. (Figure 24 in the next section provides a clear illustration of this in a continuous setting.) This was implemented, for example, by Lindgren and Nordahl (1990).
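The step rule of a Turing machine can be sketched in a few lines of Python. This is a minimal interpreter, not the CA encoding itself; the binary-increment machine below is a hypothetical example chosen for illustration, with the head starting on the rightmost bit.

```python
def run_tm(delta, state, tape, pos, blank="_"):
    """Run a Turing machine; delta maps (state, symbol) to
    (new_state, new_symbol, move), where move is 'L' or 'R'."""
    tape = dict(enumerate(tape))  # sparse tape, blank elsewhere
    while state != "halt":
        sym = tape.get(pos, blank)
        state, tape[pos], move = delta[(state, sym)]
        pos += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Hypothetical example: increment a binary number, carry moving leftward.
inc = {
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "L"),
    ("carry", "_"): ("halt", "1", "L"),
}
print(run_tm(inc, "carry", "1011", 3))  # 1011 + 1 = 1100
```

In the CA simulation described above, each `tape` cell becomes a stationary signal and the `(state, pos)` pair becomes the single moving signal; each iteration of the loop corresponds to one collision.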
4.2.2 Cyclic Tag Systems (CTS)
Another way to define computation is to rely on a word that is rewritten until the system halts. The input is given as the starting word and the output is the final word. The connection with a Turing machine is straightforward when considering the tape as a finite word and the state of the automaton as an extra symbol at the head.
Cyclic tag systems (CTS) are one particular case among many Turing-universal rewriting systems. A CTS considers a binary word together with a circular list of binary words (the appendants). At each iteration, the first bit is removed from the word; if it is 1, then the first appendant of the list is appended to the end of the word; the list is then rotated. The computation stops when the word is empty or a special halting word (denoted h in the example) is activated. An example is provided in Fig. 18 .
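The rewriting step just described is simple enough to sketch directly. The following minimal interpreter follows the definition above (remove the first bit; if it was 1, append the current appendant; rotate the list); the word and appendants used are a made-up example, not the one of Fig. 18.

```python
from collections import deque

def run_cts(word, appendants, max_steps=20):
    """Yield the successive words of a cyclic tag system;
    stop when the word becomes empty (or after max_steps)."""
    apps = deque(appendants)
    for _ in range(max_steps):
        yield word
        if not word:
            return
        bit, word = word[0], word[1:]
        if bit == "1":
            word += apps[0]    # append the current appendant
        apps.rotate(-1)        # rotate the circular list

# Hypothetical example: word "1" with appendants ("0", "").
for w in run_cts("1", ["0", ""]):
    print(w)  # 1, then 0, then the empty word: the computation halts
```

Note that a CTS need not halt; `max_steps` merely caps the trace for display.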
Cyclic tag systems were proven able to perform any computation (Cook 2004), and even to do so in polynomial time (Woods and Neary 2006); they were used to build very small universal Turing machines (Neary and Woods 2009). Cook (2004) implemented them in a one-dimensional CA with collision/signal-based computing, yielding a computation-universal CA with only two states (it is impossible to compute with fewer). The construction relies on moving and colliding signals, which are identified and classified so as to work at a more abstract level.
As far as minimal CA are concerned, it is worth mentioning that four states are enough to get a one-dimensional CA, which is able to simulate any other one-dimensional CA (Richard and Ollinger 2008). This is not Turing universality since the simulation encompasses infinite configurations.
4.3 Signal-Specific Issues
Since CA form a model for physical phenomena and also for parallel computing architectures, other issues arise. One is to understand the underlying dynamics when used as a discrete model. The other is to design CA for special purposes.
4.3.1 Signals to Understand Dynamics
Once a simulation is running as a CA, one common way to understand the dynamics is to try to find regularities in the space–time diagram and observe them. They are often the key to the underlying dynamics and predictions. (Although it might happen that what is observed is nothing but an artifact of the model and has no counterpart in reality.) This is made clear by considering two examples.
The first one models sand dripping in a one-dimensional space. The dynamics are quite simple: there is nothing in the initial configuration and at each iteration one grain is added to the first pile/cell. Each time a pile has at least two more grains than the next one, a grain falls.
If only the grains dropped at odd times are colored black, then after 10,000 iterations their positions are as in Fig. 19a , and only the top grains will ever move. This strange arrangement, as well as the two different slopes and the precise long-term behavior, are explained by signals (Durand-Lose 1996, 1998). These signals can be identified in Fig. 19b , where the successive configurations (iterations 100–150) are set one after the other to form a volume. On this volume, triangles can be seen on the lower part (as well as the upper one), with their frontiers as signals (which can be revealed with an appropriate filtering).
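The dripping dynamics can be sketched in a few lines. The variant below fully relaxes the piles after each grain (the papers cited use a parallel, one-fall-per-iteration update, so the trajectories differ in detail, but the conserved quantity and the staircase profile are the same).

```python
def sandpile(iterations, size=40):
    """Drop one grain per iteration on pile 0; whenever a pile is at
    least two grains higher than its right neighbour, one grain falls.
    Here piles are fully relaxed after each drop (a simplification)."""
    piles = [0] * size
    for _ in range(iterations):
        piles[0] += 1
        stable = False
        while not stable:            # a fall may trigger further falls
            stable = True
            for i in range(size - 1):
                if piles[i] >= piles[i + 1] + 2:
                    piles[i] -= 1
                    piles[i + 1] += 1
                    stable = False
    return piles

print(sandpile(10)[:6])
```

Grains are conserved and, once relaxed, adjacent piles differ by at most one grain, which is the staircase profile whose frontiers appear as signals in Fig. 19b.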
Another example is provided by Das et al. (1995). The aim is to generate a CA with two states and a five-closest-cell neighborhood such that, on a ring of any size and whatever configuration it is started on, it always ends up blinking: all cells are 0, then all cells are 1, alternately, forever. Instead of trying to build it directly, evolutionary programming is used: random CA are generated; they are ranked according to their “blinking capability”; then the best are kept, recombined, and mutated to form the next generation. The process cycles through ranking and generation until a “fully blinking” CA emerges.
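The ingredients of that search are easy to sketch: a synchronous update for a 2-state, radius-2 rule on a ring, and a crude fitness score. The scoring below is our own stand-in for the ranking used by Das et al., whose evolved rule table is not reproduced here.

```python
import random

def step(cfg, rule):
    """One synchronous update of a 2-state CA with a five-cell
    (radius-2) neighbourhood on a ring; `rule` is a lookup table
    indexed by the neighbourhood read as a 5-bit number."""
    n = len(cfg)
    return [rule[16 * cfg[i - 2] + 8 * cfg[i - 1] + 4 * cfg[i]
                 + 2 * cfg[(i + 1) % n] + cfg[(i + 2) % n]]
            for i in range(n)]

def blinks(rule, size=21, steps=100, trials=10):
    """Crude 'blinking capability': the fraction of random starts
    that end up alternating between all-0 and all-1."""
    ok = 0
    for _ in range(trials):
        cfg = [random.randint(0, 1) for _ in range(size)]
        for _ in range(steps):
            cfg = step(cfg, rule)
        nxt = step(cfg, rule)
        ok += (cfg != nxt and len(set(cfg)) == 1 and len(set(nxt)) == 1)
    return ok / trials

# Trivial illustrative rule: next state = NOT centre cell.
flip = [1 - ((i >> 2) & 1) for i in range(32)]
print(step([0, 0, 0, 0, 0], flip))  # [1, 1, 1, 1, 1]
```

An evolutionary search would generate random 32-entry tables, rank them with such a score, and recombine and mutate the best.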
The obtained CA is analyzed in terms of signals (again with some filtering), as illustrated in Fig. 20 . (The apparent non-connectivity of some signals comes from the filtering and from the fact that cells can influence one another up to distance 2.) The evolutionary process goes through steps where various intermediate levels of “blinking” appear. Analyzing typical members of each level reveals the progressive emergence of signals and collision rules.
It is very important to notice that, although the evolutionary process never asked for signals, they appear nonetheless and are the key to both the desired dynamics and the evolutionary process itself.
The previous example does not compute in the classical sense; nevertheless, it provides a relevant global dynamical property.
4.3.2 Signals to Generate a Particular Behavior
Computing, in the usual sense, is a special behavior. But CA can also be thought of as a computing model in their own right. Primitives can then be specially defined to take advantage of the huge parallelism. Signals are the key to managing it.
As already cited, the pioneering work of Fischer (1965) generates prime numbers using a parallel implementation of the sieve of Eratosthenes, where a prime is indicated by the absence of a signal on the first cell at the corresponding time. Prime numbers can then be read off the space–time diagram by observing the state of the first cell.
Another complex behavior is the famous firing squad synchronization problem (FSS). Starting from all but one cell in a quiescent state, one wants all the cells to simultaneously enter the same state – one that has not been used before. It is as if they all blinked for the very first time, synchronously.
Each example of Fig. 21 shows the Euclidean conception and then the discrete implementation of a FSS solution. Both constructions rely on recursive cuts in half. In the discrete implementation, at some point, the granularity of space – a cell cannot be divided – is reached. Since CA are synchronous, this point is reached simultaneously everywhere, ensuring the global synchronization of the entire array of cells.
4.3.3 Signals as a Computational Model in Their Own Right
Mazoyer et al. developed a computing model where multiplication (as in Fig. 22a ), composition (as in Fig. 22b ), and even iteration are achieved graphically. Programming relies on a trellis of adaptable size that can be dynamically generated; the computation is carried out over the trellis. Composition is achieved by laying out the next computation on a new trellis. Recursion is provided by a scheme that dynamically starts another iteration, each time providing an adapted trellis.
5 Abstract Geometrical Computation
Abstract geometrical computation (AGC) is an idealization of collision-based computing in which particles/signals are dimensionless and time and space are continuous. The number of signals in a bounded region is finite, but it is not bounded a priori. Another way to view AGC is as the continuous counterpart of CA, the limit as the size of the cells tends to zero. In CA, signals are almost always the key to understanding and designing, and indeed, in the literature, the discreteness of CA and of their signals is often put aside so as to reason at a more abstract level, the discretization being left to technical details.
In abstract geometrical computation, dimensionless signals move at constant speed. When they meet, they are replaced by others according to rewriting rules. A signal machine (SM) gathers the definitions of the available kinds of signals (called meta-signals), their speeds, and the collision rules. There are finitely many meta-signals, and each one is assigned a constant speed (velocity and direction). The constant speed may seem surprising, but each discrete CA signal does indeed have a constant speed.
The space–time diagrams generated are continuous in both space and time and the traces of signals form line segments. For a given meta-signal, all these segments are parallel. In this section, only one-dimensional AGC are considered, so that the space–time diagram is always two dimensional with time increasing upwards as illustrated by Fig. 23 .
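The elementary event a signal machine resolves is the earliest meeting of two signals. Representing signals as (position, speed) pairs — our own convention, not notation from the text — this can be sketched as follows.

```python
def next_collision(signals):
    """Signals are (position, speed) pairs; return (time, place, pair)
    of the earliest future meeting, or None if no two signals meet."""
    best = None
    for i, (x1, v1) in enumerate(signals):
        for j, (x2, v2) in enumerate(signals):
            if i < j and v1 != v2:          # parallel signals never meet
                t = (x2 - x1) / (v1 - v2)   # solve x1 + v1*t = x2 + v2*t
                if t > 0 and (best is None or t < best[0]):
                    best = (t, x1 + v1 * t, (i, j))
    return best

# Two signals heading toward each other meet at time 2, position 2.
print(next_collision([(0.0, 1.0), (4.0, -1.0)]))  # (2.0, 2.0, (0, 1))
```

A full simulator would repeatedly advance to the next collision, apply the rewriting rule for the meta-signals involved, and continue; the trace of positions over time is exactly the space–time diagram of Fig. 23.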
Space–time diagrams can be much more complicated and, as presented below, use continuity to implement the Zeno paradox and emulate the black hole model of computation.
5.1 Computing
Computing, in the classical sense, is fairly easy. In fact, many constructions for CA translate directly to AGC and are even simpler, since discreteness does not have to be taken into account. Two ways to achieve computability are presented: Turing machines and cyclic tag systems.
For Turing machines (as presented in Sect. 4.2.1), the implementation is quite straightforward: static signals encode the tape – one signal per symbol – and one signal encodes the state of the automaton and the position of the head. The latter moves back and forth from symbol to symbol, as the read/write head does. This is depicted in Fig. 24 . The construction is detailed in Durand-Lose (2009).
Cyclic tag systems are presented in Sect. 4.2.2. The simulation encodes, left to right, the word and then the circular list of appendants as parallel signals encoding the bits, as shown in Fig. 25a . At each iteration, the list is rotated to the right, but before that, if the erased bit of the word is 1, a copy is left behind. The copy is directly added to the right of the word, as in Fig. 25b , which presents one full iteration. The full simulation of the example of Fig. 18 is given in Fig. 25c . The rightward drift corresponds to the erasure of the word from the left and to the rightward rotation of the list. Each group of oblique lines corresponds to one rotation of the list.
This simulation is detailed in Durand-Lose (2008a). The signal machine obtained is able to simulate any CTS, and is thus Turing universal. It is the smallest one known (it has only 13 meta-signals).
5.2 Geometric Primitives
As with CA, AGC can also be considered a model in its own right, with particular operators that might have no classical-computation counterpart (like FSS) or no discrete counterpart.
All the following constructions consist in adding meta-signals and rules to a given SM so that the desired capability exists. Signals have to be added to the initial configuration in order to trigger the effect. Rules can also be modified so as to trigger it dynamically.
Space and time are continuous and scaleless, and since signals have no dimension, there is no limit on scalability. If the initial configuration is scaled spatially by a given coefficient, the whole computation is also scaled temporally by the same coefficient. The duration of a computation is therefore meaningless, and complexity measures such as the maximal number of causally linked collisions have to be used instead.
Spatial rescaling of the initial configuration is a static operation. It is also possible to do it during the computation. The construction relies on the ability to freeze a configuration.
Freezing a computation is quite simple. One signal is added on one side and crosses the entire configuration. Each time it meets a signal (or enters a previously existing collision), it replaces it with another signal encoding the meta-signal. All the encoding signals have the same speed: they are parallel, do not interact, and the distances between them are preserved. The configuration is frozen and shifts. To unfreeze it, a signal comes from the side, crosses the configuration, and replaces each encoding signal by the encoded one (or by the result of the collision). Freezing and unfreezing signals must have the same speed so that the configuration is restored exactly as it was, up to a translation. (And indeed they correspond to the same meta-signal, toggle.) This is depicted in Fig. 26 .
While a configuration is frozen into parallel signals, it is possible to act on these signals. One simple trick is to change their direction. In Fig. 27a , this is done twice so as to restore the original direction. (An extra signal is used on the right to delete the structure.) As a result, since different slopes are used to change direction, the distances between signals are scaled (here by one half). By adding freezing and unfreezing signals (automatically fired in due time), a structure that scales down a whole configuration is generated, as can be seen in Fig. 27c .
The initial configuration has been modified so that the effect starts straightaway. It is also possible to trigger it dynamically, during the computation, when some special collision happens. This is used to iterate it.
5.3 Infinite Acceleration and Non-recursive Function
Scaling can be restarted automatically ad infinitum; since both space and time are continuous, this is not a problem – at least before the singularity. In Fig. 28 , a computation producing an infinite trellis, extending indefinitely in both space and time, is folded into the structure. The right signal and the middle one are very important, since the folded computation has to remain bounded to ensure that it is fully scaled.
A singularity is an accumulation of infinitely many collisions and signals at a given location. Figure 28a shows that the structure alone already creates a singularity; this illustrates the Zeno paradox. The leftmost signals of Fig. 28a undergo infinitely (yet countably) many collisions, although the elapsed time as well as the total distance crossed by the signals is finite.
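The arithmetic behind the Zeno paradox is a geometric series: if each successive inter-collision delay is half the previous one, the collision times accumulate at a finite instant. A small numerical illustration (the initial delay and ratio are arbitrary choices, not parameters from the text):

```python
from itertools import count

def accumulation_time(t0=1.0, ratio=0.5, eps=1e-12):
    """Collision k occurs at t0*(1 + r + ... + r^(k-1)); the times
    accumulate at t0/(1 - r): infinitely many collisions occur
    before a finite horizon. Stop once the delay drops below eps."""
    t, gap = 0.0, t0
    for k in count():
        if gap < eps:
            return k, t
        t += gap
        gap *= ratio

n, t = accumulation_time()
print(n, t)  # a few dozen collisions, t close to 2.0 = t0/(1 - ratio)
```

With each halving step also halving the distances crossed, the total distance is likewise finite, which is why the whole infinite process fits in a bounded portion of the space–time diagram.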
All the signals and collisions that would have existed had there been no folding do exist, but within a bounded portion of the space–time diagram. (An equivalence between the plane and a bounded part of it is, in a sense, implemented.) So, up to the presence of the structure, of the frozen parts, and of the different scales, the computation is the same.
Any computation starting with finitely many signals can be folded. Inside the structure, the rescaling provides a speedup (the closer the signals are, the sooner they collide). This speedup is unbounded and goes to infinity: inside the structure there is a space–time whose time-lines are infinitely accelerated compared to the outside. This is the first step toward emulating the black-hole model of computation (Hogarth 1994; Etesi and Németi 2002; Lloyd and Ng 2004).
The second step is to find a way for an atomic piece of information to leave the black hole, here the singularity. One meta-signal can be added and rules changed so that the new meta-signal is not affected by the structure but can be generated by the initial computation.
The final step is to add bounding signals on both sides of the folding (called horizonLe and horizonRi here). At their collision point, the folded computation is entirely in the causal past, as displayed in Fig. 29 . Any signal leaving the folding will have been collected.
This way, the black-hole model is emulated: there are two time-lines, one for the machine and one for the observer. The machine time-line is infinite; on the observer time-line, after a finite duration, the whole machine time-line is entirely in the causal past. A single piece of information can be sent by the machine to the observer, and it has to be sent after a finite (machine) duration. The latest date at which the observer can receive anything from the machine is finite and known. To understand the power of this model, imagine that the machine sends a signal only if its computation ends. The observer then receives a signal exactly when the computation halts, and after a known duration the observer is certain that any signal sent would have arrived. So, just by checking the clock, the observer knows whenever the computation did not halt. Altogether, the halting problem (whose undecidability is the cornerstone of Turing computability theory) is decidable!
This model, as well as AGC, clearly computes beyond Turing. In formal terms, it can decide Σ01 formulae of the arithmetical hierarchy, i.e., recursive/computable predicates prefixed by an existential quantifier. These include, for example, the consistency of Peano arithmetic and of set theory, as well as many conjectures such as the Collatz and Goldbach conjectures.
Defining what happens at a singularity point is not easy, especially since the accumulating set can be a line segment, a fractal curve, or even a Cantor set! (To understand this, take a continuous look at the FSS: what would happen at the bottom of Fig. 21 ?) For a singularity at a single point, a careful continuation has been proposed in Durand-Lose (2009) that allows one to climb the arithmetical hierarchy. Isolated singularities can also be used to provide limits for analog computation.
5.4 Analog Computation
Unlike collision-based computing in CA, AGC has dimensionless signals in a continuous space, so it is possible to encode a real number as the distance between two signals. In the AGC context, this distance is exact – that is, it is a real number with exact precision. As long as the signals are parallel, this distance is preserved. This allows one to encode real numbers and to perform some computations over them.
Since any real number can be encoded exactly, the model ipso facto falls outside classical computability (for cardinality reasons, there is no way to encode all the reals with natural numbers or finite strings). An analog model of computation must therefore be sought.
With this encoding of real numbers, AGC is equivalent to the linear Blum–Shub–Smale (BSS) model (Blum et al. 1998). In the linear BSS model, variables hold (exact) real numbers and the available operations are addition, multiplication by a constant, and branching according to the sign of a variable. This equivalence holds as long as the BSS machine has an unbounded number of variables (accessed through a context shift, like moving a window over an infinite array of variables), there are finitely many signals, and there is no singularity (Durand-Lose 2007).
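The operations of the linear BSS model can be made concrete with a toy interpreter. The instruction set and program below are our own illustrative choices; real registers are stood in for by floats, whereas the model (and its AGC encoding as inter-signal distances) works with exact reals.

```python
def run_linear_bss(program, regs):
    """Toy linear BSS machine. Instructions:
    ('add', i, j, k)    regs[k] = regs[i] + regs[j]
    ('scale', c, i, k)  regs[k] = c * regs[i]   (c is a machine constant)
    ('blez', i, label)  jump to label if regs[i] <= 0  (sign branch)
    ('halt',)           stop and return the registers."""
    pc = 0
    while True:
        ins = program[pc]
        if ins[0] == "halt":
            return regs
        if ins[0] == "add":
            _, i, j, k = ins
            regs[k] = regs[i] + regs[j]
        elif ins[0] == "scale":
            _, c, i, k = ins
            regs[k] = c * regs[i]
        elif ins[0] == "blez":
            _, i, label = ins
            if regs[i] <= 0:
                pc = label
                continue
        pc += 1

# Absolute value via the sign branch: if x <= 0 then x := -1 * x.
prog = [("blez", 0, 2), ("halt",), ("scale", -1.0, 0, 0), ("halt",)]
print(run_linear_bss(prog, [-3.5]))  # [3.5]
```

Note that only multiplication by a *constant* is available; multiplying two variables is exactly what the linear model lacks and what singularities add, as explained next.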
If singularities are used in a proper way, it becomes possible to multiply two variables; the classical BSS model can then be implemented in AGC (Durand-Lose 2008b). The converse simulation is no longer possible since, for example, exact square roots can also be computed by AGC.
Just as in the discrete case, a proper handling of isolated singularities of any order can be used to decide predicates quantified over natural (but not real) numbers in the BSS model, and thus to climb the BSS arithmetical hierarchy (Durand-Lose 2009).
References
Adamatzky A (ed) (2002a) Collision-based computing. Springer, London
Adamatzky A (ed) (2002b) Novel materials for collision-based computing. Springer, Berlin
Adamatzky A (2004) Collision-based computing in Belousov–Zhabotinsky medium. Chaos Soliton Fract 21:1259–1264
Adamatzky A, De Lacy Costello B (2007) Binary collisions between wave-fragments in sub-excitable Belousov–Zhabotinsky medium. Chaos Soliton Fract 34:307–315
Adamatzky A, Wuensche A (2007) Computing in spiral rule reaction-diffusion hexagonal cellular automaton. Complex Syst 16(4):277–298
Adamatzky A, Wuensche A, De Lacy Costello B (2006) Glider-based computation in reaction-diffusion hexagonal cellular automata. Chaos Soliton Fract 27:287–295
Anastassiou C, Fleischer JW, Carmon T, Segev M, Steiglitz K (2001) Information transfer via cascaded collisions of vector solitons. Optics Lett 26:1498–1500
Atrubin AJ (1965) A one-dimensional real-time iterative multiplier. IEEE Trans Electron Computers EC-14(1):394–399
Banks E (1971) Information and transmission in cellular automata. Ph.D Dissertation, MIT, cited by Toffoli and Margolus (1987)
Beato V, Engel H (2003) Pulse propagation in a model for the photosensitive Belousov-Zhabotinsky reaction with external noise. In: Schimansky-Geier L, Abbott D, Neiman A, van den Broeck C (eds) Noise in complex systems and stochastic dynamics. Proceedings of SPIE, 2003
Berlekamp ER, Conway JH, Guy RL (1982) Winning ways for your mathematical plays, vol 2 Games in particular. Academic, London
Blum L, Cucker F, Shub M, Smale S (1998) Complexity and real computation. Springer, New York
Boccara N, Nasser J, Roger M (1991) Particle-like structures and interactions in spatio-temporal patterns generated by one-dimensional deterministic cellular automaton rules. Phys Rev A 44(2):866–875
Cook M (2004) Universality in elementary cellular automata. Complex Syst 15:1–40
Das R, Crutchfield JP, Mitchell M, Hanson JE (1995) Evolving globally synchronized cellular automata. In: Eshelman LJ (ed) International conference on genetic algorithms '95. Morgan Kaufmann, San Mateo, CA, pp 336–343
Delorme M, Mazoyer J (2002) Signals on cellular automata. In: Adamatzky A (ed) Collision-based computing. Springer, Berlin, pp 234–275
Durand-Lose J (1996) Grain sorting in the one dimensional sand pile model. Complex Syst 10(3):195–206
Durand-Lose J (1998) Parallel transient time of one-dimensional sand pile. Theoret Comp Sci 205(1–2):183–193
Durand-Lose J (2007) Abstract geometrical computation and the linear Blum, Shub and Smale model. In: Cooper S, Löwe B, Sorbi A (eds) Computation and logic in the real world. 3rd Conference Computability in Europe (CiE '07). Springer, no. 4497 in LNCS, pp 238–247
Durand-Lose J (2008a) Abstract geometrical computation: small Turing universal signal machines. In: Neary T, Seda A, Woods D (eds) International workshop on the complexity of simple programs. Cork University Press, Cork, Ireland, December 6–7
Durand-Lose J (2008b) Abstract geometrical computation with accumulations: beyond the Blum, Shub and Smale model. In: Beckmann A, Dimitracopoulos C, Löwe B (eds) Logic and theory of algorithms. CiE 2008 (abstracts and extended abstracts of unpublished papers). University of Athens, Athens, pp 107–116
Durand-Lose J (2009) Abstract geometrical computation 3: Black holes for classical and analog computing. Nat Comput 8(3):455–472
Etesi G, Németi I (2002) Non-Turing computations via Malament-Hogarth space-times. Int J Theor Phys 41(2):341–370, gr-qc/0104023
Field RJ, Noyes RM (1974) Oscillations in chemical systems. IV. Limit cycle behavior in a model of a real chemical reaction. J Chem Phys 60:1877–1884
Fischer PC (1965) Generation of primes by a one-dimensional real-time iterative array. J ACM 12(3):388–394
Fredkin EF, Toffoli T (1982) Conservative logic. Int J Theor Phys 21(3/4):219–253
Fredkin EF, Toffoli T (2002) Design principles for achieving high-performance submicron digital technologies. In: Adamatzky A (ed) Collision-based computing. Springer, Berlin, pp 27–46
Hogarth ML (1994) Non-Turing computers and non-Turing computability. In: Hull D, Forbens M, Burian RM (eds) Biennial meeting of the philosophy of science association. East Lansing, MI, pp 126–138
Ilachinski A (2001) Cellular automata – a discrete universe. World Scientific, Singapore
Jakubowski MH, Steiglitz K, Squier RK (1996) When can solitons compute? Complex Syst 10(1):1–21
Jakubowski MH, Steiglitz K, Squier RK (2001) Computing with solitons: a review and prospectus. Multiple Valued Logic 6(5–6):439–462
Kari J (2005) Theory of cellular automata: a survey. Theoret Comp Sci 334:3–33
Krug HJ, Pohlmann L, Kuhnert L (1990) Analysis of the modified complete oregonator (MCO) accounting for oxygen- and photosensitivity of Belousov-Zhabotinsky systems. J Phys Chem 94:4862–4866
Lindgren K, Nordahl MG (1990) Universal computation in simple one-dimensional cellular automata. Complex Syst 4:299–318
Lloyd S, Ng YJ (2004) Black hole computers. Sci Am 291(5):31–39
Margolus N (1984) Physics-like models of computation. Phys D 10:81–95
Mazoyer J (1996) Computations on one dimensional cellular automata. Ann Math Artif Intell 16:285–309
Neary T, Woods D (2009) Four fast universal Turing machines. Fundam Inform 410(4):443–450
Ollinger N (2002) The quest for small universal cellular automata. In: ICALP '02, Springer, Heidelberg, no. 2380 in LNCS, pp 318–329
Rand D, Steiglitz K (2009) Computing with solitons. In: Meyers RA (ed) Encyclopedia of complexity and systems science. Springer, Heidelberg
Rand D, Steiglitz K, Prucnal P (2005) Signal standardization in collision-based soliton computing. Int J Unconvent Comput 1:31–45
Rendell P (2002) Turing universality of the game of life. In: Adamatzky A (ed) Collision-based computing. Springer, Berlin, pp 513–540
Rennard JP (2002) Implementation of logical functions in the game of life. In: Adamatzky A (ed) Collision-based computing. Springer, London, pp 491–512
Richard G, Ollinger N (2008) A particular universal cellular automaton. In: Neary T, Woods D, Seda AK, Murphy N (eds) The complexity of simple programs. National University of Ireland, Cork
Sarkar P (2000) A brief history of cellular automata. ACM Comput Surv 32(1):80–107
Sendiña-Nadal I, Mihaliuk E, Wang J, Pérez-Muñuzuri V, Showalter K (2001) Wave propagation in subexcitable media with periodically modulated excitability. Phys Rev Lett 86:1646–1649
Steiglitz K (2001) Time-gated Manakov spatial solitons are computationally universal. Phys Rev E 63:1660–1668
Toffoli T, Margolus N (1987) Cellular automata machines: a new environment for modeling. MIT Press, Cambridge, MA
Tyson JJ, Fife PC (1980) Target patterns in a realistic model of the Belousov-Zhabotinsky reaction. J Chem Phys 73:2224–2237
Waksman A (1966) An optimum solution to the firing squad synchronization problem. Inform Control 9(1):66–78
Woods D, Neary T (2006) On the time complexity of 2-tag systems and small universal Turing machines. In: 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS '06), IEEE Computer Society, Berkeley, CA, pp 439–448
Wuensche A (2005) Glider dynamics in 3-value hexagonal cellular automata: the beehive rule. Int J Unconventional Comput 1:375–398
Wuensche A, Adamatzky A (2006) On spiral glider-guns in hexagonal cellular automata: activator-inhibitor paradigm. Int J Modern Phys C 17(7):1009–1026
Yunès JB (2007a) Automates cellulaires; fonctions booléennes. Habilitation à diriger des recherches, Université Paris 7
Yunès JB (2007b) Simple new algorithms which solve the firing squad synchronization problem: a 7-states 4n-steps solution. In: Durand-Lose J, Margenstern M (eds) Machine, Computations and Universality (MCU '07). Springer, Berlin, no. 4664 in LNCS, pp 316–324
© 2012 Springer-Verlag Berlin Heidelberg
Cite this entry
Adamatzky, A., Durand-Lose, J. (2012). Collision-Based Computing. In: Rozenberg, G., Bäck, T., Kok, J.N. (eds) Handbook of Natural Computing. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-92910-9_58
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-92909-3
Online ISBN: 978-3-540-92910-9