
6.1 Introduction

It seems reasonable to suppose that the next step in developing artificially intelligent systems with human-like thinking abilities should be based on a better understanding of existing and new laws of nature responsible for the dynamics of thinking systems. One should remember how the physical sciences succeeded in discovering and exploiting the physical principles and laws of nature to create the special theories and mathematical tools necessary for breaking into the micro-world of atoms and molecules, constructing new processes and machines along the way.

We wish to combine the structural complexity of the brain’s neural networks with mathematical models derived from the laws of nature responsible for complex heterogeneous biochemical reaction dynamics, accompanied by the storage, processing and exchange of information. The biochemical reactions and processes taking place within and between the brain’s neurons, which combine to compose neural networks, are responsible for specific brain functions. We can hardly expect serious progress in improving modern AI systems toward the creation of “artificial brain” systems without a detailed understanding of the internal mechanisms of biochemical processes in the brain, including the physicochemical meaning and roles of “information” and “information exchange”, which are currently not presented elsewhere. We also need to accept the lack of complete fundamental physical principles and mathematical models for living, and especially thinking, systems responsible for the origin and functioning of human intelligence and its connection to consciousness, cognition, creativity, learning and rational decision making. To address these questions, we need to define the physicochemical meaning of “information” and “information exchange” in relation to regular processes such as the mass, charge and energy exchanges taking place during biochemical reaction dynamics within the brain’s neurons and neural networks. The classical meaning of “information”, introduced by Shannon (1948) and Brillouin (1962), was based on thermodynamics and probabilistic principles related to measures of the quantity of “information”, without considering the quality of information, which is important for thinking systems.

The meaning of “information exchange” that we are introducing reflects the extreme sensitivity of the chaotic states of neurons to the infinitesimal portion of energy (which we intend to relate to “information”) contained in the internal and external stimuli delivered to a neuron or neurons, causing unique patterns that can be associated with the brain’s mental properties. From the point of view of delivered energy, these infinitesimal stimuli drastically change the current chaotic state of an individual neuron and of the whole neural network. The ability of neurons to receive and react to infinitesimal signals is what we associate with “information exchange” within and between neurons. For “information exchange” to occur, the dynamical processes within the physical or biological system of neurons and neural networks must have chaotic regimes, so that they can change under infinitesimal influences (stimuli or signals) and specific patterns can emerge. “Information exchange” takes place in parallel with the regular biochemical reactions between the neuron’s biochemical constituents (atoms, molecules, ions, etc.). The fundamental difference between the process of “information exchange”, where an infinitesimal amount of energy produces large effects, and regular biochemical reactions is that for regular exchanges, the more energy consumed, the bigger the effect that can be expected from the interaction. It is also important that while all constituents participating in a regular biochemical reaction can be in any physical state, “information exchange” requires that the constituents be in a chaotic state, the only state that can be changed by an infinitesimal (small) portion of energy, which can then be considered as “information”.
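The sensitivity of a chaotic state to an infinitesimal input can be illustrated with a minimal numerical sketch. The logistic map used below is not the biochemical model developed in this chapter; it is only a generic discrete-time system with a chaotic regime, used to show that a perturbation of order 10⁻¹⁰ (standing in for the “infinitesimal portion of energy”) is enough to drive two initially indistinguishable trajectories apart, whereas in a non-chaotic regime the same perturbation has no lasting effect.

```python
import numpy as np

def logistic_trajectory(x0, r, steps):
    """Iterate the logistic map x_{q+1} = r * x_q * (1 - x_q)."""
    x = np.empty(steps)
    x[0] = x0
    for q in range(1, steps):
        x[q] = r * x[q - 1] * (1.0 - x[q - 1])
    return x

eps = 1e-10  # "infinitesimal" perturbation standing in for an information signal

for r, regime in [(3.2, "periodic, non-chaotic"), (3.9, "chaotic")]:
    a = logistic_trajectory(0.400, r, 60)
    b = logistic_trajectory(0.400 + eps, r, 60)
    print(f"r = {r} ({regime}): |difference| after 60 steps = {abs(a[-1] - b[-1]):.3e}")
```

In the periodic regime the tiny perturbation decays, while in the chaotic regime it grows to the size of the state itself; only the chaotic receiver “perceives” the infinitesimal signal.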

To construct a general theoretical approach and mathematical model of neural network dynamics for “information exchange”, we introduce a new extremal dynamical principle for multicomponent biochemical reaction dynamics. This new principle leads to a system of non-linear difference equations with numerous embedded chaotic regimes for the mathematical modeling of “information exchange” within and between neurons. The proposed principle results from an extension of the maximum entropy principle, the \( \uppi - \)theorem of the theory of dimensionality, and the stoichiometry of the multicomponent chemical reactions taking place (Gontar 1993, 2004).

As will be shown, the equations derived from the dynamical principle enable the simulation of specific natural neural network features, namely “self-organization” and “self-synchronization”. These features lead to the emergence of new “phenomenological” state(s) within the “artificial brain” in the form of specific discrete time and space patterns, which we intend to correlate with human consciousness, cognition, creativity (the ability of a system, natural or artificial, to generate innovative results in the form of art, music, poetry, technical inventions, etc.) and intelligence, which should correspond to general rational behavior and decision making. We present here the results of numerical simulations performed with the proposed approach, demonstrating 2D patterns generated in the form of ornaments and mandalas (Figs. 6.2 and 6.3), to support the idea that artificial neural networks, when constructed from a first physical principle, necessarily lead to a variety of dynamical artistic “patterns” traditionally considered to be the prerogative of human creative abilities.

The first physicochemical dynamical principle formulated here could serve as a possible explanation of the origin of the “driving force” for thinking system dynamics, thereby opening a new perspective for simulating the brain’s cognitive functions with the goal of eventually developing artificial brain systems.

6.2 Background

The idea of translating the properties of the interconnected neurons of the human brain into mathematical models gave impetus to the development of ANNs functioning as interconnected individual neurons simulated by step, linear and sigmoid functions (Haykin 1998). Even this purely mathematical approach applied to the complexity of neural networks has demonstrated the ability of ANNs to perform numerous “intelligent” operations, including image and signal recognition, assisted decision making, and control and navigation, among many other applications associated with human intelligence. At the same time, it should be clear that an ANN based on pure mathematics admits a variety of solutions that may not be relevant to real processes. Therefore, the use of the ANNs mentioned above becomes problematic for autonomous and intelligent systems when the time or data for training and learning is limited and when innovative and rational solutions are required.

When we talk about the scientific understanding of intelligence, we should realize that its origin and explanation can be found only within the understanding of living cells (neurons) and their biochemical processes. The way to the creation of artificial intelligence lies in understanding the physicochemical laws responsible for brain functioning. In spite of the fact that living cells are composed of the same atoms and molecules as non-living matter, they do not appear to obey the physical laws of quantum mechanics and statistical physics formulated for non-living matter. It seems that, on the scale of their operations, living and thinking cells and systems such as the brain may not obey the laws of thermodynamics, and the second law of thermodynamics in particular. Numerous attempts to apply the existing laws of physics to the dynamics of living cells have not allowed biologists to understand any better what thoughts, consciousness and cognition are, along with the many other specific features of living and thinking matter. The extreme complexity of the structural and behavioral properties of brain neurons and networks does not manifest dynamics similar to those observed and simulated in the physics of inert matter. Living cells such as neurons present behavior comparable to that of a well-organized factory under optimal control and synchronization, with “information” and biochemical exchanges between constituents taking part in living and thinking cycles and processes that “rationally” and “creatively” respond to internal and external stimuli. Self-reproduction, “information exchange”, memory, aging, and emergent and self-organizing mechanisms make living and thinking systems an extremely complex theoretical object of research, one that requires new fundamental principles and laws reflecting the specificity of living and thinking matter in contrast to non-living systems.

Classical physics, initially focused on the dynamical processes of inert matter, traditionally exploits continuous time and space as a mathematical tool, with differential equations, also known as the calculus of the infinitesimal. We think that “living and thinking systems” require the introduction of a new calculus, which we call the “calculus of iterations”, leading to systems of difference equations. These equations should be directly derived from first principles reflecting the specificity of living systems, for further use in the mathematical modeling of the dynamics of living and thinking systems (Gontar 1995). Under some assumptions, these two calculi intersect when Δt → 0, but we intend to benefit from using difference equations as a source of mathematical models independently of differential equations. Difference equations, by their very nature, have numerous embedded chaotic regimes which can be applied to the mathematical modeling of one of the basic concepts of thinking system dynamics: “information exchange” based on chaotic regimes (Gontar 1995, 2004). To emphasize our preference for difference equations in the mathematical modeling of living and thinking systems, we note that only a limited list of differential equations exhibits chaotic regimes, and these exist within a narrow range of parameters. Numerical integration of systems of differential equations is always accompanied by the contradiction between continuous variables and discrete computer calculations, which complicates distinguishing computational artifacts from the real chaotic regimes of the simulated physical system. These are the reasons why difference equations, with their clear physicochemical meanings for variables and parameters derived from first physicochemical principles and laws of nature, are preferable to differential equations for modeling living and thinking systems (Gontar 2000a, b).
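The intersection of the two calculi at Δt → 0 can be made explicit with a standard, model-independent observation (here \( F_i \) denotes a generic right-hand side, not the function \( f_i \) defined later in (6.4)): a first-order difference scheme over the discrete states \( {t}_q \) reduces to an ordinary differential equation as the step vanishes,

$$ \frac{y_i\left({t}_{q+1}\right)-{y}_i\left({t}_q\right)}{\Delta t}={F}_i\left({y}_1\left({t}_q\right),\dots, {y}_N\left({t}_q\right)\right)\kern1em \underset{\Delta t\to 0}{\longrightarrow }\kern1em \frac{d{y}_i}{dt}={F}_i\left({y}_1(t),\dots, {y}_N(t)\right), $$

whereas for finite \( \Delta t \) the left-hand side remains a genuine difference equation which, in general, exhibits a much richer set of regimes, including the chaotic ones needed for “information exchange”.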

As already mentioned, the brain consists of neurons interconnected to form complex neural networks. Another empirical fact is that each neuron operates as a “biochemical reactor” in which numerous chemical, electrochemical and biochemical reactions occur. Before introducing our basic hypothesis about thinking system mathematical models, let us remind the reader that chemical reactions between original “simple and non-living” elements (atoms, molecules, etc.) can lead to the creation of more complex systems, such as bacteria, that manifest new properties in the emergence of “life”. By analogy, the brain’s specific properties, such as consciousness, cognition and creativity, could result from the biochemical reactions and the information exchange within and between the neurons composing a neural network. All kinds of brain activity, including cognitive properties, are fully defined by the states of neurons and their dynamics, which depend on a neuron’s internal \( i \)th chemical constituent concentrations \( {y}_i \). To simulate a brain’s cognitive functions, we construct a mathematical model that describes the dynamics of the chemical constituents distributed among the brain’s neural networks. We hypothesize that the neurons’ chemical constituent concentrations distributed among the neural network are associated with the “phenomenological states” manifesting as consciousness, cognition and creativity, among the brain’s other properties. These phenomenological states are represented by the calculated concentrations \( {y}_{i}\left({t}_q,\boldsymbol{R}\right) \) of the \( i \)th chemical constituents distributed over the brain’s neural networks for any network state \( {t}_q \) within the discrete space \( \boldsymbol{R} \). Structurally, the human brain is composed of the frontal, parietal, occipital and temporal lobes, the cerebellum, etc. For the purpose of mathematical modeling, each part of the brain could be represented by a specific form of 2D or 3D neural network interconnected with other parts of the brain to promote information exchange. In Fig. 6.1, one can see nine “mathematical neurons” with discrete coordinates \( \left({r}_x,{r}_y\right) \) interconnected via information exchanges (designated by arrows) within the 2D artificial neural network \( \boldsymbol{R}\left({r}_x,{r}_y\right);\ {r}_x,{r}_y=1,2\dots {N}^{\prime } \). Each neuron is represented, through the mechanism of its biochemical reactions, by the matrix of stoichiometric coefficients \( \left|{v}_{li}\right| \) (Gontar 1997):

Fig. 6.1

A neural network composed of neurons interconnected through “information exchange” (blue arrows). Each neuron is represented by the mechanism of its biochemical reaction dynamics, with “information exchange” between the neuron’s constituents (green arrows) and between constituents composing other neural networks representing different parts of the “artificial brain” (red arrows)

[Reaction scheme image: the stoichiometric matrix \( \left|{v}_{li}\right| \) over the neuron’s constituents \( {A}_i \), with arrows marking information exchange]
(6.1)

Here, \( {A}_i \) is the list of constituents composing a neuron (atoms, molecules, ions: H, \( {\mathrm{H}}_2\mathrm{O},\ {\mathrm{Ca}}^{+},\ {\mathrm{OH}}^{-} \), etc.). A green arrow designates “information exchange” within the neuron, a blue arrow the “information exchange” between different neurons within a neural network, and a red arrow the “information exchange” between different neural networks representing specific parts of the brain, or information received from the environment through the sensors and actuators of the brain.

Based on the mathematical identity between the basic equations of the \( \uppi - \)theorem of the theory of dimensionality (Brandt 1957) and the thermodynamic mass\( - \)action law equations for complex chemical equilibrium, which follow from the principle of maximum entropy, we propose to extend the second law of thermodynamics to open systems with a new extremal principle for neural networks representing biochemical reaction dynamics (Gontar 2004). In the case of neural networks represented by neurons with internal biochemical reactions and information exchange within and between the neurons, as well as with neurons from other networks, the new extremal principle can be formulated as follows: the evolution of a neural network proceeds in such a way that at any discrete time t at state q, \( {t}_q,\ q=1,2\dots Q \), each neuron within the network with discrete coordinates \( \boldsymbol{R}\left({r}_x,{r}_y\right) \) is fully defined by its chemical constituent concentrations \( {y}_{i}\left({t}_q,\boldsymbol{R}\right) \), which minimize the function (6.2) over the space of constituent concentrations \( 0<{y}_{i}\left({t}_q,\boldsymbol{R}\right)<1 \) under the constraint of the mass conservation law (6.3):

$$ \underset{{y}_{i}\left({t}_q,\boldsymbol{R}\right)}{ \min }\ \varPhi \left({y}_{i}\left({t}_q,\boldsymbol{R}\right)\right)={\displaystyle \sum_{i=1}^N}{y}_{i}\left({t}_q,\boldsymbol{R}\right)\left( \ln \left({y}_{i}\left({t}_q,\boldsymbol{R}\right)\right)+{f}_i\left({\pi}_l^d,{w}_l,{\rho}_{li},{\beta}_{li}^{\otimes },{t}_{q-s},{\mathfrak{J}}_{l,g}\right)\right) $$
(6.2)
$$ {\displaystyle \sum_{i=1}^N}{\alpha}_{ij}^T{y}_{i\ }\left({t}_q, \boldsymbol{R}\right) = {b}_j^0 $$
(6.3)
$$ {f}_i={\displaystyle \sum_{l=1}^{N-M}}\left(-{v}_{li}^T \ln \left({\pi}_l^d \exp \left(-\left({w}_l+{\displaystyle \sum_{i=1}^N}{\rho}_{li}{y}_{i}\left({t}_{q-s},\boldsymbol{R}\right)+{\displaystyle \sum_{i=1}^{N^{\prime }}}{\beta}_{li}^{\otimes }{y}_i\left({t}_{q-s},{\boldsymbol{R}}^{\otimes}\right)+{\mathfrak{J}}_{l,g}\right)\right)\right)\right) $$
(6.4)

Here, \( {\boldsymbol{R}}^{\otimes } \) are the coordinates of the neighboring neurons participating in information exchange with the currently considered neuron \( \boldsymbol{R}\left({r}_x,{r}_y\right) \); \( {\pi}_l^d \) and \( {w}_l \) are empirical parameters characterizing the rate of the \( l \)th biochemical reaction; \( {\rho}_{li}\ \mathrm{and}\ {\beta}_{li}^{\otimes } \) are empirical parameters characterizing the intensity of information exchange within and between the neurons; the \( {\alpha}_{ij}^T \) are the elements of the transposed molecular matrix \( \left|{\alpha}_{li}\right| \), indicating the number of basic constituents of type j (j = 1,2…M) in the constituent of type i (i = 1,2…N); the \( {b}_j^0 \) reflect the total concentration of the jth basic constituent in a neural network; and \( s=1,2\dots \) is the index characterizing “system memory”, indicating the state prior to the currently considered state \( {t}_q \) (in this work we consider only the previous state \( {t}_{q-1} \), i.e., \( s=1 \)). \( {\mathfrak{J}}_{l,g}\left({y}_{i{l}^{\prime}}^g\left({t}_{q-1},{\boldsymbol{R}}^{\boldsymbol{g}}\right)\right) \) is the function characterizing information exchange between the \( l \)th reaction within the neural network with coordinates \( \boldsymbol{R} \) and the other \( g \)th \( \left(g=1,2\dots G\right) \) neural networks with coordinates \( {\boldsymbol{R}}^{\boldsymbol{g}} \) (for example, frontal lobe coordinates denoted as \( \boldsymbol{R} \), occipital lobe \( {\boldsymbol{R}}^{1} \), sensors \( {\boldsymbol{R}}^{2} \), actuators \( {\boldsymbol{R}}^{3} \), etc.). As an initial approximation to the explicit form of the unknown function \( {\mathfrak{J}}_{l,g} \), we approximate it with a linear regression over the neural network constituent concentrations \( {y}_{i}^{g}\left({t}_{q-1}\right) \) with empirical parameters \( {\xi}_i \):

$$ {\mathfrak{J}}_{l,g}\left({y}_{i{l}^{\prime}}^g\left({t}_{q-1},{\boldsymbol{R}}^{\boldsymbol{g}}\right)\right) = {\displaystyle \sum_{{l}^{\prime }=1}^{L^{\prime }}}{\displaystyle \sum_{i=1}^{N^{{\prime\prime} }}}{\xi}_{i}\ {y}_{i{l}^{\prime}}^g\left({t}_{q-1},{\boldsymbol{R}}^{\boldsymbol{g}}\right) $$
(6.5)

Here \( {N}^{{\prime\prime} } \) is the number of constituents within the \( g \)th neural network, and \( {l}^{\prime }=1,2\dots {L}^{\prime } \) indexes the reactions in the neurons representing the \( g \)th neural network.

In the case when the interaction between neurons from different neural networks is not limited to “information exchange”, the exchange of chemical constituents could be introduced into (6.1) through the extension of the \( \left|{v}_{li}\right| \) matrix by adding the corresponding chemical reactions between the neurons.

The formulated dynamical extremal principle (6.2) and (6.3) is equivalent to the solution of the following system of N non-linear difference equations, which has a unique solution for all \( {y}_{i}\left({t}_{q-s},\boldsymbol{R}\right)>0 \) (Gontar 1993, 2004):

$$ {\displaystyle \prod_{i=1}^N}{y}_i^{v_{li}}\left({t}_q,\boldsymbol{R}\right)={\pi}_l^d \exp \left(-{w}_l+{\displaystyle \sum_{i=1}^N}{\rho}_{li}{y}_{i}\left({t}_{q-s},\boldsymbol{R}\right)+{\displaystyle \sum_{i=1}^{N^{\prime }}}{\beta}_{li}^{\otimes }{y}_i\left({t}_{q-s},\boldsymbol{R},{\boldsymbol{R}}^{\otimes}\right)+{\mathfrak{J}}_{l,g}\right) $$
(6.6)
$$ {\displaystyle \sum_{i=1}^N}{\alpha}_{ij}^T{y}_{i}\left({t}_q,\boldsymbol{R}\right) = {b}_j^0;\kern1em s=1,2\dots $$
(6.7)

The mathematical model (6.6) and (6.7) could simulate brain dynamics, since, according to our assumptions, brain dynamics is fully defined by the evolution of the neurons’ constituent concentrations \( {y}_{i}\left({t}_q,{\boldsymbol{R}}^{\boldsymbol{g}}\right) \) distributed over the neural network \( {\boldsymbol{R}}^{\boldsymbol{g}} \). Specific cognitive brain functions could be interrelated with the neurons’ constituent concentration distributions \( {y}_{i}\left({t}_q,{\boldsymbol{R}}^{\boldsymbol{g}}\right) \) which, as will be shown, form complex patterns that could be related to specific cognitive functions of the brain, such as the creation of a work of art like a mandala.

The formal meaning of “information exchange” introduced here reflects a special type of interaction between complex and living systems that, unlike an energy exchange, has specific features. Energy can be delivered or transmitted from its source to any receiver to change the receiver’s state without any requirement that the receiver be under predefined conditions. However, “information exchange” in our view can take place only if the receiver is “ready” for that type of interaction. For the receiver “to be ready” means that it should react to (“perceive”) infinitely small transmitted signals, since the information conveyed usually contains small amounts of energy that nevertheless can drastically change the state of the receiver. As we know from deterministic chaos, any physical system (a network of neurons operating as a receiver in our case) can respond to infinitely small signals only when it is in a chaotic state. This type of interaction, which we have named “information exchange” in contrast to regular energy exchanges, complements the dynamics of living and thinking systems which, as is now well known, contain chaotic regimes. The meaning of “information exchange” presented here, applied to the interaction between humans, can be illustrated by the fact that even one word (bad or good) exchanged between humans can cause a strong emotional reaction. This supports the idea of using mathematical models with embedded chaotic regimes to simulate the basic thinking system properties through information exchange. Information exchange can exist at the level of individual neurons, of neural networks, and between the interconnected neural networks of a whole brain.

Equations (6.6) and (6.7), written for the initial hypothesis about a mechanism of biochemical transformation and a scheme of information exchange within the neuron, enable us, for any given parameters \( {\pi}_l^d,\ {w}_l,{\rho}_{li},{\beta}_{li}^{\otimes }\ \mathrm{and}\ {b}_j^0 \), to compute the unique distribution of each \( i \)th constituent’s concentration \( {y}_{i}\left({t}_q,{\boldsymbol{R}}^{\boldsymbol{g}}\right) \) within a neural network at state \( {t}_q \). The obtained distributions represent a visual dynamical pattern (e.g., a mandala), where equal values of \( {y}_{i}\left({t}_q,{\boldsymbol{R}}^{\boldsymbol{g}}\right) \) are marked by the same color taken from an arbitrary palette (Gontar and Grechko 2006, 2007).
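As a minimal illustration of this visualization step (not the authors’ code), the sketch below quantizes an arbitrary 2D concentration field so that neurons with equal (binned) values of \( {y}_{i}\left({t}_q,\boldsymbol{R}\right) \) receive the same color from a qualitative palette; in the actual model the array would hold the concentrations computed from (6.6) and (6.7) rather than the synthetic field used here.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for the concentration field y_i(t_q, R) on a 100 x 100 network;
# in practice this array would come from iterating (6.6) and (6.7).
rx, ry = np.meshgrid(np.arange(100), np.arange(100))
field = np.sin(0.2 * rx) * np.cos(0.2 * ry)

# Quantize the concentrations into a small number of levels so that equal
# (binned) values share a color, then draw with an arbitrary qualitative palette.
levels = np.digitize(field, bins=np.linspace(field.min(), field.max(), 8))
plt.imshow(levels, cmap="tab10", interpolation="nearest")
plt.axis("off")
plt.title("Equal concentration values mapped to equal colors (synthetic example)")
plt.show()
```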

As shown in Fig. 6.2, the extremal principle denoted by (6.2) and (6.3), followed by (6.6) and (6.7), enables the generation of various dynamical patterns related to those observed in or produced by complex, living and thinking systems: spirals, rings, waves and artistic patterns in the form of creative ornaments and mandalas. These results support the idea that the proposed principle could be applied to the mathematical modeling of any physicochemical system with chemical reactions, such as the human brain, because its functioning is defined by the biochemical reactions occurring within neurons and neural networks. In this way, different brain functions such as consciousness, cognition, creativity and decision making could be directly related to the biochemical reaction mechanisms and dynamics within the neural networks, and then associated with the specific patterns that emerge from the neuron chemical constituent distributions and their dynamics. The proposed extremal principle denoted by (6.2) and (6.3) can be considered a “driving force” for brain functioning, consuming and exchanging energy and information, and used as a mathematical tool for the creation of autonomous “artificial brain” systems. As mentioned above, the brain’s cognitive functions should be interrelated with the specific complex patterns emerging from the “artificial brain” and controlled by internal and external stimuli and by special training and learning of the ANN. This supervised and unsupervised training could provide a rational interaction of the artificial brain systems with the environment, artificial agents and humans. The proposed mathematical model has demonstrated its ability to generate an almost unlimited variety of complex and creative 1D, 2D and 3D dynamical patterns (Gontar 1997, 2000a, b). The problem of the creation of autonomous conscious artificial brain systems then becomes the technical problem of how to provide training and learning for such a system by finding the concrete mechanism of biochemical reactions and the parameters of the mathematical model (6.6) and (6.7) that correspond to the desired “intelligent” or rational behavior.

Fig. 6.2

Selected examples of the 2D patterns generated by the sequences \( {y}_{A}\left({t}_q,\boldsymbol{R}\right) \) of (6.9), corresponding to arbitrarily chosen \( {t}_q,\ q=1,2\dots Q \), for different sets of parameters \( {\pi}_1^d,{\pi}_2^d,\ {b}^0,\ {\rho}_1,{\rho}_2,{\rho}_3,{\beta}_1^{\otimes },{\beta}_2^{\otimes },{\beta}_3^{\otimes } \); network \( \boldsymbol{R}\left(100\times 100\right) \)

6.3 Numerical Simulations

As an example of using the proposed paradigm to simulate brain creativity in the form of 2D images such as ornaments and mandalas, we developed a system for automatically finding the model parameters that correspond to desired patterns. The general mechanism of the biochemical reactions expressed by (6.1), written for two reactions between three constituents with information exchange, looks as follows:

[Reaction scheme image: \( A\to B \) and \( B\to C \) within each neuron, with arrows marking information exchange within and between neurons]
(6.8)

A, B and C designate the three constituents composing each neuron in the network, which participate in two biochemical reactions: \( A\to B\ \mathrm{and}\ B\to C \). Here green arrows designate “information exchange” within the neuron, and blue arrows the “information exchange” between different neurons within a neural network.

Equations (6.6) and (6.7) for this chemical reaction within the neuron, and for the neural network dynamics with “information exchange”, can be written in explicit form for each of the three constituents; for \( {y}_1\left(\boldsymbol{R},{t}_q\right) \) (Gontar and Grechko 2006):

$$ {y}_1\left(\boldsymbol{R},{t}_q\right)=\frac{b_A^0}{1+{\pi}_1^d{\mathcal{D}}_1+{\pi}_2^d{\mathcal{D}}_2} $$
(6.9)
$$ {\mathcal{D}}_1= \exp \left(-\left({w}_1+{\displaystyle \sum_{i=1}^3}{\rho}_{1i}{y}_{i}\left(\boldsymbol{R},{t}_{q-1}\right)+{\displaystyle \sum_{{i}^{\prime }=1}^{N^{\prime }}}{\beta}_{1{i}^{\prime}}^{\otimes }{y}_1\left({t}_{q-1},\boldsymbol{R},{\boldsymbol{R}}^{\otimes}\right)\right)\right) $$
$$ {\mathcal{D}}_2= \exp \left(-\left({w}_2+{\displaystyle \sum_{i=1}^3}{\rho}_{2i}{y}_{i}\left(\boldsymbol{R},{t}_{q-1}\right)+{\displaystyle \sum_{{i}^{\prime }=1}^{N^{\prime }}}{\beta}_{2{i}^{\prime}}^{\otimes }{y}_1\left({t}_{q-1},\boldsymbol{R},{\boldsymbol{R}}^{\otimes}\right)\right)\right), $$

with the initial and boundary conditions:

$$ {y}_{i\ }\left({t}_{q=0},\ {r}_x,{r}_y\right)=\left\{\begin{array}{l} {b}_j^0,\ i=1,2\dots M \\ {}0,\ i=M+1,M+2,\dots N \end{array}\right. $$
(6.10)
$$ {y}_{i\ }\left({t}_q,\ {r}_x,{r}_y\right)=\left\{\begin{array}{l} {y}_{i\ }\left({t}_q,\ {r}_x,{r}_y\right),\ 0<{r}_x,{r}_y<\left|\boldsymbol{R}\right|\ \left( inside\ the\ network\right)\\ {}0, \kern0.5em {r}_x,{r}_y\ge \left|\boldsymbol{R}\right|\ \left( outside\ the\ network\right) \end{array}\right.. $$
Here \( M=1 \) is the number of main constituents (components) and \( N=3 \) is the total number of constituents.

We also put constraints on the empirical parameters in equation (6.9):

$$ {w}_1={w}_2=0; $$
$$ {\rho}_{li}={\rho}_{{l}^{\prime }i};\kern1em l\ne {l}^{\prime };\ l,{l}^{\prime }=1,2 $$
(6.11)
$$ {\beta}_{li}^{\otimes }={\beta}_{{l}^{\prime }i}^{\otimes }\kern1em \left\{\begin{array}{l}l\ne {l}^{\prime },\ l,{l}^{\prime }=1,2\\ {}{r}_x\ne {r}_y;\ {r}_x,{r}_y=1,2\dots 9\end{array}\right. $$

With these assumptions, we reduce the number of controlled parameters to nine: \( {\pi}_1^d,{\pi}_2^d,\ {b}^0,\ {\rho}_1,{\rho}_2,{\rho}_3,{\beta}_1^{\otimes },{\beta}_2^{\otimes },{\beta}_3^{\otimes } \). The number of parameters can be extended if we need to generate more complex patterns that better correspond to the experimental data. In our examples, each neuron of the neural network considered, with coordinates \( \left({r}_x,{r}_y\right) \), is fully characterized by the concentrations of its \( N=3 \) constituents \( {y}_{i}\left({t}_q,{r}_x,{r}_y\right) \) at any state \( {t}_q \).
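A minimal computational sketch of this reduced model follows. It is our reading of (6.9), (6.10) and (6.11), not the authors’ published code, and it makes two explicit assumptions: the concentrations of B and C are recovered from (6.9) and the conservation law (6.7) as \( {y}_2={\pi}_1^d{\mathcal{D}}_1{y}_1 \) and \( {y}_3={\pi}_2^d{\mathcal{D}}_2{y}_1 \) (so that \( {y}_1+{y}_2+{y}_3={b}^0 \)), and the information-exchange term is taken as the sum of each constituent’s concentration over the four nearest neighbors, weighted by \( {\beta}_i^{\otimes } \), with zero concentration outside the network as in (6.10). The function name `simulate` and all parameter values are illustrative.

```python
import numpy as np

def simulate(pi1, pi2, b0, rho, beta, size=100, steps=300, seed=0):
    """Iterate the reduced three-constituent model (6.9)-(6.11) on a 2D network.

    rho, beta : length-3 arrays (rho_1..rho_3, beta_1..beta_3), shared between the
    two reactions as required by the constraints (6.11); w_1 = w_2 = 0.
    Returns the history of y_1(t_q, r_x, r_y) as an array of shape (steps, size, size).
    """
    rng = np.random.default_rng(seed)
    # Initial conditions (6.10): y_1 = b0 everywhere (a tiny random perturbation
    # is added to break the trivial uniform solution), y_2 = y_3 = 0.
    y = np.zeros((3, size, size))
    y[0] = b0 * (1.0 + 1e-3 * rng.random((size, size)))

    def neighbour_sum(field):
        """Sum over the four nearest neighbours, zero outside the network (6.10)."""
        padded = np.pad(field, 1, mode="constant", constant_values=0.0)
        return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                padded[1:-1, :-2] + padded[1:-1, 2:])

    history = np.empty((steps, size, size))
    for q in range(steps):
        # Internal term: sum_i rho_i * y_i(t_{q-1}, R), shared by both reactions (6.11).
        internal = sum(rho[i] * y[i] for i in range(3))
        # Information exchange (assumed form): sum over neighbours of each constituent.
        exchange = sum(beta[i] * neighbour_sum(y[i]) for i in range(3))
        # Under (6.11) with w_1 = w_2 = 0 the two exponential factors D_1, D_2 coincide.
        d = np.exp(-(internal + exchange))
        y1 = b0 / (1.0 + pi1 * d + pi2 * d)   # equation (6.9)
        y2 = pi1 * d * y1                     # assumed, from mass action / conservation
        y3 = pi2 * d * y1                     # assumed, from mass action / conservation
        y = np.stack([y1, y2, y3])
        history[q] = y1
    return history

# Illustrative parameter values (not taken from the chapter).
hist = simulate(pi1=2.0, pi2=1.5, b0=1.0,
                rho=np.array([0.5, -0.3, 0.2]),
                beta=np.array([0.05, 0.02, -0.01]),
                size=100, steps=300)
print("y_1 field at t_250: min %.4f, max %.4f" % (hist[250].min(), hist[250].max()))
```

The returned array `hist[q]` plays the role of the field \( {y}_1\left({t}_q,{r}_x,{r}_y\right) \) that is quantized and colored for visualization, as sketched earlier.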

The values of the parameters \( {\rho}_{li},{\beta}_{li}^{\otimes } \) could be used as a quantitative characteristic of the level of information exchange. Qualitative conclusions about information exchange can be drawn from the obtained results: if the desired output (a specific pattern) does not appear for a given set of parameters, it means that the scheme used and the level of information exchange should be changed.

For visualization of the array of data generated by (6.9), we chose one of the three constituents, for example \( {y}_1\left({t}_q,{r}_x,{r}_y\right) \). Selected results of the generated patterns are presented in Fig. 6.2. As can be seen, even the simple mechanism of biochemical reactions in (6.8), with the reduced number of parameters in (6.11) reflecting the simplified scheme of information exchange within and between neurons, gives (6.9) different solutions, which can be observed both in reality (sand, spiral and ring waves) and in the form of mandalas produced by artists. By varying the neuron’s internal biochemical reaction mechanism, the scheme of information exchange and the model parameters, we can use (6.6) and (6.7) to generate an unlimited source of complex patterns, including symmetrical images in the form of mandalas. It should also be clear that, for each chosen biochemical reaction, any given type of pattern exists only in a limited domain of the model’s parameters, found by solving an inverse problem. For that purpose, we need an automatic search of the parameters that generate the desired pattern. Such an automatic system, based on a specially constructed genetic algorithm, has been developed and has demonstrated a high level of performance in finding desired symmetrical patterns such as specific mandalas (Gontar and Grechko 2006). Each mechanism of the neural network’s biochemical reactions should be considered an initial hypothesis for finding the desired solution, in the form of a specific pattern, by performing a search in parameter space. If the initial hypothesis does not result in the desired pattern, it should be changed and the search of parameter space repeated until a pattern corresponding to the formalized criteria, such as a desired shape, symmetry, etc., has been found.
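The parameter search itself can be sketched as a generic evolutionary loop. This is not the genetic algorithm of Gontar and Grechko (2006); the fitness used here is a simple mirror-symmetry score chosen for illustration, and `generate_pattern` is a hypothetical stand-in that in practice would run the (6.9) iteration (as in the sketch above) with the candidate parameter vector.

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_pattern(params, size=50):
    """Hypothetical stand-in: in practice, run the (6.9) iteration with `params`
    and return a 2D concentration field; a synthetic field is returned here so
    that the sketch is runnable on its own."""
    a, b = params[:2]
    x, y = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
    return np.sin(a * x * y) + np.cos(b * (x ** 2 + y ** 2))

def symmetry_fitness(pattern):
    """Score how close the pattern is to being left-right and up-down symmetric."""
    return -(np.mean((pattern - pattern[:, ::-1]) ** 2) +
             np.mean((pattern - pattern[::-1, :]) ** 2))

# Generic evolutionary loop over 9-dimensional parameter vectors.
population = rng.normal(size=(30, 9))
for generation in range(50):
    scores = np.array([symmetry_fitness(generate_pattern(p)) for p in population])
    elite = population[np.argsort(scores)[-10:]]                         # keep the best third
    children = elite[rng.integers(0, 10, size=20)] + 0.1 * rng.normal(size=(20, 9))
    population = np.vstack([elite, children])                            # elitism + mutation

best = population[np.argmax([symmetry_fitness(generate_pattern(p)) for p in population])]
print("best parameter vector found:", np.round(best, 3))
```

In an actual inverse-problem setting the fitness function would encode the formalized criteria mentioned above (desired shape, symmetry, etc.) applied to the patterns produced by (6.6) and (6.7).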

The symmetrical patterns obtained by (6.6) and (6.7) are similar to the analogous patterns produced by human artists in the form of mandalas, as shown in Figs. 6.2 and 6.3, and similar to the mandalas presented by Jung (1973). Thus, the proposed ANN could be expanded to different areas of human mental and cognitive activity. One can suppose that, in general, human brain cognitive functions could be connected with the specific patterns emerging in the brain from a neural network’s constituent concentrations, demonstrating real mental activity, as in the case of an art painting that results in a desired mandala. If so, by using search methods such as a genetic or simulated annealing algorithm for a given fitness function (desired artistic pattern, optimal robot trajectory, etc.), an ANN architecture (6.6) and (6.7) and its parameters could be obtained for further use as an artificial brain system with cognitive properties.

Fig. 6.3

Evolution of the pattern through the discrete states generated by (6.9), represented by selected states \( {t}_q,\ q=1,2\dots Q \) (\( q=120,130,\dots \)), each corresponding to a concrete state of the ANN

The dynamical patterns shown in Fig. 6.3 are usually accompanied by a series of discrete chaotic states \( {y}_{i}\left({t}_q\right) \) representing each neuron of a network with coordinates \( \boldsymbol{R}\left({r}_x,{r}_y\right) \). Based on the results we have obtained, systems of interconnected neurons can demonstrate well-organized collective behavior of the ANN in the form of a 2D symmetrical pattern at \( {t}_q,\ q=250 \), while each individual neuron demonstrates a chaotic regime in its constituents \( {y}_{i}\left({t}_q\right),\ q=1,2,3\dots \), as shown in Fig. 6.4. These symmetrical patterns demonstrate self-organization and self-synchronization within the ANN composed of interconnected “chaotic oscillators” (chaotic regimes provided by (6.9)). This supports the statement that “chaos is creative”, in the sense that interconnected chaotic regimes are usually accompanied by a high level of collective organization in the form of specific time-space distributed patterns (Gontar 2007).

Fig. 6.4

Evolution of six neurons with coordinates \( \left({r}_x=1,{r}_y=100;\ {r}_x=25,{r}_y=75;\ {r}_x=25,{r}_y=1;\ {r}_x=50,{r}_y=25;\ {r}_x=50,{r}_y=1;\ {r}_x=50,{r}_y=50\right) \) for the 2D ANN \( \boldsymbol{R}\left(100\times 100\right) \), represented by the concentration \( {y}_1\left({t}_q,{r}_x,{r}_y\right) \) sequences containing 100 discrete states \( {t}_q\ \left(q=200,201\dots Q=300\right) \). The symmetrical pattern presented corresponds to \( q=250 \). All six neurons demonstrate chaotic regimes, while over the 100 states the 2D patterns are different but symmetrical

At this point we would like to discuss what the approach presented here, a neural network for distributed discrete biochemical reaction dynamics, has in common with 2D cellular automata (CA) (Wolfram 2002), and how it differs. Both approaches operate in discrete space and time, and both use neighboring cell states updated from the previous state of a neuron (each cell of the CA lattice corresponds to a neuron in our network). The main difference between the two approaches is that the CA does not include in its algorithm any physicochemical meaning or constraints from the laws of nature as presented here, namely the conservation laws, the second law of thermodynamics and the stoichiometry of chemical reactions. This difference limits the CA in controlling the patterns it generates. Another difference is that CA rules are discrete, and therefore a pattern corresponding to a particular rule cannot be transformed into another pattern smoothly, as usually happens with natural processes. Other well-known discrete time and space mathematical models, such as fractals (Mandelbrot and Julia sets) and L-systems, should also be considered purely empirical computational models: they have no relation to fundamental physicochemical principles and laws of nature, and therefore their solutions can hardly be used for extrapolation; they lack a clear definition of “discrete time and space” related to continuous time and space; and they do not establish relations between model parameters and experimentally obtained data. These limitations prevent such discrete mathematical models from solving the real problems related to the mathematical modeling of the environments where autonomous intelligent robots are likely to perform. In contrast, (6.6) and (6.7) not only generate patterns related to those observed in nature, but also provide continuous control through the variation of the model’s parameters, which should enable future rational robot actions.

We intend to apply the proposed paradigm for mathematical modeling to specific brain features such as consciousness, cognition and creative problem solving, in order to construct “artificial brain” systems with the cognitive properties for autonomous robot rational behavior that resembles human behavior. From our point of view, conscious “artificial brain” systems are those systems that possess the ability to generate the “phenomenological states” associated with the complex dynamical patterns shown in Figs. 6.2 and 6.3. These “phenomenological states” could be used to illustrate rational, innovative and cognitive actions by feeding the collected data into the ANN for learning and forecasting. By “artificial consciousness”, we plan to denote the “phenomenological states”, in the form of specific dynamical patterns defined by the parameters of the mathematical model (6.6) and (6.7), that correspond to the internal (learning) and external (environmental data) stimuli and provide the desired rational actions of an intelligent agent or robot.

For example, autonomous “conscious” robot navigation in an unknown environment could be realized with the proposed approach by extracting continuous trajectories from the generated patterns, in the form of continuous curves connected to the patterns’ internal edges. Environmental data, for example about obstacles, could be introduced into the neural network as shown in Fig. 6.5a: neurons with coordinates occupied by the obstacle do not change their state during the network’s dynamics. Another option for extracting trajectories is to connect neurons with equal states, as shown in Fig. 6.5b. The obtained trajectories could then be transferred to a robot’s navigation system for movement across real terrain.
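The second option, connecting neurons with equal states, can be sketched with a standard iso-level contour extraction. This is only an illustration of the idea on a synthetic field, not the authors’ navigation pipeline; obstacle cells are simply masked out (treated as frozen and excluded from the extracted trajectories), as an assumption consistent with Fig. 6.5a.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for y_A(t_q, R); in practice this comes from (6.6) and (6.7).
rx, ry = np.meshgrid(np.linspace(0, 4 * np.pi, 100), np.linspace(0, 4 * np.pi, 100))
field = np.sin(rx) * np.cos(ry)

# Cross-shaped obstacle: these neurons keep a fixed state and are masked out.
obstacle = (np.abs(rx - 2 * np.pi) < 0.4) | (np.abs(ry - 2 * np.pi) < 0.4)
masked = np.ma.array(field, mask=obstacle)

# Candidate trajectories = iso-level curves connecting neurons with equal states.
plt.imshow(masked, cmap="viridis", origin="lower")
plt.contour(masked, levels=[0.0], colors="black")
plt.title("Candidate trajectories extracted as equal-state curves (synthetic)")
plt.show()
```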

Fig. 6.5

(a) A trajectory for an autonomous agent (marked by the black curve), composed of neurons on the edge of a pattern generated by (6.6) and (6.7) in the presence of an obstacle in the form of a cross. (b) Trajectories composed of neurons with equal states \( {y}_A\left({t}_q,\boldsymbol{R}\right) \), extracted from the generated 2D pattern

The choice of the concrete trajectory for navigation satisfies conditions of rationality analogous to those of conscious human behavior: the minimum distance to a destination, the avoidance of collision with an obstacle, etc. (Gontar and Tkachenko 2012). This approach could be extended to other intelligent functions of an artificial agent by extracting the desired information embedded within the patterns generated by (6.6) and (6.7).
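For the minimum-distance criterion, the choice reduces to comparing path lengths. A minimal sketch, assuming each candidate trajectory is already available as an ordered list of neuron coordinates (the helper `path_length` and the sample coordinates are hypothetical):

```python
import numpy as np

def path_length(trajectory):
    """Total Euclidean length of an ordered list of (r_x, r_y) neuron coordinates."""
    pts = np.asarray(trajectory, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

# Hypothetical candidate trajectories extracted from a generated pattern.
candidates = [
    [(0, 0), (10, 12), (25, 30), (40, 40)],
    [(0, 0), (5, 30), (35, 35), (40, 40)],
]
best = min(candidates, key=path_length)
print("chosen trajectory length:", round(path_length(best), 2))
```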

We underline the difference between the mathematical modeling of an “artificial conscious brain” and its process of “learning”: the former is related to the generation of desired patterns with embedded rational information by (6.6) and (6.7) with known parameters (the “direct problem”), while “learning” is the mathematical procedure of finding the parameters of the model (6.6) and (6.7) from the analysis of experimental data about the environment (Grechko and Gontar 2009).

6.4 Conclusion

The development of basic equations for living and thinking system dynamics should be the basis for a new generation of artificial neural networks and artificial brain systems. It will require the formulation of new fundamental principles and laws of nature reflecting the main features of living and thinking systems. The known principles and physical laws of nature, formulated within classical physics and chemistry, have been directed at explaining non-living system dynamics and can hardly be applied to living and thinking systems.

Instead, we have suggested new principles and basic equations in the form of difference equations reflecting specific characteristics of living and thinking systems, such as “information” and “information exchange”. These equations have enabled us to describe the biochemical reaction dynamics that accompany the information exchanges occurring between neurons and neural networks. These difference equations possess numerous chaotic regimes that can simulate the emergence of collective states as complex dynamical patterns. Similar to living systems (those that have emerged from non-living elements to reproduce and to communicate with the environment), thinking systems composed of “non-thinking” neurons and their constituents, when interconnected into networks, demonstrate properties analogous to the emergence of life, such as the emergence of “thoughts”, learning, memorizing, consciousness, cognition, creativity, and communication with other “thinking systems” and with the environment.

We have associated a neural network’s creative dynamical patterns (an artificial brain’s “phenomenological states”) with the consciousness, cognition and creativity involved in the artwork of a mandala. We believe that this application can be extended in future research into robotics, since these patterns suggest rational solutions to the problems arising for autonomous intelligent robots during their missions. Application of the solutions extracted from the simulated “phenomenological states” (patterns) to an autonomous robot’s actions will look like intelligent behavior to an observer.

We have presented one possible approach to formulating the new principle for the basic equations of thinking system dynamics, and to applying it to autonomous robot navigation.

Our proposed paradigm for living and thinking system dynamics opens a discussion about the physical meaning of discrete space and time versus continuous space and time, “deterministic chaos” versus probabilistic approaches, and continuous differential equations versus discrete difference equations. This discussion puts more emphasis on developing new principles and laws of nature that might be responsible for brain functioning, including consciousness, cognition and creativity among other brain functions. On that basis, we have suggested how we may be able to create a conscious and cognitive artificial brain system.