
1 Introduction: Big Data Algorithmics in the Laws of Nature

The amount of information associated with Life and Mind far exceeds the diversity of the material world. By disregarding the information processing at the foundation of Nature, modern science runs into a variety of complications. A paradigm of fundamental physics that does not explicitly incorporate an information processing mechanism is not just incomplete, it is simply wrong. As John A. Wheeler tersely put it: “the physical world is made of information with energy and matter as incidentals”.

The realization of information processing encounters two types of problems, related to hardware and to software. In this paper, we consider in a broad sense the software problems connected with the inundation of information dubbed Big Data. The hardware problems associated with this Big Data situation have been addressed in a general way in our previous publications. As an issue of practical computer engineering, these problems have been outlined in [1]. The hardware model of the informational infrastructure of the physical world, in the form of a cellular automaton mechanism, has been described with numerous ramifications in [2–5].

Constructive solutions for natural science necessitate elegant operational algorithms. Almost any algorithm can be made to work, but inappropriate algorithms translate into clumsy ad hoc theories. Ingenious algorithmic solutions devised for the Big Data situation manifest themselves as effective laws of nature.

The Big Data situation presents two types of problems: how to exercise meaningful actions under an overabundance of information, and how to actually generate objects with extremely rich information contents. Starting with Sect. 2, we introduce a computational model for Big Data that goes beyond ordinary Turing computations. The impossibility of explicitly using all the available Big Data leads to the concept of bounded rationality for Artificial Intelligence, as depicted in Sect. 3. This approach emphasizes the Freudian idea of the decisive role of the unconscious in Natural Intelligence, which is delineated in Sect. 4.

Generating bulky structures through step-by-step growth is not suitable for mass production. Thus, the creation of Mind implies the use of Cloud Computing, as described in the above-mentioned Sect. 4. In this case, the tremendous contents of human memory are built up by joining an already existing repository of information. Section 5 considers both types of Big Data developments. Mental disorders, such as neuroses, schizophrenia, and autism, are believed to represent pure software distortions of the context background brought about by Cloud Computing. Also considered is the other type of massive Big Data formation: the method of self-reproduction of macromolecular configurations.

The suggested Big Data algorithms can be realized in a physical world organized as an Internet of Things. Section 6 concludes with an overview of experimental possibilities for verifying this surmised construction. It presents the most compelling Experimentum Crucis, exposing the Internet of Things in the framework of the Holographic Universe.

2 The Computational Model for Big Data

Information processing begins with the idea of a computational model. A computational model is an abstract scheme for transforming information. It operates in the following cycle: extract an item of data from the memory, transform the given item, and return the transformed item to the memory. The first model of this kind, with memory presented as an infinite tape with sequential access, was introduced by Alan Turing as a formal definition of the concept of an algorithm. John von Neumann introduced a practical computational model using random-access memory for the realization of the first computers. Remarkably, despite the tremendous successes of computer technology on all fronts over more than half a century, the basic computational model has stayed the same. This points to something of fundamental significance. The famous Church-Turing Thesis conveys the informal statement that all reasonable computational models are in fact equivalent in their algorithmic expressiveness. In simple words, any calculation that can be done on one computer can be done on another; the difference is only in performance. This immediately raises the question of the brain. On the one hand, the capabilities of the brain would then have to be equated with those of a conventional computer; on the other hand, this does not seem plausible.

We introduce a somewhat different computational model specifically suited to a Big Data environment (Fig. 1). In this computational model, the extraction of a data item from memory is determined by the whole bulk of data. Thus, only a relatively small part of the given Big Data explicitly contributes to actual computations; the vast majority of the data contribute implicitly, by determining which data items should be extracted for definite usage. So, access to specific data items in this computational model is determined by the context of all the data. This context-addressable access is different from that of traditional associative memories; it is similar to what is provided by Google’s PageRank algorithm. Some hardware/software details for the realization of the presented computational model are discussed in [1].
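To make the two access disciplines concrete, the following minimal Python sketch contrasts address-based extraction with context-determined extraction. The toy corpus and the scoring rule (query overlap weighted by a corpus-wide word-frequency profile) are our illustrative assumptions, not a specification taken from [1].

```python
# A minimal sketch of the two memory-access disciplines.
from collections import Counter

memory = [
    "stream processing of sensory events",
    "arithmetic routines for explicit calculation",
    "holographic recording of experience",
    "pipelined clusterization of data streams",
]

def classic_fetch(address: int) -> str:
    """Turing/von Neumann style: the item is named by an explicit address."""
    return memory[address]

def context_fetch(query: str) -> str:
    """Big Data style: the WHOLE memory shapes the choice. Every stored item
    contributes to a context profile, so data never addressed explicitly
    still bias what gets extracted."""
    context = Counter(word for item in memory for word in item.split())
    def score(item: str) -> float:
        words = item.split()
        overlap = sum(w in query.split() for w in words)
        weight = sum(context[w] for w in words) / len(words)  # global bias
        return overlap * weight
    return max(memory, key=score)

print(classic_fetch(1))
print(context_fetch("processing of streams"))
```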

Fig. 1 The “Big Data” computational model

3 Bounded Rationality Approach to Artificial Intelligence

The potential of Turing computations may be expanded with the speculative assumption of an “Oracle”—a black box guiding the choice among available alternatives. In complexity theory, an “Oracle machine” is an abstraction used to study decision problems. The lofty question of whether P = NP, i.e., whether non-deterministic and deterministic decision procedures are equivalent in their efficiency, is not strictly resolved yet, but the extraordinary power of “Oracle” computations is quite obvious.

The Big Data computational model exhibits the features of an “Oracle machine”. The selection of appropriate data items by a genuine “Oracle” or by a huge context—a “pseudo-Oracle”—is indistinguishable. The alternative of a truly supernatural “Oracle” versus a simulated “pseudo-Oracle” can be compared with the alternative of “Free Will” versus “Determinism”. Likewise, random choices made with truly random generators or with pseudo-random procedures, as with live versus pre-recorded TV shows, are indistinguishable.

The problem of “Artificial Intelligence” is usually associated with making clever decisions under an abundance of data, real or synthetic. In general, this might involve creating a very elaborate model of the system under study, so that it could accommodate as much as possible of the available data. However, in many practical cases this is unrealistic. A more sensible approach is to utilize a simplified model of the system guided by an “Oracle”. An instance of such an “Oracle” could be produced within the suggested computational model using a rich context of Big Data. In this way, an “Artificial Intelligence” system could acquire “Intuition from Context” (Fig. 2).

Fig. 2 Decision-to-data problem

The classical target for Artificial Intelligence is the game of chess. Success in this direction has been achieved primarily through the application of massive computational power. With the suggested approach, we plan to test another scheme: a beginner displays several possible moves in accordance with some simplified understanding of the game, and an “Oracle” (a qualified human or a supercomputer) selects the best of the displayed moves. A substantial improvement in play could thus be anticipated. It would be interesting to explore this approach for the game of Go, which is more computationally challenging than chess. A possible beneficial influence of this arrangement on the mental state of the implicated human player is brought up in Sect. 5.1.
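A skeleton of this scheme follows, under stated assumptions: `candidate_moves` and `oracle_rank` are hypothetical stand-ins for the beginner’s heuristic and the Oracle’s judgment, and no actual chess engine is wired in.

```python
# Sketch of the "beginner + Oracle" loop: a cheap model shortlists moves,
# and the Oracle only chooses among them, never generating moves itself.
import random
from typing import Callable, List

def play_with_oracle(position,
                     candidate_moves: Callable[[object], List[object]],
                     oracle_rank: Callable[[object, object], float],
                     k: int = 3):
    """The beginner shortlists k moves; the Oracle picks the best of them,
    mirroring bounded rationality: a simplified model plus a selective,
    context-rich advisor."""
    shortlist = candidate_moves(position)[:k]
    return max(shortlist, key=lambda m: oracle_rank(position, m))

# Toy usage: "moves" are integers, the beginner proposes a random shortlist,
# and the Oracle happens to prefer larger numbers.
moves = lambda pos: random.sample(range(10), 5)
oracle = lambda pos, m: float(m)
print(play_with_oracle(None, candidate_moves=moves, oracle_rank=oracle))
```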

4 Realization of Natural Intelligence

In 1943, McCulloch and Pitts introduced a formal computational unit: the artificial neuron. An elaborate network of these units is able to solve intricate multidimensional mathematical problems [6]. At the same time, there is a strong belief that complex artificial neural network activities should exhibit the sophistication of the brain. Yet the conception of the brain as a “complex” network of neurons is inadequate, since slow, erratic combinations of electrical and chemical processes in neuron systems cannot match the high performance of the brain in terms of processing power and reliability. It becomes apparent that understanding the brain needs a radical paradigm shift towards an extracorporeal organization of human memory [7]; see also the analysis in [8]. The extracorporeal realization of biological memory is based on our cellular automaton model of the physical world, resulting in the organization of Nature as an Internet of Things [2–5, 9]. Corresponding illustrations are given below in Figs. 3 and 4. Since the organization of the brain operates with tremendous amounts of information, its workings should be presented within a construction able to handle the suggested computational model for Big Data.

Fig. 3 Cellular automaton unfolding into the holographic universe

Fig. 4 Layered holographic memory: origin of nonlocality and mesoworld sophistication

The information capacity of human memory should be virtually unlimited, as everything is continuously recorded and never erased. From this perspective, John von Neumann estimated the capacity of human memory at about 2.8 × 10²⁰ bits [10]. The tremendous amount of information stored in human memory is used implicitly, as a passive context, while only a rather small portion of this information is explicitly active. In the book [11], it is estimated that a human of 80 years of age actively employs only a tiny fraction of all memorized information—about 1 Gb. The influence of this passive background on the workings of the brain is in accordance with Freud’s theory of the unconscious. The role of the unconscious in mental disorders is discussed in Sect. 5.1.
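A back-of-envelope check of these figures (a minimal sketch; reading “1 Gb” as one gigabyte is our assumption):

```python
# Ratio of explicitly active memory to von Neumann's total capacity estimate.
total_bits  = 2.8e20          # von Neumann's estimate of memory capacity [10]
active_bits = 1e9 * 8         # ~1 GB actively employed over a lifetime [11]
print(f"active fraction: {active_bits / total_bits:.1e}")   # ~2.9e-11
```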

The computational model with context-addressable access could be beneficial for a broad range of applications where information processing performance increases with the accumulation of examples. Particular instances include learning a language, pattern recognition, reinforcing skills, etc. Building up a large context allows effective solutions to be approached for almost all problems. However, some information processing tasks, for example arithmetic calculations, would be done more smoothly with ordinary computational models than by employing Big Data contexts.

The main operational mechanism for the implementation of the Big Data computational model is streaming. The significance of the streaming capacity for the organization of the brain is exposed through the effect of the so-called Penfield movies [12]. This effect was observed upon stimulation of different parts of the brain during surgery. The subjects of this stimulation began to relive earlier periods of time in the greatest detail, including various sensory components—visual, auditory, olfactory, and even somatic. Two circumstances are relevant to our Big Data consideration: first, the recall produces random samples of true experience, usually of no significance in the life of the patient—the context background; and second, the recalled pictures are “time-ordered”: the events go forward but never backward, which enables the organization of streaming.

The principles of holography make the extracorporeal placement of human memory realizable. The holographic organization of the Brain and the Universe is a popular topic for abstract theoretical speculation (see [13]). In our concept, the holographic mechanism is a secondary construction atop the cellular automaton model of the physical world (Fig. 3). Realization of a holographic mechanism entails clear technical requirements: a reference beam generating wave-train pulses and a relatively thin recording medium, in compliance with spatial and temporal coherence. This leads to a special design of holographic memory with a spreading recording layer (Fig. 4). The presented construction naturally incorporates the otherwise inconceivable property of the nonlocality of the Universe.

The spreading activation layer of the holographic memory acquires and retains signals from all the events in the Universe. Among these are signals from brains, which are recorded as the states of their memories. This information is modulated by the conformational oscillations of particular DNAs, so the whole holographic memory of the Universe is shared among the tremendous variety of biological organisms [14].

We would like to single out two prominent physical properties in relation to the considered construction: the tridimensionality of space and the anisotropy of the Cosmic Microwave Background. Since physical and biological processes rely on the informational control of the holographic mechanism, the waves involved in this mechanism must propagate in accordance with Huygens’ principle, i.e., with sharply localized fronts; otherwise, the interference of the holographic waves would blur. Huygens’ principle holds strictly only in spaces of odd dimension three or higher, and thus, among the low dimensions, only in 3D; this implicates tridimensionality for physical space and, hence, for the space of perception [15].

The appearance of the anisotropy of the Cosmic Microwave Background is of remarkable significance. To the great surprise of cosmologists, around 2003 a certain pattern built into the Cosmic Microwave Background was discovered [16]. This pattern was called the “Axis of Evil”, as it simply should not be there: according to common understanding, the Cosmic Microwave Background must be uniform. A number of esoteric ideas were put forward, suggesting that the unexpected imprint in the Cosmic Microwave Background is a message from a Supreme Being or from a neighboring Universe. In our theory, the Cosmic Microwave Background is not a post-creation remnant of cooling matter, but an accompanying factor of the layered holographic activities. Our explanation of the anisotropy of the Cosmic Microwave Background is natural, easy, and neat. The Cosmic Microwave Background is indeed uniform if observed from the center—the pole issuing the reference beam. But when observed from the eccentric position of the Solar System, these activities appear distorted. Our model exactly predicts the angle between the dipole and quadrupole axes: −40° [17] (see Fig. 5). If necessary, higher-order axes can also be exactly calculated and compared. Another, simpler and clearer manifestation of the Holographic Universe is referred to in the Conclusion.

Fig. 5 Anisotropy of the cosmic microwave background “Axis of Evil”—eccentric observation of holographic recordings, predicted theoretically several years before the actual discovery

Full realization of the surmised computational scheme for the organization of the brain, with the required holographic memory parameters, does not seem realistic with the hardware resources available on Earth. Since the major operation needed for the organization of the brain is massive stream processing, partial realization of this functionality within the suggested Big Data computational model can proceed in two directions.

First, following the approach suggested in [1], the required stream processing could be arranged with pipelining, which has the distinctive capability to effectively accommodate on-the-fly computations for an arbitrary algorithm. The most essential part of this processing is the suggested technique for on-the-fly clusterization. This type of brain functionality would be most suitable for special intelligent tasks, such as knowledge discovery—the formulation and verification of hypotheses.
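The pipelined technique of [1] is not reproduced here, but the flavor of on-the-fly clusterization can be conveyed by a single-pass “leader” clustering sketch; the distance threshold and the incremental-mean update are our illustrative choices.

```python
# Single-pass stream clustering: each arriving vector is absorbed by the
# nearest existing cluster or founds a new one, with O(#clusters) work per
# item and no stored history.
import math

def stream_clusters(stream, threshold=1.0):
    centers, counts = [], []
    for x in stream:
        if centers:
            j = min(range(len(centers)),
                    key=lambda i: math.dist(centers[i], x))
            if math.dist(centers[j], x) <= threshold:
                counts[j] += 1
                n = counts[j]  # incremental mean update, on the fly
                centers[j] = tuple(c + (xi - c) / n
                                   for c, xi in zip(centers[j], x))
                continue
        centers.append(tuple(x)); counts.append(1)
    return centers

print(stream_clusters([(0, 0), (0.2, 0.1), (5, 5), (5.1, 4.9)]))
```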

Second, in a much broader sense, the imitation of Natural Intelligence could be achieved by direct implementation of the basic holographic scheme of the brain, emulating the unobtainable smart hardware of the Universe with digital holography. This can be approached with Cloud Computing (Fig. 6). At a given Internet site, “layers” of holographic transformations for different objects are calculated with different angles of the incident reference beam. The search for a specified object is done by a sequential lookup for best matches with the digital holograms in the recorded layers. The incidence angle of the reference beam, reconstructed even from a partial match, identifies the object. The computational process consists of iterations of these sequential lookups.
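The lookup scheme can be caricatured in a few lines of numpy: each object is written into a layer as the interference of its field with a plane reference wave at its own angle, and a (noisy) probe is identified by the layer it matches best. All sizes, wavelengths, and the correlation test are illustrative assumptions, not values from the actual construction.

```python
# Toy digital-holography lookup: the reference angle of the best-matching
# layer identifies the object, even from a degraded probe.
import numpy as np

N, k = 512, 2 * np.pi / 8.0            # samples per layer, wavenumber
x = np.arange(N)

def reference(angle):                   # tilted plane reference wave
    return np.exp(1j * k * np.sin(angle) * x)

def record(field, angle):               # hologram = |object + reference|^2
    return np.abs(field + reference(angle)) ** 2

rng = np.random.default_rng(0)
objects = {a: rng.standard_normal(N) + 1j * rng.standard_normal(N)
           for a in (0.1, 0.25, 0.4)}   # object fields keyed by their angle
layers = {a: record(f, a) for a, f in objects.items()}

def identify(probe_field):
    """Sequential lookup: re-record the probe with each layer's own
    reference angle and keep the best-correlated layer."""
    scores = {a: np.corrcoef(layers[a], record(probe_field, a))[0, 1]
              for a in layers}
    return max(scores, key=scores.get)

noisy = objects[0.25] + 0.3 * rng.standard_normal(N)
print(identify(noisy))                  # -> 0.25
```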

Fig. 6 Emulation of cognitive facilities of the brain with digital holography

The mental activities of the brain are supposed to be completely software-programmable with such a Cloud Computing arrangement. The characteristic feature of “subconscious” processing—manipulation of small quantities of data whose selection is holistically determined by the entire data contents—can be exactly accommodated within the given framework. The selection procedure can be paralleled with an iterative version of Google’s PageRank, where a uniquely specified item, rather than a subset of items, must be extracted. This specification may simply rely on a kind of “I’m feeling lucky” button, and might be enhanced using the established procedure for on-the-fly clustering.
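As a sketch, here is ordinary PageRank power iteration over a toy link matrix, terminated with a single argmax (the “I’m feeling lucky” choice) rather than a ranked subset; the graph itself is arbitrary.

```python
# PageRank power iteration returning one uniquely specified item.
import numpy as np

A = np.array([[0, 1, 1, 0],           # row i lists links out of node i
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 0, 0, 0]], dtype=float)
M = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
d, n = 0.85, len(A)
rank = np.full(n, 1 / n)
for _ in range(50):                    # iterate to the fixed point
    rank = (1 - d) / n + d * M.T @ rank
print(int(np.argmax(rank)))            # the single item the context selects
```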

5 Big Data Fabrication of Biological Configurations

The Big Data environment poses obvious challenges to the organization of information processing, when huge amounts of data have to be reduced to something that makes sense. Apart from that, different complications arise in Big Data circumstances when it is necessary to produce objects with tremendous structural variety. This is an important existential question. A viable object has to be created in a relatively small number of steps, say, O(1) in algorithmic terminology. Algorithmic constructions that take O(N) steps to create very large objects would not be practicable. A productive resolution of these Big Data concerns is a decisive issue for biomedical objects. Here we show two characteristic instances of these Big Data problems that can be effectively resolved within our concept of the physical world as an Internet of Things, since it combines informational and physical processes. These two Big Data creations are related to the informational filling of human memory and to the reproduction of the material variety of macromolecules.

5.1 Joining the Cloud and Mental Disorders

Let us consider how human memory could be amassed with Big Data. The human brain contains about 10¹¹ neurons and 10¹⁴ synapses. It is believed that updates of the synapses somehow develop the contents of human memory. Let us assume that the chemical processes associated with one update take 1/100 s. Then, performing one update at a time would lead to the formation of the whole system of synapses in about 30,000 years. Thus, for a child of 3 years of age to have acquired an essentially complete system of synapses, this system would have to be continuously reorganized at a pace of 10⁶ updates per second.
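The arithmetic behind these estimates, reproduced directly:

```python
# Back-of-envelope numbers for sequential synapse formation.
synapses    = 1e14            # synapses in the human brain
update_time = 1e-2            # seconds per synaptic update (assumed)
year        = 3.156e7         # seconds in a year

sequential_years = synapses * update_time / year
print(f"one-at-a-time build-up: {sequential_years:,.0f} years")  # ~31,700

child_age_s = 3 * year
print(f"required pace: {synapses / child_age_s:.1e} updates/s")  # ~1e6
```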

In terms of algorithmic effectiveness, the formation of a Big Data memory structure by individual updates, even performed in parallel, does not appear feasible. In our conception of the physical world as an Internet of Things, the problem of the formation of biological memory is efficiently resolved in a simple way, with much less time and effort. This can be achieved merely by joining the holographic Cloud. The required updates of the Cloud contents are done at the pace of the repetition rate of the holography reference beam—10¹¹ Hz. The information substance obtained from the Cloud basically constitutes the background context for the Big Data computational model. The evolution of this context, as it is transferred from one generation to the next, is a conservative process. For better or for worse, the core of this context—paradigms, habits, myths, morality, etc.—cannot undergo rapid transformations. This context changes slowly. In a sense, this context conservatism accords with von Neumann’s saying: “It is just as foolish to complain that people are selfish and treacherous as it is to complain that the magnetic field does not increase unless the electric field has a curl. Both are laws of nature”.

Possible disruptions of the considered process of acquiring the context background for a newly developed organism can result in mental disorders, such as neuroses, schizophrenia, and autism. In most cases of such disorders, changes in the physical constituents of the brain are insignificant. So, it is a software rather than a hardware problem.

Various details associated with the considered mental disorders seem to corroborate our hypothetical scheme of their origin. First, let us start with the issue of heredity. The article [18] reports the sensational observation that “older men are more likely than young ones to father a child who develops autism or schizophrenia”. The study found that “the age of mothers had no bearing on the risk for these disorders”. The explanation of this observation implicates “random mutations that become more numerous with advancing paternal age”. It is questionable that the alleged mutations occur at random, for it is unclear why such mutations should specifically target mental disorders. In our concept, the observed effect can be elucidated by the diagram in Fig. 4: the number of holographic layers accumulating the father’s life information increases with the father’s age; so when this information is used to create the context background for the newborn child, it may encounter more disruptive influences. Also, the suspected transgenerational epigenetic influences on autism could be related to the same surmised mechanism of context background formation. Very surprisingly, as indicated in [19], “The mental health of a child’s mother during pregnancy is widely considered a risk factor for emotional and behavioral problems later in the child’s life. Now a new study finds that the father’s mental health during the pregnancy also plays a role.”

More than 500 genes have so far been implicated in autism, suggesting that no clear genetic cause will be identified [20]. Thus, it is vital to look at the role of environmental factors. Babies exposed to heavy traffic-related air pollution in the womb and during their first year of life are more likely to develop autism, according to [21]. In our view, nanodust affects DNA conformational oscillations and, hence, their communication facilities, thereby changing the context background. Finally, let us turn our attention to some possibilities of recovery, as reported in [22]: “Doctors have long believed that disabling autistic disorders last a lifetime, but a new study has found that some children who exhibit signature symptoms of the disorder recover completely.” In our concept, this self-cure could be enhanced by applying the technique exhibited in Fig. 2.

5.2 3D Printing and Self-Replication of Macromolecules

The organization of the physical world as an Internet of Things allows Big Data configurations to be produced not just for informational structures but for material constructions as well. The former are developed by joining the Cloud Computing process, while the latter, making use of quantum mechanics, provide what can be called quantum “3D printing”. Thus, the impact of information signals on material activities is exercised in synaptic gaps, where the propagation of electrochemical pulses in axons and dendrites is continued by chemical neurotransmitters. In this way, neural activity inside the brain can be modulated by informational control from the outside extracorporeal memory.

A vital Big Data operation in living systems is the self-replication of macromolecules. This is largely related to the creation of proteins in morphogenesis and metabolism. The regular way of protein production according to the Central Dogma of molecular biology—DNA to mRNA to protein—is not sufficient. Two main reasons can be pointed out. First, it is not feasible to fabricate bulky structures step by step. Second, in many circumstances proteins must be reproduced exactly, together with their folding structures, as with prions in the case of “mad cow” disease. The other way of macromolecule reproduction, which we present here, was suggested in [23]. The Big Data malfunctions associated with protein reproduction constitute “hardware” problems for the brain—neurological diseases—while the above-mentioned disruptions associated with the creation of the contextual background constitute “software” problems—mental disorders.

The suggested procedure for the self-replication of macromolecules is depicted in Fig. 7. It is based on our interpretation of quantum mechanical behavior as a result of interactive holography [24]. The involvement of the holographic mechanism directly exposes the dominant quantum property of nonlocality, which otherwise appears inconceivable. The specifics of quantum mechanical behavior are essentially determined by the interaction of two entities: the actual particles and their holographic feedback images. It has been shown that quantum transitions, as random walks of these entities, are described by Schrödinger’s equation. The imprecision in the localization of a particle between the actual and virtual entities leads to the fundamental quantum principle of uncertainty. In relation to macromolecules, this produces mesoscopic displacements of their components, which leads to an effective algorithmic procedure for the reproduction of “Big Data” structures.

Fig. 7 The algorithm for reproduction of macromolecules. 1 Macromolecule components with holographic copies. 2 Random scattering of the components over both places. 3 The two half-full patterns are reconstructed to completeness

The possibility of self-reproduction of macromolecules should reveal new, as yet unrecognized properties of the physical world, as anticipated by P.L. Kapitsa [25]. The surmised algorithm for the self-replication of macromolecules proceeds by swapping particles with their holographic placeholders, as illustrated and explicated in Fig. 7. The suggested self-replication algorithm can be figuratively imagined as “Xerox” copying. The proliferation of proteins in biological organisms by means of this algorithm is analogous to the creation of the Gutenberg Galaxy of books thanks to the breakthrough invention of the printing press.
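A toy rendering of the three steps of Fig. 7 follows; the placeholder bookkeeping and the completion rule are our illustrative reading of the scheme, not its physical mechanism.

```python
# Toy sketch of the Fig. 7 replication steps.
import random

def replicate(template):
    """Step 1: each component has a holographic placeholder at the copy site.
    Step 2: every component scatters at random to the original OR the copy.
    Step 3: each half-full pattern is completed from its placeholders,
    yielding two full copies without step-by-step assembly."""
    original = [c if random.random() < 0.5 else None for c in template]
    copy     = [c if o is None else None for c, o in zip(template, original)]
    complete = lambda half: [c if c is not None else t
                             for c, t in zip(half, template)]
    return complete(original), complete(copy)

protein = list("MKTAYIAKQR")
a, b = replicate(protein)
assert a == protein and b == protein
print("two complete copies:", "".join(a), "".join(b))
```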

6 Conclusion: Experimentum Crucis

Coping with the “Big Data” situation constitutes the key problem of purposeful behavior in natural and artificial systems. The human reaction to the Big Data environment is bounded rationality—a decision-making process complying with cognitive limitations and imposed deadlines. The ideology of bounded rationality leads to a computational model of the brain that goes beyond traditional Turing algorithmics. This Big Data computational model reveals the unconscious as the basis of sophistication.

The effectiveness of the given computational model encourages evaluating this approach as a general paradigm for the organization of biological information processing. Such a consideration leads to the view of the physical world as an Internet of Things. This kind of theoretical edifice is inspired by the practical advancements of modern information technology, much as the creation of the steam engine in the Industrial Revolution promoted the theory of thermodynamics. The new paradigm of the physical world as an Internet of Things materializes in the framework of the Holographic Universe. Distinctively, information processes in this construction realize the most mysterious property of the physical world—quantum nonlocality. Nowadays, the conventional interpretation of quantum theory encounters ever more serious complications. Thus, several prominent scientists say that “the absurdity of the situation” cannot be ignored any longer and that quantum mechanics is going to be replaced with “something else” [26].

The introduction of a new idea encounters fierce opposition from the public in general, and this work should be no exception. Yet—an exceptional circumstance—this work clearly shows why opposition to new ideas actually happens. People do not debate the validity of arguments, as logic is integrated into the human mental process. People argue about the interpretations of premises, which are determined by the scheme of the built-up “Big Data” context background. Therefore, there is basically no chance of making people change their minds. The famous words of Max Planck manifestly present this pessimistic reality, in conformity with the considered scheme: “A new scientific truth does not triumph by convincing opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

The surest way to confirm a new theory is an Experimentum Crucis. This methodology calls for an experiment that is consistent with the new theory but in irreconcilable disagreement with the established one. Overall, experiments cannot positively prove a theory; they can only surely disprove it. So, when a new idea cannot prevail directly, it can do so by counterattack, with an Experimentum Crucis that undermines the opponent’s paradigm.

As a matter of fact, the holographic mechanism is quite sensitive to object locations. Thus, the eccentric positioning of the Solar System in the Holographic Universe (Fig. 4) determines the otherwise incomprehensible anisotropy of the Cosmic Microwave Background. Yet a more compelling Experimentum Crucis for the establishment of the given construction should be simple and sensible. As such, we consider the “calendar effect” introduced in [23]. As seen in Fig. 4, the position of the Earth changes due to its motion along the solar orbit; so we can expect annual variations in all phenomena related to quantum mechanics. This “calendar effect” is universally applicable and plainly understandable, like, for example, the statement that nearly all bodies expand when heated. Currently, the most vivid examples of the surmised calendar effect have been determined for two phenomena: annual variability of the rates of radioactive decay in physics [27] and “seasonal” variations in cardiac death rates in biology [28]. Less clear-cut examples of the calendar effect have been described for numerous biomedical occasions; as for physics, such calendar variations should be anticipated for a number of fine quantum effects whose outcomes would systematically fluctuate from month to month.
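How such a calendar effect would be hunted in data can be sketched as fitting an annual sinusoid to a monthly rate series; the synthetic numbers below merely stand in for series like those of [27, 28].

```python
# Fit a fixed 1-year-period sinusoid to a monthly series and report its
# amplitude; a clearly nonzero amplitude would signal a calendar effect.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(48)                                   # months
rate = 100 + 1.5 * np.cos(2 * np.pi * t / 12 - 0.7) \
           + rng.normal(0, 0.5, t.size)             # synthetic measurements

w = 2 * np.pi / 12                                  # annual frequency
X = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
a, b, c = np.linalg.lstsq(X, rate, rcond=None)[0]   # least-squares fit
print(f"annual amplitude: {np.hypot(a, b):.2f} (true value 1.5)")
```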

The most articulate manifestation of the calendar effect, through variations in heart attacks [28], can be regarded as a generalization of the celebrated Michelson experiment, this time with a positive outcome, where holography plays the role of interferometry and ailing hearts serve as detectors of malformed proteins (Fig. 8).

Fig. 8 Revealing the holographic infrastructure of the universe: parallels between the Michelson experiment and the calendar effect