1 Introduction

Most of the free will literature addresses the question of whether our belief in free will is compatible with causation. The starting point of this paper is "incompatibilist," the position more interesting to physics, where free will would have to occur at variance with determinism. The old problem is that the ontological status of these notions gives us little possibility to define causality exactly, just as with free will itself. We do not know precisely what "free will" must mean, and we face the same vagueness with regard to determinism. Unlike the fundamental conservation laws of physics, there is no quantitative conservation law of causality that could be measured, calculated, or even explained statistically like the second law of thermodynamics.

Rather, determinism is the general idea that everything in Nature is a computation proceeding by the action of laws in a clockwork way. In practice, we simply observe stable blocks of events and draw the objective conclusion that some event \(X\) always precedes another event \(Y\). We take this evidence into account by saying that \(X\) must necessarily cause \(Y\), and we conclude that any causal chain of events is irreflexive (without causal loops) and transitive within the light cones of spacetime, as presented in the causal set approach to quantum gravity (Bombelli et al. 1987). A mathematically equivalent way is to depict those chains as a partially ordered set, i.e. a directed acyclic graph of "ancestors" and "offspring" (e.g. Wood and Spekkens 2015), as sketched below.
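
A minimal sketch in Python (a hypothetical toy example, not taken from the cited papers) of a finite causal set represented as a directed acyclic graph of "ancestors" and "offspring", with irreflexivity checked explicitly and transitivity obtained by closure:

```python
# Hypothetical toy causal set; an edge (x, y) reads "x causally precedes y".
edges = {("BigBang", "X"), ("X", "Y"), ("Y", "Z")}
events = {e for pair in edges for e in pair}

def ancestors(event, edges):
    """Transitive closure of the causal predecessors ("ancestors") of `event`."""
    result, frontier = set(), {event}
    while frontier:
        frontier = {x for (x, y) in edges if y in frontier} - result
        result |= frontier
    return result

# Irreflexivity: no event is its own ancestor, i.e. there are no causal loops.
assert all(e not in ancestors(e, edges) for e in events)

# Transitivity via the closure: the big bang is an ancestor of every later event.
print("BigBang" in ancestors("Z", edges))  # True
```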

After all, Laplacian determinism takes the universe to evolve continually by computing its next state in accordance with natural laws, and claims that if one were powerful enough to know all the indexical conditions of a locally isolated system precisely at one time, one could completely compute its state at any other time. For example, having the conditions \(A\) measured to unlimited precision at the moment a coin is tossed, one could compute the final position \(B\) of the coin, not on average but exactly, without assuming any probability, i.e. uniformly with probability \(p(B|A) = 1\). The probabilistic descriptions widely used in science are viewed there as representing the state of our knowledge, not of Nature herself, who abhors uncertainty and randomness.

Superdeterminism is the hypothesis that genuine randomness is impossible in Nature and that the present state of any physical system is totally and uniquely predetermined by the past. In this sense, superdeterminism is nothing but the claim that determinism must be complete in describing physical processes. Einstein criticized quantum mechanics precisely from the position that an ultimate physical theory, which must necessarily be consistent, i.e. free of contradictions, should also be complete in computing the state of reality (Harrigan and Spekkens 2010). The Einstein–Podolsky–Rosen (EPR) paradox was proposed to exhibit the incompleteness of quantum mechanics, with its uncertainty principle and random nonlocal collapse, by arguing that some hidden local variable should necessarily be introduced into quantum descriptions (Einstein et al. 1935). Correspondingly, until the completeness of determinism was explicitly questioned, the free will problem, debated in philosophy since ancient times, was also "hidden" and kept outside the scope of physics.

Classical determinism, being historically indifferent to (or agnostic about) free will, can be called "naïve" here. Relativity holds this position in the sense that a passive observer assigned to one or another frame of reference is not rejected but rather considered to play no essential role there. A complete deterministic theory is said to be observer-independent in the same sense that the physical world must be invariant with respect to measurement devices. Even Bell, who raised free will to one of the assumptions of his famous no-go theorem, was, in his own words, embarrassed at being caught in a metaphysical position, and treated the experimenters' settings of their measurement devices as "not determined in the overlap of the backward light cones" or "effectively free for the purpose at hand" (Bell 1993).

Thus, when imposed upon "naïve" determinism, superdeterminism can be seen as the statement that not only all objects classically observed in physics but also the observers themselves are completely controlled by natural laws from the past, though possibly in totally unobservable ways (’t Hooft 2007). It does not matter much that we are unable to explicitly trace these ways through very complex causal chains as they unfold with time in the universe. What matters in principle is that conscious observers cannot have free will, since all their actual actions have causal roots in their past light cones. To challenge superdeterminism, one must ask: is an observer in principle able to make a free choice? Superdeterminism answers: never.

However, quantum mechanics has shown that properties such as uncertainty, randomness, and nonlocality may be irremovable from reality at the quantum level. One important consequence of quantum mechanics is well expressed by the Conway and Kochen (2008) free will theorem (FWT). The theorem builds on the EPR paradox and the Kochen and Specker (1967) paradox by emphasizing the role of free will instead of the nonlocal correlations stressed in Bell's theorem; the FWT states that if free will is admitted to be really inherent to observers in setting their measuring devices, it spontaneously invokes some freedom in the entangled particles' response. From a neuroscientific perspective, as Conway and Kochen themselves imply, what the FWT states is that "freedom" in the particles' behavior should be turned around into the ultimate explanation of the experimenters' freedom to make a choice independent of the past.

It is usually noted that the free will assumption is not strictly necessary for the FWT, since the experimenters' choice of which measurements to perform can easily be replaced by an observer-independent physical system involving no conscious choice, such as a pseudorandom number generator. Nevertheless, the cornerstone of the problem is the randomness initiated by observers when they decide just how a device must be arranged to measure an outcome assumed to be predetermined in the underlying reality. With regard to Bell's theorem, formulated to disprove hidden local variables, the general picture obtained in the FWT is that the outcome does not really exist prior to a measurement but is generated randomly "on the fly" by the measurement. Though this does not really help in understanding how randomness might be relevant to observers' actions, randomness and free will come to be related in the fundamental physical framework (as will be discussed throughout this paper).

The idea that physically genuine randomness, as opposed to a mere statistical pattern in our predictions, cannot be created in a completely deterministic world was already noted by Einstein in his famous phrase "God does not play dice with the universe." Thus, free will must be rejected from the position that causation gives us no chance to be free. ’t Hooft (2007) insists that the present must emerge consistently from the past, and no one can modify the present without assuming some modification of the nearest past, which in turn must be modified from its own nearest past, and so on. Eventually, one would have to modify the very big bang through all the "ancestors" in the causal chains responsible for one's local actual present. Hence, any assumption about modification of the present must be utterly impossible, and free will can never take place in the universe.

Today quantum mechanics, with its no-go baggage, is commonly deemed a well-established theory, not merely an excellent mathematical model that might yet be disproved. Hence, the apparent incompleteness of quantum mechanics from the classical viewpoint (Einstein et al. 1935), a theory that nevertheless cannot be reduced to classical physics, must be imposed upon determinism itself, in the sense that there cannot be a duplex world in which one part is strictly deterministic whereas the other part can behave somewhat freely and thus reserve some place for the free will of conscious observers. While the consistency of determinism is undeniable, its completeness can be postulated only by superdeterminism.

In this paper, it will be shown that superdeterminism is either logically self-inconsistent or mystical and untestable. In standard big bang cosmology, the initial conditions of a superdeterministic universe with no random element anywhere would still have been random, in spite of the premise; otherwise a primordial artificial design must have taken place. Yet, when conceived in favor of hidden variable theories, superdeterminism might underlie a wide class of timeless and/or time-reversible theories (Barbour 2000; Anderson 2017) as well. By contrast, with randomness in Nature the arrow of time is preserved. Moreover, the future of the universe could not have been predetermined completely, in the sense that it should be impossible in principle to predict, from the boundary conditions of the big bang or at any later moment of its evolution, whether living, conscious observers might or might not appear there.

2 Free Will and Randomness

Since quantum mechanics was established, various ontological speculations have been imposed upon the nature of reality, such as Heisenberg's duplex world with an underlying reality qualitatively less real than the classical world of observed facts (Heisenberg 1958), or the von Neumann–Wheeler ontology of quantum attributes created by observation and related to a "choice on the part of an experimenter" (von Neumann 1955; Wheeler 1990). Being based on the Copenhagen interpretation, they can be viewed as dualistic in both the quantum–classical and mind–body senses if we agree that free will is really inherent to consciousness but incompatible with the determinism of the classical world. By contrast, the many-worlds interpretation (Everett 1957), with no wavefunction collapse, is not dualistic, but it leaves no room for an observer's freedom and can itself be viewed as superdeterministic in character (e.g. Gisin 2012).

Yet there is Bohm's (1980) theory of the underlying (one-world) reality as an undivided wholeness. What is of most interest for my aim here is that Bohm (1990) attempts to resolve both kinds of dualism by adopting some "mind-like quality" of Nature, not defined explicitly by him but rather hinged on the idea that in Bohmian mechanics particles can "make a choice" at beam splitters, being guided by their wave function. However, they again do so in a superdeterministic fashion. This is because the many-worlds (local) interpretation and Bohmian (nonlocal) mechanics are very similar in their statistical treatment of quantum randomness (it will become clear later that Bohmian wholeness is better related to a superdeterministic designed universe). In particular, Bohm says:

The content of our own consciousness is then some part of this over-all process. It is thus implied that in some sense a rudimentary mind-like quality is present even at the level of particle physics, and that as we go to subtler levels, this mind-like quality becomes stronger and more developed. Each kind and level of mind may have a relative autonomy and stability (p. 283).

First, it is natural to agree that if we are indeed endowed with free will, this could not have happened in defiance of the rest of the universe. Hence, there should be a certain physical place and an unambiguous path reserved for the evolution of free will from (comparatively) simple physical patterns to extremely complex human behavior. But it is very dubious that this special ability might emerge trivially from complex yet fully deterministic systems such as, for example, chaotic systems. To move the issue away from neuroscience and biology, with its "genetic determinism", down to the fundamental physical level, as Bell (1993) put it in his theorem: no hidden deterministic variable (even a nonlocal one, as in Bohmian mechanics) may control free will. In fact, we have a tautology: free will is to be free of the past.

This characterization does not assert that free will really exists but only suggests a criterion by which we can, at least theoretically, distinguish this very special behavior from classical processes described statistically. How, then, might something be free? In fact, the only natural phenomenon that might be able to account for every bit of freedom in the universe is quantum randomness. The probabilities in present-day quantum mechanics are fundamentally different from those in statistical mechanics. In statistical mechanics the probabilities are conceived to be reducible to finer details of the underlying ontological states. In contrast, the probabilities in quantum mechanics, according to contemporary orthodoxy, are irreducible to any more detailed underlying specification.

Thus, in the statistical context, free will should not be predictable to unlimited precision in principle. The unpredictability must not be of a merely statistical kind, depending on our incomplete knowledge of the underlying physical processes, but must be inherent to the processes themselves. What criterion, then, might an external experimenter adopt to certify free will as objectively emerging from our brain processes, if our subjective reports cannot be scientifically reliable? The experimenter should be able to say meaningfully that free will is more than a computation running on "autopilot" over the brain dynamics. A reason to assume randomness is that, when we deal with free will, a choice that might be characterized as genuinely random in thorough neuroscientific testing, by monitoring spontaneous brain activities across many spatio-temporal scales, and a choice that might be defined as truly free (not computable in advance from the initial conditions, even with the help of any future technology) should be objectively indiscernible in the fundamental physical framework of determinism.

Since von Neumann (1955) conjectured that consciousness might be an active participant in observation, much attention has been given to the question of whether the brain itself might be described as a quantum system (Penrose 1989; Kak 1995; Stapp 2007; Tegmark 2015). It is natural, therefore, to trace the origin of free will to the quantum level, encapsulated by Heisenberg (1958) in his words that quantum mechanics no longer represents the behavior of the particle but rather our knowledge of this behavior. The same words can be said about conscious observers. We describe human behavior probabilistically. Do the probabilities depend on our incomplete knowledge of a brain dynamics that is completely predetermined at the fundamental physical level, or do they come from the dynamics itself, which can admit randomness in principle? Thus, I see no other way to justify the appearance of free will in the world besides the criterion of Conway and Kochen (2008), which asserts that randomness in particle behavior exhibits exactly the same kind of freedom from the past that we grant to experimenters.

It seems right that admitting randomness into the universe does not explain free will as the freedom of choice made at one's conscious will, not arbitrarily or by caprice. The problem is that the very concept of freedom is controversial and usually taken by many people to mean self-controlled. It is commonly believed that free human actions should be caused by the agent's own reasons rather than occur randomly (Koch 2009). Yet it has been pointed out many times that the very term "free will" (or "self-controlled freedom") can be dismissed as logically paradoxical, consisting of two notions incompatible in the physical sense. Then either the "will" or the "free" part must be undermined. In the first case, some kind of mysterious freedom would bypass determinism, having no physical substrate yet able to impose its own physical constraints upon a choice. This is outside science. The other way is to accept a certain "compatibilist" will, predetermined by the past and computable in principle, i.e. illusory in essence, but still subjectively free in one's awareness.

Apart from these two, only two ultimate explanations can be offered to physics. On the one hand, if a voluntary action emerges unconsciously from neural (and always deterministic) processes before the subject becomes aware of it, as is usually reported in Libet-type neuroscientific experiments (Libet et al. 1983) studied extensively in the literature (Soon et al. 2008; Guggisberg and Mottaz 2013; Schlegel et al. 2015; Papanicolaou 2017), even if it emerges from background neuronal noise of a classical character (Schurger et al. 2016), there is no genuine freedom in it, though some mechanistic "will" can be granted there. On the other hand, if the subject's action is not totally predetermined by the past of the universe, it is difficult to find a testable difference between randomness and (albeit uncontrolled) freedom in the physical framework. At least theoretically, such an action could indeed be free of the past, but could there be one's personal will in it? A way to recombine the "free" part with "will" (or "freedom" with "control") is to admit a random quantum element only in origin.

Today there is a wealth of evidence that neural processes relevant to cognition can be sensitive to quantum fluctuations present widely within cellular structures (e.g. Sahu et al. 2013; O’Reilly and Olaya-Castro 2014; Chenu and Scholes 2015), in wet protein environments conducive to the survival of quantum effects (Brookes 2017). The key quantized events might then not be averaged out on classical timescales (Tegmark 1999) but amplified classically across different neural levels and "orchestrated" in the brain (Hameroff and Penrose 2014), since even quantum perturbations of a single neuron have a small but non-zero chance of triggering an avalanche (London et al. 2010) amplified enormously across many spatio-temporal scales, to a degree that could lead consciousness to a controlled choice that, nevertheless, had not been fully determined by the antecedent brain process. Thus, we could think of a decision maker as a free agent while, at the same time, not as simply coming to decisions by a random process or by physical predetermination.

In this sense, free will still has something to do with randomness as the ultimate scientifically legitimate obstacle to physical predetermination, one that might prevent us from computing the final outcome exactly from the local indexical conditions of a physical system of interest. For instance, Aaronson (2016) calls such an obstacle "Knightian uncertainty," viewed not as a practical, i.e. epistemic, limitation on knowing the indexical conditions of the brain states but as a natural gap in predicting free will, based on the quantum privacy of neural "freebits" guaranteed by the no-cloning theorem (Wootters and Zurek 2008). As a result, the brain, i.e. one's individual consciousness, could not in principle be copied by any future technology to run automatically on a digital computer, which clearly has no bit of freedom in origin. Otherwise, if brain-cloning were possible, then on the same compatibilist assumption one might grant "free will" to the computer. This paper will have nothing to do with that kind of will, since this philosophical notion can readily be related to a superdeterministic account.

Since genuine free will must contrast with the general notion of determinism as a computation, it seems right to define free will from the position that if a system's behavior could be computed in advance (as resulting from deterministic processes), there would be no reason at all to ascribe any freedom to the system. Indeed, if a free choice could be computed completely from the antecedent state of the brain and the environment that influences it, then the free choice would be disproved by the very fact of the computation. Of course, predictions of the kind "between life and death Alice will choose the former with probability \(p\) close to 1" can always be made and are not relevant to the issue (since Alice might still take the second option). By contrast, if her behavior were predictable uniformly at all times with probability \(p = 1\), we could then, at least in principle, copy Alice's consciousness mechanistically, in favor of a brain-machine identity with no free will in the machine's behavior. Clearly, Alice's free choice would then have to be dismissed as subjectively illusory. Otherwise we might ascribe "compatibilist" freedom to a machine as well.

3 Incompleteness and Theory of Everything

Today quantum mechanics is commonly believed to be complete in the sense that its probabilistic descriptions tell us everything that can actually be known about the underlying reality. Though the unitary evolution of the Schrödinger equation is deterministic, quantum mechanics is often regarded as an indeterministic theory because of the wavefunction collapse that makes the evolution stochastic. However, it remains unclear to what extent such "indeterminism" is relevant to the general picture of the classical world we observe around us. Instead, quantum mechanics can be said to be deterministically incomplete (Einstein et al. 1935), insofar as this probabilistic theory does not tell us everything about what is actually going on in the underlying reality.

Therefore, I would like to consider determinism itself in terms of consistency (Con) and completeness (Com). Causality as the action of physical laws is a fundamental property of Nature, and determinism is apparently consistent. It is not the Con but the Com of determinism that is defied by quantum phenomena, and it needs superdeterminism to be completed, by rejecting both randomness and free will. Let us now return to the definition of determinism as a computation running over all physical processes in Nature. If randomness were merely a consequence of our incomplete knowledge of those processes, we could never defy determinism. Conversely, if randomness were real in Nature, we might be skeptical about a complete description of the universe as a computation that includes conscious observers as its natural part. More exactly, this would mean that a "theory of everything" conceived to be described logico-mathematically (e.g. Tegmark 2007) could not be complete in principle.

A key difficulty with random events is that it is hard to ensure such events are unpredictable in principle, because incompleteness of knowledge about the initial conditions can always be presupposed. After all, randomness should account for two things that at first sight are different: free will and the evolution of life. The idea that the universe cannot be a computer working by a time-reversible algorithm (Wharton 2015) can also be linked directly to the free will problem. Clearly, in a timelessly "frozen" block universe (Barbour 2000) there is no room for either randomness or free will. By contrast, with randomness in Nature the evolution of the universe could not be predetermined completely, in the sense that it would be impossible to predict, even if one knew all the initial conditions precisely and were powerful enough to make the total calculations, whether life and conscious observers might or might not eventually arise in the universe. Such a scenario is possible only if the universe is not a computer continually calculating causal blocks of events from the uniquely special boundary conditions fixed at the big bang.

Instead, every actual moment must be somewhat special, though locally, just as the big bang itself is special globally (Smolin 2015). The general picture behind the Schrödinger equation is then that the underlying quantum reality contains "continuous potentialities" (Stapp 2001) that might evolve into a redundant set of consistent histories (Griffiths 2002), only one of which, however, becomes actual, as in quantum Darwinism, where the quantum states selected from the Hilbert space are seen to be random but objective, being recorded in a past on which all observers agree (Zurek 2009; Riedel et al. 2016). The random element emerging at every actual present is unique and irreversible. Thus, it is precisely randomness that should be relevant to all natural processes that might be viewed as free in origin.

If the universe is not the unique computation derived from the initial conditions fixed at the big bang, then the actual present is still open to the future so as to permit variation, i.e. randomness, at every present moment. To put it simply, the universe as a whole cannot be self-determined at the present by this very present, but it will be completely determined from the next moment on. Any random event still has a cause in its initial conditions, and an observer choosing one option rather than another at the present does not at all violate determinism, whose consistency is completed as Con + Com only behind the observer, in the past. Instead, one might speak of a "backward" or post factum conservation of causality at every actual present moment.

Indeed, whatever a conscious observer may do freely at will, causality will be completely preserved post factum. We never observe two or more incompatible effects of the same cause, whether it is the state of a classical object or of a quantum particle. Independent observers always share one and the same causally consistent past, including their own activities. Inconsistency of determinism should be absolutely impossible, for the simple reason that the universe might not exist at all under a condition allowing lawless freedom. It is logically absurd to deny determinism in the presence of us, conscious observers of this universe, but its completeness at every actual present moment can be defied with respect to our human status prepared beforehand in a superdeterministic scenario of the universe.

Let us compare this post factum conservation of causality with the "ontology conservation law" suggested by ’t Hooft (2016) precisely for such a scenario, where causality is completed forward. The law posits that the universe must exist in a single (not superposed) ontological state at the big bang and always evolve into a single ontological state, by ante factum causality conservation. Ontology is preserved from the past to the future, so that no real outcome of an experiment can ever be in a quantum superposition like Schrödinger's cat. There is no spontaneous collapse, no objective reduction, no environment-induced decoherence, no bit of randomness. Nor is any of this available to the observers' brains. Their ability to choose freely, for example, how to set their apparatus so as to obtain a particle's response as a measurement outcome is illusory. Both their input and their outcome are predetermined by the past, not by chance.

By contrast, post factum conservation takes any physical process in the reverse order, where every state of the process becomes classically certain (ontologically single) from the next, uncertain present state, which in turn will become certain from the future. All those states unfold in time to complete all the causal chains starting with the big bang, except for the actual present, where observers exercise their free will owing to the incompleteness (inCom) of determinism. Randomness is the unique phenomenon that might endow observers with free will in a universe taken to be causally completed post factum, not ante factum.

4 Randomness in Quantum Mechanics

Quantum mechanics tells us that Nature can admit genuine randomness. The problem is often debated in terms of \(\psi\)-ontology (Pusey et al. 2012; Leifer 2014), which poses the question of whether the wavefunction probabilistically describes the epistemic states of observers' knowledge or the ontic states of Nature herself at the quantum level. The orthodox approach holds that we should be satisfied with the epistemic states, where the wavefunction collapse is the effect of acquiring new information, like updating a classical Bayesian joint probability in the light of freshly obtained data, whereas the realistic interpretation is based on no-go theorems and asserts that the underlying ontic reality is just like that, i.e. fundamentally stochastic.

As stated, by our definition randomness can be viewed as a main feature of free behavior. Though this does not help much in understanding how randomness can affect free will in the brain, as discussed in neuroscience (e.g. Koch 2009; Barlas and Obhi 2013; Lavazza 2016), in quantum mechanics both randomness and an observer's free choice appear to converge on the measurement problem. The initial conditions of a quantum system cannot be determined until complete information is extracted by measurements, and that complete information is hidden by the very principles of quantum mechanics. A quantum measurement acts like a delayed completion of the initial conditions of the observed system. This appears to be observer-dependent in a retro-causal way, though, of course, it cannot be used by the observer to change the past, only to decide what has not yet been observed. Thus, within an independent quantum system the observer's will can be viewed as interfering randomly but substantially.

Of course, this does not mean at all that consciousness could bend the universe to its will by observation, because environment-induced decoherence occurs spontaneously everywhere as a kind of "self-measurement" of a quantum system through interaction with its environment. The theory of decoherence (Joos et al. 2003) emphasizes the role of the environment, where classicality is an emergent phenomenon regardless of any conscious observers. Thus, taken abstractly, consciousness is only a part of the classical environment. When considered in its own dynamics, however, consciousness itself can be viewed as a quantum system undergoing random "self-measurement" in the brain dynamics, giving rise to free will.

What we have in the general picture is a consciousness placed in a brain, which is placed in a body, which is placed in a laboratory, which is placed … in the universe. All these environments envelop consciousness at different spatial scales and at various physically accessible levels. Free will can disturb the brain's dynamics to govern the body's movements to arrange the laboratory's devices to influence a faraway system to change something in the world. To what scale might free will's disturbance extend? Generally speaking, nothing prevents us from assuming that under very special circumstances, like (figuratively) Archimedes' fulcrum to move the earth, one might somehow change a macrostate of our habitable world, not in spite of causality but while preserving the Con of determinism. Then why should we think of it from the Com position, as if that important event caused at one's will had been completely predetermined from a distant past?

In this way, the free will theorem implies that consciousness can intervene randomly in the state of a quantum system and, at the same time, itself be a phenomenon of some kind of quantum processes in the brain, amplified well enough to cause a behavioral effect, as Stapp (2007) and Hameroff and Penrose (2014) have advocated. Conway and Kochen (2008) put this in a provocative manner:

if indeed we humans have free will, then elementary particles already have their own small share of this valuable commodity. More precisely, if the experimenter can freely choose the directions in which to orient his apparatus in a certain measurement, then the particle’s response (to be pedantic – the universe’s response near the particle) is not determined by the entire previous history of the universe.

There is an even more provocative formulation made in favor of free will: Wheeler's Participatory Anthropic Principle (PAP), suggested by him with respect to the delayed-choice double-slit experiment. Wheeler (1990) formulated PAP as follows: observers are necessary to bring the universe into being. In his thought, since observers are endowed with the ability to make a free (in particular, delayed) choice, just this factor can be crucial for the classical and objective universe that we indeed see around us. PAP did not gather many adherents, for the simple reason that it was implausible to admit that observers might be necessary to bring the universe to classicality. Quantum phenomena do not exclusively require a conscious observer, as opposed to a particle detector, which can bring about the wavefunction collapse as well. Hence, we cannot be satisfied with any formulation referring to observers like us humans.

Today Zurek and colleagues attribute the emergence of classicality to quantum Darwinism, where observers acquire information about the states of quantum systems in the universe indirectly, by monitoring fragments of the environment that decoheres these systems into the objective past (Zurek 2009; Riedel et al. 2016). Quantum Darwinism can be fitted into a picture where classical Con + Com determinism emerges from the quantum (inCom) level over time by post factum causality conservation.

5 Free Will in Bell’s Assumptions

Local hidden variable theories were conceived as a way to restore the Com of determinism despite the conclusion that Nature might be deterministically inCom at the quantum level. Bell's theorem was the first of the no-go theorems to disprove those theories. The theorem, associated also with the Bell-CHSH (Clauser et al. 1969) inequality violation (BIV), follows from three basic assumptions (though there are different modifications depending on how the probabilities are conditioned): realism R, locality L, and measurement independence (fair sampling), denoted here as F for convenience. Beginning with Bell's works in the 1960s, the F assumption has typically been justified by an appeal to experimental free will.

In short, BIV starts with a statistical joint probability distribution \(p(a, b|A, B)\) for two experimenters, commonly named Alice and Bob. Here \(A\) and \(B\) denote their measurement settings (choices), while \(a\) and \(b\) denote their respective measurement outcomes. Following EPR terminology, Bell (1993) himself treated R in terms of hidden local variables \(\lambda\) that "determine precisely the results of individual measurements", denoted as \(a\) and \(b\) here. In the \(\psi\)-ontology context, realism is usually viewed as the requirement (counterfactual definiteness) that physical systems possess ontologically definite properties prior to and independent of the epistemic measurements made by Alice and Bob (or anyone else).

In Bell's formulation, as stated above, no hidden local variable may have control over free will. This is expressed probabilistically by assuming each observer's choice, \(A\) and \(B\), to be independent of those variable(s) \(\lambda\),

$$\begin{aligned} p(A|\lambda ) &= p(A), \\ p(B|\lambda ) &= p(B). \end{aligned}$$
(1)

Equation (1) is usually called "measurement independence" (though modified by some authors into "parameter independence" and "outcome independence", whose probabilistic formulations are conditioned more on L). Measurement independence, i.e. the F assumption here, seems natural to all experimental sciences as the freedom of observers to choose the initial conditions of experimentation, for example by deciding how to orient their polarizer, not which property of a physical system will be measured (this relates rather to R). A minimal numerical illustration of Eq. (1) is sketched below.
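
The sketch below (a toy model with assumed uniform distributions, not drawn from Bell's own papers) samples \(\lambda\) and \(A\) independently of one another and checks that conditioning on \(\lambda\) leaves \(p(A)\) unchanged:

```python
import random
random.seed(0)

# Draw a hidden variable lambda and a setting A with no reference to each other.
samples = [(random.random(), random.choice([0, 1])) for _ in range(100_000)]

p_A1 = sum(A for _, A in samples) / len(samples)
p_A1_given_low_lambda = (
    sum(A for lam, A in samples if lam < 0.5)
    / sum(1 for lam, _ in samples if lam < 0.5)
)
# Both estimates come out close to 0.5: conditioning on lambda does not change
# p(A), which is the operational content of the measurement-independence assumption F.
print(round(p_A1, 3), round(p_A1_given_low_lambda, 3))
```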

To ensure locality L, the measurement outcomes \(a\) and \(b\) must be spacelike separated, ruling out the superluminal signaling forbidden in relativity. On the whole, the hidden local variable \(\lambda\) may include all the information about the past of the entire universe except for the experimenters' settings \(A\) and \(B\), as required by F. After all, R and L together hold that Alice's and Bob's spacelike-separated outcomes \(a\) and \(b\) must be determined by \(\lambda\), but neither alone can causally depend on what is done with the other, spatially separated system,

$$\begin{aligned} p(a|A,B,\lambda ) &= p(a|A,\lambda ), \\ p(b|A,B,\lambda ) &= p(b|B,\lambda ). \end{aligned}$$
(2)

For clarity, F means that any possible correlations of the pair of entangled particles under settings \(A\) and \(B\) cannot be enforced by the experimenters' biased choices when adjusting their measurement devices, whereas L prevents those correlations from being caused by the notorious "spooky action at a distance". However, a local and deterministic explanation of the quantum correlations in BIV is always possible, as shown by Brans (1988): one simply needs the physical systems being measured to have suitable statistical correlations, via some common cause, with the physical systems performing the measurement. But then we must accept that even the experimenters' actions have their traces back in the past, as if Nature chose the state the experimenters are in. This is the price one has to pay for superdeterminism, as ’t Hooft (2015) insists (discussed below in Sects. 6, 7). Thus, the F assumption (1) can indeed be viewed as a necessary condition for testing any scientific theory by fair sampling and even, in general, for thinking at an unbiased will that is not controlled by any kind of hidden variables.

Ultimately, the general assumption of BIV is given by factorizing the joint probability distribution,

$$p(a,b\,|\,A,B,\lambda ) = p(a\,|\,A,\lambda ) \cdot p(b\,|\,B,\lambda ).$$
(3)

The probability factorization (3) is irrelevant to superluminal signaling, and thus the quantum nonlocality emerging in BIV does not violate the relativistic postulate (e.g. Ballentine and Jarret 2010). To avoid confusion with this fundamental principle, the term "Bell nonseparability" has even been suggested as more neutral than "quantum nonlocality" (Hall 2015). Indeed, in quantum mechanics itself the no-cloning theorem (Wootters and Zurek 2008) prohibits information from travelling faster than light, because quantum cloning would have to be a nonlinear operation. Instead, a certain "privacy" of quantum states is protected from uncovering by random collapse.

The assumptions (1)–(3) hold that the settings \(A\) and \(B\) are free of the hidden variables \(\lambda\) and causally independent of each other. Accordingly, being spacelike separated, the measurement outcomes \(a\) and \(b\) on a pair of entangled particles should not correlate beyond the Bell-CHSH bound. But they do, as if their responses indeed continue to be physically inseparable at a distance. Formally, given the joint probability distribution, the statistical bound based upon R, L, and F is violated just as quantum mechanics predicts (and as has been amply confirmed in many experiments to date); a worked example is sketched below. In \(\psi\)-ontology terms, this means that while the classical probability \(p\) is clearly epistemic in character, the unit vector \(|\psi\rangle\) must indeed be ontic, describing the behavior of Nature at the quantum level, not our incomplete knowledge of that behavior. To put it differently: quantum effects are genuinely random, unlike the familiar statistical patterns in gambling.
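
As a worked illustration (the standard textbook CHSH calculation, stated here only for orientation and not tied to any particular experiment), take the singlet-state correlation \(E(\alpha ,\beta ) = -\cos (\alpha - \beta )\). Assumptions (1)–(3) bound the CHSH combination by \(|S| \le 2\), whereas for the settings \(A = 0\), \(A' = \pi /2\), \(B = \pi /4\), \(B' = 3\pi /4\),

$$S = E(A,B) - E(A,B') + E(A',B) + E(A',B') = -\tfrac{\sqrt{2}}{2} - \tfrac{\sqrt{2}}{2} - \tfrac{\sqrt{2}}{2} - \tfrac{\sqrt{2}}{2} = -2\sqrt{2},$$

so that \(|S| = 2\sqrt{2} \approx 2.83 > 2\), which is exactly the violation confirmed in Bell-type experiments.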

Because of the metaphysical concepts involved in Bell's assumptions, the long discussion in physics over the decades has varied in concluding that either realism R, or locality L, or both, unified as "local realism", have to be ruled out by BIV. Many physicists, such as Gisin (2012), insist on the violation of locality alone, holding that realism cannot in principle be denied. Zeilinger and collaborators (Gröblacher et al. 2007) concluded, "Our result suggests that giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned", with regard to the idea that Nature should have been uniquely determined at the quantum level prior to and independently of measurements. Yet Griffiths (1987) argues, "What this violation tells us is not that locality breaks down, but rather that classical physics no longer applies in the quantum domain".

After all, to preserve both R and L, Bell's loophole against F can be proposed in favor of superdeterminism. This is the hypothesis that Alice and Bob have no free will to set their measuring devices independently of the environment and of their own past, contrary to what is assumed in F and presented by Eq. (1). Correspondingly, their choices \(A\) and \(B\) themselves should have a common cause in the overlap of their past light cones. Indeed, at least one such causal ancestor must have been fixed in the big bang so as to be consistently related to the hidden local variables \(\lambda\) somehow controlling their brains. This means, in particular, that if one were powerful enough to know those variables, one could predict Alice's (and Bob's) choice exactly, with \(p(A|\lambda ) = 1\). What we finally have is a fully deterministic picture, but only on the condition that, since randomness is impossible anywhere, the particles' behavior (R), the experimenters' "free" settings \(A\) and \(B\) (F), and their measurement outcomes \(a\) and \(b\) (L) must all be constrained by causal (though unobservable to us) chains stemming from the common cause as their deterministic "ancestor" (Brans 1988). A toy model of this loophole is sketched below.
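
The following minimal toy sketch (an assumed illustration, not Brans's or ’t Hooft's actual construction) makes the loophole explicit: if \(\lambda\) is allowed to fix the settings as well as the outcomes, then \(p(A|\lambda ) = 1\), and any correlation, including a Bell-violating one, can be "explained" by a common cause prepared in the past.

```python
import random
from math import cos, pi
random.seed(0)

def draw_lambda():
    """The common cause fixes everything in advance: settings and outcomes."""
    A = random.choice([0, pi / 2])           # Alice's setting, predetermined
    B = random.choice([pi / 4, 3 * pi / 4])  # Bob's setting, predetermined
    a = random.choice([+1, -1])
    # Outcomes are drawn so that the record reproduces the singlet correlation
    # E = -cos(A - B); here P(a == b) = (1 - cos(A - B)) / 2.
    b = a if random.random() < (1 - cos(A - B)) / 2 else -a
    return A, B, a, b   # one predetermined "run" of the experiment

record = [draw_lambda() for _ in range(200_000)]
# Given lambda, p(A | lambda) = 1: measurement independence fails by construction,
# yet replaying the record reproduces the quantum statistics and hence BIV.
```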

6 Superdeterminism as Untestable Cosmic Conspiracy

Now I would like to begin with the general assumption that fundamental quantum properties such as uncertainty, nonlocality, and randomness characterize different aspects of one and the same quantum reality. Indeed, it has been shown that Heisenberg's uncertainty principle and quantum nonlocality are inextricably and quantitatively linked in statistical frameworks for all physical theories (Oppenheim and Wehner 2010). Moreover, nonlocal correlations of entangled quantum states are used to certify the presence of genuine randomness in cryptographic applications. The reason is intuitively simple: if there are no local hidden variables behind the shared randomness, no one can hold a copy of these non-existent variables (Pironio et al. 2010; Gisin and Fröwis 2018). With all this, Bell's theorem obviously does nothing to the consistency Con of determinism. What has ultimately been shown is incompleteness, inCom, or, more exactly, that the Com of determinism cannot be compatible with BIV when it is based on R, L, and F.

The fact that an unknown quantum state cannot be discovered by a measurement or revealed by cloning (Wootters and Zurek 2008) suggests that quantum reality holds some privacy incompatible with our usual understanding of existence in classical physics. Griffiths (1987) argues that it is not local realism but "classical realism" that is defied by BIV. What sort of realism must this be in our understanding? As superluminal signaling is prohibited and replaced with no-cloning in quantum mechanics, might "classical realism" then be treated as the statement "no privacy in the underlying reality", standing for the hidden variables \(\lambda\)? If so, just this secured privacy could protect the quantum world against Com, as if Nature herself prevented us from completing the theory of everything by means of those hidden local variables. For the same reason this privacy should protect "freebits" (Aaronson 2016) against "cloning" a brain (a personality), despite the fact that brain processes always decohere to a classically pure state of mind in which observers perceive themselves all the time by the "perpetuum cogito mechanism" (Yurchenko 2016).

Now, on the assumption that uncertainty, nonlocality, and randomness all describe different aspects of one and the same private quantum ontology (if one rejected randomness but retained nonlocality, one would arrive at hidden nonlocal variables and, thus, at Bohmian mechanics, which is superdeterministic after all), their logical opposites can constitute the list of postulates underlying superdeterminism. Let \({\text{R}}\) stand for "no uncertainty in the ontological basis", and let \({\text{L}}\) mean "no nonlocality", with regard to both quantum correlations and superluminal signaling. Finally, \(\neg {\text{F}}\) stands for "no randomness", which, applied to brain processes, means "no free will" as well. Importantly, taken together these three key assumptions should provide the theory of everything with the Com of determinism. Symbolically,

$${\text{R}} \;\&\; {\text{L}} \;\&\; \neg {\text{F}} = {\text{Com}}$$
(4)

Suppose some physicist skeptical about the role of observers asks: would BIV be possible in principle in the absence of free will? More exactly, how might the Com condition (4) be reconciled with the violations evidenced experimentally many times, by following Bell's loophole for superdeterminism?

$${\text{Com}} \Rightarrow {\text{BIV}}?$$
(5)

To account for Eq. (5), the universe should "know" both Alice's and Bob's choices ahead of them. It turns out that the loophole by which Com can agree with BIV is one where a certain fine-tuned conspiracy covertly emerges across all of \(A\), \(B\), \(a\), and \(b\), controlled by the local hidden variables \(\lambda\) (Shimony et al. 1976; Wood and Spekkens 2015). Though Bell dismissed his own loophole in favor of free will, superdeterminism remains attractive to proponents of a theory of everything (which, clearly, should be Con + Com with no random element). This means that if the universe is a computation, then all physical events must be described by a finite set of elementary, deterministic operations, and those operations can in principle be executed by a Turing machine or some other digital automaton. We arrive at the following conclusion: if superdeterminism is true, then it can be said with certainty that free will is nothing but an illusion. Of course, randomness and free will can by no means violate the Con of determinism by introducing contradictions into Nature. But if Nature herself does not "play dice", contrary to the inCom position, the brain is nothing but a machine.

So ’t Hooft (2016) advocates local hidden variables, in contrast to quantum mechanics with its no-go baggage, by means of the Cellular Automaton Interpretation (CAI). Cellular automata are well known from Conway's Game of Life, which includes deterministic patterns implementing a Turing machine. While governed by a Boolean algebra of operators, they can display complex particle-like behavior (Vichniac 1984), but they exclude randomness and, hence, free will at once. ’t Hooft postulates some "very special ontological basis" in Hilbert space for all super-microscopic physical observables placed at the Planck scale. The postulate is laid on the standard cosmological model, where the universe must necessarily arise from some ontic state preexisting at the lowest physical level. This ontic state is committed to the "ontology conservation law" standing for the forward, or ante factum, conservation of causality in Nature.

CAI is conceived as a superdeterministic theory of everything, i.e. as the ultimate Con + Com theory able, in particular, to account for all Bell-type experiments, as was presented in the previous section by Eq. (4). Correspondingly, the hidden Planckian basis together with the ontology conservation law can stand for realism R in Bell's theorem. Locality L is then imposed upon the cellular automaton interactions. Finally, ’t Hooft (2016) argues for \(\neg {\text{F}}\) by asserting that free will is an "actually extremely simple notion" in a physical context. What one might expect of a theory is that it predicts how its variables evolve in an unambiguous way, with no randomness, from any initial state chosen by observers. Thus, the \(\neg {\text{F}}\) assumption reduces free will to the observers' ability to choose those "unconstrained initial conditions" (’t Hooft 2007) which cannot, however, modify the past. Taken together, these assumptions are meant to provide CAI with Com.

Now I would like to concentrate mostly on an observer's free will as presented by Eq. (1), the measurement independence assumption. To provide its negation \(\neg {\text{F}}\), let Alice's setting \(A\) of an apparatus (such as a polarizer) in a Bell-type experiment be decided by tossing a coin (or by replacing her choice with a pseudorandom number generator) whose probabilistic outcome is purely statistical, i.e. epistemic in character, while being completely determined, so that no truly random (free) bit is believed to be possible there. Thus, it seems that the free will assumption is not crucial to BIV and can indeed be thrown overboard. Nevertheless, as Conway and Kochen (2008) note, Alice must still wish to toss the coin (or to choose that generator), because it would not happen by itself. While inanimate things have no will, and the state of affairs around Alice runs classically by itself in a clockwork fashion, Alice's behavior can, unlike them, be free, i.e. not completely predetermined from the past, so that any action of hers must interfere randomly in the state of affairs.

It does not matter much if, instead of tossing a coin, Alice receives "instructions" from a distant quasar on how to orient her polarizer. Indeed, her free will would then likewise be eliminated by those cosmic photons emitted billions of years ago. However, with this modification a "cosmic conspiracy", rather than Alice's free choice, must inevitably enter the experiment (Gallicchio et al. 2014), as presented in Fig. 1.

Fig. 1

All causal chains unfold from the big bang within the global light cone. Here the present \(t_{pr}\) is conventionally depicted as a spacelike horizontal line. The past of the entire universe within the global light cone is then Con + Com below the line for each causal chain separately (since the relativity of simultaneity does not affect the causal order), but not at \(t_{pr}\). A schematic Bell-type experiment can then be inserted. Here a source \(S\) of two entangled photons lies beyond Alice's and Bob's past light cones, while their spacelike-separated settings \(A\) and \(B\) are placed on the line to produce the future measurement outcomes \(a\) and \(b\). Whenever observers make choices, that moment is always their actual present \(t_{pr}\), while the measurement is made independently at a moment \(t_{pr}^{'} > t_{pr}\), a new actual present standing for the particles' response, not for their will. A cosmic conspiracy (Gallicchio et al. 2014) can then be imposed upon the assumption that Alice and Bob set their polarizers not at will but by monitoring the light signals (drawn as \(45^\circ\) diagonals) emitted by distant quasars \(Q_{A}\) and \(Q_{B}\). To account for BIV, both \(Q_{A}\) and \(Q_{B}\) should "conspire" with the source \(S\) via a common cause (their mathematical infimum in the causal structure of the universe) in the past, billions of years ago

On the whole, whatever choice Alice has made, her intervention is either free or constrained from the past. Whether the intervention is genuinely free or constrained depends on how the conservation of causality works in time, forward or backward, uniformly over all natural systems, including Alice's brain. There is only a tiny difference between saying (1) "\(X\) is completely determined from the big bang at \(t = 0\) for all \(t > 0\)" and (2) "\(X\) is completely determined at every actual present \(t = t_{pr}\) for all \(t < t_{pr}\)", because both say the same thing about the whole previous history of the universe at any past moment \(t\). More exactly, they impose Con + Com on any closed interval \(\Delta t\) within the past of the global light cone (Fig. 1), \(\Delta t \subset [0, t_{pr})\), except for \(t_{pr}\) itself, which remains inCom in (2) but not in (1). The problem of Com in Bell-type experiments would remain even if all conscious observers were removed from the universe, because free will in Eq. (1) can result only from inCom.

However, as ’t Hooft insists, this sort of cosmic conspiracy is excessive on the Com assumption in Eq. (5), since everything in our universe takes place for a reason caused by the action of physical laws, not just by chance. The only thing necessary for \(\neg {\text{F}}\) is to agree that any of Alice's choices of "unconstrained initial conditions" was predetermined locally by her nearest past, controlled by the hidden local variables \(\lambda\). This is just a way of saying that Alice's brain is a clockwork toy of a global computation running by ante factum causality conservation over the whole universe. In a world with no random element, everything is completely predetermined, including Alice's brain when she decides how to set her apparatus. Whatever thoughts, intentions, and actions observers have at their actual present moment \(t_{pr}\), these were already fixed in the causal order by the past.

Instead, the inCom position is as follows. Here it is the incompleteness of determinism at every actual present that comes first, not Alice's free will, which can be replaced statistically by tossing a coin, by receiving "instructions" from a faraway quasar, or even by using a radioactive atom (itself in an inCom position with respect to its decay times) as a source of certified random numbers, without affecting BIV at all. Indeed, once Alice's free choice has been made at the present moment \(t_{pr}\), her intervention in the causal order contributes instantaneously to the Con + Com picture, and this freshly changed macrostate then proceeds consistently to some new present moment \(t_{pr}^{'}\), at which the measurement is performed.

Bell (1993) himself agreed that any manipulation of a random element in the experimental settings would be unlikely to be vital to BIV. Conway and Kochen (2008) also emphasized that free will itself could not be prior to the particles' responses (otherwise one would arrive at Wheeler's PAP), but that both sorts of freedom in their behaviors should be physically connected in origin at the quantum level. Randomness is essential only insofar as it results in Alice's freedom to act. Alice's choice is effective in setting a polarizer at her actual present, but it does not matter much for the entangled particle's "freedom" to respond randomly afterward. Of course, observers have no control over the behavior of the wavefunction (’t Hooft 2007). Instead, having already been contributed to the Con + Com picture, any choice \(A\) of Alice's can no longer be crucial at any moment \(t_{pr}^{'} > t_{pr}\) when she is detecting the response \(a\), as in Fig. 1. There is no physical difference between the effect caused by Alice's measurement and the spontaneous environment-induced decoherence (Joos et al. 2003; Riedel et al. 2016) that would occur in the universe even with no observers at all.

After all, "cosmic conspiracy" is only a trick to eliminate the observers' free choice from experimental descriptions. But superdeterminism holds it fatalistically: nothing can behave freely in a universe that is completely determined from the past. Indeed, if time means nothing fundamentally, and the present moment \(t_{pr}\) is relativistically inessential, then the difference between Com and inCom vanishes over all causal chains in the universe (Fig. 1). Hence, one might wonder about the broken symmetry of causation: if the past, as we see it behind us every time, is consistent and completely determined, then why should the future not be Con + Com as well? Accordingly, any future event would in principle be as unchangeable as past ones are. It turns out that superdeterminism and timeless theories are intrinsically founded on the same general assumption, postulating that the universe is a computation (time-reversible in principle).

In other words, all the computational outputs over the whole universe, including all brains, are locally predetermined at every step and given definitely in advance, so as to provide Com with R, L, and \(\neg {\text{F}}\) in Eq. (4). As a result, if the present is indeed illusory, free will is illusory as well. Thus, Smolin (2015) has noticed that those who believe in the illusory character of the passage of time also tend to believe that Nature is a computation without any randomness, of which the many-worlds interpretation gives a complete picture. By the same logic, they typically maintain strong artificial intelligence, i.e. the brain-machine identity with no free will. At least one reason why these beliefs can be wrong is that superdeterminism turns out to be untestable.

In science, a sample taken arbitrarily out of a domain of interest under analysis is often called random. At first sight, this should have nothing to do with genuine randomness in Nature. In fact, such a sample is tacitly presupposed to be chosen at an experimenter's unbiased will in order to obtain an objectively representative picture of the data. But if free will is indeed physically linked to randomness in brain processes, the sample can be viewed as a result of true randomness/freedom in the experimenter's brain. Conversely, if free will is ultimately dismissed, nothing can be called random at all, because any choice (sampling, decision) must have been predetermined and somewhat biased, though clearly not at the experimenter's discretion.

Thus, a price one has to pay for the \(\neg {\text{F}}\) assumption, which results in our somewhat biased will, is that superdeterminism cannot be experimentally falsified. Whatever event happens anywhere in the universe, including in all the brains responsible for preparations and measurements in scientific experiments (in particular, those related to BIV or Libet-type experiments), it can always be said that the result is just the one that had been predetermined from the beginning of time. This is a reason why many physicists reject superdeterminism as the end of our rationality (Zeilinger 2010) and even the “suicide of Science” (Gisin 2012). On the other hand, if experimenters, being guided by predetermination, cannot do any better and have no way to improve their results, there is no reason to reject the results obtained to date. Instead, we can simply agree that these are the natural conditions under which science can uniquely be done (Vaccaro 2018). Does it then mean that the triumph of science will ultimately bring future generations to a superdeterministic theory of everything, a scientifically rigorous version of fatalism with not a bit of freedom in human behavior?

For example, consider the following statement: “We do what we must do, and we have what we must have.” Many people will find this statement robust if it is tacitly conditioned on “naïve” determinism. No doubt we must do lots of things (first of all, physiological ones such as breathing) to stay alive, and our future depends on what we are doing now. But in a superdeterministic context, the conditionals must sound like this: “We do what the universe wants us to do, and we have what the universe has assigned to us by law.” Thus, the future can still be thought to depend on what people are doing presently, but this belief turns out to be no more than a popular fallacy, since everything comes causally out of the big bang, whose boundary conditions are the only thing that matters in principle.

7 Big Bang and Designed Universe

We do not believe Wheeler’s PAP, according to which observers are necessary to bring the universe into being. A physicist skeptical about free will can then say that Eq. (5) merely eliminates the infamous observer-dependent mystery from quantum mechanics by stating that the universe would be the same even if all observers were removed from it. We agree that this statement is undeniable, and the physicist can be happy with it. However, quite another question arises. If free will (as emerging in the brain) is causally impossible without randomness by our definition, then would the evolution of the universe be the same under the Com condition? This is to ask: could a universe with no irreversible randomness evolutionarily generate within itself that skeptical physicist (or any other physicist)? For it does not follow from anything that conscious observers would have to appear there.

The answer may be: yes, it might, but only under very special big bang conditions. Thus, the dilemma behind the Con + Com picture is as follows. Of course, free will cannot be responsible for quantum correlations in Bell-type experiments, but randomness linked to uncertainty and nonlocality can. In other words, randomness can still take place in Nature through spontaneous decoherence processes even if conscious observers are totally removed from the universe. The question the theory of everything is finally left with is this: how and why should those observers have appeared in a superdeterministic universe?

First, in the standard cosmological model, the universe came into being from an unobservable state, the big bang singularity. The second law of thermodynamics holds that the entropy of the universe as a whole can only increase with time. Hence, physicists believe, the entropy of the universe in its initial state was very small, probably just zero. In other words, the big bang should have been established in a state of improbably high order, encoding generic information about all events in the evolution of the universe. Second, superdeterminism claims that no randomness may trespass between causal chains as they unfold from the big bang within the global light cone (Fig. 1). Recall that in superdeterminism the present should play no computational (and physical) role at all in the evolution of the universe. Correspondingly, there could be no room for free will.
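Schematically, the first point is a minimal entropy argument (with \(S\) the coarse-grained entropy of the universe and \(t_{0}\) the big bang moment; the notation is introduced here only for illustration):

\[
\frac{dS(t)}{dt} \ge 0 \quad \Longrightarrow \quad S(t_{0}) \le S(t) \ \ \text{for all } t > t_{0},
\]

so the initial state must have been the most ordered state in the entire history of the universe.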

In order to hold Con + Com, superdeterminism requires everything in the universe to be predetermined in the big bang: in particular, the origin and evolution of life on Earth, the lives of each of us and of all our biological progenitors, all of our everyday actions, even every bit of our brain activity at any moment of time. No unconstrained conditions could be chosen anywhere. Instead, only different degrees of constraint would be imposed on observers’ freedom in the states when Alice is walking on the street, or involved with experimental preparations, or locked motionless in a box, or even unconscious under anesthesia. Controlling everything in the universe, the big bang alone would be responsible for all human histories, including any situation Alice finds herself in. But logically this means that those initial conditions had been either very aptly chosen by an artificial design or established randomly. Whichever option is taken, both outcomes are incompatible with superdeterminism, which is postulated precisely against free will and randomness, and yet one of them must be assumed at least once in order to trigger all the causal chains of events in the universe.

Indeed, in the light of the existence of us conscious observers, we should assume that the initial conditions of the universe had been established randomly but favorably to us, as was hotly debated a few decades ago in regard to the anthropic arguments, which appeal to the fortuitous coincidence of life-supporting laws of Nature and the fine-tuned fundamental cosmological constants (e.g. Barrow and Tipler 1986). Of course, it would be futile to appeal to those arguments in standard “naïve” determinism, where they are tautological by self-evidence. However, when put into a superdeterministic scenario, this assumption turns out to be self-inconsistent, since it admits randomness at least for the big bang.

Clearly, we have no right to be surprised by the obvious fact that a great number of statistically independent happenings, from those at the cosmological scale, including the fine tuning of fundamental constants, down to ordinary physical events in causal chains, should have been necessary in the past for our existence in a universe allowing for randomness (first of all, in the genetic mutations of Darwinian evolution) and for the everyday freedom of humans to decide their way and influence each other. We all as a biological species, and each of us alone as a person, are just an accidental byproduct of all of that. There is no reason to wonder why “anthropic” arguments, being always made post factum, are deterministically legitimate. But superdeterminism is based on ante factum causality conservation, which prohibits randomness and free will anywhere in the evolution of the universe.

Hence, a primordial non-random design would have to take place there. However, it would certainly be surprising if we are not merely machines, but each of us, endowed with consciousness and the illusion of free will, is a machine designed from the big bang in the theory of everything. This is an unavoidable logical consequence of superdeterminism in the big bang scenario. Eventually, not only should the origin and existence of conscious observers on Earth have been encoded in the initial conditions of the universe; the life of each of us should have been encoded as well, excluding any bit of freedom in the past over all our evolutionary progenitors, from the simplest biological organisms to our own parents. Hence, all of our life might be (at least theoretically) predictable by one powerful enough to run all the computations from those conditions. The only “freedom” we are left with in such a designed universe is our ignorance of the future.

The general aim of science is to make successful predictions about how things are going on, and the theory of everything is tacitly conceived of as the triumph of human knowledge, consistently embracing the hierarchy of all branches of science from physics to psychology (Tegmark 2007) up to an ultimate explanation of consciousness (Yurchenko 2017). Now suppose that Alice were able to compute her near future just by virtue of the superdeterministic theory of everything (which would lead us, as stated, to scientifically rigorous fatalism). Does it mean that Alice, having gained this valuable knowledge of the future, should nonetheless follow her own predictions (possibly unfavorable to her), or might she take the opportunity to change her mind with regard to the predetermined course of future events by a logical “choice-mechanism” (MacKay 1960)? Why not?

Indeed, it is striking to assume that the designed universe would somehow prevent Alice from changing her mind and her future actions, as those should have been unchangeably fixed in advance from the big bang. Suppose, however, for a moment that this might be so. Since Alice’s knowledge would have to be scientifically legitimate and embodied naturally in her memories (as any other knowledge), Nature could not remove it from Alice’s mind without resorting to violence. How should this be done in her brain? To preserve Con + Com, this way would lead us to some sort of philosophical “zombie”, as if Alice could not resist her “destiny”, fixed in a clockwork way by ante factum causality conservation (and therefore theoretically computable!). Thus it is not yet sufficient to deprive Alice of free will while her deterministically legitimate ability to think logically remains intact. Already knowing her destiny, she would have to instantly forget her own predictions and remain ignorant of her future.

Another way seems more plausible. This is to assume that the designed universe might “foresee” Alice’s computations and falsify them beforehand, so as to avoid a self-inconsistency in the future. Whatever actions Alice would then take, just those had been “conceived” by the universe. This is very similar to the way that led Bohm (1980, 1990) to postulate his “mind-like” (nonlocal and superdeterministic) universe with the implicate order. Here, however, we come back again to the cosmic conspiracy that forbids violating a future once predetermined at the big bang moment. If we regard these explanations as unacceptable, only one reasonable conclusion can be suggested: Alice could not make precise predictions at all.

Importantly, those predictions must be impossible not only practically, because no human, nor even any other imaginable intelligent being, can compute faster than Nature herself, as ’t Hooft (2016) argues, adding that this explains how Alice may restore the illusion of free (compatibilist) will in superdeterminism. Such an explanation, though credible (no one can be omniscient), opens the door to a cosmic conspiracy which might confuse Alice’s brain in gaining even the small amount of information that would be sufficient to defy her predetermined future. Instead, precise predictions must be impossible in principle, exclusively due to the randomness that Nature, unwilling to lie to us, admits in physical processes by post factum causality conservation. On this condition, the very universe could not “know” the future. There is no self-contradiction in the inCom position.

After all, one might generalize this dilemma to the following statement: any precise knowledge of the future, at least locally (gained just by virtue of the superdeterministic theory of everything, which is possible only due to Con + Com), is impossible in principle (and results in inCom), because this knowledge can falsify the future. Without a bit of mysticism, the obstacle to gaining such scientific knowledge of the future must emerge eventually from quantum privacy (Wootters and Zurek 2008), as it does in cryptographic applications: if Alice learns an unknown quantum state, the state will be disturbed by her learning. The disturbance would then be amplified many times over by classical processes and by the free will of conscious beings at larger scales.
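As a minimal single-qubit illustration of such disturbance (a sketch under textbook quantum mechanics, not a construction from the cited works; the state and the measurement basis are arbitrary choices):

```python
import numpy as np

# Learning an unknown qubit state by measurement disturbs it: unless the
# state happens to be an eigenstate of the chosen basis, the post-measurement
# state differs from the original one.

rng = np.random.default_rng(0)

def measure_z(state):
    """Projective measurement in the computational (Z) basis."""
    p0 = abs(state[0]) ** 2                       # Born probability of outcome 0
    outcome = 0 if rng.random() < p0 else 1
    post = np.zeros(2, dtype=complex)
    post[outcome] = 1.0                           # state collapses to |0> or |1>
    return outcome, post

unknown = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)  # unknown to Alice
outcome, disturbed = measure_z(unknown)

print("outcome:", outcome)
print("overlap with original:", abs(np.vdot(unknown, disturbed)) ** 2)  # < 1
```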

This is to say: even if the destiny of the universe is predetermined on average at the cosmological scale, in the sense that, beginning with the big bang, the universe will inevitably end up with a big “apocalypse” in some global scenario, there still may be a huge difference between a scenario where no conscious observers had ever existed and another one where we (inhabiting a tiny region of spacetime) are asking these questions. The difference is of the same kind as the very essence of our life, placed uniformly between a birth randomly granted to us and a death predetermined thermodynamically at a coarse scale. Those two extreme states tell us nothing about what had happened between them (if they themselves are even distinguishable in that global scenario of the universe), in particular, whether we had or had not the freedom to decide our way.

8 Conclusions

Free will and randomness somehow come to be physically indiscernible when one attempts to define them at the underlying physical level of determinism. In a purely logical way, randomness can be possible only due to the incompleteness of determinism, as inCom. Even in the quantum domain, consistency Con is not at stake but can be completed only post factum, by allowing randomness to emerge at any actual present. To exclude both free will and randomness, superdeterminism instead holds Con + Com by postulating that causality is completed ante factum, i.e. conserved ahead of all processes in the universe from the big bang as the only moment of physical importance.

It can be shown that superdeterminism can be self-inconsistent in big bang cosmology. First, thermodynamically, the universe should have started out extremely ordered at the big bang. Hence, on the assumption of the superdeterministic scenario, in which all the evolution of the universe unfolds with not a bit of randomness in the innumerable sets of causal chains coming continually out of the big bang, often entangled with each other as they grow and expand from the past towards the future, every event of those chains should have been completely encapsulated in the initial conditions of the universe. On the whole, those initial conditions should have been either random or constrained. In regard to superdeterminism, the former contradicts its premise and hence must be rejected.

Otherwise, the latter has to be taken. Then an artificial design would be inevitable there. What does this mean? Modern evolutionary theory is explicitly based on random genetic mutations (stemming ultimately from the quantum level) that have to undergo natural selection in Darwinism. Both of these are commonly believed to be entirely responsible for the biodiversity of life emerging at the molecular level, where Nature begins to exhibit properties typically associated with living systems. In the absence of irreversible randomness, with genetic mutations assumed to be pseudorandom, only an artificial primordial design could be responsible for evolution and, in particular, for our existence in a superdeterministic universe totally computable in advance from the big bang.

On the contrary, with irreversible randomness in Nature, the evolution of the universe cannot be predetermined completely, and the future is open, just as people feel when they decide their way. This goes in line with the arrow of time, by stating that every actual now-moment is as special locally as the big bang is special globally, in contrast to the timeless theories. If the passage of time is real, free will can be real as well. The ultimate free choice we are left with in a theory lies between randomness, i.e. inCom resulting from “quantum privacy”, on the one hand, and an artificial primordial design, i.e. Com followed by “cosmic conspiracy”, on the other. In the latter case, however, any choice of ours should have been simply designed from the beginning of time. In other words, superdeterminism can restore the Com of determinism as a computation running from the big bang only by pushing every bit of freedom in the universe back to the prerequisites of the big bang itself.

Let me finish with a rhetorical note. If one takes Einstein’s aphorism about God and dice seriously, one must agree that this notion of “God” forbids free will but endows people with the illusion of freedom (and responsibility) for a reason incomprehensible to them. Following this logic, God should have, in particular, designed a human named Albert Einstein at birth so as to put Special/General Relativity and the aphorism into his brain within a sophisticated scenario where we humans play the role of passive participants. To me it is striking to assume that all our actions have been controlled ahead of time by the variables hidden in the Planckian basis. Importantly, the variables (if viewed in Quantum Bayesianism) should be hidden forever; otherwise, by uncovering them, Alice could in principle falsify her predetermined destiny, thereby bringing the universe to a contradiction. One might then seriously think that God prevents humans from making their future better.