1.1 Forms of Complexity

There are at least 45 definitions of complexity, according to Seth Lloyd as reported in The End of Science (Horgan, 1997, pp. 303–305). Rosser Jr. (1999) argued for the usefulness in economics of a definition he called dynamic complexity, originated by Day (1994).Footnote 1 On this definition, a dynamical economic system is complex if its deterministic parts do not endogenously generate convergence to a point, a limit cycle, or an explosion (or implosion). It has been argued that nonlinearity is a necessary but not sufficient condition for this form of complexity,Footnote 2 and that this definition constitutes a suitably broad “big tent” encompassing the “four C’s”Footnote 3 of cybernetics, catastrophe, chaos, and “small tent” (now better known as heterogeneous agents) complexity.

Norbert Wiener (1948) founded cybernetics, which relied on computer simulations and remained popular with Soviet central planners and computer scientists long after it had fallen out of favor in the West. Jay Forrester (1961), inventor of the flight simulator, founded its rival, system dynamics, arguing that nonlinear dynamical systems can produce “counterintuitive” results. Probably its most famous application was in The Limits to Growth (Meadows et al. 1972), eventually criticized for its excessive aggregation. Arguably both descended from general systems theory (von Bertalanffy, 1950, 1974), which in turn developed from tektology, the general theory of organization due to Bogdanov (1925–29).

Catastrophe theory developed out of broader bifurcation theory; it relies on strong assumptions to characterize the patterns by which smoothly changing control variables can generate discontinuous changes in state variables at critical bifurcation values (Thom, 1975), with Zeeman’s (1974) model of stock market crashes the first use of it in economics. Empirical methods for studying such models depend on multi-modal statistics (Cobb et al. 1983; Guastello 2011a, b). Because of the strict assumptions it relies upon, a backlash developed against its use, although Rosser Jr. (2007) argued this became overdone.Footnote 4

While chaos theory can be traced back to Poincaré (1890), it became prominent after climatologist Edward Lorenz (1963) discovered sensitive dependence on initial conditions, aka “the butterfly effect.” Applications in economics followed suggestions made by May (1976). Debates over empirical measurement and problems associated with forecasting have reduced its application in economics (Dechert, 1996).Footnote 5 It is possible to develop models that exhibit combined catastrophic and chaotic phenomena, as in chaotic hysteresis,Footnote 6 first shown to be possible in a macroeconomic model by Puu (1990), with Rosser Jr. et al. (2001) estimating such patterns for investment in the Soviet Union in the post-World War II period.
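To make sensitive dependence concrete, the following minimal sketch iterates the logistic map popularized by May (1976) from two initial conditions differing by one part in a billion; the map, parameter value, and starting points are generic textbook choices assumed for illustration, not taken from the economics papers cited above.

```python
# Minimal illustration of sensitive dependence on initial conditions using the
# logistic map x_{t+1} = r*x_t*(1 - x_t) at r = 4 (a chaotic regime).  The map,
# parameter, and starting values are generic textbook choices, not a model from
# the literature cited in the text.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-9)   # perturb the start by one part in a billion

for t in (0, 10, 20, 30, 40, 50):
    print(f"t={t:2d}  |x_a - x_b| = {abs(a[t] - b[t]):.9f}")
# The gap grows from 10^-9 to order one within a few dozen iterations: the
# "butterfly effect" that frustrates point forecasts of chaotic systems.
```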

The small tent or heterogeneous agents type of dynamic complexity does not have a precise definition. Influentially, Arthur et al. (1997a) argue that such complexity exhibits six characteristics: (1) dispersed interaction among locally interacting heterogeneous agents in some space, (2) no global controller that can exploit opportunities arising from these dispersed interactions, (3) cross-cutting hierarchical organization with many tangled interactions, (4) continual learning and adaptation by agents, (5) perpetual novelty in the system as mutations lead it to evolve new ecological niches, and (6) out-of-equilibrium dynamics with either no or many equilibria and little likelihood of a global optimum state emerging. Many point to Thomas Schelling’s (1971) study on a 19-by-19 Go boardFootnote 7 of the emergence of urban segregation due to nearest neighbor effects as an early example.
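A minimal sketch of a Schelling-style nearest-neighbor model on a 19-by-19 grid conveys the flavor of such emergence from local interaction; the tolerance threshold, vacancy rate, and random-relocation rule below are illustrative assumptions rather than Schelling’s exact protocol.

```python
# A minimal Schelling-style segregation sketch on a 19-by-19 grid.  The
# tolerance threshold, vacancy rate, and random-relocation rule are illustrative
# assumptions, not Schelling's (1971) exact protocol.
import random

SIZE, THRESHOLD, VACANCY = 19, 0.3, 0.1
random.seed(0)
grid = [[random.choice([0, 1]) if random.random() > VACANCY else None
         for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(i, j):
    me = grid[i][j]
    if me is None:
        return False
    nbrs = [grid[i + di][j + dj]
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di or dj) and 0 <= i + di < SIZE and 0 <= j + dj < SIZE
            and grid[i + di][j + dj] is not None]
    return bool(nbrs) and sum(n == me for n in nbrs) / len(nbrs) < THRESHOLD

for _ in range(100_000):   # repeatedly move unhappy agents to random vacant cells
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    if unhappy(i, j):
        empties = [(a, b) for a in range(SIZE) for b in range(SIZE)
                   if grid[a][b] is None]
        a, b = random.choice(empties)
        grid[a][b], grid[i][j] = grid[i][j], None

# Even with a mild 30% same-type preference, like-typed clusters emerge:
# segregation as a macro pattern arising from purely local interactions.
print("\n".join("".join(".XO"[0 if c is None else c + 1] for c in row)
                for row in grid))
```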

Other forms of nonlinear dynamic complexity seen in economic models include non-chaotic strange attractors (Lorenz 1983), fractal basin boundaries (Lorenz 1983; Abraham et al. 1997), flare attractors (Hartmann and Rössler 1998; Rosser Jr. et al. 2003a), and more.

Other non-dynamic complexity approaches used in economics have included structural (Pryor 1995; Stodder 1995),Footnote 8 hierarchical (Simon 1962), informational (Shannon 1948), algorithmic (Chaitin 1987), stochastic (Rissanen 1986), and computational (Lewis 1985; Albin with Foley 1998; Velupillai 2000).

Those arguing for a focus on computational complexity include Velupillai (2005a, b) and Markose (2005), who argue that it is superior because it rests on more well-defined ideas, such as algorithmic complexity (Chaitin 1987) and stochastic complexity (Rissanen 1989, 2005). These are seen as founded more deeply on the informational entropy work of Shannon (1948) and Kolmogorov (1983). Mirowski (2007) argues that markets themselves should be seen as algorithms evolving to higher levels in a Chomsky (1959) hierarchy of computational systems, especially as they are increasingly carried over computers and resolved through programmed double-auction systems and the like. McCauley (2004, 2005) and Israel (2005) argue that such dynamic complexity ideas as emergence are essentially empty and should be abandoned for either more computation-based or more physics-based ones, the latter especially relying on invariance concepts.

At the most profound level computational complexity involves the problem of non-computability. Ultimately this depends on a logical foundation, that of non-recursiveness due to incompleteness in the Gödel sense (Church 1936; Turing 1937). In actual computer programs this manifests itself most clearly in the form of the halting problem (Blum et al. 1998). This amounts to the halting time of a program being infinite, and it links closely to other computational complexity concepts such as Chaitin’s algorithmic complexity. Such incompleteness problems present foundational problems for economic theory (Rosser Jr. 2009a, 2012a, b; Landini et al. 2020; Velupillai 2009).

In contrast, dynamic complexity and such concepts as emergence are useful for understanding economic phenomena and are not as incoherent and undefined as has been argued. A sub-theme of some of this literature, although not all of it, has been that biologically based models or arguments are fundamentally unsound mathematically and should be avoided in more analytical economics. Instead, such approaches can be used in conjunction with the dynamic complexity approach to explain emergence mathematically, and they can explain certain economic phenomena that may not be easily explained otherwise.

1.2 Foundations of Computational Complexity Economics

Velupillai (2000, pp. 199–200) summarizes the foundations of what he has labeled computable economicsFootnote 9 as follows.

Computability and randomness are the two basic epistemological notions I have used as building blocks to define computable economics. Both of these notions can be put to work to formalize economic theory in effective ways. However, they can be made to do so only on the basis of two theses: the Church-Turing thesis, and the Kolmogorov-Chaitin-Solomonoff thesis.

Church (1936) and Turing (1937) independently realized that several broad classes of functions could be described as “recursive” and were “calculable” (programmable computers had not yet been invented). Turing (1936, 1937) was the first to realize that Gödel’s (1931) Incompleteness Theorem provided a foundation for understanding when problems were not “calculable,” with calculability coming to be known as “effective computability” since Tarski (1949). Turing’s analysis introduced the generalized concept of the Turing machine, now viewed as the model for a rational economic agent within computable economics (Velupillai 2005b, p. 181). While the original Gödel theorem relied upon a Cantor diagonal proof arising from self-referencing, the classic manifestation of non-computability in programming is the halting problem: that a program will simply run forever without ever reaching a solution (Blum et al. 1998).

Much of recent computable economics has involved showing that when one tries to put important parts of standard economic theory into forms that might be computable, they turn out not to be effectively computable in any general sense. These include Walrasian equilibria (Lewis 1992), Nash equilibria (Prasad 1991; Tsuji et al. 1998), more general aspects of macroeconomics (Leijonhufvud 1993), and whether a dynamical system will be chaotic or not (da Costa et al. 2005).Footnote 10

Indeed, what are viewed as dynamic complexities can arise from computability problems in jumping from a classical, continuous real number framework to a digitized, rational-numbers-only framework. An example is the curious “finance function” of Clower and Howitt (1978), in which solution variables jump back and forth discontinuously over large intervals as the input variables go from integers, to non-integer rationals, to irrational numbers and back. Velupillai (2005b, p. 186) notes the case of a Patriot missile missing its incoming target by 700 m in Dhahran, Saudi Arabia in 1991, leading to the deaths of 28 soldiers, due to a computer’s non-terminating cycling through the binary expansion of a decimal fraction. Finally, the discovery of chaotic sensitive dependence on initial conditions by Lorenz (1963) because of computer roundoff error is famous, a case that is computable but undecidable.
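The role of digitization here is easy to see directly. The sketch below, with invented numbers purely for illustration, shows that the decimal fraction 0.1 has no terminating binary expansion, so repeatedly adding its stored approximation drifts away from the exact value, the generic mechanism behind both the Patriot clock error and Lorenz’s roundoff discovery, though not a reconstruction of either actual system.

```python
# The decimal fraction 0.1 has no terminating binary expansion, so a machine
# stores only an approximation of it.  Accumulating that truncation error is the
# generic mechanism behind the Patriot clock drift and Lorenz's roundoff
# discovery; the loop below is only an illustration, not either actual system.
from decimal import Decimal

print(Decimal(0.1))        # the double-precision value actually stored for "0.1"

total = 0.0
for _ in range(360_000):   # e.g., counting 0.1-second ticks for ten hours
    total += 0.1
print(total - 36_000.0)    # nonzero drift arising from roundoff alone
```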

There are actually several computability-based definitions of complexity, although Velupillai (2000, 2005a, b) argues that they can be linked as part of the broader foundation of computable economics. The first is the Shannon (1948) measure of information content, which can be interpreted as attempting to observe structure in a stochastic system. It is thus derived from a measure of entropy in the system, or its state of disorder. Thus, if p(x) is the probability density function over a set of K states denoted by values of x, then the Shannon entropy is given by

$$ H(X)=-\sum \limits_{x=1}^{K}p(x)\ln \left(p(x)\right) $$
(1.1)

From this it is trivial to obtain the Shannon information content of X = x as

$$ \mathrm{SI}(x)=\ln \left(1/p(x)\right) $$
(1.2)

It came to be understood that this equals the number of bits in an algorithm that it takes to compute this code. This led Kolmogorov (1965) to define what is now known as Kolmogorov complexity as the minimum number of bits in any algorithm a(x), itself not a prefix of any other algorithm, that a Universal Turing Machine (UTM) would require to compute a binary string of information, x, or

$$ \mathrm{K}(x)=\min \left|a(x)\right|, $$
(1.3)

where | | denotes the length of the algorithm in bits.Footnote 11 Chaitin (1987) would independently discover and extend this minimum description length (MDL) concept and link it back to Gödel incompleteness issues, his version being known as algorithmic complexity, which would be taken up later by Albin (1982)Footnote 12 and Lewis (1985, 1992) in economic contexts.Footnote 13
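Although Kolmogorov/algorithmic complexity itself cannot be computed, the length of a compressed encoding provides a crude computable upper bound. The sketch below uses that standard illustration of the minimum-description-length intuition; it is not a construction from the papers cited here.

```python
# Kolmogorov/algorithmic complexity is not computable, but the length of a
# compressed encoding gives a crude computable upper bound.  This is a standard
# illustration of the minimum-description-length intuition, not a construction
# from the papers cited in the text.
import random, zlib

random.seed(1)
regular = b"01" * 5000                                         # highly patterned string
chaotic = bytes(random.getrandbits(8) for _ in range(10_000))  # incompressible noise

for name, s in (("regular", regular), ("random", chaotic)):
    print(name, len(s), "->", len(zlib.compress(s, 9)), "bytes after compression")
# The patterned string collapses to a tiny fraction of its length (a "short
# program"), while the random string barely shrinks: high algorithmic complexity.
```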

While these concepts usefully linked probability theory and information theory with computability theory, they all share the unfortunate aspect of being non-computable. This would be remedied by the introduction of stochastic complexity by Rissanen (1978, 1986, 1989, 2005). The intuition behind Rissanen’s modification of the earlier concepts is to focus not on the direct measure of information but to seek a shorter description or model that will depict the “regular features” of the string. For Kolmogorov a model of a string is another string that contains the first string. Rissanen (2005, pp. 89–90) defines a likelihood function for a given structure as a class of parametric density functions that can be viewed as respective models, where θ represents a set of k parameters and x is a given data string indexed by n:

$$ {M}_k=\left\{f\left({x}^n\mid \theta \right):\theta \in {R}^k\right\}. $$
(1.4)

For a given f, with f(yn) a set of “normal strings,” the normalized maximum likelihood function will be given by

$$ {f}^{\ast}\left({x}^n,{M}_k\right)=f\left({x}^n,{\theta}^{\ast}\left({x}^n\right)\right)/\left[\int f\left({y}^n,{\theta}^{\ast}\left({y}^n\right)\right)\mathrm{d}{y}^n\right], $$
(1.5)

where the denominator of the right-hand side can be defined as being Cn,k.

From this the stochastic complexity is given by

$$ -\ln {f}^{\ast}\left({x}^n,{M}_k\right)=-\ln f\left({x}^n,{\theta}^{\ast}\left({x}^n\right)\right)+\ln\ {C}_{n,k}. $$
(1.6)

This term can be interpreted as representing “the ‘shortest code length’ for the data xn that can be obtained with the model class Mk.” (Rissanen 2005, p. 90). With this we have a computable measure of complexity derived from the older ideas of Kolmogorov, Solomonoff, and Chaitin. The bottom line of Kolmogorov complexity is that a system is complex if it is not computable. The supporters of these approaches to defining economic complexity (Israel 2005; Markose 2005; Velupillai 2005a, b) point out the precision given by these measures in contrast to so many of the alternatives.
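For a concrete case, the one-parameter Bernoulli model class makes Eq. (1.6) directly computable, since the normalizer reduces to a finite sum over counts of ones; the sketch below uses an invented binary string purely for illustration.

```python
# A minimal sketch of Rissanen's stochastic complexity for the one-parameter
# Bernoulli model class, where the normalizer C_{n,1} reduces to a finite sum
# over possible counts of ones.  The data string is invented for illustration.
from math import comb, log

def max_log_lik(x):                      # ln f(x^n, theta*(x^n))
    n, m = len(x), sum(x)
    ll = 0.0
    if 0 < m:
        ll += m * log(m / n)
    if m < n:
        ll += (n - m) * log((n - m) / n)
    return ll

def log_C(n):    # ln C_{n,1} = ln sum_m C(n,m) (m/n)^m ((n-m)/n)^(n-m)
    total = sum(comb(n, m) * (m / n) ** m * ((n - m) / n) ** (n - m)
                for m in range(n + 1))
    return log(total)

x = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0]   # hypothetical binary string
stochastic_complexity = -max_log_lik(x) + log_C(len(x))   # Eq. (1.6), in nats
print(f"n={len(x)}, stochastic complexity = {stochastic_complexity:.3f} nats")
```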

However, Chaitin’s algorithmic complexity (1966, 1987) introduces a limit to this precision, an ultimate underlying randomness. He considered the problem of a program having been started without one knowing what it is, and thus of the probability that it will halt, which he labeled Ω. He saw this randomness as underlying all mathematical “facts.” Indeed, this Ω itself is in general not computable (Rosser Jr. 2020a).

An example of this involves a theorem of Maymin (2011) that straddles the boundary of the deep unsolved problem of whether P (polynomial time) equals NP (nondeterministic polynomial time) in programs,Footnote 14 thus having an unknown Ω. This theorem shows that under certain information conditions markets are efficient if and only if P = NP, which few believe. At the edge of this, da Costa and Doria (2016) use the O’Donnell (1979) algorithm, which is exponential and thus not P but so slowly growing as to be “almost P,” to establish a counterexample function to the P = NP problem. The O’Donnell algorithm holds if P < NP is probable for any theory strictly stronger than Primitive Recursive Arithmetic, even as that theory cannot prove it. Such problems appear, for example, in the computationally complex traveling salesman problem. Da Costa and Doria establish that under these conditions the O’Donnell algorithm behaves as an “almost P” system that implies an outcome of “almost efficient markets.” This is a result that walks on the edge of the unknown, if not the unknowable.

A deeper logical issue underlying computational complexity and economics involves fundamental debates over the nature of mathematics itself. Conventional mathematics assumes axioms labeled the Zermelo-Fraenkel-[Axiom of] Choice system, or ZFC. But some of these axioms have been questioned, and efforts have been made to develop axiomatic mathematical systems not using them. The axioms that have been challenged are the Axiom of Choice, the Axiom of Infinity, and the Law of the Excluded Middle. A general term for these efforts has been constructivist mathematics, with the variant that particularly emphasizes not relying on the Law of the Excluded Middle, which means no use of proof by contradiction, known as intuitionism, initially developed by Luitzen Brouwer (1908) of fixed point theorem fame.Footnote 15 In particular, standard proofs of the Bolzano-Weierstrass theorem use proof by contradiction, with this underlying Sperner’s Lemma, which in turn underlies standard proofs of both the Brouwer and Kakutani fixed point theorems used in general and Nash equilibrium existence proofs (Velupillai 2006, 2008).Footnote 16

For mathematicians, if not economists, the most important of these debatable axioms is the Axiom of Choice, which allows for the relatively easy ordering of infinite sets. It underpins standard proofs of major theorems of mathematical economics, with Scarf (1973) probably the first to notice these possible problems. The Axiom of Choice is especially important in topology and in central parts of real analysis. Its strongest formulation has been shown to be false by Specker (1953). One way around some of these problems is non-standard analysis, which allows for infinite and infinitesimal real numbers (Robinson 1966) and makes it possible to avoid the Axiom of Choice in proving some important theorems.

The question of the Axiom of Infinity may perhaps be most closely tied to the questions about computational complexity. The deep philosophical idea behind these constructivist approaches is that mathematics should deal with finite systems that are more realistic and more readily and easily computed. Going against this most strongly was Cantor’s introduction of levels of infinity into mathematics, an innovation that led Hilbert to praise Cantor for “bringing mathematicians into paradise.” But the computability critics argue that mathematical economics must fit the real world in a credible way, with efforts ongoing at constructing such an economics based on a constructivist foundation (Velupillai 2005a, b, 2012; Bartholo et al. 2009; Rosser Jr. 2010a, 2012a).

1.3 Epistemology and Computational Complexity

Regarding computational complexity, Velupillai (2000) provides definitions and general discussion and Koppl and Rosser Jr. (2002) provide a more precise formulation of the problem, drawing on arguments of Kleene (1967), Binmore (1987), Lipman (1991), and Canning (1992). Velupillai defines computational complexity straightforwardly as “intractability” or insolvability. Halting problems such as studied by Blum et al. (1998) provide excellent examples of how such complexity can arise, with this problem first studied for recursive systems by Church (1936) and Turing (1936, 1937).

In particular, Koppl and Rosser reexamined the famous “Holmes-Moriarty” problem of game theory, in which two players who behave as Turing machines contemplate a game between each other involving an infinite regress of thinking about what the other one is thinking about (Morgenstern 1935). Essentially this is the problem of n-level playing with n having no upper limit (Bacharach and Stahl 2000). This has a Nash equilibrium, but “hyper-rational” Turing machines cannot arrive at knowing whether or not they have that solution due to the halting problem. That the best reply functions are not computable arises from the self-referencing problem involved, fundamentally similar to that underlying the Gödel Incompleteness Theorem (Rosser Sr. 1936; Kleene 1967, p. 246). Aaronson (2013) has shown links between these problems in game theory and the P = NP problem of computational complexity. Such problems extend to general equilibrium theory as well (Lewis 1992; Richter and Wong 1999; Landini et al. 2020).

Binmore’s (1987, pp. 209–212) response to such undecidability in self-referencing systems invokes a “sophisticated” form of Bayesian updating involving a degree of greater ignorance. Koppl and Rosser agree that agents can operate in such an environment by accepting limits on knowledge and operating accordingly, perhaps on the basis of intuition or “Keynesian animal spirits” (Keynes 1936). Hyper-rational agents cannot have complete knowledge, essentially for the same reason that Gödel showed that no logical system can be complete within itself.

However, there are limits even to Binmore’s proposed solution. Diaconis and Freedman (1986) have shown that Bayes’ Theorem fails to hold in an infinite dimensional space. There may be a failure to converge on the correct solution through Bayesian updating, notably when the basis is discontinuous. There can instead be convergence on a cycle in which agents jump back and forth from one probability to another, neither of which is correct. In the simple example of coin tossing, they might jump back and forth between assuming priors of 1/3 and 2/3 without ever converging on the correct probability of 1/2. Nyarko (1991) has studied such cyclical dynamics in learning situations in generalized economic models.

Koppl and Rosser compare this issue to Keynes’s problem (1936, Chap. 12) of the beauty contest. In this the participants win if they most accurately guess the guesses of the other participants, potentially involving an infinite regress problem as participants try to guess how the other participants are going to guess about their guessing, and so forth. This can also be seen as a problem of reflexivity (Rosser Jr. 2020b). A solution comes by choosing to be somewhat ignorant or boundedly rational and operating at a particular level of analysis. However, as there is no way to rationally determine the degree of boundedness, which itself involves an infinite regress problem (Lipman 1991), this decision also involves an arbitrary act, based on animal spirits or whatever, a decision ultimately made without full knowledge.

A curiously related point comes from later results (Gode and Sunder 1993; Mirowski 2002) on the behavior of zero intelligence traders. Gode and Sunder have shown that in many artificial market setups zero intelligence traders following very simple rules can converge on market equilibria that may even be efficient. Not only may it be necessary to limit one’s knowledge in order to behave in a rational manner, but one may be able to be rational in some sense while having no knowledge whatsoever. Mirowski and Nik-Khah (2017) argue that this completes a transformation of the treatment of knowledge in economics in the post-World War II era from assuming that all agents have full knowledge to all agents having zero knowledge.

A further point is that there are degrees of computational complexity (Velupillai 2000; Markose 2005), with Kolmogorov (1965) providing a widely accepted definition that the degree of computational complexity is given by the minimum length of a program that will halt on a Turing machine. We have been considering the extreme cases of no halting, but there is indeed an accepted hierarchy among levels of computational complexity, with the knowledge difficulties undergoing qualitative shifts across them. This hierarchy is widely seen as consisting of four levels (Chomsky 1959; Wolfram 1984; Mirowski 2007). At the lowest level are linear systems, easily solved, with such a low level of computational complexity that we can view them as not complex. Above that are polynomial (P) problems that are substantially more computationally complex, but still generally solvable. Above that level are exponential and other harder problems, including the nondeterministic polynomial (NP) class, that are very difficult to solve, although it remains as yet unproven that the P and NP levels are fundamentally distinct, one of the most important unsolved problems in computer science. Above this level is that of full computational complexity, where the minimum program length is infinite and the programs do not halt. Here the knowledge problems can only be solved by becoming effectively less intelligent.
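A rough feel for why the gap between the polynomial and harder levels matters can be had by brute-forcing the traveling salesman problem mentioned above; the random cities and the simple enumeration below are illustrative assumptions, not a claim about how such problems are solved in practice.

```python
# Brute-force search over traveling-salesman tours grows factorially with the
# number of cities, giving a feel for why problems above the polynomial level of
# the hierarchy become intractable.  The random cities are purely illustrative.
import itertools, math, random, time

random.seed(0)

def tour_length(order, pts):
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

for n in (6, 8, 10):
    pts = [(random.random(), random.random()) for _ in range(n)]
    t0 = time.perf_counter()
    best = min(itertools.permutations(range(1, n)),      # fix city 0: (n-1)! tours
               key=lambda p: tour_length((0,) + p, pts))
    print(f"n={n:2d}: searched {math.factorial(n - 1):>7} tours "
          f"in {time.perf_counter() - t0:.2f} s")
# Each added city multiplies the search: by n = 20 the count exceeds 10^17 tours.
```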

1.4 Foundations of Dynamic Complexity Economics

In contrast with the computationally defined measures described above, the dynamic complexity definition stands out curiously for its negativity: dynamical systems that do not endogenously and deterministically generate certain “well-behaved” outcomes. The charge that it is not precise carries weight. However, its virtue is precisely the generality guaranteed by its vagueness. It can apply to a wide variety of systems and processes that many have described as being “complex.” Of course, the computationalists argue with reason that they are able to subsume substantial portions of nonlinear dynamics within their approach, as for example with the already mentioned result on the non-computability of chaotic dynamics (da Costa et al. 2005).

However, most of this recent debate and discussion, especially by Israel (2005), McCauley (2005), and Velupillai (2005b, 2005c), has focused on a particular outcome that is associated with some interacting agents models within the smaller tent (heterogeneous interacting agents) complexity part of the broader big tent dynamic complexity concept. This property or phenomenon is emergence. It was much discussed by cyberneticists and general systems theorists (von Bertalanffy 1974), including under the label anagenesis (Boulding 1978; Jantsch 1982), although it was initially formalized by Lewes (1875) and expanded by Morgan (1923), drawing upon the idea of heteropathic laws due to Mill (1843, Book III). Much recent discussion has focused on Crutchfield (1994), because he has associated it more clearly with processes within computerized systems of interacting heterogeneous agents and linked it to minimum length computability concepts related to Kolmogorov’s idea, which makes it easier for the computationalists to deal with. In any case, the idea is of the dynamic appearance of something new endogenously and deterministically from the system, often also labeled self-organization.Footnote 17

Furthermore, all of those cited here would add another important element: that it appears at a higher level within a dynamic hierarchical system as a result of processes occurring at lower levels of the system. Crutchfield (1994) allows that what is involved is symmetry-breaking bifurcations, which leads McCauley (2005, pp. 77–78) to be especially dismissive, identifying it with biological models (Kauffman 1993) and declaring that “so far no one has produced a clear empirically relevant or even theoretically clear example.” The critics complain of implied holism, and Israel identifies it with Wigner’s (1960) “mystical” alienation from the solidly grounded view of Galileo.

Now the complaint of McCauley amounts to an apparent lack of invariance, a lack of ergodicity or of steady state equilibria with clearly identifiable symmetries whose breaking brings about these higher-level reorganizations or transformations.

We can understand how a cell mutates to a new form, but we do not have a model of how a fish evolves into a bird. That is not to say that it has not happened, only that we do not have a model that helps us to imagine the details, which must be grounded in complicated cellular interactions that are not understood. (McCauley 2005, p. 77)Footnote 18

While he is probably correct that the details of these interactions are not fully understood, a footnote on the same page points in the direction of some understanding that has appeared, not tied directly to Crutchfield or Kauffman. McCauley notes the work of Hermann Haken (1983) and his “examples of bifurcations to pattern formation via symmetry breaking.” Several possible approaches suggest themselves at this point.

One approach is that of synergetics due to Haken (1983), alluded to above. This deals more directly with the concept of entrainment of oscillations via the slaving principle (Haken 1996), which operates on the principle of adiabatic approximation. A complex system is divided into order parameters that are presumed to move slowly in time and “slave” faster moving variables or subsystems. While it may be that the order parameters operate at a higher hierarchical level, which would be consistent with many generalizations made about relative patterns between such levels (Allen and Hoekstra 1990; Holling 1992; Radner 1992), this is not necessarily the case. The variables may well be fully equivalent in a single, flat hierarchy, such as the control and state variables in catastrophe theory models. Stochastic perturbations can lead to structural change near bifurcation points.

If slow dynamics are given by the vector F, fast dynamics by the vector q, with A, B, and C being matrices and ε a stochastic noise vector, then a locally linearized version is given by

$$ \mathrm{d}\mathbf{q}=\mathbf{Aq}+\mathbf{B}\left(\mathbf{F}\right)\mathbf{q}+\mathbf{C}\left(\mathbf{F}\right)+\boldsymbol{\upvarepsilon} . $$
(1.7)

The adiabatic approximation, obtained by setting dq = 0 and solving for q, is then given by

$$ \mathbf{q}=-{\left(\mathbf{A}+\mathbf{B}\left(\mathbf{F}\right)\right)}^{-1}\mathbf{C}\left(\mathbf{F}\right). $$
(1.8)

Fast variable dependence on the slow variables is given by A + B(F). The order parameters are the variables whose associated eigenvalues have the least absolute value.
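A minimal numerical sketch, with scalar values A = −5, B(F) = −F, C(F) = F and a slowly drifting F(t) assumed purely for illustration, shows the fast variable collapsing onto the slaved manifold of Eq. (1.8).

```python
# A minimal numerical sketch of adiabatic approximation in a scalar fast-slow
# system dq/dt = A q + B(F) q + C(F), with assumed values A = -5, B(F) = -F,
# C(F) = F and a slowly drifting F(t).  The fast variable q quickly collapses
# onto the "slaved" manifold q* = -(A + B(F))^{-1} C(F) of Eq. (1.8).
import math

A = -5.0
B = lambda F: -F
C = lambda F: F

dt, q = 0.001, 0.0
for step in range(20_001):
    t = step * dt
    F = 1.0 + math.sin(0.1 * t)          # slow order parameter
    q += dt * ((A + B(F)) * q + C(F))    # fast dynamics
    if step % 5000 == 0:
        q_star = -C(F) / (A + B(F))      # adiabatic (slaved) value
        print(f"t={t:5.1f}  q={q:.4f}  q*={q_star:.4f}")
# After a brief transient q tracks q* closely: the slow variable F "enslaves" q.
```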

The symmetry-breaking bifurcation occurs when the order parameters destabilize by obtaining eigenvalues with positive real parts, while the “slave variables” exhibit the opposite. Chaos is one possible outcome. However, the most dramatic situation is when the slaved variables destabilize and “revolt” (Diener and Poston 1984), with the possibility of the roles switching within the system and former slaves replacing the former “bosses” to become the new order parameters. An example in nature of such an emerging and self-organizing entrainment might be the periodic and coordinated appearance of the slime mold out of separated amoebae, which later disintegrates back into its isolated cells (Garfinkel 1987). An example in human societies may be the outbreak of the mid-fourteenth century Great Plague in Europe, when accumulating famine and immunodeficiency exploded in a massive population collapse (Braudel 1967).

Another approach is found in Nicolis (1986), derived from the work of Nicolis and Prigogine (1977) on frequency entrainment. Rosser Jr. (1994) has argued that this can serve as a possible model for the anagenetic moment, or the emergence of a new level of hierarchy. Let there be n well-defined levels of the hierarchy, with L1 at the bottom and Ln at the top. A new level, Ln+1, or dissipative structure, can emerge at a phase transition with a sufficient degree of entrainment of the oscillations at that level. Let there be k oscillating variables xj, and let zi(t) be an independently and identically distributed exogenous stochastic process with zero mean and constant variance; then dynamics are given by coupled, nonlinear differential equations of the form

$$ \mathrm{d}{x}_i/\mathrm{d}t={f}_i\left({x}_j,t\right)+{z}_i(t)+\sum \limits_{j=1}^{k}\int {x}_j\left({t}^{\prime}\right){\mathbf{w}}_{ij}\left({t}^{\prime }+\tau \right)\mathrm{d}{t}^{\prime }, $$
(1.9)

with wij representing a cross-correlation matrix operator. The third term is the key, being either “on” or “off,” with the former showing frequency entrainment. Nicolis (1986) views this in terms of a model of neurons, with a master hard nonlinear oscillator being turned on by a symmetry breaking of the cross-correlation matrix operator when the probability distribution of the real parts of its eigenvalues comes to exceed zero.Footnote 19 Then a new vector of variables yj will emerge at the Ln+1 level, which will damp or stimulate the oscillations at level Ln, depending on whether the sum over them is below or above zero.Footnote 20 An example might be the emergence of a new level of urban hierarchy (Rosser Jr. 1994).

Regarding the relation between dynamic complexity and emergence, another perspective has come from the Austrian School of economics (Koppl 2006, 2009; Lewis 2012; Rosser Jr. 2012a). One of its deepest ideas is that market economic systems spontaneously emerge, an idea drawn from the Scottish Enlightenment of Hume and Smith, as well as from such thinkers as Mill (1843) and Herbert Spencer (1867–1874), who wrote on both evolution and economic sociology (Rosser Jr. 2014b). This link can be found in the work of Carl Menger (1871/1981), the founder of the Austrian School. Menger posed this as follows in terms of what economic research should discover (Menger 1883/1985, p. 148):

…how institutions which serve the common welfare and are extremely significant for its development come into being without a common will directed toward establishing them.

Menger (1892) then posed the spontaneous emergence of commodity monies in primitive societies with no fiat role by states as an important example of this.

Various followers of Menger did not pursue this approach strongly, many emphasizing equilibrium approaches not all that different from the emerging neoclassical view, an idea one could also find in the work of Menger, who is widely viewed as one of the founders of the neoclassical marginalist approach along with Jevons and Walras. The crucial figure who revived an interest in emergence among the Austrians and developed it much further was Friedrich A. Hayek (1948, 1967).Footnote 21 Hayek drew on the incompleteness results of Gödel, aware of the role of self-referencing in them, and of how overcoming the paradoxes of incompleteness may involve the emergence of a higher level that can understand the lower level. Curiously, his awareness of this originally came from his work in psychology in his 1952 The Sensory Order (pp. 188–189):

Applying the same general principles to the human brain as an apparatus of classification, it would appear to mean that, even though we may understand its modus operandi in general terms, or, in other words possess an explanation of the principle on which it operates, we shall never, by any means of the same brain, be able to arrive at a detailed explanation of its working in particular circumstances, or be able to predict what the results of its operations will be. To achieve this would be to require a brain of a higher order complexity, though it might still be built on the same principles. Such a brain might be able to explain what happens in our brain, but it would in turn be unable to explain its own operations, and so on.

Koppl (2006, 2009) argues that this argument applies as well to Hayek’s long opposition to central planning, with a central planner facing just this problem when they attempt to understand the effect on the economy they are trying to plan of their own planning efforts.Footnote 22 This view of the importance of complexity and emergence would come to be widely influential in Austrian economics since Hayek put forward his arguments and continues to be so (O’Driscoll and Rizzo 1985; Lachmann 1986; Lavoie 1989; Horwitz 1992; Wagner 2010).

1.5 Dynamic Complexity and Knowledge

In dynamically complex systems, the knowledge problem becomes the general epistemological problem. Consider the specific problem of being able to know the consequences of an action taken in such a system. Let G(xt) be the dynamical system in an n-dimensional space. Let an agent possess an action set A. Let a given action by the agent at a particular time be given by ait. For the moment let us not specify any actions by any other agents, each of whom also possesses his or her own action set. We can identify a relation whereby xt = f(ait). The knowledge problem for the agent in question thus becomes, “Can the agent know the reduced system G(f(ait)) when this system possesses complex dynamics due to nonlinearity?”

First of all, it may be possible for the agent to be able to understand the system and to know that he or she understands it, at least to some extent. One reason why this can happen is that many complex nonlinear dynamical systems do not always behave in erratic or discontinuous ways. Many fundamentally chaotic systems exhibit transiency (Lorenz 1992). A system can move in and out of behaving chaotically, with long periods passing during which the system will effectively behave in a non-complex manner, either tracking a simple equilibrium or following an easily predictable limit cycle. While the system remains in this pattern, actions by the agent may have easily predicted outcomes, and the agent may even be able to become confident regarding his or her ability to manipulate the system systematically. However, this essentially avoids the question.

Let us consider four forms of dynamic complexity: chaotic dynamics, fractal basin boundaries, discontinuous phase transitions in heterogeneous agent situations, and catastrophe theoretic models related to heterogeneous agent systems. For the first of these there is a clear problem for the agent: the existence of sensitive dependence on initial conditions. If an agent moves from action ait to action ajt, where |ait − ajt| < ε < 1, then no matter how small ε is, there exists an m such that |G(f(ait+t)) − G(f(ajt+t))| > m for some t for each ε. As ε approaches zero, m/ε will approach infinity. It will be very hard for the agent to be confident in predicting the outcome of changing his or her action. This is the problem of the butterfly effect or sensitive dependence on initial conditions. More particularly, if the agent has an imperfectly precise awareness of his or her actions, with the zone of fuzziness exceeding ε, the agent faces a potentially large range of uncertainty regarding the outcome of his or her actions. In Edward Lorenz’s (1963) original study of this matter when he “discovered chaos,” when he restarted his simulation of a three-equation system of fluid dynamics partway through, the roundoff error that triggered a subsequent dramatic divergence was too small for his computer to “perceive” (at the fourth decimal place).

There are two offsetting elements for chaotic dynamics. Although an exact knowledge is effectively impossible, requiring essentially infinitely precise knowledge (and knowledge of that knowledge), a broader approximate knowledge over time may be possible. Thus, chaotic systems are generally bounded and often ergodic (although not always). While short-run relative trajectories for two slightly different actions may sharply diverge, the trajectories will at some later time return toward each other, becoming arbitrarily close to each other before once again diverging. Not only may the bounds of the system be knowable, but the long-run average of the system may be knowable. There are still limits as one can never be sure that one is not dealing with a long transient of the system, with it possibly moving into a substantially different mode of behavior later. But the possibility of a substantial degree of knowledge, with even some degree of confidence regarding that knowledge is not out of the question for chaotically dynamic systems.
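The point about knowable long-run averages can be illustrated with the logistic map used earlier: nearby chaotic trajectories diverge pointwise, yet their long-run time averages nearly coincide. The map is again a generic stand-in chosen for illustration rather than an economic model from the cited literature.

```python
# Although nearby trajectories of the chaotic logistic map (r = 4) diverge
# pointwise, their long-run time averages nearly coincide, illustrating how the
# bounds and averages of a chaotic system may still be knowable even when point
# prediction is not.  The map is a generic stand-in, not an economic model.
def time_average(x0, r=4.0, steps=1_000_000):
    x, total = x0, 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        total += x
    return total / steps

print(time_average(0.3))          # both averages come out close to 0.5,
print(time_average(0.3 + 1e-9))   # the mean of the map's invariant density
```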

Fractal basin boundaries were first identified for economic models by Hans-Walter Lorenz (1992) in the same paper in which he discussed the problem of chaotic transience. Whereas in a chaotic system there may be only one basin of attraction, albeit with the attractor being fractal and strange and thus generating erratic fluctuations, the fractal basin boundary case involves multiple basins of attraction whose boundaries with each other take fractal shapes. The attractor for each basin may well be as simple as a single point. However, the boundaries between the basins may lie arbitrarily close to each other in certain zones.

In such a case, in the purely deterministic setting, once one is able to determine which basin of attraction one is in, a substantial degree of predictability may ensue. Yet there may be the problem of transient dynamics, with the system taking a long and circuitous route before it begins to get anywhere close to the attractor, even if the attractor is merely a point in the end. The problem arises if the system is not strictly deterministic, if G includes a stochastic element, however small. In this case one may easily be pushed across a basin boundary, especially if one is in a zone where the boundaries lie very close to one another. Thus there may be sudden and very difficult to predict discontinuous changes in the dynamic path as the system begins to move toward a very different attractor in a different basin. The effect is very similar to that of sensitive dependence on initial conditions in epistemological terms, even if the two cases are mathematically quite distinct.
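A standard non-economic stand-in makes the point concrete: Newton’s method applied to z^3 = 1 has three point attractors whose basins have fractal boundaries, and scanning starting points along a short segment near such a boundary shows the attained attractor flipping back and forth; the particular segment chosen below is an illustrative assumption.

```python
# Newton's method for z^3 = 1 has three point attractors (the cube roots of
# unity) with fractal basin boundaries.  Scanning starting points along a short
# segment near the boundary region shows the attained root flipping, the
# analogue of the prediction problem described in the text.  The segment chosen
# is purely illustrative.
def newton_basin(z, iters=80):
    for _ in range(iters):
        z = z - (z**3 - 1) / (3 * z**2)
    roots = [1, complex(-0.5, 3**0.5 / 2), complex(-0.5, -3**0.5 / 2)]
    return min(range(3), key=lambda k: abs(z - roots[k]))

# starting points straddling the boundary region around the negative real axis
labels = [newton_basin(complex(-0.6, 0.002 * k)) for k in range(-10, 11)]
print(labels)   # the basin label changes along this tiny segment of starts
```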

Nevertheless, in this case as well there may be something similar to the kind of dispensation over the longer run we noted for the case of chaotic dynamics. Even if exact prediction in the chaotic case is all but impossible, it may be possible to discern broader patterns, bounds and averages. Likewise in the case of fractal basin boundaries with a stochastic element, over time one should observe a jumping from one basin to another. Somewhat like the pattern of long run evolutionary game dynamics studied by Binmore and Samuelson (1999), one can imagine an observer keeping track of how long the system remains in each basin and eventually developing a probability profile of the pattern, with the percent of time the system spends in each basin possibly approaching asymptotic values. However, this is contingent on the nature of the stochastic process as well as the degree of complexity of the fractal pattern of the basin boundaries. A non-ergodic stochastic process may render it very difficult, even impossible, to observe convergence on a stable set of probabilities for being in the respective basins, even if those are themselves few in number with simple attractors.

Consider next the case of phase transitions in systems of heterogeneous locally interacting agents, the world of so-called “small tent complexity.” Brock and Hommes (1997) have developed a useful model for understanding such phase transitions, based on statistical mechanics. This is a stochastic system driven fundamentally by two key parameters: a strength of interactions or relationships between neighboring agents and a degree of willingness to switch behavioral patterns by the agents. For their model the product of these two parameters is crucial, with a bifurcation occurring in it. If the product is below a certain critical value, then there will be a single equilibrium state. However, once this product exceeds that critical value, two distinct equilibria emerge. Effectively the agents will jump back and forth between these equilibria in herding patterns. For financial market models (Brock and Hommes 1998) this can resemble oscillations between optimistic bull markets and pessimistic bear markets, whereas below the critical value the market will have much less volatility as it tracks something that may be a rational expectations equilibrium.
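A mean-field sketch in the spirit of this framework illustrates the bifurcation: writing the average choice m as satisfying m = tanh(bJm), with J the interaction strength and b the intensity of choice, iteration from positive and negative starts finds one equilibrium when bJ < 1 and two when bJ > 1. The tanh reduction is the standard mean-field form of such discrete-choice interaction models, used here purely as an illustration rather than as the exact Brock-Hommes equations.

```python
# Mean-field sketch of a discrete-choice interaction model: the average choice m
# must satisfy m = tanh(b*J*m), where J is the interaction strength and b the
# intensity of choice (willingness to switch).  This tanh reduction is used
# purely for illustration, not as the exact Brock-Hommes specification.
import math

def equilibrium(bJ, m0, iters=10_000):
    m = m0
    for _ in range(iters):
        m = math.tanh(bJ * m)
    return m

for bJ in (0.5, 0.9, 1.1, 1.5, 2.0):
    up, down = equilibrium(bJ, 0.9), equilibrium(bJ, -0.9)
    print(f"b*J = {bJ:3.1f}:  m = {up:+.4f} and {down:+.4f}")
# Below the critical product b*J = 1 both starts collapse onto m = 0; above it
# they settle on distinct +/- equilibria, the herding regimes described above.
```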

For this kind of a setup there are essentially two serious problems. One is determining the value of the critical threshold. The other is understanding how the agents jump from one equilibrium to the other in the multiple equilibrium zone. Certainly the second problem resembles somewhat the discussion from the previous case, if not involving as dramatic a set of possible discontinuous shifts.

Of course once a threshold of discontinuity is passed it may be recognizable when it is approached again. But prior to doing so it may be essentially impossible to determine its location. The problem of determining a discontinuity threshold is a much broader one that vexes policymakers in many situations, such as attempting to avoid catastrophic thresholds that can bring about the collapse of a species population or of an entire ecosystem. One does not want to cross the threshold, but without doing so, one does not know where it is. However, for less dangerous situations involving irreversibilities, it may be possible to determine the location of the threshold as one moves back and forth across it.

On the other hand in such systems it is quite likely that the location of such thresholds may not remain fixed. Often such systems exhibit an evolutionary self-organizing pattern in which the parameters of the system themselves become subject to evolutionary change as the system moves from zone to zone. Such non-ergodicity is consistent not only with Keynesian style uncertainty, but may also come to resemble the complexity identified by Hayek (1948, 1967) in his discussions of self-organization within complex systems. Of course for market economies Hayek evinced an optimism regarding the outcomes of such processes. Even if market participants may not be able to predict outcomes of such processes, the pattern of self-organization will ultimately be largely beneficial if left on its own. Although Keynesians and Hayekian Austrians are often seen as in deep disagreement, some observers have noted the similarities of viewpoint regarding these underpinnings of uncertainty (Shackle 1972; Loasby 1976; Rosser Jr. 2001a, b). Furthermore, this approach leads to the idea of the openness of systems that becomes consistent with the critical realist approach to economic epistemology (Lawson 1997).

Considering this problem of important thresholds brings us to the last of our forms of dynamic complexity to consider here: catastrophe theory interpretations. The knowledge problem is essentially that previously noted, but writ larger, as the discontinuities involved are more likely to be large, as in the crashes of major speculative bubbles. The Brock-Hommes model and its descendants can be seen as one form of what is involved, but the original catastrophe theory approach brings out the key issues more clearly.

The very first application of catastrophe theory in economics, by Zeeman (1974), indeed considered financial market crashes in a simplified two-agent formulation: fundamentalists, who stabilize the system by buying low and selling high, and “chartists,” who chase trends in a destabilizing manner by buying when markets rise and selling when they fall. As in the Brock-Hommes formulation, he allows agents to change their roles in response to market dynamics, so that as the market rises fundamentalists become chartists, accelerating the bubble, and when the crash comes they revert to being fundamentalists, accelerating the crash. Rosser Jr. (1991) provides an extended formalization of this in catastrophe theory terms that links it to the analysis of Minsky (1972) and Kindleberger (2001), further taken up in Rosser Jr. et al. (2012) and Rosser Jr. (2020c). This formulation involves a cusp catastrophe with the two control variables being the demands by the two categories of agents, with the chartists’ demand determining the position of the cusp that allows for market crashes.
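The mechanism can be made concrete with the canonical cusp form (Thom 1975), used here only as a sketch: identifying the state variable x with the market price change and the controls a and b with, respectively, fundamentalist and chartist excess demand follows the verbal description above rather than any exact equation in Zeeman or Rosser. Fast adjustment of x follows the gradient of a potential,

$$ \mathrm{d}x/\mathrm{d}t=-\partial V/\partial x,\qquad V\left(x;a,b\right)=\tfrac{1}{4}{x}^4-\tfrac{1}{2}b{x}^2- ax, $$

so the equilibrium surface is x^3 − bx − a = 0. When 27a^2 > 4b^3 this cubic has a single real root and x responds smoothly to the controls; once chartist demand b is large enough that 27a^2 < 4b^3, three equilibria coexist (two stable, one unstable), and as a drifts across the fold boundary 27a^2 = 4b^3 the state jumps discontinuously from one sheet to the other, the catastrophe-theoretic image of a market crash.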

The knowledge problem here involves something not specifically modeled in Brock and Hommes, although they have a version of it. It is the matter of the expectations of agents about the expectations of the other agents. This is effectively the “beauty contest” issue discussed by Keynes in Chapter 12 of his General Theory (1936). The winner of the beauty contest in a newspaper competition is not the one who guesses the prettiest girl, but the one who best guesses the guesses of the other participants. Keynes famously noted that one could play this game by guessing the expectations of others regarding others’ guesses, and that this could go to higher levels, in principle an infinite regress leading to an impossible knowledge problem. In contrast, the Brock and Hommes approach simply has agents shifting strategies after watching what others do. These potentially higher-level problems do not enter in. Such problems reappear in those associated with computational complexity.

1.6 Knowledge and Ergodicity

A controversial issue involving knowledge and complexity involves the deep sources of the Keynes-Knight idea of fundamental uncertainty (Keynes 1921; Knight 1921). Both of them made it clear that for uncertainty there is no underlying probability distribution determining important events that agents must make decisions about. Keynes’s formulation of this has triggered much discussion and debate as to why he saw this lack of a probability distribution arising.

One theory that has received much attention, due to Davidson (1982-83), is that while neither Keynes nor Knight ever mentioned it, what can bring about such uncertainty, especially for Keynes’s understanding of it, is the appearance of nonergodicity in the dynamic processes underlying economic reality. In making this argument, Davidson specifically cited arguments made by Paul Samuelson (1969, p. 184) to the effect that “economics as a science assumes the ergodic axiom.” Davidson relied on this to assert that failure of this axiom is an ontological matter that is central to understanding Keynesian uncertainty, when knowledge breaks down. Many have since repeated this argument, although Alvarez and Ehnts (2016) argue that Davidson misinterpreted Samuelson who actually dismissed this ergodic view as being tied to an older classical view that he did not accept.

Davidson’s argument has more recently come under criticism by various observers, perhaps most vigorously recently by O’Donnell (2014-15), who argues that Davidson has misrepresented the ergodic hypothesis, that Keynes never considered it, and that Keynesian uncertainty is more a matter of short-run instabilities to be understood using behavioral economics rather than the asymptotic elements that are tied up with ergodicity. An important argument by O’Donnell is that even in an ergodic system that is going to go to a long-run stationary state, it may be out of that state for a period of time so long that one will be unable to determine if it is ergodic or not. This is a strong argument that Davidson has not succeeded in fully replying to (Davidson 2015).

Central to this is understanding the ergodic hypothesis itself, its development and limits, as well as its relationship to Keynes’s own arguments, which turns out to be somewhat complicated but indeed linked to central concerns of Keynes in an indirect way, especially given that he never directly mentioned it. Most economists discussing this matter, including both Davidson and O’Donnell, have accepted as the definition of an ergodic system that over time (asymptotically) its “space averages equal its time averages.” This formulation was due to Ehrenfest and Ehrenfest-Afanessjewa (1911), with Paul Ehrenfest a student of Ludwig Boltzmann (1884), who expanded the study of ergodicity (and coined the term) as part of his long study of statistical mechanics, particularly how a long-term aggregate average (such as temperature) could emerge from a set of dynamically stochastic parts (particle movements). It turns out that for all its widespread influence, the precise formulation by the Ehrenfests was inaccurate (Uffink 2006). But this reflected the fact that there were multiple strands in the meaning of “ergodicity.”
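The “space averages equal time averages” formulation can be illustrated numerically with two generic processes chosen purely as assumptions for this sketch: an AR(1) process, which is ergodic for its mean, and a process that draws one permanent random level per realization, which is stationary but not ergodic.

```python
# Numerical sketch of the Ehrenfest-style formulation that an ergodic process
# has time averages equal to ensemble ("space") averages.  The two processes
# are generic illustrations, not models from the cited literature: an AR(1)
# process is ergodic for its mean, while a process that draws one permanent
# random level per realization is stationary but not ergodic.
import random
random.seed(0)

def ar1_time_average(T=100_000, rho=0.5):
    x, total = 0.0, 0.0
    for _ in range(T):
        x = rho * x + random.gauss(0.0, 1.0)
        total += x
    return total / T

def frozen_level_time_average(T=100_000):
    c = random.gauss(0.0, 1.0)        # one permanent draw for this realization
    return sum(c + 0.1 * random.gauss(0.0, 1.0) for _ in range(T)) / T

print("AR(1) time averages:  ", [round(ar1_time_average(), 3) for _ in range(3)])
print("frozen-level averages:", [round(frozen_level_time_average(), 3) for _ in range(3)])
# Each AR(1) time average lies near the ensemble mean of 0; the frozen-level
# time averages scatter around whatever level each realization happened to draw.
```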

In fact there is ongoing debate about how Boltzmann coined the term in the first place. His student, Ehrenfest, claimed it was from combining the Greek ergos (“work”) with hodos (“path”), while it has been argued by Gallavotti (1999) that it came from him using his own neologism, monode, meaning a stationary distribution, instead of hodos. This fits with most of the early formulations of ergodicity that analyzed it within the context of stationary distributions.

Later discussions of ergodicity would draw on two complementary theorems proven by Birkhoff (1931) and von Neumann (1932), although the latter was proven first and emphasizes measure preservation, while Birkhoff’s variation was more geometric and related to recurrence properties in dynamical systems. Both involve long-run convergence, and Birkhoff’s formulation showed not only measure preservation but also, for a stationary ergodic system, a metric indecomposability such that not only is the space properly filled, but it is impossible to break the system into two subsystems that would also fully fill the space and preserve measure, a result extending fundamental work by Poincaré (1890) on how recurrence and space filling help explain how chaotic dynamics can arise in celestial mechanics.

In von Neumann’s (1932) formulation, let T be a measure-preserving transformation on a measure space and define, for every square-integrable function f on that space, (Uf)(x) = f(Tx); then U is a unitary operator on the space. For any such unitary operator U on a Hilbert space H, the sequence of averages:

$$ \left(1/n\right)\left(f+ Uf+\dots +{U}^{n-1}f\right) $$
(1.10)

is strongly convergent for every f in H. We note that these are finite measure spaces and that this refers to stationary systems, just as with Boltzmann.

Birkhoff’s (1931) extension, sometimes called the “individual ergodic theorem,” modifies the above sequence of averages to be:

$$ \left(1/n\right)\left(f(x)+f(Tx)+\dots +f\left({T}^{n-1}x\right)\right) $$
(1.11)

which converges for almost every x. These complementary theorems have been generalized to Banach spaces and many other conditions.Footnote 23 It was from these theorems that the next wave of developments in Moscow and elsewhere would evolve.Footnote 24 This was the state of ergodic theory when Keynes had his debate over econometrics at the end of the 1930s with that student of Paul Ehrenfest, Jan Tinbergen.

The link between stationarity and ergodicity would weaken in later study, with Malinvaud (1966) showing that a stationary system might not be ergodic, a limit cycle being an example, a case Davidson was aware of from the beginning of his discussions. However, it continued to be believed that ergodic systems must be stationary, and this remained key for Davidson as well as being accepted by most of his critics, including O’Donnell. It turns out that this may break down in ergodic chaotic systems of infinite dimension, which may not be stationary (Shinkai and Aizawa 2006), bringing back the role of chaotic dynamics in undermining the ability to achieve knowledge of a dynamical system, even one that is ergodic.

Given these complications it is worthwhile to return to Keynes to understand what his concerns were, which came out most clearly in his debates with Tinbergen (1937, 1940; Keynes, 1938) over how to econometrically estimate models for forecasting macroeconomic dynamics. A deep irony here is that Tinbergen was a student of Paul Ehrenfest and so was indeed influenced by his ideas on ergodicity, even as Keynes did not directly address this matter. In any case, what Keynes objected to was the apparent absence of homogeneity, essentially a concern that the model itself changes over time. Keynes’s solution to this was to break a time-series down into sub-samples to see if one gets the same parameter estimates as one does for the whole time-series. Homogeneity is not strictly identical to either stationarity or ergodicity, but it is probably the case that at the time Tinbergen, following Ehrenfest, assumed all three held for the models he estimated. Thus indeed the ergodic hypothesis was assumed to hold for these early econometric models, whereas Keynes was skeptical that there was sufficient homogeneity for one to assume one knew what the system was doing over time (Rosser Jr. 2016a).

1.7 Reflexivity and the Unification of Complexity Concepts

Closely related to self-referencing is the idea of reflexivity. This is a term with no agreed upon definition, and it has been used in a wide variety of ways (Lynch 2000). It is derived from the Latin reflectere, usually translated as “bending back,” but it can refer to “reflex,” as in a knee jerking when tapped, which is not what is meant here, or more generally it is linked to “reflection,” as in an image being reflected, possibly back and forth many times as with two mirrors facing each other. This latter sense is more what the focus is here and more the type that is connected with self-referencing and all that implies. Someone who made that link strongly was Douglas Hofstadter (1979) in his Gödel, Escher, Bach: An Eternal Golden Braid, as well as even more so later (Hofstadter 2006). For Hofstadter, reflexivity is linked to the foundations of consciousness through what he calls “strange loops” of indirect self-referencing, which he sees certain prints by Maurits C. Escher as highlighting, particularly his “Drawing Hands” and also his “Print Gallery,” with many commentators on reflexivity citing “Drawing Hands,” which shows two hands drawing each other (Rosser Jr. 2020b).Footnote 25 Hofstadter argues that the foundation for his theory is the Incompleteness Theorem of Gödel, with its deep self-referencing, along with certain pieces by J.S. Bach, as well as these prints by Escher.

The term has probably been most widely used, and with the greatest variety of meanings, in sociology (Lynch 2000). Its academic usage was initiated by the prominent sociologist Robert K. Merton (1938), who used it to pose the problem of sociologists thinking about how their studies and ruminations fit into the broader social framework, both in how they themselves are influenced by that framework in terms of biases and paradigms and in how their studies, and the way they do their studies, might reflect back to influence society as well. Among the sociologists the most radical uses of the concept involved sharp self-criticism wherein one deconstructs the paradigm and influences one is operating within to the point that one can barely do any analysis at all (Woolgar 1991), with many complaining that this leads to a nihilistic dead end. The earliest usages of the term by economists followed this particular strand, analyzing how particular economists operate within certain methodological frameworks, how they came to do so from broader societal influences, and how their work may then reflect back to influence society, sometimes even through specific policies or ways of gathering and reporting policy-relevant data (Hands 2001; Davis and Klaes 2003).

Merton (1948) would also use the concept to propose the idea of the self-fulfilling prophecy, an idea that has been widely applied in economics, as with the concept of sunspot equilibria (Azariadis 1981), with many seeing this as deriving originally from Keynes (1936, Chap. 12) and his analysis of financial market behavior based on early twentieth-century British newspaper beauty contests. In those contests newspapers would publish photos of young women and ask readers to rate them on their presumed beauty. The winner of such a contest was not the reader who guessed which young woman was objectively the most beautiful, but rather the one who picked the woman who received the most votes. This meant that a shrewd player of such a game was really trying to guess the guesses of the other players, with Keynes comparing this to financial markets, where the underlying fundamental of an asset matters less for its market value than what investors think it is. This led Keynes to note that such reasoning can move to higher levels, trying to think what others think others think, and on to still higher levels in a potential infinite regress, a classic endless reflection akin to a non-halting program. This beauty contest idea of Keynes has come to be viewed as a centerpiece of his philosophical view, implying ultimately not only reflexivity but complexity as well (Davis 2017).
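
A minimal sketch of this higher-order guessing can be given with the now-standard “guess p times the average” version of the beauty contest (the value p = 2/3 is assumed here purely for illustration, not from Keynes): a level-k player best-responds to the belief that everyone else reasons at level k-1, and iterating the regress drives guesses steadily downward.

```python
# Level-k reasoning in the "guess p times the average" beauty contest game.
p = 2.0 / 3.0
guess = 50.0            # level-0: a naive guess in the middle of [0, 100]
for k in range(1, 11):
    guess = p * guess   # level-k best response to level-(k-1) guesses
    print(f"level {k:2d}: guess {guess:6.2f}")
# The limit of the regress (the Nash equilibrium) is 0, but real players
# typically stop after only a few levels of "thinking what others think".
```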

Among the first to pick up on Keynes’s argument, apply it to self-fulfilling prophecies in financial markets, and bring in reflexivity as relevant to this was George Soros (1987), who would later also argue that the analysis was part of complexity economics (Soros 2013). Soros has long argued that thinking about this beauty contest-inspired version of reflexivity has been key to his own decision-making in financial markets. He sees it as explaining boom and bust cycles in markets, as in the US housing bubble of the early 2000s, whose decline set off the Great Recession. He first got the term from being a student of Karl Popper’s in the 1950s (Popper 1959), with Popper also an influence on Hayek (1967) in connection with these ideas (Caldwell 2013). Thus the idea of reflexivity, with its links to arguments about incompleteness and the infinite regresses associated with self-referencing, has become highly influential among economists and financiers studying financial market dynamics and related phenomena.

We now see the possibility of linking our major schools of complexity through the subtle strange loopiness involved in indirect self-referencing, which lies at the heart of a deeper form of reflexivity. The indirect self-referencing underlying Gödel’s incompleteness theorem is deeply linked to computational complexity in that it leads to the infinite do-loops of the highest level of computational complexity, in which a program never stops. The way out of incompleteness involves in effect what Davis and Klaes invoked: moving to a higher hierarchical level in which an exogenous agent or program determines what is true or false, although this opens the door to incoherence (Landini et al. 2020). The indirect self-referencing also opens the door to dynamic complexity through its implications for market dynamics, and it links to hierarchical complexity as new levels of hierarchy can be generated. Let us consider briefly how this comes out of the fundamental Gödel (1931) theorem.

The Gödel theorem is really two theorems. The first is the incompleteness theorem: any consistent formal system in which elementary arithmeticFootnote 26 can be carried out is incomplete; there are statements in the language of the formal system that can neither be proved nor disproved within the formal system. The second addresses the problem of consistencyFootnote 27: for any consistent formal system in which elementary arithmetic can be carried out, the consistency of the formal system cannot be proved within the formal system itself. So, consistency implies incompleteness, but any attempt to overcome incompleteness by moving to a higher level leaves one unable to prove the consistency of that higher-level system, with both parts failing due to the paradoxes of (reflexive) self-referencing.

Hofstadter (2006) provides an excellent discussion of the nature of the indirectness involved in proving the main part of the theorem, which involves the use of “Gödel numbers.” These are numbers assigned to logical statements, and their use can lead to the creation of self-referencing paradoxical statements even within a system especially designed to avoid such statements. The system that Gödel subjected to this treatment, eventually generating a statement equivalent to “This sentence is unprovable,” was the logical system developed by Whitehead and Russell (1910-13) specifically to provide a consistent formal foundation for mathematics without logical paradoxes. Russell in particular was much concerned about the possibility of paradoxes in set theory, such as those involving self-referencing sets. The classic problem was “Does the set of all sets that do not contain themselves contain itself?” A famous simple version of this is “Who shaves the barber in a town where the barber shaves only those who do not shave themselves?” Both involve similar endless do-loops arising from their self-referencing. Whitehead and Russell attempted to eliminate these annoyances by developing the theory of types, which established hierarchies of sets in ways that avoid having them refer to themselves. But then Gödel pulled his trick of establishing his numbers, which he applied to the system of Whitehead and Russell so as, through indirection, to generate a self-referencing statement involving a paradox unresolvable within the system. It is rather like how the hole Escher put in the middle of his “Print Gallery” allows the man to look at a print on a wall in a gallery of a city that contains the gallery in which he is standing looking at it.
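
A toy sketch of the coding idea (a simplified variant, not Gödel’s exact scheme) shows how a string of logical symbols can be packed into, and recovered from, a single natural number, which is what allows a formal system of arithmetic to end up talking about its own statements.

```python
# Toy Gödel numbering: symbol i of a formula contributes the i-th prime
# raised to that symbol's code, so the whole formula becomes one integer.
from sympy import prime, factorint

SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6, "x": 7}
DECODE = {v: k for k, v in SYMBOLS.items()}

def godel_number(formula):
    n = 1
    for i, sym in enumerate(formula, start=1):
        n *= prime(i) ** SYMBOLS[sym]
    return n

def decode(n):
    powers = factorint(n)              # {prime: exponent}
    return "".join(DECODE[powers[p]] for p in sorted(powers))

g = godel_number("0=0")
print(g)           # 2**1 * 3**3 * 5**1 = 270
print(decode(g))   # "0=0" recovered from the single number
```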

Thus it is not surprising that the problem of self-referencing has lain at the core of much of the thinking about reflexivity from an early point, and that this thinking took on a sharper edge when various figures thought about Gödel’s theorem, or even earlier about the paradoxes considered by Bertrand Russell. Linking this understanding to complexity provides a foundation for a reflexive complexity that encompasses all the major forms of complexity.

1.8 Further Observations

In computationally complex systems the problem of understanding them is related to logic: the problems of infinite regress and undecidability associated with self-referencing in systems of Turing machines. This can manifest itself as the halting problem, something that can arise even for a computer attempting to calculate precisely a dynamically complex object such as the exact shape of the Mandelbrot set (Blum et al. 1998). A Turing machine cannot fully understand a system in which its own decision-making is too crucially a part. However, knowledge of such systems may be gained by other means.
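
As a small illustration of why such exact calculation is elusive, the standard escape-time iteration can certify that a point lies outside the Mandelbrot set once its orbit escapes, but for points inside or on the boundary it never halts with a proof of membership; any fixed iteration budget (the 1000 below is an arbitrary choice for this sketch) only yields “not escaped yet.”

```python
# Escape-time test for the Mandelbrot set: a semi-decision procedure only.
def escape_time(c, max_iter=1000):
    """Return the iteration at which z escapes |z| > 2, or None if the orbit
    has not escaped within max_iter iterations (membership left undecided)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return None

print(escape_time(1.0 + 0j))    # escapes quickly: certainly outside the set
print(escape_time(-1.0 + 0j))   # periodic orbit: returns None (undecided)
print(escape_time(0.25 + 0j))   # boundary point: also None at this budget
```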

To the extent that models have axiomatic foundations rather than being merely ad hoc, which many of them ultimately are, these foundations are strictly within the non-constructivist, classical mathematical mode, assuming the Axiom of Choice, the Law of the Excluded Middle, and other hobby horses of the everyday mathematicians and mathematical economists. To the extent that they provide insight into the nature of dynamic economic complexity and the special problem of emergence (or anagenesis), they do not do so by being based on axiomatic foundationsFootnote 28 that would pass muster with the constructivists and intuitionists of the early and mid-twentieth century, much less their more recent disciples, who are following the ideal hope that “The future is a minority; the past and present are a majority,” to quote Velupillai (2005b, p. 13), himself paraphrasing Shimon Peres from an interview about the prospects for Middle East peace.

There is a considerable array of models available for contemplating or modeling emergent phenomena operating at different hierarchical levels. An interesting area in which to see which of these approaches might prove most suitable may well be the study of the evolution of market processes as they themselves become more computerized. This is the focus of Mirowski (2007), who goes so far as to argue that markets are fundamentally algorithms. The simple kind of posted-price spot market in which most people have traditionally bought things sits at the bottom of a Chomskyian hierarchy of complexity and self-referenced control. Just as newer algorithms may contain older algorithms within them, so the emergence of newer kinds of markets can contain and control the older kinds as they move to higher levels in this Chomskyian hierarchy. Futures markets may control spot markets, options markets may control futures markets, and the ever higher order of these markets and their increasing automation pushes the system upward towards the unreachable ideal of being a full-blown Universal Turing Machine (Cotogno 2003).
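
Purely as a toy illustration of this layering (a sketch of the metaphor, not Mirowski’s formalism), one can picture a posted-price spot market as a trivial rule and a futures market as an algorithm that wraps and references it; the cost-of-carry parameter below is an assumed placeholder, not a claim about how actual futures are priced.

```python
# Markets as nested algorithms: a higher-level market's rule contains and
# references the lower-level market's rule.
from dataclasses import dataclass

@dataclass
class SpotMarket:
    posted_price: float
    def quote(self) -> float:
        return self.posted_price              # the simplest, lowest-level rule

@dataclass
class FuturesMarket:
    underlying: SpotMarket
    carry_rate: float                         # assumed placeholder parameter
    def quote(self, periods: int) -> float:
        # the higher-level algorithm calls the lower-level one inside its own
        return self.underlying.quote() * (1 + self.carry_rate) ** periods

spot = SpotMarket(posted_price=100.0)
futures = FuturesMarket(underlying=spot, carry_rate=0.01)
print(spot.quote())        # 100.0
print(futures.quote(12))   # a quote built on top of, and controlling, the spot rule
```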

Mirowski brings to bear more recent arguments in biology regarding coevolution, noting that the space in which the agents and systems are evolving itself changes with their evolution. To the extent that the market system increasingly resembles a gigantic assembly of interacting and evolving algorithms, both biology and the problem of computability will come to bear and influence each other (Stadler et al. 2001). In the end the distinction between the two may become irrelevant.

In the great contrast of computational and dynamic complexity, we see crucial overlaps involving how the paradoxes arising from self-referencing underlying computational complexity can imply the emergence so deeply associated with dynamic complexity. These interrelations may become most manifest when contemplating the mirror world of reflexivity and its endless concatenations. These are among the many considerations that lie at the foundations of complexity economics.