Abstract
There are at least 45 definitions of complexity according to Seth Lloyd as reported in The End of Science (Horgan, 1997, pp. 303–305). Rosser Jr. (1999) argued for the usefulness in studying economics of a definition he called dynamic complexity that was originated by Day (1994). This is that a dynamical economic system fails to generate convergence to a point, a limit cycle or an explosion (or implosion) endogenously from its deterministic parts. It has been argued that nonlinearity was a necessary but not sufficient condition for this form of complexity, and that this definition constituted a suitably broad “big tent” to encompass the “four C’s” of cybernetics, catastrophe, chaos, and “small tent” (now better known as heterogeneous agents) complexity.
1.1 Forms of Complexity
There are at least 45 definitions of complexity according to Seth Lloyd as reported in The End of Science (Horgan, 1997, pp. 303–305). Rosser Jr. (1999) argued for the usefulness in studying economics of a definition he called dynamic complexity that was originated by Day (1994). This is that a dynamical economic system fails to generate convergence to a point, a limit cycle or an explosion (or implosion) endogenously from its deterministic parts. It has been argued that nonlinearity was a necessary but not sufficient condition for this form of complexity, and that this definition constituted a suitably broad “big tent” to encompass the “four C’s” of cybernetics, catastrophe, chaos, and “small tent” (now better known as heterogeneous agents) complexity.
Norbert Wiener (1948) founded cybernetics, which relied on computer simulations and was popular with Soviet central planners and computer scientists long after it was not so admired in the West. Jay Forrester (1961), inventor of the flight simulator, founded its rival system dynamics, arguing that nonlinear dynamical systems can produce “counterintuitive” results. Probably its most famous application was in The Limits to Growth (Meadows et al. 1972), eventually criticized for its excessive aggregation. Arguably both came from general systems theory (von Bertalanffy, 1950, 1974), which in turn developed from tektology, the general theory of organization due to Bogdanov (1925–29).
Catastrophe theory developed out of broader bifurcation theory, which relies on strong assumptions to characterize patterns of how smoothly changing control variables can generate discontinuous changes in state variables at critical bifurcation values (Thom, 1975), with Zeeman’s (1974) model of stock market crashes the first use of it in economics. Empirical methods for studying such models depend on multi-modal statistics (Cobb et al. 1983; Guastello 2011a, b). Due to the strict assumptions it relies upon, a backlash developed against its use, although Rosser Jr. (2007) argued this became overdone.
While chaos theory can be traced back to Poincaré (1890), it became prominent after climatologist Edward Lorenz (1963) discovered sensitive dependence on initial conditions, aka “the butterfly effect.” Applications in economics followed suggestions made by May (1976). Debates over empirical measurement and problems associated with forecasting have reduced its application in economics (Dechert, 1996). It is possible to develop models that exhibit combined catastrophic and chaotic phenomena, as in chaotic hysteresis, first shown as possible in a macroeconomic model by Puu (1990), with Rosser Jr. et al. (2001) estimating such patterns for investment in the Soviet Union in the post-World War II period.
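The sensitive dependence Lorenz discovered can be illustrated with the logistic map that May (1976) brought to wide attention. A minimal sketch; the parameter value r = 4 and the size of the perturbation are our illustrative choices, not taken from any of the cited works:

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{t+1} = r * x_t * (1 - x_t), iterated in its chaotic regime (r = 4).

def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-9)  # perturbed by one part in a billion

# The two orbits start indistinguishably close but diverge widely.
gap = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gap[0]:.1e}, largest gap over 50 steps: {max(gap):.3f}")
```

The initial discrepancy of 10⁻⁹ is amplified at each iteration until the two trajectories become effectively unrelated, which is why forecasting from imperfectly measured initial conditions fails in such systems.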
The small tent or heterogeneous agents type of dynamic complexity does not have a precise definition. Influentially, Arthur et al. (1997a) argue that such complexity exhibits six characteristics: (1) dispersed interaction among locally interacting heterogeneous agents in some space, (2) no global controller that can exploit opportunities arising from these dispersed interactions, (3) cross-cutting hierarchical organization with many tangled interactions, (4) continual learning and adaptation by agents, (5) perpetual novelty in the system as mutations lead it to evolve new ecological niches, and (6) out-of-equilibrium dynamics with either no or many equilibria and little likelihood of a global optimum state emerging. Many point to Thomas Schelling’s (1971) study on a 19-by-19 Go board of the emergence of urban segregation due to nearest neighbor effects as an early example.
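Schelling's checkerboard exercise can be sketched in a few lines of code: agents of two types each tolerate up to half unlike neighbors, yet these mild preferences generate strongly segregated clusters. The grid density, tolerance level, and move-to-a-random-satisfying-cell rule below are illustrative choices, not Schelling's exact protocol:

```python
# A minimal Schelling-style segregation model on a 19-by-19 board.
import random

random.seed(1)
N = 19      # Schelling used a 19-by-19 Go board
TOL = 0.5   # content if at least half of occupied neighbors are the same type

# 0 = empty cell; 1 and 2 = the two agent types, roughly one third each.
grid = [[random.choice([0, 1, 2]) for _ in range(N)] for _ in range(N)]

def occupied_neighbors(i, j):
    cells = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0) and 0 <= i + di < N and 0 <= j + dj < N:
                if grid[i + di][j + dj] != 0:
                    cells.append(grid[i + di][j + dj])
    return cells

def like_share(i, j, agent_type):
    """Share of occupied neighbors of cell (i, j) matching agent_type."""
    nbrs = occupied_neighbors(i, j)
    return sum(n == agent_type for n in nbrs) / len(nbrs) if nbrs else 1.0

def mean_like_share():
    shares = [like_share(i, j, grid[i][j])
              for i in range(N) for j in range(N) if grid[i][j] != 0]
    return sum(shares) / len(shares)

before = mean_like_share()
for _ in range(10000):
    i, j = random.randrange(N), random.randrange(N)
    if grid[i][j] != 0 and like_share(i, j, grid[i][j]) < TOL:
        # Unhappy agent: move to a random empty cell where it would be content.
        options = [(a, b) for a in range(N) for b in range(N)
                   if grid[a][b] == 0 and like_share(a, b, grid[i][j]) >= TOL]
        if options:
            a, b = random.choice(options)
            grid[a][b], grid[i][j] = grid[i][j], 0
after = mean_like_share()
print(f"mean like-neighbor share: {before:.2f} -> {after:.2f}")
```

No agent wants segregation, and there is no global controller; the clustered pattern emerges from dispersed local interactions alone, which is why this example is so often cited in the heterogeneous agents literature.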
Other forms of nonlinear dynamic complexity seen in economic models include non-chaotic strange attractors (Lorenz 1983), fractal basin boundaries (Lorenz 1983; Abraham et al. 1997), flare attractors (Hartmann and Rössler 1998; Rosser Jr. et al. 2003a), and more.
Other non-dynamic complexity approaches used in economics have included structural (Pryor 1995; Stodder 1995), hierarchical (Simon 1962), informational (Shannon 1948), algorithmic (Chaitin 1987), stochastic (Rissanen 1986), and computational (Lewis 1985; Albin with Foley 1998; Velupillai 2000).
Those arguing for focus on computational complexity include Velupillai (2005a, b) and Markose (2005), who say that the latter concept is superior because of its foundation on more well-defined ideas, such as algorithmic complexity (Chaitin 1987) and stochastic complexity (Rissanen 1989, 2005). These are seen as founded more deeply on the informational entropy work of Shannon (1948) and Kolmogorov (1983). Mirowski (2007) argues that markets themselves should be seen as algorithms that are evolving to higher levels in a Chomsky (1959) hierarchy of computational systems, especially as they increasingly are carried over computers and become resolved through programmed double-auction systems and the like. McCauley (2004, 2005) and Israel (2005) argue that such dynamic complexity ideas as emergence are essentially empty and should be abandoned for either more computational-based or more physics-based ones, the latter especially relying on invariance concepts.
At the most profound level computational complexity involves the problem of non-computability. Ultimately this depends on a logical foundation, that of non-recursiveness due to incompleteness in the Gödel sense (Church 1936; Turing 1937). In actual computer programs this manifests itself most clearly in the form of the halting problem (Blum et al. 1998). This amounts to the halting time of a program being infinite, and it links closely to other computational complexity concepts such as Chaitin’s algorithmic complexity. Such incompleteness problems present foundational problems for economic theory (Rosser Jr. 2009a, 2012a, b; Landini et al. 2020; Velupillai 2009).
In contrast, dynamic complexity and such concepts as emergence are useful for understanding economic phenomena and are not as incoherent and undefined as has been argued. A sub-theme of some of this literature, although not all of it, has been that biologically based models or arguments are fundamentally unsound mathematically and should be avoided in more analytical economics. On the contrary, such approaches can be used in conjunction with the dynamic complexity approach to explain emergence mathematically, and they can explain certain economic phenomena that may not be easily explained otherwise.
1.2 Foundations of Computational Complexity Economics
Velupillai (2000, pp. 199–200) summarizes the foundations of what he has labeled computable economics in the following.
Computability and randomness are the two basic epistemological notions I have used as building blocks to define computable economics. Both of these notions can be put to work to formalize economic theory in effective ways. However, they can be made to [do so] only on the basis of two theses: the Church-Turing thesis, and the Kolmogorov-Chaitin-Solomonoff thesis.
Church (1936) and Turing (1937) independently realized that several broad classes of functions could be described as “recursive” and were “calculable” (programmable computers had not yet been invented). Turing (1936, 1937) was the first to realize that Gödel’s (1931) Incompleteness Theorem provided a foundation for understanding when problems were not “calculable,” termed “effectively computable” since Tarski (1949). Turing’s analysis introduced the generalized concept of the Turing machine, now viewed as the model for a rational economic agent within computable economics (Velupillai 2005b, p. 181). While the original Gödel theorem relied upon a Cantor diagonal proof arising from self-referencing, the classic manifestation of non-computability in programming is the halting problem: that a program will simply run forever without ever reaching a solution (Blum et al. 1998).
Much of recent computable economics has involved showing that when one tries to put important parts of standard economic theory into forms that might be computable, it is found that they are not effectively computable in any general sense. These include Walrasian equilibria (Lewis 1992), Nash equilibria (Prasad 1991; Tsuji et al. 1998), more general aspects of macroeconomics (Leijonhufvud 1993), and whether a dynamical system will be chaotic or not (da Costa et al. 2005).
Indeed, what are viewed as dynamic complexities can stem from computability problems that arise in jumping from a classical and continuous real number framework to a digitized, rational-numbers-only framework. An example is the curious “finance function” of Clower and Howitt (1978), in which solution variables jump back and forth over large intervals discontinuously as the input variables go from integers, to non-integer rationals, to irrational numbers and back. Velupillai (2005b, p. 186) notes the case of a Patriot missile missing its target by 700 m and killing 28 soldiers as “friendly fire” in Dhahran, Saudi Arabia in 1991 due to a computer’s non-terminating cycling through a binary expansion on a decimal fraction. Finally, the discovery of chaotic sensitive dependence on initial conditions by Lorenz (1963) because of computer roundoff error is famous, a case that is computable but undecidable.
There are actually several computability-based definitions of complexity, although Velupillai (2000, 2005a, b) argues that they can be linked as part of the broader foundation of computable economics. The first is the Shannon (1948) measure of information content, which can be interpreted as attempting to observe structure in a stochastic system. It is thus derived from a measure of entropy in the system, or its state of disorder. Thus, if p(x) is the probability density function of a set of K states denoted by values of x, then the Shannon entropy is given by
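In standard notation this is

```latex
H(X) = -\sum_{x} p(x) \log_2 p(x),
```

with the sum running over the K states.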
From this it is trivial to obtain the Shannon information content of X = x as
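which in standard form is

```latex
h(x) = \log_2 \frac{1}{p(x)} = -\log_2 p(x).
```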
It came to be understood that this equals the number of bits in an algorithm that it takes to compute this code. This would lead Kolmogorov (1965) to define what is now known as Kolmogorov complexity as the minimum number of bits in any algorithm that does not prefix any other algorithm a(x) that a Universal Turing Machine (UTM) would require to compute a binary string of information, x, or,
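In symbols, consistent with the verbal definition just given (with U denoting the Universal Turing Machine), this can be written

```latex
K(x) = \min \left\{ \, \lvert a(x) \rvert \; : \; U(a(x)) = x \, \right\},
```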
where │ │ denotes the length of the algorithm in bits. Chaitin (1987) would independently discover and extend this minimum description length (MDL) concept and link it back to Gödel incompleteness issues, his version being known as algorithmic complexity, which would be taken up later by Albin (1982) and Lewis (1985, 1992) in economic contexts.
While these concepts usefully linked probability theory and information theory with computability theory, they all share the unfortunate aspect of being non-computable. This would be remedied by the introduction of stochastic complexity by Rissanen (1978, 1986, 1989, 2005). The intuition behind Rissanen’s modification of the earlier concepts is to focus not on the direct measure of information but to seek a shorter description or model that will depict the “regular features” of the string. For Kolmogorov a model of a string is another string that contains the first string. Rissanen (2005, pp. 89–90) defines a likelihood function for a given structure as a class of parametric density functions that can be viewed as respective models, where θ represents a set of k parameters and x is a given data string indexed by n:
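In a form consistent with this description, the model class can be written as

```latex
M_k = \left\{ \, f(x^n; \theta) \; : \; \theta \in \Theta^k \, \right\}.
```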
For a given f, with f(yn) a set of “normal strings,” the normalized maximum likelihood function will be given by
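With \(\hat{\theta}(x^n)\) denoting the maximum likelihood estimate, this takes the normalized maximum likelihood form

```latex
\hat{f}(x^n) = \frac{f\big(x^n; \hat{\theta}(x^n)\big)}{\sum_{y^n} f\big(y^n; \hat{\theta}(y^n)\big)},
```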
where the denominator of the right-hand side can be defined as being Cn,k.
From this the stochastic complexity is given by
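which, in terms of the normalized maximum likelihood function and the normalizing constant, reads

```latex
-\log \hat{f}(x^n) = -\log f\big(x^n; \hat{\theta}(x^n)\big) + \log C_{n,k}.
```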
This term can be interpreted as representing “the ‘shortest code length’ for the data xn that can be obtained with the model class Mk.” (Rissanen 2005, p. 90). With this we have a computable measure of complexity derived from the older ideas of Kolmogorov, Solomonoff, and Chaitin. The bottom line of Kolmogorov complexity is that a system is complex if it is not computable. The supporters of these approaches to defining economic complexity (Israel 2005; Markose 2005; Velupillai 2005a, b) point out the precision given by these measures in contrast to so many of the alternatives.
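Although Kolmogorov complexity itself is non-computable, the length of a string after lossless compression is often used in practice as a computable upper-bound proxy for it. A minimal sketch; this is an illustrative stand-in, not Rissanen's formal stochastic complexity measure:

```python
# Compressed length as a crude, computable proxy for algorithmic complexity:
# a highly patterned string compresses far more than a pseudo-random one.
import random
import zlib

random.seed(0)
patterned = b"ab" * 500                                    # strong regularity
noisy = bytes(random.getrandbits(8) for _ in range(1000))  # pseudo-random bytes

c_patterned = len(zlib.compress(patterned))
c_noisy = len(zlib.compress(noisy))
print(f"compressed length: patterned={c_patterned} bytes, noisy={c_noisy} bytes")
```

The patterned string has a short "description" (roughly, "repeat 'ab' 500 times"), while the pseudo-random string admits essentially no description shorter than itself, which is the intuition behind all of the MDL-style measures above.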
However, Chaitin’s algorithmic complexity (1966, 1987) introduces a limit to this precision, an ultimate underlying randomness. He considered the problem of a program having started without one knowing what it is and thus facing a probability that it will halt, which he labeled as Ω. He saw this randomness as underlying all mathematical “facts.” Indeed, this Ω itself is in general not computable (Rosser Jr. 2020a).
An example of this involves a theorem of Maymin (2011) that straddles the boundary of the deep unsolved problem of whether P (polynomial) equals NP (nondeterministic polynomial) in programs, thus having an unknown Ω. This theorem shows that under certain information conditions markets are efficient if P = NP, which few believe. At the edge of this, da Costa and Doria (2016) use the O’Donnell (1979) algorithm, which is exponential and thus not P but slowly growing, so “almost P,” to establish a counterexample function to the P = NP problem. The O’Donnell algorithm holds if P < NP is probable for any theory strictly stronger than Primitive Recursive Arithmetic, even as that theory cannot prove it. Such problems can appear, for example, in the computationally complex traveling salesman problem. Da Costa and Doria establish that under these conditions the O’Donnell algorithm behaves as an “almost P” system that implies an outcome of “almost efficient markets.” This is a result that walks on the edge of the unknown, if not the unknowable.
A deeper logical issue underlying computational complexity and economics involves fundamental debates over the nature of mathematics itself. Conventional mathematics assumes axioms labeled the Zermelo-Fraenkel-[Axiom of] Choice system, or ZFC. But some of these axioms have been questioned and efforts have been made to develop axiomatic mathematical systems not using them. The axioms that have been challenged have been the Axiom of Choice, the Axiom of Infinity, and the Law of the Excluded Middle. A general term for these efforts has been constructivist mathematics, with systems that particularly emphasize not relying on the Law of the Excluded Middle (which means no use of proof by contradiction) known as intuitionism, initially developed by Luitzen Brouwer (1908) of fixed point theorem fame. In particular, standard proofs of the Bolzano-Weierstrass theorem use proof by contradiction, with this underlying Sperner’s Lemma, which in turn underlies standard proofs of both the Brouwer and Kakutani fixed point theorems used in general and Nash equilibrium existence proofs (Velupillai 2006, 2008).
For mathematicians, if not economists, the most important of these debatable axioms is the Axiom of Choice, which allows for the relatively easy ordering of infinite sets. This underpins standard proofs of major theorems of mathematical economics, with Scarf (1973) probably the first to notice these possible problems. The Axiom of Choice is especially important in topology and central parts of real analysis. Its strongest formulation has been shown to be false by Specker (1953). But one way out of some of these problems is non-standard analysis, which allows for infinite and infinitesimal real numbers (Robinson 1966) and makes it possible to avoid the Axiom of Choice in proving some important theorems.
The question of the Axiom of Infinity may perhaps be most closely tied to the questions about computational complexity. The deep philosophical idea behind these constructivist approaches is that mathematics should deal with finite systems that are more realistic and more readily and easily computed. Going against this most strongly was Cantor’s introduction of levels of infinity into mathematics, an innovation that led Hilbert to praise Cantor for “bringing mathematicians into paradise.” But the computability critics argue that mathematical economics must fit the real world in a credible way, with efforts ongoing at constructing such an economics based on a constructivist foundation (Velupillai 2005a, b, 2012; Bartholo et al. 2009; Rosser Jr. 2010a, 2012a).
1.3 Epistemology and Computational Complexity
Regarding computational complexity, Velupillai (2000) provides definitions and general discussion and Koppl and Rosser Jr. (2002) provide a more precise formulation of the problem, drawing on arguments of Kleene (1967), Binmore (1987), Lipman (1991), and Canning (1992). Velupillai defines computational complexity straightforwardly as “intractability” or insolvability. Halting problems such as studied by Blum et al. (1998) provide excellent examples of how such complexity can arise, with this problem first studied for recursive systems by Church (1936) and Turing (1936, 1937).
In particular, Koppl and Rosser reexamined the famous “Holmes-Moriarty” problem of game theory, in which two players who behave as Turing machines contemplate a game between each other involving an infinite regress of thinking about what the other one is thinking about (Morgenstern 1935). Essentially this is the problem of n-level playing with n having no upper limit (Bacharach and Stahl 2000). This has a Nash equilibrium, but “hyper-rational” Turing machines cannot arrive at knowing whether or not they have that solution due to the halting problem. That the best reply functions are not computable arises from the self-referencing involved, fundamentally similar to that underlying the Gödel Incompleteness Theorem (Rosser Sr 1936; Kleene 1967, p. 246). Aaronson (2013) has shown links between these problems in game theory and the P = NP problem of computational complexity. Such problems extend to general equilibrium theory as well (Lewis 1992; Richter and Wong 1999; Landini et al. 2020).
Binmore’s (1987, pp. 209–212) response to such undecidability in self-referencing systems invokes a “sophisticated” form of Bayesian updating involving a degree of greater ignorance. Koppl and Rosser agree that agents can operate in such an environment by accepting limits on knowledge and operate accordingly, perhaps on the basis of intuition or “Keynesian animal spirits” (Keynes 1936). Hyper-rational agents cannot have complete knowledge, essentially for the same reason that Gödel showed that no logical system can be complete within itself.
However, even for Binmore’s proposed solution there are limits. Thus, Diaconis and Freedman (1986) have shown that Bayes’ Theorem fails to hold in an infinite-dimensional space. There may be a failure to converge on the correct solution through Bayesian updating, notably when the basis is discontinuous. There can be convergence on a cycle in which agents jump back and forth from one probability to another, neither of which is correct. In the simple example of coin tossing, they might be jumping back and forth between assuming priors of 1/3 and 2/3 without ever being able to converge on the correct probability of 1/2. Nyarko (1991) has studied such kinds of cyclical dynamics in learning situations in generalized economic models.
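The cycling can be made concrete with a stylized sketch: a fair coin (true probability 1/2) is modeled with a prior supported only on p = 1/3 and p = 2/3, so the posterior oscillates forever rather than converging. This construction is our illustration of the failure mode described above, not Diaconis and Freedman's own example:

```python
# Bayesian updating over two hypotheses, p = 1/3 and p = 2/3, when the true
# coin is fair (p = 1/2) and so lies outside the support of the prior.

def update(post_23, flip):
    """One Bayes step for the posterior P(p = 2/3), given flip 'H' or 'T'."""
    like_23 = 2/3 if flip == "H" else 1/3   # likelihood under p = 2/3
    like_13 = 1/3 if flip == "H" else 2/3   # likelihood under p = 1/3
    num = post_23 * like_23
    return num / (num + (1 - post_23) * like_13)

post = 0.5                       # uniform prior over the two hypotheses
trajectory = [post]
for t in range(100):             # a perfectly fair alternating H, T sequence
    post = update(post, "H" if t % 2 == 0 else "T")
    trajectory.append(post)

# The posterior cycles between two values forever instead of settling down.
print(f"last four posterior values: {[round(p, 3) for p in trajectory[-4:]]}")
```

Even though the empirical frequency of heads is exactly 1/2, the posterior jumps back and forth between the same two values on every pair of flips; no amount of further data resolves the cycle, because the correct hypothesis is not in the support of the prior.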
Koppl and Rosser compare this issue to that of Keynes’s problem (1936, Chap. 12) of the beauty contest. In this the participants are supposed to win if they most accurately guess the guesses of the other participants, potentially involving an infinite regress problem with the participants trying to guess how the other participants are going to be guessing about their guessing and so forth. This can also be seen as a problem of reflexivity (Rosser Jr. 2020b). A solution comes by choosing to be somewhat ignorant or boundedly rational and operating at a particular level of analysis. However, as there is no way to determine rationally the degree of boundedness, which itself involves an infinite regress problem (Lipman 1991), this decision also ultimately involves an arbitrary act, based on animal spirits or whatever, a decision ultimately made without full knowledge.
A curiously related point here is in later results (Gode and Sunder 1993; Mirowski 2002) on the behavior of zero intelligence traders. Gode and Sunder have shown that in many artificial market setups zero intelligence traders following very simple rules can converge on market equilibria that may even be efficient. Not only may it be necessary to limit one’s knowledge in order to behave in a rational manner, but one may be able to be rational in some sense while being completely without knowledge whatsoever. Mirowski and Nik-Khah (2017) argue that this completes a transformation of the treatment of knowledge in economics in the post-World War II era from assuming that all agents have full knowledge to all agents having zero knowledge.
A further point on this is that there are degrees of computational complexity (Velupillai 2000; Markose 2005), with Kolmogorov (1965) providing a widely accepted definition that the degree of computational complexity is given by the minimum length of a program that will halt on a Turing machine. We have been considering the extreme cases of no halting, but there is indeed an accepted hierarchy among levels of computational complexity, with the knowledge difficulties undergoing qualitative shifts across them. This hierarchy is widely seen as consisting of four levels (Chomsky 1959; Wolfram 1984; Mirowski 2007). At the lowest level are linear systems, easily solved, with such a low level of computational complexity that we can view them as not complex. Above that level are polynomial (P) problems that are substantially more computationally complex, but still generally solvable. Above that are exponential and other non-polynomial (NP) problems that are very difficult to solve, although it remains as yet unproven that these two levels are fundamentally distinct, one of the most important unsolved problems in computer science. Above this level is that of full computational complexity, where the minimum length is infinite and the programs do not halt. Here the knowledge problems can only be solved by becoming effectively less intelligent.
1.4 Foundations of Dynamic Complexity Economics
In contrast with the computationally defined measures described above, the dynamic complexity definition stands out for its negativity: dynamical systems that do not endogenously and deterministically generate certain “well-behaved” outcomes. The charge that it is not precise carries weight. However, its virtue is precisely the generality guaranteed by its vagueness. It can apply to a wide variety of systems and processes that many have described as being “complex.” Of course, the computationalists argue with reason that they are able to subsume substantial portions of nonlinear dynamics with their approach, as for example with the already mentioned result on the non-computability of chaotic dynamics (da Costa et al. 2005).
However, most of this recent debate and discussion, especially by Israel (2005), McCauley (2005), and Velupillai (2005b, 2005c), has focused on a particular outcome that is associated with some interacting agents models within the smaller tent (heterogeneous interacting agents) complexity part of the broader big tent dynamic complexity concept. This property or phenomenon is emergence. It was much discussed by cyberneticists and general systems theorists (von Bertalanffy 1974), including under the label anagenesis (Boulding 1978; Jantsch 1982), although it was initially formalized by Lewes (1875) and expanded by Morgan (1923), drawing upon the idea of heteropathic laws due to Mill (1843, Book III). Much recent discussion has focused on Crutchfield (1994) because he has associated it more clearly with processes within computerized systems of interacting heterogeneous agents and linked it to minimum length computability concepts related to Kolmogorov’s idea, which makes it easier for the computationalists to deal with. In any case, the idea is of the dynamic appearance of something new endogenously and deterministically from the system, often also labeled self-organization.
Furthermore, all of these cited here would add another important element, that it appears at a higher level within a dynamic hierarchical system as a result of processes occurring at lower levels of the system. Crutchfield (1994) allows that what is involved is symmetry breaking bifurcations, which leads McCauley (2005, pp. 77–78) to be especially dismissive, identifying it with biological models (Kauffman 1993) and declaring that “so far no one has produced a clear empirically relevant or even theoretically clear example.” The critics complain of implied holism, and Israel identifies it with Wigner’s (1960) “mystical” alienation from the solidly grounded view of Galileo.
Now the complaint of McCauley amounts to an apparent lack of invariance, a lack of ergodicity or steady state equilibria, with clearly identifiable symmetries whose breaking brings about these higher-level reorganizations or transformations.
We can understand how a cell mutates to a new form, but we do not have a model of how a fish evolves into a bird. That is not to say that it has not happened, only that we do not have a model that helps us to imagine the details, which must be grounded in complicated cellular interactions that are not understood. (McCauley 2005, p. 77)Footnote 18
While he is probably correct that the details of these interactions are not fully understood, a footnote on the same page points in the direction of some understanding that has appeared, not tied directly to Crutchfield or Kauffman. McCauley notes the work of Hermann Haken (1983) and his “examples of bifurcations to pattern formation via symmetry breaking.” Several possible approaches suggest themselves at this point.
One approach is that of synergetics due to Haken (1983), alluded to above. This deals more directly with the concept of entrainment of oscillations via the slaving principle (Haken 1996), which operates on the principle of adiabatic approximation. A complex system is divided into order parameters that are presumed to move slowly in time and “slave” faster moving variables or subsystems. While it may be that the order parameters are operating at a higher hierarchical level, which would be consistent with many generalizations made about relative patterns between such levels (Allen and Hoekstra 1990; Holling 1992; Radner 1992), this is not necessarily the case. The variables may well be fully equivalent in a single, flat hierarchy, such as with the control and state variables in catastrophe theory models. Stochastic perturbations can lead to structural change near bifurcation points.
If slow dynamics are given by vector F, fast dynamics generated by vector q, with A, B, and C being matrices, and ε a stochastic noise vector, then a locally linearized version is given by
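In a standard synergetics form consistent with the description that follows, with the slow variables entering through the matrix B(F), this can be written

```latex
\dot{q} = A q + B(F)\, q + C \varepsilon .
```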
Adiabatic approximation is given by
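Setting \(\dot{q} = 0\), so that the fast variables are treated as instantaneously adjusted to the slow ones, and solving gives

```latex
\hat{q} = -\big(A + B(F)\big)^{-1} C \varepsilon .
```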
Fast variable dependence on the slow variables is given by A + B(F). Order parameters are those of the least absolute value.
The symmetry breaking bifurcation occurs when the order parameters destabilize by obtaining eigenvalues with positive real parts, while the “slave variables” exhibit the opposite. Chaos is one possible outcome. However, the most dramatic situation is when the slaved variables destabilize and “revolt” (Diener and Poston 1984), with the possibility of the roles switching within the system and former slaves replacing the former “bosses” to become the new order parameters. An example in nature of such an emerging and self-organizing entrainment might be the periodic and coordinated appearance of the slime mold out of separated amoebae, which later disintegrates back into its isolated cells (Garfinkel 1987). An example in human societies may be the outbreak of the mid-fourteenth century Great Plague in Europe, when accumulating famine and immunodeficiency exploded in a massive population collapse (Braudel 1967).
Another approach is found in Nicolis (1986), derived from the work of Nicolis and Prigogine (1977) on frequency entrainment. Rosser Jr. (1994) has argued that this can serve as a possible model for the anagenetic moment, or the emergence of a new level of hierarchy. Let there be n well-defined levels of the hierarchy, with L1 at the bottom and Ln at the top. A new level, Ln+1, or dissipative structure, can emerge at a phase transition with a sufficient degree of entrainment of the oscillations at that level. Let there be k oscillating variables xj, and let zi(t) be an independently and identically distributed exogenous stochastic process with zero mean and constant variance; then the dynamics are given by coupled, nonlinear differential equations of the form
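One schematic way of writing such a system, with the intrinsic oscillation first, the stochastic term second, and the coupling as the third term (the exact specification is in Nicolis 1986), is

```latex
\frac{dx_j}{dt} = f_j(x_j) + z_i(t) + \sum_{i=1}^{k} w_{ij}\, x_i , \qquad j = 1, \dots, k,
```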
with wij representing a cross-correlation matrix operator. The third term is the key, either being “on” or “off,” with the former showing frequency entrainment. Nicolis (1986) views this in terms of a model of neurons, with a master hard nonlinear oscillator being turned on by a symmetry breaking of the cross-correlation matrix operator when the probability distribution of the real parts of its eigenvalues comes to exceed zero. Then a new variable vector yj will emerge at the Ln+1 level, which will damp or stimulate the oscillations at level Ln, depending on whether the sum over them is below or above zero. An example might be the emergence of a new level of urban hierarchy (Rosser Jr. 1994).
Regarding the relation between dynamic complexity and emergence, another perspective has come from the Austrian School of economics (Koppl 2006, 2009; Lewis 2012; Rosser Jr. 2012a). One of the School’s deepest ideas is that market economic systems spontaneously emerge, an idea drawn from the Scottish Enlightenment of Hume and Smith, as well as from such thinkers as Mill (1843) and Herbert Spencer (1867–1874), who wrote on both evolution and economic sociology (Rosser Jr. 2014b). This link can be found in the work of Carl Menger (1871/1981), the founder of the Austrian School. Menger posed this as follows in terms of what economic research should discover (Menger 1883/1985, p. 148):
…how institutions which serve the common welfare and are extremely significant for its development come into being without a common will directed toward establishing them.
Menger (1892) then posed the spontaneous emergence of commodity monies in primitive societies with no fiat role by states as an important example of this.
Various followers of Menger did not pursue this approach strongly, many emphasizing equilibrium approaches not all that different from the emerging neoclassical view, an idea one could also find in the work of Menger, who is widely viewed as one of the founders of the neoclassical marginalist approach along with Jevons and Walras. The crucial figure who revived an interest in emergence among the Austrians and developed it much further was Friedrich A. Hayek (1948, 1967). Hayek drew on the incompleteness results of Gödel, aware of the role of self-referencing in this, and of how overcoming the paradoxes of incompleteness may involve the emergence of a higher level that can understand the lower level. Curiously, his awareness of this originally came from his work in psychology in his 1952 The Sensory Order (pp. 188–189):
Applying the same general principles to the human brain as an apparatus of classification, it would appear to mean that, even though we may understand its modus operandi in general terms, or, in other words, possess an explanation of the principle on which it operates, we shall never, by means of the same brain, be able to arrive at a detailed explanation of its working in particular circumstances, or be able to predict what the results of its operations will be. To achieve this would require a brain of a higher order of complexity, though it might still be built on the same principles. Such a brain might be able to explain what happens in our brain, but it would in turn be unable to explain its own operations, and so on.
Koppl (2006, 2009) argues that this argument applies as well to Hayek’s long opposition to central planning, with a central planner facing just this problem when they attempt to understand the effect on the economy they are trying to plan of their own planning efforts.Footnote 22 This view of the importance of complexity and emergence would come to be widely influential in Austrian economics since Hayek put forward his arguments and continues to be so (O’Driscoll and Rizzo 1985; Lachmann 1986; Lavoie 1989; Horwitz 1992; Wagner 2010).
1.5 Dynamic Complexity and Knowledge
In dynamically complex systems, the knowledge problem becomes the general epistemological problem. Consider the specific problem of being able to know the consequences of an action taken in such a system. Let G(xt) be the dynamical system in an n-dimensional space. Let an agent possess an action set A. Let a given action by the agent at a particular time be given by ait. For the moment let us not specify any actions by any other agents, each of whom also possesses his or her own action set. We can identify a relation whereby xt = f(ait). The knowledge problem for the agent in question thus becomes: “Can the agent know the reduced system G(f(ait)) when this system possesses complex dynamics due to nonlinearity?”
First of all, it may be possible for the agent to be able to understand the system and to know that he or she understands it, at least to some extent. One reason why this can happen is that many complex nonlinear dynamical systems do not always behave in erratic or discontinuous ways. Many fundamentally chaotic systems exhibit transiency (Lorenz 1992). A system can move in and out of behaving chaotically, with long periods passing during which the system will effectively behave in a non-complex manner, either tracking a simple equilibrium or following an easily predictable limit cycle. While the system remains in this pattern, actions by the agent may have easily predicted outcomes, and the agent may even be able to become confident regarding his or her ability to manipulate the system systematically. However, this essentially avoids the question.
Let us consider four forms of dynamic complexity: chaotic dynamics, fractal basin boundaries, discontinuous phase transitions in heterogeneous agent situations, and catastrophe theoretic models related to heterogeneous agent systems. For the first of these there is a clear problem for the agent: the existence of sensitive dependence on initial conditions. If an agent moves from action ait to action ajt, where |ait − ajt| < ε < 1, then no matter how small ε is, there exists an m such that |G(f(ait+t′)) − G(f(ajt+t′))| > m for some t′ for each ε. As ε approaches zero, m/ε will approach infinity. It will be very hard for the agent to be confident in predicting the outcome of changing his or her action. This is the problem of the butterfly effect or sensitive dependence on initial conditions. More particularly, if the agent has an imperfectly precise awareness of his or her actions, with the zone of fuzziness exceeding ε, the agent faces a potentially large range of uncertainty regarding the outcome of his or her actions. In Edward Lorenz’s (1963) original study of this matter, when he “discovered chaos” after restarting his simulation of a three-equation system of fluid dynamics partway through, the roundoff error that triggered the subsequent dramatic divergence was too small for his computer to “perceive” (at the fourth decimal place).
There are two offsetting elements for chaotic dynamics. Although an exact knowledge is effectively impossible, requiring essentially infinitely precise knowledge (and knowledge of that knowledge), a broader approximate knowledge over time may be possible. Thus, chaotic systems are generally bounded and often ergodic (although not always). While short-run relative trajectories for two slightly different actions may sharply diverge, the trajectories will at some later time return toward each other, becoming arbitrarily close to each other before once again diverging. Not only may the bounds of the system be knowable, but the long-run average of the system may be knowable. There are still limits as one can never be sure that one is not dealing with a long transient of the system, with it possibly moving into a substantially different mode of behavior later. But the possibility of a substantial degree of knowledge, with even some degree of confidence regarding that knowledge is not out of the question for chaotically dynamic systems.
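The offsetting possibility of approximate long-run knowledge can also be illustrated. As a sketch of my own (not from the text), the logistic map at its fully chaotic parameter value is bounded and ergodic with a known invariant density, so the time average of its erratic trajectory converges to a knowable long-run mean of 1/2 even though point-by-point prediction fails:

```python
def logistic_orbit(x0=0.1234, r=4.0, n=1_000_000):
    """Iterate the chaotic logistic map x -> r*x*(1-x).

    Returns the time average of the orbit together with its observed
    bounds, illustrating that averages and bounds remain knowable.
    """
    x, total, lo, hi = x0, 0.0, x0, x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += x
        lo, hi = min(lo, x), max(hi, x)
    return total / n, lo, hi

mean, lo, hi = logistic_orbit()
```

The orbit never leaves the unit interval, and its time average settles near the space average 1/2 implied by the invariant density, despite the short-run unpredictability.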
Fractal basin boundaries were first identified in economic models by Hans-Walter Lorenz (1992), in the same paper in which he discussed the problem of chaotic transience. Whereas in a chaotic system there may be only one basin of attraction, albeit with the attractor being fractal and strange and thus generating erratic fluctuations, the fractal basin boundary case involves multiple basins of attraction whose boundaries with each other take fractal shapes. The attractor for each basin may well be as simple as a single point. However, the boundaries between the basins may lie arbitrarily close to each other in certain zones.
For the purely deterministic case, once one is able to determine which basin of attraction one is in, a substantial degree of predictability may ensue. Yet there may be the problem of transient dynamics, with the system taking a long and circuitous route before it begins to get anywhere close to the attractor, even if the attractor is merely a point in the end. The problem arises if the system is not strictly deterministic, if G includes a stochastic element, however small. In this case one may easily be pushed across a basin boundary, especially if one is in a zone where the boundaries lie very close to one another. Thus there may be sudden and very difficult to predict discontinuous changes in the dynamic path as the system begins to move toward a very different attractor in a different basin. The effect is very similar to that of sensitive dependence on initial conditions in epistemological terms, even if the two cases are mathematically quite distinct.
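Fractal basin boundaries can be made concrete with a standard example from outside the economics literature: the basins of Newton’s method for z³ − 1 = 0. Each of the three roots is a simple point attractor, yet near the boundary (the origin lies on it) arbitrarily close starting points fall into different basins, so a tiny stochastic push can switch which attractor is reached, as discussed above. A minimal sketch:

```python
import numpy as np

ROOTS = np.exp(2j * np.pi * np.arange(3) / 3)  # the three cube roots of unity

def newton_basin(z, max_iter=60, tol=1e-8):
    """Index of the root Newton's method for z^3 - 1 converges to, or -1."""
    for _ in range(max_iter):
        z = z - (z**3 - 1.0) / (3.0 * z**2)
        dist = np.abs(ROOTS - z)
        if dist.min() < tol:
            return int(dist.argmin())
    return -1

# Any neighborhood of the origin, a boundary point, meets all three basins:
labels = set()
for x in np.linspace(-0.05, 0.05, 11):
    for y in np.linspace(-0.05, 0.05, 11):
        z0 = complex(x, y)
        if abs(z0) > 1e-6:  # skip the singular point z = 0
            labels.add(newton_basin(z0))
```

Even within a tiny square around the origin, starting points scatter across all three basins, so basin membership is effectively unknowable near the boundary while remaining trivially knowable deep inside any one basin.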
Nevertheless, in this case as well there may be something similar to the kind of dispensation over the longer run we noted for the case of chaotic dynamics. Even if exact prediction in the chaotic case is all but impossible, it may be possible to discern broader patterns, bounds and averages. Likewise in the case of fractal basin boundaries with a stochastic element, over time one should observe a jumping from one basin to another. Somewhat like the pattern of long run evolutionary game dynamics studied by Binmore and Samuelson (1999), one can imagine an observer keeping track of how long the system remains in each basin and eventually developing a probability profile of the pattern, with the percent of time the system spends in each basin possibly approaching asymptotic values. However, this is contingent on the nature of the stochastic process as well as the degree of complexity of the fractal pattern of the basin boundaries. A non-ergodic stochastic process may render it very difficult, even impossible, to observe convergence on a stable set of probabilities for being in the respective basins, even if those are themselves few in number with simple attractors.
Consider next the case of phase transitions in systems of heterogeneous, locally interacting agents, the world of so-called “small tent” complexity. Brock and Hommes (1997) have developed a useful model for understanding such phase transitions, based on statistical mechanics. This is a stochastic system driven fundamentally by two key parameters, a strength of interactions or relationships between neighboring agents and a degree of willingness of the agents to switch behavioral patterns. For their model the product of these two parameters is crucial, with a bifurcation occurring in that product. If the product is below a certain critical value, there will be a single equilibrium state. However, once the product exceeds that critical value, two distinct equilibria emerge. Effectively the agents will jump back and forth between these equilibria in herding patterns. For financial market models (Brock and Hommes 1998) this can resemble oscillations between optimistic bull markets and pessimistic bear markets, whereas below the critical value the market will have much less volatility as it tracks something that may be a rational expectations equilibrium.
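The role of the product of the two parameters can be sketched with the mean-field equilibrium condition familiar from the statistical-mechanics approach underlying such models, m = tanh(βJm), where β (willingness to switch, or intensity of choice) and J (interaction strength) are my illustrative labels rather than notation from the text. Below the critical product βJ = 1 the unique equilibrium is m = 0; above it a symmetric pair of nonzero equilibria emerges:

```python
import math

def equilibrium(beta_J, m0=0.5, iters=200):
    """Iterate m -> tanh(beta_J * m) to a stable fixed point.

    beta_J is the product of intensity of choice and interaction
    strength; the pitchfork bifurcation occurs at beta_J = 1.
    """
    m = m0
    for _ in range(iters):
        m = math.tanh(beta_J * m)
    return m

low = equilibrium(0.5)   # below the critical product: unique equilibrium m = 0
high = equilibrium(2.0)  # above it: converges to the positive member of the pair
```

Starting from a positive initial guess, the subcritical case collapses to zero while the supercritical case locks onto a nonzero “herding” equilibrium; by symmetry a mirror-image negative equilibrium also exists, and stochastic shocks can flip the system between the two.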
For this kind of a setup there are essentially two serious problems. One is determining the value of the critical threshold. The other is understanding how the agents jump from one equilibrium to the other in the multiple equilibrium zone. Certainly the second problem resembles somewhat the discussion from the previous case, if not involving as dramatic a set of possible discontinuous shifts.
Of course once a threshold of discontinuity is passed it may be recognizable when it is approached again. But prior to doing so it may be essentially impossible to determine its location. The problem of determining a discontinuity threshold is a much broader one that vexes policymakers in many situations, such as attempting to avoid catastrophic thresholds that can bring about the collapse of a species population or of an entire ecosystem. One does not want to cross the threshold, but without doing so, one does not know where it is. However, for less dangerous situations involving irreversibilities, it may be possible to determine the location of the threshold as one moves back and forth across it.
On the other hand, in such systems the location of such thresholds may well not remain fixed. Often such systems exhibit an evolutionary self-organizing pattern in which the parameters of the system themselves become subject to evolutionary change as the system moves from zone to zone. Such non-ergodicity is consistent not only with Keynesian-style uncertainty, but may also come to resemble the complexity identified by Hayek (1948, 1967) in his discussions of self-organization within complex systems. Of course, for market economies Hayek evinced an optimism regarding the outcomes of such processes: even if market participants may not be able to predict those outcomes, the pattern of self-organization will ultimately be largely beneficial if left on its own. Although Keynesians and Hayekian Austrians are often seen as in deep disagreement, some observers have noted the similarities of viewpoint regarding these underpinnings of uncertainty (Shackle 1972; Loasby 1976; Rosser Jr. 2001a, b). Furthermore, this approach leads to the idea of the openness of systems that becomes consistent with the critical realist approach to economic epistemology (Lawson 1997).
Considering this problem of important thresholds brings us to the last of the forms of dynamic complexity to be considered here, catastrophe theory interpretations. The knowledge problem is essentially that previously noted, but writ larger, as the discontinuities involved are more likely to be large, such as the crashes of major speculative bubbles. The Brock-Hommes model and its descendants can be seen as a form of what is involved, but the original catastrophe theory approach brings out the key issues more clearly.
The very first application of catastrophe theory in economics, by Zeeman (1974), indeed considered financial market crashes in a simplified two-agent formulation: fundamentalists, who stabilize the system by buying low and selling high, and “chartists,” who chase trends in a destabilizing manner by buying when markets rise and selling when they fall. As in the Brock-Hommes formulation, he allows agents to change their roles in response to market dynamics, so that as the market rises fundamentalists become chartists, accelerating the bubble, and when the crash comes they revert to being fundamentalists, accelerating the crash. Rosser Jr. (1991) provides an extended formalization of this in catastrophe theory terms that links it to the analysis of Minsky (1972) and Kindleberger (2001), further taken up in Rosser Jr. et al. (2012) and Rosser Jr. (2020c). This formulation involves a cusp catastrophe with the two control variables being the demands of the two categories of agents, with the chartists’ demand determining the position of the cusp that allows for market crashes.
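The cusp structure can be sketched in its standard normal form, a sketch of my own rather than Zeeman’s specific equations. Equilibria satisfy x³ + ax + b = 0; the number of equilibria jumps from one to three as the controls cross the bifurcation set 4a³ + 27b² = 0, with the splitting control a playing the role the text assigns to chartist demand (an illustrative labeling):

```python
import numpy as np

def num_equilibria(a, b):
    """Count real roots of the cusp equilibrium condition x^3 + a*x + b = 0.

    Three real roots (a bistable market with a crash possibility) occur
    exactly when 4*a**3 + 27*b**2 < 0, i.e. for sufficiently negative a.
    """
    roots = np.roots([1.0, 0.0, a, b])
    return int(np.sum(np.abs(roots.imag) < 1e-9))
```

With the splitting factor inactive (a ≥ 0) there is a single equilibrium for any b; once a is sufficiently negative the surface folds, and sweeping b across the fold produces the discontinuous jump, the formal image of a market crash.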
The knowledge problem here involves something not specifically modeled in Brock and Hommes, although they have a version of it. It is the matter of the expectations of agents about the expectations of the other agents. This is effectively the “beauty contest” issue discussed by Keynes in Chapter 12 of his General Theory (1936). The winner of such a newspaper beauty contest is not the entrant who guesses the prettiest contestant, but the one who best guesses the guesses of the other participants. Keynes famously noted that one could play this game at higher levels, guessing others’ expectations of others’ guesses, and so on, in principle an infinite regress leading to an impossible knowledge problem. In contrast, the Brock and Hommes approach simply has agents shifting strategies after watching what others do; these potentially higher-level problems do not enter in. Such problems reappear in the problems associated with computational complexity.
1.6 Knowledge and Ergodicity
A controversial issue involving knowledge and complexity involves the deep sources of the Keynes-Knight idea of fundamental uncertainty (Keynes 1921; Knight 1921). Both of them made it clear that for uncertainty there is no underlying probability distribution determining important events that agents must make decisions about. Keynes’s formulation of this has triggered much discussion and debate as to why he saw this lack of a probability distribution arising.
One theory that has received much attention, due to Davidson (1982-83), is that while neither Keynes nor Knight ever mentioned it, what can bring about such uncertainty, especially for Keynes’s understanding of it, is the appearance of nonergodicity in the dynamic processes underlying economic reality. In making this argument, Davidson specifically cited arguments made by Paul Samuelson (1969, p. 184) to the effect that “economics as a science assumes the ergodic axiom.” Davidson relied on this to assert that failure of this axiom is an ontological matter that is central to understanding Keynesian uncertainty, when knowledge breaks down. Many have since repeated this argument, although Alvarez and Ehnts (2016) argue that Davidson misinterpreted Samuelson who actually dismissed this ergodic view as being tied to an older classical view that he did not accept.
Davidson’s argument has more recently come under criticism by various observers, perhaps most vigorously recently by O’Donnell (2014-15), who argues that Davidson has misrepresented the ergodic hypothesis, that Keynes never considered it, and that Keynesian uncertainty is more a matter of short-run instabilities to be understood using behavioral economics rather than the asymptotic elements that are tied up with ergodicity. An important argument by O’Donnell is that even in an ergodic system that is going to go to a long-run stationary state, it may be out of that state for a period of time so long that one will be unable to determine if it is ergodic or not. This is a strong argument that Davidson has not succeeded in fully replying to (Davidson 2015).
Central to this is understanding the ergodic hypothesis itself, its development and limits, as well as its relationship to Keynes’s own arguments, which turns out to be somewhat complicated, but indeed linked to central concerns of Keynes in an indirect way, especially given that he never directly mentioned it. Most economists discussing this matter, including both Davidson and O’Donnell, have accepted as the definition of an ergodic system that over time (asymptotically) its “space averages equal its time averages.” This formulation was due to Ehrenfest and Ehrenfest-Afanessjewa (1911), with Paul Ehrenfest a student of Ludwig Boltzmann (1884), who had expanded the study of ergodicity (and coined the term) as part of his long study of statistical mechanics, particularly how a long-term aggregate average (such as temperature) could emerge from a set of dynamically stochastic parts (particle movements). It turns out that for all its widespread influence, the precise formulation by the Ehrenfests was inaccurate (Uffink 2006). But this reflected the fact that there were multiple strands in the meaning of “ergodicity.”
In fact there is ongoing debate about how Boltzmann coined the term in the first place. His student Ehrenfest claimed it came from combining the Greek ergon (“work”) with hodos (“path”), while Gallavotti (1999) has argued that it came from Boltzmann using his own neologism, monode, meaning a stationary distribution, instead of hodos. This fits with most of the early formulations of ergodicity, which analyzed it within the context of stationary distributions.
Later discussions of ergodicity would draw on two complementary theorems proven by Birkhoff (1931) and von Neumann (1932), although the latter was actually proven first and emphasizes measure preservation, while Birkhoff’s variation was more geometric and related to recurrence properties in dynamical systems. Both involve long-run convergence, but Birkhoff’s formulation established, beyond measure preservation, a metric indecomposability for stationary ergodic systems: not only is the space properly filled, but it is impossible to break the system into two subsystems that each fully fill the space and preserve measure, a result extending fundamental work by Poincaré (1890) on how recurrence and space filling help explain how chaotic dynamics can arise in celestial mechanics.
In von Neumann’s (1932) formulation, let T be a measure-preserving transformation on a measure space and define, for every square-integrable function f on that space, (Uf)(x) = f(Tx); then U is a unitary operator on that space. For any such unitary operator U on a Hilbert space H, the sequence of averages

$$S_N f = \frac{1}{N}\sum_{n=0}^{N-1} U^n f$$

is strongly convergent for every f in H. We note that these are finite measure spaces and that this refers to stationary systems, just as with Boltzmann.
Birkhoff’s (1931) extension, sometimes called the “individual ergodic theorem,” modifies the above sequence of averages to be

$$A_N f(x) = \frac{1}{N}\sum_{n=0}^{N-1} f(T^n x),$$

which converges for almost every x. These complementary theorems have been generalized to Banach spaces and many other conditions.Footnote 23 It was from these theorems that the next wave of developments in Moscow and elsewhere would evolve.Footnote 24 This was the state of ergodic theory when Keynes had his debate over econometrics at the end of the 1930s with that student of Paul Ehrenfest, Jan Tinbergen.
The link between stationarity and ergodicity would weaken in later study, with Malinvaud (1966) showing that a stationary system need not be ergodic, a limit cycle being an example, a case Davidson was aware of from the beginning of his discussions. However, it continued to be believed that ergodic systems must be stationary, and this remained key for Davidson as well as being accepted by most of his critics, including O’Donnell. It turns out, however, that this may break down in ergodic chaotic systems of infinite dimension, which may not be stationary (Shinkai and Aizawa 2006), which brings back the role of chaotic dynamics in undermining the ability to achieve knowledge of a dynamical system, even one that is ergodic.
Given these complications it is worthwhile to return to Keynes to understand what his concerns were, which came out most clearly in his debates with Tinbergen (1937, 1940; Keynes, 1938) over how to econometrically estimate models for forecasting macroeconomic dynamics. A deep irony here is that Tinbergen was a student of Paul Ehrenfest and so was indeed influenced by his ideas on ergodicity, even as Keynes did not directly address this matter. In any case, what Keynes objected to was the apparent absence of homogeneity, essentially a concern that the model itself changes over time. Keynes’s solution to this was to break a time-series down into sub-samples to see if one gets the same parameter estimates as one does for the whole time-series. Homogeneity is not strictly identical to either stationarity or ergodicity, but it is probably the case that at the time Tinbergen, following Ehrenfest, assumed all three held for the models he estimated. Thus indeed the ergodic hypothesis was assumed to hold for these early econometric models, whereas Keynes was skeptical of there being sufficient homogeneity for one to assume one knew what the system was doing over time (Rosser Jr. 2016a).
1.7 Reflexivity and the Unification of Complexity Concepts
Closely related to self-referencing is the idea of reflexivity. This is a term with no agreed-upon definition, and it has been used in a wide variety of ways (Lynch 2000). It is derived from the Latin reflectere, usually translated as “bending back,” which can refer to “reflex,” as in a knee jerking when tapped, not what is meant here, or more generally is linked to “reflection,” as in an image being reflected, possibly back and forth many times as in the situation of two mirrors facing each other. The latter is closer to the focus here and is the type connected with self-referencing and all that it implies. Someone who made that link strongly was Douglas Hofstadter (1979) in his Gödel, Escher, Bach: An Eternal Golden Braid, and even more so later (Hofstadter 2006). For Hofstadter, reflexivity is linked to the foundations of consciousness through what he calls “strange loops” of indirect self-referencing, which he sees certain prints by Maurits C. Escher as highlighting, particularly his “Drawing Hands” and “Print Gallery,” with many commentators on reflexivity citing “Drawing Hands,” which shows two hands drawing each other (Rosser Jr. 2020b).Footnote 25 Hofstadter argues that the foundation for his theory is the Incompleteness Theorem of Gödel, with its deep self-referencing, along with certain pieces by J.S. Bach, as well as these prints by Escher.
The term has probably been most widely used, and with the greatest variety of meanings, in sociology (Lynch 2000). Its academic usage was initiated by the prominent sociologist Robert K. Merton (1938), who used it to pose the problem of sociologists thinking about how their studies fit into the broader social framework: how they themselves are influenced by that framework in terms of biases and paradigms, but also how their studies, and the way they conduct them, might reflect back to influence society. Among the sociologists the most radical uses of the concept involved sharp self-criticism wherein one deconstructs the paradigm and influences one is operating in to the point that one can barely do any analysis at all (Woolgar 1991), with many complaining that this leads to a nihilistic dead end. The earliest usages of the term by economists followed this particular strand, analyzing how particular economists operate within certain methodological frameworks, how they came to do so from broader societal influences, and how their work may then reflect back to influence society, sometimes even through specific policies or ways of gathering and reporting policy-relevant data (Hands 2001; Davis and Klaes 2003).
Merton (1948) would also use the idea to propose the self-fulfilling prophecy, a notion that has been widely applied in economics, as with the concept of sunspot equilibria (Azariadis 1981), with many seeing this as deriving originally from Keynes (1936, Chap. 12) and his analysis of financial market behavior based on the early twentieth century British newspaper beauty contests. In those contests newspapers would publish photos of young women and ask readers to rate them on their presumed beauty. The winner of such a contest was not the person who guessed which young woman was objectively the most beautiful, but rather which one would receive the most votes. This meant that a shrewd player of such a game was really trying to guess the guesses of the other players, with Keynes comparing this to financial markets, where the underlying fundamental of an asset is less important for its market value than what investors think it is. This led Keynes to note that this kind of reasoning can move to higher levels, trying to think what others think others think, and on to still higher levels in a potential infinite regress, a classic infinite reflection in a non-halting program. This beauty contest idea of Keynes has come to be viewed as a centerpiece of his philosophical view, implying ultimately not only reflexivity but complexity as well (Davis 2017).
Among the first to pick up on Keynes’s argument, apply it to self-fulfilling prophecies in financial markets, and also bring in reflexivity as relevant to this was George Soros (1987), who would later also argue that the analysis was part of complexity economics (Soros 2013). Soros has long argued that thinking about this beauty contest-inspired version of reflexivity has been key to his own decision-making in financial markets. He sees it as explaining boom and bust cycles in markets, as in the US housing bubble of the early 2000s, whose decline set off the Great Recession. He first got the term from being a student of Karl Popper’s in the 1950s (Popper 1959), with Popper also an influence on Hayek (1967) in connection with these ideas (Caldwell 2013). Thus the idea of reflexivity, with its links to arguments about incompleteness and the infinite regresses associated with self-referencing, has become highly influential among economists and financiers studying financial market dynamics and other related phenomena.
We now see the possibility of linking our major schools of complexity through the subtle strange loopiness involved in indirect self-referencing at the heart of a deeper form of reflexivity. The indirect self-referencing at the heart of Gödel’s incompleteness theorem is deeply linked to computational complexity in that it leads to the infinite do loops of the highest level of computational complexity in which a program never stops. The way out of incompleteness involves in effect what Davis and Klaes invoked: moving to a higher hierarchical level in which an exogenous agent or program determines what is true or false, although this opens the door to incoherence (Landini et al. 2020). The indirect self-referencing opens the door to dynamic complexity in its implications for market dynamics, with this also linking to hierarchical complexity as new levels of hierarchy can be generated. Let us consider briefly how this comes out of the fundamental Gödel (1931) theorem.
The Gödel theorem is really two theorems. The first one is the incompleteness one: any consistent formal system in which elementary arithmeticFootnote 26 can be carried out is incomplete; there are statements in the language of the formal system that can neither be proved nor disproved within the formal system. The second one addresses the problem of consistencyFootnote 27: for any consistent formal system in which elementary arithmetic can be carried out, the consistency of the formal system cannot be proved within the formal system itself. So consistency implies incompleteness, while any attempt to overcome incompleteness by moving to a higher level leaves one unable to prove the consistency of that higher level system, with both results arising from the paradoxes of (reflexive) self-referencing.
Hofstadter (2006) provides an excellent discussion of the nature of the indirectness involved in proving the main part of the theorem, which involves the use of “Gödel numbers.” These are numbers assigned to logical statements, and their use can lead to the creation of self-referencing paradoxical statements even within a system especially designed to avoid such self-referencing statements. The system that Gödel subjected to this treatment, eventually generating a statement equivalent to “This sentence is unprovable,” was the logical system developed by Whitehead and Russell (1910-13) specifically to provide a consistent formal foundation for mathematics without logical paradoxes. Russell in particular was much concerned about the possibility of paradoxes in set theory, such as those involving self-referencing sets. The classic problem was “Does the set of all sets that do not contain themselves contain itself?” A famous simple version of this involves “Who shaves the barber in a town where the barber only shaves those who do not shave themselves?” Both of these involve similar endless do-loops arising from their self-referencing. Whitehead and Russell attempted to eliminate these annoyances by developing the theory of types, which established hierarchies of sets in ways designed to avoid having them refer to themselves. But then Gödel pulled his trick of establishing his numbers, applying them to the system of Whitehead and Russell so as, through indirection, to generate a self-referencing statement involving a paradox unresolvable within the system. It is rather like how the hole Escher put in the middle of his “Print Gallery” allows the man to look at a print on a wall in a gallery of a city that contains the gallery in which he is standing looking at it.
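The mechanism of Gödel numbering can be shown in miniature. The following sketch, my own illustration of the device rather than Gödel’s actual coding, encodes a finite sequence of symbol codes as a single integer via prime-power exponents, the trick that lets statements about numbers become statements about statements:

```python
def primes(k):
    """Return the first k primes by trial division."""
    out, n = [], 2
    while len(out) < k:
        if all(n % p for p in out):
            out.append(n)
        n += 1
    return out

def godel_encode(seq):
    """Encode a sequence of positive integers as prod(p_i ** s_i)."""
    code = 1
    for p, s in zip(primes(len(seq)), seq):
        code *= p ** s
    return code

def godel_decode(code, length):
    """Recover the sequence from the exponents in the factorization."""
    seq = []
    for p in primes(length):
        e = 0
        while code % p == 0:
            code //= p
            e += 1
        seq.append(e)
    return seq
```

Because the encoding is invertible, arithmetic operations on the code correspond to syntactic operations on the formula it names, which is what allows a formula, through its number, to speak indirectly about itself.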
Thus it is not surprising that the problem of self-referencing has lain at the core of much of the thinking about reflexivity from an early point, and that this thinking took on a sharper edge when various figures thought about Gödel’s theorem, or even earlier about the paradoxes considered by Bertrand Russell. Linking this understanding to complexity provides a foundation for a reflexive complexity that encompasses all the major forms of complexity.
1.8 Further Observations
In computationally complex systems the problem of understanding them is related to logic, the problems of infinite regress and undecidability associated with self-referencing in systems of Turing machines. This can manifest itself as the halting problem, something that can arise even for a computer attempting to precisely calculate a dynamically complex object such as the exact shape of the Mandelbrot set (Blum et al. 1998). A Turing machine cannot fully understand a system in which its own decision-making is too crucially a part. However, knowledge of such systems may be gained by other means.
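The epistemic bite of the halting problem can be illustrated with a bounded simulation, a sketch of my own: running a program for a finite number of steps can confirm that it halts but can never rule halting out. The Collatz iteration is a convenient stand-in, since whether it halts for every starting value is itself an open question (the step counts below are the standard ones):

```python
import itertools

def collatz_steps(n, max_steps=1000):
    """Semi-decide halting of the Collatz iteration from n.

    Returns the number of steps needed to reach 1, or None if that does
    not happen within max_steps -- in which case we simply do not know
    whether it halts later or never.
    """
    for step in itertools.count():
        if step >= max_steps:
            return None  # cannot conclude either way
        if n == 1:
            return step
        n = 3 * n + 1 if n % 2 else n // 2
```

Raising `max_steps` converts some “unknown” verdicts into “halts,” but no finite bound ever converts one into “never halts”; that asymmetry is the practical face of undecidability.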
To the extent that models have axiomatic foundations rather than being merely ad hoc, which many of them ultimately are, these foundations are strictly within the non-constructivist, classical mathematical mode, assuming the Axiom of Choice, the Law of the Excluded Middle, and other hobby horses of the everyday mathematicians and mathematical economists. To the extent that they provide insight into the nature of dynamic economic complexity and the special problem of emergence (or anagenesis), they do not do so by being based on axiomatic foundationsFootnote 28 that would pass muster with the constructivists and intuitionists of the early and mid-twentieth century, much less their more recent disciples, who are following the ideal hope that “The future is a minority; the past and present are a majority,” to quote Velupillai (2005b, p. 13), himself paraphrasing Shimon Peres from an interview about the prospects for Middle East peace.
There is a considerable array of models available for contemplating or modeling emergent phenomena operating at different hierarchical levels. An interesting area in which to see which of these approaches proves most suitable may well be the study of the evolution of market processes as they themselves become more computerized. This is the focus of Mirowski (2007), who goes so far as to argue that markets are fundamentally algorithms. The simple posted-price spot market in which most people have traditionally bought things sits at the bottom of a Chomskyian hierarchy of complexity and self-referenced control. Just as newer algorithms may contain older algorithms within them, so newly emergent kinds of markets can contain and control the older kinds as they move to higher levels in this Chomskyian hierarchy. Futures markets may control spot markets, options markets may control futures markets, and the ever higher order of these markets and their increasing automation pushes the system towards the unreachable ideal of a full-blown Universal Turing Machine (Cotogno 2003).
Mirowski brings to bear more recent arguments in biology regarding coevolution, noting that the space in which agents and systems evolve itself changes with their evolution. To the extent that the market system increasingly resembles a gigantic assembly of interacting and evolving algorithms, both biology and the problem of computability will come to bear and influence each other (Stadler et al. 2001). In the end the distinction between the two may become irrelevant.
In the great contrast between computational and dynamic complexity, we see crucial overlaps involving how the paradoxes arising from the self-referencing underlying computational complexity can imply the emergence so deeply associated with dynamic complexity. These interrelations may become most manifest when contemplating the mirror world of reflexivity and its endless concatenations. These are among the many considerations that lie at the foundations of complexity economics.
Notes
- 1.
Velupillai (2011) has labeled this view of dynamic complexity as “Day-Rosser” complexity.
- 2.
Strictly speaking, this is incorrect. Goodwin (1947) showed such endogenous dynamic patterns in coupled linear systems with lags. Similar systems were analyzed by Turing (1952) in his paper that has been viewed as the foundation of the theory of morphogenesis, a complexity phenomenon par excellence. However, the overwhelming majority of such dynamically complex systems involve some nonlinearity, and the uncoupled normalized equivalent of the coupled linear system is nonlinear.
- 3.
This coinage came from Horgan (1997, Chap. 11), who sneeringly lumped the four C's together as "chaoplexity," which he considered to be an intellectual bubble or fad. Rosser Jr. (1999) argued that this was a coinage like "Impressionism," initially an insult but usable as a useful characterization.
- 4.
Arnol’d (1993) provides a clear discussion of the mathematical issues involved while avoiding the controversies.
- 5.
For further discussion of underlying mathematical controversies involving chaos theory, see Rosser Jr. (2000b, Mathematical Appendix).
- 6.
- 7.
It has often been claimed incorrectly that Schelling used a chess board for this study.
- 8.
Structural complexity appears in the end to amount to “complicatedness,” which Israel (2005) argues is merely an epistemological concept rather than an ontological one, with “complexity” and “complicatedness” coming from different Latin roots (complecti, “grasp, comprehend, or embrace” and complicare, “fold, envelop”), even if many would confuse the concepts (including even von Neumann 1966). Rosser Jr. (2004) argues that complicatedness as such poses essentially trivial epistemological problems, how to figure out a lot of different parts and their linkages.
- 9.
“Computable economics” was neologized by Velupillai in 1990 and is distinguished from “computational economics,” symbolized by the work one finds at conferences of the Association for Computational Economics and its journal, Computational Economics. The former focuses more on the logical foundations of the use of computers in economics while the latter tends to focus more on specific applications and methods.
- 10.
Another main theme of computable economics involves considering which parts of economic theory can be proved when such classical logical axioms as the Axiom of Choice and the Law of the Excluded Middle are relaxed. Under such constructive mathematics problems can arise for proving Walrasian equilibria (Pour-El and Richards 1979; Richter and Wong 1999; Velupillai 2002, 2006) and Nash equilibria (Prasad 2005).
- 11.
It should be understood that whereas on the one hand Kolmogorov’s earliest work axiomatized probability theory, his efforts to understand the problem of induction would lead him to later argue that information theory precedes probability theory (Kolmogorov 1983). McCall (2005) provides a useful discussion of this evolution of Kolmogorov’s views.
- 12.
Albin liked the example of the capital aggregation problem raised by Joan Robinson (1953-54) that in order to aggregate capital one needs to already know the marginal product of capital in order to determine the discount rate for calculating present values, while at the same time one already needs to know the value of aggregate capital in order to determine its marginal product. Conventional economics attempts to escape this potentially infinite do loop by simply assuming that all of these are conveniently simultaneously solved in a grand general equilibrium.
- 13.
Closely related would be the universal prior of Solomonoff (1964), which puts the MDL concept into a Bayesian framework. From this comes the rather neatly intuitive idea that the most probable state will also have the shortest algorithmic description. Solomonoff's work was developed independently of Kolmogorov's, drawing on the probability theory of Keynes (1921).
- 14.
The P = NP problem was first identified by John Nash Jr. (1955) in a letter to the US National Security Agency discussing encryption methods in cryptanalysis, which was classified until 2013. Nash said he thought it was true that P did not equal NP, but noted he was unable to prove it, and it remains unproven to this day.
- 15.
Ironically Brouwer’s original proof of his fixed point theorem relied on ZFC axioms, with him only providing an intuitionistic alternative much later (Brouwer 1952).
- 16.
- 17.
This term has been especially associated with Bak (1996) and his self-organized criticality, although he was not the first to discuss self-organization in these contexts.
- 18.
- 19.
In a related model, Holden and Erneux (1993) show that the systemic switch may take the form of a slow passage through a supercritical Hopf bifurcation, thus leading to the persistence for a while of the previous state even after the bifurcation point has been passed.
- 20.
Yet another approach involves the hypercycle idea due to Eigen and Schuster (1979), discussed in the next chapter.
- 21.
- 22.
The opposition to central planning and support for spontaneous emergence of market systems from the bottom up shows up in a long debate among philosophers regarding whether emergence only works bottom up or whether it can involve top to bottom causation. Van Cleve (1990) introduces supervention as allowing this top down causation in emergent systems, while Kim (1999) argues that emergent processes must be fundamentally bottom up. Lewis (2012) argues that Hayek moved toward the supervention view in his later writings that also emphasized group evolutionary processes (Rosser Jr. 2014b).
- 23.
See Halmos (1958) for how these theorems link measure theory to probability theory.
- 24.
Velupillai (2013, pp. 432–433, n8) shows that while most ergodic theory has followed a frequentist formulation, the Moscow School would draw on Keynes’s ideas in their approach to these issues.
- 25.
Examples of reflexivity in art are often thought to involve the Droste Effect, in which a work contains an image of itself within itself, clearly a matter of self-referencing. Among the earliest known examples is a painting by Giotto from 1320, The Stefaneschi Triptych, in which in the central panel Cardinal Stefaneschi is depicted kneeling before Saint Peter and presenting to him the triptych itself. Needless to say, even if the nested images cease to be depicted after some finite sequence, artworks exhibiting the Droste Effect imply an infinite regress of ever smaller images containing ever smaller images (Rosser Jr. 2020b).
- 26.
By “elementary arithmetic” is meant that which can be derived from Peano’s axiom set assuming standard logic of the Zermelo-Fraenkel type with the Axiom of Choice (ZFC).
- 27.
It should be noted that in his original theorem Gödel was only able to prove incompleteness for a limited form of ω-consistency. A proof for a more general form of consistency was provided by Rosser Sr. (1936), who used the “Rosser Sentence” (or “trick”): “If this sentence is provable, then there is a shorter proof of its negation.” This has led some to refer to the combined theorem as the “Gödel-Rosser Theorem.”
- 28.
While this movement focuses on refining axiomatic foundations, it ultimately seeks to be less formalistic and Bourbakian. This is consistent with the history of mathematical economics, which first moved towards a greater axiomatization and formalism within the classical mathematical paradigm, only to move away from it in more recent years (Weintraub 2002).
References
Aaronson, Scott. 2013. Why Philosophers Should Care about Computational Complexity. In Computability: Turing, Gödel, Church, and Beyond, ed. Jack Copeland, Carl J. Posy, and Oron Shagrir, 261–328. Cambridge, MA: MIT Press.
Abraham, Ralph H. 1985. Chaostrophes, Intermittency, and Noise. In Chaos, Fractals, and Dynamics, ed. P. Fischer and W.R. Smith, 3–22. New York: Marcel Dekker.
Abraham, Ralph, and Christopher D. Shaw. 1987. Dynamics: A Visual Introduction. In Self-Organizing Systems: The Emergence of Order, ed. F. Eugene Yates, 543–597. New York: Plenum Press.
Abraham, Ralph, Laura Gardini, and Christian Mira. 1997. Chaos in Discrete Dynamical Systems: A Visual Introduction in 2 Dimensions. New York: Springer.
Albin, Peter S. 1982. The Metalogic of Economic Predictions, Calculations and Propositions. Mathematical Social Sciences 3: 129–158.
Albin, Peter S. with Duncan K. Foley. 1998. Barriers and Bounds to Rationality: Essays on Economic Complexity and Dynamics in Interactive Systems. Princeton: Princeton University Press.
Allen, Timothy F.H., and Thomas W. Hoekstra. 1990. Toward a Unified Ecology. New York: Columbia University Press.
Alvarez, M. Carrión, and Dirk Ehnts. 2016. Samuelson and Davidson on Ergodicity: A Reformulation. Journal of Post Keynesian Economics 39: 1–16.
Arthur, W. Brian, Steven N. Durlauf, and David A. Lane. 1997a. Introduction. In The Economy as an Evolving Complex System II, ed. W. Brian Arthur, Steven N. Durlauf, and David A. Lane, 1–14. Reading, MA: Addison-Wesley.
Azariadis, Costas. 1981. Self-Fulfilling Prophecies. Journal of Economic Theory 25: 380–396.
Bacharach, Michael and Dale O. Stahl. 2000. Variable Frame Level-n Theory. Games and Economic Behavior 32, 220–246.
Bak, Per. 1996. How Nature Works: The Science of Self-Organized Criticality. New York: Copernicus Press for Springer-Verlag.
Bartholo, Robert S., Carlos A.N. Cosenza, Francisco A. Doria, and Carlos T.R. Lessa. 2009. Can Economic Systems be Seen as Computing Devices? Journal of Economic Behavior and Organization 70: 72–80.
von Bertalanffy, Ludwig. 1950. An Outline of General Systems Theory. British Journal of the Philosophy of Science 1: 114–129.
———. 1974. Perspectives on General Systems Theory. New York: Braziller.
Binmore, Ken. 1987. Modeling Rational Players, I. Economics and Philosophy 3: 9–55.
Binmore, Ken, and Larry Samuelson. 1999. Equilibrium Selection and Evolutionary Drift. Review of Economic Studies 66: 363–394.
Birkhoff, George D. 1931. Proof of the Ergodic Theorem. Proceedings of the National Academy of Sciences 17: 656–660.
Bishop, Errett A. 1967. Foundations of Constructive Analysis. New York: McGraw-Hill.
Blum, Lenore, Felipe Cucker, Michael Shub, and Steve Smale. 1998. Complexity and Real Computation. New York: Springer-Verlag.
Bogdanov, Aleksandr A. 1925-29. Tektologia: Vseobschaya Organizatsionnaya Nauka, Volumes I-III. 3rd ed. Leningrad-Moscow: Kniga.
Boltzmann, Ludwig. 1884. Über die Eigenschaften monocyklischer und anderer damit verwandter Systeme. Crelle’s Journal für die reine und angewandte Mathematik 100: 201–212.
Boulding, Kenneth E. 1978. Ecodynamics: A New Theory of Social Evolution. Beverly Hills: Sage.
Braudel, Fernand. 1967. Civilisation Matérielle et Capitalisme. Paris: Librairie Armand Colin. (English translation: Miriam Kochan. 1973. Capitalism and Material Life. New York: Harper and Row).
Brock, William A., and Cars H. Hommes. 1997. A Rational Route to Randomness. Econometrica 65: 1059–1095.
———. 1998. Heterogeneous Beliefs and Routes to Chaos in a Simple Asset Pricing Model. Journal of Economic Dynamics and Control 22: 1235–1274.
Brouwer, Luitzen E.J. 1908. De Onbetrouwbaarheid der Logische Principes. Tijdschrift voor wijsbegeerte 2: 152–158.
———. 1952. An Intuitionist Correction of the Fixed-Point Theorem on the Sphere. Proceedings of the Royal Society London 213: 1–2.
Caldwell, Bruce. 2004. Hayek’s Challenge: An Intellectual Biography of F.A. Hayek. Chicago: University of Chicago Press.
———. 2013. George Soros: Hayekian? Journal of Economic Methodology 20: 350–356.
Canning, David. 1992. Rationality, Computability, and Nash Equilibrium. Econometrica 60: 877–888.
Chaitin, Gregory J. 1966. On the Length of Programs for Computing Finite Binary Sequences. Journal of the ACM 13: 547–569.
———. 1987. Algorithmic Information Theory. Cambridge, UK: Cambridge University Press.
Chomsky, Noam. 1959. On Certain Formal Properties of Grammars. Information and Control 2: 137–167.
Church, Alonzo. 1936. A Note on the Entscheidungsproblem. Journal of Symbolic Logic 1: 40–41, correction 101–102.
van Cleve, J. 1990. Magic or Mind Dust: Panpsychism vs. Emergence. Philosophical Perspectives 4: 214–226.
Clower, Robert W., and Peter W. Howitt. 1978. The Transactions Theory of the Demand for Money: A Reconsideration. Journal of Political Economy 86: 449–465.
Cobb, L., P. Koppstein, and N.H. Chen. 1983. Estimation and Moment Recursion Relationships for Multimodal Distributions of the Exponential Family. Journal of the American Statistical Association 78: 124–130.
da Costa, Newton C.A., and Francisco A. Doria. 2005. Computing the Future. In Computability, Complexity and Constructivity in Economic Analysis, ed. K. Vela Velupillai, 15–50. Victoria: Blackwell.
———. 2016. On the O’Donnell Algorithm for NP Complete Problems. Review of Behavioral Economics 3: 221–242.
Cotogno, Paolo. 2003. Hypercomputation and the Physical Church-Turing Thesis. British Journal for the Philosophy of Science 54: 181–223.
Crutchfield, James P. 1994. The Calculi of Emergence: Computation, Dynamics and Induction. Physica D 75: 11–54.
Davidson, Paul. 1982-83. Rational Expectations: A Fallacious Foundation for Studying Crucial Economic Decision-Making Processes. Journal of Post Keynesian Economics 5: 182–198.
Davidson, Paul. 2015. A Rejoinder to O’Donnell’s Critique of the Ergodic/Nonergodic Approach to Keynes’s Concept of Uncertainty. Journal of Post Keynesian Economics 38: 1–18.
Davis, John B. 2017. The Continuing Relevance of Keynes’s Philosophical Thinking: Reflexivity, Complexity, and Uncertainty. Annals of the Fondazione Luigi Einaudi 51: 55–76.
Davis, John B., and Matthias Klaes. 2003. Reflexivity: Curse or Cure? Journal of Economic Methodology 10: 329–352.
Day, Richard H. 1994. Complex Economic Dynamics, Volume I: An Introduction to Dynamical Systems and Market Mechanisms. Cambridge, MA: MIT Press.
Dechert, W. Davis, ed. 1996. Chaos Theory in Economics: Methods, Models, and Evidence. Edward Elgar: Cheltenham.
Diaconis, Persi, and D. Freedman. 1986. On the Consistency of Bayes Estimates. Annals of Statistics 14: 1–26.
Diener, Marc, and Tim Poston. 1984. The Perfect Delay Convention. In Chaos and Order in Nature, ed. Hermann Haken, 2nd ed., 249–268. Berlin: Springer-Verlag.
Ehrenfest, Paul, and Tatiana Ehrenfest-Afanassjewa. 1911. Begriffliche Grundlagen der statistischen Auffassung in der Mechanik. In Encyklopädie der mathematischen Wissenschaften, Vol. 4, ed. F. Klein and C. Müller, 3–90. Leipzig: Teubner. (English translation, M.J. Moravcsik, 1959. The Conceptual Foundations of the Statistical Approach in Mechanics. Ithaca: Cornell University Press.).
Eigen, Manfred, and Peter Schuster. 1979. The Hypercycle: A Natural Principle of Self-Organization. Berlin: Springer-Verlag.
Forrester, Jay W. 1961. Industrial Dynamics. Cambridge, MA: MIT Press.
Gallavotti, G. 1999. Statistical Mechanics: A Short Treatise. Berlin: Springer-Verlag.
Garfinkel, Alan. 1987. The Slime Mold Dictyostelium as a Model of Self-Organization in Social Systems. In Self-Organizing Systems: The Emergence of Order, ed. F. Eugene Yates, 181–212. New York: Plenum Press.
Gode, D., and Shyam Sunder. 1993. Allocative Efficiency of Markets with Zero Intelligence Traders: Markets as a Partial Substitute for Individual Rationality. Journal of Political Economy 101: 119–137.
Gödel, Kurt. 1931. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik 38: 173–198.
Goodwin, Richard M. 1947. Dynamical Coupling with Especial Reference to Markets Having Production Lags. Econometrica 15: 181–204.
Haken, Hermann. 1983. Synergetics: An Introduction. Nonequilibrium Phase Transitions and Self-Organization in Physics, Chemistry, and Biology. 3rd ed. Berlin: Springer-Verlag.
———. 1996. The Slaving Principle Revisited. Physica D 87: 95–103.
Halmos, Paul R. 1958. Von Neumann on Measure and Ergodic Theory. Bulletin of the American Mathematical Society 64: 86–94.
Hands, D. Wade. 2001. Reflection without Rules: Economic Methodology and Contemporary Science. Cambridge, UK: Cambridge University Press.
Hartmann, Georg C., and Otto E. Rössler. 1998. Coupled Flare Attractors—A Discrete Prototype for Economic Modelling. Discrete Dynamics in Nature and Society 2: 153–159.
Hayek, Friedrich A. 1948. Individualism and Economic Order. Chicago: University of Chicago Press.
———. 1967. The Theory of Complex Phenomena. In Studies in Philosophy, Politics and Economics, 22–42. London: Routledge & Kegan Paul.
Hofstadter, Douglas R. 2006. I am a Strange Loop. New York: Basic Books.
Holden, Lisa, and Thomas Erneux. 1993. Understanding Bursting Oscillations as Periodic Slow Passage through Bifurcation and Limit Points. Journal of Mathematical Biology 31: 351–365.
Holling, C.S. 1992. Cross-Scale Morphology, Geometry, and Dynamics of Ecosystems. Ecological Monographs 62: 447–502.
Horgan, John. 1997. The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Age. Paperback ed. New York: Broadway Books.
Horwitz, Steven. 1992. Monetary Evolution, Free Banking and Economic Order. Boulder: Westfield Press.
Israel, Giorgio. 2005. The Science of Complexity: Epistemological Problems and Perspectives. Science in Context 18: 1–31.
Jantsch, Erich. 1982. From Self-Reference to Self-Transcendence: The Evolution of Self-Organization Dynamics. In Self-Organization and Dissipative Structures, ed. William C. Schieve and Peter M. Allen, 344–353. Austin: University of Texas Press.
Kauffman, Stuart A. 1993. The Origins of Order: Self-Organization and Selection in Evolution. Oxford: Oxford University Press.
Keynes, John Maynard. 1921. Treatise on Probability. London: Macmillan.
———. 1936. The General Theory of Employment, Interest and Money. London: Macmillan.
———. 1939. Professor Tinbergen’s Method. The Economic Journal 49: 558–568.
Kim, Jaegwon. 1999. Making Sense of Emergence. Philosophical Studies 95: 3–36.
Kindleberger, Charles P. 2001. Manias, Panics, and Crashes: A History of Financial Crises. 4th ed. New York: Basic Books.
Kleene, Stephen C. 1967. Mathematical Logic. New York: John Wiley & Sons.
Kleene, Stephen C., and Richard E. Vesley. 1965. Foundations of Intuitionistic Mathematics. Amsterdam: North-Holland.
Knight, Frank H. 1921. Risk, Uncertainty, and Profit. Boston: Hart, Schaffer, and Marx.
Kolmogorov, Andrei N. 1965. Three Approaches to the Quantitative Definition of Information. Problems of Information Transmission 1: 4–7.
———. 1983. Combinatorial Foundations of Information Theory and the Calculus of Probabilities. Russian Mathematical Surveys 38 (4): 29–40.
Koppl, Roger. 2006. Austrian Economics at the Cutting Edge. Review of Austrian Economics 19: 231–241.
———. 2009. Complexity and Austrian Economics. In Handbook of Complexity Research, ed. J. Barkley Rosser Jr., 393–408. Cheltenham: Edward Elgar.
Koppl, Roger, and J. Barkley Rosser Jr. 2002. All That I Have to Say Has Already Crossed Your Mind. Metroeconomica 53: 339–360.
Lachmann, Ludwig. 1986. The Market as an Economic Process. Oxford: Basil Blackwell.
Landini, Simone, Mauro Gallegati, and J. Barkley Rosser Jr. 2020. Consistency and Incompleteness in General Equilibrium Theory. Journal of Evolutionary Economics 30: 205–230.
Lavoie, Don. 1989. Economic Chaos or Spontaneous Order? Implications for Political Economy of the New View of Science. Cato Journal 8: 613–635.
Lawson, Tony. 1997. Economics and Reality. London: Routledge.
Leijonhufvud, Axel. 1993. Towards a Not-Too-Rational Macroeconomics. Southern Economic Journal 60: 1–13.
Lewes, George Henry. 1875. Problems of Life and Mind. London: Kegan Paul Trench Turbner.
Lewis, Alain A. 1985. On Effectively Computable Realizations of Choice Functions. Mathematical Social Sciences 10: 43–80.
———. 1992. On Turing Degrees of Walrasian Models and a General Impossibility Result in the Theory of Decision Making. Mathematical Social Sciences 24: 143–171.
Lewis, Paul. 2012. Emergent Properties in the Work of Friedrich Hayek. Journal of Economic Behavior and Organization 82: 368–378.
Lipman, Barton L. 1991. How to Decide How to Decide How to…: Modeling Limited Rationality. Econometrica 59: 1105–1125.
Loasby, Brian J. 1976. Choice, Complexity and Ignorance. Cambridge, UK: Cambridge University Press.
Lorenz, Edward N. 1963. Deterministic Non-Periodic Flow. Journal of Atmospheric Science 20: 130–141.
Lorenz, Hans-Walter. 1992. Multiple Attractors, Complex Basin Boundaries, and Transient Motion in Deterministic Economic Systems. In Dynamic Economic Models and Optimal Control, ed. Gustav Feichtinger, 411–430. Amsterdam: North-Holland.
Lynch, Michael. 2000. Against Reflexivity as an Academic Virtue and Source of Privileged Knowledge. Theory, Culture, and Society 17: 26–54.
Malinvaud, Edmond. 1966. Statistical Methods for Econometrics. Amsterdam: North-Holland.
Markose, Sheri M. 2005. Computability and Evolutionary Complexity: Markets as Complex Adaptive Systems. Economic Journal 115: F159–F192.
May, Robert M. 1976. Simple Mathematical Models with Very Complicated Dynamics. Nature 261: 459–467.
Maymin, Philip Z. 2011. Markets are Efficient if and only if P = NP. Algorithmic Finance 1: 1–11.
McCall, John J. 2005. Induction. In Computability, Complexity and Constructivity in Economic Analysis, ed. K. Vela Velupillai, 105–131. Victoria: Blackwell.
McCauley, Joseph L. 2004. Dynamics of Markets: Econophysics and Finance. Cambridge, UK: Cambridge University Press.
McCauley, Joseph L. 2005. Making Mathematics Effective in Economics. In Computability, Complexity and Constructivity in Economic Analysis, ed. K. Vela Velupillai, 51–84. Victoria: Blackwell.
Meadows, Donella H., Dennis L. Meadows, Jorgen Randers, and William W. Behrens III. 1972. The Limits to Growth. New York: Universe.
Menger, Carl. 1871/1981. Principles of Economics. Translated into English by James Dingwall and Bert F. Hoselitz. New York: New York University Press.
Menger, Carl. 1883/1985. Investigations into the Method of the Social Sciences with Special Reference to Economics. Translated into English by Francis J. Nock. New York: New York University Press.
Menger, Carl. 1892. On the Origin of Money. Economic Journal 2: 239–255.
Merton, Robert K. 1938. Science and the Social Order. Philosophy of Science 5: 523–537.
———. 1948. The Self-Fulfilling Prophecy. Antioch Review 8: 183–210.
Mill, John Stuart. 1843. A System of Logic: Ratiocinative and Inductive. London: Longmans Green.
Minsky, Hyman P. 1972. Financial Instability Revisited: The Economics of Disaster. Reappraisal of the Federal Reserve Discount Mechanism 3: 97–136.
Mirowski, Philip. 2002. Machine Dreams: Economics Becomes a Cyborg Science. Cambridge, UK: Cambridge University Press.
———. 2007. Markets Come to Bits: Evolution, Computation, and Markomata in Economic Science. Journal of Economic Behavior and Organization 63: 209–242.
Mirowski, Philip, and Edward Nik-Kah. 2017. Knowledge We Have Lost in Information: The History of Information in Modern Economics. New York: Oxford University Press.
Moore, Christopher. 1990. Undecidability and Unpredictability in Dynamical Systems. Physical Review Letters 64: 2354–2357.
———. 1991a. Generalized Shifts: Undecidability and Unpredictability in Dynamical Systems. Nonlinearity 4: 199–230.
———. 1991b. Generalized One-Sided Shifts and Maps of the Interval. Nonlinearity 4: 737–745.
Morgan, C. Lloyd. 1923. Emergent Evolution. London: Williams and Norgate.
Morgenstern, Oskar. 1935. Vollkommene Voraussicht und wirtschaftliches Gleichgewicht. Zeitschrift für Nationalökonomie 6: 337–357.
Nash, John F., Jr. 1955. “Letter to National Security Agency.” nsa.gov/Portals/70/documents/news-features/declassified-documents/nash-letters/nash_letters1.pdf.
von Neumann, John. 1932. Proof of the Quasi-Ergodic Hypothesis. Proceedings of the National Academy of Sciences 18: 263–266.
———. 1966. Theory of Self-Reproducing Automata, edited and compiled by Arthur W. Burks. Urbana: University of Illinois Press.
Nicolis, John S. 1986. Dynamics of Hierarchical Systems: An Evolutionary Approach. Berlin: Springer-Verlag.
Nicolis, Grégoire, and Ilya Prigogine. 1977. Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order through Fluctuations. New York: Wiley-Interscience.
Nyarko, Yaw. 1991. Learning in Mis-Specified Models and the Possibility of Cycles. Journal of Economic Theory 55: 416–427.
O’Donnell, M. 1979. A Programming Language Theorem that is Independent of Peano Arithmetic. Proceedings of the 11th Annual ACM Symposium on the Theory of Computation: 179–188.
O’Donnell, Rod M. 2014-15. A Critique of the Ergodic/Nonergodic Approach to Uncertainty. Journal of Post Keynesian Economics 37: 187–209.
O’Driscoll, Gerald P., and Mario J. Rizzo. 1985. The Economics of Time and Ignorance. Oxford: Basil Blackwell.
Poincaré, Henri. 1890. Sur le Problème des Trois Corps et les Équations de la Dynamique. Acta Mathematica 13: 1–270.
Popper, Karl. 1959. The Logic of Scientific Discovery. London: Hutchinson.
Pour-El, Marian Boykan, and Ian Richards. 1979. A Computable Ordinary Differential Equation which Possesses no Computable Solution. Annals of Mathematical Logic 17: 61–90.
Prasad, Kislaya. 1991. Computability and Randomness of Nash Equilibria in Infinite Games. Journal of Mathematical Economics 20: 429–442.
———. 2005. Constructive and Classical Models for Results in Economics and Game Theory. In Computability, Complexity and Constructivity in Economic Analysis, ed. K. Vela Velupillai, 132–147. Victoria: Blackwell.
Pryor, Frederic L. 1995. Economic Evolution and Structure: The Impact of Complexity on the U.S. Economic System. New York: Cambridge University Press.
Puu, Tönu. 1990. A Chaotic Model of the Business Cycle. Occasional Paper Series in Socio-Spatial Dynamics 1: 1–19.
Radner, Roy S. 1992. Hierarchy: The Economics of Managing. Journal of Economic Literature 30: 1382–1415.
Richter, M.K., and K.V. Wong. 1999. Non-Computability of Competitive Equilibrium. Economic Theory 14: 1–28.
Rissanen, Jorma. 1978. Modeling by Shortest Data Description. Automatica 14: 465–471.
———. 1986. Stochastic Complexity and Modeling. Annals of Statistics 14: 1080–1100.
———. 1989. Stochastic Complexity in Statistical Inquiry. Singapore: World Scientific.
———. 2005. Complexity and Information in Modeling. In Computability, Complexity and Constructivity in Economic Analysis, ed. K. Vela Velupillai, 85–104. Victoria: Blackwell.
Robinson, Joan. 1953-54. The Production Function and the Theory of Capital. Review of Economic Studies 21: 81–106.
Robinson, Abraham. 1966. Non-Standard Analysis. Amsterdam: North-Holland.
Rosser, J. Barkley, Jr. 1991. From Catastrophe to Chaos: A General Theory of Economic Discontinuities. Boston: Kluwer.
———. 1994. Dynamics of Emergent Urban Hierarchy. Chaos, Solitons & Fractals 4: 553–562.
———. 1999. On the Complexities of Complex Economic Dynamics. Journal of Economic Perspectives 13 (4): 169–182.
———. 2000a. From Catastrophe to Chaos: A General Theory of Economic Discontinuities: Mathematics, Microeconomics, Macroeconomics, and Finance, Volume II. Boston: Kluwer.
———., ed. 2000b. Complexity in Economics, Volumes I-III: The International Library of Critical Writings in Economics, 174. Cheltenham: Edward Elgar.
———. 2001a. Alternative Keynesian and Post Keynesian Perspectives on Uncertainty and Expectations. Journal of Post Keynesian Economics 23: 545–566.
———. 2001b. Complex Ecologic-Economic Systems and Environmental Policy. Ecological Economics 17: 23–37.
———. 2004. Epistemological Implications of Economic Complexity. Annals of the Japan Association for Philosophy of Science 31 (2): 3–18.
———. 2007. The Rise and Fall of Catastrophe Theory Applications in Economics: Was the Baby Thrown Out with the Bathwater? Journal of Economic Dynamics and Control 31: 3255–3280.
———. 2009a. Computational and Dynamic Complexity in Economics. In Handbook of Complexity Research, ed. J. Barkley Rosser Jr., 22–35. Cheltenham: Edward Elgar.
———. 2010a. Constructivist Logic and Emergent Evolution in Economic Complexity. In Computability, Constructive and Behavioural Economic Dynamics: Essays in Honour of Kumaraswamy (Vela) Velupillai, ed. Stefano Zambelli, 184–197. London: Routledge.
———. 2012a. Emergence and Complexity in Austrian Economics. Journal of Economic Behavior and Organization 81: 122–128.
———. 2012b. On the Foundations of Mathematical Economics. New Mathematics and Natural Computation 8: 53–72.
———. 2014a. The Foundations of Economic Complexity in Behavioral Rationality in Heterogeneous Expectations. Journal of Economic Methodology 21: 308–312.
———. 2016a. Reconsidering Ergodicity and Fundamental Uncertainty. Journal of Post Keynesian Economics 38: 331–354.
———. 2020a. Incompleteness and Complexity in Economic Theory. In Unraveling Complexity: The Life and Work of Gregory Chaitin, ed. Shyam Wuppuluri and Francisco Antonio Doria, 345–367. Singapore: World Scientific.
———. 2020b. Reflections on Reflexivity and Complexity. In History, Methodology and Identity for a 21st Social Economics, ed. C. Wade Hands Wilfred Dolfsma and Robert McMaster, 67–86. London: Routledge.
———. 2020c. The Minsky Moment and the Revenge of Entropy. Macroeconomic Dynamics 24: 7–23.
Rosser, J. Barkley, Jr., Marina V. Rosser, Steven J. Guastello, and Robert W. Bond. 2001. Chaotic Hysteresis in Systemic Economic Transformation. Nonlinear Dynamics, Psychology, and Life Sciences 5: 345–368.
Rosser, J. Barkley, Jr., Ehsan Ahmed, and Georg C. Hartmann. 2003a. Volatility via Social Flaring. Journal of Economic Behavior and Organization 50: 77–87.
Rosser, J. Barkley, Jr., Marina V. Rosser, and Mauro Gallegati. 2012. A Minsky-Kindleberger Perspective on the Financial Crisis. Journal of Economic Issues 45: 449–458.
Rosser, J. Barkley, Sr. 1936. Extensions of Some Theorems of Gödel and Church. Journal of Symbolic Logic 1: 87–91.
Samuelson, Paul A. 1969. Classical and Neoclassical Theory. In Monetary Theory: Readings, ed. Robert W. Clower, 182–194. Harmondsworth: Penguin.
Scarf, Herbert E. 1973. The Computation of Economic Equilibria. New Haven: Yale University Press.
Schelling, Thomas C. 1971. Dynamic Models of Segregation. Journal of Mathematical Sociology 1: 143–186.
Shackle, George L.S. 1972. Epistemics and Economics: A Critique of Economic Doctrines. Cambridge, UK: Cambridge University Press.
Shannon, Claude E. 1948. A Mathematical Theory of Communication. Bell System Technical Journal 27: 379–423, 623–656.
Shinkai, S., and Y. Aizawa. 2006. The Lempel-Ziv Complexity of Non-Stationary Chaos in Infinite Ergodic Cases. Progress of Theoretical Physics 116: 503–515.
Simon, Herbert A. 1962. The Architecture of Complexity. Proceedings of the American Philosophical Society 106: 467–482.
Solomonoff, R.J. 1964. A Formal Theory of Inductive Inference, Parts I and II. Information and Control 7: 1–22, 224–254.
Soros, George. 1987. The Alchemy of Finance. Hoboken: Wiley & Sons.
———. 2013. Fallibility, Reflexivity, and the Human Uncertainty Principle. Journal of Economic Methodology 20: 309–329.
Specker, E.P. 1953. The Axiom of Choice in Quine’s New Foundations for Mathematical Logic. Proceedings of the National Academy of Sciences U.S.A. 39: 972–975.
Spencer, Herbert. 1867-1874. Descriptive Sociology: Encyclopedia of Social Facts, Representing the Constitution of Every Type and Grade of Human Society, Past and Present, Stationary and Progressive, Classified and Tabulated for Easy Comparison and Convenient Studies of the Relations of Social Phenomena. London: Williams and Norgate.
Stadler, Bärbel M.R., Peter F. Stadler, Günter P. Wagner, and Walter Fontana. 2001. The Topology of the Possible: Formal Spaces Underlying Patterns of Evolutionary Change. Journal of Theoretical Biology 213: 241–274.
Stodder, James P. 1995. The Evolution of Complexity in Primitive Economies: Theory. Journal of Comparative Economics 20: 1–31.
Tarski, Alfred. 1949. On Essential Undecidability (Abstract). Journal of Symbolic Logic 14: 75–76.
Thom, René. 1975. Structural Stability and Morphogenesis: An Outline of a Theory of Models. Reading: Benjamin.
Tinbergen, Jan. 1937. An Econometric Approach to Business Cycle Problems. Paris: Hermann.
———. 1940. On a Method of Statistical Business Research: A Reply. Economic Journal 50: 41–54.
Tsuji, Marcelo, Newton C.A. da Costa, and Francisco A. Doria. 1998. The Incompleteness of Theories of Games. Journal of Philosophical Logic 27: 553–568.
Turing, Alan M. 1936. On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42: 230–265.
———. 1937. Computability and λ-Definability. Journal of Symbolic Logic 2: 153–163.
———. 1952. The Chemical Basis of Morphogenesis. Philosophical Transactions of the Royal Society B 237: 37–72.
Uffink, Jos. 2006. A Compendium of the Foundations of Classical Statistical Physics. Institute for History and Foundations of Science, Utrecht University.
Vaughn, Karen I. 1999. Hayek’s Thought and Market Order as an Instance of Complex Adaptive Systems. Journal des économistes et des études humaines 9: 241–246.
Velupillai, Kumaraswamy. 2000. Computable Economics. Oxford: Oxford University Press.
Velupillai, K. Vela. 2002. Effectivity and Constructivity in Economic Theory. Journal of Economic Behavior and Organization 49: 307–325.
———. 2005a. Introduction. In Computability, Complexity and Constructivity in Economic Analysis, ed. K. Vela Velupillai, 1–14. Victoria: Blackwell.
———. 2005b. A Primer on the Tools and Concepts of Computable Economics. In Computability, Complexity and Constructivity in Economic Analysis, ed. K. Vela Velupillai, 148–197. Victoria: Blackwell.
———. 2006. Algorithmic Foundations of Computable General Equilibrium Theory. Applied Mathematics and Computation 179: 360–369.
———. 2008. Uncomputability and Undecidability in Economic Theory. Department of Economics Working Paper 806, University of Trento, Italy.
———. 2009. A Computable Economist’s Perspective on Computational Complexity. In Handbook of Research on Complexity, ed. J. Barkley Rosser Jr., 36–83. Cheltenham: Edward Elgar.
———. 2011. Nonlinear Dynamics, Complexity, and Randomness: Algorithmic Foundations. Journal of Economic Surveys 25: 547–568.
———. 2012. Taming the Incomputable, Reconstructing the Nonconstructive, and Deciding the Undecidable in Mathematical Economics. New Mathematics and Natural Computation 8: 5–51.
———. 2013. Post Keynesian Precepts for Nonlinear, Endogenous, Nonstochastic, Business Cycle Theories. In Handbook of Post Keynesian Economics, ed. Geoffrey C. Harcourt and Peter Kreisler, 415–442. Oxford: Oxford University Press.
Vriend, Nicolaas J. 2002. Was Hayek an Ace? Southern Economic Journal 68: 811–840.
Wagner, Richard E. 2010. Mind, Society, and Human Action. New York: Routledge.
Weintraub, E. Roy. 2002. How Economics Became a Mathematical Science. Durham: Duke University Press.
Whitehead, Alfred North, and Bertrand Russell. 1910-13. Principia Mathematica, Volumes I-III. London: Cambridge University Press.
Wiener, Norbert. 1948. Cybernetics: Or Control and Communication in the Animal and Machine. Cambridge, MA: MIT Press.
Wigner, Eugene. 1960. The Unreasonable Effectiveness of Mathematics in the Natural Sciences. Communications on Pure and Applied Mathematics 13: 1–14.
Wolfram, Stephen. 1984. Universality and Complexity in Cellular Automata. Physica D 10: 1–35.
Woolgar, Steve. 1991. The Turn to Technology in Social Studies of Science. Science, Technology & Human Values 16: 20–50.
Zeeman, E. Christopher. 1974. On the Unstable Behavior of the Stock Exchanges. Journal of Mathematical Economics 1: 39–44.
© 2021 Springer Nature Switzerland AG
Rosser, J.B. (2021). Logical and Philosophical Foundations of Complexity. In: Foundations and Applications of Complexity Economics. Springer, Cham. https://doi.org/10.1007/978-3-030-70668-5_1
Print ISBN: 978-3-030-70667-8
Online ISBN: 978-3-030-70668-5