1 Introduction

This chapter is not intended to be an original report of recent results and developments in agent-based simulations (ABSs) and agent-based computational economics (ABCE) in particular. Instead, it introduces beginners in the field to the basic facts about why ABCE is required now and what types of tasks and possibilities it opens up for the development of economics. It also aims to explain to economists who are not specialists in the field how ABCE relates to old theoretical problems that arose many years ago.

ABCE, and ABS in general, place a heavy burden on beginner economists, who must acquire computer programming abilities and skills. Beginners in ABCE generally do not have enough time to survey the development of economics over the last half century. Many specialists have begun to use ABCE without any deep reflection on why ABCE and ABS in general are required as a new method in economics and how they relate to the older methods of economics. This topic is rarely addressed as a main issue in ABCE. However, knowledge of the history of economics is important for situating ABCE research projects correctly in a wider perspective. This chapter provides a brief overview of the history of modern economics, mainly from the 1970s to the present, focusing on problems left unsolvable within the framework of standard economics.

This chapter will also be of interest to economists who are not specialists in ABCE. These economists sometimes show keen interest in ABCE. They have come to know several models on various topics and believe that computer simulation may illustrate certain aspects of economic behavior, but they do not usually imagine that ABCE provides a new tool in economics comparable to mathematics, and that this new tool may mark a breakthrough and open a new scope for economics.

Computer simulation is a new tool in economics. This does not mean that simulation has totally replaced the two older methods: the literal or conceptual method and the mathematical method. All three methods are complementary. The same researchers may use all three methods in appropriate fields and for appropriate tasks.Footnote 1 However, ABCE is not simply one more method added to standard economics. In fact, it has the task of remedying a malaise that has prevailed in economics for a long time.

The ill of modern economics lies in the fact that it attacks only problems that one can formalize and analyze by mathematical methods. The typical framework is that of equilibrium and maximization. This framework has dominated mathematical analysis. A monumental achievement in this direction was the work of Arrow and Debreu [5] on the existence of general competitive equilibrium. As a framework for the market economy, general equilibrium theory (GE theory) contained serious defects, but it became an ideal model for mathematical economics. The term “theoretical” became a synonym for “mathematical,” and the term “mathematical economics” was replaced by the term “theoretical economics.” The main tendency of “theoretical economics” was to follow the track of GE theory. People searched for problems that they could formulate and solve mathematically. They did not examine the validity of the formulations. If they could formulate and solve a problem, they were satisfied to interpret this fact as a demonstration that the formulation was right.

The 1950s were a time of euphoria for mathematical economics and for GE theory. People believed in the promise of economics. They imagined that mathematical economics plus the use of computers (meaning econometrics) might turn economics into an exact science like physics. This general mood continued almost through the 1960s. At the same time, some economists began to reconsider the possibilities of mathematical economics and acknowledged that mathematical methods have a fundamental weakness in treating economic phenomena. In the mid-1960s, a continuing debate arose that is now called the Cambridge capital controversy [9, 33]. It revealed that a serious logical problem lies at the root of the simple expression of the production function. Economists became more reflective and critical about the state of economics. Maurice Dobb [20] called the 1960s “a decade of high criticism.”

Many criticisms of the basis of economics appeared in the first half of the 1970s. Many economists, including leaders of mainstream economics, posed a question on the very basis of economic science and the usefulness of the mathematical method.Footnote 2 Many asked what was wrong with economics and called for a paradigm change. In 1973, Frank Hahn [29], one of the leaders of general equilibrium analysis, described the mood of the time as “the winter of our discontent.”

Those of younger generations may have difficulty imagining the atmosphere of that time. It is helpful to remember the shock and disarray among economists just after the bankruptcy of Lehman Brothers. Paul Krugman, the Nobel Laureate in Economics for 2008 and famous New York Times columnist, was famously cited as stating that “most work in macroeconomics in the past 30 years has been useless at best and harmful at worst.”Footnote 3 The expressions used in the 1970s were not as strong and catchy as Krugman’s statement, but the reflections on the state of economics were more profound and deeply considered. Many economists questioned the very framework of economics based on the concepts of equilibrium and maximization.

In the mid-1970s, the atmosphere changed. The Vietnam War (or the American War in Vietnam) ended. Protest songs gave way to songs of self-confinement. A shift of interest occurred in the theoretical fields, too. Rational expectations became a fad. Game theory enjoyed a second boom. The winter of our discontent ended suddenly. Inquiries into the theoretical framework were discarded. In the mid-1990s, Arrow [6, p.451] still viewed GE theory as “the only coherent account of the entire economy.”

The economists who were critical of the main tendency of “theoretical economics” reacted rather irrationally. Many of them, from Marxists to ontological realists, blamed mathematics as the main vehicle that had led economics to its present deplorable state. They thus confused theory with mathematics. What we should blame is not mathematics but the theoretical framework. Mathematics is a tool. It is a powerful tool, but not the only one. The stagnation of economics arose partly from the underdevelopment of new tools suitable for analyzing complex economies. ABCE is an effort to develop such new analytical tools.

ABCE provides a new analytical tool, but the tool itself is not the final target. ABCE has a larger mission: to reconstruct economics from the very foundations of the discipline. The reconstruction of economics requires the development of a new and powerful method, perhaps as powerful as mathematics, that is suitable for analyzing the much wider range of situations found in the real economy.

It is important for those who work with ABCE to understand this mission. A strong magnetic field exists that attracts every effort toward the neoclassical traditions. There is no tabula rasa in economics (or in any other science). Researchers who are not aware of this field cannot escape it. They need to situate their research in the long history of theoretical polemics around GE theory. They should also know what has been left unsolved and how most of the questions were deformed by the “theoretical necessity of the theory.”

Therefore, my discussion goes back to the first half of the 1970s, when reflections erupted among many eminent and leading economists. I even go back further, to the discussions that paved the way for the eruption of the 1970s. I also summarize how the criticisms of the 1970s were received and what types of attempts were made in response. Some of this history is well known among heterodox economists. Young economists rarely have time to learn this sinuous history, however, and ABCE practitioners who started in information engineering have practically no chance to learn about these questions. The present chapter should therefore be useful for all types of ABCE specialists.

This chapter is organized as follows. The tour of the past is composed of two parts. Section 1.2 starts with an introduction that shows how a critical mood permeated economics in the 1970s. The subsequent subsections examine three major controversies that led to the critical mood of the first half of the 1970s. All three controversies have a common point: the theoretical problems they raised were unsolvable within the general equilibrium framework of economics. Section 1.3 examines the later developments of the GE framework after the 1970s and the various attempts to extend and rescue the framework. My conclusion is simple. The GE framework is in a scientific crisis and needs a paradigm change. A comprehensive paradigm shift requires a new research tool, and agent-based simulation is a promising candidate. Section 1.4 discusses the significance and possibilities of agent-based simulation for the future of economics.

2 General Crisis of Economics: State of Economics During and Before the First Half of the 1970s

Let me start my discussion with the state of economics in the 1970s. I started economics in the 1970s, but that is not the reason I chose this period as the starting point. For most young economists, the 1970s are the old days that they know only through the history of economics. Many of them may not know, and perhaps cannot even imagine, the atmosphere of the time. Mainstream economics often ignores this period, and when it does comment on it, there is a tendency to underrate the meaning of the discussions that took place. The typical attitude is something like this: people raised many problems and difficulties in the 1960s and 1970s, but economics has overcome them and developed a great deal since that time.

The fact is that some problems remained unsolved. The only difference between the first and second halves of the 1970s is that people ceased to question those difficult problems, which might require the reconstruction or even destruction of existing frameworks. After 1975, a strong tendency appeared among young economists to believe that the methodology debate was fruitless and that it was wise to distance themselves from it. However, understanding the criticism presented in the first half of the 1970s is crucial when one questions the fundamental problems of economics and aims at a paradigm change.

The first half of the 1970s was indeed a key period when two possibilities were open. Many eminent economists talked about the crisis of economics. The list of interventions is long. It was common for presidential addresses to take a severely critical tone; examples include Leontief [49], Phelps Brown [61], Kaldor [40], Worwick [94], and others.Footnote 4 Other important interventions were Kornai [44], J. Robinson [67, 68], and Hicks [38]. These eminent economists raised many points of contention and asked for a change in the general direction of economic thinking. Leontief warned against relying too much upon governmental statistics. Kornai recommended an anti-equilibrium research program. Kaldor argued that the presence of increasing returns to scale made equilibrium economics irrelevant to real economic dynamics. Robinson asked economists to take the role of time into consideration. The alternatives were almost obvious. The choice was either to keep the equilibrium framework or to abandon it in favor of constructing a new framework.

In terms of philosophy of science, the question was this: Is economics now undergoing a scientific crisis that requires a paradigm change? Or is it in a state that can be remedied by modifications and amendments to the present framework? These are difficult questions to answer. The whole of one’s research life may depend on how one answers them. To search for answers to these deep questions, it is necessary to examine the logic of economics, how some of the debates took place, and how they proceeded and ended.

2.1 Capital Theory Controversies

Let us start with the famous Cambridge capital controversy [9, 33]. The controversy concerned how to quantify capital. Cambridge economists in England argued that capital is only measurable when distribution (e.g., the rate of profit) is determined. This point became a strong base of criticism against the neoclassical economics of the 1960s.

The 1950s were a hopeful time for theoretical economics. In 1954, Arrow and Debreu [5] provided a rigorous mathematical proof of the existence of competitive equilibrium for a very wide class of economies. Many other mathematical economists reported similar results with slightly different formulations and assumptions. As Axel Leijonhufvud [48] caricatured in his “Life Among the Econ,” people placed mathematical economics at the top of the economic sciences and supposed that it must reign as queen. The 1950s were also a time when computers became available for economic studies, and Lawrence Klein succeeded in building a concrete econometric model. Many people believed that mathematical economics plus computers would open a new golden age in economics, just as had happened in physics at the time of Isaac Newton and afterward. In the 1960s, a new trend emerged. Hope changed to doubt and disappointment.

Some of the doubts were theoretical. The most famous debate of the time was the controversy on capital theory, which took the form of a duel between Cambridge in England and Cambridge, Massachusetts, in the United States. In the standard formulation of the time, the marginal productivity of capital, i.e., the marginal increase in product obtained from one additional unit of capital, determined the profit rate. This was the very foundation of the neoclassical theory of distribution. The counterpart of this assertion was the marginal theory of wage determination, which dictates that the marginal productivity of labor determines the wage rate. The exhaustion theorem, based on a production function, reinforced these propositions. A production function represents the set of possible combinations of inputs and outputs that can appear in production. A production function that satisfies a standard set of assumptions is customarily called the Solow-Swan type. The assumptions include the following conditions: (1) The production function is in fact a function and is defined at all nonnegative points. The first half of this condition means that the products or outputs of production are determined once the inputs of production are given.Footnote 5 (2) The production function is smooth in the sense that it is continuously differentiable with respect to each variable. (3) The production function is homogeneous of degree 1. This means that the production function f satisfies the equation \(f(tx,ty,\ldots,tz) = t\,f(x,y,\ldots,z)\) for all nonnegative t.

The exhaustion theorem holds for all Solow-Swan-type production functions. If a production function f is continuously differentiable and homogeneous of degree 1, then the adding-up theorem

$$\displaystyle{f(K,L) = rK + wL}$$

holds, where

$$\displaystyle{r = \partial f/\partial K\quad \mathrm{and}\quad w = \partial f/\partial L.}$$

The proof of the theorem is simple. Using the differentiability of the function, one obtains the formula by differentiating the identity \(f(tK,tL) = t\,f(K,L)\) with respect to t and setting t = 1, i.e., by the chain rule for composite functions. The adding-up theorem indicates that all products can be distributed among the contributors to production as either dividends or wages. No profit remains for the firm. This is what the exhaustion theorem claims, and it is the basis of the neoclassical theory of distribution.
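
To make the theorem concrete, consider the Cobb-Douglas case, a standard textbook illustration rather than part of the historical argument here. For

$$\displaystyle{f(K,L) = AK^{\alpha }L^{1-\alpha },\qquad r = \partial f/\partial K =\alpha AK^{\alpha -1}L^{1-\alpha },\qquad w = \partial f/\partial L = (1-\alpha )AK^{\alpha }L^{-\alpha },}$$

so that \(rK + wL =\alpha f(K,L) + (1-\alpha )f(K,L) = f(K,L)\): the whole product is exhausted by the payments to capital and labor.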

In this formulation, capital is a mass that is measurable as a quantity before prices are determined. Let us call this conception “the physical mass theory.” Samuelson called it the “Clark-like concept of aggregate capital.”Footnote 6 The story began when a student of Cambridge University named Ruth Cohen questioned how techniques could be arranged in an increasing order of capital/labor ratios when reswitching was possible. Reswitching is a phenomenon in which a production process that becomes unprofitable when one increases the profit rate can become profitable again when one increases the profit rate further. Piero Sraffa [89] gave an example of reswitching in his book.
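
A minimal numerical sketch may make reswitching concrete. The two techniques below are hypothetical and are not Sraffa’s own example; each produces one unit of output, and the labor costs are compounded at the profit rate r up to the date of output.

    # A hypothetical two-technique reswitching example (illustrative numbers only).
    # Technique A: 7 units of labor applied 2 periods before the output.
    # Technique B: 2 units applied 3 periods before plus 6 units 1 period before.
    # Wage = 1 per unit of labor; costs are compounded at the profit rate r.

    def cost_A(r):
        return 7 * (1 + r) ** 2

    def cost_B(r):
        return 2 * (1 + r) ** 3 + 6 * (1 + r)

    for r in [0.0, 0.25, 0.75, 1.25, 1.75]:
        cheaper = "A" if cost_A(r) < cost_B(r) else "B"
        print(f"r = {r:4.2f}  cost_A = {cost_A(r):6.2f}  cost_B = {cost_B(r):6.2f}  cheaper: {cheaper}")

    # Technique A is cheaper for r < 0.5, B for 0.5 < r < 1.0, and A again for
    # r > 1.0; the switch points solve 7(1+r)^2 = 2(1+r)^3 + 6(1+r).

The same technique is abandoned and then readopted as the profit rate rises, which is precisely what defeats any attempt to arrange techniques in a single increasing order of capital/labor ratios.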

Joan Robinson of Cambridge University shone a spotlight on this phenomenon. If reswitching occurs, the physical mass theory of capital is not tenable. Robinson claimed that the standard theory of distribution is constructed on a flawed base. Samuelson and Levhari of MIT (in Cambridge, Massachusetts) tried to defend the standard formulation by claiming that the reswitching phenomenon is an exceptional case that can be safely excluded from normal cases. They formulated a “non-switching” theorem for the case of a non-decomposable production coefficient matrix and presented a proof of the theorem [52]. As was soon determined, the theorem was false (see Samuelson et al. [72]).Footnote 7 In his “A Summing Up,” P.A. Samuelson admitted that “[reswitching] shows that the simple tale told by Jevons, Bohm-Bawerk, Wicksell, and other neoclassical writers … cannot be universally valid.”

The symposium in 1966 was a showdown, and the Cambridge, England, group seemed to win the debate. For a few years after the symposium, people refrained from the open use of production functions (with a single capital quantity as their argument). However, some peculiar things happened, and the 1980s saw a revival of the Solow-Swan-type production function, as if the Cambridge capital controversy had never occurred.

The resurgence occurred in two areas: one was the real business cycle theory and the other was the endogenous growth theory. Both of them became very influential among mainstream economists. The real business cycle (RBC) theory adopted as its main tool the dynamic stochastic general equilibrium (DSGE) theory. DSGE was an innovation in the sense that it includes expectation and stochastic (i.e., probabilistic) external shocks. Yet the mainframe of DSGE relied on a Solow-Swan-type production function. The endogenous growth theory succeeded in modeling the effect of common knowledge production. It also relied on a Solow-Swan-type production function. Its innovation lay in the introduction of knowledge as an argument of the production function. In this peculiar situation, as Cohen and Harcourt [15] put it, “contributors usually wrote as if the controversies had never occurred.” At least in North American mainstream economics, the capital controversy fell completely into oblivion.Footnote 8

How could this situation take place? One may find a possible answer in Samuelson’s 1962 paper [71], written in the first stage of the controversy. Samuelson dedicated the paper to Joan Robinson on the occasion of her visit to MIT. In it he proposed the notion of a surrogate production function. Samuelson himself later rejected this concept, though it is said that he eventually resumed his former position. The surrogate production function, however, is not our topic. At the beginning of the paper, Samuelson compared two lines of research. One is a rigorously constructed theory that does not use any “Clark-like concept of aggregate capital.” (The argument K in a production function is nothing other than capital in the physical mass theory.) The other line of research is analysis based on “certain simplified models involving only a few factors of production.” The rigorous theory “leans heavily on the tools of modern linear and more general programming”; Samuelson proposed calling it “neo-neoclassical” analysis. In contrast, more “simple models or parables do,” he argued, “have considerable heuristic value in giving insights into the fundamentals of interest theory in all its complexities.”

Mainstream economists seem to have adopted Samuelson’s double-tracked research program. The capital controversy revealed that there is a technical and conceptual problem in the concept of capital. This anomaly occurs in special cases of combinations of production processes. While simple models may not reflect such a detail, they give us insights into the difficult problem; their heuristic value is tremendous. Burmeister [13] boasted of this. In fact, he asserted that RBC theory, with its DSGE model,Footnote 9 and endogenous growth theory are evidence of the fecundity of the Solow-Swan-type production function. He blamed its critics, stating that they had been unable to make any fundamental progress since the capital controversy. In his assessment, “mainstream economics goes on as if the controversy had never occurred. Macroeconomics textbooks discuss ‘capital’ as if it were a well-defined concept, which it is not except in a very special one-capital-good world (or under other unrealistically restrictive conditions). The problems of heterogeneous capital goods have also been ignored in the ‘rational expectations revolution’ and in virtually all econometric work” [13, p.312].

Burmeister’s assessment is correct. It reveals well the mood of mainstream economists in the 1990s and the 2000s just before the bankruptcy of Lehman Brothers. This mood was spreading all over the world. Olivier Blanchard [11] stated twice in his paper that “[t]he state of macro is good.” Unfortunately for Blanchard, the paper was written before the Lehman collapse and published after the crash.

Of course, after the Lehman collapse, the atmosphere changed radically. Many economists and supporters of economics such as George Soros started to rethink economics.Footnote 10 A student movement, the Rethinking Economics network, was started in 2012 in Tübingen, Germany, and has spread worldwide. The mission of the organization is to “diversify, demystify, and reinvigorate economics.” The students who launched the network argue that something is wrong with mainstream economics and call for plurality in economics education. It became evident that an abundance of papers does not indicate true productivity in economics. We should develop a new economics, and we need a new research apparatus. ABCE can serve as such an apparatus. This is the main message of this chapter.

Blanchard [11] emphasized the “convergence in vision” (Section 2) and in methodology (Section 4) in recent macroeconomics. The term “New Consensus Macroeconomics” frequently appears in newspapers and journals. This does not mean, however, that macroeconomics has come closer to the truth. It only means that economists’ field of vision has become narrower. Students are revolting against this contraction of vision.

2.2 Marginal Cost Controversy

The capital theory controversy concerned macroeconomics. Although it is not as famous as the capital theory controversy, another controversy erupted just after World War II in the United States. It concerned microeconomics. The controversy questioned the shape of cost functions and the relevance of marginal analysis. It is now called the marginalist controversy [35].Footnote 11

R.A. Lester [50] started the controversy in 1946. Lester was a labor economist, and minimum wage legislation was his concern. He employed the questionnaire method. One of his questions was this: What factors have generally been the most important in determining the volume of employment in firms during peacetime? Out of 56 usable replies, 28 (50 %) rated market demand as the most important factor (with 100 % weight) in determining the volume of employment. For the other 28 firms, the average weight given to market demand was 65 %. Only 13 replies (23 %) included wages among the factors considered.

The equality of marginal product and price was the very basis of the neoclassical theory of the firm, and it was this condition that determined the volumes of production and employment. Other questions revealed facts unfavorable to marginal analysis. Many firms did not calculate the marginal cost at all. The average cost function was not U-shaped, as the standard theory usually assumed. It was reasonable to suppose that the marginal cost either remained constant over a wide range of production volumes or decreased until the capacity limit was reached. Combining personal observations and informal communications, Lester argued that standard marginal analysis had little relevance in determining the volume of production. He also questioned whether the marginal productivity of labor determines wages. This was a scandal among neoclassical economists.

F. Machlup [55] was the first to respond to Lester’s attack. He wrote a long paper that was published in the same volume as Lester’s (but in a different issue). He was an acting editor of the American Economic Review (AER) and had a chance to read the papers submitted to AER. Machlup argued that the marginal theory is the foundational principle of economics and that criticism of this basic principle requires a thorough understanding of economic theory. He claimed that economics (in a narrow sense) is a science that explains human conduct with reference to the principles of maximizing satisfaction or profit. In view of this definition, he argued, “any deviations from the marginal principle would be extra-economic.” He also argued that it is inappropriate to challenge the marginal theory of value using the questionnaire method. Machlup’s reaction to Lester reminds me of two books that are closely related to Austrian economics. The first is L. Robbins [65], and the second is L. von Mises [59]. Robbins [65, p.16] gave a famous definition of economics as follows: “Economics is the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses.” This definition is frequently cited even today. Von Mises preferred the term “praxeology” to economics. He believed that praxeology is a theoretical and systematic science and claimed that “[i]ts statements and propositions are not derived from experience. They are, like those of logic and mathematics, a priori” [59, 1, II. 8]. Machlup held the same apriorism as Robbins and von Mises. This makes it easy to understand why Machlup reacted so vehemently to empirical research that raised doubts about marginal analysis. The two antagonists had very different views of what economic science is and ought to be.

In the following year, AER published Lester’s answer to Machlup’s criticisms, Machlup’s rejoinder to the answer, and a critical comment by G.L. Stigler [90]. Hansen’s paper [32] was sympathetic to Lester, although the main subject matter was Keynes’ theory of employment. At the end of 1947, Eiteman’s short paper [21] appeared in AER, and in 1948, R. Gordon’s paper [26], which was also critical of the standard theory, followed. Eiteman’s intervention raised a new series of debates about the pros and cons of the marginal theory. Articles from R.B. Bishop [10] and W.W. Haines [30] also appeared in AER. In December of that year, H. Apel [4] entered the debate from the standpoint of a defender of the traditional theory. In the following year, Lester [51] and Haines [31] exchanged criticisms.

Three years later, Eiteman and Guthrie [22] published the results of a more complete survey. To respond to the criticisms made by the many defenders of marginal theory, they conducted a carefully organized questionnaire survey and gathered a large number of responses. They posed their questions only after explaining the research intentions and the meaning of each question, to forestall the criticism that respondents did not understand what was being asked. Eiteman and Guthrie briefly and clearly explained the meaning of average cost. They then showed a set of cost curves in figures and asked which shape the cost functions of the respondents’ firms followed.

The report described the results in detail. Of the 1,082 products on which they obtained answers, only 52 answers corresponded to the five figures that reflected the neoclassical theory of the firm. The sixth figure, in which the average cost decreased until it reached a point very close to the lowest cost point and then increased a bit afterward, accounted for 381 products. The seventh figure, in which the average cost decreased until it reached the capacity limit, accounted for 636 products, or 59 % of the answers. The case of the sixth figure was rather favorable to the anti-marginalist claims, though it left room for objections from marginalists. The answers for the seventh figure, however, numbered close to 6 out of 10. This showed that a majority of the firms were not obeying the rule advanced by the marginalists.

This reasoning can be demonstrated by a simple calculation. The marginalist principle assumes that, given the market price, firms choose the production volume (or supply volume) at the point where they can maximize their profit. A simple calculation shows that the marginal cost should then be equal to the price, i.e., m(x) = p, at the point where the profit is maximal. Here, the function m(x) is defined as the marginal cost at the production volume x. The result that Eiteman and Guthrie obtained implies that it is impossible for this condition to be satisfied.

This logical relation can be derived as follows. Let the function f(x) be the total cost at the production volume x; the average cost function a(x) is then expressed as f(x)/x, and the marginal cost function m is given by m(x) = f′(x). The following equation holds:

$$\displaystyle{ a'(x) =\{\, f(x)/x\}' =\{ m(x)x - f(x)\}/x^{2}. }$$
(1.1)

If m(x) = p, then the right-hand side of equation (1.1) is equal to \(\{\,p \cdot x - f(x)\}/x^{2}\), which is the profit divided by \(x^{2}\). This means that if firms are making a profit in the ordinary state of operations, then the left-hand side of equation (1.1), a′(x), must be positive. In other words, if the marginalist theory is right, then the average cost must be rising. What Lester found and Eiteman and Guthrie confirmed was that the average cost decreases at the normal level of production. Lester was right when he concluded that the marginalist theory of the firm contains a serious flaw.
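
A concrete cost function makes the contradiction plainer. The following specification is an illustration of our own, not one taken from the surveys: suppose a firm has overhead cost F > 0 and a constant unit variable cost c, so that

$$\displaystyle{f(x) = F + cx,\qquad m(x) = c,\qquad a(x) = F/x + c,\qquad a'(x) = -F/x^{2} < 0.}$$

The average cost decreases at every production volume, exactly as in the seventh figure of Eiteman and Guthrie. If such a firm set its price at p = m(x) = c, its profit would be \(p \cdot x - f(x) = -F < 0\); any profitable price requires p > c, and then the marginalist condition p = m(x) can never hold.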

In the face of this uncomfortable fact, two economists who believed in marginalism rose to defend the theory: A.A. Alchian [3] and Milton Friedman [24]. Alchian’s paper appeared not in AER but in the Journal of Political Economy, and it was published prior to Eiteman and Guthrie’s final report. Alchian partly accepted the contentions of Lester and other anti-marginalists that factory directors did not even know the exact value of the marginal cost and did not much care to behave according to the marginalist rule. From this fallback position, Alchian developed an astute argument that reconciled the new findings with the marginalist principle. He admitted that some firms may not be producing at the volume where they achieve maximal profit. However, he went on to state that, in the long term, firms that are not maximizing their profit will be defeated by competition and ousted from the market. As a result of this competition for survival, firms with maximizing behavior will prevail.

Alchian’s paper [3] is often cited as the first to introduce the logic of evolution into economic analysis. Indeed, it is a seminal paper in evolutionary economics. However, we should also note that the simple argument borrowed from Alchian contains two false claims. First, it is not true that competition, even where it exists, necessarily leads to maximizing behavior. It is possible that the evolutionary selection process remains in a suboptimal state for a long time. Second, the marginalist rule gives maximal profit only when a particular condition is satisfied. Indeed, the marginalist rule implicitly assumes that firms can sell as much as they want at the given market price. If this is true, total sales equal p ⋅ x, where p is the market price and x is the volume of production, which then equals the quantity sold. Then, if f is the total cost function, the profit is given by the expression p ⋅ x − f(x). If the function f is differentiable, the maximum is attained only at a point where

$$\displaystyle{ p = f'(x) = m(x). }$$
(1.2)

If this equation is satisfied at a point and the marginal cost is increasing at that point, the maximum profit is obtained when the firm operates at volume x. This is what the marginal principle indicates. However, this argument includes a crucial misconception. Firms normally face limits in demand, and the marginal cost remains constant over a wide range of production volumes. What happens when they cannot sell as much as they want? In that case, p ⋅ x would not be the actual sales, and formula (1.2) would not give the profit-maximizing point. The marginalist rule gives the maximum profit only in a particular situation, but that particular situation is extremely rare, and wise firms adopt rules other than the marginalist rule. Alchian was wrong to forget this crucial point.
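
A small numerical sketch, with numbers chosen only for illustration, shows what happens under a demand ceiling: profit rises up to the ceiling and falls beyond it, so the best volume is the one demand requires, and the condition p = m(x) never becomes the relevant one.

    # Illustrative numbers only: price p = 10, unit variable cost c = 6,
    # overhead F = 100, and a demand ceiling d = 80 units.
    p, c, F, d = 10.0, 6.0, 100.0, 80

    def profit(x):
        sold = min(x, d)               # the firm cannot sell more than demand d
        return p * sold - (F + c * x)  # revenue minus total cost f(x) = F + c*x

    for x in [20, 40, 60, 80, 100, 120]:
        print(f"x = {x:3d}  profit = {profit(x):7.1f}")

    # Profit peaks at x = d = 80; the marginal cost (c = 6) never equals p = 10,
    # yet the firm's best rule is simply to produce what demand requires.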

The second person who rose to defend the marginalist principle was Milton Friedman [24]. Citing Popper’s thesis on the impossibility of confirming scientific statements, Friedman went a step further. He argued that propositions have positive meaning when they are falsifiable, and that a statement is scientifically valuable when it seems unlikely to be true at first examination. Friedman argued as follows. Trees develop branches and leaves as if they were maximizing sunlight reception; it is unlikely that the trees plan to achieve that. Likewise, many economic assumptions are not realistic at all. However, if one supposes that people act as if they were maximizing their profits and utilities, one can obtain a good prediction of their actions. This is the reason that the maximization principle works, and the principle is all the more valuable when it seems more unrealistic.

Friedman totally ignores the fact that science is a system of propositions and that the propositions of this system should be logically consistent with each other. Many economic assumptions are observable, and one can determine whether they are true. The proposition contained in an assumption is a predictive claim with the same status as what Friedman calls a prediction. If assumptions turn out to be false, they should be replaced by new assumptions that are consistent both with observations and with the propositions of the system. Friedman denies one of the most important factors that led the modern sciences to their success: the consistency and coherence of a science, or at least of a part of a science. Modern science developed on the basis of experiments, and logical consistency helped very much in developing it. Friedman denied this important maxim of the modern sciences. It is true that the sciences have faced phases of inconsistency between observations and theories, but science developed by trying to regain consistency, not simply by abandoning it.

Friedman’s arguments were extremely dogmatic and apologetic. Popper argued that science develops when someone finds a new phenomenon that the old system of science cannot explain and when the discoverer or some other person finds a new theory (i.e., a new system of concepts and propositions) that is consistent with the new discovery. Friedman pretended to rely on Popper and betrayed him in content. It is quite strange that Friedman named his methodology “positivist.” It is more reasonable to abandon the old marginalist principle in favor of a new principle or principles that are consistent with the new observations. Alchian’s idea is applicable at this level. Economic science evolves. The consistency of principles and observations is one of the motivating forces that drive economics to develop.Footnote 12

There is a profound reason that the marginalists could not adopt such a flexible attitude. A stronger motive drove them: the “theoretical necessity” of the theory (I use this phrase in a pejorative way). In other words, the framework they had chosen forced them to cling to marginalism even when they faced facts that contradicted their analysis. This is the coupling of equilibrium and maximization. How this happens is explained in the next section. Two important concepts are defined here in preparation. A firm is in increasing returns to scale when its average cost is decreasing, and it is in decreasing returns to scale when its average cost is increasing. Lester and Eiteman confirmed that most firms operate in the increasing returns-to-scale regime, whereas the marginal theory of value supposes the decreasing returns-to-scale regime. These are two conflicting conceptions of the conditions of production, known as the laws of returns.

2.3 “Empty Boxes” Controversy and Sraffa’s Analysis on Laws of Returns

There was a precursor to the marginalist controversy. As early as 1922, J.H. Clapham, the first professor of economic history at Cambridge, wrote a paper titled “Of Empty Economic Boxes” [14]. In the same year, A.C. Pigou, also a professor of economics at Cambridge, wrote a “Reply” [62] to Clapham. Two years later, D. Robertson published a paper titled “Empty Boxes” [66], and Pigou commented on it [63]. Robertson described the debate between Clapham and Pigou as “a battle of giants.” This debate (and Robertson’s intervention) is sometimes called the “empty boxes” controversy.

Clapham [14] criticized the concepts of increasing and decreasing returns as useless. One can classify industries into these two types of returns, but the classes are empty boxes with no empirical or theoretical basis. He also pointed out that a conceptual problem lay in the notion of increasing returns. Alfred Marshall, the real founder of the English neoclassical school, knew these concepts well and was aware of the problem. Increasing returns inside firms were incompatible with a competitive market. Marshall therefore excluded the internal economy (his name for increasing returns within a firm) and confined increasing returns to the external economy. An external economy appears as an increase in returns for all firms in an industry when the total scale of production increases.

The fundamental idea of neoclassical economics is simple. It is based on the assumption that the best method of economic analysis is to investigate equilibrium. Marshall preferred to analyze partial equilibrium. Leon Walras formulated the concept of general equilibrium (GE). An economy is, by definition, in GE when the demand and supply of all commodities are equal and all agents are maximizing their objectives (utility or profit). The basic method was to search for prices that satisfy these conditions. Marshall, who was a close observer of economic reality, never believed that GE was a good description of reality, but he could not present a good and reasonable explanation of why partial equilibrium analysis is much more realistic than the GE framework.

In both frameworks of equilibrium, general or partial, increasing returns were a problem. In 1926, Piero Sraffa published an article titled “On Laws of Returns under Competitive Conditions” [88]. He knew both analytical schemes: general equilibrium and partial equilibrium. He did not mention any of the people involved in the empty boxes controversy. Whether he knew of it or not, the controversy prepared readers to examine Sraffa’s new paper closely. Sraffa addressed mainly the Marshallian tradition, but his logic was applicable to the Walrasian framework as well.

Sraffa examined the logical structure of the equilibrium theory in a rather sinuous way. He showed first that the laws of returns, whether decreasing or increasing, have no firm grounds. The explanations given in Marshall’s textbook are motivated more by the “theoretical necessity” of the theory than by observations of actual firms. The law of decreasing returns was rarely observed in modern industry, and the law of increasing returns was incompatible with the conditions of a competitive economy. As a conclusion, Sraffa suggested that, as a first approximation, firms operate under constant returns.

This simple observation implies dire consequences for economics. As seen in the previous subsection, firms cannot determine their supply volume on the basis of the equation p = m when the marginal cost remains almost constant. This undermines the very concept of the supply function, which is defined on the basis of an increasing marginal cost. Neoclassical economics is founded on the concepts of supply and demand functions. If one of the two collapses, the whole framework collapses.

Sraffa’s conclusion was simple; he suggested a radical reformulation of economic analysis. He observed that the chief obstacle, when a firm wants to increase the volume of its production, does not lie in the internal conditions of production but “in the difficulty of selling the larger quantity of goods without reducing the price, or without having to face increased marketing expenses” [88, p.543]. Each firm, even one subjected to competitive conditions, faces its own demand, and this forms the chief obstacle that prevents it from increasing its production.

Sraffa proposed a true revolution in economic analysis, but it simply meant a return to the common sense of businesspeople.

First, he recommended changing the concept of competition. The neoclassical theory of competition supposed that (1) competing producers cannot affect market prices and (2) competing producers operate in circumstances of increasing costs. On these two points, Sraffa emphasized that “the theory of competition differs radically from the actual state of things” [88, p. 542]. Many, if not all, firms set their product prices, yet they compete with each other fiercely. Most firms operate with constant or decreasing unit costs once overhead is taken into account. The theoretical concept of competition was indeed radically different from actual competition.Footnote 13

Second, as mentioned above, it was not the rise of production costs that prevented firms from expanding their production. Without reducing prices or paying more marketing costs, they cannot expect to sell more than they actually do. Put another way, firms produce as much as the demand that is expressed (or expected) for their products. Based on this observation, we may establish the principle that firms produce as much as demand requires.Footnote 14

This was really a revolution. Before Sraffa pointed it out, all economists implicitly supposed that firms could sell as much of their products as they wanted at the market price. The concept of the supply function depends on this assumption. The supply function of an industry is the sum of the individual firms’ supply functions. The supply function of a firm is, by definition, the volume it wants to offer to the market at a given system of prices. This concept implies that the firm has, for each price system, a supply volume that it is willing to sell but beyond which it does not want to increase its offer. The marginalist rule (formula (1.2) in the previous subsection) is fulfilled only if (a) firms are producing in conditions of increasing costs and (b) firms can sell as much of their products as they want. Sraffa rejected these two assumptions, observing closely what was happening in the market economy.

As Robertson [66] attested, many economists knew that a majority of firms produce in a state of decreasing costs (or increasing returns, in our terms). More precisely, unit cost is the sum of two parts: variable cost and overhead cost per unit. Variable costs are normally proportional to the volume of production, whereas overhead cost per unit decreases when the volume of production expands. Consequently, unit costs normally decrease. The major results that Lester and Eiteman discovered were, in fact, confirmations of this. The vehement reaction from the marginalists testifies to how difficult these simple facts were to digest.

At the time he wrote the paper, Sraffa might not have had any clear intent to pursue this revolutionary destruction. In the last half of the paper, he discussed various aspects of price determination and the degree of monopoly. After publishing this paper, however, Sraffa kept silent, except for a few papers, notably a discussion of Hayek’s theory of interest. Not only was he busy with the preparation of the collected works of Ricardo, but he also did not know how to proceed. He moved slowly but deeply. More than 30 years later, in 1960, he finally published a small book [89] with a rather long title: Production of Commodities by Means of Commodities, subtitled Prelude to a Critique of Economic Theory.

Between 1926 and 1960, the theoretical landscape of economics changed greatly. Indeed, these 30 years were the most fruitful period of mathematical economics. The first move occurred in the 1930s in Vienna. Scholars including Karl Menger, the son of Carl Menger, the founder of Austrian economics, began inquiring into the positive solvability of the systems of equations that appeared in economics. Before that, people had been satisfied with counting the number of equations and the number of variables and checking whether the two coincided. Now, they asked whether there was a nonnegative system of solutions. However, it was a turbulent period. The Nazis invaded Austria in 1938, and many intellectuals were forced to escape from Vienna; many of them moved to Britain and then to the United States. After World War II, the United States became the center of mathematical economics. In 1954, Arrow and Debreu published their seminal article on the existence of competitive equilibrium [5]. Many other related contributions appeared around this period. Arrow and Debreu’s theory was beautiful as a formulation and perfect as mathematics.

In view of this development, Sraffa’s concern lay outside the main current. His thought was, however, deep enough to undermine the very basis of the now mathematically complete general equilibrium theory. The next section examines what types of problems the GET contains as an economic formulation. It then traces the development of equilibrium theory after the 1970s and returns to the question of why equilibrium analysis was doomed to fail.

3 Possibilities and Limits of General Equilibrium: State of Economics After the 1970s

After the 1970s, many macroeconomic theories took the form of general equilibrium theory. We may ask one question here. Are they really general equilibrium theories? Many models pretend to be so. They are, in the sense that they deal with all major aspects of the economy. They are not, in the sense that (in most cases) they assume one good and a single representative agent for producers and consumers. A typical case is dynamic stochastic general equilibrium theory.Footnote 15

3.1 Assumptions of Arrow and Debreu’s Formulation

There are several versions of general equilibrium theory (GET) . Arrow and Debreu’s formulation was accepted as the standard model of the GET. Because of its generality and elegance, Arrow and Debreu’s formulation was superior to all other models proposed at that time. Morishima [60] objected to this from an economic point of view, but he remained in the minority.

Arrow and Debreu’s theory assumes a very general situation. It assumes an economy with many consumers or households, many firms or producers, and many goods and services. Each consumer possesses his/her own preference, expressed by a smooth, convex, and non-satiable utility function. It was assumed that preferences are independent of the preferences and consumption of others. Each firm is expressed by a production possibility set, which represents the technology of the firm. Each individual possesses an initial endowment and satisfies a subsistence condition. In addition to endowments in nature, individuals possess shares of firms. With some assumptions on the shape of production possibility sets, Arrow and Debreu proved the existence of a competitive equilibrium.

The generality of the model was important. The modern market economy is a system composed of an enormous number of people and commodities. GET was conceived as a unique theory that explains theoretically how this enormous system works. This is the reason that, after many years of critical reflection, Arrow [6, p. 451] claimed that the GET remained “the only coherent account of the entire economy.”

Most of the conditions assumed were very general and seemed harmless. However, the beautiful formulation hides big problems. Objections to Arrow and Debreu’s GET were numerous. As it became a kind of central dogma of theoretical economics, it attracted many criticisms. We may group them into two categories. One contains criticism of the unrealistic assumptions of the model. The other concerns interpretations of the model.

Concerning the assumptions of Arrow and Debreu [5], the criticisms centered on two parts:

  1. Preferences

  2. The production possibility set

Many in radical economics have argued that individuals’ preferences depend on each other. They have emphasized the endogenously evolving nature of preferences. The dynamic and interpersonal character of preferences points to the need for agent-based simulations (ABSs) and may provide a good theme for ABS. Yet this is a weak criticism. If we make one or two minor changes to the formulation of preferences, Arrow and Debreu’s framework can easily accommodate these objections.

More fundamental and fatal flaws hide behind the assumption that people can find a maximal solution. To define the demand function, it is supposed that consumers maximize their utility under the condition of budget constraints. Let us examine this point in detail.

Let u be the utility function. A consumer with a budget B maximizes

$$\displaystyle{ u(x_{1},x_{2},\ldots,x_{N}) }$$
(1.3)

under the condition that

$$\displaystyle\begin{array}{rcl} & & x_{1}p_{1} + x_{2}p_{2} + \cdots + x_{N}p_{N} \leq B, \\ & & x_{1} \geq 0,\,x_{2} \geq 0,\,\ldots,\,x_{N} \geq 0. {}\end{array}$$
(1.4)

Here, \(p = (p_{1},p_{2},\ldots,p_{N})\) is a price vector. Let us suppose that all \(p_{k}\) are positive for simplicity of explanation. Then, the set \(\Delta \) of points \(x = (x_{1},x_{2},\ldots,x_{N})\) that satisfy condition (1.4) is closed and bounded. If the function u is continuous on this bounded closed domain, then by Weierstrass’s extreme value theorem for functions of several variables, u attains a maximal value \(v = u(z_{1},z_{2},\ldots,z_{N})\) at some point \(z = (z_{1},z_{2},\ldots,z_{N})\). In mathematical terminology, v is the maximal value, and z is a maximal solution. Evidently, the maximal value is unique for any maximization problem, whereas maximal solutions may not be unique. If the utility function satisfies the usual conditions, the set of maximal solutions is a closed, convex, and bounded set. The demand function is a correspondence between \(p = (p_{1},p_{2},\ldots,p_{N})\) and the set of all maximal solutions. This correspondence is upper hemicontinuous.Footnote 16 Then, Kakutani’s fixed-point theorem gives the existence of an equilibrium.
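
To see what such a demand correspondence looks like in the simplest case, consider the textbook Cobb-Douglas example; this illustration is ours and is not part of Arrow and Debreu’s argument. For the utility function

$$\displaystyle{u(x_{1},\ldots,x_{N}) =\sum _{i=1}^{N}\alpha _{i}\log x_{i},\qquad \alpha _{i} > 0,\quad \sum _{i=1}^{N}\alpha _{i} = 1,}$$

the maximization under condition (1.4) has the unique closed-form solution \(x_{i} =\alpha _{i}B/p_{i}\), so the correspondence reduces to a continuous demand function. The difficulties discussed below arise precisely because such closed-form solutions are the exception rather than the rule.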

What this formulation neglects is the cost of the consumers’ calculation. If we interpret the above maximization problem as an integer problem, i.e., a problem that seeks solutions whose components are all integers, we do not depart much from real business, because most exchanges take place by counting units of commodities. Let us also pose a simplifying assumption: let the utility function u be linear with integer coefficients. This interpretation and specification reveal a hidden difficulty behind the above simple maximization problem. Indeed, the integer maximization problem with a linear constraint is what is called the “knapsack problem” in the field of computational complexity [75, §6, pp.90–91]. We can solve this problem easily for some special instances.Footnote 17 An example is the case where \(p_{1} = p_{2} = \cdots = p_{N}\); then the problem is simply to find the biggest coefficient of the linear function u. To solve the problem in general, however, the procedure becomes much longer, and it normally requires computation time that grows asymptotically in proportion to \(2^{N}\). The exponential function increases rapidly. Even for a rather small number of commodities N, the calculation becomes practically impossible because it takes too much time. The use of computers is not very helpful, for it only raises the tractable N by less than 100. The following is an example of the estimated time needed to solve an integer problem on, say, a personal computer (Table 1.1); of course, the time depends on many factors, including the algorithm used and the speed of the computer, and the table is just an indication of how rapidly the computation time increases.

Table 1.1 Computation time increases with the number of commodities
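
A minimal sketch of the brute-force computation behind such estimates is given below. It restricts each commodity to a purchase of zero or one unit (a 0/1 knapsack, a simplification of the integer problem discussed above) and enumerates all \(2^{N}\) admissible bundles; the utility coefficients and prices are arbitrary illustrative numbers.

    # Brute-force 0/1 knapsack: enumerate all 2**N candidate bundles.
    # Utility coefficients and prices are arbitrary illustrative numbers.
    from itertools import product

    def best_bundle(u, p, budget):
        best_value, best_x = float("-inf"), None
        for x in product((0, 1), repeat=len(u)):            # 2**N candidates
            if sum(xi * pi for xi, pi in zip(x, p)) <= budget:
                value = sum(xi * ui for xi, ui in zip(x, u))
                if value > best_value:
                    best_value, best_x = value, x
        return best_value, best_x

    u = [100, 49, 49, 30, 27]   # coefficients of the linear utility function
    p = [10, 5, 5, 4, 3]        # prices
    print(best_bundle(u, p, budget=10))   # -> (100, (1, 0, 0, 0, 0))

    # The loop runs 2**N times: about 10**6 bundles for N = 20, 10**12 for N = 40,
    # and 10**18 for N = 60 -- the explosion that Table 1.1 is meant to convey.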

In economics as taught in schools, the number of commodities is always 2 or 3. As an illustration, this is justified: when one wants to draw a figure on paper, this sort of simplification is inevitable. However, a real economy includes an enormous number of commodities. We have no detailed statistics about the number of commodities. The Japan Standard Commodity Classification contains 13,757 items at its finest level (the six-digit classification, 1990 revision), and even this classification is not sufficient to specify a commodity. A standard convenience store alone deals with around 5,000 items. For a country like Japan, it is not exorbitant to assume that there are more than 100 billion items. Even if people tried to maximize their utility, they could not arrive at a solution after billions of years. Once one estimates the computing time, it becomes clear that it is a tremendous error to assume that consumers are maximizing their utility.

Defenders of the GET would say that they do not assume that consumers really maximize their utility; it is sufficient, they think, to assume that consumers maximize their utility only approximately. These defenders of the GET are making an error, confusing the maximal value with the maximal solutions. If consumers use approximate solutions, the utility value obtained is close to the maximal value. However, in the construction of a demand function, what matters is the composition of the solutions. Let \((y_{1},y_{2},\ldots,y_{N})\) be a solution that satisfies condition (1.4), and suppose that the utility value \(u(y_{1},y_{2},\ldots,y_{N})\) is very close to the maximal value \(u(z_{1},z_{2},\ldots,z_{N})\). In an integer problem like the one defined by (1.3) and (1.4), the set of indices j with positive \(y_{j}\) may be completely different from the set of indices with positive \(z_{j}\). Closeness in utility value does not ensure that the solutions themselves are close [75].
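
A tiny numerical case, again with invented numbers, shows how far apart two value-close solutions can be in composition. With budget B = 10, prices p = (10, 5, 5), and linear utility \(u(x) = 100x_{1} + 49x_{2} + 49x_{3}\), the 0/1 problem has the maximal solution z = (1, 0, 0) and the near-maximal solution y = (0, 1, 1):

$$\displaystyle{u(z) = 100,\qquad u(\,y) = 98,\qquad \{\,j: z_{j} > 0\}\cap \{\,j: y_{j} > 0\} =\emptyset. }$$

The value of y falls short of the maximum by only 2 %, yet the two bundles have no commodity in common, so a demand correspondence built from y bears no resemblance to one built from z.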

The question of computing time is only one instance of a more general problem of economics: the assumption of perfect rationality. The same question arises for producers. Most textbooks on microeconomics ask readers to choose prices or quantities that maximize the firm’s profit. If the problem is a stylized one, a routine procedure gives the answer. There are also problems that require deliberation, and deliberative decision-making situations are so complicated that no well-defined maximization problem applies. The human ability to engage in rational calculation and information gathering is limited. Herbert A. Simon [86] summarized these human limits with the keyword “bounded rationality.” If human rationality were unlimited, as Simon [85, 3rd Edition p.220; 4th Edition p.322] stated in his seminal book, administrative theory “would consist of the single precept: Always select that alternative, among those available, which will lead to the most complete achievement of your goal.” The seemingly innocent formulation of Arrow and Debreu thus contains a fundamental flaw: it assumes perfect or unbounded rationality for economic agents.

Once we abandon the assumption of perfect rationality, we should formulate the behaviors of consumers and producers differently. This is why we need an evolutionary economics point of view. Agents are no longer maximizing decision-makers, except in very special situations where maximization is possible. Instead, we can formulate the behavior of agents as routines. Each agent has its own rules of conduct: in one situation it acts in one way, and in another situation it acts in another way. In the simplest form, we can represent an agent as a set of routine behaviors, each consisting of a condition and an associated action, that is, an if-then rule. An agent’s conduct may become more complicated when the if-then rules take a complex chain structure. One of the simplest but sufficiently expressive formulations is the classifier system introduced by John Holland and widely used in artificial life research.

All these behaviors are “rule-based behaviors.” A person is an agent with a set of rule-based behaviors. He or she classifies a situation as a particular case, searches for a conduct rule in his or her repertory, and acts in accordance with the chosen rule. Each behavior is a simple rule, and we can easily mimic such behaviors. It is not difficult to reproduce the social interactions of these behaviors in a virtual world inside a computer. ABS is suitable for this kind of analysis, as discussed in Sect. 1.4.
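
A minimal sketch of such a rule-based agent follows; the rules, situation labels, and fallback action are invented for illustration and are not drawn from any particular ABCE model.

    # A minimal rule-based agent: an ordered list of (condition, action) pairs.
    # All rules and situation labels below are purely illustrative.

    class RuleBasedAgent:
        def __init__(self, rules, default_action):
            self.rules = rules                    # list of (condition, action) pairs
            self.default_action = default_action  # used when no rule matches

        def act(self, situation):
            for condition, action in self.rules:
                if condition(situation):          # the first matching rule wins
                    return action
            return self.default_action

    # Example: a shopkeeper adjusting its price according to its inventory.
    shopkeeper = RuleBasedAgent(
        rules=[
            (lambda s: s["inventory"] > 1.2 * s["target"], "lower_price"),
            (lambda s: s["inventory"] < 0.8 * s["target"], "raise_price"),
        ],
        default_action="keep_price",
    )

    print(shopkeeper.act({"inventory": 130, "target": 100}))  # -> lower_price
    print(shopkeeper.act({"inventory": 95, "target": 100}))   # -> keep_price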

As Arrow [6] suggested, GET can incorporate bounded rationality, for what matters for the proof of the existence of competitive equilibrium is the upper hemicontinuity of the demand correspondence. If such a correspondence can be obtained, whether agents behave rationally or not does not matter. However, once we abandon complete rationality, the mathematical formulation of consumers’ behavior becomes too complex and does not permit mathematical analysis. While GET seems very general, it is in reality confined to a very narrow world.

Another flaw in Arrow and Debreu’s formulation concerns the production possibility set. The assumptions imposed on the shape of the possibility sets are very simple. Setting aside such conditions as the impossibility of net positive production, the two crucial conditions are closedness and convexity. The mathematical meanings of these conditions are clear. If a series of productions \(x(i),\ i = 1, 2, \ldots\), is possible and the series converges to a vector x, it is plausible to assume that the production x is also possible. Convexity is even simpler. If two productions x and y are possible, convexity means that the production \(\alpha x + \beta y\) is also possible for any nonnegative α and β with \(\alpha +\beta = 1\). If scaling down and adding productions are always possible, the production possibility set is convex. Thus, upon first examination, the assumptions on possibility sets seem plausible and harmless. This apparent harmlessness, however, conceals a trick.

The most important problem with the convexity assumption is that it excludes increasing returns to scale. On this point, the Arrow-Debreu formulation inherits the same flaw as the neoclassical framework for production: producers are assumed to face constant or decreasing returns to scale. In the Arrow-Debreu formulation, the higher cost of production prevents producers from producing more; the logic is the same as that of the Marshallian framework. There is nothing mysterious in this coincidence. The Arrow-Debreu model assumes the concept of the supply function (or excess demand function), and this concept requires decreasing returns to scale. The abstract character of the mathematical model often obscures the real constraints that it implicitly assumes.
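To see concretely how increasing returns break convexity, consider a hypothetical one-input technology with output y = x². The sketch below checks two feasible production plans and their midpoint; the midpoint is infeasible, so the production possibility set cannot be convex.

```python
# A quick check (hypothetical one-input technology, output y = x**2) that
# increasing returns to scale violate convexity of the production set
# Y = {(-x, y) : y <= x**2, x >= 0}.
def feasible(plan):
    input_used, output = -plan[0], plan[1]
    return input_used >= 0 and output <= input_used ** 2

a = (-1.0, 1.0)        # use 1 unit of input, produce 1 unit of output
b = (-3.0, 9.0)        # use 3 units of input, produce 9 units of output
midpoint = tuple(0.5 * (u + v) for u, v in zip(a, b))   # (-2.0, 5.0)

print(feasible(a), feasible(b))   # True True
print(feasible(midpoint))         # False: 5 > 2**2, so the set is not convex
```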

Defenders of GET were aware of this flaw and tried to extend the GET framework to include increasing returns to scale. I will discuss the history of this attempt in Sect. 1.3.4.

3.2 Problems of Interpretation of the Arrow-Debreu Theory

The criticism of flaws in the assumptions is, in a sense, extrinsic. A close examination reveals more intrinsic problems with the theory.

The first question is rarely discussed: What kind of situation does the Arrow-Debreu equilibrium describe? Is it a long-term equilibrium or a short-term equilibrium? There are many careless misinterpretations. Many people believe that the Arrow-Debreu equilibrium is a long-term one. Defenders of GET say that equilibrium may not establish itself instantaneously, but that it will appear, sooner or later, after a sufficient period of “tâtonnement” (groping).

If this interpretation is to be plausible, only two cases are possible. In case I, all endowments are given constantly by nature, and there are no futures markets. In case II, some endowments are the result of past acts of accumulation and transaction. In case I, futures markets play no role, because they simply do not exist. In case II, futures markets play an important role. To understand this point, we must consider the time structure of the model. The first point to grasp is that the Arrow-Debreu equilibrium describes markets at a point in time, say T, and that futures markets are also open at time T. The only difference between a futures market and a spot market is that the transacted good is a future good and delivery takes place in the future: the transaction is a promise to deliver a specified good at a determined time. Let us assume that traders keep their promises. Futures markets then generate flows of goods, for the delivery of a good at a future time means that the trader receives it as an endowment. Therefore, the existence of futures markets generates flows of endowments that depend on past transactions. The case II interpretation thus presupposes a shifting economy behind the equilibrium. If the case II scenario is to produce a stationary state, the conditions of equilibrium must contain many other equations that do not appear in Arrow and Debreu’s formulation. Without these conditions, what seems like an equilibrium may generate fluctuations of the shifting economy. Arrow and Debreu did not examine this possibility, and there is no guarantee that such a fluctuation converges to a stable state. Indeed, we can construct an example in which the shifting economy cannot be extended beyond a certain point in time.

In conclusion, Arrow and Debreu introduced markets for dated goods, but this device was not as successful as many researchers thought. Mathematically, it was a simple generalization; economically, the logic of futures markets is not well incorporated into the GE framework. GE with futures markets could be a component of an analysis of dynamic development, but nobody pursued that line of investigation. And even in that case, the question of the instantaneous establishment of GE remains.

This issue is only a symptom indicating that there is some misconception in the GET research program. GET can be neither a long-term nor a short-term theory. The term “general equilibrium” bewildered people. Despite its apparent generality, the Arrow-Debreu formulation is only a description of an equilibrium state at a point in time. It simply means that the equilibrium state will not change if the same initial endowments and other conditions, such as preferences, are given. The existence of such a state does not teach us much about real market transactions.

The second question is a famous one, and a short explanation will suffice. The Arrow-Debreu equilibrium includes no role for money. This is true of any other form of general equilibrium, for equilibrium means equality of demand and supply for every good; no room remains for money as a medium of exchange.

3.3 Shapes of Excess Demand Functions

In the 1970s, there were new discoveries. Hugo Sonnenschein [87] found in 1973 that a very wide class of functions could be an aggregate excess demand function of an economy. We know that an aggregate excess demand function satisfies two characteristic conditions:

  1. It is continuous and homogeneous of degree zero.

  2. It satisfies Walras’ law.

Suppose that there are N types of goods. Let \(\Pi\) be the set of all price vectors in \(\mathbf{R}^N\) that satisfy \(p_i \geq 0\) for all \(i = 1, 2, \ldots, N-1\) and \(p_N = 1\). Let \(\Pi(\epsilon)\) be the subset of \(\Pi\) whose elements satisfy \(1/\epsilon \geq p_i \geq \epsilon\) for \(i = 1, 2, \ldots, N-1\). A Walras function is a vector-valued function \((f_1, f_2, \ldots, f_N)\) that satisfies conditions 1 and 2. A typically polynomial Walras function is a Walras function whose component functions are polynomials in the first N − 1 price variables. Sonnenschein proved that any typically polynomial Walras function is indeed the aggregate excess demand function of an economy with normal, insatiable, convex utility functions. By Weierstrass’ approximation theorem, any Walras function can be uniformly approximated on \(\Pi(\epsilon)\) by typically polynomial Walras functions; in other words, typically polynomial Walras functions are dense in the set of all Walras functions. This means that an aggregate excess demand function can approximate essentially any Walras function on \(\Pi\).
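A small numerical check (a toy function, not Sonnenschein's construction) illustrates how weak the two Walras-function conditions are: almost arbitrary functions of relative prices can be completed into a function that is homogeneous of degree zero and satisfies Walras' law exactly.

```python
import numpy as np

def excess_demand(p):
    # Arbitrary smooth functions of relative prices for goods 1..N-1 ...
    z = np.array([np.sin(p[0] / p[-1]), (p[1] / p[-1]) ** 2 - 2.0])
    # ... and the last component chosen so that Walras' law holds identically.
    z_last = -np.dot(p[:-1], z) / p[-1]
    return np.append(z, z_last)

p = np.array([1.5, 0.7, 1.0])
for scale in (1.0, 3.0):                      # homogeneity of degree zero
    print(excess_demand(scale * p))           # same vector for both scales
print(np.dot(p, excess_demand(p)))            # Walras' law: ~0.0
```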

Rolf Mantel [56] reported that Sonnenschein’s theorem can be extended to the class of continuously differentiable functions with a certain property. Gerard Debreu [17] provided a theorem stronger than Sonnenschein’s: he showed that one can take any continuous function as a possible approximate aggregate excess demand function. Like Sonnenschein, Debreu assumed that the functions are homogeneous of degree zero and satisfy Walras’ law. The new theorems hold over any ε-trimmed price space \(\Delta(\epsilon) = \{\mathbf{p} = (p_{1},p_{2},\ldots,p_{N}) \mid 1/\epsilon > p_{j} > \epsilon \ \mbox{for all}\ j\}\).Footnote 18 Debreu’s result means that any Walras function can be the aggregate excess demand function of an economy on this trimmed price space.

Mantel and Debreu also examined how many consumers are sufficient to obtain the above result. Mantel showed that at most 2N individuals are needed and conjectured that N would be sufficient. Debreu showed that N is sufficient and that it is the minimum such number.

Although these are rather technical results, their impact was tremendous. GET has a standard set of research programs, including uniqueness, stability, and comparative statics, and all of these analyses presupposed a well-behaved aggregate excess demand function. The Sonnenschein-Mantel-Debreu theorem (SMD theorem) means that the assumptions that guarantee good behavior at the individual level do not carry over to the aggregate level. Thus, the SMD theorem destroyed the standard research programs of GET.

The SMD theorem changed the general orientation of research programs. To obtain an aggregate demand function with nice properties, it became clear that one must assume special distributions of initial endowments, thus departing from the generality the theory claims for its assumptions. The SMD theorem was, in this sense, very influential, but its influence remained within the framework of GET. It only demonstrated that Arrow-Debreu-type models can be much more complicated than had been assumed before the theorem. It is also important to point out that the instability revealed by the SMD theorem indicates nothing about the movement of the real economy: comparative statics teaches us almost nothing about the real dynamics of economic adjustment.Footnote 19

As for trials outside of GET, Rizvi [64, pp. 230–231] sums them up concisely:

Thus in the 10 years following the Shafer-Sonnenschein [74] survey, we find a number of new directions in economic theory. It was around this time that rational-choice game theory methods came to be adopted throughout the profession, and they represented a thoroughgoing change in the mode of economic theory. Even so, following a growing realization of formal difficulties with rational-choice game theory as well as experimental evidence that did not agree with some of its predicted outcomes, a group of practitioners turned to evolutionary game theory. Indeed, the rise of experimental economics itself represents an important development in the growth of alternative approaches in the wake of general equilibrium theory’s difficulties.

3.4 GET and Increasing Returns to Scale

Whereas the SMD theorem reveals difficulties within the research program of GET, the questions raised by increasing returns to scale have much wider content and perspective. Indeed, increasing returns to scale are a common phenomenon: we can observe them widely in most industrial production.Footnote 20

There were several approaches to “solving” the questions raised by the existence of increasing returns to scale. The first attempt was made by Alfred Marshall. Marshall knew very well that increasing returns inside a firm would eventually lead to a monopoly and the destruction of the competitive economy. When Marshall was editing his Principles of Economics (1st edition, 1890; 8th edition, 1920), Great Britain and the United States were witnessing the emergence of giant companies through a process of mergers. Marshall was more concerned with the consistency of his theory than with incorporating this new trend that he observed in the real world. His astute invention was the concept of “externality”: he admitted the existence of increasing returns to scale that are external to firms and internal to the industry and denied the existence of increasing returns to scale that are internal to firms. With this conception, Marshall succeeded in saving the logical consistency of his system. As we have seen, what Sraffa criticized was this “solution.”

The second attempt was to deny the importance of increasing returns to scale. Many economists admitted the possibility of increasing returns to scale but tried to confine these phenomena to special industries such as railways and utilities (gas, water, and electricity). In these industries, they thought, a public authority should control the resulting monopolies; such monopolistic firms are usually called public-purpose enterprises. This doctrine continues to be taught in undergraduate economics courses. According to this “solution,” strong increasing returns to scale are rarely observed, and where they are observed, they have no serious significance.

Some economists emphasized the general validity of the convexity assumption on the production possibility set. Indeed, if we admit that productions are additive and divisible, we can logically deduce the convexity of the production possibility set; the flaw in this reasoning lies in the divisibility assumption. Another tricky explanation emphasized the generality of input substitution: if we fix one or more inputs and increase the others, decreasing returns are the general rule. However, we should not confuse diminishing returns to a variable input with returns to scale. Increasing returns to scale concern proportional changes of all inputs, with the best combination of inputs chosen at each scale.

These arguments are apologetics; they carry no weight with serious observers and theorists. Gradually, and particularly after the 1980s, increasing returns to scale came to be recognized as one of the most important anomalies or irregularities to be incorporated into the framework of GET.

A major attempt in the new direction was to change the behavior of producers. One method was to assume that firms are no longer price takers and have a pricing rule. Many alternative assumptions were proposed. Three of them were as follows:

  1. Average cost-pricing rule

  2. Two-part marginal pricing rule

  3. Constrained profit maximization rule

All these rules induce a correspondence from P × F to P that is upper hemicontinuous with nonempty, closed, and convex sets as values. Here, F stands for the Cartesian product of the N sets of weakly efficient production points, and P stands for the price simplex. If a pricing rule induces a correspondence with these properties, it is possible to apply Kakutani’s fixed-point theorem and prove the existence of an equilibrium in which no consumer and no firm needs to change its plan.

Average cost pricing is one of the rules that society can impose on public-purpose firms. As for the behavior of competitive firms with increasing returns to scale, pricing rules with quantity constraints deserve closer examination. Two different formulations are possible: one was proposed by Scarf [73] and the other by Dehez and Drèze [18]. The first sets constraints on inputs, whereas the second sets constraints on outputs. Both tried to show that the equilibrium is compatible with “voluntary trading.”

I will skip the details of the concept of voluntary trading.Footnote 21 It is a good characterization that covers both the price-taking behavior of decreasing returns-to-scale producers and the supply behavior of increasing returns-to-scale producers. In fact, for a producer with a smooth convex production set (the case of a “normal” producer with decreasing returns to scale), a minimal output price under voluntary trading implies that the producer’s output quantity is the same as the profit-maximizing quantity under the given input and output prices. Another important feature of voluntary trading is the supply behavior of increasing returns-to-scale producers: if the producer is operating at the output price p and output quantity y, it is ready to produce more whenever market demand is greater than y. This attitude is similar to the behavior of producers described by Sraffa [88], who emphasized that what limits production to its actual level is not an increase in cost but the constraint of demand for the producer’s product.

The result obtained by Dehez and Drèze [18] is astonishing. They proved two theorems for a private ownership economy under several standard conditions, except that they did not assume convexity of the production sets:

Theorem V.:

Under assumptions C.1 to C.3 and P.1 to P.4, a voluntary trading equilibrium exists.

Theorem M.:

Under assumptions C.1 to C.3 and P.1 to P.4, a minimal voluntary trading equilibrium exists.

The concepts of the equilibria are as follows. A voluntary trading equilibrium is a set consisting of a price vector p, a list of production plans \(y_1, y_2, \ldots, y_N\), and a list of consumption plans \(x_1, x_2, \ldots, x_M\) that satisfies the following three conditions:

  1. Excess demand is nonpositive, with the free goods rule.

  2. The consumption plan \(x_i\) is the best choice for each consumer i, given the price vector p and profits.

  3. For each producer j, the price vector p and the production plan \(y_j\) satisfy the voluntary trading condition for the production set \(Y_j\).

The same set is called a minimal voluntary trading equilibrium when, in addition to conditions 1, 2, and 3, the minimal output price condition is satisfied. (I omit the definition of this last condition.) Note that theorem M is stronger than theorem V.

At first glance, Dehez and Drèze’s results [18] seem to be a victory for the long-continued effort to extend GET to include increasing returns to scale. In reality, they are not.

Let us examine Dehez and Drèze’s results closely. A concave producer has a production possibility set whose complement is convex in an appropriate half space. This means that increasing returns to scale apply at all points of production. Theorem M proves the existence of an equilibrium even in this non-convex (i.e., increasing returns-to-scale) environment. It states that there is an equilibrium in which concave producers operate without profit, for the output price equals the average cost, which is minimal in the voluntary trading price set. In other words, concave producers always produce at the break-even point.

A minimal voluntary trading equilibrium permits convex (i.e., decreasing returns-to-scale) producers to earn positive profits but does not permit concave producers to earn any positive profit. This is exactly the opposite of what we usually observe in a real (but not in a theoretical) market economy. Even within the paradigm of GET, this result is disastrous, because the equilibrium is in general not Pareto efficient.

At the end of this discussion of increasing returns to scale, it is worth adding a few words on the same topic in the macroeconomic literature. Eminent economists such as Joseph E. Stiglitz, James M. Buchanan, Robert E. Lucas, and Martin Weitzman showed a keen interest in the effects of (static and dynamic) increasing returns. Buchanan and Yoon [12] edited an anthology on this theme. David Warsh [93], a Boston Globe columnist, wrote a journalistic book titled Knowledge and the Wealth of Nations. He contrasted the pin factory parable (increasing returns) with the invisible hand parable (equilibrium) and pointed out that, because the two stories are logically contradictory, the pin factory discourse was suppressed in favor of the invisible hand logic. In the latter half of the book, Warsh discussed the role of Paul M. Romer [70] in this “increasing returns revolution” in economic thought. In this and other papers, Romer treated knowledge as a third input, together with capital and labor, in the aggregate production function. He avoided the usual difficulties of introducing increasing returns by assuming spillover effects of knowledge, so that increasing returns appear only at the macroeconomic level. By invoking the logic of externality, Romer succeeded in incorporating increasing returns just as Marshall did; as far as the logic of the explanation is concerned, Marshall’s and Romer’s arguments are structurally the same.

Dixit and Stiglitz [19], Krugman [45], and others made other attempts in terms of monopolistic competition. Using the Dixit-Stiglitz utility function, Krugman succeeded in explaining how a degree of product diversity arises under monopolistic competition, and he argued that this would explain why intra-industry trade has been increasing in volume and in proportion. However, this result relies too heavily on symmetry assumptions on both the producers’ and consumers’ sides. It is a rather poor result: it does not explain how specialization occurs between countries, for specialization takes place purely by chance in Krugman’s symmetric world.Footnote 22

3.5 Computable General Equilibrium

Some computer simulation researchers believe that GET is not so bad and is even useful in some ways, because computable general equilibrium models (CGE models) are constructed and actually used.

It is necessary to distinguish two different aims of economic models. GET is primarily an “algebraic theory.”Footnote 23 It does not aim to provide predictions; the belief that it should is simply a Friedmanian misconception. GE models contain many variables and functions, but it is normally difficult to replace them with observed data. An algebraic theory teaches us the principle of a system; in the case of economics, a good theory teaches us how the market economy works [7]. As Arrow and others insisted, GET gave a coherent account of how an economy, as a worldwide network, works with no directing headquarters. As a parable, GET has produced a fine picture. However, it contains various fatal flaws. GE is a refined theory as mathematics but a fanciful confabulation as economics. The insight that GET gives is far from reality and often toxic. That is why we have concluded that GET should be rejected. It has no future.

The second aim of economic models is to give predictions. Such models are conceived as policy tools, and many economic models, private and public, work for this purpose. As positive science, these models have many insufficiencies; they are like fortune-telling, and people misuse them, thinking that they express causal relations between variables. Despite these problems, this work is unavoidable. As Keynes hinted, in the field of prediction and policymaking, we should be as humble as dentists, who try to ease the client’s pain without knowing its real cause. We should also note that basic medical science, empowered by the recent development of biophysics, has improved treatment tremendously. In economics, too, we should pursue both sides: practical treatment and basic science. It is nevertheless important to know that a great distance may separate practice from theory.

Broadly speaking, CGE models are a type of GE model. In contrast to other GE models, their aim is to be useful for concrete economic analysis; on this point, they are closer to most macroeconomic models. The difference between mainstream econometric models and CGE models lies in their orientations. Mainstream econometric macro models are constructed as simply as possible: they use a small number of aggregate variables and a small number of equations. CGE models contain a large number of variables and equations and rely heavily on detailed statistical data. CGE models descend from Leontief’s input-output table and have the character of a new form of the system of national accounts.

This difference between mainstream econometric models and CGE models comes from different philosophies of useful model building. Mainstream models aim for speed and accuracy. CGE models aim to be usable in various analyses of policy assessment before any concrete implementation.

In the 1960s and even in the 1970s, there was a widespread belief that, if we could build a large-scale econometric model, we could get more accurate results. This belief was abandoned long ago. The economy is a huge network that includes a tremendous number of variables; interactions between them are very complex, and the introduction of new variables and equations does not help very much to improve the models.

3.6 Dynamic Stochastic General Equilibrium Models

Another strand of computable macroeconomic models is called dynamic stochastic general equilibrium (DSGE) models. This type of model has been much more popular among macroeconomic specialists for more than 20 years. The Royal Swedish Academy of Sciences awarded the Nobel Prize in Economic Sciences in 2004 to Kydland and Prescott, who were major promoters of DSGE models.

DSGE models incorporate expectations and substitution between consumption at different points in time. As they are popular models, be they abstract or computable, there are many versions, but most of them have a very simple structure. They assume one type of good and a unique representative agent, who represents consumers with identical preferences.Footnote 24 The agent chooses how much of this unique good to consume at each point in time; in this sense, goods are differentiated as time-specific goods. If consumers expect inflation, they prefer to consume more now than later. The word “general” means simply that the agent chooses among these time-differentiated goods. “Dynamic” means only that agents have expectations of future events; preferences and the production function remain the same over time. In the proper sense of the word, DSGE is but a static model. “Stochastic” means that there are external shocks to the economy: whatever happens, the agent is ready to adapt its behavior and redress the disturbed equilibrium. With these characteristics, DSGE models are normally understood to be rigorous models with firm microfoundations. On this assessment, the discussions and observations of both neoliberal new classical economists and more liberal New Keynesians rest mainly on one DSGE model or another.
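The intertemporal substitution at the heart of these models can be illustrated with a two-period toy problem (hypothetical parameter values, not any published DSGE model): a representative agent with CRRA utility divides given wealth between consumption now and consumption later, and higher expected inflation, by lowering the expected real return, shifts consumption toward the present.

```python
# A toy two-period consumption choice sketching intertemporal substitution.
def consumption_plan(beta=0.96, sigma=0.5, nominal_rate=0.03,
                     expected_inflation=0.0, wealth=100.0):
    """Maximize u(c1) + beta*u(c2), u(c) = c**(1-sigma)/(1-sigma),
    subject to c1 + c2/(1+r) = wealth, r being the expected real rate."""
    r = (1 + nominal_rate) / (1 + expected_inflation) - 1
    growth = (beta * (1 + r)) ** (1 / sigma)      # Euler equation: c2/c1
    c1 = wealth / (1 + growth / (1 + r))          # from the budget constraint
    return c1, growth * c1

print(consumption_plan(expected_inflation=0.00))  # (c1, c2) with a higher real return
print(consumption_plan(expected_inflation=0.05))  # expected inflation -> consume more now
```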

However, criticism of DSGE models abounds and became much stronger after the Lehman Brothers bankruptcy. We cite here a short historical assessment from a paper by Colander and others [16, p. 237], prepared and published before the shock:

The exaggerated claims for the macro models of the 1960s led to a justifiable reaction by macroeconomists wanting to “do the science of macro right”, which meant bringing it up to the standards of rigor imposed by the General Equilibrium tradition. Thus, in the 1970s the formal modeling of macro in this spirit began, including work on the micro foundations of macroeconomics, construction of an explicit New Classical macroeconomic model, and the rational expectations approach. All of this work rightfully challenged the rigor of the previous work. The aim was to build a general equilibrium model of the macro economy based on explicit and fully formulated micro foundations.

The authors’ conclusion was as follows:

Einstein once said that models should be as simple as possible but not more so. If the macro economy is a complex system, which we think it is, existing macro models are “more so” by far. They need to be treated as such. We need to acknowledge that our current representative agent DSGE models are as ad hoc as earlier macro models. There is no exclusive right to describe a model as “rigorous”. This does not mean that work in analytical macro theory should come to a halt. But it should move on to models that take agent interaction seriously, with the hope that maybe, sometime in the future, they might shed some direct light on macro policy, rather than just provide suggestive inferences.

3.7 Why Did the Mainstream Research Program Fail?

The above conclusions of Colander and others [16] are fairly natural, but it would be better to add some words about symptomatic observations on the present state of economics.

If we examine the situation with an open mind, the symptoms of economic science’s crisis are evident. All the difficulties of economics come from the fact that it cannot escape the equilibrium framework. Equilibrium is a framework that treats the economy as if it were in a static state. However, the economy is a dynamic entity. Stock prices and foreign exchange rates fluctuate by the minute. An economy is always changing: competition, the business cycle, the boom and collapse of financial markets, growth and stagnation. Commodities, people’s behaviors, technology, institutions, and organizations change over a longer period.

Since the time of John Stuart Mill, economists have known that the basic method of analysis should be switched from statics to dynamics. Many economists espoused this ideal, but it was never realized, even though nonspecialists, too, knew that the economy is always changing. To rescue economics from the yoke of statics, J. R. Hicks devised the idea of shifting equilibrium: under this interpretation, the economy is in equilibrium at any moment in time but shifts from equilibrium to equilibrium. Another mode of thinking is to examine intertemporal equilibrium conditions; a typical example is the dynamic stochastic general equilibrium model. Despite these palliative ideas, analysis based on the equilibrium framework cannot escape its essentially static character. As Ichikawa Atsunobu [39] emphasizes, it is necessary to see the potential limits of the present system of a science. As he tells us, there is always a margin between the actual state of a system and its limits; if one is bewildered by the small remaining margin of development, one cannot achieve a breakthrough. It is, rather, time to abandon the old framework. If we do not, we cannot go further.

Why did people adhere so closely to the GE framework? It is a conundrum, and to solve it, it is necessary to make a short detour into the history of science. Economics is a part of science and was strongly influenced by the general methodology of scientific investigation. Throughout the nineteenth century, this methodology centered on measurement and mathematics. Newtonian analysis was extended to various fields of physics and engineering, and it was believed that this method could be applied to fields such as economics. Fortunately (or, in reality, unfortunately?), economics succeeded in incorporating mathematical analysis into economic reasoning, and it could thereby take the form of a science. However, this pseudo-success paved the way to the present state of economics.

Emerging in the 1930s in Vienna, mathematical reasoning in economics became much more refined throughout the twentieth century.Footnote 25 One of the highest peaks in this direction was the Arrow-Debreu general equilibrium model [5]. In the 1950s and 1960s, there was a kind of fever, or blind trust, in economics: mathematical economics, together with econometrics, was expected to become a real science comparable to physics. The criticism of the first half of the 1970s was a reaction to this euphoria. This history of economics is closely related to a major change in scientific research modes and will be explained in Sect. 4.2.

Arrow and Debreu’s success was conditioned on two factors: their theory was built on the twin orientations of maximization and equilibrium. These two frameworks contributed greatly to the successful application of mathematics, but they also marked the limits of mathematical analysis.

Once the hypothesis of maximization is abandoned, economists find themselves in an uncomfortable position: economic agents’ behaviors are no longer determined uniquely. How, then, do economic agents behave? There are no leading principles by which to formulate economic behaviors a priori. Economists must start their analysis from observations of actual economic behavior. However, this is not an easy task, for it requires sharp insight and the power of abstraction. Moreover, this method is in some sense contrary to the well-established customs of economics: it is still believed that economics can be constructed as an axiomatic science from such principles as rational decision-making.Footnote 26 Many theoretical economists were afraid of losing mathematics as a tool of analysis. They knew that real agents do not behave as utility or profit maximizers, but they preferred to conserve their tools rather than abandon them.

A similar logic worked with regard to the equilibrium framework. In any market economy, prices and quantities are mutually dependent, and this kind of mutual dependence can be analyzed by two methods: equilibrium analysis and process analysis. In equilibrium analysis, all relevant variables are assumed to be constant through (probably virtual) time. An important related question, whether the economy has any mechanism by which to arrive at equilibrium, was seldom asked. If an economy is in equilibrium, the analysis becomes drastically simple: it is sufficient to ask whether a system of equations has a solution and whether the solution is unique. For the existence of an equilibrium state, one could use Kakutani’s fixed-point theorem, one of the most general forms of fixed-point theorem. This was one reason that Arrow and Debreu’s theory was successful.

If we abandon the equilibrium framework, all variables become dependent on time. Process analysis becomes necessary, but the researcher’s burden becomes much heavier. This approach was tried sporadically, but it was doomed to remain fruitless because there were no good tools with which to pursue the processes systematically. Process analysis is extremely difficult if the main tool of analysis is limited to mathematics alone: except for models with one or two variables (as attempted in macroeconomic analysis) and for linear systems, very few results were obtained. Process analysis was ideal but impractical, as the formulas became complicated and did not permit an easy understanding of their meaning. Simply speaking, process analysis was intractable whenever we wanted a certain level of reality in the analysis.

This is the deep reason that economists resisted so strongly admitting the deficiency of their framework despite the repeated criticisms of GET and other neoclassical frameworks. Mathematics (or at least formula calculation) is not well adapted to analyzing complex phenomena. These days, however, complexity has become a popular topic, and many have understood that mathematical formula calculation has an intrinsic limit as a tool of analysis. Agent-based simulation (ABS), or agent-based computational economics (ABCE), changes this status quo. This is the very reason that a long guided tour was necessary to understand the deep mission and the possibilities of ABS.

3.8 What Happened During this Century and a Half?

As a conclusion to this brief tour through the history of economics over more than a century and a half, let us cite a paragraph from Sraffa’s 1926 paper [88, p. 536]:

In the tranquil view which the modern theory of value presents us there is one dark spot which disturbs the harmony of the whole. This is represented by the supply curve, based upon the laws of increasing and diminishing returns. That its foundations are less solid than those of the other portions of the structure is generally recognized. That they are actually so weak as to be unable to support the weight imposed upon them is a doubt which slumbers beneath the consciousness of many, but which most succeed in silently suppressing. From time to time someone is unable any longer to resist the pressure of his doubts and expresses them openly; then, in order to prevent the scandal spreading, he is promptly silenced, frequently with some concessions and partial admission of his objections, which, naturally, the theory had implicitly taken into account. And so, with the lapse of time, the qualifications, the restrictions and the exceptions have piled up, and have eaten up, if not all, certainly the greater part of the theory. If their aggregate effect is not at once apparent, this is because they are scattered about in footnotes and articles and carefully segregated from one another.

This paragraph describes the intellectual atmosphere of the first quarter of the twentieth century, and it is still prophetic if we reflect on what has happened during the 40 years since the 1970s and what is happening now. We know that the foundations of neoclassical economics “are actually so weak as to be unable to support the weight imposed upon them.” There were many who expressed their doubts, and they were “promptly silenced, frequently with some concessions and partial admission of [their] objections, which, naturally, the theory had implicitly taken into account.” This is the history that has been continually repeated over the century and a half since the rise of neoclassical economics. It continues to be repeated [91].

The most important lesson to draw from this history is that something has been missing in our efforts at reconstructing economics. Much criticism has been advanced and has accumulated; this is necessary but not sufficient for the reconstruction of economics. Mathematics provided economics with a powerful tool of analysis, but exclusive reliance on this tool is now the main cause of the current troubles of economics. We must introduce or create a new analytical tool as powerful as mathematics. A promising candidate is computer simulation, or agent-based simulation. The next section discusses the possibilities that agent-based simulation offers for the future of economics.

4 Tasks and Possibilities of ABS

We have made a long journey through economics before and after the 1970s. We have seen that economic science is seriously ill. The history of economics after the 1970s teaches us the necessity of a paradigm change in economics itself: a research program that seeks only to modify and redress mainstream economics is doomed to fail. We should pursue a breakthrough, and to achieve one, it is not sufficient simply to rearrange concepts and theorems. A new tool for economic analysis is necessary; the reconstruction of economics requires something very new. ABS, or ABCE, is one such possibility.

4.1 New Bag for a New Wine: ABS as a New Tool for Economics

If we summarize the history of economic analysis very briefly, we can detect three stages. The period before the 1870s was characterized by a method of analysis that employed literary explanations and history, in which concept making played an important role. The second period ranges from the 1870s to the present: a new tool, mathematics, was introduced into economics. In the latter half of the twentieth century, “mathematical” became a synonym for “theoretical”; what was mathematically formulated was considered theoretical and therefore scientific. Now we are standing at the starting point of the third period. Every periodization has overlaps, and we are in a phase of transition.

This transition may have started around the 1970s at the earliest and in the 1990s at the latest. In the meantime, two important events occurred. First, a new style of mathematics emerged: chaos, fractals, and power laws were discovered in every field of science, and people acknowledged that reality is much more complex than what the classical tools, such as differential equations, can describe well. The second event was the advent of personal computers. Calculation became faster and easier beyond comparison, and this made agent-based simulation possible.

The new mathematics brought a new conception of the world. From Newton through Poincaré to René Thom, the world was differentiable; it was considered a dynamical system, meaning that everything could be described by a system of differential equations. However, this picture changed greatly with the arrival of the new mathematics. Fractal dimensions were introduced, and forms were no longer differentiable. We should remember that, in the nineteenth century, the Weierstrass function was thought to be utterly pathological and was accepted only with astonishment. The discovery of chaos was another shock that changed the worldview, even though it arose within the theory of dynamical systems. The standard classical view of a regular world was discarded in favor of an acknowledgment of a complex world.

The new mathematics contributed to the establishment of this new worldview, but it also revealed the limits of mathematics. Mathematics was dethroned from omnipotence and retreated to a position where mathematical reasoning is useful only in fortunate, simple situations. However, we should be pleased: at the same time as the arrival of the new mathematics, another powerful tool came to the rescueFootnote 27: computer simulation. ABS models are part of this general trend [47].

Economics changed greatly when it began to use mathematics as a tool of analysis. In the last quarter of the nineteenth century, mathematics was a new tool for economics, and it opened big new possibilities; without the mathematization of economics, no strict reasoning was possible. However, as we have seen above, mathematics was also a trap for economics. Even when many anomalies, contradictions in the theory, and its irrelevance to reality were recognized, mainstream economics wanted to remain loyal to mathematics and, as a consequence, to maximization and the equilibrium framework. A majority of economists thought that there was no choice other than mathematics. To change the status quo, it is not sufficient to change our minds: without developing a new tool, we are forced to continue using the old one. It is absolutely necessary to seek a new tool of analysis. Now it is time to pour new wine (new contents) into a new bag (new tool); the introduction of ABS models has this meaning for economics. It is therefore important for new researchers to know the merits of, and the tasks we face with, this new tool. This knowledge will be crucial for the further development of economics and for leading our inquiry in the right direction.

Although ABS can provide a powerful tool for economics, it is not yet an experienced and mature one. It is not sufficient to use ABS models as a convenient kit of analyses; it is necessary to develop ABS into a good tool. This is both a big possibility and a big task. To make ABS effective, we should be good model builders. As many have pointed out, it is rather easy to build an ABS, but few models are good ones. Once a computer model is implemented, it can produce enormous amounts of results, but if the assumptions used in the model are wrong, the results are meaningless: ABS models have a strong tendency toward “garbage in, garbage out.” A good ABS model satisfies many requirements at different levels. We will discuss the question of how to formulate agents’ economic behavior in Sect. 1.4.4. The problems of ABS, however, do not stop here. We should also consider more subtle, meta-level problems, which may be classified into two groups.

The first group of problems is concerned with the conditions for a good model. For example, we can easily cite the following three tasks:

  1. Build a simulation model that is relevant to real-world questions

  2. Build a simulation model that helps to understand what is happening in a real economic process

  3. Build a meaningful simulation model

The first two tasks are easy to understand. The third may include tasks 1 and 2, but it can also mean something different: a simulation model is useless if it only gives a result that can be obtained by mathematical formulation. If a mathematical proof is possible, the simulation affords only a verification check, and in that case, there is no raison d’être for ABS as a new tool.

The second group of tasks is concerned with how to obtain scientific knowledge and how to confirm that it is true. In simulations, for example, we always face the following tasks:

  4. Find a method to discover an interesting phenomenon

  5. Find a method to establish general tendencies or laws

  6. Find a criterion to estimate the generality of a tendency or a law

All of these are difficult problems, and we may not arrive easily even at first-step solutions. Despite the difficulties, it is necessary to attack these tasks so that ABS can become a truly scientific method. Indeed, it is not only economics that faces them: many fields of science face similar, common problems.

This is not surprising, as we are entering a new phase of scientific research. The experience of other fields may be helpful, and the history of the experimental sciences, in particular, is indicative. Experiments are now the most fundamental mode of modern scientific research, but this method did not come into the world easily or swiftly: it took many hundreds of years before experimentation became a firm mode of scientific research. ABS researchers should learn from the history of the development of the experimental sciences. In this regard, it is necessary to make a short detour into the history of science and situate ABS in the long-range history of scientific research.

4.2 The Third Paradigm in Scientific Research

Let us review the history of science as a development of different modes of scientific research so that we can understand the situation of ABS and ABCE.

The first mode, or paradigm, of scientific research was theory. This mode originated in Ancient (Classical) Greece. Many people may be surprised to read this, but if we note that the word “theory” comes from the Greek word theōría (\(\theta\epsilon\omega\rho\acute{\iota}\alpha\)), which means “speculation” or “contemplation,” the contention becomes more plausible. Theōría is a derivative of the verb theōréō, meaning “I look at.” The word “speculation” is based on a Latin word with the same original meaning. Observe and contemplate: this was the original method of theoretical effort. “Theorem” comes from the Greek word theṓrēma, meaning “proposition to be proved.”

The same types of contemplation and speculation must have occurred in Ancient India and Ancient China, but it was Greece that developed logical reasoning to an extreme. Mathematics as a logical science emerged in Greece. Euclid’s Elements of Geometry was a compilation of known theorems arranged in logical order. Theorems were known in India, China, Egypt, and Mesopotamia, but only in Classical Greece were different theorems arranged in a logical order beginning from first principles, i.e., postulates and axioms. It is amazing that the Greeks proceeded so deeply into logical reasoning, and it is not strange that the Elements remained a must-read textbook for more than two millennia. The idea that an indefinite number of theorems can be derived from a small number of postulates and axioms is quite unusual, although it became one of the indispensable pillars of modern science. This is demonstrated by the fact that Chinese scholars could not understand the significance of arranging theorems in a logical order when Euclid’s Elements was imported and translated in seventeenth-century China.

The second mode, or paradigm, of scientific research came very late compared with theory: experiment. No clear date can be assigned to the beginning of experimentation until Galileo Galilei established modern experimental science. His falling-bodies experiment at the Leaning Tower of Pisa is the best known of his experiments. We should also note that Galileo used a telescope to observe the sun, the moon, and the planets. He observed the strange shape of Saturn (later identified as its rings) and discovered the satellites of Jupiter. The telescope served as an extension of the sensory organs, and Galileo became the first person to change our thinking by means of such observations.

Observations are not usually considered a part of experiments, but observation by using special instruments is very close to an experiment. Although observation is an important aspect of speculation, it became a scientific research tool only after the arrival of modern experimental science. Experiments and observations are an inseparable pair. We can include observation as a kind of experiment.

The origin of experiments is quite vague; we may go back to medieval alchemy and further to Archimedes. All empirical studies had some of the characteristics of experiments or observations. It took many centuries for burgeoning experimentation to grow into a tool of scientific research.

Observation is a part of the experimental mode. Experiments, instruments, and observations form a triplet in experimental science. Experiments and observations existed before experimental science was established as modern science. Observations became an indispensable element of experiments when observation became a controlled act of data gathering with the use of instruments. The latter helped to obtain accuracy and reproducibility and extended our ability to perceive beyond our five senses.

Experiments together with instruments and observations became a scientific research method only after various procedures were stipulated. The result of an experiment, if it is an important one, is recognized as an established fact only when other independent experiments confirm the result. History shows that experiments required a much greater understanding of scientific research methods. This point gives us a valuable lesson when we think of making agent-based simulation a true scientific research method.

The third mode, or paradigm, of scientific research is computer simulation. As computers are rather new devices, computer simulation has only a short history. In a science such as chemistry, computer simulation has become a well-established method of research: it is now thought that a complete research project should contain three parts, a theoretical examination, an experiment, and a computer simulation. In astronomy, computer simulations are often used to show graphically how the universe evolves. Even in physics, computer simulations are used to generate the statistical movements of large-scale systems. In other fields, such as biology, computer simulations are used as parables; artificial life is a famous example, but its relevance to biology is still ambiguous.

In addition to the aforementioned three paradigms of scientific research, Gray [27] proposed a fourth paradigm: data exploration.Footnote 28 Others have named it e-Science. We are living in an age of data deluge. Data exploration comprises all activities related to data processing: data capture, data curation, data analysis, data visualization, and all related operations. Implementation requires millions of lines of code for a large-scale experiment, and as Gray pointed out, the software cost dominates many such experiments. Data exploration itself is a field of engineering rather than a science, but science and engineering go together. We should recall Galileo’s telescope and the arrival of computers: engineering not only helped science but changed the methods of scientific research tremendously. The same thing is happening now in the domain of data processing. Data exploration is indeed changing the mode of scientific research, and we may say that it has marked the arrival of a new paradigm [54].

The four modes of research are not exclusive; no research effort is possible without depending on the other modes. They are complementary. The trouble with ABS lies in two factors. First, we lack a firm theoretical basis. Second, ABS is still young, and we lack a good metatheory by which to orient and control our research. The second factor is common to all simulation experiments, and we can and must learn from other disciplines in which more experience and theoretical examination have accumulated. The first factor is specific to economics, but we should not be discouraged by it. Economics went astray because it depended too much on mathematics, which is not well suited to the study of complex phenomena. In contrast, we can expect ABS to serve as a good tool for reestablishing economics. The mission of ABS is as great as this.

As experiments needed a long time, perhaps centuries, to become established, simulation studies will require a long time before they are firmly established as a mature mode of scientific research. Much work and reflection are needed in building simulation models, implementing them, interpreting simulation results, finding laws, assessing the relevance of models, and other tasks. On the basis of these activities, we need a kind of new philosophy by which to lead our meta-level reflections. We do not yet have a concrete vision at this level, but one will emerge as research through simulation proceeds. It may reveal the strengths and weaknesses of simulations, but at the same time, research will teach us how to compensate for the weaknesses by combining other modes of scientific research. We should be patient: experimentation was not built in a day, and we will need many years, if not centuries, before simulation becomes a full-fledged method of scientific research. In this regard, all computer-based sciences face similar problems. We should promote transdisciplinary communication and discussion, and learners of ABS and ABCE should build the ability to communicate and discuss common problems with researchers in other fields.

4.3 Complexity and Tractability

Process analysis is not only more general than equilibrium analysis; it opens a new logic that has so far been impossible in economic analysis. For example, as discussed at length, increasing returns to scale were a vexing question, but process analysis can easily incorporate them into its logic. It is sufficient to assume that firms produce their products at the same rate at which they are sold (Sraffa’s principle; see page 17).Footnote 29
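A minimal sketch (with purely hypothetical cost, markup, and demand parameters) shows how naturally this works in a process setting: the firm sets a markup price over unit cost and each period produces what it sold, and the fact that unit cost falls with scale causes no analytical difficulty at all.

```python
# Quantity-adjustment process under Sraffa's principle (hypothetical numbers).
import random
random.seed(1)

markup = 0.3
production = 100.0                                       # period-0 production

for t in range(1, 9):
    unit_cost = 10.0 * production ** -0.2                # increasing returns to scale
    price = (1.0 + markup) * unit_cost                   # full-cost (markup) pricing
    demand = random.uniform(0.9, 1.1) * 1200.0 / price   # hypothetical demand curve
    production = demand                                  # produce what is sold
    print(f"t={t}  price={price:6.2f}  production={production:7.1f}")
```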

ABS has, of course, some special features that are not shared by computer simulation in general. Simulations are used in macroeconomics, but macroeconomics does not fully exploit the possibilities of computer simulation: simulations are used only in lieu of solving a system of equations algebraically, and researchers use computers only to obtain numerical solutions. This kind of simulation has no power to rehabilitate economics. What ABS aims at is quite different. ABS may provide a foundation for a new economics that is not based on equilibrium.

An example of such a new possibility is the process analysis of the sequential development of economic states. Sequential analysis is one of the old tools of economics. Stockholm school economists and English economists such as Hawtrey, Keynes, and Robertson discussed monetary problems in this framework in the 1920s and 1930s. They used it extensively but could not obtain firm results, and Keynes returned to more traditional equilibrium analysis in his General Theory [43]. The reason for this retreat is simple. If the sequence is traced by calculating mathematical expressions, the algebraic formulas become too complicated and exceed our ability to manipulate them. When the expressions include max or min operators, the number of cases to distinguish becomes too large for a thorough case analysis. When, instead, the sequence is pursued numerically, we can get a result more easily, but we cannot be sure whether the obtained result reflects any general rule. This concern was partly removed when computers were introduced: starting from different initial conditions, we can trace what happens and see the general pattern much more easily.
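The sketch below (a hypothetical lagged income-inventory process, not any historical model) illustrates the approach: the max operators that make case-by-case algebra explode cost nothing in a numerical trace, and running the same process from several initial conditions lets us see its general tendency.

```python
# Sequential (process) analysis traced numerically from different starting points.
def simulate(income0, periods=20, autonomous=20.0, mpc=0.8, target_stock=50.0):
    income, inventory = income0, target_stock
    path = []
    for _ in range(periods):
        consumption = autonomous + mpc * income               # spending out of last period's income
        production = max(consumption + 0.5 * (target_stock - inventory), 0.0)
        inventory = max(inventory + production - consumption, 0.0)
        income = production                                   # income generated this period
        path.append(income)
    return path

for y0 in (50.0, 100.0, 200.0):                               # different initial conditions
    print([round(y, 1) for y in simulate(y0)][-3:])           # all paths settle near 100
```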

Sequential analysis is also called process analysis. The landscape of process analysis is very different from that of equilibrium analysis; in some sense, they stand at opposite extremes. Process analysis seeks to clarify the mechanisms of change at every move. Equilibrium analysis neglects all these changes and seeks to determine at what state the process ceases to change. If such a state exists, the analysis becomes extremely simple: if we know the mapping from one period to the next, the equilibrium is a fixed point of that mapping, and there is no need to know how the state evolves outside of equilibrium.

Faced with intractability, this simplifying assumption was very useful in some domains. Equilibrium analysis was widely used in mechanics and thermodynamics. In economics, too, in the first phase of mathematical analysis, it was useful as a first approximation: it was reasonable to assume that demands were nearly equal to supplies. However, at some point in economics, equilibrium became a dogma. Whenever a case was encountered in which the equilibrium framework was not applicable, every kind of apology was added, and the framework was saved. Instead of trying to produce a new framework, the majority of economists wanted to conserve the old one. These reactions can be interpreted as a model case of constructing a protective belt in the face of anomalies, in Imre Lakatos’ terms. The equilibrium research program changed from progressive to degenerating.

Process analysis is much more complicated than equilibrium analysis. Instead of analyzing only fixed points, it is necessary to analyze, so to speak, the mapping itself. A general theory is sometimes too difficult to construct. Even a simple quadratic mapping such as

$$ x(n) = a\,x(n-1)\bigl(1 - x(n-1)\bigr) $$

is extremely complicated when we want to classify the different behaviors of the series x(n), n = 1, 2, 3, …, as the coefficient a changes. Li and Yorke [53] proved a theorem: any continuous mapping that maps an interval into itself shows the behavior referred to as "chaos" if the mapping has a periodic point of period 3. This was the starting point of chaos theory.
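To make this concrete, the following Python sketch iterates the quadratic mapping above for several values of the coefficient a and prints the tail of each series; the particular values of a, the initial point, and the number of iterations are chosen only for illustration.

```python
# Iterate the quadratic map x(n) = a * x(n-1) * (1 - x(n-1)) and show how its
# limiting behavior changes with the coefficient a. The chosen values of a and
# the initial point are illustrative only.

def iterate_map(a, x0=0.2, n_steps=1000, tail=8):
    """Return the last `tail` values of the series x(n)."""
    x = x0
    series = []
    for _ in range(n_steps):
        x = a * x * (1 - x)
        series.append(x)
    return series[-tail:]

for a in (2.8, 3.2, 3.5, 3.9):
    values = ", ".join(f"{v:.4f}" for v in iterate_map(a))
    print(f"a = {a}: ... {values}")

# a = 2.8 settles on a fixed point, a = 3.2 on a 2-cycle, a = 3.5 on a 4-cycle,
# and a = 3.9 shows the aperiodic behavior referred to as chaos.
```

Changing the initial point leaves the qualitative picture unchanged for the first three values of a, but at a = 3.9 it produces a completely different aperiodic sequence, which is the sensitivity to initial conditions characteristic of chaos.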

The Li-Yorke theorem destroyed the classical image of a dynamical system. That classical image was rather simple: there are some isolated fixed points, and if the initial point is, by chance, off the fixed points, the trajectory converges to one of them. Indeed, in the case of a two-dimensional system of differential equations, the solution paths fall into three cases. The first is convergence to a fixed point, i.e., an equilibrium point. The second is divergence: the points leave any bounded set. The third is a limit cycle: the path approaches a closed curve. Thus, except for the divergent cases, the limiting state of any dynamic process is either an equilibrium or a closed cycle, showing periodic ups and downs like a trade cycle. However, this image is justified only when we are working with a low-dimensional differential dynamical system. If the system becomes high dimensional, or even in a low-dimensional case if the system is described by difference equations, the limiting behavior becomes astonishingly complex. Li-Yorke chaos is a simple example of the latter case. Many types of strange attractors have been discovered since then. Convergence to an equilibrium point or to a limit cycle is rather the exceptional case.

4.4 Features of Human Behavior

ABSs have two characteristics as a method of analysis: (1) they are built up from the behavior of, and interactions among, economic agents, and (2) they investigate the process by which the economy proceeds and changes. On the first point, ABS models differ from macroeconomic models but share characteristics with microeconomics. As a tool of analysis, the second characteristic is more important: because of it, we can implement many phases of human behavior, which was practically impossible when we were confined to equilibrium analysis. Therefore, let us start with the second characteristic.

A process analysis proceeds as follows. The analysis is divided into steps. In each step, agents do what they can do. This may sound trivial, but in fact, this is a crucial point. Human agents are entities with limited capabilities. As the subject of an action, we can point out three aspectsFootnote 30:

1. Limited range of information gathering (limited sight)

2. Limited ability in information processing (limited rationality)

3. Limited ability to carry out something (limited executive capacity)

Each agent at each step observes a few variables, makes decisions almost instantaneously, and acts. In ABS models, this is equivalent to reading a limited number of already determined values, calculating some simple formulas, and changing some variables. Simon (1984) discussed the first two aspects under the heading of bounded rationality.
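As an illustration, here is a minimal Python sketch of one agent's step under these three limitations. The shop, its threshold rule, and all numbers are hypothetical and serve only to show the shape of such a step, not any particular ABS model.

```python
# A minimal sketch of one agent's step under the three limitations above.
# The shop, its rule, and its parameters are hypothetical.

class MyopicShop:
    MAX_CHANGE = 2  # limited executive capacity: output moves by at most 2 units

    def __init__(self, output=10.0, target_stock=5.0, stock=5.0):
        self.output = output
        self.target_stock = target_stock
        self.stock = stock

    def step(self, sales_last_period):
        # Limited sight: the shop observes only its own sales and its own stock.
        self.stock += self.output - sales_last_period
        # Limited rationality: a simple if-then comparison, no optimization.
        if self.stock < self.target_stock:
            desired = self.output + (self.target_stock - self.stock)
        else:
            desired = sales_last_period
        # Limited executive capacity: the change in output is bounded.
        change = max(-self.MAX_CHANGE, min(self.MAX_CHANGE, desired - self.output))
        self.output += change
        return self.output

shop = MyopicShop()
for sales in (12, 11, 9, 13, 10):
    print(f"sales {sales:2d} -> next output {shop.step(sales):.1f}")
```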

Process analysis can accommodate different time spans. Much decision-making is done habitually, but a few important decisions are made deliberately, consuming many hours and much labor. Katona [41] contrasts habitual or routine behavior with genuine decision-making. Production workers' movements are mostly habitual. Mintzberg [58] reports that a factory manager makes more than 1,000 decisions in a day; those decisions must be habitual ones, whereas a decision to build a new factory or to launch a new product must be a genuine one. Habitual behavior and routine decisions have a short time span, whereas genuine decision-making requires long deliberation and occurs at long intervals.

It is necessary to employ different time scales for different layers of decisions and actions. Various kinds of adjustments take the form of routine behavior and have a proper time span according to the purpose and nature of adjustments.

Building a good ABS requires many capabilities: a good and critical knowledge of economics, a good observer's view of economic affairs, good formulations of human behavior, and good skill in implementing a model. To obtain a good formulation, basic knowledge of human behavior is necessary. Routine behavior is relatively easy to formulate, since it can be expressed as a chain of if-then directives. This formulation has quite wide coverage. The Turing machine idea is based on the fact that any computable function can be represented as an ordered set of directives of the form q1S1S2q2 [78, Subsection 6.4]. We can reasonably suppose that any routine behavior can be depicted as an ordered set of if-then directives of the same form.

We can derive two important lessons from this: (1) it is the internal state q1 that determines what will be observed, and (2) the action to be taken, S2, must lie within the limits of executive capacity. The first point expresses the active and subjective aspect of the agent, and the second indicates that any change an agent can make affects only a small part of the world. These aspects of human behavior can be fully incorporated into ABS models. Contrary to equilibrium analysis, which generally assumes maximizing behavior of the agents, ABS need not assume that human agents are fully rational and farsighted in space and time. Human agents in ABS are thus myopic entities that respond to the small number of variables they can observe. The effects of their actions diffuse slowly, from part to part and step by step, over the entire economy.
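Such a routine can be written down directly as a lookup table of quadruples (q1, S1, S2, q2): in internal state q1, on observing S1, perform S2 and move to state q2. The following Python sketch uses a hypothetical replenishment routine in this form; the states, observations, and actions are invented for illustration.

```python
# Routine behavior as an ordered set of if-then directives (q1, S1, S2, q2):
# "in internal state q1, observing S1, perform action S2 and move to state q2".
# The states, observations, and actions below are hypothetical.

RULES = {
    # (q1, S1):                        (S2, q2)
    ("normal", "stock_ok"):            ("wait",        "normal"),
    ("normal", "stock_low"):           ("place_order", "waiting_delivery"),
    ("waiting_delivery", "stock_ok"):  ("wait",        "normal"),
    ("waiting_delivery", "stock_low"): ("wait",        "waiting_delivery"),
}

def run_routine(observations, state="normal"):
    """Apply the rule table to a stream of observations."""
    for obs in observations:
        action, state = RULES[(state, obs)]
        print(f"observe {obs:9s} -> do {action:11s} (next state: {state})")

run_routine(["stock_ok", "stock_low", "stock_low", "stock_ok"])
```

Note that the same observation ("stock_low") triggers a different action depending on the internal state: the routine orders once and then waits for delivery rather than ordering again.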

Differences between the equilibrium and process analysis are listed in Table 1.2.

Table 1.2 Comparison between equilibrium and process analysis

We lack the space to explain each row of difference in detail, but we can observe from this table that process analysis with the aid of ABS can dispense with various unrealistic assumptions that are often required for equilibrium analysis.

Another merit of process analysis lies in its systematic decomposition into step-by-step examinations. This eases the burden of implementation enormously, because the decomposition of the process into periods can easily be programmed as the repetition of a cycle: once a program for the series of events in one period is written, it can be reused for every subsequent period. This type of repetitive work is exactly what computers do best. Agents' behavior and the total process can thus be implemented in an ABS model. To conclude this subsection, we can safely say that process analysis and computer simulation in the form of ABS have good chemistry.
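This point is visible in the skeleton of a typical ABS program: the events of one period are written once and then repeated. The Python sketch below shows only this loop structure; the agent and the market are trivial placeholders invented for the illustration, not components of any model discussed in this chapter.

```python
# Skeleton of a process-analysis simulation: the events of one period are
# programmed once and then repeated. The agent and market are placeholders
# that serve only to show the loop structure.

import random

class Market:
    def __init__(self):
        self.price = 100.0

class Agent:
    def step(self, market):
        # One period's events for one agent: observe, decide, act.
        if random.random() < 0.5:
            market.price *= 1.01   # buying pressure nudges the price up
        else:
            market.price *= 0.99   # selling pressure nudges it down

def simulate(n_agents=100, n_periods=20):
    market = Market()
    agents = [Agent() for _ in range(n_agents)]
    for period in range(n_periods):       # the repeated cycle
        for agent in agents:              # each agent does what it can do
            agent.step(market)
        print(f"period {period:2d}: price = {market.price:.2f}")

simulate()
```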

4.5 Evolutionary Economics and Micro-Macro Loops

We have listed the merits of ABS models. There is another important advantage that equilibrium analysis does not have: ABS is likely to open up new possibilities for evolutionary economics.

Evolutionary economics emphasizes that major categories of economic entities can be better understood when we conceive of them as something that evolves [78]. Seven categories are notable: commodities, technology, economic behavior, institutions, organizations, systems, and knowledge.Footnote 31 ABS models are a suitable tool of analysis for an evolutionary study of rules of conduct.

ABS may include learning and even evolution. There is no need to keep the set of rules of conduct fixed for all periods. The number of rules may increase or decrease; some rules may be excluded from the set, and others may be added. The voluntary acquisition of new rules of conduct might be called learning, whereas involuntary or unconscious changes in rules of conduct might better be called evolution. However, there is no essential difference between learning and evolution. As with genetic algorithms (GAs), it is also possible to implement selection. Unlike in a standard GA, however, the fitness function is not given a priori in ABS models. Selection mechanisms may differ across categories: firms are extinguished when they go bankrupt, whereas a behavior pattern propagates through more complex channels depending on experience, rumors, and reputation.
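A minimal sketch of such selection, under assumptions invented here for illustration: each firm follows a simple markup rule, realizes a profit that is not known in advance, and occasionally imitates, with small variation, a firm that happened to do better. Fitness is nothing but realized profit.

```python
# Selection of rules of conduct without an a priori fitness function.
# Each firm uses a markup rule; profits are realized, not given in advance;
# firms occasionally imitate better-performing firms with small variation.
# The demand curve, cost, and all parameters are assumptions for illustration.

import random

COST = 1.0

def realized_profit(markup):
    """Hypothetical market response: demand falls linearly with the price."""
    price = COST * (1.0 + markup)
    demand = max(0.0, 10.0 - 3.0 * price)
    return demand * (price - COST)

def evolve(n_firms=50, n_periods=200, seed=0):
    rng = random.Random(seed)
    markups = [rng.uniform(0.0, 2.0) for _ in range(n_firms)]
    for t in range(n_periods):
        profits = [realized_profit(m) for m in markups]
        new_markups = list(markups)
        for i in range(n_firms):
            j = rng.randrange(n_firms)               # meet another firm
            if profits[j] > profits[i]:
                # imitate the more successful rule, with a small variation
                new_markups[i] = max(0.0, markups[j] + rng.gauss(0.0, 0.02))
        markups = new_markups
        if t % 50 == 0 or t == n_periods - 1:
            print(f"period {t:3d}: average markup = {sum(markups)/n_firms:.3f}")

evolve()
```

No firm ever computes a profit-maximizing markup; the population of rules simply drifts toward markups that have performed well in the realized market environment.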

At any rate, ABS and the research agenda of evolutionary economics have much in common. However, the possibilities of ABS do not stop here. The introduction of new rules of conduct may change the mode of movement of the economy, and this changed mode may in turn influence selection. We should therefore study the micro-macro loops that can be observed in the economy.

A micro-macro loop is a kind of "coevolution" between the behavior of the agents and the behavior of the market. However, the term "coevolution" is better reserved for coevolution between two species or two entities at the same level. The concept of a "micro-macro loop" has been proposed to indicate mutual conditioning between different levels [78]. In sociology, a similar term, "micro-macro link," is used. In the latter expression, however, the evolutionary point of view is rather lacking, and it risks being interpreted as an instance of general conditioning between two different levels.

It is easier to understand this notion with an example [83, Chapter 6]. The daily volatility of the Nikkei index has decreased considerably since around 2004. Many explanations are possible, and it is difficult to determine the main factor that pushed volatility down. One possible explanation is that the number of Web traders increased while transaction costs decreased; as a consequence, the number of day traders also increased. One of their preferred trade patterns is to place twin orders to sell and to buy in the same quantity: the selling price is set 1 % higher than the opening price of the day, and the buying price is set 1 % lower than the opening price. If both orders are executed, the trader earns a 2 % margin minus transaction costs, so if the transaction cost is less than 0.5 % per order, the trader nets at least a 1 % profit. This kind of trading behavior should have the effect of suppressing the width of daily ups and downs (which is precisely the definition of "daily volatility" used here). It is possible that this has influenced the volatility of the Nikkei index.

However, the story does not stop here. The expected profit rate of the day traders is conditioned by the daily volatility. When the daily volatility exceeds 2 % on average, the traders' chance of success is rather high. However, if the daily volatility decreases to around 1 % on average, the chance that both of the twin orders are executed decreases. If neither order is executed, there is no harm; but if only one of the two is executed, the trader is obliged to place a counter order to keep his or her position neutral, and even when this succeeds, the trader bears a certain loss on the trade. The expected rate of profit for the traders therefore depends sharply on the level of daily volatility. It is possible that the daily volatility is pressed down to the point where the expected profit rate nears zero. If this is true, it is a beautiful example of a micro-macro loop.
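The structure of this loop can be shown with a deliberately stylized Python sketch. Both relationships used below, volatility falling as the number of twin-order traders rises and the probability of filling both orders rising with volatility, are assumed functional forms invented for this illustration, not estimates for the Nikkei.

```python
# A stylized micro-macro loop: day traders suppress daily volatility (macro),
# while daily volatility determines the traders' expected profit (micro).
# Both functional forms and all parameters are assumptions for illustration.

def daily_volatility(n_traders):
    """Assumed macro relation: more twin-order traders -> lower volatility."""
    return 0.03 / (1.0 + 0.001 * n_traders)

def expected_profit(volatility):
    """Assumed micro relation: higher volatility -> both legs fill more often."""
    p_both_filled = volatility / (volatility + 0.01)
    return 0.02 * p_both_filled - 0.01      # 2 % gross margin minus ~1 % cost

n_traders = 0.0
for step in range(201):
    vol = daily_volatility(n_traders)
    profit = expected_profit(vol)
    # Entry and exit: traders flow in while the strategy remains profitable.
    n_traders = max(0.0, n_traders + 200_000 * profit)
    if step % 25 == 0:
        print(f"step {step:3d}: traders = {n_traders:7.0f}, "
              f"volatility = {vol:.4f}, expected profit = {profit:+.4f}")
```

In this toy, the number of traders grows until volatility is pressed down to the level at which the expected profit of the twin-order strategy is approximately zero, which is exactly the self-limiting loop described above.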

We can observe many different micro-macro loops in the economy. Another example is the loop between a country's foreign exchange rate and its productivity improvement [25, §6]. Productivity improvement is the result of changes in workers' behavior (labor productivity), production processes, institutional and organizational improvements, and other factors. If the general level of productivity of a country increases, the foreign exchange rate changes, in the long run (probably over 4 to 5 years), in favor of the country. This means that the real wages of the country increase and that firms and workers are obliged to improve productivity further to maintain competitiveness. This micro-macro loop thus generates truly dynamic development. Another example is related to the so-called Japanese mode of management [77, 78].

At a more basic theoretical level, micro-macro loops also play an important role. This can be observed in the production adjustment process when firms produce according to Sraffa's principle. If firms react to present demand, the economy-wide adjustment process is normally divergent even if the demand flow is stationary. However, if firms adjust their production based on a demand prediction that averages demand over more than five periods, the economy-wide adjustment process converges to a constant production level that corresponds to the given demand [84].
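The stabilizing role of averaging can be glimpsed even in a one-firm toy model, which is of course far simpler than the multi-sector process analyzed in [84]; the inventory-correction rule and all numbers below are assumptions made for illustration.

```python
# A one-firm toy (far simpler than the multi-sector model in [84]) suggesting
# why producing on the basis of averaged demand stabilizes production, whereas
# reacting to the most recent demand amplifies fluctuations.
# The inventory rule and all parameters are assumptions for illustration.

import random
import statistics

def simulate(window, n_periods=2000, target_stock=20.0, seed=1):
    """Production = demand forecast + correction of the inventory gap.
    `window` is the number of past periods averaged to forecast demand."""
    rng = random.Random(seed)
    demand_history = [10.0] * window
    stock = target_stock
    productions = []
    for _ in range(n_periods):
        forecast = sum(demand_history[-window:]) / window
        production = max(0.0, forecast + (target_stock - stock))
        demand = rng.gauss(10.0, 1.0)        # stationary demand flow
        stock += production - demand
        demand_history.append(demand)
        productions.append(production)
    return statistics.stdev(productions[100:])  # discard the warm-up periods

print("std of production, reacting to last period's demand:", round(simulate(1), 2))
print("std of production, averaging demand over 5 periods: ", round(simulate(5), 2))
```

With the same stationary demand flow, the naive rule makes production fluctuate roughly twice as much as demand itself, while the five-period average keeps the fluctuation of production close to that of demand.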

We can observe many micro-macro loops in economic processes, and their typical time structure deserves a remark. Each loop has two arrows of causation: one runs from micro to macro, the other from macro to micro. The first effect is easy to see and nearly instantaneous: the behavior of each agent generates the total process. The second effect is more complicated and depends on an eventual change in the agents' behavior; it is thus an evolutionary process. Behavioral evolution requires more time than the micro-to-macro effect. Micro-macro loops are therefore observable only when we examine an economic process over a relatively long time span.

Micro-macro loops are an interesting topic in themselves, but they also have grave consequences for the methodology of the social sciences. Neoclassical economics stands on methodological individualism. Sociology is divided into two stances: methodological individualism and methodological holism. The existence of micro-macro loops signifies that neither methodological individualism nor holism is valid, because the actual state is determined as a result of evolutionary development and structured by micro-macro loops. In this sense, micro-macro loops overcome the old dichotomy between methodological individualism and holism. Both are insufficient; we must observe micro-macro loops.

Micro-macro loops cannot be clarified by equilibrium analysis. They have not been investigated deeply and analytically because of a lack of appropriate tools. ABS models have the potential to provide those tools. When they succeed in this task, economics will change enormously.

5 Conclusions

Economics is now seriously ill. It needs a fundamental change in its framework. Renovation requires new paradigms in both principles and research methods. ABS, as the third paradigm of scientific research, offers a good chance at the required renovation. As a new mode of research, it has the potential to change economics as greatly as mathematics changed it in the twentieth century (even if that change went in the wrong direction). ABS makes it possible not only to solve present problems more smoothly but also to pose new problems and make them tractable. When ABS is developed, we will be liberated from the yoke of the equilibrium framework. Researchers who work on ABS models have a duty to develop them. Building a good ABS model requires a good critical knowledge of economics, a deep understanding of human behavior, and a good knowledge of ABS as the third mode of economic research. This is a heavy burden. This guided tour aims to be helpful for young ABS researchers.