These are the most directly and readily observable attributes of commodities (goods and services produced for and exchanged on the market). Both price and quantity relate to a unit (piece, bushel, barrel, pound etc.), established usually by commercial practice as the customary unit of reckoning.

The intrinsically numerical character of prices and quantities renders accounts and statistics, the incessant measurement of the stream of commodities, feasible. This preoccupation is motivated by, and in turn yields motivation to, business and economic interests. It also seems to be responsible for the profound drive to develop economic theories with the aid of mathematical tools already applied successfully to the exigencies of the natural sciences.

The units of measurement are manifold on the various markets and are also arbitrary to a certain extent. If the units undergo any changes, say, when measuring in grammes instead of ounces, then the numerical magnitude of both prices and quantities changes accordingly. Nevertheless this change in their numerical expression must not alter the total value (volume) of a given amount of commodities so measured: if the unit is doubled then the price of the new unit doubles likewise but the numerical expression of the quantity is halved.
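In symbols: if the unit is rescaled by a factor s > 0, a price p per old unit becomes sp per new unit, while a measured quantity q shrinks to q/s, so the value of any given amount of the commodity is preserved:

$$ (sp)\cdot \frac{q}{s} = pq $$

The doubling example above is simply the case s = 2.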

This interdependence of prices and quantities prompted an historically early perception of their parallel, dual character. To this was soon added the appreciation of the mutual effect they exert on each other on the market. As Smith (1776) explained: if the quantity brought to market surpasses the effective demand, that is if an oversupply exists, this will depress prices. On the other hand, a high or excessively profitable price will induce stepped-up production of the commodity in question, and possibly also a reduction in its effective demand. This skew-symmetric relationship, with quantities acting negatively on prices while prices influence quantities positively, has remained the popular wisdom of everyday economics up to the present day.

Later investigations and descriptions pointed to the existence of different mechanisms; be it the ‘target farmer’ in Third World countries who reacts to a rise in prices by reducing the quantity brought to market, or instances of administratively guided economic situations where the economic agents try to minimize their productive effort once prices are fixed. Still the basic form of interdependence on the market, as elucidated by Smith, remained valid in the majority of economic transactions and gained popular and scientific sanction and consensus.

Smith argued that there was a more or less perfect functioning of the ‘invisible hand’ of the market forces that promote equilibrium (equality of production and consumption, prices and costs) on almost all markets almost all of the time. Equilibrium therefore came to be seen as the normal state of affairs: the productive effort geared to match effective and solvent needs of the society. Random shocks, whether caused by changes in taste, technology or circumstances, were believed soon to be adjusted to. Hence the general prescription to economists (and politicians): not to interfere with this near perfect mechanism and not to tolerate obstacles, constraints, monopolies hampering the smooth operation of markets.

Here the economic profession split for the next two centuries. Economists less convinced about the fairness and impartiality, optimality and efficiency of markets, and worried also about the historically emerging adverse tendencies, started critical investigations. They still accepted equilibrium as a theoretical tool of reasoning, yet became increasingly aware of certain inadequacies observed on the market. With Ricardo (1817) and Marx (1867) the school of the labour theory of value came into being. This school maintained that prices and quantities are regulated in the last instance by the respective amounts of live and congealed labour bestowed on the production of the commodities in question. They were interested mainly in long run tendencies in the economic circumstances of whole societies and used equilibrium reasoning to spell out these tendencies and also as a critical tool against existing imperfections. They were also responsible for developing more clearly the dual categories of value-in-use and value-in-exchange: the extensive and intensive attributes of commodities. Marx particularly excelled in developing economic terminology in decidedly dual categories, with analogous and parallel reasoning for price-type and quantity-type theorems: for instance, the process of production and the process of realization, surplus product and surplus value, technical and organic composition of capital, etc. This he considered the main achievement of his approach. As he put it:

The best thing in my book is: 1. the emphasis on the dual character of labour, right in the first chapter, according to whether the labour is expressed in use value or exchange value (this is the basis of the whole understanding of facts).

The other school, less critical of the market and seeking rather the perfection of market mechanisms, has been interested more in the short-run responses of the economic system, looking for local and particular explanations of the actual behaviour found on the diverse markets. They maintained that prices and quantities are determined by the marginal adjustments needed to adapt to equilibrium; thus prices, in particular, depend on marginal costs, and quantities are determined by maximizing profits. Among others, it was mainly Pareto (1896) and Marshall (1920) who honed the economic arguments to the textbook precision of present-day economics.

With Böhm-Bawerk (1896) the battle between the two schools became exacerbated and they spared no argument in refuting ‘inimical’ standpoints. This confrontation remained heated and mostly unjust on both sides, harbouring a sometimes implicit, sometimes explicit, political content roughly dividing the two camps into evolutionary and revolutionary protagonists.

Considered on its strictly theoretical merits, the feud nevertheless resembles the altercation in mechanics: Newton’s followers started from equilibrium considerations in search of the causa efficiens, while d’Alembert’s disciples fought for an optimizing approach, looking for the causa finalis, the aim and purpose of motion. It took much time and pain finally to acknowledge the basic equivalence of the two seemingly inimical and antagonistic approaches.

A similar insight has been injected into economics by von Neumann (1937). The theoretical roots of his approach to and model of General Economic Equilibrium can be found partly in earlier unifying efforts in mathematical economics and partly in thermodynamic reasoning.

As a pioneer in mathematical economics Walras (1874–7) had already developed a model to determine the prices and quantities of a given economic system simultaneously. By establishing 2n equations in the 2n unknowns, n prices and n quantities, he claimed the problem to be theoretically solved.

The idea was brilliant, the set-up ingenious, the proof incomplete. By counting equations it is not possible to prove existence and uniqueness of a mathematical solution. Even in the relatively simple case of linear equations where all the unknowns appear in their simplest form, multiplied only by some coefficients and then added up, the equations may be inconclusive. They may be contradictory, not permitting any solution at all. They may also be redundant and allow multiple solutions. And even if a solution exists and is unique we cannot exclude on a priori grounds some negative elements. Yet negative prices or negative quantities are usually meaningless in an economic context and cannot be accepted as genuine solutions.
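Two equations in two unknowns already illustrate the pitfalls. The system

$$ x + y = 1,\qquad 2x + 2y = 3 $$

is contradictory and has no solution, while

$$ x + y = 1,\qquad 2x + 2y = 2 $$

is redundant and admits infinitely many; in neither case does the count of equations against unknowns reveal the failure.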

These perplexing problems were eliminated finally by von Neumann in the following way.

Let $A = \{a_{ik}\}$ be the matrix of commodity inputs $i = 1, 2, \dots, m$ required to sustain one unit of the process $k = 1, 2, \dots, n$, and similarly $B = \{b_{ik}\}$ the matrix of outputs yielded by the respective processes. Then, given a price vector p and a vector of quantities (or ‘intensities of production’) x, the products pAx and pBx express the total value of inputs and outputs respectively. Thus λ = λ(p, x) = pBx/pAx represents the rate of interest (as a relation of proceeds to advances in the process of realization) or, equally, the rate of possible growth (as a relation of commodities produced to commodities consumed in the production process).

Analysing the gradients of this function leads to the following dual conclusion: If ∂λ/∂x = (pB − λpA)/pAx is non-positive, that is if

$$ pB\le \lambda pA $$
(1)

then λ cannot be further increased by any variation of x and hence will be maximal. If strict inequality obtains in (1) for any k, then $x_k = 0$, because the process operates at a loss and should be discontinued.

If, on the other hand, ∂λ/∂p = (Bx − λAx)/pAx is non-negative, that is if

$$ Bx\ge \lambda Ax $$
(2)

then λ cannot be further diminished by any variation of p and hence will be minimal. If strict inequality obtains in (2) for any i, then $p_i = 0$, because the commodity is produced in a superfluous quantity and thus turns into a ‘free’ good.

Von Neumann now proved that the function λ (p, x) has a ‘saddle point’ for positive prices and quantities, where the maximal rate of growth equals the minimal rate of interest. Thus he succeeded in solving the economic problem of equilibrium by defining a so-called potential function and replacing equations by inequalities. Existence and positivity of prices and quantities in equilibrium still permit multiple equilibria, in a double sense.

Firstly, as can be seen, every multiple of the equilibrium price system yields the same equilibrium value and likewise every multiple of the equilibrium quantities is again a system in equilibrium. Thus only proportions and not absolute magnitudes are determined. Yet by choosing, as Walras did, one of the prices as ‘numeraire’ and expressing all the others as multiples of this ‘numeraire’ – and fixing one of the quantities as the reference unit – the system can be made wholly determinate.

Secondly, there are certain cases – they could be called ‘degenerate’ – where true multiplicity of entirely different solutions may emerge. This problem can sometimes be remedied by a small perturbation of the initial data. Yet it now appears that the possibility of multiple equilibria cannot be ruled out ab ovo, because they may appear in real economic systems just as well.
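Numerically, the saddle-point value can be located by elementary means. The following sketch (assuming numpy and scipy are available; the matrices are invented for illustration, and the bisection-plus-linear-programming route is a standard computational device, not von Neumann’s own argument) finds the maximal growth factor compatible with (2):

```python
# Minimal sketch: locate von Neumann's maximal growth factor by bisection,
# testing at each step whether some intensity vector x >= 0 satisfies (2).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0],   # illustrative input matrix: a_ik per unit of process k
              [2.0, 1.0]])
B = np.array([[3.0, 1.0],   # illustrative output matrix: b_ik per unit of process k
              [1.0, 3.0]])

def expandable(lam):
    """Is there an x >= 0, summing to 1, with B x >= lam * A x?"""
    m, n = A.shape
    res = linprog(np.zeros(n),                          # feasibility only: objective 0
                  A_ub=lam * A - B, b_ub=np.zeros(m),   # (lam*A - B) x <= 0
                  A_eq=np.ones((1, n)), b_eq=[1.0])     # normalization excludes x = 0
    return res.status == 0                              # status 0: feasible point found

lo, hi = 0.0, 10.0                                      # bracket for the growth factor
for _ in range(60):                                     # bisection to high accuracy
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if expandable(mid) else (lo, mid)
print(f"maximal growth factor: {lo:.6f}")               # ~ 1.333333 for these matrices
```

The minimal rate of interest is found by the symmetric feasibility test on prices, pB ≤ λpA, from (1); at the saddle point the two values coincide (here at 4/3).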

The theoretically decisive root of von Neumann’s approach can be found in phenomenological thermodynamics, especially with Gibbs (1875), whose treatise ‘On the Equilibrium of Heterogeneous Substances’ synthesized classical thermodynamics and opened the way for physical chemistry. Gibbs first applied a ‘max-min’ criterion for equilibrium – maximizing entropy and minimizing energy, just as von Neumann maximized the growth rate and minimized the rate of interest – and he seems to have been the first to apply inequalities as well as equations in the description and analysis of equilibrium.

Von Neumann was fully aware of the analogy and stressed it when setting up his potential function Φ (X, Y) to be maximized by quantities X and minimized by prices Y:

A direct interpretation of the function Φ (X, Y) would be highly desirable. Its rôle appears to be similar to that of thermodynamic potentials in phenomenological thermodynamics; it can be surmised that the similarity will persist in its full phenomenological generality (independently of our restrictive idealization).

Von Neumann’s original notation followed the then accepted usage in physics: X for ‘extensive magnitudes’, that is quantities, and Y for ‘intensive magnitudes’, that is prices. The gradients of a potential function (the partial derivatives with respect to the variables) spell out the ‘force field’ in physics, and the vanishing of those gradients is the necessary requirement of equilibrium. In the von Neumann model, as in thermodynamics, theoretical considerations induce a complex ‘saddle point’ problem: instead of simply maximizing the potential function, the saddle point can be found only through minimizing by some and maximizing by other variables.

It is not pure coincidence that this thermodynamic approach proved to be so fertile in handling economic problems. New investigations in the axiomatic foundation of thermodynamics indicate (Giles 1964, p. 26) that ‘any experimentally verifiable assertion of thermodynamics can be expressed in terms of states with the aid of the operation + and the relation → alone’.

Though the axioms related to the permitted → transformations may turn out slightly differently in economics – there is important work on variously formulated basic axioms, Debreu (1959) being a powerful and articulate example – it is evident that the mathematical structure underlying the two scientific disciplines is closely similar.

The new approach, because of the unification of criteria of optimality with criteria of equilibrium, did much to bridge the gap between the two opposing schools of economic thought. Both found their basic ideas tolerably well reflected in the set-up of the von Neumann model and hence a new round of revision and even partial reconciliation could be started.

One should stress that it was surely the ‘restrictive idealization’ that facilitated the general acceptance of the new approach. The model encompasses only linear processes: a linear combination of inputs results in a likewise linear combination of outputs. It represents, furthermore, only the production of freely reproducible commodities, that is, it contains no external constraints on the scale of production. Such a model keeps data and computational requirements relatively modest and is also easy to grasp.

With matrix notation now universally accepted, this convenient shorthand made the model mathematically transparent. The very simple statement of dual equilibrium, λpA = pB and λAx = Bx, could not possibly be simplified further.
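When both conditions hold with equality and A and B are square, the pair is a generalized eigenvalue problem of the two matrices, and price and quantity proportions fall out together. A minimal sketch, with the same illustrative data as above (taking absolute values of the eigenvectors is a convenience, valid when the relevant eigenvectors are strictly of one sign):

```python
# Sketch: solve B x = lam A x together with the left problem p B = lam p A.
import numpy as np
from scipy.linalg import eig

A = np.array([[1.0, 2.0], [2.0, 1.0]])      # illustrative inputs
B = np.array([[3.0, 1.0], [1.0, 3.0]])      # illustrative outputs

w, vl, vr = eig(B, A, left=True, right=True)  # generalized eigenproblem of (B, A)
k = np.argmax(w.real)                         # economically relevant (largest real) root
lam = w[k].real
x = np.abs(vr[:, k].real); x /= x.sum()       # intensities, normalized to sum 1
p = np.abs(vl[:, k].real); p /= p.sum()       # prices, normalized to sum 1
print(lam, x, p)                              # ~ 1.3333, [0.5 0.5], [0.5 0.5]
```

For these matrices the root is again 4/3, agreeing with the bisection above; price and quantity proportions coincide here by symmetry.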

We now have an almost complete mathematical theory of so-called ‘matrix pencils’, that is matrices of the form A + λB. It is interesting to note that Weierstrass (1867) reported on his investigations concerning this form in the same decade in which most of the ingredients indispensable for our topic to take its present shape were published. Marx, Walras, Gibbs and Weierstrass made their results known in the same decade, not only independently but without having the slightest notion of one another’s work.

With the advent of computers, also pioneered by von Neumann, matrices with several thousands of rows and columns became manageable, and this permitted and motivated an ever-broadening use and proliferation of a family of models having their theoretical and mathematical source in the von Neumann model.

Some very important and justly famous models were developed in the following decades. Being all equivalent in a mathematical sense to the von Neumann model, as has been demonstrated in most instances by the respective authors themselves, they can and should be considered as mathematical variants of the latter: input-output analysis, as proposed by Leontief (1941); linear programming, as investigated by Dantzig (1947) and Kantorovich (1940); the neo-Ricardian model set up by Sraffa (1960); and finally two-person game theory, an earlier product of von Neumann (1928), reaching broader scholarly circles only with the von Neumann and Morgenstern (1944) volume. (The last contains a further generalization to n-person games.)

In spite of the mathematical equivalence, those models have been developed mostly independently and have roots in widely different economic considerations. Sraffa’s approach, a careful and consistent restatement of Ricardo’s value theory, proved to be particularly important. The underlying idea is, if possible, even simpler here. In a self-replacing system where, in the absence of growth, λ = 1, and with no joint products, hence B = 1 (the unit matrix), the prices can be determined unequivocally by the postulate that the inputs required to reproduce the respective commodities have to be defrayed from the proceeds of selling the same commodities. Hence the proportions of prices and quantities are determined by the dual system of equations

$$ pA = p\quad \mathrm{and}\quad Ax = x $$
(3)
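A minimal numerical sketch of (3): prices and quantities are the left and right eigenvectors of A belonging to the eigenvalue 1. The input matrix below is invented for illustration and is rescaled so that its dominant root is exactly 1, which is what self-replacement requires:

```python
# Sketch: price and quantity proportions of the self-replacing Sraffa system (3).
import numpy as np

A = np.array([[0.3, 0.4],
              [0.5, 0.2]])                     # illustrative input coefficients
A = A / np.max(np.abs(np.linalg.eigvals(A)))   # enforce dominant root 1: self-replacement

w, vr = np.linalg.eig(A)                       # right eigenvectors: quantities, Ax = x
k = np.argmin(np.abs(w - 1.0))
x = np.abs(vr[:, k].real); x /= x.sum()

w2, vl = np.linalg.eig(A.T)                    # left eigenvectors: prices, pA = p
k2 = np.argmin(np.abs(w2 - 1.0))
p = np.abs(vl[:, k2].real); p /= p.sum()

print(p, x)                                    # price and quantity proportions
```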

Still, in the more realistic cases, when extended reproduction and joint products have to be admitted, the description and solution are more rigorously and easily furnished by embedding the Sraffa system in a general von Neumann model.

Considering also the neo-Marxian restatement of labour theory as furnished by Brody (1970) and Morishima (1973), exploiting the Leontief model, where

$$ p\left(A + \lambda B\right) = p\quad \mathrm{and}\quad \left(A + \lambda B\right)x = x $$
(4)

with B interpreted as a stock-input matrix, a certain consensus seems to have been reached:

According to the neoclassical exposition of Hahn (1982), all the schools would compute the same numerical magnitudes for prices and quantities for an economic system in equilibrium. They would accept the same system of equations, though they would interpret those equations differently. Deeper and yet unreconciled differences emerge only when abandoning the critical point of equilibrium.

With painfully won reconciliation in sight, a new theoretical attack on equilibrium reasoning takes shape. Kornai (1971), collecting all the critical observations, and deeply influenced by the inadequacies of economic systems which endeavour to replace the market by equilibrium computations, declared that the equilibrium school ‘has become a brake on the development of economic thought’.

Paradigms – and equilibrium thinking is one such, with a domain much broader than economics alone – are seldom damaged by criticism. They may be done away with only by new and more powerful paradigms. Hence they rather thrive on objections – and all the internal problems were already present with Smith, who implicitly or explicitly maintained that equilibrium (i) exists, (ii) is optimal, (iii) is pursued and (iv) is also achieved.

Existence has so far been proved only under ‘restrictive idealization’ in linear models, but by a shrewd mind who knew that most functions, however complicated, may be approximated linearly by taking their derivatives in the neighbourhood of the point analysed. (This may be achieved by taking a series expansion and neglecting terms of higher order.) The isomorphism of matrices and operators was also well known to this pioneer of operator theory. So it is no wonder that all the models introduced are wide open to further generalization. Here non-linear programming, with Kuhn and Tucker (1956) and Martos (1975), and non-linear input-output models, with Morishima (1964), have to be mentioned, as well as the success in generalizing the von Neumann model by Medvegyev (1984) and in applying operator calculus by Thijs ten Raa (1983). An increasing unification with linear and non-linear systems theory and with modern non-equilibrium thermodynamics can be safely predicted.

Optimality has also ethical, social, psychological and political connotations, because one has to propose an entity (growth rate, utility, satisfaction, equity etc.) to be optimized. In this respect our subject belongs to the domain of welfare economics. Mathematically, the question is fairly simple: equilibrium and optimality can be made to correspond, because solving equations is equivalent to minimizing the errors of the solution. That is: the solutions of Ax = b, and of Ax = r with $\sum (r - b)^2$ minimal, are the same if they both exist.
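A minimal numerical check of this equivalence (illustrative numbers; numpy’s least-squares routine stands in for ‘minimizing the errors’):

```python
# Sketch: for a consistent system, the least-squares minimizer coincides
# with the exact solution of the equations.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x_exact = np.linalg.solve(A, b)                # solve Ax = b exactly
x_lsq, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimize the sum of squared errors
print(np.allclose(x_exact, x_lsq))             # True: the two solutions coincide
```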

Ethical, political, and other convictions will of course always influence scholars in choosing and developing their topics but, luckily, they do not play any role in proving or refuting theorems and corollaries.

Stability – whether equilibrium can be achieved if pursued, and maintained once achieved – is the most interesting question in the forefront of present research. The stability analysis of economic systems, performed by methods borrowed again from physics and thermodynamics (analysis of the eigenvalues of the response matrix, negative definiteness, discussion of the second partial derivatives, the Le Chatelier–Braun principle etc.), indicates that both market and planning systems are usually stable, yet seldom asymptotically stable; and where they are asymptotically stable, the speed of convergence is usually very slow.

Stability means that a given deviation from equilibrium will not grow without bound: if the deviation is initially small it will not become infinite. This secures the feasibility of the system, its ability to function; yet a system may be stable and still perform very poorly. Even asymptotic stability, that is the decline and eventual vanishing of discrepancies, is an unsatisfactory criterion in economic matters, because by the time the equilibrium point is reached or approximated it may already have been displaced by changes in the system itself.
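The eigenvalue test mentioned above is easily illustrated. For a linear adjustment process x(t+1) = Mx(t) describing deviations from equilibrium, stability is read off the spectral radius of the response matrix M (the matrix here is purely illustrative):

```python
# Sketch: stability of a linear adjustment process via the spectral radius of M.
import numpy as np

M = np.array([[0.65, 0.30],
              [0.20, 0.75]])                        # illustrative response matrix

rho = np.max(np.abs(np.linalg.eigvals(M)))          # spectral radius of M
print("stable (deviations stay bounded):", rho <= 1.0)
print("asymptotically stable (they die out):", rho < 1.0)
print("characteristic decay time ~", 1.0 / (1.0 - rho), "periods")
```

With a spectral radius of 0.95 the system is asymptotically stable, but discrepancies shrink by only five per cent per period: precisely the slow convergence noted above.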

In reality economic systems move not in slowly changing equilibrium states but along so-called transients, a succession of non-equilibrium positions. Thus we are still far from an acceptable theory of economic motion. The models introduced spell out the requirements of equilibrium but not the actual forces bringing, or failing to bring, the system to equilibrium. Still, certain inroads have been made by models of cycles, for example Kalecki (1935), Goodwin (1967) and Brody (1985).

But perhaps more important than analysis is the task of synthesis. Acknowledging that neither plan nor market can avoid economic fluctuations, the quest for controlling prices and quantities in a smoother and more efficient way is understandable. Questions of optimal control in linear and non-linear systems emerge, and once these are approximately solved the search will unavoidably go deeper: how to control the position of equilibrium itself, how to become master of structure and technology. To shape interdependence itself in a conscious manner, to influence the outcome of technological and structural change, is the next item on the agenda of mathematical economics.

See Also