The connections between economics and statistics are close but complex; for a better understanding they need to be considered within the perspective of the history of thought.

This does not mean going back in time to remote origins. Attempts to trace back the origins of our disciplines may lead us as far back as we like, but the exercise in retrospective exploration rarely proves interesting. What is worth recalling here is the fact that for a long time and in an important line of research—that of the political arithmeticians—no distinction was made between the figure of the economist and that of the statistician (nor even of the demographer), for they all converged in the same figures: William Petty and John Graunt around the mid-seventeenth century, and then Gregory King and Charles Davenant and so on, to Patrick Colquhoun with his Treatise on the wealth, power and resources of the British Empire of 1814, taking in on the way the famous missing appendix of the Essay on the nature of trade in general by Richard Cantillon (1755). Political arithmetic was the term used by Petty himself and later by his successors to denote their researches based on quantitative estimation of the main characteristics of contemporary societies.

Attempts to identify a separate origin for one of our disciplines, attributing to John Graunt alone the origin of demography, come up against the fact that William Petty clearly exerted a strong influence on his friend’s work, to the extent that it has even been conjectured that the Natural and political observations upon the bills of mortality of the City of London, published in 1662, were in fact written by Petty, anxious to have his friend admitted to the Society for the Advancement of Learning (or Royal Society, as it is known today).Footnote 1 This is most probably a far-fetched hypothesis, but Petty did have an influence, and a great one; in any case, it is certainly a mistake to isolate Graunt’s work from the broader context of political arithmetic. This is, of course, just one episode, but it helps to remind us that political arithmetic represented one broad line of research.

There can be no denying that subsequently the various disciplines followed largely independent courses for a long period of time. It was a time when, in the natural sciences as in the social sciences, the segmentation of research went hand in hand with scientific progress, and indeed was often identified with it. In the economics field we have industrial economists, economists of labour, money, energy and tourism, public finance, historians of economic thought and so on. Similar divisions also exist in the field of statistics, albeit with possibly less clear-cut demarcations. The SIS (Società Italiana di Statistica) counts among its members probabilists and demographers, market researchers and operations researchers, and so forth.

We should distinguish between segmentation in the fields of research and specialisation among researchers. The latter is inevitable despite the fact that often in the course of a lifetime’s research there is time to address various issues and acquire a range of specialisations. Segmentation, on the other hand, has to do above all with the professionalisation of research, and thus with the organisation of academic careers, driving us, for example, to take great pains over redefining chair groupings. Segmentation can be very harmful, as for example when it results in relegating fields of research on the borderline between sectors to a limbo, as is the case today with the history of economic thought. But even specialisation itself, albeit undoubtedly necessary within certain limits, can only work well as long as we do not lose sight of the connections with the foundations of fields of research that may seem to be somewhat remote from the field chosen to work in. To take just a couple of examples, the labour economist cannot afford to be totally unaware of political organisations or labour law, while the demographer must have some acquaintance with Markov chains and economics.

Taken to extremes, both specialisation and segmentation are harmful. Twenty years ago Giacomo Becattini, then president of the Società Italiana degli Economisti, had already warned of “specialists of the left big toe” who lose sight of the connections with the rest of the foot, not to mention the rest of the human body. Since then segmentation and, above all, specialisation have continued to forge ahead.

For example, it can happen—in fact it happens quite often—that an industrial economist unacquainted with the history of thought unwittingly repeats the very mistakes made by Marshall that Sraffa had already pointed out in 1925. Indeed, in some cases—and in particular with reference to the separation between macroeconomics and microeconomics—the division between fields of research may play a crucial role in deflecting criticisms that could otherwise prove unanswerable, for it allows macroeconomists to ignore the findings of the debate on capital theory, traditionally included in the field of value theory, and thus of microeconomics.

If relations between the different fields of research within economics (or, I imagine, within statistics) are already so difficult, the difficulties are obviously all the greater if we go on to consider relations between statisticians and economists.Footnote 2 The distinction between the two fields—statistics and economics—is clear enough today, and there is no need to go into precise definitions (which could, indeed, complicate matters). Although I graduated in statistics, I cannot claim to be considered a statistician, nor even “also” a statistician, as well as being an economist. But if we look back to the period before the professionalisation of our research fields, we see a rather different situation. I had heard of William Petty and the tradition of political arithmeticians from Vittorio Castellano, who was my professor of statistics, before I was told about them by Paolo Sylos Labini and Piero Sraffa, my masters in economics. Each of them referred to different aspects of a personality as multifaceted as that of Petty, who had also been among other things a cartographer, a physician, a professor of music and a nautical engineer. What is, however, certain, is that the statistician and the economist cohabited with no distinction in Petty’s political arithmetic.Footnote 3

To bring Petty’s position into clearer focus, we will take a step back. As Petty himself explicitly recognised, behind his method—the method of political arithmetic—lay Bacon, whose teaching in this respect can be summed up in a celebrated passage:

the men of experiment are like the ant; they only collect and use: the reasoners resemble spiders, who make cobwebs out of their own substance. But the bee takes a middle course; it gathers its material from the flowers of the garden and of the field, but transforms and digests it by a power of its own. Not unlike this is the true business of philosophy; for it neither relies solely or chiefly on the power of the mind, nor does it take the matter which it gathers from natural history and mechanical experiments and lay it up in the memory whole, as it finds it; but lays it up in the understanding altered and digested.Footnote 4

Here Bacon contrasts the inductive method—a blend of empiricism and rationalism—with the syllogistic-deductive method of the Aristotelian tradition and the Renaissance tradition of pure empiricists (technicians and alchemists). The reasoning—the theoretical elaboration—neither precedes nor follows the collection and processing of data, but accompanies it. Not only are the data organised within a sort of “preanalytic vision”, as Schumpeter called it, i.e. within a broad outline conception of the object of study, but it is also necessary to piece together a network of interrelated concepts forming the framework within which the collection of data takes place. The latter operation may prove complex, as the experts examining national accounts know all too well, but it is of prime importance; Schumpeter himself saw it as the first step, after the “preanalytic vision”, in the work of research in the field of the social sciences.Footnote 5

To take an example, unemployment data are collected on the basis of precise definitions that may be modified in the course of time and generally differ from country to country; nevertheless, behind these definitions lies a theoretical approach—the neoclassical or marginalist one—according to which unemployment measures the distance from full employment equilibrium, defined in terms of equality between labour demand and supply. On the other hand, in classical and Sraffian theory there is no such thing as the notion of a full employment equilibrium and, correspondingly, there is no notion of unemployment or rate of unemployment, nor are there any attempts to measure this magnitude. In classical theory, rather, the central concept lies in the levels or rates of activity or employment.Footnote 6

Crude empiricism in which all conceptual construction is eschewed can go no further than collecting disordered information on realities. However, it is in fact necessary to impose some order on the collection of information unless we are prepared to make do with a one-to-one scale description of reality—a feat that is not only practically impossible but also useless, for it would offer no help in getting the bearings of a situation, which is precisely what the social scientist sets out to do.

Alongside his condemnation of crude empiricism, Bacon delivers an at least equally radical indictment of the syllogists—scholars set on confining research activity to mere deductive logic. His reference here, as we have seen, is to the tradition of Scholastic philosophy, but his critique might equally be taken to apply to attempts to reduce the various fields of social sciences to applications of mathematics—attempts that have in fact been stepped up over the last few years, despite the warnings of Fuà, Becattini, Lombardini, Sylos Labini and others in a celebrated letter published in the daily newspaper Repubblica on 30 October 1988.

With his political arithmetic Petty was working within the methodological framework indicated by Bacon, but there was an important additional element, namely the influence of Hobbes’s materialist sensism. When he states “Pondere mensura et numero Deus omnia fecit” (citing a famous passage from the Bible, the Book of Wisdom, xi. 20), Petty makes it clear that he sees in political arithmetic an adequate tool not only to describe reality but also to represent it theoretically, precisely because, according to the materialistic–mechanical tradition pursued by Galileo and Hobbes, reality itself has a quantitative structure. Let us recall in this connection the famous passage in Galileo:

this great book which is open in front of our eyes – I mean the Universe – […] is written in mathematical characters.Footnote 7

In other words, for Petty it is not only a matter of surveying and describing reality in terms of number, weight and measure, i.e. in quantitative terms, but also of expressing oneself in those terms in the attempt to interpret reality, identifying its salient characteristics, precisely because the inner structure of reality “is written in mathematical characters”, for the physical sciences as much as for the sciences of the human body or the social sciences. The task of the scientist is therefore to discover these laws, in the etymological sense of removing the cover that hides them, identifying them beneath the covering of those multifarious contingencies that complicate the world—without, however, changing its intrinsic nature.

This eminently clear-cut methodological conception attended upon the birth of the modern sciences and their early developments, from the physics of Newton to the chemistry of Laplace, from anatomy to political arithmetic. Contrasting with it, about a century later, we have Adam Smith’s approach, stated with a degree of emphasis in his celebrated Wealth of Nations: “I have no great faith in political arithmetick”.Footnote 8 This position has often been interpreted as mistrust of the rough-and-ready methods Petty and his followers applied in estimating statistical data, but it was not simply this (or, at least, not only this). Undeniably, Petty’s data were assembled in a decidedly rudimentary way, ingenious as it often proved. Since then considerable progress has been made in the methods of collecting statistical data. Nevertheless, even today, albeit with all due caution, many make use of the estimates of the last of the political arithmeticians, the late Angus Maddison, on population and income trends from the beginning of the modern era to today,Footnote 9 and many worthy economic historians and statisticians are engaged in the labour of reconstructing the time series with no hope of achieving the degree of precision that may, on average, be taken as guaranteed for the data produced by Istat (the Italian Statistical Institute) today. However, the point Smith was making with his declaration of no confidence in political arithmetic is rather more important. It represented a methodological conception differing from Petty’s in at least some important respects, implying a critique of the idea that political arithmetic—or, in our context, statistical analysis—opens the way to the discovery of actual “laws” inherent in nature and society (and thus something different from simple statistical regularities or “stylised facts”, as Nicholas Kaldor used to call them).

Smith referred to this methodological conception in the opening pages of his History of Astronomy, a text highly praised by Schumpeter, who challengingly considered it the only work by Smith endowed with real originality.Footnote 10 Before recounting the history of the transition from the Ptolemaic to the Copernican conception, SmithFootnote 11 explained that nature appears to us as a series of “events which appear solitary and incoherent with all that go before them, which therefore disturb the easy movement of the imagination”; in order to surmount this vexing situation men have recourse to philosophy, or scientific reflection. More precisely, philosophy is defined as “the science of the connecting principles of nature” (where again we find the conception of the sciences of nature, man and society as all one). The task of philosophy is “to introduce order into this chaos of jarring and discordant appearances”, “by representing the invisible chains which bind together all these disjointed objects”. Performance of this task entails construction of “philosophical systems” (just like the two different cosmological conceptions, Ptolemaic and Copernican, illustrated by Smith in the following pages of his text). These philosophical systems, Smith stresses, are “mere inventions of the imagination, to connect together the otherwise disjointed and discordant phaenomena of nature”. In other words, the researcher contemplating some aspect of the world and seeking to interpret its functioning (the philosopher, in Smith’s terminology) has an active role, of creation and not discovery of theories. Thus Smith rejects the idea of a mathematical structure for reality, as Galileo had argued for physics and astronomy, as Hobbes and subsequently Condillac with sensism had extended to the human body and as Petty and the political arithmeticians had extended to the “body politic”, or in other words society.

“Philosophical systems” like the Ptolemaic or Copernican systems may be “inventions of the imagination”, but they can help us to get our bearings amid the chaos of real events. However, there is clearly no possibility to verify the theories by demonstrating that they correspond to the intrinsic laws of nature, for this would require such laws to have an existence of their own, independently of the theories: to be inscribed in the real world, and not a creation of our thought. In comparing different philosophical systems and choosing between them there are, therefore, no straightforward, unambiguous answers.

Thus Smith anticipated the positions recently advanced by Feyerabend or McCloskey, with their reference to “honest conversation” and a “rhetoric” of scientific debate. More precisely, in the Lectures on rhetoric Smith proposed a model resembling procedure in a trial at law, presenting the evidence for or against a certain thesis in order to choose which propositions to accept or reject.Footnote 12 Such ideas have a long tradition behind them, going back to the Greek Sophists, who disputed the theses advanced by Socrates and Plato on the existence of a Truth inscribed in reality which philosophical investigation should seek to unveil. By contrast, the Sophists advocated open discussion on the pros and cons of each particular thesis.

In scientific debate the rhetorical method presupposes honest behaviour on the part of all the participants in the debate. This connection between theory of knowledge and ethics is made possible by recourse to the notion of the impartial spectator, which Smith (1976) set out in his Theory of moral sentiments, published in 1759. Individuals evaluate their actions (in our case researchers evaluate their theories) taking the point of view of an “impartial spectator” who, aware of all the details known to them, abandons all personal interests and judges on the basis of the moral criterion later developed by Kant in the Critique of practical reason.

Apart from method, a further aspect distinguishing Smith from Petty, an undeclared but recognisable follower of Machiavelli, is precisely the ineradicable presence of an ethical element in analysis of reality, both in his method of work, as we have seen, and in his choice of problems and, indeed, in the simplifications of reality that have to be made to be able to address them at the theoretical level.

Working with statistics also remains fundamental to this methodological approach, but no longer as a means to discover and represent laws inscribed in reality, nor as a tool to verify theoretical findings. Here, rather, statistical work constitutes a fundamental tool for assembling evidence in support of this or that thesis, albeit with the understanding that the various theses can never be definitively judged true or false, but only more or less likely in the light of the knowledge so far acquired. We may follow Popper (1934) in holding that refutation is more important than confirmation in the work of scientific research, but always with the proviso that no such refutation can be taken to be definitive, either. As Popper put it, one black swan sufficed to refute the thesis that all swans are white; yet DNA analysis of black swans—unavailable when this variety of swan was first observed in Australia—might in principle have subsequently revealed such a genetic distance from white swans as to force us to consider them two different species.

A position similar in many respects to Smith’s was arrived at a century and a half later by John Maynard Keynes, with his Treatise on probability (1921). In the succession of various approaches to probability, Keynes’s contribution followed upon the classic conceptualisation by James Bernoulli (1654–1705; his Ars conjectandi was published posthumously in 1713) and his immediate successors, including his nephew Daniel Bernoulli (1700–1782), and upon the frequentist approach expounded by, among others, John Venn (1834–1923), author of a celebrated text, Logic of chance (1866), and professor at Cambridge at the time Keynes was studying there, but before the subjectivist line taken by De Finetti (1930), Ramsey (1931) and Savage (1954).Footnote 13

As we know, in the classical definition probability is expressed as the number of favourable cases divided by the number of possible cases. This definition applies particularly well to “regular” games like roulette or dice, provided, of course, that we are thinking of a perfectly constructed roulette wheel or dice that are not loaded. In fact, the definition implies complete specification of the range of events divided into a finite number of elementary events (e.g. the six sides of the die, the thirty-seven numbers of the roulette wheel—bearing in mind that zero is also to be included), considered equally probable on the basis of the principle of insufficient reason or principle of indifference, according to which there is no reason to consider one elementary event more probable than another. The task pursued by the calculus of probability is to determine the probability of complex events, like two or seven emerging as the sum when two dice are thrown.
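The classical definition described above lends itself to a direct sketch. The following minimal example (the function and variable names are illustrative, not drawn from the text) enumerates the thirty-six equally probable elementary events of two fair dice and computes the probability of a complex event such as a sum of seven or of two:

```python
from itertools import product
from fractions import Fraction

def classical_probability(event, sample_space):
    """Number of favourable cases over number of possible cases,
    all elementary events assumed equally probable."""
    favourable = sum(1 for outcome in sample_space if event(outcome))
    return Fraction(favourable, len(sample_space))

# Two fair dice: 36 equally probable elementary events.
two_dice = list(product(range(1, 7), repeat=2))

p_sum_seven = classical_probability(lambda d: sum(d) == 7, two_dice)  # 6/36 = 1/6
p_sum_two = classical_probability(lambda d: sum(d) == 2, two_dice)    # 1/36
```

Using exact fractions rather than floating-point numbers mirrors the finitary character of the classical definition: every probability here is a ratio of two counts.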

Here we come up against two limitations. To begin with, applying the findings of probability calculus to a concrete game implies a distinct assumption which is not always verified and which we cannot generally rely on being correct, namely that the concrete game in question is practically indistinguishable from the perfectly “regular” game analysed at the level of the theory. Secondly, and even more importantly, the vast majority of the problems that we are interested in studying from a probabilistic point of view have nothing like the same characteristics as the regular game.

On the other hand, the frequentist approach is bound up with the methodology of inductive knowledge. The idea is that the probability of an event is the limit to which the relative frequency of the event tends in successive observations (stochastically independent from each other) of some variable, for instance the stature of conscripts or the throw of a die, or repeated independent measures of the same magnitude, when the number of observations tends to infinity. As with the classical definition, the frequentist definition implies an objective view of probability. The objective nature of the probability statement lies in the fact that it is considered to depend on the intrinsic properties of the phenomenon under consideration, and thus in a sense derived from them.
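The limit in the frequentist definition can only ever be illustrated, never observed, since any actual series of observations is finite. A small simulation (a sketch with illustrative names, using a fair die as the observed variable) shows the relative frequency of a six settling near 1/6 as the number of independent throws grows:

```python
import random

def relative_frequency(event, trials, seed=12345):
    """Relative frequency of `event` over `trials` independent
    throws of a simulated fair die."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = sum(1 for _ in range(trials) if event(rng.randint(1, 6)))
    return hits / trials

# The frequency of a six drifts towards 1/6 as observations accumulate,
# but no finite run ever reaches the limit the definition speaks of.
estimates = [relative_frequency(lambda face: face == 6, n)
             for n in (100, 10_000, 1_000_000)]
```

Note that the convergence shown here is itself only probabilistic: as the text goes on to observe, no finite series of observations, however long, constitutes the infinite “collective” the definition strictly requires.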

Strictly speaking, as Richard von Mises, an exponent of the frequentist approach, pointed out, it can be applied only to “collectives”, or in other words successions of uniform events differing only in some observable characteristic which is the object of scrutiny, when the principle of randomness holds—when, that is, no regular sequence occurs, making it impossible to devise a successful strategy applicable to the order of the sequence. Clearly, these are very restrictive conditions, which should stand in the way of applying the frequentist approach to any social phenomena. More generally, if we interpret the requisite of an infinite series of observations literally, no finite series of observations, however long, can constitute a collective. In this respect we may recall the sceptical view of the inductive method taken by David Hume long ago, in the eighteenth century: experience of past events can play an important role in moulding our beliefs; however, it can be contradicted by subsequent events; consequently, we cannot infer from a sequence of past events generalisations applicable to future events.

According to the subjective (or “personalist”) approach which was proposed around 1930 by Bruno de Finetti and Frank Ramsey independently of one another, and which gained ground after publication of the text by Savage (1954), the statement of probability is subjective in the sense that it is a state of mind, not a state of nature. Probability can be defined as the lowest betting odds one would accept on a given event. If the individual in question is indifferent to the event, then the “supply price” and “demand price” of the bet are equal, and correspond to the assessment of the probability of the event considered by that individual. The mathematical theory of probability is entrusted with the task of ensuring the logical consistency of each agent’s book of bet offers, identifying arbitrage strategies should misalignments arise.
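The consistency requirement mentioned above is de Finetti’s coherence condition: an agent’s betting quotients over an exhaustive set of mutually exclusive events must sum to one, or an opponent can construct a “Dutch book” guaranteeing the agent a loss whatever happens. A minimal sketch (the horse race and all names are hypothetical illustrations):

```python
def is_coherent(quotients, tol=1e-9):
    """de Finetti coherence: betting quotients over an exhaustive set of
    mutually exclusive events must sum to 1; otherwise an arbitrage
    (a 'Dutch book') against the agent is possible."""
    return abs(sum(quotients.values()) - 1.0) < tol

def sure_loss(quotients):
    """If the quotients sum to S > 1, an opponent who sells the agent a
    unit bet on every event collects S and pays out exactly 1 (exactly one
    event occurs), leaving the agent a guaranteed loss of S - 1."""
    return sum(quotients.values()) - 1.0

# Hypothetical three-horse race with incoherent personal probabilities:
# the quotients sum to 1.1, exposing the agent to a sure loss of 0.1.
race = {"Alpha": 0.5, "Beta": 0.3, "Gamma": 0.3}
```

This is what it means to say that the mathematical theory polices only the internal consistency of an agent’s book of bets: nothing in the coherence condition ties the quotients to the actual horses.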

According to the Italian mathematician Francesco Paolo Cantelli (1875–1966), the field of probability calculus is made up of various subfields, for each of which one of the various approaches mentioned above will prove the most suitable: urn theory (conceived as a development of the classical theory of probability) for cases in which equally probable atomic events can be defined; the frequentist approach for matters of insurance; and the subjective betting approach for fields such as horse races. Naturally, as pointed out by De Finetti, the mathematical treatment is similar in the three cases; what Cantelli meant to stress was the different natures of the phenomena considered, implying different procedures to assemble the data upon which to perform probabilistic analysis.

Keynes, who acquired a grounding in mathematics in Cambridge, offered an original contribution to the topic in a book (Keynes 1921) which began as a fellowship dissertation but took several years to complete. His contribution, which he continued to defend in the following years, even in the face of the new subjective approach formulated by his friend Frank Ramsey, addresses three aspects: definition of probability as pertaining to the field of logic, the concept of the “weight of the argument” and the so-termed “theory of groups”.

Keynes defines probability as “the degree of rational belief” that one may have in a proposition (a hypothesis) on the basis of the available evidence. Thus in itself probability is not an objective property of the phenomena under consideration, but a logical relationship introduced by the agent between the available evidence, on the one hand, and the proposition under consideration (primary proposition) on the other. The logical relationship (or secondary proposition) can differ from one agent to another due to differences in the knowledge each may have, but also to differences in intellective powers. At the same time, the probability statement retains a certain empirical correlative in the reference to the available evidence, which has the effect of a constraint on the rational observer.

In this connection we may recall the ethical element which Smith introduced into scientific debate: the agent cannot decide at will the probability to be attributed to an event (the secondary proposition), but must respect the evidence at his disposal, seeking to infer the same evaluations of probability as anyone else might. For Keynes, then, it is not only the internal consistency of the subjective evaluations of probability of each person that counts, but also the objective correspondence with what we know; indeed, although it cannot be unambiguously specified, this objective element constitutes the dominant aspect, to the extent that we may classify Keynes’s among the objective rather than subjective theories of probability.

In this way we can distinguish “rational” from “irrational” belief; the ethic of individual responsibility, in which Keynes follows Moore, leads our evaluations of probability to respect the hard facts, as far as we can know them, since those evaluations are to serve as a guide for our actions. From this point of view, the evaluation of probability must be as independent as possible of our subjective preferences. We may take the case of a doctor working on a diagnosis: first comes the effort to gather the relevant evidence regarding the patient; then a conclusion must be drawn that, fallible as it may be, is the best the doctor can offer, where “the best” means not the most optimistic conclusion but the one that best corresponds to the actual state of the patient. (Here it is also worth considering the concept of relevant evidence: there is no need to know everything about the patient and the surrounding world, but only as much as can serve to evaluate the possibility of illness and contagion, taking into account the fact that the diagnosis has to be made as promptly as possible, or at least within a reasonable length of time, a circumstance characterising the diagnoses of both the doctor and the economist. The relevant evidence has to do with the problem addressed, implying a selection guided by the initial hypotheses, which can then be redirected towards new aspects on the basis of the provisional evaluations as they are formulated.)

The second notion that Keynes introduces into his logic of probability is the “weight of the argument”, defined as the amount of relevant evidence upon which our probability statement is grounded. This is an additional dimension in the evaluation of probability. It cannot always be expressed quantitatively, nor indeed precisely, given the logical impossibility of defining a priori the entire field of evidence which would serve to express a sure evaluation of probability. (Definition of the weight of the argument as the ratio between the evidence known to us and the full evidence can help us to understand the concept, but does not allow us to measure it, since the very notion of full evidence cannot in general be defined a priori.) Nevertheless, even a rough idea of the weight of the argument can serve to distinguish between essentially different situations. For example—and coming to a point of considerable relevance to the economic theory that Keynes was to develop in the following years—we can distinguish between the uncertainty surrounding the decisions that entrepreneurs have to take on levels of investment in new plant and the decisions that financiers have to make in choosing the assets in which to invest their financial resources. The problems of everyday life are normally characterised by an intermediate degree of uncertainty, somewhere between the maximum associated with total ignorance and the minimum associated with established facts or matters of probabilistic risk (such as playing with dice deemed not to be loaded).

This is a point that needs to be borne in mind if we are to understand the third element of Keynes’s theory, the theory of groups. The “group” is specified in purely logical terms, as a set of propositions with two components: those propositions (independent of one another) that define the group as premises, and the propositions that can logically be derived from the premises. Within each “group” we can provide evaluations of probability, and the probability calculus ensures their internal consistency. The idea of the group recalls the “language games” of the later Wittgenstein (1953), the Wittgenstein of the Philosophical Investigations, but also the stress placed by Alfred Marshall, Keynes’s master, on the use of “short causal chains”.Footnote 14 Our analyses must address well-defined issues; they cannot cover overlarge fields without losing solidity.

In the light of this as well as of the critiques raised against the frequentist approach, the inductive method—and thus the use of statistical inference—calls for a considerable degree of caution. This is certainly not to say it should be rejected, for the tools of statistics are indeed of great help in getting the bearings of the available evidence, but there is the need to guard against the possibility of inferring laws of general applicability from the analysis of specific datasets.

In other words, the prospect that Keynes opens up for us lies along a tricky path between the Scylla of the uncritical empiricism of the frequentist tradition and the Charybdis of the solipsism of the subjective approach, which concentrates on the internal consistency of the system of individual beliefs while failing to take into account the fact that such beliefs guide action insofar as they refer to the realities we are faced with.

The foregoing points can be taken as a background for a few final, alas somewhat pessimistic, considerations of a personal nature on the relationship between statistics and economics and, more generally, between statistics and the social sciences in research and teaching.

When the time came for me to enrol at university, having decided to study economics I sought advice on the faculty I should apply to—economics, political sciences, social sciences, law or statistics? The most useful advice came from two sources. An elderly great-uncle, professor of commercial law, sagely advised me: “What counts are the professors you will study with, more than the subjects you study. Enrol where Sylos Labini teaches, whichever that faculty may be”. So I went to meet Sylos: “To be an economist,” he told me, “you need history, philosophy, mathematics and statistics. The latter are essential and hard to learn on your own; enrol in Statistics”. Luckily the two recommendations coincided, and so I enrolled in Statistics.

Of course, this would not have been the most appropriate choice if I had wanted to study economics as a technical subject, to seek a job in some firm. But it was the best choice to study economics as a social science. Application of statistical methodology is impossible without a good knowledge of the field of the issues to be investigated; in fact, as we have seen, statistical inference cannot lead to scientifically definitive results. At the same time, a discipline like economics cannot be cultivated by deducing abstract theories from a priori premises, like Bacon’s spiders, but calls for constant comparison with reality—a scientifically serious comparison, mediated by the tools of quantitative analysis. This, I believe, applies to all the social sciences, which all need strong quantitative support and sound foundations of mathematical logic. It is a fact that has found increasing recognition throughout the world, with repercussions on the organisation of studies in the field of the social sciences. And I believe it lay behind the happy intuition of the founder of the Rome Faculty of Statistics, seeking to achieve a balance between mathematical and statistical tools and the substantive social disciplines in the order of studies.

Alas, this cultural plan—which saw the school of Sylos Labini turning out a score of full professors of economics, and top executives at the Treasury and the Bank of Italy, as well as at the World Bank and the OECD and elsewhere—has now been abandoned, with the closing first of the degree course in Statistics and Economics, then of the Department of Economics founded by Sylos Labini, to be followed by the Department of Social Sciences which we had fallen back upon, and finally the Faculty of Statistics. The idea of producing economists with a leaning towards finance, as a joint product of the preparation of actuaries, could have been, within certain limits, a useful addition to a plan for the preparation of quantitative sociologists and economists, but not a substitute for it. Moving in this direction, the tradition of statistical studies itself is doomed to steady decline. I can only hope that the other universities do not repeat the mistakes committed by the Sapienza University of Rome.