
It is a widespread opinion that the field of research called “Foundations of Mathematics” became a proper field of study at the end of the nineteenth century, deeply related to mathematical logic: Frege, Peano, Hilbert and, most recently, Gödel are just some of the most representative names.

The identification of the foundations of mathematics with the philosophy of mathematics is disputable—even if it is widely endorsed. I don’t take this identification in a literal sense (indeed, it would be obviously false, since it would deny the existence of a philosophy of mathematics before Frege). I rather take it to mean that philosophical thinking on mathematics and the foundations of mathematics were identified during the thirty years from Frege to Gödel. Indeed, the so-called “foundational” schools (Logicism, Intuitionism and Formalism) also focused on philosophical problems, for instance the classic one about the existence of mathematical entities.

Whether or not one agrees with the previous identification, it is unquestionable that it cannot be supported nowadays. There are in fact several recent philosophical views developed from a non-foundationalist or even anti-foundationalist perspective: for instance, extreme forms of Lakatosian mathematical empiricism, which deny that mathematics needs foundations, and hence any analysis of them. On the other hand, it is obvious that the philosophy of mathematics—as a specific field in philosophy—is influenced by the more general philosophical climate, which in turn can influence foundations in a more or less direct way. For instance, if on the one hand the search for rigour and strong foundations for mathematics seems consistent with a Neopositivistic point of view, on the other hand attention to the fallibility of mathematics, which proceeds by trial and error, is easily associated with irrationalistic tendencies, especially in some countries.

Under these assumptions, the aim of this contribution is to analyze some topics in the philosophy of mathematics. These topics are chosen especially among those which directly or indirectly also concern the foundations of mathematics, and in particular those to which Evandro Agazzi has given the most relevant contributions.Footnote 1

Another circumstance is worth mentioning: research in mathematical logic and in the foundations of mathematics in Italy was brilliantly started by Peano’s school, but was abruptly interrupted by many causes that I will not fully describe here, but which—in summary—are only marginally connected with the influence of Croce’s and Gentile’s Neo-idealism and with the hostility that Peano aroused among contemporary mathematicians. In fact, the main cause was that Peano’s school had completed its own foundational program and was not interested in joining the new programs that were then starting in other countries.Footnote 2 In the 1960s there was a revival of foundational studies in Italy: Italian scholars therefore had “to recover” many years of foundational research during which Italy had been absent from the international scene.Footnote 3

The mainstream point of view in those years (among logicians and scholars of foundations) was that mathematical theories were formal systems. This means that the vast majority of scholars believed that both the axiomatization step (of the modern type, i.e. that of hypothetical-deductive systems, in Pieri’s terms) and the subsequent one, the formalization step, always had to be accomplished. The latter step was completely unrelated to mathematical practice, and required that any theory make explicit the logical deductive rules used, thus providing a precise characterization of proofs (formal proofs) within that theory. These proofs, in fact, were defined as finite sequences, or finite trees, of formulas linked to each other by logical rules. This latter step appears obvious nowadays, especially in the area of computer science research that strives to assign proving tasks to computers (computers can exclusively make formal proofs!), but in those years it had different reasons: it was related to Hilbert’s program, which was then very influential (as it still is, in part, nowadays). Hilbert’s program required a deep analysis of proofs in order to guarantee that no contradiction could be derived: therefore proofs had to be rigorously defined (as does not happen in mathematical practice!), indeed become formal, and be studied by what Hilbert called Beweistheorie (proof theory). Let us emphasize—even if I said so above—that this approach affected logicians and researchers on foundations, but hardly touched the mathematicians working in the traditional fields of mathematical research. These mathematicians, in fact, considered the use of logic an obstacle to their research (it was too fussy for them). By the way, this is a respectable approach to logic—perhaps a natural and obvious one—even if it is sometimes expressed by quoting the sarcastic words that Poincaré used with regard to formal proofs.
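As a sketch of the definition just recalled (the notation here is mine, not drawn from the text), a formal proof can be rendered as follows:

```latex
% Illustrative sketch of the standard notion of a formal proof;
% the notation is mine, not the author's.
A formal proof of a formula $\varphi$ in a formal system $S$ is a finite
sequence of formulas
\[
  \varphi_1,\, \varphi_2,\, \dots,\, \varphi_n \qquad (\varphi_n = \varphi),
\]
where each $\varphi_i$ is either an axiom of $S$ or is obtained from
earlier members of the sequence by one of the inference rules of $S$,
for instance modus ponens:
\[
  \frac{\varphi_j \qquad \varphi_j \rightarrow \varphi_i}{\varphi_i}
  \qquad (j < i).
\]
```

A proof defined this way is a purely syntactic object, which is exactly what makes it amenable to the meta-mathematical study Hilbert envisaged.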

In such a context, which saw formalization as the culminating point in the development of a theory (not in order to work within it, but to work on the theory), the so-called limitative theorems arrived very soon, in spite of Hilbert’s optimism (which in hindsight we now see as unjustified). They are theorems of logic (perhaps meta-theorems would be a better word), which show that in pursuing rigour at the level of formalization we run into problems that are either unsolvable or have unwelcome solutions.Footnote 4

Among these theorems, Gödel’s second incompleteness theorem (1931)Footnote 5 has a prominent position: for any consistent formalized system that is sufficiently powerful (i.e. which can formalize at least elementary arithmetic), a consistency proof cannot be carried out by proof techniques belonging to the system in question. This means that such a proof cannot be achieved by means of those elementary and reliable methods, i.e. finitary methods, proposed by Hilbert for this purpose. Even if Hilbert never said exactly what he meant by ‘finitary’ or ‘finitistic’ methods, it was immediately evident that they were just a part—actually a very restricted one—of all the proof techniques of arithmetic.
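In modern notation (a standard formulation, not taken from the text), the theorem can be stated as follows:

```latex
% Standard statement of Gödel's second incompleteness theorem.
% Con(T) abbreviates the arithmetized sentence expressing
% "no contradiction is derivable in T".
If $T$ is a consistent, recursively axiomatized theory containing
elementary arithmetic, then
\[
  T \nvdash \mathrm{Con}(T),
\]
i.e.\ $T$ cannot prove its own consistency by means formalizable
within $T$ itself.
```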

Probably a stronger blow to formalism was dealt by Gödel’s first incompleteness theorem, although this is seldom mentioned. This theorem concerns the syntactic incompleteness of arithmetic: against Hilbert’s famous claim “In mathematics there is no ignorabimus”, the incompleteness theorem for arithmetic showed that there are mathematical questions that cannot be decided. Indeed, there are closed formulas (i.e. propositions which, given an interpretation, can be said to be true or false) which can be shown to be neither provable nor refutable, and this phenomenon is not due to a deductive weakness of the formal system.Footnote 6
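The first theorem also admits a compact standard formulation (again, the notation is mine, not the author's):

```latex
% Standard statement of Gödel's first incompleteness theorem.
For every consistent, recursively axiomatized theory $T$ containing
elementary arithmetic, there is a closed formula $G_T$ (the Gödel
sentence) which $T$ neither proves nor refutes:
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T,
\]
although, under the standard interpretation in the natural numbers,
$G_T$ is true.
```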

However, a way out of (today it seems we should say: an expedient against) Gödel’s second incompleteness theorem was proposed shortly thereafter. It was an attempt to extend Hilbert’s finitism by carrying out consistency proofs through methods which, on the one hand, could not be formalized within the theory under scrutiny (so as to escape Gödel’s theorem), and on the other hand were sufficiently reliable to be used in research on proof theory. These are the constructive methods, typically used in intuitionistic mathematics, but here employed in meta-mathematics rather than in mathematics. It is remarkable that the consistency proof for arithmetic given by Gentzen in 1936—in the so-called ‘modified’ or ‘generalized’ Hilbert’s program—has been judged “acceptable from an intuitionistic point of view”, even though it is surely not finitary. But here a crucial question emerges: who guarantees the reliability of these constructive methods? Of course, no further consistency proof was available, because it would have required further methods to deal with the problem (thus generating an infinite regress). So these constructive methods had to be accepted for their capacity to persuade intuitively; meta-mathematics was by its very nature an informal theory. Thus, having taken the search for rigour to the highest level through formalization, one was eventually obliged to come back to an informal theory, at least at the meta-mathematical level.

More recently, in a deeply changed philosophical context, some have taken a more radical position. Since going back to an informal treatment is unavoidable sooner or later, why shouldn’t we stick to it from the beginning, giving up the formalization step and directing philosophical analysis directly at informal (or pre-formal, non-formalized) mathematics? These two approaches are deeply different: it is one thing to resign oneself to a certain return to the informal in metamathematics; it is another to require that the mathematics to be studied (not only by mathematicians, but also by philosophers) should instead be non-formalized mathematics.Footnote 7 However, Lakatos—from whom I have taken the above observationFootnote 8—was interested in the philosophical revaluation of informal mathematics. Informal mathematics proceeds through trial-and-error processes, by ‘proofs and refutations’. Moreover, this is the mathematics practiced daily by mathematicians, and it is different from the idealization constituted by formal mathematics. In any case, Lakatos’ philosophy of mathematics seems to stall on the fundamental questions it posed, particularly in dealing with the problem of the potential falsifiers for informal theories.

From his Popperian, quasi-empirical and fallibilist approach, Lakatos rightly analyzes the problem of potential falsifiers for mathematics and correctly distinguishes between the cases of formal and informal theories. The theorems of the informal theories are the potential falsifiers of the formal theories. This is absolutely natural if one believes in the supremacy of informal theories over formal ones, although it is acceptable only for well-established informal theories (of which formal systems are intended to be counterparts). As to the informal theories, instead, Lakatos does not offer an exhaustive explanation of what their potential falsifiers could be. He only offers unfinished glimpses, which do not seem to have been adequately developed by others, except by proposing again the traditional problem about the nature of mathematical entities.Footnote 9

Agazzi’s point of view on these matters is not as extreme as Lakatos’, but there are some common elements: the need for a return to the informal in metamathematics is evident. Agazzi analyzes the question of the return to an informal approach in mathematics and in meta-mathematics through the distinction between—in Agazzi’s terms—“concrete” theories and “abstract” ones, the latter characterized, he stresses, by a language of very broad and general scope. As a matter of fact, some mathematical theories, for instance arithmetic—which deals with natural numbers—are intended to describe privileged models. Even if the approach can be syntactic, the guide is always semantic (the intended model). Other theories, instead, are “abstract” by their very nature, and having several models is an advantage in terms of their general applicability. According to Agazzi, the concrete theories have “a content that doesn’t appear far from the content we usually attribute to the empirical sciences”.Footnote 10 These considerations can be seen as the conclusion of the long course that, starting from classical axiomatics, arrived at the modern one and was finally crowned by the critical awareness produced by the theorems on the limitations of formalisms.Footnote 11 According to the classical perspective, theories were intended to deal with certain mathematical objects (discovering their properties), whereas after the transition to modern axiomatics, theories were—so to speak—emptied of their contents. This means that the syntactic view became prevalent, if not exclusive. This fact is well illustrated by the statement (rather unhappy from a terminological point of view) that the axioms implicitly define the primitive concepts. But Gödel’s theorem shows that there are true propositions about natural numbers that nevertheless cannot be proved within the formal system for arithmetic. This reveals that, beside the formal system (which, by the way, has infinitely many models, not even isomorphic to each other), there exists the structure of the natural numbers, about which it should be arithmetic’s task to make true assertions.

This perspective raises, however, the problem of what kind of existence should be assigned to the objects of a theory: for instance, Kronecker claimed that the numbers were created by God, while for Frege and Russell they are sets. Again, for intuitionistic mathematics numbers are built on the basis of two-ity. Who is right? We are dealing with a multiplicity of choices. Agazzi seems inclined toward a constructive conception, which also allows him to treat mathematics and empirical theories in a unified manner. An empirical theory cuts out its “objects” within a universe of “things” by means of operational predicates. These represent the “point of view” of the theory, from which objects are studied.

Similarly, mathematical objects will be identified by the operations that are considered typical of the theory under scrutiny (consider, for instance, the difference between the arithmetic of the natural numbers and that of the integers or of the rationals, based on the fact that we want to operate with subtraction and division without exceptions).
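A familiar example may make the point concrete (the example is mine, not taken from the text):

```latex
% Subtraction is not everywhere defined on the naturals, nor
% division on the integers; each extension restores closure:
\[
  3 - 5 \notin \mathbb{N}, \quad \text{but} \quad 3 - 5 = -2 \in \mathbb{Z};
  \qquad
  1 \div 2 \notin \mathbb{Z}, \quad \text{but} \quad \tfrac{1}{2} \in \mathbb{Q}.
\]
% (Division by zero, of course, remains excluded even in Q.)
```

In each case it is the operation we insist on performing without exceptions that determines which objects the theory must recognize.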

It will not seem inappropriate, I hope, to recall an observation due to Peano, focused on a distinction similar to the one I am dealing with. In 1906, Peano wrote that a consistency proof is not required for theories such as arithmetic or geometry, while it is appropriate when the postulates are hypothetical and do not correspond to real facts. The context was that of the early meta-theoretical research at the beginning of the twentieth century. In 1900, in Paris, Hilbert had posed the question of the consistency of mathematical analysis. Russell, in 1902, had discovered his antinomy. In 1904, Hilbert had laid the groundwork for what later became his foundational program. Moreover, there had been some “misunderstandings” between Padoa and Hilbert (actually, on Padoa’s part, concerning Hilbert’s judgment of his work), while Pieri had supported the idea that it was in principle appropriate to look for a consistency proof.Footnote 12 In this context, Peano had stayed on the sidelines of the debate: as mentioned above, a consistency proof is not necessary for what Agazzi calls “concrete” theories, since they “speak” about certain real objects. This position could be labeled a form of Platonism, and one could stress what Pieri pointed out as the difference between his own “abstract” position and Peano’s physico-geometrical one. But perhaps this position is more sophisticated. Peano wrote: “The axioms of arithmetic, or of geometry, are satisfied by the idea that every writer of arithmetical and geometrical issues has about the number and the point”. Moreover, Peano added: “We think the number, therefore it exists”.Footnote 13 It is remarkable that precisely when the formal way of conceiving mathematical theories was developed, a distinction was also drawn according to which only some axiomatic systems retain the status of theories endowed with content.

If all this seems to undermine the epistemological interest of the attempts to prove the consistency of arithmetic, let us recall that this interest has always been limited: it was nothing more than a step toward more significant mathematical theories. Moreover, it must never be forgotten that the problem for which solutions had been sought was the consistency of analysis, and that the first result obtained within Hilbert’s program was a proof given by Ackermann in 1924—later shown to be flawedFootnote 14—whose aim was to prove the consistency of classical analysis.

The above-mentioned approach of Lakatos inspired, a few years later, the well-known book Proofs and Refutations: The Logic of Mathematical Discovery, a manifesto of modern mathematical empiricism (or quasi-empiricism). More recently, this empiricism in mathematics has in turn given rise to new trends in the philosophy of mathematics. These trends agree on the end of foundationalism and on the fallibility of mathematics (often labeled a “loss of certainty”). Again, they agree on the assimilation of mathematics to empirical theories (encouraged by the results concerning computer-aided proofs), and share doubts about the value of traditional proving activities, which have sometimes even been declared “dead”. Finally, these trends agree in linking this topic to the chronic troubles affecting the daily teaching of mathematics, and in blaming formalism for them (which, I submit, is at least arguable).Footnote 15

However, on this occasion the target is not merely formalization, but mathematical logic itself. Mathematical logic had been the main protagonist of the foundational studies, for instance in Frege and Russell’s logicism—which placed it as the foundation of mathematics—and in Hilbertian formalism, in which it was an essential tool for the formalization of mathematical theories as well as for the consistency proofs. It has been considered appropriate to replace—at a methodological level—this kind of logic—which, meanwhile, had become a proper and autonomous mathematical discipline—with a new logic, a logic of discovery, as the subtitle of Lakatos’ book explicitly points out. In summary, the object of the philosophy of mathematics should not be the “moment of justification” but mathematical practice, that is, mathematics in its development, which especially includes the set of all those procedures followed in the search for proofs, of which there is no trace in the final proof (formal or not). The traditional studies on foundations have largely disregarded this aspect, focusing on the analysis of proofs as finished products. This deficiency has produced a widespread and almost complete lack of interest among mathematicians in the topics of the philosophy of mathematics. This line of research seems to deserve our encouragement, but only on condition that—in my opinion—it is placed side by side with foundational research; that is, it must not replace it. However, foundational research has moved forward in the meantime, even if with purposes dissimilar from the original ones.Footnote 16

Specifically, regarding Agazzi’s perspective, it seems that his ideas—especially with reference to the limitation theorems and to the proposal to treat mathematics and empirical theories in a unified way—constitute an authentic and original anticipation of positions developed in the following years. His latest stances on the more extreme forms of mathematical empiricism—besides indicating that Agazzi remains engaged with some basic issues about the cognitive value of mathematics—show a constant interest in this question, and this gives us hope that some pages of his philosophy of mathematics that have yet to be written will indeed be written in the next few years.