The advancement of knowledge is the central goal of human understanding. To achieve it, we often have to push beyond the frontier of knowledge, where our understanding dissolves and where new, strange entities appear. These frontiers require bold exploration, and the resulting discoveries are not idle mind games but crucial tools for our future. Having a method for carrying out these explorations is therefore essential. Tellingly, in his famous documentary Cosmos and the homonymous book, Carl Sagan devoted some of his most inspired words to this point:

In the last few millennia we have made the most astonishing and unexpected discoveries about the Cosmos and our place within it […]. They remind us that humans have evolved to wonder, that understanding is a joy, that knowledge is prerequisite to survival. I believe our future depends on how well we know this Cosmos in which we float like a mote of dust in the morning sky. Those explorations required skepticism and imagination both. Imagination will often carry us to worlds that never were. But without it, we go nowhere. Skepticism enables us to distinguish fancy from fact, to test our speculations [1, p. 7].

Sagan’s contrasting of imagination and skepticism evokes the two main roots of logic and reasoning: ampliative reasoning, heuristics, and methods for discovery on one hand, and non-ampliative reasoning, deduction, and methods for justifying and grounding our findings on the other. From these two roots have grown, branched out, and borne fruit the two main traditions in logic and the philosophy of science, and in the philosophy of mathematics in particular. These traditions have clashed repeatedly throughout the history of Western scientific and philosophical thought, especially over the role of logic, reasoning, and philosophy in human understanding. The latest clash was generated by the birth of mathematical logic following Frege’s works. The battle is hardest fought between the orthodox view and the maverick view in the philosophy of mathematics.

The orthodox view is that philosophy is a meta-activity, a thinking about thinking that exists to clarify concepts, remove flaws and eradicate misunderstanding. Hence, reasoning teaches us to prevent errors, and logic is its main tool. Here logic is purely deductive, that is, a closed set of sound mechanical rules.

The maverick view claims that philosophy contributes to the hunt for new knowledge, by providing a logic and a method for its generation. Here logic is an open set of fallible rules for the generation of hypotheses from a set of data, and method is a framework for solving problems. A recent example of this view is Cellucci’s revised version of the analytic method (see [2]).

Truth be told, there is a branch of the orthodox view that maintains that philosophy contributes to the advancement of knowledge, but it comes with the critical thesis that deduction and axiomatization can extend our knowledge. This is a crucial point on which the maverick view challenges the orthodoxy.

The mavericks think that deductive logic cannot genuinely extend our knowledge. They argue that no deductive rule is ampliative, since the content of the conclusion is already present in its premises. According to this view, a deduction only makes explicit the information that is implicit in its premises: it allows us to unfold and rewrite the information embedded in the axioms in a way that is much more understandable and testable, but, logically, it cannot go beyond them. Axiomatic-deductive systems establish relations of logical dependence between known findings but cannot produce new findings.

Moreover, the relation between hypotheses and consequences, axioms and theorems, is radically different in these two views. According to a very radical maverick view, the starting point of an enquiry is not the axioms, but the consequences and the theorems. This point is expressed nicely by Hamming, who points out that this is true even in mathematics:

The idea that theorems follow from the postulates does not correspond to simple observation. If the Pythagorean theorem were found to not follow from the postulates, we would again search for a way to alter the postulates until it was true. Euclid’s postulates came from the Pythagorean theorem, not the other way. For over thirty years I have been making the remark that if you came into my office and showed me a proof that Cauchy’s theorem was false I would be very interested, but I believe that in the final analysis we would alter the assumptions until the theorem was true. Thus there are many results in mathematics that are independent of the assumptions and the proof [3, pp. 86–7].

The bottom line is that in the hunt for new knowledge we cannot employ axioms and deduction. Axioms are the pawns, not the queens, of our understanding, and they can be sacrificed on the chessboard of knowledge. Deductions are conservative moves: they protect your pieces and strengthen your position, but they do not open lines of attack that win the match.

The orthodox view replies that axioms are the rough diamonds of our knowledge, and that deduction is the tool used to cut them. In this view the cut diamond is a new product, with new properties and relations between its parts, in other words new knowledge, so deduction is ampliative. The orthodoxy supports this claim with several arguments, such as the semi-decidability of theories, the surprise of unexpected consequences, the need for new individuals in deduction, and the epistemic aspect of conclusions (see e.g. [4–6]). In a nutshell, these arguments set out to show that by deducing consequences we gain genuinely new knowledge, since the consequences (theorems) are to the axioms as plants are to their seeds (using a Fregean metaphor). The seeds in themselves are not enough to obtain the plants, just as the truth of our postulates is not enough to foresee the truth of their consequences. We need effort to obtain a deduction from given axioms—to choose and combine the premises in the appropriate way—just as we need work to get a plant from its seeds. A plant is something new with respect to its seeds, so deductive consequences are new knowledge. Moreover, the orthodox view states that we get new plants, or new properties of a plant, just by working on the seeds, that is, on their combinations and modifications. In other words, drawing deductions from relaxed, changed, or recombined axioms is a way to produce new knowledge.

The maverick view, in turn, argues that these arguments miss the big point: there is no way to logically extend our knowledge by means of deductions from axioms. The axioms are the only things needed in a deduction: an axiomatic-deductive system is a closed world, unlike a plant, which is an open system that needs to interact with an environment in order to grow from its seeds. Moreover, no work or effort is needed to get deductive consequences from a set of axioms, since this task can be done mechanically by the British Museum Algorithm.
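To make the mechanical character of this task concrete, here is a minimal Python sketch, not taken from the volume: a toy set of propositional Horn-clause axioms and rules (the facts, rules, and the function name deductive_closure are all hypothetical illustrations) whose deductive closure is computed by blind, exhaustive forward chaining, in the brute-force spirit usually attributed to the British Museum Algorithm.

# Toy illustration (assumed example, not from the volume): the deductive closure
# of a small propositional Horn-clause theory, computed by blind, exhaustive
# forward chaining -- no insight or heuristic choice is involved.
def deductive_closure(axioms, rules):
    """Return every fact derivable from the axioms by exhaustively applying the rules."""
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)  # nothing beyond the axioms and rules is ever used
                changed = True
    return derived

axioms = {"A", "B"}                                      # hypothetical axioms
rules = [({"A", "B"}, "C"), ({"C"}, "D"), ({"E"}, "F")]  # hypothetical rules
print(deductive_closure(axioms, rules))  # the derived facts: A, B, C, D (F needs E, which is missing)

The closure never contains anything not already determined by the axioms and rules, which is precisely the mavericks’ point about the closed-world character of deduction.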

The real issue here is the definition of ‘new’, or novelty, that is, what can be considered new knowledge. On the orthodox side, establishing new logical relations between known components is regarded as new knowledge; on the maverick side, only the production of an unknown component counts as such.

The clash between mavericks and orthodoxy, which goes well beyond the issue of new knowledge, has prompted various attempts at reconciliation. For instance, Paolo Mancosu has recently tried to harmonize the two views within his ‘philosophy of mathematical practice’ framework (see [9]). Sagan’s quote above suggests a fruitful way to look at this problem, and also a way out of the clash. In effect, we need both ampliative and non-ampliative reasoning in the advancement of knowledge. They serve different purposes and play different roles within the same process. Ampliative reasoning offers means to produce new hypotheses capable of enlarging our understanding. Non-ampliative reasoning provides means to test and assess these hypotheses by confronting them with existing knowledge, thereby strengthening the very process of hypothesis generation. In this sense, non-ampliative reasoning is useful, and even necessary, for the advancement of knowledge also from a maverick viewpoint. Moreover, work on axioms can from time to time produce new knowledge, since relaxing or changing our postulates can be an effective heuristic move—even though this does not require endorsing a structuralist view. In particular, working deductively from axioms is a means of control, a means of discovering errors in our postulates and knowledge and learning from them—this is a big lesson from the history of set theory.

The relation between ampliative and non-ampliative reasoning can also be expressed in terms of risk management, that is, of cost-benefit ratios. Basically, non-ampliative reasoning is a risk-averse strategy: it aims at minimizing the possibility of making mistakes, but to reach this goal it pays a cost, namely that the novel epistemic gain it offers is small or negligible. Ampliative reasoning is a risk-taking strategy: it has a potentially high cost—namely the possibility of making serious mistakes through its set of fallible inferences—which is balanced by the benefit of deep epistemic gains. This follows from the paradox of inference, which reminds us that the tension between soundness and ampliativity in our reasoning cannot be dissolved.

The point is that while non-ampliative reasoning has been developed extensively in the history of philosophical and scientific thought, the same cannot be said for ampliative reasoning. One obvious reason for this is the intrinsic difficulty of producing risk-taking strategies, that is, ways of reasoning at the frontier of knowledge and research. At this frontier, most of our tools for managing knowledge and solving problems vanish: the hypotheses and concepts we rely on become more and more tentative and uncertain, our knowledge base about the objects under investigation becomes poorer and poorer, the problem-state and problem-goal can be ill-defined, and the allowed ‘moves’ on the entities of our inquiry can be unknown or only partially known, as are the constraints on them. We really have feeble light, and most of our steps are made in darkness. Ampliative reasoning provides a way of increasing this light, so the recent resurgence of interest in it is hardly a surprise.

This volume sets out to contribute to this increase and to offer ways of advancing knowledge in this continually expanding land, populated by moving targets. But, in a sense, this difficulty is precisely the lesson of the ‘mavericks’ tradition. In effect, the very origin of the term ‘maverick’ recalls this point. It is an eponym deriving from the eccentric Texan rancher Samuel Maverick. One of his unusual traits was that he did not brand his cattle, and the noun ‘maverick’ was first used in 1867 to denote his unbranded cattle. Accordingly, Maverick’s cows came to be regarded as outsiders, impossible to categorise by the usual labels—as indeed they were. In ampliative reasoning this feature is amplified by the fact that, quoting Bacon [22, pp. I–CXXX], “the art of discovery may improve with discoveries” (“artem inveniendi cum inventis adolescere posse”). That is, ampliative reasoning is intrinsically dynamic. In effect, on one side there is the ongoing inquiry into methods for discovering; on the other, cases of discovery can be rationally evaluated, reconstructed, and offered as means of improving the ‘method’ of discovery itself.

The papers in the volume focus on a set of issues at the center of the development of ways of reasoning at the frontier of knowledge and of constructing ‘methods’ of discovery: models for revolutions and paradigm change, ways of treating scientific disagreement rationally—crucial when revolutions happen and strong disagreement can emerge inside the scientific community—the framework for a method of discovery and the inferences for generating new knowledge, heuristics for the social sciences, and the use of results and findings about scientific discovery to shape funding policies capable of fostering deep-impact scientific discoveries. In effect, Carlo Cellucci’s and Lorenzo Magnani’s papers concentrate on conceptual frameworks for scientific discovery and ways of producing advancements in scientific knowledge. Emiliano Ippoliti examines four hypotheses produced in finance in order to suggest ways of generating new knowledge. Donald Gillies offers patterns for explaining the origin of revolutions and paradigm change in science, moving from the Kuhnian approach. Dunja Seselja, Christian Strasser, and Jan Willem Wieland propose a way of treating scientific disagreement rationally, in order to handle the disagreements that commonly emerge inside communities during revolutionary periods. Tom Nickles employs results and findings about scientific discovery (e.g. the No-Free-Lunch theorems) to shape funding policies capable of fostering deep-impact scientific discoveries, or transformative research.

More specifically, the country that the mavericks are exploring lies just between the territory of the determinism of mechanical rules and the dark land of intuition. As Carlo Cellucci states in his paper Why should the logic of discovery be revived? A reappraisal, this country is “inhabited by heuristic procedures”. And in large part they are unbranded—just like Maverick’s cows. This is one of the reasons that motivates the need for a revival of the logic of discovery. Responding to the challenge why should the logic of discovery be revived? posed by Laudan in his paper Why Was the Logic of Discovery Abandoned? [23], Cellucci argues that the logic of discovery should be revived, on the one hand, because, as Gödel’s second incompleteness theorem tells us, “mathematical logic fails to be the logic of justification, and only reviving the logic of discovery logic may continue to have an important role”. On the other hand, he argues that “scientists use heuristic tools in their work, and it may be useful to study such tools systematically in order to improve current heuristic tools or to develop new ones”. Following Aristotle’s tenet that logic must be a tool for the method of science, Cellucci looks at inferential frameworks for scientific discovery, arguing that such frameworks are provided by a revised version of the analytic method supplemented by an open set of ampliative, non-mechanical rules of inference: various kinds of induction and analogy, generalization, specialization, metaphor, metonymy, definition, and diagrams. Cellucci examines some of these rules in mathematical contexts and argues that they can be employed both to solve problems and to find new problems, concluding that a ‘logic’ of discovery is possible, without any need to appeal to imaginative, insightful guessing. In particular, Cellucci shows how the analytic method must be distinguished from the analytic-synthetic method proposed by Aristotle. The analytic-synthetic method suffers from serious limitations: above all, “it is incompatible with Gödel’s incompleteness theorems”. For instance, there are truths of a given field “which cannot be demonstrated from those principles. Their demonstration may require principles of other fields”. But, Cellucci continues, “the analytic-synthetic method requires that every truth of a given field be deducible from principles of that field. Therefore, the analytic-synthetic method is incompatible with Gödel’s first incompleteness theorem”.

In his paper Are Heuristics Knowledge Enhancing? Abduction, Models, and Fictions in Science Lorenzo Magnani focuses on ‘selective’ and ‘creative’ processes for generating hypotheses and on ‘cut-down’ and ‘fill-up’ heuristics. Magnani employs an ‘eco-cognitive perspective’ and sets out to show that heuristics, even though non-mechanical, local and contextual, are the only means to extend our knowledge, defending the idea that their outcomes are not fictional. More specifically, Magnani focuses on abduction as a means to produce new knowledge and critically evaluates the status of abductive inference, defining it as “very controversial”. In effect, the examination of abduction requires answering a series of questions: whether “abduction involve[s] only the generation of hypotheses or also their evaluation”, whether the “criteria for the best explanation in abductive reasoning are epistemic, pragmatic, or both”, and whether “abduction preserve[s] ignorance or extend[s] truth or both”. Magnani provides an answer based on the so-called ignorance-preservation characterization of abduction, “contrasted with its knowledge enhancing capacity, such as it is expressed by its heuristic features”, and he maintains that “even if, certainly, abductive reasoning can be considered a response to an ignorance-problem, nevertheless, through abduction, knowledge can be enhanced”.

In his paper Heuristic Appraisal at the Frontier of Research Thomas Nickles shows how a better and better understanding of scientific discoveries can improve the funding and support of research. In particular, he deals with the problem of heuristic appraisal (HA) at the frontier of research and its impact on policy. Heuristic appraisal is the “identification and evaluation of hints and clues that can provide direction to inquiry in the sometimes large gap between the extremes of complete knowledge and complete ignorance”. Nickles contrasts heuristic appraisal with traditional confirmational appraisal (CA): HA is prospective, “directed toward possible future developments, future opportunities”, while CA is “retrospective, based on past performance”. Moving from Meno’s aporia and the No Free Lunch Theorems [24, 25], he argues that only a local, domain-specific view of a ‘logic of discovery’ is possible. In particular, he maintains that once problem constraints and HA hints are exhausted, we can only proceed blindly, by trial and error: in this sense he states that all genuinely new knowledge is produced by an undirected variation-and-selection process. He then applies HA to decision-making in the funding of pioneering research and suggests ways to stimulate ‘transformative research’ policies—that is, “changes that challenge current understandings, either by undermining them or by opening up new areas of investigation that current views give us no reason to anticipate and that may even have been inconceivable before”. Hence these changes are not breakthroughs “in the sense of applications of already extant science and technology”. In particular, Nickles is interested in understanding how it is possible to “speed up both basic and translational scientific research without major new financial investment”. This requires solving what he labels the policy problem, that is, the fact that most funding agencies (especially in government) are designed “to discourage transformative HA recognition or to undervalue it in the interests of short-term accounting”. Nickles argues that this collides with the fact that “history informs us that the innovation timescale is typically an order of magnitude or more larger than the de facto accounting timescale imposed by such requirements as ‘broader impacts’. There is too much risk-avoidance, too much emphasis on quasi-guaranteed results”. Thus, Nickles argues for giving more weight to heuristic appraisal and less weight to confirmational appraisal. He examines several models for fostering research activity in general, and some for encouraging transformative research: the prizes/awards model, the Linus Pauling model, the NSF model, the DARPA model, the ‘triple helix’ model, and the Rockefeller Foundation model. In the end, his contention is that it is not possible to “realistically plan (or fund) a successful revolution, and it is difficult to identify something as a revolution even while it is occurring, at least until it has been largely accomplished. Typically, what is accomplished is not what the instigators may originally have expected. The more profound the revolution, the more difficult it is to appreciate the likely outcome and its far-reaching implications in advance”. Hence, he offers ‘general policy advice for the longer term’, which is to “focus on removing barriers and creating general opportunities rather than on pretending to give specific directions to the specialists in their domains”. Finally, Nickles endorses a scenario-planning approach to funding transformative research, that is, an ‘as-if thinking’ that involves challenging established truths and requires keeping the future open (in contrast with the end-of-history view).

The dynamics of scientific revolutions are at the center of Donald Gillies’ paper Why do Scientific Revolutions begin?, which starts from a critique of the Kuhnian ‘Build-up of Anomalies’ model and presents two patterns for scientific revolutions: the tech-first and the tech-last model. In the ‘tech first’ model, advances in technology come first, enabling new observations and experiments, which result in discoveries that give rise to the scientific revolution. To better illustrate this model Gillies provides a negative example, that is, an example of what was not actually the beginning of a scientific revolution: Galileo’s telescopic discoveries. Gillies notes that “the discoveries, which Galileo made in such a short space of time with his new instrument, were truly remarkable”. In this example technological developments “lead to new instruments, and, with the help of these, a number of striking new discoveries are made”. Gillies states that the ‘tech first’ pattern is, in some cases, what stimulates the beginning of a scientific revolution, and he explicitly replaces a build-up of anomalies theory with a build-up of new discoveries theory. He offers an extensive discussion of the beginning of the chemical revolution as an example of the first model, showing that the build-up of discoveries concerning new gases and their properties gave rise to the chemical revolution. In the ‘tech last’ model, urgent, practical, hard-to-solve problems stimulate solutions through a change of paradigm, and advances in technology occur as a consequence of the scientific revolution. He illustrates these features of the model with an example drawn from the history of medicine, the Germ Theory of Disease—one of the big revolutions in medicine, which started around 1865 and had largely succeeded by about 1885. This revolution ended up establishing the germ theory of disease as a new paradigm for medicine, and brought antisepsis into the practice of surgery. Tellingly, Gillies argues that while the distinction between tech first and tech last is important, many scientific revolutions can stem from an interplay of both patterns, “partly because scientific revolutions very often have different phases, and partly because it is often difficult to decide how exactly a scientific revolution should be characterised”.

In the paper Withstanding Tensions: Scientific Disagreement and Epistemic Tolerance Dunja Seselja, Christian Strasser, and Jan Willem Wieland deal with the issue of disagreement in science and how it can be shown to be rational, looking at its similarities to epistemic paradoxes. They offer the solution of epistemic tolerance: a normative framework allowing scientists to continue to pose a fruitful challenge without dismissing their opponents’ stance as epistemically futile. More specifically, the authors move from a definition of rational scientific disagreement as disagreement on some issue plus “reasons to suppose that the stance of each participant is the result of a rational deliberation”. Then, they distinguish between the internal recognition of disagreements—by the participants in a debate—and the external one—by an outside observer (e.g. a philosopher or a historian of science): each of these kinds generates certain tensional situations. They argue that scientific controversies often involve such rational disagreements, and they set out to show how scientists can tentatively recognize that their disagreement is rational, namely on the basis of content- and form-based indices. This leads them to consider the normative question of what kind of “epistemic stance a scientist should have who has recognized she may be involved in a rational disagreement”. They show that the tension characterizing rational disagreements has properties similar to epistemic paradoxes and to the notion of toleration as it is used in ethics and politics. Hence, they introduce the notion of epistemic toleration to answer this normative question, providing a normative framework that allows scientists to keep posing a fruitful challenge while at the same time taking their opponents’ stance as epistemically reasonable.

In her paper Heuristics as Methods: Validity, Reliability and Velocity Anna Grandori deals with the application of heuristics to economic problems, showing the importance and performance implications of rational heuristics in economics, in particular for decisions in which resources are scarce and performance matters. She argues that there are areas where such heuristics can be applied “very fast, and errors reduced drastically”. Grandori reviews research on innovative economic and organizational decision-making processes using epistemological criteria, and shows that an array, or better a portfolio, of effective and ‘rational’ heuristics can be specified—different from the repertory of ‘behavioral’, potentially ‘biasing’, heuristics usually considered. Two case studies of innovative decision making under uncertainty are examined in the paper: a new product development (a major project for reducing traffic pollution) and entrepreneurial decision making (protocol analyses of financial angels’ investing decisions). Grandori sets out to show that the heuristics applied resemble the ‘slow and safe’ heuristics of scientific discovery more than the ‘fast and frugal’ heuristics of everyday life. She then discusses a third case study of decision making on military flights, addressing the question of whether heuristics can be ‘fast and rational’ at the same time. She argues that the results suggest they can, and that they “help in identifying the rather unexplored rational heuristics sustaining ‘highly reliable’ action under risk”.

Economics, and finance in particular, is the starting point of Emiliano Ippoliti’s paper Dynamic generation of hypotheses: Mandelbrot, Soros and Far-From-Equilibrium. In order to investigate ways of generating hypotheses Ippoliti examines four hypotheses for dealing with the behavior of stock market prices, arguing that the generation of new hypotheses draws on a preliminary bottom-up, verbal, non-formal conceptualization, and maintaining that this is the only way to incorporate the domain-specific features of the subject. In particular he examines the construction process of one hypothesis about the behavior of stock market prices, the far-from-equilibrium hypothesis. To do this he analyzes the generation of the hypotheses that preceded it. First of all, he considers the Efficient Market Hypothesis (EMH, see [26, 27]), pointing at its main vulnerability: the idea of ‘equilibrium’, which does not enable us to explain booms and busts, or at least their frequency. Then, he examines the Fractal Market Hypothesis, which offers a new interpretation of the data and reveals new properties of financial markets, undermining the effectiveness of the notion of equilibrium. He argues that even though it does not explain the reasons for these properties and does not offer predictions that can be put to use—due to the sensitivity to initial conditions—it generates new mathematics and tells us when we can expect markets to be stable. Next, he analyses the Reflexive Market Hypothesis (see [28]), which has received little scholarly attention but offers a cogent, qualitative explanation of several properties identified but not explained by the Fractal Market Hypothesis. This hypothesis draws on the distinction between endogenous and exogenous forces in the behavior of prices, and it enables us to explain booms-and-busts and crashes. In the end, he approaches the Far-from-Equilibrium Hypothesis, showing how it relies on the distinction between exogenous and endogenous forces and develops a means to forecast crashes and bubbles, for instance the so-called flash crashes (e.g. [28, 29]). The main point of this paper is to show how the means of generating these hypotheses are essential to assessing their efficiency and plausibility. More specifically, he argues that in formulating a hypothesis a selection of features of stock market prices is made for incorporation in a theory, a selection that in most cases can be expressed mathematically. An examination of these means of generation can show us why some of these hypotheses are successful and efficient and some are not, and can also shed light on the extent to which a particular hypothesis can be usefully applied. Thus Ippoliti argues that the study of the means of generation of hypotheses offers us a guide to formulating new hypotheses in a reliable and cogent fashion. More specifically, he states that the generation of a new hypothesis has to draw on a preliminary verbal conceptualization (a discourse) on a specific subject, that is, a verbal and non-formal description of it, which establishes the entities to investigate, their properties and relations, and a set of variables that affect them. This is a bottom-up process and it is the only way to incorporate the domain-specific features of the subject in a plausible representation of it, which may eventually result in a mathematical theory. Thus his thesis is that the generation of new hypotheses and, possibly, of new mathematics stems from preliminary verbal reasoning and conceptualization, which delimit the variables and the features of a phenomenon.