1 Preamble

Since Kitcher introduced the notion of a ‘well-ordered science’ in Science, Truth, and Democracy, it has attracted surprisingly little attention. This is in spite of the fact that Kitcher presents this notion as an answer to the most “fundamental normative question about science” (Kitcher 2004, p. 203). Those commentators who have paid close attention to Kitcher’s account have mostly argued against his claim that we have Millian grounds to censor particular lines of research that may be used to oppress marginalized groups (Longino 2002; Eigi 2012; Pinto 2015). In this paper, I argue that Kitcher’s view of a well-ordered science rests on the premise that we can predict, within a reasonable degree of accuracy, the content of future discoveries. Along with this, several detailed models of how to allocate resources amongst research programs similarly assume that we have such predictive powers. This premise, however, is much more limited than its proponents are prepared to admit. I demonstrate this by reviving the arguments of Paul Feyerabend to show the difficulties with this premise. I go on to extract a positive view of a well-ordered science from Feyerabend and modify it with insights from Peirce and social scientific studies on theory pursuit. While more work needs to be put into this model for it to be implementable, it provides the starting point for understanding how we should divide our funding amongst distinct and, sometimes, competing research programs.

The structure of this paper is as follows. I begin, in Sect. 1, by outlining Kitcher’s conception of a well-ordered science. This proves to be no easy task since he is vague on some crucial points. In Sect. 2, I show how this position vitally presupposes what I call ‘the projection postulate’ or the view that we can predict the content of future research. In Sect. 3, I outline three models of resource allocation that unwittingly assume the validity of the projection postulate. In Sect. 4, I use two arguments of Feyerabend’s that show why the projection postulate is much more limited than its proponents realize. In Sect. 5, I detail Feyerabend’s positive proposal for a ‘well-ordered science’ and proceed, in Sect. 6, to modify it to make it applicable. In Sect. 7, I show how this view avoids committing to the projection postulate and, thereby, offers a superior account of how to organize scientific research. I conclude by suggesting concrete practices that may be consistent with Feyerabend’s position and, therefore, may be of interest in subsequent development of this view.

2 On the very idea of a ‘well-ordered science’

Before beginning, it is worth clarifying what it means for science to be ‘ordered.’ Strictly speaking, science, just like any other practice, will always be ordered. Even a laissez-faire approach to organizing science, one that rejects central forms of governance, will suppose that there is some ‘order’ maintained by an invisible hand of some sort.Footnote 1 Rather, the contrast between ‘order’ and ‘disorder’, as will become apparent, is between a community that intentionally pursuesFootnote 2 particular projects in a particular manner over others and a community that pursues whatever its members fancy. In other words, it is a contrast between diverting resources to fabricate particular forms of institutional support in a particular manner versus a ‘neutral’ funding mechanism that does not explicitly privilege any research program over others.

Kitcher first broaches the topic of how to organize scientific research in “The Division of Cognitive Labor” and, in a bit more detail, in chapter 8 of The Advancement of Science. However, there is a more general and basic question lurking behind these papers: should science be an ‘organized’ discipline?Footnote 3 Kitcher addresses this question in chapter 5 of Science in a Democratic Society with the introduction of the notion of a ‘well-ordered science.’ In this section, I outline this concept and examine its foundations.

KitcherFootnote 4 recognizes and stresses the point that scientists encounter excessive amounts of mundane facts.Footnote 5 The hope, then, is that science aims not just for discovering any old fact, but significant facts.Footnote 6 Kitcher invites us to consider the following three options for understanding significance:

  A. The aim of Science is to discover those fundamental principles that would enable us to understand nature.

  B. The aim of Science is to solve practical problems.

  C. The aim of Science is to solve practical problems, but, since history shows that the achievement of understanding is a means to this end, seeking fundamental principles…is an appropriate derivative goal (Kitcher 2001, p. 109).

A, Kitcher writes, has ‘no plausibility’ without a strong theological backdrop by which the fundamental truths can be discerned (110).Footnote 7 Kitcher thinks B is too narrow for the reasons given by C. He writes:

Often the best route to potential gains down the road is to investigate quite recondite questions: Thomas Hunt Morgan’s wise decision to postpone considerations of human medical genetics and concentrate on fruitflies prepared the way for the (ongoing) revolution in which molecular understandings are transforming medical practice (ibid).

Still, Kitcher thinks, C is limited since it “fail[s] to recognize the ways the ethical project has expanded the scope of human desires, equipping us with richer notions of what it is to live well” (ibid). I am unclear as to what this objection amounts to. Nevertheless, it leaves us with the difficult question of how to balance research required for practical (or ethical) goals and research done out of ‘curiosity’ (i.e., ‘basic research’). Kitcher never answers this question. However, the answer seems to be implicit in his discussion of resource allocation.

The question of how to distribute resources for research,Footnote 8 for Kitcher, breaks down into two sub-questions:

  (1) What are our goals?

  (2) What procedures should we adopt for attaining particular goals?

(1) is answered democratically.Footnote 9 Kitcher provides a model of an ideal deliberative democracy in which various participants with different backgrounds give ‘tutored’ preferences for what the goals of research should be. Ideally, this deliberation should lead to a consensus of “the entire spectrum of their society’s projects, they judge a particular level of support for continuing research… and they agree on a way of dividing the support among various lines of investigation” (115). In other words, democratic discussions construct a hierarchical list of goals of research. (2), for Kitcher, is constrained by ethical limits; we should not engage in Tuskegee-type experiments, regardless of what their results may be:

Some lines of research are off limits because they require procedures that contravene the rights of human beings. Today nobody supposes that scientists should recapitulate the Tuskegee experiment (in which black men suffering from syphilis were allowed to go untreated) or emulate the Nazi doctors in coercing subjects into damaging experiments—even though there might be important facts about the world that could be discovered only by procedures like these (Kitcher 2004, pp. 203–204).

While Kitcher is not explicit about this, these constraints are inherently deontic since they must be agnostic about the consequences of the results of research. Apart from this, Kitcher says little about (2) in this chapter. This leaves us with an open question: how do the goals of research relate to the means of obtaining those goals? I think Kitcher’s answer is implicit in his discussion of the autonomy of science.

Kitcher contrasts his notion of a well-ordered science with the ‘autonomist’ view that scientists should be allowed to operate free of external (i.e., social, political, etc.) constraints. While it is unclear who, exactly, the proponents of the autonomist view are for Kitcher, we can locate this view most famously in Polanyi, who writes:

the forces contributing to the growth and dissemination of science operate in three states. The individual scientists take the initiative in choosing their problems and conducting their investigations; the body of scientists controls each of its members by imposing the standards of science, and finally, the people decide in public discussions whether or not to accept science as a true explanation of nature…any attempt to direct these actions from outside must inevitably distort or destroy their proper meaning (emphasis added, Polanyi 1951, p. 58).Footnote 10

A similar conclusion was reached by Lakatos:

In my view, science, as such, has no social responsibility. In my view it is society that has a responsibility – that of maintaining the apolitical, detached scientific tradition and allowing science to search for truth in the way determined purely by its inner life. Of course scientists, as citizens, have responsibility, like all other citizens, to see that science is applied to the right social and political ends. This is a different, independent question, and, in my opinion one which ought to be determined through Parliament (Lakatos 1978a, p. 258).

For Polanyi, science should be autonomous because its practices are guided by tacit knowledge which only exists for those who regularly practice the trade. As such, practitioners themselves determine both the goals and the means of attaining those goals. Lakatos, on the other hand, thinks that the ‘inner life’ of science is guided by methodological rules and the existing heuristics that build on previous knowledge.Footnote 11 Both, however, agree that what should be democratically deliberated is the application of the fruits of scientists’ labor. Scientific communities, according to the autonomist view, pursue their own goals set by their own standards (or the standards of philosophical conceptions of scientific methodology). Against this, Kitcher insists that democracy has a role in choosing what kinds of science should be done.Footnote 12 What is crucial, for our current purposes, is that goals determine methods of research. As Cartwright succinctly puts it: a well-ordered science is one “that answers the right questions in the right ways” (Cartwright 2006, p. 981). The ‘order’ comes from the goals (plus moral constraints). This is what Kitcher means by an ‘ordered’ science: a science that pursues particular projects (e.g., cancer research, sustainability research, etc.) at the expense of others, and pursues these projects in particular ways (e.g., use methods with broadly applicable results,Footnote 13 test for extreme cases of risk, etc.). To be ‘ordered’, in Kitcher’s use of the term, means that certain goals uniquely determine certain procedures.

Kitcher is more direct about this point in his other work. In “What Kinds of Science Should be Done?”, he summarizes his position as follows:

Well-ordered science undertakes an array of research projects, pursues them by particular methods, and applies the results to intervene in the world. The array of projects and the interventions conform to the list of priorities that would be specified in an ideal discussion; the methods satisfy the constraints that would be recognized in an ideal discussion, and are efficient at promoting the priorities (Kitcher 2004, p. 214).

We see here that the “array of projects” (i.e., what projects are fund-worthy) must ‘conform’ to the list of priorities set by democratic deliberations. But Kitcher does not allow democratic decision-making to have complete control over what projects are pursued; this would be ‘vulgar democracy’ which is ambivalent to considerations of justice. Rather, substantive moral/political principles also play a role in determining the choice of research projects:

The practice of the sciences is well-ordered…only if inquiries are directed in ways that promote the common good, conceived as aiming at the goals that would be endorsed in a democratic deliberation among well-informed participants committed to engagement with the needs and aspirations of others. Whether or not this particular elaboration of the idea of the common good is adopted, we maintain that a necessary condition for well-ordered science is that research addressed to alleviating the burden of suffering due to disease should accord with the ‘fair-share’ principle: at least insofar as disease problems are seen as comparably tractable, the proportions of global resources assigned to different diseases should agree with the ratios of human suffering associated with those diseases (Reiss and Kitcher 2009, p. 243).Footnote 14

The ‘fair-share’ principle, as Kitcher points out, is not satisfied when 90% of the resources devoted to medical research address conditions that affect only an affluent minority of the world’s population. However, it is important to point out that Kitcher identifies “research on” particular diseases (e.g., tropical diseases) which have greater impacts on mortality rates among sufficiently large populations. For Kitcher, the goals (treatments for diseases) straightforwardly determine which research projects are fundable.

We can immediately see now how this ideal depends on the assumption that we can predict the results of research before the research itself. If the goals of research can be achieved in many different ways, then the ‘order’ of science does not come from the goals of research alone. Is research on the properties of blackbody radiation research on quantum mechanics, or the greenhouse effect? The answer today is “both.” The answer in 1910 was “quantum mechanics” since the greenhouse effect, in its current form, was unknown at the time. Therefore, if we wanted to fund research on climate change, the relevant research would be different at different points in history. We have to guess which research will constitute the relevant research as our knowledge evolves. Let us call the assumption that these guesses can be made reliably ‘the projection postulate.’ In the next section, I will outline this postulate in more detail.

3 The projection postulate

The projection postulate can be formulated, roughly, as the claim that we can predict the future content of scientific theorizing within a reasonable degree of accuracy. While there may be some grey area as to what constitutes a ‘reasonable degree’, in this section, I intend to cash out the projection postulate and show how it is presupposed in Kitcher’s conception of a well-ordered science.

First, it must be clarified what is meant by ‘content.’ Of course, theories contain factual content about the world. But what counts as factual content is contingent on two closely linked items. The first is the manner in which a theory is understood. As our understanding of a theory evolves, we often change what the ontological commitments of the theory are or what kind of predictions follow from the laws of the theory. It was by no means obvious from the outset that general relativity was committed to the existence of gravitational waves. As our understanding of general relativity changed, the content of general relativity changed. Second, in order for the content to be genuine (i.e., acceptable), it must be evaluated by some method or methods. It is one thing to have a theory or a hypothesis; it is quite another to say that that theory is true (or empirically adequate, probably true, reliable, etc.). Philosophers and scientists have spent millennia debating methods of assessment and I have nothing to say about them here. For now, it is sufficient to mention that for a theory to genuinely have content, as opposed to purported content, it must pass some set of methodological standards. In other words, the content must be accepted.Footnote 15 Because of this, a theory may itself remain unchanged while the methods evaluating it change, thereby changing the content of the theory.Footnote 16

The projection postulate is commonplace in scientific practice. Indeed, every grant application tries to justify why it expects to achieve particular results. It is also taken for granted by many philosophers of science. However, there is a specific way it is endorsed by proponents of a well-ordered science. As discussed in the previous section, for a science to be well-ordered it must have the right kind of goals (democratically deliberated goals). It is the goals of research, as well as some ethical constraints, that ‘order’ what science is to be done. But the goals of research are, ultimately, the kinds of content that we want to have knowledge of. We want a cure for cancer, more reliable neuro-interventions, a more unified theory, etc. I will not dispute this claim here. What I will dispute is the additional claim that this desire for future content should determine how funds are allocated. This claim requires the assumption that we know what kind of science will ultimately achieve these goals as quickly and efficiently as possible. To be clear, I would expect that no one would defend a strong version of this view. However, a proponent of a well-ordered science must be able to defend that some particular order is more likely to achieve particular goals than another kind of order. They therefore must be able to defend some criteria for adopting a particular order over others. I will return to evaluate the projection postulate in Sect. 4. For now, I would like to further show how many models of resource allocation crucially depend on the projection postulate.

4 Resource allocation models

Within the past 15 years or so, there has been increasing attention to the details of how we should allocate research resources and what kinds of diversity within scientific communities are most conducive to progress. In this section, I will outline three of the most prominent models designed to provide the details as to how a well-ordered science may be organized and reveal their methodological assumptions.Footnote 17

4.1 The Kitcher model

Kitcher first tackles the question of how to optimize resource allocation in his paper “The Division of Cognitive Labor” (1990). He never, to my knowledge, connects the model he provides in this paper to his discussion of a well-ordered science. Regardless, as we shall see, this model relies on the projection postulate, so these two aspects of Kitcher’s thought are compatible.

Kitcher’s defense of the division of labor begins with a hypothetical situation:

Imagine that the objective degree of confirmation of the phlogiston theory just prior to noon on April 23, 1787, was 0.51, that of the new chemistry 0.49. At noon, Lavoisier performed an important experiment, and the degrees of confirmation shifted to 0.49 and 0.51, respectively. Allowing for a time lag in the dissemination of the critical information, we can envisage that there was a relatively short interval after noon on April 23, 1787, before which all rational chemists were phlogistonians, and after which all were followers of Lavoisier (Kitcher 1990, p. 5).

Clearly, allocating all possible resources to Lavoisier’s research program at this instant would be an absurd way to organize a community. Why? Kitcher writes “[w]ith the evidential balance between the two theories so delicate, you would have preferred that some scientists were not quite so clear-headed in perceiving the merits of the two theories, so that the time of uniform decision was postponed” (5–6). This makes it seem like Kitcher only allows for the pursuit of theories with sufficient degrees of merit. However, Kitcher then writes:

In the 1920s and 1930s, Wegener’s claim [continental drift] seemed to face insuperable difficulties, for there were apparently rigorous geophysical demonstrations that the forces required to move the continents would be impossibly large. Despite this, a few geologists, most notably Alexander du Toit, continued to advocate and articulate Wegener’s ideas. I suggest that the distribution of cognitive effort was preferable to a situation in which even the small minority abandoned continental drift (7).Footnote 18

This appears to, unwittingly, invoke an insight of Feyerabend’s which will be discussed later in this paper: that theories can make ‘comebacks’ and be pursued despite their empirical or theoretical difficulties. This leaves us without clear conditions for theory pursuit. However, the answer is implicit in his model. Kitcher asserts that each theory has a corresponding probability function that it will turn out to be true, p(n), where n is the number of allocated resources. To achieve an epistemic goal (EG), then, Kitcher’s model of pursuing two theories (1 and 2) is:

$$ E_{G} = p_{1}(n) + p_{2}(N - n) - p(1\ \text{and}\ 2\ \text{are true}) $$

These probability functions have the following characteristics:

  (1) p increases monotonically with n.

  (2) The value of p is 0 when n is 0.

  (3) p(n) approaches a limiting value as n goes to infinity (12).

(1) is consistent with the intuition that each stage of theory pursuit is going to be progressive, even if only to a limited degree. Even errors or null results are instances of learning. (2) is straightforwardly true, since we cannot come to know that a theory is true without pursuing it first. With (3), Kitcher assumes that theories have some intrinsic cut-off point. Without wading too deep into the waters of Kitcher’s model, it is worth discussing the form that Kitcher’s division of labor takes.

Kitcher assumes that a ‘philosopher-monarch’, with perspicuous knowledge of the probability functions, may begin allocating resources. Kitcher is aware that this is an idealization, but he thinks it is merely an exaggeration of what scientists already do when they judge scientific theories. Kitcher provides no criteria for how fallible humans can come to know the probability values. Regardless, it is clear that without any means of knowing the probability functions, Kitcher’s model can never lead to any demands about how we should distribute our resources. In other words, knowledge of the probability functions, or the projection postulate, is a crucial assumption of Kitcher’s model.
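To make this dependence concrete, the following is a minimal sketch (mine, not Kitcher’s) of the philosopher-monarch’s calculation, assuming saturating probability functions of the form p(n) = c(1 − e^(−rn)), which satisfy properties (1)–(3) above; the ceilings and rates are invented purely for illustration.

```python
import math

def p(n, ceiling, rate):
    """Illustrative probability-of-truth function satisfying (1)-(3):
    it increases monotonically with n, is 0 when n is 0, and approaches
    `ceiling` as n goes to infinity."""
    return ceiling * (1 - math.exp(-rate * n))

def expected_epistemic_good(n1, total, theory1, theory2):
    """Kitcher's E_G for two rival theories: the chance that at least one
    program succeeds, with n1 workers on theory 1 and (total - n1) on
    theory 2. The joint term is treated as the product p1 * p2, an
    independence assumption made here purely for illustration."""
    p1 = p(n1, *theory1)
    p2 = p(total - n1, *theory2)
    return p1 + p2 - p1 * p2

N = 100                 # total community resources to divide
theory1 = (0.9, 0.05)   # hypothetical ceiling and growth rate
theory2 = (0.6, 0.10)   # hypothetical ceiling and growth rate

best_n1 = max(range(N + 1),
              key=lambda n1: expected_epistemic_good(n1, N, theory1, theory2))
print(best_n1, N - best_n1,
      round(expected_epistemic_good(best_n1, N, theory1, theory2), 3))
```

The point of the sketch is that the recommended split is driven entirely by the assumed ceilings and rates, which is precisely the knowledge of the probability functions that the projection postulate promises and that Kitcher never tells us how to obtain.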

4.2 The Strevens model

Michael Strevens’ model is grounded on what Robert Merton called the ‘priority rule’ whereby genuine discoveries are credited to the scientists who uncover them first. Scientists are often motivated to make a discovery before others in order to garner prestige. This is not a mere psychological fact about individual scientists, but a social norm governing the organization of science by acting as an incentive for discovery. Strevens uses this observation to try to handle the resource allocation problem for competing research programs that aim to make the same discovery.

Strevens begins by making the following idealized assumptions:

  (1) Every research program has a single goal. There are only two possible outcomes of the program’s endeavors: total success, if it realizes the goal, or total failure, if it does not.

  (2) Different research programs have different intrinsic potentials.

  (3) A program’s chance of success—that is, the probability that it will achieve its goals—depends on two things, its intrinsic potential and the resources invested in the program (Strevens 2003, p. 61).

Strevens justifies (1) by appeal to the idealized context in which two research programs try to make the same discovery and that discovery is the main goal. Serendipitous discoveries are unaccounted for.Footnote 19 However, as Strevens later notes, you can relax this standard by giving a research program many success functions, which only adds to the mathematical complexity of the model. For (2), Strevens writes that a research program “has more intrinsic potential than another if, given any fixed level of investment, the one has a higher chance of success than the other” (ibid). He further assumes that the success function “does not change over time, and…that the central planner knows at all times the true form of the success function” (64). He gives no account of how we could determine what the intrinsic potential of any research program would be. (3), Strevens argues, follows from (2) and the trivial point that discovering truths requires investing resources into means of discovery.

Following Kitcher, Strevens recognizes that optimal resource allocation does not require devoting all resources to the theory with the highest intrinsic potential. Strevens’ mathematical presentation of the problem of resource allocation begins with a simple case:

$$ V_{T}(n_{T}) = V_{1} S_{1}(n_{1}) + V_{2} S_{2}(n_{2}) \tag{1} $$

where V is the utility of the discovery a research program aims at, S is its success function, n is the number of resources allocated to the research program, and the subscript T denotes the total for society as a whole. Additive cases, like those represented by (1), hold when the programs pursue ‘independent’ goods (e.g., a cure for cancer and the discovery of the Higgs boson). Nonadditive cases, which Strevens is interested in, are represented thusly:

$$ V_{T}(n_{T}) = V\left(S_{1}(n_{1}) + S_{2}(n_{2}) - S_{12}(n_{1}, n_{2})\right) \tag{2} $$

The final term is the probability that, given a particular funding distribution, both research programs will be successful. Strevens then simplifies the formula by assuming that the utility function is built into the success function and moves on to the question of optimal resource distribution. Assuming that adding a resource improves a research program’s chance of success, expected returns are given by:

$$ S_{1}(n_{1}) + S_{2}(n_{T} - n_{1}) = V_{T}(n_{T}) \tag{2'} $$

Additionally, Strevens assumes that success functions yield decreasing marginal returns [where the marginal return S(n + 1) − S(n) is represented by m(n)], which is crucial since it allows for discovering local maxima and thereby solves the resource allocation optimization problem. We then get the local maximum defined as:

$$ m_{1}(n) = m_{2}(n_{T} - n) \tag{3} $$

Strevens admits that if we cannot have knowledge of the intrinsic potentials of research programs, then “optimality may be out of reach” (67). This is not just true of this model, but is a “universal obstacle to achieving the best possible outcome” (ibid). As will become apparent, the failure of the projection postulate does not entail abandoning all hopes of distributing funds.
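To see what knowledge of the success functions would buy, here is a minimal sketch of a Strevens-style allocation under assumed concave success functions; the functional form, the ‘intrinsic potentials’, and the rates are stand-ins of my own, not anything Strevens specifies.

```python
import math

def success(n, potential, rate):
    """Assumed concave success function; `potential` stands in for the
    program's intrinsic potential and bounds its chance of success."""
    return potential * (1 - math.exp(-rate * n))

def marginal(n, potential, rate):
    """m(n) = S(n + 1) - S(n): the marginal return of one more resource."""
    return success(n + 1, potential, rate) - success(n, potential, rate)

def allocate(total, prog1, prog2):
    """Give each successive unit of resource to whichever program has the
    larger marginal return. With decreasing marginal returns this ends up
    (approximately) where m1(n) = m2(total - n), i.e., at equation (3)."""
    n1 = n2 = 0
    for _ in range(total):
        if marginal(n1, *prog1) >= marginal(n2, *prog2):
            n1 += 1
        else:
            n2 += 1
    return n1, n2

prog1 = (0.8, 0.02)   # hypothetical intrinsic potential and rate
prog2 = (0.5, 0.06)   # hypothetical intrinsic potential and rate
n1, n2 = allocate(200, prog1, prog2)
s1, s2 = success(n1, *prog1), success(n2, *prog2)
print(n1, n2, round(s1 + s2 - s1 * s2, 3))  # chance at least one succeeds
```

As with Kitcher’s model, every recommendation the sketch produces is hostage to the assumed potentials and rates, which is exactly the knowledge Strevens concedes we may lack.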

4.3 The Weisberg–Muldoon model

The Weisberg–Muldoon model (WM model) follows Kitcher in modelling an optimal strategy for allocating resources in the service of pre-defined goals. As such, their ideal community pursues significance as determined by the relevant community. They model their community on what they call an epistemic landscape, modeled after fitness landscapes in population biology, where a single landscape corresponds to a ‘topic.’Footnote 20 Scientists use an ‘approach’, defined quite broadly to include their research questions, instruments, experimental techniques, methods, and background theories, to investigate the topic. We end up with an epistemic landscape like this (Fig. 1).

Fig. 1: Example of an epistemic landscape (taken from Weisberg and Muldoon 2009, p. 230).

The z-axis corresponds to degrees of significance and each point on the x–y plane (the topology is discrete) represents the choice of an approach. Scientists are initially placed randomly on different points on the landscape (at zero significance) and move according to different rules. Weisberg and Muldoon give three algorithms corresponding to different scientific ‘attitudes.’ First there are ‘controls’, with the following rule:

  1. Move forward one patch.

  2. Ask: Is the patch I am investigating more significant than my previous patch?

     a. If yes: Move forward one patch.

     b. If no: Ask: Is it equally significant than my previous patch?

        i. If yes: With 2% probability, move forward one patch with a random heading. Otherwise, do not move (Weisberg and Muldoon 2009, p. 231).

Controls do not adjust their behaviour according to the behaviour of other scientists and are guaranteed to find a local maximum (232). Next are ‘followers’ with the following rule:

  • Ask: Have any of the approaches in my Moore neighborhood been investigated?

    • If yes: Ask: Is the significance of any of the investigated approaches greater than the significance of my current approach?

      • If yes: Move towards the approach of greater significance. If there is a tie, pick randomly between them.

      • If no: If there is an unvisited approach in my Moore neighborhood, move to it, otherwise, stop.

    • If no: Choose a new approach in my Moore neighborhood at random (240).

And, finally, what Weisberg and Muldoon call ‘mavericks’:

  • Ask: Is my current approach yielding equal or greater significance than my previous approach?

  • If yes: Ask: Are any of the patches in my Moore neighborhood unvisited?

  • If yes: Move towards the unvisited patch. If there are multiple unvisited patches, pick randomly between them.

  • If no: If any of the patches in my neighborhood have a higher significance value, go towards one of them, otherwise stop.

  • If no: Go back 1 patch and set a new random heading (241).

Simulations are run with communities comprising various mixes of controls, followers, and mavericks. Progress is defined as the number of maxima discovered and the speed at which they are discovered. Since I am more interested in the philosophical basis of the model, I will forgo analyzing the results and focus on the assumptions undergirding their model.
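For concreteness, here is a heavily simplified sketch, of my own devising, of the landscape-and-agents setup: a discrete grid of approaches, a two-peaked significance function, and agents that greedily hill-climb through their Moore neighborhoods (a crude stand-in for the control rule). The landscape, grid size, and number of agents are invented for illustration and do not reproduce Weisberg and Muldoon’s parameterization.

```python
import math
import random

SIZE = 50  # discrete grid of 'approaches' to a single topic

def significance(x, y):
    """Toy landscape with two peaks of unequal significance, loosely
    echoing the two-peaked example in Weisberg and Muldoon's Fig. 1."""
    peak1 = 1.0 * math.exp(-((x - 15) ** 2 + (y - 15) ** 2) / 40.0)
    peak2 = 0.6 * math.exp(-((x - 35) ** 2 + (y - 35) ** 2) / 60.0)
    return peak1 + peak2

def moore_neighborhood(x, y):
    """The (up to) eight approaches surrounding (x, y) on the grid."""
    cells = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)]
    return [(a, b) for a, b in cells if 0 <= a < SIZE and 0 <= b < SIZE]

def climb(x, y):
    """Greedy hill-climbing: like Weisberg and Muldoon's controls, the
    agent keeps moving to more significant approaches and is therefore
    guaranteed to stop at a local maximum."""
    while True:
        best = max(moore_neighborhood(x, y), key=lambda c: significance(*c))
        if significance(*best) <= significance(x, y):
            return (x, y)
        x, y = best

random.seed(1)
agents = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(20)]
peaks = {climb(x, y) for x, y in agents}
print(len(peaks), "local maxima reached by a community of 20 hill-climbers")
```

Mixing in agents that imitate follower- or maverick-style rules would change how many peaks are found and how quickly, which is precisely the question Weisberg and Muldoon’s simulations address.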

Weisberg and Muldoon make several idealizing assumptions explicit.Footnote 21 First, there is no notion of research cost in the model. Each move is, therefore, identical in cost. Additionally, Weisberg and Muldoon are explicitly following a Kuhnian model: “All of our agents are doing normal science” (249), though their division of labor doesn’t exactly line up with Kuhn’s (cf. Kuhn 1962, Chapters 3 and 4). The way the projection postulate is assumed in the WM model is distinct from Kitcher’s and Strevens’ models. The epistemic landscape corresponds to a ‘topic.’ Since we are exploring the topic, the landscape presupposes realism (e.g., the world has certain facts about it which correspond to different degrees of significance). While the WM model is not as stringent as Kitcher’s or Strevens’ in that it allows for a balance of various ways of exploring the landscape (corresponding to the different movement algorithms), it still assumes that all agents must accept the paradigm that they are exploring. If they are exploring ‘different worlds’, in Kuhn’s sense, then there is no coherent epistemic landscape to explore. In this way, we predict that a particular balance of researchers will discover significant findings on the assumption that the paradigm is correct. This is likely close to what Kitcher and Strevens are assuming as well; probability functions and intrinsic potentials are discerned by using the existing knowledge in place. As we will see in the next few sections, however, this manner of justifying the projection postulate is not without its limits.

5 Feyerabend and the projection postulate

Feyerabend never has a sustained discussion of the projection postulate.Footnote 22 However, he makes a number of side arguments on the very idea of predicting the content of future research. In this section, I will outline and elucidate these arguments and discuss some contemporary social scientific research that supports Feyerabend’s arguments.

There are two primary arguments Feyerabend gives regarding the projection postulate. The first argument is somewhat weaker, as it is not against the projection postulate per se but rather shows its limitations even if it were true. One phenomenon Feyerabend sees throughout the history of science, and the history of ideas more generally, is the rebirth of theories, methods, and so forth that were abandoned sometime in the past. Whether it is the revival of ideas, like the motion of the earth that Ptolemy called ‘entirely ridiculous’, of theories such as Capellan astronomy, or of full-blown worldviews such as Hermeticism, many aspects of scientific theorizing that are discarded are later revived.Footnote 23 When examining this phenomenon, Feyerabend remarks that this is quite reasonable since no idea is ever fully explored. He writes:

Such developments are not surprising. No idea is ever examined in all its ramifications and no view is ever given all the chances it deserves. Theories are abandoned and superseded by more fashionable accounts long before they have had an opportunity to show their virtues (Feyerabend 1975, p. 49).

The basis of this argument is the fact that the value an idea has, or at least the value an idea has that we are aware of, can only be determined after its pursuit. As a result, pursuit is both logically and historically prior to an assessment of the value a theory has.Footnote 24 The more we pursue an idea, or the more resources we invest in pursuing it, the more value we can obtain, with no possibility of stating that we have ever reached a point where we have exhausted the value of the idea. As such, ideas that appear to have exhausted their epistemic contributions at one point in history continue to show their value in future theoretical contexts. What repercussions does this have for the projection postulate? Consider a projection of the future content of a theory:

  (1) If we invest x resources into theory y, then content z will (likely) be uncovered.

However, the second clause must be time indexed. Indeed, most instances of the projection postulate, and the ones Kitcher and his followers have in mind, are short term. No one reasonably predicts what the states of their disciplines will be in hundreds of years (or at least expects these predictions to come true). Therefore, (1) must be amended in the following way:

  (1′) If we invest x resources into theory y, then content z will (likely) be uncovered at time t.

But why should our consideration of what to do with x be limited to the value of y (z) at t? Why not extend the potential value to t′ and so on? This question must be answered since it could be the case that theory y′ will have value z′ at t′ where z′ > z at some fixed level of x. If we assume that t′ is too far in the future to reasonably make predictions about, then we cannot even assign a probability to whether y′ will genuinely yield z′ at t′. As such, a proponent of the projection postulate can only argue to devote x to y over y′ in cases of urgency where we must have z by t.

There are two ways of responding to this. The first is to argue that every case of funding is a case of urgency. Feyerabend responds to this worry, albeit indirectly, in the first two lectures of The Tyranny of Science by arguing that, given how interconnected different scientific disciplines are, any long-term plan must respect the fact that practically important knowledge often depends on discoveries in other domains that had nothing to do with their original intention. Our knowledge of climate change crucially depends on blackbody radiation, our knowledge of mental illnesses depends on Coulomb’s law, our (modern) knowledge of blood circulation depends on cellular meiosis, and so on ad nauseam. All of these discoveries were made in completely different theoretical contexts. If we limited our research only to what was immediately useful (many of these discoveries did not have any obvious uses at the beginning),Footnote 25 we would drastically limit our possibilities of managing practical problems in the distant future (or even knowing that such problems exist!). As mentioned before, Kitcher recognizes this point as well. As such, we can only maintain that some funds be allocated to urgent topics. This is far from the general view of science that proponents of well-ordered science are interested in. The second way of responding is to argue that we could proceed in a piecemeal fashion. Say y′ provides us with value z″ at t where z″ < z; then y deserves x over y′. The primary problem with this response is that, since this decision leads to the non-pursuit of y′, and since we are being agnostic with respect to whether y or y′ will yield the greater value at t′, we cannot make any comparative judgment at t, meaning that we will have engaged in a suboptimal practice if, from a God’s eye view, y is less valuable at t′ than y′. As such, as Feyerabend has shown, the projection postulate can only, at best, be used to justify allocations of some funding in cases of urgency, which Kitcher himself states are ‘rare cases.’

The second argument attempts to show that justifying the projection postulate will inevitably be circular. However, as I will show, Feyerabend’s arguments in this respect are limited. For Feyerabend, there are four ways the projection postulate could be justified:

  (1) By appeal to some specific scientific methodology.

  (2) By appeal to a priori metaphysical principles.Footnote 26

  (3) By appeal to ‘expert knowledge.’

  (4) By appeal to background knowledge.

(2) was certainly not a fashionable claim in Feyerabend’s time and even less so nowadays since most philosophers accept that metaphysical conclusions should at least be compatible with scientific discoveries.Footnote 27 It can, therefore, be safely sidelined. Feyerabend is most famous for his arguments against (1), which show that the history of science is too complicated to be guided by a set of universal methodological rules.Footnote 28 But a part of this argument is more subtle and often missed. Since all methodologies must presuppose metaphysical commitments, they are testable and, thereby, fallible. As Feyerabend writes: “The standards we use and the rules we recommend make sense only in a world that has a certain structure. They become inapplicable, or start running idle in a domain that does not exhibit this structure” (Feyerabend 1988, 233). Following Popper’s principle of maximal testability (cf. Popper 1935, Chapter 4), which requires that we test theories (and methodologies)Footnote 29 as much as possible, Feyerabend argues that some tests are unavailable without the explicit consideration of other methodologies. He writes:

My intention is… to convince the reader that all methodologies, even the most obvious ones, have their limits. The best way to show this is to demonstrate the limits and even the irrationality of some rules which she, or he, is likely to regard as basic. In the case of induction (including induction by falsification) this means demonstrating how well the counterinductive procedure can be supported by argument (Feyerabend 1975, p. 32).

More succinctly put, all methods have limits and limits are discovered by proliferating alternative methods. Why? Because “prejudices are found by contrast, not by analysis” (31) since analysis within a method will always require using some parts of the method to analyze others. This, at best, establishes the consistency of the method and does not, and cannot, show its validity without being circular. As Feyerabend tells us,

Perceptions must be identified, and the identifying mechanism will contain some of the very same elements which govern the use of the concept to be investigated. We never penetrate this concept completely, for we always use part of it in the attempt to find its constituents. There is only one way to get out of this circle, and it consists in using an external measure of comparison, including new ways of relating concepts and percepts (Feyerabend 1970b, p. 47).

Tests require contradictions and no amount of analysis can bring about a test, strictly speaking. Given Feyerabend’s allegiance to a proceduralist account of methodology,Footnote 30 where being ‘objective’ presupposes a process of maximal testability, we can say that a method is only legitimate once it has been tested relentlessly via alternative methods.Footnote 31

As a result, we cannot unequivocally appeal to any specific scientific methodology since that would lead to the non-pursuit of potential falsifiers of that methodology. Feyerabend never engages with (3) directly. He does argue, against Lakatos’ use of the ‘judgments of the scientific élite’, that tacit knowledge rarely points in a unified direction and is often based on poor reasoning.Footnote 32 In fact, contemporary research on peer-review, the practical implementation of this view, supports this criticism. There is a massive amount of literature suggesting that peer-review is inherently conservative (Gillies 2008; Stanford 2015); it privileges extant standards over those that are innovative. In fact, regression analyses of 32,546 applications to the NIH showed that the independent contribution of ‘novelty’ was only 1.4 points, in contrast to 6.7 points for ‘significance’ (Lee 2015, p. 1275). Furthermore, the metric for methodological soundness, which is often used to evaluate research proposals, is itself contaminated by this conservatism; a survey of 288 NIH reviewers for the ‘Director’s Pioneer Award Program’ found that “the most innovative projects involved methodological risk” (1277), suggesting that methodological soundness is antithetical to innovation.Footnote 33 In other words, peer-review is quite illegitimate when it comes to assessing the pursuit-worthiness of novel ideas. A second worry is bias: peers may promote their own pet theory (or method, or whatever) at the expense of others due to lack of charity, unwillingness to engage with foreign approaches, lack of knowledge of other approaches and their previous applications, or sheer stubbornness (‘researcher narcissism’ as Gillies 2014, p. 8 puts it). Finally, statistical analyses of peer-review suggest that reviews are rarely even self-consistent, with the odds of multiple panels of experts agreeing on the fruitfulness of proposed research being below chance (Cole et al. 1981; Hargens and Herting 1990; Marsh et al. 2008; Graves et al. 2011).Footnote 34 As such, peer-review seems, at best, qualified to judge research within the context of normal science and not for judging research that deviates from accepted norms.

Feyerabend does not discuss the appeal to background knowledge, (4), at length. However, his theoretical pluralism, as will be discussed below, mimics the argumentative strategy against (1): we must develop theories (and background knowledge) that differ from established background knowledge in order to test it as thoroughly as possible. But this argument is more limited than Feyerabend appears to have realized. Take Kuhn’s arguments as to why normal science, or the dogmatic development of a single paradigm, is maximally efficient.Footnote 35 While Feyerabend shows that multiple paradigms must be pursued simultaneously, he has not shown that the projection postulate cannot be used within a paradigm. In fact, as I will discuss in Sect. 6 more extensively, we have good reasons to think the projection postulate can reasonably be employed in the context of ‘normal science.’ Indeed, nearly every discussion of the projection postulate within the papers I have outlined in the previous sections makes use of the notion of normal science. But this, again, cannot provide us with a general means of distributing resources. It only shows that we can reasonably project future discoveries in a limited manner. Science as a whole requires a funding strategy that does not ground itself entirely upon the projection postulate.

Now that I have provided Feyerabend’s criticisms of the projection postulate, I will go on to outline his positive conception of methodology that must make do without it. As we shall see, Feyerabend’s pluralism is directly relevant to the question of how we should organize scientific research.

6 Feyerabend’s well-ordered science

As has been argued elsewhere, despite the insistence of many commentators, Feyerabend held a positive account of normative methodology (Shaw 2017). That is to say that he had positive views of how science should be practiced. Specifically, he defended a peculiar kind of methodological and theoretical pluralism which has implications for how to structure funding distribution models. In this section, I will briefly recapitulate this position with an eye towards how it relates to the projection postulate.

Feyerabend’s pluralism, on my interpretation, is the cooperation of two principles: the principle of proliferation and the principle of tenacity. The principle of proliferation requires that we invent theories which ‘clash with’ established theories in the same domain. While earlier formulations of this principle required that theories must be incommensurable, or at least formally inconsistent, with established theories, this constraint is dropped in Against Method as Feyerabend abandons the more general strategy of formulating methodological norms through logical reconstructions of theories. Rather, the proliferated theory merely makes its competitor ‘appear absurd’ (Feyerabend 1975, p. 32, fn. 2). Feyerabend provides three distinct arguments for this. First, pluralism of theories has a variety of psychological advantages.Footnote 36 He writes, following Mill, that a “point of view that is wholly true but not contested [by other views] will be held in a manner of prejudice, with little comprehension or feeling of its rational grounds” and that “one will not understand [an idea’s] meaning, subscribing to it will become a mere formal confession unless a contrast with other opinions shows wherein this meaning consists” (Feyerabend 1981a, p. 139).Footnote 37 A second, closely related argument is that pluralism is compatible with humanitarianism since it encourages free thought, which is necessary for the “production of well developed human beings” (Feyerabend 1981b, p. 65). He writes:

Choice presuppose[s] alternatives to choose from; it presupposes a society which contains and encourages ‘different opinions’ (249), a ‘negative’ logic (236f), ‘antagonistic modes of thought,’ as well as ‘different experiments of living’ (249), so that the ‘worth of different modes of life is proved not just in the imagination, but practically’ (250). [U]nity of opinion,’ however, ‘unless resulting from the fullest and freest comparison of opposite opinions, is not desirable, and diversity not an evil, but a good’ (249)’ (66).Footnote 38

Third, theoretical pluralism maximizes the testability of theories by ‘unearthing’ previously unknown tests. While the first two motivations are exceptionally important, I would like to focus on the methodological benefits of theoretical pluralism for the purposes of this paper.

Feyerabend famously argues that all ‘facts’ (i.e., factual propositions) depend on theories for their validity (Feyerabend 1958). There are two aspects to this. First, the veridicality of all propositions depends on theoretical assumptions. Any attempt to ground theories in observations will end up in an infinite regress. He writes “the attempt to give an observational account of the mediating terms cannot succeed…as any such account involves further mediating terms [i.e., theoretical assumptions] and will therefore never come to an end” (Feyerabend 1960, p. 39).Footnote 39 Second, the meaning of all terms depends on theories as well. Even apparently obvious observational terms such as ‘table’ or sentences like ‘This table is brown’ are incomprehensible without some (implicit or explicit) theory defining the conditions of their extension and their possible inferential roles. As such, for any theory T, there will be a range of ‘facts’ that it cannot express and yet may play an integral role in testing it. Feyerabend’s favorite example of this is Brownian motion which, he contends, would never have been a refuting instance of classical thermodynamics without Einstein’s kinetic theory of heat (cf. Feyerabend 1963, pp. 92–93, 1966a, pp. 246–247, Feyerabend 1966b, 1975, pp. 39–40, 1981a, pp. 144–145).

[Figure a: schematic of Feyerabend’s general refutation schema]

This illustrates what Laymon (1977) calls Feyerabend’s ‘general refutation schema’ whereby some tests require alternatives and therefore maximal testability presupposes pluralism.

In Feyerabend’s earlier career, he uses this schema to argue against two views. The first is that of some empiricists, including Nagel, Hempel, and Heisenberg, which requires that all theories that are intended to succeed previously accepted theories in their domain must be logically consistent with them (Feyerabend 1962). If this were true, any theory that is inconsistent or incommensurable with previous theories would not be pursuitworthy. Theoretical pluralism is the result of what Feyerabend calls the ‘principle of proliferation’, which is formulated in the following way:

Principle of Proliferation: We should “invent, and elaborate theories which are inconsistent with the accepted point of view, even if the latter should happen to be highly confirmed and generally accepted” (Feyerabend 1965b, p. 105).Footnote 40

In other words, we proliferate theories to achieve maximal testability. The second view is that of Kuhn and, according to Feyerabend, Newton: we should only pursue new theories once our old theories have collapsed. On Kuhn’s view, paradigmsFootnote 41 collapse as the result of accumulating anomalies as well as increasing awareness of theoretical difficulties. According to Newton’s fourth rule of reasoning, we should not consider alternative theories until we have some change in evidence (cf. Feyerabend 1970c). For Feyerabend, this position puts the cart before the horse. It assumes that all possible tests of a theory are available independent of the consideration of other theories.Footnote 42 As we have already seen, this assumption cannot be reasonably held and, therefore, we may pursue theories at any stage of research.

The principle of proliferation merely tells us what kinds of theories we can pursue and when we can pursue them. It tells us nothing about how we are to pursue theories. This gap is filled by the principle of tenacity, which allows us to pursue theories despite any apparent difficulties they may have. Feyerabend first formulates this principle in 1968.Footnote 43 He writes:

It would be imprudent to give up a theory that either is inconsistent with observational results or suffers from internal difficulties. Theories can be developed and improved, and their relation to observation is also capable of modification… Moreover, it would be a complete surprise if it turned out that all the available experimental results support a certain theory, even if the theory were true. Different observers using different experimental equipment and different methods of interpretation introduce idiosyncrasies and errors of their own, and it takes a long time until all these differences are brought to a common denominator. Considerations like these make us accept a principle of tenacity, which suggests… that we stick to this theory despite considerable difficulties (Feyerabend 1968, p. 107).

The principle assumes that with enough concerted effort from sufficiently well-equipped scientists, theories can overcome any difficulty that they are presented with. As Lakatos puts it, “[a] brilliant school of scholars (backed by a rich society to finance a few well-planned tests) might succeed in pushing any fantastic programme ahead, or, alternatively, if so inclined, in overthrowing any arbitrarily chosen pillar of ‘established knowledge’” (Lakatos 1970, p. 100). In other words, Feyerabend denies that any refutation, whether empirical or theoretical, can decisively force us to abandon a theory (Feyerabend 1961). This means that there can be no in principle ‘time limit’ for how long we can pursue theories. For any theory proliferated, we can continue pursuing it indefinitely, if we so choose.Footnote 44

It is important to highlight that the principle of tenacity does not presuppose the projection postulate. The projection postulate, as I have previously outlined, states that we can predict the content of future theories. Tenacity is a quasi-empirical principle. It is partially methodological, as should be evident from the arguments given above, and partially based on the empirical hypothesis that a sufficiently well-financed and supported research program can become successful. This is a hypothesis about research dynamics; how creative researchers can be in resolving paradoxes, how much resources scientists need to attempt new experimental designs, and so forth.Footnote 45 Additionally, proliferation is not (solely) justified because it allows us to pursue more theories that may be true (the so-called ‘hedging our bets’ defense of pluralism), but because of the dynamics of competing theories. The dynamics of testability are predictable whereas the content of proliferated theories is not. As such, Feyerabend’s methodology is not committed to the projection postulate.

While Feyerabend recognized that these two principles require each other, he also recognized that they pull in opposite directions. Proliferation requires the construction of new theories whereas tenacity requires the development of existing theories. However, Feyerabend gives us no way to balance these principles, but merely asserts that some balance must be struck. In the following section, I propose a way of balancing proliferation and tenacity using an insight from C.S. Peirce which, in spite of its age, has been remarkably well confirmed in recent social scientific literature.

7 Peirce and the ‘economics of discovery’

One popular interpretation of Peirce’s account of abduction is the ‘pursuitworthiness interpretation’ where abduction provides the means to rank hypotheses according to their potential fruitfulness (McKaughan 2008).Footnote 46 Peirce structures abduction in the following way:

  (1) The surprising fact, C, is observed;

  (2) But if A were true, C would be a matter of course,

  (3) Hence, there is reason to suspect that A is true (CP 5.189).

However, the wording of the conclusion can be misleading on the pursuitworthiness reading where “[a]bductive reasoning makes practically grounded comparative recommendations about which available hypotheses are to be tested” (McKaughan 2008, p. 452). On this reading, A has no probative force at all. Rather,

Not only is there no definite probability [for A], but no definite probability attaches even to the mode of inference. We can only say that the Economy of Research prescribes that we should at a given stage in our inquiry try a given hypothesis, and we are to hold it provisionally as long as the fact will permit. There is no probability about it. It is a mere suggestion which we tentatively adopt (Peirce and Eisele 1976, p. 184).

In other words, the conclusions are conjectures. What’s unique about the ‘Economy of Research’ is that it provides purely economic criteria for which hypotheses to test.

[T]here is only a relative preference between different abductions; the ground of such preference must be economical. That is to say, the better abduction is the one which is likely to lead to the truth with the lesser expenditure of time, vitality, etc. (37–38)

As Rescher (1978) points out, this means that purported views of rationality can be described in terms of cost–benefit analyses. “The whole service of logic to science… is the nature of economy” (CP fn. 18 7:220).Footnote 47 Such analyses are practically indispensable. Peirce writes:

Proposals for hypotheses inundate us in an overwhelming flood, while the process of verification to which each one must be subjected before it can count as at all an item, even of likely knowledge, is so very costly in time, energy, and money—and consequently in ideas which might have been had for that time, energy, and money, that Economy would override either other consideration even if there were any other serious considerations. In fact there are no others (5.602)

If he examines all the foolish theories he might imagine, he never will (short of a miracle) light upon the true one (2.776).

How can methodological considerations aid in the economically efficient pursuit of hypotheses? The question is exceedingly easy to answer for a Kuhnian-type conservative: we can establish “the limits of plausibility by indicating that the currently accepted scientific theories and principles of greater scope act as standards which guide scientific research” (Brown 1983, p. 406). At one point, Peirce appears to accept this view. However, he elaborates it in an interesting way:

We thus see that when an investigation is commenced, after the initial expenses are once paid, at little cost we improve our knowledge, and improvement then is especially valuable; but as the investigation goes on, additions to our knowledge cost more and more, and, at the same time, are of less and less worth. Thus, when chemistry sprang into being, Dr. Wollaston, with a few test tubes and phials on a tea-tray, was able to make new discoveries of the greatest moment. In our day, a thousand chemists, with the most elaborate appliances, are not able to reach results which are comparable in interest with those early ones. All the sciences exhibit the same phenomenon, and so does the course of life. At first we learn very easily, and the interest of experience is very great; but it becomes harder and harder, and less and less worth while, until we are glad to sleep in death (Peirce 1967, p. 644).

In modern economic terms, Peirce asserts that research programs have a built-in marginal utility function on which utility (‘improved knowledge’) gradually declines while expenses rise. Notice how Feyerabend’s argument that utility is something that can only be determined in retrospect is (partially) muted here, since ‘minor’ discoveries within Peircian normal science derive their interest from the basic commitments of the research program. In other words, background knowledge partially determines significance. As such, the utility of the research program as a whole may be left up in the air for future generations to determine, but the worth of incremental gains within that program is not subject to the same worry to the same degree. Their utility, accordingly, can be recognized much more quickly. Additionally, the projection postulate becomes valid within this context. We can predict which discoveries within normal science are more likely, not because the theory itself is true and its alternatives are false, but because the institutional and psychological factors necessary for tenacity are already in place, which facilitates effective pursuit.Footnote 48 Because of this, we can pursue a theory and make decisions at each step in a funding cycle about whether to pursue it further.
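To make the shape of this per-cycle decision explicit, the following is a minimal sketch in Python, assuming purely hypothetical functional forms and parameter values; the names marginal_utility, marginal_cost, and continue_pursuit are my own illustrative labels and are not drawn from Peirce or from any funding body’s procedures. It merely shows how a ‘fund or reassess’ decision at each funding cycle could be driven by comparing estimated marginal utility with estimated marginal cost.

# Illustrative only: diminishing returns and rising costs for a single
# research program, with a simple stopping rule at each funding cycle.

def marginal_utility(cycle, u0=10.0, decay=0.5):
    """Hypothetical diminishing returns: value of the next increment of work."""
    return u0 * (decay ** cycle)

def marginal_cost(cycle, c0=1.0, growth=1.4):
    """Hypothetical rising costs: price of the next increment of work."""
    return c0 * (growth ** cycle)

def continue_pursuit(cycle):
    """Fund the next cycle only while expected gains still outweigh expected costs."""
    return marginal_utility(cycle) >= marginal_cost(cycle)

if __name__ == "__main__":
    cycle = 0
    while continue_pursuit(cycle):
        print(f"cycle {cycle}: fund (MU={marginal_utility(cycle):.2f}, MC={marginal_cost(cycle):.2f})")
        cycle += 1
    print(f"cycle {cycle}: slow down or reassess (marginal cost now exceeds marginal utility)")

On these invented parameters the loop funds a few early, cheap cycles and then recommends reassessment, mirroring the Wollaston example: early work is inexpensive and valuable, later work less so.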

What is remarkable about Peirce’s insight here is how well it has been supported by the economics literature that attempts to construct generalizable marginal utility curves for scientific theories.Footnote 49 There are a few general trends, with varying degrees of cross-disciplinary robustness, which are worth noting here:

(1) The operational costs of theory pursuit tend to increase exponentially over time.

(2) The chances of ‘internal revolutions’ decrease linearly over time.

(3) Research duplications tend to increase over time.

(4) Publishing profiles can be represented by Gaussian functions and thereby decrease past a certain threshold.

For (1), most theories that have clearly testable dimensions have a well-defined start-up cost: what is needed, how much the relevant technology costs, and so on.Footnote 50 However, as research progresses, new technologies are needed and more refined versions of existing technologies must be created. This is expensive and, therefore, operational costs tend to increase with time. (2) is largely a function of sociological findings of increasing conformity over time, something Feyerabend was well aware of (cf. Feyerabend 1965b, p. 177), and of the fact that the demand to stay up to date with an exponentially increasing number of studies can be overwhelming for revolutionary experiments. (3) derives from the increasing inability of scientists to coordinate themselves, owing to the growing size and complexity of a research program, in such a way that each research team tackles separate questions and thereby maintains an efficient division of labor.Footnote 51 (4) is supported by studies of the life-cycles of research productivity of individual scientists: prestigious researchers, whose work tends to have higher impact, tend to publish less frequently towards the end of their careers.Footnote 52
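For concreteness, the four trends can be written down as simple functional forms. The following is a rough, purely illustrative Python sketch: the shapes track the qualitative claims above (exponential costs, linearly declining chances of internal revolution, rising duplication, a Gaussian publishing profile), but every function name and parameter is invented here and carries no empirical weight.

# Illustrative stand-ins for the four stylized trends, with invented parameters.
import math

def operational_cost(t, c0=1.0, rate=0.3):
    # (1) operational costs grow roughly exponentially with elapsed time t
    return c0 * math.exp(rate * t)

def internal_revolution_chance(t, p0=0.3, slope=0.01):
    # (2) the chance of an 'internal revolution' declines roughly linearly
    return max(0.0, p0 - slope * t)

def duplication_share(t, cap=0.5, rate=0.1):
    # (3) the share of duplicated research rises as coordination breaks down
    return cap * (1 - math.exp(-rate * t))

def publication_rate(t, peak_time=15.0, width=6.0):
    # (4) Gaussian publishing profile: output falls off past a peak
    return math.exp(-((t - peak_time) ** 2) / (2 * width ** 2))

if __name__ == "__main__":
    for t in (0, 10, 20, 30):
        print(t, round(operational_cost(t), 2), round(internal_revolution_chance(t), 2),
              round(duplication_share(t), 2), round(publication_rate(t), 2))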

This provides us with some preliminary reasons for thinking that this theory of theory pursuit is (roughly) correct and that there exist marginal utility models we can construct in advance of pursuing a theory. It also provides a few crucial notions that Feyerabend’s philosophy lacks. Specifically, we can make sense of the notion of risk without attempting to predict the future content of a theory. All conjectures begin as equally risky, making theory choice at this stage contingent on non-epistemic criteria (e.g., maintaining diversity, ethical ramifications, etc.). Research within established research programs is less risky because of the sociological forces that aid tenacity. This is a feature that Feyerabend’s methodology must have and one he refuses to provide:

By speaking of risks it assumes that the progress initiated by progressive phases will be greater than the progress that follows a degenerating phase; after all, it is quite possible that progress is always followed by long-lasting degeneration, while a short degeneration (say, 50 or 100 years) precedes overwhelming and long-lasting progress (Feyerabend 1976, p. 215).

This, Feyerabend points out, is “a version of Hume’s problem” (fn. 25, p. 215).Footnote 53 However, taken to its extreme, this makes it impossible to provide any view of risk. Indeed, Feyerabend does not motivate this skepticism very well, and it begins to look like he is relying on mere ‘logical possibilities’ in his account of pursuit (cf. Achinstein 2000, p. 33). A second, closely related point is that, after this modification, the principle of tenacity is now equipped with a notion of prospective success. Feyerabend only argues that theories inconsistent with extant theories are likely to lead to progress, giving a higher degree of prospective success to novel theories.Footnote 54 We can reasonably expect research within a theoretical tradition to provide minor discoveries and successes before the marginal utility function declines or flattens out asymptotically, whereas research after this point will be less likely to succeed but, if it does, will have a greater impact. Think of Feyerabend’s defense of Boltzmann’s persistence in defending atomism, which was surely a degenerating research program by the late 1800s but was rewarded with one of the most important discoveries in twentieth-century physics. Notice, however, that this research was unlike the normal science we have been considering, since Boltzmann’s research lay outside the paradigm of classical thermodynamics (i.e., it was not normal science). The social features that increase dogmatism over time had been weakened. Because of this, tenacity after the point of decreasing marginal returns is slowed down rather than terminated altogether. This kind of tenacity looks more like an internal revolution than like marginal gains on existing knowledge.

The empirical literature on each of these points is quite messy and makes use of many inconsistent idealizing assumptions, and some of the more fine-grained questions have yet to be researched. Furthermore, there are other factors, omitted or overlooked here, that may counterbalance these trends. For example, improvements in technology often lower costs as pursuit continues, and some revolutions happen quite suddenly and unexpectedly (e.g., the quantum revolution), which tells against the claim that dogmatism increases as pursuit progresses. As such, this model is not empirically reliable enough to ground concrete policy decisions. However, if it holds up, I think it provides an interesting avenue for how the principle of tenacity may be adjusted.

8 Making do without the projection postulate

I have outlined the idea of a well-ordered science and the projection postulate that undergirds its soundness. I have also shown the limited applicability of the projection postulate and, thereby, its proper scope as a means of organizing scientific research. Finally, I have provided a philosophical account of an alternative way of organizing research and some empirical reasons for taking it seriously. Still, we are far from an implementable alternative. In this section, I briefly consider the proposal of funding by lottery, which appears to be consistent with the position I have defended in this paper. While I do not have the space to articulate the details of this proposal fully, this will provide some starting points for future research.

One method that has attracted some philosophical and social scientific literature is funding by lottery (Gillies 2008; Avin 2015a). This method allocates funds randomly; however, where the lottery is introduced differs among funding mechanisms. Francis Edgeworth (1888, 1890), for instance, when discussing grading criteria, suggests introducing ‘chance elements’ into the more fine-grained determinations. Modern adaptations of this view (see Avin 2015a, Sect. 6.2.1) use a ‘border zone’ approach with the following features:

(1) There are coarse-grained borders which establish various cut-off rates.

(2) Cut-off rates distinguish between fundable proposals, non-fundable proposals, and those within the range of fine-grained decision-making.

(3) Proposals within the fine-grained range are each given an equal number of lottery tickets, which are used to determine which of them get funded.

(4) Since the notion of ‘fine-grained’ versus ‘coarse-grained’ is context-sensitive, the degree of separation between cut-off points is also context-sensitive.

This view requires the projection postulate, but in a much more limited sense: peer review, in which peers assess the fruitfulness of a project, can only separate the ‘crank proposals’ from the bona fide good proposals (cf. Gillies 2014). One notable critic of this system is Barbara Goodwin (2005). Her primary criticism, one that does not concern considerations of justice,Footnote 55 is as follows: since some candidates will be close to the cut-off points, there is a good chance that those slightly above or below a border zone have been placed there wrongfully. This point can be repeated until all border zones collapse and all projects are admitted into a weighted lottery. While this method has many difficulties of its own (see Avin 2015b, pp. 115–120), it remains a plausible candidate for organizing research in line with the projection postulate.

The introduction of randomness is essentially a cost-effective methodFootnote 56 of recognizing Feyerabend’s worries about projecting future successes: it cannot be done reliably, especially in the case of the truly innovative research that the community as a whole needs. As such, it seems wrong to follow Goodwin’s slippery slope argument all the way down to the cranks; we can eliminate them from the weighted lottery altogether. But how should we approach the weightings of the remaining projects? I have argued that we have no way of determining the value of future research past a certain minimal threshold. This makes it seem invalid to assume that there is a spectrum from crankish to top-quality research.Footnote 57 Rather, the Feyerabendian view would be consistent with a lottery-based funding mechanism, but one that screens out the cranks and leaves the rest of the proposals with equal chances of being funded.
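As a rough illustration of what such a bifurcated mechanism might look like, here is a minimal Python sketch; the Proposal class, the passes_minimal_screen flag, and the function name feyerabendian_lottery are hypothetical stand-ins of my own, not features of any existing funding system. Peer review does only one job, screening out cranks, and everything that survives the screen receives an equal chance in the draw.

# Illustrative only: screen out cranks, then draw winners with equal chances.
import random
from dataclasses import dataclass

@dataclass
class Proposal:
    title: str
    passes_minimal_screen: bool  # the only judgment peer review is asked to make

def feyerabendian_lottery(proposals, budget_slots):
    """Drop proposals that fail the minimal screen; give the rest equal chances."""
    eligible = [p for p in proposals if p.passes_minimal_screen]
    return random.sample(eligible, k=min(budget_slots, len(eligible)))

if __name__ == "__main__":
    random.seed(0)  # reproducible illustration
    pool = [Proposal("A", True), Proposal("B", True),
            Proposal("C", False), Proposal("D", True)]
    for funded in feyerabendian_lottery(pool, budget_slots=2):
        print("fund:", funded.title)

A border-zone variant would differ only in assigning unequal numbers of tickets within the fine-grained range; on the Feyerabendian view defended here, that extra fine-graining is precisely what we lack the predictive resources to justify.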

One noted problem with the lottery method concerns continuous funding. As Avin (2015b, pp. 115–116) notes, we are, probabilistically speaking, unlikely to fund the same project repeatedly, yet the principle of tenacity demands that continuous funding be reliably available. But we should remember that the conditions of pursuit are distinct from the conditions of beginning a research program. As such, it makes sense to have a distinct funding mechanism for established projects. This can come in two forms: (1) long-term grants and (2) many smaller grants for participants within an established research program. (2) can be carried out as before, since peer review operates under the same set of conditions. (1) is trickier. Some longitudinal grants have proven to be massive successes (e.g., the Human Genome Project),Footnote 58 whereas others have been largely regarded as failures (e.g., the Human Brain Project).Footnote 59 However, as far as I am aware, every major longitudinal grant awarded in the past 70 years has built upon previous, well-established theoretical work. This seems like a sound practice on Feyerabendian grounds, since committing tenaciously to a brand-new program from the start is an exceptionally risky venture that could work against pluralism rather than be conducive to it.

9 Concluding remarks

There remain many open questions and puzzles posed by the alternative ‘well-ordered science’ that I have outlined and motivated in this paper. Most prominent among them is the proper characterization of ‘urgent science’, which is necessary if we are to understand the conditions under which the projection postulate remains valid. At times, Kitcher hints that urgent cases are what he is truly interested in:

there’ll be questions about the schedule on which goods are provided: whether we should pursue strategies that are likely to be successful in the long term or whether some problems are too urgent to be postponed (Kitcher 2004, p. 211).

Regardless, I hope to have shown the limitations of certain approaches to organizing research and thereby to have provided sufficient reason to engage in further research on this topic. This also forces us to take a more serious look at proposals such as funding by lottery, since they stand on firmer methodological grounds. Not only does this project bring a host of issues, old and new, to the fore and revitalize them with practical aspirations, but it is also of crucial importance for understanding the relationship between science and society and for better navigating future policy.