1 Introduction

The ideal of value free science states that the justification of scientific findings should not be based on non-epistemic (e.g. moral or political) values. It derives, straightforwardly and independently, from democratic principles and from the ideal of personal autonomy: As political decisions are informed by scientific findings, the value free ideal ensures—in a democratic society—that collective goals are determined by democratically legitimized institutions, and not by a handful of experts (cf. Sartori 1962, pp. 404–410). With regard to private decisions, personal autonomy would be jeopardized if the scientific findings we rely on in everyday life were soaked with moral assumptions (see Weber 1949, pp. 17–18).

However, the ideal of value free science has been at the heart of various controversies, which have raged at least since Max Weber’s publications and have involved philosophers and social scientists alike. More recently, philosophers have re-articulated and sharpened two types of criticism which I shall term the “semantic critique” and the “methodological critique”.Footnote 1 The semantic critique, which was anticipated by Weber (1949, pp. 27–38), is, for example, set forth by Putnam (2002) and Dupré (2007). It claims that normative and factual statements are irreducibly interwoven because of the existence of thick concepts.Footnote 2 Science therefore cannot rid itself of value judgments; as a consequence, the ideal of value free science is allegedly unrealizable. The methodological critique dates back to a seminal paper by Rudner (1953), which sparked a debate in the 1950s involving, amongst others, Jeffrey (1956) and Levi (1960). Various philosophers of science, including Philip Kitcher,Footnote 3 Helen Longino,Footnote 4 Torsten Wilholt,Footnote 5 Eric Winsberg,Footnote 6 Kevin ElliottFootnote 7 and, most forcefully, Heather Douglas,Footnote 8 currently seem to endorse the methodological critique. In short, it maintains that scientists have to make methodological decisions which require them to rely on non-epistemic value judgments.

This paper seeks to defuse the methodological critique. Allegedly arbitrary and value-laden decisions can be systematically avoided, it argues, by making uncertainties explicit and articulating findings carefully. The methodological critique is not only ill-founded but also distracts from the crucial methodological challenge scientific policy advice faces today, namely the appropriate description and communication of knowledge gaps and uncertainty.

The structure of this paper is as follows. One can distinguish two basic versions of the methodological critique, which rely on a common, and hence central, premiss, namely that policy-relevant scientific findings depend on arbitrary choices (Section 2). Such arbitrariness, the critics argue, arises in situations of uncertainty because scientists may handle inductive risks in different ways (Section 3).Footnote 9 But that is not inevitable: on the contrary, arbitrary decisions are systematically avoided if uncertainties are properly expressed (Section 4). Such careful uncertainty articulation, understood as a methodological strategy, is exemplified by the current practice of the Intergovernmental Panel on Climate Change, IPCC (Section 5).

2 How arbitrariness undermines the value free ideal

There are two versions of the methodological critique, which object to the value free ideal in different ways while sharing a common core. A first variant of the critique argues that the value free ideal cannot be (fully) realized; a second variant states that it would be morally wrong to realize it.

The common core of both versions is a kind of underdetermination thesis. It claims that every scientific inference to policy-relevantFootnote 10 findings involves a chain of arbitrary (epistemically underdetermined) choices:

Thesis 1

(Dependence on arbitrary choices) To arrive at (adopt and communicate) policy-relevant results, scientists have to make decisions which (i) are not objectively (empirically or logically) determined and (ii) sensitively influence the results thus obtained.

According to the first version of the critique, one inevitably buys into non-epistemic value judgments by making an arbitrary, not objectively determined choice with societal consequences:Footnote 11

Thesis 2

(Decisions value laden) Decisions which (i) are not objectively determined and (ii) sensitively influence policy-relevant results (to be adopted and communicated) are inevitably based—possibly implicitly—on non-epistemic value judgments.

From Theses 1 and 2, it follows immediately that science cannot be free of non-epistemic values:

Thesis 3

(Value free science unrealizable) Scientists inevitably make non-epistemic value judgments when establishing (adopting and communicating) policy-relevant results.

This first version of the methodological critique is set forth by Rudner (1953) and seems to be endorsed, e.g., by Wilholt (2009), Winsberg (2010) and Kitcher (2011). However, one of its premisses, Thesis 2, represents a non-trivial assumption. To see this, suppose that the scientist, when facing an arbitrary choice, simply rolls a die. It is at least not straightforward which specific, non-epistemic normative assumptions she (implicitly) buys into by doing so.

This might explain why Heather Douglas unfolds a different and, I take it, stronger line of argument, which yields a second type of methodological critique.Footnote 12 The second version, too, takes off from the premiss that policy-relevant scientific findings depend on arbitrary choices (Thesis 1). Because of science’s recognized authority, it reasons, the acceptance of some policy-relevant finding by a scientist is highly consequential.Footnote 13

Thesis 4

(Policy-relevant results consequential) The policy-relevant results scientists arrive at (adopt and communicate) have potentially (in particular in case they err) morally significant societal consequences.

One of Douglas’ main examples may serve to illustrate this claim (cf. Douglas 2000). The scientific finding that dioxins are highly carcinogenic will spur policy makers to set up tight regulations and will, consequently, shut down otherwise socially beneficial economic activities. If scientists, however, reject that very hypothesis, dioxins won’t be banned and the public will be exposed to a potentially harmful substance. Either way, the scientists’ decision has far-reaching effects, which will be particularly harmful if they err.

But whenever we face a choice with morally significant consequences, the idea of responsibility requires us to adopt a moral point of view, i.e. to consider ethical aspects when deliberating about the decision:

Thesis 5

(Moral responsibility) Any decision that is not objectively determined and has, potentially, morally significant societal consequences, should be based on non-epistemic value judgments (instead of being taken arbitrarily).

That’s why it would be morally wrong (in the light of Theses 1, 4 and 5) to follow the value free ideal in scientific inquiry:

Thesis 6

(Value free science unethical) Scientists should rely on non-epistemic value judgments when establishing (adopting and communicating) policy-relevant results.

The second version of the methodological critique constitutes a conclusive argument, at least once the arbitrariness thesis—the cornerstone of the critique—is granted. This very thesis will be further discussed in the following sections.

3 How uncertainty triggers arbitrariness

Policy making requires definite and unequivocal answers to various factual questions, or so it seems. Take Douglas’ example from health and environmental policy (cf. Douglas 2000): Are dioxins carcinogenic? Is there a threshold below which exposure to dioxins is absolutely safe—and if so, where does it lie? How many persons, out of a hundred, develop malignant tumors as a result of exposure to dioxins at current rates? Scientists are expected to answer these questions with “plain” hypotheses: Yes, dioxins are carcinogenic; or: no, they aren’t. The safety threshold lies at level X; or: it lies at level Y. So and so many persons (out of a hundred) currently suffer from malignant tumors because of exposure to dioxins.

Under uncertainty, i.e. when the empirical evidence or the theoretical understanding of a system is limited, such plain hypotheses can neither be confirmed nor rejected beyond reasonable doubt. As the inference to the policy-relevant result, which the scientist is ultimately going to communicate, becomes error prone, the error probabilities or, more generally, the inductive risks one is ready to accept when drawing the inference have to be specified. Clearly, errors can be committed all along the inference chain (including the generation and interpretation of data, the choice of the model, the specification of parameters and boundary conditions, etc.), so inductive risks have to be taken care of throughout the entire inquiry, and not merely at its final step. This gives scientists considerable leeway, because the methodological management of inductive risks (specifically, the acceptance of certain error probabilities) is in no way objectively, i.e. empirically or logically, determined. The policy-relevant results inferred, e.g. statements about the carcinogenic effects of dioxins, depend sensitively on those methodological choices.
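To see how such an epistemically underdetermined choice can tip the result, consider a minimal, purely illustrative sketch in Python. All numbers are invented for the purpose of the example; the point is only that the very same evidence licenses opposite plain answers depending on which error probability one is willing to accept (on these invented figures, the p-value happens to fall between the two conventional significance levels).

```python
from math import comb

def binom_tail(k: int, n: int, p0: float) -> float:
    """One-sided p-value: probability of observing k or more tumors in
    n exposed animals if the true tumor rate were just the background rate p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical toy data, invented for illustration: 9 tumors among 50
# exposed animals, against an assumed background tumor rate of 8 %.
k, n, background = 9, 50, 0.08
p_value = binom_tail(k, n, background)

# The same evidence yields opposite "plain" answers depending on which
# error probability (significance level) the scientist is willing to accept.
for alpha in (0.01, 0.05):
    verdict = ("elevated tumor rate (carcinogenic)" if p_value < alpha
               else "no elevated tumor rate established")
    print(f"alpha = {alpha:.2f}: p = {p_value:.3f} -> {verdict}")
```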

Put concisely, the argument in favor of arbitrariness reads:

P1

To arrive at (adopt and communicate) policy-relevant results, scientists have to adopt or reject plain hypotheses under uncertainty.

P2

Under uncertainty, adopting or rejecting plain hypotheses requires setting error probabilities which one is willing to accept when drawing the respective inference and which sensitively affect the results that can be inferred.

P3

The error probabilities one is willing to accept in an inference are not objectively (empirically or logically) determined.

C

Thus: Dependence on arbitrary choices (Thesis 1).

The argument relies crucially on a specific reconstruction of the decision situation that scientists face in policy advice. As P1 has it, the options available to scientists are fairly limited: they have to arrive at and communicate results of a quite specific type.Footnote 14 And they cannot infer such plain hypotheses, at least not under uncertainty, without taking substantial inductive risks and hence committing themselves to a set of methodological choices that are not objectively determined.

As long as the first premiss is granted, I take the above reasoning to be a very strong and convincing argument. But can P1 really be upheld?

4 Avoiding arbitrariness through articulating and recasting findings appropriately

Premiss P1, stated as narrowly as above, is false. Policy making can be based on hedged hypotheses that make the uncertainties explicit, and scientific advisors may provide valuable information without inferring plain, unequivocal hypotheses that are not fully backed by the evidence. Reconsider Douglas’ example (Douglas 2000).Footnote 15 Rather than opting for a single interpretation of the ambiguous data, scientists can make the uncertainty explicit by working with ranges of observational values. Instead of employing a single model, inferences can be carried out for various alternative models. Rather than committing oneself to a single level of statistical significance, one may systematically vary this parameter, too. Acting as policy advisors, the scientists can communicate the results of such a (methodological) sensitivity analysis. The reported range of possibilities may then inform a decision under uncertainty.Footnote 16
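The following sketch indicates what such a methodological sensitivity analysis might look like. It is a toy example under stated assumptions: the two dose-response models, the slope range and the exposure level are invented and merely stand in for the alternative data interpretations, models and parameter choices mentioned above; the output is a range of possibilities rather than a single plain hypothesis.

```python
from itertools import product

def linear_no_threshold(dose, slope):
    """Excess risk rises linearly with dose from zero exposure onwards."""
    return slope * dose

def threshold_model(dose, slope, threshold=1.0):
    """No excess risk below the (hypothetical) threshold, linear above it."""
    return slope * max(0.0, dose - threshold)

dose = 2.5                          # hypothetical exposure level
slopes = [0.004, 0.008, 0.012]      # slope estimates compatible with the (ambiguous) data
models = [linear_no_threshold, threshold_model]

# Sweep over the methodological alternatives instead of committing to one of them.
risks = [model(dose, slope) for model, slope in product(models, slopes)]

# Hedged, policy-relevant result: a range of possibilities, not a plain hypothesis.
print(f"Excess individual risk lies between {min(risks):.4f} and {max(risks):.4f}, "
      "depending on the dose-response model and parameter estimate adopted.")
```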

Reporting ranges of possibility is just one way of avoiding plain hypotheses and recasting scientific results in situations of uncertainty. Scientists could equally infer and communicate other types of hedged hypotheses that factor in the current level of understanding. They might make use of various epistemic modalities (e.g. it is unlikely/it is possible/it is plausible/etc. that …) or simply conditionalize on unwarranted assumptions (e.g. if we deem these error probabilities acceptable, then …; based on such-and-such a scheme of probative values, we find that …; given that set of normative, non-epistemic assumptions, the following policy measure is advisable …).Footnote 17
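Such a conditionalized report can be generated quite mechanically, as the following sketch illustrates. The p-value is a hypothetical placeholder; the point is that the scientist does not pick an error threshold herself, but states the conclusion as a function of whatever threshold the addressee deems acceptable.

```python
p_value = 0.017   # hypothetical result of some statistical analysis

def conditional_report(p: float, alpha: float) -> str:
    """State the conclusion conditional on an accepted error probability."""
    if p < alpha:
        return (f"If an error probability of {alpha:.0%} is deemed acceptable, "
                "the data warrant accepting that the substance is carcinogenic.")
    return (f"At an accepted error probability of {alpha:.0%}, "
            "the data do not warrant accepting that the substance is carcinogenic.")

for alpha in (0.01, 0.05, 0.10):
    print(conditional_report(p_value, alpha))
```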

In sum, scientists as policy advisors are far from being required to accept or reject plain hypotheses.Footnote 18 With its original premiss in tatters, the above reconstruction of the methodological critique is in need of repair. But, in fact, premiss P1 can easily be amended to yield a much more plausible statement.Footnote 19

P1’

To arrive at (adopt and communicate) policy-relevant results, scientists have to adopt or reject plain or hedged hypotheses under uncertainty.

For the sake of validity, we have to modify premiss P2 correspondingly.

P2’

Under uncertainty, adopting or rejecting plain or hedged hypotheses requires setting error probabilities which one is willing to accept when drawing the respective inference and which sensitively affect the results that can be inferred.

While P1’ is now nearly analytic, P2’ turns out to be questionable. Does accepting hedged hypotheses, which are, thanks to epistemic qualification and conditionalization, weaker than plain ones, still involve substantial error probabilities? Douglas sees this counter-argument coming, anticipating that “[some] might argue at this point that scientists should just be clear about uncertainties and all this need for moral judgment will go away, thus preserving the value-free ideal” (Douglas 2009, p. 85). But she counters:

Even a statement of uncertainty surrounding an empirical claim contains a weighing of second-order uncertainty, that is, whether the assessment of uncertainty is sufficiently accurate. It might seem that the uncertainty about the uncertainty estimate is not important. But we must keep in mind that the judgment that some uncertainty is not important is always a moral judgment. It is a judgment that there are no important consequences of error, or that the uncertainty is so small that even important consequences of error are not worth worrying about. Having clear assessments of uncertainty is always helpful, but the scientist must still decide that the assessment is sufficiently accurate, and thus the need for values is not eliminable. (Douglas 2009, p. 85)

This much is clear: Sometimes a probability (or, more generally, an uncertainty) statement cannot be inferred, based on the available evidence, without a substantial chance of error. But Douglas, or the methodological critique, needs more: Every hedged (e.g. epistemically qualified or suitably conditionalized) hypothesis involves substantial error probabilities. And that seems to be plainly false. Douglas either ignores or underestimates how far epistemic qualification and conditionalization might carry us. Consider, e.g.: “It is possible (consistent with what we know) that …”, “We have not been able to refute that …”, “If we assume these thresholds for error probabilities, we find that …” Such results are, first of all, clearly policy relevant (think of mere possibility arguments,Footnote 20 or worst case reasoningFootnote 21), all the more so if complemented with a respective sensitivity analysis (for instance, by varying the error thresholds systematically). Secondly, such hypotheses are sufficiently weak, or can be further weakened, so that the available evidence suffices to confirm them beyond reasonable doubt. There is a simple reason for that: A scientific result which fully and comprehensively states our ignorance is itself well corroborated (for if it weren’t, it wouldn’t make the uncertainty fully explicit in the first place).Footnote 22

To insist that some hedged hypotheses can be justified beyond reasonable doubt, even under uncertainty, is not to deny fallibilism. There always remains the logical possibility that we are wrong: The fundamental regularities observed in the past might break down in the future, other “laws of nature” might reign. Or all our scientists, being cognitively limited, might have committed—in spite of multiple independent checks and double-checks—a simple mistake (e.g. when performing a calculation). While this kind of uncertainty is indeed irreducible, it seems to me irrelevant and without any practical bearing. Note that any scientific statement whatsoever (e.g. that the earth is not a disk) is affected by similar doubts.Footnote 23 That’s nothing scientists have to worry about in scientific policy advice.

Let me explain this in some more detail. I take it that there is a vast corpus of empirical statements which decision makers—in private, commercial or public contexts—rightly take for granted as plain facts. I’m thinking, for instance, of results to the effect that plutonium is toxic, the atmosphere comprises oxygen, coal burns, CO2 is a greenhouse gas, Africa is larger than Australia, etc. These findings have been thoroughly tested empirically, they have been derived independently from various well-confirmed theories, or they have been successfully acted upon millions of times. Even so, they are fallible, they are empirically and logically underdetermined, and they might be wrong for more trivial reasons: millions of people might have committed the same fallacy or might have been fooled by a similar optical illusion. Nobody is denying that. But such uncertainties are simply not decision-relevant. More precisely, I suggest that (i) the corpus of statements that are—for all practical purposes—established beyond reasonable doubt and (ii) the well-established social practice of relying on them in decision making may serve as a benchmark to which scientists can refer in policy advice. The idea is that scientific policy advice comprises only results that are as well confirmed as those benchmark statements—so that policy makers can rely on the scientific advice in the same way as they are used to relying on other well-established facts. By making all the policy-relevant uncertainties explicit, scientists can hedge their reported findings further and further until the results are indeed as well confirmed as the benchmark statements. (In the extreme, they might simply admit their complete ignorance as to the consequences of some policy option.)

For illustrative purposes, we consider a ‘frank scientist’Footnote 24 who tries to comply with the methodological recommendations outlined above in order to circumvent the non-epistemic management of inductive risks. She might address policy makers along the following lines: “You have asked us to advise you on a complicated issue with many unknowns. We cannot reliably forecast the effects of the available policy options, which you’ve identified, in a probabilistic—let alone deterministic—way. Our current insights into the system simply don’t suffice to do so. However, we find that, if policy option A is adopted, it is consistent with our current understanding of the system (and hence possible) that the consequences \(\mathrm {C}_{\mathrm {A1}},\mathrm {C}_{\mathrm {A2}},\ldots \) ensue; but note that we are not in a position to robustly rule out further effects of option A not included in that range. For policy option B, though, we can reliably exclude such-and-such a set of developments as impossible, which still leaves \(\mathrm {C}_{\mathrm {B1}},\mathrm {C}_{\mathrm {B2}},\ldots \) as a broad range of future possible consequences. These results are obviously not as telling as a deterministic forecast, but they represent all we currently know about the system’s future development. We, the scientists, think it’s not up to us to arbitrarily reduce these uncertainties. On the contrary, we think that democratically legitimized decision makers should acknowledge the uncertainties and determine—on normative grounds—which level of risk aversion is apt in this situation. Finally, the complex uncertainty statement we have provided above is as well confirmed as other empirical statements typically taken for granted in policy making (e.g., that plutonium is toxic, coal burns, earth’s atmosphere comprises oxygen, etc.). That is because all we relied on in establishing the possibilistic predictions were such well-confirmed results.”
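The possibilistic core of the frank scientist’s statement can be given a simple formal gloss. The sketch below is purely illustrative (the ensemble entries and outcome labels are invented): for each policy option it collects the outcomes predicted by at least one admissible combination of model assumptions, i.e. the outcomes that cannot currently be ruled out, and reports that set instead of a single forecast.

```python
# Hypothetical ensemble: each entry maps a policy option to the outcome
# predicted under one admissible combination of model assumptions.
ensemble = [
    {"A": "moderate damages", "B": "moderate damages"},
    {"A": "severe damages",   "B": "moderate damages"},
    {"A": "minor damages",    "B": "minor damages"},
]

def possible_outcomes(option):
    """Outcomes that at least one admissible model variant predicts."""
    return {run[option] for run in ensemble}

for option in ("A", "B"):
    print(f"Option {option}: consistent with current understanding -> "
          f"{sorted(possible_outcomes(option))}")

# Caveat (as the frank scientist notes): the ensemble only spans the
# assumptions actually modelled; further, unmodelled effects of option A
# cannot be robustly excluded on this basis.
```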

5 Making uncertainties explicit and avoiding arbitrariness: the case of the IPCC

It is universally acknowledged that the detailed consequences of anthropogenic climate change are difficult to predict. Centurial forecasts of regional temperature anomalies or changes in precipitation patterns, let alone their ensuing ecological or societal consequences, are highly uncertain. It is no wonder, then, that climate scientists, in particular those involved in climate policy advice, have reflected extensively on how to deal with these uncertainties.Footnote 25 A recent special issue of Climatic Change Footnote 26 is further evidence of the attention climate science pays to uncertainty explication and communication. Some of the special issue’s discussion is devoted to the IPCC Guidance Note on Consistent Treatment of Uncertainties (Mastrandrea et al. 2010). The current Guidance Note, which is used to compile the Fifth Assessment Report (5AR), is a slightly modified version of the Guidance Note for the 4AR.Footnote 27 The Guidance Note may serve as an excellent example of how the very statements and results scientists articulate and communicate are chosen and modified in the light of prevailing uncertainties. It thus illustrates the general strategy described in the previous section and provides a counter-example to the arbitrariness thesis (Thesis 1), underpinning the refutation of the methodological critique.

The Guidance Note distinguishes six epistemic states that characterize different levels of scientific understanding, and lack thereof, pertaining to some aspect of the climate system, as shown in Table 1. For each of these epistemic states, the Guidance Note suggests which sort of statements may serve as appropriate explications of the available scientific knowledge. Thus, in case F), it advises stating the probability distribution (while making the assumptions of the statistical analysis explicit), whereas in case C), it does not do so.Footnote 28

Table 1 Types of uncertainty or knowledge states that require a specific articulation of the results and findings in scientific policy advice

From state A) to state F), the scientific understanding gradually increases, and the statements scientists can justifiably and reliably make become ever more informative and precise. If, as in state A), current understanding is very poor, scientists might simply report that very fact, rather than dealing with significant inductive risks when inferring some far-reaching hypothesis (as the methodological critique has it). Importantly, the statement that a process is poorly understood, that the evidence is limited and that the agreement amongst experts is low—such a statement itself does not involve any practically significant and policy-relevant uncertainties (contra premiss P2’). The Guidance Note thus provides a blueprint for making uncertainties fully explicit and avoiding substantial inductive risks.Footnote 29 This is not to say that the framework provided by the IPCC is perfect. Nor am I claiming that the actual IPCC assessment reports consistently implement the Guidance Note and articulate uncertainties flawlessly. But even if the guiding framework and the actual practice can be improved upon, the IPCC example nonetheless shows forcefully how scientists can articulate results as a function of the current state of understanding and thereby avoid arbitrary (methodological) choices. This effectively defeats the methodological critique of the value free ideal.Footnote 30
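The general idea behind the Guidance Note, namely that the form of the reported statement should be a function of the state of understanding, can be rendered as a small decision procedure. The following sketch is a stylized illustration only, not the IPCC’s actual scheme: the state labels, thresholds and output phrases are invented and merely mimic the progression from state A) to state F) described above.

```python
from typing import Optional

def articulate(evidence: str, agreement: str, finding: str,
               distribution: Optional[str] = None) -> str:
    """Pick a statement form appropriate to the epistemic state (stylized)."""
    if evidence == "robust" and agreement == "high" and distribution:
        # Rough analogue of state F): a full probability distribution can be
        # reported, with the assumptions of the analysis made explicit.
        return f"{finding}: {distribution} (assumptions of the analysis stated explicitly)"
    if evidence == "medium" or agreement == "medium":
        # Intermediate states: only qualified, possibilistic claims.
        return f"It is possible, given current understanding, that {finding}."
    # Rough analogue of state A): very limited evidence and agreement.
    return f"Current understanding of {finding} is too limited to support any quantified claim."

print(articulate("limited", "low", "regional precipitation change"))
print(articulate("robust", "high", "warming under the assumed scenario",
                 distribution="a probability distribution over outcomes (hypothetical)"))
```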

6 Conclusion

The methodological critique of the value free ideal is ill-founded. In a nutshell, the paper argues that there is a class of scientific statements which can be considered—for all practical purposes—as established beyond reasonable doubt. Results to the effect that, e.g., dinosaurs once inhabited the earth, alcohol freezes if sufficiently cooled, or methane is a greenhouse gas are not associated with decision-relevant uncertainties. Frequently, of course, the empirical evidence is insufficient to establish hypotheses equally firmly. But rather than reporting uncertain results, and managing the ensuing inductive risks, scientific policy advice may set forth hedged hypotheses that make the lack of understanding fully explicit. By sufficiently weakening the reported results, scientific policy advice can always restrict itself to communicating virtually certain (hedged) findings. That’s what the methodological critique of the value free ideal seems to underestimate.

Let me comment on the general dialectic situation. This paper attempts to refute a critique of the value free ideal. Even if the refutation is successful, this does not in itself show that the value free ideal is justified. Further reasons (e.g. along the lines indicated in the introductory paragraph) have to be given for that claim. Similarly, other criticisms of the value free ideal—such as the semantic critique or the denial that epistemic and non-epistemic values can be reasonably distinguished in the first place—remain unaffected by this paper’s argument and have to be considered separately. Finally, instead of accepting this paper’s conclusion, one may deny one of its premisses (and claim, e.g., that scientific findings like methane being a greenhouse gas require management of policy-relevant inductive risks). It should go without saying that in philosophy, too, you may hold on to your position, come what may, provided you’re prepared to make adjustments elsewhere in your web of beliefs. In that sense, there exists no definitive refutation of the critique of the value free ideal.

This said, consider, finally, the position which adheres to the value free ideal and maintains that it plays an important role in securing the place of science and scientific policy advice in a democratic society. Seen from such a perspective, this paper reveals that the philosophical critique, precisely because it addresses a socially and politically relevant issue, is dangerous and risks undermining democratic decision making. While scientific policy advice should be guided by the ideal of value free science, the methodological critique reminds us, at most, that there are factual (contingent) problems in realizing this ideal: Scientists may lack the material or cognitive resources to identify all uncertainties, make them explicit and carry out sensitivity analyses. But (a) this does not affect value free science understood as an ideal which one should try to approximate.Footnote 31 And (b) the unjustified criticism tends to obscure this methodological challenge (which is fortunately addressed by the IPCC), rather than illuminating it and contributing to its resolution.