1 Introduction

While there is a growing consensus that social, political, and ethical values inevitably influence scientific reasoning, considerable disagreement still exists about whether and when scientists ought to make value judgments (Longino 1990; Lacey 1999; Douglas 2009; Elliott 2011; Brown 2013). Even if they can play positive roles, political and economic values have the potential to lead to bias (Oreskes and Conway 2010). Moreover, to the extent that scientists make value judgments, there are concerns that their values will be undemocratically privileged over those of other potential stakeholders (Schneider 1997; Pielke 2004, 2007; Scott et al. 2007; Lackey and Robert 2007; Betz 2013).

These concerns are particularly salient in climate change modeling. On the one hand, climate change research is clearly motivated by social and political values directed towards protecting things we care about. Moreover, climate change models involve a variety of uncertainties that may require judgments about what sorts of risks are acceptable (Kandlikar et al. 2005; Risbey 2007; Biddle and Winsberg 2010; Winsberg 2012). On the other hand, values can (even unconsciously) cause scientists to downplay (or exaggerate) uncertainties with respect to climate models, limit the range of scenarios considered, or cherry-pick data in presenting findings to policymakers (Pielke 2007; Oppenheimer et al. 2007; Bray 2010). Thus, many argue that although it may be difficult to isolate science from values, scientists should still strive to prevent values from playing a significant role in scientific reasoning or policy advising (Odenbaugh 2003; Betz 2007; Lackey and Robert 2007). The question is, then, how can we distinguish when such values are operating legitimately or illegitimately in climate modeling?

One strategy for distinguishing between legitimate and illegitimate values in scientific decision-making does so in relation to the aims or goals of the research (Intemann and de Melo-Martín 2010; Elliott 2013; Elliott and McKaughan 2014). I will develop and defend a particular conception of this approach, which I will refer to as the aims approach, whereby it is legitimate for scientists to appeal to non-epistemic values insofar as doing so will promote democratically endorsed epistemological and social aims of research. This framework has several advantages. First, it captures a variety of ways in which value judgments are relevant to modeling decisions. Second, it offers a strong justification as to why it is legitimate to appeal to non-epistemic values, regardless of whether they are logically necessary for modeling decisions. Third, it provides resources for protecting against instances where values might negatively impact science. Finally, it has resources to ensure that stakeholder input is more effectively incorporated in determining what values will be endorsed.

2 The aims approach

Norms that govern scientific decision-making, including methodological choices, selection of data, and choice of theories or models, are widely viewed as a function of the aims that constitute the research context. Whether particular climate models are useful or reliable can depend on what we want to use them for (Parker 2009, 2014; Oreskes et al. 2010). A speedometer on a car may not work well in relation to the aim of telling a driver their speed with perfect precision. But, as a tool for helping a driver avoid getting a speeding ticket, a properly functioning speedometer is quite reliable. Similarly, whether climate models are reliable or useful depends on what we want them to do (Parker 2009). Decisions about what types of models to construct, the sort of features that reliable models will have, and the methodologies that are appropriate are justified in relation to the aims of the research. In many instances, these will be traditional epistemic aims of science. That is, some models and methodologies are justified because they help us arrive at true or empirically adequate beliefs about the world, increase our understanding, or allow us to explain or predict phenomena. But, in climate science the aim is not only to produce accurate beliefs about the atmosphere, but to do so in a way that allows us to generate useful predictions for protecting a variety of social, economic, and environmental goods that we care about. As a result, value judgments must be made about 1) which goals constitute the aims of research in a particular context (including the social, ethical, political, and economic aims of the research) and 2) the extent to which particular practices, methodologies, or models are likely to promote those aims.

The aims approach maintains that social, ethical, and political value judgments are legitimate in climate modeling decisions insofar as they promote democratically endorsed epistemological and social aims of the research. On this view, value judgments about which goals constitute the aims of a particular research context must be justified by democratic mechanisms that secure the representative participation of stakeholders likely to be affected by the research. Moreover, individual scientists will have obligations to make value judgments about which types of models, methodological approaches, conceptual frameworks, or strategies for dealing with uncertainties best promote those democratically endorsed aims. Accordingly, it will be legitimate for scientists to appeal to social, ethical, and political values in modeling decisions when doing so will advance the epistemic and policy-related interests of stakeholders. On this approach, whether value judgments are legitimate is a matter of degree, where values may be more or less legitimate to appeal to depending on the degree to which they can be said to promote the aims of research and the extent to which they reflect democratically held values.

3 The aims approach as applied to climate modeling

Temporarily setting aside the issue of how exactly the aims of inquiry are to be democratically determined (to be returned to in section 4), this section will identify and explain a variety of ways in which it is legitimate, on the aims approach, for individual scientists to consult ethical and social aims of research in making modeling decisions. Some of the examples will be relatively uncontroversial, while others will be less so. My purpose here, however, is to show that the epistemological and ethical obligations of scientists can be furthered by consulting the social and policy-related aims of the research in each of these examples. As a result, the aims approach captures a range of ways in which values are intuitively legitimate in climate modeling decisions.

3.1 Judgments about model adequacy

Most scientists agree that even if aggressive mitigation policies are adopted, some adaptation to climate change will still be necessary. But, this requires information about what it is we must prepare to adapt to. General Circulation Models (GCMs) have successfully predicted 20th century increases in global average temperature, but this is not the scale at which adaptations occur (Oreskes et al. 2010). Local communities are concerned with, for example, changes in precipitation in a particular region that may affect agriculture or water supplies. But at this scale, Regional Climate Models (RCMs) are far more useful than GCMs. Similarly, existing models anticipate slow, gradual changes and are not helpful in determining what might happen in the case of extreme or abrupt events, such as a rapid disintegration of the West Antarctic Ice Sheet (Oreskes et al. 2010; Moss and Schneider 2000). If one of the ethical aims of research is to determine how to adapt to “worst case scenarios,” then models able to capture such extreme events should be preferred. In this way, it is legitimate for scientists to appeal to ethical, social, and policy aims in deciding the kinds of models to construct, as doing so is necessary and important for promoting the aims of the research.

Social and ethical aims are also relevant to determining the sorts of features that adequate models will have. For example, some argue that adequate Integrated Assessment Models (IAMs) must not only provide information about the aggregate impacts to be expected from climate change, but also information about the distribution of those impacts to ensure that costs and benefits can be distributed equitably (Agarwal 2002; Schiermeier 2010). Models that measure the aggregative effects of climate change on food production may obscure the ways in which access to food may be affected in ways that reinforce or exacerbate existing social inequalities. Thus, models that fail to account for the distribution of effects will be inadequate for developing ethical policies.

Ethical aims also have implications for methodological considerations such as duration and discounting. If we believe that we have moral obligations to protect future generations, then this might be a reason for increasing the duration of model runs so as to measure longer-term scenarios (e.g., 200 years versus 100 years). Moreover, there is a question of whether the interests of future generations ought to receive equal weight to those who currently exist. Most IAMs discount or give less weight to impacts that will occur further in the future. To be sure, decisions related to duration and discounting involve a variety of pragmatic and epistemic factors, but judgments about who deserves moral consideration and how to weigh those interests are equally relevant.
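To make the stakes of the discounting judgment concrete, consider the standard exponential discounting formula used in many economic assessments (stated here in generic textbook notation rather than the notation of any particular IAM):

\[
\mathrm{PV}(D_t) = \frac{D_t}{(1 + r)^{t}},
\]

where D_t is a damage occurring t years from now and r is the discount rate. At r = 3 %, a damage occurring 100 years from now receives a weight of 1/(1.03)^100 ≈ 0.05, roughly one-twentieth of the weight given to the same damage occurring today, whereas r = 0 weights the interests of future generations equally with our own. The choice of discount rate is thus not a purely technical parameter but encodes a judgment about intergenerational equity.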

Decisions about the features that adequate models should have, then, are justified in relation to the aims of the research, including its social, ethical, economic, political, and policy aims. Insofar as climate modeling aims to generate the kinds of predictions that will allow us to develop morally just policies, it will be not only legitimate, but ethically obligatory to consider what sorts of models and methodologies are most likely to do so. If models were constructed without attentiveness to the social aims of research, they might (even unintentionally) limit the sort of data that is available and, as a result, constrain the sorts of policies that could rationally be adopted.

While it may be relatively uncontroversial to claim that values are legitimate in decisions about what sorts of models to construct, further examples will show that the social aims of the research can be relevant to justifying other methodological decisions throughout the research process. Indeed, all methodological decisions are traditionally justified insofar as they promote the aims of the research. The approach being advanced here, however, rejects the idea that the traditional epistemological aims of research can be easily distinguished or isolated from the social and policy aims of research. This claim will be defended in greater detail in the next few examples.

3.2 Decisions about epistemic trade-offs

Value judgments related to endorsing and promoting particular social and policy aims of research can also be relevant to employing and adjudicating between traditional epistemic or cognitive values (Longino 1995). In model development, models are to some extent “tuned” or adjusted to better match established empirical observations (Parker 2009; Mauritsen et al. 2013). Because a variety of complex processes are either poorly understood or difficult to model, modelers are forced to make certain assumptions or parameterizations (Mearns 2010). When a model does not match the observed temperature record, there is some reason to think that one or more of the background assumptions or initial conditions needs to be adjusted. In such cases, tuning is performed by adjusting parameters related to processes not explicitly represented at the model grid resolution. The motivation for tuning is conceived of as epistemic: to ensure that the model is more reliable or consistent with observed data. But, there are different features of the climate system that model tuning might strive to address. Improving the model in certain respects often means that it is less accurate in other respects. For example, models that are tuned to better represent the distribution of tropical precipitation between land and ocean in Maritime Southeast Asia may perform more poorly in terms of representing tropical intraseasonal variability (Mauritsen et al. 2013). Whether we ought to tune models in this way depends on whether we are more concerned with generating predictions about the longer-term distribution of precipitation or about week-to-week extreme weather events such as monsoons. Which epistemic trade-offs are justified depends, in part, on the sorts of predictions or trends that are important to advancing policy aims, such as protecting local agricultural practices or the distribution of food production and fishing. Our social and policy aims are thus relevant to informing our epistemic priorities in model tuning, that is, to determining what features of the climate adequate models should account for. In this case, appealing to value judgments in assessing epistemic trade-offs will advance both the epistemic and social aims of the research.
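To make the structure of such a trade-off concrete, the following toy sketch frames tuning as choosing a parameter value that minimizes a weighted mismatch between simulated and observed diagnostics, where the weights encode which features of the climate matter most for the relevant policy aims. Everything in it (the functions, numbers, and variable names) is invented for illustration and is not drawn from any actual tuning procedure in the work cited above.

```python
# A minimal, purely illustrative sketch of tuning as a weighted trade-off.
# Nothing here is an actual climate model or a published tuning procedure;
# the "model", "observations", and weights are all hypothetical.

import numpy as np

def simulate(param):
    """Toy 'model': one adjustable parameter maps to two simulated diagnostics."""
    precip_land_ocean_ratio = 0.8 + 0.4 * param     # toy diagnostic 1
    intraseasonal_variability = 1.5 - 0.6 * param   # toy diagnostic 2
    return precip_land_ocean_ratio, intraseasonal_variability

# Toy "observations" the tuned model should match; no single parameter value
# matches both targets exactly, hence the trade-off.
OBS_RATIO = 1.0
OBS_VARIABILITY = 1.0

def tuning_loss(param, w_ratio, w_var):
    """Weighted mismatch: the weights reflect epistemic priorities, which in
    turn reflect the policy aims the model is meant to serve."""
    ratio, var = simulate(param)
    return w_ratio * (ratio - OBS_RATIO) ** 2 + w_var * (var - OBS_VARIABILITY) ** 2

params = np.linspace(0.0, 1.0, 1001)

# Prioritizing the long-term precipitation distribution (e.g., for agriculture)...
best_for_precip = params[np.argmin([tuning_loss(p, 1.0, 0.1) for p in params])]
# ...versus prioritizing intraseasonal variability (e.g., for monsoon forecasting).
best_for_monsoon = params[np.argmin([tuning_loss(p, 0.1, 1.0) for p in params])]

print(best_for_precip, best_for_monsoon)  # different "tuned" parameter values
```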

At this point one might worry that if it is legitimate to appeal to non-epistemic values in making epistemic trade-offs, bias is likely to occur. There are concerns that values may lead to “wishful thinking” or tuning decisions based on what we wish the model would predict rather than decisions about what will make the model more accurate or accountable to the “way the world really is” (Brown 2013). This concern will be addressed in greater detail below; however, it is not the case that appealing to the social aims of research in tuning decisions means that such values will necessarily determine the content of the empirical data. Decisions that certain kinds of data are more important than others, such as data about the longer-term distribution of precipitation or week-to-week extreme weather events, do not determine what the distribution of precipitation will be or what weather events will occur. Moreover, while social aims will be relevant to making such trade-offs, so too will other aims of research. Having scientific information that succeeds in protecting the things we care about also requires models that are epistemically reliable. So, this is not a case where the social and epistemic aims conflict. But, there are many different ways to represent the world in reliable or empirically adequate ways, and not all of them would advance our social and policy aims to the same degree. Thus, appealing to the social and policy aims of research can help produce tuning decisions that are both epistemically reliable and responsive to social needs.

3.3 Assessing causation

Climate models are also being used in Probabilistic Event Attribution (PEA) to identify extreme weather events caused by climate change (Stott et al. 2011; Stone et al. 2009; Pall et al. 2011; Christidis et al. 2012). Assessing the extent to which anthropogenic greenhouse gases cause extreme weather events (such as monsoons, heat waves, hurricanes, or tornadoes) can be difficult given that some extreme weather would occur naturally regardless of climate change. In order to identify the portion of extreme weather events that can be attributed to anthropogenic emissions, PEA uses model experiments to calculate the extent to which a particular climate driver has changed the probability of an event occurring. This is done by comparing an ensemble of model simulations representing current conditions with a parallel ensemble representing the alternative possible worlds that might have occurred if the particular climate driver had been absent. Proponents of this approach have argued that PEA is important to several social and policy-related concerns, including: 1) providing evidence to the public that dangerous human-induced climate change is already impacting them and will worsen; 2) predicting imminent extreme weather events; 3) identifying long-term adaptation needs and priorities; 4) assessing potential compensation for damages; and 5) evaluating particular geoengineering interventions (Allen 2003; Stott et al. 2011).
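The counterfactual comparison at the heart of PEA can be made concrete with a schematic sketch. The probability ratio and the “fraction of attributable risk” (FAR = 1 − P0/P1) are standard quantities in the attribution literature cited above, but the ensembles, threshold, and numbers below are invented stand-ins rather than output from any actual climate model.

```python
# A schematic sketch of the counterfactual probability comparison behind PEA.
# The toy "ensembles" are random draws, not climate model output.

import numpy as np

rng = np.random.default_rng(0)

# Simulated seasonal rainfall (mm): one ensemble for the world as it is (with
# anthropogenic forcing), one for a counterfactual world without that forcing.
factual_ensemble = rng.normal(loc=520.0, scale=60.0, size=2000)
counterfactual_ensemble = rng.normal(loc=480.0, scale=60.0, size=2000)

FLOOD_THRESHOLD = 600.0  # rainfall beyond which severe flooding occurs (toy value)

p1 = np.mean(factual_ensemble > FLOOD_THRESHOLD)         # P(event | forcing present)
p0 = np.mean(counterfactual_ensemble > FLOOD_THRESHOLD)  # P(event | forcing absent)

probability_ratio = p1 / p0
fraction_attributable_risk = 1.0 - p0 / p1

print(f"P1 = {p1:.3f}, P0 = {p0:.3f}")
print(f"Probability ratio: {probability_ratio:.2f}")
print(f"Fraction of attributable risk: {fraction_attributable_risk:.2f}")
```

On this toy data the event is several times more likely in the “factual” ensemble, so a large fraction of the risk is attributed to the forcing; which events, thresholds, and climate drivers are worth analyzing in this way is where the policy aims discussed below come in.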

The claimed policy aims of PEA, however, are diverse and appear to involve distinct epistemic aims. The stated aim of generating evidence for the public that a past weather event was caused by climate change involves explanation, whereas the aims of weather forecasting and anticipating adaptation needs involve prediction.

Different policy-related epistemic aims, such as explaining a particular observed event or predicting an increased trend in precipitation in a specified area, may require different conceptions of causality, causal ontologies (or what counts as a causal entity), and methodologies. For example, Pall et al. (2011) aimed to evaluate whether the flooding in England and Wales in 2000 was caused by anthropogenic climate change. In this case, it made sense to isolate anthropogenic greenhouse gases as a distinct causal agent and treat the oceanic conditions as fixed (since they were the conditions for the area in question at the time). Only atmospheric models were therefore used in order to determine whether the presence of anthropogenic greenhouse gases significantly increased the probability of the flooding. This same approach, however, would not be appropriate for advancing the epistemic aims related to predicting long-term weather trends or adaptation needs. Making such predictions requires the ability to make a different sort of causal inference involving different causal actors. Predicting a trend in a type of extreme weather event (such as flooding) requires analyzing the interactive and dynamic changes that are likely to occur in both atmospheric and oceanic states. Thus, coupled atmospheric and ocean models are needed to support causal inferences about greenhouse gases and extreme weather trends. Moreover, the policy aim of identifying long-term adaptation needs appears to require a different causal ontology. Adaptation needs are presumably not merely a function of adapting to, for example, the 50 % more rain in Pakistan that is caused by anthropogenic greenhouse gases, but rather what is needed given both natural and human-induced changes in atmospheric conditions. In this case, it would not be helpful to distinguish anthropogenic greenhouse gases from other climate drivers as distinct causal agents. Thus, the justification of particular epistemic aims in PEA (explanation or prediction), the selection of a causal ontology, and the adoption of certain methodologies depend in part on the social and policy aims of the research.

Similarly, the policy aim of generating evidence that can be used in assessing losses resulting from extreme weather events may involve a different conception of causation. PEAs rely on a conception of causation as counterfactual probabilities. That is, anthropogenic greenhouse gases are assumed to be a cause of a particular weather event (such as the 2000 UK floods) insofar as the event is more likely to be observed in a world with anthropogenic emissions than in a world where the emissions are not present. But it is not clear that this is the same notion of “causation” that is relevant to assessing responsibilities in litigation over losses and damage. In this context, the aim is to distinguish losses that can be “blamed” on human activity as opposed to “bad luck” weather in order to determine whether compensation is warranted. Here the notion of causation appears to be tied to assessing moral responsibility, in which case there are other human causal factors that might be relevant, such as decisions about where to build homes in relation to coastlines. This would require a different causal ontology positing other human, political, cultural, or economic factors as causal agents relevant to assessing blame and responsibility. As Hulme et al. (2011) argue, hazards and damages are always mediated by a network of complex political and economic factors. Thus, once again, the policy aims of research are relevant to the conception of causation that researchers ought to adopt. Different social and policy aims would warrant different epistemic choices.

Insofar as decisions in constructing PEAs are not attentive to policy aims, they may produce information that is inaccurate (insofar as causal inferences are being made that are not well supported) or else useless. Appealing to the social and policy aims in making such decisions, then, is legitimate as it supports both the epistemological and ethical obligations of scientists.

3.4 Employing normative concepts

It can also be legitimate to appeal to ethical, social, and policy-related value judgments in advancing the epistemological or scientific aims of research when the empirical content of scientific claims or hypotheses is itself value-laden (Callicott et al. 1999; Anderson 2004; Dupré 2007). Insofar as scientific hypotheses contain normative concepts, value judgments will be relevant background assumptions in testing and assessing their empirical adequacy.

In addition to the case of “causal responsibility” used in PEA, IAMs involve concepts such as “climate change impacts,” “vulnerabilities,” and “dangerous” climate change. These are clearly concepts about what we take to be valuable or important to protect. Is it an “impact” if North American Plains Indians can no longer engage in traditional hunting practices, even if they have plenty of access to food in other ways, such as grocery stores? Insofar as decreases in biodiversity contribute to losses in linguistic traditions, does this constitute a distinct loss to be measured? Answering these questions depends on ethical judgments about what we take to be important to environmental or human flourishing. Whether hypotheses about the impacts produced by climate change are empirically adequate, then, will depend on background assumptions about those things that we take to be valuable and at least prima facie important to protect. That is, value judgments will be relevant to determining the sort of data an adequate hypothesis about climate change impacts will need to account for. In such cases, appealing to ethical and social values promotes the epistemic aim of producing empirically adequate models and theories of climate impacts. That is, it helps produce reliable empirical information about value-laden phenomena.

One might argue that this just shows that scientists should refrain from using value-laden concepts, or should try to describe the same phenomena in a “value-neutral” or “policy-neutral” way (Scott et al. 2007; Lackey and Robert 2007; IAC 2012). Perhaps there is some way to operationalize or stipulate what will count as an impact, for example, without making an assessment about whether a particular impact is good or bad. It is not clear that this is possible, because attempts to operationalize value-laden concepts or to use value-neutral language do not eliminate value judgments. Such attempts only make the value judgments more implicit. That is, we could stipulate a variety of descriptive categories to count as “impacts,” but whether this stipulation is reasonable will rely on background assumptions about those things we take to be most important to protect. But, perhaps more importantly, even if it were possible to find some more descriptive language to measure impacts, it is not clear that doing so would be very useful given the other aims of the research. If, for example, we tried to neutrally measure all of the potential causal effects of climate change (e.g., will there be increased rates of toenail fungus?), this would not be an efficient use of limited resources. Some impacts will be trivial while others will be more central to conceptions of the good or the flourishing of humans or the environment. If such research aims to inform policymakers in protecting what we value, then it seems rational to focus on potential impacts we take to be most important.

3.5 Selection of evidential categories and interpretation of data

Even if a particular concept or scientific category is not itself value-laden, there may be different ways that the phenomenon could be measured or conceived. Consider studies on the impacts of climate change on biodiversity. This might be measured in a variety of ways, including impacts to ecosystem diversity, genetic diversity, species diversity, and species richness. Different conceptions of biodiversity will have different implications for whether some data constitutes evidence for or against particular hypotheses. For example, whether current empirical data supports the claim that biodiversity is decreasing as the result of climate change will depend on what we count as a species and whether we are concerned with the total numbers of a species or the distribution of those species between ecosystems. How biodiversity is best measured depends, in part, on why we want the information. It is legitimate to appeal to ethical and social value judgments, then, in deciding which conceptual schemes to rely on in measuring biodiversity, as well as in employing those schemes in interpreting data, because doing so will help promote the aims of the research.
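As a toy illustration of how the choice of measure matters, the following sketch applies two standard ecological indices, species richness and the Shannon diversity index, to the same invented survey data: under one measure biodiversity appears unchanged, under the other it declines. The counts are hypothetical and not drawn from any actual study.

```python
# Toy illustration: whether the same data count as evidence of biodiversity
# decline depends on the measure chosen. All counts below are invented.

import numpy as np

def species_richness(counts):
    """Number of species present (abundances ignored)."""
    counts = np.asarray(counts, dtype=float)
    return int(np.sum(counts > 0))

def shannon_diversity(counts):
    """Shannon index H = -sum(p_i * ln p_i); sensitive to evenness."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)))

# Hypothetical surveys of the same four species before and after warming.
before = [50, 50, 50, 50]   # even community
after = [120, 60, 15, 5]    # same species persist, but two become rare

print(species_richness(before), species_richness(after))   # 4 -> 4: no decline
print(round(shannon_diversity(before), 2),                 # 1.39 -> 0.95: decline
      round(shannon_diversity(after), 2))
```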

Again, this is not to say that the social aims of research alone determine evidence. Whether there is evidence that a species is endangered is not just a question of whether conservation biologists want this to be the case because, say, they want to impose regulations on hunting or development. The value judgments involved do not determine how many species exist once a particular conceptual scheme has been adopted. Still, one might worry that the desire for a particular outcome might cause scientists to favor a conceptual scheme that will be more likely to yield the results they want (and indeed, these fears may be well-founded). But, this can be addressed insofar as the social aims of research are well-justified or informed by value judgments that are widely supported or democratically endorsed and not merely left to the preferences of scientists. This is why the aims approach requires such aims to be democratically justified and made more explicit at the beginning of the research process.

3.6 Dealing with uncertainties

Much of the discussion about value judgments in climate modeling has centered on their potential role in weighing the risks of errors related to uncertainties (Moss and Schneider 2000; Kandlikar et al. 2005; Risbey 2007; Biddle and Winsberg 2010; Winsberg 2012; van der Sluijs 2012). As mentioned, uncertainties arise in climate modeling as the result of processes that are poorly understood as well as factors difficult to model (IPCC 2013; Mearns 2010). Consequently, certain modeling decisions are underdetermined or epistemically unforced (Winsberg 2012). For example, there is significant uncertainty about cloud formation and water vapor feedback. Moreover, these variables are likely to have policy-relevant consequences as they can either amplify or dampen warming effects. Indeed, different assumptions about cloud formation have been found to account for much of the variation in climate sensitivity predicted between models (IPCC 2013, 9–3). Uncertainties in GCMs also contribute to a “cascade of uncertainty” as they are used in IAMs (that involve further uncertainties) to determine the likely environmental, economic, and health impacts of climate change (Schneider and Kunz-Duriseti 2002; Mearns 2010).

Yet despite such uncertainties, climate modelers must make decisions about how to represent poorly understood or complex processes, how to interpret data or assign probabilities to hypotheses about future climate change, and how to integrate data from physical science models to investigate the impacts of climate change. Such decisions are pragmatically forced insofar as failing to make any judgment in cases of uncertainty would prevent science from informing time-sensitive public policy decisions. Uncertainties in modeling are unlikely to be eliminated in the near future (if ever), and there is much that models can tell us even when uncertainties exist (Maslin and Austin 2012).

Thus, many have argued that it is legitimate to appeal to ethical values in making such decisions, because there are ethical consequences of error (Douglas 2009; Biddle and Winsberg 2010; Steel 2010; Winsberg 2012). For example, falsely assuming that cloud conditions will amplify warming could lead to over-regulation of CO2, as well as resources wasted on unnecessary mitigation strategies. At the same time, falsely assuming that clouds will lessen warming could lead to under-regulation of CO2 and may lead to greater climate change impacts than predicted. This could have potentially devastating and irreversible consequences for biodiversity, human health, food production, and economic systems. Thus, it is argued, decisions about which assumptions should be built into climate models depend not only on the probability of error, but also on value judgments about how bad the consequences of error would be, for instance, whether it is worse to risk over-regulating or under-regulating (Douglas 2009; Biddle and Winsberg 2010; Winsberg 2012).

Many have rejected this account by arguing that non-epistemic value judgments are not necessary for resolving cases of uncertainty in order to continue research or generate policy-relevant results (Betz 2007, 2013; Parker 2010, 2014). As Parker (2010) has pointed out, climate modelers typically work with ensembles of models that can make a diverse range of assumptions. Thus, to the extent that there is uncertainty, this may establish the need to generate ensembles that capture those uncertainties by representing a broad range of possibilities (Betz 2007; Parker 2010, 2014). That way, scientists and policymakers can identify likely trends given a wide range of possibilities while not making judgments about which particular set of assumptions should be treated as “correct” or as having the least risky consequences of error. Betz (2007, 2013) has also argued that scientists can avoid making value judgments by making the uncertainties clear and refraining from assigning subjective probabilities. That is, scientists can offer hedged hypotheses about what models show and make model limitations transparent. In this case, modeling decisions do not appear to be “forced” in the way proponents assume.
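The contrast between hedged and probabilistic reporting can be illustrated with a toy sketch: reporting only the range spanned by an ensemble avoids a weighting judgment, whereas assigning a probability requires one. The sensitivity values and the equal-weighting rule below are hypothetical choices, not estimates from any actual ensemble.

```python
# Toy illustration: two ways of reporting the same hypothetical ensemble of
# climate sensitivity estimates (deg C of warming per doubling of CO2).
# All numbers are invented; "equal weighting" is itself a further judgment.

import numpy as np

sensitivities = np.array([2.1, 2.6, 3.0, 3.4, 4.1, 4.7])  # hypothetical ensemble members

# Hedged report: state the range spanned by the ensemble, with no probabilities.
print(f"Ensemble range: {sensitivities.min():.1f} to {sensitivities.max():.1f} deg C")

# Probabilistic report: requires a judgment about how to weight ensemble members.
weights = np.full(len(sensitivities), 1.0 / len(sensitivities))  # equal weighting
prob_above_3 = float(weights[sensitivities > 3.0].sum())
print(f"P(sensitivity > 3.0 deg C) = {prob_above_3:.2f} under equal weighting")
```

Even the hedged report, of course, depends on which members were included in the ensemble in the first place, which is the point pressed below about what constitutes an appropriately diverse range of assumptions.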

But even if value judgments are not logically or pragmatically necessary for addressing uncertainties, this is not the relevant issue on the aims approach. The aims approach provides a framework for asking whether it would be epistemically or ethically desirable to address uncertainties by appealing to non-epistemic values regardless of whether this could be avoided. Would the social and epistemological aims of the research be better advanced on the whole by, for example, only endorsing hedged hypotheses or by adopting one set of initial conditions or assumptions over others? Even if it were possible to address uncertainties in ways that eliminated value judgments, it does not follow that it would be desirable to do so, particularly if this resulted in producing information that was less useful to protecting those interests valued by stakeholders. For example, it might be possible for the IPCC to refrain from assigning specific probabilities to certain hypotheses and instead hedge conclusions by saying “X is more likely than not to occur.” But, the more hedged hypotheses are, the less useful they may be to policymakers, particularly in cases where policymakers are trying to identify specific adaptation and mitigation priorities (which very well may require difficult tradeoffs).

Similarly, scientists might use a range of ensembles to capture a variety of possibilities, but there are still judgments to be made about what constitutes an appropriately diverse range of assumptions within model ensembles. Indeed, some have argued that current ensembles are actually too conservative in terms of the assumptions that are made about aerosols and water vapor feedbacks, insofar as we are concerned with assessing worst-case scenarios (Oppenheimer et al. 2007; Bray 2010). One could make decisions about what ranges are appropriately diverse without appealing to value judgments, but it appears that in doing so climate models may fail to produce the kinds of data that are most important to promoting the social aims of research. Appealing to non-epistemic as well as epistemic aims of research will be desirable in dealing with uncertainties insofar as this produces epistemically and ethically better research.

There may be other ways in which value judgments are relevant to modeling decisions, and this list is not intended to be exhaustive. I take these, however, to be some of the clearest examples where values seem relevant to modeling decisions, and they show that the aims approach can explain why such values are legitimate and even necessary for promoting the epistemic and social aims of research.

4 Objections and replies

Obviously more needs to be said in order to develop the aims approach, particularly in relation to what will constitute “democratically endorsed” research aims. While this cannot be fully accomplished here, I will attempt to address some of the central concerns this approach faces so as to defend the fruitfulness of that project.

4.1 Can the aims approach exclude paradigm cases of illegitimate values?

As noted earlier, one might worry that the aims approach allows value judgments to enter into scientific decision-making in ways that lead to bias. Imagine a group that decided that its aim was to stall regulatory policy and that it would make modeling decisions designed to promote that aim. This could lead to skewed methodologies, cherry-picking of data, or the sort of “wishful thinking” discussed earlier.

This concern is why, on the aims approach, value judgments about what constitutes the aims of a particular research context are not merely a function of whatever some scientists want or believe the aims to be. Rather, such aims must be informed by the epistemic and non-epistemic values of stakeholders. On the aims approach, value judgments will be illegitimate insofar as they are not relevant to promoting democratically endorsed aims of the research. Appealing to a particular value judgment may be illegitimate if the value fails to be democratically endorsed (as would presumably be the case with the desire to secure a particular outcome in order to stall regulatory policy). Or, scientists might be correctly attentive to a democratically endorsed social aim, but make an unjustified judgment about how best to promote that aim given the other epistemic and social aims of research. So, imagine that a widely endorsed interest is protecting homes in coastal areas and data suggest that certain areas will become uninhabitable regardless of what mitigation and adaptation interventions are pursued. It would be illegitimate for scientists to decide to simply throw out that data in order not to alarm the public, as this would further neither the epistemic nor the social aims of the research (or at least not as well as some alternatives).

4.2 Are scientists really making value judgments?

Conversely, one might argue that the aims approach is actually consistent with the traditional value-free ideal of science because it is largely stakeholders, and not individual scientists, who are making the ethical and social value judgments that constitute the aims of research. Once such values and aims are set, individual scientists need only engage in a kind of practical reasoning about what means will achieve or promote those ends. This sort of practical reasoning, however, does not really involve non-epistemic value judgments.

It is a mistake to think that the sort of means/ends reasoning that scientists must engage in does not involve value judgments. They must make evaluations about which sorts of models, methodologies, epistemic trade-offs, conceptual frameworks, and strategies for addressing uncertainties will best serve stakeholder interests. Such judgments require scientists to assess a variety of possible options in relation to the ethical and social values at stake in research and to determine when ethical and social aims are being furthered. As in the case of PEA, there may be multiple and even conflicting social needs, and scientists will have to make judgments about which aims should be prioritized all things considered, or which aims are more likely to be advanced given current technology (such as computing power). This cuts against a value-free ideal that requires scientists to be “neutral” and distance themselves from endorsing any particular ethical, political, or policy values or aims in conducting research. Yet scientists are actually in the best position to make these sorts of value assessments because they possess the necessary expertise to assess the limitations and strengths of current methodological approaches and the scientific considerations relevant to advancing social aims.

4.3 How can aims be “democratic” and does this demand too much?

One might be concerned that the aims approach imposes obligations on scientists to create democratic mechanisms and substantively involve stakeholders in ways that are not feasible. There are challenging issues here about who constitutes a stakeholder and how to secure stakeholder participation (since many of those most likely to be affected by issues such as climate change are from resource-poor countries and historically marginalized groups). Moreover, since it would be unrealistic to have all potential stakeholders participate in a decision-making process, it is unclear who could be said to legitimately “represent” a stakeholder group. For example, it is not clear that a government official or leader of, say, Bangladesh, has shared interests with, or can be an advocate for, all of the relevant stakeholder groups in that country. Given that conflicting values and interests are at stake, it is not clear that consensus about the aims of research would be easily reached. Thus, there is concern that science would come to a grinding halt in a quagmire of democracy.

First, in response, the ability of science to proceed in a timely manner is also presumably an interest of stakeholders, and there may be contexts where ideal democratic decision-making cannot be achieved. But democratic endorsement is something that comes in degrees, and, on the aims approach, modeling decisions can be more or less justified depending on the extent to which social and epistemological aims are clear and there is evidence that they would be broadly endorsed. Thus, the aims approach need not require that ideal democratic mechanisms be achieved in order to provide a successful framework for assessing the decisions of scientists.

Second, it is misguided to think of the aims approach as requiring a two-step process where 1) aims are democratically set and 2) scientists set off to pursue this agenda. Rather, both climate science and stakeholder engagement can proceed (as they do now) as a process of interactive feedback loops. That is, stakeholder input can refine scientific aims and identify needs for future research in ways that inform the science, and scientific developments can help stakeholders revise and refine needs and priorities. One way this might be done is at the level of generating calls for proposals for grants. That is, policymakers and scientific communities might establish stakeholder mechanisms for helping to identify policy aims, framing research questions, identifying the range of interests at stake, and receiving feedback about the kind of information that might be helpful for advancing policy aims. This input could be used in generating calls for grant proposals where the social aims of research are clearly identified. Consequently, the judgments of scientists discussed previously could be evaluated in relation to how well their choices promote the epistemic and social aims endorsed by stakeholders.

Indeed, there are many examples where stakeholder input has been successfully incorporated in research. Community-based participatory research has been used in many social science disciplines, where researchers work with Community Advisory Boards (CABs), composed of representatives of those groups affected by the research. This has been common, for example, in both national and global research on HIV/AIDS prevention and treatment. In these cases, CABs participate not merely in crafting policy recommendations (for example, for needle-exchange or HIV education programs). Rather, they play a role at various stages throughout the research process: in formulating what the policy aims and the priorities of the research should be, giving feedback on the extent to which methodological decisions sufficiently advance those aims (such as clinical trial methodology), and providing critical feedback on assumptions that scientists have made in interpreting data (Epstein 1996).

In the context of climate change research, there are increasing efforts to incorporate stakeholder input throughout the research process (Kloprogge and van der Sluijs 2006; Tang and Dessai 2012; Kirchhoff et al. 2013). The UK Climate Impacts Programme has developed mechanisms for working with stakeholders to identify adaptation needs and receive critical feedback on modeling strategies to produce more “useable knowledge” (Tang and Dessai 2012). In the United States, regional-scale assessments have been found to produce scientific knowledge that is more useful to policymakers precisely because they have successfully incorporated policy needs into methodological decision making through frequent and sustained interaction with stakeholders throughout the research process (Kirchhoff et al. 2013). The IPCC has also adopted mechanisms for incorporating stakeholder input and critical feedback at several stages. Government representatives are involved in determining the scope and framing of reports before they are prepared, and there is an extensive review procedure for generating reports as well as the summary for policymakers (IPCC 2008). Although providing a full account of how appropriate stakeholder participation should be secured is beyond the scope of this paper, my aim here is to establish that this undertaking is a crucial one for helping to justify the value judgments relevant to research.

5 Conclusions

Ethical, social, and political values can operate in ways that are legitimate and ways that are illegitimate in science and distinguishing these is important. This is not merely an interesting theoretical question for philosophers of science, but also crucial to identifying the sorts of practices and mechanisms that are needed to produce science that is not only epistemically sound, but also responsive to social needs and interests.

On the aims approach it is legitimate to appeal to ethical and social values insofar as doing so will promote democratically endorsed epistemic and social aims of research. First, this framework is able to capture a wide range of ways in which values seem relevant to scientific decision-making and can explain why appealing to ethical and social values in certain cases can be both epistemically and socially beneficial. Second, it shows that the important question is not whether values are necessary to decisions about, for example, how to address uncertainties or underdetermination, but rather whether appealing to such values is desirable given the epistemic and social aims of research. Moreover, it prevents scientists from having disproportionate power in deciding what values ought to be endorsed.

Others have recently developed approaches that appeal to the aims of research (Elliott 2013; Elliott and McKaughan 2014), although there are two distinct features of the approach developed here. First, the social and policy aims of science are understood as deeply connected to the epistemic aims of science, such that they cannot easily be distinguished and should not be thought of as secondary considerations (Elliott 2013) or as “trumping” epistemic considerations (Elliott and McKaughan 2014). Rather, the social and ethical aims of research can have implications for the kinds of epistemic virtues we take to be important as well as the sort of data that empirically adequate theories must account for. Similarly, epistemic or cognitive considerations can be useful to promoting the social and policy-related aims of research.

A second feature of the aims approach developed here is that it requires the epistemic and social aims of research to be democratically informed by the interests and values of stakeholders. While there is more work to be done to understand how this might be accomplished in practice, this helps to address concerns that individual scientists will have disproportionate power in determining which values are endorsed. While it is legitimate for individual scientists to sometimes make ethical and social value judgments, they must do so in consultation with values that are widely accepted and democratically endorsed.