Abstract
How much evidence would it take to convince climate skeptics that they are wrong? I explore this question within an empirical Bayesian framework. I consider a group of stylized skeptics and examine how these individuals rationally update their beliefs in the face of ongoing climate change. I find that available evidence in the form of instrumental climate data tends to overwhelm all but the most extreme priors. Most skeptics form updated beliefs about climate sensitivity that correspond closely to estimates from the scientific literature. However, belief convergence is a nonlinear function of prior strength and it becomes increasingly difficult to convince the remaining pool of dissenters. I discuss the necessary conditions for consensus formation under Bayesian learning and show that apparent deviations from the Bayesian ideal can still be accommodated within the same conceptual framework. I argue that a generalized Bayesian model provides a bridge between competing theories of climate skepticism as a social phenomenon.
1 Introduction
Climate change has come to represent a defining policy issue of our age. Yet support for comprehensive climate policy at the global scale remains elusive. Decades of accumulated research and an overwhelming scientific consensus have not been enough to convince everyone. Many policy makers and ordinary citizens remain openly skeptical about the human role in our changing climate (Oreskes 2004; Anderegg et al. 2010; Doran and Zimmerman 2011; Cook et al. 2013; Verheggen et al. 2014; Tol 2014; Cook et al. 2016; Saad 2019). What are we to make of this skepticism? And just how much evidence would it take to convince climate skeptics that they are wrong? I seek to answer these questions within an empirical Bayesian framework. My goal is to understand how skeptics would respond to increasing evidence for human-induced climate change, provided that they update their beliefs rationally. In so doing, I hope to shed light on our current policy impasse and the possibility for finding common ground in the near future.
Beliefs about climate change are powerful. They dictate our choices as individuals and policies as societies. Our beliefs also shape how we interpret new information about the world. We are more predisposed to accept data that accords with our priors, and vice versa. For a climate skeptic, as for anyone else, beliefs provide a lens through which information is subjectively interrogated and made intelligible. This is not to say that beliefs are immutable. A central theme of Bayesianism—the intellectual framework for this paper—is the process by which beliefs are updated through exposure to new information. But our responsiveness to this new information may be greatly diminished, depending on how strongly we hold our existing beliefs. Exactly how great of a diminishing effect is the focus of this paper. To preview my method and findings, I consider a range of skeptic beliefs and examine how these “priors” modulate a person’s responsiveness to climate data. I find that available evidence in the form of instrumental climate data tends to overwhelm all but the most extreme cases. However, I also document the nonlinear effect that beliefs have on convergence with the scientific consensus. Even as most skeptic priors are overwhelmed by the evidence for climate change, it becomes increasingly difficult to convince the remaining dissenters that they are wrong.
Numerous studies have explored the cultural and psychological factors underlying climate skepticism. These include Kahan et al. (2011, 2012), McCright and Dunlap (2011a, 2011b), Corner et al. (2012), Ranney et al. (2012), Clark et al. (2013), and Lewandowsky et al. (2019)—see Hornsey et al. (2016) for a review. Broadly speaking, these studies divide into two camps. One strand of the literature emphasizes the so-called “deficit model,” which posits that climate skepticism originates from a lack of relevant background knowledge. This includes an understanding of the underlying evidence and physical mechanisms, as well as the true extent of the scientific consensus. However, another camp has come to advocate for a theory of “cultural cognition,” which interprets climate skepticism as a social phenomenon resulting from shared value systems and group identity dynamics. In this latter view, a person’s scientific sophistication is relevant only insofar as it allows them to better marshal arguments in support of pre-determined positions (i.e., reinforcing cultural and tribal affiliations). I shall return to these two competing frameworks later in the paper. For the moment, my concern is less with the origins of climate skepticism than what it represents: namely, a set of beliefs about the rates and causes of global climate change.
A convenient way to model beliefs about climate change is by defining skepticism in terms of climate sensitivity, i.e., the temperature response to a doubling of CO2. Specifically, we can map skeptic beliefs directly onto subjective estimates of climate sensitivity, because they both describe the probable causes and distribution of future warming. The particular measure of climate sensitivity that I focus on here is the transient climate response (TCR). Formally, TCR describes the warming at the time of CO2 doubling—i.e., after 70 years—in a 1% per year increasing CO2 experiment (IPCC 2013). For the purposes of this paper, however, it will simply be thought of as the contemporaneous change in global temperature that results from a steady doubling of atmospheric CO2.
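The 70-year horizon embedded in this definition follows directly from compound growth; a quick illustrative check:

```python
# TCR is evaluated at the point of CO2 doubling in a 1% per year experiment.
# Compounding at 1% per year reaches a doubling after roughly 70 years.
ratio = 1.01 ** 70  # concentration relative to the starting level
```

Since 1.01 raised to the 70th power is approximately 2.01, the 70-year mark is where atmospheric CO2 has (just) doubled.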
According to the Intergovernmental Panel on Climate Change (IPCC), TCR is “likely” to be somewhere in the range of 1.0–2.5 ∘C (IPCC 2013). This corresponds to an approximate 66–100% probability interval in IPCC terminology. The IPCC further emphasizes the inherently Bayesian nature of climate sensitivity estimates, going so far as to state:
[T]he probabilistic estimates available in the literature for climate system parameters, such as ECS [i.e. equilibrium climate sensitivity] and TCR have all been based, implicitly or explicitly, on adopting a Bayesian approach and therefore, even if it is not explicitly stated, involve using some kind of prior information. (IPCC 2013, p. 922)
To understand why classical (i.e., frequentist) methods are ill-suited for the task of producing credible estimates of climate sensitivity, recall that frequentism interprets probability as the limiting frequency in a large number of repeated draws. Such a narrow definition holds little relevance to the question of climate sensitivity, for which there exists but one unique value. There is no population of “sensitivities” to draw samples from. I too adopt a Bayesian framework for determining climate sensitivity and its concomitant policy implications. However, my approach differs from the previous literature along several dimensions.
The most obvious point of departure is the fact that I deliberately focus on the beliefs of skeptics. Priors for determining climate sensitivity are usually based on paleo data, the judgments of scientific experts, or noninformative methods. Such approaches may possess obvious scientific merit for establishing a best estimate of climate sensitivity. Yet, they are of limited relevance for understanding people’s motivations and voting behavior when it comes to actual climate policy. My approach is to take skeptics at their word and work through to the conclusions of their stated priors. In other words, my goal is to recover posterior probabilities about the rate and causes of climate change that are logically consistent with the initial beliefs of these skeptics.
Contrarian climate beliefs have also been largely ignored in the economic and policy literature to date. The handful of studies that do consider policy options from the skeptic perspective have tended to emphasize edge scenarios like climate catastrophe and irreversibility. For example, Kiseleva (2016) introduces an integrated assessment model (IAM) of heterogeneous agents that incorporates various degrees of climate skepticism. She shows that a world comprised only of skeptical policy makers will make sufficient investments in mitigation measures to avoid catastrophic outcomes. The key mechanism is a dominant subset of “weak” skeptics who are sufficiently concerned by anthropogenic climate change that they reduce their emissions accordingly. Kiseleva (2016) does not allow for learning in her simulations.Footnote 1 However, theoretical work by Van Wijnbergen and Willems (2015) shows that climate skeptics actually have an incentive to reduce emissions, since it will facilitate learning about the true causes of climate change. While it is possible for an increase in emissions to yield similar learning effects, the irreversibility of climate change renders this an inferior strategy. From a methodological perspective, the present paper differs from these earlier studies by combining Bayesian learning with an empirical framework.Footnote 2 Unlike the existing numerical and game-theoretic approaches, I am not attempting to prescribe an optimal emissions strategy or learning paths for climate skeptics under future uncertainty. Rather, my goal is to establish some ground rules for thinking about climate policy today, given the information that is already available to us.
Another distinguishing feature of this paper is that the results are derived via conceptually straightforward time-series regression analysis. While climate scientists have typically relied on complex computer models to simulate TCR, a growing body of research is aimed at understanding the link between human activities and climate change through the lens of time-series econometrics. Much of this literature has concerned itself with the apparent nonstationarity of climate data over time. The present paper takes as its foundation recent research (Gay-Garcia et al. 2009; Estrada et al. 2013a, 2013b; Kim et al. 2020), which argues convincingly that global surface temperatures and anthropogenic forcings are best described as trend-stationary processes, incorporating common structural breaks.Footnote 3 The upshot is to permit the use of level terms within an ordinary least squares (OLS) regression framework. Such matters notwithstanding, virtually all econometric studies of climate change attribution to date have been carried out in the frequentist paradigm. They do not consider the influence of priors, nor are they able to yield the probabilistic estimates that are characteristic of Bayesian analysis. A noteworthy early exception is that of Tol and De Vos (1998), who are motivated to adopt a Bayesian approach because of multicollinearity in their anthropogenic emissions data. Such multicollinearity does not plague newer datasets, since these are defined in terms of common units (e.g., Wm−2). Furthermore, Tol and De Vos (1998) do not consider the influence of overtly contrarian priors as a basis for affecting policy.
2 Econometric approach
2.1 Bayesian regression overview
The Bayesian regression framework is less familiar to many researchers than the frequentist paradigm that is commonly taught in universities. For this reason, I provide a brief overview of the key principles of the Bayesian method and highlight some important distinctions versus the frequentist approach.
A Bayesian regression model uses the logical structure of Bayes’ theorem to estimate probable values of a set of parameters 𝜃, given data X:

p(𝜃|X) = p(X|𝜃) p(𝜃) / p(X)     (1)
Here, p(𝜃|X) is known as the posterior and serves as the fundamental criterion of interest in the Bayesian framework. The posterior asks, “What are the probable values of our parameters, given the observed data?” This stands in direct contrast to the first term in the right-hand numerator, p(X|𝜃), which is the familiar likelihood function from frequentist statistics. The likelihood essentially reverses the question posed by the posterior and instead asks, “How likely are we to observe some data for a given set of parameters (e.g., based on an assumption about the data generating process)?” The second term in the numerator is the prior, p(𝜃). While the prior can take on any distributional form, it should in principle encapsulate our knowledge about the parameters before we have observed the data. Insofar as we are interested in learning about 𝜃, it is common practice to ignore the term in the denominator, p(X). This is simply the marginal probability of the data and can be thought of as a normalization constant, which ensures that the posterior is a proper probability distribution (i.e., integrates to one) and can be calculated after the fact if needed. For this reason, Eq. 1 is typically re-written as

p(𝜃|X) ∝ p(X|𝜃) p(𝜃)     (2)
Equation 2 embodies the mantra of Bayesian statistics: “The posterior is proportional to the likelihood times the prior.” Solving for the posterior typically involves a combination of integrals that cannot be calculated analytically.Footnote 4 Fortunately, we can simulate the posterior density computationally using Markov chain Monte Carlo (MCMC) routines. This can be done for virtually any combination of prior and likelihood function. Obtaining a valid posterior is then simply a matter of: (i) choosing a prior distribution for our regression parameters, i.e., regression coefficients and variances, and (ii) specifying a likelihood function to fit the data. Since the specification of the regression model determines how we map parameter values to beliefs about TCR, I begin with the likelihood function.
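As a concrete illustration of Eq. 2, the sketch below approximates a single-parameter posterior on a grid: an assumed normal prior is multiplied pointwise by a Gaussian likelihood and the product is normalized numerically. All data and distributional choices here are hypothetical and purely illustrative; the actual computations in this paper use MCMC via Stan.

```python
import numpy as np

# Grid approximation of Bayes' rule for one parameter: the posterior is
# proportional to the likelihood times the prior (Eq. 2). Toy data only.
rng = np.random.default_rng(42)
data = rng.normal(loc=1.5, scale=0.5, size=50)   # hypothetical observations

theta = np.linspace(-2.0, 4.0, 2001)             # grid of candidate values
dx = theta[1] - theta[0]
prior = np.exp(-0.5 * theta**2)                  # N(0, 1) prior, unnormalized

# Gaussian log-likelihood of all observations at each grid point (sd = 0.5)
loglik = (-0.5 * ((data[:, None] - theta[None, :]) / 0.5) ** 2).sum(axis=0)

# Posterior ∝ likelihood × prior; normalize numerically on the grid
post = prior * np.exp(loglik - loglik.max())
post /= post.sum() * dx

post_mean = (theta * post).sum() * dx            # posterior best estimate
```

Because this toy model is conjugate, the grid result can be checked against the closed-form answer: the posterior mean equals the sample mean shrunk toward the prior mean by the ratio of precisions.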
2.2 Likelihood function
The likelihood function is governed by the choice of empirical model. Following Estrada and Perron (2012) and Estrada et al. (2013a), I model global temperatures using the regression equation

GMSTt = β0 + β1RFt + β2VOLCt + β3SOIt + β4AMOt + 𝜖t     (3)
where 𝜖t = ϕ𝜖t−1 + νt is a first-order autoregressive, or AR(1), error process.
Here, GMST is the global mean surface temperature anomaly relative to the pre-industrial period (defined as the 1871–1900 average); RF is total radiative forcing due to both anthropogenic and natural factors (excluding volcanic eruptions); VOLC is the radiative forcing due to volcanic stratospheric aerosols; and SOI and AMO are scaled indices of the Southern Oscillation and Atlantic Multidecadal Oscillation, respectively. The subscript t denotes time. Specifying that the error term 𝜖 follows an AR(1) process allows us to account for dynamic elements such as potential autocorrelation.
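To illustrate the mechanics of this specification, the following sketch simulates a toy version of Eq. 3 with a single forcing covariate and an AR(1) error, then recovers the forcing coefficient by OLS on the level terms. All parameter values are invented for illustration.

```python
import numpy as np

# Toy analogue of Eq. 3: temperature responds linearly to a steadily rising
# forcing series, with an AR(1) error process. All values are hypothetical.
rng = np.random.default_rng(0)
T = 140                                  # years, matching the historical sample
rf = np.linspace(0.0, 2.5, T)            # steadily rising forcing (Wm−2)

beta0, beta1, phi = -0.1, 0.4, 0.5       # intercept, forcing response, AR(1) coef
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = phi * eps[t - 1] + rng.normal(scale=0.08)  # AR(1) innovations

gmst = beta0 + beta1 * rf + eps

# OLS on the level terms (valid here because the series are trend-stationary)
X = np.column_stack([np.ones(T), rf])
coef, *_ = np.linalg.lstsq(X, gmst, rcond=None)
```

The recovered slope `coef[1]` lands close to the true response of 0.4, even though plain OLS ignores the autocorrelation (which chiefly affects the uncertainty estimates, not the point estimate).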
Two points merit further discussion before continuing. First, nothing much hinges on the use of OLS for estimating TCR. For example, the β1 coefficient above is equivalent to the “climate resistance” constant (ρ) described in Gregory and Forster (2008); a point I return to later. OLS simply provides a convenient method for combining data and priors in a consistent Bayesian framework. Other methods could in principle be used to derive the same results. Second, the use of a composite RF variable that combines both anthropogenic and natural forcings may, at first blush, seem an odd choice. After all, the goal of this paper is to separate out and interrogate skepticism specifically about the human role in climate change. However, recall that the underlying forcings in my dataset are all expressed in terms of a common unit (i.e., Wm−2). This circumvents the multicollinearity problems that would arise from estimating an econometric model on forcings that have been separated out.Footnote 5 Econometric issues aside, the use of a common forcing unit ensures that I do not run the risk of estimating different coefficients for identical forcings, which would imply an inconsistent response of the climate system. The use of a composite forcing series is thus a necessary step to ensure that the model remains physically consistent.Footnote 6 I shall demonstrate later in the paper that relaxing these constraints nonetheless yields virtually identical conclusions to the physically correct specification.
Returning to my primary regression model, Eq. 3 implies a likelihood function that is multivariate normal,

p(GMST|β, σ²) = (2πσ²)^(−T/2) exp[−(GMST − Xβ)′(GMST − Xβ)/(2σ²)]     (4)
where X is the design matrix of explanatory variables; β is the coefficient vector; σ² = Var(𝜖) is the variance of the error term; and T = 140 is the number of years in the collated historical dataset. Equation 4 can also be written more simply as GMST|β,σ² ∼ N_T(Xβ, σ²I).
An important feature of Eqs. 3 and 4 is that they define how we should map probabilities about the regression parameters to beliefs about climate sensitivity. Recall that TCR describes the contemporaneous change in temperature that will accompany a steady doubling of atmospheric CO2 concentrations. It follows that

TCR = β1 × F2×     (5)
where β1 is the regression coefficient describing how responsive global temperatures are to a change in total radiative forcing, and F2× is the change in forcing that results from a doubling of CO2. For the latter, I use the IPCC’s best estimate of F2× = 3.71 Wm−2 and further assume an additional ± 10% variation to account for uncertainties over spatial heterogeneity and cloud formation (Schmidt 2007; IPCC 2001).Footnote 7 The key point is that assigning a distribution over the parameter β1 will necessarily imply a distribution for TCR, and vice versa. We therefore have a direct means of linking prior and posterior probabilities of the regression parameters to beliefs about TCR. It also means that the primary goal of the regression analysis will be to determine probable values of β1. The rest of the parameters will take a backseat in the analysis that follows, acting largely as controls.
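The mapping in Eq. 5 is easily operationalized: given posterior draws of β1, multiplying by draws of F2× yields a posterior distribution for TCR. The sketch below assumes a normal β1 posterior centered on 0.426 (the noninformative posterior mean reported later in Table 3, with an invented spread) and treats the ± 10% variation in F2× as a uniform band; both are illustrative modeling choices, not the paper's exact procedure.

```python
import numpy as np

# Convert posterior draws of the forcing coefficient β1 into a TCR
# distribution via Eq. 5, TCR = β1 × F_2x, with ±10% uncertainty on F_2x.
rng = np.random.default_rng(1)
n = 100_000

beta1_draws = rng.normal(loc=0.426, scale=0.03, size=n)   # assumed posterior
f2x_draws = rng.uniform(3.71 * 0.9, 3.71 * 1.1, size=n)   # ±10% band on F_2x

tcr_draws = beta1_draws * f2x_draws
lo, hi = np.percentile(tcr_draws, [2.5, 97.5])  # 95% credible interval
```

With these stand-in numbers the implied TCR distribution centers near 1.6 ∘C, in the same neighborhood as the posterior estimates discussed in Section 5.1.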
Equation 5 contains an implicit assumption that will have bearing on the external validity of my results—specifically, the extent to which they can be extrapolated to different future climate scenarios. Recall that β1 is equivalent to the “climate resistance” parameter (ρ) defined in Gregory and Forster (2008) as the constant sum of the ocean heat uptake efficiency and the climate feedback parameter. The importance of this equivalence is that it underscores the role of oceanic thermal dynamics, and the implicit assumption of a linear scaling between the different climate components of my regression model. This linear scaling holds for climate scenarios that are characterized by a steady increase in radiative forcing—as was the case during the instrumental temperature record. However, it cannot be expected to hold in future scenarios that are characterized by significant changes in the rate of radiative forcing (e.g., due to aggressive mitigation action). In such cases, ocean heat uptake would need to be modeled separately to account for inertia in the climate system and its resultant impact on GMST (ibid.). All of which is to say that I will limit my analysis to the historical period, as well as future climate scenarios that are characterized by steady increases in radiative forcing.
3 Priors
Climate skepticism is a matter of degree. I account for this fact by defining a simple typology of skeptics as per Table 1. Summarizing, I distinguish between two basic skeptic archetypes based on their best guess for TCR. Lukewarmers (cf. Ridley 2015) believe that TCR lies around 1 ∘C—i.e., the lower bound of the IPCC “likely” range—while deniers believe that TCR is likely zero. I further distinguish these individuals based on how certain they are about their best guess. A person with moderate convictions believes that the true value of TCR lies within a 1 ∘C uncertainty interval of their prior mean (95% probability), while that interval falls to just 0.25 ∘C for someone with strong convictions. Altogether, this yields a spectrum of skeptic priors that ranges from moderate lukewarmers to strong deniers. Importantly, each skeptic can be represented mathematically by a prior distribution on TCR. I assume normal distributions for simplicity, where the mean represents an individual’s best guess and the variance their uncertainty.Footnote 8 Following Eq. 5, obtaining priors over β1 is a simple matter of dividing the respective TCR distributions by F2× = 3.71 Wm−2. These are the parameters that actually enter the Bayesian regression model and are also shown in Table 1.
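The typology can be expressed compactly in code. The sketch below reads each 95% uncertainty interval as a symmetric band around the prior mean (half-width = 1.96 prior standard deviations), which is one plausible interpretation of Table 1, and converts TCR priors to β1 priors by dividing through by F2×.

```python
# Build the four stylized skeptic priors on TCR and convert them to priors
# on β1 via Eq. 5 (dividing by F_2x = 3.71 Wm−2). Reading the 95% interval
# as ±half-width around the mean is an interpretive assumption.
F2X = 3.71

def prior_params(mean_tcr, interval_width):
    """Return ((mean, sd) on TCR, (mean, sd) on β1) for a 95% interval."""
    sd_tcr = (interval_width / 2) / 1.96   # half-width = 1.96 sd for a normal
    return (mean_tcr, sd_tcr), (mean_tcr / F2X, sd_tcr / F2X)

skeptics = {
    "moderate lukewarmer": prior_params(1.0, 1.00),
    "strong lukewarmer":   prior_params(1.0, 0.25),
    "moderate denier":     prior_params(0.0, 1.00),
    "strong denier":       prior_params(0.0, 0.25),
}
```

Each entry pairs the subjective TCR prior with the β1 prior that actually enters the regression model.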
In addition to the subjective priors of our stylized skeptics, a useful reference case for the analysis is provided by a set of so-called noninformative priors. Loosely speaking, noninformative priors are vague and should not privilege particular parameter values over others. In practice, applied Bayesian researchers are advised to use noninformative priors that are weakly data-dependent (Gelman et al. 2020). For example, priors should be scaled to reflect feasible magnitudes of the underlying data. If the data are observed in the order of millimeters, then the prior should not allocate plausible weight to values in the order of kilometers, and so forth. This modest form of regularization not only helps to ensure computational stability, but also avoids some of the theoretical pathologies associated with uniform priors (cf. Annan and Hargreaves 2011). I therefore use a set of reference priors that have been scaled to reflect this limited data dependence. Given a generic dependent variable y and independent variable x, I define a noninformative prior for the associated regression coefficient βx ∼ N(0, (2.5·sy/sx)²), where sy = sd(y) and sx = sd(x).Footnote 9 In other words, my noninformative priors take the form of normal distributions with wide variances. For my default regression specification, this equates to a prior on the key radiative forcing coefficient of β1 ∼ N(0, 1.214²).
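The weakly data-dependent scale can be computed directly from the sample standard deviations. The series below are random stand-ins for the actual temperature and forcing data, so the resulting number is illustrative only.

```python
import numpy as np

# Weakly data-dependent "noninformative" prior scale for a regression slope:
# sd = 2.5 × s_y / s_x, in the spirit of the Gelman et al. (2020) defaults.
rng = np.random.default_rng(2)
gmst = rng.normal(0.3, 0.35, 140)   # toy stand-in for the temperature series
rf = rng.normal(1.0, 0.70, 140)     # toy stand-in for the forcing series

prior_sd = 2.5 * gmst.std(ddof=1) / rf.std(ddof=1)
# The slope prior is then N(0, prior_sd**2): wide relative to the data scale
```

With these stand-in magnitudes the scale comes out near 1.25, i.e., the same order as the 1.214 prior standard deviation used in the default specification.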
Note that my group of skeptics only hold subjective priors over TCR (and thus β1). Noninformative priors are always assumed for the remaining parameters in the regression equation. I also acknowledge that these skeptics are, of course, highly stylized caricatures. Their priors are simply taken as given: I am not concerned with where they come from or why they are of a particular strength. Such abstractions are ultimately unimportant given the objectives of this study. My goal is to explore how different climate skeptics would respond to evidence for climate change, provided that they update their beliefs rationally. Moreover, this simple typology gives a sense of just how strong someone’s prior beliefs would need to be to preclude the acceptance of any policy interventions.
4 Data
The various data sources for this paper are summarized in Table 2. Global mean surface temperature data (1850–2017) are taken from the HadCRUT4 dataset, jointly compiled by the UK Met Office and the Climatic Research Unit at the University of East Anglia. Two alternate global temperature reconstructions—one provided by Cowtan and Way (2014) (hereafter, CW14) and the other by the NASA Goddard Institute for Space Studies (GISTEMP)—are used as a check against spatial coverage issues and other uncertainties.Footnote 10 Radiative forcing data, covering both historic estimates (1765–2005) and future scenarios (2006–2300), are taken from the Representative Concentration Pathway (RCP) database, hosted by the Potsdam Institute for Climate Impact Research. These data include anthropogenic sources of radiative forcing like industrial greenhouse gas emissions, as well as natural sources like solar irradiance and volcanic eruptions. As a part of the sensitivity analyses, I use an ensemble of 1000 forcing estimates to capture measurement uncertainty about radiative forcing data. This ensemble originates with Dessler and Forster (2018), although I use a recapitulated version provided by Hausfather et al. (2020) for ease of access. Data for two major oceanic-atmospheric phenomena, the Atlantic Multidecadal Oscillation (AMO, 1856–2017) and the Southern Oscillation Index (SOI, 1866–2017), are taken from the US National Oceanic and Atmospheric Administration (NOAA) and National Center for Atmospheric Research (NCAR). Summarizing the common historic dataset for which data are available across all series, we have 140 annual observations running over 1866–2005. RCP scenarios until 2100 will also be considered for making future predictions later in the paper.
5 Results
The analysis for this project was primarily conducted in R (R Core Team 2020, version 4.0.2), with the Bayesian computation being passed on to the Stan programming language (Gabry and Češnovar 2020). All of the code and data needed to reproduce the results can be found at the companion GitHub repository.Footnote 11
5.1 Regression results and updated TCR beliefs
The posterior regression results for the various prior types are presented in Table 3. Each column contains the results from running the Bayesian regression Eq. 3 over the full historical dataset (1866–2005), using a particular set of priors. Beginning with the noninformative case in the first column, all of the regression coefficients are credibly different from zero and of the anticipated sign. For example, GMST is negatively correlated with SOI. This is to be expected, since the El Niño phenomenon is defined by SOI moving into its negative phase. The posterior coefficient density on our main parameter of interest, total radiative forcing (RF), shows that global temperature will rise by an average of 0.426 ∘C for every Wm−2 increase in forcing. Of greater interest, however, is the fact that the posterior estimates yielded by the group of skeptic priors are very similar to this noninformative case. With the exception of the strong denier, there is a clear tendency to converge toward the noninformative parameter values.
Of course, the exact values of the regression parameters are themselves of somewhat limited interest. Rather, their primary usefulness is to enable the recovery of posterior beliefs about TCR. These are summarized at the bottom of Table 3, while the full prior and posterior distributions are plotted in Fig. 1. We see that the posterior TCR distributions are generally clustered around a best estimate of 1.5 ∘C, with a 95% credible interval in the region of 1.1–1.8 ∘C, depending on the prior. Excepting the strong denier, these posterior beliefs about TCR fall comfortably within the IPCC “likely” range. However, the derived probability intervals are decidedly narrower and TCR values at the upper end of the spectrum are discounted accordingly.
Further insight into the updating behavior of our stylized skeptics is provided by the recursive TCR estimates shown in Fig. 2. Note that these recursive estimates are run backwards in time to mimic the perspective of a present-day skeptic looking back over an increasing body of historical evidence. It is apparent that stronger convictions about one’s prior beliefs (in the form of a smaller prior variance) have a greater dampening effect on posterior outcomes than the prior mean. For example, the moderate denier converges more rapidly to the noninformative distribution than the strong lukewarmer. However, most skeptics will converge to the noninformative distribution only after “observing” data from a number of decades. Note that this does not alter the conclusions that we are able to draw from our Bayesian analysis. As long as we have fully specified a prior that encapsulates a person’s initial beliefs, then we should in principle treat the full historical dataset as new information for updating those beliefs.Footnote 12 Yet it does highlight the importance of using all the available instrumental climate data for building any kind of policy consensus. Limiting the sample period under observation to, say, the last 35 years would largely preclude the possibility of consensus formation. The tendency of some prominent skeptics to rely on satellite records of global temperatures—which only stretch back as far as 1979—could be seen as anecdotal evidence in support of this claim (e.g., Mooney 2016). A similar argument could be made for a reliance on short-term climate trends and fluctuations that do not accurately reflect longer-term trends, such as the relatively brief “hiatus” in warming that followed the exceptionally strong 1998 El Niño event (Lewandowsky et al. 2016).
Returning to the question posed at the beginning of this paper: How much evidence would it take to convince climate skeptics that they are wrong about global warming? One way to reframe this question is to think about how much data a skeptic needs to observe before their best estimate of climate sensitivity begins to look reasonable to a mainstream climate scientist. For example, how long would it take before they obtained a mean posterior TCR of 1.3 ∘C or 1.5 ∘C? While it is possible to look at the skeptics’ recursive TCR estimates using only historical data, we run into problems with the more extreme priors. Put plainly, there is simply not enough historical data to overcome higher orders of skepticism. I therefore simulate over 200 years’ worth of global temperature and climate data using parameters obtained from the noninformative Bayesian regression in Table 3. I then use this simulated data to run a set of secondary regressions that are distinguished by a range of different skeptic priors on TCR. (This range is much more granular than my original four-skeptic typology.) Each regression is estimated recursively, incrementing one year at a time, until I obtain a posterior TCR distribution that has a mean value equal to the relevant target.
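A stylized version of this convergence experiment can be sketched with conjugate normal updating on a single parameter, standing in for the full recursive regression. The observation noise level, prior spreads, and target below are all invented for illustration; the point is only the qualitative result that tighter priors take disproportionately longer to converge.

```python
import numpy as np

# Stylized convergence experiment: each year delivers one noisy observation
# of the "true" forcing response, and a skeptic's normal prior is updated
# conjugately until the implied TCR best estimate reaches a target value.
# A simplification of the full Bayesian regression; all numbers illustrative.
F2X, TRUE_BETA1, OBS_SD = 3.71, 0.426, 0.15

def years_to_converge(prior_mean_tcr, prior_sd_tcr, target_tcr,
                      max_years=500, seed=3):
    rng = np.random.default_rng(seed)
    mu, var = prior_mean_tcr / F2X, (prior_sd_tcr / F2X) ** 2
    for year in range(1, max_years + 1):
        obs = rng.normal(TRUE_BETA1, OBS_SD)        # one year of evidence
        post_var = 1 / (1 / var + 1 / OBS_SD**2)    # conjugate normal update
        mu = post_var * (mu / var + obs / OBS_SD**2)
        var = post_var
        if mu * F2X >= target_tcr:                  # implied TCR hits target
            return year
    return None

# Tighter priors (smaller sd) take disproportionately longer to converge
t_moderate = years_to_converge(0.0, 0.500, 1.3)
t_strong = years_to_converge(0.0, 0.125, 1.3)
```

Shrinking the prior standard deviation by a factor of four multiplies the expected convergence time by far more than four, mirroring the nonlinearity documented in Fig. 3.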
The results are shown in Fig. 3. While the instrumental climate record constitutes enough data to convince many skeptics in this hypothetical pool, it does not suffice in all cases. Similarly, although we expect that many present-day skeptics will eventually bring their beliefs into line with the evidence if climate change continues into the future, there remains a small group of hardcore skeptics who defiantly refuse convergence with the mainstream even if we project as far ahead as 2100. Such is the strength of their priors. Note further that the year of convergence is a nonlinear function of prior strength, so that it becomes increasingly difficult to convince the marginal skeptic. The steady accumulation of evidence over time will inexorably bring more skeptics into the mainstream fold. But the delay between each round of new converts is increasing.
An implication of this thought experiment is the following. If someone is unpersuaded of the human influence on climate today—despite all of the available evidence—then there is a high probability that they will remain unconvinced for many years hence. The extent to which these extreme skeptics constitute a meaningful voting bloc is an open empirical question. However, it is striking to think that such individuals are perhaps already out of reach from the perspective of comprehensive climate policy. Even the accumulation of evidence over the next several decades may not be enough to convince them. Scientific communication efforts should be tailored appropriately, specifically targeting moderates for persuasion (e.g., lukewarmers) rather than engaging skeptics en masse.
5.2 Sensitivity analysis
I test the sensitivity of my findings to a variety of potential data issues and alternate model specifications. These range from the use of alternative GMST reconstructions, to analyzing the impact of measurement error and uncertainty over forcing efficacies. A full discussion of the motivating context and technical details underlying each sensitivity run—with results across all prior types—is provided in the Supplementary Material. Unsurprisingly, I obtain wider posterior distributions under specifications that explicitly introduce additional forms of uncertainty into the estimation. However, the general effect of these alternate specifications is to nudge the posterior TCR mean slightly higher. Table 4 summarizes the posterior TCR distributions for various sensitivity runs when using noninformative priors. I am left to conclude that my primary data and modeling choices do not unduly bias the results.
5.3 Future temperatures
Climate policy is largely predicated upon the risks to future generations. As such, any policy discussion must consider predictions that run many years into the future. TCR estimates are one means of gaining an insight into how global temperatures will evolve over the coming decades. A more explicit way of demonstrating this is by predicting temperatures until the end of the century.
While the trajectory of future radiative forcings is subject to much uncertainty, some guidance is available in the form of the IPCC’s Representative Concentration Pathways (Van Vuuren et al. 2011). These so-called “RCPs” describe a family of emissions scenarios, where total anthropogenic forcings evolve according to various economic, demographic and technological assumptions. Each RCP specifies its own core trajectory of atmospheric CO2 concentrations, while all share a common prediction for radiative forcing due to solar activity. I take these series as the basis for constructing covariate vectors to predict temperatures until the year 2100. For the remaining explanatory variables—stratospheric aerosols, SOI and AMO—I take the mean historical values from my dataset. A summary of covariate vectors in 2100 for each RCP scenario is provided in the Supplementary Material.
Figure 4 shows the temperature evolution for each RCP under the noninformative case, which I again take as the benchmark. As discussed in Section 2.2, it would be inappropriate to extrapolate my regression framework to scenarios that are characterized by significant changes in the rate of radiative forcing. The confounding effect of (unaccounted for) thermal inertia in the oceans would render these model predictions ill-conditioned. I therefore focus on RCPs 6.0 and 8.5, which maintain steady rates of forcing increase.Footnote 13 The principal message is that CO2 concentrations must be constrained to well below RCP 6.0 if we are to avoid a 2 ∘C rise in global temperatures. Given the prominence of this particular threshold in international climate treaties and the popular narrative, the result is a reinforcement of commonly cited emissions targets such as 450 and 540 ppmv. On the other hand, we can expect to breach even 3 ∘C by the year 2100 if we continue along a truly unconstrained emissions path à la RCP 8.5.
What of the predictions yielded by our group of climate skeptics? While it is straightforward to redraw Fig. 4 for each prior type, a more intuitive comparison can be made by looking at the full distribution of warming that each skeptic expects by the end of the century. Figure 5 plots the predictive temperature density in the year 2100 for all prior types under RCP scenarios 6.0 and 8.5. Again, the data have a clear tendency to overwhelm even reasonably staunch forms of climate skepticism. Nearly all of the stylized skeptics would expect to breach the 2 ∘C threshold by 2100 under RCP 6.0, while a temperature rise of more than 3 ∘C is likely under RCP 8.5. An exception can only be found in the form of the strong denier, whose extreme prior dominates the posterior in a way that obviates nearly all concern about large temperature increases.
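Predictive densities of this kind follow from simulating the fitted model forward. The sketch below is purely illustrative and is not the paper’s model: it assumes a hypothetical one-covariate linear relationship between forcing and temperature, with made-up posterior coefficient values, to show how each posterior draw is pushed through a fixed end-of-century covariate vector to yield a predictive temperature distribution.

```python
import random

random.seed(1)

# Hypothetical posterior draws for a toy model: temp = a + b * forcing + noise.
# The coefficient distributions below are illustrative placeholders, NOT estimates
# from the regression model in this paper.
posterior_draws = [(random.gauss(0.0, 0.05), random.gauss(0.4, 0.03))
                   for _ in range(10_000)]

forcing_2100 = 8.5  # assumed total anthropogenic forcing (W/m^2), RCP 8.5-style

# Predictive distribution: push every posterior draw through the 2100 covariate
# value and add observation noise, then summarize the resulting density.
pred = sorted(a + b * forcing_2100 + random.gauss(0.0, 0.1)
              for a, b in posterior_draws)
median = pred[len(pred) // 2]
lo, hi = pred[int(0.025 * len(pred))], pred[int(0.975 * len(pred))]
```

The same mechanics apply to the full regression model: parameter uncertainty (the posterior draws) and residual noise jointly determine the spread of the predictive density in 2100.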
5.4 Welfare implications and the social cost of carbon
Provided they consider enough data, we have seen that most climate skeptics should be able to agree that a 2 ∘C target requires limiting CO2 concentrations to around 540 ppmv. Whether someone actually subscribes to policy measures aimed at achieving the 2 ∘C goal is dependent on many things: their choice of discount rate, beliefs about the efficacy of policy, damage expectations, etc. Such issues are largely beyond the scope of this paper. Nonetheless, we may still gain a deeper insight into the welfare implications of our posterior TCR values by analysing their effect on the social cost of carbon (SCC). The SCC represents the economic costs associated with a marginal unit of CO2 emissions. It can therefore be thought of as society’s willingness to pay for the prevention of future damages associated with human-induced climate change.
Obtaining SCC estimates generally requires the use of integrated assessment models (IAMs), which are able to solve for optimal climate policy along a dynamic path by simulating across economic and climate systems. The PAGE model, originally developed by Hope (2011), is ideally suited to our present needs. It is widely used as one of the major IAMs for evaluating climate policy (Nordhaus 2014; IWG 2016). More importantly, PAGE accepts random variables as inputs and yields the type of probabilistic output that is consistent with the rest of this paper. I take the posterior TCR distributions yielded by my Bayesian regression model and use these as inputs for calculating the SCC. The PAGE defaults are used for the remaining parameters.Footnote 14
Table 5 summarizes the SCC distributions across all prior groups in 2020 US dollars. The full probability distributions are highly skewed and characterized by extremely long upper tails (see the Supplementary Material). This is largely due to the fact that PAGE allows for the possibility of major disruptions—e.g., melting of the Greenland ice sheet—at temperatures above 3 ∘C. Such low probability, high impact events would yield tremendous economic losses and result in some extreme SCC values as a consequence. Note too that these events occur more frequently in my adapted version of PAGE, since I replace the default triangular (i.e., bounded) TCR distribution with the posterior TCR distributions from my model. The latter are approximately normally distributed, thus permitting small but positive weight in the tails. For this reason, I provide both the mean and median SCC values alongside the 95% probability interval.
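The reason for reporting both central tendency measures is easy to demonstrate. The sketch below draws from a generic long-tailed distribution—a lognormal chosen purely for illustration, not actual PAGE output—and computes the same summary statistics; the mean sits well above the median because the upper tail drags it upward.

```python
import random
import statistics

random.seed(42)

# Illustrative long-tailed SCC-like distribution (lognormal). The parameters are
# arbitrary placeholders, NOT fitted to the PAGE results reported in this paper.
draws = sorted(random.lognormvariate(3.6, 0.6) for _ in range(100_000))

mean = statistics.fmean(draws)               # pulled upward by the long upper tail
median = draws[len(draws) // 2]              # robust to tail behavior
lo = draws[int(0.025 * len(draws))]          # lower bound of 95% probability interval
hi = draws[int(0.975 * len(draws))]          # upper bound of 95% probability interval
```

For a symmetric distribution the two measures coincide; for a skewed one, reporting only the mean would overstate the “typical” SCC value, which is why Table 5 provides both.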
Excepting the strong denier, the SCC for all prior types is comfortably larger than zero. The median value ranges from approximately $30 to $60 per ton (2020 prices), while the 95% probability interval extends from $10 to upwards of $130 per ton. These results are consistent with the SCC estimates found within the literature. For example, an influential synthesis review conducted by the US government under the Obama administration established a mean SCC value of $12–$62 per ton (2007 prices), depending on the preferred discount rate (IWG 2016). The encouraging point from a policy perspective is that such congruence exists despite the fact that the analysis proceeds from an initial position of skepticism. Another way to frame the SCC estimates presented here is to imagine that each prior type represents an equal segment of a voting population. We would then expect to see broad support for a carbon tax of at least $20–$25. While such a thought experiment clearly abstracts from the many complications that would arise from free-riding and so forth, again we see that nominal climate skepticism does not correspond to a mechanical dismissal of climate policy.
6 Discussion
We have seen that a nontrivial carbon price is consistent with a range of contrarian priors once we allow for updating of beliefs and, crucially, consider enough of the available data. An optimist might interpret these findings as a sign that common ground on climate policy is closer than many people think. On the other hand, they may also help to explain why the policy debate is so polarized in the first place. As all intermediate positions are absorbed into the mainstream, only the most hardcore skeptics will remain wedded to their priors. Such a group is unlikely to brook any proposals for reduced carbon emissions and virtually no amount of new information will convince them otherwise. Taken together with the persistent skepticism that one sees in actual polling data (e.g., Saad 2019), it then becomes reasonable to ask whether real-life climate skeptics hold such extreme views. For that matter, are they numerous or vocal enough to prevent political action? Such considerations are reinforced by the idealized nature of the analysis up to this point. Irrespective of the scientific merit of working through such a set-up, normal people clearly do not update their priors in lockstep with a formal Bayesian regression model, supported by a large dataset of time-series observations.Footnote 15
A natural starting point for thinking about these issues is to take a closer look at the mechanisms underlying posterior agreement formation. The notion that partisans should converge toward consensus with increasing information has long been taken as a logical consequence of Bayes’ theorem. Indeed, empirical evidence to the contrary has been cited as a weakness of the Bayesian paradigm and its relevance to real-life problems (e.g., Kahneman and Tversky 1972). This is a misconception. Nothing in the Bayesian paradigm precludes the possibility of diverging opinions in the face of shared information (Jaynes 2003; Bullock 2009). It may even be the case that the same information has a polarizing effect on individuals, pushing them towards opposite conclusions. This is perhaps most easily shown by incorporating perceptions of trust and source credibility into our Bayesian model. In other words, we must broaden our conception of someone’s “prior” so that it describes not only their existing beliefs about some phenomenon S, but also the credibility that they assign to different sources of information about S.
Consider an example, which is closely adapted from a related discussion in Jaynes (2003). Al, Bob and Christie hold different beliefs about climate change. Al is a “warmist,” Bob is a “lukewarmer” and Christie is a “denier.” These labels are encapsulated by the prior probabilities that each person assigns to climate sensitivity S, which we assume for simplicity is either high or low: S ∈ {SL, SH}. Denote by I an individual’s prior information about the world. Then, indexing by the first letter of their names, we summarize their prior beliefs about climate change as the following probabilities: P(SH|IA) = 0.90, P(SH|IB) = 0.40, and P(SH|IC) = 0.10.
Suppose that the IPCC now publishes its latest assessment report, wherein it claims that climate sensitivity is high. How do Al, Bob, and Christie respond to this new data, D = DH? It turns out that the answer hinges on the regard that each individual holds for the IPCC itself. For example, let us say that all three individuals agree the IPCC would undoubtedly present data supporting a high climate sensitivity if that were the true state of the world, i.e., P(DH|SH,IA) = P(DH|SH,IB) = P(DH|SH,IC) = 1.00. However, they disagree on whether the IPCC can be trusted to disavow the high sensitivity hypothesis if the scientific evidence actually supported a low climate sensitivity. Despite their different beliefs about climate sensitivity, assume that Al and Christie both regard the IPCC as an upstanding institution that can be trusted to accurately represent the science on climate change. In contrast, Bob is dubious about the motives of the IPCC and believes that the organization is willing to lie in advancement of a preconceived agenda. Representing these beliefs in terms of probabilities, we have P(DH|SL,IA) = P(DH|SL,IC) = 0.05 and P(DH|SL,IB) = 0.89.
Recovering the posterior beliefs about climate sensitivity for our three individuals is now a simple matter of modifying Bayes’ theorem to account for each person’s relative trust in the IPCC. For Al, we have

P(SH|DH,IA) = P(DH|SH,IA)P(SH|IA) / [P(DH|SH,IA)P(SH|IA) + P(DH|SL,IA)P(SL|IA)] = (1.00 × 0.90) / (1.00 × 0.90 + 0.05 × 0.10) ≈ 0.99.
Similarly, we obtain posterior probabilities of 0.43 for Bob and 0.69 for Christie.
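These calculations are easy to verify. The following sketch implements the trust-adjusted Bayes update for all three individuals, with the probabilities taken directly from the example above (the function name is mine, chosen for illustration):

```python
def trust_update(p_sh, p_dh_given_sh, p_dh_given_sl):
    """Posterior P(SH|DH,I): Bayes' theorem with source-credibility terms.

    p_sh           -- prior belief in the high-sensitivity hypothesis, P(SH|I)
    p_dh_given_sh  -- trust that the source reports DH when sensitivity is high
    p_dh_given_sl  -- belief that the source reports DH even when sensitivity is low
    """
    numerator = p_dh_given_sh * p_sh
    return numerator / (numerator + p_dh_given_sl * (1.0 - p_sh))

al = trust_update(0.90, 1.00, 0.05)        # warmist, trusts the IPCC
bob = trust_update(0.40, 1.00, 0.89)       # lukewarmer, mistrusts the IPCC
christie = trust_update(0.10, 1.00, 0.05)  # denier, trusts the IPCC
```

Note that the three posteriors are driven as much by the credibility terms as by the priors: Bob’s near-total mistrust (0.89) leaves his belief essentially unchanged, while Christie’s trust lets the same report move her sharply.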
Taking a step back, Al now believes even more strongly in the high climate sensitivity hypothesis, having raised his subjective probability for SH from 90 to 99%. Christie has experienced a still greater effect and has updated her subjective probability for SH from 10 to 69%. She now attaches a larger probability to the high sensitivity hypothesis than the low sensitivity alternative. However, the same cannot be said of Bob, who has not been swayed by the IPCC report in the slightest. Both his prior and posterior probabilities suggest that SH only has an approximately 40% chance of being true. Bob’s extreme mistrust has effectively led him to discount the IPCC’s high sensitivity claim in its entirety.
Extending the above framework to account for increasing granularity is conceptually straightforward. The principal insight remains the same: Trust is as much a determinant of whether beliefs are amenable to data—and whether individuals converge towards consensus—as the precision of the data itself. Such an extension seems especially relevant to the climate change debate given the sense of scientific distrust that pervades certain segments of society (Malka et al. 2009; Gauchat 2012; Leiserowitz et al. 2013; Fiske and Dupree 2014; Hmielowski et al. 2014). Indeed, recent research supports the notion that distrust of scientists is causing belief polarization about climate change in some demographic groups, even as scientific evidence may increase consensus in others (Cook and Lewandowsky 2016; Zhou 2016). Similar “backfire” effects have been well documented in other fields (Nyhan and Reifler 2010; Harris et al. 2015).
Perhaps the most important feature of generalising the Bayesian framework in this way is that it offers a bridge between competing explanations of climate skepticism as a social phenomenon. Whereas the so-called “deficit model” posits a lack of scientific knowledge and understanding as key drivers of skepticism, advocates of the “cultural cognition” theory argue that group identity and value systems are more relevant (Kahan et al. 2011, 2012; Ranney and Clark 2016). A Bayesian model that incorporates perceptions of source credibility is able to accommodate both camps. Exposure to new scientific evidence can ameliorate a person’s skepticism, but only if their priors allow for it. This includes factors like cultural identity and whether they cause us to discount some sources of information more than others.Footnote 16
7 Concluding remarks
The goal of this paper has been to explore the way in which prior beliefs affect our responsiveness to information about climate change. The Bayesian paradigm provides a natural framework and I have proposed a group of stylized skeptics to embody varying degrees of real-world climate skepticism. The headline finding is that subjective skeptic priors are generally overwhelmed by the empirical evidence for climate change. Once they have updated their beliefs in accordance with the available data, most skeptics demonstrate a clear tendency to converge towards the noninformative case that serves as an objective reference point for this study. My primary regression specification yields a posterior TCR mean and 95% credible interval of 1.6 ∘C (1.4–1.8 ∘C) under the noninformative prior. This distribution sits comfortably within the IPCC’s “likely” TCR range of 1.0–2.5 ∘C and is robust to a variety of sensitivity checks. Indeed, accounting for factors that could conceivably affect the results—alternate data sources, adjusted forcing efficacies, measurement error, etc.—tends to nudge the mean TCR estimate upwards.
Unsurprisingly, given their congruence with mainstream estimates, I show that the updated beliefs of various skeptics are generally consistent with a social cost of carbon of at least US$25 per ton. Only those with extreme a priori skeptic beliefs would find themselves in disagreement, or dispute the claim that unconstrained emissions will lead to substantial future warming. This suggests that a rational climate skeptic, even one that holds relatively strong prior beliefs to begin with, could embrace policy measures to constrain CO2 emissions once they have seen all of the available data. At the same time, perhaps the most salient finding of this paper is that belief convergence is a nonlinear function of prior strength. Anyone who remains unconvinced by the available data today is unlikely to converge with the mainstream consensus for many years hence. Their implied priors are of such a strength that even decades more of accumulated evidence may not be enough to convince them. Fully disentangling the root causes of such information immunity—whether climate skeptics are extremely sure of their priors, distrustful of scientists and other experts, or some combination thereof—remains an important area for future research.
Notes
It should be said that there is an important literature on Bayesian learning in IAMs that originates with Kelly and Kolstad (1999). But I am unaware of any IAM studies that explicitly try to model learning by climate skeptics.
In terms of tangentially related empirical work, Kaufmann et al. (2017) shows that spatial heterogeneity in local climate change effects and temperatures can partially explain persistent skepticism in different regions of the United States. Moore (2017) does not deal with skeptics per se, but characterizes learning about climate as a (potentially) Bayesian process where individuals make inferences based on local weather shocks. This builds off of earlier work by Deryugina (2013), who finds that longer spells of abnormal local weather patterns are consistent with Bayesian updating about climate beliefs.
Another group of researchers, beginning with Stern and Kaufmann (2000), has argued that the instrumental temperature record contains a stochastic trend that is imparted by, and therefore cointegrates with, the time-series data of radiative forcings. The reader is referred to Estrada and Perron (2013) and Hillebrand et al. (2020) for helpful overviews of this debate.
So-called conjugate priors are a prominent exception and belong to the same distribution family as the resulting posterior. However, conjugacy places strong restrictions on the questions that can be asked of the data.
Anthropogenic forcings such as CO2, CH4, and N2O all follow very similar trends over time. Any empirical model that does not constrain these forcings in some way will therefore struggle to correctly attribute warming between them.
Volcanic aerosols are an exception because they impart only a transitory level of forcing. This explains why VOLC may be included as a separate component in the regression equation (Estrada et al. 2013a).
It is worth noting that a number of studies which provide climate sensitivity estimates via time-series methods—e.g., Kaufmann et al. (2006), Mills (2009), and Estrada and Perron (2012)—do so under the assumption that F2× = 4.37 W m−2. This outdated figure appears to be based on early calculations by Hansen et al. (1988). The climate sensitivity estimates of these studies may consequently be regarded as inflated.
The choice of normally distributed priors should have little bearing on the generality of the results. An exception might occur if I assumed a bounded prior, like a triangular or uniform distribution. Because these bounded distributions assign zero weight to outcomes beyond a specific interval, no amount of data can shift the posterior beyond that interval. This idea, that a Bayesian posterior can converge on a particular outcome only if the prior allocates some (infinitesimal) weight to it, is known colloquially as Cromwell’s rule (Jackman 2009).
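Cromwell’s rule is easy to demonstrate numerically. The sketch below uses a discrete grid approximation with purely illustrative values: a bounded (uniform) prior retains exactly zero posterior mass outside its support no matter how much data accumulates, whereas an unbounded normal prior, which keeps infinitesimal weight everywhere, is eventually dragged to the data.

```python
import math

# Grid approximation of Bayesian updating for a scalar parameter theta in [0, 5].
grid = [i * 0.01 for i in range(501)]

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior(prior, data, sd=0.5):
    # Multiply the prior by the likelihood of each observation, then renormalize.
    post = prior[:]
    for y in data:
        post = [p * normal_pdf(y, theta, sd) for p, theta in zip(post, grid)]
    total = sum(post)
    return [p / total for p in post]

# Bounded prior: uniform on [0, 2], i.e., zero weight assigned to theta > 2.
bounded_prior = [1.0 if theta <= 2.0 else 0.0 for theta in grid]
# Unbounded prior: normal centered at 1, small but positive weight everywhere.
normal_prior = [normal_pdf(theta, 1.0, 0.5) for theta in grid]

data = [3.0] * 50  # abundant evidence pointing at theta = 3 (illustrative)

mass_above_2_bounded = sum(p for p, t in zip(posterior(bounded_prior, data), grid) if t > 2.0)
mass_above_2_normal = sum(p for p, t in zip(posterior(normal_prior, data), grid) if t > 2.0)
```

However strong the evidence, the bounded prior’s posterior mass above 2 remains identically zero, which is precisely why a bounded TCR prior could never be overwhelmed in the way described in the main text.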
This is the default prior suggested by Goodrich et al. (2020), which they refer to as “weakly informative.”
HadCRUT5 (Morice et al. 2020) was released during the late revision stages of the manuscript. Among other things, this updated version of the HadCRUT temperature record adopts a similar approach to interpolating coverage gaps as in CW14.
As a corollary, concerns over the use of the full historical dataset would only hold sway in cases where priors already incorporate information that has been obtained from applying the same model on a sub-sample of the dataset. In that case, we would need to exclude the sub-sample from the analysis to derive a valid posterior that avoids double counting.
Temperature predictions for RCPs 2.6 and 4.5—depicting respective CO2 stabilisation scenarios—are included in Fig. 4 for reference purposes only.
This is not to say that people fail to update rationally, or even heuristically, in a Bayesian manner. For further discussion in the context of climate, see Lewandowsky et al. (2019).
While the precise theoretical development differs from the framework presented here, I would note the closely related concept of Bayesian networks (Pearl and Russell 2000). Cook and Lewandowsky (2016) use a Bayesian network approach in an experimental setting to document (rational) belief polarization after individuals are presented with evidence about climate change. Mistrust of climate scientists is a primary source of the polarization in their study.
References
Anderegg WRL, Prall JW, Harold J, Schneider SH (2010) Expert credibility in climate change. Proc Natl Acad Sci USA (PNAS) 107(27):12107. https://doi.org/10.1073/pnas.1003187107
Annan J, Hargreaves J (2011) On the generation and interpretation of probabilistic estimates of climate sensitivity. Clim Change 104(3-4):423. https://doi.org/10.1007/s10584-009-9715-y
Bezanson J, Edelman A, Karpinski S, Shah VB (2017) Julia: a fresh approach to numerical computing. SIAM Rev 59(1):65. https://doi.org/10.1137/141000671
Bullock JG (2009) Partisan bias and the Bayesian ideal in the study of public opinion. J Polit 71(3):1109. https://doi.org/10.1017/S0022381609090914
Clark D, Ranney MA, Felipe J (2013) In: Knauff M, Pauen M, Sebanz N, Wachsmuth I (eds) Proceedings of the 35th annual meeting of the cognitive science society, pp 2070–2075. http://mindmodeling.org/cogsci2013/papers/0381/index.html
Cook J, Lewandowsky S (2016) Rational irrationality: modeling climate change belief polarization using Bayesian networks. Top Cogn Sci 8(1):160. https://doi.org/10.1111/tops.12186
Cook J, Nuccitelli D, Green SA, Richardson M, Winkler B, Painting R, Way R, Jacobs P, Skuce A (2013) Quantifying the consensus on anthropogenic global warming in the scientific literature. Environ Res Lett 8(2):024024. https://doi.org/10.1088/1748-9326/8/2/024024
Cook J, Oreskes N, Doran PT, Anderegg WR, Verheggen B, Maibach EW, Carlton JS, Lewandowsky S, Skuce AG, Green SA (2016) Consensus on consensus: a synthesis of consensus estimates on human-caused global warming. Environ Res Lett 11(4):048002. https://doi.org/10.1088/1748-9326/11/4/048002
Corner A, Whitmarsh L, Xenias D (2012) Uncertainty, skepticism and attitudes towards climate change: biased assimilation and attitude polarisation. Clim Change 114(3–4):463. https://doi.org/10.1007/s10584-012-0424-6
Cowtan K, Way RG (2014) Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. Q J R Meteorol Soc 140 (683):1935. https://doi.org/10.1002/qj.2297
Deryugina T (2013) How do people update? The effects of local weather fluctuations on beliefs about global warming. Clim Change 118(2):397. https://doi.org/10.1007/s10584-012-0615-1
Dessler AE, Forster PM (2018) An estimate of equilibrium climate sensitivity from interannual variability. J Geophys Res: Atmos 123(16):8634. https://doi.org/10.1029/2018JD028481
Doran PT, Zimmerman MK (2011) Examining the scientific consensus on climate change. Eos, Trans Am Geophys Union (EOS) 90(3):22. https://doi.org/10.1029/2009EO030002
Estrada F, Perron P (2012) Breaks, trends and the attribution of climate change: a time-series analysis. Working paper, Department of Economics, Boston University. Available: http://people.bu.edu/perron/workingpapers.html
Estrada F, Perron P (2013) Detection and attribution of climate change through econometric methods. Working paper, Department of Economics, Boston University. Available: http://people.bu.edu/perron/workingpapers.html
Estrada F, Perron P, Martínez-López B (2013a) Statistically derived contributions of diverse human influences to twentieth-century temperature changes. Nat Geosci 6:1050. https://doi.org/10.1038/NGEO1999
Estrada F, Perron P, Gay-García C, Martínez-López B (2013b) A time-series analysis of the 20th century climate simulations produced for the IPCC’s fourth assessment report. PLOS ONE 8(3):e60017. https://doi.org/10.1371/journal.pone.0060017
Fiske ST, Dupree C (2014) Gaining trust as well as respect in communicating to motivated audiences about science topics. Proc Natl Acad Sci 111 (4):13593. https://doi.org/10.1073/pnas.1317505111. http://www.pnas.org/content/111/Supplement_4/13593.abstract
Gabry J, Češnovar R (2020) cmdstanr: R Interface to ‘CmdStan’. https://mc-stan.org/cmdstanr
Gauchat G (2012) Politicization of science in the public sphere a study of public trust in the United States, 1974 to 2010. Am Sociol Rev 77(2):167. https://doi.org/10.1177/0003122412438225
Gay-Garcia C, Estrada F, Sánchez A (2009) Global and hemispheric temperatures revisited. Clim Change 94(3–4):333. https://doi.org/10.1007/s10584-008-9524-8
Gelman A, Hill J, Vehtari A (2020) Regression and other stories, 1st edn. Cambridge University Press, Cambridge
Goodrich B, Gabry J, Ali I, Brilleman S (2020) rstanarm: Bayesian applied regression modeling via Stan. https://mc-stan.org/rstanarm. R package version 2.21.1
Gregory J, Forster P (2008) Transient climate response estimated from radiative forcing and observed temperature change. J Geophys Res: Atmos 113(D23). https://doi.org/10.1029/2008JD010405
Hansen J, Fung I, Lacis A, Rind D, Lebedeff S, Ruedy R, Russell G, Stone P (1988) Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model. J Geophys Res 93:9341. https://doi.org/10.1029/JD093iD08p09341
Harris AJ, Hahn U, Madsen JK, Hsu AS (2015) The appeal to expert opinion: quantitative support for a Bayesian network approach. Cogn Sci 40:1. https://doi.org/10.1111/cogs.12276
Hausfather Z, Drake HF, Abbott T, Schmidt GA (2020) Evaluating the performance of past climate model projections. Geophys Res Lett 47:e2019GL085378. https://doi.org/10.1029/2019GL085378
Hillebrand E, Pretis F, Proietti T (2020) Econometric models of climate change: introduction by the guest editors. J Econ 214(1):1. https://doi.org/10.1016/j.jeconom.2019.05.001
Hmielowski JD, Feldman L, Myers TA, Leiserowitz AA, Maibach EW (2014) An attack on science? Media use, trust in scientists, and perceptions of global warming. Public Underst Sci 23(7):866. https://doi.org/10.1177/0963662513480091
Hope C (2011) The PAGE09 integrated assessment model: a technical description. Cambridge Judge Business School, Working Paper Series (4/2011). Available: http://www.jbs.cam.ac.uk/fileadmin/user_upload/research/workingpapers/wp1104.pdf
Hornsey MJ, Harris EA, Bain PG, Fielding KS (2016) Meta-analyses of the determinants and outcomes of belief in climate change. Nat Clim Change 6(6):622. https://doi.org/10.1038/nclimate2943
IPCC (2001) Climate change 2001: the scientific basis. Contribution of Working Group I to the third assessment report of the intergovernmental panel on climate change. In: Houghton JT, Ding Y, Griggs DJ, Noguer M, van der Linden PJ, Dai X, Maskell K, Johnson CA (eds), 1st edn. Cambridge University Press, Cambridge
IPCC (2013) Climate change 2013: the physical science basis. Contribution of Working Group I to the fifth assessment report of the intergovernmental panel on climate change. In: Stocker TF, Qin D, Plattner G-K, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, Midgley PM (eds), 1st edn. Cambridge University Press, Cambridge. http://www.ipcc.ch/report/ar5/wg1/
IWG (2016) Technical update of the social cost of carbon for regulatory impact analysis under Executive Order 12866. Technical Support Document, Interagency Working Group on Social Cost of Carbon, United States Government, Washington, DC. https://www.epa.gov/sites/production/files/2016-12/documents/sc_co2_tsd_august_2016.pdf
Jackman S (2009) Bayesian analysis for the social sciences, 1st edn. Wiley, New York
Jaynes ET (2003) Probability theory: the logic of science. Cambridge University Press, Cambridge
Kahan DM, Jenkins-Smith H, Braman D (2011) Cultural cognition of scientific consensus. J Risk Res 14(2):147. https://doi.org/10.1080/13669877.2010.511246
Kahan DM, Peters E, Wittlin M, Slovic P, Ouellette LL, Braman D, Mandel G (2012) The polarizing impact of science literacy and numeracy on perceived climate change risks. Nat Clim Change 2:732. https://doi.org/10.1038/nclimate1547
Kahneman D, Tversky A (1972) Subjective probability: a judgment of representativeness. Cogn Psychol 3 (3):430. https://doi.org/10.1016/0010-0285(72)90016-3. http://www.sciencedirect.com/science/article/pii/0010028572900163
Kaufmann RK, Kauppi H, Stock JH (2006) Emissions, concentrations, & temperature: a time series analysis. Clim Change 77(3-4):249. https://doi.org/10.1007/s10584-006-9062-1
Kaufmann RK, Mann ML, Gopal S, Liederman JA, Howe PD, Pretis F, Tang X, Gilmore M (2017) Spatial heterogeneity of climate change as an experiential basis for skepticism. Proc Natl Acad Sci 114(1):67. https://doi.org/10.1073/pnas.1607032113. http://www.pnas.org/content/114/1/67.abstract
Kelly DL, Kolstad CD (1999) Bayesian learning, growth, and pollution. J Econ Dyn Control 23(4):491. https://doi.org/10.1016/S0165-1889(98)00034-7. http://www.sciencedirect.com/science/article/pii/S0165188998000347
Kiseleva T (2016) Heterogeneous beliefs and climate catastrophes. Environ Resour Econ 65(3):599. https://doi.org/10.1007/s10640-016-0036-0
Kim D, Oka T, Estrada F, Perron P (2020) Inference related to common breaks in a multivariate system with joined segmented trends with applications to global and hemispheric temperatures. J Econ 214(1):130. https://doi.org/10.1016/j.jeconom.2019.05.008
Leiserowitz AA, Maibach EW, Roser-Renouf C, Smith N, Dawson E (2013) Climategate, public opinion, and the loss of trust. Am Behav Sci 57(6):818. https://doi.org/10.1177/0002764212458272
Lewandowsky S, Risbey JS, Oreskes N (2016) The “pause” in global warming: turning a routine fluctuation into a problem for science. Bull Am Meteorol Soc 97(5):723
Lewandowsky S, Pilditch TD, Madsen JK, Oreskes N, Risbey JS (2019) Influence and seepage: an evidence-resistant minority can affect public opinion and scientific belief formation. Cognition 188:124. https://doi.org/10.1016/j.cognition.2019.01.011
Malka A, Krosnick JA, Langer G (2009) The association of knowledge with concern about global warming: trusted information sources shape public thinking. Risk Anal 29(5):633. https://doi.org/10.1111/j.1539-6924.2009.01220.x
Meinshausen M, Smith SJ, Calvin K, Daniel JS, Kainuma M, Lamarque J, Matsumoto K, Montzka S, Raper S, Riahi K et al (2011) The RCP greenhouse gas concentrations and their extensions from 1765 to 2300. Clim Change 109(1–2):213. https://doi.org/10.1007/s10584-011-0156-z
Mills TC (2009) How robust is the long-run relationship between temperature and radiative forcing? Clim Change 94(3–4):351. https://doi.org/10.1007/s10584-008-9525-7
McCright AM, Dunlap RE (2011a) Cool dudes: the denial of climate change among conservative white males in the United States. Glob Environ Change 21 (4):1163. https://doi.org/10.1016/j.gloenvcha.2011.06.003
McCright AM, Dunlap RE (2011b) The politicization of climate change and polarization in the American public’s views of global warming 2001–2010. Sociol Q 52(2):155. https://doi.org/10.1111/j.1533-8525.2011.01198.x
Mooney C (2016) Ted Cruz keeps saying that satellites don’t show global warming. Here’s the problem. The Washington Post. https://www.washingtonpost.com/news/energy-environment/wp/2016/01/29/ted-cruz-keeps-saying-that-satellites-dont-show-warming-heres-the-problem
Moore FC (2017) Learning, adaptation, and weather in a changing climate. Clim Change Econ 8(4):1750010. https://doi.org/10.1142/S2010007817500105
Moore FC, Rising J, Lollo N, Springer C, Vasquez V, Dolginow A, Hope C, Anthoff D (2018) Mimi-PAGE, an open-source implementation of the PAGE09 integrated assessment model. Sci Data 5(1):1
Morice CP, Kennedy JJ, Rayner NA, Winn J, Hogan E, Killick R, Dunn R, Osborn T, Jones P, Simpson I (2020) An updated assessment of near-surface temperature change from 1850: the HadCRUT5 data set. J Geophys Res: Atmos e2019JD032361. https://doi.org/10.1029/2019JD032361
Nordhaus WD (2014) Estimates of the social cost of carbon: concepts and results from the DICE-2013R model and alternative approaches. J Assoc Environ Resour Econ 1(2):273. https://doi.org/10.1086/676035
Nyhan B, Reifler J (2010) When corrections fail: the persistence of political misperceptions. Polit Behav 32(2):303. https://doi.org/10.1007/s11109-010-9112-2
Oreskes N (2004) The scientific consensus on climate change. Science 306:1686. https://doi.org/10.1126/science.1103618
Pearl J, Russell S (2000) Bayesian networks. Department of Statistics Papers, UCLA. Available: https://escholarship.org/uc/item/53n4f34m
R Core Team (2020) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna. https://www.R-project.org/
Ranney MA, Clark D (2016) Climate change conceptual change: scientific information can transform attitudes. Top Cogn Sci 8(1):49. https://doi.org/10.1111/tops.12187
Ranney MA, Clark D, Reinholz DL, Cohen (2012) In: Miyake N, Peebles D, Cooper RP (eds) Proceedings of the 34th annual conference of the cognitive science society. Cognitive Science Society, Austin, pp 2228–2233. http://mindmodeling.org/cogsci2012/papers/0388/paper0388.pdf
Ridley M (2015) My life as a climate change lukewarmer. The Times. https://www.thetimes.co.uk/article/matt-ridley-my-life-as-a-climate-change-lukewarmer-8jwbd8xz6dj
Saad L (2019) Americans as concerned as ever about global warming. Gallup, 25 March 2019 Available: https://news.gallup.com/poll/248027/americans-concerned-ever-global-warming.aspx
Schmidt G (2007) The CO2 problem in 6 easy steps. RealClimate.org, 6 August 2007. Available: http://www.realclimate.org/index.php/archives/2007/08/the-co2-problem-in-6-easy-steps/
Stern DI, Kaufmann RK (2000) Detecting a global warming signal in hemispheric temperature series: a structural time series analysis. Clim Change 47(4):411. https://doi.org/10.1023/A:1005672231474
Tol RSJ (2014) Quantifying the consensus on anthropogenic global warming in the literature: a re-analysis. Energy Policy 73:701. https://doi.org/10.1016/j.enpol.2014.04.045
Tol RSJ, De Vos AF (1998) A Bayesian statistical analysis of the enhanced greenhouse effect. Clim Change 38:87. https://doi.org/10.1023/A:1005390515242
Van Vuuren DP, Edmonds J, Kainuma M, Riahi K, Thomson A, Hibbard K, Hurtt GC, Kram T, Krey V, Lamarque JF et al (2011) The representative concentration pathways: an overview. Clim Change 109(1–2):5. https://doi.org/10.1007/s10584-011-0148-z
Van Wijnbergen S, Willems T (2015) Optimal learning on climate change: why climate skeptics should reduce emissions. J Environ Econ Manag 70:17. https://doi.org/10.1016/j.jeem.2014.12.002
Verheggen B, Strengers B, Cook J, van Dorland R, Vringer K, Peters J, Visser H, Meyer L (2014) Scientists’ views about attribution of global warming. Environ Sci Technol 48(16):8963. https://doi.org/10.1021/es501998e
Zhou J (2016) Boomerangs versus Javelins: how polarization constrains communication on climate change. Environ Polit 25(5):1. https://doi.org/10.1080/09644016.2016.1166602
Acknowledgements
The author would like to thank Øivind A. Nilsen, Jonas Andersson, Gunnar Eskeland, and Christopher Costello for their early encouragement and support. Various seminar participants and, especially, several anonymous reviewers provided extremely helpful feedback that substantially improved the final manuscript.
Ethics declarations
Conflict of interest
The author declares no competing interests.
Author contribution
G.R. McDermott is the sole author for this article.
Availability of data and materials
All code and data for this article are available at https://github.com/grantmcdermott/skeptic-priors.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
McDermott, G.R. Skeptic priors and climate consensus. Climatic Change 166, 7 (2021). https://doi.org/10.1007/s10584-021-03089-x