1 Introduction

In the last few decades, the literature on climate change risk, and on environmental policy more generally, has grown significantly. This has been the case since the Kyoto Protocol: starting from the 1990s, a general consensus on the issue of climate change has been established, which had not always been a given. Together with the growing efforts of policymakers to face the problem, this has drawn greater attention to the topic and to studies on extreme-case scenarios.

This growing interest has a number of historical reasons, first of all the fact that the severity of the issue can now be observed directly, for example in melting glaciers or in the increased frequency of extreme events, whose damages fall disproportionately on inland regions and developing countries rather than on Western ones.

In fact, developing countries, especially (but not only) those that rely on agriculture, are more vulnerable to climate change and extreme events. For example, the loss of vital river flowsFootnote 1 is likely to damage mostly mountain or remote regions, while land use and urban sprawl can degrade the land and indirectly cause floods or landslides, in rural areas more than in urban ones.

Overall, climate change is one of the historical reasons behind the lack of territorial cohesion, and not just from a rural-urban perspective. Natural hazard and vulnerability are not homogeneous across geography (see, e.g., Marin et al. 2019; Graziano and Rizzi 2020), which generates inherent costs in enduring the damages caused by exogenous shocks. This is a crucial finding of regional science, already stressed and debated in the literature (Fröhlich and Hassink 2018, among others).

This nonuniformity is thus a source of unequal natural resource distribution: the evidence that some regions suffer more than others from natural hazards and climate change has instigated increasing interest in the literature on the issue. Castells-Quintana et al. (2017) study the impact of climate change and weather events on long-run growth and institutional development, finding evidence of "the role of geography as a fundamental determinant of relative prosperity"; heterogeneity exists even within the same area: Utaminingsih (2017) analyses the resilience of children in the West Lampung coastal area in Indonesia.

Overall, every region has its own resilience (see, e.g., Faggian et al. 2018; van Bergeijk et al. 2017), and each region has to react to hazards and climate issues coherently with its own unique characteristics (see, e.g., Cutter et al. 2016). This is one ground of analysis (obviously, not the only one) where regional science and urban planning have an important role to play, because improving regional resilience will be even more fundamental in the future than it is today.

Moreover, rises in temperatures and increased weather variability are not uniform across the planet, and this heterogeneity may also cause major issues in terms of climate justice. Fig. 1 presents a few features of this regional heterogeneity, as reported in the latest Intergovernmental Panel on Climate Change (IPCC) assessment (UNFCCC 2014).

Fig. 1 UNFCCC (2014): Change in temperature and precipitation. Change in average surface temperature (a) and change in average precipitation (b) based on multi-model mean projections for 2081–2100 relative to 1986–2005 under the left and right scenarios. The number of models used to calculate the multi-model mean is indicated in the upper right corner of each panel. Stippling (i.e., dots) shows regions where the projected change is large compared to natural internal variability and where at least 90% of models agree on the sign of change. Hatching (i.e., diagonal lines) shows regions where the projected change is less than one standard deviation of the natural internal variability

Thus, climate change and natural disasters are also changing the allocation of natural and physical resources across the planet. How is each economy (or agglomeration of economies) reacting to a different set of resources in its endowment?

In order to better understand how regional resilience is coping with the issue, and how it will face these shocks in the coming decades, many papers and other kinds of studies stress various aspects of regional specialisation and of trade networks among different areas.

Pudelko et al. (2018) investigate the economic resilience of some German regions after the 2008 crisis, focusing on the reaction of agglomeration economies and analysing regional resilience in depth, through a clear distinction between sensitivity to an external shock and recovery from it.

Bondonio and Greenbaum (2018) study the resilience of local economies to rare natural disasters in U.S. counties between 1989 and 1999, using a propensity score matching approach. Moreover, a difference-in-differences estimator compares the post-disaster trends in business establishments, employment, and payroll of affected counties to counterfactual trends in the matched counties. The findings of this paper could be extremely useful for policymakers in order to better prepare responses to future disasters.

Thus, a novel and interdisciplinary approach to analysis is crucial also for urban resilience. Borsekova and Nijkamp (2019) respond to this need by collecting the most important quantitative and qualitative tools for impact analysis and disaster management. In their review, the authors include environmental catastrophes, political turbulence and economic shocks.

The main issues that the literature is engaging with are both theoretical and technical. On the theoretical side, it is not straightforward to justify the political cost of an environmental policy; on a more technical level, the probability structure to be used in modelling future catastrophic scenarios is controversial. Thus, assumptions and postulations about the future cost of climate change as well as assumptions and postulations about the present cost of environmental policy have become crucial.

Regarding the political cost, there are two main debatable points: intergenerational equity, and so the opportunity to engage in policies increasing the well-being of future generations (Stern 2006; Weitzman 2009); and social equity, meaning whether policy costs should be imposed only on developed countries or also on the developing ones (Dasgupta 2007).

On the first issue, every paper or report investigating catastrophic scenarios linked to climate change has to provide an analysis that assumes or recommends politically viable intergenerational policies (Stern 2006; Tol and Yohe 2006, 2007; Dietz et al. 2007; Ackerman et al. 2009; Sterner and Persson 2008; Weitzman 2009).

Consequently, a very controversial issue in the literature regards the value of the discount factor between present and future utility (often present and future generations), together with the assumption of a stochastic discount factor (Weitzman 2009).

These unresolved problems are due to the existence of more than one source of uncertainty: future preferences (future utility), potential for environmental damages due to climate change, probability of extreme events, and severity of such extreme events.

The pure rate of time preference, and thus the discount factor, has been one of the most debated questions over the last decade. This is because the discount factor involves a few theoretically disputable points, which will be investigated here. We will therefore analyse how discounting has been modelled in the climate change literature, starting from the Stern Review (Stern 2006), and we will then explore the debate around the Dismal Theorem (hereafter, DT) (Weitzman 2009).

The Stern Review (Stern 2006) is the most influential report on climate change. The document was prepared for the UK Government and published as a volume in January 2007 by Cambridge University Press. The 700-page handbook presented a wide range of evidence on the impact of climate change and on its economic costs. A very beneficial aspect of the Stern Review is that it used a large number of different techniques to assess the costs and risks of climate change, claiming that "there is still time to avoid the worst impacts of climate change if we take strong action now". According to some authors, these convincing conclusions resulted from assumptions that are common to all the scenarios considered in Stern (2006). The debatable assumptions are those connected to the discount factor.

Weitzman (2009), in turn, used a fat-tailed probability function in order to estimate the risk of a climate catastrophe, leading to the result that it is necessary to implement extremely expensive environmental policies in order to be "safe". The effect on the literature has been far-reaching, and his paper has attracted much criticism, mainly on technical grounds but also regarding the conclusions and policy recommendations provided. In the DT, Weitzman designed a consumption trajectory that is also subject to a "random risk" of suffering damages resulting in the loss of human lifeFootnote 2. In so doing, the author postulated a coefficient of relative risk aversion that depends on consumption through its marginal utility, thus obtaining a stochastic discount factor. Weitzman introduced other sources of uncertainty, one being logarithmic consumption growth, which is postulated to follow a Probability Density Function (from now on, PDF) depending on GDP and on some generic structural parameter (that can be interpreted as sensitivity to climate change, an exogenous shock, etc.). Finally, GDP itself was treated as a random variable observed through a finite number \(n\) of independent observations.

The following sections of this study each centre on a landmark paper and discuss the debate around that paper. They are organised as follows: in Sect. 2, we discuss the Stern Review (Stern 2006), regarded by most scholars as a sort of handbook of the economics of climate change; in Sect. 3 we present the DT (Weitzman 2009) commenting on the technical choices made by the author; Sect. 4 is focused on criticism of the DT and the debate around this; in Sect. 5 we explore the most recent works paying attention to current (and future) debate, stressing the Expected Utility theory (from now on, EU theory) and the assumption of rationality; Sect. 6 provides some concluding remarks and suggestions for future research.

2 Stern Review and debate

The increasing interest in the field of catastrophic scenarios, with a particular focus on climate catastrophes, has built a general consensus around the issue of climate change across populations, academia and governments. Hence, many studies providing interesting policy recommendations have been carried out in the last few decades. In this section, we investigate the most important work on catastrophic climate change scenarios and natural disasters and, more generally, on environmental policy and analysis, i.e., The Economics of Climate Change: The Stern Review (Stern 2006). It is worth underlining that this study was not only prepared for the UK government but was also financed by it.

The Stern Review is a vast investigation of the risks and policies around the issue of climate change. The aim of the Review was to lay down the basis for future research on climate policy and to discuss how the different models implemented in economic analyses diverge.

The academic debate regarding the effects of climate change concentrates on uncertainty about future scenarios and on the policy expenditure required to avoid the worst-case scenarios, and the Stern Review deals with these very same aspects. Indeed, Stern (2006) provided simulations, under different assumption frameworks, of the standard inter-temporal welfare function \(W=U(C_{0})+\beta E\left[U(C_{i})\right]\). He asserted that, when modelling catastrophic climate change, the discount rate has to be extremely low (\(\beta\) represents the discount factor) due to the large inter-temporal horizon (\(C_{i}\) is consumption by future generations), since otherwise we would underestimate the catastrophic scenario. This assumption was soon met with scepticism, first of all from Weitzman (2007) as well as from other authors, such as Henderson (2007), Carter et al. (2007) and Tol and Yohe (2007).

The other controversial point in the academic debate regards the expectation \(E(\cdot)\) of future utility. Discussions about the right assumption framework to be used in a climate policy model eventually led to the DT (Weitzman 2009) and are far from over.

The Stern Review provided clear answers entailing a policy perspective. A debatable point concerning the assumption framework adopted has to do with the discount factor, which attributes great importance to the utility of future generations.

As argued by the author, this choice entailed a policy recommendation: "future generations should have a right to a standard of living no lower than the current one"Footnote 3, which follows from the concept of sustainability. Thus, in order to model it, Stern uses a very low discount rate \(r\) defined through the Ramsey equation: \(r=\delta+\eta g\), where \(\delta\) is the pure rate of time preference, \(\eta\) is the elasticity of the marginal utility of consumptionFootnote 4, \(g=\frac{\overset{\cdot}{c}}{c}\) is the growth rate of consumption and, straightforwardly, the discount factor is given by \(\beta=e^{-rt}\).
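To see what these choices imply in practice, the following minimal sketch (illustrative parameter values only: \(\delta\) close to zero and \(\eta=1\) in the spirit of Stern, \(\delta=1.5\%\) and \(\eta=2\) in the spirit of the Nordhaus calibration discussed below, and an assumed growth rate \(g=1.3\%\)) computes the Ramsey rate and the weight \(\beta=e^{-rt}\) assigned to utility one century ahead.

```python
import math

def ramsey_rate(delta, eta, g):
    """Ramsey discount rate r = delta + eta * g (all rates in decimals per year)."""
    return delta + eta * g

def discount_factor(r, t):
    """Continuous-time discount factor beta = exp(-r * t)."""
    return math.exp(-r * t)

g = 0.013  # assumed growth rate of consumption (illustrative, not Stern's exact figure)
calibrations = {
    "Stern-like    (delta=0.1%, eta=1)": (0.001, 1.0),
    "Nordhaus-like (delta=1.5%, eta=2)": (0.015, 2.0),
}

for label, (delta, eta) in calibrations.items():
    r = ramsey_rate(delta, eta, g)
    weight = discount_factor(r, t=100)
    print(f"{label}: r = {r:.1%}, weight on utility 100 years ahead = {weight:.3f}")
```

Under these illustrative numbers, the low-\(\delta\) calibration gives utility one century ahead roughly fifteen times the weight assigned by the high-\(\delta\) one, which is precisely why the choice of \(\delta\) and \(\eta\) drives the policy conclusions.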

This assumption has faced many objections in academia. Weitzman (2007) argued that, according to the model proposed by Stern, it would be necessary to save almost the entire present endowment in order to mitigate losses. To prove this, Weitzman embedded the welfare assessment used in the Stern Review (Stern 2006) in a balanced growth path, obtaining a saving rate \(s=\frac{r-\delta}{\eta r}\) which indicates that nearly the entire present endowment has to be saved. In fact, in order to model the sustainability principle of intergenerational egalitarianism, Stern (2006) had to set \(\delta\approx 0\), because \(\delta\) represents pure time preference, and \(\eta\approx 1\), because \(\eta\) represents aversion to inequality between present and future generations' wealth. Moreover, Weitzman (2007) stated that, in a climate change context, \(g\) needs to be seen as a random variable. Thus, according to the parametrisation set by Weitzman (2007) on the integrated assessment model (IAM), and in line with the policy view adopted in the Stern Review, a climate-friendly saving rate has to be \(s\approx 100\%\).
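A rough sketch of this arithmetic, with illustrative values only (\(\eta=1\), \(g=1.3\%\), and \(\delta\) either close to zero as in Stern or at a higher conventional value), shows how the implied saving rate approaches 100% as \(\delta\rightarrow 0\):

```python
def implied_saving_rate(delta, eta, g):
    """Balanced-growth saving rate s = (r - delta) / (eta * r), with r = delta + eta * g,
    as in Weitzman's (2007) reading of the Stern calibration."""
    r = delta + eta * g
    return (r - delta) / (eta * r)

# Illustrative parameters only, not the exact figures used in the original papers.
for delta in (0.001, 0.015):
    s = implied_saving_rate(delta=delta, eta=1.0, g=0.013)
    print(f"delta = {delta:.1%}: implied saving rate s = {s:.0%}")
```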

In addition to the controversies regarding the parameter values analysed above, Weitzman (2007) introduced another source of uncertainty regarding the future value of \(r\): \(E(r)\). In fact, while Stern used a linear combination of a high and a low value of \(r\), of the form \(\frac{1}{2}e^{-r_{1}t}+\frac{1}{2}e^{-r_{2}t}\), Weitzman modified this feature of the model using a set of subjective probabilities \(p_{i}\) over the values of the discount rate \(r_{i}\). He obtained \(r(t)=-\frac{\ln\sum p_{i}e^{-r_{i}t}}{t}\), which declines monotonically over time to the asymptotic limit \(r(\infty)=\underset{i}{\min}\left\{r_{i}\right\}\), leading him to sarcastically conclude that "Stern is right for the wrong reasons".
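A short sketch, using an illustrative subjective distribution over three possible discount rates (not Weitzman's own calibration), shows this certainty-equivalent rate declining towards the lowest rate as the horizon lengthens:

```python
import numpy as np

def certainty_equivalent_rate(t, rates, probs):
    """Weitzman's certainty-equivalent discount rate:
    r(t) = -ln( sum_i p_i * exp(-r_i * t) ) / t."""
    rates, probs = np.asarray(rates), np.asarray(probs)
    return -np.log(np.sum(probs * np.exp(-rates * t))) / t

# Illustrative subjective distribution over the discount rate (not Weitzman's calibration).
rates = [0.01, 0.04, 0.07]
probs = [1 / 3, 1 / 3, 1 / 3]

for t in (1, 10, 50, 100, 300, 1000):
    print(f"t = {t:4d} years: r(t) = {certainty_equivalent_rate(t, rates, probs):.2%}")
# r(t) declines monotonically and approaches min(rates) = 1% as t grows.
```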

As already mentioned, Weitzman suggested treating the growth rate \(g\) as a normal random variable, \(g\sim N(\mu,\sigma^{2})\), where \(\mu\) and \(\sigma^{2}\) are unknown parameters. Hence, he obtained \(r^{f}=\delta+\eta\mu-\frac{1}{2}\eta^{2}\sigma^{2}\). By representing the growth rate as a random variable, his criticism gave volatility a more important role than the interest rate itself. This was a crucial aspect of Weitzman's view of climate change modelling that would lead, two years later, to the development of his Dismal Theorem (see the following section).
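The variance term is easy to see at work in a minimal sketch with illustrative numbers (none of them taken from the papers discussed here): raising the volatility of growth lowers the risk-adjusted discount rate, and a sufficiently large \(\sigma\) can almost offset the growth term.

```python
def risk_adjusted_rate(delta, eta, mu, sigma):
    """Ramsey rate with uncertain growth g ~ N(mu, sigma^2):
    r_f = delta + eta * mu - 0.5 * eta**2 * sigma**2."""
    return delta + eta * mu - 0.5 * eta**2 * sigma**2

# Illustrative values only.
delta, eta, mu = 0.001, 2.0, 0.013
for sigma in (0.01, 0.05, 0.10):
    print(f"sigma = {sigma:.0%}: r_f = {risk_adjusted_rate(delta, eta, mu, sigma):.2%}")
# Larger growth volatility lowers the risk-adjusted discount rate.
```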

Dasgupta (2007) identified a key point of discussion by analysing the distinction between \(\delta\) and \(\eta\), which is "crucial" according to the author because, while the first affects intergenerational distribution, the second involves distribution among agents of the same generation. Thus, Dasgupta (2007) provided evidence that Stern (2006) adopted an extremely egalitarian view of intergenerational distribution but an extremely non-egalitarian view of the distribution between developed and developing countries.

An accurate analysis of the parameter values was performed by Sterner and Persson (2008), who carried out in-depth observations on each of Stern’s choices:

  • With respect to \(\delta\), Sterner and Persson (2008)Footnote 5 noted that a higher value of pure time preference rate implies less weight being put on future damages, hence less abatement today. Thus, according to the authors, Stern correctly implemented his “normative” perspective and “ethical” view about the conditions of future generations.

  • Regarding \(\eta\), the marginal elasticity of utility to income, Sterner and Persson (2008)Footnote 6 argued that "it is ironic that Stern has been accused of using too low a value for \(\eta\) when he has used a value of one". In fact, in standard real business cycle models, \(\eta\) is close to zero and it is straightforward to think of a concave utility function.

  • Concerning \(g\), the growth rateFootnote 7: this is another tricky issue. The authors wondered whether Stern used a growth rate that is too high and whether it is appropriate to assume perpetual growth in this kind of analysisFootnote 8.

In general, even though Sterner and Persson (2008) criticised some aspects of the Stern Review (the most important one being the probable underestimation of rises in the prices of certain goods and services), they did not consider it a "problem" that Stern (2006) made some "political assumptions"; this is not the same as calling the Stern Review a "political document", as Weitzman (2007)Footnote 9 and Nordhaus (2007)Footnote 10 did. Specifically, Weitzman (2007) discarded many aspects of the Stern Review because they relied on strong theoretical assumptions that are not easily testable in empirical terms. While this may be true, the same objections can be raised against Weitzman's criticism, and this is why the analysis proposed by Dietz is extremely interesting. Dietz et al. (2007) argued that the ethical considerations regarding responsibilities among generations necessarily "boil down" the model to a very simplistic one. The point applies to any value chosen ("how do we, as a generation, value benefits to collective action to protect the climate for generations a hundred or more years from now?"), and we need to take into consideration the fact that damages due to climate change could affect future growth, so the values of \(g\) are not straightforward either.

Fig. 2 Consumption choice along two time periods, Dietz et al. (2007). A critic's case: Nordhaus (2006); base case: Stern (2006)

Undoubtedly, the link between the evaluation of risk and intergenerational ethics is not univocal. Fig. 2 shows two extreme views present in the literature, Nordhaus (2006) and Stern (2006), together with two intermediate simulations. The bottom part of the figure includes the estimation made by the Stern Review, asserting that climate change will involve a loss greater than ten per cent of global mean consumption per capita.

At the top of the figure, there is an example of a study in which the effects of climate change are considered in an optimistic way (given the risk) and a lower weight is placed on the utility of future generations, i.e., Nordhaus (2006), in which \(\delta=1.5\%\) and \(\eta=2\). Furthermore, no (or very little) uncertainty is expressed in that paper. Dietz et al. (2007) argued that, within a similar simulation, it is not possible to justify any policy action because, according to the chosen parameter values, the estimated damages are only \(0.6\%\) of current consumption.

Dietz et al. (2007) focused on the most sensitive aspects of the Stern framework of analysis, running two additional simulations, the first entailing "a larger ethical weight" but omitting uncertainty, and the second introducing uncertainty but adopting the same parameters used by Nordhaus (2006). In this case, the cost of climate change increases with respect to that in Nordhaus (2006), but not significantly. This is because "the interaction between risk and ethics … is crucial"Footnote 11.

Although it is appropriate to assert that some assumptions put forth by Stern (2006) are “political”, this very same criticism may be raised in relation to any assumption framework chosen. In fact, Dietz et al. received similar comments to their study a few years later (regarding their critique of Weitzman’s DT).

Notwithstanding the doubts, criticism and controversies, the Stern Review earned widespread attention by academia, as well as by supranational and national institutions.

Moreover, even the idea that Stern overestimated climate risk (Tol and Yohe 2006) has been contradicted by some papers, an interesting example being Ackerman et al. (2009). They used the same assumptions made by Stern (2006) but changed some parameters considered to be too "optimistic", concluding that future climate damages will be greater than those calculated by the Review in terms of future loss of GDP. According to their paper, climate damages amount to a 10.8% loss of GDP in 2100, whereas Stern estimated this loss at 2.2%.

Ackerman et al. (2009) chose a different approach for three important parameters: the threshold of a climate catastropheFootnote 12, its probabilityFootnote 13 and the damage functionFootnote 14. Thus, Ackerman et al. (2009) concluded that Stern (2006) did not inflate the damages and correctly estimated the cost of climate change.

The academic debate regarding the technical aspects of the Stern Review has covered nearly every part of its assumption framework. It is altogether unequivocal that the Stern Review represented a breaking point in environmental policy analysis, since it “(…) has come to symbolize something of a dividing line in the evolution of the common appreciation of the climate problem. (…) the Stern Review is the first major, official economic report to give climate change a really prominent place among global problems” (Sterner and Persson 2008).

The controversy around the conclusions and estimations of the Stern Review has been widespread, and the kind of integrated assessment model used (called PAGE2002) has been applied in many other papers, also covering different environmental fields (an example of this is Alberth and Hope 2007).

PAGE2002 was deemed able to account for all the damages caused by climate change, as described by the third IPCC Report (UNFCCC 2001). It is worth noting that the more recent UN reports had not yet been produced in 2006Footnote 15, and the ones produced later followed the Stern frameworkFootnote 16.

Hope (2006) underlined that PAGE2002 was an update of the PAGE95 integrated assessment model (Plambeck and Hope 1996) able to investigate all the following aspects: Emissions of primary greenhouse gases, \(CO_{2}\) and methane, including changes in natural emissions stimulated by the changing climate; Greenhouse effect; Cooling from sulphate aerosols; Regional temperature effects; Non-linearity and transience in the damage caused by global warming; Regional economic growth; Adaptation to climate change; and the possibility of a future large-scale discontinuity.

The Appendix in Hope (2006) provides a valuable synthesis for a deeper analysis of the modelling introduced by PAGE2002.

Around the same period, a crucial aspect that was often emphasised regarded the probabilistic features of the model. Although Stern did allow for a range of uncertainty, according to Weitzman and many other authors this was not enough, especially because of the different sources of uncertainty and the limits to information, which can justify the use of "fat-tailed" probability.

Thus, three years later Weitzman would develop his DT (Weitzman 2009), and the ensuing debate would be long and intense.

3 Dismal Theorem and recent literature

The years immediately after the publication of the Stern Review saw a major debate around the correct use of the expected utility theory and which parameters are able to model climate change issues in a stochastic setting.

This paper focuses on the debate around intergenerational climate equity, but in recent years there has been an increase in academic interest also in the failure of supra-national policies due to the failure of international climate (or abatement) agreements. Barrett (2013) is one of the few available examples of a study investigating the uncertainty behind the level of pollution (or lack of pollution control) and government incentives to implement serious environmental policies. This is a theoretical paper, featuring the assumption that a natural disaster will occur above an uncertain threshold level of pollution. The author analysed uncertainty by using a uniform and a Gaussian random variable in order to identify the level of pollution that could trigger a catastrophe. Under this assumption, any additional unit of emissions corresponds to a higher risk of a disaster being caused.

Barrett’s paper concerned climate treaties among countries. A particularly interesting conclusion regards the fact that the severity of possible disasters does not affect the Nash equilibria of the model, due to free-riding actions. On the contrary, the structure of uncertainty, linked to the threshold triggering a climate catastrophe, does affect the equilibria.

Another unresolved line of debate present in the literature has to do with the appropriate welfare framework in order to evaluate environmental policy, especially for what concerns climate change. A stimulating work in this regard is the one by Heal (2009), which explored the different tools used in the literature to investigate climate change policy.

As explained in the previous section, the key point in the academic discussion is the discount rate. Stern (2006) argued that the pure rate of time preference (PRTP) should be set extremely low, and the discussion about this issue is known to be widespread.

Without dismissing an in-depth analysis of the discount factor, Heal (2009) highlighted the importance of the elasticity of substitution among goods. He also stressed the role of uncertainty when investigating the solutions proposed by Stern (2006) and Nordhaus (2007) ("Uncertainty is one of the dominant facts in any analysis of climate change", Heal 2009, p. 12), adding the possibility of substitution among commodities. In doing so, Heal introduced other sources of uncertainty, i.e., those regarding the magnitude of a possible catastrophe, the kind of pollution that could cause it, and the real risk of experiencing the catastrophic consequences of climate change.

Indeed, any study entailing the possibility that a natural disaster might occur has to deal with the level of uncertainty concerning this possibility and the information available to the agents addressing the issue.

Entailing the possibility of a catastrophe requires a thorough analysis of events whose chance of occurring is negligible. This strains the standard tools of EU theory, which are mostly insensitive to small-probability events.

The point is that some environmental problems are exactly the kind of low-probability, high-impact events to which EU theory is least sensitive. For instance, the economic issues posed by climate change involve the need to investigate, under uncertainty, the best policy for low-probability events with high-impact catastrophic consequences. Indeed, Weitzman (2009) directed his attention to this.

The main point in Weitzman (2009) was to show "that the economic consequences of fat-tailed structural uncertainty (…) can readily outweigh the effects of discounting in climate-change policy analysis". This was achieved using a framework that differed from models of uncertain catastrophes in which uncertainty takes the form of a known thin-tailed PDF. Weitzman provided a model with a fat-tailed PDF, which arises from combining the evidence-based likelihood with structural uncertainty about its scale parameter, so that the resulting posterior distribution inherits a thick tail.

A key feature of Weitzman (2009) is the discount factor. Since he used a theoretical model with two time periods, he opted for a stochastic discount factor instead of a fixed rate. The stochastic discount factor used by Weitzman is defined as the ratio between expected future marginal utility \(E[U^{\prime}(C)]\) and current marginal utility \(U^{\prime}(1)\), also accounting for the pure rate of time preference \(\beta\): \(M:=\beta\frac{E[U^{\prime}(C)]}{U^{\prime}(1)}\).

Within the framework of analysis in Weitzman (2009), this stochastic factor \(M\) gives the marginal rate of substitution between present consumption and the transfer of an infinitesimal unit of consumption to the future. The conclusion is that this stochastic discount factor \(M\) tends to infinity.
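A minimal numerical sketch can illustrate why \(M\) explodes. With CRRA marginal utility \(U'(C)=C^{-\eta}\) and \(y=\log C\), the expectation \(E[e^{-\eta y}]\) is finite when \(y\) is Gaussian but grows without bound when \(y\) has a polynomial left tail (a Student-t is used here purely for illustration); pushing the truncation of the left tail further and further out makes this visible. All parameter values below are assumptions of this sketch, not Weitzman's calibration.

```python
import numpy as np
from scipy import stats

eta, beta = 2.0, 0.99          # illustrative CRRA coefficient and pure time preference
mu, scale = 0.02, 0.1          # illustrative location/scale of log-consumption y

def truncated_M(pdf, L, n_grid=200_000):
    """beta * E[exp(-eta * y); y > -L], via the trapezoidal rule on [-L, mu + 10*scale]."""
    y = np.linspace(-L, mu + 10 * scale, n_grid)
    f = np.exp(-eta * y) * pdf(y)
    return beta * float(np.sum((f[:-1] + f[1:]) * np.diff(y)) / 2.0)

thin = stats.norm(loc=mu, scale=scale).pdf       # thin-tailed (Gaussian) distribution for y
fat = stats.t(df=3, loc=mu, scale=scale).pdf     # fat-tailed (Student-t) distribution for y

for L in (1, 2, 5, 10, 20):
    print(f"left truncation at y = {-L:3d}: "
          f"M_thin = {truncated_M(thin, L):.3f}, M_fat = {truncated_M(fat, L):.3e}")
# M_thin settles at beta * exp(-eta*mu + 0.5 * eta**2 * scale**2) (the lognormal case),
# while M_fat keeps growing as the truncation moves left: the numerical counterpart of M -> infinity.
```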

The author obtained this by starting from three main assumptionsFootnote 17:

(i) The coefficient of relative risk aversion \(\eta(c):=-c\frac{U^{\prime\prime}(C)}{U^{\prime}(C)}\) is strictly greater than zero as \(c\rightarrow 0\).

(ii) The consumption growth rate \(y:=\log C\) is distributed according to a PDF \(h(y|s)=\frac{1}{s}f(\frac{y-\mu}{s})\), where \(f\) is a normalisable function, \(\mu\) is a known location parameter, and \(s\) is an uncertain scale parameter. Weitzman interpreted \(s\) as a "generalized climate sensitivity parameter" (…).

(iii) The prior on \(s\) is of the form \(p(s)\propto s^{-k}\), where \(k>0\), and you are given a finite number \(n\) of independent observations \(y_{n}\) of the random variable \(y\).

The first assumption (i), as summarized by Millner (2013), is merely a form of risk aversion; the second (ii) is an assumption used to represent the consumption growth linked to climate change, for which Weitzman straightforwardly adopted a stochastic form; the third assumption (iii) regards the uncertainty on climate sensitivity.

Thus, Weitzman concluded that, as consumption declines in a "catastrophic" way (\(y\rightarrow-\infty\)), the stochastic discount factor tends to infinity (DT):

(a) The posterior distribution for \(y\), \(q(y|y_{n})\propto\int_{0}^{\infty}h(y|s)\Pi_{n}h(y_{n}|s)p(s)ds\), scales like \(|y|^{-(n+k)}\) as \(y\rightarrow-\infty\).

(b) The stochastic discount factor \(M\rightarrow\infty\).

Conclusion (a) is the central result of the DT: even as consumption growth declines towards \(-\infty\), the posterior for \(y\) retains a polynomial (fat) tail, because the conditional PDF \(h(y|s)\) is averaged over the prior \(p(s)\propto s^{-k}\) with \(k>0\). As argued above, this work represented a major innovation in modelling the risk of a catastrophe related to climate change, and the reaction among academics has been rather strong. The idea of a CBA entailing fat-tailed risk is not widely accepted, also because it implies costly and urgent policy actions, even though it is commonly accepted that policies should be oriented towards reducing the probability of a catastrophe occurring in the future.
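The tail exponent in conclusion (a) can be checked numerically. The sketch below is purely illustrative: it takes a Normal kernel for \(h(\cdot|s)\), a handful of synthetic "observations", and an assumed prior exponent \(k\), evaluates the unnormalised posterior on a grid of very negative \(y\), and fits the slope of \(\log q\) against \(\log|y|\), which should come out close to \(-(n+k)\).

```python
import numpy as np
from scipy import stats

mu, k = 0.0, 2.0                                      # illustrative location and prior exponent
y_obs = np.array([0.01, -0.02, 0.015, 0.03, -0.01])   # n = 5 synthetic "observations" of y
n = len(y_obs)
s_grid = np.logspace(-3, 3, 4000)                     # grid over the uncertain scale parameter s

def unnormalised_posterior(y):
    """q(y) proportional to the integral over s of h(y|s) * prod_i h(y_i|s) * s**(-k),
    with h(.|s) a Normal kernel of scale s (trapezoidal rule on a log-spaced grid)."""
    integrand = (stats.norm.pdf(y, loc=mu, scale=s_grid)
                 * np.prod(stats.norm.pdf(y_obs[:, None], loc=mu, scale=s_grid), axis=0)
                 * s_grid ** (-k))
    return float(np.sum((integrand[:-1] + integrand[1:]) * np.diff(s_grid)) / 2.0)

ys = np.array([-5.0, -10.0, -20.0, -40.0, -80.0])     # far into the catastrophic left tail
log_q = np.log([unnormalised_posterior(y) for y in ys])
slope = np.polyfit(np.log(np.abs(ys)), log_q, 1)[0]
print(f"fitted tail exponent = {slope:.2f}   (theory: -(n + k) = {-(n + k):.1f})")
```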

Furthermore, in response to the strong criticism of the assumptions of the DT, Weitzman provided an interesting line of reasoning on the use of the framework analysed in his study, looking at the probability distribution of the climate response to changes in GHG concentrations, namely the equilibrium climate sensitivity ("a good example of a known unknown"Footnote 18). Thus, in a subsequent work, Weitzman (2011) argued that results can depend on seemingly arbitrary decisions about how to model tail probabilities, specifically whether a Pareto PDF or a Normal one is used. Indeed, it is true that, when fitting the same probabilities as UNFCCC (2007), there is a great difference in upper-tail behaviour between the Pareto and the Normal distribution. This adds to the fact that, on this point alone, there are at least two sources of uncertainty: the change in temperature (Table 1) and the sensitivity to GHG emissions (Table 2).

Table 1 \(Prob[S\geq\hat{S}]\) for fat-tailed Pareto and thin-tailed Normal distributions

In Table 1, \(\hat{S}\) is the temperature-increase threshold analysed, and the values shown represent the probability of that threshold being exceeded; Table 2 displays the link between emissions and temperature increase. Depending on the probability structure assumed, Pareto = \(\textit{Prob}_{P}(\cdot)\) = fat tails and Normal distribution = \(\textit{Prob}_{N}(\cdot)\) = thin tails. How do the ppm of \(CO_{2}\) affect the increase in temperature? Simply put, this is uncertain, and Weitzman was able to show how (and why) the outcome is more worrisome when using a fat-tailed distribution (Byrnes 2015). For given values of ppm, Weitzman (2011) calculated the probability of exceeding a threshold (\(5^{\circ}\)C or \(10^{\circ}\)C), as follows.

Table 2 Probabilities of exceeding \(T=5^{\circ}\text{C}\) and \(T=10^{\circ}\text{C}\) for given \(G=ppm\) of \(CO_{2}e\)
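The mechanism behind these tables can be reproduced with a small sketch. The calibration below is purely illustrative (it is not Weitzman's): a Normal and a Pareto distribution for climate sensitivity \(S\) are forced to share the same median and the same 90th percentile, and their exceedance probabilities are then compared at high thresholds.

```python
import numpy as np
from scipy import stats

# Illustrative calibration (not Weitzman's): both distributions for climate sensitivity S
# are forced to share the same median (3 C) and the same 90th percentile (4.5 C).
median, q90 = 3.0, 4.5

# Thin-tailed Normal: mean at the median, sigma chosen to match the 90th percentile.
sigma = (q90 - median) / stats.norm.ppf(0.90)
normal = stats.norm(loc=median, scale=sigma)

# Fat-tailed Pareto: shape alpha and scale x_m chosen to match the same two quantiles,
# using (q90 / median) = 5**(1/alpha), i.e. the ratio of the two quantile conditions.
alpha = np.log(5.0) / np.log(q90 / median)
x_m = median / 2.0 ** (1.0 / alpha)
pareto = stats.pareto(b=alpha, scale=x_m)

for threshold in (5.0, 10.0):
    print(f"P(S >= {threshold:4.1f} C): Normal = {normal.sf(threshold):.2e}, "
          f"Pareto = {pareto.sf(threshold):.2e}")
# With the same body of the distribution, the Pareto tail assigns orders of magnitude
# more probability to extreme warming than the Normal one.
```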

The debate around the DT has mainly concerned the mathematical tools and assumptions used, rather than its underlying principles. Thus, in the presence of fat-tailed outcomes and strong risk aversion, using the standard EU theory implies some weaknesses, because the probability of catastrophic events does not decrease rapidly enough to compensate for risk aversion. On the other hand, it remains crucial to consider the deep structural uncertainties concerning catastrophic outcomes, which influence the plausible applications of any cost-benefit analysis to the economics of climate change in the foreseeable future.

4 Criticism of the Dismal Theorem

The interpretations and assumptions found in Weitzman (2009) sparked an intense debate (it is possible to count more than one thousand citations on the matter)Footnote 19. Indeed, according to a key branch of the environmental literature, the dismal theorem is of no great relevance to the climate problem and the modelling strategy proposed by Weitzman is not useful to conduct a cost-benefit analysis.

The DT framework came under criticism in relation to three main aspects: (i) the assumption that does not allow endowment transfer (Horowitz and Lange 2009); (ii) the fact that infinite marginal willingness to pay does not imply that total willingness to pay is infinite (Karp 2009); (iii) the use of a CRRA utility function (Arrow and Priebsch 2014).

Horowitz and Lange (2009) showed that Weitzman's model can be applied only to cases in which it is not possible to shift wealth from the present to the future. This is a serious limitation of Weitzman's approach and, even though scholars have put forward different kinds of criticism, it remains a central point.

In fact, the impossibility of investing during the first time period affects the expected value of the stochastic discount factor. Using the same model but considering the explicit investment decision, Horowitz and Lange obtained quite different results, based on which they concluded that (…) the optimal expected marginal rate of substitution between current and future consumption is finite (…). This is the main difference between their methodology and the one followed by Weitzman.

Since, unlike in Weitzman (2009), the investment decision is explicit in Horowitz and Lange (2009), in the second time period consumption is \(C_{1}(S)+R(p,s)\), where \(R(p,s)\) is the return on the investment \(p\)Footnote 20 made during the first time period. Thus, \(\beta\frac{E\left[U^{\prime}(C_{1}(S))\right]}{U^{\prime}(1)}=\infty\) is not obtained as the optimal investment choice if a technology exists that is able to transfer an asset into the futureFootnote 21. In fact, even when the optimal choice is \(p=0\), we have \(\beta\frac{E\left[U^{\prime}(C_{1}(S))\right]}{U^{\prime}(1)}=\frac{1}{\widehat{R}^{\prime}(0)}\), which clearly contradicts Weitzman (2009).
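The mechanism can be illustrated with a hedged Monte Carlo sketch (all parameters are assumptions of this sketch, not Horowitz and Lange's specification): with fat-tailed future consumption and CRRA marginal utility, the expectation \(E[U'(C_1)]\) is infinite, but adding even a small guaranteed return \(R>0\) from a transfer technology bounds marginal utility by \(R^{-\eta}\) and makes the expected stochastic discount factor finite.

```python
import numpy as np

rng = np.random.default_rng(1)
eta, beta = 2.0, 0.99                  # illustrative CRRA coefficient and pure time preference

# Fat-tailed future consumption: C1 = exp(y) with y Student-t distributed
# (purely illustrative, not Horowitz and Lange's specification).
y = 0.02 + 0.1 * rng.standard_t(df=3, size=1_000_000)
C1 = np.exp(y)

R = 0.05                               # small guaranteed return from transferring endowment forward
M_no_transfer = beta * np.mean(C1 ** (-eta))        # sample estimate; the true expectation is infinite
M_with_floor = beta * np.mean((C1 + R) ** (-eta))   # bounded above by beta * R**(-eta), hence finite

print(f"estimate without transfer technology: {M_no_transfer:.2f} (keeps growing with the sample size)")
print(f"estimate with guaranteed floor R = {R}: {M_with_floor:.2f} (<= {beta * R ** (-eta):.0f})")
```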

The point is that Weitzman (2009) followed the Bayesian paradigmFootnote 22, using a likelihood function in order to investigate the behaviour of the random variable. Within this framework of analysis, the conclusion of an infinite expected marginal rate of substitution follows directly.

Karp (2009) raised similar objections to the DT. He highlighted this very same point, concluding that the intergenerational conflict arising from climate change policy might be exaggerated under two circumstances: (a) if it is not possible to change the composition of investment between man-made input and natural capital; or (b) if the correct investment decisions are not able to internalize either future or present damagesFootnote 23.

Karp (2009) conducted an analysis of consumption over two time periods, studying its possibility frontier (Fig. 3). He found that the internalization of present damages and the chance to abate during the future time period (opportunities to consume abatement) cause the curve (the consumption possibility frontier) to shift to the right, i.e., from the continuous to the dotted curve. Accordingly, the discount rate plays a key role in the decision of whether to abate. The first best, point C, entails a positive amount of abatement during both time periods. The point is that, in this first best, investment choices during the first time period determine emissions during the second one, so the decision on emissions is not exogenous. This is because, in this setting, the current generation bequeaths to the future both man-made input and the possibility of abating (of consuming abatement), while in the current period it is possible to internalize the damages. So, Karp (2009) criticized the DT because its assumptions do not allow the design of a long-term policy.

Fig. 3 Consumption choice along two time periods, Karp (2009). Point A: constrained optimum with abatement set equal to 0; Point B: BAU equilibrium; Point C: first best with abatement

Karp (2009) also pointed out that, if agents take the emissions of the second time period as exogenous (or if it is not possible to internalize current emissions), the equilibrium is given by point B. Specifically, KarpFootnote 24 did not object to the DT in its entirety; he agreed that uncertainty about the distribution of a random variable can significantly increase overall uncertainty about this random variable, leading to a much higher risk premium. However, he strongly argued against the interpretation of the DT according to which society should be willing to make essentially any sacrifice to transfer a unit of certain consumption into the future. Roughly for the same reasons, both Karp (2009) and Horowitz and Lange (2009) also criticized the fact that Weitzman presented his result in terms of marginal willingness to pay, rather than total.

Horowitz and Lange (2009) used the same CRRA function found in Weitzman (2009) in order to investigate the temporal substitution of consumption over two time periods, \(U(C)=\frac{C^{1-\eta}}{1-\eta}\), where future consumption is a random variable capturing the uncertainty about its value.

Nordhaus (2011) expressed his criticism in a different way, concentrating on another key point of debate, i.e., the absence of policy in the DT. In his opinion, this influences the conclusions reached by Weitzman and makes his approach more pessimistic, as the policymaker has no opportunity to do anything to avoid a catastrophic scenario: “Weitzman discusses many implications of the dismal theorem, but one of the most striking is its destructive effect on benefit cost analysis, particularly for climate change”.

Nordhaus (2011) recognized the importance of the DT in terms of helping to identify when tail events are significant, but it is unquestionable that Weitzman postulated the complete (and strong) inability of society to learn and act in time. In this regard, in order to support his theory, Weitzman (2011) argued that the extremely long time required by the system to naturally remove extra atmospheric \(CO_{2}\) (Solomon et al. 2009) provides evidence of the ineffectiveness of any policy.

The discussion around the DT reached a particularly interesting phase in 2013, when the Journal of Economic Literature hosted articles written by scholars of two discount modelling views: “The Structure of Economic Modeling of the Potential Impacts of Climate Change: Grafting Gross Underestimation of Risk onto Already Narrow Science Models” (Stern 2013) and “Tail-Hedge Discounting and the Social Cost of Carbon” (Weitzman 2013), with an additional contribution by Pindyck: “Climate Change Policy: What Do the Models Tell Us?” (Pindyck 2013). In his paper, Stern underlined the need for a new generation of models accounting for the possibility of catastrophic outcomes, such as large-scale migration and conflicts. On the other hand, however, he highlighted the difficulties inherent in considering additional aspects, stating that there are many assumptions driving underestimation (thin-tailed) of climate risk in usual IAMs like, for instance, exogenous drivers of growth or the weakness of the damage function. Overall, Stern concluded that problems of underestimation in economic modelling of climate change impacts arise directly from these basic assumptions on the modelling of growth and climate impactsFootnote 25.

Thus, analysing usual IAM settings, Stern (2013) found that the assumptions leading to a thin-tailed structure do not give enough importance to catastrophic scenarios. He showed that the separability assumption and constant growth combined lead to an optimistic conclusion, and the key point is not so much constancy or separability but the exogeneity of a key driver of growth combined with weak damagesFootnote 26. Stern focused specifically on a theoretical issue: if we assume long-run growth of any kind, we are already ruling out (by assumption) a disruption arising from higher temperatures; in addition, only models with wide probability distributions over important parameters, including those influencing growth, temperatures and damages, are capable of producing futures that are as dismal and destructive as climate science suggests may be possibleFootnote 27. Therefore, Stern concluded that a new generation of models for climate science, impact and economics is needed.

Pindyck (2013) provided a similar analysis of the IAM approach, underlining the arbitrariness of its assumptions. Specifically, like Heal (2009), Pindyck stressed the arbitrariness of the utility function (and the related parameters) and, following many authors already cited, the sensitivity of temperature to changing \(CO_{2}\) concentrations, with the consequent economic impact of rising temperatures. This is an interesting perspective of analysis, since Pindyck clearly showed the parameters leading to the different social costs of carbon estimated by Nordhaus (2011) ($12 per ton of \(CO_{2}\)) and Stern (2006) ($200 per ton of \(CO_{2}\)). Regarding the focus of this review, Pindyck (2013) stated that catastrophic outcomes are another major problem with using IAMs to assess climate change policy, and that this aspect is usually completely ignored. This is so because there is uncertainty regarding both the rise in temperatures and how GDP may react to this rise. Thus, Pindyck (2013) proposed a particular strategy: "The problem is analogous to assessing the world's greatest catastrophic risk during the Cold War – the possibility of a U.S.–Soviet thermonuclear exchange. How likely was such an event? There were no data or models that could yield reliable estimates, so analyses had to be based on the plausible, i.e., on events that could reasonably be expected to play out, even with low probability. Assessing the range of potential impacts of a thermonuclear exchange had to be done in much the same way. Such analyses were useful because they helped evaluate the potential benefits of arms control agreements"Footnote 28.

Weitzman (2013) focused his analysis on the "notoriously hypersensitive dependence of cost-benefit analysis (CBA) on the choice of a discount rate"Footnote 29, introducing a (then new) concept of climate change discounting that decomposes benefits and costs, as is usually done in the capital asset pricing model (CAPM). In so doing, he obtained a sort of insured DT. In fact, Weitzman introduced a component that hedges against the bad tail of catastrophic damages by insuring positive expected benefits even under the worst circumstances. These results are very interesting, especially in theoretical terms, because they raise questions about the relation between the price of risk and the social cost of carbon (SCC). Academics recognised this innovative approach and, after this paper, climate risk assessment using CAPM-style tools grew rapidly.

Indeed, one result of this widespread debate (lasting over a decade) around the discount factor is that, although the IAM approach is still the most widely adopted setting for the analysis of climate risk, it is no longer the only one.

5 Climate change studies with catastrophic scenarios: something more than IAM approaches

The wide debate around the discount factor and, more generally, the criticism of IAM approaches have thus had consequences for United Nations reporting, governments, agreementsFootnote 30, and academia. In fact, the discussion reviewed here has played an important role in the design of the new global strategies pursued (or otherwise) by policymakers and, undoubtedly, the scientific approach and the academic debate mentioned above have been taken seriously. Nowadays, research addresses the fragility of EU theory with respect to heavy-tailed distributional assumptions using more than a single methodology, and approaches other than IAMs are much less marginalized. For instance, Ikefuji et al. (2015) provided a general theoretical framework that leaves the probability distribution unrestricted. At the same time, they followed the idea that it is reasonable to use fat tails to model future consumption and observed that there is a limit to what present policy can demand in terms of forgone present consumption. One of the meanings of this finding (at least, in our interpretation) is simply that an insurance cannot have an infinite price.

Specifically, Ikefuji et al. (2015) raised objections to the mainstream EU theory axiomatizations by Von Neumann and Morgenstern (1944) and Savage (1956), without discrediting the DT, and argued that it is appropriate to question whether EU theory is actually suitable for normative analyses of catastrophic risk. They used an inter-temporal welfare function quite similar to those of Stern and Weitzman, \(W(C_{0},C_{1})=U(C_{0})+\frac{1}{1+\rho}E[U(C_{1})]\), assuming that it is possible to transfer endowment from the present to the future. Thus, future wealth depends on the state of nature at time \(t=1\): \(W_{1}=(W_{0}-C_{0})e^{A\epsilon_{1}}\), where \(\epsilon_{1}\) is the risk, with a given probability distribution under \(\mathbb{P}\) (the shape of the probability function is given by the assumptions made).

Consequently, Ikefuji et al. (2015) asserted that the representative agent faces catastrophic consumption risk at future time \(t=1\) when \(\epsilon_{1}\) is heavy-tailed to the left under the objectively or subjectively given probability measure \(\mathbb{P}\). In fact, deriving the optimal consumption bundle, the inter-temporal marginal rate of substitution is given by \(P_{C^{\ast}_{1}}(C^{\ast}_{1})=\frac{U^{\prime}(C^{\ast}_{1})}{(1+\rho)U^{\prime}(C^{\ast}_{0})}\); so there exists at least the option of postponing current consumption because in the future there could be a need for catastrophic consumption (a sort of self-insurance). Simply put, whether the rate of substitution is finite or not depends on the assumption regarding risk aversionFootnote 31.

Hence, if relative risk aversion is non-negative and non-decreasing for any positive consumption, and absolute risk aversion is non-increasing for any positive consumption, then the expected inter-temporal marginal rate of substitution is finite: \(E(P)<\infty\). Ikefuji et al. (2015) criticized Weitzman (2009) only in relation to the fact that the DT is based on an ex-ante incompatible model specification; by showing this, they demonstrated that it is possible to use EU theory to implement similar modelling.
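The logic behind such finiteness results can be sketched in one line, under the additional assumption (made here purely for illustration, and not stated in this form by Ikefuji et al. 2015) that the risk-aversion conditions imply an upper bound \(B\) on marginal utility for all positive consumption levels:

\[
E(P)=\frac{E[U^{\prime}(C^{\ast}_{1})]}{(1+\rho)U^{\prime}(C^{\ast}_{0})}\leq\frac{B}{(1+\rho)U^{\prime}(C^{\ast}_{0})}<\infty ,
\]

regardless of how heavy the left tail of \(C_{1}\) is; by contrast, CRRA marginal utility is unbounded as consumption approaches zero, which is where the DT's infinite discount factor comes from.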

In fact, Ikefuji et al. (2015) provided a utility function of present and future consumption that avoids the fragility of distributional assumptions with respect to Savage's entire axiomatization, but not with respect to that of Von Neumann and Morgenstern: "only … Savage's axiomatization of EU is valid"Footnote 32. So they defended EU theory while discarding Von Neumann and Morgenstern's axioms.

So, Ikefuji et al. (2015) participated in the debate on the EU framework without analysing the fragility of the human rationality assumption. Yet, environmental policy papers entailing catastrophic scenarios have also been affected by the recent evolution of economic sciences in experimental and behavioural analysis. Chanel and Chichilnisky (2013) conducted an interesting experiment asking the same question twice, the first time in 1998 and the second time in 2009: “Imagine that you are offered the opportunity to play a game in which you must choose and swallow one pill out of 1 billion identical pills. Only one contains a lethal poison that is sure to kill you, all the other pills being ineffective. If you survive (i.e., you swallow one of the 999.999.999 ineffective pills), you receive a tax-free amount of 152.450 €. Are you willing to choose one pill and to swallow it?”

As mentioned, the experiment was repeated after eleven years, and the reasons behind the variation in the answers were also investigated (i.e., whether the individuals were rich or poor, educated or not, had children or not, etc.). Straightforwardly, the results of the experiment contradicted EU theory, because the individuals interviewed overestimated the risk. Essentially, this result follows the idea put forward by Posner (2004) on catastrophes, which are events with "a very low probability of materializing, but that if (it) does materialize will produce a harm so great and sudden as to seem discontinuous with the flow of events that preceded it". Therefore, a possible and innovative solution proposed by Chanel and Chichilnisky (2013) is to use a function that can respond discontinuously to particular states of the world, in order to provide "The Topology of Fear" (Chichilnisky 2009).
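A back-of-the-envelope calculation shows how starkly the refusals contradict expected-value (and standard EU) reasoning. The value of a statistical life used below is an assumption of this sketch, chosen only for illustration:

```python
p_death = 1e-9          # one lethal pill out of one billion
prize = 152_450         # tax-free prize (in euros) offered in the survey question
vsl = 3_000_000         # illustrative value of a statistical life, an assumption of this sketch

expected_gain = (1 - p_death) * prize - p_death * vsl
print(f"expected monetary value of accepting: {expected_gain:,.4f} EUR")
# Essentially the full prize minus a fraction of a cent: expected-value (and standard EU)
# reasoning says the gamble should be accepted, yet many respondents refuse, i.e. they
# heavily overweight the one-in-a-billion risk.
```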

Such an approach may well be the future of any analysis in this field, even though "standard" EU theory will remain a fundamental basis. The new frontier of economic science, which is breaking the wall of rationality, could provide interesting results regarding human behaviour.

An emblematic real-life example of this is the fear of a terrorist attack, which is usually greater than the fear of a car crash. From a purely rational viewpoint this is absurd; however, the reason for this behaviour is common knowledge: in the first case the victims and perpetrators are easily identifiable, while in the second case they are not. Policymakers act accordingly: after 9/11, air traffic was grounded across the USA, and the resulting increase in car-crash victims was higher than the total number of victims of the World Trade Center attack (Sunstein 2007). Similarly, for what concerns climate change risk, the perpetrator is not identifiable, so the general population tends to underestimate this risk, and economic science has to provide answers also on this point. It is not sufficient to suggest to policymakers what to do; it is also important to explain why there is only partial consensus regarding climate change policy and why climate change denialism exists.

Obviously, experimental and behavioural economics cannot substitute for the usual IAM approach and have not contributed directly to the DT debate but, as in many other fields, they have offered additional tools to analyse catastrophic scenarios and human "irrationality" with respect to climate risk. In saying this, it is fair to recall that, already during the 1990s (although, at that time, this was only a niche literature), some scholars asserted that people do not behave rationally when facing the risk of a catastrophe (Coursey et al. 1993). At first these studies remained marginal, but this is no longer the case. A paper that received much attention from academia was Sunstein (2007), investigating how people and governments manage worst-case scenarios. This paper reported experimental work in which individuals underestimate risk (treating it as having a value of 0) when the probability of a catastrophe is below a certain threshold. Conversely, when this probability exceeds the threshold, people are willing to pay large sums in order to protect themselves from this eventuality. Thus, when people do not ignore a huge risk, they usually overestimate it; this is exactly what Coursey et al. had said in 1993.

In fact, recent experimental papers have provided very interesting insights into worst-case scenarios by relying on the overestimation cited here (Coursey et al. 1993): le Roux (2018) studied how individuals respond to a hypothetical insurance able to safeguard their endowment if a climate change catastrophe occurs. He set up an experimental model using a threshold public goods game similar to that found in Dannenberg et al. (2015). Those who take part in the game have an endowment and are informed that they might be the victims of a climate change catastrophe; the probability of the catastrophe occurring is unknown and, if it occurs, the players will lose their endowment; the players can decide to contribute to an insurance able to protect their endowment from the uncertain catastrophe, but the insurance will be bought only if a sufficient number of players contribute, renouncing a sufficient amount of their initial endowmentFootnote 33. This is why it is called "team" insurance.
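A minimal sketch of such a threshold ("team insurance") game is given below. Group size, endowments, the contribution threshold and the catastrophe probability are all illustrative assumptions of this sketch, not the parameters used by le Roux (2018), and contributions are treated as sunk whether or not the insurance is bought.

```python
import numpy as np

rng = np.random.default_rng(42)

def team_insurance_round(contributions, endowment=100.0, threshold_share=0.5, p_catastrophe=0.3):
    """One round of a threshold 'team insurance' game: the insurance is bought only if total
    contributions reach the threshold; otherwise an uncertain catastrophe may wipe out
    each player's remaining endowment. Contributions are sunk either way."""
    contributions = np.asarray(contributions, dtype=float)
    kept = endowment - contributions
    insured = contributions.sum() >= threshold_share * endowment * len(contributions)
    catastrophe = (not insured) and (rng.random() < p_catastrophe)
    payoffs = np.zeros_like(kept) if catastrophe else kept
    return insured, catastrophe, payoffs

# Two illustrative contribution profiles for a group of six players.
for contribs in ([50, 50, 50, 50, 0, 0], [60, 60, 60, 60, 60, 60]):
    insured, catastrophe, payoffs = team_insurance_round(contribs)
    print(f"contributions = {contribs}: insured = {insured}, "
          f"catastrophe = {catastrophe}, payoffs = {payoffs.tolist()}")
```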

The results of this experiment are quite remarkable: more than half of the sample is willing to donate half of its endowment, and a quarter of the sample is willing to donate more than half of its endowment.

6 Conclusions and Suggestions

The great importance of literature on climate change and on the possibility that we might face catastrophic scenarios is commonly recognized, in both academic and institutional environments.

Indeed, the complexity of climate change and the long-term horizon (i.e., 2100) that needs to be investigated imply many responsibilities for the researchers working in this field. Some studies have been able to provide answers and interesting policy recommendations, such as Stern (2006), while Weitzman (2009) caused a wide academic debate around his DT. The use of fat-tailed probability in Weitzman (2009) attracted much criticism, but it also shed light on different sources of uncertainty and on how to deal with them.

Without a doubt, the Stern Review (Stern 2006) has changed the approach to analysis, especially in UN reporting, which clearly emerges when comparing IPCC Reports released before and after 2006. In the academic environment, the Stern Review has promoted an interesting debate around the discount factor and its use in the context of a welfare analysis across generations. The DT has contributed to the discussion by stimulating an analysis of the various settings proposed and of the view behind each assumption and parameter. It seems clear that the debate around the discount factor is not just a technical discussion, but actually entails a deep philosophical question: how important is the welfare of future generations to us? Ultimately, the DT is valuable from a theoretical perspective and, although it is useless in providing policy recommendations, it may help researchers ponder some crucial aspects of climate policy.

Recent innovations in the economic sciences have affected environmental policy analysis and, more broadly, climate change research. Specifically, experimental and behavioural papers might shed light on new perspectives regarding human irrationality when facing worst-case scenarios. Regional science has a crucial role to play here, in particular thanks to its interdisciplinary essence.

Indeed, it is not possible to carry out any survey on catastrophic climate change without stressing the importance of the entire EU theory and the fact that it has been around for sixty years. When dealing with the possibility of suffering a global disaster, human behaviour displays some features that a stochastic discount factor is certainly unable to capture.

Social psychology and the analysis of the relationships among communities, individuals, families, territories, nature and neighbouring regions are aspects that have not been analysed sufficiently so far.

Thus, experimental and behavioural tools will gain greater importance in the future, as they help understand the actions performed by individuals, insurance companies, and lobbies, as well as the rationale behind intergovernmental agreements.