1 Introduction

Global warming is a reality demonstrated by observational data, and it is widely attributed to anthropogenic climate change caused by greenhouse gas emissions. Despite global efforts to mitigate these emissions over the last two decades, climate change and its impacts show no signs of abating. Mitigation is a fundamental action, but it may require a few decades to become effective. Therefore, at present, “adapting” to the changing climate is urgent. This alternative action is rather natural because evidence of the impacts of climate change on the world can be seen almost every day and will continue to be seen in this era of uncertain climate change.

The word “adaptation” appeared in the English literature in the early seventeenth century (Orlove 2009); however, its concept in the context of climate change began receiving full attention after the Intergovernmental Panel on Climate Change (IPCC) officially defined it as “the adjustment in natural or human systems in response to actual or anticipated climate stimuli or their effects which moderates harm or exploits beneficial opportunities” (IPCC 2001) and the United Nations Framework Convention on Climate Change (UNFCCC) emphasized that “adaptation requires urgent attention and action on part of all countries” (UNFCCC 2002). Since then, numerous studies on climate change adaptation (CCA) have been carried out, ranging from concept (e.g., Smit et al. 2000; Adger et al. 2005; Hallegatte 2009; Orlove 2009; Birkmann 2013) to practice (e.g., Wheeler 2008; Tompkins et al. 2010; Preston et al. 2011; Lesnikowski et al. 2015) and from theory-oriented books to comprehensive national reports (e.g., NRC 2010; Ranger et al. 2010; WUCA 2010).

The scope of CCA covers nearly all the activities of our lives and environments. Resizing suit production, resetting vaccination timing, and changing the crops that we grow are typical CCA examples. CCA can be categorized as autonomous or planned, proactive or reactive, short range or long range, and localized or widespread (Smit et al. 2000; Pelling 2011, Chapter 2). “Water” is one of the most important “climate-sensitive” sectors urgently requiring adaptation, as demonstrated by Tompkins et al. (2010), who reported that the water supply and flood sectors occupy the largest portion (34 %) of 300 CCA activities in the United Kingdom. Policies, plans, and projects that are created or improved to cope with the anticipated water resources problems caused by climate change are the key forms in which CCA activities are carried out. The agents in charge of CCA include national and local governments, the private sector, community organizations, and individuals.

The success of CCA depends not only on which option is selected but also on when, where, and how the selected option is implemented. Therefore, in the CCA process, we are often at a crossroads, and a rational “decision-making (DM)” theory is required. Although conventional decision-making theories have been developed in the past, adapting to climate change is a new challenge that lies beyond the realm of classical decision-making theories. This is mainly because climate change is characterized by “nonstationarity” and a large, sometimes deep, degree of uncertainty. It should be noted that a time series is nonstationary when its key statistics, such as the mean and variance, are functions of time, while deep uncertainty implies that a future event and its likelihood of occurrence are unknown. For example, the “average” temperature of the earth has been rising (i.e., indicating nonstationary characteristics), but its future projections relevant to DM are highly heterogeneous in space and time (i.e., indicating deep uncertainty). Therefore, successful decision making for CCA (DMCCA) should consider these two attributes effectively in its process.
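As a simple illustration of the nonstationarity referred to above, the following minimal sketch (using a hypothetical temperature anomaly series, not data from this chapter) shows how the mean of successive 20-year blocks drifts upward when a trend is present:

```python
# Illustrative sketch (not from the chapter): a series with a warming trend
# has block means that drift with time, i.e., it is nonstationary.
import numpy as np

rng = np.random.default_rng(seed=1)
years = np.arange(1960, 2021)
# Hypothetical annual temperature anomalies: linear trend plus noise
anomalies = 0.02 * (years - years[0]) + rng.normal(0.0, 0.15, size=years.size)

window = 20
for start in range(0, years.size - window + 1, window):
    block = anomalies[start:start + window]
    print(f"{years[start]}-{years[start + window - 1]}: "
          f"mean={block.mean():+.2f}, var={block.var():.3f}")
# The block means increase from one window to the next, which is the
# signature of a nonstationary mean referred to in the text.
```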

Recent studies have demonstrated that DMCCA should be based on “robust” and “adaptive” concepts to capture the nonstationarity and deep uncertainty that characterize climate change. A DM method that favors “robustness” performs satisfactorily over a wide range of plausible future scenarios, in contrast to methods that focus on optimality and perform best only for a few scenarios. When the objective is robustness, DMCCA alternatives that dominate other alternatives for a few scenarios but do not function properly for the others should be avoided. On the other hand, an “adaptive” DM method refines its decision iteratively by continuously monitoring and learning from its previous outcomes and new information. In normal decision-making situations, such an iterative and continual process is seldom worthwhile because it requires considerable money, time, and effort. Nevertheless, robust and adaptive decision-making methods are needed to adapt to changing climates and absorb a wide range of climate conditions. The word “flexible,” which means keeping options open (Walker et al. 2013), has been widely used as a substitute for both “robust” and “adaptive,” as robust decisions should be flexible over a wide range of scenarios and adaptive decisions should be flexible over the time horizon. The “flexible adaptation pathways” slogan of New York City (NPCC 2010) is a good example adopted in practice. Other expressions can be used similarly for the above three words; for instance, Hallegatte (2009) recommended no-regret, reversible, soft, safety-margin, or faster-rotating strategies for CCA.

As DMCCA refers to an entire process rather than only to an outcome, a well-designed framework embedding the robust and adaptive concepts is necessary to effectively support decision makers. The report published by the United Kingdom Climate Impacts Programme (UKCIP) entitled “Climate adaptation: risk, uncertainty and decision-making” (hereafter called RUD; Willows and Connell 2003) first proposed a comprehensive and systematic framework for DMCCA, which consists of eight stages: (1) identify problems and objectives, (2) establish decision-making criteria, (3) assess risk, (4) identify options, (5) appraise options, (6) make decisions, (7) implement decisions, and (8) monitor, evaluate, and review. This staged framework is supported by a web-based tool called “the UKCIP adaptation wizard” (http://www.ukcip.org.uk/wizard/#.VehXyU3ot1M). More recently, IISD (2009) and NRC (2010) proposed similar frameworks; the former focuses on a policy-making process, while the latter focuses on an adaptation planning process. The contents of this chapter cover the entire DMCCA framework but focus primarily on stage 5 of the RUD report, i.e., more specifically on how to quantitatively generate a good decision for a given problem. In particular, three DMCCA theories developed during the last decade, the robust decision making (RDM), real option analysis (ROA), and dynamic adaptive policy pathways (DAPP) approaches, are introduced, and their applications in the field of water resources are discussed.

2 Classical Decision-Making Theory

2.1 Concept and Basic Structure

Water resources planning and management problems have been favorite examples in DM theories. Concrete mathematical descriptions and various practical applications can easily be found even in textbooks on water resources systems (e.g., Loucks et al. 2005). This section revisits the classical DM theory and methodology, not only because they offer the fundamentals for the advanced DM theories and methods introduced in the following sections but also because they have often been revised or combined with advanced DM methods for CCA.

In general, decision problems consist of “decision” and “state” variables, which implies that a decision is made as a function of a certain state. For instance, determining the quantity of water to be supplied to a city for a future period depends on the quantity of water that will be available during the planning period and the quantity of water currently secured for the city. In this case, the quantity of water to be supplied is the decision variable, while the current and future estimates of water availability are the state variables. In general, these variables can be either continuous or discrete; however, in almost all examples in this chapter, the decision variable is represented by “alternatives,” and the state variable is expressed as “scenarios.” Table 8.1 shows an example of a DM problem involving three alternatives and three scenarios. For example, we can consider a hypothetical case where three water supply facilities (such as expanding the capacity of an existing dam, constructing a new desalination plant, and reinforcing water conservation programs) are reviewed as alternatives under three climate scenarios (such as wet, medium, and dry futures).

Table 8.1 Example of decision alternative, state scenario, and performance criterion of a DM problem

A combination of each decision alternative and each state scenario produces a unique “outcome” (i.e., each cell in Table 8.1), which should be measured with a “performance criterion.” The ultimate purpose of a decision analysis is to choose a preferred alternative by comparing the performance criterion values of the candidate alternatives. The common choice for the performance criterion is a monetary valuation; typical examples are the “net present value” (or “benefit-cost ratio”) of the well-known cost-benefit analysis (CBA). Table 8.1 also shows a monetary valuation example (called the payoff matrix), where the value of each cell indicates the economic benefit of each water supply alternative if each scenario occurs. However, real decision problems often involve various types of nonmonetary and intangible measures (for instance, the amount, duration, or frequency of water deficits, and the aesthetic value of water). In contrast to CBA, a “cost-effectiveness analysis” (CEA) can handle some nonmonetary measures because it does not evaluate the benefits of alternatives in monetary terms; instead, it chooses the least-cost alternative that meets a prespecified benefit target. The performance criterion is sometimes converted to an “objective function” so that the problem can be solved mathematically.

The methodology for choosing a preferred alternative depends on the “goal” a decision maker pursues. Most decision problems aim at minimizing loss or maximizing benefit. The cells in the fifth column of Table 8.1 show the “expected” benefits of the alternatives over the three scenarios, and Alternative 2 is chosen because it has the maximum average value. Such a DM problem is traditionally well structured with “optimization” theory, which basically consists of an objective function, constraints, and decision variables. Both CBA and CEA can be categorized as (constrained) optimizations. Real optimization problems usually involve nonlinear, non-concave functions with huge sets of decision and state variables and uncertainties; however, various solution search algorithms are available, including the linear programming, dynamic programming, nonlinear programming, and genetic programming methods that have been developed over the last 60 years.

However, one may also pursue a “safer” goal than optimization, especially when the stakes of losing are very high. The last column in Table 8.1 lists the maximum values of the individual rows, and the minimum of these maximums is given by Alternative 1. This “minimax” approach based on benefit values may be a “pessimistic” (or “conservative”) strategy, but it has the disadvantage of considering only the extreme payoff. An approach that considers all payoffs is the minimax “regret” (Savage 1954), which is illustrated with the values in parentheses in Table 8.1. The regret implies opportunity loss and is defined as the difference between each payoff and the best possible payoff in a column. In Table 8.1, minimizing the maximum regret results in the “best of the worst” alternative, i.e., Alternative 1. The regret concept has been favorably adopted in RDM theory (Sect. 8.3), where an alternative that performs reasonably well over all scenarios is preferred to an optimal alternative that performs best for a certain group of scenarios but very poorly for the others. Stevenson and Ozgur (2007) compared three alternative choice strategies (a numerical sketch follows the list):

  • The maximin strategy (Wald 1949) is a conservative approach. It consists of identifying the worst (minimum) payoff for each alternative and selecting the alternative with the best (maximum) of these worst payoffs. In effect, this strategy sets a lower limit on the payoff: the actual payoff cannot be less than this amount.

  • The maximax approach is the opposite of the maximin approach. The best (maximum) payoff for each alternative is identified, and the alternative with the best of these maximums is selected.

  • The minimax regret approach (Savage 1954) minimizes the worst-case regret. In this approach, the regret is defined as the difference between the payoff actually obtained and the payoff that would have been obtained had the best course of action for the realized scenario been chosen; the regret is therefore zero when the best option was chosen (Loomes and Sugden 1982).
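The three strategies above can be summarized in a few lines of code. The following minimal sketch uses a hypothetical benefit payoff matrix (rows are alternatives, columns are scenarios); the numbers are illustrative and are not those of Table 8.1:

```python
# Hypothetical payoff matrix for three alternatives under three scenarios
import numpy as np

payoff = np.array([[50., 45., 40.],    # Alternative 1
                   [70., 40., 20.],    # Alternative 2
                   [60., 42., 30.]])   # Alternative 3

maximin = payoff.min(axis=1).argmax()          # best of the worst payoffs
maximax = payoff.max(axis=1).argmax()          # best of the best payoffs
regret = payoff.max(axis=0) - payoff           # column-wise opportunity loss
minimax_regret = regret.max(axis=1).argmin()   # smallest worst-case regret

print("maximin choice:        Alternative", maximin + 1)
print("maximax choice:        Alternative", maximax + 1)
print("minimax regret choice: Alternative", minimax_regret + 1)
```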

In several real problems, there exist multiple performance criteria, some of which conflict with others. To handle multiple criteria in decision problems, multi-criteria analysis (MCA) or multi-criteria decision analysis (MCDA) has been widely used because it considers both quantitative and qualitative data in prioritizing adaptation strategies. All MCA approaches make the options and their contributions to the different criteria explicit, and all require the exercise of judgment. However, they differ in the manner in which they combine tangible and intangible data. Formal MCA techniques usually provide an explicit relative weighting system for the different criteria. The main role of MCA is to resolve the difficulties that human decision makers face in handling large amounts of complex information in a consistent way. MCA can thus identify a single most preferred strategy, rank or outrank all strategies, shortlist a limited number of strategies for subsequent detailed appraisal, or distinguish acceptable from unacceptable possibilities (Dodgson et al. 2009). In this way, MCA provides a systematic framework for assessing and scoring CCA strategies for a wide range of climate change scenarios and decision criteria, which may be expressed in physical, monetary, or qualitative terms. The weights on criteria and scenarios can be derived through stakeholder participation or expert involvement (Werners et al. 2013; Kim and Chung 2015).
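As a minimal illustration of the weighted scoring step common to most MCA techniques, the following sketch normalizes hypothetical criterion scores and combines them with assumed weights; the strategies, criteria, and weights are invented for illustration and are not drawn from the studies cited above:

```python
# Weighted-sum MCA sketch: normalize criterion scores to [0, 1], flip the
# cost criterion, and aggregate with (assumed) stakeholder-derived weights.
import numpy as np

# rows = adaptation strategies, columns = criteria (cost, reliability, ecology)
scores = np.array([[120., 0.90, 3.0],
                   [ 80., 0.75, 4.0],
                   [150., 0.95, 2.0]])
benefit_criterion = np.array([False, True, True])   # cost is to be minimized
weights = np.array([0.5, 0.3, 0.2])                 # e.g., from an AHP survey

norm = (scores - scores.min(axis=0)) / (scores.max(axis=0) - scores.min(axis=0))
norm[:, ~benefit_criterion] = 1.0 - norm[:, ~benefit_criterion]  # flip cost
overall = norm @ weights
print("overall scores:", overall.round(3),
      "-> preferred strategy:", overall.argmax() + 1)
```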

Ranger et al. (2010), Watkiss and Hunt (2013), and Kunreuther et al. (2014) concisely summarized the classical DM theory and methodology and discussed their implications for CCA, focusing on three traditional methods, CBA, CEA, and MCA. Please refer to these references for more details.

2.2 Handling Uncertainty and Scenario-Based Approach

In general, uncertainty makes DM problems difficult and complex. Most classical DM methods have been developed under the assumption that the probabilities of the random variables are well identified. Uncertainty is often captured using multiple scenarios: a specific DM method is run for the individual scenarios independently, and the calculated outcomes are combined using the given probabilities. It should be noted that the “implicit” optimization approach utilizes the same concept as a scenario-based DM approach, while in an explicit optimization approach (e.g., stochastic dynamic programming), uncertainty is expressed with an explicit formula with respect to probability rather than through scenarios.

Table 8.1 illustrates a very simple example of a scenario-based DM approach, whereas problems that involve a sequence of decisions are best handled using decision trees (such as Figs. 8.3, 8.4, and 8.5 in Sect. 8.4). The fourth column in Table 8.1 assumes that all scenarios are equally likely to occur; scenario probabilities should be updated as new information becomes available, which may consequently change the choice of alternative through a new weighted average. Stevenson and Ozgur (2007) also described two criteria that complement the maximin, maximax, and minimax regret approaches (a numerical sketch follows the list):

  • The Hurwicz criterion, also referred to as the weighted average criterion, offers a compromise between the maximax and maximin criteria. This approach requires the decision maker to specify a degree of optimism in the form of a weighting coefficient. Possible values range from zero to one, with values closer to one indicating a more optimistic decision maker (Hurwicz 1945).

  • The equal likelihood criterion offers a method that incorporates more of the available information. This approach treats each scenario as equally likely to occur and focuses on the average payoff of each row, selecting the alternative that has the highest row average in the decision matrix.
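The following short sketch applies the two criteria above to the same kind of hypothetical payoff matrix used earlier; the optimism coefficient and the payoff values are illustrative assumptions:

```python
# Hurwicz and equal likelihood criteria on a hypothetical payoff matrix
import numpy as np

payoff = np.array([[50., 45., 40.],
                   [70., 40., 20.],
                   [60., 42., 30.]])

alpha = 0.4  # degree of optimism chosen by the decision maker (0 = pessimistic)
hurwicz = alpha * payoff.max(axis=1) + (1.0 - alpha) * payoff.min(axis=1)
equal_likelihood = payoff.mean(axis=1)   # equal weight on every scenario

print("Hurwicz values:         ", hurwicz, "-> Alternative", hurwicz.argmax() + 1)
print("equal-likelihood values:", equal_likelihood.round(2),
      "-> Alternative", equal_likelihood.argmax() + 1)
```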

As mentioned in the previous section, the minimax regret approach (as well as the maximax and maximin approaches) does not require any probability information and thus could be suitable for decision problems relevant to CCA, where probabilities are not well defined.

For the past 30 years, the scenario-based approach has received much attention because a large number of climate scenarios (e.g., those based on the SRES of AR4 and the RCPs of AR5) have been developed using relatively precise monitoring data and high-performance computer simulations. In general, a climate change impact assessment is initiated by feeding these climate scenarios into a chain of general circulation models, downscaling methods, hydrological models, and flood or drought vulnerability models. Most previous studies on CCA that aimed at reducing the projected vulnerability have used this “predict-then-act” approach to predict the consequences of adaptation strategies and recommend an optimal response, that is, an action that is better than the rest. This “top-down” approach seems natural and conceptually straightforward, but it must address the uncertainty inherent in the process of selecting CCA strategies by estimating the likelihood that different future scenarios will occur (Lempert and Schlesinger 2000). The probability-based approach has therefore been useful for understanding comprehensive future climate change patterns and guiding specific adaptation policies, although the identification of reasonable scenario probabilities has been a key issue (Schneider 2001). However, Lempert et al. (2003) argued that probabilities should be used with scenarios only if they contain solid information, while Moss and Schneider (2000) emphasized probabilities based on subjective judgment. A few IPCC contributors have initiated a process to guide its assessment report toward characterizing uncertainties with probability distributions that represent the consensus of the scientific community (Giles 2002), while IPCC (2005) warned that the community may converge to an expressed value and become overconfident. These controversies over estimating and using scenario probabilities motivated the development of the RDM theory, which is addressed in Sect. 8.3, where the limitations of the predict-then-act approach are also discussed.

3 Robust Decision Making

3.1 Description

As mentioned in Sect. 8.1, RDM focuses on the robust perspective, as its name indicates. The alternative that RDM chooses for a decision problem should perform reasonably well over a wide range of uncertainties. In general, RDM relies on three key components: (1) multiple scenarios of plausible future climates (Lempert et al. 2003), (2) a robustness criterion, and (3) one of the following approaches: (a) MCA or MCDA under complete uncertainty (Ranger et al. 2010; Chung and Kim 2014; Kim and Chung 2015), (b) assess risk of policy (Lempert et al. 2004; Dessai 2005; Dessai and Hulme 2007), (c) vulnerability and robust response (Lempert et al. 2013; Bloom 2014), and (d) decision scaling. Regarding the first key component, similar to traditional multi-scenario methods, RDM characterizes uncertainty by considering multiple future scenarios. In some cases, these are represented by multiple future scenarios of the world, such as the SRES scenarios of IPCC AR4 and RCP2.6, 4.5, 6.0, and 8.5 of IPCC AR5. RDM can also incorporate probabilistic information in a quantitative manner, but it rejects the view that a single joint probability distribution is the best description of a deeply uncertain future. Rather, RDM uses ranges or, more formally, sets of plausible probability distributions to describe deep uncertainty. For the second component, RDM uses a robustness rather than an optimality criterion to assess an adaptation strategy, whereas the traditional subjective utility framework ranks alternative decision options based on the best estimated probability distributions. There exist several definitions of robustness, but all incorporate some type of satisficing criterion. For instance, a robust strategy can be defined as one that performs reasonably well compared to the alternatives across a wide range of plausible future scenarios (Lempert et al. 2006). Lastly, RDM employs various analysis frameworks to characterize uncertainty and to help identify and evaluate robust strategies; they are summarized in the following sections. For the strengths and weaknesses of these approaches, refer to Hall et al. (2012), Hallegatte et al. (2012), and Weaver et al. (2013).
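As a minimal sketch of a satisficing robustness criterion of the kind described above, the following example scores each strategy by the fraction of an assumed scenario ensemble in which it meets a performance threshold; the strategy names, performance distributions, and threshold are hypothetical:

```python
# Satisficing robustness: fraction of plausible scenarios in which each
# strategy meets a user-set performance threshold (values are illustrative).
import numpy as np

rng = np.random.default_rng(seed=7)
n_scenarios = 1000                       # e.g., sampled climate/demand futures
# Hypothetical performance (say, supply reliability) of three strategies
performance = {
    "expand dam":   rng.normal(0.92, 0.05, n_scenarios),
    "desalination": rng.normal(0.90, 0.02, n_scenarios),
    "conservation": rng.normal(0.88, 0.08, n_scenarios),
}
threshold = 0.85                         # minimum acceptable reliability

for name, perf in performance.items():
    robustness = np.mean(perf >= threshold)
    print(f"{name:12s}: meets the threshold in {robustness:.0%} of scenarios")
# A robust choice keeps this fraction high even if another strategy has a
# better mean performance, which is the satisficing idea described in the text.
```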

3.2 Fuzzy MCDA-Based RDM Approach for Sustainability

MCA has high relevance for CCA (Werners et al. 2013). However, a general disadvantage of MCA is that scoring and weighting can be quite subjective, influenced by the stakeholders and decision makers involved in the DM process. Therefore, recent MCA studies have applied fuzzy set theory to account for the uncertainty of scoring and weighting. Furthermore, various variants of MCA methods have been proposed to derive robust priorities from several uncertain scenarios, such as TOPSIS coupled with the minimax regret approach (Kim and Chung 2014). In addition, several methods, such as entropy, principal component analysis, the Delphi technique, and the analytic hierarchy process (AHP), have been introduced to derive subjective and objective weights (Porthin et al. 2013; Kim and Chung 2015), and sensitivity and uncertainty analyses have been performed to obtain the final decision (Hyde et al. 2005; Kang et al. 2013; Song and Chung 2016). Several combinations have now emerged that address part of the deep uncertainty in robust prioritization, such as the minimax regret approach with TOPSIS (Kim et al. 2015), fuzzy AHP (Sanneh et al. 2014), fuzzy TOPSIS (Chung and Kim 2014; Kim et al. 2015), and fuzzy VIKOR with entropy (Kim and Chung 2015).

In classical decision-making problems, the ratings and weights of the criteria are assumed to be known precisely, whereas in the real world it is unrealistic to assume that the knowledge and representations of a decision maker or expert are precise. Some criteria are difficult to define or measure because of their inherent vagueness and complexity (Ducey and Larson 1999). The effectiveness of adaptation strategies also varies with the climate change scenarios used, as shown in Fig. 8.1. Because the evaluation of alternatives must handle the imprecision of the established criteria and estimated performances, a fuzzy DM model is necessary to address qualitative (unquantifiable or linguistic) or incomplete information (Phillis and Andriantiatsaholiniaina 2011). Accordingly, most widely used MCA methods have been extended to solve problems in a fuzzy environment and applied to practical decision problems (Carlsson and Fullér 1996; Greco et al. 1999; Liu 2007; Vadrevu et al. 2010; Turskis and Zavadskas 2011). On this basis, Chung and Kim (2014) developed a new analytical robust prioritization framework for CCA strategies by using fuzzy set theory, multiple climate change scenarios, various MCA techniques, sustainability concepts, and strategies for DM under complete uncertainty.

Fig. 8.1 Decadal variations of alternative evaluation index for RCP4.5 (Jeon 2013)
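To make the fuzzy representation concrete, the following minimal sketch shows how a triangular fuzzy number (lowest, most likely, highest) can encode an imprecise weight or rating and how a crisp score can be recovered by a simple centroid defuzzification; the linguistic terms and numbers are illustrative assumptions, not values from the studies cited above:

```python
# Triangular fuzzy numbers (l, m, u): approximate arithmetic and defuzzification
def fuzzy_multiply(a, b):
    """Approximate product of two triangular fuzzy numbers (l, m, u)."""
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def fuzzy_add(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def defuzzify(a):
    """Centroid (average of l, m, u) of a triangular fuzzy number."""
    return sum(a) / 3.0

# Linguistic weights and ratings expressed as triangular fuzzy numbers
weight_social = (0.2, 0.3, 0.4)          # "moderately important"
weight_economic = (0.4, 0.5, 0.6)        # "important"
rating_social = (5.0, 7.0, 9.0)          # "good"
rating_economic = (3.0, 5.0, 7.0)        # "fair"

fuzzy_score = fuzzy_add(fuzzy_multiply(weight_social, rating_social),
                        fuzzy_multiply(weight_economic, rating_economic))
print("fuzzy score:", fuzzy_score,
      "-> crisp score:", round(defuzzify(fuzzy_score), 2))
```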

Kim and Chung (2015) used the VIKOR method, which seeks sustainable compromise solutions, to suggest a set of adaptation solutions based on the regret aversion of decision makers, so that the unknown preferences of the decision makers can be considered. The fuzzy concept was introduced to represent the uncertainties of future climate change scenarios and of the expert valuations of the relative importance of the decision criteria, which consist of social, economic, and environmental factors for sustainability. Furthermore, the study assigned an individual importance to each climate change scenario, to account for differences in the information significance of the scenarios, by using an objective weight, specifically a Shannon entropy-based weight.

To reflect the marginal utility of allocation capacity, Kim et al. (2015) developed an iterative framework for robust reclaimed wastewater (RWW) reuse allocation that considers several climate change scenarios. Based on the alternative evaluation index (AEI), which includes various social, economic, and environmental criteria for sustainability (Chung and Lee 2009), the concept of the incremental AEI (IAEI) was introduced to rank the best site by incrementally allocating facility capacity for RWW reuse to sites under two climate change scenarios. The minimax regret strategy was employed to consider the uncertainty inherent in the climate change scenarios. The resulting robust ranking from the IAEIs was used to determine the allocation of a unit water quantity in a given iteration, and the iteration continued until the total allocated water quantity reached the maximum available facility capacity for RWW reuse. The logic of such an iteration is sketched below.
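The following compressed sketch illustrates the iterative idea just described under simplifying assumptions: at each iteration, one unit of capacity is assigned to the site whose incremental benefit has the smallest maximum regret across two scenarios. The benefit values, the capacities, and the crude discount used to stand in for diminishing marginal benefit are hypothetical, not the actual IAEI calculations of Kim et al. (2015):

```python
# Iterative, regret-based allocation sketch (hypothetical numbers)
import numpy as np

# incremental benefit of giving the *next* unit to each site, per scenario
# rows = sites, columns = climate scenarios
incremental = np.array([[8., 3.],
                        [6., 6.],
                        [4., 7.]])
capacity_left = np.array([2, 2, 2])   # units each site can still absorb
total_units = 4
allocation = np.zeros(3, dtype=int)

for _ in range(total_units):
    available = capacity_left > 0
    best = incremental[available].max(axis=0)             # best available per scenario
    regret = best - incremental                           # per-scenario regret
    regret[~available] = np.inf                           # exclude exhausted sites
    site = regret.max(axis=1).argmin()                    # minimax-regret site
    allocation[site] += 1
    capacity_left[site] -= 1
    incremental[site] *= 0.8   # crude stand-in for diminishing marginal benefit

print("final allocation per site:", allocation)
```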

In addition, the RDM approach coupled with recent MCA tools can support the development of realistic robust solutions for a rapidly changing climate while considering sustainability, vulnerability, and the views of stakeholders or decision makers. Several MCA techniques have been developed to consider various types of uncertainty in the selection of criteria, the quantification of weights, the derivation of performances, and the appraisal of options. Thus, an MCDA method can be selected according to criteria such as the type of data (quantitative, qualitative, or both), the use of weights, the use of subjective and objective weights, the derivation of outranking, incomplete (or complete) information, and additional analyses such as sensitivity and uncertainty analyses. Critically, MCDA can reasonably handle the uncertainty of large data sets, which is prevalent in the field of CCA strategy determination.

3.3 Assess-Risk-of-Policy Approach

Several researchers have proposed robust strategies for a wide range of plausible climate change scenarios. Dessai and Hulme (2004) reviewed three critical questions within the context of the predict-then-act approach: (1) Why might we not need probabilities of climate change? (2) What are the problems in estimating probabilities? (3) How are researchers estimating probabilities? They suggested that the answers depend on the goals and motivation of the policy analysis, the unit of analysis, the timescale, and the training of the analyst. That is, in the context of climate change, the conditional and provisional approach faces various problems in deriving the probability density functions of future climate variables. Furthermore, Lempert et al. (2004) noted that the predict-then-act approach has often been used successfully for CBA, but that climate change violates its assumptions as follows. First, climate change is associated with radically diverse decision contexts, geographies, and timescales. Second, climate change is closely related to conditions of deep uncertainty, where decision makers do not know or cannot agree on (1) the appropriate models to describe the interactions among the variables of a system, (2) the probability distributions to represent uncertainty about key parameters in the models, or (3) how to value the desirability of alternative outcomes.

Therefore, the assess-risk-of-policy framework has been proposed to develop CCA strategies that perform satisfactorily while being maximally insensitive to uncertainty. The performance of the selected options should remain stable when the negative impacts arising under a large number of plausible climate change scenarios from various GCMs and regional circulation models (RCMs) need to be reduced; such strategies can be called well hedged in terms of risk assessment. In spite of these strengths, this framework involves several subjective decision components, such as the choice of the simulation models and of the values of their input parameters, because the selected models and their parameters can critically affect the responses of the adaptation options under all scenarios.

In some simplified cases, where a limited set of plausible scenarios is accepted or the significant features do not vary widely, the predict-then-act approach will be more effective and reassuring for decision makers. However, because even the climate change scenarios of AR5 cannot predict future climatic variables, such as rainfall and temperature, with consistent characteristics, such limited scenarios and simple patterns might not be realistic anywhere or at any time. Therefore, this approach should carefully consider the following question: “What plausible models, parameter values, and alternative strategies have been neglected that might make a proposed robust strategy vulnerable?”

Whereas the predict-then-act approach identifies uncertainties in plausible forecasted scenarios and strong candidate strategies that are generally agreed upon, the assess-risk-of-policy framework can suggest implicit criteria that help analysts, engineers, and scientists reduce the uncertainty as much as possible. The application of these criteria might cause the analysis to identify the uncertainties or to focus on those parts of the problem where the uncertainties are most precisely characterized (Metlay 2000). That is, this approach lays out the probable vulnerabilities of the selected options and helps decision makers select a strategy with the most acceptable vulnerabilities. In the end, the assess-risk-of-policy approach is much more subjective because it forces decision makers to choose strategies having relatively low uncertainties and high performances in the vulnerable areas and periods. The predict-then-act approach, which takes less time and uses fewer human and physical resources, can be more useful if it is improved to produce highly precise forecasts over a longer period. However, in situations where the future is almost completely uncertain and the negative impacts are unknown, or where it is difficult to articulate hedging strategies for a wide range of climate change scenarios and vulnerabilities, the assess-risk-of-policy framework can produce more realistic and reasonable decisions.

3.4 Vulnerability-and-Response-Option Approach

Recently, a vulnerability-and-response-option analysis framework was developed to characterize uncertainty and to help identify and appraise robust adaptation strategies with the constructive participation of stakeholders, following this procedure: (1) structure the problem, (2) choose a candidate strategy, (3) evaluate the strategy against a large ensemble of scenarios, (4) characterize vulnerabilities, and (5) identify and assess options for ameliorating those vulnerabilities (Lempert and Groves 2010). In the first step, the engineers set up the problem structure, and the analysts then collect the relevant data and select simulation models that can predict the performance and status of the strategies at an acceptable level. This approach is fundamentally different from the predict-then-act approach, the critical distinction being how the uncertainty of probabilistic forecasts or multiple scenarios is treated. In the second step, a few candidate strategies are considered, including current government policies or new policies proposed by key stakeholders through public communication; alternatively, the optimum strategy from a traditional subjective utility analysis can be selected as the initial candidate.

The third and fourth steps characterize the vulnerabilities of the candidate strategy. These vulnerabilities identify the regions and scenarios where the candidate strategy fails to meet the performance targets of the decision makers. To identify these vulnerabilities, the candidate is appraised against many plausible climate change scenarios, each representing a different combination of the key uncertain climatic factors. In this framework, statistical techniques based on the concept of robustness are frequently used to find the scenarios in which performance is poor from the decision makers’ point of view. If the performance of a strategy frequently violates the critical thresholds of the criteria determined by stakeholders and experts, the strategy is removed from the candidate list. In addition, cost will be the most important criterion if decision makers have budget limits: if the cost exceeds the threshold, the strategy is not considered further. In such cases, a combination of strategies that is also accepted by stakeholders and decision makers can become a leading candidate when the total cost and the synergy in performance are considered.

In the final step, the framework identifies new candidate strategies that might ameliorate the vulnerabilities of the current candidate. Because these new strategies often involve more or less adaptivity, the analyst should present the associated trade-offs, which decision makers can use to determine whether or not to adopt a new strategy. If a particular strategy is selected, the analysts repeat the process from the second step using this new candidate strategy. Alternatively, if the decision makers or stakeholders select neither the original nor the new candidate strategies on the basis of the trade-off among vulnerability, adaptivity, and performance, the analysis returns to the first step to restructure the decision problem in a way that might yield more desirable robust options for the stakeholders or decision makers (Lempert and Groves 2010).

Other studies (Lempert et al. 2013; Groves et al. 2013; Bloom 2014) applied this vulnerability-and-response-option approach by customizing it to the characteristics of their study areas. This process, as shown in Fig. 8.2, begins with a decision structuring step in which planners define the goals, uncertainties, and policy choices under consideration. Analysts then use computer models to generate a large database of simulation runs in which each case represents the performance of a proposed policy in one plausible future scenario. Computer visualization and statistical analysis of this database help the planners identify clusters of scenarios that illuminate the vulnerabilities of the policies. These scenarios can then help the planners identify potential new ways to address those vulnerabilities and evaluate, through a trade-off analysis, whether these choices are worth adopting. The process continues until the planners settle on a robust strategy (Lempert et al. 2013).

Fig. 8.2 Expanded procedure for the vulnerability-and-response-option approach to generate adaptive strategies with multiple iterations (Bloom 2014)
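A minimal sketch of the “characterize vulnerabilities” step, assuming a toy performance model in place of a real simulation, is given below: a candidate strategy is run over a large scenario ensemble, the failing scenarios are flagged, and the ranges of the uncertain factors in those scenarios are summarized:

```python
# Vulnerability characterization sketch with hypothetical names and data
import numpy as np

rng = np.random.default_rng(seed=3)
n = 5000
# Two key uncertain factors sampled over wide plausible ranges
temp_change = rng.uniform(0.5, 4.0, n)       # degC warming by mid-century
demand_growth = rng.uniform(0.0, 0.4, n)     # fractional demand increase

# Toy performance model of the candidate strategy (stand-in for a simulation)
reliability = 0.98 - 0.03 * temp_change - 0.25 * demand_growth

failing = reliability < 0.85                 # target set by decision makers
print(f"strategy fails in {failing.mean():.0%} of the ensemble")
print("failing cases occur only where:")
print(f"  warming       > {temp_change[failing].min():.1f} degC")
print(f"  demand growth > {demand_growth[failing].min():.0%}")
# These ranges play the role of the decision-relevant scenario clusters that
# tell planners which futures the strategy is vulnerable to.
```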

3.5 Decision Scaling

Traditional approaches, such as the predict-then-act approach, can be characterized as top-down assessments because they start from coarse-resolution climate change projections produced by GCMs and then downscale them. In general, relatively few GCMs are used to estimate future climate scenarios because of the high computational effort and long run times required. Furthermore, the socioeconomic impacts and hydrological components must be analyzed with additional simulations, which are also time-consuming. In addition, the top-down approach has several limitations, including the uncertainty of greenhouse gas emissions, the errors of the simulation models, the differences among GCMs, and the difficulty of covering the full range of plausible future climate scenarios.

In contrast to the top-down approach, McMahon (2007, 2011) linked reservoir reliability with climate conditions in regional studies, providing a basis for a climate scenario assessment that is independent of the complex procedures needed to introduce the influence of climate change into a streamflow time series. This is called a bottom-up approach. It assesses the socioeconomic condition of the study system and identifies the vulnerabilities or risks related to climate change, thereby focusing the analysis on the system or decision without relying on GCM projections. A few studies (Johnson and Weaver 2008; Lempert et al. 2006) used GCMs merely as scenario generators, while other studies (Dessai et al. 2009; Sarewitz and Pielke 2000) did not consider GCM projections at all because they regarded the information underlying climate change scenarios as too uncertain. However, the bottom-up approach has been criticized for not using GCMs at all. Decision scaling addresses this criticism by incorporating GCM projections into the identification of the critical climate conditions obtained from a bottom-up analysis.

The decision scaling method provides the missing link between the insight into vulnerabilities obtained from the bottom-up process and the information available from the climate change projections of GCMs or RCMs. It can therefore identify the types of climate change that would cause critical problems. Because GCM-based climate projections offer a constricted view of possible future climate scenarios (Stainforth et al. 2007), decision scaling avoids using these data to identify vulnerabilities. Instead, the analyst conducts a more extensive vulnerability analysis to understand the sensitivity of the system to changes in climate variables. A climate response function is then derived to express the performance of the system and the relevant planning thresholds in terms of the underlying climate statistics. By describing the system performance thresholds in climatic units, the process empowers the planner to assess climate risks by obtaining and comparing various sources of climate information, which may include, but are not limited to, GCM-based projections. It has been argued that such an approach enables planners either to discount the need for GCMs in cases where the link between system performance and climate variables is found to be weak or to tailor specific decision-critical thresholds to fit the available climate information (Brown and Baroang 2011; Brown and Wilby 2012; Turner et al. 2013).
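The following minimal sketch illustrates a climate response function of the kind used in decision scaling, with a hypothetical response surface standing in for a water-system simulation: performance is evaluated over a grid of temperature and precipitation changes (a climate stress test), and the combinations that cross the planning threshold are reported:

```python
# Climate stress test sketch: evaluate a toy response surface over a grid of
# climate changes and report where the planning threshold is still met.
import numpy as np

delta_T = np.linspace(0.0, 4.0, 9)        # warming in degC
delta_P = np.linspace(-30.0, 30.0, 13)    # precipitation change in percent

def system_yield(dT, dP):
    """Toy response: yield falls with warming and rises with precipitation."""
    return 100.0 + 0.8 * dP - 6.0 * dT

threshold = 80.0                           # minimum acceptable yield
for dT in delta_T:
    # smallest precipitation change at which the system still performs
    ok = [dP for dP in delta_P if system_yield(dT, dP) >= threshold]
    msg = f"{min(ok):+.0f}%" if ok else "never within the grid"
    print(f"dT={dT:3.1f} degC -> acceptable when dP >= {msg}")
# GCM or RCM projections can then be overlaid on this map to judge how likely
# the problematic region of the climate space is, rather than being used to
# drive the analysis from the start.
```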

3.6 Discussion

RDM has recently become one of the most effective tools for finding CCA strategies. Other variants, such as ROA and DAPP, originated from the concept of RDM, which uses multiple views of future climate scenarios and robustness criteria as its basic concepts for analysis. The four typical approaches presented in this section have been developed and improved through many real applications around the world over the past 10 years. Nevertheless, more innovative approaches that cannot be characterized by these four classifications, or improved versions of them, have appeared and will continue to be developed to determine robust strategies using various DM techniques.

Recent studies show that RDM has three representative features. First, stakeholder participation and public involvement are important in the RDM process. The vulnerability-and-response-option approach and decision scaling have been improved to include the opinions of various stakeholders, including residents and decision makers, in both quantitative and qualitative ways. MCDA-based RDM has been extended to consider the preferences of stakeholders and to enhance the importance of robustness criteria. Public hearings and surveys for collecting opinions, workshops for capacity building, and public meetings for conflict resolution and consensus building are therefore required; in severe conflict situations, litigation may even be pursued in the courtroom. Second, uncertainty and sensitivity analyses have been incorporated into the general RDM process to reduce the uncertainties in the climate change scenarios and the DM components. MCDA-based RDM has used fuzzy set theory to include the uncertainties of the weighting values of the decision criteria, the performance values of the strategies, and the importance of the climate change scenarios. Decision scaling also helps derive a division map of the climate space through optimal decisions based on climate response functions of variables such as temperature and precipitation. Third, anthropogenic factors such as social and economic components are included in the RDM process through the concept of vulnerability. The IPCC (2001) conceptualized vulnerability from a system’s perspective: a system is judged to be vulnerable if it is exposed to climate change impacts, if it is sensitive to these impacts, and if it has a low capacity to cope with them. A general conceptual model of vulnerability including social, economic, and environmental components has emerged in the climate change literature, in line with the wider use of the concept (Kelly and Adger 2000; Adger 2006). The MCDA-based RDM, vulnerability-and-response-option, and decision scaling approaches have been extended to combine this vulnerability concept with the original RDM process. These studies follow the bottom-up analysis, which begins with the assessment of the socioeconomic system of interest, followed by the identification of the climate states that affect a decision, and finally the identification of the vulnerabilities or risks related to the climate and the decision criteria (Brown 2013).

One of the main aims of this chapter is to promote the use of the RDM approach in the field, for which the three features mentioned above are necessary. The application of such approaches, incorporating public involvement, uncertainty and sensitivity analyses, and vulnerability, is therefore expected to become prevalent in national and regional water resource plans in the near future.

4 Real Options Analysis

4.1 Description

ROA has also often been referred to as a DM methodology for CCA (e.g., Watkiss et al. 2013). Whereas RDM, described in Sect. 8.3, spreads risk over scenarios, ROA spreads risk over “time” by using “options.” The word “real” is derived from the real assets to which the option pricing theory is often applied for the valuation of investments. Although initially coined by Myers (1977) for the financial market, ROA has recently received attention from engineering decision makers since de Neufville (2003) revisited its concept and merits with an emphasis on flexibility under uncertainty in the design and planning of various engineered projects. With ROA, one can operate each alternative differently; for example, one can abandon, delay, or alter the alternative as time proceeds (i.e., as more information becomes available). Such different operations of an alternative are called “options,” which are rights rather than obligations. It should be noted that much of the literature uses the terms “alternative” and “option” interchangeably; this is not the case in this chapter.

Simply put, ROA is an extension of the classical CBA, the “discounted cash flow” approach, in which a stream of incoming and outgoing cash flows over the lifetime of each alternative is predicted and the most economical (often called the optimal) alternative is chosen. Once the decision is made, the chosen alternative is implemented without any modification until it is completed. In other words, the classical discounted cash flow approach offers only a “now or never” choice without any flexibility over time. In the ROA procedure, however, the chosen alternative can be modified through a set of options. For example, one can delay the construction of a project (alternative) until its justification becomes more concrete, by comparing the cost and benefit of waiting for more information. Moreover, the project can be constructed step by step while watching how the future evolves, rather than all at once. With an option under ROA, an alternative that has been appraised as economically infeasible by the traditional economic analysis may become a feasible solution; this difference is called the “option value.” The crucial quantity in both the discounted cash flow and the ROA approaches is the net present value (NPV), defined as the difference between the present value of the benefits (e.g., the flood damage reduced by a flood mitigation structure) and the present value of the costs (e.g., the installation, maintenance, and operation costs of the structure).
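As a minimal sketch of the NPV calculation referred to above, the following example discounts hypothetical cash flows for a flood mitigation structure (a construction cost in year 0 followed by constant annual net benefits); the cost, benefit, lifetime, and discount rates are illustrative assumptions:

```python
# Discounted cash flow sketch with hypothetical numbers
def npv(cash_flows, discount_rate):
    """Discount a list of annual net cash flows (year 0 first) to the present."""
    return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cash_flows))

initial_cost = -500.0                       # construction cost in year 0
annual_net_benefit = 45.0                   # avoided damage minus O&M, per year
lifetime = 30
cash_flows = [initial_cost] + [annual_net_benefit] * lifetime

print(f"NPV at 5% discount rate: {npv(cash_flows, 0.05):.1f}")
print(f"NPV at 7% discount rate: {npv(cash_flows, 0.07):.1f}")
# The sign of the NPV (and its sensitivity to the discount rate) is what the
# classical "now or never" analysis uses to accept or reject an alternative,
# without the options discussed in the following section.
```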

4.2 Typical Steps

Similar to other DM methods, ROA starts by identifying the problem objectives and constraints, the alternatives and their available options, and the scenarios of the key uncertainties (WUCA 2010). An ROA model that best depicts the decision problem being considered is then chosen from several techniques, such as the Black-Scholes analytical approach (e.g., Lau et al. 2006), the binomial lattice approach (e.g., Michailidis and Mattas 2007), the Monte Carlo simulation approach (e.g., Jeuland and Whittington 2013), and the decision tree approach (Borison and Hamm 2008). The chosen ROA model calculates the financial values of the individual combinations of alternatives and options as a function of time, given monetary information such as price changes. This procedure is repeated for each scenario combination of the key uncertainties, and the expected performance of each option is assessed using the given scenario probabilities. The following paragraphs introduce two ROA examples that illustrate the steps of the binomial lattice and the decision tree approaches, respectively.

The first ROA example tested four flood mitigation planning alternatives for a Korean basin (hereafter called the Korean flood mitigation planning (KFP) example), each of which represents a unique set of structural and nonstructural flood mitigation measures, such as raising existing dams; constructing new dams, wetlands, and retention ponds; and improving reservoir operations (Ryu 2014). One of the final products of this example was the option matrix for each alternative. Figure 8.3 shows examples of the “option matrix,” in which three options, invest (I), delay (D), and abandon (A), are available for each alternative during the so-called investment opportunity period of 30 years, from 2016 to 2045. Figure 8.3 presents an obvious contrast between two of the alternatives. The abandon option first appears in the year 2029 for Alternative 1 (Fig. 8.3a), while for Alternative 4 it appears in the year 2024 (Fig. 8.3b), 5 years earlier than for Alternative 1. Conversely, the invest option is available from the beginning of the investment period (i.e., the year 2016) for Alternative 1 (Fig. 8.3a), while for Alternative 4 it is delayed until the year 2029 (Fig. 8.3b).

Fig. 8.3 Examples of the option matrix in ROA (I invest, D delay, A abandon options)

At each time step of ROA, one can include possible scenarios to reflect uncertainty, and thus ROA can be visualized with a set of decision trees. In the KFP example, ROA was characterized with a binomial decision tree in which only two scenarios are considered at each step, as shown in Fig. 8.4. The initial value \( V(0,0)=V_0 \) at \( t_0 \) can either move up to \( V(1,0)=u\,V_0 \) with probability \( p_0 \) or down to \( V(1,1)=d\,V_0 \) with probability \( 1-p_0 \) at the next period \( t_1 \); in general, assuming constant up and down factors, \( V(i,j)=u^{i-j}d^{j}V_0 \) for node \( (i,j) \) at \( t_i \), where \( i=0,1,\ldots,T \), \( j=0,1,\ldots,i \), and \( T \) is the investment opportunity period. The parameters \( u \), \( d \), and \( p_i \) should be given or estimated to initiate the ROA procedure, which is seldom feasible in most real applications; thus, reasonable approximations are often required. With these parameters, the first task of the ROA procedure is to create the value tree at all nodes, assuming that the alternative being considered is implemented (i.e., invested in).

Fig. 8.4 Example of the binomial tree in ROA
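A minimal sketch of the value lattice described above is given below, assuming constant up and down factors and a hypothetical initial project value; node (i, j) holds the project value after i periods and j downward moves:

```python
# Binomial value lattice sketch: V(i, j) = u**(i - j) * d**j * V0
import numpy as np

V0, u, d = 100.0, 1.2, 0.85     # hypothetical initial value and factors
T = 4                           # investment opportunity period (in periods)

lattice = np.zeros((T + 1, T + 1))
for i in range(T + 1):
    for j in range(i + 1):
        lattice[i, j] = (u ** (i - j)) * (d ** j) * V0

for i in range(T + 1):
    print(f"t{i}:", np.round(lattice[i, : i + 1], 1))
```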

For the other options, such as the abandon and delay options, the values at all nodes are calculated by a similar methodology, but their costs should cover the loss (e.g., the flood damage in this example) incurred when the project alternative being considered is not implemented (i.e., is delayed or abandoned). The maximum of these values over all options at each node is called the “expanded NPV” (ENPV). The difference between the ENPV and the NPV at \( t_0 \) for each alternative is called the “option premium,” which indicates the worth of using the options throughout the entire investment opportunity period. In the KFP example, the NPV obtained using the discounted cash flow approach was positive for Alternative 1 but negative for Alternative 4. In ROA, however, the ENPV was also positive for Alternative 4 because of the high value of its option premium, which results from the use of the options shown in Fig. 8.3, although the ENPV for Alternative 1 remained larger than that for Alternative 4. In other words, both alternatives are economically feasible once adequate options are used. One may therefore choose Alternative 4 instead of Alternative 1 if some of its other benefits, beyond economic feasibility, dominate those of Alternative 1; a typical example of such “other” benefits is that of a more eco-friendly alternative.
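The following much-simplified sketch turns the value lattice of the previous example into an ENPV by backward induction, keeping the best of the invest, delay, and abandon options at every node; the investment cost, salvage value, probability, and discount rate are hypothetical, and the actual KFP analysis is considerably more detailed:

```python
# Backward induction sketch: ENPV and option premium on a binomial lattice
import numpy as np

V0, u, d, T = 100.0, 1.2, 0.85, 4
invest_cost, salvage, p, r = 95.0, 0.0, 0.55, 0.05   # assumed inputs

V = np.array([[(u ** (i - j)) * (d ** j) * V0 if j <= i else 0.0
               for j in range(T + 1)] for i in range(T + 1)])

option = np.zeros_like(V)
option[T, :] = np.maximum(V[T, :] - invest_cost, salvage)   # last chance to invest
for i in range(T - 1, -1, -1):
    for j in range(i + 1):
        invest_now = V[i, j] - invest_cost
        delay = (p * option[i + 1, j] + (1 - p) * option[i + 1, j + 1]) / (1 + r)
        option[i, j] = max(invest_now, salvage, delay)       # abandon = take salvage

npv_now = V0 - invest_cost                 # classical "now or never" NPV
enpv = option[0, 0]
print(f"NPV = {npv_now:.1f}, ENPV = {enpv:.1f}, "
      f"option premium = {enpv - npv_now:.1f}")
```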

The second ROA example described in this section is a hypothetical case study (hereafter called the Australian water utility planning (AWP) example) by Borison and Hamm (2008) that illustrates the decision tree approach, which provides more generality and flexibility than the binomial lattice approach but is more difficult to implement in practice. The final option analysis is summarized in Fig. 8.5, where three alternatives (“Dam,” “Desal,” “Mod Desal”) and three uncertainty factors (“Dam Feas,” “Desal Cost,” “Inflows”) are considered for each of four time periods: 2008–2009, 2010–2011, 2012–2013, and 2014–2027. Here, Dam, Desal, and Mod Desal denote three construction alternatives for new assets, namely a dam, a large desalination plant, and a modular desalination plant, respectively, while Dam Feas, Desal Cost, and Inflows represent uncertainties related to dam feasibility, desalination plant construction cost, and catchment inflows, respectively. The objective of the AWP example is to maximize the net value of the water supply provided by portfolios of the above three assets (Borison and Hamm 2008). The AWP example considers eight combinations of the three alternatives by using the “advance” option, which consists of advancing nothing, Dam only, Desal only, Mod Desal only, Dam and Desal, Dam and Mod Desal, Desal and Mod Desal, or all three. Borison and Hamm (2008) also showed how the probability of each uncertainty source was estimated through rigorous sensitivity and uncertainty analyses.

Fig. 8.5 Option analysis structure of the Australian water utility planning example (Borison and Hamm 2008)

It should be noted that there are two differences between the KFP and AWP examples. First, the AWP example considers more than a single source of uncertainty, and some of the uncertainties are not binomial. Second, mainly because of such non-binomial complexities, the AWP example handles the possible alternatives and uncertainties simultaneously, which produces approximately 400 million different paths (Borison and Hamm 2008). The resulting valuation of this option analysis (illustrated by the last bar in Fig. 8.6) is a weighted average over the probabilities of all the paths. The first three bars in Fig. 8.6 show the results of a conventional economic analysis that considers the uncertainties but not the options. Figure 8.6 indicates that the (incremental) option value for the AWP example is roughly $400 million when “option” is compared with “Dam 2008,” the best fixed alternative (Borison and Hamm 2008). It should be noted that the y-axis denotes the incremental value of an alternative or option portfolio over “As Is.”

Fig. 8.6 Option analysis valuation structure of the Australian water utility planning example (Borison and Hamm 2008)

4.3 Discussion

As ROA has been tested and refined in the financial markets for several decades, its potential for application to CCA also seems high, provided that scenario probabilities are well defined and economic data are available. Although still few, the number of ROA applications to DM for CCA in the field of water resources planning is increasing. Lau et al. (2006) explored the potential use of the Black-Scholes model with ROA for urban drainage risk management, ultimately to evaluate the funding strategies of sewerage companies. More recently, Steinschneider and Brown (2012) analytically linked the real option concept to the derivation of reservoir operation strategies as a function of reservoir inflow forecast scenarios and concluded that the real option significantly improves the water supply performance of the case study reservoir system in the United States. The analytical ROA approach is easy to implement, but its output is hard to understand and communicate effectively.

Similar to the KFP example described in this section, Michailidis and Mattas (2007) applied the binomial ROA approach to an irrigation dam in Greece to test three options, namely delaying, doubling, and abandoning the investment, and concluded that the classical discounted cash flow approach could underestimate the value of managerial flexibility under a random future. Jeuland and Whittington (2013) conducted a highly computational ROA study based on Monte Carlo simulation and incorporated option strategies into the design and operation of new dams along the Blue Nile in Ethiopia under climate change uncertainty. They concluded that investment planning based on the real options framework can be robust to bad situations and flexible enough to take advantage of good ones. Simulation-based ROA has rarely been applied to water resources planning; however, a few general-purpose simulation software packages can facilitate its applicability with the use of more flexible options (Borison and Hamm 2008).

As noted in the AWP example described in this section, the ROA procedure can be the most general and powerful when structured as decision trees, which help decision makers conceptualize and visualize adaptation planning (Borison and Hamm 2008). The most recent example of the decision tree approach is Woodward et al. (2014), who applied ROA to flood mitigation planning for the Thames Estuary in the United Kingdom and concluded that the optimal adaptive strategies based on ROA and a multi-objective genetic algorithm dominate deterministic and rigid strategies. Backward-moving dynamic programming is often combined with this decision tree ROA approach to reduce its heavy computational burden. Pol et al. (2014) introduced a probability distribution of water level rise into a dike investment decision problem, solved it within the dynamic programming framework, and concluded that the uncertainty information has a greater impact on the dike heightening strategy than future learning.

In summary, ROA is a well-proven economic analysis theory that is flexible enough to adjust the investment timing as well as the strategy by using options. The decision trees typically generated by ROA help decision makers conceptualize their adaptive planning procedure. Past ROA studies in water resources planning indicate that ROA is most relevant to large, capital-intensive infrastructure investments, often comparing traditional invest-now versus waiting options (Borison and Hamm 2008). In spite of these advantages, ROA is resource intensive and requires well-identified probabilities and economic data, which are often not available in practice.

5 Dynamic Adaptive Policy Pathways

5.1 Description

Coined by Haasnoot et al. (2013), DAPP offers another DM methodology for dealing with different climate change scenarios. In particular, DAPP is based on an adaptive approach that captures flexibility over time, implying that one can choose near-term actions while keeping options open for the future (Haasnoot et al. 2013). The long name results from the combination of two precedent methodologies, “adaptive policy making” (Walker et al. 2001) and the “adaptation pathways” approach (Haasnoot et al. 2011, 2012); the meanings of “dynamic” and “adaptive” are essentially identical here, as Walker et al. (2013) classified DAPP as a “dynamic” robust methodology. Haasnoot et al. (2013) and Walker et al. (2013) further clarified that adaptive policy making was rooted in the “assumption-based planning” methodology (Dewar et al. 1993) and that the “sell-by date” concept in DAPP was adopted from the “adaptation tipping point” methodology (Kwadijk et al. 2010).

Despite this combined history, the essence of DAPP is to create a so-called adaptation pathways map that enables decision makers to transfer to a different planning pathway (i.e., a different action or alternative) in order to reflect changing conditions. As shown in Fig. 8.7, the DAPP adaptation pathways map consists of solid and dashed lines, circles, and vertical bars, which represent an "action effective in all scenarios," an "action not effective in scenario U," a "transfer station to a new action," and the "adaptation tipping point of an action," respectively. It should be noted that scenarios here imply different projections of climate and hydrology that are usually formed as an ensemble. As the x- and y-axes denote time and actions, respectively, each pathway represents a unique sequence of actions over time. A transfer from one action to another is possible only at a tipping point. This pathway concept of DAPP has been adopted from the adaptation pathways methodology (Haasnoot et al. 2013; Walker et al. 2013). It should be noted that a tree created with ROA, described in Section 8.4, provides a few "option" pathways for a given alternative, whereas the DAPP map provides various "action" (i.e., alternative) pathways. Therefore, each option pathway in the ROA tree should be considered as a unique action pathway in the DAPP map.

Fig. 8.7 Example of a DAPP adaptation pathways map (Haasnoot et al. 2013)
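
For readers who prefer a computational view, the following sketch shows one possible way to represent an adaptation pathways map in code and to enumerate action pathways between tipping points. The actions, scenarios, and tipping-point years are invented placeholders and are not taken from Haasnoot et al. (2013).

```python
# Hypothetical actions with illustrative tipping-point years (end of effectiveness)
# under two scenarios; None means the action remains effective up to the horizon.
tipping_point = {
    "current policy":   {"moderate": 2030, "severe": 2025},
    "demand reduction": {"moderate": 2060, "severe": 2040},
    "new reservoir":    {"moderate": None, "severe": 2080},
    "water transfer":   {"moderate": None, "severe": None},
}
horizon = 2100

def pathways(action, scenario, year=2020, path=()):
    """Enumerate action sequences (with transfer years) that reach the planning horizon."""
    path = path + ((action, year),)
    tp = tipping_point[action][scenario]
    if tp is None or tp >= horizon:
        yield path                                   # this pathway reaches the horizon
        return
    for nxt in tipping_point:
        if all(nxt != a for a, _ in path):           # do not revisit an abandoned action
            yield from pathways(nxt, scenario, tp, path)

for p in pathways("current policy", "severe"):
    print(" -> ".join(f"{a} (from {y})" for a, y in p))
```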

5.2 Key Steps

DAPP provides a systematic procedure that consists of ten steps, including the step for developing the adaptation pathways map, as shown in Fig. 8.8. In general, the first half of the procedure follows the adaptation pathways procedure, while most of the second half follows adaptive policy making. In particular, the concepts of contingency actions and triggers, as well as the iterative approaches employed among Steps 3, 4, and 5 and between Steps 9 and 10, have been adopted from adaptive policy making, while the use of transient scenarios and the concept of the pathways map have been adopted from adaptation pathways.

Fig. 8.8 The DAPP procedure (Haasnoot et al. 2013)

Detailed tasks at each step are well described in Haasnoot et al. (2013), which originally proposed DAPP. This section summarizes the key outputs of the individual steps and describes in the following paragraphs how these outputs are related and produced. Among the various outputs, the key outputs of DAPP are the definition of "success" (Step 1), the actions to be listed on the y-axis of the DAPP map as well as the pathway candidates (Step 3), the sell-by date of each action (Step 4), the "promising" pathways and their scorecard (Step 5), the "preferred" pathways (Step 6), the contingency actions and triggers (Step 7), the initial adaptive plan (Step 8), and the continuous monitoring results (Steps 9 and 10).

The success defined in Step 1 is usually expressed with quantitative thresholds, such as the allowable number of failures (e.g., how many water shortages can be allowed?), the allowable duration of failure (e.g., how long can a water shortage be allowed to last?), and the allowable magnitude of failure (e.g., what is the maximum water shortage that can be allowed?). The success definition is used for determining the sell-by date in Step 4, which is defined as the year in which a particular action no longer meets the objective (i.e., exceeds the predefined success threshold). Because the sell-by date depends on the choice of a particular scenario, it is expressed with a box plot, as shown in Fig. 8.9. Most DAPP studies have employed the median or average value from the box-whisker plots. However, using these values may be problematic when the range of the box is large because the median or average sell-by date may not represent reality.

Fig. 8.9 Example of sell-by dates (Haasnoot et al. 2012)
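
The sell-by date computation can be sketched as follows, assuming a hypothetical success definition (a maximum allowable annual water shortage) and invented scenario series; in practice, the shortage series would come from hydrological simulations of the scenario ensemble.

```python
import statistics

# Hypothetical annual water shortage (hm3) simulated for one action under three
# transient scenarios; the numbers are illustrative only.
shortage = {
    "scenario A": {2020: 0, 2030: 2, 2040: 5, 2050: 9, 2060: 14},
    "scenario B": {2020: 0, 2030: 1, 2040: 3, 2050: 6, 2060: 10},
    "scenario C": {2020: 1, 2030: 4, 2040: 8, 2050: 13, 2060: 20},
}
MAX_ALLOWED = 8   # success threshold: shortage above this value counts as failure

def sell_by_date(series, threshold):
    """First year in which the action no longer meets the success definition."""
    for year in sorted(series):
        if series[year] > threshold:
            return year
    return None       # the action stays effective over the whole simulated period

dates = {s: sell_by_date(series, MAX_ALLOWED) for s, series in shortage.items()}
print(dates)                                          # sell-by date per scenario
valid = [d for d in dates.values() if d is not None]
print("median sell-by date:", statistics.median(valid))
```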

Various action candidates corresponding to the predefined objectives can be developed in Step 3, ranging from structural (e.g., raising dam heights, building new facilities, and dredging channel bottoms) and semi-structural (e.g., rainwater harvesting, expanding wetland areas, and changing to porous pavement) to nonstructural measures (e.g., improving dam operations, reducing water demand through education, and installing an early warning system). In contrast, the pathway candidates in Step 3 represent all the combinations of action sequences evolving over time. They should be narrowed down to "promising" pathways (Haasnoot et al. 2013) through an iterative reassessment procedure among Steps 3, 4, and 5. Some actions cannot be combined into a promising pathway because they may be mutually exclusive or their ordering may be illogical. This screening and refinement procedure can be carried out together with stakeholders (Haasnoot et al. 2013). At the end of Step 5, a scorecard, which evaluates individual pathways with a few criteria representing benefit and cost, is generated through quantitative and qualitative vulnerability assessments. Because multiple scenarios exist, the score of each pathway should be calculated as an expected value.
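
One possible, simplified way to implement the screening and scoring logic of Steps 3 to 5 is sketched below; the actions, exclusion rule, benefits, costs, and scenario weights are all hypothetical.

```python
from itertools import permutations

# Hypothetical action set with illustrative per-scenario benefit scores and costs
benefit = {
    "demand reduction": {"moderate": 3.0, "severe": 2.0},
    "new reservoir":    {"moderate": 5.0, "severe": 6.0},
    "water transfer":   {"moderate": 4.0, "severe": 5.0},
    "dredging":         {"moderate": 2.0, "severe": 1.0},
}
cost = {"demand reduction": 1.0, "new reservoir": 4.0, "water transfer": 3.0, "dredging": 1.5}

# Illustrative screening rule: these two actions are treated as mutually exclusive
mutually_exclusive = {frozenset({"new reservoir", "water transfer"})}
scenarios = {"moderate": 0.5, "severe": 0.5}          # assumed scenario weights

def is_promising(pathway):
    return not any(ex <= set(pathway) for ex in mutually_exclusive)

def expected_score(pathway):
    """Expected (benefit - cost) of a pathway over the scenario ensemble."""
    return sum(w * sum(benefit[a][s] - cost[a] for a in pathway)
               for s, w in scenarios.items())

candidates = [p for n in (1, 2, 3) for p in permutations(benefit, n)]
promising = [p for p in candidates if is_promising(p)]
scorecard = sorted(promising, key=expected_score, reverse=True)

print(f"{len(candidates)} candidates -> {len(promising)} promising pathways")
for p in scorecard[:3]:
    print(" -> ".join(p), round(expected_score(p), 2))
```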

As mentioned before, from Step 6 onward, DAPP follows the "adaptive policy making" procedure, which is principle oriented rather than providing specific guidelines for each step. Based on the scorecard created in Step 5, a couple of "preferred" pathways are recommended in Step 6. The robustness concept can be applied in this step to reflect the deep uncertainty of climate change. In other words, a pathway may be preferred if it is expected to perform satisfactorily over all performance criteria and over the entire range of scenarios. Haasnoot et al. (2012, 2013) emphasized that the definition of robustness and the resulting preference depend on societal perspectives and tested DAPP on a hypothetical example with three perspectives, namely, hierarchist, egalitarian, and individualist. Haasnoot et al. (2013) and Walker et al. (2013) pointed out that the selected preferred pathways should be further refined by designing anticipated contingency plans with signposts and triggers (i.e., thresholds of the signposts) in Step 7. The initial adaptive plan is then finally determined, implemented, and monitored after this step, but a specific guideline, especially for choosing the initial adaptive plan from the preferred pathways, has not been provided in the current version of DAPP.
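
As one possible way to operationalize the robustness screening in Step 6, the sketch below applies a satisficing rule over all criteria and all scenarios; the pathways, criterion values, and thresholds are invented for illustration.

```python
# Hypothetical pathway performance on two criteria under three scenarios
performance = {
    "demand reduction -> new reservoir": {
        "wet":    {"reliability": 0.97, "cost": 35},
        "median": {"reliability": 0.95, "cost": 38},
        "dry":    {"reliability": 0.91, "cost": 44},
    },
    "dredging -> water transfer": {
        "wet":    {"reliability": 0.99, "cost": 25},
        "median": {"reliability": 0.93, "cost": 30},
        "dry":    {"reliability": 0.84, "cost": 52},
    },
}
# Satisficing thresholds used here as one possible robustness definition
thresholds = {"reliability": (">=", 0.90), "cost": ("<=", 50)}

def satisfies(values):
    ok = {">=": lambda v, t: v >= t, "<=": lambda v, t: v <= t}
    return all(ok[op](values[c], t) for c, (op, t) in thresholds.items())

preferred = [p for p, by_scen in performance.items()
             if all(satisfies(v) for v in by_scen.values())]
print("robust (preferred) pathways:", preferred)
```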

5.3 Discussion

The applications of DAPP are currently very limited because DAPP was structured only recently. Although the adaptive policy making and adaptation pathways approaches have been applied to various cases (Walker et al. 2013), DAPP itself has been tested only on a few hypothetical examples of water management for the Rhine delta of the Netherlands. Therefore, for the water sector at least, exploring more real-world case studies with DAPP is expected to validate its efficacy in practice and improve its detailed procedures and schemes. As pointed out by Haasnoot et al. (2012), determining robust pathways for real cases may not be straightforward because of interrelated pathways and multiple targets, which were not dealt with in the simplified, hypothetical case studies.

Haasnoot et al. (2012) and Walker et al. (2013) also emphasized that computational supporting tools are important for DAPP because a large number of runs are generally required to handle an ensemble of scenarios, several combinations of actions and pathways with different performance criteria, and iterative loops. This is usually true for other DMCCA approaches as well, especially for RDM described in Section 8.3. For DAPP applications, exploratory modeling and analysis (EMA), a methodology that uses computational experiments to explore various types of uncertainty related to climate and hydrology scenarios (i.e., natural uncertainty), model structures and parameters (i.e., modeling uncertainty), and perspectives (i.e., societal uncertainty), has been used as a computational supporting tool (Walker et al. 2013), although its details are not covered in this section. See the online support for EMA (http://simulation.tbm.tudelft.nl/ema-workbench/contents.html) in addition to Haasnoot et al. (2012) and Walker et al. (2013) for details.
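
The EMA Workbench linked above provides this functionality in a full-featured form; the plain-Python sketch below only illustrates the basic idea of computational experiments, that is, sampling uncertain factors and evaluating a toy water-balance model over many futures. The model, uncertainty ranges, and failure criterion are invented for illustration.

```python
import random

random.seed(1)

def water_model(demand_growth, runoff_change, storage):
    """Toy water-balance model: returns the number of shortage years out of 50."""
    shortages = 0
    supply, demand = 100.0, 80.0
    for _ in range(50):
        supply_y = supply * (1 + runoff_change) + random.gauss(0, 5)
        demand *= (1 + demand_growth)
        shortages += (supply_y + storage) < demand
    return shortages

# Uncertainty ranges (natural, modeling, and societal factors; illustrative only)
uncertainties = {
    "demand_growth": (0.000, 0.015),
    "runoff_change": (-0.20, 0.05),
    "storage":       (0.0, 30.0),
}

# Computational experiments: sample the uncertainty space and run the model
experiments = []
for _ in range(1000):
    sample = {k: random.uniform(lo, hi) for k, (lo, hi) in uncertainties.items()}
    experiments.append((sample, water_model(**sample)))

failures = [s for s, shortages in experiments if shortages > 5]
print(f"{len(failures)} of {len(experiments)} sampled futures exceed 5 shortage years")
```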

It should be noted that the concepts of adaptive pathways and the sell-by date (or adaptation tipping point) are not completely new in DAPP. For example, Reeder and Ranger (2011) applied the "route-map approach" to the Thames Estuary 2100 project, which was a pioneering real case study for CCA planning. Figure 8.10 shows the route map developed for the Thames Estuary 2100 project, which is very similar to the DAPP map in Fig. 8.7, except that actions and pathways are depicted as functions of the threshold level (i.e., maximum water level rise) on the x-axis. The role of these threshold levels is the same as that of the sell-by dates or adaptation tipping points of DAPP, sometimes also called "adaptive turning points" (Werners et al. 2013), at which the actions can be switched. Setting an appropriate threshold level is based not only on engineering analysis but also on social consensus building among stakeholders. Therefore, the DAPP and route-map approaches have the additional advantage of promoting public involvement in their procedures.

Fig. 8.10 Example of the route-map approach: the Thames Estuary 2100 project (Reeder and Ranger 2011)

In summary, DAPP provides the most general DM tool, covering various alternative (or action) adjustments, switches, and combinations, especially when compared with RDM and ROA. This large degree of generality makes DAPP broadly applicable but also difficult to implement. When combined with the policy-first and decision scaling approaches (Sect. 8.3), the DAPP and route-map approaches offer the additional flexibility of cooperating with stakeholders. Currently, however, the applications of DAPP are limited; thus, more applications that explore its potential further and reduce its computational burden are strongly desired in the near future.

6 Closing Remarks

As seen in the previous sections, RDM, ROA, and DAPP seem very promising when a DM problem is encountered in water resources planning under different climate change scenarios. In addition to these three novel and advanced DM methods, classical DM theories (e.g., CBA, CEA, and MCA) are still good candidates if reasonable assumptions are made and/or proper modifications are introduced to capture the nonstationary characteristics and uncertainties of climate change. There are also several other methods for DM problems under climate change that were not introduced in this chapter because only a few of them have been tested in the field of water resources. In some cases, using multiple alternatives can be more effective than choosing a single alternative, which is the core concept of "portfolio analysis" (WUCA 2010; Hunt and Watkiss 2013). Rooted in financial market theory, portfolio analysis spreads risk over alternatives, in a way similar to how the above novel DM methods spread risk over either scenarios or time (Hunt and Watkiss 2013).
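
A minimal sketch of the portfolio idea is given below, assuming hypothetical per-scenario net benefits and scenario probabilities; it enumerates coarse weight combinations of three alternatives and reports the mix with the smallest scenario-to-scenario variance.

```python
from itertools import product

# Hypothetical net benefit of three adaptation alternatives under three scenarios
benefit = {
    "reservoir":      {"wet": 9.0, "median": 6.0, "dry": 2.0},
    "demand program": {"wet": 3.0, "median": 4.0, "dry": 5.0},
    "water transfer": {"wet": 4.0, "median": 5.0, "dry": 6.0},
}
prob = {"wet": 0.3, "median": 0.4, "dry": 0.3}        # assumed scenario probabilities

def portfolio_stats(weights):
    """Expected benefit and variance of a weighted mix of alternatives."""
    by_scen = {s: sum(w * benefit[a][s] for a, w in weights.items()) for s in prob}
    mean = sum(prob[s] * v for s, v in by_scen.items())
    var = sum(prob[s] * (v - mean) ** 2 for s, v in by_scen.items())
    return mean, var

# Enumerate coarse weight combinations that sum to one
portfolios = []
grid = [i / 10 for i in range(11)]
for w in product(grid, repeat=3):
    if abs(sum(w) - 1.0) < 1e-9:
        weights = dict(zip(benefit, w))
        mean, var = portfolio_stats(weights)
        portfolios.append((var, mean, weights))

best_var, best_mean, best_w = min(portfolios, key=lambda t: t[0])
print("lowest-variance portfolio:", best_w,
      "mean:", round(best_mean, 2), "variance:", round(best_var, 3))
```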

To choose the most appropriate DM method for a given DM problem, the strengths and weaknesses of the candidate DM methods should be analyzed; these are well summarized in Watkiss and Hunt (2013). The choice of method also depends on the characteristics of the given problem and the resources available to the decision maker. WUCA (2010) emphasized that decision makers should consider how to deal with probabilities, the importance of quantitative results, the willingness to invest time, money, and external help, the willingness to include stakeholders, and how to use the outcomes. More recently, Watkiss and Hunt (2013) compared several DM methods with respect to the availability of benefit information, the nature of climate information, the relevant time periods for adaptation, and the requirements for resources and experts. According to their analysis, MCA can work with qualitative benefit and climate information, whereas CBA and ROA require monetary performance measures and scenario probabilities; RDM and ROA are suited for long-term planning but are highly resource intensive.

As mentioned before, the novel DM methods are resource intensive mainly because climate change introduces additional sources of uncertainty. In other words, more experts, cost, time, and other types of effort are generally required for CCA than for conventional planning procedures. These resource constraints can therefore be a primary barrier to the success of the advanced DM methods because, in reality, such resources are rarely secured. In particular, continuous efforts on public involvement should be emphasized in the climate change era because stakeholders are usually reluctant to engage with complex and resource-consuming procedures like the advanced DM methods described above. One should keep in mind that DM is not only an action to be implemented but also a whole cycle of processes.