1 Introduction

The United Nations Framework Convention on Climate Change (UNFCCC 1992) defines climate change as “a change of climate which is attributed directly or indirectly to human activity that alters the composition of the global atmosphere and which is in addition to natural climate variability observed over comparable time periods.” A steady-state increase in the global annual mean surface air temperature associated with a given global mean radiative forcing is referred to as climate sensitivity, and radiative forcing (climate forcing) is the perturbation of the energy balance of the surface–troposphere system, after allowing the stratosphere to readjust to a state of global mean radiative equilibrium (Harvey et al. 1997).

Almost all scientists agree that global warming is an influence contributing to climate change. The greenhouse effect is the reason why the global temperature has risen by 0.76 °C (0.57–0.95 °C) from 1850–1899 to 2001–2005, and this temperature rise has resulted in warming of the oceans and melting of glaciers, which caused the total twentieth-century sea level rise, estimated to be 0.17 m (0.12–0.22 m; Solomon 2007). This situation seriously affects coastal areas and densely populated countries.

Warming in Africa is very likely to be larger than the global annual mean warming throughout the continent and in all seasons, with drier subtropical regions warming more than the moister tropics, and there is likely to be an increase in annual mean rainfall in East Africa (Christensen et al. 2007). Seasonally, in parts of equatorial East Africa, rainfall is predicted to increase in December–February and decrease in June–August (McCarthy et al. 2001).

For the Intergovernmental Panel on Climate Change (IPCC) mid-range (A1B) emission scenario, the mean annual temperature will increase in the range of 0.9–1.1 °C by 2030, in the range of 1.7–2.1 °C by 2050, and in the range of 2.7–3.4 °C by 2080 over Ethiopia compared to the 1961–1990 normal (NMA 2007). The major adverse impacts of climate variability in Ethiopia include (NMA 2007): (1) food insecurity arising from occurrences of droughts and floods; (2) outbreak of diseases, such as malaria, dengue fever, and water-borne diseases (e.g., cholera, dysentery) associated with floods, and respiratory diseases associated with droughts; (3) land degradation due to heavy rainfall; and (4) damage to communication, road, and other infrastructure by floods.

The study by Abdo et al. (2009) showed that the average annual minimum temperature is expected to increase by 1 °C in the 2020s, while in the 2050s it is expected to increase by 2.2 and 1.7 °C for the A2 and B2 scenarios, respectively. The average annual minimum temperature is projected to increase by 3.7 and 2.7 °C for the A2 and B2 scenarios, respectively, in the 2080s. The study also showed that in the 2020s the maximum temperature is projected to increase by 0.6 °C, while in the 2050s it is expected to increase by 1.4 and 1.1 °C for the A2 and B2 scenarios, respectively. In the 2080s, the annual maximum temperature is expected to increase by 2.5 and 1.8 °C for the A2 and B2 scenarios, respectively. Bekele (2009) showed that rainfall experiences a mean annual increase of 0.82, 0.85, and 1.6 % for the A2 scenario in the 2020s, 2050s, and 2080s, respectively. In the case of the B2 scenario, rainfall exhibits a mean annual decrease of 0.5 and 1.0 % in the 2020s and 2050s and an increase of 0.54 % in the 2080s. Abdo et al. (2009) also indicate that the variation in mean annual rainfall is less than the variation in monthly rainfall.

The objective of this chapter is to investigate future changes in local-scale climatic variables in the Upper Gilgel Abay catchment. For this purpose, large-scale general circulation model (GCM) climate variables, such as rainfall and temperature, were downscaled to local-scale baseline climate variables (predictands) using the statistical downscaling model (SDSM).

2 Materials and Methods

2.1 Study Area

The Gilgel Abay River is the largest tributary of the Lake Tana subbasin and originates from the highland spring of Gish-Abay town. Traditionally, people believe that this spring is the origin of the Blue Nile River. The catchment covers an area of 1,654.3 km² of the Lake Tana basin, and the longest flow path extends to 80.6 km.

Location

Geographically, the Upper Gilgel Abay River catchment lies in the north of the Upper Blue Nile basin and forms the southern part of the Lake Tana subbasin, between latitudes 10°56′53″ to 11°21′58″N and longitudes 36°49′29″ to 37°23′34″E (Fig. 19.1).

Fig. 19.1
figure 1

Location of the study area

Topography

The elevation of the Upper Gilgel Abay catchment ranges from 1,891 to 3,524 m above mean sea level (amsl). The highest elevation of the catchment is located on the southeastern tip. Nearly half of the catchment has an elevation ranging from 1,891 to 2,190 m amsl, extending from the center to the northern tip (the outlet of the river).

Climate

The mean monthly rainfall (1994–2008) plot of the Enjibara, Sekela, Dangila, and Wetet Abay meteorological stations indicates that the study area has one rainfall peak per year (Fig. 19.2). Therefore, with respect to rainfall regimes, the Upper Gilgel Abay catchment lies in the monomodal climate class according to the Ethiopian climate classification (Tadege 2001). The main rainfall season of the study area is from June to September and accounts for 70–90 % of the annual rainfall (Abdo et al. 2009).

Fig. 19.2
figure 2

Mean monthly rainfall (1994–2008) distribution of the Upper Gilgel Abay catchment

According to the traditional climate classifications of the country (Tadege 2001), most of the area of the catchment is found in the woina dega climate (warm climate; 1,500–2,500 m amsl).

The mean annual areal rainfall (1994–2008) of the study area was computed using the inverse distance weighted (IDW) interpolation technique (Fig. 19.3), accounting for the ten selected meteorological stations. As shown in the map, the mean annual areal rainfall of the study area varies from 1,624 to 2,349 mm. The majority of the study area has a mean annual rainfall between 1,842 and 1,986 mm.
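A minimal sketch of the IDW computation is given below; the station coordinates, rainfall values, and the power parameter of 2 are illustrative assumptions, since the chapter does not report the exact settings used.

```python
import numpy as np

def idw(xy_stations, values, xy_targets, power=2.0):
    """Inverse distance weighted interpolation of station values onto target points."""
    interpolated = []
    for xt, yt in xy_targets:
        d = np.hypot(xy_stations[:, 0] - xt, xy_stations[:, 1] - yt)
        if np.any(d < 1e-9):                 # target coincides with a station
            interpolated.append(values[np.argmin(d)])
            continue
        w = 1.0 / d**power                   # inverse-distance weights
        interpolated.append(np.sum(w * values) / np.sum(w))
    return np.array(interpolated)

# Hypothetical station easting/northing (km) and mean annual rainfall (mm)
stations = np.array([[0.0, 0.0], [15.0, 5.0], [30.0, 20.0], [10.0, 25.0]])
rainfall = np.array([1650.0, 1850.0, 2300.0, 1990.0])
grid_points = np.array([[12.0, 10.0], [25.0, 18.0]])
print(idw(stations, rainfall, grid_points))
```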

Fig. 19.3
figure 3

Mean annual areal rainfall of the Upper Gilgel Abay catchment

The temperature of the study area is strongly affected by elevation, decreasing with increasing elevation (Fig. 19.4). For instance, the mean monthly maximum temperature (1994–2008) of Wetet Abay, at an elevation of 1,915 m amsl, varies from 24.3 to 31.3 °C, whereas that of Gundil, at an elevation of 2,574 m amsl, varies from 18.3 to 24.9 °C. Generally, the daily variation between maximum and minimum temperature is high compared to the seasonal variation of temperature in the study area.

Fig. 19.4
figure 4

Mean monthly (1994–2008) temperature of stations with elevation difference

2.2 Available Data

Predictands (historical climate variables) and daily predictor variables for past and future projections were available to investigate future changes in local-scale climatic variables in the catchment.

Meteorological Data

Meteorological variables such as rainfall and maximum and minimum temperature were required as predictands to downscale the global GCM data to local climate variables. However, only the Bahir Dar station was used as an input to SDSM to derive the statistical relationships between predictand and predictors, because its record satisfies the baseline (1961–1990) historical data recommended by the IPCC. These 30-year daily meteorological variables were collected from the National Meteorological Agency (NMA) Bahir Dar Branch Directorate.

GCM Data

GCM data were required to project and quantify the relative change of climate variables between the current and future time horizons. One of the general circulation models, the Hadley Centre Coupled Model, version 3 (HadCM3), was used for this study because it is widely applied in climate change studies and provides daily predictor variables that can be used in SDSM. The predictor variables are supplied on a grid box by grid box basis. On entering the location of the study area, the correct grid box was calculated and a zip file was downloaded (Fig. 19.5). The African continent window of HadCM3, with a resolution of 2.5° latitude × 3.75° longitude, was therefore used as an input to the SDSM model. When this file is unzipped, the following three directories are available (CCIS 2013):

Fig. 19.5
figure 5

The grid box where the study area is located

  • National Centers for Environmental Prediction (NCEP) 1961–2001: This directory contains 41 years of daily observed predictor data, derived from the NCEP reanalyses and normalized over the complete 1961–1990 period. These data were interpolated to the same grid as HadCM3 (2.5° latitude × 3.75° longitude) before the normalization was implemented.

  • H3A2_19612099: This directory contains 139 years of daily GCM predictor data, derived from the HadCM3 A2(a) experiment, and normalized over the 1961–1990 period.

  • H3B2_19612099: This directory contains 139 years of daily GCM predictor data, derived from the HadCM3 B2(a) experiment, and normalized over the 1961–1990 period.

To apply SDSM to GCM data, both the observed predictand and the GCM predictor data should ideally be available on the same grid spacing. Individual predictor (Table 19.1) and predictand files (one variable per file, time series data only) are denoted by the extension *.DAT (Wilby et al. 2002). The predictors represent large-scale atmospheric variables, whereas the predictand represents local surface variables such as temperature and precipitation.

Table 19.1 HadCM3 predictor variables
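Because each *.DAT file holds a single daily time series (one variable per file), the predictor files can be assembled into a predictor matrix with a few lines of pandas. The file names below are hypothetical placeholders, not the actual contents of the downloaded archive.

```python
import pandas as pd

# Hypothetical file names; each SDSM *.DAT file contains one daily value per line
predictor_files = ["ncepmslpaf.dat", "ncepr500af.dat", "nceprhumaf.dat"]
dates = pd.date_range("1961-01-01", "1990-12-31", freq="D")  # baseline period

# NCEP files cover 1961-2001, so each series is truncated to the baseline length
predictors = pd.DataFrame(
    {f: pd.read_csv(f, header=None).iloc[:, 0].values[: len(dates)] for f in predictor_files},
    index=dates,
)
predictand = pd.read_csv("bahirdar_pcp.dat", header=None).iloc[:, 0]  # daily precipitation
```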

2.3 The Climatological Baseline

In order to assess the implications of future changes on the environment, society, and economy on an exposure unit, it is first necessary to have information about the present-day or recent conditions as a reference point or a baseline (Carter et al. 1999; McCarthy et al. 2001) . Baseline information is important for: (1) characterizing the prevailing conditions under which an exposure unit functions and to which it must adapt; (2) describing average conditions, spatial and temporal variability, and anomalous events, some of which can have significant impacts; (3) calibrating and testing impact models across the current range of variability; (4) identifying possible ongoing trends or cycles; and (5) specifying the reference situation with which to compare future changes (Carter et al. 1999) .

The baseline period is usually selected according to the following criteria (Carter et al. 1994) : (1) It should be representative of the present-day or recent average climate in the study region; (2) be of a sufficient duration to encompass a range of climatic variations, including a number of significant weather anomalies (e.g., severe droughts or cool seasons); (3) should cover a period for which data on all major climatological variables are abundant, adequately distributed over space, and readily available; (4) include data of sufficiently high quality for use in evaluating impacts; and (5) be consistent or readily comparable with baseline climatologies used in other impact assessments.

A popular climatological baseline period is a 30-year “normal” period, as defined by the World Meteorological Organization (WMO). The current WMO normal period is 1961–1990, which provides a standard reference for many impact studies (McCarthy et al. 2001) . As well as providing a standard reference to ensure comparability between impact studies, other advantages of using this baseline period include (Carter et al. 1999) :

  • The period ends in 1990, which is the common reference year used for climatic and nonclimatic projections by the IPCC in the first and second assessment reports (and retained for the third assessment report).

  • It represents the recent climate, to which many present-day human or natural systems are likely to have become reasonably well adapted (though there are exceptions, such as vegetation zones or groundwater levels that can have a response lag of many decades or more relative to the ambient climate).

  • In most countries, the observed climatological data are most readily available for this period, especially in computer-coded form at a daily time resolution.

According to the above-listed importance and advantages, this study adopted the IPCC-suggested baseline period (1961–1990). Around the catchment, only the Bahir Dar meteorological station fulfills the IPCC baseline period because its observed data cover the range from 1961 to 1990. Therefore, these data were used as predictands for downscaling.

2.4 Climate Scenarios

As per the IPCC description, climate scenarios are plausible representations of the future that are consistent with assumptions about future emissions of greenhouse gases and other pollutants and with our understanding of the effect of increased atmospheric concentrations of these gases on the global climate. These assumptions include future trends in energy demand, emissions of greenhouse gases, and land-use change, as well as assumptions about the behavior of the climate system over long timescales. The IPCC Task Group on Data and Scenario Support for Impact and Climate Assessment (IPCC-TGCIA) classified climate scenarios into three main types (Carter et al. 2007), based on how they are constructed: (1) synthetic scenarios, also known as incremental scenarios; (2) analog scenarios; and (3) climate model-based scenarios.

Special Report on Emissions Scenarios

The world will have changed by 2100 in ways that are difficult to imagine—as difficult as it would have been at the end of the nineteenth century to imagine the changes of the 100 years since (Nakicenovic et al. 2000). The IPCC Special Report on Emissions Scenarios (SRES), which replaces the old IPCC scenarios (IS92), identifies 40 different scenarios following four families of storylines (Santoso et al. 2008). Each storyline represents a distinctly different direction for future developments, such as demographic, socioeconomic, technological, and environmental developments. The four qualitative storylines yield four sets of scenarios called families (A1, A2, B1, and B2).

The main characteristics of the four SRES storylines and scenario families (Nakicenovic et al. 2000) are:

  • A1: The A1 storyline and scenario family describes a future world of very rapid economic growth, global population that peaks in the middle of the century and declines thereafter, and the rapid introduction of new and more efficient technologies. Major underlying themes are convergence among regions, capacity building, and increased cultural and social interactions, with a substantial reduction in regional differences in per capita income. The A1 scenario family develops into three groups that describe alternative directions of technological change in the energy system. The three A1 groups are distinguished by their technological emphasis: fossil intensive (A1FI), nonfossil energy sources (A1T), or a balance across all sources (A1B).

  • A2: The A2 storyline and scenario family describes a very heterogeneous world. The underlying theme is self-reliance and preservation of local identities. Fertility patterns across regions converge very slowly, which results in continuously increasing global population. Economic development is primarily regionally oriented and per capita economic growth and technological changes are more fragmented and slower than in other storylines.

  • B1: The B1 storyline and scenario family describes a convergent world with the same global population that peaks in mid-century and declines thereafter, as in the A1 storyline, but with rapid changes in economic structures toward a service and information economy, with reductions in material intensity, and the introduction of clean and resource-efficient technologies. The emphasis is on global solutions to economic, social, and environmental sustainability, including improved equity, but without additional climate initiatives.

  • B2: The B2 storyline and scenario family describes a world in which the emphasis is on local solutions to economic, social, and environmental sustainability. It is a world with continuously increasing global population at a rate lower than A2, intermediate levels of economic development, and less rapid and more diverse technological change than in the B1 and A1 storylines. While the scenario is also oriented toward environmental protection and social equity, it focuses on local and regional levels.

As mentioned earlier, the HadCM3 climate model was selected for this study. Climate change and its impacts are better understood when all available GCMs and emission scenarios are used. However, to demonstrate the technique of how future climate change can be studied, only HadCM3 was used. The HadCM3 experiments applied here provide predictor data for the A2 and B2 SRES emission scenarios.

2.5 Climate Model Downscaling

Downscaling Techniques

GCMs indicate that rising concentrations of greenhouse gases will have significant implications for climate at global and regional scales (Wilby and Dawson 2007). However, due to their coarse spatial resolution and inability to resolve important subgrid-scale features, such as clouds and topography, GCMs are of restricted usefulness for local impact studies. GCMs depict the climate using a three-dimensional grid over the globe, typically having a horizontal resolution of between 250 and 600 km, 10–20 vertical layers in the atmosphere, and sometimes as many as 30 layers in the oceans (Nakicenovic et al. 2000). Their resolution is thus quite coarse relative to the scale of exposure units in most impact assessments. Several methods have been adopted for developing regional, GCM-based scenarios at the subgrid scale, a procedure variously known as “regionalization” or “downscaling.” Two different approaches to downscaling are possible (Hewitson and Crane 1996):

  I. Dynamic (nested model) downscaling

    The typical application in this case is to drive a regional dynamic model at mesoscale or finer resolutions with the synoptic- and larger scale information from a GCM (Giorgi and Mearns 1991; Jenkins and Barron 1997) . Detailed information at spatial scales down to 10–20 km and at temporal scales of hours or less may be achieved in such applications (Hewitson and Crane 1996) . Such models are computationally demanding and are not an easily accessible research avenue, but in the long term, this technique is likely to be the best solution and needs to be encouraged.

  II. Statistical (empirical) downscaling

    Statistical downscaling is computationally efficient in comparison with dynamical downscaling and is a practical approach for addressing current needs in the climate change research community, especially in many of the countries liable to be most sensitive to climate change impacts (Hewitson and Crane 1996) .

In the empirical approach, one seeks to derive quantitative relationships between the large-scale circulation and the local climate of the form:

y = f(x) (19.1)

where y represents the predictand (a regional or local climate variable), x is the predictor (a set of large-scale atmospheric variables), and f is a deterministic/stochastic function conditioned by x that has to be found empirically from observed or modeled data sets.
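As an illustration of Eq. (19.1), a linear transfer function f can be estimated by ordinary least squares. The sketch below uses synthetic predictor and predictand series; it is not the SDSM implementation, only the underlying regression idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 3 standardized large-scale predictors x and one local predictand y
n_days = 10957                      # roughly 30 years of daily values
x = rng.standard_normal((n_days, 3))
true_beta = np.array([1.2, -0.4, 0.7])
y = 15.0 + x @ true_beta + rng.normal(scale=0.8, size=n_days)

# Fit y = b0 + b1*x1 + b2*x2 + b3*x3 by ordinary least squares
X = np.column_stack([np.ones(n_days), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# Explained variance of the fitted transfer function
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(beta, r2)
```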

Many of the processes which control local climate, e.g., topography, vegetation, and hydrology, are not included in coarse-resolution GCMs . The development of statistical relationships between the local and large scales may include some of these processes implicitly (Fig. 19.6).

Fig. 19.6
figure 6

The concept of spatial downscaling. (Source: David Viner, Climatic Research Unit, University of East Anglia, UK)

Within the broad family of empirical/statistical downscaling techniques, three major groups of methods have been developed: weather classification (typing) schemes, transfer function/regression models, and stochastic weather generators.

Regression models are a conceptually simple means of representing linear or nonlinear relationships between local climate variables (predictands) and the large-scale atmospheric forcing (predictors; Wilby et al. 2004) . Commonly applied methods include canonical correlation analysis (CCA; von Storch et al. 1993) and artificial neural networks (ANN) which are akin to nonlinear regression (Crane and Hewitson 1998) and multiple regression (Murphy 1999) .

For this particular study, a regression-type model, SDSM, was used. SDSM is widely applied in many regions of the world over a range of different climatic conditions. It permits the spatial downscaling of daily predictor–predictand relationships using multiple linear regression techniques. The predictor variables provide daily information concerning the large-scale state of the atmosphere, while the predictand describes conditions at the site scale (CCIS 2008).

2.6 General Description of SDSM

SDSM is a decision support tool that facilitates the assessment of regional impacts of global warming by allowing the process of spatial-scale reduction of data provided by large-scale GCMs (Wilby et al. 2002) . It is best described as a hybrid of the stochastic weather generator and regression-based methods. This is because large-scale circulation patterns and atmospheric moisture variables are used to linearly condition local-scale weather generator parameters (e.g., precipitation occurrence and intensity; Wilby et al. 2002) .

Users can simulate, through combinations of regression and weather generators, sequences of daily climatic data for present and future periods by extracting statistical parameters from observed data series (Gagnon et al. 2005). The stochastic component of SDSM permits the generation of up to 100 simulations. The SDSM software reduces the task of statistically downscaling daily weather series to seven discrete steps: (1) quality control and data transformation; (2) screening of predictor variables; (3) model calibration; (4) weather generation (using observed predictors); (5) statistical analyses; (6) graphing model output; and (7) scenario generation (using climate model predictors). The structure and operations of SDSM are best described with respect to these seven tasks, as indicated in the bold boxes of the flowchart and the short descriptions below it, shown in Fig. 19.7 (Wilby and Dawson 2007).

Fig. 19.7
figure 7

SDSM version 4.2 climate scenario generation. (Wilby and Dawson 2007)

Quality Control and Data Transformation

The quality control in SDSM is used to identify gross data errors, missing data codes, and outliers prior to model calibration. In many instances, it may be appropriate to transform predictors and/or the predictand prior to model calibration. The transform facility takes chosen data files and applies selected transformations (e.g., logarithm, power, inverse, lag, binomial).

Screening of the Predictor Variables

Identifying empirical relationships between gridded predictors (such as mean sea-level pressure) and single-site predictands (such as station precipitation) is central to all the statistical downscaling methods. The main purpose of screen variables operation is to assist the user in the selection of appropriate downscaling predictor variables.

Model Calibration

Model calibration takes the specified predictand along with a set of predictor variables and computes the parameters of multiple regression equations via an optimization algorithm (either dual simplex or ordinary least squares). The model structure is then specified: whether monthly, seasonal, or annual submodels are required, and whether the process is unconditional or conditional. In unconditional models, a direct link is assumed between the predictors and predictands, but in conditional models, there is an intermediate process between regional forcing and local weather.

Weather Generator

The weather generator operation generates ensembles of synthetic daily weather series given observed (or NCEP reanalysis) atmospheric predictor variables. The procedure enables the verification of calibrated models (using independent data) and the synthesis of artificial time series for present climate conditions.

Data Analysis

SDSM provides means of interrogating both downscaled scenarios and observed climate data through the Summary Statistics and Frequency Analysis screens. For model output, the ensemble member or the ensemble mean must also be specified. In return, SDSM displays a suite of diagnostics, including monthly/seasonal/annual means, measures of dispersion, serial correlation, and extremes.

Graphical Analysis

Three options of graphical analysis are provided by SDSM 4.2 through the Frequency Analysis, Compare Results, and Time Series Analysis screens.

Scenario Generation

Finally, the Scenario Generator operation produces ensembles of synthetic daily weather series for the potential atmospheric predictor variables supplied by a climate model (for either present or future climate experiments), rather than observed predictors.

2.6.1 Model Setup

  I. General Model Setting

Year Length

A standard calendar year (365 days, with 29 days in February every fourth year) is used whenever dealing with the predictand and NCEP predictors, whereas a year length of 360 days is used in scenario generation, since the HadCM3 model uses 360-day years. The 360-day calendar divides a year into 12 months, each 30 days in length.
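A short sketch of how the two calendars differ, assuming the cftime library is available; this is only an illustration of the 360-day convention, not part of SDSM itself.

```python
import cftime

# HadCM3 uses a 360-day calendar: 12 months x 30 days each
start = cftime.Datetime360Day(1961, 1, 1)
feb30 = cftime.Datetime360Day(1961, 2, 30)   # a valid date in the 360-day calendar
print(feb30 - start)                         # 59 days into the model year

# Number of daily values per model year differs from the observed record
days_per_model_year = 12 * 30                # 360 in the HadCM3 calendar
days_per_calendar_year = 365                 # 366 in leap years for the observed data
```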

Event Threshold

The event threshold is set to zero for temperature and 0.1 mm/day for precipitation to treat trace rain days as dry days.

Model Transformation

The default (none) is used for predictands that are normally distributed and unconditional, as is the case for daily temperature, whereas a fourth-root transformation is applied for precipitation, since that model is conditional and the data are skewed.

Variance Inflation

Variance inflation controls the magnitude of variance inflation in the downscaled daily weather variables. This parameter can be adjusted during the calibration period to force the model to replicate the observed data. The default (i.e., 12) produces approximately normal variance inflation prior to any transformation and is applied to maximum and minimum temperatures. For precipitation, this parameter can be adjusted during the calibration period.

  II. Predictor Variables Screening

The choice of predictor variables is one of the most influential steps in the development of a statistical downscaling procedure. Identifying empirical relationships between gridded predictors and single-site predictands is central to all statistical downscaling. The screen variables option in SDSM assists the choice of appropriate downscaling predictor variables through seasonal correlation analysis, partial correlation analysis, and scatter plots. One approach is to choose all predictors and run the explained variance on a group of 12 at a time. From these groups, the predictors with high explained variance are selected. Then, partial correlation analysis is done for the selected predictors to see their level of correlation with each other. There could be a predictor with a high explained variance that is nevertheless very highly correlated with another predictor; in that case it is difficult to say that the predictor adds information to the process, and it is therefore dropped from the list. Finally, the scatter plot indicates whether a relationship is due to a few outliers or whether it is a potentially useful downscaling relationship. The selected predictor variables for precipitation and temperature are shown in Table 19.2; a short sketch of this screening logic follows the table.

Table 19.2 Selected large-scale predictor variables for the predictands of Bahir Dar station
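The screening logic described above (explained variance for each candidate, then inter-correlations among the retained predictors) can be sketched as follows; the predictor names and data are synthetic stand-ins, not the actual HadCM3/NCEP series.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3000
candidates = {                         # synthetic stand-ins for gridded predictors
    "r500": rng.standard_normal(n),    # relative humidity at 500 hPa
    "rhum": rng.standard_normal(n),    # near-surface relative humidity
    "mslp": rng.standard_normal(n),    # mean sea-level pressure
}
candidates["rhum"] = 0.8 * candidates["r500"] + 0.2 * rng.standard_normal(n)  # collinear pair
predictand = 0.9 * candidates["r500"] + 0.3 * candidates["mslp"] + rng.standard_normal(n)

# Step 1: explained variance (squared correlation) of each candidate on its own
explained = {name: np.corrcoef(x, predictand)[0, 1] ** 2 for name, x in candidates.items()}
print("explained variance:", explained)

# Step 2: inter-correlations among retained predictors; a pair with |r| close to 1
# adds little information, so one member would be dropped, as described in the text
names = list(candidates)
inter = np.corrcoef(np.vstack([candidates[k] for k in names]))
print("predictor inter-correlations:\n", inter.round(2))
```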
  III. Model Calibration

The model calibration process constructs downscaling models based on multiple regression equations, given daily weather data (the predictand) and regional-scale atmospheric (predictor) variables. The model structure for calibration can be specified by selecting either the unconditional or the conditional process. In unconditional models, a direct link is assumed between the predictors and the predictand. In conditional models, there is an intermediate process between the regional forcing and the local weather (e.g., local precipitation amounts depend on wet-/dry-day occurrence, which in turn depends on regional-scale predictors, such as humidity and atmospheric pressure). The model structure is set to unconditional for maximum and minimum temperatures and conditional for precipitation. The model type determines whether individual downscaling models are calibrated for each calendar month, climatological season, or the entire year. The model is structured as a monthly model for both precipitation and temperature downscaling, in which case 12 regression equations are derived for the 12 months, with different regression parameters for each monthly equation. Finally, the data period is set in order to specify the start and end dates of the analysis. The calibration was done for a period of 20 years (1961–1980), and the remaining 10 years were used as the validation period.
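A minimal sketch of the conditional precipitation structure (a wet-day occurrence step followed by an amounts regression on the fourth-root-transformed wet-day totals), calibrated month by month as 12 separate equations. All data are synthetic, and the simple least-squares occurrence step is only a proxy for SDSM's conditional treatment.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10957                                    # roughly 30 years of daily values
months = np.arange(n) % 365 // 31 + 1        # crude month index for the synthetic series
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])   # intercept + 2 predictors
wet = rng.random(n) < 0.45                   # synthetic wet/dry occurrence
amounts = np.where(wet, rng.gamma(2.0, 4.0, n), 0.0)             # synthetic precipitation (mm)

models = {}
for m in range(1, 13):
    sel = months == m
    # Occurrence submodel: regression on a 0/1 wet-day indicator (proxy for the
    # conditional occurrence step)
    b_occ, *_ = np.linalg.lstsq(X[sel], wet[sel].astype(float), rcond=None)
    # Amounts submodel: fourth-root transformed precipitation on wet days only
    wet_sel = sel & wet
    b_amt, *_ = np.linalg.lstsq(X[wet_sel], amounts[wet_sel] ** 0.25, rcond=None)
    models[m] = (b_occ, b_amt)

print(models[7])   # July occurrence and amounts coefficients
```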

  IV. Weather Generator/Scenario Generator

The Weather Generator operation generates ensembles of synthetic daily weather series given observed (or NCEP reanalysis) atmospheric predictor variables. The procedure enables the verification of calibrated models (using independent data) and the synthesis of artificial time series for present climate conditions. The Scenario Generation operation produces ensembles of synthetic daily weather series given the regression weights produced during calibration and the daily atmospheric predictors supplied by a GCM (under either present or future greenhouse gas forcing). These functions are identical in all respects except that it may be necessary to specify a different convention for model dates and a different source directory for predictor variables.

The two operations were set up to synthesize 20 daily ensemble members, both for NCEP (1961–1990) and for the GCM (1961–2099), for maximum and minimum temperatures. Precipitation downscaling is necessarily more complex than temperature downscaling because daily precipitation amounts at individual sites are relatively poorly resolved by the regional-scale predictors and precipitation is a conditional process (i.e., both the occurrence and the amount processes must be specified) (Wilby and Dawson 2007). To address this complexity, the ensemble size for precipitation was increased (up to 100 members).
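The stochastic step can be illustrated as drawing several realizations around a deterministic regression estimate and averaging them; the residual standard deviation below stands in for the calibrated residual variance (and variance inflation) in SDSM, and the numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_members = 10957, 20

# Deterministic part: regression estimate of daily maximum temperature (synthetic)
deterministic = 27.0 + 3.0 * np.sin(2 * np.pi * np.arange(n_days) / 365.25)
residual_sd = 1.5                      # stands in for the calibrated residual spread

# Each ensemble member adds independent stochastic noise to the deterministic signal
ensemble = deterministic + rng.normal(scale=residual_sd, size=(n_members, n_days))
ensemble_mean = ensemble.mean(axis=0)  # averaged across members, as done for the projections

print(ensemble.shape, ensemble_mean[:5].round(2))
```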

3 Result and Discussion

3.1 Downscaling of Climate Variables

3.1.1 Selection of Predictor Variables

The best correlated predictor variables selected for precipitation and for maximum and minimum temperatures, together with the months in which each predictor correlates strongly with the predictand, are listed in Table 19.2. A good correlation between predictand and predictor means that the predictor performs better than the others in downscaling the global climate variables to local-scale climate variables. For instance, relative humidity at 500 hPa and near-surface relative humidity performed better than the other predictors in downscaling precipitation. Relative humidity at 500 hPa was a very good predictor for the months of January, February, April, August, November, and December, whereas precipitation for the months of June, September, and October was downscaled more efficiently with near-surface relative humidity.

3.2 Baseline Scenarios

Baseline scenario analysis was performed for the Bahir Dar station over the 30-year period from 1961 to 1990. Thus, HadCM3 was downscaled at the daily scale for the base period under the two emission scenarios (A2 and B2), and some of the statistical properties of the downscaled data were compared with the daily observed data. One of the criteria commonly used in evaluating the performance of any downscaling method is whether the historic or observed conditions can be replicated.

The downscaled baseline daily temperatures show good agreement with the observed data. However, due to the conditional nature of daily precipitation, the downscaled values show less agreement with the observed daily data. In conditional models, there is an intermediate process between regional forcing and local weather (e.g., local precipitation amounts depend on wet-/dry-day occurrence, which in turn depends on regional-scale predictors, such as humidity and atmospheric pressure) (Wilby et al. 2004). Additionally, the complicated nature of precipitation processes and their distribution in space and time are further reasons for the lower agreement. Climate model simulation of precipitation has improved over time but is still problematic (Bader et al. 2008) and carries a larger degree of uncertainty than that for temperature (Thorpe 2005). This is because rainfall is highly variable in space, so the relatively coarse spatial resolution of the current generation of climate models is not adequate to fully capture that variability.

The coefficient of determination (R²) for daily observed versus simulated (downscaled) data clearly shows the difference between the unconditional and conditional models for both calibration and validation (Table 19.3).

Table 19.3 Downscaled daily precipitation, maximum and minimum temperature efficiency (R²) relative to observed data

The replication of the observed data by the model is much better (with a coefficient of determination near one) when the timescale resolution is reduced to monthly and annual. For this reason, the baseline scenario mean monthly precipitation, maximum temperature, and minimum temperature of the observed and downscaled data are compared and discussed in the next section.
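The effect of temporal aggregation on R² can be checked with a short sketch: two synthetic daily series share the same seasonal cycle but differ in daily noise, so the daily R² is low while the monthly-mean R² approaches one, mirroring the behavior reported here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
dates = pd.date_range("1961-01-01", "1990-12-31", freq="D")
seasonal = 10 * np.sin(2 * np.pi * dates.dayofyear / 365.25)

observed = pd.Series(seasonal + rng.normal(scale=6, size=len(dates)), index=dates)
downscaled = pd.Series(seasonal + rng.normal(scale=6, size=len(dates)), index=dates)

def r2(obs, sim):
    """Coefficient of determination of sim relative to obs."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print("daily R2:  ", round(r2(observed, downscaled), 2))                    # low
obs_m = observed.resample("MS").mean()
sim_m = downscaled.resample("MS").mean()
print("monthly R2:", round(r2(obs_m, sim_m), 2))                            # close to 1
```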

A. Precipitation

SDSM performed reasonably well in estimating the mean monthly precipitation once the temporal resolution of the analysis was changed from daily to mean monthly values. Figure 19.8a shows this, although there is a relatively small model error in July and August compared to other months. The mean monthly totals of observed and downscaled precipitation are 437.7 and 447.8 mm for July and 382.5 and 395.4 mm for August, respectively. This result was checked by plotting the absolute model errors by month (Fig. 19.9a), which shows good agreement with that obtained during downscaling.

Fig. 19.8
figure 8

Baseline period (1961–1990) mean monthly observed and downscaled precipitation (a), maximum temperature (b), and minimum temperature (c)

Fig. 19.9
figure 9

Absolute model error of mean monthly precipitation (a), maximum temperature (b), and minimum temperature (c) (1961–1990)

B. Maximum Temperature

The downscaled maximum temperature for the baseline period shows better agreement between observed and downscaled values than precipitation and minimum temperature under both the A2 and B2 emission scenarios (Fig. 19.8b).

The monthly absolute model error of the downscaled maximum temperature for the baseline period shows an almost similar result. Compared to other months, February and July have slightly higher model errors, though the magnitudes are small (Fig. 19.9b).

C. Minimum Temperature

Like the maximum temperature, the downscaled minimum temperature shows a satisfactory agreement with the observed minimum temperature for all months under both the A2 and B2 emission scenarios, except for a little variation in January, February, and May. The absolute model error of the downscaled minimum temperature confirms this; it ranges from 0.1 to 0.2 °C (Figs. 19.8c and 19.9c).

3.2.1 Future Scenario

Having verified that the downscaling model can replicate the observed statistical properties, daily future climate variables were projected for the next century using the HadCM3 (A2 and B2) general circulation model. The projection generated 20 ensemble members of daily temperature and 100 ensemble members of daily precipitation. These ensemble members were averaged in order to account for the characteristics of all members.

The analysis was done for three 30-year data ranges, following the WMO recommendation: the 2020s (2011–2040), the 2050s (2041–2070), and the 2080s (2071–2099). The generated scenarios are discussed individually for each baseline predictand below.
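Given downscaled daily series, the changes reported below reduce to differences between each 30-year horizon and the baseline; this sketch uses synthetic series and shows a percentage change for precipitation and an absolute change for temperature.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
dates = pd.date_range("1961-01-01", "2099-12-31", freq="D")
# Synthetic daily maximum temperature with a weak warming trend, and daily precipitation
tmax = pd.Series(27 + 0.015 * (dates.year - 1961) + rng.normal(0, 2, len(dates)), index=dates)
pcp = pd.Series(rng.gamma(0.6, 7.0, len(dates)), index=dates)

horizons = {"2020s": ("2011", "2040"), "2050s": ("2041", "2070"), "2080s": ("2071", "2099")}
base = slice("1961", "1990")                 # climatological baseline period

for name, (start, end) in horizons.items():
    dT = tmax[start:end].mean() - tmax[base].mean()             # absolute change (degC)
    dP = 100 * (pcp[start:end].mean() / pcp[base].mean() - 1)   # percentage change
    print(f"{name}: dTmax = {dT:+.2f} degC, dP = {dP:+.2f} %")
```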

A. Precipitation

The results of the rainfall projection are discussed on a mean annual, seasonal, and monthly basis. This research considered the monomodal (one wet season) based seasonal classification of Ethiopia, namely Bega (October–January), the dry season; Belg (February–May), the short rainy season; and Kiremt (June–September), the long rainy season.

While keeping its spatial and temporal variability, the rainfall projection does not show a magnified increase or decrease, unlike the maximum and minimum temperatures, for either the A2 or B2 emission scenario. Rainfall experiences a mean annual increase of 2.21, 2.23, and 1.89 % for the A2 scenario in the 2020s, 2050s, and 2080s, respectively. Under the B2 scenario, the mean annual increase is 2.06, 1.85, and 0.36 % in the 2020s, 2050s, and 2080s, respectively. These values show that the trend does not increase uniformly; instead, it differs from one time horizon to another.

The rainfall projection for Bega (October–January) shows an increase for the two emission scenarios, except for the 2080s of the B2 scenario. The Belg (February–May) projection shows a decrease in mean monthly rainfall for the first 2 months (February and March) and an increase for the last 2 months (April and May) of the season for the A2 scenario. In the case of the B2 scenario for the Belg season, rainfall increases in May in the 2020s. In the Kiremt (June–September) season, June and July share the rainfall decrease of April and May in the 2080s. Except for September (increasing), the Kiremt season shows more or less constant rainfall (see Table 19.4, Fig. 19.10a, b).

Fig. 19.10
figure 10

The future absolute change in mean monthly precipitation for A2 (a) and B2 (b) scenarios; maximum temperature for A2 (c) and B2 (d) scenarios; minimum temperature for A2 (e) and B2 (f) scenarios from the baseline period

Table 19.4 Future percentage precipitation changes of A2 and B2 scenarios

The projected mean monthly rainfall of this study shows a pattern similar to the work of Abdo et al. (2009) and deBoer (2007), who used the HadCM3 and European Centre Hamburg Model 5/Max-Planck-Institut für Meteorologie (ECHAM5/MPIOM) global climate models in the same catchment. These studies were done on the Gilgel Abay River catchment and the northern Ethiopian highlands. Both studies agree with the mean monthly rainfall decrease in May, June, and July and increase in September, October, and November relative to the baseline period. Similarly, in the IPCC third assessment report (McCarthy et al. 2001), rainfall is predicted to increase in December–February and decrease in June–August in parts of East Africa under intermediate warming scenarios. This IPCC finding supports the results of this study, which show increasing mean monthly rainfall from December to February and a slight decrease from June to August for both the A2 and B2 scenarios.

B. Maximum Temperature

The projected mean annual maximum temperature shows an increasing trend for all time horizons, by 0.43, 1.05, and 1.92 °C for the A2 scenario in the 2020s, 2050s, and 2080s, respectively. The B2 scenario also shows an increase of mean annual maximum temperature, by 0.47, 0.87, and 1.38 °C in the 2020s, 2050s, and 2080s, respectively. Compared to the B2 scenario, the A2 scenario has a faster increasing trend. The increase largely includes all months for all time horizons except April (Table 19.5, Fig. 19.10c, d).

Table 19.5 Future absolute maximum temperature changes of A2 and B2 scenarios

C. Minimum Temperature

The projected minimum temperature shows an increasing trend in all time horizons. In this case, both the A2 and B2 emission scenarios generate the future minimum temperature in a similar manner. For the A2 scenario, the mean annual minimum temperature increases by 0.55, 1.06, and 1.83 °C, and for the B2 scenario by 0.50, 0.87, and 1.29 °C in the 2020s, 2050s, and 2080s, respectively. The mean monthly variation of minimum temperature is higher than that of maximum temperature. For both the A2 and B2 emission scenarios, the minimum temperature is expected to increase from October to June. The months of July, August, and September differ from the others; in August especially, a decreasing trend is expected to dominate (Table 19.6, Fig. 19.10e, f).

Table 19.6 Future absolute minimum temperature changes of A2 and B2 scenarios

Generally, the projected minimum and maximum temperatures in all time horizons are within the range projected by the IPCC, which indicates that the average temperature will rise by 1.4–5.8 °C toward the end of this century. In relation to this, the maximum and minimum temperature results can be linked to the IPCC emission scenario storylines: the increment for the A2 scenario is greater than for the B2 scenario because A2 is a medium-high scenario that produces higher carbon dioxide concentrations than the medium-low B2 scenario.

4 Conclusion

HadCM3, one of the global climate models, was downscaled for the Upper Gilgel Abay River catchment in this research using the Bahir Dar meteorological station climate variables as predictands. The statistical downscaling model (SDSM), which is a multiple regression model, was the tool used to downscale the GCM, considering the IPCC climatological baseline (1961–1990). The predictors supplied by HadCM3 comprise daily observed predictor (NCEP) data and daily GCM predictor data developed for the A2 (medium-high emissions) and B2 (medium-low emissions) scenarios.

Downscaling results for the baseline predictor variables (NCEP) showed that maximum and minimum temperature gave a better R² for NCEP reanalysis versus observed data, with values ranging from 0.56 to 0.66. This shows that future projections of maximum and minimum temperatures would be well replicated. The precipitation computation, on the other hand, showed that the calibrated model performed poorly in replicating the independent data set, with R² of 0.24 and 0.25 for calibration and validation, respectively. This is due to the complicated nature of precipitation processes and their distribution in space and time. However, when the daily data are aggregated to mean monthly values, the observed data are simulated better.

Results of downscaling for future projections of climate variables showed that rainfall experiences a mean annual increase of 2.21, 2.23, and 1.89 % for the A2 scenario in the 2020s, 2050s, and 2080s, respectively. Mean annual increases of rainfall are also expected in the B2 scenario, with 2.06, 1.85, and 0.36 % in the 2020s, 2050s, and 2080s, respectively. The percentage changes of both the A2 and B2 scenarios showed that the trend does not increase uniformly; instead, it differs from one time horizon to another. Similar to the findings of Abdo et al. (2009), the mean annual rainfall variation is less than the mean monthly variation. The variation was clearly observed from one month to another and also from one time horizon to another.

Maximum temperature is expected to increase in the future time projections, as the results showed. The projected mean annual maximum temperature shows an increasing trend for all time horizons, by 0.43, 1.05, and 1.92 °C for the A2 scenario in the 2020s, 2050s, and 2080s, respectively. The B2 scenario also shows an increase of mean annual maximum temperature of 0.47, 0.87, and 1.38 °C in the 2020s, 2050s, and 2080s, respectively. Minimum temperature projections for the A2 scenario showed mean annual increases of 0.55, 1.06, and 1.83 °C, and for the B2 scenario 0.50, 0.87, and 1.29 °C in the 2020s, 2050s, and 2080s, respectively. The mean monthly variation of minimum temperature is higher than that of maximum temperature. Compared to the B2 scenario, the A2 scenario has a faster increasing trend because A2 is a medium-high scenario that produces higher carbon dioxide concentrations than the medium-low B2 scenario. Based on the results of several GCMs using the data collected by the IPCC Data Distribution Centre (IPCC-DDC), future warming across Africa will range from 2 °C (low scenario) to 5 °C (high scenario) by 2100 (Shaka 2008). Therefore, the results obtained from HadCM3 are supported by the range given by the IPCC.

Generally, this work considered only one climate model. However, climate change and climate change impact studies will be more fruitful if they account for different GCMs, emission scenarios, and downscaling techniques in order to minimize the uncertainties arising from the assumptions and parameterizations of climate model representations.