
Definition of the Subject

This entry provides a brief introduction to the computer models of the atmosphere used for climate studies. The concepts of atmospheric forcing and response are developed and used to highlight the importance of clouds and aerosols to the climate system and the many uncertainties associated with their representation. Many processes that are important to the accurate representation of clouds and aerosols for climate are subgrid scale, and present both physical and computational challenges in atmospheric modeling. Other factors contributing to uncertainties in models are discussed, and some remaining challenges in atmospheric models are introduced.

Introduction

This entry provides a brief description of models of the atmosphere used for climate studies. These models can be part of a coupled climate system model, as described by Gent in the entry Coupled Climate and Earth System Models elsewhere in the section Climate Change Modeling and Methodology, but they can also be used separately, with prescribed values for surface fields or simpler treatments of surface processes.

The atmospheric component of climate models can vary enormously in complexity. Simple atmospheric models based on energy balance arguments can be run on a laptop to provide rapid estimates of globally and annually averaged properties of the atmosphere, such as surface temperature (e.g., [22]). The simpler types of atmospheric models (Energy Balance Models and Models of Intermediate Complexity) are discussed in the entry Climate Change Projections: Characterizing Uncertainty Using Climate Models by Sanderson and Knutti, and by Edmonds et al.
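
To make the energy balance idea concrete, the minimal sketch below balances absorbed sunlight against blackbody emission for the globe as a whole; the albedo and effective emissivity values are illustrative assumptions, not parameters from [22].

```python
# Zero-dimensional energy balance model: absorbed solar = emitted infrared,
#   S0 * (1 - albedo) / 4 = emissivity * sigma * T**4,  solved for T.
# All parameter values here are illustrative assumptions.

SIGMA = 5.670e-8    # Stefan-Boltzmann constant (W m-2 K-4)
S0 = 1361.0         # total solar irradiance (W m-2)
ALBEDO = 0.30       # planetary albedo (assumed)
EMISSIVITY = 0.61   # effective emissivity standing in for the greenhouse effect (assumed)

def equilibrium_temperature(s0=S0, albedo=ALBEDO, emissivity=EMISSIVITY):
    """Globally and annually averaged surface temperature from the 0-D balance."""
    absorbed = s0 * (1.0 - albedo) / 4.0           # averaged over the sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(f"{equilibrium_temperature():.1f} K")        # roughly 288 K with these values
```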

The much more elaborate models typically used in Earth system studies are capable of simulating the distributions (in space and time) of hundreds of atmospheric fields and processes, the interactions between those fields and processes, and their response to external forcing. In this entry, the focus is on these more complex atmospheric models. Such models are frequently called General Circulation Models (GCMs) or Atmospheric General Circulation Models (AGCMs); the term GCM will be used here. More detail can be found in the textbooks by Washington and Parkinson [20], Jacobson [5], and McGuffie and Henderson-Sellers [9].

GCMs share a great many features with the weather prediction models described by Bacmeister in this section Climate Change Modeling and Methodology. Both use the “equations of motion” (simplified versions of the Navier–Stokes equations for fluid flow, coupled with thermodynamic and mass conservation equations) to describe the evolution of the atmosphere, and both use parameterizations, but the way the models are used and their focus differ (discussed further below).

A nice history of climate science and the development of weather and climate models can be found in Weart [21]. The first incarnation of atmospheric models solved on computers can be traced back to the efforts of a small group of meteorologists and physicists initially led by John von Neumann and later Jule Charney near Princeton, New Jersey, soon after World War II. That effort started with the solution of simplified versions of the equations of motion on the most powerful computers available at the time (less powerful than the laptops in use today).

The complexity of present-day models has increased enormously, and large communities have grown up around those models. There are perhaps a dozen comprehensive independent GCMs in use today around the world, and many more prototypes used for study and development. Those communities include scientists and computer staff engaged in the development of the basic model, including testing and evaluation, as well as scientists that use the model as a tool for investigating climate science.

A convenient example of a community activity focused on a particular GCM is the Community Atmosphere Model (CAM; http://www.cesm.ucar.edu/working_groups/Atmosphere/) project, an activity started in the USA about 30 years ago. That model is part of the larger Community Earth System Model (CESM) project (http://www.cesm.ucar.edu).

CAM is a computer code of roughly 400,000 lines of Fortran 90. It can be run in a variety of configurations, each optimized for a different purpose, for example:

  • Configurations useful for paleoclimate problems. These simulations sometimes require different positions of the continents. Simulations might be performed over millennia to explore climate change in response to orbital changes, or the Sun’s luminosity.

  • Idealized model configurations that simplify the Earth system by assuming the underlying surface is an “aquaplanet” (a world entirely covered by water).

  • Configurations able to simulate the reactive photochemistry important for understanding the evolution of ozone distributions in the middle atmosphere handling hundreds of trace constituents, and many chemical reactions between those constituents.

  • Configurations useful for studying the causes of climate change arising from natural and human influences over the last two centuries, and for producing projections given various scenarios of future change.

There are many other model configurations, and this list is not exhaustive.

Detailed technical notes and scientific papers describing these types of applications can be found on the web sites mentioned above. The model is “open source,” and it can be downloaded and run by anyone (provided a sufficiently powerful computer is available to perform a desired calculation). Archives of previous simulations are also available for download and examination.

The CESM and CAM projects are perhaps the largest of the GCM modeling activities in existence today, and so they provide a convenient example to discuss and describe, but perhaps a dozen other activities around the world have similar capabilities.

The rest of this entry discusses the “generic” characteristics of this kind of climate model.

Global GCMs are generally run at resolutions that resolve horizontal features larger than hundreds of kilometers and vertical variations of a few hundred meters. These resolutions are somewhat lower than those used by weather models, where more detail is often needed. GCMs frequently include representations of processes and variables that are neglected or treated more simply in weather models (e.g., weather models often neglect details of the evolution of aerosols, or the slow evolution of greenhouse gases that affect the Earth system over longer timescales than are important for weather). GCMs may also include external forcing terms (e.g., variations in solar fluxes, or a historic database of volcanic eruption information) that are neglected by weather models.

Weather prediction models have typically been optimized to provide information about local features of the atmosphere over shorter time periods at higher resolution. Because initial conditions for weather models are constantly being “reset” to observed values, less attention has been paid to processes that affect the simulation over longer timescales (months to years). Climate models, on the other hand, focus on a description of the subtle balances and feedbacks occurring between processes and tend to describe these relationships through statistics of their long-term behavior. For many applications, climate models ignore initial conditions (weather modelers have traditionally viewed their simulations as an initial value problem; climate modelers, as a boundary value problem). A focus on statistical properties necessarily requires “multiple samples” from a distribution, with less attention on initial conditions, and more attention on the processes that control the model equilibrium or produce a transition from one climate regime to another. These points of view are changing, as discussed below.

This divergence in focus between weather and climate has led to differences in model design and configuration. Climate scientists have developed models that allow simulations over centuries or millennia. Weather models provide much higher resolution information, but those simulations are often for periods of only a few days or weeks.

GCMs being used around 2010 divide the atmosphere into columns about a hundred kilometers on a side with 30–50 layers vertically (see Fig. 6.1). Most of the focus is on the atmosphere within the 40 km nearest the Earth’s surface. Weather models may use columns as narrow as 10 km on a side with two to three times the number of vertical layers. Global weather models thus divide the atmosphere into volumes about 200 times smaller than climate models. Computational and accuracy constraints require that model time steps decrease in proportion to the size of the model volume. A reasonable first guess at model cost (the number of floating point operations required to complete a simulation of fixed length) is that it scales as the cube of the model resolution. Figure 6.1 shows a typical type of atmospheric model grid (in this case uniform in latitude and longitude), but other discretizations are possible (see Bacmeister, this volume).
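
That cubic scaling follows from a simple counting argument, sketched below: refining the horizontal grid spacing by some factor multiplies the number of columns by that factor squared and, because the time step must shrink in proportion to the grid spacing, multiplies the number of time steps by the same factor again. The grid spacings used are assumptions for illustration.

```python
# Rough cost scaling of a global model with horizontal resolution.
# Cost ~ (number of columns) x (number of time steps), with the time step
# shrinking in proportion to the grid spacing (a CFL-like constraint).

def relative_cost(dx_ref_km, dx_new_km):
    """Cost of a fixed-length simulation at dx_new_km relative to dx_ref_km."""
    refinement = dx_ref_km / dx_new_km
    columns = refinement ** 2        # 2-D horizontal grid
    timesteps = refinement           # dt proportional to dx
    return columns * timesteps       # ~ refinement cubed

# Going from a ~100 km climate grid to a ~10 km weather-like grid:
print(relative_cost(100.0, 10.0))    # -> 1000.0, i.e. about 10**3 times more expensive
```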

Fig. 6.1

Typical discretization of a GCM or weather model (Figure from http://www.oar.noaa.gov/climate/t_modeling.html)

This difference in horizontal and vertical resolution produces significant differences in the way some features important to weather and climate are represented in these models. An example is shown in Fig. 6.2, which displays the surface topography for North and South America at a 200-km horizontal resolution typical of climate models and a 20-km resolution typical of weather models. The very sharp, small-scale topographic features like the Andes have been significantly “smoothed out” at low resolution, differing in altitude by almost a factor of 2 and spread over a much broader horizontal extent, with measurable impacts on the role of the Andes as a barrier to winds, and their role in influencing precipitation patterns. The need to resolve some features important for climate at high resolution while minimizing computational cost is one of the motivations for the development of regional climate models (see Regional Climate Models by Leung), or models with variable resolution meshes to put the resolution where it is needed.

Fig. 6.2

The topography used by a typical “low-resolution” global atmospheric model (approximately 2° horizontal resolution, upper panel) and the high-resolution topography more typical of weather models, and next-generation climate models (about 0.25° resolution). Note the factor of 2 difference in height of the Andes and similar differences over the Rockies of N. America

Figure 6.3 shows a typical layer structure for an atmospheric climate model. Most models in use today rewrite their equations to employ a vertical coordinate that follows the terrain near the surface, with a gradual transition to a fixed height or pressure coordinate at higher altitudes. Model layers are generally concentrated near the Earth’s surface to deal with the complexity of processes taking place there due to boundary layer effects, terrain, interactions with surface models, and the fact that mankind lives in that region. Models typically use layers 10–100-m thick near the surface and gradually decrease that resolution to use 1–2-km-thick layers at higher altitudes. Other coordinate systems have also been considered for climate models. The equations of motion are expressed more simply with height- and pressure-based vertical coordinates, but treatment of boundary conditions is more complex. Some modeling groups have explored the use of vertical coordinates that approximately follow a material surface. These formulations result in more complex models with coordinate surfaces that can also intersect the surface of the Earth, introducing additional complexity in the treatment of boundary conditions, but the benefit is a model with much more accurate treatment of vertical transport.
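
A common concrete form of such a terrain-following coordinate is the hybrid sigma-pressure coordinate, in which the pressure of each model level is a blend of a reference pressure and the local surface pressure. The sketch below uses illustrative coefficient profiles, not the tables of any particular model.

```python
import numpy as np

# Hybrid sigma-pressure coordinate:  p(k) = A(k) * p0 + B(k) * ps
# Near the surface B ~ 1, so levels follow the terrain; aloft B -> 0 and the
# levels become constant-pressure surfaces.  A(k) and B(k) here are illustrative.

nlev = 30
p0 = 1000.0e2                           # reference pressure (Pa)
eta = np.linspace(0.01, 1.0, nlev)      # nondimensional vertical coordinate (top to bottom)
B = eta ** 2                            # assumed terrain-following weight, growing toward the surface
A = eta - B                             # assumed pure-pressure part, dominant aloft

def level_pressures(ps):
    """Pressure (Pa) of each model level for a given surface pressure ps (Pa)."""
    return A * p0 + B * ps

print(level_pressures(1013.0e2)[[0, -1]])   # top-most (~10 hPa) and bottom-most (= ps) levels
```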

Fig. 6.3

Typical distribution of layers in an atmospheric model. Top model layer will reach around 40 km for models with a focus on the troposphere and much higher for models interested in middle atmosphere problems (Figure from http://www.cesm.ucar.edu/models/atm-cam/docs/description/description.pdf)

Climate problems require descriptions of physical interactions across many scales, leading to a very demanding challenge in physics and computational mathematics. Small-scale phenomena operating on the scale of molecules (e.g., chemistry, phase change of water, and radiative transfer) influence larger-scale features and eventually have global impacts. The physics and chemistry occurring at those small scales influence fluid motions through radiative heating and phase change to produce important phenomena like clouds, with important features at scales of meters to kilometers, for example, updrafts and downdrafts. A brute-force representation of fluid motions like up- and downdrafts would require a model that explicitly resolves those features, requiring discretizations with cells as small as a few meters on a side. It is not feasible to represent the whole globe at this resolution, and other methods are required. For this reason, many processes of climate relevance involve treatment of processes and features that are inevitably smaller than a GCM cell, or “subgrid.” Figure 6.4 shows a satellite image of a cloud system in the equatorial eastern Atlantic with a typical GCM grid superimposed upon it in the right panel. The cloud features are clearly below the resolution of the model. A zoomed image of a small portion of a grid cell (outlined in red) is presented in the left panel, showing important cloud features at yet finer scales.

Fig. 6.4

A satellite image (courtesy NASA) of the Eastern Central Pacific showing the cloud features in the context of a typical climate model resolution. The blue lines of the right panel show a superimposed grid typical of a low-resolution atmospheric model. A zoomed image of the small red box shown in the right panel appears on the left, showing even more detail

Parameterizations have been developed in order to represent processes that are important to the atmosphere but occur at scales well below those the model is able to resolve (see Stensrud [16]). The equations describing the fundamental physics are sometimes abandoned, or simplified, through an “abstraction” that approximates the process. Sometimes those simplifications are based upon formal mathematical decompositions, like the turbulence parameterizations that depend upon “Reynolds averaging” of the equations of motion, along with appropriate choices for the constitutive equations, and “closure assumptions.” In other parameterizations, the complete equations are simplified to speed the calculation: for example, the equations of radiative heating are often approximated by assuming plane-parallel radiative transfer, and integrated over wavelength intervals to capture the essential absorption and emission for the gases and condensed species present in the atmosphere.
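
For reference, the Reynolds averaging mentioned above splits each field into a grid-cell mean and a subgrid fluctuation; written schematically for a quantity φ transported by the vertical wind w,

$$ \phi = \overline{\phi} + \phi^{\prime}, \qquad \overline{w\,\phi} = \overline{w}\;\overline{\phi} + \overline{w^{\prime}\phi^{\prime}} $$

The covariance term (the subgrid eddy flux) cannot be computed from the resolved variables alone; supplying an expression for it in terms of resolved quantities is the role of the closure assumption.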

Other parameterizations are more explicitly “empirical,” employing process representations which are based upon observed behavior of the atmosphere. For example, some parameterizations for convection [2, 8] attempt to represent the overturning that occurs in the atmosphere when less dense air resides below more dense air by simply adjusting the profiles of temperature and water vapor toward prescribed profiles that agree approximately with observations. Profiles can be defined for shallow (nonprecipitating) and deep precipitating convection based on both observational evidence and theoretical considerations. So, rather than identifying a mechanism through which convection operates to reduce instabilities in the atmosphere, the parameterization makes robust statements about the “end state” of an adjustment process, and introduces empirical tendencies in the evolution equations that adjust the profile to agree with the observed profiles. This type of parameterization is more frequently used in weather prediction than in climate modeling because these empirical parameterizations may not express enough of the physical underpinning to allow inferences to be made about how, why, and where that process is important, or allow extensions to handle additional model needs. Adjustment schemes, for example, would have difficulty handling convective transport of trace species, or adjusting to changes in the fundamental physics that might be occurring as the climate changes (e.g., the response of convective precipitation to pollution).
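
A minimal sketch of the adjustment idea, in the spirit of (but not reproducing) schemes like [2, 8]: where convection is diagnosed, temperature is simply relaxed toward the prescribed reference profile over an assumed adjustment timescale, and an analogous tendency would be applied to water vapor.

```python
import numpy as np

def adjustment_tendency(T, T_ref, convecting, tau=3600.0):
    """
    Illustrative adjustment-style convection tendency.

    T          : model temperature profile (K)
    T_ref      : prescribed reference profile toward which the scheme adjusts (K)
    convecting : boolean mask marking levels where convection is diagnosed
    tau        : assumed adjustment timescale (s)

    Returns dT/dt (K/s): relax toward T_ref where convecting, zero elsewhere.
    """
    return np.where(convecting, -(T - T_ref) / tau, 0.0)
```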

A third class of parameterizations resorts to “process-based models.” These parameterizations replace the basic physics with a conceptual model that is assumed to mimic the processes that occur in the real world. An example of a process-based parameterization can be seen in the use of a “bulk plume model” to represent the role of convective clouds in a model column; this type of model is used in the majority of climate models in use in 2010 (see, e.g., [23]).

In a bulk plume model, the convective overturning occurring in the atmosphere in clouds, like those seen in the right panel of Fig. 6.5, is envisioned to take place through an ensemble of up- and downdrafts. The updrafts are assumed to begin at the “level of free convection” (the level where a parcel lifted from the surface will be both saturated and buoyant with respect to the ambient environment). The updraft is assumed to be driven by heat released during condensation taking place in parcels within the updrafts. The condensation produced in the updraft is assumed to produce rain. The rain falling into surrounding air is assumed to partially evaporate and initiate a saturated downdraft. These up- and downdrafts carry air from one level to another, entraining air from outside the cloud in the lower part of the cloud layers to dilute the updrafts, and detraining air to the environment aloft, to moisten it and redistribute heat. The ensemble of updrafts is represented by a single “bulk” updraft plume that entrains and detrains at multiple levels and a single “bulk” downdraft driven by evaporating rain to produce a conceptual model of convection like that described in the left panel of Fig. 6.5. The details of the representative up- and downdrafts are in turn controlled by specifications of the rate of entrainment and detrainment, how condensation, conversion of condensate to precipitation, and evaporation occurs within those up- and downdrafts, and a “closure assumption” that describes how the buoyancy generation occurring outside of the clouds is reduced by the mass fluxes within the up- and downdrafts. These parameterizations are obviously gross simplifications of the way clouds behave in the real world. The parameterizations introduce many “uncertain parameters” that require tuning to mimic the behavior of clouds in the real world.
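
Schematically, such a bulk plume is governed by a mass budget and a budget for each transported quantity (heat, moisture, or a tracer), with entrainment E adding environmental air and detrainment D returning plume air to the environment. One simplified steady-state form, with notation chosen here for illustration, is

$$ \frac{\partial M_u}{\partial z} = E - D, \qquad \frac{\partial \left( M_u \phi_u \right)}{\partial z} = E\,\overline{\phi} - D\,\phi_u + S_{\phi} $$

where Mu is the updraft mass flux, φu and the overbarred φ are the in-plume and grid-mean values of the transported quantity, and Sφ collects sources and sinks such as condensation and conversion to precipitation. The closure assumption then sets the overall magnitude of Mu, for example, from the rate at which convective instability is generated in the column.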

Fig. 6.5

The conceptual model used to produce a parameterization of a convective cloud like that seen in the figure at right. See text for details

The reader will note that the citations chosen here describe convective parameterizations written in, or prior to, the 1990s. Progress has been slow in developing better formulations for convection. Most parameterizations of convection have made progress “around the edges” by incrementally improving some aspect of the parameterization, like “closure assumptions” (the assumptions that determine how buoyancy excesses are removed from a column) or the “microphysical formulations” (controlling the ways that condensation, conversion to precipitation, and evaporation operate within the up- and downdrafts). Cloud parameterizations are viewed by climate scientists as one of the least satisfactory components of a GCM [12]. Convective parameterizations based upon plume models have the advantage over very simple formulations like the Betts-Miller scheme of providing a physical picture (albeit crude) of how convection works that permits the expression of conservation laws (conservation of energy, enthalpy, momentum, mass, etc.). Plume model parameterizations also allow extensions to represent interactions between aerosols and clouds, for example, or the transport of soluble and insoluble trace constituents through vigorous convection, but they still have many limitations. Recently, a new class of parameterizations has begun to be explored, in which a “nearly cloud-resolving model” is embedded within each column of a GCM (e.g., [6]). These “super-parameterizations” of clouds have their own strengths and weaknesses: they use equations which are very close to the original equations of motion, but those equations are solved at scales that do not really resolve cloud motions. They also increase the cost of the model by at least a factor of 100 over models using more traditional parameterizations, so their behavior for climate problems has not yet been thoroughly explored. Other new frameworks for cloud parameterization have also been suggested [1] that present an interesting approach to extending conventional parameterizations, but there has not yet been time to evaluate them.

Clouds and Aerosols in Climate Models

The accurate representation of the effects of clouds and aerosols in the atmosphere is one of the most difficult and challenging tasks in climate modeling at the time this entry is written.

This task is important because clouds play many roles in the atmosphere: they scatter and absorb radiant energy in both the solar (shortwave) and infrared (longwave) part of the energy spectrum, reflecting sizable amounts of sunlight (shortwave energy) back to space and thus acting to cool the planet, but they also hinder the escape of heat/energy in the longwave, and thus can warm the Earth. Clouds are also reservoirs for heat and water acting to temporarily store energy and water in condensed phases, then return it to the atmosphere at other times and locations; they are sites for important in situ atmospheric chemistry and affect photolysis rates in both clear and cloudy regions of the atmosphere by changing the actinic flux; they are regions responsible for the rapid transport of atmospheric trace constituents from the lower to the upper atmosphere through vigorous convection; and they are also entities responsible for the removal of soluble species (gases and particles) through “wet deposition” processes. And, as discussed in the previous section, they are also incredibly difficult to represent accurately and comprehensively in GCMs.

But clouds are also strongly affected by “aerosols,” the small (solid and liquid) particles with sizes less than about 10 μm that are suspended in the atmosphere. Aerosols have both natural (e.g., sea-salt, dust, and some organic compounds released by vegetation) and anthropogenic origins (e.g., the pollution released by power plants, cars, trucks, agricultural burning, etc.). Like clouds, aerosols scatter and absorb radiant energy in both the solar and infrared part of the energy spectrum, and thus play a direct role in the energy budget of the planet. Aerosols also affect air quality and can affect ecosystems in a number of ways (e.g., the mobilization of dust particles from deserts, their subsequent transport by winds, followed by deposition; dust deposition is believed to be a source of iron as a nutrient to ocean biota). In addition, some aerosols act as sites that facilitate the phase change of water from vapor to liquid, or ice, at far lower vapor pressures than would be needed for the phase change to occur in the absence of the particles. The aerosols that act as sites for water vapor condensation to form liquid cloud drops are known as Cloud Condensation Nuclei (CCN); those that are sites for formation of ice crystals are called Ice Nuclei (IN). Different types of aerosols are more or less effective as CCN and IN, and aerosols “compete” with each other and nearby cloud drops and ice crystals for water vapor, making their interactions extremely complex and hence difficult to model (see, e.g., Seinfeld and Pandis [14] and Lohmann and Feichter [7] for complementary discussions on some of these issues).

The aerosols that become part of cloud drops or ice crystals will eventually be removed from the atmosphere when those drops or crystals get large enough to precipitate out (this is termed nucleation scavenging). Aerosols are also removed as precipitation (raindrops, snow, hail, and graupel) falls and “collects” particles along the way (termed below cloud scavenging). The treatment of aerosols thus depends intimately on the treatment of clouds in GCMs.

Aerosols and clouds thus interact in many ways. It is easy to find examples of situations where aerosols can affect the cellular structure of clouds and their reflectivity. Figure 6.6 shows a dramatic example of the influence of pollution from ship emissions on the brightness of low clouds near the ocean surface. Climate scientists believe that anthropogenic emissions of many aerosol types, from pollution, biomass burning, agriculture, etc., affect both the reflectivity of low clouds with impact on how clouds cool the planet, and the partial opacity of the high ice clouds that hinder the escape of heat from the planet.

Fig. 6.6

NASA image (http://eoimages.gsfc.nasa.gov/images/imagerecords/5000/5488/ShipTracks_TMO_2005131_lrg.jpg)

The subtle interactions between clouds and aerosols, and their interactions with other components of the climate system, produce some of the largest uncertainties in interpreting the signatures of climate change over the twentieth century and complicate modelers’ abilities to produce accurate projections of climate change in the future.

These issues are discussed in great detail in the fourth assessment of the Intergovernmental Panel on Climate Change (AR4, IPCC2007), and it is not possible to provide much detail here. The reader should consult AR4 and the references therein for more detail.

Figure 6.7 shows globally averaged radiative forcing estimates for various forcing agents from IPCC2007. Changes in the atmospheric abundance of greenhouse gases and aerosols, in solar radiation, and in land surface properties alter the energy balance of the climate system. These changes are expressed in terms of a “radiative forcing” (W/m2), a term used to compare how a range of human and natural factors drive warming or cooling trends on global climate. The three estimates related to aerosols (surface albedo, direct effect, and cloud albedo effect) are particularly noteworthy when compared to other forcing agents.

Fig. 6.7

From the summary for policymakers, IPCC AR4 [4] showing the globally averaged radiative forcing estimates for various forcing agents along with uncertainty estimates and level of scientific understanding (LOSU)

Surface albedo changes through Black Carbon deposition on snow are estimated to have a relatively small warming effect (positive forcing) on the planet. The forcing is a result of the decrease in reflectivity of the snow that occurs when the dark material is deposited on the snow surface. Since sunlight is more easily absorbed by the darker surface in this situation, the snow melts more quickly, revealing other darker surfaces below (vegetation, dirt, sea-ice, etc.), which further increases the warming effect.

The “direct effect” of aerosols refers to the ability of aerosols to reflect or absorb sunlight as it enters the atmosphere. When the aerosols scatter sunlight back to space, the implied forcing is identified as negative (cooling) because less energy is absorbed by the planet. When aerosols absorb sunlight, they reduce the planetary albedo and warm the planet, producing a positive radiative forcing. The net direct radiative effect is estimated to be negative; models estimate that aerosols reflect more energy back to space than they absorb, but the uncertainty indicated by the horizontal whiskers is very large, and the level of scientific understanding (LOSU) is judged to be “low,” as indicated in the figure.

The aerosol “indirect effect” refers to the role that aerosols play on clouds. Increasing aerosols (e.g., from pollution) can increase the number of particles available for cloud drops or ice particles to form on by acting as CCN or IN. Those “extra” cloud drops or ice particles introduced by the additional CCN and IN compete with the ambient aerosols for water vapor, and the result is that the cloud drops and ice crystals will be smaller than they would be in the absence of pollution. Smaller drops and particles scatter sunlight more efficiently (as demonstrated by simple physical arguments and the ship tracks seen in Fig. 6.6), and they frequently also precipitate less efficiently, affecting cloud lifetime and areal extent. Models estimate the indirect effect to produce a very large negative forcing, but the whiskers again indicate that the uncertainty is very large, and the LOSU very low. While the processes that produce cloud brightening in the presence of additional CCN and IN are well understood, there are numerous other factors that complicate the response of cloud reflectivity enormously, and scientists know that climate models treat these other factors very crudely, and inaccurately (see Stevens and Feingold [17]). For example, increasing the number of cloud drops and decreasing their size can also cause cloud drops to evaporate more readily, so the cloud reflectivity can actually decrease, and changing the reflectivity of particular regions of a cloud system can induce changes in the cloud dynamics (the intensity and extent of up- and downdrafts that control the precipitation and cloud areal extent; see, e.g., [19]), changing the cloud morphology and thus its radiative forcing.

The take-home message from the figure and discussion above is that aerosols of anthropogenic origin are currently estimated to have offset a substantial fraction of the positive forcing (heating) produced by increasing greenhouse gas concentrations over the twentieth century, but that result is very uncertain, and the level of understanding is low. These uncertainties in the estimates of the role of aerosols, clouds, and their interaction are strongly influenced by the deficiencies in the model representation of these processes, and by remaining deficiencies in our understanding of how these processes act.

Since it is not known how much of the twentieth-century climate change (e.g., changes in surface temperature or precipitation) should be assigned to aerosol forcing and how much to the changes in greenhouse gas concentrations, it is much more difficult to interpret the system response to that forcing and, using the understanding developed from simulations of past climate, to provide accurate projections of how climate will change in the future.

Climate Forcing and Response

Climate change can be thought of as the response of the Earth system to the combination of externally imposed natural forcings (e.g., solar variability and volcanic activity that changes the albedo of the planet) and anthropogenic forcings (e.g., greenhouse gases and aerosols), modulated by the internal processes that allow the system to adjust to the imposed forcing. The response is strongly influenced by internal feedbacks within the Earth system. Negative feedbacks increase the rate at which energy is lost as the system warms in response to a positive external forcing, damping the warming. Positive feedbacks can amplify that warming (see the discussion in the entry Coupled Climate and Earth System Models by Gent). The relative importance of positive and negative feedbacks in the climate system controls the amplitude of climate change produced from a given amount of external forcing.

There are a variety of ways to characterize the ratio of forcing to response. One convenient measure is “climate sensitivity.” Climate sensitivity is sometimes expressed in terms of a feedback parameter λ (in W m−2 K−1) or its inverse, 1/λ. Often, the sensitivity is instead quoted as the change in globally averaged surface temperature ∆T that would occur in a model if it were allowed to equilibrate to the forcing ∆F associated with a doubling of CO2:

$$ \Delta T_{2\times\mathrm{CO}_2} = \lambda \, \Delta F_{2\times\mathrm{CO}_2} $$

It is generally assumed that λ can be expressed as the sum of a sequence of feedbacks λi where i indicates the process responsible for the feedback:

$$ \lambda = \sum_i \lambda_i $$

The equilibrium change in surface temperature is a somewhat arbitrary measure of climate response, and other measures have also been explored. The oceans have a very large heat capacity, and the rate of transport of heat into the deep oceans is very slow, which means that it would take a very long time (thousands of years) to reach an equilibrium. It is possible, with clever analytic methods, to estimate this equilibrium sensitivity, and other definitions are also used (e.g., the “transient climate sensitivity”) to describe the ratio of forcing to response. The distinctions are not critical for this discussion, and the equilibrium definition is followed here.

This value is estimated in IPCC2007 (AR4) as “likely to be in the range 2–4.5°C with a best estimate of about 3°C, and is very unlikely to be less than 1.5°C. Values substantially higher than 4.5°C cannot be excluded, but agreement of models with observations is not as good for those values.”

The first, largest, and perhaps easiest feedback to describe is the so-called Planck Feedback or Planck response, which describes the increase in emission that will occur as temperature increases due to a positive forcing. If one assumes from theory and detailed radiative calculations that the change in forcing F produced by a change in CO2 from concentration C0 to C (e.g., Myhre [25]) can be written as

$$ F = k \ln\left( C / C_0 \right), \quad \text{where } k \approx 5\ \mathrm{W\,m^{-2}} $$

then a doubling of CO2 will result in an increase in the forcing of roughly 4 W/m2. That positive forcing will tend to increase surface temperature. If one then assumes that (1) the emission temperature of the planet is proportional to the surface temperature, (2) the planet radiates energy according to the Stefan–Boltzmann law (σT⁴), and (3) no other atmospheric properties (clouds, water vapor, etc.) change, then model calculations indicate that λ is about −3.2 ± 0.1 W m−2 K−1, where the sign is chosen negative to indicate that the feedback is negative. It follows that an increase of 4 W m−2 in forcing will result in a change of approximately 1 K in surface temperature.
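
A back-of-envelope version of that argument, using only the blackbody assumptions listed above (the numbers are illustrative; the −3.2 W m−2 K−1 value quoted from models accounts for spatial structure that this simple estimate ignores):

```python
import numpy as np

SIGMA = 5.670e-8     # Stefan-Boltzmann constant (W m-2 K-4)
T_EMIT = 255.0       # effective emission temperature of the planet (K), assumed

# Forcing from a CO2 doubling, using F = k * ln(C/C0) with k ~ 5 W m-2 as in the text
k = 5.0
dF = k * np.log(2.0)                        # ~3.5 W m-2, i.e., "roughly 4 W m-2"

# Planck (blackbody) feedback: d(sigma*T^4)/dT = 4*sigma*T^3, written as a negative feedback
lam_planck = -4.0 * SIGMA * T_EMIT ** 3     # ~ -3.8 W m-2 K-1 for a pure blackbody

dT = -dF / lam_planck                       # warming required to restore balance
print(f"dF = {dF:.2f} W m-2, lambda = {lam_planck:.2f} W m-2 K-1, dT = {dT:.2f} K")
```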

This feedback estimate is robust, with very little uncertainty, but the roughly 1 K warming per CO2 doubling that it implies is much lower than the climate sensitivity AGCMs report. The higher values of climate sensitivity are a result of the amplification of the response by other positive feedbacks that exist in the climate system (see, e.g., [4, 15]).

The largest positive feedback is believed to be the water vapor feedback: observations and models indicate that relative humidity (the ratio of ambient water vapor to the saturation value at a given temperature) remains approximately constant as temperature changes, particularly at high altitude (5–20 km). Therefore, an increase in temperature (e.g., from CO2) will produce an increase in water vapor, which is the strongest greenhouse gas. That increase in water vapor hinders the escape of energy, and the warming is amplified. IPCC2007 estimated λWV to be about 1.8 W m−2 K−1.

Another important feedback is the “lapse rate feedback” λLR. When estimating the Planck feedback, it was assumed that the temperature change was uniform (in latitude, longitude, and altitude). But it is known that the atmosphere will not respond uniformly to a new forcing. The observed atmospheric lapse rate (vertical temperature gradient, see Glossary) roughly follows a “moist adiabatic lapse rate” in the tropics. That lapse rate is temperature dependent: temperature falls off with height more rapidly at cold temperatures than at warm ones. So a temperature increase introduced near the surface in the tropics will produce adjustments that approximately follow a moist adiabatic lapse rate, and the perturbation will amplify with altitude. Since emission of infrared radiation varies with temperature, it will be more efficient as temperature increases, producing a negative lapse rate feedback that weakens the greenhouse effect. Model studies indicate λLR to be about −0.84 W m−2 K−1.
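
One common approximate expression for that moist adiabatic lapse rate, shown here only to make its temperature dependence explicit, is

$$ \Gamma_m = g \, \frac{1 + \dfrac{L_v r_s}{R_d T}}{c_p + \dfrac{L_v^2 r_s \varepsilon}{R_d T^2}} $$

where rs is the saturation mixing ratio, Lv the latent heat of vaporization, Rd the gas constant for dry air, cp the specific heat of dry air at constant pressure, and ε ≈ 0.622. Because rs grows rapidly with temperature, Γm is small in the warm tropical lower troposphere and approaches the dry adiabatic value at cold temperatures, which is why a surface warming that follows this profile is amplified aloft.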

It is interesting to note that models suggest that the water vapor feedback and lapse rate feedback are strongly (negatively) correlated, and the agreement by models on the sum of these two feedbacks is much more robust than the individual components: λLR+WV = 0.95 ± 0.1 W m−2 K−1.

The surface albedo feedback occurs because an increase in surface temperature due to a positive forcing can melt surface snow and ice. A decrease in ice and snow reduces surface reflectivity, allowing more energy to be absorbed at the surface, producing further warming, and further reducing snow and ice. Soden and Held [15] and IPCC estimate the surface albedo feedback λalb to be 0.26 ± 0.08 W m−2 K−1.

The last feedback to be discussed is the cloud feedback. A variety of observational, theoretical, and modeling studies suggest that low clouds tend to cool the planet by reflecting sunlight back to space. High ice clouds not only reflect sunlight back to space but also have a “greenhouse effect” and hinder the escape of longwave energy to space. Observational and modeling studies indicate that the net effect of high and low clouds is to cool the Earth (cloud reflection dominates the longwave trapping of energy). But it is by no means clear how cloud radiative forcing will change in the presence of external climate forcing. Clouds are so varied and complex that fewer clear general statements emerge to guide inferences about the sign and amplitude of their feedback processes. This is an area of very active research. Soden and Held found that all the GCMs used for AR4 had a positive cloud feedback (0.68 ± 0.37 W m−2 K−1), even though half the models exhibited a reduction in net radiative forcing in response to a warmer climate. They concluded that change in cloud forcing itself is not a reliable measure of the sign or absolute magnitude of cloud feedback due to noncloud feedbacks on the cloud forcing. Note that the uncertainty in feedback amplitude from clouds is about three times larger than that found for the lapse rate + water vapor feedback, or the albedo feedback.

The combination of positive and negative climate feedbacks produces the likely warming range of 2–4.5°C for a CO2 doubling cited in IPCC2007. Climate skeptics contend that the planet is unlikely to warm as much as GCMs predict by arguing that negative feedbacks are missing or underestimated, or positive feedbacks overestimated, in GCMs. These criticisms frequently appear in informal venues (blogs, the popular press, and elsewhere). When they are submitted to peer-reviewed refereed publications, they are taken seriously and scrutinized further. To date, the feedbacks described here have been found to be robust and dominant, and estimates of the range and distribution of climate sensitivity quite robust.

Calibration (Tuning) and Evaluation of GCMs

There are still aspects of the atmosphere that are poorly characterized, and many processes remain crudely represented in GCMs. Even in situations where the correct physics is known, it is often too expensive to include that knowledge with brute-force techniques, and approximations must be employed. Both lack of understanding and process approximation lead to uncertainties, and these uncertainties produce significant variations in the model formulations adopted by groups around the world.

Multiple alternative parameterizations may also exist for a particular process (e.g., convection). Even in the event that a certain configuration of parameterizations is selected, there are many “uncertain” parameters within that configuration. The choices adopted for those uncertain parameters can have strong impacts on the behavior of the model.

So, substantial resources in the modeling community are invested in evaluating the behavior of the model in the presence of these uncertainties, and in selecting the parameterizations to be used in the model, or the values of the uncertain parameters to be used in subsequent simulations. The process of choosing the parameter values is known as “calibration” or “tuning.” Model tuning has historically been performed in a series of stages, and it is something of an “art” that requires a lot of insight from participating scientists, and perhaps multiple repetitions of those stages.

One obvious method of tuning is to compare the behavior of a model, or a model component, to observations that strongly constrain the process, and to adjust the parameters until the simulation agrees with the observations to some tolerance. This tuning procedure can be considered an optimization problem, and it occurs frequently during model development.

For example, a parameterization of deep convection might first be tuned to make sure that it produces approximately the observed rate of precipitation and the observed tendencies of water vapor and temperature (the outputs) at a particular location and time period where deep convection occurs frequently, given a set of measurements of the atmospheric state (the inputs).
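
Cast as the optimization problem described above, that tuning step might look schematically like the sketch below; the toy stand-in for the parameterization, the parameter names, and the misfit measure are all assumptions for illustration and do not describe the procedure of any particular model.

```python
import numpy as np
from scipy.optimize import minimize

# A deliberately toy stand-in for driving a convection parameterization with a
# set of observed forcing cases; in practice this would be the real scheme run
# in a single-column configuration.
def run_convection_scheme(case_strength, entrainment_rate, autoconversion_rate):
    return case_strength * autoconversion_rate / (1.0 + 50.0 * entrainment_rate)

forcing_cases = np.array([1.0, 2.0, 3.0])      # assumed forcing strengths (inputs)
obs_precip = np.array([0.9, 1.8, 2.7]) * 1e-3  # assumed observed precipitation (outputs)

def misfit(params):
    """Sum of squared differences between simulated and observed precipitation."""
    entrainment_rate, autoconversion_rate = params
    sim = np.array([run_convection_scheme(c, entrainment_rate, autoconversion_rate)
                    for c in forcing_cases])
    return np.sum((sim - obs_precip) ** 2)

result = minimize(misfit, x0=[1.0e-3, 2.0e-3], method="Nelder-Mead")
print("tuned (entrainment_rate, autoconversion_rate):", result.x)
```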

After this tuning, evaluation of the parameterization can begin by looking at its behavior at other locations and times not used for tuning. Some insight is gained during this process about reasonable parameter values, and about the sensitivity of the parameterization to variations in the tunable parameters and inputs. Sometimes “single-column model” versions of the GCM are used for this purpose. This is a version of the GCM in which a single column is isolated and all lateral fluxes of information are supplied from observations, allowing the model behavior to be identified in strongly prescribed situations.

The model can also be tuned to optimize relationships after processes equilibrate with each other; the statistical characteristics of the model can be evaluated and compared with the statistical properties of the atmosphere. For example, model top-of-atmosphere energy fluxes could be compared to observed energy fluxes. Small changes in tunable parameters may be made to ensure that the approximate global averages of top-of-atmosphere fluxes are similar to observed values without significantly degrading the agreement with observations at the process level.

The model can also be compared with observations for fields, processes, or situations that were not part of the calibration process. Figure 6.8 provides one example of this kind of evaluation, showing the annually averaged column-integrated water vapor from a long simulation of a climate model (top panel), an estimate of the corresponding observed field (middle panel), and the difference between them. Column-integrated water vapor is actually a rather difficult field to observe, and the observational uncertainty is quite high, but a variety of independent methods are available to provide estimates, and the signatures seen in the difference field are quite robust to the choice of observational dataset. This particular model is quite moist in the tropics compared with the observational estimate.

Fig. 6.8

An example of the difference in the vertically integrated amount of water vapor simulated in a climate model run for 20 years with present-day forcing, compared with a reanalysis product (an estimate produced from observations). The difference is shown in the lower panel

Another interesting evaluation and calibration method is the use of a climate model in “weather forecast” mode. The climate model is started from initial conditions from archives of meteorological conditions produced from a forecast center and run for a brief (few day) simulation. These kinds of simulations are often called “hindcasts.” The evolution of differences between the short simulations and observed fields provides information about how and why errors develop in the model. Individual processes can be examined to help in identifying the role of each process in driving the error growth.
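
The diagnostic at the heart of that approach is simply the growth of error with forecast lead time. A minimal sketch (field names, array shapes, and the synthetic data are assumptions) computes a root-mean-square error of the hindcast against the verifying analyses:

```python
import numpy as np

def rmse_by_lead_time(hindcast, analysis):
    """
    Root-mean-square error as a function of forecast lead time.

    hindcast, analysis : arrays of shape (n_lead_times, n_gridpoints) for one
                         field (e.g., temperature at a given model level).
    Systematic error growth over the first few days points toward fast
    processes (e.g., clouds, boundary layer) as the source of model error.
    """
    return np.sqrt(np.mean((hindcast - analysis) ** 2, axis=1))

# Synthetic example in which the error grows with lead time
rng = np.random.default_rng(0)
analysis = rng.normal(size=(5, 1000))
hindcast = analysis + rng.normal(scale=np.arange(1, 6)[:, None] * 0.1, size=(5, 1000))
print(rmse_by_lead_time(hindcast, analysis))
```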

The strategy can also be extended to longer periods (months or seasons) as part of a strategy sometimes called “seamless prediction” [10]. Models are viewed as representing processes that operate over different timescales. Fast processes (e.g., clouds) respond and produce measurable signatures to forcing on timescales of a day or so, systematic changes to heating rates can perturb planetary-wave structures in the atmosphere on the timescale of 10 days, and ocean and land surface react to changes in wave structure on the timescale of 100 days. On timescales of a thousand days and longer, anomalies will produce modifications to the cryosphere and biosphere. Atmospheric models (without coupling to oceans, ice, and the biosphere) can thus react and produce meaningful information on forcing and response through processes like clouds and planetary waves. Longer timescale phenomena need to be evaluated with coupled models. Seamless prediction paradigms ask that models be evaluated and improved on each timescale to provide increasing confidence in the model.

Once an “optimized” or “calibrated” model configuration has been found, it can be compared with other models, or other configurations of the same model, to quantify its sensitivity to changes in forcing, to identify the reasons that it responds the way it does, and to put that sensitivity in context relative to the behavior of other models. This process is one component of an evolving field of climate change science called “Uncertainty Quantification.”

Remaining Challenges in Modeling the Atmosphere

There are many situations where scientists understand the fundamental physics of a physical process much more thoroughly than they can afford to express in a climate model, and scientists know that it is important to represent those processes more accurately. These are, in a sense, “known unknowns” in the parlance of Donald Rumsfeld. Scientists know that the parameterizations of clouds and aerosols used in climate models today can be formulated more accurately, and they know (in principle) how to do it. The challenge is (1) representing fields and processes that are truly important for the relevant climate problem and (2) expressing knowledge about that process in a computationally tractable way. This requires both scientific study and computational work. Many of these improvements can be achieved by clever revisions to the computational representation of critical processes in models. Scientists are rewriting model components to make more efficient use of evolving computer architectures that are providing enormous increments in computational resources, and developing more accurate and efficient computation methods to represent those processes. It may be possible in a few years to “brute force” problems that in the past required clever but potentially inaccurate approximations.

While some efforts to create a global cloud-resolving model have been made, their computational cost is at least a million times higher than models run at current climate resolution (e.g., [13]). It is currently a challenge just to complete a single annual cycle with models at this resolution. Scientists are also rewriting models to support “variable resolution,” allowing models to provide higher resolution in areas that require it. One of the immediate challenges in this situation is that many of the approximations currently employed in atmospheric parameterizations are “scale dependent,” that is, the quantitative behavior of the parameterization changes as the resolution changes. Current parameterizations are typically “retuned” for different resolutions, and no cloud parameterization that adapts automatically to changes in resolution is yet known, although scale-aware parameterizations of subgrid-scale propagation of waves and wave breaking do exist. It is desirable that parameterizations become “scale aware” in the sense that they adapt to the resolution at which they are used. Parameterizations of subgrid-scale processes should recognize that the definition of the phenomena they are being asked to represent changes as the resolution changes [1]. Scale-aware parameterizations and the model equations in which they are embedded should reduce to “fundamental physics” as model resolution approaches the resolution of the process being represented. This is an area of active research.

There are also challenges for climate models in treating “unknown unknowns,” or “missing physics.” These are phenomena that scientists believe may be important to represent, but that they do not know about or are very unsure of. An example of this situation is the role of organic aerosols in climate. These aerosols are difficult to measure and model. They are semivolatile; the aerosols condense, evaporate, and evolve chemically with atmospheric dilution. Aerosol scientists now believe that these aerosols play a much larger role in the climate system than was originally recognized, but they have only hints as to the complex chain of events that produces these aerosols. Precursor emissions, chemical evolution, and interaction with other aerosols and atmospheric trace species are complex, and they are treated very simply in climate models today.

Finally, there are many ways that climate science can be advanced by improvements in methodology. Ensembles of model simulations can be used to characterize model uncertainty. Model intercomparison activities like AMIP [3] and CMIP5 [18] continue to improve and allow more robust evaluation of the strengths and weaknesses of climate models, their ability to represent the real world, and their performance as tools for understanding climate change.