1 Introduction

There are many different kinds of naturally occurring fluid flows in the environment. Natural fluid motions are vital, and there is a strong general incentive to study them, particularly those of air in the atmosphere and of water from underground aquifers to surface flows in rivers, lakes, and oceans. Environmental concerns have encouraged interdisciplinarity to a degree that has grown with the acuity of the problems, giving rise to a body of knowledge that comprises several disciplines, including hydrology, meteorology, climatology, and oceanography among others. Whereas the particular objectives of each of these disciplines, such as weather forecasting in meteorology and climate change projections in climatology, encourage disciplinary segregation, environmental concerns compel experts in those disciplines to base their models on the solution of the equations of fluid dynamics.

The threat of climate change is one of the greatest challenges currently facing society. Because of the increased threats imposed by global warming and the increasing severity and frequency of storms and natural disasters, improving our understanding of the climate system has become an international priority. In simple words, climate refers to the average of weather conditions. Descriptions of the climate generally encompass statistical information concerning the mean and variability of relevant quantities, such as temperature, precipitation, and wind, over a multi-year time period. Fluctuations in the Earth system result naturally from interactions between the ocean, the atmosphere, the land, the frozen portion of the Earth’s surface (or cryosphere), and the changes in the Earth’s energy balance arising from volcanic eruptions and variations in the Sun’s intensity. Even though global warming is now accepted as incontrovertible, humans continue to alter the composition of the atmosphere, primarily through the burning of fossil fuels. The build-up of greenhouse gases and trace constituents is another factor that contributes to changes in the Earth’s heat energy balance. Its impact on the planet has been detected and is projected to become increasingly important in the coming decades and centuries.

Today, a fundamental tool used for predicting weather and climate changes is the use of numerical models, i.e., mathematical models run as computer simulations. However, the basic ideas of weather forecasting and climate modelling were developed more than a century ago, long before the construction of the first electronic computers (Phillips 1970; Lynch 2008). In those early days, observations were rather sparse and irregular, especially for the upper air and over the oceans, making weather forecasting very imprecise and unreliable. The basic laws of physics, fluid motion, and chemistry played no role; forecasters relied instead on crude techniques of extrapolation, knowledge of local climatology, and guesswork based on mere intuition. It was not until the beginning of the last century that meteorologists started to recognize that fluid mechanics and thermodynamics represent the set of fundamental physical principles that govern the flow of the atmosphere (Abbe 1901; Bjerknes 1904; Willis and Hooke 2006). In particular, Abbe (1901) proposed the first mathematical approach to forecasting, and shortly after Bjerknes (1904) introduced the idea that rational forecasting should consist of a diagnostic step, in which the initial state of the atmosphere is determined observationally and represented in charts giving the distribution of the variables at different levels, and a prognostic step, in which the laws of fluid motion are used to calculate the changes of this state over time. Non-linear advection, the transport of fluid properties and characteristics by the motion of the fluid itself, was identified as the primary physical process. However, Bjerknes employed a graphical approach, rather than numerical methods, for solving the fluid dynamics equations and building up new charts describing the atmosphere some hours later, with the process being repeated iteratively until the desired forecast length was achieved.

The beginning of modern numerical weather prediction (NWP) was pioneered by Richardson (1922), who first attempted a direct solution of the equations of motion using finite difference methods (Lynch 2006). His work impelled profound developments in the theory of meteorology and is the foundation upon which modern forecasting is built. Since then, advances in numerical analysis that enabled the design of stable algorithms, the development of digital computer technology, and the invention of the radiosonde and its deployment in a global network providing timely observations of the atmosphere in three spatial dimensions (latitude, longitude, and height) have completed the task. A definite impulse to modern meteorology was given later on by Charney (1947, 1948, 1950), who developed a set of equations known as the quasi-geostrophic vorticity system for calculating the large-scale motions of planetary-scale waves (Charney 1948), giving the first convincing physical explanation for the development of mid-latitude cyclones: his baroclinic instability theory. This theory was capable of producing a quantitatively accurate prediction of the atmospheric flow (Charney et al. 1950; Platzman 1979). In 1979 Charney led an ad hoc study group on carbon dioxide and climate for the United States National Research Council, whose final written report was one of the earliest modern scientific assessments of global warming (Charney et al. 1979). They estimated that a doubling of atmospheric CO\(_{2}\) would produce a global warming near 3 \(^{\circ }\)C with a probable error of \(\pm \)1.5 \(^{\circ }\)C, which is quite close to the best estimate of about 3 \(^{\circ }\)C for the global temperature increase given by the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report published in 2007.

With the advances in computer technology, numerical weather predictions have achieved breakthrough improvements in many aspects. In the 1960s, operational forecasts started to use models based on numerical solutions of the primitive equations—a set of non-linear differential equations, consisting of a form of the familiar Navier-Stokes equations, a continuity equation, and a thermal energy equation (Charney 1955; Hinkelmann 1959; Phillips 1960; Smagorinsky 1963). A six-level primitive equation model was introduced into operations at the National Meteorological Center in Washington in June, 1966, running on a CDC 6600 computer (Shuman and Hovermale 1968). Manipulating the vast datasets and performing the complex calculations necessary for modern weather prediction require some of the most powerful supercomputers in the world. Even with the increasing power of supercomputers, the forecast skill of NWP models extends only to about 6 days. The density and quality of observations used as input to the forecasts and the deficiencies in the models themselves are important factors affecting the accuracy of the predictions. A more fundamental problem lies in the chaotic nature of the fluid-dynamics equations used to simulate the atmosphere. In addition, these equations need to be supplemented with parameterizations that attempt to capture the phenomenology of small-scale processes, including solar and terrestrial radiation, moisture content (cloudiness and relative humidity), surface hydrology (precipitation, evaporation, snow melt and run-off), heat exchange, soil, vegetation, surface water, and the effects of terrain. On the other hand, the development of regional (limited area) models has facilitated accurate forecasting of the tracks of tropical cyclones and hurricanes as well as of air quality (Shuman 1989; van Dop and Steyn 1991). The inclusion of the interactions of land and vegetation with the atmosphere has led to more realistic forecasts (Xue et al. 1996).

The chaotic nature of the atmospheric flow imposes a limit on predictability, as inherent errors in the initial state grow rapidly and render the forecast useless after some days. A numerical prediction method known as ensemble forecasting, a form of Monte Carlo analysis, has been introduced, in which multiple numerical predictions, each starting from slightly different initial conditions, are run and the combined outputs are used to deduce probabilistic information about future changes in the atmosphere (Molteni et al. 1996; Toth and Kalnay 1997; Buizza et al. 1999). With this approach, probability forecasts for a wide range of weather events are currently generated and disseminated for use in the operational centres. For instance, seasonal forecasts, with a range of 6 months, are prepared at the European Centre for Medium-Range Weather Forecasts (ECMWF) and at the National Center for Environmental Prediction (NCEP) in Washington. They are made using a coupled atmosphere/ocean model, and a large number of forecasts are combined in an ensemble each month. In particular, these forecast ensembles have demonstrable skill for tropical regions, with recent impressive predictions of the onset of El Niño and La Niña events. However, in middle latitudes, as in Europe, no significant skill has yet been achieved by these models. In fact, seasonal forecasting for middle latitudes remains one of the great problems facing us today.
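The essence of the ensemble method can be conveyed with a toy model. The sketch below is a minimal illustration, not any operational system: it perturbs the initial state of the chaotic Lorenz (1963) equations (standing in here for the atmosphere) and uses the spread of the resulting ensemble as a measure of forecast uncertainty. The ensemble size, perturbation amplitude, and parameter values are illustrative assumptions.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Lorenz (1963) equations: a minimal chaotic stand-in for the atmosphere.
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, steps=1000):
    # Fourth-order Runge-Kutta time stepping.
    for _ in range(steps):
        k1 = lorenz63(state)
        k2 = lorenz63(state + 0.5 * dt * k1)
        k3 = lorenz63(state + 0.5 * dt * k2)
        k4 = lorenz63(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

rng = np.random.default_rng(0)
analysis = np.array([1.0, 1.0, 1.0])     # best estimate of the initial state
members = [integrate(analysis + 1e-3 * rng.standard_normal(3)) for _ in range(50)]
forecasts = np.array(members)
print("ensemble mean:  ", forecasts.mean(axis=0))
print("ensemble spread:", forecasts.std(axis=0))   # spread ~ forecast uncertainty
```

Because the system is chaotic, the tiny initial perturbations grow until the members diverge; the ensemble mean and spread then carry the probabilistic information that a single deterministic run cannot provide.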

Weather and climate are different in the sense that climate predictions do not need knowledge of weather in detail. A good analogy of the difference between weather and climate is to consider a swimming pool. Suppose that the pool is being slowly filled. If someone dives into it, this will certainly generate waves on the water surface. The waves represent the weather, while the average water level is the climate. A new diver jumping into the pool the next day will produce more waves, but the water level will be higher as more water has flowed into the pool. In the atmosphere the ‘water hose’ is the increasing amount of greenhouse gases, which will cause the climate to warm even though the weather (the waves) continues to change. Thus, climate scientists use models to forecast the average water level in the pool and not the waves. However, climate modelling derives from efforts first formulated to numerically predict the weather. The first successful long-range simulation of the general circulation of the atmosphere was carried out in 1956 (Phillips 1956) and realistically depicted monthly and seasonal patterns in the troposphere (Cox 2002). This work had a galvanizing effect on the meteorological community, and thereafter several general circulation models (GCMs) were developed. One early model of particular interest is that developed at the National Center for Atmospheric Research (NCAR) (Kasahara and Washington 1967). By the early 1980s, NCAR had developed the Community Climate Model (CCM), which was continuously refined over the next 20 years (Williamson 1983; Williamson et al. 1987; Williamson and Olson 1994), with the Community Atmosphere Model (CAM 3.0) being the latest version (Collins et al. 2004). Coupled atmosphere/ocean climate models such as HadCM3 and HadGEM are used at the Hadley Centre for Climate Prediction and Research in the United Kingdom for a wide range of climate studies (Lynch 2006). Advanced models, such as the atmospheric GCM ECHAM5 developed at the Max Planck Institute for Meteorology (Roeckner et al. 2003), are under continuing refinement and extension, and are increasing in sophistication and comprehensiveness. Most of them simulate not only the atmosphere and oceans but also a wide range of geophysical, chemical, and biological processes and feedbacks. In particular, these models, now called Earth System Models, are applied to the practical problem of weather prediction and also to the study of climate variability and mankind’s impact on it.

2 Weather Modelling and Prediction

The atmosphere is a fluid (composed mostly of air) that covers the entire Earth surface. Most of the phenomena which we associate with day-to-day weather occur in its lowest layer, called the troposphere, which ranges in thickness from about 8 km at the poles to 16–20 km over the equator. The troposphere is denser than the layers of the atmosphere above it and contains up to 75 % of the mass of the atmosphere, with an approximate composition of 78 % nitrogen, 21 % oxygen, and about 1 % of other trace gases. Nearly all atmospheric water vapour (or moisture) and aerosols are found in the troposphere. Since temperature decreases with altitude, warm air near the surface of the Earth can readily rise, being less dense than the colder air above it. This induces a vertical movement, or convection, of air which generates clouds and ultimately rain from the moisture within the air, giving rise to much of the weather we experience in our daily lives.

The troposphere is capped by the tropopause, a boundary region of stable temperature, separating the troposphere from the stratosphere, where the air temperature begins to rise. Such a temperature increase prevents much of the air convection beyond the troposphere, and consequently most weather phenomena, including towering cumulonimbus thunderclouds, are confined to the troposphere. For instance, most commercial aircraft fly in the lower stratosphere, just above the tropopause, where clouds, and with them significant weather perturbations, are usually absent (Petty 2008). However, vigorous thunderstorms, for example those of tropical origin, may overshoot into the lower stratosphere and undergo low-frequency vertical oscillations with durations of the order of an hour or less (Shenk 1974), which in turn may induce low-frequency atmospheric gravity waves capable of affecting both atmospheric and oceanic currents in the region (Bromirski et al. 2010). Sometimes the temperature does not decrease with height in the troposphere, but rather increases, a situation known as a temperature inversion. In general, temperature inversions limit or prevent the vertical mixing of air, causing a state of atmospheric stability. This can lead to episodes of air pollution, where air becomes stagnant and pollutants emitted at ground level remain trapped underneath the temperature inversion zone (Phalen and Phalen 2012).

Among the most significant scientific advances of the past century is our ability to simulate complex physical systems using numerical methods and predict their evolution. One outstanding example is the development of GCMs of the atmosphere and ocean, which can be used to predict the weather several days in advance with a high degree of confidence and to gain insight into the factors that cause changes in the climate as well as into their likely timing and severity. Here we shall review the most important numerical weather prediction models, which were the precursors to climate prediction systems, viewed as a problem in non-linear fluid mechanics.

2.1 Barotropic Models

Barotropic models are short-range prediction models that include only the reversible part of atmospheric physics. That is, the atmosphere is treated as a one-component gas consisting of dry air so that irreversible processes, such as non-adiabatic heating and cloud formation, are not taken into account. The barotropic model was the first kind of NWP model ever successfully implemented (Charney 1948; Charney et al. 1950). It is probably the simplest model that can realistically represent atmospheric flow around the Earth. Meteorologists use the word barotropic to describe an atmosphere where isosteric surfaces—surfaces of constant specific volume—and isobaric surfaces—surfaces of constant pressure—coincide. In other words, the gradient of the specific volume (or density) and the gradient of pressure are parallel and proportional to each other so that the density is a function of pressure alone (adiabatic atmosphere).

Typical barotropic models are based on a set of equations known as the quasi-geostrophic system (Charney et al. 1950). These equations are derived from the Euler equations of motion by assuming that the Coriolis force resulting from horizontal air currents exactly balances the horizontal pressure gradients (geostrophic balance), while in the vertical direction hydrostatic equilibrium is assumed. If the atmosphere is divergence-free, the curl of the Euler equations of motion reduces to the barotropic vorticity equation (Bennett et al. 1993):

$$\begin{aligned} \frac{D\zeta }{Dt}=0, \end{aligned}$$
(1)

where \(D/Dt\) is the substantial time derivative and \(\zeta \) is the absolute vorticity defined by

$$\begin{aligned} \zeta =m^{2}\left[ \frac{\partial }{\partial x}\left( \frac{v}{m}\right) - \frac{\partial }{\partial y}\left( \frac{u}{m}\right) \right] +f, \end{aligned}$$
(2)

where \(v\) and \(u\) are the horizontal geostrophic wind components in the direction of the map coordinates \(x\) and \(y\), respectively, \(m\) is the map factor, and \(f=2\varOmega \sin \phi \) is the Coriolis frequency. Here \(\varOmega \) is the angular velocity of planetary rotation and \(\phi \) is the latitude. Since the model has non-divergent flow, a streamfunction \(\varPsi \) can be defined by

$$\begin{aligned} v=m\frac{\partial \varPsi }{\partial x},\quad u=-m\frac{\partial \varPsi }{\partial y}, \end{aligned}$$
(3)

so that

$$\begin{aligned} \zeta =m^{2}\nabla ^{2}\varPsi +f. \end{aligned}$$
(4)

In low-pressure systems, where the Rossby number (Ro) is small, the effects of planetary rotation are large compared to the net wind acceleration, allowing the use of the geostrophic approximation given by Eqs. (1)–(4) (Marshall and Plumb 2008). Typical barotropic models for operational weather prediction were based on an extended version of Eqs. (1)–(4) to account for small deviations from strict geostrophic balance—the so-called semi- or quasi-geostrophic equations (Phillips 1970; Chynoweth and Sewell 1991). Since the pioneering work of Charney (1948) and Charney et al. (1950), the quasi-geostrophic equations have become an accepted system of approximate equations for the study of mid-latitude motions of the atmosphere on a synoptic scale, while allowing for the presence of mesoscale phenomena such as atmospheric fronts.
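To make the barotropic system concrete, the sketch below integrates the vorticity equation (1) numerically, in the spirit of the early barotropic forecasts. It is a minimal illustration assuming a doubly periodic β-plane with unit map factor (\(m=1\)): the streamfunction is recovered from Eq. (4) with a spectral Poisson solver, the winds follow from Eq. (3), and the vorticity is advected forward in time. The grid, vortex shape, and time step are illustrative choices, and the forward Euler stepping is for brevity only (an operational code would use leapfrog stepping with time filtering).

```python
import numpy as np

# Doubly periodic beta-plane, map factor m = 1 (illustrative, not a real chart).
N, L = 64, 6.0e6                        # grid points per side, domain size [m]
dx = L / N
beta = 1.6e-11                          # planetary vorticity gradient df/dy [m^-1 s^-1]
x = np.arange(N) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                          # avoid division by zero for the mean mode

def ddx(a): return np.real(np.fft.ifft2(1j * KX * np.fft.fft2(a)))
def ddy(a): return np.real(np.fft.ifft2(1j * KY * np.fft.fft2(a)))

def streamfunction(zeta_rel):
    # Invert zeta_rel = del^2(psi), i.e. Eq. (4) with m = 1 and f handled separately.
    return np.real(np.fft.ifft2(-np.fft.fft2(zeta_rel) / K2))

# Initial relative vorticity: a single cyclonic vortex in mid-domain.
zeta = 1.0e-4 * np.exp(-((X - L / 2)**2 + (Y - L / 2)**2) / (4.0e5)**2)

dt = 600.0                              # time step [s]
for _ in range(144):                    # integrate for one day
    psi = streamfunction(zeta)
    u, v = -ddy(psi), ddx(psi)          # non-divergent wind, Eq. (3) with m = 1
    # D(zeta + f)/Dt = 0  =>  d(zeta)/dt = -u d(zeta)/dx - v (d(zeta)/dy + beta)
    zeta = zeta - dt * (u * ddx(zeta) + v * (ddy(zeta) + beta))
```

The βv term that appears in the update is what makes the vortex drift westwards, the same planetary-vorticity advection that underlies Rossby wave propagation in these models.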

A barotropic instability is a wave instability associated with shear in a jet-like current, and it appears to be of central importance in the tropics. Early attempts at forecasting in the tropics with a barotropic atmospheric model were aimed at predicting upper-air flow patterns in the tropical Pacific areas of both the Northern and Southern hemispheres (Jordan 1956; Vederman et al. 1966). A similar model was applied to forecasts of flow patterns at the 500 mb level in the Indian region (Shukla and Saha 1970). Barotropic prediction models have also provided the basis for a significant advance in the state of the art of tropical cyclone motion and hurricane track forecasting in the range from one to several days (Bennett et al. 1993; Sanders and Burpee 1968; Sanders et al. 1980; DeMaria 1985). Although there are some situations where tropical cyclone motion can only be modelled using a more general form of the basic equations as, for example, in the case when a vortex interacts with a vertically-sheared basic current, there has been evidence that some aspects of tropical cyclone motion can be described with simple barotropic models. For instance, the SANBAR model (Burpee 2008), a barotropic tropical cyclone track prediction model designed for the North Atlantic tropical cyclone basin and used operationally during 1973–1984 and 1985–1989, was recognized to be superior to other forecast methods for medium-range track forecasts of low-latitude Atlantic tropical cyclones (Neumann and Pelissier 1981). It has also been shown that for the Australian/Southwest Pacific region many aspects of tropical cyclone motion can be explained using a theory based on a barotropic vorticity equation (Holland 1983, 1984). In fact, calculations of the terms in the full form of the vorticity equation, using aircraft and rawinsonde composite data, have shown that the dominant contribution to the local vorticity change in the regions near the tropical cyclone centre comes from the horizontal advection term (Chan 1984).

Barotropic NWP models have also been used to demonstrate the close coupling existing between the westward-propagating African waves and the broad-scale African monsoons on the time scale of 3–5 days (Krishnamurti et al. 1980). It is well known today that about 80 % of all tropical cyclones on the globe form near or within the intertropical convergence zone (ITCZ) (Gray 1979). In satellite images, the ITCZ is sometimes observed to undulate, forming cloud patterns. At times, such an undulating ITCZ breaks down into several tropical disturbances within which tropical cyclones may form (Gray 1979; Zehr 1993). The resulting tropical cyclones and typhoons then move into higher latitudes, allowing the ITCZ to reform and perhaps start the cycle over again (Guinn and Schubert 1993). These undulations are a clear signature of easterly waves in the tropical troposphere. Easterly waves were recognized early on to play an important role in tropical cyclogenesis (Riehl 1945). They have since been observed in the Atlantic Ocean and West Africa (Reed et al. 1977; Chen and Ogura 1982), in the Pacific Ocean (Nitta et al. 1985; Nitta and Takayabu 1985; Tai and Ogura 1987; Heta 1991), and in the South China Sea and India (Saha et al. 1981). All these studies concluded that easterly waves occur in the lower tropical troposphere and have typical wavelengths and speeds in the ranges from 2,000 to 4,000 km and 5–8 ms\(^{-1}\), respectively. While nearly 60 % of all Atlantic tropical cyclones originate from African easterly waves (Avila and Clark 1989), observational and numerical studies indicate that they result from a convectively modified form of combined barotropic and baroclinic instability of the African easterly jet, which has maximum winds of 10–15 ms\(^{-1}\) near 700 mb and 15 \(^{\circ }\)N (Norquist et al. 1977; Thorncroft and Hoskins 1994a, b). Barotropic model simulations based on the shallow-water equations have suggested that the ITCZ break-down may play a role in producing the observed tendencies for tropical storms to cluster in time and form polewards of the central latitude of the ITCZ and to the east of existing tropical storms (Nieto Ferreira and Schubert 1997). More recently, barotropic instability calculations have also been employed to investigate the possible importance of barotropic shear variations in explaining the effect of the Madden-Julian oscillation on hurricane formation over the eastern and western North Pacific (Hartmann and Maloney 2001).

In spite of its numerous applications over more than 40 years, quasi-geostrophic modelling was abandoned because of the development of more efficient ways of integrating the primitive equations (Bengtsson 1999). Moreover, the incorporation of physical processes (radiation, clouds, precipitation, etc.) was far more complicated to implement in the quasi-geostrophic models, which was an additional reason to stop using them in NWP.

2.2 Baroclinic Models

The occurrence of large vertical temperature gradients in the troposphere can lead to the formation of convective air currents, which transport the excess energy away from the surface to higher altitudes where the air is significantly cooler. When this happens we say that the atmosphere is statically unstable. In an analogous manner, when the latitudinal temperature distribution is such that a large equator-to-pole temperature gradient exists, the atmosphere will break down into wind flows that move the excess energy from the regions of excess (warm tropics) to regions of deficit (cool poles). In this case, the atmosphere is said to be baroclinically unstable. This imbalance of energy is essentially due to an excess of radiational heating in the tropical latitudes. In a stratified fluid, a source term of the form \(\nabla \rho \times \nabla p/\rho ^{2}\) appears in the vorticity equation whenever isopycnic (constant density) surfaces and isobaric surfaces are not aligned, which is responsible for the baroclinic contribution to the local vorticity (Marshall and Plumb 2008). In meteorology, a baroclinic atmosphere is one in which the density depends on both the temperature and the pressure.

The most important application of baroclinic instability is the cyclogenesis process at mid-latitudes, which represents the development of synoptic-scale weather disturbances. In other words, it is the leading mechanism shaping the cyclones and anticyclones that influence weather at mid-latitudes. In the ocean, baroclinic instability is responsible for the generation of mesoscale eddies that play a role in the transport of tracers, which are used in oceanography to deduce flow patterns in the ocean (Davis 1991). In general, vorticity is the curl of the velocity field and its evolution can be broken into contributions from advection (as vortex tubes move with the flow), stretching and twisting (as vortex tubes are pulled or twisted by the flow), and baroclinic vorticity generation (Nadiga and Aurnou 2008). Therefore, the study of the evolution of these baroclinic instabilities is a crucial part of developing theories of mid-latitude weather. The birth of baroclinic NWP models started with the classical work of Charney (1947) and Eady (1949). The energy source for baroclinic instability is the potential energy associated with the environmental flow, and since then meteorologists have become aware that baroclinic instability can develop even in situations of rapid rotation (small Ro) and strong stable stratification (large Richardson number, Ri), as is typically observed in the atmosphere, where Ri is a dimensionless number that quantifies the ratio of the potential to the kinetic energy of the flow.
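A back-of-envelope calculation conveys the scales involved. The snippet below evaluates the gradient Richardson number \(\mathrm{Ri}=N^{2}/(\partial u/\partial z)^{2}\) and the classical Eady maximum growth rate \(\sigma \approx 0.31\,(f/N)\,\partial u/\partial z\) for typical mid-latitude conditions; the numerical values are illustrative assumptions, not observations.

```python
# Typical mid-latitude values (illustrative assumptions).
f = 1.0e-4       # Coriolis frequency [s^-1]
N = 1.0e-2       # buoyancy (Brunt-Vaisala) frequency [s^-1]
dudz = 3.0e-3    # vertical shear of the zonal wind [s^-1], ~30 m/s over 10 km

Ri = N**2 / dudz**2                 # gradient Richardson number
sigma = 0.31 * (f / N) * dudz       # Eady (1949) maximum growth rate [s^-1]
e_folding_days = 1.0 / sigma / 86400.0

print(f"Ri = {Ri:.0f}")                               # ~11: strongly stratified
print(f"e-folding time = {e_folding_days:.1f} days")  # ~1 day: cyclogenesis timescale
```

Even with Ri of order ten, the growth rate gives an e-folding time of roughly a day, which is indeed the observed development timescale of mid-latitude cyclones.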

Since a tropical cyclone is a huge tropospheric convection cell and the axis of the horizontal wind circulation remains almost vertical during its movement, there was a need to develop baroclinic prediction models capable of simulating the three-dimensional atmospheric motion more closely than single-level barotropic models. After the Electronic Numerical Integrator and Computer (ENIAC) forecasts led by Charney in 1950 in Aberdeen, Maryland (Platzman 1979), several baroclinic models were developed in the next few years, all based on the quasi-geostrophic system of equations (Phillips 1951, 1954; Charney and Phillips 1953; Matsumoto 1956; Wiin-Nielsen 1959; Kasahara 1960). Most of these models were employed to evaluate the instantaneous movement velocity of tropical cyclones from multi-level data, i.e., the atmosphere is divided into two, or more, levels where prognostic and diagnostic variables are evaluated from known data at these levels. However, it was soon argued that early experiments with baroclinic models capable of generating additional kinetic energy from the store of available potential energy had failed (Ellsaesser 1968), and that the multi-level models were worse than the single-level barotropic forecasts (Bengtsson 1964; Shuman 1989). One major cause of the failure was a net accumulation of kinetic energy in the models owing to the presence of the baroclinicity source and the absence of a dissipative sink of kinetic energy. Therefore, the single-level model was preferred when regular operational weather forecasting commenced in 1958.

2.3 Primitive Equation Models

As numerical weather prediction passed its infancy, the quasi-geostrophic approximation was replaced by the primitive equations. On the basis that these equations would simulate the atmospheric dynamics and energetics more realistically than the filtered equations, Hinkelmann (1951) first tackled the issue of suitable initial conditions for integration of the primitive equations, followed by other important studies of initialization (Charney 1955; Phillips 1960). The first applications of the primitive equations were a success (Hinkelmann 1959; Smagorinsky 1963) and soon thereafter, they started to be used in operational settings in 1966 at the Deutscher Wetterdienst in West Germany (Reiser 1986) and at the National Meteorological Center in Washington (Shuman and Hovermale 1968), followed by the United Kingdom Meteorological Office in 1972 and the Australian Bureau of Meteorology in 1977 (Leslie and Dietachmayer 1992; Lynch 2008).

The primitive equations are a set of non-linear differential equations that form the basis for any NWP scheme. Their precise form depends on the coordinate system used to represent the vertical structure of the atmosphere, which may be either the pressure (\(p\)), the geometrical height (\(z\)), or the potential temperature (\(\theta \)) (Kasahara 1974). In particular, models based on pressure as a vertical coordinate fall into three types: pressure, log pressure, and the so-called \(\sigma \)-system, where \(\sigma =p/p_{0}\) and \(p_{0}\) is the Earth’s surface pressure (Phillips 1957). The use of pressure as a vertical coordinate became very popular during the 1950s and 1960s (Hinkelmann 1959; Eliassen 1949; Leith 1965). However, this scheme has certain computational disadvantages in the vicinity of mountains because the lower limit of the atmosphere is not a coordinate surface. In fact, there have been very few attempts to incorporate details of the Earth’s orography in these models. To overcome this difficulty, the \(\sigma \)-system was proposed, in which the Earth’s surface is always a coordinate surface (Phillips 1957; Smagorinsky et al. 1965; Sela and Bostelman 1973). Moreover, the use of the potential temperature (defined as \(\theta =T(p_{0}/p)^{\kappa }\), where \(\kappa =R_{g}/c_{p}\), \(R_{g}\) is the specific gas constant, \(c_{p}\) the specific heat at constant pressure, and \(T\) the temperature) as a vertical coordinate in primitive-equation models commenced around 1970 (Eliassen and Raustein 1968, 1970; Shapiro 1973). Although the approach is particularly suitable for resolving details of frontal structure, it still faces the same degree of complexity in handling the lower boundary conditions as in the isobaric coordinate system. While the representation of bottom topography has historically been crude, the choice of vertical coordinates is perhaps the single most important feature that differentiates between models and is still an active area of research.
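The two non-geometric coordinates just mentioned are straightforward to compute from a sounding. The short sketch below converts a set of pressure levels into \(\sigma \) values and potential temperatures using the definitions above; the sample profile and the choice of \(p_{0}=1000\) hPa for the surface pressure are illustrative assumptions.

```python
import numpy as np

R_g, c_p = 287.0, 1004.0       # dry-air gas constant and specific heat [J kg^-1 K^-1]
kappa = R_g / c_p
p0 = 1000.0e2                  # surface pressure [Pa], taken as 1000 hPa here

p = np.array([1000., 850., 700., 500., 300.]) * 100.0   # pressure levels [Pa]
T = np.array([288., 280., 272., 252., 228.])            # temperatures [K], sample sounding

sigma = p / p0                     # terrain-following coordinate of Phillips (1957)
theta = T * (p0 / p) ** kappa      # potential temperature, theta = T (p0/p)^kappa

for s, th in zip(sigma, theta):
    print(f"sigma = {s:.2f}   theta = {th:6.1f} K")
```

Note that \(\theta \) increases monotonically with height in this stably stratified profile, which is precisely the property that makes it usable as a vertical coordinate.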

In the pressure as well as in the height and potential temperature coordinate systems, special procedures were implemented to take into account the effects of the Earth’s orography, consisting of examining the height of the mountains and treating them as lateral boundary conditions at the grid points. Although the \(\sigma \)-system is not free of shortcomings, the idea of transforming the Earth’s surface into a coordinate surface has also been applied to the height and potential temperature coordinate systems. A comprehensive overview of models using all three vertical coordinates as well as a concise review of the equations of oceanic motion, sub-grid-scale parameterizations, and numerical approximation techniques can be found in Haidvogel and Beckmann (1999). A convenient way to introduce a general system that utilizes any well-defined variable as a vertical coordinate has been discussed by Kasahara (1974). For example, in the \(z\)-system any fluid quantity will be a function of the Cartesian coordinates (\(x\), \(y\), \(z\)) and time \(t\), while in the generalized coordinate system (the \(s\)-system), the independent variables would be (\(x\), \(y\), \(s\), \(t\)) such that \(s=s\)(\(x\), \(y\), \(z\), \(t\)). When \(x\), \(y\), and \(t\) are held fixed, this relation gives a single-valued monotonic correspondence between \(s\) and \(z\). The basic primitive equations for large-scale atmospheric flows written in the \(s\)-system are as follows: the horizontal equation of motion

$$\begin{aligned} \frac{D\mathbf{v}}{Dt}=-\frac{1}{\rho }\nabla _{s}p-g\nabla _{s}z-f\mathbf{k}\times \mathbf{v}+\mathbf{F}, \end{aligned}$$
(5)

where \(\mathbf{v}=u\mathbf{i}+v\mathbf{j}\) is the horizontal velocity, with \(u\) and \(v\) being its \(x\)- and \(y\)-components, \(\nabla _{s}=\nabla _{z}+(\partial s/\partial z)\nabla _{s}z(\partial /\partial s)\) is the gradient operator in the \(s\)-system, \(\mathbf{k}\) is the vertical unit vector, \(f\) is the Coriolis frequency (\(=2\varOmega \sin \phi \)), \(\varOmega \) is the angular velocity of the Earth’s rotation, \(\phi \) the geographical latitude, \(\rho \) the atmosphere density, \(g\) the Earth’s gravitational acceleration, \(p\) the pressure, \(\mathbf{F}\) the frictional force per unit area, and

$$\begin{aligned} \frac{D}{Dt}=\left( \frac{\partial }{\partial t}\right) _{s}+\mathbf{v}\cdot \nabla _{s}+\dot{s} \frac{\partial }{\partial s}, \end{aligned}$$
(6)

is the total time derivative in the \(s\)-system, where \(\dot{s}\) is the generalized vertical velocity; the continuity equation

$$\begin{aligned} \frac{D}{Dt}\ln \left( \rho \frac{\partial z}{\partial s}\right) +\nabla _{s}\cdot \mathbf{v}+ \frac{\partial \dot{s}}{\partial s}=0; \end{aligned}$$
(7)

the hydrostatic equation

$$\begin{aligned} \rho \frac{\partial z}{\partial s}=-\frac{1}{g}\frac{\partial p}{\partial s}; \end{aligned}$$
(8)

the ideal gas law

$$\begin{aligned} p=\rho R_{g}T; \end{aligned}$$
(9)

and the first law of thermodynamics

$$\begin{aligned} \frac{D}{Dt}\ln \theta =\frac{Q}{c_{p}T}, \end{aligned}$$
(10)

where \(\theta \) is the potential temperature as defined in the text above and \(Q\) is the rate of heating/cooling per unit mass per unit time. Equations (5)–(10) constitute the basic set of dynamical principles for NWP. In predicting the atmospheric flow, we must define appropriate boundary conditions as required by any solution of the problem. In general, it is convenient to choose the upper boundary as a vertical coordinate surface, \(s=s_{T}=\) const., so that there is no mass transport through it (\(\dot{s}=0\)). As a lower boundary condition of the atmosphere, it is usually assumed that there is no mass flow through the Earth’s surface, which is located at altitude \(H\) above the mean sea level \(z=0\). In the \(s\)-system, the Earth’s surface is expressed by

$$\begin{aligned} s=s_{H}=s(x,y,H,t), \end{aligned}$$
(11)

where the value of \(s\) at \(z=H\) may vary with time and space. Since the air at the Earth’s surface may move only along the Earth’s surface itself, the lower boundary condition must read

$$\begin{aligned} \dot{s}=\frac{\partial s_{H}}{\partial t}+\mathbf{v}_{H}\cdot \nabla s_{H}, \end{aligned}$$
(12)

at \(s=s_{H}\). If the Earth’s surface coincides with a constant \(s\)-surface, then Eq. (12) becomes \(\dot{s}=0\) at \(s=s_{H}\). It is worth noting that many groups worldwide were also examining how to use “hybrid” coordinates, where the vertical coordinate may be a function of height in the mixed layer, a function of isentropes in the interior, and some function of the terrain in the bottom boundary layer (Spall and Robinson 1989; Arakawa and Konor 1996; Rõõm et al. 2007).
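Whatever vertical coordinate is chosen, the hydrostatic equation (8) combined with the ideal gas law (9) ties the model levels back to geometric height through \(dz=-(R_{g}T/g)\,dp/p\). The sketch below integrates this hypsometric relation layer by layer for a sample sounding; the profile values are illustrative assumptions, with the surface placed at \(z=0\).

```python
import numpy as np

# Hypsometric integration: combining the hydrostatic equation (8) with the
# ideal gas law (9) gives dz = -(R_g * T / g) * dp / p, summed layer by layer.
R_g, g = 287.0, 9.81
p = np.array([1000., 850., 700., 500., 300., 200.]) * 100.0  # level pressures [Pa]
T = np.array([288., 281., 272., 252., 228., 218.])           # level temperatures [K]

z = np.zeros_like(p)
for i in range(1, len(p)):
    T_bar = 0.5 * (T[i] + T[i - 1])          # layer-mean temperature
    z[i] = z[i - 1] + (R_g * T_bar / g) * np.log(p[i - 1] / p[i])

for p_i, z_i in zip(p, z):
    print(f"{p_i / 100.:6.0f} hPa  ->  z = {z_i:7.0f} m")   # 500 hPa near 5.5 km
```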

For prediction of large-scale weather phenomena, it is important to add to the above set of equations the prediction of the water vapour field. Water vapour is a dynamically active constituent of the tropical atmosphere which, though to a significant extent locally controlled by vertical advection, precipitation, and surface evaporation, is also affected by horizontal advection. Water vapour affects the flow in turn, because a humid atmosphere supports deep, precipitating convection more readily than a dry atmosphere. For instance, precipitation heats the atmosphere, and this heating drives the flow. The differential equation for the specific humidity \(q\), defined as the mass of water vapour per unit mass of air, in the \(s\)-system has the form (Sobel 2002):

$$\begin{aligned} \frac{Dq}{Dt}=Q_{q}, \end{aligned}$$
(13)

where \(Q_{q}\) represents sources and sinks of moisture due to unresolved processes, such as transport of water vapour as well as loss by condensation (Yanai et al. 1973). Similarly, \(Q\) in Eq. (10) represents sources and sinks of heat, such as radiative transfer of electromagnetic energy. For example, \(Q=Q_{c}+Q_{R}+Q_{d}\), where \(Q_{c}\) is the apparent source of heat associated with buoyant moist convection (i.e., release of latent heat by condensation of water vapour or freezing of liquid water as well as transport of heat), \(Q_{R}\) represents radiative heating or cooling, and \(Q_{d}\) represents diffusive or turbulent transport by motions that are not directly associated with deep convection (Yanai et al. 1973). In order to obtain a closed dynamical system, we need to parameterize these sources and sinks as functions of the large-scale state variables \(\mathbf{v}\), \(q\), and \(T\). In particular, \(Q_{c}\) and \(Q_{q}\) are determined by a convective parameterization (Arakawa 1993). A detailed discussion of the parameterization problem is beyond our scope here, and the reader is referred to a few useful textbooks for a detailed account (Emanuel 1994; Smith 1997).
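To give the flavour of such a closure, the toy routine below performs a single large-scale saturation adjustment: wherever \(q\) exceeds its saturation value, the excess is condensed (a contribution to \(Q_{q}\)) and the associated latent heat warms the air (a contribution to \(Q_{c}\)). The Tetens-type saturation formula is standard, but the relaxation scheme and the sample values are illustrative assumptions, far simpler than any operational convective parameterization.

```python
import numpy as np

L_v, c_p, eps = 2.5e6, 1004.0, 0.622   # latent heat, specific heat, Rd/Rv

def q_sat(T, p):
    # Saturation specific humidity from a Tetens-type formula (T in K, p in Pa).
    e_s = 611.2 * np.exp(17.67 * (T - 273.15) / (T - 29.65))
    return eps * e_s / (p - (1.0 - eps) * e_s)

def saturation_adjustment(T, q, p, n_iter=5):
    # Iterate because latent heating raises q_sat while condensation lowers q.
    for _ in range(n_iter):
        excess = max(q - q_sat(T, p), 0.0)
        dq = 0.5 * excess          # condense half the excess each pass (toy Q_q)
        q -= dq
        T += (L_v / c_p) * dq      # latent-heat release (toy Q_c)
    return T, q

T, q = saturation_adjustment(T=285.0, q=0.012, p=9.0e4)
print(f"T = {T:.2f} K, q = {q * 1000:.2f} g/kg")
```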

Weather models that have grid-boxes with sides between 5 and 25 km can explicitly represent convective clouds, although they need to parameterize the cloud microphysics, which occur at much smaller scales (Narita and Ohmori 2007). For example, the treatment of large-scale (stratus-type) clouds is more physically based: such clouds are assumed to form when the relative humidity reaches some prescribed value. On the other hand, the amount of solar radiation reaching the ground and the formation of cloud droplets, which occur on the molecular scale, must be parameterized before they can be included in any model. Atmospheric drag produced by mountains must also be parameterized because limitations in the resolution of elevation contours may produce significant underestimates of the actual drag (Stensrud 2009). A parameterization of the surface flux of energy between the ocean and the atmosphere is also required in order to determine realistic sea surface temperatures and the type of sea ice found near the ocean’s surface (McGuffie and Henderson-Sellers 2005). In addition, the impact of multiple cloud layers as well as soil type, vegetation type, and soil moisture are factors that must be taken into account in NWP models (Melnikova and Vasilyev 2005; Stensrud 2009). Within air quality models, parameterizations are required to take into account atmospheric emissions from multiple relatively tiny sources, such as roads, urban areas, fields, and factories, within specific grid-boxes (Baklanov et al. 2009).
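As a concrete example of a humidity-threshold cloud scheme, the snippet below implements a Sundqvist-type diagnostic in which cloud cover is zero below a critical relative humidity and grows towards overcast as saturation is approached. The functional form is a widely used textbook diagnostic, but the critical value of 0.8 is an illustrative, tunable assumption.

```python
import numpy as np

def cloud_fraction(rh, rh_crit=0.8):
    # Sundqvist-type diagnostic: no cloud below rh_crit, overcast at saturation.
    rh = np.clip(rh, 0.0, 1.0)
    c = 1.0 - np.sqrt(np.maximum(1.0 - rh, 0.0) / (1.0 - rh_crit))
    return np.clip(c, 0.0, 1.0)

for rh in (0.70, 0.80, 0.90, 0.95, 1.00):
    print(f"RH = {rh:.2f}  ->  cloud fraction = {cloud_fraction(rh):.2f}")
```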

In the last three decades a myriad of primitive-equation models has been reported in the literature, most of which have found applications in ocean dynamics and tropical cyclone forecasting (Arakawa and Suarez 1983; Beckers 1991; Song and Haidvogel 1994; Ezer and Mellor 1997; Barnier et al. 1998; Fraedrich and Frisius 2001). For testing and operational models, the process of entering observational data to generate initial conditions is called initialization. On land, terrain maps available at resolutions down to 1 km are employed to help atmospheric circulation models within regions of rugged topography. This permits depicting features such as downslope winds, lee waves (atmospheric standing waves generated by wind flow towards a mountain), and related cloudiness that affects the incoming solar radiation (Stensrud 2009). In country-based weather services, the main input data are produced by observations from radiosondes (placed in weather balloons that measure relevant atmospheric parameters and transmit them to a fixed receiver) and from weather satellites. Permanent weather observation stations either report hourly in METAR reports—the most popular format in the world for the transmission of weather data—or every 6 h in SYNOP (surface synoptic observations) reports. In general, these observations are irregularly spaced and so they must be processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by NWP models (Krishnamurti 1995). Many of these models are global, primitive-equation models based on finite-difference techniques, where the world is represented as discrete points on a spherical grid in latitude and longitude (Chaudhari et al. 2007), while a few other models are based on spectral methods that solve for a range of wavelengths. Today, information from weather satellites is used where traditional data sources are not available. Research projects use reconnaissance aircraft to fly in and around weather systems of interest, such as tropical cyclones. In particular, reconnaissance aircraft are also used over the open oceans during the cold season into systems which cause significant uncertainty in forecast guidance, or which are expected to be of high impact from 3 to 7 days into the future over the downstream continent.
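A classic objective analysis technique of the kind alluded to above is the successive-correction scheme of Cressman (1959), in which each grid value is nudged towards nearby observations with weights that decay with distance. The sketch below performs one correction pass on synthetic data; the station locations, pressures, first guess, and influence radius are all illustrative assumptions.

```python
import numpy as np

def cressman_pass(grid_xy, obs_xy, obs_val, first_guess, R=5.0e5):
    # One Cressman pass: weight w = (R^2 - r^2)/(R^2 + r^2), zero beyond radius R.
    analysis = np.full(len(grid_xy), first_guess)
    increments = obs_val - first_guess            # observation minus first-guess field
    for i, g in enumerate(grid_xy):
        r2 = np.sum((obs_xy - g) ** 2, axis=1)
        w = np.maximum((R**2 - r2) / (R**2 + r2), 0.0)
        if w.sum() > 0.0:
            analysis[i] += np.sum(w * increments) / w.sum()
    return analysis

# Three synthetic station pressures analysed onto a line of grid points [m].
grid = np.array([[x, 0.0] for x in np.arange(0.0, 2.0e6, 2.5e5)])
stations = np.array([[3.0e5, 0.0], [9.0e5, 0.0], [1.6e6, 0.0]])
p_obs = np.array([1012.0, 1004.0, 998.0])        # station pressures [hPa]
print(cressman_pass(grid, stations, p_obs, first_guess=1008.0))
```

Operational assimilation has long since moved on to statistical methods (optimal interpolation, variational and ensemble schemes), but the successive-correction idea remains the simplest illustration of turning scattered observations into a gridded initial state.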

The horizontal domain of a NWP model can be either global, covering the entire globe, or regional (also known as limited-area models), covering only part of the Earth. The latter models allow for the use of finer grid spacing than global models because the available computational resources are focused on a specific area, thereby allowing explicit resolution of small-scale meteorological phenomena that cannot be represented on the coarser grid of a large-scale, or global, model. In general, regional models use information from global models to specify boundary conditions at the edge of their domain and eventually allow systems from outside the limited area to move into it. For instance, high-resolution models (also called mesoscale models), such as the Weather Research and Forecasting (WRF) model, which was created through a partnership including the National Oceanic and Atmospheric Administration (NOAA), NCAR, and more than 150 other organizations and universities in the United States and other countries, and the Nonhydrostatic Mesoscale Model (NMM), which was designed for forecasting operations at various National Weather Service offices in the United States, are primitive-equation codes based either on hybrid or \(\sigma \) vertical coordinates that are employed to explore ways of improving the accuracy of hurricane track, intensity, and rainfall forecasts, among other meteorological questions.

3 Climate Modelling

Climate is a complex, large-scale phenomenon that emerges from complicated interactions among small-scale physical systems. As mentioned by Schmidt (2007) in his Physics Today article on the physics of climate modelling: the task climate modellers have set for themselves is to take their knowledge of the local interactions of air masses, water, energy, and momentum and from that knowledge to explain the climate system’s large-scale features, variability, and response to external pressures, or “forcings”. That is a formidable task, and though far from complete, the results so far have been surprisingly successful.

Computer models of the coupled atmosphere-land surface-ocean-sea ice system are essential scientific tools for understanding and predicting natural and human-caused changes in the Earth’s climate. Recently, these models have added more components such as interactive atmospheric aerosols, atmospheric chemistry, and representations of the carbon cycle. There is no doubt that the study of climate change and its impacts is of enormous importance for our future and that global climate models are perhaps the best means we have of anticipating the likely changes. In general, climate models are used for a variety of purposes, which range from the study of the dynamics of the climate system to projections of future climate. In recent years, the most talked-about use of climate models has been to project temperature changes resulting from increases in atmospheric concentrations of greenhouse gases.

3.1 Phenomena of Interest in Climate Modelling

A number of well-known phenomena may contribute to climate change over short and long periods, including the global carbon cycle, the El Niño-Southern Oscillation (ENSO) climate pattern and its counterpart La Niña, greenhouse warming, atmospheric chemistry, the ocean circulation, and extreme events such as mesoscale storms and volcanic eruptions.

3.1.1 The Global Carbon Cycle

In the geological history of the Earth, carbon has been cycling among large reservoirs in the land (including plants and fossil fuels), the oceans, and the atmosphere. This natural cycling of CO\(_{2}\) usually takes millions of years to move large amounts of carbon from one system to another. However, atmospheric carbon dioxide comes increasingly from human activities, which together with other trace (greenhouse) gases in the atmosphere absorb radiation emitted from the Earth, thereby trapping heat in the atmosphere and contributing to its warming. For instance, since the Industrial Revolution in the nineteenth century, the amount of CO\(_{2}\) in the atmosphere has risen by 30 % as a result of the sustained increase in the burning of fossil fuels (oil and natural gas) and other carbon-based fuels, principally wood and coal, due to the rise of industry and transportation emissions.

There are two large reservoirs of carbon that are capable of taking significant amounts of CO\(_{2}\) out of the atmosphere at comparable rates: the oceans and the land plants. A comprehensive study of the ocean storage of CO\(_{2}\) derived from human activity, based on a decade-long survey of carbon distributions in the Atlantic, Pacific, and Indian oceans, indicates that the oceans took up 118 billion metric tons of CO\(_{2}\) from human sources (anthropogenic CO\(_{2}\)) between 1800 and 1994, implying that the oceanic sink accounts for \(\sim \)48 % of the total fossil-fuel and cement-manufacturing emissions (Sabine et al. 2004).

3.1.2 Greenhouse Gases and Aerosols

Trace (or greenhouse) gases in the atmosphere, such as water vapour, carbon dioxide, ozone, methane, nitrous oxide, and carbon monoxide, are present in the atmosphere in a tiny percentage (\(\sim \)1 %) compared to its total composition, mostly nitrogen and oxygen. However, such a small amount contributes significantly to long-term changes in the Earth’s climate. These gases absorb and re-emit some of the outgoing energy radiated from the Earth’s surface, retaining the excess heat in the lower atmosphere and affecting the surface energy balance of the planet. Some greenhouse gases remain in the atmosphere for decades or even centuries, warming the atmosphere and resulting in long-term changes to the global climate. The factors that influence the Earth’s energy balance are quantified in terms of radiative forcing. While some greenhouse gases, like carbon dioxide, have always been present in the atmosphere, others are new compounds introduced into the air by man-made mechanisms such as manufacturing processes. This human-induced (anthropogenic) warming has had a discernible influence on many physical and biological systems, and future warming is projected to have important impacts on sea level rise, the frequency and severity of extreme weather events, loss of biodiversity, and agricultural productivity.

Cumulative anthropogenic emissions of CO\(_{2}\) are recognized to be a major cause of global warming (Botzen et al. 2008), with the developed countries contributing more than 80 % of industrial CO\(_{2}\) emissions (Höhne et al. 2010). A recent analysis estimates that water vapour accounts for about 50 % of the Earth’s greenhouse effect, with clouds—formed by suspended water droplets and ice crystals (Kiehl and Trenberth 1997)—contributing 25 %, carbon dioxide 20 %, and the other minor trace gases and aerosols accounting for the remaining 5 % (Schmidt et al. 2010). Though to a relatively minor extent, aerosols—fine solid particles of various types and concentrations suspended in the atmosphere, such as smoke, dust, smog, ashes, and pollen (Hinds 1999)—can also affect the behaviour of the Earth system. For example, aerosols can absorb and scatter radiation, which can cause either warming or cooling of the atmosphere. They are also important in the formation and behaviour of clouds, and can influence the water cycle and shift the Earth’s radiative balance.

3.1.3 El Niño and La Niña

El Niño is a natural fluctuation of the global climate system. Originally it was the name given to the periodic warming of ocean waters along the tropical South American coasts and out along the Equator to the dateline. Today, the name is used to describe the whole El Niño-Southern Oscillation (ENSO) phenomenon. During El Niño events, warmer-than-average sea surface temperatures occur in the central and eastern equatorial Pacific, accompanied by high air surface pressure in the western Pacific, while during La Niña—the opposite extreme of the ENSO cycle—cooler-than-average sea surface temperatures predominate in the equatorial central and eastern Pacific, accompanied by low air surface pressure in the western Pacific (Trenberth et al. 2007). ENSO is an important component of the climate system since El Niño/La Niña phases impact weather on a global scale.

Under normal conditions, i.e., when neither El Niño nor La Niña is present, the Walker circulation—parcels of air following a closed circulation in the zonal and vertical directions in the lower tropical atmosphere—is seen at the sea surface in the form of easterly trade winds that move air and water warmed by the Sun towards the west (Briggs and Smithson 1986). During El Niño events, the trade winds weaken, leading to a rise in sea surface temperature in the eastern equatorial Pacific and a reduction of up-welling off South America. Heavy rainfall and flooding occur over Peru, and drought over Australia and Indonesia. The supplies of nutrient-rich water off the South American coasts are cut off due to the reduced up-welling, adversely affecting fisheries in that region. In the tropical South Pacific the pattern of occurrence of tropical cyclones shifts eastwards, so there are more cyclones than normal in areas such as the Cook Islands and French Polynesia. Conversely, during La Niña events, the trade winds strengthen and the pattern is a more intense version of the normal conditions, with an even colder tongue of sea surface temperatures in the eastern equatorial Pacific. Typically, this anomaly happens at irregular intervals of 2–12 years and lasts from 9 months to 2 years, with an average period length of 5 years (Philander 1990).

The strong El Niño event of 1982–1983 inspired innovative climate research, which has resulted in greater predictability of ENSO. In particular, the NOAA’s research laboratories have taken a leadership role in furthering ENSO observations and research to improve understanding, predictions, and impacts. This not only serves society’s need for information about weather and climate, but also helps in planning for and responding to weather and climate impacts. For example, ENSO has widespread impacts on a global scale such as drought, wildfires, crop failure, starvation, increased tropical storm/hurricane activity, damage to ecosystems, flooding, and increased spreading of infectious diseases. Understanding and predicting ENSO has resulted in more accurate climate predictions and, hence, in a reduction of its impacts through better planning. For example, scientists are now taking their understanding of ENSO a step further by comparing comprehensive descriptions of these events from the observed record with those simulated by numerical prediction models (Emile-Geay et al. 2013a, b).

3.1.4 Atmospheric Chemistry

The composition and chemistry of the atmosphere are important primarily because of the interactions between the atmosphere and living organisms. As a matter of fact, the composition of the atmosphere changes as a result of natural events such as volcanic emissions, lightning (massive electrostatic discharges between electrically charged regions within clouds, or between a cloud and the Earth’s surface), and bombardment by solar wind particles, and also as a result of air pollution derived from human activities. Well-known examples of problems currently addressed by atmospheric chemistry include ozone depletion, acid rain, photochemical smog, greenhouse gases, and global warming (Seinfeld and Pandis 2006).

Progress in atmospheric chemistry is often driven by the interplay between observations, laboratory measurements, and numerical modelling. One common trade-off in numerical modelling is between the number of chemical compounds and reactions that are modelled and the representation of chemical transport and mixing in the atmosphere. Typical box models might include hundreds, or even thousands, of chemical reactions but will only have a rather crude representation of mixing in the atmosphere. In contrast, existing 3D models based on primitive equations represent many of the physical processes of the atmosphere but due to constraints on computational resources will have far fewer chemical reactions and compounds. A trend today is to incorporate atmospheric chemistry as modules in existing climate models.

3.1.5 The Ozone Layer

The ozone layer is a deep layer in the stratosphere, roughly between 15 and 35 km above the ground, encircling the Earth and containing most of the atmospheric ozone (O\(_{3}\)). It is well known that though ozone represents only a small fraction of the gas present in the atmosphere, it plays a protective role by shielding humans and other forms of life from the harmful ultraviolet (UV) radiation that comes from the Sun (Seinfeld and Pandis 2006). Ozone in the Earth’s stratosphere is a bluish gas created by UV light striking oxygen molecules containing two oxygen atoms (O\(_{2}\)) and separating them into individual oxygen atoms, which can then recombine with other O\(_{2}\) molecules to form O\(_{3}\).
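Written out as reactions, the production steps just described are the first half of the classical Chapman cycle, a standard textbook summary of stratospheric ozone photochemistry (photolysis of ozone and recombination with atomic oxygen close the cycle):

$$\begin{aligned} \mathrm{O}_{2}+h\nu&\rightarrow \mathrm{O}+\mathrm{O},\\ \mathrm{O}+\mathrm{O}_{2}+\mathrm{M}&\rightarrow \mathrm{O}_{3}+\mathrm{M},\\ \mathrm{O}_{3}+h\nu&\rightarrow \mathrm{O}_{2}+\mathrm{O},\\ \mathrm{O}+\mathrm{O}_{3}&\rightarrow 2\,\mathrm{O}_{2}, \end{aligned}$$

where \(h\nu \) denotes a UV photon and M is any third body (typically N\(_{2}\) or O\(_{2}\)) that carries away the excess energy of the collision.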

Over the last two or three decades, the ozone layer has become more widely appreciated by the public as it was realized that certain industrial processes and consumer products result in the atmospheric emission of chemicals, such as chlorofluorocarbons and hydrochlorofluorocarbons, which have contributed to the depletion of the ozone layer through a complex series of chemical reactions (Steger and Bowermaster 1990). There is also evidence that natural sources of bromides and chlorides from ocean spray and volcanoes can contribute to depletion of the ozone layer (Steger and Bowermaster 1990). As a consequence of these discoveries, an international treaty called the Montreal Protocol was signed in 1987, and since then other international agreements have also been put in place to limit the emissions of human-made, ozone-depleting substances. As a result of these efforts, it is expected that the ozone layer will progressively recover in the coming decades.

Since ozone is also a greenhouse gas in the upper atmosphere, it has an impact on Earth’s climate. For instance, the increase of primary greenhouse gases may affect how the ozone layer recovers in the coming years. Therefore, understanding precisely how ozone abundances will change in the future with diminished chlorofluorocarbon emissions and increased emission of greenhouse gases remains an important challenge for atmospheric scientists. On the other hand, satellite data after the volcanic eruptions of El Chichón (Mexico) in 1982 and Mount Pinatubo (the Philippines) in 1991 showed a 15–20 % ozone loss at high latitudes, and a greater than 50 % loss over the Antarctic, suggesting that volcanic eruptions can play a significant role in reducing ozone levels. Eruption-generated particles, or aerosols, appear to provide surfaces upon which chemical reactions with chlorine- and bromine-bearing compounds from human-made chlorofluorocarbons take place. Thus, although volcanic aerosols provide a catalyst for ozone depletion, the real culprits in destroying ozone are human-generated chlorofluorocarbons (Solomon 1990; Newman et al. 2007).

Ozone depletion in the Earth’s ozone layer occurs most severely in the polar regions. The discovery of the Antarctic ozone hole announced in 1985 (Farman et al. 1985) came as a shock to the scientific community, because the observed decline in polar ozone was far larger than anyone had anticipated (Zehr 1994). A review of the status of the ozone hole based on continued total-ozone measurements at Halley, Antarctica (Jones and Shanklin 1995), indicated that the ozone hole continued to deepen and that ozone loss extends into the months of January and February, with a significant increase in UV-B radiation over Antarctica in summer. The evolution of the ozone hole in the Antarctic stratosphere is continually monitored, and improved measurements of ozone depletion are currently being reported (Huck et al. 2007). Significant depletion also occurs in the Arctic ozone layer during the late winter and spring period (between January and April). The maximum depletion there has generally been less severe than that observed in the Antarctic, with no large and recurrent ozone hole having taken place in the Arctic. Nevertheless, an unprecedentedly large Arctic ozone hole was detected in 2011 (Manney et al. 2011). The hole possibly formed because the Arctic stratosphere remained cold longer than usual between December 2010 and March 2011. The cold air allowed water vapour and nitric acid to condense into polar stratospheric clouds, which catalyzed the conversion of chlorine into chemically active forms that destroyed ozone.

3.1.6 Paleoclimatology

A credibility test for existing climate models is their ability to simulate past climatic periods, such as the Cretaceous and the Last Glacial Maximum, which represent abnormally warm and cold climates, respectively. More generally, paleoclimatology studies the climate prior to the widespread availability of instrumental records of temperature, precipitation, and other variables. Unfortunately, records of past climate from satellites and human measurements generally cover less than 150 years, a period too short to examine the full range of climate variability. Therefore, it is crucial to examine climate changes going back hundreds and thousands of years using paleoclimatic records from tree rings, corals, sediments, microfossils, glaciers, and other natural proxy sources (Cronin 2010).

Understanding how climate has changed on interannual to interdecadal time scales in the past can help scientists understand how climate may change in the future. For example, since the paleoclimate record shows that the Earth's climate system is capable of undergoing abrupt changes, drastic changes in the frequency and intensity of extreme events may be a symptom of such a process. The study of past climate change also helps us understand how humans influence the Earth's climate. For instance, the climate record over the last 1,000 years clearly shows that temperatures increased significantly in the twentieth century, and that this warming is likely to have been unprecedented over that entire period. The paleoclimatic record may also help unravel how much of this warming can be explained by natural causes and how much by human influences.

3.1.7 Global Ocean Circulation

The ocean is the major driver of global climate. It redistributes large amounts of heat around the planet via global ocean currents, through regional-scale upwelling and downwelling, and via a process called thermohaline circulation (Di Lorenzo et al. 2008; D'Orgeville and Peltier 2009), which refers to large-scale currents that are driven by fluxes of heat and freshwater across the sea surface and subsequent interior mixing of heat and salt (Rahmstorf 2003). Although winds and tides are important in creating turbulence, this driving mechanism is clearly distinct from wind-driven circulation: thermohaline circulation requires surface forcing caused by differences in the temperature and salinity of the water, while wind-driven circulation does not. Marine and coastal ecosystems, as we know them today, have adapted over time to the prevailing ocean circulation patterns. Global climate change, however, alters the factors that shape ocean circulation, such as wind, precipitation, temperature, and salinity patterns. These changes in forcing mechanisms may also lead to an increase in storm activity, thereby affecting local weather.

Thermohaline circulation behaves as a conveyor belt: it originates in the northern Atlantic Ocean, where cold, dense waters sink to the deep ocean. These waters travel across the ocean basins to the tropics, where they warm and upwell to the surface before being drawn back to polar latitudes to replace the cold sinking waters. During this journey heat is transferred to the atmosphere, causing the water to become cold and dense again and thus renewing the conveyor cycle. The salinity, and hence the density, of polar waters could be reduced by the melting of polar ice, which in turn could weaken the rate at which the water sinks and alter the movement of heat around the Earth. Moreover, changes in global air temperatures over land and the ocean, as well as increased temperature variations, will alter the atmospheric pressure gradients that drive the strength of winds over the ocean. Stronger winds are expected to induce rapid, intense upwelling that provides a large influx of nutrients in a short amount of time, which can also increase the frequency and distribution of hypoxic events—low-oxygen zones (Grantham et al. 2004; Chan et al. 2008). Increased variability of winds due to global climate change may also cause stronger and longer ENSO regimes (Yeh et al. 2009).

3.1.8 Extreme Weather Events

When a meteorological event comes as a surprise, such as a very hot summer, an unexpectedly mild winter, a flood, a drought, or a tornado, climate change is usually mentioned as one possible underlying cause. Yet climate scientists warn us about the intrinsically erratic nature of weather, as well as about the difficulty of disentangling the contribution of climate change to weather variability. Nevertheless, changes in some types of extreme events have already been observed, for example increases in the frequency and intensity of heat waves and heavy precipitation events. Since 1950 the number of heat waves has increased, and widespread increases have occurred in warm nights (Trenberth et al. 2007). In addition, the extent of regions affected by droughts has increased as precipitation over land has marginally decreased, while evaporation has increased due to warmer conditions. In general, the number of heavy daily precipitation events that lead to flooding has also risen, though not everywhere. Tropical storm and hurricane frequencies are well known to vary from year to year, but evidence suggests substantial increases in intensity and duration since the 1970s (Trenberth et al. 2007). In the extra-tropics, variations in the tracks and intensity of storms reflect variations in major features of the atmospheric circulation, such as the North Atlantic Oscillation.

In a warmer future climate, there will be an increased risk of more intense, more frequent, and longer-lasting heat waves. The European heat wave of 2003 was a clear example of the type of extreme heat event, lasting from several days to over a week, that is likely to become more common in a warmer future climate (Meehl et al. 2007). Most atmosphere/ocean GCMs predict increased dryness during summer and increased wetness during winter in most parts of the northern middle and high latitudes (Meehl et al. 2007). Therefore, along with the risk of droughts, there will be an increased chance of intense precipitation and flooding due to the greater water-holding capacity of a warmer atmosphere: intense, heavy downpours will be interspersed with longer, relatively dry periods.

There is evidence from modelling studies that future tropical cyclones could become more severe, with greater wind speeds and more intense rainfall (Bender et al. 2010). It has been suggested that such changes may already be underway; indeed, there are clear indications that the average number of Category 4 and 5 hurricanes per year has increased over the past 30 years (McQuaid 2012). However, the overall frequency of Atlantic hurricanes is not expected to increase dramatically as the climate warms (Knutson et al. 2008; Zhao et al. 2009). In fact, the signal forced by greenhouse gases is a long-term trend, and a period of 30 years is too short to distinguish such a trend from the multi-decadal fluctuations that are known to exist in the Atlantic (Landsea 2007). While the effect of global warming on hurricanes is still a matter of debate, climatic changes are responsible for the rise of global sea levels at a rate of about 1.7 mm per year between 1950 and 2009, and at an accelerated pace of about 3.3 mm per year from 1993 on, due to the expansion of warmer waters, the melting of polar ice, and shifts in rainfall patterns. The northeast Atlantic coast of North America is one region where this phenomenon is underway: a recent study has shown that sea levels from North Carolina to Canada have been rising at three to four times the global average since 1950 (Sallenger et al. 2012). By definition, higher seas mean higher storm surges, and hence more destructive storms; whether amplified by global warming or not, such storms can go from destructive to catastrophic. The danger is compounded by the fact that most coastal fortifications were built when sea levels were lower, on the assumption that conditions would not change.

3.2 General Circulation Models

A general circulation model, often abbreviated as GCM, uses essentially the same partial differential equations of motion as an NWP model. The abbreviation GCM is also used for a global climate model, which is almost the same as a general circulation model, except that the former term is preferred when the model deals specifically with global climate change. Although the main purpose of GCMs is to predict numerically the changes in climate that result from slow changes in boundary conditions or physical parameters, such as the greenhouse gas concentration, they can also be used for weather forecasting and for understanding climate. In general, NWP models are employed to predict the weather in the short (1 to 3 days) and medium (4 to 10 days) range, while GCMs are run for much longer times (years to decades, and decades to centuries) to learn about the climate in a statistical sense. A good NWP model can accurately predict the movement and evolution of atmospheric disturbances such as frontal systems and tropical cyclones. Although GCMs are capable of doing this as well, most of them accumulate errors so quickly that after about 2 weeks they become useless from the perspective of long-term weather forecasting. For example, an error of a few degrees in the sea surface temperature, or a small but systematic bias in cloudiness throughout the model, matters little to an NWP model, but for a GCM these factors are of great importance because they are relevant over a long-term evolution.

State-of-the-art GCMs couple models capable of simulating surface and deep ocean circulations to atmospheric GCMs. These models can be further coupled to dynamic models of sea ice and of conditions on land. Coupled atmosphere/ocean GCMs are the models most often used to make predictions of future climate. They are very data intensive and require the most powerful supercomputers in the world to run. A recent trend is to apply GCMs as components of Earth System Models, which couple GCMs to ice sheet models for the dynamics of the Greenland and Antarctic ice sheets, as well as to one or more chemical transport models (CTMs) for species relevant to climate. For example, a carbon CTM may allow a GCM to better predict changes in CO\(_{2}\) concentrations resulting from changes in anthropogenic emissions. In addition, this approach allows inter-system feedbacks to be accounted for, such as the effects of climate change on the recovery of the ozone hole (Allen 2004). Uncertainties in climate prediction depend on uncertainties in chemical, physical, and social models (Kerr 2001). In other words, even though progress has been made in incorporating more realistic chemistry and physics in the models, significant uncertainties and unknowns remain, especially regarding the future course of human population, industry, and technology.

The first long-range simulation of the general circulation of the atmosphere was carried out by Phillips (1956), using a two-level, quasi-geostrophic model on a \(\beta \)-plane channel with rudimentary physics. Following Phillips' seminal work, several GCMs were developed. One early model of particular interest is the Kasahara-Washington model (Kasahara and Washington 1967), developed at NCAR. After several attempts to create a basic representation of large-scale atmospheric flow, scientists at Princeton University's Geophysical Fluid Dynamics Laboratory (GFDL) produced a model that incorporated large eddies, making the simulation much more representative of the atmosphere (Smagorinsky 1963; Smagorinsky et al. 1965). This experiment was deemed a major success, and the model is considered to be the first true GCM. Following this success, research groups at UCLA, Lawrence Livermore National Laboratory (LLNL), and NCAR began to develop their own models (Ghan et al. 1982; Williamson 1983; Cess et al. 1985; Cess and Potter 1987; Williamson et al. 1987; Williamson and Olson 1994; Collins et al. 2004). Even with dramatic advances in technology and scientific knowledge, climatologists still have to make many compromises in modelling the Earth realistically. For example, until recently most models focused only on atmospheric circulation—ECHAM5 is an example of a relatively recent atmospheric GCM code developed by the Max Planck Institute for Meteorology (Roeckner et al. 2003)—and it was only during the 1990s that the first coupled atmosphere/ocean models began to appear. One important drawback of these models was their extremely coarse resolution: many processes had to be parameterized, small-scale disturbances like thunderstorms and cyclones were not accounted for, and peninsulas, islands, and great lakes simply did not exist. While fine resolution may be ideal, a balance must always be struck between model resolution and the computer power available.
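
To give a flavour of the dynamics involved, the barotropic limit of the quasi-geostrophic system that Phillips discretized can be written as a single conservation law. This is only a minimal sketch: his actual two-level model adds vertical coupling between the layers, friction, and heating.

\[
\frac{\partial q}{\partial t} + J(\psi ,q) = 0, \qquad q = \nabla ^{2}\psi + \beta y,
\]

where \(\psi \) is the streamfunction, \(q\) the quasi-geostrophic potential vorticity, \(J(\psi ,q)=\partial _{x}\psi \,\partial _{y}q-\partial _{y}\psi \,\partial _{x}q\) the advection Jacobian, and \(\beta \) the meridional gradient of the Coriolis parameter. All of the non-linearity, and hence the error growth discussed in Sect. 4, enters through the Jacobian term.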

As climate models evolved through the 1990s, scientists began to shift from reproducing the general circulation to experimenting with the feedbacks of climatic processes due to increasing greenhouse gases and changing ocean currents, and with the way the models respond to forced perturbations such as ENSO. As each new generation of models comes out, improvements and refinements make them more reliable for global predictions and more capable of regional analyses. Examples of such models are the latest version of the Community Climate System Model, CCSM3 (Gent 2006; Collins et al. 2006); HadCM3, a well-established coupled climate model that is cheap to run on current computers (Gordon et al. 2000; Pope et al. 2000), and HadGEM1, a state-of-the-art global environment model (Johns et al. 2006; Martin et al. 2006; Stott et al. 2006), both used at the Hadley Centre for climate modelling in the United Kingdom; the GISS GCM ModelE, the current incarnation of the GISS series of coupled atmosphere/ocean models developed by the National Aeronautics and Space Administration (NASA) (Schmidt et al. 2006); and the European Centre for Medium-Range Weather Forecasts (ECMWF) coupled global model (Bechtold et al. 2008a, b). These models make it possible to simulate many different configurations of Earth System Models, including interactive atmospheric chemistry, aerosols, the carbon cycle and other tracers, as well as the standard atmosphere, ocean, sea ice, and land surface components.

Today, citizens and policy-makers want to know what heat waves, droughts, or floods are likely to occur in their particular region. As the attention of the community has turned to making predictions in ever greater detail, only models that incorporate a much more realistic ocean and clouds can provide such answers. A scheme for representing clouds was developed in the 2000s at the Max Planck Institute for Meteorology (Gramelsberger 2010). This code uses 79 equations to describe the formation of stratiform clouds, incorporating a variety of constants, some known precisely from experiments or observations, and others that had to be adjusted. The computation for each grid cell was a challenge even for the fastest supercomputers. Looking farther afield, the future climate system cannot be determined very accurately until ocean/atmosphere GCMs are linked interactively with models for changes in vegetation: dark forests and bright deserts do not only respond to climate, they also influence it. Since the early 1990s, and particularly during the last decade, the more advanced GCMs have incorporated dynamic global vegetation models suitable for use in NWP models and coupled GCMs, allowing the simulation of vegetation-atmosphere interactions and of photosynthesis and respiration processes, as well as the representation of regional properties of vegetation (Quillet et al. 2010).

4 Predictability and Ensemble Forecasting

The accuracy of a given forecast depends on the internal error growth of the model, the model's accuracy, and the errors in the initial state. When the equations are solved at a global scale, the boundaries are periodic and the problem is an initial value problem: the initial conditions are integrated forward in time to obtain future states of the system. However, due to the intrinsically non-linear nature of the equations, the information contained in the initial conditions is lost within a few days, and the exact state of the system (the weather) becomes unpredictable. In other words, given the chaotic nature of the atmosphere, we can never create a perfect forecast system, because it is impossible to observe every detail of the atmosphere's initial state. Tiny errors in the initial conditions will be amplified, always imposing a limit on how far ahead we can predict any detail. If, on the other hand, we are interested in climate predictions, statistics of the system can still be obtained. In this case the initial conditions become unimportant and the problem reduces to a boundary value problem, in which only the response of the climate to external forcings, such as changes in solar radiation or in the concentration of greenhouse gases, is of interest.
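
This sensitivity to initial conditions is easy to demonstrate with a toy model. The sketch below is purely illustrative and uses the classic Lorenz (1963) system rather than any operational model: two initial states differing by one part in a million are integrated forward, and their separation grows roughly exponentially until all memory of the initial state is lost.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Right-hand side of the Lorenz (1963) system.
    x, y, z = state
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

def integrate(state, dt=0.01, steps=2000):
    # Fourth-order Runge-Kutta time stepping.
    trajectory = [state]
    for _ in range(steps):
        k1 = lorenz63(state)
        k2 = lorenz63(state + 0.5 * dt * k1)
        k3 = lorenz63(state + 0.5 * dt * k2)
        k4 = lorenz63(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        trajectory.append(state)
    return np.array(trajectory)

# Two initial states differing by one part in a million.
control = integrate(np.array([1.0, 1.0, 1.0]))
shifted = integrate(np.array([1.0 + 1e-6, 1.0, 1.0]))

# The separation grows roughly exponentially before saturating.
error = np.linalg.norm(control - shifted, axis=1)
print(error[::400])
```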

The first attempt to address this problem in NWP was to calculate how errors in the initial state are likely to grow in particular meteorological situations. Epstein (1969) first proposed using an ensemble of stochastic Monte Carlo simulations to produce means and variances for the state of the atmosphere; it was subsequently demonstrated that such simulations produce adequate forecasts only when the ensemble probability distribution is a representative sample of the probability distribution of the atmosphere (Leith 1974). Accepting the findings of chaos theory about the sensitivity of predictions to uncertainties in the initial conditions, it has become common practice to undertake a set of forecasts, or ensemble, with the same model but starting the runs from slightly different initial conditions. Small perturbations are added to the reference analysis, with amplitudes selected to lie within the accuracy of the initial state.
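
A minimal Monte Carlo ensemble in this spirit can be sketched by reusing the Lorenz model and `integrate` function from the previous example; the ensemble size, Gaussian perturbation shape, and amplitude below are illustrative assumptions, not the values used by any operational centre.

```python
rng = np.random.default_rng(42)
analysis = np.array([1.0, 1.0, 1.0])   # best estimate of the initial state
analysis_error = 1e-3                  # assumed accuracy of the analysis

# Each member starts from the analysis plus a small random perturbation
# whose amplitude lies within the accuracy of the initial state.
members = np.array([
    integrate(analysis + analysis_error * rng.standard_normal(3))
    for _ in range(50)
])                                     # shape: (member, time, variable)

ensemble_mean = members.mean(axis=0)   # deterministic best estimate
ensemble_spread = members.std(axis=0)  # flow-dependent forecast uncertainty
```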

Starting in 1992, ensemble forecasts have been used operationally by ECMWF and NCEP to account for the stochastic nature of weather processes. In particular, ECMWF has made major contributions to this technique and has over the years developed and improved an operational system for ensemble prediction (Molteni et al. 1996; Mullen and Buizza 2002; Buizza et al. 1999, 2003, 2007). The ECMWF weather prediction model is run 51 times from slightly different initial conditions. One forecast, called the EPS control forecast, is run from the operational ECMWF analysis; it is accompanied by 50 additional integrations, called the perturbed members, which are designed to represent the uncertainties inherent in the operational analysis. The initial perturbations are generated using the singular vector technique to simulate the initial probability density (Barkmeijer et al. 1999). In contrast, the NCEP ensemble—the Global Ensemble Forecasting System—uses bred vectors, which are related to Lyapunov vectors and are created by adding initially random perturbations to the model and periodically rescaling the growing differences (Toth and Kalnay 1997; Kalnay 2003).
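
The breeding idea can also be reduced to a few lines. In the toy version below, which again reuses the Lorenz model and `integrate` function from the earlier sketches with an illustrative amplitude and cycle length, a perturbed run is carried alongside the control run and their difference is periodically rescaled back to a fixed amplitude, so that the perturbation aligns itself with the fastest-growing directions of the flow.

```python
def breed(control, amplitude=1e-2, cycles=40, steps_per_cycle=100):
    # Start from an arbitrary perturbation of the chosen amplitude.
    perturbation = amplitude * np.ones(3) / np.sqrt(3.0)
    for _ in range(cycles):
        perturbed = control + perturbation
        # Advance control and perturbed runs through one breeding cycle.
        control = integrate(control, steps=steps_per_cycle)[-1]
        perturbed = integrate(perturbed, steps=steps_per_cycle)[-1]
        # Rescale the grown difference back to the initial amplitude;
        # repeating this aligns it with the fastest-growing directions.
        difference = perturbed - control
        perturbation = amplitude * difference / np.linalg.norm(difference)
    return control, perturbation

state, bred_vector = breed(np.array([1.0, 1.0, 1.0]))
```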

Ensemble forecasting is being used for many problems, including global weather, hurricane track and intensity forecasts, and seasonal climate simulations. Seasonal forecasts, with a range of up to 6 months, are currently made with coupled GCMs by combining large numbers of forecasts in an ensemble each month, with impressive predictions for tropical regions and for the onset of El Niño and La Niña events. In the same way that many forecasts from a single model can be used to form an ensemble, multiple models can also be combined to produce an ensemble forecast. This approach is called multi-model ensemble forecasting, and it has been shown to improve forecasts when compared to a single-model approach (Krishnamurti et al. 2000; Weigel et al. 2008, 2009; Zhou and Du 2010). One recent multi-model approach to medium-range weather forecasting is the THORPEX Interactive Grand Global Ensemble (TIGGE) (Bougeault et al. 2010). However, a recent comparison of the TIGGE multi-model forecasts with reforecast-calibrated ECMWF ensemble forecasts in extra-tropical regions has shown that the latter were of comparable or superior quality to the multi-model predictions (Hagedorn et al. 2012). The reforecast calibration procedure is particularly helpful at locations with clearly detectable systematic errors, such as areas with complex orography or coastal grid points, while the multi-model approach might be advantageous in situations where it is able to suggest alternative solutions not predicted by the single model of choice. It would therefore be desirable in the not-so-distant future to explore the relative merits of multi-model versus reforecast-calibrated predictions for other user-relevant variables, such as precipitation and wind speed. Moreover, the models within a multi-model ensemble can be adjusted for their various biases, a process known as superensemble forecasting, which significantly reduces errors in the model output (Cane and Milelli 2010).
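
At its core, superensemble forecasting of the Krishnamurti type amounts to a least-squares regression of past member forecasts against verifying observations over a training period; the fitted weights, rather than a simple average, are then applied to new forecasts. The sketch below illustrates the idea on synthetic data, so the member biases, noise level, and ensemble size are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: three "models" forecasting the same truth,
# each with its own constant bias and random error.
truth = np.sin(np.linspace(0.0, 20.0, 200))
train_forecasts = np.column_stack([
    truth + bias + 0.2 * rng.standard_normal(truth.size)
    for bias in (0.5, -0.3, 0.1)
])                                     # shape: (time, model)

# Least-squares fit of the member forecasts to the observations;
# the intercept column absorbs the constant biases.
X = np.column_stack([np.ones(truth.size), train_forecasts])
weights, *_ = np.linalg.lstsq(X, truth, rcond=None)

def superensemble(new_forecasts):
    # Bias-correcting weighted combination of the member forecasts.
    return weights[0] + new_forecasts @ weights[1:]
```

In practice the regression is typically performed separately for each location and variable, and the training set consists of past operational forecasts and their verifying analyses rather than synthetic data.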

5 Future Perspectives and Challenges

Decision-makers from 155 nations agreed in 2009 to establish the world’s first framework for climate services, an effort that will supply on-demand climate predictions to governments, businesses, and individuals. By providing tailored information on how climate change will affect certain regions and sectors, the Global Framework for Climate Services will help the world better adapt to the challenges of climate variability and change. This vision marks a new era in climate science, one in which seasonal weather forecasting and long-term climate projections will merge seamlessly, giving rise to decadal climate predictions that have the skill and reliability of weather forecasts. Provision of these data to local planners and policy-makers will really be a service to society.

Evidence that climate predictions can provide precise and accurate guidance about how the long-term future may evolve is largely lacking. In this sense, scientists and decision-makers alike should think of climate models as just one of a range of tools for exploring future possibilities. Unfortunately, predictive skill is unknown for climate at the decade-to-century timescale. Unlike weather forecasts, whose value in informing decision-making can routinely be tested over time by comparison with observed weather patterns, there is currently no such empirical evidence with which to test the skill of climate predictions. Certainly, as knowledge of the climate system and of how it responds to greenhouse gases improves, model predictions will change, as will their probability distributions.

The sophistication of prediction models is closely linked to the available computer power. Advances in digital computer technology, together with developments in atmospheric dynamics, instrumentation, and observing practice, have steadily increased forecast accuracy over the half-century of NWP activity, and progress continues on several fronts. However, some formidable challenges remain. The effective computational coupling between the dynamical processes and the physical parameterizations is one of these big challenges. Another is nowcasting, the process of predicting changes over periods of a few hours: current numerical methods provide guidance that occasionally falls short of what is required to take effective action and avert disasters. Although the greatest value is obtained by systematically combining NWP products with conventional observations as well as radar and satellite imagery, much remains to be done to develop optimal nowcasting systems. Furthermore, the chaotic nature of the atmosphere imposes limitations on the validity of deterministic forecasting. Ensemble forecasts provide probabilistic guidance, but so far their use has proved quite difficult in many cases. While reasonably good progress has been made in seasonal forecasting for the tropics, long-range forecasts for temperate regions remain a further challenge. Allied to this is the modelling and prediction of climate change, a matter of increasing importance and concern. As technology continues to advance at a faster rate than ever, we may be optimistic that future developments will lead to notable improvements in both weather and climate prediction.