Introduction by Guido Visconti

In the development of climate modeling, two principal paths can be distinguished. On one side is the effort to increase resolution so as to eliminate, as far as possible, the parameterization of small-scale processes. This approach is an improvement on the “mechanistic” way of modeling climate, mainly through the use of General Circulation Models (GCMs). The other approach, born out of the theory of dynamical systems, has as its objective an improvement in the theoretical basis of climate science. A minor but promising development is direct statistical simulation (DSS), following a suggestion made by Edward Lorenz almost sixty years ago. The improvement of GCMs is oriented mainly toward the classical application of predicting the future climate and its impact on human activities, while the other two approaches have explained some specific phenomena, like atmospheric jets or El Niño, but their use as predictive tools is not expected in the near future.

When the resolution of a model goes down to about 1 km it becomes possible to simulate deep convection, ocean eddies and land–atmosphere interactions in detail. An improvement of this kind eliminates the need to parameterize small-scale processes and in turn reduces the biases that affect most GCM results. Preliminary results obtained at such resolution show that the reliability of regional climate projections is also enhanced. A notable improvement has been obtained by increasing the resolution of numerical weather predictions, and a similar effect is expected for climate projections. However, in this case the financial burden is such that the implementation of such techniques must be multinational, using the best available technological infrastructure. The reliance on governmental funding may create some conflict of interest within the scientific community.

The theoretical development of climate dynamics refers exactly to those sub-grid-scale processes mentioned above, which are accounted for in a stochastic manner. The complexity of the climate system is such that theoretical studies can be carried out only on highly idealized models (like the Lorenz equations) or on relatively simple systems, such as a very elementary model of an ocean current. Nevertheless, numerical experiments on such simple systems can give valuable indications on the internal variability (IV) or forced variability (FV) of the climate system. A key concept for studying such effects is the so-called pullback/snapshot attractor (PBA). In the theory of non-linear deterministic systems, the attractor is determined by integrating the system forward in time and studying where its trajectories lie in phase space. In the PBA approach the system is studied as an ensemble: the relevant equations are initialized with slightly different conditions at some very distant initial time and integrated up to the present, where they reveal the nature of their attractor. The different initial conditions may represent an ensemble of climate systems that obey the same equations, and the results are given in terms of a probability distribution. As mentioned at the beginning, this approach has been applied only to very simple systems and can hardly contribute to predicting the future climate (or studying the climate of the past), but it can give important indications for the interpretation of such results.
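
A minimal sketch of the ensemble (pullback/snapshot) construction just described, using the Lorenz-63 equations with an invented, slowly drifting forcing parameter as a stand-in for a changing climate; the parameter values, the drift law and the ensemble size are illustrative assumptions, not taken from any specific study. Many copies of the system are started from slightly different states in the distant past and integrated to the present, and the statistics across the ensemble at that final instant approximate the natural measure of the snapshot attractor.

```python
import numpy as np

def lorenz63_rhs(states, t, sigma=10.0, beta=8.0 / 3.0):
    """Lorenz-63 with a slowly drifting parameter r(t) standing in for a time-dependent forcing."""
    x, y, z = states[:, 0], states[:, 1], states[:, 2]
    r = 28.0 + 0.01 * t                      # illustrative drift of the forcing parameter
    return np.column_stack([sigma * (y - x), x * (r - z) - y, x * y - beta * z])

def snapshot_ensemble(n_members=200, t_start=-100.0, t_end=0.0, dt=0.005, seed=0):
    """Integrate an ensemble from a distant past up to the present (vectorized RK4)."""
    rng = np.random.default_rng(seed)
    s = np.array([1.0, 1.0, 20.0]) + 0.1 * rng.standard_normal((n_members, 3))
    t = t_start
    while t < t_end:
        k1 = lorenz63_rhs(s, t)
        k2 = lorenz63_rhs(s + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = lorenz63_rhs(s + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = lorenz63_rhs(s + dt * k3, t + dt)
        s = s + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += dt
    return s

present = snapshot_ensemble()
# Ensemble statistics at the present instant approximate the natural measure of the snapshot attractor.
print("ensemble mean:", present.mean(axis=0))
print("ensemble std :", present.std(axis=0))
```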

The last development we would like to mention refers to the statistics of the fluid motions on the Earth (the oceans and the atmosphere). The core of any climate model is to integrate in time the relevant equations (thermodynamics, fluid dynamics, chemistry) up to a statistical steady state. Any further integration will reveal the statistics of the system. This approach was not appreciated by Edward Lorenz, who in his famous 1967 monograph on the general circulation of the atmosphere affirmed:

More than any other theoretical procedure, numerical integration is also subject to the criticism that it yields little insight into the problem. The computed numbers are not only processed like data but they look like data, and a study of them may be no more enlightening than a study of real meteorological observations. An alternative procedure which does not suffer this disadvantage consists of deriving a new system of equations whose unknowns are the statistics themselves.

The suggestion by Lorenz was implemented by Brad Marston of Brown University through Direct Statistical Simulation (DSS), which however revealed a few problems, as envisaged by Lorenz in the same monograph:

[DSS] can be very effective for problems where the original equations are linear, but, in the case of non-linear equations, the new system will inevitably contain more unknowns than equations, and can therefore not be solved, unless additional postulates are introduced

Marston has applied DSS to the study of zonal jets on Earth, reproducing some of their features. As with the development of the theoretical basis of climate science, this approach has a long way to go before it can find useful applications. Most of these theoretical attempts to study climate reveal something like an inferiority complex in the scientific community that works on these subjects. The point is that not all sciences must necessarily use very sophisticated mathematics (consider biology!), and yet they deserve the same respect. Rephrasing Richard Lewontin: is it more interesting to explain why a mouse and an elephant fall with the same acceleration in a vacuum, or to explain why they came to have such different sizes?

Machine-Generated Summaries

Keywords: variability, forcing, force, period, state, ensemble, approach, parameter, analysis, feedback, response, climate change, base, enso, consider.

History Matching for Exploring and Reducing Climate Model Parameter Space Using Observations and a Large Perturbed Physics Ensemble

https://doi.org/10.1007/s00382-013-1896-4

Abstract-Summary

We apply an established statistical methodology called history matching to constrain the parameter space of a coupled non-flux-adjusted climate model (the third Hadley Centre Climate Model; HadCM3) by using a 10,000-member perturbed physics ensemble and observational metrics.

History matching uses emulators (fast statistical representations of climate models that include a measure of uncertainty in the prediction of climate model output) to rule out regions of the parameter space of the climate model that are inconsistent with physical observations given the relevant uncertainties.

Our methods rule out about half of the parameter space of the climate model even though we only use a small number of historical observations.

We explore 2 dimensional projections of the remaining space and observe a region whose shape mainly depends on parameters controlling cloud processes and one ocean mixing parameter.

Constraining parameter space using easy to emulate observational metrics prior to analysis of more complex processes is an important and powerful tool.

It can remove complex and irrelevant behaviour in unrealistic parts of parameter space, allowing the processes in question to be more easily studied or emulated, perhaps as a precursor to the application of further relevant constraints.

Extended

An ideal analysis, and perhaps the only way to truly quantify this uncertainty, must involve expert judgement regarding the deficiencies in the mathematical representations of the physics in the model and the ways they might be improved in order to capture these deficiencies in future generations of models (see Goldstein and Rougier [1], for further discussion).

Introduction

The PPE is used to inform us about the behaviour of the model in this space and, in particular, about regions of the space where model-based projections are not predicted to be inconsistent with current observations.

We illustrate the application of existing methodology from the statistics with computer experiments literature to rule out regions of the parameter space of the UK Met Office’s Third Hadley Centre Ocean–Atmosphere General Circulation Model (HadCM3) (Pope et al. [2]; Gordon et al. [3]) containing model runs that are inconsistent with a handful of physically important observational metrics.

We use emulators (fast statistical representations of climate models that include a measure of uncertainty in the prediction of climate model output) and four pre-industrial global and hemispheric averages of climatic variables to remove over half of the explored space.

We explore parts of HadCM3's parameter space previously unstudied by running models with parameter choices outside the ranges established by Murphy et al. [4].

History Matching

The Bayesian approach uses the PPE to learn about the behaviour of the model throughout its parameter space in order to find regions of the space where model based projections are not inconsistent with current observations.

Bayesian calibration requires a stochastic representation of the climate model, called an emulator (Sacks et al. [5]), to be constructed and to be reliable across the whole of parameter space.

History matching, like Bayesian calibration, requires a statistical model that relates the climate model to reality.

History matching, on the other hand, allows us to use even the most simple outputs of the climate model in order to begin ruling out regions of parameter space.

We can take simple outputs that are relatively easy to model statistically, use them to rule out parameter space via history matching, then focus on emulating more complex model output within NROY space, where they may be easier to model statistically.
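
The following is a minimal sketch of the rule-out step just described, with a one-parameter toy "model" standing in for an emulator mean and invented numbers for the observation, its error, the structural discrepancy and the emulator variance (none of these are the paper's values). A common choice, assumed here, is to rule out parameter settings whose implausibility I(x) = |z − E[f(x)]| / sqrt(Var[z − E[f(x)]]) exceeds 3.

```python
import numpy as np

# Toy "climate model" output as a function of a single parameter x (stand-in for an emulator mean).
def emulator_mean(x):
    return 14.0 + 3.0 * np.sin(2.0 * x)   # illustrative global-mean metric

emulator_var = 0.04      # emulator prediction variance (assumed)
obs = 14.5               # observed metric z (assumed)
obs_var = 0.25           # observation error variance (assumed)
discrepancy_var = 0.5    # structural model discrepancy variance (assumed)

# Candidate parameter settings spanning the (here one-dimensional) parameter space.
x = np.linspace(0.0, np.pi, 10_000)

# Implausibility: standardized distance between observation and emulator prediction.
implausibility = np.abs(obs - emulator_mean(x)) / np.sqrt(obs_var + discrepancy_var + emulator_var)

nroy = implausibility <= 3.0   # "Not Ruled Out Yet" space
print(f"fraction of parameter space not ruled out: {nroy.mean():.2f}")
```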

History Matching HadCM3

A principal motivation for history matching here, and in any application involving a large ensemble of a climate model, is to make our emulators for key and difficult to model quantities easier to fit and more accurate than they would be if fitted using all data within the unconstrained parameter space.

In an analysis that goes on to provide probabilistic climate predictions such as that in Murphy et al. [6], we would run a second ensemble within NROY space and with twentieth century forcing, and either use history matching with more complex constraints to further reduce NROY space or use Bayesian calibration with more complex constraints to generate probabilistic predictions.

This can vary substantially from observations (with values across the ensemble ranging between −5 and 33 °C) as the parameter perturbations alter the radiative balance at the top of the atmosphere.

History Matching with Multi-model Ensembles

This allows us to derive Var[z − E[f(x)]], the denominator of our implausibility measure (4), under the interpretation that the resulting discrepancy variance represents a tolerance to error which is consistent with using CMIP3 as representative of the judgements of the climate community regarding what represents an informative climate model.

Of adopting a formal statistical model relating CMIP3 and HadCM3 to each other and to the true climate, we are able to obtain ‘low’ and ‘high’ tolerance alternatives for the discrepancy variance to be used in a sensitivity analysis.

If we observed a small or negative estimate for Var[U], we may re-visit the second order exchangeability assumption for all models in the collection, or investigate the sensitivity of the estimate to the ensemble size.

NROY Space

In our ensemble, although changes in SAT are a result of different parameter perturbations rather than increasing greenhouse gases, it is likely that the sea ice would respond in a similar way, possibly resulting in the negative correlations found between SAT and SGRAD or SCYC.

SAT is, therefore, the dominant constraint due to the high correlations with the other variables and because of the relatively low discrepancy and observation errors of SAT compared with the other variables in relation to the ensemble ranges for those variables.

Although SAT dominates and, for SCYC and PRECIP at least, no ensemble members that are not ruled out by SAT alone are ruled out by the additional constraints, this does not mean that the additional variables provide no further constraint on parameter space.

The AMOC and MHT in NROY Space

It reveals a nonlinear relationship between SAT and AMOC across the whole parameter space, but an approximately linear relationship within NROY space.

By history matching on simpler (i.e. univariate and easier to emulate) quantities such as global mean SAT, we have removed many ensemble members with unrealistically weak control AMOC strengths and removed a substantial part of the burden faced in emulating a complex quantity such as the AMOC time series over the whole parameter space by just focussing on that part of the parameter space that we are unable to easily rule out on the basis of observations.

This means that the AMOC in NROY space is, on average, more responsive to CO2 forcing than in the ruled out space, and that the sensitivity of the AMOC depends on the model parameters through the SAT.

Discussion

This tolerance is set using the error on the observations and discrepancy variance information derived from CMIP3, whilst accounting for the uncertainty in our emulator-based predictions.

The potential impact on our conclusions is assessed using a sensitivity analysis showing that even increasing the estimated discrepancy variance by an order of magnitude only results in around 10% less parameter space removed by history matching.

Complex constraints require sophisticated statistical emulators that are valid throughout NROY space in order to impose them.

AOGCMs of different resolution can be linked statistically (Williamson et al. [7]) so that large PPEs using coarser-resolution models and a small PPE using a very expensive model can be used together to emulate the advanced model and quantify its parametric uncertainty in the ways we have described.

Acknowledgments

A machine generated summary based on the work of Williamson, Daniel; Goldstein, Michael; Allison, Lesley; Blaker, Adam; Challenor, Peter; Jackson, Laura; Yamazaki, Kuniko (2013 in Climate Dynamics).

Advances in Projection of Climate Change Impacts Using Supervised Nonlinear Dimensionality Reduction Techniques

https://doi.org/10.1007/s00382-016-3145-0

Abstract-Summary

Due to the complexity of climate-associated processes, identification of predictor variables from high dimensional atmospheric variables is considered a key factor for improvement of climate change projections in statistical downscaling approaches.

The present paper adopts a new approach of supervised dimensionality reduction, which is called “Supervised Principal Component Analysis (Supervised PCA)” to regression-based statistical downscaling.

To capture the nonlinear variability between hydro-climatic response variables and projectors, a kernelized version of Supervised PCA is also applied for nonlinear dimensionality reduction.

The effectiveness of the Supervised PCA methods in comparison with some state-of-the-art algorithms for dimensionality reduction is evaluated in relation to the statistical downscaling process of precipitation in a specific site using two soft computing nonlinear machine learning methods, Support Vector Regression and Relevance Vector Machine.

Extended

Due to the complexity of climate-associated processes, the two main challenges in developing the stochastic regression-based statistical downscaling approaches for climate change projection are: (1) determination of the functional relationship; and (2) identification of predictor variables from high dimensional atmospheric variables conveying climate change information with respect to the hydro-climate variable of interest.

Due to the complexity and nonlinearity of climate associated processes, and the existence of nonlinear interdependency within atmospheric projectors, a kernelized form of supervised dimensionality reduction is able to efficiently model the nonlinear variability of the data.

To better manage future near and long-term surface water resources for various purposes, especially drinking water use, authorities must be ready to mitigate adverse effects of rainfall shortage and surface water reduction under the impact of climate change.

Future research can focus on improving this characteristic of Supervised PCA.

Introduction

The statistical downscaling approaches relying on developing a statistical and quantitative relationship between large-scale atmospheric variables and fine scale variables at a particular site have gained more popularity among hydrologists wanting to predict climate change impacts on hydro-climate variables.

Due to the complexity of climate-associated processes, the two main challenges in developing the stochastic regression-based statistical downscaling approaches for climate change projection are: (1) determination of the functional relationship; and (2) identification of predictor variables from high dimensional atmospheric variables conveying climate change information with respect to the hydro-climate variable of interest.

To address the first challenge, nonlinear soft-computing data-driven regression modeling techniques such as Artificial Neural Networks (ANN) (Tisseuil et al. [8]; Tomassetti et al. [9]), machine learning methods, including Support Vector Machine (SVM) (Chen et al. [10]; Tripathi et al. [11]), and Sparse Bayesian learning algorithm or Relevance Vector Machine (RVM) (Ghosh and Mujumdar [12]; Joshi et al. [13]), have been applied to improve the downscaling of different hydro-climate variables so as to capture the nonlinearity between hydro-climate predictands and atmospheric predictors.

In statistical downscaling processes, projecting a dependent hydro-climate variable from high-dimensional large-scale atmospheric variables leads to inadequate results in terms of performance accuracy, due to the curse of dimensionality.

Dimensionality Reduction Methods

In a given high-dimensional data set, consider projecting a response stochastic variable using a set of independent high-dimensional explanatory random variables.

A preprocessing step deriving an appropriate low-dimensional manifold encoding of a high-dimension data set is crucial to reaching the best performance on learning.

Unsupervised Methods

Since the relationship between climate variables and transformed explanatory atmospheric projectors is still complex and nonlinear, two soft computing nonlinear machine learning methods Support Vector Regression (SVR), and Relevance Vector Machine (RVM) are employed to capture the nonlinearity and evaluate different dimensionality reduction methods in terms of response variable projection performance.

After selecting the best combined dimensionality-reduction and machine-learning method based on the performance criteria, the credibility of the model should be validated under the impact of changing conditions (non-stationarity) arising from global warming.

The different experiments designed to validate the performance of the combined supervised dimensionality reduction and machine-learning models in the current study are discussed in more detail as follows: I. Base experiment (Tr-RAN-Te-RAN): a random selection of training and validating periods (K-fold cross-validation) is used as a scenario for the validity of the model.

Results and Discussion

Doing so, the best-selected models (Kernel Supervised PCA and RVM) are employed on the different dimension-size of the atmospheric projectors formed based on the six predictor domain states to compare the model performances over the study area.

After selecting the best dimensionality reduction method and demonstrating the sensitivities and corresponding sources of uncertainty in terms of predictor sets, the best combination of the Kernel Supervised PCA and the RVM model formed based on the nine surrounding-grid-cells is employed for projecting precipitation time series for the upcoming decades.

Using the same tuned Kernel Supervised PCA model in the modeling section, the derived transformed atmospheric projectors for the upcoming decades based on different scenarios (in the same reduced-dimension extracted in the modeling) are employed for precipitation projection using the best selected RVM data-mining method.

Conclusions

To improve the performance and the predictive power of the statistical downscaling processes with high-dimensional input data, this study has presented a supervised nonlinear dimensionality reduction technique–Supervised PCA–for extracting principal components in which the dependency between the response hydro-climate variable and large-scale atmospheric projectors is maximized.

Due to the complexity and nonlinearity of climate associated processes, and the existence of nonlinear interdependency within atmospheric projectors, a kernelized form of supervised dimensionality reduction is able to efficiently model the nonlinear variability of the data.

The Supervised PCA method is able to capture the complex nonlinear dependency between target precipitation variable and the atmospheric projectors.

The proposed methodology can be used for other hydro-climate variables and also other regression-based statistical downscaling processes to improve the projection accuracy of target hydro-climate variables in the future.
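
A minimal sketch of the Supervised PCA extraction described above, in the HSIC-based formulation commonly used for Supervised PCA (assumed here): the projection directions are the leading eigenvectors of X^T H L H X, where H is the centering matrix and L a kernel over the response, so that the retained components maximize dependence on the predictand. The data, kernel choice and number of components below are illustrative assumptions, and the fully kernelized variant (which additionally kernelizes the predictors) is not shown.

```python
import numpy as np

def supervised_pca(X, y, n_components=2, rbf_gamma=None):
    """Supervised PCA sketch: directions of X that maximize dependence (HSIC) on the response y.

    X: (n_samples, n_features) atmospheric predictors, y: (n_samples,) hydro-climate response.
    If rbf_gamma is given, an RBF kernel of y is used as the target kernel, otherwise y y^T.
    """
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    y = y.reshape(-1, 1)
    if rbf_gamma is None:
        L = y @ y.T                                 # linear target kernel
    else:
        L = np.exp(-rbf_gamma * (y - y.T) ** 2)     # RBF target kernel
    Q = X.T @ H @ L @ H @ X                         # dependence-weighted covariance
    eigval, eigvec = np.linalg.eigh(Q)
    U = eigvec[:, np.argsort(eigval)[::-1][:n_components]]
    return H @ X @ U, U                             # projected (centered) data and loadings

# Illustrative use: 200 grid-point predictors, one response that depends on two of them.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 200))
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.standard_normal(300)
Z, U = supervised_pca(X, y, n_components=3)
print(Z.shape)   # (300, 3) low-dimensional projectors for a downstream SVR/RVM regression
```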

Acknowledgments

A machine generated summary based on the work of Sarhadi, Ali; Burn, Donald H.; Yang, Ge; Ghodsi, Ali (2016 in Climate Dynamics).

The Theory of Parallel Climate Realizations

https://doi.org/10.1007/s10955-019-02445-7

Abstract-Summary

Based on the theory of “snapshot/pullback attractors”, we show that important features of the climate change that we are observing can be understood by imagining many replicas of Earth that are not interacting with each other.

These parallel climate realizations evolving in time can be considered as members of an ensemble.

We argue that the contingency of our Earth’s climate system is characterized by the multiplicity of parallel climate realizations rather than by the variability that we experience in a time series of our observed past.

The natural measure of the snapshot attractor enables one to determine averages and other statistical quantifiers of the climate at any instant of time.

We recall that systems undergoing climate change are not ergodic, hence temporal averages are generically not appropriate for the instantaneous characterization of the climate.

This can lead in certain climate-change scenarios to the coexistence of two distinct sub-ensembles representing dramatically different climatic options.

The problem of pollutant spreading during climate change is also discussed in the framework of parallel climate realizations.

Extended

A detailed investigation of these and similar quantities is beyond the scope of our paper, and might be the subject of future studies.

Introductory Comments

The traditional theory of chaos in dissipative systems has taught us that on a chaotic attractor there is a plethora of states, all compatible with the single equation of motion of the problem belonging to a fixed set of parameters [14, 15].

The distribution of the parallel states is not arbitrary: an additional property of attractors with chaotic properties is the existence of a unique probability measure, the natural measure [16], which describes the distribution of the permitted states in the phase space [14,15,16].

In the next Section we show that a changing climate can be described by an extension of the traditional theory of chaotic attractors: in particular, the theory of snapshot/pullback attractors [17, 18] appears to be an appropriate tool to handle the problem.

Particular emphasis is put on the natural measure on snapshot attractors with respect to which averages and other statistics can be evaluated providing a characterization of the climate at any instant of time.

Changing Climates: Mathematical Tools

Loosely speaking, a snapshot or pullback attractor can be considered as a unique object of the phase space of a dissipative dynamical system with arbitrary forcing to which an ensemble of trajectories converges within a basin of attraction.

Even if the concept of snapshot and pullback attractors is practically the same, it is useful to remark here that a pullback attractor is defined as an object that exists along the entire time axis (provided the dynamics remains well-defined back to the remote past), while a snapshot attractor is a slice of this at a given, finite instant of time (their union over all time instants thus constitutes the entire pullback attractor).

Note, however, that if the dynamics is not defined back to the remote past, then the pullback attractor is also undefined; but from some time after initialization the snapshot attractors can be practically identified.

The Theory of Parallel Climate Realizations

In order to make plausible the unusual concept of pullback/snapshot attractors, which might appear too mathematically oriented while the concept of observed time series is widely used, we proposed the term parallel climate realizations in [19].

The ensemble, representing the natural measure of the snapshot attractor, undergoes a change in time due to the time-dependence of the forcing, and, as a consequence, both the “mean state” (average values) and the internal variability of the climate changes with time.

If the climate state itself is changing markedly within such a time interval, these averages unavoidably yield statistical artefacts that may be misinterpreted as they mix up events of the recent and more remote past.

We can say that the ensemble of parallel climate realizations is the generalization of the Gibbs distributions known from statistical physics for a non-equilibrium system whose parameters are drifting in time.

Illustration of Parallel Climate Realizations

An investigation of the Lorenz84 model with seasonal forcing [20] was carried out by Bódai et al. [21] from the point of view of an ensemble approach and led to the conclusion that the snapshot attractor of the forced system appears to be chaotic in spite of the fact that in extended regions of the forcing parameter F of the time-independent system the attractors are periodic.

Additional issues about initialization may arise from the insufficient spinup time: drifts corresponding to the convergence process from some state off the attractor in deep oceanic variables may appear and are actually documented in the MPI-GE [22], while their importance is mostly unknown in the other ensembles.

If one takes the time evolution of the atmospheric variables in different members of such a hypothetical full ensemble, and then constructs an ocean ensemble with each of these atmospheric realizations applied as a fixed forcing, the result will be an extended set of OCCIPUT-type ocean ensembles.

Nonergodicity and Its Quantification

The original observation of Romeiras et al. [17], according to which a single long trajectory traces out a pattern different from that of an ensemble stopped at a given instant, implies that ergodicity (in the sense of the coincidence of ensemble and temporal averages) is not met in nonautonomous systems.

Traditional chaotic attractors are known to be ergodic [16]: sufficiently long temporal averages coincide with averages taken with respect to sufficiently large ensembles.

The nonergodic mismatch can be evaluated along each single realization of the climate ensemble and depends on the realization.

Teleconnections: Analyzing Spatial Correlations

When teleconnections are investigated through the temporal correlations between a so-called teleconnection index and another variable (e.g., temperature or precipitation), a single correlation coefficient is obtained.

With a sufficiently large number (N) of realizations an ensemble-based instantaneous correlation coefficient can be defined which provides the appropriate characterization of the strength of teleconnections in the spirit of parallel climate realizations.

The NAO teleconnection index (NAOI) is based on the difference in the normalized sea level pressure between Iceland and the Azores.

At a given time instant, it is also possible to compute an instantaneous teleconnection index in the spirit of parallel climate realizations.

An increasing strength of the teleconnection between a particular ENSO index and the Indian summer precipitation has been detected in the MPI Grand Ensemble in the twentieth century.
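
A minimal sketch of the ensemble-based instantaneous correlation mentioned above: at a fixed time instant the correlation between an index and another variable is evaluated across the N parallel realizations rather than along time, so it remains meaningful even when the statistics drift. The synthetic ensemble and the prescribed, slowly strengthening coupling below are placeholders, not output of any real model ensemble.

```python
import numpy as np

rng = np.random.default_rng(42)
n_members, n_years = 100, 150

# Synthetic ensemble: a teleconnection-index-like variable and a precipitation-like variable,
# with a coupling strength that slowly changes in time (stand-in for a changing climate).
index = rng.standard_normal((n_members, n_years))
coupling = np.linspace(0.2, 0.8, n_years)          # illustrative strengthening teleconnection
precip = coupling * index + np.sqrt(1 - coupling**2) * rng.standard_normal((n_members, n_years))

def instantaneous_corr(a, b, t):
    """Correlation across ensemble members at a single time instant t."""
    return np.corrcoef(a[:, t], b[:, t])[0, 1]

for t in (0, 75, 149):
    print(f"year index {t}: ensemble correlation = {instantaneous_corr(index, precip, t):+.2f}")

# Contrast with a single-realization temporal correlation, which mixes epochs of different coupling.
print("temporal correlation of member 0:", np.corrcoef(index[0], precip[0])[0, 1])
```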

Ensembles in Experiments

Such experimental investigations nicely complement research based on numerical general circulation models: the latter can, theoretically, access the full set of parameters but with a limited resolution which hides important subgrid-scale nonlinear phenomena that may affect multiple scales.

Reproducing them with the same boundary conditions (forcing) most naturally provides an ensemble of different realizations of the same process, which represents the multitude of possibilities permitted by turbulence or chaotic-like phenomena, i.e., parallel realizations of the minimal climate system model.

Besides climate-related aspects it is worth noting that the ensemble approach may be the proper way to conduct fluid dynamics experiments in which non-equilibrium (non-ergodic) processes and turbulence are involved, i.e. phenomena characterized by inherent internal variability.

Only an ensemble statistics of these lifespans from a multitude of experiments (that are initiated identically within measurement precision) can provide meaningful information of these interesting intermittent phenomena, as demonstrated in e.g. [23].

Splitting of the Snapshot Attractor

An important property of the climate system is that for some range of fixed parameter values, it also allows two coexisting usual (stationary) attractors.

Even when initializing the ensemble entirely inside one of the basins of attraction (that belongs to the initial parameter value), only a fraction of the ensemble may end up on the usual attractor on which the ensemble was started.

During the returning part of the parameter drift, at the point when this usual attractor reappears, the snapshot attractor (as an extended object) may overlap with the basin of attraction of both of the coexisting usual attractors.

The separation of the snapshot attractor to two unconnected branches, between which transition of trajectories is not possible, stems from the fact that the corresponding stationary system is not ergodic in the sense of the existence of a unique global asymptotic probability measure [24].

Spreading of Pollutants in a Changing Climate

As an additional utilization of an ensemble of parallel climate realizations, the change in the intensity of atmospheric large-scale spreading of pollutants can also be investigated in a changing climate.

The intensity of the spreading can be characterized in general by such stretching rates [25,26,27].

In [27] in order to explore what the typical spreading behavior is in a changing climate, ensemble simulations of the PlaSim and CESM climate models were used.

In the course of climate change, spreading simulations showed an overall decreasing trend in the stretching rate in the ensembles of both climate models.

Temporal Aspects: An Emerging Research Direction

This is actually rather intuitive, since the statistical or dynamical relationship between two time instants separated by a given time is not temporally invariant any more: it depends on when within the climate change either of these instants is chosen.

To characterize the relationship between temporally separated values of a given variable, a “workaround” is to compute the correlation coefficient between two time instants with respect to the time-dependent natural probability measure (with respect to temporally evolving ensemble members in practice).

It is meaningful to compute the temporal average or standard deviation of some variable for e.g. a given decade, but then this average or standard deviation will have its own probabilistic description as defined via the time-dependent natural measure.

The ensemble average of this interval-wise taken quantifier should not be confused with the corresponding ensemble quantifier of a time instant within the given time interval: while these two characterizations coincide in a stationary climate, biases are introduced if the climate is changing.

Conclusion

The concepts of the average and the deviation from it also appear in the IPCC report [28], but it also considers averages taken over different climate models relevant.

The different models, however, describe climates of “different physics”, the differences of which do not reflect the internal variability of the climate, rather the perhaps significant inaccuracies of the models.

In the spirit of the article, it seems more appropriate to evaluate projections within single models based on parallel climate histories.

We wish to briefly address the characterization of model uncertainties within a single climate model.

Acknowledgments

A machine generated summary based on the work of Tél, T.; Bódai, T.; Drótos, G.; Haszpra, T.; Herein, M.; Kaszás, B.; Vincze, M. (2019 in Journal of Statistical Physics).

Understanding the Links Between Climate Feedbacks, Variability and Change Using a Two-Layer Energy Balance Model

https://doi.org/10.1007/s00382-020-05189-3

Abstract-Summary

A simple, two-layer energy balance model (EBM) is used to investigate climate variability in Coupled Model Intercomparison Project Phase 5 (CMIP5) models and examine possible links between variability and climate sensitivity, and the roles of stochastic variability, radiative feedbacks and ocean mixing.

The EBM represents global variability that, while somewhat stronger than the CMIP5 models, simulates reasonable ratios between shorter and longer timescales.

Variability in the EBM, across the range of parameters from the Global Climate Models, is found to be particularly sensitive to stochastic variability, especially on interannual time-scales.

The EBM results suggest that spread in stochastic forcing across the CMIP5 models is the single greatest factor degrading the correlation between variability and climate sensitivity, although model-to-model differences in radiative forcing and mixing into the deep ocean are also important.

They also suggest that normalizing variability in general circulation models by stochastic forcing, uptake into the deep ocean and radiative forcing are all important first steps to reduce factors that will otherwise confound the correlations.

Introduction

The approach taken is to develop and utilize a two-layer energy balance/feedback model (EBM) for the climate system, to explore and understand its variability on a range of timescales and to relate these to variability and climate change sensitivity found in the CMIP5 GCMs.

We will use the EBM approach to explore four questions: (1) How well can important aspects of global scale variability on timescales from interannual to multi-decades in CMIP5 models be understood and quantitatively described using a simple two-layer EBM? (2) What relative role do radiative feedbacks play in determining the magnitude of global variability, especially on longer timescales? (3) What parameters control potential relationships between the magnitude of variability and transient climate response (TCR) (Collins et al. [29]) and/or ECS? (4) What do differences across GCMs in their magnitude of stochastic forcing, the strength of radiative feedbacks and in other parameters therefore imply for the potential for constraining ECS or TCR through observations of variability?

Model Description and Analysis Methodology

Estimates of temperature variance and stochastic forcing from the CMIP5 models were calculated by first detrending annual mean temperatures and TOA radiation (to remove any residual drift), then removing the annual cycle by subtracting off the mean January, mean February, etc. For temperature, the annual, monthly, decadal and 30-year variances were calculated after first averaging the monthly temperature fluctuations into annual means, then passing 10-year and 30-year running means through these timeseries prior to the calculation of variances.

Observational estimates based on CERES (Clouds and the Earth’s Radiant Energy System) satellite data indicate that global scale total TOA variability has a standard deviation of around 0.62 W m−2 on monthly timescales (Trenberth et al. [30]), a value comparable to the multi model mean (although, as with the models, some of the observed value will likely represent the response, i.e. feedback, from surface temperature changes).
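
A minimal sketch of a stochastically forced two-layer EBM of the kind analysed here: an upper layer with feedback parameter λ and heat capacity C exchanges heat with a deep layer (heat capacity C_D) through a coefficient γ and is driven by white-noise radiative forcing, i.e. C dT/dt = F − λT − γ(T − T_D) and C_D dT_D/dt = γ(T − T_D). The parameter values below are typical published magnitudes taken as illustrative assumptions, not the calibrated CMIP5-average values of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative CMIP5-like parameter values (assumed).
lam, gamma = 1.3, 0.7        # feedback and deep-ocean exchange coefficients (W m-2 K-1)
C, C_D = 8.0, 100.0          # heat capacities of upper and deep layers (W yr m-2 K-1)
sigma_F = 0.6                # std of annual-mean stochastic radiative forcing (W m-2)

dt, n_years = 1.0, 5000      # annual time step, long unforced control run
T = np.zeros(n_years)        # upper-layer temperature anomaly
T_D = np.zeros(n_years)      # deep-layer temperature anomaly

for k in range(1, n_years):
    F = sigma_F * rng.standard_normal()                       # white-noise forcing
    dT = (F - lam * T[k-1] - gamma * (T[k-1] - T_D[k-1])) / C
    dT_D = gamma * (T[k-1] - T_D[k-1]) / C_D
    T[k] = T[k-1] + dt * dT
    T_D[k] = T_D[k-1] + dt * dT_D

def windowed_variance(x, window):
    """Variance of non-overlapping window means (interannual, decadal, 30-year, ...)."""
    n = (len(x) // window) * window
    return x[:n].reshape(-1, window).mean(axis=1).var()

for w in (1, 10, 30):
    print(f"{w:>2}-year mean temperature variance: {windowed_variance(T, w):.4f} K^2")
```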

Sensitivity of Variability in the EBM

The purpose of this is to understand how changes in these parameters affect temperature variability on different timescales before then considering how these parameters affect correlation between variability and sensitivity.

It is important to collate the climate variability computed numerically via the two-layer model with that represented in GCMs.

This gives us some confidence that the simple two-layer EBM both qualitatively and quantitatively reproduces overall features of variability from interannual to multi-decades in comparison with CMIP5 models.

In calculations, we applied the monothetic OFAT (one-factor-at-a-time) analysis, varying each parameter over its range and holding others at their base (i.e. CMIP5 model average) values.

In the next section we evaluate how the EBM and CMIP5 variances range across the ensemble of models, and what the EBM implies for the relationship between climate variability across different timescales and climate sensitivity.

Analysis of Climate Variability and Change of CMIP5 Models

The EBM predicts a high degree of correlation (i.e. high explained variance across the models) between variability and ECS with an R2 of 0.58 at interannual timescales, and up to 0.68 for 30-year.

It can be easily understood in the case of the ECS/variability correlation: the only factor producing spread in the ECS remains F, which plays no role in variability in the EBM.

The EBM predicts that without corresponding feedbacks operating these variables do not produce any significant correlation.

Removing the spread in γ, however, has the counter intuitive effect of decreasing the correlation between variability and ECS.

The impact of F is easily understood: it causes spread in ECS but does not affect variability, so reducing its spread produces greater correlation.

The puzzle is why eliminating the spread in γ reduces the correlation between ECS and variability.

Summary and Conclusions

We use a simple 2-layer energy balance model (EBM) to ask what factors might contribute to the spread in variability, and which factors might provide (or indeed limit) the degree of correlation between the magnitude of unforced variability and climate sensitivity (both ECS and TCR) across timescales from interannual to multi-decadal.

The correlation across CMIP5 models between the GCM variances and those simulated by the EBM is modest, with around 25% of variability explained for longer timescales (decadal and 30-year).

The EBM predicts that the correlations between sensitivity and variability should be higher at longer timescales in the GCMs.

The EBM predicts lower correlations between variability and TCR than with ECS, consistent with ocean heat uptake factors affecting TCR, whereas ECS is dependent on forcing and feedback alone.

The role of stochastic forcing in the current results is striking, as the EBM suggests that it could be a key ‘spoiler’ of cross GCM climate change/variability correlations.

Acknowledgments

A machine generated summary based on the work of Colman, Robert; Soldatenko, Sergei (2020 in Climate Dynamics).

A Voyage Through Scales, a Missing Quadrillion and Why the Climate is not What You Expect

https://doi.org/10.1007/s00382-014-2324-0

Abstract-Summary

Using modern climate data and paleodata, we voyage through 17 orders of magnitude in scale explicitly displaying the astounding temporal variability of the atmosphere from fractions of a second to hundreds of millions of years.

We identify five of these: weather, macroweather, climate, macroclimate and megaclimate, with rough transition scales of 10 days, 50 years, 80 kyears, 0.5 Myear, and we quantify each with scaling exponents.

Mean temperature fluctuations increase up to about 5 K at 10 days (the lifetime of planetary structures), then decrease to about 0.2 K at 50 years, and then increase again to about 5 K at glacial-interglacial scales.

Both deterministic General Circulation Models (GCM’s) with fixed forcings (“control runs”) and stochastic turbulence-based models reproduce weather and macroweather, but not the climate; for this we require “climate forcings” and/or new slow climate processes.

Averaging macroweather over periods increasing to ≈30–50 years yields apparently converging values: macroweather is “what you expect”.

Macroweather averages over ≈30–50 years have the lowest variability, they yield well defined climate states and justify the otherwise ad hoc “climate normal” period.

Moving to longer periods, these states increasingly fluctuate: just as with the weather, the climate changes in an apparently unstable manner; the climate is not what you expect.

Moving to time scales beyond 100 kyears, to the macroclimate regime, we find that averaging the varying climate increasingly converges, but ultimately—at scales beyond ≈0.5 Myear in the megaclimate, we discover that the apparent point of convergence itself starts to “wander”, presumably representing shifts from one climate to another.

Introduction: Foreground or Background, Signal or Noise?

If we attempt to extend Mitchell’s picture to the dissipation scales at frequencies 6 or 7 orders of magnitude higher (for millimetric spatial scale variability), the spectral range would increase by an additional ten or so orders of magnitude.

In Mitchell’s time, this scale bound view had already led to an atmospheric dynamics framework that emphasized the importance of numerous processes occurring at well defined time scales, the quasi periodic “foreground” processes illustrated as bumps—the signals—on Mitchell’s nearly flat background.

The purpose of this paper is therefore to stand Mitchell on his head, to invert the roles of foreground and background—of signal and noise—to treat the spectral continuum with its challenging and nontrivial multifractal scaling, as the fundamental signal and to relegate the residual quasiperiodic processes to the role of background processes where they belong.

Standing Mitchell on His Head: The Scaling Paradigm

By the early 1980s, following the explosion of scaling (fractal) ideas it was realized that scale invariance was a very general symmetry principle often respected by nonlinear dynamics, including many geophysical processes and turbulence.

In nonlinear dynamical systems, power laws arise when over a range of scales there are no processes strong enough to break the scaling symmetry.

Another way of putting this is to say that the dominant dynamical processes occur in synergy over a wide range of scales, with the resulting behaviour displaying no characteristic size or duration.

We can express this in yet another way in terms of systems theory: H < 0 indicates negative feedbacks occurring over a wide range of scales in a scale invariant way whereas H > 0, indicates positive feedbacks occurring over a wide range (this should not be confused with persistence and antipersistence which for Gaussian processes refer to fluctuations growing more or less quickly than Brownian motion).

Scaling in the Weather, Macroweather and Climate Regimes

Starting with the climate regime, numerous paleo temperature series (mostly from ice and ocean cores) have been analyzed and there is broad agreement on their scaling nature with spectral exponents estimated in the range βc ≈ 1.3 to 2.1 over the range from hundreds to tens of thousands of years, (Lovejoy and Schertzer [31, 32]; Schmitt et al. [33]; Ditlevsen et al. [34]; Pelletier [35]; Ashkenazy et al. [36]; Wunsch [37]; Huybers and Curry [38]; Blender et al. [39]; Lovejoy [40]; Rypdal and Rypdal [41]).

A seductive feature of the (anisotropic) scaling framework is that it fairly accurately predicts the weather to macroweather transition scale τw ≈ 10 days.

The analogous calculation for the ocean using the empirical (near surface) ocean turbulent flux ε ≈ 10−8 W/kg, yields a lifetime of ≈1 year which is indeed the scale separating a high frequency “ocean weather” (with β > 1) from a low frequency “macro-ocean weather” with β < 1 (Lovejoy and Schertzer [32]; at depth, ε is much lower and the corresponding lifetimes are much longer).
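
A short worked version of the transition-scale estimate quoted above, assuming the standard eddy-turnover (lifetime) formula τ ≈ ε^(−1/3) L^(2/3) for planetary-scale structures of size L; with the rough turbulent fluxes cited in the text this gives about 10 days for the atmosphere and about a year for the near-surface ocean.

```python
# Eddy turnover time of planetary-scale structures: tau ~ eps**(-1/3) * L**(2/3).
# The fluxes are the rough empirical values quoted in the text; the lifetime formula
# is the standard turbulence estimate assumed here.
L = 2.0e7            # planetary scale (half Earth circumference) in metres
day = 86400.0

for name, eps in (("atmosphere", 1e-3), ("ocean (near surface)", 1e-8)):   # W/kg
    tau = eps ** (-1.0 / 3.0) * L ** (2.0 / 3.0)
    print(f"{name:>20s}: transition scale ~ {tau / day:.0f} days")
```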

Real Space Fluctuations and Analyses

The behavior of the mean fluctuation is thus <ΔT> ≈ Δt^H, so that if H > 0, on average fluctuations tend to grow with scale whereas if H < 0, they tend to decrease.

While the latter is adequate for fluctuations increasing with scale (i.e. H > 0), mean absolute differences generally increase and so when H < 0, they do not correctly estimate fluctuations.

Once estimated, the variation of the fluctuations with scale can be quantified by using their statistics; the qth order structure function Sq(Δt) = <|ΔT(Δt)|^q> is particularly convenient, where < . > denotes statistical (ensemble) averaging.

For periods longer than this, the statistics are dominated by averages of many planetary scale structures, and these fluctuations tend to cancel out: for example large temperature increases are typically followed (and partially cancelled) by corresponding decreases.

The consequence is that in this macroweather regime, the average fluctuations diminish as the time scale increases.
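
A minimal sketch of the fluctuation analysis described above, using Haar fluctuations (the difference between the means of the second and first halves of an interval), which, unlike plain differences, remain meaningful for both growing (H > 0) and decreasing (H < 0) fluctuations. The synthetic AR(1) series and the lag choices are illustrative assumptions, and only the first-order structure function is used for the slope estimate.

```python
import numpy as np

def haar_fluctuations(series, lags):
    """Mean absolute Haar fluctuation <|dT(Dt)|> for each lag Dt (in samples)."""
    out = []
    for lag in lags:
        half = lag // 2
        starts = np.arange(0, len(series) - lag, half)
        fluct = [series[s + half:s + lag].mean() - series[s:s + half].mean() for s in starts]
        out.append(np.mean(np.abs(fluct)))
    return np.array(out)

rng = np.random.default_rng(3)
# Synthetic macroweather-like series: an AR(1) proxy whose fluctuations decrease with scale (H < 0).
x = np.zeros(2**16)
for k in range(1, len(x)):
    x[k] = 0.9 * x[k-1] + rng.standard_normal()

lags = 2 ** np.arange(2, 12)
S1 = haar_fluctuations(x, lags)
H = np.polyfit(np.log(lags), np.log(S1), 1)[0]   # slope of log S1 vs log lag ~ fluctuation exponent H
print(f"estimated fluctuation exponent H = {H:+.2f}")
```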

Discussion

Avoiding anthropogenic effects by considering the pre-1900 epoch, for GCM climate models, the key question is whether solar, volcanic, orbital or other climate forcings are sufficient to arrest the H < 0 decline in macroweather fluctuations and to create an H > 0 regime with sufficiently strong centennial, millennial variability to account for the background variability out to glacial-interglacial scales.

Whatever the ultimate source of the growing fluctuations in the H > 0 climate regime, a careful and complete characterization of the scaling in space as well as in time (including possible space–time anisotropies) allows for new stochastic methods for predicting the climate.

The macroweather–climate transition scale τc corresponds to the conventional but ad hoc “climate normal” period—this not only justifies the normal but allows averages of relevant variables over it to define “climate states” and the changes at scales Δt > τc to define climate change (again, in the recent period, this defines the scale at which anthropogenic variability starts to dominate natural variability).

Conclusions

A far more realistic picture of atmospheric variability is obtained by standing this scale bound picture on its head: placing the continuum processes in the fore, with the perturbing quasiperiodic processes in the background.

The empirically substantiated picture is rather one of “unstable”, “wandering”, high-frequency weather processes (i.e. H > 0) tending, at scales beyond 10 days or so, and primarily due to the quenching of spatial degrees of freedom, to macroweather processes (intermediate frequency, low variability).

True climate processes are “weather-like” (H > 0) and only emerge from macroweather at even lower frequencies, due to new slow internal climate processes coupled with external forcings (including in the recent period, anthropogenic forcings).

Whatever the cause, it is an empirical fact that the emergent synergy of new processes yields fluctuations that on average again grow with scale in at least a roughly scaling manner and become dominant typically on time scales of 30–100 years (somewhat less in the recent period) up to ≈ 100 kyears.

Acknowledgments

A machine generated summary based on the work of Lovejoy, S. (2014 in Climate Dynamics).

A New Framework for Climate Sensitivity and Prediction: A Modelling Perspective

https://doi.org/10.1007/s00382-015-2657-3

Abstract-Summary

The sensitivity of climate models to increasing CO2 concentration and the climate response at decadal time-scales are still major factors of uncertainty for the assessment of the long and short term effects of anthropogenic climate change.

While the relative slow progress on these issues is partly due to the inherent inaccuracies of numerical climate models, this also hints at the need for stronger theoretical foundations to the problem of studying climate sensitivity and performing climate change predictions with numerical models.

Response theory puts the concept of climate sensitivity on firm theoretical grounds, and addresses rigorously the problem of predictability at different time-scales.

These results show that performing climate change experiments with general circulation models is a well defined problem from a physical and mathematical point of view.

These results show that considering one single CO2 forcing scenario is enough to construct operators able to predict the response of climatic observables to any other CO2 forcing scenario, without the need to perform additional numerical simulations.

We also introduce a general relationship between climate sensitivity and climate response at different time scales, thus providing an explicit definition of the inertia of the system at different time scales.

Introduction

We follow a complementary approach to define a robust theoretical framework for the use of GCMs in addressing the problem of climate response, sensitivity and prediction, based upon Lucarini and Sarno [42].

The standard approach to the problem of computing the response of a climate model to the forcing due to an increasing CO2 concentration is the following: take a model, run it to a stationary state, increase the CO2 concentration following one specific CO2 increase scenario, measure the increase of global surface temperature, and define on it operational measures of the sensitivity of the system.

The approach suggested here makes it possible to compare models in a new way, showing how the equilibrium climate sensitivity is just one point of a function that contains much more information about the properties of the response to components of the forcing at different time-scales.

Methods and Materials

Axiom A systems possess a Sinai-Ruelle-Bowen (SRB) invariant measure, which guarantees (a) the asymptotic equivalence of time and ensemble averages of observables (that it is not, despite intuition, a general property of nonequilibrium systems) and (b) the stability of the statistical properties when a weak stochastic forcing is applied.

The use of response formulas in most cases of physical interest is justified thanks to the Chaotic Hypothesis (Gallavotti [43]), which states that chaotic systems with many degrees of freedom effectively behave as Axiom A systems in terms of properties (a) and (b) even if they do not satisfy rigorously requirements (1) and (2), at least when considering the statistical properties of coarse-grained observables (e.g. globally or regionally integrated quantities).

When we compute the expectation value of an observable in a numerical model as the long-term average on a stationary state, we are in fact implicitly assuming that the system is Axiom A-like.

Results

The long-term increase of the surface temperature for the doubling scenario (the equilibrium climate sensitivity) is rather high if compared with what is typically obtained with standard IPCC models, being 8.1 K against typical estimates between 1.5 and 4.5 K (IPCC [44, 45]).

Another interesting application is the possibility to estimate the time horizon on which the mean climate change signal is distinguishable from the natural variability of the climate system for different rates of change of the forcing.

Given the Green function of the system, we can compute the expected mean climate change signal for forcing corresponding to different rates of change of the CO2 concentration, and check after how many years the mean signal is larger than a chosen number of standard deviations of the observable in the unperturbed system.
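
A minimal sketch of the Green-function prediction step summarized above: once G(t) is known, the expected change of an observable under any forcing scenario f(t) follows from the convolution <ΔO>(t) = ∫ G(t − s) f(s) ds (integral from 0 to t), without running new simulations. The two-exponential Green function and the forcing scenarios below are invented for illustration, not the ones diagnosed in the paper.

```python
import numpy as np

dt = 1.0                                   # years
t = np.arange(0, 300, dt)

# Illustrative two-timescale Green function for global-mean surface temperature, in K per (W m-2) per yr;
# its time integral (1.0 K per W m-2) implies an equilibrium warming of ~3.7 K per CO2 doubling.
G = 0.6 / 4.0 * np.exp(-t / 4.0) + 0.4 / 150.0 * np.exp(-t / 150.0)

def response(forcing):
    """Linear-response prediction <dO>(t) = sum_s G(t - s) f(s) dt."""
    return np.convolve(G, forcing)[:len(t)] * dt

# Two CO2 forcing scenarios (W m-2): an abrupt doubling vs. a ~1%/yr ramp reaching doubling at year 70.
f_abrupt = np.full_like(t, 3.7)
f_ramp = np.minimum(3.7 * t / 70.0, 3.7)

dT_abrupt = response(f_abrupt)
dT_ramp = response(f_ramp)
print(f"warming after 100 yr: abrupt 2xCO2 = {dT_abrupt[100]:.2f} K, 1%/yr ramp = {dT_ramp[100]:.2f} K")
```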

Summary and Discussion

The approach proposed here bypasses some of these mathematical issues by exploiting formal properties of the response and allows for constructing rigorous definitions of climate sensitivity at different time scales through the susceptibility function.

We have provided a framework for relating the difference between transient and equilibrium climate sensitivity to the inertia of the CS, and have shown how these properties depend on the response of the system on all time scales.

Inaccuracies in representing specific spectral features have serious impacts on our ability to predict climate response on the corresponding time scales, and our findings could help understanding why, e.g., climate response at decadal time scales may be hard to capture.

RRT provides a well defined theoretical framework and tools that allow one to rigorously diagnose discrepancies in the properties of the frequency-dependent response of different models and to guide the design of climate change experiments.

Acknowledgments

A machine generated summary based on the work of Ragone, Francesco; Lucarini, Valerio; Lunkeit, Frank (2015 in Climate Dynamics).

Effect of AMOC Collapse on ENSO in a High Resolution General Circulation Model

https://doi.org/10.1007/s00382-017-3756-0

Abstract-Summary

We look at changes in the El Niño Southern Oscillation (ENSO) in a high-resolution eddy-permitting climate model experiment in which the Atlantic Meridional Overturning Circulation (AMOC) is switched off using freshwater hosing.

Convergence of this transport deepens the thermocline in the eastern tropical Pacific and increases the temperature anomaly relaxation time, causing increased ENSO period.

The anomalous Ekman transport is caused by a surface northerly wind anomaly in response to the meridional sea surface temperature dipole that results from switching the AMOC off.

In contrast to a previous study with an earlier version of the model, which showed an increase in ENSO amplitude in an AMOC off experiment, here the amplitude remains the same as in the AMOC on control state.

Extended

Yu et al. [46] have suggested a link between AMO and El Niño location, stronger AMOC leading to more central El Niño events and conversely weaker AMOC to more eastern El Niño events, very similar to our findings.

Introduction

The AMOC off state in this simulation is stable over the 450 years duration of the model integration (Mecking et al. [47]) which is then compared with a control run making the study a comparatively clean assessment of the impacts of AMOC shutdown.

We look in particular at the differences in ENSO resulting from the global climatic changes that collapse of the AMOC can induce in the model.

There have also been studies of ENSO in hosed, weakened AMOC runs of CMIP3 era models (Timmerman et al. [48]; Dong and Sutton [49]) and in most models there was a substantial weakening of the annual cycle in the eastern equatorial Pacific and an increase in ENSO amplitude.

Using a stochastically forced damped oscillator model of slow ENSO dynamics introduced by Jin [50] to qualitatively understand the response of the much more complicated HadGEM3, we suggest the difference in ENSO amplitude between the different models is due to the balance of changes in ENSO damping and the magnitude of stochastic forcing.
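
A minimal sketch of the stochastically forced damped-oscillator picture invoked above: an SST-anomaly oscillator with natural period 2π/ω0, damping γ and white-noise forcing of amplitude σ has variance of order σ²/(4γω0²), so weaker damping raises the amplitude while weaker forcing lowers it, which is the competition referred to in the text. The parameter values are illustrative, not tuned to HadGEM3 or to Jin's recharge-oscillator coefficients.

```python
import numpy as np

def enso_oscillator(period_yr=4.0, damping=0.4, noise=1.0, years=2000, dt=1.0 / 12.0, seed=5):
    """Euler-Maruyama integration of T'' + 2*damping*T' + omega0^2*T = noise * xi(t)."""
    rng = np.random.default_rng(seed)
    omega0 = 2.0 * np.pi / period_yr
    n = int(years / dt)
    T = np.zeros(n)          # Nino-3-like SST anomaly
    v = 0.0                  # its rate of change (loosely analogous to the recharge variable)
    for k in range(1, n):
        a = -2.0 * damping * v - omega0**2 * T[k-1]
        v += dt * a + noise * np.sqrt(dt) * rng.standard_normal()
        T[k] = T[k-1] + dt * v
    return T

control = enso_oscillator()
amoc_off = enso_oscillator(period_yr=5.0, damping=0.2, noise=0.6)   # longer period, less damped, weaker forcing
print(f"std control  : {control.std():.2f}")
print(f"std AMOC off : {amoc_off.std():.2f}   (similar amplitude despite changed period and damping)")
```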

Model setup and Experiment Design

To repeat pertinent details, the model is the Global Coupled 2.0 model (GC2) configuration of the HadGEM3 model (Hewitt et al. [51]).

This consists of an atmosphere, ocean, sea-ice and land-surface models.

The atmosphere model is Global Atmosphere vn6.0 (GA6) (Demory et al. [52]) of the Met Office unified model at N216 horizontal resolution and 85 levels in the vertical.

Two runs of the model are compared, a steady state control run (the AMOC is in its usual on state in this run) and an AMOC off steady state run.

The AMOC off run is initialised after 42 years of the control run.

The AMOC off run is integrated for a total of 450 years from the start of the salinity perturbations.

For the analysis we use all 150 years of the control run and the last 300 years of the AMOC off run to determine ENSO properties.

Results

The leading EOF in both the control and the AMOC off run is representative of the ENSO mode.

Relative to the control run, the AMOC off ENSO EOF accounts for proportionally slightly less of the total variance in SST (60 vs. 63%) although this difference, both proportional and absolute, is within estimated error bounds.

The variance differences at each spatial location were tested for significance at the 99% confidence level by performing a two-sample F-test on the deseasonalized spatial SST anomaly fields in both control and AMOC off runs.

Time-varying properties are analyzed by projecting the leading EOF onto the time-ordered fields of deseasonalized monthly SST for both control and AMOC off runs.
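
As a concrete illustration of this analysis chain, the sketch below, which reflects our own reading rather than the authors' code, deseasonalizes monthly SST fields, extracts the leading EOF and its principal-component time series by projection, and applies a two-sample F-test to the gridpoint variances; the array shapes and the 99% level mirror the description above, but everything else is illustrative.

```python
# Illustrative sketch (not the authors' code): leading EOF of deseasonalized
# SST anomalies, its principal-component time series, and a gridpoint
# two-sample F-test for variance differences between two runs.
import numpy as np
from scipy import stats

def deseasonalize(sst):
    """Remove the mean seasonal cycle from monthly fields of shape (time, space)."""
    anom = sst.astype(float)
    for m in range(12):
        anom[m::12] -= sst[m::12].mean(axis=0)
    return anom

def leading_eof(anom):
    """Leading EOF (spatial pattern), PC time series and explained variance fraction."""
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    eof = vt[0]                          # leading spatial pattern
    pc = anom @ eof                      # projection onto the time-ordered fields
    explained = s[0]**2 / np.sum(s**2)   # fraction of total variance
    return eof, pc, explained

def variance_f_test(anom_a, anom_b, alpha=0.01):
    """Two-sample F-test for equal variances at each spatial location."""
    f = anom_a.var(axis=0, ddof=1) / anom_b.var(axis=0, ddof=1)
    dfa, dfb = anom_a.shape[0] - 1, anom_b.shape[0] - 1
    p = 2 * np.minimum(stats.f.cdf(f, dfa, dfb), stats.f.sf(f, dfa, dfb))
    return f, p < alpha                  # F ratio and 99%-level significance mask
```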

There is also a shift to longer ENSO periods in the AMOC off relative to the control run which does appear to be significantly different.

Mechanisms for Differences in ENSO

Having established that ENSO in the AMOC off run relative to the control has (1) a similar amplitude and distribution of SST anomalies, (2) a spatial pattern shifted eastward and (3) a longer, more regular period, we discuss mechanisms that could result in these differences.

One sees a deepening of the thermocline in the east Pacific concentrated in the region of large ENSO variability as expected from the changes in the wind fields and the Ekman transport divergence.

Apart from the eastward shift in ENSO and the mean state, the other change is the mean depth of the thermocline in the eastern equatorial Pacific.

The parameter b is difficult to estimate from regressions of the east-to-west thermocline depth difference against the east-to-west temperature difference in the HadGEM3 simulations, because the model never reaches a true equilibrium between the thermocline depth difference and the wind stress at a given time, owing to the different adjustment time scales (and therefore lags) of the SST and thermocline depths in the east and west Pacific.

Discussion

All 5 models in Timmerman et al. [48] had significantly increased ENSO amplitudes as measured by the power spectra of the SST anomalies in the Niño 3 region.

The increase of spectral peaks without broadening in the CMIP3 models' power spectra suggests that their ENSOs become more periodic and less damped.

CMIP5-class models show an improvement in terms of representing both the properties of ENSO (amplitude, frequency, spatial pattern) and the physical processes and feedbacks which are responsible for generating and maintaining the oscillation (Bellenger et al. [53]).

From the power spectra in Timmerman et al. [48] it appears that there is no change in ENSO period for most CMIP3 models.

As argued above, from the CMIP3 models' power spectra it seems likely that their ENSOs become less damped and more periodic.

Conclusion

The increase in ENSO period is backed up using a simple model of ENSO as a stochastically forced damped oscillator.

Using the simple model, one can potentially understand the differing responses in the slow ENSO dynamics as a competition between the decrease in damping tending to increase amplitude and the decrease in forcing tending to decrease the amplitude.
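
The competition described here can be illustrated with a minimal stochastically forced damped oscillator. The equation form, the parameter values and the Euler-Maruyama integration below are our own illustrative assumptions, not the authors' model code.

```python
# Illustrative sketch (assumed form, not the authors' model): ENSO as a
# stochastically forced damped oscillator, T'' + 2*gamma*T' + omega0**2*T = sigma*xi(t).
# Weaker damping raises the amplitude while weaker forcing lowers it, so the
# two changes can roughly offset each other while the period lengthens.
import numpy as np

def simulate(gamma, omega0, sigma, dt=0.05, n=200_000, seed=0):
    """Euler-Maruyama integration; returns the T time series."""
    rng = np.random.default_rng(seed)
    T, dTdt = 0.0, 0.0
    out = np.empty(n)
    for i in range(n):
        acc = -2.0 * gamma * dTdt - omega0**2 * T
        dTdt += acc * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        T += dTdt * dt
        out[i] = T
    return out

control = simulate(gamma=0.4, omega0=1.0, sigma=1.0)    # stronger damping and forcing
altered = simulate(gamma=0.2, omega0=0.8, sigma=0.57)   # weaker damping, weaker forcing
print(control.std(), altered.std())   # comparable amplitudes despite different dynamics
```

Since the stationary variance of such an oscillator scales roughly as sigma**2 / (4 * gamma * omega0**2), a simultaneous decrease in damping and in forcing can leave the amplitude almost unchanged, while the reduced natural frequency lengthens the preferred period.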

Acknowledgments

A machine generated summary based on the work of Williamson, Mark S.; Collins, Mat; Drijfhout, Sybren S.; Kahana, Ron; Mecking, Jennifer V.; Lenton, Timothy M. (2017 in Climate Dynamics).

On the Relationship Between Atlantic Niño Variability and Ocean Dynamics

https://doi.org/10.1007/s00382-017-3943-z

Abstract-Summary

We address the question whether the equatorial SST bias affects the ability of a coupled global climate model to produce realistic dynamical SST variability.

We assess this by decomposing SST variability into dynamical and stochastic components.

To compare our model results with observations, we employ empirical linear models of dynamical SST that, based on the Bjerknes feedback, use the two predictors sea surface height and zonal surface wind.

We find that observed dynamical SST variance shows a pronounced seasonal cycle.

This indicates that the Atlantic Niño is a dynamical phenomenon that is related to the Bjerknes feedback.

In the coupled model, the SST bias suppresses the summer peak in dynamical SST variance.

Bias reduction, however, improves the representation of the seasonal cold tongue and enhances dynamical SST variability by supplying a background state that allows key feedbacks of the tropical ocean–atmosphere system to operate in the model.

Due to the small zonal extent of the equatorial Atlantic, the observed Bjerknes feedback acts quasi-instantaneously during the dynamically active periods of boreal summer and early boreal winter.

Extended

To compare our results with the evolution of the observed climate system, we use the ERA-Interim (Dee [54]) and the Archiving, Validation, and Interpretation of Satellite Oceanographic (AVISO) datasets.

We find that differences between ERA-Interim SST and other SST datasets are negligibly small. (Analysis results for alternative validation datasets such as the HadISST dataset (Rayner et al. [55]) are not shown.)

We find that the Atlantic Bjerknes feedback is near-instantaneous during the dynamically active phases of the year.

In the coupled equatorial system, the atmosphere would react to the ocean-induced SST variability and our empirical models would pick up a statistical co-variability between SST and u10 that would be partly reflected in our SST decomposition—even though u10 in this idealized example was not fundamental in causing the SST variability in the first place.

Future study will help to further our understanding of the Atlantic Niño and its predictability.

Introduction

The Bjerknes feedback is a positive feedback that relates SST and thermocline variability in the eastern ocean basin to zonal surface wind variability in the western ocean basin (u10), lending growth to the Pacific (Bjerknes [56]) and Atlantic Niños (e.g. Keenlyside and Latif [57]; Burls et al. [58]; Lübbecke and McPhaden [59]; Deppenmeier et al. [60]).

The Pacific Niño generally is the result of a free mode of interannual variability that is driven by the Bjerknes feedback; interactions with the seasonal cycle occur, but do not dominate ENSO SST variability.

Burls et al. [58] argue that the Atlantic Niño hence reflects a modulation of the seasonally active Bjerknes feedback instead of an independent mode of interannual variability.

In contrast to numerous studies that have provided evidence for a relationship between Atlantic Niño variability and the Atlantic Bjerknes feedback, Nnamchi et al. [61, 62] have proposed that the Atlantic Niño is essentially driven by stochastic processes in the atmosphere rather than by dynamical ocean processes that are potentially predictable.

Model and Methods

All ensemble members use the same wind stress forcing, but differ in their initial conditions, which are taken from a control run at a time when the model is close to equilibrium.

In a partially coupled model the ocean and sea ice components are forced with observed wind stress anomalies that are added to the model’s monthly mean wind stress climatology.

To diagnose the heat flux correction, we use the same methodology as Ding et al. [63]: During a control integration, we nudge the first ocean level of the model towards the monthly climatology of observed SST with a restoring time scale of 10 days.
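
Such a restoring term amounts to a Newtonian relaxation of the top model level toward the observed climatology. The sketch below shows the standard form of the equivalent heat flux; the 10-day restoring time scale follows the text, while the layer thickness and the sea-water constants are illustrative assumptions.

```python
# Illustrative sketch of the Newtonian relaxation (nudging) used to diagnose a
# heat-flux correction: the top ocean level is restored toward the observed
# monthly SST climatology. The 10-day time scale follows the text; the layer
# thickness and sea-water constants below are illustrative assumptions.
RHO_SEAWATER = 1025.0   # density of sea water [kg m-3]
CP_SEAWATER = 3990.0    # specific heat capacity of sea water [J kg-1 K-1]

def nudging_heat_flux(sst_model, sst_obs_clim, dz=10.0, tau_days=10.0):
    """Equivalent surface heat flux [W m-2] restoring a layer of thickness dz [m]
    toward the observed climatology with restoring time scale tau_days."""
    tau_seconds = tau_days * 86400.0
    return RHO_SEAWATER * CP_SEAWATER * dz * (sst_obs_clim - sst_model) / tau_seconds
```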

We note that Ding et al. [63] showed a substantial improvement in the ability of the partially coupled model runs to reproduce observed SST variability in boreal summer in FLX compared to STD.

Impact of the Coupled Bias on the Equatorial Atlantic

We assess SST and zonal wind biases in the tropical Atlantic for our KCM experiments.

Richter and Xie [64] and Richter et al. [65] have shown in different CGCMs that the equatorial Atlantic SST bias is related to a bias in zonal surface wind in the western equatorial Atlantic, which in turn can be traced back to precipitation deficiencies of the models.

In agreement with Richter et al. [66], a similar process could be at work in CGCMs: Spring zonal winds that are systematically too weak in the western equatorial Atlantic could inhibit seasonal thermocline shoaling in the eastern ocean basin and hence intense surface cooling during early boreal summer.

This behaviour is hardly altered qualitatively in the FLX experiment, indicating that the zonal wind bias depends only weakly on eastern basin SST in the model.

SST Variance Decomposition Method

To diagnose the observed dynamical SST variance, we use observed thermocline depth in the eastern equatorial Atlantic to model ERA-Interim SST variability in the same region.

Note that our decomposition approach heavily relies on empirical linear models, but the resulting decomposition of the SST variance is not linear, i.e., the full SST variance is not the sum of the stochastic and dynamical SST variances. (The basic decomposition of the SST anomaly, however, is.) Here, we use the Bjerknes feedback as the dynamical framework for our empirical models of dynamical SST.

This indicates either that the co-variability between our predictors is strong during the respective month and that using either of them provides sufficient information to produce reasonable dynamical SST; or that the removed predictor does not have a strong impact on SST variability during this month. (2) Model adjustment keeps both predictors in a linear combination. (3) Model adjustment increases the complexity of the model by adding a non-linear predictor term, i.e. a quadratic term or a product of SSH and u10.
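
In practice, the anomaly decomposition described above can be sketched as a linear regression of SST on the two Bjerknes-feedback predictors, with the residual taken as the stochastic part. The function below is an illustrative reading of that procedure, not the authors' implementation; the authors additionally fit separate models per calendar month and allow the model adjustments listed above.

```python
# Illustrative sketch (not the authors' implementation) of the SST variance
# decomposition: a linear model in the Bjerknes-feedback predictors SSH and
# u10 defines the dynamical SST, and the residual is the stochastic SST.
import numpy as np

def decompose_sst(sst_anom, ssh_anom, u10_anom):
    """Return (dynamical, stochastic) components of an SST anomaly series."""
    X = np.column_stack([np.ones_like(ssh_anom), ssh_anom, u10_anom])
    coeffs, *_ = np.linalg.lstsq(X, sst_anom, rcond=None)
    dynamical = X @ coeffs             # part explained by SSH and u10
    stochastic = sst_anom - dynamical  # unexplained residual
    return dynamical, stochastic
```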

Seasonality of Dynamical SST Variance in the Tropical Atlantic

The overall similarity between the total and dynamical SST variance suggests that the seasonal cycle of total SST variability in the tropical Atlantic is largely shaped by the variable dynamical contribution.

The dynamical and stochastic SST variances for the model experiment FLX are comparable to observations.

The absolute minimum of dynamical SST variance occurs in May—when observed dynamical SST variance is already high and contributes substantially to the overall boreal summer peak.

Once the cold tongue is established, the feedbacks set in and contribute to dynamical SST variability.

The STD SST bias decreases and our empirical models operate on comparable conditions, resulting in dynamical SST variances in the STD experiment that are similar to observations in boreal fall and early boreal winter.

In boreal winter, the single u10 model does not contribute to dynamical SST variance.

Feedback Strengths in the Tropical Atlantic

Recall that the three relationships that make up the closed Bjerknes feedback in our framework are (1) Atl3 SST produces WAtl u10 variability, (2) WAtl u10 variability is translated into Atl3 thermocline—here: SSH—variability via equatorial wave dynamics, and (3) Atl3 SSH positively feeds back to Atl3 SST and lends growth to the initial SST anomaly.

We assess the degree of lag for each of the three Bjerknes relationships via a cross-correlation analysis for each month.

In our cross-correlation analysis for each calendar month and Bjerknes feedback element, we fix the response agent to the calendar month and correlate it sequentially with the forcing agent of all relevant lags.

For each calendar month and Bjerknes feedback element, the lag at which the relationship is strongest in terms of the ACC is identified.
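
A minimal version of this per-calendar-month cross-correlation analysis is sketched below; the lag range and the layout of the monthly series are illustrative assumptions.

```python
# Illustrative sketch of the per-calendar-month cross-correlation analysis:
# the response series is fixed to one calendar month and correlated with the
# forcing series at a range of monthly lags; the lag range is an assumption.
import numpy as np

def lagged_corr_by_month(forcing, response, month, max_lag=6):
    """Correlations of `response` in calendar `month` (0 = January) with
    `forcing` lagged by 0..max_lag months; both are monthly series starting
    in January. Returns the list of correlations and the strongest lag."""
    resp_idx = np.arange(month, len(response), 12)
    corrs = []
    for lag in range(max_lag + 1):
        idx = resp_idx[resp_idx - lag >= 0]
        corrs.append(np.corrcoef(response[idx], forcing[idx - lag])[0, 1])
    best_lag = int(np.argmax(np.abs(corrs)))
    return corrs, best_lag
```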

Summary and Discussion

In agreement with numerous previous studies on the dynamics of the Atlantic Niño (e.g. Zebiak [67]; Carton et al. [68]; Ding et al. [69]), we find that dynamical SST variance contributes substantially to equatorial Atlantic SST variability in boreal summer (May–July), the peak phase of the Atlantic Niño.

In contrast to May and June, the December peak of enhanced dynamical SST variance is captured by both the FLX and STD experiments, indicating that the KCM appears to be able to reproduce the variability associated with the Atlantic Niño II of Okumura and Xie [70].

While we have provided further evidence for a dynamically driven Atlantic Niño, research is not yet clear on what exactly these dynamics are: If the Bjerknes feedback is involved in establishing the seasonal cold tongue, which processes govern the feedback modulation that produces the interannual variability of the Atlantic Niño?

Acknowledgments

A machine generated summary based on the work of Dippe, Tina; Greatbatch, Richard J.; Ding, Hui (2017 in Climate Dynamics).

A Theoretical Model of Strong and Moderate El Niño Regimes

https://doi.org/10.1007/s00382-018-4100-z

Abstract-Summary

The existence of two regimes for El Niño (EN) events, moderate and strong, has been previously shown in the GFDL CM2.1 climate model and also suggested in observations.

Although the recent 2015–16 EN event provides a new data point consistent with the sparse strong EN regime, it is not enough to statistically reject the null hypothesis of a unimodal distribution based on observations alone.

We implement this nonlinear mechanism in the recharge-discharge (RD) ENSO model and show that it is sufficient to produce the two EN regimes, i.e. a bimodal distribution in peak surface temperature (T) during EN events.

Using the Fokker–Planck equation, we show how the bimodal probability distribution of EN events arises from the nonlinear Bjerknes feedback and also propose that the increase in the net feedback with increasing T is a necessary condition for bimodality in the RD model.
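
For orientation, a recharge-discharge oscillator with a state-dependent growth rate can be written schematically as below; this generic form and its symbols are our own shorthand, not necessarily the paper's exact equations.

$$
\frac{dT}{dt} = R(T)\,T + \gamma\,h + \sigma\,\xi(t), \qquad
\frac{dh}{dt} = -r\,h - \alpha\,T,
$$

where T is the eastern-Pacific temperature anomaly, h the equatorial heat content, ξ a white-noise forcing, and R(T) a net feedback that, in the nonlinear version, increases once T exceeds the convective threshold, consistent with the stated necessary condition for bimodality.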

Extended

Despite the strong simplifications, we show that this model reproduces two EN regimes and provides insights into the role of the stochastic forcing in El Niño diversity and predictability.

Introduction

We focus on the latter, particularly on our proposal that strong EN events (e.g. 1982–83 and 1997–98) correspond to a separate dynamical regime associated with nonlinearity in the Bjerknes feedback (TD16).

A theoretical model with nonlinear ocean advection (Timmermann et al. [71]; An and Jin [72]) produces strong EN in the form of “bursts” as part of complex self-sustained nonlinear oscillations, but these only have a weak resemblance to observations.

Although all these nonlinear mechanisms could contribute to ENSO, no study to our knowledge has addressed the origin of strong and moderate dynamical regimes of El Niño (warm) events.

This model focuses only on the strength of El Niño events as a first approximation to ENSO diversity, neglecting the spatial distribution or seasonal effects, or nonlinear processes specific to La Niña.

Despite the strong simplifications, we show that this model reproduces two EN regimes and provides insights into the role of the stochastic forcing in El Niño diversity and predictability.

Recharge-Discharge Model

This also includes the nonlinear radiative cloud feedback that enhances damping in the convective regime (Lloyd et al. [73]; Bellenger et al. [53]).

Fitting the linear RD model to the nonlinear RD model run produces a weaker effective linear damping parameter than the original from Burgers et al. [74], as expected.

This effective damping is about 45% weaker than that of Burgers et al. [74].

Fokker–Planck Equation

The Fokker–Planck (FP) equation describes the evolution of the probability distribution function (PDF) of states in a stochastically-forced (“Brownian”) dynamical system (Risken [75]), which allows us to address issues of predictability in simple climate models by describing how the PDF evolves from an initial condition under all possible realizations of the stochastic forcing (e.g. Hasselmann [76]).

Using the terminology of the FP equation, the PDF evolution is governed by the “drift”, which is the displacement, rotation, and deformation of the PDF by the deterministic dynamics, and the “diffusion”, which is the spreading of the PDF due to the random walk associated with the stochastic forcing.
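
In this terminology, the generic Fokker–Planck equation for the PDF p(x, t) of the state x, with drift A (the deterministic dynamics) and diffusion B (set by the stochastic forcing), reads

$$
\frac{\partial p(\mathbf{x},t)}{\partial t}
= -\sum_i \frac{\partial}{\partial x_i}\bigl[A_i(\mathbf{x})\,p\bigr]
+ \frac{1}{2}\sum_{i,j} \frac{\partial^2}{\partial x_i\,\partial x_j}\bigl[B_{ij}(\mathbf{x})\,p\bigr].
$$

This is the standard textbook form rather than the paper's specific equations for the RD model.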

Results

The skewness of the distribution of EN T peaks in the nonlinear model (1.47) is larger than the observational value (1.11), whereas that of the linear model is lower (0.89).

An important aspect of the onset of these observed strong EN events is that, in contrast to the “pure” (unforced) RD dynamics, in general h does not decrease as sharply when T increases towards its peak as afterwards, during EN decline; in 1982, h even increased right up to two months prior to the peak T. This indicates that, if the RD model is indeed representative of the underlying dynamics, the onset of the 1982 (and probably also the other) strong EN was strongly facilitated by external forcing (TD16).

After the observed EN peaks, the pronounced discharge process leads to large negative h, but the associated La Niña peak T anomalies are not as large as the ones for the EN events in the Niño 3 region.

Discussion

Other empirical methods (e.g. Burgers et al. [74]) could be adapted to consider this nonlinearity, or the proposed model could be used to derive an estimate of the non-linear feedback through assimilation of observations.

This model highlights the key role of the stochastic forcing, particularly the component of the forcing on ENSO time-scales, in the growth of the strong EN events (e.g. Levine and Jin [77]; TD16).

It is often assumed that the low-frequency positive forcing is the result of clustering of short-term westerly wind events, either randomly or modulated by SST (e.g. Gebbie et al. [78]; Zavala-Garay et al. [79]; Gebbie and Tziperman [80]).

The strong La Niña following strong EN events in our nonlinear model is consistent with the strong heat content discharge that is seen in observations, except that in observations the discharge does not necessarily produce strong La Niña events.

Conclusions

In a previous study (Takahashi and Dewitte [81]), the convective SST threshold in the eastern Pacific and the associated nonlinearity in the Bjerknes feedback provide a parsimonious explanation for this, motivating further exploration of this possibility with a simple theoretical model based on this mechanism.

We show that this nonlinearity is sufficient to produce the bimodal distribution associated with strong and moderate EN regimes.

It is a parsimonious theory for the EN regimes, based on a well-known nonlinear SST-convection relation.

It is a simpler model than, for instance, high-dimensional linear models, and it neither produces exotic behavior, as some other nonlinear models do, nor requires special assumptions about the forcing, thus providing a better null hypothesis for models that exhibit two EN regimes.

Acknowledgments

A machine generated summary based on the work of Takahashi, Ken; Karamperidou, Christina; Dewitte, Boris (2018 in Climate Dynamics).

Climate Change Impact Assessment on Flow Regime by Incorporating Spatial Correlation and Scenario Uncertainty

https://doi.org/10.1007/s00704-016-1802-1

Abstract-Summary

Flooding risk is increasing in many parts of the world and may worsen under climate change conditions.

The current statistical downscaling approaches face the difficulty of projecting multi-site climate information for future conditions while conserving spatial information.

The results showed different variation trends of annual peak flows (in 2080–2099) based on different climate change scenarios and demonstrated that the hydrological impact would be driven by the interaction between snowmelt and peak flows.

The proposed CLWRS approach is useful where there is a need for projection of potential climate change scenarios.

Introduction

Besides precipitation patterns, the prediction of future floods relies on changes in temperature, snowmelt, land use patterns, etc. Future climate changes can be predicted based on the physics described by General Circulation Models (GCMs), which simulate the interactions among the atmosphere, ocean and sea ice.

LARS-WG was proposed by Racsko et al. [82] as a stochastic weather generator, intended to model meteorological parameters such as precipitation and solar radiation.

Based on the above-mentioned studies, it is found that LARS-WG is effective in simulating climate change for meteorological variables but is limited to application at a single site; the multi-site RainSim is capable of spatially addressing rainfall but requires modifications to address future climate change.

The objective of this study is to propose a coupled LARS-WG and RainSim (CLWRS) approach to quantify the changes in flood occurrences under future climate projections for different GCMs for future time periods (i.e. 2080–2099).

Methodology

The initial purpose of LARS-WG is to obtain uncorrelated precipitation patterns, which serve as the input for RainSim through the Change Factor approach detailed in the following section.

The output of RainSim in turn serves as the input for the reanalysis by LARS-WG to generate meteorological data, now taking into account the spatial patterns among multiple meteorological sites.

Step 1: Use LARS-WG to predict baseline and future meteorological conditions. LARS-WG determines the statistical properties of historical meteorological data and generates long records of simulated data for either the future or the baseline condition (1961–1990).

Step 3: Use LARS-WG to reanalyse meteorological data based on correlated precipitation. The precipitation pattern generated from RainSim is used as the input for the reanalysis by LARS-WG to generate the rainfall statistics.

The temperature and radiation simulated for different future scenarios by LARS-WG along with the precipitation from RainSim are used as the input for LARS-WG.
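
The Change Factor step can be sketched as follows: monthly factors derived from GCM baseline and future climatologies are applied to the baseline precipitation series so that the observed day-to-day structure is preserved. The function and variable names below are illustrative assumptions, not the CLWRS code.

```python
# Illustrative sketch (not the CLWRS code) of the Change Factor approach:
# multiplicative monthly factors derived from GCM baseline and future
# climatologies are applied to the baseline precipitation series.
import numpy as np

def monthly_change_factors(gcm_baseline, gcm_future):
    """One multiplicative factor per calendar month; both inputs are monthly
    series starting in January."""
    base = np.asarray(gcm_baseline, dtype=float)
    fut = np.asarray(gcm_future, dtype=float)
    return np.array([fut[m::12].mean() / base[m::12].mean() for m in range(12)])

def apply_change_factors(precip_baseline, factors, calendar_months):
    """Scale each baseline value by the factor of its calendar month (1-12)."""
    return np.asarray(precip_baseline) * factors[np.asarray(calendar_months) - 1]
```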

Case Study and Data

The meteorological data, including maximum temperature (Tmax), minimum temperature (Tmin) and precipitation (P), for three meteorological stations are collected for 1965–2007 from the Environment Canada website (Environment Canada [83]).

Other meteorological data required are the dew point temperature (Tdew) and global radiation (R).

The hydrometric data for this case study for the period 1965–2007 were obtained from Canadian Water Office (Environment Canada [83]).

This relationship is then used for the data sourced from Environment Canada to obtain Tdew for the period 1965–2007.

Tdew is calculated from relative humidity (Rh), R, Tmax and Tmin, which are obtained from the CFSR data for the period 1979–2007, using the formula given by Lawrence [84].
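
A Magnus-type dew-point relation of the kind discussed by Lawrence [84] is sketched below; whether this exact variant and these coefficient values were the ones used in the study is an assumption on our part.

```python
# Sketch of a Magnus-type dew-point formula of the kind discussed by
# Lawrence [84]; whether this exact variant and these coefficients were used
# in the study is an assumption on our part.
import numpy as np

A_MAGNUS, B_MAGNUS = 17.625, 243.04   # Magnus coefficients (dimensionless, deg C)

def dew_point(t_air_c, rh_percent):
    """Dew point [deg C] from air temperature [deg C] and relative humidity [%]."""
    gamma = np.log(rh_percent / 100.0) + A_MAGNUS * t_air_c / (B_MAGNUS + t_air_c)
    return B_MAGNUS * gamma / (A_MAGNUS - gamma)
```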

The change factor approach described in the earlier section is employed to obtain future precipitation data for the Kootenay Watershed incorporating spatial characteristics.

The second type is the topographic data for the Kootenay Watershed (Kite [85]).

Results and Discussion

For the 100-year return period, the A1B scenario predicts an increase in flow of around 11.46% (relative to baseline values); the A2 and B1 scenarios predict decreases of 7.27 and 8.02%, respectively.

Another important observation is the difference in change factors and their influence in predicting peak flows.

The variation of the predicted flood peaks exhibits an increasing trend for different return periods, mirroring the trend of the 75th percentile.

For the lower return periods of 5 and 10 years, decreases in peak flows of 5.98 and 3.49% are predicted, whereas the higher return periods suggest increasing trends.

Similar to the B1 scenario, HADCM3 predicts the highest increases in the flow peaks, while the INCM3 and GIAOM predictions remain conservative and FGOALS suggests decreases.

The A1B scenario, which predicts temperature increases of smaller magnitude, predicted increasing flood peaks.

Conclusion

The CLWRS framework, combining LARS-WG and RainSim with the SLURP model, is presented.

LARS-WG is used to facilitate the quantification of the predictions’ uncertainty arising from emission scenarios.

Although this variation could complicate quantifying the relationship between changes in temperature, snowmelt and flood peaks, it would facilitate a better understanding of the Pacific Northwest.

The importance of this relationship is further emphasized by the changes in the flood peaks predicted by the different scenarios.

As LARS-WG continues to be updated with the CMIP5 scenarios and as more GCMs are incorporated to augment its predictive capabilities, the subsequent evolution of the trends in scenario uncertainty deserves further investigation.

This can be further extended to model and simulate temperature patterns at the different meteorological stations.

Acknowledgments

A machine generated summary based on the work of Vallam, P.; Qin, X. S. (2016 in Theoretical and Applied Climatology).