Abstract
Probabilistic wind forecasting is a methodology to deal with uncertainties in numerical weather prediction (NWP) models. In this chapter, we describe the need for ensemble forecasting, the different techniques used to generate the different initial conditions, and the operational ensemble models that are used nowadays in meteorological agencies. Then, we develop an ensemble method designed for the downscaling wind model described in Chap. 4, coupled with the AROME–HARMONIE mesoscale model, a non-hydrostatic dynamic forecast model described in Chap. 5. As we have explained in Chap. 4, some parameters need to be estimated since we do not know their exact values. These parameters are, basically, the roughness length and the zero plane displacement (explained in Chap. 2), as well as the Gauss moduli parameter (\(\alpha \)) used in the diagnostic wind model. This estimation is the main source of uncertainty in the model; therefore, we estimate some of these parameters using different forecast values from AROME–HARMONIE. Finally, the approach is applied on the island of Gran Canaria, comparing the ensemble results with experimental data from AEMET meteorological stations.
1 Probabilistic Forecasting
Up to this point, we have described deterministic weather models. These models are governed by their initial state, and errors in this state grow as the model advances into the future, since the models are unstable systems characterized by nonperiodicity. So, the accuracy of the forecast depends on the initial state, which is uncertain.
This relationship between the initial state and the deterministic prediction was discovered by Edward Lorenz and is discussed in his book “The Essence of Chaos” [24]. In 1962 [22], he simulated the evolution of the atmospheric state using the geostrophic form of the two-layer baroclinic model proposed in [21], consisting of 12 ordinary differential equations in 12 variables, combined with a linear regression on the output of the model. When he ran the simulations, he found that some solutions were drastically different. Analyzing the results, he found that, in some experiments, he had truncated the model output to three-digit accuracy while the original values had a precision of six digits. Just this small change led to significant differences in the forecast results. These differences imply that observations would need a precision beyond three decimal places to obtain a reliable forecast.
This result prompted the scientific community to look for a procedure to determine the best forecast of the atmospheric state according to the available data. Nowadays, there are several meteorological agencies worldwide running their numerical weather prediction (NWP) models, each one different from the others. The results from these models are consistent with the observed data, but they differ among themselves, so we cannot say which model is the “correct one”. Instead, we can think of each forecast as a member of an ensemble of atmospheric states that are consistent with the observations.
With this idea, Epstein realized that the atmosphere is deterministic, since it obeys the fundamental laws of hydrodynamics, but its state can only be known in a probabilistic way. Therefore, in [11], he proposed a “stochastic dynamic” (SD) approach consisting in applying the continuity equation for probability [14] to the observation data. He compared the results of the SD model with those of a deterministic model that used as initial condition the ensemble mean from the Monte Carlo method.
The problem with SD is that it is expensive; the number of equations for SD prediction is equal to the number of spectral components raised to the power of the number of moments. Philip Thompson [34] proposed a more efficient model by using variances directly instead of covariances; this way the number of equations was reduced.
With the advent of parallel machines, researchers developed different approaches to deal with the uncertainty of the initial state. Murphy [26] ran an experiment using the hemispheric version of the Meteorological Office (UKMO) five-level general circulation model. Initial conditions were obtained by perturbing a given state. Seven individual perturbations were used, and the ensemble forecast consisted of their integration.
To obtain the perturbed initial state, Murphy considered two different methods: random perturbation and lagged-average forecast. The random perturbation method generates the seven initial states by adding independent perturbations, consistent with the analysis errors, to the known initial state; it is similar to the Monte Carlo method. The lagged-average forecast method uses past observations to generate each member of the ensemble.
The Monte Carlo (or random perturbation) approach has some limitations; for example, the perturbed parameters can lead to imbalances in the atmospheric state. Another issue is that the perturbations in the Monte Carlo approach are random, while the perturbations should have certain preferred directions. With these ideas, different strategies for perturbing dynamical prediction models were studied. The two most used methods nowadays are the Singular Vector decomposition (SV) [1, 10, 28], used by the European Centre for Medium-Range Weather Forecasts (ECMWF) [19], and the Breeding Vector technique (BV), used by the National Centers for Environmental Prediction (NCEP) [36]. A comparison between the two methods using the ECMWF Integrated Forecast System is described in [25].
These advances, along with more powerful parallel machines and improvements in deterministic forecasting [33], led to the birth of Ensemble Prediction Systems (EPSs). EPSs are operational systems that provide probabilistic forecasts based on ensemble members. The method used to create the ensemble members differs between systems. The Meteorological Service of Canada (MSC) uses a Monte Carlo approach while, as said previously, ECMWF uses SV and NCEP uses BV [4].
More recently, a new approach to ensemble forecasting has been developed, the multimodel ensemble forecast. This approach uses forecasts from different models as ensemble members. The ensemble may be composed of deterministic forecasts or of ensemble prediction systems (in which case it is called a superensemble). The idea is to balance the strengths and weaknesses of each model and obtain a more reliable prediction [9, 16].
The THORPEX Interactive Grand Global Ensemble (TIGGE) [3] is a multimodel ensemble system that combines the predictions of the following models: ECMWF, UK Met Office (UKMO), National Centre for Medium Range Weather Forecasting—India (NCMRWF), China Meteorological Administration (CMA), Japan Meteorological Agency (JMA), National Centers for Environmental Prediction (NCEP-USA), Meteorological Service of Canada (CMC), Bureau of Meteorology Australia (BOM), Centro de Previsao Tempo e Estudos Climaticos Brazil (CPTEC), Korea Meteorological Administration (KMA), and MeteoFrance (MF) global models. Apart from this global initiative, there is the North American Ensemble Forecasting System (NAEFS) [7], which combines the systems from the Canadian Meteorological Centre (CMC) and the National Centers for Environmental Prediction (NCEP); and a European initiative: the Development of a European Multimodel Ensemble System for Seasonal to Interannual Prediction project (DEMETER) [27].
If the reader is interested in these developments, John M. Lewis [20] wrote a more thorough review of the history of ensemble models.
1.1 Initial State Perturbation Methods
In this subsection, we describe the two most used methods to perturb the initial state. A simple way of converting a deterministic forecast into a probabilistic one would be to modify the deterministic result using a probability distribution constructed from previous forecast errors. This strategy does not work because the underpinning dynamical equations are nonlinear, so the errors at the initial state do not relate directly to the errors of the predicted result. Therefore, we need to perturb the initial state itself. The Monte Carlo approach creates random perturbations of the initial state according to its known error characteristics. However, this leads to underdispersive forecast ensembles [5], because many sources of uncertainty are not explicitly represented in a Monte Carlo forecast.
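The Monte Carlo idea can be sketched in a few lines; the state vector, the error standard deviations, and the member count below are invented for illustration only:

```python
import random

def monte_carlo_ensemble(analysis, sigma, n_members, seed=0):
    """Generate ensemble initial states by adding independent Gaussian
    perturbations, consistent with assumed analysis error std deviations."""
    rng = random.Random(seed)
    return [[x + rng.gauss(0.0, s) for x, s in zip(analysis, sigma)]
            for _ in range(n_members)]

# Illustrative 3-variable state and per-variable analysis error sizes
analysis = [10.0, -2.5, 300.0]
sigma = [0.5, 0.5, 1.0]
members = monte_carlo_ensemble(analysis, sigma, n_members=7)
```

Each member is then integrated by the forecast model; the dispersion of the resulting forecasts is what, in practice, turns out to be too small.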
For this reason, new techniques were required to represent the nonlinearity of the dynamical equations in the ensemble predictions. Two of the most common techniques are discussed in this section: the Singular Vector decomposition and the Breeding Vector technique.
1.1.1 Singular Vector Decomposition
The main idea behind the Singular Vector method is the singular value decomposition of the forward tangent linear operator. The leading singular vectors can be physically interpreted as the fastest-growing perturbations; therefore, SVs give information about the direction and dynamics of rapidly growing instabilities and perturbations.
The method was devised by Lacarra and Talagrand in [18], where they were interested in identifying the perturbations that lead to the maximum difference between the simulated state and a reference one. They defined \(\mathbf {x}(0)\) as the vector containing the initial state information and the model as an operator \({\mathbf {M}}:\mathbb {R}^n\rightarrow \mathbb {R}^n\). Therefore, the evolution of the state is governed by
\[ \frac{d\mathbf {x}(t)}{dt} = {\mathbf {M}}\big (\mathbf {x}(t)\big ). \]
Since they were interested in knowing which perturbations differ most from a reference state, they needed to know how the state evolves. For this reason, they define the resolvent \({\mathbf {R}}(0,t)\) of \({\mathbf {M}}\) as the operator advancing the state from time 0 to time t,
\[ \mathbf {x}(t) = {\mathbf {R}}(0,t)\big (\mathbf {x}(0)\big ). \]
If the perturbed initial state is defined as \((\mathbf {x}(0) + \mathbf {\chi }(0))\), then the time evolution of the perturbed state can be written as
\[ \mathbf {x}(t) + \mathbf {\chi }(t) = {\mathbf {R}}(0,t)\big (\mathbf {x}(0) + \mathbf {\chi }(0)\big ) = \mathbf {x}(t) + \frac{\partial {\mathbf {R}}(0,t)}{\partial \mathbf {x}}\,\mathbf {\chi }(0) + \mathcal {O}\big (\Vert \mathbf {\chi }(0)\Vert ^2\big ); \]
the second-order term can be neglected, and the derivative of \(\mathbf {\chi }\) is
\[ \frac{d\mathbf {\chi }(t)}{dt} = \frac{\partial {\mathbf {M}}}{\partial \mathbf {x}}\bigg |_{\mathbf {x}(t)}\,\mathbf {\chi }(t). \]
This linear system of equations is called the tangent linear system of \({\mathbf {M}}\) in the vicinity of the particular solution \(\mathbf {x}(t)\). It describes the temporal evolution of the perturbation \(\mathbf {\chi }(t)\) to first order in the initial perturbation \(\mathbf {\chi }(0)\). We can rewrite Eq. (4) as
\[ \mathbf {\chi }(t) = {\mathbf {L}}(0,t)\,\mathbf {\chi }(0), \]
where the operator \({\mathbf {L}}(0,t)\) is the forward tangent linear operator, or linear propagator. The perturbations that maximize the difference can be found using the singular value decomposition of \({\mathbf {L}}(0,t)\),
\[ {\mathbf {L}}(0,t) = {\mathbf {W}}\varLambda {\mathbf {Y}}^*, \]
where \(\varLambda \) is a diagonal matrix with the singular values of \({\mathbf {L}}\) (\(\lambda _1, \lambda _2, \dots \)), and \({\mathbf {Y}}^*\) is the conjugate transpose of \({\mathbf {Y}}\). The columns of \({\mathbf {Y}}\) correspond to the initial (or right) singular vectors. The columns of \({\mathbf {W}}\) are the evolved (or left) singular vectors.
The singular vectors of \({\mathbf {L}}\) are the same as the eigenvectors of \({\mathbf {L}}^*{\mathbf {L}}\). And, specifically, \({\mathbf {Y}}\) and \({\mathbf {W}}\) are related in the following manner:
\[ {\mathbf {L}}\,\mathbf {y}_i = \lambda _i\,\mathbf {w}_i, \qquad {\mathbf {L}}^*\mathbf {w}_i = \lambda _i\,\mathbf {y}_i. \]
To find the perturbations with the maximum amplitude growth, we need a way to measure their size. To this end, we can use any norm
\[ \Vert \mathbf {\chi }\Vert ^2 = \langle \mathbf {\chi }, {\mathbf {E}}\,\mathbf {\chi }\rangle , \]
where \({\mathbf {E}}\) is a matrix operator that defines the inner product.
For a linear operator \({\mathbf {L}}\), there exists its adjoint \({\mathbf {L}}^*\) such that \(\langle \chi , {\mathbf {L}}y\rangle = \langle {\mathbf {L}}^*\chi , y\rangle \). It is possible to choose different norms at the initial and the final times,
\[ \Vert \mathbf {\chi }(0)\Vert ^2 = \langle \mathbf {\chi }(0), E_0\,\mathbf {\chi }(0)\rangle , \qquad \Vert \mathbf {\chi }(t)\Vert ^2 = \langle \mathbf {\chi }(t), E_t\,\mathbf {\chi }(t)\rangle . \]
The objective is to maximize the growth rate, or amplification factor, defined as
\[ \lambda ^2 = \frac{\Vert \mathbf {\chi }(t)\Vert ^2}{\Vert \mathbf {\chi }(0)\Vert ^2} = \frac{\langle \mathbf {\chi }(0), {\mathbf {L}}^*E_t{\mathbf {L}}\,\mathbf {\chi }(0)\rangle }{\langle \mathbf {\chi }(0), E_0\,\mathbf {\chi }(0)\rangle }. \]
To maximize \(\lambda ^2\), we solve the following eigenvalue problem:
\[ {\mathbf {L}}^*E_t{\mathbf {L}}\,\mathbf {y}_i(t_0) = \lambda _i^2\,E_0\,\mathbf {y}_i(t_0). \]
We can rewrite this equation using the variable transformation \(y_i(t_0) = E_0^{-\frac{1}{2}}\gamma _i(t_0)\):
\[ E_0^{-\frac{1}{2}}{\mathbf {L}}^*E_t{\mathbf {L}}E_0^{-\frac{1}{2}}\,\mathbf {\gamma }_i(t_0) = \lambda _i^2\,\mathbf {\gamma }_i(t_0). \]
This equation has the same form as Eq. (7); comparing them we can conclude that the eigenvectors of \(E_0^{-\frac{1}{2}}{\mathbf {L}}^*E_t{\mathbf {L}}E_0^{-\frac{1}{2}} = \left( E_0^{-\frac{1}{2}}{\mathbf {L}}^*E_t^{\frac{1}{2}}\right) \left( E_t^{\frac{1}{2}}{\mathbf {L}}E_0^{-\frac{1}{2}}\right) = {\mathbf {L}}_s^*{\mathbf {L}}_s\) are the initial singular vectors of \({\mathbf {L}}_s\); and they represent the perturbations with a maximum amplification factor in the time interval \((t_0, t)\).
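As a toy illustration of this eigenproblem (not the operational procedure), the leading initial singular vector of a small linear propagator can be obtained directly as the dominant eigenvector of \({\mathbf {L}}^T{\mathbf {L}}\). We take the Euclidean norm (\(E_0 = E_t = I\)) and a hypothetical real 2×2 propagator:

```python
import math

# Hypothetical 2x2 linear propagator L(0, t)
L = [[2.0, 1.0],
     [0.0, 0.5]]

# A = L^T L: its eigenvectors are the initial singular vectors and its
# eigenvalues the squared singular values (Euclidean norm, E_0 = E_t = I)
A = [[L[0][0]**2 + L[1][0]**2, L[0][0]*L[0][1] + L[1][0]*L[1][1]],
     [L[0][1]*L[0][0] + L[1][1]*L[1][0], L[0][1]**2 + L[1][1]**2]]

# Closed-form eigenvalues of the symmetric 2x2 matrix A
tr = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
disc = math.sqrt(tr*tr/4.0 - det)
lam2_max = tr/2.0 + disc            # largest eigenvalue of L^T L

# Leading singular value = maximum amplification factor
sigma_max = math.sqrt(lam2_max)

# Leading initial singular vector y solves (A - lam2_max I) y = 0
y = [A[0][1], lam2_max - A[0][0]]
n = math.hypot(y[0], y[1])
y = [y[0]/n, y[1]/n]
```

Evolving the unit vector `y` with `L` amplifies it by exactly `sigma_max`; any other unit perturbation grows less.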
When used in real numerical weather prediction models, the calculation of the singular vectors is difficult because the linear propagator of the model \({\mathbf {M}}\) cannot be formed explicitly. In operational ensemble prediction systems, this calculation is made using tangent linear and adjoint models together with an iterative Lanczos algorithm [6, 12]. A review of the method with applications to El Niño, as well as to decadal forecasting, is presented in [29]. Also, Diaconescu and Laprise [8] review applications such as forecast error estimation, ensemble forecasting, targeted adaptive observations, predictability studies, and growth arising from instabilities.
1.1.2 Breeding Vector
This method is the most computationally inexpensive [38]. There are two different versions of this method: the simple breeding [35], and the masked breeding [36].
The main idea of the method is that the choice of the initial perturbations has to cover the whole space of possible analysis errors. In an operational NWP system, the error of the initial state is reduced by the use of observations. Therefore, the most important errors are those associated with the evolution of the model. The breeding method constructs the perturbation from the difference between the perturbed and the unperturbed forecasts. Using this technique, all random perturbations develop into the structure of the leading local (time-dependent) Lyapunov vectors (LLVs; see [37]) of the atmosphere after a transient period.
Toth and Kalnay [36] describe the main steps of the breeding method as:
1. add a small, arbitrary perturbation to the atmospheric analysis (initial state) at a given day \(t_0\);
2. integrate the model from both the perturbed and unperturbed initial conditions for a short period \(t_1\);
3. subtract one forecast from the other;
4. scale down the difference field so that it has the same norm as the initial perturbation;
5. add this difference into the analysis corresponding to the following period \(t_1\).
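The steps above can be sketched with a toy one-dimensional model; here a logistic map stands in for the forecast model, and the model, the sequence of analyses, and the perturbation size are all illustrative:

```python
def model(x, steps=5):
    """Toy nonlinear 'forecast model': a few iterations of a logistic map."""
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)
    return x

def breed(analyses, eps=1e-4):
    """Cycle steps 1-5 of the breeding method over a sequence of analyses
    and return the bred perturbation after the last cycle."""
    pert = eps                              # step 1: small arbitrary perturbation
    for analysis in analyses:
        ctrl = model(analysis)              # step 2: unperturbed forecast
        perturbed = model(analysis + pert)  # step 2: perturbed forecast
        diff = perturbed - ctrl             # step 3: subtract one from the other
        pert = eps * diff / abs(diff)       # step 4: rescale to the initial norm
        # step 5: pert is added to the next analysis in the following cycle
    return pert

analyses = [0.20, 0.35, 0.50, 0.62, 0.41]
bred = breed(analyses)
```

In one dimension the bred "vector" reduces to a sign, but the mechanics are the same as in the full atmospheric case: the rescaling keeps the perturbation small while its structure is shaped by the fastest-growing forecast differences.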
By construction, this method “breeds” the nonlinear perturbations that grow fastest. Therefore, independent perturbations will converge to the same structure after enough time steps. This structure is related to the LLVs, which have been used to characterize the behavior of dynamical systems. The Lyapunov exponents \(\lambda _i\) are defined as
\[ \lambda _i = \lim _{t\rightarrow \infty }\frac{1}{t}\ln \frac{\Vert \mathbf {p}_i(t)\Vert }{\Vert \mathbf {p}_i(0)\Vert }, \]
where p is a linear perturbation spanning the phase space of the system with orthogonal vectors.
Each Lyapunov exponent can be associated with a perturbation vector. The vector associated with the largest exponent has the property that any random perturbation introduced an infinitely long time earlier develops into it. Lorenz [23] described this property; he noted that initially random perturbations showed a strong similarity after 8 days of integration. The breeding method converges to these LLVs after 3 or 4 days of integration.
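For a one-dimensional map, the largest Lyapunov exponent can be estimated by averaging the log of the local stretching factor along an orbit. As an illustrative check (not part of the breeding method itself), the logistic map \(x_{n+1} = 4x_n(1-x_n)\) has the known exponent \(\ln 2\):

```python
import math

def lyapunov_logistic(x0=0.3, n=100000, r=4.0):
    """Estimate the Lyapunov exponent of the logistic map by averaging
    log|f'(x)| = log|r(1 - 2x)| along the orbit starting at x0."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

le = lyapunov_logistic()  # should be close to ln 2
```

A positive exponent means nearby states separate exponentially, which is precisely why bred perturbations align with the fastest-growing directions.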
The masked breeding is the same as the simple breeding described before, but taking into account the geographically dependent uncertainty.
1.2 Multimodel Ensemble Methods
The rationale behind multimodel ensemble methods is that collective information is better than single information, especially for complex processes. In the concrete case of short- and medium-range weather forecasting, it was demonstrated that combining different forecasts could be beneficial [2, 15, 32]. Regarding the combination of multiple models, Fritsch et al. [13] suggested that the superiority of the combined forecast relies on the variations in model physics and numerics between models, which play a substantial role in generating the full spectrum of possible solutions.
However, we should note that varying model physics and numerics is not enough; another source of uncertainty is the initial state of the atmosphere. This kind of uncertainty is handled by Ensemble Prediction Systems using a technique to perturb the initial state (such as the ones described in Sect. 1.1). So, a good idea could be to combine both approaches. Palmer et al. [27] developed a European multimodel ensemble system known as DEMETER.
When developing a multimodel ensemble system, there are several choices to be made. For example, we can consider all the individual forecasts equal, so we just combine them with the same weight. However, more complex methods of optimally combining the single-model outputs have been described [17, 30, 31]. Another aspect is how the initial state is perturbed; is it better to use the same perturbation in all models, or should we use the default perturbation technique of each model?
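The two weighting choices can be sketched as follows; the forecasts, the historical errors, and the inverse-RMSE weighting scheme are invented for illustration, the latter being only one simple instance of the optimal-combination methods cited above:

```python
def equal_weight(forecasts):
    """Combine member forecasts with identical weights."""
    return sum(forecasts) / len(forecasts)

def skill_weight(forecasts, past_rmse):
    """Weight each model by its inverse historical RMSE, so that
    historically better models contribute more to the combination."""
    weights = [1.0 / e for e in past_rmse]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total

# Hypothetical 10 m wind speed forecasts (m/s) from three models
forecasts = [6.0, 7.5, 6.6]
past_rmse = [1.2, 2.0, 1.5]   # invented historical errors of each model

simple = equal_weight(forecasts)
weighted = skill_weight(forecasts, past_rmse)
```

The skill-weighted combination is pulled toward the models with the smallest past errors, which is the basic intuition behind the superensemble approach.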
In the concrete case of DEMETER, for each model except that of the Max-Planck Institute (MPI), uncertainties in the initial state are represented through an ensemble of nine different ocean initial conditions. Three different ocean analyses are used: a control ocean analysis forced with momentum, heat, and mass flux data from the ECMWF 40-yr Reanalysis (ERA-40), and two perturbed ocean analyses created by adding daily wind stress perturbations to the ERA-40 momentum fluxes. The wind stress perturbations are randomly taken from a set of monthly differences between two quasi-independent analyses. Also, to represent the uncertainty in SSTs, four SST perturbations are added and subtracted at the start of the hindcasts. As in the case of the wind perturbations, the SST perturbations are based on differences between two quasi-independent SST analyses. Atmospheric and land surface initial conditions are taken directly from ERA-40.
Palmer et al. [27] conclude that the multimodel ensemble is a viable, pragmatic approach to the problem of representing model uncertainty in seasonal-to-interannual prediction, and that it leads to a more reliable forecasting system than one based on any single model.
A study of the superiority of multimodel ensemble systems has been done by Hagedorn et al. in [9, 16].
2 Ensemble Model for Diagnostic Wind Field
Given the importance of accounting for uncertainties in the prediction of the wind field, in this chapter we describe a simple ensemble method designed for Wind3D, the diagnostic wind model presented in Chap. 4. In the same spirit as Wind3D, the ensemble approach described in this section is a fast procedure designed for the microscale.
Schematically, in any NWP system, the main sources of uncertainty come from observations, model parameters, data assimilation procedures, and boundary conditions.
In the wind model described in Chap. 4, we have identified the parameters with the most uncertainty, namely: the Gauss moduli parameter (\(\alpha \)), the roughness length (\(z_0\)), and the displacement height (d). If we place these uncertain parameters in the four categories defined above, \(\alpha \) belongs to the model parameters, while \(z_0\) and d belong to the boundary conditions. An evolutionary algorithm was presented to characterize these parameters. However, it has been noted that even the “best estimation” has some uncertainty; in Sect. 4.2, several evolutionary algorithms were run, leading to different parameter estimations.
Another source of uncertainty in Wind3D comes from the observations. Remember that these observations can originate from measurement stations or from the forecast of a deterministic NWP. In the case of measured data, the errors are related to the instrument and the daily conditions, whereas in the deterministic NWP forecast we are using the “best forecast” provided by the NWP, and we have already seen that this forecast may be inaccurate. Moreover, due to the differences in horizontal resolution between the local-scale diagnostic wind model and the NWP, the terrain heights at the grid points of the two models can be inconsistent. In this case, we do not know if these points are reliable for Wind3D. So we may ask ourselves: “Which NWP forecast points are reliable?”
Since the method described is an ensemble forecast system, the wind model is used in conjunction with an NWP to provide predictive capability. In this case, to be able to estimate the variables, we need two different sets of data: the set used to run the wind model and the set of observations the results are compared against. Immediately, another question arises: “How do we generate these sets?”
The ensemble model described here tries to answer these two questions. The model chooses the valid NWP points based on the difference between the terrain heights of the two models; when the difference between the NWP height and the diagnostic-model height is lower than a threshold, the point is considered valid. Once we have chosen the valid points, we construct the two subsets (model observations and validation data) using a random selection. Once the two subsets are created, we estimate the best values for \(\alpha \), \(\varepsilon \), \(z_0\), and d using the memetic algorithm discussed in Sect. 4.2. Figure 1 shows the diagram of the method.
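The member-generation procedure can be sketched as follows; the point structure, field names, and height values are invented, while the 50 m threshold and the random observation/validation split follow the text:

```python
import random

def select_valid_points(points, threshold=50.0):
    """Keep NWP points whose orography differs from the microscale
    terrain height by less than the threshold (in meters)."""
    return [p for p in points if abs(p["h_nwp"] - p["h_micro"]) < threshold]

def make_member(valid_points, obs_fraction=0.5, seed=0):
    """Randomly split the valid points into observations (model input)
    and validation data (for the fitting function); each seed yields
    one ensemble member."""
    rng = random.Random(seed)
    shuffled = valid_points[:]
    rng.shuffle(shuffled)
    k = int(len(shuffled) * obs_fraction)
    return shuffled[:k], shuffled[k:]   # (observations, validation)

# Invented sample of NWP grid points with the two terrain heights
points = [{"id": i, "h_nwp": 100.0 + 40 * i, "h_micro": 120.0 + 55 * i}
          for i in range(8)]
valid = select_valid_points(points)
obs, val = make_member(valid, seed=1)
```

Running `make_member` with different seeds produces the different random splits, and therefore the different members of the ensemble.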
This method can also be used with various NWP forecasts, emulating a multimodel ensemble. For example, some ensemble members can come from the ECMWF model, other members from NCEP, and the rest from AROME–HARMONIE.
3 Numerical Experiment
In this section, we present an application of the presented methodology on the island of Gran Canaria. The ensemble forecast is generated from the AROME–HARMONIE forecast with a horizontal resolution of 2.5 km and is validated against measured data from the AEMET network stations. The day of the simulation is February 20, 2010.
The mesh for this application is created with the Meccano method (Chap. 3) from a digital terrain model of the Gran Canaria island. The height of the domain is 10,000 m, and the resulting mesh has 251,808 nodes and 1,090,366 tetrahedra (Fig. 2).
Figure 3 shows the terrain height in the Meccano mesh and in the AROME–HARMONIE grid. We can observe the differences between the heights considered by Wind3D and by AROME–HARMONIE. The maximum height is around 1,000 m in the AROME–HARMONIE discretization and 2,000 m in the Wind3D discretization. This large height difference indicates that, at some points, the AROME–HARMONIE 10 m velocity may not be appropriate. For this reason, instead of using all the 10 m data, we have selected a subset of points according to a height difference criterion.
Once a set of points has been chosen, we randomly divide it into two different subsets. One subset is used as observations in Wind3D, and the other subset is used by the evolutionary algorithm to compute the fitting function. The fitting function is the Root Mean Square Error (RMSE) between the values forecast by Wind3D and the data in the second subset. In this case, we have selected the points whose height difference is less than 50 m. These selected points are shown in Fig. 4 (left). The two randomly generated subsets can be seen in Fig. 4 (right); green points are used as observations for Wind3D, and red points are used to compute the RMSE.
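The fitting function is a plain RMSE between the Wind3D forecast and the validation subset; a minimal sketch, with invented wind speeds standing in for the values at the red points:

```python
import math

def rmse(forecast, observed):
    """Root Mean Square Error between forecast and validation values."""
    assert len(forecast) == len(observed)
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed))
                     / len(forecast))

# Hypothetical wind speeds (m/s) at four validation points
wind3d = [5.1, 6.4, 7.0, 4.8]
validation = [5.0, 6.0, 7.5, 4.6]
fitness = rmse(wind3d, validation)
```

The evolutionary algorithm searches for the parameter set (\(\alpha \), \(\varepsilon \), \(z_0\), d) that minimizes this value for each ensemble member.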
Now we have generated all the members of the ensemble. Then, we estimate the best values of \(\alpha \), \(\varepsilon \), \(z_0\) and d, and with these best values, we compute the forecast wind using the Wind3D model.
Finally, to validate the method, we compare the ensemble forecast results with the observed data measured in the AEMET network of automatic stations. Each station provides two values: the average and the maximum wind velocity over the last 10 min. Their UTM coordinates are summarized in Table 1, and their positions on a map are shown in Fig. 5.
Figure 6 shows the comparison of measured data and the ensemble box plot forecast. We show the most representative comparisons from four stations. The first thing that we can notice is that, in general, the mean value of the ensemble forecast is reasonably similar to the measured wind velocity. In some cases, the forecasted velocity is close to the maximum (C625O, C639Y), in some others, it is close to the average velocity (C619X), and sometimes it is in between (C635B).
Another observation is that the variation of the mean value of the ensemble forecast is smoother than the measured velocity. In contrast, the measured data exhibits abrupt changes among time steps. These abrupt changes are not captured by any member of the ensemble.
A more detailed inspection of the comparisons reveals interesting details. For example, the ensemble forecast at station C619X has many outliers at all time steps. C639Y also has some, but they are close to the mean values. In contrast, C635B and C625O do not have outliers at every time step. These outliers can sometimes provide interesting information; for example, at station C619X between 0 and 7 h, they capture the whole range between the average and the maximum data.
The C625O station deserves a special mention. Analyzing it carefully, we can observe that, between 11 h and midnight, the difference between the maximum and average measured data increases. This increase is captured in the ensemble forecast by the higher dispersion of the box plot. This agreement between ranges shows that the resulting ensemble probability can be useful in predicting the uncertainty of the wind velocity.
4 Conclusions
In this chapter, we have seen that a probabilistic approach to numerical weather prediction is necessary. It was introduced with a brief review of the progress made in this area: the discovery of the need for a probabilistic approach and the development of the corresponding techniques. Then, we went into more detail with the description of two of the most used methods to perturb the initial state: Singular Vector decomposition and Breeding Vectors. To finish the introduction, we described the basis of multimodel ensemble methods.
Next, we described an ensemble forecast method specially designed for the microscale. This method is based on the estimation of the uncertain parameters using an evolutionary algorithm. The uncertain parameters are both model parameters, i.e., \(\alpha \) and \(\varepsilon \), and physical parameters, namely the roughness length (\(z_0\)) and the displacement height (d). The evolutionary algorithm minimizes the error between the wind field predicted by the microscale wind model and the forecast of an NWP. The NWP forecast provides both the input data of the model and the control data used to compute the fitting function of the evolutionary algorithm. The selection of these two subsets is random and generates the different members of the ensemble system.
Finally, to illustrate the methodology and validate the model, we presented a numerical experiment. In this experiment, we use the microscale model Wind3D described in Chap. 4 coupled with the AROME–HARMONIE model described in Chap. 5. The experiment is located on the island of Gran Canaria on February 20, 2010. The results have shown that, at any predicted time and station, the forecast ensemble probability lies between the average and the maximum velocity, usually closer to the maximum. Also, the range of the forecast increases when the difference between the maximum and average velocity rises, providing a tool to predict the variability of the wind field.
References
Barkmeijer J, Bouttier F, Gijzen MV (1998) Singular vectors and estimates of the analysis-error covariance metric. Q J Roy Meteorol Soc 124(549):1695–1713. https://doi.org/10.1002/qj.49712454916
Bosart LF (1975) Sunya experimental results in forecasting daily temperature and precipitation. Mon Weather Rev 103(11):1013–1020. https://doi.org/10.1175/1520-0493(1975)103<1013:SERIFD>2.0.CO;2
Bougeault P, Toth Z, Bishop C, Brown B, Burridge D, Chen DH, Ebert B, Fuentes M, Hamill TM, Mylne K, Nicolau J, Paccagnella T, Park YY, Parsons D, Raoult B, Schuster D, Dias PS, Swinbank R, Takeuchi Y, Tennant W, Wilson L, Worley S (2010) The THORPEX interactive grand global ensemble. Bull Am Meteorol Soc 91(8):1059–1072. https://doi.org/10.1175/2010bams2853.1
Buizza R, Houtekamer PL, Pellerin G, Toth Z, Zhu Y, Wei M (2005) A comparison of the ECMWF, MSC, and NCEP global ensemble prediction systems. Mon Weather Rev 133(5):1076–1097. https://doi.org/10.1175/mwr2905.1
Buizza R, Milleer M, Palmer TN (1999) Stochastic representation of model uncertainties in the ECMWF ensemble prediction system. Q J Roy Meteorol Soc 125(560):2887–2908. https://doi.org/10.1002/qj.49712556006
Buizza R, Tribbia J, Molteni F, Palmer T (1993) Computation of optimal unstable structures for a numerical weather prediction model. Tellus A 45(5):388–407. https://doi.org/10.1034/j.1600-0870.1993.t01-4-00005.x
Candille G (2009) The multiensemble approach: The NAEFS example. Mon Weather Rev 137(5):1655–1665. https://doi.org/10.1175/2008mwr2682.1
Diaconescu EP, Laprise R (2012) Singular vectors in atmospheric sciences: A review. Earth Sci Rev 113(3–4):161–175. https://doi.org/10.1016/j.earscirev.2012.05.005
Doblas-Reyes FJ, Hagedorn R, Palmer TN (2005) The rationale behind the success of multi-model ensembles in seasonal forecasting—II. Calibration and combination. Tellus A 57(3):234–252. https://doi.org/10.1111/j.1600-0870.2005.00104.x
Ehrendorfer M, Tribbia JJ (1997) Optimal prediction of forecast error covariances through singular vectors. J Atmos Sci 54(2):286–313. https://doi.org/10.1175/1520-0469(1997)054<0286:OPOFEC>2.0.CO;2
Epstein ES (1969) Stochastic dynamic prediction. Tellus 21(6):739–759. https://doi.org/10.3402/tellusa.v21i6.10143
Errico RM, Ehrendorfer M, Raeder K (2001) The spectra of singular values in a regional model. Tellus A 53(3):317–332. https://doi.org/10.1034/j.1600-0870.2001.01199.x
Fritsch JM, Hilliker J, Ross J, Vislocky RL (2000) Model consensus. Weather Forecast 15(5):571–582. https://doi.org/10.1175/1520-0434(2000)015<0571:MC>2.0.CO;2
Gleeson TA (1966) A causal relation for probabilities in synoptic meteorology. J Appl Meteorol 5(3):365–368. https://doi.org/10.1175/1520-0450(1966)005<0365:ACRFPI>2.0.CO;2
Gyakum JR (1986) Experiments in temperature and precipitation forecasting for illinois. Weather Forecast 1(1):77–88. https://doi.org/10.1175/1520-0434(1986)001<0077:EITAPF>2.0.CO;2
Hagedorn R, Doblas-Reyes FJ, Palmer TN (2005) The rationale behind the success of multi-model ensembles in seasonal forecasting—I. Basic concept. Tellus A 57(3):219–233. https://doi.org/10.1111/j.1600-0870.2005.00103.x
Krishnamurti TN (1999) Improved weather and seasonal climate forecasts from multimodel superensemble. Science 285(5433):1548–1550. https://doi.org/10.1126/science.285.5433.1548
Lacarra JF, Talagrand O (1988) Short-range evolution of small perturbations in a barotropic model. Tellus A Dyn Meteorol Oceanogr 40(2):81–95. https://doi.org/10.3402/tellusa.v40i2.11784
Leutbecher M (2005) On ensemble prediction using singular vectors started from forecasts. Mon Weather Rev 133(10):3038–3046. https://doi.org/10.1175/MWR3018.1
Lewis JM (2005) Roots of ensemble forecasting. Mon Weather Rev 133(7):1865–1885. https://doi.org/10.1175/mwr2949.1
Lorenz EN (1960) Energy and numerical weather prediction. Tellus 12(4):364–373. https://doi.org/10.3402/tellusa.v12i4.9420
Lorenz EN (1962) The statistical prediction of solutions of dynamic equations. In: International Symposium on Numerical Weather Prediction in Tokyo. The Meteorological Society of Japan
Lorenz EN (1965) A study of the predictability of a 28-variable atmospheric model. Tellus 17(3):321–333. https://doi.org/10.1111/j.2153-3490.1965.tb01424.x
Lorenz EN (1995) The essence of chaos. University of Washington Press
Magnusson L, Leutbecher M, Källén E (2008) Comparison between singular vectors and breeding vectors as initial perturbations for the ECMWF ensemble prediction system. Mon Weather Rev 136(11):4092–4104. https://doi.org/10.1175/2008MWR2498.1
Murphy JM (1988) The impact of ensemble forecasts on predictability. Q J Royal Meteorol Soc 114(480):463–493. https://doi.org/10.1002/qj.49711448010
Palmer TN, Doblas-Reyes FJ, Hagedorn R, Alessandri A, Gualdi S, Andersen U, Feddersen H, Cantelaube P, Terres JM, Davey M, Graham R, Délécluse P, Lazar A, Déqué M, Guérémy JF, Díez E, Orfila B, Hoshen M, Morse AP, Keenlyside N, Latif M, Maisonnave E, Rogel P, Marletto V, Thomson MC (2004) Development of a European multimodel ensemble system for seasonal-to-interannual prediction (DEMETER). Bull Am Meteorol Soc 85(6):853–872. https://doi.org/10.1175/bams-85-6-853
Palmer TN, Gelaro R, Barkmeijer J, Buizza R (1998) Singular vectors, metrics, and adaptive observations. J Atmos Sci 55(4):633–653. https://doi.org/10.1175/1520-0469(1998)055<0633:SVMAAO>2.0.CO;2
Palmer TN, Zanna L (2013) Singular vectors, predictability and ensemble forecasting for weather and climate. J Phys A Math Theor 46(25):254,018. https://doi.org/10.1088/1751-8113/46/25/254018
Pavan V, Doblas-Reyes FJ (2000) Multi-model seasonal hindcasts over the euro-atlantic: skill scores and dynamic features. Clim Dyn 16(8):611–625. https://doi.org/10.1007/s003820000063
Rajagopalan B, Lall U, Zebiak SE (2002) Categorical climate forecasts through regularization and optimal combination of multiple gcm ensembles. Mon Weather Rev 130(7):1792–1811. https://doi.org/10.1175/1520-0493(2002)130<1792:CCFTRA>2.0.CO;2
Sanders F (1973) Skill in forecasting daily temperature and precipitation: some experimental results. Bull Am Meteorol Soc 54(11):1171–1178. https://doi.org/10.1175/1520-0477(1973)054<1171:SIFDTA>2.0.CO;2
Simmons AJ, Hollingsworth A (2002) Some aspects of the improvement in skill of numerical weather prediction. Q J R Meteorol Soc 128(580):647–677. https://doi.org/10.1256/003590002321042135
Thompson PD (1985) Prediction of the probable errors of predictions. Monthly Weather Rev 113(2):248–259. https://doi.org/10.1175/1520-0493(1985)113<0248:POTPEO>2.0.CO;2
Toth Z, Kalnay E (1993) Ensemble forecasting at NMC: the generation of perturbations. Bull Am Meteorol Soc 74(12):2317–2330. https://doi.org/10.1175/1520-0477(1993)074<2317:EFANTG>2.0.CO;2
Toth Z, Kalnay E (1997) Ensemble forecasting at NCEP and the breeding method. Mon Weather Rev 125(12):3297–3319. https://doi.org/10.1175/1520-0493(1997)125<3297:EFANAT>2.0.CO;2
Trevisan A, Legnani R (1995) Transient error growth and local predictability: a study in the lorenz system. Tellus A Dyn Meteorol Oceanogr 47(1):103–117. https://doi.org/10.3402/tellusa.v47i1.11496
Wang X, Bishop CH (2003) A comparison of breeding and ensemble transform kalman filter ensemble forecast schemes. J Atmos Sci 60(9):1140–1158. https://doi.org/10.1175/1520-0469(2003)060<1140:ACOBAE>2.0.CO;2
© 2018 Springer International Publishing AG, part of Springer Nature
Oliver, A., Rodríguez, E., Mazorra-Aguiar, L. (2018). Wind Field Probabilistic Forecasting. In: Perez, R. (ed) Wind Field and Solar Radiation Characterization and Forecasting. Green Energy and Technology. Springer, Cham. https://doi.org/10.1007/978-3-319-76876-2_6