1 Atmospheric Scales in Time and Space

1.1 Introduction

Variations in wind speeds in the atmosphere are caused by processes with time intervals ranging from less than one second to several months and spatial structures from a few millimetres to many hundreds of kilometres (Fig. 1). The scale analysis of meteorological motion systems originates from dynamic meteorology. By eliminating dispensable terms in the equations of motion (Navier–Stokes equations), scale analysis helps to simplify mathematical equations of numerical weather prediction for different scales. For wind energy, different scales play different roles: Microscale effects such as turbulence are important for the design of wind turbines as they influence the mechanical loads, while larger scales are important for the expected energy yield.

Fig. 1 Atmospheric scales

1.2 Global Scales—Formation of Global Wind Systems

Due to its spherical shape, the Earth receives uneven solar radiation. This is accompanied by different degrees of warming, which is strongest at the equator and decreases towards the poles. The strong warming at the equator leads to the formation of low-pressure areas, while high-pressure areas form at the poles. The pressure differences create a near-surface flow from the poles towards the equator. At higher altitudes, the airflow is oriented in the opposite direction. The Earth's rotation and the resulting Coriolis force cause the formation of three distinct circulation cells in each hemisphere, which are illustrated in Fig. 2.

Fig. 2 Global wind circulation patterns

In each hemisphere, tropical, mid- and polar latitudes can be identified. Europe is mostly located in the mid-latitudes. This area is characterized by cyclonic westerly winds. To the north, the mid-latitudes are bordered by the polar front, which is located around the 60th degree of latitude. The polar zone is dominated by a large high-pressure area over the polar region with easterly flows.

The tropical low-pressure belt at the equator is called doldrums. The subtropical high-pressure belt, also known as the horse latitudes, runs along the 30th degree of latitude. Like the middle latitudes, the tropical area is also interesting for wind energy: The Hadley circulation cells are formed by the air mass transport, which leads to the very constant and strong trade winds in the lower layers.

The position of global wind systems shifts with the seasons. During summer in the northern hemisphere, the earth’s surface warms up most not directly at the equator, but further north at the 23rd degree of latitude, the Tropic of Cancer. Accordingly, the global wind systems shift towards the north.

1.3 Mesoscale Phenomena—Formation of Local Wind Systems

Similar to the global wind systems, local compensating winds can also arise due to temperature differences. Knowledge of the formation mechanisms of these systems can be helpful in the search for suitable areas for wind energy.

The best known is probably the sea–land circulation, which is driven by differential warming of land and water coupled with their different heat capacities. During the day, land masses warm up more than the water surface. This differential warming causes a pressure gradient in the near-surface layer from sea to land (Fig. 3). This results in a cool and humid sea breeze that intensifies as the day progresses. In the late afternoon in clear weather conditions, the landmass begins to cool rapidly, while the high heat storage capacity of the water leads to comparatively negligible cooling. When the land mass is cooler than the water, the pressure gradient reverses and a land breeze develops. However, the land breeze is weaker than the sea breeze, since the temperature difference during night is much smaller than the temperature difference during the day.

Fig. 3 Sea breeze, example daytime

How far inland the sea breeze is felt depends on many factors. These include, for example:

  • What is the vegetation like? High surface roughness due to forests, for example, weakens the sea breeze.

  • Is the sea breeze strengthened by the winds caused by pressure systems? If, for example, there is an area of high pressure over Poland, the resulting easterly winds from the area of high pressure will strengthen the sea breezes at the Baltic coast in northern Germany.

Circulations can also form in the mountains during radiation-intensive high-pressure weather conditions. The driver for the mountain and valley winds is again warming, in this case of the sunlit mountain slopes. The warming is accompanied by thermal upwelling, which causes an upwards flow on the slopes of the valley. The air rising on the slopes is replaced by the valley wind from below. At night, the process is reversed: the ground cools faster and more strongly than the air at the same altitude in the free atmosphere. This creates gravity-driven cold hillside winds that gather at the bottom of the valley and move towards the valley exit as mountain winds.

1.4 Microscale Phenomena—Turbulence

Turbulence is associated with the smaller scales (Fig. 1). While the expected wind energy yields can be determined from the larger scale diurnal and seasonal variations together with the mean wind speed of a site, the knowledge of the turbulence is primarily necessary for the load calculations of the wind turbine (Sect. 9.1).

The magnitude of turbulent fluctuations is typically expressed as the standard deviation σv of the velocity fluctuations. This is measured over a 10 min period, normalized to the mean wind speed Vave, and referred to as turbulence intensity Iv.

$$I_{v} = \frac{\sigma_{v}}{V_{\mathrm{ave}}}$$
(1)
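As a minimal numerical illustration of Eq. 1, the following Python sketch computes the turbulence intensity from a synthetic 10 min record of wind speed samples; the 1 Hz sampling rate, the mean of 8 m/s and the fluctuation level are assumptions chosen only for the example.

```python
import numpy as np

# Synthetic 10 min record: 600 samples at an assumed 1 Hz sampling rate,
# mean wind speed 8 m/s with random fluctuations of about 1 m/s.
rng = np.random.default_rng(0)
v = 8.0 + rng.normal(0.0, 1.0, size=600)

v_ave = v.mean()          # mean wind speed V_ave over the 10 min interval
sigma_v = v.std(ddof=0)   # standard deviation of the fluctuations
i_v = sigma_v / v_ave     # turbulence intensity, Eq. 1

print(f"V_ave = {v_ave:.2f} m/s, sigma_v = {sigma_v:.2f} m/s, I_v = {i_v:.2f}")
```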

The turbulence intensity varies greatly. The variations are caused by natural fluctuations of the wind. However, to a certain extent, turbulence is also dependent on the averaging interval and the response behaviour of the measuring instrument used.

Two natural sources of turbulence can be identified: thermal and mechanical. Mechanical turbulence is caused by wind shear and depends on the surface roughness z0. Often, the mechanically generated turbulence intensity Iv at a height z under neutral conditions, in flat terrain and with infinite, uniform roughness z0, is calculated by combining Eqs. 1 and 4, which in simplified form gives

$$I_{v} = \frac{1}{\ln\left(\frac{z}{z_{0}}\right)}$$
(2)

From the equation, we see a decrease in turbulence intensity with increasing height above the ground. However, this expression is only valid as long as the profile is logarithmic and σv = 2.5 u* holds. The friction velocity u*, as well as the relation between σv and u*, will vary with height, such that Eq. 2 should be adjusted by an approximate factor of 0.8 for the hub heights of 120–170 m that are now more commonly used.
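The following sketch, kept deliberately simple, evaluates Eq. 2 for a few heights and a single roughness length and applies the approximate correction factor of 0.8 mentioned above for the larger hub heights; the roughness length of 0.05 m is an assumed example value.

```python
import math

def turbulence_intensity(z, z0, correction=1.0):
    """Mechanical turbulence intensity after Eq. 2, I_v = 1 / ln(z / z0),
    optionally scaled by an empirical correction factor."""
    return correction / math.log(z / z0)

z0 = 0.05  # assumed roughness length in m (open farmland)
for z in (50, 100, 150):
    i_v = turbulence_intensity(z, z0)
    i_v_corr = turbulence_intensity(z, z0, correction=0.8 if z >= 120 else 1.0)
    print(f"z = {z:3d} m: I_v = {i_v:.3f}, corrected = {i_v_corr:.3f}")
```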

Thermal turbulence is caused by convection and depends mainly on the temperature difference between the ground and the air. In unstable conditions, i.e. when the ground is strongly heated, the turbulence intensity can reach very large values. Under stable conditions with very little vertical momentum exchange, the turbulence is typically very low.

The turbulence intensity varies not only with the height above ground (Eq. 2) but also with the wind speed. While the influence of mechanically generated turbulence dominates at higher wind speeds, thermally induced turbulence is dominant at low to medium wind speeds (Fig. 4, left). Onshore, the turbulence intensity is the highest at low wind speeds and shows an asymptotic behaviour towards a constant value at higher wind speeds (Fig. 5).

Fig. 4 Offshore standard deviation of wind speed (left) and turbulence intensity (right) at measurement heights from 30 to 100 m [1]

Fig. 5 Turbulence intensity Iv versus wind speed for exemplary onshore conditions

In offshore conditions, part of the wind energy is used for the development of waves. This process leads to increased turbulence (Fig. 4, right). The increase in turbulence depends not only on the wind speed but also on the fetch. Consequently, the sea surface roughness is time- and direction-dependent. In addition, wind direction changes withdraw energy from the wind field in order to break up the existing wave field and generate a new one.

A similar phenomenon can be observed in or near forests, since here too the mechanically generated turbulence intensity increases at high wind speeds due to the increasing movement of the leaf canopy. Since the expected extreme gusts at a site depend on the turbulence at high wind speeds, the increasing turbulence offshore as well as in the forest is of great importance (Sect. 9.4).

Typical values of Iv in neutral conditions for different land cover at a height of 120 m a.g.l. are listed in Table 1.

Table 1 Typical turbulence intensities at 120 m a.g.l. for different land covers

Forests generally cause high ambient turbulence and therefore require special attention. The turbulence near the forest can be calculated with detailed models such as computational fluid dynamics (CFD) models. Various concepts are available for complex CFD codes. A common method for forest modelling is simulation via an aerodynamic drag term in the momentum equations, parameterized as a function of tree height and leaf density.
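One common form of such a drag term, sketched below under simplifying assumptions, is a momentum sink F = -c_d a(z) |U| U per unit mass, where c_d is a canopy drag coefficient and a(z) the leaf area density; the tree height, drag coefficient and density profile are illustrative values only, and the exact formulation differs between CFD codes.

```python
import numpy as np

def canopy_drag(u, v, w, cd, lad):
    """Momentum sink per unit mass, F_i = -cd * a(z) * |U| * u_i, a common
    form of the aerodynamic drag term used to represent forest canopies.
    cd is a dimensionless drag coefficient, lad the leaf area density in 1/m."""
    speed = np.sqrt(u**2 + v**2 + w**2)
    return -cd * lad * speed * u, -cd * lad * speed * v, -cd * lad * speed * w

# Assumed example: 20 m trees, cd = 0.2 and a simple triangular leaf area
# density profile peaking at two thirds of the tree height.
h_tree = 20.0
z = np.linspace(0.0, h_tree, 21)
lad = 0.3 * np.where(z <= 2 * h_tree / 3, z / (2 * h_tree / 3), (h_tree - z) / (h_tree / 3))

fx, fy, fz = canopy_drag(u=5.0, v=0.0, w=0.0, cd=0.2, lad=lad)
print(fx[:5])  # drag per unit mass (m/s^2) at the lowest grid levels
```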

Measured data show that the turbulence generated by the forest is significant up to about five times the forest height in the vertical direction and up to 500 m from the forest edge in the horizontal direction. Outside these limits, the ambient turbulence intensity quickly approaches normal values, which no longer show the influence of the forest [2].

2 The Atmospheric Boundary Layer (ABL)

The lowest part of the Earth's atmosphere is called the atmospheric boundary layer (ABL). It is also known as the friction layer or planetary boundary layer (PBL). This part of the atmosphere is essentially responsible for weather patterns, as most of the turbulent vertical exchange of heat (energy) and water vapour takes place within it.

In contrast to the overlying free atmosphere, the atmospheric boundary layer is characterized by the influence of surface friction. The roughness of the ground draws energy from the airflow.

This creates a vertical wind speed gradient, also called wind shear. The shear causes turbulence, which transports the influence of surface friction to higher air layers through momentum and mass exchange. As a result, the surface friction influences not only the airflow but also the vertical distribution of temperature and pressure.

The height of the atmospheric boundary layer is linked to the spatial pattern of different roughnesses of the Earth's surface. It is smallest over the oceans and highest over mountains. In addition, the height varies with stability, ranging from a few metres in stable stratification to several kilometres in unstable stratification. Typically, it varies between 100 and 2000 m, and the average depth of the atmospheric boundary layer is about 1000 m.

Above the atmospheric boundary layer is the ‘free atmosphere’, which is non-turbulent or only intermittently turbulent. In the free atmosphere, the wind is approximately geostrophic. The geostrophic wind is an idealized flow in which the wind blows parallel to the isobars on a pressure chart. This flow occurs when the pressure gradient force and the Coriolis force are in equilibrium. The pressure gradient force results from the pressure differences in the atmosphere and is always directed towards the low air pressure. In addition to the pressure gradient force, the Coriolis force also acts on the moving air mass: If air masses move from south towards north in the northern hemisphere, they are deflected to the right; if they move towards the south, they are deflected to the left. Thus, an equilibrium between the two forces can only be established if the geostrophic wind blows parallel to the isobars (Fig. 6). Although the geostrophic wind is an idealization, its concept plays an important role in numerical weather models.
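As a rough numerical illustration of this balance, the geostrophic wind speed can be estimated as V_g = (1/(ρ f)) · Δp/Δn with the Coriolis parameter f = 2Ω sin φ; the pressure gradient and latitude in the sketch below are assumed example values.

```python
import math

def geostrophic_wind(dp_dn, latitude_deg, rho=1.225):
    """Geostrophic wind speed V_g = (1 / (rho * f)) * dp/dn, where
    f = 2 * Omega * sin(latitude) is the Coriolis parameter."""
    omega = 7.2921e-5  # Earth's rotation rate in rad/s
    f = 2.0 * omega * math.sin(math.radians(latitude_deg))
    return dp_dn / (rho * f)

# Assumed example: a pressure difference of 1 hPa per 100 km at 54° N.
dp_dn = 100.0 / 100_000.0  # Pa/m
print(f"V_g = {geostrophic_wind(dp_dn, 54.0):.1f} m/s")
```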

Fig. 6 Development of geostrophic wind in the northern hemisphere

In general, when moving closer to the ground, the atmospheric boundary layer is characterized by the increasing influence of surface friction, which leads to turbulence and vertical mass and momentum exchange. The wind speed usually increases with altitude.

Within the atmospheric boundary layer, a further differentiation is made into the laminar sublayer, the Prandtl layer and the Ekman layer (Fig. 7). This differentiation is justified by the fact that the description of the physical processes can be simplified in the different layers. With increasing proximity to the ground, surface friction gains more and more influence, and the dominant physical processes that determine the flow change accordingly. Ultimately, therefore, the differentiation into different layers aids the solution of the Navier–Stokes equations.

Fig. 7 Atmospheric boundary layer

In the upper part of the atmospheric boundary layer, the Ekman layer, the friction and the shear stress due to the Earth's surface become increasingly noticeable. In order to solve the Navier–Stokes equations, it is assumed that the change in turbulent shear stress with height is non-zero. The main characteristic of the Ekman layer is the rotation of the wind direction with height (termed the Ekman spiral).

While the geostrophic wind blows parallel to the isobars, the angle to the isobars increases with increasing proximity to the ground until—at the bottom of the Ekman layer—it blows into the low pressure at an average angle of 30° to the isobars. At the bottom of the Ekman layer, the wind speed is reduced by about 30–40% compared to the geostrophic frictionless wind.

Below the Ekman layer, there is the Prandtl layer. The Navier–Stokes equations in the Prandtl layer are solved by assuming that the turbulent shear stress does not change with height. Further, it is assumed that the wind direction is also constant with height. The height of the Prandtl layer is often expressed as a fixed percentage (about 10%) of the height of the atmospheric boundary layer. Its height would therefore depend on the vertical temperature profile, just like that of the atmospheric boundary layer. It is the lowest in cold, clear winter nights just before sunrise and the highest on sunny summer days in the early afternoon. On average, its height is 100 m. It should be remembered, however, that the Prandtl layer is based more on mathematical necessity than physics. Since early wind turbines did not exceed the height of the Prandtl layer, many flow models used in the wind energy sector are based on the physical assumptions made for the Prandtl layer.

The laminar sublayer (also called viscous sublayer) is only a few millimetres high and therefore irrelevant in the context of wind energy.

2.1 The Vertical Wind Profile

The vertical wind profile describes the increase of the wind speed with increasing height above ground. The assumptions used to describe the flow in the Prandtl layer result in a logarithmic wind profile that can be described as follows:

$$V(z) = \frac{u_{*}}{\kappa}\left(\ln\left(\frac{z}{z_{0}}\right) - \Psi\left(\frac{z}{L}\right)\right)$$
(3)

The logarithmic profile is valid for flat terrain and uniform, infinite roughness and depends on the following quantities: the friction velocity u*, the height above ground z, the roughness length z0 and the Von Kármán constant κ, for which a value of about 0.4 is usually assumed.

The empirical stability function Ψ corrects for the influence of temperature stratification. Ψ is positive for unstable stratification, negative for stable stratification and zero for neutral stratification (Sect. 2.3). The parameter L is the so-called Monin–Obukhov length and describes the vertical mass transfer as the ratio of frictional forces and buoyancy forces. Its dimension is a length. Under neutral conditions, when Ψ is zero, Eq. 3 can be simplified to the following:

$$V(z) = \frac{u_{*}}{\kappa} \cdot \ln\left(\frac{z}{z_{0}}\right)$$
(4)

The wind speeds at two heights can be related to each other by applying this equation twice:

$$V_{2}(z_{2}) = V_{1}(z_{1}) \cdot \frac{\ln\left(\frac{z_{2}}{z_{0}}\right)}{\ln\left(\frac{z_{1}}{z_{0}}\right)}$$
(5)

If the roughness length z0 is known, the velocity V2 at height z2 (e.g. hub height) can be derived from the wind velocity V1 measured at height z1 (e.g. measurement height). However, this equation is only valid for neutral stratification, flat terrain and infinite, uniform roughness.
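A minimal sketch of this extrapolation (Eq. 5): the measured wind speed, the two heights and the roughness length below are assumed example values, and the result is only meaningful under the stated restrictions (neutral stratification, flat terrain, uniform roughness).

```python
import math

def extrapolate_log_profile(v1, z1, z2, z0):
    """Extrapolate the wind speed from height z1 to z2 with the neutral
    logarithmic profile (Eq. 5)."""
    return v1 * math.log(z2 / z0) / math.log(z1 / z0)

# Assumed example: 6.5 m/s measured at 80 m, extrapolated to a 140 m hub
# height over a roughness length of 0.05 m.
v_hub = extrapolate_log_profile(v1=6.5, z1=80.0, z2=140.0, z0=0.05)
print(f"Estimated hub-height wind speed: {v_hub:.2f} m/s")
```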

Alternatively, the increase in wind speed with height is formulated using the exponential profile. According to this, two wind speeds at two heights have the following relationship:

$$\frac{V_{1}(z_{1})}{V_{2}(z_{2})} = \left(\frac{z_{1}}{z_{2}}\right)^{\alpha}$$
(6)

V1(z1) and V2(z2) are the wind speeds at heights z1 and z2, respectively.

However, this equation has no physical basis and is purely empirical.

The shear exponent α, which is also called the Hellmann exponent, depends on the heights z1 and z2, the roughness, the atmospheric stratification and the terrain structure. Therefore, a measured shear exponent is only valid for the respective location, the respective measurement heights z1 and z2 and the prevailing stability. Applying a measured shear exponent to other atmospheric conditions or transferring it to a different height ratio is not permissible. Equation 6 is therefore only of limited use.
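For illustration, the shear exponent can be obtained by inverting Eq. 6, α = ln(V1/V2) / ln(z1/z2), from two simultaneous measurements; the values in the sketch below are assumed, and the resulting α is valid only for those heights and the prevailing conditions.

```python
import math

def shear_exponent(v1, z1, v2, z2):
    """Shear (Hellmann) exponent obtained by inverting Eq. 6."""
    return math.log(v1 / v2) / math.log(z1 / z2)

# Assumed example: 7.2 m/s at 100 m and 6.5 m/s at 60 m.
alpha = shear_exponent(v1=7.2, z1=100.0, v2=6.5, z2=60.0)
print(f"alpha = {alpha:.2f}")
```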

It should be noted that IEC 61400-1 [3] defines a shear exponent of 0.2 for the determination of the design loads, while the 2017 EEG [4] defines a different value of 0.25. In reality, shear exponents vary over a wide range. As an example, Fig. 8 shows the frequency distributions of shear exponents measured at five different sites.

Fig. 8 Frequency distribution of shear exponent α [5]

2.2 Influence of Roughness on the Wind Profile

Roughness is one of the key factors determining the wind profile. The roughness length z0 describes the height above the ground at which the wind speed is zero. The unit of roughness length is metres. Roughness is often represented in the form of roughness classes. Five classes are distinguished, from 0 to 4, whereby fractional classes are also possible, see Table 2.

Table 2 Typical roughness length and corresponding classes based on [6]

The influence of roughness on the wind profile is shown in Fig. 9.

Fig. 9 Vertical wind speed profile for different roughness lengths assuming a geostrophic wind speed of 9 m/s

The vertical wind speed gradient generates mechanical turbulence. Consequently, many factors relevant to wind turbines are indirectly linked to surface roughness. This is, for example, the case for fatigue loads as well as the decay of the wake within a wind farm.

2.3 Influence of Atmospheric Stability on the Wind Profile

As can be seen from Eq. 3, the vertical wind profile depends not only on the roughness but also on the vertical temperature profile. The vertical temperature profile, which is also referred to as atmospheric stratification or stability, can be divided into three categories as follows (Fig. 10).

Fig. 10 Vertical temperature profile for different atmospheric stabilities

In neutral conditions, the temperature profile is adiabatic, i.e. there is an equilibrium between heating/cooling on the one hand and expansion/compression on the other. As a result, no thermal energy is transported vertically. Therefore, the logarithmic profile is solely influenced by the roughness (see Eq. 4). In neutral stratification, the temperature decreases by approximately 1 °C per 100 m. Neutral conditions typically occur at high wind speeds.

In the case of unstable stratification, the temperature decreases more strongly with altitude than in neutral conditions. This situation occurs when the ground heats up considerably and is thus typical for summer months with strong solar radiation. The heating of the ground causes the air near the ground to rise, because its density is lower than that of the air in the layers above (convection). Thus, a vertical mass exchange takes place, which leads to increased thermally induced turbulence. The increase in wind speed with altitude is generally small.

In the case of stable stratification, the air near the ground is cooler than in the layers above.

Such a situation is typical for late-night hours in inland areas, especially in winter. Due to the higher air density of the layers near the ground compared to the air above, the vertical mass transfer and thus turbulence are suppressed. Due to the decoupling of the different air layers, the wind speed can change significantly with altitude. In some cases, significant changes in wind direction can also occur with altitude.

Atmospheric stability can be determined in a number of ways. However, the accurate determination requires instrumentation that is often not used in the context of wind resource assessment. One of the most common scientific methods for determining the Monin–Obukhov length (see Eq. 3) is the measurement of all three wind components and the temperature fluctuations using an ultrasonic anemometer (see Sect. 6.2.2). Furthermore, the stability can be determined by measuring the temperature difference between two different heights. In this case, the stability is parameterized by the gradient Richardson number. Under offshore conditions, the stability can be determined by the temperature difference between water temperature and air temperature (Bulk Richardson number).
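As a simplified illustration of the gradient method, the following sketch approximates the gradient Richardson number from temperature and wind speed differences between two heights, converting temperatures to potential temperatures with the dry adiabatic lapse rate; the measurement heights and values are assumed, and operational implementations differ in detail.

```python
G = 9.81          # gravitational acceleration in m/s^2
GAMMA_D = 0.0098  # dry adiabatic lapse rate in K/m

def gradient_richardson(t1, z1, u1, t2, z2, u2):
    """Approximate gradient Richardson number from two measurement heights.
    Temperatures in K are converted to potential temperatures; Ri > 0
    indicates stable, Ri < 0 unstable stratification."""
    theta1 = t1 + GAMMA_D * z1
    theta2 = t2 + GAMMA_D * z2
    theta_mean = 0.5 * (theta1 + theta2)
    dtheta_dz = (theta2 - theta1) / (z2 - z1)
    du_dz = (u2 - u1) / (z2 - z1)
    return (G / theta_mean) * dtheta_dz / du_dz**2

# Assumed night-time example: cooler air near the ground and strong shear.
print(gradient_richardson(t1=278.0, z1=2.0, u1=3.0, t2=279.0, z2=100.0, u2=8.0))
```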

As a simplification, turbulence and shear can be considered as proxies for atmospheric stability: In the case of stable stratification, turbulence is suppressed, resulting in high shear. In the case of unstable stratification, thermal turbulence is induced and the shear is low (Fig. 11).

Fig. 11 Turbulence intensity versus shear exponent α (left) and vertical wind speed profile (right) for different atmospheric stabilities [6]

2.4 Influence of the Orography on the Wind Profile

The term orography describes the differences in elevation in the terrain. In flat terrain, the influence of roughness on the vertical wind profile dominates. With increasing terrain complexity, the influence of the shape of the terrain, i.e. the orography, increases. The flow is generally accelerated over terrain elevations (speed-up), which leads to a deformation of the logarithmic wind profile. Depending on the height above ground, the profile becomes steeper in one area and flatter in another. The extent of deformation depends on the terrain slope, roughness and atmospheric stability. In very steep terrain, the flow can even separate and no longer follows the shape of the terrain. A turbulent separation bubble can form (Fig. 12, bottom). In the area of the separation bubble, the wind speed can decrease with increasing height, leading to a negative shear exponent. As a rule of thumb, this occurs at terrain slopes greater than 30% or 17°. The location and extent of such a separation zone depend on the terrain slope, terrain curvature, roughness and atmospheric stability. In complex terrain, linearized flow models such as WAsP, which are often used to calculate wind yield (Sect. 4.3.1), cannot model separated flows and are therefore only suitable to a limited degree for the calculation of the vertical wind profile. Higher-fidelity models such as CFD models are often better suited in complex terrain (Sect. 4.3.2).

Fig. 12 Impact of orography on vertical wind speed profile for a gentle hill (top) and a steep hill (bottom)

2.5 Influence of Obstacles on the Wind Profile

Local obstacles such as buildings influence the wind profile. These can partly shadow the lower parts of the wind profile and thus lead to a deformation of the wind profile. The extent of shadowing depends on the dimensions, position and porosity of the obstacle. Figure 13 shows the percentage reduction of the wind speed caused by a two-dimensional structure. The shaded zone describes the area directly at the obstacle, where the slowdown is strongly dependent on the detailed geometry of the obstacle. The flow in this area can only be described by more detailed numerical models such as CFD.

Fig. 13 Wind speed reduction in % behind an obstacle with the height ‘1’ and depth ‘0’, based on [7]

Figure 13 shows that an attenuation can only be detected up to three times the height of the obstacle. The influence of obstacles of limited height, such as houses, on wind turbines with modern hub heights will therefore be negligible.

3 Statistical Representation of the Wind

3.1 The Power Spectrum

Observations of wind speed and direction show large variations over a variety of timescales. The longer the observed period, the larger the measured variance will be. This distribution of the variance over different temporal scales is illustrated by the power spectrum in Fig. 14. The representation as a power spectrum is particularly suitable for certain design calculations of the wind turbines, including fatigue loads.

Fig. 14 Power spectrum of wind speeds measured over a flat homogeneous terrain in Denmark; the spectrum is shown in a log-linear, area-true representation, based on [7]

The power spectrum is calculated by using a Fast Fourier Transform of a sufficiently long measured time series of wind speeds. The power spectrum in Fig. 14 was determined from a 1-year time series with a measurement frequency of 8 Hz. For illustration, the frequencies (x-axis) are labelled as time units. Due to the log-linear representation, the area under the curve reflects the energy content. This makes it possible to assign the energy components to the time units such as hours, days or seasons.

The power spectrum can be roughly divided into two sections. The energetically largest part lies in the range of temporal variations from hours to months and is thus relevant for energy yield calculations. The peak of the spectrum corresponds to temporal variations of one day and is caused by the diurnal wind speed variation at the specific location. The second largest maximum corresponds to temporal variations of a few days and is caused by synoptic variations such as the passage of low-pressure systems.

Variations with a time scale of less than one hour are associated with atmospheric turbulence (Sect. 1.4). This region of the spectrum is particularly relevant for determining the dynamic loads on the wind turbine. Numerous mathematical models have been developed to describe this part of the spectrum. Two spectral representations are often used: The Kaimal spectrum is an empirically developed model that agrees well with the observed spectra. The Von Kármán spectrum is less realistic for atmospheric turbulence but better describes the turbulence in tubes and wind tunnels.

These two turbulence spectra describe the temporal fluctuations of the turbulent component at a specific point of the area swept by the wind turbine rotor. However, since the rotor blades of the wind turbine sweep a turbulent field, the observation of the spectra at one point is not sufficient. The spatial change in lateral and vertical directions is also important because this spatial change is ‘collected’ by the rotating blade (rotational sampling). To reflect these effects, the spectral description of the turbulence must be extended through cross-correlation of the turbulent fluctuations of two points separated in lateral and vertical directions. The mathematical solution of the cross-correlation is considerably easier with the von Kármán spectrum, which is why it is used more frequently than the Kaimal spectrum for the creation of 3D turbulence fields.
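For illustration, the Kaimal model in the normalized form used by IEC 61400-1, S(f) = σ² · 4 L/V / (1 + 6 f L/V)^(5/3), can be evaluated as in the sketch below; the standard deviation, integral length scale and mean wind speed are assumed example values.

```python
import numpy as np

def kaimal_spectrum(f, sigma, length_scale, v_hub):
    """Kaimal turbulence spectrum in the form used by IEC 61400-1:
    S(f) = sigma^2 * 4*L/V / (1 + 6*f*L/V)**(5/3)."""
    fl = f * length_scale / v_hub
    return sigma**2 * 4.0 * (length_scale / v_hub) / (1.0 + 6.0 * fl) ** (5.0 / 3.0)

# Assumed example values: sigma = 1.5 m/s, L = 340.2 m, V = 10 m/s.
f = np.logspace(-3, 1, 5)  # frequencies in Hz
print(kaimal_spectrum(f, sigma=1.5, length_scale=340.2, v_hub=10.0))
```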

3.2 Frequency Distribution of the Wind Speed

While the power spectrum describes how much energy is associated with a certain temporal variation, the frequency distribution describes how often certain wind speeds occur. Generally, wind speeds are recorded as 10 min mean values. The graphical representation of the frequency of occurrence results in a histogram, typically in wind speed intervals (bins) of 0.5 or 1.0 m/s.

Histograms are often transformed into Weibull distributions as these often provide a good approximation to the actual observed frequency distribution of wind speeds (Fig. 15).

Fig. 15 Frequency distribution of wind speed shown as histogram and Weibull distribution

The mathematical expression of the Weibull distribution of the wind speed v is determined by two quantities: the shape parameter k, which is connected to the shape of the distribution, and the scale parameter A, which with increasing magnitude shifts the curve to higher wind speeds and at the same time widens the distribution.

$$f(v) = \frac{k}{A}\left(\frac{v}{A}\right)^{k - 1} \exp\left(-\left(\frac{v}{A}\right)^{k}\right)$$
(7)

Both parameters are dependent on the local conditions and vary from place to place. Figure 16 shows a group of Weibull distributions for varying k parameters but constant mean wind speed v. Clearly, there is an increased frequency of high wind speeds in a distribution with a smaller k parameter. This is an important indication for the estimation of extreme wind speeds (see Sect. 9.4).

Fig. 16 Weibull distributions for constant mean wind speed but varying k parameter

The Weibull distribution is a probability density distribution. The median of the distribution corresponds to the wind speed bisecting the area under the curve. This means that half of the time the wind is blowing weaker than the median wind speed, and half of the time it is blowing stronger than the median wind speed. The mean wind speed is the average of the distribution. In a rough approximation, the relationship between the Weibull parameters A and k and the mean wind speed can be described as follows:

$$\overline{V} \approx A\left(0.568 + \frac{0.434}{k}\right)^{\frac{1}{k}}$$
(8)
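To illustrate Eqs. 7 and 8, the sketch below evaluates the Weibull density for assumed parameters and compares the approximation of Eq. 8 with the exact mean A·Γ(1 + 1/k).

```python
import math

def weibull_pdf(v, a, k):
    """Weibull probability density of the wind speed v (Eq. 7)."""
    return (k / a) * (v / a) ** (k - 1) * math.exp(-((v / a) ** k))

a, k = 8.0, 2.0  # assumed scale and shape parameters
v_mean_exact = a * math.gamma(1.0 + 1.0 / k)
v_mean_approx = a * (0.568 + 0.434 / k) ** (1.0 / k)  # Eq. 8

print(f"f(8 m/s) = {weibull_pdf(8.0, a, k):.3f} (m/s)^-1")
print(f"mean wind speed, exact:  {v_mean_exact:.2f} m/s")
print(f"mean wind speed, approx: {v_mean_approx:.2f} m/s")
```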

Many different methods can be used for fitting the two Weibull parameters to a measured histogram. It is important that the fitting method focuses on the energetic part of the distribution which corresponds to the wind speed range most relevant for wind energy generation. Therefore, the fitting method of moments is most commonly used as it focuses on medium to high wind speeds but not on extreme wind speeds. Furthermore, the total energy content of the Weibull distribution should be identical to that of the observed distribution.

The possibility of describing the entire frequency distribution of the wind by two parameters saves computer resources for wind energy estimates. Today, the concept of the Weibull distribution is increasingly contested, firstly because, especially in the more exotic parts of the world, the wind speed distribution often does not sufficiently follow the shape of a Weibull distribution, and secondly, because the timing of the generated electricity is increasingly important when it is traded directly on the electricity market. Thus, the economic success of a project is determined by the temporal interaction of generation and electricity price. Therefore, energy yield calculations are increasingly carried out in the time domain, i.e. a time series of production is calculated.

3.3 Wind Direction Distribution

The energy yield of a wind farm depends both on the spatial distribution of the energy across the site and on a layout with the lowest possible wake losses between wind turbines (Sect. 10.2). Therefore, in addition to wind speed, the wind direction distribution must be known. A differentiation must be made between mean wind speed, frequency and energy rose (Fig. 17). These three representations can differ significantly since the energy is proportional to the cube of the wind speed.

Fig. 17 Mean wind speed (left), frequency (centre) and energy rose (right)

The method described in the European Wind Atlas [7] uses a division into twelve 30° sectors to simplify the wind direction information, which is causally linked to the simplified representation of the wind speed as a Weibull function: To determine the Weibull distribution for a sector, the underlying histogram must be sufficiently filled. The smaller the sectors, the lower the probability of being able to create statistically significant histograms per sector.

It should be noted that the measurement of wind direction often has an uncertainty of up to 5°.

4 Flow Models

As explained in Sect. 1, meteorological phenomena are divided into different spatial and temporal scales to be able to simplify the equations of motion (Navier–Stokes equations) for the respective scale. This results in different types of flow models for different scales.

4.1 Reanalysis Models

The so-called reanalysis models, which describe phenomena on the order of synoptic and global scales, assimilate millions of weather observations from synoptic weather stations, radiosondes, weather balloons, aircraft, ships, buoys and satellites into numerical weather models and compute a globally consistent, three-dimensional description of the atmospheric state decades into the past. In this sense, this process combines the advantages of a model with the advantages of observations. Among the most common reanalysis data are ERA5 (produced by ECMWF, the European Centre for Medium-Range Weather Forecasts) and MERRA2 (produced by the NASA Global Modeling and Assimilation Office). Typically, the horizontal resolution of modern reanalysis data is 30–50 km and the temporal resolution is one hour.

On the one hand, these data sets are used for applications that rely on long-term meteorological data, for example, long-term corrections, and on the other hand, for the initialization of higher resolution models like mesoscale models.

For both applications, care has to be taken as both the composition of the assimilated data as well as the assimilation model as such may change over the years, which can lead to temporal inconsistencies.

4.2 Mesoscale Models

Reanalysis data are often used as boundary conditions for the next, higher-resolution model level, the so-called mesoscale models. These models add physical descriptions in the mesoscale range to the spatially and temporally coarsely resolved reanalysis data. They re-introduce atmospheric processes with spatial scales from about 2 km to several 100 km. Small-scale physical processes that are not resolved by the spatial and temporal discretization (mesh size and time step), such as boundary layer turbulence or convective exchange processes, are parameterized by ad hoc approaches, i.e. described approximately by resolved parameters. While the temporal resolution generally remains in the hourly range, the horizontal resolution of mesoscale models is typically 1–3 km.

A commonly used mesoscale model is WRF (Weather Research and Forecasting Model), a modular numerical atmospheric model from the National Center for Atmospheric Research (NCAR).

The application of mesoscale models generally increases the correlation with local data compared to reanalysis data. Typically, the correlation coefficient R2 is around 70% for data with a temporal resolution of one hour. For coarser temporal resolution, the correlation improves.

4.3 Microscale Models

4.3.1 Linearized Models—WAsP

Mesoscale model calculations are flanked by microscale flow models. These models often combine the determination of the spatial distribution of energy at the site with the modelling of the wake interaction between wind turbines. Once created for the specific site, these models allow rapid determination of energy production for different layouts, turbine types and hub heights.

The linearized flow model WAsP (Wind Atlas Analysis and Application Program), developed in the 1980s, has contributed significantly to the growth of the wind industry through its ease of use. WAsP belongs to the family of Jackson–Hunt models [9], like the MS3DJH model [8]; both, in contrast to mass-consistent models, solve the linearized Navier–Stokes equations.

WAsP makes it possible to determine the wind climate at any point of the site based on measured wind data. WAsP accomplishes this task by a double vertical and horizontal extrapolation. The idea behind this is simple.

In the first step, the measured wind data are cleaned of local influences from obstacles, roughness and orography in order to calculate the geostrophic wind climate. Assuming that the geostrophic wind is valid for a larger area, the influences of local obstacles, roughness and orography on a nearby wind turbine site can be re-introduced in the next step by adding their local description. This principle is illustrated in Fig. 18.

Fig. 18 Principle of European Wind Atlas [7]

Since this procedure assumes the comparability of the measurement location with the future position of the wind turbine (the so-called similarity principle), the validity of the above assumption depends less on the distance and more on the similarity of the two positions. Therefore, it is extremely important to carry out wind measurements at a location that is representative of the future wind farm.

As already explained in Sect. 2.3, the vertical wind speed profile depends not only on orography, roughness and obstacles but also on atmospheric stability. During night-time, the cooling of the ground suppresses the development of a vertical exchange of momentum. Consequently, turbulence is suppressed, and high wind speed shear can be observed, sometimes accompanied by wind direction veer. Conversely, the heating of the ground during daytime causes increased turbulence through the vertical exchange of momentum. As a result, the wind speed increases less with increasing height above ground. To account for the effects of varying stability without having to model each wind profile, a simplified procedure was implemented in WAsP that requires only the climatological mean and the root mean square of the annual and daily variations in heat flux. This simplification allows a correction of the logarithmic profile for average stability effects.

Several studies have shown that using WAsP can lead to prediction errors in complex terrain [10]. This is caused by the linearization of the Navier–Stokes equations, which limits WAsP's functionality for complex sites where flow separation might occur. Figure 19 shows the separation of flow at a terrain slope of about 30%. Rather than following the shape of the hill, the flow follows the shape of a virtual, flatter hill. However, WAsP uses the true terrain slope to calculate acceleration and thus overestimates wind speed. A partial correction of this model error is possible with the help of the so-called Ruggedness Index (RIX). RIX is defined as the fraction of the area within a radius of 3 km around the site of interest that exceeds a terrain slope of 30% and therefore violates the WAsP model assumption of linear flow.

Fig. 19 Flow separation at a steep hill [11]

Figure 20 shows the relationship between the expected wind speed error and the difference in complexity between the position of the measuring mast and the position of the future wind turbine. Again, the similarity principle applies: if the complexity of the turbine position is comparable to the complexity of the measurement position, the prediction error is close to zero. If the reference site (measurement position) is less complex than the future turbine position, the difference Δ-RIX becomes positive, and thus an overestimation of the wind speed is to be expected. If the reference site is more complex than the future turbine position, the difference Δ-RIX becomes negative, and thus an underestimation of the wind speed is to be expected. It must be emphasized that the quantitative relationship between Δ-RIX and prediction error depends on the location.
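As a hedged sketch of the RIX idea following the area-based definition given above, the snippet below computes the fraction of grid cells within 3 km of a site whose terrain slope exceeds 30%; the grid spacing and the randomly generated slope field are assumptions for illustration only, and operational RIX implementations differ in detail.

```python
import numpy as np

def rix(slope, dx, threshold=0.30, radius=3000.0):
    """Fraction of grid cells within `radius` of the domain centre whose
    terrain slope (rise over run) exceeds `threshold`; a simplified,
    area-based measure of terrain ruggedness. `dx` is the grid spacing in m."""
    ny, nx = slope.shape
    y, x = np.mgrid[0:ny, 0:nx]
    dist = np.hypot((x - nx // 2) * dx, (y - ny // 2) * dx)
    return float(np.mean(slope[dist <= radius] > threshold))

# Assumed example: 100 x 100 slope grid at 100 m spacing with random slopes.
rng = np.random.default_rng(1)
slope = np.abs(rng.normal(0.15, 0.12, size=(100, 100)))
print(f"RIX = {rix(slope, dx=100.0):.1%}")
```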

Fig. 20 Prediction error of wind speed as a function of the difference Δ-RIX between WTG position and reference [12]

Another challenge in WAsP is the modelling of forests. In order to model the wind speed correctly, a so-called displacement height combined with a high roughness must be used [13]. The flow across a forest is lifted, as the forest acts partly as an artificial hill. Consequently, the flow is accelerated above the forest. The displacement height is an artificial elevation of the terrain for the forested area, with the purpose of correcting the wind speed over the forest. It depends on the density of the forest and the canopy and is often assumed to be between 0.9 and 1.1 times the tree height. At the forest edge, the displacement height should taper off. Depending on the wind direction, it has been suggested to extend the displacement height up to a distance of 50 times the forest height [14] (Fig. 21). The roughness length to be applied for forest areas should be of the order of 0.4 m to more than 1 m. The increased roughness leads to a wind profile with higher shear [13].

Fig. 21 Displacement height in WAsP caused by forest depending on wind direction

In summary, WAsP still plays an important role in determining wind resources, mainly because the calculations can be performed both quickly and easily. Its limitations resulting from the linearization of the flow model are known and well understood. Their consequences can be estimated and partially corrected.

4.3.2 Non-linearized Models

In recent years, the use of more complex models such as CFD (Computational Fluid Dynamics) has become more popular. CFD models are mostly used to complement WAsP rather than to replace it. The solution of the Navier–Stokes equations in CFD models is generally more complex than in linearized models such as WAsP. It has been shown that the performance of CFD models depends not only on the choice of parameterization but also on the skill and experience of the user, which makes it difficult to ultimately judge model performance [15].

Apart from re-estimating the local speed-up effects at the sites, CFD tools are frequently used to identify critical areas where wind conditions are particularly difficult to predict. Often CFD models are the only possibility to estimate turbulence, shear and inflow angle at the wind turbine’s location which makes them a valuable tool for minimizing the technical risk of layouts at more complex sites.

Currently, the most widely used CFD tools are based on RANS solutions (Reynolds Averaged Navier Stokes). With ever-increasing computer power high-fidelity LES (Large Eddy Simulation) models are starting to be used for commercial applications.

5 The First Step: Site Identification

The wind resource is one of the most critical aspects in planning a wind farm. Different approaches are possible to obtain information about the wind climate. Wind atlases show colour-coded wind speed or energy for a given height above ground and are often used for the initial screening of suitable sites. The level of detail of the underlying flow model as well as its spatial resolution determines the quality of the atlas. One example is the Global Wind Atlas [16], which in its third version has a spatial resolution of 250 m. It is based on the mesoscale model WRF (Sect. 4.2) and initialized with the reanalysis data ERA5 (Sect. 4.1). For selected countries where this atlas has been validated with measurements, the mean absolute bias is given as 14% with an uncertainty of 10% [16]. The quality of a wind atlas can be improved by calibration with either measured wind or production data.

Hardly any other step in the process of wind farm development is as significant for economic success as the correct assessment of the wind regime at the future turbine site. Due to the cubic relationship between wind speed and energy content in the wind, the prediction of energy output is extremely sensitive to the accuracy of the wind speed and requires every possible attention.

6 The Second Step: Measuring the Wind Climate

6.1 Introduction

The measured wind climate is the main input for the flow models, which are used to extrapolate the measured wind vertically and horizontally to establish the spatial distribution of available energy across the site at hub height. The resulting resource map is the basis for the development of layouts.

The importance of wind speed measurements places very high demands on the quality of the instruments since the available kinetic power is proportional to the cube of the wind speed. Being exposed to harsh conditions, the measurement instruments must be robust and able to provide reliable data even during prolonged, unattended operation. Ideally, their power consumption should be low so that they can operate independently of the grid.

Any wind resource assessment requires a minimum measurement period of a full year to avoid seasonal bias. A shorter measurement duration leads to higher uncertainty [17]. If the instrumentation fails due to lightning, icing, vandalism or other reasons, and the failure is not quickly detected and rectified, data are lost. Often it is possible to compensate for the lost data, but the reduced data integrity will always lead to an increased uncertainty as potentially a bias can be introduced. In the worst case, the measurement period will have to start all over again; otherwise, the increased uncertainty could jeopardize the feasibility of the entire project.

The requirements for wind measurement are defined by the standard for power curve measurement, IEC 61400-12-1 [18]. In the context of power curve measurements, the standard describes the use of mast-mounted anemometers as well as remote sensing systems. Whereas the 2017 edition of IEC 61400-12-1 does not allow stand-alone use of remote sensing but requires the combination of remote sensing systems with a co-located measurement mast, today's reality is an increasing number of measurement campaigns based on stand-alone remote sensing systems.

The IEC standard describes suitable geometries and mounting arrangements for anemometers on the measurement mast, which aim to minimize the influence of the mast's structure on the measurement. The measurement behaviour of the anemometers depends on several local climatic conditions such as turbulence, temperature and others. The impact of these factors is described by a so-called class number, which must be determined once for each anemometer model and describes its measurement uncertainty related to operational behaviour. The process is similar for remote sensing devices and involves determining the instrument's response to a range of climatic conditions compared to mast-mounted anemometers.

Great emphasis is placed on the traceability of the instruments. This includes the calibration prior to and sometimes after the measurement campaign. Both cup anemometers and ultrasonic anemometers have to be calibrated in a MEASNET-accredited wind tunnel [17,18,19]. Alternatively, the calibration can be performed in situ through comparison with a reference anemometer [20]. Equally, the functionality of remote sensing systems has to be verified, where the behaviour of the individual system is ensured through comparison with a measurement mast.

In addition to wind speed and direction at several heights, other signals can be captured. These may include barometric pressure, relative humidity and temperature. All these parameters have an influence on the energy yield of the wind turbine. If differential temperature measurements are additionally performed, conclusions can be drawn about the atmospheric stability. Furthermore, for complex terrain, [17] suggests the measurement of all three flow components (u, v and w) to determine the inflow angle.

The number and height of the measurements depend on the hub height of the planned wind turbines and the complexity of the site, since the uncertainty of the flow models increases with increasing complexity: The more complex the site, the more representative positions should be captured by the measurement campaign and the more important are hub height measurements to allow an accurate prediction of the wind resources.

Today (in 2022), there is no dedicated international standard for wind resource determination available; however, an international IEC standard for site assessment, IEC 61400-15, is under development. Guidance can be found in the national Technical Guideline for Determination of Wind Potential and Energy Yields [19] issued by the Fördergesellschaft für Wind (FGW) as well as in the international Guideline for the Assessment of Site-specific Wind Conditions [17] issued by MEASNET. Both go beyond IEC 61400-12-1 in terms of content and describe procedures and the necessary documentation for calculating wind resources.

6.2 Anemometer

6.2.1 Cup Anemometer

Most onsite wind measurements use traditional cup anemometers. The cup anemometer has three cups, each mounted at the end of a horizontal arm, which in turn are arranged in a star shape on a vertical shaft. A cup anemometer rotates in the wind because the drag coefficient of the open cup is greater than the drag coefficient of the smooth back of the cup. Any horizontal flow will cause the anemometer to rotate at a rate proportional to the wind speed. Therefore, the rotational speed over a fixed period can be used to determine the average wind speed. The exact transfer function between rotational speed and wind speed is best determined by calibration in a MEASNET-accredited wind tunnel, which follows the strict requirements defined in [20]. MEASNET members regularly participate in round-robin tests to ensure the comparability of results between different tunnels. The round-robin tests have significantly increased the quality of calibration. It should be noted that even calibrations to the highest standard have an uncertainty of about 1–2%.

In addition to the uncertainty related to calibration, further uncertainty arises due to the altered operational behaviour of the anemometer in the free atmosphere compared to the conditions in a wind tunnel, where the anemometer has been calibrated.

A distinction is made between static and dynamic effects. The description of the static behaviour includes the instrument’s response to inclined flow, which can be caused either by terrain effects, thermal effects or mounting errors. According to IEC 61,400-12-1 [18], the anemometer should only measure the horizontal component of the wind, but not the vertical component. Furthermore, the friction in the bearing of the anemometer should be as independent of the temperature as possible.

The most relevant dynamic characteristic is the so-called over-speeding. Due to the aerodynamic properties of the cups, the anemometer tends to accelerate faster than it decelerates, which leads to an overestimation of the wind speed, especially in the medium wind speed range. Another dynamic property is described by the distance constant, which refers to the responsiveness of a cup anemometer following a step change in wind speed. The distance constant is equivalent to the length of the air stream which passes the anemometer during the time it takes to respond to 63.2% of the step change. The distance constant should be as short as possible and is usually a few metres. Various methods for determining the distance constant are described in [21].

The individual response functions of the static and dynamic behaviour described above are combined in the form of a class number. From this, the uncertainty of the operating behaviour for a specific anemometer brand can be determined as a function of the wind speed [18]. If the individual response functions were made available by the anemometer manufacturers, the uncertainty of the operating behaviour could be determined specifically for the turbulence, inflow angle and temperature prevailing at the site. Unfortunately, only a few manufacturers currently provide this information.

The behaviour of the cup anemometer is well understood. Possible sources of error affecting the measurement include the effects of the measurement mast, boom and other mounting arrangements. Of course, proper maintenance of the anemometer is important. Problems can also occur due to icing of the sensor. By combining a heated and an unheated anemometer at the same measurement height, icing can be readily identified. Other climatic factors can affect the life of an anemometer such as corrosion near the sea or wear of the bearing in desert climates due to sand. Therefore, it is recommended to replace anemometers after 1 year.

6.2.2 Ultrasonic Anemometer

Ultrasonic anemometers, as the name implies, use ultrasonic signals to measure wind speed and direction. In addition, the sound virtual temperature can be determined, although this is dependent on the humidity of the air. The sound virtual temperature can be used to determine the Monin–Obukhov length (see Eq. 3). Ultrasonic anemometers have up to three transmitter–receiver pairs (sonotrodes). A three-axis sonic anemometer can provide a very high-resolution measurement of the three-dimensional wind vector (Fig. 22). The sonic anemometer operates on the principle of precisely measuring the time it takes an ultra-high-frequency acoustic pulse (typically 100 kHz) to traverse a known path length in the direction of the wind and against it. From the transit-time difference, the velocity of the flow can be determined. The wind measurement is independent of the air density and humidity.
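A minimal illustration of the transit-time principle: for a path of length d, the along-path wind component follows from the two transit times as v = d/2 · (1/t_with - 1/t_against); the path length, speed of sound and wind speed below are assumed values used only to generate the two transit times.

```python
def sonic_wind_component(d, t_with, t_against):
    """Along-path wind component from the transit-time difference of two
    opposed acoustic pulses: v = d/2 * (1/t_with - 1/t_against). The result
    is independent of the speed of sound and hence of air density and humidity."""
    return 0.5 * d * (1.0 / t_with - 1.0 / t_against)

# Assumed example: 0.15 m path, speed of sound 343 m/s, true wind 5 m/s.
d, c, v_true = 0.15, 343.0, 5.0
t_with = d / (c + v_true)     # pulse travelling with the wind
t_against = d / (c - v_true)  # pulse travelling against the wind
print(f"Recovered wind component: {sonic_wind_component(d, t_with, t_against):.2f} m/s")
```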

Fig. 22 Ultrasonic anemometer with three sonotrodes

Ultrasonic anemometers have a number of advantages over mechanical anemometers and can provide measurements of turbulence, air temperature and atmospheric stability. The measurement of all three wind components, the absence of moving parts and the high temporal resolution make the ultrasonic anemometer a very attractive wind speed-measuring instrument. In particular, the high temporal resolution makes ultrasonic anemometers more suitable for turbulence measurements than cup anemometers. In addition, the probe arms are easily heated, so that they can be used well in regions subject to icing and snowfall.

However, ultrasonic anemometers also introduce new, less well-known sources of error. To ensure the necessary mechanical stability, the probe arms must be sufficiently robust in design, but at the same time, they should influence the airflow as little as possible. Due to the influence of the probe arms and the sensitivity of the measurement principle to small deviations in geometry, the absolute accuracy of ultrasonic anemometers is generally lower than that of high-quality cup anemometers [21]. Temperature variations can also affect the geometry. Less known is the fact that the sensors themselves are temperature sensitive. Furthermore, it should be considered that most ultrasonic anemometers have a higher power consumption than cup anemometers and can hardly be powered by batteries or solar-powered power supplies.

6.2.3 Measurement Mast Geometry

To perform accurate wind speed measurements, the instruments should be influenced as little as possible by the changing flow through and around the mast and the side booms. The influence of the mast can be minimized by sufficient distance between mast and instrument as well as the careful choice of the side boom’s orientation with respect to the main wind direction. The influence of the mast on the measured wind speed should be less than 1% and the influence of the side boom less than 0.5% [18]. Other instruments, aviation lights and lightning protection should be mounted so as to minimize interference on the anemometer.

The influence of the mast on the flow depends on the type of mast. In front of tubular masts, the wind slows down and is accelerated around the sides (Fig. 23, left). Therefore, the least interference is achieved when mounting the side boom 45° to the main wind direction. The minimum distance from the centre of the mast should be 6.1 times the diameter of the tubular mast, but 8.2 times is better [18].

Fig. 23 Wind speeds around tubular mast (left) and lattice mast (right) normalized to free wind speed coming from left, based on [19]

The flow conditions around a lattice mast are more complicated to determine. In addition to the orientation of the mast with respect to the wind and the distance of the anemometer from the mast centre, the drag of the mast also plays a role. Similar to the tubular mast, a deceleration in front of the mast can be observed, while an acceleration occurs at the flanks (Fig. 23, right). The least impairment is achieved when the side booms are mounted at 90° with respect to the main wind direction. The minimum distance from the centre of the mast should be 3.7 times the width of the lattice mast, but 5.7 times is better [18].

Top-mounted instruments are generally less affected by the mast’s structure than side-mounted instruments. Still, care has to be taken, as the support tube of the instrument has to be sturdy enough to avoid wind-induced vibrations. However, as a consequence, the construction might become heavy, which can drive up the price of the mast. Additionally, top-mounted instruments are more susceptible to lightning strikes. Figure 24 shows possible arrangements of the top mounting.

Fig. 24
figure 24

Possible measurement geometries at the top of a mast

Instruments at lower heights are obviously side-mounted. To minimize the influence of the side boom, the instrument should be mounted approximately 20 times the boom diameter above the boom. Further details on the determination of flow distortion can be found in [18, 21].

In general, redundancies should always be present. Mounting two opposing instruments at the same height allows correction of the mast influence.

6.3 Remote Sensing Systems

Ground-based remote sensing systems are an alternative to mast-mounted anemometry. Two technologies have gained some acceptance in the wind energy community: LiDAR and SoDAR. Both LiDAR (Light Detection And Ranging) and SoDAR (Sound Detection And Ranging) use remote sensing techniques. Most systems are based on the Doppler effect. The signal emitted by a LiDAR is scattered by aerosols, while the signal emitted by the SoDAR is scattered by temperature fluctuations.

In many countries around the world, the process of obtaining a building permit for sufficiently tall measurement masts requires the involvement of aviation or even military authorities. This often leads to disproportionately long planning periods and high costs, which do not occur when using remote sensing systems. According to IEC 61400-12-1 [18], however, the use of remote sensing devices requires a co-located moderately high mast (e.g. 60 m). Such a hybrid system combines the high absolute accuracy and high availability of the cup anemometer with the greater vertical range of the remote sensing system. Like the anemometer, the operational behaviour of remote sensing systems depends on several conditions in the free atmosphere. This dependence is quantified by a class number, which has to be determined once for each model. The class number allows the determination of the measurement uncertainty.

As with anemometers, the measurement campaign using remote sensing should last at least one year, since the vertical profile varies greatly with different atmospheric stabilities, which are connected to seasonal and diurnal variations.

In contrast to the very small measurement volume of a cup or ultrasonic anemometer, both remote sensing technologies measure large volumes that change with altitude, which impacts direct comparisons to anemometer measurements. Both technologies require significantly more power than anemometers, making the use of a generator or fuel cell necessary.

While the precision of a LiDAR is superior to that of a SoDAR and can approach that of cup anemometers [22], both instruments have an inherent weakness in complex terrain, as both measurement techniques assume homogeneity of the wind vector in the measurement volume, which is not the case for non-flat terrain (Fig. 25). There is evidence that errors of 5–10% in mean wind speed are not uncommon, e.g. [23]. Only a multiple remote sensing system, where several sufficiently separated units focus on the same volume, could eliminate this inherent error. Alternatively, the measurement error due to inhomogeneity can be reduced by numerical correction, e.g. through CFD models.

Fig. 25
figure 25

Inhomogeneous flow in the measurement volume [24]

Another challenge is the interpretation of the measured turbulence intensity in connection with the verification of the turbine’s suitability for the site according to IEC 61400-1 [3], see Sect. 9.1. Since the measurement volume of remote sensing systems differs significantly from that of traditional anemometry, the turbulence measured by remote sensing differs from that measured by a cup anemometer. This creates a deviation from IEC 61400-1. The industry initiative CFARS (Consortium for the Advancement of Remote Sensing) is currently working on a systematic comparison and quantification of the influence of different measurement technologies on loads.

6.3.1 LiDAR

A LiDAR focuses on a specific distance and measures the frequency shift of the returning signal, which is scattered by the aerosols within the focal volume. Because light can be focused much more accurately and scatters much less in the atmosphere than sound, LiDAR systems have higher accuracy and a better signal-to-noise ratio than SoDAR. Most LiDARs can be divided into two categories.

The continuous wave LiDAR uses an optical system to focus on different heights (Fig. 26, right). Through a continuously rotating prism, the vertical laser beam is deflected 30° from the vertical. By adjusting the laser focus, the wind vector can be scanned along the circumference of a cone at different heights above the ground. The length of the measurement volume is defined by the optics of the device and the measurement height. Typically, it increases with the square of the measuring range.

Fig. 26
figure 26

Schematics of working principles of a pulsed lidar (left) and a continuous wave lidar (right) [24]

The other system uses a pulsed signal with a fixed focus (pulsed LiDAR). In contrast to the continuous wave LiDAR, its prism does not rotate continuously (Fig. 26, left). Instead, the prism remains stationary, while the LiDAR sends a pulse in a specific direction and records the backscatter in a series of range windows (fixed time delays) triggered by the end of each pulse. In contrast to the continuous wave LiDAR, the probe length is constant with height.

LiDAR relies on the backscatter of aerosols carried by the wind to estimate wind speed. Variations in the vertical concentration of backscattering particles can lead to uncertainties in wind speed measurements. For example, if clouds or fog are present along the line of sight, there is a risk that strong reflections from the cloud or fog will swamp the Doppler signal, potentially resulting in a falsified wind speed estimate. Processing algorithms for cloud detection have been developed to identify, reduce or remove scatter from clouds and fog.

However, an insufficient number of particles, e.g. in very clean air, also affects the functionality of the LiDAR. Other parameters that affect the measurement are errors in the vertical alignment of the instrument and uncertainties in the focusing height. As an optical instrument, the LiDAR is also susceptible to the effects of dirt on the exit window, so a robust cleaning device for the LiDAR window is needed.

LiDARs have many different applications today: The vertical conical scan shown in Fig. 26 is used not only in the context of wind resource measurements but also for power curve measurements. In the offshore sector, LiDARs are installed either on platforms or on buoys, again with a vertical conical scan. Floating LiDAR technology reduces the need for meteorological measurement masts as the primary source of data to describe wind resources. The devices measure wind speed and direction at a fraction of the cost of conventional offshore measurement masts. Savings of up to 90% are possible, based on a typical investment of 10 million euros for an offshore measurement mast. To achieve commercial acceptance for investment decisions based on this relatively new technology, a roadmap has been formulated [25]. Through the definition of key performance indicators in terms of accuracy and availability, the roadmap provides a clear framework for floating LiDAR providers to reach full commercial acceptance.

LiDARs with conical scan patterns can also be mounted on the nacelle (Fig. 27). This mounting arrangement allows the measurement of the power curve according to IEC 61400-12-2 [26]. In addition, the possibility of using knowledge of approaching gusts for active load control is currently being investigated.

Fig. 27
figure 27

Forward-looking nacelle-mounted lidar

Furthermore, through more complex control of the prism, LiDARs can also perform fan-shaped scans (Fig. 28). Often these scanning LiDARs have a longer range than the vertical scanning LiDARs, which can be in the range of several kilometres. Such measurements are used, for example, in complex terrain to locate flow separation, or to verify wake models. When positioned on existing offshore wind turbines or on the coast, scanning LiDARs can also be used to quantify wind resources in the vicinity.

Fig. 28
figure 28

Examples of different scan geometries of a scanning lidar [27]

6.3.2 SoDAR

There are different types of SoDARs with different arrangements of transmitter and receiver. A common variant is the phased array SoDAR, which consists of a combined transmitter–receiver array, i.e. it is monostatic. This array is electronically steered to direct sound pulses in different directions: one in the vertical direction and two further tilted directions whose azimuths are 90° apart (Fig. 29). Assuming that the flow field in the volume covered by the three beams is homogeneous, the wind vector can be constructed from the three signals. Thus, in non-flat terrain, any measurement of a SoDAR is inherently error-prone, just like that of a LiDAR.
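
A minimal sketch of this reconstruction (often called Doppler beam swinging) under the homogeneity assumption, with one vertical beam and two beams tilted towards east and north; the tilt angle, function name and example values are illustrative assumptions, not taken from a specific instrument.

```python
import numpy as np

def dbs_wind_vector(vr_vert, vr_east, vr_north, tilt_deg=17.0):
    """Reconstruct (u, v, w) from three radial (line-of-sight) velocities.

    Assumes one vertical beam and two beams tilted by `tilt_deg` towards east
    and north; positive radial velocity means flow away from the instrument.
    """
    theta = np.radians(tilt_deg)
    w = vr_vert                                    # vertical beam measures w directly
    u = (vr_east - w * np.cos(theta)) / np.sin(theta)
    v = (vr_north - w * np.cos(theta)) / np.sin(theta)
    return u, v, w

# Example: 8 m/s westerly wind, no vertical motion, 17 deg beam tilt
u, v, w = dbs_wind_vector(0.0, 8.0 * np.sin(np.radians(17.0)), 0.0)
print(round(u, 2), round(v, 2), round(w, 2))       # -> 8.0 0.0 0.0
```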

Fig. 29
figure 29

Principle of a phased array SoDAR with three beams

One of the disadvantages of a SoDAR is its dependence on temperature fluctuations, which is challenging under neutral and stable atmospheric conditions. Bistatic SoDARs, on the other hand, where the transmitter and receiver are spatially separated, are sensitive to inhomogeneities in wind speed and do not have this problem. However, bistatic SoDAR units are not currently commercially available.

Since the SoDAR is an acoustic system, the presence of ambient noise can affect the function of the SoDAR. The noise source can be rain, but also other noises, for example from animals, can have a negative effect. A particularly critical issue is the increase of noise in high wind speed situations. These situations are often associated with neutral stability and suffer from the lack of sufficient temperature fluctuations. False echoes from nearby objects such as house walls or trees can also lead to a distorted signal. Other parameters that can affect the SoDAR measurement are errors in the vertical orientation of the instrument and temperature changes at the antenna. With an increasing altitude above ground, the signal-to-noise ratio of the SoDAR deteriorates, such that the data availability decreases.

For these various reasons, the measurement accuracy of SoDAR systems does not reach that of cup anemometers. At the same time, the measurement volume changes with altitude due to the aperture angle between the three beams. Therefore, a measured profile is difficult to interpret, since it can be based on a different number of measured values as well as a different measurement volume per height.

6.4 Production Data

Another very good source of information for the assessment of the wind regime is production data from nearby wind farms. These can be used to calibrate the flow model. However, production data are often only accessible on a monthly basis. In order to filter out months with unrepresentative production caused by issues such as grid curtailment or icing, these data are often checked for plausibility against a wind index. However, experience clearly shows that this method is not always sufficient, since periods with grid curtailment and other suboptimal performance cannot always be identified and thus lead to a contamination of the monthly production data. Consequently, this can lead to an incorrect calibration of the flow model and ultimately falsify the result for the project under investigation.

In contrast to monthly production data, the use of 10-min SCADA data allows precise identification and correction of non-optimal production periods and is thus preferable. If an existing wind farm is used to study a future project, analysis of SCADA data provides information not only from one position but from several, and is consequently often superior to wind measurements, which are typically only available for a limited number of positions. Furthermore, the wake model can be calibrated for the existing wind turbines (Sect. 10.2). If the existing and planned wind farms are comparable with respect to terrain, size and layout, the resulting parameterization of the wake model can be assumed to be valid for the future turbines.

6.5 Measurement Period and Averaging Time

The energy yield is calculated based on the distribution of wind speeds at the site. However, the annual wind distribution varies considerably from year to year. Depending on the local climate, the annual mean wind speed can vary by ±15% from one year to the next. To reduce the uncertainty due to this interannual variability, a long-term correction of the measured data is performed (Sect. 7.3). On a monthly scale, the wind speed variability is much stronger, reaching variations of up to ±50% depending on the local climate. Therefore, it is of crucial importance to measure whole years, because the seasonal variations of the long-term reference data often differ from those at the site. Figure 30 shows, for example, a significant difference between site and reference data in the winter months, while, in the summer months, the difference is negligible. A correction of a short-term measurement covering less than 12 months using reference data would therefore be subject to error.

Fig. 30
figure 30

Seasonal bias: example of deviation between onsite and reference data during winter

Normally, wind data are averaged over a period of 10 min. For this period, the standard deviation of the wind speed, which is necessary for the determination of the turbulence intensity, is calculated by the data logger. The averaging period of 10 min allows the data to be used directly for load calculations [3], as these also refer to 10 min averages.
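
The reduction of raw samples to 10-min statistics, including the turbulence intensity as the ratio of standard deviation to mean, can be sketched as follows (illustrative only; names and sample values are chosen here):

```python
import numpy as np

def ten_minute_statistics(wind_speed_samples: np.ndarray):
    """Reduce one 10-min block of raw wind speed samples to mean, standard
    deviation and turbulence intensity, as a data logger would (sketch)."""
    mean = wind_speed_samples.mean()
    std = wind_speed_samples.std(ddof=1)
    ti = std / mean                      # turbulence intensity of this interval
    return mean, std, ti

# Example: 1 Hz samples over 10 minutes (600 values) around 8 m/s
rng = np.random.default_rng(0)
samples = 8.0 + 1.2 * rng.standard_normal(600)
print(ten_minute_statistics(samples))
```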

7 The Third Step: Data Analysis

7.1 Quality Control

Both the short-term site data and the long-term reference data should be subjected to rigorous quality control. This is mainly done manually but is supported by filter algorithms that flag unusual behaviour. The direct comparison of site data with long-term data can offer advantages here, for example, to check the wind direction measurements.

When using anemometers, special attention should be paid to possible signs of icing. Often, heated instruments are combined with unheated ones, as a deviation between the two sensors indicates icing. Wind vanes often ice up earlier than the rotating cup anemometers. Thus, a constant wind direction may also be an indication of ice. Such questionable periods should be excluded from further analysis.

Depending on the technology, low clouds and fog can affect the function of a LiDAR and lead to erroneous measurements. Furthermore, the quality of the signal can suffer in particularly clean air due to a lack of aerosols necessary to reflect the signal.

SoDARs require temperature fluctuations in the air to reflect the signal. Therefore, special attention should be paid to periods of stable or neutral stratification, as the functionality of the SoDAR may be impaired under these conditions.

7.2 Data Corrections

For further analysis, the highest possible data integrity is necessary. In general, at least 90% of the data should be available after filtering [17]. Data gaps caused by missing data or quality control can lead to systematic measurement errors, especially if the gaps are not randomly distributed but occur cumulatively in certain meteorological situations such as wintertime or certain times of the day. Therefore, gaps in the relevant signals such as wind speed and direction should be closed where possible by reconstructing the missing data from readings of other sensors to increase the data availability. Measure-Correlate-Predict (MCP) methods are usually used to fill these gaps (see next section).

7.3 Long-Term Correction

Weather, and therefore wind speeds, vary from year to year. Onsite data for the analysis of wind resources tend to cover a limited period, which in many cases is less than 5 years. The question therefore arises to what extent this measurement period is representative. Long-term measurements have shown that variations in wind energy of more than 20% occur [7]. For a proper assessment of the economic viability of a project, this variability must be considered. Each short-term measurement should therefore be corrected with long-term reference data. This correction is based on the assumption that the short- and long-term data sets are well correlated. The coefficient of determination R2 is used to quantify the quality of the correlation. The R2 of wind speed should not be less than 70% for a temporal resolution of 10 min [19]. In addition to the correlation of wind speed, the representativeness of wind directions should be checked. The k parameters of the frequency distributions should also be similar.

Today, reanalysis data (Sect. 4.1) are commonly used as long-term data, either directly or spatially refined through mesoscale models. Weather stations are generally too strongly influenced by changes in their surroundings, such as growing vegetation or construction activity, and are therefore often inconsistent. However, even with reanalysis data, care must be taken in the choice of the long-term period. Since both the composition of the input data of the reanalysis models and the assimilation models can change over time, a reference period of 20 years is often chosen as a compromise, even though reanalysis data from longer periods might be available. In general, the interpretation of trends in long-term series is difficult because it is not possible to distinguish between natural trends and trends due to inconsistent input data or changes in the assimilation models. Only if it is absolutely certain that an observed trend is artificial and has no physical cause can detrending of the long-term data be considered.

Frequently, several long-term data sets are compared with each other. However, this has only a limited effect on the interpretation of trends since most reanalysis models are based on the same input data and are thus not statistically independent.

In principle, two different strategies for long-term correction can be distinguished.

The statistical method for correcting the short-term data is called Measure-Correlate-Predict or MCP. Here, for the concurrent period, a mathematical relationship is established between the short- and long-term data. The application of this relationship results in a synthetic long-term data series. Examples of methods that can be used to establish the relationship are given below:

  • Regression analysis: Here, the linear regression of the wind speeds of the short-term and long-term data is determined either omnidirectionally or by wind direction sector.

  • Matrix method: The matrix method is based on the comparison of wind direction and wind speed frequency distributions to derive the statistical relationship between measured data and reference. A realistic estimation of the distribution functions is important, even when data coverage is low. It should be noted that not every matrix method has sufficient conservation properties with respect to the statistical moments of the frequency distribution.

  • Nonlinear methods such as neural networks.

In general, MCP methods require a high temporal data resolution and sufficient data coverage for all energy-relevant wind speeds. The chosen MCP method must be able to reproduce the original measured data without significant errors. Simple linear regressions of wind speed do not preserve energy, so additional measures are required to ensure energy conservation. However, such measures can reduce the quality of a correlation.
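
As an illustration of the regression variant only (omnidirectional, without the energy-conservation measures mentioned above; all names are chosen here), an MCP step might look like this:

```python
import numpy as np

def mcp_linear_regression(v_site, v_ref, v_ref_longterm):
    """Illustrative omnidirectional MCP by linear regression.

    v_site, v_ref   : concurrent wind speeds at the site and the reference
    v_ref_longterm  : long-term reference series to be transferred to the site
    Returns the synthetic long-term site series and the R^2 of the fit.
    """
    slope, offset = np.polyfit(v_ref, v_site, deg=1)    # least-squares fit
    prediction = slope * v_ref + offset
    ss_res = np.sum((v_site - prediction) ** 2)
    ss_tot = np.sum((v_site - v_site.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                          # quality of the correlation
    v_site_longterm = slope * v_ref_longterm + offset   # synthetic long-term series
    return v_site_longterm, r2
```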

Instead of transposing the wind distribution from the reference station, scaling methods determine a correction factor for the short-term data. For the short-term measurement period, the energy level of the reference data set is determined and compared with the energy of the total long-term period. The resulting ratio is then applied as a correction to the short-term onsite data. This procedure has the disadvantage that the wind rose measured onsite remains unchanged and should therefore only be used if the wind directions were representative during the measurement period.

8 The Fourth Step: Spatial Extrapolation

The analysis of the wind speed measurements results in a detailed description of the wind climate at several heights, but at one position only. To calculate the energy yield of the entire wind field across the site, the measurement results must be extrapolated vertically from measurement height to hub height and horizontally across the wind farm area.

The vertical extrapolation can be carried out either with the help of a flow model or by using the measured shear. Which of the two methods is preferable depends on the conditions at the site and the geometry of the measurement.

Using measured shear for vertical extrapolation has the advantage that the resulting time series of wind speed at hub height reflects the varying atmospheric stability. It is thus preferable to using an average atmospheric stability, as would be the case with WAsP. However, if the measurement instruments are subjected to varying degrees of shadowing at different measurement heights, the measured shear will be falsified. This can be the case, for example, if the topmost instrument is placed at the tip (cf. Fig. 24, right), but the instruments at lower heights are mounted on side booms. Figure 31 shows another situation where the measured shear is distorted: The top two measurement heights are within the internal boundary layer caused by a nearby forest, while the lowest measurement height is free from the influence of the forest. The measured shear over all three heights is most likely not representative for the future positions of the wind turbines and should therefore not be used for vertical extrapolation.
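
A minimal sketch of this approach, assuming the power law (Eq. 6) and two measurement heights; applied per 10-min interval it retains the stability-dependent variation of shear discussed above. The heights and speeds in the example are illustrative.

```python
import numpy as np

def shear_exponent(v_lower, v_upper, z_lower, z_upper):
    """Shear exponent alpha from two measurement heights (power law, Eq. 6)."""
    return np.log(v_upper / v_lower) / np.log(z_upper / z_lower)

def extrapolate_to_hub(v_upper, z_upper, z_hub, alpha):
    """Extrapolate the top-height wind speed to hub height with the power law."""
    return v_upper * (z_hub / z_upper) ** alpha

# Example: 6.8 m/s at 80 m, 7.2 m/s at 120 m, hub height 150 m
alpha = shear_exponent(6.8, 7.2, 80.0, 120.0)
v_hub = extrapolate_to_hub(7.2, 120.0, 150.0, alpha)
print(round(alpha, 3), round(v_hub, 2))   # about 0.141 and 7.43 m/s
```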

Fig. 31
figure 31

Influence of internal boundary layer caused by a forest on a nearby measurement mast

For the horizontal extrapolation, microscale flow models are used (see Sect. 4.3). It is important to note that even advanced flow models have their limitations if the similarity principles are violated and the site characteristics of the proposed wind turbines deviate significantly from those at the measurement site.

9 The Fifth Step: Choosing the Wind Turbine

In the design of wind turbines, a number of assumptions are made that describe the wind climate on site. Since the robustness of a wind turbine is directly related to the cost of the machine, a system has been established to divide sites into different categories in order to optimize the costs of the wind turbines. The site class according to IEC 61400-1 [3] depends on the mean wind speed and the extreme wind speed Vref at hub height (see Table 3). The term extreme wind is used for the maximum 10 min average wind speed with a 50-year return period at hub height. For sites in areas with tropical cyclones, a design for particularly high extreme winds is required. A fourth class, the S class, allows manufacturers one or more deviations from the parameters of the first three classes.

Table 3 Classification of WTGs according to IEC 61400-1 [3]

In addition, three different turbulence classes for sites with low, medium and high turbulence were introduced. They are parameterized by Iref, the turbulence intensity at 15 m/s. Iref has the values 0.16, 0.14 and 0.12 for the standard turbulence classes A, B and C, respectively. So, a site can be classified as IA, IIB, IIIC or S, for example.

When selecting a suitable wind turbine for a specific site, this table can be used to determine the required wind class. At this point, it must be pointed out that the extreme winds at hub height cannot be inferred directly from the average wind speed!

The mean wind speeds Vave, the extreme winds at hub height Vref and the turbulence Iref are not the only relevant factors. In the context of certification of a proposed project, in addition to these parameters, the distribution of wind speed (k parameters, see Sect. 3.2), flow inclination (or inflow angle), shear (gradient) and turbulence are compared with the assumptions of the type certification of the wind turbine. The individual parameters, their cause and effect are explained below.

9.1 Turbulence

Section 1.4 describes ambient turbulence and its causes. In addition to ambient turbulence, turbulence is generated by neighbouring wind turbines, which also leads to increased mechanical loads.

Figure 32 shows the additional wake-induced turbulence intensity as a function of the distance between two wind turbines, expressed as a multiple of the rotor diameter [28]. These results are based on measurements at four sites of different complexity. The Frandsen model [28] defines the so-called effective turbulence, which is a combination of ambient and wake turbulence integrated over all directions and takes the accumulation of fatigue based on material properties into account. The calculated effective turbulence is based on the 90th percentile of the measured ambient turbulence and must be compared with the normal turbulence model (NTM) as a design limit according to IEC 61400-1 [3] for a range of wind speeds. During the planning phase, it must therefore be ensured that the effective turbulence intensity does not exceed the turbulence intensity assumed in the certification.
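
A commonly quoted form of the Frandsen model for the maximum wake-added turbulence at a spacing of d rotor diameters is reproduced here from the general literature (not from the figure), with ct the thrust coefficient and Iamb the ambient turbulence intensity:

$$I_{\mathrm{wake}} = \sqrt{\frac{1}{\left( 1.5 + 0.8\, d / \sqrt{c_t} \right)^{2}} + I_{\mathrm{amb}}^{2}}$$

The effective turbulence then weights such direction-dependent values over all wind directions using the Wöhler (material fatigue) exponent.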

Fig. 32
figure 32

Wake-induced turbulence intensity according to Frandsen [28]

Turbulence is the main cause of material fatigue. In particular, turbulence causes bending of the blade root. The turbulent wind field also causes alternating torsion of the drive train. In addition, the tower is subjected to thrust (Fig. 33).

Fig. 33
figure 33

Impact on loads due to turbulence

9.2 Vertical Gradient (Shear)

The wind speed difference between the top and bottom of the rotor is another wind load parameter that has to be assessed when reviewing the turbine’s suitability for the site. This difference is usually expressed as the height exponent α used in the power law (Eq. 6). As a default value, IEC 61400-1 assumes a shear exponent of 0.2 as the basis for the load calculations.

Since the blades experience a different angle of attack with each rotation due to the wind speed difference between the top and bottom of the rotor, alternating loads are caused on the blades. This leads to fatigue at the blade root. In addition, the rotating parts of the main shaft are subjected to bending (Fig. 34). It should be noted that not only high shear causes excessive loads but also very small or negative shear. Therefore, locations with high as well as negative shear should be avoided.

Fig. 34
figure 34

Impact on loads due to vertical shear

A number of phenomena can cause large shear: In steep terrain, the flow might separate. Consequently, the wind speed profile across the rotor might be distorted to such a degree that parts of the rotor may even be subjected to negative shear (Fig. 12).

Similarly to steep terrain, large obstacles such as buildings or a forest can lead to distorted wind profiles as the wind speed at the lower part of the rotor can be strongly decelerated (Fig. 13). The degree of deceleration depends on the dimensions of the obstacle, its so-called porosity, and the distance of the turbine from the obstacle.

The wake behind a wind turbine is not only characterized by high turbulence (Fig. 32) but also by a clearly reduced wind speed compared to the surrounding area, called the velocity deficit. The influence of a wake on the vertical wind profile is shown in Fig. 35. The dashed line shows the profile in front of the wind turbine, and the solid line shows the profile behind the wind turbine. The deformed wind profile clearly shows not only areas with negative gradients but also areas with very large gradients.

Fig. 35
figure 35

Vertical wind profile in front and in wake of a WTG [29]

In addition, partial wake coverage of the rotor can also lead to strong horizontal wind shear (Fig. 36). In this situation, the rotor experiences not only a vertical but also a horizontal gradient, simultaneously with increased turbulence.

Fig. 36
figure 36

Partial wake situation

Finally, atmospheric stability is a driver for varying shear. As explained in Sect. 2.3, higher shear exponents are to be expected, especially in winter months and at night, while during summertime lower shear can be expected. Figure 8 documents the variation of shear exponents, ranging from negative shear up to values around 0.8.

9.3 Inflow Angle

To benefit from the terrain-induced speed-up (Fig. 12), wind turbines are often installed at the point of greatest curvature of the terrain. An undesirable side effect of such a position is an inflow angle that deviates significantly from the horizontal. The inclined flow can lead to lower production, since the angular response of the wind turbine to inflow follows roughly a cosine and thus mainly the horizontal component of the wind contributes to the energy production.

In addition, the wind turbine is subjected to higher loads with increasing inflow angles. In particular, the fatigue loads on the blades increase as the blades are subjected to a constantly changing angle of attack. Also, the bending loads on the rotating parts of the drive train are increased. Finally, the uneven loading of the rotor causes additional loads on the yaw drive (Fig. 37).

Fig. 37
figure 37

Impact on loads due to non-horizontal inflow

As a default, IEC 61400-1 [3] assumes an inclined flow of 8°. The inflow angle is naturally related to the terrain slope. With increasing height above the ground, however, the influence of the terrain decreases. Especially in complex terrain, the limit value of 8° can quickly be exceeded. When a turbine is placed in areas of flow separation, the flow can, in the worst case, reverse and seriously damage the wind turbine.

A measurement of the inclined flow requires the measurement of all three wind components, for example, by using an ultrasonic anemometer at hub height. The measured inclined flow is only valid for the measurement position so that measurements can often only serve to verify a suitable flow model like CFD. However, scanning LiDARs can serve the purpose of identifying inflow across larger areas.

By optimization of the proposed layout, the technical risk due to flow inclination can often be controlled without significant loss of energy.

9.4 Extreme Winds

The selection of a suitable wind turbine for a site also requires knowledge of the expected maximum 10 min average wind speed Vref with a return period of 50 years [3]. While local building codes often generalize the amplitude of extreme events over large areas, wind data measured onsite allow a more accurate site-specific prediction of the expected 50-year event. Several methods allow the maximum 10 min wind speed at the site to be estimated from onsite data.

Like other climatic extremes, Vref can often be described by a double-exponential Gumbel distribution [30]. The parameters describing the Gumbel distribution are generally determined by plotting the measured extreme events against their rank. This so-called Gumbel plot ideally shows a linear relationship, whose slope is used to predict the 50-year event. Several methods are available to extract extreme events from a short-term data set and to fit the Gumbel distribution to these extremes [31]. However, all methods have one problem in common: the resulting 50-year estimate is strongly correlated with the highest measured wind speed event in the time series used for the analysis [32].
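
A minimal sketch of such a fit, using annual maxima and the classic plotting-position approach (all names and the example values are illustrative); note that, as stated above, the result remains sensitive to the largest observed event:

```python
import numpy as np

def gumbel_return_value(annual_maxima, return_period_years=50.0):
    """Fit a Gumbel distribution to annual maximum 10-min wind speeds and
    return the wind speed for the given return period (illustrative sketch)."""
    x = np.sort(np.asarray(annual_maxima, dtype=float))
    n = len(x)
    p = np.arange(1, n + 1) / (n + 1.0)       # empirical non-exceedance probability
    y = -np.log(-np.log(p))                   # reduced Gumbel variate (linear axis of the Gumbel plot)
    beta, mu = np.polyfit(y, x, deg=1)        # slope = scale, intercept = location
    p_target = 1.0 - 1.0 / return_period_years
    return mu + beta * (-np.log(-np.log(p_target)))

# Example with illustrative annual maxima in m/s
maxima = [27.1, 31.4, 25.8, 29.9, 28.3, 33.0, 26.5, 30.2, 27.8, 29.0]
print(round(gumbel_return_value(maxima), 1))
```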

The European Wind Turbine Standard, EWTS [33], provides a way to estimate extreme wind based on the onsite wind speed distribution, rather than a measured time series. The EWTS provides a relationship between the shape of the wind distribution, described by the k-parameter, and the expected extreme wind (Fig. 16). The extreme wind is determined on the basis of the mean wind speed multiplied by a factor that depends on k. This factor is 5 for k = 1.75. For distributions with long tails (corresponding to a small k parameter), the factor is larger than 5. For distributions with a bigger k parameter, the factor is smaller than 5.

In addition to the maximum 10 min average wind speed at hub height with a return period of 50 years Vref, the extreme 3 s gust Ve50 has a critical influence on the wind turbine design and has to be established. The extreme 3 s gust is a function of Vref and the turbulence intensity Iext, which is the turbulence intensity that occurs during extreme winds.

$$V_{e50} = V_{{\mathrm{ref}}} \left( {1 + 2.8 I_{{\mathrm{ext}}} } \right)$$
(9)

The relationship described in Eq. 9 is based on both experimental data and theoretical work. As mentioned in Sect. 1.4, caution is needed in offshore and forest situations because the turbulence intensity does not show an asymptotic behaviour but increases with increasing wind speed. It is therefore much more difficult to estimate Iext under these conditions.

10 The Sixth Step: Energy Yield

10.1 Energy Yield of the Individual Wind Turbine

The estimate of annual electricity production is derived from the wind turbine hub height wind speed. The relationship between the generated electricity and the wind speed is described by the power curve.

Knowing the power curve of a wind turbine P(v), the average electricity production can be estimated using the probability density function of the wind speed at hub height f(v), typically expressed as a Weibull distribution (see Eq. 7).

$$P = \mathop \int \limits_{0}^{\infty } f\left( v \right) P\left( v \right) \;\;dv = \mathop \int \limits_{0}^{\infty } \frac{k}{A}\left( \frac{v}{A} \right)^{k - 1} \exp \left( { - \left( \frac{v}{A} \right)^{k} } \right) P\left( v \right) dv$$
(10)

This integral cannot be calculated analytically and must therefore be solved numerically. For this purpose, the power curve is divided into bins, typically in 0.5 m/s steps. The power output can now be calculated by summing the generated energy for each wind speed bin. As explained in Sect. 3.2, energy yield calculations are increasingly carried out in the time domain, i.e. a time series of production is calculated, as this approach is considered more correct and allows direct coupling to varying spot market prices.
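
A minimal numerical sketch of this binned evaluation of Eq. 10 is given below; the function names, the example Weibull parameters and the crude generic power curve are illustrative assumptions, not data from the text.

```python
import numpy as np

def annual_energy_yield(A, k, v_bins, p_curve_kw, hours_per_year=8760.0):
    """Numerically evaluate Eq. 10 by wind speed binning (illustrative sketch).

    A, k       : Weibull scale [m/s] and shape parameters at hub height
    v_bins     : bin centres of the power curve [m/s], e.g. 0.5 m/s steps
    p_curve_kw : power curve values at the bin centres [kW]
    Returns the annual energy yield in MWh.
    """
    v = np.asarray(v_bins, dtype=float)
    p = np.asarray(p_curve_kw, dtype=float)
    dv = np.diff(v).mean()
    # Weibull probability of each bin: density times bin width
    f = (k / A) * (v / A) ** (k - 1) * np.exp(-(v / A) ** k) * dv
    mean_power_kw = np.sum(f * p)
    return mean_power_kw * hours_per_year / 1000.0

# Illustrative example: A = 8 m/s, k = 2 and a hypothetical 3 MW power curve
v = np.arange(0.5, 25.5, 0.5)
p = np.clip((v - 3.0) ** 3 * 8.0, 0.0, 3000.0)
print(round(annual_energy_yield(8.0, 2.0, v, p), 0))
```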

Caution is advised, as the energy content of the wind is proportional to the air density. A power curve normally refers to an air density of 1.225 kg/m3, which corresponds to a temperature of 15 °C at sea level. A higher altitude and/or a warmer location will result in a lower air density and thus a lower energy yield.
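
As a check (using standard-atmosphere values that are not part of the original text), the reference density follows from the ideal gas law:

$$\rho = \frac{p}{R_{\mathrm{specific}}\, T} = \frac{101{,}325\ \mathrm{Pa}}{287.05\ \mathrm{J\,kg^{-1}\,K^{-1}} \cdot 288.15\ \mathrm{K}} \approx 1.225\ \mathrm{kg/m^{3}}$$

Re-evaluating this expression with the site temperature and pressure gives the density to which the power curve should be corrected.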

In addition to air density, a number of site-specific parameters influence the power curve and thus the energy yield. These parameters include turbulence, shear and the wind direction variation over the rotor surface (veer). IEC 61400-12-1 [18] describes how these site-specific influence parameters can be taken into account in order to achieve comparability of power curves measured at different sites.

Figure 38 shows schematically the influence of turbulence: with increasing turbulence, the ‘knee’ of the power curve becomes flatter, but at the same time the power output increases for lower wind speeds. The overall influence of turbulence is therefore dependent on the wind speed distribution.

Fig. 38
figure 38

Impact of turbulence intensity on the power curve

Shear also affects the power output. The simplification of referring the power curve to the hub-height wind speed alone becomes less and less suitable with increasing rotor size. Therefore, the so-called rotor equivalent wind speed (REWS) has been introduced, which accounts for the variation of wind speed across the rotor.
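
In its simplest form (neglecting veer), the REWS weights the cube of the wind speed vi measured at several heights by the area Ai of the corresponding horizontal rotor segment, with A the total rotor area; this rendering follows the commonly used definition and is not quoted from the text above:

$$v_{\mathrm{REWS}} = \left( \sum_{i} \frac{A_i}{A}\, v_i^{3} \right)^{1/3}$$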

10.2 Energy Yield of the Wind Farm

Every wind turbine produces a wake. The wake describes the volume of reduced wind speed downstream of a wind turbine. If another nearby wind turbine is operating within this wake, the output of this downstream wind turbine is reduced compared to a wind turbine operating in the free wind. This power reduction depends on the wind turbine characteristics, the wind farm geometry and the wind climate. The wind turbines operating in the wake are not only subject to reduced wind speed (as shown in Fig. 35) but also to an increased dynamic load due to the increased turbulence caused by the upstream wind turbines (Fig. 32). This increased turbulence must be taken into account when selecting a turbine class suitable for the site (Sect. 9.1).

Two models are commonly used in the industry to calculate the impact of upstream wind turbines on energy production, the N.O. Jensen model [34] and the eddy viscosity model of Ainslie [35].

The N.O. Jensen model is a simple kinematic model, which describes the speed reduction caused by a single turbine through a wake decay constant. The model is based on the assumption that the wake expands linearly as a function of the distance x to the rotor (Fig. 39).

Fig. 39
figure 39

Wake model according to N. O. Jensen

The N.O. Jensen model describes the reduced velocity vw (w for wake) in the rotor plane as follows:

$$v_{w} = v_{0 } \left[ {1 - \left( {1 - \sqrt {1 - c_{t} } } \right)\left( {\frac{D}{D + 2 k x}} \right)^{2} } \right]$$
(11)
vw: wind speed in the wake [m/s]

v0: free wind speed in front of the wind turbine [m/s]

ct: thrust coefficient [−]

D: rotor diameter [m]

k: wake decay constant [−]

x: distance downstream [m]

The thrust coefficient ct describes the horizontal thrust force on the rotor. This information is provided by the manufacturer in the form of the thrust curve. The thrust curve has a great influence on the wake loss, but its validity is difficult to prove. The wake decay constant k is related to the opening angle of the conical wake. A high ambient turbulence intensity leads to increased mixing between the wake and the surrounding undisturbed flow, causing the wake to be replenished with energy from the boundary layer. The opening angle of the wake increases, resulting in lower wake losses compared to situations with low turbulence. The effect of multiple wakes can be established through the square root of the sum of squares of the wind speed deficits [34].

The N.O. Jensen model was validated in the 1980s. Compared to today, the rotor diameters and the hub heights were small. Accordingly, the ambient turbulence at hub height was high, which resulted in a recommendation for the wake decay constant k of 0.075. From today’s point of view, with considerably higher hub heights and a consequently lower ambient turbulence, this value is outdated and should be reduced. Investigations suggest connecting the wake constant k to the ambient turbulence at hub height [36]:

$$k \approx 0.4 \;\;\mathrm{TI}$$
(12)

Although the N.O. Jensen model is somewhat crude, it is computationally very fast and useful for initial estimates of the wake losses of a wind farm. The performance of the above models has been evaluated through various blind tests [37].
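
A minimal implementation of Eqs. 11 and 12 might look as follows; the function names and example values are chosen here for illustration and are not part of the original text.

```python
import numpy as np

def jensen_wake_speed(v0, ct, rotor_diameter, distance_downstream, ti_hub):
    """Single-wake speed according to the N.O. Jensen model (Eq. 11),
    with the wake decay constant tied to ambient turbulence (Eq. 12)."""
    k = 0.4 * ti_hub                                          # Eq. 12
    deficit = (1.0 - np.sqrt(1.0 - ct)) * (
        rotor_diameter / (rotor_diameter + 2.0 * k * distance_downstream)
    ) ** 2
    return v0 * (1.0 - deficit)

def combined_wake_speed(v0, single_wake_speeds):
    """Multiple wakes: square root of the sum of squared deficits [34]."""
    deficits = 1.0 - np.asarray(single_wake_speeds) / v0
    return v0 * (1.0 - np.sqrt(np.sum(deficits ** 2)))

# Example: 8 m/s free wind, ct = 0.8, D = 120 m, 5 D downstream, TI = 12 %
print(round(jensen_wake_speed(8.0, 0.8, 120.0, 5 * 120.0, 0.12), 2))  # about 5.98 m/s
```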

The Ainslie model also dates from the 1980s, and is based on a numerical solution of the Navier–Stokes equations. The eddy viscosity is described by the turbulent mixing due to the induced turbulence generated in the shear layer of the wake and the ambient turbulence [35].

WakeBlaster belongs to the next generation of wake models, whose core is a parabolic three-dimensional RANS solver [38]. WakeBlaster solves the wake flow field for multiple turbines simultaneously rather than for a single turbine, eliminating the need for an empirical wake superposition model. It belongs to a class of mid-fidelity models designed for industrial use in wind farm design, operation and control.

10.3 Further Production Losses

In addition to wake losses, other sources of losses must be considered to determine the net yield (Fig. 40). Various guidelines [17, 19], as well as the working documents of the Technical Expert Group preparing IEC 61400-15, suggest lists of the items to be considered. Unfortunately, no consensus has yet been reached on which factors should be considered and how to categorize them. The most commonly used categories include:

Fig. 40
figure 40

Main steps of energy yield estimation

  • Internal wakes.

  • External wakes caused by already existing as well as future wind farms.

  • Availability

    • of the wind turbine,

    • of the infrastructure (balance of plant BOP), and

    • of the grid.

  • Electrical losses

    • Electrical efficiency.

    • Parasitic consumption.

  • Performance of the wind turbine

    • Suboptimal operation.

    • Generic and/or site-specific adjustment of the power curve.

    • High-wind hysteresis.

  • Environmental conditions

    • Degradation without icing.

    • Degradation due to icing, shutdown due to icing.

    • Environmental losses, e.g. shutdown due to high temperature.

    • Site accessibility.

  • Curtailments

    • Load-related curtailments (such as wind sector management).

    • Grid curtailment.

    • Permit curtailment (such as noise or shadow flicker).

    • Operational strategies.

Some of the loss factors can be calculated, often most accurately in the time domain. Bat curtailment, for example, requires the wind turbines to be switched off at a certain time of day and year, at certain wind speeds, and in certain weather conditions. A statistical determination of the resulting losses is possible, but less accurate than in the time domain.
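
As an illustration of such a time-domain rule only (the months, hours, thresholds and names below are hypothetical and not taken from any permit), a curtailment flag could be built as follows; summing the turbine power over the flagged intervals then gives the lost energy.

```python
import numpy as np

def bat_curtailment_mask(month, hour, wind_speed, temperature):
    """Illustrative time-domain curtailment flag (all thresholds hypothetical):
    shut down at night in summer months when the wind is light and it is warm."""
    return (
        np.isin(month, [6, 7, 8, 9])        # summer months
        & ((hour >= 20) | (hour <= 5))      # night-time hours
        & (wind_speed < 6.0)                # light winds [m/s]
        & (temperature > 10.0)              # warm conditions [deg C]
    )
```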

A number of loss factors, such as availability, are typically covered by specific warranty agreements, so any consideration of uncertainty in these parameters requires a contract-specific review that is generally outside the scope of an energy analysis.

Since the late 2010s, another potential loss factor has been cited, which is related to the upstream reduction of wind speed in front of a wind farm, potentially causing a reduction in energy output. At the time this book was published, there was no industry-wide consensus on this effect, known as blockage [39]. The detection of blockage is extremely difficult because the effect to be measured is of the order of the measurement uncertainty. Therefore, indirect attempts are made to detect blockage. Bleeg et al. [40] report a reduction in wind speeds in front of a wind farm, which was detected by a changed relationship between wind speeds measured with multiple masts before and after the installation of the wind farms. Furthermore, production differences along the front row of wind turbines in an offshore wind farm were referred to as evidence of blockage [41].

LES (large eddy simulation) models and simplified three-layer models of the atmosphere indicate a reduction in wind farm production on the order of 5% under stable conditions [42, 43]; averaged over the course of the year, the losses would be significantly lower. There are indications that blockage might be connected to the size of the rotor and the turbine’s hub height interacting with the atmospheric boundary layer (ABL).

Simpler engineering models, as described in [44], show how the thrust of the rotor slows down the flow in front of a wind turbine. This effect, called induction, is well known and understood. In the context of a wind farm, a redistribution of energy is expected through acceleration at the sides and towards the rear of the wind farm, which partly offsets the reduced wind speed at the front row.

Some suggestions on the magnitude of losses can be found in [45]. Unfortunately, it is not always possible to verify the actual individual contributions once the wind turbine is operational. For one thing, the losses are so fragmented that the individual contributions are too small to be determined with certainty. For another, the error codes of SCADA systems rarely support the allocation of lost production to the categories assumed during the pre-construction process. Various blind tests attempt to identify the key challenges of pre-construction assessment and thereby improve accuracy [46, 47]. It often proves difficult to identify a clear pattern of behaviour in the method choices and loss assumptions.

When combining losses, double-counting should be avoided. If, for example, turbines are bat-curtailed, they no longer generate a wake.

10.4 Uncertainty Analysis

In addition to the net yield P50, the uncertainty of the pre-construction assessment is key information when seeking financing. The gross energy yield as well as the individual loss factors are subject to uncertainties. A site-specific uncertainty analysis is an important part of any wind farm assessment. The most important contributions to uncertainty are:

  • Historical wind resources including long-term correction

  • Measurement uncertainty/wind data basis;

  • Spatial extrapolation/Modelling wind field;

  • Performance of the wind turbine:

    • Wake effects;

    • Loss factors.

For each element, the magnitude of the uncertainty is determined. Some suggestions on the magnitude of uncertainties can be found in [45].

Often, it is assumed that the individual uncertainties follow a normal distribution. Through the use of a sensitivity factor, wind speed uncertainties can be translated into energy uncertainties. Once expressed as energy uncertainties, all individual contributions can be combined into a total uncertainty. The total uncertainty can then be used to determine the AEP for different confidence levels, e.g. P90 being the production exceeded with a probability of 90%.
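
A minimal sketch of this last step, assuming a normal distribution of the total energy uncertainty around the P50 (function name and example values chosen here for illustration):

```python
from scipy.stats import norm

def exceedance_level(p50_gwh, total_uncertainty, probability=0.90):
    """AEP reached with a given probability of exceedance, assuming the total
    (energy) uncertainty is normally distributed around the P50 (sketch)."""
    z = norm.ppf(1.0 - probability)          # e.g. -1.2816 for the P90
    return p50_gwh * (1.0 + z * total_uncertainty)

# Example: P50 = 100 GWh/a with 12 % total uncertainty
print(round(exceedance_level(100.0, 0.12), 1))   # about 84.6 GWh/a
```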

It is common to present uncertainty results for different time horizons, from 1 to 20 years.