
In this chapter, the basic theory and the procedures used to obtain turbulent fluxes of energy, mass, and momentum with the eddy covariance technique will be detailed. This includes a description of data acquisition, pretreatment of high-frequency data and flux calculation.

3.1 Data Transfer and Acquisition

The data transfer and acquisition mainly depend on the output data types and the measuring frequency of the devices. Different methods are distinguished with respect to digital or analog output signals from the sonic anemometer, the gas analyzer, or any additional device. The main requirement for instruments and data acquisition systems used for eddy covariance measurements is a response time short enough to resolve fluctuations up to 10 Hz. This means that the sampling frequency has to be high enough to cover the full range of frequencies carrying the turbulent flux, usually leading to a sampling rate of 10–20 Hz. Data acquisition in general should be flexible with respect to sampling frequency and may depend on the devices used (data logger versus personal computer; type of sonic anemometer or gas analyzer).

One needs to distinguish between two major groups of data acquisition systems, namely data loggers and computers. Explicit advantages of data loggers are robustness, compactness, reliable behavior in difficult conditions (low temperature, high humidity), and, above all, low power consumption, which makes such a system the preferred choice for a solar-powered eddy covariance site, especially in remote places where line power is not available. In this case, however, open-path gas analyzers would be preferred over closed-path gas analyzers, the latter needing a pump that consumes significantly more energy. If frequent station supervision and data collection are not feasible, immediate processing of mean data by a logger may be advisable. In this case, the user should ensure that not only the corrected fluxes but also the raw means are logged for separate postprocessing. Another convenience of data loggers is that sensors, instruments, or devices with various output signals can be used simultaneously. The data logger can handle analog output signals, data sent through an RS232 serial interface, or, in the case of Campbell Scientific data loggers, data following the Synchronous Device for Measurement (SDM) protocol. Figures 3.1, 3.2, 3.3 and 3.4 show examples for LiCor and ADC gas analyzers. Other analyzers, such as the Los Gatos CH4 analyzer or the Picarro CH4/CO2/H2O analyzer, also fit some of the following schemes.

Disadvantages of data logger-based systems are that graphical representation of raw or calculated data is much more complicated to realize, of much lower quality, or even impossible, and that raw data are usually stored in one large file which later needs to be split into files of convenient length, for example, 30 min. This could also be done online if the data logger is connected to a computer, but in this case the benefit of low power consumption may be canceled out.

Data collection with a computer equipped with one of the numerous eddy covariance software packages generally requires a connection to mains power. Nevertheless, small cap rail computers with low power consumption are available on the market, which may be used with an affordable solar power supply. Except for some laptops, computers and most data acquisition software packages (e.g., EddySoft) can be configured to restart after a power-down event so that no operator action is required. There are several advantages of using a computer to acquire eddy covariance data: raw data are stored in files of the desired length and format; visual user interfaces allow the operator to apply or modify any program settings in an easy way; raw data as well as processed data such as fluxes can be represented in colorful graphs and tables; with some software packages, more than one instance of the data acquisition program can run on the same computer, allowing simultaneous acquisition from several eddy systems (e.g., useful for flux profiles). Finally, the computer can be used for other tasks besides collecting the eddy covariance data, such as flux data postprocessing; communication with a data logger to archive meteorological data; communication with a web camera capturing phenological images; transmission of data, images, and graphs via modem or network; data backup on peripheral storage devices; and collection of other data such as status information from the eddy covariance instruments.

If a computer is used, and particularly if it runs under Windows®, it is recommended to transmit all data of one eddy covariance system through a single data stream. Since Windows® is not a real-time operating system, it cannot be guaranteed that data transmitted to different input lines of the computer (e.g., several COM ports) remain synchronized over long periods. In practice, this means that the data should be sent to the computer via one physical or virtual COM port (RS232, USB, or Ethernet). This implies that there must be an instrument or device in front of the computer that merges the data from the different components of the eddy covariance instrumentation into one data stream. In many cases, this merging device is the sonic anemometer (Figs. 3.1 and 3.2), because many producers of sonic anemometers provide their instruments with analog input channels (Gill, Metek, Thies, Young). This is realized by embedded analog-to-digital converters (ADCs) or by optional external analog input boxes. The quality and resolution of the ADCs can vary significantly. With this solution, the analog output signals of the gas analyzers are digitized by the ADCs, and the digital data are then merged with the sonic data and sent to the computer. The general drawback of this procedure is that in many cases two conversions are carried out: modern gas analyzers operate internally on a digital basis, so producing an analog output signal requires a first digital-to-analog conversion (DAC). The data must then be converted back to a digital representation to be transmitted to the computer. These two conversions may of course reduce the signal quality (see further Chap. 4).

Fig. 3.1
figure 1_3

Examples for data acquisition of sonic anemometer and various gas analyzers with a computer for analog and digital data streams. The left path describes the digitization of the analog signals by the sonic anemometer; the right path describes the direct digital data transmission from all devices to a computer. The gray boxes at the bottom show the disadvantages of each of the configurations

Another option is offered by LiCor Biosciences in conjunction with their newer gas analyzers. An interface box is available to which the digital signals of the gas analyzers as well as additional analog signals can be connected. In this case, the idea is to connect analog output signals from sonic anemometers, which are digitized, merged with the digital data stream of one or more gas analyzers, and digitally transmitted to the computer (Fig. 3.2) or data logger (Fig. 3.3). Again, the drawback is the double conversion; in this case, the DACs of some sonic anemometers are quite limited in resolution. On the other hand, the same interface box is able to convert the digital data of the gas analyzers to analog signals, which can then be connected to the analog input channels of a sonic anemometer (Fig. 3.2).

Fig. 3.2
figure 2_3

Examples for data acquisition of sonic anemometer and various gas analyzers with a computer for analog and digital data streams. An intelligent interface box (LI-7550, LiCor…) is able to merge different signal inputs (analog or ethernet) and data output can be realized via ethernet or RS232 (left path) or as analog signals (right path). The gray box at the bottom shows the disadvantages of the configuration

Fig. 3.3
figure 3_3

Examples for data acquisition of sonic anemometer and various gas analyzers with a data logger for analog and digital data streams. The same interface box (LI-7550, LiCor…) as described in Fig. 3.2 might be used (right path), or various devices can be connected to the data logger directly (left path). Both paths can be used simultaneously. The gray box at the bottom shows the disadvantages of the configuration

Two system configurations which are less common should also be mentioned:

  • A system described by Eugster and Plüss (2010) operates “fully digitally,” which means that digital data from all instruments of the system are transmitted to a computer via independent COM ports (Fig. 3.1, path on the right-hand side). The problem of synchronization is handled by a Linux operating system.

  • A system where a data logger acts as the merging device, to which the sonic anemometer and gas analyzers are connected, and the raw data are transmitted to a computer via RS232 at high frequency. This system is very flexible because instruments with analog and digital outputs can be mixed in various combinations (Fig. 3.4).

    Fig. 3.4
    figure 4_3

    Examples for data acquisition of sonic anemometer and various gas analyzers with a computer for analog and digital data streams with a data logger being a merging and synchronizing device

Before running any of the possible data acquisition software tools, it has to be ensured that the hardware settings of the sonic anemometer and the analyzer are correctly entered into the software settings. Depending on the type of software used, the measurement frequency as well as the number and order of additional analog or serial input channels also have to be set. The sonic anemometer azimuth alignment has to be fixed in the acquisition software to obtain the horizontal wind components directly as an output, especially when real-time calculation of fluxes and wind direction is required, but also to enable correct postprocessing.

For closed-path CO2/H2O analyzers such as the LI-6262 or LI-7000, it may be possible to choose linearized or nonlinearized output signals. In the latter case, pressure and temperature signals from the analyzer also have to be sampled at high frequency. For any of the sampled signals, it has to be ensured that, in the case of voltage signals, the ranges and units are set correspondingly in the analyzer output and in the data acquisition software.

The wind components together with the speed of sound are determined by any of the different types of 3D ultrasonic anemometers, such as the Campbell CSAT3, Gill R2, R3, HS, or WindMaster (Pro), METEK, or Young (see also Sect. 3.2.1.1). Each of the anemometers has specific characteristics that have to be considered with respect to data acquisition: number of analog inputs, azimuth alignment, angle adjustment, tone settings (for Gill R3/HS), heating settings (for Metek USA-1), analog output full-scale deflection (for Gill R3/HS and WindMaster Pro), sensor head correction (for Metek USA-1), and analyzer type.

3.2 Flux Calculation from Raw Data

The transformation of high-frequency signals into means, variances, and covariances requires several steps that will be detailed below. First, the sensor output signals have to be transformed so as to represent micrometeorological variables (Sect. 3.2.1). Second, a series of quality tests have to be applied in order to flag and/or eliminate spikes and abrupt shifts that may appear in the raw signals due to electronic noise (Sect. 3.2.2). After that, variable averages, variances, and covariances have to be computed (Sect. 3.2.3). Variances and covariances require the computation of variable fluctuations, which in some cases requires detrending (Sect. 3.2.3.1). Covariances require, in addition, a determination of the time lag between the two variables that covary (Sect. 3.2.3.2). These procedures provide estimates of means, variances, and covariances expressed in an axis system associated with the sonic anemometer. A rotation is then needed to express these variables in a coordinate frame linked to the ecosystem under study (Sect. 3.2.4).

3.2.1 Signal Transformation in Meteorological Units

3.2.1.1 Wind Components and Speed of Sound from the Sonic Anemometer

The operating principles of sonic anemometers are described in Sect. 2.3 and in several publications and textbooks (Cuerva et al. 2003; Kaimal and Businger 1963; Kaimal and Finnigan 1994; Schotanus et al. 1983; Vogt 1995). The sonic anemometer output provides the three wind components in an orthogonal axis system associated with the instrument, as well as the sound velocity, c. This variable depends on air density and thus on atmospheric pressure (p), vapor pressure (e), and absolute air temperature (θ):

$$ c = \sqrt {{\gamma \cdot R/{m_{\text{d}}} \cdot \theta \cdot (1 + 0.32 \cdot e/p)}} $$
(3.1)

where R = 8.314 J K−1 mol−1 is the universal gas constant, \( m_{\text{d}} \) = 28.96 × 10−3 kg mol−1 the dry air molar mass, and \( \gamma \) = 1.4 the ratio of constant-pressure to constant-volume heat capacities. In practice, the sonic anemometer software computes the sonic temperature as (Aubinet et al. 2000; Schotanus et al. 1983):

$$ {\theta_{\text{S}}} = \frac{{{m_{\text{d}}}}}{{\gamma R}}\frac{{\left( {c_{{1}}^2 + c_2^2 + c_3^2} \right)}}{3} = \frac{1}{{403}}\frac{{\left( {c_1^2 + c_2^2 + c_3^2} \right)}}{3} $$
(3.2)

where \( c_1 \), \( c_2 \), and \( c_3 \) correspond to the speed of sound measured along each sonic anemometer axis.

However, this temperature deviates from the real absolute temperature (θ) by 1–2% as it does not take the dependence of sound velocity on vapor pressure (e) into account. The relation between sonic temperature and real absolute temperature is given by Kaimal and Gaynor (1991):

$$ {\theta_{\text{s}}} = \theta \cdot \left( {1 + 0.32 \cdot e/p} \right) $$
(3.3)

This is almost equal to the virtual temperature \( {\theta_{\text{v}}} \), defined as:

$$ {\theta_{\text{v}}} = \theta \cdot \left( {1 + 0.38 \cdot e/p} \right) $$
(3.4)

As a result, \( {\theta_{\text{s}}} \) can be directly used to estimate the buoyancy flux and thus the stability parameter \( (h_{\text{m}} - d)/L \). However, for sensible heat flux estimates, a correction based on Eq. 3.3 and requiring an independent vapor pressure measurement (SND correction) is necessary. It is described in detail in Sect. 4.1.2.
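Assuming an independent vapor pressure measurement is available, Eq. 3.3 can simply be inverted to recover the true air temperature from the sonic temperature. A minimal sketch (the function name and units are illustrative, not from the text):

```python
def true_temperature(theta_s, e, p):
    """Absolute air temperature from sonic temperature via Eq. 3.3:
    theta_s = theta * (1 + 0.32 * e / p), solved for theta.
    theta_s in K; e and p in the same pressure units."""
    return theta_s / (1.0 + 0.32 * e / p)
```

For dry air (e = 0) the sonic and true temperatures coincide; at e/p of about 1% the difference is roughly 1 K at 300 K, consistent with the 1–2% deviation quoted above.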

3.2.1.2 Concentration from a Gas Analyzer

The scalar intensity of an atmospheric constituent must be expressed in the conservation equation (e.g., Eqs. 1.19–1.25) in terms of mixing ratio. Infrared gas analyzers measure either density or molar concentrations and may convert them to mole fractions either with or without correction for water vapor (Sect. 2.4.1). The signal conversion to mixing ratios requires the knowledge of high-frequency air density fluctuations and therefore an estimate of high-frequency air temperature and humidity fluctuations. In the closed-path system, the former is neglected, considering that temperature fluctuations are damped due to the air passage through the tube (for more detail, see Sect. 4.1.2.3), and the latter is taken into account by the analyzer if this option is available in the analyzer software (among others, LI-COR 6262) and chosen by the user. If this is not the case (among others, LI-COR 7000), the signal conversion must be done during data postprocessing. In the open-path system (among others, LI-COR 7500), none of these corrections are accounted for and they must be applied during data postprocessing.

In case of a linearized analog output mode, the output of a gas analyzer is thus a voltage signal \( V_{\chi} \) (\( V_{\rho} \)) related to the molar mixing ratio (density). The relation has to be determined from the settings in the data acquisition software, namely the maximum voltage output \( V_{\max} \), which relates to a maximum mixing ratio (density) of the trace gas \( \chi_{\text{smax}} \) (\( \rho_{\text{smax}} \)), and a zero voltage, which corresponds to a minimum mixing ratio (density) \( \chi_{\text{smin}} \) (\( \rho_{\text{smin}} \)):

$$ {\chi_{\text{s}}} = {\chi_{\text{smin}}} + \frac{{{\chi_{\text{smax}}} - {\chi_{\text{smin}}}}}{{{V_{{{ \max }}}}}}{V_{{{\chi }}}} $$
(3.5a)
$$ {\rho_{\text{s}}} = {\rho_{\text{smin}}} + \frac{{{\rho_{\text{smax}}} - {\rho_{\text{smin}}}}}{{{V_{{{ \max }}}}}}{V_{{{\rho }}}} $$
(3.5b)

\( \chi_{\text{smax}} \) (\( \rho_{\text{smax}} \)) as well as \( \chi_{\text{smin}} \) (\( \rho_{\text{smin}} \)) have to be set according to the expected values of mixing ratios (densities) at the site in order to optimize the analyzer’s resolution, and calibration gases with mixing ratios within this range should be used (see Sect. 2.3.4). This equation for the determination of trace gas mixing ratios (densities) applies to many gas analyzers with analog output. Newer sensors provide mixing ratios directly as digital signals.
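The linear scaling of Eq. 3.5a reduces to a single line of code. A minimal sketch, with the range parameters taken from the data acquisition settings (names are illustrative):

```python
def voltage_to_mixing_ratio(v_chi, v_max, chi_min, chi_max):
    """Linear scaling of a linearized analog analyzer output (Eq. 3.5a):
    0 V maps to chi_min and v_max maps to chi_max."""
    return chi_min + (chi_max - chi_min) / v_max * v_chi
```

For example, with a 0–5 V output spanning 300–500 ppm, a reading of 2.5 V corresponds to 400 ppm. Eq. 3.5b for densities is identical in form.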

3.2.2 Quality Control of Raw Data

Quality control of flux data is the second step of processing. High-frequency raw data often contain impulse noise, that is, spikes, dropouts, constant values, and electronic noise. Spikes in raw data can be caused by instrumental problems, such as imprecise adjustment of the transducers of ultrasonic anemometers, insufficient electric power supply, and electronic noise, as well as by water contamination of the transducers, bird droppings, cobwebs, etc., or rain drops and snowflakes in the path of the sonic anemometer. Some instruments issue error flags in case of suspect data (e.g., USA-1, CSAT3, LI-7500).

Spikes can usually be detected by their amplitude, duration, or abruptness of occurrence. Besides checks for the exceedance of physical limits and standard deviations, Højstrup (1993) suggested a procedure which defines thresholds by a point-to-point autocorrelation. Further, Vickers and Mahrt (1997) developed test criteria for the quality control of turbulent time series independent of the statistical distribution, with a focus on instrument malfunctions.

Any spike detection and elimination modifies the data. In particular, the means and variances of an averaging interval that are used as test criteria change. As a consequence, quality assessment is an iterative process (e.g., Schmid et al. 2000). However, the change in the measured data also implies that each test is site-specific, has to be applied carefully, and should not result in a simple removal of single samples or complete averaging intervals, but in an application of meaningful flags. As introduced by Vickers and Mahrt (1997), hard flags are commonly used to identify artifacts introduced by instrumental or data recording problems, and soft flags are used to identify statistically abnormal behavior which appears physical but disturbs the further statistical evaluation or indicates nonstationary time series (Sect. 4.3.2.1). Detected hard spikes should be checked visually, either to confirm a setup artifact and discard the data or to switch to a soft spike. Data flagged with a soft flag, indicating limited data quality, can be used for some purposes but not for standard data analyses.

The first step of data quality flagging includes checks for physical limits (wind velocity range, temperature range, and realistic trace gas concentrations). The thresholds should not be chosen too narrowly and can account for the seasonal cycle (especially for temperature) to avoid any truncation of the measuring signal. Examples are:

  • Horizontal wind velocity: |u| < 30 m s−1

  • Vertical wind velocity: |w| < 5 m s−1 (close to the surface)

  • Sonic temperature: |θ s −θ m| < 20 K (θ m: monthly mean temperature)

Site and instrument-specific thresholds can be derived from typical frequency distributions of time series that are representative for the majority of meteorological conditions. The thresholds must be corroborated by direct inspection of the time series where unusual ranges were detected. Spikes detected with these thresholds are marked with a hard flag.
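A sketch of such a hard-flagging step, using the example thresholds above (the limits are illustrative and must be adapted to the site and instruments):

```python
import numpy as np

def hard_flag_physical_limits(u, v, w, theta_s, theta_m):
    """Hard-flag samples outside plausible physical ranges,
    using the example thresholds from the text."""
    u, v, w, theta_s = map(np.asarray, (u, v, w, theta_s))
    flag = (np.hypot(u, v) >= 30.0)              # horizontal wind limit
    flag |= (np.abs(w) >= 5.0)                   # vertical wind limit
    flag |= (np.abs(theta_s - theta_m) >= 20.0)  # sonic temperature limit
    return flag  # True marks a hard-flagged sample
```

Samples flagged here are candidates for discarding; as the text notes, the flags should be confirmed by visual inspection of the time series.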

In a second step, the data could be checked relative to the standard deviation σ of the average interval. Schmid et al. (2000) proposed that each value χ i within a time series which deviates more than the product of a discrimination factor (e.g., D = 3.5) and the standard deviation σ j from the mean value \( {\overline \chi_j} \) should be characterized as a spike. For more selective filtering, subintervals (j) of the average interval are used to define the standard deviation and mean.

$$ \left| {{\chi_i} - {\bar{\chi }_j}} \right| \geqslant D \cdot {\sigma_j}\quad \xrightarrow{{}}\quad {\text{spike}} $$
(3.6)

These data windows should comprise most of the variance of the variable on a local scale. Schmid et al. (2000) applied 15-min windows, whereas Vickers and Mahrt (1997) used moving windows of 5 min length.

As the standard deviation decreases with the elimination of spikes, the test should be repeated until either no new spikes are found or the maximum number of iterations is reached. The discrimination factor should be increased with each iteration step (k) by a constant term (e.g., \( D_k = 3.5 + 0.3k \)).

A soft spike is registered if the fluctuation from the mean is larger than the threshold value. As a second condition, the duration of the deviation can be used; for example, a spike should be shorter than 0.3 s (Schmid et al. 2000).
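The iterative test of Eq. 3.6 with an increasing discrimination factor might be sketched as follows; for brevity, the statistics are computed over the whole averaging interval rather than over subintervals, and the duration condition is omitted:

```python
import numpy as np

def iterative_despike(x, d0=3.5, d_step=0.3, max_iter=10):
    """Flag samples deviating from the interval mean by more than
    D_k * sigma (Eq. 3.6), with D_k = d0 + d_step * k per iteration."""
    x = np.asarray(x, dtype=float)
    spikes = np.zeros(x.size, dtype=bool)
    for k in range(max_iter):
        good = ~spikes
        sigma = x[good].std()
        if sigma == 0.0:
            break
        new = good & (np.abs(x - x[good].mean()) >= (d0 + d_step * k) * sigma)
        if not new.any():
            break
        spikes |= new
    return spikes
```

Because mean and standard deviation are recomputed from the unflagged data only, later iterations use a sharper threshold even though D grows, as described above.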

More complex approaches perform despiking with respect to the difference between consecutive data points. Højstrup (1993) applied a point-to-point autocorrelation method using an exponential filter function. Each individual value \( \chi_i \) is compared with a test value \( \chi_{{\text{t,}}i} \) calculated from the course of the previous time series, according to:

$$ {\chi_{{{\text{t,}}i}}} = {\chi_{{i{ - 1}}}}{R_{{{\text{M,}}i}}} + {{\rm X}_{{{\text{M,}}i}}}(1 - {R_{{{\text{M,}}i}}}) $$
(3.7a)

where the mean (X M,i ) is computed as

$$ {{\rm X}_{{{\text{M,}}i}}} = {{\rm X}_{{{\text{M,}}i{ - 1}}}}\left( {1 - 1/M} \right) + {\chi_i}/M $$
(3.7b)

the auto correlation coefficient (R M,i ) as

$$ {R_{{{\text{M,}}i}}} = \frac{{{R_{{{\text{M,}}i{ - 1}}}}\left( {1 - 1/M} \right)\sqrt {{{\sigma^2}_{{{\text{M,}}i{ - 1}}}{\sigma^2}_{{{\text{M,}}i{ - 2}}}}} + \left[ {\left( {{\chi_i} - {{\rm X}_{{{\text{M,}}i}}}} \right)\left( {{\chi_{{i{ - 1}}}} - {{\rm X}_{{{\text{M,}}i{ - 1}}}}} \right)} \right]/M}}{{\sqrt {{{\sigma^2}_{{{\text{M,}}i}}{\sigma^2}_{{{\text{M,}}i{ - 1}}}}} }} $$
(3.7c)

and the standard deviation (σ M,i ) as

$$ {\sigma^2}_{{{\text{M,}}i}} = {\sigma^2}_{{{\text{M,}}i{ - 1}}}\left( {1 - 1/M} \right) + {\left( {{\chi_i} - {{\rm X}_{{{\text{M,}}i}}}} \right)^2}/M $$
(3.7d)

The memory of the filter is characterized by a number of points M; however, it is rather a filter constant, as the influence of previous points on subsequent test values decreases with time distance but is theoretically infinite. During the process, the filter memory is adjusted to the varying autocorrelation R M:

$$ M = \frac{{ - 230}}{{\ln \left( {\left| {{R_{{{\text{M,}}\,i}}}} \right|} \right)}}\quad \mathit{with}\quad 0.1\, < \,\left| {{R_{{{\text{M,}}\,i}}}} \right|\, < \,0.99 $$
(3.7e)

The comparison is made according to:

$$ \left| {{\chi_i} - {\chi_{{{\text{t,}}i}}}} \right| \geqslant D \cdot {\sigma_{{{{\chi - \chi t}}}}}\quad \xrightarrow{{}}\quad {\mathit{spike}} $$

where \( \sigma_{\chi - \chi_{\text{t}}} \) is the standard deviation of the differences between the test values and the actual data points. The discrimination factor D is set between 3.3 and 4.9, depending on the acceptable probability of exceeding the threshold.
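A much simplified sketch of this point-to-point test is given below; it keeps the recursive mean and test value (Eqs. 3.7a, 3.7b) but holds the autocorrelation and the memory fixed instead of updating them via Eqs. 3.7c and 3.7e, so it illustrates the structure of the method rather than reproducing it:

```python
import numpy as np

def hojstrup_despike(x, m=100, r=0.9, d=4.0):
    """Simplified point-to-point spike test after Hojstrup (1993):
    recursive mean and test value with fixed memory m and fixed
    autocorrelation r (the adaptive updates are omitted for brevity)."""
    x = np.asarray(x, dtype=float)
    xm = x[0]
    t = np.empty_like(x)
    t[0] = x[0]
    for i in range(1, x.size):
        xm = xm * (1.0 - 1.0 / m) + x[i] / m  # running mean, Eq. 3.7b
        t[i] = x[i - 1] * r + xm * (1.0 - r)  # test value, Eq. 3.7a
    resid = x - t
    return np.abs(resid) >= d * resid.std()   # comparison step
```

Because the test value is built mainly from the previous sample, an isolated spike produces a large residual on its upward leg (and typically also on the downward leg), which is exactly what this class of methods exploits.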

Clement (2004) proposed a similar approach based on the difference between consecutive data points, which additionally addresses dropouts (a corresponding subroutine is implemented in the flux calculation software EdiRe, University of Edinburgh, Institute of Atmospheric and Environmental Science). The threshold for the deviation of \( \Delta\chi \) from the mean of the differences \( \overline {\Delta \chi } \) is set relative to the standard deviation \( \sigma_{\Delta\chi} \) of \( \Delta\chi \) over the whole averaging interval:

$$ \left| {\Delta {\chi_i} - \overline {\Delta \chi } } \right| \geqslant D \cdot {\sigma_{{{{\Delta \chi }}}}}\quad \xrightarrow{{}}\quad {\text{spike}} $$

Detected differences are suggested as an upward or downward leg of a spike. The procedure searches within a predefined window around a detected difference for the corresponding leg. The interval between the two legs is then corrected by an offset function including the slope of the interval.
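The difference-based criterion itself is compact; a simplified sketch (the pairing of up and down legs and the offset correction between them described above are not shown):

```python
import numpy as np

def difference_spikes(x, d=5.0):
    """Flag point-to-point differences deviating from the mean difference
    by more than d standard deviations (after Clement 2004, simplified)."""
    dx = np.diff(np.asarray(x, dtype=float))
    return np.abs(dx - dx.mean()) >= d * dx.std()  # True marks a spike leg
```

An isolated spike yields two flagged legs (one upward, one downward); the full procedure then searches a predefined window around each flagged difference for the matching leg.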

The last two methods need to be parameterized very carefully to avoid false data exclusion. Indeed, the parameters are so sensitive that physically valid data could inappropriately be eliminated. Despite this, they are very helpful for automatically detecting dropouts and spikes that are not found by the previous methods.

Further tests aim to detect variances outside a defined valid range (a variance that is either too small or too large is flagged). Unusually large skewness or kurtosis values and large discontinuities can be detected using the Haar transform (Vickers et al. 2009). A large kurtosis in time series of the sonic temperature can, for example, indicate water on the transducers (Foken et al. 2004). These tests are preferably applied to moving windows of 10–15 min width.

The eliminated spikes leave gaps in the time series that need to be filled, especially when spectral analysis has to be performed on the data. For short gaps, this is frequently done by interpolation using Gaussian random numbers with the appropriate mean and standard deviation, or by the model of Højstrup (1993). Linear interpolation can lead to a systematic error and is not recommended. Time series with more than 1% spikes should be excluded from further statistical analysis (Foken 2008).
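The first of the gap-filling options mentioned above, drawing Gaussian random numbers from the mean and standard deviation of the remaining good data, can be sketched as:

```python
import numpy as np

def fill_gaps_gaussian(x, spikes, seed=None):
    """Replace flagged samples by Gaussian random numbers with the mean
    and standard deviation of the remaining (good) data."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).copy()
    good = ~spikes
    x[spikes] = rng.normal(x[good].mean(), x[good].std(), spikes.sum())
    return x
```

This preserves the first two moments of the series but not its spectral shape, which is why it is only advisable for short gaps.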

However, the application of these methods needs to be examined carefully. It is known that physically plausible behavior and instrument problems overlap in parameter space. This underscores the importance of visual inspection, either to confirm or reject flags raised by the automated set of tests.

3.2.3 Variance and Covariance Computation

3.2.3.1 Mean and Fluctuation Computations

The variance of any variable \( {\chi_{\text{s}}} \) is computed as

$$ \overline {{{\chi ^\prime}_{\text{s}}}^2} = \frac{1}{{N - 1}}\sum\limits_{{j = 1}}^N {({\chi_{\text{s}}}_j - \overline {{\chi_{\text{s}}}} } {)^2} $$
(3.8)

where N is the number of samples, χ s the scalar of interest, \( {\chi ^\prime_{\text{s}}} \) its fluctuating part, and \( \overline {{\chi_{\text{s}}}} \) its nonfluctuating part, that is, that part of the time series that does not represent turbulence, for example, the arithmetic mean.

The covariance of any wind component u k or scalar χ s with another wind component u i is calculated as

$$ \overline {{{\chi ^\prime}_{\text{s}}}{{u^\prime}_i}} = \frac{1}{N}\sum\limits_{{j = 1}}^N {\left[ {\left( {{\chi_{\text{s}}}_j - \overline {{\chi_{\text{s}}}} } \right)\left( {{u_{{i,j}}} - \overline {{u_i}} } \right)} \right]} = \frac{1}{N}\sum\limits_{{j = 1}}^N {{{\chi ^\prime}_{\text{s}}}_j{{u^\prime}_{{i,j}}}} $$
(3.9a)
$$ \overline {{{u^\prime}_k}{{u^\prime}_{\text{i}}}} = \frac{1}{N}\sum\limits_{{j = 1}}^N {\left[ {\left( {{u_{{k,}}}_j - \overline {{u_k}} } \right)\left( {{u_{{i,j}}} - \overline {{u_i}} } \right)} \right]} = \frac{1}{N}\sum\limits_{{j = 1}}^N {{{u^\prime}_{{k{,}j}}}{{u^\prime}_{{i,j}}}} $$
(3.9b)

where \( {u_k} \), with k = 1, 2, 3, represents the wind components u, v, or w.
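Eqs. 3.8 and 3.9 translate directly into code; a minimal sketch with fluctuations taken as deviations from the block average:

```python
import numpy as np

def variance(chi):
    """Sample variance with the 1/(N-1) normalization of Eq. 3.8."""
    chi = np.asarray(chi, dtype=float)
    return np.sum((chi - chi.mean()) ** 2) / (chi.size - 1)

def covariance(a, b):
    """Covariance of two series (Eq. 3.9), fluctuations taken as
    deviations from the block average, with 1/N normalization."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.mean((a - a.mean()) * (b - b.mean()))
```

The choice of what is subtracted as the nonfluctuating part (block average, linear trend, or filtered series) is exactly the subject of the following paragraphs.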

In practice, the averages of \( {\chi_{\text{s}}} \) and u i may be computed in several ways. The first approach, referred to as block averaging (BA), is

$$ {\overline {{\chi_{\text{s}}}}_{\text{BA}}} = \frac{1}{N}\,\sum\limits_{{j = 1}}^N {{\chi_{{{\text{s,}}j}}}} $$
(3.10)

It has the advantage over the alternatives that it dampens the low-frequency part of the turbulence signal least. However, when there is a need to remove a trend from the time series, due to instrumental drift or a synoptic change in atmospheric conditions, block averaging is not sufficient to calculate fluctuations from turbulence data. To remove these undesired contributions from the time series, two other types of high-pass filtering are mainly used, namely linear detrending, where the nonfluctuating part is calculated as

$$ {\overline {{\chi_{\text{s}}}}_{{{\text{LD,}}j}}} = {\beta_1}{t_j} + {\beta_{{0}}} $$
(3.11)

where β 0 and β 1 are the intercept and slope of a linear regression of χ s with time (e.g., Draper and Smith 1998). Another way of defining the nonfluctuating part is to calculate the autoregressively filtered time series, which is sometimes incorrectly called a running mean:

$$ {\overline \chi_{\text{s}}}_{{{\text{AF,}}j}} = \alpha {\chi_{{{\text{s,}}j}}} + \left( {1 - \alpha } \right) \cdot {\overline \chi_{\text{s}}}_{{{\text{AF,}}j{ - 1}}} $$
(3.12)

where α is the filter constant, related to the cut-off frequency, f c, and the sampling frequency, f s, as

$$ \alpha = 1 - {e^{{ - 2\pi \left( {{f_{\text{c}}}/{f_{\text{s}}}} \right)}}} $$
(3.13)

The different detrending algorithms were compared by Rannik and Vesala (1999), Culf (2000), and Moncrieff et al. (2004). Due to the nature of turbulence, which varies over several orders of magnitude in frequency, high-pass filtering does not only remove undesired contributions to the covariance but also low-frequency contributions to the flux occurring at the same time scales, which must be corrected for. The theoretical work by Lenschow et al. (1994) and Kristensen (1998) provided spectral transfer functions for each of the three detrending methods (cf. Rannik and Vesala 1999). To calculate unbiased and complete fluxes, any covariance, irrespective of the detrending method used, must be corrected for high-pass filtering losses. The application of these functions to correct fluxes is, however, limited, as the low-frequency part of the cospectra cannot be measured and is thus not well defined (Kaimal and Finnigan 1994). Techniques allowing the evaluation of high-pass filtering errors are proposed in Sect. 4.1.2.2. Benefits and disadvantages of the different high-pass filtering methods are discussed in terms of flux uncertainty assessment in Sect. 7.3.3.1.
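The recursive filter of Eqs. 3.12 and 3.13 can be sketched as follows; the fluctuations entering the covariance computation are then the raw series minus this filtered series:

```python
import numpy as np

def autoregressive_mean(x, f_c, f_s):
    """Autoregressive (recursive) running mean, Eq. 3.12, with the
    filter constant alpha from Eq. 3.13."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * f_c / f_s)
    x = np.asarray(x, dtype=float)
    m = np.empty_like(x)
    m[0] = x[0]  # initialize the filter with the first sample
    for j in range(1, x.size):
        m[j] = alpha * x[j] + (1.0 - alpha) * m[j - 1]
    return m
```

For a typical eddy covariance setup one might use, for example, f_s = 10 Hz and a cut-off period of a few minutes; a constant input passes through unchanged, while fluctuations faster than the cut-off are strongly damped in the filtered series.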

3.2.3.2 Time Lag Determination

Application of Eq. 3.9a requires that the instantaneous quantities \( \chi_{{\text{s,}}j} \) and u j be measured at the same place and time. This is, however, generally impossible. Consequently, before applying Eq. 3.9a, the recorded time series must be shifted against each other by a certain time lag.

The delay between the two time series is mainly caused by differences in electronic signal treatment, by the spatial separation between wind and scalar sensors, and by air travel through the tubes in closed-path eddy covariance systems. Time delays caused by electronic signal treatment (signal conversion and computation) are generally relatively small, constant, and known, and can thus be accounted for directly. The delay due to sensor separation is more important: the air parcel needs some time to pass both instruments, which depends on wind speed, wind direction, and the distance between the sensors. New sensor developments aim at combining chemical and velocity measurements in one sampling volume. Larger lag times, as common for closed-path systems, comprise the time needed for the air to travel from the intake to the measurement cell in the analyzer. This delay depends on the inner volume of the air-conducting parts of the eddy covariance system (filters, tubes, valves, and detection cell), on the mass flow through the system (and thus may vary with pump aging and filter contamination), and on the considered gas. Indeed, larger time delays may be observed if gases interact with the tube walls, which is notably the case for water vapor (Ibrom et al. 2007a, b; Massman and Ibrom 2008).

Two procedures are generally used to estimate the lag time. In closed-path eddy covariance systems, where the most important delay is caused by air travel through the tubes, a mass flow controller can be installed in the pumping system so that the time lag can be considered constant. In these conditions, it could be estimated once at the beginning of the measurement period and the time series lagged by this constant value during the whole campaign. It is, however, necessary to check this value with empirical methods, because wall interactions are very likely to introduce additional time lags.

Lag times can be estimated for each averaging interval by performing a cross-correlation analysis between the scalar of interest and the vertical wind component. This consists of comparing the correlations between the two signals lagged by different delays (Fig. 3.5). The time lag selected is the one that produces the highest correlation. However, this procedure could yield ambiguous lag times, especially when the correlation is small. A feasible automatic procedure to determine lag times could thus use a defined search window, as determined from mass flow, tube dimensions, and typical wall interactions at times with high enough fluxes (Aubinet et al. 2000; Kristensen et al. 1997; Lee and Black 1994; Moncrieff et al. 1997). In cases where these limits are exceeded, as well as in cases where the change in the lag time is too abrupt, it is recommended to use the value of the preceding averaging interval. Especially for H2O lag times, it can also be useful to determine their dependency on relative humidity and use this dependency for further lag time determinations. The lag times for each variable and each averaging interval then have to be included in further postprocessing steps.
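The cross-correlation search described above can be sketched as follows (a minimal sketch; the sampling rate, window width, and the function name find_lag are illustrative assumptions, not values prescribed in the text):

```python
import numpy as np

def find_lag(w, c, fs=10.0, max_lag_s=10.0):
    """Estimate the delay (s) of scalar series c behind vertical wind w.

    Scans integer-sample lags inside a search window and returns the lag
    that maximizes the absolute cross correlation of the demeaned series.
    """
    w = w - w.mean()
    c = c - c.mean()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(0, max_lag + 1)
    # pair w(t) with c(t + k): a maximum at k means c lags w by k samples
    corr = [abs(np.mean(w[: len(w) - k] * c[k:])) if k else abs(np.mean(w * c))
            for k in lags]
    return lags[int(np.argmax(corr))] / fs
```

In an operational setting, the search window would be constrained by the expected tube travel time, and ambiguous maxima (low correlation) would be replaced by the lag of the preceding interval, as recommended above.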

Fig. 3.5

Time lag determination, as an example, for CO2 and H2O compared to the vertical wind component w. Dashed lines represent the cross correlation of CO2 and w for day- and nighttime, and the solid lines represent the cross correlation of H2O and w for daytime. Data were acquired at Maun, Botswana, on DOY 58 in 1999 at 0230 and 1100 h. Tube length was about 7 m at an inner diameter of one-eighth inch; the flow rate was about 7 l min−1

3.2.4 Coordinate Rotation

3.2.4.1 Requirements for the Choice of the Coordinate Frame and Its Orientation

Each term in the mass balance (Eq. 1.12) is a scalar and is therefore independent of the coordinate frame. The individual components of the divergence term (all terms but the first on the left-hand side (LHS) of Eq. 1.13), however, can take different forms in different coordinate systems. As measurements are generally taken from one single point, the coordinate frame must be chosen so that the sole divergence that can be measured (Term IV in Eq. 1.19) approximates the total divergence as closely as possible (Finnigan et al. 2003). This is the basic requirement guiding the choice of the coordinate frame and its orientation.

The setting up of the mass balance (Eq. 1.19) implicitly assumes the choice of a rectangular Cartesian coordinate system with the x direction parallel to the local mean wind vector, usually at the position of the sonic anemometer. The use of other coordinate systems, for example, the physical streamline or the surface-following coordinate systems, can be considered, especially in gentle topography, to facilitate the estimation of extra-terms in the mass balance equation, to combine several anemometers in the estimation of the terms of the mass balance equation, or to incorporate measurements in flow and transport models. These alternative coordinate systems will not be analyzed here. For a thorough discussion on this topic, see Finnigan (2004), Lee et al. (2004), and Sun (2007).

In order to determine the reference frame orientation, a homogeneous boundary layer is assumed, where the mean moments of the wind and the scalar field in the surface-normal, cross-streamline direction will be much larger than streamwise gradients (i.e., \( \frac{{\partial \overline {u^\prime{{\chi ^\prime}_{\text{s}}}} }}{{\partial x}},\frac{{\partial \overline {v^\prime{{\chi ^\prime}_{\text{s}}}} }}{{\partial y}} \ll \frac{{\partial \overline {w^\prime{{\chi ^\prime}_{\text{s}}}} }}{{\partial z}} \)). This hypothesis will obviously be met in one-dimensional, horizontally homogeneous mean wind fields above homogeneous sources/sinks, but also in two- or three-dimensional flows when the point measurement is not very close to abrupt changes in surface topography or in surface cover. We can consider this to be the case for micrometeorological sites chosen to avoid large inhomogeneities in topography and source distribution, which means most of the long-term flux study sites, even those in complex terrain. In these conditions, the desired orientation of the coordinate system, that is, the one that will yield the best approximation to the divergence using an anemometer at a single point, is obtained when the instrument is oriented in the plane spanned by the mean wind vector and the local normal to the surface.

If the vertical axis of the sonic is not aligned with the local normal to the surface, there will be cross-contamination among components of the flux divergence, known for short as “tilt errors.” It has been shown that the momentum flux is particularly sensitive to tilt errors (Wilczak et al. 2001). For a 1° tilt, the error is typically greater than 10% under moderately unstable conditions and can be as large as 100% under free convection conditions. Scalar fluxes are not as sensitive, with a tilt error usually less than 5% for small tilt angles (<2°), but the errors could potentially cause a systematic bias in annually integrated eddy fluxes (Lee et al. 2004, and references therein).

Usually, anemometers are fixed in a permanent position at a tower and it is not possible to align the anemometer coordinate system to the changing flow field. The operator should simply align the sonic in a reasonable, pragmatic way as close as possible to the requested orientation of the z-axis and depending on the technical constraints (usually aligning the z-axis to the gravity field or tilting the sonic according to the expected slope over steep terrains).

As a consequence of the misalignment of the sonic anemometer, a nonzero mean vertical wind component may appear (Heinesch et al. 2007). Measurement artifacts not discussed within this subsection, such as electronic problems (Grelle and Lindroth 1994; Wilczak et al. 2001), flow perturbation, or insufficient calibration of the anemometer, can also contribute to a spurious vertical wind component.

In order to avoid cross-contamination between the flux components due to the above-mentioned problems, it is highly recommended to perform a rotation on the data before further corrections are done. The generic way to apply a rotation scheme will be presented in the next section, and the definition of the rotation angles will be presented in Sect. 3.2.4.3.

3.2.4.2 Coordinate Transformation Equations

Three degrees of freedom are available, leading to three rotations characterized by the Euler angles α, β, and γ. The first, second, and third rotations are performed around the z-axis, the new y-axis, and the new x-axis, respectively. If the coordinate system is right-handed and if a positive rotation angle is defined as a counterclockwise rotation looking down the axis of rotation, these rotations can be expressed mathematically in matrix form by

$$ {R_{{{01}}}} = \left( {\begin{array}{*{20}{c}} {\cos \alpha } & {\sin \alpha } & 0 \\ { - \sin \alpha } & {\cos \alpha } & 0 \\ 0 & 0 & 1 \\ \end{array} } \right),{R_{{{12}}}} = \left( {\begin{array}{*{20}{c}} {\cos \beta } & 0 & {\sin \beta } \\ 0 & 1 & 0 \\ { - \sin \beta } & 0 & {\cos \beta } \\ \end{array} } \right),{R_{{23}}} = \left( {\begin{array}{*{20}{c}} 1 & 0 & 0 \\ 0 & {\cos \gamma } & {\sin \gamma } \\ 0 & { - \sin \gamma } & {\cos \gamma } \\ \end{array} } \right) $$
(3.14)

These rotations are applied successively starting from the wind vector in the sonic anemometer coordinates and ending in the wind vector in the desired coordinate system:

$$ \left( {\begin{array}{*{20}{c}} {{{\bar{u}}_{{3}}}} \\ {{{\bar{v}}_3}} \\ {{{\bar{w}}_3}} \\ \end{array} } \right) = {R_{{03}}}(\alpha, \beta, \gamma ).\left( {\begin{array}{*{20}{c}} {{{\bar{u}}_{{0}}}} \\ {{{\bar{v}}_0}} \\ {{{\bar{w}}_0}} \\ \end{array} } \right) $$
(3.15)

where: \( {R_{{03}}}(\alpha, \beta, \gamma ) = {R_{{23}}}(\gamma ).{R_{{12}}}(\beta ).{R_{{01}}}(\alpha ) \) is the matrix product of the three sequential rotation matrices.
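These matrix operations translate directly into code. The following sketch (function names are illustrative) composes R03 and applies Eqs. 3.15 and 3.17 with numpy:

```python
import numpy as np

def r03(alpha, beta, gamma):
    """Compose R03 = R23(gamma) . R12(beta) . R01(alpha) (Eqs. 3.14-3.15)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    r01 = np.array([[ca, sa, 0.0], [-sa, ca, 0.0], [0.0, 0.0, 1.0]])
    r12 = np.array([[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]])
    r23 = np.array([[1.0, 0.0, 0.0], [0.0, cg, sg], [0.0, -sg, cg]])
    return r23 @ r12 @ r01

def rotate(R, wind, cov):
    """Rotate the mean wind vector (Eq. 3.15) and the wind (co)variance
    matrix (Eq. 3.17) from sonic coordinates into the desired frame."""
    return R @ wind, R @ cov @ R.T
```

Since R03 is orthogonal, rotating an isotropic (co)variance matrix leaves it unchanged, which is a convenient sanity check for an implementation.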

For the scalar covariance matrix, it gives:

$$ \left( {\begin{array}{*{20}{c}} {\overline {{{u^\prime}_3}{{\chi ^\prime}_{\text{s}}}} } \hfill \\ {\overline {{{v^\prime}_3}{{\chi ^\prime}_{\text{s}}}} } \hfill \\ {\overline {{{w^\prime}_3}{{\chi ^\prime}_{\text{s}}}} } \hfill \\ \end{array} } \right) = {R_{{{03}}}}(\alpha, \beta, \gamma )\left( {\begin{array}{*{20}{c}} {\overline {{{u^\prime}_{{^0}}}{{\chi ^\prime}_{\text{s}}}} } \hfill \\ {\overline {{{v^\prime}_{{^0}}}{{\chi ^\prime}_{\text{s}}}} } \hfill \\ {\overline {{{w^\prime}_{{^0}}}{{\chi ^\prime}_{\text{s}}}} } \hfill \\ \end{array} } \right){ } $$
(3.16)

and, for the wind components (co)variance matrix, it gives:

$$ \left( {\begin{array}{*{20}{c}} {\overline {{{u^\prime}_3}{{u^\prime}_3}} } \hfill & {\overline {{{u^\prime}_3}{{v^\prime}_{{3}}}} } \hfill & {\overline {{{u^\prime}_3}{{w^\prime}_3}} } \hfill \\ {\overline {{{v^\prime}_3}{{u^\prime}_3}} } \hfill & {\overline {{{v^\prime}_{{3}}}{{v^\prime}_3}} } \hfill & {\overline {{{v^\prime}_3}{{w^\prime}_3}} } \hfill \\ {\overline {{{w^\prime}_3}{{u^\prime}_3}} } \hfill & {\overline {{{w^\prime}_3}{{v^\prime}_3}} } \hfill & {\overline {{{w^\prime}_3}{{w^\prime}_3}} } \hfill \\ \end{array} } \right) = {R_{{{03}}}}(\alpha, \beta, \gamma ).\left( {\begin{array}{*{20}{c}} {\overline {{{u^\prime}_{{^0}}}{{u^\prime}_{{^0}}}} } & {\overline {{{u^\prime}_{{^0}}}{{v^\prime}_{{^0}}}} } & {\overline {{{u^\prime}_{{^0}}}{{w^\prime}_{{^0}}}} } \\ {\overline {{{v^\prime}_{{^0}}}{{u^\prime}_{{^0}}}} } & {\overline {{{v^\prime}_{{^0}}}{{v^\prime}_{{^0}}}} } & {\overline {{{v^\prime}_{{^0}}}{{w^\prime}_{{^0}}}} } \\ {\overline {{{w^\prime}_{{^0}}}{{u^\prime}_{{^0}}}} } & {\overline {{{w^\prime}_{{^0}}}{{v^\prime}_{{^0}}}} } & {\overline {{{w^\prime}_{{^0}}}{{w^\prime}_{{^0}}}} } \\ \end{array} } \right){.}R_{{{03}}}^{\tau }(\alpha, \beta, \gamma ) $$
(3.17)

where\( R_{{03}}^{\tau } \) is the transposed \( {R_{{03}}} \).

This procedure will always be applied to each flux-averaging interval (typically 30 min averages).

Two methods are available to define these three rotation angles: the so-called double rotation (DR) and the planar-fit (PF) method. The DR has been used since the early years of eddy covariance measurements and is the most common and easiest method to use. The planar-fit method has been shown to have advantages over the DR method in complex terrain. Both will be presented in the next subsections.

3.2.4.3 Determination of Rotation Angles

3.2.4.3.1 Double Rotation

In this vector basis orientation, the z-axis is normal to and points away from the (30 min) mean local streamline and the x-axis is parallel to the (30 min) mean flow with x increasing in the direction of the flow.

In order to obtain the desired vector basis, the first rotation has to be performed to align the x-axis with the mean wind direction, forcing \( \bar{v} \) to 0 and resulting in the yaw angle α:

$$ {\alpha_{\text{DR}}} = {\tan^{{ - 1}}}\left( {\frac{{\overline {{v_{{0}}}} }}{{\overline {{u_0}} }}} \right) $$
(3.18)

The second rotation has to be performed to nullify \( \bar{w} \), resulting in the pitch angle β:

$$ {\beta_{\text{DR}}} = {\tan^{{ - 1}}}\left( {\frac{{\overline {{w_1}} }}{{\overline {{u_1}} }}} \right) $$
(3.19)

This DR scheme ends in what is termed a “natural wind system,” first introduced by Tanner and Thurtell (1969) and further described by McMillen (1988) and Kaimal and Finnigan (1994), among others.
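In code, the two DR angles can be computed from the mean wind components in sonic coordinates, for instance as follows (a sketch; the function name is illustrative, and arctan2 is used to avoid the quadrant ambiguity of tan−1):

```python
import numpy as np

def double_rotation_angles(u0, v0, w0):
    """Yaw and pitch angles of the double rotation (Eqs. 3.18 and 3.19).

    u0, v0, w0 are the (30 min) mean wind components in sonic coordinates.
    """
    alpha = np.arctan2(v0, u0)   # first rotation: forces the mean v to zero
    u1 = np.hypot(u0, v0)        # streamwise mean wind after the yaw rotation
    beta = np.arctan2(w0, u1)    # second rotation: forces the mean w to zero
    return alpha, beta
```

Applying the yaw and pitch rotations with these angles to the mean wind vector leaves a purely streamwise component, which is easy to verify numerically.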

After these two rotations, no further information can be extracted from the velocity vector, but there is still an infinity of orientations for the vector basis due to the last degree of freedom around the x-axis. A third rotation, introduced by McMillen (1988), was originally intended to minimize the \( \overline {v^\prime w^\prime} \) momentum flux. In practice, it was often found that this rotation results in an unphysical orientation of the vector basis and is thus not recommended anymore (Finnigan 2004). Instead, the anemometer vertical axis should be aligned as close to perpendicular to the underlying surface as possible, and then just the first two rotations have to be applied. In these conditions, Eqs. 3.14–3.17 are still valid, provided that R23 is the identity matrix.

The DR is an efficient way to level the anemometer to the surface in an idealized homogeneous flow and has the advantage of being usable online, even when the orientation of the anemometer is modified. However, drawbacks of the DR procedure became apparent when eddy covariance measurements were performed above nonflat terrain and on a long-term basis. Its limitations are the risk of over-rotation (if there is an electronic offset in the measurement of w, it will be interpreted erroneously as a tilt), the loss of information (information on a possible nonzero \( \overline w \) is missed), degradation of data quality (unrealistically large pitch angles in low wind speed conditions), and high-pass filtering of the data (it produces the undesirable effect of having turbulent time series that are discontinuous, see Lee et al. 2004).

3.2.4.3.2 Planar-Fit Method

More often, and especially above tall vegetation or complex terrain, a nonzero mean (30 min) vertical wind velocity may exist and has to be taken into account, as pointed out first by Lee (1998) and Paw U et al. (2000). An alternative rotation procedure, the so-called “planar-fit method,” was therefore proposed by Wilczak et al. (2001), based on the assumption that the vertical wind component averages to zero only over longer periods, usually weeks or longer, representing the different typical flow features of the site under investigation.

To define this reference system, a mean streamline plane is first defined on the basis of measurements made over periods long enough to encompass all wind directions and with a sample size that allows robust averaging. The z-axis is then fixed as perpendicular to this plane, the x-axis as the normal projection of the (30 min) mean wind velocity on this plane, and the y-axis as the normal to the two other axes. To obtain the mean streamline plane, a multiple linear regression is performed on the (30 min) wind components following:

$$ {\bar{w}_0} = {b_0} + {b_1}{\bar{u}_0} + {b_2}{\bar{v}_{{0}}} $$
(3.20)

from which the regression coefficients \( b_0 \), \( b_1 \), and \( b_2 \) are deduced. \( b_0 \) gives the instrumental offset in the vertical velocity component, which must be subtracted from \( w_0 \) in further calculations. \( b_1 \) and \( b_2 \) are used to determine the pitch (\( \beta_{\text{PF}} \)) and roll (\( \gamma_{\text{PF}} \)) angles. Combining Eqs. 42 and 44 of Wilczak et al. (2001), these angles can be obtained as

$$ \sin {\beta_{\text{PF}}} = \frac{{ - {b_{{1}}}}}{{\sqrt {{b_1^2 + b_2^2 + 1}} }},\,\cos {\beta_{\text{PF}}} = \frac{{\sqrt {{b_2^2 + 1}} }}{{\sqrt {{b_1^2 + b_2^2 + 1}} }} $$
(3.21)
$$ \sin {\gamma_{\text{PF}}} = \frac{{{b_2}}}{{\sqrt {{1 + b_2^2}} }},\,\cos {\gamma_{\text{PF}}} = \frac{1}{{\sqrt {{1 + b_2^2}} }} $$
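The regression of Eq. 3.20 and the angle computation of Eq. 3.21 can be sketched with an ordinary least-squares fit (the function name is illustrative, and filtering of low-wind periods is omitted here):

```python
import numpy as np

def planar_fit(u_bar, v_bar, w_bar):
    """Planar-fit coefficients and tilt angles (Eqs. 3.20 and 3.21).

    u_bar, v_bar, w_bar: arrays of (30 min) mean wind components covering
    a long period (weeks or more). Returns (b0, b1, b2, beta_pf, gamma_pf).
    """
    A = np.column_stack([np.ones_like(u_bar), u_bar, v_bar])
    (b0, b1, b2), *_ = np.linalg.lstsq(A, w_bar, rcond=None)
    norm = np.sqrt(b1**2 + b2**2 + 1.0)
    beta_pf = np.arcsin(-b1 / norm)                   # pitch angle (rad)
    gamma_pf = np.arcsin(b2 / np.sqrt(1.0 + b2**2))   # roll angle (rad)
    return b0, b1, b2, beta_pf, gamma_pf
```

The offset b0 is then subtracted from the vertical velocity, while the pitch and roll angles are held fixed for all averaging intervals of the fitting period.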

The pitch and roll rotations are applied with these fixed angles to each individual (30 min) period that was used for the determination of the coefficients. The z-axis of the reference coordinate system is perpendicular to the long-term local mean streamline plane. Finally, the yaw rotation is applied for each individual period with a varying angle:

$$ {\alpha_{\text{PF}}} = {\tan^{{ - 1}}}\left( {\frac{{\overline {{v_2}} }}{{\overline {{u_2}} }}} \right) $$
(3.22)

Because rotations are not commutative and because the regression coefficients have been computed from the wind components in the sonic anemometer frame, the pitch and roll rotations must be applied before the yaw rotation so that the definition of the R03 matrix given above should be modified accordingly.

It is recommended to reject low wind speed conditions (generally below 1 m s−1) for the computation of the regression coefficients, thereby removing the problem of unrealistically large pitch angles. Depending on the complexity of the topography the data set can be split into different wind sectors, to determine different planes for different wind sectors, but it has to be assured that an appropriate number of data sets contribute to the calculation for each sector. This method is called the “sector-wise planar-fit method.”

By relying on an ensemble of observations, the coordinate system is stable through time and the x-y plane is more or less parallel to the local surface. Sites where a systematic vertical motion exists are exceptions to this (forest edges or abrupt changes in the topography), but such sites are very rare and will face many other methodological problems for flux computation. Using the planar-fit method, the drawbacks of the DR are overcome. Indeed, the risk of over-rotation is minimized because the z-axis is now independent of the wind direction. Information on the two-dimensional or three-dimensional nature of the flow is now available, including the nonzero (30 min) mean vertical velocity, which allows the investigation of nonturbulent advective fluxes (see Sect. 5.4.2). Again, these advantages over the DR method will be substantial mainly over nonideal sites and/or under bad weather conditions. Applying the planar-fit method at a site enables valuable insights into the complexity of the flow, especially over forests (Lee et al. 2004).

The planar-fit method also has some drawbacks. The regression coefficients depend on the anemometer orientation, on any instrumental offset in the vertical velocity, and on the canopy structure. Therefore, they have to be recomputed every time one of these parameters is modified. Together with the requirement of a long data set for the estimation of these coefficients, this can limit the applicability of the method at particular sites. Finally, influences of atmospheric stability and strong winds, for example, remain to be investigated, and thorough intercomparisons of rotation procedures are still sparse (Su et al. 2008).

3.3 Flux Determination

As described in Sects. 1.3 and 1.4, the vertical turbulent flux of any scalar χs can be deduced from the covariance of the rotated vertical wind speed (w) and the mixing ratio of this scalar, both having been treated as described above.

The general form of the flux is given by:

$$ F_{\text{s}}^{{EC}} = \overline {{\rho_{\text{d}}}} \cdot \overline {w^\prime{{\chi ^\prime}_s}} = \frac{{{{\bar{p}}_{\text{d}}} \cdot {m_{\text{d}}}}}{{R \cdot \bar{\theta }}} \cdot \overline {w^\prime{{\chi ^\prime}_s}} $$
(3.23)

Specific forms for each flux will be detailed below. If, in Eq. 3.23 and in the following equations, the average sonic temperature \( {\bar{\theta }_{\text{s}}} \) is used instead of the true air temperature \( \bar{\theta } \), Eq. 3.3 should be applied in order to account for the difference between these two variables (Liu et al. 2001; Schotanus et al. 1983). When the concentration is expressed in terms of density or molar concentration, further corrections are needed to take high-frequency dry air density fluctuations into account (Sect. 4.1.4.1).
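Eq. 3.23 reduces to a mean dry air density multiplied by a covariance. A minimal sketch, assuming the scalar is supplied as a mass mixing ratio and the mean dry air partial pressure and temperature are available:

```python
import numpy as np

R_GAS = 8.314       # universal gas constant (J mol-1 K-1)
M_DRY = 0.028965    # molar mass of dry air (kg mol-1)

def scalar_flux(w, chi_s, p_d, theta):
    """Eddy flux F = rho_d * cov(w', chi_s') (Eq. 3.23).

    w: rotated vertical wind (m s-1), chi_s: mass mixing ratio (kg kg-1),
    p_d: mean dry air partial pressure (Pa), theta: mean air temperature (K).
    Returns the flux in kg m-2 s-1.
    """
    rho_d = p_d * M_DRY / (R_GAS * theta)   # mean dry air density (kg m-3)
    cov = np.mean((w - np.mean(w)) * (chi_s - np.mean(chi_s)))
    return rho_d * cov
```

The same covariance routine serves for all scalars; only the conversion factor in front changes with the concentration units, as summarized in Table 3.1.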

3.3.1 Momentum Flux

After application of the rotation, the momentum flux τ (kg m−1 s−2) may be determined from the fluctuations of the vertical (w) and horizontal (u) wind components:

$$ \tau = - {\rho_{\text{d}}} \cdot \overline {w^\prime u^\prime} = - \frac{{{{\bar{p}}_{\text{d}}} \cdot {m_{\text{d}}}}}{{R \cdot \bar{\theta }}} \cdot \overline {w^\prime u^\prime} $$
(3.24)

Friction velocity, u* (m s−1), may be deduced directly from the covariance of the vertical and horizontal wind components, as

$$ {u_{{*}}} = \sqrt{- \overline{u{^\prime} w{^\prime}}} $$
(3.25)

3.3.2 Buoyancy Flux and Sensible Heat Flux

The buoyancy flux can be determined from the fluctuations of the vertical wind component and of the sonic temperature \( {\theta_{\text{s}}} \):

$$ {H_{\text{s}}} = - {\rho_{\text{d}}}{c_{\text{p}}}\overline {w^\prime{{\theta ^\prime}_{\text{s}}}} = - \frac{{{{\bar{p}}_{\text{d}}} \cdot {m_{\text{d}}}}}{{R \cdot \bar{\theta }}} \cdot {c_{\text{p}}}\overline {w^\prime{{\theta ^\prime}_{\text{s}}}} $$
(3.26)

while the sensible heat flux reads:

$$ H = - {\rho_{\text{d}}}{c_{\text{p}}}\overline {w^\prime \theta ^\prime} = - \frac{{{{\bar{p}}_{\text{d}}} \cdot {m_{\text{d}}}}}{{R \cdot \bar{\theta }}} \cdot {c_{\text{p}}}\overline {w^\prime \theta ^\prime} $$
(3.27)

Fluctuations of true air temperature in Eq. 3.27 may be deduced from Eq. 3.3 only if high-frequency measurements of water vapor pressure and air pressure are available. If not, the conversion of buoyancy flux into sensible heat flux must be done on the basis of the averaged data. This correction is described in detail in Sect. 4.1.4.2.

3.3.3 Latent Heat Flux and Other Trace Gas Fluxes

For all other tracers, the conversion of mass mixing ratio into mass flux can be performed by using Eq. 3.23. Alternatively, the equations computing fluxes in mass or molar units from scalar concentrations expressed in molar mixing ratio, mass mixing ratios, molar concentration, or density are presented in Table 3.1.

Table 3.1 Formulas computing mass or molar fluxes from scalar concentrations expressed in molar mixing ratios, mass mixing ratios, molar concentrations, or density. In the latter two cases, Webb-Pearman-Leuning (WPL) corrections should be applied (see Sect. 4.1.3)

The turbulent mass flux of water vapor, \( F_{\text{v}}^{\text{EC}} \), may be deduced from Eq. 3.23, where the scalar is the water vapor mixing ratio. Often the water vapor flux is expressed as a latent heat flux (W m−2), which is then determined as

$$ \lambda E = F_{\text{v}}^{\text{EC}} \cdot {\lambda_{\text{v}}} $$
(3.28)

where \( {\lambda_{\text{v}}} = (3147.5 - 2.372\,\theta ) \times {10^3} \) (\( \theta \) in K) is the latent heat of vaporization of water (J kg−1).

3.3.4 Derivation of Additional Parameters

One of the most important parameters in micrometeorological applications that describes the atmospheric stratification is the stability parameter ζ, defined as

$$ \zeta = \frac{{{h_{\text{m}}} - d}}{L} $$
(3.29)

where \( h_{\text{m}} \) is the measurement height, d the zero-plane displacement height, and L the Obukhov length, which relates dynamic, thermal, and buoyancy processes. This length is defined as

$$ L = - \frac{{u_{{*}}^3}}{{\kappa \cdot \left( {g/{{\bar{\theta }}_{\text{S}}}} \right) \cdot \overline {w^\prime{{\theta ^\prime}_{\text{S}}}} }} $$
(3.30)

Another parameter of interest is the Bowen ratio that relates sensible and latent heat flux:

$$ Bo = \frac{H}{{{\lambda_{\text{v}}}F_{\text{v}}^{{EC}}}} $$
(3.31)

which is a helpful measure when energy partitioning is investigated.
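The derived parameters of this subsection can be collected in a few lines (a sketch; κ = 0.4 and g = 9.81 m s−2 are the usual constants, not values given in the text, and the Bowen ratio helper takes the latent heat flux λE directly as its denominator):

```python
KAPPA = 0.4   # von Karman constant (dimensionless)
G = 9.81      # gravitational acceleration (m s-2)

def obukhov_length(u_star, theta_s, w_theta_s):
    """Obukhov length L (Eq. 3.30) from friction velocity (m s-1), mean
    sonic temperature (K), and kinematic buoyancy flux w'theta_s' (K m s-1)."""
    return -u_star**3 / (KAPPA * (G / theta_s) * w_theta_s)

def stability_parameter(h_m, d, L):
    """Stability parameter zeta = (h_m - d) / L (Eq. 3.29)."""
    return (h_m - d) / L

def bowen_ratio(H, lambda_E):
    """Bowen ratio Bo = H / (lambda_v * Fv) (Eq. 3.31), with the
    denominator supplied as the latent heat flux (W m-2)."""
    return H / lambda_E
```

Note the sign conventions: an upward (positive) buoyancy flux in unstable daytime conditions yields L < 0 and hence ζ < 0, while stable nighttime stratification yields L > 0 and ζ > 0.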