
14.1 Introduction

Although thermal remote sensing data have been available since the 1970s, the use of thermal time series in remote sensing is recent, since the temporal coherence of thermal data records has been hindered by several flaws. Nonetheless, the potential of the applications is high, ranging from climate change studies to environmental monitoring.

As concern about the consequences of climate change grows, the need for reliable information on surface temperature has increased. For example, climate modelers need surface temperature as input for their models to adequately simulate past and future climate, in order to be able to quantify vegetation and plankton response to atmospheric CO2 anthropogenic forcing (see for example Diak and Whipple 1993). As stated by Frey et al. (2012), “surface temperature is a key variable in the climatological system. It represents the interface between the incoming radiation fluxes and other terms of the energy balance, i.e. the sensible heat flux or the ground heat flux. Air temperature is directly triggered by Land Surface Temperature (LST). Because of this central position in the climatological system, LST can be used as an indicator of the energy balance at the Earth’s surface and the so-called greenhouse effect in climate change studies. As part of the surface radiation budget, LST belongs also to the essential climate variables of WMO-GCOS (World Meteorological Organization-Global Climate Observing System, URL1). In the Strategic Plan for the US Climate Change Science Program (CCSP 2006), the surface ground temperature is listed as a state variable and the long-wave surface energy budget (derived also from LST) is listed as one of its key external forcing or feedback observations” (Frey et al. 2012).

This need for reliable surface thermal estimations is also shared by studies on the current consequences of climate change. For example, surface temperature allows the identification of desertification (Lambin and Ehrlich 1996; Karnieli et al. 2010), whether driven by water stress or human pressure. Surface temperature is also a key parameter for ecosystem studies, since the suitability of temperature conditions for local flora and fauna species is endangered by climate change (see for example Bertrand et al. 2011).

Thermal estimations include both temperature and emissivity, which are intimately related (see for example Gillespie et al. 1998, for temperature and emissivity separation methods). Thermal emissivities have been monitored for their inclusion in global climate model surface schemes (Menglin and Liang 2006), and their seasonal variation analyzed (Ogawa et al. 2008). Thermal emissivities can also be used to monitor vegetation changes through time, as French et al. (2008) have presented for a semi-arid site in New Mexico (USA).

Another field of application of thermal remote sensing is thermal anomaly detection and monitoring. These approaches are carried out either by comparing pixel temperatures against their background in uni-temporal satellite data (Kuenzer et al. 2007), or by comparing a near-real time surface temperature estimation to past reference measurements, in order to identify departures from standard behaviors (Kuenzer et al. 2008). Some modern fire detection systems have been developed based on this principle, and are especially useful for the detection of fire events in remote areas (Prins et al. 2004). Such alert systems can also be implemented for volcano monitoring, or for industrial hazard detection.

However, for such applications to be implemented, one key aspect has to be taken into account: when two thermal images acquired at distinct dates are compared, one has to make sure that both images are coherent. Some of the factors that decrease the temporal coherence in multitemporal thermal analysis are common to other remote sensing applications, while others are more specific to the TIR (Thermal Infra Red) domain. All these factors are summarized in Table 14.1.

Table 14.1 Factors influencing time series coherence of thermal parameters

Factors common to other remote sensing applications are included in what is generally referred to as level 2 products, which include calibration, georeferencing, and atmospheric correction. Calibration is needed to transform sensor counts into brightness temperature; it is sensor dependent, and calibration coefficients may need to be updated during the activity period of a given sensor. Georeferencing is crucial for time series analyses, in order to make sure that the same location is being monitored through time. Regarding atmospheric correction, some land and sea surface temperature algorithms (Jiménez-Muñoz and Sobrino 2008; Barton 1995) consider the absorption due to total atmospheric water content, while others only transform brightness temperatures to land surface temperature through the assignment of land surface emissivities (Sobrino et al. 2008a). Additionally, more complex atmospheric correction may be needed, especially considering the impact of atmospheric depth, atmospheric mass, and also terrain on the thermal signal, such as provided by the ATCOR (atmospheric correction) tool, implemented for the thermal bands of the Landsat MSS (Multispectral Scanner), TM (Thematic Mapper) or ETM+ (Enhanced Thematic Mapper Plus), ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) or BIRD (Bispectral Infra-Red Detection) instruments, and available in various software packages.
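As an illustration of the calibration step, the short sketch below (in Python) converts raw sensor counts into at-sensor brightness temperature through a linear rescaling to spectral radiance followed by an inverse Planck relation; the coefficient names and numerical values are placeholders standing in for the sensor-specific values distributed with the image metadata, not fixed constants.

```python
import numpy as np

def counts_to_brightness_temperature(dn, gain, offset, k1, k2):
    """Convert raw sensor counts (DN) to at-sensor brightness temperature (K).

    dn           : array of digital numbers
    gain, offset : radiometric rescaling coefficients from the sensor metadata
    k1, k2       : thermal conversion constants (W m-2 sr-1 um-1 and K)
    """
    radiance = gain * np.asarray(dn, dtype=float) + offset   # at-sensor spectral radiance
    return k2 / np.log(k1 / radiance + 1.0)                  # inverse Planck relation

# Example with placeholder coefficients (the actual values come from the metadata):
bt = counts_to_brightness_temperature(dn=np.array([120, 135, 150]),
                                      gain=0.055, offset=1.18,
                                      k1=607.76, k2=1260.56)
```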

However, time series pre-processing of thermal remote sensing data also presents more specific characteristics. The first one, which is shared with other optical remote sensing, is the presence of clouds, which mask the land surface and therefore prevent the estimation of surface temperature. Additionally, in high resolution data, projected cloud shadows can also decrease the retrieved temperatures. Cloud masks are sometimes provided with temperature retrievals, although undetected clouds may still be present in the data. For example, thin cirrus clouds decrease the observed surface temperature, although their detection from remotely sensed data is problematic (Saunders and Kriebel 1988). Moreover, some regions of the globe are almost permanently covered with clouds during some seasons; cloud free observations are therefore scarce, and time series gap filling has to be implemented.

Another specific aspect of time series analysis in thermal remote sensing is the orbital drift effect. Orbital drift affects all satellites that do not possess onboard fuel for orbit correction, such as engineless platforms, or ageing platforms whose fuel reserves have been exhausted. It is evidenced mainly for polar satellites, and consists of a slow but steady change in the orbit characteristics of the considered satellite, which results in an increasing delay or advance in satellite overpass over a given reference geographic location. An example of such orbital drift can be observed in the case of the NOAA (National Oceanic and Atmospheric Administration) satellite series (Price 1991). The effect of the orbital drift combines considerations on sun-target-sensor geometry (proportion of shade in a given pixel) as well as on daily temperature cycle characteristics, to which the resulting variations in the path length through the atmosphere can be added. Therefore, it is evidenced more easily for land covers with high daily temperature amplitude (deserts, crops) than for those with low amplitude (sea, evergreen forests). A similar effect can be evidenced for non-drifting platforms for which some pixels are observed from different paths. However, this effect is not correlated with time, and can therefore be treated as additional noise in the time series, which can be handled by the trend detection methods presented in Sect. 14.3.2.

Thermal anisotropy is another factor which can influence temperature retrieval, and therefore thermal time series (Lagouarde et al. 1995), for which a BRDF-type (Bidirectional Reflectance Distribution Function, see Tanré et al. 1983) correction could be developed. However, since such a correction has not yet been developed for thermal data, it will not be discussed further here.

In order to be able to analyze thermal time series (whether emissivities or temperatures) correctly, all these aspects have to be taken into consideration. Then, change analysis can be conducted. To that end, this chapter is divided into two parts, the first part describing how to conduct these corrections, while the second one is focused specifically on change analysis.

14.2 Removal of Temporal Incoherence in Thermal Time Series

As stated in the introduction, the coherence of thermal time series can be hindered by several factors. Since factors such as calibration, geocorrection, and atmospheric correction are usually taken into account during data processing, they will not be elaborated on here. Instead, this first part will be devoted to the correction of two more specific factors, i.e. cloud contamination and orbital drift effects.

14.2.1 Cloud Contamination

The first step in cloud contamination removal is cloud identification. It is usually carried out through band ratios and/or band thresholding, with each band or band ratio threshold aiming at the detection of one particular type of cloud (high and low thick clouds, thin cirrus), for both Land Surface Temperature and Sea Surface Temperature (SST). The key point here is the number of bands available for the considered sensor, which allows for more or fewer thresholds to be applied. When restricted to the thermal infrared, the application of thresholds is less reliable, especially when the observed areas may present an ice or snow cover during part of the time series (mountains, temperate to polar areas), since cloud tops have temperatures similar to snow or ice covered surfaces. Recent algorithms allow for a good assessment of cloud contamination (Ackerman et al. 1998; Derrien and Le Gléau 2005). Nevertheless, cloud masking may leave residual cloud contaminated values in the time series, which is the reason why this section focuses on cloud reconstruction methods.
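As a purely illustrative sketch of thermal-only thresholding, the example below flags a pixel as probably cloudy when its 11 µm brightness temperature is unusually low or when the 11-12 µm brightness temperature difference is large (a common thin cirrus indicator); the threshold values are assumptions to be tuned per sensor, season and region, and the test inherits the snow/ice ambiguity discussed above.

```python
import numpy as np

def simple_cloud_mask(bt11, bt12, cold_threshold=265.0, cirrus_threshold=2.5):
    """Return a boolean mask (True = probably cloudy) from two thermal bands.

    bt11, bt12       : brightness temperatures (K) around 11 and 12 micrometers
    cold_threshold   : BT11 below this value suggests a cold (thick) cloud top,
                       but may also flag snow or ice covered surfaces
    cirrus_threshold : large BT11 - BT12 differences suggest thin cirrus
    """
    bt11 = np.asarray(bt11, dtype=float)
    bt12 = np.asarray(bt12, dtype=float)
    cold_cloud = bt11 < cold_threshold
    cirrus = (bt11 - bt12) > cirrus_threshold
    return cold_cloud | cirrus
```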

Cloud reconstruction methods are based on a few assumptions, which allow identifying cloud contaminated values within the temporal profile of the data, and then filling the gaps corresponding to these values. Usually, the assumptions made for reconstruction are threefold (Julien and Sobrino 2010): continuity of the time series, since the observed natural processes show slow changes; a unidirectional effect of clouds on the signal, since cloud contamination tends to decrease the signal values; and finally, a sufficient number of cloud free dates for time series reconstruction. Most methods for time series reconstruction have been developed for analyses of vegetation index time series (such as NDVI – Normalized Difference Vegetation Index, Tucker 1979, or EVI – Enhanced Vegetation Index, Huete et al. 2002). However, these methods can also be applied to sea or land surface temperature as well as to emissivity time series. Such methods are reviewed in the following paragraphs, focusing finally on three methods selected for their wide application or their novelty.

Numerous methods have been presented to identify and interpolate contaminated values in time series data (van Dijk et al. 1987; Viovy et al. 1992; Roerink et al. 2000; Jönsson and Eklundh 2002, 2004; Chen et al. 2004; Ma and Veroustraete 2006; Beck et al. 2006; Julien and Sobrino 2010), the latest methods usually performing better than the previous ones (Hird and McDermid 2009). The criteria usually followed to assess the best reconstruction method are its fidelity to the original cloud-free data and its ability to identify cloud contaminated values. Validation of the reconstructed time series is usually qualitative, since spatially extensive measurements (usually of the order of one square kilometer) would be needed for a quantitative validation. Readers have to keep in mind that such corrections reconstruct a “clear-sky” time series, which can differ substantially from ground truth, since cloudy LST, for example, would be lower than the reconstructed “clear-sky” value. However, estimation of cloudy LST would require the inclusion of models that would considerably increase the processing costs of the correction, which is the reason why such methods have not been developed widely. One example of such a method can be found in Jin and Dickinson (2000). Note that these methods do not distinguish between clouds and other atmospheric contamination of the data.

The methods presented by van Dijk et al. (1987) and Viovy et al. (1992) were designed with their application to daily time series in mind, and are therefore difficult to apply to composited time series, due to their focus on the highest frequencies in the signal (up to a few days), which do not appear in composited time series. Indeed, most of the publicly available databases of remotely sensed data for Earth observation, such as Pathfinder AVHRR (Advanced Very High Resolution Radiometer) Land (Smith et al. 1997) or GIMMS (Global Inventory Modeling and Mapping Studies; Tucker et al. 2005), are composited. This compositing aims at lowering atmospheric and cloud influence, as shown in Holben (1986), with compositing periods usually ranging from 8 to 15 days. Even though composited data present lower atmospheric contamination than raw time series, the compositing process does not eliminate atmospheric contamination. For example, cloud cover can persist longer than the compositing period during some time periods (rainy season) or over some specific areas (tropical rainforests). Therefore, we present here three approaches for correcting the remaining atmospheric influences on composited time series:

HANTS (Harmonic Analysis of NDVI Time Series): This algorithm (Menenti et al. 1993; Verhoef et al. 1996; Roerink et al. 2000) was developed with the application to time series of NDVI images in mind. These images are usually composited by means of the so-called Maximum Value Compositing (MVC, Holben 1986) algorithm in order to suppress atmospheric effects. The HANTS algorithm exploits the negative effect of atmospheric contamination on NDVI values. In HANTS, a curve fitting is applied iteratively: first a least squares curve is computed based on all data points, and then the observations are compared to the curve. Observations that are clearly below the curve are candidates for rejection due to atmospheric contamination; the points with the greatest negative deviation from the curve are removed first by assigning them a weight of zero, a new curve is computed based on the remaining points, and the process is repeated. This iteration eventually leads to a smooth curve that approaches the upper envelope of the data points. In this way, atmospherically contaminated observations are removed, and the computed amplitudes and phases are much more reliable than those based on a straightforward FFT (Fast Fourier Transform). An example of implementation of the HANTS algorithm for land surface temperature time series analysis can be found in Julien et al. (2006).
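The sketch below reproduces the upper-envelope logic of HANTS in simplified form: a low-order harmonic curve is fitted by least squares, the point falling furthest below the curve is given zero weight, and the fit is repeated until no point lies clearly below the curve. The number of harmonics, tolerance and iteration limit are illustrative choices, not those of the original implementation.

```python
import numpy as np

def harmonic_design_matrix(t, period, n_harmonics):
    """Design matrix with a constant term plus sine/cosine pairs."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * t / period))
        cols.append(np.sin(2 * np.pi * k * t / period))
    return np.column_stack(cols)

def hants_like_fit(t, y, period=365.0, n_harmonics=2, tolerance=0.02, max_iter=20):
    """Iterative upper-envelope harmonic fit (simplified HANTS-style logic)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    A = harmonic_design_matrix(t, period, n_harmonics)
    weights = np.ones_like(y)
    for _ in range(max_iter):
        # weighted least squares: zero-weighted points no longer influence the fit
        coeffs, *_ = np.linalg.lstsq(A * weights[:, None], y * weights, rcond=None)
        fitted = A @ coeffs
        residuals = y - fitted
        # candidates for rejection: points clearly below the fitted curve
        reject = (residuals < -tolerance) & (weights > 0)
        if not reject.any():
            break
        # remove the most negative outlier first, then refit
        worst = np.argmin(np.where(reject, residuals, np.inf))
        weights[worst] = 0.0
    return fitted
```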

Double logistic curve fitting: The double logistic approach has been previously applied in Julien and Sobrino (2009) to global GIMMS data, as a generalization of the method presented by Beck et al. (2006) for Siberia.

NDVI yearly evolutions are fitted to the following double logistic function (Beck et al. 2006):

$$ NDVI(t)=\left( mNDVI-wNDVI \right)\times \left( \frac{1}{1+e^{-mS\times \left( t-S \right)}}+\frac{1}{1+e^{mA\times \left( t-A \right)}}-1 \right)+wNDVI $$
(14.1)

where NDVI(t) is the remotely sensed NDVI evolution for a given year (t = 0 to 364, in day of year), wNDVI is the winter NDVI value, mNDVI is the maximum NDVI value, S is the increasing inflection point (spring date), A is the decreasing inflection point (autumn date), mS is related to the rate of increase at the S inflection point, and mA is related to the rate of decrease at the A inflection point. All these parameters are retrieved iteratively on a pixel-by-pixel basis, using the Levenberg-Marquardt technique (More 1977). In order to remove possible snow- or cloud-contaminated values, a preliminary fit is conducted to estimate the dormancy period as the period before the spring date and after the autumn date. During this period, any negative NDVI values are set to the highest positive value over the whole dormancy period, labeled winter NDVI. Since surface temperatures usually present a high seasonality, this approach is also suitable for surface temperature time series. Although NDVI and LST annual curves may differ in shape (LST is not constant during winter or summer), the double logistic approach can adequately describe the LST annual curves.
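A minimal sketch of fitting Eq. 14.1 with the Levenberg-Marquardt algorithm (here via scipy.optimize.curve_fit) is given below; the initial parameter guesses are arbitrary assumptions, and the dormancy-period pre-processing described above is omitted for brevity.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, wndvi, mndvi, spring, m_spring, autumn, m_autumn):
    """Double logistic annual curve of Eq. 14.1 (t in day of year)."""
    rise = 1.0 / (1.0 + np.exp(-m_spring * (t - spring)))
    fall = 1.0 / (1.0 + np.exp(m_autumn * (t - autumn)))
    return (mndvi - wndvi) * (rise + fall - 1.0) + wndvi

def fit_double_logistic(doy, ndvi):
    """Fit the double logistic to one year of (composited) observations."""
    ndvi = np.asarray(ndvi, dtype=float)
    # rough initial guesses: winter/maximum values, spring and autumn dates, slopes
    p0 = [ndvi.min(), ndvi.max(), 120.0, 0.05, 280.0, 0.05]
    params, _ = curve_fit(double_logistic, doy, ndvi, p0=p0, method="lm", maxfev=5000)
    return params

# Usage: params = fit_double_logistic(doy, ndvi); fitted = double_logistic(doy, *params)
```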

IDR (iterative Interpolation for Data Reconstruction): This method (Julien and Sobrino 2010) also exploits the tendency of cloud and atmospheric influence to lower NDVI values. Additionally, since NDVI is a proxy for vegetation greenness, its temporal variation should be smooth and continuous. Therefore, for each date of a given time series, an alternative NDVI value is computed as the mean of the immediately preceding and following observations. An alternative NDVI time series is thus obtained and compared to the original time series. The date corresponding to the maximum difference between the alternative and original time series is identified, and the corresponding NDVI value in the original time series is replaced with the value from the alternative time series. This replacement is carried out only when the maximum difference between both time series is higher than the noise level (in that case 0.02 NDVI units). Then a new alternative time series is computed from the modified time series, and the process is iterated until convergence is reached. This process progressively increases, one by one, the low and discontinuous NDVI values (corresponding to atmospherically contaminated values) until the upper envelope of the NDVI time series is reached. The methodology is somewhat similar to the one presented in Ma and Veroustraete (2006), with the difference that the IDR method works from the data itself, and not from a comparison to an average of different years, which can be problematic for areas with high interannual variability, for areas suffering a land cover change, or when the acquired time series is short.
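The IDR iteration can be sketched as follows for a single pixel time series; the 0.02 noise level is the NDVI value quoted above and would have to be adapted (for example to a few tenths of a Kelvin, an assumption) when applied to LST.

```python
import numpy as np

def idr_reconstruct(series, noise_level=0.02, max_iter=1000):
    """Iterative Interpolation for Data Reconstruction (simplified, single pixel).

    At each iteration an alternative series is built as the mean of the
    preceding and following observations; the value with the largest positive
    difference (alternative minus original) is replaced by the alternative
    value, until the largest difference falls below the noise level.
    """
    z = np.asarray(series, dtype=float).copy()
    for _ in range(max_iter):
        alternative = z.copy()
        alternative[1:-1] = 0.5 * (z[:-2] + z[2:])   # mean of neighbouring dates
        diff = alternative - z
        i = int(np.argmax(diff))
        if diff[i] <= noise_level:
            break                                    # convergence: upper envelope reached
        z[i] = alternative[i]                        # raise the most contaminated value
    return z
```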

Figure 14.1 presents an example of atmospheric contamination reconstruction using the IDR method with Meteosat Second Generation land surface temperature data, retrieved during one whole day by using the algorithm developed by Sobrino and Romaguera (2004). Cloud contamination appears as sudden decreases in retrieved land surface temperature from 10:00 to 13:00 (GMT), while atmospheric contamination is more easily evidenced at night (from 17:00 to 24:00 GMT for example).

Fig. 14.1

One-day (15th July 2010) land surface temperature (in Kelvin) for a pixel in eastern Turkey as retrieved by Meteosat Second Generation SEVIRI (Spinning Enhanced Visible and InfraRed Imager) sensor (black), and after IDR atmospheric contamination reconstruction (grey). See text for details on the IDR method

14.2.2 Orbital Drift Effect

The orbital drift effect is evidenced only for a few platforms, although these platforms provide some of the longest time series of surface temperature. Therefore, the following paragraphs describe only methods developed to counterbalance this phenomenon for the NOAA-AVHRR sensor.

The first method, developed by Gutman (1999), relies on the prior calculation of temperature and SZA (Solar Zenithal Angle) time series anomalies, which are then averaged over homogeneous vegetation classes. A simple linear regression is then conducted between these averaged anomalies, and finally, the fitted SZA anomalies are removed from the LST time series by simple difference. This method was applied and analyzed thoroughly in Gleason et al. (2002), showing that some hemispherical and local adaptations were needed for desert and crop classes, respectively. These methods rely on a priori knowledge of land cover, which cannot (and should not) be assumed in change studies through time series analysis.

Another method, developed by Jin and Treadon (2003), relies on modeling the daily cycle of land surface temperature, from which the difference of temperature between the nominal and actual satellite overpass times can be estimated and then added to the data for the corresponding date. However, the daily cycles have been computed for 18 land covers, for all four seasons, and for latitude bands of 5°, which transforms temporal discontinuities in land surface temperature at satellite transitions into spatial discontinuities at vegetation class and latitude band transitions.

Pinheiro et al. (2004) developed a model based on vegetation structural data and geometric optics, which allows for the estimation of the fraction of sunlit and shaded endmembers observed by AVHRR for each pixel of each overpass. This approach has been used to build a daily record of NOAA-14 AVHRR land surface temperature over Africa (Pinheiro et al. 2006). Due to the required a priori knowledge of land cover and to the model complexity, this approach is difficult to implement for large datasets.

Pinzon et al. (2005) used an approach based on Empirical Mode Decomposition (EMD) to correct NDVI data for the orbital drift. The decomposition relies on the simple assumption that any signal consists of a sum of simple intrinsic modes of oscillation, which can be retrieved iteratively from the data itself. A specific mode can therefore be identified as orbital drift dependent and removed from the signal. This approach was chosen for the correction of GIMMS NDVI as well as for the LTDR (Long Term Data Record) dataset.

Finally, Sobrino et al. (2008b) presented a simple and automated method to correct the NOAA-AVHRR orbital drift, also using SZA anomaly information. However, the iterative character of this method results in increased processing times, making it difficult to implement for large databases.

All the methods presented above are unable to correct the orbital drift effect without introducing spurious trends in the data (Hou and Shi 2011), since the differentiation between orbital drift and genuine trends in the time series is not always obvious. However, Julien and Sobrino (2012) developed a data-driven method to correct this orbital drift, thereby circumventing the lack of information on NOAA-AVHRR acquisition times (although indirect approaches such as the ones presented in Frey et al. 2012, or Ignatov et al. 2004, allow for their estimation). The Julien and Sobrino (2012) approach is based on a pixel-by-pixel fit of LST anomalies against both time since launch and solar zenithal angle anomaly, which allows for the removal of the orbital drift influence without removing genuine trends in the pixel time series. Figure 14.2 presents an example of the orbital drift influence on a barren pixel time series and its correction by the Julien and Sobrino (2012) approach.
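A minimal sketch of this pixel-by-pixel principle is given below, assuming that the LST anomalies, the solar zenith angle anomalies and the time since launch have already been computed for the pixel: the anomalies are regressed against both predictors, and only the SZA-related component is subtracted, so that a genuine temporal trend is preserved. This illustrates the principle only and is not the authors' implementation.

```python
import numpy as np

def remove_drift(lst_anomaly, sza_anomaly, time_since_launch):
    """Fit LST anomalies against time and SZA anomaly; remove only the SZA term."""
    lst = np.asarray(lst_anomaly, dtype=float)
    sza = np.asarray(sza_anomaly, dtype=float)
    time = np.asarray(time_since_launch, dtype=float)
    # design matrix: intercept, time since launch, SZA anomaly
    X = np.column_stack([np.ones_like(lst), time, sza])
    coeffs, *_ = np.linalg.lstsq(X, lst, rcond=None)
    drift_component = coeffs[2] * sza        # orbital-drift-related part only
    return lst - drift_component             # corrected anomalies, trend preserved
```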

Fig. 14.2

LST time series for a barren pixel before (grey) and after (black) the Julien and Sobrino (2012) orbital drift correction. The orbital drift effect on the uncorrected LST time series is clearly evidenced by the increasing difference between both curves from satellite launch to retirement dates

14.3 Time Series Analyses

In the case of sea surface temperatures, where the main application is climate modeling, basic approaches relying on anomaly estimation and linear regression have generally been applied (see for example Comiso 2003). Another case of time series analysis based on anomaly estimation is the coal fire detection method presented by Kuenzer et al. (2008), which also presents the particularity of using four observations per day, a rare example of intra-daily time series application. Due to the scarcity of temporally coherent land surface temperature time series, few methods have been developed specifically for their analysis. However, methods developed for other optical wavelengths can easily be transposed to thermal infrared applications and could therefore also be applied to LST time series. These methods are presented hereafter, divided into change detection and trend retrieval methods.

14.3.1 Change Detection

Coppin et al. (2004) have reviewed different methods for vegetation monitoring. This section summarizes the ones that are relevant to thermal time series, completed with other references. Detection of changes in remotely sensed images presupposes having access to comparable data, whether regarding acquisition (cloud free images, atmospheric effects, illumination and observation geometry, similar wavelengths and spatial resolutions), time scale (comparable phenological state of the vegetation), or data processing (similar methods, accurate georeferencing). Change detection algorithms are mainly based on bi-temporal analysis, i.e. the comparison of two sets of data, preferably from before and after the change.

  • A first technique for change detection is univariate image differencing, which consists of the simple subtraction of two previously co-georeferenced images (Banner and Lynham 1981; Lyon et al. 1998; Nelson 1983). Negative differences can generally be attributed to an increase in vegetation cover (whose temperature is lowered by increased evapotranspiration), while positive differences mainly evidence a decrease in vegetation cover. An example of this method can be found in Fig. 14.3, where the difference between two land surface temperature estimates from the airborne AHS (Airborne Hyperspectral Scanner) sensor, acquired over an agricultural area in southern France in 2007 at 11:40 UTC, shows the land cover change between the two dates: a lower river (Garonne) level (top of the image, in white), an increase in vegetation cover (as a decrease of LST in the lower part of the image, in dark grey) and a decrease in vegetation cover (as an increase in LST in the left part of the image, in white). This method has the advantage of a low processing cost.

    Fig. 14.3

    Land surface temperature as retrieved by the AHS (Airborne Hyperspectral Scanner) sensor over an agricultural area in Southern France on 24th April 2007 and 15th September 2007 (upper images) and their difference (lower image)

  • A second technique consists of image ratioing on a pixel by pixel basis, resulting in an image where change pixels have a value different from unity. This technique was applied by Howarth and Wickware (1981), unfortunately without being able to make a quantitative assessment of the changes.

  • A third technique is image regression, which assumes that the “after” image is linearly related to the “before” image for all bands, implying that the spectral properties of most pixels have not changed between images. Changes are then identified by setting thresholds on the residuals. This technique has not been proven to reach high accuracies (Burns and Joyce 1981; Singh 1989; Ridd and Liu 1998).

  • A fourth method is multi-temporal spectral mixture analysis, which supposes that the images (preferably with high spatial resolution) include pixels with pure spectral signatures, or end-members, present in all pixels in different proportions. Change then results in variations in the end-member proportions. This method was implemented successfully for Landsat images of the Brazilian Amazon by Adams et al. (1995) and Roberts et al. (1998).

  • Finally, a fifth technique is multidimensional temporal feature space analysis, which consists of overlaying selected bands of the “before” and “after” images in a composite image as red, green and blue bands, in which changes appear in unique colors. This technique does not provide any insight into the drivers of the changes, and is usually applied for mask building before change detection. For example, Alwashe and Bokhari (1993) and Wilson and Sader (2002) have applied this technique to Landsat bands or derived indices. Combinations of these different techniques have also been used (Desclée et al. 2006).

Among all these techniques, users should choose the method that is most adequate for their application. For example, methods 1, 2 and 5 are straightforward (they can be carried out automatically for large amounts of data, with changes being identified through threshold definition), while methods 3 and 4 are more difficult to implement. Therefore, methods 1, 2 and 5 can be used as exploratory tools to identify where changes are occurring, while methods 3 and 4 can emphasize links between changes separated geographically. Moreover, method 4 is useful for determining the nature of the identified change.
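As an illustration of method 1, the sketch below differences two co-registered LST images and classifies pixels whose temperature changed by more than a user-defined threshold; the threshold value is an arbitrary assumption to be adapted to the noise level of the data.

```python
import numpy as np

def difference_change_map(lst_before, lst_after, threshold=2.0):
    """Univariate image differencing between two co-registered LST images (K).

    Returns the difference image and a categorical change map:
    +1 where LST increased by more than the threshold (e.g. vegetation loss),
    -1 where it decreased by more than the threshold (e.g. vegetation growth),
     0 elsewhere.
    """
    diff = np.asarray(lst_after, dtype=float) - np.asarray(lst_before, dtype=float)
    change = np.zeros_like(diff, dtype=int)
    change[diff > threshold] = 1
    change[diff < -threshold] = -1
    return diff, change
```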

14.3.2 Trend Analyses

Most of the methods presented above have been designed for high resolution images, such as those retrieved by the SPOT (Satellite Pour l’Observation de la Terre) or Landsat sensors, for which temporal resolution is quite low. However, when temporal resolution is higher and spatial resolution is lower, surface temperature retrieval amounts to an averaging of temperature over land covers and vegetation species, while abrupt events such as harvesting are smoothed due to their local character. Thus, different techniques have to be applied, which can be summarized as temporal trajectory analyses. These techniques include statistical analysis (departure from averages, optima, etc. – Lambin and Strahler 1994), simple anomalies (instantaneous departure from the average of the corresponding period over the whole time series – Myneni et al. 1997; Plisnier et al. 2000; Comiso 2003), Fourier analysis (Andres et al. 1994), principal component analysis (PCA) (Eastman and Fulk 1993; Young and Wang 2001) and change-vector analysis (CVA) (Lambin and Strahler 1994; Lambin and Ehrlich 1997). As was the case with change detection methods, the last techniques (Fourier, PCA and CVA) are more demanding to implement, although they allow for a better assessment of the geographical correlation of the changes. Additionally, De Beurs and Henebry (2005a) designed a statistical framework for land cover change analysis.
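The simplest of these temporal trajectory techniques, the anomaly, can be sketched as follows for a time series arranged by year and month (the monthly arrangement is an assumption made for illustration): each observation is expressed as its departure from the multi-year mean of the corresponding calendar month.

```python
import numpy as np

def monthly_anomalies(lst):
    """Compute anomalies from a (n_years, 12) array of monthly LST values.

    Each value is expressed as its departure from the multi-year mean of the
    corresponding calendar month, ignoring missing values (NaN).
    """
    lst = np.asarray(lst, dtype=float)
    climatology = np.nanmean(lst, axis=0)      # 12 monthly means over all years
    return lst - climatology                   # broadcast subtraction, month by month
```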

Ordinary least squares (OLS) regression is the most common method applied for trend analysis in long image time series, as in the study of global trends in SST by Deser et al. (2010). However, four basic assumptions affecting the validity of trends summarized by OLS regression are often violated: (1) all the Y-values should be independent of each other; the residuals should be (2) random with (3) zero mean; and (4) the variance of the residuals should be equal for all values of X (De Beurs 2005). Since time series of biophysical parameters are temporally correlated, trends retrieved by OLS regression are not reliable.

The approach described hereafter relies on the Mann-Kendall framework, which has been applied in a few previous studies of time series of remotely sensed data (De Beurs and Henebry 2004a, b, 2005a, b). The basic principle of Mann-Kendall (MK) tests for trend is to examine the sign of all pairwise differences of observed values (Libiseller and Grimvall 2002). A univariate form of such tests was first published by Mann (1945), and the theory of multivariate Mann-Kendall tests is due to Hoeffding (1948), Kendall (1975), and Dietz and Killeen (1981). During the past two decades, applications in the environmental sciences have given rise to several new MK tests. Hirsch and Slack (1984) published a test for the detection of trends in serially dependent environmental data collected over several seasons.

The Mann-Kendall statistic for monotone trend in a time series {Zk, k = 1, 2,…, n} of data is defined as:

$$ T=\sum\limits_{j<i} \operatorname{sgn}\left( Z_i-Z_j \right) $$
(14.2)

where

$$ \operatorname{sgn}(x)=\left\{ \begin{array}{rcl} 1, & \text{if} & x>0 \\ 0, & \text{if} & x=0 \\ -1, & \text{if} & x<0 \end{array} \right. $$
(14.3)

If no ties are present and the values of Z1, Z2, …, Zn are randomly ordered, this test statistic has expectation zero and variance:

$$ Var(T)=\frac{n(n-1)(2n+5)}{18} $$
(14.4)

Furthermore, T is approximately normally distributed if n is large (n > 10 – Kendall 1975).

Finally, the null hypothesis of no trend can be rejected at a confidence level α if the absolute value of T is greater than the corresponding threshold, whose value is zα·√Var(T), with zα retrieved from standard normal distribution tables.

This test determines whether trends are present in the data. However, it does not provide estimates of the trend magnitude. To that end, Sen’s slope approach (Sen 1968) can be used. This approach consists of computing the slope for all pairs of observations in the time series and then taking the median value of all these estimated slopes. This approach has been shown to be resistant to outliers (Sen 1968).
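The Mann-Kendall test of Eqs. 14.2, 14.3 and 14.4 and Sen's slope estimator can be combined for a single pixel time series as sketched below; the sketch assumes no ties and n > 10, as required by the normal approximation mentioned above.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall_sen(z, t=None, alpha=0.05):
    """Mann-Kendall trend test (Eqs. 14.2-14.4) and Sen's slope for one time series."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    t = np.arange(n, dtype=float) if t is None else np.asarray(t, dtype=float)

    # All pairs (earlier, later); T = sum of sgn(later - earlier), Eq. 14.2
    i, j = np.triu_indices(n, k=1)
    T = np.sign(z[j] - z[i]).sum()

    var_T = n * (n - 1) * (2 * n + 5) / 18.0       # Eq. 14.4, valid when no ties are present
    z_score = T / np.sqrt(var_T)
    significant = abs(z_score) > norm.ppf(1 - alpha / 2)   # two-sided test at level alpha

    # Sen's slope: median of the slopes of all pairs of observations
    slopes = (z[j] - z[i]) / (t[j] - t[i])
    sen_slope = np.median(slopes)
    return T, z_score, significant, sen_slope
```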

Figure 14.4 shows an example of the Mann-Kendall significance level for MODIS (Moderate Resolution Imaging Spectroradiometer) TERRA maximum land surface temperature trends, as well as trend values estimated by Sen’s slope approach for the whole globe, and a synthesis image which presents Sen’s slope trend values for Mann-Kendall values above the 90 % confidence level. These maps show a good spatial homogeneity, which confirms the validity of the applied pixel-based approach. Obviously, the relatively short time span of MODIS data (10 years) does not allow for climate studies, although the observed trends can easily be related to climate change impacts (increased air temperatures in boreal areas for example – IPCC 2007). In order to be able to analyze climate related trends, a longer time series is needed, from the AVHRR instrument for example, provided that the orbital drift effect is adequately corrected (Sect. 14.2.2).

Fig. 14.4

Global trends for MODIS TERRA land surface temperature between 2001 and 2010, as retrieved through the Mann-Kendall statistical framework. These maps correspond to significance level (top), trend values (middle), and trend values with a confidence level above 90 % (bottom)

Finally, a novel technique has been developed by Verbesselt et al. (2010), which consists of the detection of breakpoints in a time series (BFAST – Breaks For Additive Seasonal and Trend). Although developed with the application to vegetation indices in mind, this technique can perfectly well be applied to surface temperature time series, and can provide interesting insight into the timing of surface changes, which is useful for attributing causes to the observed changes.

14.4 Conclusions

Time series analysis of thermal data is a relatively novel field, due to the low availability of temporally coherent datasets. However, recent advances in time series pre-processing (such as the ones presented here) and new sensors will result in an increased interest in this field. Added to this is the general concern regarding global warming, whose assessment is largely based on trends observed in air temperatures, although surface temperature is a better indicator of ecosystem suitability for existing vegetation and animal species, whether aquatic or terrestrial. Therefore, thermal time series could allow for an improved assessment of global warming impacts (plant phenology, pest control, food security, etc.).