Introduction

Yellow rust, caused by Puccinia striiformis f. sp. tritici (Pst), is among the most damaging diseases of winter wheat (Triticum aestivum L.) in Western Europe. Since 2013, an increased frequency of yellow rust epidemics has been observed in Europe due to the spread of new strains that can infect cultivars previously considered resistant (Hovmøller et al. 2016). Puccinia striiformis is an obligate biotrophic pathogen that is controlled by an integrated pest management approach relying primarily on host plant resistance and on fungicide applications triggered by control thresholds (Jørgensen et al. 2014). Environmental conditions favouring yellow rust infections in spring and summer were recently reported in detail by El Jarroudi et al. (2017). Beyond summer conditions, it is well established that the pathogen's probability of survival decreases in cold winters (Chen 2005; Gladders et al. 2007), such as also occur in Western Europe. In the USA, forecast models were developed that used accumulated negative degree days (daily temperatures <7 °C) between 1 December and 31 January (Line 2002). Similar low temperature sums may, however, be obtained either from moderately low temperatures (that may still allow pathogen survival) sustained over the entire period of observation, or from short periods of extreme cold that kill the pathogen. Both scenarios can result in similar numbers of degree days, but their epidemiological outcomes differ. Furthermore, the thermal behaviour of yellow rust strains differs among strains, with significant shifts over time (de Vallavieille-Pope et al. 2018). More precise and more recent information on relevant time frames and temperatures is therefore needed to enhance yellow rust forecasts. To allow regular updates in response to climate change, as well as transferability to other regions, a simple and generally applicable methodology for identifying relevant time frames and temperatures is required. The purpose of the present analysis was to identify time frames and temperature ranges associated with yellow rust epidemics in recent years. To reach this objective, a generally applicable methodology is proposed. The predictive value of periods with significant temperature differences between epidemic and non-epidemic cases was tested by leave-one-out cross-validation.
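To make the ambiguity of such degree-day sums concrete, the following minimal sketch (with hypothetical temperature scenarios, not data from Line 2002) shows two contrasting winters that accumulate identical negative degree-day totals below the 7 °C base despite very different implications for pathogen survival:

```python
# Illustrative sketch: two hypothetical December-January temperature scenarios
# can yield the same accumulated negative degree-day sum (base 7 °C, as in
# Line 2002) despite very different epidemiological outcomes.

BASE = 7.0  # °C; days colder than this accumulate negative degree days

def negative_degree_days(daily_means):
    """Sum of (T - BASE) over all days with T below BASE (a negative number)."""
    return sum(t - BASE for t in daily_means if t < BASE)

# Scenario A: 30 moderately cool days at 4 °C (pathogen survival still possible).
scenario_a = [4.0] * 30
# Scenario B: mostly mild days plus a short, extreme cold snap.
scenario_b = [8.0] * 24 + [-8.0] * 6  # 6 days at -8 °C may kill the pathogen

print(negative_degree_days(scenario_a))  # -90.0
print(negative_degree_days(scenario_b))  # -90.0 -> identical sums, different outcomes
```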

Materials and methods

Plant disease assessments

Yellow rust disease data were gathered during field trips to four experimental sites (Burmerange: 6.28 E | 49.52 N; Reuler: 6.04 E | 50.06 N; Everlange: 5.95 E | 49.78 N; and Bettendorf: 6.19 E | 49.87 N) during seven growing seasons from 2010 to 2016. The experiments at Reuler and Burmerange were conducted in commercial wheat fields, while the experimental plots at Everlange and Bettendorf were part of an experimental farm managed by the Lycée Technique Agricole d'Ettelbruck (El Jarroudi et al. 2009). The experimental farm was located at Everlange from 2010 to 2013 and transferred to Bettendorf in 2014; thus, no data from Bettendorf are available before that year. A commercial wheat field was monitored at Everlange after 2013. Disease monitoring was done visually each year between plant growth stages (GS) 31 (first node detectable) and 65 (anthesis half-way), corresponding approximately to the period between mid-April and the end of June. The flowering period is the latest time at which fungicide may be applied to a winter wheat crop (https://saturn.etat.lu/tapes/). Each year, the main shoots of ten plants from four replicate plots (plot size: 7.5 by 1.5 m) were marked at the beginning of the growing season. Disease severity on the upper seven leaves was determined weekly throughout the growing season as described in detail by Junk et al. (2016). Disease severity was subsequently translated into disease incidence (for further details see below), because the control threshold used for distinguishing epidemic from non-epidemic cases is given as incidence rather than as severity. All data used in the present study were obtained in fungicide-untreated plots.

To ensure reproducibility of the visual disease assessments, an online training tool of the Julius Kühn Institute was used (http://prozentualer-befall.julius-kuehn.de/schadbilder.php?show=8). A complete overview of the 29 case studies available for the present study is given in Table 1.

Table 1 Location, cultivar, growth stage, date and day of the year when the control threshold for yellow rust was exceeded in winter wheat grown in Luxembourg

The control threshold: a criterion for distinguishing relevant strong epidemics from economically irrelevant weak epidemics

Within the concept of integrated pest management (IPM), the control threshold is an estimate of the disease level at which the grower must intervene to prevent an economic loss (Zadoks 1985). It is the disease level that causes an economic loss approximately equal to the cost of pathogen control. Usually, the costs incurred by one pesticide spray at the farm level, including the pesticide itself, labour, machine wear, diesel and so on, are used for estimating control thresholds. The control threshold thus reflects, on the one hand, the disease level that can be tolerated without resorting to pesticide use and, on the other hand, the disease level at which economic losses become highly likely if the disease is left uncontrolled. It is therefore a well-suited criterion for distinguishing practically relevant strong epidemics from practically irrelevant weak epidemics. Cases where the control threshold was reached will subsequently be referred to as epidemic cases, while cases where the control threshold was not reached will be referred to as non-epidemic cases. The control threshold for yellow rust on winter wheat according to Beer (2005), namely 30% disease incidence on the stem or on the upper three leaves during plant growth stages 31–61, was used for the present study. Disease incidence was plotted against DOY (day of the year; data not shown), and the DOY when the control threshold was reached was retained for each case study and recorded in Table 1.
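As a hedged numerical illustration of this break-even logic (all figures are hypothetical and not taken from Beer 2005 or Zadoks 1985):

```python
# Illustrative only: the control threshold is the disease level at which the
# expected loss from leaving the disease uncontrolled roughly equals the cost
# of one spray. All numbers below are assumed for demonstration.
spray_cost = 80.0        # EUR/ha: fungicide + labour + machinery + fuel (assumed)
grain_price = 180.0      # EUR/t (assumed)
expected_yield = 8.0     # t/ha (assumed)
loss_fraction = 0.06     # 6% yield loss expected if left uncontrolled (assumed)

expected_loss = grain_price * expected_yield * loss_fraction  # 86.4 EUR/ha
print(expected_loss >= spray_cost)  # True -> control pays off at this disease level
```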

Weather data acquisition

Time series of daily air temperatures were retrieved from the website www.agrimeteo.lu, where data from the automatic weather stations of the Grand Duchy of Luxembourg (GDL) are displayed. Air temperature was measured at a height of 2 m. The following weather stations were used: Reuler for the Reuler site, Useldange for Everlange, Remich for Burmerange, and both Bettendorf and Fouhren for the Bettendorf site (Fig. 1). Because the weather station at Bettendorf only became available in 2015, data from Fouhren were used to cover the Bettendorf site in 2014. A common starting point was needed for the analysis of the temperature time series. This starting point should be easy for farmers to determine, and it should be a proxy for a process with biological relevance. Since the date of sowing is known to farmers, it was used as the starting point; the temperature time series analysed here therefore generally start at the date of sowing, and time is subsequently expressed as days after sowing (DAS). Furthermore, a meaningful, stable and commonly available time slice for temperature measurements had to be defined. Average daily temperatures were chosen because they are commonly available from many weather stations and models.
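A minimal sketch of the resulting time base, assuming records of daily mean temperatures keyed by calendar date (the record layout is an assumption for illustration):

```python
# Minimal sketch of aligning daily mean air temperatures on a
# days-after-sowing (DAS) axis, the common time base used in this study.
from datetime import date

def align_on_das(sowing_date, records):
    """records: iterable of (date, daily mean temperature in °C).
    Returns {DAS: temperature}, with DAS = 0 on the sowing date."""
    return {(d - sowing_date).days: t for d, t in records}

# Example: a crop sown on 10 October with two subsequent daily records.
records = [(date(2013, 10, 11), 11.2), (date(2013, 10, 12), 9.8)]
print(align_on_das(date(2013, 10, 10), records))  # {1: 11.2, 2: 9.8}
```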

Fig. 1 Locations of weather stations and experimental fields

Statistical analyses

To test whether location, cultivar, cultivar susceptibility ranking and year had a significant effect on whether a relevant epidemic occurred (control threshold exceeded) or not (control threshold not exceeded), two-sided chi-square tests were used (software package SPSS version 19, IBM Corporation, Armonk, New York, USA).
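As an illustration of this test outside SPSS, the following sketch applies SciPy's chi-square test of independence to a hypothetical epidemic-by-year contingency table (the counts are illustrative, not the study data):

```python
# Sketch of a two-sided chi-square test of independence with SciPy instead of
# SPSS, on a hypothetical 2 x 7 contingency table (epidemic yes/no x year).
import numpy as np
from scipy.stats import chi2_contingency

# rows: control threshold exceeded (yes, no); columns: years 2010-2016
table = np.array([
    [3, 0, 0, 1, 4, 4, 5],   # epidemic cases per year (hypothetical counts)
    [1, 3, 3, 3, 1, 1, 0],   # non-epidemic cases per year (hypothetical counts)
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.3f}, df = {dof}")
```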

To test for potential effects of location and cultivar on the DOY when the control threshold for yellow rust on winter wheat was reached, an analysis of covariance (ANCOVA; SPSS) was used. The DOY was the dependent variable, location and cultivar were the independent variables, and the year of harvest was included as a covariate. Approximately Gaussian distributions and approximately homogeneous variances were confirmed by visual assessment using box plots.
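A roughly equivalent analysis can be sketched with the statsmodels package; the data frame, file name and column names below are assumptions for illustration:

```python
# Sketch of an analysis of covariance comparable to the SPSS model described
# above, using statsmodels (column names and input file are assumed).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("epidemic_cases.csv")  # hypothetical file, one row per case study
# DOY when the threshold was reached ~ location + cultivar, with year as covariate
model = smf.ols("doy ~ C(location) + C(cultivar) + year", data=df).fit()
print(anova_lm(model, typ=2))  # Type II sums of squares, one P value per term
```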

For each day after sowing, average daily air temperatures were compared between epidemic and non-epidemic cases using non-parametric Mann-Whitney U-tests (SPSS). Differences were considered significant whenever P values below 0.05 were obtained.
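The day-by-day comparison can be sketched as follows, assuming the temperatures are organized as dictionaries mapping each DAS to the list of case-study temperatures (this data layout is an assumption):

```python
# Sketch of the day-by-day comparison: for each DAS, compare daily mean
# temperatures of epidemic vs. non-epidemic cases with a Mann-Whitney U-test.
from scipy.stats import mannwhitneyu

def significant_days(epidemic, non_epidemic, alpha=0.05):
    """epidemic, non_epidemic: dicts {DAS: [temperatures across case studies]}.
    Returns the DAS values with significantly different temperatures."""
    days = []
    for das in sorted(set(epidemic) & set(non_epidemic)):
        stat, p = mannwhitneyu(epidemic[das], non_epidemic[das],
                               alternative="two-sided")
        if p < alpha:
            days.append(das)
    return days
```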

Leave-one-out cross-validation (LOOCV) was used to test the predictive value of the significant temperature differences that were found. Details of the method were described previously by Beyer et al. (2012).
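Since the exact classification rule of Beyer et al. (2012) is not restated here, the following sketch illustrates the LOOCV principle for a single DAS using a simple nearest-group-mean rule, which is an assumption for demonstration rather than the published procedure:

```python
# Hedged LOOCV sketch for one DAS: each case is left out in turn and classified
# by whichever group mean (epidemic or non-epidemic temperature on that day)
# its own temperature is closer to. The nearest-mean rule is assumed here;
# it assumes both classes remain represented among the retained cases.
def loocv_accuracy(cases):
    """cases: list of (temperature at the chosen DAS, is_epidemic: bool)."""
    correct = 0
    for i, (temp, label) in enumerate(cases):
        rest = cases[:i] + cases[i + 1:]
        epi = [t for t, l in rest if l]
        non = [t for t, l in rest if not l]
        mean_epi = sum(epi) / len(epi)
        mean_non = sum(non) / len(non)
        predicted = abs(temp - mean_epi) < abs(temp - mean_non)
        correct += predicted == label
    return correct / len(cases)
```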

Results

Yellow rust epidemics exceeding the control threshold according to Beer (2005) were observed in 17 out of 29 cases (Table 1). The occurrence of epidemics (control threshold exceeded: yes or no) was independent of location (P = 0.519), while the effect of cultivar was close to the significance level (P = 0.059). The effect of the cultivar susceptibility rankings (Table 1) was non-significant (P = 0.116), confirming that cultivar effects were non-significant within the set of cultivars used here. Yellow rust epidemics were observed more frequently than would be expected under the assumption of a random temporal distribution of epidemics among years in 2010, 2014, 2015 and 2016, and less frequently in 2011, 2012 and 2013 (P = 0.013).

For the 17 case studies in which epidemics were observed, the day of the year when the control threshold was reached was independent of year (P = 0.856), location (P = 0.704) and cultivar (P = 0.520). A trend towards earlier epidemics in more susceptible cultivars, observed when cultivar susceptibility rankings rather than the cultivars themselves were analysed, was non-significant (P = 0.148).

Air temperature was significantly lower at 7, 8, 21–28, 69, 80, 81, 106, 109, 132–134, 136, 143–145, 171, 186, 187, 194, 195 and 205 DAS when no epidemic was observed compared with epidemic cases, and significantly higher at 3 DAS (Table 2). Fitting Gaussian regression lines (general equation: y = y0 + a·exp(−0.5·((x − x0)/b)²), where y = temperature (°C), y0 = y-axis intercept, a and b = regression parameters, and x0 = the time when the temperature reaches its minimum) to the temperature time courses revealed that air temperature was on average very similar at 0 and 225 DAS, but during the coldest period of the year about one degree Celsius lower before non-epidemic cases (0.7 °C) than before epidemic cases (1.7 °C; Fig. 2a). The longest periods with significantly lower temperatures before non-epidemic cases were observed between 21 and 28 DAS, between 132 and 134 DAS and between 143 and 145 DAS (Table 2, Fig. 2). LOOCV revealed that, among the periods with significantly different temperatures, the differences observed at 25 and 143 DAS allowed a correct classification of the left-out case studies in 86.7% and 83.3% of all cases, respectively (Fig. 2b). These were the best results obtained when testing every observed temperature difference, suggesting that in at least 13.3% of the cases factors beyond temperature played a decisive role.
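A fit of the Gaussian equation named above can be sketched, for example, with SciPy's curve_fit; the arrays below are synthetic demonstration data and rough starting values, not the measured time courses:

```python
# Sketch of fitting the Gaussian regression line from the text,
# y = y0 + a*exp(-0.5*((x - x0)/b)**2), to a mean temperature time course.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, y0, a, x0, b):
    return y0 + a * np.exp(-0.5 * ((x - x0) / b) ** 2)

# das: days after sowing; temp: mean daily temperature per DAS (°C)
das = np.arange(0, 226)
temp = 12.0 - 10.0 * np.exp(-0.5 * ((das - 130) / 55.0) ** 2)  # synthetic data

params, _ = curve_fit(gaussian, das, temp, p0=[12.0, -10.0, 130.0, 50.0])
y0, a, x0, b = params
print(f"minimum temperature = {y0 + a:.1f} °C at {x0:.0f} DAS")
```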

Table 2 Average temperature (°C) and standard error (SE) between sowing and 1 June of the following year for the 17 case studies in which the control threshold for yellow rust in winter wheat was reached and the 12 case studies in which it was not
Fig. 2 a Winter temperature time courses for 17 epidemic case studies (red) and 12 non-epidemic case studies (green). Epidemic and non-epidemic case studies were separated according to the control threshold concept of Beer (2005). Curves were fitted using the equation y = y0 + a·exp(−0.5·((x − x0)/b)²), where y = temperature (°C), y0 = y-axis intercept, a and b = regression parameters, and x0 = the time when the temperature reaches its minimum. Coefficients of determination were 0.88 and 0.90 for the non-epidemic and the epidemic cases, respectively. Plot symbols represent means; error bars indicate the standard error of the mean. b Percentage of case studies that were correctly categorized by leave-one-out cross-validation, displayed for each period with significantly different (Mann-Whitney U-test, P < 0.05) air temperatures as shown in (a). The leave-one-out cross-validation was carried out according to the procedure described in Beyer et al. (2012) and Eickermann et al. (2015). The position of the bars on the x-axis in (b) indicates the temporal position of the periods when significant differences between the time courses shown in (a) were observed

There was a weak trend towards warmer temperatures over the range of years tested, both in the 21–28 DAS and in the 132–134 DAS time frames, but both effects were non-significant (P = 0.333 and P = 0.378, respectively).

Discussion

Time frames were identified in which the temperatures measured prior to yellow rust epidemics in winter wheat differed significantly from those measured prior to non-epidemic cases. Whether an epidemic occurred could be correctly predicted from the temperature differences within the time frames identified here in up to 87% of the cases.

Short periods with significant temperature differences are unlikely to allow for the completion of relevant processes such as spore germination or frost desiccation. Longer periods with consistently different temperatures are less likely to be causally unrelated to the observed epidemic outcome. The longest periods with significantly lower temperatures before non-epidemic cases were observed between 21 and 28 DAS and between 132 and 134 DAS (Table 2, Fig. 2), suggesting that the temperatures during these periods were relevant for the survival of the pathogen under field conditions. The first period, between 21 and 28 DAS, corresponds to the end of October and could be related to the establishment of the disease on the newly sown, emerging crop (Zadoks plant growth stages between 11 and 19). This hypothesis is supported by the fact that the average temperature between 21 and 28 DAS was 10.79 ± 0.26 °C, which corresponds well with a recent study by de Vallavieille-Pope et al. (2018), in which the highest infection efficiencies of various yellow rust strains were observed at 10 °C. To the best of our knowledge, this is the first report providing evidence for a crucial effect of low temperature on yellow rust epidemics during the period of winter wheat emergence under field conditions. For brown rust (Puccinia triticina), a fungus of the same genus as yellow rust, the predictive value of temperatures between 9 November and 19 December was demonstrated for France (Gouache et al. 2015), which could also be related to the infection of the newly emerged winter wheat crop. The second period identified in the present study, between 132 and 134 DAS, corresponds to the beginning of February (Zadoks plant growth stages around 29), the coldest period of the year, when the cold hardiness of the pathogen faces its strongest challenge. Between 143 and 145 DAS, another period with significant differences was observed. In this latter case, it is unlikely that the temperatures measured for the non-epidemic cases overtaxed the cold hardiness of the pathogen, because the low temperatures observed in this period for the non-epidemic cases were on average higher than those observed for the epidemic cases between 132 and 134 DAS (Table 2). It may be speculated that temperature oscillations have a negative impact on the survival of yellow rust in winter. LOOCV suggested that whether a practically relevant yellow rust epidemic occurs can be predicted from autumn and winter temperatures with accuracies of up to 87% (Fig. 2b).

The increasing frequency of yellow rust epidemics observed in Europe since 2013 may be related to (i) increasingly mild winter temperatures and (ii) the spread of more aggressive strains (Hovmøller et al. 2016). In fact, we found slightly increasing temperature trends in the periods that are putatively relevant for yellow rust survival in winter, but the effects were clearly non-significant, suggesting that the rapid increase in epidemic frequency was primarily triggered by the spread of more aggressive strains rather than by a slow temperature increase that remained non-significant within the period of observation. At the level of inter-annual variability observed here, longer observation periods are needed to identify significant trends over time.

Cultivar susceptibility to yellow rust had no significant effect on whether a yellow rust epidemic occurred or not. However, even the non-significant cultivar effects were close to the significance level, and therefore the absence of a significant cultivar effect should not be taken for granted in future studies, particularly when regions with more aggressive strains and/or other cultivars are studied.

The significance of the year effect might be related to annual temperature differences. Average temperatures during the coldest period, between 132 and 134 DAS, were 1.55 ± 0.34 °C for the mostly epidemic years 2010, 2014, 2015 and 2016, while they were −2.97 ± 0.98 °C for the mostly non-epidemic years 2011, 2012 and 2013.

Plant disease forecast models have predicted various variables, such as infection periods (Magarey et al. 2005; El Jarroudi et al. 2017), infection efficiency (Lalancette et al. 1988), pathogen toxin production (Giroux et al. 2016), minimum conidial densities needed for infection (Walter et al. 2016), infection severity classes (Zhao et al. 2017) and newly expressed symptoms (Molitor et al. 2016). However, to be directly usable in IPM, a model needs to predict whether, and possibly when, control thresholds will be reached, because this is the criterion that – despite some limitations discussed in detail by Zadoks (1985) – triggers pesticide use within the framework of IPM. To date, there have been very few attempts to predict the exceedance of control thresholds. Most models in plant pathology focus on variables with relevance for epidemiology, but with unclear relevance for the question of whether a control threshold will be reached and, hence, whether pesticide application will make sense from an economic point of view. Future efforts should focus more on predicting parameters with direct relevance at a predefined level, be it the farm, regional or global level, to facilitate a more widespread use of plant pathology models.

The approach proposed here will produce the clearest results if fungal strains with identical temperature requirements were responsible for the epidemics within the period and region of observation. In reality, the temperature requirements of fungal strains differ to some extent (see, for example, de Vallavieille-Pope et al. 2018). If, for instance, two populations of strains with extremely contrasting temperature requirements were present, the variability of temperatures within the epidemic cases as well as within the non-epidemic cases would be large, and the detection of significant differences between epidemic and non-epidemic cases would become increasingly unlikely with increasing differences in the temperature responses of the two populations. Only temperature differences that are statistically significant despite the variability in the temperature requirements of the fungal strains can be detected by the approach proposed here. If the average temperature responses of fungal strains occurring in other regions differ from those of the strains in Luxembourg, different time frames with different temperatures will be identified when the present approach is applied to disease and weather data from those regions, and these time frames and temperatures will have prognostic value for the region of origin of the data. With regard to other climatic conditions, the approach proposed here appears robust: if, for instance, winter starts later and ends earlier in more southern regions, correspondingly different time frames and temperatures will be identified. The approach may be used for identifying critical periods and temperatures for all phenomena that genuinely respond to temperature. However, a sufficient number of epidemic and non-epidemic cases is essential for reliable statistical testing. The present approach will therefore be most valuable in regions where epidemics occur in about 50% of all cases, so that the remaining 50% can be used for comparison. In regions where a disease occurs every year, or never, the present approach cannot be used owing to a lack of either non-epidemic or epidemic cases.

Even though it is highly likely that the relevant time frames, in which significant differences between the temperatures preceding epidemic and non-epidemic cases occur, vary with region (particularly with latitude), the methodology demonstrated here can be used to identify the relevant time frames and temperature differences for any region where air temperature data and disease incidences are available. Furthermore, it might be advantageous to examine temperature differences in relation to plant growth stages rather than in relation to DAS, to better take the impact of temperature on plant development into account.