1 Introduction

Various physical phenomena are characterized by a first-order differential equation whose solution is an exponential function. The determination of its parameters, especially the decay time τ, is performed for instance in nuclear physics and chemistry, medical diagnostics, absorption spectroscopy and phosphor thermometry [1, 2]. These processes are universally described by

$$I(t) = \sum_{i=1}^{M} {A_i \hbox{e}^{-\frac{t}{\tau_i}}},$$
(1)

with the parameters M (number of exponential terms), A i (amplitude of exponential i) and τ i (decay time of exponential i) for i  ∈ {1, …, M}. When the underlying physics of the observed process can be simplified to a mono-exponential approach, Eq. 1 reduces to

$$I(t)=A \hbox{ e}^{-\frac{t}{\tau}}.$$
(2)

However, when I(t) describes an experimentally derived waveform, Eq. 2 is extended to

$$I(t)=A \hbox{ e}^{-\frac{t}{\tau}}+b,$$
(3)

where the parameter b denotes the time-independent offset signal that originates from digitization of the waveform, dark current of the detector or background signals during acquisition, such as radiation in the case of optical diagnostics [3]. Furthermore, experimental noise may be superimposed on the waveform:

$$I(t)=A \hbox{ e}^{-\frac{t}{\tau}}+b+\epsilon(t),$$
(4)

where \(\epsilon(t)\) is the noise term, which primarily consists of shot noise, dark current noise and quantization noise, excluding long-term drifts.

Introducing these additional parameters imposes several requirements on the fitting algorithm used to determine the decay time τ. The parameter b has to be determined within the fitting algorithm if it is not extracted from the experimental data in advance. The noise term \(\epsilon(t)\) is randomly distributed and superimposed on the signal, which introduces an uncertainty in the determination of τ.

For a reliable fitting algorithm, three major requirements are formulated:

  1. Accuracy: Systematic errors depending on the signal-to-noise ratio (SNR) must be minimized.

  2. Precision: A high repeatability and thus low standard deviations should be achieved.

  3. Computational time: Minimizing the duration of each single computation is beneficial for real-time applications [4] and for processing large data sets [5, 6].

The goal of this work is to identify the most common methods for the computation of τ, compare these in terms of the above-mentioned requirements and identify the major influencing factors.

2 Methods for mono-exponential fitting

In the literature, a large number of methods for the determination of τ exist. Interestingly, each scientific discipline generally adheres to a single fitting method. In spectroscopy, for instance, the application of Fourier transforms [7, 8] to the data is common, while in phosphor thermometry, specifically, nonlinear least-squares methods and regressions are frequently applied [9–12]. The following section describes six of the most common methods for decay time evaluation.

2.1 Nonlinear least squares (NLS)

The most common way of fitting a nonlinear function in the form of Eq. 3 is by directly applying the method of least squares. Here, the experimentally derived values y i are associated with the model function I(t) by the following relation:

$$\chi(\varvec{\theta})=\sum_{i=1}^{n} r_i^2(t_i,\varvec{\theta})$$
(5)
$$=\sum_{i=1}^{n}(y_i-I(t_i,\varvec{\theta}))^2$$
(6)
$$= \| {\bf y} - {\bf I}\|^2.$$
(7)

By minimizing the residuals r i and therewith \(\chi(\varvec{\theta})\), the parameters \(\varvec{\theta}=(A, \tau, b)^T\) are identified [13]. For nonlinear problems such as Eq. 3, iterative numerical methods such as the Gauss–Newton algorithm have to be employed, which transform the nonlinear problem into a sequence of linear ones. An advancement of this method is the Levenberg–Marquardt algorithm, which offers a higher degree of robustness by adding a Tikhonov-type regularization to the linearized problem [14, 15]. In this work, the term NLS refers to the nonlinear least-squares method employing the Levenberg–Marquardt algorithm.
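As an illustration, a minimal sketch of such a 3-parameter NLS fit is given below in Python, using SciPy's Levenberg–Marquardt implementation rather than the authors' Matlab routines; the model function, starting values and the synthetic test data are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_model(t, A, tau, b):
    """Mono-exponential decay with constant offset, Eq. 3."""
    return A * np.exp(-t / tau) + b

def fit_nls(t, y, p0=(1.0, 1e-4, 0.0)):
    """3-parameter NLS fit; method='lm' selects Levenberg-Marquardt."""
    popt, _ = curve_fit(decay_model, t, y, p0=p0, method="lm")
    return popt  # A, tau, b

# Illustrative usage with a synthetic waveform
rng = np.random.default_rng(0)
t = np.arange(2000) * 0.8e-6                        # 0.8 us sampling
y = decay_model(t, 1.0, 1e-4, 0.05) + rng.normal(0.0, 2e-4, t.size)
A_fit, tau_fit, b_fit = fit_nls(t, y)
```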

2.2 Linear regression of the logarithm (LOG)

A rather simple approach for the determination of the parameters of Eq. 3 relies on taking the logarithm of the waveform. If the offset b is known a priori, Eq. 3 can be rewritten as

$$\hbox{ln}\big(I(t)-b\big) = -\frac{t}{\tau}+\hbox{ln}(A).$$
(8)

Subsequently A and τ can be computed by linear least-squares fitting [16].
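A minimal sketch of this LOG estimator is given below, assuming b has been determined beforehand (e.g., from the pre-trigger signal). Discarding samples with I ≤ b is one possible way to handle the undefined logarithm; it is an implementation choice, not prescribed by [16].

```python
import numpy as np

def fit_log(t, y, b):
    """2-parameter LOG fit, Eq. 8: linear regression of ln(I - b).
    Samples with y <= b are discarded, since their logarithm is undefined."""
    mask = y > b
    slope, intercept = np.polyfit(t[mask], np.log(y[mask] - b), 1)
    tau = -1.0 / slope
    A = np.exp(intercept)
    return A, tau
```

The handling of non-positive samples is directly related to the bias of the LOG method at low SNR discussed in Sect. 5.4.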

2.3 Rapid lifetime determination (RLD)

This method was developed by Guggenheim [17] and is based on algebraic relations. If the offset b is known a priori, two equations for two instances of time can be formulated and subsequently rearranged to determine the parameters A and τ:

$$I_1=A \hbox{ e}^{-\frac{t_1}{\tau}}+b$$
(9)
$$I_2=A \hbox{ e}^{-\frac{t_2}{\tau}}+b$$
(10)
$$\Leftrightarrow \tau = \frac{t_1-t_2}{\hbox{ ln}\left(\frac{I_2}{I_1}\right)}$$
(11)
$$A = \frac{I_1}{\tau\left(1-\frac{I_2}{I_1}\right)}.$$
(12)

Furthermore, the method can be extended to include fitting of the offset b by adding a third equation at t 3 [18]. This method has been used regularly over the last decades: Woods et al. [19] exploited it under the name baseline rapid lifetime determination (BRLD) to evaluate fluorescence decay curves, which have been summed in post-processing:

$$I_1=\sum_{i=0}^{S-1} I_i \quad I_2=\sum_{i=S}^{2S-1} I_i \quad I_3=\sum_{i=2S}^{3S-1} I_i,$$
(13)

where S refers to a user-defined temporal interval and I 1, I 2 and I 3 describe the integrals, which have been approximated as sums in analogy to Fig. 1. Ballew and Demas compared the (B)RLD and the NLS and reported a computational-time advantage of three orders of magnitude in favor of the (B)RLD [20, 18].

Fig. 1 Schematic of the BRLD following Woods et al. [19]
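A sketch of a three-sum baseline RLD in the spirit of Eq. 13 is shown below. The closed-form expressions for τ and b follow from the geometric series of an equidistantly sampled exponential and are derived here for illustration only; they are not quoted from [19].

```python
import numpy as np

def fit_brld(y, dt, S):
    """Baseline RLD sketch: three contiguous sums over S samples each
    (Eq. 13). For an equidistantly sampled exponential plus offset, the
    ratio (D0 - D1) / (D1 - D2) equals exp(S * dt / tau)."""
    D0, D1, D2 = y[:S].sum(), y[S:2 * S].sum(), y[2 * S:3 * S].sum()
    tau = S * dt / np.log((D0 - D1) / (D1 - D2))
    b = (D0 * D2 - D1 ** 2) / (S * (D0 + D2 - 2.0 * D1))
    return tau, b
```

Once τ and b are known, the amplitude A follows from any one of the three sums.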

2.4 Fast Fourier transform (FFT)

Kirchner et al. [7] showed that the decay time τ of a mono-exponential decay of the form \(\hbox{ e}^{-\frac{t}{\tau}}\) can be computed by the use of a Fourier transform,

$$F(\omega)=\int\limits_0^{\infty}{\hbox{ e}^{-\frac{t}{\tau}} \hbox{ e}^{i \omega t} \hbox{ d} t}.$$
(14)

Using the relation

$$\int\limits_0^{\infty}{e^{-a t} \hbox{ d} t}=\frac{1}{a}$$
(15)

F(ω) simplifies to

$$F(\omega)=\frac{1}{\frac{1}{\tau}-i \omega}.$$
(16)

If Eq. 16 is multiplied by \(\frac{\frac{1}{\tau}+i \omega}{\frac{1}{\tau}+i \omega}\), τ can be computed [21]:

$$\tau=\frac{ Im(F(\omega))}{\omega Re(F(\omega))}.$$
(17)

For discrete data, the Fast Fourier transform (FFT) has to be applied, and the integral in Eq. 14 has to be approximated as a sum. Everest and Atkinson [21] showed that τ can then be expressed as

$$\tau=\varDelta t \bigg[\hbox{ ln}\left( \frac{Re(F(\omega_k))}{ Im(F(\omega_k))} \hbox{ sin}(\omega_k {\varDelta} t) + \cos(\omega_k \varDelta t)\right)\bigg]^{-1}$$
(18)

with \(\omega_k=2 \pi k / (n \varDelta t)\), leading to an identical τ for every ω k . Furthermore, repeating the derivation for an equation of the form \(I(t)= A \hbox{ e}^{-\frac{t}{\tau}}+b\), including the amplitude A and the background b, leads to the same solution. Consequently, a determination and subtraction of b becomes obsolete [21].
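A sketch of Eq. 18 using NumPy is given below. Note that NumPy's FFT uses the exp(−iωt) sign convention, whereas Eq. 14 is written with exp(+iωt); the transform is therefore conjugated before Eq. 18 is applied. The choice k = 1 is an arbitrary example.

```python
import numpy as np

def fit_fft(y, dt, k=1):
    """FFT method, Eq. 18. The offset b drops out for k != 0."""
    n = y.size
    Fk = np.conj(np.fft.fft(y))[k]     # conjugate: Eq. 14 uses exp(+i*w*t)
    wk_dt = 2.0 * np.pi * k / n        # omega_k * dt
    arg = Fk.real / Fk.imag * np.sin(wk_dt) + np.cos(wk_dt)
    return dt / np.log(arg)
```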

2.5 Linear regression of the sum (LRS)

The basic idea of the LRS approach has been described initially as the phase-plane method by Huen et al. [22]. It uses the fact that the derivative of any order of an exponential function \(I(t)=A \hbox{ e}^{-\frac{t}{\tau}}+b\) retains its original exponential form. Demas and Adamson [23] pointed out that function and integral are of the same form as well:

$$\int\limits_0^t{I}(t') \hbox{ d}t'=\tau (A+b) -\tau I(t)+b t,$$
(19)
$$I(t)=A+b-\frac{1}{\tau} \int\limits_0^t{I}(t') \hbox{ d}t'+\frac{b t}{\tau}.$$
(20)

Equation 20 is a linear equation of the following form:

$$Y(x,t)=a_0+a_1 x+a_2 t,$$
(21)

where a 0, a 1 and a 2 are coefficients and x and t are two independent variables, with x corresponding to the integral in Eq. 20. The three coefficients a i for i  ∈ {0, 1, 2} can be calculated analytically using the method of least squares (see [24] for details). To this end, the following equation system has to be solved:

$$\left(\begin{array}{ccc} n & S_I & S_t\\ S_I & S_{II} & S_{tI}\\ S_t & S_{tI} & S_{tt} \end{array}\right) \left(\begin{array}{c} a_0\\ a_1\\ a_2 \end{array}\right) = \left(\begin{array}{c} S_f\\ S_{fI}\\ S_{ft} \end{array}\right).$$
(22)

If the offset b is known a priori, the linear equation system in Eq. 22 reduces to a 2 × 2 matrix equation. Here, n denotes the number of data points in I(t), while each entry of the matrix and of the right-hand side is either a sum over individual values (one subscript) or a sum over the product of two values (two subscripts):

$$S_g\equiv\sum_{i=0}^{n-1}{g_i}, \quad \quad S_{gh}\equiv\sum_{i=0}^{n-1}{g_i h_i}.$$
(23)

Using I(t), t and the running integral of I(t), which can be approximated as a sum [21]

$$I_i\equiv\int\limits_0^{t_i}{I(t) \hbox{ d}t} \approx \sum_{j=0}^{i}{I(t_j) \varDelta t},$$
(24)

the equation system in Eq. 22 can be solved in order to determine the coefficients a 0, a 1 and a 2.

Matheson [25] extended the method to multi-component exponential functions and suggested renaming it successive integration (SI). Furthermore, he used trapezoidal numerical integration for the evaluation of the integral S I in Eq. 22. Halmer et al. pointed out that this procedure leads to a systematic error. On account of this, an iterative determination of τ was introduced and the method was renamed corrected successive integration (CSI) [24]. Recently, Everest and Atkinson [21] showed that τ can be derived analytically when the integral is treated as a direct sum as shown above; therefore, the method was renamed again to linear regression of the sum (LRS). The coefficients of the exponential function can then be computed by

$$\tau = \frac{\varDelta t}{\hbox{ ln}(1-a_1 \varDelta t)},$$
(25)
$$b=-\frac{a_2}{a_1},$$
(26)
$$A=a_0 \hbox{ e}^{-\frac{\varDelta t}{\tau}}-b.$$
(27)

In the work presented here, the LRS has been implemented following the procedure of Everest et al. [21].
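A sketch of a 3-parameter LRS is given below. Instead of explicitly assembling the normal equations of Eq. 22, the least-squares problem of Eq. 21 is solved directly, which is numerically equivalent. The running integral of Eq. 24 is taken as a cumulative sum including the current sample, a convention under which Eqs. 25–27 reproduce the exact parameters for noise-free data; the time axis is assumed to start at zero and to be equidistant.

```python
import numpy as np

def fit_lrs(y, dt):
    """3-parameter LRS (Eqs. 20-27): linear regression of y against its
    running sum and the time axis."""
    n = y.size
    t = np.arange(n) * dt                       # equidistant, starts at 0
    x = np.cumsum(y) * dt                       # running integral, Eq. 24
    G = np.column_stack((np.ones(n), x, t))     # design matrix of Eq. 21
    a0, a1, a2 = np.linalg.lstsq(G, y, rcond=None)[0]
    tau = dt / np.log(1.0 - a1 * dt)            # Eq. 25
    b = -a2 / a1                                # Eq. 26
    A = a0 * np.exp(-dt / tau) - b              # Eq. 27
    return A, tau, b
```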

2.6 Prony method

This method was developed by Prony [26] as early as 1795 to characterize multi-exponential decays of the form

$$I(t)=\sum_{i=1}^{M}A_i \mu_i^{t/\varDelta t}, \quad \hbox{ with } \quad \mu_i=\hbox{e}^{-\frac{\varDelta t}{\tau_i}}.$$
(28)

When I(t) is discretized by n equidistant values

$$I_j=I(j \varDelta t)\quad \hbox{ with }\quad j=0, 1, 2,\ldots,n-1,$$
(29)

a set of equations of the following form can be formulated [27]:

$$\begin{array}{*{20}c} {A_{1} + A_{2} + \cdots + A_{M} = I_{0} } \\ {A_{1} \mu _{1} + A_{2} \mu _{2} + \cdots + A_{M} \mu _{M} = I_{1} } \\ {A_{1} \mu _{1}^{2} + A_{2} \mu _{2}^{2} + \cdots + A_{M} \mu _{M}^{2} = I_{2} } \\ \vdots \\ {A_{1} \mu _{1}^{{n - 1}} + A_{2} \mu _{2}^{{n - 1}} + \cdots + A_{M} \mu _{M}^{{n - 1}} = I_{{n - 1}} } \\ \end{array}$$
(30)

To solve this nonlinear problem, 2M equations have to be known. Prony accomplished this by introducing the following polynomial [27, 28]:

$$\sum_{i=0}^{M} \alpha_i \mu^{M-i}=\prod_{j=1}^M(\mu-\mu_j)=0, \quad \alpha_0=1.$$
(31)

To determine the coefficients α i (i = 1, 2, …, M), the first equation from the set of Eq. 30 is multiplied by α M , the second equation by α M−1, …, the Mth equation by α 1 and the (M + 1)th equation by α 0 = 1. Subsequently, the sum of these products is computed. In analogy to Eq. 31 this leads to [29]:

$$I_M+\alpha_1 I_{M-1}+\cdots+\alpha_M I_0=0$$
(32)

Repeating this procedure, starting with the second, third, …, (n − M)th equation, yields n − M − 1 further equations:

$$\begin{array}{*{20}c} {I_{M} + \alpha _{1} I_{{M - 1}} + \alpha _{2} I_{{M - 2}} + \cdots + \alpha _{M} I_{0} = 0} \\ {I_{{M + 1}} + \alpha _{1} I_{M} + \alpha _{2} I_{{M - 1}} + \cdots + \alpha _{M} I_{1} = 0} \\ \vdots \\ {I_{{n - 1}} + \alpha _{1} I_{{n - 2}} + \alpha _{2} I_{{n - 3}} + \cdots + \alpha _{M} I_{{n - M - 1}} = 0} \\ \end{array}$$
(33)

Since the experimental values I j (j = 0, 1, 2, …, n − 1) are known, α i (i = 1, 2, …, M) can be computed directly when n = 2M or approximated via least squares when n > 2M. Subsequently, the roots μ i (i = 1, 2, …, M) of the polynomial in Eq. 31 can be calculated. In doing so, the set of nonlinear equations in Eq. 30 becomes linear and the parameters A i (i = 1, 2, …, M) can be extracted easily [30]. In the case of a mono-exponential decay, the computation simplifies, since M = 1 [4, 31]: A set of \(n-\varDelta n\) linear equations can be formulated in analogy to Eq. 33:

$$I_{j+\varDelta n}+I_j \alpha=0, \quad \quad j=0, 1, 2,\ldots,(n-\varDelta n-1),$$
(34)

with \(n/2\geq \varDelta n\geq 1\) and \(\varDelta n\) as a user-defined parameter. The decay time τ then equals

$$\tau=\frac{\varDelta n \varDelta t}{\hbox{ ln}(-\alpha^{-1})}.$$
(35)

According to Zhang et al. [4], α can be computed by superposition or correlation methods; in this work, the correlation method has been implemented as suggested there:

$$\varDelta I_j=I_j-I_{j+\varDelta j}, \,j=0, 1, 2,\ldots,(n'-1),$$
(36)

with \(n'=n-\varDelta j\) and \((n-2) \geq \varDelta j \geq 1.\) α then equals

$$\begin{aligned} \alpha&=-\sum_{j=0}^{n'-\varDelta n-1} (I_{j+\varDelta n}-I_{j+\varDelta n+\varDelta j}) (I_{j+k}-I_{j+k+\varDelta j}) \\ &\quad \times\Bigg(\sum_{j=0}^{n'-\varDelta n-1}(I_j-I_{j+\varDelta j}) (I_{j+k}-I_{j+k+\varDelta j})\Bigg)^{-1}. \end{aligned}$$
(37)

According to Zhang et al. [4], the parameters were set to \(\varDelta j = n/2, \varDelta n = n/4\) and \(k=\varDelta n / 2.\)
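A sketch of this mono-exponential Prony variant with the correlation-based estimate of α (Eqs. 34–37) and the parameter choice Δj = n/2, Δn = n/4 and k = Δn/2 is shown below; the integer rounding of these parameters is an implementation detail.

```python
import numpy as np

def fit_prony(y, dt):
    """Mono-exponential Prony fit via the correlation method (Eqs. 34-37),
    followed by Eq. 35. The offset b cancels in the differences of Eq. 36."""
    n = y.size
    dj, dn = n // 2, n // 4
    k = dn // 2
    n_prime = n - dj
    dI = y[:n_prime] - y[dj:dj + n_prime]       # Eq. 36
    m = n_prime - dn                            # number of summed terms
    num = np.sum(dI[dn:dn + m] * dI[k:k + m])   # numerator of Eq. 37
    den = np.sum(dI[:m] * dI[k:k + m])          # denominator of Eq. 37
    alpha = -num / den
    return dn * dt / np.log(-1.0 / alpha)       # Eq. 35
```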

3 Determination of temporal boundaries

The methods discussed in this work are applied to extract single decay times τ from experimental signals. However, deviations from mono-exponential behavior, influences from the onset of the process and the superimposed noise \(\epsilon(t)\) require limiting the temporal boundaries within which fitting is performed. According to Brübach et al. [32], this can be realized in two ways:

3.1 Predefined fitting window

When using a predefined fitting window, the temporal boundaries are characterized by two user-defined constants c 1 and c 2. The fitting window starts at

$$t_1=t_0+c_1 T$$
(38)

and ends at

$$t_2=t_0+c_2 T,$$
(39)

where t 0 refers to t = 0 and T is the recording length (not to be confused with the fitting window in the following sections). If the observation window T is changed, the temporal boundaries in which fitting is performed will also change. To avoid these circumstances, Brübach et al. [32] developed an alternative approach:

3.2 Iteratively adapted fitting window

To define a fitting window independently of the settings of the detection system, the temporal limits are coupled to the decay time τ itself:

$$t_1=t_0+c_1 \tau,$$
(40)
$$t_2=t_0+c_2 \tau.$$
(41)

Since the decay time τ is not known a priori, t 1 and t 2 have to be initialized using an initial estimate of τ:

$$\tau_0=c_0 T.$$
(42)

Convergence is achieved when the value of the decay time reaches a constant level. In any case the coefficients c 1 and c 2 have to be chosen individually for each application.
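A sketch of this iterative adaptation is given below; fit_func(t, y) stands for any decay-time estimator returning τ (e.g., a thin wrapper around one of the methods of Sect. 2), and the initial coefficient c 0, the convergence tolerance and the iteration limit are illustrative choices, not values taken from [32].

```python
import numpy as np

def iterative_window_fit(t, y, fit_func, t0=0.0, c0=0.2, c1=0.5, c2=3.5,
                         rtol=1e-3, max_iter=20):
    """Iteratively adapted fitting window (Eqs. 40-42): the window limits
    are coupled to the current estimate of tau until tau stops changing."""
    tau = c0 * (t[-1] - t0)                     # initial estimate, Eq. 42
    for _ in range(max_iter):
        mask = (t >= t0 + c1 * tau) & (t <= t0 + c2 * tau)
        if not np.any(mask):                    # window left the record
            break
        tau_new = fit_func(t[mask] - t[mask][0], y[mask])
        if abs(tau_new - tau) <= rtol * tau:    # converged
            return tau_new
        tau = tau_new
    return tau
```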

4 Determination of b

In general, there are two options for the determination of the offset b:

  • Experimental extraction by averaging the signal prior to the start of the decay process

  • Numerical determination using fitting routines

The first option requires the acquisition of data for t < t 0. In the case of the second option, the offset b is an additional parameter in the fitting model. Therefore, with respect to the treatment of the offset, algorithms that compute A, τ and b are termed 3-parameter fits in the following, while methods that solve Eq. 2 are called 2-parameter fits. Relating to this, the methods investigated in this work can be divided into three classes:

  1. Offset subtraction prior to fitting is unavoidable (LOG).

  2. By extending the algorithm, b can be determined additionally (NLS, RLD, LRS).

  3. An offset subtraction is dispensable, since the result is independent of the value of b (FFT, Prony).

The LOG algorithm obviously relies on the experimental determination of b. The second category allows either experimental extraction or numerical determination of the offset, depending on the implementation. The FFT and Prony algorithms form a special case, in which the offset does not have any influence on the determination of τ.

5 Comparison and evaluation

In the following, the algorithms presented in Sect. 2 are analyzed and compared with each other. Effects of temporal discretization, the choice of the temporal boundaries as well as correlations between τ and b are discussed. Furthermore, all algorithms are compared to each other with respect to the above-mentioned criteria and evaluated with regard to precision, accuracy and computational time.

First, the investigations are performed by evaluating simulated data using Eq. 3 for t ≥ t 0 with the parameters listed in Table 1. These parameters were chosen based on the decay of the thermographic phosphor Gd3Ga5O12:Cr and the detection with a photomultiplier tube (see [33] for details). White Gaussian noise has been randomly added to the decay curve in order to model \(\epsilon(t)\). In addition, the signal for t < t 0 consists only of noise randomly distributed around b, which allows for single-shot offset determination for all 2-parameter algorithms. The temporal discretization \(\varDelta t,\) the number of data points n, the timing of excitation t 0 and the SNR are listed along with the decay parameters A, τ and b in Table 1 for an SNR of 5,000 (standard scenario). Since the observed signal is transient, the SNR is defined as the intensity at the beginning of the fitting window, divided by the standard deviation of the background noise. A variation of the SNR has been performed by varying the amplitude A. For a visual impression of the noise levels, Fig. 2 shows three single shots of simulated waveforms with different SNRs. Since the lifetime is known a priori in the case of simulated data, a predefined fitting window is applied in the following sections.
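A sketch of how such a waveform can be generated is given below. Apart from τ = 10⁻⁴ s and Δt = 0.8 μs, which are quoted elsewhere in the text, the numerical values (amplitude, offset, number of samples, pre-trigger length) are placeholders and not the Table 1 values; the noise level is derived from the SNR definition above, interpreting the reference intensity as the offset-free signal at t = 0.5τ, the window start used later in the text.

```python
import numpy as np

def simulate_decay(A=1.0, tau=1e-4, b=0.05, snr=5000.0,
                   dt=0.8e-6, n=2500, n_pre=250, seed=None):
    """Simulated standard-scenario waveform: Eq. 3 for t >= t0 plus white
    Gaussian noise, with a pre-trigger region (t < t0) containing only
    noise around b."""
    rng = np.random.default_rng(seed)
    t = (np.arange(n) - n_pre) * dt            # t0 corresponds to t = 0
    y = np.full(n, float(b))
    after = t >= 0.0
    y[after] += A * np.exp(-t[after] / tau)
    # Noise std from SNR = signal at the window start (taken as 0.5*tau,
    # offset-free) divided by the background-noise standard deviation.
    sigma = A * np.exp(-0.5) / snr
    y += rng.normal(0.0, sigma, n)
    return t, y
```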

5.1 Temporal discretization

The influence of the temporal discretization \(\varDelta t\) on precision has been investigated. To this end, the number of data points n relative to τ has been varied. Subsequently, mean values and standard deviations have been computed.

All algorithms investigated showed similar behavior: An increase in \(\frac{n}{\tau}\) always leads to a decrease in relative standard deviations. Since the resulting dependencies of all algorithms were identical, a rule of thumb could be formulated, stating that an increase of two orders of magnitude in data points per τ leads to a decrease in relative standard deviations of approximately one order of magnitude. This observation can be used to select an appropriate sampling rate during experimental data acquisition.

5.2 Choice of fitting window

In order to determine the ideal choice of the temporal boundaries individually for each algorithm, Dowell et al. [3] introduced the parameter m N , which couples precision and SNR:

$$m_N=\frac{\sigma_\tau}{\tau}\hbox{ SNR},$$
(43)

where σ τ denotes the shot-to-shot standard deviation

$$\sigma_\tau=\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(\tau_i - \overline{\tau})^2},$$
(44)

computed from a set of N individually evaluated decay curves with a mean decay time of \(\overline{\tau}.\) The variable parameter β describes the normalized fitting window

$$\beta=\frac{t_2-t_1}{\tau}.$$
(45)

By plotting m N versus β, a characteristic evolution for each of the algorithms can be evaluated. The position of the minimum of this evolution is termed β opt and indicates maximum precision. A number of investigations used a similar approach, all of them varying t 2 in Eq. 45, while t 1 was set to t 0 [4, 21, 34].

For evaluation of the methods presented in this work, a variation of β has been performed at a constant SNR of 5,000. For every β, 10,000 decays have been simulated and evaluated individually. Since the SNR does not change, relative standard deviations are used here as an indicator for precision instead of m N .
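The procedure can be sketched as follows, here with the 2-parameter LOG estimator, t 1 = t 0, a known offset and far fewer repetitions than the 10,000 used in the actual study; all numerical values are illustrative.

```python
import numpy as np

def relative_std_vs_beta(betas, A=1.0, tau=1e-4, b=0.05, snr=5000.0,
                         dt=0.8e-6, n_shots=1000, seed=1):
    """Monte Carlo estimate of the relative standard deviation of tau as a
    function of the normalized fitting window beta (Eq. 45, t1 = t0),
    using the 2-parameter LOG estimator with b known a priori."""
    rng = np.random.default_rng(seed)
    out = []
    for beta in betas:
        m = max(int(round(beta * tau / dt)), 3)      # samples in window
        t = np.arange(m) * dt
        taus = np.empty(n_shots)
        for s in range(n_shots):
            y = A * np.exp(-t / tau) + b + rng.normal(0.0, A / snr, m)
            z = y - b
            good = z > 0.0                           # logarithm defined
            slope = np.polyfit(t[good], np.log(z[good]), 1)[0]
            taus[s] = -1.0 / slope
        out.append(taus.std(ddof=1) / taus.mean())
    return np.array(out)
```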

5.2.1 Influence of t 1

In Fig. 3, the relative standard deviation is plotted as a function of β for different values of t 1 and varying t 2, exemplarily for the LRS (2-parameter) algorithm. Two effects can be observed: On the one hand, relative standard deviations increase with increasing t 1, which is caused by the decrease in signal intensity for a late fitting window. On the other hand, β opt is identical for all curves plotted. This indicates that β opt is independent of the location of the fitting window and depends solely on the window length |t 2 − t 1|. All other algorithms, which are not presented here for the sake of brevity, show similar behavior.

5.2.2 Influence of temporal discretization

Figure 4, similarly to Fig. 3, presents relative standard deviations versus β for the 2-parameter version of LRS. This time, the temporal discretization in terms of data points n per τ has been varied, while t 1 = t 0. It is evident that the temporal discretization influences β opt drastically. For \(\frac{n}{\tau}=10,\) the optimum of β is about 8. Increasing the temporal discretization shifts the position of β opt toward smaller values of β (β opt, 100 = 1.5 and β opt, 1,000 = 0.8). The other algorithms investigated in this work show similar tendencies, albeit with different values, as discussed in the next section.

These results indicate that, independent of the algorithm, no universally optimum position of the fitting window exists, since the temporal discretization changes β opt. For real measurements, where τ represents a physical quantity that may change from case to case by several orders of magnitude, n is fixed within a certain temporal regime. In turn, the ratio \(\frac{n}{\tau}\) varies and with it the relative standard deviation.

5.2.3 Comparison

Although β opt cannot be determined explicitly and universally for each method without including further parameters, the evolution of the relative standard deviation versus β reveals individual properties of the respective algorithm. Therefore, this relationship is plotted in Fig. 5 with t 1 = t 0 and a constant temporal discretization \(\frac{n}{\tau}=125.\)

It is obvious that not only the value of β opt, but also the level and width of each minimum are characteristic for each algorithm. All 3-parameter methods, including FFT and Prony, have higher values for β opt than the 2-parameter algorithms. This indicates that the 3-parameter algorithms require more data points and/or a wider fitting window to reach their optimum in precision. The methods LOG and RLD (2-parameter) do exhibit an optimum at rather low values of β, but a small deviation from β opt leads to a considerable increase in standard deviations. The 2-parameter versions of NLS and LRS reach their optimum precision for relatively small values of β. However, an increase in β results here in a rather small (LRS) or even no (NLS) loss in precision.

Interpreting the global evolution of all curves leads to the conclusion that for β < β opt the number of data points is too small to guarantee good precision, while for β > β opt the influence of noise increases. Therefore, individual behavior in the presence of changing SNRs is expected, which will be discussed in Sect. 5.4.

5.3 Correlation of τ and b

Physically, no correlation between τ and b should exist; however, such a correlation is introduced erroneously by the fitting algorithms. This dependency can be quantified by the cross-correlation coefficient r τ b [35]:

$$r_{\tau b}=\frac{\sum\nolimits_{i=1}^{N}(\tau_i-\overline{\tau})(b_i-\overline{b})}{\Bigg( \sum\nolimits_{i=1}^N(\tau_i-\overline{\tau})^2 \sum\nolimits_{i=1}^N (b_i-\overline{b})^2 \Bigg)^{\frac{1}{2}}},$$
(46)

with N as the number of individually evaluated decays i; \(\overline{\tau}\) and \(\overline{b}\) correspond to the mean values of decay time and offset, respectively. The cross-correlation coefficient r τ b ranges from −1 to 1, where r τ b  = 0 corresponds to no correlation between τ and b, while r τ b  =  ±1 indicates a positive or negative linear relationship, respectively.
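Given the τ and b values of a set of individually evaluated decays, Eq. 46 can be computed directly; it is identical to the Pearson correlation coefficient of the two samples (e.g., np.corrcoef).

```python
import numpy as np

def cross_correlation(taus, bs):
    """Cross-correlation coefficient r_tau_b of Eq. 46."""
    taus, bs = np.asarray(taus, float), np.asarray(bs, float)
    dtau = taus - taus.mean()
    db = bs - bs.mean()
    return np.sum(dtau * db) / np.sqrt(np.sum(dtau ** 2) * np.sum(db ** 2))
```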

In Fig. 6, r τ b is plotted versus the normalized fitting window β with t 1 = t 0 and a constant temporal discretization of \(\varDelta t=0.8\, \upmu \hbox{s}\). Each value is based on 10,000 individually evaluated simulated decays. Concerning the 2-parameter algorithms, b was computed by taking the mean value of the signal for t < t 0. For all other methods, b is computed internally by the fitting algorithm. It is obvious that all 3-parameter algorithms show a strong negative correlation between τ and b. For increasing β, this correlation becomes weaker. For the FFT and the 3-parameter versions of NLS and LRS, the correlation decreases continuously, while Prony and RLD (3-parameter) converge to a constant level, which indicates a strong negative correlation within the entire range of values considered. All 2-parameter algorithms exhibit weak or almost no correlation between τ and b for very low values of β. When β increases, a negative correlation appears, which is more pronounced for RLD and LRS than for NLS and LOG. This observation is reasonable, since an inaccurately determined background influences the decay waveform more toward the end than at the beginning of the decay. Concerning the 3-parameter algorithms, this relation is reversed: If fitting is performed for a small value of β, the offset determination is more error-prone. When β increases, the determination of b becomes more accurate and the correlation discussed here becomes weaker. Furthermore, it should be noted that using different fitting windows for the determination of τ and b may lead to different results.

Further studies revealed that the correlation of τ and b is independent of the temporal discretization and the SNR. However, it is evident that the correlation shown here is not only specific to each algorithm but also depends on the length of the fitting window. The authors recommend a careful experimental determination of b, combined with a 2-parameter fitting algorithm and a low value of β, in order to prevent strong negative correlations between τ and b.

5.4 Precision and accuracy

To complement the investigations shown above, a variation of the SNR has been performed. To this end, the standard scenario has been chosen with a predefined fitting window within the temporal boundaries t 1 = 0.5τ and t 2 = 3.5τ, which correspond to values typically applied in our previous work in the context of phosphor thermometry [36, 33]. Using an iteratively adapted fitting window, identical results have been obtained. 10,000 decay waveforms have been simulated with a random SNR between 50 and 5,000 and evaluated with each algorithm.

Figure 7 illustrates the relationship between SNR and evaluated τ by the use of scatter plots for every algorithm investigated. All methods have in common that a decreasing SNR leads to a less precise determination of τ. Regarding the LOG method, decreasing SNRs are additionally accompanied by systematic errors in the determination of τ. This deviation is significant for SNRs below 100. First, a systematic underestimation of τ can be seen, which can be traced back to the nonlinear transformation of the signal by the logarithm. For further decreasing SNRs, an overestimation of τ is observed. This is due to the fact that the logarithm of negative data is not defined, which biases the linear regression toward a less steep slope and therewith toward higher values of τ.

For a quantification of the data in Fig. 7, Table 2 lists the mean values and standard deviations of the data shown, subdivided into two groups of high (500 ≤ SNR ≤ 5,000) and moderate SNR (50 ≤ SNR < 500). While all algorithms listed predict τ accurately for the high SNRs, a decrease in accuracy can be observed in the second group. In this context, all 2-parameter methods show more accurate results than their 3-parameter counterparts. The most accurate results are obtained for NLS (2-parameter) and LRS (2-parameter). Similar findings are observed for precision, which is indicated by low standard deviations: The 2-parameter algorithms show more precise results, with the 2-parameter versions of NLS and LRS performing best.

5.5 Verification

In the preceding sections, a number of algorithms have been characterized and investigated theoretically by the use of simulated decays. To verify these results, experimentally measured decay curves are investigated. To this end, the decays of two different thermographic phosphors at two different temperatures have been evaluated. The thermographic phosphor materials were Gd3Ga5O12:Cr and Mg4FGeO6:Mn, and the data have been taken from previous studies [33, 37]. For each temperature, 200 single shots with a discretization of 2000 and 2500 time steps, respectively, have been evaluated. Following these earlier publications, the fitting parameters of the iteratively adapted window have been set to c 1 = 0.5 and c 2 = 3.5 for Gd3Ga5O12:Cr and to c 1 = 1 and c 2 = 4 for Mg4FGeO6:Mn, respectively.

The results of this investigation are listed in Table 3. For each algorithm, the mean decay time and relative standard deviation are shown. In the last column, the averaged computational time of a single evaluation is shown additionally. Computations have been performed on a 3 GHz dual-core processor using Matlab®, Version R2012a.

Since the true value of the decay time is not known, the accuracy of each algorithm cannot be judged further. With regard to relative standard deviations, the results from the investigations using simulated data are confirmed: In general, the 2-parameter methods provide more precise fitting results than the 3-parameter algorithms. The most precise results are obtained, with one exception, by the 2-parameter versions of NLS and LRS.

Concerning their computational performance, the NLS implementations require the most computational time due to their iterative nature. The LOG algorithm is almost one order of magnitude faster, but is still time-consuming due to the computation of the logarithm of many data points. Owing to their non-iterative nature, all other algorithms have an average computational time on the order of 1 ms.

6 Summary and conclusion

In the present work, a comparison of methods for mono-exponential fitting of decay curves has been performed. The algorithms have been categorized by their ability to include the time-independent background b in the fit (so-called 3-parameter methods). Methods lacking this ability have been termed 2-parameter algorithms, since only the amplitude A and the decay time τ are determined and b has to be known prior to fitting. By evaluating simulated noisy data, several conclusions could be drawn:

  • An increase of two orders of magnitude in data points per τ leads to a decrease in relative standard deviations of approximately one order of magnitude.

    Table 1 Parameters of the standard scenario

    Table 2 Mean values and relative standard deviations (in brackets) for the evaluation of decay times with variable SNRs

    Table 3 Mean values, relative standard deviations (in brackets) and averaged computational time for a single evaluation for 200 decays of two thermographic phosphor materials at two temperatures

    Fig. 2 Simulated decay waveforms with different SNRs

    Fig. 3 Relative standard deviations as a function of β for different values of t 1. Data are obtained from the LRS (2-parameter) algorithm and simulated data using the parameters from Table 1

    Fig. 4 Relative standard deviations versus β for different temporal discretizations

    Fig. 5 Comparison of the characteristic evolution of standard deviations of different algorithms as a function of β

    Fig. 6 Cross-correlation coefficient of τ and b in dependency of β

    Fig. 7 Scatter plots of the investigated algorithms for evaluation of the standard scenario with a SNR between 50 and 5,000. The true decay time is τ = \(10^{-4}\) s

  • The choice of the optimum fitting window depends on several parameters such as the number of data points and the algorithm itself. When the decay time changes and the settings of the detection device remain unmodified, the number of data points per τ changes as well, which prevents maintaining the optimum fitting window. By choosing the right algorithm, this uncertainty can be reduced.

  • By cross-correlating τ and b, it could be shown that each algorithm itself introduces an unphysical cross-dependency of the evaluated decay time and offset. Generally, the correlation changes with the length of the fitting window. 3-parameter methods show completely different behavior than 2-parameter methods. The authors recommend using 2-parameter algorithms with low values for β.

  • Concerning accuracy, the LOG algorithm revealed high systematic errors at low SNRs. Generally, the 2-parameter algorithms showed lower systematic errors at low SNRs than the 3-parameter algorithms.

  • Similar findings were obtained for the precision of the methods: 2-parameter algorithms produced more precise results than the 3-parameter methods. The most precise methods were the 2-parameter versions of NLS and LRS.

Finally, the insights gained from evaluating simulated data sets were verified using experimental data of two thermographic phosphors, decaying at two different temperatures. The results from the simulated data sets were confirmed, since the 2-parameter versions of NLS and LRS delivered the most precise results. Additionally, the averaged computational time for the execution of the fitting functions has been compared, showing that LRS is about 20 times faster than the iterative NLS algorithm. For studies where computational time is a critical parameter, the authors recommend the use of LRS, while NLS should be the choice for studies where computational time is negligible. It is emphasized that, except for the very last section, these studies were performed for ideally mono-exponentially decaying signals. In the case of fitting multi-exponential signals with mono-exponential approaches, which is not unusual in phosphor thermometry, at least the conclusions drawn for the position of the fitting window can be affected.