1 Introduction

The exponential distribution is one of the most widely used distributions for modeling data in reliability theory, queuing theory, and many other fields. For this reason, and due to its simple and tractable form, many conveniently expressed characterizations of this distribution exist. Some of them can be found in, among others, Ahsanullah and Hamedani (2010), Arnold et al. (2008), Balakrishnan and Rao (1998) and Galambos and Kotz (1978).

In recent times, these characterizations have gained popularity because they are useful in the construction of goodness-of-fit tests. Some goodness-of-fit tests for exponentiality are studied in Ahmad and Alwasel (1999), Angus (1982), Jansen van Rensburg and Swanepoel (2008), Koul (1977, 1978), Nikitin (1996), Nikitin and Volkova (2010), Volkova (2010), Jovanović et al. (2015) and Milošević (2016).

There exist different approaches to constructing test statistics. One of them uses Laplace transforms. Baringhaus and Henze (1991) considered a test based on the differential equation that the Laplace transform of the exponential distribution satisfies. Analogous tests for the Rayleigh and gamma distributions were proposed in Meintanis and Iliopoulos (2003) and Henze et al. (2012), respectively. The approach of comparing the theoretical and empirical Laplace transforms was considered in Henze (1993) and Henze and Meintanis (2002a) for the exponential distribution, and in Henze and Klar (2002) for the inverse Gaussian distribution. Meintanis et al. (2007) considered exponentiality tests based on a characterization involving moments.

Also worth mentioning are similar tests based on empirical characteristic functions, considered, e.g., in Henze and Meintanis (2002b) and Gürtler and Henze (2000).

Our approach in this paper is to construct a test based on an equidistribution characterization and the corresponding U-empirical Laplace transforms.

Consider a characterization of the exponential distribution of the form

$$\begin{aligned} \omega _1(X_1,\ldots ,X_m)\overset{d}{=}\omega _2(X_1,\ldots ,X_m), \end{aligned}$$

where \(\omega _1(X_1,\ldots ,X_m)\) and \(\omega _2(X_1,\ldots ,X_m)\) are non-negative homogeneous functions of the i.i.d. random variables \(X_1,\ldots ,X_m\), i.e. for every real number \(c>0\)

$$\begin{aligned} \omega _k(cX_1,\ldots ,cX_m)=c\omega _k(X_1,\ldots ,X_m),\;k=1,2. \end{aligned}$$

Let \(X_1,X_2,\ldots ,X_n\) be a sample from a non-negative continuous distribution function F. For testing the composite hypothesis of exponentiality \(H_0: \;F(x)=1-e^{-\lambda x},\;\lambda >0,\) we propose the family of scale-free test statistics of the integral type

$$\begin{aligned} J_{n,a}=\int \limits _{0}^{\infty }(L^{(1)}_n(t)-L^{(2)}_n(t))\bar{X}e^{-a\bar{X}t}dt, \end{aligned}$$
(1)

where \(\bar{X}\) is the sample mean, a is some positive constant and

$$\begin{aligned} L_n^{(k)}(t)=\frac{1}{n^{[m]}}\sum \limits _{1\le i_1<\cdots <i_m\le n}\sum \limits _{\pi \in \mathrm \Pi (m)}e^{-t\omega _k(X_{i_{\pi (1)}},\ldots ,X_{i_{\pi (m)}})},\quad k=1,2, \end{aligned}$$

where \(n^{[m]}=m!\binom{n}{m}\) and \(\mathrm \Pi (m)\) is the set of all permutations of \(\{1,\ldots ,m\}\), are the U-empirical Laplace transforms. The exponential weight function ensures the convergence of the integral, while the role of the sample mean is to make the statistic scale-free under the null hypothesis. The tuning parameter a can be chosen so as to increase the power of the test against particular alternatives.
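For illustration, the following minimal Python sketch (ours; the helper name u_empirical_laplace is hypothetical) evaluates \(L_n^{(1)}\) and \(L_n^{(2)}\) for a kernel of degree \(m=2\), with \(\omega_1(x_1,x_2)=x_1\) and \(\omega_2(x_1,x_2)=|x_1-x_2|\), the pair appearing in the Puri–Rubin characterization of Sect. 3. For exponential data the two transforms should nearly coincide.

```python
import numpy as np

def u_empirical_laplace(x, t, omega):
    """U-empirical Laplace transform L_n^(k)(t) for a kernel of degree m = 2:
    the average of exp(-t * omega(X_i, X_j)) over all ordered pairs i != j."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xi, xj = np.meshgrid(x, x, indexing="ij")
    w = omega(xi, xj)                                       # (n, n) kernel values
    vals = np.exp(-np.multiply.outer(np.atleast_1d(t), w))  # (len(t), n, n)
    off_diag = ~np.eye(n, dtype=bool)
    return vals[:, off_diag].mean(axis=1)

rng = np.random.default_rng(0)
x = rng.exponential(size=100)
t = np.linspace(0.1, 5.0, 10)
L1 = u_empirical_laplace(x, t, lambda u, v: u)              # omega_1(x1, x2) = x1
L2 = u_empirical_laplace(x, t, lambda u, v: np.abs(u - v))  # omega_2(x1, x2) = |x1 - x2|
print(np.max(np.abs(L1 - L2)))  # small for exponential data
```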

We consider both large positive and large negative values of \(J_{n,a}\) to be significant. The tests will be consistent against all alternatives where the theoretical counterpart of \(J_{n,a}\) is not equal to zero, which includes all distributions of practical interest.

To compare the quality of our tests with some other tests we shall use the approximate Bahadur efficiency. This method has been considered in Meintanis et al. (2007) and Henze et al. (2009).

The paper is organized as follows. In Sect. 2, we derive the asymptotic distribution and other asymptotic properties of our test statistics needed for the calculation of the local approximate Bahadur efficiency. In Sect. 3 we present two well-known characterizations and use the results from Sect. 2 to construct appropriate goodness-of-fit tests based on them. We compare these tests with each other and with some other tests via the approximate Bahadur efficiency. In Sect. 4 we perform a simulation study in order to compare the powers of our tests with those of other exponentiality tests.

2 Asymptotic properties of \(J_{n,a}\)

After integration, the expression (1) becomes

$$\begin{aligned} J_{n,a}= & {} \frac{\bar{X}}{n^{[m]}}\sum \limits _{1\le i_1<\cdots <i_m\le n}\sum \limits _{\pi \in \mathrm \Pi (m)}\\&\times \Big (\frac{1}{a\bar{X}+\omega _1(X_{i_{\pi (1)}},\ldots ,X_{i_{\pi (m)}})}-\frac{1}{a\bar{X}+\omega _2(X_{i_{\pi (1)}},\ldots ,X_{i_{\pi (m)}})}\Big ). \end{aligned}$$
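As a computational companion to this closed form, here is a short sketch (ours, unoptimized; the name J_stat is hypothetical) of the statistic for general \(m\) and user-supplied homogeneous functions \(\omega_1,\omega_2\); the \(O(n^m)\) loops mirror the double sum literally.

```python
from itertools import combinations, permutations
import numpy as np

def J_stat(x, a, omega1, omega2, m=2):
    """J_{n,a} in integrated form: xbar times the average over all m-subsets
    and all their permutations of 1/(a*xbar + omega1) - 1/(a*xbar + omega2)."""
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    total, count = 0.0, 0
    for subset in combinations(range(len(x)), m):
        for p in permutations(subset):
            args = tuple(x[i] for i in p)
            total += (1.0 / (a * xbar + omega1(*args))
                      - 1.0 / (a * xbar + omega2(*args)))
            count += 1                     # count ends up equal to n^[m]
    return xbar * total / count

# e.g. the Desu-based choice of Sect. 3: omega1 = x1, omega2 = 2 min(x1, x2)
rng = np.random.default_rng(0)
x = rng.exponential(size=30)
print(J_stat(x, a=1.0, omega1=lambda u, v: u, omega2=lambda u, v: 2 * min(u, v)))
```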

In order to find the asymptotic distribution of \(J_{n,a}\) under \(H_0\) we consider the auxiliary function

$$\begin{aligned} J^{*}_{n,a}(\mu )= & {} \frac{\mu }{n^{[m]}}\sum \limits _{1\le i_1<\cdots <i_m\le n}\sum \limits _{\pi \in \mathrm \Pi (m)}\\&\times \Big (\frac{1}{a\mu +\omega _1(X_{i_{\pi (1)}},\ldots ,X_{i_{\pi (m)}})}-\frac{1}{a\mu +\omega _2(X_{ i_{\pi (1)}},\ldots ,X_{ i_{\pi (m)}})}\Big ), \end{aligned}$$

where \(\mu =\lambda ^{-1}\). For every fixed \(\mu >0\), \(J^{*}_{n,a}(\mu )\) is a U-statistic whose distribution does not depend on \(\mu \). Therefore we may set \(\mu =1.\)

The U-statistic \(J^{*}_{n,a}(1)\) has symmetric kernel

$$\begin{aligned} \varPhi (X_1,\ldots ,X_m;a)\!= & {} \!\frac{1}{m!}\sum \limits _{\pi \in \mathrm \Pi (m)}\\&\times \Big (\frac{1}{a+\omega _1(X_{ \pi (1)},\ldots ,X_{ \pi (m)})}-\frac{1}{a+\omega _2(X_{ \pi (1)},\ldots ,X_{ \pi (m)})}\Big ). \end{aligned}$$

If the kernel is non-degenerate, we may apply Hoeffding's theorem (Hoeffding 1948) and obtain the asymptotic distribution of \(\sqrt{n}J^{*}_{n,a}(1)\). Precisely, the asymptotic distribution of \(\sqrt{n}J^{*}_{n,a}(1)\) is normal \(\mathcal {N}(0,m^2\sigma ^2_{\varPhi }(a))\). Here, \(\sigma ^2_{\varPhi }(a)\) is the variance of the projection of the kernel on \(X_1\), i.e.

$$\begin{aligned} \sigma ^2_{\varPhi }(a)&=E(\varphi ^2(X_1;a))\\ \varphi (s;a)&=E(\varPhi (X_1,\ldots ,X_m;a)|X_1=s). \end{aligned}$$

It is known that the sample mean has the following limiting distribution

$$\begin{aligned} \sqrt{n}(\bar{X}-\mu )\overset{d}{\rightarrow }\mathcal {N}(0,\mu ^2). \end{aligned}$$

It is not difficult to show that conditions 2.3 and 2.9A of Theorem 2.13 in Randles (1982) are satisfied. Hence we can conclude that the asymptotic distributions of \(J^{*}_{n,a}(\mu )\) and \(J_{n,a}\) coincide. Since the distribution of \(J_{n,a}\) does not depend on the parameter \(\lambda =\mu ^{-1}\), the asymptotic distribution is:

$$\begin{aligned} \sqrt{n}J_{n,a}\overset{d}{\rightarrow }\mathcal {N}(0,m^2\sigma ^2_{\varPhi }(a)). \end{aligned}$$
(2)

Therefore, we reject the null hypothesis at the asymptotic significance level \(\alpha \) if

$$\begin{aligned} \frac{\sqrt{n}}{m\sigma _{\varPhi }(a)}|J_{n,a}|\ge u_{1-\frac{\alpha }{2}} \end{aligned}$$

where \(u_{1-\alpha /2}\) denotes the \((1-\alpha /2)\)-quantile of the standard normal distribution.
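In code, the decision rule is one line; the sketch below (ours) assumes that \(\sigma _{\varPhi }(a)\) is supplied externally, e.g. from Table 1 below or from a Monte Carlo estimate, since it is not available in closed form.

```python
import numpy as np
from scipy.stats import norm

def reject_h0(j_value, n, m, sigma_phi, alpha=0.05):
    """Reject H0 iff sqrt(n) * |J_{n,a}| / (m * sigma_phi) >= u_{1-alpha/2}."""
    z = np.sqrt(n) * abs(j_value) / (m * sigma_phi)
    return z >= norm.ppf(1 - alpha / 2)
```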

2.1 Local approximate Bahadur efficiency

For Bahadur theory, we refer to Bahadur (1971) and Nikitin (1995). For two tests with the same null and alternative hypotheses, \(H_0(\theta \in \Theta _0)\) and \(H_1(\theta \in \Theta _1)\), the asymptotic relative Bahadur efficiency is defined as the limiting ratio of the sample sizes needed to reach the same test power as the level of significance approaches zero. It can be expressed as the ratio of the Bahadur exact slopes, functions proportional to the exponential rate of decay of the attained significance levels of the sequences of test statistics. The calculation of these slopes relies on large deviation functions, which are often hard to obtain.

For this reason, in many situations the tests are compared using the approximate Bahadur efficiency. In some situations, when the limiting distribution is normal, the approximate Bahadur efficiency and the classical Pitman efficiency coincide (Wieand 1976).

Suppose that \(T_n=T_n(X_1,\ldots ,X_n)\) is a test statistic whose large values are significant, i.e. the null hypothesis is rejected whenever \(T_n>t_n\). Let the distribution function of the test statistic \(T_n\) converge weakly, under \(H_0\), to a distribution function \(F_T\) satisfying \(\log (1-F_T(t))=-\frac{a_Tt^2}{2}(1+o(1))\) as \(t\rightarrow \infty \), where \(a_T\) is a positive real number. Suppose also that the limit in probability \(\lim _{n\rightarrow \infty }T_n/\sqrt{n}=b_T(\theta )>0\) exists for \(\theta \in \varTheta _1\).

The relative approximate Bahadur efficiency of \(T_n\) with respect to another test statistic \(V_n\) (whose large values are significant) is

$$\begin{aligned} e^{*}_{T,V}=\frac{c^{*}_T}{c^{*}_V}, \end{aligned}$$

where \(c^{*}_T=a_Tb_T^2(\theta )\) and \(c^{*}_V=a_Vb_V^2(\theta )\) are the approximate Bahadur slopes of \(T_n\) and \(V_n\), provided that, similarly to the previous case, the distribution function of \(V_n\) converges weakly to a distribution function \(F_V\) satisfying \(\log (1-F_V(t))=-\frac{a_Vt^2}{2}(1+o(1))\).

In our case, \(T_{n}=\sqrt{n}|J_{n,a}|\). Let \(F_0(t)\) be the distribution function of the normal law \(\mathcal {N}(0,m^2\sigma ^2_{\varPhi }(a))\), i.e. \(F_0\) is the limiting distribution function of \(\sqrt{n}J_{n,a}\). Since for the normal distribution the coefficient \(a_T\) is the inverse of the variance, we have, as \(t\rightarrow \infty \),

$$\begin{aligned} \log (1-F_T(t))= & {} \log (2(1-F_{0}(t)))=\log 2+\log ((1-F_{0}(t)))\\= & {} -\frac{t^2}{2m^2\sigma ^2_{\Phi }(a)}(1+o(1)), \end{aligned}$$

which enables us to apply the mentioned concept of the relative approximate Bahadur efficiency to the investigated testing problem.

It remains to find the limit in probability under close alternatives. Let \(\mathcal {G}=\{G(x,\theta ),\,0<\theta <C\}\) be a class of distribution functions with densities \(g(x,\theta )\) such that G(x, 0) is exponential and the regularity conditions from Nikitin and Peaucelle (2004), including differentiability with respect to \(\theta \) in a neighbourhood of zero, are satisfied. Denote \(h(x)=\frac{\partial }{\partial \theta }g(x,\theta )\big |_{\theta =0}\).

Lemma 1

For a given alternative density \(g(x;\theta )\) whose distribution function belongs to \(\mathcal {G}\), the limit in probability of the statistic \(J_{n,a}\) is

$$\begin{aligned} b_J(\theta ) = m \int _{0}^{\infty } \varphi (x;a)h(x)dx\cdot \theta +o(\theta ),\quad \theta \rightarrow 0. \end{aligned}$$

Proof

Since under the alternative the sample mean converges almost surely to its expected value \(\mu (\theta )\), using the law of large numbers for U-statistics with estimated parameters (see Iverson and Randles 1989) we obtain that the limit in probability of the statistic \(J_{n,a}\) is equal to that of \(J^{*}_{n,a}(\mu (\theta ))\). Without loss of generality we may take \(\mu (0)=1\).

Denote for brevity \(\mathbf {x}=(x_1,\ldots ,x_m)\) and \(\mathbf {G}(\mathbf {x},\theta )=\prod _{i=1}^{m}G(x_i,\theta )\). We have

$$\begin{aligned}&b_J(\theta )=E_{\theta }(\varPhi (X_1,\ldots ,X_m))=\int \limits _{R^m}\Big (\frac{\mu (\theta )}{a\mu (\theta )+\omega _1(\mathbf {x})}-\frac{\mu (\theta )}{a\mu (\theta )+\omega _2(\mathbf {x})}\Big ) d\mathbf {G}(\mathbf {x},\theta ). \end{aligned}$$

The first derivative of \(b_J(\theta )\) with respect to \(\theta \) at zero is

$$\begin{aligned} b'_{J}(0)&=\int \limits _{R^m}\frac{\partial }{\partial \theta }\Big (\frac{\mu (\theta )}{a\mu (\theta )+\omega _1(\mathbf {x})}-\frac{\mu (\theta )}{a\mu (\theta )+\omega _2(\mathbf {x})}\Big )\Big |_{\theta =0} d\mathbf {G}(\mathbf {x},0)\\&\quad +\int \limits _{R^m} \Big (\frac{\mu (0)}{a\mu (0)+\omega _1(\mathbf {x})}-\frac{\mu (0)}{a\mu (0)+\omega _2(\mathbf {x})}\Big )\frac{\partial }{\partial \theta }d\mathbf {G}(\mathbf {x},\theta )\Big |_{\theta =0}\\&=\int \limits _{R^m}\Big (\frac{\mu '(0)\omega _1(\mathbf {x})}{(a+\omega _1(\mathbf {x}))^2}-\frac{\mu '(0)\omega _2(\mathbf {x})}{(a+\omega _2(\mathbf {x}))^2}\Big ) d\mathbf {G}(\mathbf {x},0)\\&\quad +\int \limits _{R^m} \Big (\frac{1}{a+\omega _1(\mathbf {x})}-\frac{1}{a+\omega _2(\mathbf {x})}\Big )\frac{\partial }{\partial \theta }d\mathbf {G}(\mathbf {x},\theta )\Big |_{\theta =0}. \end{aligned}$$

Since the integrand is bounded, the differentiation under the integral sign is justified. Moreover, the first summand is equal to zero due to the characterization: under the null distribution \(\omega _1\) and \(\omega _2\) are equidistributed, so the two expectations coincide. To the second summand we may apply the result from Nikitin and Peaucelle (2004) and obtain

$$\begin{aligned} b_J'(0)=m\int \limits _{0}^{\infty }h(x)\varphi (x;a)dx. \end{aligned}$$

Expanding \(b_J(\theta )\) into a Maclaurin series completes the proof. \(\square \)

Note that \(T_n/\sqrt{n}\) converges in probability to \(|b_J(\theta )|\) as \(n\rightarrow \infty \).

In the absence of a theoretical upper bound, the approximate Bahadur slopes are often compared (see, e.g., Meintanis et al. 2007) with the approximate Bahadur slopes of the likelihood ratio tests, which are known to be optimal parametric tests in terms of Bahadur efficiency. Hence, we may consider the approximate Bahadur efficiencies against the likelihood ratio tests as “absolute” local approximate Bahadur efficiencies.

Under very general conditions, the likelihood ratio tests have approximate slopes equivalent to twice the Kullback–Leibler distance from the alternative to the null family of distributions. It can be shown (see Nikitin and Tchirina 1996) that, for alternatives from \(\mathcal {G}\) and small \(\theta \), this distance can be expressed as

$$\begin{aligned} 2K(\theta )=\bigg (\int \limits _{0}^{\infty }h^2(x)e^xdx-\Big (\int \limits _{0}^{\infty }xh(x)dx\Big )^2\bigg )\cdot \theta ^2+o(\theta ^2). \end{aligned}$$
(3)

3 Characterizations and tests

In this section, we present two new tests of exponentiality based on the following characterizations, due to Desu (1971) and Puri and Rubin (1970).

Characterization 1

(Desu (1971)) Let X be a random variable with distribution function \(F(\cdot )\). Let \(X_1,X_2,\ldots ,X_n\) be a sample from F and let \(W=\min (X_1,\ldots ,X_n)\). If \(F(\cdot )\) is a nondegenerate distribution function, then, for each positive integer n, nW and X are identically distributed if and only if \(F(x)=1-e^{-\lambda x}\) for \(x\ge 0\), where \(\lambda \) is a positive constant.

Characterization 2

(Puri and Rubin (1970)) Let \(X_1\) and \(X_2\) be two independent copies of a random variable X with pdf f(x). Then X and \(|X_1-X_2|\) have the same distribution if and only if \(f(x)=\lambda e^{-\lambda x}\), \(x\ge 0\), for some \(\lambda >0\).

The test statistics based on Characterizations 1 and 2 are, respectively,

$$\begin{aligned} J^{\mathcal {D}}_{n,a}=\frac{\bar{X}}{n(n-1)}\sum \limits _{1\le i_1<i_2\le n }\sum \limits _{ \pi \in \mathrm \Pi (2)}\Big (\frac{1}{a\bar{X}+X_{ i_{\pi (1)}}}- \frac{1}{a\bar{X}+2\min (X_{ i_{\pi (1)}},X_{ i_{\pi (2)}})}\Big ), \end{aligned}$$
(4)
$$\begin{aligned} J^{\mathcal {P}}_{n,a}=\frac{\bar{X}}{n(n-1)}\sum \limits _{1\le i_1<i_2\le n }\sum \limits _{ \pi \in \mathrm \Pi (2)}\Big (\frac{1}{a\bar{X}+X_{ i_{\pi (1)}}} -\frac{1}{a\bar{X}+|X_{ i_{\pi (1)}}-X_{ i_{\pi (2)}}|}\Big ). \end{aligned}$$
(5)
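For \(m=2\) the double sums in (4) and (5) vectorize over all ordered pairs; a minimal sketch (ours; the function names are hypothetical):

```python
import numpy as np

def _pair_statistic(x, a, omega2):
    """Common form of (4) and (5): average over all ordered pairs i != j."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    xi, xj = np.meshgrid(x, x, indexing="ij")
    term = 1.0 / (a * xbar + xi) - 1.0 / (a * xbar + omega2(xi, xj))
    mask = ~np.eye(n, dtype=bool)
    return xbar * term[mask].sum() / (n * (n - 1))

def J_desu(x, a):
    """Statistic (4), from the Desu characterization."""
    return _pair_statistic(x, a, lambda u, v: 2.0 * np.minimum(u, v))

def J_puri_rubin(x, a):
    """Statistic (5), from the Puri-Rubin characterization."""
    return _pair_statistic(x, a, lambda u, v: np.abs(u - v))
```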

The projections of the kernels of the U-statistics \(J^{\mathcal {D}}_{n,a}\) and \(J^{\mathcal {P}}_{n,a}\) on \(X_1\) under \(H_0\) are

$$\begin{aligned} \varphi ^{\mathcal {D}}(s;a)&=E(\Phi ^{\mathcal {D}}(X_1,X_2;a)|X_1=s)=\frac{1}{2(a+s)} - \frac{1}{2}e^a Ei(-a) \\&\quad - \frac{1}{2 (a + 2 s)} e^{-s} \big (2 + a e^{a/2 + s} \Gamma \left( 0, \frac{a}{2}\right) + 2 e^{\frac{a}{2} + s} s \Gamma \left( 0, \frac{a}{2}\right) \\&\quad - a e^{a/2 + s} \Gamma \left( 0, \frac{a}{2} + s\right) - 2 e^{\frac{a}{2} + s} s \Gamma \left( 0, \frac{a}{2} + s\right) \big ),\\ \varphi ^{\mathcal {P}}(s;a)&=E(\Phi ^{\mathcal {P}}(X_1,X_2;a)|X_1=s)\\&=\frac{1}{2(a+s)} - \frac{1}{2} e^aEi(-a) + e^{-a - s} (e^{2 a}Ei(-a) + Ei(a) - Ei(a + s)), \end{aligned}$$

where \(Ei(z)=\int _{-z}^{\infty }u^{-1}e^{-u}du\) and \(\Gamma (a,z)=\int _{z}^{\infty }t^{a-1}e^{-t}dt\) are the exponential integral and the upper incomplete gamma function, respectively.

It can be shown that the kernels are centered for every \(a>0\). The variances cannot be obtained in closed form; however, they can be calculated numerically for each a. Some values are given in Table 1, and the plots of the variance functions are shown in Fig. 1. We can see that in these cases the kernels are non-degenerate, so the asymptotic distributions of \(\sqrt{n}J_{n,a}^{\mathcal {D}}\) and \(\sqrt{n}J_{n,a}^{\mathcal {P}}\) follow from (2).

Table 1 Values of \(\sigma ^2_{\Phi ^{\mathcal {D}}}(a)\) and \(\sigma ^2_{\Phi ^{\mathcal {P}}}(a)\)

Fig. 1 Variance functions \(\sigma ^2_{\Phi ^{\mathcal {D}}}(a)\) (left) and \(\sigma ^2_{\Phi ^{\mathcal {P}}}(a)\) (right)
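As a numerical cross-check of these claims (our sketch, not part of the original computation), both the centering of the kernel and \(\sigma ^2_{\Phi ^{\mathcal {D}}}(a)\) can be estimated by simulation, using the identity \(E\varphi ^2(X_1;a)=E[\varPhi (X_1,X_2;a)\varPhi (X_1,X_3;a)]\), which holds for centered kernels because \(X_2\) and \(X_3\) are independent given \(X_1\); the printed value should agree with Table 1.

```python
import numpy as np

def phi_kernel_desu(x1, x2, a):
    """Symmetric kernel Phi^D of J^D for m = 2 (scale mu = 1)."""
    return (0.5 * (1.0 / (a + x1) + 1.0 / (a + x2))
            - 1.0 / (a + 2.0 * np.minimum(x1, x2)))

rng = np.random.default_rng(1)
a, N = 1.0, 10**6
x1, x2, x3 = rng.exponential(size=(3, N))
k12 = phi_kernel_desu(x1, x2, a)
k13 = phi_kernel_desu(x1, x3, a)
print("E Phi   ~", k12.mean())          # close to 0: the kernel is centered
print("sigma^2 ~", (k12 * k13).mean())  # Monte Carlo estimate of E phi^2(X1; a)
```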

We shall compare our tests with the following integral-type tests based on the same characterizations. Tests of this type have been proposed in several recent papers (see, e.g., Nikitin and Volkova 2010; Volkova 2010; Jovanović et al. 2015):

$$\begin{aligned} I^{\mathcal {D}}_n&=\int \limits _{0}^{\infty }(F_n(t)-G_n(t))dF_n(t),\\ I^{\mathcal {P}}_n&=\int \limits _{0}^{\infty }(F_n(t)-H_n(t))dF_n(t), \end{aligned}$$

where

$$\begin{aligned} F_n(t)&=\frac{1}{n}\sum \limits _{i}I\{X_i<t\},\\ G_n(t)&=\frac{1}{n^2}\sum \limits _{i,j}I\{2\min (X_i,X_j)<t\},\\ H_n(t)&=\frac{1}{n^2}\sum \limits _{i,j}I\{|X_i-X_j|<t\}. \end{aligned}$$
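Computationally, the integrals against \(dF_n\) reduce to averages over the sample points, since \(F_n\) puts mass 1/n at each \(X_k\); a sketch (ours, with the hypothetical name I_stats):

```python
import numpy as np

def I_stats(x):
    """Integral-type statistics I^D_n and I^P_n."""
    x = np.asarray(x, dtype=float)
    xi, xj = np.meshgrid(x, x, indexing="ij")
    two_min = (2.0 * np.minimum(xi, xj)).ravel()
    abs_diff = np.abs(xi - xj).ravel()

    def ecdf(sample, t):
        # empirical d.f. with strict inequality, as in the definitions above
        return (sample[None, :] < t[:, None]).mean(axis=1)

    Fx = ecdf(x, x)
    return (Fx - ecdf(two_min, x)).mean(), (Fx - ecdf(abs_diff, x)).mean()
```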

The asymptotic distributions of these test statistics are also normal, so their approximate Bahadur slopes can be derived by the same method.

The common alternatives we are going to consider are listed below; a numerical illustration of (3) for the Weibull case follows the list.

  • a Weibull distribution with density

    $$\begin{aligned} g(x,\theta )=e^{-x^{1 + \theta }} (1 + \theta ) x^{\theta },\;\theta > 0,\; x\ge 0; \end{aligned}$$
    (6)
  • a gamma distribution with the density

    $$\begin{aligned} g(x,\theta )=\frac{x^{\theta }}{\varGamma (\theta +1)}e^{-x}, \theta > 0, x\ge 0; \end{aligned}$$
    (7)
  • a Makeham distribution with density

    $$\begin{aligned} g(x,\theta )=(1+\theta (1-e^{-x}))\exp (-x-\theta ( e^{-x}-1+x)),\;\theta > 0, \;x\ge 0; \end{aligned}$$
    (8)
  • a linear failure rate distribution (LFR) with density

$$\begin{aligned} g(x,\theta )=(1 + \theta x)e^{-x - \theta \frac{x^2}{2}},\; \theta > 0,\; x\ge 0. \end{aligned}$$
    (9)
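As announced above, here is a numerical illustration of (3) for the Weibull alternative (6). Differentiating (6) with respect to \(\theta \) at zero gives \(h(x)=e^{-x}\big (1+(1-x)\ln x\big )\); the sketch below (ours) then evaluates the coefficient of \(\theta ^2\) by quadrature, and the printed value should be close to \(\pi ^2/6\approx 1.645\).

```python
import numpy as np
from scipy.integrate import quad

def h_weibull(x):
    """h(x) = d/dtheta g(x, theta) at theta = 0 for the Weibull density (6)."""
    return np.exp(-x) * (1.0 + (1.0 - x) * np.log(x))

i1, _ = quad(lambda x: h_weibull(x)**2 * np.exp(x), 0, np.inf)
i2, _ = quad(lambda x: x * h_weibull(x), 0, np.inf)
print(i1 - i2**2)  # coefficient of theta^2 in 2K(theta); approx 1.645
```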

Tables 2 and 3 contain the approximate Bahadur efficiencies of our statistics \(J^{\mathcal {D}}_{n,a}\) and \(J^{\mathcal {P}}_{n,a}\) relative to their integral counterparts \(I^{\mathcal {D}}_n\) and \(I^{\mathcal {P}}_n\) based on the same characterizations. We can see that in practically all cases our tests are more efficient.

Table 2 Approximate Bahadur ARE (\(J_{n,a}^{\mathcal {D}},I_n^{\mathcal {D}}\))
Table 3 Approximate Bahadur ARE (\(J_{n,a}^{\mathcal {P}},I_n^{\mathcal {P}}\))

Figures 2, 3, 4 and 5 show the dependence of the local approximate Bahadur efficiencies \(\mathrm{eff}_a\) on the parameter \(a\in (0,10)\). Each figure shows the efficiencies of both statistics \(J^{\mathcal {D}}_{n,a}\) and \(J^{\mathcal {P}}_{n,a}\).

Fig. 2 Local approximate Bahadur efficiencies for a Weibull alternative

Fig. 3 Local approximate Bahadur efficiencies for a gamma alternative

Fig. 4 Local approximate Bahadur efficiencies for a Makeham alternative

Fig. 5 Local approximate Bahadur efficiencies for a linear failure rate alternative

We can notice that the local efficiencies range from reasonable to high. It is also possible, for a fixed a, to construct alternatives against which the test would be “fully efficient”, i.e. it would have the same efficiency as the likelihood ratio test. In our case it can be shown, employing the same reasoning as in, e.g., Jovanović et al. (2015), that some such alternatives are of the form

$$\begin{aligned} g(x;\theta )=e^{-x}(1+\theta (C_1\varphi (x;a)+C_2(x-1))),\;\;C_1>0,\,C_2\in \mathbb {R}. \end{aligned}$$

Besides, the figures show that, in the cases of the Makeham and the linear failure rate alternatives, the statistic \(J^{\mathcal {P}}_{n,a}\) is always more efficient than \(J^{\mathcal {D}}_{n,a}\), while in the gamma case it is the other way around, except for some small values of a. In the case of the Weibull alternative, \(J^{\mathcal {P}}_{n,a}\) is more efficient for values of a up to 3.5, while \(J^{\mathcal {D}}_{n,a}\) gradually overtakes it for larger ones.

4 Power study

In this section, we compare the powers of our tests, for sample sizes \(n=20\) and \(n=50\) and some common alternative distributions, with those of some well-known exponentiality tests. The choice of tests follows the review paper on exponentiality tests by Henze and Meintanis (2005). The tests include the classical Kolmogorov–Smirnov (KS) and Cramér–von Mises (\(\omega ^2\)) tests, the Epps–Pulley test based on the characteristic function (EP) (see Epps and Pulley 1986), two tests based on a characterization via the mean residual life, \(\overline{KS}\) and \(\overline{CM}\) (see Baringhaus and Henze 2000), the test based on spacings (S) (see D'Agostino and Stephens 1986), the Cox–Oakes test (see Cox and Oakes 1984) and the test based on the integrated empirical distribution function (KL) (Klar 2001). The alternative distributions are the Weibull (W), gamma (\(\mathrm \Gamma \)), standard half-normal (HN), standard uniform (U), Chen (CH), linear failure rate (LF) and extreme value (EV) distributions, with the same choice of parameters as in Henze and Meintanis (2005). The level of significance is 0.05 and the number of Monte Carlo replications is 10,000.
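A Monte Carlo power computation of this kind can be sketched as follows (our illustration, not the authors' code). Because the statistic is scale-free under \(H_0\), null critical values may be simulated with \(\lambda =1\); the Weibull shape parameter 1.4 used here is an assumption on our part, chosen as one typical setting from Henze and Meintanis (2005).

```python
import numpy as np

def J_puri_rubin(x, a):
    """Statistic (5) over all ordered pairs i != j (as in Sect. 3)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    xi, xj = np.meshgrid(x, x, indexing="ij")
    term = 1.0 / (a * xbar + xi) - 1.0 / (a * xbar + np.abs(xi - xj))
    mask = ~np.eye(n, dtype=bool)
    return xbar * term[mask].sum() / (n * (n - 1))

rng = np.random.default_rng(2)
n, a, alpha, reps = 20, 1.0, 0.05, 10_000

# equal-tailed two-sided critical values simulated under H0 (lambda = 1)
null = np.array([J_puri_rubin(rng.exponential(size=n), a) for _ in range(reps)])
lo, hi = np.quantile(null, [alpha / 2, 1 - alpha / 2])

# empirical power against a Weibull alternative with shape 1.4 (assumed setting)
stats = np.array([J_puri_rubin(rng.weibull(1.4, size=n), a) for _ in range(reps)])
print("power:", np.mean((stats < lo) | (stats > hi)))
```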

The results are given in Tables 4 and 5. The general conclusion is that our tests perform better for small sample sizes. In particular, our tests are always better for W and \(\mathrm \Gamma \), and in the vast majority of cases for HN, CH(1), LF(2) and LF(4). For the other alternatives our tests are better in some cases and comparable in others, with the exception of CH(0.5) and CH(1.5). Moreover, we can notice that the powers of the tests increase with the parameter a.

Table 4 Percentage of significant samples for different exponentiality tests \(n=20\), \(\alpha =0.05\)
Table 5 Percentage of significant samples for different exponentiality tests \(n=50\), \(\alpha =0.05\)

5 Conclusion

In this paper, we introduced a new class of scale-free goodness-of-fit tests for the exponential distribution based on U-empirical Laplace transforms of equidistributed sample functions.

For two tests from this class we calculated the approximate relative Bahadur efficiencies with respect to some other tests, for a selection of common alternatives, and the results favour our tests in practically all cases. We also calculated their “absolute” local approximate Bahadur efficiencies, i.e. their relative approximate Bahadur efficiencies against the likelihood ratio tests, which range from reasonable to high.

Finally, we compared the powers of our tests with some other goodness-of-fit tests and noticed that in most cases our tests were more powerful.