1 Introduction

It is a privilege to have the opportunity to congratulate Bruno Ebner and Norbert Henze for providing an excellent and thorough review of the timely subject of testing goodness-of-fit to the multivariate normal distribution. One issue raised in their review, and more generally when proposing multivariate procedures, is computation time, which is particularly relevant nowadays with the advent of high-dimensional data. In this connection, I will try to illustrate how much may be “saved” from the computationally efficient BHEP test if we move away from the normal distribution towards more general models. In doing so, I will concentrate on a specific case which I consider important, but the general idea will become clear through this example. Let \( X \in \mathbb R^d, \ d\ge 1\), be an arbitrary random vector with an absolutely continuous distribution having density \(f(\cdot )\), and let \(f_0(\cdot )\) denote the density of a parametric family of distributions. Suppose we wish to test the null hypothesis

$$\begin{aligned} H_{0}: f(x)=f_0(x), \, x\in \mathbb R^d, \end{aligned}$$

by means of the test statistic

$$\begin{aligned} T_{n,w}=n \int _{\mathbb R^d} |\Psi _n(t)-\Psi _0(t)|^2 w(t) \, \mathrm{d} t, \end{aligned}$$
(1)

that measures the “distance” between the empirical CF \(\Psi _n(\cdot )\), computed from a sample of size n on X, and the CF \(\Psi _0(\cdot )\) corresponding to \(f_0(\cdot )\). By straightforward algebra, it follows from (1) that computation of the integrals

$$\begin{aligned} I_r(x)=\int _{\mathbb R^d} \cos (t^\top x)\Psi _0^r(t) w(t) \, \mathrm{d} t, \ r=0,1,2, \end{aligned}$$
(2)

in explicit form is necessary (for \(r=2\), only \(I_2(0)\) is actually needed).
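To make the role of these integrals concrete, the algebra may be briefly sketched (under the usual convention \(\Psi _n(t)=n^{-1}\sum _{j=1}^n e^{\mathrm{i} t^\top X_j}\) for data \(X_1,\ldots ,X_n\), with \(\Psi _0\) real-valued, as holds in the symmetric settings considered below): expanding the squared modulus in (1) and integrating term by term gives

$$\begin{aligned} T_{n,w}=\frac{1}{n}\sum _{j,k=1}^n I_0(X_j-X_k)-2\sum _{j=1}^n I_1(X_j)+n\, I_2(0), \end{aligned}$$

so that the test statistic is explicit precisely when the integrals in (2) admit closed forms.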

Here, we will focus on spherical distributions under test having integrable CFs, combined with weight functions \(w(\cdot )\) in (1) for which the computational efficiency of the BHEP test carries over beyond normality. In this setting, we recall the definition of the CF

$$\begin{aligned} \Psi (t)= \int _{\mathbb R^d} \cos (t^\top x)f(x)\, \mathrm{d} x, \end{aligned}$$
(3)

and the multivariate inversion theorem

$$\begin{aligned} f(x)=\frac{1}{(2\pi )^d} \int _{\mathbb R^d} \cos (t^\top x)\Psi (t)\, \mathrm{d} t. \end{aligned}$$
(4)

Note that the assumption that the CF \(\Psi (\cdot )\) is integrable implies that the corresponding distribution function is absolutely continuous, with a density \(f(\cdot )\) obtained by inversion via (4); see Ushakov (1999), Theorem 1.8.5.

In the next section, we will discuss the case of testing for \(\alpha \)-stable distributions. Motivation for considering this particular class of laws comes from the fact that the stable class is heavy-tailed, yet strictly includes the normal law, and has several interesting theoretical properties as well as numerous applications, especially in finance and engineering. The corresponding literature is extensive; an indicative list comprises (Nolan 2013; Bonato 2012; Rachev and Mittnik 2000; Uchaikin and Zolotarev 1999; Adler et al. 1998; Samorodnitsky and Taqqu 1994), and references therein.

2 Test for stable distributions

Due to the uniqueness of CFs, the null hypothesis \(H_0\) corresponding to a spherical stable distribution may be restated in the equivalent form

$$\begin{aligned} \Psi (t)=\Psi _0(t), \, t \in \mathbb R^d, \end{aligned}$$
(5)

where \(\Psi (t)\) denotes the CF of X and \(\Psi _0(t)=e^{-\Vert t\Vert ^\alpha }\) is the CF of the class of spherical stable distributions. The parameter \(\alpha \in (0,2]\) is a shape parameter that regulates the tail behaviour of the underlying stable law, with the best-known cases corresponding to the multivariate Gaussian (resp. Cauchy) distribution for \(\alpha =2\) (resp. \(\alpha =1\)). In this note, we consider the parameter \(\alpha \) as fixed (known).

The feature that makes the CF approach to goodness-of-fit ideal here is that, for the stable class, the CF is simple compared to the corresponding distribution function; see (Meintanis et al. 2015; Matsui and Takemura 2008; Meintanis 2005; Koutrouvelis and Meintanis 1999). We will illustrate that, with the weight function appropriately chosen, not only the CF but also the test statistic itself may be obtained in explicit form. In doing so, we point out that if the weight function \(w(\cdot )\) is proportional to a density that is symmetric around zero, then (3) implies that the integral \(I_0(x)\) in (2) is a constant multiple of the CF of that density evaluated at the point x. On the other hand, if we choose \(w\propto \Psi _0\), then by using the inversion in (4) we have \(I_0(x)\propto f_0(x)\). We moreover require that \(\Psi _0(t)w(t)\) also leads to an explicit expression for the integral \(I_1(x)\) in (2), and likewise for the integral \(I_2(0)\). The search for such weight functions is further advanced by consulting Epps (2005), who considers the choice of w(t) for univariate families, and by recalling the notion of adjoint distributions, whereby a pair of distributions is called adjoint if the density of the one is a constant multiple of the CF of the other; see the review paper by Rossberg (1995). Note that with this definition the Gaussian density is self-adjoint, a property that is part of the reason why the BHEP test comes in explicit form.
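The self-adjointness of the Gaussian density is easily checked numerically. The following Python sketch (the function names and quadrature grid are my own choices, not from the text) computes the CF of the standard normal directly from (3) and confirms that it equals \(\sqrt{2\pi }\) times the density itself:

```python
import numpy as np

def std_normal_pdf(x):
    # density of N(0, 1)
    return np.exp(-x ** 2 / 2.0) / np.sqrt(2.0 * np.pi)

def std_normal_cf(t):
    # CF computed directly from (3) by trapezoidal quadrature of
    # cos(t x) f(x) over a grid wide enough to capture the tails
    x = np.linspace(-12.0, 12.0, 200001)
    y = np.cos(t * x) * std_normal_pdf(x)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

# self-adjointness: the density is a constant multiple (1/sqrt(2*pi))
# of its own CF, i.e. CF(t) = sqrt(2*pi) * pdf(t)
for t in (0.0, 0.7, 1.5):
    assert abs(std_normal_cf(t) - np.sqrt(2.0 * np.pi) * std_normal_pdf(t)) < 1e-6
```

Since the CF of N(0, 1) is \(e^{-t^2/2}\), the constant of proportionality is exactly the normalising constant of the density, which is what makes all three integrals in (2) explicit in the BHEP case.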

With these considerations in mind, we target as weight function for carrying out the test corresponding to (5) the density \(\kappa \Vert t\Vert ^{N-1}e^{-\lambda \Vert t\Vert ^s}\) of the spherical Kotz-type (Kt) distribution, where \(N\ge 1\) and \(s,\lambda >0\) are parameters, and \(\kappa \) denotes a fixed normalising constant that depends on the dimension d as well as on the parameters \((N,s,\lambda )\). (Note that fixing \((N,s)=(1,2)\) shows that the Gaussian distribution belongs to the Kt family.) Clearly then, the \(\alpha \)-stable distribution is adjoint to the Kt distribution with \((N,s,\lambda )=(1,\alpha ,1)\), and we thus set \(w(t)=e^{-\Vert t\Vert ^\alpha }\) as weight function. Then, the integrals defined in (2) may be computed by means of (3) as

$$\begin{aligned} I_r(x)=\int _{\mathbb R^d} \cos (t^\top x) e^{- (r+1)\Vert t\Vert ^\alpha } \, \mathrm{d} t=\frac{\varphi _{r+1}(x)}{\kappa _{r+1}}, \ r=0,1,2, \end{aligned}$$
(6)

where \(\varphi _r(\cdot )\) denotes the CF of the Kt distribution with parameters \((N,s,\lambda )=(1,\alpha ,r)\), and \(\kappa _r\) denotes the aforementioned normalising constant of the corresponding Kt density. Analytic expressions for the right-hand side of (6) may be drawn from Nadarajah (2003) and render the test statistic figuring in (1) free of numerical integration for the entire range \(\alpha \in (0,2]\) except at \(\alpha =1\), with \(\alpha =2\) leading back to (a variant of) the BHEP test. The remaining case \(\alpha =1\) can be treated using the density of the Cauchy distribution by invoking (4).
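For the Gaussian endpoint \(\alpha =2\), everything can be written down at once, since the Gaussian integral gives \(I_r(x)=(\pi /(r+1))^{d/2}\,e^{-\Vert x\Vert ^2/(4(r+1))}\). The following Python sketch (function names mine; the null is taken as the CF \(e^{-\Vert t\Vert ^2}\), i.e. \(N(0,2I_d)\) with no parameter estimation) assembles the statistic (1) via the standard expansion \(T_{n,w}=n^{-1}\sum _{j,k} I_0(X_j-X_k)-2\sum _j I_1(X_j)+n I_2(0)\), which follows from (1) by the straightforward algebra mentioned in the Introduction:

```python
import numpy as np

def I_r(x_sq, r, d):
    # closed form of the integral in (2) for alpha = 2:
    # I_r(x) = int cos(t.x) exp(-(r+1)||t||^2) dt
    #        = (pi/(r+1))^(d/2) * exp(-||x||^2 / (4(r+1)))
    # x_sq holds squared norms ||x||^2 (scalar or array)
    a = r + 1.0
    return (np.pi / a) ** (d / 2.0) * np.exp(-x_sq / (4.0 * a))

def T_alpha2(X):
    # statistic (1) for the spherical stable null with alpha = 2,
    # i.e. Psi_0(t) = exp(-||t||^2), and weight w = Psi_0, via
    # T = (1/n) sum_{j,k} I_0(X_j - X_k) - 2 sum_j I_1(X_j) + n I_2(0)
    n, d = X.shape
    pair_sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return (I_r(pair_sq, 0, d).sum() / n
            - 2.0 * I_r(np.sum(X ** 2, axis=1), 1, d).sum()
            + n * I_r(0.0, 2, d))
```

Under the null (data drawn from \(N(0,2I_d)\)) the statistic stays small, while fixed alternatives inflate it with n. For general \(\alpha \), one would simply replace the closed form in `I_r` by the expressions drawn from Nadarajah (2003).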

3 Outlook

What then about alternative families of distributions? By setting \(w\propto \Psi _0\), the type of reasoning at play is that if \((\Psi _0,w)\) is an adjoint pair, then \(w^r\propto \Psi ^r_0\), i.e. \((\Psi ^r_0,w^r)\) is also an adjoint pair, for \(r=2,3\). In this connection, the integral \(I_0(x)\) may be obtained in two different ways: either via (3), as the CF corresponding to the adjoint density of \(\Psi _0\), or by means of (4), by computing the density \(f_0\) under test at the point x. Likewise, the integral \(I_1(x)\) may be obtained either via (3), by means of the CF corresponding to the adjoint density of \(\Psi ^2_0\), or by inversion in (4), using the density of \(X+X_1\) (where \(X_1\) denotes an independent copy of X) corresponding to the CF \(\Psi ^2_0\), whichever is more convenient. For the stability testing problem (5) specifically, and since it is straightforward to see that positive integer powers of Kt densities are again Kt densities with the same value of s, the CF interpretation leads to a somewhat more general weight function \(w(\cdot )\), as it allows Kt densities with \(N\ne 1\) to be employed as weight functions. Finally, the integral \(I_2(0)\) is simpler to handle, since it is constant (free of the argument x): it may be obtained either by utilising the CF of the adjoint density of \(\Psi ^3_0\), or via the density of \(X+X_1+X_2\) (with \(X_2\) denoting an extra independent copy of X) corresponding to the CF \(\Psi ^3_0\), in each case the quantity being evaluated at the origin and computed either analytically or numerically.
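The equivalence of the two routes to \(I_1(x)\) is easily verified in the self-adjoint Gaussian case \(d=1\), \(\Psi _0(t)=e^{-t^2}\) (a Python sketch with hypothetical function names): route (3) reads off the CF of the adjoint density of \(\Psi ^2_0\), while route (4) uses \(2\pi \) times the density of \(X+X_1\) with \(X\sim N(0,2)\), and the two coincide.

```python
import numpy as np

def I1_via_cf(x):
    # route (3): Psi_0^2(t) = exp(-2 t^2) is proportional to an
    # N(0, 1/4) density, whose CF is exp(-x^2/8); the proportionality
    # constant is int exp(-2 t^2) dt = sqrt(pi/2)
    return np.sqrt(np.pi / 2.0) * np.exp(-x ** 2 / 8.0)

def I1_via_inversion(x):
    # route (4): Psi_0^2 is the CF of X + X_1 with X_1 an independent
    # copy of X ~ N(0, 2), so X + X_1 ~ N(0, 4) and, by inversion,
    # I_1(x) = 2 * pi * f_{X + X_1}(x)  (d = 1)
    f = np.exp(-x ** 2 / 8.0) / np.sqrt(8.0 * np.pi)
    return 2.0 * np.pi * f
```

Both routes yield \(\sqrt{\pi /2}\,e^{-x^2/8}\); which one is preferable in practice depends on whether the adjoint CF or the convolution density is the simpler object for the family at hand.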

By way of example, this reasoning may also be applied to the family of multivariate generalised Laplace distributions of Kozubowski et al. (2013) and to the Kotz-type family, leading in both cases to CF-based tests free of numerical integration; further details will be provided elsewhere. In principle, a CF-based test for the multivariate Student’s t-distribution may also be obtained in this way, involving, however, an elevated computational burden: although the density corresponding to \(\Psi ^2_0(\cdot )\) has been derived explicitly by Berg and Vignat (2010), it is considerably more complicated than in the aforementioned cases.