1 Introduction

Covariance functions play a key role in the statistical literature, as they provide relevant information about the correlation structure of the underlying processes; in particular, in time series analysis, in spatial data analysis and, more generally, in spatio-temporal data analysis, they determine the performance of prediction.

Bochner’s Theorem provides an exhaustive description of the set of covariance functions which are continuous in their domain; the same result underlines that a covariance function generally takes values in the complex field, although the subset of real covariance functions is the one required in most case studies (De Iaco et al. 2002a; Cressie and Huang 1999; De Iaco et al. 2002b; Gneiting 2002; Diggle and Ribeiro 2007; Cox and Isham 1988; Kolovos et al. 2004; Ma 2002; De Iaco et al. 2000).

It is relevant to point out that covariance models admissible in all dimensions, such as the exponential and the gaussian, which are among the most common parametric models belonging to the Whittle-Matern class (Matern 1980), have some limitations: they are always positive and strictly decreasing (Stein 1999); as a consequence, a negative correlation cannot be modelled with this special set of covariances.

A special family of infinitely differentiable Bessel-Lommel covariance functions that always exhibit a negative hole effect and are valid in \({\mathbb {R}}^n,\) where \(n>2,\) was derived by Hristopulos (2015), although the functional form of these covariance functions differs across dimensions.

In the literature, properties of covariance functions have been described by several authors (Cressie and Wikle 2011; Hristopulos 2020; Adler 1981; Yaglom 1987; Christakos 1984); in particular, starting from the classical covariance models (exponential, gaussian, rational), which are all positive functions, and applying the well known properties of covariance functions, the resulting models preserve the same features, i.e. they are all positive, hence they cannot model a negative correlation. It is also widely known that, in general, the difference between two covariances need not be a covariance. To this purpose, only very few authors have faced the question concerning the difference (or linear combinations with negative coefficients) between covariances; in addition, this kind of analysis has been performed only on the real domain.

The difference between two real valued covariances for spatial data was analyzed by Vecchia (1988); however, the method proposed by this author is not very flexible in applications and is valid only in two dimensions. Besides this last contribution, a linear combination, whose weights could be negative, of two real valued continuous covariances was also analyzed by Ma (2005). Moreover, a special class of covariance functions (the generalized sum of product models), characterized by negative weights, has been analyzed by Gregori et al. (2008); these last authors underlined the importance of covariance models allowing negative values for problems of a biological, medical and physical nature.

Recently, the requirements under which the difference between parametric covariance functions in the complex field is still a covariance function have been analyzed (Posa 2021): these results can be considered a further property of the family of covariance functions. Moreover, the resulting class of covariances is characterized by interesting properties: its members could be positive on the whole domain, or could assume negative values in a subset of the field of definition, by suitably modifying the parameters on which they depend. In addition, the families of covariance functions obtained through the above difference can present different behaviours near the origin; details on these models will also be discussed. Indeed, many applications concerning phenomena related to turbulence in space-time, biology and hydrology require covariance functions with negative values, as described by some authors (Levinson et al. 1984; Yakhot et al. 1989; Shkarofsky 1968; Pomeroy et al. 2003; Xu et al. 2003b, a).

There are two main aims in this paper: a theoretical one, related to the importance of a new property involving the difference between two isotropic correlation structures, and a practical one, which regards the construction of new flexible families of isotropic covariances which can be adapted to several case studies. Indeed, these new classes present peculiar features with respect to traditional classes, such as the Whittle-Matern class and the several families constructed by applying the classical properties: in fact, these last classes are characterized by several restrictions, as will be underlined throughout the manuscript.

Some interesting examples regarding the difference between isotropic and continuous covariance functions, proposed in this paper as a consequence of the general results previously established, can be conveniently adapted and used in many applications. Indeed, a detailed analysis of the various classes of models has been provided and specified in \({\mathbb {R}}, {\mathbb {R}}^2\) and \({\mathbb {R}}^3\), which represent the usual Euclidean spaces encountered in several case studies.

It is important to underline that the present analysis has been devoted to isotropic covariances because they can be considered the building blocks for anisotropic and non stationary models; moreover, these families of covariances play an important role in the statistical theory of turbulence (Batchelor 1982) and they are often encountered in many other applied areas, such as the description of ocean wave behaviour (Longuet-Higgins 1957) and road roughness modelling (Kamash and Robson 1978). In Sect. 2, some characteristics of continuous covariance functions are described, whereas in Sect. 3 a brief overview concerning traditional classes of covariance models is given: this summary has been suitably proposed in order to underline the advantages and the drawbacks of these peculiar families. In Sect. 4 conditions under which the difference between covariances results in a covariance are given. Some well known models, often utilized in the applications, have been considered; in particular, some examples for the difference of two isotropic covariance functions are explored in \({\mathbb {R}}, {\mathbb {R}}^2\) and \({\mathbb {R}}^3\). A detailed parametric analysis of these families has been given, and the same analysis underlines that the results can be utilized in a flexible and easy way by many practitioners. Finally, as far as we know, classes of covariance functions able to describe scenarios as varied as the ones presented in this paper do not seem to exist in the literature.

2 Continuous covariance functions

In order to provide a complete description of the subject, a concise overview of the family of continuous covariance functions is presented hereafter. Any complex and continuous covariance function can be introduced through Bochner’s theorem (Bochner 1959).

Theorem 1

Bochner’s theorem. A continuous function \(C: {\mathbb {R}}^m \rightarrow {\mathbb {C}}\), where \({\mathbb {C}}\) is the set of the complex numbers and \({\mathbb {R}}^m\) is the Euclidean m-dimensional space, is a covariance function if and only if it is the Fourier transform of a bounded and non decreasing function F, i.e.,

$$\begin{aligned} \displaystyle C(\textbf{s}) = \int _{{\mathbb {R}}^m} \exp ({i{\varvec{\omega }}^T \textbf{s}}) dF({\varvec{\omega }}), \end{aligned}$$
(1)

where i is the imaginary unit.

A special characterization of (1) results from the absolute continuity of the function F, i.e.,

$$\begin{aligned} \displaystyle C(\textbf{s}) = \int _{{\mathbb {R}}^m} \exp ({i{\varvec{\omega }}^T\textbf{s}}) f({\varvec{\omega }})d{\varvec{\omega }}, \end{aligned}$$
(2)

where the spectral density \(f({\varvec{\omega }})\ge 0, {\varvec{\omega }} \in {\mathbb {R}}^m\) and \(f \in L^1({\mathbb {R}}^m)\).

If \(C \in L^1({\mathbb {R}}^m)\), then the spectral density (s.d.) function is continuous, moreover:

$$\begin{aligned} f({\varvec{\omega }})={1\over {(2\pi )^m}}\int _{{\mathbb {R}}^m} \exp (-i{\varvec{\omega }}^T {{\textbf {s}}}) C({{\textbf {s}}})d{{\textbf {s}}}. \end{aligned}$$
(3)
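As a simple numerical illustration of (3) in one dimension, the following sketch (Python with NumPy/SciPy; an illustration added here, not part of the original text) recovers the s.d. of the exponential covariance \(C(s)=\exp (-\alpha |s|)\) by quadrature and compares it with the analytic form \(f(\omega )=\alpha /(\pi (\omega ^2+\alpha ^2))\).

```python
import numpy as np
from scipy.integrate import quad

alpha = 1.5  # arbitrary scale parameter for the illustration

def C(s):
    # exponential covariance in R
    return np.exp(-alpha * abs(s))

def f_numeric(w):
    # eq. (3) with m = 1; C is real and even, so the Fourier transform
    # reduces to (1/pi) * int_0^inf cos(w s) C(s) ds
    val, _ = quad(lambda s: np.cos(w * s) * C(s), 0.0, np.inf)
    return val / np.pi

def f_analytic(w):
    return alpha / (np.pi * (w**2 + alpha**2))

for w in [0.0, 0.5, 2.0, 10.0]:
    print(w, f_numeric(w), f_analytic(w))   # the two columns agree
```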

Let us state the main properties regarding the set of covariance functions.

  1. \(C({{\textbf {0}}})\ge 0\).

  2. \(C({{\textbf {s}}}) = {\overline{C}}(-{{\textbf {s}}}), \quad \forall {{\textbf {s}}} \in {\mathbb {R}}^m\), where the bar denotes the complex conjugate.

  3. \(|C({{\textbf {s}}})|\le C({{\textbf {0}}}), \quad \forall {{\textbf {s}}} \in {\mathbb {R}}^m.\)

  4. Let \(C_1,\ldots ,C_n\) be covariance functions on \({\mathbb {R}}^{m}\); then \(\displaystyle \prod _{i=1}^n C_i(\textbf{s})\) is a covariance on \({\mathbb {R}}^{m}\).

  5. Let \(C_1,\ldots ,C_n\) be covariance functions on \({\mathbb {R}}^{m}\) and \(\alpha _i\ge 0, i=1,\ldots ,n\); then \(\displaystyle \sum _{i=1}^n \alpha _i C_i(\textbf{s})\) is a covariance on \({\mathbb {R}}^{m}\).

  6. If \(C_k, k\in {\mathbb {N}}\), are covariances in \({\mathbb {R}}^{m}\), then \(\displaystyle C({{\textbf {s}}})=\lim _{k\rightarrow \infty } C_k({{\textbf {s}}})\) is a covariance function in \({\mathbb {R}}^{m}\), provided that the limit exists.

  7. If C is a covariance on \({\mathbb {R}}^{m}\), there is no guarantee that C is a covariance on \({\mathbb {R}}^{n}\), with \(n>m.\) Indeed, it is well known that the triangle model and the cosine model are covariance functions in \({\mathbb {R}}\), but they are not covariance functions in \({\mathbb {R}}^n\), with \(n\ge 2.\)

  8. If a covariance is isotropic, then it is a real valued function; as a consequence, in this special case, there exist several results which utilize Bessel and completely monotone functions (Polya 1949; Schoenberg 1938; Matern 1980; Yaglom 1987). Moreover, according to properties 1. and 3., a real covariance function is bounded and can be positive, negative or zero.

3 Traditional classes of isotropic covariance models

In this section, a brief overview of isotropic covariance models is proposed: in particular, some properties and limitations are outlined.

Among the wide families of isotropic covariance functions utilized in several applications, it is worth recalling the Whittle-Matern class (Stein 1999): any covariance function belonging to this class is valid in any dimension of the Euclidean space; the exponential and gaussian models are special members of this class. In addition, several extensions of currently available covariance models which refer to the Whittle-Matern class have been recently proposed (Laga and Kleiber 2017; Alegria et al. 2021; Ma and Bhadra 2022).

Apart from the merits of the Whittle-Matern class, a significant drawback is that this class is not able to model negative correlations, because all the members of this family are positive and decreasing functions. The same limitations characterize families of covariances constructed by applying the classical properties, such as sums and products, as will be described hereafter.

3.1 Isotropic covariance models

In the literature many stationary covariance models have been built utilizing Bochner’s representation (Christakos 2000; Gneiting 2002; Cressie and Huang 1999).

A stationary covariance C is said to be isotropic if \(C({{\textbf {s}}})=C(s)\), where \(s=||{{\textbf {s}}}||.\) Although isotropy represents a strong assumption, isotropic covariance models can be considered the building blocks for anisotropic and non stationary models. Yaglom (1957) exhibited the spectral representation of continuous isotropic stationary covariance functions on \({\mathbb {R}}^{m}\) (\(m\ge 2\)). In particular, if

$$\begin{aligned} \int ^{\infty }_{0} s^{m-1}|C(s)|ds <\infty ,\quad m \in {\mathbb {N}}, m\ge 2, \end{aligned}$$

then there exists a s.d. f; in this case:

$$\begin{aligned} C(s)={{2(\pi )^{m/2}}\over \Gamma (m/2)}\int ^{\infty }_{0}{\Lambda _m(ws) w^{m-1}}f(w)dw, \qquad s>0, \end{aligned}$$
(4)

where \(s=\Vert {{\textbf {s}}}\Vert ,\)

$$\begin{aligned} \Lambda _m(w)=2^{(m-2)/2}\Gamma (m/2){J_{(m-2)/2}(w) \over {w^{(m-2)/2}}}, \end{aligned}$$

and \(J_{(m-2)/2}\) is the Bessel function of the first kind of order \((m-2)/2\). Moreover, the s.d. function f is given by the expression:

$$\begin{aligned} f(w)={1\over {2^{m-1}(\pi )^{m/2}\Gamma (m/2)}}\int ^{\infty }_{0}\Lambda _m(ws) s^{m-1}C(s)ds. \end{aligned}$$
(5)

In the most important particular cases where \(m=2\) or \(m=3\), equations (4) and (5) take the form:

  • \(m=2:\)

    $$\begin{aligned} C(s)= & {} {2\pi }\int ^{\infty }_{0} J_0(ws) w f(w)dw, \quad \nonumber \\ f(w)= & {} {1\over {2\pi }}\int ^{\infty }_{0} J_0 (ws) s C(s)ds; \end{aligned}$$
    (6)
  • \(m=3:\)

    $$\begin{aligned} C(s)= & {} {4\pi }\int ^{\infty }_{0} {\sin (ws)\over {ws} } w^2 f(w)dw, \quad \nonumber \\ f(w)= & {} {1\over {2\pi ^2}}\int ^{\infty }_{0} {\sin (ws)\over {ws} } s^2 C(s)ds. \end{aligned}$$
    (7)

It is relevant and useful to recall the relationship between the s.d. \(f_1(w_1)\) in \({\mathbb {R}}\) and the s.d. f(w) in \({\mathbb {R}}^2\) and in \({\mathbb {R}}^3\), respectively:

$$\begin{aligned} f(w)= & {} {{-1\over {\pi }}\int ^{\infty }_{w} {{{df_1(w_1)} \over {dw_1}}{1\over ({w_1^2-w^2)^{1/2}}} dw_1}}, \quad \nonumber \\ f(w)= & {} {-1\over {2\pi w}} {{df_1(w)} \over {dw}}. \end{aligned}$$
(8)

To verify whether a function C is an m-dimensional isotropic covariance function, one only needs to compute through (5) the corresponding s.d. f and check whether \(f(w)\ge 0\) for all w. Another, equivalent, method consists in finding the Fourier transform \(f_1(w_1)\) of the function C in \({\mathbb {R}}\), then computing the corresponding multidimensional s.d. f(w) from \(f_1(w_1)\) through (8), and finally checking whether the function f(w) is non negative. This last method is especially convenient when \(m=2\) or \(m=3\).
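As an illustration of the first method for \(m=2\), the sketch below (Python; a numerical check added here, with the exponential model as test function) evaluates the s.d. through (6) by quadrature and compares it with the closed form \(\alpha /(2\pi (w^2+\alpha ^2)^{3/2})\), which is also used in Sect. 4.2.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

alpha = 1.0

def sd_2d_numeric(w):
    # eq. (6): f(w) = (1/2pi) int_0^inf J_0(w s) s C(s) ds, with C(s) = exp(-alpha s)
    val, _ = quad(lambda s: j0(w * s) * s * np.exp(-alpha * s), 0.0, np.inf, limit=200)
    return val / (2.0 * np.pi)

def sd_2d_analytic(w):
    return alpha / (2.0 * np.pi * (w**2 + alpha**2) ** 1.5)

for w in [0.0, 1.0, 3.0]:
    print(w, sd_2d_numeric(w), sd_2d_analytic(w))   # non negative everywhere
```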

Let us denote by \(\Phi _m\) the set of all m-dimensional isotropic covariance functions and also denote by \(\Phi _\infty\) the set of all covariance functions C which belong to \(\Phi _m\) for any integer m. Then, \(\Phi _1\supset \Phi _2\supset \cdots \supset \Phi _m\supset \cdots \supset \Phi _\infty\), where \(\Phi _1\) is the set of all real positive definite covariance functions. Note also that the general form of a function C belonging to \(\Phi _\infty\) is provided hereafter:

$$\begin{aligned} C(s)=\int ^{\infty }_{0}e^{-s^2 t^2} dF(t), \end{aligned}$$
(9)

where F is a bounded non decreasing function.

Lower bounds on isotropic correlation functions \(\rho (s)= C(s)/C(0)\) are well known in the literature (Yaglom 1987). Indeed, a function \(\rho\) is an isotropic correlation function on \({\mathbb {R}}^{m}\) if and only if it is of the following form:

$$\begin{aligned} \rho (s)= \int ^{\infty }_{0} \Lambda _m(sw)dF(w), \quad \int ^{\infty }_{0} dF(w)=1, \end{aligned}$$

where F is non decreasing.

Thus, for all s,

$$\begin{aligned} \rho (s)\ge \displaystyle \inf _{w\ge 0} \Lambda _m(w); \end{aligned}$$

in particular:

$$\begin{aligned} m= & {} 2, \quad \rho (s)\ge \displaystyle \inf _{w\ge 0} J_0(w)\approx -0.403; \quad \\ m= & {} 3, \quad \rho (s)\ge \displaystyle \inf _{w\ge 0} {\sin (w)\over w}\approx -0.218; \end{aligned}$$

since \(\Lambda _\infty (w)=\exp ({-w^2})>0\), it follows that \(\rho (s )> \displaystyle \inf _{w\ge 0} \Lambda _{\infty }(w)=0,\) hence \(\rho (s )>0\) for \(m=\infty .\)

According to this last result, as supported by (9), an isotropic covariance function which belongs to \(\Phi _\infty\) can never be negative.
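The lower bounds given above are easy to reproduce numerically; a minimal sketch follows (Python; the bracketing intervals are our choice and simply contain the first local minima).

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import j0

# inf of J_0(w) is attained near w ~ 3.83 (first zero of J_1)
res2 = minimize_scalar(j0, bounds=(1.0, 6.0), method="bounded")
# inf of sin(w)/w is attained near w ~ 4.49
res3 = minimize_scalar(lambda w: np.sin(w) / w, bounds=(1.0, 7.0), method="bounded")

print("m = 2 lower bound:", res2.fun)   # approx -0.403
print("m = 3 lower bound:", res3.fun)   # approx -0.218
```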

3.2 The Whittle–Matern class

Gneiting and Guttorp (2006) have provided an interesting historical review on this family of covariance functions; the same authors documented its relationship to the Hankel transform.

The Whittle-Matern class of continuous and isotropic covariance functions encloses several models and is often utilized because of its adaptability: for example, it has been widely used for modeling several case studies, such as sea beam data, wind speed, field temperature and soil data, due to its great flexibility in modeling the behaviour of correlation structures in proximity of the origin. However, this class contains only positive covariances, hence these models cannot be used in applications where negative covariances are required, such as turbulence and some environmental processes.

In particular, this class of models includes a parameter which controls the degree of differentiability for the random function and encloses the gaussian and exponential models as special cases.

The Whittle-Matern class of covariances is provided by the following expression:

$$\begin{aligned} C(\Vert {{\textbf {s}}}\Vert )= A\big ({\lambda }{\Vert {{\textbf {s}}}\Vert }\big )^{\nu }{K_{\nu }(\lambda \Vert {{\textbf {s}}}\Vert )}, \end{aligned}$$
(10)

where \(K_\nu\) is the modified Bessel function of second kind, \(\displaystyle \Vert {{\textbf {s}}}\Vert = \sqrt{\sum _{i=1}^m s_i^2 }, \lambda > 0\) is a scale parameter and \(\nu\) is a parameter which controls the smoothness of the random function.

The s.d. is given by the Fourier transform of C:

$$\begin{aligned} f(\omega )=A{2^{(\nu - 1)}\over (\pi )^{m/2}} {{\Gamma {(\nu +m/2)}\lambda ^{2\nu }}\over {(\omega ^2+\lambda ^2)^{\nu +m/2}}}. \end{aligned}$$

The Whittle-Matern class becomes especially simple when \(\nu =n+1/2, n\in {\mathbb {N}}\): in this case the covariance function is the product of an exponential with a polynomial of order n (Hristopulos 2020). Starting from Schoenberg's result that \(C(\Vert {{\textbf {s}}}\Vert )\) is positive definite on \({\mathbb {R}}^m\) for all m if and only if the function \(t\mapsto C(\sqrt{t})\) is completely monotone, Berg et al. (2008) performed an interesting analysis of the Dagum family of isotropic correlation functions.
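A minimal numerical sketch of (10) is given below (Python; the constant A is fixed here so that C(0)=1, a normalization chosen only for this illustration); it confirms that \(\nu =1/2\) reproduces the exponential covariance.

```python
import numpy as np
from scipy.special import gamma, kv

def matern(r, lam, nu):
    # Whittle-Matern model, eq. (10), normalized so that C(0) = 1
    r = np.asarray(r, dtype=float)
    out = np.ones_like(r)
    pos = r > 0
    x = lam * r[pos]
    out[pos] = (2.0 ** (1.0 - nu) / gamma(nu)) * x**nu * kv(nu, x)
    return out

r = np.linspace(0.01, 3.0, 5)
print(matern(r, lam=1.0, nu=0.5))   # nu = 1/2 gives the exponential model
print(np.exp(-r))                   # ... and indeed matches exp(-r)
```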

3.3 Sums, products and limits of covariance functions

In many geostatistical case studies, nested correlation models are often utilized: these nested models are obtained through linear combinations, with positive weights, of standard covariances, such as the gaussian, exponential and rational models. As a consequence, the resulting model is characterized by the same main features as the basic models, as illustrated hereafter.

Corollary 1

Let \(C_j, j=1,\ldots ,n,\) be continuous and non negative isotropic covariance functions defined in \({\mathbb {R}}^m, m\le 3\); suppose that each \(C_i, i=1,\dots , n,\) is differentiable and decreasing in \(]0,a[, a>0,\) and that there exists \(i\in \{1,\ldots ,n\}\) such that \(\displaystyle \lim _{x\rightarrow 0^+} C'_i(x)< 0;\) let:

$$\begin{aligned} \displaystyle C_S(x)= & {} \sum _{j=1}^n \lambda _j C_j (x), \quad \nonumber \\ C_P(x)= & {} \prod _{j=1}^n C_j (x), \quad \lambda _j>0, j=1,\ldots ,n; \end{aligned}$$
(11)

then:

$$\begin{aligned} C_S(x)\ge & {} 0, \quad \forall x \in {\mathbb {R}}, \quad and \quad \displaystyle \lim _{x\rightarrow 0^+} C_S'(x)< 0; \\ C_P(x)\ge & {} 0, \quad \forall x \in {\mathbb {R}}\quad and \quad \lim _{x\rightarrow 0^+} C_P'(x)< 0. \end{aligned}$$

Proof

It follows by applying the standard rules on derivatives. \(\square\)

Corollary 2

Let \(C_k, k\in {\mathbb {N}},\) be continuous and positive isotropic covariance functions defined in \({\mathbb {R}}^m, m\le 3\), then

$$\begin{aligned} C(x)=\lim _{k\rightarrow \infty } C_k(x) \end{aligned}$$
(12)

is a positive and isotropic covariance function in \({\mathbb {R}}^{m}\), provided that the limit exists.

Proof

It follows from the sign permanence theorem. \(\square\)

Taking into account the previous results, the following remarks can be derived.

  • According to Corollary 1, in a linear combination, with positive coefficients, of \((n-1)\) gaussian covariance models with one exponential model, or in their product, the behaviour near the origin depends only on the exponential model; hence the behaviour near the origin can never be parabolic, i.e. it does not depend on the \((n-1)\) gaussian models. On the other hand, in order to obtain a parabolic behaviour near the origin utilizing the standard properties of the set of covariance functions, it is necessary that all models present the same parabolic behaviour near the origin.

  • Utilizing the classical properties of covariance functions described in Sect. 2 and Corollaries 1 and 2, it can be deduced that it is not possible to construct flexible models able to exhibit different behaviours in proximity of the origin, or covariance models which can be positive or negative according to peculiar values of their parameters.

    On the other hand, it is relevant to point out that it is possible to construct three-parameter isotropic covariance functions which take negative values for a specific range of values of one parameter (rigidity coefficient), as it has been shown for the so-called Spartan covariance functions (Hristopulos and Elogne 2007). In addition, the Spartan covariance functions for values of the rigidity parameter greater than 2 are expressed as the difference between two functions (Hristopulos 2015).

  • As already pointed out, although the Whittle-Matern class of covariance functions includes a parameter which controls the degree of differentiability of the random function, the same class contains exclusively non negative covariance functions; hence, the Whittle-Matern class of covariance functions cannot be used to model negative correlations.

  • It has been underlined that, utilizing equations (11) and (12), or the well known properties described above, together with standard covariance models (exponential, gaussian, rational), it is not possible to generate covariances which assume negative values.

    Yaglom (1987) presented oscillatory covariance functions utilizing some Bessel functions. Linear combinations of covariance functions with some negative coefficients can be found in Gregori et al. (2008) and in Ma (2005).

  • In order to model covariance structures which present negative values or damped oscillations, hole effect models, such as the cosine covariance and covariance functions resulting from the product of standard positive models with a cosine function, are very often utilized. These correlation models are able to describe empirical processes which change sign several times, such as the fading of radio signals received by radar. It is well known that the cosine covariance model is valid only in \({\mathbb {R}}\); moreover, it is not strictly positive definite (De Iaco and Posa 2018), hence it often presents severe limitations for modelling purposes. The cosine model is often combined with the exponential model, as specified hereafter:

    $$\begin{aligned} \displaystyle C(s)=\exp \bigg (-{|s|\over a_1}\bigg ) \cos \bigg ({s\over a_2}\bigg ), \quad s\in {\mathbb {R}}. \end{aligned}$$

    This model can be extended to an isotropic model in \({\mathbb {R}}^m\), under a condition on the parameters: in particular, the previous model is a covariance in \({\mathbb {R}}^2\) if and only if \(a_2\ge a_1\) and a covariance in \({\mathbb {R}}^3\) if and only if \(a_2\ge a_1 \sqrt{3}\) (Yaglom 1987).

  • Among the hole effect covariance models, it is worth recalling the cardinal sine model, which is valid in \({\mathbb {R}}^3\), i.e.,

    $$\begin{aligned} \displaystyle C(s)=\bigg ({a\over s}\bigg ) \sin \bigg ({s\over a}\bigg ). \end{aligned}$$

    However, none of these classes of covariance functions is able to describe different structures by modifying the values of its parameters: this problem will be faced and overcome in the next section. A numerical illustration of the two hole effect models above is sketched hereafter.

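As announced above, the following sketch (Python; parameter values are arbitrary and chosen only for illustration) evaluates the two hole effect models just recalled and confirms that both take negative values at some lags.

```python
import numpy as np

a1, a2, a = 1.0, 1.5, 1.0   # a2 >= a1, so the damped cosine model is valid in R^2

s = np.linspace(0.01, 15.0, 2000)

# damped cosine model: exp(-|s|/a1) * cos(s/a2)
c_damped = np.exp(-np.abs(s) / a1) * np.cos(s / a2)
# cardinal sine model: (a/s) * sin(s/a), valid in R^3
c_sinc = (a / s) * np.sin(s / a)

print("min of the damped cosine model:", c_damped.min())  # negative hole effect
print("min of the cardinal sine model:", c_sinc.min())    # approx -0.217
```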

4 Special classes of continuous covariance functions

The general issue regarding the difference between covariances has been recently analyzed (Posa 2021); indeed, it can be considered a further property of the whole class of covariance functions. This result, a natural consequence of Bochner’s Theorem, is valid in the complex field, i.e., for any covariance function as in (1).

In the present section some interesting results arising from the difference between covariance functions will be given: in particular, the resulting covariance models present some relevant properties which are often required in the applications. To this purpose, the whole analysis will be performed in \({\mathbb {R}}, {\mathbb {R}}^2\) and \({\mathbb {R}}^3\); the families of covariance functions arising from the above difference show a variety of behaviours such that they can be easily utilized in several case studies. Indeed, the class of models proposed hereafter, according to the values of the parameters on which it depends, could be positive or negative in a subset of its domain; moreover, the models could portray a linear or parabolic behaviour near the origin. Covariance functions featuring negative values in a subset of their field of definition have been used in many applications, as previously pointed out.

The dependence of a covariance function on a set of parameters will be suitably underlined.

Theorem 2

(Posa 2021). Let \(C_k: {\mathbb {R}}^m \rightarrow {\mathbb {C}},k=1,2\) be covariance functions and consider

$$\begin{aligned} C({{\textbf {x}}}; {\varvec{\Lambda }})= & {} A C_1({{\textbf {x}}}; \varvec{\alpha })-B C_2({{\textbf {x}}}; \varvec{\beta }), \quad \nonumber \\ {{\textbf {x}}}= & {} (x_1,\ldots ,x_m)\in {{\mathbb {R}}^m}, \end{aligned}$$
(13)

where \(\varvec{\alpha }\) and \(\varvec{\beta }\) are vectors of parameters, \(A>0, B>0\) and \({\varvec{\Lambda }}=(A,B, {\varvec{\alpha }}, {\varvec{\beta }}).\)

Suppose that the covariance functions \(C_k, k=1,2\), can be expressed as in (2), i.e.,

$$\begin{aligned} C_1({{\textbf {x}}}; \varvec{\alpha })= & {} \int _{{\mathbb {R}}^m} \exp (i{\varvec{\omega }}^T {{\textbf {x}}}) f_1({\varvec{\omega }; \varvec{\alpha }})d{\varvec{\omega }}, \quad \\ C_2({{\textbf {x}}}; \varvec{\beta })= & {} \int _{{\mathbb {R}}^m} \exp (i{\varvec{\omega }}^T {{\textbf {x}}}) f_2({\varvec{\omega }; \varvec{\beta }})d{\varvec{\omega }}, \end{aligned}$$

with \(f_i({\varvec{\omega }; \cdot })\ge 0, \forall {\varvec{\omega }} \in {\mathbb {R}}^m\) and \(\displaystyle \int _{{\mathbb {R}}^m} f_i({\varvec{\omega }; \cdot })d{\varvec{\omega }} <\infty , i=1,2;\) then

$$\begin{aligned} C({{\textbf {x}}}; {\varvec{\Lambda }})=\int _{{\mathbb {R}}^m} \exp (i{\varvec{\omega }}^T {{\textbf {x}}})( A f_1({\varvec{\omega };\varvec{\alpha }})- B f_2({\varvec{\omega };\varvec{\beta }})) d{\varvec{\omega }} \end{aligned}$$
(14)

is a covariance function if and only if \(\forall {\varvec{\omega }} \in {\mathbb {R}}^m,\)

$$\begin{aligned} A f_1({\varvec{\omega }; \varvec{\alpha }})- B f_2({\varvec{\omega }; \varvec{\beta }})\ge 0, \quad \nonumber \\ \int _{{\mathbb {R}}^m} (A f_1({\varvec{\omega }; \varvec{\alpha }})- B f_2({\varvec{\omega };\varvec{\beta }})) d{\varvec{\omega }}<+\infty . \end{aligned}$$
(15)

Note that \((A f_1 - B f_2)\) is integrable, since \(f_1\) and \(f_2\) are integrable.
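In practice, condition (15) can be screened numerically for given parametric spectral densities; the helper below (a Python sketch; the function name is ours, and a finite grid check is only a necessary screen, not a proof) evaluates \(A f_1 - B f_2\) on a grid and reports whether it stays non negative. The example anticipates Corollary 3 below.

```python
import numpy as np

def difference_is_valid(f1, f2, A, B, w_max=200.0, n=200001):
    """Grid screen of condition (15): A*f1(w) - B*f2(w) >= 0 for all w.

    f1, f2 : vectorized spectral densities of the two covariances.
    """
    w = np.linspace(0.0, w_max, n)
    diff = A * f1(w) - B * f2(w)
    return bool(np.all(diff >= -1e-12)), float(diff.min())

# difference of two exponential covariances in R (see Corollary 3):
# here 1 < beta/alpha = 2 <= A/B = 3, so the difference is a valid covariance
alpha, beta, A, B = 1.0, 2.0, 3.0, 1.0
f1 = lambda w: alpha / (np.pi * (w**2 + alpha**2))
f2 = lambda w: beta / (np.pi * (w**2 + beta**2))
print(difference_is_valid(f1, f2, A, B))   # (True, ...)
```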

In the next subsections, some interesting outcomes of Theorem 2 are given for the subset of the real covariances: it will be shown how the difference between two covariance functions, suitably chosen among the most utilized models, i.e. the gaussian, exponential and rational models, can generate families of correlation structures which are extremely advantageous for modelling a wide range of case studies.

In the following, some examples for the difference between isotropic covariance functions are given for the Euclidean spaces \({\mathbb {R}}, {\mathbb {R}}^2\) and \({\mathbb {R}}^3,\) respectively. In particular, equations (6), (7) and (8) can be utilized to obtain the s.d. functions in \({{\mathbb {R}}^2}\) and \({{\mathbb {R}}^3}\), knowing the corresponding one dimensional s.d. function.

4.1 Special classes of covariance functions in \({\mathbb {R}}\)

In the present section flexible families of covariance models will be constructed in \({\mathbb {R}}\). These classes can be extremely useful in one dimensional applications, such as time series analysis.

Corollary 3

Given the following exponential covariance functions:

$$\begin{aligned} C_1({x}; \alpha )= & {} exp\big (-\alpha |x| \big ),\quad \\ C_2({x }; \beta )= & {} \exp \big (-\beta |x | \big ), \quad x \in {\mathbb {R}}, \alpha>0,\beta >0 \end{aligned}$$

then, the function defined hereafter

$$\begin{aligned} C({x};A, B, \alpha ,\beta )= & {} A \exp \big (-\alpha |x | \big ) - B \exp \big (-\beta |x |\big ), \quad \nonumber \\ A> & {} 0, B >0, \end{aligned}$$
(16)

is a covariance function if and only if:

$$\begin{aligned} 1<{ {\beta \over {\alpha }}}\le {A \over B }, \quad or \quad 1<{ {\alpha \over {\beta }}}<{A \over B }. \end{aligned}$$
(17)

If \(\displaystyle 1<{ {\beta \over {\alpha }}}<{A \over B }\) or \(\displaystyle 1<{ {\alpha \over {\beta }}}<{A \over B }\), then the covariance function (16) always presents a linear behaviour near the origin. If \(\displaystyle 1<{ {\beta \over {\alpha }}}={A \over B }\), the covariance function (16) presents a parabolic behaviour near the origin; moreover, if \(\displaystyle 1<{ {\alpha \over {\beta }}}<{A \over B }\), the covariance function (16) is always negative in a subset of its field of definition. At last, if \(\displaystyle 1<{ {\beta \over {\alpha }}}\le {A \over B }\), the covariance function (16) can never be negative.

Proof

Note that (16) is a covariance if and only if the corresponding Fourier transform,

$$\begin{aligned} f(\omega ; A, B, \alpha ,\beta )={A \over \pi }{ {\alpha \over {(\omega ^2+\alpha ^2)}}}-{ B \over \pi }{ {\beta \over {(\omega ^2+\beta ^2)}}}, \quad \omega \in {\mathbb {R}}, \end{aligned}$$

is a s.d. function, i.e.,

$$\begin{aligned} {A \over \pi }{ {\alpha \over {(\omega ^2+\alpha ^2)}}}-{ B \over \pi }{ {\beta \over {(\omega ^2+\beta ^2)}}}\ge 0, \quad \forall \omega \in {\mathbb {R}}, \end{aligned}$$
(18)

and f is integrable; this last condition is satisfied because \(C_1\) and \(C_2\) are integrable; in particular, (18) is satisfied if: \(\displaystyle { {(\omega ^2+\beta ^2)\over {(\omega ^2+\alpha ^2)}}}-{ B \over A }{ {\beta \over {\alpha }}}\ge 0, \quad \forall \omega \in {\mathbb {R}}.\)

The following cases can be analyzed with respect to the values of the parameters \(\alpha\) and \(\beta\):

  1. \(\alpha >\beta\); the minimum value of the function \(\displaystyle \zeta (\omega ; \alpha ,\beta )={ {(\omega ^2+\beta ^2)\over {(\omega ^2+\alpha ^2)}}}\) is attained at \(\omega ^*=0\); hence \(\displaystyle \zeta (0; \alpha , \beta )={ {\beta ^2\over {\alpha ^2}}};\) then, (18) is satisfied if: \(\quad \displaystyle 1<{ {\alpha \over {\beta }}}<{A \over B }.\)

    In this case, the covariance (16) assumes negative values if:

    $$\begin{aligned} \quad \displaystyle (\alpha -\beta )|x |>ln\bigg ({A \over {B }}\bigg ). \end{aligned}$$

    Moreover, the covariance function (16) always presents a linear behaviour near the origin because:

    $$\begin{aligned} \lim _{x\rightarrow 0^+} C'(x)= & {} -\alpha A+\beta B<0,\qquad \\ \lim _{x\rightarrow 0^-} C'(x)= & {} \alpha A-\beta B>0. \end{aligned}$$
  2. \(\alpha <\beta\); in this case, \(\displaystyle \zeta (\omega ; \alpha , \beta )={ {(\omega ^2+\beta ^2)\over {(\omega ^2+\alpha ^2)}}}>1, \forall \omega \in {\mathbb {R}},\) then (18) is satisfied if:

    $$\begin{aligned} 1<{ {\beta \over {\alpha }}}\le {A \over B }. \end{aligned}$$
    (19)

    Even in this case the covariance function (16) always presents a linear behaviour near the origin because:

    $$\begin{aligned} \lim _{x\rightarrow 0^+} C'(x)= & {} -\alpha A+\beta B<0,\qquad \\ \lim _{x\rightarrow 0^-} C'(x)= & {} \alpha A-\beta B>0. \end{aligned}$$

    The covariance (16) would assume negative values only if: \(\displaystyle (\beta -\alpha )|x |<ln\bigg ({B \over A }\bigg );\) however, because of (19), \(\displaystyle {B \over A }<{ {\alpha \over {\beta }}}<1;\) then, if \(\alpha <\beta ,\) the covariance function (16) can never be negative.

  3. \(\alpha<\beta , \quad \displaystyle {B \over A }={ {\alpha \over {\beta }}}<1, \quad i.e. \quad \alpha A=\beta B.\)

    In this peculiar case the covariance function (16) always presents a parabolic behaviour near the origin, because \(C'(0)=0\); moreover, the covariance function can never be negative.

Hence, the class of models defined in (16) is flexible enough to be adapted to several situations, compared with the classes of covariance functions described in the previous section. \(\square\)
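The regimes discussed in the proof are easy to visualize numerically; the sketch below (Python, with illustrative parameter choices of ours) evaluates (16) in two of the cases and confirms the sign behaviour and the behaviour near the origin.

```python
import numpy as np

def C_diff_exp(x, A, B, alpha, beta):
    # eq. (16): difference of two exponential covariances in R
    return A * np.exp(-alpha * np.abs(x)) - B * np.exp(-beta * np.abs(x))

x = np.linspace(0.0, 10.0, 4001)

# case 1: 1 < alpha/beta = 2 < A/B = 3 -> linear at the origin, negative tail
c1 = C_diff_exp(x, A=3.0, B=1.0, alpha=2.0, beta=1.0)
# negative exactly where (alpha - beta)|x| > ln(A/B), i.e. x > ln 3
print(c1.min() < 0, x[np.argmax(c1 < 0)], np.log(3.0))

# case 3: alpha*A = beta*B with alpha < beta -> parabolic at the origin
h = 1e-6
slope = (C_diff_exp(h, 2.0, 1.0, 1.0, 2.0) - C_diff_exp(0.0, 2.0, 1.0, 1.0, 2.0)) / h
print(slope)   # approximately 0, i.e. C'(0) = 0
```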

Corollary 4

Given the following gaussian covariance functions,

$$\begin{aligned} C_1({x};\alpha )= & {} \exp \big (-\alpha x^2\big ), \quad \\ C_2({x}; \beta )= & {} \exp \big (-\beta x^2\big ), \quad x \in {\mathbb {R}}\quad \alpha>0,\beta >0, \end{aligned}$$

then the function defined hereafter

$$\begin{aligned} C({x};A, B, \alpha ,\beta )= A \exp \big (-\alpha x^2\big ) - B \exp \big (-\beta x^2\big ), \quad x \in {\mathbb {R}}, \quad A>0, B>0, \end{aligned}$$
(20)

is a covariance function, if and only if:

$$\begin{aligned} \displaystyle 1<{ \alpha \over {\beta }}< \bigg ({ A\over {B}}\bigg )^2. \end{aligned}$$
(21)

Moreover, the family of covariance functions (20) always presents a parabolic behaviour near the origin and the same family always assumes negative values in a subset of its domain.

Proof

Note that (20) is a covariance if and only if its Fourier transform

$$\begin{aligned} f(\omega ;A, B, \alpha ,\beta )= A f_1(\omega ; \alpha ) - B f_2(\omega ; \beta ), \quad \omega \in {\mathbb {R}}, \end{aligned}$$

where \(f_1\) and \(f_2\) are, respectively:

$$\begin{aligned} f_1({\omega };\alpha )= & {} {1\over {2\pi }}\int _{{\mathbb {R}}}\exp \big (-i x \omega \big ) \exp \big (-\alpha x^2\big ) dx \\= & {} {{1\over {2}\sqrt{\pi \alpha }}} \exp \bigg [-\bigg ({\omega ^2\over {4\alpha }} \bigg )\bigg ], \omega \in {\mathbb {R}}, \end{aligned}$$

and

$$\begin{aligned} f_2({\omega };\beta )= & {} {1\over {2\pi }}\int _{{\mathbb {R}}}\exp \big (-i x \omega \big ) \exp \big (-\beta x^2\big ) dx \\= & {} {{1\over {2}\sqrt{\pi \beta }}} \exp \bigg [-\bigg ({\omega ^2\over {4\beta }} \bigg )\bigg ], \omega \in {\mathbb {R}}, \end{aligned}$$

is a s.d. function, i.e. it is integrable and \(\quad A f_1(\omega ; \alpha )- B f_2(\omega ; \beta )\ge 0, \quad \forall \omega \in {\mathbb {R}}.\)

Note that f is integrable, because \(f_1\) and \(f_2\) are integrable, moreover this last inequality is satisfied if:

$$\begin{aligned} \displaystyle {\omega ^2\over {4}}\bigg ({1\over {\beta }}-{1\over {\alpha }}\bigg ) > ln\bigg ({B\over {A}} \sqrt{{\alpha \over {\beta }}}\bigg ); \end{aligned}$$
(22)

then (22) is satisfied if \(\quad \alpha >\beta\) and \(\quad \displaystyle { B\over {A}} \sqrt{{\alpha \over {\beta }}}<1 \Longleftrightarrow { \alpha \over {\beta }} <\bigg ({ A\over {B}}\bigg )^2.\)

In this case the family of covariance functions (20) always presents a parabolic behaviour near the origin; moreover, the same family is always negative for all the values \(x \in {\mathbb {R}}\) such that: \(\quad \displaystyle (\alpha -\beta )x^2>ln\bigg ({A \over {B }}\bigg ).\) \(\square\)
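A short numerical check of the negativity region just described (Python; parameters of our choosing, satisfying (21)) follows.

```python
import numpy as np

A, B, alpha, beta = 2.0, 1.0, 3.0, 1.0   # 1 < alpha/beta = 3 < (A/B)^2 = 4

def C_diff_gauss(x):
    # eq. (20): difference of two gaussian covariances in R
    return A * np.exp(-alpha * x**2) - B * np.exp(-beta * x**2)

x = np.linspace(0.0, 5.0, 2001)
c = C_diff_gauss(x)

x_neg = np.sqrt(np.log(A / B) / (alpha - beta))   # boundary of the negative region
print("first negative lag ~", x[np.argmax(c < 0)], "analytic boundary:", x_neg)
print("minimum value:", c.min())   # strictly negative hole
```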

Corollary 5

Given the following covariance functions,

$$\begin{aligned} & C_1({x };\alpha )= {} {1 \over {(x ^2+\alpha ^2)}}, \quad C_2({x }; \beta )= {} {1 \over {(x ^2+\beta ^2)}}, \quad \\ & x \in {\mathbb {R}}, \alpha>0, \beta >0, \end{aligned}$$

the function

$$\begin{aligned} C(x; \alpha ,\beta , A, B )= {A \over {(x ^2+\alpha ^2)}} - {B \over {(x ^2+\beta ^2)}}, \quad A>0, B >0, \end{aligned}$$
(23)

is a covariance function if and only if:

$$\begin{aligned} \displaystyle \beta >\alpha \qquad and \qquad \displaystyle {B \over A }<{ {\beta \over {\alpha }}}. \end{aligned}$$
(24)

The parametric family (23) presents a parabolic behaviour in proximity of the origin; moreover, it is always positive if: \(\displaystyle {B \over A }<1<{ {\beta \over {\alpha }}}\) and assumes negative values if: \(\displaystyle 1< {B \over A }<{\beta \over \alpha }.\)

Proof

Note that (23) is a covariance if and only if its Fourier transform:

$$\begin{aligned} f(\omega ; A,B,\alpha , \beta )= {A \over {2\alpha }} \exp \big (-\alpha |\omega |\big )-{B \over {2\beta }} \exp \big (-\beta |\omega |\big ), \quad \omega \in {\mathbb {R}}; \end{aligned}$$
(25)

is a s.d. function, i.e., it is integrable and

$$\begin{aligned} {A \over {2\alpha }} exp\big (-\alpha |\omega |\big )-{B \over {2\beta }} exp\big (-\beta |\omega |\big )\ge 0, \quad \omega \in {\mathbb {R}}. \end{aligned}$$

The s.d. is a difference of integrable functions, hence it is integrable; in particular, the previous inequality is satisfied if: \(\displaystyle \beta >\alpha ,\) and \(\displaystyle {B \over A }<{ {\beta \over {\alpha }}}.\) \(\square\)

The parametric family (23) always presents a parabolic behaviour near the origin because \(C'(0)=0\) and it assumes just positive values if:

$$\begin{aligned} x^2(A-B)>B\alpha ^2-A\beta ^2, \quad \forall x \in {\mathbb {R}}. \end{aligned}$$

This last inequality is always satisfied if \(\displaystyle {B \over A }<1<{ {\beta \over {\alpha }}}\). At last, the parametric family (23) can assume negative values if:

$$\begin{aligned} x^2(B-A)>A\beta ^2-B\alpha ^2, \quad x \in {\mathbb {R}}. \end{aligned}$$

This last inequality is verified for sufficiently large \(|x|\) if: \(\displaystyle 1< {B \over A }<{\beta \over \alpha }.\)
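Both regimes can be confirmed numerically, as in the sketch below (Python; the two parameter sets are arbitrary choices satisfying (24)).

```python
import numpy as np

def C_diff_rational(x, A, B, alpha, beta):
    # eq. (23): difference of two rational covariances in R
    return A / (x**2 + alpha**2) - B / (x**2 + beta**2)

x = np.linspace(0.0, 50.0, 20001)

# positive regime: B/A = 0.5 < 1 < beta/alpha = 2
print(C_diff_rational(x, A=1.0, B=0.5, alpha=1.0, beta=2.0).min() >= 0)   # True

# negative regime: 1 < B/A = 1.5 < beta/alpha = 2
print(C_diff_rational(x, A=1.0, B=1.5, alpha=1.0, beta=2.0).min() < 0)    # True
```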

All the results of this section are summarized in Table 1.

Table 1 Summary of the results in Sect. 4.1, where the Euclidean space is \({\mathbb {R}}\)

Corollary 6

Let \(C(x)=A\cdot exp(-|x|)+B\cdot exp(-x^2), A>0, B>0,\) be a covariance function in \({\mathbb {R}}\). Then C presents a linear behaviour near the origin and an inflection point in \(x_0\in ]0,\infty [\), i.e. \(C''(x_0)=0,\) if \(A-2B<0\).

Proof

Suppose that \(A-2B<0\); note that:

$$\begin{aligned} \displaystyle \lim _{x\rightarrow 0^+} C'(x)=\displaystyle \lim _{x\rightarrow 0^+} [-A\exp (-x)-2Bx\exp (-x^2)]=-A<0; \end{aligned}$$

moreover, \(C''(x)=A \cdot exp(-x)+2B\cdot exp(-x^2)(2x^2-1)\); then

$$\begin{aligned} \displaystyle \lim _{x\rightarrow 0^+} C''(x)=A-2B<0; \quad C''(1)={(A+2B)\over e}>0. \end{aligned}$$

Since \(C''\) is a continuous function on \(]0,\infty [\), there exists \(x_0 \in ]0,1[\) such that \(C''(x_0)=0\). \(\square\)

The importance of Corollary 6 is related to the fact that a linear combination of a gaussian model with an exponential model can generate a model with a change of concavity, which could be useful in some applications. The behaviour underlined by Corollary 6 (Fig. 1) is atypical: for covariance functions characterized by a linear behaviour in proximity of the origin, the concavity near the origin is usually upwards, whereas for the covariance function of Corollary 6 the concavity is downwards near the origin, i.e. the behaviour typical of covariance functions with a parabolic behaviour in proximity of the origin.
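The inflection point guaranteed by Corollary 6 can be located numerically, as in the sketch below (Python; the choice A = B = 1, which gives A − 2B < 0, is ours).

```python
import numpy as np
from scipy.optimize import brentq

A, B = 1.0, 1.0   # A - 2B = -1 < 0

def C2(x):
    # second derivative of C(x) = A exp(-|x|) + B exp(-x^2), for x > 0
    return A * np.exp(-x) + 2.0 * B * np.exp(-x**2) * (2.0 * x**2 - 1.0)

# C''(0+) = A - 2B < 0 and C''(1) = (A + 2B)/e > 0, so a root lies in ]0, 1[
x0 = brentq(C2, 1e-9, 1.0)
print("inflection point x0 =", x0, " C''(x0) =", C2(x0))
```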

Fig. 1 Linear behaviour near the origin for the sum of an exponential with a gaussian model. The concavity near the origin is upwards if \(A-2B>0\), downwards if \(A-2B\le 0\)

4.2 Special classes of isotropic covariance functions in \({\mathbb {R}}^2\)

In this section, the difference between isotropic covariance functions is considered in the Euclidean space \({\mathbb {R}}^2\). In particular, the same models as in the previous section will be considered.

Given the one dimensional covariance functions \(C_1(x )\) and \(C_2(x )\) and the corresponding s.d. functions \(f_1(\omega )\) and \(f_2(\omega )\), the corresponding s.d. functions in \({\mathbb {R}}^2\) can be obtained using equation (8); these will be utilized in the following results.

Corollary 7

Given the following exponential covariance functions:

$$\begin{aligned} C_1({x}; \alpha )= & {} exp\big (-\alpha |x| \big ); C_2({x }; \beta )= \exp \big (-\beta |x| \big ), \quad \\ {{\textbf {x}}}= & {} (x_1,x_2), |x|=\sqrt{\sum _{i=1}^2 x_i^2}, \alpha>0, \beta >0 \end{aligned}$$

then, the function defined hereafter

$$\begin{aligned} C({x};A, B, \alpha ,\beta )= A \exp \big (-\alpha |x| \big ) - B \exp \big (-\beta |x|\big ), \quad A>0, B>0, \end{aligned}$$
(26)

is a covariance function if and only if:

$$\begin{aligned} \displaystyle 1< {\beta \over \alpha } \le {A\over B}\qquad or \qquad \displaystyle 1< {\alpha \over \beta } < \sqrt{A\over B}. \end{aligned}$$
(27)

If \(\quad \displaystyle 1< {\beta \over \alpha } < {A\over B}\quad\) or \(\quad \displaystyle 1< {\alpha \over \beta } < \sqrt{A\over B}\), the covariance function (26) presents a linear behaviour near the origin; the same covariance function presents a parabolic behaviour near the origin if: \(\quad \displaystyle 1< {\beta \over \alpha } = {A\over B}.\quad\) Moreover, if \(\quad \displaystyle 1< {\beta \over \alpha } \le {A\over B},\quad\) the covariance function (26) can never be negative; otherwise, if: \(\quad \displaystyle 1< {\alpha \over \beta } < \sqrt{A\over B}\), the covariance function (26) assumes negative values in a subset of its domain.

Proof

Note that (26) is a covariance if and only if its Fourier transform:

$$\begin{aligned} f(\omega ;A, B,\alpha , \beta )={{A\alpha }\over {2\pi (\omega ^2+\alpha ^2)^{3/2}}}- {{B\beta }\over {2\pi (\omega ^2+\beta ^2)^{3/2}}}, \end{aligned}$$

where \(\alpha>0,\quad \beta >0, \quad \varvec{\omega }=(\omega _1,\omega _2)\) and \(\displaystyle \omega =\sqrt{\sum _{i=1}^2 \omega _i^2},\) is a s.d. function, i.e.,

$$\begin{aligned} {A\alpha \over {2\pi (\omega ^2+\alpha ^2)^{3/2}}}-{B\beta \over {2\pi (\omega ^2+\beta ^2)^{3/2}}}\ge 0, \quad \forall \omega \in {\mathbb {R}}; \end{aligned}$$
(28)

in particular, (28) is satisfied if:

$$\begin{aligned} \displaystyle \bigg ({{\omega ^2+\beta ^2}\over {\omega ^2+\alpha ^2}}\bigg )^{3/2}-{ B\over A}{ {\beta \over {\alpha }}}\ge 0, \quad \forall \omega \in {\mathbb {R}}. \end{aligned}$$

If \(\alpha <\beta\), then (28) is verified if: \(\quad \displaystyle 1< {\beta \over \alpha } \le {A\over B}\);    if: \(\displaystyle 1< {\beta \over \alpha } < {A\over B},\) the covariance function (26) always presents a linear behaviour near the origin because:

$$\begin{aligned} \lim _{x\rightarrow 0^+} C'(x)= & {} -\alpha A+\beta B<0,\qquad \\ \lim _{x\rightarrow 0^-} C'(x)= & {} \alpha A-\beta B>0. \end{aligned}$$

Otherwise, if: \(\quad \displaystyle 1< {\beta \over \alpha } = {A\over B}\), the covariance function presents a parabolic behaviour near the origin because:

$$\begin{aligned} \lim _{x\rightarrow 0^+} C'(x)= \lim _{x\rightarrow 0^-} C'(x)=0. \end{aligned}$$

The covariance (26) would be negative only if: \(\displaystyle (\beta -\alpha )|x |<ln\bigg ({B \over A }\bigg );\) however, \(\displaystyle {B \over A }<{ {\alpha \over {\beta }}}<1,\) then, if \(\alpha <\beta ,\) the covariance function (26) can never be negative.

On the other hand, if \(\alpha >\beta\), then (28) is verified if: \(\displaystyle 1< {\alpha \over \beta } < \sqrt{A\over B}.\)

In this case, it is easy to check that the covariance function (26) is negative for all the values \(x \in {\mathbb {R}}\) which satisfy: \(\quad \displaystyle (\alpha -\beta )|x |>ln\bigg ({A \over {B }}\bigg ).\)

Moreover, in this case the covariance function (26) always presents a linear behaviour near the origin because: \(\quad \displaystyle \lim _{x\rightarrow 0^+} C'(x)=-\alpha A+\beta B<0,\quad \lim _{x\rightarrow 0^-} C'(x)=\alpha A-\beta B>0.\) \(\square\)

Corollary 8

Given the following gaussian covariance functions:

$$\begin{aligned} C_1({x}; \alpha )= & {} exp\big (-\alpha x^2 \big ); C_2({x}; \beta )= \exp \big (-\beta x^2 \big ), \\ {{\textbf {x}}}= & {} (x_1,x_2), |x|=\sqrt{\sum _{i=1}^2 x_i^2}, \quad \alpha>0, \beta >0; \end{aligned}$$

then, the function defined hereafter

$$\begin{aligned} C({x};A, B, \alpha ,\beta )= A \exp \big (-\alpha x^2\big ) - B\exp \big (-\beta x^2\big ), \quad A>0, B>0, \end{aligned}$$
(29)

is a covariance function if and only if:

$$\begin{aligned} \displaystyle 1< {\alpha \over {\beta }}<{ A\over {B}}. \end{aligned}$$
(30)

The covariance function (29) always presents a parabolic behaviour near the origin; moreover, the same covariance function is always negative in a subset of its domain.

Proof

Note that (29) is a covariance function if and only if its Fourier transform:

$$\begin{aligned} A f_1(\omega ; \alpha )- Bf_2(\omega ;\beta )= {A\over {4\pi \alpha }} \exp \bigg [-\bigg ({\omega ^2\over {4\alpha }} \bigg )\bigg ] - {B\over {4\pi \beta }} \exp \bigg [-\bigg ({\omega ^2\over {4\beta }} \bigg )\bigg ], \quad \omega \in {\mathbb {R}}, \end{aligned}$$

where:

$$\begin{aligned} f_1(\omega ;\alpha )= & {} {1\over {4\pi \alpha }} \exp \bigg [-\bigg ({\omega ^2\over {4\alpha }} \bigg )\bigg ],\quad \nonumber \\ f_2(\omega ;\beta )= & {} {1\over {4\pi \beta }} \exp \bigg [-\bigg ({\omega ^2\over {4\beta }} \bigg )\bigg ], \end{aligned}$$
(31)

\(\alpha>0,\quad \beta >0, \quad \varvec{\omega }=(\omega _1,\omega _2)\) and \(\displaystyle \omega =\sqrt{\sum _{i=1}^2 \omega _i^2},\) is a s.d. function, i.e., it is integrable and:

$$\begin{aligned} {A\over {4\pi \alpha }} \exp \bigg [-\bigg ({\omega ^2\over {4\alpha }} \bigg )\bigg ]- {B\over {4\pi \beta }} \exp \bigg [-\bigg ({\omega ^2\over {4\beta }} \bigg )\bigg ]\ge 0, \quad \omega \in {\mathbb {R}}. \end{aligned}$$

The s.d. is integrable, because it is a difference of integrable functions; moreover, the last inequality is verified if:

$$\begin{aligned} \displaystyle {\omega ^2\over {4}}\bigg ({1\over {\beta }}-{1\over {\alpha }}\bigg ) > ln\bigg ({B\over {A}} {\alpha \over {\beta }}\bigg ); \end{aligned}$$
(32)

then, (32) is always satisfied if \(\alpha >\beta\) and \(\displaystyle { B\over {A}} {\alpha \over {\beta }}<1,\) i.e., \(\displaystyle 1< {\alpha \over {\beta }}<{ A\over {B}}.\)

The covariance function (29) always presents a parabolic behaviour near the origin because \(C'(0)=0\) and it is always negative in a subset of its domain, because there always exists \(x>0\) such that the following inequality

$$\begin{aligned} (\alpha -\beta ) x^2 > ln \bigg ({A\over B}\bigg ), \end{aligned}$$

is satisfied. \(\square\)

Corollary 9

Given the following covariance functions,

$$\begin{aligned} C_1({x };\alpha )= & {} {1 \over {(x ^2+\alpha ^2)}^{3/2}}; C_2({x }; \beta )= {1 \over {(x ^2+\beta ^2)}^{3/2}}, \\ {{\textbf {x}}}= & {} (x_1,x_2), |x|=\sqrt{\sum _{i=1}^2 x_i^2},\alpha>0, \beta >0 \end{aligned}$$

the function

$$\begin{aligned} C(x;A, B, \alpha ,\beta )= {A \over {(x ^2+\alpha ^2)}^{3/2}} - {B \over {(x ^2+\beta ^2)}^{3/2}}, \quad A>0, B >0, \end{aligned}$$
(33)

is a covariance if and only if:

$$\begin{aligned} \displaystyle \beta >\alpha \qquad and \qquad \displaystyle {B \over A }<{ {\beta \over {\alpha }}}. \end{aligned}$$
(34)

The parametric family (33) presents a parabolic behaviour near the origin; moreover, it assumes just positive values if: \(\displaystyle {B \over A }<1<{ {\beta \over {\alpha }}}\) and negative values if: \(\displaystyle 1< {B \over A }<{{\beta \over \alpha }}.\)

Proof

Starting from equation (6), the s.d. function \(f_1\) for the covariance \(C_1\) is the following:

$$\begin{aligned} f_1(\omega )= & {} {1\over {2\pi }}\int ^{\infty }_{0} J_0 (\omega x) x C_1(x)dx\\= & {} {1\over {2\pi }}\int ^{\infty }_{0} J_0 (\omega x) x {1 \over {(x ^2+\alpha ^2)}^{3/2}}dx \\= & {} {1\over {2\pi \alpha }} \exp (-\alpha |\omega |), \quad \varvec{\omega }=(\omega _1,\omega _2), \quad \omega =\sqrt{\sum _{i=1}^2 \omega _i^2}. \end{aligned}$$

Similarly, \(\displaystyle \quad f_2(\omega )={1\over {2\pi \beta }} \exp (-\beta |\omega |).\)

Note that (33) is a covariance if and only if its Fourier transform:

$$\begin{aligned} f(\omega ; A,B,\alpha , \beta )= {A \over {2\pi \alpha }} \exp \big (-\alpha |\omega |\big )-{B \over {2\pi \beta }} \exp \big (-\beta |\omega |\big ), \quad \omega \in {\mathbb {R}}; \end{aligned}$$
(35)

is a s.d. function, i.e., it is integrable and \(\quad \displaystyle {A \over {\alpha }} exp\big (-\alpha |\omega |\big )-{B \over {\beta }} exp\big (-\beta |\omega |\big )\ge 0, \quad \omega \in {\mathbb {R}}.\)

The s.d. (35) is integrable since it is a difference of functions which are integrable; in particular, the previous inequality is verified if: \(\displaystyle \beta >\alpha ,\) and \(\displaystyle {B \over A }<{ {\beta \over {\alpha }}}.\)

The parametric family (33) always presents a parabolic behaviour near the origin because \(C'(0)=0\) and is always positive if: \(\quad \displaystyle {{(x ^2+\beta ^2)}^{3/2} \over {(x ^2+\alpha ^2)}^{3/2}}>{B\over A}, \quad \forall x \in {\mathbb {R}}.\)

This last inequality is always satisfied if \(\displaystyle {B \over A }<1<{ {\beta \over {\alpha }}}\). At last, the parametric family (33) assumes negative values if: \(\quad \displaystyle {{(x ^2+\beta ^2)}\over {(x ^2+\alpha ^2)}}<{\bigg ({B\over A}\bigg )^{2/3 }},\) which is verified for sufficiently large \(|x|\) whenever \(\displaystyle 1<{B \over A }<{{\beta \over \alpha }}.\)

All the results of this section are summarized in Table 2. \(\square\)

Table 2 Summary of the results in Sect. 4.2, where the Euclidean space is \({\mathbb {R}}^2\)

4.3 Special classes of isotropic covariance functions in \({\mathbb {R}}^3\)

In this section, the difference between isotropic covariances is considered in the Euclidean space \({\mathbb {R}}^3\). In particular, the same models as in the previous sections will be considered.

Given the one dimensional covariance functions \(C_1(x )\) and \(C_2(x )\) and the corresponding s.d. functions \(f_1(\omega )\) and \(f_2(\omega )\), the corresponding s.d. functions in \({\mathbb {R}}^3\) can be obtained by recalling equation (8); these will be utilized in the following results.

Corollary 10

Given the following exponential covariance functions:

$$\begin{aligned} C_1({x}; \alpha )= & {} exp\big (-\alpha |x| \big ),\quad {{\textbf {x}}}=(x_1,x_2,x_3), \\ |x|= & {} \sqrt{\sum _{i=1}^3 x_i^2}, \quad \alpha >0, \end{aligned}$$

and

$$\begin{aligned} C_2({x};\beta )= & {} \exp \big (-\beta |x| \big ), \quad {\varvec{x}}=(x_1,x_2,x_3), \\ |x|= & {} \sqrt{\sum _{i=1}^3 x_i^2}, \quad \beta >0, \end{aligned}$$

then, the function defined hereafter

$$\begin{aligned} C({x};A, B, \alpha ,\beta )= A \exp \big (-\alpha |x| \big ) - B \exp \big (-\beta |x|\big ), \quad A>0, B>0, \end{aligned}$$
(36)

is a covariance function if and only if

$$\begin{aligned} \displaystyle 1< {\beta \over \alpha } \le {A\over B}\quad or \quad \displaystyle 1< {\alpha \over \beta } <\bigg ( {A\over B}\bigg )^{1/3}. \end{aligned}$$
(37)

If \(\quad \displaystyle 1< {\beta \over \alpha } < {A\over B}\quad\) or \(\quad \displaystyle 1< {\alpha \over \beta } < \bigg ( {A\over B}\bigg )^{1/3}\), the covariance function (36) presents a linear behaviour near the origin; the same covariance function presents a parabolic behaviour near the origin if: \(\quad \displaystyle 1< {\beta \over \alpha } = {A\over B}.\quad\) Moreover, if \(\quad \displaystyle 1< {\beta \over \alpha } \le {A\over B},\quad\) the covariance function (36) can never be negative; otherwise, if: \(\quad \displaystyle 1< {\alpha \over \beta } < \bigg ( {A\over B}\bigg )^{1/3}\), the covariance function (36) is always negative in a subset of its domain.

Proof

Note that (36) is a covariance if and only if its Fourier transform:

$$\begin{aligned} f(\omega ;A, B, \alpha ,\beta )=Af_1(\omega ;\alpha )-Bf_2(\omega ;\beta ), \end{aligned}$$

where

$$\begin{aligned} f_1(\omega ;\alpha )={\alpha \over {\pi ^2(\omega ^2+\alpha ^2)^{2}}},\quad f_2(\omega ;\beta ) ={\beta \over {\pi ^2(\omega ^2+\beta ^2)^{2}}}, \end{aligned}$$

\(\displaystyle \varvec{\omega }=(\omega _1,\omega _2,\omega _3), \omega =\sqrt{\sum _{i=1}^3 \omega _i^2},\) is a s.d. function, i.e., it is integrable and

$$\begin{aligned} {A\alpha \over {\pi ^2(\omega ^2+\alpha ^2)^{2}}}-{B\beta \over {\pi ^2(\omega ^2+\beta ^2)^{2}}}\ge 0, \quad \forall \omega \in {\mathbb {R}}; \end{aligned}$$
(38)

in particular, f is integrable, because \(f_1\) and \(f_2\) are integrable; moreover (38) is satisfied if:

$$\begin{aligned} \displaystyle \bigg ({{\omega ^2+\beta ^2}\over {\omega ^2+\alpha ^2}}\bigg )^{2}-{ B\over A}{ {\beta \over {\alpha }}}\ge 0, \quad \forall \omega \in {\mathbb {R}}. \end{aligned}$$

If \(\alpha <\beta\), (38) is verified if: \(\displaystyle 1< {\beta \over \alpha } \le {A\over B}\); if \(\alpha >\beta\), (38) is verified if: \(\displaystyle 1< {\alpha \over \beta } <\bigg ( {A\over B}\bigg )^{1/3}.\) \(\square\)

In particular, if \(\quad \displaystyle 1< {\beta \over \alpha } < {A\over B}\) the covariance function (36) always presents a linear behaviour near the origin because: \(\displaystyle \lim _{x\rightarrow 0^+} C'(x)=-\alpha A+\beta B<0,\quad \lim _{x\rightarrow 0^-} C'(x)=\alpha A-\beta B>0.\)

Otherwise, if \(\quad \displaystyle 1< {\beta \over \alpha } = {A\over B}\), the covariance function presents a parabolic behaviour near the origin because: \(\displaystyle \lim _{x\rightarrow 0^+} C'(x)= \lim _{x\rightarrow 0^-} C'(x)=0.\)

The covariance function (36) would assume negative values only if: \(\displaystyle (\beta -\alpha )|x |<ln\bigg ({B \over A }\bigg );\) however, \(\displaystyle {B \over A }<{ {\alpha \over {\beta }}}<1,\) then, if \(\alpha <\beta ,\) the covariance function (36) can never be negative.

If \(\alpha >\beta\), then (38) is verified if: \(\displaystyle 1< {\alpha \over \beta } <\bigg ( {A\over B}\bigg )^{1/3}.\) In this case, it is easy to verify that the covariance function (36) is negative for all the values \(x \in {\mathbb {R}}\) such that: \(\quad \displaystyle (\alpha -\beta )|x |>ln\bigg ({A \over {B }}\bigg ).\)

Moreover, the covariance function (36) always presents a linear behaviour near the origin because: \(\displaystyle \lim _{x\rightarrow 0^+} C'(x)=-\alpha A+\beta B<0,\quad \lim _{x\rightarrow 0^-} C'(x)=\alpha A-\beta B>0.\)

Corollary 11

Given the following gaussian covariance functions:

$$\begin{aligned} C_1({x}; \alpha )= & {} exp\big (-\alpha x^2 \big ), C_2({x};\beta )= \exp \big (-\beta x^2 \big ); \\ {{\textbf {x}}}= & {} (x_1,x_2,x_3), |x|=\sqrt{\sum _{i=1}^3 x_i^2}, \alpha>0,\beta >0, \end{aligned}$$

the function defined hereafter

$$\begin{aligned} C({x};A, B, \alpha ,\beta )= A \exp \big (-\alpha x^2\big ) - B\exp \big (-\beta x^2\big ), \quad A>0, B>0, \end{aligned}$$
(39)

is a covariance function if and only if:

$$\begin{aligned} \displaystyle 1< {\alpha \over {\beta }}<{ \bigg ({A\over {B}}\bigg )^{2/3}}. \end{aligned}$$
(40)

The covariance function (39) always presents a parabolic behaviour near the origin; moreover, the same covariance function is always negative in a subset of its domain.

Proof

Note that (39) is a covariance if and only if its Fourier transform:

$$\begin{aligned} f(\omega ;A, B, \alpha ,\beta )=Af_1(\omega ;\alpha )-Bf_2(\omega ;\beta ), \end{aligned}$$

where

$$\begin{aligned} f_1(\omega ;\alpha )= & {} {1\over {8({\pi \alpha })^{3/2}}} \exp \bigg [-\bigg ({\omega ^2\over {4\alpha }} \bigg )\bigg ], \quad \nonumber \\ f_2(\omega ;\beta )= & {} {1\over {8({\pi \beta })^{3/2}}} \exp \bigg [-\bigg ({\omega ^2\over {4\beta }} \bigg )\bigg ], \end{aligned}$$
(41)

\(\displaystyle \varvec{\omega }=(\omega _1,\omega _2,\omega _3), \omega =\sqrt{\sum _{i=1}^3 \omega _i^2},\) is a s.d. function, i.e., it is integrable and

$$\begin{aligned} A f_1(\omega ; \alpha )- Bf_2(\omega ;\beta )= {A\over {8({\pi \alpha })^{3/2}}} \exp \bigg [-\bigg ({\omega ^2\over {4\alpha }} \bigg )\bigg ] - {B\over {8({\pi \beta })^{3/2}}} \exp \bigg [-\bigg ({\omega ^2\over {4\beta }} \bigg )\bigg ]\ge 0,\quad \forall \omega \in {\mathbb {R}}; \end{aligned}$$

In particular, f is integrable, because \(f_1\) and \(f_2\) are integrable; moreover, the last inequality is verified if:

$$\begin{aligned} \displaystyle 1< {\alpha \over {\beta }}<{ \bigg ({A\over {B}}\bigg )^{2/3}}. \end{aligned}$$
(42)

The covariance (39) always presents a parabolic behaviour near the origin because \(C'(0)=0\); moreover, it always assumes negative values, because the inequality \(\displaystyle (\alpha -\beta ) x^2 > ln \bigg ({A\over B}\bigg )\) is satisfied in a subset of its domain. \(\square\)

Corollary 12

Given the following covariance functions,

$$\begin{aligned} & C_1({x };\alpha )= {1 \over {(x ^2+\alpha ^2)}^{2}}, \quad {{\textbf {x}}}=(x_1,x_2,x_3), \\ & |x| = \sqrt{\sum _{i=1}^3 x_i^2},\quad \alpha>0, \\ & C_2({x }; \beta )= {1 \over {(x ^2+\beta ^2)}^{2}}, \quad {{\textbf {x}}}=(x_1,x_2,x_3), \\ & |x| = \sqrt{\sum _{i=1}^3 x_i^2}, \quad \beta >0, \end{aligned}$$

the function

$$\begin{aligned} C(x;A, B, \alpha ,\beta )= {A \over {(x ^2+\alpha ^2)}^{2}} - {B \over {(x ^2+\beta ^2)}^{2}}, \quad A>0,\ B>0, \end{aligned}$$
(43)

is a covariance function if and only if:

$$\begin{aligned} \beta >\alpha \quad \text {and} \quad {B \over A }<{\beta \over \alpha }. \end{aligned}$$
(44)

The parametric family (43) presents a parabolic behaviour near the origin; moreover, it is always positive if: \(\displaystyle {B \over A }<1<{ {\beta \over {\alpha }}}\) and can assume negative values if: \(\displaystyle 1< {B \over A }<{{\beta \over \alpha }}.\)

Proof

Starting from equation (7), the s.d. function \(f_1,\) for the covariance \(C_1\) is the following:

$$\begin{aligned} f_1(\omega )&= {1\over {2\pi }}\int ^{\infty }_{0} {\sin (\omega x)\over {\omega x} }\, x^2\, C_1(x)\,dx = {1\over {2\pi }}\int ^{\infty }_{0} {\sin (\omega x)\over {\omega x} }\, {x^2 \over {(x ^2+\alpha ^2)}^{2}}\,dx \\ &= {1\over {8\pi \alpha }} \exp (-\alpha |\omega |), \quad \varvec{\omega }=(\omega _1,\omega _2,\omega _3), \quad \omega =\sqrt{\sum _{i=1}^3 \omega _i^2}. \end{aligned}$$

Similarly, \(\displaystyle \quad f_2(\omega )={1\over {8\pi \beta }} \exp (-\beta |\omega |).\)

Note that (43) is a covariance if and only if its Fourier transform:

$$\begin{aligned} f(\omega ; A,B,\alpha , \beta )= {A \over {8\pi \alpha }} \exp \big (-\alpha |\omega |\big ) - {B \over {8\pi \beta }} \exp \big (-\beta |\omega |\big ), \quad \omega \in {\mathbb {R}}, \end{aligned}$$
(45)

is a s.d. function, i.e., it is integrable and \(\displaystyle {A \over {\alpha }} exp\big (-\alpha |\omega |\big )-{B \over {\beta }} exp\big (-\beta |\omega |\big )\ge 0, \quad \omega \in {\mathbb {R}}.\) The s.d. (45) is integrable since it is the difference of integrable functions; in particular, the previous inequality is verified if: \(\displaystyle \beta >\alpha ,\) and \(\displaystyle {B \over A }<{ {\beta \over {\alpha }}}.\) \(\square\)

The parametric family (43) always presents a parabolic behaviour near the origin because \(C'(0)=0\) and it is always positive if: \(\displaystyle \quad {{(x ^2+\beta ^2)}^{2} \over {(x ^2+\alpha ^2)}^{2}}>{B\over A}, \quad \forall x \in {\mathbb {R}}.\)

This last inequality is always satisfied if \(\displaystyle {B \over A }<1<{ {\beta \over {\alpha }}}\). At last, the parametric family (43) can assume negative values if: \(\displaystyle \quad {{(x ^2+\beta ^2)}\over {(x ^2+\alpha ^2)}}<{\bigg ({B\over A}\bigg )^{1/2 }}.\) Since the left-hand side decreases to 1 as \(|x|\rightarrow \infty\), this last inequality is satisfied for sufficiently large |x| whenever \(\displaystyle {B\over A}>1\); hence, taking into account the admissibility condition (44), the parametric family (43) assumes negative values if: \(\displaystyle 1<{B \over A }<{\beta \over \alpha }.\)
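The admissibility and sign conditions of Corollary 12 can also be checked numerically; the sketch below uses arbitrary parameter values satisfying (44), with \(1<B/A\) so that negative values occur:

```python
import numpy as np

# Arbitrary parameters with beta > alpha and B/A < beta/alpha (condition (44));
# here 1 < B/A, so the model takes negative values for large |x|.
A, B, alpha, beta = 1.0, 1.5, 1.0, 2.0
assert beta > alpha and B / A < beta / alpha and 1 < B / A

def C(x):
    return A / (x**2 + alpha**2) ** 2 - B / (x**2 + beta**2) ** 2

def spectral_density(w):
    # the difference (45) of the two exponential densities
    return (A / (8 * np.pi * alpha) * np.exp(-alpha * np.abs(w))
            - B / (8 * np.pi * beta) * np.exp(-beta * np.abs(w)))

w = np.linspace(0.0, 50.0, 20001)
assert np.all(spectral_density(w) >= 0)

x = np.linspace(0.0, 20.0, 2001)
assert C(0.0) > 0          # positive at the origin
assert C(x).min() < 0      # negative values in a subset of the domain
```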

All the results of this section are summarized in Table 3.

Table 3 Summary of the results in Sect. 4.3, where the Euclidean space is \({\mathbb {R}}^3\)

5 Interpretation and representation of the results

In this section the previous results will be properly analyzed and a useful representation will be given. According to the results of the Corollaries proved in the previous subsections, a first significant general result is that all the main characteristics exhibited by the families of covariance models are the same regardless of the dimension of the Euclidean space. For example, this is apparent from inequalities (17), (27) and (37), regarding the difference between two exponential models: the order relations among the ratios of the parameters are preserved in \({\mathbb {R}}\), in \({\mathbb {R}}^2\) and in \({\mathbb {R}}^3\), except for the exponential factor related to the dimension of the Euclidean space. For this reason, the results presented hereafter have been given by arbitrarily choosing the three Euclidean spaces \({\mathbb {R}}, {\mathbb {R}}^2\) and \({\mathbb {R}}^3\), respectively.

5.1 Difference of exponential models

On the basis of Corollary 3, the family of covariance functions (16) in \({\mathbb {R}}\) is extremely flexible in describing several behaviours of correlation structures: in fact, according to the values of the parameters, this class can assume a linear behaviour in proximity of the origin if \(\displaystyle {B\over A}<{\beta \over \alpha }<1\) or \(\displaystyle {B\over A}<{\alpha \over \beta }<1\), as well as a parabolic behaviour in proximity of the origin if \(\displaystyle {B\over A}={\alpha \over \beta }<1\); moreover, the same class is always positive if \(\displaystyle {B\over A}\le {\alpha \over \beta }<1\); on the other hand, it assumes negative values if \(\displaystyle {B\over A}<{\beta \over \alpha }<1.\) In this last case the point of intersection \(x_0\) with the x axis is given hereafter:

$$\begin{aligned} x_0={{1\over {\alpha - \beta } }\log {{ A}\over { B}}}. \end{aligned}$$
(46)

Moreover, the same family attains its minimum, for \(x>0\), at the point \(x_m\) given by the following expression:

$$\begin{aligned} x_m={{1\over {\alpha - \beta } }\log {{\alpha A}\over {\beta B}}}. \end{aligned}$$
(47)

If the family of covariances is always positive, i.e. \(\displaystyle {B\over A}\le {\alpha \over \beta }<1\), there exists an inflection point \(\displaystyle x_F= {1\over {\beta -\alpha } } \ln {{\beta ^2 B}\over {\alpha ^2 A}}\) if \(\displaystyle {{\beta ^2 B}\over {\alpha ^2 A}}>1\); hence, from the sign of the second derivative, if \(\displaystyle 1<\sqrt{A\over B}<{\beta \over \alpha }<{A\over B}\) the concavity is downwards for \(0<x<x_F\), whereas if \(\displaystyle 1<{\beta \over \alpha }<\sqrt{A\over B}\) the concavity is always upwards and, as a consequence, the inflection point does not exist.

In particular, if \(\displaystyle {B\over A}= {\alpha \over \beta }<1\), the family of covariances presents a parabolic behaviour in proximity of the origin; in this case the inflection point is \(\displaystyle x_F= {1\over {\beta -\alpha } } \ln {{\beta }\over {\alpha }}\); as a consequence, the practical range (Journel and Huijbregts 1981; Chilès and Delfiner 1999) decreases as \(\alpha \rightarrow \beta\). The practical range is usually defined in terms of the variogram; for second order stationary random fields it can equivalently be defined through the covariance function as the value \(x_R\) such that \(C(x_R)=0.05\, C(0)\).
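The characteristic points (46)-(47) and the practical range can be located numerically; the sketch below (illustrative parameters with \(B/A<\beta /\alpha <1\), assuming NumPy and SciPy) brackets \(x_R\) on \([0,x_0]\), where C decreases from \(C(0)\) and crosses \(0.05\,C(0)\):

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative parameters with B/A < beta/alpha < 1 and C(0) = A - B = 1
A, B, alpha, beta = 2.0, 1.0, 2.0, 1.5
assert B / A < beta / alpha < 1

def C(x):
    return A * np.exp(-alpha * x) - B * np.exp(-beta * x)

x0 = np.log(A / B) / (alpha - beta)                   # zero crossing, eq. (46)
xm = np.log(alpha * A / (beta * B)) / (alpha - beta)  # minimum, eq. (47)

# practical range: C(x_R) = 0.05*C(0); C decreases from C(0) and
# vanishes at x0, so the root is bracketed on [0, x0]
xR = brentq(lambda x: C(x) - 0.05 * C(0), 0.0, x0)
print(f"x0 = {x0:.3f}, xm = {xm:.3f}, C(xm) = {C(xm):.4f}, xR = {xR:.3f}")
```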

Figure 2a shows the behaviour of the difference between two exponential covariances in \({\mathbb {R}}\) when \(\displaystyle {B\over A}<{\beta \over \alpha }<1\), obtained by fixing \(C(0)=1\) with \(A=2, B=1, \alpha =2\) and by varying the parameter \(\beta\) so that \(\displaystyle 1<{\alpha \over \beta }<2\). According to (47), \(\displaystyle x_m= {1\over {\alpha -\beta } } \ln {2\alpha \over \beta }\) and \(\displaystyle {C(x_m)={{{1\over k} -1}\over {(2k)^{1\over {k-1}}}}}\), with \(\displaystyle k={\alpha \over \beta }\). Hence, as \(k\rightarrow 2\), the value of \(C(x_m)\) becomes more and more negative. Similarly, the intersection point \(\displaystyle x_0={\ln 2\over {\alpha -\beta } }\) with the x axis (\(x>0\)) increases as \(\beta \rightarrow \alpha .\)

Figure 2b shows the behaviour of the difference between two exponential covariances in \({\mathbb {R}}\) when \(\displaystyle {B\over A}<{\beta \over \alpha }<1\), obtained by fixing \(\alpha =2, \beta =1\) (so that \(\displaystyle {\alpha \over \beta }=2\)), by fixing \(C(0)=1\) and by varying A and B such that \(A-B=1\). According to (47), \(\displaystyle x_m= \ln {2A\over B}\), with \(\displaystyle C(x_m)={{ -B^2}\over {4A}}.\) Hence, as B increases the value of \(C(x_m)\) becomes more and more negative. Similarly, the intersection point \(\displaystyle x_0={\ln {A\over B}}\) with the x axis (\(x>0\)) increases as B decreases.

Fig. 2

Difference between exponential covariances when \(\displaystyle {B\over A}<{\beta \over \alpha }<1\): (a) by fixing \(C(0)=1, A=2, B=1, \alpha =2\), for different values of \(\beta\); (b) by fixing \(C(0)=1, \alpha =2, \beta =1,\) for different values of A and B, such that \(A-B=1\)

Figure 3a shows the behaviour of the difference between two exponential covariances in \({\mathbb {R}}\) when \(\displaystyle {B\over A}={\alpha \over \beta }<1\), obtained by fixing \(A-B=1\). As pointed out, for these values of the parameters this class of covariances presents a parabolic behaviour in proximity of the origin and is always positive. As the ratio \(\displaystyle {\beta \over \alpha }\) increases, the practical range increases.

Figure 3b shows the behaviour of the difference between two exponential covariances in \({\mathbb {R}}\) when \(\displaystyle 1<{\beta \over \alpha }<{A\over B}\), obtained by fixing \(C(0)=1\) with \(A=2, B=1\) and by varying the parameters \(\alpha\) and \(\beta\) such that \(\displaystyle 1<{\beta \over \alpha }<2\). As pointed out, in this case the family of covariances is always positive. It is easy to show that the practical range increases as \(\displaystyle {\beta \over \alpha }\) approaches 2.

Note that the two solid lines in Fig. 3b satisfy the condition \(\displaystyle 1<{\beta \over \alpha }<\sqrt{A\over B}\), hence the concavity is always upwards and the inflection point does not exist; on the other hand, the dashed and dash-dotted lines satisfy the condition \(\displaystyle \sqrt{A\over B}<{\beta \over \alpha }<{A\over B}\), hence the concavity is downwards for \(0<x<x_F\): these four lines present a linear behaviour near the origin. From the same figure note that the dotted line corresponds to the limit case \(\alpha A=\beta B\), for which the covariance presents a parabolic behaviour near the origin.

Fig. 3

Difference between two exponential covariances: (a) when \(\displaystyle {B\over A}={\alpha \over \beta }<1\), by fixing \(A-B=1\); (b) when \(\displaystyle 1<{\beta \over \alpha }<{A\over B}\) by fixing \(C(0)=1, A=2, B=1\)

5.2 Difference of Gaussian models in \({\mathbb {R}}^2\)

On the basis of Corollary 8, the family of covariance functions (29) in \({\mathbb {R}}^2\) is extremely flexible in describing some peculiar behaviours of correlation structures: in fact, for admissible values of the parameters, i.e. \(\displaystyle 1<{\alpha \over \beta }<{A\over B}\), this class always assumes a parabolic behaviour in proximity of the origin; moreover, the same class is always negative in a subset of its domain. There always exists a point of intersection \(x_0>0\) with the x axis, given hereafter:

$$\begin{aligned} x_0={\sqrt{{1\over {\alpha - \beta } }\log {{ A}\over { B}}}}. \end{aligned}$$
(48)

Moreover, the same family attains its minimum, for \(x>0\), at the point \(x_m\) given by the following expression:

$$\begin{aligned} x_m={\sqrt{{1\over {\alpha - \beta } }\log {{\alpha A}\over {\beta B}}}}. \end{aligned}$$
(49)

Figure 4a shows the behaviour of the difference between two gaussian covariances in \({\mathbb {R}}^2\), obtained by fixing \(\alpha =7, \beta =6\) and \(C(0)=1\) with \(\displaystyle {7\over 6}<{A\over B}\), and by varying the parameters A and B such that \(A-B=1\). According to (49), the minimum of C is attained at \(\displaystyle x_m= \sqrt{ \ln {{7A}\over {6 B}}}\), with \(\displaystyle C(x_m)=-{B\over 7}\bigg ({{6B}\over {7B+7}}\bigg )^6\). Hence, as B increases the value of \(C(x_m)\) becomes more and more negative. Similarly, the intersection point \(\displaystyle x_0=\sqrt{ \ln {{B+1}\over B}}\) with the x axis (\(x>0\)) increases as B decreases.
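These quantities can be reproduced with a few lines of code; the sketch below mirrors the illustrative choice of Fig. 4a (\(\alpha =7, \beta =6, A-B=1\)), with an arbitrary value of B:

```python
import numpy as np

# Parameters as in Fig. 4a: alpha = 7, beta = 6, A - B = 1, so C(0) = 1;
# the value of B is an arbitrary example.
alpha, beta, B = 7.0, 6.0, 2.0
A = B + 1.0
assert 1 < alpha / beta < A / B

def C(x):
    return A * np.exp(-alpha * x**2) - B * np.exp(-beta * x**2)

x0 = np.sqrt(np.log(A / B) / (alpha - beta))                    # eq. (48)
xm = np.sqrt(np.log(alpha * A / (beta * B)) / (alpha - beta))   # eq. (49)
assert abs(C(x0)) < 1e-9 and C(xm) < 0
print(f"x0 = {x0:.3f}, xm = {xm:.3f}, C(xm) = {C(xm):.4f}")
```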

5.3 Difference of rational models in \({\mathbb {R}}^3\)

On the basis of Corollary 12, the family of covariance functions (43) in \({\mathbb {R}}^3\) is extremely flexible in describing some peculiar behaviours of correlation structures: in fact, for admissible values of the parameters, i.e. \(\displaystyle \beta >\alpha\) and \(\displaystyle {B \over A }<{ {\beta \over {\alpha }}}\), this class always assumes a parabolic behaviour in proximity of the origin; moreover, it is always positive if \(\displaystyle {B \over A }<1<{ {\beta \over {\alpha }}}\) and assumes negative values if \(\displaystyle 1< {B \over A }<{\beta \over \alpha }.\)

If the family of covariances is always positive, i.e. \(\displaystyle {B \over A }<1<{ {\beta \over {\alpha }}}\), then, for any fixed value of x, the following inequality: \(\displaystyle {A_1 \over {(x ^2+\alpha _1 ^2)}^{2}}- {B_1 \over {(x ^2+\beta _1^2)}^{2}}> {A_2 \over {(x ^2+\alpha _2^2)}^{2}}- {B_2 \over {(x ^2+\beta _2^2)}^{2}},\) is verified if: \(A_1>A_2, \quad B_1<B_2, \quad \alpha _1^2<\alpha _2^2,\quad \beta _1^2>\beta _2^2;\) these conditions imply, in particular: \(\displaystyle {A_1 \over \alpha _1^2}>{A_2 \over \alpha _2^2},\quad {B_1 \over \beta _1^2}<{B_2 \over \beta _2^2}.\)

Figure 4b shows the behaviour of the difference between two rational covariances in \({\mathbb {R}}^3\) when \(\displaystyle {B \over A }<1<{ {\beta \over {\alpha }}}\), obtained by fixing \(C(0)=1\), i.e. \(\displaystyle {{A\over \alpha ^4}- {B\over \beta ^4}}=1\). From the same figure it turns out that the practical range increases by enhancing the value of \(\displaystyle {A\over \alpha ^2}\) together with reducing the value of \(\displaystyle {B\over \beta ^2}.\)

Fig. 4

(a) Difference between two gaussian covariances by fixing \(\alpha =7, \beta =6\) with \(\displaystyle {7\over 6}<{A\over B}\), by varying the values of the parameters A and B, such that \(A-B=1\); (b) difference between two rational covariances when \(\displaystyle {B \over A }<1<{ {\beta \over {\alpha }}}\) by fixing \(C(0)=1\)

5.4 Some relevant remarks

Taking into account all the previous results, the following relevant aspects can be derived.

  • One of the most relevant consequences of the results regarding the difference between parametric covariances is the extremely wide class of scenarios which these peculiar classes of covariances are able to describe. These aspects have been properly detailed considering the differences between exponential, gaussian and rational covariance functions, respectively. This analysis has been carried out in \({\mathbb {R}}, {\mathbb {R}}^2\) and \({\mathbb {R}}^3\), which represent the usual domains encountered in the applications.

  • In particular, the difference between two exponential covariances generates parametric families capable of describing several behaviours: models which are always positive, as well as models characterized by negative values, in addition to models which present a linear or a parabolic behaviour in proximity of the origin. Moreover, according to the values of the parameters on which this family depends, there could be an inflection point such that the concavity is downwards in proximity of the origin: this last behaviour is atypical for covariance functions characterized by a linear behaviour near the origin. The only models which cannot be described by this parametric class are covariance functions which present a parabolic behaviour near the origin and, at the same time, assume negative values.

  • According to this last aspect, the models constructed through the difference between gaussian covariances present a parabolic behaviour near the origin and, at the same time, assume negative values, hence they are able to overcome the above limitation. Moreover, the models arising from the difference between rational covariances always present a parabolic behaviour near the origin and, according to the values of the parameters on which they depend, are always positive in the whole domain or can assume negative values in a subset of their field of definition.

  • As already pointed out, the traditional classes of covariances, such as the Whittle-Matern class and the several families constructed by applying the classical properties, are characterized by several restrictions which can be overcome by the new classes of covariances constructed through the difference between some simple covariance functions. These last parametric families are characterized by very simple expressions and can be adapted to most of the case studies: indeed, they are also extremely simple to handle from a computational point of view.

  • A further relevant aspect regards the main characteristics exhibited by the families of covariance models: these features are the same regardless of the dimension of the Euclidean space. For example, if a class of covariances arising from the difference between two exponential models presents a linear behaviour near the origin in \({\mathbb {R}}\) for certain values of the parameters, the same linear behaviour is preserved by the same class in \({\mathbb {R}}^2\) and in \({\mathbb {R}}^3\), because the order relations among the ratios of the parameters are retained, except for the exponential factor related to the dimension of the Euclidean space.

  • Under the assumption of second order stationarity, all the previous results can be given in terms of the variogram function.

  • The special classes of covariance functions exhibited in this paper can enrich the classes of spatio-temporal models usually utilized in the applications. Indeed, most of these families of spatio-temporal covariance models are positive (De Iaco et al. 2002a; Cressie and Huang 1999; De Iaco et al. 2002b; Gneiting 2002), hence they cannot model negative correlation structures. Moreover, the same classes are not flexible enough to describe different behaviours by properly modifying the values of their parameters.

    However, it is relevant to underline that space-time covariances have been constructed in two space dimensions plus time based on the three-dimensional Spartan covariance family and a composite space-time metric (Varouchakis and Hristopulos 2019). These four-parameter covariance models are flexible, allow for negative values, and they have been shown to perform quite competitively in a hydrological application.

  • Regarding parameter estimation and modelling problems, as a starting point it is necessary to compute the sample covariance function \({\widehat{C}}\) for different lags \(h_i, i=1,\ldots ,n_l\), where \(n_l\) is the number of lags. Hence, a suitable class of covariance functions \(C(\cdot ,\varvec{\Theta })\), which depends on a vector of parameters \(\varvec{\Theta }\), must be fitted to the empirical covariance \({\widehat{C}}\), as usually done. In particular, the vector of parameters \(\varvec{\Theta }\) can be estimated through the non-linear weighted least squares technique (Cressie and Wikle 2011), by minimizing the following function:

    $$\begin{aligned} \Psi (\varvec{\Theta })=\displaystyle \sum _{i=1}^{n_l}\big [{{\widehat{C}}}(h_i)-C(h_i, \varvec{\Theta })\big ]^2 w_i, \end{aligned}$$

    where \(w_i\) represents the weight of the i-th lag; these weights are reasonably assumed to be equal to the number of pairs related to the same lag (a minimal numerical sketch of this fitting step is given after this list). Note also that the choice of an appropriate model can be supported by analyzing the main properties (such as the behaviour near the origin) of the sample covariance (De Iaco and Posa 2013; De Iaco et al. 2021) and some statistical tests can be used for this purpose (Cappello et al. 2018, 2020). It is relevant to point out that all the classes of covariance models previously introduced are strictly positive definite, because they have all been constructed through a spectral density function (De Iaco and Posa 2018; De Iaco et al. 2011): this last property is particularly useful for interpolation problems, because it guarantees the invertibility of the kriging matrix.
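As anticipated in the last item, the following is a minimal numerical sketch of the fitting step, assuming NumPy/SciPy and synthetic data; the exponential-difference family of Sect. 5.1 is an arbitrary illustrative choice, and in practice the admissibility conditions of Sect. 4 should also be verified on the fitted parameters:

```python
import numpy as np
from scipy.optimize import least_squares

def model(theta, h):
    # exponential-difference model C(h) = A*exp(-alpha*h) - B*exp(-beta*h)
    A, B, alpha, beta = theta
    return A * np.exp(-alpha * h) - B * np.exp(-beta * h)

def fit(h_lags, C_hat, n_pairs, theta0):
    # least_squares minimizes sum(r_i**2); taking r_i = sqrt(w_i)*(Chat_i - C_i)
    # reproduces the weighted objective Psi(Theta) above, with w_i = n_pairs_i
    w = np.sqrt(n_pairs)
    res = least_squares(lambda t: w * (C_hat - model(t, h_lags)),
                        x0=theta0, bounds=(1e-6, np.inf))
    return res.x

# synthetic example: noisy evaluations of a known model (hypothetical data)
rng = np.random.default_rng(0)
h = np.linspace(0.05, 4.0, 30)
C_hat = model((2.0, 1.0, 2.0, 1.5), h) + rng.normal(scale=0.01, size=h.size)
n_pairs = np.full(h.size, 100.0)           # hypothetical pairs per lag
print(fit(h, C_hat, n_pairs, theta0=(1.5, 0.8, 1.5, 1.0)))
```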

6 Conclusions

In this paper, after summarizing some characteristics of continuous covariance functions and starting from a recent result concerning the difference between isotropic covariance functions, special classes of isotropic covariance models have been obtained: these new families of covariances present flexible and interesting features with respect to most of the classical families of isotropic covariance models. Indeed, as underlined throughout the paper, the Whittle-Matern family, as well as the classes of models obtained by applying the usual properties of covariance functions, cannot describe peculiar correlation structures, such as covariance models which present negative values or, more generally, different behaviours obtainable by modifying the parameters of a parametric family. Our analysis has been devoted to isotropic covariances because they can be considered the building blocks for constructing anisotropic and non-stationary models; moreover, these families of covariances play an important role in many applied areas. From a practical point of view, these new classes of isotropic covariance models are characterized by an extremely simple formalism and can be easily adapted to several case studies.