In this chapter we bring together background material on several topics that will be important in the sequel.

2.1 Basic Topology

In this section we review some basic concepts from topology. For more details see, e.g., [13] or Chapter 7 in [8]. We begin by defining a topological space.

Definition 2.1.

Let \(\mathbb{E}\) be a set. If \(\mathcal{T}\) is a collection of subsets of \(\mathbb{E}\) such that

  1.

    \(\emptyset, \mathbb{E} \in \mathcal{T}\),

  2.

    \(\mathcal{T}\) is closed under finite intersections, and

  3.

    \(\mathcal{T}\) is closed under arbitrary unions,

    then \(\mathcal{T}\) is called a topology, \((\mathbb{E},\mathcal{T} )\) is called a topological space, and the sets in \(\mathcal{T}\) are called open sets. The complement of an open set is called a closed set. If, in addition, for any \(a,b \in \mathbb{E}\) with \(a\neq b\) there are \(A,B \in \mathcal{T}\) with a ∈ A, b ∈ B, and \(A \cap B =\emptyset\), then we say that the space is Hausdorff.

For any topological space \((\mathbb{E},\mathcal{T} )\) the class of Borel sets is the \(\sigma\)-algebra generated by \(\mathcal{T}\) and is denoted by \(\mathfrak{B}(\mathbb{E},\mathcal{T} )\). Thus \(\mathfrak{B}(\mathbb{E},\mathcal{T} ) =\sigma (\mathcal{T} )\). When the collection \(\mathcal{T}\) is clear from context we sometimes write \(\mathfrak{B}(\mathbb{E}) = \mathfrak{B}(\mathbb{E},\mathcal{T} )\). In particular, when working with \(\mathbb{R}^{d}\) we generally assume that \(\mathcal{T}\) is the collection of usual open sets. In this case \(\mathfrak{B}(\mathbb{R}^{d},\mathcal{T} )\) is the usual class of Borel sets, which we denote by \(\mathfrak{B}(\mathbb{R}^{d})\). Any measure on the space \((\mathbb{E},\mathfrak{B}(\mathbb{E}))\) is called a Borel measure on \((\mathbb{E},\mathcal{T} )\) or just a Borel measure when the space is clear from context.

If \(A \subset \mathbb{E}\), then the interior of A (denoted \(A^{\circ }\)) is the union of all open sets contained in A, and the closure of A (denoted \(\bar{A}\)) is the intersection of all closed sets containing A. Note that \(A^{\circ }\subset A \subset \bar{ A}\). We write \(\partial A =\bar{ A}\setminus A^{\circ }\) to denote the boundary of A. We conclude this section by recalling the definition of a compact set.

Definition 2.2.

Let \((\mathbb{E},\mathcal{T} )\) be a Hausdorff space and let \(A \subset \mathbb{E}\). If for any collection \(\mathcal{T}_{0} \subset \mathcal{T}\) with \(A \subset \bigcup \mathcal{T}_{0}\) there is a finite subcollection \(\mathcal{T}_{1} \subset \mathcal{T}_{0}\) with \(A \subset \bigcup \mathcal{T}_{1}\), then A is called a compact set. If A is such that its closure is compact, then A is called relatively compact.

2.2 Infinitely Divisible Distributions and Lévy Processes

In this section we review some important results about infinitely divisible distributions and their associated Lévy processes. Comprehensive references are [69] and [21]. A probability measure μ on \(\mathbb{R}^{d}\) is called infinitely divisible if for any positive integer n there exists a probability measure \(\mu _{n}\) on \(\mathbb{R}^{d}\) such that if \(X \sim \mu\) and \(Y _{1}^{(n)},\ldots,Y _{n}^{(n)}\stackrel{\mathrm{iid}}{\sim }\mu _{n}\) then

$$\displaystyle{X\stackrel{d}{=}\sum _{i=1}^{n}Y _{ i}^{(n)}.}$$
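
To make the definition concrete, the Poisson distribution is infinitely divisible: one can take \(\mu _{n}\) to be Poisson with mean λ∕n, and the defining identity then holds exactly at the level of characteristic functions. The following numerical sketch (Python with NumPy; the value of λ and the grid of z are illustrative choices, not part of the text) verifies this.

```python
import numpy as np

# Poisson(lam) is infinitely divisible: with mu_n = Poisson(lam / n), the sum
# of n iid mu_n variables has characteristic function phi_n**n, which equals
# the characteristic function of Poisson(lam) exactly.
lam, n = 2.5, 7
z = np.linspace(-10.0, 10.0, 201)

phi = np.exp(lam * (np.exp(1j * z) - 1))          # char. function of Poisson(lam)
phi_n = np.exp((lam / n) * (np.exp(1j * z) - 1))  # char. function of Poisson(lam/n)

assert np.max(np.abs(phi - phi_n**n)) < 1e-12
```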

We denote the class of infinitely divisible distributions by ID. The characteristic function of an infinitely divisible distribution μ on \(\mathbb{R}^{d}\) is given by \(\hat{\mu }(z) =\exp \{ C_{\mu }(z)\}\) where

$$\displaystyle\begin{array}{rcl} C_{\mu }(z) = -\frac{1} {2}\langle z,Az\rangle + i\langle b,z\rangle +\int _{\mathbb{R}^{d}}\left (e^{i\langle z,x\rangle } - 1 - i \frac{\langle z,x\rangle } {1 + \vert x\vert ^{2}}\right )M(\mathrm{d}x),& &{}\end{array}$$
(2.1)

A is a symmetric nonnegative-definite d × d matrix, \(b \in \mathbb{R}^{d}\), and M satisfies

$$\displaystyle\begin{array}{rcl} M(\{0\}) = 0\mathrm{\ and\ }\int _{\mathbb{R}^{d}}(\vert x\vert ^{2} \wedge 1)M(\mathrm{d}x) < \infty.& &{}\end{array}$$
(2.2)

We call \(C_{\mu }\) the cumulant generating function of μ, A the Gaussian part, b the shift, and M the Lévy measure. The measure μ is uniquely identified by the Lévy triplet (A, M, b) and we will write

$$\displaystyle{\mu = ID(A,M,b).}$$
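
As a sketch of how the triplet determines μ through (2.1), consider the triplet \((0,\lambda \delta _{1},\lambda /2)\), where \(\delta _{1}\) is a unit point mass at 1 (an illustrative choice, not from the text). Since the shift b = λ∕2 exactly cancels the compensator term in the integral, the resulting cumulant function is that of a Poisson distribution with mean λ. The following Python/NumPy check confirms this numerically.

```python
import numpy as np

# Evaluate the cumulant function C_mu(z) of (2.1) for the triplet
# (A, M, b) = (0, lam * delta_{x0}, lam * x0 / (1 + x0**2)) with x0 = 1.
# The shift b cancels the compensator term, so mu is exactly Poisson(lam).
lam, x0 = 3.0, 1.0
b = lam * x0 / (1 + x0**2)
z = np.linspace(-8.0, 8.0, 161)

# The integral against lam * delta_{x0} reduces to a single term.
C = 1j * b * z + lam * (np.exp(1j * z * x0) - 1 - 1j * z * x0 / (1 + x0**2))

poisson_exponent = lam * (np.exp(1j * z) - 1)     # known Poisson cumulant function
assert np.max(np.abs(C - poisson_exponent)) < 1e-12
```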

The class of infinitely divisible distributions is intimately related with the class of Lévy processes. These processes are defined as follows.

Definition 2.3.

A stochastic process {X t : t ≥ 0} on \((\varOmega,\mathcal{F},P)\) with values in \(\mathbb{R}^{d}\) is called a Lévy process if \(X_{0} = 0\) a.s. and the following conditions are satisfied:

  1.

    (Independent increments) For any n ≥ 1 and \(0 \leq t_{0} < t_{1} < \cdots < t_{n} < \infty \), the random variables \(X_{t_{0}},\ X_{t_{1}} - X_{t_{0}},\ldots,X_{t_{n}} - X_{t_{n-1}}\) are independent.

  2.

    (Stationary increments) \(X_{s+t} - X_{s}\stackrel{d}{=}X_{t}\) for any s, t ≥ 0.

  3.

    (Stochastic continuity) For every t ≥ 0 and ε > 0, \(\lim _{s\rightarrow t}P\left (\vert X_{s} - X_{t}\vert >\epsilon \right ) = 0\).

  4.

    (Càdlàg paths) There is \(\varOmega _{0} \in \mathcal{F}\) with \(P(\varOmega _{0}) = 1\) such that for every \(\omega \in \varOmega _{0}\), \(X_{t}(\omega )\) is right-continuous in t ≥ 0 and has left limits in t > 0.
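
The defining properties can be checked empirically for the simplest nontrivial example. The sketch below (Python/NumPy; sample sizes and tolerances are loose illustrative choices) simulates many paths of a standard Brownian motion, a Lévy process with Gaussian part A = 1 and no jumps, and tests stationarity and independence of increments on [0, 1].

```python
import numpy as np

# Simulate 50,000 Brownian paths on [0, 1] with step size dt and check
# properties 1-2 of the definition empirically.
rng = np.random.default_rng(0)
n_paths, n_steps = 50_000, 100
dt = 1.0 / n_steps

steps = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
X = np.cumsum(steps, axis=1)            # X[:, k] approximates X_{(k+1) dt}

inc1 = X[:, 49]                         # X_{1/2} - X_0
inc2 = X[:, 99] - X[:, 49]              # X_1 - X_{1/2}

# Stationary increments: both increments have the N(0, 1/2) distribution.
assert abs(inc1.var() - 0.5) < 0.02 and abs(inc2.var() - 0.5) < 0.02
# Independent increments: the empirical correlation is near zero.
corr = np.corrcoef(inc1, inc2)[0, 1]
assert abs(corr) < 0.03
```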

Since a Lévy process {X t : t ≥ 0} has the càdlàg paths property it follows that, with probability 1, \(\lim _{s\downarrow t}X_{s} = X_{t}\) and \(\lim _{s\uparrow t}X_{s}\) exists. We define \(X_{t-}:=\lim _{s\uparrow t}X_{s}\) and we write \(\varDelta X_{t} = X_{t} - X_{t-}\) to denote the jump at time t. The connection between Lévy processes and infinitely divisible distributions is highlighted by the following result, which is given in Theorem 7.10 of [69].

Proposition 2.4.

  1.

    If μ is an infinitely divisible distribution on \(\mathbb{R}^{d}\) , then there is a Lévy process {X t : t ≥ 0} with \(X_{1} \sim \mu\) .

  2.

    Conversely, if {X t : t ≥ 0} is a Lévy process on \(\mathbb{R}^{d}\) , then for any t ≥ 0 the distribution \(\mu _{t}\) of \(X_{t}\) is infinitely divisible and \(\hat{\mu }_{t}(z) = [\hat{\mu }_{1}(z)]^{t}\) .

  3.

    If {X t : t ≥ 0} and {X t ′: t ≥ 0} are Lévy processes on \(\mathbb{R}^{d}\) with \(X_{1}\stackrel{d}{=}X_{1}'\) , then {X t : t ≥ 0} and {X t ′: t ≥ 0} have the same finite-dimensional distributions.

In the context of Lévy processes, the Lévy measure has a simple interpretation. Specifically, if {X t : t ≥ 0} is a Lévy process with \(X_{1} \sim ID(A,M,b)\), then

$$\displaystyle\begin{array}{rcl} M(B) =\mathrm{ E}\left [\#\{t \in [0,1]:\varDelta X_{t}\neq 0,\varDelta X_{t} \in B\}\right ],\quad B \in \mathfrak{B}(\mathbb{R}^{d}).& &{}\end{array}$$
(2.3)

In other words, M(B) is the expected number of times t ∈ [0, 1] at which the Lévy process has a jump (i.e., \(X_{t} - X_{t-}\neq 0\)) whose value lies in the set B. See Sections 3.3–3.4 in [21] for details.
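
Equation (2.3) is easy to check by simulation for a compound Poisson process. In the sketch below (Python/NumPy; the rate λ, the Exp(1) jump-size law, and the set B = (1, ∞) are illustrative choices) the Lévy measure is \(M(\mathrm{d}x) =\lambda F(\mathrm{d}x)\) with F the Exp(1) law, so \(M(B) =\lambda e^{-1}\).

```python
import numpy as np

# Compound Poisson process on [0, 1]: jumps arrive at rate lam with iid Exp(1)
# sizes. By (2.3), the expected number of jumps landing in B = (1, inf) per
# unit time is M(B) = lam * P(Exp(1) > 1) = lam * exp(-1).
rng = np.random.default_rng(0)
lam = 3.0
n_paths = 20_000

counts = np.empty(n_paths)
for i in range(n_paths):
    n_jumps = rng.poisson(lam)                  # number of jumps on [0, 1]
    sizes = rng.exponential(1.0, size=n_jumps)  # jump magnitudes
    counts[i] = np.count_nonzero(sizes > 1.0)   # jumps with value in B

target = lam * np.exp(-1.0)                     # M(B) = lam * e^{-1}
assert abs(counts.mean() - target) < 0.05
```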

An important subclass of infinitely divisible distributions is the class of stable distributions. A probability measure μ on \(\mathbb{R}^{d}\) is called stable if for any n and any \(X_{1},\ldots,X_{n}\stackrel{\mathrm{iid}}{\sim }\mu\) there are \(a_{n} > 0\) and \(b_{n} \in \mathbb{R}^{d}\) such that

$$\displaystyle\begin{array}{rcl} X_{1}\stackrel{d}{=}a_{n}\sum _{k=1}^{n}X_{ k} - b_{n}.& &{}\end{array}$$
(2.4)

It turns out that, necessarily, \(a_{n} = n^{-1/\alpha }\) for some α ∈ (0, 2]. We call this parameter the index of stability and we refer to any stable distribution with index α as \(\boldsymbol{\alpha }\)-stable. Comprehensive references are [68] and [78].
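
For instance, the standard Cauchy distribution is stable with α = 1: identity (2.4) holds with \(a_{n} = n^{-1}\) and \(b_{n} = 0\), so the sample mean of n iid standard Cauchy variables is again standard Cauchy. The sketch below (Python/NumPy; sample sizes are illustrative) compares empirical quantiles of the sample mean with the known standard Cauchy quantiles (median 0, quartiles ±1).

```python
import numpy as np

# Stability of the Cauchy distribution: the mean of n iid standard Cauchy
# variables has the same standard Cauchy distribution (alpha = 1 in (2.4)).
rng = np.random.default_rng(0)
n, n_samples = 50, 200_000

S = rng.standard_cauchy(size=(n_samples, n)).mean(axis=1)  # n^{-1} sum of X_k
q25, q50, q75 = np.quantile(S, [0.25, 0.5, 0.75])

# Standard Cauchy has median 0 and quartiles -1 and +1.
assert abs(q50) < 0.02 and abs(q25 + 1.0) < 0.05 and abs(q75 - 1.0) < 0.05
```

Note that the sample mean does not concentrate as n grows, in contrast with the law of large numbers for finite-mean distributions.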

Fix α ∈ (0, 2] and let μ be an α-stable distribution. If α = 2, then μ = ID(A, 0, b) is a multivariate normal distribution, which we denote by μ = N(b, A). If α ∈ (0, 2), then μ = ID(0, L, b) where

$$\displaystyle\begin{array}{rcl} L(A) =\int _{\mathbb{S}^{d-1}}\int _{0}^{\infty }1_{ A}(ur)r^{-1-\alpha }\mathrm{d}r\sigma (\mathrm{d}u),\qquad A \in \mathfrak{B}(\mathbb{R}^{d}),& & {}\\ \end{array}$$

for some finite Borel measure \(\sigma\) on \(\mathbb{S}^{d-1}\). We call \(\sigma\) the spectral measure of the distribution and we write \(\mu = S_{\alpha }(\sigma,b)\). All α-stable distributions with α ∈ (0, 2) and \(\sigma \neq 0\) have an infinite variance and are sometimes called infinite variance stable distributions.

One reason for the importance of stable distributions is that they are the only possible limits of scaled and shifted sums of iid random variables. Specifically, let \(X_{1},X_{2},\ldots \stackrel{\mathrm{iid}}{\sim }\mu\) for some probability measure μ and define \(S_{n} =\sum _{ i=1}^{n}X_{i}\). If there exists a probability measure ν and sequences \(a_{n} > 0\) and \(b_{n} \in \mathbb{R}^{d}\) such that for \(Y \sim \nu\)

$$\displaystyle\begin{array}{rcl} \left (a_{n}S_{n} - b_{n}\right )\stackrel{d}{\rightarrow }Y,& &{}\end{array}$$
(2.5)

then ν is a stable distribution. When this holds we say that μ (or equivalently X 1) belongs to the domain of attraction of ν (or equivalently of Y ). When ν is not degenerate its domain of attraction is characterized in [23] for the case d = 1 and in [67] and [54] for the case d ≥ 2. We now give a related fact, which further explains the importance of stable distributions.

Lemma 2.5.

Fix \(c \in \{ 0,\infty \}\) . Let {X t : t ≥ 0} be a Lévy process and let Y be a random variable whose distribution is not concentrated at a point. If there exist functions \(a_{t} > 0\) and \(b_{t} \in \mathbb{R}^{d}\) with

$$\displaystyle\begin{array}{rcl} \left (a_{t}X_{t} - b_{t}\right )\stackrel{d}{\rightarrow }Y \ \mbox{ as}\ t \rightarrow c& &{}\end{array}$$
(2.6)

then Y has an α-stable distribution for some α ∈ (0,2].

Proof.

Fix \(N \in \mathbb{N}\). Let \(Y ^{(1)},Y ^{(2)},\ldots,Y ^{(N)}\) be iid copies of Y and let \(\{X_{t}^{(n)}: t \geq 0\}\), \(n = 1,2,\ldots,N\), be independent Lévy processes with \(X_{1}^{(n)}\stackrel{d}{=}X_{1}\). From (2.6) it follows that

$$\displaystyle{\mathop{\mathrm{d - lim}}\limits _{t\rightarrow c}\left (a_{Nt}X_{Nt} - b_{Nt}\right ) = Y.}$$

The fact that Lévy processes have independent and stationary increments gives

$$\displaystyle\begin{array}{rcl} \mathop{\mathrm{d - lim}}\limits _{t\rightarrow c}\left (a_{t}X_{Nt} - Nb_{t}\right )& =& \mathop{\mathrm{d - lim}}\limits _{t\rightarrow c}\sum _{n=1}^{N}\left [a_{ t}\left (X_{nt} - X_{(n-1)t}\right ) - b_{t}\right ] {}\\ & =& \mathop{\mathrm{d - lim}}\limits _{t\rightarrow c}\sum _{n=1}^{N}\left (a_{ t}X_{t}^{(n)} - b_{ t}\right ) =\sum _{ n=1}^{N}Y ^{(n)}. {}\\ \end{array}$$

Since Y is not concentrated at a point, neither is \(\sum _{n=1}^{N}Y ^{(n)}\), and by the Convergence of Types Theorem (see, e.g., Lemma 13.10 in [69]) there are constants \(c_{N} > 0\) and \(d_{N} \in \mathbb{R}^{d}\) such that

$$\displaystyle{\sum _{n=1}^{N}Y ^{(n)}\stackrel{d}{=}c_{ N}Y - d_{N},}$$

which implies that Y has a stable distribution by (2.4).  □ 

2.3 Regular Variation

Regularly varying functions are functions that have power-like behavior. Comprehensive references are [11, 23, 62], and [63]. For \(c \in \{ 0,\infty \}\) and \(\rho \in \mathbb{R}\), a Borel function \(f: (0,\infty )\mapsto (0,\infty )\) is called regularly varying at c with index \(\boldsymbol{\rho }\) if

$$\displaystyle{\lim _{x\rightarrow c}\frac{f(tx)} {f(x)} = t^{\rho }.}$$

In this case we write \(f \in RV _{\rho }^{c}\). If \(f \in RV _{\rho }^{c}\), then there is an \(L \in RV _{0}^{c}\) such that \(f(x) = x^{\rho }L(x)\). If \(h(x) = f(1/x)\), then

$$\displaystyle\begin{array}{rcl} f \in RV _{\rho }^{c}\mbox{ if and only if }h \in RV _{ -\rho }^{1/c}.& &{}\end{array}$$
(2.7)

If \(f \in RV _{\rho }^{c}\) with ρ > 0 and \(f^{\leftarrow }(x) =\inf \left \{y > 0: f(y) > x\right \}\), then

$$\displaystyle\begin{array}{rcl} f^{\leftarrow }\in RV _{ 1/\rho }^{c}& &{}\end{array}$$
(2.8)

and \(f^{\leftarrow }\) is an asymptotic inverse of f in the sense that

$$\displaystyle\begin{array}{rcl} f(f^{\leftarrow }(x)) \sim f^{\leftarrow }(f(x)) \sim x\ \mbox{ as }\ x \rightarrow c.& & {}\\ \end{array}$$

When \(c = \infty \) this result is given on page 28 of [11]. The case when c = 0 can be shown using an extension of those results and (2.7). We now summarize several important properties of regularly varying functions.
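
Before turning to those properties, the content of (2.8) can be illustrated numerically. Take \(f(y) = y^{2}\log y\), so that \(f \in RV _{2}^{\infty }\); then \(f^{\leftarrow }\in RV _{1/2}^{\infty }\), i.e., \(f^{\leftarrow }(tx)/f^{\leftarrow }(x) \rightarrow t^{1/2}\). The Python sketch below (the function and evaluation points are illustrative choices) computes the generalized inverse by bisection; convergence in x is slow because of the logarithmic factor.

```python
import math

def f(y):
    # f(y) = y^2 * log(y), regularly varying at infinity with index rho = 2
    return y * y * math.log(y)

def f_inv(x):
    # generalized inverse inf{y : f(y) > x} via bisection; f is increasing on (1, inf)
    lo, hi = 1.5, max(2.0, x)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > x:
            hi = mid
        else:
            lo = mid
    return hi

x, t = 1e30, 4.0
ratio = f_inv(t * x) / f_inv(x)
assert abs(ratio - math.sqrt(t)) < 0.05   # f_inv in RV_{1/2}: ratio near t^{1/2} = 2
assert abs(f(f_inv(x)) / x - 1.0) < 1e-6  # f_inv is an asymptotic inverse of f
```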

Proposition 2.6.

Fix \(c \in \{ 0,\infty \}\) and \(\rho \in \mathbb{R}\) . Let \(f,g,h: (0,\infty )\mapsto (0,\infty )\).

  1.

    If \(f \in RV _{\rho }^{c}\) , then

    $$\displaystyle\begin{array}{rcl} \lim _{t\rightarrow c}f(t) = \left \{\begin{array}{lr} 1/c&\mathrm{if}\ \rho < 0\\ c &\mathrm{if }\ \rho > 0\end{array} \right..& & {}\\ \end{array}$$
  2.

    If f is a monotone function and there are sequences of positive numbers \(\lambda _{n}\) and \(b_{n}\) such that \(b_{n} \rightarrow c\) , \(\lim _{n\rightarrow \infty }\lambda _{n}/\lambda _{n+1} = 1\) , and for all x > 0

    $$\displaystyle\begin{array}{rcl} \lim _{n\rightarrow \infty }\lambda _{n}f(b_{n}x) =:\chi (x)& & {}\end{array}$$
    (2.9)

    exists and is positive and finite, then there is a \(\rho \in \mathbb{R}\) such that \(\chi (x)/\chi (1) = x^{\rho }\) and \(f \in RV _{\rho }^{c}\) .

  3.

    Let \(f \in RV _{\rho }^{c}\) and assume that h(x) → c as x → c. If for some k > 0 we have \(g(x) \sim kh(x)\) as \(x \rightarrow c\) , then \(f(g(x)) \sim k^{\rho }f(h(x))\) as \(x \rightarrow c\) .

  4.

    If k > 0, ρ > 0, and \(f,g \in RV _{\rho }^{c}\) , then

    $$\displaystyle{f(t) \sim kg(t)\ \mbox{ as}\ t \rightarrow c}$$

    if and only if

    $$\displaystyle{f^{\leftarrow }(t) \sim k^{-1/\rho }g^{\leftarrow }(t)\mbox{ as }t \rightarrow c.}$$

Proof.

For the case \(c = \infty \) Parts 1–3 are given in Propositions 2.3 and 2.6 in [63]. Extensions to the case c = 0 follow from (2.7). Part 4 is an immediate consequence of Part 3 and the asymptotic uniqueness of asymptotic inverses of regularly varying functions, see Theorem 1.5.12 in [11].  □ 

Another useful result is Karamata’s Theorem, a version of which is as follows.

Theorem 2.7.

Fix \(c \in \{ 0,\infty \}\) and let \(f \in RV _{\rho }^{c}\) for some \(\rho \in \mathbb{R}\) . If ρ ≥ −1 and \(\int _{0}^{x}f(t)\mathrm{d}t < \infty \) for all x > 0, then

$$\displaystyle\begin{array}{rcl} \lim _{x\rightarrow c} \frac{xf(x)} {\int _{0}^{x}f(t)\mathrm{d}t} =\rho +1.& &{}\end{array}$$
(2.10)

If ρ ≤ −1 and \(\int _{x}^{\infty }f(t)\mathrm{d}t < \infty \) for all x > 0, then

$$\displaystyle\begin{array}{rcl} \lim _{x\rightarrow c} \frac{xf(x)} {\int _{x}^{\infty }f(t)\mathrm{d}t} = -\rho - 1.& &{}\end{array}$$
(2.11)

Proof.

For \(c = \infty \) this follows from Theorem 2.1 in [63]. Now assume that c = 0. To verify (2.10) let \(g(x) = x^{-2}f(1/x)\) and note that (2.7) implies that \(g \in RV _{-2-\rho }^{\infty }\). By change of variables we have

$$\displaystyle\begin{array}{rcl} \lim _{x\rightarrow 0} \frac{xf(x)} {\int _{0}^{x}f(t)\mathrm{d}t} =\lim _{x\rightarrow \infty }\frac{x^{-1}f(1/x)} {\int _{0}^{1/x}f(t)\mathrm{d}t} =\lim _{x\rightarrow \infty } \frac{xg(x)} {\int _{x}^{\infty }g(t)\mathrm{d}t} =\rho +1,& & {}\\ \end{array}$$

where the final equality follows by (2.11) for the case \(c = \infty \) and the fact that \(-2-\rho \leq -1\). The proof of (2.11) is similar.  □ 
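
Both limits in Karamata's Theorem are easy to see numerically. In the sketch below (Python/NumPy, with c = ∞ and purely polynomial choices of f, which are illustrative and for which the limits are attained quickly) the head integral is computed on a linear grid and the tail integral on a geometric grid truncated far beyond x.

```python
import numpy as np

def trapezoid(y, t):
    # simple trapezoidal rule, to keep the check self-contained
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

# (2.10) with c = infinity and f(t) = t (rho = 1): ratio tends to rho + 1 = 2.
x = 1.0e4
t = np.linspace(0.0, x, 400_001)
ratio_head = x * x / trapezoid(t, t)
assert abs(ratio_head - 2.0) < 1e-3

# (2.11) with c = infinity and f(t) = t^{-2} (rho = -2): ratio tends to
# -rho - 1 = 1. The tail integral is truncated far out, at 1e8 * x.
s = np.geomspace(x, 1e8 * x, 200_001)
ratio_tail = x * x**-2.0 / trapezoid(s**-2.0, s)
assert abs(ratio_tail - 1.0) < 1e-3
```

With a slowly varying factor such as log t in f, the same limits hold but the convergence is only logarithmic in x.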

We will also work with matrix-valued functions. While regular variation of invertible matrix-valued functions is defined in [5] and [54], we need a different definition to allow for the non-invertible case.

Definition 2.8.

Fix \(c \in \{ 0,\infty \}\), \(\rho \in \mathbb{R}\), and let \(A_{\bullet }: (0,\infty )\mapsto \mathbb{R}^{d\times d}\). If \(\mathrm{tr}A_{\bullet } \in RV _{\rho }^{c}\) and there exists a \(B \in \mathbb{R}^{d\times d}\) with B ≠ 0 and

$$\displaystyle\begin{array}{rcl} \lim _{t\rightarrow c} \frac{A_{t}} {\mathrm{tr}A_{t}} = B& & {}\\ \end{array}$$

we say that \(A_{\bullet }\) is matrix regularly varying at c with index ρ and limiting matrix B. In this case we write \(A_{\bullet } \in MRV _{\rho }^{c}(B)\).

In the above definition, we can allow scaling by a function other than \(\mathrm{tr}A_{\bullet }\). However, this choice is convenient for our purposes. One way to interpret matrix regular variation is in terms of quadratic forms. It is straightforward to show that \(A_{\bullet } \in MRV _{\rho }^{c}(B)\) means that there exists an \(L \in RV _{0}^{c}\) such that for any \(z \in \mathbb{R}^{d}\)

$$\displaystyle\begin{array}{rcl} \langle z,A_{t}z\rangle \sim \langle z,Bz\rangle t^{\rho }L(t)\mbox{ as }t \rightarrow c.& &{}\end{array}$$
(2.12)
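
A small numerical sketch of Definition 2.8 and of (2.12) follows (Python/NumPy; the matrices \(B_{0}\) and C are arbitrary illustrative choices with \(\mathrm{tr}B_{0}\neq 0\)). For \(A_{t} = t^{2}B_{0} + tC\) we have \(\mathrm{tr}A_{\bullet } \in RV _{2}^{\infty }\) and \(A_{t}/\mathrm{tr}A_{t} \rightarrow B_{0}/\mathrm{tr}B_{0}\), so \(A_{\bullet } \in MRV _{2}^{\infty }(B_{0}/\mathrm{tr}B_{0})\).

```python
import numpy as np

# A_t = t^2 * B0 + t * C is matrix regularly varying at infinity with index 2
# and limiting matrix B = B0 / tr(B0); note B is non-invertible here (rank
# considerations play no role in Definition 2.8).
B0 = np.array([[2.0, 1.0], [1.0, 1.0]])
C = np.array([[0.0, 3.0], [-1.0, 5.0]])

def A(t):
    return t**2 * B0 + t * C

t = 1e8
At = A(t)
B = B0 / np.trace(B0)                  # limiting matrix, normalized to trace 1
assert np.max(np.abs(At / np.trace(At) - B)) < 1e-6

# Quadratic-form view (2.12): <z, A_t z> ~ <z, B z> * t^2 * L(t).
z = np.array([1.0, -2.0])
L = np.trace(At) / t**2                # slowly varying factor tr(A_t) / t^2
assert abs(z @ At @ z - (z @ B @ z) * t**2 * L) / abs(z @ At @ z) < 1e-5
```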

We also need to define regular variation for measures. Assume that R is a Borel measure on \(\mathbb{R}^{d}\) with

$$\displaystyle\begin{array}{rcl} R(\vert x\vert >\delta ) < \infty \mbox{ for any }\delta > 0.& &{}\end{array}$$
(2.13)

Note that this condition holds for all probability measures and all Lévy measures.

Definition 2.9.

Fix ρ ≤ 0 and \(c \in \{ 0,\infty \}\). A Borel measure R on \(\mathbb{R}^{d}\) satisfying (2.13) is said to be regularly varying at c with index ρ if there exists a finite, non-zero Borel measure \(\sigma\) on \(\mathbb{S}^{d-1}\) such that for all \(D \in \mathfrak{B}(\mathbb{S}^{d-1})\) with \(\sigma (\partial D) = 0\)

$$\displaystyle\begin{array}{rcl} \lim _{r\rightarrow c}\frac{R\left (\vert x\vert > rt, \frac{x} {\vert x\vert } \in D\right )} {R(\vert x\vert > r)} = t^{\rho } \frac{\sigma (D)} {\sigma (\mathbb{S}^{d-1})}.& &{}\end{array}$$
(2.14)

When this holds we write \(R \in RV _{\rho }^{c}(\sigma )\) and we refer to \(\sigma\) as a limiting measure.

Clearly, the measure \(\sigma\) is unique only up to a multiplicative constant. For \(D \in \mathfrak{B}(\mathbb{S}^{d-1})\) define

$$\displaystyle\begin{array}{rcl} U_{D}(t) = R(\vert x\vert > t,x/\vert x\vert \in D),\qquad t > 0.& &{}\end{array}$$
(2.15)

When \(\sigma (D) > 0\), \(\sigma (\partial D) = 0\), and \(R \in RV _{\rho }^{c}(\sigma )\)

$$\displaystyle\begin{array}{rcl} \lim _{r\rightarrow c}\frac{U_{D}(rt)} {U_{D}(r)} =\lim _{r\rightarrow c} \frac{U_{D}(rt)} {U_{\mathbb{S}^{d-1}}(r)} \frac{U_{\mathbb{S}^{d-1}}(r)} {U_{D}(r)} = t^{\rho } \frac{\sigma (D)} {\sigma (\mathbb{S}^{d-1})} \frac{\sigma (\mathbb{S}^{d-1})} {\sigma (D)} = t^{\rho },& & {}\\ \end{array}$$

and hence

$$\displaystyle\begin{array}{rcl} U_{D} \in RV _{\rho }^{c}.& &{}\end{array}$$
(2.16)

In particular, we have \(U_{\mathbb{S}^{d-1}} \in RV _{\rho }^{c}\). Now take \(L(t) = U_{\mathbb{S}^{d-1}}(t)/\left [t^{\rho }\sigma (\mathbb{S}^{d-1})\right ]\), and note that \(L \in RV _{0}^{c}\). Combining this with (2.14) gives the following.

Lemma 2.10.

\(R \in RV _{\rho }^{c}(\sigma )\) if and only if there is an \(L \in RV _{0}^{c}\) such that for all \(D \in \mathfrak{B}(\mathbb{S}^{d-1})\) with \(\sigma (\partial D) = 0\)

$$\displaystyle\begin{array}{rcl} U_{D}(t) \sim \sigma (D)t^{\rho }L(t)\ \mbox{ as}\ t \rightarrow c.& &{}\end{array}$$
(2.17)

The next result will be fundamental to the discussion in Chapter 5.

Proposition 2.11.

Fix \(c \in \{ 0,\infty \}\) , ρ ≤ 0, let \(\sigma \neq 0\) be a finite Borel measure on \(\mathbb{S}^{d-1}\) , and let R be a Borel measure on \(\mathbb{R}^{d}\) satisfying (2.13).

  1.

    If \(R \in RV _{\rho }^{c}(\sigma )\) and q ≥ 0 with 0 < q + |ρ|, then for any κ > 0 there exists a function \(a_{t} > 0\) with \(\lim _{t\rightarrow c}a_{t} = 1/c\) such that

    $$\displaystyle\begin{array}{rcl} \lim _{t\rightarrow c}ta_{t}^{q}R\left (\vert x\vert > r/a_{ t}, \frac{x} {\vert x\vert } \in D\right ) =\kappa \sigma (D)r^{\rho }& & {}\end{array}$$
    (2.18)

    for all \(r \in (0,\infty )\) and all \(D \in \mathfrak{B}(\mathbb{S}^{d-1})\) with \(\sigma (\partial D) = 0\) .

  2.

    If there exists a function \(a_{t} > 0\) with \(\lim _{t\rightarrow c}a_{t} = 1/c\) such that for all \(r \in (0,\infty )\) and all \(D \in \mathfrak{B}(\mathbb{S}^{d-1})\) with \(\sigma (\partial D) = 0\) (2.18) holds for some q ≥ 0 and some κ > 0, then \(R \in RV _{\rho }^{c}(\sigma )\) .

  3.

    If \(R \in RV _{\rho }^{c}(\sigma )\) and q ≥ 0 with 0 < q + |ρ|, then (2.18) holds for some function \(a_{t} > 0\) with \(\lim _{t\rightarrow c}a_{t} = 1/c\) if and only if \(a_{t} \sim K^{1/(\vert \rho \vert +q)}/V ^{\leftarrow }(t)\) where \(K =\kappa \sigma (\mathbb{S}^{d-1})\) and \(V (t) = t^{q}/R(\vert x\vert > t)\) . Moreover, in this case, \(a_{\bullet } \in RV _{-1/(q+\vert \rho \vert )}^{c}\) .

Proof.

Fix \(D \in \mathfrak{B}(\mathbb{S}^{d-1})\) with \(\sigma (\partial D) = 0\). We begin with the first part. Assume that \(R \in RV _{\rho }^{c}(\sigma )\) and let \(a_{t} \sim K^{1/(\vert \rho \vert +q)}/V ^{\leftarrow }(t)\), where V and K are as in Part 3. Note that \(a_{\bullet } \in RV _{-1/(q+\vert \rho \vert )}^{c}\) and thus that \(\lim _{t\rightarrow c}a_{t} = 1/c\). By Proposition 2.6 we have

$$\displaystyle\begin{array}{rcl} r^{\rho } \frac{\sigma (D)} {\sigma (\mathbb{S}^{d-1})}& =& \lim _{s\rightarrow c}\frac{R\left (\vert x\vert > rs, \frac{x} {\vert x\vert } \in D\right )} {R(\vert x\vert > s)} {}\\ & =& \lim _{t\rightarrow c}\frac{a_{t}^{q}R\left (\vert x\vert > r/a_{t}, \frac{x} {\vert x\vert } \in D\right )} {a_{t}^{q}R(\vert x\vert > 1/a_{t})} {}\\ & =& \lim _{t\rightarrow c}V (1/a_{t})a_{t}^{q}R\left (\vert x\vert > r/a_{ t}, \frac{x} {\vert x\vert } \in D\right ) {}\\ & =& K^{-1}\lim _{ t\rightarrow c}V (K^{1/(\vert \rho \vert +q)}/a_{ t})a_{t}^{q}R\left (\vert x\vert > r/a_{ t}, \frac{x} {\vert x\vert } \in D\right ) {}\\ & =& K^{-1}\lim _{ t\rightarrow c}V (V ^{\leftarrow }(t))a_{ t}^{q}R\left (\vert x\vert > r/a_{ t}, \frac{x} {\vert x\vert } \in D\right ) {}\\ & =& \frac{1} {\kappa \sigma (\mathbb{S}^{d-1})}\lim _{t\rightarrow c}ta_{t}^{q}R\left (\vert x\vert > r/a_{ t}, \frac{x} {\vert x\vert } \in D\right ) {}\\ \end{array}$$

as required. To show the second part assume that (2.18) holds for some q ≥ 0, some κ > 0, and some function \(a_{t} > 0\) satisfying \(\lim _{t\rightarrow c}a_{t} = 1/c\). We have

$$\displaystyle\begin{array}{rcl} \lim _{s\rightarrow c}\frac{R\left (\vert x\vert > sr, \frac{x} {\vert x\vert } \in D\right )} {R(\vert x\vert > s)} =\lim _{t\rightarrow c}\frac{ta_{t}^{q}R\left (\vert x\vert > r/a_{t}, \frac{x} {\vert x\vert } \in D\right )} {ta_{t}^{q}R(\vert x\vert > 1/a_{t})} = \frac{\sigma (D)} {\sigma (\mathbb{S}^{d-1})}r^{\rho }.& & {}\\ \end{array}$$

We now turn to the third part. Assume that \(a_{t} > 0\) is such that \(\lim _{t\rightarrow c}a_{t} = 1/c\) and that \(a_{\bullet }\) satisfies (2.18) for all \(r \in (0,\infty )\) and all \(D \in \mathfrak{B}(\mathbb{S}^{d-1})\) with \(\sigma (\partial D) = 0\). In particular, this means that \(\lim _{t\rightarrow c}ta_{t}^{q}R\left (\vert x\vert > 1/a_{t}\right ) =\kappa \sigma (\mathbb{S}^{d-1})\), or equivalently that \(V (1/a_{t}) \sim t/K\) as t → c. Combining this with Proposition 2.6 gives

$$\displaystyle\begin{array}{rcl} \lim _{t\rightarrow c} \frac{a_{t}} {K^{1/(\vert \rho \vert +q)}/V ^{\leftarrow }(t)}& =& \lim _{t\rightarrow c}\frac{K^{-1/(\vert \rho \vert +q)}V ^{\leftarrow }(t)} {1/a_{t}} {}\\ & =& \lim _{t\rightarrow c} \frac{V ^{\leftarrow }(t/K)} {V ^{\leftarrow }(V (1/a_{t}))} =\lim _{t\rightarrow c}\frac{V ^{\leftarrow }(t/K)} {V ^{\leftarrow }(t/K)} = 1, {}\\ \end{array}$$

which concludes the proof.  □ 

When \(R \in RV _{\rho }^{\infty }(\sigma )\) we sometimes say that R has regularly varying tails. In this case we refer to | ρ | as the tail index. The following result helps explain these definitions.

Proposition 2.12.

Let \(\sigma \neq 0\) be a finite Borel measure on \(\mathbb{S}^{d-1}\) and let R be a Borel measure on \(\mathbb{R}^{d}\) satisfying (2.13) . If \(R \in RV _{\rho }^{\infty }(\sigma )\) for some ρ ≤ 0, then for any δ > 0

$$\displaystyle\begin{array}{rcl} \int _{\vert x\vert \geq \delta }\vert x\vert ^{\gamma }R(\mathrm{d}x)\left \{\begin{array}{lr} < \infty &\mathrm{if}\ \gamma < \vert \rho \vert \\ = \infty &\mathrm{if }\ \gamma > \vert \rho \vert \end{array} \right..& & {}\\ \end{array}$$

Proof.

When γ ≤ 0 the result follows immediately from the fact that R satisfies (2.13). Now assume that γ > 0 and fix δ > 0. By Fubini’s Theorem (Theorem 18.3 in [10])

$$\displaystyle\begin{array}{rcl} \int _{\vert x\vert \geq \delta }\vert x\vert ^{\gamma }R(\mathrm{d}x)& =& \int _{\vert x\vert \geq \delta }\int _{0}^{\vert x\vert }\gamma u^{\gamma -1}\mathrm{d}uR(\mathrm{d}x) {}\\ & =& \delta ^{\gamma }R(\vert x\vert \geq \delta ) +\int _{ \delta }^{\infty }\gamma u^{\gamma -1}R(\vert x\vert \geq u)\mathrm{d}u = I_{ 1} + I_{2}. {}\\ \end{array}$$

Clearly, \(I_{1} < \infty \). From (2.16) it follows that \(R(\vert x\vert \geq u) = u^{\rho }L(u)\) for some \(L \in RV _{0}^{\infty }\). Proposition 1.3.6 in [11] implies that for any ε > 0 there exists a \(\delta _{\epsilon } >\delta\) such that for all \(u >\delta _{\epsilon }\) we have \(u^{-\epsilon } < L(u) < u^{\epsilon }\). When γ >  | ρ |  fix ε ∈ (0, γ − | ρ | ) and note that

$$\displaystyle{I_{2} \geq \int _{\delta _{\epsilon }}^{\infty }\gamma u^{\gamma -1-\vert \rho \vert -\epsilon }\mathrm{d}u = \infty.}$$

When γ <  | ρ | fix ε ∈ (0, | ρ | −γ) and note that

$$\displaystyle{I_{2} \leq \int _{\delta }^{\delta _{\epsilon }}\gamma u^{\gamma -1}R(\vert x\vert \geq u)\mathrm{d}u +\int _{ \delta _{\epsilon }}^{\infty }\gamma u^{\gamma -\vert \rho \vert +\epsilon -1}\mathrm{d}u < \infty.}$$

This completes the proof.  □
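
Proposition 2.12 can be illustrated numerically in dimension d = 1. In the sketch below (Python/NumPy; the value of α and the truncation points are illustrative choices) we take \(R(\mathrm{d}x) =\alpha x^{-1-\alpha }\mathrm{d}x\) on x ≥ 1, so that \(R(\vert x\vert > u) = u^{-\alpha }\) and the tail index is α: truncated moments of order γ < α converge to a finite limit, while those of order γ > α grow without bound.

```python
import numpy as np

def trapezoid(y, t):
    # simple trapezoidal rule, to keep the check self-contained
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def moment(gamma, alpha, T):
    # integral of x^gamma against R(dx) = alpha * x^{-1-alpha} dx over [1, T]
    x = np.geomspace(1.0, T, 400_001)
    return trapezoid(alpha * x**(gamma - 1.0 - alpha), x)

alpha = 2.0

# gamma = 1 < alpha: truncated integrals converge to alpha / (alpha - gamma) = 2.
assert abs(moment(1.0, alpha, 1e8) - 2.0) < 1e-3

# gamma = 3 > alpha: truncated integrals grow without bound as T increases.
assert moment(3.0, alpha, 1e6) > 100 * moment(3.0, alpha, 1e3)
```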