
1 Introduction

Set-indexed stochastic processes naturally appear in many areas of probability theory and mathematical statistics, e.g., as empirical measures [26], set-indexed martingales [15], point processes [7, 8], and random measures [17].

Both empirical and partial-sum processes are special cases of marked point processes or random measures. They can be described via pairs \((x_i, m_i)\), where \(x_i\) is a location and \(m_i\) is the mass located at \(x_i\), also called the mark of \(x_i\).

Empirical processes assign the same nonrandom mass to each random location. More precisely, based on a d-dimensional sample \(\mathbf X_1,\dots,\mathbf X_n\), the empirical measure is defined for any Borel \(A \subset \mathbb R^d\) by

$$\displaystyle \begin{aligned}F_n(A) = \frac{1}{n}\#\{j=1,\dots,n:\; \mathbf X_j\in A\}. \end{aligned}$$
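For orientation, here is a minimal numerical sketch of the empirical measure for d = 2 (our own illustration, not taken from the cited works); the sample size, the sampling distribution, and the test set A are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
X = rng.uniform(size=(n, 2))            # sample X_1, ..., X_n, here uniform on [0, 1]^2

def F_n(in_A):
    """Empirical measure of A, given a vectorised membership test for A."""
    return in_A(X[:, 0], X[:, 1]).mean()

# Quarter disc A = {x^2 + y^2 <= 1}: F_n(A) should be close to |A| = pi/4
print(F_n(lambda x, y: x**2 + y**2 <= 1.0))
```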

Partial-sum processes are defined by assigning i.i.d. random masses to fixed locations on a grid in the space \(\mathbb R^d\). If \(\mathbb Z^d\) stands for the set of integer points in \(\mathbb R^d\), then the partial-sum process is the normalized version of

$$\displaystyle \begin{aligned} S(A) = \sum_{\mathbf j \in A} X_{\mathbf j} \end{aligned} $$
(18.1)

that is, the sum of i.i.d. random variables \(X_{\mathbf j}\), \(\mathbf j\in \mathbb Z^d\), over all \(\mathbf j\in A\). We set S(A) = 0 if A contains no points of \(\mathbb Z^d\).
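The partial-sum process (18.1) is equally straightforward to simulate; the following sketch (again our own illustration, with an arbitrary distribution, grid size, and set A) evaluates S(nA)/n^d for d = 2 and compares it with μ|A|.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                   # grid points 1, ..., n in each coordinate
X = rng.normal(loc=1.0, size=(n, n))      # i.i.d. masses X_j with mean mu = 1

def S(in_A):
    """S(A): sum the masses over integer points j = (j1, j2) with in_A(j1, j2) True."""
    j1, j2 = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
    return X[in_A(j1, j2)].sum()

# A = quarter disc of radius 1; S(nA)/n^2 should be close to mu*|A| = pi/4
print(S(lambda j1, j2: j1**2 + j2**2 <= n**2) / n**2)
```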

Partial-sum processes in dimension one are extensively studied in classical probability theory as cumulative sums of random variables. We discuss below the higher-dimensional setting and allow for a richer family of sets A than in the classical case of multiple sums; see, e.g., [19]. There are three main types of asymptotic results for partial-sum processes indexed by sets, namely,

  • strong laws of large numbers,

  • central limit theorems,

  • laws of the iterated logarithm.

Perhaps the first strong law of large numbers of this kind appeared in the paper by Bass and Pyke [3]. The central limit theorem for partial-sum processes was obtained by Kuelbs [22], while the law of the iterated logarithm is due to Wichura [29]. Further references can be found in the survey papers by Pyke [25] and Gaenssler and Ziegler [11]. From now on, we concentrate on the strong law of large numbers.

The paper is organized as follows. First, we recall the Bass–Pyke theorem (the uniform strong law of large numbers) in Sect. 18.2. It is generalized for signed measures in Sect. 18.3 and proved in the subsequent Sect. 18.4. The main feature is the general scaling of the argument set using diagonal matrices with the determinant converging to infinity. The case of stationary measures is considered in Sect. 18.5. The most important special cases concern random measures generated by marked point processes and by sums of random variables on a grid. Section 18.6 describes an application to stochastic integrals. Section 18.7 concludes and mentions a number of further related references.

2 The Bass–Pyke Theorem

Let \(\mathbb N^d\) be the set of d-dimensional vectors with positive integer coordinates. Consider a family of independent identically distributed random variables \(\{X_{\mathbf j},\mathbf j\in \mathbb N^d\}\). If A is a Borel measurable subset of \(\mathbb R^d\), define S(A) by (18.1). Let |A| denote the Lebesgue measure of A, let tA = {tx : x ∈ A} for t > 0, and let B be the open unit Euclidean ball centered at the origin. For r ≥ 0 and Borel \(A\subset \mathbb R^d\),

$$\displaystyle \begin{aligned} A^r=\{x\in\mathbb R^d:\; (x+rB)\cap A\neq\emptyset\} \end{aligned}$$

denotes the outer r-parallel set of A and

$$\displaystyle \begin{aligned} A^{-r}=\{x:\; x+rB\subset A\} \end{aligned}$$

is the inner r-parallel set. Therefore,

$$\displaystyle \begin{aligned}A(r)=A^r\setminus A^{-r}=\{x:\rho(x,\partial A)<r\}, \end{aligned}$$

where ρ is the Euclidean distance and ∂A is the boundary of A.

Theorem 18.1 (See [3, Th. 1])

Assume that the expectation \(\mu =\mathbf E\,\left [X_{\mathbf j}\right ]\,\) exists. Let \(\mathcal {A}\) be a collection of Borel measurable subsets of [0, 1]^d. If

$$\displaystyle \begin{aligned} \sup_{A\in\mathcal{A}}|A(\delta)| \to 0 \qquad \mathit{\text{as }}\; \delta\downarrow 0, \end{aligned} $$
(18.2)

then

$$\displaystyle \begin{aligned} \lim_{n\to\infty} \sup_{A\in\mathcal{A}} \left|\frac{S(nA)}{n^d}-\mu|A|\right|=0 \qquad \mathit{\text{a.s.}} \end{aligned} $$
(18.3)

To appreciate some peculiarities of this theorem we briefly discuss below its simplest case, where \(\mathcal {A}\) consists of a single set A.

Example 18.1

If A = [0, 1]^d, then nA = [0, n]^d is the cube with side length n, and thus mA ⊆ nA if m ≤ n. Therefore the sums S(nA) form a subsequence of the partial sums of independent identically distributed random variables with expectation μ. In this case, (18.3) follows from the Kolmogorov strong law of large numbers for independent identically distributed random variables.
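This convergence is easy to observe numerically; the following sketch (with an arbitrary distribution and grid size) evaluates S(nA)/n^2 for A = [0, 1]^2 along a sequence of values of n.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000
X = rng.exponential(scale=0.5, size=(N, N))   # i.i.d. masses with mu = 0.5
S = X.cumsum(axis=0).cumsum(axis=1)           # S[n-1, n-1] = S(n[0,1]^2)
for n in (10, 100, 1000):
    print(n, S[n - 1, n - 1] / n**2)          # approaches mu = 0.5
```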

Example 18.2

Let A be the set of points with rational coordinates in [0, 1]^d. Clearly (18.2) fails. On the other hand, S(nA) is the same as in the case of [0, 1]^d, but |A| = 0. Therefore, (18.3) holds if μ = 0 and it fails otherwise.

Example 18.3

Let A be the set of points of [0, 1]^d with irrational coordinates. Clearly (18.2) fails. Since S(nA) = 0, the strong law of large numbers (18.3) fails if μ ≠ 0; otherwise (18.3) holds.

The situation is even more complicated if \(\mathcal {A}\) becomes richer.

3 Uniform Law of Large Numbers for Random Signed Measures

Let ξ(A), \(A\in \mathcal {B}\), be a random signed measure defined on the family \(\mathcal {B}\) of Borel sets in \(\mathbb R^d\); see, e.g., [17]. Denote \(|\mathbf t|=\prod _{i=1}^d t_i\) and \([\mathbf t,\mathbf s]=\times _{i=1}^d [t_i,s_i]\) for \(\mathbf t=(t_1,\dots,t_d)\) and \(\mathbf s=(s_1,\dots,s_d)\) from \(\mathbb R^d\). For \(\mathbf t\in \mathbb R_+^d\) and \(A\subset \mathbb R^d\), write

$$\displaystyle \begin{aligned} \mathbf t\cdot A=\{(t_1x_1,\dots,t_dx_d):\; \mathbf x=(x_1,\dots,x_d)\in A\}. \end{aligned}$$

Assume that ξ(A) is integrable for each bounded Borel A and let

$$\displaystyle \begin{aligned} \varLambda(A)=\mathbf E\,\left[\xi(A)\right]\,, \qquad A\in\mathcal{B}, \end{aligned}$$

be the first moment measure of ξ.

The signed measure ξ is said to satisfy the multiparameter strong law of large numbers if

$$\displaystyle \begin{aligned} \lim_{|\mathbf t|\to\infty} \frac{\xi(\mathbf {t} \cdot I)-\mathbf E\,\left[\xi(\mathbf {t} \cdot I)\right]\,}{|\mathbf t|}=0 \qquad \text{a.s.}, \end{aligned} $$
(18.4)

where I = (0, 1]^d is the unit cube. Note that t may converge to infinity in a rather arbitrary manner; it is only essential that the volume of the rectangle [0, t] converges to infinity.

Let \(\mathcal {A}\) be a subfamily of Borel sets in I. For m ≥ 1, denote

$$\displaystyle \begin{aligned}C_m(\mathbf k)=\frac{1}{m}(\mathbf k-\mathbf1,\mathbf k], \qquad \mathbf k\in\mathbb N^d. \end{aligned}$$

Here 1 = (1, …, 1) and k −1 = (k 1 − 1, …, k d − 1) for \(\mathbf k={({k}_1,\dots ,{k}_d)}\in \mathbb N^d\). For every \(A\in \mathcal {A}\),

$$\displaystyle \begin{aligned}A^{\prime}_m = \bigcup_{C_m(\mathbf k)\subseteq A} C_m(\mathbf k), \qquad {A^{\prime\prime}_m} = \bigcup_{C_m(\mathbf k)\cap A\ne\varnothing} C_m(\mathbf k) \end{aligned}$$

are discrete analogues of the inner and outer parallel sets to A.
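The next sketch (our own illustration for d = 2) approximates the volumes of these discrete parallel sets for a concrete set A by testing each cube C_m(k) on a sub-grid; the set A and all numerical parameters are arbitrary, and the containment test is only approximate.

```python
import numpy as np

def parallel_volumes(in_A, m, samples=40):
    """Approximate (|A'_m|, |A''_m|) by testing each cube C_m(k) on a sub-grid."""
    vol_inner = vol_outer = 0.0
    u = (np.arange(samples) + 0.5) / samples        # sample points in (0, 1)
    for k1 in range(1, m + 1):
        for k2 in range(1, m + 1):
            xs = (k1 - 1 + u) / m                   # sub-grid inside C_m(k)
            ys = (k2 - 1 + u) / m
            hits = in_A(xs[:, None], ys[None, :])
            if hits.all():
                vol_inner += 1.0 / m**2             # cube (approximately) contained in A
            if hits.any():
                vol_outer += 1.0 / m**2             # cube meets A
    return vol_inner, vol_outer

def quarter_disc(x, y):
    return x**2 + y**2 <= 1.0

for m in (4, 16, 64):
    vi, vo = parallel_volumes(quarter_disc, m)
    print(m, vo - vi)      # |A''_m \ A'_m| shrinks as m grows, as required in (18.6)
```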

The following result generalizes Theorem 18.1.

Theorem 18.2

Let ξ be a random signed measure that satisfies the multiparameter strong law of large numbers. Assume that

$$\displaystyle \begin{aligned} \lim_{m\to\infty}\limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}\left|\frac{\mathbf E\,\left[\xi(\mathbf {t} \cdot (A\setminus A^{\prime}_m))\right]\,}{|\mathbf t|}\right|=0 \end{aligned} $$
(18.5)

and that |ξ(A)| ≤ η(A) for all Borel sets A, where η is a random measure that satisfies the multiparameter strong law of large numbers and is such that

$$\displaystyle \begin{aligned} \lim_{m\to\infty}\limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}\frac{\mathbf E\,\left[\eta(\mathbf t\cdot ({A^{\prime\prime}_m}\setminus A^{\prime}_m))\right]\,}{|\mathbf t|}=0. \end{aligned} $$
(18.6)

Then ξ satisfies the uniform strong law of large numbers, that is,

$$\displaystyle \begin{aligned} \lim_{|\mathbf t|\to\infty} \sup_{A\in\mathcal{A}} \left|\frac{\xi(\mathbf {t} \cdot A) -\mathbf E\,\left[\xi(\mathbf {t} \cdot A)\right]\,}{|\mathbf t|}\right|=0 \qquad \mathit{\text{a.s.}} \end{aligned} $$
(18.7)

Corollary 18.1

Assume that ξ is a random (non-negative) measure that satisfies the multiparameter strong law of large numbers. If

$$\displaystyle \begin{aligned} \lim_{m\to\infty}\limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}\frac{\mathbf E\,\left[\xi(\mathbf t\cdot ({A^{\prime\prime}_m}\setminus A^{\prime}_m))\right]\,}{|\mathbf t|}=0, \end{aligned}$$

then (18.7) holds.

Even in the setting of partial sums, there are several differences from Theorem 18.1. First, the growth parameter t is continuous. This allows one to treat cases where some coordinates of t stay constant or even approach zero, while in the setting of [3] all coordinates are separated from zero and grow, so that the set nA increases to the whole \(\mathbb R_+^d\) in the limit whenever A contains a neighborhood of the origin.

Second, we deal with signed measures rather than with sums of random variables over sets in \(\mathbb R^d\). Even if we restrict our setting and consider the particular case where ξ is constructed in the same manner as in [3], we are still in a more general situation, since we do not impose the independence assumption on the auxiliary random variables; for example, the result is applicable to orthogonal random variables. Of course, one should check appropriate conditions for the strong law of large numbers (18.4) for every particular dependence scheme; various examples are presented in [19]. In this sense, we provide a universal method for obtaining the uniform strong law of large numbers (18.7) from (18.4).

4 Proof of Theorem 18.2

For x = (x 1, …, x d) ∈ I, we have x ⋅ I = (0, x]. Then

$$\displaystyle \begin{aligned} \lim_{|\mathbf t|\to\infty} \frac{\xi(\mathbf {t} \cdot A)-\mathbf E\,\left[\xi(\mathbf {t} \cdot A)\right]\,}{|\mathbf t|}=0 \qquad \text{a.s.} \end{aligned} $$
(18.8)

holds with A = x ⋅ I for any fixed x ∈ I, since t ⋅ (x ⋅ I) = s ⋅ I for \(\mathbf s=\mathbf t\cdot\mathbf x=(t_1x_1,\dots,t_dx_d)\), and |t| → ∞ is equivalent to |s| → ∞.

Since ξ is a signed measure, condition (18.8) also holds for every set A that is a difference of \(\mathbf x_1\cdot I\) and \(\mathbf x_2\cdot I\). Thus, (18.8) holds for sets A that are finite unions of differences \((\mathbf x_1\cdot I)\setminus(\mathbf x_2\cdot I)\).

Turning to the general set \(A\in \mathcal {A}\), fix m ≥ 1 and write

$$\displaystyle \begin{aligned} \limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}} &\left|\frac{\xi(\mathbf {t} \cdot A)-\mathbf E\,\left[\xi(\mathbf {t} \cdot A)\right]\,}{|\mathbf t|}\right| \\ &\le \limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}\left|\frac{\xi(\mathbf {t} \cdot A)-\xi(\mathbf {t} \cdot A^{\prime}_m)}{|\mathbf t|}\right| \\ &\quad +\limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}\left|\frac{\xi(\mathbf {t} \cdot A^{\prime}_m)-\mathbf E\,\left[\xi(\mathbf {t} \cdot A^{\prime}_m)\right]\,}{|\mathbf t|}\right| {} \\ &\quad +\limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}\left|\frac{\mathbf E\,\left[\xi(\mathbf {t} \cdot A^{\prime}_m)\right]\,-\mathbf E\,\left[\xi(\mathbf {t} \cdot A)\right]\,}{|\mathbf t|}\right|. \end{aligned} $$
(18.9)

Since ξ is a signed measure, \(\mathbf E\,\left [\xi (\mathbf {t} \cdot A^{\prime }_m)\right ]\,-\mathbf E\,\left [\xi (\mathbf {t} \cdot A)\right ]\, = -\mathbf E\,\left [\xi (\mathbf {t} \cdot (A\setminus A^{\prime }_m))\right ]\,\), hence,

$$\displaystyle \begin{aligned}\limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}\left|\frac{\mathbf E\,\left[\xi(\mathbf {t} \cdot A^{\prime}_m)\right]\,-\mathbf E\,\left[\xi(\mathbf {t} \cdot A)\right]\,}{|\mathbf t|}\right| =\limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}\left|\frac{\mathbf E\,\left[\xi(\mathbf {t} \cdot (A\setminus A^{\prime}_m))\right]\,}{|\mathbf t|}\right|. \end{aligned}$$

Passing to the second term on the right-hand side of (18.9), note that there is only a finite number of possible combinations of the cubes \(C_m(\mathbf k)\) contained in I (this number depends on m, of course). Since \(A^{\prime }_m\) is constructed from the cubes \(C_m(\mathbf k)\), there is only a finite number of possible values of \(A^{\prime }_m\) as A runs through \(\mathcal {A}\). From the strong law of large numbers (18.8) we conclude that

$$\displaystyle \begin{aligned}\limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}\left|\frac{\xi(\mathbf {t} \cdot A^{\prime}_m) -\mathbf E\,\left[\xi(\mathbf {t} \cdot A^{\prime}_m)\right]\,}{|\mathbf t|}\right|=0 \qquad \text{a.s.} \end{aligned}$$

Now we proceed with the first term on the right-hand side of (18.9). Since

$$\displaystyle \begin{aligned}|\xi(\mathbf {t} \cdot A)-\xi(\mathbf {t} \cdot A^{\prime}_m)|=|\xi(\mathbf {t} \cdot (A\setminus A^{\prime}_m))| \le\eta(\mathbf {t} \cdot (A\setminus A^{\prime}_m))\le \eta(\mathbf {t} \cdot ({A^{\prime\prime}_m}\setminus A^{\prime}_m)), \end{aligned}$$

we get

$$\displaystyle \begin{aligned} \limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}&\left|\frac{\xi(\mathbf {t} \cdot A)-\xi(\mathbf {t} \cdot A^{\prime}_m)}{|\mathbf t|}\right| \\ &\le \limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}\left|\frac{\eta(\mathbf {t} \cdot ({A^{\prime\prime}_m}\setminus A^{\prime}_m))-\mathbf E\,\left[\eta(\mathbf {t} \cdot ({A^{\prime\prime}_m}\setminus A^{\prime}_m))\right]\,}{|\mathbf t|}\right| \\ &\quad + \limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}\frac{\mathbf E\,\left[\eta(\mathbf {t} \cdot ({A^{\prime\prime}_m}\setminus A^{\prime}_m))\right]\,}{|\mathbf t|}. \end{aligned} $$

Since η is assumed to satisfy the multiparameter strong law of large numbers, (18.8) holds for η instead of ξ with A being a finite union of the cubes \(C_m(\mathbf k)\). The set \(A^{\prime\prime}_m\setminus A^{\prime}_m\) is contained in (0, 1 + (1∕m)]^d. Since only a finite number of configurations of the cubes \(C_m(\mathbf k)\subseteq(0, 1 + (1/m)]^d\) exists, the strong law of large numbers (18.8) for η implies that

$$\displaystyle \begin{aligned}\limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}\left|\frac{\eta(\mathbf {t} \cdot ({A^{\prime\prime}_m}\setminus A^{\prime}_m))-\mathbf E\,\left[\eta(\mathbf {t} \cdot ({A^{\prime\prime}_m}\setminus A^{\prime}_m))\right]\,}{|\mathbf t|}\right|=0 \qquad \text{a.s.} \end{aligned}$$

Therefore,

$$\displaystyle \begin{aligned} \limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}} &\left|\frac{\xi(\mathbf {t} \cdot A)-\mathbf E\,\left[\xi(\mathbf {t} \cdot A)\right]\,}{|\mathbf t|}\right| \\ &\le\limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}\left|\frac{\mathbf E\,\left[\xi(\mathbf {t} \cdot (A\setminus A^{\prime}_m))\right]\,}{|\mathbf t|}\right| \\ &\quad +\limsup_{|\mathbf t|\to\infty}\sup_{A\in\mathcal{A}}\frac{\mathbf E\,\left[\eta(\mathbf {t} \cdot ({A^{\prime\prime}_m}\setminus A^{\prime}_m))\right]\,}{|\mathbf t|}. \end{aligned} $$

Passing to the limit as m → ∞ and using assumptions (18.5) and (18.6), we complete the proof of the uniform strong law of large numbers (18.7).

5 Homogeneous Random Fields and Stationary Measures

A random signed measure ξ in \(\mathbb R^d\) is said to be stationary if ξ(⋅) shares the finite-dimensional distributions with ξ(⋅ + t) for each \({\mathbf t}\in \mathbb R^d\). If the first moment \(\mathbf E\,\left [\xi (\cdot )\right ]\,\) is finite, then the first moment measure Λ is proportional to the Lebesgue measure.

The ergodic theorem of Zygmund [31] implies that, if ξ is stationary with

$$\displaystyle \begin{aligned} \mathbf E\,\left[|\xi(A)|(\log^+|\xi(A)|)^{d-1}\right]\,<\infty \end{aligned} $$
(18.10)

for all bounded Borel A, then

$$\displaystyle \begin{aligned} |{\mathbf t}|{}^{-1} \int_{[\mathbf 0,\mathbf {\mathbf t}]} \xi(A+\mathbf x)d\mathbf x \end{aligned}$$

converges almost surely as t → ∞ to a finite random variable, namely the conditional expectation of ξ(A) with respect to the invariant σ-algebra. The limit is deterministic and equals \(\mathbf E\,\left [\xi (A)\right ]\,\) if ξ is ergodic. Here \(\log ^+z=\log (e+z)\) for z ≥ 0. Note that all results of Sect. 18.3 can be amended for the convergence t → ∞ instead of |t| → ∞. The notation t → ∞ means that all coordinates of t tend to infinity, while |t| → ∞ means that at least one of them tends to infinity.

Theorem 18.3

Let \(\mathcal {A}\) be a family of Borel sets in I that satisfies (18.2). Assume that ξ is a stationary ergodic random measure such that (18.10) holds for A = I. Then ξ satisfies the uniform strong law of large numbers as t →∞, that is, (18.7) holds with t →∞.

Proof

Note that

$$\displaystyle \begin{aligned} \int_{[\mathbf 0,{\mathbf t}]}\xi(I+\mathbf x)d\,\mathbf x =\int_{[\mathbf 0,{\mathbf t}]}\int_{I+\mathbf x}\xi(d\,\mathbf u)d\,\mathbf x =\int_{[\mathbf 0,{\mathbf t}+ \mathbf 1]} |(-I+\mathbf u)\cap[\mathbf 0,{\mathbf t}]| \xi(d\,\mathbf u). \end{aligned}$$

Further, |(−I + u) ∩ [0, t]| is less than or equal to one for all u ∈ [0, t + 1] and is exactly one if u ∈ [1, t], whence

$$\displaystyle \begin{aligned} \xi([\mathbf 1,{\mathbf t}]) \leq \int_{[\mathbf 0,{\mathbf t}]}\xi(I+\mathbf x)\,d\mathbf x \leq \xi([\mathbf 0,{\mathbf t}+\mathbf 1]), \end{aligned} $$
(18.11)

since ξ is nonnegative. If d = 1, then

$$\displaystyle \begin{aligned} \lim_{{\mathbf t}\to\infty}\frac{\xi([\mathbf 0,{\mathbf t}])-\xi([\mathbf 1,{\mathbf t}])}{|{\mathbf t}|}=0 \qquad \text{a.s.} \end{aligned} $$
(18.12)

which, together with (18.11) and the ergodic theorem (Zygmund's theorem [31] for d = 1), yields for d = 1

$$\displaystyle \begin{aligned} \lim_{{\mathbf t}\to\infty}\frac{\xi(\mathbf t\cdot I)}{|\mathbf t|}=\mathbf E\,\left[\xi(I)\right]\, \qquad \text{a.s.} \end{aligned} $$
(18.13)

Now let d > 1 and assume that (18.13) holds in all dimensions less than d. Then (18.12) holds in dimension d, since [0, t] ∖ [1, t] is covered by d slabs of unit thickness to which the induction hypothesis applies in the remaining coordinates. This, together with (18.11) and Zygmund's theorem [31], yields (18.13) in dimension d. Since ξ is stationary, (18.5) and (18.6) follow from (18.2). The result follows from a variant of Theorem 18.2 for the convergence t → ∞.

Let \(X_{\mathbf j}\), \(\mathbf j\in \mathbb N^d\), be a homogeneous random field, that is, \((X_{\mathbf j_1},\dots ,X_{\mathbf j_m})\) coincides in distribution with \((X_{\mathbf j_1+\mathbf s},\dots ,X_{\mathbf j_m+\mathbf s})\) for all \(m\in \mathbb N\) and \(\mathbf s,{\mathbf j_1},\dots ,{\mathbf j_m}\in \mathbb N^d\). For \({\mathbf n}=(n_1,\dots ,n_d)\in \mathbb N^d\), denote \(S_{\mathbf n}=S([\mathbf 0,\mathbf n])\) from (18.1). In other words,

$$\displaystyle \begin{aligned}S_{\mathbf n}=\sum_{\mathbf k \preceq \mathbf n} X_{\mathbf k} \end{aligned}$$

where ≼ is a partial order in \(\mathbb N^d\) defined by

$$\displaystyle \begin{aligned}\mathbf k \preceq \mathbf n \quad \iff\quad k_1\le n_1,\dots,k_d\le n_d \end{aligned}$$

for \(\mathbf k=(k_1,\dots ,k_d)\in \mathbb N^d\) and \(\mathbf n=(n_1,\dots ,n_d)\in \mathbb N^d\).

Dunford [9] proved that if

$$\displaystyle \begin{aligned} \mathbf E\,\left[|X_{\mathbf j}|\left(\log^+|X_{\mathbf j}|\right)^{d-1}\right]\,<\infty, \end{aligned} $$
(18.14)

then the limit of the averages

$$\displaystyle \begin{aligned} \frac{S_{\mathbf n}}{|\mathbf n|} \end{aligned} $$
(18.15)

exists almost surely as \(|\mathbf n| = n_1\times\dots\times n_d\to\infty\). Smythe [27] provides a probabilistic statement and proof of this result for independent identically distributed random variables \(X_{\mathbf j}\). Etemadi [10] obtains the same result for pairwise independent identically distributed random variables. The limit of \(S_{\mathbf n}/|\mathbf n|\) in the latter case coincides with the expectation \(\mu =\mathbf E\,\left [X_{\mathbf j}\right ]\,\). This property requires ergodicity if the random variables are not necessarily pairwise independent and identically distributed.
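As a hedged numerical illustration (not taken from the cited works), the following sketch simulates the averages (18.15) for a homogeneous ergodic field obtained as a moving average of i.i.d. noise; the field construction and the grid sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 400
noise = rng.exponential(scale=1.0, size=(N + 1, N + 1))    # i.i.d. noise with mean 1
# X_j = average of the noise over a 2 x 2 window: homogeneous, ergodic, mean mu = 1
X = (noise[:-1, :-1] + noise[1:, :-1] + noise[:-1, 1:] + noise[1:, 1:]) / 4.0

S = X.cumsum(axis=0).cumsum(axis=1)                # S_n = sum of X_k over k <= n
for n1, n2 in [(50, 50), (200, 100), (400, 400)]:
    print(n1, n2, S[n1 - 1, n2 - 1] / (n1 * n2))   # should approach mu = 1
```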

Note that S(A) given by (18.1) is not a stationary random measure and may also be signed, so Theorem 18.3 is not directly applicable. The following result follows from Theorem 18.2.

Corollary 18.2

Let \(\{X_{\mathbf j},\mathbf j\in \mathbb N^d\}\) be a homogeneous random field such that the moment condition (18.14) holds. Further, let \(\mathcal {A}\) be a family of subsets of the unit cube I that satisfies (18.2). If {X j} is ergodic, then

$$\displaystyle \begin{aligned}\lim_{|\mathbf{t}|\to\infty} \sup_{A\in\mathcal{A}} \left|\frac{S(\mathbf{t}\cdot A)}{|\mathbf{t}|}-\mu|A|\right|=0 \qquad \mathit{\text{a.s.}} \end{aligned}$$

Proof

Note that the expectations in (18.5) and (18.6) are dominated by a constant times \(|{A^{\prime \prime }_m}\setminus A^{\prime }_m|\), which tends to zero uniformly over \(A\in\mathcal{A}\) as m → ∞ by (18.2), since every point of \({A^{\prime\prime}_m}\setminus A^{\prime}_m\) lies within distance \(\sqrt d/m\) of the boundary ∂A.

Remark 18.1

Condition (18.14) is necessary for the almost sure convergence of (18.15) in the case of independent identically distributed random variables.

Another particularly important family of random signed measures is generated by marked point processes. Let \(N=\{(\mathbf x_i, m_i),\ i\ge1\}\) be a point process in \(\mathbb R^d\times \mathbb R\), where the second coordinate \(m_i\) represents the mark of the point \(\mathbf x_i\). Then

$$\displaystyle \begin{aligned} \xi(A)=\sum_{\mathbf x_i\in A} m_i \end{aligned} $$
(18.16)

is a random signed measure. The process is called independently marked if the marks are i.i.d. random variables independent of the locations. The first-order moment measure

$$\displaystyle \begin{aligned} \varLambda(A\times C)=\mathbf E\,\left[\# \{i:\; (\mathbf x_i,m_i)\in A\times C\}\right]\, \end{aligned}$$

is the measure on Borel sets in \(\mathbb R^d\times \mathbb R\), and we assume that \(\varLambda (A\times \mathbb R)\) is finite for each bounded Borel set A.

The marked point process is stationary if its distribution does not change when the locations \(\mathbf x_i\) are all translated by the same vector; then Λ(A × C) = Λ((A + t) × C) for all \(\mathbf t\in \mathbb R^d\).
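Before stating the theorem, here is a small simulation sketch (our own, with an arbitrary intensity, mark distribution, and set A) of the normalized measure ξ(t ⋅ A)/|t| for an independently marked Poisson process; the values should approach \(\mathbf E\,\left[m_1\right]\,|A|\).

```python
import numpy as np

rng = np.random.default_rng(4)

def xi_over_volume(t1, t2, in_A):
    """Simulate xi(t.A)/(t1*t2) for A a subset of the unit square."""
    n_pts = rng.poisson(t1 * t2)                        # unit-intensity Poisson points in [0,t1]x[0,t2]
    x = rng.uniform(0.0, t1, n_pts)
    y = rng.uniform(0.0, t2, n_pts)
    marks = rng.normal(loc=2.0, scale=1.0, size=n_pts)  # i.i.d. marks with E[m_1] = 2
    inside = in_A(x / t1, y / t2)                       # membership in the rescaled set t.A
    return marks[inside].sum() / (t1 * t2)

def triangle(u, v):                                     # A = {u + v <= 1}, |A| = 1/2
    return u + v <= 1.0

for t in (10, 100, 1000):
    print(t, xi_over_volume(t, t, triangle))            # should approach E[m_1]*|A| = 1
```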

Theorem 18.4

Assume that ξ is given by (18.16) for an ergodic independently marked point process satisfying

$$\displaystyle \begin{aligned} \mathbf E\,\left[|m_1|(\log^+|m_1|)^{d-1}\right]\,<\infty, \end{aligned}$$

and the random variable \(N=\operatorname {card}\{i:\mathbf x_i\in I\}\) is square integrable. Then (18.7) holds for any family \(\mathcal {A}\) satisfying (18.2).

The proof of Theorem 18.4 is based on the following elementary upper bound for the function \(x(\log x)^r\).

Lemma 18.1

Let r > 0, n ≥ 1 and \(a_1,\dots,a_n\ge e^{r-1}\). Put \(A_n=a_1+\dots+a_n\). Then

$$\displaystyle \begin{aligned}A_n (\log A_n)^{r} \le \sum_{i=1}^n a_i(\log a_i)^r+r\sum_{i=1}^n (A_n-a_i) (\log a_i)^{r-1}. \end{aligned}$$

Proof

It is clear that

$$\displaystyle \begin{aligned}A_n (\log A_n)^{r} = \sum_{i=1}^n a_i(\log A_n)^{r} =\sum_{i=1}^n a_i(\log a_i)^r+\sum_{i=1}^n a_i \left((\log A_n)^{r}-(\log a_i)^{r}\right). \end{aligned}$$

By the mean value theorem,

$$\displaystyle \begin{aligned}(\log A_n)^{r}-(\log a_i)^{r} = (A_n-a_i) \cdot r\frac{(\log \xi)^{r-1}}{\xi} \end{aligned}$$

for some \(a_i\le\xi\le A_n\). Since the right-hand side is a decreasing function of ξ for \(\xi\ge e^{r-1}\),

$$\displaystyle \begin{aligned}(\log A_n)^{r}-(\log a_i)^{r} \le (A_n-a_i) \cdot r\frac{(\log a_i)^{r-1}}{a_i}. \end{aligned}$$

Therefore,

$$\displaystyle \begin{aligned}A_n (\log A_n)^{r} \le \sum_{i=1}^n a_i(\log a_i)^r + r \sum_{i=1}^n (A_n-a_i) (\log a_i)^{r-1}. \end{aligned}$$
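The inequality of Lemma 18.1 is easy to sanity-check numerically; the following minimal sketch uses arbitrary values of r and a_1, …, a_n respecting a_i ≥ e^{r−1}.

```python
import numpy as np

rng = np.random.default_rng(5)
r = 2.0
a = np.exp(r - 1.0) + rng.exponential(size=20)     # a_i >= e^(r-1)
A_n = a.sum()
lhs = A_n * np.log(A_n) ** r
rhs = (a * np.log(a) ** r).sum() + r * ((A_n - a) * np.log(a) ** (r - 1)).sum()
print(lhs <= rhs)                                  # expected output: True
```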

Proof (of Theorem 18.4)

In order to apply Theorem 18.2, we only need to show that

$$\displaystyle \begin{aligned} \bar\xi(I)=\sum_{\mathbf x_i\in I} |m_i| \end{aligned}$$

satisfies the moment condition (18.14) with \(\bar\xi(I)\) in place of \(X_{\mathbf j}\). Without loss of generality, we assume that \(|m_i|\ge e^{d-2}\) almost surely, since \(\bar \xi (I)=\bar \xi _1(I)+\bar \xi _2(I)\), where \(\bar \xi _1(I)\) and \(\bar \xi _2(I)\) are constructed from \(m_i1\kern -3.5pt\operatorname {I}_{\{|m_i|<e^{d-2}\}}\) and \(m_i1\kern -3.5pt\operatorname {I}_{\{|m_i|\ge e^{d-2}\}}\), respectively, with \(m_i1\kern -3.5pt\operatorname {I}_{\{|m_i|<e^{d-2}\}}\) being bounded. By Lemma 18.1 with r = d − 1 and \(a_i=|m_i|\),

$$\displaystyle \begin{aligned}\mathbf E\,\left[A_N(\log A_N)^{d-1}\right]\, \le \mathbf E\,\left[\sum_{i=1}^N a_i(\log a_i)^r\right]\, + (d-1)\mathbf E\,\left[\sum_{i=1}^N (A_N-a_i) (\log a_i)^{r-1}\right]\,. \end{aligned}$$

Since N and \(\{m_i\}\) are independent, Wald's identity implies

$$\displaystyle \begin{aligned}\mathbf E\,\left[\sum_{i=1}^N a_i(\log a_i)^r\right]\, = \mathbf E\,\left[N\right]\, \cdot \mathbf E\,\left[a_i(\log a_i)^r\right]\,. \end{aligned}$$

The total expectation formula yields that

$$\displaystyle \begin{aligned} \mathbf E\,\left[\sum_{i=1}^N (A_N-a_i) (\log a_i)^{r-1}\right]\, &=\sum_{n=1}^\infty {\mathbf P}(N=n) \mathbf E\,\left[\sum_{i=1}^n (A_n-a_i) (\log a_i)^{r-1}\right]\, \\ &=\sum_{n=1}^\infty {\mathbf P}(N=n) \sum_{i=1}^n \mathbf E\,\left[(A_n-a_i) (\log a_i)^{r-1}\right]\, \\ &=\sum_{n=1}^\infty {\mathbf P}(N=n) \sum_{i=1}^n \mathbf E\,\left[(A_n-a_i)\right]\, \cdot \mathbf E\,\left[(\log a_i)^{r-1}\right]\, \\ &=\sum_{n=1}^\infty {\mathbf P}(N=n) n(n-1) \mathbf E\,\left[|m_1|\right]\, \cdot \mathbf E\,\left[(\log |m_1|)^{r-1}\right]\, \\ &=\mathbf E\,\left[|m_1|\right]\, \cdot \mathbf E\,\left[(\log |m_1|)^{r-1}\right]\, \mathbf E\,\left[N(N-1)\right]\,. \end{aligned} $$

This, together with the preceding application of Wald's identity, shows that \(\mathbf E\,\left[A_N(\log A_N)^{d-1}\right]\,<\infty\) and proves the desired result.

6 Stochastic Integrals

Stochastic integrals with respect to the Brownian sheet have been intensively studied since the 1970s. Treated as signed measures, stochastic integrals fit the framework of Theorem 18.2 very well. Although the construction of stochastic integrals can be carried out in any dimension, we restrict ourselves to the case d = 2 as in [6].

Let W be a white noise in the plane, that is, a finitely additive random set function defined on the Borel subsets of \(\mathbb R^2_+\) such that W(A) is a normal random variable with mean 0 and variance |A|, and W(A) and W(B) are independent for disjoint Borel subsets A and B.

If \(R_{st}\) denotes the rectangle [0, s] × [0, t], then \(W_{st} = W(R_{st})\) is called the Brownian sheet. By \(\mathcal {F}_{st}\), \((s,t)\in \mathbb R^2_+\), we denote the σ-algebra generated by the random variables \(W_{uv}\), (u, v) ≼ (s, t).

Let A be a closed rectangle with lower left-hand corner \(z_0\). Introduce the function \(\phi_z\), \(z\in \mathbb R^2_+\), as follows:

$$\displaystyle \begin{aligned} \phi_z=\phi_0 1\kern-3.5pt\operatorname{I}_A(z), \qquad z\in\mathbb R^2_+, \end{aligned} $$
(18.17)

where \(\phi_0\) is an \(\mathcal {F}_{z_0}\)-measurable random variable. Then, by definition,

$$\displaystyle \begin{aligned}\int_{R_{z}}\phi\,dW=\phi_0 W(A\cap R_{z}), \qquad z\in\mathbb R^2_+. \end{aligned}$$

The integral is extended by linearity to simple ϕ, i.e. to finite linear combinations of “step” functions of the form (18.17). In general, let ϕ be such that

  (a)

    \(\phi_z\) is \(\mathcal {F}_z\)-measurable,

  (b)

    \((z,\omega)\mapsto\phi_z(\omega)\), \(z\in \mathbb R^2_+\), ω ∈ Ω, is \(\mathcal {B}\times \mathcal {F}\)-measurable, where \(\mathcal {B}\) is the family of Borel subsets of the plane and \(\mathcal {F}\) is the σ-algebra of the probability space \((\varOmega ,\mathcal { F},{\mathbf P})\), and

  (c)

    \(\displaystyle \int _{R_z} \mathbf E\,\left [\phi _\zeta ^2\right ]\,\,d\zeta <\infty \) for all \(z\in \mathbb R^2_+\).

Then one can find a sequence of simple random functions \(\{\phi_n\}\) for which

$$\displaystyle \begin{aligned}\lim_{n\to\infty}\int_{R_z} \mathbf E\,\left[(\phi_n-\phi)^2\right]\,\,d\zeta=0\qquad \text{for all}\ z\in\mathbb R^2_+. \end{aligned}$$

The integrals \(\int _{R_z} \phi _n\,dW\) converge in the mean-square sense; the limit is denoted by \(\int _{R_z} \phi \,dW\). The resulting integral is

  • continuous as a function of z,

  • a two-parameter martingale, and

  • for all \(z\in \mathbb R^2_+\),

    $$\displaystyle \begin{aligned} \mathbf E\,\left[\left(\int_{R_z} \phi\,dW\right)^2\right]\,=\int_{R_z} \mathbf E\,\left[\phi_\zeta^2\right]\,\,d\zeta. \end{aligned} $$
    (18.18)

Recall the three defining properties of an arbitrary two-parameter martingale \(M_z\), \(z\in \mathbb R^2_+\), with respect to the family of σ-algebras \(\mathcal {F}_z\), \(z\in \mathbb R^2_+\) (see [5]):

  (I)

    \(\mathbf E\,\left [|M_z|\right ]\,<\infty \) for all \(z\in \mathbb R^2_+\);

  (II)

    M z is \(\mathcal {F}_z\)-measurable;

  (III)

    if z ≼ z′, then \(\mathbf E\,\left [M_{z^{\prime }}\big |\mathcal {F}_z\right ]\,=M_z\).

The final step in the construction of the integral is to pass to a general bounded Borel set A by letting

$$\displaystyle \begin{aligned}\int_A \phi\,dW = \int_R 1\kern-3.5pt\operatorname{I}_A\phi\,dW, \end{aligned}$$

where R is a rectangle containing A. Then, for each fixed ϕ,

$$\displaystyle \begin{aligned} \xi(A)=\int_A\phi\,dW \end{aligned}$$

is a signed measure.

Now we define the two-parameter discrete-time martingale associated with the stochastic integral. For \((m,n)\in \mathbb N^2\), define \(r_{mn}=R_{mn}\setminus(R_{m-1,n}\cup R_{m,n-1})\) and put

$$\displaystyle \begin{aligned}X_{mn} = \int_{r_{mn}} \phi\,dW, \qquad S_{mn}=\sum_{i=1}^m\sum_{j=1}^n X_{ij}=\int_{R_{mn}} \phi\,dW. \end{aligned}$$

Then \(S_{mn}\) is a two-parameter discrete-time martingale with respect to the family of σ-algebras \(\mathcal {F}_{mn}\). It follows from [20] that if

$$\displaystyle \begin{aligned} \sum_{m=1}^\infty \sum_{n=1}^\infty \frac {\mathbf E\,\left[X_{mn}^2\right]\,}{(mn)^2}<\infty, \end{aligned} $$
(18.19)

then the strong law of large numbers holds for \(\{S_{mn}\}\),

$$\displaystyle \begin{aligned} \lim_{mn\to\infty} \frac {S_{mn}}{mn}=0 \qquad \text{a.s.} \end{aligned} $$
(18.20)

In view of (18.18), condition (18.19) is equivalent to

$$\displaystyle \begin{aligned} \int_1^\infty\int_1^\infty \frac{\mathbf E\,\left[\phi_{st}^2\right]\,}{(st)^2}\,ds\,dt<\infty. \end{aligned} $$
(18.21)

The strong law of large numbers (18.20) is easily extended to its continuous-time counterpart by using the Cairoli maximal inequality [5]. Thus (18.4) holds for the signed measure \(\xi(A)=\int_A\phi\,dW\). Now Theorem 18.2 implies the following corollary.

Corollary 18.3

Let \(\phi_z\), \(z\in \mathbb R^2_+\), satisfy conditions (a)–(c). Let \(\mathcal {A}\) be a family of subsets of the square [0, 1] × [0, 1] such that conditions (18.5) and (18.6) hold. Then (18.21) implies

$$\displaystyle \begin{aligned}\lim_{\underset{s\ge1,\ t\ge1}{st\to\infty}}\, \sup_{A\in\mathcal{A}}\, \left|\frac{1}{st}\int_{st\cdot A}\phi\,dW\right|=0 \qquad \mathit{\text{a.s.}} \end{aligned}$$
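As a hedged illustration of the strong law (18.20) behind this corollary (our own sketch, not part of the construction above), take the simplest integrand ϕ ≡ 1, for which the integral over R_mn is the Brownian sheet W_mn itself; the increments over the unit cells r_mn are then i.i.d. standard normal random variables, and S_mn/(mn) should tend to zero. The grid sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
M = 500
cells = rng.normal(size=(M, M))               # W(r_mn): independent N(0, |r_mn|) = N(0, 1)
W = cells.cumsum(axis=0).cumsum(axis=1)       # W_mn = W(R_mn)
for m, n in [(50, 50), (200, 300), (500, 500)]:
    print(m, n, W[m - 1, n - 1] / (m * n))    # tends to 0 as mn grows
```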

7 Concluding Remarks

The assumptions for the uniform strong law of large numbers imposed on the family \(\mathcal {A}\) (either (18.2) in Theorem 18.1 or (18.5)–(18.6) in Theorem 18.2) do not involve any entropy-type restriction needed for the central limit theorem [1] and the law of the iterated logarithm [2]. For both of the latter results, one needs to assume that the entropy is integrable, that is,

$$\displaystyle \begin{aligned}\int_0^1 \sqrt{\frac{H(u)}{u}}\,du<\infty, \end{aligned}$$

where H(u), the entropy of the family \(\mathcal {A}\), is the logarithm of the cardinality of a minimal u-net.

Krengel and Pyke [21] provide the strong law of large numbers for multiparameter subadditive processes rather than for signed measures as in our Theorem 18.2. It is worth mentioning that they do not obtain a uniform version. Liu, Rio, and Rouault [23] treat the uniform strong law of large numbers for random measures, which is a special case of signed measures, with a one-dimensional growth parameter. A version of Theorem 18.1 for random product measures is considered by Kil and Kwon [18]. Jang and Kwon [16] obtain a generalization of Theorem 18.1 for fuzzy random variables. Bing [4] extends Theorem 18.1 to the α-mixing case. Note that this result follows from Theorem 18.2 by referring to the usual strong law of large numbers available in this case. Ziegler [30] investigates the uniform law of large numbers for triangular arrays, extending Theorem 18.1 to the case of non-identically distributed random variables.

Considering random sets as measurable mappings from a probability space into the set of compact convex subsets of a Banach space, Jang and Kwon [16] prove a uniform strong law of large numbers for sequences of independent and identically distributed random sets, which is another direct generalization of Theorem 18.1.

Under a mild assumption, Giné and Zinn [12] show that condition (18.2) is necessary and sufficient for the uniform strong law of large numbers (18.3) if μ = 0 (see also Hong and Kwon [13]). However, the case of μ≠0 is different.

Ivanoff [14] discusses the uniform strong law of large numbers in connection to possible generalizations of the definitions of a stochastic process indexed by \(\mathbb R_+\) to processes indexed by a multidimensional time parameter or a class of sets.

Müller and Song [24] apply the uniform strong law of large numbers for partial-sum processes to investigate the problem of edge estimation in a two-region image in the setting of a fixed design regression model. Terán and López-Díaz [28] use Theorem 18.1 to study some aspects of the approximation of mappings taking values in a special class of upper semicontinuous functions and to obtain some Korovkin-type theorems for positive linear operators.