In this chapter, we analyze the growth of the Brownian paths \(t\mapsto B_t\) as \(t\rightarrow \infty \). We will see, via the “time-inversion” property of Brownian motion, that this also yields small-scale properties of the paths. First, however, let us record some basic properties of Brownian motion that follow rather directly from its definition.

Theorem 10.1

Let \(B = \{B_t:t\ge 0\}\) be a standard one-dimensional Brownian motion starting at 0. Then

  1.

    (Symmetry) \({W}_t := - B_t\), \(t\ge 0,\) is a standard Brownian motion starting at 0.

  2.

    (Homogeneity and Independent Increments) \(\{B_{t+s} - B_s:t\ge 0\}\) is a standard Brownian motion independent of \(\{B_u: 0\le u\le s\}\), for every \(s\ge 0\).

  3.

    (Scale-Change Invariance) For every \(\lambda >0\), \(\{B^{(\lambda )}_t := \lambda ^{-{1\over 2}}B_{\lambda t}: t\ge 0\}\) is a standard Brownian motion starting at 0.

  4.

    (Time-Inversion Invariance) \({W}_t := t B_{1/t}\), \(t>0\), \(W_0 =0\), is a standard Brownian motion starting at 0.

Proof

Each of these is obtained by showing that the conditions defining a Brownian motion are satisfied. In the case of the time-inversion property, one may apply the strong law of large numbers to obtain continuity at \(t=0\). That is, if \(0< t_n\rightarrow 0\) then write \(s_n = 1/t_n\rightarrow \infty \) and \(N_n := [s_n]\), where \([\cdot ]\) denotes the greatest integer function, so that by the strong law of large numbers, with probability one

$$W_{t_n} = {1\over s_n}B_{s_n} = {N_n\over s_n} {1\over N_n} \sum _{i=1}^{N_n}(B_i - B_{i-1}) + {1\over s_n}(B_{s_n} - B_{N_n}) \rightarrow 0,$$

since \(B_i - B_{i-1}\), \(i\ge 1,\) is an i.i.d. mean-zero sequence, \(N_n/s_n \rightarrow 1\), and \((B_{s_n}-B_{N_n})/s_n\rightarrow 0\) a.s. as \(n\rightarrow \infty \) (see Exercise 2).\(\blacksquare \)
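
As a numerical aside (not part of the formal development), the scale-change and time-inversion invariances can be checked by Monte Carlo simulation. The grid, sample sizes, seed, and particular times below are arbitrary illustrative choices.

```python
import numpy as np

# Simulate many Brownian paths on a uniform grid over [0, T]:
# cumulative sums of independent N(0, dt) increments.
rng = np.random.default_rng(0)
n_paths, n_steps, T = 50000, 64, 4.0
dt = T / n_steps                      # grid contains 0.5, 2.0, and 4.0
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

def B_at(t):
    """Path values at the grid time t (a positive multiple of dt)."""
    return B[:, int(round(t / dt)) - 1]

# Scale-change invariance: Var(lam**-0.5 * B_{lam*s}) should equal s.
lam, s = 4.0, 0.5
v_scaled = np.var(B_at(lam * s) / np.sqrt(lam))
assert abs(v_scaled - s) < 0.02

# Time inversion: W_t = t * B_{1/t}; Cov(W_s, W_u) should equal min(s, u).
s, u = 0.5, 0.25
W_s, W_u = s * B_at(1.0 / s), u * B_at(1.0 / u)
cov = np.mean(W_s * W_u) - W_s.mean() * W_u.mean()
assert abs(cov - min(s, u)) < 0.02
```

Both empirical quantities agree with the Gaussian covariance structure of standard Brownian motion up to Monte Carlo error.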

Although the Brownian paths are not differentiable, it is possible to determine an order of continuity using the following general theorem.

Definition 10.1

A stochastic process (or random field) \(Y = \{Y_u:u\in \varLambda \}\) taking values in a metric space is a version of \(X = \{X_u:u\in \varLambda \}\) if Y has the same finite-dimensional distributions as X.

Theorem 10.2

(Kolmogorov-Chentsov Theorem) Suppose \(X = \{X_u:u\in \varLambda \}\) is a stochastic process (or random field) with values in a complete metric space \((S,\rho )\), indexed by a bounded rectangle \(\varLambda \subset \mathbb {R}^k\) and satisfying

$${\mathbb E}\rho ^\alpha (X_u,X_v) \le c|u-v|^{k+\beta }, \quad \text {for all}\ u,v\in \varLambda ,$$

where \(c,\alpha , \beta \) are positive numbers. Then there is a version \(Y = \{Y_u:u\in \varLambda \}\) of X which is a.s. Hölder continuous with any exponent \(\gamma \) such that \(0< \gamma < {\beta \over \alpha }\).

Proof

Without essential loss of generality we take \(\varLambda = [0,1]^k\) and the norm \(|\cdot |\) to be the maximum norm given by \(|u| = \max \{|u_i|: 1\le i\le k\}\), \(u=(u_1,\dots ,u_k)\). For each \(N = 1,2,\dots \), let \(L_N\) be the finite lattice \(\{j2^{-N}:j = 0,1,\dots ,2^N\}^k\). Write \(L = \cup _{N=1}^\infty L_N\). Define \(M_N = \max \{\rho (X_u,X_v): (u,v)\in L_N^2, |u-v|\le 2^{-N}\}\). Since (i) for a given \(u\in L_N\) there are no more than \(3^k\) points \(v\in L_N\) such that \(|u-v|\le 2^{-N}\), (ii) there are \((2^N+1)^k\) points in \(L_N\), and (iii) for every given pair \((u,v)\), the condition of the theorem holds, one has by Chebyshev’s inequality that

$$\begin{aligned} P(M_N > 2^{-\gamma N}) \le c3^k(2^N+1)^k({2^{-N(k+\beta )}\over 2^{-\alpha \gamma N}}). \end{aligned}$$
(10.1)

In particular, since \(\gamma < \beta /\alpha \),

$$\begin{aligned} \sum _{N=1}^\infty P(M_N > 2^{-\gamma N}) < \infty . \end{aligned}$$
(10.2)

Thus there is a random positive integer \(N^* \equiv N^*(\omega )\) and a set \(\Omega ^*\) with \(P(\Omega ^*) = 1\), such that

$$\begin{aligned} M_N(\omega ) \le 2^{-\gamma N}\quad \text {for all}\ N\ge N^*(\omega ), \omega \in \Omega ^*. \end{aligned}$$
(10.3)

Fix \(\omega \in \Omega ^*\) and let \(N\ge N^*(\omega )\). By exactly the same induction argument as used for the proof of Lemma 3 in Chapter VII, one has for all \(m\ge N+1\),

$$\begin{aligned} \rho (X_u,X_v) \le 2\sum _{j=N}^m 2^{-\gamma j},\quad \text {for all}\ u,v\in L_m, |u-v|\le 2^{-N}. \end{aligned}$$
(10.4)

Since \( 2\sum _{\nu =N}^{\infty }2^{-\gamma \nu } = 2^{-\gamma N+1}(1-2^{-\gamma })^{-1}\), and \(L = \cup _{m=N+1}^\infty L_m\) for all \(N\ge N^*(\omega )\), it follows that

$$\begin{aligned}&\sup \{\rho (X_u,X_v):u,v\in L, |u-v|\le 2^{-N}\}\nonumber \\= & {} \sup \{\rho (X_u,X_v):u,v\in \cup _{m=N+1}^\infty L_m, |u-v|\le 2^{-N}\}\nonumber \\\le & {} 2^{-\gamma N+1}(1-2^{-\gamma })^{-1}, \quad N\ge N^*(\omega ), \omega \in \Omega ^*. \end{aligned}$$
(10.5)

This proves that on \(\Omega ^*\), \(u\rightarrow X_u\) is uniformly continuous (from L into \((S,\rho )\)), and is Hölder continuous with exponent \(\gamma \). Now define \(Y_u := X_u\) if \(u\in L\), and for \(u\notin L\) define \(Y_u := \lim X_{u_N}\), where \(u_N\in L\), \(u_N\rightarrow u\). Because of the uniform continuity of \(u\rightarrow X_u\) on L (for \(\omega \in \Omega ^*\)), and the completeness of \((S,\rho )\), the last limit is well-defined and does not depend on the choice of the sequence \(u_N\). For \(\omega \notin \Omega ^*\), let \(Y_u\) be a fixed element of S for all \(u\in [0,1]^k\). Finally, letting \(\gamma _j\uparrow \beta /\alpha \), \(\gamma _j < \beta /\alpha , j\ge 1\), and denoting the exceptional set above as \(\Omega _j^*\), one has the Hölder continuity of Y for every \(\gamma < \beta /\alpha \) on \(\Omega ^{**} := \cap _{j=1}^\infty \Omega _j^*\) with \(P(\Omega ^{**}) = 1\).

That Y is a version of X may be seen as follows. For any \(r\ge 1\) and r vectors \(u_1,\dots ,u_r \in [0,1]^k\), there exist \(u_{jN}\in L\), \(u_{jN}\rightarrow u_j\) as \(N\rightarrow \infty \) (\(1\le j\le r\)). Then \((X_{u_{1N}},\dots ,X_{u_{rN}}) = (Y_{u_{1N}},\dots ,Y_{u_{rN}})\) a.s., and \((X_{u_{1N}},\dots ,X_{u_{rN}})\rightarrow (X_{u_{1}},\dots ,X_{u_{r}}) \) in probability, while \((Y_{u_{1N}},\dots ,Y_{u_{rN}})\rightarrow (Y_{u_{1}},\dots ,Y_{u_{r}})\) almost surely. Hence \((X_{u_1},\dots ,X_{u_r})\) and \((Y_{u_1},\dots ,Y_{u_r})\) have the same distribution. \(\blacksquare \)

Corollary 10.3

(Brownian Motion) Let \(X = \{X_t:t\ge 0\}\) be a real-valued Gaussian process defined on \((\Omega ,\mathcal{F},P)\), with \(X_0 = 0, {\mathbb E}X_t = 0\), and \(\mathrm{Cov}(X_s,X_t) = s\wedge t\), for all \(s,t\ge 0\). Then X has a version \(B = \{B_t:t\ge 0\}\) with continuous sample paths, which are Hölder continuous on every bounded interval with exponent \(\gamma \) for every \(\gamma \in (0,{1\over 2})\).

Proof

Since \({\mathbb E}|X_t-X_s|^{2m} = c(m)(t-s)^m\), \(0\le s\le t\), for some constant c(m), for every \(m > 0\), the Kolmogorov–Chentsov Theorem 10.2 implies the existence of a version \(B^{(0)} = \{B_t^{(0)}: 0\le t\le 1\}\) with the desired properties on [0, 1]. Let \(B^{(n)}\), \(n\ge 1\), be independent copies of \(B^{(0)}\), independent of \(B^{(0)}\). Define \(B_t = B_t^{(0)}\), \(0\le t\le 1\), and \(B_t = B_1^{(0)} +\cdots +B_1^{(n-1)} + B^{(n)}_{t-[t]}\), for \(t\in [n,n+1)\), \(n = 1,2,\dots \). \(\blacksquare \)
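
As an illustrative check (arbitrary parameters and seed), the moment identity invoked here, \({\mathbb E}|B_t - B_s|^{2m} = (2m-1)!!\,(t-s)^m\) for the Gaussian increment \(B_t - B_s \sim N(0, t-s)\), can be verified by simulation in the case \(m = 2\), where \(c(2) = 3\).

```python
import numpy as np

# Monte Carlo check of E|B_t - B_s|^4 = 3 (t - s)^2 for a Gaussian increment.
rng = np.random.default_rng(1)
s, t, n = 0.3, 0.8, 400000
incr = rng.normal(0.0, np.sqrt(t - s), size=n)   # B_t - B_s ~ N(0, t - s)
m4 = np.mean(incr ** 4)
assert abs(m4 - 3 * (t - s) ** 2) < 0.03         # 3 * 0.5**2 = 0.75
```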

Corollary 10.4

(Brownian Sheet) Let \(X = \{X_u:u\in [0,\infty )^2\}\) be a real-valued Gaussian random field satisfying \({\mathbb E}X_u = 0\), \(\mathrm{Cov}(X_u,X_v) = (u_1\wedge v_1)(u_2\wedge v_2)\) for all \(u = (u_1,u_2), v = (v_1,v_2)\). Then X has a continuous version on \([0,\infty )^2\), which is Hölder continuous on every bounded rectangle contained in \([0,\infty )^2\) with exponent \(\gamma \) for every \(\gamma \in (0,{1\over 2})\).

Proof

First let us note that on every compact rectangle \([0,M]^2\), \({\mathbb E}|X_u-X_v|^{2m} \le c(m,M)|u-v|^m\), for all \(m = 1,2,\dots \). For this it is enough to check that on each horizontal line \(u = (u_1,c)\), \(0\le u_1 < \infty \), \(X_u\) is a one-dimensional Brownian motion with mean zero and variance parameter \(\sigma ^2 = c\) for \(c\ge 0\). The same holds on vertical lines. Hence \({\mathbb E}|X_{(u_1,u_2)} - X_{(v_1,v_2)}|^{2m} \le 2^{2m-1}\big ({\mathbb E}|X_{(u_1,u_2)} - X_{(v_1,u_2)}|^{2m} + {\mathbb E}|X_{(v_1,u_2)} - X_{(v_1,v_2)}|^{2m}\big ) \le 2^{2m-1}c(m)\big (u_2^m|u_1-v_1|^m + v_1^m|u_2-v_2|^m\big ) \le 2^{2m} c(m)M^m|u-v|^m\), where \(u = (u_1,u_2), v =(v_1,v_2)\). \(\blacksquare \)
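
A discrete Brownian sheet may be sketched on a grid by cumulatively summing, in both coordinates, i.i.d. mean-zero Gaussian cell increments whose variance equals the cell area. The following simulation (an illustration with arbitrary grid, seed, and test points, not part of the proof) checks the covariance \(\mathrm{Cov}(X_u,X_v) = (u_1\wedge v_1)(u_2\wedge v_2)\).

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_grid, n_rep = 0.25, 8, 60000        # grid times d, 2d, ..., 2.0 per axis
# Each cell increment is N(0, d*d), i.e., standard deviation d; the double
# cumulative sum gives the sheet value at the upper-right corner of each cell.
incr = rng.normal(0.0, d, size=(n_rep, n_grid, n_grid))
X = np.cumsum(np.cumsum(incr, axis=1), axis=2)

def sheet(u1, u2):
    """Sheet values at the grid point (u1, u2) (positive multiples of d)."""
    return X[:, int(round(u1 / d)) - 1, int(round(u2 / d)) - 1]

# Cov(X_u, X_v) should equal (u1 ^ v1)(u2 ^ v2) = 0.5 * 0.75.
Xu, Xv = sheet(0.5, 1.0), sheet(1.5, 0.75)
cov = np.mean(Xu * Xv) - Xu.mean() * Xv.mean()
assert abs(cov - 0.5 * 0.75) < 0.02
```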

Remark 10.1

One may define the Brownian sheet on the index set \(\varLambda _\mathcal{R}\) of all rectangles \(R = [u,v)\), with \(u = (u_1,u_2), v= (v_1,v_2)\), \(0\le u_i\le v_i < \infty \) (\(i = 1,2\)), by setting

$$\begin{aligned} X_R\equiv X_{[u,v)} := X_{(v_1,v_2)} - X_{(v_1,u_2)} - X_{(u_1,v_2)} + X_{(u_1,u_2)}. \end{aligned}$$
(10.6)

Then \(X_R\) is Gaussian with mean zero and variance |R|, the area of R. Moreover, if \(R_1\) and \(R_2\) are nonoverlapping rectangles, then \(X_{R_1}\) and \(X_{R_2}\) are independent. More generally, \(\mathrm{Cov}(X_{R_1},X_{R_2}) = |R_1\cap R_2|\). Conversely, given a Gaussian family \(\{X_R:R\in \varLambda _\mathcal{R}\}\) with these properties, one can restrict it to the class of rectangles \(\{R = [0,u): u = (u_1,u_2)\in [0,\infty )^2\}\) and identify this with the Brownian sheet in Corollary 10.4. It is simple to check that for all n-tuples of rectangles \(R_1,R_2,\dots , R_n\subset [0,\infty )^2\), the matrix \(((|R_i\cap R_j|))_{1\le i,j\le n}\) is symmetric and nonnegative definite. So the finite dimensional distributions of \(\{X_R: R\in \varLambda _\mathcal{R}\}\) satisfy Kolmogorov’s consistency condition.
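
With the same grid construction, one may also check the rectangle-increment covariance \(\mathrm{Cov}(X_{R_1},X_{R_2}) = |R_1\cap R_2|\) via (10.6); the rectangles, grid, and sample size below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
d, n_grid, n_rep = 0.25, 8, 60000
# Discrete sheet on the grid: double cumulative sum of N(0, d*d) cells.
X = np.cumsum(np.cumsum(rng.normal(0.0, d, size=(n_rep, n_grid, n_grid)),
                        axis=1), axis=2)

def sheet(u1, u2):
    """Sheet values at the grid point (u1, u2) (positive multiples of d)."""
    return X[:, int(round(u1 / d)) - 1, int(round(u2 / d)) - 1]

def X_R(u, v):
    """Rectangle increment (10.6) over R = [u, v)."""
    return (sheet(v[0], v[1]) - sheet(v[0], u[1])
            - sheet(u[0], v[1]) + sheet(u[0], u[1]))

R1 = ((0.25, 0.25), (1.25, 1.0))   # overlap [0.75,1.25) x [0.5,1.0),
R2 = ((0.75, 0.5), (1.75, 1.5))    # so |R1 cap R2| = 0.5 * 0.5 = 0.25
A, C = X_R(*R1), X_R(*R2)
cov = np.mean(A * C) - A.mean() * C.mean()
assert abs(cov - 0.25) < 0.02
```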

In order to prove our main result of this section, we will make use of the following important inequality due to Paul Lévy.

Proposition 10.5

(Lévy’s Inequality) Let \(X_j, j=1,\dots , N\), be independent and symmetrically distributed (about zero) random variables. Write \(S_j = \sum _{i=1}^jX_i, 1\le j\le N\). Then, for every \(y > 0\),

$$P\left( \max _{1\le j\le N}S_j \ge y\right) \le 2P(S_N\ge y) - P(S_N=y) \le 2P(S_N\ge y).$$
Proof

Write \(A_j = [S_1< y,\dots , S_{j-1}< y,\, S_j\ge y]\), for \(1\le j\le N\). The events \([S_N-S_j < 0]\) and \([S_N-S_j > 0]\) have the same probability and are independent of \(A_j\). Therefore

$$\begin{aligned} P\left( \max _{1\le j\le N}S_j \ge y\right)= & {} P(S_N\ge y) +\sum _{j=1}^{N-1}P(A_j\cap [S_N< y])\nonumber \\\le & {} P(S_N\ge y) +\sum _{j=1}^{N-1}P(A_j\cap [S_N-S_j< 0])\nonumber \\= & {} P(S_N\ge y) +\sum _{j=1}^{N-1}P(A_j)P([S_N-S_j< 0])\nonumber \\= & {} P(S_N\ge y) +\sum _{j=1}^{N-1}P(A_j\cap [S_N-S_j> 0])\nonumber \\\le & {} P(S_N\ge y) +\sum _{j=1}^{N-1}P(A_j\cap [S_N> y])\nonumber \\\le & {} P(S_N\ge y) + P(S_N > y) \nonumber \\= & {} 2P(S_N\ge y) - P(S_N = y). \end{aligned}$$
(10.7)

This establishes the basic inequality. \(\blacksquare \)
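
A quick Monte Carlo illustration of the inequality (arbitrary parameters and seed), with symmetric standard normal summands, for which \(P(S_N = y) = 0\):

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_rep, y = 20, 200000, 3.0
# Partial sums of symmetric (standard normal) i.i.d. summands.
S = np.cumsum(rng.normal(size=(n_rep, N)), axis=1)
p_max = np.mean(S.max(axis=1) >= y)    # P(max_{1<=j<=N} S_j >= y)
p_end = np.mean(S[:, -1] >= y)         # P(S_N >= y)
# Levy's inequality, with a small allowance for Monte Carlo error.
assert p_max <= 2 * p_end + 0.005
```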

Corollary 10.6

For every \(y > 0\) one has for any \(t > 0\),

$$P\left( \max _{0\le s\le t} B_s \ge y\right) \le 2P(B_t\ge y).$$
Proof

Partition [0, t] by equidistant points \( 0< u_1< u_2< \cdots < u_N = t\), and let \(X_1 = B_{u_1}, X_{j+1} = B_{u_{j+1}} - B_{u_j}\), \(1\le j\le N-1\), in the proposition. Now let \(N\rightarrow \infty \), and use the continuity of Brownian motion. \(\blacksquare \)

In fact one may use a reflection principle argument (strong Markov property) to see that this inequality is sharp for Brownian motion:

$$\begin{aligned} P(\max _{0\le s\le t}B_s \ge y) = 2P(B_t\ge y). \end{aligned}$$
(10.8)

Alternatively, the following proposition concerns the simple symmetric random walk defined by \(S_0 = 0, S_j = X_1+\cdots +X_j\), \(j\ge 1\), with \(X_1,X_2,\dots \) i.i.d. \(\pm 1\)-valued with equal probabilities. It also demonstrates the remarkable strength of the reflection method, allowing one in particular to compute the distribution of the maximum of a random walk over a finite time. The above-indicated equality (10.8) then becomes a consequence of the functional central limit theorem proved in Section 1.8 (Theorem 7.15); see especially (9.27).

Proposition 10.7

For the simple symmetric random walk one has for every positive integer y,

$$P\left( \max _{0\le j\le N}S_j\ge y\right) = 2P(S_N\ge y) - P(S_N = y).$$
Proof

In the notation of Lévy’s inequality given in Proposition 10.5 one has, for the present case of the random walk moving by \(\pm 1\) units at a time, that \(A_j = [S_1< y,\dots , S_{j-1} < y, S_j = y]\), \(1\le j\le N\). Then in (10.7) the probability inequalities are all equalities for this special case. \(\blacksquare \)
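
Since the walk moves by \(\pm 1\) units, the identity can be confirmed exactly by enumerating all \(2^N\) paths for a small N; the choice \(N = 8\), \(y = 2\) below is an arbitrary illustration.

```python
from itertools import product

# Count paths of the simple symmetric random walk on each side of
# P(max_{0<=j<=N} S_j >= y) = 2 P(S_N >= y) - P(S_N = y).
N, y = 8, 2
count_max = count_ge = count_eq = 0
for steps in product((-1, 1), repeat=N):   # all 2^8 = 256 paths
    S, M = 0, 0                            # running sum and running maximum
    for x in steps:
        S += x
        M = max(M, S)
    count_max += (M >= y)
    count_ge += (S >= y)
    count_eq += (S == y)
assert count_max == 2 * count_ge - count_eq    # 130 = 2*93 - 56
```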

Corollary 10.8

Equation (10.8) holds for every \(y> 0, t > 0\).

Theorem 10.9

(Law of the Iterated Logarithm (LIL) for Brownian Motion) Each of the following holds with probability one:

$$\overline{\lim }_{t\rightarrow \infty }{B_t\over \sqrt{2t\log \log t}} = 1, \qquad \underline{\lim }_{t\rightarrow \infty }{B_t\over \sqrt{2t\log \log t}} = -1. $$
Proof

Let \(\varphi (t) := \sqrt{2t\log \log t}, t > 0.\) Let us first show that for any \(0< \delta <1,\) one has with probability one that

$$\begin{aligned} \overline{\lim }_{t\rightarrow \infty }{B_t\over \varphi (t)} \le 1+\delta . \end{aligned}$$
(10.9)

For arbitrary \(\alpha > 1,\) partition the time interval \([0,\infty )\) into subintervals of exponentially growing lengths \(t_{n+1}-t_n\), where \(t_n = \alpha ^n\), and consider the event

$$E_n := \left[ \max _{t_n\le t\le t_{n+1}} {B_t\over (1+\delta )\varphi (t)} > 1\right] .$$

Since \(\varphi (t)\) is a nondecreasing function, one has, using Corollary 10.6, a scaling property, and Lemma 2 from Chapter IV, that

$$\begin{aligned} \nonumber P(E_n)\le & {} P\left( \max _{0\le t\le t_{n+1}}B_t> (1+\delta )\varphi (t_{n})\right) \nonumber \\= & {} 2P\left( B_1 > {(1+\delta )\varphi (t_n)\over \sqrt{t_{n+1}}}\right) \nonumber \\\le & {} \sqrt{2\over \pi }{\sqrt{t_{n+1}}\over (1+\delta )\varphi (t_n)} e^{-{(1+\delta )^2\varphi ^2(t_n)\over 2t_{n+1}}} \le c{1\over n^{(1+\delta )^2/\alpha }} \end{aligned}$$
(10.10)

for a constant \(c > 0\) and all \(n > {1\over \log \alpha }\). For a given \(\delta > 0\) one may select \(1< \alpha < (1+\delta )^{2}\) to obtain \(P(E_n \ i.o.) = 0\) from the Borel–Cantelli lemma (Part I). Thus we have (10.9). Since \(\delta > 0\) is arbitrary we have with probability one that

$$\begin{aligned} \overline{\lim }_{t\rightarrow \infty }{B_t\over \varphi (t)} \le 1. \end{aligned}$$
(10.11)

Next let us show that with probability one,

$$\begin{aligned} \overline{\lim }_{t\rightarrow \infty }{B_t\over \varphi (t)} \ge 1. \end{aligned}$$
(10.12)

For this consider the independent increments \(B_{t_{n+1}} - B_{t_n}\), \(n \ge 1.\) For \(\theta = {t_{n+1}-t_n\over t_{n+1}} = {\alpha -1\over \alpha } < 1,\) using Feller’s tail probability estimate (Lemma 2, Chapter IV) and Brownian scale change,

$$\begin{aligned} \nonumber P\left( B_{t_{n+1}} - B_{t_n}> \theta \varphi (t_{n+1})\right)= & {} P\left( B_1 > \sqrt{\theta \over t_{n+1}}\varphi (t_{n+1})\right) \nonumber \\\ge & {} {c^\prime \over \sqrt{2\theta \log \log t_{n+1}}}e^{-\theta \log \log t_{n+1}} \nonumber \\\ge & {} {c\over \sqrt{\log n}}n^{-\theta } \end{aligned}$$
(10.13)

for suitable positive constants \(c, c^\prime \) depending on \(\alpha \) and for all \(n > {1\over \log \alpha }\). It follows from the Borel–Cantelli Lemma (Part II) that with probability one,

$$\begin{aligned} B_{t_{n+1}} - B_{t_n} > \theta \varphi (t_{n+1}) \ i.o. \end{aligned}$$
(10.14)

Also, by (10.11) and replacing \(\{B_t:t\ge 0\}\) by the standard Brownian motion \(\{-B_t:t\ge 0\}\),

$$\begin{aligned} \underline{\lim }_{t\rightarrow \infty }{B_t\over \varphi (t)} \ge -1, \ a.s. \end{aligned}$$
(10.15)

Since \(t_{n+1} = \alpha t_n > t_n\), we have

$$\begin{aligned} {B_{t_{n+1}}\over \sqrt{2t_{n+1}\log \log t_{n+1}}} = {B_{t_{n+1}} - B_{t_n}\over \sqrt{2t_{n+1}\log \log t_{n+1}}} + {1\over \sqrt{\alpha }}{B_{t_n}\over \sqrt{2t_n\log \log t_{n+1}}}. \end{aligned}$$
(10.16)

Now, using (10.14) and (10.15), together with \(\log \log t_{n+1}/\log \log t_n\rightarrow 1\) as \(n\rightarrow \infty \), it follows that with probability one,

$$\begin{aligned} \overline{\lim }_{n\rightarrow \infty }{B_{t_{n+1}}\over \varphi (t_{n+1})} \ge \theta - {1\over \sqrt{\alpha }} = {\alpha -1\over \alpha } - {1\over \sqrt{\alpha }}. \end{aligned}$$
(10.17)

Since \(\alpha > 1\) may be selected arbitrarily large, one has with probability one that

$$\begin{aligned} \overline{\lim }_{t\rightarrow \infty }{B_{t}\over \varphi (t)} \ge \overline{\lim }_{n\rightarrow \infty }{B_{t_{n+1}}\over \varphi (t_{n+1})} \ge 1. \end{aligned}$$
(10.18)

This completes the computation of the limit superior. To get the limit inferior simply replace \(\{B_t:t\ge 0\}\) by \(\{-B_t:t\ge 0\}.\) \(\blacksquare \)
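
As a crude numerical illustration (a single arbitrary path, no substitute for the proof), the LIL normalization keeps a long simple-random-walk path, which obeys the same LIL by the Hartman–Wintner theorem, inside a band around \([-1,1]\).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
# One long +/-1 random-walk path; at integer times it approximates B.
S = np.cumsum(2 * rng.integers(0, 2, size=n) - 1)
ns = np.arange(1, n + 1)
mask = ns >= 100                          # keep log log n bounded away from 0
phi = np.sqrt(2 * ns[mask] * np.log(np.log(ns[mask])))
ratio = S[mask] / phi
# A generous band; the limsup and liminf are exactly +1 and -1.
assert abs(ratio).max() < 2.5
```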

The time inversion property for Brownian motion turns the law of the iterated logarithm (LIL) into a statement concerning the degree (or lack) of local smoothness. (Also see Exercise 7).

Corollary 10.10

Each of the following holds with probability one:

$$\overline{\lim }_{t\rightarrow 0}{B_t\over \sqrt{2t\log \log {1\over t}}} = 1, \qquad \underline{\lim }_{t\rightarrow 0}{B_t\over \sqrt{2t\log \log {1\over t}}} = -1. $$

Exercise Set X

  1.

    (Ornstein–Uhlenbeck Process) Fix parameters \(\gamma> 0, \sigma > 0, x\in {\mathbb R}\) . Use the Kolmogorov–Chentsov theorem to obtain the existence of a continuous Gaussian process \(X = \{X_t:t\ge 0\}\) starting at \(X_0 = x\) with \({\mathbb E}X_t = xe^{-\gamma t}\), and \(\mathrm{Cov}(X_s,X_t) = {\sigma ^2\over \gamma }e^{-\gamma t}\sinh (\gamma s), 0 < s \le t.\)

  2.
    (i)

      Use Feller’s tail estimate (Lemma 2, Chapter IV) to prove that \(\max \{|B_i-B_{i-1}|:i = 1,2,\dots , N+1\}/N\rightarrow 0\) a.s. as \(N\rightarrow \infty \).

    (ii)

      Without using the law of the iterated logarithm for standard Brownian motion B, show directly that \(\limsup _{n\rightarrow \infty }{B_n\over \sqrt{2n\log n}}\le 1\) almost surely.

  3.

    Show that with probability one, standard Brownian motion has arbitrarily large zeros. [Hint: Apply the LIL.]

  4.

    Fix \(t \ge 0\) and use the law of the iterated logarithm to show that \(\lim _{h\rightarrow 0}{B_{t+h} - B_t\over h}\) exists with probability zero. [Hint: Check that \(Y_h := B_{t+h}-B_t, h\ge 0,\) is distributed as standard Brownian motion starting at 0. Consider \({1\over h}Y_h = {Y_h\over \sqrt{2h\log \log (1/h)}} {\sqrt{2h\log \log (1/h)}\over h}.\)]

  5.

    For the simple symmetric random walk, find the distributions of the extremes: (a) \(M_N = \max \{S_j:j=0,\dots , N\}\), and (b) \(m_N = \min \{S_j:0\le j\le N\}\).

  6.

    Consider the simple symmetric random walk \(S_0 = 0\), \(S_n = X_1+\cdots +X_n, n\ge 1\), where \(X_k, k\ge 1\), are iid symmetric Bernoulli \(\pm 1\) valued random variables. Denote the range by \(R_n = \max _{m\le n}S_m - \min _{m\le n}S_m, n\ge 1\). Show that \({R_n\over \sqrt{n}}\) converges in distribution to a nonnegative random variable as \(n\rightarrow \infty \).

  7.

    (Lévy Modulus of Continuity) Use the wavelet construction \(B_t := \sum _{n,k}Z_{n,k}S_{n,k}(t), 0\le t\le 1\), of standard Brownian motion to establish the following fine-scale properties.

    (i)

      Let \(0< \delta < {1\over 2}\). With probability one there is a random constant K such that if \(|t-s| \le \delta \) then \(|B_t - B_s| \le K\sqrt{\delta \log {1\over \delta }}\). [Hint: Fix N and write the increment as a sum of three terms: \(B_t - B_s = Z_{00}(t-s) + \sum _{n=0}^N\sum _{k = 2^n}^{2^{n+1}-1}Z_{n,k}\int _s^tH_{n,k}(u)du + \sum _{n=N+1}^\infty \sum _{k = 2^n}^{2^{n+1}-1}Z_{n,k}\int _s^tH_{n,k}(u)du = a + b + c\). Check that for a suitable (random) constant \(K^\prime \) one has \(|b| \le |t-s|K^\prime \sum _{n=0}^Nn^{1\over 2}2^{n\over 2} \le |t-s|K^{\prime }{\sqrt{2}\over \sqrt{2}-1}\sqrt{N}2^{N\over 2}\), and \(|c| \le K^\prime \sum _{n=N+1}^\infty n^{1\over 2}2^{-{n\over 2}} \le K^{\prime }{\sqrt{2}\over \sqrt{2}-1}\sqrt{N}2^{-{N\over 2}}\). Use these estimates, taking \(N = [ - \log _2(\delta )]\) such that \(\delta 2^N \sim 1\), to obtain the bound \(|B_t - B_s| \le |Z_{00}|\delta + 2K^{\prime }\sqrt{-\delta \log _2(\delta )}\). This is sufficient since \(\delta < \sqrt{\delta }\).]

    (ii)

      The modulus of continuity is sharp in the sense that with probability one, there is a sequence of intervals \((s_n,t_n), n\ge 1,\) of respective lengths \(t_n-s_n \rightarrow 0\) as \(n\rightarrow \infty \) such that the ratio \({B_{t_n}-B_{s_n}\over \sqrt{-(t_n-s_n)\log (t_n-s_n)}}\) is bounded below by a positive constant. [Hint: Use Borel–Cantelli I together with Feller’s tail probability estimate for the Gaussian distribution to show that \(P(A_n \ i.o.) = 0\), where \(A_n := [ |B_{k2^{-n}} - B_{(k-1)2^{-n}}| \le c\sqrt{n2^{-n}}, k = 1,\dots , 2^n]\) and c is fixed in \((0,\sqrt{2\log 2})\). Interpret this in terms of the certain occurrence of the complementary event \([A_n \ i.o.]^c\).]

    (iii)

      The paths of Brownian motion are a.s. nowhere differentiable.