
1 Introduction

We work on the canonical space for continuous processes, that is, on the set of continuous functions \(\mathcal{C}[0,\infty )\) equipped with the Borel σ-field \(\mathcal{B}(\mathcal{C}[0,\infty ))\) and the Wiener measure \(\mathbf{P}\). On this space the canonical process \(\beta _{t}(\omega ) =\omega (t)\) is a Brownian motion and the Lévy transformation T, given by the formula

$$\displaystyle{(\mathbf{T}\beta )_{t} =\int _{ 0}^{t}\mathrm{sign}(\beta _{ s})\mathrm{d}\beta _{s},}$$

is almost everywhere defined and preserves the measure \(\mathbf{P}\). A long-standing open question is the ergodicity of this transformation. It was probably first mentioned in written form in Revuz and Yor [11] (p. 257). Since then there has been some work on the question, see Dubins and Smorodinsky [3]; Dubins et al. [4]; Fujita [5]; Malric [7, 8]. One of the recent deep results of Marc Malric, see [9], is the topological recurrence of the transformation, that is, the orbit of a typical Brownian path meets any nonempty open set almost surely. Brossard and Leuridan [2] provide an alternative presentation of the proof.
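Behind this definition is Tanaka's formula: since \(\vert \beta _{t}\vert =\int _{0}^{t}\mathrm{sign}(\beta _{s})\,\mathrm{d}\beta _{s} + L_{t}\), where L denotes the local time of β at level zero, the transformation can equivalently be written as

$$\displaystyle{\mathbf{T}\beta = \vert \beta \vert - L,}$$

an identity that is used repeatedly in Sect. 3.3.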

In this paper we mainly consider the strong mixing property of the Lévy transformation. Our main results are formulated in terms of a strongly stationary sequence of random variables defined by evaluating the iterated paths at time one. Put \(Z_{n} =\min _{0\leq k<n}\vert ({\mathbf{T}}^{k}\beta )_{1}\vert \). We show in Theorem 8 that if

$$\displaystyle{ \liminf _{n\rightarrow \infty }\frac{Z_{n+1}} {Z_{n}} < 1,\quad \text{almost surely}, }$$
(*)

then \(\mathbf{T}\) is strongly mixing, hence ergodic.

We will say that a family of real valued variables \(\left \{\xi _{i}\,:\, i \in I\right \}\) is tight if the family of the probability measures \(\left \{\mathbf{P} \circ \xi _{i}^{-1}\,:\, i \in I\right \}\) is tight, that is, if \(\sup _{i\in I}\mathbf{P}\left (\vert \xi _{i}\vert > K\right ) \rightarrow 0\) as K → ∞.

In Theorem 11 below, we will see that the tightness of the family \(\left \{nZ_{n}\,:\, n \geq 1\right \}\) implies (*); in particular, if \(\mathbf{E}(Z_{n}) = O(1/n)\) then the Lévy transformation is strongly mixing, hence ergodic. Another way of expressing the same idea uses the hitting time \(\nu (x) =\inf \left \{n \geq 0\,:\, Z_{n} < x\right \}\) of the x-neighborhood of zero by the sequence \((({\mathbf{T}}^{k}\beta )_{1})_{k\geq 0}\) for x > 0. In the same theorem we will see that the tightness of \(\left \{x\nu (x)\,:\, x \in (0,1)\right \}\) is also sufficient for (*). In particular, if \(\mathbf{E}(\nu (x)) = O(1/x)\) as x → 0, that is, if the expected hitting time of small neighborhoods of the origin does not grow faster than the inverse of the size of these sets, then the Lévy transformation is strongly mixing, hence ergodic.
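The passage from the moment bound to tightness is just the Markov inequality: if \(c =\sup _{n}n\mathbf{E}(Z_{n}) < \infty \), then

$$\displaystyle{\sup _{n\geq 1}\mathbf{P}\left (nZ_{n} > K\right ) \leq \frac{c} {K} \rightarrow 0,\quad \mbox{ as $K \rightarrow \infty $},}$$

and similarly for the family \(\left \{x\nu (x)\,:\, x \in (0,1)\right \}\).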

It is natural to compare our result with the density theorem of Marc Malric. We obtain that to settle the question of ergodicity one should focus on specific open sets only, but for those sets a deeper understanding of the hitting times is required.

In the next section we sketch our argument, formulating the intermediate steps. Most of the proofs are given in Sect. 3. Note that we do not use the topological recurrence theorem of Marc Malric; instead, all of our argument is based on his density result on the zeros of the iterated paths, see [8]. This theorem states that the set

$$\displaystyle{ \left \{t \geq 0\,:\, \exists n,\,({\mathbf{T}}^{n}\beta )_{t} = 0\right \}\quad \mbox{ is dense in $[0,\infty )$ almost surely}. }$$
(1)

Hence the argument given below may eventually lead to an alternative proof of the topological recurrence theorem as well.

2 Results and Tools

2.1 Integral-Type Transformations

Recall that a measure-preserving transformation T of a probability space \((\varOmega,\mathcal{B},\mathbf{P})\) is ergodic if

$$\displaystyle{\lim _{n\rightarrow \infty }\frac{1} {n}\sum _{k=0}^{n-1}\mathbf{P}(A \cap {T}^{-k}B) = \mathbf{P}(A)\mathbf{P}(B),\quad \mbox{ for $A,B \in \mathcal{B}$},}$$

and strongly mixing provided that

$$\displaystyle{\lim _{n\rightarrow \infty }\mathbf{P}(A \cap {T}^{-n}B) = \mathbf{P}(A)\mathbf{P}(B),\quad \mbox{ for $A,B \in \mathcal{B}$}.}$$

The next theorem, whose proof is given in Sect. 3.2, uses that ergodicity and strong mixing can be interpreted as asymptotic independence when the base set Ω is a Polish space. Here the special form of the Lévy transformation and the one-dimensional setting are not essential, hence we will use the phrase integral-type for transformations of the d-dimensional Wiener space of the form

$$\displaystyle{ T\beta =\int _{0}^{\cdot }h(s,\beta )\mathrm{d}\beta _{s} }$$
(2)

where h is a progressive d × d matrix-valued function. It is measure-preserving, that is, Tβ is a d-dimensional Brownian motion, if and only if h(t, ω) is an orthogonal matrix \(\mathrm{d}t \otimes \mathrm{d}\mathbf{P}\)-almost everywhere, that is, \({h}^{T}h = I_{d}\), where \({h}^{T}\) denotes the transpose of h and \(I_{d}\) is the identity matrix of size d × d. Recall that \(\left \|a\right \|_{HS} =\mathop{ Tr}\nolimits {\left (a{a}^{T}\right )}^{1/2}\) is the Hilbert–Schmidt norm of the matrix a.
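Indeed, by (2) the coordinates of Tβ are continuous local martingales with

$$\displaystyle{\langle (T\beta )_{i},(T\beta )_{j}\rangle _{t} =\int _{0}^{t}{\left (h{h}^{T}\right )}_{ij}(s,\beta )\,\mathrm{d}s,}$$

so by Lévy's characterization Tβ is a d-dimensional Brownian motion exactly when \(h{h}^{T} = I_{d}\) holds \(\mathrm{d}t \otimes \mathrm{d}\mathbf{P}\)-almost everywhere, which for square matrices is the same as \({h}^{T}h = I_{d}\).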

Theorem 1.

Let T be an integral-type measure-preserving transformation of the d-dimensional Wiener space as in (2) and denote by \(X_{n}(t)\) the process

$$\displaystyle{ X_{n}(t) =\int _{ 0}^{t}h_{ s}^{(n)}\mathrm{d}s\quad \text{with}\quad h_{ s}^{(n)} = h(s,{T}^{n-1}\beta )\cdots h(s,T\beta )h(s,\beta ). }$$
(3)

Then

  1. (i)

    T is strongly mixing if and only if \(X_{n}(t)\,\mathop{\rightarrow}\limits^{p}\,0\) for all t ≥ 0.

  2. (ii)

    T is ergodic if and only if \(\frac{1} {N}\sum _{n=1}^{N}\left \|X_{n}(t)\right \|_{HS}^{2}\,\mathop{\rightarrow}\limits^{p}\,0\) for all t ≥ 0.

The two parts of Theorem 1 can be proved along similar lines, see Sect. 3.2. Note that the Hilbert–Schmidt norm of an orthogonal transformation in dimension d is \(\sqrt{d}\), hence by (3) we have the trivial bound \(\left \|X_{n}(t)\right \|_{HS} \leq t\sqrt{d}\). By this boundedness the convergence in probability is equivalent to the convergence in \({L}^{1}\) in both parts of Theorem 1.
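The trivial bound follows from the triangle inequality for the integral in (3): each \(h_{s}^{(n)}\) is a product of orthogonal matrices, hence orthogonal, so

$$\displaystyle{\left \|X_{n}(t)\right \|_{HS} \leq \int _{0}^{t}\left \|h_{s}^{(n)}\right \|_{HS}\,\mathrm{d}s = t\sqrt{d}.}$$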

2.2 Lévy Transformation

Throughout this section \({\beta }^{(n)} =\beta \circ {\mathbf{T}}^{n}\) denotes the n-th iterated path under the Lévy transformation \(\mathbf{T}\). Then \(h_{t}^{(n)} =\prod _{k=0}^{n-1}\mathrm{sign}(\beta _{t}^{(k)})\).

By boundedness, the convergence of \(X_{n}(t)\) in probability is the same as the convergence in \({L}^{2}\). Writing out \(X_{n}^{2}(t)\) we obtain that

$$\displaystyle{ X_{n}^{2}(t) = 2\int _{ 0<u<v<t}h_{u}^{(n)}h_{ v}^{(n)}\mathrm{d}u\mathrm{d}v. }$$
(4)

Combining (4) and (i) of Theorem 1 we obtain that T is strongly mixing provided that

$$\displaystyle\begin{array}{rcl} \mathbf{E}\left (h_{s}^{(n)}h_{ t}^{(n)}\right ) \rightarrow 0,\quad \mbox{ for almost all $0 < s < t$.}& &{}\end{array}$$
(5)

By scaling, \(\mathbf{E}\left (h_{s}^{(n)}h_{t}^{(n)}\right )\) depends only on the ratio s ∕ t, and the sufficient condition (5) even simplifies to

$$\displaystyle{\mathbf{E}\left (h_{s}^{(n)}h_{ 1}^{(n)}\right ) \rightarrow 0,\quad \mbox{ for almost every $s \in (0,1)$.}}$$
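To see the scaling property, note that for c > 0 the Brownian scaling \((\Theta _{c}\omega )(u) = {c}^{-1}\omega ({c}^{2}u)\) commutes with \(\mathbf{T}\), so \(h_{u}^{(n)} \circ \Theta _{c} = h_{{c}^{2}u}^{(n)}\); since \(\Theta _{c}\) preserves the Wiener measure, the choice \(c = {t}^{-1/2}\) gives

$$\displaystyle{\mathbf{E}\left (h_{s}^{(n)}h_{t}^{(n)}\right ) = \mathbf{E}\left (h_{s/t}^{(n)}h_{1}^{(n)}\right ).}$$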

Since \(h_{s}^{(n)}h_{1}^{(n)}\) takes values in \(\left \{-1,+1\right \}\) we actually have to show that \(\mathbf{P}\left (h_{s}^{(n)}h_{1}^{(n)} = 1\right ) -\mathbf{P}\left (h_{s}^{(n)}h_{1}^{(n)} = -1\right ) \rightarrow 0\). It is quite natural to prove this limiting relation by a kind of coupling. In the present setting this means a transformation S of the state space \(\mathcal{C}[0,\infty )\) preserving the Wiener measure and mapping most of the event \(\{h_{s}^{(n)}h_{1}^{(n)} = 1\}\) to \(\{h_{s}^{(n)}h_{1}^{(n)} = -1\}\) for n large.

The transformation S will be the reflection of the path after a suitably chosen stopping time τ, i.e.,

$$\displaystyle\begin{array}{rcl} (S\beta )_{t} = 2\beta _{t\wedge \tau }-\beta _{t}.& & {}\\ \end{array}$$

Proposition 2.

Let C > 0 and s ∈ (0,1). If there exists a stopping time τ such that

  1. (a)

    s < τ < 1 almost surely,

  2. (b)

    \(\nu =\inf \left \{n \geq 0\,:\,\beta _{\tau }^{(n)} = 0\right \}\) is finite almost surely,

  3. (c)

    \(\vert \beta _{\tau }^{(k)}\vert > C\sqrt{1-\tau }\) for 0 ≤ k < ν almost surely,

then

$$\displaystyle\begin{array}{rcl} \limsup _{n\rightarrow \infty }\left \vert \mathbf{E}\left (h_{s}^{(n)}h_{1}^{(n)}\right )\right \vert \leq \mathbf{P}\left (\sup _{t\in [0,1]}\left \vert \beta _{t}\right \vert > C\right ).& & {}\\ \end{array}$$

One can relax the requirement that τ is a stopping time in Proposition 2.

Proposition 3.

Assume that for any s < 1 and C > 0 there exists a random time τ with properties (a), (b) and (c) in Proposition  2 .

Then there is also a stopping time with these properties for any s < 1, C > 0.

For a given s ∈ (0, 1) and C > 0, to prove the existence of a random time τ with the prescribed properties it is natural to consider all time points, not only time one. That is, for a given path \({\beta }^{(0)}\), how large is the random set of “good time points”, denoted by A(C, s):

$$\displaystyle\begin{array}{rcl} & & A(C,s) = \left \{t > 0: \mbox{ there exist $n,\gamma $ such that $st <\gamma < t$,}\right. \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \left.\mbox{ $\beta _{\gamma }^{(n)} = 0$ and $\inf _{0\leq k<n}\vert \beta _{\gamma }^{(k)}\vert > C\sqrt{t-\gamma }$}\right \}. {}\end{array}$$
(6)

Note that it may happen that n = 0 and then the infimum \(\inf _{0\leq k<n}\vert \beta _{\gamma }^{(k)}\vert \) is infinite.

Some basic properties of A(C, s) for easier reference:

  1. (a)

    Invariance under scaling. For x≠0, let Θ x denote the scaling of the path, \((\Theta _{x}\omega )(t) = {x}^{-1}\omega ({x}^{2}t)\). Then, since \(\mathbf{T}\Theta _{x} = \Theta _{x}\mathbf{T}\) clearly holds for the Lévy transformation T, we have

    $$\displaystyle{ t \in A(C,s)(\omega )\quad \Leftrightarrow \quad {x}^{-2}t \in A(C,s)(\Theta _{ x}\omega ) }$$
    (7)
  2. (b)

    Since the scaling \(\Theta _{x}\) preserves the Wiener-measure, the previous point implies that \(\mathbf{P}(t \in A(C,s))\) does not depend on t > 0.

Observe that A(C, s) contains an open interval to the right of every zero of \({\beta }^{(n)}\) for all n ≥ 0. Indeed, if γ is a zero of \({\beta }^{(n)}\) for some n ≥ 0, then by choosing the smallest n such that \(\beta _{\gamma }^{(n)} = 0\), one gets that t ∈ A(C, s) for all t > γ such that t − γ is small enough. Since the union of the sets of zeros of the iterated paths is dense, see [8], we have that the set of good time points is a dense open set. Unfortunately this is not enough for our purposes; a dense open set might be of small Lebesgue measure. To prove that the set of good time points is of full Lebesgue measure, we borrow a notion from real analysis.

Definition 4.

Let \(H \subset \mathbb{R}\) and denote by \(f(x,\varepsilon )\) the supremum of the lengths of the intervals contained in \((x-\varepsilon,x+\varepsilon ) \setminus H\). Then H is porous at x if \(\limsup _{\varepsilon \rightarrow 0+}f(x,\varepsilon )/\varepsilon > 0\).

A set H is called porous when it is porous at each point x ∈ H.

Observe that if H is porous at x then its lower density

$$\displaystyle{\liminf _{\varepsilon \rightarrow 0+}\frac{\lambda ([x-\varepsilon,x+\varepsilon ] \cap H)} {2\varepsilon } \leq 1 -\limsup _{\varepsilon \rightarrow 0+}\frac{f(x,\varepsilon )} {2\varepsilon } < 1,}$$

where λ denotes the Lebesgue measure. By Lebesgue’s density theorem, see [12, p. 13], the density of a measurable set exists and equals 1 at almost every point of the set. Since the closure of a porous set is also porous, we obtain the well-known fact that a porous set is of zero Lebesgue measure.

Lemma 5.

Let H be a random closed subset of [0,∞). If H is scaling invariant, that is, cH has the same law as H for all c > 0, then

$$\displaystyle{\left \{1\not\in H\right \}\subset \left \{\mbox{ $H$ is porous at 1}\right \}\quad \text{and}\quad \mathbf{P}\left (\left \{\mbox{ $H$ is porous at 1}\right \}\setminus \left \{1\not\in H\right \}\right ) = 0.}$$

That is, the events \(\left \{1\not\in H\right \}\) and \(\left \{\mbox{ $H$ is porous at 1}\right \}\) are equal up to null sets.

In particular, if H is porous at 1 almost surely, then \(\mathbf{P}(1\notin H) = 1\).

Proof.

Recall that a random closed set H is a random element in the space of closed subsets of [0, ∞)—we denote it by \(\mathcal{F}\)—endowed with the smallest σ-algebra containing the sets \(C_{G} = \left \{F \in \mathcal{F}\,:\, F \cap G\neq \varnothing \right \}\), for all open \(G \subset [0,\infty )\). Then it is easy to see that \(\left \{\omega \,:\, \mbox{ $H(\omega )$ is porous at 1}\right \}\) is an event and

$$\displaystyle\begin{array}{rcl} \mathbf{H}& =& \left \{(t,\omega ) \in [0,\infty ) \times \varOmega \,:\, t \in H(\omega )\right \}, {}\\ \mathbf{H}_{p}& =& \left \{(t,\omega ) \in [0,\infty ) \times \varOmega \,:\, \mbox{ $H(\omega )$ is porous at $t$}\right \} {}\\ \end{array}$$

are measurable subsets of [0, ∞) × Ω. We will also use the notation

$$\displaystyle{H_{p}(\omega ) = \left \{t \in [0,\infty )\,:\, (t,\omega ) \in \mathbf{H}_{p}\right \}= \left \{t \in [0,\infty )\,:\, \mbox{ $H(\omega )$ is porous at $t$}\right \}.}$$

Then for each ω ∈ Ω the set \(H(\omega ) \cap H_{p}(\omega )\) is a porous set, hence of Lebesgue measure zero; see the remark before Lemma 5. Hence Fubini’s theorem yields that

$$\displaystyle{(\lambda \otimes \mathbf{P})(\mathbf{H} \cap \mathbf{H}_{p}) = \mathbf{E}(\lambda (H \cap H_{p})) = 0.}$$

Using Fubini’s theorem again we get

$$\displaystyle{0 = (\lambda \otimes \mathbf{P})(\mathbf{H} \cap \mathbf{H}_{p}) =\int _{ 0}^{\infty }\mathbf{P}(t \in H \cap H_{ p})dt.}$$

Since \(\mathbf{P}(t \in H \cap H_{p})\) does not depend on t by the scaling invariance of H, we have that \(\mathbf{P}(1 \in H \cap H_{p}) = 0\). Now \(\left \{1 \in H \cap H_{p}\right \}= \left \{1 \in H_{p}\right \}\setminus \left \{1\not\in H\right \}\), so we have shown that

$$\displaystyle\begin{array}{rcl} \mathbf{P}(\left \{\mbox{ $H$ is porous at 1}\right \}\setminus \left \{1\not\in H\right \}) = 0.& & {}\\ \end{array}$$

The first part of the claim, \(\left \{1\not\in H\right \}\subset \left \{\mbox{ $H$ is porous at $1$}\right \}\), is obvious, since H(ω) is closed and if 1 ∉ H(ω) then there is an open interval containing 1 and disjoint from H. □ 

We want to apply this lemma to \([0,\infty ) \setminus A(C,s)\), the random set of bad time points. We have seen in (7) that the law of \([0,\infty ) \setminus A(C,s)\) has the scaling property. For easier reference we state explicitly the corollary of the above argument, that is, the combination of (i) in Theorem 1, Propositions 2–3 and Lemma 5:

Corollary 6.

If \([0,\infty ) \setminus A(C,s)\) is almost surely porous at 1 for any C > 0 and s ∈ (0,1) then the Lévy transformation is strongly mixing.

The condition formulated in terms of A(C, s) requires that small neighborhoods of time 1 contain sufficiently large subintervals of A(C, s). Looking at only the left and only the right neighborhoods we obtain Theorems 7 and 8 below, respectively.

To state these results we introduce the following notation, for t > 0:

  • $$\displaystyle{\gamma _{n}(t) =\max \left \{s \leq t\,:\,\beta _{s}^{(n)} = 0\right \}}$$

    is the last zero before t,

  • $$\displaystyle{\gamma _{n}^{{\ast}}(t) =\max _{ 0\leq k\leq n}\gamma _{k}(t),}$$

    the last time s before t such that \({\beta }^{(0)},\ldots {,\beta }^{(n)}\) have no zero in (s, t],

  • $$\displaystyle{Z_{n}(t) =\min _{0\leq k<n}\vert \beta _{t}^{(k)}\vert.}$$

When t = 1 we omit it from the notation, that is, \(\gamma _{n} =\gamma _{n}(1)\), \(\gamma _{n}^{{\ast}} =\gamma _{ n}^{{\ast}}(1)\) and \(Z_{n} = Z_{n}(1)\).

Theorem 7.

Let

$$\displaystyle\begin{array}{rcl} Y =\limsup _{n\rightarrow \infty } \frac{Z_{n}(\gamma _{n}^{{\ast}})} {\sqrt{1 -\gamma _{ n }^{{\ast}}}}.& &{}\end{array}$$
(8)

Then Y is a \(\mathbf{T}\)-invariant, \(\left \{0,\infty \right \}\)-valued random variable and

  1. (i)

    either \(\mathbf{P}(Y = 0) = 1\);

  2. (ii)

    or 0 < P (Y = 0) < 1, and then T is not ergodic;

  3. (iii)

    or \(\mathbf{P}(Y = 0) = 0\), that is Y = ∞ almost surely, and T is strongly mixing.

Theorem 8.

Let

$$\displaystyle\begin{array}{rcl} X =\liminf _{n\rightarrow \infty }\frac{Z_{n+1}} {Z_{n}}.& &{}\end{array}$$
(9)

Then X is a \(\mathbf{T}\)-invariant, \(\left \{0,1\right \}\)-valued random variable and

  1. (i)

    either \(\mathbf{P}(X = 1) = 1\);

  2. (ii)

    or 0 < P (X = 1) < 1, and then T is not ergodic;

  3. (iii)

    or \(\mathbf{P}(X = 1) = 0\), that is X = 0 almost surely, and T is strongly mixing.

Remark.

In Theorem 8, the first possibility, X = 1, looks very unlikely. If one is able to exclude it, then the Lévy transformation \(\mathbf{T}\) is either strongly mixing or not ergodic, and the invariant random variable X witnesses which case occurs.

The statements in Theorems 7 and 8 have a similar structure; the easy parts, the invariance of X and Y, are proved in Sect. 3.4, while the more difficult parts are proved in Sects. 3.5 and 3.6, respectively.

We can complement Theorems 7 and 8 with the next statement, which shows that X, Y and the goodness of time 1 for all C > 0 and s ∈ (0, 1) are strongly connected. Its proof is deferred to Sect. 3.7 since it uses the side results of the proofs of Theorems 7 and 8.

Theorem 9.

Set

$$\displaystyle\begin{array}{rcl} A =\bigcap _{s\in (0,1)}\bigcap _{C>0}A(C,s).& & {}\\ \end{array}$$

Then the events \(\left \{1 \in A\right \}\), \(\left \{Y = \infty \right \}\) and \(\left \{X = 0\right \}\) are equal up to null events. In particular, \(X = 1/(1 + Y )\) almost surely.
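Indeed, since \(X \in \left \{0,1\right \}\) and \(Y \in \left \{0,\infty \right \}\) almost surely, with the convention \(1/(1 + \infty ) = 0\) the identity \(X = 1/(1 + Y )\) just expresses the coincidence of the events above:

$$\displaystyle{X = 1 = \frac{1} {1 + Y }\quad \mbox{ on $\left \{Y = 0\right \}$},\qquad X = 0 = \frac{1} {1 + Y }\quad \mbox{ on $\left \{Y = \infty \right \}$}.}$$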

We close this section with a sufficient condition for X < 1 almost surely. For x > 0, let \(\nu (x) =\inf \{ n \geq 0\,:\, \vert \beta _{1}^{(n)}\vert < x\}\). By the next corollary of the density theorem of Malric [8], recalled in (1), ν(x) is finite almost surely for all x > 0.

Corollary 10.

The process \(\inf _{n\geq 0}\vert \beta _{t}^{(n)}\vert \), t ≥ 0, is identically zero almost surely, that is,

$$\displaystyle{\mathbf{P}\left (\inf _{n\geq 0}\vert \beta _{t}^{(n)}\vert = 0,\,\forall t \geq 0\right ) = 1.}$$

Recall that a family of real valued variables \(\left \{\xi _{i}\,:\, i \in I\right \}\) is tight if \(\sup _{i\in I}\mathbf{P}(\left \vert \xi _{i}\right \vert > K) \rightarrow 0\) as \(K \rightarrow \infty \).

Theorem 11.

The tightness of the family \(\left \{x\nu (x)\,:\, x \in (0,1)\right \}\) and that of \(\left \{nZ_{n}\,:\, n \geq 1\right \}\) are equivalent, and both imply X < 1 almost surely, hence also the strong mixing property of the Lévy transformation.

For the sake of completeness we state the next corollary, which is just an easy application of the Markov inequality.

Corollary 12.

If there exists an unbounded, increasing function \(f: [0,\infty ) \rightarrow [0,\infty )\) such that \(\sup _{x\in (0,1)}\mathbf{E}(f(x\nu (x))) < \infty \) or \(\sup _{n}\mathbf{E}(f(nZ_{n})) < \infty \) then the Lévy transformation is strongly mixing.

In particular, if \(\sup _{x\in (0,1)}\mathbf{E}(x\nu (x)) < \infty \) or \(\sup _{n}\mathbf{E}(nZ_{n}) < \infty \) then the Lévy transformation is strongly mixing.
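For instance, if \(c =\sup _{n}\mathbf{E}(f(nZ_{n})) < \infty \), then, f being increasing and unbounded, the Markov inequality gives

$$\displaystyle{\sup _{n}\mathbf{P}\left (nZ_{n} > K\right ) \leq \sup _{n}\frac{\mathbf{E}(f(nZ_{n}))} {f(K)} \leq \frac{c} {f(K)} \rightarrow 0,\quad \mbox{ as $K \rightarrow \infty $},}$$

that is, the family \(\left \{nZ_{n}\,:\, n \geq 1\right \}\) is tight and Theorem 11 applies; the same argument works for \(\left \{x\nu (x)\,:\, x \in (0,1)\right \}\).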

3 Proofs

3.1 General Results

First, we characterize strong mixing and ergodicity of measure-preserving transformations over a Polish space. This will be the key to prove Theorem 1. Although it seems to be natural, the author was not able to locate it in the literature.

Proposition 13.

Let \((\varOmega,\mathcal{B},\mathbf{P},T)\) be a measure-preserving system, where Ω is a Polish space and \(\mathcal{B}\) is its Borel σ-field. Then

  1. (i)

    T is strongly mixing if and only if \(\mathbf{P} \circ {({T}^{0},{T}^{n})}^{-1}\, \mathop{\rightarrow}\limits^{w}\,\mathbf{P} \otimes \mathbf{P}\).

  2. (ii)

    T is ergodic if and only if \(\frac{1} {n}\sum _{k=0}^{n-1}\mathbf{P} \circ {({T}^{0},{T}^{k})}^{-1}\, \mathop{\rightarrow}\limits^{w}\,\mathbf{P} \otimes \mathbf{P}\).

Both parts of the statement follow immediately from the following common generalization.

Proposition 14.

Let Ω be a Polish space and \(\mu _{n},\mu \) be probability measures on the product \((\varOmega \times \varOmega,\mathcal{B}\times \mathcal{B})\), where \(\mathcal{B}\) is the Borel σ-field of Ω.

Assume that for all n the marginals of μ n and μ are the same, that is for \(A \in \mathcal{B}\) we have μ n (A × Ω) = μ(A × Ω) and μ n (Ω × A) = μ(Ω × A).

Then \(\mu _{n}\,\mathop{\rightarrow}\limits^{w}\,\mu \) if and only if \(\mu _{n}(A \times B) \rightarrow \mu (A \times B)\) for all \(A,B \in \mathcal{B}\).

Proof.

Assume first that \(\mu _{n}(A \times B) \rightarrow \mu (A \times B)\) for \(A,B \in \mathcal{B}\). By the portmanteau theorem, see Billingsley [1, Theorem 2.1], it is enough to show that for closed sets \(F \subset \varOmega \times \varOmega \) the limiting relation

$$\displaystyle{ \limsup _{n\rightarrow \infty }\mu _{n}(F) \leq \mu (F) }$$
(10)

holds. To see this, consider first a compact subset F of Ω ×Ω and an open set G such that F ⊂ G. We can take a finite covering of F with open rectangles \(F \subset \cup _{i=1}^{r}A_{i} \times B_{i} \subset G\), where \(A_{i},B_{i} \subset \varOmega \) are open. Since the difference of rectangles can be written as finite disjoint union of rectangles we can write

$$\displaystyle{(A_{i} \times B_{i}) \setminus \bigcup _{k<i}(A_{k} \times B_{k}) =\bigcup _{j}(A^{\prime}_{i,j} \times B^{\prime}_{i,j}),}$$

where \(\left \{A^{\prime}_{i,j} \times B^{\prime}_{i,j}\,:\, i,j\right \}\) is a finite collection of disjoint rectangles. By assumption

$$\displaystyle{\lim _{n\rightarrow \infty }\mu _{n}\left (\right.A^{\prime}_{i,j} \times B^{\prime}_{i,j}\left.\right ) =\mu \left (\right.A^{\prime}_{i,j} \times B^{\prime}_{i,j}\left.\right ),}$$

which yields

$$\displaystyle{\limsup _{n\rightarrow \infty }\mu _{n}(F) \leq \lim _{n\rightarrow \infty }\mu _{n}\left (\right.\bigcup _{i}(A_{i} \times B_{i})\left.\right ) =\mu \left (\right.\bigcup _{i}(A_{i} \times B_{i})\left.\right ) \leq \mu (G).}$$

Taking infimum over G ⊃ F, (10) follows for compact sets.

For a general closed F, let \(\varepsilon > 0\) and denote by \({\mu }^{1}(A) =\mu (A \times \varOmega )\), \({\mu }^{2}(A) =\mu (\varOmega \times A)\) the marginals of μ. By the tightness of \(\left \{{\mu }^{1}{,\mu }^{2}\right \}\), one can find a compact set C such that \({\mu }^{1}({C}^{c}) =\mu ({C}^{c} \times \varOmega ) \leq \varepsilon \) and \({\mu }^{2}({C}^{c}) =\mu (\varOmega \times {C}^{c}) \leq \varepsilon \). Then

$$\displaystyle{\mu _{n}(F) \leq \mu _{n}(F \cap (C \times C)) + 2\varepsilon.}$$

Since \(F^{\prime} = F \cap (C \times C)\) is compact, we have that

$$\displaystyle{\limsup _{n\rightarrow \infty }\mu _{n}(F) \leq \limsup _{n\rightarrow \infty }\mu _{n}(F^{\prime}) + 2\varepsilon \leq \mu (F^{\prime}) + 2\varepsilon \leq \mu (F) + 2\varepsilon.}$$

Letting \(\varepsilon \rightarrow 0\) finishes this part of the proof.

For the converse, note that μ 1 and μ 2 are regular since Ω is a Polish space and μ 1, μ 2 are probability measures on its Borel σ-field.

Fix \(\varepsilon > 0\). For \(A_{i} \in \mathcal{B}\) one can find, using the regularity of μ i, closed sets F i and open sets G i such that \(F_{i} \subset A_{i} \subset G_{i}\) and \({\mu }^{i}(G_{i} \setminus F_{i}) \leq \varepsilon\). Then

$$\displaystyle{(G_{1} \times G_{2}) \setminus (F_{1} \times F_{2}) \subset ((G_{1} \setminus F_{1}) \times \varOmega ) \cup (\varOmega \times (G_{2} \setminus F_{2}))}$$

yields that

$$\displaystyle\begin{array}{rcl} & & \mu _{n}(A_{1} \times A_{2}) \leq \mu _{n}(G_{1} \times G_{2}) \leq \mu _{n}(F_{1} \times F_{2}) + 2\varepsilon, {}\\ & & \mu _{n}(A_{1} \times A_{2}) \geq \mu _{n}(F_{1} \times F_{2}) \geq \mu _{n}(G_{1} \times G_{2}) - 2\varepsilon, {}\\ \end{array}$$

hence by the portmanteau theorem \(\mu _{n}\,\mathop{\rightarrow}\limits^{w}\,\mu \) gives

$$\displaystyle\begin{array}{rcl} & & \limsup _{n\rightarrow \infty }\mu _{n}(A_{1} \times A_{2}) \leq \mu (F_{1} \times F_{2}) + 2\varepsilon \leq \mu (A_{1} \times A_{2}) + 2\varepsilon {}\\ & & \liminf _{n\rightarrow \infty }\mu _{n}(A_{1} \times A_{2}) \geq \mu (G_{1} \times G_{2}) - 2\varepsilon \geq \mu (A_{1} \times A_{2}) - 2\varepsilon. {}\\ \end{array}$$

Letting \(\varepsilon \rightarrow 0\) we get \(\lim _{n\rightarrow \infty }\mu _{n}(A_{1} \times A_{2}) =\mu (A_{1} \times A_{2})\). □ 

3.2 Proof of Theorem 1

Proof of the sufficiency of the conditions in Theorem 1.

We start with the strong mixing case. We want to show that

$$\displaystyle{ X_{n}(t) =\int _{0}^{t}h_{s}^{(n)}\mathrm{d}s\,\mathop{\rightarrow}\limits^{p}\,0,\quad \mbox{ for all $t \geq 0$}, }$$
(11)

where \(h_{s}^{(n)}\) is given by (3), implies the strong mixing of the integral-type measure-preserving transformation T.

Actually, we show by the characteristic function method that (11) implies that the finite dimensional marginals of \((\beta {,\beta }^{(n)})\) converge in distribution to the appropriate marginals of a 2d-dimensional Brownian motion. Then, since the sequence \((\beta {,\beta }^{(n)})_{n\geq 0}\) is tight, not only the finite dimensional marginals but also the sequence of processes \((\beta {,\beta }^{(n)})\) converges in distribution to a 2d-dimensional Brownian motion. By Proposition 13 this is equivalent to the strong mixing property of T.

Let \(\underline{t} = (t_{1},\ldots,t_{k})\) be a finite set of time points in [0, ∞). Then the characteristic function of \((\beta _{t_{1}},\ldots,\beta _{t_{k}},\beta _{t_{1}}^{(n)},\ldots,\beta _{t_{k}}^{(n)})\) can be written as

$$\displaystyle\begin{array}{rcl} \phi _{n}(\alpha )& =& \mathbf{E}\left (\exp \left \{i\int _{0}^{\infty }f\mathrm{d}\beta + i\int _{0}^{\infty }g\,\mathrm{d}{\beta }^{(n)}\right \}\right ) \\ & =& \mathbf{E}\left (\exp \left \{i\int _{0}^{\infty }(f + g{h}^{(n)})\mathrm{d}\beta \right \}\right ),{}\end{array}$$
(12)

where f, g are deterministic step functions obtained from the time vector \(\underline{t}\) and \(\alpha = (a_{1},\ldots,a_{k},b_{1},\ldots,b_{k})\); here \(a_{i},b_{j}\) are d-dimensional row vectors and

$$\displaystyle{f =\sum _{j=1}^{k}a_{j}\mathbb{1} _{[0,t_{j}]},\quad \text{and}\quad g =\sum _{j=1}^{k}b_{j}\mathbb{1} _{[0,t_{j}]}.}$$

We have to show that

$$\displaystyle{\phi _{n}(\alpha ) \rightarrow \phi (\alpha ) =\exp \left \{-\frac{1} {2}\int _{0}^{\infty }(\left \vert f\right \vert ^{2} + \left \vert g\right \vert ^{2})\right \}\quad \mbox{ as $n \rightarrow \infty $.}}$$

Using that \({\beta }^{(n)} =\int {h}^{(n)}\mathrm{d}\beta\) and

$$\displaystyle{M_{t} =\exp \left \{i\int _{0}^{t}(f(s) + g(s)h_{s}^{(n)})\mathrm{d}\beta _{s} + \frac{1} {2}\int _{0}^{t}\left \vert f(s) + g(s)h_{s}^{(n)}\right \vert ^{2}\mathrm{d}s\right \}}$$

is a uniformly integrable martingale starting from 1, we obtain that \(\mathbf{E}(M_{\infty }) = 1\) and

$$\displaystyle\begin{array}{rcl} & & \phi (\alpha ) =\phi (\alpha )\mathbf{E}(M_{\infty }) = \\ & & \qquad \quad \mathbf{E}\left (\exp \left \{i\int _{0}^{\infty }(f(s) + g(s)h_{s}^{(n)})\mathrm{d}\beta _{s} +\int _{0}^{\infty }g(s)h_{s}^{(n)}{f}^{T}(s)\mathrm{d}s\right \}\right ).{}\end{array}$$
(13)
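That M is indeed a uniformly integrable martingale can be checked directly: writing \(\theta _{s} = f(s) + g(s)h_{s}^{(n)}\), Itô’s formula gives \(\mathrm{d}M_{t} = iM_{t}\theta _{t}\,\mathrm{d}\beta _{t}\), so M is a local martingale, while

$$\displaystyle{\vert M_{t}\vert =\exp \left \{\frac{1} {2}\int _{0}^{t}\vert \theta _{s}\vert ^{2}\mathrm{d}s\right \}\leq \exp \left \{\int _{0}^{\infty }(\vert f\vert ^{2} + \vert g\vert ^{2})\right \}< \infty,}$$

a deterministic bound, since \({h}^{(n)}\) is orthogonal and f, g are step functions with compact support.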

As \(\exp \{i\int _{[0,\infty )}(f + g{h}^{(n)})\mathrm{d}\beta \}\) is of modulus one, we get from (12) and (13) that

$$\displaystyle{ \left \vert \phi (\alpha ) -\phi _{n}(\alpha )\right \vert \leq \mathbf{E}\left (\left \vert \exp \left \{\int _{0}^{\infty }g(s)h_{s}^{(n)}{f}^{T}(s)\mathrm{d}s\right \}- 1\right \vert \right ). }$$
(14)

Note that \({f}^{T}g\) is a matrix-valued function of the form \({f}^{T}g =\sum _{j=1}^{k}c_{j}\mathbb{1} _{[0,t_{j}]}\), hence

$$\displaystyle{\int _{0}^{\infty }g(s)h_{s}^{(n)}{f}^{T}(s)\mathrm{d}s =\int _{0}^{\infty }\mathop{Tr}\nolimits \left ({f}^{T}(s)g(s)h_{s}^{(n)}\right )\mathrm{d}s =\sum _{j=1}^{k}\mathop{ Tr}\nolimits \left (c_{j}X_{n}(t_{j})\right ),}$$

and \(\vert \int _{0}^{\infty }g(s)h_{s}^{(n)}{f}^{T}(s)\mathrm{d}s\vert \leq M =\int _{0}^{\infty }\left \vert f(s)\right \vert \left \vert g(s)\right \vert \mathrm{d}s < \infty \). With this notation, using \(\left \vert {e}^{x} - 1\right \vert \leq \left \vert x\right \vert {e}^{\left \vert x\right \vert }\) for \(x \in \mathbb{R}\) and \(\left \vert \mathop{Tr}\nolimits (ab)\right \vert \leq \left \|a\right \|_{HS}\left \|b\right \|_{HS}\), we can continue (14) to get

$$\displaystyle\begin{array}{rcl} \left \vert \phi _{n}(\alpha ) -\phi (\alpha )\right \vert & \leq & \mathbf{E}\left (\left \vert \exp \left \{\int _{0}^{\infty }g(s)h_{s}^{(n)}{f}^{T}(s)\mathrm{d}s\right \}- 1\right \vert \right ) \\ & \leq & {e}^{M}\mathbf{E}\left (\left \vert \sum _{j=1}^{k}\mathop{ Tr}\nolimits \left (c_{j}X_{n}(t_{j})\right )\right \vert \right ) \\ & \leq & {e}^{M}\sum _{j=1}^{k}\left \|c_{j}\right \|_{HS}\mathbf{E}\left (\left \|X_{n}(t_{j})\right \|_{HS}\right ).{}\end{array}$$
(15)

Since \(\left \|X_{n}(t_{j})\right \|_{HS} \leq t_{j}\sqrt{d}\) and \(X_{n}(t_{j})\,\mathop{\rightarrow}\limits^{p}\,0\) by assumption, we obtain that \(\phi _{n}(\alpha ) \rightarrow \phi (\alpha )\) and the statement follows.

To prove (ii) we use the same method. We introduce \(\kappa _{n}\), a random variable independent of the sequence \({(\beta }^{(n)})_{n\geq 0}\) and uniformly distributed on \(\left \{0,1,\ldots,n - 1\right \}\). Ergodicity can be formulated as the convergence in distribution of \((\beta {,\beta }^{(\kappa _{n})})\) to a 2d-dimensional Brownian motion. The joint characteristic function \(\psi _{n}\) of \((\beta _{t_{1}},\ldots,\beta _{t_{k}},\beta _{t_{1}}^{(\kappa _{n})},\ldots,\beta _{t_{k}}^{(\kappa _{n})})\) can be expressed, similarly as above, as

$$\displaystyle{\psi _{n} = \frac{1} {n}\sum _{\ell=0}^{n-1}\phi _{ \ell}}$$

where \(\phi _{\ell}\) is as in the first part of the proof. Using the estimate (15) obtained in the first part,

$$\displaystyle\begin{array}{rcl} \left \vert \phi (\alpha ) -\psi _{n}(\alpha )\right \vert & \leq & \frac{1} {n}\sum _{\ell=0}^{n-1}\left \vert \phi (\alpha ) -\phi _{\ell}(\alpha )\right \vert {}\\ &\leq & \frac{{e}^{M}} {n} \sum _{\ell=0}^{n-1}\sum _{j=1}^{k}\left \|c_{j}\right \|_{HS}\mathbf{E}\left (\left \|X_{\ell}(t_{j})\right \|_{HS}\right ) {}\\ & =& {e}^{M}\sum _{j=1}^{k}\left \|c_{j}\right \|_{HS}\mathbf{E}\left (\frac{1} {n}\sum _{\ell=0}^{n-1}\left \|X_{\ell}(t_{j})\right \|_{HS}\right ). {}\\ \end{array}$$

Now \(\left \vert \phi (\alpha ) -\psi _{n}(\alpha )\right \vert \rightarrow 0\) follows from our condition in part (ii) by the Cauchy–Schwarz inequality, since

$$\displaystyle{{\left (\frac{1} {n}\sum _{\ell=0}^{n-1}\left \|X_{\ell}(t_{j})\right \|_{HS}\right )}^{2} \leq \frac{1} {n}\sum _{\ell=0}^{n-1}\left \|X_{\ell}(t_{j})\right \|_{HS}^{2}\,\mathop{\rightarrow}\limits^{p}\,0}$$

and \(\frac{1} {n}\sum _{\ell=0}^{n-1}\left \|X_{\ell}(t_{j})\right \|_{HS}^{2} \leq t_{j}^{2}d\). □ 

Proof of the necessity of the conditions in Theorem 1.

Recall that the quadratic variation of an m-dimensional martingale \(M = (M_{1},\ldots,M_{m})\) is a matrix-valued process whose (j, k) entry is \(\langle M_{j},M_{k}\rangle \). The proof of the following fact can be found in [6], see Corollary 6.6 of Chap. VI.

Let \(({M}^{(n)})\) be a sequence of m-dimensional, continuous local martingales. If \({M}^{(n)}\,\mathop{\rightarrow}\limits^{d}\,M\) then \(({M}^{(n)},\langle {M}^{(n)}\rangle )\,\mathop{\rightarrow}\limits^{d}\,(M,\langle M\rangle )\).

By enlarging the probability space, we may assume that there is a random variable U, which is uniform on (0, 1) and independent of β. Denote by \(\kappa _{n} = [nU]\) the integer part of nU. Let \(\mathcal{G}\) be the smallest filtration satisfying the usual hypotheses, making U \(\mathcal{G}_{0}\)-measurable and β adapted to \(\mathcal{G}\). Then β is a Brownian motion in \(\mathcal{G}\); \((\beta {,\beta }^{(n)})\) and \((\beta {,\beta }^{(\kappa _{n})})\) are continuous martingales in \(\mathcal{G}\). The quadratic covariations are

$${\displaystyle{\langle \beta }^{(n)},\beta \rangle _{ t} =\int _{ 0}^{t}h_{ s}^{(n)}ds = X_{ n}(t),\quad \text{and}{\quad \langle \beta }^{(\kappa _{n})},\beta \rangle _{ t} =\sum _{ k=0}^{n-1}\mathbb{1} _{ (\kappa _{n}=k)}X_{k}(t).}$$

By Proposition 13, the strong mixing property and the ergodicity of T are respectively equivalent to the convergence in distribution of \((\beta {,\beta }^{(n)})\) and \((\beta {,\beta }^{(\kappa _{n})})\) to a 2d-dimensional Brownian motion.

By the fact just recalled, the strong mixing property of T implies that \({\langle \beta }^{(n)},\beta \rangle _{t}\,\mathop{\rightarrow}\limits^{d}\,0\), while its ergodicity ensures that \({\langle \beta }^{(\kappa _{n})},\beta \rangle _{t}\,\mathop{\rightarrow}\limits^{d}\,0\) for every t ≥ 0. Since the limit is deterministic, the convergence also holds in probability. The “only if” part of (i) follows immediately.

For the “only if” part of (ii) we add that

$${\displaystyle{\|\langle \beta }^{(\kappa _{n})},\beta \rangle _{t}\|_{HS}^{2} = \left \|\sum _{k=0}^{n-1}\mathbb{1} _{(\kappa _{n}=k)}X_{k}(t)\right \|_{HS}^{2} =\sum _{k=0}^{n-1}\mathbb{1} _{(\kappa _{n}=k)}\left \|X_{k}(t)\right \|_{HS}^{2}.}$$

Since \(\|X_{k}(t)\|_{HS}^{2} \leq {t}^{2}d\), the convergence in probability of \({\langle \beta }^{(\kappa _{n})},\beta \rangle _{t}\) to zero is also a convergence of \({\|\langle \beta }^{(\kappa _{n})},\beta \rangle _{t}\|_{HS}^{2}\) to zero in \({L}^{1}(\mathbf{P})\), which implies the convergence in \({L}^{1}(\mathbf{P})\) to zero of the conditional expectation

$$\displaystyle{\mathbf{E}\left ({\|\langle \beta }^{(\kappa _{n})},\beta \rangle _{ t}\|_{HS}^{2}\vert \sigma (\beta )\right ) = \frac{1} {n}\sum _{k=0}^{n-1}\left \|\right.X_{ k}(t)\left.\right \|_{HS}^{2}.}$$

The “only if” part of (ii) follows. □ 

3.3 First Results for the Lévy Transformation

We will use the following property of the Lévy transformation many times. Recall that \({\mathbf{T}}^{n}\beta =\beta \circ {\mathbf{T}}^{n}\) is also denoted by β (n). We will also use the notation \(h_{t}^{(n)} =\prod _{ k=0}^{n-1}\mathrm{sign}(\beta _{t}^{(k)})\) for n ≥ 1 and \({h}^{(0)} = 1\).

Lemma 15.

On an almost sure event the following property holds:

For any interval \(I \subset [0,\infty )\), point a ∈ I and integer n > 0, if

$$\displaystyle{ \sup _{t\in I}\vert \beta _{t} -\beta _{a}\vert <\min _{0\leq k<n}\vert ({\mathbf{T}}^{k}\beta )_{ a}\vert }$$
(16)

then

  1. (i)

    \({\mathbf{T}}^{k}\beta\) has no zero in I, for 0 ≤ k ≤ n − 1,

  2. (ii)

    \(({\mathbf{T}}^{k}\beta )_{t} - ({\mathbf{T}}^{k}\beta )_{a} = {h}^{(k)}_{a}\left (\beta _{t} -\beta _{a}\right )\) for t ∈ I and 0 ≤ k ≤ n.

    In particular, \(\vert ({\mathbf{T}}^{k}\beta )_{t} - ({\mathbf{T}}^{k}\beta )_{a}\vert = \vert \beta _{t} -\beta _{a}\vert \) for t ∈ I and 0 ≤ k ≤ n.

Proof.

In the next argument we only use that if β is a Brownian motion and L is its local time at level zero, then the set of points of increase of L is exactly the zero set of β, and \(\mathbf{T}\beta = \left \vert \beta \right \vert - L\) almost surely. Then there is Ω ′ of full probability such that on Ω ′ both properties hold for \({\mathbf{T}}^{n}\beta \) for all n ≥ 0 simultaneously.

Let \(N = N(I) =\inf \left \{n \geq 0\,:\, \mbox{ ${\mathbf{T}}^{n}\beta $ has a zero in $I$}\right \}\). Since \(\mathbf{T}\) acts as \(\mathbf{T}\beta = \left \vert \beta \right \vert - L\), if β has no zero in I we have

$$\displaystyle{(\mathbf{T}\beta )_{t} = \mathrm{sign}(\beta _{a})\beta _{t} - L_{a},\quad \mbox{ for $t \in I$}.}$$

But then \((\mathbf{T}\beta )_{t} - (\mathbf{T}\beta )_{a} = \mathrm{sign}(\beta _{a})(\beta _{t} -\beta _{a})\) and \(\left \vert (\mathbf{T}\beta )_{t} - (\mathbf{T}\beta )_{a}\right \vert = \left \vert \beta _{t} -\beta _{a}\right \vert \) for t ∈ I. Iterating this we obtain that

$$\displaystyle{ \begin{array}{cc} ({\mathbf{T}}^{k}\beta )_{t} - ({\mathbf{T}}^{k}\beta )_{a} & = {h}^{(k)}_{a}\left (\beta _{t} -\beta _{a}\right ), \\ \left \vert ({\mathbf{T}}^{k}\beta )_{t} - ({\mathbf{T}}^{k}\beta )_{a}\right \vert & = \left \vert \beta _{t} -\beta _{a}\right \vert, \end{array} \quad \mbox{ on $\left \{k \leq N\right \}$ and for $t \in I$}. }$$
(17)

Now assume that (16) holds. Then, necessarily n ≤ N as the other possibility would lead to a contradiction. Indeed, if N < n then N is finite, T N β has a zero t 0 in I and

$$\displaystyle{0 = \left \vert {\mathbf{T}}^{N}\beta _{t_{0}}\right \vert \geq \left \vert {\mathbf{T}}^{N}\beta _{a}\right \vert -\left \vert {\mathbf{T}}^{N}\beta _{t_{0}} -{\mathbf{T}}^{N}\beta _{a}\right \vert \geq \min _{0\leq k<n}\left \vert {\mathbf{T}}^{k}\beta _{a}\right \vert -\sup _{t\in I}\left \vert \beta _{t} -\beta _{a}\right \vert > 0.}$$

So (16) implies that n ≤ N, which proves (i) by the definition of N and also (ii) by (17). □ 

Combined with the density of the zeros, Lemma 15 implies Corollary 10 stated above.

Proof of Corollary 10.

The statement here is that \(\inf _{n\geq 0}\vert ({\mathbf{T}}^{n}\beta )_{t}\vert = 0\) for all t ≥ 0.

Assume that for ω ∈ Ω there is some t > 0 such that \(\inf _{n\geq 0}\vert ({\mathbf{T}}^{n}\beta )_{t}\vert \) is not zero at ω. Then there is a neighborhood I of t such that

$$\displaystyle{\sup _{s\in I}\left \vert \right.\beta _{s} -\beta _{t}\left.\right \vert <\inf _{k}\left \vert \right.({\mathbf{T}}^{k}\beta )_{ t}\left.\right \vert.}$$

Using Lemma 15, we would get that for this ω the iterated paths \({\mathbf{T}}^{k}\beta (\omega )\), k ≥ 0, have no zero in I. However, since

$$\displaystyle{\left \{t \geq 0\,:\, \exists k,\,({\mathbf{T}}^{k}\beta )_{t} = 0\right \}}$$

is dense in [0, ∞) almost surely by the result of Malric [8], ω belongs to the exceptional negligible set. □ 

Proof of Proposition 2.

Let C > 0 and s ∈ (0, 1) be as in the statement and assume that τ is a stopping time satisfying (a)–(c), that is, s < τ < 1, and for the almost surely finite random index ν we have \(\beta _{\tau }^{(\nu )} = 0\) and \(\min _{0\leq k<\nu }\vert \beta _{\tau }^{(k)}\vert > C\sqrt{1-\tau }\). Recall that S denotes the reflection of the trajectories after τ.

Set \(\varepsilon _{n} = h_{s}^{(n)}h_{1}^{(n)}\) for n > 0 and

$$\displaystyle{A_{C} = \left \{\sup _{t\in [\tau,1]}\vert \beta _{t}^{(0)} -\beta _{\tau }^{(0)}\vert \leq C\sqrt{1-\tau }\right \}.}$$

We show below that on the event \(A_{C} \cap \left \{\right.n >\nu \left.\right \}\), we have \(\varepsilon _{n} = -\varepsilon _{n} \circ S\). Since S preserves the Wiener measure P, this implies that

$$\displaystyle\begin{array}{rcl} \left \vert \mathbf{E}\left (\varepsilon _{n}\right )\right \vert = \frac{1} {2}\left \vert \mathbf{E}(\varepsilon _{n} +\varepsilon _{n} \circ S)\right \vert & \leq & \frac{1} {2}\mathbf{E}\left (\left \vert \varepsilon _{n} +\varepsilon _{n} \circ S\right \vert \right ) {}\\ & =& \mathbf{P}(\varepsilon _{n} =\varepsilon _{n} \circ S) {}\\ & \leq & \mathbf{P}(A_{C}^{c} \cup \left \{n \leq \nu \right \}) \leq \mathbf{P}(A_{C}^{c}) + \mathbf{P}(n \leq \nu ). {}\\ \end{array}$$

When n → ∞, this yields

$$\displaystyle{\limsup _{n\rightarrow \infty }\left \vert \mathbf{E}\left (h_{s}^{(n)}h_{1}^{(n)}\right )\right \vert \leq \mathbf{P}(A_{C}^{c}) = \mathbf{P}\left (\sup _{s\in [0,1]}\left \vert \beta _{s}\right \vert > C\right ),}$$

by the Markov property and the scaling property of the Brownian motion.
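Explicitly, by the strong Markov property at τ and Brownian scaling, the process \(u\mapsto (\beta _{\tau +u(1-\tau )} -\beta _{\tau })/\sqrt{1-\tau }\), u ∈ [0,1], is a standard Brownian motion independent of the path up to τ, whence

$$\displaystyle{\mathbf{P}(A_{C}^{c}) = \mathbf{P}\left (\sup _{t\in [\tau,1]}\vert \beta _{t} -\beta _{\tau }\vert > C\sqrt{1-\tau }\right ) = \mathbf{P}\left (\sup _{u\in [0,1]}\vert \beta _{u}\vert > C\right ).}$$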

It remains to show that on \(A_{C} \cap \left \{n >\nu \right \}\) the identity \(\varepsilon _{n} = -\varepsilon _{n} \circ S\) holds. By the definition of S, the trajectories of β and β ∘ S coincide on [0, τ], hence \({h}^{(k)}\) and \({h}^{(k)} \circ S\) coincide on [0, τ] for \(k > 0\). In particular, \({h}^{(k)}_{\tau } = {h}^{(k)}_{\tau } \circ S\) and \({h}^{(k)}_{s} = {h}^{(k)}_{s} \circ S\) for all k, since τ > s.

On the event \(A_{C}\) we can apply Lemma 15 with I = [τ, 1], a = τ and n = ν to both β and β ∘ S to get that

$$\displaystyle{ \begin{array}{cc} \beta _{t}^{(k)} -\beta _{\tau }^{(k)} & = {h}^{(k)}_{\tau }(\beta _{t} -\beta _{\tau }), \\ \beta _{t}^{(k)} \circ S -\beta _{\tau }^{(k)} \circ S & = -{h}^{(k)}_{\tau }(\beta _{t} -\beta _{\tau }), \end{array} \quad k \leq \nu,\,t \in [\tau,1]. }$$
(18)

We have used that \({h}^{(k)}_{\tau } = {h}^{(k)}_{\tau } \circ S\) and \(\beta _{t} \circ S -\beta _{\tau }\circ S = -(\beta _{t} -\beta _{\tau })\) for t ≥ τ by the definition of S.

Using that on A C

$$\displaystyle{\vert \beta _{\tau }^{(k)}\vert > C\sqrt{1-\tau }\geq \left \vert \beta _{1} -\beta _{\tau }\right \vert,\quad \mbox{ for $k <\nu $},}$$

we get immediately from (18) that \(\mathrm{sign}(\beta _{1}^{(k)}) = \mathrm{sign}(\beta _{1}^{(k)}) \circ S\) for k < ν.

Since \(\beta _{\tau }^{(\nu )} = (\beta _{\tau }^{(\nu )}) \circ S = 0\), for k = ν (18) gives that \({\beta }^{(\nu )}\) and \({\beta }^{(\nu )} \circ S\) coincide on [0, τ] and are opposite of each other on [τ, 1]. Hence, \({\beta }^{(k)}\) and \({\beta }^{(k)} \circ S\) coincide on [0, 1] for every k > ν.

As a result on the event A C ,

$$\displaystyle{\mathrm{sign}(\beta _{1}^{(k)})\circ S = \left \{\begin{array}{@{}l@{\quad }l@{}} \mathrm{sign}(\beta _{1}^{(k)}), \quad &\mbox{ if $k\neq \nu $}, \\ -\mathrm{sign}(\beta _{1}^{(k)}),\quad &\mbox{ if $k =\nu $} \end{array} \right.}$$

hence \(h_{1}^{(n)} \circ S = -h_{1}^{(n)}\) on \(A_{C} \cap \left \{n >\nu \right \}\). Since \(h_{s}^{(n)} \circ S = h_{s}^{(n)}\) for all n, we are done. □ 

Proof of Proposition 3.

Let C > 0 and s ∈ (0, 1). Call τ the infimum of those time points that satisfy (b) and (c) of Proposition 2 with C replaced by 2C, namely \(\tau =\inf _{n}\tau _{n}\), where

$$\displaystyle\begin{array}{rcl} \tau _{n}& =& \inf \left \{t > s\,:\,\beta _{t}^{(n)} = 0,\,\forall k < n,\,\vert \beta _{t}^{(k)}\vert > 2C\sqrt{(1 - t) \vee 0}\right \}. {}\\ \end{array}$$

By assumption \(\tau _{n} < 1\) for some n ≥ 0. Furthermore, there exists some finite index ν such that \(\tau =\tau _{\nu }\). Otherwise, there would exist a subsequence \((\tau _{n})_{n\in D}\) bounded by 1 and converging to τ. For every k one has k < n for infinitely many n ∈ D, hence \(\vert \beta _{\tau _{n}}^{(k)}\vert \geq 2C\sqrt{1 -\tau _{n}}\) by the choice of D. Letting n → ∞ yields \(\left \vert \beta _{\tau }^{(k)}\right \vert \geq 2C\sqrt{1-\tau } > 0\) for every k. This can happen only with probability zero by Corollary 10.

As ν is almost surely finite and τ = τ ν we get that \(\beta _{\tau }^{(\nu )} = 0\) and

$$\displaystyle{\inf \{\vert \beta _{\tau }^{(k)}\vert \,:\, k <\nu \}\geq 2C\sqrt{1-\tau } > C\sqrt{1-\tau }.}$$

We have that τ > s holds almost surely, since s is not a zero of any \({\beta }^{(n)}\) almost surely, so τ satisfies (a)–(c) of Proposition 2. □ 

3.4 Easy Steps of the Proof of Theorems 7 and 8

The main step of the proof of these theorems, which will be given in Sects. 3.5 and 3.6, is that if Y > 0 almost surely (or X < 1 almost surely, respectively), then for any C > 0, s ∈ (0, 1) the set of the bad time points \([0,\infty ) \setminus A(C,s)\) is almost surely porous at 1. Then Corollary 6 applies and the Lévy transformation \(\mathbf{T}\) is strongly mixing.

If Y > 0 does not hold almost surely, then either Y = 0 almost surely or Y is a non-constant variable invariant for \(\mathbf{T}\); hence in the latter case the Lévy transformation \(\mathbf{T}\) is not ergodic. These are the first two cases in Theorem 7. A similar analysis applies to X and Theorem 8.

To show the invariance of Y recall that \(\gamma _{n}^{{\ast}}\rightarrow 1\) by the density theorem of the zeros due to Malric [8] and \(\gamma _{0} < 1\), both properties holding almost surely. Hence, for every large enough n, \(\gamma _{n+1}^{{\ast}} >\gamma _{0}\), therefore \(\gamma _{n+1}^{{\ast}} =\gamma _{n}^{{\ast}}\circ \mathbf{T}\),

$$\displaystyle{Z_{n}(\gamma _{n}^{{\ast}}) \circ \mathbf{T} =\min _{ 0\leq k<n}\vert \beta _{\gamma _{n}^{{\ast}}\circ \mathbf{T}}^{(k+1)}\vert =\min _{ 1\leq k<n+1}\vert \beta _{\gamma _{n+1}^{{\ast}}}^{(k)}\vert \geq Z_{ n+1}(\gamma _{n+1}^{{\ast}}),}$$

and

$$\displaystyle\begin{array}{rcl} \frac{Z_{n}(\gamma _{n}^{{\ast}})} {\sqrt{1 -\gamma _{ n }^{{\ast}}}} \circ \mathbf{T} \geq \frac{Z_{n+1}(\gamma _{n+1}^{{\ast}})} {\sqrt{1 -\gamma _{ n+1 }^{{\ast}}}}.& & {}\\ \end{array}$$

Taking the limit superior we obtain that \(Y \circ \mathbf{T} \geq Y \). Using that \(\mathbf{T}\) is measure-preserving we conclude Y ∘ T = Y almost surely, that is, Y is \(\mathbf{T}\)-invariant.

To show the invariance of X directly, without referring to Theorem  9, we use Corollary 10, which says that almost surely \(\inf _{n\geq 0}\vert \beta _{t}^{(n)}\vert = 0\) for all t ≥ 0. Thus \(Z_{n} \rightarrow 0\) and, since \(\vert \beta _{1}^{(0)}\vert > 0\) almost surely, for every large enough n, \(Z_{n} < \vert \beta _{1}^{(0)}\vert \), therefore \((Z_{n+1}/Z_{n}) \circ \mathbf{T} = Z_{n+2}/Z_{n+1}\). Hence \(X \circ \mathbf{T} = X\).
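The middle step here is the identity

$$\displaystyle{Z_{n} \circ \mathbf{T} =\min _{0\leq k<n}\vert \beta _{1}^{(k+1)}\vert =\min _{1\leq k\leq n}\vert \beta _{1}^{(k)}\vert = Z_{n+1},\quad \mbox{ whenever $Z_{n+1} < \vert \beta _{1}^{(0)}\vert $},}$$

since in that case the minimum defining \(Z_{n+1}\) is not attained at k = 0.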

3.5 Proof of Theorem 7

Fix C > 0 and s ∈ (0, 1) and consider the random set

$$\displaystyle\begin{array}{rcl} & & \tilde{A}(C,s) = \left \{t > 0: \mbox{ there exists $n \geq 1$ such that $st <\gamma _{n}(t) =\gamma _{ n}^{{\ast}}(t)$ and }\right. \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \left.\mbox{ $\min _{0\leq k<n}\vert \beta _{\gamma _{n}(t)}^{(k)}\vert > C\sqrt{t -\gamma _{n } (t)}$}\right \} \subset A(C,s). {}\end{array}$$
(19)

The difference between A(C, s) and \(\tilde{A}(C,s)\) is that in the latter case we only consider last zeros satisfying \(\gamma _{n}(t) >\gamma _{k}(t)\) for \(k = 0,\ldots,n - 1\), whereas in the case of A(C, s) we consider any zero of the iterated paths. Note also that here n ≥ 1, so the zeros of β itself are not used, while n can be zero in the definition of A(C, s).

We prove below the next proposition.

Proposition 16.

Almost surely on the event \(\left \{Y > 0\right \}\), the closed set \([0,\infty ) \setminus \tilde{ A}(C,s)\) is porous at 1 for any C > 0 and s ∈ (0,1).

This result readily implies that if Y > 0 almost surely, then \([0,\infty ) \setminus \tilde{ A}(C,s)\) and the smaller random closed set \([0,\infty ) \setminus A(C,s)\) are both almost surely porous at 1 for any C > 0 and s ∈ (0, 1). Then the strong mixing property of T follows by Corollary 6.

It remains to show that Y = ∞ almost surely on the event \(\left \{Y > 0\right \}\), which proves that \(Y \in \left \{0,\infty \right \}\) almost surely. This is the content of the next proposition.

Proposition 17.

Set

$$\displaystyle{\tilde{A}(s) =\bigcap _{C>0}\tilde{A}(C,s),\quad \mbox{ for $s \in (0,1)$ and}\quad \tilde{A} =\bigcap _{s\in (0,1)}\tilde{A}(s).}$$

Then the events \(\left \{Y > 0\right \}\), \(\left \{Y = \infty \right \}\), \(\{1 \in \tilde{ A}(s)\}\), s ∈ (0,1), and \(\{1 \in \tilde{ A}\}\) are equal up to null sets.

Proof of Proposition 17.

Recall that \(Y =\limsup _{n\rightarrow \infty }Y _{n}\) with

$$\displaystyle{Y _{n} = \frac{\min _{0\leq k<n}\vert \beta _{\gamma _{n}^{{\ast}}}^{(k)}\vert } {\sqrt{1 -\gamma _{ n }^{{\ast}}}}.}$$

With this notation, on \(\{1 \in \tilde{ A}(C,s)\}\) there is a random n ≥ 1 such that \(Y _{n} > C\). Here, the restriction n ≥ 1 in the definition of \(\tilde{A}(C,s)\) is useful. This way, we get that \(\sup _{n\geq 1}Y _{n} \geq C\) on \(\{1 \in \tilde{ A}(C,s)\}\) and \(\sup _{n\geq 1}Y _{n} = \infty \) on \(\{1 \in \tilde{ A}(s)\}\). Since \(Y _{n} < \infty \) almost surely for all \(n \geq 1\), we also have that \(Y = \infty \) almost surely on \(\{1 \in \tilde{ A}(s)\}\).

Next, the law of the random closed set \([0,\infty ) \setminus \tilde{ A}(C,s)\) is invariant by scaling, hence by Proposition 16 and Lemma 5,

$$\displaystyle{\left \{Y > 0\right \}\subset \left \{\mbox{ $[0,\infty ) \setminus \tilde{ A}(C,s)$ is porous at $1$}\right \}\subset \left \{1 \in \tilde{ A}(C,s)\right \},\quad \text{almost surely}.}$$

The inclusions \(\tilde{A}(C,s) \subset \tilde{ A}(C^{\prime},s)\) for C > C ′ and \(\tilde{A}(C,s) \subset \tilde{ A}(C,s^{\prime})\) for 0 < s ′ < s < 1 yield

$$\displaystyle\begin{array}{rcl} \tilde{A} =\bigcap _{ k=1}^{\infty }\tilde{A}(k,1 - 1/k).& & {}\\ \end{array}$$

Thus, \(\left \{Y > 0\right \}\subset \{ 1 \in \tilde{ A}\}\) almost surely.

Hence, up to null events,

$$\displaystyle{\left \{Y > 0\right \}\subset \{ 1 \in \tilde{ A}\} \subset \{ 1 \in \tilde{ A}(s)\} \subset \left \{Y = \infty \right \}\subset \left \{Y > 0\right \}}$$

for any s ∈ (0, 1), which completes the proof. □ 

Proof of Proposition 16.

By Malric’s density theorem of zeros, recalled in (1), \(\gamma _{n}^{{\ast}} \rightarrow {1}^{-}\) almost surely. Hence it is enough to show that on the event \(\left \{Y > 0\right \} \cap \left \{\gamma _{n}^{{\ast}}\rightarrow {1}^{-}\right \}\) the set \(\tilde{H} = [0,\infty ) \setminus \tilde{ A}(C,s)\) is porous at 1.

Let \(\xi = Y/2\) and

$$\displaystyle{I_{n} = (\gamma _{n}^{{\ast}},\gamma _{ n}^{{\ast}} + r_{ n}),\quad \text{where}\quad r_{n} ={ \left (\frac{\xi \wedge C} {C} \right )}^{2}(1 -\gamma _{ n}^{{\ast}}).}$$

We claim that if

$$\displaystyle{ \xi > 0,\quad \gamma _{n} =\gamma _{n}^{{\ast}} > s,\quad \text{and}\quad \vert \beta _{\gamma _{n}}^{(k)}\vert >\xi \sqrt{1 -\gamma _{n}},\quad \mbox{ for $0 \leq k < n$}, }$$
(20)

then \(I_{n} \subset \tilde{ A}(C,s) \cap (\gamma _{n}^{{\ast}},1)\) with \(r_{n}/(1 -\gamma _{n}^{{\ast}}) > 0\) not depending on n. Since on \(\left \{Y > 0\right \} \cap \left \{\gamma _{n}^{{\ast}}\rightarrow {1}^{-}\right \}\) the condition (20) holds for infinitely many n, we obtain the porosity at 1.
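Quantitatively, writing \(\varepsilon _{n} = 1 -\gamma _{n}^{{\ast}}\rightarrow 0\), the interval \(I_{n} \subset (1 -\varepsilon _{n},1 +\varepsilon _{n}) \setminus \tilde{ H}\) has length \(r_{n} = {\left ((\xi \wedge C)/C\right )}^{2}\varepsilon _{n}\), so in the notation of Definition 4

$$\displaystyle{\limsup _{\varepsilon \rightarrow 0+}\frac{f(1,\varepsilon )} {\varepsilon } \geq {\left (\frac{\xi \wedge C} {C} \right )}^{2} > 0,\quad \mbox{ on $\left \{Y > 0\right \} \cap \left \{\gamma _{n}^{{\ast}}\rightarrow {1}^{-}\right \}$}.}$$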

So assume that (20) holds for n at a given ω. As \(I_{n} \subset (\gamma _{n}^{{\ast}},1)\), for t ∈ I n we have that s < t < 1 and \(\mathit{st} < s <\gamma _{n}(t) =\gamma _{ n}^{{\ast}}(t) =\gamma _{n} =\gamma _{ n}^{{\ast}}\), that is, the first requirement in (19): \(\mathit{st} <\gamma _{n}(t) =\gamma _{ n}^{{\ast}}(t)\) holds for any t ∈ I n . For the other requirement, note that \(t -\gamma _{n}(t) < r_{n} \leq {(1 -\gamma _{n}^{{\ast}})\xi }^{2}/{C}^{2}\) yields

$$\displaystyle{\min _{0\leq k<n}\vert \beta _{\gamma _{n}}^{(k)}\vert >\xi \sqrt{1 -\gamma _{ n}^{{\ast}}} > C\sqrt{t -\gamma _{n } (t)},\quad \mbox{ for $t \in I_{n}$}.}$$

 □ 

3.6 Proof of Theorem 8

Compared to the proof of Theorem 7, in the proof of Theorem 8 we consider an even larger set \([0,\infty ) \setminus \breve{ A}(C,s)\), where

$$\displaystyle\begin{array}{rcl} & & \breve{A}(C,s) = \left \{t > 0: \exists n \geq 1,\,st <\gamma _{n}(t) =\gamma _{ n}^{{\ast}}(t),\right. {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \min _{0\leq k<n}\vert \beta _{\gamma _{n}(t)}^{(k)}\vert > C\sqrt{t -\gamma _{ n}(t)}, {}\\ & & \qquad \qquad \qquad \qquad \qquad \left.\max _{u\in [\gamma _{n}(t),t]}\vert \beta _{u} -\beta _{\gamma _{n}(t)}\vert < \sqrt{t -\gamma _{n } (t)}\right \} \subset \tilde{ A}(C,s) \subset A(C,s). {}\\ \end{array}$$

Here we also require that the fluctuation of β between γ n (t) and t is not too big.

We will prove the next proposition below.

Proposition 18.

Let C > 1 and s ∈ (0,1). Then almost surely on the event \(\left \{X < 1\right \}\), the closed set \([0,\infty ) \setminus \breve{ A}(C,s)\) is porous at 1.

This result implies that if X < 1 almost surely, then for any C > 0, s ∈ (0, 1) the random closed set \([0,\infty ) \setminus \breve{ A}(C,s)\) is porous at 1 almost surely, and so is the smaller set \([0,\infty ) \setminus A(C,s)\). Then the strong mixing of T follows from Corollary 6.

To complete the proof of Theorem 8, it remains to show that X = 0 almost surely on the event \(\left \{X < 1\right \}\). This is the content of the next proposition. In order to prove Theorem 9 we introduce a new parameter L > 0:

$$\displaystyle\begin{array}{rcl} & & \breve{A}_{L}(C,s) = \left \{t > 0: \exists n \geq 1,\,st <\gamma _{n}(t) =\gamma _{ n}^{{\ast}}(t),\right. {}\\ & & \qquad \quad \left.\min _{0\leq k<n}\vert \beta _{\gamma _{n}(t)}^{(k)}\vert > C\sqrt{t -\gamma _{ n}(t)},\,\max _{u\in [\gamma _{n}(t),t]}\vert \beta _{u} -\beta _{\gamma _{n}(t)}\vert < L\sqrt{t -\gamma _{n } (t)}\right \} {}\\ \end{array}$$

Then \(\breve{A}(C,s) =\breve{ A}_{1}(C,s)\).

Proposition 19.

Fix \(L \geq 1\) and set

$$\displaystyle{\breve{A}_{L}(s) =\bigcap _{C>0}\breve{A}_{L}(C,s),\quad \mbox{ for $s \in (0,1)$ and}\quad \breve{A}_{L} =\bigcap _{s\in (0,1)}\breve{A}_{L}(s).}$$

Then the events \(\left \{X = 0\right \}\), \(\left \{X < 1\right \}\), \(\{1 \in \breve{ A}_{L}\}\) and \(\{1 \in \breve{ A}_{L}(s)\}\), s ∈ (0,1), are equal up to null sets.

Proof of Proposition 19.

Fix s ∈ (0, 1), L ≥ 1 and let C > L. Assume that \(1 \in \breve{ A}_{L}(C,s)\). Let n > 0 be an index which witnesses the containment. Then, as C > L, we can apply Lemma 15 to see that the absolute increments of \({\beta }^{(0)},\ldots {,\beta }^{(n)}\) between \(\gamma _{n}\) and 1 are the same. This implies that

$$\displaystyle{\vert \beta _{1}^{(k)}\vert \geq \vert \beta _{\gamma _{ n}}^{(k)}\vert -\vert \beta _{ 1}^{(k)} -\beta _{\gamma _{ n}}^{(k)}\vert = \vert \beta _{\gamma _{ n}}^{(k)}\vert -\vert \beta _{ 1} -\beta _{\gamma _{n}}\vert,\quad \mbox{ for $0 \leq k \leq n$},}$$

hence

$$\displaystyle\begin{array}{rcl} Z_{n} \geq \min _{0\leq k<n}\vert \beta _{\gamma _{n}}^{(k)}\vert -\vert \beta _{ 1} -\beta _{\gamma _{n}}\vert > C\sqrt{1 -\gamma _{n}} - L\sqrt{1 -\gamma _{n}}& & {}\\ \end{array}$$

whereas

$$\displaystyle\begin{array}{rcl} Z_{n+1} \leq \vert \beta _{1}^{(n)}\vert = \vert \beta _{ 1}^{(n)} -\beta _{\gamma _{ n}}^{(n)}\vert = \vert \beta _{ 1} -\beta _{\gamma _{n}}\vert < L\sqrt{1 -\gamma _{n}}.& & {}\\ \end{array}$$

Thus

$$\displaystyle\begin{array}{rcl} \inf _{n\geq 0}\frac{Z_{n+1}} {Z_{n}} \leq \frac{L} {C - L},\quad \mbox{ on $\left \{1 \in \breve{ A}_{L}(C,s)\right \}$ almost surely},& & {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} \inf _{n\geq 0}\frac{Z_{n+1}} {Z_{n}} = 0,\quad \mbox{ on $\left \{1 \in \breve{ A}_{L}(s)\right \}$ almost surely}.& &{}\end{array}$$
(21)

But \(Z_{n+1}/Z_{n} > 0\) almost surely for all n, hence \(X =\liminf _{n\rightarrow \infty }Z_{n+1}/Z_{n} = 0\) almost surely on \(\{1 \in \breve{ A}_{L}(s)\}\). This proves \(\{1 \in \breve{ A}_{L}(s)\} \subset \left \{X = 0\right \}\).

Next, the law of the random closed set \([0,\infty ) \setminus \breve{ A}_{L}(C,s)\) is clearly invariant under scaling, hence by Proposition  18 and Lemma 5

$$\displaystyle{ \left \{X < 1\right \}\subset \left \{\mbox{ $[0,\infty ) \setminus \breve{ A}_{L}(C,s)$ is porous at 1}\right \}= \left \{1 \in \breve{ A}_{L}(C,s)\right \}, }$$
(22)

each relation holding up to a null set.

The inclusion \(\breve{A}_{L}(C^{\prime},s^{\prime}) \subset \breve{ A}_{L}(C,s)\) for \(C^{\prime} \geq C > 0\) and \(0 < s \leq s^{\prime} < 1\) yields

$$\displaystyle\begin{array}{rcl} \breve{A}_{L} =\bigcap _{ k=1}^{\infty }\breve{A}_{ L}(k,1 - 1/k).& & {}\\ \end{array}$$

Hence, \(\{X < 1\} \subset \{ 1 \in \breve{ A}_{L}\} \subset \{ 1 \in \breve{ A}_{L}(s)\}\) almost surely, which together with \(\{1 \in \breve{ A}_{L}(s)\} \subset \left \{X = 0\right \}\) completes the proof. □ 

To prove Proposition 18 we need a corollary of the Blumenthal 0 − 1 law.

Corollary 20.

Let (x n ) be a sequence of non-zero numbers tending to zero, \(\mathbf{P}\) the Wiener measure on \(\mathcal{C}[0,\infty )\) and \(D \subset \mathcal{C}[0,\infty )\) be a Borel set such that \(\mathbf{P}(D) > 0\).

Then \(\mathbf{P}(\Theta _{x_{n}}^{-1}(D)\text{ i.o.}) = 1\).

Proof.

Recall that the canonical process on \(\mathcal{C}[0,\infty )\) was denoted by β. We also use the notation \(\mathcal{B}_{t} =\sigma \left \{\right.\beta _{s}\,:\, s \leq t\left.\right \}\).

Since \(\bigcup _{t>0}\mathcal{B}_{t}\) generates the Borel σ-field, we can approximate D with \(D_{n} \in \mathcal{B}_{t_{n}}\) such that \(\sum \mathbf{P}(D\bigtriangleup D_{n}) < \infty \), where \(\bigtriangleup \) denotes the symmetric difference operator. Passing to a subsequence if necessary, we may assume that \(t_{n}x_{n}^{2} \rightarrow 0\). Then, since \(\Theta _{x_{n}}^{-1}(D_{n}) \in \mathcal{B}_{t_{n}x_{n}^{2}}\) and the event \(\left \{\right.\Theta _{x_{n}}^{-1}(D_{n}),\text{ i.o.}\left.\right \}\) is unchanged when finitely many terms are discarded, this event belongs to \(\cap _{s>0}\mathcal{B}_{s}\), and the Blumenthal 0 − 1 law ensures that \(\mathbf{P}(\Theta _{x_{n}}^{-1}(D_{n}),\text{ i.o.}) \in \left \{\right.0,1\left.\right \}\).

But \(\sum \mathbf{P}(\Theta _{x_{n}}^{-1}(D)\bigtriangleup \Theta _{x_{n}}^{-1}(D_{n})) < \infty \) since \(\Theta _{x_{n}}\) preserves P. The Borel–Cantelli lemma shows that, almost surely, \(\Theta _{x_{n}}^{-1}(D)\bigtriangleup \Theta _{x_{n}}^{-1}(D_{n})\) occurs for only finitely many n, so the events \(\left \{\right.\Theta _{x_{n}}^{-1}(D),\text{ i.o.}\left.\right \}\) and \(\left \{\right.\Theta _{x_{n}}^{-1}(D_{n}),\text{ i.o.}\left.\right \}\) coincide up to a null set. Hence \(\mathbf{P}(\Theta _{x_{n}}^{-1}(D),\text{ i.o.}) \in \left \{\right.0,1\left.\right \}\).

Fatou's lemma applied to the indicator functions of \(\Theta _{x_{n}}^{-1}{(D)}^{c}\) yields

$$\displaystyle{\mathbf{P}(\Theta _{x_{n}}^{-1}(D),\text{ i.o.}) \geq \limsup _{ n\rightarrow \infty }\mathbf{P}(\Theta _{x_{n}}^{-1}(D)) = \mathbf{P}(D) > 0.}$$

Hence \(\mathbf{P}(\Theta _{x_{n}}^{-1}(D),\text{ i.o.}) = 1\). □ 
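The scaling mechanism behind Corollary 20 can also be illustrated by simulation. The sketch below assumes that \(\Theta _{x}\) denotes the Brownian scaling \((\Theta _{x}w)(t) = w(x^{2}t)/x\) (an assumption about the notation of the earlier sections) and takes the concrete set \(D = \left \{\right.w\,:\,\sup _{t\leq 1}w(t) > 1\left.\right \}\), for which \(\Theta _{x}^{-1}(D) = \left \{\right.w\,:\,\sup _{t\leq x^{2}}w(t) > x\left.\right \}\); all numerical parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps = 2 ** 20                  # grid points on [0, 1]
xs = 2.0 ** -np.arange(1, 9)       # a sequence x_n tending to zero
trials = 400

hit_each = np.zeros(len(xs))
hit_any = 0
for _ in range(trials):
    w = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, n_steps ** -0.5, n_steps))))
    # Theta_x^{-1}(D) = {sup of w on [0, x^2] exceeds x}
    occurred = np.array([w[: int(x * x * n_steps) + 1].max() > x for x in xs])
    hit_each += occurred
    hit_any += occurred.any()

print(hit_each / trials)  # each entry is close to P(D) = 2 P(beta_1 > 1), about 0.317
print(hit_any / trials)   # the union of the scaled events is already far more likely
```

Each scaled event has the same probability because \(\Theta _{x}\) preserves the Wiener measure, while the corollary asserts that almost surely infinitely many of them occur; the growing frequency of the union as more \(x_{n}\) are included is the finite-sample shadow of this.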

Proof of Proposition 18.

We work on the event \(\left \{\right.X < 1\left.\right \}\). Set \(\xi = (1/X - 1)/2\). Then \(1 <\xi +1 < 1/X\) and

$$\displaystyle{1 < 1+\xi <\limsup _{n\rightarrow \infty } \frac{Z_{n}} {Z_{n+1}} =\limsup _{n\rightarrow \infty } \frac{Z_{n}} {\vert \beta _{1}^{(n)}\vert }.}$$

Hence

$$\displaystyle\begin{array}{rcl} \min _{0\leq k<n}\vert \beta _{1}^{(k)}\vert = Z_{ n} > (1+\xi )\vert \beta _{1}^{(n)}\vert,\quad \mbox{ for infinitely many $n$}.& & {}\\ \end{array}$$

Let \(n_{1} < n_{2} < \cdots \) be the enumeration of those indices, and set \(x_{k} = {h}^{(n_{k})}_{ 1}\beta _{1}^{(n_{k})}\) for k ≥ 1. The inequality \(\vert \beta _{1}^{(n_{k})}\vert < {(1+\xi )}^{-1}\vert \beta _{ 1}^{(n_{k-1})}\vert \) shows that x k  → 0.

Call B the Brownian motion defined by \(B_{t} =\beta _{t+1} -\beta _{1}\) and for real numbers δ, C > 0 set

$$\displaystyle\begin{array}{rcl} & & D(\delta,C) = \left \{w \in \mathcal{C}[0,\infty ): \mbox{ $\sup _{t\leq 2}\vert w(t)\vert < 1+\delta $;}\right. {}\\ & & \qquad \qquad \qquad \mbox{ $w + 1$ has a zero in $[0,1]$, but no zero in $(1,2]$;} {}\\ & & \qquad \qquad \left.\max _{t\in [\gamma,2]}\vert w(t) + 1\vert \leq \frac{\delta \wedge C} {2C} \mbox{, where $\gamma $ is the last zero of $w + 1$ in $[0,2]$}\right \}. {}\\ \end{array}$$

For each δ, C > 0 the Wiener measure puts positive, although possibly very small, probability on D(δ, C). Since the random variables x k and ξ are \(\mathcal{B}_{1}\)-measurable and B is independent of \(\mathcal{B}_{1}\), Corollary 20 applies conditionally on \(\mathcal{B}_{1}\) and yields that the Brownian motion B takes values in the random sets \(\Theta _{x_{k}}^{-1}D(\xi,C)\) for infinitely many k, almost surely on \(\left \{\right.\xi > 0\left.\right \}= \left \{\right.X < 1\left.\right \}\).

For k ≥ 1 let \(\tilde{\gamma }_{k} =\gamma _{n_{k}}(1 + x_{k}^{2})\), that is, the last zero of \({\beta }^{(n_{k})}\) before \(1 + x_{k}^{2}\) and set

$$\displaystyle\begin{array}{rcl} I_{k} = (\tilde{\gamma }_{k} + \tfrac{1} {2}r_{k},\tilde{\gamma }_{k} + r_{k}),\quad \text{where}\quad r_{k} ={ \left (\frac{\xi \wedge C} {C} \right )}^{2}x_{ k}^{2}.& & {}\\ \end{array}$$

This interval is similar to the one used in the proof of Proposition 16, but now we use only the right half of the interval \((\tilde{\gamma }_{k},\tilde{\gamma }_{k} + r_{k})\).

Next we show that

$$\displaystyle{ B \in \Theta _{x_{k}}^{-1}D(\xi,C),\quad \text{and}\quad s \leq {(1 + x_{ k}^{2})}^{-1} }$$
(23)

implies

$$\displaystyle\begin{array}{rcl} I_{k} \subset \breve{ A}(C,s) \cap (1,1 + 2x_{k}^{2}).& &{}\end{array}$$
(24)

By definition \(r_{k}/(4x_{k}^{2})\), the ratio of the lengths of I k and \((1,1 + 2x_{k}^{2})\), does not depend on k. Since (23) holds for infinitely many k almost surely on \(\left \{\right.X < 1\left.\right \}\), the porosity of \([0,\infty ) \setminus \breve{ A}(C,s)\) at 1 follows almost surely on \(\left \{\right.X < 1\left.\right \}\).
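Spelling out this constant from the definition of \(r_{k}\):

$$\displaystyle{\frac{\vert I_{k}\vert } {\vert (1,1 + 2x_{k}^{2})\vert } = \frac{r_{k}/2} {2x_{k}^{2}} = \frac{1} {4}{\left (\frac{\xi \wedge C} {C} \right )}^{2},}$$

which is positive on \(\left \{\right.\xi > 0\left.\right \}\) and depends only on ξ and C, exactly as the porosity argument requires.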

So assume that (23) holds for k at a given ω. The key observations are that then

$$\displaystyle\begin{array}{rcl} \beta _{1+t}^{(\ell)} -\beta _{ 1}^{(\ell)} = {h}^{(\ell)}_{ 1}B_{t},& & \mbox{ for $0 \leq \ell\leq n_{k}$, $0 \leq t \leq 2x_{k}^{2}$},{}\end{array}$$
(25)
$$\displaystyle\begin{array}{rcl} \gamma _{\ell}(t) < 1,& & \mbox{ for $0 \leq \ell< n_{k}$ and $1 \leq t \leq 1 + 2x_{k}^{2}$},{}\end{array}$$
(26)
$$\displaystyle\begin{array}{rcl} \gamma _{n_{k}}(t) =\tilde{\gamma } _{k} > 1,& & \mbox{ for $t \in [\tilde{\gamma }_{k},1 + 2x_{k}^{2}]$}.{}\end{array}$$
(27)

First, we prove (25)–(27) and then with their help we derive \(I_{k} \subset \breve{ A}(C,s)\).

To get (25) and (26) we apply Lemma 15 to \(I = [1,1 + 2x_{k}^{2}]\), n = n k and \(a = 1\). This can be done since we have

$$\displaystyle\begin{array}{rcl} \min _{0\leq \ell<n_{k}}\vert \beta _{1}^{(\ell)}\vert \quad > (1+\xi )\vert x_{ k}\vert,\quad \mbox{ by the choice of $n_{k}$},& &{}\end{array}$$
(28)
$$\displaystyle\begin{array}{rcl} \max _{t\in [1,1+2x_{k}^{2}]}\vert \beta _{t} -\beta _{1}\vert \quad < (1+\xi )\vert x_{k}\vert,\quad \mbox{ since $\Theta _{x_{k}}B \in D(\xi,C)$ by (23)}.& &{}\end{array}$$
(29)

Part (i) of Lemma 15 is exactly (26), while part (ii) of the same lemma gives (25) once we note that \(B_{t} =\beta _{1+t} -\beta _{1}\) by definition.

Equation (27) claims two things: \({\beta }^{(n_{k})}\) has a zero in \((1,1 + x_{k}^{2}]\), but has no zero in \((1 + x_{k}^{2},1 + 2x_{k}^{2}]\). Write (25) with \(\ell = n_{k}\):

$$\displaystyle{ \beta _{1+t}^{(n_{k})} =\beta _{ 1}^{(n_{k})} + {h}^{(n_{k})}_{ 1}B_{t} = {h}^{(n_{k})}_{ 1}(x_{k} + B_{t}),\quad \mbox{ for $0 \leq t \leq 2x_{k}^{2}$}. }$$

Next, we use that \(\Theta _{x_{k}}B \in D(\xi,C)\), whence \(1 + \Theta _{x_{k}}B\) has a zero in [0, 1] but no zero in (1, 2]. Then the relation

$$\displaystyle\begin{array}{rcl} x_{k}\left [1 + (\Theta _{x_{k}}B)_{v}\right ] = x_{k} + B_{x_{k}^{2}v} = {h}^{(n_{k})}_{ 1}\beta _{1+x_{k}^{2}v}^{(n_{k})}& &{}\end{array}$$
(30)

justifies (27).

To finish the proof it remains to show that \(I_{k} \subset \breve{ A}(C,s)\): indeed, by (27) the last zero \(\tilde{\gamma }_{k}\) of \({\beta }^{(n_{k})}\) before \(1 + x_{k}^{2}\) is greater than 1, so \(I_{k} \subset (1,1 + 2x_{k}^{2})\) already holds.

Fix \(t \in I_{k}\). We need to check the following three properties.

  (1)

    \(st <\gamma _{n_{k}}(t) =\gamma _{ n_{k}}^{{\ast}}(t)\).

    By (27) \(\gamma _{n_{k}}(t) =\tilde{\gamma } _{k} > 1\) and by the definition of I k we have \(1 <\tilde{\gamma } _{k} < t <\tilde{\gamma } _{k} + r_{k} \leq \tilde{\gamma }_{k} + x_{k}^{2}\). Hence,

    $$\displaystyle{\gamma _{n_{k}}(t) =\tilde{\gamma } _{k} > \frac{\tilde{\gamma }_{k}} {\tilde{\gamma }_{k} + x_{k}^{2}}t > \frac{1} {1 + x_{k}^{2}}t \geq \mathit{st},}$$

    where we used \(s \leq {(1 + x_{k}^{2})}^{-1}\), the second part of (23).

    By (26), \(\gamma _{n_{k}}(t) =\gamma _{ n_{k}}^{{\ast}}(t)\), as \(t \in I_{k} \subset [1,1 + 2x_{k}^{2}]\).

  (2)

    \(\min _{0\leq \ell<n_{k}}\vert \beta _{\tilde{\gamma }_{k}}^{(\ell)}\vert > C\sqrt{t -\tilde{\gamma } _{ k}}\).

    Since \(x_{k} = {h}^{(n_{k})}_{ 1}\beta _{1}^{(n_{k})}\), \(\beta _{\tilde{\gamma }_{k}}^{(n_{k})} = 0\) and \(\tilde{\gamma }_{k} \in [1,1 + x_{k}^{2}]\), (25) yields

    $$\displaystyle{\max _{0\leq \ell<n_{k}}\vert \beta _{\tilde{\gamma }_{k}}^{(\ell)} -\beta _{ 1}^{(\ell)}\vert = \vert \beta _{\tilde{\gamma }_{ k}}^{(n_{k})} -\beta _{ 1}^{(n_{k})}\vert = \vert \beta _{ 1}^{(n_{k})}\vert = \vert x_{ k}\vert.}$$

    Then, by the triangle inequality and (28)

    $$\displaystyle\begin{array}{rcl} \min _{0\leq \ell<n_{k}}\vert \beta _{\tilde{\gamma }_{k}}^{(\ell)}\vert &\geq &\min _{ 0\leq \ell<n_{k}}\vert \beta _{1}^{(\ell)}\vert -\max _{ 0\leq \ell<n_{k}}\vert \beta _{\tilde{\gamma }_{k}}^{(\ell)} -\beta _{ 1}^{(\ell)}\vert {}\\ & >& (1+\xi )\vert x_{k}\vert -\vert x_{k}\vert =\xi \vert x_{k}\vert. {}\\ \end{array}$$

    On the other hand \(\sqrt{t -\tilde{\gamma } _{k}} < \sqrt{r_{k}} \leq \vert x_{k}\vert \xi /C\), hence

    $$\displaystyle{\min _{0\leq \ell<n_{k}}\vert \beta _{\tilde{\gamma }_{k}}^{(\ell)}\vert >\xi \vert x_{ k}\vert \geq C\sqrt{t -\tilde{\gamma } _{k}}.}$$
  (3)

    \(\max _{u\in [\tilde{\gamma }_{k},t]}\vert \beta _{u} -\beta _{\tilde{\gamma }_{k}}\vert < \sqrt{t -\tilde{\gamma } _{k}}\).

    Since \(\Theta _{x_{k}}B \in D(\xi,C)\), the path \(1 + \Theta _{x_{k}}B\) has a zero in [0, 1] but no zero in (1, 2]. Denote, as in the definition of D(δ,C), by γ its last zero in [0, 2]. Then by relation (30) we have that \(\tilde{\gamma }_{k} = 1 + x_{k}^{2}\gamma\) and

    $$\displaystyle{\max _{u\in [\tilde{\gamma }_{k},1+2x_{k}^{2}]}\vert \beta _{u}^{(n_{k})}\vert = \left \vert \right.x_{ k}\left.\right \vert \max _{v\in [\gamma,2]}\vert 1 + (\Theta _{x_{k}}B)_{v}\vert \leq \left \vert \right.x_{k}\left.\right \vert \frac{\xi \wedge C} {2C} = \frac{\sqrt{r_{k}}} {2}.}$$

    Writing (25) with \(\ell = n_{k}\) and using that \(\beta _{\tilde{\gamma }_{k}}^{(n_{k})} = 0\) and \(t < 1 + 2x_{k}^{2}\) we obtain

    $$\displaystyle{\max _{u\in [\tilde{\gamma }_{k},t]}\vert \beta _{u} -\beta _{\tilde{\gamma }_{k}}\vert =\max _{u\in [\tilde{\gamma }_{k},t]}\vert \beta _{u}^{(n_{k})}\vert \leq \max _{ u\in [\tilde{\gamma }_{k},1+2x_{k}^{2}]}\vert \beta _{u}^{(n_{k})}\vert \leq \frac{\sqrt{r_{k}}} {2}.}$$

    By the definition of I k we have \(t -\tilde{\gamma }_{k} > \tfrac{1} {2}r_{k}\). Hence

    $$\displaystyle{\max _{u\in [\tilde{\gamma }_{k},t]}\vert \beta _{u} -\beta _{\tilde{\gamma }_{k}}\vert \leq \frac{\sqrt{r_{k}}} {2} < \frac{\sqrt{r_{k}}} {2} \sqrt{\frac{t -\tilde{\gamma } _{k } } {\tfrac{1} {2}r_{k}}} < \sqrt{t -\tilde{\gamma } _{k}}.}$$

     □ 

3.7 Proof of Theorem 9

In this subsection we prove the equality of the events \(\left \{\right.X = 0\left.\right \}\), \(\left \{\right.Y = \infty \left.\right \}\) and \(\left \{\right.1 \in A\left.\right \}\) up to null sets, where

$$\displaystyle{A =\bigcap _{s\in (0,1)}A(s),\quad \text{with}\quad A(s) =\bigcap _{C>0}A(C,s).}$$

We keep the notation introduced in Propositions 17 and 19 for \(\breve{A}_{L}(s)\), \(\breve{A}_{L}\) and \(\tilde{A}\).

Recall that \(\breve{A}_{L} \subset \tilde{ A} \subset A\) by definition for any L ≥ 1. Then by Propositions 17 and 19 we have

$$\displaystyle{ \left \{\right.X = 0\left.\right \}=\{ 1 \in \breve{ A}_{L}\} \subset \{ 1 \in \tilde{ A}\} = \left \{\right.Y = \infty \left.\right \}\subset \left \{\right.1 \in A\left.\right \}. }$$
(31)

For C > 0 let

$$\displaystyle{ \tau _{C} =\inf \left \{t \geq \tfrac{1} {2}\,:\, \exists n \geq 0,\,\beta _{t}^{(n)} = 0,\,\min _{ 0\leq k<n}\vert \beta _{t}^{(k)}\vert \geq C\sqrt{(1 - t) \vee 0}\right \}. }$$

We show below that

$$\displaystyle{ \left \{\right.1 \in A\left.\right \}\subset \bigcap _{C>0}\left \{\right.\tau _{C} < 1\left.\right \},\quad \text{up to a null set}, }$$
(32)

and

$$\displaystyle\begin{array}{rcl} \mathbf{P}\left (\bigcap _{C>0}\left \{\right.\tau _{C} < 1\left.\right \} \right ) \leq \mathbf{P}(X = 0).& &{}\end{array}$$
(33)

Then the claim follows by concatenating (31) and (32), and observing that the largest and the smallest events in the obtained chain of almost sure inclusions have the same probability by (33).

We start with (32). If 1 ∈ A then 1 ∈ A(C, s) for every s ∈ (0, 1); in particular, for \(s_{0} =\gamma _{0} \vee 1/2\), where γ 0 is the last zero of β before 1, we have 1 ∈ A(C, s 0). Then, by the definition of A(C, s 0) there is an integer n ≥ 0 and a real number γ ∈ (s 0, 1) such that \(\beta _{\gamma }^{(n)} = 0\) and \(\min _{0\leq k<n}\vert \beta _{1}^{(k)}\vert > C\sqrt{1-\gamma }\). The integer n cannot be zero since \({\beta }^{(0)} =\beta\) has no zero in (s 0, 1). Thus \(\tau _{C} \leq \gamma < 1\), which shows the inclusion.

Next, we turn to (33). Fix C > L ≥ 1 and let

$$\displaystyle{\gamma =\sup \left \{\right.s \in [\tau _{C},1]\,:\,\beta _{s} =\beta _{\tau _{C}}\left.\right \}.}$$

Let us show that

$$\displaystyle{ \left \{\right.\tau _{C} < 1\text{ and }\max _{\tau _{C}\leq t\leq 1}\vert \beta _{t} -\beta _{\tau _{C}}\vert < L\sqrt{1-\gamma }\left.\right \}\subset \left \{\right.1 \in \breve{ A}_{L}(C, \tfrac{1} {2})\left.\right \}. }$$
(34)

Indeed, on the event on the left hand side of (34) there exists a random index n such that \(\beta _{\tau _{C}}^{(n)} = 0\) and

$$\displaystyle{\min _{0\leq k\leq n-1}\vert \beta _{\tau _{C}}^{(k)}\vert \geq C\sqrt{1 -\tau _{ C}} > L\sqrt{1-\gamma } >\max _{\tau _{C}\leq t\leq 1}\vert \beta _{t} -\beta _{\tau _{C}}\vert.}$$

Then we can apply Lemma 15 with \(I = [\tau _{C},1]\), a = τ C and this n. We obtain that \({\beta }^{(k)}\) has no zero in [τ C , 1] for \(k = 0,\ldots,n - 1\), and the absolute increments \(\vert \beta _{t}^{(k)} -\beta _{\tau _{C}}^{(k)}\vert \) are the same for \(k = 0,\ldots,n\) and t ∈ [τ C , 1]. In particular, \(\beta _{\gamma }^{(k)} =\beta _{ \tau _{C}}^{(k)}\) for every 0 ≤ k ≤ n, γ is the last zero of β (n) in [τ C , 1] and \(\gamma =\gamma _{n} =\gamma _{ n}^{{\ast}}\). Moreover,

$$\displaystyle{\min _{0\leq k<n}\vert \beta _{\gamma _{n}^{{\ast}}}^{(k)}\vert =\min _{ 0\leq k<n}\vert \beta _{\tau _{C}}^{(k)}\vert \geq C\sqrt{1 -\tau _{ C}} > C\sqrt{1 -\gamma _{ n }^{{\ast}}}.}$$

So n and \(\gamma _{n}^{{\ast}}\) witness that \(1 \in \breve{ A}_{L}(C, \frac{1} {2})\), since we also have that

$$\displaystyle{\max _{t\in [\gamma _{n}^{{\ast}},1]}\vert \beta _{t} -\beta _{\gamma _{n}^{{\ast}}}\vert \leq \max _{t\in [\tau _{C},1]}\vert \beta _{t} -\beta _{\tau _{C}}\vert < L\sqrt{1 -\gamma _{ n }^{{\ast}}}.}$$

From (34), by the strong Markov property and the scaling invariance of β, we obtain

$$\displaystyle{\mathbf{P}(\tau _{C} < 1) \times \mathbf{P}\left (\max _{t\in [0,1]}\vert \beta _{t}\vert \leq L\sqrt{1 -\gamma _{0}}\right ) \leq \mathbf{P}\left (1 \in \breve{ A}_{L}(C, \tfrac{1} {2})\right ).}$$

Letting C go to infinity and using Proposition 19, this yields

$$\displaystyle\begin{array}{rcl} \mathbf{P}\left (\bigcap _{C>0}\left \{\right.\tau _{C} < 1\left.\right \} \right ) \times \mathbf{P}\left (\max _{t\in [0,1]}\vert \beta _{t}\vert \leq L\sqrt{1 -\gamma _{0}}\right )& \leq & \mathbf{P}\left (1 \in \breve{ A}_{L}(\tfrac{1} {2})\right ) {}\\ & =& \mathbf{P}(X = 0). {}\\ \end{array}$$

This is true for all L ≥ 1. Since 1 − γ 0 > 0 almost surely, the second factor on the left hand side tends to 1 as L →∞, and (33) is obtained by letting L go to infinity.

3.8 Proof of Theorem 11

In this subsection we prove that the tightness of \(\left \{\right.x\nu (x)\,:\, x \in (0,1)\left.\right \}\) and that of \(\left \{\right.nZ_{n}\,:\, n \geq 1\left.\right \}\) are equivalent and that both imply X < 1 almost surely.

Fix K > 0. By definition \(\left \{\right.(K/n)\nu (K/n) > K\left.\right \}= \left \{\right.nZ_{n} \geq K\left.\right \}\) for any n ≥ 1. For small x > 0 there is an n such that \(K/n < x < 2K/n\), and then \(x\nu (x) \leq (2K/n)\nu (K/n)\) by the monotonicity of ν. But then \(\left \{\right.x\nu (x) > 2K\left.\right \}\subset \left \{\right.nZ_{n} \geq K\left.\right \}\). Hence

$$\displaystyle{\limsup _{x\rightarrow {0}^{+}}\mathbf{P}(x\nu (x) > 2K) \leq \limsup _{n\rightarrow \infty }\mathbf{P}(nZ_{n} \geq K) \leq \limsup _{x\rightarrow {0}^{+}}\mathbf{P}(x\nu (x) > K).}$$

So the tightness of the two families is equivalent, and it is enough to prove that X < 1 almost surely whenever \(\left \{\right.x\nu (x)\,:\, x \in (0,1)\left.\right \}\) is tight.
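The tightness hypothesis itself can be probed numerically. The sketch below, with the same illustrative discretization of T as before and arbitrarily chosen parameters, estimates the empirical tails \(\mathbf{P}(nZ_{n} \geq K)\); tightness of \(\left \{\right.nZ_{n}\,:\, n \geq 1\left.\right \}\) means that the supremum over n of these tails can be made small by taking K large.

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps, n_iter, paths, K = 50_000, 25, 300, 3.0

def levy_transform(b):
    # discrete analogue of (T b)_t = int_0^t sign(b_s) db_s
    return np.concatenate(([0.0], np.cumsum(np.sign(b[:-1]) * np.diff(b))))

tail = np.zeros(n_iter)
for _ in range(paths):
    b = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, n_steps ** -0.5, n_steps))))
    ends = []
    for _ in range(n_iter):
        ends.append(abs(b[-1]))                 # |(T^k beta)_1|
        b = levy_transform(b)
    Z = np.minimum.accumulate(ends)             # Z_1, ..., Z_{n_iter}
    tail += np.arange(1, n_iter + 1) * Z >= K

print(tail / paths)  # empirical P(n Z_n >= K) for n = 1, ..., n_iter
```

Such an experiment can of course neither prove nor disprove tightness; it merely shows the order of magnitude of \(nZ_{n}\) for small n.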

We shall use the following easy lemma, whose proof is sketched at the end of this subsection.

Lemma 21.

$$\displaystyle\begin{array}{rcl} X =\liminf _{n\rightarrow \infty }\frac{Z_{n+1}} {Z_{n}} =\liminf _{x\rightarrow {0}^{+}} \frac{\vert \beta _{1}^{(\nu (x))}\vert } {x}.& & {}\\ \end{array}$$

Then we have that

$$\displaystyle{\mathbf{1}_{\{X>1-\delta \}} \leq \liminf _{x\rightarrow 0+}\mathbf{1}_{\{\vert \beta _{1}^{(\nu (x))}\vert /x>1-\delta \}}.}$$

Hence, by Fatou's lemma,

$$\displaystyle\begin{array}{rcl} \mathbf{P}(X > 1-\delta ) \leq \liminf _{x\rightarrow 0+}\mathbf{P}\left (\vert \beta _{1}^{(\nu (x))}\vert > x(1-\delta )\right ).& & {}\\ \end{array}$$

Let x ∈ (0, 1) and K > 0. Since on the event

$$\displaystyle{\left \{\right.\nu (x) \leq \frac{K} {x} \left.\right \} \cap \left \{\right.\vert \beta _{1}^{(\nu (x))}\vert > x(1-\delta )\left.\right \}}$$

at least one of the standard normal variables \(\beta _{1}^{(k)}\), 0 ≤ k ≤ K ∕ x, takes its value in a set of total Lebesgue measure 2xδ, namely in \((-x,-x(1-\delta )) \cup (x(1-\delta ),x)\), we get by a union bound

$$\displaystyle\begin{array}{rcl} \mathbf{P}\left (\frac{\vert \beta _{1}^{(\nu (x))}\vert } {x} > 1-\delta \right )& & \qquad \qquad {}\\ & \leq & \mathbf{P}\left (\nu (x) > \frac{K} {x} \right ) + \left (\right.\frac{K} {x} + 1\left.\right )\mathbf{P}\left (1-\delta < \frac{\left \vert \right.\beta _{1}\left.\right \vert } {x} < 1\right ) {}\\ & \leq & \mathbf{P}(x\nu (x) > K) + (K + 1)\delta. {}\\ \end{array}$$

In the last step we used that the standard normal density is bounded by \(1/\sqrt{2\pi }\): the above set has Lebesgue measure 2xδ, whence \(\mathbf{P}\left (1-\delta < \frac{\left \vert \right. \beta _{1}\left.\right \vert } {x} < 1\right ) \leq 2x\delta /\sqrt{2\pi } \leq \delta x\).

By the tightness assumption for any \(\varepsilon > 0\) there exists \(K_{\varepsilon }\) such that \(\sup _{x\in (0,1)}\mathbf{P}(x\nu (x) > K_{\varepsilon }) \leq \varepsilon\). Hence,

$$\displaystyle{\mathbf{P}(X = 1) =\lim _{\delta \rightarrow 0+}\mathbf{P}(X > 1-\delta ) \leq \lim _{\delta \rightarrow 0+}\left (\varepsilon +(K_{\varepsilon } + 1)\delta \right ) =\varepsilon.}$$

Since this is true for all \(\varepsilon > 0\), we get that \(\mathbf{P}(X = 1) = 0\). As \(Z_{n+1} \leq Z_{n}\) forces X ≤ 1, this means X < 1 almost surely, and the proof of Theorem 11 is complete. □ 

Proof of Lemma 21.

Since \(Z_{\nu (x)} = \vert \beta _{1}^{(\nu (x))}\vert \), Lemma 21 is a particular case of the following claim: if \((a_{n})\) is a non-increasing sequence of positive numbers tending to zero then

$$\displaystyle\begin{array}{rcl} \liminf _{k\rightarrow \infty }\frac{a_{k+1}} {a_{k}} =\liminf _{x\rightarrow {0}^{+}} \frac{a_{n(x)}} {x},& & {}\\ \end{array}$$

where \(n(x) =\inf \left \{\right.k \geq 1\,:\, a_{k} < x\left.\right \}\). First, for x < a 1 the relation \(a_{n(x)-1} \geq x > a_{n(x)}\) gives

$$\displaystyle\begin{array}{rcl} \frac{a_{n(x)}} {a_{n(x)-1}} \leq \frac{a_{n(x)}} {x} & & {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} \liminf _{k\rightarrow \infty }\frac{a_{k+1}} {a_{k}} \leq \liminf _{x\rightarrow {0}^{+}} \frac{a_{n(x)}} {x}.& & {}\\ \end{array}$$

For the opposite direction, note that for every k ≥ 1 we have \(a_{n(a_{k})} < a_{k}\); as \((a_{n})\) is non-increasing this forces \(n(a_{k}) \geq k + 1\), hence \(a_{n(a_{k})} \leq a_{k+1}\). Since \(a_{k} \rightarrow 0\) as \(k \rightarrow \infty \), one gets

$$\displaystyle{\liminf _{x\rightarrow {0}^{+}} \frac{a_{n(x)}} {x} \leq \liminf _{k\rightarrow \infty }\frac{a_{n(a_{k})}} {a_{k}} \leq \liminf _{k\rightarrow \infty }\frac{a_{k+1}} {a_{k}}.}$$

 □
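The deterministic claim in the proof above is easy to sanity-check numerically. In the following illustration we take the hypothetical sequence \(a_{k} = {2}^{-k}\), for which both sides of the identity equal 1∕2; the grid of x values and the truncation are arbitrary.

```python
import numpy as np

a = 0.5 ** np.arange(1, 60)           # a_k = 2^{-k}: liminf a_{k+1}/a_k = 1/2
print((a[1:] / a[:-1]).min())         # left-hand side: 0.5

def n_of(x):
    # n(x) = inf{k >= 1 : a_k < x}, returned as a 0-based index into a
    return int(np.argmax(a < x))

xs = np.geomspace(a[-2], a[0], 2000)  # sample x -> 0 on a geometric grid
print(min(a[n_of(x)] / x for x in xs))  # right-hand side: also 0.5, attained at x = a_k
```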