Abstract
The definitions, simplest properties and first examples of martingales and sub/super-martingales are given in Sect. 15.1. Stopping (Markov) times are introduced in Sect. 15.2, which also contains Doob’s theorem on random change of time and Wald’s identity together with a number of its applications to boundary crossing problems and elsewhere. This is followed by Sect. 15.3 presenting fundamental martingale inequalities, including Doob’s inequality with a number of its consequences, and an inequality for the number of strip crossings. Section 15.4 begins with Doob’s martingale convergence theorem and also presents Lévy’s theorem and an application to branching processes. Section 15.5 derives several important inequalities for the moments of stochastic sequences.
15.1 Definitions, Simplest Properties, and Examples
In Chap. 13 we considered sequences of dependent random variables X_0, X_1, … forming Markov chains. There, dependence was described in terms of the transition probabilities determining the distribution of X_{n+1} given X_n. This enabled us to investigate the properties of Markov chains rather completely.
In this chapter we consider another type of sequence of dependent random variables. Now dependence will be characterised only by the mean value of X n+1 given the whole “history” X 0,…,X n . It turns out that one can also obtain rather general results for such sequences.
Let a probability space \(\langle\varOmega, \mathfrak{F}, \mathbf {P}\rangle\) be given together with a sequence of random variables X 0,X 1,… defined on it and an increasing family (or flow) of σ-algebras \(\{ \mathfrak{F}_{n} \}_{n \ge0}\): \(\mathfrak{F}_{0} \subseteq\mathfrak{F}_{1}\subseteq\cdots\subseteq \mathfrak{F}_{n} \subseteq\cdots\subseteq\mathfrak{F}\).
Definition 15.1.1
A sequence of pairs \(\{ X_{n} ,\mathfrak{F}_{n};\, n\ge0 \}\) is called a stochastic sequence if, for each n≥0, X n is \(\mathfrak{F}_{n}\)-measurable. A stochastic sequence is said to be a martingale (one also says that {X n } is a martingale with respect to the flow of σ-algebras \(\{\mathfrak{F}_{n}\}\)) if, for every n≥0,
(1)
$$\begin{aligned} \mathbf{E}|X_n|<\infty, \end{aligned}$$(15.1.1)
(2)
X_n is measurable with respect to \(\mathfrak{F}_{n}\),
(3)
$$\begin{aligned} \mathbf{E} (X_{n+1}\mid\mathfrak{F}_n )=X_n. \end{aligned}$$(15.1.2)
A stochastic sequence \(\{ X_{n}, \mathfrak{F}_{n} ; \, n \ge0 \}\) is called a submartingale (supermartingale) if conditions (1)–(3) hold with the sign “=” replaced in (15.1.2) with “≥” (“≤”, respectively).
We will say that a sequence {X n } forms a martingale (submartingale, supermartingale) if, for \(\mathfrak{F}_{n} = \sigma(X_{0}, \ldots, X_{n})\), the pairs \(\{ X_{n}, \mathfrak{F}_{n} \}\) form a sequence with the same name. Submartingales and supermartingales are often called semimartingales.
It is evident that relation (15.1.2) persists if we replace X_{n+1} on its left-hand side with X_m for any m>n. Indeed, by virtue of the properties of conditional expectations,
$$\mathbf{E}(X_m\mid\mathfrak{F}_n)=\mathbf{E}\bigl(\mathbf{E}(X_m\mid\mathfrak{F}_{m-1})\bigm|\mathfrak{F}_n\bigr)=\mathbf{E}(X_{m-1}\mid\mathfrak{F}_n)=\cdots=X_n.$$
A similar assertion holds for semimartingales.
If {X_n} is a martingale, then E(X_{n+1} | σ(X_0,…,X_n)) = X_n and, by a property of conditional expectations,
$$\mathbf{E}(X_{n+1}\mid X_n)=\mathbf{E}\bigl(\mathbf{E}(X_{n+1}\mid X_0,\ldots,X_n)\bigm|X_n\bigr)=X_n.$$
So, for martingales, as for Markov chains, we have
$$\mathbf{E}(X_{n+1}\mid X_0,\ldots,X_n)=\mathbf{E}(X_{n+1}\mid X_n).$$
The similarity, however, is limited to this relation, because for a martingale the analogous equality does not hold for distributions, while the additional condition
$$\mathbf{E}(X_{n+1}\mid X_n)=X_n$$
is imposed.
Example 15.1.1
Let ξ n , n≥0 be independent. Then X n =ξ 1+⋯+ξ n form a martingale (submartingale, supermartingale) if E ξ n =0 (E ξ n ≥0, E ξ n ≤0). It is obvious that X n also form a Markov chain. The same is true of \(X_{n} = \prod_{k=0}^{n} \xi_{k}\) if E ξ n =1.
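A quick numerical illustration of Example 15.1.1 (the walk and all parameters below are illustrative choices, not from the text): for centred i.i.d. steps, the empirical mean of X_n stays equal to E X_0 at every time, as the martingale property requires.

```python
import random

# Monte Carlo sketch: the centred random walk X_n = xi_1 + ... + xi_n
# from Example 15.1.1 has constant mean E X_n = E X_0 = 0.
random.seed(0)

def sample_walk(n):
    """One trajectory X_0, ..., X_n with E xi = 0 (fair +-1 steps)."""
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += random.choice((-1.0, 1.0))
        path.append(x)
    return path

trials, n = 20000, 10
means = [0.0] * (n + 1)
for _ in range(trials):
    for k, x in enumerate(sample_walk(n)):
        means[k] += x / trials

# E X_k should be (close to) E X_0 = 0 for every k.
assert all(abs(m) < 0.1 for m in means)
```

The same experiment with E ξ > 0 (E ξ < 0) would show the mean drifting upwards (downwards), the sub- and supermartingale cases.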
Example 15.1.2
Let ξ n , n≥0, be independent. Then
form a martingale if E ξ n =0, because
Clearly, {X n } is not a Markov chain here. An example of a sequence which is a Markov chain but not a martingale can be obtained, say, if we consider a random walk on a segment with reflection at the endpoints (see Example 13.1.1).
As well as {0,1,…} we will use other sets of indices for X n , for example, {−∞<n<∞} or {n≤−1}, and also sets of integers including infinite values ±∞, say, {0≤n≤∞}. We will denote these sets by a common symbol \(\mathcal{N}\) and write martingales (semimartingales) as \(\{ X_{n}, \mathfrak{F}_{n} ;\, n \in\mathcal{N}\}\). By \(\mathfrak {F}_{-\infty}\) we will understand the σ-algebra \(\bigcap_{n \in\mathcal{N}} \mathfrak{F}_{n}\), and by \(\mathfrak{F}_{\infty}\) the σ-algebra \(\sigma (\bigcup_{n \in\mathcal{N}} \mathfrak{F}_{n} )\) generated by \(\bigcup_{n \in\mathcal{N}} \mathfrak{F}_{n}\), so that \(\mathfrak{F}_{-\infty} \subseteq\mathfrak{F}_{n}\subseteq\mathfrak {F}_{\infty}\subseteq\mathfrak{F}\) for any \(n \in\mathcal{N}\).
Definition 15.1.2
A stochastic sequence \(\{ X_{n}, \mathfrak{F}_{n} ;\, n \in\mathcal{N}\}\) is called a martingale (submartingale, supermartingale), if the conditions of Definition 15.1.1 hold for any \(n \in\mathcal{N}\).
If \(\{ X_{n} ,\mathfrak{F}_{n};\, n \in\mathcal{N}\}\) is a martingale and the left boundary n_0 of \(\mathcal{N}\) is finite (for example, \(\mathcal{N}= \{ 0, 1, \ldots\}\)), then the martingale \(\{ X_{n} ,\mathfrak{F}_{n} \}\) can always be extended “to the whole axis” by setting \(\mathfrak{F}_{n} := \mathfrak {F}_{n_{0}}\) and \(X_{n} := X_{n_{0}}\) for n<n_0. The same holds for the right boundary as well. Therefore if a martingale (semimartingale) \(\{ X_{n}, \mathfrak{F}_{n} ;\, n \in\mathcal{N}\}\) is given, then without loss of generality we can always assume that we are actually given a martingale (semimartingale) \(\{ X_{n}, \mathfrak{F}_{n} ;\, {-}\infty\le n \le \infty\}\).
Example 15.1.3
Let \(\{ \mathfrak{F}_{n},\, {-}\infty\le n \le \infty\}\) be a given sequence of increasing σ-algebras, and ξ a random variable on \(\langle\varOmega, \mathfrak{F}, \mathbf{P}\rangle\), E|ξ|<∞. Then \(\{ X_{n}, \mathfrak{F}_{n} ; {-}\infty\leq n \leq\infty\}\) with \(X_{n} = \mathbf{E}(\xi| \mathfrak{F}_{n} )\) forms a martingale.
Indeed, by the property of conditional expectations, for any m ≤ ∞, m > n,
$$\mathbf{E}(X_m\mid\mathfrak{F}_n)=\mathbf{E}\bigl(\mathbf{E}(\xi\mid\mathfrak{F}_m)\bigm|\mathfrak{F}_n\bigr)=\mathbf{E}(\xi\mid\mathfrak{F}_n)=X_n.$$
Definition 15.1.3
The martingale of Example 15.1.3 is called a martingale generated by the random variable ξ (and the family \(\{ \mathfrak{F}_{n} \}\)).
Definition 15.1.4
A set \(\mathcal{N}_{+}\) is called the right closure of \(\mathcal{N}\) if:
(1)
\(\mathcal{N}_{+} = \mathcal{N}\) when the maximal element of \(\mathcal{N}\) is finite;
(2)
\(\mathcal{N}_{+} = \mathcal{N}\cup\{ \infty\}\) if \(\mathcal{N}\) is not bounded from the right.
If \(\mathcal{N}= \mathcal{N}_{+}\) then we say that \(\mathcal{N}\) is right closed. A martingale (semimartingale) \(\{ X_{n}, \mathfrak{F}_{n};\, n \in\mathcal{N}\}\) is said to be right closed if \(\mathcal{N}\) is right closed.
Lemma 15.1.1
A martingale \(\{ X_{n}, \mathfrak{F}_{n};\, n \in\mathcal{N}\}\) is generated by a random variable if and only if it is right closed.
Proof
The proof of the lemma is trivial. In one direction it follows from Example 15.1.3, and in the other from the equality
$$X_n=\mathbf{E}(X_N\mid\mathfrak{F}_n),\qquad N=\max\{k: k\in\mathcal{N}\},$$
which implies that \(\{ X_{n}, \mathfrak{F}_{n}\}\) is generated by X_N. The lemma is proved. □
Now we consider an interesting and more concrete example of a martingale generated by a random variable.
Example 15.1.4
Let ξ_1, ξ_2, … be independent and identically distributed and assume E|ξ_1| < ∞. Set
$$S_n=\xi_1+\cdots+\xi_n,\qquad X_{-n}=\frac{S_n}{n},\qquad \mathfrak{F}_{-n}=\sigma(S_n,S_{n+1},\ldots),\quad n\ge1.$$
Then \(\mathfrak{F}_{-n} \subset\mathfrak{F}_{-n+1}\) and, for any 1≤k≤n, by symmetry,
$$\mathbf{E}(\xi_k\mid\mathfrak{F}_{-n})=\frac{S_n}{n}.$$
From this it follows that
$$X_{-n}=\frac{S_n}{n}=\frac1n\sum_{k=1}^{n}\mathbf{E}(\xi_k\mid\mathfrak{F}_{-n})=\mathbf{E}(\xi_1\mid\mathfrak{F}_{-n}).$$
This means that \(\{ X_{n}, \mathfrak{F}_{n} ;\, n \le -1 \}\) forms a martingale generated by ξ_1.
We will now obtain a series of auxiliary assertions giving the simplest properties of martingales and semimartingales. When considering semimartingales, we will confine ourselves to submartingales only, since the corresponding properties of supermartingales will follow immediately if one considers the sequence Y n =−X n , where {X n } is a submartingale.
Lemma 15.1.2
(1)
The property that \(\{ X_{n}, \mathfrak{F}_{n} ;\, n \in\mathcal {N}\}\) is a martingale is equivalent to the invariance in m≥n of the set functions (integrals)
$$ \mathbf{E}(X_m ;\, A) = \mathbf{E}(X_n ;\, A) $$(15.1.3)
for any \(A \in\mathfrak{F}_{n}\). In particular, E X_m = const.
(2)
The property that \(\{ X_{n}, \mathfrak{F}_{n} ;\, n \in\mathcal {N}\}\) is a submartingale is equivalent to the monotone increase in m≥n of the set functions
$$ \mathbf{E}(X_m ;\, A) \ge\mathbf{E}(X_n ;\, A) $$(15.1.4)
for every \(A \in\mathfrak{F}_{n}\). In particular, E X_m ↑.
Proof
The proof follows immediately from the definitions. If (15.1.3) holds then, by the definition of conditional expectation, \(X_{n} = \mathbf{E}(X_{m} \mid \mathfrak{F}_{n})\), and vice versa. Now let (15.1.4) hold. Put \(Y_{n} = \mathbf{E}(X_{m} \mid \mathfrak{F}_{n})\). Then (15.1.4) implies that E(Y_n; A) ≥ E(X_n; A), and hence E(Y_n − X_n; A) ≥ 0, for any \(A \in\mathfrak{F}_{n}\). From this it follows that \(Y_{n} = \mathbf{E} (X_{m} \mid \mathfrak{F}_{n}) \ge X_{n}\) with probability 1. The converse assertion can be obtained as easily as the direct one. The lemma is proved. □
Lemma 15.1.3
Let \(\{ X_{n}, \mathfrak{F}_{n} ;\, n \in\mathcal{N}\}\) be a martingale, g(x) be a convex function, and E|g(X n )|<∞. Then \(\{ g(X_{n}), \mathfrak{F}_{n} ;\, n \in\mathcal{N}\}\) is a submartingale.
If, in addition, g(x) is nondecreasing, then the assertion of the theorem remains true when \(\{ X_{n}, \mathfrak{F}_{n} ;\, n \in \mathcal{N}\}\) is a submartingale.
Proof
The proof of both assertions follows immediately from Jensen's inequality:
$$\mathbf{E}\bigl(g(X_{n+1})\mid\mathfrak{F}_n\bigr)\ge g\bigl(\mathbf{E}(X_{n+1}\mid\mathfrak{F}_n)\bigr)=g(X_n).$$
 □
Clearly, the function g(x)=|x|p for p≥1 satisfies the conditions of the first part of the lemma, and the function g(x)=e λx for λ>0 meets the conditions of the second part of the lemma.
Lemma 15.1.4
Let \(\{ X_{n}, \mathfrak{F}_{n} ;\, n \in\mathcal{N}\}\) be a right closed submartingale. Then, for X n (a)=max{X n ,a} and any a, \(\{ X_{n}(a), \mathfrak{F}_{n} ;\, n \in\mathcal{N}\}\) is a uniformly integrable submartingale.
If \(\{ X_{n}, \mathfrak{F}_{n} ;\, n \in\mathcal{N}\}\) is a right closed martingale, then it is uniformly integrable.
Proof
Let \(N := \sup\{ k: k\in\mathcal{N}\}\). Then, by Lemma 15.1.3, \(\{ X_{n}(a), \mathfrak{F}_{n} ;\; n \in\mathcal {N}\}\) is a submartingale. Hence, for any c>0,
(here X +=max(0,X)) and so
uniformly in n as c→∞. Therefore we get the required uniform integrability:
since sup n P(X n (a)>c)→0 as c→∞ (see Lemma A3.2.3 in Appendix 3; by truncating at the level a we avoided estimating the “negative tails”).
If \(\{ X_{n} , \mathfrak{F}_{n} ;\, n \in\mathcal{N}\}\) is a martingale, then its uniform integrability will follow from the first assertion of the lemma applied to the submartingale \(\{ |X_{n} | , \mathfrak{F}_{n} ;\, n \in\mathcal{N} \}\). The lemma is proved. □
The nature of martingales can be clarified to some extent by the following example.
Example 15.1.5
Let ξ_1, ξ_2, … be an arbitrary sequence of random variables, E|ξ_k| < ∞, \(\mathfrak{F}_{n} = \sigma (\xi_{1}, \ldots, \xi_{n})\) for n≥1, \(\mathfrak{F}_{0} = (\varnothing, \varOmega)\) (the trivial σ-algebra),
$$S_n=\sum_{k=1}^{n}\xi_k,\qquad Z_n=\sum_{k=1}^{n}\mathbf{E}(\xi_k\mid\mathfrak{F}_{k-1}),\qquad X_n=S_n-Z_n.$$
Then \(\{ X_{n} , \mathfrak{F}_{n} ;\, n \ge1 \}\) is a martingale. This is a consequence of the fact that
$$\mathbf{E}(X_{n+1}-X_n\mid\mathfrak{F}_n)=\mathbf{E}\bigl(\xi_{n+1}-\mathbf{E}(\xi_{n+1}\mid\mathfrak{F}_n)\bigm|\mathfrak{F}_n\bigr)=0.$$
In other words, for an arbitrary sequence {ξ n }, the sequence S n can be “compensated” by a so-called “predictable” (in the sense that its value is determined by S 1,…,S n−1) sequence Z n so that S n −Z n will be a martingale.
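The compensation just described is easy to see numerically. In the sketch below the dependent sequence has a known conditional mean E(ξ_k | F_{k−1}) = 0.5 ξ_{k−1} (an AR(1)-type recursion chosen purely for illustration; it is not from the text), so the predictable sequence Z_n can be accumulated along the trajectory.

```python
import random

# Sketch of Example 15.1.5: for a dependent sequence with known conditional
# mean E(xi_k | F_{k-1}) = 0.5 * xi_{k-1}, the compensated sum X_n = S_n - Z_n
# has constant mean E X_n = 0.
random.seed(1)

def compensated_path(n):
    xi_prev, s, z = 0.0, 0.0, 0.0
    for _ in range(n):
        eps = random.gauss(0.0, 1.0)
        xi = 0.5 * xi_prev + eps      # E(xi_k | F_{k-1}) = 0.5 * xi_prev
        s += xi                       # S_n
        z += 0.5 * xi_prev            # predictable compensator Z_n
        xi_prev = xi
    return s - z                      # X_n = S_n - Z_n

trials = 20000
mean_x = sum(compensated_path(10) for _ in range(trials)) / trials
assert abs(mean_x) < 0.1              # E X_n = E X_0 = 0
```

Note that Z_n uses only ξ_1, …, ξ_{n−1}, which is exactly the "predictability" referred to in the text.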
15.2 The Martingale Property and Random Change of Time. Wald’s Identity
Throughout this section we assume that \(\mathcal{N}= \{ n \ge0 \}\). Recall the definition of a stopping time.
Definition 15.2.1
A random variable ν will be called a stopping time or a Markov time (with respect to an increasing family of σ-algebras \(\{ \mathfrak{F}_{n} ;\, n \ge0 \}\)) if, for any n≥0, \(\{ \nu\le n \} \in\mathfrak{F}_{n} \).
It is obvious that a constant ν≡m is a stopping time. If ν is a stopping time then, for any fixed m, ν(m)=min(ν,m) is also a stopping time, since for n≥m we have
$$\{\nu(m)\le n\}=\varOmega\in\mathfrak{F}_n,$$
and if n<m then
$$\{\nu(m)\le n\}=\{\nu\le n\}\in\mathfrak{F}_n.$$
If ν is a stopping time, then
$$\{\nu=n\}=\{\nu\le n\}\setminus\{\nu\le n-1\}\in\mathfrak{F}_n.$$
Conversely, if \(\{ \nu= n \} \in\mathfrak{F}_{n}\) for all n, then \(\{ \nu\le n \} = \bigcup_{k\le n}\{\nu=k\}\in\mathfrak{F}_{n}\) and therefore ν is a stopping time.
Let a martingale \(\{ X_{n} , \mathfrak{F}_{n} ;\, n \ge0 \}\) be given. A typical example of a stopping time is the time ν at which X_n first hits a given measurable set B:
$$\nu=\min\{n\ge0: X_n\in B\}$$
(ν=∞ if all X_n ∉ B). Indeed,
$$\{\nu\le n\}=\bigcup_{k\le n}\{X_k\in B\}\in\mathfrak{F}_n.$$
If ν is a proper stopping time (P(ν<∞)=1), then X_ν is a random variable, since
$$\{X_\nu\in B'\}=\bigcup_{n\ge0}\bigl(\{\nu=n\}\cap\{X_n\in B'\}\bigr)\in\mathfrak{F}$$
for any Borel set B′.
By \(\mathfrak{F}_{\nu}\) we will denote the σ-algebra of sets \(A \in\mathfrak{F}\) such that \(A \cap\{\nu=n \} \in\mathfrak{F}_{n}\), n=0,1,… This σ-algebra can be thought of as being generated by the events {ν≤n}∩B n , n=0,1,…, where \(B_{n} \in \mathfrak{F}_{n}\). Clearly, ν and X ν are \(\mathfrak {F}_{\nu}\)-measurable. If ν 1 and ν 2 are two stopping times, then \(\{\nu_{2} \ge\nu_{1} \} \in\mathfrak{F}_{\nu_{1}}\) and \(\{\nu_{2} \ge\nu_{1} \} \in \mathfrak{F}_{\nu_{2}}\), since {ν 2≥ν 1}=⋃ n [{ν 2=n}∩{ν 1≤n}].
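A first-hitting time of this kind can be sketched in code. The level 3 and the step cap below are illustrative choices, not from the text; the point is that deciding whether {ν ≤ n} has occurred uses only the trajectory up to time n.

```python
import random

# Sketch of the first-hitting stopping time nu = min{n : X_n in B}
# for the fair +-1 random walk and B = {3} (illustrative choices).
random.seed(2)

def hitting_time(target, max_steps=10_000):
    """Return min{n >= 0 : X_n == target}, or None if not hit in max_steps.
    The event {nu <= n} depends only on X_0, ..., X_n."""
    x = 0
    if x == target:
        return 0
    for n in range(1, max_steps + 1):
        x += random.choice((-1, 1))
        if x == target:
            return n
    return None

nu = hitting_time(3)
assert nu is None or nu >= 3   # the walk needs at least |target| steps
```

Since the walk is recurrent, ν is finite with probability 1, but a finite simulation must still budget for long excursions, hence the `max_steps` cap.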
We already know that if \(\{ X_{n} , \mathfrak{F}_{n} \}\) is a martingale then E X n is constant for all n. Will this property remain valid for E X ν if ν is a stopping time? From Wald’s identity we know that this is the case for the martingale from Example 15.1.1. In the general case one has the following.
Theorem 15.2.1
(Doob)
Let \(\{ X_{n} , \mathfrak{F}_{n} ;\, n \ge0 \}\) be a martingale (submartingale) and ν_1, ν_2 be stopping times such that
$$ \mathbf{E}|X_{\nu_i}|<\infty,\quad i=1,2, $$(15.2.1)
$$ \liminf_{m\to\infty}\mathbf{E}\bigl(|X_m|;\,\nu_i>m\bigr)=0,\quad i=1,2. $$(15.2.2)
Then, on the set {ν_2 ≥ ν_1},
$$ \mathbf{E}(X_{\nu_2}\mid\mathfrak{F}_{\nu_1})=X_{\nu_1}\quad(\ge X_{\nu_1}). $$(15.2.3)
This theorem extends the martingale (submartingale) property to random time.
Corollary 15.2.1
If ν_2=ν≥0 is an arbitrary stopping time, then putting ν_1=n (also a stopping time) we have that, on the set ν≥n,
$$\mathbf{E}(X_\nu\mid\mathfrak{F}_n)=X_n$$
or, which is the same, for any \(A \in\mathfrak{F}_{n} \cap\{ \nu\ge n \}\),
$$\mathbf{E}(X_\nu;\,A)=\mathbf{E}(X_n;\,A).$$
For submartingales substitute “=” by “≥”.
Proof of Theorem 15.2.1
To prove (15.2.3) it suffices to show that, for any \(A \in \mathfrak{F}_{\nu_{1}}\),
$$ \mathbf{E}\bigl(X_{\nu_2};\,A\cap\{\nu_2\ge\nu_1\}\bigr)=\mathbf{E}\bigl(X_{\nu_1};\,A\cap\{\nu_2\ge\nu_1\}\bigr). $$(15.2.4)
Since the random variables ν_i are discrete, we just have to establish (15.2.4) for sets \(A_{n} = A \cap\{ \nu_{1} = n \} \in\mathfrak{F}_{n}\), n=0,1,… , i.e. to establish the equality
$$ \mathbf{E}\bigl(X_{\nu_2};\,A_n\cap\{\nu_2\ge n\}\bigr)=\mathbf{E}\bigl(X_{n};\,A_n\cap\{\nu_2\ge n\}\bigr). $$(15.2.5)
Thus the proof is reduced to the case ν_1=n. We have
$$\mathbf{E}\bigl(X_{n};\,A_n\cap\{\nu_2\ge n\}\bigr)=\mathbf{E}\bigl(X_{\nu_2};\,A_n\cap\{\nu_2= n\}\bigr)+\mathbf{E}\bigl(X_{n+1};\,A_n\cap\{\nu_2\ge n+1\}\bigr).$$
Here we used the fact that \(\{ \nu_{2} \ge n+1 \} \in\mathfrak{F}_{n}\) and the martingale property (15.1.3).
Applying this equality m−n times we obtain that
$$ \mathbf{E}\bigl(X_{n};\,A_n\cap\{\nu_2\ge n\}\bigr)=\mathbf{E}\bigl(X_{\nu_2};\,A_n\cap\{n\le\nu_2\le m\}\bigr)+\mathbf{E}\bigl(X_{m};\,A_n\cap\{\nu_2> m\}\bigr). $$(15.2.6)
By (15.2.2) the last expression converges to zero for some sequence m→∞.
Since
$$\mathbf{E}\bigl(X_{\nu_2};\,A_n\cap\{n\le\nu_2\le m\}\bigr)\to\mathbf{E}\bigl(X_{\nu_2};\,A_n\cap\{\nu_2\ge n\}\bigr)\quad\text{as } m\to\infty$$
by the property of integrals and by virtue of (15.2.6),
$$\mathbf{E}\bigl(X_{n};\,A_n\cap\{\nu_2\ge n\}\bigr)=\mathbf{E}\bigl(X_{\nu_2};\,A_n\cap\{\nu_2\ge n\}\bigr).$$
Thus we proved equality (15.2.5) and hence Theorem 15.2.1 for martingales. The proof for submartingales can be obtained by simply changing the equality signs in certain places to inequalities. The theorem is proved. □
The conditions of Theorem 15.2.1 are far from always being met, even in rather simple cases. Consider, for instance, a fair game (see Examples 4.2.3 and 4.4.5) versus an infinitely rich adversary, in which z+S n is the fortune of the first gambler after n plays (given he has not been ruined yet). Here z>0, \(S_{n} = \sum_{k=1}^{n} \xi_{k}\), P(ξ k =±1)=1/2, η(z)=min{k:S k =−z} is obviously a Markov (stopping) time, and the sequence {S n ; n≥0}, S 0=0, is a martingale, but S η(z)=−z. Hence E S η(z)=−z≠E S n =0, and equality (15.2.5) does not hold for ν 1=0, ν 2=η(z), z>0, n>0. In this example, this means that condition (15.2.2) is not satisfied (this is related to the fact that E η(z)=∞).
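The failure just described can be seen numerically: stopping the fair game at the bounded time min(η(z), m) preserves the mean, yet stopping at η(z) itself gives −z. The parameters below are illustrative choices, not from the text.

```python
import random

# Sketch of the fair-game discussion above: E S_{min(eta, m)} stays 0 for
# every fixed m, even though S_eta = -z, so E S_eta = -z != 0 and the
# conditions of Doob's theorem must fail for nu_2 = eta(z).
random.seed(3)

def stopped_sum(z, m):
    """S_{min(eta(z), m)} for one trajectory of the fair +-1 walk."""
    s = 0
    for _ in range(m):
        if s == -z:
            return s            # eta(z) has already occurred
        s += random.choice((-1, 1))
    return s

trials, z, m = 40000, 2, 50
mean_stopped = sum(stopped_sum(z, m) for _ in range(trials)) / trials
assert abs(mean_stopped) < 0.15  # optional stopping at the bounded time
                                 # min(eta, m) keeps the mean at 0
```

Letting m grow, the trajectories not yet ruined carry an ever larger positive contribution that exactly offsets the ruined ones; that contribution is what the vanishing-tail condition rules out.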
Conditions (15.2.1) and (15.2.2) of Theorem 15.2.1 can, generally speaking, be rather hard to verify. Therefore the following statements are useful in applications.
Put for brevity
$$Y_n=\max_{0\le k\le n}|X_k|,\qquad Y_\nu=\max_{0\le k\le\nu}|X_k|.$$
Lemma 15.2.1
The condition
$$ \mathbf{E}Y_{\nu}<\infty $$(15.2.7)
is sufficient for (15.2.1) and (15.2.2) (with ν i =ν).
Proof
The proof is almost evident since |X_ν| ≤ Y_ν and, for A_n = {ν > n},
$$\mathbf{E}\bigl(|X_n|;\,\nu>n\bigr)\le\mathbf{E}(Y_\nu;\,A_n).$$
Because P(ν>n)→0 and E Y_ν <∞, it remains to use the property of integrals by which E(η; A_n)→0 if E|η|<∞ and P(A_n)→0. □
We introduce the following notation:
$$\xi_n=X_n-X_{n-1},\qquad a_n=\mathbf{E}\bigl(|\xi_n|\bigm|\mathfrak{F}_{n-1}\bigr),\qquad \sigma_n^2=\mathbf{E}\bigl(\xi_n^2\bigm|\mathfrak{F}_{n-1}\bigr),$$
where \(\mathfrak{F}_{-1}\) can be taken to be the trivial σ-algebra.
Theorem 15.2.2
Let {X n ; n≥0} be a martingale (submartingale) and ν be a stopping time (with respect to \(\{ \mathfrak{F}_{n} = \sigma(X_{0}, \ldots, X_{n}) \}\)).
(1)
If
$$ \mathbf{E}\nu< \infty $$(15.2.8)
and, for all n≥0, on the set \(\{ \nu\ge n \} \in\mathfrak{F}_{n-1}\) one has
$$ a_n \le c = {\rm const} , $$(15.2.9)
then
$$ \mathbf{E}|X_{\nu}| < \infty,\quad \mathbf{E}X_{\nu} = \mathbf {E}X_0 \quad(\ge\mathbf{E} X_0) . $$(15.2.10)
(2)
If, in addition, \(\mathbf{E}\sigma_{n}^{2} = \mathbf{E}\xi_{n}^{2} < \infty\), then
$$ \mathbf{E}X_{\nu}^2 = \mathbf{E}\sum _{k=1}^{\nu} \sigma_k^2. $$(15.2.11)
Proof
By virtue of Theorem 15.2.1, Corollary 15.2.1 and Lemma 15.2.1, to prove (15.2.10) it suffices to verify that conditions (15.2.8) and (15.2.9) imply (15.2.7). Quite similarly to the proof of Theorem 4.4.1, we have
$$\mathbf{E}\sum_{k=1}^{\nu}|\xi_k|=\sum_{k=1}^{\infty}\mathbf{E}\bigl(|\xi_k|;\,\nu\ge k\bigr)=\sum_{k=1}^{\infty}\mathbf{E}\bigl(a_k;\,\nu\ge k\bigr).$$
Here \(\{ \nu\ge k \} = \varOmega\setminus\{ \nu\le k-1 \} \in \mathfrak{F}_{k-1}\). Therefore, by condition (15.2.9),
$$\mathbf{E}\sum_{k=1}^{\nu}|\xi_k|\le c\sum_{k=1}^{\infty}\mathbf{P}(\nu\ge k)=c\,\mathbf{E}\nu<\infty.$$
This means that
$$\mathbf{E}Y_\nu\le\mathbf{E}|X_0|+\mathbf{E}\sum_{k=1}^{\nu}|\xi_k|<\infty,$$
so that condition (15.2.7) is satisfied.
Now we will prove (15.2.11). Set \(Z_{n} := X_{n}^{2} - \sum_{0}^{n} \sigma_{k}^{2}\). One can easily see that Z_n is a martingale, since
$$\mathbf{E}(Z_{n+1}-Z_n\mid\mathfrak{F}_n)=\mathbf{E}\bigl(\xi_{n+1}^2+2X_n\xi_{n+1}-\sigma_{n+1}^2\bigm|\mathfrak{F}_n\bigr)=2X_n\,\mathbf{E}(\xi_{n+1}\mid\mathfrak{F}_n)=0.$$
It is also clear that E|Z_n|<∞ and ν(n)=min(ν,n) is a stopping time. By virtue of Lemma 15.2.1, conditions (15.2.1) and (15.2.2) always hold for the pair {Z_k}, ν(n). Therefore, by the first part of the theorem,
$$ \mathbf{E}X_{\nu(n)}^2=\mathbf{E}\sum_{k=1}^{\nu(n)}\sigma_k^2. $$(15.2.12)
It remains to verify that
$$ \mathbf{E}X_{\nu(n)}^2\to\mathbf{E}X_{\nu}^2,\qquad \mathbf{E}\sum_{k=1}^{\nu(n)}\sigma_k^2\to\mathbf{E}\sum_{k=1}^{\nu}\sigma_k^2\quad\text{as } n\to\infty. $$(15.2.13)
The second equality follows from the monotone convergence theorem (ν(n)↑ν, \(\sigma_{k}^{2} \ge0\)). That theorem implies the former equality as well, for \(X_{\nu(n)}^{2} \stackrel{\mathit {a}.\mathit{s}.}{\longrightarrow}X_{\nu}^{2}\) and \(X_{\nu(n)}^{2} {\uparrow}\). To verify the latter claim, note that \(\{ X_{n}^{2} , \mathfrak{F}_{n} ;\, n \ge0 \}\) is a submartingale, and therefore, for any \(A \in\mathfrak{F}_{n}\),
$$\mathbf{E}\bigl(X_{n+1}^2;\,A\bigr)\ge\mathbf{E}\bigl(X_{n}^2;\,A\bigr).$$
Thus (15.2.12) and (15.2.13) imply (15.2.11), and the theorem is completely proved. □
The main assertion of Theorem 15.2.2 for martingales (submartingales),
$$\mathbf{E}X_{\nu} = \mathbf{E}X_0\quad(\ge\mathbf{E}X_0),$$
was obtained as a consequence of Theorem 15.2.1. However, we could get it directly from some rather transparent relations which, moreover, enable one to extend it to improper stopping times ν.
A stopping time ν is called improper if 0<P(ν<∞)=1−P(ν=∞)<1. To give an example of an improper stopping time, consider independent identically distributed random variables ξ_k, a=E ξ_k<0, \(X_{n} = \sum^{n}_{k=1} \xi_{k}\), and put
$$\nu=\min\{k\ge1: X_k>x\},\qquad x>0.$$
Here ν is finite only for those trajectories {X_k} for which sup_k X_k > x. If the last inequality does not hold, we put ν=∞. Clearly,
$$\mathbf{P}(\nu=\infty)=\mathbf{P}\Bigl(\sup_{k}X_k\le x\Bigr)>0,$$
since, by the strong law of large numbers, \(X_k \to -\infty\) a.s.
Thus, for an arbitrary (possibly improper) stopping time, we have
Assume now that changing the order of summation is justified here. Then, by virtue of the relation \(\{ \nu\geq k+1\}\in\mathfrak{F}_{k}\), we get
Since for martingales (submartingales) the factors \(\mathbf{E}(X_{k+1} - X_{k} | \mathfrak{F}_{k})= 0\) (≥0), we obtain the following.
Theorem 15.2.3
If the change of the order of summation in (15.2.15) and (15.2.16) is legitimate then, for martingales (submartingales),
$$ \mathbf{E}(X_{\nu};\,\nu<\infty)=\mathbf{E}X_0\quad(\ge\mathbf{E}X_0). $$(15.2.17)
Assumptions (15.2.8) and (15.2.9) of Theorem 15.2.2 are nothing else but conditions ensuring the absolute convergence of the series in (15.2.15) (see the proof of Theorem 15.2.2) and (15.2.16), because the sum of the absolute values of the terms in (15.2.16) is dominated by
where, as before, \(a_{k} = \mathbf{E} (|\xi_{k}| \mid \mathfrak{F}_{k-1} )\) with ξ k =X k −X k−1. This justifies the change of the order of summation.
There is still another way of proving (15.2.17) based on (15.2.15) specifying a simple condition ensuring the required justification. First note that identity (15.2.17) assumes that the expectation E(X ν ; ν<∞) exists, i.e. both values \(\mathbf{E}(X^{\pm}_{\nu};\,\nu<\infty)\) are finite, where x ±=max(±x,0).
Theorem 15.2.4
1. Let \(\{X_{n},\mathfrak{F}_{n}\}\) be a martingale. Then the condition
$$ \lim_{n\to\infty}\mathbf{E}(X_n;\,\nu>n)=0 $$(15.2.18)
is necessary and sufficient for the relation
$$ \lim_{n\to\infty}\mathbf{E}(X_\nu;\,\nu\le n)=\mathbf{E}X_0. $$(15.2.19)
A necessary and sufficient condition for (15.2.17) is that (15.2.18) holds and at least one of the values \(\mathbf{E}(X_{\nu}^{\pm};\,\nu<\infty)\) is finite.
2. If \(\{ X_{n} , \mathfrak{F}_{n} \}\) is a supermartingale and
$$ \liminf_{n\to\infty}\mathbf{E}(X_n;\,\nu>n)\ge0, $$(15.2.20)
then
$$ \limsup_{n\to\infty}\mathbf{E}(X_\nu;\,\nu\le n)\le\mathbf{E}X_0. $$(15.2.21)
If, in addition, at least one of the values \(\mathbf{E}(X^{\pm}_{\nu};\,\nu<\infty)\) is finite then
$$\mathbf{E}(X_\nu;\,\nu<\infty)\le\mathbf{E}X_0.$$
3. If, in conditions (15.2.18) and (15.2.20), we replace the quantity E(X n ; ν>n) with E(X n ; ν≥n), the first two assertions of the theorem will remain true.
The corresponding symmetric assertions hold for submartingales.
Proof
As we have already mentioned, for martingales, E(ξ_k; ν≥k)=0. Therefore, by virtue of (15.2.18),
$$\mathbf{E}X_0=\mathbf{E}X_{\nu(n)}=\mathbf{E}(X_\nu;\,\nu\le n)+\mathbf{E}(X_n;\,\nu>n),\qquad \nu(n)=\min(\nu,n).$$
Here
$$\mathbf{E}(X_n;\,\nu>n)\to0\quad\text{as } n\to\infty.$$
Hence
$$\lim_{n\to\infty}\mathbf{E}(X_\nu;\,\nu\le n)=\mathbf{E}X_0.$$
These equalities also imply the necessity of condition (15.2.18).
If at least one of the values \(\mathbf{E}(X^{\pm}_{\nu};\,\nu<\infty)\) is finite, then by the monotone convergence theorem
$$\mathbf{E}(X_\nu;\,\nu\le n)\to\mathbf{E}(X_\nu;\,\nu<\infty)\quad\text{as } n\to\infty,$$
and (15.2.17) follows from (15.2.19).
The third assertion of the theorem follows from the fact that the stopping time ν(n)=min(ν,n) satisfies the conditions of the first part of the theorem (or those of Theorems 15.2.1 and 15.2.3), and therefore, for the martingale {X_n},
$$\mathbf{E}X_0=\mathbf{E}X_{\nu(n)}=\mathbf{E}(X_\nu;\,\nu<n)+\mathbf{E}(X_n;\,\nu\ge n),$$
so that (15.2.19) implies the convergence E(X n ; ν≥n)→0 and vice versa.
The proof for semimartingales is similar. The theorem is proved. □
That assertions (15.2.17) and (15.2.19) are, generally speaking, not equivalent even when (15.2.18) holds (i.e., lim n→∞ E(X ν ;ν≤n)=E(X ν ;ν<∞) is not always the case), can be illustrated by the following example. Let ξ k be independent random variables with
ν be independent of {ξ k }, and P(ν=k)=2−k, k=1,2,… . Then X 0=0, X k =X k−1+ξ k for k≥1 is a martingale,
by independence, and condition (15.2.18) is satisfied. By virtue of (15.2.19), this means that lim_{n→∞} E(X_ν; ν≤n)=0 (one can also verify this directly). On the other hand, the expectation E(X_ν; ν<∞)=E X_ν is not defined, since \(\mathbf{E} X_{\nu}^{+}=\mathbf{E}X_{\nu}^{-}=\infty\). Indeed, clearly
By symmetry, we also have \(\mathbf{E}X_{\nu}^{-}=\infty\).
Corollary 15.2.2
1. If \(\{ X_{n}, \mathfrak{F}_{n} \}\) is a nonnegative martingale, then condition (15.2.18) is necessary and sufficient for (15.2.17).
2. If \(\{ X_{n}, \mathfrak{F}_{n} \}\) is a nonnegative supermartingale and ν is an arbitrary stopping time, then
$$\mathbf{E}(X_\nu;\,\nu<\infty)\le\mathbf{E}X_0.$$
Proof
The assertion follows in an obvious way from Theorem 15.2.4 since one has \(\mathbf{E}(X^{-}_{\nu};\,\nu<\infty)=0\). □
Theorem 15.2.2 implies the already known Wald’s identity (see Theorem 4.4.3) supplemented with another useful statement.
Theorem 15.2.5
(Wald’s identity)
Let ζ_1, ζ_2, … be independent identically distributed random variables, S_n=ζ_1+⋯+ζ_n, S_0=0, and assume E ζ_1=a. Let, further, ν be a stopping time with E ν<∞. Then
$$ \mathbf{E}S_\nu=a\,\mathbf{E}\nu. $$(15.2.22)
If, moreover, \(\sigma^{2} = \operatorname{Var}\zeta_{k} < \infty\), then
$$ \mathbf{E}(S_\nu-a\nu)^2=\sigma^2\,\mathbf{E}\nu. $$(15.2.23)
Proof
It is clear that X n =S n −na forms a martingale and conditions (15.2.8) and (15.2.9) are met. Therefore E X ν =E X 0=0, which is equivalent to (15.2.22), and \(\mathbf{E}X_{\nu}^{2} = \mathbf{E}\nu\sigma ^{2}\), which is equivalent to (15.2.23). □
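Wald's identity is easy to check by simulation. In the sketch below the steps are die rolls and ν is the first time the sum reaches 20 (both illustrative choices, not from the text); ν is a stopping time with E ν < ∞, so E S_ν = a E ν should hold.

```python
import random

# Monte Carlo sketch of Wald's identity (15.2.22): E S_nu = a * E nu
# for i.i.d. steps with mean a and nu = min{n : S_n >= 20}.
random.seed(4)

def one_run(level=20):
    s, n = 0, 0
    while s < level:
        s += random.randint(1, 6)    # zeta_k, a = E zeta = 3.5
        n += 1
    return s, n

trials = 20000
sum_s = sum_n = 0
for _ in range(trials):
    s, n = one_run()
    sum_s += s
    sum_n += n
mean_s, mean_n = sum_s / trials, sum_n / trials
assert abs(mean_s - 3.5 * mean_n) < 0.2    # E S_nu ~ a E nu
```

Note that E S_ν itself exceeds the level 20 by the mean overshoot, which is why neither side of the identity can be guessed from the level alone.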
Example 15.2.1
Consider a generalised renewal process (see Sect. 10.6) S(t)=S_{η(t)}, where \(S_{n}=\sum_{j=1}^{n}\xi_{j}\) (in this example we follow the notation of Chap. 10 and change the meaning of the notation S_n from the above), η(t)=min{k:T_k>t}, \(T_{n}=\sum_{j=1}^{n}\tau_{j}\) and (τ_j, ξ_j) are independent vectors distributed as (τ,ξ), τ>0. Set a_ξ=E ξ, a=E τ, \(\sigma^{2}_{\xi}=\operatorname{Var}\xi\) and \(\sigma^{2}=\operatorname {Var}\tau\). As we know from Wald's identity in Sect. 4.4,
$$a\,\mathbf{E}\eta(t)=\mathbf{E}T_{\eta(t)}=t+\mathbf{E}\chi(t),$$
where E χ(t)=o(t) as t→∞ (see Theorem 10.1.1) and, in the non-lattice case, \(\mathbf{E}\chi(t)\to\frac{\sigma^{2}+a^{2}}{2a}\) if σ²<∞ (see Theorem 10.4.3).
We now find \(\operatorname{Var}\eta(t)\) and \(\operatorname {Var}S(t)\). Omitting for brevity’s sake the argument t, we can write
The first summand on the right-hand side is equal to
by Theorem 15.2.3. The second summand equals, by (10.4.8) (χ(t)=T η(t)−t),
The last summand, by the Cauchy–Bunjakovsky inequality, is also o(t). Finally, we get
Consider now (with r=a ξ /a; ζ j =ξ j −rτ j , E ζ j =0)
The first term on the right-hand side is equal to
by Theorem 15.2.3. The second term has already been estimated above. Therefore, as before, the sum of the last two terms is o(t). Thus
This corresponds to the scaling used in Theorem 10.6.2.
Example 15.2.2
Examples 4.4.4 and 4.5.5 referring to the fair game situation with P(ζ k =±1)=1/2 and ν=min{k:S k =z 2 or S k =−z 1} (z 1 and z 2 being the capitals of the gamblers) can also illustrate the use of Theorem 15.2.5.
Now consider the case p=P(ζ_k =1)≠1/2. The sequence \(X_{n} = (q/p)^{S_{n}}\), n≥0, q=1−p, is a martingale, since
$$\mathbf{E}(q/p)^{\zeta_k}=p\cdot\frac{q}{p}+q\cdot\frac{p}{q}=q+p=1.$$
By Theorem 15.2.5 (the probabilities P_1 and P_2 were defined in Example 4.4.5),
$$\mathbf{E}X_\nu=P_1(q/p)^{-z_1}+P_2(q/p)^{z_2}=\mathbf{E}X_0=1.$$
From this relation and the equality P_1+P_2=1 we have
$$P_1=\frac{(q/p)^{z_2}-1}{(q/p)^{z_2}-(q/p)^{-z_1}},\qquad P_2=\frac{1-(q/p)^{-z_1}}{(q/p)^{z_2}-(q/p)^{-z_1}}.$$
Using Wald's identity again, we also obtain that
$$\mathbf{E}\nu=\frac{\mathbf{E}S_\nu}{\mathbf{E}\zeta_1}=\frac{z_2P_2-z_1P_1}{p-q}.$$
Note that these equalities could have been obtained by elementary methods, but this would require lengthy calculations.
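A simulation sketch of the asymmetric exit problem just discussed. The martingale E (q/p)^{S_ν} = 1 together with P_1 + P_2 = 1 yields, for the exit probability at the upper boundary (the labelling P_2 = P(S_ν = z_2) and all parameters are assumptions of this sketch), P_2 = (1 − r^{−z_1})/(r^{z_2} − r^{−z_1}) with r = q/p; the code checks this against direct simulation.

```python
import random

# Monte Carlo check of the exit probability of the asymmetric +-1 walk
# from the strip (-z1, z2), against the martingale answer with r = q/p.
random.seed(5)

def exits_at_top(p, z1, z2):
    s = 0
    while -z1 < s < z2:
        s += 1 if random.random() < p else -1
    return s == z2

p, z1, z2 = 0.6, 3, 4
r = (1 - p) / p
p2_formula = (1 - r ** (-z1)) / (r ** z2 - r ** (-z1))
trials = 20000
p2_mc = sum(exits_at_top(p, z1, z2) for _ in range(trials)) / trials
assert abs(p2_mc - p2_formula) < 0.02
```

With p = 0.6 the walk drifts upwards, and the simulated exit probability at z_2 is close to the martingale value.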
In the cases when the nature of S ν is simple enough, the assertions of the type of Theorems 15.2.1–15.2.2 enable one to obtain (or estimate) the distribution of the random variable ν itself. In such situations, the following assertion is rather helpful.
Suppose that the conditions of Theorem 15.2.5 are met, but, instead of conditions on the moments of ζ_n, the Cramér condition (cf. Chap. 9) is assumed to be satisfied:
$$\psi(\lambda):=\mathbf{E}e^{\lambda\zeta_1}<\infty$$
for some λ≠0.
In other words, if
$$\lambda_+:=\sup\bigl\{\lambda:\psi(\lambda)<\infty\bigr\},\qquad \lambda_-:=\inf\bigl\{\lambda:\psi(\lambda)<\infty\bigr\},$$
then λ_+−λ_−>0. Everywhere in what follows we will only consider the values
$$\lambda\in B=\bigl\{\lambda:\psi(\lambda)<\infty\bigr\}$$
for which ψ′(λ)<∞. For such λ, the positive martingale
$$X_n=e^{\lambda S_n}\psi^{-n}(\lambda),\qquad X_0=1,$$
is well-defined, so that E X_n = 1.
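A numerical sketch of the exponential martingale just introduced: for standard normal steps (an illustrative choice of step distribution, not from the text) one has ψ(λ) = E e^{λζ} = e^{λ²/2}, and the empirical mean of X_n = e^{λS_n} ψ^{−n}(λ) should be close to 1.

```python
import math
import random

# Sketch: X_n = exp(lambda * S_n) / psi(lambda)^n has mean 1, checked for
# N(0, 1) steps where psi(lambda) = exp(lambda^2 / 2).
random.seed(6)

lam, n, trials = 0.5, 5, 20000
psi = math.exp(lam ** 2 / 2.0)             # psi(lambda) for N(0, 1) steps

total = 0.0
for _ in range(trials):
    s = sum(random.gauss(0.0, 1.0) for _ in range(n))
    total += math.exp(lam * s) / psi ** n  # X_n for this trajectory
mean_x = total / trials

assert abs(mean_x - 1.0) < 0.1             # E X_n = 1
```

For large λ the summand e^{λS_n} becomes heavy-tailed and the Monte Carlo estimate degrades; moderate λ, as here, keeps the variance manageable.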
Theorem 15.2.6
Let ν be an arbitrary stopping time and λ∈B. Then
$$ \mathbf{E}\bigl(e^{\lambda S_{\nu}}\psi^{-\nu}(\lambda);\,\nu<\infty\bigr)\le1 $$(15.2.24)
and, for any s>1 and r>1 such that 1/r+1/s=1,
A necessary and sufficient condition for
$$ \mathbf{E}\bigl(e^{\lambda S_{\nu}}\psi^{-\nu}(\lambda);\,\nu<\infty\bigr)=1 $$(15.2.26)
is that
$$ \mathbf{E}\bigl(e^{\lambda S_{n}}\psi^{-n}(\lambda);\,\nu>n\bigr)\to0\quad\text{as } n\to\infty. $$(15.2.27)
Remark 15.2.1
Relation (15.2.26) is known as the fundamental Wald identity. In the literature it is usually considered for a.s. finite ν (when P(ν<∞)=1) being in that case an extension of the obvious equality \(\mathbf{E}e^{\lambda S_{n}} = \psi^{n} (\lambda)\) to the case of random ν. Originally, identity (15.2.26) was established by A. Wald in the special case where ν is the exit time of the sequence {S n } from a finite interval (see Corollary 15.2.3), and was accompanied by rather restrictive conditions. Later, these conditions were removed (see e.g. [13]). Below we will obtain a more general assertion for the problem on the first exit of the trajectory {S n } from a strip with curvilinear boundaries.
Remark 15.2.2
The fundamental Wald identity shows that, although the nature of a stopping time could be quite general, there exists a stiff functional constraint (15.2.26) on the joint distribution of ν and S_ν (the distribution of ζ_k is assumed to be known). In the cases where one of these variables can somehow be “computed” or “eliminated” (see Examples 15.2.2–15.2.4) Wald's identity turns into an explicit formula for the Laplace transform of the distribution of the other variable. If ν and S_ν prove to be independent (which rarely happens), then (15.2.26) gives the relationship
$$\mathbf{E}e^{\lambda S_{\nu}}\cdot\mathbf{E}\psi^{-\nu}(\lambda)=1$$
between the Laplace transforms of the distributions of ν and S ν .
Proof of Theorem 15.2.6
As we have already noted, for
$$X_n=e^{\lambda S_n}\psi^{-n}(\lambda),$$
\(\{ X_{n} , \mathfrak{F}_{n} ; \, n \geq0 \}\) is a positive martingale with X_0=1 and E X_n=1. Corollary 15.2.2 immediately implies (15.2.24).
Inequality (15.2.25) is a consequence of Hölder’s inequality and (15.2.24):
The last assertion of the theorem (concerning the identity (15.2.26)) follows from Theorem 15.2.4. □
We now consider several important special cases. Note that ψ(λ) is a convex function (ψ″(λ)>0), ψ(0)=1, and therefore there exists a unique point λ 0 at which ψ(λ) attains its minimum value ψ(λ 0)≤1 (see also Sect. 9.1).
Corollary 15.2.3
Assume that we are given a sequence g(n) such that
$$g^+(n)=\max\bigl(0,g(n)\bigr)=o(n)\quad\text{as } n\to\infty.$$
If S n ≤g(n) holds on the set {ν>n}, then (15.2.26) holds for λ∈(λ 0,λ +]∩B, B={λ:ψ(λ)<∞}.
The random variable ν=ν_g=inf{k≥1: S_k>g(k)} for g(k)=o(k) obviously satisfies the conditions of Corollary 15.2.3. For stopping times ν_g one could also consider the case g(n)/n→c≥0 as n→∞, which can be reduced to the case g(n)=o(n) by introducing the random variables
$$\zeta^*_k=\zeta_k-c,\qquad S^*_n=\sum_{k=1}^{n}\zeta^*_k=S_n-cn,$$
for which \(\nu_{g} = \inf\{ k \geq1 : S^{*}_{k} > g(k) - c k \}\).
Proof of Corollary 15.2.3
For λ>λ 0, λ∈B, we have
as n→∞, because (λ−λ 0)g +(n)=o(n). It remains to use Theorem 15.2.6. The corollary is proved. □
We now return to Theorem 15.2.6 for arbitrary stopping times. It turns out that, based on the Cramér transform introduced in Sect. 9.1, one can complement its assertions without using any martingale techniques.
Together with the original distribution P of the sequence \(\{\zeta_{k}\}_{k=1}^{\infty}\) we introduce the family of distributions P_λ of this sequence in \(\langle\mathbb{R}^{\infty}, \mathfrak{B}^{\infty}\rangle\) (see Sect. 5.5) generated by the finite-dimensional distributions
$$\mathbf{P}_\lambda(\zeta_1\in dx_1,\ldots,\zeta_n\in dx_n)=\prod_{k=1}^{n}\frac{e^{\lambda x_k}}{\psi(\lambda)}\,\mathbf{P}(\zeta_1\in dx_k).$$
This is the Cramér transform of the distribution P.
Theorem 15.2.7
Let ν be an arbitrary stopping time. Then, for any λ∈B,
$$ \mathbf{E}\bigl(e^{\lambda S_{\nu}}\psi^{-\nu}(\lambda);\,\nu<\infty\bigr)=\mathbf{P}_\lambda(\nu<\infty). $$(15.2.28)
Proof
Since {ν=n}∈σ(ζ_1,…,ζ_n), there exists a Borel set \(D_{n}\subset\mathbb{R}^{n}\) such that
$$\{\nu=n\}=\bigl\{(\zeta_1,\ldots,\zeta_n)\in D_n\bigr\}.$$
Further,
$$\mathbf{E}\bigl(e^{\lambda S_{\nu}}\psi^{-\nu}(\lambda);\,\nu<\infty\bigr)=\sum_{n=0}^{\infty}\psi^{-n}(\lambda)\,\mathbf{E}\bigl(e^{\lambda S_{n}};\,(\zeta_1,\ldots,\zeta_n)\in D_n\bigr)=\sum_{n=0}^{\infty}\mathbf{P}_\lambda(\nu=n)=\mathbf{P}_\lambda(\nu<\infty),$$
where we used the fact that, by the definition of \(\mathbf{P}_\lambda\),
$$\mathbf{P}_\lambda(\nu=n)=\psi^{-n}(\lambda)\,\mathbf{E}\bigl(e^{\lambda S_{n}};\,(\zeta_1,\ldots,\zeta_n)\in D_n\bigr).$$
This proves the theorem. □
For a given function g(n), consider now the stopping time
$$\nu_g=\inf\bigl\{k\ge1: S_k>g(k)\bigr\}$$
(cf. Corollary 15.2.3). The assertion of Theorem 15.2.7 can be obtained in that case in the following way. Denote by E λ the expectation with respect to the distribution P λ .
Corollary 15.2.4
1. If g +(n)=max(0,g(n))=o(n) as n→∞ and λ∈(λ 0,λ +]∩B, then one has P λ (ν g <∞)=1 in relation (15.2.28).
2. If g(n)≥0 and λ<λ 0, then P λ (ν g <∞)<1.
3. For λ=λ_0, the distribution \(\mathbf{P}_{\lambda _{0}}\) of the variable ν can either be proper (when one has \(\mathbf{P}_{\lambda_{0}}(\nu_{g}<\infty)=1\)) or improper \((\mathbf{P}_{\lambda_{0}}(\nu_{g}<\infty)<1)\). If λ_0∈(λ_−,λ_+), g(n)<(1−ε)σ(2n log log n)^{1/2} for all n≥n_0, starting from some n_0, and \(\sigma^{2}=\mathbf{E}_{\lambda_{0}}\zeta^{2}_{1}\), then \(\mathbf{P}_{\lambda_0}(\nu_{g}<\infty)=1\).
But if λ_0∈(λ_−,λ_+), g(n)≥0, and g(n)≥(1+ε)σ(2n log log n)^{1/2} for n≥n_0, then \(\mathbf{P}_{\lambda_0}(\nu_{g}<\infty)<1\) (we exclude the trivial case ζ_k ≡0).
Proof
Since \(\mathbf{E}_{\lambda}\zeta_{k}=\frac{\psi'(\lambda)}{\psi(\lambda )}\), the expectation E_λ ζ_k is of the same sign as the difference λ−λ_0, and \(\mathbf{E}_{\lambda_{0}}\zeta_{k}=0\) (ψ′(λ_0)=0 if λ_0∈(λ_−,λ_+)). Hence the first assertion follows from the relations
$$\mathbf{P}_\lambda(\nu_g>n)\le\mathbf{P}_\lambda\bigl(X_n\le g(n)\bigr)\to0$$
as n→∞ by the law of large numbers for the sums \(X_{n}=\sum_{k=1}^{n} \zeta_{k}\), since E λ ζ k >0.
The second assertion is a consequence of the strong law of large numbers since E λ ζ k <0 and hence P λ (ν=∞)=P(sup n X n ≤0)>0.
The last assertion of the corollary follows from the law of the iterated logarithm which we prove in Sect. 20.2. The corollary is proved. □
The condition g(n)≥0 of part 2 of the corollary can clearly be weakened to the condition g(n)=o(n), P(ν>n)>0 for any n>0. The same is true for part 3.
An assertion similar to Corollary 15.2.4 is also true for the (stopping) time \(\nu_{g_{-}, g_{+}}\) of the first passage over one of the two boundaries g_±(n)=o(n):
$$\nu_{g_-,g_+}=\inf\bigl\{k\ge1: S_k>g_+(k)\ \text{or}\ S_k<g_-(k)\bigr\}.$$
Corollary 15.2.5
For λ∈B∖{λ 0}, we have \(\mathbf{P}_{\lambda}(\nu_{g_{-}, g_{+}}<\infty)=1\).
If λ=λ 0∈(λ −,λ +), then the P λ -distribution of ν may be either proper or improper.
If, for some n 0>2,
for n≥n 0 then \(\mathbf{P}_{\lambda_{0}}(\nu_{g_{-},g_{+}}<\infty)=1\).
If g ±(n)≷0 and, additionally,
for n≥n 0 then \(\mathbf{P}_{\lambda_{0}}(\nu_{g_{-},g_{+}}<\infty)<1\).
Proof
The first assertion follows from Corollary 15.2.4 applied to the sequences {±X n }. The second is a consequence of the law of the iterated logarithm from Sect. 20.2. □
We now consider several relations following from Corollaries 15.2.3, 15.2.4 and 15.2.5 (from identity (15.2.26)) for the random variables ν=ν g and \(\nu=\nu_{g_{-},g_{+}}\).
Let a<0 and ψ(λ +)≥1. Since ψ′(0)=a<0 and the function ψ(λ) is convex, the equation ψ(λ)=1 will have a unique root μ>0 in the domain λ>0. Setting λ=μ in (15.2.26) we obtain the following.
Corollary 15.2.6
If a<0 and ψ(λ_+)≥1 then, for the stopping times ν=ν_g and \(\nu= \nu_{g_{-}, g_{+}}\), we have the equality
$$\mathbf{E}\bigl(e^{\mu S_{\nu}};\,\nu<\infty\bigr)=1.$$
Remark 15.2.3
For x>0, put (as in Chap. 10) η(x):=inf{k:S k >x}. Since S η(x)=x+χ(x), where χ(x):=S η(x)−x is the overshoot over the level x, Corollary 15.2.6 implies
Note that P(η(x)<∞)=P(S>x), where S=sup k≥0 S k . Therefore, Theorem 12.7.4 and (15.2.29) imply that, as x→∞,
The last convergence relation corresponds to the fact that the limiting conditional distribution (as x→∞) G of χ(x) exists given η(x)<∞. If we denote by χ a random variable with the distribution G then (15.2.30) will mean that c=[E e μχ]−1<1. This provides an interpretation of the constant c that is different from the one in Theorem 12.7.4.
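The root μ of the equation ψ(λ)=1 appearing above is easy to locate numerically. As an illustration of ours (not from the text): for Gaussian increments ζ∼N(a,σ 2) one has ψ(λ)=e^{aλ+λ 2σ 2/2}, so the positive root is μ=−2a/σ 2. A minimal stdlib-only sketch, using the convexity of ψ together with ψ(0)=1, ψ′(0)=a<0:

```python
import math

def psi(lam, a=-0.5, sigma2=1.0):
    # Moment generating function E exp(lambda * zeta) for zeta ~ N(a, sigma2).
    return math.exp(a * lam + 0.5 * sigma2 * lam ** 2)

def find_mu(a=-0.5, sigma2=1.0, lo=1e-9, hi=10.0, tol=1e-12):
    # psi is convex with psi(0) = 1 and psi'(0) = a < 0, so psi(lambda) = 1
    # has a unique root mu > 0; locate it by bisection.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if psi(mid, a, sigma2) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(find_mu())  # analytically mu = -2a/sigma^2 = 1 here
```

The function names are ours; any convex ψ with negative derivative at zero and ψ(λ+)≥1 can be substituted.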
In Corollary 15.2.6 we “eliminated” the “component” ψ ν(λ) in identity (15.2.26). “Elimination” of the other component \(e^{\lambda S_{\nu}}\) is possible only in some special cases of random walks, such as the so-called skip-free walks (see Sect. 12.8) or walks with exponentially (or geometrically) distributed \(\zeta^{+}_{k} = \max(0, \zeta_{k})\) or \(\zeta^{-}_{k} = - \min(0, \zeta_{k})\). We will illustrate this with two examples.
Example 15.2.3
We return to the ruin problem discussed in Example 15.2.2. In that case, Corollary 15.2.4 gives, for g −(n):=−z 1 and g +(n)=z 2, that
In particular, for z 1=z 2=z and p=1/2, we have by symmetry that
Let λ(s) be the unique positive solution of the equation sψ(λ)=1, s∈(0,1). Since here \(\psi(\lambda) = \frac{1}{2} (e^{\lambda} + e^{-\lambda})\), solving the quadratic equation yields
Identity (15.2.31) now gives
We obtain an explicit form of the generating function of the random variable ν, which enables us to find the probabilities P(ν=n), n=1,2,… by expanding elementary functions into series.
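The symmetric case of Example 15.2.3 can be cross-checked without generating functions. First-step analysis gives h(x)=1+½h(x−1)+½h(x+1) with h(−z 1)=h(z 2)=0, whose solution is h(x)=(z 1+x)(z 2−x), so E ν=z 1 z 2 from the origin. A sketch of ours (not the book's method) that solves this small linear system directly:

```python
def expected_ruin_time(z1, z2, p=0.5):
    # First-step analysis for the exit time nu of a +/-1 walk from (-z1, z2):
    # h(x) = 1 + p*h(x+1) + (1-p)*h(x-1), with h(-z1) = h(z2) = 0.
    states = list(range(-z1 + 1, z2))            # interior states
    n = len(states)
    idx = {x: i for i, x in enumerate(states)}
    A = [[0.0] * n for _ in range(n)]
    b = [1.0] * n
    for i, x in enumerate(states):
        A[i][i] = 1.0
        if x + 1 in idx:
            A[i][idx[x + 1]] = -p
        if x - 1 in idx:
            A[i][idx[x - 1]] = -(1.0 - p)
    # Gaussian elimination with partial pivoting, then back substitution.
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (b[r] - sum(A[r][k] * h[k] for k in range(r + 1, n))) / A[r][r]
    return h[idx[0]]

print(expected_ruin_time(5, 5))  # symmetric case: E nu = z1 * z2 = 25
```

The function name is ours. For p≠1/2 the same code applies, though the closed form (z 1+x)(z 2−x) does not.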
Example 15.2.4
Simple explicit formulas can also be obtained from Wald’s identity in the problem with one boundary, where ν=ν g , g(n)=z. In that case, the class of distributions of ζ k could be wider than in Example 15.2.3. Suppose that one of the two following conditions holds (cf. Sect. 12.8).
1. The walk is arithmetic and skip-free, i.e. the ζ k are integers, P(ζ k =1)>0 and P(ζ k ≥2)=0.
2. The walk is right exponential, i.e.
$$ \mathbf{P}(\zeta_k > t) = c e^{- \alpha t} \qquad(15.2.32) $$
either for all t>0 or for t=0,1,2,… if the walk is integer-valued (the geometric distribution).
The random variable ν g will be proper if and only if E ζ k =ψ′(0)≥0 (see Chaps. 9 and 12). For skip-free random walks, Wald’s identity (15.2.26) yields (g(n)=z>0, S ν =z)
For s≤1, the equation ψ(λ)=s −1 (cf. Example 15.2.3) has in the domain λ>λ 0 a unique solution λ(s). Therefore identity (15.2.33) can be written as
This statement implies a number of results from Chaps. 9 and 12. Many properties of the distribution of ν:=ν z can be derived from this identity, in particular, the asymptotics of P(ν z =n) as z→∞, n→∞. One way of finding these asymptotics is already known to us: it consists of using Theorem 12.8.4, which implies
and the local Theorem 9.3.4 providing the asymptotics of P(S n =z). An alternative approach to the asymptotics of P(ν z =n) is to use relation (15.2.34) together with the inversion formula; in that case there arises an integral of the form
where the integrand s −n e −zμ(s), after the change of variable μ(s)=λ (or s=ψ(λ)−1), takes the form
The integrand in the inversion formula for the probability P(S n =z) has the same form. This probability has already been studied quite well (see Theorem 9.3.4); its exponential part has the form e −nΛ(α), where α=z/n, Λ(α)=sup λ (αλ−lnψ(λ)) is the large deviation rate function (see Sect. 9.1 and the footnote for Definition 9.1.1). A more detailed study of the inversion formula (15.2.36) allows us to obtain (15.2.35).
Similar relations can be obtained for random walks with exponential right distribution tails. Let, for example, (15.2.32) hold for all t>0. Then the conditional distribution P(S ν >t|ν=n,S n−1=x) coincides with the distribution
and clearly depends neither on n nor on x. This means that ν and S ν are independent, S ν =z+γ, \(\gamma \mathbin {{\subset }\hspace {-.7em}{=}}{\boldsymbol{\Gamma}}_{\alpha}\),
where λ(s) is, as before, the only solution to the equation ψ(λ)=s −1 in the domain λ>λ 0. This implies the same results as (15.2.34).
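The independence of ν and S ν asserted above rests on the memorylessness of the exponential right tail: the overshoot χ(z) is exactly Γ α-distributed, whatever the level. A Monte Carlo sketch of ours (the specific two-sided exponential increment distribution is chosen only for illustration):

```python
import random

def overshoot(z, alpha, beta, rng):
    # Walk with increments zeta = Exp(alpha) - Exp(beta); for beta > alpha the
    # drift 1/alpha - 1/beta is positive, so the level z is crossed a.s.
    s = 0.0
    while s <= z:
        s += rng.expovariate(alpha) - rng.expovariate(beta)
    return s - z  # the overshoot chi(z)

rng = random.Random(0)
samples = [overshoot(3.0, 1.0, 2.0, rng) for _ in range(4000)]
m = sum(samples) / len(samples)
print(m)  # memorylessness: chi(z) ~ Exp(alpha), so the mean is near 1/alpha = 1
```

Since the crossing jump is exponential upward, conditioning on the pre-crossing position leaves the excess over the level exactly Exp(α), independently of when the crossing occurs.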
If P(ζ k >t)=c 1 e −αt and P(ζ k <−t)=c 2 e −βt, t>0, then, in the problem with two boundaries, we obtain for \(\nu= \nu_{g_{-}, g_{+}}\), g +(n)=z 2 and g −(n)=−z 1 in exactly the same way from (15.2.26) that
15.3 Inequalities
15.3.1 Inequalities for Martingales
First of all, we note that the property E X n ≤1 of the sequence \(X_{n} = {e^{\lambda S_{n}}}{\psi_{0}(\lambda)^{-n}}\), which forms a supermartingale for an appropriate function ψ 0(λ), remains true when n is replaced with a stopping time ν (an analogue of inequality (15.2.24)), and this holds in a much more general setting than that of Theorem 15.2.6: the ζ k may now be dependent.
Let, as before, \(\{\mathfrak{F}_{n}\}\) be an increasing sequence of σ-algebras, and ζ n be \(\mathfrak{F}_{n}\)-measurable random variables. Suppose that a.s.
This condition is always met if a.s.
In that case the sequence \(X_{n} = e^{\lambda S_{n}} \psi^{-n}_{0}(\lambda)\) forms a supermartingale:
Theorem 15.3.1
Let (15.3.1) hold and ν be a stopping time. Then inequalities (15.2.24) and (15.2.25) will hold true with ψ replaced by ψ 0.
The Proof
of the theorem repeats almost verbatim that of Theorem 15.2.6. □
Now we will obtain inequalities for the distribution of
X n being an arbitrary submartingale.
Theorem 15.3.2
(Doob)
Let \(\{ X_{n}, \mathfrak{F}_{n} ; \, n \geq0 \}\) be a nonnegative submartingale. Then, for all x≥0 and n≥0,
Proof
Let
It is obvious that n and ν(n) are stopping times, ν(n)≤n, and therefore, by Theorem 15.2.1 (see (15.2.3) for ν 2=n, ν 1=ν(n)),
Observing that \(\{ \overline{X}_{n} > x \} = \{ X_{\nu(n)} > x \}\), we have from Chebyshev’s inequality that
The theorem is proved. □
Theorem 15.3.2 implies the following.
Theorem 15.3.3
(The second Kolmogorov inequality)
Let \(\{ X_{n}, \mathfrak{F}_{n} ; \, n \geq0 \}\) be a martingale with a finite second moment \(\mathbf{E}X^{2}_{n}\). Then \(\{ X^{2}_{n}, \mathfrak{F}_{n} ; \, n \geq0 \}\) is a submartingale and by Theorem 15.3.2
Originally A.N. Kolmogorov established this inequality for sums X n =ξ 1+⋯+ξ n of independent random variables ξ n . Theorem 15.3.3 extends Kolmogorov’s proof to the case of submartingales and refines Chebyshev’s inequality.
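For sums of independent ±1 variables the second Kolmogorov inequality can be checked exactly by enumerating all 2 n paths. A small sketch of ours comparing P(max k≤n |S k |≥x) with E S n 2/x 2=n/x 2:

```python
from itertools import product

def max_abs_exceed_prob(n, x):
    # Exact P(max_{k<=n} |S_k| >= x) for a fair +/-1 walk, by enumeration.
    count = 0
    for signs in product((-1, 1), repeat=n):
        s, m = 0, 0
        for step in signs:
            s += step
            m = max(m, abs(s))
        if m >= x:
            count += 1
    return count / 2 ** n

n, x = 10, 4
p = max_abs_exceed_prob(n, x)
print(p, n / x ** 2)  # Kolmogorov: p <= E S_n^2 / x^2 = n / x^2
```

Here the inequality is strict: the enumeration gives a probability well below the bound n/x 2=0.625.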
The following generalisation of Theorem 15.3.3 is also valid.
Theorem 15.3.4
If \(\{ X_{n}, \mathfrak{F}_{n} ; \,n \geq0 \}\) is a martingale and E|X n |p<∞, p≥1, then \(\{ |X_{n}|^{p}, \mathfrak{F}_{n} ; \, n \geq0 \}\) forms a nonnegative submartingale and, for all x>0,
If \(\{ X_{n}, \mathfrak{F}_{n} ; \, n \geq0\}\) is a submartingale, \(\mathbf{E}e^{\lambda X_{n}} < \infty\), λ>0, then \(\{ e^{\lambda X_{n}} , \mathfrak {F}_{n} ; \, n\ge0 \}\) also forms a nonnegative submartingale,
Both Theorem 15.3.4 and Theorem 15.3.3 immediately follow from Lemma 15.1.3 and Theorem 15.3.2.
If \(X_{n} = S_{n} = \sum^{n}_{k=1} \zeta_{k}\), where ζ k are independent, identically distributed and satisfy the Cramér condition: λ +=sup{λ:ψ(λ)<∞}>0, then, with the help of the fundamental Wald identity, one can obtain sharper inequalities for \(\mathbf{P}(\overline{X}_{n} > x)\) in the case a=E ζ k <0.
Recall that, in the case a=ψ′(0)<0, the function \(\psi (\lambda) = \mathbf{E}e^{\lambda\zeta_{k}}\) decreases in a neighbourhood of λ=0, and, provided that ψ(λ +)≥1, the equation ψ(λ)=1 has a unique solution μ in the domain λ>0.
Let ζ be a random variable having the same distribution as ζ k . Put
If, for instance, P(ζ>t)=ce −αt for t>0 (in this case necessarily α>μ in (15.2.32)), then
A similar equality holds for integer-valued ζ with a geometric distribution.
For other distributions, one has ψ +>ψ −.
Under the above conditions, one has the following assertion which supplements Theorem 12.7.4 for the distribution of the random variable S=sup k S k .
Theorem 15.3.5
If a=E ζ<0 then
This theorem implies that, in the case of exponential right tails of the distribution of ζ (see (15.2.32)), inequalities (15.3.2) become the exact equality
(The same result was obtained in Example 12.5.1.) This means that inequalities (15.3.2) are unimprovable. Since \(\overline{S}_{n} = \max_{k \leq n} S_{k} \leq S\), relation (15.3.2) implies that, for any n,
Proof of Theorem 15.3.5
Set ν:=∞ if S=sup k≥0 S k ≤x, and put ν:=η(x)=min{k: S k >x} otherwise. Further, let χ(x):=S η(x)−x be the excess of the level x. We have
Similarly,
Next, by Corollary 15.2.6,
Because P(ν<∞)=P(S>x), we get from this the right inequality of Theorem 15.3.5. The left inequality is obtained in the same way. The theorem is proved. □
Remark 15.3.1
We proved Theorem 15.3.5 with the help of the fundamental Wald identity. But there is a direct proof based on the following relations:
Here the random variables e λχ(x)I(ν=k) and S n −S k are independent and, as before,
Therefore, for all λ such that ψ(λ)≤1,
Hence we obtain
Since the right-hand side does not depend on n, the same inequality also holds for P(S>x). The lower bound is obtained in a similar way. One just has to show that, in the original equality (cf. (15.3.3))
one has \(\mathbf{E}(e^{\lambda S_{n}} ; \, \nu> n) = o (1)\) as n→∞ for λ=μ, which we did in Sect. 15.2.
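The exponential bound of type (15.3.2) is easy to test numerically. In the hedged sketch below (our illustration, with an arbitrarily chosen step distribution) the increments are N(−1/2,1), for which ψ(λ)=e^{λ 2/2−λ/2} and hence μ=1, and a Monte Carlo estimate of P(sup k≤n S k >x) is compared with e^{−μx}:

```python
import math, random

def sup_exceeds(x, n, drift, rng):
    # One path of S_k with N(drift, 1) steps; does sup_{k<=n} S_k exceed x?
    s = 0.0
    for _ in range(n):
        s += rng.gauss(drift, 1.0)
        if s > x:
            return True
    return False

# For N(-1/2, 1) increments, psi(lam) = exp(lam^2/2 - lam/2), so psi(1) = 1
# and mu = 1; the bound then reads P(sup_k S_k > x) <= e^{-mu x}.
rng = random.Random(1)
x, runs = 2.0, 5000
p_hat = sum(sup_exceeds(x, 200, -0.5, rng) for _ in range(runs)) / runs
print(p_hat, math.exp(-x))  # the estimate sits below the bound
```

The gap between the estimate and e^{−μx} reflects the factor c<1 coming from the overshoot, as in Theorem 15.3.5.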
15.3.2 Inequalities for the Number of Crossings of a Strip
We now return to arbitrary submartingales X n and prove an inequality that will be necessary for the convergence theorems of the next section. It concerns the number of crossings of a strip by the sequence X n . Let a<b be given numbers. Set ν 0=0,
We put ν m :=∞ if the path {X n } for n≥ν m−1 never crosses the corresponding level. Using this notation, one can define the number of upcrossings of the strip (interval) [a,b] by the trajectory X 0,…,X n as the random variable
Set (a)+=max(0,a).
Theorem 15.3.6
(Doob)
Let \(\{X_{n}, \mathfrak{F}_{n} ; \, n \ge0 \}\) be a submartingale. Then, for all n,
Clearly, inequality (15.3.4) only requires that the submartingale \(\{X_{k} ,\mathfrak{F}_{k} ; \, 0 \le k \le n \}\) be given.
Proof
The random variable ν(a,b; n) coincides with the number of upcrossings of the interval [0,b−a] by the sequence (X n −a)+. Now \(\{(X_{n} - a)^{+} , \mathfrak{F}_{n} ; \, n \ge0 \}\) is a nonnegative submartingale (see Example 15.1.4) and therefore, without loss of generality, one can assume that a=0 and X n ≥0, and aim to prove that
Let
In Fig. 15.1, ν 1=2, ν 2=5, ν 3=8; η j =0 for j≤2, η j =1 for 3≤j≤5 etc. It is not hard to see (using the Abel transform) that (with X 0=0, η 0=1)
Moreover (here \(\mathcal{N}_{1}\) denotes the set of odd numbers),
Therefore, by virtue of the relation \(\mathbf{E}(X_{j} | \mathfrak {F}_{j-1}) - X_{j-1} \ge0\), we obtain
The theorem is proved.
□
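The upcrossing inequality can be verified exactly for small walks. The sketch below (ours, not the book's) enumerates all ±1 paths of length n=10 starting at X 0=0, counts the completed upcrossings of [a,b]=[0,2] by each path (a descent to level ≤a followed by an ascent to level ≥b), and compares the mean with E(X n −a)+/(b−a):

```python
from itertools import product

def upcrossings(path, a, b):
    # Count completed upcrossings of [a, b]: a visit to level <= a
    # followed by a later visit to level >= b, repeated.
    count, below = 0, False
    for x in path:
        if not below and x <= a:
            below = True
        elif below and x >= b:
            count += 1
            below = False
    return count

n, a, b = 10, 0, 2
total_up = total_pos = 0.0
for signs in product((-1, 1), repeat=n):
    s, path = 0, [0]
    for step in signs:
        s += step
        path.append(s)
    total_up += upcrossings(path, a, b)
    total_pos += max(s - a, 0)
e_nu = total_up / 2 ** n
bound = total_pos / 2 ** n / (b - a)
print(e_nu, bound)  # Doob: E nu(a,b;n) <= E (X_n - a)^+ / (b - a)
```

A fair ±1 walk is a martingale, hence a submartingale, so the hypothesis of Theorem 15.3.6 is satisfied.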
15.4 Convergence Theorems
Theorem 15.4.1
(Doob’s martingale convergence theorem)
Let
be a submartingale. Then
(1) The limit X −∞:=lim n→−∞ X n exists a.s., \(\mathbf{E}X_{-\infty}^{+} < \infty\), and the process \(\{X_{n} , \mathfrak{F}_{n} ; \, {-}\infty\le n < \infty\}\) is a submartingale.
(2) If \(\sup_{n} \mathbf{E}X_{n}^{+} < \infty\) then X ∞:=lim n→∞ X n exists a.s. and \(\mathbf{E}X_{\infty}^{+} < \infty\). If, moreover, sup n E|X n |<∞ then E|X ∞|<∞.
(3) The random sequence \(\{X_{n} , \mathfrak{F}_{n} ; \, -\infty\le n \le\infty\}\) forms a submartingale if and only if the sequence \(\{ X_{n}^{+} \}\) is uniformly integrable.
Proof
(1) Since
(here the limits are taken as n→−∞), the assumption on divergence with positive probability
means that there exist rational numbers a<b such that
Let ν(a,b; m) be the number of upcrossings of the interval [a,b] by the sequence Y 1=X −m ,…,Y m =X −1 and ν(a,b)=lim m→∞ ν(a,b;m). Then (15.4.1) means that
By Theorem 15.3.6 (applied to the sequence Y 1,…,Y m ),
Inequality (15.4.4) contradicts (15.4.2) and hence proves that
Moreover, by the Fatou–Lebesgue theorem (\(X_{-\infty}^{+} := \liminf X_{n}^{+}\)),
Here the second inequality follows from the fact that \(\{ X_{n}^{+}, \mathfrak{F}_{n} \}\) is also a submartingale (see Lemma 15.1.3) and therefore \(\mathbf{E}X_{n}^{+} \uparrow\).
By Lemma 15.1.2, to prove that \(\{X_{n}, \mathfrak{F}_{n} ;\, -\infty \le n < \infty\}\) is a submartingale, it suffices to verify that, for any \(A \in\mathfrak{F}_{-\infty} \subset\mathfrak{F}\),
Set X n (a):=max(X n ,a). By Lemma 15.1.4, \(\{ X_{n}(a), \mathfrak{F}_{n} ; \, n \le0 \}\) is a uniformly integrable submartingale. Therefore, for any −∞<k<n,
Letting a→−∞ we obtain (15.4.6) from the monotone convergence theorem.
(2) The second assertion of the theorem is proved in the same way. One just has to replace the right-hand sides of (15.4.3) and (15.4.4) with \(\mathbf{E}X_{n}^{+}\) and \(\sup_{n} \mathbf{E}X_{n}^{+}\), respectively. Instead of (15.4.5) we get (the limits here are as n→∞)
and if sup n E|X n |<∞ then
(3) The last assertion of the theorem is proved in exactly the same way as the first one—the uniform integrability enables us to deduce along with (15.4.7) that, for any \(A \in\mathfrak{F}_{n}\),
The converse part of the third assertion of the theorem follows from Lemma 15.1.4. The theorem is proved. □
Now we will obtain some consequences of Theorem 15.4.1.
So far (see Sect. 4.8), while studying convergence of conditional expectations, we dealt with expectations of the form \(\mathbf{E}(X_{n} | \mathfrak{F})\). Now we can obtain from Theorem 15.4.1 a useful theorem on convergence of conditional expectations of another type.
Theorem 15.4.2
(Lévy)
Let a nondecreasing family \(\mathfrak{F}_{1} \subseteq\mathfrak{F}_{2} \subseteq\cdots \subseteq\mathfrak{F}\) of σ-algebras and a random variable ξ, with E|ξ|<∞, be given on a probability space \(\langle\varOmega ,\mathfrak{F} ,\mathbf{P}\rangle\). Let, as before, \(\mathfrak{F}_{\infty} := \sigma(\bigcup_{n} \mathfrak{F}_{n})\) be the σ-algebra generated by events from \(\mathfrak{F}_{1}, \mathfrak{F}_{2}, \ldots\) . Then, as n→∞,
Proof
Set \(X_{n} := \mathbf{E}(\xi| \mathfrak{F}_{n})\). We already know (see Example 15.1.3) that the sequence \(\{ X_{n}, \mathfrak{F}_{n} ; \, 1 < n \le\infty\}\) is a martingale and therefore, by Theorem 15.4.1, the limit lim n→∞ X n =X (∞) exists a.s. It remains to prove that \(X_{(\infty)} = \mathbf{E}(\xi| \mathfrak{F}_{\infty})\) (i.e., that X (∞)=X ∞). Since \(\{ X_{n}, \mathfrak{F}_{n} ; \, 1 \le n \le\infty\} \) is by Lemma 15.1.4 a uniformly integrable martingale,
for \(A \in\mathfrak{F}_{k}\) and any k=1,2,… This means that the left- and right-hand sides of the last relation, being finite measures, coincide on the algebra \(\bigcup_{n=1}^{\infty} \mathfrak {F}_{n}\). By the theorem on extension of a measure (see Appendix 1), they will coincide for all \(A \in\sigma(\bigcup_{n=1}^{\infty} \mathfrak {F}_{n}) = \mathfrak{F}_{\infty}\). Therefore, by the definition of conditional expectation,
The theorem is proved. □
We could also note that the uniform integrability of \(\{ X_{n}, \mathfrak{F}_{n} ;\, 1 \le n \le\infty\}\) implies that the convergence \(\stackrel{\mathit{a}.\mathit{s}.}{\longrightarrow}\) in Theorem 15.4.2 can be replaced by \(\stackrel{(1)}{\longrightarrow}\).
Theorem 15.4.1 implies the strong law of large numbers. Indeed, turn to our Example 15.1.4. By Theorem 15.4.1, the limit X −∞=lim n→−∞ X n =lim n→∞ n −1 S n exists a.s. and is measurable with respect to the tail (trivial) σ-algebra, and therefore it is constant with probability 1. Since E X −∞=E ξ 1, we have \(n^{-1}{S_{n}}\stackrel{\mathit {a}.\mathit{s}.}{\longrightarrow}\mathbf{E}\xi_{1}\).
One can also obtain some extensions of the theorems on series convergence of Chap. 11 to the case of dependent variables. Let
and X n form a submartingale (\(\mathbf{E}(\xi_{n+1} | \mathfrak {F}_{n}) \ge0\)). Let, moreover, E|X n |<c for all n and for some c<∞. Then the limit S ∞=lim n→∞ S n exists a.s. (As well as Theorem 15.4.1, this assertion is a generalisation of the monotone convergence theorem. The crucial role is played here by the condition that E|X n | is bounded.) In particular, if ξ k are independent, E ξ k =0, and the variances \(\sigma_{k}^{2}\) of ξ k are such that \(\sum_{k=1}^{\infty} \sigma_{k}^{2} < \sigma^{2} <\infty\), then
and therefore \(S_{n}\stackrel{\mathit{a}.\mathit{s}.}{\longrightarrow }S_{\infty}\). Thus we obtain, as a consequence, the Kolmogorov theorem on series convergence.
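As a numerical illustration of the series theorem (ours, with an arbitrarily chosen sequence): for ξ k =±1/k with equal probabilities one has E ξ k =0 and \(\sum_k \sigma_k^2 = \sum_k k^{-2} < \infty\), and a single simulated path already shows the partial sums settling down:

```python
import random

rng = random.Random(42)
# xi_k = +/- 1/k with equal probability: E xi_k = 0, Var xi_k = 1/k^2, and
# sum_k Var xi_k < infinity, so S_n converges a.s. by the series theorem.
s, tail = 0.0, []
for k in range(1, 2001):
    s += rng.choice((-1.0, 1.0)) / k
    if k > 1000:
        tail.append(s)
fluct = max(tail) - min(tail)
print(fluct)  # the partial sums S_1001, ..., S_2000 are nearly constant
```

The total standard deviation of the tail \(\sum_{k>1000} \xi_k\) is about \((1/1000)^{1/2} \approx 0.03\), so the observed fluctuation is tiny, in line with a.s. convergence.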
Example 15.4.1
Consider a branching process {Z n } (see Sect. 7.7). We know that Z n admits a representation
where the ζ k are identically distributed integer-valued random variables independent of each other and of Z n−1, ζ k being the number of descendants of the k-th particle from the (n−1)-th generation. Assuming that Z 0=1 and setting μ:=E ζ k , we obtain
This implies that X n =Z n /μ n is a martingale, because
For branching processes we have the following.
Theorem 15.4.3
The sequence X n =μ −n Z n converges almost surely to a proper random variable X with E X<∞. The ch.f. φ(λ) of the random variable X satisfies the equation
where \(p(v)=\mathbf{E}v^{\zeta_{k}}\).
Theorem 15.4.3 means that μ −n Z n has a proper limiting distribution as n→∞.
Proof
Since X n ≥0 and E X n =1, the first assertion follows immediately from Theorem 15.4.1.
Since \(\mathbf{E}z^{Z_{n}}\) is equal to the n-th iteration of the function f(z), for the ch.f. of Z n we have (φ η (λ):=E e iλη)
Because X n ⇒X and the function p is continuous, from this we obtain the equation for the ch.f. of the limiting distribution X:
The theorem is proved. □
In Sect. 7.7 we established that in the case μ≤1 the process Z n becomes extinct with probability 1 and therefore P(X=0)=1. We verify now that, for μ>1, the distribution of X is nondegenerate (not concentrated at zero). It suffices to prove that {X n ,0≤n≤∞} forms a martingale and consequently
By Theorem 15.4.1, it suffices to verify that the sequence X n is uniformly integrable. To simplify the reasoning, we suppose that \(\operatorname{Var}(\zeta _{k} )= \sigma^{2} < \infty\) and show that then \(\mathbf{E}X_{n}^{2} < c <\infty\) (this certainly implies the required uniform integrability of X n , see Sect. 6.1). One can directly verify the identity
Since \(\mathbf{E} [Z_{k}^{2} - (\mu Z_{k-1})^{2} | Z_{k-1} ] = \sigma^{2}Z_{k-1}\) (recall that \(\operatorname{Var}(\eta)= \mathbf{E}(\eta^{2} - (\mathbf{E}\eta)^{2})\)), we have
Thus we have proved that X is a nondegenerate random variable,
From the last relation one can easily obtain that \(\operatorname{Var}( X) = \frac{\sigma^{2}}{\mu(\mu- 1)}\). To this end one can, say, prove that X n is a Cauchy sequence in mean quadratic and hence (see Theorem 6.1.3) \(X_{n} \stackrel{(2)}{\longrightarrow} X\).
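A hedged simulation sketch of Theorem 15.4.3 (our own; the uniform offspring law is chosen only for convenience): with offspring uniform on {0,1,2,3} one has μ=3/2>1 and E X n =1 for all n, and sampling many runs of X n =Z n /μ n shows both the unit mean and the positive probability of extinction (the atom of X at zero):

```python
import random

def normed_generation(n, rng):
    # One run of a Galton-Watson process with offspring uniform on {0,1,2,3}
    # (mean mu = 3/2 > 1); returns X_n = Z_n / mu^n, starting from Z_0 = 1.
    z = 1
    for _ in range(n):
        z = sum(rng.randrange(4) for _ in range(z))
    return z / 1.5 ** n

rng = random.Random(7)
xs = [normed_generation(10, rng) for _ in range(4000)]
mean = sum(xs) / len(xs)
print(mean)  # E X_n = 1 exactly, so the sample mean should be close to 1
```

For this offspring law the extinction probability solves q=(1+q+q 2+q 3)/4, i.e. \(q = \sqrt{2}-1 \approx 0.41\), so a substantial fraction of the runs return X n =0.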
15.5 Boundedness of the Moments of Stochastic Sequences
When one uses convergence theorems for martingales, conditions ensuring boundedness of the moments of stochastic sequences \(\{ X_{n}, \mathfrak{F}_{n}\}\) are of significant interest (recall that the boundedness of E X n is one of the crucial conditions for convergence of submartingales). The boundedness of the moments, in turn, ensures that X n is stochastically bounded, i.e., that sup n P(X n >N)→0 as N→∞. Stochastic boundedness is also of independent interest in those cases where one is unable to prove, for the sequence {X n }, convergence or any other ergodic properties.
For simplicity’s sake, we confine ourselves to considering nonnegative sequences X n ≥0. Of course, if we could prove convergence of the distributions of X n to a limiting distribution, as was the case for Markov chains or submartingales in Theorem 15.4.1, then we would have a more detailed description of the asymptotic behaviour of X n as n→∞. This convergence, however, requires that the sequence X n satisfies stronger constraints than will be used below.
The basic and rather natural elements of the boundedness conditions to be considered below are: the boundedness of the moments of ξ n =X n −X n−1 of the respective orders and the presence of a negative “drift” \(\mathbf{E}(\xi_{n} | {\mathfrak{F}}_{n-1})\) in the domain X n−1>N for sufficiently large N. Such a property has already been utilised for Markov chains; see Corollary 13.7.1 (otherwise the trajectory of X n may go to ∞).
Let us begin with exponential moments. The simplest conditions ensuring the boundedness of \(\sup_{n} \mathbf{E}e^{\lambda X_{n}}\) for some λ>0 are as follows: for all n≥1 and some λ>0 and N<∞,
Theorem 15.5.1
If conditions (15.5.1) and (15.5.2) hold then
Proof
Denote by A n the left-hand side of (15.5.3). Then, by virtue of (15.5.1) and (15.5.2), we obtain
This immediately implies that
The theorem is proved. □
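Theorem 15.5.1 can be illustrated on the reflected walk X n =(X n−1+ζ n )+ with ζ uniform on {−2,−1,0,1} (drift −1/2): the increments have bounded exponential moments and a negative conditional drift whenever X n−1 >0, in the spirit of the sufficient conditions below. The exact distribution recursion in this sketch of ours (the truncation level K is an implementation convenience; the tail mass beyond it is negligible here) shows \(\sup_n \mathbf{E}e^{\lambda X_n}\) staying bounded:

```python
import math

LAM, K, STEPS = 0.3, 120, 300

# X_n = max(0, X_{n-1} + zeta_n), zeta uniform on {-2,-1,0,1} (drift -1/2).
# Exact distribution recursion on {0,...,K}, starting from X_0 = 0.
p = [0.0] * (K + 1)
p[0] = 1.0
moments = []
for _ in range(STEPS):
    q = [0.0] * (K + 1)
    for x, px in enumerate(p):
        if px == 0.0:
            continue
        for step in (-2, -1, 0, 1):
            y = min(K, max(0, x + step))
            q[y] += px / 4.0
    p = q
    moments.append(sum(math.exp(LAM * x) * px for x, px in enumerate(p)))

print(max(moments))  # sup_n E exp(LAM * X_n) stays bounded
```

The sequence of moments increases to a finite stationary value, so the supremum over n is attained in the limit and remains bounded, as the theorem asserts.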
The conditions
are sufficient for (15.5.1) and (15.5.2).
The first condition means that Y n :=(X n +εn) I(X n−1>N) is a supermartingale.
We now prove sufficiency of (15.5.4) and (15.5.5). That (15.5.2) holds is clear. Further, make use of the inequality
which follows from the Taylor formula for e x with the remainder in the Cauchy form:
Then, on the set {X n−1>N}, one has
Since x 2<e λx/2 for all sufficiently large x, it follows by the Hölder inequality, together with (15.5.5), that we will also have
This implies that, for sufficiently small λ, one has on the set {X n−1>N} the inequality
This proves (15.5.1). □
Corollary 15.5.1
If, in addition to the conditions of Theorem 15.5.1, the distribution of X n converges to a limiting distribution: P(X n <t)⇒P(X<t), then
The corollary follows
from the Fatou–Lebesgue theorem (see also Lemma 6.1.1):
□
We now obtain bounds for “conventional” moments. Set
Theorem 15.5.2
Assume that \(\mathbf{E}X^{s}_{0} < \infty\) for some s>1 and there exist N≥0 and ε>0 such that
Then
If, moreover,
for some c 1>0, then
Corollary 15.5.2
If conditions (15.5.6) and (15.5.7) are met and the distribution of X n converges weakly to a limiting distribution: P(X n <t)⇒P(X<t), then E X s−1<∞.
This assertion follows from the Fatou–Lebesgue theorem
(see also Lemma 6.1.1), which implies
□
The assertion of Corollary 15.5.2 is unimprovable. One can see this from the example of the sequence X n =(X n−1+ζ n )+, where \(\zeta_{k}\stackrel{d}{=}\zeta\) are independent and identically distributed. If E ζ k <0 then the limiting distribution of X n coincides with the distribution of S=sup k S k (see Sect. 12.4). From factorisation identities one can derive that E S s−1 is finite if and only if E(ζ +)s<∞. An outline of the proof is as follows. Theorem 12.3.2 implies that \(\mathbf{E} S^{k} = c \, \mathbf{E}(\chi^{k}_{+} ; \, \eta_{+} < \infty)\), c=const<∞. It follows from Corollary 12.2.2 that
where H(x) is the renewal function for the random variable \(-\chi^{0}_{-} \geq0\). Since
(see Theorem 10.1.1 and Lemma 10.1.1; a i , b i are constants), integrating the convolution
by parts we verify that, as x→∞, the left-hand side has the same order of magnitude as \(\int^{\infty}_{0} \mathbf{P}(\zeta> v + x) \, dv\). Hence the required statement follows.
We now return to Theorem 15.5.2. Note that in all of the most popular problems the sequence M s−1(n) behaves “regularly”: either it is bounded or M s−1(n)→∞. Assertion (15.5.8) means that, under the conditions of Theorem 15.5.2, the second possibility is excluded. Condition (15.5.9) ensuring (15.5.10) is also rather broad.
Proof of Theorem 15.5.2
Let for simplicity’s sake s>1 be an integer. We have
If we replace \(\xi^{s-l}_{n}\) for s−l≥2 with |ξ n |s−l then the right-hand side can only increase. Therefore,
where
The moments \(M^{s}(n) = \mathbf{E}X^{s}_{n}\) satisfy the inequalities
Suppose now that (15.5.8) does not hold: M s−1(n)→∞. Then all the more M s(n)→∞ and there exists a subsequence n′ such that M s(n′)>M s(n′−1). Since M l(n)≤[M l+1(n)] l/(l+1), we obtain from (15.5.6) and (15.5.11) that
for sufficiently large n′. This contradicts the assumption that M s(n)→∞ and hence proves (15.5.8).
We now prove (15.5.10). If this relation is not true then there exists a sequence n′ such that M s−1(n′)→∞ and M s(n′)>M s(n′−1)−c 1. It remains to make use of the above argument.
We leave the proof for a non-integer s>1 to the reader (the changes are elementary). The theorem is proved. □
Remark 15.5.1
(1) The assertions of Theorems 15.5.1 and 15.5.2 will remain valid if one requires inequalities (15.5.4) or \(\mathbf{E}(\xi_{n} + \varepsilon| {\mathfrak{F}}_{n-1})\,\mathrm{I}(X_{n-1} > N) \leq0\) to hold not for all n, but only for n≥n 0 for some n 0>1.
(2) As in Theorem 15.5.1, condition (15.5.6) means that the sequence of random variables (X n +εn) I(X n−1>N) forms a supermartingale.
(3) The conditions of Theorems 15.5.1 and 15.5.2 may be weakened by replacing them with “averaged” conditions. Consider, for instance, condition (15.5.1). By integrating it over the set {X n−1>x>N} we obtain
or, which is the same,
The converse assertion that (15.5.12) for all x>N implies relation (15.5.1) is obviously false, so that condition (15.5.12) is weaker than (15.5.1). A similar remark is true for condition (15.5.4).
One has the following generalisations of Theorems 15.5.1 and 15.5.2 to the case of “averaged conditions”.
Theorem 15.5.1A
Let, for some λ>0, N>0 and all x≥N,
Then
Put
Theorem 15.5.2A
Let \(\mathbf{E}X^{s}_{0} < \infty\) and there exist N≥0 and ε>0 such that
Then (15.5.8) holds true. If, in addition, (15.5.9) is valid, then (15.5.10) is true.
The proofs of Theorems 15.5.1A and 15.5.2A
are quite similar to those of Theorems 15.5.1 and 15.5.2. The only additional element in both cases is integration by parts. We will illustrate this with the proof of Theorem 15.5.1A. Consider
From this we find that
□
Note that Theorem 13.7.2 and Corollary 13.7.1 on “positive recurrence” can also be referred to as theorems on boundedness of stochastic sequences.
Notes
1. See, e.g., [12].
References
Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 1. Wiley, New York (1968)
Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 2. Wiley, New York (1971)
Copyright information
© 2013 Springer-Verlag London
Borovkov, A.A. (2013). Martingales. In: Probability Theory. Universitext. Springer, London. https://doi.org/10.1007/978-1-4471-5201-9_15
Print ISBN: 978-1-4471-5200-2
Online ISBN: 978-1-4471-5201-9