
1 Introduction

Recent work of Marc Yor and coauthors [4] has drawn attention to how properties of a martingale are related to its family of marginal distributions. A fundamental result of this kind is Doob’s martingale convergence theorem:

  • if the marginal distributions \((\mu _{n}, n \geq 0)\) of a discrete time martingale \((M_{n}, n \geq 0)\) are such that \(\int \vert x\vert \mu _{n}(dx)\) is bounded in n, then \(M_{n}\) converges almost surely.

Other well-known results relating the behavior of a discrete time martingale \(M_{n}\) to its marginal laws \(\mu _{n}\) are:

  • for each p > 1, the sequence \(\int \vert x\vert ^{p}\mu _{n}(dx)\) is bounded if and only if \(M_{n}\) converges in \(L^{p}\);

  • \(\lim _{y\rightarrow \infty }\sup _{n}\int _{\vert x\vert >y}\vert x\vert \mu _{n}(dx) = 0\), that is, \((M_{n})_{n\geq 0}\) is uniformly integrable, if and only if \(M_{n}\) converges in \(L^{1}\).

We know also from Lévy that if \(\mu _{n}\) is the distribution of a partial sum \(S_{n}\) of independent random variables, and \(\mu _{n}\) converges in distribution as \(n \rightarrow \infty \), then \(S_{n}\) converges almost surely. These results can be found in most modern graduate textbooks in probability; see for instance Durrett [2].

What if the marginals of a martingale converge in distribution? Does that imply the martingale converges almost surely? Báez-Duarte [1] and Gilat [3] gave examples of martingales that converge in probability but not almost surely, so the answer to this question is no. But worse than that: there is a sequence of martingale marginals converging in distribution such that some martingales with these marginals converge almost surely, while others diverge almost surely. So by mixing, the probability of convergence of a martingale with these marginals can be any number in [0, 1]. Moreover, the same phenomenon occurs for convergence in probability: there is a sequence of martingale marginals converging in distribution such that some martingales with these marginals converge in probability, but others do not.

The purpose of this brief note is to record these examples, and to draw attention to the following problems which they raise:

  1. What is a necessary and sufficient condition on martingale marginals for every martingale with these marginals to converge almost surely?

  2. What is a necessary and sufficient condition on martingale marginals for every martingale with these marginals to converge in probability?

Perhaps the condition for almost sure convergence is Doob's \(L^{1}\)-boundedness condition. But this does not seem at all obvious. Moreover, \(L^{1}\)-boundedness is not the right condition for convergence in probability: convergence in distribution to a point mass is obviously sufficient, and this condition can hold for marginals that are not bounded in \(L^{1}\). See also Rao [5] for treatment of some other problems related to non-\(L^{1}\)-bounded martingales.

2 Examples

2.1 Almost Sure Convergence

This construction extends and simplifies the construction by Gilat [3, §2] of a martingale which converges in probability but not almost surely, with increments in the set \(\{-1, 0, 1\}\). See also Báez-Duarte [1] for an earlier construction with unbounded increments, based on the double-or-nothing game instead of a random walk.

Let \((S_{n}, n = 0, 1, 2, \ldots)\) be a simple symmetric random walk started at \(S_{0} = 0\), with \((S_{n+1} - S_{n}, n = 0,1,2,\ldots )\) a sequence of independent U(±1) random variables, where U(±1) is the uniform distribution on the set \(\{\pm 1\}:=\{ -1,+1\}\). Let \(0 = T_{0} < T_{1} < T_{2} < \cdots\) be the successive times n at which \(S_{n} = 0\). By recurrence of the simple random walk, \(P(T_{n} < \infty ) = 1\) for every n. For each \(k = 1, 2, \ldots\) let \(M^{(k)}\) be the process which follows the walk \(S_{n}\) on the random interval \([T_{k-1}, T_{k}]\) of its kth excursion away from 0, and is otherwise identically 0:

$$\displaystyle{M_{n}^{(k)}:= S_{ n}1(T_{k-1} \leq n \leq T_{k})}$$

where 1(⋯) is an indicator random variable with value 1 if ⋯ holds and 0 otherwise. Each of these processes \(M^{(k)}\) is a martingale relative to the filtration \((\mathcal{F}_{n})\) generated by the walk \((S_{n})\), by Doob's optional sampling theorem. Now let \((A_{k})\) be a sequence of events such that the \(\sigma\)-field \(\mathcal{G}_{0}\) generated by these events is independent of the walk \((S_{n}, n \geq 0)\), and set

$$\displaystyle{M_{n}:=\sum _{ k=1}^{\infty }M_{ n}^{(k)}1(A_{ k})}$$

So \(M_{n}\) follows the path of \(S_{n}\) on its kth excursion away from 0 if \(A_{k}\) occurs, and otherwise \(M_{n}\) is identically 0. Let \(\mathcal{G}_{n}\) for n ≥ 0 be the \(\sigma\)-field generated by \(\mathcal{G}_{0}\) and \(\mathcal{F}_{n}\). Then it is clear that \((M_{n},\mathcal{G}_{n})\) is a martingale, no matter what the choice of the sequence of events \((A_{k})\) independent of \((S_{n})\). The distribution of \(M_{n}\) is determined by the formula

$$\displaystyle{P(M_{n} = x) =\sum _{ k=1}^{\infty }P(S_{ n} = x,T_{k-1} \leq n \leq T_{k})P(A_{k})}$$

for all integers x ≠ 0. A family of martingales with the same marginals is thus obtained by varying the structure of dependence between the events \(A_{k}\) for a given sequence of probabilities \(P(A_{k})\). The only way that a path of \(M_{n}\) can converge is if \(M_{n}\) is eventually absorbed in state 0. So if \(N:=\sum _{k}1(A_{k})\) denotes the number of events \(A_{k}\) that occur,

$$\displaystyle{P(M_{n}\mbox{ converges}) = P(N < \infty ).}$$

Now take \(P(A_{k}) = p_{k}\) for a decreasing sequence \(p_{k}\) with limit 0 but \(\sum _{k}p_{k} = \infty \), for instance \(p_{k} = 1/k\). Then \((A_{k})\) can be constructed so that the \(A_{k}\) are mutually independent, in which case \(P(N = \infty ) = 1\) by the Borel-Cantelli lemma. Or these events can be nested:

$$\displaystyle{A_{1} \supseteq A_{2} \supseteq A_{3} \supseteq \cdots }$$

in which case

$$\displaystyle{P(N \geq k) = P(A_{k}) \downarrow 0\mbox{ as $k \rightarrow \infty $},}$$

so \(P(N = \infty ) = 0\) in this case. Thus we obtain a sequence of marginal distributions for a martingale, such that some martingales with these marginals converge almost surely, while others diverge almost surely.
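To make this dichotomy concrete, here is a minimal Python sketch (an illustration, not part of the original argument) of the construction with \(p_{k} = 1/k\): it builds a path of \(M_{n}\) from the walk and a realization of the events \(A_{k}\), under either coupling. The function names and the truncation at K events are illustrative choices.

```python
import random

def independent_events(K):
    # mutually independent A_k with P(A_k) = 1/k; since sum 1/k diverges,
    # the second Borel-Cantelli lemma gives N = sum_k 1(A_k) = infinity a.s.
    return [random.random() < 1.0 / k for k in range(1, K + 1)]

def nested_events(K):
    # nested A_1 >= A_2 >= ... with P(A_k) = 1/k, realized as A_k = {U < 1/k};
    # here P(N >= k) = 1/k, so N is almost surely finite
    u = random.random()
    return [u < 1.0 / k for k in range(1, K + 1)]

def martingale_path(A, n_steps):
    # M_n follows the walk S_n on its kth excursion away from 0 iff A_k occurs,
    # and sits at 0 otherwise; the marginals depend on A only through P(A_k)
    S, k, path = 0, 1, [0]
    for _ in range(n_steps):
        S += random.choice([-1, 1])
        path.append(S if (k <= len(A) and A[k - 1]) else 0)
        if S == 0:
            k += 1  # a return of S to 0 ends the kth excursion
    return path

for make in (independent_events, nested_events):
    N = sum(make(10_000))
    print(make.__name__, "-> N =", N)
```

In the nested coupling the path is typically absorbed at 0 after a few excursions; in the independent coupling it keeps leaving 0 forever, even though both couplings produce the same marginal laws.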

2.2 Convergence in Probability

Let us construct a martingale \(M_{n}\) which converges in distribution but not in probability, following indications of such a construction by Gilat [3, §1].

This will be an inhomogeneous Markov chain with integer values, starting from \(M_{0} = 0\). Its first step will be to \(M_{1}\) with U(±1) distribution. Thereafter, the idea is to force \(M_{n}\) to alternate between the values ±1, with probability increasing to 1 as \(n \rightarrow \infty \). This achieves U(±1) as the limit in distribution, while the alternation prevents convergence in probability. The transition probabilities of \(M_{n}\) are as follows:

$$\displaystyle\begin{array}{rcl} P(M_{n+1} = M_{n} \pm 1\,\vert \,M_{n}) = 1/2\quad \mbox{ if }M_{n}\notin \{\pm 1\}& &{}\end{array}$$
(1)
$$\displaystyle\begin{array}{rcl} P(M_{n+1} = -1\,\vert \,M_{n} = 1) = 1 - 2^{-n}& &{}\end{array}$$
(2)
$$\displaystyle\begin{array}{rcl} P(M_{n+1} = 2^{n+1} - 1\,\vert \,M_{n} = 1) = 2^{-n}& &{}\end{array}$$
(3)
$$\displaystyle\begin{array}{rcl} P(M_{n+1} = +1\,\vert \,M_{n} = -1) = 1 - 2^{-n}& &{}\end{array}$$
(4)
$$\displaystyle\begin{array}{rcl} P(M_{n+1} = -2^{n+1} + 1\,\vert \,M_{n} = -1) = 2^{-n}.& &{}\end{array}$$
(5)

The first line indicates that whenever \(M_{n}\) is away from the two-point set \(\{\pm 1\}\), it moves according to a simple symmetric random walk, until it eventually gets back to \(\{\pm 1\}\) with probability one. Once it is back in \(\{\pm 1\}\), it is forced to alternate between these values, with probability \(1 - 2^{-n}\) of an alternation at step n, compensated by a move to \(\pm (2^{n+1} - 1)\) with probability \(2^{-n}\), which preserves the martingale property. Since the probabilities \(2^{-n}\) are summable, the Borel-Cantelli lemma ensures that with probability one only finitely many exits from \(\{\pm 1\}\) ever occur. After the last of these exits, the martingale eventually returns to \(\{\pm 1\}\) with probability one. From that time onwards, the martingale flips back and forth deterministically between ±1.
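The forced alternation is easy to observe by simulation. Here is a minimal Python sketch (illustrative only, with hypothetical function names) of the chain (1)–(5); it estimates \(P(M_{N} \in \{\pm 1\})\) and the probability of a sign flip at the last step, both of which approach 1:

```python
import random

def alternating_chain_path(n_steps):
    # the chain (1)-(5): simple symmetric random walk off {-1, +1}; from +/-1
    # at time n, alternate with probability 1 - 2^{-n}, otherwise jump to the
    # compensating level +/-(2^{n+1} - 1)
    x, path = 0, [0]
    for n in range(n_steps):
        if x in (1, -1):
            if random.random() < 1 - 2.0 ** (-n):
                x = -x                      # forced alternation
            else:
                x = x * (2 ** (n + 1) - 1)  # rare compensating jump
        else:
            x += random.choice([-1, 1])     # random walk back toward {-1, +1}
        path.append(x)
    return path

N, trials = 300, 4000
paths = [alternating_chain_path(N) for _ in range(trials)]
print(sum(p[N] in (1, -1) for p in paths) / trials)    # near 1: U(+/-1) limit law
print(sum(p[N] == -p[N - 1] for p in paths) / trials)  # near 1: no limit in probability
```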

A slight modification of these transition probabilities gives another martingale with the same marginal distributions which converges almost surely, and hence in probability. With \(M_{0} = 0\) as before, the modified scheme is as follows:

$$\displaystyle\begin{array}{rcl} P(M_{n+1} = M_{n} \pm 1\,\vert \,M_{n}) = 1/2\quad \mbox{ if }M_{n}\notin \{\pm 1\}& &{}\end{array}$$
(6)
$$\displaystyle\begin{array}{rcl} P(M_{n+1} = 1\,\vert \,M_{n} = 1) = 1 - 2^{-n}& &{}\end{array}$$
(7)
$$\displaystyle\begin{array}{rcl} P(M_{n+1} = 2^{n+1} - 1\,\vert \,M_{n} = 1) = 2^{-n}p_{n}& &{}\end{array}$$
(8)
$$\displaystyle\begin{array}{rcl} P(M_{n+1} = -2^{n+1} + 1\,\vert \,M_{n} = 1) = 2^{-n}q_{n}& &{}\end{array}$$
(9)
$$\displaystyle\begin{array}{rcl} P(M_{n+1} = -1\,\vert \,M_{n} = -1) = 1 - 2^{-n}& &{}\end{array}$$
(10)
$$\displaystyle\begin{array}{rcl} P(M_{n+1} = -2^{n+1} + 1\,\vert \,M_{n} = -1) = 2^{-n}p_{n}& &{}\end{array}$$
(11)
$$\displaystyle\begin{array}{rcl} P(M_{n+1} = 2^{n+1} - 1\,\vert \,M_{n} = -1) = 2^{-n}q_{n}& &{}\end{array}$$
(12)

where

$$\displaystyle{p_{n}:= 1/(2 - 2^{-n})\mbox{ and }q_{n}:= 1 - p_{n}}$$

are chosen so that the distribution with probability \(p_{n}\) at \(2^{n+1} - 1\) and \(q_{n}\) at \(-2^{n+1} + 1\) has mean

$$\displaystyle{p_{n}(2^{n+1} - 1) + q_{n}(-2^{n+1} + 1) = 1.}$$

In this modified process, the alternating transition out of the states ±1 is replaced by holding in these states, while the previous compensating moves to \(\pm (2^{n+1} - 1)\) are replaced by nearly symmetric transitions from ±1 to these values. This preserves the martingale property, and also preserves the marginal laws. But the previous argument for eventual alternation now shows that the modified martingale is almost surely eventually absorbed in one of the states ±1. So the modified martingale converges almost surely to a limit which has U(±1) distribution.
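Because only finitely many states are charged at each time, the claim that the two chains share their marginal laws can be checked exactly by propagating the two distributions forward in parallel. A minimal Python sketch (a consistency check, not part of the original text; the function `step` and the horizon of 12 steps are illustrative):

```python
from collections import defaultdict

def step(mu, n, modified):
    # push the marginal mu (state -> probability) one step at time n, using
    # chain (1)-(5) if modified=False and chain (6)-(12) if modified=True
    new = defaultdict(float)
    big = 2 ** (n + 1) - 1           # the compensating level 2^{n+1} - 1
    p_n = 1.0 / (2.0 - 2.0 ** (-n))  # as in the text, with q_n = 1 - p_n
    q_n = 1.0 - p_n
    for x, m in mu.items():
        if x not in (1, -1):
            new[x + 1] += m / 2      # simple symmetric random walk
            new[x - 1] += m / 2
        else:
            s = x
            if not modified:
                new[-s] += m * (1 - 2.0 ** (-n))        # alternate
                new[s * big] += m * 2.0 ** (-n)         # compensating jump
            else:
                new[s] += m * (1 - 2.0 ** (-n))         # hold
                new[s * big] += m * 2.0 ** (-n) * p_n   # nearly symmetric
                new[-s * big] += m * 2.0 ** (-n) * q_n  # jumps to +/-big
    return dict(new)

mu, nu = {0: 1.0}, {0: 1.0}
for n in range(12):
    mu, nu = step(mu, n, False), step(nu, n, True)
    assert all(abs(mu.get(x, 0.0) - nu.get(x, 0.0)) < 1e-12
               for x in set(mu) | set(nu))
print("identical marginals for n = 1, ..., 12")
```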

These martingales \((M_{n})\) have unbounded jumps. Gilat [3, §2] left open the question of whether there exist martingales with uniformly bounded increments which converge in distribution but not in probability. But such martingales can be created by a variation of the first construction of \((M_{n})\) above, as follows.

Run a simple symmetric random walk starting from 0. Each time the random walk makes an alternation between the two states ±1, make the walk delay for a random number of steps in its current state in \(\{\pm 1\}\) before continuing, for some rapidly increasing sequence of random delays. Call the resulting martingale \(M_{n}\). So by construction, \(M_{1}\) has U(±1) distribution,

$$\displaystyle{M_{n} = (-1)^{k-1}M_{ 1}\mbox{ for }S_{k} \leq n \leq T_{k}}$$

for some increasing sequence of randomized stopping times

$$\displaystyle{1 = S_{1} < T_{1} < S_{2} < T_{2} < \cdots \,,}$$

and during the kth crossing interval \([T_{k}, S_{k+1}]\) the process \(M_{n}\) follows a simple random walk path starting in state \((-1)^{k-1}M_{1}\) and stopping when it first reaches state \((-1)^{k}M_{1}\).

The claim is that a suitable construction of the delays \(T_{k} - S_{k}\) will ensure that the distribution of \(M_{n}\) converges to U(±1), while there is almost deterministic alternation for large k of the state \(M_{t_{k}}\) for some rapidly increasing deterministic sequence \(t_{k}\). To achieve this end, let \(t_{1} = 1\) and suppose inductively for \(k = 1, 2, \ldots\) that \(t_{k}\) has been chosen so that

$$\displaystyle{ P(M_{t_{k}} = (-1)^{k-1}M_{ 1}) > 1 -\epsilon _{k}\mbox{ for some $\epsilon _{k} \downarrow 0$ as $k \rightarrow \infty $}. }$$
(13)

Here \(M_{1} \in \{\pm 1\}\) is the first step of the simple random walk. The random number of steps required for a random walk crossing between the states ±1 is a.s. finite. So having defined \(t_{k}\), we can choose an even integer \(t_{k+1}\) so large that \(t_{k+1}/2 > t_{k}\) and all of the following events occur with probability at least \(1 -\epsilon _{k+1}\):

  • \(M_{t_{k+1}/2} = (-1)^{k-1}M_{1}\), meaning that the (k − 1)th crossing between ± 1 has been completed by time \(S_{k} < t_{k+1}/2\);

  • the kth crossing starts at a time \(T_{k}\) that is uniform on \([t_{k+1}/2,t_{k+1})\) given \(S_{k} < t_{k+1}/2\);

  • the kth crossing is completed at time \(S_{k+1} < t_{k+1}\), so \(M_{n} = (-1)^{k}M_{1}\) for \(S_{k+1} \leq n \leq t_{k+1}\).

Moreover, \(t_{k+1}\) can be chosen so large that the uniform random start time of the kth crossing given \(S_{k} < t_{k+1}/2\) ensures that also

$$\displaystyle{P(M_{n} \in \{\pm 1\}) \geq 1 - 2\epsilon _{k}\mbox{ for all }t_{k} \leq n \leq t_{k+1}}$$

because with high probability the length \(S_{k+1} - T_{k}\) of the kth crossing is negligible in comparison with the length \(t_{k+1}/2\) of the interval \([t_{k+1}/2,t_{k+1}]\) in which this crossing is arranged to occur. It follows from this construction that \(M_{n}\) converges in distribution to U(±1), while the forced alternation (13) prevents \(M_{n}\) from having a limit in probability.
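For illustration, here is a minimal Python sketch of one path of this bounded-increment martingale, with a concrete, hypothetical delay schedule \(t(k) = 4^{k+2}\) standing in for the inductively chosen \(t_{k}\); every increment lies in \(\{-1, 0, +1\}\) by construction.

```python
import random

def delayed_walk_path(num_crossings=3, t=lambda k: 4 ** (k + 2)):
    # hold at (-1)^{k-1} M_1, start the kth crossing at a time drawn uniformly
    # from [t(k+1)/2, t(k+1)), then run a simple symmetric random walk until
    # it first hits (-1)^k M_1; t(k) is a hypothetical stand-in for the t_k
    # chosen inductively in the text
    M1 = random.choice([-1, 1])
    path = [0, M1]                      # M_0 = 0, then M_1 with U(+/-1) law
    for k in range(1, num_crossings + 1):
        target = (-1) ** k * M1         # endpoint of the kth crossing
        T_k = random.randrange(t(k + 1) // 2, t(k + 1))
        while len(path) <= T_k:         # hold in the current state up to T_k
            path.append(path[-1])
        while path[-1] != target:       # the kth crossing, +/-1 steps only
            path.append(path[-1] + random.choice([-1, 1]))
    return path

path = delayed_walk_path()
print(len(path))  # the path length is dominated by the long holds at +/-1
```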

A feature of the previous example is that \(\sup _{n}M_{n} = -\inf _{n}M_{n} = \infty \) almost surely, since in the end every step of the underlying simple symmetric random walk is made by the time-changed martingale \(M_{n}\). A similar example can be created from a standard Brownian motion \((B_{t}, t \geq 0)\) using a predictable {0, 1}-valued process \((H_{t}, t \geq 0)\) to create successive switching between and holding in the states ±1, so that the martingale

$$\displaystyle{M_{t}:=\int _{0}^{t}H_{s}\,dB_{s}}$$

converges in distribution to U(±1) while not converging in probability. In this example, \(\int _{0}^{\infty }H_{t}dt =\sup _{t}M_{t} = -\inf _{t}M_{t} = \infty \) almost surely.