Abstract
The chapter begins by introducing the concept of stochastic recursive sequences in Sect. 17.1. The idea of renovating events, together with the key results on ergodicity and boundedness of stochastic recursive sequences, is presented in Sect. 17.2, whereas the Loynes ergodic theorem for the case of monotone functions specifying the recursion is proved in Sect. 17.3. Section 17.4 establishes ergodicity conditions for contracting in mean Lipschitz transformations.
17.1 Basic Concepts
Consider two measurable state spaces \(\langle\mathcal{X},\mathfrak{B}_{\mathcal{X}}\rangle\) and \(\langle\mathcal{Y},\mathfrak{B}_{\mathcal{Y}}\rangle\), and let {ξ n } be a sequence of random elements taking values in \(\mathcal{Y}\). If \(\langle\varOmega ,\mathfrak{F},\mathbf{P}\rangle\) is the underlying probability space, then \(\{\omega :\,\xi_{k}\in B\}\in\mathfrak{F}\) for any \(B\in\mathfrak{B}_{\mathcal{Y}}\). Assume, moreover, that a measurable function \(f:\,\mathcal{X}\times\mathcal{Y}\to\mathcal{X}\) is given on the measurable space \(\langle\mathcal{X}\times\mathcal{Y}, \mathfrak{B}_{\mathcal{X}}\times\mathfrak{B}_{\mathcal{Y}}\rangle\), where \(\mathfrak{B}_{\mathcal{X}}\times\mathfrak{B}_{\mathcal{Y}}\) denotes the σ-algebra generated by the sets A×B with \(A\in\mathfrak{B}_{\mathcal{X}}\) and \(B\in\mathfrak{B}_{\mathcal{Y}}\).
For simplicity’s sake, by \(\mathcal{X}\) and \(\mathcal{Y}\) we can understand the real line \(\mathbb{R}\), and by \(\mathfrak{B}_{\mathcal{X}}\) and \(\mathfrak{B}_{\mathcal{Y}}\), the σ-algebras of Borel sets.
Definition 17.1.1
A sequence {X n }, n=0,1,…, taking values in \(\mathcal{X}\) is said to be a stochastic recursive sequence (s.r.s.) driven by the sequence {ξ n } if X n satisfies the relation
\(X_{n+1}=f(X_{n},\xi_{n})\)  (17.1.1)
for all n≥0. For simplicity’s sake we will assume that the initial state X 0 is independent of {ξ n }.
The distribution of the sequence {X n ,ξ n } on \(\langle (\mathcal{X}\times\mathcal{Y})^{\infty}, (\mathfrak{B}_{\mathcal{X}}\times\mathfrak{B}_{\mathcal{Y}})^{\infty}\rangle\) can be constructed in an obvious way from finite-dimensional distributions, similarly to the manner in which we constructed on \(\langle\mathcal{X}^{\infty}, \mathfrak{B}^{\infty}_{\mathcal{X}}\rangle\) the distribution of a Markov chain X with values in \(\mathcal{X}\) from its transition function P(x,B)=P(X 1(x)∈B). The finite-dimensional distributions of {(X 0,ξ 0),…,(X k ,ξ k )} for the s.r.s. are given by the relations
where f 1(x,y 0):=f(x,y 0), f l (x,y 0,…,y l ):=f(f l−1(x,y 0,…,y l−1),y l ).
Without loss of generality, the sequence {ξ n } can be assumed to be given for all −∞<n<∞ (as we noted in Sect. 16.1, for a stationary sequence, the required extension to n<0 can always be achieved with the help of Kolmogorov’s theorem).
A stochastic recursive sequence is a more general object than a Markov chain. It is evident that if ξ k are independent, then the X n form a Markov chain. A stronger assertion is true as well: under broad assumptions about the space \(\langle\mathcal{X},\mathfrak{B}_{\mathcal{X}}\rangle\), for any Markov chain {X n } in \(\mathcal{X}\) one can construct a function f and a sequence of independent identically distributed random variables {ξ n } such that (17.1.1) holds. We will elucidate this statement in the simplest case when both \(\mathcal{X}\) and \(\mathcal{Y}\) coincide with the real line \(\mathbb{R}\). Let P(x,B), \(B\in\mathfrak{B}\), be the transition function of the chain {X n }, and F x (t)=P(x,(−∞,t)) the distribution function of X 1(x) (X 0=x). Then if \(F^{-1}_{x}(t)\) is the function inverse (in t) to F x (t) and α⊂=U 0,1 is a random variable, then, as we saw before (see e.g. Sect. 6.2), the random variable \(F_{x}^{-1}(\alpha)\) will have the distribution function F x (t). Therefore, if {α n } is a sequence of independent random variables uniformly distributed over [0,1], then the sequence \(X_{n+1}=F^{-1}_{X_{n}}(\alpha_{n})\) will have the same distribution as the original chain {X n }. Thus the Markov chain is an s.r.s. with the function \(f(x,y)=F^{-1}_{x}(y)\) and driving sequence {α n }, α n ⊂=U 0,1.
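The inverse-function construction can be sketched in code. The sketch below is illustrative rather than taken from the text: it assumes a Gaussian kernel X 1(x)∼Normal(ax,1), for which \(F_{x}^{-1}\) is available in closed form, and drives the recursion with i.i.d. uniform variables:

```python
import random
from statistics import NormalDist

def f(x, u, a=0.5):
    # One s.r.s. step: F_x^{-1}(u) for the illustrative kernel X_1(x) ~ Normal(a*x, 1)
    return NormalDist(mu=a * x, sigma=1.0).inv_cdf(u)

def srs_path(x0, uniforms):
    # Drive X_{n+1} = f(X_n, alpha_n) with i.i.d. U(0,1) inputs alpha_n
    path = [x0]
    for u in uniforms:
        path.append(f(path[-1], u))
    return path

rng = random.Random(17)
alphas = [rng.random() for _ in range(5)]
print(srs_path(0.0, alphas))
```

The resulting path has the same distribution as the chain with transition function P(x,B); the uniform inputs play the role of the driving sequence {α n }.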
For more general state spaces \(\mathcal{X}\), a similar construction is possible if the σ-algebra \(\mathfrak{B}_{\mathcal{X}}\) is countably-generated (i.e. is generated by a countable collection of sets from \(\mathcal{X}\)). This is always the case for Borel σ-algebras in \(\mathcal{X}=\mathbb{R}^{d}\), d≥1 (see [22]).
One can always consider f(⋅,ξ n ) as a sequence of random mappings of the space \(\mathcal{X}\) into itself. The principal problem we will be interested in is again (as in Chap. 13) that of the existence of the limiting distribution of X n as n→∞.
In the following sections we will consider three basic approaches to this problem.
17.2 Ergodicity and Renovating Events. Boundedness Conditions
17.2.1 Ergodicity of Stochastic Recursive Sequences
We introduce the σ-algebras
\(\mathfrak{F}^{\xi}_{n}:=\sigma \{\xi_{k};\,k\leq n \},\qquad \mathfrak{F}^{\xi}:=\sigma \{\xi_{k};\,-\infty<k<\infty \}.\)
In the sequel, for the sake of definiteness and simplicity, we will assume the initial value X 0 to be constant unless otherwise stated.
Definition 17.2.1
An event \(A\in\mathfrak{F}^{\xi}_{n+m}\), m≥0, is said to be renovating for the s.r.s. {X n } on the segment [n,n+m] if there exists a measurable function \(g :\,\mathcal{Y}^{m+1}\to\mathcal{X}\) such that, on the set A (i.e. for ω∈A),
\(X_{n+m+1}=g (\xi_{n},\ldots,\xi_{n+m} ).\)  (17.2.1)
It is evident that, for ω∈A, relations of the form X n+m+k+1=g k (ξ n ,…,ξ n+m+k ) will hold for all k≥0, where g k is a function depending on its arguments only and determined by the event A.
The sequence of events {A n }, \(A_{n}\in\mathfrak{F}_{n+m}^{\xi}\), where the integer m is fixed, is said to be renovating for the s.r.s. {X n } if there exists an integer n 0≥0 such that, for n≥n 0, one has relation (17.2.1) for ω∈A n , the function g being common for all n.
We will be mainly interested in “positive” renovating events, i.e. renovating events having positive probabilities P(A n )>0.
The simplest example of a renovating event is the hitting by the sequence X n of a fixed point x 0: A n ={X n =x 0} (here m=0), although such an event could be of zero probability. Below we will consider a more interesting example.
The motivation behind the introduction of renovating events is as follows. After the trajectory {X k ,ξ k }, k≤n+m, has entered a renovating set \(A\in\mathfrak{F}_{n+m}^{\xi}\), the future evolution of the process will not depend on the values {X k }, k≤n+m, but will be determined by the values of ξ k ,ξ k+1,… only. It is not a complete “regeneration” of the process which we dealt with in Chap. 13 while studying Markov chains (first of all, because the ξ k are now, generally speaking, dependent), but it still enables us to establish ergodicity of the sequence X n (in approximately the same sense as in Chap. 13).
Note that, generally speaking, the event A and hence the function g may depend on the initial value X 0. If X 0 is random then a renovating event is to be taken from the σ-algebra \(\mathfrak{F}_{n+m}^{\xi}\times\sigma(X_{0})\).
In what follows it will be assumed that the sequence {ξ n } is stationary. The symbol U will denote the measure preserving shift transformation of \(\mathfrak{F}^{\xi}\)-measurable random variables generated by {ξ n }, so that Uξ n =ξ n+1, and the symbol T will denote the shift transformation of sets (events) from the σ-algebra \(\mathfrak{F}^{\xi}:\xi_{n+1}(\omega )=\xi_{n}(T\omega )\). The symbols U n and T n, n≥0, will denote the powers (iterations) of these transformations respectively (so that U 1=U, T 1=T; U 0 and T 0 are identity transformations), while U −n and T −n are transformations inverse to U n and T n, respectively.
A sequence of events {A k } is said to be stationary if A k =T k A 0 for all k.
Example 17.2.1
Consider a real-valued sequence
\(X_{n+1}=(X_{n}+\xi_{n})^{+},\quad n=0,1,\ldots,\)  (17.2.2)
where x +=max(0,x) and {ξ n } is a stationary metric transitive sequence. As we already know from Sect. 12.4, the sequence {X n } describes the dynamics of waiting times for customers in a single-channel service system. The difference is that in Sect. 12.4 the initial value has subscript 1 rather than 0, and that now the sequence {ξ n } has a more general nature. Furthermore, it was established in Sect. 12.4 that Eq. (17.2.2) has the solution
\(X_{n+1}=\max \bigl(0,\,S_{n,0},\,\ldots,\,S_{n,n-1},\,S_{n,n}+X_{0} \bigr),\)  (17.2.3)
where
\(S_{n,j}:=\xi_{n}+\xi_{n-1}+\cdots+\xi_{n-j}\)
(certain changes in the subscripts in comparison to (17.2.4) are caused by different indexing of the initial values). From representation (17.2.3) one can see that the event
\(B_{n}:= \bigl\{S_{n,0}\leq0,\ \ldots,\ S_{n,n-1}\leq0,\ S_{n,n}+X_{0}\leq0 \bigr\}\)
implies the event {X n+1=0} and so is renovating for m=0, g(y)≡0. If X n+1=0 then
\(X_{n+2}=(\xi_{n+1})^{+},\qquad X_{n+3}= \bigl((\xi_{n+1})^{+}+\xi_{n+2} \bigr)^{+}\)
and so on do not depend on X 0.
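This renovation effect is easy to observe by simulation. Under an illustrative choice of driver distribution with E ξ<0 (the parameters below are assumptions, not from the text), two trajectories of (17.2.2) driven by the same {ξ n } but started from different initial values coincide from the first epoch at which both are at 0, and remain equal forever after:

```python
import random

def lindley_paths(x0_a, x0_b, xis):
    # Run X_{n+1} = max(X_n + xi_n, 0) from two initial states with a common driver;
    # once the two states coincide (e.g. both hit 0), the past is forgotten for good.
    a, b = x0_a, x0_b
    coupled_at = None
    for n, xi in enumerate(xis):
        a = max(a + xi, 0.0)
        b = max(b + xi, 0.0)
        if coupled_at is None and a == b:
            coupled_at = n
        if coupled_at is not None:
            assert a == b  # renovation: the initial value no longer matters
    return a, b, coupled_at

rng = random.Random(0)
xis = [rng.uniform(-1.5, 1.0) for _ in range(2000)]  # E xi = -0.25 < 0
a, b, t = lindley_paths(0.0, 10.0, xis)
print(t, a == b)
```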
Now consider, for some n 0>1 and any n≥n 0, the narrower event
\(A_{n}:= \Bigl\{\sup_{j\geq0}S_{n,j}\leq0 \Bigr\}\cap \Bigl\{\sup_{j\geq n_{0}}S_{n,j}<-X_{0} \Bigr\}\)
(we assume that the sequence {ξ n } is defined on the whole axis). Clearly, A n ⊂B n ⊂{X n+1=0}, so A n is a renovating event as well. But, unlike B n , the renovating event A n is stationary: A n =T n A 0.
We assume now that E ξ 0<0 and show that in this case P(A 0)>0 for sufficiently large n 0. In order to do this, we first establish that \(\mathbf{P}(\overline{S}_{0,\infty}=0)>0\). Since, by the ergodic theorem, \(S_{0,j}\stackrel{\mathit{a.s.}}{\longrightarrow }-\infty\) as j→∞, we see that \(\overline{S}_{0,\infty}\) is a proper random variable and there exists a v such that \(\mathbf{P}(\overline{S}_{0,\infty}<v)>0\). By the total probability formula,
Therefore there exists a j such that
But the supremum in the last expression has the same distribution as \(\overline{S}_{0,\infty}\). This proves that \(p:=\mathbf{P}(\overline{S}_{0,\infty}=0)>0\). Next, since \(S_{0,j}\stackrel{\mathit{a.s.}}{\longrightarrow} -\infty\), one also has \(\sup_{j\geq k}S_{0,j}\stackrel {\mathit{a.s.}}{\longrightarrow}-\infty\) as k→∞. Therefore, P(sup j≥k S 0,j <−X 0)→1 as k→∞, and hence there exists an n 0 such that
Since P(AB)≥P(A)+P(B)−1 for any events A and B, the aforesaid implies that P(A 0)≥p/2>0.
In the assertions below, we will use the existence of stationary renovating events A n with P(A 0)>0 as a condition insuring convergence of the s.r.s. X n to a stationary sequence. However, in the last example such convergence can be established directly. Let E ξ 0<0. Then by (17.2.3), for any fixed v,
where evidently
as n→∞. Hence the following limit exists
Recall that in the above example the sequence of events A n becomes renovating for n≥n 0. But we can define other renovating events C n along with a number m and function \(g:\ \mathbb{R}^{m+1}\to\mathbb{R}\) as follows:
The events \(C_{n}\in\mathfrak{F}^{\xi}_{n+m}\) are renovating for {X n } on the segment [n,n+m] for all n≥0, so in this case the n 0 in the definition of a renovating sequence will be equal to 0.
A similar argument can also be used in the general case for arbitrary renovating events. Therefore we will assume in the sequel that the number n 0 from the definition of renovating events is equal to zero.
In the general case, the following assertion is valid.
Theorem 17.2.1
Let {ξ n } be an arbitrary stationary sequence, and suppose that for the s.r.s. {X n } there exists a sequence of renovating events {A n } such that
uniformly in s≥1. Then one can define, on a common probability space with {X n }, a stationary sequence \(\{X^{n}:=U^{n}X^{0}\}\) satisfying the equations \(X^{n+1}=f (X^{n},\xi_{n} )\) and such that
If the sequence {ξ n } is metric transitive and the events A n are stationary, then the relations P(A 0)>0 and \(\mathbf{P} (\bigcup_{n=0}^{\infty}A_{n} )=1\) are equivalent and imply (17.2.6) and (17.2.7).
Note also that if we introduce the measure \(\pi(B)=\mathbf{P}(X^{0}\in B)\) (as we did in Chap. 13), then (17.2.7) will imply convergence in total variation:
\(\sup_{B\in\mathfrak{B}_{\mathcal{X}}} \bigl|\mathbf{P}(X_{n}\in B)-\pi(B) \bigr|\to0 \quad\mbox{as } n\to\infty.\)
Proof of Theorem 17.2.1
First we show that (17.2.6) implies that
uniformly in s≥0. For a fixed s≥1, consider the sequence \(X_{j}^{s}=U^{-s}X_{s+j}\). It is defined for j≥−s, and
and so on. It is clear that the event
implies the event
We show that
For simplicity’s sake put m=0. Then, for the event \(X_{j+1}=X_{j+1}^{s}\) to occur, it suffices that the events A j and T −s A j+s occur simultaneously. In other words,
Therefore (17.2.6) implies (17.2.8) and convergence
uniformly in k≥0 and s≥0. If we introduce the metric ρ putting ρ(x,y):=1 for x≠y, ρ(x,x)=0, then the aforesaid means that, for any δ>0, there exists an N such that
for n≥N and any k≥0, s≥0, i.e. \(X_{k}^{n}\) is a Cauchy sequence with respect to convergence in probability for each k. Because any space \(\mathcal{X}\) is complete with such a metric, there exists a random variable \(X^{k}\) such that \(X_{k}^{n}\xrightarrow{p} X^{k}\) as n→∞ (see Lemma 17.4.2). Due to the specific nature of the metric ρ this means that
The sequence \(X^{k}\) is stationary. Indeed, as n→∞,
Since the probability \(\mathbf{P} (X^{k+1}\neq UX^{k} )\) does not depend on n, \(X^{k+1}=UX^{k}\) a.s.
Further, X n+k+1=f(X n+k ,ξ n+k ), and therefore
The left and right-hand sides here converge in probability to \(X^{k+1}\) and \(f (X^{k},\xi_{k} )\), respectively. This means that \(X^{k+1}=f (X^{k},\xi_{k} )\).
To prove convergence (17.2.7) it suffices to note that, by virtue of (17.2.10), the values \(X_{k}^{n}\) and \(X^{k}\), after having become equal for some k, will never be different for greater values of k. Therefore, as well as (17.2.9) one has the relation
which is equivalent to (17.2.7).
The last assertion of the theorem follows from Theorem 16.2.5. The theorem is proved. □
Remark 17.2.1
It turns out that condition (17.2.6) is also a necessary one for convergence (17.2.7) (see [6]). For more details on convergence of stochastic recursive sequences and their generalisations, and also on the relationship between (17.2.6) and conditions (I) and (II) from Chap. 13, see [6].
In Example 17.2.1 the sequence \(X^{k}\) was actually found in an explicit form (see (17.2.3) and (17.2.5)):
\(X^{k+1}=\overline{S}_{k,\infty}:=\max \Bigl(0,\ \sup_{j\geq 0}S_{k,j} \Bigr).\)
These random variables are proper by Corollary 16.3.1. It is not hard to also see that, for X 0=0, one has (see (17.2.3))
\(X_{n+1}=\max \bigl(0,\,S_{n,0},\,\ldots,\,S_{n,n} \bigr)\leq X^{n+1}.\)
17.2.2 Boundedness of Random Sequences
Consider now conditions of boundedness of an s.r.s. in spaces \(\mathcal{X}=[0,\infty)\) and \(\mathcal{X}=(-\infty,\infty)\). Assertions about boundedness will be stated in terms of existence of stationary majorants, i.e. stationary sequences M n such that
\(X_{n}\leq M_{n}\quad\mbox{for all } n\)
with probability 1.
Results of this kind will be useful for constructing stationary renovating sequences.
Majorants will be constructed for a class of random sequences more general than stochastic recursive sequences. Namely, we will consider the class of random sequences satisfying the inequalities
\(X_{n+1}\leq h (X_{n},\xi_{n} ),\quad n\geq0,\)  (17.2.13)
where the measurable function h will in turn be bounded by rather simple functions of X n and ξ n . The sequence {ξ n } will be assumed given on the whole axis.
Theorem 17.2.2
Assume that there exist a number N>0 and a measurable function g 1 with E g 1(ξ n )<0 such that (17.2.13) holds with
If X 0≤M<∞, then the stationary sequence
\(M_{n}:=\max(M,N)+\sup_{j\geq-1}S_{n-1,j},\)  (17.2.15)
where S n,−1=0 and S k,j =g 1(ξ k )+⋯+g 1(ξ k−j ) for j≥0, is a majorant for X n .
Proof
For brevity’s sake, put ζ i :=g 1(ξ i ), Z:=max(M,N), and Z n :=X n −Z. Then Z n will satisfy the following inequalities:
Consider now a sequence {Y n } defined by the relations Y 0=0 and
Assume that Z n ≤Y n . If Z n >N−Z then
If Z n ≤N−Z then
Because Z 0≤0=Y 0, it is evident that Z n ≤Y n for all n. But we know the solution of the equation for Y n and, by virtue of (17.2.11) and (17.2.13),
The theorem is proved. □
Theorem 17.2.2A
Assume that there exist a number N>0 and measurable functions g 1 and g 2 such that
and
If X 0≤M<∞, then the conditions of Theorem 17.2.2 are satisfied (possibly for other N and g 1) and for X n there exists a stationary majorant of the form (17.2.15).
Proof
We set g:=−E g 1(ξ n )>0 and find L>0 such that E(g 2(ξ n ); g 2(ξ n )>L)≤g/2. Introduce the function
\(g_{1}^{*}(y):=g_{1}(y)+g_{2}(y)\,\mathrm{I} \bigl(g_{2}(y)>L \bigr).\)
Then \(\mathbf{E}g_{1}^{*}(\xi_{n})\leq-g/2<0\) and
This means that inequalities (17.2.14) hold with N replaced with N ∗=N+L. The theorem is proved. □
Note again that in Theorems 17.2.2 and 17.2.2A we did not assume that {X n } is an s.r.s.
The reader will notice the similarity of the conditions of Theorems 17.2.2 and 17.2.2A to the boundedness condition in Sect. 15.5, Theorem 13.7.3 and Corollary 13.7.1.
The form of the assertions of Theorems 17.2.2 and 17.2.2A enables one to construct stationary renovating events for a rather wide class of nonnegative stochastic recursive sequences (so that \(\mathcal{X}=[0,\infty)\)) having, say, a “positive atom” at 0. It is convenient to write such sequences in the form
\(X_{n+1}= \bigl(X_{n}+h(X_{n},\xi_{n}) \bigr)^{+}.\)  (17.2.18)
Example 17.2.2
Let an s.r.s. (see (17.1.1)) be described by Eq. (17.2.18) and satisfy conditions (17.2.14) or (17.2.17), where the function h is sufficiently “regular” to ensure that
\(B_{n,T}:= \bigl\{h(t,\xi_{n})\leq-t\ \mbox{for all } t\in[0,T] \bigr\}\)
is an event for any T. (For instance, it is enough to require h(t,v) to have at most a countable set of discontinuity points t. Then the set B n,T can be expressed as the intersection of countably many events ⋂ k {h(t k ,ξ n )≤−t k }, where {t k } form a countable set dense on [0,T].) Furthermore, let there exist an L>0 such that
\(\mathbf{P} \bigl(\{M_{n}<L\}B_{n,L} \bigr)>0\)
(M n was defined in (17.2.15)).
(M n was defined in (17.2.15)). Then the event A n ={M n <L}B n,L is clearly a positive stationary renovating event with the function g(y)=(h(0,y))+, m=0. (On the set \(A_{n}\in\mathfrak{F}_{n}^{\xi}\) we have X n+1=0, X n+2=h(0,ξ n+1)+ and so on.) Therefore, an s.r.s. satisfying (17.2.18) satisfies the conditions of Theorem 17.2.1 and is ergodic in the sense of assertion (17.2.7).
It can happen that, from a point t≤L, it would be impossible to reach the point 0 in one step, but it could be done in m>1 steps. If B is the set of sequences (ξ n ,…,ξ n+m ) that effect such a transition and P({M n <L}B)>0, then the A n ={M n <L}B will also be stationary renovating events.
17.3 Ergodicity Conditions Related to the Monotonicity of f
Now we consider ergodicity conditions for stochastic recursive sequences that are related to the analytic properties of the function f from (17.1.1). As we already noted, the sequence f(x,ξ k ), k=1,2,…, may be considered as a sequence of random transformations of the space \(\mathcal{X}\). Relation (17.1.1) shows that X n+1 is the result of the application of n+1 random transformations f(⋅,ξ k ), k=0,1,…,n, to the initial value \(X_{0}=x\in\mathcal{X}\). Denoting by \(\xi^{n+k}_{n}\) the vector \(\xi^{n+k}_{n}=(\xi_{n},\ldots,\xi_{n+k})\) and by f k the k-th iteration of the function f: f 1(x,y 1)=f(x,y 1), f 2(x,y 1,y 2)=f(f(x,y 1),y 2) and so on, we can re-write (17.1.1) for X 0=x in the form
\(X_{n+1}=f_{n+1} \bigl(x,\xi_{0}^{n} \bigr),\)
so that the “forward” and “backward” equations hold true:
\(f_{n+1} \bigl(x,\xi_{0}^{n} \bigr)=f \bigl(f_{n} \bigl(x,\xi_{0}^{n-1} \bigr),\xi_{n} \bigr)=f_{n} \bigl(f(x,\xi_{0}),\xi_{1}^{n} \bigr).\)  (17.3.1)
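The “forward” and “backward” equations simply peel the last or the first map off the composition. A minimal numerical check, taking the waiting-time map f(x,y)=(x+y)^+ as an illustrative choice of f:

```python
from functools import reduce

def f(x, y):
    # illustrative transformation: the waiting-time map (x + y)^+
    return max(x + y, 0.0)

def iterate(x, ys):
    # f_k(x, y_1, ..., y_k): apply the random maps f(., y) from left to right
    return reduce(f, ys, x)

x, ys = 1.0, [0.3, -0.8, 0.5, -0.2]
full = iterate(x, ys)
forward = f(iterate(x, ys[:-1]), ys[-1])   # peel off the last map
backward = iterate(f(x, ys[0]), ys[1:])    # peel off the first map
assert full == forward == backward
```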
In the present section we will be studying stochastic recursive sequences for which the function f from representation (17.1.1) is monotone in the first argument. To this end, we need to assume that a partial order relation “≥” is defined in the space \(\mathcal{X}\). In the space \(\mathcal{X}=\mathbb{R}^{d}\) of vectors x=(x(1),…,x(d)) (or its subspaces) the order relation can be introduced in a natural way by putting x 1≥x 2 if x 1(k)≥x 2(k) for all k.
Furthermore, we will assume that, for each non-decreasing sequence x 1≤x 2≤⋯≤x n ≤…, there exists a limit \(x\in\mathcal{X}\), i.e. the smallest element \(x\in\mathcal{X}\) for which x k ≤x for all k. In that case we will write x k ↑x or lim k→∞ x k =x. In \(\mathcal{X}=\mathbb{R}^{d}\) such convergence will mean conventional convergence. To facilitate this, we will need to complete the space \(\mathbb{R}^{d}\) by adding points with infinite components.
Theorem 17.3.1
(Loynes)
Suppose that the transformation f=f(x,y) and space \(\mathcal{X}\) satisfy the following conditions:
-
(1)
there exists an \(x_{0}\in\mathcal{X}\) such that f(x 0,y)≥x 0 for all \(y\in\mathcal{Y}\);
-
(2)
the function f is monotone in the first argument: f(x 1,y)≥f(x 2,y) if x 1≥x 2;
-
(3)
the function f is continuous in the first argument with respect to the above convergence: f(x n ,y)↑f(x,y) if x n ↑x.
Then there exists a stationary random sequence \(\{X^{n}\}\) satisfying Eq. (17.1.1): \(X^{n+1}=UX^{n}=f (X^{n},\xi_{n} )\), such that
\(U^{-n+s}X_{n}(x_{0})\uparrow X^{s} \quad\mbox{as } n\to\infty,\)  (17.3.2)
where convergence takes place for all elementary outcomes.
Since the distributions of X n and U −n X n coincide, in the case where convergence of random variables η n ↑η means convergence (in a certain sense) of their distributions (as is the case when \(\mathcal{X} = \mathbb{R}^{d}\)), Theorem 17.3.1 also implies convergence of the distributions of X n to that of \(X^{0}\) as n→∞.
Remark 17.3.1
A substantial drawback of this theorem is that it holds only for a single initial value X 0=x 0. This drawback disappears if the point x 0 is accessible with probability 1 from any \(x\in\mathcal{X}\), and ξ k are independent. In that case x 0 is likely to be a positive atom, and Theorem 13.6.1 for Markov chains is also applicable.
The limiting sequence \(X^{s}\) in (17.3.2) can be “improper” (in spaces \(\mathcal{X}=\mathbb{R}^{d}\) it may assume infinite values). The sequence \(X^{s}\) will be proper if the s.r.s. X n satisfies, say, the conditions of the theorems of Sect. 15.5 or the conditions of Theorem 17.2.2.
Proof of Theorem 17.3.1
Put
\(v_{s}^{-k}:=f_{s+k} \bigl(x_{0},\xi_{-k}^{s-1} \bigr),\quad k\geq0.\)
Here the superscript −k indicates the number of the element of the driving sequence \(\{\xi_{n}\}_{n=-\infty}^{\infty}\) such that the elements of this sequence starting from that number are used for constructing the s.r.s. The subscript s is the “time epoch” at which we observe the value of the s.r.s. From the “backward” equation in (17.3.1) we get that
\(v_{s}^{-k-1}=f_{s+k} \bigl(f(x_{0},\xi_{-k-1}),\xi_{-k}^{s-1} \bigr)\geq f_{s+k} \bigl(x_{0},\xi_{-k}^{s-1} \bigr)=v_{s}^{-k}\)
by conditions (1) and (2).
This means that the sequence \(v^{-k}_{s}\) increases as k grows, and therefore there exists a random variable \(X^{s}\in\mathcal{X}\) such that
\(v_{s}^{-k}\uparrow X^{s}\quad\mbox{as } k\to\infty.\)
Further, \(v_{s}^{-k}\) is a function of \(\xi_{-k}^{s-1}\). Therefore, \(X^{s}\) is a function of \(\xi_{-\infty}^{s-1}\):
\(X^{s}=g \bigl(\xi_{-\infty}^{s-1} \bigr).\)
Hence
\(X^{s+1}=g \bigl(\xi_{-\infty}^{s} \bigr)=UX^{s},\)
which means that \(\{X^{s}\}\) is stationary. Using the “forward” equation from (17.3.1), we obtain that
\(v_{s+1}^{-k}=f \bigl(v_{s}^{-k},\xi_{s} \bigr).\)
Passing to the limit as k→∞ gives, since f is continuous, that
\(X^{s+1}=f \bigl(X^{s},\xi_{s} \bigr).\)
The theorem is proved. □
Example 17.2.1 clearly satisfies all the conditions of Theorem 17.3.1 with \(\mathcal{X}=[0,\infty)\), x 0=0, and f(x,y)=(x+y)+.
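The Loynes construction for this example can also be watched numerically. Starting the recursion at x 0=0 at time −k and running it to time 0 gives the backward values v 0^{−k}, which are nondecreasing in k; the driver distribution below (negative mean) is an illustrative assumption:

```python
import random

def backward_value(xis, x0=0.0):
    # v_0^{-k}: start X <- max(X + xi, 0) from x0 at time -k and run to time 0,
    # feeding xi_{-k}, ..., xi_{-1} in chronological order
    x = x0
    for xi in xis:
        x = max(x + xi, 0.0)
    return x

rng = random.Random(1)
xi = [rng.uniform(-1.0, 0.6) for _ in range(400)]  # E xi = -0.2 < 0
# v_0^{-k} uses the last k driver values before time 0
vs = [backward_value(xi[len(xi) - k:]) for k in range(len(xi) + 1)]
assert all(u <= v for u, v in zip(vs, vs[1:]))     # Loynes monotonicity
print(vs[-1])                                      # approximates the stationary value
```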
17.4 Ergodicity Conditions for Contracting in Mean Lipschitz Transformations
In this section we will assume that \(\mathcal{X}\) is a complete separable metric space with metric ρ. Consider the following conditions on the iterations \(X_{k}(x)=f_{k}(x,\xi_{0}^{k-1})\).
Condition (B) (boundedness). For some \(x_{0}\in \mathcal{X}\) and any δ>0, there exists an N=N δ such that, for all n≥1,
\(\mathbf{P} \bigl(\rho \bigl(x_{0},X_{n}(x_{0}) \bigr)>N \bigr)\leq\delta.\)
It is not hard to see that condition (B) holds (possibly with a different N) as soon as we can establish that, for some m≥1, the above inequality holds for all n≥m.
Condition (B) is clearly met for stochastic recursive sequences satisfying the conditions of Theorems 17.2.2 and 17.2.2A or the theorems of Sect. 15.5.
Condition (C) (contraction in mean). The function f is continuous in the first argument and there exist m≥1, β>0 and a measurable function \(q:\mathbb{R}^{m}\to \mathbb{R}\) with \(\mathbf{E}q (\xi_{0}^{m-1} )\leq-m\beta\) such that, for any x 1 and x 2,
\(\rho \bigl(X_{m}(x_{1}),X_{m}(x_{2}) \bigr)\leq\rho (x_{1},x_{2} )\exp \bigl\{q \bigl(\xi_{0}^{m-1} \bigr) \bigr\}.\)
Observe that conditions (B) and (C) are, generally speaking, not related to each other. Let, for instance, \(\mathcal{X}=\mathbb{R}\), X 0≥0, ξ n ≥0, ρ(x,y)=|x−y|, and f(x,y)=bx+y, so that
\(X_{n+1}=bX_{n}+\xi_{n}.\)
Then condition (C) is clearly satisfied for 0<b<1, since
\(\bigl|X_{1}(x_{1})-X_{1}(x_{2}) \bigr|=b|x_{1}-x_{2}|,\)
so that one can take m=1 and q≡ln b<0.
At the same time, condition (B) will be satisfied if and only if Elnξ 0<∞. Indeed, if Elnξ 0=∞, then the event {lnξ k >−2klnb} occurs infinitely often a.s. But X n+1 has the same distribution as
\(\sum_{k=0}^{n}b^{k}\xi_{k}+b^{n+1}X_{0},\)
where, in the sum on the right-hand side, the number of terms exceeding exp{−klnb} increases unboundedly as n grows. This means that \(X_{n+1}\stackrel{p}{\to}\infty\) as n→∞. The case Elnξ 0<∞ is treated in a similar way. The fact that (B), generally speaking, does not imply (C) is obvious.
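For the map f(x,y)=bx+y the contraction is in fact exact: two trajectories driven by the same {ξ n } satisfy |X n (x 1)−X n (x 2)|=b^n|x 1−x 2|, whatever the driver. A short sketch (the exponential driver is an illustrative assumption):

```python
import random

def step(x, y, b=0.5):
    # f(x, y) = b*x + y with 0 < b < 1
    return b * x + y

def distance_after(n, x1, x2, ys, b=0.5):
    # distance between two trajectories after n common steps
    for y in ys[:n]:
        x1, x2 = step(x1, y, b), step(x2, y, b)
    return abs(x1 - x2)

rng = random.Random(2)
ys = [rng.expovariate(1.0) for _ in range(30)]
d0 = abs(10.0 - 0.0)
for n in (1, 5, 10):
    # geometric contraction: d_n = b^n * d_0 up to rounding
    assert abs(distance_after(n, 10.0, 0.0, ys) - 0.5 ** n * d0) < 1e-9
```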
As before, we will assume that the “driving” stationary sequence \(\{\xi_{n}\}_{n=-\infty}^{\infty}\) is given on the whole axis. Denote by U the respective distribution preserving shift operator.
Convergence in probability and a.s. of a sequence of \(\mathcal{X}\)-valued random variables \(\eta_{n}\in\mathcal{X}\) (\(\eta_{n}\overset{p}{\longrightarrow}\,\eta\), \(\eta_{n}\stackrel {\mathit{a.s.}}{\longrightarrow}\eta\)) is defined in the natural way by the relations P(ρ(η n ,η)>δ)→0 as n→∞ and \(\mathbf{P} (\rho(\eta_{k},\eta)>\delta\mbox{ \it for some }\ k\geq n )\to0\) as n→∞ for any δ>0, respectively.
Theorem 17.4.1
Assume that conditions (B) and (C) are met. Then there exists a stationary sequence \(\{X^{n}\}\) satisfying (17.1.1):
\(X^{n+1}=f \bigl(X^{n},\xi_{n} \bigr)=UX^{n},\)
such that, for any fixed x,
\(\rho \bigl(U^{-n}X_{n}(x),X^{0} \bigr)\stackrel{\mathit{a.s.}}{\longrightarrow}0 \quad\mbox{as } n\to\infty.\)  (17.4.1)
This convergence is uniform in x over any bounded subset of \(\mathcal{X}\).
Theorem 17.4.1 implies the weak convergence, as n→∞, of the distributions of X n (x) to that of \(X^{0}\). Condition (B) is clearly necessary for ergodicity. As the example of a generalised autoregressive process below shows, condition (C) is also necessary in some cases.
Set \(Y_{n}:=U^{-n}X_{n}(x_{0})\), where x 0 is from condition (B). We will need the following auxiliary result.
Lemma 17.4.1
Assume that conditions (B) and (C) are met and the stationary sequence \(\{q (\xi^{km+m-1}_{km} ) \}^{\infty}_{k=-\infty}\) is ergodic. Then, for any δ>0, there exists an n δ such that, for all k≥0,
\(\mathbf{P} \bigl(\rho (Y_{n+k},Y_{n} )>\delta\ \mbox{for some } n\geq n_{\delta} \bigr)\leq\delta.\)  (17.4.2)
For ergodicity of \(\{q (\xi^{km+m-1}_{km} ) \}_{k=-\infty}^{\infty}\) it suffices that the transformation T m is metric transitive.
The lemma means that, with probability 1, the distance ρ(Y n+k ,Y n ) tends to zero uniformly in k as n→∞. Relation (17.4.2) can also be written as P(A δ )≤δ, where
\(A_{\delta}= \bigl\{\rho (Y_{n+k},Y_{n} )>\delta\ \mbox{for some } k\geq0,\ n\geq n_{\delta} \bigr\}.\)
Proof of Lemma 17.4.1
By virtue of condition (B), there exists an N=N δ such that, for all k≥1,
Hence
The random variable θ n,k :=U −n−k X k (x 0) has the same distribution as X k (x 0). Next, by virtue of (C),
Denote by B s the set of numbers n of the form n=lm+s, l=0,1,2,…, 0≤s<m, and put
Then, for n∈B s , we obtain from (17.4.3) and similar relations that
where the last factor (denote it just by ρ) is bounded from above:
The random variables U −n X j (x 0) have the same distribution as X j (x 0). By virtue of (B), there exists an N=N δ such that, for all j≥1,
Hence, for all n, k and s, we have P(ρ>2N)<δ/(2m), and the right-hand side of (17.4.4) does not exceed \(2N\,\exp \{\sum_{j=1}^{l}\lambda_{j} \}\) on the complement set {ρ≤2N}.
Because E λ j ≤−mβ<0 and the sequence {λ j } is metric transitive, by the ergodic Theorem 16.3.1 we have
for all l≥l(ω), where l(ω) is a proper random variable. Choose l 1 and l 2 so that the inequalities
hold. Then, putting
we obtain that
But the intersection of the events from the term with {l δ ≥l(ω)} is empty. Therefore, the former event is a subset of the event {l(ω)>l δ }, and
The lemma is proved. □
Lemma 17.4.2
(Completeness of \(\mathcal{X}\) with respect to convergence in probability)
Let \(\mathcal{X}\) be a complete metric space. If a sequence of \(\mathcal{X}\)-valued random elements η n is such that, for any δ>0,
\(\sup_{k\geq0}\mathbf{P} \bigl(\rho (\eta_{n+k},\eta_{n} )>\delta \bigr)\to0\)
as n→∞, then there exists a random element \(\eta\in\mathcal{X}\) such that \(\eta_{n}\stackrel{p}{\to}\eta\) (that is, P(ρ(η n ,η)>δ)→0 as n→∞).
Proof
For given ε and δ choose n k , k=0,1,…, such that
and, for the sake of brevity, put \(\zeta_{k}:=\eta_{n_{k}}\). Consider the set
Then P(D)>1−2ε and, for any ω∈D, one has ρ(ζ k+s (ω),ζ k (ω))<δ2k−1 for all s≥1. Hence ζ k (ω) is a Cauchy sequence in \(\mathcal{X}\) and there exists an \(\eta=\eta(\omega )\in\mathcal{X}\) such that ζ k (ω)→η(ω). Since ε is arbitrary, this means that \(\zeta_{k}\stackrel{\mathit{a.s.}}{\longrightarrow}\eta\) as k→∞, and
Therefore, for any n≥n 0,
Since ε and δ are arbitrary, the lemma is proved. □
Proof of Theorem 17.4.1
From Lemma 17.4.1 it follows that
This means that Y n is a Cauchy sequence with respect to convergence in probability, and by Lemma 17.4.2 there exists a random variable \(X^{0}\) such that
\(\rho \bigl(Y_{n},X^{0} \bigr)\overset{p}{\longrightarrow}0 \quad\mbox{as } n\to\infty.\)  (17.4.5)
By continuity of f,
We proved the required convergence for a fixed initial value x 0. For an arbitrary x∈C N ={z: ρ(x 0,z)≤N}, one has
where the first term on the right-hand side converges in probability to 0 uniformly in x∈C N . For n=lm this follows from the inequality (see condition (C))
and the above argument. Similar relations hold for n=lm+s, m>s>0. This, together with (17.4.5) and (17.4.6), implies that
uniformly in x∈C N . This proves the assertion of the theorem in regard to convergence in probability.
We now prove convergence with probability 1. To this end, one should repeat the argument proving Lemma 17.4.1, but bounding \(\rho (X^{0},U^{-n}X_{n}(x) )\) rather than ρ(Y n+k ,Y n ). Assuming for simplicity’s sake that s=0 (n is a multiple of m), we get (similarly to (17.4.4)) that, for any x,
The rest of the argument of Lemma 17.4.1 remains unchanged. This implies that, for any δ>0 and sufficiently large n δ ,
Theorem 17.4.1 is proved. □
Example 17.4.1
(Generalised autoregression)
Let \(\mathcal{X}=\mathbb{R}\). A generalised autoregression process is defined by the relations
where F and G are functions mapping \(\mathbb{R}\mapsto\mathbb{R}\) and ξ n =(ζ n ,η n ) is a stationary ergodic driving sequence, so that {X n } is an s.r.s. with the function
If the functions F and G are nondecreasing and left continuous, G(x)≥0 for all \(x\in\mathbb{R}\), and the elements ζ n are nonnegative, then the process (17.4.9) satisfies the conditions of Theorem 17.3.1, and therefore \(U^{-n+s}X_{n}(0)\uparrow X^{s}\) with probability 1 (as n→∞). To establish convergence to a proper stationary sequence \(X^{s}\), one has to prove uniform boundedness in probability (in n) of the sequence X n (0) (see below).
Now we will establish under what conditions the sequence (17.4.9) will satisfy the conditions of Theorem 17.4.1. Suppose that the functions F and G satisfy the Lipschitz condition:
Then
Theorem 17.4.2
Under the above assumptions, the sequence (17.4.9) will satisfy condition (C) if
The sequence (17.4.9) will satisfy condition (B) if (17.4.11) holds and, moreover,
When (17.4.11) and (17.4.12) hold, the sequence (17.4.9) has a stationary majorant, i.e. there exists a stationary sequence M n (depending on X 0) such that |X n |≤M n for all n.
Proof
That condition (C) for ρ(x 1,x 2)=|x 1−x 2| follows from (17.4.10) is obvious. We prove (B). To do this, we will construct a stationary majorant for |X n |. One could do this using Theorems 17.2.2 and 17.2.2A. In our case, it is simpler to prove it directly, making use of the inequalities
where we assume, for simplicity’s sake, that G(0) and F(0) are finite. Then
where
From this we get that, for X 0=x,
Put
By the strong law of large numbers, there are only finitely many positive values S l −al, where 2a=E α j <0. Therefore, for all l except for those with S l −al>0,
On the other hand, γ −l−1 exceeds the level l only finitely often. This means that the series in (17.4.13) (denote it by R) converges with probability 1. Moreover,
is a proper random variable. As a result, we obtain that, for all n,
where all the terms on the right-hand side are proper random variables. The required majorant
is constructed. This implies that (B) is met. The theorem is proved. □
The assertion of Theorem 17.4.2 can be extended to the multivariable case \(\mathcal{X}=\mathbb{R}^{d}\), d>1, as well (see [6]).
Note that conditions (17.4.11) and (17.4.12) are, in a certain sense, necessary not only for the convergence \(U^{-n+s}X_{n}(x)\to X^{s}\), but also for the boundedness of X n (x) (or of \(X^{0}\)) alone. This fact can be best illustrated in the case when F(t)≡G(t)≡t. In that case, \(U^{-n}X_{n+s+1}(x)\) and \(X^{s+1}\) admit explicit representations
Assume that Elnζ≥0, η≡1, and put
Then
with probability 1, and consequently \(X^{1}=\infty\) and X n →∞ with probability 1.
If E[lnη]+=∞ and ζ=b<1 then
where y j =lnη j ; the event {y −l−1>−llnb} occurs infinitely often with probability 1. This means that \(X^{1}=\infty\) and X n →∞ with probability 1.
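In the case F(t)≡G(t)≡t the process reduces to the random-coefficient autoregression X n+1 =ζ n X n +η n considered above. A sketch of the contracting regime (the particular distributions below are illustrative assumptions): when E ln ζ<0, cf. (17.4.11), the gap between two trajectories equals |x 1 −x 2 |∏ζ k and vanishes geometrically:

```python
import random

def ar_path(x0, pairs):
    # X_{n+1} = zeta_n * X_n + eta_n  (the case F(t) = G(t) = t)
    x = x0
    for zeta, eta in pairs:
        x = zeta * x + eta
    return x

rng = random.Random(3)
# E ln zeta = ln 0.6 < 0, cf. (17.4.11); eta is bounded, so (17.4.12) holds as well
pairs = [(0.6, rng.gauss(0.0, 1.0)) for _ in range(50)]
gap = abs(ar_path(100.0, pairs) - ar_path(-100.0, pairs))
assert gap < 200.0 * 0.6 ** 50 + 1e-9  # gap = prod(zeta_k) * |x0 - x0'|
print(gap)
```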
References
Borovkov, A.A.: Ergodicity and Stability of Stochastic Processes. Wiley, Chichester (1998)
Kifer, Yu.: Ergodic Theory of Random Transformations. Birkhäuser, Boston (1986)
Borovkov, A.A. (2013). Stochastic Recursive Sequences. In: Probability Theory. Universitext. Springer, London. https://doi.org/10.1007/978-1-4471-5201-9_17