38.1 Introduction

Perturbed Markov chains are among the popular and important objects of study in the theory of Markov processes and their applications to stochastic networks, queuing and reliability models, bio-stochastic systems, and many other stochastic models.

We refer here to some recent books and papers devoted to perturbation problems for Markov-type processes, [5, 6, 11, 13, 14, 21, 23, 24, 26,27,28, 30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46, 48, 49]. In particular, we would like to mention the works [5, 24, 39, 40], where extended bibliographies of works in the area and the corresponding methodological and historical remarks can be found.

We are especially interested in models of Markov chains commonly used for the description of information networks. With recent advances in technology, filtering information in such systems has become a challenge. Moreover, the significance of these models is visible in their applications to Internet search engines, and to biological, financial, transport, queuing networks and many others, [1,2,3,4,5,6,7,8,9,10, 12, 15,16,17,18,19, 22, 25, 29, 47]. In such models, an information network is represented by the Markov chain associated with the corresponding node links graph. Stationary distributions and other related characteristics of information Markov chains usually serve as basic tools for ranking the nodes in information networks.

The ranking problem may be complicated by singularity of the corresponding information Markov chain, where its phase space is split into several weakly or completely non-communicating groups of states. In such models, the matrix of transition probabilities \(\mathbf {P}_0\) of the information Markov chain is usually regularised and approximated by the stochastic matrix \(\mathbf {P}_{\varepsilon } = (1 - \varepsilon ) \mathbf {P}_0 + \varepsilon \mathbf {D}\), where \(\mathbf {D}\) is a so-called damping stochastic matrix with identical rows and all positive elements, while \(\varepsilon \in [0, 1]\) is a damping (perturbation) parameter. Let \(\bar{\pi }_{\varepsilon }\) be the stationary distribution of a Markov chain \(X_{\varepsilon , n}\) with the regularised matrix of transition probabilities \(\mathbf {P}_{\varepsilon }\). As was mentioned above, this stationary distribution is often used for ranking nodes in the corresponding information network. The damping parameter \(0 < \varepsilon \le 1\) should be chosen neither too small nor too large. In the first case, where \(\varepsilon \) takes too small values, the damping effect will not work against the singularity effects. In the second case, the ranking information (accumulated by matrix \({\mathbf{P}}_0\) via the corresponding stationary distribution) may be partly lost, due to the deviation of matrix \({\mathbf{P}}_\varepsilon \) from matrix \({\mathbf{P}}_0\). This raises the problems of estimating the deviation \(| \bar{\pi }_{\varepsilon } - \bar{\pi }_{0} |\) and of constructing asymptotic expansions for the perturbed stationary distribution \(\bar{\pi }_{\varepsilon }\) with respect to the damping parameter \(\varepsilon \).
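
For illustration, this regularisation is straightforward to carry out numerically. The following is a minimal Python sketch, assuming a hypothetical 3-state matrix \(\mathbf {P}_0\) with an absorbing state and a uniform damping distribution (the names `P0`, `D` and `damped_matrix` are our illustrative choices, not notation from the literature):

```python
import numpy as np

# Hypothetical 3-state example: state 3 is absorbing, so the phase space
# of P0 splits into non-communicating groups (the singular situation).
P0 = np.array([[0.5, 0.5, 0.0],
               [0.5, 0.5, 0.0],
               [0.0, 0.0, 1.0]])
m = P0.shape[0]
d = np.full(m, 1.0 / m)        # damping distribution, all d_j > 0
D = np.tile(d, (m, 1))         # damping matrix with identical rows

def damped_matrix(P0, D, eps):
    """Regularised transition matrix P_eps = (1 - eps) * P0 + eps * D."""
    return (1.0 - eps) * P0 + eps * D

P_eps = damped_matrix(P0, D, 0.15)           # eps = 0.15 is a PageRank-style choice
assert np.allclose(P_eps.sum(axis=1), 1.0)   # rows still sum to one
```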

The above problems can also be considered as purely algebraic ones, connected with the perturbation analysis of invariant (stationary) distributions \(\bar{\pi }_{\varepsilon }\) for perturbed stochastic matrices \(\mathbf {P}_{\varepsilon } = (1 - \varepsilon ) \mathbf {P}_0 + \varepsilon \mathbf {D}\). The model, where matrix \(\mathbf {P}_0\) is a matrix of transition probabilities for a Markov chain whose phase space is one class of communicating states, is usually referred to as the model with regular perturbations. The model, where matrix \(\mathbf {P}_0\) is a matrix of transition probabilities for a Markov chain whose phase space splits into several closed classes of communicating states plus a class (possibly empty) of transient states, is usually referred to as the model with singular perturbations. The perturbation analysis in the singular case is much more difficult than in the regular case. The approach used in this paper is based on the method of artificial regeneration and renewal techniques for deriving special series representations for the stationary distributions \(\bar{\pi }_{\varepsilon }\), combined with some classical results from matrix theory, such as the Perron–Frobenius theorem and eigenvalue decomposition representations.

The paper includes six sections. In Sect. 38.2, we describe the algorithm for stochastic modelling of Markov chains with damping component and the procedure of embedding such Markov chains in the model of discrete time regenerative processes with special damping regeneration times. We also derive renewal type equations for the corresponding transition probabilities. In Sect. 38.3, we present ergodic theorems for Markov chains with damping component and give explicit formulas for the corresponding stationary distributions. In Sect. 38.4, we describe continuity properties of transition probabilities and stationary distributions with respect to the damping parameter. In Sect. 38.5, explicit upper bounds in approximations of the stationary distributions for Markov chains with damping component are given. In Sect. 38.6, we present asymptotic expansions for stationary distributions of Markov chains with damping component with respect to the damping parameter.

38.2 Markov Chains with Damping Component (MCDC)

In this section, we introduce the model of MCDC, which is often used for the description of information networks. We also describe the procedure of embedding such Markov chains in the model of discrete time regenerative processes with special damping regeneration times and present the corresponding renewal type equations for transition probabilities.

38.2.1 Stochastic Modelling of MCDC

Let (a) \(\mathbb {X} = \{1, 2, \ldots , m \}\) be a finite phase space, (b) \(\bar{p} = \langle p_1, \ldots , p_m \rangle \), \(\bar{d} = \langle d_1, \ldots , d_m \rangle \), and \(\bar{q} = \langle q_0, q_1 \rangle \) be three discrete probability distributions, (c) \(\mathbf {P}_0 = \Vert p_{0, ij} \Vert \) be a \(m \times m\) stochastic matrix and \(\mathbf {D} = \Vert d_{ij} \Vert \) be a damping \(m \times m\) stochastic matrix with elements \(d_{ij} = d_j > 0, i, j = 1, \ldots , m\), and \(\mathbf {P}_\varepsilon = \Vert p_{\varepsilon , ij} \Vert = (1- \varepsilon )\mathbf {P}_0 + \varepsilon \mathbf {D}\) be a stochastic matrix with elements \(p_{\varepsilon , ij} = (1- \varepsilon ) p_{0, ij} + \varepsilon d_j, i, j = 1, \ldots , m\), where \(\varepsilon \in [0, 1]\).

Let us describe an algorithm for stochastic modelling of a discrete time homogeneous Markov chain \(X_{\varepsilon , n}, n = 0, 1, \ldots \), with the phase space \(\mathbb {X}\), the initial distribution \(\bar{p}\), and the matrix of transition probabilities \(\mathbf {P}_\varepsilon \).

Let (a) U be a random variable taking values in the space \(\mathbb {X}\) and such that \(\mathsf {P} \{U = j \} = p_j, j \in \mathbb {X}\); (b) \(U_{i, n}, i \in \mathbb {X}, n = 1, 2, \ldots \) be a family of independent random variables taking values in the space \(\mathbb {X}\) and such that \(\mathsf {P} \{U_{i, n} = j \} = p_{0, ij}, i, j \in \mathbb {X}, n = 1, 2, \ldots \); (c) \(V_n, n = 1, 2, \ldots \) be a sequence of independent random variables taking values in the space \(\mathbb {X}\) and such that \(\mathsf {P} \{V_n = j \} = d_j, j \in \mathbb {X}, n = 1, 2, \ldots \); (d) W be a binary random variable taking values 0 and 1 with probabilities, respectively, \(q_0\) and \(q_1\); (e) \(W_{\varepsilon , n}, n = 1, 2, \ldots \) be, for every \(\varepsilon \in [0, 1]\), a sequence of independent binary random variables taking values 0 and 1 with probabilities, respectively, \(1 - \varepsilon \) and \(\varepsilon \); (f) the random variables U, W, the family of random variables \(U_{i, n}, i \in \mathbb {X}, n = 1, 2, \ldots \), and the random sequences \(V_{n}, n = 1, 2, \ldots \) and \(W_{\varepsilon , n}, n = 1, 2, \ldots \) be mutually independent, for every \(\varepsilon \in [0, 1]\).

Let us now define, for every \(\varepsilon \in [0, 1]\), the random sequence \(X_{\varepsilon , n}, n = 0, 1, \ldots \), by the following recurrence relation,

$$\begin{aligned} X_{\varepsilon , n} = U_{X_{\varepsilon , n -1}, n} \mathrm{I}(W_{\varepsilon , n} = 0) + V_{n} \mathrm{I}(W_{\varepsilon , n} = 1), n = 1, 2, \ldots , \, X_{\varepsilon , 0} = U. \end{aligned}$$
(38.1)

It is readily seen that the random sequence \(X_{\varepsilon , n}, n = 0, 1, \ldots \) is, for every \(\varepsilon \in [0, 1]\), a homogeneous Markov chain with the phase space \(\mathbb {X}\), the initial distribution \(\bar{p}\) and the matrix of transition probabilities \(\mathbf {P}_\varepsilon \). This Markov chain can be referred to as a Markov chain with damping component (MCDC).
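
The modelling algorithm above translates directly into code. The following is a minimal simulation sketch of recurrence (38.1), assuming numpy's random generator (the function name `simulate_mcdc` is our illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_mcdc(P0, d, p_init, eps, n_steps):
    """Simulate X_{eps, n} via recurrence (38.1): at each step, with
    probability eps (W_{eps, n} = 1) the state is resampled from the
    damping distribution d, otherwise (W_{eps, n} = 0) the chain moves
    according to the row of P0 at the current state."""
    m = len(d)
    x = rng.choice(m, p=p_init)          # X_{eps, 0} = U
    path = [x]
    for _ in range(n_steps):
        if rng.random() < eps:           # W_{eps, n} = 1
            x = rng.choice(m, p=d)       # V_n, distributed according to d
        else:                            # W_{eps, n} = 0
            x = rng.choice(m, p=P0[x])   # U_{x, n}, a step of the P0-chain
        path.append(x)
    return np.array(path)

# e.g. simulate_mcdc(P0, d, d, eps=0.15, n_steps=10_000); state frequencies
# along the path approximate the stationary distribution of P_eps
```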

38.2.2 Regenerative Properties of MCDC

Let us consider the extended random sequence,

$$\begin{aligned} Y_{\varepsilon ,n} = (X_{\varepsilon , n}, W_{\varepsilon , n}), n = 0, 1, \ldots , \ \mathrm{where} \ X_{\varepsilon , 0} = U, \, W_{\varepsilon , 0} = W. \end{aligned}$$
(38.2)

This sequence is also, for every \(\varepsilon \in [0, 1]\), a homogeneous Markov chain, with the phase space \(\mathbb {Y} = \mathbb {X} \times \{ 0, 1 \}\), the initial distribution \(\overline{pq} = \langle p_i q_r, (i, r) \in \mathbb {Y} \rangle \) and the transition probabilities,

$$\begin{aligned} p_{\varepsilon , ir, jk} = \mathsf {P} \{ X_{\varepsilon , 1} = j, W_{\varepsilon , 1} = k / X_{\varepsilon , 0} = i, W_{\varepsilon , 0} = r \} \end{aligned}$$
$$\begin{aligned} = \left\{ \begin{array}{lll} (1- \varepsilon ) p_{0, ij} &{} \text {for} \ i, j \in \mathbb {X}, \, r = 0, 1, \, k = 0, \\ \varepsilon d_j &{} \text {for} \ i, j \in \mathbb {X}, \, r = 0, 1, \, k = 1. \end{array} \right. \end{aligned}$$
(38.3)

It is worth noting that the transition probabilities \(p_{\varepsilon , ir, jk} = p_{\varepsilon , i, jk}, \, (i, r)\), \((j, k) \in \mathbb {Y}\) do not depend on \(r = 0, 1\), and also do not depend on \(i \in \mathbb {X}\) if \(k = 1\).

Let us assume that the damping (perturbation) parameter \(\varepsilon \in (0, 1]\).

Let us define the times of sequential hitting of state 1 by the second component \(W_{\varepsilon , n}\),

$$\begin{aligned} T_{\varepsilon , n} = \min (k > T_{\varepsilon , n-1}: W_{\varepsilon , k} = 1), n = 1, 2, \ldots , \, T_{\varepsilon , 0} = 0. \end{aligned}$$
(38.4)

The random sequence \(Y_{\varepsilon , n}, n = 0, 1, \ldots \) is also a discrete time regenerative process with “damping” regeneration times \(T_{\varepsilon , n}, n = 0, 1, \ldots \).

This follows from the fact that the transition probabilities \(p_{\varepsilon , ir, jk}\) of the Markov chain \(Y_{\varepsilon ,n}\), given by relation (38.3), do not depend on \(i \in \mathbb {X}\) if \(k = 1\).

This is a standard regenerative process, if the initial distribution \(\overline{pq} = \overline{dq}_1 = \langle d_i q_{1, r}\), \((i, r) \in \mathbb {Y} \rangle \), where \(\bar{q}_1 = \langle q_{1, 0} = 0, q_{1,1} = 1 \rangle \).

Otherwise, \(Y_{\varepsilon , n}\) is a regenerative process with the transition period \([0, T_{\varepsilon , 1})\).

It is useful to note that the inter-regeneration times \(S_{\varepsilon , n} = T_{\varepsilon , n} - T_{\varepsilon , n-1}, n = 1, 2, \ldots \) are i.i.d. geometrically distributed random variables, with parameter \(\varepsilon \), i.e.,

$$\begin{aligned} \mathsf {P} \{ S_{\varepsilon , 1} = n \} = \left\{ \begin{array}{lll} 0 &{} \text {for} \ n = 0, \\ \varepsilon (1 - \varepsilon )^{n-1} &{} \text {for} \ n = 1, 2, \ldots . \end{array} \right. \end{aligned}$$
(38.5)
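
Since the variables \(W_{\varepsilon , n}\) are i.i.d. Bernoulli with success probability \(\varepsilon \), relation (38.5) is easy to check numerically; a small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.1

W = rng.random(200_000) < eps    # i.i.d. W_{eps, n}, n = 1, 2, ...
T = np.flatnonzero(W) + 1        # damping regeneration times T_{eps, n}
S = np.diff(T, prepend=0)        # inter-regeneration times S_{eps, n}

print(S.mean())         # close to the mean 1 / eps = 10 of Geometric(eps)
print((S == 1).mean())  # close to P{S = 1} = eps = 0.1, cf. (38.5)
```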

38.2.3 Renewal Type Equations for Transition Probabilities of MCDC

Let us denote by \(\mathcal{P}_m\) the class of all initial distributions \(\bar{p} = \langle p_i , i \in \mathbb {X} \rangle \).

Let us introduce n-step transition probabilities for the Markov chain \(X_{\varepsilon , n}\), for \( i, j \in \mathbb {X}, \, n = 0, 1, \ldots \),

$$\begin{aligned} p_{\varepsilon , ij}(n) = \mathsf {P} \{ X_{\varepsilon , n} = j / X_{\varepsilon , 0} = i \}, \end{aligned}$$
(38.6)

and probabilities, for \(\bar{p} \in \mathcal{P}_m, j \in \mathbb {X}, \, n = 0, 1, \ldots \),

$$\begin{aligned} p_{\varepsilon , \bar{p}, j}(n) = \mathsf {P}_{\bar{p}} \{ X_{\varepsilon , n} = j \} = \sum _{i \in \mathbb {X}} p_i p_{\varepsilon , ij}(n). \end{aligned}$$
(38.7)

Here and henceforth, symbols \(\mathsf {P}_{\bar{p}}\) and \(\mathsf {E}_{\bar{p}}\) are used for probabilities and expectations related to a Markov chain with an initial distribution \(\bar{p}\). In the case, where the initial distribution is concentrated in a state i the above symbols take the forms \(\mathsf {P}_i\) and \(\mathsf {E}_i\).

Clearly, \(p_{\varepsilon , ij}(0) = \mathrm{I} (i = j), i, j \in \mathbb {X}\) and \(p_{\varepsilon , ij}(1) = p_{\varepsilon , ij}, i, j \in \mathbb {X}\). Also, \(p_{\varepsilon , \bar{p}, j}(0) = p_j, j \in \mathbb {X}\).

Let us denote by \(\mathcal{PQ}_m\) the class of all initial distributions \(\overline{pq} = \langle p_iq_r\), \((i, r) \in \mathbb {Y} \rangle \).

Analogously, let us introduce n-step transition probabilities for the Markov chain \(Y_{\varepsilon , n}\), for \( \, (i, r), (j, k) \in \mathbb {Y}, \, n = 0, 1, \ldots \),

$$\begin{aligned} p_{\varepsilon , ir, jk}(n) = \mathsf {P} \{ Y_{\varepsilon , n} = (j, k) / Y_{\varepsilon , 0} = (i, r) \}, \end{aligned}$$
(38.8)

and probabilities, for \(\overline{pq} \in \mathcal{PQ}_m, (j, k) \in \mathbb {Y}, \, n = 0, 1, \ldots \),

$$\begin{aligned} p_{\varepsilon , \overline{pq}, jk}(n) = \mathsf {P}_{\overline{pq}} \{ Y_{\varepsilon , n} = (j, k) \} = \sum _{(i, r) \in \mathbb {Y}} p_iq_r p_{\varepsilon , ir, jk}(n) \end{aligned}$$
(38.9)

Obviously, \(p_{\varepsilon , ir, jk}(0) = \mathrm{I} ((i, r) = (j, k)), (i, r), (j, k) \in \mathbb {Y}\) and

$$p_{\varepsilon , ir, jk}(1) = p_{\varepsilon , ir, jk}, (i, r), (j, k) \in \mathbb {Y}.$$

Also, \(p_{\varepsilon , \overline{pq}, jk}(0) = p_jq_k, (j, k) \in \mathbb {Y}\).

The independence of the transition probabilities \(p_{\varepsilon , ir, jk} = p_{\varepsilon , i, jk}, \, (i, r), (j, k) \in \mathbb {Y}\) of \(r = 0, 1\), and of \(i \in \mathbb {X}\) if \(k = 1\), implies that the n-step transition probabilities \(p_{\varepsilon , ir, jk}(n) = p_{\varepsilon , i, jk}(n), (i, r), (j, k) \in \mathbb {Y}, n = 0, 1, \ldots \) also do not depend on \(r = 0, 1\), or on \(i \in \mathbb {X}\) if \(k = 1\).

This also implies that the probabilities \(p_{\varepsilon , \overline{pq}, jk}(n) = p_{\varepsilon , \overline{p}, jk}(n), \overline{pq} \in \mathcal{PQ}_m, (j, k) \in \mathbb {Y}, n = 1, 2, \ldots \) do not depend on the initial distribution \(\bar{q}\).

Let us assume that the initial distribution \(\overline{pq} = \overline{dq}_1\). As was mentioned above, \(Y_{\varepsilon , n}\) is, in this case, the standard regenerative process with regeneration times \(T_{\varepsilon , n}, n = 0, 1, \ldots \).

This fact and relations (38.3) and (38.5) imply that probabilities \(p_{\varepsilon , \overline{pq}, jk}(n), n = 0, 1, \ldots \) are, for every \(j \in \mathbb {X}, \, k = 0, 1\), the unique bounded solution for the following discrete time renewal equation,

$$\begin{aligned} p_{\varepsilon , \overline{dq}_1, jk}(n) = q_{\varepsilon , \overline{dq}_1, jk}(n) + \sum _{l = 1}^n p_{\varepsilon , \overline{dq}_1, jk}(n - l) \varepsilon (1- \varepsilon )^{l- 1}, n \ge 0, \end{aligned}$$
(38.10)

where, for \(j \in \mathbb {X}, k = 0, 1, n \ge 0\),

$$\begin{aligned} q_{\varepsilon , \overline{dq}_1, jk}(n) = \mathsf {P}_{ \overline{dq}_1} \{Y_{\varepsilon , n} = (j, k), T_{\varepsilon , 1} > n \} \end{aligned}$$
$$\begin{aligned} = \left\{ \begin{array}{lll} p_{0, \bar{d}, j}(n)(1- \varepsilon )^n \mathrm{I}(n > 0) &{} \text {if} \ k = 0, \\ d_j \mathrm{I}(n = 0) &{} \text {if} \ k = 1. \end{array} \right. \end{aligned}$$
(38.11)

Let us now consider the general case, with some initial distribution

$$\overline{pq} = \langle p_i q_r, (i, r) \in \mathbb {Y} \rangle \in \mathcal{PQ}_m.$$

As was mentioned above, \(Y_{\varepsilon , n}\) is, in this case, the regenerative process with regeneration times \(T_{\varepsilon , n}, n = 0, 1, \ldots \) and the transition period \([0, T_{\varepsilon , 1})\).

In this case, probabilities \(p_{\varepsilon , \overline{pq}, jk}(n)\) and \(p_{\varepsilon , \overline{dq}_1, jk}(n)\) are, for \(j \in \mathbb {X}, k = 0, 1\), connected by the following renewal type relation,

$$\begin{aligned} p_{\varepsilon , \overline{pq}, jk}(n) = q_{\varepsilon , \overline{pq}, jk}(n) + \sum _{l = 1}^n p_{\varepsilon , \overline{dq}_1, jk}(n - l) \varepsilon (1- \varepsilon )^{l- 1}, \, n \ge 0, \end{aligned}$$
(38.12)

where, for \(j \in \mathbb {X}, k = 0, 1, n \ge 0\),

$$\begin{aligned} q_{\varepsilon , \overline{pq}, jk}(n) = \mathsf {P}_{ \overline{pq}} \{Y_{\varepsilon , n} = (j, k), T_{\varepsilon , 1} > n \} \end{aligned}$$
$$\begin{aligned} = \left\{ \begin{array}{lll} p_{0, \bar{p}, j}(n)(1- \varepsilon )^n \mathrm{I}(n > 0) &{} \text {if} \ k = 0, \\ p_j \mathrm{I}(n = 0) &{} \text {if} \ k = 1. \end{array} \right. \end{aligned}$$
(38.13)

The summation of renewal equations (38.10) over \(k = 0, 1\) yields the discrete time renewal equation for probabilities \(p_{\varepsilon , \bar{d}, j}(n), n = 0, 1, \ldots \), which are, for every \(j \in \mathbb {X}\), the unique bounded solution for this equation,

$$\begin{aligned} p_{\varepsilon , \bar{d}, j}(n) = p_{0, \bar{d}, j}(n)(1- \varepsilon )^n + \sum _{l = 1}^n p_{\varepsilon , \bar{d}, j}(n - l) \varepsilon (1- \varepsilon )^{l - 1}, \, n \ge 0. \end{aligned}$$
(38.14)

Also, the summation of renewal type equations (38.12) over \(k = 0, 1\) yields that, in the case of general initial distribution \(\overline{p} = \langle p_i, i \in \mathbb {X} \rangle \), the probabilities \(p_{\varepsilon , \bar{p}, j}(n)\) and \(p_{\varepsilon , \bar{d}, j}(n)\) are, for every \(j \in \mathbb {X}\), connected by the following renewal type relation,

$$\begin{aligned} p_{\varepsilon , \bar{p}, j}(n) = p_{0, \bar{p}, j}(n)(1- \varepsilon )^n + \sum _{l = 1}^n p_{\varepsilon , \bar{d}, j}(n - l) \varepsilon (1- \varepsilon )^{l - 1}, \, n \ge 0. \end{aligned}$$
(38.15)
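
As a numerical sanity check, the renewal relations (38.14) and (38.15) can be iterated directly and compared with the matrix powers \(\mathbf {P}^n_\varepsilon \); a sketch (the function name and array layout are our illustrative choices):

```python
import numpy as np

def renewal_marginals(P0, d, p_init, eps, N):
    """Compute p_{eps, p, j}(n), n = 0, ..., N, from the renewal
    relations (38.14)-(38.15) rather than from powers of P_eps."""
    m = len(d)
    # n-step marginals of the unperturbed chain started from d and from p
    p0_d = np.empty((N + 1, m)); p0_d[0] = d
    p0_p = np.empty((N + 1, m)); p0_p[0] = p_init
    for n in range(1, N + 1):
        p0_d[n] = p0_d[n - 1] @ P0
        p0_p[n] = p0_p[n - 1] @ P0
    # renewal equation (38.14): chain restarted from d at damping times
    pe_d = np.empty((N + 1, m))
    for n in range(N + 1):
        conv = sum((pe_d[n - l] * eps * (1 - eps) ** (l - 1)
                    for l in range(1, n + 1)), np.zeros(m))
        pe_d[n] = p0_d[n] * (1 - eps) ** n + conv
    # renewal type relation (38.15) for a general initial distribution p
    pe_p = np.empty((N + 1, m))
    for n in range(N + 1):
        conv = sum((pe_d[n - l] * eps * (1 - eps) ** (l - 1)
                    for l in range(1, n + 1)), np.zeros(m))
        pe_p[n] = p0_p[n] * (1 - eps) ** n + conv
    return pe_p

# Check: pe_p[n] should coincide with p_init @ matrix_power(P_eps, n),
# where P_eps = (1 - eps) * P0 + eps * np.tile(d, (len(d), 1)).
```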

38.3 Stationary Distributions of MCDC

In this section we present ergodic relations for transition probabilities of MCDC.

38.3.1 Stationary Distributions of Markov Chains \(X_{\varepsilon , n}\) and \(Y_{\varepsilon , n}\)

Let us describe ergodic properties of Markov chains \(X_{\varepsilon , n}\) and \(Y_{\varepsilon , n}\), for the case, where \(\varepsilon \in (0, 1]\).

Lemma 38.1

Let \(\varepsilon \in (0, 1]\). Then the following ergodic relation takes place for any initial distribution \(\overline{pq} \in \mathcal{PQ}_m\) and \((j, k) \in \mathbb {Y}\),

$$\begin{aligned} p_{\varepsilon , \overline{pq}, jk}(n) \rightarrow \pi _{\varepsilon , jk} = \varepsilon \sum _{l = 0}^{\infty } q_{\varepsilon , \overline{dq}_1, jk}(l) \ \mathrm{as } \ n \rightarrow \infty . \end{aligned}$$
(38.16)

Proof

The geometric distribution of the regeneration time \(T_{\varepsilon , 1}\) is aperiodic and has first moment \(\varepsilon ^{-1}\).

This makes it possible to apply the discrete time renewal theorem (see, for example, [20]) to the renewal equation (38.10) that yields the following ergodic relation, for \((j, k) \in \mathbb {Y}\),

$$\begin{aligned} p_{\varepsilon , \overline{dq}_1, jk}(n) \rightarrow \pi _{\varepsilon , jk} \ \mathrm{as} \ n \rightarrow \infty . \end{aligned}$$
(38.17)

Obviously \(q_{\varepsilon , \overline{pq}, jk}(n) \rightarrow 0\) as \(n \rightarrow \infty \), for \((j, k) \in \mathbb {Y}\).

Let us also define \(p_{\varepsilon , \overline{dq}_1, jk}(n - l) = 0\), for \(l > n\). Relation (38.17) implies that \(p_{\varepsilon , \overline{dq}_1, jk}(n - l) \rightarrow \pi _{\varepsilon , jk}\) as \(n \rightarrow \infty \), for \(l \ge 0\) and \((j, k) \in \mathbb {Y}\).

Using relation (38.12), the latter two asymptotic relations and the Lebesgue theorem, we get, for \(\overline{pq} \in \mathcal{PQ}_m, (j, k) \in \mathbb {Y}\),

$$\begin{aligned} \lim _{n \rightarrow \infty } p_{\varepsilon , \overline{pq}, jk}(n)&= \lim _{n \rightarrow \infty } q_{\varepsilon , \overline{pq}, jk}(n) \nonumber \\&\quad + \lim _{n \rightarrow \infty } \sum _{l = 1}^\infty p_{\varepsilon , \overline{dq}_1, jk}(n - l) \varepsilon (1- \varepsilon )^{l - 1} = \pi _{\varepsilon , jk}. \end{aligned}$$
(38.18)

The proof is complete. \(\square \)

The following lemma is a direct corollary of Lemma 38.1.

Lemma 38.2

Let \(\varepsilon \in (0, 1]\). Then the following ergodic relation takes place for any initial distribution \(\bar{p} \in \mathcal{P}_m\) and \(j \in \mathbb {X}\),

$$\begin{aligned} p_{\varepsilon , \bar{p}, j}(n) \rightarrow \pi _{\varepsilon , j} = \varepsilon \sum _{l = 0}^{\infty } p_{0, \bar{d}, j}(l)(1- \varepsilon )^l \ \mathrm{as } \ n \rightarrow \infty . \end{aligned}$$
(38.19)

It is useful to note that the stationary distribution \(\bar{\pi }_\varepsilon = \langle \pi _{\varepsilon , j}, j \in \mathbb {X} \rangle \) is the unique positive solution for the system of linear equations,

$$\begin{aligned} \sum _{i \in \mathbb {X}} \pi _{\varepsilon , i} p_{\varepsilon , ij} = \pi _{\varepsilon , j}, j \in \mathbb {X}, \ \sum _{j \in \mathbb {X}} \pi _{\varepsilon , j} = 1. \end{aligned}$$
(38.20)

Also, the stationary probabilities \(\pi _{\varepsilon , j}\) can be represented in the form \(\pi _{\varepsilon , j} = e_{\varepsilon , j}^{-1}, j \in \mathbb {X}\), via the expected return times \(e_{\varepsilon , j}\), with the use of the regeneration property of the Markov chain \(X_{\varepsilon , n}\) at the moments of return to state j.

The series representation for the stationary distribution of the Markov chain \(X_{\varepsilon , n}\), given by relation (38.19), is based on the use of the alternative damping regeneration times. This representation is, in our opinion, a more effective tool for performing the asymptotic perturbation analysis of MCDC.
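
Both routes to \(\bar{\pi }_\varepsilon \) are easy to implement: the damping series (38.19), truncated once the geometric weights become negligible, and the linear system (38.20). A sketch, assuming the usual numpy conventions:

```python
import numpy as np

def stationary_series(P0, d, eps, tol=1e-14):
    """pi_eps via the series (38.19): eps * sum_l (d @ P0^l) * (1-eps)^l."""
    row, weight = d.astype(float), eps
    acc = weight * row
    while weight > tol:
        row = row @ P0
        weight *= (1.0 - eps)
        acc = acc + weight * row
    return acc

def stationary_linear(P_eps):
    """pi_eps as the unique positive solution of the system (38.20)."""
    m = P_eps.shape[0]
    A = np.vstack([P_eps.T - np.eye(m), np.ones(m)])
    b = np.zeros(m + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# For P_eps = (1 - eps) * P0 + eps * np.tile(d, (len(d), 1)), the two
# functions agree to numerical precision for any eps in (0, 1].
```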

38.3.2 Stationary Distribution of Markov Chain \(X_{0, n}\)

Let us describe ergodic properties of the Markov chain \(X_{0, n}\). They are determined by the communication properties of its phase space \(\mathbb {X}\) and the matrix of transition probabilities \(\mathbf {P}_0\). The simplest case is where the following condition holds:

\(\mathbf{A}_1\)::

The phase space \(\mathbb {X}\) is one aperiodic class of communicating states for the Markov chain \(X_{0, n}\).

In this case, the following ergodic relation holds, for any \(\bar{p} \in \mathcal{P}_m, j \in \mathbb {X}\),

$$\begin{aligned} p_{0, \bar{p}, j}(n) \rightarrow \pi _{0, j} \ \mathrm{as} \ n \rightarrow \infty . \end{aligned}$$
(38.21)

The stationary distribution \(\bar{\pi }_0 = \langle \pi _{0, j}, j \in \mathbb {X} \rangle \) is the unique positive solution of the system of linear equations,

$$\begin{aligned} \sum _{i \in \mathbb {X}} \pi _{0, i} p_{0, ij} = \pi _{0, j}, j \in \mathbb {X}, \ \sum _{j \in \mathbb {X}} \pi _{0, j} = 1. \end{aligned}$$
(38.22)

A more complex case is that where the following condition holds:

\(\mathbf{B}_1\)::

The phase space \(\mathbb {X} = \cup _{g = 0}^h \mathbb {X}^{(g)}\), where: (a) \(\mathbb {X}^{(g)}, g = 0, \ldots , h\) are non-intersecting subsets of \(\mathbb {X}\), (b) \(\mathbb {X}^{(g)}, g = 1, \ldots , h\) are non-empty, closed, aperiodic classes of communicating states for the Markov chain \(X_{0, n}\), and (c) \(\mathbb {X}^{(0)}\) is a class (possibly empty) of transient states for the Markov chain \(X_{0, n}\).

If the initial distribution of the Markov chain \(X_{0, n}\) is concentrated at the set \(\mathbb {X}^{(g)}\), for some \(g = 1, \ldots , h\), then \(X_{0, n} = X^{(g)}_{0, n}, n = 0, 1, \ldots \) can be considered as a Markov chain with the reduced phase space \(\mathbb {X}^{(g)}\) and the matrix of transition probabilities \(\mathbf {P}_{0, g} = \Vert p_{0, rk} \Vert _{r, k \in \mathbb {X}^{(g)}}\).

According to condition \(\mathbf{B}_1\), for any \(r, k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\),

$$\begin{aligned} p_{0, r k}(n) \rightarrow \pi ^{(g)}_{0, k} \ \mathrm{as} \ n \rightarrow \infty , \end{aligned}$$
(38.23)

where \(\bar{\pi }_0^{(g)} = \langle \pi ^{(g)}_{0, k}, k \in \mathbb {X}^{(g)} \rangle \) is, for \(g = 1, \ldots , h\), the stationary distribution of the Markov chain \(X^{(g)}_{0, n}\).

The stationary distribution \(\bar{\pi }_0^{(g)}\) is, for every \(g = 1, \ldots , h\), the unique positive solution for the system of linear equations,

$$\begin{aligned} \pi ^{(g)}_{0, k} = \sum _{r \in \mathbb {X}^{(g)}} \pi ^{(g)}_{0, r} p_{0, rk}, k \in \mathbb {X}^{(g)}, \ \sum _{k \in \mathbb {X}^{(g)}} \pi ^{(g)}_{0, k} = 1. \end{aligned}$$
(38.24)

Let \(Z_\varepsilon = \min (n \ge 0: X_{\varepsilon , n} \in \overline{\mathbb {X}}^{(0)})\) be the first hitting time of the Markov chain \(X_{\varepsilon , n}\) into the set \(\overline{\mathbb {X}}^{(0)} = \mathbb {X} \setminus \mathbb {X}^{(0)} = \cup _{g = 1}^h \mathbb {X}^{(g)}\).

Note that \(Z_\varepsilon = 0\), if \(X_{\varepsilon , 0} \in \overline{\mathbb {X}}^{(0)}\), while \(Z_\varepsilon \ge 1\), if \(X_{\varepsilon , 0} \in \mathbb {X}^{(0)}\).

Let us also introduce the probabilities, for \(i \in \mathbb {X}, \, g = 1, \ldots , h\),

$$\begin{aligned} f^{(g)}_{\varepsilon , i} = \mathsf {P}_i \{ X_{\varepsilon , Z_\varepsilon } \in \mathbb {X}^{(g)} \}. \end{aligned}$$
(38.25)

The following relation takes place, for \(\bar{p} \in \mathcal{P}_m, \, k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\),

$$\begin{aligned} f^{(g)}_{\varepsilon , \bar{p}}&= \mathsf {P}_{\bar{p}} \{ X_{\varepsilon , Z_\varepsilon } \in \mathbb {X}^{(g)} \} = \sum _{i \in \mathbb {X}^{(g)}} p_i + \sum _{i \in \mathbb {X}^{(0)}} p_i f^{(g)}_{\varepsilon , i} \nonumber \\&= \sum _{i \in \mathbb {X}^{(g)}} p_i + \sum _{i \in \mathbb {X}^{(0)}} p_i \sum _{l = 1}^\infty \sum _{r \in \mathbb {X}^{(g)}} \mathsf {P}_i \{ Z_\varepsilon = l, X_{\varepsilon , l} = r \}. \end{aligned}$$
(38.26)

Note that in the case, where the set \(\mathbb {X}^{(0)}\) is empty, the second sum disappears in the above formulas for probabilities \(f^{(g)}_{\varepsilon , \bar{p}}\).

Lemma 38.3

Let condition \(\mathbf{B}_1\) hold. Then, the following ergodic relation takes place, for \(\bar{p} \in \mathcal{P}_m\) and \(k \in \mathbb {X}\),

$$\begin{aligned} \lim _{n \rightarrow \infty } p_{0, \bar{p}, k}(n) = \pi _{0, \bar{p}, k} = \left\{ \begin{array}{lll} f^{(g)}_{0, \bar{p}} \pi ^{(g)}_{0, k} &{} \ \text {for} \ k \in \mathbb {X}^{(g)}, g = 1, \ldots , h, \\ 0 &{} \ \text {for} \ k \in \mathbb {X}^{(0)}. \end{array} \right. \end{aligned}$$
(38.27)

Proof

Let us assume that \(\mathbb {X}^{(0)}\) is a non-empty set.

The following relation takes place, for \(\bar{p} \in \mathcal{P}_m, k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\),

$$\begin{aligned} p_{0, \bar{p}, k}(n)&= \sum _{i \in \mathbb {X}^{(g)}} p_i p_{0, ik}(n) \nonumber \\&\quad + \sum _{i \in \mathbb {X}^{(0)}} p_i \sum _{l = 1}^n \sum _{r \in \mathbb {X}^{(g)}} \mathsf {P}_i \{ Z_0 = l, X_{0, l} = r \} p_{0, rk}(n - l), \, n \ge 0. \end{aligned}$$
(38.28)

Define \(p_{0, rk}(n - l) = 0\), for \(l > n\). Relation (38.23) implies that \(p_{0, rk}(n - l) \rightarrow \pi ^{(g)}_{0, k}\) as \(n \rightarrow \infty \), for \( l \ge 0\) and \(r, k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\).

Using the above relation, relations (38.26), (38.28) and the Lebesgue theorem, we get, for \(\bar{p} \in \mathcal{P}_m, k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\),

$$\begin{aligned}&\lim _{n \rightarrow \infty } p_{0, \bar{p}, k}(n) = \lim _{n \rightarrow \infty } \sum _{i \in \mathbb {X}^{(g)}} p_i p_{0, ik}(n) \nonumber \\&\quad \quad \quad + \lim _{n \rightarrow \infty } \sum _{i \in \mathbb {X}^{(0)}} p_i \sum _{l = 1}^\infty \sum _{r \in \mathbb {X}^{(g)}} \mathsf {P}_i \{ Z_0 = l, X_{0, l} = r \} p_{0, rk}(n - l) \nonumber \\&\quad \quad = \sum _{i \in \mathbb {X}^{(g)}} p_i \pi ^{(g)}_{0, k} + \sum _{i \in \mathbb {X}^{(0)}} p_i \sum _{l = 1}^\infty \sum _{r \in \mathbb {X}^{(g)}} \mathsf {P}_i \{ Z_0 = l, X_{0, l} = r \} \pi ^{(g)}_{0, k} \nonumber \\&\quad \quad = f^{(g)}_{0, \bar{p}} \pi ^{(g)}_{0, k}. \end{aligned}$$
(38.29)

Also, the following relation holds, for \(\bar{p} \in \mathcal{P}_m, k \in \mathbb {X}^{(0)}\),

$$\begin{aligned} p_{0, \bar{p}, k}(n)&= \sum _{i \in \mathbb {X}^{(0)}} p_i \mathsf {P}_i \{ Z_0> n, X_{0, n} = k \} \nonumber \\&\le \sum _{i \in \mathbb {X}^{(0)}} p_i \mathsf {P}_i \{ Z_0 > n \} \rightarrow 0 \ \mathrm{as} \ n \rightarrow \infty . \end{aligned}$$
(38.30)

The case, where \(\mathbb {X}^{(0)} = \emptyset \), is trivial. \(\square \)

Remark 38.1

Ergodic relation (38.27) shows that in the singular case, where condition \(\mathbf{B}_1\) holds, the stationary probabilities \(\pi _{0, \bar{p}, k}\) defined by the asymptotic relation (38.27) may depend on the initial distribution.
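
For a concrete illustration of Lemma 38.3, the hitting probabilities \(f^{(g)}_{0, i}\) can be computed from the transient block of \(\mathbf {P}_0\) by standard absorbing-chain algebra. A sketch for a hypothetical 4-state chain satisfying condition \(\mathbf{B}_1\) (all numbers are illustrative):

```python
import numpy as np

# Hypothetical chain for condition B_1: X^(1) = {0} and X^(2) = {1} are
# closed (absorbing) classes, X^(0) = {2, 3} is the transient class.
P0 = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0, 0.0],
               [0.3, 0.1, 0.4, 0.2],
               [0.1, 0.4, 0.2, 0.3]])
closed = [[0], [1]]
transient = [2, 3]

# f^{(g)}_{0, i} for transient i solves (I - Q) f = R_g 1, where Q is the
# transient-to-transient block and R_g the transient-to-X^(g) block.
Q = P0[np.ix_(transient, transient)]
f = {}
for g, cls in enumerate(closed, start=1):
    R_g = P0[np.ix_(transient, cls)]
    f[g] = np.linalg.solve(np.eye(len(transient)) - Q, R_g.sum(axis=1))

# Limit (38.27): for k in X^(g), pi_{0, p, k} = f^{(g)}_{0, p} * pi^{(g)}_{0, k};
# here each closed class is a single state, so pi^{(g)}_0 = 1 on it.
p = np.array([0.0, 0.0, 0.5, 0.5])
for g, cls in enumerate(closed, start=1):
    f_p = p[cls].sum() + p[transient] @ f[g]
    print(g, f_p)   # the two values sum to one
```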

38.4 Perturbation Model for MCDC

In this section, we show in which way MCDC can be interpreted as a stochastic perturbed model. We also present results concerning continuity of stationary distributions \(\bar{\pi }_\varepsilon \) with respect to damping (perturbation) parameter \(\varepsilon \).

38.4.1 Continuity Property for Transition Probabilities of MCDC

In what follows, relation \(\varepsilon \rightarrow 0\) is a reduced version of relation \(0 < \varepsilon \rightarrow 0\).

The Markov chain \(X_{\varepsilon , n}\) has the matrix of transition probabilities

$$\mathbf {P}_\varepsilon = (1-\varepsilon )\mathbf {P}_0 + \varepsilon \mathbf {D}.$$

Obviously, for \(i, j \in \mathbb {X}\),

$$\begin{aligned} p_{\varepsilon , ij} \rightarrow p_{0, ij} \ \mathrm{as} \ \varepsilon \rightarrow 0. \end{aligned}$$
(38.31)

Also, as is well known, the matrix \(\Vert p_{\varepsilon , ij}(n) \Vert = \mathbf {P}_\varepsilon ^n\), for \(n = 0, 1, \ldots \), where \(\mathbf {P}_\varepsilon ^0 = \Vert \mathrm{I}(i = j) \Vert \).

Therefore, the following asymptotic relation holds, for \(n \ge 0, i, j \in \mathbb {X}\),

$$\begin{aligned} p_{\varepsilon , ij}(n) \rightarrow p_{0, ij}(n) \ \mathrm{as} \ \varepsilon \rightarrow 0. \end{aligned}$$
(38.32)

This relation lets one consider the Markov chain \(X_{\varepsilon , n}\), for \(\varepsilon \in (0, 1]\), as a perturbed version of the Markov chain \(X_{0, n}\) and interpret the damping parameter \(\varepsilon \) as a perturbation parameter.

Note that the phase space \(\mathbb {X}\) of the perturbed Markov chain \(X_{\varepsilon , n}\) is one aperiodic class of communicating states, for every \(\varepsilon \in (0, 1]\).

As far as the unperturbed Markov chain \(X_{0, n}\) is concerned, there are two different cases.

The first one is the case where condition \(\mathbf{A}_1\) holds, i.e., the phase space \(\mathbb {X}\) is one communicating class of states also for the Markov chain \(X_{0, n}\). In this case, one can refer to the corresponding perturbation model as regular.

The second one is the case where condition \(\mathbf{B}_1\) holds, i.e., the phase space \(\mathbb {X}\) is not one communicating class of states for the Markov chain \(X_{0, n}\). In this case, one can refer to the corresponding perturbation model as singular.

38.4.2 Continuity Property for Stationary Distributions of Regularly Perturbed MCDC

The following proposition takes place.

Lemma 38.4

Let condition \(\mathbf{A}_1\) hold. Then, the following asymptotic relation holds, for \(j \in \mathbb {X}\),

$$\begin{aligned} \pi _{\varepsilon , j} \rightarrow \pi _{0, j} \ \mathrm{as} \ \varepsilon \rightarrow 0. \end{aligned}$$
(38.33)

Proof

Let \(\nu _\varepsilon \) be a random variable geometrically distributed with parameter \(\varepsilon \), i.e., \(\mathsf {P} \{ \nu _\varepsilon = n \} = \varepsilon (1 - \varepsilon )^{n-1}, n = 1, 2, \ldots \). Obviously,

$$\begin{aligned} \nu _\varepsilon - 1 {\mathop {\longrightarrow }\limits ^{\mathsf {P}}} \infty \ \mathrm{as} \ \varepsilon \rightarrow 0. \end{aligned}$$
(38.34)

In the case, where condition \(\mathbf{A}_1\) holds, we get using relations (38.21) and (38.34) that the following asymptotic relation holds, for \(j \in \mathbb {X}\),

$$\begin{aligned} p_{0, \bar{d}, j}(\nu _\varepsilon -1) {\mathop {\longrightarrow }\limits ^{\mathsf {P}}} \pi _{0, j} \ \mathrm{as} \ \varepsilon \rightarrow 0. \end{aligned}$$
(38.35)

It is readily seen that the following representation takes place for the stationary probabilities \(\pi _{\varepsilon , j}, j \in \mathbb {X}\),

$$\begin{aligned} \pi _{\varepsilon , j} = \varepsilon \sum _{l = 0}^{\infty } p_{0, \bar{d}, j}(l)(1- \varepsilon )^l = \mathsf {E} p_{0, \bar{d}, j}(\nu _\varepsilon -1). \end{aligned}$$
(38.36)

Since the sequence \(p_{0, \bar{d}, j}(n), n = 0, 1, \ldots \) is bounded, relations (38.35) and (38.36) and the corresponding variant of the Lebesgue theorem imply that the following asymptotic relation holds, for \(j \in \mathbb {X}\),

$$\begin{aligned} \pi _{\varepsilon , j} = \mathsf {E} p_{0, \bar{d}, j}(\nu _\varepsilon -1) \rightarrow \pi _{0, j} \ \mathrm{as} \ \varepsilon \rightarrow 0. \end{aligned}$$
(38.37)

The proof is complete. \(\square \)

38.4.3 Continuity Property for Stationary Distributions of Singularly Perturbed MCDC

In this case, the following lemma takes place.

Lemma 38.5

Let condition \(\mathbf{B}_1\) hold. Then, the following asymptotic relation holds, for \(k \in \mathbb {X}\),

$$\begin{aligned} \pi _{\varepsilon , k} \rightarrow \pi _{0, \bar{d}, k} \ \mathrm{as} \ \varepsilon \rightarrow 0. \end{aligned}$$
(38.38)

Proof

In the case, where condition \(\mathbf{B}_1\) holds, we get using relations (38.27) and (38.34) that the following asymptotic relation holds, for \(k \in \mathbb {X}\),

$$\begin{aligned} p_{0, \bar{d}, k}(\nu _\varepsilon -1) {\mathop {\longrightarrow }\limits ^{\mathsf {P}}} \pi _{0,\bar{d}, k} \ \mathrm{as} \ \varepsilon \rightarrow 0. \end{aligned}$$
(38.39)

Analogously to relation (38.37), we get, using relations (38.36) and (38.39), the following asymptotic relation, for \(k \in \mathbb {X}\),

$$\begin{aligned} \pi _{\varepsilon , k} = \mathsf {E} p_{0, \bar{d}, k}(\nu _\varepsilon -1) \rightarrow \pi _{0, \bar{d}, k} \ \mathrm{as} \ \varepsilon \rightarrow 0. \end{aligned}$$
(38.40)

The proof is complete. \(\square \)

Remark 38.2

Lemmas 38.4 and 38.5 imply that, in the case where condition \(\mathbf{A}_1\) holds, the continuity property for stationary distributions \(\bar{\pi }_\varepsilon \) (as \(\varepsilon \rightarrow 0\)) takes place. However, in the case where condition \(\mathbf{B}_1\) holds, the continuity property for stationary distributions \(\bar{\pi }_\varepsilon \) (as \(\varepsilon \rightarrow 0\)) takes place only under the additional assumption that \(f^{(g)}_{0, \bar{p}} = f^{(g)}_{0, \bar{d}}, g = 1, \ldots , h\).
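
The dependence of the limit on the damping distribution is easy to observe numerically. Continuing the hypothetical singular example of Sect. 38.3.2, the following sketch shows that \(\bar{\pi }_\varepsilon \) approaches different limits for different \(\bar{d}\):

```python
import numpy as np

P0 = np.array([[1.0, 0.0, 0.0, 0.0],   # same hypothetical singular chain
               [0.0, 1.0, 0.0, 0.0],   # as in the sketch of Sect. 38.3.2
               [0.3, 0.1, 0.4, 0.2],
               [0.1, 0.4, 0.2, 0.3]])

def stationary(P):
    m = P.shape[0]
    A = np.vstack([P.T - np.eye(m), np.ones(m)])
    b = np.zeros(m + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

for d in (np.array([0.7, 0.1, 0.1, 0.1]),
          np.array([0.1, 0.7, 0.1, 0.1])):
    for eps in (0.1, 0.01, 0.001):
        P_eps = (1.0 - eps) * P0 + eps * np.tile(d, (4, 1))
        print(d, eps, np.round(stationary(P_eps), 4))
    # as eps -> 0 the vectors converge to pi_{0, d}, and the two limits
    # differ: in the singular case the limit depends on d via f^{(g)}_{0, d}
```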

38.5 Rate of Convergence for Stationary Distributions of Perturbed MCDC

In this section, we obtain explicit upper bounds for deviations of stationary distributions of Markov chains \(X_{\varepsilon , n}\) and \(X_{0, n}\).

38.5.1 Rate of Convergence for Stationary Distributions of Regularly Perturbed MCDC

Let us get some explicit upper bound for the rate of convergence  in the asymptotic relation (38.33) for the case, where condition \(\mathbf{A}_1\) holds.

It is well known that, under condition \(\mathbf{A}_1\), the rate of convergence in the ergodic relation (38.21) is exponential. This means that there exist constants \(C = C(\mathbf {P}_0) \in [0, \infty )\) and \(\lambda = \lambda (\mathbf {P}_0) \in [0, 1)\), and a distribution \(\bar{\pi }_0 = \langle \pi _{0, j}, j \in \mathbb {X} \rangle \) with all positive components, such that the following relation holds,

$$\begin{aligned} \max _{i,j \in \mathbb {X}} |p_{0, ij}(n) - \pi _{0, j} | \le C \lambda ^n, \ n \ge 1. \end{aligned}$$
(38.41)

In fact, condition \(\mathbf{A}_1\) is equivalent to the following condition:

\(\mathbf{A}_2\)::

There exist constants \(C = C(\mathbf {P}_0) \in [0, \infty )\) and \(\lambda = \lambda (\mathbf {P}_0) \in [0, 1)\), and a distribution \(\bar{\pi }_0 = \langle \pi _{0, j}, j \in \mathbb {X} \rangle \) with all positive components, such that relation (38.41) holds.

Indeed, condition \(\mathbf{A}_2\) implies that probabilities \(p_{0, ij}(n) > 0, i, j \in \mathbb {X}\) for all large enough n. This implies that \(\mathbb {X}\) is one aperiodic class of communicating states. Also, condition \(\mathbf{A}_2\) implies that \(p_{0, ij}(n) \rightarrow \pi _{0, j}\) as \(n \rightarrow \infty \), for \(i, j \in \mathbb {X}\), and, thus, \(\bar{\pi }_0 \) is the stationary distribution for the Markov chain \(X_{0, n}\).

According to the Perron–Frobenius theorem, the role of \(\lambda \) can be played by the absolute value of the second largest (in absolute value) eigenvalue of the matrix \(\mathbf {P}_0\). As far as the constant C is concerned, we refer to the book [20], where one can find algorithms which let one compute this constant.

The following theorem presents explicit upper bounds for deviations of the stationary distributions of the Markov chains \(X_{\varepsilon , n}\) and \(X_{0, n}\).

Theorem 38.1

Let condition \(\mathbf{A}_2\) hold. Then the following relation holds, for \(j \in \mathbb {X}\),

$$\begin{aligned} | \pi _{\varepsilon , j} - \pi _{0, j} | \le \varepsilon (| d_j - \pi _{0, j}| + \frac{C \lambda }{1 - \lambda }). \end{aligned}$$
(38.42)

Proof

The inequalities appearing in condition \(\mathbf{A}_2\) imply that the following relation holds, for \(n \ge 1, j \in \mathbb {X}\),

$$\begin{aligned} |p_{0, \bar{d}, j}(n) - \pi _{0, j} |&= | \sum _{i \in \mathbb {X}} (d_i p_{0, ij}(n) - d_i \pi _{0, j}) | \nonumber \\&\le \sum _{i \in \mathbb {X}} d_i | p_{0, ij}(n) - \pi _{0, j}| \le C \lambda ^n. \end{aligned}$$
(38.43)

Using relations (38.19) and (38.43), we get the following estimate, for \(j \in \mathbb {X}\),

$$\begin{aligned} | \pi _{\varepsilon , j} - \pi _{0, j} |&\le | \varepsilon \sum _{l = 0}^{\infty } p_{0, \bar{d}, j}(l)(1- \varepsilon )^l - \pi _{0, j} | \nonumber \\&= | \varepsilon \sum _{l = 0}^{\infty } p_{0, \bar{d}, j}(l)(1- \varepsilon )^l - \varepsilon \sum _{l = 0}^{\infty } \pi _{0, j} (1- \varepsilon )^l | \nonumber \\&\le \varepsilon | d_j - \pi _{0, j}| + \varepsilon \sum _{l = 1}^{\infty } C\lambda ^l (1- \varepsilon )^l \nonumber \\&\le \varepsilon \left( | d_j - \pi _{0, j}| + \frac{C \lambda (1-\varepsilon )}{1 - \lambda (1 - \varepsilon )}\right) \nonumber \\&\le \varepsilon \left( | d_j - \pi _{0, j}| + \frac{C \lambda }{1 - \lambda }\right) . \end{aligned}$$
(38.44)

The proof is complete. \(\square \)

Remark 38.3

The quantities \(| d_j - \pi _{0, j}|\) appearing in inequality (38.42) are, in some sense, determined by prior information about the stationary probabilities. They take smaller values if one can choose a damping distribution \(\bar{d}\) with smaller deviation from the stationary distribution \(\bar{\pi }_0\). The inequalities \(| d_j - \pi _{0, j}| \le d_j \vee (1 - d_j) \le 1\) let one replace the term \(| d_j - \pi _{0, j}|\) in inequality (38.42) by quantities independent of the corresponding stationary probabilities \(\pi _{0, j}\).

Remark 38.4

Theorem 38.1 also remains valid if condition \(\mathbf{A}_2\) is weakened by omitting the assumption of positivity for the distribution \(\bar{\pi }_0 = \langle \pi _{0, i}, i \in \mathbb {X} \rangle \) appearing in this condition. In this case, condition \(\mathbf{A}_2\) implies that the phase space \(\mathbb {X} = \mathbb {X}_1 \cup \mathbb {X}_0\), where \(\mathbb {X}_1 = \{ i \in \mathbb {X}: \pi _{0, i} > 0 \}\) is a non-empty closed communicating class of states, while \(\mathbb {X}_0 = \{ i \in \mathbb {X}: \pi _{0, i} = 0 \}\) is a class (possibly empty) of transient states, for the Markov chain \(X_{0, n}\). Note that \(\bar{\pi }_0\) is still the stationary distribution for this Markov chain.

We would like also to refer to paper [30], where one can find alternative upper bounds for the rate of convergence of stationary distributions for perturbed Markov chains and further related references.
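
The bound of Theorem 38.1 can also be checked numerically. A sketch, assuming a hypothetical regular \(\mathbf {P}_0\) and estimating an admissible constant C from a finite number of matrix powers (so the computed C only approximates a constant satisfying (38.41)):

```python
import numpy as np

P0 = np.array([[0.6, 0.3, 0.1],   # hypothetical matrix satisfying A_1
               [0.2, 0.5, 0.3],
               [0.3, 0.3, 0.4]])
d = np.array([0.2, 0.3, 0.5])
m = P0.shape[0]

def stationary(P):
    A = np.vstack([P.T - np.eye(P.shape[0]), np.ones(P.shape[0])])
    b = np.zeros(P.shape[0] + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

pi0 = stationary(P0)
lam = np.sort(np.abs(np.linalg.eigvals(P0)))[-2]  # second largest |eigenvalue|

# approximate an admissible C in (38.41) from the first 200 powers of P0
Pn, C = np.eye(m), 0.0
for n in range(1, 200):
    Pn = Pn @ P0
    C = max(C, np.abs(Pn - pi0).max() / lam ** n)

for eps in (0.2, 0.05, 0.01):
    pi_eps = stationary((1.0 - eps) * P0 + eps * np.tile(d, (m, 1)))
    bound = eps * (np.abs(d - pi0) + C * lam / (1.0 - lam))
    assert np.all(np.abs(pi_eps - pi0) <= bound + 1e-12)  # inequality (38.42)
```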

38.5.2 Rate of Convergence for Stationary Distributions of Singularly Perturbed MCDC

Let us now assume that condition \(\mathbf{B}_1\) holds.

Let us consider matrices, for \(g = 0, \ldots , h\) and \(n = 0, 1, \ldots \),

$$\begin{aligned} \mathbf {P}_{0, g} = \Vert p_{0, rk} \Vert _{r, k \in \mathbb {X}^{(g)}} \ \mathrm{and} \ \mathbf {P}^n_{0, g} = \Vert p^{(g)}_{0, rk}(n) \Vert _{r, k \in \mathbb {X}^{(g)}}. \end{aligned}$$
(38.45)

Note that, for \(g = 1, \ldots , h\), probabilities \(p^{(g)}_{0, rk}(n) = p_{0, rk}(n), r, k \in \mathbb {X}^{(g)}, n \ge 0\), since \(\mathbb {X}^{(g)}, g = 1, \ldots , h\) are closed classes of states.

The reduced Markov chain \(X^{(g)}_{0, n}\) with the phase space \( \mathbb {X}^{(g)}\) and the matrix of transition probabilities \(\mathbf {P}_{0, g}\) is, for every \(g = 1, \ldots , h\), exponentially ergodic, and the following estimates take place, for \(g = 1, \ldots , h\) and \(n = 0, 1, \ldots \),

$$\begin{aligned} \max _{r, k \in \mathbb {X}^{(g)}} |p^{(g)}_{0, rk}(n) - \pi ^{(g)}_{0, k} | \le C_{g} \lambda ^n_{g}, \end{aligned}$$
(38.46)

with some constants \(C_g = C_g(\mathbf {P}_0) \in [0, \infty ), \lambda _g = \lambda _g(\mathbf {P}_0) \in [0, 1), g = 1, \ldots , h\) and distributions \(\bar{\pi }^{(g)}_0 = \langle \pi ^{(g)}_{0, k}, k\in \mathbb {X}^{(g)} \rangle , g = 1, \ldots , h\), with all positive components.

Obviously, inequalities (38.46) imply that \(p^{(g)}_{0, rk}(n) \rightarrow \pi ^{(g)}_{0, k}\) as \(n \rightarrow \infty \), for \(r, k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\). Thus, distribution \(\bar{\pi }^{(g)}_0\) is the stationary distribution for the Markov chain \(X^{(g)}_{0, n}\), for every \(g = 1, \ldots , h\).

As has been mentioned above, the role of \(\lambda _g\) can be played, for every \(g = 1, \ldots , h\), by the absolute value of the second largest (in absolute value) eigenvalue of the matrix \(\mathbf {P}_{0, g}\), and \(C_{g}\) is the constant which, as has been mentioned above, can be computed using the algorithm described in the book [20].

As is well known, there exists \(\lambda _0 = \lambda _0(\mathbf {P}_0) \in (0, 1)\) such that the following exponential moments are finite, for \(i \in \mathbb {X}^{(0)}\),

$$\begin{aligned} C_{0, i} = C_{0, i}(\mathbf {P}_0) = \mathsf {E}_i e^{(\ln \lambda ^{-1}_0) Z_0} = \mathsf {E}_i \lambda _0^{- Z_0} < \infty . \end{aligned}$$
(38.47)

Let us also denote,

$$\begin{aligned} C_0 = \max _{i \in \mathbb {X}^{(0)}} C_{0, i}. \end{aligned}$$
(38.48)

The upper estimates for \(\lambda _0\) can be found, for example, in the book [24].

Let us denote,

$$\begin{aligned} \lambda = \max _{0 \le g \le h} \lambda _g, \ C = \max _{1 \le g \le h} (C_g + C_gC_0 + C_0). \end{aligned}$$
(38.49)

Here, one should formally set \(C_0 = \lambda _0 = 0\), if the class \(\mathbb {X}^{(0)}\) is empty.

Condition \(\mathbf{B}_1\) is, in fact, equivalent to the following condition:

\(\mathbf{B}_2\)::

The phase space \(\mathbb {X} = \cup _{g = 0}^h \mathbb {X}^{(g)}\), where: (a) \(\mathbb {X}^{(g)}, g = 0, \ldots , h\) are non-intersecting subsets of \(\mathbb {X}\), (b) \(\mathbb {X}^{(g)}, g = 1, \ldots , h\) are non-empty, closed classes of states for the Markov chain \(X_{0, n}\) such that inequalities (38.46) hold, (c) \(\mathbb {X}^{(0)}\) is a class of states for the Markov chain \(X_{0, n}\) such that relation (38.47) holds (if \(\mathbb {X}^{(0)}\) is a non-empty set).

Indeed, condition \(\mathbf{B}_2\) implies that probabilities \(p^{(g)}_{0, rk}(n) > 0, r, k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\) for all large enough n. This implies that \(\mathbb {X}^{(g)}, g = 1, \ldots , h\) are closed aperiodic classes of communicating states. Also, inequalities (38.46) imply that \(p^{(g)}_{0, rk}(n) \rightarrow \pi ^{(g)}_{0, k}\) as \(n \rightarrow \infty \), for \(r, k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\), and, thus, \(\bar{\pi }^{(g)}_0 = \langle \pi ^{(g)}_{0, k}, k \in \mathbb {X}^{(g)} \rangle \) is the stationary distribution for the Markov chain \(X^{(g)}_{0, n}\), for every \(g = 1, \ldots , h\). Also, relation (38.47) implies that probabilities \(p^{(0)}_{0, rk}(n) \rightarrow 0\) as \(n \rightarrow \infty \), for \(r, k \in \mathbb {X}^{(0)}\) (if \(\mathbb {X}^{(0)}\) is a non-empty set). This implies that \(\mathbb {X}^{(0)}\) is a transient class of states for the Markov chain \(X_{0, n}\).

Theorem 38.2

Let condition \(\mathbf{B}_2\) hold. Then the following relation holds, for \(k \in \mathbb {X}\),

$$\begin{aligned} | \pi _{\varepsilon , k} - \pi _{0, \bar{d}, k} | \le \varepsilon \left( | d_k - \pi _{0, \bar{d}, k}| + \frac{C \lambda }{1 - \lambda }\right) . \end{aligned}$$
(38.50)

Proof

Let us first assume that \(\mathbb {X}^{(0)} = \emptyset \).

In this case, relations (38.26)–(38.28) imply that, for \(n \ge 1, k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\),

$$\begin{aligned} | p_{0, \bar{d}, k}(n) - \pi _{0, \bar{d}, k}|&= | \sum _{i \in \mathbb {X}^{(g)}} d_i p^{(g)}_{0, ik}(n) - \sum _{i \in \mathbb {X}^{(g)}} d_i \pi ^{(g)}_{0, k} | \nonumber \\&\le C_g \lambda _g^n \le C \lambda ^n. \end{aligned}$$
(38.51)

Let us now assume that \(\mathbb {X}^{(0)} \ne \emptyset \).

Using relations (38.26)–(38.28) and (38.46)–(38.47), we get the following inequalities, for \(n \ge 1, k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\),

$$\begin{aligned}&| p_{0, \bar{d}, k}(n) - \pi _{0, \bar{d}, k}| = | \sum _{i \in \mathbb {X}^{(g)}} d_i p^{(g)}_{0, ik}(n) \nonumber \\&\quad \quad + \sum _{i \in \mathbb {X}^{(0)}} d_i \sum _{l = 1}^{n} \sum _{r \in \mathbb {X}^{(g)}} \mathsf {P}_i \{ Z_0 = l, X_{0, l} = r \} p^{(g)}_{0, rk}(n - l) - f^{(g)}_{0, \bar{d}} \pi ^{(g)}_{0, k} | \nonumber \\&\quad \le \sum _{i \in \mathbb {X}^{(g)}} d_i | p^{(g)}_{0, ik}(n) - \pi ^{(g)}_{0, k} | \nonumber \\&\quad \quad + \sum _{i \in \mathbb {X}^{(0)}} d_i \sum _{l = 1}^{n - 1} \sum _{r \in \mathbb {X}^{(g)}} \mathsf {P}_i \{ Z_0 = l, X_{0, l} = r \} |p^{(g)}_{0, rk}(n - l) - \pi ^{(g)}_{0, k}| \nonumber \\&\quad \quad + \sum _{i \in \mathbb {X}^{(0)}} d_i \sum _{r \in \mathbb {X}^{(g)}} \mathsf {P}_i \{ Z_0 = n, X_{0, n} = r \} |\mathrm{I}(r = k) - \pi ^{(g)}_{0, k}| \nonumber \\&\quad \quad + \, |\sum _{i \in \mathbb {X}^{(0)}} d_i \sum _{l = 1}^n \sum _{r \in \mathbb {X}^{(g)}} \mathsf {P}_i \{ Z_0 = l, X_{0, l} = r \} - f^{(g)}_{0, \bar{d}} \, | \pi ^{(g)}_{0, k} \nonumber \\&\quad \le C_g \lambda _g^n + \sum _{i \in \mathbb {X}^{(0)}} d_i \sum _{l = 1}^{n - 1} \ \mathsf {P}_i \{ Z_0 = l \} C_g \lambda _g^{n- l} \nonumber \\&\quad \quad + \sum _{i \in \mathbb {X}^{(0)}} d_i \mathsf {P}_i \{ Z_0 = n \} + \sum _{i \in \mathbb {X}^{(0)}} d_i \mathsf {P}_i \{ Z_0 > n \} \nonumber \\&\quad \le C_g \lambda ^n + \sum _{i \in \mathbb {X}^{(0)}} d_i \sum _{l = 1}^{n- 1} \mathsf {P}_i \{ Z_0 = l \} \lambda _0^{-l} \left( \frac{\lambda _0}{\lambda }\right) ^l C_g \lambda ^n + C_0 \lambda ^n \nonumber \\&\quad \le ( C_g + C_0 C_g + C_0) \lambda ^n = C \lambda ^n. \end{aligned}$$
(38.52)

Also, in this case, the following relation holds, for \(n \ge 1, k \in \mathbb {X}^{(0)}\),

$$\begin{aligned} p_{0, \bar{d}, k}(n)&= \sum _{i \in \mathbb {X}^{(0)}} d_i \mathsf {P}_i \{ Z_0> n, X_{0, n} = k \} \nonumber \\&\le \sum _{i \in \mathbb {X}^{(0)}} d_i \mathsf {P}_i \{ Z_0 > n \} \le C_0\lambda _0^n \le C \lambda ^n. \end{aligned}$$
(38.53)

Using relations (38.19) and (38.51)–(38.53), we get the following estimate, for \(k \in \mathbb {X}\),

$$\begin{aligned} | \pi _{\varepsilon , k} - \pi _{0, \bar{d}, k} |&\le | \varepsilon \sum _{l = 0}^{\infty } p_{0, \bar{d}, k}(l)(1- \varepsilon )^l - \pi _{0, \bar{d}, k} | \nonumber \\&= | \varepsilon \sum _{l = 0}^{\infty } p_{0, \bar{d}, k}(l)(1- \varepsilon )^l - \varepsilon \sum _{l = 0}^{\infty } \pi _{0, \bar{d}, k} (1- \varepsilon )^l | \nonumber \\&\le \varepsilon | d_k - \pi _{0, \bar{d}, k}| + \varepsilon \sum _{l = 1}^{\infty } C\lambda ^l (1- \varepsilon )^l \nonumber \\&\le \varepsilon (| d_k - \pi _{0, \bar{d}, k}| + \frac{C \lambda (1-\varepsilon )}{1 - \lambda (1 - \varepsilon )}) \nonumber \\&\le \varepsilon (| d_k - \pi _{0, \bar{d}, k}| + \frac{C \lambda }{1 - \lambda }). \end{aligned}$$
(38.54)

The proof is complete. \(\square \)

38.6 Asymptotic Expansions for Stationary Distributions of Perturbed MCDC

In this section, we present asymptotic expansions for stationary distributions of perturbed MCDC.

38.6.1 Asymptotic Expansions for Stationary Distributions of Regularly Perturbed MCDC

Let us get some asymptotic expansions for perturbed stationary distributions in the case, where condition \(\mathbf{A}_1\) holds.

According to the Perron–Frobenius theorem, in this case, the eigenvalues \(\rho _1, \ldots \), \(\rho _m\) of the stochastic matrix \(\mathbf {P}_0\) satisfy the following condition:

\(\mathbf{A}_3\)::

\(\rho _1 = 1 > |\rho _2| \ge \cdots \ge |\rho _m|\).

Note that some of the eigenvalues \(\rho _2, \ldots , \rho _m\) can be complex numbers.

As is well known, condition \(\mathbf{A}_3\) implies that the following eigenvalue decomposition representation takes place, for \(i, j \in \mathbb {X}\) and \(n \ge 1\),

$$\begin{aligned} p_{0, ij}(n) = \pi _{0, j} + \rho ^n_2 \pi _{0, ij}[2] + \cdots + \rho ^n_m \pi _{0, ij}[m], \end{aligned}$$
(38.55)

where: (a) \(\bar{\pi }_0 = \langle \pi _{0, j}, j \in \mathbb {X} \rangle \) is a distribution with all positive components,

(b) \(\pi _{0, ij}[l], i, j \in \mathbb {X}, l = 2, \ldots , m\) are some complex- or real-valued coefficients.

Obviously, relation (38.55) implies that probabilities \(p_{0, ij}(n) \rightarrow \pi _{0, j}\) as \(n \rightarrow \infty \), for \(i, j \in \mathbb {X}\). Thus, \(\bar{\pi }_0\) is the stationary distribution for the Markov chain \(X_{0, n}\).

In fact, condition \(\mathbf{A}_3\) is equivalent to condition \(\mathbf{A}_1\).

Indeed, the convergence relation \(p_{0, ij}(n) \rightarrow \pi _{0, j}\) as \(n \rightarrow \infty \), for \(i, j \in \mathbb {X}\), implies that \(p_{0, ij}(n) > 0, i, j \in \mathbb {X}\), for all large enough n. This implies that \(\mathbb {X}\) is one communicating, aperiodic class of states.

We refer to the book [20], where one can find the description of an effective algorithm for finding the matrices \(\varvec{\varPi }_l = \Vert \pi _{0, ij}[l] \Vert , l = 2, \ldots , m\).

Relation (38.55) implies that the following relation holds, for \(j \in \mathbb {X}\) and \(n \ge 1\),

$$\begin{aligned} p_{0, \bar{d}, j}(n) = \pi _{0, j} + \rho ^n_2 \pi _{0, \bar{d}, j}[2] + \cdots + \rho ^n_m \pi _{0,\bar{d}, j}[m], \end{aligned}$$
(38.56)

where, for \(j \in \mathbb {X}, l = 2, \ldots , m\),

$$\begin{aligned} \pi _{0, \bar{d}, j}[l] = \sum _{i \in \mathbb {X}} d_i \pi _{0, i j}[l]. \end{aligned}$$
(38.57)

Let us also define the coefficients, for \(j \in \mathbb {X}, n \ge 1\),

$$\begin{aligned} \tilde{\pi }_{0, \bar{d}, j}[n] = \left\{ \begin{array}{lll} d_j - \pi _{0, j} + \sum _{l = 2}^m \pi _{0, \bar{d}, j}[l] \frac{\rho _l}{1 - \rho _l} &{} \text {for} \ n = 1, \\ (-1)^{n -1} \sum _{l = 2}^m \pi _{0, \bar{d}, j}[l] \frac{\rho _l^{n-1}}{(1 - \rho _l)^{n }} &{} \text {for} \ n > 1. \end{array} \right. \end{aligned}$$
(38.58)

Below, symbol \(O(\varepsilon ^n)\) is used for quantities such that \(O(\varepsilon ^n)/\varepsilon ^n\) is bounded as function of \(\varepsilon \in (0, 1]\).

The following theorem takes place.

Theorem 38.3

Let condition \(\mathbf{A}_3\) hold. Then, the following asymptotic expansions take place, for every \(j \in \mathbb {X}\) and \(n \ge 1\),

$$\begin{aligned} \pi _{\varepsilon , j} = \pi _{0, j} + \tilde{\pi }_{0, \bar{d}, j}[1] \varepsilon + \cdots + \tilde{\pi }_{0, \bar{d}, j}[n] \varepsilon ^{n} + O(\varepsilon ^{n+1}). \end{aligned}$$
(38.59)

Proof

Relations (38.19) and (38.56) imply that the following relation holds, for \(j \in \mathbb {X}\),

$$\begin{aligned} \pi _{\varepsilon , j}&= \varepsilon \sum _{n = 0}^{\infty } p_{0, \bar{d}, j}(n)(1- \varepsilon )^n \nonumber \\&= \varepsilon d_j + \varepsilon \sum _{n = 1}^{\infty } \left( \pi _{0, j} + \sum _{l = 2}^m \rho ^n_l \pi _{0, \bar{d}, j}[l]\right) (1- \varepsilon )^n \nonumber \\&= \pi _{0, j} + \varepsilon (d_j - \pi _{0, j}) + \sum _{l = 2}^m \pi _{0, \bar{d}, j}[l] \varepsilon \sum _{n = 1}^{\infty } \rho ^n_l (1- \varepsilon )^n \nonumber \\&= \pi _{0, j} + \varepsilon (d_j - \pi _{0, j}) + \sum _{l = 2}^m \pi _{0, \bar{d}, j}[l] \frac{\rho _l \varepsilon (1- \varepsilon )}{1 - \rho _l(1-\varepsilon )} \nonumber \\&= \pi _{0, j} + \varepsilon (d_j - \pi _{0, j}) + \sum _{l = 2}^m \pi _{0, \bar{d}, j}[l] \rho _l \varepsilon (1 - \varepsilon ) (1 - \rho _l + \rho _l \varepsilon )^{-1}. \end{aligned}$$
(38.60)

Functions \((a + b\varepsilon )^{-1}, \varepsilon \in [0, 1]\) and \(b \varepsilon (1 - \varepsilon )(a + b\varepsilon )^{-1}, \varepsilon \in [0, 1]\) admit, for any complex numbers \(a \ne 0\) and b, the following Taylor asymptotic expansions, for every \(n \ge 1\) and \(\varepsilon \rightarrow 0\),

$$\begin{aligned} (a + b\varepsilon )^{-1}&= a^{-1} - a^{-2}b \varepsilon + a^{-3}b^2\varepsilon ^2 \nonumber \\&\quad + \cdots + (-1)^{n} a^{- (n + 1)}b^n \varepsilon ^n + O(\varepsilon ^{n+1}), \end{aligned}$$
(38.61)

and

$$\begin{aligned}&b\varepsilon (1 - \varepsilon ) (a + b\varepsilon )^{-1} = a^{-1}b\varepsilon - a^{-1}b (1 + a^{-1}b) \varepsilon ^2 + a^{-2}b^2(1 + a^{-1}b) \varepsilon ^3 \nonumber \\&\quad \quad \quad + \cdots + (-1)^{n-1} a^{- (n -1)}b^{n-1}(1 + a^{-1}b) \varepsilon ^{n} + O(\varepsilon ^{n+1}). \end{aligned}$$
(38.62)

Relations (38.60)–(38.62) let us write down the following Taylor asymptotic expansions for stationary probabilities \(\pi _{\varepsilon , j}, j \in \mathbb {X}\), for every \(n \ge 1\) and \(\varepsilon \rightarrow 0\),

$$\begin{aligned} \pi _{\varepsilon , j}&= \pi _{0, j} + \varepsilon (d_j - \pi _{0, j}) + \sum _{l = 2}^m \pi _{0, \bar{d}, j}[l] \rho _l \varepsilon (1 - \varepsilon ) (1 - \rho _l + \rho _l \varepsilon )^{-1} \nonumber \\&= \pi _{0, j} + \tilde{\pi }_{0, \bar{d}, j}[1] \varepsilon + \cdots + \tilde{\pi }_{0, \bar{d}, j}[n] \varepsilon ^{n} + O(\varepsilon ^{n+1}). \end{aligned}$$
(38.63)

The proof is complete. \(\square \)

Some of the eigenvalues \(\rho _l\) and coefficients \(\pi _{0, ij}[l]\) can be complex numbers. Despite this, the coefficients \(\tilde{\pi }_{0, \bar{d}, j}[n], n \ge 1\) in the expansions given in relation (38.59) are real numbers.

Indeed, \(\pi _{\varepsilon , j}\) is a positive number, for every \(\varepsilon \in [0, 1]\). Relation (38.63) implies that \((\pi _{\varepsilon , j} - \pi _{0, j}) \varepsilon ^{-1} \rightarrow \tilde{\pi }_{0, \bar{d}, j}[1]\) as \(\varepsilon \rightarrow 0\). Thus, \(\tilde{\pi }_{0, \bar{d}, j}[1]\) is a real number. Proceeding in this way, one can prove the above proposition for all coefficients in expansions (38.59). This also implies that the remainders \(O(\varepsilon ^{n+1})\) of these expansions are real-valued functions of \(\varepsilon \).

Moreover, since \(\bar{\pi }_\varepsilon = \langle \pi _{\varepsilon , j}, j \in \mathbb {X} \rangle , \varepsilon \in (0, 1]\) and \(\bar{\pi }_{0} = \langle \pi _{0, j}, j \in \mathbb {X} \rangle \) are probability distributions, the coefficients in the asymptotic expansions (38.59) are connected by the equalities \(\sum _{j \in \mathbb {X}} \tilde{\pi }_{0, \bar{d}, j}[n] = 0\), for \(n \ge 1\).
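
The coefficients (38.57)–(38.58) can be computed from an eigendecomposition of \(\mathbf {P}_0\). The following sketch, assuming a hypothetical diagonalizable 3-state \(\mathbf {P}_0\), builds the Taylor polynomial of (38.59); it can be compared with the exact \(\bar{\pi }_\varepsilon \) obtained from (38.20):

```python
import numpy as np

P0 = np.array([[0.6, 0.3, 0.1],   # hypothetical regular, diagonalizable P0
               [0.2, 0.5, 0.3],
               [0.3, 0.3, 0.4]])
d = np.array([0.2, 0.3, 0.5])
m = P0.shape[0]

rho, R = np.linalg.eig(P0)          # columns of R: right eigenvectors
L = np.linalg.inv(R)                # rows of L: matching left eigenvectors
order = np.argsort(-np.abs(rho))    # put rho_1 = 1 first
rho, R, L = rho[order], R[:, order], L[order, :]

pi0 = (L[0] / L[0].sum()).real      # stationary distribution of P0
# pi_{0, d, .}[l] of (38.57), via the spectral projectors Pi_l = R[:, l] L[l]
coef = [d @ np.outer(R[:, l], L[l]) for l in range(1, m)]

def expansion(eps, n):
    """Taylor polynomial of order n in (38.59), coefficients per (38.58)."""
    out = pi0 + eps * (d - pi0 + sum(c * r / (1 - r)
                                     for c, r in zip(coef, rho[1:])))
    for k in range(2, n + 1):
        out = out + eps ** k * (-1) ** (k - 1) * sum(
            c * r ** (k - 1) / (1 - r) ** k for c, r in zip(coef, rho[1:]))
    return out.real

# expansion(0.05, 3) agrees with the exact stationary distribution of
# P_eps = 0.95 * P0 + 0.05 * np.tile(d, (m, 1)) up to O(0.05 ** 4)
```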

38.6.2 Asymptotic Expansions for Stationary Distributions of Singularly Perturbed MCDC

Let us now consider the case, where condition \(\mathbf{B}_1\) holds. We can assume that class \(\mathbb {X}^{(g)}\) includes \(m_g\) states, for \(g = 0, \ldots , h\), where \(m_g > 0\) for \(g = 1, \ldots , h\), while \(m_0 \ge 0\).

Let us denote by \(\rho _{g, 1}, \ldots \), \(\rho _{g, m_g}\) the eigenvalues of the stochastic matrices \(\mathbf {P}_{0, g}\), \(g = 1, \ldots , h\), and by \(\rho _{0, 1}, \ldots \), \(\rho _{0, m_0}\) the eigenvalues of the sub-stochastic matrix \(\mathbf {P}_{0, 0}\). We can assume that these eigenvalues are ordered by absolute value, i.e., \(|\rho _{g, 1}| \ge |\rho _{g, 2}| \ge \cdots \ge |\rho _{g, m_g}|\), for \(g = 0, \ldots , h\).

Condition \(\mathbf{B}_1\) is, in fact, equivalent to the following condition:

\(\mathbf{B}_3\)::

The phase space \(\mathbb {X} = \cup _{g = 0}^h \mathbb {X}^{(g)}\), where: (a) \(\mathbb {X}^{(g)}, g = 0, \ldots , h\) are non-intersecting subsets of \(\mathbb {X}\), (b) \(\mathbb {X}^{(g)}, g = 1, \ldots , h\) are non-empty, closed classes of states for the Markov chain \(X_{0, n}\) such that the inequalities \(\rho _{g, 1} = 1 > |\rho _{g, 2}| \ge \cdots \ge |\rho _{g, m_g}|, \, g = 1, \ldots , h\), hold, (c) \(\mathbb {X}^{(0)}\) is a class of states for the Markov chain \(X_{0, n}\) such that the inequalities \(|\rho _{0, 1}|, \ldots ,\) \(|\rho _{0, m_0}| < 1\) hold (if \(\mathbb {X}^{(0)}\) is a non-empty set).

The inequalities given in part (c) should be omitted from condition \(\mathbf{B}_3\), if \(\mathbb {X}^{(0)} = \emptyset \) and, thus, \(m_0 = 0\).

Condition \(\mathbf{B}_1\) implies that condition \(\mathbf{B}_3\) holds. This follows from the Perron–Frobenius theorem.
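The eigenvalue pattern required by condition \(\mathbf{B}_3\) is easy to illustrate numerically. In the following minimal sketch (all numerical values are illustrative assumptions, not taken from the paper), states 0–1 and 2–3 form two closed classes, while state 4 is transient:

```python
import numpy as np

# Illustrative block structure for condition B_3: states {0, 1} and
# {2, 3} form closed classes X^(1), X^(2); state 4 forms the transient
# class X^(0).  All numerical values are assumptions for illustration.
P0 = np.array([
    [0.7, 0.3, 0.0, 0.0, 0.0],
    [0.4, 0.6, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.2, 0.8, 0.0],
    [0.1, 0.1, 0.2, 0.1, 0.5],
])
for name, idx in [("X^(1)", [0, 1]), ("X^(2)", [2, 3]), ("X^(0)", [4])]:
    moduli = np.abs(np.linalg.eigvals(P0[np.ix_(idx, idx)]))
    print(name, np.round(np.sort(moduli)[::-1], 4))
# Each closed block P_{0,g} has leading eigenvalue 1 and the remaining
# moduli < 1; the transient block P_{0,0} has all moduli < 1.
```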

Condition \(\mathbf{B}_3\) implies that the following eigenvalue decomposition representations take place, for \(r, k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\) and \(n \ge 1\),

$$\begin{aligned} p^{(g)}_{0, rk}(n) = \pi ^{(g)}_{0, k} + \rho _{g, 2}^n \pi ^{(g)}_{0, rk}[2] + \cdots + \rho _{g, m_g}^n \pi ^{(g)}_{0, rk}[m_g], \end{aligned}$$
(38.64)

and, for \(r, k \in \mathbb {X}^{(0)}, n \ge 1\),

$$\begin{aligned} p^{(0)}_{0, rk}(n) = \rho _{0, 1}^n \pi ^{(0)}_{0, rk}[1] + \cdots + \rho _{0, m_0}^n \pi ^{(0)}_{0, rk}[m_0], \end{aligned}$$
(38.65)

where: (a) \(\bar{\pi }^{(g)}_0 = \langle \pi ^{(g)}_{0, k}, k \in \mathbb {X}^{(g)} \rangle \) is a distribution with all positive components, for \(g = 1, \ldots , h\), (b) \(\pi ^{(g)}_{0, rk}[l], r, k \in \mathbb {X}^{(g)}, l = 2, \ldots , m_g, g = 1, \ldots , h\) and \(\pi ^{(0)}_{0, rk}[l], r, k \in \mathbb {X}^{(0)}, l = 1, \ldots , m_0\) are some complex- or real-valued coefficients.

Obviously, relation (38.64) implies that probabilities \(p^{(g)}_{0, rk}(n) \rightarrow \pi ^{(g)}_{0, k}\) as \(n \rightarrow \infty \), for \(r, k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\). Thus, \(\bar{\pi }^{(g)}_0\) is the stationary distribution of the Markov chain \(X^{(g)}_{0, n}\), for \(g = 1, \ldots , h\).

In fact, condition \(\mathbf{B}_3\) is equivalent to condition \(\mathbf{B}_1\).

Indeed, as was mentioned above, relation (38.64) implies that \(p^{(g)}_{0, rk}(n) \rightarrow \pi ^{(g)}_{0, k}\) as \(n \rightarrow \infty \), for \(r, k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\). Thus, the probabilities \(p^{(g)}_{0, rk}(n) > 0, r, k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\), for all large enough n. This implies that \(\mathbb {X}^{(g)}\) is, for every \(g = 1, \ldots , h\), a communicating, aperiodic class of states for the Markov chain \(X_{0, n}\). Relation (38.65) also implies that \(p^{(0)}_{0, rk}(n) \rightarrow 0\) as \(n \rightarrow \infty \), for \(r, k \in \mathbb {X}^{(0)}\). This implies that \(\mathbb {X}^{(0)}\) is a transient class of states for the Markov chain \(X_{0, n}\).

Thus, condition \(\mathbf{B}_3\) implies that condition \(\mathbf{B}_1\) holds.

We again refer to the book [20], where one can find a description of an effective algorithm for finding the matrices \(\varvec{\varPi }^{(g)}_l = \Vert \pi ^{(g)}_{0, rk}[l] \Vert , l = 2, \ldots , m_g, g = 1, \ldots , h\) and \(\varvec{\varPi }^{(0)}_l = \Vert \pi ^{(0)}_{0, rk}[l] \Vert , l = 1, \ldots , m_0\).
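When the corresponding block is diagonalisable with simple eigenvalues, these matrices are the classical spectral projections built from matching right and left eigenvectors. The following minimal sketch illustrates this special case for one stochastic block (the matrix is an illustrative assumption; the algorithm of [20] also covers multiple eigenvalues):

```python
import numpy as np

# Sketch of the spectral representation P^n = sum_l rho_l^n Pi_l for a
# diagonalisable stochastic block with simple eigenvalues (illustrative
# assumption; the general algorithm is described in [20]).
P = np.array([[0.5, 0.5],
              [0.2, 0.8]])
rho, R = np.linalg.eig(P)            # right eigenvectors (columns of R)
L = np.linalg.inv(R)                 # rows of L: matching left eigenvectors
Pi = [np.outer(R[:, l], L[l, :]) for l in range(len(rho))]
n = 6
Pn = sum(rho[l]**n * Pi[l] for l in range(len(rho)))
print(np.allclose(np.real(Pn), np.linalg.matrix_power(P, n)))   # True
```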

Let us first consider the simpler case, where \(\mathbb {X}^{(0)} = \emptyset \).

Relation (38.64) implies that, in this case, the following relation holds, for any \(k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\) and \(n \ge 1\),

$$\begin{aligned} p_{0, \bar{d}, k}(n) = \pi _{0, \bar{d}, k} + \rho ^n_{g, 2} \pi ^{(g)}_{0, \bar{d}, k}[2] + \cdots + \rho ^n_{g, m_g} \pi ^{(g)}_{0,\bar{d}, k}[m_g], \end{aligned}$$
(38.66)

where, for \(k \in \mathbb {X}^{(g)}, l = 2, \ldots , m_g, g = 1, \ldots , h\),

$$\begin{aligned} \pi ^{(g)}_{0, \bar{d}, k}[l] = \sum _{r \in \mathbb {X}^{(g)}} d_r \pi ^{(g)}_{0, r k}[l]. \end{aligned}$$
(38.67)

Let us also define the coefficients, for \(k \in \mathbb {X}^{(g)}, g = 1, \ldots , h, n \ge 1\),

$$\begin{aligned} \tilde{\pi }^{(g)}_{0, \bar{d}, k}[n] = \left\{ \begin{array}{lll} d_k - \pi _{0, \bar{d}, k} + {\sum }_{l = 2}^{m_g} \pi ^{(g)}_{0, \bar{d}, k}[l] \frac{\rho _{g, l}}{1 - \rho _{g, l}} &{} \text {for} \ n = 1, \\ (-1)^{n -1} {\sum }_{l = 2}^{m_g} \pi ^{(g)}_{0, \bar{d}, k}[l] \frac{\rho _{g, l}^{n-1}}{(1 - \rho _{g, l})^{n }} &{} \text {for} \ n > 1. \end{array} \right. \end{aligned}$$
(38.68)

The following theorem takes place.

Theorem 38.4

Let condition \(\mathbf{B}_3\) (with \(\mathbb {X}^{(0)} = \emptyset \)) hold. Then, the following asymptotic expansions take place, for \(k \in \mathbb {X}^{(g)}, g = 1, \ldots , h, n \ge 1\),

$$\begin{aligned} \pi _{\varepsilon , k} = \pi _{0, \bar{d}, k} + \tilde{\pi }^{(g)}_{0, \bar{d}, k}[1] \varepsilon + \cdots + \tilde{\pi }^{(g)}_{0, \bar{d}, k}[n] \varepsilon ^{n} + O(\varepsilon ^{n+1}). \end{aligned}$$
(38.69)

The proof of the above theorem is similar to the proof of Theorem 38.3.
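Theorem 38.4 can also be checked numerically. The following sketch (all matrices are illustrative assumptions) assembles the zero- and first-order terms of the expansion (38.69) from the coefficients (38.67) and (38.68), computed via spectral projections, and compares them with a finite-difference approximation of \(\pi _{\varepsilon , k}\):

```python
import numpy as np

# Sketch verifying Theorem 38.4 on an illustrative example.  P0 is
# block-diagonal with two closed classes, so condition B_3 holds with
# X^(0) empty.  We assemble the first-order coefficients (38.67)-(38.68)
# from spectral projections and compare with finite differences.
P0 = np.zeros((4, 4))
P0[:2, :2] = [[0.7, 0.3], [0.4, 0.6]]      # closed class X^(1)
P0[2:, 2:] = [[0.5, 0.5], [0.2, 0.8]]      # closed class X^(2)
d = np.array([0.1, 0.2, 0.3, 0.4])         # damping distribution

def stationary(P):
    w, v = np.linalg.eig(P.T)
    x = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return x / x.sum()

pi0d = np.zeros(4)                         # zero-order terms pi_{0,d,k}
c1 = np.zeros(4)                           # coefficients (38.68) for n = 1
for idx in ([0, 1], [2, 3]):
    rho, R = np.linalg.eig(P0[np.ix_(idx, idx)])
    L = np.linalg.inv(R)
    order = np.argsort(-np.abs(rho))       # rho_{g,1} = 1 comes first
    rho, R, L = rho[order], R[:, order], L[order, :]
    pi_g = np.real(np.outer(R[:, 0], L[0, :])[0])  # stationary distribution
    pi0d[idx] = d[idx].sum() * pi_g        # f_g * pi^{(g)}_{0,k}
    c1[idx] = d[idx] - pi0d[idx]
    for l in range(1, len(idx)):           # subdominant eigenvalues
        pi_dk = d[idx] @ np.outer(R[:, l], L[l, :])   # coefficients (38.67)
        c1[idx] += np.real(pi_dk * rho[l] / (1 - rho[l]))

eps = 1e-6
pi_eps = stationary((1 - eps) * P0 + eps * np.tile(d, (4, 1)))
print(np.allclose((pi_eps - pi0d) / eps, c1, atol=1e-4))       # True
```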

The case where the set of transient states \(\mathbb {X}^{(0)} \ne \emptyset \) is much more complicated, because the corresponding stationary probabilities take much more complex forms in this case.

Theorem 38.5

Let condition \(\mathbf{B}_3\) hold. Then, the following asymptotic expansions, with coefficients given below in relations (38.86)–(38.89) and (38.92)–(38.93), take place, for \(k \in \mathbb {X}^{(g)}, g = 1, \ldots , h, n \ge 1\),

$$\begin{aligned} \pi _{\varepsilon , k} = \pi _{0, \bar{d}, k} + \tilde{\pi }^{(g)}_{0, \bar{d}, k}[1] \varepsilon + \cdots + \tilde{\pi }^{(g)}_{0, \bar{d}, k}[n] \varepsilon ^{n} + O(\varepsilon ^{n+1}), \end{aligned}$$
(38.70)

and, for \(k \in \mathbb {X}^{(0)}, n \ge 1\),

$$\begin{aligned} \pi _{\varepsilon , k} = \tilde{\pi }^{(0)}_{0, \bar{d}, k}[1] \varepsilon + \cdots + \tilde{\pi }^{(0)}_{0, \bar{d}, k}[n] \varepsilon ^{n} + O(\varepsilon ^{n+1}). \end{aligned}$$
(38.71)

Proof

Relation (38.28) can be rewritten in the following form, for \(k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\) and \(n \ge 1\),

$$\begin{aligned} p_{0, \bar{d}, k}(n)&= \sum _{i \in \mathbb {X}^{(g)}} d_i \pi ^{(g)}_{0, k} + \sum _{i \in \mathbb {X}^{(0)}} d_i \sum _{l = 1}^\infty \mathsf {P}_i \{ S_0 = l, X_{0, l} \in \mathbb {X}^{(g)} \} \pi ^{(g)}_{0, k} \nonumber \\&\quad + \sum _{i \in \mathbb {X}^{(g)}} d_i (p^{(g)}_{0, ik}(n) - \pi ^{(g)}_{0, k}) \nonumber \\&\quad + \sum _{i \in \mathbb {X}^{(0)}} d_i \sum _{l = 1}^n \sum _{r \in \mathbb {X}^{(g)}} \mathsf {P}_i \{ S_0 = l, X_{0, l} = r \} ( p^{(g)}_{0, rk}(n - l) - \pi ^{(g)}_{0, k}) \nonumber \\&\quad - \sum _{i \in \mathbb {X}^{(0)}} d_i \sum _{l = n + 1}^\infty \mathsf {P}_i \{ S_0 = l, X_{0, l} \in \mathbb {X}^{(g)} \} \pi ^{(g)}_{0, k} \nonumber \\&= f^{(g)}_{0, \bar{d}} \pi ^{(g)}_{0, k} + \sum _{i \in \mathbb {X}^{(g)}} d_i (p^{(g)}_{0, ik}(n) - \pi ^{(g)}_{0, k}) \nonumber \\&\quad + \sum _{i, s \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} d_i \sum _{l = 1}^n p^{(0)}_{0, is}(l-1) p_{0, sr} (p^{(g)}_{0, rk}(n - l) - \pi ^{(g)}_{0, k}) \nonumber \\&\quad - \sum _{i, s \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} d_i \sum _{l = n + 1}^\infty p^{(0)}_{0, is}(l-1) p_{0, sr} \pi ^{(g)}_{0, k}. \end{aligned}$$
(38.72)

By continuing the above calculations, we get, for \(k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\) and \(n = 1\),

$$\begin{aligned} p_{0, \bar{d}, k}(1)&= \pi _{0, \bar{d}, k} + \sum _{i \in \mathbb {X}^{(g)}} d_i (p^{(g)}_{0, ik}(1) - \pi ^{(g)}_{0, k}) \nonumber \\&\quad + \sum _{i \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} d_i p_{0, ir} ( \mathrm{I}(r = k) - \pi ^{(g)}_{0, k}) \nonumber \\&\quad - \sum _{i, s \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} d_i p_{0, sr} \pi ^{(g)}_{0, k} \sum _{l = 2}^\infty p^{(0)}_{0, is}(l-1), \end{aligned}$$
(38.73)

and, for \(k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\) and \(n \ge 2\),

$$\begin{aligned} p_{0, \bar{d}, k}(n)&= \pi _{0, \bar{d}, k} + \sum _{i \in \mathbb {X}^{(g)}} d_i (p^{(g)}_{0, ik}(n) - \pi ^{(g)}_{0, k}) \nonumber \\&\quad + \sum _{i \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} d_i p_{0, ir} (p^{(g)}_{0, rk}(n - 1) - \pi ^{(g)}_{0, k}) \nonumber \\&\quad + \sum _{i, s \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} d_i \sum _{l = 2}^{n - 1} p^{(0)}_{0, is}(l-1) p_{0, sr} (p^{(g)}_{0, rk}(n - l) - \pi ^{(g)}_{0, k}) \nonumber \\&\quad + \sum _{i, s \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} d_i p^{(0)}_{0, is}(n -1) p_{0, sr} (\mathrm{I} (k = r) - \pi ^{(g)}_{0, k}) \nonumber \\&\quad - \sum _{i, s \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} d_i \sum _{l = n + 1}^\infty p^{(0)}_{0, is}(l-1) p_{0, sr} \pi ^{(g)}_{0, k}. \end{aligned}$$
(38.74)

By using relations (38.27) and (38.28) and by substituting the decomposition expressions (given for the corresponding transition probabilities in relations (38.64) and (38.65)) into relations (38.73) and (38.74), we get the following relation, for \(k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\) and \(n = 1\),

$$\begin{aligned} p_{0, \bar{d}, k}(1) = \pi _{0, \bar{d}, k} + A_{\bar{d}, g, k}, \end{aligned}$$
(38.75)

where

$$\begin{aligned}&A_{\bar{d}, g, k} = \sum _{i \in \mathbb {X}^{(g)}} \sum _{u = 2}^{m_g} d_i \pi ^{(g)}_{0, ik}[u] \rho _{g, u} + \sum _{i \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} d_i p_{0, ir} (\mathrm{I}(r = k) - \pi ^{(g)}_{0, k}) \nonumber \\&\quad \quad \quad \quad \ - \sum _{i, s \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} \sum _{t = 1}^{m_0} d_i \pi ^{(0)}_{0, is}[t] p_{0, sr} \pi ^{(g)}_{0, k} \frac{\rho _{0, t}}{1 - \rho _{0, t}}, \end{aligned}$$
(38.76)

and, for \(k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\) and \(n \ge 2\),

$$\begin{aligned} p_{0, \bar{d}, k}(n)&= \pi _{0, \bar{d}, k} + \sum _{u = 2}^{m_g}B_{\bar{d}, g, k}(u) \rho _{g, u}^{n -2} \nonumber \\&\quad + \sum _{u = 2}^{m_g}C_{\bar{d}, g, k}(u) (n - 2) \rho _{g, u}^{n-2} + \sum _{t = 1}^{m_0}D_{\bar{d}, g, k}(t) \rho _{0, t}^{n-2}, \end{aligned}$$
(38.77)

where

$$\begin{aligned}&B_{\bar{d}, g, k}(u) = \sum _{i \in \mathbb {X}^{(g)}} d_i \pi ^{(g)}_{0, ik}[u] \rho _{g, u}^2 + \sum _{i \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} d_i p_{0, ir} \pi ^{(g)}_{0, rk}[u] \rho _{g, u} \nonumber \\&\quad \quad \quad \quad \quad \quad + \sum _{i, s \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} \sum _{t = 1}^{m_0} d_i \pi ^{(0)}_{0, is}[t] p_{0, sr} \pi ^{(g)}_{0, rk}[u] \mathrm{I}(\rho _{g, u} \ne \rho _{0, t}) \frac{\rho _{g, u}\rho _{0, t}}{\rho _{g, u} - \rho _{0, t}}, \end{aligned}$$
(38.78)
$$\begin{aligned} C_{\bar{d}, g, k}(u) = \sum _{i, s \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} \sum _{t = 1}^{m_0} d_i \pi ^{(0)}_{0, is}[t] p_{0, sr} \pi ^{(g)}_{0, rk}[u] \mathrm{I}(\rho _{g, u} = \rho _{0, t}) \rho _{g, u}, \end{aligned}$$
(38.79)

and

$$\begin{aligned}&D_{\bar{d}, g, k}(t) = \sum _{i, s \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} d_i \pi ^{(0)}_{0, is}[t] p_{0, sr} (\mathrm{I} (k = r) - \pi ^{(g)}_{0, k}) \, \rho _{0, t} \nonumber \\&\quad \quad \quad \quad \quad \quad - \sum _{i, s \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} \sum _{u = 2}^{m_g} d_i \pi ^{(0)}_{0, is}[t] p_{0, sr} \pi ^{(g)}_{0, rk}[u] \mathrm{I}(\rho _{g, u} \ne \rho _{0, t}) \frac{\rho _{g, u}\rho _{0, t}}{\rho _{g, u} - \rho _{0, t}} \nonumber \\&\quad \quad \quad \quad \quad \quad - \sum _{i, s \in \mathbb {X}^{(0)}} \sum _{r \in \mathbb {X}^{(g)}} d_i \pi ^{(0)}_{0, is}[t] p_{0, sr} \pi ^{(g)}_{0, k} \frac{\rho _{0, t}^{2}}{1 - \rho _{0, t}}. \end{aligned}$$
(38.80)

Two formulas are used, for \(n \ge 2\), in the above transformations,

$$\begin{aligned} \sum _{l = n}^\infty \rho _{0, t}^{l} = \rho _{0, t}^{n}\frac{1}{1 - \rho _{0, t}}, \end{aligned}$$
(38.81)

and

$$\begin{aligned} \sum _{l = 2}^{n-1}\rho _{0, t}^{l-1} \rho _{g, u}^{n -l} = \left\{ \begin{array}{lll} \rho _{g, u}\rho _{0, t}\frac{ \rho _{g, u}^{n-2} - \rho _{0, t}^{n-2}}{\rho _{g, u} - \rho _{0, t}} &{} \text {if} \ \rho _{g, u} \ne \rho _{0, t}, \\ (n - 2) \rho _{g, u}^{n-1} &{} \text {if} \ \rho _{g, u} = \rho _{0, t}. \end{array} \right. \end{aligned}$$
(38.82)
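Both are elementary geometric-series identities; the following quick numerical spot-check (with arbitrary illustrative values of the eigenvalues and of n) confirms them:

```python
# Numerical spot-check of (38.81) and (38.82); rho_t = 0.4, rho_u = 0.7
# and n = 9 are arbitrary illustrative values with |rho| < 1.
rt, ru, n = 0.4, 0.7, 9
tail = sum(rt**l for l in range(n, 300))             # truncated series
print(abs(tail - rt**n / (1 - rt)) < 1e-12)          # (38.81)
lhs = sum(rt**(l - 1) * ru**(n - l) for l in range(2, n))
print(abs(lhs - ru * rt * (ru**(n - 2) - rt**(n - 2)) / (ru - rt)) < 1e-12)
lhs_eq = sum(ru**(l - 1) * ru**(n - l) for l in range(2, n))
print(abs(lhs_eq - (n - 2) * ru**(n - 1)) < 1e-12)   # equal-eigenvalue case
```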

Relations (38.16) and (38.75)–(38.82) imply that the following relation holds, for \(k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\),

$$\begin{aligned} \pi _{\varepsilon , k}&= \varepsilon \sum _{n = 0}^{\infty } p_{0, \bar{d}, k}(n)(1- \varepsilon )^n \nonumber \\&= \varepsilon d_k + \varepsilon (1 - \varepsilon ) (\pi _{0, \bar{d}, k} + A_{\bar{d}, g, k}) \nonumber \\&\quad + \varepsilon (1- \varepsilon )^2 \Bigg (\sum _{n = 2}^\infty \bigg (\pi _{0, \bar{d}, k} + \sum _{u = 2}^{m_g}B_{\bar{d}, g, k}(u) \rho _{g, u}^{n -2} \nonumber \\&\quad + \sum _{u = 2}^{m_g}C_{\bar{d}, g, k}(u) (n - 2) \rho _{g, u}^{n-2} + \sum _{t = 1}^{m_0}D_{\bar{d}, g, k}(t) \rho _{0, t}^{n-2} \bigg ) (1- \varepsilon )^{n-2} \Bigg ) \nonumber \\&= \pi _{0, \bar{d}, k} + \varepsilon (d_k - \pi _{0, \bar{d}, k}) + \varepsilon (1 - \varepsilon )A_{\bar{d}, g, k} \nonumber \\&\quad + \sum _{u = 2}^{m_g} B_{\bar{d}, g, k}(u)\frac{\varepsilon (1- \varepsilon )^2 }{1 - \rho _{g, u}(1 - \varepsilon )} \nonumber \\&\quad + \sum _{u = 2}^{m_g} C_{\bar{d}, g, k}(u) \frac{\rho _{g, u}\varepsilon (1 - \varepsilon )^3}{(1 - \rho _{g, u}(1 - \varepsilon ))^2} + \sum _{t = 1}^{m_0} D_{\bar{d}, g, k}(t) \frac{\varepsilon (1- \varepsilon )^2 }{1 - \rho _{0, t}(1 - \varepsilon )}. \end{aligned}$$
(38.83)

Functions \(\varepsilon (1-\varepsilon )^2(a + b\varepsilon )^{-1}, \varepsilon \in [0, 1]\) and \(b\varepsilon (1 - \varepsilon )^3(a + b\varepsilon )^{-2}, \varepsilon \in [0, 1]\) admit, for any complex numbers \(a \ne 0\) and b, the following Taylor asymptotic expansions, for every \(n \ge 3\) and \(\varepsilon \rightarrow 0\),

$$\begin{aligned}&\varepsilon (1- \varepsilon )^2(a + b\varepsilon )^{-1} = a^{-1}\varepsilon - a^{-1}(a^{-1}b + 2) \varepsilon ^2 + a^{-1}(a^{-2} b^2 + 2a^{-1}b +1) \varepsilon ^3 \nonumber \\&\quad \quad + \cdots + (-1)^{n-1} a^{- (n - 2)}b^{n - 3}(a^{-2} b^2 + 2a^{-1}b +1) \varepsilon ^n + O(\varepsilon ^{n+1}), \end{aligned}$$
(38.84)

and, for every \(n \ge 4\) and \(\varepsilon \rightarrow 0\),

$$\begin{aligned}&b\varepsilon (1 - \varepsilon )^3 (a + b\varepsilon )^{-2} = a^{-2}b\varepsilon - a^{-2}b (2a^{-1}b + 3) \varepsilon ^2 \nonumber \\&\quad \quad + a^{-2}b(3a^{-2}b^2 + 2 \cdot 3 a^{-1}b +3) \varepsilon ^3 \nonumber \\&\quad \quad - a^{-2}b (4a^{-3}b^3 + 3 \cdot 3 a^{-2}b^2 + 2 \cdot 3 a^{-1} b +1)\varepsilon ^4 \nonumber \\&\quad \quad + \cdots + (-1)^{n-1} a^{-(n-2)}b^{n -3}(n a^{- 3}b^{3} + (n-1) \cdot 3 a^{-2}b^2 \nonumber \\&\quad \quad + (n-2) \cdot 3 a^{-1} b + (n -3)) \varepsilon ^{n} + O(\varepsilon ^{n+1}). \end{aligned}$$
(38.85)
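These expansions can be verified symbolically. The following sketch uses SymPy to compare the series coefficients of the two rational functions with the coefficients of \(\varepsilon ^3\) claimed in (38.84) and of \(\varepsilon ^4\) claimed in (38.85) (a and b are generic symbols, \(a \ne 0\)):

```python
import sympy as sp

# Symbolic check of the expansions (38.84) and (38.85); a, b are generic
# symbols with a != 0 (an illustrative sketch, not part of the proof).
a, b, eps = sp.symbols('a b epsilon')
f1 = eps * (1 - eps)**2 / (a + b*eps)
f2 = b * eps * (1 - eps)**3 / (a + b*eps)**2
s1 = sp.series(f1, eps, 0, 5).removeO().expand()
s2 = sp.series(f2, eps, 0, 5).removeO().expand()
# coefficients claimed in (38.84) for eps^3 and in (38.85) for eps^4
c1_3 = a**-1 * (a**-2 * b**2 + 2 * a**-1 * b + 1)
c2_4 = -a**-2 * b * (4 * a**-3 * b**3 + 9 * a**-2 * b**2 + 6 * a**-1 * b + 1)
print(sp.simplify(s1.coeff(eps, 3) - c1_3) == 0)   # True
print(sp.simplify(s2.coeff(eps, 4) - c2_4) == 0)   # True
```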

By expanding the rational functions of \(\varepsilon \) appearing in the expression on the right side of relation (38.83) in asymptotic Taylor expansions and gathering coefficients for \(\varepsilon ^n\), we get the asymptotic Taylor expansions (38.70) for stationary probabilities \(\pi _{\varepsilon , k}\), for \(k \in \mathbb {X}^{(g)}, g = 1, \ldots , h\), with coefficients given by the following relations,

$$\begin{aligned}&\tilde{\pi }^{(g)}_{0, \bar{d}, k}[1] = (d_k - \pi _{0, \bar{d}, k}) + A_{\bar{d}, g, k} + \sum _{u = 2}^{m_g} B_{\bar{d}, g, k}(u)\frac{1}{1 - \rho _{g, u}} \nonumber \\&\quad \quad \quad \quad \quad \quad + \sum _{u = 2}^{m_g} C_{\bar{d}, g, k}(u) \frac{\rho _{g, u}}{(1 - \rho _{g, u})^{2}} + \sum _{t = 1}^{m_0} D_{\bar{d}, g, k}(t) \frac{1}{1 - \rho _{0, t}}, \end{aligned}$$
(38.86)
$$\begin{aligned}&\tilde{\pi }^{(g)}_{0, \bar{d}, k}[2] = - \sum _{u = 2}^{m_g} B_{\bar{d}, g, k}(u)\frac{2 - \rho _{g, u}}{(1 - \rho _{g, u})^{2}}-A_{\bar{d}, g, k} \nonumber \\&\quad \quad \quad \quad \quad \quad - \sum _{u = 2}^{m_g} C_{\bar{d}, g, k}(u) \frac{\rho _{g, u}(3 - \rho _{g, u})}{(1 - \rho _{g, u})^{3}} - \sum _{t = 1}^{m_0} D_{\bar{d}, g, k}(t) \frac{2 - \rho _{0, t}}{(1 - \rho _{0, t})^{2}}, \end{aligned}$$
(38.87)
$$\begin{aligned}&\tilde{\pi }^{(g)}_{0, \bar{d}, k}[3] = \sum _{u = 2}^{m_g} B_{\bar{d}, g, k}(u)\frac{1}{(1 - \rho _{g, u})^{3}} \nonumber \\&\quad \quad \quad \quad \quad \quad + \sum _{u = 2}^{m_g} C_{\bar{d}, g, k}(u) \frac{3 \rho _{g, u}}{(1 - \rho _{g, u})^{4}} + \sum _{t = 1}^{m_0} D_{\bar{d}, g, k}(t) \frac{1}{(1 - \rho _{0, t})^{3}}, \end{aligned}$$
(38.88)

and, for \(n \ge 4\),

$$\begin{aligned}&\tilde{\pi }^{(g)}_{0, \bar{d}, k}[n] = (-1)^{n -1} \bigg (\sum _{u = 2}^{m_g} B_{\bar{d}, g, k}(u) \frac{\rho _{g, u}^{n - 3}}{(1 - \rho _{g, u})^{n}} \nonumber \\&\quad \quad \quad \quad \quad \quad + \sum _{u = 2}^{m_g} C_{\bar{d}, g, k}(u) \frac{\rho ^{n -3}_{g, u}(3 \rho _{g, u} + n -3)}{(1 - \rho _{g, u})^{n +1}} \nonumber \\&\quad \quad \quad \quad \quad \quad + \sum _{t = 1}^{m_0} D_{\bar{d}, g, k}(t) \frac{\rho _{0, t}^{n - 3}}{(1 - \rho _{0, t})^{n}} \bigg ). \end{aligned}$$
(38.89)

Also, the following relation takes place, for \(k \in \mathbb {X}^{(0)}\),

$$\begin{aligned} \pi _{\varepsilon , k}&= \varepsilon \sum _{n = 0}^{\infty } p_{0, \bar{d}, k}(n)(1- \varepsilon )^n \nonumber \\&= \varepsilon d_k + \varepsilon \sum _{n = 1}^\infty \bigg (\sum _{r \in \mathbb {X}^{(0)}} d_r \sum _{t = 1}^{m_0} \pi ^{(0)}_{0, rk}[t] \rho _{0, t}^n \bigg ) (1 - \varepsilon )^n \nonumber \\&= \varepsilon d_k + \sum _{t = 1}^{m_0} E_{\bar{d}, 0, k}(t) \frac{ \rho _{0, t}\varepsilon (1-\varepsilon )}{1 - \rho _{0,t}(1- \varepsilon )}, \end{aligned}$$
(38.90)

where, for \(k \in \mathbb {X}^{(0)}, t = 1, \ldots , m_0\),

$$\begin{aligned} E_{\bar{d}, 0, k}(t) = \sum _{r \in \mathbb {X}^{(0)}} d_r \pi ^{(0)}_{0, rk}[t]. \end{aligned}$$
(38.91)

By expanding the rational functions of \(\varepsilon \) appearing in the expression on the right side of relation (38.90) in asymptotic Taylor expansions and gathering coefficients for \(\varepsilon ^n\), we get the asymptotic Taylor expansions (38.71) for stationary probabilities \(\pi _{\varepsilon , k}\), for \(k \in \mathbb {X}^{(0)}\), with coefficients given by the following relations,

$$\begin{aligned} \tilde{\pi }^{(0)}_{0, \bar{d}, k}[1] = d_k + \sum _{t = 1}^{m_0} E_{\bar{d}, 0, k}(t) \frac{\rho _{0, t}}{1 - \rho _{0,t}}, \end{aligned}$$
(38.92)

and, for \(n \ge 2\),

$$\begin{aligned} \tilde{\pi }^{(0)}_{0, \bar{d}, k}[n] = (-1)^{n-1} \sum _{t = 1}^{m_0} E_{\bar{d}, 0, k}(t) \frac{\rho _{0, t}^{n-1}}{(1 - \rho _{0,t})^{n}}. \end{aligned}$$
(38.93)

The proof is complete. \(\square \)
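As a final numerical sanity check of the transient part of Theorem 38.5, consider the following sketch (the matrix and the damping distribution are illustrative assumptions). With a single transient state, \(\mathbf {P}_{0, 0}\) reduces to a scalar, \(\pi ^{(0)}_{0, rk}[1] = 1\), and the coefficient (38.92) reduces to \(d_k(1 + \rho _{0, 1}/(1 - \rho _{0, 1}))\), which \(\pi _{\varepsilon , k}/\varepsilon \) should approach as \(\varepsilon \rightarrow 0\):

```python
import numpy as np

# Numerical check of the transient-state expansion (38.71), (38.92) on
# an illustrative example.  States 0-1 and 2-3 form closed classes;
# state 4 is the only transient state, so P_{0,0} is the scalar P0[4, 4]
# and the single coefficient pi^{(0)}_{0,rk}[1] equals 1.
P0 = np.array([
    [0.7, 0.3, 0.0, 0.0, 0.0],
    [0.4, 0.6, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.2, 0.8, 0.0],
    [0.2, 0.1, 0.1, 0.2, 0.4],
])
d = np.full(5, 0.2)                        # uniform damping distribution

def stationary(P):
    w, v = np.linalg.eig(P.T)
    x = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return x / x.sum()

k, rho = 4, P0[4, 4]                       # transient state and rho_{0,1}
coef1 = d[k] + d[k] * rho / (1 - rho)      # coefficient (38.92), here 1/3
for eps in [1e-2, 1e-3, 1e-4]:
    pi_eps = stationary((1 - eps) * P0 + eps * np.tile(d, (5, 1)))
    print(eps, pi_eps[k] / eps, coef1)     # ratio tends to coef1
```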

38.6.3 Conclusion

The results of the present paper extend the results of paper [3] related to rates of convergence and asymptotic expansions of stationary distributions \(\bar{\pi }_{\varepsilon }\) for the model with singular perturbations. In paper [3], the case where the class of transient states \(\mathbb {X}^{(0)}\) is empty was treated. In the present paper, the general singular case, where the class of transient states may be non-empty, is considered. This extension significantly complicates the corresponding asymptotic analysis.

In principle, the asymptotic expansions given in Theorems 38.3–38.5 can be improved and given in a variant with explicit upper bounds for the remainders, \(| O(\varepsilon ^{n+1}) | \le G_{n+1} \varepsilon ^{n+1}, \varepsilon \in (0, \varepsilon _{n+1}]\), with explicit formulas for the constants \(G_{n+1}\) and \( \varepsilon _{n+1}\). We are going to present the corresponding results, as well as results of experimental numerical studies and applications to concrete information networks, in the near future.