1 Introduction

The theory of large deviations originated in the work of H. Cramér [2] and deals with asymptotic estimates for the probabilities of rare events. The main problem of large deviations theory is the construction of the rate functional that quantifies the probabilities of rare events. The method used in the majority of classical works is based on a change of measure and the application of a variational formula to the cumulant of the process under study. Different aspects and applications of this problem have been investigated by many mathematicians. Since we discuss Markov processes with independent increments, it is natural to refer the reader to the fundamental works [3, 16] and [7].

Another approach arises in the works [8] and [1] and is applied to the large deviations problem in [6]. It is based on the asymptotic analysis of the nonlinear Hamilton-Jacobi equation corresponding to the process under study. The solution of the limit nonlinear Hamilton-Jacobi equation is then given by a variational formula that defines the rate functional of the prelimit process. The main difficulty here is to prove the uniqueness of the solution of the limit nonlinear equation.

The technical problems connected with the application of the latter method to different classes of Markov processes are solved in the monograph [5]. The main idea of that monograph is the following.

Let η(t), t ≥ 0, be a Markov process in the Euclidean space R, defined by its linear generator L, and let \(\varphi (u) \in \mathcal{B}_{\mathbf{R}}\) be a test function. In contrast to the classical martingale characterization of Markov processes

$$\displaystyle{\mu _{t} =\varphi (\eta (t)) -\varphi (\eta (0)) -\int _{0}^{t}\mathbf{L}\varphi (\eta (s))ds,}$$

the large deviations theory is based on the exponential martingale characterization (see [5, Chap. 1]). Namely,

$$\displaystyle{\tilde{\mu }_{t} =\exp \{\varphi (\eta (t)) -\varphi (\eta (0)) -\int _{0}^{t}\mathbf{H}\varphi (\eta (s))ds\}}$$

is a martingale.

The exponential (nonlinear) operator H is connected with the linear generator L of the Markov process η(t), t ≥ 0, in the following way:

$$\displaystyle{\mathbf{H}\varphi (u) = {e}^{-\varphi (u)}\mathbf{L}{e}^{\varphi (u)},\quad {e}^{\varphi (u)} \in \mathcal{D}(\mathbf{L}).}$$

The large deviations problem may be formulated as a limit theorem in the scheme of series with a small series parameter \(\varepsilon \rightarrow 0(\varepsilon > 0)\). Namely (compare with [5, Chap. 1])

$$\displaystyle{{\mathbf{H}{}^{\varepsilon }\varphi }^{\varepsilon } \rightarrow \mathbf{H}\varphi,{\quad \varphi }^{\varepsilon } \rightarrow \varphi,\quad \varepsilon \rightarrow 0.}$$

Here by definition

$$\displaystyle{{\mathbf{H}}^{\varepsilon }\varphi (u):= {e}^{-\varphi (u)/\varepsilon }\varepsilon {\mathbf{L}}^{\varepsilon }{e}^{\varphi (u)/\varepsilon }.}$$

The generator \({\mathbf{L}}^{\varepsilon },\varepsilon > 0,\) defines a Markov process \({x}^{\varepsilon }(t),t \geq 0,\) in the scheme of series under an appropriate scaling transform.

Example 1.

The asymptotically small diffusion process is given by \(\sqrt{\varepsilon }\sigma w(t),t \geq 0\), where w(t), t ≥ 0, is the standard Brownian motion process. The generator of such a process is the following:

$$\displaystyle{{\mathbf{L}}^{\varepsilon }\varphi (u) =\varepsilon \frac{1} {2}B\varphi ^{\prime\prime}(u),\quad B {=\sigma }^{2},\quad \varphi ^{\prime\prime}(u):= {\partial }^{2}\varphi (u)/\partial {u}^{2}.}$$

The exponential generator of the asymptotically small diffusion process may be easily calculated:

$$\displaystyle{{\mathbf{H}}^{\varepsilon }\varphi (u) = \frac{1} {2}B{[\varphi ^{\prime}(u)]}^{2} +\varepsilon \frac{1} {2}B\varphi ^{\prime\prime}(u).}$$

Hence, the limit exponential operator is represented as

$$\displaystyle{ \mathbf{H}\varphi (u) = \frac{1} {2}B{[\varphi ^{\prime}(u)]}^{2}. }$$
(1)
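This calculation can also be checked symbolically. The following minimal sketch (in Python, assuming the SymPy library is available) expands \({\mathbf{H}}^{\varepsilon }\varphi = {e}^{-\varphi /\varepsilon }\varepsilon {\mathbf{L}}^{\varepsilon }{e}^{\varphi /\varepsilon }\) for the small-diffusion generator and recovers the limit operator (1) as \(\varepsilon \rightarrow 0\); it is only an illustrative check, not part of the argument.

```python
# Symbolic check of Example 1 (illustrative sketch; assumes SymPy is available).
import sympy as sp

u, eps, B = sp.symbols('u epsilon B', positive=True)
phi = sp.Function('phi')(u)

def L_eps(f):
    # Linear generator of the asymptotically small diffusion: L^eps f = eps*(B/2)*f''.
    return eps * sp.Rational(1, 2) * B * sp.diff(f, u, 2)

# Exponential generator: H^eps phi = exp(-phi/eps) * eps * L^eps exp(phi/eps).
H_eps = sp.simplify(sp.exp(-phi / eps) * eps * L_eps(sp.exp(phi / eps)))
print(sp.expand(H_eps))          # (B/2)*phi'(u)**2 + eps*(B/2)*phi''(u)

# Limit exponential operator (1): H phi = (B/2) * [phi'(u)]**2.
H_limit = sp.expand(H_eps).subs(eps, 0)
print(sp.simplify(H_limit - sp.Rational(1, 2) * B * sp.diff(phi, u)**2))  # 0
```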

Remark 1.

The exponential operator (1) in the Euclidean space \(\mathbf{R}^{d}\), d ≥ 2, is represented by the quadratic form

$$\displaystyle{\mathbf{H}\varphi (u) = \frac{1} {2}\sum _{k,r=1}^{d}B_{ kr}\varphi ^{\prime}_{k}(u)\varphi ^{\prime}_{r}(u),\quad \varphi ^{\prime}_{k}(u):= \partial \varphi (u)/\partial u_{k},}$$

where \(B = [B_{kr};\ 1 \leq k,r \leq d]\) is the variance matrix of w(t).

To simplify the notation, we present all the following results in R.

The aim of our investigation is the asymptotic analysis of the large deviations problem for the random evolutions in the scheme of asymptotically small diffusion.

First (Sect. 2), the large deviations problem is considered for processes with locally independent increments under the scaling proposed by A.A. Mogulskii [12]:

$$\displaystyle{{ \eta }^{\varepsilon }(t) {=\varepsilon }^{2}\eta (t{/\varepsilon }^{3}),\quad t \geq 0,\quad \varepsilon > 0. }$$
(2)

The generator of the Markov process (2) is given by

$$\displaystyle{{ \varGamma }^{\varepsilon }\varphi (u) {=\varepsilon }^{-3}\int _{ \mathbf{R}}[\varphi (u {+\varepsilon }^{2}v) -\varphi (u)]\varGamma (u,dv),\quad u \in \mathbf{R},\quad \varphi (u) \in \mathcal{B}_{\mathbf{ R}}. }$$
(3)

Throughout, we assume that the Lévy measure Γ(u, dv) satisfies the condition

$$\displaystyle{ \int _{\mathbf{R}}{e}^{av}\varGamma (u,dv) < \infty,\quad a > 0,\quad u \in \mathbf{R}. }$$
(4)

In Sect. 3 the large deviations problem is considered for random evolution processes with Markov switching [9, Chap. 2]. The scheme of asymptotically small diffusion is studied under additional balance conditions (local and total).

The large deviations problem in the scheme of phase merging is investigated in Sect. 4.

2 Processes with Locally Independent Increments

In this section, for simplicity, we consider compound Poisson processes defined by the generator (3) under condition (4).

The local balance condition is formulated as follows:

Λ B::

\(b(u):=\int _{\mathbf{R}}v\varGamma (u,dv) \equiv 0.\)

The main part of the asymptotic representation of the generator (3) on smooth enough test functions is

$$\displaystyle{{\varGamma }^{\varepsilon }\varphi (u) =\varepsilon \frac{1} {2}B(u)\varphi ^{\prime\prime}(u) {+\varepsilon \delta }^{\varepsilon }(u)\varphi (u),}$$

where

$$\displaystyle{B(u) =\int _{\mathbf{R}}{v}^{2}\varGamma (u,dv)}$$

and the negligible term converges to zero uniformly in u for functions \(\varphi (u) \in {C}^{3}(\mathbf{R})\):

$$\displaystyle{ {\vert \delta }^{\varepsilon }(u)\varphi (u)\vert \rightarrow 0,\ \varepsilon \rightarrow 0. }$$
(5)

The large deviations problem for the processes (2) may be solved using the limit approximation of the exponential generator [5, Part 1]:

$$\displaystyle{{\mathbf{H}}^{\varepsilon }\varphi (u) = {e{}^{-\varphi (u)/\varepsilon }\varepsilon \varGamma }^{\varepsilon }{e}^{\varphi (u)/\varepsilon } {=\varepsilon }^{-2}\int _{ \mathbf{R}}[{e}^{{\varDelta }^{\varepsilon }\varphi } - 1]\varGamma (u,dv),}$$
$$\displaystyle{{\varDelta }^{\varepsilon }\varphi:{=\varepsilon }^{-1}[\varphi (u {+\varepsilon }^{2}v) -\varphi (u)] =\varepsilon v\varphi ^{\prime}(u) {+\varepsilon \delta }^{\varepsilon }\varphi (u).}$$

Hence, due to the Λ B condition,

$$\displaystyle{{\mathbf{H}}^{\varepsilon }\varphi (u) {=\varepsilon }^{-2}\int _{ \mathbf{R}}[\varepsilon v\varphi ^{\prime}(u) {+\varepsilon }^{2}\frac{1} {2}{v}^{2}{[\varphi ^{\prime}(u)]}^{2}]\varGamma (u,dv) {+\delta }^{\varepsilon }(u)\varphi (u) =}$$
$$\displaystyle{= \frac{1} {2}B(u){[\varphi ^{\prime}(u)]}^{2} {+\delta }^{\varepsilon }(u)\varphi (u)}$$

with the negligible term (5).

Conclusion (compare with [12]): The limit exponential operator for processes with locally independent increments in the scheme of asymptotically small diffusion is given by

$$\displaystyle{ \mathbf{H}\varphi (u) = \frac{1} {2}B(u){[\varphi ^{\prime}(u)]}^{2}. }$$
(6)
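The scaling (2) may also be illustrated numerically. The following minimal Monte Carlo sketch (Python/NumPy, with an assumed state-independent jump measure: unit intensity and standard normal jumps, so that \(b(u) \equiv 0\) and \(B(u) \equiv 1\)) simulates \({\eta }^{\varepsilon }(t) {=\varepsilon }^{2}\eta (t{/\varepsilon }^{3})\) and checks that its variance behaves like \(\varepsilon Bt\), in agreement with the small-diffusion form of the generator.

```python
# Monte Carlo illustration of the Mogulskii scaling (2); a sketch under assumed
# jump characteristics (unit intensity, standard normal jumps: b = 0, B = 1).
import numpy as np

rng = np.random.default_rng(0)

def scaled_compound_poisson(t, eps, rate=1.0, n_paths=20_000):
    """Sample eta^eps(t) = eps^2 * eta(t / eps^3) for a compound Poisson eta."""
    counts = rng.poisson(rate * t / eps**3, size=n_paths)   # number of jumps
    # The sum of `counts` i.i.d. N(0, 1) jumps is distributed as N(0, counts).
    totals = rng.normal(0.0, np.sqrt(counts))
    return eps**2 * totals

t = 1.0
for eps in (0.2, 0.1, 0.05):
    sample = scaled_compound_poisson(t, eps)
    # Under the local balance condition the scaled process is an asymptotically
    # small diffusion: Var eta^eps(t) ~ eps * B * t, i.e. sample.var()/eps ~ t.
    print(eps, sample.mean(), sample.var() / eps)
```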

3 Random Evolutions in the Scheme of Ergodic Phase Merging

In this section we investigate random evolutions with locally independent increments and Markov switching. We note that random evolutions with switching are also studied in Chap. 11 of [5] by the classical methods of averaging and homogenization. That approach involves perturbed PDE operators and perturbed test functions and goes back to the works [11, 13]. The recent monographs [14, 15] contain a large bibliography on this problem. An application of this method to the nonlinear case may also be found in [4]. This approach is important for infinite-dimensional state space models such as interacting particle systems or stochastic PDEs. However, in that setting many additional problems appear: the correct description of the functional space for the solutions, the domain of the infinitesimal operators, etc.

We use the generators of Markov processes with a locally compact vector state space (see [9] for more details). This simplifies the analysis because the generators are defined for all bounded measurable functions. We lose some generality, but we can present explicit algorithms for verifying the convergence conditions and calculating the limit generators. This approach is important for finite-dimensional models arising in the theory of random evolutions in \(\mathbf{R}^{d}\), queueing theory, etc.

The Markov random evolution process in the scheme of series with a small series parameter \(\varepsilon \rightarrow 0(\varepsilon > 0)\) is considered as the stochastic additive functional [9, Sect. 3.4.2]:

$$\displaystyle{{ \xi }^{\varepsilon }(t) =\xi _{0} +\int _{ 0}^{t}{\eta }^{\varepsilon }(ds;x(s{/\varepsilon }^{2})),\quad t \geq 0 }$$
(7)

in the case of local balance condition or

$$\displaystyle{{ \xi }^{\varepsilon }(t) =\xi _{0} +\int _{ 0}^{t}{\eta }^{\varepsilon }(ds;x(s{/\varepsilon }^{3})),\quad t \geq 0 }$$
(8)

in the case of total balance condition.

The family of processes with locally independent increments \({\eta }^{\varepsilon }(t;x),\) t ≥ 0, x ∈ E, is determined by the generators

$$\displaystyle{{ \varGamma }^{\varepsilon }(x)\varphi (u) {=\varepsilon }^{-3}\int _{ \mathbf{R}}[\varphi (u {+\varepsilon }^{2}v) -\varphi (u)]\varGamma (u,dv;x),\quad \varphi (u) \in \mathcal{B}_{\mathbf{ R}}. }$$
(9)

The switching Markov process x(t), t ≥ 0 is given on the standard phase space \((E,\mathcal{E})\) by the generator

$$\displaystyle{ Q\varphi (x) = q(x)\int _{E}P(x,dy)[\varphi (y) -\varphi (x)],\quad \varphi (x) \in \mathcal{B}_{E}. }$$
(10)

The random evolution process is considered as the two-component Markov process \({\xi }^{\varepsilon }(t),{x}^{\varepsilon }(t):= x(t{/\varepsilon }^{2}),t \geq 0,\) given by the generator [9, Sect. 5.3.2]

$$\displaystyle{ \mathbf{L}_{\varLambda }^{\varepsilon }\varphi (u,x) = [{\varepsilon }^{-2}Q {+\varGamma }^{\varepsilon }(x)]\varphi (u,x) }$$
(11)

in the case of the local balance condition or as the two-component Markov process \({\xi }^{\varepsilon }(t),{x}^{\varepsilon }(t):= x(t{/\varepsilon }^{3}),t \geq 0,\) given by the generator

$$\displaystyle{ \mathbf{L}_{T}^{\varepsilon }\varphi (u,x) = [{\varepsilon }^{-3}Q {+\varGamma }^{\varepsilon }(x)]\varphi (u,x) }$$
(12)

in the case of the total balance condition.

The main assumption in the scheme of ergodic phase merging is the uniform ergodicity of the switching Markov process x(t).

EA::

There exists a stationary distribution π(dx) on \((E,\mathcal{E})\), which defines the projector

$$\displaystyle{\varPi \varphi (x):=\int _{E}\pi (dx)\varphi (x),\quad \varphi (x) \in \mathcal{B}_{E}}$$

on the null-space of the generator Q:

$$\displaystyle{\varPi Q = Q\varPi = 0.}$$

The main assumption EA ensures that the potential operator \(R_{0}\) exists:

$$\displaystyle{QR_{0} = R_{0}Q =\varPi -I.}$$

So, the Poisson equation

$$\displaystyle{Q\varphi (x) +\psi (x) = 0,\quad \varPi \psi (x) = 0}$$

may be solved as follows:

$$\displaystyle{\varphi (x) = R_{0}\psi (x),\quad \varPi \varphi (x) = 0.}$$
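On a finite phase space all these objects can be computed directly. The following minimal numerical sketch (Python/NumPy, with an assumed three-state generator Q of our own choosing) computes the potential operator as the deviation matrix \(R_{0} = {(\varPi -Q)}^{-1}-\varPi \), one standard finite-dimensional representation, verifies the relations \(QR_{0} = R_{0}Q =\varPi -I\), and solves the Poisson equation.

```python
# Potential operator and Poisson equation for a finite ergodic generator Q
# (illustrative sketch with an assumed 3-state switching generator).
import numpy as np

Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 2.0,  1.0, -3.0]])
n = Q.shape[0]

# Stationary distribution pi: pi Q = 0, sum(pi) = 1.
A = np.vstack([Q.T, np.ones(n)])
pi = np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)[0]

Pi = np.outer(np.ones(n), pi)                 # projector onto the null-space of Q
R0 = np.linalg.inv(Pi - Q) - Pi               # potential (deviation) matrix

print(np.allclose(Q @ R0, Pi - np.eye(n)))    # Q R0 = Pi - I
print(np.allclose(R0 @ Q, Pi - np.eye(n)))    # R0 Q = Pi - I
print(np.allclose(Pi @ R0, 0.0))              # Pi R0 = 0

# Poisson equation Q phi + psi = 0 with Pi psi = 0: phi = R0 psi, Pi phi = 0.
psi = np.array([1.0, -1.0, 0.5])
psi = psi - Pi @ psi                          # center psi so that Pi psi = 0
phi = R0 @ psi
print(np.allclose(Q @ phi + psi, 0.0), np.allclose(Pi @ phi, 0.0))
```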

The scheme of asymptotically small diffusion is considered under an additional balance condition (local or total):

Λ B::

\(b(u;x):=\int _{\mathbf{R}}v\varGamma (u,dv;x) \equiv 0.\)

TB::

\(b(u):=\int _{E}\pi (dx)b(u;x) \equiv 0.\)

Lemma 1 ([10]).

The generator (11) of the random evolution (7) admits the following asymptotic representation:

$$\displaystyle{\mathbf{L}_{\varLambda }^{\varepsilon }\varphi (u,x) = [{\varepsilon }^{-2}Q +\varepsilon \mathbf{B}(x)]\varphi (u,x) {+\delta }^{\varepsilon }(u,x)\varphi (u),}$$
$$\displaystyle{\mathbf{B}(x)\varphi (u) = \frac{1} {2}B(u;x)\varphi ^{\prime\prime}(u),\quad B(u;x) =\int _{\mathbf{R}}{v}^{2}\varGamma (u,dv;x)}$$

under the local balance condition ΛB.

The generator (12) of the random evolution (8) admits the following asymptotic representation:

$$\displaystyle{\mathbf{L}_{T}^{\varepsilon }\varphi (u,x) = [{\varepsilon }^{-3}Q {+\varepsilon }^{-1}\varGamma (x) +\varepsilon \mathbf{B}(x)]\varphi (u,x) {+\delta }^{\varepsilon }(u,x)\varphi (u),}$$

under the total balance condition TB, and the negligible terms converge to zero uniformly in u, x for functions \(\varphi (u) \in {C}^{3}(\mathbf{R})\):

$$\displaystyle{{\vert \delta }^{\varepsilon }(u,x)\varphi (u)\vert \rightarrow 0.}$$

The large deviations problem for the random evolutions in the scheme of ergodic phase merging is solved by the exponential generators described in the following theorem.

Theorem 1 ([10]).

The exponential generators of the large deviations for the random evolutions (7)–(12) are determined by the relations

$$\displaystyle{ \mathbf{H}\varphi (u) = \frac{1} {2}B_{{\ast}}(u){[\varphi ^{\prime}(u)]}^{2}. }$$
(13)

The variance \(B_{{\ast}}(u)\) is determined by

$$\displaystyle{ B_{\varLambda }(u) =\int _{E}\pi (dx)B(u;x),\quad B(u;x) =\int _{\mathbf{R}}{v}^{2}\varGamma (u,dv;x) }$$
(14)

under the local balance condition ΛB, and by

$$\displaystyle{ B_{T}(u) = B_{\varLambda }(u) + B_{0}(u), }$$
(15)
$$\displaystyle{B_{0}(u) =\int _{E}\pi (dx)B_{0}(u;x),\quad B_{0}(u;x) = 2b(u;x)R_{0}b(u;x),}$$

under the total balance condition TB.
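Theorem 1 may be illustrated numerically along the same lines. The following sketch (Python/NumPy, with an assumed three-state switching generator and hypothetical jump characteristics B(u;x), b(u;x) at a fixed point u) assembles the variances (14) and (15) from the stationary distribution and the potential operator of the switching process.

```python
# Numerical sketch of the variances (14)-(15) for an assumed 3-state switching
# process with hypothetical jump characteristics at a fixed point u.
import numpy as np

Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 2.0,  1.0, -3.0]])
n = Q.shape[0]

A = np.vstack([Q.T, np.ones(n)])
pi = np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)[0]
Pi = np.outer(np.ones(n), pi)
R0 = np.linalg.inv(Pi - Q) - Pi               # potential operator, as above

B_x = np.array([1.0, 2.0, 0.5])               # B(u;x): second moments of the jumps
b_x = np.array([1.0, -1.0, 0.0])
b_x = b_x - pi @ b_x                          # enforce the total balance pi(b) = 0

B_Lambda = pi @ B_x                           # (14): averaged diffusion coefficient
B_0 = 2.0 * pi @ (b_x * (R0 @ b_x))           # B_0 = 2 * pi( b R0 b )
B_T = B_Lambda + B_0                          # (15)
print(B_Lambda, B_0, B_T)                     # H(phi) = (1/2) * B_T * [phi']^2
```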

Remark 2.

The exponential generators of the large deviations for the random evolutions in the scheme of asymptotically small diffusion have exactly the same form as the exponential generator of the processes with locally independent increments (compare (2)–(4) and (6) with (7), (8) and (13)–(15)).

The proof of Theorem 1 is based on the following lemma:

Lemma 2 ([10]).

The exponential generator on the perturbed test function admits the following asymptotic representations:

  1.

    In the case of the local balance condition ΛB on the perturbed test function \({\varphi }^{\varepsilon }(u,x) =\varphi (u) +\varepsilon \ln [1 +\varepsilon \varphi _{1}(u,x)]\),

    $$\displaystyle{{\mathbf{H}{}^{\varepsilon }\varphi }^{\varepsilon }(u,x) = Q\varphi _{1} +\tilde{ \mathbf{B}}(x)\varphi (u) {+\delta }^{\varepsilon }(u,x)\varphi (u).}$$

    Here the operator

    $$\displaystyle{\tilde{\mathbf{B}}(x)\varphi (u) = \frac{1} {2}B(u;x){[\varphi ^{\prime}(u)]}^{2}.}$$
  2.

    In the case of the total balance condition TB on the perturbed test function \({\varphi }^{\varepsilon }(u,x) =\varphi (u) +\varepsilon \ln [1 +\varepsilon \varphi _{1}(u,x) {+\varepsilon }^{2}\varphi _{2}(u,x)]\) :

    $$\displaystyle{{\mathbf{H}{}^{\varepsilon }\varphi }^{\varepsilon }(u,x) {=\varepsilon }^{-1}[Q\varphi _{ 1}+\varGamma (x)\varphi (u)]+[Q\varphi _{2} -\varphi _{1}Q\varphi _{1}+\tilde{\mathbf{B}}(x)\varphi (u)]{+\delta }^{\varepsilon }(u,x)\varphi (u).}$$

    In this case the operator

    $$\displaystyle{\varGamma (x)\varphi (u):= b(u;x)\varphi ^{\prime}(u).}$$

    The negligible terms converge to zero uniformly in u, x for functions \(\varphi (u) \in {C}^{3}(\mathbf{R})\):

    $$\displaystyle{{\vert \delta }^{\varepsilon }(u,x)\varphi (u)\vert \rightarrow 0,\varepsilon \rightarrow 0.}$$

4 Large Deviations in the Scheme of Split-and-Double Merging [9, Sect. 5.7.2]

4.1 Split-and-Double Merging Scheme

We introduce the switching Markov process \({x}^{\varepsilon }(t),t \geq 0,\) on the standard phase (state) space \((E,\mathcal{E})\), considered in the series scheme with a small series parameter \(\varepsilon \rightarrow 0,\varepsilon > 0\), on the split phase space

$$\displaystyle{E =\bigcup _{ k=1}^{N}E_{ k},\quad E_{k} \cap E_{k^{\prime}} = \varnothing,\quad k\neq k^{\prime}.}$$

The Markov kernel is

$$\displaystyle{{Q}^{\varepsilon }(x,B,t) = {P}^{\varepsilon }(x,B)[1 - {e}^{-q(x)t}],\quad \ x \in E,\quad B \in \mathcal{E},\quad t \geq 0.}$$

We also introduce the following assumptions:

ME1: :

The transition kernel of the embedded Markov chain \(x_{n}^{\varepsilon },n \geq 0\) has the following representation:

$$\displaystyle{{P}^{\varepsilon }(x,B) = P(x,B) +\varepsilon P_{1}(x,B).}$$

The stochastic kernel P(x, B) is coordinated with the split phase space as follows:

$$\displaystyle{P(x,E_{k}) = \mathbf{1}_{k}(x):= \left \{\begin{array}{c} 1,x \in E_{k}, \\ 0,x\notin E_{k}.\end{array} \right.}$$

The stochastic kernel P(x, B) determines the support Markov chain \(x_{n}\), n ≥ 0, on the separate classes \(E_{k}\), 1 ≤ k ≤ N. Moreover, the perturbing signed kernel \(P_{1}(x,B)\) satisfies the conservative condition

$$\displaystyle{P_{1}(x,E) = 0,}$$

which is a direct consequence of \({P}^{\varepsilon }(x,E) = P(x,E) = 1.\)

ME2: :

The associated Markov process \(x_{0}(t)\), t ≥ 0, given by the generator

$$\displaystyle{Q\varphi (x) = q(x)\int _{E}P(x,dy)[\varphi (y) -\varphi (x)]}$$

is uniformly ergodic in every class \(E_{k}\), 1 ≤ k ≤ N, with the stationary distributions \(\pi _{k}(dx)\), 1 ≤ k ≤ N, satisfying the relations:

$$\displaystyle{\pi _{k}(dx)q(x) = q_{k}\rho _{k}(dx),\quad q_{k}:=\int _{E_{k}}\pi _{k}(dx)q(x).}$$
ME3: :

The average exit probabilities

$$\displaystyle{\hat{p}_{k}:=\int _{E_{k}}\rho _{k}(dx)P_{1}(x,E\setminus E_{k}) > 0,\quad 1 \leq k \leq N}$$

are positive and

$$\displaystyle{0 < q(x) < +\infty.}$$

The perturbing signed kernel \(P_{1}(x,B)\) defines the transition probabilities between the classes \(E_{k}\), 1 ≤ k ≤ N. So, the relation \({P}^{\varepsilon }(x,B) = P(x,B) +\varepsilon P_{1}(x,B)\) means that the embedded Markov chain \(x_{n}^{\varepsilon },n \geq 0\) spends a long time in every class \(E_{k}\) and jumps from one class to another with the small probabilities \(\varepsilon P_{1}(x,E\setminus E_{k}).\)

Under Assumptions ME1–ME3 the following weak convergence holds [9, Chap. 5]:

$$\displaystyle{v({x}^{\varepsilon }(t)) \Rightarrow \hat{ x}(t),\ \varepsilon \rightarrow 0,\quad v(x) = k \in \hat{ E} =\{ 1,\ldots,N\},\ x \in E_{k}.}$$

The limit Markov process \(\hat{x}(t),t \geq 0\) on the merged phase space \(\hat{E} =\{ 1,\ldots,N\}\) is determined by the generating matrix

$$\displaystyle{\hat{Q}_{1} = (\hat{q}_{kr},1 \leq k,r \leq N),}$$

where:

$$\displaystyle{\hat{q}_{kr} =\hat{ q}_{k}\hat{p}_{kr},\quad k\neq r,\quad \hat{q}_{k} =\hat{ p}_{k}q_{k},\quad 1 \leq k \leq N.}$$
$$\displaystyle{\hat{p}_{kr} = p_{kr}/\hat{p}_{k},\quad p_{kr} =\int _{E_{k}}\rho _{k}(dx)P_{1}(x,E_{r}),\quad 1 \leq k,r \leq N,\quad k\neq r,}$$
$$\displaystyle{\hat{p}_{k} = -\int _{E_{k}}\rho _{k}(dx)P_{1}(x,E_{k}).}$$
ME4: :

The merged Markov process \(\hat{x}(t),t \geq 0\) is ergodic, with the stationary distribution \(\hat{\pi }= (\hat{\pi }_{k},k \in \hat{ E}).\)

Thus, the operator \({Q}^{\varepsilon }\) may be represented as

$$\displaystyle{{Q}^{\varepsilon } = Q +\varepsilon Q_{1},\quad Q_{1}\varphi (x) = q(x)\int _{E}P_{1}(x,dy)\varphi (y).}$$

Let Π be the projector onto the null-space of the reducible-invertible operator Q acting as follows on the test functions \(\varphi\):

$$\displaystyle{\varPi \varphi (x) =\sum _{ k=1}^{N}\hat{\varphi }_{ k}\mathbf{1}_{k}(x),\quad \hat{\varphi }_{k}:=\int _{E_{k}}\pi _{k}(dx)\varphi (x).}$$

The contracted operator \(\hat{Q}_{1}\) is defined by the relation

$$\displaystyle{\hat{Q}_{1}\varPi =\varPi Q_{1}\varPi.}$$

Let \(\hat{\varPi }\) be the projector onto the null-space of the reducible-invertible contracted operator \(\hat{Q}_{1}\):

$$\displaystyle{\hat{\varPi }\hat{\varphi }:=\sum _{k\in \hat{E}}\hat{\pi }_{k}\hat{\varphi }_{k}.}$$

We define the potential matrix \(\hat{R}_{0} = [\hat{R}_{kl}^{0};1 \leq k,l \leq N]\) by the following relations:

$$\displaystyle{\hat{Q}_{1}\hat{R}_{0} =\hat{ R}_{0}\hat{Q}_{1} =\hat{\varPi } -I.}$$
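On a finite split phase space the objects of this subsection can be computed directly. The following minimal numerical sketch (Python/NumPy, with hypothetical kernels P, \(P_{1}\) and intensities q(x) for N = 2 classes of two states each) builds the merged generator \(\hat{Q}_{1}\), its stationary distribution \(\hat{\pi }\), and the potential matrix \(\hat{R}_{0}\).

```python
# Numerical sketch of the split-and-double merging scheme for an assumed split
# phase space E = E_1 + E_2 with two states in each class (hypothetical data).
import numpy as np

q = np.array([1.0, 2.0, 1.5, 3.0])            # exit intensities q(x) > 0

# Support kernel P: stochastic and block-diagonal (coordinated with the split).
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])

# Perturbing signed kernel P1 with P1(x, E) = 0: small transitions between classes.
P1 = np.array([[ 0.0, -0.2,  0.1,  0.1],
               [-0.3,  0.0,  0.2,  0.1],
               [ 0.2,  0.1,  0.0, -0.3],
               [ 0.1,  0.1, -0.2,  0.0]])

classes = [np.array([0, 1]), np.array([2, 3])]
N = len(classes)
Q = np.diag(q) @ (P - np.eye(4))              # generator of the support process

def stationary(G):
    """Stationary row vector of a finite ergodic generator G."""
    A = np.vstack([G.T, np.ones(G.shape[0])])
    return np.linalg.lstsq(A, np.append(np.zeros(G.shape[0]), 1.0), rcond=None)[0]

pis  = [stationary(Q[np.ix_(k, k)]) for k in classes]          # pi_k on E_k
qk   = np.array([pis[i] @ q[classes[i]] for i in range(N)])    # q_k
rhos = [pis[i] * q[classes[i]] / qk[i] for i in range(N)]      # rho_k(dx)

# p[k][r] = integral over E_k of rho_k(dx) P1(x, E_r); exit probabilities p_hat_k.
p = np.array([[rhos[i] @ P1[np.ix_(classes[i], classes[j])].sum(axis=1)
               for j in range(N)] for i in range(N)])
p_hat = -np.diag(p)
print("average exit probabilities:", p_hat)

Q1_hat = np.diag(qk) @ p                      # merged generator: q_hat_kr = q_k p_kr
pi_hat = stationary(Q1_hat)                   # stationary distribution on {1,...,N}
Pi_hat = np.outer(np.ones(N), pi_hat)
R0_hat = np.linalg.inv(Pi_hat - Q1_hat) - Pi_hat          # potential matrix
print(np.allclose(Q1_hat @ R0_hat, Pi_hat - np.eye(N)))   # Q1_hat R0_hat = Pi_hat - I
```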

4.2 Large Deviations Under the Local Balance Condition Λ B

The random evolutions are studied under the condition

Λ B::

\(b(u;x):=\int _{\mathbf{R}}v\varGamma (u,dv;x) \equiv 0\)

with the following scaling:

$$\displaystyle{{ \xi }^{\varepsilon }(t) {=\varepsilon }^{2}\xi (t{/\varepsilon }^{3}),\quad x_{ t}^{\varepsilon }:= {x}^{\varepsilon }(t{/\varepsilon }^{3}). }$$
(16)

The generator of the random evolution is given by

$$\displaystyle{ \mathbf{L}_{\varLambda }^{\varepsilon }\varphi (u,x) = [{\varepsilon }^{-3}Q {+\varepsilon }^{-2}Q_{ 1} {+\varGamma }^{\varepsilon }(x)]\varphi (u,x), }$$
(17)
$$\displaystyle{{\varGamma }^{\varepsilon }(x)\varphi (u) {=\varepsilon }^{-3}\int _{ \mathbf{R}}[\varphi (u {+\varepsilon }^{2}v) -\varphi (u)]\varGamma (u,dv;x).}$$

The generator (17) has the following asymptotic representation:

$$\displaystyle{\mathbf{L}_{\varLambda }^{\varepsilon }\varphi (u,x) = [{\varepsilon }^{-3}Q {+\varepsilon }^{-2}Q_{ 1} +\varepsilon \mathbf{B}(x)]\varphi (u,x) {+\varepsilon \delta }^{\varepsilon }(u,x)\varphi (u,x).}$$

Here

$$\displaystyle{\mathbf{B}(x)\varphi (u) = \frac{1} {2}B(u;x)\varphi ^{\prime\prime}(u),\quad B(u;x) =\int _{\mathbf{R}}{v}^{2}\varGamma (u,dv;x).}$$

Theorem 2.

The exponential generator of the large deviations for the random evolutions (16) under the conditions ME1–ME4 and ΛB is determined by the relation

$$\displaystyle{\mathbf{H}\varphi (u) = \frac{1} {2}\hat{\hat{B}}(u){[\varphi ^{\prime}(u)]}^{2},}$$
$$\displaystyle{\hat{\hat{B}}(u) =\sum _{ k=1}^{N}\hat{\pi }_{ k}\int _{E_{k}}\pi _{k}(dx)B(u;x),\quad B(u;x) =\int _{\mathbf{R}}{v}^{2}\varGamma (u,dv;x).}$$

The proof follows from Lemma 3.

Lemma 3.

The exponential generator on the perturbed test function

$$\displaystyle{{\varphi }^{\varepsilon }(u,x) =\varphi (u) +\varepsilon \ln [1 +\varepsilon \varphi _{1}(u,x) {+\varepsilon }^{2}\varphi _{ 2}(u,x)]}$$

admits the following asymptotic representation:

$$\displaystyle{{\mathbf{H}{}^{\varepsilon }\varphi }^{\varepsilon }(u,x) {=\varepsilon }^{-1}Q\varphi _{ 1} + Q\varphi _{2} + Q_{1}\varphi _{1} -\varphi _{1}Q\varphi _{1} +\tilde{ \mathbf{B}}(x)\varphi (u) +\delta _{ H}^{\varepsilon }(u,x)\varphi (u),}$$

and the negligible term converges to zero uniformly in u, x for functions \(\varphi (u) \in {C}^{3}(\mathbf{R})\):

$$\displaystyle{\vert \delta _{H}^{\varepsilon }(u,x)\varphi (u)\vert \rightarrow 0,\ \varepsilon \rightarrow 0.}$$

Here the operator

$$\displaystyle{ \tilde{\mathbf{B}}(x)\varphi (u) = \frac{1} {2}B(u;x){[\varphi ^{\prime}(u)]}^{2}. }$$
(18)

Proof.

The proof of the lemma is based on the asymptotic analysis of the terms

$$\displaystyle\begin{array}{rcl} H_{Q}^{\varepsilon }{\varphi }^{\varepsilon }(u,x)& =& {e}^{-\varphi (u)/\varepsilon }{[1+\varepsilon \varphi _{ 1}{+\varepsilon }^{2}\varphi _{ 2}]}^{-1}[{\varepsilon }^{-2}Q{+\varepsilon }^{-1}Q_{ 1}][1+\varepsilon \varphi _{1}{+\varepsilon }^{2}\varphi _{ 2}]{e}^{\varphi (u)/\varepsilon } {}\\ & =& {e}^{-\varphi (u)/\varepsilon }[1-\varepsilon \varphi _{ 1}][{\varepsilon }^{-2}Q{+\varepsilon }^{-1}Q_{ 1}][1+\varepsilon \varphi _{1}{+\varepsilon }^{2}\varphi _{ 2}]{e}^{\varphi (u)/\varepsilon }{+\delta }^{\varepsilon }(x)\varphi (u) {}\\ & =& {\varepsilon }^{-1}Q\varphi _{ 1} + Q\varphi _{2} + Q_{1}\varphi _{1} -\varphi _{1}Q\varphi _{1} {+\delta }^{\varepsilon }(x)\varphi (u) {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} H_{\varGamma }^{\varepsilon }{\varphi }^{\varepsilon }(u,x)& =& {e}^{-\varphi (u)/\varepsilon }{[1 +\varepsilon \varphi _{ 1} {+\varepsilon }^{2}\varphi _{ 2}]{}^{-1}\varepsilon \varGamma }^{\varepsilon }(x)[1 +\varepsilon \varphi _{ 1} {+\varepsilon }^{2}\varphi _{ 2}]{e}^{\varphi (u)/\varepsilon } {}\\ & =& {e}^{-\varphi (u)/\varepsilon }{[1 -\varepsilon \varphi _{ 1}]\varepsilon \varGamma }^{\varepsilon }(x)[1 +\varepsilon \varphi _{ 1} {+\varepsilon }^{2}\varphi _{ 2}]{e}^{\varphi (u)/\varepsilon } {+\delta }^{\varepsilon }(u,x)\varphi (u) {}\\ & =& {\varepsilon }^{-2}\int _{ \mathbf{R}}[{e}^{\varDelta _{v}^{\varepsilon }\varphi (u) } - 1]\varGamma (u,dv;x) {+\delta }^{\varepsilon }(u,x)\varphi (u). {}\\ \end{array}$$

Here

$$\displaystyle{\varDelta _{v}^{\varepsilon }\varphi (u) {=\varepsilon }^{-1}[\varphi (u {+\varepsilon }^{2}v) -\varphi (u)] =\varepsilon v\varphi ^{\prime}(u) {+\varepsilon }^{3}\hat{\varphi }^{\prime\prime}_{ v}(u),}$$

and due to the Λ B condition, we obtain

$$ \displaystyle\begin{array}{rcl} {\varepsilon }^{-2}\int _{ \mathbf{R}}[{e}^{\varDelta _{v}^{\varepsilon }\varphi (u) } - 1]\varGamma (u,dv;x)& =& {\varepsilon }^{-2}\int _{ \mathbf{R}}\left [\varepsilon v\varphi ^{\prime}(u) + \frac{1} {2}{(\varepsilon v)}^{2}{[\varphi ^{\prime}(u)]}^{2}\right ]\varGamma (u,dv;x) {}\\ & & \quad {+\delta }^{\varepsilon }(u,x)\varphi (u) =\tilde{ \mathbf{B}}(x)\varphi (u) {+\delta }^{\varepsilon }(u,x)\varphi (u). {}\\ \end{array}$$

Thus,

$$\displaystyle{\mathbf{H}_{\varGamma }^{\varepsilon }{\varphi }^{\varepsilon }(u,x) =\tilde{ \mathbf{B}}(x)\varphi (u) {+\delta }^{\varepsilon }(u,x)\varphi (u)}$$

with the main term (18).

Proof of Theorem 2.

To finish the proof of the theorem we should apply the solution of the singular perturbation problem for the equations:

$$\displaystyle{Q\varphi _{1}(u,x) = 0}$$
$$\displaystyle{Q\varphi _{2} + Q_{1}\varphi _{1} +\tilde{ \mathbf{B}}(x)\varphi (u) =\hat{\hat{ B}}\varphi (u).}$$

It follows from the first equation that \(\varphi _{1}(u,x) =\varphi _{1}(u,\hat{x}) \in N_{Q}\); thus, from the solvability condition for the second equation, we obtain a new relation

$$\displaystyle{\varPi Q_{1}\varPi \varphi _{1} +\varPi \tilde{ \mathbf{B}}(x)\varPi \varphi (u) =\hat{\hat{ B}}\varphi (u),}$$

or

$$\displaystyle{\hat{Q}_{1}\hat{\varphi }_{1} +\widehat{\tilde{ \mathbf{B}}(x)}\hat{\varphi }(u) =\hat{\hat{ B}}\varphi (u).}$$

The solvability condition for the averaged equation gives finally

$$\displaystyle{\hat{\varPi }\widehat{\tilde{\mathbf{B}}(x)}\hat{\varPi }\hat{\varphi }(u) =\hat{\hat{ B}}\varphi (u).}$$

Thus, the relation

$$\displaystyle{{\mathbf{H}{}^{\varepsilon }\varphi }^{\varepsilon }(u,x) = \mathbf{H}\varphi (u) +\delta _{ H}^{\varepsilon }(u,x)\varphi (u)}$$

finishes the proof of the theorem.

4.3 Large Deviations Under the Total Balance Condition TB

Under the total balance condition:

TB::

\(\begin{array}{c} b(u;x) =\int _{\mathbf{R}}v\varGamma (u,dv;x)\not\equiv 0, \\ \sum _{k=1}^{N}\hat{\pi }_{k}\hat{b}_{k}(u) = 0,\quad \hat{b}_{k}(u) =\int _{E_{k}}\pi _{k}(dx)b(u;x),\quad 1 \leq k \leq N\end{array}\)

we use the following scaling for the random evolutions:

$$\displaystyle{{ \xi }^{\varepsilon }(t) {=\varepsilon }^{2}\xi (t{/\varepsilon }^{3}),\quad x_{ t}^{\varepsilon }:= {x}^{\varepsilon }(t{/\varepsilon }^{4}). }$$
(19)

The generator of the random evolution is given by

$$\displaystyle{ \mathbf{L}_{T}^{\varepsilon }\varphi (u,x) = [{\varepsilon }^{-4}Q {+\varepsilon }^{-3}Q_{ 1} {+\varGamma }^{\varepsilon }(x)]\varphi (u,x), }$$
(20)

where

$$\displaystyle{{\varGamma }^{\varepsilon }(x)\varphi (u) {=\varepsilon }^{-3}\int _{ \mathbf{R}}[\varphi (u {+\varepsilon }^{2}v) -\varphi (u)]\varGamma (u,dv;x).}$$

The generator (20) has the following asymptotic representation:

$$\displaystyle{\mathbf{L}_{T}^{\varepsilon }\varphi (u,x) = [{\varepsilon }^{-4}Q {+\varepsilon }^{-3}Q_{ 1} {+\varepsilon }^{-1}\varGamma (x) +\varepsilon \mathbf{B}(x)]\varphi (u,x) {+\varepsilon \delta }^{\varepsilon }(u,x)\varphi (u,x).}$$

Here

$$\displaystyle{\varGamma (x)\varphi (u):= b(u;x)\varphi ^{\prime}(u).}$$

Theorem 3.

The exponential generator of the large deviations for the random evolutions defined by (19) under the conditions ME1–ME4 and TB is determined by the relation

$$\displaystyle{\mathbf{H}\varphi (u) = \frac{1} {2}\hat{\hat{B}}_{T}(u){[\varphi ^{\prime}(u)]}^{2},\quad \hat{\hat{B}}_{ T}(u) =\hat{\hat{ B}}(u) +\hat{\hat{ B}}_{0}(u).}$$

Here

$$\displaystyle{\hat{\hat{B}}(u):=\sum _{ k=1}^{N}\hat{\pi }_{ k}\int _{E_{k}}\pi _{k}(dx)B(u;x),\quad B(u;x) =\int _{\mathbf{R}}{v}^{2}\varGamma (u,dv;x),}$$
$$\displaystyle{\hat{\hat{B}}_{0}(u):=\hat{\varPi }\hat{ b}(u,\hat{x})\hat{R}_{0}\hat{b}(u,\hat{x})\hat{\varPi } =\sum _{ k,l=1}^{N}\hat{\pi }_{ k}\hat{b}_{k}\hat{R}_{kl}^{0}\hat{b}_{ l}.}$$
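The structure of \(\hat{\hat{B}}_{T}(u)\) may be illustrated by a small numerical example. The following sketch (Python/NumPy, with hypothetical merged-level data: a merged generator \(\hat{Q}_{1}\), merged drifts \(\hat{b}_{k}(u)\) at a fixed u satisfying TB, and a doubly averaged diffusion coefficient \(\hat{\hat{B}}(u)\)) assembles the variance that enters the limit exponential generator.

```python
# Assembling the limit variance of Theorem 3 from merged-level data
# (illustrative sketch; all inputs below are hypothetical).
import numpy as np

Q1_hat = np.array([[-0.6,  0.6],
                   [ 0.4, -0.4]])             # merged generator on {1, 2}
pi_hat = np.array([0.4, 0.6])                 # its stationary distribution
b_hat  = np.array([0.5, -1.0 / 3.0])          # merged drifts b_hat_k(u) at fixed u
assert abs(pi_hat @ b_hat) < 1e-12            # total balance condition TB

Pi_hat = np.outer(np.ones(2), pi_hat)
R0_hat = np.linalg.inv(Pi_hat - Q1_hat) - Pi_hat   # potential matrix R0_hat

B_hat_hat  = 2.0                              # doubly averaged diffusion coefficient
B0_hat_hat = pi_hat @ (b_hat * (R0_hat @ b_hat))   # sum_{k,l} pi_k b_k R0_kl b_l
B_T_hat    = B_hat_hat + B0_hat_hat
print(B_T_hat)                                # H(phi) = (1/2) * B_T_hat * [phi']^2
```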

Remark 3.

The limit exponential generator consists of two parts: the first one is the averaged diffusion coefficient and the second one is defined by the merged first moments of jumps, averaged with the potential of the limit merged Markov switching process.

The proof is based on the following lemma.

Lemma 4.

The exponential generator on the perturbed test function

$$\displaystyle{{\varphi }^{\varepsilon }(u,x) =\varphi (u) +\varepsilon \ln [1 +\varepsilon \varphi _{1}(u,x) {+\varepsilon }^{2}\varphi _{ 2}(u,x) {+\varepsilon }^{3}\varphi _{ 3}(u,x)]}$$

admits the following asymptotic representation:

$$\displaystyle\begin{array}{rcl}{ \mathbf{H}{}^{\varepsilon }\varphi }^{\varepsilon }(u,x)& =& {\varepsilon }^{-2}Q\varphi _{ 1} {+\varepsilon }^{-1}[Q\varphi _{ 2} + Q_{1}\varphi _{1} -\varphi _{1}Q\varphi _{1} +\varGamma (x)\varphi (u)] {}\\ & & \quad + [Q\varphi _{3} + Q_{1}\varphi _{2} -\varphi _{1}Q\varphi _{2} -\varphi _{2}Q\varphi _{1} -\varphi _{1}Q_{1}\varphi _{1} +\tilde{ \mathbf{B}}(x)\varphi (u)] {}\\ & & \quad {+\delta }^{\varepsilon }(u,x)\varphi (u), {}\\ \end{array}$$

and the negligible term converges to zero uniformly in u, x for functions \(\varphi (u) \in {C}^{3}(\mathbf{R})\):

$$\displaystyle{{\vert \delta }^{\varepsilon }(u,x)\varphi (u)\vert \rightarrow 0,\ \varepsilon \rightarrow 0.}$$

Here the operators

$$\displaystyle{ \varGamma (x)\varphi (u):= b(u;x)\varphi ^{\prime}(u),\quad \tilde{\mathbf{B}}(x)\varphi (u):= \frac{1} {2}B(u;x){[\varphi ^{\prime}(u)]}^{2}. }$$
(21)

Proof.

The proof of the lemma is based on the asymptotic analysis of the terms

$$\displaystyle\begin{array}{rcl} H_{Q}^{\varepsilon }{\varphi }^{\varepsilon }(u,x)& =& {e}^{-\varphi (u)/\varepsilon }{[1 +\varepsilon \varphi _{ 1} {+\varepsilon }^{2}\varphi _{ 2} {+\varepsilon }^{3}\varphi _{ 3}]}^{-1}[{\varepsilon }^{-3}Q {+\varepsilon }^{-2}Q_{ 1}] {}\\ & & \quad \times [1 +\varepsilon \varphi _{1} {+\varepsilon }^{2}\varphi _{ 2} {+\varepsilon }^{3}\varphi _{ 3}]{e}^{\varphi (u)/\varepsilon } {}\\ & =& {e}^{-\varphi (u)/\varepsilon }[1 -\varepsilon \varphi _{ 1} {-\varepsilon }^{2}\varphi _{ 2} {+\varepsilon }^{2}\varphi _{ 1}^{2} {-\varepsilon }^{3}\varphi _{ 3}] {}\\ & & \quad \times [{\varepsilon }^{-3}Q {+\varepsilon }^{-2}Q_{ 1}][1 +\varepsilon \varphi _{1} {+\varepsilon }^{2}\varphi _{ 2} {+\varepsilon }^{3}\varphi _{ 3}]{e}^{\varphi (u)/\varepsilon } {+\delta }^{\varepsilon }(x)\varphi (u) {}\\ & =& {\varepsilon }^{-2}Q\varphi _{ 1} {+\varepsilon }^{-1}[Q\varphi _{ 2} + Q_{1}\varphi _{1} -\varphi _{1}Q\varphi _{1}] {}\\ & & \quad + [Q\varphi _{3} + Q_{1}\varphi _{2} -\varphi _{1}Q\varphi _{2} -\varphi _{2}Q\varphi _{1} +\varphi _{ 1}^{2}Q\varphi _{ 1} -\varphi _{1}Q_{1}\varphi _{1}] {}\\ & & \quad {+\delta }^{\varepsilon }(x)\varphi (u) {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} H_{\varGamma }^{\varepsilon }{\varphi }^{\varepsilon }(u,x)& =& {e}^{-\varphi (u)/\varepsilon }{[1 +\varepsilon \varphi _{ 1} {+\varepsilon }^{2}\varphi _{ 2} {+\varepsilon }^{3}\varphi _{ 3}]{}^{-1}\varepsilon \varGamma }^{\varepsilon }(x)[1 +\varepsilon \varphi _{ 1} {+\varepsilon }^{2}\varphi _{ 2} {}\\ & & \quad {+\varepsilon }^{3}\varphi _{ 3}]{e}^{\varphi (u)/\varepsilon } {}\\ & =& {e}^{-\varphi (u)/\varepsilon }{[1 -\varepsilon \varphi _{ 1} {-\varepsilon }^{2}\varphi _{ 2} {+\varepsilon }^{2}\varphi _{ 1}^{2}]\varepsilon \varGamma }^{\varepsilon }(x)[1 +\varepsilon \varphi _{ 1} {+\varepsilon }^{2}\varphi _{ 2} {+\varepsilon }^{3}\varphi _{ 3}]{e}^{\varphi (u)/\varepsilon } {}\\ & & \quad {+\delta }^{\varepsilon }(u,x)\varphi (u) {}\\ & =& {\varepsilon }^{-2}\int _{ \mathbf{R}}[{e}^{\varDelta _{v}^{\varepsilon }\varphi (u) } - 1]\varGamma (u,dv;x) {}\\ & & \quad {+\varepsilon }^{2}{e}^{-\varphi (u)/\varepsilon }[{\varGamma }^{\varepsilon }(x)\varphi _{ 1}{e}^{\varphi (u)/\varepsilon } -\varphi {_{ 1}\varGamma }^{\varepsilon }(x){e}^{\varphi (u)/\varepsilon }] +\delta _{ \varGamma }^{\varepsilon }(u,x)\varphi (u). {}\\ \end{array}$$

Here

$$\displaystyle{\varDelta _{v}^{\varepsilon }\varphi (u) {=\varepsilon }^{-1}[\varphi (u {+\varepsilon }^{2}v) -\varphi (u)] =\varepsilon v\varphi ^{\prime}(u) {+\varepsilon }^{3}\hat{\varphi }^{\prime\prime}_{ v}(u),}$$

and due to the TB condition, we obtain

$$ \displaystyle\begin{array}{rcl} {\varepsilon }^{-2}\int _{ \mathbf{R}}[{e}^{\varDelta _{v}^{\varepsilon }\varphi (u) } - 1]\varGamma (u,dv;x)& =& {\varepsilon }^{-2}\int _{ \mathbf{R}}\left [\varepsilon v\varphi ^{\prime}(u) + \frac{1} {2}{(\varepsilon v)}^{2}{[\varphi ^{\prime}(u)]}^{2}\right ]\varGamma (u,dv;x) {}\\ & & \quad {+\delta }^{\varepsilon }(u,x)\varphi (u) {}\\ & =& {\varepsilon }^{-1}\varGamma (x)\varphi (u) +\tilde{ \mathbf{B}}(x)\varphi (u) {+\delta }^{\varepsilon }(u,x)\varphi (u). {}\\ \end{array}$$

Neither of the terms in the square brackets is negligible by itself; for instance,

$$\displaystyle{{\varepsilon }^{2}\varphi _{ 1}{e{}^{-\varphi (u)/\varepsilon }\varGamma }^{\varepsilon }(x){e}^{\varphi (u)/\varepsilon } =\varepsilon \varphi _{ 1}{e{}^{-\varphi (u)/\varepsilon }\varepsilon \varGamma }^{\varepsilon }(x){e}^{\varphi (u)/\varepsilon } =\varphi _{ 1}\varGamma (x)\varphi (u)}$$
$$\displaystyle{\quad {+\delta }^{\varepsilon }(u,x)\varphi (u).}$$

But their difference is negligible due to the relation

$$ \displaystyle\begin{array}{rcl} {\varGamma }^{\varepsilon }(x){e}^{\varphi (u)/\varepsilon }\varphi _{ 1}& =& {\varepsilon }^{-3}\int _{ \mathbf{R}}[{e}^{\varphi (u{+\varepsilon }^{2}v)/\varepsilon }\varphi _{1}(u {+\varepsilon }^{2}v,x) - {e}^{\varphi (u)/\varepsilon }\varphi _{ 1}(u,x)]\varGamma (u,dv;x) {}\\ & =& \varphi _{1}{(u,x)\varGamma }^{\varepsilon }(x){e}^{\varphi (u)/\varepsilon } + o({\varepsilon }^{2}). {}\\ \end{array}$$

Thus,

$$\displaystyle{H_{\varGamma }^{\varepsilon }{\varphi }^{\varepsilon }(u,x) {=\varepsilon }^{-1}\varGamma (x)\varphi (u) +\tilde{ \mathbf{B}}(x)\varphi (u) {+\delta }^{\varepsilon }(u,x)\varphi (u),}$$

with the main terms (21).

Proof of Theorem 3.

To finish the proof of the theorem, we should apply the solution of the singular perturbation problems for the equations

$$\displaystyle{Q\varphi _{1} = 0,}$$
$$\displaystyle{Q\varphi _{2} + Q_{1}\varphi _{1} + b(u;x)\varphi ^{\prime}(u) = 0,}$$
$$\displaystyle{Q\varphi _{3} + Q_{1}\varphi _{2} -\varphi _{1}Q\varphi _{2} -\varphi _{1}Q_{1}\varphi _{1} +\tilde{ \mathbf{B}}(x)\varphi (u) =\hat{\hat{ B}}\varphi (u).}$$

It follows from the first equation that \(\varphi _{1}(u,x) =\varphi _{1}(u,\hat{x}) \in N_{Q}\); thus, from the solvability condition for the second equation

$$\displaystyle{ \hat{Q\varphi _{2}} +\hat{ Q}_{1}\hat{\varphi }_{1} +\hat{ b}(u;\hat{x})\varphi ^{\prime}(u) = 0,\quad \hat{Q\varphi _{2}} = 0, }$$
(22)

we obtain a new relation:

$$\displaystyle{\hat{Q}_{1}\hat{\varphi }_{1} +\hat{ b}(u,\hat{x})\varphi ^{\prime}(u) = 0,\quad \hat{\varPi }\hat{b}(u;\hat{x}) \equiv 0,}$$

from which we have

$$\displaystyle{ \hat{\varphi }_{1}(u,\hat{x}) =\hat{ R}_{0}\hat{b}(u;\hat{x})\varphi ^{\prime}(u),\quad \hat{Q}_{1}\hat{\varphi }_{1} = -\hat{b}(u,\hat{x})\varphi ^{\prime}(u). }$$
(23)

Then, the solvability condition for the equation

$$\displaystyle{Q\varphi _{3} + Q_{1}\varphi _{2} -\varphi _{1}Q\varphi _{2} -\varphi _{1}Q_{1}\varphi _{1} +\tilde{ \mathbf{B}}(x)\varphi (u) =\hat{\hat{ B}}\varphi (u)}$$

gives

$$\displaystyle{\hat{Q}_{1}\hat{\varphi }_{2} -\hat{\varphi }_{1}\hat{Q\varphi _{2}} -\hat{\varphi }_{1}\hat{Q}_{1}\hat{\varphi }_{1} +\hat{\tilde{ \mathbf{B}}}(x)\hat{\varphi }(u) =\hat{\hat{ B}}\varphi (u),}$$

but from (22)

$$\displaystyle{\hat{Q\varphi _{2}} = -[\hat{Q_{1}\varphi _{1}} +\hat{ b}(u,\hat{x})\varphi ^{\prime}(u)] = 0,}$$

and using the solution (23), we have

$$\displaystyle{\hat{Q}_{1}\hat{\varphi }_{2} +\hat{ B}_{T}(x)\hat{\varphi }(u) =\hat{\hat{ B}}\varphi (u).}$$

Application of the solvability condition for this equation finishes the proof of the theorem.