1 Introduction: Model of BRW/r/k/m

We present results for continuous-time branching random walks (BRWs) on the lattice \(\mathbf {Z}^{d}\), \(d\in \mathbf {N}\), with a finite number of lattice sites at which particles can be generated; these sites are called branching sources. By a BRW we mean a stochastic process that combines branching (birth or death) of particles at certain points of \(\mathbf {Z}^{d}\) with their random walk on \(\mathbf {Z}^{d}\). The goal of the paper is to study the distributions of the particle numbers \(\mu _{t}(y)\) at every point \(y\in \mathbf {Z}^{d}\) and of the total population \(\mu _{t}=\sum _{y\in \mathbf {Z}^{d}}\mu _{t}(y)\) over the lattice \(\mathbf {Z}^{d}\) for a BRW with branching sources of different types, without any assumptions on the variance of the jumps of the underlying random walk.

Suppose that at the moment \(t=0\) there is a single particle on the lattice, situated at the point \(x \in \mathbf {Z}^{d}\). Each particle moves on the lattice \(\mathbf {Z}^{d}\) until it reaches a source, where its behavior changes. There are three types of branching sources, depending on whether branching takes place and on whether the symmetry of the random walk is violated. At sources of the first type, particles die or are born, and the symmetry of the random walk is maintained, see, e.g., [1, 2, 11]. At sources of the second type, the walk symmetry is violated through an increase in the degree of branching or walk dominance, see, e.g., [9]. Sources of the third type may be called “pseudo-sources,” because at them only the walk symmetry is violated, with no particle births or deaths ever occurring. BRWs with r sources of the first type, k of the second type, and m of the third type are denoted BRW/r/k/m; they were introduced in [12]. Particles evolve on \(\mathbf {Z}^{d}\) independently of each other and of their prior history.

We define the random walk by its generator

$$\begin{aligned} A= \mathscr {A} + \sum _{j=1}^{k+m} \zeta _{j} \varDelta _{u_{j}} \mathscr {A} \end{aligned}$$
(1)

where \(\mathscr {A} = \left( a(x,y)\right) _{x,y \in \mathbf {Z}^{d}}\) satisfies the regularity property \(\sum _{y \in \mathbf {Z}^{d}}a(x,y)=0\) for all x, with \(a(x,y)\ge 0\) for \(x\ne y\) and \(-\infty<a(x,x)<0\). From this it follows that A itself satisfies this regularity property [12, 13]. Additionally, we assume that the intensities a(x, y) are symmetric and spatially homogeneous, that is, \(a(x-y) := a(x,y)=a(y,x)=a(0,y-x)\). The matrix \(\mathscr {A}\) under consideration is irreducible, so for any \(z\in \mathbf {Z}^{d}\) there is a set of vectors \(z_{1},\dots ,z_{k}\in \mathbf {Z}^{d}\) such that \(z=\sum _{i=1}^{k}z_{i}\) and \(a(z_{i})\ne 0\) for \(i=1,\dots ,k\). It is fairly clear that the irreducibility property is inherited by the perturbed matrix A. This, however, does not hold true for the properties of spatial homogeneity and, most importantly, symmetry. We will, however, make use of the structure of A and the symmetry of the underlying matrix \(\mathscr {A}\) to overcome this complication.

According to the axiomatics outlined in [3, Ch. III, §2], the probability p(h, x, y) that a particle at \(x\notin \{v_{1},v_{2},\dots ,v_{k+r}\}\) jumps to a point y over a short period of time h can be presented as \(p(h,x,y)=a(x,y)h+o(h)\) for \(y\ne x\) and \(p(h,x,x)=1+a(x,x)h+o(h)\) for \(y=x\). From these equalities, see, for instance, [3, Ch. III], we obtain the Kolmogorov backward equations:

$$\begin{aligned} \frac{\partial p(t,x,y)}{\partial t}=\sum _{x'}a(x,x') p(t,x',y),\qquad p(0,x,y)=\delta (x-y), \end{aligned}$$
(2)

where \(\delta (\cdot )\) is the discrete Kronecker \(\delta \)-function on \(\mathbf {Z}^{d}\).

Infinitesimal generating functions \(f(u,v_{i})=\sum _{n=0}^\infty b_n(v_{i}) u^n\), \(0\le u \le 1\), govern the branching process at each of the sources \(v_{1},v_{2},\dots ,v_{k+r}\). We define the source intensities \(\beta _{i}:=\beta _{i}^{(1)}=f^{(1)}(1,v_{i})= (-b_{1}(v_{i}))\left( \sum _{n\ne 1}n b_{n}(v_{i})/(-b_{1}(v_{i}))-1\right) \), where the sum is the average number of descendants a particle has at the source \(v_{i}\).
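
For illustration (a simple example, not a restriction imposed below), consider a source \(v_{i}\) at which a particle dies with rate \(b_{0}(v_{i})>0\) and splits in two with rate \(b_{2}(v_{i})>0\), so that \(b_{1}(v_{i})=-(b_{0}(v_{i})+b_{2}(v_{i}))\) and \(b_{n}(v_{i})=0\) for \(n\ge 3\). Then

$$ f(u,v_{i}) = b_{0}(v_{i}) - (b_{0}(v_{i})+b_{2}(v_{i}))u + b_{2}(v_{i})u^{2}, \qquad \beta _{i} = f^{(1)}(1,v_{i}) = b_{2}(v_{i}) - b_{0}(v_{i}), $$

so the intensity \(\beta _{i}\) is positive exactly when the splitting rate exceeds the death rate.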

If the particle is not at a branching source, its random walk proceeds in accordance with the above rules. Consider the combination of branching and walking observed when a particle is at one of the branching sources \(v_{1},v_{2},\dots ,v_{k+r}\). In this case, the possible transitions over a short period of time h are as follows: the particle either moves to a point \(y\ne v_{i}\) with probability \( p(h,v_{i},y)=a(v_{i},y)h+o(h) \), or remains at the source and produces \(n\ne 1\) descendants with probability \( p_{*}(h,v_{i},n)=b_{n}(v_{i})h+o(h) \) (we assume that the particle itself is included in these n descendants and say that the particle dies if \(n=0\)), or nothing happens to the particle at all, which has probability \( 1-\sum _{y\ne v_{i}}a(v_{i},y)h-\sum _{n\ne 1}b_{n}(v_{i})h+o(h) \). Thus, the time spent by the particle at the source \(v_{i}\) is exponentially distributed with the parameter \(-(a(v_{i},v_{i})+b_{1}(v_{i}))\). The evolution of each new particle obeys the same law and does not depend on the evolution of the other particles.
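
The mechanism just described is straightforward to simulate. The following minimal sketch (not part of the paper; the walk, the rates, and all names are illustrative assumptions) realizes a BRW/1/0/0 with a simple symmetric nearest-neighbour walk on \(\mathbf {Z}\) and a single binary-splitting source at the origin:

```python
import random

# Hypothetical parameters: total jump intensity -a(x, x) = 1, and a source at
# the origin where a particle dies at rate B0 or splits in two at rate B2.
KAPPA, B0, B2 = 1.0, 0.25, 0.75


def next_event(x):
    """Exponential sojourn time and offspring positions for a particle at x."""
    branch = (B0 + B2) if x == 0 else 0.0
    rate = KAPPA + branch
    dt = random.expovariate(rate)
    if random.random() < KAPPA / rate:           # walk: jump to a neighbour
        return dt, [x + random.choice((-1, 1))]
    if random.random() < B0 / (B0 + B2):         # branching: death (n = 0)
        return dt, []
    return dt, [x, x]                            # branching: split (n = 2)


def simulate(t_max):
    """Positions of all particles alive at time t_max, starting from one
    particle at the origin; particles evolve independently."""
    pending, alive = [(0.0, 0)], []
    while pending:
        t, x = pending.pop()
        dt, offspring = next_event(x)
        if t + dt >= t_max:
            alive.append(x)
        else:
            pending.extend((t + dt, y) for y in offspring)
    return alive


print(len(simulate(5.0)))   # a sample of the total population mu_t at t = 5
```

Averaging `len(simulate(t))` over many runs gives a Monte-Carlo estimate of the mean population size \(m_{1}(t,0)\) discussed below.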

Let us introduce the moments of the random variables \(\mu _{t}(y)\) and \(\mu _{t}\) as \(m_{n}(t, x, y)= \mathsf {E}_{x} \mu _{t}^{n}(y)\) and \(m_{n}(t, x)= \mathsf {E}_{x} \mu ^{n}_{t}\), respectively, where n is the order of the moment and \(\mathsf {E}_{x}\) is the expectation under the condition \(\mu _{0}(\cdot ) = \delta _{x}(\cdot )\).

In BRW/r/k/m, more general multi-point perturbations of the self-adjoint operator \(\mathscr {A}\) generated by the symmetric random walk are used than in BRW/r/0/0 or in BRW/0/k/0, see, e.g., [13]. This follows from the statement, see [12], that the mean number of particles \(m_{1}(t)=m_1(t,\cdot ,y)\) at a point \(y\in \mathbf {Z}^{d}\) in BRW/r/k/m is governed by the equation

$$ \frac{dm_{1}(t)}{dt}=\mathscr {Y}m_{1}(t), \quad m_{1}(0)=\delta _{y}, $$

where

$$\begin{aligned} \mathscr {Y}=\mathscr {A}+ \left( \sum _{s=1}^{r}\beta _{s}\varDelta _{z_{s}}\right) + \left( \sum _{i=1}^{k}\zeta _{i}\varDelta _{x_{i}}\mathscr {A}+ \sum _{i=1}^{k}\eta _{i}\varDelta _{x_{i}}\right) + \left( \sum _{j=1}^{m}\chi _{j}\varDelta _{y_{j}}\mathscr {A}\right) . \end{aligned}$$
(3)

Here \(\mathscr {A}:l^{p}(\mathbf {Z}^{d})\rightarrow l^{p}(\mathbf {Z}^{d})\), \(p\in [1,\infty ]\), is a symmetric operator, \(\varDelta _{x}=\delta _{x}\delta _{x}^{T}\), where \(\delta _{x}=\delta _{x}(\cdot )\) denotes the column vector on the lattice taking the unit value at the point x and vanishing at all other points, and \(\beta _{s}\), \(\zeta _{i}\), \(\eta _{i}\), and \(\chi _{j}\) are constants. The same equation is also valid for the mean number of particles (the mean population size) over the lattice \(m_{1}(t)=m_1(t,\cdot )\) with the initial condition \(m_{1}(0)=1\) in \(l^{\infty }(\mathbf {Z}^{d})\). Operator (3) can be written as

$$\begin{aligned} \mathscr {Y}=\mathscr {A}+\sum _{i=1}^{k+m}\zeta _{i}\varDelta _{u_{i}}\mathscr {A}+ \sum _{j=1}^{k+r}\beta _{j}\varDelta _{v_{j}}. \end{aligned}$$
(4)

In each of the sets \(U=\{u_{i}\}_{i=1}^{k+m}\) and \(V=\{v_{j}\}_{j=1}^{k+r}\) the points are pairwise distinct, but U and V may have a nonempty intersection. The points from \(V\setminus U\) correspond to the r sources of the first type; those from \(U\cap V\) to the k sources of the second type; and those from \(U\setminus V\) to the m sources of the third type.
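
For instance, in a BRW/1/1/1 one may take \(V=\{v_{1},v_{2}\}\) and \(U=\{u_{1},u_{2}\}\) with \(u_{1}=v_{2}\): then \(v_{1}\in V\setminus U\) is the source of the first type, \(u_{1}=v_{2}\in U\cap V\) is the source of the second type, and \(u_{2}\in U\setminus V\) is the pseudo-source.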

Denote the largest positive eigenvalue of the operator \(\mathscr {Y}\) by \(\lambda _{0}\). In contrast to [7], we consider BRW/r/k/m instead of BRW/r/0/0 and assume that in (4) the parameters \(\beta _{j}\) are real (\(\beta _{j}\in \mathbb {R}\)) instead of positive (\(\beta _{j}>0\)). Under this assumption, we conclude that if \(\lambda _{0}\) exists then it is simple and strictly positive, and it guarantees an exponential growth of the first moments \(m_{1}\) of the particle numbers both at an arbitrary point y and on the entire lattice.

Theorem 1

Let the operator \(\mathscr {Y}\) of the BRW/r/k/m under consideration have an isolated eigenvalue \(\lambda _{0}>0\), and let the remaining part of its spectrum be located on the half-line \(\{\lambda \in \mathbb {R}:~\lambda \leqslant \lambda _{0}-\epsilon \}\), where \(\epsilon >0\). If \(\beta _{i}^{(r)} = O(r! r^{r-1})\) for all \(i = 1, \ldots , N\) and \(r\in \mathbb {N}\), then the following statements hold in the sense of convergence in distribution:

$$\begin{aligned} \lim _{t \rightarrow \infty } \mu _{t}(y) e^{-\lambda _{0} t} = \psi (y)\xi ,\quad \lim _{t \rightarrow \infty } \mu _{t} e^{-\lambda _{0} t} = \xi , \end{aligned}$$
(5)

where \(\psi (y)\) is the eigenfunction corresponding to the eigenvalue \(\lambda _{0}\) and \(\xi \) is a nondegenerate random variable.

One approach to analysing Eqs. (2) and the evolution equations for the mean numbers of particles \(m_{1}(t,x,y)\) and \(m_{1}(t,x)\) is to treat them as differential equations in Banach spaces. To apply this approach to our case, we introduce the operators

$$ (\mathscr {A} u)(x)=\sum _{x'}a(x-x') u(x'),\qquad (\varDelta _{x_{i}}u)(x)=\delta (x-x_{i})u(x),\quad i=1,\ldots ,N. $$

acting on functions u(x), \(x\in \mathbf {Z}^{d}\). We can represent the operator (4) in a more convenient form:

$$\begin{aligned} \mathscr {Y}=\mathscr {Y}_{\beta _{1},\ldots ,\beta _{k+r}}= A+\sum _{i=1}^{k+r}\beta _{i}\varDelta _{v_{i}} \end{aligned}$$
(6)

where \(\beta _{i}\in \mathbb {R}\), \(i=1,\ldots ,k+r\). All operators in (6) can be considered as linear continuous operators in any of the spaces \(l^{p}(\mathbf {Z}^{d})\), \(p\in [1,\infty ]\). Note that the operator \(\mathscr {A}\) is self-adjoint in \(l^{2}(\mathbf {Z}^{d})\) [12, 13, 14].

Now, treating for each \(t\ge 0\) and each \(y\in \mathbf {Z}^{d}\) the functions \(p(t,\cdot ,y)\) and \(m_{1}(t,\cdot ,y)\) as elements of \(l^{p}(\mathbf {Z}^{d})\) for some p, we can write (see, for example, [12]) the following differential equations in \(l^{p}(\mathbf {Z}^{d})\):

$$\begin{aligned} \frac{d p(t,x,y)}{d t}&=(Ap(t,\cdot ,y))(x),&\qquad p(0,x,y)&=\delta (x-y),\\ \frac{d m_{1}(t,x,y)}{d t}&=(\mathscr {Y} m_{1}(t,\cdot ,y))(x),&\qquad m_{1}(0,x,y)&=\delta (x-y), \end{aligned}$$

and for \(m_{1}(t,x)\) the following differential equation in \(l^{\infty }(\mathbf {Z}^{d})\):

$$\begin{aligned} \frac{d m_{1}(t,x)}{d t}=(\mathscr {Y} m_{1}(t,\cdot ))(x), \qquad m_{1}(0,x)\equiv 1. \end{aligned}$$

We point out that for large t the asymptotic behaviour of the transition probabilities p(t, x, y), as well as of the mean particle numbers \(m_{1}(t,x,y)\) and \(m_{1}(t,x)\), is tightly connected with the spectral properties of the operators \(\mathscr {A}\) and \(\mathscr {Y}\).

The properties of p(t, x, y) can be expressed in terms of the Green’s function, which can be defined [11, § 2.2] as the Laplace transform of the transition probability p(t, x, y) or through the resolvent form:

$$ G_\lambda (x,y):=\int _0^\infty e^{-\lambda t}p(t,x,y)dt= \frac{1}{(2\pi )^d} \int _{ [-\pi ,\pi ]^{d}} \frac{e^{i(\theta , y-x)}}{\lambda -\phi (\theta )}d\theta ,\qquad \lambda \ge 0. $$

where \(x,y\in \mathbf {Z}^{d}\) and \(\phi (\theta )\) is the Fourier transform of the transition intensity a(z):

$$\begin{aligned} \phi (\theta ):=\sum _{z\in \mathbf {Z}^{d}}a(z)e^{i(\theta ,z)}=\sum _{x \in \mathbf {Z}^{d}} a(x) \cos (x, \theta ),\qquad \theta \in [-\pi ,\pi ]^{d}. \end{aligned}$$
(7)

The meaning of the function \(G_{0}(x,y)\) is as follows: it represents the mean amount of time spent by a particle at \(y\in \mathbf {Z}^{d}\) as \(t\rightarrow \infty \), provided that at the initial moment \(t=0\) the particle was at \(x\in \mathbf {Z}^{d}\). The asymptotic behaviour of the mean numbers of particles \(m_{1}(t,x,y)\) and \(m_{1}(t,x)\) as \(t\rightarrow \infty \) can be described in terms of the function \(G_\lambda (x,y)\), see, e.g., [11]. Lastly, as was shown in [10], the asymptotic behaviour of a BRW depends strongly on whether \(G_{0}: = G_{0}(0,0)\) is finite.
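
For a concrete illustration (a sketch under the assumption of a simple symmetric nearest-neighbour walk on \(\mathbf {Z}\), i.e. \(d=1\), \(a(\pm 1)=1/2\), \(a(0)=-1\), so that \(\phi (\theta )=\cos \theta -1\); this example is not from the paper), the Fourier representation of \(G_\lambda \) can be evaluated numerically:

```python
import numpy as np
from scipy.integrate import quad

def I0(lam):
    """G_lambda(0, 0) via the Fourier formula above, for d = 1 and
    phi(theta) = cos(theta) - 1 (simple symmetric walk on Z)."""
    val, _ = quad(lambda th: 1.0 / (lam - (np.cos(th) - 1.0)), -np.pi, np.pi)
    return val / (2.0 * np.pi)

# For this walk the integral has the closed form 1/sqrt(lambda*(lambda + 2)),
# which the quadrature reproduces:
for lam in (0.5, 1.0, 2.0):
    print(lam, I0(lam), 1.0 / np.sqrt(lam * (lam + 2.0)))
```

As \(\lambda \rightarrow 0\) the integral diverges, reflecting the fact that \(G_{0}=\infty \) for this recurrent walk in \(d=1\).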

The approach presented in this section is based on representing the BRW evolution equations as differential equations in Banach spaces. It can also be applied to a wide range of problems, including the description of the evolution of higher-order moments of particle numbers (see, e.g., [11, 12]).

2 Key Equations and Auxiliary Results

We start off with a crucial remark. Since the operator \(\mathscr {Y}\) is in general not self-adjoint, the vast analytical apparatus, developed in [13] and relying heavily on the self-adjointness of the operators involved, is not applicable here directly. Due to the structure of \(\mathscr {Y}\), however, this difficulty can be obviated, to a certain extent, with relative ease. Indeed, consider the following differential equation in a Banach space

$$ \frac{df(t, x, y)}{dt} = \mathscr {Y}f(t, x, y) $$

with \(\mathscr {Y}=\mathscr {A}+\sum _{i=1}^{k+m}\zeta _{i}\varDelta _{u_{i}}\mathscr {A}+ \sum _{j=1}^{k+r}\beta _{j}\varDelta _{v_{j}}\). Let us now introduce the operator

$$ D := \biggl (I + \sum _{i=1}^{k+m}\zeta _{i}\varDelta _{u_{i}} \biggr )^{-\frac{1}{2}}, $$

which is correctly defined for \(\zeta _{i} > -1\), and rewrite the equation using this notation:

$$ \frac{df(t, x, y)}{dt} = \biggl (D^{-2}\mathscr {A} + \sum _{j=1}^{k+r}\beta _{j}\varDelta _{v_{j}}\biggr )f(t, x, y), $$

which is equivalent to

$$ D^{-1}\frac{d D f(t, x, y)}{dt} = \biggl (D^{-1} D^{-1} \mathscr {A} D^{-1} + \sum _{j=1}^{k+r}\beta _{j} \varDelta _{v_{j}} D^{-1}\biggr )D f(t, x, y). $$

By applying D to both sides of the equation above, we obtain

$$ \frac{d D f(t, x, y)}{dt} = \biggl ( D^{-1} \mathscr {A} D^{-1} + \sum _{j=1}^{k+r}\beta _{j} D \varDelta _{v_{j}} D^{-1}\biggr )D f(t, x, y). $$

Since the operators D and \(\varDelta _{v_{j}}\) commute, the expression above is equivalent to

$$ \frac{d g(t, x, y)}{dt} = \biggl ( D^{-1} \mathscr {A} D^{-1} + \sum _{j=1}^{k+r}\beta _{j} \varDelta _{v_{j}}\biggr )g(t, x, y), $$

where \(g := Df\). We have thus rewritten the original equation in such a way that the previously non-self-adjoint operator \(\mathscr {Y}\) is replaced with the self-adjoint operator

$$ \mathscr {Y}^{\prime } := D^{-1} \mathscr {A} D^{-1} + \sum _{j=1}^{k+r}\beta _{j} \varDelta _{v_{j}}, $$

and a one-to-one correspondence between the solutions f to the starting equation and the solutions g to the new equation can be established through the formula \(g = Df\). Therefore, when it comes to analysing Cauchy problems, the operator \(\mathscr {Y}\) can, for all intents and purposes, be considered self-adjoint.
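
This similarity is easy to verify numerically on a finite truncation of the lattice. The following sketch (sizes, positions, and parameters are arbitrary illustrative choices, not taken from the paper) builds a symmetric matrix \(A_{0}\) with zero row sums, forms \(Y=A_{0}+\zeta \varDelta _{u}A_{0}+\beta \varDelta _{v}\), and checks that \(DYD^{-1}\) coincides with the self-adjoint matrix \(Y'=D^{-1}A_{0}D^{-1}+\beta \varDelta _{v}\):

```python
import numpy as np

n, u, v = 7, 2, 4              # truncation size and perturbation sites (illustrative)
zeta, beta = 0.5, 1.3          # zeta > -1, as required for D to be defined

A0 = np.zeros((n, n))          # symmetric intensities with zero row sums
for i in range(n - 1):
    A0[i, i + 1] = A0[i + 1, i] = 0.5
np.fill_diagonal(A0, -A0.sum(axis=1))

Du, Dv = np.zeros((n, n)), np.zeros((n, n))
Du[u, u], Dv[v, v] = 1.0, 1.0  # Delta_u and Delta_v

Y = A0 + zeta * Du @ A0 + beta * Dv          # = D^(-2) A0 + beta Delta_v
d = (1.0 + zeta * np.diag(Du)) ** -0.5
D, Dinv = np.diag(d), np.diag(1.0 / d)       # D = (I + zeta*Delta_u)^(-1/2)

Yprime = Dinv @ A0 @ Dinv + beta * Dv
assert np.allclose(D @ Y @ Dinv, Yprime)     # the similarity g = Df
assert np.allclose(Yprime, Yprime.T)         # Y' is self-adjoint
print(np.max(np.linalg.eigvals(Y).real), np.max(np.linalg.eigvalsh(Yprime)))
```

The two printed numbers agree: the non-self-adjoint Y and the self-adjoint Y' have the same spectrum.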

We introduce the Laplace generating functions of the random variables \(\mu _{t}(y)\) and \(\mu _{t}\) for \(z \geqslant 0\):

$$ F(z; t, x, y):= \mathsf {E}_{x} e^{-z \mu _{t}(y)},\qquad F(z; t, x):= \mathsf {E}_{x} e^{-z \mu _{t}}. $$

where \(\mathsf {E}_{x}\) is the expectation under the condition \(\mu _{0}(\cdot ) = \delta _{x}(\cdot )\).

Theorem 2

Let the operator A have the form (1). The functions F(z; t, x) and F(z; t, x, y) are continuously differentiable with respect to t, uniformly with respect to \(x,y\in {\mathbf{Z}}^{d}\), for all \(0 \leqslant z \leqslant \infty \). They satisfy the inequalities \(0 \leqslant F(z; t, x), F(z; t, x, y) \leqslant 1\) and are the solutions to the following Cauchy problems in \(l^{\infty }\left( {\mathbf{Z}}^{d} \right) \)

$$\begin{aligned} \frac{d F(z;t, \cdot )}{d t}&= A F(z;t, \cdot ) + \sum _{j=1}^{k+r} \varDelta _{v_{j}} f_{j} \left( F(z; t, \cdot ) \right) \end{aligned}$$
(8)

with the initial condition \(F(z; 0, \cdot ) = e^{-z}\) and

$$\begin{aligned} \frac{d F(z; t, \cdot , y)}{d t}&= A F(z;t, \cdot , y) + \sum _{j=1}^{k+r} \varDelta _{v_{j}} f_{j} \left( F(z;t, \cdot , y) \right) \end{aligned}$$
(9)

with the initial condition \(F(z;0, \cdot , y) = e^{-z \delta _{y}(\cdot )}\).

Theorem 2 allows us to advance from analysing the BRW at hand to considering the corresponding Cauchy problem in a Banach space instead. Note that, contrary to the single branching source case examined in [11], there are not one but several terms \(\varDelta _{v_{j}}f_{j}(F)\), \(j=1,2,\ldots ,N\), on the right-hand side of Eqs. (8) and (9).

Theorem 3

The moments \(m_{n}(t, \cdot , y)\in l^{2}\left( {\mathbf{Z}}^{d}\right) \) and \(m_{n}(t, \cdot )\in l^{\infty }\left( {\mathbf{Z}}^{d}\right) \) satisfy the following differential equations in the corresponding Banach spaces for all natural \(n \geqslant 1\):

$$\begin{aligned} \frac{d m_{1}}{dt}&= \mathscr {Y}m_{1},\end{aligned}$$
(10)
$$\begin{aligned} \frac{d m_{n}}{dt}&= \mathscr {Y}m_{n} + \sum _{j=1}^{k+r} \varDelta _{v_{j}} g_{n}^{(j)} (m_{1}, \ldots , m_{n-1}),\qquad n \geqslant 2, \end{aligned}$$
(11)

the initial values being \(m_{n}(0, \cdot , y) = \delta _{y}(\cdot )\) and \(m_{n}(0, \cdot ) \equiv 1\) respectively. Here \(\mathscr {Y}m_{n}\) stands for \(\mathscr {Y}m_{n}(t, \cdot , y)\) or \(\mathscr {Y}m_{n} (t, \cdot )\) respectively, and

$$\begin{aligned} g_{n}^{(j)} (m_{1}, \ldots , m_{n-1}) :=\sum _{q=2}^{n} \frac{\beta _{j}^{(q)}}{q!} \sum _{\begin{array}{c} i_{1}, \ldots , i_{q} > 0 \\ i_{1} + \cdots + i_{q} = n \end{array}} \frac{n!}{i_{1}! \cdots i_{q}!} m_{i_{1}} \cdots m_{i_{q}}. \end{aligned}$$
(12)
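
For instance, for \(n = 2\) the sum in (12) contains the single term with \(q = 2\) and \(i_{1} = i_{2} = 1\), so that

$$ g_{2}^{(j)}(m_{1}) = \frac{\beta _{j}^{(2)}}{2!}\cdot \frac{2!}{1!\,1!}\, m_{1}^{2} = \beta _{j}^{(2)} m_{1}^{2}, $$

and Eq. (11) for the second moments takes the form \(\frac{d m_{2}}{dt} = \mathscr {Y}m_{2} + \sum _{j=1}^{k+r} \varDelta _{v_{j}} \beta _{j}^{(2)} m_{1}^{2}\).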

Theorem 3 will later be used in the proof of Theorem 8 to help determine the asymptotic behaviour of the moments as \(t\rightarrow \infty \).

Theorem 4

The moments \(m_{1}(t, x,\cdot )\in l^{2}\left( {\mathbf{Z}}^{d}\right) \) satisfy the following Cauchy problem in \(l^{2}\left( {\mathbf{Z}}^{d}\right) \):

$$ \frac{d m_{1}(t, x,\cdot )}{d t} = \mathscr {Y} m_{1}(t, x,\cdot ),\qquad m_{1} (0, x, \cdot ) = \delta _{x}(\cdot ). $$

This theorem allows us to obtain different differential equations by making use of the BRW symmetry.

Theorem 5

The moment \(m_{1}(t, x, y)\) satisfies both integral equations

$$\begin{aligned} m_{1}(t, x, y)&= p(t, x, y) + \sum _{j=1}^{k+r} \beta _{j} \int _{0}^{t} p(t-s, x, v_{j}) m_{1}(s, v_{j}, y)\,ds,\\ m_{1}(t, x, y)&= p(t, x, y) + \sum _{j=1}^{k+r} \beta _{j} \int _{0}^{t} p(t-s, v_{j}, y) m_{1}(s, x, v_{j})\,ds. \end{aligned}$$

The moment \(m_{1}(t, x)\) satisfies both integral equations

$$\begin{aligned} m_{1}(t, x)&= 1 + \sum _{j=1}^{k+r} \beta _{j} \int _{0}^{t} p(t-s, x, v_{j}) m_{1}(s, v_{j}) ds,\end{aligned}$$
(13)
$$\begin{aligned} m_{1}(t, x)&= 1 + \sum _{j=1}^{k+r} \beta _{j} \int _{0}^{t} m_{1}(s, x, v_{j}) ds. \end{aligned}$$
(14)

For \(n > 1\) the moments \(m_{n}(t, x, y)\) and \(m_{n}(t, x)\) satisfy the equations

$$\begin{aligned} m_{n}(t, x, y)&= m_{1}(t, x, y)\\ {}&\quad + \sum _{j=1}^{k+r} \int _{0}^{t} m_{1}(t-s, x, v_{j}) g_{n}^{(j)} \left( m_{1}(s, v_{j}, y), \ldots , m_{n-1}(s, v_{j}, y) \right) ds,\\ m_{n}(t, x)&= m_{1}(t, x)\\ {}&\quad + \sum _{j=1}^{k+r} \int _{0}^{t} m_{1}(t-s, x, v_{j}) g_{n}^{(j)} \left( m_{1}(s, v_{j}), \ldots , m_{n-1}(s, v_{j}) \right) ds . \end{aligned}$$

This theorem allows us to make the transition from differential equations to integral equations. It is later used to prove Theorem 8.

Theorems 3–5 are a generalization to the case BRW/r/k/m of Lemma 1.2.1, Theorem 1.3.1 and Theorem 1.4.1 from [11], proved there for BRW/1/0/0. The proofs of Theorems 3–5 differ only in technical details from the proofs of the above statements from [11] and are therefore omitted here.

3 Properties of the Operator \(\mathscr {Y}\)

We call a BRW supercritical if \(\mu _{t}(y)\) and \(\mu _{t}\) grow exponentially. As was mentioned in the Introduction, one of the main results of this work is the limit behavior (5) of the particle numbers, from which it follows that a BRW with several branching sources with arbitrary intensities is supercritical if the operator \(\mathscr {Y}\) has a positive eigenvalue. For this reason we devote this section to studying the spectral properties of the operator \(\mathscr {Y}\).

We mention a statement proved in [11, Lemma 3.1.1].

Lemma 1

The spectrum \(\sigma (\mathscr {A})\) of the operator \(\mathscr {A}\) is included in the half-line \((-\infty , 0]\). Also, since the operator \(\sum _{j=1}^{N} \beta _{j} \varDelta _{v_{j}}\) is compact, \(\sigma _{ess}(\mathscr {Y}) = \sigma \left( \mathscr {A} \right) \subset (-\infty , 0]\), where \(\sigma _{ess}(\mathscr {Y})\) denotes the essential spectrum [6] of the operator \(\mathscr {Y}\).

The following theorem provides a criterion for the existence of a positive eigenvalue in the spectrum of the operator \(\mathscr {Y}\).

Theorem 6

A number \(\lambda > 0\) is an eigenvalue and \(f \in l^{2}\left( {\mathbf{Z}}^{d} \right) \) is the corresponding eigenvector of the operator \(\mathscr {Y}\) if and only if the system of linear equations

$$\begin{aligned} f(u_{i})&=\frac{1}{1+\zeta _{i}} \biggl ( \lambda \sum _{j=1}^{k+m} \zeta _{j} f(u_{j}) I_{u_{j} - u_{i}} (\lambda ) + \sum _{j=1}^{k+r}\beta _{j}f(v_{j}) I_{v_{j} - u_{i}}(\lambda ) \biggr ) \end{aligned}$$
(15)

for \(i = 1, \ldots , k+m\), and

$$\begin{aligned} f(v_{i})&= \biggl ( \lambda \sum _{j=1}^{k+m} \zeta _{j} f(u_{j}) I_{u_{j} - v_{i}} (\lambda ) + \sum _{j=1}^{k+r}\beta _{j}f(v_{j}) I_{v_{j} - v_{i}} (\lambda ) \biggr ) \end{aligned}$$
(16)

for \(i = 1, \ldots , k+r\), with respect to the variables \(f(u_{j})\) and \(f(v_{j})\), where

$$ I_{x}(\lambda ) := G_{\lambda }(x, 0) = \frac{1}{(2\pi )^{d}} \int _{[-\pi , \pi ]^{d}} \frac{e^{-i(\theta , x)}}{\lambda - \phi (\theta )} d\theta ,\qquad x \in {\mathbf{Z}}^{d}, $$

has a non-trivial solution.

Proof

For \(\lambda > 0\) to be an eigenvalue of the operator \(\mathscr {Y}\) it is necessary and sufficient that there be a non-zero element \(f \in l^{2}\left( {\mathbf{Z}}^{d} \right) \) that satisfies the equation

$$ \left( \mathscr {Y} - \lambda I \right) f = \biggl (\mathscr {A}+\sum _{i=1}^{k+m}\zeta _{i}\varDelta _{u_{i}}\mathscr {A}+ \sum _{j=1}^{k+r}\beta _{j}\varDelta _{v_{j}} - \lambda I \biggr )f = 0. $$

Clearly, for any invertible operator C, this equation has a non-trivial solution if and only if the corresponding equation for the operator \(C^{-1} \mathscr {Y} C\) does; let us set \(C := \left( I + \sum _{i=1}^{k+m}\zeta _{i}\varDelta _{u_{i}} \right) ^{\frac{1}{2}}\), which is correctly defined since \(\zeta _{i} > -1\) for all i. Thus the equation above can be rewritten as follows:

$$ \biggl (\mathscr {A}+\sum _{i=1}^{k+m}\zeta _{i}\mathscr {A}\varDelta _{u_{i}}+ \sum _{j=1}^{k+r}\beta _{j}\varDelta _{v_{j}} - \lambda I \biggr )f = 0 $$

Since \((\varDelta _{v_{j}} f)(x) := f(x)\delta _{v_{j}}(x) = f(v_{j})\delta _{v_{j}}(x)\) and \(( \mathscr {A}\varDelta _{u_{i}} f)(x) = f(u_{i})(\mathscr {A}\delta _{u_{i}})(x) \), the preceding expression can be rewritten as follows:

$$ (\mathscr {A}f)(x) + \sum _{j=1}^{k+m} \zeta _{j}f(u_{j})(\mathscr {A}\delta _{u_{j}})(x) + \sum _{j=1}^{k+r} \beta _{j}f(v_{j})\delta _{v_{j}}(x) = \lambda f(x),\qquad x \in {\mathbf{Z}}^{d}. $$

We apply Fourier transform to this equality and obtain

$$\begin{aligned} (\widetilde{\mathscr {A}f})(\theta ) + \sum _{j=1}^{k+m} \zeta _{j}f(u_{j})\widetilde{\mathscr {A}\delta _{u_{j}}}(\theta ) + \sum _{j=1}^{k+r} \beta _{j} f(v_{j}) e^{i(\theta , v_{j})} = \lambda \tilde{f}(\theta ), \end{aligned}$$
(17)

for \(\theta \in [-\pi , \pi ]^{d}\). The Fourier transform \(\widetilde{\mathscr {A}f}\) of \((\mathscr {A}f)(x)\) is of the form \(\phi \tilde{f}\), where \(\tilde{f}\) is the Fourier transform of f, and \(\phi (\theta )\) is defined by the equality (7), see [11, Lemma 3.1.1]. With this in mind, and making use of the fact that, by the definition of the Fourier transform,

$$ \widetilde{\mathscr {A}\delta _{u_{j}}}(\theta ) = \phi (\theta ) \widetilde{\delta _{u_{j}}} (\theta ) = \phi (\theta ) \sum _{x \in \mathbf {Z}^{d}} \delta _{u_{j}}(x) e^{i(x, \theta )} = \phi (\theta ) e^{i(u_{j}, \theta )}, $$

we rewrite equality (17) as

$$ \phi (\theta ) \tilde{f}(\theta ) + \sum _{j=1}^{k+m} \zeta _{j}f(u_{j}) \phi (\theta ) e^{i(u_{j}, \theta )} + \sum _{j=1}^{k+r} \beta _{j} f(v_{j}) e^{i(\theta , v_{j})} = \lambda \tilde{f}(\theta ), $$

or

$$\begin{aligned} \tilde{f}(\theta ) = \frac{1}{\lambda - \phi (\theta )} \biggl [\sum _{j=1}^{k+m} \zeta _{j}f(u_{j}) \phi (\theta ) e^{i(u_{j}, \theta )} + \sum _{j=1}^{k+r} \beta _{j} f(v_{j}) e^{i(\theta , v_{j})}\biggr ], \end{aligned}$$
(18)

where \(\theta \in [-\pi , \pi ]^{d}\). Since \(\lambda > 0\) and \(\phi (\theta ) \leqslant 0\), we have \( \int _{[-\pi , \pi ]^{d}} |\lambda - \phi (\theta )|^{-2} d\theta < \infty \), which allows us to apply the inverse Fourier transform \(\varPhi ^{-1}\) to equality (18). Since

$$\begin{aligned} \varPhi ^{-1}&\biggl [\frac{1}{\lambda - \phi (\theta )} \sum _{j=1}^{k+m} \zeta _{j}f(u_{j}) \phi (\theta ) e^{i(u_{j}, \theta )}\biggr ] \\ {}&= \varPhi ^{-1} \biggl [-\sum _{j=1}^{k+m} \zeta _{j}f(u_{j}) e^{i(u_{j}, \theta )} + \frac{\lambda }{\lambda - \phi (\theta )} \sum _{j=1}^{k+m} \zeta _{j}f(u_{j}) e^{i(u_{j}, \theta )} \biggr ] \\ {}&= -\sum _{j=1}^{k+m} \zeta _{j}f(u_{j}) \frac{1}{(2\pi )^{d}} \int _{[-\pi , \pi ]^{d}} e^{-i(\theta , u_{j} - x)}d\theta + \lambda \sum _{j=1}^{k+m} \zeta _{j}f(u_{j}) I_{u_{j} - x} (\lambda ) \\ {}&= -\sum _{j=1}^{k+m} \zeta _{j}f(u_{j}) \mathbf {I} [x = u_{j}] + \lambda \sum _{j=1}^{k+m} \zeta _{j}f(u_{j}) I_{u_{j} - x} (\lambda ) \end{aligned}$$

we obtain

$$\begin{aligned} f(x)&+ \sum _{j=1}^{k+m} \zeta _{j}f(u_{j}) \mathbf {I} [x =u_{j}] \nonumber \\&= \lambda \sum _{j=1}^{k+m} \zeta _{j}f(u_{j}) I_{u_{j} -x} (\lambda ) + \sum _{j=1}^{k+r} \beta _{j} I_{v_{j} - x}(\lambda ) f(v_{j}). \end{aligned}$$
(19)

By choosing \(x = u_{i}\), where \(i = 1, \ldots , k+m\), or \(x = v_{i}\), where \(i = 1, \ldots , k+r\), we can rewrite (19) as follows:

$$\begin{aligned} f(u_{i})&=\frac{1}{1+\zeta _{i}} \biggl ( \lambda \sum _{j=1}^{k+m} \zeta _{j} f(u_{j}) I_{u_{j} - u_{i}} (\lambda ) + \sum _{j=1}^{k+r}\beta _{j}f(v_{j}) I_{v_{j} - u_{i}} (\lambda ) \biggr ),\\ f(v_{i})&= \lambda \sum _{j=1}^{k+m} \zeta _{j} f(u_{j}) I_{u_{j} - v_{i}} (\lambda ) + \sum _{j=1}^{k+r}\beta _{j}f(v_{j}) I_{v_{j} - v_{i}} (\lambda ). \end{aligned}$$

We note that any solution of system (15)–(16) completely defines f(x) on the entirety of its domain by formula (19), which proves the theorem.    \(\square \)

Suppose that among the \(k+r\) sources at which branching occurs there are \(s \le k+r\) sources with intensities \(\beta _i > 0\), \(i=1, \ldots , s\), while the remaining \(k+r-s\) sources have intensities \(\beta _i\le 0\), \(i=s+1, \ldots , k+r\). We represent the operator \(\mathscr {Y}\) defined by (6) as follows:

$$ \mathscr {Y} = \mathscr {A} + \sum \limits _{i=1}^{k+m} \zeta _i \varDelta _{u_i}\mathscr {A} + \sum \limits _{i=1}^{s} \beta _i \varDelta _{v_i} + \sum \limits _{i=s+1}^{k+r} \beta _i \varDelta _{v_i}. $$

Define the operator

$$ \mathscr {B} := \lambda I - \mathscr {A} - \sum \limits _{i=1}^{k+m} \zeta _i \varDelta _{u_i}\mathscr {A} - \sum \limits _{i=s+1}^{k+r} \beta _i \varDelta _{v_i}, $$

Then the eigenvector h corresponding to the eigenvalue \(\lambda \) of \(\mathscr {Y}\) satisfies the equation

$$ \mathscr {B} h = \sum \limits _{i=1}^{s} \beta _i \delta _{v_i} \langle \delta _{v_i}, h \rangle . $$

Note that \(\langle \mathscr {A} x, x \rangle \le 0\). Besides, \(\beta _i \le 0\) for \(i=s+1, \ldots , k+r\), and therefore \(\langle \sum \limits _{i=s+1}^{k+r} \beta _i \varDelta _{v_i}x, x \rangle \le 0\). Hence, the operator \(\mathscr {B}\) is invertible. The problem of the existence of positive eigenvalues of the operator \(\mathscr {Y}\) is thus converted into the question of the existence of nonzero solutions of the equation \(h = \mathscr {B}^{-1}\sum \limits _{j=1}^{s} \beta _j \delta _{v_j}\langle \delta _{v_j},h\rangle \), which, after introducing the auxiliary variables \(q_{i}=\langle \delta _{v_i},h\rangle \) and taking the scalar product of this equality with \(\delta _{v_{i}}\), reduces to the finite system of equations

$$\begin{aligned} q_{i}= \sum _{j=1}^{s}\beta _{j}\langle \delta _{v_{i}},\mathscr {B}^{-1} \delta _{v_{j}}\rangle q_{j}, \qquad i = 1, 2, \ldots , s. \end{aligned}$$
(20)

Define the matrix \(B^{(\lambda )}\) by

$$\begin{aligned} B^{(\lambda )}_{i, j} :=\beta _{j}\langle \delta _{v_i},\mathscr {B}^{-1} \delta _{v_j}\rangle , \ i, j = 1, \ldots , s. \end{aligned}$$
(21)

In matrix form, system (20) reads

$$\begin{aligned} q = B^{(\lambda )}q, \end{aligned}$$
(22)

and the problem on positive eigenvalues for \(\mathscr {Y}\) is reduced to the question for which \(\lambda > 0\) the number 1 is an eigenvalue of the matrix \(B^{(\lambda )}\).
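
In the simplest case \(r = 1\), \(k = m = 0\) with a single source of positive intensity \(\beta \) (so \(s = 1\)), the matrix \(B^{(\lambda )}\) is the scalar \(\beta I_{0}(\lambda )\), and \(\lambda _{0}\) solves \(\beta I_{0}(\lambda ) = 1\). A numerical sketch for the nearest-neighbour walk on \(\mathbf {Z}\) used in the earlier example (an illustrative assumption, not the general setting of the paper):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def I0(lam):
    """G_lambda(0, 0) for the simple symmetric walk on Z (phi = cos - 1)."""
    val, _ = quad(lambda th: 1.0 / (lam - (np.cos(th) - 1.0)), -np.pi, np.pi)
    return val / (2.0 * np.pi)

beta = 0.8
# In d = 1, beta*I0(lambda) decreases from +infinity to 0 on (0, infinity),
# so the equation beta*I0(lambda) = 1 has a unique positive root.
lam0 = brentq(lambda lam: beta * I0(lam) - 1.0, 1e-8, 100.0)
print(lam0, np.sqrt(1.0 + beta**2) - 1.0)   # closed form for this walk
```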

Theorem 7

Let \(\lambda _{0}>0\) be the largest eigenvalue of the operator \(\mathscr {Y}\). Then \(\lambda _{0}\) is a simple eigenvalue of \(\mathscr {Y}\), and 1 is the largest eigenvalue of the matrix \(B^{(\lambda _{0})}\).

Proof

Denote \(\zeta := \max (0, \max \limits _{i}(\zeta _i)) \ge 0\) and note that the elements of the operator

$$ \tilde{\mathscr {A}} := \mathscr {A} + \sum \limits _{i=1}^{k+m} \zeta _i \varDelta _{u_i}\mathscr {A} - a(0, 0)(\zeta + 1)I $$

are non-negative. It follows from Schur’s test [4] that in each of the spaces \(l^{p}(\mathbf {Z}^{d})\) the operator norm of \(\tilde{\mathscr {A}}\) admits the estimate

$$\begin{aligned} \Vert \tilde{\mathscr {A}}\Vert _{p}\le -a(0, 0)(\zeta +1). \end{aligned}$$
(23)

The operator \(\mathscr {B}\) can be represented as follows:

$$ \mathscr {B} = \lambda I -a(0, 0)(\zeta +1)I- \sum \limits _{i=s+1}^{k+r} \beta _i \varDelta _{v_i} -\tilde{\mathscr {A}}=\mathscr {F}_{\lambda }-\tilde{\mathscr {A}}, $$

where the operator

$$ \mathscr {F}_{\lambda } = \lambda I -a(0, 0)(\zeta +1)I- \sum \limits _{i=s+1}^{k+r} \beta _i \varDelta _{v_i} $$

is diagonal with all its diagonal elements no less than \(-a(0, 0)(\zeta + 1) + \lambda > 0\). Then

$$\begin{aligned} \Vert \mathscr {F}_{\lambda }^{-1}\Vert _{p}\le \frac{1}{-a(0, 0)(\zeta +1)+\lambda }. \end{aligned}$$
(24)

Thus \(\mathscr {B}\) can be represented in the form \(\mathscr {B} = \mathscr {F}_{\lambda }\left( I-\mathscr {F}_{\lambda }^{-1}\tilde{\mathscr {A}}\right) \), and therefore

$$\begin{aligned} \mathscr {B}^{-1}=\left( I-\mathscr {F}_{\lambda }^{-1}\tilde{\mathscr {A}}\right) ^{-1}\mathscr {F}_{\lambda }^{-1}. \end{aligned}$$
(25)

Here by virtue of (23) and (24) the operator norm of \(\mathscr {F}_{\lambda }^{-1}\tilde{\mathscr {A}}\) is less than one:

$$ \Vert \mathscr {F}_{\lambda }^{-1}\tilde{\mathscr {A}}\Vert _{p}\le \frac{-a(0, 0)(\zeta +1)}{-a(0, 0)(\zeta +1)+\lambda }<1, $$

and therefore the operator \(\left( I-\mathscr {F}_{\lambda }^{-1}\tilde{\mathscr {A}}\right) ^{-1}\) can be represented as a series:

$$ \left( I-\mathscr {F}_{\lambda }^{-1}\tilde{\mathscr {A}}\right) ^{-1}=\sum _{n=0}^{\infty } \left( \mathscr {F}_{\lambda }^{-1}\tilde{\mathscr {A}}\right) ^{n}. $$

Hence, by virtue of (25)

$$\begin{aligned} \mathscr {B}^{-1}=\sum _{n=0}^{\infty } \left( \mathscr {F}_{\lambda }^{-1}\tilde{\mathscr {A}}\right) ^{n}\mathscr {F}_{\lambda }^{-1}. \end{aligned}$$
(26)

Note that the right-hand side of (26) is the sum of products of operators (infinite matrices) with non-negative elements. Therefore, each element of the operator (infinite matrix) \(\mathscr {B}^{-1}\) is non-negative.

Let us prove that each element of the operator \(\mathscr {B}^{-1}\) is strictly positive. For the proof, we use the fact that our random walk is irreducible: since the random walk generated by the operator \(\mathscr {A}\) is irreducible, for any pair \(x, y \in \mathbf {Z}^{d}\) there exist \(n \ge 1\) and a set of points

$$\begin{aligned} u_{0},u_{1},\ldots ,u_{n}\in \mathbf {Z}^{d},\qquad u_{0}=x,~u_{n}=y, \end{aligned}$$
(27)

such that \(a(u_{1}-u_{0})a(u_{2}-u_{1})\cdots a(u_{n}-u_{n-1}) > 0\), whence, for the corresponding matrix elements \(\overline{a}\) of the operator \(\tilde{\mathscr {A}}\), it follows that

$$\begin{aligned} \overline{a}(u_{1}-u_{0})\overline{a}(u_{2}-u_{1}) \cdots \overline{a}(u_{n}-u_{n-1}) > 0. \end{aligned}$$
(28)

Note that the elements of the infinite matrix \(\mathscr {B}^{-1}\) are indexed by pairs of points \(x, y\in \mathbf {Z}^{d}\). In addition, the element with indices (x, y) of the matrix \(\left( \mathscr {F}_{\lambda }^{-1}\tilde{\mathscr {A}}\right) ^{n}\) in (26) is a sum of the form

$$\begin{aligned} \sum _{u_{0}=x,~u_{n}=y} \overline{a}(u_{1}-u_{0})\overline{a}(u_{2}-u_{1})\cdots \overline{a}(u_{n}-u_{n-1}) f_{u_{0},u_{1},\ldots ,u_{n}} \end{aligned}$$
(29)

taken over all possible “chains” of n elements satisfying (27), in which the positive factors \(f_{u_{0},u_{1},\ldots ,u_{n}}\) arise from the diagonal factors \(\mathscr {F}_{\lambda }^{-1}\) in \(\left( \mathscr {F}_{\lambda }^{-1}\tilde{\mathscr {A}}\right) ^{n}\). But by virtue of (28) at least one term in the sum (29) is strictly positive, while the rest are non-negative. Hence, the entire sum is also strictly positive, which implies that all elements of the operator (infinite matrix) \(\mathscr {B}^{-1}\) are strictly positive. Since the elements of \(\mathscr {B}^{-1}\) are strictly positive and \(\beta _{j} > 0\) for \(j = 1, \ldots , s\) (see (21)), the matrix \(B^{(\lambda )}\) is positive.

The right-hand side of (26) contains the operators \(\mathscr {F}_{\lambda }^{-1}\), whose elements decrease monotonically in \(\lambda > 0\) and tend to zero as \(\lambda \rightarrow \infty \). Since all the operators (infinite matrices) multiplied and added there are positive, all their elements likewise decrease monotonically in \(\lambda > 0\) and tend to zero as \(\lambda \rightarrow \infty \).

We first show that if \(\lambda _{0}\) is the largest eigenvalue of the operator \(\mathscr {Y}\), then the largest (in absolute value) eigenvalue of the matrix \(B^{(\lambda _{0})}\) is 1. Indeed, assume that this is not the case.

It follows from (22) that \(\lambda _{0} > 0\) is an eigenvalue of \(\mathscr {Y}\) if and only if 1 is an eigenvalue of \(B^{(\lambda _{0})}\). All elements of \(B^{(\lambda _{0})}\) are strictly positive. Consequently, by the Perron–Frobenius theorem, see [5, Theorem 8.4.4], the matrix \(B^{(\lambda _{0})}\) has a strictly positive eigenvalue that is strictly greater in absolute value than any other of its eigenvalues.

Let us denote the dominant eigenvalue of \(B^{(\lambda _{0})}\) by \(\gamma (\lambda _{0})\). Since we assumed that 1 is not the largest eigenvalue of \(B^{(\lambda _{0})}\), we have \(\gamma (\lambda _{0})> 1\). Since the functions \(I_{v_{i} - v_{j}}(\lambda )\) are continuous in \(\lambda \), all elements of \(B^{(\lambda )}\), and hence all its eigenvalues, are continuous functions of \(\lambda \). All eigenvalues of \(B^{(\lambda )}\) tend to zero as \(\lambda \rightarrow \infty \), because \(I_{v_{i} - v_{j}}(\lambda ) \rightarrow 0\) as \(\lambda \rightarrow \infty \) for all i and j. Hence there is a \(\hat{\lambda } > \lambda _{0}\) such that \(\gamma (\hat{\lambda }) = 1\). Then, as was shown earlier, this \(\hat{\lambda }\) has to be an eigenvalue of \(\mathscr {Y}\), which contradicts the assumption that \(\lambda _{0}\) is the largest eigenvalue of the operator \(\mathscr {Y}\).

We have just proved that 1 is the largest eigenvalue of \(B^{(\lambda _{0})}\); the Perron–Frobenius theorem then yields the simplicity of this eigenvalue. Therefore, to complete the proof, we have to show the simplicity of the eigenvalue \(\lambda _{0}\) of \(\mathscr {Y}\).

Suppose the contrary: \(\lambda _{0}\) is not simple. In this case, there are at least two linearly independent eigenvectors \(f_{1}\) and \(f_{2}\) corresponding to the eigenvalue \(\lambda _{0}\). Applying equality (19) again, we notice that the linear independence of the vectors \(f_{1}\) and \(f_{2}\) is equivalent to the linear independence of the vectors

$$ \hat{f}_{i} := \left( f_{i}(u_{1}), \ldots , f_{i}(u_{N}) \right) ,\qquad i = 1, 2. $$

From the definition of \(B^{(\lambda )}\) and Theorem 6 it follows that the vectors \(\hat{f}_{1}\) and \(\hat{f}_{2}\) satisfy \(\left( B^{(\lambda _{0})} - I\right) \hat{f}_{i} = 0\), which contradicts the simplicity of the eigenvalue 1 of \(B^{(\lambda _{0})}\). This completes the proof.    \(\square \)

Lemma 2

Let \(\mathscr {Y}\) be a self-adjoint continuous operator on a separable Hilbert space E whose spectrum is a disjoint union of two sets: the first is a finite (counting multiplicity) set of isolated eigenvalues \(\lambda _{i} > 0\), and the second, the remaining part of the spectrum, is included in \([-s, 0]\), \(s> 0\). Then the solution m(t) of the Cauchy problem

$$ \frac{d m(t)}{dt} = \mathscr {Y} m(t),\qquad m(0) = m_{0}, $$

satisfies the condition

$$ \lim _{t \rightarrow \infty } e^{-\lambda _{0} t} m(t) = C\left( m_{0}\right) , $$

where \(\lambda _{0} = \max _{i} \lambda _{i}\).

Proof

Let us denote by \(V_{\lambda _{i}}\) the finite-dimensional eigenspace of \(\mathscr {Y}\) that corresponds to the eigenvalue \(\lambda _{i}\).

We consider the spectral projection \(P_{i}\) onto \(V_{\lambda _{i}}\), see [6]. Let

$$\begin{aligned} x_{i}(t)&:= P_{i} m(t),\\ v(t)&:= \left( I - \sum _{i} P_{i} \right) m(t) = m(t) - \sum _{i} x_{i}(t). \end{aligned}$$

All spectral operators \(P_{i}\) and \(\left( I - \sum P_{i} \right) \) commute with \(\mathscr {Y}\), see [6]. Therefore

$$\begin{aligned} \frac{d x_{i}(t)}{dt}&= P_{i} \mathscr {Y} m(t) = \mathscr {Y} x_{i}(t)\\ \frac{d v(t)}{dt}&= \left( I - \sum P_{i} \right) \mathscr {Y} m(t) = \left( I - \sum P_{i} \right) \mathscr {Y} \left( I - \sum P_{i} \right) v(t). \end{aligned}$$

As \(x_{i}(t) \in V_{\lambda _{i}}\), we can see that \(\mathscr {Y} x_{i}(t) = \lambda _{i} x_{i}(t)\), from which it follows that \(x_{i}(t) = e^{\lambda _{i} t} x_{i}(0)\). Since the spectrum of the operator

$$\mathscr {Y}_{0} := \left( I - \sum P_{i} \right) \mathscr {Y} \left( I - \sum P_{i} \right) $$

is included in the spectrum of the operator \(\mathscr {Y}\) and \(\mathscr {Y}_{0}\) has none of the isolated eigenvalues \(\lambda _{i}\), it is included in \([-s, 0]\). From this we obtain \(|v(t)| \leqslant |v(0)|\) for all \(t\geqslant 0\), see [11, Lemma 3.3.5]. Hence

$$\begin{aligned} m(t) = \sum _{i} e^{\lambda _{i} t} P_{i} m(0 ) + v(t), \end{aligned}$$
(30)

Multiplying (30) by \(e^{-\lambda _{0} t}\) and letting \(t \rightarrow \infty \), we see that all the terms with \(\lambda _{i} < \lambda _{0}\) vanish in the limit, as does \(e^{-\lambda _{0} t}v(t)\), so that \(e^{-\lambda _{0} t} m(t) \rightarrow P_{0}m(0)\), and the proof is complete.    \(\square \)

Remark 1

Let \(\lambda _{0}\) be the largest eigenvalue of \(\mathscr {Y}\). Denote \(P_{0} m(0)=C(m_{0})\) in (30). Then \(C(m_0)\ne 0\) if and only if the orthogonal projection \(P_{0} m(0)\) of the initial value \(m_0=m(0)\) onto the eigenspace corresponding to the eigenvalue \(\lambda _{0}\) is non-zero.

If the eigenvalue \(\lambda _{0}\) of \(\mathscr {Y}\) is simple and f is an eigenvector corresponding to \(\lambda _{0}\), the projection \(P_{0}\) is defined by the formula \(P_{0}x=\frac{(f,x)}{(f,f)}f\), where \((\cdot ,\cdot )\) denotes the scalar product in the Hilbert space E. When \(\lambda _{0}\) is not simple, describing the projection \(P_{0}\) is a significantly more difficult task.

We remind the reader that the simplicity of the largest eigenvalue of \(\mathscr {Y}\) was proved above, which allows us to bypass this issue.

Theorem 8

Let the operator \(\mathscr {Y}\) defined by (6), with the parameters \(\lbrace \zeta _{i} \rbrace _{i=1}^{k+m}\) and \(\lbrace \beta _{i} \rbrace _{i=1}^{k+r}\), have a finite number of positive eigenvalues (counting multiplicity). We denote by \(\lambda _{0}\) the largest of them, and by f the normalized eigenvector corresponding to \(\lambda _{0}\). Then for \(t \rightarrow \infty \) and all \(n \in {\mathbf{N}}\) the following statements hold:

$$\begin{aligned} m_{n}(t, x, y) \sim C_{n}(x, y) e^{n \lambda _{0} t},\quad m_{n}(t, x) \sim C_{n}(x) e^{n\lambda _{0} t}, \end{aligned}$$
(31)

where

$$ C_{1} (x, y) = f(y) f(x),\qquad C_{1}(x) = f(x)\frac{1}{\lambda _{0}} \sum _{j=1}^{k+r} \beta _{j} f(v_{j}), $$

and the functions \(C_{n}(x, y)\) and \(C_{n}(x) > 0\) for \(n \geqslant 2\) are defined as follows:

$$\begin{aligned} C_{n}(x, y)&= \sum _{j=1}^{k+r} g^{(j)}_{n}\left( C_{1}(v_{j}, y), \ldots , C_{n-1}(v_{j}, y) \right) D^{(j)}_{n}(x),\\ C_{n}(x)&= \sum _{j=1}^{k+r} g^{(j)}_{n}\left( C_{1}(v_{j}), \ldots , C_{n-1}(v_{j}) \right) D^{(j)}_{n}(x), \end{aligned}$$

where \(D^{(j)}_{n}(x)\) are certain functions satisfying the estimate \(|D_{n}^{(j)} (x)|\leqslant \frac{2}{n\lambda _{0}}\) for \(n\geqslant n_{*}\) and some \(n_{*}\in {\mathbf{N}}\), and \(g^{(j)}_{n}\) are the functions defined in (12).

Proof

For \(n \in {\mathbf{N}}\) we consider the functions \(\nu _{n} := m_{n} (t, x, y) e^{-n \lambda _{0} t}\). From Theorem 3 (see Eqs. (10) and (11) for \(m_{n}\)) we obtain the following equations for \(\nu _{n}\):

$$\begin{aligned} \frac{d \nu _{1}}{dt}&= \mathscr {Y} \nu _{1} - \lambda _{0} \nu _{1},\\ \frac{d \nu _{n}}{dt}&= \mathscr {Y} \nu _{n} - n \lambda _{0} \nu _{n} + \sum _{j=1}^{k+r} \varDelta _{v_{j}} g_{n}^{(j)} \left( \nu _{1}, \ldots , \nu _{n-1} \right) ,\qquad n \geqslant 2, \end{aligned}$$

the initial values being \(\nu _{n}(0, \cdot , y) = \delta _{y}(\cdot ), n \in {\mathbf{N}}\).

Since \(\lambda _{0}\) is the largest eigenvalue of \(\mathscr {Y}\), the spectrum of \(\mathscr {Y}_{n} := \mathscr {Y} - n \lambda _{0} I\) for \(n \geqslant 2\) is included in \((-\infty , -(n-1)\lambda _{0}]\). It was shown, for example, in [11, p. 58] that if the spectrum of a self-adjoint continuous operator \(\widetilde{\mathscr {Y}}\) on a Hilbert space is included in \((-\infty , -s], s > 0\), and \(f(t) \rightarrow f_{*}\) as \(t \rightarrow \infty \), then the solution of the equation

$$ \frac{d \nu }{dt} = \widetilde{\mathscr {Y}} \nu + f(t) $$

satisfies the condition \(\nu (t) \rightarrow -\widetilde{\mathscr {Y}}^{-1} f_{*}\). For this reason, for \(n \geqslant 2\) we obtain

$$\begin{aligned} C_{n}(x, y) = \lim _{t \rightarrow \infty } \nu _{n}&= -\sum _{j=1}^{k+r} \left( \mathscr {Y}_{n}^{-1} \varDelta _{v_{j}}g_{n}^{(j)}(C_{1}(\cdot , y), \ldots , C_{n-1}(\cdot , y) )\right) (x)\\&= -\sum _{j=1}^{k+r} g_{n}^{(j)}(C_{1}(v_{j}, y), \ldots , C_{n-1}(v_{j}, y) )(\mathscr {Y}_{n}^{-1} \delta _{v_{j}}(\cdot ))(x). \end{aligned}$$

Now we prove the existence of a natural number \(n_{*}\) such that for all \(n\geqslant n_{*}\) the functions \(D_{n}^{(j)} (x) := -(\mathscr {Y}_{n}^{-1} \delta _{v_{j}}(\cdot ))(x)\) satisfy the estimates

$$ |D_{n}^{(j)} (x)| \leqslant \frac{2}{n\lambda _{0}}. $$

We estimate the norm of the operator \(\mathscr {Y}_{n}^{-1}\). To this end, consider two vectors u and x such that \(u=\mathscr {Y}_{n}x= \mathscr {Y}x - n\lambda _{0} x\). Then \(\Vert u\Vert \geqslant n\lambda _{0} \Vert x\Vert - \Vert \mathscr {Y}x\Vert \geqslant (n\lambda _{0} -\Vert \mathscr {Y}\Vert )\Vert x\Vert \), hence \(\Vert \mathscr {Y}_{n}^{-1}u\Vert =\Vert x\Vert \leqslant \Vert u\Vert /\left( n\lambda _{0} -\Vert \mathscr {Y}\Vert \right) \), and for all \(n\geqslant n_{*}=2\lambda _{0}^{-1}\Vert \mathscr {Y}\Vert \) the estimate \(\Vert \mathscr {Y}_{n}^{-1}\Vert \leqslant \frac{2}{n\lambda _{0}}\) holds. From this we conclude that

$$ |(\mathscr {Y}_{n}^{-1} \delta _{v_{j}}(\cdot ))(x)| \leqslant \Vert \mathscr {Y}_{n}^{-1} \delta _{v_{j}}(\cdot )\Vert \leqslant \Vert \mathscr {Y}_{n}^{-1} \Vert \Vert \delta _{v_{j}}(\cdot ) \Vert \leqslant \frac{2}{n\lambda _{0}},\qquad n\geqslant n_{*}. $$

Now we estimate the asymptotic behaviour of the moments of the total particle numbers. It follows from (14) that as \(t \rightarrow \infty \) the following asymptotic equivalences hold:

$$\begin{aligned} m_{1}(t, x) \sim \sum _{j=1}^{k+r} \beta _{j} \int _{0}^{t} m_{1}(s, x, v_{j}) \,ds \sim \sum _{j=1}^{k+r} \frac{\beta _{j}}{\lambda _{0}} m_{1}(t, x, v_{j}). \end{aligned}$$
(32)

The functions \(m_{1}(t, x, v_{j})\) exhibit exponential growth as \(t \rightarrow \infty \), and by (32) \(m_{1}(t, x)\) displays the same behaviour.

We can now infer the asymptotic behaviour of the higher moments \(m_{n}(t, x)\) for \(n \geqslant 2\) from Eqs. (11) in a similar way to how it was done above for \(m_{n}(t, x, y)\).

We proceed to prove the equalities for \(C_{1} (x, y)\) and \( C_{1}(x)\). The eigenvalue \(\lambda _{0}\) is simple by Theorem 7, and it follows, according to Remark 1, that

$$ C_{1} (x, y) = \lim _{t \rightarrow \infty } e^{-\lambda _{0} t} m_{1}(t, x, y) = \left( P_{0}\, m_{1}(0, \cdot , y)\right) (x) = \left( m_{1}(0, \cdot , y), f\right) f(x). $$

But \(m_{1}(0, x, y) = \delta _{y}(x)\), hence

$$ C_{1} (x, y) = \left( m_{1}(0, \cdot , y), f\right) f(x) = f(y) f(x). $$

We also obtain from (32) that

$$ C_{1}(x) = \frac{1}{\lambda _{0}} \sum _{j=1}^{k+r} \beta _{j} C_{1}(x, v_{j}) = f(x)\frac{1}{\lambda _{0}} \sum _{j=1}^{k+r} \beta _{j} f(v_{j}), $$

which concludes the proof.    \(\square \)

Corollary 1

\(C_{n}(x, y) = \psi ^{n}(y) C_{n}(x)\), where \( \psi (y) = \frac{\lambda _{0}f(y)}{\sum _{j=1}^{k+r} \beta _{j} f(v_{j})}\).

Proof

We prove the corollary by induction on n. For \(n=1\) the induction basis holds due to Theorem 8. Let us now deal with the induction step: according to Theorem 8,

$$\begin{aligned} C_{n+1}(x, y)&= \sum _{j=1}^{k+r} g^{(j)}_{n+1}\left( C_{1}(v_{j}, y), \ldots , C_{n}(v_{j}, y) \right) D^{(j)}_{n+1}(x),\\ C_{n+1}(x)&= \sum _{j=1}^{k+r} g^{(j)}_{n+1}\left( C_{1}(v_{j}), \ldots , C_{n}(v_{j}) \right) D^{(j)}_{n+1}(x); \end{aligned}$$

therefore, it suffices to prove that for all j the equalities

$$ g^{(j)}_{n+1}\left( C_{1}(v_{j}, y), \ldots , C_{n}(v_{j}, y) \right) = \psi ^{n+1}(y) g^{(j)}_{n+1}\left( C_{1}(v_{j}), \ldots , C_{n}(v_{j}) \right) $$

hold. As a consequence of the definition and hypothesis of induction,

$$\begin{aligned} g^{(j)}_{n+1}&\left( C_{1}(v_{j}, y), \ldots , C_{n}(v_{j}, y) \right) \\ {}&= \sum _{r=2}^{n+1} \frac{\beta _{j}^{(r)}}{r!} \sum _{\begin{array}{c} i_{1}, \ldots , i_{r}> 0 \\ i_{1} + \cdots + i_{r} = n+1 \end{array}} \frac{(n+1)!}{i_{1}! \cdots i_{r}!} C_{i_{1}}(v_{j}, y) \cdots C_{i_{r}}(v_{j}, y) \\&= \psi ^{n+1} (y) \sum _{r=2}^{n+1} \frac{\beta _{j}^{(r)}}{r!} \sum _{\begin{array}{c} i_{1}, \ldots , i_{r} > 0 \\ i_{1} + \cdots + i_{r} = n+1 \end{array}} \frac{(n+1)!}{i_{1}! \cdots i_{r}!} C_{i_{1}}(v_{j}) \cdots C_{i_{r}}(v_{j}), \end{aligned}$$

which proves the corollary.    \(\square \)

4 Proof of Theorem 1

Let us introduce the function

$$ f(n, r) := \sum _{\begin{array}{c} i_{1}, \ldots , i_{r} > 0 \\ i_{1} + \cdots + i_{r} = n \end{array}} i_{1}^{i_{1}} \cdots i_{r}^{i_{r}},\qquad 1 \leqslant r \leqslant n. $$
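
For example, \(f(3, 2) = 1^{1}\cdot 2^{2} + 2^{2}\cdot 1^{1} = 8\), the two terms corresponding to the decompositions \(3 = 1 + 2\) and \(3 = 2 + 1\).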

The following auxiliary lemma is proved in [7, Lemma 9].

Lemma 3

There is a constant \(C>0\) such that \(f(n, r) < C \frac{n^{n}}{r^{ r -1}}\) for all \(n \geqslant r \geqslant 2\).

We now turn to proving Theorem 1.

Proof

Let us define the functions

$$ m(n, x, y) :=\lim _{t\rightarrow \infty }\frac{m_{n}(t,x,y)}{m_{1}^{n}(t,x,y)} =\frac{C_{n} (x, y)}{C_{1}^{n} (x, y)}, $$
$$ m(n, x) :=\lim _{t\rightarrow \infty }\frac{m_{n}(t,x)}{m_{1}^{n}(t,x)}= \frac{C_{n} (x)}{C_{1}^{n} (x)}; $$

as follows from Theorem 8 and \(G_{\lambda }(x,y)\) being positive, these definitions are sound. Corollary 1 yields

$$ m(n, x, y) = m(n, x) = \frac{C_{n}(x)}{C_{1}^{n}(x)} = \frac{C_{n}(x, y)}{C_{1}^{n}(x, y)}. $$

From the above equalities and the asymptotic equivalences (31) we obtain the statements of Theorem 1 in terms of convergence of the moments of the random variables \(\xi (y)=\psi (y)\xi \) and \(\xi \).

The distributions of the limit random variables \(\xi (y)\) and \(\xi \) are uniquely determined by their moments if, as was shown in [11], the Carleman conditions

$$\begin{aligned} \sum _{n=1}^{\infty } m(n, x, y)^{-1/2n} = \infty ,\qquad \sum _{n=1}^{\infty } m(n, x)^{-1/2n} = \infty \end{aligned}$$
(33)

hold. We establish below that the series for the m(n, x) diverges and that, therefore, said moments define the random variable \(\xi \) uniquely; the statement concerning \(\xi (y)\) and its moments can be proved in much the same manner.

Since \(\beta ^{(r)}_{j} = O(r! r^{r-1})\), there is a constant D such that for all \(r \geqslant 2\) and \(j = 1, \ldots , k+r\) the inequality \(\beta ^{(r)}_{j} < D r! r^{r-1}\) holds. Without loss of generality we assume that for all n

$$ C_{n}(x) \leqslant \max _{j = 1, \ldots , k+r}C_{n}(v_{j}) = C_{n}(v_{1}). $$

Let \(\gamma := 2 N C D E\frac{\lambda _{0}\beta _{2}}{2} C_{1}^{2}(v_{1})\), where C is defined in Lemma 3, and the constant E is such that \(C_{n}(v_{1}) \leqslant \gamma ^{n-1} n! n^{n}\) for \(n \leqslant \max \{n_{*}, 2 \}\), where \(n_{*}\) is defined in Theorem 8.

From this point on, the proof follows the scheme of the proof of [7, Theorem 1] and is included only for readability.

Let us show by induction that

$$ C_{n}(x) \leqslant C_{n}(v_{1}) \leqslant \gamma ^{n-1} n! n^{n}. $$

The induction basis for \(n = 1\) is valid due to the choice of the constant E. In order to prove the induction step, we will show that

$$ C_{n+1}(x) \leqslant C_{n+1}(v_{1}) \leqslant \gamma ^{n} (n+1)! (n+1)^{n+1}. $$

It follows from the formula for \(C_{n+1}(v_{1})\) and the estimate for \(D_{n+1}^{(j)}(x)\) from Theorem 8 that

$$ C_{n+1}(v_{1}) \leqslant \sum _{j=1}^{N}\sum _{r=2}^{n+1} \frac{\beta ^{(r)}_{j}}{r!} \sum _{\begin{array}{c} i_{1}, \ldots , i_{r} > 0 \\ i_{1} + \cdots + i_{r} = n+1 \end{array}} \frac{(n+1)!}{i_{1}! \cdots i_{r}!} C_{i_{1}}(v_{1})\cdots C_{i_{r}}(v_{1}) \frac{2}{\lambda _{0}(n+1)}. $$

By the induction hypothesis

$$ \frac{(n+1)!}{i_{1}! \cdots i_{r}!} C_{i_{1}}(v_{1}) \cdots C_{i_{r}}(v_{1}) \leqslant \gamma ^{n+1-r} (n+1)! i_{1}^{i_{1}} \cdots i_{r}^{i_{r}}; $$

which, combined with the fact that \(\beta _{j}^{(r)} < D r! r^{r-1}\) and \(\gamma ^{n+1-r} \leqslant \gamma ^{n-1}\), yields

$$\begin{aligned} \sum _{j=1}^{N} \sum _{r=2}^{n+1} \frac{\beta _{j}^{(r)}}{r!}&\sum _{\begin{array}{c} i_{1}, \ldots , i_{r}> 0 \\ i_{1} + \cdots + i_{r} = n+1 \end{array}} \frac{(n+1)!}{i_{1}! \cdots i_{r}!} C_{i_{1}}(v_{1}) \cdots C_{i_{r}}(v_{1}) \\&\leqslant N\gamma ^{n-1} D (n+1)! \sum _{r=2}^{n+1} r^{r-1} \sum _{\begin{array}{c} i_{1}, \ldots , i_{r} > 0 \\ i_{1} + \cdots + i_{r} = n+1 \end{array}} i_{1}^{i_{1}} \cdots i_{r}^{i_{r}} \\&= N \gamma ^{n-1}D (n+1)! \sum _{r=2}^{n+1} r^{r-1} f(n+1, r). \end{aligned}$$

We infer from Lemma 3 that

$$\begin{aligned} N \gamma ^{n-1}D (n+1)! \sum _{r=2}^{n+1} r^{r-1} f(n+1, r)&\leqslant N \gamma ^{n-1} (n+1)!D C \sum _{r=2}^{n+1} (n+1)^{n+1} \\ {}&\leqslant N \gamma ^{n-1} D C (n+1)! (n+1)^{n+2}. \end{aligned}$$

Hence, by the definition of \(\gamma \) we obtain

$$ C_{n+1}(x) \leqslant \gamma ^{n} (n+1)! (n+1)^{n+1}, $$

which completes the proof of the step of induction.

Since \(n! \leqslant \left( \frac{n+1}{2} \right) ^{n}\) by the AM–GM inequality, we have \(C_{n}(x) \leqslant \frac{\gamma ^{n}}{2^{n}} (n+1)^{2n}\). Thus,

$$ m(n, x) = \frac{C_{n}(x)}{C_{1}^{n}(x)} \leqslant \left( \frac{\gamma }{2 C_{1}(x)} \right) ^{n} (n+1)^{2n}, $$

from which we obtain that

$$ \sum _{n=1}^{\infty } m(n, x)^{-1/2n} \geqslant \sqrt{\frac{2 C_{1}(x)}{\gamma }} \sum _{n=1}^{\infty } \frac{1}{n+1} = \infty . $$

The condition (33) is satisfied, and the corresponding Stieltjes moment problem for the moments m(n, x) has a unique solution [8, Th. 1.11]. Hence, statements (5) are valid in terms of convergence in distribution, and Theorem 1 is proved.    \(\square \)