1. INTRODUCTION. STATEMENT OF THE PROBLEM

We will denote the elements of the \(d\)-dimensional Euclidean space \( {\mathbb R}^d\) by boldface symbols, for example, \({\boldsymbol {x}}=(x_{(1)},\dots ,x_{(d)})\), \({\boldsymbol {0}}=(0,\dots ,0)\), and \({\boldsymbol {\mu }}=({\mu }_{(1)},\dots ,{\mu }_{(d)})\). The scalar product of elements \({\boldsymbol {x}},{\boldsymbol {y}}\in {\mathbb R}^d \) will be designated as

$$ {\boldsymbol {x}}{\boldsymbol {y}}:=x_{(1)}y_{(1)}+\dots +x_{(d)}y_{(d)}.$$

The norm in \( {\mathbb R}^d\) will be denoted by \(|{\boldsymbol {x}}|:=\sqrt {{\boldsymbol {x}}{\boldsymbol {x}}} \). Random vectors with values in \({\mathbb R}^d \) will also be denoted by bold symbols, for example, \({\boldsymbol {\zeta }}=\big ({\zeta }_{(1)},\dots ,{\zeta }_{(d)}\big ) \). The notation

$$ {\boldsymbol {\xi }}=({\tau },{\boldsymbol {\zeta }})=\big ({\tau },{\zeta }_{(1)},\dots ,{\zeta }_{(d)} \big )$$

will be used for a random vector in \({\mathbb R}^{d+1} \).

Consider a sequence

$$ \big \{{\boldsymbol {\xi }}_k=({\tau }_k,{\boldsymbol {\zeta }}_k) \big \}_{k=0}^\infty$$
(1.1)

of independent random vectors in \( {\mathbb R}^{d+1}\) such that \({\tau }_0=0 \), \({\boldsymbol {\zeta }}_0={\boldsymbol {0}} \), \({\tau }_1\ge 0\), \({\tau }_k>0 \) for \(k\ge 2 \). Suppose that the vectors \({\boldsymbol {\xi }}_k=({\tau }_k,{\boldsymbol {\zeta }}_k)\) for \(k\ge 2 \) have the same distribution as \({\boldsymbol {\xi }}=({\tau },{\boldsymbol {\zeta }})\). Put

$$ T_n:=\sum _{k=0}^n{\tau }_k, \quad {\mathbf {Z}}_n:=\sum _{k=0}^n{\boldsymbol {\zeta }}_k, \quad {\bf S}_n=(T_n,{\mathbf {Z}}_n):= \sum _{k=0}^n{\boldsymbol {\xi }}_k\enspace \enspace \text {for}\enspace \enspace n\ge 0,$$

and so \(T_0=0 \), \({\mathbf {Z}}_0={\boldsymbol {0}} \), \({\bf S}_0=(0,{\boldsymbol {0}}) \). Let \({\nu }(0)=\eta (0):=0 \). For \(t>0 \), we put

$$ {\nu }(t):=\max \left \{k\ge 0:T_{k}<t\right \}, \quad \eta (t):=\min \left \{k\ge 0:T_k\ge t\right \},$$
(1.2)

and so \(\eta (t)={\nu }(t)+1\) for \(t>0 \).

The first and second compound renewal processes (c.r.p.) \({\mathbf {Z}}(t)\), \({\mathbf {Y}}(t)\) are defined by the equalities (see, for example, [2, 11])

$$ {\mathbf {Z}}(t):={\mathbf {Z}}_{{\nu }(t)}, \quad {\mathbf {Y}}(t):={\mathbf {Z}}_{\eta (t)},\enspace \enspace t \ge 0, $$
(1.3)

respectively. The standard commonly accepted model of a c.r.p. presumes that the time \({\tau }_1\) of the appearance of the first jump and the value \({\boldsymbol {\zeta }}_1 \) of this jump have a joint distribution that is in general different from the joint distribution of \(({\tau },{\boldsymbol {\zeta }})\) (see, for example, [1]). This is so, for instance, for c.r.p. with stationary increments.

If either \(({\tau }_1,{\boldsymbol {\zeta }}_1)\overset {d}{=}({\tau },{\boldsymbol {\zeta }})\) or \(({\tau }_1,{\boldsymbol {\zeta }}_1)\overset {d}{=}(0,{\boldsymbol {0}})\), then we will refer to the processes \({\mathbf {Z}}(t)\) and \({\mathbf {Y}}(t)\) as homogeneous c.r.p.; otherwise, we refer to them as inhomogeneous c.r.p.

Thus, the distributions of the two random vectors \({\boldsymbol {\xi }}_1=({\tau }_1,{\boldsymbol {\zeta }}_1)\) and \({\boldsymbol {\xi }}=({\tau },{\boldsymbol {\zeta }})\) completely determine the distributions of the first c.r.p. \({\mathbf {Z}}(t)\), \(t\ge 0 \), and the second c.r.p. \({\mathbf {Y}}(t) \), \(t\ge 0\). On the event

$$ \left \{T_k<t\le T_{k+1}\right \}\enspace \enspace \mbox {for}\enspace \enspace k\ge 0, $$

we have the equalities \({\nu }(t)=k \), \(\eta (t)=k+1\), \({\mathbf {Z}}(t)={\mathbf {Z}}_k\), \({\mathbf {Y}}(t)={\mathbf {Z}}_{k+1}\). Consequently, the step processes \({\nu }(t) \), \({\mathbf {Z}}(t)\), \(\eta (t) \), and \({\mathbf {Y}}(t) \) are left continuous for \(t>0 \).
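Definitions (1.1)–(1.3) are straightforward to mirror in a simulation. The sketch below (a minimal Python illustration for the case \(d=1\), with a bounded integer-valued \(\tau \) and \(\zeta \) chosen by us purely for illustration) builds one trajectory of \((T_k,{\mathbf {Z}}_k)\) and checks the relations \(\eta (t)={\nu }(t)+1\) and \(T_{{\nu }(t)}<t\le T_{{\nu }(t)+1}\) stated above.

```python
import random

def simulate_path(n_steps, seed=0):
    """One trajectory of (T_k, Z_k), k = 0..n_steps, with tau_0 = 0, zeta_0 = 0,
    integer tau_k >= 1 and integer zeta_k (an assumed toy law, d = 1)."""
    rng = random.Random(seed)
    T, Z = [0], [0]
    for _ in range(n_steps):
        T.append(T[-1] + rng.randint(1, 3))          # tau_k > 0
        Z.append(Z[-1] + rng.choice([-1, 0, 1, 2]))  # integer jump zeta_k
    return T, Z

def nu(T, t):
    """nu(t) = max{k >= 0 : T_k < t}; nu(0) = 0 by convention."""
    return max(k for k, Tk in enumerate(T) if Tk < t) if t > 0 else 0

def eta(T, t):
    """eta(t) = min{k >= 0 : T_k >= t}; eta(0) = 0 by convention."""
    return min(k for k, Tk in enumerate(T) if Tk >= t) if t > 0 else 0

T, Z = simulate_path(50)
t = 17
k = nu(T, t)
assert eta(T, t) == k + 1       # eta(t) = nu(t) + 1 for t > 0
assert T[k] < t <= T[k + 1]     # the event {T_k < t <= T_{k+1}}
Zt, Yt = Z[k], Z[k + 1]         # Z(t) = Z_{nu(t)}, Y(t) = Z_{eta(t)}
```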

It follows from [16] that limit theorems for the first c.r.p. \({\mathbf {Z}}(t)\) and the second c.r.p. \( {\mathbf {Y}}(t)\) may substantially differ.

As in [9, 11, 12], in the present article we assume that the random vectors \( {\boldsymbol {\xi }}_1=({\tau }_1,{\boldsymbol {\zeta }}_1) \) and \({\boldsymbol {\xi }}=({\tau },{\boldsymbol {\zeta }})\) defining the c.r.p. \({\mathbf {Z}}(t)\) and \({\mathbf {Y}}(t) \) satisfy Cramér’s moment condition in the following form:

\([\mathbf {C}_0] \) :

\(\enspace \enspace {\mathbb {E}} e^{{\delta }|{\boldsymbol {\xi }}_1|}<\infty , \enspace \enspace {\mathbb {E}} e^{{\delta }|{\boldsymbol {\xi }}|}<\infty \enspace \enspace \textit {for some} \enspace \enspace {\delta }>0 \).

The papers [9, 10, 13, 14, 15] in particular contain integro-local theorems for the processes \({\mathbf {Z}}(t) \) and \({\mathbf {Y}}(t) \) as \(t\to \infty \) in the case where the random vector \({\boldsymbol {\xi }}=({\tau },{\boldsymbol {\zeta }})\) “defining” these processes is nonlattice. In the present article (and in the one-dimensional case for \({\mathbf {Z}}(t)\) in [12]), the analogous problem is solved for arithmetic c.r.p. \({\mathbf {Z}}(t)\) and \({\mathbf {Y}}(t)\) (and their right continuous versions), i.e., in the case where the vectors \({\boldsymbol {\xi }}_1 \), \(\boldsymbol {\xi }\) lie on the integer lattice \({\mathbb Z}^{d+1}\). More exactly, we will assume that \({\mathbb {P}}\left ({\boldsymbol {\xi }}_1\in {\mathbb Z}^{d+1}\right )=1\), and the vector \(\boldsymbol {\xi }\) satisfies the following stronger arithmeticity condition, which is formulated in terms of the characteristic function of this vector (see [2, p. 69]):

$$ f({\boldsymbol {u}}):={\mathbb {E}} e^{i{\boldsymbol {u}}{\boldsymbol {\xi }}}, \quad {\boldsymbol {u}}\in {\mathbb R}^{d+1}, $$

where \({\boldsymbol {u}}{\boldsymbol {\xi }} \) is the scalar product of the vectors \(\boldsymbol {u} \) and \(\boldsymbol {\xi } \) in \({\mathbb R}^{d+1} \).

\([\mathbf {Z}] \) :

(Arithmeticity condition) Every \( {\boldsymbol {u}}\in {\mathbb Z}^{d+1} \) satisfies the equality \( f(2\pi {\boldsymbol {u}})=1\), and for every \({\boldsymbol {u}}\in {\mathbb R}^{d+1}\setminus {\mathbb Z}^{d+1}\), the inequality \(\big \vert f(2\pi {\boldsymbol {u}})\big \vert <1 \) is valid.

We note here that if the random vector \({\boldsymbol {\xi }}=({\tau },{\boldsymbol {\zeta }})\) satisfies condition \([\mathbf {Z}] \) then this vector cannot be degenerate in \({\mathbb R}^{d+1}\), i.e., it cannot lie with probability 1 on the plane

$$ L_{{\bf n},c}:=\big \{(u,{\bf v})\in {\mathbb R}^{d+1}:\thinspace n_1u+{\bf n}_2{\bf v}=c \big \}$$

defined by a unit normal \({\bf n}=(n_1,{\bf n}_2) \) and a constant \(c \). Indeed, if \({\mathbb {P}}\left ({\boldsymbol {\xi }}\in L_{{\bf n},c}\right )=1 \) then we can choose real \(r \) such that \({\boldsymbol {u}}=(u_1,{\boldsymbol {u}}_2):=(rn_1,r{\bf n}_2)\notin {\mathbb Z}^{d+1} \). Then for this \({\boldsymbol {u}}\notin {\mathbb Z}^{d+1}\) we have the equality

$$ \big \vert f(2\pi {\boldsymbol {u}}) \big \vert = \big \vert {\mathbb {E}} e^{i2\pi rc} \big \vert =1,$$

which is impossible in view of condition \([\mathbf {Z}] \).
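Condition \([\mathbf {Z}] \) is easy to test numerically. In the sketch below (a toy integer-valued law for \({\boldsymbol {\xi }}=({\tau },{\boldsymbol {\zeta }})\) on \({\mathbb Z}^2 \), \(d=1 \), chosen by us for illustration), \(f(2\pi {\boldsymbol {u}})=1 \) holds automatically for an integer \(\boldsymbol {u} \), while \(\big \vert f(2\pi {\boldsymbol {u}})\big \vert <1 \) is checked at a sample noninteger point.

```python
import cmath
import math

# A toy law for xi = (tau, zeta) on the lattice Z^2 (an illustrative assumption).
pmf = {(1, 0): 0.30, (1, 1): 0.20, (2, -1): 0.25, (3, 2): 0.25}

def f(u1, u2):
    """Characteristic function f(u) = E exp(i u.xi)."""
    return sum(p * cmath.exp(1j * (u1 * t + u2 * z)) for (t, z), p in pmf.items())

# Integer u = (3, -2): every summand is exp(2*pi*i*(integer)) = 1, so f(2*pi*u) = 1.
assert abs(f(2 * math.pi * 3, 2 * math.pi * (-2)) - 1) < 1e-9
# Noninteger u = (0.5, 0.25): |f(2*pi*u)| < 1, as condition [Z] demands.
assert abs(f(2 * math.pi * 0.5, 2 * math.pi * 0.25)) < 1 - 1e-6
```

Since the differences of the support points \((1,0),(1,1),(2,-1),(3,2)\) generate the whole lattice \({\mathbb Z}^2 \), this toy law in fact satisfies \([\mathbf {Z}] \) at every noninteger \(\boldsymbol {u} \), not only at the point tested.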

Since the arithmetic c.r.p. \({\mathbf {Z}}(t)\) and \({\mathbf {Y}}(t)\) change only at the integer time moments \( t=1,2,\dots \) and are left continuous, it follows that, for every noninteger argument \(t>0\), we have the equalities

$$ {\mathbf {Z}}(t)={\mathbf {Z}}\big ([t]+1 \big ), \quad {\mathbf {Y}}(t)={\mathbf {Y}}\big ([t]+1 \big ),$$

where, as usual, \([t] \) stands for the integer part of a nonnegative number \(t \). Therefore, we will consider only the integer values of argument \( t=n\in \left \{0,1,2,\dots \right \}\), i.e., we will study the random sequences

$$ \big \{{\mathbf {Z}}(n) \big \}_{n\ge 0}, \quad \big \{{\mathbf {Y}}(n) \big \}_{n\ge 0}.$$
(1.4)

This somewhat simplifies the notation and does not diminish generality.

One can also consider right continuous versions of the processes \({\mathbf {Z}}(t) \) and \({\mathbf {Y}}(t) \) by setting

$$ {\mathbf {Z}}_+(t):={\mathbf {Z}}(t+0), \quad {\mathbf {Y}}_+(t):={\mathbf {Y}}(t+0),\enspace \enspace t\ge 0. $$

In other words, for defining the right continuous versions of the processes \( {\mathbf {Z}}(t)\) and \({\mathbf {Y}}(t) \), it suffices to consider the right continuous functionals

$$ \begin {aligned} {\nu }_+(t)&=\max \big \{k\ge 0:~T_{k}\le t \big \},\\ \eta _+(t)&=\min \big \{k\ge 0:T_{k}>t \big \}, \end {aligned} \qquad t>0,$$
(1.5)

and for \(t>0 \), put

$$ {\mathbf {Z}}_+(t):={\mathbf {Z}}_{{\nu }_+(t)}\thinspace , \quad {\mathbf {Y}}_+(t):={\mathbf {Z}}_{\eta _+(t)}\thinspace . $$

Thus, alongside the sequences in (1.4), one can study the sequences

$$ \big \{{\mathbf {Z}}_+(n) \big \}_{n\ge 0}\thinspace , \quad \big \{{\mathbf {Y}}_+(n) \big \}_{n\ge 0}\thinspace . $$
(1.6)

However, studying the sequences (1.6) easily amounts to studying the sequences (1.4) since (see definitions (1.2) and (1.5))

$$ {\nu }_+(n)={\nu }(n+1), \quad \eta _+(n)=\eta (n+1),$$

and hence,

$$ {\mathbf {Z}}_+(n)={\mathbf {Z}}(n+1), \quad {\mathbf {Y}}_+(n)={\mathbf {Y}}(n+1).$$
(1.7)
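On the integer lattice, these relations reduce to the elementary equivalences \(T_k\le n\Leftrightarrow T_k<n+1 \) and \(T_k>n\Leftrightarrow T_k\ge n+1 \). A short check on an assumed integer trajectory (toy data of ours, for illustration only):

```python
# An assumed integer renewal trajectory with T_0 = 0 (illustration only).
T = [0, 2, 3, 7, 11]

def nu(t):       # nu(t)  = max{k : T_k < t},  nu(0) = 0
    return max(k for k, Tk in enumerate(T) if Tk < t) if t > 0 else 0

def eta(t):      # eta(t) = min{k : T_k >= t}, eta(0) = 0
    return min(k for k, Tk in enumerate(T) if Tk >= t) if t > 0 else 0

def nu_plus(t):  # nu_+(t)  = max{k : T_k <= t}
    return max(k for k, Tk in enumerate(T) if Tk <= t)

def eta_plus(t): # eta_+(t) = min{k : T_k > t}
    return min(k for k, Tk in enumerate(T) if Tk > t)

# On the integer lattice, T_k <= n iff T_k < n+1, and T_k > n iff T_k >= n+1.
for n in range(10):
    assert nu_plus(n) == nu(n + 1)
    assert eta_plus(n) == eta(n + 1)
```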

In the present article, we obtain local theorems for the multidimensional arithmetic c.r.p. \( {\mathbf {Z}}(n)\), \({\mathbf {Y}}(n) \), in which we find the exact asymptotics for the probabilities

$$ {\mathbb P}\big ({\mathbf {Z}}(n)={\boldsymbol {x}} \big ), \quad {\mathbb P}\big ({\mathbf {Y}}(n)={\boldsymbol {x}} \big ),\enspace \enspace n\to \infty , $$

for sequences \({\boldsymbol {x}}={\boldsymbol {x}}_n\in {\mathbb Z}^d\) such that the point \({\boldsymbol {\alpha }}:={\boldsymbol {x}}/n\) lies in some fixed compact set \(K\subset {\mathbb R}^d \) (see Theorems 2.1 and 2.2 below) or the point \(\boldsymbol {\alpha }\) converges to some fixed \({\boldsymbol {\alpha }}_0\) (see Theorems 2.1* and 2.2* below). By (1.7), we also find the exact asymptotics for the probabilities (see Corollaries 2.1 and 2.2 below)

$$ {\mathbb P}\big ({\mathbf {Z}}_+(n)={\boldsymbol {x}} \big ), \quad {\mathbb P}\big ({\mathbf {Y}}_+(n)={\boldsymbol {x}} \big ),\enspace \enspace n\to \infty . $$

Earlier in [12], the analogous problem was solved for the one-dimensional (\(d=1\)) arithmetic c.r.p. \({\mathbf {Z}}(n)\). The proof of the local theorems for arithmetic c.r.p. is based on the local limit theorem for the renewal function

$$ H(t,{\boldsymbol {x}}):= \sum _{n=0}^\infty {\mathbb P}\big ({\bf S}_n=(t,{\boldsymbol {x}}) \big ),\enspace \enspace (t,{\boldsymbol {x}})\in {\mathbb Z}^{d+1},$$
(1.8)

in which the exact asymptotics of \( H(t,{\boldsymbol {x}})\) is investigated for the sequences \( (t,{\boldsymbol {x}})=(t_n,{\boldsymbol {x}}_n)\in {\mathbb Z}^{d+1} \) such that the points \(({\theta },{\boldsymbol {\alpha }}):=\frac {1}{n}(t,{\boldsymbol {x}}) \) lie in some compact set \(K\subset {\mathbb R}^{d+1} \) (Theorem 2.3).

Historical reviews of related results can be found in [2, 9]. As regards the methods, the present article follows [9, 10, 13, 14, 15], where, under Cramér’s moment condition \([\mathbf {C}_0] \), integro-local theorems are obtained for the nonlattice c.r.p. \({\mathbf {Z}}(t) \), \({\mathbf {Y}}(t) \) and the corresponding renewal measure \(H(B) \) in the one- and multidimensional cases, respectively, and also follows [12], where analogous results are obtained in the one-dimensional case under conditions \([\mathbf {C}_0] \) and \([\mathbf {Z}] \). Therefore, Theorems 2.1–2.3 of the present article supplement the results of [9, 10, 12, 13, 14, 15].

The remaining part of the article consists of Sections 2 and 3. In Section 2, we formulate the main results of the article. The statements involve a number of “basic” functions whose meaning and properties it is desirable to know in order to understand the nature of the established laws. Therefore, in Section 2, we give the definitions and some properties of these functions. Their detailed description can be found in [9] (the one-dimensional case) and in [13] (the multidimensional case). The proofs of the main assertions of the article are given in Section 3.

2. STATEMENT OF THE MAIN ASSERTIONS

2.1. The definitions and properties of the necessary functions

In this section, we briefly give the definitions of the functions that will play a defining role in the local limit theorems below.

For \((\lambda ,{\boldsymbol {\mu }})\in {\mathbb R}^{d+1} \), we put

$$ \begin {gathered} \psi ({\lambda },{\boldsymbol {\mu }}):={\mathbb {E}} e^{{\lambda }{\tau }+{\boldsymbol {\mu }}{\boldsymbol {\zeta }}}, \quad \psi _1({\lambda },{\boldsymbol {\mu }}):={\mathbb {E}} e^{{\lambda }{\tau }_1+{\boldsymbol {\mu }}{\boldsymbol {\zeta }}_1};\\ A({\lambda },{\boldsymbol {\mu }}):=\ln \psi ({\lambda },{\boldsymbol {\mu }}), \quad A_1({\lambda },{\boldsymbol {\mu }}):=\ln \psi _1({\lambda },{\boldsymbol {\mu }});\\ \mathcal {A}:=\big \{({\lambda },{\boldsymbol {\mu }}):\psi ({\lambda },{\boldsymbol {\mu }})<\infty \big \}, \quad \mathcal {A}_1:=\big \{({\lambda },{\boldsymbol {\mu }}):\psi _1({\lambda },{\boldsymbol {\mu }})<\infty \big \}. \end {gathered}$$

Clearly, in accordance with condition \([\mathbf {C}_0]\), the interiors \((\mathcal {A})\) and \((\mathcal {A}_1)\) of the sets \(\mathcal {A} \) and \(\mathcal {A}_1 \) contain the point \(({\lambda },{\boldsymbol {\mu }})=(0,{\boldsymbol {0}})\) and are the domains of analyticity of the functions \(\psi (\lambda ,{\boldsymbol {\mu }}) \) and \(\psi _1(\lambda ,{\boldsymbol {\mu }})\), respectively.

A key role in the description of the asymptotics of the renewal function (see (1.8)) corresponding to the random walk \(\left \{\mathbf {S}_n\right \}_{n\geq 0}\) is played by the so-called second deviation function of the arguments \( \theta \geq 0\) and \({\boldsymbol {\alpha }}\in \mathbb {R}^d\):

$$ D({\theta },{\boldsymbol {\alpha }}):=\sup _{(\lambda ,{\boldsymbol {\mu }})\in \mathcal {A}^{\le 0}} \left \{\lambda {\theta }+{\boldsymbol {\mu }}{\boldsymbol {\alpha }} \right \}=\sup _{(\lambda ,{\boldsymbol {\mu }})\in \partial \mathcal {A}^{\le 0}} \left \{\lambda {\theta }+{\boldsymbol {\mu }}{\boldsymbol {\alpha }} \right \}, $$
(2.1)

where

$$ \mathcal {A}^{\leq 0}:= \big \{(\lambda ,{\boldsymbol {\mu }}):A(\lambda ,{\boldsymbol {\mu }})\leq 0 \big \}, $$

and \(\partial \mathcal {A}^{\le 0} \) is the boundary of the set \(\mathcal {A}^{\le 0} \). The properties of the function \(D(\theta ,{\boldsymbol {\alpha }})\) have been studied rather completely (see, for example, [2, Sec. 2.9; 16]). It is convex, lower semicontinuous, subadditive, and linear along any ray starting from the point \( ({\theta }_0,{\boldsymbol {\alpha }}_0)=(0,{\boldsymbol {0}}) \).

Note that, by the linearity of the function \(D(\theta ,{\boldsymbol {\alpha }})\) along any ray starting from the point \( (0,{\boldsymbol {0}})\), for \({\theta }>0 \) we have the equality

$$ D({\theta },{\boldsymbol {\alpha }})={\theta } D\bigg (1,\frac {{\boldsymbol {\alpha }}}{{\theta }} \bigg ), $$
(2.2)

and hence the function of two variables \(D({\theta },{\boldsymbol {\alpha }})\) is completely determined by the values of the function of one variable

$$ D({\boldsymbol {\alpha }}):=D(1,{\boldsymbol {\alpha }}).$$
(2.3)

Moreover, the main results of the present article will be formulated in terms of the function \(D({\boldsymbol {\alpha }}) \). By (2.1),

$$ D({\boldsymbol {\alpha }})=\sup _{({\lambda },{\boldsymbol {\mu }})\in \partial \mathcal {A}^{\le 0}} \{{\boldsymbol {\mu }}{\boldsymbol {\alpha }}+{\lambda }\}=\sup _{{\boldsymbol {\mu }}} \big \{{\boldsymbol {\mu }}{\boldsymbol {\alpha }}-A({\boldsymbol {\mu }}) \big \}, $$
(2.4)

where

$$ A({\boldsymbol {\mu }}):=-\sup \big \{\lambda :A(\lambda ,{\boldsymbol {\mu }})\leq 0 \big \}$$

is the so-called basic function for the c.r.p. It was proved in [16] that the function \(A({\boldsymbol {\mu }}) \) is convex and lower semicontinuous, and hence we have the equality (see [16])

$$ A({\boldsymbol {\mu }})=\sup _{{\boldsymbol {\alpha }}}\big \{{\boldsymbol {\mu }}{\boldsymbol {\alpha }}-D({\boldsymbol {\alpha }}) \big \}.$$

Let \(\big (\lambda ({\boldsymbol {\alpha }}),{\boldsymbol {\mu }}({\boldsymbol {\alpha }})\big )\) be any point at which the supremum

$$ \sup _{(\lambda ,{\boldsymbol {\mu }})\in \mathcal {A}^{\leq 0}} \{\lambda +{\boldsymbol {\mu }}{\boldsymbol {\alpha }}\}=D(1,{\boldsymbol {\alpha }})=D({\boldsymbol {\alpha }}) $$

is attained if such a point exists. It was shown in [9] (in the one-dimensional case) and [13] (in the multidimensional case) that the functions \( D({\boldsymbol {\alpha }})\) and \(D(\theta ,{\boldsymbol {\alpha }})\) are analytic in the sets

$$ \mathfrak {A}:=\Big \{{\boldsymbol {\alpha }}:\big (\lambda ({\boldsymbol {\alpha }}),{\boldsymbol {\mu }}({\boldsymbol {\alpha }}) \big )\in (\mathcal {A}) \Big \},\quad \mathcal {D}:=\bigg \{({\theta },{\boldsymbol {\alpha }}):\frac {{\boldsymbol {\alpha }}}{{\theta }}\in \mathfrak {A},\enspace {\theta }>0 \bigg \}$$

respectively. Moreover, for \({\boldsymbol {\alpha }}\in \mathfrak {A}\), the point \(\big (\lambda ({\boldsymbol {\alpha }}),{\boldsymbol {\mu }}({\boldsymbol {\alpha }})\big ) \) is unique and

$$ \lambda ({\boldsymbol {\alpha }})=- A\big ({\boldsymbol {\mu }}({\boldsymbol {\alpha }}) \big ), \quad {\boldsymbol {\mu }}({\boldsymbol {\alpha }})=D^{\prime }({\boldsymbol {\alpha }}), \quad D({\boldsymbol {\alpha }})={\lambda }({\boldsymbol {\alpha }})+{\boldsymbol {\alpha }}{\boldsymbol {\mu }}({\boldsymbol {\alpha }}), \quad {A}\big ({\lambda }({\boldsymbol {\alpha }}),{\boldsymbol {\mu }}({\boldsymbol {\alpha }}) \big )=0. $$
(2.5)
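For simple laws, the quantities \(A({\boldsymbol {\mu }}) \), \({\lambda }({\boldsymbol {\alpha }}) \), \({\boldsymbol {\mu }}({\boldsymbol {\alpha }}) \), and \(D({\boldsymbol {\alpha }}) \) can be computed numerically straight from the definitions. The rough sketch below (a toy law with independent \(\tau \) and \(\zeta \), bisection for the root of \(\psi ({\lambda },{\boldsymbol {\mu }})=1 \), and a grid search for the supremum — all illustrative assumptions of ours, not part of the statements) checks that \(D\ge 0 \) and that \(D \) vanishes at \({\boldsymbol {a}}={\mathbb {E}}{\boldsymbol {\zeta }}/{\mathbb {E}}\tau \), in agreement with Lemma 2.1 below.

```python
import math

# Toy arithmetic law (illustrative assumption): d = 1, tau and zeta independent,
# P(tau = 1) = P(tau = 2) = 1/2, P(zeta = 1) = p = 1 - P(zeta = 0).
p = 0.3

def psi(lam, mu):
    """psi(lam, mu) = E exp(lam * tau + mu * zeta)."""
    return (0.5 * math.exp(lam) + 0.5 * math.exp(2 * lam)) * (1 - p + p * math.exp(mu))

def A_basic(mu):
    """Basic function A(mu) = -sup{lam : A(lam, mu) <= 0}.  Since psi is
    increasing in lam, the sup is the root of psi(lam, mu) = 1 (bisection)."""
    lo, hi = -40.0, 40.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if psi(mid, mu) <= 1:
            lo = mid
        else:
            hi = mid
    return -lo

def D(alpha):
    """Second deviation function D(alpha) = sup_mu {mu*alpha - A(mu)} (grid search)."""
    return max(mu * alpha - A_basic(mu) for mu in (i / 100 for i in range(-800, 801)))

a = p / 1.5                                       # a = E[zeta] / E[tau]
assert abs(D(a)) < 1e-3                           # D(a) = 0, cf. Lemma 2.1
assert D(a + 0.1) > 1e-4 and D(a - 0.1) > 1e-4    # D > 0 away from a
```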

2.2. Local theorems for arithmetic compound renewal processes

First, consider the first c.r.p. The asymptotics of the probability

$$ {\mathbb P}\big ({\mathbf {Z}}(n)={\boldsymbol {x}} \big )$$
(2.6)

will be studied in the normalized deviation zone

$$ {\boldsymbol {\alpha }}:=\frac {{\boldsymbol {x}}}{n}\in \mathfrak {A},$$
(2.7)

and it is natural to call this zone the Cramér zone by analogy with the domains of analyticity arising in the classical theorems for random walks. However, in contrast to these classical theorems, it is not always possible to obtain the asymptotics of (2.6) in the whole zone (2.7). It has to be restricted in a number of cases. If \({\lambda }_+>D({\boldsymbol {0}})\), where

$$ {\lambda }_+=\sup \left \{{\lambda }:{\mathbb {E}} e^{{\lambda }{\tau }}<\infty \right \}, $$

then no restriction is required.

Consider the case of \({\lambda }_+\le D({\boldsymbol {0}}) \). In this case, the forbidden part of the zone \(\mathfrak {A} \) is the closed set

$$ \mathfrak {B}_Z:=\big \{{\boldsymbol {\alpha }}\in {\mathbb R}^d:{\lambda }({\boldsymbol {\alpha }})\ge {\lambda }_+ \big \}.$$

Thus, in Theorem 2.1, we will study the probability asymptotics (2.6) in the regular deviation zone

$$ {\boldsymbol {\alpha }}:=\frac {{\bf x}}{n}\in \mathfrak {A}\setminus \mathfrak {B}_Z, $$

and in Theorem 2.1*, in zone (2.7).

Put

$$ C({\boldsymbol {\alpha }}):=C_H(1,{\boldsymbol {\alpha }}), \quad I_Z({\boldsymbol {\alpha }}):=\sum _{m=1}^\infty e^{{\lambda }({\boldsymbol {\alpha }})m} {\mathbb {P}}\left ( {\tau }\ge m\right ),$$

where the positive function \(C_H({\theta },{\boldsymbol {\alpha }})\) continuous in the cone \(\mathcal {D} \) is defined by formula (3.6) below. It is easy to note that if \({\boldsymbol {\alpha }}\in (\mathfrak {B}_Z)\) then

$$ I_Z({\boldsymbol {\alpha }})\ge \mathbb {E}e^{\lambda ({\boldsymbol {\alpha }})\tau }=\infty .$$
(2.8)

Let us now formulate the local theorem for the process \({\mathbf {Z}}(n)\).

Theorem 2.1 \(. \) Suppose that conditions \([\mathbf {C}_0] \) and \( [\mathbf {Z}]\) are fulfilled, a compact set

$$ K\subset \mathfrak {A}\setminus \mathfrak {B}_Z $$

is fixed, and the admissible inhomogeneity conditions

$$ \begin{gathered} \mathcal {A}_K\subset (\mathcal {A}_1), \enspace \enspace \mbox {where} \enspace \enspace \mathcal {A}_K:=\Big \{({\lambda },{\boldsymbol {\mu }})=\big ({\lambda }({\boldsymbol {\alpha }}),{\boldsymbol {\mu }}({\boldsymbol {\alpha }}) \big ):{\boldsymbol {\alpha }}\in K \Big \},\\ {\mathbb {P}}\left ({\tau }_1\ge n\right )=o\bigg (\frac {1}{n^{d/2}}e^{-nD({\boldsymbol {0}})} \bigg ) \enspace \enspace \mbox {as}\enspace \enspace n\to \infty \enspace \enspace \mbox {if}\enspace \enspace {\boldsymbol {\alpha }}={\boldsymbol {0}}\in K \end{gathered} $$
(2.9)

hold. Then, for \( {\boldsymbol {\alpha }}:={{\boldsymbol {x}}}/{n}\in K \) , \( {\boldsymbol {x}}\in {\mathbb Z}^d\) , the following representation is valid:

$$ {\mathbb P}\big ({\mathbf {Z}}(n)={\boldsymbol {x}} \big )=\psi _1 \big ({{\lambda }}({\boldsymbol {\alpha }}),{{\boldsymbol {\mu }}}({\boldsymbol {\alpha }}) \big )\frac {C({\boldsymbol {\alpha }})} {n^{d/2}}e^{-nD({\boldsymbol {\alpha }})}I_Z({\boldsymbol {\alpha }}) \big (1+o(1) \big ),$$
(2.10)

in which the remainder \(o(1)={\varepsilon }_n({\boldsymbol {x}}) \) satisfies the relation

$$ \lim \limits _{n\to \infty }\thinspace \sup _{{\boldsymbol {x}}\in {\mathbb Z}^d,\thinspace {{\boldsymbol {x}}}/{n}\in K}\big \vert {\varepsilon }_n({\boldsymbol {x}}) \big \vert =0. $$
(2.11)

Remark 2.1\(. \) Suppose that the compact set \(K \) in Theorem 2.1 has the form \(K=[{\boldsymbol {a}}]_{\varepsilon }\) for sufficiently small \({\varepsilon }>0\). Then the admissible inhomogeneity conditions (2.9) of Theorem 2.1 are fulfilled automatically.

In the one-dimensional case \(d=1\), assertion (2.10) in Theorem 2.1 was proved in [12]. Theorem 2.1 agrees with Theorem 1.1 in [9] and Theorem 3.1 in [13] and supplements these theorems in the arithmetic multidimensional case (as we already said, in [9] and [13], integro-local theorems were obtained in the nonlattice case in the one- and multidimensional settings, respectively). As was observed in [9] and [13], the form of the sum \(I_Z({\boldsymbol {\alpha }}) \) to some extent explains why the forbidden set \(\mathfrak {B}_Z\) is essential: for \({\boldsymbol {\alpha }}\in (\mathfrak {B}_Z)\) (or, which is the same, \( {\lambda }({\boldsymbol {\alpha }})>{\lambda }_+ \)), the sum \(I_Z({\boldsymbol {\alpha }}) \) diverges (see (2.8)), and the asymptotics of the sequence \({\mathbb {P}}\big (\thinspace {\mathbf {Z}}(n)={\boldsymbol {x}}\thinspace \big ) \) is substantially different. In turn, the presence of the factor \(\psi _1\big ({\lambda }({\boldsymbol {\alpha }}),{\boldsymbol {\mu }}({\boldsymbol {\alpha }})\big )\) on the right-hand side of (2.10) explains why condition (2.9) is essential.

Corollary 2.1\(. \) Under the conditions of Theorem 2.1, representation (2.10) is preserved if we replace \( {\mathbf {Z}}(n)\) by \( {\mathbf {Z}}_+(n)\) and \( I_Z({\boldsymbol {\alpha }})\) by \(I_{Z+}({\boldsymbol {\alpha }}){:=}e^{-{\lambda }({\boldsymbol {\alpha }})}I_Z({\boldsymbol {\alpha }}) \) therein.

Denote by

$$ {\gamma }(n):=n-T_{{\nu }(n)} \quad \text {and}\quad \chi (n):=T_{{\nu }(n)+1}-n$$

the defect up to the level \(n \) and the overshoot over the level \(n\) of the random walk \(\left \{T_k\right \}\), respectively. Thus, the summand \({\tau }_{{\nu }(n)+1}\) “covering” the level \(n \) is representable as the sum \({\tau }_{{\nu }(n)+1}={\gamma }(n)+\chi (n)\).
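In a simulation, the defect and the overshoot can be read directly off a trajectory; the following sketch (with an assumed bounded integer \(\tau \), purely for illustration) checks the decomposition \({\tau }_{{\nu }(n)+1}={\gamma }(n)+\chi (n)\) together with the lattice bounds \({\gamma }(n)\ge 1 \), \(\chi (n)\ge 0 \).

```python
import random

rng = random.Random(1)
taus = [0] + [rng.randint(1, 4) for _ in range(200)]   # tau_0 = 0, tau_k >= 1
T = [0]
for tau in taus[1:]:
    T.append(T[-1] + tau)                              # renewal times T_k

n = 57
k = max(j for j, Tj in enumerate(T) if Tj < n)   # nu(n)
gamma = n - T[k]                                 # defect gamma(n)
chi = T[k + 1] - n                               # overshoot chi(n)
assert taus[k + 1] == gamma + chi                # tau_{nu(n)+1} = gamma(n) + chi(n)
assert gamma >= 1 and chi >= 0                   # integer levels: gamma(n) >= 1
```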

A. A. Borovkov focused the authors’ attention on the following fact: If, in the local theorem for the c.r.p. \({\mathbf {Z}}(n)\), the defect \( {\gamma }(n)\) is fixed (or appropriately bounded) then the deviation zone can be substantially extended by replacing the condition \(K\subset \mathfrak {A}\setminus \mathfrak {B}_Z\) with the condition \(K\subset \mathfrak {A} \). In other words, we have

Theorem 2.1*\(. \) Suppose that conditions \([\mathbf {C}_0] \) and \( [\mathbf {Z}]\) are fulfilled, a compact set

$$ K\subset \mathfrak {A} $$

is fixed, and the admissible inhomogeneity conditions (2.9) hold.

Then, for any fixed integers \( u\ge 1\), \( v\ge 0\), every sequence \( {\boldsymbol {x}}={\boldsymbol {x}}_n\in {\mathbb Z}^d \) with \( {\boldsymbol {\alpha }}:={{\boldsymbol {x}}}/{n}\in K \) admits the representations

$$ {\mathbb {P}}\big (\thinspace {\mathbf {Z}}(n)={\boldsymbol {x}},\thinspace {\gamma }(n)=u\thinspace \big ) =\psi _1\big ({\lambda }({\boldsymbol {\alpha }}),{\boldsymbol {\mu }}({\boldsymbol {\alpha }}) \big ) \frac {C({\boldsymbol {\alpha }})} {n^{d/2}}e^{-nD({\boldsymbol {\alpha }})}I_Z \big ({\boldsymbol {\alpha }};u)(1+o_u(1) \big ), $$
(2.12)
$$ {\mathbb {P}}\big (\thinspace {\mathbf {Z}}(n)\!=\!{\boldsymbol {x}},\thinspace {\gamma }(n)\!=\!u,~\chi (n)\!=\!v\thinspace \big ) \!=\!\psi _1\big ({\lambda }({\boldsymbol {\alpha }}),{\boldsymbol {\mu }}({\boldsymbol {\alpha }}) \big ) \frac {C({\boldsymbol {\alpha }})} {n^{d/2}}e^{-nD({\boldsymbol {\alpha }})}I_Z({\boldsymbol {\alpha }};u,v) \big (1\!+\!o_{u,v}(1) \big ), $$
(2.13)

where

$$ I_Z({\boldsymbol {\alpha }};u):=e^{{\lambda }({\boldsymbol {\alpha }})u} {\mathbb {P}}\left ({\tau }\ge u\right ), \quad I_Z({\boldsymbol {\alpha }};u,v):=e^{{\lambda }({\boldsymbol {\alpha }})u} {\mathbb {P}}\left ({\tau }=u+v\right ),$$

and the remainders \(o_u(1)={\varepsilon }_n({\boldsymbol {x}})\) and \(o_{u,v}(1)={\varepsilon }_n({\boldsymbol {x}}) \) enjoy (2.11).

Theorems 2.1 and 2.1* will be proved in Subsection 3.3.

Let us now pass to formulating the local theorem for the second c.r.p. \({\mathbf {Y}}(n)\) and its right-continuous version \({\mathbf {Y}}_+(n)\). Put

$$ {\cal M}:= \left \{{\boldsymbol {\mu }}\in {\mathbb R}^d:~{\mathbb {E}} e^{{\boldsymbol {\mu }}{\boldsymbol {\zeta }}}<\infty \right \}.$$

By condition \( [\mathbf {C}_0]\), the interior \(({\cal M}) \) of the set \({\cal M}\) contains the point \({\boldsymbol {\mu }}={\boldsymbol {0}}\). For the processes \({\mathbf {Y}}(n)\) and \({\mathbf {Y}}_+(n) \), the forbidden set is

$$ \mathfrak {B}_Y:= \big \{{\boldsymbol {\alpha }}\in \mathfrak {A}:{\boldsymbol {\mu }}({\boldsymbol {\alpha }})\notin (\mathcal {M}) \big \}.$$

Put

$$ I_Y({\boldsymbol {\alpha }}):= \sum _{m=1}^\infty e^{{\lambda }({\boldsymbol {\alpha }})m}{\mathbb {E}} \big (e^{{\boldsymbol {\mu }}({\boldsymbol {\alpha }}){\boldsymbol {\zeta }}};\thinspace {\tau }\ge m \big ).$$

As follows from Lemma 3.6 (see below), if \({\boldsymbol {\alpha }}\in \mathfrak {A}\setminus \mathfrak {B}_Y \) then \( I_Y({\boldsymbol {\alpha }})<\infty \).

Conversely, if \({\boldsymbol {\alpha }}\in (\mathfrak {B}_Y) \) then

$$ \begin {gathered} I_Y({\boldsymbol {\alpha }}) :=\sum _{m=1}^\infty e^{\lambda ({\boldsymbol {\alpha }})m}\mathbb {E} \big (e^{{\boldsymbol {\mu }}({\boldsymbol {\alpha }}){\boldsymbol {\zeta }}};\thinspace {\tau }\ge m \big ) =\sum _{k=1}^{\infty }\mathbb {E} \big (e^{{\boldsymbol {\mu }}({\boldsymbol {\alpha }}){\boldsymbol {\zeta }}};\thinspace \tau =k \big )\sum _{m=1}^ke^{\lambda ({\boldsymbol {\alpha }})m}\\ \geqslant \sum _{k=1}^{\infty }\mathbb {E} \big (e^{{\boldsymbol {\mu }}({\boldsymbol {\alpha }}){\boldsymbol {\zeta }}};\thinspace \tau =k \big )e^{\lambda ({\boldsymbol {\alpha }})}= e^{\lambda ({\boldsymbol {\alpha }})}\mathbb {E} e^{{\boldsymbol {\mu }}({\boldsymbol {\alpha }}){\boldsymbol {\zeta }}}=\infty \end {gathered}$$

and, as in the case of the process \(\mathbf {Z}(n)\), the asymptotics of probability (2.15) in the domain \((\mathfrak {B}_Y)\) is substantially different.

Theorem 2.2\(. \) Suppose that conditions \([\mathbf {C}_0] \) and \( [\mathbf {Z}]\) are fulfilled and a compact set \(K\) such that

$$ K\subset \mathfrak {A}\setminus \mathfrak {B}_Y $$

is fixed. Suppose that the initial random vector \({\boldsymbol {\xi }}_1=({\tau }_1,{\boldsymbol {\zeta }}_1)\) satisfies the admissible inhomogeneity condition

$$ \big (0,{\boldsymbol {\mu }}({\boldsymbol {\beta }}) \big )\in (\mathcal {A}_1) \enspace \enspace \text {and} \enspace \enspace \big ({\lambda }({\boldsymbol {\beta }}),{\boldsymbol {\mu }}({\boldsymbol {\beta }}) \big )\in (\mathcal {A}_1) \enspace \enspace \mbox {for all}\enspace \enspace {\boldsymbol {\beta }}\in K.$$
(2.14)

Then the following relation holds for \({\boldsymbol {\alpha }}:={{\boldsymbol {x}}}/{n}\in K\), \({\boldsymbol {x}}\in {\mathbb Z}^d \) as \( n\to \infty \):

$$ {\mathbb P}\big ({\mathbf {Y}}(n)={\boldsymbol {x}} \big )=\psi _1 \big ({{\lambda }}({\boldsymbol {\alpha }}),{{\boldsymbol {\mu }}}({\boldsymbol {\alpha }}) \big )\frac {C({\boldsymbol {\alpha }})}{n^{d/2}}e^{-nD({\boldsymbol {\alpha }})}I_Y({\boldsymbol {\alpha }}) \big (1+o(1) \big ).$$
(2.15)

Here the remainder \(o(1)\thinspace {=}\thinspace {\varepsilon }_n({\boldsymbol {x}})\) satisfies (2.11).

Remark 2.2\(. \) Suppose that the compact set in Theorem 2.2 has the form \( K=[{\boldsymbol {a}}]_{\varepsilon }\) for sufficiently small \({\varepsilon }>0\). Then the admissible inhomogeneity condition (2.14) of Theorem 2.2 is fulfilled automatically.

Corollary 2.2\(. \) Under the conditions of Theorem 2.2, representation (2.15) is preserved if we replace \( {\mathbf {Y}}(n)\) by \( {\mathbf {Y}}_+(n)\) and \( I_Y({\boldsymbol {\alpha }})\) by \(I_{Y+}({\boldsymbol {\alpha }}):=e^{-{\lambda }({\boldsymbol {\alpha }})}I_Y({\boldsymbol {\alpha }}) \) therein.

If we fix the overshoot \(\chi (n)\) in the local theorem for the c.r.p. \({\mathbf {Y}}(n)\), then we can substantially extend the deviation zone by replacing the condition \(K\subset \mathfrak {A}\setminus \mathfrak {B}_Y\) with the condition \(K\subset \mathfrak {A} \). In other words, we have

Theorem 2.2*\(. \) Suppose that conditions \([\mathbf {C}_0] \) and \( [\mathbf {Z}]\) are fulfilled, a compact set \(K\) is fixed such that

$$ K\subset \mathfrak {A} $$

and the admissible inhomogeneity condition (2.14) is fulfilled.

Then, for any fixed integer \( v\ge 0\), any sequence \( {\boldsymbol {x}}={\boldsymbol {x}}_n\in {\mathbb Z}^d \) such that \( {\boldsymbol {\alpha }}:=\frac {{\boldsymbol {x}}}{n}\in K \) admits the representation

$$ {\mathbb P}\big ({\mathbf {Y}}(n)={\boldsymbol {x}},\thinspace \chi (n)=v \big ) =\psi _1\big ({\lambda }({\boldsymbol {\alpha }}),{\boldsymbol {\mu }}({\boldsymbol {\alpha }}) \big ) \frac {C({\boldsymbol {\alpha }})} {n^{d/2}} e^{-nD({\boldsymbol {\alpha }})}I_Y({\boldsymbol {\alpha }};v) \big (1+o_v(1) \big ),$$
(2.16)

where

$$ I_Y({\boldsymbol {\alpha }};v):=e^{-{\lambda }({\boldsymbol {\alpha }})v}$$

and the remainder \(o_v(1)={\varepsilon }_n({\boldsymbol {x}})\) satisfies (2.11).

Theorems 2.2 and 2.2* will be proved in Subsection 3.4.

Note in passing that the proofs of Theorems 2.1 and 2.2 differ substantially at the key points: in the first proof, we must fight against a “thick right tail” of the random variable \(\tau \), and in the second, against “thick tails” of the random vector \(\boldsymbol {\zeta } \). Therefore, having proved Theorem 2.1 for one version of the compound renewal process, we cannot obtain the desired assertion for the other version as an easy consequence.

Let us now turn to large and normal deviations for c.r.p.

For \({\boldsymbol {\alpha }}\in {\mathbb R}^d \), \({\varepsilon }>0 \) denote by

$$ [{\boldsymbol {\alpha }}]_{\varepsilon }:=\big \{{\boldsymbol {\beta }}\in {\mathbb R}^d:|{\boldsymbol {\beta }}-{\boldsymbol {\alpha }}|\le {\varepsilon } \big \} $$

the closed ball of radius \(\varepsilon \) centered at \(\boldsymbol {\alpha } \). Put

$$ {\boldsymbol {a}}:=\frac {{\mathbb {E}}{\boldsymbol {\zeta }}} {{\mathbb {E}}\tau }. $$

Now, consider the local theorems for the processes \({\mathbf {Z}}(n) \) and \({\mathbf {Y}}(n) \) in the domains of moderately large and normal deviations. Put

$$ B^2:=\frac 1{{\mathbb {E}}{\tau }}B^2_0, \quad \sigma ^2:=|B^2|, $$

where

$$ B_0^2:={\mathbb {E}}({\boldsymbol {\zeta }}-{\boldsymbol {a}}{\tau })({\boldsymbol {\zeta }}-{\boldsymbol {a}}{\tau })^\top $$

is the covariance matrix of the random vector \({\boldsymbol {\zeta }}-{\boldsymbol {a}}{\tau }\) and \(\top \) is the transposition sign.

Lemma 2.1 \(. \) Suppose that condition \([\mathbf {C}_0] \) is fulfilled. Then

$$ C({\boldsymbol {a}})I_Z({\boldsymbol {a}})=C({\boldsymbol {a}})I_Y({\boldsymbol {a}})= \frac 1{\sigma (2\pi )^{d/2}}, $$
(2.17)
$$ D({\boldsymbol {a}})=0,\enspace \enspace D^{\prime }({\boldsymbol {a}})={\boldsymbol {0}},\enspace \enspace D^{\prime {}\prime }({\boldsymbol {a}})=B^{-2}. $$
(2.18)

Lemma 2.1 will be proved in Subsection 3.5. With account taken of 2.1 and 2.2, Theorems 2.1 and 2.2 and Lemma 2.1 imply

Corollary 2.3 \(. \) Let conditions \( [\mathbf {C}_0]\) and \( [\mathbf {Z}]\) be fulfilled. Then, for every sequence \({\boldsymbol {x}}={\boldsymbol {x}}_n\in {\mathbb Z}^d \) such that \( {\boldsymbol {\alpha }}:=\frac {{\boldsymbol {x}}}{n}\rightarrow {\boldsymbol {a}} \) as \( n\rightarrow \infty \) , the following relations hold:

$$ {\mathbb {P}}\big (\thinspace {\mathbf {Z}}(n)={\boldsymbol {x}}\thinspace \big )\sim {\mathbb {P}}\big (\thinspace {\mathbf {Y}}(n)={\boldsymbol {x}}\thinspace \big )\sim \frac 1{\sigma (2\pi n)^{d/2}}e^{-nD({\boldsymbol {\alpha }})}. $$

In particular, if \( {\boldsymbol {\alpha }}-{\boldsymbol {a}}=o\left (\frac 1{n^{1/3}}\right ) \) as \( n\to \infty \) then for \( {\boldsymbol {y}}:={\boldsymbol {x}}-n{\boldsymbol {a}} \) we have

$$ nD({\boldsymbol {\alpha }})=\frac {{\boldsymbol {y}} B^{-2}{\boldsymbol {y}}^\top }{2n}+o(1),$$

which implies that

$$ {\mathbb {P}}\big (\thinspace {\mathbf {Z}}(n)={\boldsymbol {x}}\thinspace \big )\sim {\mathbb {P}}\big (\thinspace {\mathbf {Y}}(n)={\boldsymbol {x}}\thinspace \big )\sim \frac 1{\sigma (2\pi n)^{d/2}} \exp \bigg \{{-}\frac {{\boldsymbol {y}} B^{-2}{\boldsymbol {y}}^\top }{2n} \bigg \}. $$

Thus, the local theorems for the c.r.p. \({\mathbf {Z}}(n) \) and \({\mathbf {Y}}(n) \) in the domains of large and normal deviations coincide.

2.3. The local theorem for the renewal function

In the arithmetic case, a substantial role in proving Theorems 2.1 and 2.2 is played by the local limit Theorem 2.3 for the renewal function (see (1.8)), in which the asymptotics of \(H(t,{\boldsymbol {x}}) \) is found (as \(n\to \infty \)) for a sequence \((t,{\boldsymbol {x}})=(t_n,{\boldsymbol {x}}_n)\in {\mathbb Z}^{d+1} \) in the case where the point \(({\theta },{\boldsymbol {\alpha }}):=\frac 1n(t,{\boldsymbol {x}})\) lies in some fixed compact set \(K\) contained in the analyticity domain \(\mathcal {D}\) of the function \( D({\theta },{\boldsymbol {\alpha }})\).

Theorem 2.3\(. \) Suppose that conditions \([\mathbf {C}_0] \) and \( [\mathbf {Z}]\) are fulfilled and, for a fixed compact set \( K\subset \mathcal {D}, \) the admissible inhomogeneity condition

$$ \mathcal {A}_K\subset (\mathcal {A}_1), \enspace \enspace \mbox {where} \enspace \enspace \mathcal {A}_K:= \bigg \{({\lambda },{\boldsymbol {\mu }})=\bigg ({\lambda }\bigg (\frac {{\boldsymbol {\alpha }}}{{\theta }} \bigg ),{\boldsymbol {\mu }} \bigg (\frac {{\boldsymbol {\alpha }}}{{\theta }} \bigg ) \bigg ):({\theta },{\boldsymbol {\alpha }})\in K \bigg \} $$

is fulfilled. Then for \( (t,{\boldsymbol {x}})\in {\mathbb Z}^{d+1} \), \( ({\theta },{\boldsymbol {\alpha }}):=\frac 1n(t,{\boldsymbol {x}}) \) we have the representation

$$ H(t,{\boldsymbol {x}})=\psi _1 \Bigg ({\lambda }\bigg (\frac {{\boldsymbol {\alpha }}}{{\theta }} \bigg ),\thinspace {\boldsymbol {\mu }} \bigg (\frac {{\boldsymbol {\alpha }}}{{\theta }} \bigg ) \Bigg ) \frac {C_H({\theta },{\boldsymbol {\alpha }})} {n^{d/2}}e^{-nD({\theta },{\boldsymbol {\alpha }})}\big (1+o(1) \big ),$$
(2.19)

in which the remainder \(o(1)={\varepsilon }_n(t,{\boldsymbol {x}})\) satisfies the relation

$$ \lim _{n\to \infty } \sup _{\substack {(t,{\boldsymbol {x}})\in {\mathbb Z}^{d+1},\\ ({t}/{n},{{\boldsymbol {x}}}/{n})\in K}} \big \vert {\varepsilon }_n(t,{\boldsymbol {x}}) \big \vert =0 $$

and the function \( C_{H}(\theta ,{\boldsymbol {\alpha }}) \) is defined by (3.6).

In the one-dimensional case \(d=1\), Theorem 2.3 was proved in [12]; it supplements Theorem 4.1 in [10] and Theorem 4.1 in [13] where integro-local theorems were established for the renewal function in the nonlattice one- and multidimensional cases, respectively.

3. PROOFS OF THE MAIN ASSERTIONS

3.1. The first deviation function. The local theorem for a one-dimensional random walk.

In the proof of the assertions of the present article (as in [9, 10, 13]), we use the known integro-local theorems for the sums \({\bf S}_n=(T_n,{\mathbf {Z}}_n)\) (see, for example, [2, Sec. 2.9] or Theorem 3.1 below). An important role here is played by the (first) deviation function

$$ \Lambda ({\theta },{\boldsymbol {\alpha }}):=\sup _{({\lambda },{\boldsymbol {\mu }})}\big \{{\lambda }{\theta }+{\boldsymbol {\mu }}{\boldsymbol {\alpha }}-A({\lambda },{\boldsymbol {\mu }}) \big \} $$
(3.1)

corresponding to the random vector \({\boldsymbol {\xi }}=({\tau },{\boldsymbol {\zeta }})\). This is the Legendre transform of the convex lower semicontinuous function \(A({\lambda },{\boldsymbol {\mu }})\); therefore the function \(\Lambda ({\theta },{\boldsymbol {\alpha }})\) is also convex and lower semicontinuous.
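As a simple illustration of (3.1) (with distributions chosen purely for analytic convenience, not tied to the arithmetic setting of this paper), take \(d=1 \), let \(\tau \) be exponential with \({\mathbb {E}}{\tau }=1 \), and let \(\zeta \) be standard normal and independent of \(\tau \). Then \(A({\lambda },{\mu })=-\ln (1-{\lambda })+{\mu }^2/2 \) for \({\lambda }<1 \), and the suprema over \({\lambda }\) and \({\mu }\) separate:

```latex
\Lambda(\theta,\alpha)
 =\sup_{\lambda<1}\{\lambda\theta+\ln(1-\lambda)\}
 +\sup_{\mu}\Big\{\mu\alpha-\frac{\mu^{2}}{2}\Big\}
 =(\theta-1-\ln\theta)+\frac{\alpha^{2}}{2},\qquad\theta>0,
```

with the suprema attained at \({\lambda }({\theta },{\alpha })=1-1/{\theta } \) and \({\mu }({\theta },{\alpha })={\alpha } \). One checks directly that \(\Lambda \) is convex, vanishes only at \(({\theta },{\alpha })=({\mathbb {E}}{\tau },{\mathbb {E}}{\zeta })=(1,0) \), and \(\Lambda ^{\prime }({\theta },{\alpha })=({\lambda }({\theta },{\alpha }),{\mu }({\theta },{\alpha })) \).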

Let us agree on the following notation. Given a function \(F(u,{\boldsymbol {v}}) \) of two variables \(u\in {\mathbb R} \) and \({\boldsymbol {v}}\in {\mathbb R}^d\), in what follows, the subscripts \((1) \) and \((2) \) will designate the derivatives with respect to the first and second arguments respectively (in the latter case, the gradient); for example:

$$ \begin {aligned} F^{\prime }_{(1)}(u_1,{\boldsymbol {v}}_1)&= \frac {\partial } {\partial u}F(u,{\boldsymbol {v}}_1)\biggm \vert _{u=u_1}, &\quad F^{\prime {}\prime }_{(1)}(u_1,{\boldsymbol {v}}_1)&= \frac {\partial ^2} {\partial u^2}F(u,{\boldsymbol {v}}_1)\biggm \vert _{u=u_1},\\ F^{\prime {}\prime }_{(2)}(u_1,{\boldsymbol {v}}_1)&= \frac {\partial ^2} {\partial {\boldsymbol {v}}^2}F(u_1,{\boldsymbol {v}})\biggm \vert _{{\boldsymbol {v}}={\boldsymbol {v}}_1}, &\quad F^{\prime {}\prime }_{(2,1)}(u_1,{\boldsymbol {v}}_1)&= \frac {\partial } {\partial u} \frac {\partial } {\partial {\boldsymbol {v}}}F(u,{\boldsymbol {v}})\biggm \vert _{(u,{\boldsymbol {v}})=(u_1,{\boldsymbol {v}}_1)}. \end {aligned} $$

Denote by \(F^{\prime }=F^{\prime }(u,{\boldsymbol {v}})\) and \(F^{\prime {}\prime }=F^{\prime {}\prime }(u,{\boldsymbol {v}})\) the vector

$$ F^{\prime }=F^{\prime }(u,{\boldsymbol {v}})=\left (F_{(1)}^{\prime }(u,{\boldsymbol {v}}),F_{(2)}^{\prime }(u,{\boldsymbol {v}}) \right ) $$

of the first derivatives and the matrix

$$ F^{\prime {}\prime }=F^{\prime {}\prime }(u,{\boldsymbol {v}})$$

of order \(d+1 \) of the second derivatives respectively. Designate as \(|F^{\prime {}\prime }|\) the determinant of the matrix \(F^{\prime {}\prime }\).

Alongside the sets \(\mathcal {A}_1\) and \(\mathcal {A} \), we will need the analyticity domain \(\mathcal {L} \) of the function \(\Lambda ({\theta },{\boldsymbol {\alpha }})\). This domain consists of the points \(({\theta },{\boldsymbol {\alpha }})\in {\mathbb R}^{d+1} \) for which the system of equations

$$ \begin {cases} A^{\prime }_{(1)}({\lambda },{\boldsymbol {\mu }})={\theta },\\ A^{\prime }_{(2)}({\lambda },{\boldsymbol {\mu }})={\boldsymbol {\alpha }} \end {cases}$$
(3.2)

for the coordinates of the point \(({\lambda },{\boldsymbol {\mu }}) \) (where the supremum in (3.1) is attained) has a solution

$$ \big ({\lambda }({\theta },{\boldsymbol {\alpha }}),{\boldsymbol {\mu }}({\theta },{\boldsymbol {\alpha }}) \big ),$$

belonging to \((\mathcal {A}) \), and so

$$ \mathcal {L}=\big \{A^{\prime }({\lambda },{\boldsymbol {\mu }}):({\lambda },{\boldsymbol {\mu }})\in (\mathcal {A}) \big \}.$$

Since the function \(A({\lambda },{\boldsymbol {\mu }})\) is strictly convex in \((\mathcal {A}) \), such a solution is always unique. What has been said means that the conditions

$$ ({\theta },{\boldsymbol {\alpha }})\in \mathcal {L} \enspace \enspace \mbox {and} \enspace \enspace \big ({\lambda }({\theta },{\boldsymbol {\alpha }}),{\boldsymbol {\mu }}({\theta },{\boldsymbol {\alpha }}) \big )\in (\mathcal {A})$$

are equivalent. Moreover, the mappings

$$ ({\theta },{\boldsymbol {\alpha }})=A^{\prime }({\lambda },{\boldsymbol {\mu }})\enspace \enspace \mbox {and} \enspace \enspace ({\lambda },{\boldsymbol {\mu }})=\Lambda ^{\prime }({\theta },{\boldsymbol {\alpha }})$$

are mutually inverse; they establish a one-to-one correspondence between the domains \(\mathcal {A} \) and \(\mathcal {L} \). Clearly, the point \(({\theta },{\boldsymbol {\alpha }})=(a_{\tau },{\boldsymbol {a}}_{\boldsymbol {\zeta }}) \), where \(a_{\tau }:={\mathbb {E}}{\tau }\) and \({\boldsymbol {a}}_{\boldsymbol {\zeta }}:={\mathbb {E}}{\boldsymbol {\zeta }} \), always belongs to \(\mathcal {L} \); for it,

$$ \big ({\lambda }(a_{\tau },{\boldsymbol {a}}_{\boldsymbol {\zeta }}),{\boldsymbol {\mu }}(a_{\tau },{\boldsymbol {a}}_{\boldsymbol {\zeta }}) \big )=(0,{\boldsymbol {0}})\in (\mathcal {A}). $$

It is known (see, for example, [2, 9, 13]) that if \(({\theta },{\boldsymbol {\alpha }})\in \mathcal {L}\) then

$$ \Lambda ^{\prime }({\theta },{\boldsymbol {\alpha }})= \big (\Lambda ^{\prime }_{(1)}({\theta },{\boldsymbol {\alpha }}),\thinspace \Lambda ^{\prime }_{(2)}({\theta },{\boldsymbol {\alpha }}) \big )=\big ({\lambda }({\theta },{\boldsymbol {\alpha }}),\thinspace {\boldsymbol {\mu }}({\theta },{\boldsymbol {\alpha }}) \big ),$$

where the pair of functions \({\lambda }({\theta },{\boldsymbol {\alpha }})\), \({\boldsymbol {\mu }}({\theta },{\boldsymbol {\alpha }})\) is the only solution to (3.2).
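As a quick numerical sanity check of the last relation, the sketch below (in Python, for a hypothetical toy distribution: \(\tau \) uniform on \(\{1,2\} \) and \(\zeta \) uniform on \(\{-1,1\} \), independent, so that system (3.2) is solvable in closed form) compares \(\Lambda ^{\prime }({\theta },{\boldsymbol {\alpha }})\), computed by finite differences, with the solution \(({\lambda }({\theta },{\boldsymbol {\alpha }}),{\boldsymbol {\mu }}({\theta },{\boldsymbol {\alpha }}))\) of (3.2):

```python
import math

# Toy example (an illustration only, not the paper's setup):
# tau uniform on {1, 2}, zeta uniform on {-1, +1}, independent.

def A(lam, mu):
    # cumulant function A(lambda, mu) = ln E exp(lambda*tau + mu*zeta)
    return math.log((math.exp(lam) + math.exp(2 * lam)) / 2) + math.log(math.cosh(mu))

def solve_system(theta, alpha):
    # closed-form solution of system (3.2): A'_(1) = theta, A'_(2) = alpha
    lam = math.log((theta - 1) / (2 - theta))  # requires 1 < theta < 2
    mu = math.atanh(alpha)                     # requires |alpha| < 1
    return lam, mu

def Lambda(theta, alpha):
    # deviation function (3.1), evaluated at the maximizing point
    lam, mu = solve_system(theta, alpha)
    return lam * theta + mu * alpha - A(lam, mu)

theta, alpha, h = 1.6, 0.3, 1e-6
lam, mu = solve_system(theta, alpha)
# central differences for Lambda'(theta, alpha)
d_theta = (Lambda(theta + h, alpha) - Lambda(theta - h, alpha)) / (2 * h)
d_alpha = (Lambda(theta, alpha + h) - Lambda(theta, alpha - h)) / (2 * h)
print(abs(d_theta - lam) < 1e-5, abs(d_alpha - mu) < 1e-5)  # True True
# Lambda vanishes at the mean point (E tau, E zeta) = (1.5, 0)
print(Lambda(1.5, 0.0) == 0.0)  # True
```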

Theorem 2.3.2 in [2, p. 72] implies the following version of the local theorem for the sums \({\bf S}_n \) of random vectors in the arithmetic case:

Theorem 3.1 \(. \) For \( n\ge 1\) , let

$$ \mathbf {S}_n:=\sum _{k=1}^n{\boldsymbol {\xi }}_k=(T_n,\mathbf {Z}_n) $$

be the sum of independent identically distributed random vectors. Suppose that conditions \([\mathbf {C}_0] \) and \( [\mathbf {Z}]\) are fulfilled, a compact set \(K\subset \mathcal {L} \) is fixed, and the point \( (t,{\boldsymbol {x}})\in {\mathbb Z}^{d+1} \) is such that \( ({\gamma },{\boldsymbol {\beta }}):=({t}/{n},{{\boldsymbol {x}}}/{n})\in K \) . Then, for

$$ C_1({\gamma },{\boldsymbol {\beta }}):= \frac {\sqrt {\big \vert \Lambda ^{\prime {}\prime }({\gamma },{\boldsymbol {\beta }}) \big \vert }} {(2\pi )^{{(d+1)}/{2}}},$$

as \( n\to \infty \) , we have

$$ {\mathbb {P}}\left (T_n=t,~{\mathbf {Z}}_n={\boldsymbol {x}}\right )= \frac {C_1({\gamma },{\boldsymbol {\beta }})} {n^{{(d+1)}/{2}}}e^{-n\Lambda ({\gamma },{\boldsymbol {\beta }})} \big (1+o(1) \big ),$$
(3.3)

where the remainder \( o(1)={\varepsilon }_n(t,{\boldsymbol {x}}) \) is uniform over \( ({\gamma },{\boldsymbol {\beta }})\in K \) ,

$$ \lim _{n\to \infty } \sup _{\substack {(t,{\boldsymbol {x}})\in {\mathbb Z}^{d+1},\\ ({t}/{n},{{\boldsymbol {x}}}/{n})\in K}} \big \vert {\varepsilon }_n(t,{\boldsymbol {x}}) \big \vert =0.$$
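Theorem 3.1 can be checked numerically. The sketch below (Python; a hypothetical toy distribution with \(\tau \) uniform on \(\{1,2\} \) and \(\zeta \) uniform on \(\{0,1\} \), independent, so that the walk is genuinely two-dimensional arithmetic and the exact joint probability factorizes into binomials) compares the exact value of \({\mathbb {P}}(T_n=t,\thinspace Z_n=x) \) with the right-hand side of (3.3) for \(d=1 \):

```python
import math

# Toy walk (illustration only): tau uniform on {1,2}, zeta uniform on {0,1},
# independent, so T_n = n + Binomial(n,1/2) and Z_n = Binomial(n,1/2).

def Lambda(theta, alpha):
    # deviation function (3.1) for this toy distribution, in closed form
    lam = math.log((theta - 1) / (2 - theta))   # solves A'_(1) = theta
    mu = math.log(alpha / (1 - alpha))          # solves A'_(2) = alpha
    A = (math.log((math.exp(lam) + math.exp(2 * lam)) / 2)
         + math.log((1 + math.exp(mu)) / 2))
    return lam * theta + mu * alpha - A

def exact_prob(n, t, x):
    # P(T_n = t, Z_n = x) = C(n, t-n) * C(n, x) / 4^n
    return math.comb(n, t - n) * math.comb(n, x) / 4 ** n

n, t, x = 60, 90, 36                    # gamma = 1.5, beta = 0.6
gamma, beta = t / n, x / n
h = 1e-4
# Lambda'' is diagonal here (tau and zeta independent), so |Lambda''| is a product
d2_g = (Lambda(gamma + h, beta) - 2 * Lambda(gamma, beta) + Lambda(gamma - h, beta)) / h**2
d2_b = (Lambda(gamma, beta + h) - 2 * Lambda(gamma, beta) + Lambda(gamma, beta - h)) / h**2
C1 = math.sqrt(d2_g * d2_b) / (2 * math.pi)           # (2*pi)^((d+1)/2) with d = 1
approx = C1 / n * math.exp(-n * Lambda(gamma, beta))  # right-hand side of (3.3)
print(round(exact_prob(n, t, x) / approx, 2))         # close to 1 for moderate n
```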

3.2. Proof of Theorem 2.3

First we present a form of the local theorem for the renewal function that is equivalent to Theorem 2.3 and more convenient for the proof.

Theorem 2.3A\(. \) Suppose that conditions \([\mathbf {C}_0] \) and \( [\mathbf {Z}]\) are fulfilled and, for a fixed point

$$ ({\theta }_0,{\boldsymbol {\alpha }}_0)\in \mathcal {D}, $$

the admissible inhomogeneity condition

$$ \Bigg ({\lambda }\bigg (\frac {{\boldsymbol {\alpha }}_0}{{\theta }_0} \bigg ),\thinspace {\boldsymbol {\mu }} \bigg (\frac {{\boldsymbol {\alpha }}_0} {{\theta }_0} \bigg ) \Bigg )\in (\mathcal {A}_1) $$

holds. Then any sequence

$$ (t,{\boldsymbol {x}})\in {\mathbb Z}^{d+1} $$

such that

$$ ({\theta },{\boldsymbol {\alpha }}):=\bigg (\frac {t}{n}, \frac {{\boldsymbol {x}}}{n} \bigg ) \to ({\theta }_0,{\boldsymbol {\alpha }}_0)\enspace \enspace \mbox {as}\enspace \enspace n\to \infty$$

admits representation (2.19) in which the remainder \( o(1)={\varepsilon }_n\) satisfies the relation

$$ \lim _{n\to \infty }|{\varepsilon }_n|=0. $$

In [12], a proof is given of the equivalence of Theorems 2.3 and 2.3A in the one-dimensional case \(d=1 \). This proof remains valid for \(d\ge 1 \).

For \(({\theta },{\boldsymbol {\alpha }})\in \mathcal {D} \), we put for brevity

$$ \big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big ):=A^{\prime } \big (\widehat {{\lambda }}, \widehat {{\boldsymbol {\mu }}} \big ),$$
(3.4)
$$ \big (\widehat {\lambda },\widehat {{\boldsymbol {\mu }}} \big ):= \Bigg (\lambda \bigg (\frac {{\boldsymbol {\alpha }}}{{\theta }} \bigg ),{\boldsymbol {\mu }} \bigg (\frac {{\boldsymbol {\alpha }}}{{\theta }} \bigg ) \Bigg )=D^{\prime }({\theta },{\boldsymbol {\alpha }})= \Bigg ({-}A\bigg ({\boldsymbol {\mu }}\bigg (\frac {{\boldsymbol {\alpha }}}{{\theta }} \bigg ) \bigg ),{\boldsymbol {\mu }} \bigg (\frac {{\boldsymbol {\alpha }}}{{\theta }} \bigg ) \Bigg ), $$
(3.5)

and so the vectors

$$ \begin {gathered} \big (\widehat {\lambda },\widehat {{\boldsymbol {\mu }}} \big )= \big (\widehat {\lambda }({\theta },{\boldsymbol {\alpha }}), \widehat {{\boldsymbol {\mu }}}({\theta },{\boldsymbol {\alpha }}) \big )= \Bigg (\lambda \bigg (\frac {{\boldsymbol {\alpha }}}{{\theta }} \bigg ),{\boldsymbol {\mu }} \bigg (\frac {{\boldsymbol {\alpha }}}{{\theta }} \bigg ) \Bigg ),\\ \big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big )= \big (\widehat {{\theta }}({\theta },{\boldsymbol {\alpha }}), \widehat {{\boldsymbol {\alpha }}}({\theta },{\boldsymbol {\alpha }}) \big )= A^{\prime }\Bigg (\lambda \bigg (\frac {{\boldsymbol {\alpha }}}{{\theta }} \bigg ),{\boldsymbol {\mu }} \bigg (\frac {{\boldsymbol {\alpha }}}{{\theta }} \bigg ) \Bigg ) \end {gathered} $$

are functions of the single variable \({{\boldsymbol {\alpha }}}/{{\theta }}\).

For proving Theorem 2.3A, we also need

Lemma 3.1 [9, 10] \(. \) Suppose that condition \([\mathbf {C}_0] \) is fulfilled and \( ({\theta },{\boldsymbol {\alpha }})\in \mathcal {D} \).

Then the following hold:

  I.

    The minimum of the function \(L(r)=L_{{\theta },{\boldsymbol {\alpha }}}(r):=r\Lambda ({{\theta }}/{r},{{\boldsymbol {\alpha }}}/{r}) \) is attained at the only point

    $$ r_{{\theta },{\boldsymbol {\alpha }}}= \frac {{\theta }}{\widehat {{\theta }}}\enspace , \enspace \enspace \mbox {and also}\enspace \enspace r_{{\theta },{\boldsymbol {\alpha }}}\widehat {{\boldsymbol {\alpha }}}={\boldsymbol {\alpha }}.$$

    The functions \( \widehat {\lambda }\) and \( \widehat {{\boldsymbol {\mu }}}\) admit the representations

    $$ \widehat {\lambda }=\lambda \bigg (\frac {{\theta }}{r_{{\theta },{\boldsymbol {\alpha }}}}, \frac {{\boldsymbol {\alpha }}}{r_{{\theta },{\boldsymbol {\alpha }}}} \bigg ), \quad \widehat {{\boldsymbol {\mu }}}={\boldsymbol {\mu }} \bigg (\frac {{\theta }}{r_{{\theta },{\boldsymbol {\alpha }}}}, \frac {{\boldsymbol {\alpha }}}{r_{{\theta },{\boldsymbol {\alpha }}}} \bigg ), $$

    where the functions \( \lambda (\boldsymbol {\cdot }\thinspace ,\boldsymbol {\cdot }) \) and \( {\boldsymbol {\mu }}(\boldsymbol {\cdot }\thinspace ,\boldsymbol {\cdot }) \) are a solution to system (3.2). Moreover,

    $$ L^{\prime }(r_{{\theta },{\boldsymbol {\alpha }}})=0, \quad L^{\prime {}\prime }(r_{{\theta },{\boldsymbol {\alpha }}})=\frac 1{r_{{\theta },{\boldsymbol {\alpha }}}} \big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big )\widehat {\Lambda }^{\prime {}\prime } \big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big )^\top >0, $$

    where the functions \( \widehat {{\theta }}=\widehat {{\theta }}({\theta },{\boldsymbol {\alpha }}) \), \( \widehat {{\boldsymbol {\alpha }}}=\widehat {{\boldsymbol {\alpha }}}({\theta },{\boldsymbol {\alpha }})\) are defined in (3.4), (3.5),

    $$ \widehat {\Lambda }^{\prime {}\prime }:=\Lambda ^{\prime {}\prime }({\theta },{\boldsymbol {\alpha }}) \bigr \vert _{({\theta },{\boldsymbol {\alpha }})=(\widehat {{\theta }},\widehat {{\boldsymbol {\alpha }}} )}.$$
  II.

    Given \({\varepsilon }>0 \) , there exists \( {\varepsilon }_1>0\) such that

    $$ \min _{|r-r_{{\theta },{\boldsymbol {\alpha }}}|\thinspace \ge {\varepsilon }}L(r)\ge L(r_{{\theta },{\boldsymbol {\alpha }}})+{\varepsilon }_1.$$

For \(({\theta },{\boldsymbol {\alpha }})\in \mathcal {D} \), we put

$$ C_H({\theta },{\boldsymbol {\alpha }}) :=\sqrt {\frac {\widehat {{\theta }}^d \big \vert \Lambda ^{\prime {}\prime } \big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big ) \big \vert } {(2\pi {\theta })^dQ\big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big )}},\quad Q\big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big ) :=\big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big )\Lambda ^{\prime {}\prime } \big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big ) \big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big )^\top , $$
(3.6)

and so

$$ \big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big )\Lambda ^{\prime {}\prime } \big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big ) \big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big )^\top :=\widehat {{\theta }}^{\thinspace 2}\Lambda ^{\prime {}\prime }_{(1)} \big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big )+2\widehat {{\theta }}\Lambda ^{\prime {}\prime }_{(1,2)} \big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big )\widehat {{\boldsymbol {\alpha }}}^\top + \widehat {{\boldsymbol {\alpha }}}\Lambda ^{\prime {}\prime }_{(2)} \big (\widehat {{\theta }}, \widehat {{\boldsymbol {\alpha }}} \big )\widehat {{\boldsymbol {\alpha }}}^\top . $$

Proof of Theorem 2.3A. Relying on Theorem 3.1 and Lemma 3.1, we carry out the proof in two steps.

I. Denote by \(H_0(B)\) the renewal measure in the case where the vectors \({\boldsymbol {\xi }}_1=({\tau }_1,{\boldsymbol {\zeta }}_1)\) and \({\boldsymbol {\xi }}=({\tau },{\boldsymbol {\zeta }})\) are identically distributed (i.e., all the summands \({\boldsymbol {\xi }}_i=({\tau }_i,{\boldsymbol {\zeta }}_i)\), \(i\ge 1 \), have the same distribution). At Stage I, we prove Theorem 2.3A in this homogeneous case. It suffices to prove that, for any fixed \(({\theta }_0,{\boldsymbol {\alpha }}_0)\in \mathcal {D}\) and any sequence \( (t,{\boldsymbol {x}})\in {\mathbb Z}^{d+1}\) such that \( ({\theta },{\boldsymbol {\alpha }}):=\frac {1}{n}(t,{\boldsymbol {x}})\to ({\theta }_0,{\boldsymbol {\alpha }}_0)\) as \(n\to \infty \), we have the relation

$$ H_0\big (\{t\}\times \{{\boldsymbol {x}}\} \big )= \frac 1{n^{d/2}} C_H({\theta },{\boldsymbol {\alpha }})e^{-nD({\theta },{\boldsymbol {\alpha }} )} \big (1+o(1) \big ),$$
(3.7)

which coincides with (2.19) for \(\psi _1=\psi \). The proof of (3.7) on the basis of Theorem 3.1 and Lemma 3.1 is a repetition of the proof of the integro-local theorem for the renewal function in the homogeneous case in [13] (see also the proof of the local theorem in the one-dimensional arithmetic case in [12]); therefore we omit it.

II. At this stage, we represent the renewal measure in the inhomogeneous case as

$$ H(t,{\boldsymbol {x}})= {\mathbb {E}} H_0\big (\{t-{\tau }_1\}\times \{{\boldsymbol {x}}-{\boldsymbol {\zeta }}_1\} \big )$$

and use the following assertion (cf. Lemma 4.2 in [10], which was established in proving Theorem 3.1 in [10]).

Lemma 3.2 \(. \) Suppose that the hypotheses of Theorem 2.3A are fulfilled. Then, for some \(c>0 \) , \( C<\infty \) , \( n_0<\infty \) and for all \(n\ge n_0\) , we have

$$ \Big \vert H(t,{\boldsymbol {x}})-{\mathbb {E}} \Big (H_0\big (\{t-{\tau }_1\}\times \{{\boldsymbol {x}}-{\boldsymbol {\zeta }}_1\} \big );\thinspace |{\boldsymbol {\xi }}_1|\le \ln ^2n \Big ) \Big \vert \le Ce^{-nD({\theta },{\boldsymbol {\alpha }})-c\ln ^2n}. $$
(3.8)

Since the proof of Lemma 3.2 repeats verbatim the proof of Lemma 4.2 in [10] (see also the proofs of Lemma 4.1 in [13] and Lemma 3.2 in [12]), we omit it.

By (3.8), for proving assertion (2.19) in the general case, it suffices to find the asymptotics of the mean

$$ {\mathbb {E}}\Big (H_0\big (\{t-{\tau }_1\}\times \{{\boldsymbol {x}}-{\boldsymbol {\zeta }}_1\} \big );\thinspace |{\boldsymbol {\xi }}_1|\le \ln ^2n \Big ).$$

Using the result of Stage I and (3.8), we have

$$ \begin {gathered} H(t,{\boldsymbol {x}})= \frac 1{n^{d/2}}{\mathbb {E}} \bigg (C_H\bigg ({\theta }{-}\frac {{\tau }_1}{n},{\boldsymbol {\alpha }}{-} \frac {{\boldsymbol {\zeta }}_1}{n} \bigg )e^{{-}nD({\theta }{-}{{\tau }_1}/{n},{\boldsymbol {\alpha }}{-}{{\boldsymbol {\zeta }}_1}/{n})} \big (1{+}o(1) \big );|{\boldsymbol {\xi }}_1|{\le }\ln ^2n \bigg ) \\ + o\bigg ( \frac 1{n^{d/2}}e^{-nD({\theta },{\boldsymbol {\alpha }})} \bigg ). \end {gathered}$$
(3.9)

Since, on the event \(\{|{\boldsymbol {\xi }}_1|\le \ln ^2n\} \), as \(n\to \infty \) we have

$$ {-}nD\bigg ({\theta }-\frac {{\tau }_1}{n},{\boldsymbol {\alpha }}- \frac {{\boldsymbol {\zeta }}_1}{n} \bigg )=-nD({\theta },{\boldsymbol {\alpha }})+\widehat {\lambda }{\tau }_1+ \widehat {{\boldsymbol {\mu }}}{\boldsymbol {\zeta }}_1+o(1), $$

we conclude that the right-hand side of (3.9) coincides with the right-hand side of (2.19).
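The Taylor step behind the last display can be spelled out as follows (using the smoothness of \(D \) near \(({\theta }_0,{\boldsymbol {\alpha }}_0)\in \mathcal {D} \)): by (3.5), \(D^{\prime }({\theta },{\boldsymbol {\alpha }})=\big (\widehat {\lambda },\widehat {{\boldsymbol {\mu }}} \big ) \), whence, on the event \(\{|{\boldsymbol {\xi }}_1|\le \ln ^2n\} \),

```latex
-nD\Big(\theta-\frac{\tau_1}{n},\,\boldsymbol{\alpha}-\frac{\boldsymbol{\zeta}_1}{n}\Big)
 =-nD(\theta,\boldsymbol{\alpha})
  +\tau_1 D'_{(1)}(\theta,\boldsymbol{\alpha})
  +\boldsymbol{\zeta}_1 D'_{(2)}(\theta,\boldsymbol{\alpha})
  +O\Big(\frac{\ln^{4}n}{n}\Big)
 =-nD(\theta,\boldsymbol{\alpha})+\widehat{\lambda}\tau_1
  +\widehat{\boldsymbol{\mu}}\boldsymbol{\zeta}_1+o(1);
```

together with the continuity of \(C_H \), this shows that the expectation in (3.9) reproduces the factor \(\psi _1\big (\widehat {\lambda },\widehat {{\boldsymbol {\mu }}} \big ) \) of (2.19).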

3.3. Proofs of Theorems 2.1, 2.1*, and Corollary 2.1

We now give another (equivalent) form of Theorem 2.1, which is more convenient for the proof (but less convenient for application).

Theorem 2.1A\(. \) Suppose that conditions \([\mathbf {C}_0] \) and \( [\mathbf {Z}]\) are fulfilled and, for a fixed point

$$ {\boldsymbol {\alpha }}_0\in \mathfrak {A}\setminus \mathfrak {B}_Z,$$
(3.10)

the admissible inhomogeneity conditions

$$ \big (\lambda ({\boldsymbol {\alpha }}_0),{\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0) \big )\in (\mathcal {A}_1), $$
(3.11)
$$ {\mathbb {P}}\left ({\tau }_1\ge n\right )=o \bigg (\frac {1}{n^{d/2}}e^{-nD({\boldsymbol {0}})} \bigg )\enspace \enspace \mbox {as}\enspace \enspace n\to \infty \enspace \enspace \mbox {if}\enspace \enspace {\boldsymbol {\alpha }}_0={\boldsymbol {0}} $$
(3.12)

hold. Then, for any sequence \( {\boldsymbol {x}}={\boldsymbol {x}}_n\in {\mathbb Z}^d \) such that

$$ \lim _{n\to \infty }{\boldsymbol {\alpha }}={\boldsymbol {\alpha }}_0, \enspace \enspace \mbox {where}\enspace \enspace {\boldsymbol {\alpha }}:=\frac {{\boldsymbol {x}}}{n},$$

we have representation (2.10) in which the remainder \(o(1)={\varepsilon }_n({\boldsymbol {x}})\) satisfies the relation

$$ \lim _{n\to \infty }\big \vert {\varepsilon }_n({\boldsymbol {x}}) \big \vert =0. $$

We need the following assertion.

Lemma 3.3 \(. \) Suppose that the hypotheses of Theorem 2.1A are fulfilled. Then, for some \(c>0 \) , \( C<\infty \) , \( n_0<\infty \) and, for all \(n\ge n_0\) , we have the inequality

$$ R(n):={\mathbb P} \big ({\mathbf {Z}}(n)={\boldsymbol {x}},\thinspace {\tau }_{{\nu }(n)+1}\ge \ln ^2n \big )\le Cne^{-nD({\boldsymbol {\alpha }})-c\ln ^2n}.$$
(3.13)

The proof of (3.13) in the one-dimensional case \(d=1\) is given in [12] (see the proof of Lemma 3.3 in [12]). Since the proof of inequality (3.13) in the general case \(d\ge 1 \) repeats the proof of Lemma 3.3 in [12] with obvious changes, we omit it.

Put

$$ K_m({\boldsymbol {\alpha }}):=e^{\lambda ({\boldsymbol {\alpha }})m} {\mathbb {P}}\left ({\tau }\ge m\right ), $$
(3.14)

and so,

$$ I_Z({\boldsymbol {\alpha }})=\sum _{m\ge 1}K_m({\boldsymbol {\alpha }}).$$

Lemma 3.4 \(. \) Under the conditions of Theorem 2.1A, there exists \({\delta }_0>0 \) such that

$$ \sum _{m\ge 1}e^{{\delta }_0m}K_m({\boldsymbol {\alpha }}_0)<\infty .$$
(3.15)

Proof. The exponential Chebyshev inequality implies that

$$ {\mathbb {P}}\left ({\tau }\ge m\right )\le e^{-\Lambda _{\tau }(m)}, $$

where

$$ \Lambda _{\tau }(\theta ):= \sup _{\lambda }\big \{\lambda \theta -A(\lambda ,{\boldsymbol {0}}) \big \} $$

is the deviation function for the random variable \(\tau \) satisfying the relation

$$ \liminf _{m\to \infty } \frac 1m\Lambda _{\tau }(m)\ge {\lambda }_+.$$
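The last bound is immediate from the definition of \(\Lambda _{\tau } \): for every fixed \({\lambda } \) with \(A({\lambda },{\boldsymbol {0}})<\infty \),

```latex
\frac{1}{m}\,\Lambda_{\tau}(m)
 \ \ge\ \frac{1}{m}\big(\lambda m-A(\lambda,\boldsymbol{0})\big)
 \ =\ \lambda-\frac{A(\lambda,\boldsymbol{0})}{m}
 \ \longrightarrow\ \lambda\qquad(m\to\infty),
```

and it remains to let \({\lambda }\uparrow {\lambda }_+ \) (here we use that \({\lambda }_+ \) is the right endpoint of the interval of finiteness of \(A({\lambda },{\boldsymbol {0}}) \), as in the sources cited).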

Since, under the conditions of Theorem 2.1A, \({\lambda }({\boldsymbol {\alpha }}_0)<{\lambda }_+ \), there exist \({\delta }_0>0 \) and \(C<\infty \) such that, for all \(m\ge 0 \), we have

$$ {\mathbb {P}}\left ({\tau }\ge m\right )\le Ce^{-({\lambda }({\boldsymbol {\alpha }}_0)+2{\delta }_0)m}.$$

Hence \(e^{{\delta }_0m}K_m({\boldsymbol {\alpha }}_0)\le Ce^{-{\delta }_0m} \), the series in (3.15) converges, and Lemma 3.4 is proved.

Proof of Theorem 2.1A. We have

$$ \begin {gathered} P_1(n) ={\mathbb {P}}\left ({\mathbf {Z}}(n)={\boldsymbol {x}},\thinspace {\tau }_{{\nu }(n)+1}\le \ln ^2 n\right )\\ ={\mathbb {P}}\left ({\mathbf {Z}}_0={\boldsymbol {x}},\thinspace {\nu }(n)=0,\thinspace {\tau }_1\le \ln ^2 n\right ) +\sum _{k=1}^\infty {\mathbb {P}}\left ({\mathbf {Z}}_k={\boldsymbol {x}},\thinspace {\nu }(n)=k,\thinspace {\tau }_{{\nu }(n)+1}\le \ln ^2 n\right )\\ ={\mathbb {P}}\left ({\mathbf {Z}}_0={\boldsymbol {x}},\thinspace {\tau }_1\ge n,\thinspace {\tau }_1\le \ln ^2n\right ) +\sum _{k=1}^\infty {\mathbb {P}}\left ({\mathbf {Z}}_k={\boldsymbol {x}},\thinspace T_k<n\le T_k+{\tau }_{k+1},\thinspace \tau _{k+1}\leqslant \ln ^2n\right )\\ =0+\sum _{k=1}^\infty {\mathbb {P}}\left ({\mathbf {Z}}_k={\boldsymbol {x}},\thinspace T_k<n\le T_k+{\tau }_{k+1},\thinspace \tau _{k+1}\leqslant \ln ^2n\right )\\ =\sum _{m=1}^{\ln ^2n}\thinspace \sum _{k=1}^\infty {\mathbb {P}}\left ({\mathbf {Z}}_k={\boldsymbol {x}},\thinspace T_k=n-m\right ) {\mathbb {P}}\left (m\le {\tau }\le \ln ^2n\right ). \end {gathered}$$

Since the inner sum on the right-hand side of the last formula “folds” into the renewal function \(H(n-m,{\boldsymbol {x}}) \), we obtain the representation

$$ P_1(n)=\sum _{m=1}^{\ln ^2n}H(n-m,{\boldsymbol {x}}) {\mathbb {P}}\left (m\le {\tau }\le \ln ^2n\right ). $$
(3.16)

Now, applying Theorem 2.3, from (3.16) we obtain the representation

$$ P_1(n)=\psi _1\big ({\lambda }({\boldsymbol {\alpha }}_0),{\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0) \big ) \frac {C_H(1,{\boldsymbol {\alpha }}_0)} {n^{d/2}}e^{-nD(1,{\boldsymbol {\alpha }})}P_2(n)\big (1+o(1) \big ),$$
(3.17)

where

$$ \begin {gathered} P_2(n):=\sum _{m=1}^{\ln ^2n}K_m({\boldsymbol {\alpha }},n),\\ K_m({\boldsymbol {\alpha }},n):= \exp \big \{{n\big (D(1,{\boldsymbol {\alpha }})- D(1-{m}/{n},{\boldsymbol {\alpha }}) \big )}\big \} {\mathbb {P}}\left (m\le {\tau }\le \ln ^2n\right ). \end {gathered}$$

Obviously, for every \(m\in \left \{1,2,\dots ,\ln ^2n\right \}\), as \(n\to \infty \) (see (3.14)), we have the convergence

$$ K_m({\boldsymbol {\alpha }},n)\to K_m({\boldsymbol {\alpha }}_0),$$

and the inequalities

$$ K_m({\boldsymbol {\alpha }},n)\le e^{{\delta } m}K_m({\boldsymbol {\alpha }}_0), \quad m\in \left \{1,2,\dots ,\ln ^2n\right \},$$

hold for any \( {\delta }>0\) and all sufficiently large \(n \). Therefore, using Lemma 3.4, by the dominated convergence theorem, we obtain the relation

$$ \lim \limits _{n\to \infty }P_2(n)= \sum _{m=1}^{\infty }K_m({\boldsymbol {\alpha }}_0)=I_Z({\boldsymbol {\alpha }}_0). $$
(3.18)

The relations (3.16)–(3.18) and Lemma 3.3 imply Theorem 2.1A.

Proof of Theorem 2.1*. Repeating the deduction of (3.16) with obvious changes, we obtain

$$ {\mathbb P}\big ({\mathbf {Z}}(n)={\boldsymbol {x}},\thinspace {\gamma }(n)=u \big )=H (n-u,{\boldsymbol {x}}) {\mathbb P}({\tau }\ge u). $$

Now applying Theorem 2.3, the local theorem for the renewal function, we obtain assertion (2.12) of Theorem 2.1*. Since (2.13) is proved similarly, Theorem 2.1* is proved.

Proof of Corollary 2.1. First, we prove that

$$ (n+1)D\bigg (\frac {{\boldsymbol {x}}}{n+1} \bigg )=nD({\boldsymbol {\alpha }})+{\lambda }({\boldsymbol {\alpha }})+o(1). $$
(3.19)

For

$$ \frac {{\boldsymbol {x}}}{n+1}=\frac {{\boldsymbol {x}}}n+{\boldsymbol {y}}={\boldsymbol {\alpha }}+{\boldsymbol {y}}, \enspace \enspace \mbox {where} \enspace \enspace {\boldsymbol {y}}:=\frac {-{\boldsymbol {x}}} {n(n+1)},$$

we have

$$ \begin {aligned} (n\thinspace {+}\thinspace 1)D \bigg (\frac {{\boldsymbol {x}}}{n{+}1} \bigg ) &=nD\bigg (\frac {{\boldsymbol {x}}}{n} \bigg )\thinspace {+}\thinspace n \Bigg (D\bigg (\frac {{\boldsymbol {x}}}{n{+}1} \bigg )\thinspace {-}\thinspace D \bigg (\frac {{\boldsymbol {x}}}{n} \bigg ) \Bigg )\thinspace {+}\thinspace D\bigg (\frac {{\boldsymbol {x}}}{n{+}1} \bigg )\nonumber \\ &=nD({\boldsymbol {\alpha }})\thinspace {+}\thinspace n\big (D({\boldsymbol {\alpha }}\thinspace {+}\thinspace {\boldsymbol {y}})\thinspace {-}\thinspace D({\boldsymbol {\alpha }}) \big )\thinspace {+}\thinspace D({\boldsymbol {\alpha }})\thinspace {+}\thinspace o(1). \end {aligned} $$
(3.20)

By (2.5),

$$ D^{\prime }({\boldsymbol {\alpha }})={\boldsymbol {\mu }}({\boldsymbol {\alpha }}), \enspace \enspace \mbox {and so} \enspace \enspace n\big (D({\boldsymbol {\alpha }}+{\boldsymbol {y}})-D({\boldsymbol {\alpha }}) \big )=-{\boldsymbol {\mu }}({\boldsymbol {\alpha }}){\boldsymbol {\alpha }}+o(1). $$

Therefore, continuing (3.20) and using (2.5), we obtain

$$ (n+1)D \bigg (\frac {{\boldsymbol {x}}}{n+1} \bigg ) =nD({\boldsymbol {\alpha }})-{\boldsymbol {\mu }}({\boldsymbol {\alpha }}){\boldsymbol {\alpha }}+{\boldsymbol {\mu }}({\boldsymbol {\alpha }}){\boldsymbol {\alpha }}+{\lambda }({\boldsymbol {\alpha }})+o(1) =nD({\boldsymbol {\alpha }})+{\lambda }({\boldsymbol {\alpha }})+o(1).$$

Equality (3.19) is proved. For proving Corollary 2.1, it remains to make use of Theorem 2.1, equality (3.19), and the relation

$$ \mathbf {Z}_+(n)=\mathbf {Z}(n+1).$$

3.4. Proofs of Theorems 2.2, 2.2*, and Corollary 2.3

We now give an equivalent version of Theorem 2.2, which is more convenient for the proof.

Theorem 2.2A\(. \) Suppose that conditions \([\mathbf {C}_0] \) and \( [\mathbf {Z}]\) are fulfilled and a point

$$ {\boldsymbol {\alpha }}_0\in \mathfrak {A}\setminus \mathfrak {B}_Y$$
(3.21)

is fixed. Suppose that the admissible inhomogeneity conditions

$$ \big (0,{\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0) \big )\in (\mathcal {A}_1), \quad \big ({\lambda }({\boldsymbol {\alpha }}_0),{\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0) \big )\in (\mathcal {A}_1)$$

are fulfilled for the initial random vector \({\boldsymbol {\xi }}_1=({\tau }_1,{\boldsymbol {\zeta }}_1)\). Then, for any sequence \({\boldsymbol {x}}={\boldsymbol {x}}_n\in {\mathbb Z}^d\) such that

$$ \lim \limits _{n\to \infty }{\boldsymbol {\alpha }}={\boldsymbol {\alpha }}_0, \enspace \enspace \mbox {with}\enspace \enspace {\boldsymbol {\alpha }}:=\frac {{\boldsymbol {x}}}{n}, $$

representation (2.15) is valid, in which the remainder \(o(1)={\varepsilon }_n({\boldsymbol {x}}) \) satisfies the relation

$$ \lim \limits _{n\to \infty }\big \vert {\varepsilon }_n({\boldsymbol {x}}) \big \vert =0.$$

We will need the following assertion.

Lemma 3.5 \(. \) Suppose that the hypotheses of Theorem 2.2A are fulfilled. Then there are constants \(c>0 \) , \( C<\infty \) , \( n_0<\infty \) such that the inequality

$$ {\mathbb P}\Big ({\mathbf {Y}}(n)= {\boldsymbol {x}},\thinspace \big ({\tau }_{\eta (n)},{\boldsymbol {\zeta }}_{\eta (n)} \big )\notin L_n^{d+1},\thinspace \eta (n)\ge 2 \Big )\le Ce^{-nD({\boldsymbol {\alpha }})-c\ln ^2n}$$

holds for all \( n\ge n_0\) , where

$$ L_n^{d+1}:= \Big \{(m,{\boldsymbol {z}})\in {\mathbb Z}^{d+1}:1\le m\le \ln ^2n, \thinspace \max _{1\le i\le d}\big |z_{(i)}\big |\le \ln ^2n \Big \}.$$
(3.22)

Since the proof of Lemma 3.5 is an almost verbatim repetition of the proof of Lemma 2.3 in [15], we omit it.

Lemma 3.6 \(. \) Suppose that the conditions of Theorem 2.2A are fulfilled. Then there exists \( {\delta }>0\) such that

$$ R({\boldsymbol {\alpha }}_0,{\delta }):= \sum _{m\ge 1}e^{({\lambda }({\boldsymbol {\alpha }}_0)+{\delta })m}{\mathbb {E}} \big (e^{{\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0){\boldsymbol {\zeta }}+{\delta }|{\boldsymbol {\zeta }}|};\thinspace {\tau }\ge m \big )<\infty . $$
(3.23)

Proof. If \({\lambda }({\boldsymbol {\alpha }}_0)\ge 0\) then

$$ R({\boldsymbol {\alpha }}_0,{\delta }) \le \sum _{m\ge 1}e^{{\delta } m}{\mathbb {E}} \big (e^{{\lambda }({\boldsymbol {\alpha }}_0){\tau }+{\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0){\boldsymbol {\zeta }}+{\delta }|{\boldsymbol {\zeta }}|};\thinspace {\tau }\ge m \big ) =\sum _{m\ge 1}e^{{\delta } m}{\mathbb {E}} \Big (e^{{\delta }| {}\thinspace \widehat {{{\boldsymbol {\zeta }}}} \thinspace |};\thinspace {}\thinspace \widehat {{\tau }}\ge m \Big ),$$

where the distribution of the random vector \(\big ({}\thinspace \widehat {{\tau }},{}\thinspace \widehat {{{\boldsymbol {\zeta }}}}\thinspace \big ) \) is defined as follows:

$$ {\mathbb P}\big (({}\thinspace \widehat {{\tau }},{}\thinspace \widehat {\boldsymbol {\zeta }})\in \boldsymbol {\cdot } \big ):={\mathbb {E}} \big (e^{{\lambda }({\boldsymbol {\alpha }}_0){\tau }+{\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0){\boldsymbol {\zeta }}}:({\tau },{\boldsymbol {\zeta }})\in \boldsymbol {\cdot } \big ). $$

Under our conditions, we have \(\big ({\lambda }({\boldsymbol {\alpha }}_0),{\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0)\big )\in (\mathcal {A})\); therefore, condition \([\mathbf {C}_0] \) is fulfilled for the new vector \(\big ({}\thinspace \widehat {{\tau }},{}\thinspace \widehat {{{\boldsymbol {\zeta }}}}\big ) \). By the Cauchy–Schwarz inequality, for sufficiently small \({\delta }>0 \) and sufficiently large \(C<\infty \), we have

$$ {\mathbb {E}}\Big (e^{{\delta }|{}\thinspace \widehat {{{\boldsymbol {\zeta }}}}|};\thinspace {}\thinspace \widehat {{\tau }}\ge m \Big )\le \Big ({\mathbb {E}} e^{2{\delta }|{}\thinspace \widehat {{{\boldsymbol {\zeta }}}}|} \Big )^{1/2} \big ({\mathbb P}({}\thinspace \widehat {{\tau }}\ge m) \big )^{1/2}\le C \big ({\mathbb P}({}\thinspace \widehat {{\tau }}\ge m) \big )^{1/2}. $$

Since the random variable \({}\thinspace \widehat {{\tau }}\) satisfies condition \([\mathbf {C}_0] \), there exist constants \(c>0 \) and \(C_1<\infty \) such that

$$ \big ({\mathbb P}({}\thinspace \widehat {{\tau }}\ge m) \big )^{{1}/{2}}\le C_1e^{-cm} $$

for all \(m\ge 0\). We have obtained the estimate

$$ R({\boldsymbol {\alpha }}_0,{\delta })\le \sum _{m=1}^\infty CC_1 e^{{\delta } m-cm},$$

in which the right-hand side is finite for \({\delta }\in (0,c)\).

Now let \({\lambda }({\boldsymbol {\alpha }}_0)<0 \). Then

$$ R({\boldsymbol {\alpha }}_0,{\delta })\le \sum _{m\ge 1}e^{{\delta } m}{\mathbb {E}} \big (e^{{\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0){\boldsymbol {\zeta }}+{\delta }|{\boldsymbol {\zeta }}|};\thinspace {\tau }\ge m \big ), $$

and the right-hand side of the last inequality can be estimated by Hölder’s inequality: for any \(p,q\in (0,1) \) such that \(p+q=1 \), we have

$$ R({\boldsymbol {\alpha }}_0,{\delta })\le \sum _{m\ge 1}e^{{\delta } m} \big ({\mathbb {E}} e^{({\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0){\boldsymbol {\zeta }}+ {\delta }|{\boldsymbol {\zeta }}|)/p} \big )^p \big ({\mathbb P}({\tau }\ge m) \big )^q. $$
(3.24)
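In (3.24), Hölder’s inequality is used in the following form (recorded here for clarity): for any nonnegative random variable \(X \), any event \(A \), and \(p,q\in (0,1) \) with \(p+q=1 \), the exponents \(1/p \) and \(1/q \) are conjugate, and

$$ {\mathbb {E}}(X;\thinspace A)={\mathbb {E}}\big (X\thinspace {\mathbf {1}}_A \big )\le \big ({\mathbb {E}}X^{1/p} \big )^p \big ({\mathbb {E}}{\mathbf {1}}_A^{1/q} \big )^q= \big ({\mathbb {E}}X^{1/p} \big )^p \big ({\mathbb P}(A) \big )^q,$$

applied with \(X=e^{{\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0){\boldsymbol {\zeta }}+{\delta }|{\boldsymbol {\zeta }}|} \) and \(A=\{{\tau }\ge m\} \).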

Since, under our conditions, we have \({\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0)\in (\mathcal {M}) \), there exist \({\delta }>0 \), \(c>0 \), \(p\in (0,1) \), and \(C \), \(C_1<\infty \) such that

$$ \big ({\mathbb {E}} e^{({\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0){\boldsymbol {\zeta }}+{\delta }|{\boldsymbol {\zeta }}|)/p} \big )^p\le C, \quad \big ({\mathbb P}({\tau }\ge m) \big )^q\le C_1e^{-cm}.$$

Therefore, from (3.24) we deduce

$$ R({\boldsymbol {\alpha }}_0,{\delta })\le \sum _{m\ge 1}e^{{\delta } m}C \big ({\mathbb P}({\tau }\ge m) \big )^q\le \sum _{m\ge 1}e^{{\delta } m}CC_1e^{-cm}. $$

Since the right-hand side of the last inequality is finite for \({\delta }\in (0,c)\), assertion (3.23) is proved.

Lemma 3.7 \(. \) Suppose that the hypotheses of Theorem 2.2A are fulfilled. Then

$$ R_1(n):={\mathbb P}\big ({\mathbf {Y}}(n)={\boldsymbol {x}},\thinspace \eta (n)=1 \big )={\mathbb P}({\mathbf {Z}}_1={\boldsymbol {x}},\thinspace {\tau }_1\ge n)= o\bigg (\frac {1}{n^{d/2}}e^{-nD({\boldsymbol {\alpha }})} \bigg ). $$

The proof of Lemma 3.7 repeats the proof of Lemma 4.1(b) in [15]; therefore we omit it.

Proof of Theorem 2.2A. Along with the set \( L_n^{d+1}\) defined by (3.22), we will use the set

$$ M_n^d:=\Big \{{\boldsymbol {z}}\in {\mathbb Z}^d:\thinspace \max _{1\le i\le d}|z_i|\le \ln ^2n \Big \}. $$

We infer

$$ \begin {aligned} P(n) &:={\mathbb P}\big ({\mathbf {Y}}(n)={\boldsymbol {x}},\thinspace {\boldsymbol {\xi }}_{\eta (n)}\in L^{d+1}_n,\thinspace \eta (n)\ge 2 \big )\\ &\enspace =\sum _{k=2}^\infty {\mathbb P} \big ({\mathbf {Z}}_k={\boldsymbol {x}},\thinspace {\boldsymbol {\xi }}_{k}\in L^{d+1}_n,\thinspace \eta (n)=k \big )\\ &\enspace =\sum _{k=2}^\infty {\mathbb P} \big ({\mathbf {Z}}_k={\boldsymbol {x}},\thinspace {\boldsymbol {\xi }}_{k}\in L^{d+1}_n,\thinspace T_{k-1}<n\le T_k \big )\\ &\enspace =\sum _{m=1}^{\ln ^2n} \sum _{{\boldsymbol {z}}\in M^{d}_n} \sum _{k=2}^\infty {\mathbb P}\big ({\mathbf {Z}}_{k-1}={\boldsymbol {x}}-{\boldsymbol {z}},\thinspace T_{k-1}=n-m \big ) {\mathbb P}({\tau }\ge m,\thinspace {\boldsymbol {\zeta }}={\boldsymbol {z}}). \end {aligned}$$

Since the inner sum over \(k \) on the right-hand side of the last expression is exactly the renewal function \(H \), we obtain the representation

$$ P(n)=\sum _{m=1}^{\ln ^2n}\thinspace \sum _{{\boldsymbol {z}}\in M^{d}_n}H(n-m,{\boldsymbol {x}}-{\boldsymbol {z}}){\mathbb P}({\tau }\ge m,\thinspace {\boldsymbol {\zeta }}={\boldsymbol {z}}).$$

Applying now Theorem 2.3, we obtain

$$ P(n)=\psi _1\big ({\lambda }({\boldsymbol {\alpha }}_0),{\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0) \big ) \frac {C_H(1,{\boldsymbol {\alpha }}_0)} {n^{d/2}}e^{-nD(1,{\boldsymbol {\alpha }}_0)}P_1\big (n,{\boldsymbol {\alpha }})(1+o(1) \big ), $$
(3.25)

where

$$ \begin {gathered} P_1(n,{\boldsymbol {\alpha }}):= \sum _{m=1}^{\ln ^2n}\thinspace \sum _{{\boldsymbol {z}}\in M^{d}_n}K_{m,{\boldsymbol {z}}}(n,{\boldsymbol {\alpha }}),\\ K_{m,{\boldsymbol {z}}}(n,{\boldsymbol {\alpha }}):= \exp \left \{n\left (D(1,{\boldsymbol {\alpha }})-D\left (1-\frac {m}{n},{\boldsymbol {\alpha }}-\frac {{\boldsymbol {z}}}{n}\right )\right )\right \} {\mathbb {P}}\left ({\tau }\ge m,\thinspace {\boldsymbol {\zeta }}={\boldsymbol {z}}\right ). \end {gathered}$$

It is easy to see that, as \(n\to \infty \),

$$ K_{m,{\boldsymbol {z}}}(n,{\boldsymbol {\alpha }})\to K_{m,{\boldsymbol {z}}}({\boldsymbol {\alpha }}_0), \enspace \enspace m\in \left \{1,\dots ,\ln ^2n\right \}, \enspace \enspace {\boldsymbol {z}}\in M^d_n, $$

and for any \({\delta }>0 \) and all sufficiently large \(n \)

$$ K_{m,{\boldsymbol {z}}}(n,{\boldsymbol {\alpha }})\le e^{{\delta } m+{\delta }|{\boldsymbol {z}}|}K_{m,{\boldsymbol {z}}}({\boldsymbol {\alpha }}_0), \enspace \enspace m\in \left \{1,\dots ,\ln ^2n\right \}, \enspace \enspace {\boldsymbol {z}}\in M^d_n.$$

Applying Lemma 3.6 and the dominated convergence theorem, we now obtain

$$ P_1(n,{\boldsymbol {\alpha }})\to \sum _{m=1}^{\infty }\thinspace \sum _{{\boldsymbol {z}}\in {\mathbb Z}^d}K_{m,{\boldsymbol {z}}}({\boldsymbol {\alpha }}_0)=I_Y({\boldsymbol {\alpha }}_0). $$
(3.26)
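For the reader’s convenience, we indicate why Lemma 3.6 supplies a summable dominating series. Assuming, in accordance with the pointwise limit above, that \(K_{m,{\boldsymbol {z}}}({\boldsymbol {\alpha }}_0)= e^{{\lambda }({\boldsymbol {\alpha }}_0)m+{\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0){\boldsymbol {z}}} {\mathbb P}({\tau }\ge m,\thinspace {\boldsymbol {\zeta }}={\boldsymbol {z}}) \), summation of the majorant over \({\boldsymbol {z}} \) yields

$$ \sum _{m\ge 1}\sum _{{\boldsymbol {z}}\in {\mathbb Z}^d} e^{{\delta } m+{\delta }|{\boldsymbol {z}}|}K_{m,{\boldsymbol {z}}}({\boldsymbol {\alpha }}_0)= \sum _{m\ge 1}e^{({\lambda }({\boldsymbol {\alpha }}_0)+{\delta })m}{\mathbb {E}} \big (e^{{\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0){\boldsymbol {\zeta }}+{\delta }|{\boldsymbol {\zeta }}|};\thinspace {\tau }\ge m \big )=R({\boldsymbol {\alpha }}_0,{\delta })<\infty ,$$

which is precisely the quantity bounded in (3.23).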

Using Lemmas 3.5 and 3.7 and relations (3.25) and (3.26), we infer

$$ {\mathbb P}\big ({\mathbf {Y}}(n)={\boldsymbol {x}} \big )=\psi _1\big ({\lambda }({\boldsymbol {\alpha }}_0),\thinspace {\boldsymbol {\mu }}({\boldsymbol {\alpha }}_0) \big ) \frac {C_H(1,{\boldsymbol {\alpha }}_0)}{n^{d/2}}e^{-nD({\boldsymbol {\alpha }}_0)}I_Y \big ({\boldsymbol {\alpha }}_0)(1+o(1) \big ). $$

Assertion (2.15), and with it Theorem 2.2A, is proved.

Proof of Theorem 2.2*. It is easy to see that, for every integer \(v\ge 0 \),

$$ {\mathbb P}\big ({\mathbf {Y}}(n)={\boldsymbol {x}},\thinspace \chi (n)=v \big )= {\mathbb P}\big ({\mathbf {Z}}(n+v+1)={\boldsymbol {x}},\thinspace {\gamma }(n+v+1)=1 \big ). $$
(3.27)

Therefore, computing the right-hand side of (3.27) by means of Theorem 2.1* and using the relation

$$ -(n+v+1)D \bigg (\frac {{\boldsymbol {x}}}{n+v+1} \bigg )=-nD({\boldsymbol {\alpha }})-(v+1){\lambda }({\boldsymbol {\alpha }})+o(1), $$

which follows from (3.19), we obtain (2.16).

The proof of Corollary 2.2 is completely similar to that of Corollary 2.1.

3.5. Proof of Lemma 2.1.

We first establish (2.18). Recall that, for points \(\boldsymbol {\alpha }\) in some neighborhood of the point \({\boldsymbol {\alpha }}_0={\boldsymbol {a}}\), we have the relation

$$ D({\boldsymbol {\alpha }})={\boldsymbol {\mu }}({\boldsymbol {\alpha }}){\boldsymbol {\alpha }}-A\big ({\boldsymbol {\mu }}({\boldsymbol {\alpha }}) \big ),$$

where the function \(A({\boldsymbol {\mu }}) \) is a solution to the equation

$$ \psi \big ({-}A({\boldsymbol {\mu }}),{\boldsymbol {\mu }} \big )=1$$
(3.28)

and the function \({\boldsymbol {\mu }}({\boldsymbol {\alpha }})\) is inverse to the function \( A^{\prime }({\boldsymbol {\mu }})\), i.e., satisfies the equation

$$ A^{\prime }\big ({\boldsymbol {\mu }}({\boldsymbol {\alpha }}) \big )={\boldsymbol {\alpha }}.$$

Verify that

$$ A({\boldsymbol {0}})=0,\enspace \enspace A^{\prime }({\boldsymbol {0}})={\boldsymbol {a}},\enspace \enspace A^{\prime {}\prime }({\boldsymbol {0}})=B^2.$$
(3.29)

The equalities in (3.29) are obtained from equation (3.28) by differentiating it sufficiently many times:

(0) for \( {\boldsymbol {\mu }}={\boldsymbol {0}}\), by \(\psi (0,{\boldsymbol {0}})=1\) we have \(A({\boldsymbol {0}})=0\);

(1) differentiating equality (3.28) once at \({\boldsymbol {\mu }}={\boldsymbol {0}}\), we obtain

    $$ \psi ^{\prime }_{(1)}(0,{\boldsymbol {0}}) \big ({-}A^{\prime }({\boldsymbol {0}}) \big )+\psi ^{\prime }_{(2)}(0,{\boldsymbol {0}})={\boldsymbol {0}}; $$

    therefore,

    $$ A^{\prime }({\boldsymbol {0}})=\frac {\psi ^{\prime }_{(2)}(0,{\boldsymbol {0}})} {\psi ^{\prime }_{(1)}(0,{\boldsymbol {0}})}={\boldsymbol {a}};$$
(2) differentiating equality (3.28) twice at \({\boldsymbol {\mu }}={\boldsymbol {0}}\), we obtain

    $$ \psi ^{\prime {}\prime }_{(1,1)}(0,{\boldsymbol {0}}) \big ({-}A^{\prime }({\boldsymbol {0}}) \big ) \big ({-}A^{\prime }({\boldsymbol {0}}) \big )^\top -2\psi ^{\prime {}\prime }_{(1,2)}(0,{\boldsymbol {0}}) \big (A^{\prime }({\boldsymbol {0}}) \big )^\top +\psi ^{\prime {}\prime }_{(2,2)}(0,{\boldsymbol {0}})-\psi ^{\prime }_{(1)}(0,{\boldsymbol {0}})A^{\prime {}\prime }({\boldsymbol {0}})={\boldsymbol {0}}{\boldsymbol {0}}^\top ;$$

    therefore,

    $$ A^{\prime {}\prime }({\boldsymbol {0}})=\frac {1}{\psi ^{\prime }_{(1)}(0,{\boldsymbol {0}})} \big [{\mathbb {E}}{\boldsymbol {\zeta }}{\boldsymbol {\zeta }}^\top -2 \big ({\mathbb {E}}{\tau }{\boldsymbol {\zeta }} \big ){\boldsymbol {a}}^\top +{\boldsymbol {a}}{\boldsymbol {a}}^\top {\mathbb {E}}{\tau }^2 \big ]=B^2.$$
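For the record, with \({\boldsymbol {a}}={\mathbb {E}}{\boldsymbol {\zeta }}/{\mathbb {E}}{\tau }\) (as item (1) shows) and assuming, as is standard in this setting, that \(B^2={\mathbb {E}}\big [({\boldsymbol {\zeta }}-{\tau }{\boldsymbol {a}})({\boldsymbol {\zeta }}-{\tau }{\boldsymbol {a}})^\top \big ]/{\mathbb {E}}{\tau }\), the last equality can be checked by direct expansion:

$$ {\mathbb {E}}\big [({\boldsymbol {\zeta }}-{\tau }{\boldsymbol {a}})({\boldsymbol {\zeta }}-{\tau }{\boldsymbol {a}})^\top \big ]={\mathbb {E}}{\boldsymbol {\zeta }}{\boldsymbol {\zeta }}^\top -\big ({\mathbb {E}}{\tau }{\boldsymbol {\zeta }} \big ){\boldsymbol {a}}^\top -{\boldsymbol {a}}\big ({\mathbb {E}}{\tau }{\boldsymbol {\zeta }} \big )^\top +{\boldsymbol {a}}{\boldsymbol {a}}^\top {\mathbb {E}}{\tau }^2,$$

where the two cross terms are transposes of each other (the factor \(2 \) above abbreviates their sum), and \(\psi ^{\prime }_{(1)}(0,{\boldsymbol {0}})={\mathbb {E}}{\tau }\) for \(\psi ({\lambda },{\boldsymbol {\mu }})={\mathbb {E}}e^{{\lambda }{\tau }+{\boldsymbol {\mu }}{\boldsymbol {\zeta }}}\).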

The equalities (3.29) are proved. Since the functions \(D^{\prime }({\boldsymbol {\alpha }}) \) and \(A^{\prime }({\boldsymbol {\mu }})\) are mutually inverse, the equalities in (3.29) imply the equalities

$$ D({\boldsymbol {a}})=0,\enspace \enspace D^{\prime }({\boldsymbol {a}})={\boldsymbol {0}},\enspace \enspace D^{\prime {}\prime }({\boldsymbol {a}})=B^{-2}. $$
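Indeed (we record the computation for clarity), differentiating the identity \(D^{\prime }\big (A^{\prime }({\boldsymbol {\mu }}) \big )={\boldsymbol {\mu }}\) at \({\boldsymbol {\mu }}={\boldsymbol {0}}\) gives

$$ D^{\prime {}\prime }\big (A^{\prime }({\boldsymbol {0}}) \big )A^{\prime {}\prime }({\boldsymbol {0}})=I, \quad \text {i.e.,}\quad D^{\prime {}\prime }({\boldsymbol {a}})=\big (A^{\prime {}\prime }({\boldsymbol {0}}) \big )^{-1}=B^{-2},$$

while \(A^{\prime }({\boldsymbol {0}})={\boldsymbol {a}}\) means \({\boldsymbol {\mu }}({\boldsymbol {a}})={\boldsymbol {0}}\), so that \(D^{\prime }({\boldsymbol {a}})={\boldsymbol {\mu }}({\boldsymbol {a}})={\boldsymbol {0}}\) and \(D({\boldsymbol {a}})={\boldsymbol {\mu }}({\boldsymbol {a}}){\boldsymbol {a}}-A\big ({\boldsymbol {\mu }}({\boldsymbol {a}}) \big )=-A({\boldsymbol {0}})=0 \).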

The equalities (2.18) are proved.

We now prove (2.17). Since

$$ I_Z({\boldsymbol {a}})={\mathbb {E}}{\tau },$$

it suffices to verify the equality

$$ C({\boldsymbol {a}})=\frac 1{\sigma {\mathbb {E}}{\tau }(2\pi )^{d/2}}, \enspace \enspace \mbox {with}\enspace \enspace \sigma ^2=|B^2|=\frac 1{\big \vert D^{\prime {}\prime }({\boldsymbol {a}}) \big \vert }. $$
(3.30)

The law of large numbers for the process \({\mathbf {Z}}(n)\) and Theorem 2.1 imply that there exists a sequence \( {\varepsilon }_n>0\) tending to \(0 \) slowly enough that

$$ 1=\lim \limits _{n\to \infty }{\mathbb P} \bigg (\bigg |\frac {{\mathbf {Z}}(n)}n-{\boldsymbol {a}} \bigg |<{\varepsilon }_n \bigg )=C({\boldsymbol {a}})I_Z({\boldsymbol {a}})\lim \limits _{n\to \infty }\Sigma _n, $$
(3.31)

where

$$ \Sigma _n:= \sum _{{\boldsymbol {z}}\in {\mathbb Z}^d:\thinspace |{\boldsymbol {\alpha }}-{\boldsymbol {a}}|<{\varepsilon }_n} \frac 1{n^{d/2}}e^{-nD({\boldsymbol {\alpha }})}, \quad \text {where}\enspace {\boldsymbol {\alpha }}:=\frac {{\boldsymbol {z}}}{n}.$$

Since it is obvious that, as \({\varepsilon }_n\sqrt {n}\to \infty \),

$$ \lim \limits _{n\to \infty }\Sigma _n =\lim \limits _{n\to \infty } \int _{|{\boldsymbol {u}}|<{\varepsilon }_n\sqrt {n}} \exp \left \{{-}\frac {{\boldsymbol {u}}D^{\prime {}\prime }({\boldsymbol {a}}){\boldsymbol {u}}^\top }{2}\right \} d{\boldsymbol {u}} =\int _{{\boldsymbol {u}}\in {\mathbb R}^d}\exp \left \{{-}\frac {{\boldsymbol {u}} D^{\prime {}\prime }({\boldsymbol {a}}){\boldsymbol {u}}^\top }2\right \} d{\boldsymbol {u}}= \frac {(2\pi )^{d/2}} {\sqrt {|D^{\prime {}\prime }({\boldsymbol {a}})|} }, $$

relation (3.31) implies (3.30). Lemma 2.1 is proved.
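The first equality in the last display is the standard Laplace-type substitution; we sketch it for completeness. Writing \({\boldsymbol {z}}=n{\boldsymbol {a}}+{\boldsymbol {u}}\sqrt {n}\) and using \(D({\boldsymbol {a}})=0 \) and \(D^{\prime }({\boldsymbol {a}})={\boldsymbol {0}}\) from (2.18), for \(|{\boldsymbol {u}}|<{\varepsilon }_n\sqrt {n}\) Taylor's formula gives

$$ nD\bigg (\frac {{\boldsymbol {z}}}{n} \bigg )=nD\bigg ({\boldsymbol {a}}+\frac {{\boldsymbol {u}}}{\sqrt {n}} \bigg )=\frac {{\boldsymbol {u}}D^{\prime {}\prime }({\boldsymbol {a}}){\boldsymbol {u}}^\top }{2}\big (1+o(1) \big )$$

uniformly in this range (since \(|{\boldsymbol {u}}|/\sqrt {n}\le {\varepsilon }_n\to 0 \)), so that \(\Sigma _n \) is a Riemann sum with mesh \(n^{-1/2}\) in each coordinate for the Gaussian integral.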