1. Introduction

In [6] a formula for the spectral density of Jacobi matrices in terms of the asymptotics of orthogonal polynomials was found for a certain class of matrices with unbounded entries. This class is an example of the so-called noncritical case. The critical case for Jacobi matrices with unbounded entries was studied in several other papers, among which we mention [9], [13], [17], [23], and [25]. The distinction between these cases is determined by the limit (whenever it exists) of the transfer matrix of the eigenvector equation: in the noncritical case, it is diagonalizable, whereas in the critical case, it is similar to a Jordan block. For the special case of the discrete Schrödinger operator with rapidly decreasing potential (the main diagonal), the critical situation occurs only for two values of the spectral parameter, \(\lambda=\pm2\). For the Jacobi matrices considered here, by contrast, the critical situation occurs for all values of the spectral parameter \(\lambda\). The type of asymptotics of generalized eigenvectors differs significantly between the critical and noncritical cases. In the above-mentioned papers dealing with the critical case, only the asymptotics of the generalized eigenvectors were studied, and no analogue of the formula for the spectral density from [6] was obtained. The aim of the present paper is to obtain such formulas in the critical case.

We consider a class of Jacobi matrices in the critical case which is an extension of the class studied in [17]. In addition to asymptotics of generalized eigenvectors, we obtain a formula for the spectral density of a matrix in terms of the asymptotics of its orthogonal polynomials. Namely, we consider matrices of the form

$$ \mathcal J= \begin{pmatrix} b_1 & a_1 & 0 & 0 & \dots \\ a_1 & b_2 & a_2 & 0 &\dots \\ 0 & a_2 & b_3 & a_3 &\dots \\ 0 & 0 & a_3 & b_4 & \dots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
(1.1)

with entries

$$ a_n=n^{\alpha}+p_n,\quad b_n=-2n^{\alpha}+q_n,$$
(1.2)

where \(a_n>0\), \(b_n\in\mathbb R\), \(\alpha\in(0,1)\), and

$$ \bigg\{\frac{p_n}{n^{\alpha/2}}\bigg\}_{n=1}^{\infty},\bigg\{\frac{q_n}{n^{\alpha/2}}\bigg\}_{n=1}^{\infty}\in l^1.$$
(1.3)

The operators of this family act on the Hilbert space \(l^2(\mathbb N)\) and have the following spectral properties: the spectrum on the right half-line is pure point, and the spectrum on the left half-line is purely absolutely continuous. This follows from the asymptotics of the generalized eigenvectors by subordinacy theory ([10], [19]). For \(\alpha\in(1/2,2/3)\), such a family was considered in [17], where asymptotics of generalized eigenvectors were found for \(\lambda\in\mathbb R\setminus\{0\}\). In the present paper, to find the asymptotics, we use a modified version of the remarkable method of Kooman [20], which yields the result for all complex \(\lambda\) in a uniform manner and works for all \(\alpha\in(0,1)\). In essence, Kooman’s method is based on a transformation which reduces the discrete linear system to the Levinson (L-diagonal) form and can be considered as an extension of the Benzaid–Harris–Lutz methods ([7], [11]; see also [8], [14], [27]). All of these methods are based on the discrete analogue of the Levinson asymptotic theorem [3, Theorem 8.1]. Using Kooman’s method allows us to substantially simplify the proof of the main theorem in the difficult critical case.

By analogy with the classical case of the Schrödinger operator on the half-line with a self-adjoint boundary condition at zero and integrable potential (see Titchmarsh’s book [4, Chap. V, Sec. 5.6]), we call a formula that relates the spectral density of an ordinary differential or difference operator to the coefficient in the asymptotics of the solution of the eigenvector equation satisfying the boundary condition a Titchmarsh–Weyl formula.

In this study of formulas for the spectral density we have a certain application in mind, namely, the phenomenon of spectral phase transition. If a family of self-adjoint operators depends on one or several real parameters, it may happen that the space of these parameters is divided into regions where the operators have similar spectral structures. For example, in some regions the spectrum may be purely absolutely continuous, and in others it may be discrete. Then on the boundaries of the regions a spectral phase transition occurs. We want to see by explicit examples how such transitions are described in terms of the spectral measure. Some examples of spectral phase transitions can be found in the papers [15], [16], and [28]. In these works only the “geometry” and type of the spectrum were considered because of the lack of suitable methods for analyzing the spectral measure. This is where formulas for the spectral density could be useful. Several papers were devoted to establishing such formulas in both discrete and continuous cases ([18], [21], [29]). That analysis has been used to study the behavior of the spectral density of discrete ([30]) and differential ([24], [31]) Schrödinger operators with Wigner–von Neumann potential near the critical points which appear inside the continuous spectrum of an operator with such a potential. These formulas were derived for special classes of operators and, moreover, for the noncritical case. The example that we consider in the present paper concerns a family of Jacobi matrices with a spectral phase transition in the situation where the parameters lie on the boundary between two regions. On that boundary, the Jacobi matrices are in the critical case. The formula for the spectral density in this case is needed as the first important step toward understanding the “inner structure” of the spectral phase transition in this family.

We mention the interesting recent works [32]–[35], also devoted, in particular, to the study of spectral density.

The paper is organized as follows. In Section 2 we recall some basic notions and facts related to Jacobi matrices and their generalized eigenvector equations and also explain why in our situation the critical case occurs. In Section 3 we describe the idea of approximating the matrix \(\mathcal J\) by “stabilized” matrices \(\mathcal J_N\), which was used in [6] and which allows us to find the spectral density of \(\mathcal J\) by exploiting the \(*\)-weak convergence of spectral measures. In Section 4 the main result of the paper (Theorem 4.2) is proved: a formula for the spectral density of \(\mathcal J\) in the critical case.

2. Preliminaries

For complete information on definitions of Jacobi matrices, orthogonal polynomials associated to them, and Weyl functions, we refer the reader to Akhiezer’s book [1].

The operator \(\mathcal J\) defined by the matrix (1.1)–(1.3) is self-adjoint in \(l^2(\mathbb N)\) according to the Carleman condition [1]. If \(\mathcal E\) is its projection-valued spectral measure and \(\{e_n,\ n\in\mathbb N\}\) is the standard basis in \(l^2(\mathbb N)\), then \(\rho=(\mathcal Ee_1,e_1)\) is its scalar spectral measure, and its density \(\rho'\) (the Radon–Nikodym derivative of its absolutely continuous part with respect to the Lebesgue measure) is its spectral density. By \(m\) we denote the Weyl function, for which the following relations hold:

$$m(\lambda)=\int_{\mathbb R}\frac{d\rho(x)}{x-\lambda},\quad\lambda\in\mathbb C\setminus\mathbb R,$$
(2.1)
$$\rho'(\lambda)=\frac1{\pi}\operatorname{Im}m(\lambda+i0)\quad\text{for a. e. }\,\lambda\in\mathbb R.$$
(2.2)

Orthogonal polynomials of the first and the second kind, \(P_n(\lambda)\) and \(Q_n(\lambda)\), respectively, are solutions of the eigenvector equation

$$ a_{n-1}u_{n-1}+b_nu_n+a_nu_{n+1}=\lambda u_n,\qquad n\geqslant 2,$$
(2.3)

with initial conditions \(P_1(\lambda)=1\), \(P_2(\lambda)=(\lambda-b_1)/a_1\), \(Q_1(\lambda)=0\), and \(Q_2(\lambda)=1/a_1\). For \(\lambda\in\mathbb C\setminus\mathbb R\), their linear combination \(Q_n(\lambda)+m(\lambda)P_n(\lambda)\) is the only (up to multiplication by a constant) solution of (2.3) which belongs to \(l^2(\mathbb N)\). The weighted Wronskian of two solutions \(u\) and \(v\) of the eigenvector equation (2.3) is defined as

$$ W\{u,v\}:=a_n(u_nv_{n+1}-u_{n+1}v_n),\qquad n\in\mathbb N,$$
(2.4)

and does not depend on \(n\).
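As a simple illustration (not part of the original argument), the following Python sketch computes \(P_n(\lambda)\) and \(Q_n(\lambda)\) from the recurrence (2.3) with the above initial conditions and checks numerically that the weighted Wronskian (2.4) is constant in \(n\). The choices \(\alpha=0.6\), \(\lambda=-1\), and \(p_n=q_n=0\) in (1.2) are ours and serve only as an example.

```python
import numpy as np

alpha, lam, N = 0.6, -1.0, 200                   # illustrative choices; p_n = q_n = 0 in (1.2)
n = np.arange(1, N + 1, dtype=float)
a, b = n**alpha, -2.0 * n**alpha                 # a_n = n^alpha, b_n = -2 n^alpha

P = np.zeros(N); Q = np.zeros(N)
P[0], P[1] = 1.0, (lam - b[0]) / a[0]            # P_1 = 1, P_2 = (lambda - b_1)/a_1
Q[0], Q[1] = 0.0, 1.0 / a[0]                     # Q_1 = 0, Q_2 = 1/a_1
for k in range(1, N - 1):                        # (2.3): a_{n-1}u_{n-1} + b_n u_n + a_n u_{n+1} = lambda u_n
    P[k + 1] = ((lam - b[k]) * P[k] - a[k - 1] * P[k - 1]) / a[k]
    Q[k + 1] = ((lam - b[k]) * Q[k] - a[k - 1] * Q[k - 1]) / a[k]

W = a[:-1] * (P[:-1] * Q[1:] - P[1:] * Q[:-1])   # weighted Wronskian (2.4) at each n
print(np.allclose(W, W[0]), W[0])                # constant in n; here W{P,Q} = 1
```

With these initial conditions one gets \(W\{P,Q\}=a_1(P_1Q_2-P_2Q_1)=1\), which is what the check should report.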

The eigenvector equation (2.3) can be written in the vector form:

$$\vec u_n:= \begin{pmatrix} u_{n-1} \\ u_n \end{pmatrix},\quad B_n(\lambda):= \begin{pmatrix} 0 & 1 \\ -a_{n-1}/a_n&(\lambda-b_n)/a_n \end{pmatrix},$$
(2.5)
$$\vec u_{n+1}=B_n(\lambda)\vec u_n,\quad n\geqslant 2.$$
(2.6)

The matrices \(B_n(\lambda)\) are called the transfer matrices for the equation (2.3).

In our case of the matrix (1.1)–(1.3), the transfer matrices for every \(\lambda\) have the limit

$$ \begin{pmatrix} 0 & 1 \\ -1 & 2 \end{pmatrix}.$$
(2.7)

The eigenvalues of this limit matrix coincide; by analogy with the case of constant coefficients, where the roots of the characteristic equation coincide with the eigenvalues of the limit transfer matrix, this case is called the double root case, or the critical case. The matrix (2.7) has a repeated eigenvalue and is not proportional to the identity, so it is not diagonalizable and is similar to a Jordan block. For this reason, our situation can also be called the Jordan block case. Asymptotic analysis of solutions may become involved in this case; several papers were devoted to studying examples of such Jacobi matrices, among which we mention [9], [13], [17], and [23].
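For the reader’s convenience, we record the elementary computation behind the limit (2.7) and the coincidence of its eigenvalues; it follows directly from (1.2) and (2.5) (here \(\mu\) denotes the eigenvalue parameter):

$$B_n(\lambda)=\begin{pmatrix} 0 & 1 \\ -\dfrac{(n-1)^{\alpha}+p_{n-1}}{n^{\alpha}+p_n} & \dfrac{\lambda+2n^{\alpha}-q_n}{n^{\alpha}+p_n} \end{pmatrix}\xrightarrow[n\to\infty]{}\begin{pmatrix} 0 & 1 \\ -1 & 2 \end{pmatrix},\qquad \det\begin{pmatrix} -\mu & 1 \\ -1 & 2-\mu \end{pmatrix}=(\mu-1)^2,$$

so the limit matrix (2.7) has the double eigenvalue \(1\) and, not being the identity, is similar to a \(2\times2\) Jordan block with eigenvalue \(1\).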

3. The Stabilized Matrix

In this section \(\mathcal J\) denotes a Jacobi matrix

$$ \mathcal J= \begin{pmatrix} b_1 & a_1 & 0 & 0 & \dots \\ a_1 & b_2 & a_2 & 0 &\dots \\ 0 & a_2 & b_3 & a_3 &\dots \\ 0 & 0 & a_3 & b_4 & \dots \\ \vdots & \vdots & \vdots & \vdots & \ddots \\ \end{pmatrix},$$
(3.1)

with arbitrary sequences \(\{a_n\}_{n=1}^{\infty}\) and \(\{b_n\}_{n=1}^{\infty}\) of positive and real numbers, respectively. Consider the bounded matrix \(\mathcal J_N\) which has the sequence \(a_1,a_2,\dots,a_{N-1},a_N,a_N,a_N,\dots\) of off-diagonal entries and the sequence \(b_1,b_2,\dots,b_{N-1},b_N,b_N,b_N,\dots\) on the main diagonal:

$$ \mathcal J_N= \begin{pmatrix} b_1 & a_1 & 0 & \cdots & 0 & 0 & \dots \\ a_1 & b_2 & a_2 & \cdots & 0 & 0 & \dots \\ 0 & a_2 & \ddots & \ddots & 0 & 0 & \dots \\ \vdots & \vdots & \ddots & b_N & a_N & 0 & \dots \\ 0 & 0 & 0 & a_N & b_N & a_N & \dots \\ 0 & 0 & 0 & 0 & a_N & b_N & \ddots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \ddots \end{pmatrix}.$$
(3.2)

The matrix \(\mathcal J_N\) is a scaled and shifted discrete Schrödinger operator perturbed by finitely supported sequences of diagonal and off-diagonal entries (finite-rank perturbation). It is known that its spectrum is purely absolutely continuous on the interval \([b_N-2a_N,b_N+2a_N]\) (this follows, e.g., from subordinacy theory [19]) and consists of finitely many eigenvalues on the rest of the real line. One can write an explicit formula for its spectral density in terms of orthogonal polynomials associated with the original matrix \(\mathcal J\).

Proposition 3.1.

Consider a Jacobi matrix \(\mathcal J\) given by (3.1) with some sequences \(\{a_n\}_{n=1}^{\infty}\) of positive numbers and \(\{b_n\}_{n=1}^{\infty}\) of real numbers. Let \(\mathcal J_N\) be defined by (3.2). Then its spectral density is

$$ \rho'_N(\lambda)=\frac{\sqrt{1-\big(\frac{\lambda-b_N}{2a_N}\big)^2}}{\pi a_N|P_{N+1}(\lambda)-z_N(\lambda)P_N(\lambda)|^2},\qquad\lambda\in(b_N-2a_N,b_N+2a_N),$$
(3.3)

where the \(\{P_n(\lambda)\}_{n=1}^{\infty}\) are the orthogonal polynomials of the first kind associated with the matrix \(\mathcal J\) and

$$ z_N(\lambda):=\frac{\lambda-b_N}{2a_N}- i\sqrt{1-\bigg(\frac{\lambda-b_N}{2a_N}\bigg)^2}.$$
(3.4)

This result is analogous to the classical Titchmarsh–Weyl formula for the Schrödinger differential operator with integrable potential on the half-line and is more or less well known. A version of it is contained in [6]. A much more general version, for sequences \(\{a_n\}_{n=1}^{\infty}\) and \(\{b_n\}_{n=1}^{\infty}\) of bounded variation, can be found in [22]; see also [5].
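As an aside (our own illustration, not taken from [6] or [22]), formula (3.3) is straightforward to evaluate numerically. The following Python sketch does so for the entries (1.2) with \(p_n=q_n=0\) and \(\alpha=0.6\) (an arbitrary choice made here only for the example) at the fixed point \(\lambda=-1\in(b_N-2a_N,b_N+2a_N)\):

```python
import numpy as np

alpha, lam = 0.6, -1.0      # illustrative choices (p_n = q_n = 0 is assumed throughout)

def rho_N(lam, N):
    """Spectral density of the stabilized matrix J_N at lam, by formulas (3.3)-(3.4)."""
    n = np.arange(1, N + 2, dtype=float)
    a, b = n**alpha, -2.0 * n**alpha             # entries (1.2) with p_n = q_n = 0
    P = np.zeros(N + 1)                          # P[k-1] = P_k(lam), k = 1, ..., N+1
    P[0], P[1] = 1.0, (lam - b[0]) / a[0]
    for k in range(1, N):                        # recurrence (2.3)
        P[k + 1] = ((lam - b[k]) * P[k] - a[k - 1] * P[k - 1]) / a[k]
    d = (lam - b[N - 1]) / (2.0 * a[N - 1])      # (lambda - b_N) / (2 a_N)
    z = d - 1j * np.sqrt(1.0 - d**2)             # z_N(lambda), formula (3.4)
    return np.sqrt(1.0 - d**2) / (np.pi * a[N - 1] * abs(P[N] - z * P[N - 1])**2)

for N in (100, 1_000, 10_000):
    print(N, rho_N(lam, N))
```

One should observe the printed values stabilizing as \(N\) grows, in line with the limit procedure of Propositions 3.2 and 3.3 below and with the critical-case formula of Theorem 4.2.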

Stabilized matrices approximate the original matrix as \(N\to\infty\), so knowing the spectral density of \(\mathcal J_N\), we can pass to the limit and find the spectral density of \(\mathcal J\). The next two propositions specify the exact sense of this limit passage; both of them are more or less standard. We use the following notation: for \(-\infty\leqslant A<B\leqslant+\infty\),

$$C_{c}(A,B)=\{\varphi\in C(A,B)\mid \operatorname{supp}\varphi\,\text{ is compact}\},$$
(3.5)
$$C_{0}(A,B)=\{\varphi\in C(A,B)\mid \forall\varepsilon>0\ \exists\text{ compact }K\,{\subset}\,(A,B):|\varphi(x)|<\varepsilon\;\;\forall x\in(A,B)\setminus K\}.$$
(3.6)

\(C_{0}(A,B)\) is a Banach space with the norm \(\|\varphi\|=\sup_{x\in(A,B)}|\varphi(x)|\), \(C_{c}(A,B)\) is a dense linear subspace of it, and the dual space \(C_{0}^*(A,B)\) consists of finite complex-valued Borel measures (automatically regular) [26, Chaps. 3 and 6]. The following proposition can be found, for example, in [6].

Proposition 3.2.

Let \(\mathcal J\) be a Jacobi matrix (3.1) with some sequences \(\{a_n\}_{n=1}^{\infty}\) of positive numbers and \(\{b_n\}_{n=1}^{\infty}\) of real numbers such that \(\mathcal J\) is in the limit-point case, and let \(\mathcal J_N\) be defined by (3.2). Then \(\rho_N\to\rho\) in the \(*\)-weak sense as \(N\to\infty\).

The next elementary proposition is also essentially known (see [12] for a more sophisticated version), but it is convenient for us to use it in the following special form.

Proposition 3.3.

Let \(\mathcal J\) be a Jacobi matrix (3.1) with some sequences \(\{a_n\}_{n=1}^{\infty}\) of positive numbers and \(\{b_n\}_{n=1}^{\infty}\) of real numbers such that \(\mathcal J\) is in the limit-point case, and let \(\mathcal J_N\) be defined by (3.2). If there exist a continuous function \(f\) on \((A,B)\), \(-\infty\leqslant A<B\leqslant+\infty\), and an increasing sequence \(\{N_k\}_{k=1}^{\infty}\) such that, for every compact set \(K\subset(A,B)\), there exists a natural \(M(K)\) such that, for every \(k>M(K)\), one has \(K\subset[b_{N_k}-2a_{N_k},b_{N_k}+2a_{N_k}]\) and \(\rho'_{N_k}(\lambda)\to f(\lambda)\) as \(k\to\infty\) uniformly in \(\lambda\in K\), then the spectral measure \(\rho\) of the operator \(\mathcal J\) is absolutely continuous on the interval \((A,B)\) and \(\rho'(\lambda)=f(\lambda)\) for a. e. \(\lambda\in(A,B)\).

Proof.

By Proposition 3.2, \(\rho_{N_k}\to\rho\) \(*\)-weakly as \(k\to\infty\) in \(C_0^*(\mathbb R)\). Hence \(\rho_{N_k}|_{(A,B)}\to\rho|_{(A,B)}\) \(*\)-weakly in \(C_0^*(A,B)\): if \((A,B)\neq\mathbb R\), then, for any function \(\varphi\in C_0(A,B)\), consider its continuation \(\widetilde\varphi\in C_0(\mathbb R)\) to \(\mathbb R\) by zero, and the convergence follows. Since one sequence in this topology cannot have two limits, it is enough to prove that \(d\rho_{N_k}(\lambda)|_{(A,B)}\to f(\lambda)d\lambda\) \(*\)-weakly in \(C_0^*(A,B)\). For every \(\varphi\in C_{c}(A,B)\) and \(k>M({\rm supp\,}\varphi)\), we have

$$\begin{aligned} \, \bigg|\int_A^B\varphi(\lambda)\,d\rho_{N_k}(\lambda)-\int_A^B\varphi(\lambda)f(\lambda)\,d\lambda\bigg| &=\bigg|\int_{\operatorname{supp}\varphi}\varphi(\lambda)(\rho_{N_k}'(\lambda)-f(\lambda))\,d\lambda\bigg|\\ &\leqslant\sup_{x\in\operatorname{supp}\varphi}|\rho_{N_k}'(x)-f(x)|\int_A^B|\varphi(\lambda)|\,d\lambda\to0,\qquad k\to\infty. \end{aligned}$$

Owing to the uniform estimate

$$\|\rho_{N_k}|_{(A,B)}\|_{C_0^*(A,B)}\leqslant 1,$$

convergence holds for every \(\varphi\in C_0(A,B)\) as well. Thus, \(d\rho(\lambda)|_{(A,B)}=f(\lambda)d\lambda\), which completes the proof. \(\Box\)

4. Spectral Density in the Critical Case

In this section we formulate and prove the main result of the paper. We investigate the asymptotics of solutions to (2.3) as \(n\to\infty\) locally in \(\lambda\). Let us fix some \(0<r<R<\infty\) and consider the open set

$$ \Omega_0:=\{\lambda\in\mathbb C:r<|\lambda|<R\}\setminus\mathbb R_-;$$
(4.1)

see Fig. 1.

Fig. 1. The domain \(\Omega_0\).

Remark 4.1.

By \(\overline\Omega_0\) we denote the closure of \(\Omega_0\) (on the Riemann surface of \(\sqrt\lambda\)), which contains both sides (considered to be different) of the cut along \(\mathbb R_-\). Writing \([-R,-r]\), we will always mean the upper side of the cut.

Theorem 4.2.

Let the entries of the Jacobi matrix \(\mathcal J\) be given by

$$ a_n=n^{\alpha}+p_n,\quad b_n=-2n^{\alpha}+q_n$$
(4.2)

with

$$ \alpha\in(0,1),$$
(4.3)

and let real sequences \(\{p_n\}_{n=1}^{\infty}\) and \(\{q_n\}_{n=1}^{\infty}\) be such that \(a_n>0\) for all \(n\) and

$$ \bigg\{\frac{p_n}{n^{\alpha/2}}\bigg\}_{n=1}^{\infty},\bigg\{\frac{q_n}{n^{\alpha/2}}\bigg\}_{n=1}^{\infty}\in l^1.$$
(4.4)

Then, for the domain \(\Omega_0\), there exists an \(N_0\in\mathbb N\) such that, for every \(\lambda\in\overline\Omega_0\), the equation

$$ a_{n-1}u_{n-1}+b_nu_n+a_nu_{n+1}=\lambda u_n,\qquad n\geqslant 2,$$
(4.5)

has a solution \(u^-(\lambda):=\{u_n^-(\lambda)\}_{n=1}^{\infty}\) with asymptotics

$$ u_n^{-}(\lambda)=\bigg(\prod_{l=N_0}^n\eta_l^{-}(\lambda)\bigg)(1+o(1)),\qquad n\to\infty,$$
(4.6)

uniform in \(\lambda\in\overline\Omega_0\), where

$$ \eta^{-}_n(\lambda)=1+\frac{\lambda}{2n^{\alpha}}+\frac{\alpha}{4n}- \frac{\sqrt{\lambda}}{n^{\alpha/2}}\sqrt{1+\frac{\lambda}{4n^{\alpha}}+\frac{\alpha}{\lambda n^{1-\alpha}}}$$
(4.7)

and \(\eta^-_n(\lambda)\neq0\) for \(\lambda\in\overline\Omega_0\) and \(n\geqslant N_0\). For every \(n\), \(u^-_n\) is analytic in \(\Omega_0\) and continuous in \(\overline\Omega_0\). For \(\lambda\in[-R,-r]\), there exists a nonzero limit

$$ H(\lambda):=\lim_{n\to\infty}n^{\alpha/4}\prod_{l=N_0}^n|\eta^-_l(\lambda)|,$$
(4.8)

which is continuous in \(\lambda\). For \(\lambda\in[-R,-r]\), the sequence \(u^+(\lambda):=\{u^+_n(\lambda)\}_{n=1}^{\infty}\) with \(u^+_n(\lambda)=\overline{u^-_n(\lambda)}\) is another solution of the equation (4.5) with nonzero Wronskian

$$ W\{u^+(\lambda),u^-(\lambda)\}=-2i\sqrt{-\lambda}\,H^2(\lambda).$$
(4.9)

The orthogonal polynomials of the first kind associated with \(\mathcal J\) can be expressed as

$$ P_n(\lambda)=\Psi(\lambda)u_n^{+}(\lambda)+\overline{\Psi(\lambda)}u_n^{-}(\lambda), \qquad\lambda\in[-R,-r],\;n\in\mathbb N,$$
(4.10)

where

$$ \Psi(\lambda)=\frac{u_0^-(\lambda)}{2i\sqrt{-\lambda}\,H^2(\lambda)},$$
(4.11)

\(u_0^-(\lambda):=(\lambda-b_1)u_1^-(\lambda)-a_1u_2^-(\lambda)\) (assuming formally in (4.5) that \(a_0:=1\)). The spectral density of \(\mathcal J\) is given by the formula

$$ \rho'(\lambda)=\frac1{4\pi\sqrt{-\lambda}\,|\Psi(\lambda)|^2H^2(\lambda)}=\frac{\sqrt{-\lambda}\,H^2(\lambda)}{\pi|u^-_0(\lambda)|^2}, \qquad\lambda\in[-R,-r].$$
(4.12)

Remark 4.3.

Another critical case with \(b_n=2n^{\alpha}+q_n\) can be easily reduced to the case (4.2) with \(\lambda\) replaced by \(-\lambda\).

Remark 4.4.

The solution \(u^-_n(\lambda)\) depends, through \(N_0\), on the set \(\Omega_0\) (i.e., on \(r\) and \(R\)), and so do the coefficients \(\Psi\), \(H\), and \(u^-_0\): for different choices of \(N_0\) they differ by a product of finitely many values \(\eta^-_n(\lambda)\) with small \(n\), which is a function of \(\lambda\). Unfortunately, this function cannot be taken to be the same for the whole set \((\overline{\mathbb C\setminus\mathbb R_-})\setminus\{0\}\). At the same time, the functions \(|\Psi(\lambda)|H(\lambda)\), \(H(\lambda)/|u^-_0(\lambda)|\), and hence \(\rho'(\lambda)\) do not depend on \(r\) and \(R\). Note also that we can take any compact set in \((\overline{\mathbb C\setminus\mathbb R_-})\setminus\{0\}\) instead of \(\overline\Omega_0\).

Remark 4.5.

1. In the particular case \(\alpha\in(1/2,2/3)\) considered in [17], formula (4.6) and its conjugate version can be written as

$$ u_n^{\pm}(\lambda)=\frac{H(\lambda)+o(1)}{n^{\alpha/4}} \exp\bigg(\pm i\bigg(\frac{\sqrt{-\lambda}\,n^{1-\alpha/2}}{1-\alpha/2}-\frac{n^{\alpha/2}}{\sqrt{-\lambda}} +\frac{(\sqrt{-\lambda})^3n^{1-3\alpha/2}}{24(1-3\alpha/2)}+\varphi_0(\lambda)\bigg)\bigg),$$
(4.13)

where \(\varphi_0\) is some real-valued function. Formula (4.13) coincides, up to multiplication by a factor independent of \(n\), with the formula from [17]. Note that the restriction \(\alpha\in(1/2,2/3)\) is essential here.

2. It is easy to see that, in the case \(\alpha\in(0,1)\setminus(1/2,2/3)\), the asymptotics of \(\prod_{l=1}^n\eta^-_l\) has a form similar to (4.13). The power terms in \(n\) in the exponent have orders from the interval \((0,1)\) and correspond to nonsummable terms in the asymptotic expansions of \(\ln \eta^-_n\) as \(n\to\infty\). The number of such terms depends on \(\alpha\) and grows without bound as \(\alpha\) approaches \(0\) or \(1\). At the same time, the decay of polynomials is always of the order \(1/n^{\alpha/4}\) for \(\lambda<0\), and the asymptotics of the solution \(u^-\) can be written as

$$u^{\pm}_n(\lambda)=\frac{H(\lambda)+o(1)}{n^{\alpha/4}}\exp(\pm i\phi_n(\lambda)),\qquad n\to\infty,$$

where \(\phi_n(\lambda)=\sum\limits_{k=0}^{K}\varphi_k(\lambda)n^{\alpha_k}\) with some \(K\), \(0=\alpha_0<\alpha_1<\cdots<\alpha_K< 1\), and some real-valued \(\{\varphi_k(\lambda)\}_{k=1}^{K}\). Elementary calculations show that \(K\sim(2/\alpha+1/(2(1-\alpha)))\) as \(\alpha\to0\) or \(\alpha\to1\).
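Before turning to the proof, we note that formula (4.12) can be checked numerically against Proposition 3.1. The following rough Python sketch is ours, with the illustrative choices \(p_n=q_n=0\), \(\alpha=0.6\), \(\lambda=-1\) on the upper side of the cut, \(N_0=1\) (admissible for these particular parameter values), and a cutoff index \(M\): it seeds \(u^-\) at \(n=M\) from the asymptotics (4.6), (4.7), runs the recurrence (4.5) backward, approximates \(H(\lambda)\) by the finite-\(n\) value in (4.8), and evaluates (4.12).

```python
import numpy as np

alpha, lam, M = 0.6, complex(-1.0), 100_000      # illustrative choices; p_n = q_n = 0, N_0 = 1
n = np.arange(1, M + 3, dtype=float)
a, b = n**alpha, -2.0 * n**alpha

# eta_n^-(lambda) from (4.7); on the upper side of the cut, sqrt(lambda) = i sqrt(-lambda)
eta = (1 + lam / (2 * n**alpha) + alpha / (4 * n)
       - np.sqrt(lam) / n**(alpha / 2)
       * np.sqrt(1 + lam / (4 * n**alpha) + alpha / (lam * n**(1 - alpha))))
prod = np.cumprod(eta)                           # prod[k-1] = prod_{l=1}^{k} eta_l^-

H = M**(alpha / 4) * abs(prod[M - 1])            # finite-n approximation of (4.8)
u = np.zeros(M + 2, dtype=complex)               # u[k-1] stores u_k^-
u[M - 1], u[M] = prod[M - 1], prod[M]            # seed u_M^-, u_{M+1}^- from (4.6)
for k in range(M, 1, -1):                        # backward recurrence (4.5): recover u_{k-1}^-
    u[k - 2] = ((lam - b[k - 1]) * u[k - 1] - a[k - 1] * u[k]) / a[k - 2]
u0 = (lam - b[0]) * u[0] - a[0] * u[1]           # u_0^- = (lambda - b_1) u_1^- - a_1 u_2^-  (a_0 := 1)

print(np.sqrt(-lam).real * H**2 / (np.pi * abs(u0)**2))   # rho'(lambda) by formula (4.12)
```

The printed value should roughly agree, up to truncation errors, with \(\rho'_N(-1)\) computed from (3.3) for large \(N\), e.g., by the sketch in Section 3.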

Proof of Theorem 4.2.

Consider \(\lambda\in\overline\Omega_0\). For the system

$$ \vec u_{n+1}=B_n(\lambda)\vec u_n,\qquad n\geqslant2,$$
(4.14)

which is equivalent to the eigenvector equation, we choose a sequence \(\{S_n(\lambda)\}_{n=N_0}^{\infty}\) of diagonal matrices

$$S_n(\lambda)= \begin{pmatrix} s_n^+(\lambda) & 0 \\ 0 & s_n^-(\lambda) \\ \end{pmatrix}$$

such that the transformation

$$ \vec u_n=S_n(\lambda)\vec v_n,\quad \vec v_{n+1}=S_{n+1}^{-1}(\lambda)B_n(\lambda)S_n(\lambda)\vec v_n,\qquad n\geqslant N_0,$$
(4.15)

leads to the system (resembling the system for the discrete Schrödinger operator with spectral parameter on the boundary of the essential spectrum)

$$ \vec v_{n+1}= \begin{pmatrix} 0 & 1 \\ -1+c_n(\lambda) & 2 \\ \end{pmatrix}\vec v_n, \qquad n\geqslant N_0,$$
(4.16)

with some sequence \(\{c_n(\lambda)\}_{n=N_0}^{\infty}\). The value \(N_0\in\mathbb N\) will be chosen large enough uniformly in \(\lambda\in\overline\Omega_0\) in order that a number of conditions hold (these conditions will be specified in what follows). This form of the system corresponds, via the transformation \(\vec v_n=\begin{pmatrix} x_n \\ x_{n+1} \end{pmatrix}\), to the three-term recurrence relation \(x_{n+2}-2x_{n+1}+(1-c_n(\lambda))x_n=0\), which was studied in Kooman’s paper [20]. In order to obtain this system, we need the following relations to hold:

$$ \begin{pmatrix} 0 & \dfrac{s_n^-(\lambda)}{s_{n+1}^+(\lambda)}\\[10pt] -\dfrac{s_n^+(\lambda)a_{n-1}}{s_{n+1}^-(\lambda)a_n}&\dfrac{s_n^-(\lambda)(\lambda-b_n)}{s_{n+1}^-(\lambda)a_n} \end{pmatrix}=\begin{pmatrix} 0 & 1 \\ -1+c_n(\lambda) & 2 \\ \end{pmatrix},\qquad n\geqslant N_0.$$
(4.17)

Comparing the rightmost columns, we obtain \(s_n^-(\lambda)(\lambda-b_n)/(s_{n+1}^-(\lambda)a_n)=2\) and \(s_n^-(\lambda)=s_{n+1}^+(\lambda)\). We denote

$$d_n(\lambda):=\frac{\lambda-b_n}{2a_n},\qquad n\geqslant1.$$

Then \(s^-_{n+1}(\lambda)/s^-_n(\lambda)=d_n(\lambda)\). The index \(N_0\) will be chosen large enough to ensure, in particular, that \(d_n(\lambda)\neq0\) for \(\lambda\in\overline\Omega_0\) and \(n\geqslant N_0-1\). Take

$$s_n^-(\lambda):=\prod_{l=N_0-1}^{n-1}d_l(\lambda),\quad s_n^+(\lambda):=\prod_{l=N_0-1}^{n-2}d_l(\lambda),\qquad n\geqslant N_0,$$

so that

$$ S_n(\lambda)=\bigg(\prod_{l=N_0-1}^{n-2}d_l(\lambda)\bigg) \begin{pmatrix} 1 & 0 \\ 0 & d_{n-1}(\lambda) \\ \end{pmatrix}, \qquad n\geqslant N_0.$$
(4.18)

From the equality of the lower-left entries in (4.17) we have

$$-1+c_n(\lambda)=-\frac{s_n^+(\lambda)a_{n-1}}{s_{n+1}^-(\lambda)a_n};$$

therefore, we have to define

$$c_n(\lambda):=1-\frac{4a_{n-1}^2}{(\lambda-b_{n-1})(\lambda-b_n)},\qquad n\geqslant N_0.$$

Note that \(c_n(\lambda)\to0\) as \(n\to\infty\) in the critical case. The substitution

$$ \vec v_n= \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \vec w_n$$
(4.19)

transforms system (4.16) into the system

$$ \vec w_{n+1}= \begin{pmatrix} 1 & 1 \\ c_n(\lambda) & 1 \end{pmatrix} \vec w_n,\qquad n\geqslant N_0,$$
(4.20)

which has the same form as in [20]. Following the method of Kooman, we look for sequences \(\{g_n^{\pm}(\lambda)\}_{n=N_0}^{\infty}\) such that the substitution

$$ \vec w_n= \begin{pmatrix} 1 & 1 \\ g_n^+(\lambda) & g_n^-(\lambda) \\ \end{pmatrix} \vec y_n$$
(4.21)

transforms system (4.20) into

$$\begin{gathered} \, \vec y_{n+1}=\bigg(\begin{pmatrix} 1+g_n^+ & 0 \\ 0 & 1+g_n^- \end{pmatrix}+\frac1{g_{n+1}^--g_{n+1}^+}\\ \times\begin{pmatrix} g_{n+1}^+-g_n^++g_n^+g_{n+1}^+-c_n & g_{n+1}^--g_n^-+g_n^-g_{n+1}^--c_n \\ -(g_{n+1}^+-g_n^++g_n^+g_{n+1}^+-c_n) & -(g_{n+1}^--g_n^-+g_n^-g_{n+1}^--c_n) \\ \end{pmatrix}\bigg) \vec y_n, \end{gathered}$$
(4.22)

which has the Levinson (L-diagonal) form ([3], [8]) if the second term in the coefficient matrix in (4.22) is summable. Combining the substitutions (4.15), (4.19), and (4.21), we see that the solutions of systems (4.14) and (4.22) are related by

$$\vec u_n=S_n(\lambda) \begin{pmatrix} 1 & 1 \\ 1+g^+_n(\lambda) & 1+g^-_n(\lambda) \end{pmatrix} \vec y_n,\qquad n\geqslant N_0.$$

Consider the sequence \(\{c_n(\lambda)\}_{n=N_0}^{\infty}\). It has asymptotics

$$\begin{aligned} \, c_n(\lambda)&=1-\frac{4((n-1)^{\alpha}+p_{n-1})^2}{(\lambda+2(n-1)^{\alpha}-q_{n-1})(\lambda+2n^{\alpha}-q_n)}\notag\\ &=\frac{\frac\lambda{n^{\alpha}}+\frac{\lambda^2}{4n^{2\alpha}}+\frac\alpha n}{1+\frac\lambda{n^{\alpha}}+\frac{\lambda^2}{4n^{2\alpha}}} +\frac{r^{(1)}_n(\lambda)}{n^{\alpha/2}}=\chi_n(\lambda)+\frac{r^{(1)}_n(\lambda)}{n^{\alpha/2}}, \end{aligned}$$
(4.23)

where \(\{\sup_{\lambda\in\overline\Omega_0}\{|r^{(1)}_n(\lambda)|\}\}_{n=N_0}^{\infty}\in l^1\) for sufficiently large \(N_0\). Here we use the notation

$$ \chi_n(\lambda):= \frac{\frac\lambda{n^{\alpha}}+\frac{\lambda^2}{4n^{2\alpha}}+\frac\alpha n}{\big(1+\frac\lambda{2n^{\alpha}}\big)^2}\,.$$
(4.24)

The index \(N_0\) will be chosen so that the roots and the poles of \(\chi_n\) and the poles of \(c_n\) lie outside \(\overline\Omega_0\) for \(n\geqslant N_0\).

An important property, which also arises in [20, Theorem 1, case 1] as a condition and which one can check straightforwardly here, is that

$$ \bigg\{\sup_{\lambda\in \overline\Omega_0}\bigg|\frac{\chi_{n+1}(\lambda)-\chi_n(\lambda)}{\chi_n(\lambda)}+\frac{\alpha}n\bigg|\bigg\}_{n=N_0}^{\infty}\in l^1$$
(4.25)

due to the property

$$\chi_{n+1}(\lambda)=\chi_n(\lambda)\bigg(1-\frac{\alpha}n+r_n^{(2)}(\lambda)\bigg)\quad\text{with }\,\bigg\{\sup_{\lambda\in \overline\Omega_0}|r^{(2)}_n(\lambda)|\bigg\}_{n=N_0}^{\infty}\in l^1$$

for sufficiently large \(N_0\). Let us define

$$ g_n^{\pm}(\lambda):=\pm\sqrt{\chi_n(\lambda)}+\frac{\alpha}{4n},\qquad n\geqslant N_0.$$
(4.26)

To specify the branch of the square root, note that the function \(\chi_n\) has one pole \(\mu_n:=-2n^{\alpha}\) and two zeros \(\lambda^{\pm}_n:=2n^{\alpha}(-1\pm\sqrt{1-\frac{\alpha}n})\) such that \(\lambda^-_n\to-\infty\) and \(\lambda^+_n\to0\) as \(n\to\infty\). Since \(\chi_n^{-1}((-\infty,0])=[\lambda_n^-,\lambda_n^+]\setminus\{\mu_n\}\) and \(\chi_n(\mu_n)=\infty\), it follows that, for \(\lambda\in\mathbb C\setminus[\lambda_n^-,\lambda_n^+]\), the value \(z=\chi_n(\lambda)\) never lies on the cut \((-\infty,0]\). A branch of \(\sqrt z\) on \(\mathbb C\setminus(-\infty,0]\) can be chosen so that \(\operatorname{Re}\sqrt z>0\) (the principal branch). Thus, we can choose a branch of \(\sqrt{\chi_n(\lambda)}\) in \(\mathbb C\setminus[\lambda_n^-,\lambda_n^+]\) so that \(\sqrt{\chi_n(0)}=\sqrt{\frac{\alpha}n}>0\), and for this choice we have \(\operatorname{Re}\sqrt{\chi_n(\lambda)}>0\) for \(\lambda\in\mathbb C\setminus[\lambda_n^-,\lambda_n^+]\). We choose \(N_0\) large enough to ensure that \(\lambda^-_n,\mu_n<-3R\) and \(\lambda^+_n>-r\) for \(n\geqslant N_0\). Then

$$ \operatorname{Re}\sqrt{\chi_n(\lambda)}\geqslant0\quad\text{for }\,\lambda\in\overline\Omega_0,\;n\geqslant N_0.$$
(4.27)

Note that there are two sides of the cut \([-R,-r]\) in \(\overline{\Omega}_0\) for which the values of \(\sqrt{\chi_n}\) are different; see Remark 4.1. For every \(n\geqslant N_0\), the function \(\sqrt{\lambda/n^{\alpha}+\lambda^2/(4n^{2\alpha})+\alpha/n}\) is also analytic in \(\Omega_0\) and continuous in \(\overline{\Omega}_0\), and we can write

$$\sqrt{\chi_n(\lambda)}= \frac{\sqrt{\frac\lambda{n^{\alpha}}+\frac{\lambda^2}{4n^{2\alpha}}+\frac\alpha n}}{1+\frac{\lambda}{2n^{\alpha}}} = \frac{\frac{\sqrt{\lambda}}{n^{\alpha/2}} \sqrt{1+\frac{\lambda}{4n^{\alpha}}+\frac{\alpha}{\lambda n^{1-\alpha}}}}{1+\frac{\lambda}{2n^{\alpha}}}\quad\text{for }\,\lambda\in\overline\Omega_0,\;n\geqslant N_0,$$

which corresponds to the choice of the branch of \(\sqrt\lambda\).

It follows from (4.25) that

$$ \bigg\{\sup_{\lambda\in \overline\Omega_0}\bigg|\frac{g_{n+1}^{\pm}(\lambda)-g_n^{\pm}(\lambda)+g_n^{\pm}(\lambda)g_{n+1}^{\pm}(\lambda)-\chi_n(\lambda)} {g_{n+1}^-(\lambda)-g_{n+1}^+(\lambda)}\bigg|\bigg\}_{n=N_0}^{\infty}\in l^1.$$
(4.28)

Indeed, one can easily check that

$$\begin{aligned} \, g^{\pm}_{n+1}(\lambda)&=\pm\sqrt{\chi_n(\lambda)}\sqrt{1-\frac{\alpha}n+r^{(3)}_n(\lambda)}+\frac{\alpha}{4(n+1)}\notag\\ &=\pm\sqrt{\chi_n(\lambda)}\mp\frac{\alpha\sqrt{\chi_n(\lambda)}}{2n}+\frac{\alpha}{4n}+\sqrt{\chi_n(\lambda)}r^{(4)}_n(\lambda),\notag\\ g^{\pm}_{n+1}(\lambda)-g^{\pm}_n(\lambda)&=\sqrt{\chi_n(\lambda)}\Big(\mp\frac{\alpha}{2n}+r^{(5)}_n(\lambda)\Big),\notag\\ g^-_{n+1}(\lambda)-g^+_{n+1}(\lambda)&=-2\sqrt{\chi_{n+1}(\lambda)}=\sqrt{\chi_n(\lambda)}\Big(-2+\frac{\alpha}n+r^{(5)}_n(\lambda)\Big),\\ g^{\pm}_n(\lambda)g^{\pm}_{n+1}(\lambda)&=\chi_n(\lambda)\pm\frac{\alpha\sqrt{\chi_n(\lambda)}}{2n}+\sqrt{\chi_n(\lambda)}\,r^{(6)}_n(\lambda)\notag \end{aligned}$$
(4.29)

and \(\{\sup_{\lambda\in \overline\Omega_0}\{|r^{(3)}_n(\lambda)|, |r^{(4)}_n(\lambda)|,|r^{(5)}_n(\lambda)|,|r^{(6)}_n(\lambda)|\}\}_{n=N_0}^{\infty}\in l^1\) for sufficiently large \(N_0\), from which (4.28) follows. From (4.23), (4.29), and (4.24) we have

$$\bigg\{\sup_{\lambda\in \overline\Omega_0}\bigg|\frac{c_n(\lambda)-\chi_n(\lambda)}{g_{n+1}^-(\lambda)-g_{n+1}^+(\lambda)}\bigg|\bigg\}_{n=N_0}^{\infty}\in l^1.$$

Therefore, we can write system (4.22) in the form

$$ \vec y_{n+1}= \bigg(\begin{pmatrix} 1+g_n^+(\lambda) & 0 \\ 0 & 1+g_n^-(\lambda) \\ \end{pmatrix} +R_n(\lambda)\bigg) \vec y_n, \qquad n\geqslant N_0,$$
(4.30)

where \(\{\sup_{\lambda\in \overline\Omega_0}\|R_n(\lambda)\|\}_{n=N_0}^{\infty}\in l^1\). Clearly, \(c_n\) and \(g^{\pm}_n\) are analytic in \(\Omega_0\) and continuous in \(\overline\Omega_0\) for all \(n\geqslant N_0\).

Now the tools used to prove the following lemma can be employed to show the existence of a solution \(\vec y_n^{\,-}(\lambda)\) analytic in \(\Omega_0\), continuous in \(\overline\Omega_0\) (for all \(n\geqslant N_0\)), and having asymptotics

$$ \vec y_n^{\,-}(\lambda)=\bigg(\prod_{l=N_0}^{n-1}(1+g_l^-(\lambda))\bigg)(\vec e_-+o(1)), \qquad n\to\infty,$$
(4.31)

uniform in \(\overline\Omega_0\).

Lemma 4.6.

Let a sequence \(\{\lambda_n\}_{n=1}^{\infty}\) of nonzero complex numbers be such that, for some \(C>0\) and any \(p,q\in\mathbb N\) such that \(p\leqslant q\) ,

$$ \prod_{l=p}^q|\lambda_l|\geqslant\frac1C\,.$$
(4.32)

Let a sequence \(\{R_n\}_{n=1}^{\infty}\) of complex \(2\times2\) matrices be such that

$$\det\bigg(\begin{pmatrix} \lambda_n & 0 \\ 0 & 1/\lambda_n \\ \end{pmatrix} +R_n\bigg)\neq0,\qquad n\geqslant1,$$

and

$$ \sum_{k=1}^{\infty}|\lambda_k|\|R_k\|<\infty.$$
(4.33)

Then the system

$$ \vec x_{n+1}=\bigg(\begin{pmatrix} \lambda_n & 0 \\ 0 & 1/\lambda_n\\ \end{pmatrix} +R_n\bigg)\vec x_n,\qquad n\geqslant1,$$
(4.34)

has the solution

$$ \vec x^{\,-}_n=\bigg(\prod_{l=1}^{n-1}\frac1{\lambda_l}\bigg)(\vec e_-+o(1)),\qquad n\to\infty,$$
(4.35)

where \(\vec e_-= \begin{pmatrix} 0 \\ 1 \end{pmatrix}\) .

Proof.

One can check that the solutions of system (4.34) are the same as the solutions of the integral equations

$$\vec x_n= \begin{pmatrix} \prod\limits_{l=1}^{n-1}\lambda_l & 0 \\ 0 & \prod\limits_{l=1}^{n-1}\frac1{\lambda_l} \end{pmatrix} \vec f -\sum_{k=n}^{\infty} \begin{pmatrix} \prod\limits_{l=n}^{k}\frac1{\lambda_l} & 0 \\ 0 & \prod\limits_{l=n}^{k}\lambda_l \end{pmatrix} R_k\vec x_k$$

for arbitrary \(\vec f\in\mathbb C^2\) (this argument is a variant of the variation of parameters method). In particular, consider \(\vec f=\vec e_-\) and

$$ \vec x^{\,-}_n= \left(\prod\limits_{l=1}^{n-1}\frac1{\lambda_l}\right) \vec e_- -\sum_{k=n}^{\infty} \begin{pmatrix} \prod\limits_{l=n}^{k}\frac1{\lambda_l} & 0 \\ 0 & \prod\limits_{l=n}^{k}\lambda_l \end{pmatrix} R_k\vec x^{\,-}_k.$$
(4.36)

Then, for the new sequence of vectors

$$\vec{\tilde x}^{-}_n:=\bigg(\prod\limits_{l=1}^{n-1}\lambda_l\bigg)\vec x^{\,-}_n,$$

equation (4.36) is equivalent to the equation

$$ \vec{\tilde x}^{-}_n= \vec e_- -\sum_{k=n}^{\infty} \begin{pmatrix} \prod\limits_{l=n}^{k}\frac1{\lambda_l^2} & 0 \\ 0 & 1 \end{pmatrix} \lambda_kR_k\vec{\tilde x}^{-}_k.$$
(4.37)

Consider the family of \(2\times2\) matrices

$$V_{nk}:=- \begin{pmatrix} \prod\limits_{l=n}^{k}\frac1{\lambda_l^2} & 0 \\ 0 & 1 \end{pmatrix} \lambda_kR_k,\qquad n,k\geqslant1.$$

Since the norms

$$ \|V_{nk}\|\leqslant(C^2+1)|\lambda_k|\|R_k\|$$
(4.38)

are summable in \(k\) by conditions (4.32) and (4.33), equation (4.37) can be written as

$$\vec{\tilde x}^{-}=\vec e_-+V\vec{\tilde x}^{-}$$

with the Volterra operator \(V\) defined by the matrix-valued kernel \(V_{nk}\) on the Banach space \(l^{\infty}(\mathbb N;\mathbb C^2)\). One has

$$\begin{gathered} \, \|V^j\|\leqslant\frac{\big(\sum_{k=1}^{\infty}(C^2+1)|\lambda_k|\|R_k\|\big)^j}{j!},\qquad j\geqslant0,\notag\\ \|(I-V)^{-1}\|\leqslant\exp\bigg(\sum_{k=1}^{\infty}(C^2+1)|\lambda_k|\|R_k\|\bigg),\notag\\ \vec{\tilde x}^{-}=(I-V)^{-1}\vec e_-=\sum_{j=0}^{\infty}V^j\vec e_-. \end{gathered}$$
(4.39)

From (4.37) and (4.38) it follows that \(\vec{\tilde x}^{-}_n\to e_-\), \(n\to\infty\), and this implies (4.35), which completes the proof. \(\Box\)

Remark 4.7.

Lemma 4.6 is another variation of the discrete asymptotic Levinson theorem; see [7, Lemma 2.1]. However, note that the proof of the existence of a “small” solution does not require the dichotomy condition. This is crucial for the uniformity of asymptotics and for the continuity of solutions in the parameter \(\lambda\) in what follows. Other formulations of smooth and uniform discrete Levinson theorems, such as those in [27], do not yield the result we need.
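The following toy Python sketch (our own example, not from the paper) illustrates the mechanism of the proof of Lemma 4.6. We take \(\lambda_n=1+1/n\), so that every product \(\prod_{l=p}^q|\lambda_l|\geqslant1\) and (4.32) holds with \(C=1\), and a perturbation \(R_n\) of size \(O(n^{-2})\), set to zero for \(n>K\), so that (4.33) holds. Equation (4.37) then reduces to a finite backward substitution (the \(k=n\) term is kept on the left, giving a \(2\times2\) solve at each step); the resulting \(\vec x^{\,-}_n\) is checked to solve the system (4.34), and \(\vec{\tilde x}^{-}_n\) stays close to \(\vec e_-\) for large \(n\), in line with (4.35).

```python
import numpy as np

# Toy data (illustrative choices): lambda_n = 1 + 1/n, R_n = O(n^{-2}), R_n = 0 for n > K.
K = 300
idx = np.arange(1, K + 1)
lam = 1.0 + 1.0 / idx                                        # lambda_1, ..., lambda_K
rng = np.random.default_rng(0)
R = 0.1 * rng.standard_normal((K, 2, 2)) / idx[:, None, None]**2
e_minus = np.array([0.0, 1.0])

# Solve the truncated equation (4.37) by backward substitution; xt[n-1] = tilde x_n, tilde x_{K+1} = e_-.
xt = np.tile(e_minus, (K + 1, 1))
for n in range(K, 0, -1):
    rhs, p = e_minus.copy(), 1.0 / lam[n - 1]**2
    for k in range(n + 1, K + 1):                # terms with k > n use the already computed tilde x_k
        p /= lam[k - 1]**2                       # p = prod_{l=n}^{k} lambda_l^{-2}
        rhs -= lam[k - 1] * (np.diag([p, 1.0]) @ R[k - 1] @ xt[k - 1])
    A = np.eye(2) + lam[n - 1] * np.diag([1.0 / lam[n - 1]**2, 1.0]) @ R[n - 1]   # the k = n term
    xt[n - 1] = np.linalg.solve(A, rhs)

# Undo the rescaling: x_n^- = (prod_{l=1}^{n-1} 1/lambda_l) * tilde x_n.
cum = np.concatenate(([1.0], np.cumprod(lam)))   # cum[m] = prod_{l=1}^{m} lambda_l
x = xt[:K] / cum[:K, None]

# Check that x^- solves (4.34) and that tilde x_n is close to e_- for large n.
ok = all(np.allclose(x[n], (np.diag([lam[n - 1], 1.0 / lam[n - 1]]) + R[n - 1]) @ x[n - 1])
         for n in range(1, K))
print(ok, xt[K - 1])
```

This is only a finite-dimensional caricature of the Volterra argument above, but it shows the two ingredients at work: the rescaling by \(\prod\lambda_l\) and the backward (upper-triangular) structure of the kernel \(V_{nk}\).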

Consider the following transformation \(\vec y_n\to\vec x_n\):

$$ \vec y_n=\bigg(\prod_{l=N_0}^{n-1}\sqrt{1+g^+_l(\lambda)}\sqrt{1+g^-_l(\lambda)}\bigg)\vec x_n,\qquad n\geqslant N_0.$$
(4.40)

For sufficiently large \(N_0\), we have \(|g^{\pm}_n(\lambda)|\leqslant\frac12\) for \(\lambda\in\overline\Omega_0\), \(n\geqslant N_0\), and we can take the principal branch for both square roots. This substitution transforms (4.30) into the system

$$\vec x_{n+1}= \left(\begin{pmatrix} \frac{\sqrt{1+g^+_n}}{\sqrt{1+g^-_n}} & 0 \\ 0 &\frac{\sqrt{1+g^-_n}}{\sqrt{1+g^+_n}} \end{pmatrix} + \frac{R_n}{\sqrt{1+g^+_n}\sqrt{1+g^-_n}} \right) \vec x_n,\qquad n\geqslant N_0,$$

to which Lemma 4.6 is applicable in \(\overline\Omega_0\) if \(N_0\) is chosen large enough (shifting the index by \(N_0-1\) does not change the situation). Indeed, the constant \(C\) in (4.32) can be chosen equal to 1 for every \(\lambda\in\overline\Omega_0\): due to (4.27) we have

$$\bigg|\frac{1+\sqrt{\chi_n(\lambda)}+\alpha/(4n)}{1-\sqrt{\chi_n(\lambda)}+\alpha/(4n)}\bigg|\geqslant1$$

for all \(\lambda\in\overline\Omega_0\) and \(n\geqslant N_0\). Moreover, we have

$$\bigg\{\sup_{\lambda\in \overline\Omega_0}\frac{\|R_n(\lambda)\|}{|1+g^-_n(\lambda)|}\bigg\}_{n=N_0}^{\infty}\in l^1,$$

since \(|g^-_n(\lambda)|\leqslant1/2\) for \(\lambda\in\overline\Omega_0\) and \(n\geqslant N_0\), which ensures that the estimate of the sum in (4.37) provided by (4.38) is uniform in \(\lambda\in\overline\Omega_0\). By Lemma 4.6 there exists a solution

$$ \vec x^{\,-}_n(\lambda)=\Bigg(\prod_{l=N_0}^{n-1}\frac{\sqrt{1+g^-_l(\lambda)}}{\sqrt{1+g^+_l(\lambda)}}\Bigg)(\vec e_-+o(1)),\qquad n\to\infty,$$
(4.41)

which, together with (4.40), gives (4.31), and the asymptotics in (4.41) and (4.31) are uniform. Furthermore, the sum in (4.39) converges absolutely and uniformly (i.e., \(\sum_{j=0}^{\infty}\sup_{\lambda\in\overline\Omega_0}\|V^j(\lambda)\vec e_-\|<\infty\)), and the summands are analytic in \(\Omega_0\) and continuous in \(\overline\Omega_0\); thus, the solutions \(\vec x^{\,-}(\lambda)\) and \(\vec y^{\,-}(\lambda)\) are also analytic in \(\Omega_0\) and continuous in \(\overline\Omega_0\). Returning to the system (4.14), using (4.21), (4.19), and (4.15), we obtain a solution of system (4.14) analytic in \(\Omega_0\) and continuous in \(\overline\Omega_0\) whose asymptotics as \(n\to\infty\) is uniform in \(\overline\Omega_0\); we denote this solution by \(\vec{\tilde u}^{\,-}_n(\lambda)\):

$$\begin{aligned} \, \vec{\tilde u}^{\,-}_n(\lambda):\!&=S_n(\lambda) \begin{pmatrix} 1 & 0 \\ 1 & 1 \\ \end{pmatrix} \begin{pmatrix} 1 & 1 \\ g_n^+(\lambda) & g_n^-(\lambda) \\ \end{pmatrix} \vec y_n^{\,-}(\lambda)\\ &=S_n(\lambda) \begin{pmatrix} 1 & 0 \\ 1 & 1 \\ \end{pmatrix} \begin{pmatrix} 1 & 1 \\ g_n^+(\lambda) & g_n^-(\lambda) \\ \end{pmatrix} \bigg(\prod_{l=N_0}^{n-1}\sqrt{1+g^+_l(\lambda)}\sqrt{1+g^-_l(\lambda)}\bigg)\vec x^{\,-}_n\\ &=\bigg(\prod_{l=N_0}^{n-1}h_l^-(\lambda)\bigg) \begin{pmatrix} 1 & 1 \\ h_n^+(\lambda) & h_n^-(\lambda) \\ \end{pmatrix} (\vec e_-+o(1)), \end{aligned}$$

where

$$ h_n^{\pm}(\lambda):=d_{n-1}(\lambda)(1+g^{\pm}_n(\lambda)).$$
(4.42)

As one can see,

$$ d_n(\lambda)=1+\frac{\lambda}{2n^{\alpha}}+\frac{r^{(7)}_n(\lambda)}{n^{\alpha/2}},\quad\text{and }\,\bigg\{\sup_{\lambda\in \overline\Omega_0}|r^{(7)}_n(\lambda)|\bigg\}_{n=1}^{\infty}\in l^1.$$
(4.43)

Using this formula and the definitions (4.26), (4.24), and (4.7), we obtain

$$h^-_n(\lambda)=\eta^-_n(\lambda)+r^{(8)}_n(\lambda),\quad\text{and }\,\bigg\{\sup_{\lambda\in \overline\Omega_0}|r^{(8)}_n(\lambda)|\bigg\}_{n=N_0}^{\infty}\in l^1.$$

By the choice of \(N_0\) we can ensure that, for all \(\lambda\in\overline\Omega_0\) and \(n\geqslant N_0\), we have \(|\eta^-_n(\lambda)|\geqslant1/2\). Then, for every \(n\geqslant N_0\),

$$\frac{h^-_n(\lambda)}{\eta^-_n(\lambda)}= 1+\frac{r^{(8)}_n(\lambda)}{\eta^-_n(\lambda)}= 1+r^{(9)}_n(\lambda),\quad\text{and }\,\bigg\{\sup_{\lambda\in \overline\Omega_0}|r^{(9)}_n(\lambda)|\bigg\}_{n=N_0}^{\infty}\in l^1.$$

This quotient is a function analytic in \(\Omega_0\) and continuous in \(\overline\Omega_0\), and it has no roots. Therefore, the product

$$\prod_{l=N_0}^{\infty}\frac{h^-_l(\lambda)}{\eta^-_l(\lambda)}=:C_0(\lambda)$$

converges and has the same properties. Consider the solution

$$\begin{aligned} \, \vec u^{\,-}_n(\lambda)&=\begin{pmatrix} u_{n-1}(\lambda) \\ u_n(\lambda) \end{pmatrix}:= \frac{\vec{\tilde u}^-_n(\lambda)}{C_0(\lambda)}\notag\\ &=\bigg(\prod_{l=N_0}^{n-1}\eta^-_l(\lambda)\bigg) \bigg(\prod_{l=n}^{\infty}\frac{\eta^-_l(\lambda)}{h_l^-(\lambda)}\bigg) \begin{pmatrix} 1 & 1 \\ h_n^+(\lambda) & h_n^-(\lambda) \\ \end{pmatrix} (\vec e_-+o(1))\notag\\ &=\bigg(\prod_{l=N_0}^{n-1}\eta^-_l(\lambda)\bigg) \begin{pmatrix} 1 & 1 \\ h_n^+(\lambda) & h_n^-(\lambda) \\ \end{pmatrix} (\vec e_-+o(1)), \end{aligned}$$
(4.44)

which is proportional to \(\vec{\tilde u}^-_n(\lambda)\) and implies (4.6). The solution \(\vec u^{\,-}_n\) is analytic in \(\Omega_0\) and continuous in \(\overline\Omega_0\) for every \(n\geqslant N_0\). Moreover, although this solution is initially defined for \(n\geqslant N_0\), it exists for all \(n\geqslant2\) (and can be formally defined also for \(n=1\), in which case \(a_0:=1\)), retaining the same properties, because the matrices \(B_n(\lambda)\) and \(B^{-1}_n(\lambda)\) are entire in \(\lambda\) for all \(n\).

Note that the asymptotics of the form

$$\bigg(\prod_{l=N_0}^{n-1}\eta^-_l(\lambda)\bigg) \begin{pmatrix} 1 & 1 \\ h_n^+(\lambda) & h_n^-(\lambda) \\ \end{pmatrix} (\vec e_-+o(1))$$

implies, but is not equivalent to,

$$\bigg(\prod_{l=N_0}^{n-1}\eta^-_l(\lambda)\bigg) \begin{pmatrix} 1+o(1)\\ 1+o(1) \end{pmatrix},$$

which would be enough to obtain (4.6). It contains more information, which is lost due to the degeneracy of the matrix

$$\begin{pmatrix} 1&1\\ 1&1 \end{pmatrix} =\lim_{n\to\infty} \begin{pmatrix} 1 & 1 \\ h_n^+(\lambda) & h_n^-(\lambda) \\ \end{pmatrix}.$$

We will employ this more refined form of asymptotics below, in (4.46).

At several points in this proof, we have required \(N_0\) to be chosen sufficiently large so that one property or another holds. Evidently, one can choose \(N_0\) so that all of these properties hold at once. This choice is determined only by the values \(r\) and \(R\).

For \(\lambda\in[-R,-r]\) and large \(n\),

$$|\eta^-_n(\lambda)|^2=\Big(1+\frac{\lambda}{2n^{\alpha}}+\frac{\alpha}{4n}\Big)^2 -\frac{\lambda}{n^{\alpha}}-\frac{\lambda^2}{4n^{2\alpha}}-\frac{\alpha}n =1-\frac{\alpha}{2n}+\frac{\alpha\lambda}{4n^{1+\alpha}}+\frac{\alpha^2}{16n^2}\,.$$

Thus, \(n^{\alpha/2}\prod_{l=N_0}^n|\eta^-_l(\lambda)|^2\) has a finite limit as \(n\to\infty\), which is a continuous function of \(\lambda\) without roots in \([-R,-r]\). Hence \(H(\lambda)\) is well defined by (4.8).

Since the recurrence relation (4.5) has real coefficients, the sequence \(u^+_n(\lambda):=\overline{u^-_n(\lambda)}\) is its solution for \(\lambda\in[-R,-r]\). The sequence

$$\vec u^{\,+}_n(\lambda):= \begin{pmatrix} u^+_{n-1}(\lambda) \\ u^+_n(\lambda) \end{pmatrix} =\bigg(\prod_{l=N_0}^{n-1}\overline{\eta^-_l(\lambda)}\bigg) \begin{pmatrix} 1 & 1 \\ \overline{h^+_n(\lambda)} & \overline{h^-_n(\lambda)} \end{pmatrix} (\vec e_-+o(1))$$

is a solution to system (4.14). For \(\lambda\in[-R,-r]\) and large \(n\), one has \(h^+_n(\lambda)=\overline{h^-_n(\lambda)}\), and hence

$$\begin{aligned} \, \vec u^{\,+}_n(\lambda) &=\bigg(\prod_{l=N_0}^{n-1}\overline{\eta^-_l(\lambda)}\bigg) \begin{pmatrix} 1 & 1 \\ h^-_n(\lambda) & h^+_n(\lambda) \end{pmatrix} (\vec e_-+o(1))\notag\\ &=\bigg(\prod_{l=N_0}^{n-1}\overline{\eta^-_l(\lambda)}\bigg) \begin{pmatrix} 1 & 1 \\ h^+_n(\lambda) & h^-_n(\lambda) \end{pmatrix} (\vec e_++o(1)), \end{aligned}$$
(4.45)

where \(\vec e_+:= \begin{pmatrix} 1 \\ 0 \end{pmatrix}\). For fixed \(\lambda\in[-R,-r]\) and any \(n\in\mathbb N\), the Wronskian of the solutions \(u^+(\lambda)\) and \(u^-(\lambda)\) is equal to

$$\begin{aligned} \, W\{u^+(\lambda),u^-(\lambda)\} &=a_n(u^+_n(\lambda)u^-_{n+1}(\lambda)-u^+_{n+1}(\lambda)u^-_n(\lambda))\notag\\ &=a_n\det \begin{pmatrix} u^+_n(\lambda) & u^-_n(\lambda) \\ u^+_{n+1}(\lambda) & u^-_{n+1}(\lambda) \\ \end{pmatrix}\notag\\ &=\lim_{n\to\infty}a_n\det \bigg(\begin{pmatrix} 1 & 1 \\ h_{n+1}^+(\lambda) & h_{n+1}^-(\lambda) \\ \end{pmatrix} (I+o(1)) \prod_{l=N_0}^n \begin{pmatrix} \overline{\eta^-_l(\lambda)} & 0 \\ 0 & \eta^-_l(\lambda) \end{pmatrix} \bigg)\notag\\ &=\lim_{n\to\infty}a_n(h^-_{n+1}(\lambda)-h^+_{n+1}(\lambda))\bigg(\prod_{l=N_0}^n|\eta^-_l(\lambda)|^2\bigg)(1+o(1))\notag\\ &=-2\lim_{n\to\infty}a_nd_n(\lambda)\sqrt{\chi_{n+1}(\lambda)}\bigg(\prod_{l=N_0}^n|\eta^-_l(\lambda)|^2\bigg)(1+o(1))\notag\\ &=-2i\sqrt{-\lambda}H^2(\lambda), \end{aligned}$$
(4.46)

where we used (4.42), (4.44), and (4.45) to obtain the third equality, (4.42) and (4.26) to obtain the fifth one, and (1.2), (4.43), (4.24), and (4.8) to obtain the last equality. Hence \(u^+(\lambda)\) and \(u^-(\lambda)\) are linearly independent for \(\lambda\in [-R,-r]\), and the orthogonal polynomials of the first kind can be expressed as

$$ P_n(\lambda)=\Psi(\lambda)u_n^+(\lambda)+\overline{\Psi(\lambda)}u_n^-(\lambda)$$
(4.47)

with

$$\Psi(\lambda)=\frac{W\{P(\lambda),u^-(\lambda)\}}{W\{u^+(\lambda),u^-(\lambda)\}} =\frac{a_0(P_0(\lambda)u^-_1(\lambda)-P_1(\lambda)u^-_0(\lambda))}{-2i\sqrt{-\lambda}\,H^2(\lambda)} =\frac{u^-_0(\lambda)}{2i\sqrt{-\lambda}\,H^2(\lambda)},$$

where \(P(\lambda):=\{P_n(\lambda)\}_{n=1}^{\infty}\) and \(P_0(\lambda)\) is the formal extension of the solution of the eigenvector equation with \(a_0=1\), i.e., \(P_0(\lambda)\equiv0\); one can easily check that the Wronskian remains constant on the larger index set \(\mathbb N\cup\{0\}\). From this we see that \(\Psi\) and \(u^-_0\) cannot have zeros on \([-R,-r]\).

In order to apply Proposition 3.3, we prove the existence of a limit, uniform in \([-R,-r]\), along a subsequence of the sequence

$$ \rho'_n(\lambda)=\frac{\sqrt{1-d_n^2(\lambda)}}{\pi a_n|P_{n+1}(\lambda)-z_n(\lambda)P_n(\lambda)|^2}\quad\text{for a. e. }\,\lambda\in(b_n-2a_n,b_n+2a_n),$$
(4.48)

where

$$z_n(\lambda)=d_n(\lambda)-i\sqrt{1-d_n^2(\lambda)},$$

according to (3.3) and (3.4). Firstly, from (4.43) we have

$$\sqrt{1-d_n^2(\lambda)}=\frac{\sqrt{-\lambda}}{n^{\alpha/2}}\sqrt{1+\frac2{\lambda}n^{\alpha/2}r^{(7)}_n(\lambda)+o(1)}\,.$$

The term \((2/\lambda)n^{\alpha/2}r^{(7)}_n(\lambda)\) does not generally tend to zero under our condition (1.3). However, it can be made to tend to zero along a subsequence. Since we are looking for uniform convergence, this subsequence should not depend on \(\lambda\). Let \(\varkappa_n:=\max\{|p_n|/n^{\alpha/2},|q_n|/n^{\alpha/2}\}\); then \(\{\varkappa_n\}_{n=1}^{\infty}\in l^1\) by condition (1.3). One can choose an increasing sequence \(\{n_k\}_{k=1}^{\infty}\) such that \(\varkappa_{n_k}=o(1/n_k^{\alpha})\), \(k\to\infty\). This means that \(p_{n_k},q_{n_k}=o(1/n_k^{\alpha/2})\). Thus,

$$d_{n_k}(\lambda)=\frac{\lambda+2n_k^{\alpha}-q_{n_k}}{2n_k^{\alpha}+2p_{n_k}} =1+\frac{\lambda}{2n_k^{\alpha}}+o\bigg(\frac1{n_k^{3\alpha/2}}\bigg),\nonumber$$
$$\sqrt{1-d_{n_k}^2(\lambda)}=\frac{\sqrt{-\lambda}}{n_k^{\alpha/2}}\bigg(1+o\bigg(\frac1{n_k^{\alpha/2}}\bigg)\bigg),$$
(4.49)
$$z_{n_k}(\lambda)=1-\frac{i\sqrt{-\lambda}}{n_k^{\alpha/2}}+o\bigg(\frac1{n_k^{\alpha/2}}\bigg),$$
(4.50)
$$h^{\pm}_{n_k+1}(\lambda)=1\pm\frac{i\sqrt{-\lambda}}{n_k^{\alpha/2}}+o\bigg(\frac1{n_k^{\alpha/2}}\bigg).$$
(4.51)

The remainder \(o\)-terms are uniform in \(\lambda\in[-R,-r]\). This also gives \(b_{n_k}+2a_{n_k}\to0\) as \(k\to\infty\) (while \(b_n-2a_n\to-\infty\) as \(n\to\infty\) holds for the whole sequence). Thus, we can find \(K\in\mathbb N\) such that, for \(k\geqslant K\), the inclusion \([-R,-r]\subset(b_{n_k}-2a_{n_k},b_{n_k}+2a_{n_k})\) holds and (4.48) is true for a. e. \(\lambda\in[-R,-r]\) and \(n=n_k\). Further, on \([-R,-r]\) we have

$$P_{n+1}-z_nP_n=\Psi(u^+_{n+1}-z_nu^+_n)+\overline{\Psi}(u^-_{n+1}-z_nu^-_n).$$

Using (4.45), we obtain

$$\begin{aligned} \, u^+_{n+1}-z_nu^+_n&=\begin{pmatrix} -z_n & 1 \\ \end{pmatrix} \begin{pmatrix} u^+_n\\ u^+_{n+1} \end{pmatrix}\\ &=\begin{pmatrix} -z_n & 1 \\ \end{pmatrix} \bigg(\prod_{l=N_0}^n\overline{\eta^-_l}\bigg) \begin{pmatrix} 1 & 1 \\ h^+_{n+1} & h^-_{n+1} \end{pmatrix} (\vec e_++o(1))\\ &=\bigg(\prod_{l=N_0}^n\overline{\eta^-_l}\bigg) \begin{pmatrix} h^+_{n+1}-z_n& h^-_{n+1}-z_n \end{pmatrix} \begin{pmatrix} 1+o(1) \\ o(1) \end{pmatrix} \end{aligned}$$

and, similarly,

$$u^-_{n+1}-z_nu^-_n=\left(\prod_{l=N_0}^n\eta^-_l\right)\begin{pmatrix} h^+_{n+1}-z_n& h^-_{n+1}-z_n \end{pmatrix} \begin{pmatrix} o(1) \\ 1+o(1) \end{pmatrix}.$$

From (4.50) and (4.51) we have

$$\begin{gathered} \, u^+_{n_k+1}-z_{n_k}u^+_{n_k}= \bigg(\prod_{l=N_0}^{n_k}\overline{\eta^-_l}\bigg)(h^+_{n_k+1}-z_{n_k})(1+o(1)),\\ u^-_{n_k+1}-z_{n_k}u^-_{n_k}= \bigg(\prod_{l=N_0}^{n_k}\eta^-_l\bigg)o\bigg(\frac1{n_k^{\alpha/2}}\bigg) \end{gathered}$$

as \(k\to\infty\), and in view of (4.8) all this implies

$$\begin{gathered} \, |u^+_{n_k+1}(\lambda)-z_{n_k}(\lambda)u^+_{n_k}(\lambda)|=\frac{2\sqrt{-\lambda}H(\lambda)+o(1)}{n_k^{3\alpha/4}},\\ |u^-_{n_k+1}(\lambda)-z_{n_k}(\lambda)u^-_{n_k}(\lambda)|=o\bigg(\frac1{n_k^{3\alpha/4}}\bigg). \end{gathered}$$

Therefore,

$$|P_{n_k+1}(\lambda)-z_{n_k}(\lambda)P_{n_k}(\lambda)|=\frac{2\sqrt{-\lambda}|\Psi(\lambda)|H(\lambda)+o(1)}{n_k^{3\alpha/4}},\qquad k\to\infty,$$

uniformly in \(\lambda\in[-R,-r]\). Here we write only the asymptotics of the absolute values because, firstly, this is exactly what we need to proceed and, secondly, the form of the phase depends on \(\alpha\) and cannot be written explicitly for all \(\alpha\in(0,1)\); see Remark 4.5. Since \(a_n=n^{\alpha}(1+o(1))\) due to (4.2) and (4.4), this, together with (4.49), is sufficient for passing to the limit for the subsequence \(\rho'_{n_k}\) by using (4.48):

$$\rho'_{n_k}(\lambda)=\frac{\sqrt{1-d_{n_k}^2(\lambda)}}{\pi a_{n_k}|P_{n_k+1}(\lambda)-z_{n_k}(\lambda)P_{n_k}(\lambda)|^2}\to\frac1{4\pi\sqrt{-\lambda}\,|\Psi(\lambda)|^2H^2(\lambda)},$$

and this limit is uniform in \(\lambda\in[-R,-r]\). By Proposition 3.3 the spectral measure \(\rho\) is absolutely continuous on \((-R,-r)\) and

$$\rho'(\lambda)=\frac1{4\pi\sqrt{-\lambda}\,|\Psi(\lambda)|^2H^2(\lambda)}=\frac{\sqrt{-\lambda}\,H^2(\lambda)}{\pi|u^-_0(\lambda)|^2}\quad\text{for a.\,e. }\,\lambda\in(-R,-r),$$

which finishes the proof. \(\Box\)