1 Introduction

In this paper, we are interested in the polynomials \(P_n\) that are orthogonal with respect to the weight function \(J_{\nu }\) on \([0,\infty )\), where \(J_{\nu }\) is the Bessel function of order \(\nu \ge 0\). The Bessel function is oscillatory with an amplitude that decays like \(\mathcal {O}(x^{-1/2})\) as \(x \rightarrow \infty \), and therefore the moments

$$\begin{aligned} \int _0^{\infty } x^j J_{\nu }(x) {\text {d}}x \end{aligned}$$

do not exist. It follows that the polynomials \(P_n\) cannot be defined by the usual orthogonality property

$$\begin{aligned} \int _0^\infty P_n(x) x^j J_\nu (x) {\text {d}}x =0, \quad j=0,1,\ldots ,n-1. \end{aligned}$$

Asheim and Huybrechs [1] introduced the polynomials \(P_n\) via a regularization of the weight with an exponential factor. For each \(s > 0\), they consider the monic polynomial \(P_n(x;s)\) of degree n that is orthogonal with respect to the weight function \(J_{\nu }(x) {\text {e}}^{-sx}\), in the following sense:

$$\begin{aligned} \int _0^\infty P_n(x;s) x^j J_\nu (x) {\text {e}}^{-sx}{\text {d}}x=0, \quad j=0,1,\ldots ,n-1, \end{aligned}$$
(1.1)

and they take the limit

$$\begin{aligned} P_n(x) = \lim _{s \rightarrow 0+} P_n(x; s), \end{aligned}$$
(1.2)

provided that the limit exists. Since the weight function \(J_{\nu }(x){\text {e}}^{-sx}\) changes sign on the positive real axis, there is actually no guarantee for existence or uniqueness of \(P_n(x;s)\). Therefore, for the limit (1.2), we also have to assume that \(P_n(x;s)\) exists and is unique for n large enough.

The polynomials \(P_n\) can alternatively be defined by the moments, since the limiting moments for the Bessel function of order \(\nu \ge 0\) are known, namely

$$\begin{aligned} m_j := \lim _{s \rightarrow 0+} \int _0^{\infty } x^j J_{\nu }(x) {\text {e}}^{-sx} {\text {d}}x = 2^{j} \frac{\varGamma \left( \frac{1+\nu +j}{2}\right) }{\varGamma \left( \frac{1+\nu -j}{2}\right) }, \end{aligned}$$

see [1, section 3.4]. Thus we have the determinantal formula (which is familiar from the general theory of orthogonal polynomials)

$$\begin{aligned} P_n(x) = \frac{1}{\varDelta _n} \begin{vmatrix} m_0&\quad m_1&\quad \cdots&\quad m_{n-1}&\quad m_n \\ m_1&\quad m_2&\quad \cdots&\quad m_{n}&\quad m_{n+1} \\ \vdots&\quad \vdots&\quad \ddots&\quad \vdots&\quad \vdots \\ m_{n-1}&\quad m_{n}&\quad \cdots&\quad m_{2n-2}&\quad m_{2n-1} \\ 1&\quad x&\quad \cdots&\quad x^{n-1}&\quad x^n \end{vmatrix} \end{aligned}$$
(1.3)

with a Hankel determinant \(\varDelta _n = \det \left[ m_{i+j} \right] _{i,j=0}^{n-1}\). The polynomial \(P_n\) thus exists if and only if \(\varDelta _n \ne 0\).
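The determinantal formula (1.3) is convenient for numerical experiments with these polynomials. The following is a minimal computational sketch (ours, not from [1]) in Python with the mpmath library; the degree \(n=20\), the order \(\nu =0.25\), and the working precision are illustrative choices, and extended precision is essential because the Hankel systems are severely ill-conditioned.

from mpmath import mp, mpf, gamma, rgamma, matrix, lu_solve, polyroots

mp.dps = 120  # extended precision; increase for larger n

def moment(j, nu):
    # m_j = 2^j Gamma((1+nu+j)/2) / Gamma((1+nu-j)/2); the reciprocal gamma
    # function rgamma is entire, so poles of the denominator (where m_j
    # vanishes) are handled automatically
    return mpf(2)**j * gamma((1 + nu + j) / 2) * rgamma((1 + nu - j) / 2)

def monic_orthogonal(n, nu):
    # solve sum_k c_k m_{j+k} = -m_{j+n}, j = 0, ..., n-1, for the monic P_n
    m = [moment(j, nu) for j in range(2 * n)]
    H = matrix([[m[i + j] for j in range(n)] for i in range(n)])  # Hankel matrix
    c = lu_solve(H, matrix([-m[i + n] for i in range(n)]))
    return [mpf(1)] + [c[n - 1 - k] for k in range(n)]            # highest power first

nu, n = mpf('0.25'), 20
zeros = polyroots(monic_orthogonal(n, nu), maxsteps=100, extraprec=200)
print(sorted(float(z.real) for z in zeros))

Setting nu = mpf(0) in this sketch reproduces purely imaginary zeros, in agreement with [1, Theorem 3.5], while for \(\nu = 0.25\) most of the printed real parts cluster roughly around \(\frac{\nu \pi }{2} \approx 0.3927\), which is the behavior studied in this paper.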

Asheim and Huybrechs [1] analyze Gaussian quadrature rules with oscillatory weight functions, such as complex exponentials, Airy functions, and Bessel functions. The nodes for the Gaussian quadrature rule are the zeros of the orthogonal polynomials. Since the weight is not real and positive on the interval of orthogonality, there is a problem of existence and uniqueness of the orthogonal polynomials. In addition, even when the orthogonal polynomial exists, its zeros may not be real, and they may distribute themselves on some curve or union of curves in the complex plane as the degree tends to infinity. Examples of this kind of behavior are known in the literature, for instance with Laguerre or Jacobi polynomials with nonstandard parameters, see [2, 14, 16], and for complex exponentials [4].

In the present case, with orthogonality defined as in (1.1)–(1.2), it was shown in [1, Theorem 3.5] that the zeros of \(P_n\) are on the imaginary axis in the case \(\nu = 0\) with n even. Namely, if \(t_1, \ldots , t_{n/2}\) are the zeros of the orthogonal polynomial of degree \(n/2\) (where n is even) with respect to the positive weight \(K_0(\sqrt{t}) t^{-1/2}\) on \([0,\infty )\), then the zeros of \(P_n\) are \(\pm i \sqrt{t_1}, \ldots , \pm i \sqrt{t_{n/2}}\). Here \(K_0\) is the modified Bessel function of the second kind.

For \(\nu > 0\), the zeros of \(P_n\) are not on the imaginary axis, as is clear from the illustrations given in [1], see also Figs. 1 and 2. The computations have been carried out in Maple, using extended precision. From these numerical experiments Asheim and Huybrechs [1] concluded that the zeros seem to cluster along the vertical line \({{\mathrm{{\text {Re}}\,}}}z = \frac{\nu \pi }{2}\). More precisely, for \(\nu \le \frac{1}{2}\), one sees in Fig. 1 that the vast majority of zeros are near a vertical line, which is indeed at \({{\mathrm{{\text {Re}}\,}}}z = \frac{\nu \pi }{2}\).

Fig. 1

Plot of the zeros of the polynomials \(P_n\) for \(n=200\) and \(\nu =0.25\) (left), \(\nu =0.5\) (right)

Fig. 2

Plot of the zeros of the polynomials \(P_n\) for \(n=200\) and \(\nu =0.8\) (left), \(\nu =1.3\) (right)

For \(\nu > \frac{1}{2}\), one sees in Fig. 2 that the zeros with large imaginary part are close to the vertical line \({{\mathrm{{\text {Re}}\,}}}z = \frac{\nu \pi }{2}\), although they are not as close to the vertical line as the zeros in Fig. 1.

We were intrigued by these figures, and the aim of this paper is to give a partial explanation of the observed behavior of zeros. We are able to analyze the polynomials \(P_n\) when \(0 \le \nu \le \frac{1}{2}\) in the large n limit by means of a Riemann–Hilbert analysis. The result is that we indeed find that the real parts of most of the zeros tend to \(\frac{\nu \pi }{2}\) as \(n \rightarrow \infty \).

We are not able to handle the case \(\nu > \frac{1}{2}\), since in this case our method to construct a local parametrix at the origin fails. This difficulty may very well be related to the different behavior of the zeros in the case \(\nu > \frac{1}{2}\). It would be very interesting to analyze this case as well. From the figures, it seems that there is a limiting curve for the scaled zeros, if we divide the imaginary parts of the zeros by n and keep the real parts fixed. This limiting curve is a vertical line segment if \(\nu \le \frac{1}{2}\) (this will follow from our results), but we do not know the nature of this curve if \(\nu > \frac{1}{2}\).

2 Statement of Main Results

2.1 Convergence of Zeros

Our first result is about the weak limit of zeros.

Theorem 2.1

Let \(0 < \nu \le \frac{1}{2}\). Then the polynomials \(P_n\) exist for n large enough. In addition, the zeros of \(P_n(in \pi z)\) all tend to the interval \([-1,1]\) and have the limiting density

$$\begin{aligned} \psi (x)=\frac{1}{\pi }\log \frac{1+\sqrt{1-x^2}}{|x|}, \quad x\in [-1,1]. \end{aligned}$$
(2.1)

The convergence of zeros to the limiting density (2.1) is in the sense of weak convergence of normalized zero counting measures. This means that if \(z_{1,n}, \ldots , z_{n,n}\) denote the n zeros of \(P_n\), then

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n} \sum _{j=1}^n \delta _{\frac{z_{j,n}}{i \pi n}} = \psi (x) {\text {d}}x \end{aligned}$$

in the sense of weak-star convergence of probability measures. Equivalently, we have

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n} \sum _{j=1}^n f\left( \frac{z_{j,n}}{i \pi n}\right) = \int _{-1}^1 f(x) \psi (x) {\text {d}}x \end{aligned}$$

for every function f that is defined and continuous in a neighborhood of \([-1,1]\) in the complex plane.
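As a quick sanity check (ours; any quadrature routine that tolerates the logarithmic singularity at the origin would do), one can confirm numerically that (2.1) defines a probability density:

from mpmath import quad, log, sqrt, pi

# psi from (2.1); the singularity at x = 0 is logarithmic, hence integrable
psi = lambda x: log((1 + sqrt(1 - x**2)) / abs(x)) / pi
print(quad(psi, [-1, 0, 1]))  # 1.0 to working precision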

The weak limit of zeros, if we rescale them by a factor \(i \pi n\), exists and does not depend on the value of \(\nu \). Theorem 2.1 is known to hold for \(\nu =0\), and we believe that it also holds true for \(\nu > \frac{1}{2}\).

Regarding the real parts of the zeros of \(P_n\) as \(n\rightarrow \infty \), we have the following result.

Theorem 2.2

Let \(0<\nu \le 1/2\), and let \(\delta >0\) be fixed. Then there exist \(n_0\in \mathbb {N}\) and \(C > 0\) such that for \(n\ge n_0\), every zero \(z_{j,n}\) of \(P_n\) outside the disks \(D(0,n\delta )\) and \(D(\pm n\pi i, n \delta )\) satisfies

$$\begin{aligned} \left| {{\mathrm{{\text {Re}}\,}}}z_{j,n} - \frac{\nu \pi }{2} \right| \le C \epsilon _n, \end{aligned}$$
(2.2)

where

$$\begin{aligned} \epsilon _n = \frac{n^{\nu -1/2}}{(\log n)^{\nu +1/2}}. \end{aligned}$$
(2.3)

Remark 2.3

For each fixed \(\delta > 0\), there are approximately \(\varepsilon n\) zeros of \(P_n\) in the disks \(D(0,n\delta )\) and \(D(\pm n\pi i, n\delta )\) when n is large, where

$$\begin{aligned} \varepsilon = \int _{-1}^{-1+\delta /\pi } \psi (x) {\text {d}}x + \int _{-\delta /\pi }^{\delta /\pi } \psi (x) {\text {d}}x + \int _{1-\delta /\pi }^1 \psi (x) {\text {d}}x. \end{aligned}$$

This is a consequence of the weak convergence of zeros, see Theorem 2.1.

Clearly, \(\varepsilon \rightarrow 0\) as \(\delta \rightarrow 0\), and so it follows from Theorem 2.2, by taking \(\delta \) arbitrarily small, that the real parts of all but o(n) of the zeros tend to \(\frac{\nu \pi }{2}\) as \(n \rightarrow \infty \).

Remark 2.4

We do not have information about the zeros in the disk \(D(0,n \delta )\). In our Riemann–Hilbert analysis we prove the existence of a local parametrix around the origin, but we do not have an explicit construction with special functions. Therefore we cannot analyze the zeros near the origin.

On the other hand, we do have potential access to the extreme zeros in the disks \(D(\pm n \pi i, n \delta )\), since the asymptotics of the polynomials \(P_n(in \pi z)\) is given in terms of Airy functions. From the figures, it seems that the result (2.2) also holds for the extreme zeros, but we omit this asymptotic result, since it does not follow directly from the construction of the local parametrices in this case.

2.2 Orthogonality of \(P_n(in \pi z)\) and Discussion

Theorems 2.1 and 2.2 follow from strong asymptotic formulas for the rescaled polynomials

$$\begin{aligned} \widetilde{P}_n(z) = (in \pi )^{-n} P_n(in\pi z). \end{aligned}$$
(2.4)

These polynomials are orthogonal polynomials on the real line, but with a complex weight function.

Proposition 2.5

Let \(0 \le \nu < 1\). Then the polynomial \(\widetilde{P}_n\), if it exists and is unique, is the monic orthogonal polynomial of degree n for the weight

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{{e}}^{ \nu \pi i/2} K_{\nu }(-n \pi x) &{} \mathrm{{ for}}\; x < 0, \\ \mathrm{{e}}^{- \nu \pi i/2} K_{\nu }(n \pi x) &{} \mathrm{{ for }}\; x > 0, \end{array}\right. } \end{aligned}$$
(2.5)

on the real line. That is,

$$\begin{aligned} \int _{-\infty }^{\infty }\widetilde{P}_n(x) x^j \mathrm{{e}}^{- {{\mathrm{sgn}}}(x) \nu \pi i/2} K_{\nu }(n \pi |x|) \mathrm{{d}}x = 0, \quad j =0,1,\ldots , n-1. \end{aligned}$$
(2.6)

The function \(K_{\nu }\) in (2.5) is the modified Bessel function of the second kind of order \(\nu \). Proposition 2.5 is proved in Sect. 3.3.

Since \(K_{\nu }(x) \sim x^{-\nu }\) as \(x \rightarrow 0\), see for instance [17, 18, 10.30.2], the condition \(\nu < 1\) is necessary for the convergence of the integral (2.6) with \(j=0\). If \(\nu =0\), then (2.5) is the real and positive weight function \(K_{0}(n\pi |x|)\). In that case \(\widetilde{P}_n\) has all its zeros on the real line, and consequently the zeros of \(P_n\) are on the imaginary axis. In this way we recover the result of [1].
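The orthogonality (2.6) can also be checked numerically for small degrees. Below is a self-contained spot-check (ours), combining the moment construction sketched in Sect. 1 with mpmath's besselk; the choices \(\nu = 0.25\) and \(n=6\) are illustrative, and the subdivision point 0 lets the quadrature handle the integrable singularity \(|x|^{-\nu }\) of the weight.

from mpmath import (mp, mpf, pi, besselk, exp, quad, polyval,
                    gamma, rgamma, matrix, lu_solve)

mp.dps = 60
nu, n = mpf('0.25'), 6
m = [mpf(2)**j * gamma((1 + nu + j) / 2) * rgamma((1 + nu - j) / 2) for j in range(2 * n)]
H = matrix([[m[i + j] for j in range(n)] for i in range(n)])
c = lu_solve(H, matrix([-m[i + n] for i in range(n)]))
coeffs = [mpf(1)] + [c[n - 1 - k] for k in range(n)]       # monic P_n, highest power first

Pt = lambda x: polyval(coeffs, 1j * n * pi * x) / (1j * n * pi)**n   # (2.4)
w = lambda x: exp(-1j * nu * pi / 2 * (1 if x > 0 else -1)) * besselk(nu, n * pi * abs(x))
for j in range(n):
    # every integral in (2.6) should vanish up to quadrature error
    print(j, quad(lambda x, j=j: Pt(x) * x**j * w(x), [-mp.inf, 0, mp.inf]))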

If \(\nu = 1/2\), the modified Bessel function reduces to an elementary function and the weight function (2.5) is

$$\begin{aligned} {\left\{ \begin{array}{ll} {\text {e}}^{\pi i/4} (2n |x|)^{-1/2} {\text {e}}^{-n \pi |x|}, &{} \quad x < 0, \\ {\text {e}}^{-\pi i/4} (2n |x|)^{-1/2} {\text {e}}^{- n \pi |x|}, &{} \quad x > 0. \end{array}\right. } \end{aligned}$$
(2.7)

The weight (2.7) has three components:

  • An exponentially varying weight \({\text {e}}^{-n \pi |x|}\) with a potential function \(V(x) = \pi |x|\) that is convex but nonsmooth at the origin.

  • A square root singularity at the origin \(|x|^{-1/2}\).

  • A complex phase factor \({\text {e}}^{\pm \pi i/4}\) with a jump discontinuity at the origin.

The exponentially varying weight determines the limiting density (2.1). Indeed, \(\psi (x) {\text {d}}x\) is the minimizer of the logarithmic energy in the external field \(\pi |x|\) among probability measures on the real line, see [19], and, as is well known, the zeros of orthogonal polynomials with the varying weight function \({\text {e}}^{-n \pi |x|}\) have \(\psi \) as limiting density. This continues to be the case for the weights (2.5), as claimed in Theorem 2.1. A Riemann–Hilbert analysis for the weight \({\text {e}}^{-n \pi |x|}\) and other Freud weights is given in [13].

The square root singularity and the jump discontinuity are known as Fisher–Hartwig singularities in the theory of Toeplitz determinants. There has been much recent progress in the understanding of Toeplitz and Hankel determinants with such singularities [7]. This is also related to the asymptotics of the corresponding orthogonal polynomials, whose local behavior near a Fisher–Hartwig singularity is described with the aid of confluent hypergeometric functions; see the works of Deift et al. [6, 12] and also Foulquié et al. [10, 11].

We face the complication that the Fisher–Hartwig singularity is combined with a logarithmic divergence of the density \(\psi \) at the origin, see (2.1). In our Riemann–Hilbert analysis we were not able to construct a local parametrix with special functions, and we had to resort to an existence proof, in which we used ideas from Kriecherbauer and McLaughlin [13] and Bleher and Bothner [3], although our proof differs at the technical level from both of these papers.

2.3 Asymptotic Behavior

Away from the region where the zeros of \(P_n\) lie, the asymptotic behavior is governed by the g function associated with the limiting density \(\psi \) given by (2.1):

$$\begin{aligned} g(z)=\int _{-1}^1 \log (z-x)\psi (x){\text {d}}x. \end{aligned}$$
(2.8)

The function g is defined and analytic for \(z\in \mathbb {C} \setminus (-\infty ,1]\).

We prove the following asymptotic behavior of \(P_n\) in the region away from the zeros. We continue to use \(\epsilon _n\) as defined in (2.3).

Theorem 2.6

Let \(0<\nu \le 1/2\). Then the polynomial \(P_n\) exists and is unique for sufficiently large n. Moreover, the polynomial \(\widetilde{P}_n\) given by (2.4) has the following behavior as \(n\rightarrow \infty \):

$$\begin{aligned} \widetilde{P}_n(z)=\mathrm{{e}}^{ng(z)} \left( \frac{z(z+(z^2-1)^{1/2})}{2(z^2-1)}\right) ^{1/4}\left( \frac{(z^2-1)^{1/2}-i}{(z^2-1)^{1/2}+i}\right) ^{-\nu /4} \left( 1+\mathcal {O}(\epsilon _n)\right) , \end{aligned}$$
(2.9)

uniformly for z in compact subsets of \(\mathbb {C}\setminus [-1,1]\). Here the branch of the function \((z^2-1)^{1/2}\) is taken which is analytic in \(\mathbb {C}\setminus [-1,1]\) and positive for real \(z > 1\).
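Formula (2.9) can already be compared with the moment construction of Sect. 1 at modest degrees. The following rough check (ours; the test point \(z = 2 + i\), the degree \(n = 20\), and the precision are arbitrary choices) evaluates both sides; since the error term is only \(\mathcal {O}(\epsilon _n)\), the printed ratio should be near 1, but not extremely close to it.

from mpmath import (mp, mpf, mpc, pi, log, sqrt, exp, quad, polyval,
                    gamma, rgamma, matrix, lu_solve)

mp.dps = 200
nu, n = mpf('0.25'), 20
m = [mpf(2)**j * gamma((1 + nu + j) / 2) * rgamma((1 + nu - j) / 2) for j in range(2 * n)]
H = matrix([[m[i + j] for j in range(n)] for i in range(n)])
c = lu_solve(H, matrix([-m[i + n] for i in range(n)]))
coeffs = [mpf(1)] + [c[n - 1 - k] for k in range(n)]

z = mpc(2, 1)
lhs = polyval(coeffs, 1j * n * pi * z) / (1j * n * pi)**n   # P~_n(z) by (2.4)
psi = lambda x: log((1 + sqrt(1 - x**2)) / abs(x)) / pi     # (2.1)
g = quad(lambda x: log(z - x) * psi(x), [-1, 0, 1])         # the g function (2.8)
w = sqrt(z**2 - 1)   # the principal branch realizes the required branch here
rhs = exp(n * g) * (z * (z + w) / (2 * (z**2 - 1)))**mpf('0.25') \
      * ((w - 1j) / (w + 1j))**(-nu / 4)
print(lhs / rhs)     # near 1, up to O(eps_n)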

In a neighborhood of \((-1,1)\), we find oscillatory behavior of the polynomials \(\widetilde{P}_n\) as \(n\rightarrow \infty \). We state the asymptotic formula (2.12) for \({{\mathrm{{\text {Re}}\,}}}z \ge 0\) only. There is an analogous formula for \({{\mathrm{{\text {Re}}\,}}}z < 0\). This follows from the fact that the polynomial \(P_n\) has real coefficients, as all the moments in the determinantal formula (1.3) are real. Thus \(P_n(\overline{z}) = \overline{P_n(z)}\), and so

$$\begin{aligned} \widetilde{P}_n(-\overline{z}) = \overline{\widetilde{P}_n(z)}, \quad z \in \mathbb C. \end{aligned}$$

To describe the behavior near the interval, we need the analytic continuation of the density (2.1), which we also denote by \(\psi \),

$$\begin{aligned} \psi (z) = \frac{1}{\pi } \log \frac{1+ (1-z^2)^{1/2}}{z}, \quad {{\mathrm{{\text {Re}}\,}}}z > 0, \end{aligned}$$
(2.10)

which is defined and analytic in \(\{ z \mid {{\mathrm{{\text {Re}}\,}}}z > 0\} \setminus [1, \infty )\). For \({{\mathrm{{\text {Re}}\,}}}z > 0\) with \(z \not \in [1, \infty )\), we also define

$$\begin{aligned} \theta _n(z) = n \pi \int _z^1 \psi (s) \mathrm{d}s + \frac{1}{4} \arccos z - \frac{\pi }{4}. \end{aligned}$$
(2.11)

Theorem 2.7

Let \(0 < \nu \le 1/2\). There is an open neighborhood E of \((-1,1)\) such that for \(z \in E \setminus \{0\}\) with \({{\mathrm{{\text {Re}}\,}}}z \ge 0\), we have

$$\begin{aligned} \widetilde{P}_n(z) ={}& \frac{z^{1/4} \mathrm{{e}}^{\frac{\nu \pi i}{4}} \mathrm{{e}}^{n \pi z/2}}{2^{1/4} (2e)^n (1-z^2)^{1/4}} \left[ \exp \left( \frac{\nu \pi }{2} \psi (z) + i \theta _n(z)\right) \left( 1 + \mathcal {O}\left( \frac{\log n}{n}\right) \right) \right. \\ &\left. + \exp \left( - \frac{\nu \pi }{2} \psi (z) - i \theta _n(z) \right) \left( 1 + \mathcal {O}\left( \frac{\log n}{n}\right) \right) + \mathcal {O}(\epsilon _n) \right] \end{aligned}$$
(2.12)

as \(n \rightarrow \infty \), with \(\psi \) and \(\theta _n\) given by (2.10) and (2.11). The asymptotic expansion (2.12) is uniform for \(z \in E\) with \({{\mathrm{{\text {Re}}\,}}}z \ge 0\) and \(|z-1| > \delta \), \(|z| > \delta \), for every \(\delta > 0\).

The two terms \(\exp \left( \frac{\nu \pi }{2} \psi (z) + i \theta _n(z)\right) \) and \(\exp \left( - \frac{\nu \pi }{2} \psi (z) - i \theta _n(z)\right) \) in (2.12) describe the oscillatory behavior near the interval as well as the leading order behavior of the zeros. Zeros can only occur where these two terms have comparable absolute value, so that cancellation can take place. When \(\nu = 0\), this happens for real \(z \in E\). However, for \(\nu > 0\), this does not happen for real z, but near the line \({{\mathrm{{\text {Im}}\,}}}z = -\frac{\nu }{2n}\), as we will show in Sect. 4.4. This leads to Theorem 2.2.
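The line \({{\mathrm{{\text {Im}}\,}}}z = -\frac{\nu }{2n}\) can be made plausible by the following heuristic balance (a sketch on our part; the rigorous argument is in Sect. 4.4). The two terms have equal absolute value exactly when

$$\begin{aligned} {{\mathrm{{\text {Re}}\,}}}\left( \frac{\nu \pi }{2} \psi (z) + i \theta _n(z) \right) = 0, \quad \text {that is,} \quad \nu \pi {{\mathrm{{\text {Re}}\,}}}\psi (z) = 2 {{\mathrm{{\text {Im}}\,}}}\theta _n(z). \end{aligned}$$

By (2.11), \(\theta _n'(z) = -n \pi \psi (z) + \mathcal {O}(1)\), so for \(z = x + iy\) with \(x \in (0,1)\) fixed and y small, \({{\mathrm{{\text {Im}}\,}}}\theta _n(x+iy) \approx -n \pi \psi (x) y\), while \({{\mathrm{{\text {Re}}\,}}}\psi (z) \approx \psi (x)\). The balance condition then reads \(\nu \pi \psi (x) \approx -2 n \pi \psi (x) y\), that is, \(y \approx -\frac{\nu }{2n}\).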

2.4 Outline of the Paper

The structure of the rest of the paper is as follows. In Sect. 3, we state the Riemann–Hilbert (RH) problem \(Y^{(s)}\) for \(P_n(x;s)\) with \(s > 0\), and we make an initial transformation

$$\begin{aligned} Y^{(s)} \mapsto X^{(s)}. \end{aligned}$$

In the RH problem for \(X^{(s)}\), we can take the limit \(s \rightarrow 0+\), which leads to an RH problem for X that characterizes the polynomial \(P_n(x)\). Then we carry out the further transformations

$$\begin{aligned} X \mapsto U \mapsto T \mapsto S \mapsto Q \mapsto R \end{aligned}$$

of the Deift–Zhou nonlinear steepest descent method [5, 8]. The step \(X\mapsto U\) is a rotation and scaling that translates the problem to the interval \([-1,1]\). This leads to the polynomials \(\widetilde{P}_n\) and the proof of Proposition 2.5. The normalization at \(\infty \) in the \(U\mapsto T\) step is carried out using an equilibrium problem with a Freud weight \(w(x)={\text {e}}^{-n V(x)}\), where \(V(x)=\pi |x|\) is the pointwise limit as \(n\rightarrow \infty \) of the varying potential

$$\begin{aligned} V_n(x)=-\frac{1}{n}\log K_{\nu }(n\pi |x|). \end{aligned}$$
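A quick numerical illustration (ours) of this pointwise limit, based on mpmath's besselk and the asymptotics (3.9); the point \(x = 0.7\) and the values of n are arbitrary:

from mpmath import besselk, log, pi, mpf

nu, x = mpf('0.25'), mpf('0.7')
for n in (10, 100, 1000):
    # V_n(x) = -(1/n) log K_nu(n pi x) approaches pi*x = 2.19911...
    print(n, -log(besselk(nu, n * pi * x)) / n)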

The construction of the global parametrix N on the interval \([-1,1]\) involves two Szegő functions \(D_1(z)\) and \(D_2(z)\) that correspond respectively to an algebraic singularity of the weight function at the origin and to a complex phase factor. The local parametrices near the endpoints \(\pm 1\) involve Airy functions, since the density \(\psi (x)\) in (2.1) behaves like a square root in a neighborhood of these endpoints. The main difficulty of the analysis is the construction of a local parametrix in a neighborhood of the origin, and the reason is the lack of analyticity of \(V_n(x)\) in that neighborhood. In this paper, we reduce the jump matrices in that local analysis to almost constant ones in a disk around 0 and then use a small norm argument in \(L^2\cap L^{\infty }\) to prove existence of a solution to this local RH problem. In this respect, the analysis is similar to the one presented by Kriecherbauer and McLaughlin [13]. We also observe that the same limiting potential V(x) appears in the work of Bleher and Bothner in [3]. Another example of a nonanalytic weight function was considered in the work of Foulquié Moreno et al., see [10, 11], although in that case the local parametrix at the origin is explicitly given in terms of confluent hypergeometric functions.

Finally, in Sect. 4 we follow the transformations both outside and inside the lens, but away from the origin, to get the asymptotic information about \(P_n(z)\) and its zeros. This proves Theorems 2.6 and 2.7. Theorem 2.1 follows from Theorem 2.6, and Theorem 2.2 is a consequence of Theorem 2.7.

3 Riemann–Hilbert Problem

3.1 RH Problem for Polynomials \(P_n(x;s)\)

We let \(\nu > 0\) and \(s > 0\). Orthogonal polynomials are characterized by a matrix valued Riemann–Hilbert problem as was first shown by Fokas et al. [9], see also [5]. This characterization does not use the fact that the orthogonality weight is nonnegative, and it therefore also applies to oscillatory weights. Thus the polynomial \(P_n(x;s)\) satisfying (1.1) is characterized by the following Riemann–Hilbert problem:

RH problem 3.1

\(Y^{(s)} :\mathbb {C}\setminus [0,\infty ) \rightarrow \mathbb {C}^{2\times 2}\) is a \(2 \times 2\) matrix valued function that satisfies:

  1.

    \(Y^{(s)}\) is analytic in \(\mathbb {C}\setminus [0,\infty )\).

  2.

    \(Y^{(s)}\) satisfies the jump condition

    $$\begin{aligned} Y^{(s)}_{+}(x)= Y^{(s)}_{-}(x) \begin{pmatrix} 1 &{} J_{\nu }(x){\text {e}}^{-sx} \\ 0 &{} 1 \end{pmatrix} \quad \text {on } (0,\infty ). \end{aligned}$$
  3.

    As \(z \rightarrow \infty \),

    $$\begin{aligned} Y^{(s)}(z)=(I+\mathcal {O}(1/z))\begin{pmatrix} z^{n} &{} 0 \\ 0 &{} z^{-n} \end{pmatrix}, \end{aligned}$$
    (3.1)

    where I denotes the \(2\times 2\) identity matrix.

  4.

    \(Y^{(s)}(z)\) remains bounded as \(z \rightarrow 0\).

The polynomial \(P_n(x;s)\) exists and is unique if and only if the RH problem has a unique solution. In that case, we have

$$\begin{aligned} P_n(x;s) = Y^{(s)}_{11}(x). \end{aligned}$$
(3.2)

3.2 First Transformation

In the first transformation, we use the following connection formula between \(J_{\nu }\) and the modified Bessel function \(K_{\nu }\) of the second kind:

$$\begin{aligned} J_\nu (z)=\frac{1}{\pi i}\left( {\text {e}}^{-\frac{\nu \pi i}{2}}K_\nu (-iz)-{\text {e}}^{\frac{\nu \pi i}{2}}K_\nu (iz)\right) , \quad |\arg z|\le \frac{\pi }{2}, \end{aligned}$$
(3.3)

see for instance [17, 18, formula 10.27.9]. Alternatively, the Bessel function can be written in terms of Hankel functions as in [17, 18, formula 10.4.4].

The formula (3.3) leads to the following factorization of the jump matrix:

$$\begin{aligned} \begin{pmatrix} 1 &{}\quad J_{\nu }(x){\text {e}}^{-sx} \\ 0 &{}\quad 1 \end{pmatrix} = \begin{pmatrix} 1 &{}\quad -\frac{{\text {e}}^{\frac{\nu \pi i}{2}}}{\pi i}K_\nu (ix){\text {e}}^{-sx} \\ 0 &{}\quad 1 \end{pmatrix} \begin{pmatrix} 1 &{}\quad \frac{{\text {e}}^{-\frac{\nu \pi i}{2}}}{\pi i}K_\nu (-ix){\text {e}}^{-sx} \\ 0 &{}\quad 1 \end{pmatrix}. \end{aligned}$$
(3.4)

We define the new matrix valued function \(X^{(s)}\) by

$$\begin{aligned} X^{(s)}(z)={\left\{ \begin{array}{ll} \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad (\pi i)^{-1} \end{pmatrix} Y^{(s)}(z) \begin{pmatrix} 1 &{}\quad - {\text {e}}^{-\frac{\nu \pi i}{2}} K_\nu (-iz){\text {e}}^{-sz} \\ 0 &{}\quad \pi i \end{pmatrix}, &{} 0 <\arg z < \frac{\pi }{2}, \\ \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad (\pi i)^{-1} \end{pmatrix} Y^{(s)}(z) \begin{pmatrix} 1 &{} \quad - {\text {e}}^{\frac{\nu \pi i}{2}} K_\nu (iz){\text {e}}^{-sz} \\ 0 &{}\quad \pi i \end{pmatrix}, &{}-\frac{\pi }{2} <\arg z < 0, \\ \begin{pmatrix} 1 &{} \quad 0 \\ 0 &{}\quad (\pi i)^{-1} \end{pmatrix} Y^{(s)}(z) \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad \pi i \end{pmatrix},&\text {elsewhere}. \end{array}\right. } \end{aligned}$$
(3.5)

Then \(X^{(s)}\) has an analytic continuation across the positive real axis, due to the factorization (3.4). Thus \(X^{(s)}\) is defined and analytic in the complex plane except for the imaginary axis, and it satisfies the following RH problem:

RH problem 3.2

  1.

    \(X^{(s)}\) is analytic in \(\mathbb {C}\setminus i\mathbb {R}\).

  2.

    \(X^{(s)}\) satisfies the jump condition (the imaginary axis is oriented from bottom to top)

    $$\begin{aligned} X^{(s)}_{+}(x)=X^{(s)}_{-}(x) {\left\{ \begin{array}{ll} \begin{pmatrix} 1 &{}\quad {\text {e}}^{-\frac{\nu \pi i}{2}} K_\nu (-ix) {\text {e}}^{-sx} \\ 0 &{}\quad 1 \end{pmatrix} &{}\text { for } x \in (0,+i\infty ),\\ \begin{pmatrix} 1 &{}\quad {\text {e}}^{\frac{\nu \pi i}{2}} K_\nu (ix) {\text {e}}^{-sx} \\ 0 &{}\quad 1 \end{pmatrix}&\text { for }x \in (-i\infty ,0). \end{array}\right. } \end{aligned}$$
    (3.6)
  3.

    As \(z\rightarrow \infty \),

    $$\begin{aligned} X^{(s)}(z)=(I+\mathcal {O}(1/z)) \begin{pmatrix} z^{n} &{}\quad 0 \\ 0 &{}\quad z^{-n} \end{pmatrix}. \end{aligned}$$
    (3.7)
  4.

    \(X^{(s)}(z)\) remains bounded as \(z \rightarrow 0\) with \({{\mathrm{{\text {Re}}\,}}}z < 0\), and

    $$\begin{aligned} X^{(s)}(z)=\begin{pmatrix} \mathcal {O}(1) &{} \mathcal {O}(z^{-\nu }) \\ \mathcal {O}(1) &{} \mathcal {O}(z^{-\nu }) \end{pmatrix} \quad \text { as } z \rightarrow 0 \text { with } {{\mathrm{{\text {Re}}\,}}}z > 0. \end{aligned}$$
    (3.8)

The asymptotic condition (3.7) follows from (3.1), the definition (3.5), and the fact that

$$\begin{aligned} K_{\nu }(z)=\left( \frac{\pi }{2z}\right) ^{1/2} {\text {e}}^{-z}\left( 1+\mathcal {O}(1/z)\right) \quad \text {as } z\rightarrow \infty , \quad |\arg z|<\frac{3\pi }{2}, \end{aligned}$$
(3.9)

see [17, 18, formula 10.40.2]. The \(\mathcal {O}(z^{-\nu })\) terms in (3.8) appear because of the behavior

$$\begin{aligned} K_{\nu }(z) \sim \frac{\varGamma (\nu )}{2^{1-\nu }} z^{-\nu } \end{aligned}$$
(3.10)

as \(z \rightarrow 0\) for \(\nu >0\), see for instance [17, 18, formula 10.30.2]. Note that by (3.2) and (3.5),

$$\begin{aligned} P_n(x;s) = X^{(s)}_{11}(x). \end{aligned}$$
(3.11)

In the RH problem for \(X^{(s)}\), we can take \(s \rightarrow 0+\). Indeed, after setting \(s=0\) in (3.6), the off-diagonal entries in the jump matrices still tend to 0 as \(|x| \rightarrow \infty \) because of (3.9). We set \(s=0\), and we consider the following RH problem:

RH problem 3.3

We seek a function \(X:\mathbb {C}\setminus i\mathbb {R} \rightarrow \mathbb {C}^{2\times 2}\) satisfying:

  1.

    X is analytic in \(\mathbb {C}\setminus i\mathbb {R}\).

  2.

    X satisfies the jump condition (the imaginary axis is oriented from bottom to top)

    $$\begin{aligned} X_{+}(x)=X_{-}(x) {\left\{ \begin{array}{ll} \begin{pmatrix} 1 &{}\quad {\text {e}}^{-\frac{\nu \pi i}{2}} K_\nu (-ix) \\ 0 &{}\quad 1 \end{pmatrix} &{} \text { for } x \in (0,+i\infty ),\\ \begin{pmatrix} 1 &{}\quad {\text {e}}^{\frac{\nu \pi i}{2}} K_\nu (ix) \\ 0 &{} \quad 1 \end{pmatrix}&\text { for }x \in (-i\infty ,0). \end{array}\right. } \end{aligned}$$
  3.

    As \(z\rightarrow \infty \),

    $$\begin{aligned} X(z)=(I+\mathcal {O}(1/z)) \begin{pmatrix} z^{n} &{}\quad 0 \\ 0 &{}\quad z^{-n} \end{pmatrix}. \end{aligned}$$
  4.

    X(z) remains bounded as \(z \rightarrow 0\) with \({{\mathrm{{\text {Re}}\,}}}z < 0\), and

    $$\begin{aligned} X(z)=\begin{pmatrix} \mathcal {O}(1) &{} \mathcal {O}(z^{-\nu }) \\ \mathcal {O}(1) &{} \mathcal {O}(z^{-\nu }) \end{pmatrix} \quad \text { as } z \rightarrow 0 \text { with } {{\mathrm{{\text {Re}}\,}}}z > 0. \end{aligned}$$

If there is a unique solution, then the 11-entry is a monic polynomial of degree n, say \(P_n\), and

$$\begin{aligned} P_n(x) = X_{11}(x) = \lim _{s \rightarrow 0+} X^{(s)}_{11}(x) = \lim _{s\rightarrow 0+} P_n(x;s), \end{aligned}$$
(3.12)

see (3.11). Thus \(P_n\) is the polynomial that we are interested in.

Remark 3.4

Observe that the jump matrices in (3.6) tend to the jump matrices in the RH problem for X as \(s \rightarrow 0+\), but not in a uniform way, due to the fact that the jump matrices are unbounded near \(x=0\). Therefore we cannot claim that \(X^{(s)} \rightarrow X\) uniformly as \(s \rightarrow 0+\).

To overcome this problem, we modify \(X^{(s)}\) as follows. Define, for a given \(\delta > 0\),

$$\begin{aligned} X^{(s,\delta )}(z)= {\left\{ \begin{array}{ll} X^{(s)}(z) \begin{pmatrix} 1 &{} \quad {\text {e}}^{-\frac{\nu \pi i}{2}} K_\nu (-iz) {\text {e}}^{-sz} \\ 0 &{}\quad 1 \end{pmatrix} &{} \text { for } 0 < {{\mathrm{{\text {Re}}\,}}}z < \delta , \, {{\mathrm{{\text {Im}}\,}}}z > 0, \\ X^{(s)}(z) \begin{pmatrix} 1 &{}\quad {\text {e}}^{\frac{\nu \pi i}{2}} K_\nu (iz) {\text {e}}^{-sz} \\ 0 &{} \quad 1 \end{pmatrix} &{} \text { for } 0 < {{\mathrm{{\text {Re}}\,}}}z < \delta , \, {{\mathrm{{\text {Im}}\,}}}z < 0, \\ X^{(s)}(z) &{} \text {elsewhere.} \end{array}\right. } \end{aligned}$$

Then \(X^{(s, \delta )}\) has jumps on \({{\mathrm{{\text {Re}}\,}}}z = \delta \) with jump matrices as given in (3.6), but moved from the imaginary axis to \({{\mathrm{{\text {Re}}\,}}}z = \delta \). In addition, there is a jump on the interval \((0,\delta )\),

$$\begin{aligned} X^{(s,\delta )}_+(x) = X^{(s,\delta )}_-(x) \begin{pmatrix} 1 &{}\quad \pi i J_{\nu }(x) {\text {e}}^{-sx} \\ 0 &{}\quad 1 \end{pmatrix}, \quad 0 < x < \delta . \end{aligned}$$

The jump matrices for \(X^{(s,\delta )}\) have a limit in \(L^2 \cap L^{\infty }\) as \(s \rightarrow 0+\). By standard arguments [5], we have the convergence of the solutions of the corresponding RH problems. The limit \(\lim \limits _{s \rightarrow 0+} X^{(s,\delta )}\) is then a modification of X in the same way that \(X^{(s, \delta )}\) is a modification of \(X^{(s)}\). The modifications do not affect the 11-entry, and (3.12) follows.

3.3 Second Transformation

We introduce a scaling and rotation \(z\mapsto i\pi n z\), and our main interest is in the rescaled polynomials \(P_n(in\pi z)\) whose zeros will accumulate on the interval \([-1,1]\) as \(n\rightarrow \infty \). More precisely, we define U as

$$\begin{aligned} U(z)=\begin{pmatrix} (in\pi )^{-n} &{} 0 \\ 0 &{} (in\pi )^{n} \end{pmatrix} X(in\pi z). \end{aligned}$$
(3.13)

From (3.13) and the RH problem 3.3, we immediately obtain the following RH problem for U(z):

RH problem 3.5

  1.

    U is analytic in \(\mathbb {C}\setminus \mathbb {R}\).

  2.

    U satisfies the jump condition

    $$\begin{aligned} U_{+}(x)=U_{-}(x) {\left\{ \begin{array}{ll} \begin{pmatrix} 1 &{}\quad {\text {e}}^{\nu \pi i/2} K_{\nu }(n\pi |x|) \\ 0 &{}\quad 1 \end{pmatrix}, \quad x \in (-\infty ,0), \\ \begin{pmatrix} 1 &{}\quad {\text {e}}^{-\nu \pi i/2} K_{\nu }(n\pi |x|) \\ 0 &{}\quad 1 \end{pmatrix}, \quad x \in (0,\infty ). \end{array}\right. } \end{aligned}$$
  3.

    As \(z\rightarrow \infty \),

    $$\begin{aligned} U(z)=(I+\mathcal {O}(1/z)) \begin{pmatrix} z^{n} &{}\quad 0 \\ 0 &{}\quad z^{-n} \end{pmatrix}. \end{aligned}$$
  4.

    U(z) remains bounded as \(z \rightarrow 0 \) with \({{\mathrm{{\text {Im}}\,}}}z > 0\), and

    $$\begin{aligned} U(z)= \begin{pmatrix} \mathcal {O}(1) &{} \mathcal {O}(z^{-\nu }) \\ \mathcal {O}(1) &{} \mathcal {O}(z^{-\nu }) \end{pmatrix} \quad \text { as } z \rightarrow 0 \text { with } {{\mathrm{{\text {Im}}\,}}}z < 0. \end{aligned}$$

Note that by (3.12), (3.13), and (2.4),

$$\begin{aligned} U_{11}(z) = (i n \pi )^{-n} X_{11}(in \pi z) = (in \pi )^{-n} P_{n}(in \pi z) = \widetilde{P}_n(z), \end{aligned}$$
(3.14)

which is a monic polynomial of degree n. The zeros of \(U_{11}(z)\) are obtained from the zeros of \(P_n\) by rotating over 90 degrees in the clockwise direction and dividing by the factor \(\pi n\).

We can now prove Proposition 2.5.

Proof

(Proof of Proposition 2.5) The RH problem for U is the RH problem for orthogonal polynomials on the real line for the varying weight function \({\text {e}}^{\mp \nu \pi i/2} K_{\nu }(n \pi |x|)\) for \(x \in \mathbb R^{\pm }\), see [5, 8, 9]. Because of the \({\text {e}}^{\mp \nu \pi i/2}\) factor, the weight function is not real on the real line, and it has a singularity at the origin because of the behavior (3.10) of the \(K_{\nu }\) function near 0. The singularity is integrable since \(\nu < 1\), and so \(U_{11} = \widetilde{P}_n\) is the monic polynomial of degree n satisfying (2.6). \(\square \)

3.4 Equilibrium Problem and Third Transformation

In order to normalize the RH problem at infinity, we make use of an equilibrium problem with external field \(V(x)=\pi |x|\). The equilibrium measure \(\mu \) minimizes the energy functional

$$\begin{aligned} I(\mu ) = \iint \log \frac{1}{|x-y|} \text {d}\mu (x)\text {d}\mu (y) + \int \pi |x| \text {d}\mu (x) \end{aligned}$$

among all probability measures on \(\mathbb {R}\). The minimizer is supported on \([-1,1]\). It is absolutely continuous with respect to the Lebesgue measure, \(\text {d}\mu (x)=\psi (x){\text {d}}x\), and has density

$$\begin{aligned} \psi (x)=\frac{1}{\pi }\int _{|x|}^1 \frac{1}{\sqrt{s^2-x^2}}\mathrm{d}s, \end{aligned}$$

which corresponds to the case \(\beta =1\) in [13]. The integral can be evaluated explicitly, and it gives the formula (2.1). Note that \(\psi (x)\) grows like a logarithm at \(x = 0\).
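A short numerical confirmation (ours, at the arbitrary test point \(x = 0.3\)) that the integral evaluates to (2.1):

from mpmath import quad, sqrt, log, pi, mpf

x = mpf('0.3')
lhs = quad(lambda s: 1 / sqrt(s**2 - x**2), [x, 1]) / pi   # integral form of psi
print(lhs, log((1 + sqrt(1 - x**2)) / x) / pi)             # both ~ 0.59648...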

The g function is defined in (2.8). The boundary values \(g_+(x)\) and \(g_-(x)\) on the real axis (oriented from left to right) satisfy

$$\begin{aligned} g_+(x)-g_-(x)={\left\{ \begin{array}{ll} 2\pi i,\quad &{}x\le -1, \\ 2\pi i \int _x^1 \psi (s)\mathrm{d}s,\quad &{}-1 < x < 1,\\ 0,\quad &{} x\ge 1. \end{array}\right. } \end{aligned}$$
(3.15)

The Euler–Lagrange equations for the equilibrium problem imply that we have (see, e.g., [5] or [19])

$$\begin{aligned} g_{+}(x)+g_{-}(x)- \pi |x| {\left\{ \begin{array}{ll} = \ell , &{} \quad x\in [-1,1], \\ <\ell , &{} \quad x\in (-\infty ,-1)\cup (1,\infty ), \end{array}\right. } \end{aligned}$$
(3.16)

with the constant \(\ell \) (see Theorem IV.5.1 in [19] or formula (3.5) in [13]):

$$\begin{aligned} \ell = - 2 - 2 \log 2. \end{aligned}$$
(3.17)
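The equality in (3.16) with the constant (3.17) can be verified numerically. The following check (ours) uses that \(g_+(x) + g_-(x) = 2 \int _{-1}^1 \log |x-y| \psi (y) {\text {d}}y\) for \(x \in [-1,1]\); the test point \(x = 0.4\) is arbitrary.

from mpmath import quad, log, sqrt, pi, mpf

psi = lambda y: log((1 + sqrt(1 - y**2)) / abs(y)) / pi   # (2.1)
x = mpf('0.4')
# subdivision points 0 and x isolate the two integrable singularities
val = 2 * quad(lambda y: log(abs(x - y)) * psi(y), [-1, 0, x, 1]) - pi * x
print(val, -2 - 2 * log(2))   # both ~ -3.38629...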

A related function is

$$\begin{aligned} \varphi (z) = g(z) - \frac{V(z)}{2} - \frac{\ell }{2}, \end{aligned}$$
(3.18)

where

$$\begin{aligned} V(z)={\left\{ \begin{array}{ll} \pi z, &{} \quad {{\mathrm{{\text {Re}}\,}}}z >0,\\ -\pi z, &{} \quad {{\mathrm{{\text {Re}}\,}}}z <0. \end{array}\right. } \end{aligned}$$
(3.19)

The \(\varphi \)-function is analytic in \(\mathbb {C}\setminus \left( (-\infty ,1]\cup i\mathbb {R}\right) \). For \(x\in [-1,1]\), we have from the variational equation (3.16):

$$\begin{aligned} \begin{aligned} \varphi _+(x)&= g_+(x)-\frac{V(x)}{2}-\frac{\ell }{2} = \frac{1}{2}(g_+(x)-g_-(x)), \\ \varphi _-(x)&=-\varphi _+(x). \end{aligned} \end{aligned}$$
(3.20)

Thus \(2\varphi \) gives an analytic extension of \(g_+(x)-g_-(x)\) from \([-1,1]\) into the upper half-plane minus the imaginary axis, and of \(g_-(x) - g_+(x)\) into the lower half-plane minus the imaginary axis. Note that \(\varphi _{\pm }(x)\) is purely imaginary on \([-1,1]\) because of (3.15).

On the imaginary axis, the function \(\varphi (z)\) is not analytic because of the discontinuity in V(z). The boundary values of V satisfy

$$\begin{aligned} V_-(z)=V_+(z)+2\pi z, \end{aligned}$$

and as a consequence,

$$\begin{aligned} \varphi _-(z)=\varphi _+(z)-\pi z, \quad z \in i \mathbb R. \end{aligned}$$

Here we take the orientation of the imaginary axis from bottom to top.

Now we are ready for the third transformation of the RH problem, and we define the matrix valued function

$$\begin{aligned} T(z)={\text {e}}^{-n\ell \sigma _3/2} (2n)^{\sigma _3/4} U(z){\text {e}}^{-n(g(z)-\ell /2) \sigma _3} (2n)^{-\sigma _3/4}, \end{aligned}$$
(3.21)

where \(\sigma _3=\begin{pmatrix} 1 &{} 0\\ 0 &{}-1\end{pmatrix}\) is the third Pauli matrix. We also write

$$\begin{aligned} W_n(x)= \sqrt{2n} K_{\nu }(n\pi |x|){\text {e}}^{n\pi |x|}, \quad x \in \mathbb R. \end{aligned}$$
(3.22)

Then from the above definitions and properties and from the RH problem 3.5 for U, we find that T satisfies the following Riemann–Hilbert problem.

RH problem 3.6

  1.

    T is analytic in \(\mathbb {C}\setminus \mathbb {R}\).

  2.

    T satisfies the jump conditions

    $$\begin{aligned} T_{+}(x)=T_{-}(x) {\left\{ \begin{array}{ll} \begin{pmatrix} 1 &{}\quad {\text {e}}^{\nu \pi i/2} W_n(x) {\text {e}}^{2n \varphi _+(x)} \\ 0 &{}\quad 1\end{pmatrix}, \quad x\in (-\infty ,-1),\\ \begin{pmatrix} {\text {e}}^{-2n\varphi _{+}(x)} &{}\quad {\text {e}}^{\nu \pi i/2} W_n(x) \\ 0 &{}\quad {\text {e}}^{-2n\varphi _{-}(x)} \end{pmatrix}, \quad x\in (-1,0),\\ \begin{pmatrix} {\text {e}}^{-2n\varphi _{+}(x)} &{}\quad {\text {e}}^{-\nu \pi i/2} W_n(x) \\ 0 &{}\quad {\text {e}}^{-2n\varphi _{-}(x)} \end{pmatrix}, \quad x\in (0,1),\\ \begin{pmatrix} 1 &{}\quad {\text {e}}^{-\nu \pi i/2} W_n(x) {\text {e}}^{2n \varphi _+(x)} \\ 0 &{}\quad 1\end{pmatrix}, \quad x\in (1,\infty ), \end{array}\right. } \end{aligned}$$

    where \(W_n\) is given in (3.22) and the real axis is again oriented from left to right.

  3.

    As \(z\rightarrow \infty \),

    $$\begin{aligned} T(z)=I+\mathcal {O}(1/z). \end{aligned}$$
  4.

    T(z) remains bounded as \(z \rightarrow 0\) with \({{\mathrm{{\text {Im}}\,}}}z > 0\), and

    $$\begin{aligned} T(z)=\begin{pmatrix} \mathcal {O}(1) &{}\quad \mathcal {O}(z^{-\nu }) \\ \mathcal {O}(1) &{}\quad \mathcal {O}(z^{-\nu }) \end{pmatrix}\quad \text { as } z \rightarrow 0 \text { with } {{\mathrm{{\text {Im}}\,}}}z < 0. \end{aligned}$$
    (3.23)

The off-diagonal elements in the jump matrices on \((-\infty ,-1)\) and \((1,\infty )\) tend to 0 at an exponential rate because of the Euler–Lagrange condition (3.16).

3.5 Fourth Transformation

The jump matrix on the interval \((-1,0)\) has a factorization

$$\begin{aligned}&\begin{pmatrix} {\text {e}}^{-2n\varphi _{+}(x)} &{}\quad {\text {e}}^{\nu \pi i/2} W_n(x) \\ 0 &{}\quad {\text {e}}^{-2n\varphi _{-}(x)}\end{pmatrix} \\&\quad = \begin{pmatrix} 1 &{}\quad 0\\ \frac{{\text {e}}^{-\nu \pi i/ 2}}{W_n(x)}{\text {e}}^{-2n\varphi _-(x)} &{}\quad 1\end{pmatrix} \begin{pmatrix} 0&{}\quad {\text {e}}^{\nu \pi i/2}W_n(x)\\ -\frac{{\text {e}}^{-\nu \pi i/2}}{W_n(x)} &{}\quad 0\end{pmatrix} \begin{pmatrix} 1 &{}\quad 0\\ \frac{{\text {e}}^{-\nu \pi i/2}}{W_n(x)}{\text {e}}^{-2n\varphi _{+}(x)} &{}\quad 1\end{pmatrix}, \end{aligned}$$

while the jump matrix on (0, 1) factorizes as

$$\begin{aligned}&\begin{pmatrix} {\text {e}}^{-2n\varphi _{+}(x)} &{}\quad {\text {e}}^{-\nu \pi i/2}W_n(x) \\ 0 &{}\quad {\text {e}}^{-2n\varphi _{-}(x)}\end{pmatrix} \\&\quad = \begin{pmatrix} 1 &{}\quad 0\\ \frac{{\text {e}}^{\nu \pi i/2}}{W_n(x)} {\text {e}}^{-2n\varphi _{-}(x)}&{}\quad 1\end{pmatrix} \begin{pmatrix} 0&{}\quad {\text {e}}^{-\nu \pi i/2}W_n(x)\\ -\frac{{\text {e}}^{\nu \pi i/2}}{W_n(x)} &{}\quad 0\end{pmatrix} \begin{pmatrix} 1 &{}\quad 0\\ \frac{{\text {e}}^{\nu \pi i/2}}{W_n(x)} {\text {e}}^{-2n\varphi _{+}(x)} &{}\quad 1\end{pmatrix}. \end{aligned}$$

In order to open the lens around \((-1,1)\), we need the analytic extension of the function \(W_n\) from (3.22) to \(\mathbb C \setminus i \mathbb R\), which we also denote by \(W_n\),

$$\begin{aligned} W_n(z)={\left\{ \begin{array}{ll} \sqrt{2n} K_{\nu }(n\pi z){\text {e}}^{n\pi z},&{} \quad {{\mathrm{{\text {Re}}\,}}}z > 0, \\ \sqrt{2n} K_{\nu }(-n\pi z){\text {e}}^{-n\pi z},&{} \quad {{\mathrm{{\text {Re}}\,}}}z < 0. \end{array}\right. } \end{aligned}$$
(3.24)

Note that as \(n \rightarrow \infty \), see (3.9) and (3.24),

$$\begin{aligned} W_n(z) = {\left\{ \begin{array}{ll} z^{-1/2} ( 1 + \mathcal {O}(1/(nz))), &{}\quad {{\mathrm{{\text {Re}}\,}}}z > 0, \\ (-z)^{-1/2}(1 + \mathcal {O}(1/(nz))), &{}\quad {{\mathrm{{\text {Re}}\,}}}z < 0, \end{array}\right. } \end{aligned}$$
(3.25)

which explains the factor \(\sqrt{2n}\) that we introduced in (3.22) and (3.24).
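Here is a small numerical illustration (ours; \(\nu = 0.25\) and \(x = 0.6\) are arbitrary) of the normalization (3.25):

from mpmath import besselk, exp, sqrt, pi, mpf

nu, x = mpf('0.25'), mpf('0.6')
for n in (10, 100, 1000):
    Wn = sqrt(2 * n) * besselk(nu, n * pi * x) * exp(n * pi * x)   # (3.22)
    print(n, Wn)   # approaches x**(-1/2) = 1.29099...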

Next, we fix a number \(\rho >0\), and we open a lens around \([-1,1]\), which defines contours \(\varSigma _j\), \(j=1, \ldots , 4\), and domains \(\varOmega _j\), \(j=1, \ldots , 4\), as indicated in Fig. 3.

Fig. 3

Opening of a lens around \([-1,1]\); the contour \(\varSigma _S\) consists of \(\varSigma _1, \ldots , \varSigma _4\), the segment \((-i\rho ,i\rho )\), and the real line

In the fourth transformation, we define the matrix valued function S(z):

$$\begin{aligned} S(z) = {\left\{ \begin{array}{ll} T(z)\begin{pmatrix} 1 &{}\quad 0\\ -\frac{{\text {e}}^{\nu \pi i/2}}{W_n(z)} {\text {e}}^{-2n\varphi (z)}&{}\quad 1\end{pmatrix} &{}\quad \text {for } z \in \varOmega _1,\\ T(z) \begin{pmatrix} 1 &{}\quad 0\\ -\frac{{\text {e}}^{-\nu \pi i/ 2}}{W_n(z)}{\text {e}}^{-2n\varphi (z)} &{}\quad 1\end{pmatrix} &{}\quad \text {for } z \in \varOmega _2,\\ T(z) \begin{pmatrix} 1 &{}\quad 0\\ \frac{{\text {e}}^{-\nu \pi i/ 2}}{W_n(z)}{\text {e}}^{-2n\varphi (z)} &{}\quad 1\end{pmatrix} &{} \text {for } z \in \varOmega _3,\\ T(z) \begin{pmatrix} 1 &{}\quad 0\\ \frac{{\text {e}}^{\nu \pi i/2}}{W_n(z)} {\text {e}}^{-2n\varphi (z)}&{}\quad 1\end{pmatrix} &{} \text {for } z \in \varOmega _4,\\ T(z) &{}\quad {{\text {elsewhere}}}, \end{array}\right. } \end{aligned}$$
(3.26)

using the analytic extension (3.24) for the function \(W_n(z)\) in each region and \(\varphi (z)\) defined in (3.18).

Remark 3.7

In order to divide by \(W_n(z)\), we need to be careful with possible zeros of this function in the complex plane. By the general theory in [20, §15.7], the modified Bessel function \(K_{\nu }(n\pi z)\) is free from zeros in the half-plane \(|\arg z|\le \tfrac{\pi }{2}\). Using (3.24), we conclude that \(W_n(z)\ne 0\) for all \(z \in \mathbb C \setminus i\mathbb R\).

From the RH problem 3.6 and (3.26), we find that S(z) is the solution of the following RH problem:

RH problem 3.8

  1.

    S is analytic in \(\mathbb {C}\setminus \varSigma _S\), where \(\varSigma _S\) is depicted in Fig. 3.

  2.

    S satisfies the jump conditions \(S_+ = S_- J_S\) where

    $$\begin{aligned} J_S(z) = {\left\{ \begin{array}{ll} \begin{pmatrix} 1 &{}\quad 0\\ \frac{{\text {e}}^{\nu \pi i/2}}{W_n(z)} {\text {e}}^{-2n\varphi (z)}&{}\quad 1\end{pmatrix}, &{} \quad z\in \varSigma _1\cup \varSigma _4,\\ \begin{pmatrix} 1 &{}\quad 0\\ \frac{{\text {e}}^{-\nu \pi i/ 2}}{W_n(z)}{\text {e}}^{-2n\varphi (z)} &{}\quad 1\end{pmatrix}, &{} \quad z\in \varSigma _2\cup \varSigma _3,\\ \begin{pmatrix} 0&{}\quad {\text {e}}^{\nu \pi i/2}W_n(z)\\ -\frac{{\text {e}}^{-\nu \pi i/2}}{W_n(z)} &{} \quad 0\end{pmatrix}, &{} \quad z \in (-1,0),\\ \begin{pmatrix} 0&{}\quad {\text {e}}^{-\nu \pi i/2}W_n(z)\\ -\frac{{\text {e}}^{\nu \pi i/2}}{W_n(z)} &{}\quad 0\end{pmatrix}, &{} \quad z \in (0,1),\\ \begin{pmatrix} 1 &{}\quad {\text {e}}^{\nu \pi i/2}{\text {e}}^{2n \varphi (z)} W_n(z) \\ 0 &{}\quad 1\end{pmatrix}, &{} \quad z\in (-\infty ,-1),\\ \begin{pmatrix} 1 &{}\quad {\text {e}}^{-\nu \pi i/2}{\text {e}}^{2n \varphi (z)} W_n(z) \\ 0 &{}\quad 1\end{pmatrix}, &{} \quad z\in (1,\infty ),\\ \begin{pmatrix} 1 &{} \quad 0\\ j_1(z) &{}\quad 1 \end{pmatrix}, &{} \quad z\in (0, i \rho ),\\ \begin{pmatrix} 1 &{}\quad 0\\ j_2(z) &{}\quad 1 \end{pmatrix},&\quad z \in (-i \rho , 0). \end{array}\right. } \end{aligned}$$
    (3.27)

    Here

    $$\begin{aligned} j_1(z)=\frac{{\text {e}}^{\nu \pi i/2} {\text {e}}^{-2n\varphi _-(z)}}{W_{n,-}(z)}-\frac{{\text {e}}^{-\nu \pi i/2}{\text {e}}^{-2n\varphi _+(z)}}{W_{n,+}(z)}, \quad z \in (0, i \rho ), \end{aligned}$$
    (3.28)

    and

    $$\begin{aligned} j_2(z)=-\frac{{\text {e}}^{\nu \pi i/2} {\text {e}}^{-2n\varphi _-(z)}}{W_{n,-}(z)}+\frac{{\text {e}}^{-\nu \pi i/2}{\text {e}}^{-2n\varphi _+(z)}}{W_{n,+}(z)}, \quad z \in (-i \rho , 0), \end{aligned}$$
    (3.29)

    using the appropriate values of \(\varphi _{\pm }(z)\) and \(W_{n,\pm }(z)\) in each case. The imaginary axis is oriented upwards, and so for \(z \in i\mathbb R\), we have that \(\varphi _+(z)\) and \(W_{n,+}(z)\) (\(\varphi _-(z)\) and \(W_{n,-}(z)\)) denote the limiting value from the left (right) half-plane.

  3.

    As \(z\rightarrow \infty \),

    $$\begin{aligned} S(z)=I+\mathcal {O}(1/z). \end{aligned}$$
  4.

    S(z) remains bounded as \(z \rightarrow 0\) with \({{\mathrm{{\text {Im}}\,}}}z > 0\), and

    $$\begin{aligned} S(z)=\begin{pmatrix} \mathcal {O}(z^{\nu }) &{}\quad \mathcal {O}(z^{-\nu }) \\ \mathcal {O}(z^{\nu }) &{}\quad \mathcal {O}(z^{-\nu }) \end{pmatrix}, \quad \text { as } z \rightarrow 0 \text { with } {{\mathrm{{\text {Im}}\,}}}z < 0. \end{aligned}$$
    (3.30)

Note that as a consequence of the definition of \(\varphi (z)\) in (3.18) and formula (3.20), \({{\mathrm{{\text {Im}}\,}}}\varphi (x)\) is decreasing on \([-1,1]\). Because of the Cauchy–Riemann equations, \({{\mathrm{{\text {Re}}\,}}}\varphi (z)>0\) as we move away from the interval.

We may and do assume that the lens is small enough such that \({{\mathrm{{\text {Re}}\,}}}\varphi (z) > 0\) on the lips of the lens. Then it follows from (3.25) and (3.27) that the jump matrix \(J_S\) on the lips of the lens tends to I at an exponential rate as \(n \rightarrow \infty \), if we stay away from the endpoints \(\pm 1\). Also the jump matrix on \((-\infty ,-1)\) and \((1, \infty )\) tends to the identity matrix. Thus for any \(\delta > 0\), there is a constant \(c > 0\) such that

$$\begin{aligned} J_S(z) = I + \mathcal {O}({\text {e}}^{-cn}), \quad z \in \varSigma _S \setminus ([-1,1] \cup [-i \rho , i\rho ] \cup D(\pm 1, \delta )). \end{aligned}$$
(3.31)

The condition (3.30) needs some explanation, since (3.23) and (3.26) at first sight lead to the behavior \(S(z)=\begin{pmatrix} \mathcal {O}(1) &{}\quad \mathcal {O}(z^{-\nu }) \\ \mathcal {O}(1) &{}\quad \mathcal {O}(z^{-\nu }) \end{pmatrix}\) as \(z \rightarrow 0\) with \({{\mathrm{{\text {Im}}\,}}}z < 0\). However, a cancellation takes place for the entries in the first column, as can be checked from the jump conditions for S, see (3.27) on the intervals \((-1,0)\) and (0, 1). Since S remains bounded as \(z \rightarrow 0\) with \({{\mathrm{{\text {Im}}\,}}}z > 0\), and

$$\begin{aligned} S_-(z) = S_+(z) \begin{pmatrix} 0 &{} \quad \mathcal {O}(z^{-\nu }) \\ \mathcal {O}(z^{\nu }) &{}\quad 0 \end{pmatrix}, \quad \text { as } z \rightarrow 0, \end{aligned}$$

one finds (3.30).

3.6 Global Parametrix

If we ignore the jump matrices in the RH problem for S except for the one on the interval \([-1,1]\), we arrive at the following RH problem for a \(2 \times 2\) matrix valued function N:

RH problem 3.9

  1.

    N is analytic in \(\mathbb {C}\setminus [-1,1]\).

  2.

    N satisfies the jump conditions

    $$\begin{aligned} N_+(x)&=N_-(x) {\left\{ \begin{array}{ll} \begin{pmatrix} 0 &{}\quad {\text {e}}^{\nu \pi i/2}W_n(x) \\ -\frac{{\text {e}}^{-\nu \pi i/2}}{W_n(x)} &{}\quad 0 \end{pmatrix}, \quad x\in (-1,0), \\ \begin{pmatrix} 0 &{}\quad {\text {e}}^{-\nu \pi i/2}W_n(x) \\ -\frac{{\text {e}}^{\nu \pi i/2}}{W_n(x)} &{}\quad 0 \end{pmatrix}, \quad x\in (0,1). \end{array}\right. } \end{aligned}$$
  3.

    As \(z\rightarrow \infty \),

    $$\begin{aligned} N(z)=I+\mathcal {O}(1/z). \end{aligned}$$

We solve the RH problem for N by means of two Szegő functions \(D_{1,n}\) and \(D_2\), see also [15], that are associated with \(W_n\) and \({\text {e}}^{- {{\mathrm{sgn}}}(x) \nu \pi i/2}\), respectively.

The first Szegő function \(D_1 = D_{1,n}\) is defined by

$$\begin{aligned} D_{1,n}(z)=\exp \left( \frac{(z^2-1)^{1/2}}{2\pi }\int _{-1}^{1}\frac{\log W_n(x)}{\sqrt{1-x^2}}\frac{{\text {d}}x}{z-x}\right) , \end{aligned}$$
(3.32)

which is defined and analytic for \(z \in \mathbb C \setminus [-1,1]\). It satisfies

$$\begin{aligned} D_{1,n+}(x)D_{1,n-}(x)=W_n(x), \quad x \in (-1,1). \end{aligned}$$

It follows from (3.32) that \(D_{1,n}\) has no zeros in \(\mathbb C \setminus [-1,1]\) and

$$\begin{aligned} D_{\infty ,n} := \lim _{z\rightarrow \infty } D_{1,n}(z) = \exp \left( \frac{1}{2\pi }\int _{-1}^1 \frac{\log W_n(x)}{\sqrt{1-x^2}} {\text {d}}x\right) \in (0,\infty ). \end{aligned}$$

In what follows, we are not going to indicate the n-dependence in the notation for \(D_{1,n}\) and \(D_{\infty ,n}\), since the dependence on n is only mild. Indeed, because of (3.25), we have that \(D_{1,n}\) tends to the Szegő function for the weight \(|x|^{-1/2}\) with a rate given in the following lemma:

Lemma 3.10

We have

$$\begin{aligned} D_{1,n}(z)&= \left( \frac{z + (z^2-1)^{1/2}}{z} \right) ^{1/4} \left( 1 + \mathcal {O}\left( \frac{\log n}{n}\right) \right) , \end{aligned}$$
(3.33)
$$\begin{aligned} D_{\infty ,n}&= 2^{1/4} + \mathcal {O}\left( \frac{\log n}{n}\right) , \end{aligned}$$
(3.34)

as \(n \rightarrow \infty \), with a \(\mathcal {O}\) term that is uniform for \(z \in \mathbb C \setminus ([-1,1] \cup D(0, \delta )\cup D(\pm 1,\delta ))\) for any \(\delta > 0\).

Proof

The Szegő function for \(|x|^{-1/2}\) is

$$\begin{aligned} \begin{aligned} D(z; |x|^{-1/2})&= \exp \left( \frac{(z^2-1)^{1/2}}{2\pi } \int _{-1}^1 \frac{ \log |x|^{-1/2}}{\sqrt{1-x^2}} \frac{{\text {d}}x}{z-x} \right) \\&= \left( \frac{z + (z^2-1)^{1/2}}{z} \right) ^{1/4}, \end{aligned} \end{aligned}$$

and so

$$\begin{aligned} D_{1,n}(z)=D(z; |x|^{-1/2}) \exp \left( \frac{(z^2-1)^{1/2}}{2\pi } \int _{-1}^1 \frac{ \log (|x|^{1/2} W_n(x))}{\sqrt{1-x^2}} \frac{{\text {d}}x}{z-x} \right) . \end{aligned}$$
(3.35)

Because of (3.25), there exist \(c_0, c_1 > 0\) such that

$$\begin{aligned} \left| |x|^{1/2} W_n(x) - 1 \right| \le \frac{c_1}{n|x|} < \frac{1}{2}, \quad |x| \ge \frac{c_0}{n}. \end{aligned}$$

Then also for some \(c_2 > 0\),

$$\begin{aligned} \left| \log (|x|^{1/2} W_n(x))\right| \le \frac{c_2}{n|x|}, \quad |x| \ge \frac{c_0}{n}. \end{aligned}$$

It follows that

$$\begin{aligned} \begin{aligned} \left| \int _{c_0/n}^1 \frac{ \log (|x|^{1/2} W_n(x))}{\sqrt{1-x^2}} \frac{{\text {d}}x}{z-x} \right|&\le \frac{c_2}{{{\mathrm{dist}}}(z, [-1,1]) n} \int _{c_0/n}^1 \frac{1}{x\sqrt{1-x^2}} {\text {d}}x \\&\le \frac{c_3}{{{\mathrm{dist}}}(z, [-1,1])} \frac{\log n}{n}, \end{aligned} \end{aligned}$$

with a constant \(c_3\) that is independent of n and z. By deforming the integration path into the complex plane in such a way that it stays at a certain distance from z, and applying similar estimates, we find

$$\begin{aligned} \left| \int _{c_0/n}^1 \frac{ \log (|x|^{1/2} W_n(x))}{\sqrt{1-x^2}} \frac{{\text {d}}x}{z-x} \right| \le \frac{c_4}{|z|} \frac{\log n}{n}, \end{aligned}$$
(3.36)

with a constant that is independent of \(z \in \mathbb C \setminus ([-1,1] \cup D(0, \delta )\cup D(\pm 1,\delta ))\). Similarly,

$$\begin{aligned} \left| \int _{-1}^{-c_0/n} \frac{ \log (|x|^{1/2} W_n(x))}{\sqrt{1-x^2}} \frac{{\text {d}}x}{z-x} \right| \le \frac{c_5}{|z|} \frac{\log n}{n}. \end{aligned}$$
(3.37)

Near \(x=0\) we use (3.10) and (3.22) to find a \(c_6 > 0\) such that

$$\begin{aligned} c_6 |nx|^{1/2 - \nu }\le |x|^{1/2} W_n(x) \le 1, \quad |x| \le \frac{c_0}{n}. \end{aligned}$$

The upper bound follows from the fact that \(0 < K_{\nu }(s) \le K_{1/2}(s)\) for \(0\le \nu \le 1/2\) and \(s > 0\), together with the explicit formula for \(K_{1/2}(s)\), see [17, 18, 10.37.1, 10.39.2]. Then

$$\begin{aligned} \left| \log (|x|^{1/2} W_n(x)) \right| \le \left| \log c_6 + \left( \tfrac{1}{2} - \nu \right) \log |nx|\right| , \quad |x| \le \frac{c_0}{n}, \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} \left| \int _{-c_0/n}^{c_0/n} \frac{ \log (|x|^{1/2} W_n(x))}{\sqrt{1-x^2}} \frac{{\text {d}}x}{z-x} \right|&\le \frac{2}{|z|} \int _{-c_0/n}^{c_0/n} \left| \log c_6 +\left( \tfrac{1}{2} - \nu \right) \log |nx| \right| {\text {d}}x\\&\le \frac{c_7}{|z|} \frac{1}{n} \end{aligned} \end{aligned}$$
(3.38)

for some new constant \(c_7 > 0\).

Combining the estimates (3.36), (3.37), and (3.38), we get

$$\begin{aligned} \left| \frac{(z^2-1)^{1/2}}{2\pi } \int _{-1}^1 \frac{ \log (|x|^{1/2} W_n(x))}{\sqrt{1-x^2}} \frac{{\text {d}}x}{z-x} \right| = \mathcal {O}\left( \frac{ \log n}{n}\right) , \end{aligned}$$

with a \(\mathcal {O}\) term that is uniform for \(|z| > \delta \), \(|z\pm 1| > \delta \), and so by (3.35)

$$\begin{aligned} \left( \frac{z + (z^2-1)^{1/2}}{z} \right) ^{-1/4} D_{1,n}(z) = \exp \left( \mathcal {O}\left( \frac{ \log n}{n}\right) \right) = 1 + \mathcal {O}\left( \frac{ \log n}{n}\right) , \end{aligned}$$

as claimed in (3.33).

Since (3.33) is uniform for \(|z| > \delta \), \(|z\pm 1| > \delta \), we can let \(z \rightarrow \infty \), and obtain (3.34). \(\square \)
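The closed form for \(D(z; |x|^{-1/2})\) used in the proof can be confirmed numerically. Here is a spot-check (ours) at the arbitrary test point \(z = 1.5 + 0.5i\), for which the principal square root realizes the required branch of \((z^2-1)^{1/2}\):

from mpmath import quad, sqrt, log, exp, pi, mpc, mpf

z = mpc(1.5, 0.5)
w = sqrt(z**2 - 1)   # principal branch; correct for this z off [-1,1]
# Szego integral, with log |x|^{-1/2} = -(1/2) log |x|
I = quad(lambda x: -log(abs(x)) / (2 * sqrt(1 - x**2) * (z - x)), [-1, 0, 1])
print(exp(w / (2 * pi) * I), ((z + w) / z)**mpf('0.25'))   # the two values agree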

The second Szegő function \(D_2\) corresponds to the weight \({\text {e}}^{\pm \nu \pi i/2}\) and is defined as

$$\begin{aligned} D_2(z)=\left( \frac{\sqrt{z^2-1}-i}{\sqrt{z^2-1}+i}\right) ^{\nu /4}, \quad z \in \mathbb C \setminus [-1,1], \end{aligned}$$
(3.39)

with the branch of the square root that is positive for real \(z > 1\). It is not difficult to check that \(z \mapsto w = D_2(z)\) is the conformal mapping from \(\mathbb C \setminus [-1,1]\) onto the sector \(- \frac{\nu \pi }{4}< \arg w < \frac{\nu \pi }{4}\) that maps \(z=0\) (approached from the upper half-plane) to \(w=0\), \(z=0\) (approached from the lower half-plane) to \(w=\infty \), \(z = \pm 1\) to \(w = {\text {e}}^{\mp \frac{\nu \pi i}{4}}\), and \(z=\infty \) to \(w=1\).

The Szegő function \(D_2\) is related to the function \(\psi \) from (2.10).

Lemma 3.11

We have

$$\begin{aligned} \log D_2(z) = {\left\{ \begin{array}{ll} - \frac{\nu \pi }{2} \psi (z) - \frac{\nu \pi i}{4}, &{} {{\mathrm{{\text {Re}}\,}}}z > 0, \, {{\mathrm{{\text {Im}}\,}}}z > 0, \\ \frac{\nu \pi }{2} \psi (z) - \frac{\nu \pi i}{4}, &{} {{\mathrm{{\text {Re}}\,}}}z > 0, \, {{\mathrm{{\text {Im}}\,}}}z <0, \\ - \frac{\nu \pi }{2} \psi (z) + \frac{\nu \pi i}{4}, &{} {{\mathrm{{\text {Re}}\,}}}z < 0, \, {{\mathrm{{\text {Im}}\,}}}z > 0, \\ \frac{\nu \pi }{2} \psi (z) + \frac{\nu \pi i}{4}, &{} {{\mathrm{{\text {Re}}\,}}}z < 0, \, {{\mathrm{{\text {Im}}\,}}}z < 0. \end{array}\right. } \end{aligned}$$
(3.40)

Proof

This follows from (2.10) and (3.39) by straightforward calculation. \(\square \)
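For readers who want to reproduce the calculation, here is a numerical spot-check (ours) of the first case of (3.40), at an arbitrary point of the quadrant \({{\mathrm{{\text {Re}}\,}}}z > 0\), \({{\mathrm{{\text {Im}}\,}}}z > 0\), where \((z^2-1)^{1/2} = i(1-z^2)^{1/2}\) with the branches of (2.10) and (3.39):

from mpmath import mpc, mpf, log, sqrt, pi

nu, z = mpf('0.25'), mpc('0.3', '0.2')
s = sqrt(1 - z**2)   # the branch of (1-z^2)^{1/2} used in (2.10)
w = 1j * s           # equals (z^2-1)^{1/2} in this quadrant
lhs = (nu / 4) * log((w - 1j) / (w + 1j))   # log D_2(z), by (3.39)
psi = log((1 + s) / z) / pi                 # (2.10)
print(lhs, -nu * pi / 2 * psi - nu * pi * 1j / 4)   # the two values agree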

It follows from (3.40) that \(D_2\) satisfies

$$\begin{aligned} D_{2+}(x) D_{2-}(x) = {\left\{ \begin{array}{ll} {\text {e}}^{\nu \pi i/2}, &{} \quad x \in (-1,0), \\ {\text {e}}^{-\nu \pi i/2}, &{} \quad x \in (0,1), \end{array}\right. } \end{aligned}$$

and, since \(\psi (z) \sim \frac{1}{\pi } \log (1/z)\) as \(z \rightarrow 0\),

$$\begin{aligned} D_{2}(z) = {\left\{ \begin{array}{ll} \mathcal {O}(z^{\nu /2}) &{} \text { as } z \rightarrow 0 \text { with } {{\mathrm{{\text {Im}}\,}}}z > 0, \\ \mathcal {O}(z^{-\nu /2}) &{} \text { as } z \rightarrow 0 \text { with } {{\mathrm{{\text {Im}}\,}}}z < 0. \end{array}\right. } \end{aligned}$$
(3.41)

Having \(D_1\) and \(D_2\), we seek N in the form

$$\begin{aligned} N(z) = D_{\infty }^{\sigma _3} N_0(z) \left( D_1(z) D_2(z) \right) ^{-\sigma _3}. \end{aligned}$$
(3.42)

Then N satisfies the RH problem 3.9 if and only if \(N_0\) satisfies the following standard RH problem:

RH problem 3.12

  1.

    \(N_0\) is analytic in \(\mathbb {C}\setminus [-1,1]\).

  2.

    \(N_0\) satisfies the jump conditions

    $$\begin{aligned} N_{0+}(x)= N_{0-}(x)\begin{pmatrix} 0 &{}\quad 1\\ -1 &{}\quad 0 \end{pmatrix}, \quad x\in (-1,1). \end{aligned}$$
  3.

    \(N_0(z) = I + \mathcal {O}(1/z)\) as \(z\rightarrow \infty \).

The RH problem for \(N_0\) has the explicit solution (see for instance [5, Section 7.3]):

$$\begin{aligned} N_0(z)=\begin{pmatrix} \frac{\beta (z)+\beta (z)^{-1}}{2} &{}\quad \frac{\beta (z)-\beta (z)^{-1}}{2i}\\ -\frac{\beta (z)-\beta (z)^{-1}}{2i} &{}\quad \frac{\beta (z)+\beta (z)^{-1}}{2} \end{pmatrix}, \quad \text { with } \beta (z)=\left( \frac{z-1}{z+1}\right) ^{1/4}, \end{aligned}$$

for \(z \in \mathbb C \setminus [-1,1]\), and we take the branch of the fourth root that is analytic in \(\mathbb C \setminus [-1,1]\) and that is real and positive for \(z > 1\). Note that we can also write

$$\begin{aligned} N_0(z)= \frac{1}{\sqrt{2}(z^2-1)^{1/4}} \begin{pmatrix} f(z)^{1/2} &{}\quad i f(z)^{-1/2} \\ -i f(z)^{-1/2} &{}\quad f(z)^{1/2} \end{pmatrix}, \end{aligned}$$
(3.43)

where

$$\begin{aligned} f(z) = z + (z^2-1)^{1/2} \end{aligned}$$
(3.44)

is the conformal map from \(\mathbb C \setminus [-1,1]\) to the exterior of the unit disk.

3.7 Fifth Transformation

Around the endpoints \(z=\pm 1\) we build Airy parametrices \(P_{{{\mathrm{Ai}}}}\) in the usual way. We take \(\delta > 0\) sufficiently small, and \(P_{{{\mathrm{Ai}}}}\) is defined and analytic in \(D(\pm 1, \delta ) \setminus \varSigma _S\) such that it has the same jumps as S on \(\varSigma _S \cap D(\pm 1, \delta )\), and such that

$$\begin{aligned} P_{{{\mathrm{Ai}}}}(z) = N(z) (I + \mathcal {O}(n^{-1})), \quad \text {uniformly for } |z \pm 1| = \delta , \end{aligned}$$
(3.45)

as \(n \rightarrow \infty \). We refer the reader for instance to the monograph by Deift [5, §7.6] for details. Observe that the matching is not essentially affected by the extra factor \({\text {e}}^{-\nu \pi i/2}W_n(z)\) in this construction, since this function has a limit as \(n\rightarrow \infty \) and is very well-behaved near \(z=1\).

In the fifth transformation, we put

$$\begin{aligned} Q = {\left\{ \begin{array}{ll} SN^{-1} &{} \quad \text { outside the disks}\, D(\pm 1, \delta ), \\ S P_{{{\mathrm{Ai}}}}^{-1} &{}\quad \text { inside the disks.} \end{array}\right. } \end{aligned}$$
(3.46)

Then Q is defined and analytic outside of a contour consisting of \(\varSigma _S\) and two circles around \(\pm 1\). The construction of the Airy parametrix is such that it has the same jump as S inside the circles. As a result, Q is analytic inside the two disks. Also S and N have the same jump on \((-1,1)\), and it follows that Q is analytic across \((-1,1)\). Therefore Q is analytic in \(\mathbb C \setminus \varSigma _Q\), where \(\varSigma _Q\) consists of two circles around \(\pm 1\), the parts of \((-\infty , -1)\), \(\varSigma _j\), \(j=1,\ldots , 4\), and \((1, \infty )\) outside of these circles, and the segment \((-i \rho , i \rho )\) on the imaginary axis. See Fig. 4.

Fig. 4 Oriented contour \(\varSigma _Q\) consisting of two circles around \(\pm 1\), the parts of \((-\infty , -1)\) and \((1, \infty )\) outside of these circles, the lips of the lens around \([-1,1]\), and the segment \((-i \rho , i \rho )\) on the imaginary axis

From the RH problem 3.8 for S and (3.46), it then follows that Q solves the following RH problem:

RH problem 3.13

  1. \(Q : \mathbb C \setminus \varSigma _Q \rightarrow \mathbb C^{2 \times 2}\) is analytic.

  2. Q satisfies the jump condition \(Q_+ = Q_- J_Q\) on \(\varSigma _Q\), where

    $$\begin{aligned} J_Q(z) = {\left\{ \begin{array}{ll} N(z) P_{{{\mathrm{Ai}}}}^{-1}(z) &{} \quad \text { for}\, z\, \text { on the circles}, \\ N(z) \begin{pmatrix} 1 &{} \quad 0\\ j_1(z) &{}\quad 1 \end{pmatrix} N^{-1}(z) &{} \quad \text { for } z \in (0, i \rho ),\\ N(z) \begin{pmatrix} 1 &{}\quad 0\\ j_2(z) &{}\quad 1 \end{pmatrix} N^{-1}(z) &{} \quad \text { for } z \in (-i \rho , 0), \\ N(z) J_S(z) N(z)^{-1} &{}\quad \text { elsewhere on}\, \varSigma _Q. \end{array}\right. } \end{aligned}$$

    Here \(j_1\) and \(j_2\) are given by (3.28) and (3.29).

  3. As \(z\rightarrow \infty \),

    $$\begin{aligned} Q(z)=I+\mathcal {O}(1/z). \end{aligned}$$

  4. \(Q(z) = \mathcal {O}(1)\) as \(z \rightarrow 0\).

Concerning the behavior around 0: there is no longer a distinction between the upper and lower half-planes, and Q remains bounded in all directions.

We note that

$$\begin{aligned} J_Q(z) = I + \mathcal {O}(n^{-1}) \quad \text { for}\,z\,\text { on the circles} \end{aligned}$$
(3.47)

because of the matching property (3.45). We also note that

$$\begin{aligned} J_Q(z) = I + \mathcal {O}({\text {e}}^{-cn}) \quad \text { on } \varSigma _Q \setminus (\partial D(\pm 1, \delta ) \cup [-i \rho , i\rho ] ) \end{aligned}$$
(3.48)

because of (3.31), (3.42), and Lemma 3.10.

The jump matrix \(J_Q\) on the imaginary axis can be rewritten as (we use (3.42)):

$$\begin{aligned} J_Q(z) = D_{\infty }^{\sigma _3} N_0(z) \begin{pmatrix} 1 &{} \quad 0 \\ j_{1,2}(z) (D_1(z) D_2(z))^2 &{}\quad 1 \end{pmatrix} N_0^{-1}(z) D_{\infty }^{-\sigma _3}, \quad z \in (-i \rho , i \rho ), \end{aligned}$$
(3.49)

with \(j_1\) on \((0, i\rho )\), and \(j_2\) on \((-i \rho ,0)\).

Remark 3.14

The entry \(j_{1,2}(z) (D_1(z) D_2(z))^2\) in (3.49) depends on n and tends to 0 as \(n \rightarrow \infty \) for every \(z \in (-i \rho , 0) \cup (0, i \rho )\), but not in a uniform way. Hence, further analysis is needed in the next section. A similar situation is studied in [3, Section 5], where the jump on the imaginary axis has the same structure and approaches the identity matrix at a rate \(1/\log (n)\) as \(n\rightarrow \infty \). In that case, no local parametrix near the origin is needed.

3.8 Local Parametrix Near \(z=0\)

The construction of a local parametrix in a neighborhood of the origin follows the approach presented in [13]. We take \(\varepsilon > 0\), with

$$\begin{aligned} \varepsilon < \min \left( \tfrac{1}{2 e}, \tfrac{\rho }{3} \right) , \end{aligned}$$

and we build a local parametrix P defined in a neighborhood \(|z| < 3\varepsilon \) of 0. We use a cut-off function \(\chi (z)\) on \(i \mathbb R\) such that

  (a) \(\chi : i \mathbb R \rightarrow \mathbb R\) is a \(C^{\infty }\) function,

  (b) \(0 \le \chi (z) \le 1\) for all \(z \in i \mathbb R\),

  (c) \(\chi (z) \equiv 1\) for \(z \in (-i \varepsilon , i \varepsilon )\),

  (d) \(\chi (z) \equiv 0\) for \(z \in \left( -i \infty , -2i\varepsilon \right) \cup \left( 2i\varepsilon , i \infty \right) \).

Then we modify \(J_Q\) by multiplying the off-diagonal entry in the middle factor of (3.49) by \(\chi (z)\), and in addition we use this as a jump matrix on the full imaginary axis. Thus

$$\begin{aligned} J_{P}(z) = D_{\infty }^{\sigma _3} N_0(z) \begin{pmatrix} 1 &{}\quad 0 \\ j_{1,2}(z) (D_1(z) D_2(z))^2 \chi (z) &{}\quad 1 \end{pmatrix} N_0^{-1}(z) D_{\infty }^{-\sigma _3}, \quad z \in i \mathbb R, \end{aligned}$$
(3.50)

with \(j_1\) on \(i \mathbb R^+\) and \(j_2\) on \(i \mathbb R^-\).

Then the RH problem for the local parametrix P at the origin is:

RH problem 3.15

  1. \(P : \{ z\in \mathbb C \mid -1 < {{\mathrm{{\text {Re}}\,}}}z < 1 \} \setminus i \mathbb R \rightarrow \mathbb C^{2 \times 2}\) is analytic.

  2. P satisfies the jump condition

    $$\begin{aligned} P_+(z)= P_-(z) J_P(z), \quad z\in i \mathbb R, \end{aligned}$$
    (3.51)

    where \(J_P(z)\) is given by (3.50).

  3. \(P(z) = I + \mathcal {O} \left( \epsilon _n \right) \) as \(n \rightarrow \infty \), uniformly for \(|z| = 3 \varepsilon \), with \(\epsilon _n\) given by (2.3).

Proposition 3.16

The RH problem 3.15 has a solution for n large enough.

The rest of this subsection is devoted to the proof of Proposition 3.16. It takes a number of steps, and it is the most technical part of the paper.

3.8.1 RH Problem for \(\widehat{P}\)

We introduce a matrix \(\widehat{P}(z)\) in the following way:

$$\begin{aligned} P(z) = {\left\{ \begin{array}{ll} D_{\infty }^{\sigma _3} N_0(z) \widehat{P}(z) N_0(z)^{-1} D_{\infty }^{-\sigma _3} &{} \text {for } {{\mathrm{{\text {Im}}\,}}}z < 0, \\ D_{\infty }^{\sigma _3} N_0(z) \begin{pmatrix} 0 &{}\quad -1 \\ 1 &{}\quad 0 \end{pmatrix} \widehat{P}(z) \begin{pmatrix} 0 &{} 1 \\ -1 &{}\quad 0 \end{pmatrix} N_0(z)^{-1} D_{\infty }^{-\sigma _3}&\quad \text {for } {{\mathrm{{\text {Im}}\,}}}z > 0. \end{array}\right. } \end{aligned}$$
(3.52)

The extra factors in (3.52) for \({{\mathrm{{\text {Im}}\,}}}z > 0\) are introduced in order to compensate for the jump of \(N_0\) on \([-1,1]\). Then P satisfies the jump condition (3.51) in the RH problem 3.15 if and only if \(\widehat{P}_+ = \widehat{P}_- J_{\widehat{P}}\), where the jump is

$$\begin{aligned} J_{\widehat{P}}(z) = {\left\{ \begin{array}{ll} \begin{pmatrix} 1 &{}\quad - j_1(z) (D_1(z) D_2(z))^2 \chi (z) \\ 0 &{}\quad 1 \end{pmatrix} &{}\quad \text { for } z \in i \mathbb R^+, \\ \begin{pmatrix} 1 &{}\quad 0 \\ j_2(z) (D_1(z) D_2(z))^2 \chi (z) &{}\quad 1 \end{pmatrix}&\quad \text { for } z \in i \mathbb R^-. \end{array}\right. } \end{aligned}$$
(3.53)

Note that the two jump matrices have opposite triangular structure. So, we look for \(\widehat{P}\) that solves the following RH problem:

RH problem 3.17

  1. \(\widehat{P} : \mathbb C \setminus i \mathbb R \rightarrow \mathbb C^{2 \times 2}\) is analytic.

  2. \(\widehat{P}\) satisfies the jump conditions

    $$\begin{aligned} \widehat{P}_+(z)= \widehat{P}_-(z) J_{\widehat{P}}(z), \quad z\in i \mathbb R, \end{aligned}$$
    (3.54)

    where \(J_{\widehat{P}}(z)\) is given by (3.53).

  3. \(\widehat{P}(z) = I + \mathcal {O}(1/z)\) as \(z \rightarrow \infty \).

Our aim is to show that the RH problem for \(\widehat{P}\) has a solution for n sufficiently large and that this solution satisfies, in addition:

  4. \(\widehat{P}(z) = I + \mathcal {O} \left( \epsilon _n \right) \) as \(n \rightarrow \infty \), uniformly for \(|z| = 3 \varepsilon \).

Having \(\widehat{P}\), we define P by (3.52) in terms of \(\widehat{P}\), and it will satisfy the requirements of the RH problem 3.15.

We prove the following result:

Lemma 3.18

If \(0<\nu \le 1/2\), then for n large enough there exists \(\widehat{P}(z)\) that solves the RH problem 3.17, and as \(n\rightarrow \infty \),

$$\begin{aligned} \begin{aligned} |\widehat{P}_{11}(z) - 1|&= \mathcal {O} \left( n^{-1/2} (\log n)^{-2\nu -1/2}\right) , \nonumber \\ |\widehat{P}_{21}(z) |&= \mathcal {O} \left( n^{\nu -1/2} (\log n)^{-\nu -1/2} \right) , \nonumber \end{aligned} \end{aligned}$$

for \(z \in \mathbb {C}\setminus V\), where V is any neighborhood of \([-2i\varepsilon ,0]\), and

$$\begin{aligned} \begin{aligned} |\widehat{P}_{12}(z)|&= \mathcal {O} \left( n^{-\nu -1/2}(\log n)^{-\nu -1/2} \right) ,\nonumber \\ |\widehat{P}_{22}(z) - 1|&= \mathcal {O} \left( n^{-1/2} (\log n)^{-2\nu -1/2}\right) ,\nonumber \end{aligned} \end{aligned}$$

for \(z \in \mathbb {C}\setminus V\), where V is any neighborhood of \([0,2i\varepsilon ]\). Here \(\widehat{P}_{ij}(z)\) denotes the \((i,j)\) entry of the matrix \(\widehat{P}(z)\).

Remark 3.19

It follows from Lemma 3.18 that \(\widehat{P}(z) = I + \mathcal {O} \left( \epsilon _n \right) \) as \(n \rightarrow \infty \), uniformly for \(|z| = 3 \varepsilon \), and because of (3.52), the same holds for P(z).

In the proof of this lemma, we will need the following steps:

  1. We write the jump conditions for \(\widehat{P}(z)\) componentwise, and in terms of two integral operators \(K_1\) and \(K_2\).

  2. We estimate the operator norms \(\Vert K_1\Vert \) and \(\Vert K_2\Vert \) as \(n\rightarrow \infty \). This requires estimates for the functions \(j_1(z)\), \(j_2(z)\), \(D_1(z)\), and \(D_2(z)\) that are uniform as \(n\rightarrow \infty \) for \(z = iy\) with y in a fixed interval around the origin.

  3. We show that the operators \(I-K_2K_1\) and \(I-K_1K_2\) are invertible for n large enough, and this gives the existence and asymptotics of \(\widehat{P}\).

Finally, the estimates for \(\widehat{P}(z)\) are used to prove that the matrix R(z), which will be defined in Sect. 3.9 and which solves the Riemann–Hilbert problem 3.26, is close to the identity matrix as \(n\rightarrow \infty \).

3.8.2 Integral Operators

Let us write

$$\begin{aligned} \begin{aligned} \eta _1(z)&= - j_1(z) (D_1(z) D_2(z))^2 \chi (z),&z \in i \mathbb R^+,\\ \eta _2(z)&= j_2(z) (D_1(z) D_2(z))^2 \chi (z),&z \in i \mathbb R^-. \end{aligned} \end{aligned}$$
(3.55)

These functions depend on n, since \(j_1\), \(j_2\), and \(D_1\) depend on n. Note, however, that \(D_2\) and \(\chi \) do not depend on n.

The jump condition (3.53)–(3.54) yields that for \(j=1,2\),

$$\begin{aligned} \begin{aligned} \widehat{P}_{j1+}(z)&= {\left\{ \begin{array}{ll} \widehat{P}_{j1-}(z) &{} \text { for } z \in i \mathbb R^+, \\ \widehat{P}_{j1-}(z) + \eta _2(z) \widehat{P}_{j2-}(z) &{} \text { for } z \in i \mathbb R^-, \end{array}\right. }\\ \widehat{P}_{j2+}(z)&= {\left\{ \begin{array}{ll} \widehat{P}_{j2-}(z) + \eta _1(z) \widehat{P}_{j1-}(z) &{} \text { for } z \in i \mathbb R^+, \\ \widehat{P}_{j2-}(z) &{} \text { for } z \in i \mathbb R^-. \end{array}\right. } \end{aligned} \end{aligned}$$

Since \(\chi (z) = 0\) for \(|z| \ge 2 \varepsilon \), we find that \(\widehat{P}_{j1}\) is analytic in \(\mathbb C \setminus [-2i \varepsilon , 0] \), and \(\widehat{P}_{j2}\) is analytic in \(\mathbb C \setminus [0,2i\varepsilon ]\). Then by the Sokhotski–Plemelj formula and the asymptotic condition \(\widehat{P}(z) \rightarrow I\) as \(z \rightarrow \infty \), we get

$$\begin{aligned} \begin{aligned} \widehat{P}_{11}(z)&= 1 + \frac{1}{2\pi i} \int _{-2i \varepsilon }^0 \frac{\eta _2(s) \widehat{P}_{12}(s)}{s-z} \mathrm{d}s,&\widehat{P}_{12}(z)&= \frac{1}{2\pi i} \int _0^{2i \varepsilon } \frac{\eta _1(s) \widehat{P}_{11}(s)}{s-z} \mathrm{d}s. \\ \widehat{P}_{21}(z)&= \frac{1}{2\pi i} \int _{-2i \varepsilon }^0 \frac{\eta _2(s) \widehat{P}_{22}(s)}{s-z} \mathrm{d}s,&\widehat{P}_{22}(z)&= 1 + \frac{1}{2\pi i} \int _0^{2i \varepsilon } \frac{\eta _1(s) \widehat{P}_{21}(s)}{s-z} \mathrm{d}s. \end{aligned} \end{aligned}$$
(3.56)

We can write the equations in operator form if we introduce two operators

$$\begin{aligned} K_1 : L^2([0,2i \varepsilon ]) \rightarrow L^2([-2i \varepsilon ,0]) \quad \text { and } \quad K_2 : L^2([-2i \varepsilon ,0]) \rightarrow L^2([0,2i \varepsilon ]) \end{aligned}$$

by

$$\begin{aligned} (K_1 f)(z)&= \frac{1}{2\pi i} \int _0^{2i \varepsilon } \frac{\eta _1(s) f(s)}{s-z} \mathrm{d}s, \quad f \in L^2([0,2i \varepsilon ]), \\ (K_2 g)(z)&= \frac{1}{2\pi i} \int _{-2i \varepsilon }^0 \frac{\eta _2(s) g(s)}{s-z} \mathrm{d}s, \quad g \in L^2([-2i \varepsilon ,0]). \end{aligned}$$

Then \(f_1 = \widehat{P}_{11}\), \(g_1 = \widehat{P}_{12}\) should solve

$$\begin{aligned} f_1 = 1 + K_2 g_1, \quad g_1 = K_1 f_1, \end{aligned}$$
(3.57)

and \(f_2 = \widehat{P}_{21}\), \(g_2 = \widehat{P}_{22}\) should solve

$$\begin{aligned} f_2= K_2 g_2, \quad g_2= 1 + K_1 f_2. \end{aligned}$$
(3.58)

Both \(K_1\) and \(K_2\) are integral operators between Hilbert spaces, and their operator norms are bounded by the corresponding Hilbert–Schmidt norms:

$$\begin{aligned} \Vert K_1 \Vert ^2 \le \int _{-2i \varepsilon }^0 \int _0^{2i \varepsilon } \frac{|\eta _1(s)|^2}{|s-t|^2} |\mathrm{d}s| |\mathrm{d}t|, \\ \Vert K_2 \Vert ^2 \le \int _0^{2i \varepsilon } \int _{-2i \varepsilon }^0 \frac{|\eta _2(s)|^2}{|s-t|^2} |\mathrm{d}s| |\mathrm{d}t|. \end{aligned}$$

The t-integrals can be done explicitly: for \(s = iy\) with \(y > 0\), we have \(\int _{-2i\varepsilon }^0 \frac{|\mathrm{d}t|}{|s-t|^2} = \int _0^{2\varepsilon } \frac{\mathrm{d}u}{(y+u)^2} = \frac{1}{y} - \frac{1}{y+2\varepsilon } \le \frac{1}{y}\), and similarly on the other half of the imaginary axis. This leads to the estimates (we also change to a real integration variable by putting \(s = \pm iy\))

$$\begin{aligned} \Vert K_1 \Vert \le \left( \int _0^{2\varepsilon } \frac{|\eta _1(iy)|^2}{y} \mathrm{d}y \right) ^{1/2}, \quad \Vert K_2 \Vert \le \left( \int _0^{2\varepsilon } \frac{|\eta _2(-iy)|^2}{y} \mathrm{d}y \right) ^{1/2}. \end{aligned}$$
(3.59)

The next step is to show that both integrals are finite (so that \(K_1\) and \(K_2\) are well-defined bounded operators) and that \(\Vert K_1 K_2 \Vert \) and \(\Vert K_2 K_1 \Vert \) tend to 0 as \(n \rightarrow \infty \). To this end, we need to control the functions \(\eta _1\) and \(\eta _2\), defined in (3.55).

3.8.3 The Functions \(\eta _1(z)\) and \(\eta _2(z)\)

The functions \(\eta _1\) and \(\eta _2\) are defined in terms of \(j_1\), \(j_2\), \(D_1\), and \(D_2\), see (3.55). In this section, we obtain estimates for all these functions for large n.

First we write the functions \(j_1(z)\) and \(j_2(z)\) in terms of Bessel functions. Because of the property \(K_{\nu }(\overline{z})=\overline{K_{\nu }(z)}\) for real \(\nu \), see [17, 18, §10.34.7], if we consider the positive imaginary axis and we write \(z=iy\), with \(y>0\), then the function \(W_n\) [recall (3.24)] can be written as

$$\begin{aligned} W_{n,\pm }(iy)= \sqrt{2n} K_{\nu }(\mp n\pi iy){\text {e}}^{\mp n\pi iy}, \end{aligned}$$
(3.60)

so \(W_{n,+}(iy)=\overline{W_{n,-}(iy)}\). Similarly, on the negative imaginary axis,

$$\begin{aligned} W_{n,\pm }(-iy)= \sqrt{2n} K_{\nu }(\pm n\pi iy){\text {e}}^{\mp n\pi iy}, \end{aligned}$$
(3.61)

so again \(W_{n,+}(-iy)=\overline{W_{n,-}(-iy)}\). Additionally, we have

$$\begin{aligned} \begin{aligned} |W_{n,-}(iy)|^2 = 2n |K_{\nu }(n\pi i y)|^2&=\frac{n \pi ^2}{2}|H^{(2)}_{\nu }(n\pi y)|^2\\&=\frac{n \pi ^2}{2}\left[ J_{\nu }(n\pi y)^2+Y_{\nu }(n\pi y)^2\right] ,\\ |W_{n,-}(-iy)|^2= 2n |K_{\nu }(-n\pi i y)|^2&=\frac{n\pi ^2}{2}|H^{(1)}_{\nu }(n\pi y)|^2\\&=\frac{n \pi ^2}{2}\left[ J_{\nu }(n\pi y)^2+Y_{\nu }(n\pi y)^2\right] , \end{aligned} \end{aligned}$$
(3.62)

in terms of Hankel functions, see [17, 18, §10.27.8].
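
As a small aside, the identity in (3.62) is easy to test numerically, since it amounts to \(|K_{\nu }(\pm it)| = \frac{\pi }{2} |H^{(2,1)}_{\nu }(t)|\) for \(t > 0\). The following Python illustration (the order \(\nu \) and the sample points are arbitrary choices of ours) compares the two sides:

```python
import numpy as np
from scipy.special import jv, kv, yv

nu = 0.3                              # any order 0 < nu <= 1/2
for t in [0.1, 1.0, 10.0]:
    lhs = abs(kv(nu, 1j * t)) ** 2    # |K_nu(it)|^2
    rhs = np.pi ** 2 / 4 * (jv(nu, t) ** 2 + yv(nu, t) ** 2)
    print(t, lhs, rhs)                # the two columns agree
```

We have the following auxiliary result: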

Lemma 3.20

For \(y>0\), the moduli of \(j_1(iy)\) and \(j_2(-iy)\) are given by:

$$\begin{aligned} \begin{aligned} |j_1(iy)|&=\frac{2{\text {e}}^{-2n{{\mathrm{{\text {Re}}\,}}}\varphi _-(iy)}}{\sqrt{2n} \pi } \frac{|J_{\nu }(n\pi y)\cos \nu \pi -Y_{\nu }(n\pi y)\sin \nu \pi |}{J^2_{\nu }(n\pi y)+Y^2_{\nu }(n\pi y)},\\ |j_2(-iy)|&=\frac{2{\text {e}}^{-2n{{\mathrm{{\text {Re}}\,}}}\varphi _-(-iy)}}{\sqrt{2n} \pi }\frac{|J_{\nu }(n\pi y)|}{J^2_{\nu }(n\pi y)+Y^2_{\nu }(n\pi y)}. \end{aligned} \end{aligned}$$

Proof

It follows from (3.28) that \(j_1\) can be written as

$$\begin{aligned} j_1(iy) =\frac{{\text {e}}^{-2n\varphi _-(iy)-n\pi iy}}{W_{n,-}(iy)W_{n,+}(iy)} \left[ {\text {e}}^{\frac{\nu \pi i}{2}+n\pi iy}W_{n,+}(iy) -{\text {e}}^{-\frac{\nu \pi i}{2}-n\pi iy}W_{n,-}(iy)\right] , \end{aligned}$$

and because of \(\varphi _-(z)=\varphi _+(z)-\pi z\) on the imaginary axis, and the fact that \(W_{n,+}(iy)=\overline{W_{n,-}(iy)}\), the two terms on the right-hand side are complex conjugates, so

$$\begin{aligned} j_1(iy)=\frac{-2i {\text {e}}^{-2n\varphi _-(iy)-n\pi iy}}{|W_{n,-}(iy)|^2} {{\mathrm{{\text {Im}}\,}}}\left[ {\text {e}}^{-\frac{\nu \pi i}{2}-n\pi iy}W_{n,-}(iy)\right] . \end{aligned}$$
(3.63)

Using the formula

$$\begin{aligned} K_{\nu }(z)=-\frac{\pi i}{2}{\text {e}}^{-\frac{\nu \pi i}{2}} H_{\nu }^{(2)}(ze^{-\frac{\pi i}{2}}), \quad -\frac{\pi }{2}< \arg z\le \pi , \end{aligned}$$

in terms of Hankel functions, see [17, 18, §10.27.8], and (3.60), we observe that

$$\begin{aligned} \begin{aligned} {\text {e}}^{-\frac{\nu \pi i}{2}-n\pi iy}W_{n,-}(iy)&={\text {e}}^{-\frac{\nu \pi i}{2}} \sqrt{2n} K_{\nu }(n\pi iy)\\&=-\frac{\sqrt{2n} \pi i\, {\text {e}}^{-\nu \pi i}}{2}\left( J_{\nu }(n\pi y)-iY_{\nu }(n\pi y)\right) . \end{aligned} \end{aligned}$$

Hence, on the positive imaginary axis,

$$\begin{aligned} {{\mathrm{{\text {Im}}\,}}}\left[ {\text {e}}^{-\frac{\nu \pi i}{2}-n\pi iy}W_{n,-}(iy)\right] =-\frac{\sqrt{2n}\pi }{2}(J_{\nu }(n\pi y)\cos \nu \pi -Y_{\nu }(n\pi y)\sin \nu \pi ). \end{aligned}$$

Using (3.63) and (3.62), this proves the first formula. Similarly, for \(y>0\),

$$\begin{aligned} j_2(-iy)= \frac{2i {\text {e}}^{-2n\varphi _-(-iy)-n\pi iy}}{|W_{n,-}(-iy)|^2} {{\mathrm{{\text {Im}}\,}}}\left[ {\text {e}}^{-\frac{\nu \pi i}{2}-n\pi iy}W_{n,-}(-iy)\right] . \end{aligned}$$
(3.64)

In this case, we use

$$\begin{aligned} K_{\nu }(z)=\frac{\pi i}{2}{\text {e}}^{\frac{\nu \pi i}{2}} H_{\nu }^{(1)}\left( ze^{\frac{\pi i}{2}}\right) , \quad -\pi <\arg z\le \frac{\pi }{2}, \end{aligned}$$

see [17, 18, §10.27.8], and (3.61) to obtain

$$\begin{aligned} \begin{aligned} {\text {e}}^{-\frac{\nu \pi i}{2}-n\pi iy}W_{n,-}(-iy)&={\text {e}}^{-\nu \pi i/2} \sqrt{2n} K_{\nu }(-n\pi iy)\\&=\frac{\sqrt{2n} \pi i}{2}\left( J_{\nu }(n\pi y)+iY_{\nu }(n\pi y)\right) , \end{aligned} \end{aligned}$$

so

$$\begin{aligned} {{\mathrm{{\text {Im}}\,}}}\left[ {\text {e}}^{\frac{-\nu \pi i}{2}-n\pi iy}W_{n,-}(-iy)\right] =\frac{\sqrt{2n} \pi }{2}J_{\nu }(n\pi y). \end{aligned}$$

We use (3.64) and (3.62), and this completes the proof. \(\square \)

Next, we obtain estimates for the functions \(j_1\) and \(j_2\) for large n.

Lemma 3.21

For \(0<\nu \le 1/2\), there exist constants \(C_\nu , C'_\nu >0\) such that for all \(s > 0\), we have

$$\begin{aligned} \begin{aligned} \frac{|J_{\nu }(s)\cos \nu \pi -Y_{\nu }(s)\sin \nu \pi |}{J_{\nu }(s)^2+Y_{\nu }(s)^2}&\le C_{\nu }\, \frac{s^{\nu }(1+s^{1-2\nu })}{1+s^{1/2-\nu }},\\ \frac{|J_{\nu }(s)|}{J_{\nu }(s)^2+Y_{\nu }(s)^2}&\le C'_{\nu }\, \frac{s^{3\nu }(1+s^{1-2\nu })}{1+s^{1/2+\nu }}. \end{aligned} \end{aligned}$$

Proof

For the proof, we consider the following expansions: as \(s\rightarrow 0^+\),

$$\begin{aligned} J_{\nu }(s)=\frac{s^{\nu }}{2^{\nu }\varGamma (\nu +1)}\left( 1+\mathcal {O}\left( s^{2}\right) \right) , \quad \nu \ne -1,-2,\ldots , \end{aligned}$$
(3.65)

and for \(\nu <1\), we have

$$\begin{aligned} Y_{\nu }(s)=-\frac{\varGamma (\nu )}{\pi }\left( \frac{s}{2}\right) ^{-\nu } + \mathcal {O}(s^{\nu }). \end{aligned}$$
(3.66)

As \(s\rightarrow \infty \), we have

$$\begin{aligned} \begin{aligned} J_{\nu }(s)&= \left( \frac{2}{\pi s}\right) ^{1/2}\cos \omega \, \left( 1+\mathcal {O}\left( s^{-1}\right) \right) , \\ Y_{\nu }(s)&= \left( \frac{2}{\pi s}\right) ^{1/2}\sin \omega \, \left( 1+\mathcal {O}\left( s^{-1}\right) \right) , \end{aligned} \end{aligned}$$
(3.67)

where \(\omega =s-\frac{\nu \pi }{2}-\frac{\pi }{4}\). See, for instance, [17, 18, formulas 10.7.3–4, 10.17.3–4].

From this, it follows that

$$\begin{aligned} \begin{aligned} J_{\nu }(s)^2+Y_{\nu }(s)^2&= \frac{\varGamma (\nu )^2}{\pi ^2}\left( \frac{s}{2}\right) ^{-2\nu } + \mathcal {O}(1),&s\rightarrow 0,\\ J_{\nu }(s)^2+Y_{\nu }(s)^2&= \frac{2}{\pi s}+\mathcal {O}\left( s^{-2}\right) ,&s\rightarrow \infty . \end{aligned} \end{aligned}$$
(3.68)

From (3.68), and since \(J_{\nu }(s)^2+Y_{\nu }(s)^2\) is continuous and strictly positive for \(s>0\), it follows that there exist two constants \(C_{1,\nu },C_{2,\nu }>0\) such that

$$\begin{aligned} C_{1,\nu }\,\frac{s^{-2\nu }}{1+s^{1-2\nu }}\le J_{\nu }(s)^2+Y_{\nu }(s)^2 \le C_{2,\nu }\,\frac{s^{-2\nu }}{1+s^{1-2\nu }}, \quad s>0. \end{aligned}$$

Using a similar argument, we have

$$\begin{aligned} |J_{\nu }(s)|\le C_{3,\nu }\,\frac{s^{\nu }}{1+s^{1/2+\nu }}, \end{aligned}$$

and also

$$\begin{aligned} |J_{\nu }(s)\cos \nu \pi -Y_{\nu }(s)\sin \nu \pi |\le C_{4,\nu }\,\frac{s^{-\nu }}{1+s^{1/2-\nu }}, \end{aligned}$$

and putting all the estimates together, we get the bounds in the lemma. \(\square \)
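
An informal numerical check of Lemma 3.21 (illustrative only; the order \(\nu \) and the grid are arbitrary choices of ours) evaluates the two ratios against the stated envelopes and confirms that they remain bounded:

```python
import numpy as np
from scipy.special import jv, yv

nu = 0.3
s = np.logspace(-6, 6, 2000)
J, Y = jv(nu, s), yv(nu, s)
denom = J ** 2 + Y ** 2

lhs1 = np.abs(J * np.cos(nu * np.pi) - Y * np.sin(nu * np.pi)) / denom
env1 = s ** nu * (1 + s ** (1 - 2 * nu)) / (1 + s ** (0.5 - nu))
lhs2 = np.abs(J) / denom
env2 = s ** (3 * nu) * (1 + s ** (1 - 2 * nu)) / (1 + s ** (0.5 + nu))

# Bounded maxima over the grid, consistent with finite C_nu and C'_nu:
print(np.max(lhs1 / env1), np.max(lhs2 / env2))
```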

As a consequence of Lemmas 3.20 and 3.21, we obtain the following bounds for \(j_1\) and \(j_2\) for \(y>0\):

$$\begin{aligned} \begin{aligned} |j_1(iy)|&\le C_{\nu }\, \frac{2{\text {e}}^{-2n{{\mathrm{{\text {Re}}\,}}}\varphi _-(iy)}}{\sqrt{2n} \pi } \frac{(n\pi y)^{\nu }(1+(n\pi y)^{1-2\nu })}{1+(n\pi y)^{1/2-\nu }},\\ |j_2(-iy)|&\le C'_{\nu }\, \frac{2{\text {e}}^{-2n{{\mathrm{{\text {Re}}\,}}}\varphi _-(-iy)}}{\sqrt{2n} \pi } \frac{(n\pi y)^{3\nu }(1+(n\pi y)^{1-2\nu })}{1+(n\pi y)^{1/2+\nu }}. \end{aligned} \end{aligned}$$
(3.69)

Next, we need an estimate for \(D_1(z)\) (see formula (3.32)), with \(z = iy\), \(y \in [-\rho , \rho ]\), where we recall that \(\pm i \rho \) is the intersection of the lens with the imaginary axis.

Lemma 3.22

For \(0<\nu \le 1/2\), there exists a constant \(C_\nu \) such that for all sufficiently large n,

$$\begin{aligned} |D_1(iy)|^2\le C_\nu \,\frac{ n^{1/2-\nu } |y|^{-\nu }}{1+(n|y|)^{1/2-\nu }}, \quad y \in [-\rho , \rho ]. \end{aligned}$$
(3.70)

Proof

We write first \(z=iy\) with \(y>0\) in (3.32) and use the parity of the function \(W_n\) to get the following expression:

$$\begin{aligned} D_1(iy)=\exp \left( \frac{y(y^2+1)^{1/2}}{2\pi }\int _0^1 \frac{\log W_n(x)}{\sqrt{1-x^2}}\frac{{\text {d}}x}{x^2+y^2}\right) . \end{aligned}$$
(3.71)

Using the asymptotic expansions (3.65), (3.66), and (3.67), we claim that there exist two constants \(C_1\) and \(C_2\), depending on \(\nu \), such that \(W_n(x)\) satisfies

$$\begin{aligned} W_n(x)\le C_1 |x|^{-1/2}, \quad |n \pi x|\ge 1, \end{aligned}$$

and

$$\begin{aligned} W_n(x)\le C_2 n^{1/2 - \nu } |x|^{-\nu }, \quad |n \pi x|\le 1. \end{aligned}$$

Since \(\nu \le 1/2\), both bounds hold uniformly for \(n\pi x>0\). Since the integrand in (3.71) is a real function, we can bound \(D_1(iy)\) from above by another Szegő function:

$$\begin{aligned} D_1(iy)^2\le D\left( iy;C_1 |\pi x|^{-1/2}\right) ^2=C_1 \pi ^{-1/2} D\left( iy;|x|^{-1/2}\right) ^2. \end{aligned}$$

This last Szegő function is explicit, since for a general exponent \(\alpha >-1\), we have

$$\begin{aligned} D(z;|x|^{\alpha })=\left( \frac{z}{z+\sqrt{z^2-1}}\right) ^{\alpha /2}. \end{aligned}$$

As a consequence, substituting \(z=iy\) with \(y \in [-\rho ,\rho ]\), and \(\alpha =-1/2\),

$$\begin{aligned} D_1(iy)^2\le C_1 (ny)^{-1/2}(y+\sqrt{y^2+1})^{1/2}\le C_1\left( \rho +\sqrt{\rho ^2+1}\right) ^{1/2} (ny)^{-1/2}, \end{aligned}$$

and by the same argument with \(\alpha =-\nu \),

$$\begin{aligned} D_1(iy)^2\le C_2 n^{1/2-\nu } y^{-\nu }(y+\sqrt{y^2+1})^{\nu }\le C_2 \left( \rho +\sqrt{\rho ^2+1}\right) ^{\nu } n^{1/2-\nu } y^{-\nu }. \end{aligned}$$

The bound in the lemma follows for \(y>0\) from these two estimates, for some constant \(C_{\nu }\). Finally, from the definition of \(D_1\), see (3.32), we have that if \(y<0\), then \(D_1(iy)=\overline{D_1(-iy)}\), so the modulus is equal and the bound holds also in this case. \(\square \)

We now combine the previous estimates to obtain bounds for the functions \(\eta _1\) and \(\eta _2\) defined in (3.55).

Lemma 3.23

For \(0<\nu \le 1/2\), there exist constants \(C_{\nu }, C'_{\nu } > 0\) such that for n large enough and \(y \in [0,\rho ]\), we have the bounds

$$\begin{aligned} |\eta _1(iy)|&\le \left| j_1(iy) (D_1(iy) D_2(iy))^2 \right| \le C_\nu \, y^\nu \, {\text {e}}^{-2n{{\mathrm{{\text {Re}}\,}}}\varphi _-(iy)}, \end{aligned}$$
(3.72)
$$\begin{aligned} |\eta _2(-iy)|&\le \left| j_2(-iy) (D_1(-iy) D_2(-iy))^2 \right| \le C'_\nu \, (n^{2\nu }y^\nu +ny^{1-\nu }) \, \mathrm{{e}}^{-2n{{\mathrm{{\text {Re}}\,}}}\varphi _-(-iy)}. \end{aligned}$$
(3.73)

Proof

We collect the results on \(D_1\) [see formula (3.70)], \(D_2\) [we use the fact that this function does not depend on n and formula (3.41)], \(j_1\) and \(j_2\) [formula (3.69)]. Then for some constant \(C_{1,\nu }\), we simplify the bound to

$$\begin{aligned} |\eta _1(iy)|\le C_{1,\nu } y^{\nu } \frac{1+(ny)^{1-2\nu }}{(1+(ny)^{1/2-\nu })^2}{\text {e}}^{-2n{{\mathrm{{\text {Re}}\,}}}\varphi _-(iy)} \le C_{\nu } y^{\nu } {\text {e}}^{-2n{{\mathrm{{\text {Re}}\,}}}\varphi _-(iy)}. \end{aligned}$$

Also,

$$\begin{aligned} \begin{aligned} |\eta _2(-iy)|&\le C_{2,\nu } n^{2\nu } y^{\nu } \frac{1+(ny)^{1-2\nu }}{\left( 1+(ny)^{1/2+\nu }\right) \left( 1+(ny)^{1/2-\nu }\right) } {\text {e}}^{-2n{{\mathrm{{\text {Re}}\,}}}\varphi _-(-iy)}\\&\le C'_{\nu } n^{2\nu } y^{\nu }\left( 1+(ny)^{1-2\nu }\right) {\text {e}}^{-2n{{\mathrm{{\text {Re}}\,}}}\varphi _-(-iy)}, \end{aligned} \end{aligned}$$

and the result follows. \(\square \)

3.8.4 Estimates for \(\Vert K_1\Vert \) and \(\Vert K_2\Vert \) as \(n\rightarrow \infty \)

In order to estimate the norms of \(K_1\) and \(K_2\), we need the weighted \(L^2\) norms of \(\eta _1\) and \(\eta _2\) appearing in formula (3.59). For this we use the estimate in Lemma 3.23 and the following bound on \(\varphi (z)\):

Lemma 3.24

For every \(s \in i \mathbb R\), we have

$$\begin{aligned} \begin{aligned} {{\mathrm{{\text {Re}}\,}}}\varphi _+(s) = {{\mathrm{{\text {Re}}\,}}}\varphi _-(s)&= -|s| \log |s| + |s| \log \left( 1+ \sqrt{1+|s|^2}\right) + \log \left( |s| + \sqrt{1+|s|^2}\right) \\&\ge |s| \log \frac{1}{|s|}. \end{aligned} \end{aligned}$$
(3.74)

Proof

We consider \({{\mathrm{{\text {Re}}\,}}}\varphi _-(s)\) with \(s \in i \mathbb R_+\). The other cases follow by symmetry. Let \(x \in (0,1)\). Then by (3.15) and (3.20),

$$\begin{aligned} \varphi _{\pm }(x)=\pm \pi i\int _x^1 \psi (t)\text {d}t, \end{aligned}$$

and so \(\varphi '_+(x) = - \pi i \psi (x)\). By analytic continuation, we find

$$\begin{aligned} \varphi '(z) = -\pi i \psi (z), \quad {{\mathrm{{\text {Re}}\,}}}z > 0, \, {{\mathrm{{\text {Im}}\,}}}z > 0. \end{aligned}$$

Then

$$\begin{aligned} \varphi _-(s) = \varphi _+(x) + \int _x^s \varphi '(z) \text {d}z = \varphi _+(x) - \pi i \int _x^s \psi (z) \text {d}z. \end{aligned}$$

Since \(\varphi _+(x)\) is purely imaginary, we obtain by taking the real part and letting \(x \rightarrow 0^+\),

$$\begin{aligned} {{\mathrm{{\text {Re}}\,}}}\varphi _-(s) = {{\mathrm{{\text {Im}}\,}}}\pi \int _0^s \psi (z) \text {d}z = {{\mathrm{{\text {Im}}\,}}}\int _0^s \log \left( \frac{1+ (1-z^2)^{1/2}}{z} \right) \text {d}z, \end{aligned}$$

where we used (2.10) for \(\psi \). The integral can be evaluated explicitly, and it gives (3.74). \(\square \)
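
For the reader's convenience, we record the evaluation used at the end of this proof (a routine integration by parts): with \(t = |s|\),

$$\begin{aligned} \int _0^{t} \log \left( \frac{1+\sqrt{1+u^2}}{u}\right) \mathrm{d}u = -t \log t + t \log \left( 1+\sqrt{1+t^2}\right) + \log \left( t+\sqrt{1+t^2}\right) , \end{aligned}$$

and the inequality in (3.74) follows since the last two terms are nonnegative.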

Without loss of generality, we assume in what follows that \(\rho \) is small enough so that \(|s|\log \frac{1}{|s|}>0\) for \(s\in (-i\rho ,i\rho )\), \(s \ne 0\).

In order to estimate integrals involving the functions \(\varphi _{\pm }(z)\), we use (3.74), together with the following technical lemma.

Lemma 3.25

For any \(\alpha >-1\), there exists a constant \(C = C_{\alpha }\) such that for n large enough,

$$\begin{aligned} \int _0^{1/e} y^{\alpha }\mathrm{{e}}^{-4n y\log \frac{1}{y}}\mathrm{d}y\le C (n\log n)^{-\alpha -1}. \end{aligned}$$
(3.75)

Proof

We split the integral into two parts, and we estimate

$$\begin{aligned} \int _0^{1/e} y^{\alpha }{\text {e}}^{-4n y\log \frac{1}{y}} \, \mathrm{d}y&=\int _0^{1/\sqrt{n}} y^{\alpha }{\text {e}}^{-4n y\log \frac{1}{y}} \,\mathrm{d}y +\int _{1/\sqrt{n}}^{1/e} y^{\alpha }{\text {e}}^{-4n y\log \frac{1}{y}} \, \mathrm{d}y \nonumber \\&\le \int _0^{1/\sqrt{n}} y^{\alpha }{\text {e}}^{-2 yn\log n} \, \mathrm{d}y +\int _{1/\sqrt{n}}^{1/e} y^{\alpha }{\text {e}}^{-2 \sqrt{n}\log {n}} \, \mathrm{d}y, \end{aligned}$$
(3.76)

where for the first integral we used that \(\log \frac{1}{y} \ge \log \sqrt{n} = \frac{1}{2} \log n\) for \(0 < y \le \frac{1}{\sqrt{n}}\), and for the second integral we used that \(y \log \frac{1}{y}\) is increasing on \([0, \frac{1}{e}]\), so that \(y \log \frac{1}{y} \ge \frac{1}{\sqrt{n}} \log \sqrt{n}\) for \(y \in [\frac{1}{\sqrt{n}}, \frac{1}{e}]\). The first integral of (3.76) is estimated by extending the integral to \(+\infty \), which shows that it is \(\mathcal {O}((n\log n)^{-\alpha -1})\) as \(n \rightarrow \infty \). The second integral in (3.76) is \(\mathcal {O}({\text {e}}^{-c \sqrt{n}})\) as \(n \rightarrow \infty \). This gives the result. \(\square \)
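
The rate in (3.75) can also be observed numerically. A short sketch (the exponent \(\alpha \) and the values of n below are arbitrary choices of ours) computes the integral by quadrature and compares it with \((n \log n)^{-\alpha -1}\):

```python
import numpy as np
from scipy.integrate import quad

alpha = 0.5                                     # any alpha > -1
for n in [10, 100, 1000, 10000]:
    val, _ = quad(lambda y: y ** alpha * np.exp(-4 * n * y * np.log(1 / y)),
                  0, 1 / np.e, points=[1 / (n * np.log(n)), 1 / n])
    # The ratio stays bounded, consistent with the O((n log n)^(-alpha-1)) rate:
    print(n, val * (n * np.log(n)) ** (alpha + 1))
```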

Combining the estimates in (3.72), (3.73), (3.74), and (3.75), we obtain, whenever \(2 \varepsilon < \frac{1}{e}\),

$$\begin{aligned} \begin{aligned} \int _0^{2\varepsilon } |\eta _1(iy)|^2 \mathrm{d}y&=\mathcal {O}\left( n^{-2\nu -1}(\log n)^{-2\nu -1}\right) ,\\ \int _0^{2\varepsilon } \frac{|\eta _1(iy)|^2}{y} \mathrm{d}y&=\mathcal {O}\left( n^{-2\nu }(\log n)^{-2\nu }\right) , \end{aligned} \end{aligned}$$
(3.77)

and

$$\begin{aligned} \begin{aligned} \int _0^{2\varepsilon } |\eta _2(-iy)|^2 \mathrm{d}y&=\mathcal {O}\left( n^{2\nu -1}(\log n)^{-2\nu -1}\right) ,\\ \int _0^{2\varepsilon } \frac{|\eta _2(-iy)|^2}{y} \mathrm{d}y&=\mathcal {O} \left( n^{2\nu }(\log n)^{-2\nu }\right) , \end{aligned} \end{aligned}$$
(3.78)

as \(n \rightarrow \infty \). To obtain (3.78), one has to consider the three different integrals coming from the square of the factor \(n^{2\nu }y^\nu +ny^{1-\nu }\) in (3.73), and retain the largest one.

Hence, using (3.59) and (3.77)–(3.78), we have the bounds

$$\begin{aligned} \begin{aligned} \Vert K_1\Vert&\le \left( \int _0^{2\varepsilon } \frac{|\eta _1(iy)|^2}{y} \mathrm{d}y\right) ^{1/2}=\mathcal {O}\left( n^{-\nu }(\log n)^{-\nu }\right) ,\\ \Vert K_2\Vert&\le \left( \int _0^{2\varepsilon } \frac{|\eta _2(-iy)|^2}{y} \mathrm{d}y\right) ^{1/2}=\mathcal {O}\left( n^{\nu }(\log n)^{-\nu }\right) . \end{aligned} \end{aligned}$$
(3.79)

Thus \(K_1\) and \(K_2\) are bounded operators between the Hilbert spaces \(L^2([0,2i\varepsilon ])\) and \(L^2([-2i \varepsilon ,0])\). In addition, from (3.79), we get

$$\begin{aligned} \Vert K_1 K_2\Vert \le \Vert K_1 \Vert \, \Vert K_2 \Vert = \mathcal {O}\left( (\log n)^{-2\nu }\right) , \quad n \rightarrow \infty , \end{aligned}$$
(3.80)

and similarly,

$$\begin{aligned} \Vert K_2 K_1\Vert =\mathcal {O}((\log n)^{-2\nu }), \quad n\rightarrow \infty . \end{aligned}$$
(3.81)

3.8.5 Proof of Lemma 3.18

Proof

It follows from (3.80) and (3.81) that the operators \(I-K_2K_1\) and \(I-K_1K_2\) are invertible for n large enough, and then we can solve the Eqs. (3.57) and (3.58). Thus we define the entries of the matrix \(\widehat{P}\) as follows:

$$\begin{aligned} \widehat{P}_{11}&= (I-K_2K_1)^{-1} 1,&\,&\qquad \widehat{P}_{12}=K_1\widehat{P}_{11}, \end{aligned}$$
(3.82)
$$\begin{aligned} \widehat{P}_{21}&= K_2 \widehat{P}_{22},&\,&\qquad \widehat{P}_{22} =\left( I-K_1K_2\right) ^{-1} 1. \end{aligned}$$
(3.83)

In (3.82) and (3.83), we use 1 to denote the identically-one function in \(L^2([0, 2 i \varepsilon ])\) and \(L^2([-2i \varepsilon ,0])\), respectively. Then (3.57) and (3.58) hold true, which means that the equations in (3.56) hold. This then also means that the jump condition (3.54) in the RH problem 3.17 is satisfied.
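
Conceptually, (3.82)–(3.83) solve two integral equations of the second kind by inverting \(I - K_2K_1\) and \(I - K_1K_2\). The following toy discretization is a minimal sketch of this step: the midpoint rule, the model amplitudes standing in for (3.55), and all parameter values are our own assumptions for illustration, not the actual construction.

```python
import numpy as np

eps, m, n, nu = 0.1, 400, 200, 0.3
h = 2 * eps / m
y = (np.arange(m) + 0.5) * h              # midpoint grid for |s| on (0, 2*eps)

# Model amplitudes mimicking the bounds (3.72)-(3.73); purely illustrative.
decay = np.exp(-2 * n * y * np.log(1 / y))
eta1 = -(y ** nu) * decay                                      # on [0, 2i*eps]
eta2 = (n ** (2 * nu) * y ** nu + n * y ** (1 - nu)) * decay   # on [-2i*eps, 0]

# Midpoint-rule matrices; parametrizing s = iy', z = -iy (and vice versa), the
# kernels are nonsingular since s and z lie on opposite halves of the axis.
K1 = h / (2 * np.pi) * eta1[None, :] / (1j * (y[None, :] + y[:, None]))
K2 = h / (2 * np.pi) * eta2[None, :] / (-1j * (y[None, :] + y[:, None]))

P11 = np.linalg.solve(np.eye(m) - K2 @ K1, np.ones(m))    # (3.82), discretized
P12 = K1 @ P11
print(np.max(np.abs(P11 - 1)), np.max(np.abs(P12)))       # both small here
```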

The equations (3.56) allow us to give estimates on \(\widehat{P}(z)\). First of all, we obtain from (3.80)–(3.81), (3.82), and (3.83) that

$$\begin{aligned} \Vert \widehat{P}_{11}\Vert _{L^2([0,2i\varepsilon ])} = \mathcal {O}(1), \quad \Vert \widehat{P}_{22}\Vert _{L^2([-2i\varepsilon ,0])} = \mathcal {O}(1), \end{aligned}$$

and then by (3.79)

$$\begin{aligned} \Vert \widehat{P}_{12} \Vert _{L^2([-2i \varepsilon ,0])}&\le \Vert K_1 \Vert \, \Vert \widehat{P}_{11}\Vert _{L^2([0,2i\varepsilon ])} = \mathcal {O}\left( n^{-\nu }(\log n)^{-\nu }\right) ,\\ \Vert \widehat{P}_{21} \Vert _{L^2([0, 2i \varepsilon ])}&\le \Vert K_2 \Vert \, \Vert \widehat{P}_{22}\Vert _{L^2([-2i\varepsilon ,0])} =\mathcal {O}\left( n^{\nu }(\log n)^{-\nu }\right) . \nonumber \end{aligned}$$
(3.84)

For pointwise estimates, we use the distances

$$\begin{aligned} d_+(z)={{\mathrm{dist}}}(z,[0,2i\varepsilon ]), \quad d_-(z) ={{\mathrm{dist}}}(z,[-2i\varepsilon ,0]). \end{aligned}$$

Then by the first equation in (3.56), we get for \(z \in \mathbb C \setminus [-2i\varepsilon ,0]\),

$$\begin{aligned} |\widehat{P}_{11}(z) - 1|&\le \frac{1}{2\pi d_-(z)} \int _{-2i \varepsilon }^0 \left| \eta _2(s) \widehat{P}_{12}(s) \right| |\mathrm{d}s| \le \frac{1}{2\pi d_-(z)} \Vert \eta _2\Vert _2 \, \Vert \widehat{P}_{12} \Vert _2, \end{aligned}$$

where we used the Cauchy–Schwarz inequality, and \(\Vert \cdot \Vert _2\) is the \(L^2\) norm on \([-2i\varepsilon ,0]\). Thus by (3.78) and (3.84),

$$\begin{aligned} |\widehat{P}_{11}(z) - 1| = \frac{1}{d_-(z)} \, \mathcal {O} \left( n^{-1/2} (\log n)^{-2\nu -1/2}\right) , \end{aligned}$$
(3.85)

as \(n \rightarrow \infty \), uniformly for \(z\in \mathbb C \setminus [-2i\varepsilon ,0]\). Using similar arguments, we obtain

$$\begin{aligned} |\widehat{P}_{12}(z)|&= \frac{1}{d_+(z)} \mathcal {O} \left( n^{-\nu -1/2}(\log n)^{-\nu -1/2} \right) , \end{aligned}$$
(3.86)
$$\begin{aligned} |\widehat{P}_{21}(z) |&=\frac{1}{d_-(z)} \mathcal {O} \left( n^{\nu -1/2} (\log n)^{-\nu -1/2} \right) , \end{aligned}$$
(3.87)
$$\begin{aligned} |\widehat{P}_{22}(z) - 1|&= \frac{1}{d_+(z)} \mathcal {O} \left( n^{-1/2} (\log n)^{-2\nu -1/2}\right) , \end{aligned}$$
(3.88)

as \(n \rightarrow \infty \), and the \(\mathcal {O}\) terms are uniform in z. Observe that all \(\mathcal {O}\) terms tend to 0 as \(n \rightarrow \infty \), since \(\nu \le 1/2\).

It follows from (3.85)–(3.88) that \(\widehat{P}(z) = I + \mathcal {O}(z^{-1})\) as \(z \rightarrow \infty \), and therefore \(\widehat{P}\) satisfies the RH problem 3.17. For \(|z| = 3 \varepsilon \), we have \(d_{\pm }(z) \ge \varepsilon \). From (3.85)–(3.88), we then immediately find that the estimates in Lemma 3.18 hold, and the lemma is proved. \(\square \)

This also completes the proof of Proposition 3.16.

3.9 Final Transformation

Having P as in Proposition 3.16, we define the final transformation \(Q \mapsto R\) as

$$\begin{aligned} R(z)={\left\{ \begin{array}{ll} Q(z) &{} {{\text {for }}} |z| > 3 \varepsilon , \\ Q(z)P(z)^{-1} &{} {{\text {for }}} |z| < 3 \varepsilon . \end{array}\right. } \end{aligned}$$
(3.89)

Recall that Q is the solution of the RH problem 3.13.

Then R has jumps on a contour \(\varSigma _R\) that consists of \(\varSigma _Q \setminus (-i \varepsilon , i \varepsilon )\) together with the circle of radius \(3 \varepsilon \) around 0, see Fig. 5. Note that the jumps of P and Q coincide on \((-i\varepsilon , i \varepsilon )\), so that R has an analytic continuation across that interval.

Fig. 5 Oriented contour \(\varSigma _R\), consisting of \(\varSigma _Q\) minus the interval \((-i \varepsilon , i \varepsilon )\) on the imaginary axis, together with the circle of radius \(3 \varepsilon \) around 0

From RH problem 3.13 and the definition (3.89), it follows that R satisfies the following RH problem.

RH problem 3.26

  1. \(R : \mathbb C \setminus \varSigma _R \rightarrow \mathbb C^{2\times 2}\) is analytic.

  2. R satisfies the jump condition \(R_+ = R_- J_R\) on \(\varSigma _R\), where

    $$\begin{aligned} J_R(z) = {\left\{ \begin{array}{ll} J_Q(z) &{} \text { for } z \in \varSigma _R \text { with } |z| > 3 \varepsilon , \\ P(z)^{-1} &{} \text { for } |z| = 3 \varepsilon , \\ P_-(z) J_Q(z) P^{-1}_+(z) &{} \text { for } z \in (-3i \varepsilon , - i \varepsilon ) \cup (i \varepsilon , 3i \varepsilon ). \end{array}\right. } \end{aligned}$$
    (3.90)

  3. As \(z\rightarrow \infty \),

    $$\begin{aligned} R(z)=I+\mathcal {O}(1/z). \end{aligned}$$

In order to solve this RH problem asymptotically for large n, we need to show that the jump matrices for R(z) are close to the identity matrix uniformly for \(z\in \varSigma _R\), see Fig. 5.

Lemma 3.27

The jump matrix \(J_R\) in the RH problem for R satisfies for some constant \(c > 0\),

$$\begin{aligned} J_R(z) = {\left\{ \begin{array}{ll} I + \mathcal {O}(\epsilon _n) &{} \text { for } |z| = 3 \varepsilon , \\ I + \mathcal {O}(1/n) &{} \text { for } |z\pm 1| = \delta , \\ I + \mathcal {O}({\text {e}}^{-cn}) &{} \text { elsewhere on}\,\varSigma _R, \end{array}\right. } \end{aligned}$$
(3.91)

as \(n \rightarrow \infty \), where the \(\mathcal {O}\) terms are uniform.

Proof

For \(z\in \varSigma _R\) with \(|z|>3\varepsilon \), we have \(J_R(z)=J_Q(z)\). On the boundary of the disks around the endpoints, we have \(J_Q(z)=I+\mathcal {O}(n^{-1})\), see (3.47), and on the rest of \(\varSigma _R\) except \((-i\rho ,i\rho )\), we have \(J_Q(z)=I+\mathcal {O}({\text {e}}^{-cn})\) for some \(c>0\), see (3.48).

On the circle \(|z|=3\varepsilon \), the jump is \(J_R(z)=P(z)^{-1}\). We use (3.52) and the fact that \(\widehat{P}(z)=I+\mathcal {O}(\epsilon _n)\), uniformly for \(|z|=3\varepsilon \), to find that

$$\begin{aligned} J_R(z) = P(z)^{-1} = I+\mathcal {O}(\epsilon _n), \end{aligned}$$

as given in (3.91).

For \(z \in (3i\varepsilon ,i\rho )\), we get from (3.90) and (3.49),

$$\begin{aligned} J_R(z)=J_Q(z)=D_{\infty }^{\sigma _3}N_0(z)\begin{pmatrix} 1 &{}\quad 0\\ j_1(z)(D_1(z)D_2(z))^2 &{}\quad 1\end{pmatrix} N_0^{-1}(z)D_{\infty }^{-\sigma _3}. \end{aligned}$$

From (3.72) and (3.74), we obtain for \(y \in [0, \rho ]\),

$$\begin{aligned} |j_1(iy)(D_1(iy)D_2(iy))^2|\le C_{\nu } y^{\nu }{\text {e}}^{-2n y}, \quad C_{\nu } > 0. \end{aligned}$$

We also use (3.34), and then (3.91) for \(z \in (3i \varepsilon , i \rho )\) follows. The case \(z \in (-i \rho , -3i \varepsilon )\) can be handled in a similar way.

What is left are the intervals \((i\varepsilon ,3i\varepsilon )\) and \((-3i \varepsilon , -i \varepsilon )\). For \(z \in (i\varepsilon , 3i \varepsilon )\), we find from (3.90) and (3.52) that

$$\begin{aligned} J_R(z)= & {} D_{\infty }^{\sigma _3} N_0(z) \begin{pmatrix} 0 &{}\quad -1 \\ 1 &{}\quad 0\end{pmatrix} \widehat{P}_-(z) \begin{pmatrix} 1 &{}\quad -j_1(z)(D_1(z)D_2(z))^2\\ 0 &{}\quad 1\end{pmatrix}\\&\times \widehat{P}^{-1}_+(z) \begin{pmatrix} 0 &{}\quad 1 \\ -1 &{}\quad 0\end{pmatrix} N_0(z)^{-1}D_{\infty }^{-\sigma _3}. \end{aligned}$$

Using (3.53)–(3.54), we rewrite this as

$$\begin{aligned} J_R(z)= & {} I - j_1(z)(D_1(z)D_2(z))^2(1-\chi (z)) D_{\infty }^{\sigma _3} N_0(z) \begin{pmatrix} 0 &{}\quad -1 \\ 1 &{}\quad 0\end{pmatrix} \widehat{P}_+(z) \begin{pmatrix} 0 &{}\quad 1 \\ 0 &{}\quad 0 \end{pmatrix} \nonumber \\&\times \widehat{P}^{-1}_+(z) \begin{pmatrix} 0 &{}\quad 1 \\ -1 &{}\quad 0\end{pmatrix} N_0(z)^{-1}D_{\infty }^{-\sigma _3}. \end{aligned}$$
(3.92)

Here we note that \(\det \widehat{P}(z) = 1\), which follows by standard arguments from the RH problem 3.17, and therefore \(\widehat{P}^{-1}_+ = \begin{pmatrix} \widehat{P}_{22} &{}\quad - \widehat{P}_{12} \\ -\widehat{P}_{21} &{}\quad \widehat{P}_{11} \end{pmatrix}_+\). Then a little calculation shows that (3.92) reduces to

$$\begin{aligned} J_R(z)&=I + j_1(z)(D_1(z)D_2(z))^2(1-\chi (z)) \varLambda (z), \quad z \in (i\varepsilon , 3 i \varepsilon ), \end{aligned}$$
(3.93)

where

$$\begin{aligned} \varLambda (z)=D_{\infty }^{\sigma _3}N_0(z) \begin{pmatrix} -\widehat{P}_{11}(z)\widehat{P}_{21}(z) &{}\quad -\widehat{P}_{21}(z)^2 \\ \widehat{P}_{11}(z)^2 &{}\quad \widehat{P}_{11}(z)\widehat{P}_{21}(z) \end{pmatrix} N_0^{-1}(z)D_{\infty }^{-\sigma _3}. \end{aligned}$$

The functions \(\widehat{P}_{11}\) and \(\widehat{P}_{21}\) are analytic on \((i\varepsilon , 3 i \varepsilon )\) and so we do not have to take the \(+\)-boundary value.

Then it follows from (3.34) and the estimates in (3.85) and (3.87) that all entries in \(\varLambda \) are uniformly bounded as \(n \rightarrow \infty \). Then by (3.72) and (3.93), we find (3.91) for \(z \in (i \varepsilon , 3 i \varepsilon )\). A similar argument shows that \(J_R(z)\) is exponentially close to the identity matrix for \(z \in (-3i \varepsilon , -i \varepsilon )\) as well, and the lemma follows. \(\square \)

As a consequence of (3.91), the dominant bound for \(J_R - I\) is the one on the circle \(|z|=3\varepsilon \). For \(0<\nu \le 1/2\), the jump matrix satisfies [recall that \(\epsilon _n\) is given by (2.3)]

$$\begin{aligned} J_R(z) = I + \mathcal {O}(\epsilon _n), \quad n \rightarrow \infty , \end{aligned}$$

uniformly for \(z\in \varSigma _R\), where \(\varSigma _R\) is the union of contours depicted in Fig. 5. Note that \(J_R(z) \rightarrow I\) as \(n \rightarrow \infty \), but the rate of convergence is remarkably slow.

Following standard arguments, we now find that for n sufficiently large, the RH problem 3.26 for R is solvable, and

$$\begin{aligned} R(z) = I + \mathcal {O}(\epsilon _n), \quad n\rightarrow \infty , \end{aligned}$$
(3.94)

uniformly for \(z \in \mathbb C \setminus \varSigma _R\). The convergence rate in (3.94) may not be optimal, since some of the bounds in the analysis may not be as sharp as possible. Note that for \(\nu = 1/2\), we only have \(R(z) = I + \mathcal {O}(\frac{1}{\log n})\), which is a very slow convergence.

Since all of the transformations \(X \mapsto U \mapsto T \mapsto S \mapsto Q \mapsto R\) are invertible, we then also find that the RH problem for X is solvable for n large enough. In particular, we find that the polynomial \(P_n = X_{11}\) exists for n large enough.

4 Proofs of the Theorems

4.1 Proof of Theorem 2.6

Proof

Following the transformations of the Deift–Zhou steepest descent analysis and using formula (3.94), we obtain asymptotic information about \(\widetilde{P}_n(z) = U_{11}(z)\) in the complex plane, see (3.14) and (2.4). Consider the region in Fig. 5 which is outside the lens and outside of the disks around \(z=\pm 1\). In this case, \(U_{11}(z)=T_{11}(z){\text {e}}^{ng(z)}\), and by (3.21), (3.26), (3.46), (3.89),

$$\begin{aligned} T(z)=S(z)=Q(z)N(z)=R(z)N(z), \end{aligned}$$

which means that

$$\begin{aligned} \begin{aligned} \widetilde{P}_n(z) {\text {e}}^{-ng(z)}&= T_{11}(z) = R_{11}(z)N_{11}(z)+R_{12}(z)N_{21}(z)\\&= N_{11}(z)(1+\mathcal {O}(\epsilon _n)) + N_{21}(z) \mathcal {O}(\epsilon _n), \end{aligned} \end{aligned}$$
(4.1)

using (3.94). Here \(\epsilon _n\) is given again by (2.3). We observe from (3.42) that \(N_{11}=D_{\infty }N_{0,11} (D_1D_2)^{-1}\), and using (3.33), (3.34), (3.39), and (3.43), we get

$$\begin{aligned} N_{11}(z)=\left( \frac{z(z+(z^2-1)^{1/2})}{2(z^2-1)}\right) ^{1/4}\left( \frac{(z^2-1)^{1/2}-i}{(z^2-1)^{1/2}+i}\right) ^{-\nu /4} \left( 1+\mathcal {O} \left( \frac{\log n}{n}\right) \right) , \end{aligned}$$
(4.2)

as \(n\rightarrow \infty \). Similarly, we also see that \(N_{21}(z) = \mathcal {O}(1)\) as \(n \rightarrow \infty \), and (2.9) follows.

Since the lens can be taken arbitrarily close to the interval \([-1,1]\) and the disks can be taken arbitrarily small, the asymptotics (2.9) is valid uniformly on any compact subset of \(\mathbb C \setminus [-1,1]\). This proves Theorem 2.6. \(\square \)

4.2 Proof of Theorem 2.7

Proof

Inside the lens, but away from the endpoints and the origin, we use the relation (3.26) between the functions T(z) and S(z). Let z be in the lens with \({{\mathrm{{\text {Re}}\,}}}z > 0\). Then we have

$$\begin{aligned} T_{11}(z)=S_{11}(z) \pm S_{12}(z)\frac{{\text {e}}^{\frac{\nu \pi i}{2}-2n\varphi (z)}}{W_n(z)} \end{aligned}$$

for \(\pm {{\mathrm{{\text {Im}}\,}}}z > 0\), and therefore

$$\begin{aligned} \widetilde{P}_n(z) = {\text {e}}^{ng(z)} T_{11}(z) ={\text {e}}^{ng(z)} \left[ S_{11}(z) \pm S_{12}(z)\frac{{\text {e}}^{\frac{\nu \pi i}{2}-2n\varphi (z)}}{W_n(z)} \right] . \end{aligned}$$

Since \(S(z)=Q(z)N(z)\) away from the endpoints, and \(Q(z)=R(z)\) away from the origin (if \(|z|>3\varepsilon \)), see (3.46) and (3.89), we obtain

$$\begin{aligned} \widetilde{P}_n(z) = {\text {e}}^{n g(z)} \left[ N_{11}(z) \pm N_{12}(z)\frac{{\text {e}}^{\frac{\nu \pi i}{2}-2n\varphi (z)}}{W_n(z)}+\mathcal {O}(\epsilon _n) \right] \end{aligned}$$
(4.3)

for \({{\mathrm{{\text {Re}}\,}}}z \ge 0\), and \(\pm {{\mathrm{{\text {Im}}\,}}}z > 0\).

We are going to simplify the expression (4.3), and we do it for \({{\mathrm{{\text {Re}}\,}}}z > 0\), \({{\mathrm{{\text {Im}}\,}}}z > 0\). First we use (3.18), (3.19), and (3.17) in (4.3) to get

$$\begin{aligned} \begin{aligned}&\widetilde{P}_n(z) = \frac{{\text {e}}^{\frac{n \pi z}{2}}}{(2e)^n W_n(z)^{1/2}}\\&\quad \quad \quad \quad \times \left[ N_{11}(z) W_n(z)^{1/2} {\text {e}}^{n \varphi (z)} + \frac{N_{12}(z)}{W_n(z)^{1/2}} {\text {e}}^{\frac{\nu \pi i}{2}-n\varphi (z)} +\mathcal {O}(\epsilon _n) \right] . \end{aligned} \end{aligned}$$

From (3.42) we have \(N_{11} = D_{\infty } N_{0,11} (D_1 D_2)^{-1}\), \(N_{12} = D_{\infty } N_{0,12} D_1 D_2\) and so

$$\begin{aligned} \begin{aligned}&\widetilde{P}_n(z) = \frac{D_{\infty } {\text {e}}^{\frac{n \pi z}{2}+\frac{\nu \pi i}{4}}}{(2e)^n W_n(z)^{1/2}}\\&\quad \quad \quad \quad \times \left[ \frac{N_{0,11}(z) W_n(z)^{1/2}}{D_1(z) D_2(z)} {\text {e}}^{-\frac{\nu \pi i}{4} + n \varphi (z)}+ \frac{N_{0,12}(z) D_1(z) D_2(z)}{W_n(z)^{1/2}} {\text {e}}^{\frac{\nu \pi i}{4}-n\varphi (z)} +\mathcal {O}(\epsilon _n) \right] . \end{aligned} \end{aligned}$$

Next we use (3.43) to write

$$\begin{aligned} N_{0,11}(z)= {\text {e}}^{-\frac{\pi i }{4}} \frac{f(z)^{1/2}}{\sqrt{2} (1-z^2)^{1/4}}, \quad N_{0,12}(z)= {\text {e}}^{\frac{\pi i}{4}} \frac{f(z)^{-1/2}}{\sqrt{2} (1-z^2)^{1/4}}, \end{aligned}$$

where \((1-z^2)^{1/4}\) denotes the branch that is real and positive for \(-1 < z < 1\) and f(z) is given by (3.44). Thus

$$\begin{aligned} \begin{aligned}&\widetilde{P}_n(z) = \frac{D_{\infty } {\text {e}}^{\frac{n \pi z}{2}+\frac{\nu \pi i}{4}}}{\sqrt{2} (2e)^n (1-z^2)^{1/4} W_n(z)^{1/2}} \\&\quad \times \left[ \frac{(f(z)W_n(z))^{1/2}}{D_1(z) D_2(z)} {\text {e}}^{n \varphi (z)-\frac{(\nu +1)\pi i}{4}} + \frac{D_1(z) D_2(z)}{ (f(z)W_n(z))^{1/2}} {\text {e}}^{-n\varphi (z)+\frac{(\nu +1)\pi i}{4}} +\mathcal {O} \left( \epsilon _n \right) \right] . \end{aligned} \end{aligned}$$
(4.4)

The first two terms inside the brackets are inverses of each other. We write all contributing factors in exponential form. We have by (3.20), (3.15), and (3.40),

$$\begin{aligned} {\text {e}}^{n \varphi (z)}&= \exp \left( \pi i n \int _z^1 \psi (s) \mathrm{d}s\right) , \end{aligned}$$
(4.5)
$$\begin{aligned} D_2(z) {\text {e}}^{\frac{\nu \pi i}{4}}&= \exp \left( - \frac{\nu \pi }{2} \psi (z)\right) \end{aligned}$$
(4.6)

for \({{\mathrm{{\text {Re}}\,}}}z > 0\), \({{\mathrm{{\text {Im}}\,}}}z > 0\), and we note that by (3.25) and (3.33),

$$\begin{aligned} \frac{W_n(z)^{1/2}}{D_1(z)} = f(z)^{-1/4} \left( 1 + \mathcal {O}\left( \frac{\log n}{n}\right) \right) \end{aligned}$$
(4.7)

as \(n \rightarrow \infty \). Finally, we write

$$\begin{aligned} f(z)^{1/2} = {\text {e}}^{\frac{i}{2} \arccos z}, \quad {{\mathrm{{\text {Im}}\,}}}z > 0, \end{aligned}$$
(4.8)

and inserting (4.5)–(4.8) into (4.4), we find (2.12), where we also use (3.25) and (3.34) to simplify the first factor.

A similar calculation leads to the same formula (2.12) for \(z \in E\) with \({{\mathrm{{\text {Re}}\,}}}z > 0\) and \({{\mathrm{{\text {Im}}\,}}}z < 0\). \(\square \)

4.3 Proof of Theorem 2.1

Proof

It follows from (4.1) and (4.2) that the leading factor in the outer asymptotics of \(P_n(in\pi z)\) does not vanish for \(z \in \mathbb C \setminus [-1,1]\).

Let \(\widetilde{P}_n(z) = (in\pi )^{-n} P_n(in\pi z)\) be the monic polynomial. Then we find from (2.8) that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n} \log | \widetilde{P}_n(z)| = {{\mathrm{{\text {Re}}\,}}}g(z) = \int _{-1}^1 \log |z-x| \psi (x) {\text {d}}x, \end{aligned}$$
(4.9)

uniformly for z in compact subsets of \(\mathbb C \setminus [-1,1]\). This implies that for any given compact subset \(K \subset \overline{\mathbb {C}}\setminus [-1,1]\), the polynomial \(\widetilde{P}_n\) does not have any zeros in K for n large enough. In other words, all zeros of \(\widetilde{P}_n\) tend to the interval \([-1,1]\) as \(n \rightarrow \infty \).

In addition, we find from (4.9) that the zeros of \(\widetilde{P}_n\) have \(\psi (x)\) as limiting density. This follows from standard arguments in potential theory, see, e.g., [19]. This proves Theorem 2.1. \(\square \)
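
Since the proof identifies \(\psi \) as the limiting zero density, one can check directly that it has total mass 1. A short verification (using the expression for \(\pi \psi \) from (2.10), as it is employed in the proof of Lemma 3.24; the quadrature routine is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad

# psi(x) = (1/pi) log((1 + sqrt(1 - x^2)) / |x|) on (-1,1), from (2.10);
# the logarithmic singularity at x = 0 is integrable.
psi = lambda x: np.log((1 + np.sqrt(1 - x ** 2)) / abs(x)) / np.pi
total, _ = quad(psi, -1, 1, points=[0.0])
print(total)   # 1.0, so psi is a probability density on [-1,1]
```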

4.4 Proof of Theorem 2.2

Let E be the neighborhood of \((-1,1)\) as in Theorem 2.7. Theorem 2.2 will follow from the asymptotic approximation (4.3) that is valid uniformly for z in

$$\begin{aligned} E_{\delta } = E \setminus \left( D(-1, \delta ) \cup D(0,\delta ) \cup D(1, \delta )\right) , \end{aligned}$$

with \({{\mathrm{{\text {Re}}\,}}}z \ge 0\).

Lemma 4.1

There is a constant \(C > 0\) such that for large n, all zeros in \(E_{\delta }\) satisfy

$$\begin{aligned} \left| {{\mathrm{{\text {Re}}\,}}}\frac{\nu \pi }{2} \psi (z) - {{\mathrm{{\text {Im}}\,}}}\theta _n(z) \right| < C \epsilon _n. \end{aligned}$$
(4.10)

Proof

It is enough to consider \({{\mathrm{{\text {Re}}\,}}}z \ge 0\).

Let

$$\begin{aligned} F_n(z) = \exp \left( \frac{\nu \pi }{2} \psi (z) + i \theta _n(z)\right) . \end{aligned}$$

Then by (2.12), we have that zeros of \(\widetilde{P}_n\) in \(E_{\delta }\) with \({{\mathrm{{\text {Re}}\,}}}z > 0\) are in the region where

$$\begin{aligned} F_n(z) \left( 1+ \mathcal {O}\left( \frac{\log n}{n}\right) \right) + F_n(z)^{-1}\left( 1 + \mathcal {O}\left( \frac{\log n}{n}\right) \right) = \mathcal {O}(\epsilon _n). \end{aligned}$$

This leads to

$$\begin{aligned} F_n(z) + F_n(z)^{-1}= \mathcal {O}(\epsilon _n), \end{aligned}$$

and so there is a constant \(C > 0\) such that all zeros in \(E_{\delta }\) satisfy

$$\begin{aligned} |F_n(z) + F_n(z)^{-1}| \le C \epsilon _n \end{aligned}$$
(4.11)

if n is large enough.

Note that

$$\begin{aligned} |F_n(z)| = \exp \left( {{\mathrm{{\text {Re}}\,}}}\frac{\nu \pi }{2} \psi (z) - {{\mathrm{{\text {Im}}\,}}}\theta _n(z) \right) . \end{aligned}$$

Thus if (4.10) is not satisfied, then either \( |F_n(z)| \ge \exp (C \epsilon _n)\) or \(|F_n(z)| \le \exp (-C \epsilon _n)\). In both cases, it follows that

$$\begin{aligned} |F_n(z) + F_n(z)^{-1}| \ge {\text {e}}^{C \epsilon _n} - {\text {e}}^{-C \epsilon _n} \ge 2 C \epsilon _n. \end{aligned}$$

Because of (4.11), this cannot happen for zeros of \(\widetilde{P}_n\) in \(E_{\delta }\) if n is large enough, and the lemma follows. \(\square \)

The lemma is the main ingredient to prove Theorem 2.2.

Proof

(Proof of Theorem 2.2) In the proof, we use \(c_1, c_2, \ldots \), to denote positive constants that do not depend on n or z. The constants will depend on \(\delta > 0\).

It is easy to see from the definition (2.11) that \(\theta _n'(x) \le -c_1 n < 0\) for \(x \in (0, 1- \delta )\). This implies that for some constant \(c_2 > 0\),

$$\begin{aligned} {{\mathrm{{\text {Im}}\,}}}\theta _n(z) {\left\{ \begin{array}{ll} \le - c_2 n {{\mathrm{{\text {Im}}\,}}}z &{} \text { for } z \in E_{\delta }, {{\mathrm{{\text {Re}}\,}}}z >0, {{\mathrm{{\text {Im}}\,}}}z \ge 0, \\ \ge c_2 n |{{\mathrm{{\text {Im}}\,}}}z| &{} \text { for } z \in E_{\delta }, {{\mathrm{{\text {Re}}\,}}}z > 0, {{\mathrm{{\text {Im}}\,}}}z < 0. \end{array}\right. } \end{aligned}$$
(4.12)

There are also constants \(c_3, c_4 > 0\) such that

$$\begin{aligned} c_3 < {{\mathrm{{\text {Re}}\,}}}\frac{\nu \pi }{2} \psi (z) < c_4, \quad z \in E_{\delta }, {{\mathrm{{\text {Re}}\,}}}z > 0, \end{aligned}$$
(4.13)

see (2.10). Thus if \({{\mathrm{{\text {Im}}\,}}}z \ge 0\), then by (4.12) and (4.13),

$$\begin{aligned} \left| {{\mathrm{{\text {Re}}\,}}}\frac{\nu \pi }{2} \psi (z) - {{\mathrm{{\text {Im}}\,}}}\theta _n(z) \right| \ge c_2 n {{\mathrm{{\text {Im}}\,}}}z + c_3 \ge c_3 > 0, \end{aligned}$$

and thus there are no zeros in \(E_{\delta }\) with \({{\mathrm{{\text {Im}}\,}}}z \ge 0\) by Lemma 4.1 if n is large enough.

For \({{\mathrm{{\text {Im}}\,}}}z \le 0\), we have by (4.12) and (4.13),

$$\begin{aligned} \left| {{\mathrm{{\text {Re}}\,}}}\frac{\nu \pi }{2} \psi (z) - {{\mathrm{{\text {Im}}\,}}}\theta _n(z) \right| \ge c_2 n |{{\mathrm{{\text {Im}}\,}}}z| - c_4. \end{aligned}$$

It follows from this and Lemma 4.1 that for large n, there are no zeros with \({{\mathrm{{\text {Im}}\,}}}z \le -\frac{c_5}{n}\) if \(c_5 > c_4/c_2\).

Now assume \(z\in E_{\delta }\) with \( - \frac{c_5}{n} < {{\mathrm{{\text {Im}}\,}}}z < 0\) and \({{\mathrm{{\text {Re}}\,}}}z > 0\). Write \( z = x + i y\). Then by Taylor expansion,

$$\begin{aligned} \frac{\nu \pi }{2} \psi (z) = \frac{\nu \pi }{2} \psi (x) + \mathcal {O}(1/n), \end{aligned}$$

and, see also (2.11),

$$\begin{aligned} \theta _n(z)&= \theta _n(x) + iy \theta _n'(x) + \mathcal {O}(1/n) \\&= \theta _n(x) - iy n \pi \psi (x) + \mathcal {O}(1/n), \end{aligned}$$

and the \(\mathcal {O}\) terms are uniform for z in the region considered.

Then since \(\psi (x)\) and \(\theta _n(x)\) are real, we have

$$\begin{aligned} {{\mathrm{{\text {Re}}\,}}}\frac{\nu \pi }{2} \psi (z) - {{\mathrm{{\text {Im}}\,}}}\theta _n(z)&= \frac{\nu \pi }{2} \psi (x) + yn \pi \psi (x) + \mathcal {O}(1/n) \\&= \left( \frac{\nu }{2} + ny \right) \pi \psi (x) + \mathcal {O}(1/n). \end{aligned}$$

Thus if \(|\frac{\nu }{2 } + ny| \ge c_6 \epsilon _n\), then by the above and (4.13),

$$\begin{aligned} \left| {{\mathrm{{\text {Re}}\,}}}\frac{\nu \pi }{2} \psi (z) - {{\mathrm{{\text {Im}}\,}}}\theta _n(z) \right| \ge \frac{2 c_6 c_3}{\nu } \epsilon _n + \mathcal {O}(1/n), \end{aligned}$$

and from Lemma 4.1, it follows that \(z = x + iy\) is not a zero if \(c_6\) is large enough.

Thus for large n, all zeros \(z = x + i y\) of \(\widetilde{P}_n\) in \(E_{\delta }\) satisfy

$$\begin{aligned} \left| \frac{\nu }{2 } + n y\right| \le c_6 \epsilon _n. \end{aligned}$$

Then \(in \pi z\) is a zero of \(P_n\), see (2.4), and the real part of this zero is \(-n \pi y\), which differs from \(\frac{\nu \pi }{2}\) by an amount less than \(\pi c_6 \epsilon _n\). This proves Theorem 2.2. \(\square \)