1 Introduction and Main Results

Consider an ensemble of hermitian \(n \times n\) random matrices of the form

$$\begin{aligned} M_n = (d_{jk} w_{jk})_{j,k = 1}^{n} \end{aligned}$$
(1.1)

where

$$\begin{aligned}&d_{jk} = p^{-1/2} {\left\{ \begin{array}{ll} 1\ \text {with probability}\, \dfrac{p}{n};\\ 0\ \text {with probability}\, 1 - \dfrac{p}{n}; \end{array}\right. } \nonumber \\&w_{jk} = w_{jk}^{(1)} + iw_{jk}^{(2)},\, j \ne k \end{aligned}$$
(1.2)

and \(\{w_{jk}^{(1)}, w_{jk}^{(2)}, w_{ll} : 1 \le j < k \le n, 1 \le l \le n\}\) are i.i.d. random variables with zero mean such that

$$\begin{aligned} 2\mathbf {E} \Bigg \{\Big |w_{jk}^{(1)}\Big |^2\Bigg \} = 2\mathbf {E} \Bigg \{\Big |w_{jk}^{(2)}\Big |^2\Bigg \} = \mathbf {E} \Big \{\big |w_{ll}\big |^2\Big \} = 1,\quad j \ne k. \end{aligned}$$
(1.3)

Here and everywhere below \(\mathbf {E}\) denotes the expectation with respect to all random variables. The random variables \(\{d_{jk} : j \le k\}\) are independent of each other and of \(\{w_{jk}^{(1)}, w_{jk}^{(2)}, w_{ll}\}\).
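Note that the normalization in (1.2) keeps the entries of \(M_n\) on the Wigner scale uniformly in p: since \(\{d_{jk}\}\) and \(\{w_{jk}\}\) are independent, (1.2) and (1.3) give

$$\begin{aligned} \mathbf {E}\big \{|(M_n)_{jk}|^2\big \} = \mathbf {E}\big \{d_{jk}^2\big \} \mathbf {E}\big \{|w_{jk}|^2\big \} = \frac{1}{p} \cdot \frac{p}{n} \cdot 1 = \frac{1}{n}, \quad j \ne k, \end{aligned}$$

so the variance profile of \(M_n\) coincides with that of the GUE for every p.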

These matrices are known as “weighted” adjacency matrices of random Erdős–Rényi \(G(n, \frac{p}{n})\) graphs, with \(\{d_{jk}\}\) corresponding to the standard adjacency matrix and \(\{w_{jk}\}\) being the set of independent weights, which we take to be Gaussian. These matrices have been widely discussed in recent years, since they exhibit a kind of interpolation between a “sparse” matrix with finite p, when there is only a finite number of nonzero elements in each row, and the matrix with \(p = n\), which coincides with the Gaussian unitary ensemble (GUE). The results on the convergence of the normalized eigenvalue counting measure

$$\begin{aligned} N_n(\Delta ) = \#\Big \{\lambda _j^{(n)} \in \Delta ,\, j = 1, \ldots , n\Big \}/n, \quad N_n(\mathbb {R}) = 1 \end{aligned}$$

of these matrices in the case of finite p were obtained in [20, 21] on the physical level of rigour, then in [1] for \(w_{jk} = 1\), and in [14] for arbitrary \(\{w_{jk}\}\) independent of \(\{d_{jk}\}\) and possessing four moments.

It was also shown that for \(p \rightarrow \infty \) the limiting eigenvalue distribution coincides with that of the GUE:

$$\begin{aligned} \lim \limits _{n \rightarrow \infty } N_n(\Delta ) = \int \limits _\Delta \rho _{sc}(\lambda )d\lambda , \qquad \rho _{sc}(\lambda ) = \frac{1}{2\pi }\sqrt{4 - \lambda ^2}\cdot \mathbbm {1}_{[-2, 2]}, \end{aligned}$$
(1.4)

while for finite p the limiting measure is a solution of a rather complicated nonlinear integral equation which is difficult to analyze. It is known that the support of the limiting measure (spectrum) is the whole real line. The Central Limit Theorem for linear eigenvalue statistics was proven in [26] for finite p and in [27] for \(p \rightarrow \infty \). For the local regime, the existence of a critical value \(p_c > 1\) was conjectured (see [5]) such that for \(p > p_c\) the eigenvalues are strongly correlated and are characterized by GUE statistics, while for \(p < p_c\) the eigenvalues are uncorrelated and follow Poisson statistics. The conjecture was confirmed by numerical calculations [15] and by the supersymmetry approach (SUSY) [8, 18] on the physical level of rigour. Notice that the results of the present paper confirm the existence of a similar threshold for the second correlation function of characteristic polynomials. Rigorous results for the local eigenvalue statistics were obtained recently in [4, 10]. First for \(p \gg n^{2/3}\) and then for \(p \gg n^\varepsilon \) with any \(\varepsilon > 0\) it was shown that the spectral correlation functions of sparse hermitian random matrices in the bulk of the spectrum converge in the weak sense to those of the GUE. For the edge of the spectrum, it was proved in [12] that for \(p \gg n^{2/3}\) the limiting probability \(P\{\max \limits _j\lambda _j^{(n)} > 2 + x/n^{2/3}\}\) admits a certain universal upper bound, whereas the result of [13] implies that for \(p \ll n^{1/5}\) the limiting probability \(P\{\max \limits _j\lambda _j^{(n)} > 2 + x/p\}\) is zero. Note that more advanced results for the edge eigenvalue statistics were obtained in [28] for so-called random d-regular graphs. It was shown that if \(3 \le d \ll n^{2/3}\) and \(w_{jk} = \pm 1\), then the scaled largest eigenvalue of (1.1) converges in distribution to the Tracy–Widom law.

The correlation functions of characteristic polynomials formally do not characterize the local eigenvalue statistics. However, from the SUSY point of view, their analysis is similar to that of the spectral correlation functions. Combined with the fact that the analysis of correlation functions of characteristic polynomials is usually simpler than that of spectral correlation functions, this makes such an analysis often the first step in the study of local regimes.

The moments of characteristic polynomials have been studied for many random matrix ensembles: for the Gaussian orthogonal ensemble in [6], for the circular unitary ensemble in [7, 11], for \(\beta \)-models with \(\beta = 2\) in [2, 29] and with \(\beta = 1,\,2,\,4\) in [3, 17]. Götze and Kösters in [9] studied the second order correlation function of the characteristic polynomials of a Wigner matrix with arbitrarily distributed entries possessing fourth moments, by the method of generating functions. The result was soon generalized to correlation functions of any even order by T. Shcherbina in [22], where a method was proposed which allows one to apply the SUSY technique (or the Grassmann integration technique) to the study of correlation functions of characteristic polynomials of random matrices with non-Gaussian entries. The proposed method turned out to be rather powerful and has since been successfully applied to the study of characteristic polynomials of sample covariance matrices (see [23]) and band matrices [24, 25].

In the present paper we apply the method of [22] to study characteristic polynomials of sparse matrices. To be more precise, let us introduce our main definitions. The mixed moments or the correlation functions of the characteristic polynomials have the form

$$\begin{aligned} F_{2m}(\Lambda ) = \mathbf {E}\left\{ \prod \limits _{j = 1}^{2m} \det (M_n - \lambda _j)\right\} , \end{aligned}$$
(1.5)

where \(\Lambda = {\text {diag}}\{\lambda _1, \ldots , \lambda _{2m}\}\) and \(\lambda _1, \ldots , \lambda _{2m}\) are real or complex parameters which may depend on n.

We are interested in the asymptotic behavior of (1.5) for matrices (1.1), as \(n \rightarrow \infty \), for

$$\begin{aligned} \lambda _j = \lambda _0 + \frac{x_j}{n}, \quad j = \overline{1, 2m}, \end{aligned}$$

where \(\lambda _0\), \(\{x_j\}_{j = 1}^{2m}\) are real numbers and the notation \(j = \overline{1, 2m}\) means that j runs from 1 to 2m.

Set also

$$\begin{aligned} D_{2m}(\Lambda ) = \frac{F_{2m}(\Lambda )}{\left( \prod \limits _{j = 1}^{2m}F_{2m}(\lambda _j I)\right) ^{1/2m}}, \qquad \lambda _*(p) = \left\{ \begin{array}{cc} \sqrt{4 - 8/p}, &{} \text {if } p > 2;\\ 0, &{} \text {if } p \le 2. \end{array}\right. \end{aligned}$$
(1.6)

Theorem 1

Let an ensemble of sparse random matrices be defined by (1.1)–(1.3) for finite p and let \(w_{jk}^{(1)},\, w_{jk}^{(2)}\) be Gaussian random variables. Then the correlation function of two characteristic polynomials (1.5) for \(m = 1\) satisfies the asymptotic relations

  1. (i)

    for \(\lambda _0 \in (-\lambda _*(p), \lambda _*(p))\)

    $$\begin{aligned} \lim _{n \rightarrow \infty } D_2\left( \Lambda \right) = \frac{\sin ((x_1-x_2)\sqrt{\lambda _*(p)^2-\lambda _0^2}/2)}{(x_1-x_2)\sqrt{\lambda _*(p)^2-\lambda _0^2}/2}; \end{aligned}$$
  2. (ii)

    for \(\lambda _0 \notin (-\lambda _*(p), \lambda _*(p))\)

    $$\begin{aligned} \lim _{n \rightarrow \infty } D_2\left( \Lambda \right) = 1, \end{aligned}$$

where \(D_2\) and \(\lambda _*(p)\) are defined in (1.6).

Remarks

  1. 1.

    The theorem shows that the second order correlation function has a threshold at \(p = 2\): if \(p > 2\) there are two types of asymptotic behavior, cases (i) and (ii), while if \(p \le 2\) there is only one type of asymptotic behavior, case (ii).

  2. 2.

    If we let \(\lambda _0\) depend on n, the asymptotic regimes (i) and (ii) agree with each other completely (see Lemma 4 below).

  3. 3.

    Note that \(\lambda _*(p) \rightarrow 2\) as \(p \rightarrow \infty \), while the limiting spectrum is always \([-2, 2]\) (see (1.4)). Therefore, for \(p \rightarrow \infty \) one expects GUE behavior for all \(\lambda _0 \in (-2, 2)\), i.e. for all \(\lambda _0\) in the interior of the limiting spectrum; we confirm this in Theorem 2.

Theorem 2

Let an ensemble of diluted random matrices be defined by (1.1)–(1.3) with \(p \rightarrow \infty \), and let \(w_{jk}^{(1)},\, w_{jk}^{(2)}\) be Gaussian random variables. Then the correlation function of characteristic polynomials (1.5) for \(\lambda _0 \in (-2, 2)\) satisfies the asymptotic relation

$$\begin{aligned} \lim \limits _{n \rightarrow \infty } D_{2m}(\Lambda ) = \frac{\hat{S}_{2m}(X)}{\hat{S}_{2m}(I)}, \end{aligned}$$

with \(X = {{\mathrm{diag}}}\{x_1, \ldots , x_{2m}\}\) and

$$\begin{aligned} \hat{S}_{2m}(X) = \frac{\det \left\{ \dfrac{\sin (\pi \rho _{sc}(\lambda _0)(x_j - x_{m + k}))}{\pi \rho _{sc}(\lambda _0)(x_j - x_{m + k})}\right\} _{j,k = 1}^m}{\Delta (x_1, \ldots , x_m)\Delta (x_{m + 1}, \ldots , x_{2m})} \end{aligned}$$
(1.7)

where \(\Delta (y_1, \ldots , y_m)\) is the Vandermonde determinant of \(y_1, \ldots , y_m\).

Notice that \(\hat{S}_{2m}(I)\) is well defined because the difference of rows \(j_1\) and \(j_2\) of the determinant in (1.7) is of order \(O(x_{j_1} - x_{j_2})\) as \(x_{j_1} \rightarrow x_{j_2}\). The same is true for columns.
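For \(m = 1\) the Vandermonde determinants in (1.7) are equal to 1 and the determinant in the numerator reduces to its single entry, so that \(\hat{S}_2(I) = 1\) and Theorem 2 reads

$$\begin{aligned} \lim \limits _{n \rightarrow \infty } D_2(\Lambda ) = \frac{\sin (\pi \rho _{sc}(\lambda _0)(x_1 - x_2))}{\pi \rho _{sc}(\lambda _0)(x_1 - x_2)}, \qquad \pi \rho _{sc}(\lambda _0) = \frac{1}{2}\sqrt{4 - \lambda _0^2}, \end{aligned}$$

which is exactly the assertion (i) of Theorem 1 with \(\lambda _*(p)\) replaced by its limiting value 2.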

To formulate our last result we introduce the Airy kernel

$$\begin{aligned} \mathbb {A}(x, y) = \frac{Ai(x)Ai'(y) - Ai'(x)Ai(y)}{x - y}, \end{aligned}$$
(1.8)

where Ai(x) is the Airy function.
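Note that the diagonal values of \(\mathbb {A}\), which enter the denominator in case (ii) of Theorem 3 below, are obtained from (1.8) by l'Hôpital's rule and the Airy equation \(Ai''(x) = xAi(x)\):

$$\begin{aligned} \mathbb {A}(x, x) = Ai'(x)^2 - xAi(x)^2 = \int \limits _x^{\infty } Ai(t)^2 dt > 0, \end{aligned}$$

so the square root in case (ii) of Theorem 3 is well defined.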

Theorem 3

Let an ensemble of diluted random matrices be defined by (1.1)–(1.3), \(p \rightarrow \infty \), and let \(w_{jk}^{(1)},\, w_{jk}^{(2)}\) be Gaussian random variables. Then the correlation function of two characteristic polynomials (1.5) for \(m = 1\) satisfies the asymptotic relations

  1. (i)

    If \(\frac{n^{2/3}}{p} \rightarrow \infty \)

    $$\begin{aligned} \lim _{n \rightarrow \infty } D_2(2I + n^{-2/3}X) = 1; \end{aligned}$$
  2. (ii)

    If \(\frac{n^{2/3}}{p} \rightarrow c\)

    $$\begin{aligned} \lim _{n \rightarrow \infty } D_2(2I + n^{-2/3}X) = \frac{\mathbb {A}(x_1 + 2c, x_2 + 2c)}{\sqrt{\mathbb {A}(x_1 + 2c, x_1 + 2c) \mathbb {A}(x_2 + 2c, x_2 + 2c)}}, \end{aligned}$$

where \(D_2\) is defined in (1.6) and \(\mathbb {A}\) is defined in (1.8). For \(\lambda _0 = -2\) similar assertions are also valid.

Remarks

  1. 1.

    Notice that the case \(p \gg n^{2/3}\) corresponds to the case \(c = 0\) in (ii).

  2. 2.

    The results of Theorem 3 are in good agreement with the results of [12, 13] in the sense that the asymptotic behavior changes when p crosses the threshold \(n^{2/3}\). However, in [13] it is argued that in the case \(p \ll n^{2/3}\) the appropriate scale is \(p^{-1}\) instead of \(n^{-2/3}\). We postpone the study of \(F_2\) with the scaling \(p^{-1}\), as well as the related study of \(F_2\) near \(\lambda _*(p)\) for finite p, to subsequent publications.

The paper is organized as follows. In Sect. 2 we obtain a convenient integral representation for \(F_{2m}\) using integration over the Grassmann variables and the Harish–Chandra/Itzykson–Zuber formula for integrals over the unitary group. Sections 3, 4 and 5 deal with the proofs of Theorems 1, 2 and 3, respectively. The proofs are based on the steepest descent method applied to the integral representation.

Notice also that everywhere below we denote by C various n-independent constants, which can be different in different formulas.

2 Integral Representation

To formulate the result of this section, we introduce the following notation:

  • $$\begin{aligned} \Delta ({{\mathrm{diag}}}\{y_j\}_{j = 1}^{k}) = \Delta (\{y_j\}_{j = 1}^{k})\; \text {is the Vandermonde determinant of} \{y_j\}_{j = 1}^{k}; \end{aligned}$$
    (2.1)
  • \({{\mathrm{End}}}V\) is the set of linear operators on a linear space V;

  • $$\begin{aligned} I_{n, k} = \Big \{\alpha \in \mathbb {Z}^k | 1 \le \alpha _1 < \ldots < \alpha _k \le n\Big \}. \end{aligned}$$
    (2.2)

    The lexicographical order on \(I_{n, k}\) is denoted by \(\prec \);

  • \(\mathcal {H}_{2m, l}\) is the space of self-adjoint operators in \({\text {End}} \Lambda ^l \mathbb {C}^{2m}\) (see [30, Chapter 8.4] for definition of \(\Lambda ^q V\));

  • $$\begin{aligned} dB = dP_l(B) = \prod \limits _{\alpha \in I_{2m, l}} dB_{\alpha \alpha } \prod \limits _{\alpha \prec \beta } d\mathfrak {R}B_{\alpha \beta } d\mathfrak {I}B_{\alpha \beta } \end{aligned}$$
    (2.3)

    is a measure on \(\mathcal {H}_{2m, l}\). \(B_{\alpha \beta }\) denotes the corresponding entry of the matrix of B in some basis. It is easy to see that \(dP_l(UBU^*) = dP_l(B)\) for any unitary matrix U, so the definition does not depend on the choice of the basis.

  • $$\begin{aligned} H_m = \prod \limits _{l = 2}^{2m} \mathcal {H}_{2m, l}; \end{aligned}$$
    (2.4)
  • Set also

    $$\begin{aligned} A_{2m}(G_1, G) = \sum \limits _{\begin{array}{c} k_1 + 2k_2 + \ldots + 2mk_{2m} = 2m \\ k_j \in \mathbb {Z}_+ \end{array}}(2m)!\prod \limits _{q = 1}^{2m} \frac{1}{(q!)^{k_q}k_q!} \bigwedge \limits _{s = 1}^{2m} (b_{s}G_{s} - \tilde{b}_sI)^{\wedge k_{s}} \end{aligned}$$
    (2.5)

    where \(\{b_s\}_{s = 1}^\infty \), \(\{\tilde{b}_s\}_{s = 1}^\infty \) are sequences of certain numbers depending on n and p (they are specified in Sect. 2.1) and

    $$\begin{aligned} G = (G_2,&\ldots , G_{2m}), \quad G_l \in {{\mathrm{End}}}\Lambda ^l \mathbb {C}^{2m}, \quad l = \overline{2,2m}. \end{aligned}$$

    Exterior product \(A \wedge B\) of operators is defined in Sect. 1 in Appendix. Since \(\dim \Lambda ^{2m} \mathbb {C}^{2m} = 1\), the space \({\text {End}} \Lambda ^{2m} \mathbb {C}^{2m}\) may be identified with \(\mathbb {C}\). In (2.5) \(\bigwedge \limits _{s = 1}^{2m} (b_{s}G_{s} - \tilde{b}_sI)^{\wedge k_{s}}\) is understood as the single entry \(\Big \{\bigwedge \limits _{s = 1}^{2m} (b_{s}G_{s} - \tilde{b}_sI)^{\wedge k_{s}}\Big \}_{1\ldots 2m;1\ldots 2m}\).

  • $$\begin{aligned} C_n^{(2m)}(X) = \pi ^m \left( \frac{1}{2} \right) ^{\frac{1}{2}(2^{2m} - 1)} \left( \frac{n}{\pi } \right) ^{\frac{1}{2}\left( \left( {\begin{array}{c}4m\\ 2m\end{array}}\right) - 1\right) } \exp \Big \{\frac{1}{2n}\sum \limits _{j = 1}^{2m} x_j^2\Big \}. \end{aligned}$$
    (2.6)

We prove the following integral representation for the correlation function \(F_{2m}.\)

Proposition 1

Let \(M_n\) be a random matrix of the form (1.1)–(1.3), where \(w_{jk}^{(1)}\), \(w_{jk}^{(2)}\), \(j < k\), \(w_{ll}\) have Gaussian distribution. Then the correlation function (1.5) admits the representation

$$\begin{aligned} F_{2m}(\Lambda )= & {} C_n^{(2m)}(X) \frac{i^{2m^2 - m}\exp \left\{ \lambda _0\sum \limits _{j = 1}^{2m} x_j\right\} }{\Delta (X)} \nonumber \\&\times \int \limits _{H_m}\int \limits _{\mathbb {R}^{2m}} \Delta (T) \exp \left\{ -i\sum \limits _{j=1}^{2m} x_jt_j\right\} e^{nf_{2m}(T, R)} dT dR, \end{aligned}$$
(2.7)

where \(R = (R_2, \ldots , R_{2m})\), \(R_l \in \mathcal {H}_{2m, l}\), \(dR = \prod \limits _{j = 2}^{2m} dR_j\), \(T = {{\mathrm{diag}}}\{t_j\}_{j = 1}^{2m}\), \(dT = \prod \limits _{j = 1}^{2m} dt_j\),

$$\begin{aligned} f_{2m}(T, R) = \log A_{2m}(T, R) - \frac{1}{2}\left( \sum \limits _{j=1}^{2m} (t_j+i\lambda _0)^2 + \sum \limits _{l = 2}^{2m} {\text {tr}} R_l^2\right) \end{aligned}$$
(2.8)

and all other notation is defined at the beginning of the section.

Remark 1

In the special case \(m = 1\), the representation (2.7) simplifies to

$$\begin{aligned} F_2(\Lambda ) = C_n(X) \frac{ie^{\lambda _0(x_1 + x_2)}}{x_1 - x_2}\int \limits _{\mathbb {R}^3} (t_1 - t_2) \exp \left\{ -i\sum \limits _{j=1}^2 x_jt_j\right\} e^{nf(T, s)} dT ds, \end{aligned}$$
(2.9)

where \(C_n(X) = C_n^{(2)}(X)\) and

$$\begin{aligned} f(T, s) = \log (b_2 s - t_1 t_2) - \frac{1}{2}\left( \sum \limits _{j=1}^2 (t_j+i\lambda _0)^2 + s^2\right) . \end{aligned}$$
(2.10)
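As a consistency check, let us indicate how (2.10) arises from (2.5) for \(m = 1\). The sum in (2.5) then runs over \((k_1, k_2) \in \{(2, 0), (0, 1)\}\), both terms having coefficient 1, while \(b_1 = i\) and \(\tilde{b}_1 = \tilde{b}_2 = 0\) (see (2.14) and the definitions following (2.17) below). Writing s for the single entry of \(R_2 \in \mathcal {H}_{2, 2}\) and using \((iT)^{\wedge 2} = i^2t_1t_2\) (with the conventions for \(\wedge \) from the Appendix), we get

$$\begin{aligned} A_2(T, R) = (b_1T)^{\wedge 2} + b_2R_2 = b_2 s - t_1t_2, \end{aligned}$$

which is exactly the argument of the logarithm in (2.10).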

The proof of Proposition 1 is based on the method of integration over the Grassmann variables, the required properties of which are reviewed in Sect. 1 in Appendix. The proof of the proposition is given in Sect. 2.1.

2.1 Proof of Proposition 1

Let us transform \(F_{2m}(\Lambda )\) using (6.1):

$$\begin{aligned} F_{2m}(\Lambda )= & {} \mathbf {E} \left\{ \prod \limits _{j = 1}^{2m} \det (M - \lambda _j) \right\} \nonumber \\= & {} \mathbf {E} \left\{ \displaystyle \int \exp \left\{ -\sum \limits _{l=1}^{2m} \left( \sum \limits _{1 \le j, k \le n} d_{jk}w_{jk}\overline{\psi }_{jl}\psi _{kl} - \sum \limits _{j=1}^n \lambda _l \overline{\psi }_{jl}\psi _{jl} \right) \right\} \prod \limits _{l = 1}^{2m} \prod \limits _{j = 1}^n d\overline{\psi }_{jl}d\psi _{jl} \right\} \end{aligned}$$

Averaging first with respect to \(\{w_{jk}\}\), we obtain

$$\begin{aligned} F_{2m}(\Lambda ) = \mathbf {E}\left\{ \int \exp \left\{ \sum \limits _{j < k} d_{jk}^2\chi _{jk}\chi _{kj} + \sum \limits _{j=1}^n \left( \frac{1}{2}d_{jj}^2\chi _{jj}^2 + \sum \limits _{l = 1}^{2m} \lambda _l\overline{\psi }_{jl}\psi _{jl}\right) \right\} \prod \limits _{l = 1}^{2m} \prod \limits _{j = 1}^n d\overline{\psi }_{jl}d\psi _{jl} \right\} , \end{aligned}$$

where, in order to simplify the formulas below, we denote

$$\begin{aligned} \chi _{jk} = \sum \limits _{l = 1}^{2m} \overline{\psi }_{jl}\psi _{kl}. \end{aligned}$$

Since evidently \((\chi _{jk}\chi _{kj})^{2m + 1} = 0\), we get

$$\begin{aligned} \mathbf {E}\left\{ e^{d_{jk}^2 \chi _{jk}\chi _{kj}} \right\} = \mathbf {E}\left\{ 1 + \sum \limits _{l = 1}^{2m} \frac{1}{l!} d_{jk}^{2l} (\chi _{jk}\chi _{kj})^l \right\} = 1 + \sum \limits _{l = 1}^{2m} \frac{1}{l!} \cdot \frac{1}{p^{l - 1} n} (\chi _{jk}\chi _{kj})^l, \quad j \le k. \end{aligned}$$

Define the numbers \(\{a_l\}_{l=1}^{2m}\) by the identity

$$\begin{aligned} \exp \left\{ \sum \limits _{l = 1}^{2m} a_l(\chi _{jk}\chi _{kj})^l\right\} = 1 + \sum \limits _{l = 1}^{2m} \frac{1}{l!} \cdot \frac{1}{p^{l - 1} n} (\chi _{jk}\chi _{kj})^l. \end{aligned}$$

Observe that

$$\begin{aligned} a_1 = \frac{1}{n},\quad a_2 = \frac{n - p}{2pn^2} \end{aligned}$$

and that

$$\begin{aligned} a_l \sim \frac{C}{p^{l - 1}n},\, n \rightarrow \infty , \\ a_l = 0,\, l > 1,\, \text {if } p = n.\nonumber \end{aligned}$$
(2.11)
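Indeed, since \(\chi = \chi _{jk}\chi _{kj}\) is a nilpotent even element, all the series here are finite, and the \(a_l\) are found by matching the coefficients in the logarithm of the right-hand side:

$$\begin{aligned} \sum \limits _{l = 1}^{2m} a_l\chi ^l = \log \Big (1 + \frac{\chi }{n} + \frac{\chi ^2}{2pn} + \ldots \Big ) = \frac{\chi }{n} + \Big (\frac{1}{2pn} - \frac{1}{2n^2}\Big )\chi ^2 + \ldots , \end{aligned}$$

which gives the above values of \(a_1\), \(a_2\) and the asymptotics (2.11).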

Then \(F_{2m}(\Lambda )\) can be represented as

$$\begin{aligned} F_{2m}(\Lambda )= & {} \int \exp \left\{ \sum \limits _{j < k} \sum \limits _{l = 1}^{2m} a_l(\chi _{jk}\chi _{kj})^l + \sum \limits _{j=1}^n \left( \sum \limits _{l = 1}^m \frac{1}{2^l}a_l\chi _{jj}^{2l} + \sum \limits _{l = 1}^{2m} \lambda _l\overline{\psi }_{jl}\psi _{jl}\right) \right\} \nonumber \\&\prod \limits _{l = 1}^{2m} \prod \limits _{j = 1}^n d\overline{\psi }_{jl}d\psi _{jl}.\qquad \quad \end{aligned}$$
(2.12)

To facilitate the reading, the remaining steps are first explained in the simpler case \(m = 1\) and only then in the general case.

2.1.1 Case \(m = 1\)

Let us transform the exponent of (2.12).

$$\begin{aligned} \chi _{jk}\chi _{kj}&= \sum \limits _{\alpha , \beta \in I_{2,1}} \overline{\psi }_{j\alpha _1}\psi _{k\alpha _1}\overline{\psi }_{k\beta _1}\psi _{j\beta _1} = -\sum \limits _{\alpha , \beta \in I_{2,1}} \overline{\psi }_{j\alpha _1}\psi _{j\beta _1}\overline{\psi }_{k\beta _1}\psi _{k\alpha _1}, \\ (\chi _{jk}\chi _{kj})^2&= 4\prod \limits _{l = 1}^2 \overline{\psi }_{jl}\psi _{kl}\overline{\psi }_{kl}\psi _{jl} = 4\prod \limits _{l = 1}^2 \overline{\psi }_{jl}\psi _{jl}\overline{\psi }_{kl}\psi _{kl} = 4\prod \limits _{l = 1}^2 \overline{\psi }_{jl}\psi _{jl} \prod \limits _{q = 1}^2 \overline{\psi }_{kq}\psi _{kq}, \end{aligned}$$

where \(I_{2, 1}\) is defined in (2.2). Since \(\psi _{jl}^2 = \overline{\psi }_{jl}^2 = 0\), we have

$$\begin{aligned} \sum \limits _{j < k} \chi _{jk}\chi _{kj} + \sum \limits _{j=1}^n \frac{1}{2}\chi _{jj}^2= & {} -\sum \limits _{l = 1}^2 \frac{1}{2}\left( \sum \limits _{j = 1}^n \overline{\psi }_{jl}\psi _{jl} \right) ^2 - \left( \sum \limits _{j}\overline{\psi }_{j1}\psi _{j2} \right) \left( \sum \limits _{j}\overline{\psi }_{j2}\psi _{j1} \right) .\nonumber \\ \sum \limits _{j < k} (\chi _{jk}\chi _{kj})^2= & {} 2\left( \sum \limits _{j = 1}^n \prod \limits _{l = 1}^2 \overline{\psi }_{jl}\psi _{jl}\right) ^2. \end{aligned}$$

Hubbard–Stratonovich transformation (6.2) applied to (2.12) yields

$$\begin{aligned} \exp \left\{ a_1\left( \sum \limits _{j < k} \chi _{jk}\chi _{kj} + \sum \limits _{j=1}^n \frac{1}{2}\chi _{jj}^2\right) \right\} = \frac{n^2}{2\pi ^2}\int \limits _{\mathcal {H}_2} \exp \left\{ -\frac{n}{2}\left( \sum \limits _{j = 1}^2 t_j^2 + 2(u^2 + v^2)\right) \right\} \\ \prod \limits _{j = 1}^n \exp \left\{ i\sum \limits _{l = 1}^2 t_l \overline{\psi }_{jl}\psi _{jl} + i(u - iv)\overline{\psi }_{j2}\psi _{j1} + i(u + iv)\overline{\psi }_{j1}\psi _{j2} \right\} dQ; \end{aligned}$$
$$\begin{aligned} \exp \left\{ a_2\sum \limits _{j < k} (\chi _{jk}\chi _{kj})^2 \right\}= & {} \sqrt{\frac{n}{2\pi }}\int \limits _{\mathbb {R}} \exp \bigg \{ -\frac{n}{2}s^2 \bigg \} \\&\times \prod \limits _{j = 1}^n \exp \left\{ -s\sqrt{4na_2}\cdot \overline{\psi }_{j1}\psi _{j1}\overline{\psi }_{j2}\psi _{j2} \right\} ds, \end{aligned}$$

where \(\mathcal {H}_2\) is the space of self-adjoint operators in \({\text {End}} \mathbb {C}^{2}\) and

$$\begin{aligned} Q= & {} \left( \begin{array}{ll} t_1 &{} u + iv \\ u - iv &{} t_{2} \end{array} \right) ; \\ dQ= & {} dt_1 dt_2 dudv. \end{aligned}$$

Set \(b_2 = -\sqrt{4na_2} = -\sqrt{\frac{2(n-p)}{pn}}\). Now we can integrate over the Grassmann variables

$$\begin{aligned} F_2(\Lambda ) = 2\bigg (\frac{n}{2\pi }\bigg )^{5/2}\int \limits _{\mathbb {R}} e^{-\frac{n}{2}s^2} \int \limits _{\mathcal {H}_2} (b_2 s - \det {(Q - i\Lambda )})^n e^{-\frac{n}{2}{\text {tr}}{Q^2}}dQ ds \end{aligned}$$

Change the variables \(t_j \rightarrow t_j + i\lambda _j\) and move the line of integration back to the real axis. Indeed, consider the rectangular contour with vertices in the points \((-R, 0)\), (R, 0), \((R, -i\lambda _j)\) and \((-R, -i\lambda _j)\). Since the integrand is a holomorphic on \(\mathbb {C}\) function, the integral over this contour is zero. Because the integrand is a polynomial multiplied by exponent, the integral over the vertical sides of the contour tends to 0 when \(R \rightarrow \infty \). So, recalling that \(\lambda _j = \lambda _0 + x_j/n\), we can write

$$\begin{aligned} F_2(\Lambda )= & {} \frac{C_n(X)}{\pi } \cdot e^{\lambda _0(x_1+x_2)} \\&\times \int \limits _{\mathbb {R}} e^{-\frac{n}{2}s^2} \int \limits _{\mathcal {H}_2} (b_2 s - \det {Q})^n e^{-\frac{n}{2}{\text {tr}}{(Q+i\Lambda _0)^2}} \exp \{-i{{\mathrm{tr}}}XQ \}dQ ds, \end{aligned}$$

where

$$\begin{aligned} C_n(X) = n\bigg (\frac{n}{2\pi }\bigg )^{3/2} e^{\frac{1}{2n}\big (x_1^2+x_2^2\big )}. \end{aligned}$$
(2.13)

Let us change the variables \(Q \rightarrow U^*TU\), where U is a unitary matrix and \(T = {\text {diag}}\{t_1, t_2\}\). Then dQ changes to \(\frac{\pi }{2}(t_1 - t_2)^2 dt_1 dt_2\) and

$$\begin{aligned} F_2(\Lambda )= & {} \frac{1}{2} C_n(X) e^{\lambda _0(x_1+x_2)}\int \limits _{\mathbb {R}} e^{-\frac{n}{2}s^2} \int \limits (t_1 - t_2)^2 (b_2 s - t_1t_2)^n \\&\exp \left\{ -\frac{n}{2}\sum \limits _{j=1}^2 (t_j+i\lambda _0)^2\right\} \int \limits _{U_2} e^{-i{\text {tr}}{(XU^{*}TU)}}dU_2(U) dT ds, \end{aligned}$$

where \(U_2\) is the group of the unitary \(2 \times 2\) matrices, \(dU_2(U)\) is the normalized to unity Haar measure, \(dT = dt_1 dt_2\).

The integration over the unitary group using the Harish–Chandra/Itzykson–Zuber formula (6.3) implies the assertion of Remark 1.
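For the reader's convenience we recall that for \(2 \times 2\) matrices the Harish–Chandra/Itzykson–Zuber formula (with the Haar measure normalized to unity) takes the form

$$\begin{aligned} \int \limits _{U_2} e^{-i{\text {tr}}(XU^{*}TU)}dU_2(U) = \frac{e^{-i(x_1t_1 + x_2t_2)} - e^{-i(x_1t_2 + x_2t_1)}}{-i(x_1 - x_2)(t_1 - t_2)}. \end{aligned}$$

Multiplying by \((t_1 - t_2)^2\) and symmetrizing in \(t_1\), \(t_2\) (the rest of the integrand above is symmetric), the two exponentials give equal contributions, which yields (2.9).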

2.1.2 General Case \(m > 1\)

Let us transform the exponent of (2.12).

$$\begin{aligned} (\chi _{jk}\chi _{kj})^l&= (l!)^2 \sum \limits _{\alpha , \beta \in I_{2m, l}} \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{k\alpha _q}\overline{\psi }_{k\beta _q}\psi _{j\beta _q} \\&= (-1)^l (l!)^2 \sum \limits _{\alpha , \beta \in I_{2m, l}} \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\beta _q} \prod \limits _{q = 1}^l \overline{\psi }_{k\beta _q}\psi _{k\alpha _q}, \end{aligned}$$

where \(I_{2m, l}\) is defined in (2.2).

Since \(\psi _{jl}^2 = \overline{\psi }_{jl}^2 = 0\), we have

$$\begin{aligned} \sum \limits _{j < k} (\chi _{jk}\chi _{kj})^l + \sum \limits _{j=1}^n \frac{1}{2}\chi _{jj}^{2l}= & {} (-1)^l (l!)^2 \left( \sum \limits _{j < k} \sum \limits _{\alpha , \beta \in I_{2m,l}} \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\beta _q} \prod \limits _{q = 1}^l \overline{\psi }_{k\beta _q}\psi _{k\alpha _q}\right. \\ &+\left. \frac{1}{2}\sum \limits _{j = 1}^n \sum \limits _{\alpha , \beta \in I_{2m,l}} \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\beta _q} \prod \limits _{q = 1}^l \overline{\psi }_{j\beta _q}\psi _{j\alpha _q}\right) \\= & {} \frac{1}{2} (-1)^l (l!)^2 \sum \limits _{\alpha , \beta \in I_{2m, l}}\sum \limits _{j, k} \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\beta _q} \prod \limits _{q = 1}^l \overline{\psi }_{k\beta _q}\psi _{k\alpha _q} \\= & {} \frac{1}{2} (-1)^l (l!)^2 \sum \limits _{\alpha , \beta }\left( \sum \limits _{j} \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\beta _q}\right) \left( \sum \limits _{k} \prod \limits _{q = 1}^l \overline{\psi }_{k\beta _q}\psi _{k\alpha _q}\right) \\= & {} (-1)^l (l!)^2 \left( \frac{1}{2}\sum \limits _\alpha \left( \sum \limits _{j = 1}^n \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\alpha _q} \right) ^2 \right. \\&+ \left. \sum \limits _{\alpha \prec \beta } \left( \sum \limits _{j = 1}^n \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\beta _q} \right) \left( \sum \limits _{j = 1}^n \prod \limits _{q = 1}^l \overline{\psi }_{j\beta _q}\psi _{j\alpha _q} \right) \right) , \end{aligned}$$

where \(\prec \) is the lexicographical order.

Hubbard–Stratonovich transformation (6.2) yields

$$\begin{aligned}&\exp \left\{ (-1)^l (l!)^2 \frac{a_l}{2} \left( \sum \limits _{j = 1}^n \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\alpha _q} \right) ^2\right\} \\&\qquad = \sqrt{\frac{n}{2\pi }}\int \exp \left\{ b_l (Q_l)_{\alpha \alpha } \left( \sum \limits _{j = 1}^n \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\alpha _q} \right) - \frac{n}{2}(Q_l)_{\alpha \alpha }^2\right\} d(Q_l)_{\alpha \alpha }, \\&\exp \left\{ (-1)^l (l!)^2 a_l \left( \sum \limits _{j = 1}^n \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\beta _q} \right) \left( \sum \limits _{j = 1}^n \prod \limits _{q = 1}^l \overline{\psi }_{j\beta _q}\psi _{j\alpha _q} \right) \right\} \\&\qquad = \frac{n}{\pi }\int \exp \{-n|(Q_l)_{\alpha \beta }|^2\} \exp \left\{ b_l\left( \sum \limits _{j = 1}^n \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\beta _q} \right) (Q_l)_{\alpha \beta } \right. \\&\qquad \;\; \left. +\, b_l\left( \sum \limits _{j = 1}^n \prod \limits _{q = 1}^l \overline{\psi }_{j\beta _q}\psi _{j\alpha _q} \right) \overline{(Q_l)_{\alpha \beta }}\right\} d\mathfrak {R}(Q_l)_{\alpha \beta } d\mathfrak {I}(Q_l)_{\alpha \beta }, \end{aligned}$$

where

$$\begin{aligned} b_l = i^l l!\sqrt{na_l}. \end{aligned}$$
(2.14)

The above computations combine into the following representation of the exponent of (2.12):

$$\begin{aligned} \exp \left\{ a_l\left( \sum \limits _{j < k} (\chi _{jk}\chi _{kj})^l + \frac{1}{2} \sum \limits _{j=1}^n \chi _{jj}^{2l}\right) \right\}= & {} \left( \frac{1}{2} \right) ^{\frac{1}{2}\left( {\begin{array}{c}2m\\ l\end{array}}\right) } \left( \frac{n}{\pi } \right) ^{\frac{1}{2}\left( {\begin{array}{c}2m\\ l\end{array}}\right) ^2} \int \limits _{\mathcal {H}_{2m, l}} \exp \Bigg \{ -\frac{n}{2}{\text {tr}}Q_l^2 \Bigg \} \nonumber \\&\prod \limits _{j = 1}^n \exp \left\{ b_l\sum \limits _{\alpha , \beta } (Q_l)_{\alpha \beta } \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\beta _q} \right\} dQ_l\nonumber \\ \end{aligned}$$
(2.15)

where \(\mathcal {H}_{2m, l}\) is the space of self-adjoint operators in \({\text {End}} \Lambda ^l \mathbb {C}^{2m}\) and \(dQ_l\) is defined in (2.3). Therefore, substitution of (2.15) into (2.12) gives us

$$\begin{aligned} F_{2m}(\Lambda )= & {} Z_n^{(2m)}\int \limits _{\mathcal {H}_{2m, 1} \times H_m} \prod \limits _{l = 1}^{2m} \exp \Bigg \{ -\frac{n}{2}{\text {tr}} Q_l^2 \Bigg \} \prod \limits _{j = 1}^n \exp \left\{ b_l \sum \limits _{\alpha , \beta } (Q_l)_{\alpha \beta } \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\beta _q} \right. \nonumber \\&-\left. \Bigg (\frac{1}{2} - \frac{1}{2^l}\Bigg )a_l \chi _{jj}^{2l} +\lambda _l\overline{\psi }_{jl}\psi _{jl}\right\} d\overline{\psi }_{jl}d\psi _{jl} \prod \limits _{l = 1}^{2m} dQ_l, \end{aligned}$$
(2.16)

where \(H_m\) is defined in (2.4) and

$$\begin{aligned} \nonumber Z_n^{(2m)} = \bigg ( \frac{1}{2} \bigg )^{\frac{1}{2}(2^{2m} - 1)} \bigg ( \frac{n}{\pi } \bigg )^{\frac{1}{2}\left( \left( {\begin{array}{c}4m\\ 2m\end{array}}\right) - 1\right) }. \end{aligned}$$

Now we can expand the exponents in (2.16) into a series:

$$\begin{aligned}&\exp \left\{ \sum \limits _{l = 1}^{2m} b_l\sum \limits _{\alpha , \beta } (Q_l)_{\alpha \beta } \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\beta _q} - \left( \frac{1}{2} - \frac{1}{2^l}\right) a_l \chi _{jj}^{2l} + \lambda _l\overline{\psi }_{jl}\psi _{jl} \right\} \nonumber \\&\quad = \exp \left\{ \sum \limits _{l = 1}^{2m}\sum \limits _{\alpha , \beta } (\widetilde{Q}_l)_{\alpha \beta } \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\beta _q} \right\} = \sum \limits _{k = 0}^{2m} \frac{1}{k!}\left( \sum \limits _{l = 1}^{2m}\sum \limits _{\alpha , \beta } (\widetilde{Q}_l)_{\alpha \beta } \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\beta _q}\right) ^k,\nonumber \\ \end{aligned}$$
(2.17)

where

$$\begin{aligned} \begin{array}{cc} \widetilde{Q}_1 = b_1Q_1 + \Lambda , &{}\quad \widetilde{Q}_l = b_lQ_l - \tilde{b}_lI, \\ \tilde{b}_{2l} = (2^{-1} - 2^{-l})a_{l}, &{}\quad \tilde{b}_{2l - 1} = 0. \end{array} \end{aligned}$$

The most important terms are those containing all 4m Grassmann variables \(\{\psi _{js},\, \overline{\psi }_{js}\}_{s = 1}^{2m}\), because all other terms vanish after the integration over the Grassmann variables. Thus, the expansion (2.17) together with Lemma 6 implies

$$\begin{aligned}&\int \exp \left\{ \sum \limits _{l = 1}^{2m}\sum \limits _{\alpha , \beta } (\widetilde{Q}_l)_{\alpha \beta } \prod \limits _{q = 1}^l \overline{\psi }_{j\alpha _q}\psi _{j\beta _q} \right\} \prod \limits _{l = 1}^{2m} d\overline{\psi }_{jl}d\psi _{jl} \\&= \int \sum \limits _{\begin{array}{c} k_1 + 2k_2 + \ldots + 2mk_{2m} = 2m \\ k_j \in \mathbb {Z}_+ \end{array}} \frac{1}{(k_1 + \ldots + k_{2m})!} \cdot \frac{(k_1 + \ldots + k_{2m})!}{k_1! \ldots k_{2m}!} \cdot \frac{(2m)!}{(1!)^{k_1} \ldots ((2m)!)^{k_{2m}}} \\&\quad \times \sum \limits _{\alpha , \beta \in I_{2m, 2m}} \left( \bigwedge \limits _{s = 1}^{2m} \widetilde{Q}_s^{\wedge k_s}\right) _{\alpha \beta } \prod \limits _{q = 1}^{2m} \overline{\psi }_{j\alpha _q}\psi _{j\beta _q} \prod \limits _{l = 1}^{2m} d\overline{\psi }_{jl}d\psi _{jl}, \end{aligned}$$

where only the most important terms remain. Integration over the Grassmann variables and substitution of the result into (2.16) gives us

$$\begin{aligned} F_{2m}(\Lambda ) = Z_n^{(2m)}\int \limits _{H_m}\int \limits _{\mathcal {H}_{2m, 1}} \left( A_{2m}(\hat{Q}_1, Q)\right) ^n \exp \left\{ -\frac{n}{2}\sum \limits _{l = 1}^{2m} {\text {tr}}{Q_l^2}\right\} dQ_1 dQ, \end{aligned}$$

where \(\hat{Q}_1 = Q_1 + \frac{1}{b_1}\Lambda = Q_1 - i\Lambda \), \(Q = (Q_2, \ldots , Q_{2m})\), \(dQ = \prod \limits _{l = 2}^{2m} dQ_l\) and \(A_{2m}\) is defined in (2.5).

Change the variables \((Q_1)_{jj} \rightarrow (Q_1)_{jj} + i\lambda _j\) and move the line of integration back to the real axis. Similarly to the case \(m = 1\), the Cauchy theorem yields

$$\begin{aligned} F_{2m}(\Lambda )= & {} \frac{C_n^{(2m)}(X)}{\pi ^m} \cdot \exp \left\{ \lambda _0\sum \limits _{j = 1}^{2m} x_j\right\} \int \limits _{H_{m}}\int \limits _{\mathcal {H}_{2m, 1}} \left( A_{2m}(Q_1, Q)\right) ^n \\&\times \exp \left\{ -\frac{n}{2}\left( {{\mathrm{tr}}}(Q_1 + i\Lambda _0)^2 + \sum \limits _{l = 2}^{2m} {{\mathrm{tr}}}{Q_l^2} \right) \right\} e^{-i{{\mathrm{tr}}}XQ_1}dQ_1 dQ, \end{aligned}$$

where \(C_n^{(2m)}(X)\) is defined in (2.6). Let us change the variables \(Q_1 = U^*TU\), where U is a unitary operator and \(T = {\text {diag}}\{t_j\}_{j = 1}^{2m}\). Then \(dQ_1\) changes to \(\pi ^m K_{2m}^{-1}\Delta (T)^2 dT\) and

$$\begin{aligned} F_{2m}(\Lambda )= & {} \frac{C_n^{(2m)}(X)}{K_{2m}} \cdot \exp \left\{ \lambda _0\sum \limits _{j = 1}^{2m} x_j\right\} \int \limits _{H_m} \int \limits _{\mathbb {R}^{2m}} \Delta (T)^2 \left( A_{2m}(Q_1, Q)\right) ^n \nonumber \\&\times \exp \left\{ -\frac{n}{2}\left( \sum \limits _{j=1}^{2m} (t_j + i\lambda _0)^2 + \sum \limits _{l = 2}^{2m} {\text {tr}}{Q_l^2} \right) \right\} \int \limits _{U_{2m}} e^{-i{\text {tr}} XU^{*}TU}dU_{2m}(U) dT dQ,\nonumber \\ \end{aligned}$$
(2.18)

where \(K_{2m} = \prod \limits _{j = 1}^{2m} j!\), \(\Delta (T)\) is defined in (2.1), \(U_{2m}\) is the group of unitary operators in \({{\mathrm{End}}}\mathbb {C}^{2m}\), \(dU_{2m}(U)\) is the Haar measure normalized to unity, and \(dT = \prod \limits _{j = 1}^{2m} dt_j\). Transform \(A_{2m}(Q_1, Q)\):

$$\begin{aligned}&(b_1Q_{1})^{\wedge k_1}\wedge \bigwedge \limits _{s = 2}^{2m} (b_sQ_{s} - \tilde{b}_sI)^{\wedge k_s}\\&\quad = (U^*b_1TU)^{\wedge k_1}\wedge \bigwedge \limits _{s = 2}^{2m} ((U^*U)^{\wedge s}(b_sQ_{s} - \tilde{b}_sI)(U^*U)^{\wedge s})^{\wedge k_s}, \end{aligned}$$

where I is the identity operator. The assertion (iv) of Proposition 2 implies

$$\begin{aligned}&(U^*b_1TU)^{\wedge k_1}\wedge \bigwedge \limits _{s = 2}^{2m} ((U^*U)^{\wedge s}(b_sQ_{s} - \tilde{b}_sI)(U^*U)^{\wedge s})^{\wedge k_s} \\&\quad = (U^*)^{\wedge 2m} \bigg ((b_1T)^{\wedge k_1}\wedge \bigwedge \limits _{s = 2}^{2m} (b_sU^{\wedge s}Q_{s}(U^*)^{\wedge s} - \tilde{b}_sI)^{\wedge k_s}\bigg ) U^{\wedge 2m} \\&\quad = (b_1T)^{\wedge k_1}\wedge \bigwedge \limits _{s = 2}^{2m} (b_sU^{\wedge s}Q_{s}(U^*)^{\wedge s} - \tilde{b}_sI)^{\wedge k_s}. \end{aligned}$$

Change the variables \(Q_l = (U^*)^{\wedge l}R_l U^{\wedge l}\), \(l = \overline{2, 2m}\). Since \(U^{\wedge l}\) is a unitary operator, dQ changes to \(dR = \prod \limits _{l = 2}^{2m} dR_l\). Then (2.18) implies

$$\begin{aligned} F_{2m}(\Lambda )= & {} \frac{C_n^{(2m)}(X)}{K_{2m}} \cdot \exp \left\{ \lambda _0\sum \limits _{j = 1}^{2m} x_j\right\} \int \limits _{H_m} \int \limits _{\mathbb {R}^{2m}} \Delta (T)^2 \left( A_{2m}(T, R)\right) ^n \\&\times \exp \left\{ -\frac{n}{2}\left( \sum \limits _{j=1}^{2m} (t_j + i\lambda _0)^2 + \sum \limits _{l = 2}^{2m} {\text {tr}}{R_l^2} \right) \right\} \int \limits _{U_{2m}} e^{-i{\text {tr}} XU^{*}TU}dU_{2m}(U)dT dR, \end{aligned}$$

where \(R = (R_2, \ldots , R_{2m})\). Observe that the integral

$$\begin{aligned} \int \limits _{H_m} \left( A_{2m}(T, R)\right) ^n \exp \left\{ -\frac{n}{2}\left( \sum \limits _{j=1}^{2m} (t_j + i\lambda _0)^2 + \sum \limits _{l = 2}^{2m} {\text {tr}}{R_l^2} \right) \right\} dR \end{aligned}$$
(2.19)

is a symmetric function of \(\{t_j\}_{j = 1}^{2m}\). Indeed, after swapping \(t_{j_1}\) and \(t_{j_2}\) and changing the variables \(R_l \rightarrow (\mathcal {M}_{j_1j_2}^*)^{\wedge l}R_l\mathcal {M}_{j_1j_2}^{\wedge l}\), where \(\mathcal {M}_{j_1j_2}\) is the identity matrix with rows \(j_1\) and \(j_2\) interchanged, the integrand in (2.19) remains unchanged. Hence, the integration over the unitary group can be performed using the Harish–Chandra/Itzykson–Zuber formula (6.3), which yields the assertion of Proposition 1.

3 Proof of Theorem 1

To find the asymptotics of \(F_2(\Lambda )\), we apply the steepest descent method to the integral representation (2.9). As usual, the key technical point of the steepest descent method is to choose a good contour of integration (in our case a three-dimensional subset of the space of \((t_1, t_2, s)\)) which contains the stationary point \((t_1^*, t_2^*, s^*)\) of f, and then to prove that for any \((t_1, t_2, s)\) in our “contour”

$$\begin{aligned} \mathfrak {R}f(t_1, t_2, s) \le \mathfrak {R}f(t_1^*, t_2^*, s^*). \end{aligned}$$
(3.1)

Let us introduce the function \(h_\alpha : \mathbb {R}^5 \rightarrow \mathbb {R}\)

$$\begin{aligned} h_{\alpha }(t_1, t_2, s, b_2, \lambda _0) = \frac{1}{2}\left( \log A - \sum \limits _{j=1}^2 t_j^2 - s^2 - \left( \frac{1 - \alpha }{\alpha }\right) ^2 b_2^2 - 2\alpha (1 - \alpha )\lambda _0^2 \right) , \end{aligned}$$
(3.2)

where

$$\begin{aligned} A = \left( b_2 s - t_1 t_2 + \alpha ^2\lambda _0^2\right) ^2 + \alpha ^2\lambda _0^2(t_1 + t_2)^2. \end{aligned}$$
(3.3)

Then \(\mathfrak {R}f\) on our “contour” (which is defined below) has the form

$$\begin{aligned} \mathfrak {R}f\left( T, s\right) = h_{\alpha }(\mathfrak {R}t_1, \mathfrak {R}t_2, s, b_2, \lambda _0) + \frac{1}{2}\left( \frac{1-\alpha }{\alpha }\right) ^2 b_2^2 + (1-\alpha )\lambda _0^2 \end{aligned}$$

for some \(\alpha \). To prove (3.1), we use the following lemma.

Lemma 1

Let \(h_\alpha \) be defined by (3.2) and (3.3). Then for every \(\alpha \in [1/2,\,1)\), \(t_1\), \(t_2\), s, \(b_2\), \(\lambda _0 \in \mathbb {R}\) the following inequality holds

$$\begin{aligned} h_{\alpha }(t_1, t_2, s, b_2, \lambda _0) \le \frac{1}{2}\log \left( \frac{\alpha }{1 - \alpha }\right) ^2 - 1 \end{aligned}$$
(3.4)

Moreover, the equality holds if and only if at least one of the following conditions is satisfied

  1. (a)

    \(\alpha = 1/2,\, t_1 = -t_2 = \pm \sqrt{4 - 4b_2^2 - \lambda _0^2}/2,\, s = b_2\);

  2. (b)

    \(\alpha = 1/2,\, t_1 = t_2 = \pm \sqrt{4 - 4b_2^2 - \lambda _0^2}/2,\, s = -b_2,\, b_2\lambda _0 = 0\);

  3. (c)

    \(t_1 = t_2 = 0,\, s = b_2\frac{1-\alpha }{\alpha },\, \alpha (1-\alpha )\lambda _0^2+\left( \frac{1-\alpha }{\alpha }\right) ^2b_2^2 = 1\);

  4. (d)

    \(t_1 = t_2 = 0,\, s = -b_2\frac{1-\alpha }{\alpha },\, b_2 = \pm \frac{\alpha }{1 - \alpha },\, \lambda _0 = 0\).

Proof

Rewrite the inequality (3.4) in the form

$$\begin{aligned} \log \frac{1 - \alpha }{\alpha }A^{1/2} + 1 \le \frac{1}{2}\Big (t_1^2 + t_2^2 + s^2 + d^2 + 2\alpha (1 - \alpha )\lambda _0^2\Big ), \end{aligned}$$

where \(d = \frac{1 - \alpha }{\alpha }b_2\). Since

$$\begin{aligned} \log \frac{1 - \alpha }{\alpha }A^{1/2} \le \frac{1 - \alpha }{\alpha }A^{1/2} - 1, \end{aligned}$$
(3.5)

it is sufficient to prove

$$\begin{aligned} \left( \frac{1 - \alpha }{\alpha }\right) ^2 A \le \frac{1}{4}\left( t_1^2 + t_2^2 + s^2 + d^2 + 2\alpha (1 - \alpha )\lambda _0^2\right) ^2. \end{aligned}$$
(3.6)

Recalling (3.3), we have

$$\begin{aligned} \left( \frac{1 - \alpha }{\alpha }\right) ^2 A= & {} s^2d^2 + \left( \frac{1 - \alpha }{\alpha }\right) ^2 t_1^2t_2^2 + \alpha ^2(1 - \alpha )^2\lambda _0^4 \\&- \, 2\frac{1 - \alpha }{\alpha }sdt_1t_2 + 2\alpha (1 - \alpha )\lambda _0^2sd + (1 - \alpha )^2\lambda _0^2\big (t_1^2 + t_2^2\big ). \end{aligned}$$

(3.6) is transformed into

$$\begin{aligned}&s^2d^2 + \left( \frac{1 - \alpha }{\alpha }\right) ^2 t_1^2t_2^2 - 2\frac{1 - \alpha }{\alpha }sdt_1t_2 + 2\alpha (1 - \alpha )\lambda _0^2sd \\&\quad +\, (1 - \alpha )^2\lambda _0^2(t_1^2 + t_2^2) \le \frac{1}{4}(t_1^2 + t_2^2)^2 + \frac{1}{4}(s^2 + d^2)^2 \\&\quad +\, \frac{1}{2}(t_1^2 + t_2^2)(s^2 + d^2) + \alpha (1 - \alpha )\lambda _0^2(t_1^2 + t_2^2) + \alpha (1 - \alpha )\lambda _0^2(s^2 + d^2). \end{aligned}$$

The last inequality is the sum of the following obvious inequalities

$$\begin{aligned} (1 - \alpha )^2\lambda _0^2(t_1^2 + t_2^2)&\le \alpha (1 - \alpha )\lambda _0^2(t_1^2 + t_2^2), \end{aligned}$$
(3.7)
$$\begin{aligned} s^2d^2&\le \frac{1}{4}(s^2 + d^2)^2, \end{aligned}$$
(3.8)
$$\begin{aligned} \left( \frac{1 - \alpha }{\alpha }\right) ^2 t_1^2t_2^2&\le t_1^2t_2^2 \le \frac{1}{4}(t_1^2 + t_2^2)^2, \end{aligned}$$
(3.9)
$$\begin{aligned} - 2\frac{1 - \alpha }{\alpha }sdt_1t_2&\le 2|sdt_1t_2| \le \frac{1}{2}(t_1^2 + t_2^2)(s^2 + d^2), \end{aligned}$$
(3.10)
$$\begin{aligned} 2\alpha (1 - \alpha )\lambda _0^2sd&\le \alpha (1 - \alpha )\lambda _0^2(s^2 + d^2). \end{aligned}$$
(3.11)

It remains to determine the conditions under which equality in (3.4) holds. It holds if and only if the equalities in (3.5), (3.7)–(3.11) hold. Let \((n')\) denote the equality corresponding to inequality (n). Then

$$\begin{aligned} (3.5')&\Leftrightarrow A^{1/2} = \frac{\alpha }{1 - \alpha }, \\ (3.8')&\Leftrightarrow s^2 = d^2, \nonumber \\ (3.9')&\Rightarrow t_1^2 = t_2^2.\nonumber \end{aligned}$$
(3.12)

Everywhere below until the end of the proof we assume that \(s^2 = d^2\) and \(t_1^2 = t_2^2\). Then

$$\begin{aligned} (3.7')&\Leftrightarrow \left[ \begin{array}{l} \alpha = 1/2; \\ \lambda _0t_1 = 0; \end{array} \right. \\ (3.9')&\Leftrightarrow \left[ \begin{array}{l} \alpha = 1/2; \\ t_1 = 0; \end{array} \right. \\ (3.10')&\Leftrightarrow \left[ \begin{array}{l} {\left\{ \begin{array}{ll} \alpha = 1/2; \\ sdt_1t_2 \le 0; \end{array}\right. } \\ t_1s = 0; \end{array} \right. \\ (3.11')&\Leftrightarrow \left[ \begin{array}{l} sd \ge 0; \\ \lambda _0 = 0. \end{array} \right. \end{aligned}$$

Let us consider the following cases

  1. 1.

    \(t_1 = 0\).

    1. 1.1

      \(\lambda _0 = 0\). Then (3.12) is transformed into \((b_2s)^2 = \big (\frac{\alpha }{1 - \alpha }\big )^2\). Since \(s^2 = d^2\), we get \(b_2 = \pm \frac{\alpha }{1 - \alpha }\), which implies (d).

    2. 1.2

      \(sd \ge 0\). Then (3.12) is equivalent to \(\alpha ^2\lambda _0^2+\frac{1-\alpha }{\alpha }b_2^2 = \frac{\alpha }{1 - \alpha }\), which implies (c).

  2. 2.

    \(t_1 \ne 0 \Rightarrow \alpha = 1/2\).

    1. 2.1

      \(s = 0\). Hence, (3.12) is transformed into \(\lambda _0^2/4 + t_1^2 = 1\), which implies (b).

    2. 2.2

      \(s \ne 0 \Rightarrow dst_1t_2 < 0\).

      1. 2.2.1

        \(sd > 0\). Then (3.12) is transformed into \(\lambda _0^2/4 + b_2^2 + t_1^2 = 1\). Condition (a) is satisfied.

      2. 2.2.2

        \(sd < 0 \Rightarrow \lambda _0 = 0\). Then (3.12) is transformed into \(b_2^2 + t_1^2 = 1\). Condition (b) is satisfied.

Finally, it is easy to check that the values of \(h_{\alpha }\) at the points satisfying (a)–(d) are equal to the r.h.s. of (3.4). \(\square \)

Now we are ready to prove Theorem 1. We start with the following lemma.

Lemma 2

Let all the conditions of Theorem 1 hold and \(\lambda _0 \in (-\lambda _*(p), \lambda _*(p))\). Then \(F_2(\Lambda )\) satisfies the asymptotic relation

$$\begin{aligned} F_2(\Lambda )= & {} 2n \exp \big \{n(\lambda _0^2+b_2^2-2)/2 + \lambda _0(x_1 + x_2)/2\big \} \nonumber \\&\times \, \frac{\sin ((x_1-x_2)\sqrt{4-4b_2^2-\lambda _0^2}/2)}{(x_1-x_2)}(1 + o(1)), \end{aligned}$$
(3.13)

where \(b_2\) is defined in (2.14).

Proof

Set

$$\begin{aligned} t_\eta ^{*}&= \frac{(-1)^\eta }{2} \sqrt{4 - 4b_2^2 - \lambda _0^2}; \nonumber \\ T_*^{(\eta \nu )}&= {{\mathrm{diag}}}\left\{ t_\eta ^*, t_\nu ^* \right\} - i\Lambda _0/2, \end{aligned}$$
(3.14)

where \(\eta , \nu = 1, 2\).

Consider the contour \(\mathfrak {I}t_1 = \mathfrak {I}t_2 = -\lambda _0/2,\, s \in \mathbb {R}\). It contains the points \((t_1^* - i\lambda _0/2, t_2^* - i\lambda _0/2, b_2)\) and \((t_2^* - i\lambda _0/2, t_1^* - i\lambda _0/2, b_2)\), which are stationary points of f. The contour may contain other stationary points of f, but this fact does not affect the proof, except in the case \(\lambda _0 = 0\), for which the points \((t_1^*, t_1^*, -b_2)\) and \((t_2^*, t_2^*, -b_2)\) are also taken into consideration. First, consider the case \(\lambda _0 \ne 0\).

Shift the variables \(t_\eta \rightarrow t_\eta - i\lambda _0/2\) in (2.9) and restrict the integration domain to

$$\begin{aligned} B_R = \left\{ (t_1, t_2, s) \in \mathbb {R}^3: \max \{|t_1|, |t_2|, |s|\} \le R\right\} . \end{aligned}$$

Then

$$\begin{aligned} F_2(\Lambda ) = C_n(X) \frac{ie^{\lambda _0(x_1 + x_2)/2}}{(x_1 - x_2)} \int \limits _{B_R} g(T, s)e^{n f\left( T - \frac{i}{2}\Lambda _0, s\right) } dT ds + O(e^{-nR^2/4}),\, n \rightarrow \infty , \end{aligned}$$
(3.15)

where f is defined in (2.10) and

$$\begin{aligned} g(T, s)&= (t_1 - t_2) \exp \left\{ -i\sum \limits _{\eta =1}^2 x_\eta t_\eta \right\} . \end{aligned}$$

Then it is easy to see that for \(\eta \ne \nu \)

$$\begin{aligned} f(T_*^{(\eta \nu )}, b_2)&= \frac{1}{2}b_2^2 + \frac{1}{2}\lambda _0^2 - 1; \\ f''(T_*^{(\eta \nu )}, b_2)&= -\left( \begin{array}{lll} (t_\nu ^{*} - i\lambda _0/2)^2 + 1 &{} b_2^2 &{} -b_2 (t_\nu ^{*} - i\lambda _0/2) \\ b_2^2 &{} (t_\eta ^{*} - i\lambda _0/2)^2 + 1 &{} -b_2 (t_\eta ^{*} - i\lambda _0/2) \\ -b_2 (t_\nu ^{*} - i\lambda _0/2) &{} -b_2 (t_\eta ^{*} - i\lambda _0/2) &{} b_2^2 + 1 \end{array} \right) ; \\ \det f''(T_*^{(\eta \nu )}, b_2)&= -(4 - 4b_2^2 - \lambda _0^2) < -(\lambda _*(p)^2 - \lambda _0^2); \\ \mathfrak {R}f''(T_*^{(\eta \nu )}, b_2)&= -\left( \begin{array}{lll} (t_\nu ^{*})^2 - \lambda _0^2/4 + 1 &{} b_2^2 &{} -b_2 t_\nu ^{*} \\ b_2^2 &{} (t_\eta ^{*})^2 - \lambda _0^2/4 + 1 &{} -b_2 t_\eta ^{*} \\ -b_2 t_\nu ^{*} &{} -b_2 t_\eta ^{*} &{} b_2^2 + 1 \end{array} \right) ; \\ \det \mathfrak {R}f''(T_*^{(\eta \nu )}, b_2)&= -\left( 1 - \lambda _0^2/4\right) (4 - 4b_2^2 - \lambda _0^2) < -(\lambda _*(p)^2 - \lambda _0^2)^2/4, \end{aligned}$$

where \(T_*^{(\eta \nu )}\) is defined in (3.14).

Note that

$$\begin{aligned} \mathfrak {R}f\left( T - \frac{i}{2}\Lambda _0, s\right) = h_{1/2}(t_1, t_2, s, b_2, \lambda _0) + \frac{1}{2}b_2^2 + \frac{1}{2}\lambda _0^2, \end{aligned}$$

where \(h_\alpha \) is defined in (3.2). According to Lemma 1, \(\mathfrak {R}f\left( T - \frac{i}{2}\Lambda _0, s\right) \), as a function of the real variables \(t_1\), \(t_2\), s, attains its maximum at \((T_*^{(\eta \nu )}, b_2)\). Hence, \(\mathfrak {R}f''(T_*^{(\eta \nu )}, b_2)\) is negative semi-definite, and since \(\det \mathfrak {R}f''(T_*^{(\eta \nu )}, b_2) < 0\), the matrix \(\mathfrak {R}f''(T_*^{(\eta \nu )}, b_2)\) is negative definite.

Let \(V_n^{(\eta \nu )}\) be an \(n^{-1/2} \log n\text {-neighborhood}\) of the point \((T_*^{(\eta \nu )}, b_2)\), and let \(V_n\) everywhere below denote the union of such neighborhoods of the stationary points under consideration, unless otherwise stated. Then for \(\left( T, s\right) \notin V_n\) and sufficiently large n we have

$$\begin{aligned}&\mathfrak {R}f(T_*^{(\eta \nu )}, b_2) - \mathfrak {R}f\left( T - \frac{i}{2}\Lambda _0, s\right) \\&\quad \ge \min _{\eta \ne \nu } \min _{\left( T - \frac{i}{2}\Lambda _0, s\right) \in \partial V_n^{(\eta \nu )}} \left\{ \mathfrak {R}f(T_*^{(\eta \nu )}, b_2) - \mathfrak {R}f\left( T - \frac{i}{2}\Lambda _0, s\right) \right\} \ge C\frac{\log ^2 n}{n}, \end{aligned}$$

Thus we can restrict the integration domain to \(V_n\).

Set \(q = (t_1, t_2, s),\, q^{*} = (t_\eta ^{*}, t_\nu ^{*}, b_2).\) Then expanding f and g by the Taylor formula and changing the variables \(q \rightarrow n^{-1/2}q + q^{*}\), we get

$$\begin{aligned} F_2(\Lambda )= & {} n^{-3/2} C_n(X) \cdot \frac{i \exp \{n(\lambda _0^2+b_2^2-2)/2 + \lambda _0(x_1 + x_2)/2\}}{(x_1 - x_2)} \\&\times \left( \sum \limits _{\eta \ne \nu } \int \limits _{[-\log n, \log n]^3} g(\mathfrak {R}T_*^{(\eta \nu )}, b_2) \exp \left\{ \frac{1}{2} q f''(T_*^{(\eta \nu )}, b_2) q^{T}\right\} dq + o(1)\right) . \end{aligned}$$

This is true because \(g(\mathfrak {R}T_*^{(\eta \nu )}, b_2) \ne 0\). Performing the Gaussian integration, we obtain

$$\begin{aligned} F_2(\Lambda )= & {} \left( \frac{2\pi }{n}\right) ^{3/2} C_n(X) \cdot \frac{i \exp \Big \{n(\lambda _0^2+b_2^2-2)/2 + \lambda _0(x_1 + x_2)/2\Big \}}{(x_1 - x_2)} \nonumber \\&\times \bigg (\sum \limits _{\eta \ne \nu } g(\mathfrak {R}T_*^{(\eta \nu )}, b_2) {\det }^{-1/2} \{ -f''(T_*^{(\eta \nu )}, b_2) \} + o(1)\bigg ). \end{aligned}$$
(3.16)

Since

$$\begin{aligned} g(\mathfrak {R}T_*^{(\eta \nu )}, b_2) = 2t_\eta ^{*} e^{-it_\eta ^{*}(x_1 - x_2)}, \quad \eta \ne \nu , \end{aligned}$$

and \(C_n\) has the form (2.13), we have (3.13).

If \(\lambda _0 = 0\), then repeating the above steps we obtain a formula similar to (3.16). The only difference is that there are two more terms in the sum (i.e. there are as many terms as stationary points). Since g vanishes at the points with \(t_1 = t_2\), we obtain exactly (3.16), and hence the asymptotic relation (3.13) remains valid. \(\square \)

The assertion (i) of the theorem follows immediately from Lemma 2.
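Indeed, for \(x_1 = x_2 = x_j\) the last factor in (3.13) is understood as its limit \(\sqrt{4-4b_2^2-\lambda _0^2}/2\), so that (1.6) and (3.13) give

$$\begin{aligned} D_2(\Lambda ) = \frac{\sin ((x_1-x_2)\sqrt{4-4b_2^2-\lambda _0^2}/2)}{(x_1-x_2)\sqrt{4-4b_2^2-\lambda _0^2}/2}(1 + o(1)), \end{aligned}$$

and since \(b_2^2 = 2/p - 2/n \rightarrow 2/p\), as \(n \rightarrow \infty \), we have \(4 - 4b_2^2 \rightarrow 4 - 8/p = \lambda _*(p)^2\).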

Lemma 3

Let all the conditions of Theorem 1 hold and \(\lambda _0^2 > 4 - 4b_2^2 + \varepsilon \) for some \(\varepsilon > 0\). Then \(F_2(\Lambda )\) satisfies the asymptotic relations

  1. (i)

    for \(\lambda _0 \ne 0\)

    $$\begin{aligned} F_2(\Lambda ) = \frac{\alpha ^2 \exp \Big \{n\widehat{A} + (1 - \alpha )\lambda _0(x_1 + x_2)\Big \}}{(2\alpha -1)^{3/2}(2-\alpha (1-\alpha )(3-2\alpha )\lambda _0^2)^{1/2}}(1 + o(1)), \end{aligned}$$
    (3.17)

    where \(\alpha \) and \(\widehat{A}\) satisfy

    $$\begin{aligned} \alpha \in (1/2, 1), \qquad&\alpha (1-\alpha )\lambda _0^2+\left( \frac{1-\alpha }{\alpha }\right) ^2b_2^2 - 1 = 0, \end{aligned}$$
    (3.18)
    $$\begin{aligned}&\widehat{A} = f \left( - i\alpha \Lambda _0, \frac{1-\alpha }{\alpha }b_2\right) . \end{aligned}$$
    (3.19)
  2. (ii)

    for \(\lambda _0 = 0\)

    $$\begin{aligned} F_2(\Lambda ) = b_2^n e^{-n/2} \frac{b_2^2}{(b_2^2 - 1)^{3/2}}\left( b_2 + 1 + (-1)^n (b_2 - 1) \right) (1 + o(1)). \end{aligned}$$
    (3.20)

Proof

Choose \(\mathfrak {I}t_1 = \mathfrak {I}t_2 = -\alpha \lambda _0\), \(s \in \mathbb {R}\) as the good contour with the stationary point \(\left( -i\alpha \lambda _0, -i\alpha \lambda _0, \frac{1-\alpha }{\alpha }b_2\right) \), where \(\alpha \) satisfies (3.18). The existence and uniqueness of such an \(\alpha \) follow from the fact that the l.h.s. of (3.18) is a monotone decreasing function of \(\alpha \) whose values at \(\alpha = 1/2\) and \(\alpha = 1\) have different signs. Everywhere below we assume that \(\alpha \) is the solution of (3.18).
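Indeed, denoting the l.h.s. of (3.18) by \(\varphi (\alpha )\), both summands of \(\varphi \) decrease in \(\alpha \) on (1/2, 1), and

$$\begin{aligned} \varphi (1/2) = \frac{\lambda _0^2}{4} + b_2^2 - 1> \frac{\varepsilon }{4} > 0, \qquad \varphi (1 - 0) = -1 < 0, \end{aligned}$$

where the first inequality follows from the condition \(\lambda _0^2 > 4 - 4b_2^2 + \varepsilon \).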

If \(\lambda _0 = 0\), we have two stationary points on the contour: \((0, 0, \pm 1)\).

Consider the case \(\lambda _0 \ne 0\). Shifting the variables \(t_j \rightarrow t_j - i\alpha \lambda _0\), similarly to (3.15) we get

$$\begin{aligned} F_2(\Lambda ) = C_n(X) \frac{ie^{(1 - \alpha )\lambda _0(x_1 + x_2)}}{(x_1 - x_2)} \int \limits _{B_R} g(T, s)e^{n f\left( T - i\alpha \Lambda _0, s\right) } dT ds + O(e^{-nR^2/4}). \end{aligned}$$
(3.21)

It is easy to check that

$$\begin{aligned} f \bigg (- i\alpha \Lambda _0, \frac{1-\alpha }{\alpha }b_2\bigg )&= \frac{1}{2}\left( \frac{1-\alpha }{\alpha }\right) ^2 b_2^2 + (1-\alpha )\lambda _0^2 + \log {\frac{\alpha }{1-\alpha }} - 1; \\ \nonumber f'' \left( - i\alpha \Lambda _0, \frac{1-\alpha }{\alpha }b_2\right)&= -\left( \begin{array}{lll} 1 - (1 - \alpha )^2\lambda _0^2 &{} \left( \frac{1 - \alpha }{\alpha }\right) ^3 b_2^2 &{} ib_2 \lambda _0\frac{(1 - \alpha )^2}{\alpha } \\ \left( \frac{1 - \alpha }{\alpha }\right) ^3 b_2^2 &{} 1 - (1 - \alpha )^2\lambda _0^2 &{} ib_2 \lambda _0\frac{(1 - \alpha )^2}{\alpha } \\ ib_2 \lambda _0\frac{(1 - \alpha )^2}{\alpha } &{} ib_2 \lambda _0\frac{(1 - \alpha )^2}{\alpha } &{} 1 + b_2^2 \left( \frac{1 - \alpha }{\alpha }\right) ^2 \end{array} \right) ; \end{aligned}$$
(3.22)
$$\begin{aligned} \det f'' \bigg (- i\alpha \Lambda _0, \frac{1-\alpha }{\alpha }b_2\bigg )&= -\frac{2\alpha - 1}{\alpha ^2} \bigg (\alpha (1 - \alpha )(2\alpha - 1)\lambda _0^2 + 2b_2^2 \left( \frac{1 - \alpha }{\alpha }\right) ^2\bigg ) < 0; \\ \nonumber \det \mathfrak {R}f'' \bigg (- i\alpha \Lambda _0, \frac{1-\alpha }{\alpha }b_2\bigg )&= -\frac{2\alpha - 1}{\alpha ^2}\bigg (1 + b_2^2 \left( \frac{1 - \alpha }{\alpha }\right) ^2\bigg ) \\ \nonumber&\quad \times \bigg (\alpha (1 - \alpha )(2\alpha - 1)\lambda _0^2 + b_2^2 \left( \frac{1 - \alpha }{\alpha }\right) ^2\bigg ) < 0. \end{aligned}$$
(3.23)

In addition,

$$\begin{aligned} \mathfrak {R}f\left( T- i\alpha \Lambda _0, s\right) = h_{\alpha }(t_1, t_2, s, b_2, \lambda _0) + \frac{1}{2}\left( \frac{1-\alpha }{\alpha }\right) ^2 b_2^2 + (1-\alpha )\lambda _0^2 \end{aligned}$$

with \(h_\alpha \) of (3.2). So, similarly to the proof of Lemma 2 one can write

$$\begin{aligned} F_2(\Lambda )= & {} C_n(X) \frac{ie^{n\widehat{A} + (1 - \alpha )\lambda _0(x_1 + x_2)}}{(x_1 - x_2)} \nonumber \\&\times \bigg ( \int \limits _{V_n} g(T, s)e^{n (f\left( T - i\alpha \Lambda _0, s\right) - \widehat{A})} dT ds + O\left( e^{-C \log ^2 n} \right) \bigg ), \end{aligned}$$
(3.24)

where \(V_n = U_{n^{-1/2}\log n}\left( \left( 0, \frac{1-\alpha }{\alpha }b_2\right) \right) \) and \(\widehat{A}\) is defined in (3.19).

Repeating the argument of Lemma 2, we get

$$\begin{aligned} F_2(\Lambda )= & {} n^{-3/2} C_n(X) \cdot \frac{i \exp \{n\widehat{A} + (1 - \alpha )\lambda _0(x_1 + x_2)\}}{(x_1 - x_2)} (1 + o(1)) \\&\times \int \limits _{[-\log n, \log n]^3} \frac{1}{\sqrt{n}}\bigg (\left<g'\left( 0, \frac{1-\alpha }{\alpha }b_2\right) , q \right> + \frac{1}{2\sqrt{n}}q g''\left( 0, \frac{1-\alpha }{\alpha }b_2\right) q^T \bigg ) \\&\times e^{n (f\left( T - i\alpha \Lambda _0, s\right) - \widehat{A})} dq, \end{aligned}$$

where \(\langle \cdot , \cdot \rangle \) denotes the scalar product in \(\mathbb {R}^3\). The integral of the first summand is zero, because f is symmetric with respect to \(t_1\) and \(t_2\), while \(\dfrac{\partial g}{\partial t_1}\left( 0, \frac{1-\alpha }{\alpha }b_2\right) = -\dfrac{\partial g}{\partial t_2}\left( 0, \frac{1-\alpha }{\alpha }b_2\right) = 1\). Expanding the exponent into a Taylor series, we obtain

$$\begin{aligned} F_2(\Lambda )= & {} n^{-5/2} C_n(X) \cdot \frac{i \exp \{n\widehat{A} + (1 - \alpha )\lambda _0(x_1 + x_2)\}}{(x_1 - x_2)} (1 + o(1)) \nonumber \\&\times \int \limits _{\mathbb {R}^3} \frac{1}{2}q g''\left( 0, \frac{1-\alpha }{\alpha }b_2\right) q^T \exp \left\{ \frac{1}{2} q f''\left( -i\alpha \Lambda _0, \frac{1-\alpha }{\alpha }b_2\right) q^{T}\right\} dq.\nonumber \\ \end{aligned}$$
(3.25)

It is easy to see that

$$\begin{aligned} g''\left( 0, \frac{1-\alpha }{\alpha }b_2\right) = \left( \begin{array}{lll} -2ix_1 &{} i(x_1-x_2) &{} 0 \\ i(x_1-x_2) &{} 2ix_2 &{} 0 \\ 0 &{} 0 &{} 0 \end{array} \right) . \end{aligned}$$

Computing the integral in (3.25), we have (3.17).
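Here one can use the standard Gaussian moment identity, valid by analytic continuation for a complex symmetric matrix A with positive definite real part:

$$\begin{aligned} \int \limits _{\mathbb {R}^3} qBq^{T} \exp \Big \{-\frac{1}{2}qAq^{T}\Big \} dq = (2\pi )^{3/2}{\det }^{-1/2}A \cdot {\text {tr}}\big (BA^{-1}\big ), \end{aligned}$$

applied with \(A = -f''\left( -i\alpha \Lambda _0, \frac{1-\alpha }{\alpha }b_2\right) \) and \(B = \frac{1}{2}g''\left( 0, \frac{1-\alpha }{\alpha }b_2\right) \).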

If \(\lambda _0 = 0\), then \(\alpha = \frac{b_2}{b_2 + 1}\). As was mentioned above, in this case there are two stationary points on the contour. The asymptotics of the integral (3.21) in the neighborhood of the second stationary point (i.e. \((0, 0, -1)\)) is computed in the same way as above. (3.17) is then transformed into (3.20). \(\square \)

The assertion (ii) follows from (3.17) and (3.20).

Now we proceed to the proof of the agreement between cases (i) and (ii) of Theorem 1 (cf. Remark 2).

Lemma 4

Let all the conditions of Theorem 1 hold and \(\lambda _0^2 = 4 - 4b_2^2 - \delta _n\), where \(\delta _n \rightarrow 0\). Then \(F_2(\Lambda )\) satisfies the asymptotic relation

$$\begin{aligned} F_2(\Lambda ) = Y_n \exp \{n(2 - 3b_2^2 - \delta _n)/2 + \lambda _0(x_1 + x_2)/2\}(1 + o(1)), \end{aligned}$$
(3.26)

where

$$\begin{aligned} Y_n = \left\{ \begin{array}{l@{\quad }l} n\delta _n^{1/2}, &{} \text {if }\, \delta _n > 0, n\delta _n^2 \rightarrow \infty ; \\ Cn^{3/4}, &{} \text {if }\, n\delta _n^2 \rightarrow \text {const}; \\ C(-\delta _n)^{-3/2}, &{} \text {if }\, \delta _n < 0, n\delta _n^2 \rightarrow \infty . \end{array}\right. \end{aligned}$$
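Note that the three regimes in \(Y_n\) match at the crossover \(\delta _n \asymp n^{-1/2}\) (i.e. \(n\delta _n^2 \asymp 1\)), where

$$\begin{aligned} n\delta _n^{1/2} \asymp n \cdot n^{-1/4} = n^{3/4}, \qquad (-\delta _n)^{-3/2} \asymp (n^{-1/2})^{-3/2} = n^{3/4}, \end{aligned}$$

so the order of \(Y_n\) changes continuously between the three cases.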

Proof

Consider the case \(\delta _n \ge 0\). Choose the same contour as in the proof of Lemma 2. The stationary points are also the same.

The change of variables \(\tau = t_1 + t_2\), \(\sigma = t_1 - t_2\) gives us

$$\begin{aligned} F_2(\Lambda ) = C_n(X) \frac{ie^{\lambda _0(x_1 + x_2)}}{2(x_1 - x_2)}\int \limits _{\mathbb {R}^3} \sigma e^{-i((x_1 + x_2)\tau + (x_1 - x_2)\sigma )/2} e^{n\tilde{f}(\tau , \sigma , s)} d\tau d\sigma ds, \end{aligned}$$

where \(\tilde{f}(\tau , \sigma , s) = f(T, s)\). Set

$$\begin{aligned} \tau ^*&= -i\lambda _0; \\ \sigma _\eta ^*&= t_\eta ^* - t_{3 - \eta }^* = (-1)^\eta \sqrt{4 - 4b_2^2 - \lambda _0^2} = (-1)^\eta \delta _n^{1/2}. \end{aligned}$$

It is easy to see that

$$\begin{aligned} \tilde{f}(\tau ^*, \sigma _\eta ^*, b_2)&= 1 - \frac{3}{2}b_2^2 - \frac{1}{2}\delta _n; \\ \tilde{f}''(\tau ^*, \sigma _\eta ^*, b_2)&= -\left( \begin{array}{lll} b_2^2 + \delta _n/4 &{} (-1)^\eta i\lambda _0\delta _n^{1/2}/4 &{} ib_2\lambda _0/2 \\ (-1)^\eta i\lambda _0\delta _n^{1/2}/4 &{} \delta _n/4 &{} (-1)^\eta b_2\delta _n^{1/2}/2 \\ ib_2\lambda _0/2 &{} (-1)^\eta b_2\delta _n^{1/2}/2 &{} b_2^2 + 1 \end{array} \right) ; \\ \det \tilde{f}''(\tau ^*, \sigma _\eta ^*, b_2)&= -\delta _n/4; \\ \frac{\partial ^3 \tilde{f}}{\partial \sigma ^3}(\tau ^*, \sigma _\eta ^*, b_2)&= \frac{(-1)^{\eta + 1}}{4} \delta _n^{1/2}(\delta _n - 3); \\ \frac{\partial ^4 \tilde{f}}{\partial \sigma ^4}(\tau ^*, \sigma _\eta ^*, b_2)&= -\frac{3}{4}. \end{aligned}$$

Let us choose \(V_n\) as a union of the products of the neighborhoods of \(\tau ^*\), \(\sigma _1^*\), \(b_2\) and of \(\tau ^*\), \(\sigma _2^*\), \(b_2\) such that the radii of the neighborhoods corresponding to \(\tau \) and s are equal to \(\log n/\sqrt{n}\), whereas the radius of the neighborhood corresponding to \(\sigma \) is equal to \(\log n/\sqrt{n\delta _n}\) if \(n\delta _n^2 \rightarrow \infty \), and to \(\log n/n^{1/4}\) otherwise. Similarly to the proof of Lemma 2, it can be shown that for \(\left( \tau , \sigma , s\right) \notin V_n\) and sufficiently large n

$$\begin{aligned} \mathfrak {R}\tilde{f}(\tau ^*, \sigma _\eta ^*, b_2) - \mathfrak {R}\tilde{f}(\tau , \sigma , s) \ge C\frac{\log ^2 n}{n}. \end{aligned}$$
(3.27)

Let \(n\delta _n^2 \rightarrow \infty \). Then, in the same way as before, with the only distinction that the change of the variable \(\sigma \) is \(\sigma \rightarrow (n\delta _n)^{-1/2}\sigma + \sigma _\eta ^*\), we obtain

$$\begin{aligned} F_2(\Lambda )= & {} n^{-3/2} C_n(X) \cdot \frac{i \exp \{n(2 - 3b_2^2 - \delta _n)/2 + \lambda _0(x_1 + x_2)/2\}}{2(x_1 - x_2)} (1 + o(1)) \nonumber \\&\times \sum \limits _{\eta =1}^2 \int \limits _{[-\log n, \log n]^3} \left( (-1)^\eta + i(x_1 - x_2)\delta _n^{1/2}/2 + \frac{\sigma }{\delta _n\sqrt{n}} \right) \nonumber \\&\times \, e^{\frac{1}{2} A_{\tilde{f}}^{(\eta )}(\tau , \sigma , s)} d\tau d\sigma ds, \end{aligned}$$
(3.28)

where \(A_{\tilde{f}}^{(\eta )}\) is the quadratic form defined by the matrix obtained from \(\tilde{f}''(\tau ^*, \sigma _\eta ^*, b_2)\) by dividing all entries of the second row and the second column by \(\delta _n^{1/2}\), i.e.

$$\begin{aligned} A_{\tilde{f}}^{(\eta )}(\tau , \sigma , s) = (\tau , \delta _n^{-1/2}\sigma , s)\tilde{f}''(\tau ^*, \sigma _\eta ^*, b_2)(\tau , \delta _n^{-1/2}\sigma , s)^{T}. \end{aligned}$$
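Explicitly (a direct bookkeeping from the second derivatives displayed above), the matrix of \(A_{\tilde{f}}^{(\eta )}\) is

$$\begin{aligned} -\left( \begin{array}{lll} b_2^2 + \delta _n/4 &{} (-1)^\eta i\lambda _0/4 &{} ib_2\lambda _0/2 \\ (-1)^\eta i\lambda _0/4 &{} 1/4 &{} (-1)^\eta b_2/2 \\ ib_2\lambda _0/2 &{} (-1)^\eta b_2/2 &{} b_2^2 + 1 \end{array} \right) , \end{aligned}$$

and its determinant equals \(\delta _n^{-1}\det \tilde{f}''(\tau ^*, \sigma _\eta ^*, b_2) = -1/4\), so the Gaussian integration produces an n- and \(\delta _n\)-independent factor.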

Therefore we have (3.26).

Now let \(n\delta _n^2 \rightarrow 0\). Then, making the change of variable \(\tilde{\sigma } = \sigma ^2\), we get

$$\begin{aligned} F_2(\Lambda ) = C_n(X) \frac{e^{\lambda _0(x_1 + x_2)}}{2(x_1 - x_2)}\int \limits _0^{+\infty } d\tilde{\sigma } \int \limits _{\mathbb {R}^2} \sin ((x_1 - x_2)\sqrt{\tilde{\sigma }}/2)e^{-i(x_1 + x_2)\tau /2} e^{n\tilde{f}(\tau , \sqrt{\tilde{\sigma }}, s)} d\tau ds. \end{aligned}$$

Let \(\tilde{V}_n\) be the product of the \(\frac{\log n}{\sqrt{n}}\text {-neighborhoods}\) of 0 and \(b_2\). Then we can shift the variable \(\tau \rightarrow \tau - i\lambda _0\) and, in view of (3.27), restrict the integration domain to \(\left[ 0, \frac{\log ^2 n}{\sqrt{n}}\right] \times \tilde{V}_n\):

$$\begin{aligned} F_2(\Lambda )= & {} C_n(X) \frac{e^{\lambda _0(x_1 + x_2)/2}}{2(x_1 - x_2)} \int \limits _0^{\frac{\log ^2 n}{\sqrt{n}}}(1 + o(1)) d\tilde{\sigma } \\&\times \int \limits _{\tilde{V}_n} \sin ((x_1 - x_2)\sqrt{\tilde{\sigma }}/2)e^{-i(x_1 + x_2)\tau /2} e^{n\tilde{f}(\tau - i\lambda _0, \sqrt{\tilde{\sigma }}, s)} d\tau ds. \end{aligned}$$

Expanding \(\tilde{f}\) and the sine by the Taylor formula near \((-i\lambda _0, 0, b_2)\) and 0 respectively, and changing the variables \(\tau \rightarrow n^{-1/2}\tau \), \(\tilde{\sigma } \rightarrow n^{-1/2}\tilde{\sigma }\), \(s \rightarrow n^{-1/2}s + b_2\), we have

$$\begin{aligned} F_2(\Lambda )= & {} n^{-3/2}C_n(X) \frac{\exp \{n(2 - 3b_2^2 - \delta _n)/2 + \lambda _0(x_1 + x_2)/2\}}{2(x_1 - x_2)} \nonumber \\&\times \int \limits _0^{\log ^2 n} (1 + o(1))d\tilde{\sigma } \int \limits _{[-\log n,\, \log n]^2} (x_1 - x_2)\frac{\sqrt{\tilde{\sigma }}}{2\root 4 \of {n}} e^{\frac{1}{2} B_{\tilde{f}}(\tau , \tilde{\sigma }, s)} d\tau ds, \end{aligned}$$
(3.29)

where \(B_{\tilde{f}}\) is a quadratic form defined by the matrix

$$\begin{aligned} \left( \begin{array}{lll} \frac{\partial ^2 \tilde{f}}{\partial \tau ^2} &{} \frac{1}{2} \frac{\partial ^3 \tilde{f}}{\partial \tau \partial \sigma ^2} &{} \frac{\partial ^2 \tilde{f}}{\partial \tau \partial s} \\ \frac{1}{2} \frac{\partial ^3 \tilde{f}}{\partial \tau \partial \sigma ^2} &{} \frac{1}{12} \frac{\partial ^4 \tilde{f}}{\partial \sigma ^4} &{} \frac{1}{2} \frac{\partial ^3 \tilde{f}}{\partial \sigma ^2 \partial s} \\ \frac{\partial ^2 \tilde{f}}{\partial \tau \partial s} &{} \frac{1}{2} \frac{\partial ^3 \tilde{f}}{\partial \sigma ^2 \partial s} &{} \frac{\partial ^2 \tilde{f}}{\partial s^2} \end{array} \right) (-i\lambda _0, 0, b_2) = -\left( \begin{array}{lll} b_2^2 &{} i\lambda _0/8 &{} ib_2\lambda _0/2 \\ i\lambda _0/8 &{} 1/16 &{} b_2/4 \\ ib_2\lambda _0/2 &{} b_2/4 &{} 1 + b_2^2 \end{array} \right) + o(1)P, \end{aligned}$$

where all entries of the matrix P are equal to 1. Performing the Gaussian integration, we obtain

$$\begin{aligned} F_2(\Lambda )= & {} \frac{2\pi }{n^{3/2}} \cdot \frac{C_n(X)}{4\root 4 \of {n}} \cdot \exp \{n(2 - 3b_2^2 - \delta _n)/2 + \lambda _0(x_1 + x_2)/2\} \\&\int \limits _0^{+\infty } \sqrt{\tilde{\sigma }} \exp \left\{ -\frac{\tilde{\sigma }^2}{32}\right\} d\tilde{\sigma } (1 + o(1)) \end{aligned}$$

which implies (3.26).
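For completeness: the remaining \(\tilde{\sigma }\)-integral is a standard one; the substitution \(u = \tilde{\sigma }^2/32\) gives

$$\begin{aligned} \int \limits _0^{+\infty } \sqrt{\tilde{\sigma }} \exp \left\{ -\frac{\tilde{\sigma }^2}{32}\right\} d\tilde{\sigma } = \frac{32^{3/4}}{2}\int \limits _0^{+\infty } u^{-1/4}e^{-u}du = \frac{32^{3/4}}{2}\Gamma \left( \frac{3}{4}\right) , \end{aligned}$$

an n-independent constant which is absorbed into C in \(Y_n\).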

If \(n\delta _n^2 \rightarrow \text {const}\), then a certain third-degree polynomial appears instead of \(B_{\tilde{f}}\) in the last exponent in (3.29). Thus, the asymptotics of \(F_2(\Lambda )\) differs from (3.26) only by a multiplicative n-independent constant, which is absorbed by C in \(Y_n\). For negative \(\delta _n\), if \(n\delta _n^2 \rightarrow 0\), all the changes in the equations amount to multiplication by factors equal to \(1 + o(1)\), so (3.26) is unchanged. If \(n\delta _n^2 \rightarrow \infty \), then the combination of the argument of the case \(\delta _n \ge 0\) with that of Lemma 3 causes some changes in (3.28) and yields \(Y_n = C(-\delta _n)^{-3/2}\) in (3.26). \(\square \)

4 Proof of Theorem 2

As in the case of Theorem 1, the proof of Theorem 2 is based on the application of the steepest descent method to the integral representation of \(F_{2m}\) obtained in Sect. 2. To this end a “good contour” and the stationary points of \(f_{2m}\), defined in (2.8), have to be chosen. We start with the choice of stationary points.

If \(p = n\) and hence \(b_l = \tilde{b}_{2k} = 0\), \(l > 1\), the proper stationary points are

$$\begin{aligned} t_j = t_j^* = {\left\{ \begin{array}{ll} \phantom {-}t^* \\ -\overline{t^*} \end{array}\right. },\, j = \overline{1,2m}, \quad R_l = 0, \end{aligned}$$

where

$$\begin{aligned} t^* = \frac{1}{2}\left( -i\lambda _0 + \sqrt{4 - \lambda _0^2}\right) . \end{aligned}$$
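A direct check, which we note for later use, shows that \(t^*\) and \(-\overline{t^*}\) are exactly the two roots of the quadratic equation

$$\begin{aligned} t^2 + i\lambda _0 t - 1 = 0, \end{aligned}$$

and that \(|t^*| = |-\overline{t^*}| = 1\) for \(\lambda _0 \in [-2, 2]\).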

Set

$$\begin{aligned}&\widetilde{T} = {{\mathrm{diag}}}\{t_j^*\}_{j = 1}^{2m}, \\&b = (b_2, \ldots , b_{2m}, \tilde{b}_4, \tilde{b}_6, \ldots , \tilde{b}_{2m}).\nonumber \end{aligned}$$
(4.1)

Since

$$\begin{aligned}&\left. \left( A_{2m}(T, R)f_{2m}'(T, R)\right) '\right| _{\begin{array}{c} T = \widetilde{T}\\ R = 0\\ b = 0 \end{array}} \\&\quad = (-1)^{m - 1}{{\mathrm{diag}}}\Big \{1 + \overline{t_1^*}^2, \ldots , 1 + \overline{t_{2m}^*}^2, e_1, \ldots , e_{\left( {\begin{array}{c}4m\\ 2m\end{array}}\right) - 4m^2 - 1}\Big \},\quad e_j = 1 \text { or } 2 \end{aligned}$$

is nondegenerate for \(\lambda _0 \in (-2, 2)\), by the implicit function theorem for sufficiently small b there exists a unique solution

$$\begin{aligned} T = T(b),\, R = R(b) \end{aligned}$$
(4.2)

of the equation

$$\begin{aligned} A_{2m}(T, R)f_{2m}'(T, R) = 0 \end{aligned}$$
(4.3)

such that \(T(0) = \widetilde{T}\), \(R(0) = 0\), and the solution depends continuously on b and \(\lambda _0\). When \(p \rightarrow \infty \), (2.11) and (2.14) yield \(b \rightarrow 0\). Therefore, the solutions (4.2) are the required stationary points.

Lemma 5

The solution (4.2) also has the following properties

  (1) \(T_{j_1 j_1}(b) = T_{j_2 j_2}(b)\), \(T_{k_1 k_1}(b) = T_{k_2 k_2}(b)\) for all \(j_1, j_2 \in I_+\), \(k_1, k_2 \in I_-\), where \(I_+ = \{j,\, t_j^* = t^*\}\), \(I_- = \{j,\, t_j^* = -\overline{t^*}\}\);

  (2) \((R_l)_{\alpha \beta }(b) = 0\), \(l = \overline{2, 2m}\), \(\alpha \ne \beta \).

Proof

Let

$$\begin{aligned} \pi T = {{\mathrm{diag}}}\{t_{\pi (j)}\}, \quad \pi R = (\pi R_2, \ldots , \pi R_{2m}), \quad \pi \in S_{2m}, \end{aligned}$$

where \(\pi R_l\) is the matrix such that \((\pi R_l)_{\alpha \beta } = (R_l)_{\pi (\alpha ) \pi (\beta )}\), the multi-index \(\pi (\alpha )\) contains the numbers \(\pi (\alpha _1), \ldots , \pi (\alpha _l)\) sorted in increasing order, and \(S_{2m}\) is the group of permutations of 2m elements. Then \(f_{2m}(\pi T, \pi R) = f_{2m}(T, R)\) for every \(\pi \in S_{2m}\). Hence it suffices to prove the lemma for those stationary points for which \(t_1^* = \ldots = t_k^* = -\overline{t_{k + 1}^*} = \ldots = -\overline{t_{2m}^*} = t^*\).

We are going to prove that there exists a solution of (4.3) that satisfies conditions (1) and (2) and \(T(0) = \widetilde{T}\), \(R(0) = 0\). This is equivalent to the existence of a solution of the system

$$\begin{aligned} A_{2m}(T, R)\frac{\partial }{\partial t_j}f_{2m}(T, R)= & {} 0, \quad j = 1, 2m; \\ A_{2m}(T, R)\frac{\partial }{\partial (R_l)_{\alpha \alpha }}f_{2m}(T, R)= & {} 0, \quad \alpha \in I_{2m,l},\, l = \overline{2,2m},\nonumber \end{aligned}$$
(4.4)

where T and R satisfy (1) and (2), with respect to the variables \(t_1\), \(t_{2m}\), \((R_l)_{\alpha \alpha }\). Since the derivative of the l.h.s. of the system at \(T = \widetilde{T}\), \(R = 0\), \(b = 0\) is nondegenerate, the system has a solution. In view of the uniqueness of the solution of (4.3), the solution of (4.4) coincides with (4.2). \(\square \)

The next step is the choice of the “contour” (in this case it is a \(d_{2m}\)-dimensional manifold, \(d_{2m} = \left( {\begin{array}{c}4m\\ 2m\end{array}}\right) - 4m^2 - 1 + 2m\)). For each variable we consider some contour, and the required manifold \(\hat{M}_{2m}\) is the product of these contours. Fix some variable. Order the corresponding components of the stationary points by increasing real part (if the real parts are equal, order by increasing imaginary part). Then the contour is a polygonal chain that connects the points in the order described above. The infinite segments of the polygonal chain are parallel to the real axis and directed from the first point to the left and from the last point to the right. A schematic sketch of this construction is given below.
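For illustration only, the contour for a single variable can be sketched as follows (a hypothetical helper; the function name and the representation of the infinite rays are ours and are not used in the proof):

```python
# Sketch: the polygonal-chain contour for one variable, given the complex
# components of the stationary points corresponding to that variable.
def polygonal_contour(points):
    # Order by increasing real part; break ties by increasing imaginary part.
    pts = sorted(points, key=lambda z: (z.real, z.imag))
    # Finite segments connect consecutive points in this order; the two
    # infinite segments are horizontal rays, to the left of the first point
    # and to the right of the last one.
    return {
        "left_ray_from": pts[0],
        "segments": list(zip(pts, pts[1:])),
        "right_ray_from": pts[-1],
    }

# Example with the two components t* and -conj(t*) for lambda_0 = 1:
t_star = 0.5 * (-1j + (4 - 1) ** 0.5)
print(polygonal_contour([t_star, -t_star.conjugate()]))
```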

The Cauchy theorem and (2.7) imply

$$\begin{aligned} F_{2m}(\Lambda ) = C_n^{(2m)}(X) \frac{i^{2m^2 - m}\exp \left\{ \lambda _0\sum \limits _{j = 1}^{2m} x_j\right\} }{\Delta (X)} \int \limits _{\hat{M}_{2m}} g_{2m}(T) e^{nf_{2m}(T, R)} dT dR, \end{aligned}$$

where

$$\begin{aligned} g_{2m}(T) = \Delta (T) \exp \left\{ -i\sum \limits _{j=1}^{2m} x_jt_j\right\} . \end{aligned}$$

Moreover,

$$\begin{aligned} F_{2m}(\Lambda ) = C_n^{(2m)}(X) \frac{i^{2m^2 - m}\exp \left\{ \lambda _0\sum \limits _{j = 1}^{2m} x_j\right\} }{\Delta (X)} \left( \int \limits _{\hat{M}_{2m}^N} g_{2m}(T) e^{nf_{2m}(T, R)} dT dR + r(n, N)\right) , \end{aligned}$$

where

$$\begin{aligned} \hat{M}_{2m}^N = \{\zeta \in \hat{M}_{2m} | \left\| \mathfrak {R}\zeta \right\| _{\infty } \le N\}, \\ |r(n, N)| < Ce^{-nN^2/4},\, N \rightarrow \infty , \end{aligned}$$

with \(\left\| y \right\| _\infty = \max \limits _j |y_j|\).

Let \(\varepsilon > 0\) be arbitrary. Then, since \((T(b), R(b)) \underset{n \rightarrow \infty }{\longrightarrow } (\widetilde{T}, 0)\), for sufficiently large n and for every \((T, R) \in \hat{M}_{2m}^N\) we have

$$\begin{aligned} \left| \mathfrak {R}f_{2m}^0\bigg (\mathfrak {R}T - \frac{i}{2}\Lambda _0, \mathfrak {R}R\bigg ) - \mathfrak {R}f_{2m}^0(T, R)\right| < \varepsilon , \end{aligned}$$

where \(f_{2m}^0(T, R) = f_{2m}(T, R)|_{b = 0}\). Also, \(f_{2m}(T, R) \rightrightarrows f_{2m}^0(T, R)\) as \(n \rightarrow \infty \) uniformly in \((T, R) \in K\) for any compact set K. Hence, for sufficiently large n

$$\begin{aligned} \big | \mathfrak {R}f_{2m}(T, R) - \mathfrak {R}f_{2m}^0(T, R) \big | < \varepsilon , \quad (T, R) \in \hat{M}_{2m}^N. \end{aligned}$$

Consider the point \((T^0, R^0) \in \hat{M}_{2m}^N\) such that \(\mathfrak {R}T^0 = \mathfrak {R}\widetilde{T}\), \(\mathfrak {R}R^0 = 0\). Then \(\mathfrak {R}f_{2m}(T^0, R^0) > \mathfrak {R}f_{2m}^0(\widetilde{T}, 0) - 2\varepsilon \). Thus,

$$\begin{aligned} \max _{(T, R) \in \hat{M}_{2m}^N} \mathfrak {R}f_{2m}(T, R) > \mathfrak {R}f_{2m}^0(\widetilde{T}, 0) - 2\varepsilon = \max _{\begin{array}{c} \mathfrak {I}T = \Lambda _0/2\\ |\mathfrak {R}t_j| \le N \end{array}} \mathfrak {R}f_{2m}^0 (T, 0) - 2\varepsilon . \end{aligned}$$

Therefore, if \(\mathfrak {R}f_{2m}(T^1, R^1) = \max \limits _{(T, R) \in \hat{M}_{2m}^N} \mathfrak {R}f_{2m}(T, R)\), then

$$\begin{aligned}&(T^1, R^1) \in \Big \{(T, R) \in \hat{M}_{2m}^N |\, \mathfrak {R}f_{2m}(T, R) > \mathfrak {R}f_{2m}^0(\widetilde{T}, 0) - 2\varepsilon \Big \} \\&\quad \subset \Big \{(T, R) \in \hat{M}_{2m}^N |\, \mathfrak {R}f_{2m}^0(\mathfrak {R}T - \frac{i}{2}\Lambda _0, \mathfrak {R}R) > \mathfrak {R}f_{2m}^0(\widetilde{T}, 0) - 4\varepsilon \Big \}. \end{aligned}$$

Hence, evidently, \((T^1, R^1) \rightarrow (\widetilde{T}, 0)\) as \(n \rightarrow \infty \) for a certain \(\widetilde{T}\) of the form (4.1).

Let \(V_n(T(b), R(b))\) be the neighborhood of the stationary point (T(b), R(b)) which contains the corresponding maximum point of \(\mathfrak {R}f_{2m}\) together with its \(\frac{\log n}{\sqrt{n}}\)-neighborhood, and such that \({\text {diam}} V_n(T(b), R(b)) \rightarrow 0\). It can be assumed that the union of these neighborhoods is invariant under the map \((T, R) \rightarrow (\pi T, \pi R)\) for every \(\pi \in S_{2m}\). Then, for the same reasons as in the proof of Theorem 1, we can restrict the integration domain to the union of the neighborhoods \(V_n\). Shifting the variables \(T \rightarrow T + \mathfrak {I}T(b)\), \(R \rightarrow R + \mathfrak {I}R(b)\) in each neighborhood and expanding \(g_{2m}\) by the Taylor formula, we get

$$\begin{aligned} F_{2m}(\Lambda )= & {} C_n^{(2m)}(X) \frac{i^{2m^2 - m}\exp \left\{ \lambda _0\sum \limits _{j = 1}^{2m} x_j\right\} }{\Delta (X)}(1 + o(1)) \nonumber \\&\times \sum e^{nf_{2m}(T(b), R(b))}\int \limits _{\hat{V}_n(T(b), R(b))} \left( \sum \limits _{|\alpha | \le 2m(m - 1)}n^{-|\alpha |/2}D^\alpha g_{2m}(T(b))t^\alpha \right. \nonumber \\&\left. +\, r_{g_{2m}}^{(2m(m - 1))}(T)\right) e^{nf_{2m}(T + \mathfrak {I}T(b), R + \mathfrak {I}R(b))} dT dR, \end{aligned}$$
(4.5)

where the summation is over all stationary points under consideration and

$$\begin{aligned}&\hat{V}_n(T(b), R(b)) = V_n(T(b), R(b)) - \mathfrak {I}(T(b), R(b)); \\&|r_{g_{2m}}^{(2m(m - 1))}(T)| \le C\sum \limits _{j} |t_j|^{2m(m - 1) + 1}. \end{aligned}$$

The number of terms of the Taylor series in (4.5) is the minimal number that allows us to obtain nonzero asymptotics.

Fix some stationary point (T(b), R(b)) and some multi-index \(\alpha \). Let \(\beta \le \alpha \) be a multi-index with \(\beta _{j_1} = \beta _{j_2}\) for some \(j_1 \ne j_2\), \(j_1, j_2 \in I_+\) or \(j_1, j_2 \in I_-\), where \(\beta \le \alpha \Leftrightarrow \forall j\ \beta _j \le \alpha _j\). Then

$$\begin{aligned}&\int \limits _{\hat{V}_n(T(b), R(b))} \left( D^{\alpha - \beta }\Delta (T)D^\beta \exp \left\{ -i\sum \limits _{j = 1}^{2m}x_j t_j\right\} \Biggl |_{T = T(b)} t^\alpha \right. \\&\quad \left. +\, D^{\hat{\alpha } - \beta }\Delta (T)D^\beta \exp \left\{ -i\sum \limits _{j = 1}^{2m}x_j t_j\right\} \Biggl |_{T = T(b)} t^{\hat{\alpha }}\right) e^{nf_{2m}(T + \mathfrak {I}T(b), R + \mathfrak {I}R(b))} dTdR = 0, \end{aligned}$$

where \(\hat{\alpha }\) is the multi-index obtained by swapping \(\alpha _{j_1}\) and \(\alpha _{j_2}\) in \(\alpha \). Hence, such summands cancel pairwise, and in the sum in (4.5) only the summands with \(|\alpha | = 2m(m - 1)\) remain.

Changing the variables \(T \rightarrow n^{-1/2}T\), \(R \rightarrow n^{-1/2} R\), we get

$$\begin{aligned} F_{2m}(\Lambda )= & {} n^{-d_{2m}/2} C_n^{(2m)}(X) \frac{i^{2m^2 - m}\exp \left\{ \lambda _0\sum \limits _{j = 1}^{2m} x_j\right\} }{\Delta (X)}(1 + o(1)) \sum \limits \bigg [ e^{nf_{2m}(T(b), R(b))} \nonumber \\&\times \int \limits _{\mathcal {V}_n(T(b), R(b))} \bigg (n^{-m(m - 1)}\sum \limits _{|\alpha | = 2m(m - 1)}D^\alpha g_{2m}(T(b))t^\alpha + r_{g_{2m}}^{(2m(m - 1))}\left( \frac{1}{\sqrt{n}}T\right) \bigg ) \nonumber \\&\times \exp \left\{ \frac{1}{2} q f_{2m}''(T(b), R(b)) q^T\right\} \left( 1 + \frac{r_6(q)}{\sqrt{n}}\right) dq\bigg ], \end{aligned}$$
(4.6)

where q is a vector which consists of all integration variables, \(dq = dTdR\), \(\mathcal {V}_n(T(b), R(b)) = \sqrt{n}\hat{V}_n(T(b), R(b))\) and \(|r_6(q)| \le C \sum \limits _{j} |q_j|^3\).

(4.4) implies that, as \(n \rightarrow \infty \),

$$\begin{aligned} (R_l)_{\alpha \alpha }&= o(b_2), \quad l = \overline{3, 2m}; \\ (R_2)_{\alpha \alpha }&= \frac{b_2}{T_{\alpha _1 \alpha _1}(0)T_{\alpha _2 \alpha _2}(0)} + o(b_2); \\ T_{jj}(b)&= T_{jj}(0) - \frac{(2m - \chi (j))T_{jj}(0) + \chi (j) - 1}{T_{jj}(0)^3 + T_{jj}(0)^5}b_2^2 + o(b_2^2), \end{aligned}$$

where

$$\begin{aligned} \chi (j) = {\left\{ \begin{array}{ll} \#I_+, \quad \text {if } j \in I_+; \\ \#I_-, \quad \text {if } j \in I_-. \end{array}\right. } \end{aligned}$$

Therefore,

$$\begin{aligned} \mathfrak {R}f_{2m}(T(b), R(b)) = (-1)^m + \chi (j)(2m - \chi (j))\cdot \frac{\lambda _0^2(4 - \lambda _0^2)}{2}b_2^2 + o(b_2^2). \end{aligned}$$

Thus, since \(\chi (2m - \chi ) = m^2 - (m - \chi )^2\) attains its maximum exactly at \(\chi = m\), the value of \(\mathfrak {R}f_{2m}\) at the stationary points of the form (4.2) with \(\#I_+ = m\) is greater than that at the other stationary points of such a form for \(\lambda _0 \in (-2, 2)\backslash \{0\}\). For \(\lambda _0 = 0\) the values of \(\mathfrak {R}f_{2m}\) at the stationary points are equal, because (4.2) depends continuously on \(\lambda _0\). This yields that the sum in (4.6) may be restricted to the sum over the stationary points with \(\#I_+ = m\) only (for \(\lambda _0 = 0\) the other summands are of lower order in n). We have

$$\begin{aligned} F_{2m}(\Lambda )= & {} n^{-d_{2m}/2 - m(m - 1)} C_n^{(2m)}(X) \frac{i^{2m^2 - m}\exp \left\{ \lambda _0\sum \limits _{j = 1}^{2m} x_j\right\} }{\Delta (X)}(1 + o(1)) \\&\times \left( \sum \limits _{\#I_+ = m} e^{nf_{2m}(T(b), R(b))} \int \limits _{\mathbb {R}^{d_{2m}}} \sum \limits _{|\alpha | = 2m(m - 1)}D^\alpha g_{2m}(T(b))t^\alpha \right. \\&\times \left. \exp \left\{ \frac{1}{2} q f_{2m}''(T(b), R(b)) q^T\right\} dq\right) . \end{aligned}$$

Consider the term with \(I_+ = \{1, 2, \ldots , m\}\). The Gaussian integration gives us

$$\begin{aligned}&C n^{m^2} i^{3m^2 - 2m}\frac{\Delta (x_1, \ldots , x_m)\Delta (x_{m + 1}, \ldots , x_{2m})}{\Delta (X)}\exp \left\{ \frac{mn}{2}(\lambda _0^2 - 2) + \frac{\lambda _0}{2}\sum \limits _{j = 1}^{2m} x_j\right\} \\&\times \exp \left\{ -\frac{i\sqrt{4 - \lambda _0^2}}{2}\sum \limits _{j = 1}^m(x_j - x_{m + j})\right\} (1 + o(1)) = C n^{m^2} (-1)^{m(m + 1)/2} \\&\times \frac{\exp \left\{ -\frac{i\sqrt{4 - \lambda _0^2}}{2}\sum \limits _{j = 1}^m(x_j - x_{m + j})\right\} }{i^{m}\prod \limits _{j,k = 1}^m (x_j - x_{m + k})} \exp \left\{ \frac{mn}{2}(\lambda _0^2 - 2) + \frac{\lambda _0}{2}\sum \limits _{j = 1}^{2m} x_j\right\} (1 + o(1)), \end{aligned}$$

where C is some real n-independent constant.

On the other hand,

$$\begin{aligned} \hat{S}_{2m}(X) = \frac{\det \left\{ \dfrac{e^{i\pi \rho _{sc}(\lambda _0)(x_j - x_{m + k})} - e^{-i\pi \rho _{sc}(\lambda _0)(x_j - x_{m + k})}}{(x_j - x_{m + k})}\right\} _{j,k = 1}^m}{(2i\pi \rho _{sc}(\lambda _0))^m \Delta (x_1, \ldots , x_m)\Delta (x_{m + 1}, \ldots , x_{2m})}, \end{aligned}$$

where \(\hat{S}_{2m}\) is defined in (1.7) and \(\rho _{sc}\) is defined in (1.4). The determinant in the numerator of the r.h.s. is the sum of \(\exp \Big \{i\pi \rho _{sc}(\lambda _0)\sum \limits _{j = 1}^{2m} \varepsilon _j x_j\Big \}\) over all collections \(\{\varepsilon _j\}\), which consist of m elements \(+1\) and m elements \(-1\), with certain coefficients. Since (see [19, Problem 7.3])

$$\begin{aligned} (-1)^{m(m - 1)/2}\frac{\prod \limits _{j < k} (u_j - u_k)(v_j - v_k)}{\prod \limits _{j,k = 1}^m (u_j - v_k)} = \det \left\{ \frac{1}{u_j - v_k}\right\} _{j,k = 1}^m, \end{aligned}$$

the coefficient of \(\exp \Big \{-i\pi \rho _{sc}(\lambda _0)\sum \limits _{j = 1}^m (x_j - x_{m + j})\Big \}\) is

$$\begin{aligned} \frac{\det \left\{ \dfrac{1}{(x_{m + j} - x_{k})}\right\} _{j,k = 1}^m}{(2i\pi \rho _{sc}(\lambda _0))^m \Delta (x_1, \ldots , x_m)\Delta (x_{m + 1}, \ldots , x_{2m})} = \frac{(-1)^{m(m + 1)/2}}{(2i\pi \rho _{sc}(\lambda _0))^m \prod \limits _{j,k = 1}^m (x_j - x_{m + k})}. \end{aligned}$$
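The Cauchy determinant identity used above is easy to verify symbolically for small m; a minimal sketch (our own check for \(m = 2\), assuming sympy):

```python
# Verify the identity from [19, Problem 7.3] for m = 2.
import sympy as sp

u1, u2, v1, v2 = sp.symbols('u1 u2 v1 v2')
m = 2
lhs = (-1) ** (m * (m - 1) // 2) * (u1 - u2) * (v1 - v2) \
    / ((u1 - v1) * (u1 - v2) * (u2 - v1) * (u2 - v2))
rhs = sp.Matrix([[1 / (u1 - v1), 1 / (u1 - v2)],
                 [1 / (u2 - v1), 1 / (u2 - v2)]]).det()
assert sp.simplify(lhs - rhs) == 0
```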

The other coefficients can be computed in the same way. Therefore,

$$\begin{aligned} F_{2m}(\Lambda ) = Cn^{m^2} \exp \left\{ \frac{mn}{2}(\lambda _0^2 - 2) + \frac{\lambda _0}{2}\sum \limits _{j = 1}^{2m} x_j\right\} \hat{S}_{2m}(X)(1 + o(1)). \end{aligned}$$

The assertion of the theorem follows.

5 Proof of Theorem 3

In this section we consider the case \(p \rightarrow \infty \). As in the proof of Lemma 3, the good contour is \(\mathfrak {I}t_1 = \mathfrak {I}t_2 = -\alpha \lambda _0\), \(s \in \mathbb {R}\) with the stationary point \(\left( -i\alpha \lambda _0, -i\alpha \lambda _0, \frac{1-\alpha }{\alpha }b_2\right) \), where \(\alpha \) satisfies (3.18).

Set \(\beta = 2\alpha - 1\). Then (3.18) transforms into \(b_2 = \beta (1 + \beta )(1 - \beta )^{-1}\), and hence \(\beta = \sqrt{\frac{2}{p}}(1 + o(1))\). Substituting \(\lambda _0 = \pm 2\), \(\alpha = \frac{1 + \beta }{2}\) and \(b_2 = \beta (1 + \beta )(1 - \beta )^{-1}\) into (3.22) and (3.23) yields

$$\begin{aligned} f'' \bigg (- i\alpha \Lambda _0, \frac{1-\alpha }{\alpha }b_2\bigg )&= -\left( \begin{array}{lll} \beta (2 + \beta ) &{} \beta ^2(1 - \beta )(1 + \beta )^{-1} &{} \pm i\beta (1 - \beta ) \\ \beta ^2(1 - \beta )(1 + \beta )^{-1} &{} \beta (2 + \beta ) &{} \pm i\beta (1 - \beta ) \\ \pm i\beta (1 - \beta ) &{} \pm i\beta (1 - \beta ) &{} 1 + \beta ^2 \end{array} \right) ; \\ \det f'' \bigg (- i\alpha \Lambda _0, \frac{1-\alpha }{\alpha }b_2\bigg )&= -\frac{4\beta }{(1 + \beta )^2}\cdot (2 - (1 + \beta )(1 - \beta )(2 - \beta )) \\&= -\frac{4\beta ^2(1 + 2\beta - \beta ^2)}{(1 + \beta )^2}. \end{aligned}$$

In addition,

$$\begin{aligned} \frac{\partial ^3 f}{\partial t_j^3}\left( -i\alpha \Lambda _0, \frac{1-\alpha }{\alpha }b_2\right)&= -2i{\text {sign}}(\lambda _0)(1 - \beta )^3. \end{aligned}$$

If (i) \(\frac{n^{2/3}}{p} \rightarrow \infty \), then \(V_n\) is chosen as a product of the neighborhoods of 0, 0, and \(\frac{1 - \alpha }{\alpha } b_2\) such that the radius of the neighborhoods corresponding to \(t_1\) and \(t_2\) is \(\frac{\log n}{\sqrt{n\beta }}\), whereas the radius of the neighborhood corresponding to s is \(\frac{\log n}{\sqrt{n}}\). We have (3.24).

The change of variables \(T \rightarrow (n\beta )^{-1/2}T\), \(s \rightarrow n^{-1/2}s + \frac{1 - \alpha }{\alpha } b_2\) and a repetition of the argument of the proof of Lemma 3 yield

$$\begin{aligned} F_2(2I + n^{-2/3}X) = Cp \exp \{n\widehat{A} + n^{1/3}\lambda _0(x_1 + x_2)/2\}(1 + o(1)), \end{aligned}$$

where C is some absolute constant and \(\widehat{A} = f \left( - i\alpha \Lambda _0, \frac{1-\alpha }{\alpha }b_2\right) \).

The assertion (i) follows.

Let now (ii) \(\frac{n^{2/3}}{p} \rightarrow c\). Choose \(V_n\) the same as in the case (i), but with the radius of the neighborhoods corresponding to \(t_1\) and \(t_2\) equal to \(\frac{\log n}{\root 3 \of {n}}\). Then (3.24) is also valid. The Cauchy theorem allows us to rotate the t-contours: the integration domain over s is not changed, whereas the ones over \(t_j\) become \(\{ |z| \le n^{-1/3}\log n\ |\ \arg z = -\pi /6 \text { or } \arg z = -5\pi /6 \}\).

Changing the variables \(T \rightarrow n^{-1/3}T\), \(s \rightarrow n^{-1/2}s + \frac{1 - \alpha }{\alpha } b_2\), we obtain

$$\begin{aligned} F_2(2I + n^{-2/3}X)= & {} n^{-11/6} C_n(X) \cdot \frac{i \exp \{n\widehat{A} + 2n^{1/3}(1 - \alpha )(x_1 + x_2)\}}{(x_1 - x_2)} (1 + o(1)) \\&\times \int \limits _{\mathbb {R}} e^{-\frac{1+\beta ^2}{2}s^2} ds \int \limits _{\gamma \times \gamma } \left( t_1 - t_2 \right) \exp \bigg \{ -\sum \limits _{j = 1}^2 \left( \frac{i}{3}(1 - \beta )^3 t_j^3 \right. \\&\left. +\, \frac{(2 + \beta )\sqrt{c}}{\sqrt{2}} t_j^2 + i x_j t_j\right) \bigg \} dT \end{aligned}$$

where

$$\begin{aligned} \gamma = \{\arg z = -\pi /6 \text { or } \arg z = -5\pi /6\}. \end{aligned}$$
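To see how the Airy-type function \(\mathbb {A}\) of (1.8) arises here, note that the rotation \(\tau = -is\) maps the rays of \(\gamma \) onto the rays \(\arg s = \pm \pi /3\) and transforms the cubic exponent according to

$$\begin{aligned} -\frac{i}{3}\tau ^3 - ix\tau = \frac{s^3}{3} - xs, \end{aligned}$$

so that, up to constant factors, each one-dimensional integral reduces to the classical contour representation of the Airy function, \({\mathrm{Ai}}(x) = \frac{1}{2\pi i}\int _{\infty e^{-i\pi /3}}^{\infty e^{i\pi /3}} e^{s^3/3 - xs}\, ds\).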

The change of variables \(\tau _j = t_j - i\sqrt{2c}\) (chosen so that the quadratic term in \(t_j\) is removed) and the Cauchy theorem give us

$$\begin{aligned} F_2(2I + n^{-2/3}X)= & {} Cn^{2/3} \cdot \frac{i \exp \{n\widehat{A} + (2n^{1/3}(1 - \alpha ) + \sqrt{2c})(x_1 + x_2)\}}{(x_1 - x_2)} (1 + o(1)) \\&\times \int \limits _{\gamma \times \gamma } \left( \tau _1 - \tau _2 \right) \exp \left\{ -\sum \limits _{j = 1}^2 \left( \frac{i}{3}\tau _j^3 + i (x_j + 2c) \tau _j\right) \right\} d\tau _1 d\tau _2, \end{aligned}$$

where C is some absolute constant. Taking into account (1.8) and (2.13), we get

$$\begin{aligned} F_2(2I + n^{-2/3}X)= & {} Cn^{2/3} \exp \Big \{n\widehat{A} + (2n^{1/3}(1 - \alpha ) + \sqrt{2c})(x_1 + x_2)\Big \} \\&\times \,\mathbb {A}(x_1 + 2c, x_2 + 2c)(1 + o(1)). \end{aligned}$$

This completes the proof of the assertion (ii).