1 Introduction

The nonlinear steepest descent method for oscillatory Riemann–Hilbert (RH) problems was developed by Deift and Zhou (1993) (and is therefore also called the Deift–Zhou method), building on earlier work of Manakov (1974b) and Its (1981); with it, the long-time asymptotic behavior of the solution of the MKdV equation was obtained rigorously. Subsequently, the long-time asymptotic behaviors of a number of integrable nonlinear evolution equations associated with \(2\times 2\) matrix spectral problems have been studied by the nonlinear steepest descent method, for example, the KdV equation, the defocusing nonlinear Schrödinger equation, the sine-Gordon equation, the derivative nonlinear Schrödinger equation, the Camassa–Holm equation and others (Deift and Zhou 1993; Deift et al. 1993; Cheng et al. 1999; Kitaev and Vartanian 1997, 1999, 2000; Grunert and Teschl 2009; Boutet de Monvel et al. 2009; Xu and Fan 2015).

The principal subject of this paper concerns the long-time asymptotic behavior for the Cauchy problem of the coupled nonlinear Schrödinger (CNLS) equation,

$$\begin{aligned} {\left\{ \begin{array}{ll} i u_t+u_{xx}+2(|u|^2+|v|^2)u=0,\quad (x,t)\in \mathbb {R}\times [0,+\infty ),\\ i v_t+v_{xx}+2(|u|^2+|v|^2)v=0,\\ u(x,0)=u_0(x),\quad v(x,0)=v_0(x), \end{array}\right. } \end{aligned}$$
(1)

by means of the nonlinear steepest descent method, where u(x, t) and v(x, t) are complex-valued, and \(u_0(x)\) and \(v_0(x)\) lie in the Sobolev space \(H^{1,1}(\mathbb {R})=\{f(x)\in \mathscr {L}^2(\mathbb {R}): f'(x),xf(x)\in \mathscr {L}^2(\mathbb {R})\}\). Moreover, \(u_0(x)\) and \(v_0(x)\) are assumed to be generic so that \(\det a(k)\), defined below, is nonzero in \(\mathbb {C}_-\). The set of such generic functions is an open dense subset of \(H^{1,1}(\mathbb {R})\) (Beals et al. 1988; Zhou 1998; Deift and Park 2011), which we denote by \(\mathcal {G}\). To our knowledge, there have been no results on the long-time asymptotic behavior for the Cauchy problem of the CNLS equation. There are two reasons for choosing the CNLS equation as a model problem. First, it is of physical interest: Eq. (1), also called the Manakov model, describes the propagation of an optical pulse in a birefringent optical fiber (Manakov 1974a) and arises in the context of multicomponent Bose–Einstein condensates (Busch and Anglin 2001). The adjoint symmetry constraints of the CNLS equation and its algebro-geometric solutions have been studied in Ma and Zhou (2002a, b), Wu et al. (2016) and Ma (2017). Second, the spectral problem of the CNLS equation is a \(3\times 3\) matrix one (Manakov 1974a). Although the asymptotic behaviors of a number of integrable nonlinear evolution equations associated with \(2\times 2\) matrix spectral problems have been derived, there is only a small body of literature (Boutet de Monvel and Shepelsky 2013; Boutet de Monvel et al. 2015) on integrable nonlinear evolution equations associated with \(3\times 3\) matrix spectral problems. It is therefore important to study the long-time asymptotics of such equations.

In this paper, our main purpose is to extend the nonlinear steepest descent method to the study of multicomponent integrable nonlinear evolution equations. The analysis here presents a few novelties: (a) Similar to Ablowitz et al. (2004), Geng et al. (2015) and Liu and Geng (2016), all the \(3\times 3\) matrices in this paper can be rewritten as \(2\times 2\) block ones. Thus we can formulate the \(3\times 3\) matrix RH problem directly from combinations of the entries of the matrix-valued eigenfunctions. Compared with the construction of the \(3\times 3\) matrix RH problem via a Fredholm integral equation in Boutet de Monvel and Shepelsky (2013), this procedure is more convenient for analyzing multicomponent nonlinear evolution equations. (b) A function \(\delta (k)\) is always introduced in the nonlinear steepest descent method. In Deift and Zhou (1993), Deift et al. (1993), Cheng et al. (1999), Kitaev and Vartanian (1997, 1999, 2000), Grunert and Teschl (2009), Boutet de Monvel et al. (2009) and Xu and Fan (2015), the function \(\delta \) can be obtained explicitly by the Plemelj formula because it satisfies a scalar RH problem. However, the function \(\delta (k)\) in this paper satisfies a \(2\times 2\) matrix RH problem, which cannot be solved explicitly; this is the main difficulty. Since our aim is to study the asymptotic behavior of the solution of the CNLS equation, a natural idea is to use the explicitly available function \(\det \delta (k)\) to approximate \(\delta (k)\), with controlled error.

The main result of this paper is expressed as follows:

Theorem 1

Let (u(x, t), v(x, t)) be the solution of the Cauchy problem for CNLS equation (1) with \(u_0,v_0\in \mathcal {G} \). Then, for \(\left| \frac{x}{t}\right| \leqslant C\), the leading asymptotics of (u(x, t), v(x, t)) has the form

$$\begin{aligned} (u(x,t),v(x,t))=\frac{\nu \Gamma (i\nu )(8t)^{-i\nu }\gamma (k_0)\hbox {e}^{4ik_0^2t+2\tilde{\chi }(k_0)+\frac{\pi \nu }{2}-\frac{i\pi }{4}}}{2\sqrt{\pi t}}+O(t^{-1}\log t), \end{aligned}$$
(2)

where \(k_0=-\frac{x}{4t}\), C is a constant, \(\Gamma (\cdot )\) is the Gamma function, the vector-valued function \(\gamma (k)\) is defined by (14), and

$$\begin{aligned} \nu= & {} -\frac{1}{2\pi } \log (1+|\gamma (k_0)|^2),\\ \tilde{\chi }(k_0)= & {} \frac{1}{2\pi i}\left[ \int _{-\infty }^{k_0-1}\frac{\log \left( 1+|\gamma (\xi )|^2\right) }{\xi -k_0}\mathrm{d}\xi \right. \\&\left. +\int _{k_0-1}^{k_0}\frac{\log \left( 1+|\gamma (\xi )|^2\right) -\log (1+|\gamma (k_0)|^2)}{\xi -k_0}\mathrm{d}\xi \right] . \end{aligned}$$
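A consistency check (our remark, using the standard identity \(|\Gamma (i\nu )|^2=\frac{\pi }{\nu \sinh (\pi \nu )}\) and the relation \(1+|\gamma (k_0)|^2=\mathrm {e}^{-2\pi \nu }\) that follows from the definition of \(\nu \)): since \(\tilde{\chi }(k_0)\) is purely imaginary and \(|(8t)^{-i\nu }|=1\), the squared modulus of the leading term in (2) simplifies to

$$\begin{aligned} \frac{\nu ^2|\Gamma (i\nu )|^2|\gamma (k_0)|^2\mathrm {e}^{\pi \nu }}{4\pi t} =\frac{\nu |\gamma (k_0)|^2\mathrm {e}^{\pi \nu }}{4t\sinh (\pi \nu )} =\frac{\nu (\mathrm {e}^{-\pi \nu }-\mathrm {e}^{\pi \nu })}{4t\sinh (\pi \nu )} =\frac{-\nu }{2t}=\frac{|\nu |}{2t}, \end{aligned}$$

so that \(|(u(x,t),v(x,t))|=\sqrt{|\nu |/(2t)}+O(t^{-1}\log t)\), the familiar \(t^{-1/2}\) decay rate of the amplitude.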

The outline of this paper is as follows: In Sect. 2, we transform the solution of the Cauchy problem for the CNLS equation into that of a matrix RH problem. In Sect. 3, we use the nonlinear steepest descent method for RH problems to prove Theorem 1.

2 The Riemann–Hilbert Problem

In this section, we derive the Volterra integral equations satisfied by the Jost solutions, together with a scattering matrix, from the Lax pair of CNLS equation (1). The Cauchy problem of the CNLS equation is then converted into an RH problem.

Let us consider a \(3\times 3\) matrix Lax pair of CNLS equation (1):

$$\begin{aligned}&\psi _x=(ik\sigma +U)\psi , \end{aligned}$$
(3a)
$$\begin{aligned}&\psi _t=(2ik^2\sigma +\tilde{U})\psi , \end{aligned}$$
(3b)

where \(\psi \) is a matrix-valued function and k is the spectral parameter,

$$\begin{aligned} \begin{aligned}&\sigma ={\text {diag}}(-1,1,1),\quad U=i \begin{pmatrix} 0&{}\quad q\\ q^\dag &{}\quad 0 \end{pmatrix},\quad q=(u,v),\\&\tilde{U}=2k U+i U_x\sigma +i U^2\sigma , \end{aligned} \end{aligned}$$
(4)

and the superscript “\(\dag \)” denotes Hermitian conjugate of a matrix. It is necessary to introduce a new eigenfunction \(\mu \) by \(\mu =\psi \mathrm {e}^{-i(kx+2k^2t)\sigma }\). Then Eq. (3a) becomes

$$\begin{aligned} \mu _x=ik[\sigma ,\mu ]+U\mu , \end{aligned}$$
(5)

where \([\sigma ,\mu ]=\sigma \mu -\mu \sigma \). The matrix Jost solutions \(\mu _{\pm }\) of Eq. (5) are defined by the following Volterra integral equations:

$$\begin{aligned} \mu _{\pm }(k;x,t)=I+\int _{\pm \infty }^x{\mathrm{e}}^{ik(x-\xi )\hat{\sigma }}U(\xi ,t)\mu _{\pm }(k;\xi ,t)\hbox {d}\xi , \end{aligned}$$
(6)

where \(\hat{\sigma }\) acts on a \(3\times 3\) matrix X by \(\hat{\sigma }X=[\sigma , X]\), and \(\mathrm {e}^{\hat{\sigma }}X=\mathrm {e}^{\sigma }X\mathrm {e}^{-\sigma }\). We write \(\mu _{\pm }=(\mu _{\pm L},\mu _{\pm R})\), where \(\mu _{\pm L}\) denote the first column of \(\mu _\pm \), and \(\mu _{\pm R}\) denote the last two columns. Based on the exponential factor in the integral equations, it is easy to see that \(\mu _{-L}\) and \(\mu _{+R}\) are analytic in the upper complex k-plane \(\mathbb {C}_+\), while \(\mu _{+L}\) and \(\mu _{-R}\) are analytic in the lower complex k-plane \(\mathbb {C}_-\). Moreover, we have

$$\begin{aligned} (\mu _{-L},\mu _{+R})= & {} I+O\left( \frac{1}{k}\right) ,\quad k\in \mathbb {C}_+,\\ (\mu _{+L},\mu _{-R})= & {} I+O\left( \frac{1}{k}\right) ,\quad k\in \mathbb {C}_-. \end{aligned}$$

Since U is traceless, \(\det \mu _{\pm }\) are independent of x, and evaluating \(\det \mu _\pm \) at \(x=\pm \infty \) gives \(\det \mu _\pm =1\). In addition, since \(\mu _{\pm }\mathrm {e}^{i(kx+2k^2t)\sigma }\) are two solutions of linear equation (3), they are linearly related; that is, there exists a scattering matrix s(k) such that

$$\begin{aligned} \mu _-=\mu _+\mathrm {e}^{i(kx+2k^2t)\hat{\sigma }}s(k),\quad \det s=1. \end{aligned}$$
(7)

Resorting to the symmetry property \(U^{\dag }=-U\), one finds that \(\mu ^{\dag }(k^*;x,t)\) and \(\mu ^{-1}(k;x,t)\) satisfy the same differential equation, where the superscript “\(*\)” denotes the complex conjugate. The initial conditions \(\mu _\pm ^{\dag }(k^*;\pm \infty ,t)=\mu _\pm ^{-1}(k;\pm \infty ,t)=I\) together with this observation imply that

$$\begin{aligned} \mu _{\pm }^{\dag }(k^*)=\mu _{\pm }^{-1}(k), \quad s^{\dag }(k^*)=s^{-1}(k). \end{aligned}$$
(8)
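The Volterra/ODE characterization of the Jost solutions and the symmetries just derived can be checked numerically. The following minimal sketch (ours, not part of the paper; the Gaussian sample data \(u_0, v_0\) are assumptions for illustration) integrates \(\mu _x=ik[\sigma ,\mu ]+U\mu \) from \(x=-L\) with \(\mu =I\) by RK4 and verifies that \(\det \mu =1\) and that, for real k, \(\mu ^\dag (k)=\mu ^{-1}(k)\) as in (8):

```python
import numpy as np

# Minimal numerical sketch: integrate mu_x = i k [sigma, mu] + U mu with
# mu(-L) = I by RK4 and check det(mu) = 1 and (for real k) unitarity of mu.
sigma = np.diag([-1.0, 1.0, 1.0]).astype(complex)

def U_of(x):
    u0 = 0.8 * np.exp(-x**2)            # assumed sample datum u0(x)
    v0 = 0.5 * np.exp(-(x - 0.3)**2)    # assumed sample datum v0(x)
    q = np.array([[u0, v0]], dtype=complex)   # row vector q = (u, v)
    U = np.zeros((3, 3), dtype=complex)       # U = i [[0, q], [q^dag, 0]]
    U[0:1, 1:3] = 1j * q
    U[1:3, 0:1] = 1j * q.conj().T
    return U

def rhs(x, mu, k):
    return 1j * k * (sigma @ mu - mu @ sigma) + U_of(x) @ mu

def jost_minus(k, L=8.0, n=4000):
    h = 2.0 * L / n
    x, mu = -L, np.eye(3, dtype=complex)
    for _ in range(n):                  # classical RK4 step
        k1 = rhs(x, mu, k)
        k2 = rhs(x + h/2, mu + h/2 * k1, k)
        k3 = rhs(x + h/2, mu + h/2 * k2, k)
        k4 = rhs(x + h, mu + h * k3, k)
        mu = mu + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        x += h
    return mu

mu = jost_minus(k=0.7)
print(abs(np.linalg.det(mu) - 1.0))                  # ~ 0 (U traceless)
print(np.max(np.abs(mu.conj().T @ mu - np.eye(3))))  # ~ 0 (unitarity, real k)
```

Both residuals vanish up to the RK4 discretization error, in agreement with \(\det \mu _\pm =1\) and (8).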

Let a \(3\times 3\) matrix A be rewritten as a block form

$$\begin{aligned}A=\begin{pmatrix} A_{11}&{}\quad A_{12}\\ A_{21}&{}\quad A_{22} \end{pmatrix}, \end{aligned}$$

where \(A_{11}\) is scalar. It follows from Eqs. (7) and (8) that

$$\begin{aligned} s^\dag _{11}(k^*)=\det [s_{22}(k)], \quad s^\dag _{21}(k^*)=-s_{12}(k)\mathrm {adj}[s_{22}(k)], \end{aligned}$$

where \(\mathrm {adj} (A)\) denotes the adjugate (classical adjoint) matrix of A. Thus s(k) can be rewritten in the following block form:

$$\begin{aligned} s(k)=\begin{pmatrix} \det [a^\dag (k^*)]&{}\quad b(k)\\ -\mathrm {adj}\left[ a^\dag (k^*)\right] b^\dag (k^*)&{}\quad a(k) \end{pmatrix}, \end{aligned}$$
(9)

where a(k) is a \(2\times 2\) matrix-valued function and b(k) is a row vector-valued function. The evaluation of Eq. (7) at \(t=0\) shows that

$$\begin{aligned} s(k)=\lim _{x\rightarrow +\infty }\mathrm {e}^{-ikx\hat{\sigma }}\mu _-(k;x,0). \end{aligned}$$
(10)

Thus a(k) and b(k) satisfy the integral equations

$$\begin{aligned} a(k)= & {} I+i\int _{-\infty }^{+\infty }q^\dag (\xi ,0)\mu _{-12}(k;\xi ,0)\hbox {d}\xi , \end{aligned}$$
(11)
$$\begin{aligned} b(k)= & {} i\int _{-\infty }^{+\infty }\mathrm {e}^{2ik\xi }q(\xi ,0)\mu _{-22}(k;\xi ,0)\hbox {d}\xi , \end{aligned}$$
(12)

from which it is easy to see that a(k) is analytic for \(k\in \mathbb {C}_-\).

Theorem 2

Let M(kxt) be analytic for \(k\in \mathbb {C}\backslash \mathbb {R}\) and satisfy the RH problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} M_+(k;x,t)=M_-(k;x,t)J(k;x,t), &{}\quad k\in \mathbb {R},\\ M(k;x,t)\rightarrow I,&{}\quad k\rightarrow \infty , \end{array}\right. } \end{aligned}$$
(13)

where

$$\begin{aligned} M_\pm (k;x,t)= & {} \lim _{\varepsilon \rightarrow 0^+}M(k\pm i\varepsilon ;x,t),\nonumber \\ J(k;x,t)= & {} \begin{pmatrix} 1+\gamma (k)\gamma ^\dag (k^*)&{}\quad -\mathrm {e}^{-2it\theta }\gamma (k)\\ -\mathrm {e}^{2it\theta }\gamma ^\dag (k^*)&{}\quad I \end{pmatrix},\nonumber \\ \theta (k;x,t)= & {} \frac{kx}{t}+2k^2,\quad \gamma (k)= b(k)a^{-1}(k), \end{aligned}$$
(14)

with \(\gamma (k)\in H^{1,1}(\mathbb {R})\) and \(\sup _{k\in \mathbb {R}}|\gamma (k)|<\infty \). Then the solution of this RH problem exists and is unique. Define

$$\begin{aligned} (u(x,t), v(x,t))=2\lim _{k\rightarrow \infty } (kM(k;x,t))_{12}, \end{aligned}$$
(15)

which solves the Cauchy problem of CNLS equation (1).

Proof

The existence and uniqueness of the solution of RH problem (13) follow from the fact that J(k; x, t) is positive definite [see pp. 590–591 in Ablowitz and Fokas (2003)].

Based on the symmetry property of the jump matrix J(k; x, t), one finds that M(k; x, t) and \((M^\dag )^{-1}(k^*;x,t)\) satisfy the same RH problem (13). Taking into account the uniqueness of the solution of the RH problem, we arrive at \(M(k;x,t)=(M^\dag )^{-1}(k^*;x,t)\). Notice that M(k; x, t) admits the asymptotic expansion

$$\begin{aligned} M(k;x,t)=I+\frac{M_1(x,t)}{k}+\frac{M_2(x,t)}{k^2}+\cdots ,\quad k\rightarrow \infty , \end{aligned}$$

which implies that \((M_1)_{12}=-(M_1)_{21}^\dag \).

In the following, we shall prove that (u(x, t), v(x, t)) defined by (15) solves the Cauchy problem of CNLS equation (1) with the help of the dressing method (Fokas 2008). Set

$$\begin{aligned} \mathscr {L} M= & {} \partial _xM-ik[\sigma ,M]-UM, \end{aligned}$$
(16a)
$$\begin{aligned} \mathscr {N} M= & {} \partial _tM-2ik^2[\sigma ,M]-\tilde{U}M, \end{aligned}$$
(16b)

where U and \(\tilde{U}\) are defined by (4). Therefore we have

$$\begin{aligned} (\mathscr {L}M)_+=(\mathscr {L}M)_-J,\quad (\mathscr {N}M)_+=(\mathscr {N}M)_-J. \end{aligned}$$

If (uv) is defined by (15), then \(\mathscr {L} M\) satisfies the homogeneous RH problem

$$\begin{aligned}{\left\{ \begin{array}{ll} (\mathscr {L}M)_+=(\mathscr {L}M)_-J,\\ \mathscr {L}M=O(1/k), \quad k\rightarrow \infty , \end{array}\right. } \end{aligned}$$

which gives rise to

$$\begin{aligned} \mathscr {L}M\equiv 0. \end{aligned}$$
(17)

Furthermore, comparing the coefficients of \(\frac{1}{k}\) in the asymptotic expansion of (17), we obtain \(M_1^{(O)}=\frac{i}{2}\sigma U\), \(\partial _x M_1^{(D)}=-\frac{i}{2}\sigma U^2\), where the superscripts “(O)” and “(D)” denote the off-diagonal and diagonal parts of a block matrix, respectively. Therefore \(\mathscr {N} M\) satisfies the homogeneous RH problem

$$\begin{aligned}{\left\{ \begin{array}{ll} (\mathscr {N}M)_+=(\mathscr {N}M)_-J,\\ \mathscr {N}M=O(1/k), \quad k\rightarrow \infty , \end{array}\right. } \end{aligned}$$

which means that

$$\begin{aligned} \mathscr {N}M\equiv 0. \end{aligned}$$
(18)

The compatibility condition of Eqs. (17) and (18) yields CNLS equation (1). As a consequence, one obtains that the function (uv) defined by Eq. (15) solves the CNLS equation. Moreover, it is not difficult to verify that the resulting (uv) at \(t=0\) satisfies the initial conditions in a way similar to the arguments in Fokas (2008) and Fokas et al. (2005). \(\square \)

3 The Long-Time Asymptotics

In this section, we introduce the following notation:

  1.

    For any matrix M define \(|M|=(\mathrm {tr}M^\dag M)^{\frac{1}{2}}\) and for any matrix function \(A(\cdot )\) define \(\Vert A(\cdot )\Vert _p=\Vert |A(\cdot )|\Vert _p\).

  2.

    For two quantities A and B define \(A\lesssim B\) if there exists a constant \(C>0\) such that \(|A|\leqslant CB\). If C depends on the parameter \(\alpha \), we shall say that \(A\lesssim _{\alpha } B\).

  3.

    For any oriented contour \(\Sigma \), we say that the left side is “\(+\)” and the right side is “−”.

3.1 Factorization of the Jump Matrix

An analogue of the classical steepest descent method for Riemann–Hilbert problems was invented by Deift and Zhou (1993). The key idea is to notice that the jump matrix involves the same phase function as appears in the Fourier integral, but the approach is slightly complicated by the fact that both exponential factors \(\mathrm {e}^{\pm 2i\theta }\) appear in the jump matrix. To see how to deal with this, note that the jump matrix enjoys two distinct factorizations:

$$\begin{aligned} J=\begin{pmatrix} 1&{}\quad -\mathrm {e}^{-2it\theta }\gamma (k)\\ 0&{}\quad I \end{pmatrix}\begin{pmatrix} 1&{}\quad 0\\ -\mathrm {e}^{2it\theta }\gamma ^\dag (k^*)&{}\quad I \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} J=\begin{pmatrix} 1&{}\quad 0\\ -\frac{\mathrm {e}^{2it\theta }\gamma ^\dag (k^*)}{1+\gamma (k)\gamma ^\dag (k^*)}&{}\quad I \end{pmatrix}\begin{pmatrix} 1+\gamma (k)\gamma ^\dag (k^*)&{}\quad 0\\ 0&{}\quad \left( I+\gamma ^\dag (k^*)\gamma (k)\right) ^{-1} \end{pmatrix}\begin{pmatrix} 1&{}\quad -\frac{\mathrm {e}^{-2it\theta }\gamma (k)}{1+\gamma (k)\gamma ^\dag (k^*)}&{}\\ 0&{}\quad I \end{pmatrix}. \end{aligned}$$

The first factorization is used for \(k>k_0\) and the second for \(k<k_0\), in accordance with the signs of \(\mathrm {Re}(2i\theta )\) off the real axis.

The stationary point of \(\theta (k)\) is denoted by \( k_0\), i.e., \(\left. \frac{\text {d} \theta }{\text {d} k}\right| _{k= k_0}=0\), where \(k_0=-\frac{x}{4t}\), thus \(\theta =2(k^2-2k_0k)\). In this paper, we focus on the physically interesting region \(|k_0|\leqslant C\), where C is a constant.

Let \(\delta (k)\) be the solution of the matrix Riemann–Hilbert problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \delta _+(k)=(I+\gamma ^\dag (k)\gamma (k)) \delta _-(k), &{}\quad k<k_0,\\ \delta (k)\rightarrow I,&{}\quad k\rightarrow \infty . \end{array}\right. } \end{aligned}$$
(19)

Then we have

$$\begin{aligned} {\left\{ \begin{array}{ll} \det \delta _+(k)=(\det \delta _-(k))(1+|\gamma (k)|^2),\quad &{}k<k_0,\\ \det \delta (k)\rightarrow 1,&{}k\rightarrow \infty . \end{array}\right. } \end{aligned}$$
(20)

In view of the positive definiteness of the jump matrix \(I+\gamma ^\dag (k)\gamma (k)\) and the vanishing lemma (Ablowitz and Fokas 2003), one infers that \(\delta \) exists and is unique. A direct calculation shows that \(\det \delta \) is given by the Plemelj formula (Ablowitz and Fokas 2003) as

$$\begin{aligned} \det \delta (k)=\mathrm {e}^{\chi (k)}, \end{aligned}$$
(21)

where

$$\begin{aligned} \chi (k)=\frac{1}{2\pi i}\int _{-\infty }^{k_0}\frac{\log \left( 1+|\gamma (\xi )|^2\right) }{\xi -k}\hbox {d}\xi . \end{aligned}$$

The above integral is singular as \(k\rightarrow k_0\). To isolate the singularity, we write the integral in the form

$$\begin{aligned} \begin{aligned} \int _{-\infty }^{k_0}\frac{\log \left( 1+|\gamma (\xi )|^2\right) }{\xi -k}\hbox {d}\xi&=\int _{-\infty }^{k_0-1}\frac{\log \left( 1+|\gamma (\xi )|^2\right) }{\xi -k}\hbox {d}\xi \\&\quad +\log (1+|\gamma (k_0)|^2)\int _{k_0-1}^{k_0}\frac{1}{\xi -k}\hbox {d}\xi \\&\quad +\int _{k_0-1}^{k_0}\frac{\log \left( 1+|\gamma (\xi )|^2\right) -\log (1+|\gamma (k_0)|^2)}{\xi -k}\hbox {d}\xi \\&=\int _{-\infty }^{k_0-1}\frac{\log \left( 1+|\gamma (\xi )|^2\right) }{\xi -k}\hbox {d}\xi \\&\quad +\log (1+|\gamma (k_0)|^2) \log (k-k_0)\\&\quad -\log (1+|\gamma (k_0)|^2)\log (k-k_0+1)\\&\quad +\int _{k_0-1}^{k_0}\frac{\log \left( 1+|\gamma (\xi )|^2\right) -\log (1+|\gamma (k_0)|^2)}{\xi -k}\hbox {d}\xi ,\\ \end{aligned} \end{aligned}$$

which implies that all the terms with the exception of the term \(\log (k-k_0)\) are analytic for k in a neighborhood of \(k_0\). Therefore, \(\det \delta \) can be written in the form

$$\begin{aligned} \det \delta (k)=(k-k_0)^{i\nu }\mathrm {e}^{\tilde{\chi }(k)}, \end{aligned}$$

where

$$\begin{aligned} \nu= & {} -\frac{1}{2\pi } \log (1+|\gamma (k_0)|^2)<0,\\ \tilde{\chi }(k_0)= & {} \frac{1}{2\pi i}\left[ \int _{-\infty }^{k_0-1}\frac{\log \left( 1+|\gamma (\xi )|^2\right) }{\xi -k_0}\hbox {d}\xi \right. \\&\left. +\int _{k_0-1}^{k_0}\frac{\log \left( 1+|\gamma (\xi )|^2\right) -\log (1+|\gamma (k_0)|^2)}{\xi -k_0}\hbox {d}\xi \right] . \end{aligned}$$
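To illustrate that the subtraction indeed regularizes the integrand, the following sketch (ours; the scalar stand-in \(|\gamma (k)|=0.6\,\mathrm {e}^{-k^2}\) is an assumption for illustration only) evaluates \(\nu \) and \(\tilde{\chi }(k_0)\) numerically and checks that \(\nu <0\), that the regularized integrand stays bounded near \(k_0\), and that \(\tilde{\chi }(k_0)\) is purely imaginary:

```python
import numpy as np

# Numerical sketch: nu and chi_tilde(k0) for an assumed reflection-coefficient
# modulus |gamma(k)| = 0.6*exp(-k**2); subtracting log(1+|gamma(k0)|^2)
# makes the integrand on (k0-1, k0) bounded.
k0 = -0.4

def gam2(k):                       # |gamma(k)|^2 (assumed sample profile)
    return (0.6 * np.exp(-k**2))**2

def reg(xi):                       # regularized integrand on (k0-1, k0)
    return (np.log(1 + gam2(xi)) - np.log(1 + gam2(k0))) / (xi - k0)

def trap(f, a, b, n=200001):       # composite trapezoid rule
    x = np.linspace(a, b, n)
    y = f(x)
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

nu = -np.log(1 + gam2(k0)) / (2 * np.pi)
I1 = trap(lambda x: np.log(1 + gam2(x)) / (x - k0), -60.0, k0 - 1)
I2 = trap(reg, k0 - 1, k0 - 1e-9)
chi_tilde = (I1 + I2) / (2j * np.pi)   # real integrals / (2*pi*i): imaginary

print(nu, chi_tilde)
```

Since both integrals are real, \(\tilde{\chi }(k_0)\) is purely imaginary, which is what makes \(|\mathrm {e}^{2\tilde{\chi }(k_0)}|=1\) in the leading term of Theorem 1.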

As \(k<k_0\), it follows from (19) that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0^+}\delta (k-i\varepsilon )=(I+\gamma ^\dag (k)\gamma (k))^{-1}\lim _{\varepsilon \rightarrow 0^+}\delta (k+i\varepsilon ). \end{aligned}$$

Let \(f(k)=(\delta ^\dag (k^*))^{-1}\); then we have

$$\begin{aligned} \begin{aligned} f_+(k)&=\lim _{\varepsilon \rightarrow 0^+}f(k+i\varepsilon )=\lim _{\varepsilon \rightarrow 0^+}(\delta ^\dag (k^*-i\varepsilon ))^{-1}\\&=\left( (\lim _{\varepsilon \rightarrow 0^+}\delta (k-i\varepsilon ))^\dag \right) ^{-1}=(I+\gamma ^\dag (k)\gamma (k))\lim _{\varepsilon \rightarrow 0^+}(\delta ^\dag (k^*+i\varepsilon ))^{-1},\\&=(I+\gamma ^\dag (k)\gamma (k))f_-(k). \end{aligned} \end{aligned}$$

Noticing the uniqueness of the solution for the matrix RH problem, we arrive at

$$\begin{aligned} \delta (k)=(\delta ^\dag (k^*))^{-1}. \end{aligned}$$
(22)

Inserting Eq. (22) in Eq. (19), we obtain

$$\begin{aligned} |\delta _-(k)|^2= & {} {\left\{ \begin{array}{ll} 2-\frac{|\gamma (k)|^2}{1+|\gamma (k)|^2}, &{}\quad k<k_0,\\ 2,&{}\quad k>k_0, \end{array}\right. }\\ |\delta _+(k)|^2= & {} {\left\{ \begin{array}{ll} 2+|\gamma (k)|^2, &{}\quad k<k_0,\\ 2,&{}\quad k>k_0, \end{array}\right. }\\ |\det \delta _-(k)|\leqslant & {} 1,\quad |\det \delta _+(k)|\leqslant 1+|\gamma (k)|^2<\infty , \end{aligned}$$

for fixed \(k\in \mathbb {R}\). Hence, by the maximum principle, we have

$$\begin{aligned} |\delta (k)|\leqslant \text {const}<\infty , \quad |\det \delta (k)|\leqslant \text {const}<\infty , \end{aligned}$$
(23)

for all \(k\in \mathbb {C}\).

Let

$$\begin{aligned} \Delta (k)=\begin{pmatrix} \det \delta (k)&{}\quad 0\\ 0&{}\quad \delta ^{-1}(k) \end{pmatrix}, \end{aligned}$$

and

$$\begin{aligned} \rho (k)={\left\{ \begin{array}{ll} -\frac{\gamma (k)}{1+\gamma (k)\gamma ^\dag (k^*)},&{}\quad k<k_0,\\ \gamma (k), &{}\quad k>k_0. \end{array}\right. } \end{aligned}$$

Define \(M^{\Delta }\) by

$$\begin{aligned} M^{\Delta }(k;x,t)=M(k;x,t)\Delta ^{-1}(k), \end{aligned}$$
(24)

and reverse the orientation for \(k>k_0\) as shown in Fig. 1. Then \(M^{\Delta }\) satisfies the RH problem on \(\mathbb {R}\) oriented as in Fig. 1,

$$\begin{aligned} {\left\{ \begin{array}{ll} M^\Delta _+(k;x,t)=M^\Delta _-(k;x,t)J^\Delta (k;x,t),&{}\quad k\in \mathbb {R},\\ M^\Delta (k;x,t)\rightarrow I, &{}\quad k\rightarrow \infty , \end{array}\right. } \end{aligned}$$
(25)

where

$$\begin{aligned}J^\Delta (k)= \begin{pmatrix} 1&{}\quad 0\\ \frac{ \mathrm {e}^{2it\theta }\delta _-^{-1}(k)\rho ^\dag (k^*)}{\det \delta _-(k)}&{}\quad I \end{pmatrix}\begin{pmatrix} 1&{}\quad \mathrm {e}^{-2it\theta }(\det \delta _+)(k) \rho (k)\delta _+(k)\\ 0&{}\quad I \end{pmatrix}. \end{aligned}$$
Fig. 1. The oriented contour on \(\mathbb {R}\)

In what follows, we deform the contour. It is convenient to introduce analytic approximations of \(\rho (k)\).

3.2 Decomposition of \(\rho (k)\)

Let L denote the contour

$$\begin{aligned} L:\,\{k=k_0+\alpha \mathrm {e}^{\frac{3\pi i}{4}}:-\infty<\alpha < +\infty \}. \end{aligned}$$
(26)

Lemma 3

The vector-valued function \(\rho (k)\) has a decomposition

$$\begin{aligned} \rho (k)=R(k)+h_1(k)+h_2(k),\quad k\in \mathbb {R}, \end{aligned}$$
(27)

where R(k) is piecewise rational and \(h_2(k)\) has an analytic continuation to L satisfying

$$\begin{aligned}&|\mathrm {e}^{-2it\theta (k)}h_1(k)|\lesssim \frac{1}{(1+|k-k_0|^2)t^l},\quad k\in \mathbb {R}, \end{aligned}$$
(28)
$$\begin{aligned}&|\mathrm {e}^{-2it\theta (k)}h_2(k)|\lesssim \frac{1}{(1+|k-k_0|^2)t^l},\quad k\in L, \end{aligned}$$
(29)

for an arbitrary positive integer l. Taking Schwartz conjugate

$$\begin{aligned} \rho ^\dag (k^*)=R^\dag (k^*)+h_1^\dag (k^*)+h_2^\dag (k^*) \end{aligned}$$

leads to the same estimate for \(\mathrm {e}^{2it\theta (k)}h_1^\dag (k^*)\), \(\mathrm {e}^{2it\theta (k)}h_2^\dag (k^*)\), and \(\mathrm {e}^{2it\theta (k)}R^\dag (k^*)\) on \(\mathbb {R}\cup L^*\).

Proof

It is enough to consider \(k\geqslant k_0\); the case \(k\leqslant k_0\) is similar. For \(k\geqslant k_0\), \(\rho (k)=\gamma (k)\). Using Taylor's expansion, we have

$$\begin{aligned} (k-k_0-i)^{m+5}\rho (k)= & {} \sum _{j=0}^m\rho _j(k_0)(k-k_0)^j\\&+\,\frac{1}{m!}\int _{k_0}^k((\cdot -k_0-i)^{m+5}\rho (\cdot ))^{(m+1)}(\xi )(k-\xi )^m\hbox {d}\xi , \end{aligned}$$

and define

$$\begin{aligned} R(k)=\frac{\sum _{j=0}^{m}\rho _j(k_0)(k-k_0)^j}{(k-k_0-i)^{m+5}},\quad h(k)=\rho (k)-R(k). \end{aligned}$$

Then the following result is straightforward:

$$\begin{aligned} \frac{\text {d}^j\rho (k)}{\text {d}k^j}|_{k=k_0}=\frac{\text {d}^jR(k)}{\text {d}k^j}|_{k=k_0},\quad 0\leqslant j\leqslant m. \end{aligned}$$
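The matching of derivatives can be checked numerically. In this sketch (ours; the scalar profile \(\rho (k)=\mathrm {e}^{-k^2}\) and the values \(m=1\), \(k_0=0.3\) are assumptions for illustration), R(k) is built from the degree-m Taylor polynomial of \((k-k_0-i)^{m+5}\rho (k)\), and \(h=\rho -R\) is verified to vanish to order \(m+1\) at \(k_0\):

```python
import numpy as np

# Numerical sketch: R(k) from the Taylor polynomial of
# F(k) = (k - k0 - i)^(m+5) * rho(k) at k0; then h = rho - R = O((k-k0)^(m+1)).
k0, m = 0.3, 1

def rho(k):                 # assumed scalar stand-in for rho(k)
    return np.exp(-k**2)

def drho(k):
    return -2 * k * np.exp(-k**2)

def F(k):
    return (k - k0 - 1j)**(m + 5) * rho(k)

def dF(k):                  # product rule
    return ((m + 5) * (k - k0 - 1j)**(m + 4) * rho(k)
            + (k - k0 - 1j)**(m + 5) * drho(k))

def R(k):                   # degree-m Taylor polynomial of F, divided back
    return (F(k0) + dF(k0) * (k - k0)) / (k - k0 - 1j)**(m + 5)

def h(k):
    return rho(k) - R(k)

eps = 1e-3
ratio = abs(h(k0 + eps)) / abs(h(k0 + 2 * eps))
print(ratio)                # close to (1/2)**(m+1) = 0.25
```

Halving the offset from \(k_0\) shrinks |h| by about \((1/2)^{m+1}\), confirming the order of vanishing used below.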

In what follows, for fixed \(m\in \mathbb {Z}_+\), we assume that m is of the form

$$\begin{aligned} m=3p+1,\quad p\in \mathbb {Z}_+, \end{aligned}$$

for convenience. As the map \(k\mapsto \theta (k)=2(k^2-2k_0k)\) is one-to-one in \(k\geqslant k_0\), we define

$$\begin{aligned} f(\theta )={\left\{ \begin{array}{ll} \frac{(k-k_0-i)^{p+2}}{(k-k_0)^p}h(k)=\frac{(k-k_0)^{2p+2}}{(k-k_0-i)^{2p+4}}g(k),\quad &{}\quad \theta \geqslant -2k_0^2,\\ 0,&{}\quad \theta <-2k_0^2, \end{array}\right. } \end{aligned}$$
(30)

where

$$\begin{aligned} g(k)=\frac{1}{m!}\int _0^1\left( (\cdot -k_0-i)^{m+5}\rho (\cdot )\right) ^{(m+1)}(k_0+u(k-k_0))(1-u)^m\hbox {d}u, \end{aligned}$$

which implies that

$$\begin{aligned} \left| \frac{\text {d}^jg(k)}{\text {d}k^j}\right| \lesssim 1, \quad k\geqslant k_0. \end{aligned}$$

By the Fourier transform, we have

$$\begin{aligned} f(\theta )=\frac{1}{\sqrt{2\pi }}\int _{-\infty }^{+\infty }\mathrm {e}^{is\theta }\hat{f}(s)\hbox {d}s, \end{aligned}$$

where

$$\begin{aligned} \hat{f}(s)=\frac{1}{\sqrt{2\pi }}\int _{-\infty }^{+\infty }\mathrm {e}^{-is\theta }f(\theta )\hbox {d}\theta . \end{aligned}$$

Thus, as \(0\leqslant j\leqslant p+1\),

$$\begin{aligned} \begin{aligned} \int _{-2k_0^2}^{+\infty }\left| \frac{\text {d}^j f(\theta )}{\text {d}\theta ^j}\right| ^2\text {d}\theta =&\int _{k_0}^{+\infty }\left| \left( \frac{1}{4(k-k_0)}\frac{\text {d}}{\text {d}k}\right) ^j\left[ \frac{(k-k_0)^{2p+2}}{(k-k_0-i)^{2p+4}}g(k)\right] \right| ^2(4k-4k_0)\hbox {d}k\\ \lesssim&\int _{k_0}^{+\infty }\left| \frac{(k-k_0)^{2p+2-j}}{(k-k_0-i)^{2p+4}}\right| ^2(k-k_0)\hbox {d}k\lesssim 1. \end{aligned} \end{aligned}$$

Resorting to the Plancherel theorem, we have

$$\begin{aligned} \int _{-\infty }^{+\infty }(1+s^2)^j\left| \hat{f}(s)\right| ^2\hbox {d}s\lesssim 1,\quad 0\leqslant j\leqslant p+1. \end{aligned}$$

Now we give the decomposition of h(k) as follows

$$\begin{aligned} \begin{aligned} h(k)&= \frac{(k-k_0)^p}{(k-k_0-i)^{p+2}}\int _t^{+\infty }\mathrm {e}^{is\theta }\hat{f}(s)\frac{\text {d}s}{\sqrt{2\pi }} +\frac{(k-k_0)^p}{(k-k_0-i)^{p+2}}\int _{-\infty }^t\mathrm {e}^{is\theta }\hat{f}(s)\frac{\text {d}s}{\sqrt{2\pi }}\\&=h_1(k)+h_2(k). \end{aligned} \end{aligned}$$

For \(k\geqslant k_0\), we obtain

$$\begin{aligned} \begin{aligned} \left| \mathrm {e}^{-2it\theta (k)}h_1(k)\right|&\lesssim |k-k_0-i|^{-2}\int _t^{+\infty }\left| \hat{f}(s)\right| \frac{\text {d}s}{\sqrt{2\pi }}\\&\leqslant |k-k_0-i|^{-2}\left( \int _t^{+\infty }(1+s^2)^{-(p+1)}\frac{\text {d}s}{\sqrt{2\pi }}\right) ^{\frac{1}{2}}\\&\quad \left( \int _t^{+\infty }(1+s^2)^{p+1}\left| \hat{f}(s)\right| ^2\frac{\text {d}s}{\sqrt{2\pi }}\right) ^{\frac{1}{2}}\\&\lesssim |k-k_0-i|^{-2}t^{-p-\frac{1}{2}}, \end{aligned} \end{aligned}$$

For k on the half-line \(\{k_0+\varepsilon \mathrm {e}^{\frac{3\pi i}{4}}:\varepsilon <0\}\subset L\), we have

$$\begin{aligned} \begin{aligned} \left| \mathrm {e}^{-2it\theta (k)}h_2(k)\right|&\lesssim \frac{|\varepsilon |^p\mathrm {e}^{-t\,\mathrm {Re}\,i\theta (k)}}{|k-k_0-i|^{p+2}}\int _{-\infty }^{t}\mathrm {e}^{(s-t)\,\mathrm {Re}\,i\theta (k)}\left| \hat{f}(s)\right| \frac{\text {d}s}{\sqrt{2\pi }}\\&\leqslant \frac{|\varepsilon |^p\mathrm {e}^{-2\varepsilon ^2t}}{|k-k_0-i|^{p+2}}\left( \int _{-\infty }^{+\infty }(1+s^2)^{-1}\frac{\text {d}s}{\sqrt{2\pi }}\right) ^{\frac{1}{2}}\\&\quad \left( \int _{-\infty }^{+\infty }(1+s^2)\left| \hat{f}(s)\right| ^2\frac{\text {d}s}{\sqrt{2\pi }}\right) ^{\frac{1}{2}}\\&\lesssim |k-k_0-i|^{-2}t^{-\frac{p}{2}}. \end{aligned} \end{aligned}$$

Here l is an arbitrary positive integer, and \(m=3p+1\) is taken sufficiently large so that both \(p-\frac{1}{2}\) and \(\frac{p}{2}\) are greater than l. The proof is completed. \(\square \)
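The mechanism behind the estimate for \(h_1\) — that the tail \(\int _t^{+\infty }|\hat{f}(s)|\,\mathrm {d}s\) of the Fourier transform of a smooth function decays rapidly in t — can be seen numerically. The sketch below (ours; the profile \(f(\theta )=\mathrm {e}^{-\theta ^2}\) is an assumed stand-in) approximates the continuous Fourier transform by an FFT and compares tails:

```python
import numpy as np

# Numerical sketch: tail of the Fourier transform of a smooth f(theta);
# truncating the Fourier integral at s = t (as in h = h1 + h2) leaves a
# rapidly shrinking remainder.
theta = np.linspace(-30.0, 30.0, 2**16, endpoint=False)
d = theta[1] - theta[0]
f = np.exp(-theta**2)                    # assumed smooth profile

s = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(theta.size, d=d))
fhat = np.abs(np.fft.fftshift(np.fft.fft(f))) * d / np.sqrt(2 * np.pi)
ds = s[1] - s[0]

def tail(t):                             # int_t^inf |fhat(s)| ds (approx.)
    return np.sum(fhat[s >= t]) * ds

print(tail(1.0), tail(4.0))              # the second tail is far smaller
```

For smoother f (more finite Sobolev norms of \(\hat{f}\), as in the lemma) the truncation error decays like an arbitrarily high power of \(t^{-1}\).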

3.3 First Contour Deformation

In this subsection, the original RH problem is transformed into an equivalent RH problem formulated on an augmented contour \(\Sigma \), where \(\Sigma =L\cup L^*\cup \mathbb {R}\) is oriented as in Fig. 2.

Note that \(J^\Delta (k;x,t)\) can be rewritten as

$$\begin{aligned} J^\Delta (k;x,t)=(b_-)^{-1}b_+, \end{aligned}$$

where \(b_+\) is the upper triangular factor in the above factorization of \(J^\Delta \) and \(b_-\) is the inverse of the lower triangular one, so that \(b_\pm =I\pm \omega _\pm \) with \(\omega _+\) strictly upper and \(\omega _-\) strictly lower triangular. According to the decomposition \(\rho =R+h_1+h_2\) of Lemma 3, we further split \(b_\pm =b_\pm ^ob_\pm ^a\), where \(b_\pm ^o=I\pm \omega _\pm ^o\) carries the contribution of \(h_1\) (and \(h_1^\dag \)) and \(b_\pm ^a=I\pm \omega _\pm ^a\) carries the contribution of \(R+h_2\) (and \(R^\dag +h_2^\dag \)).

Lemma 4

Define

$$\begin{aligned} M^\sharp (k;x,t)={\left\{ \begin{array}{ll} M^\Delta (k;x,t),\quad &{}\quad k\in \Omega _1\cup \Omega _2,\\ M^\Delta (k;x,t)(b_-^a)^{-1},&{}\quad k\in \Omega _3\cup \Omega _4,\\ M^\Delta (k;x,t)(b_+^a)^{-1},&{}\quad k\in \Omega _5\cup \Omega _6. \end{array}\right. } \end{aligned}$$
(31)

Then \(M^\sharp (k;x,t)\) solves the following RH problem on \(\Sigma \),

$$\begin{aligned} {\left\{ \begin{array}{ll} M^\sharp _+(k;x,t)=M^\sharp _-(k;x,t)J^\sharp (k;x,t),&{}\quad k\in \Sigma ,\\ M^\sharp (k;x,t)\rightarrow I,&{}\quad k\rightarrow \infty , \end{array}\right. } \end{aligned}$$
(32)

where

$$\begin{aligned} J^\sharp =(b_-^\sharp )^{-1}b_+^\sharp ={\left\{ \begin{array}{ll} (b_-^o)^{-1}b_+^o,&{}\quad k\in \mathbb {R},\\ I^{-1}b_+^a,&{}\quad k\in L,\\ (b_-^a)^{-1}I, &{}\quad k\in L^*. \end{array}\right. } \end{aligned}$$
(33)
Fig. 2. The oriented jump contour \(\Sigma \) and the domains \(\{\Omega _j\}_1^6\)

In the following, we construct the solution of the above RH problem (32) by using the approach in Beals and Coifman (1984). Set

$$\begin{aligned} \omega _\pm ^\sharp =\pm (b_\pm ^\sharp -I), \quad \omega ^\sharp =\omega _+^\sharp +\omega _-^\sharp , \end{aligned}$$

and let

$$\begin{aligned} (C_\pm f)(k)=\int _{\Sigma }\frac{f(\xi )}{\xi -k_\pm }\frac{\text {d}\xi }{2\pi i}, \quad k\in \Sigma ,\quad f\in {\mathscr {L}}^2(\Sigma ), \end{aligned}$$

where \(C_+f\ (C_-f)\) denotes the left (right) boundary value for the oriented contour \(\Sigma \). Define the operator \(C_{\omega ^\sharp }: {\mathscr {L}}^2(\Sigma )+{\mathscr {L}}^\infty (\Sigma )\rightarrow {\mathscr {L}}^2(\Sigma ) \) by

$$\begin{aligned} C_{\omega ^\sharp }f=C_+\left( f\omega _-^\sharp \right) +C_-\left( f\omega _+^\sharp \right) , \end{aligned}$$
(34)

for any \(3\times 3\) matrix-valued function f.

Theorem 5

(Beals and Coifman 1984) Suppose \(\mu ^\sharp (k;x,t)\in {\mathscr {L}}^2(\Sigma )+{\mathscr {L}}^\infty (\Sigma )\) solves the singular integral equation

$$\begin{aligned} \mu ^\sharp =I+C_{\omega ^\sharp }\mu ^\sharp . \end{aligned}$$

Then

$$\begin{aligned} M^\sharp (k;x,t)=I+\int _{\Sigma }\frac{\mu ^\sharp (\xi ;x,t)\omega ^\sharp (\xi ;x,t)}{\xi -k}\frac{\mathrm{d}\xi }{2\pi i} \end{aligned}$$

solves RH problem (32).

Theorem 6

The solution (u(x, t), v(x, t)) of the Cauchy problem for CNLS equation (1) is expressed by

$$\begin{aligned} \begin{aligned} (u(x,t), v(x,t))&=2\lim _{k\rightarrow \infty } (kM^\sharp (k;x,t))_{12}\\&=\frac{i}{\pi }\left( \int _\Sigma \mu ^\sharp (\xi ;x,t)\omega ^\sharp (\xi ;x,t)\mathrm{d}\xi \right) _{12}\\&=\frac{i}{\pi }\left( \int _\Sigma \left( (1-C_{\omega ^\sharp })^{-1}I\right) (\xi ;x,t)\omega ^\sharp (\xi ;x,t)\mathrm{d}\xi \right) _{12}. \end{aligned} \end{aligned}$$
(35)

Proof

The result follows from Eqs. (23), (24), (31), Theorem 2, and the following estimates:

$$\begin{aligned} |\mathrm {e}^{-2it\theta (k)}h_2(k)|\lesssim & {} \frac{|k-k_0|^p}{|k-k_0-i|^{p+2}}\mathrm {e}^{-t\mathrm {Re} i\theta (k)}\left| \int _{-\infty }^t\mathrm {e}^{i(s-t)\theta (k)}\hat{f}(s)\frac{\text {d}s}{\sqrt{2\pi }}\right| \\\lesssim & {} |k-k_0-i|^{-2},\quad k\in \Omega _5\cup \Omega _6,\quad k\rightarrow \infty ,\\ |\mathrm {e}^{-2it\theta (k)}R(k)|\lesssim & {} |k-k_0-i|^{-5},\quad k\in \Omega _5\cup \Omega _6,\quad k\rightarrow \infty . \end{aligned}$$

\(\square \)

3.4 Second Contour Deformation

In this subsection, we reduce the RH problem for \(M^\sharp \) on \(\Sigma \) to an RH problem for \(M'\) on \(\Sigma '\), where \(\Sigma '=\Sigma \backslash \mathbb {R}=L\cup L^*\) is oriented as in Fig. 2. Further, we estimate the error between the RH problems on \(\Sigma \) and \(\Sigma '\). It is then proved that the solution of CNLS equation (1) can be expressed in terms of \(M'\) up to an error term.

Let \(\omega ^e=\omega ^a+\omega ^b\), where

  (1)

    \(\omega ^a\) is supported on \(\mathbb {R}\) and composed of the contribution to \(\omega ^\sharp \) from terms of type \(h_1(k)\) and \(h_1^\dag (k^*)\);

  (2)

    \(\omega ^b\) is supported on \(L\cup L^*\) and composed of the contribution to \(\omega ^\sharp \) from terms of type \(h_2(k)\) and \(h_2^\dag (k^*)\).

Define \(\omega '=\omega ^\sharp -\omega ^e\); then \(\omega '=0\) on \(\mathbb {R}\). Thus \(\omega '\) is supported on \(\Sigma '\), and its contribution to \(\omega ^\sharp \) comes from R(k) and \(R^\dag (k^*)\).

Lemma 7

For an arbitrary positive integer l, as \(t\rightarrow \infty \),

$$\begin{aligned}&\Vert \omega ^a\Vert _{{\mathscr {L}}^1(\mathbb {R})\cap {\mathscr {L}}^2(\mathbb {R})\cap {\mathscr {L}}^\infty (\mathbb {R})}\lesssim t^{-l}, \end{aligned}$$
(36)
$$\begin{aligned}&\Vert \omega ^b\Vert _{{\mathscr {L}}^1(L\cup L^*)\cap {\mathscr {L}}^2(L\cup L^*)\cap {\mathscr {L}}^\infty (L\cup L^*)}\lesssim t^{-l}, \end{aligned}$$
(37)
$$\begin{aligned}&\Vert \omega '\Vert _{{\mathscr {L}}^2(\Sigma )}\lesssim t^{-\frac{1}{4}},\quad \Vert \omega '\Vert _{{\mathscr {L}}^1(\Sigma )}\lesssim t^{-\frac{1}{2}}. \end{aligned}$$
(38)

Proof

Using Lemma 3, a direct calculation shows that Eqs. (36) and (37) hold. From the definition of R(k), we have

$$\begin{aligned} |R(k)|\lesssim (1+|k-k_0|^5)^{-1},\quad \mathrm {Re}i\theta (k)=2\alpha ^2, \end{aligned}$$

on the contour \(\{k=k_0+\alpha \mathrm {e}^{\frac{3\pi i}{4}}:-\infty<\alpha <+\infty \}\). By means of inequality (23), we have

$$\begin{aligned} |\mathrm {e}^{-2it\theta }[\det \delta (k)]R(k)\delta (k)|\lesssim \mathrm {e}^{-4\alpha ^2t}(1+|k-k_0|^5)^{-1}, \end{aligned}$$

from which estimate (38) follows by simple computations. \(\square \)
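The "simple computations" can be made explicit: dropping the bounded factor \((1+|k-k_0|^5)^{-1}\) and parametrizing \(\Sigma '\) by \(k=k_0+\alpha \mathrm {e}^{\pm \frac{3\pi i}{4}}\), the Gaussian integrals give

$$\begin{aligned} \Vert \omega '\Vert _{{\mathscr {L}}^2(\Sigma )}^2\lesssim \int _{-\infty }^{+\infty }\mathrm {e}^{-8\alpha ^2t}\,\hbox {d}\alpha =\sqrt{\frac{\pi }{8t}}\lesssim t^{-\frac{1}{2}},\qquad \Vert \omega '\Vert _{{\mathscr {L}}^1(\Sigma )}\lesssim \int _{-\infty }^{+\infty }\mathrm {e}^{-4\alpha ^2t}\,\hbox {d}\alpha =\sqrt{\frac{\pi }{4t}}\lesssim t^{-\frac{1}{2}}, \end{aligned}$$

so that \(\Vert \omega '\Vert _{{\mathscr {L}}^2(\Sigma )}\lesssim t^{-\frac{1}{4}}\) and \(\Vert \omega '\Vert _{{\mathscr {L}}^1(\Sigma )}\lesssim t^{-\frac{1}{2}}\), as claimed in (38).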

Lemma 8

As \(t\rightarrow \infty \), \((1-C_{\omega '})^{-1}: {\mathscr {L}}^2(\Sigma )\rightarrow {\mathscr {L}}^2(\Sigma )\) exists and is uniformly bounded:

$$\begin{aligned} \Vert (1-C_{\omega '})^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )}\lesssim 1. \end{aligned}$$

Furthermore, \(\Vert (1-C_{\omega ^\sharp })^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )}\lesssim 1\).

Proof

See Deift and Zhou (1993) and references therein. \(\square \)

Lemma 9

As \(t\rightarrow \infty \),

$$\begin{aligned} \int _{\Sigma }\left( (1-C_{\omega ^\sharp })^{-1}I\right) (\xi )\omega ^\sharp (\xi )\mathrm{d}\xi = \int _{\Sigma }\left( (1-C_{\omega '})^{-1}I\right) (\xi )\omega '(\xi )\mathrm{d}\xi +O(t^{-l}).\qquad \end{aligned}$$
(39)

Proof

A direct computation shows that

$$\begin{aligned} \begin{aligned} \left( (1-C_{\omega ^\sharp })^{-1}I\right) \omega ^\sharp =&\left( (1-C_{\omega '})^{-1}I\right) \omega '+\omega ^e+\left( (1-C_{\omega '})^{-1}(C_{\omega ^e}I)\right) \omega ^\sharp \\&+\left( (1-C_{\omega '})^{-1}(C_{\omega '}I)\right) \omega ^e\\&+\left( (1-C_{\omega '})^{-1}C_{\omega ^e}(1-C_{\omega ^\sharp })^{-1}\right) (C_{\omega ^\sharp }I)\omega ^\sharp . \end{aligned} \end{aligned}$$

It follows from Lemma 7 that

$$\begin{aligned}&\Vert \omega ^e\Vert _{{\mathscr {L}}^1(\Sigma )}\leqslant \Vert \omega ^a\Vert _{{\mathscr {L}}^1(\mathbb {R})}+\Vert \omega ^b\Vert _{{\mathscr {L}}^1(L\cup L^*)}\lesssim t^{-l},\\&\Vert \left( (1-C_{\omega '})^{-1}(C_{\omega ^e}I)\right) \omega ^\sharp \Vert _{{\mathscr {L}}^1(\Sigma )}\leqslant \Vert (1-C_{\omega '})^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )} \Vert C_{\omega ^e}I\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert \omega ^\sharp \Vert _{{\mathscr {L}}^2(\Sigma )}\\&\qquad \lesssim \Vert \omega ^e\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert \omega ^\sharp \Vert _{{\mathscr {L}}^2(\Sigma )}\lesssim t^{-l-\frac{1}{4}},\\&\Vert \left( (1-C_{\omega '})^{-1}(C_{\omega '}I)\right) \omega ^e\Vert _{{\mathscr {L}}^1(\Sigma )}\leqslant \Vert (1-C_{\omega '})^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )} \Vert C_{\omega '}I\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert \omega ^e\Vert _{{\mathscr {L}}^2(\Sigma )}\\&\qquad \lesssim \Vert \omega '\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert \omega ^e\Vert _{{\mathscr {L}}^2(\Sigma )}\lesssim t^{-l-\frac{1}{4}},\\&\Vert \left( (1-C_{\omega '})^{-1}C_{\omega ^e}(1-C_{\omega ^\sharp })^{-1}\right) (C_{\omega ^\sharp }I)\omega ^\sharp \Vert _{{\mathscr {L}}^1(\Sigma )}\\&\qquad \leqslant \Vert (1-C_{\omega '})^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert C_{\omega ^e}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert (1-C_{\omega ^\sharp })^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert C_{\omega ^\sharp }I\Vert _{{\mathscr {L}}^2(\Sigma )} \Vert \omega ^\sharp \Vert _{{\mathscr {L}}^2(\Sigma )}\\&\qquad \lesssim \Vert \omega ^e\Vert _{{\mathscr {L}}^\infty (\Sigma )}\Vert \omega ^\sharp \Vert _{{\mathscr {L}}^2(\Sigma )}^2\lesssim t^{-l-\frac{1}{2}}. \end{aligned}$$

This completes the proof of the lemma. \(\square \)

Note Since \(\omega '(k)=0\) for \(k\in \Sigma \backslash \Sigma '\), we let \(C_{\omega '}|_{{\mathscr {L}}^2(\Sigma ')}\) denote the restriction of \(C_{\omega '}\) to \({\mathscr {L}}^2(\Sigma ')\). For simplicity, we still write \(C_{\omega '}|_{{\mathscr {L}}^2(\Sigma ')}\) as \(C_{\omega '}\). Then

$$\begin{aligned} \int _{\Sigma }\left( (1-C_{\omega '})^{-1}I\right) (\xi )\omega '(\xi )\hbox {d}\xi =\int _{\Sigma '}\left( (1-C_{\omega '})^{-1}I\right) (\xi )\omega '(\xi )\hbox {d}\xi . \end{aligned}$$

Lemma 10

As \(t\rightarrow \infty \),

$$\begin{aligned} (u(x,t),v(x,t))=\frac{i}{\pi }\left( \int _{\Sigma '}\left( (1-C_{\omega '})^{-1}I\right) (\xi )\omega '(\xi )\mathrm{d}\xi \right) _{12}+O(t^{-l}). \end{aligned}$$
(40)

Proof

A direct consequence of Theorem 6 and Lemma 9. \(\square \)

Corollary 11

As \(t\rightarrow \infty \),

$$\begin{aligned} (u(x,t),v(x,t))=2\lim _{k\rightarrow \infty }(kM'(k;x,t))_{12}+O(t^{-l}), \end{aligned}$$
(41)

where \(M'(k;x,t)\) satisfies the RH problem

$$\begin{aligned}{\left\{ \begin{array}{ll} M'_+(k;x,t)=M'_-(k;x,t)J'(k;x,t),&{}\quad k\in \Sigma ',\\ M'(k;x,t)\rightarrow I, &{}\quad k\rightarrow \infty , \end{array}\right. }\end{aligned}$$

where \(J'(k;x,t)=(I-\omega '_-(k;x,t))^{-1}(I+\omega '_+(k;x,t))\).

Proof

Set \(\mu '=(1-C_{\omega '} )^{-1}I\) and

$$\begin{aligned} M'(k;x,t)=I+\int _{\Sigma '}\frac{\mu '(\xi ;x,t)\omega '(\xi ;x,t)}{\xi -k}\frac{\hbox {d}\xi }{2\pi i}. \end{aligned}$$

In a way similar to the proof of Theorem 6, this corollary can be proved by means of (40). \(\square \)

Fig. 3

The oriented contour \(\Sigma _0\)

3.5 Rescaling and Further Reduction of the RH Problems

In this subsection, we localize the jump matrix of the RH problem to a neighborhood of the stationary phase point \(k_0\). Under a suitable scaling of the spectral parameter, the RH problem is reduced to an RH problem with a constant jump matrix, which can be solved explicitly.

Let \(\Sigma _0\) denote the contour \(\{k=\alpha \mathrm {e}^{\pm i\frac{\pi }{4}}:\alpha \in \mathbb {R}\}\) oriented as in Fig. 3. Define the scaling operator

$$\begin{aligned}&N: {\mathscr {L}}^2(\Sigma ')\rightarrow {\mathscr {L}}^2(\Sigma _0),\nonumber \\&\quad f(k)\mapsto (Nf)(k)=f\left( k_0+\frac{k}{\sqrt{8t}}\right) , \end{aligned}$$
(42)

and set \(\hat{\omega }=N\omega '\). A simple change-of-variable argument shows that

$$\begin{aligned} C_{\omega '}=N^{-1}C_{\hat{\omega }}N, \end{aligned}$$

where the operator \(C_{\hat{\omega }}\) is a bounded map from \({\mathscr {L}}^2(\Sigma _0)\) into \({\mathscr {L}}^2(\Sigma _0)\).
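The change-of-variable argument can be sketched as follows. Writing the Cauchy operator in the standard form \(C_{\omega '}f=C_-(f\omega ')\) of Deift and Zhou (1993), and substituting \(\xi =k_0+u/\sqrt{8t}\), \(k=k_0+z/\sqrt{8t}\), the Cauchy kernel is scale invariant:

$$\begin{aligned} (C_{\omega '}f)(k)=\int _{\Sigma '}\frac{f(\xi )\omega '(\xi )}{\xi -k_-}\frac{\hbox {d}\xi }{2\pi i} =\int _{\Sigma _0}\frac{(Nf)(u)\hat{\omega }(u)}{u-z_-}\frac{\hbox {d}u}{2\pi i} =\left( C_{\hat{\omega }}Nf\right) (z)=\left( N^{-1}C_{\hat{\omega }}Nf\right) (k). \end{aligned}$$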

One infers that

$$\begin{aligned}\hat{\omega }=\hat{\omega }_{+}=\begin{pmatrix} 0&{}\quad (Ns_1)(k)\\ 0&{}\quad 0 \end{pmatrix} \end{aligned}$$

on \(\hat{L}=\{k=\alpha \mathrm {e}^{\frac{3\pi i}{4}}:-\infty<\alpha < +\infty \}\), and

$$\begin{aligned}\hat{\omega }=\hat{\omega }_{-}=\begin{pmatrix} 0&{}\quad 0\\ (Ns_2)(k) &{}\quad 0 \end{pmatrix} \end{aligned}$$

on \(\hat{L}^*\), where

$$\begin{aligned} s_1(k)=\mathrm {e}^{-2it\theta (k)}(\det \delta )(k) R(k)\delta (k),\quad s_2(k)=\frac{\mathrm {e}^{2it\theta (k)}\delta ^{-1}(k)R^\dag (k^*) }{ \det \delta (k)}. \end{aligned}$$

Lemma 12

For an arbitrary positive integer l, as \(t\rightarrow \infty \) with \(k\in \hat{L}\),

$$\begin{aligned} |(N\tilde{\delta })(k)|\lesssim t^{-l}, \end{aligned}$$
(43)

where \(\tilde{\delta }(k)=\mathrm {e}^{-2it\theta (k)} R(k)[\delta (k) -(\det \delta )(k) I]\).

Proof

It follows from Eqs. (19) and (20) that \(\tilde{\delta }\) satisfies the following RH problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \tilde{\delta }_+(k)=\tilde{\delta }_-(k)(1+|\gamma (k)|^2)+\mathrm {e}^{-2it\theta (k)}f(k),&{}\quad k\in (-\infty ,k_0),\\ \tilde{\delta }(k)\rightarrow 0,&{}\quad k\rightarrow \infty , \end{array}\right. } \end{aligned}$$
(44)

where \(f(k)=[R(\gamma ^\dag \gamma -|\gamma |^2I)\delta _-](k)\).

The solution for the above RH problem can be expressed by

$$\begin{aligned} \tilde{\delta }(k)= & {} X(k)\int _{-\infty }^{k_0}\frac{\mathrm {e}^{-2it\theta (\xi )}f(\xi )}{X_+(\xi )(\xi -k)}\hbox {d}\xi ,\\ X(k)= & {} \mathrm {e}^{\frac{1}{2\pi i}\int _{-\infty }^{k_0}\frac{\log \left( 1+\left| \gamma (\xi )\right| ^2\right) }{\xi -k}\mathrm{d} \xi }. \end{aligned}$$

Observe that

$$\begin{aligned} \begin{aligned} R\gamma ^\dag \gamma -|\gamma |^2R&=(R-\rho )\gamma ^\dag \gamma -|\gamma |^2(R-\rho )\\&=(h_1+h_2)\sigma _3\sigma _1\gamma ^\dag \gamma \sigma _1\sigma _3, \end{aligned} \end{aligned}$$

thus \(f(k)=O((k-k_0)^l)\). As in Lemma 3, f(k) can be decomposed into two parts \(f_1(k)\) and \(f_2(k)\), where \(f_2(k)\) has an analytic continuation to \(L_t\), and they satisfy

$$\begin{aligned}&|\mathrm {e}^{-2it\theta (k)}f_1(k)|\lesssim \frac{1}{(1+|k-k_0+\frac{1}{t}|^2)t^l},\quad k\in \mathbb {R}, \end{aligned}$$
(45)
$$\begin{aligned}&|\mathrm {e}^{-2it\theta (k)}f_2(k)|\lesssim \frac{1}{(1+|k-k_0+\frac{1}{t}|^2)t^l},\quad k\in L_t, \end{aligned}$$
(46)

where

$$\begin{aligned} L_t:\,\left\{ k=k_0-\frac{1}{t}+\alpha \mathrm {e}^{\frac{3\pi i}{4}}:0\leqslant \alpha <+\infty \right\} , \end{aligned}$$

(see Fig. 4).

Fig. 4

The contour \(L_t\)

For \(k\in \hat{L}\), we have

$$\begin{aligned} (N\tilde{\delta })(k)&=X\left( k_0+\frac{k}{\sqrt{8t}}\right) \int _{k_0-\frac{1}{t}}^{k_0}\frac{\mathrm {e}^{-2it\theta (\xi )}f(\xi )}{X_+(\xi )\left( \xi -k_0-\frac{k}{\sqrt{8t}}\right) }\hbox {d}\xi \\&\quad +X\left( k_0+\frac{k}{\sqrt{8t}}\right) \int _{-\infty }^{k_0-\frac{1}{t}}\frac{\mathrm {e}^{-2it\theta (\xi )}f_1(\xi )}{X_+(\xi )\left( \xi -k_0-\frac{k}{\sqrt{8t}}\right) }\hbox {d}\xi \\&\quad +X\left( k_0+\frac{k}{\sqrt{8t}}\right) \int _{-\infty }^{k_0-\frac{1}{t}}\frac{\mathrm {e}^{-2it\theta (\xi )}f_2(\xi )}{X_+(\xi )\left( \xi -k_0-\frac{k}{\sqrt{8t}}\right) }\hbox {d}\xi \\&=I_1+I_2+I_3. \end{aligned}$$

The first two terms are estimated by

$$\begin{aligned} |I_1|&\lesssim \int _{k_0-\frac{1}{t}}^{k_0}\frac{|f(\xi )|}{|\xi -k_0-\frac{k}{\sqrt{8t}}|}\hbox {d}\xi \lesssim t^{-l}\left| \log \left| 1-\frac{2\sqrt{2}}{kt^{\frac{1}{2}}}\right| \right| \lesssim t^{-l-\frac{1}{2}},\\ |I_2|&\lesssim \int _{-\infty }^{k_0-\frac{1}{t}}\frac{|\mathrm {e}^{-2it\theta (\xi )}f_1(\xi )|}{|\xi -k_0-\frac{k}{\sqrt{8t}}|}\hbox {d}\xi \leqslant \frac{\sqrt{2}\pi }{2} t^{-l+1}\lesssim t^{-l+1}. \end{aligned}$$

As a consequence of Cauchy’s theorem, we can evaluate \(I_3\) along the contour \(L_t\) instead of the interval \((-\infty ,k_0-\frac{1}{t})\) and obtain \(|I_3|\lesssim t^{-l+1}\) in a similar way. Since l is an arbitrary positive integer, estimate (43) follows. \(\square \)

Note There is a similar estimate

$$\begin{aligned} |(N\hat{\delta })(k)|\lesssim t^{-l},\quad t\rightarrow \infty ,\quad k\in \hat{L}^*, \end{aligned}$$
(47)

where \(\hat{\delta }(k)=\mathrm {e}^{2it\theta (k)} [\delta ^{-1}(k)- (\det \delta )^{-1}(k)I]R^\dag (k^*)\).

Theorem 13

As \(t\rightarrow \infty \),

$$\begin{aligned} (u(x,t),v(x,t))=\frac{1}{\sqrt{2t}} \lim _{k\rightarrow \infty }\left( kM^0(k;x,t)\right) _{12}+O\left( t^{-1}\log t\right) , \end{aligned}$$
(48)

where \(M^0(k;x,t)\) satisfies the RH problem

$$\begin{aligned} {\left\{ \begin{array}{ll} M^0_+(k)=M^0_-(k)J^0(k), &{}\quad k\in \Sigma ^0,\\ M^0(k)\rightarrow I,&{}\quad k\rightarrow \infty . \end{array}\right. } \end{aligned}$$
(49)

Here \(J^{0}=(I-\omega ^0_-)^{-1}(I+\omega ^0_+)\) and

$$\begin{aligned} \omega ^0= & {} \omega ^0_{+}={\left\{ \begin{array}{ll} \begin{pmatrix} 0&{}\quad \eta ^2k^{2i\nu }\mathrm {e}^{-\frac{1}{2}ik^2}\gamma (k_0)\\ 0&{}\quad 0 \end{pmatrix}, \quad k\in \Sigma _1^0,\\ \begin{pmatrix} 0&{}-\eta ^2k^{2i\nu }\mathrm {e}^{-\frac{1}{2}ik^2}\frac{\gamma (k_0)}{1+|\gamma (k_0)|^2}\\ 0&{}0 \end{pmatrix}, \quad k\in \Sigma _3^0, \end{array}\right. } \end{aligned}$$
(50)
$$\begin{aligned} \eta= & {} (8t)^{-\frac{1}{2}i\nu }\mathrm {e}^{2ik_0^2t+\tilde{\chi }(k_0)}, \end{aligned}$$
(51)
(52)

Proof

It follows from (43) and Lemma 3.35 in Deift and Zhou (1993) that

$$\begin{aligned} \Vert \hat{\omega }-\omega ^0\Vert _{{\mathscr {L}}^\infty (\Sigma ^0)\cap {\mathscr {L}}^1(\Sigma ^0)\cap {\mathscr {L}}^2(\Sigma ^0)}\lesssim t^{-\frac{1}{2}}\log t,\quad t\rightarrow \infty . \end{aligned}$$

Thus,

$$\begin{aligned}&\int _{\Sigma ' }\left( (1-C_{\omega ' })^{-1}I \right) (\xi )\omega ' (\xi )\hbox {d}\xi \nonumber \\&\quad =\int _{\Sigma ' }\left( N^{-1}(1-C_{\hat{\omega }})^{-1}N I \right) (\xi )\omega ' (\xi )\hbox {d}\xi \nonumber \\&\quad =\int _{\Sigma ' }\left( (1-C_{\hat{\omega }})^{-1}I \right) \left( (\xi -k_0)\sqrt{8t}\right) N \omega ' \left( (\xi -k_0)\sqrt{8t}\right) \hbox {d}\xi \nonumber \\&\quad =\frac{1}{\sqrt{8 t}}\int _{\Sigma ^0}\left( (1-C_{\hat{\omega }})^{-1}I \right) (\xi )\hat{\omega }(\xi )\hbox {d}\xi \nonumber \\&\quad =\frac{1}{\sqrt{8 t}}\int _{\Sigma ^0}\left( (1-C_{\omega ^0})^{-1}I \right) (\xi )\omega ^0(\xi )\hbox {d}\xi +O\left( t^{-1}\log t\right) . \end{aligned}$$
(53)

For \(k\in \mathbb {C}\backslash \Sigma ^0\), let

$$\begin{aligned} M^0(k)=I+\int _{\Sigma ^0}\frac{\left( (1-C_{\omega ^0})^{-1}I\right) (\xi )\omega ^0(\xi )}{\xi -k}\frac{\text {d} \xi }{2\pi i}. \end{aligned}$$
(54)

Then \(M^0\) solves the above RH problem. From the above arguments and Lemma 10, the theorem follows. \(\square \)

Note In particular, if

$$\begin{aligned} M^0(k)=I+\frac{M^0_1}{k}+O(k^{-2}),\quad k\rightarrow \infty , \end{aligned}$$

then

$$\begin{aligned} (u(x,t),v(x,t))=\frac{1}{\sqrt{2t}} \left( M^0_1\right) _{12}+O\left( t^{-1}\log t\right) . \end{aligned}$$

3.6 Solving the Model Problem

In this subsection, we shall discuss the leading asymptotics of the solution for the Cauchy problem of the CNLS equation. In order to obtain \((M^0_1)_{12}\) explicitly, it is convenient to introduce the following transformation:

$$\begin{aligned} \Psi (k)=H(k) k^{-i\nu \sigma }\mathrm {e}^{\frac{1}{4}ik^2\sigma },\quad H(k)=\eta ^{\sigma } M^0(k)\eta ^{-\sigma }, \end{aligned}$$

which implies that

$$\begin{aligned} \Psi _+(k)=\Psi _-(k)v(k_0),\quad v=k^{i\nu \hat{\sigma }}\mathrm {e}^{-\frac{1}{4}ik^2\hat{\sigma }}\eta ^{\hat{\sigma }} J^0. \end{aligned}$$

Because the jump matrix is constant along each ray, we have

$$\begin{aligned} \frac{\hbox {d}\Psi _+(k)}{\hbox {d}k}=\frac{\hbox {d}\Psi _-(k)}{\hbox {d}k}v(k_0), \end{aligned}$$

from which it follows that \(\frac{\text {d}\Psi (k)}{\text {d}k}\Psi ^{-1}(k)\) has no jump discontinuity along any of the rays. Moreover, from the relation between H(k) and \(\Psi (k)\), we have

$$\begin{aligned} \begin{aligned} \frac{\hbox {d}\Psi (k)}{\hbox {d}k}\Psi ^{-1}(k)&=\frac{\hbox {d} H(k)}{\hbox {d}k}H^{-1}(k)+\frac{1}{2}ikH(k)\sigma H^{-1}(k)-\frac{i\nu }{k}H(k)\sigma H^{-1}(k)\\&=O\left( \frac{1}{k}\right) +\frac{1}{2}ik\sigma -\frac{1}{2}i\eta ^{\sigma }\left[ \sigma ,M_1^0\right] \eta ^{-\sigma }. \end{aligned} \end{aligned}$$

It follows from Liouville’s theorem that

$$\begin{aligned} \frac{\hbox {d}\Psi (k)}{\hbox {d}k}-\frac{1}{2}ik\sigma \Psi (k)=\beta \Psi (k), \end{aligned}$$
(55)

where

$$\begin{aligned} \beta =-\frac{1}{2}i\eta ^{\sigma }\left[ \sigma ,M_1^0\right] \eta ^{-\sigma }=\begin{pmatrix} 0&{}\quad \beta _{12}\\ \beta _{21}&{}\quad 0 \end{pmatrix}.\end{aligned}$$

In particular,

$$\begin{aligned} \left( M^0_1\right) _{12}=-i\eta ^2\beta _{12}. \end{aligned}$$
(56)

It is further possible to show that the solution of the RH problem for \(M^0(k)\) is unique, and therefore we have an identity:

$$\begin{aligned} (M^0(k^*))^\dag =(M^0(k))^{-1}, \end{aligned}$$

which implies that \(\beta _{12}=-\beta ^\dag _{21}\).

Let

$$\begin{aligned} \Psi (k)=\begin{pmatrix} \Psi _{11}(k)&{}\quad \Psi _{12}(k)\\ \Psi _{21}(k)&{}\quad \Psi _{22}(k) \end{pmatrix}.\end{aligned}$$

From (55) we obtain

$$\begin{aligned} \frac{\text {d}^2\Psi _{11}(k)}{\text {d}k^2}= & {} \left( \beta _{12}\beta _{21}-\frac{i}{2}-\frac{1}{4}k^2\right) \Psi _{11}(k),\nonumber \\ \beta _{12}\Psi _{21}(k)= & {} \frac{\text {d}\Psi _{11}(k)}{\text {d}k}+\frac{i}{2}k\Psi _{11},\nonumber \\ \frac{\text {d}^2\beta _{12}\Psi _{22}(k)}{\text {d}k^2}= & {} \left( \beta _{12}\beta _{21}+\frac{i}{2}-\frac{1}{4}k^2\right) \beta _{12}\Psi _{22}(k),\nonumber \\ \Psi _{12}(k)= & {} \frac{1}{\beta _{12}\beta _{21}}\left( \frac{\text {d}\beta _{12}\Psi _{22}(k)}{\text {d}k}-\frac{i}{2}k\beta _{12}\Psi _{22}(k)\right) . \end{aligned}$$
(57)
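System (57) follows from (55) componentwise; we sketch the first pair, under the block convention in which \(\sigma \) acts as \(-1\) on the first component and as \(+I\) on the others (the convention consistent with the second relation in (57)). The (1, 1) and (2, 1) entries of (55) read

$$\begin{aligned} \frac{\hbox {d}\Psi _{11}(k)}{\hbox {d}k}+\frac{i}{2}k\Psi _{11}(k)=\beta _{12}\Psi _{21}(k),\qquad \frac{\hbox {d}\Psi _{21}(k)}{\hbox {d}k}-\frac{i}{2}k\Psi _{21}(k)=\beta _{21}\Psi _{11}(k). \end{aligned}$$

Differentiating the first relation and substituting the second yields

$$\begin{aligned} \frac{\hbox {d}^2\Psi _{11}(k)}{\hbox {d}k^2}=\left( \beta _{12}\beta _{21}-\frac{i}{2}-\frac{1}{4}k^2\right) \Psi _{11}(k), \end{aligned}$$

which is the first equation of (57); the equations for \(\Psi _{22}\) and \(\Psi _{12}\) are obtained in the same way.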

As is well known, Weber’s equation

$$\begin{aligned} \frac{\text {d}^2g(\zeta )}{\text {d}\zeta ^2}+\left( a+\frac{1}{2}-\frac{\zeta ^2}{4}\right) g(\zeta )=0 \end{aligned}$$

has the solution

$$\begin{aligned} g(\zeta )=c_1 D_a(\zeta )+c_2D_{a}(-\zeta ), \end{aligned}$$

where \(D_a(\cdot )\) denotes the standard parabolic cylinder function, which satisfies

$$\begin{aligned}&\frac{\text {d}D_a(\zeta )}{\text {d}\zeta }+\frac{\zeta }{2}D_a(\zeta )-aD_{a-1}(\zeta )=0,\nonumber \\&\quad D_a(\pm \zeta )=\frac{\Gamma (a+1)\mathrm {e}^{\frac{i\pi a}{2}}}{\sqrt{2\pi }}D_{-a-1}(\pm i\zeta )+\frac{\Gamma (a+1)\mathrm {e}^{-\frac{i\pi a}{2}}}{\sqrt{2\pi }}D_{-a-1}(\mp i\zeta ).\qquad \quad \end{aligned}$$
(58)
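As a purely numerical sanity check (not part of the proof), the first identity in (58) can be tested for real order and argument with SciPy's parabolic cylinder routine; the helper name `recurrence_residual` is ours, introduced only for illustration.

```python
# Check the recurrence D_a'(z) + (z/2)*D_a(z) - a*D_{a-1}(z) = 0 from (58)
# numerically, for real order a and real argument z
# (scipy.special.pbdv is restricted to real arguments).
from scipy.special import pbdv

def recurrence_residual(a, z):
    """Residual of D_a'(z) + (z/2)*D_a(z) - a*D_{a-1}(z)."""
    d_a, dp_a = pbdv(a, z)        # D_a(z) and its z-derivative
    d_am1, _ = pbdv(a - 1.0, z)   # D_{a-1}(z)
    return dp_a + 0.5 * z * d_a - a * d_am1

for a in (0.0, 0.3, 1.7):
    for z in (0.5, 1.0, 2.5):
        assert abs(recurrence_residual(a, z)) < 1e-8
```

For \(a=0\) the residual vanishes identically, since \(D_0(z)=\mathrm {e}^{-z^2/4}\).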

As \(\zeta \rightarrow \infty \), from Whittaker and Watson (1927) we have

$$\begin{aligned} D_a(\zeta )={\left\{ \begin{array}{ll} \zeta ^a\mathrm {e}^{-\frac{\zeta ^2}{4} }(1+O(\zeta ^{-2})),&{}\quad |\arg \zeta |<\frac{3\pi }{4},\\ \zeta ^a\mathrm {e}^{-\frac{\zeta ^2}{4} }(1+O(\zeta ^{-2}))-\frac{\sqrt{2\pi }}{\Gamma (-a)}\mathrm {e}^{a\pi i+\frac{\zeta ^2}{4}}\zeta ^{-a-1}(1+O(\zeta ^{-2})),&{}\quad \frac{\pi }{4}<\arg \zeta<\frac{5\pi }{4},\\ \zeta ^a\mathrm {e}^{-\frac{\zeta ^2}{4} }(1+O(\zeta ^{-2}))-\frac{\sqrt{2\pi }}{\Gamma (-a)}\mathrm {e}^{-a\pi i+\frac{\zeta ^2}{4}}\zeta ^{-a-1}(1+O(\zeta ^{-2})),&{}\quad -\frac{5\pi }{4}<\arg \zeta <-\frac{\pi }{4}, \end{array}\right. } \end{aligned}$$
(59)
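The leading term of the first branch of (59) can likewise be illustrated numerically, for real a and large real \(\zeta \) (so \(\arg \zeta =0\)); the helper name `leading_order_ratio` is ours, introduced only for illustration.

```python
# For large real z, D_a(z) = z**a * exp(-z**2/4) * (1 + O(z**-2)),
# the first branch of (59): the ratio below approaches 1, with an
# error shrinking roughly like z**-2.
import numpy as np
from scipy.special import pbdv

def leading_order_ratio(a, z):
    return pbdv(a, z)[0] / (z**a * np.exp(-z**2 / 4.0))

err6 = abs(leading_order_ratio(0.7, 6.0) - 1.0)
err12 = abs(leading_order_ratio(0.7, 12.0) - 1.0)
assert err6 < 1e-2
assert err12 < err6  # the error decreases as z grows
```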

where \(\Gamma \) is the Gamma function.

Set \(a=i\beta _{12}\beta _{21}\),

$$\begin{aligned} \Psi _{11}(k)= & {} c_1D_a\left( \mathrm {e}^{-\frac{3\pi i}{4}}k\right) +c_2D_{a}\left( \mathrm {e}^{\frac{\pi i}{4}}k\right) ,\\ \beta _{12}\Psi _{22}(k)= & {} c_3D_{-a}\left( \mathrm {e}^{\frac{3\pi i}{4}}k\right) +c_4D_{-a}\left( \mathrm {e}^{-\frac{\pi i}{4}}k\right) . \end{aligned}$$

As \(\arg k\in (-\frac{\pi }{4},\frac{\pi }{4})\) and \(k\rightarrow \infty \), we arrive at

$$\begin{aligned} \Psi _{11}(k)k^{-i\nu }\mathrm {e}^{\frac{ik^2}{4}}\rightarrow 1,\quad \Psi _{22}(k)k^{i\nu }\mathrm {e}^{-\frac{ik^2}{4}}\rightarrow I, \end{aligned}$$

then

$$\begin{aligned} \Psi _{11}(k)= & {} \mathrm {e}^{\frac{\pi \nu }{4}}D_a(\mathrm {e}^{\frac{\pi i}{4}}k),\quad \nu =\beta _{12}\beta _{21},\\ \beta _{12}\Psi _{22}(k)= & {} \beta _{12}\mathrm {e}^{\frac{\pi \nu }{4}}D_{-a}(\mathrm {e}^{-\frac{\pi i}{4}}k). \end{aligned}$$

Consequently,

$$\begin{aligned} \beta _{12}\Psi _{21}(k)= & {} a\mathrm {e}^{\frac{\pi (\nu +i)}{4}}D_{a-1}(\mathrm {e}^{\frac{\pi i}{4}}k),\\ \Psi _{12}(k)= & {} \beta _{12} \mathrm {e}^{\frac{\pi (\nu -3i)}{4}}D_{-a-1}(\mathrm {e}^{-\frac{\pi i}{4}}k). \end{aligned}$$

For \(\arg k\in (-\frac{3\pi }{4},-\frac{\pi }{4})\) and \(k\rightarrow \infty \), we have

$$\begin{aligned} \Psi _{11}(k)k^{-i\nu }\mathrm {e}^{\frac{ik^2}{4}}\rightarrow 1,\quad \Psi _{22}(k)k^{i\nu }\mathrm {e}^{-\frac{ik^2}{4}}\rightarrow I, \end{aligned}$$

then

$$\begin{aligned} \Psi _{11}(k)= & {} \mathrm {e}^{\frac{\pi \nu }{4}}D_a(\mathrm {e}^{\frac{\pi i}{4}}k),\\ \beta _{12}\Psi _{22}(k)= & {} \beta _{12} \mathrm {e}^{-\frac{3\pi \nu }{4}}D_{-a}(\mathrm {e}^{\frac{3\pi i}{4}}k). \end{aligned}$$

Consequently,

$$\begin{aligned} \beta _{12}\Psi _{21}(k)= & {} a\mathrm {e}^{\frac{\pi (\nu +i)}{4}}D_{a-1}(\mathrm {e}^{\frac{\pi i}{4}}k),\\ \Psi _{12}(k)= & {} \beta _{12}\mathrm {e}^{\frac{\pi (i-3\nu )}{4}}D_{-a-1}(\mathrm {e}^{\frac{3\pi i}{4}}k). \end{aligned}$$

Along the ray \(\arg k=-\frac{\pi }{4}\),

$$\begin{aligned} \Psi _+(k)= & {} \Psi _-(k)\begin{pmatrix} 1&{}\quad \gamma (k_0)\\ 0&{}\quad I \end{pmatrix},\\ \beta _{12}\mathrm {e}^{\frac{\pi (i-3\nu )}{4}}D_{-a-1}(\mathrm {e}^{\frac{3\pi i}{4}}k)= & {} \mathrm {e}^{\frac{\pi \nu }{4}}D_a(\mathrm {e}^{\frac{\pi i}{4}}k)\gamma (k_0)+\beta _{12}\mathrm {e}^{\frac{\pi (\nu -3i)}{4}}D_{-a-1}(\mathrm {e}^{-\frac{\pi i}{4}}k). \end{aligned}$$

It follows from Eq. (58) that

$$\begin{aligned} D_{a}(\mathrm {e}^{\frac{\pi i}{4}}k)=\frac{\Gamma (a+1)\mathrm {e}^{\frac{i\pi a}{2}}}{\sqrt{2\pi }}D_{-a-1}(\mathrm {e}^{\frac{3\pi i}{4}}k)+\frac{\Gamma (a+1)\mathrm {e}^{-\frac{ i\pi a}{2}}}{\sqrt{2\pi }}D_{-a-1}(\mathrm {e}^{-\frac{\pi i}{4}}k). \end{aligned}$$

Separating the coefficients of the two linearly independent functions, we obtain

$$\begin{aligned} \beta _{12}=\frac{\mathrm {e}^{\frac{\pi \nu }{2}-\frac{\pi i}{4}}\Gamma (a+1)}{\sqrt{2\pi }}\gamma (k_0)=\frac{\mathrm {e}^{\frac{\pi \nu }{2}+\frac{\pi i}{4}}\nu \Gamma (i\nu )}{\sqrt{2\pi }}\gamma (k_0). \end{aligned}$$
(60)
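The second equality in (60) uses only the functional equation \(\Gamma (a+1)=a\Gamma (a)\) with \(a=i\nu \) (note \(i=\mathrm {e}^{\frac{\pi i}{2}}\)), so that \(\mathrm {e}^{-\frac{\pi i}{4}}\Gamma (a+1)=\mathrm {e}^{\frac{\pi i}{4}}\nu \Gamma (i\nu )\); a quick numerical check, with an arbitrary test value of \(\nu \):

```python
# Verify e^{-i*pi/4} * Gamma(1 + i*nu) == e^{+i*pi/4} * nu * Gamma(i*nu),
# the step Gamma(a+1) = a*Gamma(a) = i*nu*Gamma(i*nu) with a = i*nu.
# scipy.special.gamma accepts complex arguments.
import numpy as np
from scipy.special import gamma

nu = 0.37  # hypothetical test value; any nonzero real nu works
lhs = np.exp(-1j * np.pi / 4) * gamma(1 + 1j * nu)
rhs = np.exp(1j * np.pi / 4) * nu * gamma(1j * nu)
assert abs(lhs - rhs) < 1e-12
```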

In conclusion, the main result (Theorem 1) is a consequence of Theorem 13 and Eqs. (51), (56), and (60).