1 Introduction

In 1993, Deift and Zhou proposed the nonlinear steepest descent method [14], which is very effective for studying the long-time asymptotic behavior of nonlinear evolution equations. Since then, the long-time asymptotics of many integrable nonlinear evolution equations associated with \(2\times 2\) matrix spectral problems have been studied by this method, such as the focusing nonlinear Schrödinger equation, the modified nonlinear Schrödinger equation, the derivative nonlinear Schrödinger equation, the Korteweg–de Vries equation, the Camassa–Holm equation, the sine-Gordon equation, the Fokas–Lenells equation, the short pulse equation, the discrete nonlinear Schrödinger equation, the Toda equation and so on [2, 3, 6, 8, 11,12,13, 15, 23, 24, 30,31,32, 38, 42, 43, 47, 48]. The starting point of the nonlinear steepest descent method is the Riemann–Hilbert (RH) problem associated with the corresponding integrable nonlinear evolution equation. The RH approach is effective in dealing with integrable equations [18, 27,28,29]. One introduces the Deift–Zhou deformations of the contour of the RH problem and transforms the basic RH problem into a model one, from which the leading-order asymptotics of the RH problem can be derived. Recently, the long-time asymptotics of some integrable nonlinear evolution equations associated with \(3\times 3\) matrix spectral problems have been studied following the procedure of the Deift–Zhou nonlinear steepest descent method, for example, the Degasperis–Procesi equation, the coupled nonlinear Schrödinger equation, the Sasa–Satsuma equation and others [7, 17, 19, 36].

The main purpose of this paper is to study the long-time asymptotics for the Cauchy problem of a new type coupled nonlinear Schrödinger (CNLS) equation [39]

$$\begin{aligned} {\left\{ \begin{array}{ll} iq_{1t}+q_{1xx}+2(|q_1|^2-2|q_2|^2)q_1-2q_2^2q_{1}^*=0,\\ iq_{2t}+q_{2xx}+2(2|q_1|^2-|q_2|^2)q_2+2q_1^2q_2^*=0,\\ q_1(x,0)=q_1^0(x),\quad q_2(x,0)=q_2^0(x), \end{array}\right. } \end{aligned}$$
(1.1)

associated with a \(4\times 4\) matrix spectral problem, where \(q_1(x,t)\) and \(q_2(x,t)\) are two potentials, the asterisk denotes the complex conjugate, and the initial values \(q_1^0(x)\) and \(q_2^0(x)\) lie in the Schwartz space \({\mathscr {S}}({\mathbb {R}})=\{f(x)\in C^\infty ({\mathbb {R}}):\sup _{x\in {\mathbb {R}}}|x^\alpha \partial ^\beta f(x)|<\infty ,\forall \alpha ,\beta \in {\mathbb {N}}\}\); they are assumed to be generic so that \(\det X_2\) and \(\det X_4\) defined in the following context are nonzero in \({\mathbb {C}}_-\) (Fig. 1). In physics, \(q_1\) and \(q_2\) in (1.1) are slowly varying complex amplitudes of two interacting optical modes; the terms \(|q_1|^2q_1\), \(|q_2|^2q_2\) and the terms \(|q_1|^2q_2\), \(|q_2|^2q_1\) represent the self-phase modulation and the cross-phase modulation, respectively [41]. The last terms in (1.1) are the coherent coupling terms and govern the energy exchange between the two axes of the fiber. Moreover, this coupling is called positive coherent coupling because the ratio between the coefficients of the cross-phase modulation terms and the coherent coupling terms is 2 [49]. Such a focusing–defocusing-type nonlinearly coupled magneto-optic waveguide equation was studied in [9], and this equation is also a possible realization of the recently studied artificial metamaterials [35]. The new type CNLS Eq. (1.1) has been studied extensively in mathematics. In Ref. [49], the bright vector one- and two-soliton solutions, including one-peak and two-peak solitons, were constructed with the help of the Darboux transformation [16, 33, 34]. Solutions generated on vanishing and non-vanishing backgrounds, including solitons, breathers and rogue waves, together with their dynamic behaviors, were studied in [25]. Single- and double-hump solitons for the new type CNLS equation and their interactions have been discussed in [37].
The first-order twist rogue-wave pair, periodic and soliton-like rational solutions for this equation were obtained by virtue of the binary Darboux transformation [50]. The rogue waves and modulation instability for the new type CNLS equation were studied with the aid of generalized Darboux transformations [10]. In [26], the RH approach was applied to study the Cauchy problem of the new type CNLS equation, from which the representation of the solution was given. Moreover, the explicit theta function representations of solutions for the CNLS equation were obtained [46] with the aid of Baker–Akhiezer functions and the meromorphic functions [22, 44]. It is found that various integrable equations with N-peakon solutions are usually negative flows in the hierarchies [20, 21].

In this paper, we shall extend the nonlinear steepest descent method to study the long-time asymptotic behavior of solutions of the Cauchy problem of the new type CNLS Eq. (1.1). The most striking feature of this equation is that its Lax pair consists of \(4\times 4\) matrix spectral problems, which presents some challenges. Different from the \(2\times 2\) case, the \(4\times 4\) matrix spectral problem is of higher order, so its spectral analysis is more difficult to carry out, and the expression of the corresponding RH problem is more complex. In general, when the nonlinear steepest descent method is used to deal with an integrable nonlinear evolution equation related to a \(2\times 2\) matrix spectral problem, a scalar RH problem is introduced, which is explicitly solved by the Plemelj formula. However, in the case of a \(4\times 4\) matrix spectral problem, it is necessary to introduce two corresponding matrix RH problems, which cannot be solved in explicit form unless the RH problems have special structures. The last step of the nonlinear steepest descent method is to solve the model RH problem. In the \(2\times 2\) case, the basic RH problem reduces to a model \(2\times 2\) matrix RH problem whose solution can be expressed by parabolic cylinder functions. But in the \(4\times 4\) case, the solution of the model \(4\times 4\) matrix RH problem cannot be directly represented by parabolic cylinder functions. To overcome these difficulties, our strategies in this paper are as follows. (i) We write the \(4\times 4\) matrices in their \(2\times 2\) block forms, whose components are all \(2\times 2\) matrices; the resulting expressions are clearer and simplify the calculations.
(ii) For the matrix RH problems introduced in the course of the long-time asymptotic analysis, we consider their determinants, which are scalar, so the corresponding scalar RH problems are solvable. We establish suitable estimates between the terms involving the matrix-valued functions and those involving the scalar functions. (iii) We deal with the final model problem by analyzing the components of the relevant matrix, whereby the relevant matrix functions are reduced to diagonal forms with the aid of their properties. They can then be transformed into Weber's equation, and their components are expressed by parabolic cylinder functions. Finally, by the asymptotic expansion of the parabolic cylinder function, the model problem is solved, from which the long-time asymptotics of solutions of the Cauchy problem of the new type CNLS equation are obtained.

The outline of this paper is as follows. In Sect. 2, we derive the Jost solutions and the scattering relations from the spectral analysis of the \(4\times 4\) Lax pair of the new type CNLS equation. The basic RH problem is then obtained through direct calculations. In Sect. 3, we begin with the basic RH problem and deform its contour several times with the aid of the nonlinear steepest descent method. Finally, a solvable model problem is constructed, from which the long-time asymptotics of the solution of the Cauchy problem of the new type CNLS equation are obtained in Theorem 3.4.

2 The Basic Riemann–Hilbert Problem

In this section, we begin with the Lax pair of the new type CNLS Eq. (1.1). With the aid of the inverse scattering theory, the basic RH problem is obtained from the initial problem for the new type CNLS Eq. (1.1). Let us consider a \(4\times 4\) matrix Lax pair of the new type CNLS equation:

$$\begin{aligned} \psi _x=(-ik\sigma _4+U)\psi , \end{aligned}$$
(2.1a)
$$\begin{aligned} \psi _t=(-2ik^2\sigma _4+V)\psi , \end{aligned}$$
(2.1b)

where \(\psi \) is a matrix function and k is the spectral parameter, \(\sigma _4=\mathrm {diag}(1,1,-1,-1)\),

$$\begin{aligned}&U=\left( \begin{array}{cccc} 0&{}0&{}q_1&{}q_2\\ 0&{}0&{}-q_2&{}q_1\\ -q_1^*&{}-q_2^*&{}0&{}0\\ q_2^*&{}-q_1^*&{}0&{}0\\ \end{array}\right) , \end{aligned}$$
(2.2)
$$\begin{aligned}&V=\left( \begin{array}{cccc} i(|q_1|^2-|q_2|^2)&{}i(q_1^*q_2+q_1q_2^*)&{}iq_{1x}+2kq_1&{}iq_{2x}+2kq_2\\ i(-q_1^*q_2-q_1q_2^*)&{}i(|q_1|^2-|q_2|^2)&{}-iq_{2x}-2kq_2&{}iq_{1x}+2kq_1\\ iq^*_{1x}-2kq_1^*&{}iq_{2x}^*-2kq_2^*&{}i(-|q_1|^2+|q_2|^2)&{}i(-q_1^*q_2-q_1q_2^*)\\ -iq_{2x}^*+2kq_2^*&{}iq^*_{1x}-2kq_1^*&{}i(q_1^*q_2+q_1q_2^*)&{}i(-|q_1|^2+|q_2|^2)\\ \end{array}\right) .\nonumber \\ \end{aligned}$$
(2.3)

A direct calculation shows that the zero-curvature equation between (2.1a) and (2.1b) is equivalent to the new type CNLS Eq. (1.1). Throughout this paper, we write the \(4\times 4\) matrix as the \(2\times 2\) block form and the elements of the block form are all \(2\times 2\) matrices. For example, the \(4\times 4\) matrix U is written as the \(2\times 2\) block form

$$\begin{aligned} U= \left( \begin{array}{cc} {{\textbf {0}}}_{2\times 2} &{} Q\\ -Q^*&{} {{\textbf {0}}}_{2\times 2}\\ \end{array}\right) , \end{aligned}$$
(2.4)

where

$$\begin{aligned} Q=\left( \begin{array}{cl} q_1&{}q_2\\ -q_2&{}q_1\\ \end{array}\right) . \end{aligned}$$
(2.5)
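The equivalence between the zero-curvature equation of (2.1a)–(2.1b) and system (1.1) can be verified symbolically. The sketch below treats the conjugates \(q_1^*,q_2^*\) as independent functions \(r_1,r_2\) and builds V from the compact block expression \(V=2kU+i\sigma _4(U_x-U^2)\), which one can check agrees with (2.3) entrywise:

```python
import sympy as sp

x, t, k = sp.symbols('x t k')
q1, q2, r1, r2 = [sp.Function(n)(x, t) for n in ('q1', 'q2', 'r1', 'r2')]
# r1, r2 play the role of the conjugates q1^*, q2^*

sigma4 = sp.diag(1, 1, -1, -1)
Q = sp.Matrix([[q1, q2], [-q2, q1]])                    # (2.5)
R = sp.Matrix([[r1, r2], [-r2, r1]])                    # stands for Q^*
Z2 = sp.zeros(2)
U = sp.BlockMatrix([[Z2, Q], [-R, Z2]]).as_explicit()   # (2.4)
V = 2*k*U + sp.I*sigma4*(U.diff(x) - U**2)              # compact form of (2.3)

X = -sp.I*k*sigma4 + U
T = -2*sp.I*k**2*sigma4 + V
zc = X.diff(t) - T.diff(x) + X*T - T*X                  # zero-curvature residual

# impose the evolution (1.1) (and its conjugate) on q1, q2, r1, r2
evol = {
    q1.diff(t):  sp.I*(q1.diff(x, 2) + 2*(q1*r1 - 2*q2*r2)*q1 - 2*q2**2*r1),
    q2.diff(t):  sp.I*(q2.diff(x, 2) + 2*(2*q1*r1 - q2*r2)*q2 + 2*q1**2*r2),
    r1.diff(t): -sp.I*(r1.diff(x, 2) + 2*(q1*r1 - 2*q2*r2)*r1 - 2*r2**2*q1),
    r2.diff(t): -sp.I*(r2.diff(x, 2) + 2*(2*q1*r1 - q2*r2)*r2 + 2*r1**2*q2),
}
res = zc.subs(evol)
assert all(sp.expand(entry) == 0 for entry in res)      # vanishes identically in k
```

The residual vanishes for every value of k, confirming that the zero-curvature equation encodes exactly the system (1.1).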

Let \(\mu =\psi e^{ik\sigma _4x+2ik^2\sigma _4t}\), where the exponential of a diagonal matrix is taken entrywise, so that, e.g., \(e^{\sigma _4}=\mathrm {diag}(e,e,e^{-1},e^{-1})\). Then we have by using (2.1a) and (2.1b) that

$$\begin{aligned}&\mu _x=-ik[\sigma _4,\mu ]+U\mu , \end{aligned}$$
(2.6)
$$\begin{aligned}&\mu _t=-2ik^2[\sigma _4,\mu ]+V\mu , \end{aligned}$$
(2.7)

where \([\sigma _4,\mu ]=\sigma _4\mu -\mu \sigma _4\). We introduce the matrix Jost solutions \(\mu _\pm \) of Eq. (2.6) by the Volterra integral equations:

$$\begin{aligned} \mu _\pm (k;x,t)=I_{4\times 4}+\int _{\pm \infty }^xe^{ik\sigma _4(\xi -x)}U(k;\xi ,t)\mu _\pm (k;\xi ,t)e^{-ik\sigma _4(\xi -x)}\, \mathrm {d}\xi , \end{aligned}$$
(2.8)

with the asymptotic condition \(\mu _\pm \rightarrow I_{4\times 4}\) as \(x\rightarrow \pm \infty \), where \(I_{4\times 4}\) is the \(4\times 4\) identity matrix. Let \(\mu _\pm =(\mu _{\pm L},\mu _{\pm R})\), where \(\mu _{\pm L}\) denote the first and second columns of the \(4\times 4\) matrices \(\mu _{\pm }\), and \(\mu _{\pm R}\) denote the third and fourth columns. We introduce a \(4\times 4\) matrix \(\Gamma _1\) in terms of the Pauli matrix \(\sigma _1\):

$$\begin{aligned} \Gamma _1=\left( \begin{array}{cc} \sigma _1&{}{{\textbf {0}}}_{2\times 2}\\ {{\textbf {0}}}_{2\times 2}&{}\sigma _1\\ \end{array}\right) ,\quad \sigma _1=\left( \begin{array}{cc} 0&{}1\\ 1&{}0\\ \end{array}\right) . \end{aligned}$$
(2.9)

For convenience, we first introduce three operators \({\mathscr {T}}\), \(T_1\) and \(T_2\). For a \(2\times 2\) matrix \(N_2(k)\), the operator \({\mathscr {T}}\) is defined by

$$\begin{aligned} {\mathscr {T}}N_2(k)= {\mathscr {T}}\left( \begin{array}{cc} N_2^{(11)}(k) &{} N_2^{(12)}(k)\\ N_2^{(21)}(k) &{} N_2^{(22)}(k) \end{array}\right) =\left( \begin{array}{cc} N_2^{(22)}(k) &{} -N_2^{(21)}(k)\\ -N_2^{(12)}(k) &{} N_2^{(11)}(k) \end{array}\right) , \end{aligned}$$
(2.10)

where \(N_2^{(ij)}(k)\), \((i,j=1,2)\), denotes the \((i,j)\) entry of \(N_2(k)\). A simple computation shows that \({\mathscr {T}}(A_1B_1)={\mathscr {T}}(A_1){\mathscr {T}}(B_1)\) and \({\mathscr {T}}(A_1+B_1)={\mathscr {T}}(A_1)+{\mathscr {T}}(B_1)\), where \(A_1\) and \(B_1\) are \(2\times 2\) matrices. For a \(4\times 4\) matrix \(N_4(k)\), the operators \(T_1\) and \(T_2\) are given by

$$\begin{aligned} T_1N_4(k)= T_1\left( \begin{array}{cc} N_4^{(11)}(k) &{} N_4^{(12)}(k)\\ N_4^{(21)}(k) &{} N_4^{(22)}(k) \end{array}\right) =\left( \begin{array}{cc} N_4^{(22)}(k) &{} -N_4^{(21)}(k)\\ -N_4^{(12)}(k) &{} N_4^{(11)}(k) \end{array}\right) ^*, \end{aligned}$$
(2.11)
$$\begin{aligned} T_2N_4(k)= T_2\left( \begin{array}{cc} N_4^{(11)}(k) &{} N_4^{(12)}(k)\\ N_4^{(21)}(k) &{} N_4^{(22)}(k) \end{array}\right) =\left( \begin{array}{cc} {\mathscr {T}}N_4^{(11)}(k) &{} {\mathscr {T}}N_4^{(12)}(k)\\ {\mathscr {T}}N_4^{(21)}(k) &{} {\mathscr {T}}N_4^{(22)}(k) \end{array}\right) , \end{aligned}$$
(2.12)

where \(N_4^{(ij)}(k)\), \((i,j=1,2)\), denotes the \((i,j)\) block of the \(2\times 2\) block form of \(N_4(k)\). It is not difficult to verify that \(T_j(A_2B_2)=T_j(A_2)T_j(B_2)\) and \(T_j(A_2+B_2)=T_j(A_2)+T_j(B_2)\), \((j=1,2)\), where \(A_2\) and \(B_2\) are \(4\times 4\) matrices.
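These homomorphism properties are easy to confirm numerically; moreover, the conjugation formula \({\mathscr {T}}N_2=\sigma _2N_2\sigma _2\) (with the Pauli matrix \(\sigma _2\), an observation not stated in the text) makes the multiplicativity transparent. A sketch with numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = np.array([[0, -1j], [1j, 0]])

def T_script(N2):
    # (2.10); equivalently sigma2 @ N2 @ sigma2
    return np.array([[N2[1, 1], -N2[1, 0]], [-N2[0, 1], N2[0, 0]]])

def blocks(N4):
    return N4[:2, :2], N4[:2, 2:], N4[2:, :2], N4[2:, 2:]

def T1(N4):
    # (2.11): block-level rearrangement followed by entrywise conjugation
    A, B, C, D = blocks(N4)
    return np.block([[D, -C], [-B, A]]).conj()

def T2(N4):
    # (2.12): apply T_script to each 2x2 block
    A, B, C, D = blocks(N4)
    return np.block([[T_script(A), T_script(B)], [T_script(C), T_script(D)]])

def rand(n):
    return rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))

A1, B1 = rand(2), rand(2)
assert np.allclose(T_script(A1 @ B1), T_script(A1) @ T_script(B1))
assert np.allclose(T_script(A1), sigma2 @ A1 @ sigma2)

A2, B2 = rand(4), rand(4)
for T in (T1, T2):
    assert np.allclose(T(A2 @ B2), T(A2) @ T(B2))
    assert np.allclose(T(A2 + B2), T(A2) + T(B2))
```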

Lemma 2.1

The matrix Jost solutions \(\mu _\pm (k;x,t)\) have the properties:

  1. (i)

\(\mu _{-L}\) and \(\mu _{+R}\) are analytic in the upper complex k-plane \({\mathbb {C}}_+\);

  2. (ii)

\(\mu _{+L}\) and \(\mu _{-R}\) are analytic in the lower complex k-plane \({\mathbb {C}}_-\);

  3. (iii)

\(\det \mu _\pm =1\), and \(\mu _\pm \) satisfy the symmetry conditions \(\Gamma _1\mu _\pm ^\dag (k^*)\Gamma _1=\mu _\pm ^{-1}(k)\) and \(T_1\mu _\pm (k^*)=\mu _\pm (k)=T_2\mu _\pm (k)\), where “\(\dag \)” denotes the Hermitian conjugate;

  4. (iv)

    \(\mu _\pm \rightarrow I_{4\times 4}\) as \(k\rightarrow \infty \).

Proof

The \(4\times 4\) matrix functions \(\mu _{\pm }\) are written as the \(2\times 2\) block forms \((\mu _{\pm ij})_{2\times 2}\). A simple computation shows that

$$\begin{aligned} e^{ik\sigma _4(\xi -x)}U\mu _\pm e^{-ik\sigma _4(\xi -x)}= \left( \begin{array}{cc} Q\mu _{\pm 21} &{} e^{2ik(\xi -x)}Q\mu _{\pm 22}\\ -e^{-2ik(\xi -x)}Q^*\mu _{\pm 11} &{} -Q^*\mu _{\pm 12}\\ \end{array}\right) . \end{aligned}$$

Then the analytic properties of the Jost solutions \(\mu _\pm \) can be obtained from the exponential terms in the Volterra integral Eq. (2.8). Note that U has the symmetry relation \(\Gamma _1U^\dag (k^*)\Gamma _1=-U(k)\) and \(T_1U=U=T_2U\), and \(\mu _\pm ^{-1}\) satisfies

$$\begin{aligned} (\mu _\pm ^{-1})_x=-ik[\sigma _4,\mu _\pm ^{-1}]-\mu _\pm ^{-1}U, \end{aligned}$$
(2.13)

which implies the symmetry relation of \(\mu _\pm \).

Since U is traceless, we arrive at

$$\begin{aligned} \begin{aligned} (\det \mu _\pm )_x&=\mathrm {tr}[(\mathrm {adj}\mu _\pm )(\mu _\pm )_x]\\&=\mathrm {tr}\{(\mathrm {adj}\mu _\pm )[-ik(\sigma _4\mu _\pm -\mu _\pm \sigma _4)+U\mu _\pm ]\}\\&=-ik\mathrm {tr}[\mathrm {adj}\mu _\pm (\sigma _4\mu _\pm -\mu _\pm \sigma _4)]+\det (\mu _\pm )\mathrm {tr}(U)\\&=0, \end{aligned} \end{aligned}$$

which together with the asymptotic properties of \(\mu _\pm \) leads to \(\det \mu _\pm =1\), where \(\mathrm {adj}X\) is the adjugate (classical adjoint) of the matrix X, and \(\mathrm {tr}X\) is the trace of the matrix X. Then we write \(\mu \) as

$$\begin{aligned} \mu =C_0+\frac{C_1}{k}+O(k^{-2}), \end{aligned}$$
(2.14)

where \(C_0\) and \(C_1\) are independent of k. We first consider the x-part. Substituting (2.14) into (2.6) yields

$$\begin{aligned} C_0^{(12)}=C_0^{(21)}=0,\quad C_{0x}^{(11)}=0,\quad C_{0x}^{(22)}=0, \end{aligned}$$
(2.15)

where \(C_0^{(ij)}\) is the (ij) entry of the \(2\times 2\) block form of the \(4\times 4\) matrix \(C_0\). In the same way, the computation of the t-part gives that

$$\begin{aligned} C_{0t}^{(11)}=0,\quad C_{0t}^{(22)}=0. \end{aligned}$$
(2.16)

Because \(C_0\) is independent of x and t by (2.15) and (2.16), and \(\mu \rightarrow I_{4\times 4}\) as \(x\rightarrow \pm \infty \), we have \(C_0^{(11)}=I_{2\times 2}\) and \(C_0^{(22)}=I_{2\times 2}\), which means that \(C_0=I_{4\times 4}\). \(\square \)
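The properties in Lemma 2.1 can also be checked numerically by integrating (2.6) at \(t=0\). Below we approximate \(\mu _-\) with a classical fourth-order Runge–Kutta scheme for hypothetical Schwartz-class initial data (the Gaussian profiles are illustrative choices, not from the paper) and verify \(\det \mu _-=1\) and the symmetry \(\Gamma _1\mu _-^\dag (k)\Gamma _1=\mu _-^{-1}(k)\) for real k:

```python
import numpy as np

sigma4 = np.diag([1.0, 1, -1, -1]).astype(complex)
sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
Gamma1 = np.block([[sigma1, np.zeros((2, 2))], [np.zeros((2, 2)), sigma1]])

def U_of(x):
    # hypothetical initial data q1^0, q2^0 in the Schwartz space
    q1 = 0.8*np.exp(-x**2)
    q2 = 0.5*np.exp(-x**2 + 1j*x)
    Q = np.array([[q1, q2], [-q2, q1]])
    Z = np.zeros((2, 2), dtype=complex)
    return np.block([[Z, Q], [-Q.conj(), Z]])

def rhs(x, mu, k):
    # Eq. (2.6) at t = 0:  mu_x = -ik [sigma4, mu] + U mu
    return -1j*k*(sigma4 @ mu - mu @ sigma4) + U_of(x) @ mu

def jost_minus(k, L=12.0, n=6000):
    # integrate from x = -L, where mu_- is essentially I, up to x = 0
    x, h = -L, L/n
    mu = np.eye(4, dtype=complex)
    for _ in range(n):
        k1 = rhs(x, mu, k)
        k2 = rhs(x + h/2, mu + h/2*k1, k)
        k3 = rhs(x + h/2, mu + h/2*k2, k)
        k4 = rhs(x + h, mu + h*k3, k)
        mu = mu + h/6*(k1 + 2*k2 + 2*k3 + k4)
        x += h
    return mu

mu = jost_minus(k=0.7)
assert abs(np.linalg.det(mu) - 1) < 1e-7                     # det mu_- = 1
assert np.allclose(Gamma1 @ mu.conj().T @ Gamma1,
                   np.linalg.inv(mu), atol=1e-7)             # symmetry for real k
```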

Because \(\mu _\pm e^{-ik\sigma _4 x-2ik^2\sigma _4 t}\) satisfy the same differential Eqs. (2.1a) and (2.1b), they are linearly related. Hence there exists a \(4\times 4\) scattering matrix s(k) such that

$$\begin{aligned} \mu _-(k)=\mu _+(k) e^{-ik\sigma _4 x-2ik^2\sigma _4 t}s(k)e^{ik\sigma _4 x+2ik^2\sigma _4 t},\quad \det s(k)=1, \quad k\in {\mathbb {R}}. \end{aligned}$$
(2.17)

From the symmetry property of \(\mu _\pm (k)\) in Lemma 2.1, the scattering matrix s(k) has symmetry conditions

$$\begin{aligned}&\Gamma _1s^\dag (k^*)\Gamma _1=s^{-1}(k), \end{aligned}$$
(2.18)
$$\begin{aligned}&T_1s(k^*)=s(k)=T_2s(k). \end{aligned}$$
(2.19)

We express s(k) in \(2\times 2\) block form

$$\begin{aligned} s(k)=\left( \begin{array}{cc} X_1(k) &{} X_2(k)\\ X_3(k) &{} X_4(k)\\ \end{array}\right) . \end{aligned}$$
(2.20)

Evaluating (2.17) at \(t=0\), a direct calculation shows that

$$\begin{aligned} s(k)=\lim _{x\rightarrow +\infty }e^{ik\sigma _4 x}\mu _-(k;x,0)e^{-ik\sigma _4 x}, \end{aligned}$$
(2.21)

which implies

$$\begin{aligned}&X_2=\int _{-\infty }^{+\infty }Q(\xi ,0)\mu _{-22}(k;\xi ,0)e^{2ik\xi }\, \mathrm {d}\xi , \end{aligned}$$
(2.22)
$$\begin{aligned}&X_4=I_{2\times 2}-\int _{-\infty }^{+\infty }Q^*(\xi ,0)\mu _{-12}(k;\xi ,0)\, \mathrm {d}\xi . \end{aligned}$$
(2.23)

Define a matrix function by

$$\begin{aligned} \gamma (k)=X_2(k)X_4^{-1}(k) \end{aligned}$$
(2.24)

and let \(\gamma _{ij}(k)\), \((i,j=1,2)\), denote the \((i,j)\) entry of the matrix function \(\gamma (k)\).

Lemma 2.2

The function \(\gamma (k)\) has the symmetry properties

$$\begin{aligned}&\sigma _1\gamma ^\dag (k^*)\sigma _1=\gamma ^*(k^*), \end{aligned}$$
(2.25)
$$\begin{aligned}&\gamma (k)={\mathscr {T}}\gamma (k), \end{aligned}$$
(2.26)

which means that \(\gamma _{11}(k)=\gamma _{22}(k)\) and \(\gamma _{12}(k)=-\gamma _{21}(k)\).

Proof

Using (2.18), we have

$$\begin{aligned} {\left\{ \begin{array}{ll} \sigma _1X_1^\dag (k^*)\sigma _1=\left( X_1(k)-X_2(k)X_4^{-1}(k)X_3(k)\right) ^{-1},\\ \sigma _1X_3^\dag (k^*)\sigma _1=\left( X_3(k)-X_4(k)X_2^{-1}(k)X_1(k)\right) ^{-1}. \end{array}\right. } \end{aligned}$$
(2.27)

From (2.19), it is clear that

$$\begin{aligned} \left( \begin{array}{cc} X^*_4(k^*) &{} -X^*_3(k^*)\\ -X^*_2(k^*) &{} X^*_1(k^*)\\ \end{array}\right) =\left( \begin{array}{cc} X_1(k) &{} X_2(k)\\ X_3(k) &{} X_4(k)\\ \end{array}\right) =\left( \begin{array}{cc} {\mathscr {T}}X_1(k) &{} {\mathscr {T}}X_2(k)\\ {\mathscr {T}}X_3(k) &{} {\mathscr {T}}X_4(k)\\ \end{array}\right) , \end{aligned}$$
(2.28)

which implies that

$$\begin{aligned} X^*_4(k^*)=X_1(k),\quad -X^*_3(k^*)=X_2(k), \end{aligned}$$
(2.29)
$$\begin{aligned} X_j(k)={\mathscr {T}}X_j(k),\ j=1,2,3,4. \end{aligned}$$
(2.30)

From (2.27), we arrive at

$$\begin{aligned} X_3(k)X_1^{-1}(k)=&\sigma _1\{[X_3(k^*)-X_4(k^*)X_2^{-1}(k^*)X_1(k^*)]^{-1}\}^\dag [X_1(k^*)\\&\quad -X_2(k^*)X_4^{-1}(k^*)X_3(k^*)]^\dag \sigma _1\\ =&-\sigma _1(X_2(k^*)X_4^{-1}(k^*))^\dag \sigma _1\\ =&-\sigma _1\gamma ^\dag (k^*)\sigma _1. \end{aligned}$$

In the meantime, (2.29) shows that

$$\begin{aligned} X_3(k)X_1^{-1}(k)=-X_2^*(k^*)[X_4^*(k^*)]^{-1}=-\gamma ^*(k^*). \end{aligned}$$

So we get (2.25) immediately. From the definition of \(\gamma (k)\) and (2.30),

$$\begin{aligned} \gamma (k)=X_2(k)X_4^{-1}(k)=[{\mathscr {T}}X_2(k)][{\mathscr {T}}X_4^{-1}(k)]={\mathscr {T}}\gamma (k). \end{aligned}$$

\(\square \)

Let

$$\begin{aligned} M(k;x,t)= {\left\{ \begin{array}{ll} \left( \mu _{-L}(k)X_1^{-1}(k),\mu _{+R}(k)\right) , &{} k\in {\mathbb {C}}_+,\\ \left( \mu _{+L}(k),\mu _{-R}(k)X_4^{-1}(k)\right) , &{} k\in {\mathbb {C}}_-.\\ \end{array}\right. } \end{aligned}$$
(2.31)

From the analytic properties of \(\mu _\pm \) in Lemma 2.1 and (2.17), it is easy to see that \(X_1\) and \(X_4\) are analytic in \({\mathbb {C}}_+\) and \({\mathbb {C}}_-\), respectively. Then \(M(k;x,t)\) is analytic for \(k\in {\mathbb {C}}\backslash {\mathbb {R}}\), and \(M(k;x,t)\rightarrow I_{4\times 4}\) as \(k\rightarrow \infty \) because of the asymptotics of \(\mu _\pm \).

Fig. 1 The contour on \({\mathbb {R}}\)

Theorem 2.1

The piecewise-analytic function \(M(k;x,t)\) determined by (2.31) satisfies the RH problem

$$\begin{aligned} {\left\{ \begin{array}{ll} M_+(k;x,t)=M_-(k;x,t)J(k;x,t),\quad k\in {\mathbb {R}},\\ M(k;x,t)\rightarrow I_{4\times 4},\quad k\rightarrow \infty , \end{array}\right. } \end{aligned}$$
(2.32)

where \(M_+(k;x,t)\) and \(M_-(k;x,t)\) denote the limiting values as k approaches the contour \({\mathbb {R}}\) from the left and the right sides of the contour, respectively; the oriented contour on \({\mathbb {R}}\) is shown in Fig. 1,

$$\begin{aligned} J(k;x,t)= & {} \left( \begin{array}{cc} I_{2\times 2}+\gamma (k)\sigma _1\gamma ^\dag (k^*)\sigma _1 &{} -\gamma (k)e^{-2it\theta }\\ -\sigma _1\gamma ^\dag (k^*)\sigma _1 e^{2it\theta } &{} I_{2\times 2}\\ \end{array}\right) , \end{aligned}$$
(2.33)
$$\begin{aligned} \theta (k)= & {} \frac{x}{t}k+2k^2, \end{aligned}$$
(2.34)

and the function \(\gamma (k)\) lies in the Schwartz space and satisfies the following assumptions for \(k\in {\mathbb {R}}\):

$$\begin{aligned} |\gamma _{12}(k)|<1,\quad |\gamma _{12}(k)|\leqslant |\gamma _{11}(k)|\leqslant \mathrm {constant},\quad \gamma _{11}(k)\gamma _{12}^*(k)+\gamma _{11}^*(k)\gamma _{12}(k)\ne 0. \end{aligned}$$
(2.35)

In addition, the solution of RH problem (2.32) exists and is unique, and

$$\begin{aligned} Q(x,t)=2i\lim _{k\rightarrow \infty }[k(M(k;x,t))_{12}] \end{aligned}$$
(2.36)

solves the Cauchy problem of the new type CNLS Eq. (1.1).

Proof

Combining the expression (2.31) of \(M(k;x,t)\) with the scattering relation (2.17), we can obtain the jump condition and the corresponding RH problem (2.32) by straightforward calculations. Note that \(M(k;x,t)\) is analytic off the contour and satisfies the normalization condition. According to the Vanishing Lemma [1], the solution of the RH problem (2.32) exists and is unique because \((J(k)+J^\dag (k))/2\) is positive definite. Notice that \(M(k;x,t)\) has the asymptotic expansion

$$\begin{aligned} M(k;x,t)=I_{4\times 4}+\frac{M_1(x,t)}{k}+\frac{M_2(x,t)}{k^2}+O(k^{-3}). \end{aligned}$$
(2.37)

Formula (2.36) is obtained by substituting the asymptotic expansion (2.37) into (2.6) and comparing the coefficients at order \(k^0\). \(\square \)

Note. The function \(\gamma (k)\) has the symmetry relation (2.25), so the jump matrix J(k) can also be written as

$$\begin{aligned} J= \left( \begin{array}{cc} I_{2\times 2}+\gamma (k)\gamma ^*(k^*) &{} -\gamma (k)e^{-2it\theta }\\ -\gamma ^*(k^*) e^{2it\theta } &{} I_{2\times 2}\\ \end{array}\right) , \end{aligned}$$

which will also be used in the remainder of this paper.

3 Long-Time Asymptotic Behavior

In this section, based on the basic RH problem (2.32), we extend the nonlinear steepest descent method to the long-time asymptotic behavior of the solution of the initial value problem of the new type CNLS Eq. (1.1). Setting \(\mathrm {d}\theta /\mathrm {d}k=0\), we obtain the stationary point \(k_0=-x/(4t)\). The essence of the nonlinear steepest descent method is that the basic RH problem is deformed through several steps into a model RH problem. By solving the model RH problem, we obtain the long-time asymptotics of solutions of the Cauchy problem of the new type CNLS Eq. (1.1).
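The stationary point and the quadratic normal form of the phase (2.34) can be confirmed symbolically:

```python
import sympy as sp

k, x, t = sp.symbols('k x t', real=True)
theta = (x/t)*k + 2*k**2                    # phase function (2.34)
k0 = sp.solve(sp.diff(theta, k), k)[0]      # stationary point of theta
assert sp.simplify(k0 + x/(4*t)) == 0       # k0 = -x/(4t)
# completing the square: theta = 2(k - k0)^2 - 2 k0^2,
# the form convenient for the steepest-descent contours through k0
assert sp.simplify(theta - (2*(k - k0)**2 - 2*k0**2)) == 0
```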

3.1 Reorientation of the Jump Contour

The key to the nonlinear steepest descent method is to find factorizations of the jump matrix. There are two different triangular factorizations of the jump matrix \(J(k;x,t)\) of the basic RH problem (2.32). To give these two decompositions a unified form, we introduce two auxiliary RH problems. By reorienting the contour, we first convert the basic RH problem into a new RH problem on \({\mathbb {R}}\).

Notice that the two triangular factorizations of \(J(k;x,t)\) are

$$\begin{aligned} J={\left\{ \begin{array}{ll} \left( \begin{array}{cc} I_{2\times 2} &{} -e^{-2it\theta }\gamma (k)\\ 0 &{} I_{2\times 2}\\ \end{array}\right) \left( \begin{array}{cc} I_{2\times 2} &{} 0\\ -e^{2it\theta }\sigma _1\gamma ^{\dag }(k^{*})\sigma _1 &{} I_{2\times 2}\\ \end{array}\right) ,\\ \left( \begin{array}{cc} I_{2\times 2} &{} 0\\ -e^{2it\theta }\sigma _1\gamma ^{\dag }(k^{*})\sigma _1D_1^{-1} &{} I_{2\times 2}\\ \end{array}\right) \left( \begin{array}{cc} D_1 &{} 0\\ 0 &{} D_2\\ \end{array}\right) \left( \begin{array}{cc} I_{2\times 2} &{} -e^{-2it\theta }D_1^{-1}\gamma (k)\\ 0 &{} I_{2\times 2}\\ \end{array}\right) , \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} D_1=I_{2\times 2}+\gamma (k)\sigma _1\gamma ^{\dag }(k^{*})\sigma _1,\quad D_2=\left( I_{2\times 2}+\sigma _1\gamma ^{\dag }(k^{*})\sigma _1\gamma (k)\right) ^{-1}. \end{aligned}$$
(3.1)
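Both triangular factorizations can be checked numerically for a sample \(\gamma \) respecting the symmetry (2.26); the numerical values of \(\gamma \) and of the unimodular factor standing for \(e^{2it\theta }\) below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

a, b = rng.standard_normal(2) + 1j*rng.standard_normal(2)
gamma = np.array([[a, b], [-b, a]])            # respects (2.26)
g = sigma1 @ gamma.conj().T @ sigma1           # sigma1 gamma^dag sigma1
e = np.exp(0.45j)                              # stands for e^{2it theta}, |e| = 1

J = np.block([[I2 + gamma @ g, -gamma/e], [-g*e, I2]])

# first (upper-lower) factorization
up = np.block([[I2, -gamma/e], [Z2, I2]])
lo = np.block([[I2, Z2], [-g*e, I2]])
assert np.allclose(J, up @ lo)

# second factorization, with D1, D2 as in (3.1)
D1 = I2 + gamma @ g
D2 = np.linalg.inv(I2 + g @ gamma)
lo2 = np.block([[I2, Z2], [-(e*g) @ np.linalg.inv(D1), I2]])
mid = np.block([[D1, Z2], [Z2, D2]])
up2 = np.block([[I2, -(np.linalg.inv(D1) @ gamma)/e], [Z2, I2]])
assert np.allclose(J, lo2 @ mid @ up2)
```

The second check relies on the push-through identity \(\sigma _1\gamma ^\dag \sigma _1D_1^{-1}\gamma +D_2=I_{2\times 2}\), which the numerics confirm implicitly.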

We introduce two \(2\times 2\) matrix functions \(\delta _1(k)\) and \(\delta _2(k)\) that satisfy the RH problems

$$\begin{aligned} {\left\{ \begin{array}{ll} \delta _{1+}(k)=\delta _{1-}(k)(I_{2\times 2}+\gamma (k)\sigma _1\gamma ^{\dag }(k^{*})\sigma _1),\quad k\in (-\infty ,k_0),\\ \delta _1(k)\rightarrow I_{2\times 2},\quad k\rightarrow \infty , \end{array}\right. } \end{aligned}$$
(3.2)

and

$$\begin{aligned} {\left\{ \begin{array}{ll} \delta _{2+}(k)=(I_{2\times 2}+\sigma _1\gamma ^{\dag }(k^{*})\sigma _1\gamma (k))\delta _{2-}(k),\quad k\in (-\infty ,k_0),\\ \delta _2(k)\rightarrow I_{2\times 2},\quad k\rightarrow \infty , \end{array}\right. } \end{aligned}$$
(3.3)

respectively. The solutions of the above two RH problems exist and are unique because of the Vanishing Lemma [1]. For convenience, we introduce some notation: for any matrix function \(A(\cdot )\), we define \(|A|=\sqrt{\mathrm {tr}(A^\dag A)}\) and \(\Vert A(\cdot )\Vert _p=\Vert |A(\cdot )|\Vert _p\).
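As remarked in the introduction, the determinants of (3.2) and (3.3) satisfy scalar jump conditions, which the Plemelj formula solves explicitly. The sketch below solves a scalar model problem of the same shape, \(\delta _+=\delta _-(1+|r|^2)\) on \((-\infty ,k_0)\) with \(\delta \rightarrow 1\) at infinity, for a hypothetical jump function r, and checks the jump condition and the symmetry \(\delta (k)\overline{\delta ({\bar{k}})}=1\) numerically:

```python
import numpy as np
from scipy.integrate import quad

k0 = 0.5
r = lambda s: 0.6*np.exp(-s**2)               # hypothetical scalar spectral data
f = lambda s: np.log(1 + abs(r(s))**2)        # log of the scalar jump

def delta(k):
    """Plemelj formula: delta(k) = exp( (2 pi i)^{-1} int_{-inf}^{k0} f(s)/(s-k) ds )."""
    kr, ki = k.real, k.imag
    pts = [kr] if -8 < kr < k0 else None      # f is negligible below s = -8
    re = quad(lambda s: f(s)*(s - kr)/((s - kr)**2 + ki**2), -8, k0,
              points=pts, limit=300)[0]
    im = quad(lambda s: f(s)*ki/((s - kr)**2 + ki**2), -8, k0,
              points=pts, limit=300)[0]
    return np.exp((re + 1j*im)/(2j*np.pi))

kt, eps = -0.5, 1e-2
jump = delta(complex(kt, eps))/delta(complex(kt, -eps))
assert abs(jump - (1 + abs(r(kt))**2)) < 5e-2          # jump across (-inf, k0)
z = complex(-0.5, 0.4)
assert abs(delta(z)*np.conj(delta(np.conj(z))) - 1) < 1e-6
```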

Lemma 3.1

The two functions \(\delta _j(k)\) have the symmetry relations

$$\begin{aligned} \delta _j^{-1}(k)=\sigma _1\delta _j^\dag (k^*)\sigma _1, \quad {\mathscr {T}}\delta _j(k)=\delta _j(k),\quad j=1,2. \end{aligned}$$
(3.4)

In addition, \(|\delta _1(k)|\) and \(|\delta _2(k)|\) are all bounded for \(k\in {\mathbb {C}}\).

Proof

We first consider \(\delta _1(k)\). For \(k\in (-\infty ,k_0)\), from (3.2), we have

$$\begin{aligned} \lim _{\epsilon \rightarrow 0^+}\delta _1(k+i\epsilon )=\lim _{\epsilon \rightarrow 0^+}\delta _1(k-i\epsilon )(I_{2\times 2}+\gamma (k)\sigma _1\gamma ^{\dag }(k^{*})\sigma _1). \end{aligned}$$
(3.5)

Let \(f_1(k)=(\sigma _1\delta _1^\dag (k^*)\sigma _1)^{-1}\), then

$$\begin{aligned} f_{1+}(k)=&\lim _{\epsilon \rightarrow 0^+}f_{1}(k+i\epsilon )\\ =&\lim _{\epsilon \rightarrow 0^+}(\sigma _1\delta _1^\dag (k-i\epsilon )\sigma _1)^{-1}\\ =&\lim _{\epsilon \rightarrow 0^+}\sigma _1(\delta _1^\dag (k+i\epsilon ))^{-1}\sigma _1\sigma _1(I_{2\times 2}+\sigma _1\gamma (k)\sigma _1\gamma ^{\dag }(k^{*}))\sigma _1\\ =&\lim _{\epsilon \rightarrow 0^+}(\sigma _1\delta _1^\dag (k+i\epsilon )\sigma _1)^{-1}(I_{2\times 2}+\gamma (k)\sigma _1\gamma ^{\dag }(k^{*})\sigma _1)\\ =&f_{1-}(k)(I_{2\times 2}+\gamma (k)\sigma _1\gamma ^{\dag }(k^{*})\sigma _1). \end{aligned}$$

Moreover, let \(f_2(k)={\mathscr {T}}\delta _1(k)\),

$$\begin{aligned} f_{2+}(k)=&\lim _{\epsilon \rightarrow 0^+}f_{2}(k+i\epsilon )\\ =&\lim _{\epsilon \rightarrow 0^+}{\mathscr {T}}\delta _1(k+i\epsilon )\\ =&\lim _{\epsilon \rightarrow 0^+}{\mathscr {T}}[\delta _1(k-i\epsilon )(I_{2\times 2}+\gamma (k)\sigma _1\gamma ^{\dag }(k^{*})\sigma _1)]\\ =&\lim _{\epsilon \rightarrow 0^+}[{\mathscr {T}}\delta _1(k-i\epsilon )](I_{2\times 2}+\gamma (k)\sigma _1\gamma ^{\dag }(k^{*})\sigma _1)\\ =&f_{2-}(k)(I_{2\times 2}+\gamma (k)\sigma _1\gamma ^{\dag }(k^{*})\sigma _1). \end{aligned}$$

By uniqueness, \(f_1(k)=(\sigma _1\delta _1^\dag (k^*)\sigma _1)^{-1}=\delta _1(k)\) and \(f_2(k)={\mathscr {T}}\delta _1(k)=\delta _1(k)\). Similarly, the symmetry properties of \(\delta _2(k)\) can be obtained, from which we obtain (3.4). For \(\delta _{1+}\), a simple calculation shows that

$$\begin{aligned} \sigma _1\delta ^\dag _{1+}(k)\sigma _1\delta _{1+}(k)=I_{2\times 2}+\gamma (k)\gamma ^*(k^{*}). \end{aligned}$$
(3.6)

The above equality implies that

$$\begin{aligned}&0\leqslant |\delta _{1+}^{(11)}(k)|^2-|\delta _{1+}^{(12)}(k)|^2\leqslant \mathrm {const},\end{aligned}$$
(3.7)
$$\begin{aligned}&0<|(\delta _{1+}^{(11)}(k))^*\delta _{1+}^{(12)}(k)+\delta _{1+}^{(11)}(k)(\delta _{1+}^{(12)}(k))^*|\leqslant \mathrm {const}, \end{aligned}$$
(3.8)

where \(\delta _{1+}^{(ij)}(k)\), \((i,j=1,2)\), denote the \((i,j)\) entries of the matrix \(\delta _{1+}(k)\). Indeed, if \(|\delta _{1+}^{(12)}(k)|\) were unbounded, then (3.8) would fail. So \(|\delta _{1+}^{(12)}(k)|\) is bounded, and (3.7) shows that \(|\delta _{1+}^{(11)}(k)|\) is also bounded. To sum up, \(|\delta _{1+}(k)|\) is bounded for \(k\in {\mathbb {R}}\). Similarly, \(|\delta _{1-}(k)|\), \(|\delta _{2+}(k)|\) and \(|\delta _{2-}(k)|\) are all bounded for \(k\in {\mathbb {R}}\). Hence, by the maximum principle, we have for \(j=1,2\),

$$\begin{aligned} \vert \delta _j(k)\vert \leqslant \mathrm {const}<\infty ,\quad k\in {\mathbb {C}}. \end{aligned}$$
(3.9)

\(\square \)

We now define a matrix spectral function

$$\begin{aligned} \rho (k)= {\left\{ \begin{array}{ll} \gamma (k), &{} k\in [k_0,+\infty ),\\ -\left( I_{2\times 2}+\gamma (k)\sigma _1\gamma ^{\dag }(k^{*})\sigma _1\right) ^{-1}\gamma (k), &{} k\in (-\infty ,k_0),\\ \end{array}\right. } \end{aligned}$$
(3.10)

and

$$\begin{aligned} M^\Delta (k;x,t)=M(k;x,t)\Delta ^{-1}(k), \end{aligned}$$

where

$$\begin{aligned} \Delta (k)=\left( \begin{array}{cc} \delta _1(k) &{} 0\\ 0 &{} \delta _2^{-1}(k)\\ \end{array}\right) . \end{aligned}$$
Fig. 2 The oriented contour on \({\mathbb {R}}\)

We reverse the orientation for \(k\in [k_0,+\infty )\) as shown in Fig. 2; then \(M^\Delta (k;x,t)\) satisfies the RH problem on the reoriented contour

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}M^\Delta _+(k;x,t)=M^\Delta _-(k;x,t)J^\Delta (k;x,t),\quad k\in {\mathbb {R}},\\ &{}M^\Delta (k;x,t)\rightarrow I_{4\times 4},\quad k\rightarrow \infty ,\\ \end{array}\right. } \end{aligned}$$
(3.11)

where the jump matrix \(J^\Delta (k;x,t)\) has a decomposition

$$\begin{aligned} \begin{aligned} J^\Delta (k;x,t)=&\left( \begin{array}{cc} I_{2\times 2} &{} 0\\ e^{2it\theta }\delta _{2-}^{-1}(k)\rho ^\dag (k^*)\delta _{1-}^{-1}(k) &{} I_{2\times 2}\\ \end{array}\right) \left( \begin{array}{cc} I_{2\times 2} &{} e^{-2it\theta }\delta _{1+}(k)\rho (k)\delta _{2+}(k)\\ 0 &{} I_{2\times 2}\\ \end{array}\right) \\ \triangleq&(b_-)^{-1}b_+. \end{aligned} \end{aligned}$$
(3.12)

3.2 Extend to the Augmented Contour

In this subsection, the spectral function \(\rho (k)\) is the main object of study. We give decompositions of the spectral function \(\rho (k)\), which reveal that \(\rho (k)\) consists of an analytic part and a small non-analytic part. Based on these decompositions, we can split the triangular matrices \(b_\pm \) in (3.12) into two parts. The boundary of the domains of analyticity of the analytic parts corresponds to an augmented contour, on which we construct a new RH problem.

For convenience, we introduce \(L=\{k=k_0+ue^{\frac{3\pi i}{4}}:u\in (-\infty ,+\infty )\}\) and the notation \(A\lesssim B\), meaning that there exists a constant \(C>0\) such that \(|A|\leqslant CB\).

Theorem 3.1

The matrix spectral function \(\rho (k)\) has the decompositions

$$\begin{aligned} \rho (k)=h_1(k)+h_2(k)+R(k),\quad k\in {\mathbb {R}}, \end{aligned}$$
(3.13)

where R(k) is a piecewise-rational function, \(h_2(k)\) has an analytic continuation to L, and they admit the following estimates

$$\begin{aligned} \vert e^{-2it\theta (k)}h_1(k)\vert \lesssim \frac{1}{(1+\vert k\vert ^{2})t^{l}}, \quad k\in {\mathbb {R}}, \end{aligned}$$
(3.14)
$$\begin{aligned} \vert e^{-2it\theta (k)}h_2(k)\vert \lesssim \frac{1}{(1+\vert k\vert ^{2})t^{l}}, \quad k\in L, \end{aligned}$$
(3.15)

for an arbitrary positive integer l. The Schwartz conjugate of \(\rho (k)\) satisfies

$$\begin{aligned} \rho ^\dag (k^*)=h_1^\dag (k^*)+h_2^\dag (k^*)+R^\dag (k^*), \end{aligned}$$
(3.16)

and \(e^{2it\theta (k)}h_1^\dag (k^*)\) and \(e^{2it\theta (k)}h_2^\dag (k^*)\) have the same estimates on \({\mathbb {R}}\cup L^*\).

Proof

We first consider the function \(\rho (k)\) for \(k\in [k_0,+\infty )\); the case \(k\in (-\infty ,k_0)\) is treated similarly. In order to approximate \(\rho (k)\) by a rational function with well-controlled errors, we expand \((k-i)^{m+5}\rho (k)\) in a Taylor series around \(k_0\),

$$\begin{aligned}&(k-i)^{m+5}\rho (k)=\mu _0+\mu _1(k-k_0)+\cdots +\mu _m(k-k_0)^m\nonumber \\&\quad +\frac{1}{m!}\int _{k_0}^k((\xi -i)^{m+5}\rho (\xi ))^{(m+1)}(k-\xi )^m\, \mathrm {d}\xi . \end{aligned}$$
(3.17)

Then we define

$$\begin{aligned}&R(k)=\dfrac{\sum _{j=0}^m\mu _j(k-k_0)^j}{(k-i)^{m+5}}, \end{aligned}$$
(3.18)
$$\begin{aligned}&h(k)=\rho (k)-R(k). \end{aligned}$$
(3.19)

Then the following result is immediate:

$$\begin{aligned} \left. \frac{\mathrm {d}^j\rho (k)}{\mathrm {d}k^j}\right| _{k=k_0}=\left. \frac{\mathrm {d}^jR(k)}{\mathrm {d}k^j}\right| _{k=k_0},\quad 0\leqslant j\leqslant m. \end{aligned}$$
(3.20)

For convenience, we assume that \(m=4p+1, p\in {\mathbb {Z}}_+\). Set

$$\begin{aligned} \alpha (k)=\frac{(k-k_0)^p}{(k-i)^{p+2}}. \end{aligned}$$
(3.21)

By the Fourier inversion, we arrive at

$$\begin{aligned} (h/\alpha )(k)=\int _{-\infty }^{+\infty }e^{is\theta }\widehat{(h/\alpha )}(s)\, \bar{\mathrm {d}}s, \end{aligned}$$
(3.22)

where

$$\begin{aligned}&\widehat{(h/\alpha )}(s)=\int _{k_0}^{+\infty }e^{-is\theta }(h/\alpha )(k)\, \bar{\mathrm {d}}\theta (k), \end{aligned}$$
(3.23)
$$\begin{aligned}&\bar{\mathrm {d}}s=\frac{\mathrm {d}s}{\sqrt{2\pi }},\quad \bar{\mathrm {d}}\theta =\frac{\mathrm {d}\theta }{\sqrt{2\pi }}. \end{aligned}$$
(3.24)

From (3.17), (3.19) and (3.21), it is easy to see that

$$\begin{aligned} (h/\alpha )(k)=\frac{(k-k_0)^{m+1-p}}{(k-i)^{m+3-p}}g(k,k_0)=\frac{(k-k_0)^{3p+2}}{(k-i)^{3p+4}}g(k,k_0), \end{aligned}$$
(3.25)

where

$$\begin{aligned} g(k,k_0)=\frac{1}{m!}\int _0^1((k_0+u(k-k_0)-i)^{m+5}\rho (k_0+u(k-k_0)))^{(m+1)}(1-u)^m\, \mathrm {d}u.\nonumber \\ \end{aligned}$$
(3.26)

Noting that

$$\begin{aligned} \left| \frac{\mathrm {d}^j}{\mathrm {d}k^j}g(k,k_0)\right| \lesssim 1, \end{aligned}$$
(3.27)

we have

$$\begin{aligned}&\int _{k_0}^{+\infty }\left| \left( \frac{\mathrm {d}}{\mathrm {d}\theta }\right) ^j(h/\alpha )(k(\theta ))\right| ^2\, |\bar{\mathrm {d}}\theta |\\&\quad =\int _{k_0}^{+\infty }\left| \left( \frac{1}{4k-4k_0}\frac{\mathrm {d}}{\mathrm {d}k}\right) ^j(h/\alpha )(k)\right| ^2(4k-4k_0)\, \bar{\mathrm {d}}k\\&\quad \lesssim \int _{k_0}^{+\infty }\left| \frac{(k-k_0)^{3p+2-2j}}{(k-i)^{3p+4}}\right| ^2(k-k_0)\, \bar{\mathrm {d}}k\lesssim 1,\quad \mathrm {for}\quad 0\leqslant j\leqslant \frac{3p+2}{2}. \end{aligned}$$

Using the Plancherel formula [40],

$$\begin{aligned} \int _{-\infty }^{+\infty }(1+s^2)^j\left| \widehat{(h/\alpha )}(s)\right| ^2\, \mathrm {d}s \lesssim 1, \end{aligned}$$
(3.28)

we split h(k) into two parts

$$\begin{aligned} \begin{aligned} h(k)=&\alpha (k)\int _{t}^{+\infty }e^{is\theta }\widehat{(h/\alpha )}(s)\, \bar{\mathrm {d}}s+\alpha (k)\int _{-\infty }^{t}e^{is\theta }\widehat{(h/\alpha )}(s)\, \bar{\mathrm {d}}s\\ \triangleq&h_1(k)+h_2(k). \end{aligned} \end{aligned}$$
(3.29)
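As a numerical aside (not part of the proof), the Plancherel identity invoked in (3.28) can be illustrated with a discrete Fourier transform; the Gaussian test function and the grid below are illustrative choices only.

```python
import numpy as np

# Discrete illustration of Plancherel: ||f||_{L^2} equals ||f_hat||_{L^2}.
n, L = 4096, 200.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
dx = x[1] - x[0]
f = np.exp(-x**2/2)                          # Gaussian test function

# DFT scaled to approximate the unitary Fourier transform
fhat = np.fft.fft(f)*dx/np.sqrt(2*np.pi)
dk = 2*np.pi/(n*dx)

norm_f = np.sum(np.abs(f)**2)*dx
norm_fhat = np.sum(np.abs(fhat)**2)*dk
assert abs(norm_f - norm_fhat) < 1e-8
```

With this scaling the discrete Parseval identity holds exactly, so the two norms agree to floating-point precision.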

For \(k\geqslant k_0\), we have

$$\begin{aligned} |e^{-2it\theta }h_1(k)|&\leqslant |\alpha (k)|\int _t^{+\infty }\left| \widehat{(h/\alpha )}(s)\right| \, \bar{\mathrm {d}}s\\&\lesssim \left| \frac{(k-k_0)^p}{(k-i)^{p+2}}\right| \left( \int _{t}^{+\infty }(1+s^2)^{-r}\, \mathrm {d}s\right) ^{1/2}\left( \int _{t}^{+\infty }(1+s^2)^{r}\left| \widehat{(h/\alpha )}(s)\right| ^2\, \mathrm {d}s\right) ^{1/2}\\&\lesssim t^{-r+\frac{1}{2}}, \quad \mathrm {for}\quad r\leqslant \frac{3p+2}{2}. \end{aligned}$$
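The last line uses the elementary tail bound \(\int _t^{+\infty }(1+s^2)^{-r}\, \mathrm {d}s\lesssim t^{1-2r}\). For \(r=1\) this can be checked exactly, since the integral equals \(\pi /2-\arctan t=\arctan (1/t)\leqslant 1/t\):

```python
import math

# Tail bound behind the Cauchy-Schwarz step: for r = 1,
# ∫_t^∞ (1+s²)^{-1} ds = π/2 - arctan(t) = arctan(1/t) <= 1/t.
for t in (10.0, 100.0, 1000.0):
    tail = math.pi/2 - math.atan(t)
    assert tail <= 1.0/t
```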

Noting (2.34), we denote by \(k_R\) and \(k_I\) the real and imaginary parts of the complex parameter k,

$$\begin{aligned} \mathrm {Re}(i\theta )&=\mathrm {Re}(2i(k_R+ik_I)^2-4ik_0(k_R+ik_I))\\&=4k_I(k_0-k_R). \end{aligned}$$

It is obvious that

  (i) \(\mathrm {Re}(i\theta )>0\) when \(\mathrm {Re}k>k_0, \mathrm {Im}k<0\) or \(\mathrm {Re}k<k_0, \mathrm {Im}k>0\);

  (ii) \(\mathrm {Re}(i\theta )<0\) when \(\mathrm {Re}k>k_0, \mathrm {Im}k>0\) or \(\mathrm {Re}k<k_0, \mathrm {Im}k<0\).
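The sign structure above can be checked numerically (an illustrative sanity check, not part of the argument; the sample values of \(k_0\) and k are arbitrary):

```python
# Verify Re(i*theta) = 4 k_I (k_0 - k_R) for theta = 2k^2 - 4 k_0 k,
# the phase from (2.34), at a few arbitrary sample points.
k0 = 0.7
for kR, kI in ((1.3, -0.4), (0.2, 0.9), (-1.0, -2.0), (2.0, 1.0)):
    k = complex(kR, kI)
    theta = 2*k*k - 4*k0*k
    re_itheta = (1j*theta).real
    assert abs(re_itheta - 4*kI*(k0 - kR)) < 1e-12
    # case (i): Re k > k0, Im k < 0 (or the mirrored quadrant) gives Re(i*theta) > 0
    if (kR > k0) == (kI < 0):
        assert re_itheta > 0
    else:
        assert re_itheta < 0
```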

Then \(h_2(k)\) has an analytic continuation to the line \(\{k=k_0+ue^{-\frac{\pi i}{4}}:u\in [0,+\infty )\}\). On this line, we have

$$\begin{aligned} |e^{-2it\theta }h_2(k)|&\leqslant e^{-2t\mathrm {Re}(i\theta )}\left| \frac{(k-k_0)^p}{(k-i)^{p+2}}\right| \int _{-\infty }^t\left| e^{is\theta }\widehat{(h/\alpha )}(s)\right| \, \bar{\mathrm {d}}s\\&\lesssim \frac{u^pe^{-t\mathrm {Re}(i\theta )}}{|k-i|^{p+2}}\int _{-\infty }^te^{(s-t)\mathrm {Re}(i\theta )}\left| \widehat{(h/\alpha )}(s)\right| \, \mathrm {d}s\\&\lesssim \frac{u^pe^{-t\mathrm {Re}(i\theta )}}{|k-i|^{p+2}}\left( \int _{-\infty }^t(1+s^2)^{-1}\, \mathrm {d}s\right) ^{1/2}\left( \int _{-\infty }^t(1+s^2)\left| \widehat{(h/\alpha )}(s)\right| ^2\, \mathrm {d}s\right) ^{1/2}\\&\lesssim \frac{u^pe^{-t\mathrm {Re}(i\theta )}}{|k-i|^{p+2}}, \end{aligned}$$

where

$$\begin{aligned} \theta (k)=2(k-k_0)^2-2k_0^2. \end{aligned}$$
(3.30)

Then we arrive at \(\mathrm {Re}(i\theta )=2u^2\) and

$$\begin{aligned} |e^{-2it\theta }h_2(k)|&\lesssim \frac{u^pe^{-2u^2t}}{|k-i|^{p+2}}=\frac{(\sqrt{t}u)^pe^{-2u^2t}}{|k-i|^{p+2}t^{p/2}}\lesssim \frac{1}{|k-i|^{2}t^{p/2}}. \end{aligned}$$
(3.31)

This accomplishes the estimates of \(h_1(k)\) and \(h_2(k)\) for \(k\geqslant k_0\); the case \(k<k_0\) is similar. To sum up, we let l be an arbitrary positive integer and choose p such that \((3p-1)/2\) and p/2 are greater than l. Then we can obtain (3.14) and (3.15). \(\square \)
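The key step in (3.31) is the uniform bound \((\sqrt{t}u)^pe^{-2u^2t}\lesssim 1\), which holds because \(x^pe^{-2x^2}\) attains its supremum \((p/4)^{p/2}e^{-p/2}\) at \(x=\sqrt{p}/2\). A quick numerical check (the value of p is illustrative):

```python
import math

p = 3                                   # illustrative exponent
M = (p/4)**(p/2)*math.exp(-p/2)         # sup of x^p e^{-2x^2} over x >= 0
for t in (1.0, 10.0, 1000.0):
    for i in range(1, 500):
        u = i/100
        # substituting x = sqrt(t)*u shows the product is bounded by M
        assert (math.sqrt(t)*u)**p*math.exp(-2*t*u*u) <= M + 1e-12
```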

Note. From the symmetry relation of \(\gamma (k)\) (2.25) and the definition of function \(\rho (k)\), for \(k\geqslant k_0\),

$$\begin{aligned} \sigma _1\rho ^\dag (k^*)\sigma _1=\sigma _1\gamma ^\dag (k^*)\sigma _1=\gamma ^*(k^*)=\rho ^*(k^*). \end{aligned}$$

For \(k<k_0\),

$$\begin{aligned} \sigma _1\rho ^\dag (k^*)\sigma _1=&-\sigma _1\left[ (I_{2\times 2}+\gamma (k^*)\sigma _1\gamma ^{\dag }(k)\sigma _1)^{-1}\gamma (k^*)\right] ^\dag \sigma _1\\ =&-\sigma _1\gamma ^\dag (k^*)\sigma _1\left[ \sigma _1(I_{2\times 2}+\sigma _1\gamma (k^*)\sigma _1\gamma ^{\dag }(k))\sigma _1\right] ^{-1}\\ =&-\gamma ^*(k^*)\left[ I_{2\times 2}+\gamma (k)\gamma ^*(k^*)\right] ^{-1}\\ =&-\left[ I_{2\times 2}+\gamma ^*(k^*)\gamma (k)\right] ^{-1}\gamma ^*(k^*)\\ =&\rho ^*(k^*). \end{aligned}$$
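This symmetry can be checked numerically at a real point k (so that \(k^*=k\)); the random \(2\times 2\) matrix below, with equal diagonal entries so that \(\sigma _1\gamma ^\dag \sigma _1=\gamma ^*\), is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)

# Entries [[a, b], [c, a]] satisfy sigma1 g^dagger sigma1 = g^* at real k
a, b, c = rng.standard_normal(3) + 1j*rng.standard_normal(3)
g = np.array([[a, b], [c, a]])
assert np.allclose(s1 @ g.conj().T @ s1, g.conj())

# rho = -(I + g sigma1 g^dagger sigma1)^{-1} g, the k < k_0 branch used above
I2 = np.eye(2)
rho = -np.linalg.solve(I2 + g @ s1 @ g.conj().T @ s1, g)
assert np.allclose(s1 @ rho.conj().T @ s1, rho.conj())
```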

Noting the definitions of \(h_1(k)\), \(h_2(k)\) and R(k) in (3.18) and (3.29), the symmetry property of \(\rho (k)\) yields

$$\begin{aligned}&\sigma _1h_1^\dag (k^*)\sigma _1=h_1^*(k^*), \end{aligned}$$
(3.32)
$$\begin{aligned}&\sigma _1h_2^\dag (k^*)\sigma _1=h_2^*(k^*),\end{aligned}$$
(3.33)
$$\begin{aligned}&\sigma _1R^\dag (k^*)\sigma _1=R^*(k^*), \end{aligned}$$
(3.34)

which will be used in the following.

Based on the result of Theorem 3.1, \(b_\pm \) have the decompositions

$$\begin{aligned} \begin{aligned} b_{+}=b^{o}_{+}b^{a}_{+}\triangleq (I_{4\times 4}+\omega ^{o}_{+})(I_{4\times 4}+\omega ^{a}_{+});\\ b_{-}=b^{o}_{-}b^{a}_{-}\triangleq (I_{4\times 4}-\omega ^{o}_{-})(I_{4\times 4}-\omega ^{a}_{-}), \end{aligned} \end{aligned}$$
(3.35)

where

$$\begin{aligned}&b^{o}_{+}= \left( \begin{array}{cc} I_{2\times 2} &{} e^{-2it\theta }\delta _{1+}(k)h_1(k)\delta _{2+}(k)\\ 0 &{} I_{2\times 2}\\ \end{array}\right) ,\\&b^{a}_{+}= \left( \begin{array}{cc} I_{2\times 2} &{} e^{-2it\theta }\delta _{1+}(k)(h_2(k)+R(k))\delta _{2+}(k)\\ 0 &{} I_{2\times 2}\\ \end{array}\right) ;\\&b^{o}_{-}= \left( \begin{array}{cc} I_{2\times 2} &{} 0\\ -e^{2it\theta }\delta _{2-}^{-1}(k)\sigma _1h_1^\dag (k^*)\sigma _1\delta _{1-}^{-1}(k) &{} I_{2\times 2}\\ \end{array}\right) ,\\&b^{a}_{-}= \left( \begin{array}{cc} I_{2\times 2} &{} 0\\ -e^{2it\theta }\delta _{2-}^{-1}(k)\sigma _1(h_2^\dag (k^*)+R^\dag (k^*))\sigma _1\delta _{1-}^{-1}(k) &{} I_{2\times 2}\\ \end{array}\right) . \end{aligned}$$
Fig. 3

The oriented contour \(\Sigma \)

Define the oriented contour \(\Sigma \) by \(\Sigma ={\mathbb {R}}\cup L\cup L^*\) as in Fig. 3. Set

$$\begin{aligned} M^\sharp (k;x,t)= {\left\{ \begin{array}{ll} M^\Delta (k;x,t), &{} k\in \Omega _{1}\cup \Omega _{2},\\ M^\Delta (k;x,t)(b_+^a)^{-1}, &{} k\in \Omega _{3}\cup \Omega _{4},\\ M^\Delta (k;x,t)(b_-^a)^{-1}, &{} k\in \Omega _{5}\cup \Omega _{6}.\\ \end{array}\right. } \end{aligned}$$
(3.36)

From the analytic properties of \(b_\pm ^a\) in Theorem 3.1, \(M^\sharp (k;x,t)\) is analytic in \({\mathbb {C}}\backslash \Sigma \).

Lemma 3.2

\(M^\sharp (k;x,t)\) is the solution of the RH problem

$$\begin{aligned} {\left\{ \begin{array}{ll} M^\sharp _+(k;x,t)=M^\sharp _-(k;x,t)J^\sharp (k;x,t), &{} k\in \Sigma ,\\ M^\sharp (k;x,t)\rightarrow I_{4\times 4}, &{} k\rightarrow \infty , \end{array}\right. } \end{aligned}$$
(3.37)

where the jump matrix \(J^\sharp (k;x,t)\) satisfies

$$\begin{aligned}&J^\sharp (k;x,t)=(b_-^\sharp )^{-1}b_+^\sharp , \end{aligned}$$
(3.38)
$$\begin{aligned}&b_-^\sharp = {\left\{ \begin{array}{ll} I, &{} k\in L,\\ b_-^a, &{} k\in L^*,\\ b_-^o, &{} k\in {\mathbb {R}},\\ \end{array}\right. }\quad b_+^\sharp = {\left\{ \begin{array}{ll} b_+^a, &{} k\in L,\\ I, &{} k\in L^*,\\ b_+^o, &{} k\in {\mathbb {R}}.\\ \end{array}\right. } \end{aligned}$$
(3.39)

Proof

A simple calculation, with the aid of the RH problem (3.11) and the decomposition of \(b_\pm \), shows that the jump condition (3.38) holds. From the definition of \(M^\sharp (k;x,t)\) in (3.36), the normalization condition of \(M^\sharp (k;x,t)\) depends on the convergence of \(b_\pm ^a\) as \(k\rightarrow \infty \). We first consider the case of \(k\in \Omega _{3}\). According to the boundedness of \(\delta _1(k)\) and \(\delta _2(k)\) in (3.9), and the definitions of R(k) and \(h_2(k)\) in (3.18) and (3.29), we have

$$\begin{aligned}&\vert e^{-2it\theta }\delta _{1+}(k)(h_2(k)+R(k))\delta _{2+}(k)\vert \\&\quad \lesssim \vert e^{-2it\theta }h_2(k)\vert +\vert e^{-2it\theta }R(k)\vert \\&\quad \lesssim |\alpha (k)|e^{-t\mathrm {Re}(i\theta )}\left| \int _{-\infty }^te^{i(s-t)\theta }\widehat{(h/\alpha )}(s)\, \mathrm {d}s\right| +\dfrac{\vert \sum _{j=0}^m\mu _j(k-k_0)^j\vert }{\vert (k-i)^{m+5}\vert }\\&\quad \lesssim \dfrac{1}{\vert k-i\vert ^{2}}+\dfrac{1}{\vert k-i\vert ^{5}}. \end{aligned}$$

Then we obtain that \(M^\sharp (k;x,t)\rightarrow I_{4\times 4}\) as \(k\rightarrow \infty \) in \(\Omega _{3}\). In the other domains, the result can be proved similarly. \(\square \)

Up to now, we have transformed the basic RH problem (2.32) into the RH problem (3.37). Since we aim to obtain the solution \(M(k;x,t)\) in (2.36), we now need to find the solution \(M^\sharp (k;x,t)\). The RH problem (3.37) can be solved by using the approach in [4]. Assume that

$$\begin{aligned} \omega ^\sharp _\pm =\pm (b^\sharp _\pm -I_{4\times 4}), \quad \omega ^\sharp =\omega ^\sharp _++\omega ^\sharp _-, \end{aligned}$$
(3.40)

and define the Cauchy operators \(C_\pm \) on \(\Sigma \) by

$$\begin{aligned} (C_\pm f)(k)=\int _\Sigma \frac{f(\xi )}{\xi -k_\pm }\frac{\mathrm {d}\xi }{2\pi i},\quad k\in \Sigma ,\ f\in {\mathscr {L}}^2(\Sigma ), \end{aligned}$$
(3.41)

where \(C_+f\) (resp. \(C_-f\)) denotes the boundary value from the left (resp. right) side of the oriented contour \(\Sigma \) in Fig. 3. We introduce the operator \(C_{\omega ^\sharp }:{\mathscr {L}}^2(\Sigma )+{\mathscr {L}}^\infty (\Sigma )\rightarrow {\mathscr {L}}^2(\Sigma )\) by

$$\begin{aligned} C_{\omega ^{\sharp }}f=C_+\left( f\omega ^\sharp _-\right) +C_-\left( f\omega ^\sharp _+\right) \end{aligned}$$
(3.42)

for a \(4\times 4\) matrix function f. Suppose that \(\mu ^\sharp (k;x,t)\in {\mathscr {L}}^2(\Sigma )+{\mathscr {L}}^\infty (\Sigma )\) is the solution of the singular integral equation

$$\begin{aligned} \mu ^\sharp =I_{4\times 4}+C_{\omega ^\sharp }\mu ^\sharp . \end{aligned}$$

Then

$$\begin{aligned} M^\sharp (k;x,t)=I_{4\times 4}+\int _\Sigma \dfrac{\mu ^\sharp (\xi ;x,t)\omega ^\sharp (\xi ;x,t)}{\xi -k} \, \frac{\mathrm {d}\xi }{2\pi i} \end{aligned}$$
(3.43)

solves RH problem (3.37). Indeed,

$$\begin{aligned} \begin{aligned} M_\pm ^\sharp (k)=&I_{4\times 4}+C_\pm (\mu ^\sharp (k)\omega ^\sharp (k))\\ =&I_{4\times 4}+C_\pm (\mu ^\sharp (k)\omega _+^\sharp (k))+C_\pm (\mu ^\sharp (k)\omega _-^\sharp (k))\\ =&I_{4\times 4}\pm \mu ^\sharp (k)\omega _\pm ^\sharp (k)+C_{\omega ^\sharp }\mu ^\sharp (k)\\ =&\mu ^\sharp (k) b_\pm ^\sharp (k). \end{aligned} \end{aligned}$$
(3.44)
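The third equality in (3.44) uses the Plemelj relation \(C_+-C_-=1\). A crude numerical illustration of this jump on \({\mathbb {R}}\) (the test function, grid, and offset \(\varepsilon \) are illustrative; the quadrature is a plain Riemann sum):

```python
import numpy as np

xi = np.arange(-200.0, 200.0, 1e-3)
f = 1.0/(1.0 + xi**2)                    # test function in L^2(R)
x, eps = 0.3, 1e-2

def cauchy(z):
    # (1/2*pi*i) * integral of f(xi)/(xi - z) via a plain Riemann sum
    return np.sum(f/(xi - z))*1e-3/(2j*np.pi)

# boundary values from above and below differ by f(x), up to O(eps)
jump = cauchy(x + 1j*eps) - cauchy(x - 1j*eps)
assert abs(jump - 1.0/(1.0 + x**2)) < 0.05
```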

It is obvious that

$$\begin{aligned} M_+^\sharp =\mu ^\sharp b_+^\sharp =M_-^\sharp (b_-^\sharp )^{-1}b_+^\sharp =M_-^\sharp J^\sharp . \end{aligned}$$
(3.45)

Theorem 3.2

The solution Q(x, t) for the Cauchy problem of the new type CNLS Eq. (1.1) is expressed by

$$\begin{aligned} Q(x,t)=-\frac{1}{\pi }\left( \int _\Sigma \left( (1-C_{\omega ^\sharp })^{-1}I_{4\times 4}\right) (\xi )\omega ^\sharp (\xi )\, \mathrm {d}\xi \right) _{12}. \end{aligned}$$
(3.46)

Proof

From (2.36), (3.36) and (3.43), we arrive at

$$\begin{aligned} \begin{aligned} Q(x,t)&=2i\lim _{k\rightarrow \infty }\left( kM^\Delta (k;x,t)\right) _{12}\\&=2i\lim _{k\rightarrow \infty }\left( kM^\sharp (k;x,t)\right) _{12}\\&=-\frac{1}{\pi }\left( \int _\Sigma \mu ^\sharp (\xi )\omega ^\sharp (\xi )\,\mathrm {d}\xi \right) _{12}\\&=-\frac{1}{\pi }\left( \int _\Sigma ((1-C_{\omega ^\sharp })^{-1}I_{4\times 4})(\xi )\omega ^\sharp (\xi )\,\mathrm {d}\xi \right) _{12}. \end{aligned} \end{aligned}$$

\(\square \)

3.3 The Third Transformation

In this subsection, we establish a RH problem on the contour \(\Sigma ^\prime =\Sigma \backslash {\mathbb {R}}=L\cup L^*\), which is derived from the RH problem on the contour \(\Sigma \). We then estimate the errors between the two RH problems.

From (3.35), (3.39) and (3.40), we find that \(\omega ^\sharp \) is made up of the terms \(h_1(k)\), \(h_1^\dag (k^*)\), \(h_2(k)\), \(h_2^\dag (k^*)\), R(k) and \(R^\dag (k^*)\). A simple calculation shows that

$$\begin{aligned} \omega ^\sharp = {\left\{ \begin{array}{ll} \left( \begin{array}{cc} 0 &{} e^{-2it\theta }\delta _{1+}(k)(h_2(k)+R(k))\delta _{2+}(k)\\ 0 &{} 0\\ \end{array}\right) , &{} k\in L,\\ \left( \begin{array}{cc} 0 &{} 0\\ e^{2it\theta }\delta _{2-}^{-1}(k)\sigma _1(h_2^\dag (k^*)+R^\dag (k^*))\sigma _1\delta _{1-}^{-1}(k) &{} 0\\ \end{array}\right) , &{} k\in L^*,\\ \left( \begin{array}{cc} 0 &{} e^{-2it\theta }\delta _{1+}(k)h_1(k)\delta _{2+}(k)\\ e^{2it\theta }\delta _{2-}^{-1}(k)\sigma _1h_1^\dag (k^*)\sigma _1\delta _{1-}^{-1}(k) &{} 0\\ \end{array}\right) ,&k\in {\mathbb {R}}. \end{array}\right. }\nonumber \\ \end{aligned}$$
(3.47)

Noting that \(\omega ^\sharp \) is the sum of different terms, we can split \(\omega ^\sharp \) into three parts: \(\omega ^\sharp =\omega ^a+\omega ^b+\omega ^\prime \), where \(\omega ^a=\omega ^\sharp |_{{\mathbb {R}}}\) and is made up of the terms \(h_1(k)\) and \(h_1^\dag (k^*)\); \(\omega ^b=0\) on \(\Sigma \backslash (L\cup L^*)\) and is made up of the terms \(h_2(k)\) and \(h_2^\dag (k^*)\); \(\omega ^\prime =0\) on \(\Sigma \backslash \Sigma ^\prime \) and is made up of the terms R(k) and \(R^\dag (k^*)\).

Let \(\omega ^e=\omega ^a+\omega ^b\), so that \(\omega ^\sharp =\omega ^e+\omega ^\prime \). Then we immediately obtain the following estimates.

Lemma 3.3

For an arbitrary positive integer l, as \(t\rightarrow \infty \), we have the estimates

$$\begin{aligned}&\Vert \omega ^a\Vert _{{\mathscr {L}}^1({\mathbb {R}})\cap {\mathscr {L}}^2({\mathbb {R}})\cap {\mathscr {L}}^\infty ({\mathbb {R}})}\lesssim t^{-l}, \end{aligned}$$
(3.48)
$$\begin{aligned}&\Vert \omega ^b\Vert _{{\mathscr {L}}^1(L\cup L^*)\cap {\mathscr {L}}^2(L\cup L^*)\cap {\mathscr {L}}^\infty (L\cup L^*)}\lesssim t^{-l},\end{aligned}$$
(3.49)
$$\begin{aligned}&\Vert \omega ^\prime \Vert _{{\mathscr {L}}^2(\Sigma )}\lesssim t^{-\frac{1}{4}},\quad \Vert \omega ^\prime \Vert _{{\mathscr {L}}^1(\Sigma )}\lesssim t^{-\frac{1}{2}}. \end{aligned}$$
(3.50)

Proof

We consider the case of \(\omega ^a\) first. To give the estimates for \(\Vert \omega ^a\Vert _{{\mathscr {L}}^p({\mathbb {R}})}\), \((p=1,2,\infty )\), it suffices to bound \(|\omega ^a|\). Noting the boundedness of \(\delta _1(k)\) and \(\delta _2(k)\) in (3.9) and the estimates in Theorem 3.1, we have

$$\begin{aligned} |\omega ^a|=&\left( \left| e^{-2it\theta }\delta _1(k)h_1(k)\delta _2(k)\right| ^2+\left| \delta _2^{-1}(k)\sigma _1h_1^\dag (k^*)\sigma _1\delta _1^{-1}(k)\right| ^2\right) ^{\frac{1}{2}}\\ \leqslant&\left| e^{-2it\theta }\delta _1(k)h_1(k)\delta _2(k)\right| +\left| \delta _2^{-1}(k)\sigma _1h_1^\dag (k^*)\sigma _1\delta _1^{-1}(k)\right| \\ \lesssim&\left| e^{-2it\theta }h_1(k)\right| +\left| e^{2it\theta }(k)h_1^\dag (k^*)\right| \\ \lesssim&\frac{1}{(1+\vert k\vert ^{2})t^{l}}, \end{aligned}$$

which leads to the estimates (3.48). Similarly, we can prove by a simple calculation that (3.49) also holds. Next, we prove estimate (3.50). On the line \(\{k=k_0+ue^{\frac{3\pi i}{4}}:u\in (-\infty ,0)\}\), from the definition of R(k) in (3.18), we have

$$\begin{aligned} |R(k)|=\dfrac{\vert \sum _{j=0}^m\mu _j(k-k_0)^j\vert }{\vert (k-i)^{m+5}\vert }\lesssim \dfrac{1}{1+\vert k\vert ^{5}}, \end{aligned}$$
(3.51)

and |R(k)| has the same bound on the line \(\{k=k_0+ue^{\frac{3\pi i}{4}}:u\in [0,+\infty )\}\). Resorting to \(\mathrm {Re}(i\theta )=2u^2\) on L, we arrive at

$$\begin{aligned} \left| e^{-2it\theta }\delta _1(k)R(k)\delta _2(k)\right| \lesssim e^{-4tu^2}(1+|k|^5)^{-1}. \end{aligned}$$

It is similar for \(R^\dag (k^*)\) on \(L^*\) that

$$\begin{aligned} \left| e^{2it\theta }\delta _2^{-1}(k)\sigma _1R^\dag (k^*)\sigma _1\delta _1^{-1}(k)\right| \lesssim e^{-4tu^2}(1+|k|^5)^{-1}. \end{aligned}$$

Then (3.50) holds through direct computations. \(\square \)

From Proposition 2.23 and Corollary 2.25 in [14], we know that the operator \((1-C_{\omega ^\prime })^{-1}:{\mathscr {L}}^2(\Sigma )\rightarrow {\mathscr {L}}^2(\Sigma )\) exists and is uniformly bounded, and

$$\begin{aligned} \Vert (1-C_{\omega ^\prime })^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )}\lesssim 1,\quad \Vert (1-C_{\omega ^\sharp })^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )}\lesssim 1. \end{aligned}$$
(3.52)

Lemma 3.4

As \(t\rightarrow \infty \),

$$\begin{aligned} \int _\Sigma ((1-C_{\omega ^\sharp })^{-1}I_{4\times 4})(\xi )\omega ^\sharp (\xi ) \, \mathrm {d}\xi = \int _{\Sigma ^\prime }((1-C_{\omega ^\prime })^{-1}I_{4\times 4})(\xi )\omega ^\prime (\xi ) \, \mathrm {d}\xi +O(t^{-l}).\nonumber \\ \end{aligned}$$
(3.53)

Proof

It is easy to see that

$$\begin{aligned} \begin{aligned} \left( (1-C_{\omega ^\sharp })^{-1}I_{4\times 4}\right) \omega ^\sharp =&\left( (1-C_{\omega ^\prime })^{-1}I_{4\times 4}\right) \omega ^\prime \\&+\left( (1-C_{\omega ^\prime })^{-1}(C_{\omega ^\sharp }-C_{\omega ^\prime })(1-C_{\omega ^\sharp })^{-1}I_{4\times 4}\right) \omega ^\sharp \\&+((1-C_{\omega ^\prime })^{-1}I_{4\times 4})(\omega ^\sharp -\omega ^\prime )\\ =&\left( (1-C_{\omega ^\prime })^{-1}I_{4\times 4}\right) \omega ^\prime +\omega ^e+\left( (1-C_{\omega ^\prime })^{-1}(C_{\omega ^e}I_{4\times 4})\right) \omega ^\sharp \\&+((1-C_{\omega ^\prime })^{-1}(C_{\omega ^\prime }I_{4\times 4}))\omega ^e\\&+\left( (1-C_{\omega ^\prime })^{-1}C_{\omega ^e}(1-C_{\omega ^\sharp })^{-1}\right) (C_{\omega ^\sharp }I_{4\times 4})\omega ^\sharp . \end{aligned} \end{aligned}$$
(3.54)

From Lemma 3.3 and (3.52), we get

$$\begin{aligned} \begin{aligned}&\Vert \omega ^e\Vert _{{\mathscr {L}}^1(\Sigma )} \leqslant \Vert \omega ^a\Vert _{{\mathscr {L}}^1({\mathbb {R}})}+\Vert \omega ^b\Vert _{{\mathscr {L}}^1(L\cup L^*)}\lesssim t^{-l},\\&\begin{aligned} \left\| \left( (1-C_{\omega ^\prime })^{-1}(C_{\omega ^e}I_{4\times 4})\right) \omega ^\sharp \right\| _{{\mathscr {L}}^1(\Sigma )} \leqslant&\Vert (1-C_{\omega ^\prime })^{-1} C_{\omega ^e}I_{4\times 4}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert \omega ^\sharp \Vert _{{\mathscr {L}}^2(\Sigma )}\\ \leqslant&\Vert (1-C_{\omega ^\prime })^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert C_{\omega ^e}I_{4\times 4}\Vert _{{\mathscr {L}}^2(\Sigma )}\\&\times (\Vert \omega ^a\Vert _{{\mathscr {L}}^2({\mathbb {R}})}+\Vert \omega ^b\Vert _{{\mathscr {L}}^2(L\cup L^*)}+\Vert \omega ^\prime \Vert _{{\mathscr {L}}^2(\Sigma )})\\ \lesssim&\Vert \omega ^e\Vert _{{\mathscr {L}}^2(\Sigma )}(t^{-l}+t^{-l}+t^{-\frac{1}{4}}) \lesssim t^{-l-\frac{1}{4}}, \end{aligned}\\&\begin{aligned} \left\| \left( (1-C_{\omega ^\prime })^{-1}(C_{\omega ^\prime }I_{4\times 4})\right) \omega ^e\right\| _{{\mathscr {L}}^1(\Sigma )} \leqslant&\Vert (1-C_{\omega ^\prime })^{-1} C_{\omega ^\prime }I_{4\times 4}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert \omega ^e\Vert _{{\mathscr {L}}^2(\Sigma )}\\ \leqslant&\Vert (1-C_{\omega ^\prime })^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert C_{\omega ^\prime }I_{4\times 4}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert \omega ^e\Vert _{{\mathscr {L}}^2(\Sigma )}\\ \lesssim&\Vert \omega ^\prime \Vert _{{\mathscr {L}}^2(\Sigma )}(\Vert \omega ^a\Vert _{{\mathscr {L}}^2({\mathbb {R}})}+\Vert \omega ^b\Vert _{{\mathscr {L}}^2(L\cup L^*)}) \lesssim t^{-l-\frac{1}{4}}, \end{aligned}\\&\left\| \left( (1-C_{\omega ^\prime })^{-1}C_{\omega ^e}(1-C_{\omega ^\sharp })^{-1}\right) (C_{\omega ^\sharp }I_{4\times 4})\omega ^\sharp \right\| _{{\mathscr {L}}^1(\Sigma )}\\&\quad \leqslant \Vert (1-C_{\omega ^\prime })^{-1}C_{\omega ^e}(1-C_{\omega ^\sharp })^{-1}(C_{\omega ^\sharp }I_{4\times 4})\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert \omega ^\sharp \Vert _{{\mathscr {L}}^2(\Sigma )}\\&\quad \leqslant \Vert (1-C_{\omega ^\prime })^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert (1-C_{\omega ^\sharp })^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert C_{\omega ^e}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert C_{\omega ^\sharp }I_{4\times 4}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert \omega ^\sharp \Vert _{{\mathscr {L}}^2(\Sigma )}\\&\quad \lesssim \Vert \omega ^e\Vert _{{\mathscr {L}}^\infty (\Sigma )}\Vert \omega ^\sharp \Vert ^2_{{\mathscr {L}}^2(\Sigma )} \lesssim t^{-l-\frac{1}{2}}. \end{aligned} \end{aligned}$$

Substituting the above estimates into (3.54), the proof is completed. \(\square \)

Lemma 3.5

As \(t\rightarrow \infty \), the solution Q(x, t) for the Cauchy problem of the new type CNLS Eq. (1.1) has the asymptotic estimate

$$\begin{aligned} Q(x,t)=-\frac{1}{\pi }\left( \int _{\Sigma ^\prime }((1-C_{\omega ^\prime })^{-1}I_{4\times 4})(\xi )\omega ^\prime (\xi )\, \mathrm {d}\xi \right) _{12}+O(t^{-l}). \end{aligned}$$
(3.55)

Proof

This is a direct consequence of (3.46) and (3.53). \(\square \)

Set

$$\begin{aligned} M^\prime (k;x,t)=I_{4\times 4}+\int _{\Sigma ^\prime }\frac{\mu ^\prime (\xi ;x,t)\omega ^\prime (\xi ;x,t)}{\xi -k} \, \frac{\mathrm {d}\xi }{2\pi i}, \end{aligned}$$

where \(\mu ^\prime =(1-C_{\omega ^\prime })^{-1}I_{4\times 4}\). We can convert (3.55) to

$$\begin{aligned} Q(x,t)=2i\lim _{k\rightarrow \infty }(kM^\prime )_{12}+O(t^{-l}). \end{aligned}$$
(3.56)

In the same way as we obtained the solution \(M^\sharp (k;x,t)\) in (3.43) of the RH problem (3.37), one can pose a RH problem satisfied by \(M^\prime (k;x,t)\),

$$\begin{aligned} {\left\{ \begin{array}{ll} M_+^\prime (k;x,t)=M_-^\prime (k;x,t)J^\prime (k;x,t), &{} k\in \Sigma ^\prime , \\ M^\prime (k;x,t)\rightarrow I_{4\times 4}, &{} k\rightarrow \infty ,\\ \end{array}\right. } \end{aligned}$$
(3.57)

where

$$\begin{aligned}&\omega ^\prime =\omega _+^\prime +\omega _-^\prime ,\\&J^\prime =(b_-^\prime )^{-1}b_+^\prime =(I_{4\times 4}-\omega _-^\prime )^{-1}(I_{4\times 4}+\omega _+^\prime ),\\&b_+^\prime = {\left\{ \begin{array}{ll} \left( \begin{array}{cc} I_{2\times 2} &{} e^{-2it\theta }\delta _1(k)R(k)\delta _2(k)\\ 0 &{} I_{2\times 2}\\ \end{array}\right) , &{} k\in L,\\ I_{4\times 4}, &{} k\in L^*,\\ \end{array}\right. }\\&b_-^\prime = {\left\{ \begin{array}{ll} I_{4\times 4}, &{} k\in L,\\ \left( \begin{array}{cc} I_{2\times 2} &{} 0\\ -e^{2it\theta }\delta _2^{-1}(k)\sigma _1R^\dag (k^*)\sigma _1\delta _1^{-1}(k) &{} I_{2\times 2}\\ \end{array}\right) , &{} k\in L^*.\\ \end{array}\right. } \end{aligned}$$

3.4 Scaling and Translation of the Contour

At the end of the previous subsection, the leading-order asymptotics of the solution to the Cauchy problem of the new type CNLS Eq. (1.1) was reduced, via (3.55), to an integral over \(\Sigma ^\prime \) through the stationary point \(k_0\). It should be noted, however, that the corresponding RH problem (3.57) cannot be solved explicitly. Hence, the RH problem still needs further transformation until it becomes a solvable model RH problem. In this subsection, we construct a contour that is a cross through the origin and establish a corresponding RH problem on it. Firstly, we introduce the oriented contour \(\Sigma _0=\{k=ue^{\frac{\pi i}{4}}:u\in {\mathbb {R}}\}\cup \{k=ue^{-\frac{\pi i}{4}}:u\in {\mathbb {R}}\}\) as in Fig. 4.

Fig. 4

The oriented contour \(\Sigma _0\)

Define the scaling operator

$$\begin{aligned} \begin{aligned} N:&{\mathscr {L}}^2(\Sigma ^\prime )\rightarrow {\mathscr {L}}^2(\Sigma _0),\\&f(k)\mapsto (Nf)(k)=f\left( k_0+\frac{k}{\sqrt{8t}}\right) . \end{aligned} \end{aligned}$$
(3.58)

Let \({{\hat{\omega }}}=N\omega ^\prime \); then

$$\begin{aligned} C_{\omega ^\prime }=N^{-1}C_{{{\hat{\omega }}}}N. \end{aligned}$$
(3.59)

Indeed, for a \(4\times 4\) matrix function f, we have

$$\begin{aligned} NC_{\omega ^\prime }f=N\left( C_+(f\omega _-^\prime )+C_-(f\omega _+^\prime )\right) =C_{{{\hat{\omega }}}}Nf. \end{aligned}$$

Notice that \(\omega ^\prime \) is made up of the terms \(\delta _1(k)\), \(\delta _1^{-1}(k)\), \(\delta _2(k)\) and \(\delta _2^{-1}(k)\). However, \(\delta _1(k)\) and \(\delta _2(k)\) satisfy the matrix RH problems (3.2) and (3.3), respectively, which means that they cannot be solved in explicit form. If we consider instead the determinants of these two RH problems, they reduce to the same scalar RH problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \det \delta _{+}(k)=(1+\det (\gamma (k)\gamma ^*(k^*))+\mathrm {tr}(\gamma (k)\gamma ^*(k^*)))\det \delta _{-}(k),\quad k\in (-\infty ,k_0),\\ \det \delta (k)\rightarrow 1,\quad k\rightarrow \infty , \end{array}\right. }\nonumber \\ \end{aligned}$$
(3.60)

which can be solved by the Plemelj formula [1]

$$\begin{aligned} \det \delta (k)=(k-k_0)^{i\nu }e^{\chi (k)}, \end{aligned}$$
(3.61)

where

$$\begin{aligned} \nu =&-\frac{1}{2\pi }\log (1+\det (\gamma (k_0)\gamma ^*(k_0))+\mathrm {tr}(\gamma (k_0)\gamma ^*(k_0))),\\ \chi (k)=&\frac{1}{2\pi i}\Big [\int _{k_0-1}^{k_0}\log \left( \frac{1+\det (\gamma (\xi )\gamma ^*(\xi ^*))+\mathrm {tr}(\gamma (\xi )\gamma ^*(\xi ^*))}{1+\det (\gamma (k_0)\gamma ^*(k_0))+\mathrm {tr}(\gamma (k_0)\gamma ^*(k_0))}\right) \, \frac{\mathrm {d}\xi }{\xi -k}\\&+\int _{-\infty }^{k_0-1}\log \left( 1+\det (\gamma (\xi )\gamma ^*(\xi ^*))+\mathrm {tr}(\gamma (\xi )\gamma ^*(\xi ^*))\right) \, \frac{\mathrm {d}\xi }{\xi -k}\\&-\log (1+\det (\gamma (k_0)\gamma ^*(k_0))+\mathrm {tr}(\gamma (k_0)\gamma ^*(k_0)))\log (k-k_0+1)\Big ]. \end{aligned}$$
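The role of the factor \((k-k_0)^{i\nu }\) in (3.61) can be seen directly: across the ray \((-\infty ,k_0)\) its boundary values differ by the factor \(e^{-2\pi \nu }\), which reproduces a constant jump c exactly when \(\nu =-\frac{1}{2\pi }\log c\). A quick check with an illustrative constant jump value:

```python
import cmath, math

c = 2.5                         # illustrative constant jump value
nu = -math.log(c)/(2*math.pi)
k0 = 1.0

def d(z):
    # principal branch of (z - k0)^{i nu}
    return cmath.exp(1j*nu*cmath.log(z - k0))

k, eps = -3.0, 1e-9             # a point on the ray (-inf, k0)
ratio = d(k + 1j*eps)/d(k - 1j*eps)   # boundary values from above / below
assert abs(ratio - c) < 1e-6
```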

Resorting to the symmetry relations of \(\delta _1(k)\) and \(\delta _2(k)\), we arrive at \(|\det \delta (k)|\lesssim 1\) for \(k\in {\mathbb {C}}\). Acting with the scaling operator on the exponential factor and \(\det \delta (k)\), a direct computation shows that

$$\begin{aligned} N(e^{-it\theta }\det \delta )=\delta ^0\delta ^1(k), \end{aligned}$$

where \(\delta ^0\) is independent of k and

$$\begin{aligned} \delta ^0=e^{2itk_0^2}(8t)^{-\frac{i\nu }{2}}e^{\chi (k_0)}. \end{aligned}$$
(3.62)

Let \({{\hat{L}}}=\{\sqrt{8t}ue^{-\frac{\pi i}{4}}:u\in {\mathbb {R}}\}\). In a way similar to the proof of Lemma 3.35 in [14], we obtain by a straightforward computation that

$$\begin{aligned}&\left| (NR)(k)(\delta ^1(k))^2-R(k_0\pm )k^{2i\nu }e^{-\frac{1}{2}ik^2}\right| \lesssim \frac{\log t}{\sqrt{t}},\quad k\in {{\hat{L}}}, \end{aligned}$$
(3.63)
$$\begin{aligned}&\left| (NR^\dag )(k^*)(\delta ^1(k))^{-2}-R^\dag (k_0\pm )k^{-2i\nu }e^{\frac{1}{2}ik^2}\right| \lesssim \frac{\log t}{\sqrt{t}},\quad k\in {{\hat{L}}}^*, \end{aligned}$$
(3.64)

where

$$\begin{aligned} R(k_0+)=\gamma (k_0),\quad R(k_0-)=-\left( I_{2\times 2}+\gamma (k_0)\sigma _1\gamma ^\dag (k_0)\sigma _1\right) ^{-1}\gamma (k_0). \end{aligned}$$

Then we obtain the expression of \({{\hat{\omega }}}\):

$$\begin{aligned}&{{\hat{\omega }}}={\hat{\omega }}_{+}= \left( \begin{array}{cc} 0 &{} (Ns_1)(k)\\ 0 &{} 0\\ \end{array}\right) , \quad k\in {{\hat{L}}}, \end{aligned}$$
(3.65)
$$\begin{aligned}&{{\hat{\omega }}}={\hat{\omega }}_{-}= \left( \begin{array}{cc} 0 &{} 0\\ (Ns_2)(k) &{} 0\\ \end{array}\right) ,\quad k\in {{\hat{L}}}^*, \end{aligned}$$
(3.66)

where

$$\begin{aligned} s_1(k)=e^{-2it\theta }\delta _1(k)R(k)\delta _2(k),\quad s_2(k)=e^{2it\theta }\delta _2^{-1}(k)\sigma _1R^\dag (k^*)\sigma _1\delta _1^{-1}(k). \end{aligned}$$
(3.67)

Lemma 3.6

For \(k\in {{\hat{L}}}\), as \(t\rightarrow \infty \),

$$\begin{aligned} \vert (N{{\tilde{\delta }}}_1)(k)\vert \lesssim t^{-1},\quad \vert (N{{\tilde{\delta }}}_2)(k)\vert \lesssim t^{-1}, \end{aligned}$$
(3.68)

where

$$\begin{aligned} {{\tilde{\delta }}}_1(k)=e^{-2it\theta }[\delta _1(k)R(k)-\det \delta (k)R(k)],\\ {{\tilde{\delta }}}_2(k)=e^{-2it\theta }[R(k)\delta _2(k)-\det \delta (k)R(k)]. \end{aligned}$$

Proof

Let us consider the case of \({{\tilde{\delta }}}_1(k)\) first; the other case is similar. Noting that \(\delta _1(k)\) and \(\det \delta (k)\) satisfy the RH problems (3.2) and (3.60), respectively, \({{\tilde{\delta }}}_1(k)\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} {{\tilde{\delta }}}_{1+}(k)={{\tilde{\delta }}}_{1-}(k)(1+\det (\gamma (k)\gamma ^*(k^*))+\mathrm {tr}(\gamma (k)\gamma ^*(k^*)))+e^{-2it\theta }f(k),\quad k\in (-\infty ,k_0),\\ {{\tilde{\delta }}}_1(k)\rightarrow 0,\quad k\rightarrow \infty , \end{array}\right. }\nonumber \\ \end{aligned}$$
(3.69)

where \(f(k)=\delta _{1-}(k)\left( \gamma (k)\gamma ^*(k^*)-(\det (\gamma (k)\gamma ^*(k^*))+\mathrm {tr}(\gamma (k)\gamma ^*(k^*)))I_{2\times 2}\right) R(k)\). This RH problem can be solved by

$$\begin{aligned}&{{\tilde{\delta }}}_1(k)=X(k)\int _{-\infty }^{k_0}\frac{e^{-2it\theta (\xi )}f(\xi )}{X_{+}(\xi )(\xi -k)}\, \frac{\mathrm {d}\xi }{2\pi i},\\&X(k)=\mathrm {exp}\left\{ \frac{1}{2\pi i}\int _{-\infty }^{k_0}\frac{\log (1+\det (\gamma (\xi )\gamma ^*(\xi ^*))+\mathrm {tr}(\gamma (\xi )\gamma ^*(\xi ^*)))}{\xi -k}\, \mathrm {d}\xi \right\} . \end{aligned}$$

Resorting to the definition of \(\rho (k)\) in (3.10), one deduces that

$$\begin{aligned} \begin{aligned}&\left( \gamma (k)\gamma ^*(k^*)-(\det (\gamma (k)\gamma ^*(k^*))+\mathrm {tr}(\gamma (k)\gamma ^*(k^*)))I_{2\times 2}\right) R(k)\\&\quad =\left[ (\det (\gamma (k)\gamma ^*(k^*))+\mathrm {tr}(\gamma (k)\gamma ^*(k^*)))I_{2\times 2}-\gamma (k)\gamma ^*(k^*)\right] (\rho (k)-R(k))\\&\qquad -\gamma ^*(k^*)\det \gamma (k). \end{aligned} \end{aligned}$$

From Theorem 3.1, \(\rho -R=h_1+h_2\). Similarly, \(\gamma ^*(k^*)\) also has a decomposition \(\gamma ^*=\tilde{h}_1+{{\tilde{h}}}_2+{{\tilde{R}}}\). We split f(k) into three parts \(f=f_1+f_2+f_3\), where \(f_1\) consists of the terms involving \(h_1\) and \({{\tilde{h}}}_1\), \(f_2\) consists of the terms involving \(h_2\) and \({{\tilde{h}}}_2\), and \(f_3\) consists of the terms involving \({{\tilde{R}}}\). Notice that \(|(\det (\gamma \gamma ^*)+\mathrm {tr}(\gamma \gamma ^*))I_{2\times 2}-\gamma \gamma ^*|\) consists of the moduli of the components of \(\gamma \) and \(\det \gamma \). Then \(f_2\) and \(f_3\) have an analytic continuation to \(L_t=\{k=k_0-\frac{1}{t}+ue^{\frac{3\pi i}{4}}:u\in (0,+\infty )\}\) (Fig. 5).

Fig. 5

The contour \(L_t\)

Moreover, \(f_1(k)+f_2(k)=O((k-k_0)^l)\) and they satisfy

$$\begin{aligned}&\vert e^{-2it\theta }f_{1}(k)\vert \lesssim \frac{1}{(1+\vert k\vert ^{2})t^{l}}, \quad k\in {\mathbb {R}}, \end{aligned}$$
(3.70)
$$\begin{aligned}&\vert e^{-2it\theta }f_{2}(k)\vert \lesssim \frac{1}{(1+\vert k\vert ^{2})t^{l}}, \quad k\in L_{t}. \end{aligned}$$
(3.71)

For \(k\in {{\hat{L}}}\), we arrive at

$$\begin{aligned} (N{{\tilde{\delta }}}_1)(k)=&X(\frac{k}{\sqrt{8t}}+k_{0})\int _{k_0-\frac{1}{t}}^{k_0}\frac{e^{-2it\theta (\xi )}f(\xi )}{X_+(\xi )(\xi -k_0-\frac{k}{\sqrt{8t}})}\, \frac{\mathrm {d}\xi }{2\pi i}\\&+X(\frac{k}{\sqrt{8t}}+k_{0})\int _{-\infty }^{k_0-\frac{1}{t}}\frac{e^{-2it\theta (\xi )}f_1(\xi )}{X_+(\xi )(\xi -k_0-\frac{k}{\sqrt{8t}})}\, \frac{\mathrm {d}\xi }{2\pi i}\\&+X(\frac{k}{\sqrt{8t}}+k_{0})\int _{-\infty }^{k_0-\frac{1}{t}}\frac{e^{-2it\theta (\xi )}f_2(\xi )}{X_+(\xi )(\xi -k_0-\frac{k}{\sqrt{8t}})}\, \frac{\mathrm {d}\xi }{2\pi i}\\&+X(\frac{k}{\sqrt{8t}}+k_{0})\int _{-\infty }^{k_0-\frac{1}{t}}\frac{e^{-2it\theta (\xi )}f_3(\xi )}{X_+(\xi )(\xi -k_0-\frac{k}{\sqrt{8t}})}\, \frac{\mathrm {d}\xi }{2\pi i}\\ \triangleq&A_{1}+A_{2}+A_{3}+A_4, \end{aligned}$$

and a direct calculation shows that

$$\begin{aligned}&\vert A_1\vert \lesssim \left| \int _{k_0-\frac{1}{t}}^{k_0}\frac{e^{-2it\theta (\xi )}f(\xi )}{X_+(\xi )(\xi -k_0-\frac{k}{\sqrt{8t}})}\, \mathrm {d}\xi \right| \lesssim t^{-1},\\&\vert A_2\vert \lesssim \int _{-\infty }^{k_0-\frac{1}{t}}\frac{|e^{-2it\theta (\xi )}f_1(\xi )|}{|\xi -k_0-\frac{k}{\sqrt{8t}}|}\, \mathrm {d}\xi \lesssim t^{-l}\sqrt{2}t\int _{-\infty }^{0}\frac{1}{1+\xi ^2}\, \mathrm {d}\xi \lesssim t^{-l+1}. \end{aligned}$$

Similarly, by Cauchy’s theorem we can evaluate \(A_{3}\) along the contour \(L_{t}\) instead of the interval \((-\infty ,k_0-\frac{1}{t})\), and \(|A_{3}|\lesssim t^{-l+1}\). For \(k\in L_t\), we have

$$\begin{aligned} \vert A_4\vert \lesssim \sqrt{2}t^{-1}\int _{-\infty }^{k_0-\frac{1}{t}}\frac{1}{1+|\xi |^5}\, \mathrm {d}\xi \lesssim t^{-1}. \end{aligned}$$

Then the estimates for \(|(N{{\tilde{\delta }}}_1)(k)|\) and \(|(N{{\tilde{\delta }}}_2)(k)|\) are similarly computed. \(\square \)

Corollary 3.1

For \(k\in {{\hat{L}}}^*\) and \(t\rightarrow \infty \),

$$\begin{aligned} \vert (N{\hat{\delta }}_{1})(k)\vert \lesssim t^{-1},\quad \vert (N{\hat{\delta }}_{2})(k)\vert \lesssim t^{-1}, \end{aligned}$$
(3.72)

where

$$\begin{aligned} {\hat{\delta }}_{1}(k)=e^{2it\theta }[R^\dag (k^*)\delta _1^{-1}(k)-(\det \delta (k))^{-1}R^\dag (k^*)],\\ {\hat{\delta }}_{2}(k)=e^{2it\theta }[\delta _2^{-1}(k)R^\dag (k^*)-(\det \delta (k))^{-1}R^\dag (k^*)]. \end{aligned}$$

Next, we construct \(\omega ^0\) on the contour \(\Sigma _0\). Let

$$\begin{aligned}&\omega ^0=\omega ^0_+= {\left\{ \begin{array}{ll} \left( \begin{array}{cc} 0 &{} (\delta ^0)^2k^{2i\nu }e^{-\frac{1}{2}ik^2}\gamma (k_0)\\ 0 &{} 0\\ \end{array}\right) , &{} k\in \Sigma _0^1,\\ \left( \begin{array}{cc} 0 &{} -(\delta ^0)^2k^{2i\nu }e^{-\frac{1}{2}ik^2}(I_{2\times 2}+\gamma (k_0)\gamma ^*(k_0))^{-1}\gamma (k_0)\\ 0 &{} 0\\ \end{array}\right) , &{} k\in \Sigma _0^3,\\ \end{array}\right. } \end{aligned}$$
(3.73)
$$\begin{aligned}&\omega ^0=\omega ^0_-= {\left\{ \begin{array}{ll} \left( \begin{array}{cc} 0 &{} 0\\ (\delta ^0)^{-2}k^{-2i\nu }e^{\frac{1}{2}ik^2}\gamma ^*(k_0) &{} 0\\ \end{array}\right) , &{} k\in \Sigma _0^2,\\ \left( \begin{array}{cc} 0 &{} 0\\ -(\delta ^0)^{-2}k^{-2i\nu }e^{\frac{1}{2}ik^2}\gamma ^*(k_0)(I_{2\times 2}+\gamma (k_0)\gamma ^*(k_0))^{-1} &{} 0\\ \end{array}\right) , &{} k\in \Sigma _0^4.\\ \end{array}\right. } \end{aligned}$$
(3.74)

It follows from (3.63) and (3.64) that, as \(t\rightarrow \infty \),

$$\begin{aligned} \Vert {{\hat{\omega }}}-\omega ^0\Vert _{{\mathscr {L}}^1(\Sigma _0)\cap {\mathscr {L}}^2(\Sigma _0)\cap {\mathscr {L}}^\infty (\Sigma _0)} \lesssim \frac{\log {t}}{\sqrt{t}}. \end{aligned}$$
(3.75)

Theorem 3.3

As \(t\rightarrow \infty \), the asymptotics of the solution \(Q(x,t)\) to the Cauchy problem of the new type CNLS Eq. (1.1) has the form

$$\begin{aligned} \begin{aligned} Q(x,t)=-\frac{1}{\pi \sqrt{8t}}\left( \int _{\Sigma _0}((1-C_{\omega ^0})^{-1}I_{4\times 4})(\xi )\omega ^0(\xi )\, \mathrm {d}\xi \right) _{12}+O\left( \frac{\log t}{t}\right) . \end{aligned} \end{aligned}$$
(3.76)

Proof

A simple computation shows that

$$\begin{aligned}&\left( (1-C_{{{\hat{\omega }}}})^{-1}I_{4\times 4}\right) {{\hat{\omega }}}-\left( (1-C_{\omega ^0})^{-1}I_{4\times 4}\right) {\omega ^0}\\&\quad =\left( (1-C_{{{\hat{\omega }}}})^{-1}I_{4\times 4}\right) ({{\hat{\omega }}}-{\omega ^0})+(1-C_{{{\hat{\omega }}}})^{-1}(C_{{{\hat{\omega }}}}-C_{\omega ^0})(1-C_{\omega ^0})^{-1}I_{4\times 4}{\omega ^0}\\&\quad =({{\hat{\omega }}}-{\omega ^0})+\left( (1-C_{{{\hat{\omega }}}})^{-1}C_{{{\hat{\omega }}}}I_{4\times 4}\right) ({{\hat{\omega }}}-{\omega ^0})+(1-C_{{{\hat{\omega }}}})^{-1}(C_{{{\hat{\omega }}}}-C_{\omega ^0})I_{4\times 4}{\omega ^0}\\&\qquad +(1-C_{{{\hat{\omega }}}})^{-1}(C_{{{\hat{\omega }}}}-C_{\omega ^0})(1-C_{\omega ^0})^{-1}C_{\omega ^0}I_{4\times 4}{\omega ^0}. \end{aligned}$$

Utilizing the triangle inequality and the inequality (3.75), we have

$$\begin{aligned}&\int _{\Sigma _0}\left( (1-C_{{{\hat{\omega }}}})^{-1}I_{4\times 4}\right) (\xi ){{\hat{\omega }}}(\xi )\,\mathrm {d}\xi \\&\quad =\int _{\Sigma _0}\left( (1-C_{\omega ^0})^{-1}I_{4\times 4}\right) (\xi ){\omega ^0}(\xi )\,\mathrm {d}\xi +O\left( \frac{\log {t}}{\sqrt{t}}\right) . \end{aligned}$$

Recalling the scaling relation between \(\omega ^\prime \) and \({{\hat{\omega }}}\), we obtain by a simple change of variables that

$$\begin{aligned} \begin{aligned}&\int _{\Sigma ^\prime }\left( (1-C_{\omega ^\prime })^{-1}I_{4\times 4}\right) (\xi )\omega ^\prime (\xi )\,\mathrm {d}\xi \\&\quad =\int _{\Sigma ^\prime }\left( N^{-1}(1-C_{{{\hat{\omega }}}})^{-1}NI_{4\times 4}\right) (\xi )\omega ^\prime (\xi )\,\mathrm {d}\xi \\&\quad =\int _{\Sigma ^\prime }\left( (1-C_{{{\hat{\omega }}}})^{-1}I_{4\times 4}\right) ((\xi -k_0)\sqrt{8t})N\omega ^\prime ((\xi -k_0)\sqrt{8t})\,\mathrm {d}\xi \\&\quad =\frac{1}{\sqrt{8t}}\int _{\Sigma _0}\left( (1-C_{{{\hat{\omega }}}})^{-1}I_{4\times 4}\right) (\xi ){{\hat{\omega }}}(\xi )\,\mathrm {d}\xi \\&\quad =\frac{1}{\sqrt{8t}}\int _{\Sigma _0}\left( (1-C_{\omega ^0})^{-1}I_{4\times 4}\right) (\xi )\omega ^0(\xi )\,\mathrm {d}\xi +O\left( \frac{\log t}{t}\right) .\\ \end{aligned} \end{aligned}$$

Then (3.76) can be deduced from (3.55). \(\square \)

For \(k\in {\mathbb {C}}\backslash \Sigma _0\), set

$$\begin{aligned} M^0(k;x,t)=I_{4\times 4}+\int _{\Sigma _0}\frac{\left( (1-C_{\omega ^0})^{-1}I_{4\times 4}\right) (\xi )\omega ^0(\xi )}{\xi -k} \, \frac{\mathrm {d}\xi }{2\pi i}. \end{aligned}$$
(3.77)

Then \(M^0(k;x,t)\) is the solution of the RH problem

$$\begin{aligned} {\left\{ \begin{array}{ll} M^0_+(k;x,t)=M^0_-(k;x,t)J^0(k;x,t), &{} k\in \Sigma _0, \\ M^0(k;x,t)\rightarrow I_{4\times 4}, &{} k\rightarrow \infty , \end{array}\right. } \end{aligned}$$
(3.78)

where \(J^0=(b^0_-)^{-1}b^0_+=(I_{4\times 4}-\omega _-^0)^{-1}(I_{4\times 4}+\omega _+^0)\). In particular, we have

$$\begin{aligned} M^0(k)=I_{4\times 4}+\frac{M^0_1}{k}+O(k^{-2}), \quad k\rightarrow \infty . \end{aligned}$$
(3.79)

From (3.77) and (3.79), the coefficient of the term \(k^{-1}\) is as follows

$$\begin{aligned} M^0_1=-\int _{\Sigma _0}\left( (1-C_{\omega ^0})^{-1}I_{4\times 4}\right) (\xi )\omega ^0(\xi )\, \frac{\mathrm {d}\xi }{2\pi i}. \end{aligned}$$
(3.80)

Besides, by uniqueness, we get \(T_2M^0_1=M^0_1\).

Corollary 3.2

As \(t\rightarrow \infty \), the solution \(Q(x,t)\) of the Cauchy problem for the new type CNLS Eq. (1.1) is expressed by

$$\begin{aligned} Q(x,t)=\frac{i}{\sqrt{2t}}\left( M_1^0\right) _{12}+O\left( \frac{\log t}{t}\right) . \end{aligned}$$
(3.81)

Proof

Substituting (3.80) into (3.76), we immediately obtain (3.81). \(\square \)

3.5 Solving the Model Problem

In this subsection, we construct a model RH problem from the RH problem (3.78). Because the leading-order asymptotics of the solution \(Q(x,t)\) in (3.81) involves only the term \(M_1^0\), we solve the model RH problem and obtain an explicit expression for \(M_1^0\) in terms of the standard parabolic-cylinder function. For this purpose, we introduce

$$\begin{aligned} \Psi (k)=H(k)k^{i\nu \sigma _4}e^{-\frac{1}{4}ik^2\sigma _4},\quad H(k)=(\delta ^0)^{-\sigma _4}M^0(k)(\delta ^0)^{\sigma _4}. \end{aligned}$$
(3.82)

From (3.78), a model \(4\times 4\) matrix RH problem can be constructed on the contour \(\Sigma _0\) as below,

$$\begin{aligned} \Psi _+(k)=\Psi _-(k)v(k_0),\quad v=e^{\frac{1}{4}ik^2\sigma _4}k^{-i\nu \sigma _4}(\delta ^0)^{-\sigma _4}J^0(k)(\delta ^0)^{\sigma _4} k^{i\nu \sigma _4}e^{-\frac{1}{4}ik^2{\sigma _4}}.\nonumber \\ \end{aligned}$$
(3.83)

The normalization in (3.78) shows that \(\Psi (k)\rightarrow k^{i\nu \sigma _4}e^{-\frac{1}{4}ik^2\sigma _4}\) as \(k\rightarrow \infty \). Besides, the jump matrix of the above model matrix RH problem is independent of k along each of the four rays \(\Sigma _0^1, \Sigma _0^2, \Sigma _0^3, \Sigma _0^4\), so

$$\begin{aligned} \frac{\mathrm {d}\Psi _+(k)}{\mathrm {d}k}=\frac{\mathrm {d}\Psi _-(k)}{\mathrm {d}k}v(k_0). \end{aligned}$$
(3.84)

Combining (3.83) and (3.84), we obtain

$$\begin{aligned} \frac{\mathrm {d}\Psi _+(k)}{\mathrm {d}k}+\frac{1}{2}ik\sigma _4\Psi _+(k)=\left( \frac{\mathrm {d}\Psi _-(k)}{\mathrm {d}k}+\frac{1}{2}ik\sigma _4\Psi _-(k)\right) v(k_0). \end{aligned}$$
(3.85)

Then \((\mathrm {d}\Psi /\mathrm {d}k+\frac{1}{2}ik\sigma _4\Psi )\Psi ^{-1}\) has no jump discontinuity along each of the four rays. In addition, from the relation between \(\Psi (k)\) and H(k), we have

$$\begin{aligned} \left( \frac{\mathrm {d}\Psi (k)}{\mathrm {d}k}+\frac{1}{2}ik\sigma _4\Psi (k)\right) \Psi ^{-1}(k)=&\frac{\mathrm {d}H(k)}{\mathrm {d}k}H^{-1}(k)-\frac{ik}{2}H(k)\sigma _4 H^{-1}(k)\\&+\frac{i\nu }{k}H(k)\sigma _4 H^{-1}(k)+\frac{1}{2}ik\sigma _4\\ =&O(k^{-1})+\frac{i}{2}(\delta ^0)^{-\sigma _4}[\sigma _4, M^0_1](\delta ^0)^{\sigma _4}. \end{aligned}$$

It follows from Liouville’s theorem that

$$\begin{aligned} \frac{\mathrm {d}\Psi (k)}{\mathrm {d}k}+\frac{1}{2}ik\sigma _4\Psi (k)=\beta \Psi (k), \end{aligned}$$
(3.86)

where

$$\begin{aligned} \beta =\frac{i}{2}(\delta ^0)^{-\sigma _4}[\sigma _4, M^0_1](\delta ^0)^{\sigma _4} =\left( \begin{array}{cc} 0 &{} \beta _{12}\\ \beta _{21} &{} 0 \end{array}\right) . \end{aligned}$$

Moreover,

$$\begin{aligned} (M_1^0)_{12}=-i(\delta ^0)^2\beta _{12}. \end{aligned}$$
(3.87)
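Relation (3.87) can be read off directly from the definition of \(\beta \): assuming the block convention \(\sigma _4=\mathrm {diag}(I_{2\times 2},-I_{2\times 2})\), the (1, 2) blocks satisfy

$$\begin{aligned} \left[ \sigma _4, M^0_1\right] _{12}=2(M^0_1)_{12},\qquad \left( (\delta ^0)^{-\sigma _4}X(\delta ^0)^{\sigma _4}\right) _{12}=(\delta ^0)^{-2}X_{12}, \end{aligned}$$

so \(\beta _{12}=\frac{i}{2}(\delta ^0)^{-2}\cdot 2(M^0_1)_{12}=i(\delta ^0)^{-2}(M^0_1)_{12}\), which is exactly (3.87).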

The RH problem (3.78) shows that

$$\begin{aligned} \sigma _4\Gamma _1(M^0(k^*))^\dag \Gamma _1\sigma _4=(M^0(k))^{-1}, \end{aligned}$$
(3.88)

which implies that \(\beta _{12}=\sigma _1\beta _{21}^\dag \sigma _1\). Set

$$\begin{aligned} \Psi (k) =\left( \begin{array}{cc} \Psi _{11}(k) &{} \Psi _{12}(k)\\ \Psi _{21}(k) &{} \Psi _{22}(k)\\ \end{array}\right) , \end{aligned}$$

where the \(\Psi _{ij}(k)\) \((i,j=1,2)\) are \(2\times 2\) matrices. From (3.86) and its derivative, we obtain

$$\begin{aligned}&\frac{\mathrm {d}^{2}\Psi _{11}(k)}{\mathrm {d}k^{2}}+\left[ (\frac{1}{2}i+\frac{1}{4}k^2)I_{2\times 2}-\beta _{12}\beta _{21}\right] \Psi _{11}(k)=0, \end{aligned}$$
(3.89)
$$\begin{aligned}&\beta _{12}\Psi _{21}(k)=\frac{\mathrm {d}\Psi _{11}(k)}{\mathrm {d}k}+\frac{1}{2}ik\Psi _{11}(k),\end{aligned}$$
(3.90)
$$\begin{aligned}&\frac{\mathrm {d}^{2}\beta _{12}\Psi _{22}(k)}{\mathrm {d}k^{2}}+\left[ (-\frac{1}{2}i+\frac{1}{4}k^2)I_{2\times 2}-\beta _{12}\beta _{21}\right] \beta _{12}\Psi _{22}(k)=0,\end{aligned}$$
(3.91)
$$\begin{aligned}&\Psi _{12}(k)=(\beta _{12}\beta _{21})^{-1}\left( \frac{\mathrm {d}\beta _{12}\Psi _{22}(k)}{\mathrm {d}k}-\frac{1}{2}ik\beta _{12}\Psi _{22}(k)\right) . \end{aligned}$$
(3.92)

Then \(\beta _{12}\) has the symmetry relation \(\beta _{12}={\mathscr {T}}\beta _{12}\), which can be inferred from the RH problem (3.78). For convenience, we assume that the \(2\times 2\) matrices \(\beta _{12}\) and \(\beta _{12}\beta _{21}\) take the forms

$$\begin{aligned} \beta _{12}=\left( \begin{array}{cc} A &{} B \\ -B &{} A \\ \end{array}\right) ,\quad \beta _{12}\beta _{21}= \left( \begin{array}{cc} {{\tilde{A}}} &{} {{\tilde{B}}} \\ -{{\tilde{B}}} &{} {{\tilde{A}}} \\ \end{array}\right) . \end{aligned}$$
(3.93)

It is clear that \({{\tilde{A}}}\) and \({{\tilde{B}}}\) are real. Set \(\Psi _{11}=(\Psi _{11}^{(ij)})_{2\times 2}\). We consider the (1, 1) and (2, 1) entries of Eq. (3.89), which are

$$\begin{aligned} \frac{\mathrm {d}^{2}\Psi _{11}^{(11)}(k)}{\mathrm {d}k^{2}}+(\frac{1}{2}i+\frac{1}{4}k^2)\Psi _{11}^{(11)}(k)-{{\tilde{A}}}\Psi _{11}^{(11)}(k)-{{\tilde{B}}}\Psi _{11}^{(21)}(k)=0,\end{aligned}$$
(3.94)
$$\begin{aligned} \frac{\mathrm {d}^{2}\Psi _{11}^{(21)}(k)}{\mathrm {d}k^{2}}+(\frac{1}{2}i+\frac{1}{4}k^2)\Psi _{11}^{(21)}(k)+\tilde{B}\Psi _{11}^{(11)}(k)-{{\tilde{A}}}\Psi _{11}^{(21)}(k)=0. \end{aligned}$$
(3.95)

Let s satisfy \((s-{{\tilde{A}}})^2+{{\tilde{B}}}^2=0\); then (3.94) and (3.95) combine into

$$\begin{aligned}&\frac{\mathrm {d}^{2}}{\mathrm {d}k^{2}}[(s-\tilde{A})\Psi _{11}^{(21)}(k)-\tilde{B}\Psi _{11}^{(11)}(k)]+(\frac{1}{2}i\nonumber \\&\quad +\frac{1}{4}k^2-s)[(s-\tilde{A})\Psi _{11}^{(21)}(k)-{{\tilde{B}}}\Psi _{11}^{(11)}(k)]=0. \end{aligned}$$
(3.96)
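Explicitly, substituting \(\zeta =e^{\frac{\pi i}{4}}k\) into (3.96), so that \(\frac{\mathrm {d}^{2}}{\mathrm {d}k^{2}}=i\frac{\mathrm {d}^{2}}{\mathrm {d}\zeta ^{2}}\) and \(k^2=-i\zeta ^2\), and then dividing by i, one finds for \(g(\zeta )=(s-{{\tilde{A}}})\Psi _{11}^{(21)}-{{\tilde{B}}}\Psi _{11}^{(11)}\) that

$$\begin{aligned} \frac{\mathrm {d}^{2}g(\zeta )}{\mathrm {d}\zeta ^{2}}+\left( is+\frac{1}{2}-\frac{\zeta ^{2}}{4}\right) g(\zeta )=0, \end{aligned}$$

which is Weber's equation below with \(a=is\).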

It is obvious that the above equation can be transformed into Weber's equation by a simple change of variables. As is well known, the standard parabolic-cylinder functions \(D_{a}(\zeta )\) and \(D_{a}(-\zeta )\) constitute a fundamental set of solutions of Weber's equation

$$\begin{aligned} \frac{\mathrm {d}^{2}g(\zeta )}{\mathrm {d}\zeta ^{2}}+\left( a+\frac{1}{2}-\frac{\zeta ^{2}}{4}\right) g(\zeta )=0, \end{aligned}$$

whose general solution can be written as

$$\begin{aligned} g(\zeta )=C_{1}D_{a}(\zeta )+C_{2}D_{a}(-\zeta ), \end{aligned}$$

where \(C_1\) and \(C_2\) are two arbitrary constants. Set \(a=is\),

$$\begin{aligned} (s-{{\tilde{A}}})\Psi _{11}^{(21)}(k)-\tilde{B}\Psi _{11}^{(11)}(k)=c_1D_a(e^{\frac{\pi i}{4}}k)+c_2D_a(e^{-\frac{3\pi i}{4}}k), \end{aligned}$$
(3.97)

where \(c_1\) and \(c_2\) are constants. First, the solution \(c_1D_a(e^{\frac{\pi i}{4}}k)+c_2D_a(e^{-\frac{3\pi i}{4}}k)\) is nontrivial; otherwise, the large-k expansion of \(\Psi (k)\) would fail. Besides, notice that as \(k\rightarrow \infty \),

$$\begin{aligned} \Psi _{11}(k)\rightarrow k^{i\nu }e^{-\frac{1}{4}ik^2}I_{2\times 2}. \end{aligned}$$
(3.98)

From [45], the parabolic-cylinder function has the asymptotic expansion

$$\begin{aligned} D_{a}(\zeta )= {\left\{ \begin{array}{ll} \zeta ^{a}e^{-\frac{\zeta ^{2}}{4}}(1+O(\zeta ^{-2})), &{} \left| \arg {\zeta }\right|<\frac{3\pi }{4},\\ \zeta ^{a}e^{-\frac{\zeta ^{2}}{4}}(1+O(\zeta ^{-2}))-\frac{\sqrt{2\pi }}{\Gamma (-a)}e^{a\pi i+\frac{\zeta ^{2}}{4}}\zeta ^{-a-1}(1+O(\zeta ^{-2})), &{} \frac{\pi }{4}<\arg {\zeta }<\frac{5\pi }{4},\\ \zeta ^{a}e^{-\frac{\zeta ^{2}}{4}}(1+O(\zeta ^{-2}))-\frac{\sqrt{2\pi }}{\Gamma (-a)}e^{-a\pi i+\frac{\zeta ^{2}}{4}}\zeta ^{-a-1}(1+O(\zeta ^{-2})), &{} -\frac{5\pi }{4}<\arg {\zeta }<-\frac{\pi }{4}, \end{array}\right. }\nonumber \\ \end{aligned}$$
(3.99)

as \(\zeta \rightarrow \infty \), where \(\Gamma (\cdot )\) is the Gamma function. Along the line \(k=\sigma e^{-\frac{\pi i}{4}}(\sigma >0)\), using the asymptotic expansions (3.98) and (3.99) to evaluate the left-hand and right-hand sides of (3.97), we get \(c_1=-\tilde{B}k^{i\nu -a}e^{\frac{-a\pi i}{4}}\) and \(c_2=0\). Meanwhile, \(\Psi _{11}^{(11)}(k)\) and \(\Psi _{11}^{(21)}(k)\) are not linearly dependent because they satisfy the asymptotic expansion (3.98). The equality (3.97) shows that the coefficient of \(\Psi _{11}^{(21)}(k)\) is unique, which means that s is unique. Combined with the definition of s, there is a unique solution of the equation

$$\begin{aligned} (s-{{\tilde{A}}}-i{{\tilde{B}}})(s-{{\tilde{A}}}+i{{\tilde{B}}})=0, \end{aligned}$$
(3.100)

which leads to \({{\tilde{B}}}=0\). Hence \(\beta _{12}\beta _{21}={{\tilde{A}}}I_{2\times 2}\). Then (3.89) becomes

$$\begin{aligned} \frac{\mathrm {d}^{2}}{\mathrm {d}k^{2}}\left( \begin{array}{cc} \Psi _{11}^{(11)} &{} \Psi _{11}^{(12)} \\ \Psi _{11}^{(21)} &{} \Psi _{11}^{(22)} \\ \end{array}\right) +(\frac{1}{2}i+\frac{1}{4}k^2-{{\tilde{A}}})\left( \begin{array}{cc} \Psi _{11}^{(11)} &{} \Psi _{11}^{(12)} \\ \Psi _{11}^{(21)} &{} \Psi _{11}^{(22)} \\ \end{array}\right) =0. \end{aligned}$$
(3.101)

It can be seen that \(\Psi _{11}^{(11)}\), \(\Psi _{11}^{(12)}\), \(\Psi _{11}^{(21)}\) and \(\Psi _{11}^{(22)}\) all satisfy the same equation. Setting \({{\tilde{a}}}=i{{\tilde{A}}}\), similarly to (3.97), \(\Psi _{11}^{(12)}\) can be expressed as a linear combination of \(D_{{{\tilde{a}}}}(e^{\frac{\pi i}{4}}k)\) and \(D_{\tilde{a}}(e^{-\frac{3\pi i}{4}}k)\). Noticing that \(\Psi _{11}^{(12)}\rightarrow 0\) as \(k\rightarrow \infty \) and recalling the asymptotic expansion (3.99), it is easy to see that \(\Psi _{11}^{(12)}=0\). A similar computation shows that \(\Psi _{11}^{(21)}=0\). Then \(\Psi _{11}(k)\) is a diagonal matrix and

$$\begin{aligned} \Psi _{11}(k)=\left[ c_1^{(1)}D_{{{\tilde{a}}}}(e^{\frac{\pi i}{4}}k)+c_2^{(1)}D_{{{\tilde{a}}}}(e^{-\frac{3\pi i}{4}}k)\right] I_{2\times 2}, \end{aligned}$$
(3.102)

where \(c_1^{(1)}\) and \(c_2^{(1)}\) are constants. A similar analysis can be applied to \(\Psi _{22}(k)\) and we have

$$\begin{aligned} A\Psi _{22}(k)=\left[ c_1^{(2)}D_{-{{\tilde{a}}}}(e^{-\frac{\pi i}{4}}k)+c_2^{(2)}D_{-{{\tilde{a}}}}(e^{\frac{3\pi i}{4}}k)\right] I_{2\times 2}, \end{aligned}$$
(3.103)

where \(c_1^{(2)}\) and \(c_2^{(2)}\) are constants. We first consider the case \(\arg {k}\in (-\frac{\pi }{4},\frac{\pi }{4})\). Notice that as \(k\rightarrow \infty \),

$$\begin{aligned} \Psi _{11}(k)k^{-i\nu }e^{\frac{ik^{2}}{4}}\rightarrow I_{2\times 2}, \quad \Psi _{22}(k)k^{i\nu }e^{-\frac{ik^{2}}{4}}\rightarrow I_{2\times 2}. \end{aligned}$$

Then we arrive at

$$\begin{aligned}&\Psi _{11}(k)=e^{\frac{\pi \nu }{4}}D_{{{\tilde{a}}}}(e^{\frac{\pi i}{4}}k)I_{2\times 2}, \quad {{\tilde{a}}}=i\nu ,\\&\Psi _{22}(k)=e^{\frac{\pi \nu }{4}}D_{-{{\tilde{a}}}}(e^{-\frac{\pi i}{4}}k)I_{2\times 2}. \end{aligned}$$

Besides, the parabolic-cylinder function satisfies [5]

$$\begin{aligned} \frac{\mathrm {d}D_{a}(\zeta )}{\mathrm {d}\zeta }+\frac{\zeta }{2}D_{a}(\zeta )-aD_{a-1}(\zeta )=0. \end{aligned}$$
(3.104)

Then we have

$$\begin{aligned} \Psi _{21}(k)=\beta _{12}^{-1}{{\tilde{a}}}e^{\frac{\pi \nu }{4}}e^{\frac{\pi i}{4}}D_{{{\tilde{a}}}-1}(e^{\frac{\pi i}{4}}k). \end{aligned}$$
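The recurrence (3.104) and the leading behavior in (3.99) can be sanity-checked numerically. For a nonnegative integer order n one has the classical identity \(D_n(\zeta )=2^{-n/2}e^{-\zeta ^2/4}H_n(\zeta /\sqrt{2})\), with \(H_n\) the physicists' Hermite polynomial; the following pure-Python sketch (illustrative only: real \(\zeta \), integer order) verifies both facts in this special case:

```python
import math

def hermite(n, x):
    # physicists' Hermite polynomial H_n(x) via the three-term recurrence
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def D(n, z):
    # parabolic cylinder function D_n(z) for integer n >= 0
    return 2.0 ** (-n / 2) * math.exp(-z * z / 4) * hermite(n, z / math.sqrt(2.0))

def dD(n, z, h=1e-5):
    # central-difference approximation of D_n'(z)
    return (D(n, z + h) - D(n, z - h)) / (2 * h)

# recurrence (3.104): D_a'(z) + (z/2) D_a(z) - a D_{a-1}(z) = 0
for n in (1, 2, 5):
    for z in (-1.3, 0.4, 2.0):
        assert abs(dD(n, z) + 0.5 * z * D(n, z) - n * D(n - 1, z)) < 1e-6

# leading term of (3.99): D_a(z) ~ z^a e^{-z^2/4} for |arg z| < 3*pi/4
for n in (2, 4):
    near = D(n, 5.0) / (5.0 ** n * math.exp(-5.0 ** 2 / 4))
    far = D(n, 30.0) / (30.0 ** n * math.exp(-30.0 ** 2 / 4))
    assert abs(far - 1.0) < abs(near - 1.0)  # the ratio tends to 1
    assert abs(far - 1.0) < 1e-2
print("checks passed")
```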

For \(\arg {k}\in (\frac{\pi }{4},\frac{3\pi }{4})\) and \(k\rightarrow \infty \),

$$\begin{aligned} \Psi _{11}(k)k^{-i\nu }e^{\frac{ik^{2}}{4}}\rightarrow I_{2\times 2}, \quad \Psi _{22}(k)k^{i\nu }e^{-\frac{ik^{2}}{4}}\rightarrow I_{2\times 2}. \end{aligned}$$

We obtain

$$\begin{aligned}&\Psi _{11}(k)=e^{-\frac{3\pi \nu }{4}}D_{{{\tilde{a}}}}(e^{-\frac{3\pi i}{4}}k)I_{2\times 2}, \quad {{\tilde{a}}}=i\nu ,\\&\Psi _{22}(k)=e^{\frac{\pi \nu }{4}}D_{-{{\tilde{a}}}}(e^{-\frac{\pi i}{4}}k)I_{2\times 2}, \end{aligned}$$

which imply

$$\begin{aligned} \Psi _{21}(k)=\beta _{12}^{-1}\tilde{a}e^{-\frac{3\pi \nu }{4}}e^{-\frac{3\pi i}{4}}D_{\tilde{a}-1}(e^{-\frac{3\pi i}{4}}k). \end{aligned}$$

Along the ray \(\arg k=\frac{\pi }{4}\), one infers that

$$\begin{aligned} \Psi _{+}(k)=\Psi _{-}(k) \left( \begin{array}{cc} I_{2\times 2} &{} 0\\ \gamma ^*(k_0) &{} I_{2\times 2}\\ \end{array}\right) . \end{aligned}$$
(3.105)

Comparing the (2, 1) entries of the jump relation above, we obtain

$$\begin{aligned} \beta _{12}^{-1}{{\tilde{a}}}e^{\frac{\pi (i+\nu )}{4}}D_{\tilde{a}-1}(e^{\frac{\pi i}{4}}k)=e^{\frac{\pi \nu }{4}}D_{-\tilde{a}}(e^{\frac{3\pi i}{4}}k)\gamma ^\dag (k_0)+\beta _{12}^{-1}\tilde{a}e^{-\frac{\pi (3\nu +3i)}{4}}D_{{{\tilde{a}}}-1}(e^{-\frac{3\pi i}{4}}k). \end{aligned}$$

The parabolic-cylinder function satisfies [5]

$$\begin{aligned} D_{a}(\zeta )=\frac{\Gamma (a+1)}{\sqrt{2\pi }}\left( e^{\frac{1}{2}a\pi i}D_{-a-1}(i\zeta )+e^{-\frac{1}{2}a\pi i}D_{-a-1}(-i\zeta )\right) . \end{aligned}$$
(3.106)
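As a consistency check of (3.106) at \(a=0\), using the classical identities \(D_0(\zeta )=e^{-\frac{\zeta ^2}{4}}\) and \(D_{-1}(\zeta )=\sqrt{\frac{\pi }{2}}\,e^{\frac{\zeta ^2}{4}}\mathrm {erfc}(\frac{\zeta }{\sqrt{2}})\), the right-hand side becomes

$$\begin{aligned} \frac{\Gamma (1)}{\sqrt{2\pi }}\left( D_{-1}(i\zeta )+D_{-1}(-i\zeta )\right) =\frac{1}{\sqrt{2\pi }}\sqrt{\frac{\pi }{2}}\,e^{-\frac{\zeta ^2}{4}}\left( \mathrm {erfc}\Big (\frac{i\zeta }{\sqrt{2}}\Big )+\mathrm {erfc}\Big (-\frac{i\zeta }{\sqrt{2}}\Big )\right) =e^{-\frac{\zeta ^2}{4}}=D_0(\zeta ), \end{aligned}$$

since \(e^{\frac{(\pm i\zeta )^2}{4}}=e^{-\frac{\zeta ^2}{4}}\) and \(\mathrm {erfc}(w)+\mathrm {erfc}(-w)=2\).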

Using (3.106), we can express \(D_{-{{\tilde{a}}}}(e^{\frac{3\pi i}{4}}k)\) in terms of \(D_{{{\tilde{a}}}-1}(e^{\frac{\pi i}{4}}k)\) and \(D_{\tilde{a}-1}(e^{-\frac{3\pi i}{4}}k)\). By matching the coefficients of these two independent functions, we obtain

$$\begin{aligned} \beta _{12}=\frac{\nu \sqrt{2\pi }e^{-\frac{\pi \nu }{2}}e^{\frac{3\pi i}{4}}}{\Gamma (-i\nu +1)\det \gamma ^*(k_0)} \left( \begin{array}{cc} \gamma _{22}^*(k_0) &{} -\gamma _{12}^*(k_0)\\ -\gamma _{21}^*(k_0) &{} \gamma _{11}^*(k_0) \end{array}\right) . \end{aligned}$$
(3.107)

Finally, from (3.81), (3.87) and (3.107), we have the main result of this paper as follows:

Theorem 3.4

Let \((q_{1}(x,t),q_{2}(x,t))\) be the solution of the Cauchy problem for the new type CNLS Eq. (1.1) with initial values \(q_1^0(x)\), \(q_2^0(x)\in {\mathscr {S}}({\mathbb {R}})\). Then, for \(|x/t|<C\), the long-time asymptotics of the solution to the Cauchy problem of the new type CNLS equation has the form

$$\begin{aligned} (q_{1}(x,t),q_{2}(x,t))=\frac{\sqrt{\pi }(\delta ^0)^2e^{-\frac{\pi \nu }{2}}e^{\frac{-3\pi i}{4}}}{\sqrt{t}\Gamma (-i\nu )\det \gamma ^*(k_0)}(\gamma _{11}^*(k_0),-\gamma _{12}^*(k_0))+O(\frac{\log t}{t}),\nonumber \\ \end{aligned}$$
(3.108)

where C is a fixed constant, \(\Gamma (\cdot )\) is the Gamma function, \(\gamma _{ij}(k)\) is the (i, j) entry of the matrix function \(\gamma (k)\) defined in (2.24) and satisfies (2.35), and

$$\begin{aligned}&\delta ^0=e^{2itk_0^2}(8t)^{-\frac{i\nu }{2}}e^{\chi (k_0)},\quad k_0=-\frac{x}{4t},\\&\nu =-\frac{1}{2\pi }\log (1+\det (\gamma (k_0)\gamma ^*(k_0))+\mathrm {tr}(\gamma (k_0)\gamma ^*(k_0))),\\&\begin{aligned} \chi (k_0)=&\frac{1}{2\pi i}\Big [\int _{k_0-1}^{k_0}\log \left( \frac{1+\det (\gamma (\xi )\gamma ^*(\xi ^*))+\mathrm {tr}(\gamma (\xi )\gamma ^*(\xi ^*))}{1+\det (\gamma (k_0)\gamma ^*(k_0))+\mathrm {tr}(\gamma (k_0)\gamma ^*(k_0))}\right) \, \frac{\mathrm {d}\xi }{\xi -k_0}\\&+\int _{-\infty }^{k_0-1}\log \left( 1+\det (\gamma (\xi )\gamma ^*(\xi ^*))+\mathrm {tr}(\gamma (\xi )\gamma ^*(\xi ^*))\right) \, \frac{\mathrm {d}\xi }{\xi -k_0}\Big ]. \end{aligned} \end{aligned}$$