1 Introduction

Since the work of Deift and Zhou [9], the nonlinear steepest descent method (also called the Deift–Zhou method) has become a powerful tool for analyzing the long-time asymptotic behavior of solutions of nonlinear integrable equations. By this method, the long-time asymptotics of solutions of many integrable nonlinear equations associated with \(2\times 2\) matrix spectral problems have been obtained, such as the KdV equation, the nonlinear Schrödinger equation, the sine-Gordon equation, the derivative nonlinear Schrödinger equation, the Camassa–Holm equation and so on [2, 3, 6, 8, 10, 16, 20, 21, 23, 28, 29, 32, 37]. A natural idea is to generalize the nonlinear steepest descent method to nonlinear integrable equations associated with higher-order matrix Lax pairs, which is often difficult. The essential difference between the \(2\times 2\) case and the higher-order case is that the former leads to a simple scalar Riemann–Hilbert (RH) problem, while the latter leads to a complicated matrix RH problem. In general, the solution of a matrix RH problem cannot be found in explicit form, whereas the scalar case can be solved explicitly by the Plemelj formula [1]. Fortunately, the nonlinear steepest descent method has been successfully generalized to study the long-time asymptotic behavior of solutions of integrable nonlinear evolution equations associated with \(3\times 3\) matrix spectral problems, for example, the Degasperis–Procesi equation, the coupled nonlinear Schrödinger equation and the Sasa–Satsuma equation [5, 7, 11, 15, 27].

The main goal of the present paper is to extend the nonlinear steepest descent method to study the long-time asymptotic behavior for the Cauchy problem of the three-component derivative nonlinear Schrödinger (DNLS) equation

$$\begin{aligned} {\left\{ \begin{array}{ll} iq_{1t}=q_{1xx}+i[(|q_1|^2+|q_2|^2+|q_3|^2)q_1]_x,\\ iq_{2t}=q_{2xx}+i[(|q_1|^2+|q_2|^2+|q_3|^2)q_2]_x,\\ iq_{3t}=q_{3xx}+i[(|q_1|^2+|q_2|^2+|q_3|^2)q_3]_x,\\ q_1(x,0)=q_1^0(x),\quad q_2(x,0)=q_2^0(x),\quad q_3(x,0)=q_3^0(x), \end{array}\right. } \end{aligned}$$
(1.1)

associated with a \(4\times 4\) Lax pair, where \(q_1(x,t)\), \(q_2(x,t)\) and \(q_3(x,t)\) are three complex potentials, and the initial value functions \(q_1^0(x)\), \(q_2^0(x)\) and \(q_3^0(x)\) lie in the Schwartz space \({\mathscr {S}}({\mathbb {R}})=\{f(x)\in C^\infty ({\mathbb {R}}):\sup _{x\in {\mathbb {R}}}|x^\alpha \partial ^\beta f(x)|<\infty ,\forall \alpha ,\beta \in {\mathbb {N}}\}\). The initial values are assumed to be generic so that \(\det a(\lambda )\), defined in the following context, is nonzero in \(D_-\) (Fig. 1) and on its boundary. To our knowledge, there have been no results on the long-time asymptotic behavior for the Cauchy problem of the three-component DNLS equation, although there is a sizable literature on the equation itself. Gauge transformations among the generalised DNLS equations were derived, from which the inner relations of these equations were discussed [17]. Based on the theory of algebraic curves, explicit theta function representations of solutions of the derivative nonlinear Schrödinger equation were obtained [13] with the help of the Baker–Akhiezer functions and the meromorphic functions [12, 24, 34]. The multi-component hybrid nonlinear Schrödinger equations were studied in [14, 18]. The Darboux transformation of the three-component DNLS equation was constructed, from which various interactions of localized waves were derived with the help of special vector solutions generated from the corresponding Lax pair [25, 33, 36]. The unified transform method was applied to the initial-value problem for the vector DNLS equation [26]. By virtue of a gauge transformation, a new multi-component extension of the DNLS equation proposed by Kaup–Newell was also obtained [31]. On the one hand, since the three-component DNLS equation is associated with a \(4\times 4\) matrix spectral problem, the structure of the spectral problem and the spectral analysis become more complicated.
In particular, the construction of the basic RH problem differs from that for \(2\times 2\) spectral problems. Besides, the triangular decomposition of the initial jump matrix is a key step in the Deift–Zhou nonlinear steepest descent method. In the case of a \(2\times 2\) matrix spectral problem, one can introduce an extra scalar RH problem to obtain the triangular decompositions of the jump matrix, and the scalar RH problem can be solved by the Plemelj formula. But in the case of the three-component DNLS equation, we have to introduce an extra \(3\times 3\) matrix RH problem, which in general cannot be solved explicitly. Fortunately, we find that the determinant of the solution of the \(3\times 3\) matrix RH problem satisfies a scalar RH problem, and this determinant can replace the solution of the matrix RH problem within controlled error terms. On the other hand, the study of the long-time asymptotics for the three-component DNLS equation is a basic step toward the long-time asymptotics for the n-component DNLS equation.

The main result of this paper is expressed as follows:

Theorem 1.1

Let \((q_1(x,t),q_2(x,t),q_3(x,t))\) be the solution for the Cauchy problem of the three-component DNLS equation (1.1) with initial-value functions \(q_1^0\), \(q_2^0\), \(q_3^0\in {\mathscr {S}}({\mathbb {R}})\). Assume that the determinant of the matrix-valued spectral function \(a(\lambda )\) is nonzero in \(D_-\) (Fig. 1) and on its boundary. Then, for \(-\frac{x}{t}>C\) and \(t\rightarrow \infty \), the leading asymptotics of \((q_1(x,t),q_2(x,t),q_3(x,t))\) has the form

$$\begin{aligned} (q_1(x,t),q_2(x,t),q_3(x,t))=\frac{1}{2\sqrt{\pi tk_0}}e^{\Theta }{{\tilde{\nu }}}(k_0)\Gamma (-i{{\tilde{\nu }}}(k_0))\gamma (k_0)W_0+O\left( \frac{\log {t}}{tk_0}\right) , \end{aligned}$$
(1.2)

where C is a fixed positive constant, \(\Gamma (\cdot )\) is the Gamma function, the vector function \(\gamma (k)\) is defined in (2.46), and

$$\begin{aligned} k_0&=-\frac{x}{4t}, \quad {{\tilde{\nu }}}(k)=\frac{1}{2\pi }\log (1-k\vert \gamma (k)\vert ^2),\\ \Theta&=\frac{3\pi i}{4}+\frac{\pi {{\tilde{\nu }}}(k_0)}{2}-4itk_0^2+i{{\tilde{\nu }}}(k_0)\log (8t)+2\chi (k_0)-i\int _{-\infty }^{k_0}\frac{{{\tilde{\nu }}}(s)}{s}\, \mathrm {d}s,\\ W_0&=I-i\int _{-\infty }^{k_0}\frac{{{\tilde{\nu }}}(s)}{s(1-e^{-2\pi {{\tilde{\nu }}}(s)})}\gamma ^\dag (s)\gamma (s)\, \mathrm {d}s,\\ \chi (k_0)&=-\frac{1}{2\pi i}\left( \int _{k_0-1}^{k_0}\log \left( \frac{1-\xi \vert \gamma (\xi )\vert ^2}{1-k_0\vert \gamma (k_0)\vert ^2}\right) \, \frac{\mathrm {d}\xi }{\xi -k_0}\right. \\&\quad +\left. \int _{-\infty }^{k_0-1}\log \left( 1-\xi \vert \gamma (\xi )\vert ^2\right) \, \frac{\mathrm {d}\xi }{\xi -k_0}\right) . \end{aligned}$$
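The size of the leading term can be read off numerically. The sketch below uses a hypothetical scalar stand-in for \(|\gamma (k)|\) (the true \(\gamma (k)\) is a three-dimensional row vector built from the scattering data) together with illustrative values of \((x,t)\); the profile is Schwartz-class with \(\sup |\gamma |<1\), and the modulus of \(\Gamma (-i\nu )\) is evaluated via the standard identity \(|\Gamma (-i\nu )|^2=\pi /(\nu \sinh (\pi \nu ))\).

```python
import math

# Hypothetical scalar stand-in for |gamma(k)|; illustrative profile only.
def gamma_abs(k):
    return 0.5 * math.exp(-k * k)

def nu_tilde(k):
    # nu_tilde(k) = (1/(2*pi)) * log(1 - k*|gamma(k)|^2)
    return math.log(1.0 - k * gamma_abs(k) ** 2) / (2.0 * math.pi)

x, t = -100.0, 25.0                 # illustrative values with -x/t > 0
k0 = -x / (4.0 * t)                 # stationary point k0 = -x/(4t)
nu0 = nu_tilde(k0)

# |Gamma(-i*nu)|^2 = pi / (nu*sinh(pi*nu)), the standard modulus identity
abs_Gamma = math.sqrt(math.pi / (nu0 * math.sinh(math.pi * nu0)))

# modulus of the O(t^{-1/2}) term in (1.2), ignoring the bounded
# factors e^Theta and W0
amplitude = abs(nu0) * abs_Gamma * gamma_abs(k0) / (2.0 * math.sqrt(math.pi * t * k0))
print(k0, nu0, amplitude)
```

For this sample profile the amplitude decays like \(t^{-1/2}\), consistent with the \(O(\log t/(tk_0))\) error term being of lower order in \(t\).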

The present paper is organized as follows. In Sect. 2, we first deduce the basic RH problem with the aid of the inverse scattering method; the Lax pair of the three-component DNLS equation and the relevant matrices are written in block form. Then we introduce a transformation of the spectral parameter and obtain a reduced RH problem. In Sect. 3, we deal with the reduced RH problem via the nonlinear steepest descent method. By reorienting the contour, extending to an augmented contour, cutting and rescaling, we convert the RH problem to a model RH problem, which can be solved in terms of parabolic cylinder functions. Finally, we obtain the long-time asymptotics for the Cauchy problem of the three-component DNLS equation.

2 The Riemann–Hilbert problem

2.1 The basic Riemann–Hilbert problem

In this subsection, we shall derive the Volterra integral equations satisfied by the Jost solutions with removed exponents, together with a scattering matrix, from the Lax pair of the three-component DNLS equation (1.1). Then the Cauchy problem of the three-component DNLS equation turns into a basic RH problem. Let us consider a \(4\times 4\) matrix Lax pair of the three-component DNLS equation (1.1):

$$\begin{aligned} \psi _x=(i\lambda ^2\sigma +{{\tilde{U}}})\psi , \end{aligned}$$
(2.1a)
$$\begin{aligned} \psi _t=(2i\lambda ^4\sigma +{{\tilde{V}}})\psi , \end{aligned}$$
(2.1b)

where \(\psi \) is a matrix-valued function and \(\lambda \) is the spectral parameter, \(\sigma =\mathrm {diag}\{1,-1,-1,-1\}\),

$$\begin{aligned} {{\tilde{U}}}= & {} \left( \begin{array}{cccc} 0&{}\lambda q_1&{}\lambda q_2&{}\lambda q_3\\ \lambda q_1^*&{}0&{}0&{}0\\ \lambda q_2^*&{}0&{}0&{}0\\ \lambda q_3^*&{}0&{}0&{}0\\ \end{array}\right) , \end{aligned}$$
(2.2)
$$\begin{aligned} {{\tilde{V}}}= & {} \left( \begin{array}{cccc} i\lambda ^2(|q_1|^2+|q_2|^2+|q_3|^2) &{}{{\tilde{V}}}_{12} &{}{{\tilde{V}}}_{13} &{}{{\tilde{V}}}_{14}\\ {{\tilde{V}}}_{21}&{}-i\lambda ^2|q_1|^2 &{}-i\lambda ^2q_2q_1^*&{}-i\lambda ^2q_3q_1^*\\ {{\tilde{V}}}_{31}&{}-i\lambda ^2q_1q_2^*&{}-i\lambda ^2|q_2|^2 &{}-i\lambda ^2q_3q_2^*\\ {{\tilde{V}}}_{41}&{}-i\lambda ^2q_1q_3^*&{}-i\lambda ^2q_2q_3^*&{}-i\lambda ^2|q_3|^2 \end{array}\right) ,\nonumber \\ \end{aligned}$$
(2.3)
$$\begin{aligned} {{\tilde{V}}}_{1,j+1}= & {} 2\lambda ^3q_j+\lambda [q_j(|q_1|^2+|q_2|^2+|q_3|^2)-iq_{jx}],\quad (j=1,2,3),\end{aligned}$$
(2.4)
$$\begin{aligned} \tilde{V}_{j+1,1}= & {} 2\lambda ^3q_j^*+\lambda [q_j^*(|q_1|^2+|q_2|^2+|q_3|^2)+iq_{jx}^*], \quad (j=1,2,3). \end{aligned}$$
(2.5)

Throughout this paper, we stipulate the following notation: any \(4\times 4\) matrix \(A=(a_{jk})\) is written in a \(2\times 2\) block form, that is

$$\begin{aligned} A=\left( \begin{array}{cc} A_{11} &{} A_{12}\\ A_{21} &{} A_{22}\\ \end{array}\right) , \end{aligned}$$
(2.6)

where \(A_{11}\) is scalar, \(A_{12}\) is a three-dimensional row vector, \(A_{21}\) is a three-dimensional column vector and \(A_{22}\) is a \(3\times 3\) matrix. For example, let \(q=(q_1,q_2,q_3)\); then we can rewrite \({{\tilde{U}}}\) of (2.2) in the block form

$$\begin{aligned} {\tilde{U}}=\left( \begin{array}{cc} 0&{}\lambda q\\ \lambda q^\dag &{}0_{3\times 3}\\ \end{array}\right) . \end{aligned}$$
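The block convention (2.6) can be checked concretely. The snippet below builds \({\tilde{U}}\) both from the \(2\times 2\) block form and entrywise from (2.2) and confirms they agree; the values of \(q\) and \(\lambda \) are arbitrary samples, not data from the paper.

```python
import numpy as np

# Illustration of the 2x2 block convention (2.6): A11 scalar, A12 a 1x3 row,
# A21 a 3x1 column, A22 a 3x3 block.  Sample values only.
lam = 1.5
q = np.array([[0.3 + 0.1j, -0.2j, 0.5]])          # row vector q = (q1, q2, q3)

U_tilde = np.block([
    [np.zeros((1, 1)), lam * q],                  # [0, lam*q]
    [lam * q.conj().T, np.zeros((3, 3))],         # [lam*q^dagger, 0_{3x3}]
])

# Entrywise form of (2.2) for comparison
U_entry = np.zeros((4, 4), dtype=complex)
U_entry[0, 1:] = lam * q
U_entry[1:, 0] = lam * q.conj().flatten()

print(np.allclose(U_tilde, U_entry))              # the two constructions agree
```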

Now we introduce a transformation that guarantees the solutions of the spectral problem approach the identity matrix as \(\lambda \rightarrow \infty \). For this purpose, we define

$$\begin{aligned} Y(x,t)=\left( \begin{array}{cc} e^{\int _{(-\infty ,0)}^{(x,t)}w_1(x^\prime ,t^\prime )} &{} 0\\ 0 &{} W(x,t)\\ \end{array}\right) , \end{aligned}$$
(2.7)

where

$$\begin{aligned} w_1(x,t)=-\frac{i}{2}qq^\dag \mathrm {d}x+\left( -\frac{3i}{4}|q|^4+\frac{1}{2}qq_x^\dag -\frac{1}{2}q_xq^\dag \right) \mathrm {d}t, \end{aligned}$$
(2.8)

and \(W(x,t)\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} W_x=\frac{i}{2}q^\dag q W,\\ W_t=\left( \frac{3i}{4}|q|^2q^\dag q-\frac{1}{2}q_x^\dag q+\frac{1}{2}q^\dag q_x\right) W \end{array}\right. } \end{aligned}$$
(2.9)

with the condition \(W(x,t)\rightarrow I\) as \(x\rightarrow -\infty \), \(t=0\). Then \(W(x,t)\) satisfies the integral equation

$$\begin{aligned} W(x,t)&=I+\int _{(-\infty ,0)}^{(x,t)}\frac{i}{2}q^\dag qW(x^\prime ,t^\prime )\mathrm {d}x^\prime \nonumber \\&\quad +\left( \frac{3i}{4}|q|^2q^\dag q-\frac{1}{2}q_x^\dag q+\frac{1}{2}q^\dag q_x\right) W(x^\prime ,t^\prime )\mathrm {d}t^\prime . \end{aligned}$$
(2.10)

In fact,

$$\begin{aligned} \left( -\frac{i}{2}qq^\dag \right) _t=\left( -\frac{3i}{4}|q|^4+\frac{1}{2}qq_x^\dag -\frac{1}{2}q_xq^\dag \right) _x,\quad W_{xt}=W_{tx}. \end{aligned}$$
(2.11)

A direct calculation shows that

$$\begin{aligned} Y_x=\left( \begin{array}{cc} -\frac{i}{2}qq^\dag &{} 0\\ 0 &{} \frac{i}{2}q^\dag q\\ \end{array}\right) Y\triangleq EY. \end{aligned}$$
(2.12)

Note that \(\mathrm {tr}E=0\), where \(\mathrm {tr}X\) denotes the trace of matrix X. Then

$$\begin{aligned} (\det Y)_x&=\mathrm {tr}((\mathrm {adj}Y)Y_x)\nonumber \\&=\mathrm {tr}((\mathrm {adj}Y)EY)\nonumber \\&=\det Y\mathrm {tr}(E)\nonumber \\&=0, \end{aligned}$$
(2.13)

where \(\mathrm {adj}X\) denotes the adjugate (classical adjoint) of matrix X. There is a similar result for \(Y_t\). Together with (2.11) and \(Y\rightarrow I\) as \(x\rightarrow -\infty \), we conclude that \(\det Y=1\), so Y is invertible. Let \(\mu =Y^{-1}\psi e^{-i\lambda ^2\sigma x-2i\lambda ^4\sigma t}\). A direct calculation shows that (2.1a) and (2.1b) can be written in the equivalent forms

$$\begin{aligned} \mu _x= & {} i\lambda ^2[\sigma ,\mu ]+U\mu ,\end{aligned}$$
(2.14a)
$$\begin{aligned} \mu _t= & {} 2i\lambda ^4[\sigma ,\mu ]+V\mu , \end{aligned}$$
(2.14b)

where \(e^{\sigma }=\mathrm {diag}\{e,e^{-1},e^{-1},e^{-1}\}\), \([\cdot ,\cdot ]\) is the commutator, \([\sigma ,\mu ]=\sigma \mu -\mu \sigma \) and

$$\begin{aligned} U=Y^{-1}{{\tilde{U}}}Y-Y^{-1}Y_x,\quad V=Y^{-1}{{\tilde{V}}}Y-Y^{-1}Y_t. \end{aligned}$$
(2.15)

Then U has the block form

$$\begin{aligned} U=\left( \begin{array}{cc} \frac{i}{2}qq^\dag &{}\lambda e^{-\int _{(-\infty ,0)}^{(x,t)}w_1}qW\\ \lambda W^{-1}q^\dag e^{\int _{(-\infty ,0)}^{(x,t)}w_1}&{}-\frac{i}{2}W^{-1}q^\dag qW\\ \end{array}\right) . \end{aligned}$$
(2.16)

The matrix Jost solutions with removed exponents \(\mu _\pm \) of Eq. (2.14a) are defined by the following Volterra integral equations:

$$\begin{aligned} \mu _\pm (\lambda ;x,t)=I+\int _{\pm \infty }^xe^{i\lambda ^2\sigma (x-\xi )}U(\lambda ;\xi ,t)\mu _\pm (\lambda ;\xi ,t)e^{-i\lambda ^2\sigma (x-\xi )} \mathrm {d}\xi , \end{aligned}$$
(2.17)

where I is the \(4\times 4\) identity matrix. Let \(\mu _\pm = (\mu _{\pm L},\mu _{\pm R})\), where \(\mu _{\pm L}\) denotes the first column of \(\mu _{\pm }\) and \(\mu _{\pm R}\) denotes the second, third and fourth columns of \(\mu _{\pm }\).
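The Volterra structure of (2.17) can be made concrete by successive approximation (the Neumann series used later in the analyticity argument). The sketch below runs a Picard iteration for a \(2\times 2\) toy analogue with \(\sigma =\mathrm {diag}(1,-1)\); the potential, \(\lambda \) and the grid are illustrative choices, not the paper's data.

```python
import numpy as np

# Picard (successive-approximation) scheme for a Volterra equation of the
# shape (2.17), on a 2x2 toy analogue.  Illustrative data only.
lam = 0.7
d = np.array([1.0, -1.0])                     # diagonal of sigma
xs = np.linspace(-8.0, 2.0, 201)
dx = xs[1] - xs[0]

def U_of(x):
    qx = 0.4 * np.exp(-x * x)                 # Schwartz-class sample potential
    return np.array([[0.0, lam * qx], [lam * qx, 0.0]], dtype=complex)

def conj_exp(s, A):
    # e^{i lam^2 sigma s} A e^{-i lam^2 sigma s}, computed entrywise
    ph = np.exp(1j * lam ** 2 * d * s)
    return (ph[:, None] * A) * ph[None, :].conj()

Us = [U_of(x) for x in xs]
mu = [np.eye(2, dtype=complex) for _ in xs]   # zeroth approximation mu = I
for _ in range(6):                            # Neumann-series iterations
    mu = [np.eye(2, dtype=complex)
          + sum((conj_exp(xs[j] - xs[i], Us[i] @ mu[i]) for i in range(j)),
                np.zeros((2, 2), dtype=complex)) * dx
          for j in range(len(xs))]

det_end = np.linalg.det(mu[-1])
print(abs(det_end - 1.0), abs(mu[-1][0, 1]))
```

Because the toy U is traceless, \(\det \mu \) stays near 1 up to discretization error, mirroring \(\det \mu _\pm =1\) in Lemma 2.1 below.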

Lemma 2.1

The matrix Jost solutions with removed exponents \(\mu _\pm (\lambda ;x,t)\) have the properties:

  1. (i)

    \(\mu _{-L}\) and \(\mu _{+R}\) are analytic in \(D_+=\{\lambda \in {\mathbb {C}}|\arg \lambda \in (\frac{\pi }{2},\pi )\cup (-\frac{\pi }{2},0)\}\);

  2. (ii)

    \(\mu _{+L}\) and \(\mu _{-R}\) are analytic in \(D_-=\{\lambda \in {\mathbb {C}}|\arg \lambda \in (0,\frac{\pi }{2})\cup (-\pi ,-\frac{\pi }{2})\}\);

  3. (iii)

    \(\det \mu _\pm =1\), and \(\mu _\pm \) satisfy the symmetry conditions \(\sigma \mu _\pm ^\dag (\lambda ^*)\sigma =\mu _\pm ^{-1}(\lambda )=\mu _\pm ^\dag (-\lambda ^*)\), where “†” denotes the Hermitian conjugate and the domains \(D_{\pm }\) are given in Fig. 1.

Proof

The analyticity of the Jost solutions with removed exponents \(\mu _\pm \) is determined by the sign of the real part of the exponent in the Volterra integral equation (2.17). According to the expression of the matrix U, it is not difficult to verify the symmetry relations

$$\begin{aligned} \sigma U^\dag (\lambda ^*)\sigma =-U(\lambda ),\quad U^\dag (-\lambda ^*)=-U(\lambda ). \end{aligned}$$
(2.18)

Using \((\mu _\pm ^{-1})_x=i\lambda ^2[\sigma ,\mu _\pm ^{-1}]-\mu _\pm ^{-1}U\), we can get the symmetry relations of \(\mu _\pm \). Finally, \(\det \mu _\pm =1\) follows from the tracelessness of U. \(\square \)

Fig. 1 The domains \(D_\pm \)

Because \(Y\mu _\pm e^{i\lambda ^2\sigma x+2i\lambda ^4\sigma t}\) satisfy the same differential equations (2.1a) and (2.1b), they are linearly related. Hence there exists a scattering matrix \(s(\lambda )\) such that

$$\begin{aligned} \mu _-=\mu _+ e^{i\lambda ^2\sigma x+2i\lambda ^4\sigma t}s(\lambda )e^{-i\lambda ^2\sigma x-2i\lambda ^4\sigma t},\quad \det s(\lambda )=1. \end{aligned}$$
(2.19)

The scattering matrix \(s(\lambda )\) has symmetry conditions

$$\begin{aligned} \sigma s^\dag (\lambda ^*)\sigma =s^{-1}(\lambda )=s^\dag (-\lambda ^*), \end{aligned}$$
(2.20)

which are derived from the symmetry conditions of \(\mu _\pm \) and (2.19). From the symmetry conditions of \(s(\lambda )\), we have

$$\begin{aligned} s_{11}(\lambda )=\det s_{22}^\dag (\lambda ^*),\quad s_{21}(\lambda )=\mathrm {adj}[s_{22}^\dag (\lambda ^*)]s_{12}^\dag (\lambda ^*), \end{aligned}$$
(2.21)

where \(\mathrm {adj}X\) denotes the adjugate of matrix X. Then \(s(\lambda )\) can be written in the block form

$$\begin{aligned} s(\lambda )= \left( \begin{array}{l@{\quad }l} \det a^\dag (\lambda ^*) &{} b(\lambda )\\ \mathrm {adj}[a^\dag (\lambda ^*)]b^\dag (\lambda ^*) &{} a(\lambda )\\ \end{array}\right) , \end{aligned}$$
(2.22)

where \(a(\lambda )\) is analytic in \(D_-\) and

$$\begin{aligned} a(\lambda )=a(-\lambda ),\quad b(\lambda )=-b(-\lambda ). \end{aligned}$$
(2.23)

Note For simplicity, we consider only the case \(\det a(\lambda )\ne 0\) in \(D_-\) in this paper. In fact, the zeros of \(\det a(\lambda )\) correspond to the residue conditions of the basic RH problem, so we exclude soliton-type phenomena.

A direct calculation shows from the evaluation of (2.19) at \(t=0\) that

$$\begin{aligned} s(\lambda )=\lim _{x\rightarrow +\infty }e^{-i\lambda ^2\sigma x}\mu _-(\lambda ;x,0)e^{i\lambda ^2\sigma x}, \end{aligned}$$
(2.24)

which implies

$$\begin{aligned} a(\lambda )&=I_{3\times 3}+\int _{-\infty }^{+\infty }(\lambda W^{-1}(\xi ,0)q^\dag (\xi ,0) e^{\int _{(-\infty ,0)}^{(\xi ,0)}w_1(x^\prime ,t^\prime )}\mu _{-12}(\xi ,0)\nonumber \\&\quad -\frac{i}{2}W^{-1}(\xi ,0)q^\dag (\xi ,0) q(\xi ,0)W(\xi ,0)\mu _{-22}(\xi ,0))\, \mathrm {d}\xi ,\end{aligned}$$
(2.25)
$$\begin{aligned} b(\lambda )&=\int _{-\infty }^{+\infty }e^{-2i\lambda ^2\xi }(-\frac{i}{2}q(\xi ,0)q^\dag (\xi ,0)\mu _{-12}(\xi ,0)\nonumber \\&\quad +\lambda e^{-\int _{(-\infty ,0)}^{(\xi ,0)}w_1(x^\prime ,t^\prime )}q(\xi ,0)W(\xi ,0)\mu _{-22}(\xi ,0))\, \mathrm {d}\xi . \end{aligned}$$
(2.26)

Theorem 2.1

Let \(N(\lambda ;x,t)\) be analytic for \(\lambda \in {\mathbb {C}}\backslash {{\hat{\Gamma }}}\) and satisfy the RH problem

$$\begin{aligned} {\left\{ \begin{array}{ll} N_+(\lambda ;x,t)=N_-(\lambda ;x,t)J_N(\lambda ;x,t), &{} \lambda \in {{\hat{\Gamma }}},\\ N(\lambda ;x,t)\rightarrow I, &{} \lambda \rightarrow \infty , \end{array}\right. } \end{aligned}$$
(2.27)

where the contour \({{\hat{\Gamma }}}=\{\lambda :\mathrm {Im}(\lambda ^2)=0\}\), \(N_+(\lambda ;x,t)\) and \(N_-(\lambda ;x,t)\) denote the limiting values as \(\lambda \) approaches the contour \({{\hat{\Gamma }}}\) from the left and the right along the contour, respectively,

$$\begin{aligned} J_N= & {} \left( \begin{array}{cc} 1-{{\tilde{\gamma }}}(\lambda ){{\tilde{\gamma }}}^\dag (\lambda ^*) &{} -{{\tilde{\gamma }}}(\lambda )e^{-2it\theta }\\ {{\tilde{\gamma }}}^\dag (\lambda ^*) e^{2it\theta } &{} I\\ \end{array}\right) , \end{aligned}$$
(2.28)
$$\begin{aligned} \theta (\lambda )= & {} -\frac{x}{t}\lambda ^2-2\lambda ^4,\quad {{\tilde{\gamma }}}(\lambda )=b(\lambda )a^{-1}(\lambda ), \end{aligned}$$
(2.29)

the function \({{\tilde{\gamma }}}(\lambda )\) lies in the Schwartz space and satisfies

$$\begin{aligned} {{\tilde{\gamma }}}(-\lambda )=-{{\tilde{\gamma }}}(\lambda ),\quad \sup _{\lambda \in {{\hat{\Gamma }}}}|{{\tilde{\gamma }}}(\lambda )|<1. \end{aligned}$$
(2.30)

Then the solution of the RH problem (2.27) exists and is unique, and

$$\begin{aligned} q(x,t)=(q_1(x,t),q_2(x,t),q_3(x,t))=e^{-\frac{i}{2}\int _{-\infty }^xq(x^\prime )q^\dag (x^\prime )\, \mathrm {d}x^\prime }{{\tilde{q}}}(x,t)W^{-1}(x,t), \end{aligned}$$
(2.31)

where \({{\tilde{q}}}(x,t)\) is a three-dimensional row vector given by

$$\begin{aligned} \tilde{q}(x,t)=-2i\lim _{\lambda \rightarrow \infty }\{\lambda (N(\lambda ;x,t))_{12}\} \end{aligned}$$
(2.32)

solves the Cauchy problem of the three-component DNLS equation (1.1).

Proof

Noting the analyticity of \(\mu _{\pm L}\) and \(\mu _{\pm R}\), we define sectionally analytic function \(N(\lambda ;x,t)\) by

$$\begin{aligned} N(\lambda ;x,t)= {\left\{ \begin{array}{ll} \left( \frac{\mu _{-L}(\lambda )}{\det a^\dag (\lambda ^*)},\mu _{+R}(\lambda )\right) , &{} \lambda \in D_+,\\ \left( \mu _{+L}(\lambda ),\mu _{-R}(\lambda )a^{-1}(\lambda )\right) , &{} \lambda \in D_-.\\ \end{array}\right. } \end{aligned}$$
(2.33)

The analytic properties of \(\mu _{\pm L}\) and \(\mu _{\pm R}\) are given in Lemma 2.1, and the analytic properties of \(\det a^\dag (\lambda ^*)\) and \(a(\lambda )\) can be derived from (2.19) and (2.22). The scattering relation (2.19) is equivalent to

$$\begin{aligned} {\left\{ \begin{array}{ll} \mu _{-L}=\mu _{+L}\det a^\dag (\lambda ^*)+\mu _{+R}\mathrm {adj}[a^\dag (\lambda ^*)]b^\dag (\lambda ^*)e^{2i\lambda ^2 x+4i\lambda ^4 t},\\ \mu _{-R}=\mu _{+L}b(\lambda )e^{-2i\lambda ^2 x-4i\lambda ^4 t}+\mu _{+R}a(\lambda ), \end{array}\right. } \end{aligned}$$

which means that

$$\begin{aligned} {\left\{ \begin{array}{ll} \mu _{-L}=\mu _{+L}(\det a^\dag (\lambda ^*)-b(\lambda )a^{-1}(\lambda )\mathrm {adj}[a^\dag (\lambda ^*)]b^\dag (\lambda ^*))\\ \qquad \qquad +\mu _{-R}a^{-1}(\lambda )\mathrm {adj}[a^\dag (\lambda ^*)]b^\dag (\lambda ^*)e^{2i\lambda ^2 x+4i\lambda ^4 t},\\ \mu _{+R}=-\mu _{+L}b(\lambda )a^{-1}(\lambda )e^{-2i\lambda ^2 x-4i\lambda ^4 t}+\mu _{-R}a^{-1}(\lambda ). \end{array}\right. } \end{aligned}$$

Combining the expressions of \(N(\lambda ;x,t)\) and \({{\tilde{\gamma }}}(\lambda )\), we can obtain the jump condition and the corresponding RH problem (2.27) by straightforward calculations. According to the Vanishing Lemma [1], the solution of the RH problem (2.27) exists and is unique because of the positive definiteness of the jump matrix \(J_N(\lambda ;x,t)\). Next we show that \({{\tilde{\gamma }}}(\lambda )\) lies in the Schwartz space because the initial values of the three-component DNLS equation lie in the Schwartz space [4, 22]; the argument uses the Volterra integral equation and a Neumann series-type successive approximation. In the case of the three-component DNLS equation, the matrices \(\sigma \) and U in the spectral problem are written in \(2\times 2\) block form. First, we introduce the following notation: for a \(4\times 4\) matrix X, define

$$\begin{aligned} X^{(d)}= \left( \begin{array}{cc} X^{(11)} &{} 0 \\ 0 &{} X^{(22)} \\ \end{array}\right) ,\quad X^{(o)}= \left( \begin{array}{cc} 0 &{} X^{(12)} \\ X^{(21)} &{} 0 \\ \end{array}\right) , \end{aligned}$$
(2.34)

where \(X^{(ij)}\) \((i,j=1,2)\) denotes the (ij) entry of the \(2\times 2\) block form of X. Generally, suppose \(\partial _x^j q\in {\mathscr {L}}^1\) for \(0\leqslant j\leqslant l\), \(1<l\). Let \(\mu _0, \mu _1,\ldots ,\mu _l\) satisfy

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}x}\mu _j-U^{(d)}\mu _{j}=U^{(o)}\mu _{j+1}+[i\sigma ,\mu _{j+2}],\quad j=0,1,2,\ldots , \end{aligned}$$
(2.35)

where \(\mu _0=I_{4\times 4}\). In particular, \(U^{(o)}=-[i\sigma ,\mu _1]\) and \(-U^{(d)}=U^{(o)}\mu _{1}+[i\sigma ,\mu _{2}]\). Considering the diagonal and off-diagonal parts of (2.35), respectively, we can obtain \(\mu _1,\mu _2,\ldots ,\mu _l\) successively. We introduce

$$\begin{aligned} \mu ^l=\sum _{j=0}^{l}\lambda ^{-j}\mu _j(x), \end{aligned}$$
(2.36)

which implies that

$$\begin{aligned} \partial _x(\mu ^{-1}\mu ^l)+i\lambda ^2[\sigma ,\mu ^{-1}\mu ^l]=\lambda ^{-l}\mu ^{-1}(X_1+\lambda X_2), \end{aligned}$$
(2.37)

where \(X_1\) and \(X_2\) are independent of \(\lambda \). Noticing the boundary conditions of \(\mu _\pm (x,\lambda )\), we define \(\mu _j(x)\rightarrow I\) as \(x\rightarrow \pm \infty \), respectively. It is easy to see that \(\mu _\pm ^{-1}\mu ^l\) satisfies an integral equation similar to (2.17). So for large \(\lambda \), we have

$$\begin{aligned} |\mu _\pm (x,\lambda )-\mu ^l(x,\lambda )|\leqslant C|\lambda |^{-l+1}.\quad (C\ \text {is a constant}) \end{aligned}$$
(2.38)

To sum up, there are matrix-valued functions \(\mu _j\) such that \(\mu _0=I_{4\times 4}\) and as \(\lambda \rightarrow \infty \),

$$\begin{aligned} \left| \mu _\pm (x,\lambda )-\sum _{j=0}^{l}\lambda ^{-j}\mu _j(x)\right| =O(|\lambda |^{-l+1}). \end{aligned}$$
(2.39)

Set

$$\begin{aligned} {{\tilde{J}}}=\left( \begin{array}{cc} I_{2\times 2}-{{\tilde{\gamma }}}(\lambda ){{\tilde{\gamma }}}^\dag (\lambda ^*) &{} -{{\tilde{\gamma }}}(\lambda ) \\ {{\tilde{\gamma }}}^\dag (\lambda ^*) &{} I_{2\times 2} \\ \end{array}\right) . \end{aligned}$$
(2.40)

Then

$$\begin{aligned} |{{\tilde{J}}}-I_{4\times 4}|\leqslant |\mu _+-\mu _-|\leqslant C|\lambda |^{-l+1}. \end{aligned}$$
(2.41)

According to the argument above, if q belongs to the Schwartz space, then \({{\tilde{J}}}-I_{4\times 4}\) is rapidly decreasing as \(\lambda \rightarrow \infty \). Similar to Lemma 6.30 in [4] (see also [22]), if each entry of \(x^lq\in {\mathscr {L}}^1\), then \(\partial _\lambda ^{j}({{\tilde{J}}}-I_{4\times 4})\rightarrow 0\) as \(\lambda \rightarrow \infty \) for \(0\leqslant j \leqslant l\). To sum up, if the initial values of the three-component DNLS equation lie in the Schwartz space, then \(|\lambda ^\alpha \partial ^\beta {{\tilde{\gamma }}}(\lambda )|\) is bounded for any \(\alpha ,\beta \in {\mathbb {N}}\), i.e., \({{\tilde{\gamma }}}(\lambda )\) lies in the Schwartz space. Notice that \(N(\lambda ;x,t)\) has the asymptotic expansion

$$\begin{aligned} N(\lambda ;x,t)=I+\frac{N_1(x,t)}{\lambda }+\frac{N_2(x,t)}{\lambda ^2}+O(\lambda ^{-3}). \end{aligned}$$
(2.42)

We can obtain (2.32) by substituting the asymptotic expansion into (2.17). \(\square \)

2.2 The transformation of spectral parameters

According to Eq. (2.29), the power of the spectral parameter \(\lambda \) in the function \(\theta (\lambda )\) is even, which makes the zeros of \(\mathrm {d}\theta /\mathrm {d}\lambda \) complicated. For simplicity, we introduce a new spectral parameter \(k=\lambda ^2\) as in [19]. We define a new matrix function \(M(k;x,t)\) by

$$\begin{aligned} M(k;x,t)= \left( \begin{array}{cc} 1 &{} 0\\ -\frac{i{{\tilde{q}}}^\dag (x,t)}{2} &{} I\\ \end{array}\right) \lambda ^{-\frac{1}{2}\sigma }N(\lambda ;x,t)\lambda ^{\frac{1}{2}\sigma },\quad \lambda \in {\mathbb {C}}\backslash {{\hat{\Gamma }}}. \end{aligned}$$
(2.43)

Theorem 2.2

\(M(k;x,t)\) satisfies the RH problem

$$\begin{aligned} {\left\{ \begin{array}{ll} M_+(k;x,t)=M_-(k;x,t)J(k;x,t), &{} k\in {\mathbb {R}},\\ M(k;x,t)\rightarrow I, &{} k\rightarrow \infty , \end{array}\right. } \end{aligned}$$
(2.44)

where

$$\begin{aligned} J&= \left( \begin{array}{cc} 1-k\gamma (k)\gamma ^\dag (k^*) &{} -\gamma (k)e^{-2it\theta }\\ k\gamma ^\dag (k^*) e^{2it\theta } &{} I\\ \end{array}\right) , \end{aligned}$$
(2.45)
$$\begin{aligned} \theta (k)&=-\frac{x}{t}k-2k^2,\quad \gamma (k)=\frac{{{\tilde{\gamma }}}(\lambda )}{\lambda }, \end{aligned}$$
(2.46)

and the function \({{\tilde{q}}}(x,t)\) is given by

$$\begin{aligned} {{\tilde{q}}}(x,t)=-2i\lim _{k\rightarrow \infty }(k(M(k,x,t))_{12}). \end{aligned}$$
(2.47)

Proof

Substituting (2.43) into the RH problem (2.27), we arrive at the new reduced RH problem (2.44). Under the map \(k=\lambda ^2\), the domain \(D_+\) is mapped onto the lower half complex k-plane \({\mathbb {C}}_-\) and \(D_-\) onto the upper half complex k-plane \({\mathbb {C}}_+\); hence \(M_+(k;x,t)\) is analytic in \({\mathbb {C}}_-\), \(M_-(k;x,t)\) is analytic in \({\mathbb {C}}_+\), and the contour of the RH problem is \({\mathbb {R}}\). We consider the asymptotic expansion of \(N(\lambda ;x,t)\) in (2.43). Resorting to (2.43), we obtain

$$\begin{aligned} \begin{aligned} (M(k;x,t))_{21}&=\left( \left( \begin{array}{cc} 1 &{} 0\\ -\frac{i{{\tilde{q}}}^\dag (x,t)}{2} &{} I\\ \end{array}\right) \lambda ^{-\frac{1}{2}\sigma }(I+\frac{N_1(x,t)}{\lambda }+O(\lambda ^{-2}))\lambda ^{\frac{1}{2}\sigma }\right) _{21}\\&=-\frac{i{{\tilde{q}}}^\dag (x,t)}{2}+(N_1(x,t))_{21}+O(\lambda ^{-1})\\&=O(\lambda ^{-1}). \end{aligned} \end{aligned}$$
(2.48)

In the similar way, we have

$$\begin{aligned} (M(k;x,t))_{12}=\frac{(N_1(x,t))_{12}}{k}+o(k^{-1}), \end{aligned}$$
(2.49)

which implies the expression of function \({{\tilde{q}}}(x,t)\) in (2.47). \(\square \)

3 Long-time asymptotic behaviour

In this section, we study the long-time asymptotic behaviour of the solution of the Cauchy problem of the three-component DNLS equation (1.1). We deal with the reduced RH problem (2.44) by the nonlinear steepest descent method. It is worth noting that there is a single stationary point \(k_0=-x/(4t)\), the zero of \(\mathrm {d}\theta /\mathrm {d}k\). It is therefore necessary to deform the contour to find the leading-order asymptotics of the solution. There are five main steps in the analysis: reorienting, extending, cutting, rescaling and solving.
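The stationary-phase bookkeeping behind the deformation can be checked directly. With \(\theta (k)=-(x/t)k-2k^2\), the point \(k_0=-x/(4t)\) is the unique zero of \(\mathrm {d}\theta /\mathrm {d}k\), and along the line \(k=k_0+uk_0e^{i\pi /4}\) (used in the extension step below) the oscillatory exponent \(2it\theta \) acquires the purely Gaussian real part \(4tu^2k_0^2\). The values of \((x,t,u)\) below are illustrative.

```python
import cmath

# theta(k) = -(x/t)k - 2k^2; k0 = -x/(4t) is its stationary point.
x, t = -100.0, 25.0                 # sample values with t > 0
k0 = -x / (4.0 * t)

def theta(k):
    return -(x / t) * k - 2.0 * k * k

def dtheta(k):
    return -(x / t) - 4.0 * k

# Along k = k0 + u*k0*e^{i pi/4}: theta(k) = 2*k0^2 - 2*z^2 with z^2 = i*u^2*k0^2,
# so Re(2it*theta) = 4*t*u^2*k0^2.
u = 0.3
k = k0 + u * k0 * cmath.exp(1j * cmath.pi / 4)
re_exponent = (2j * t * theta(k)).real

print(dtheta(k0), re_exponent, 4 * t * u * u * k0 * k0)
```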

3.1 The first transformation: reorientation

The key step in the nonlinear steepest descent method is to find factorizations of the jump matrix. The jump matrix \(J(k;x,t)\) of the RH problem (2.44) has two triangular factorizations, a lower/upper factorization and an upper/lower factorization. In order to give these two decompositions a uniform form, an effective approach is to introduce a suitable RH problem. To this end, we first reorient the contour of the RH problem.

The two triangular factorizations of \(J(k;x,t)\) are

$$\begin{aligned} J= {\left\{ \begin{array}{ll} \left( \begin{array}{cc} 1 &{} -e^{-2it\theta }\gamma (k)\\ 0 &{} I\\ \end{array}\right) \left( \begin{array}{cc} 1 &{} 0\\ e^{2it\theta }k\gamma ^{\dag }(k^{*}) &{} I\\ \end{array}\right) ,\\ \left( \begin{array}{cc} 1 &{} 0\\ \frac{e^{2it\theta }k\gamma ^{\dag }(k^{*})}{1-k\gamma (k)\gamma ^{\dag }(k^{*})} &{} I\\ \end{array}\right) \left( \begin{array}{cc} 1-k\gamma (k)\gamma ^{\dag }(k^{*}) &{} 0\\ 0 &{} (I-k\gamma ^{\dag }(k^{*})\gamma (k))^{-1}\\ \end{array}\right) \left( \begin{array}{cc} 1 &{} -\frac{e^{-2it\theta }\gamma (k)}{1-k\gamma (k)\gamma ^{\dag }(k^{*})}\\ 0 &{} I\\ \end{array}\right) . \end{array}\right. } \end{aligned}$$
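Both factorizations can be verified numerically. The sketch below uses a sample k, a sample row vector \(\gamma (k)\) (chosen so that \(1-k\gamma \gamma ^\dag >0\)) and a unit phase standing in for \(e^{2it\theta }\); the middle factor of the second factorization is inverted with the rank-one structure of \(\gamma ^\dag \gamma \).

```python
import numpy as np

# Check of the two triangular factorizations of J.  Illustrative data only.
k = -0.8
g = np.array([[0.3 + 0.2j, -0.1j, 0.25]])        # gamma(k), 1x3 row vector
gd = g.conj().T                                   # gamma^dagger(k^*) for real k
ph = np.exp(0.7j)                                 # stands in for e^{2it theta}
I3 = np.eye(3)
r = (1.0 - k * (g @ gd))[0, 0].real               # 1 - k*gamma*gamma^dagger

J = np.block([[np.array([[r]]), -g / ph],
              [k * gd * ph, I3]])

# first (lower/upper) factorization
F1 = np.block([[np.eye(1), -g / ph], [np.zeros((3, 1)), I3]]) @ \
     np.block([[np.eye(1), np.zeros((1, 3))], [k * gd * ph, I3]])

# second (upper/lower) factorization with diagonal middle factor
F2 = np.block([[np.eye(1), np.zeros((1, 3))], [k * gd * ph / r, I3]]) @ \
     np.block([[np.array([[r]]), np.zeros((1, 3))],
               [np.zeros((3, 1)), np.linalg.inv(I3 - k * gd @ g)]]) @ \
     np.block([[np.eye(1), -(g / ph) / r], [np.zeros((3, 1)), I3]])

print(np.allclose(J, F1), np.allclose(J, F2))
```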

We introduce a \(3\times 3\) matrix-valued function \(\delta (k)\) to satisfy the RH problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \delta _{+}(k)=(I-k\gamma ^{\dag }(k^{*})\gamma (k))\delta _{-}(k), &{} k\in (-\infty ,k_0),\\ \delta (k)\rightarrow I, &{} k\rightarrow \infty , \end{array}\right. } \end{aligned}$$
(3.1)

which implies a scalar RH problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \det \delta _{+}(k)=(1-k\gamma (k)\gamma ^\dag (k^*))\det \delta _{-}(k), &{} k\in (-\infty ,k_0),\\ \det \delta (k)\rightarrow 1, &{} k\rightarrow \infty . \end{array}\right. } \end{aligned}$$
(3.2)

The scalar RH problem (3.2) has a solution

$$\begin{aligned} \det \delta (k)=(k-k_0)^{-i\nu }e^{\chi (k)}, \end{aligned}$$
(3.3)

where

$$\begin{aligned} \nu&=\frac{1}{2\pi }\log (1-k_0\vert \gamma (k_0)\vert ^2),\\ \chi (k)&=-\frac{1}{2\pi i}\left( \int _{k_0-1}^{k_0}\log \left( \frac{1-\xi \vert \gamma (\xi )\vert ^2}{1-k_0\vert \gamma (k_0)\vert ^2}\right) \, \frac{\mathrm {d}\xi }{\xi -k}\right. \\&\quad +\left. \int _{-\infty }^{k_0-1}\log \left( 1{-}\xi \vert \gamma (\xi )\vert ^2\right) \, \frac{\mathrm {d}\xi }{\xi -k}{-}\log (1{-}k_0\vert \gamma (k_0)\vert ^2)\log (k{-}k_0{+}1)\right) . \end{aligned}$$

In order to discuss boundedness, it is necessary to introduce a norm |A| for a matrix A. We define \(|A|=(\mathrm {tr}A^\dag A)^{1/2}\) and \(\Vert A(\cdot )\Vert _p=\Vert |A(\cdot )|\Vert _p\) for a matrix function \(A(\cdot )\). Noticing the symmetry condition \(\delta ^{-1}(k)=\delta ^\dag (k^*)\), for \(k\in (-\infty ,k_0)\) we have

$$\begin{aligned} |\delta _+(k)|^2= & {} \mathrm {tr}[I-k\gamma ^{\dag }(k^{*})\gamma (k)]=3-k\gamma (k)\gamma ^\dag (k^*), \end{aligned}$$
(3.4)
$$\begin{aligned} |\delta _-(k)|^2= & {} \mathrm {tr}[(I-k\gamma ^{\dag }(k^{*})\gamma (k))^{-1}]=2+\frac{1}{1-k\gamma (k)\gamma ^\dag (k^*)};\end{aligned}$$
(3.5)
$$\begin{aligned} |\det \delta _+(k)|^2= & {} 1-k\gamma (k)\gamma ^\dag (k^*),\quad |\det \delta _-(k)|^2=(1-k\gamma (k)\gamma ^\dag (k^*))^{-1}. \end{aligned}$$
(3.6)

Similar computations hold for \(k\in (k_0,+\infty )\). Hence, by the maximum principle, we have

$$\begin{aligned} \vert \delta (k)\vert \leqslant \mathrm {const}<\infty ,\quad \vert \det \delta (k)\vert \leqslant \mathrm {const}<\infty ,\quad k\in {\mathbb {C}}. \end{aligned}$$
(3.7)
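The traces appearing in the boundedness estimates above reduce to eigenvalue bookkeeping for a rank-one update: \(\gamma ^\dag \gamma \) has the single nonzero eigenvalue \(\gamma \gamma ^\dag =\vert \gamma \vert ^2\), so \(\mathrm {tr}(I-k\gamma ^\dag \gamma )=3-k\vert \gamma \vert ^2\) and \(\mathrm {tr}((I-k\gamma ^\dag \gamma )^{-1})=2+(1-k\vert \gamma \vert ^2)^{-1}\). A quick numerical check, with sample values only:

```python
import numpy as np

# Rank-one trace identities for I - k*gamma^dagger*gamma.  Sample data only.
k = -0.6
g = np.array([[0.4, 0.1 - 0.3j, 0.2j]])           # gamma (1x3 row)
gg = (g @ g.conj().T)[0, 0].real                  # |gamma|^2 = gamma*gamma^dagger

A = np.eye(3) - k * (g.conj().T @ g)              # I - k*gamma^dagger*gamma
t1 = np.trace(A).real                             # expect 3 - k*|gamma|^2
t2 = np.trace(np.linalg.inv(A)).real              # expect 2 + 1/(1 - k*|gamma|^2)
print(t1, t2)
```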

Let the vector spectral function

$$\begin{aligned} \rho (k)= {\left\{ \begin{array}{ll} \gamma (k), &{} k\in [k_0,+\infty ),\\ -\dfrac{\gamma (k)}{1-k\gamma (k)\gamma ^\dag (k^*)}, &{} k\in (-\infty ,k_0),\\ \end{array}\right. } \end{aligned}$$
(3.8)

and

$$\begin{aligned} M^\Delta (k;x,t)=M(k;x,t)\Delta ^{-1}(k), \end{aligned}$$

where

$$\begin{aligned} \Delta (k)=\left( \begin{array}{cc} \det \delta (k) &{} 0\\ 0 &{} \delta ^{-1}(k)\\ \end{array}\right) . \end{aligned}$$
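For intuition, here is a scalar (one-component) sketch of the piecewise definition (3.8); in the paper \(\gamma (k)\) is a row vector and \(\gamma \gamma ^\dag \) plays the role of \(\vert \gamma \vert ^2\), and the sample reflection coefficient below is hypothetical:

```python
def rho(k, gamma, k0):
    """Scalar sketch of (3.8): rho = gamma on [k0, inf), reflected branch on (-inf, k0)."""
    g = gamma(k)
    if k >= k0:
        return g
    return -g / (1.0 - k * abs(g) ** 2)

# Hypothetical smooth, decaying reflection coefficient and stationary point.
gamma = lambda k: 0.3 / (1.0 + k * k)
k0 = 1.0
```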
Fig. 2. The oriented contour on \({\mathbb {R}}\)

We reverse the orientation on \((k_0,+\infty )\) as in Fig. 2; then \(M^\Delta (k;x,t)\) satisfies the following RH problem on the reoriented contour:

$$\begin{aligned} {\left\{ \begin{array}{ll} M^\Delta _+(k;x,t)=M^\Delta _-(k;x,t)J^\Delta (k;x,t), &{} k\in {\mathbb {R}},\\ M^\Delta (k;x,t)\rightarrow I, &{} k\rightarrow \infty ,\\ \end{array}\right. } \end{aligned}$$
(3.9)

where the jump matrix \(J^\Delta (k;x,t)\) has a decomposition

$$\begin{aligned} J^\Delta (k;x,t)&=(b_-)^{-1}b_+= \left( \begin{array}{cc} 1 &{} 0\\ -\dfrac{e^{2it\theta }\delta _-^{-1}(k)k\rho ^\dag (k^*)}{\det \delta _-(k)} &{} I\\ \end{array}\right) \nonumber \\&\quad \left( \begin{array}{cc} 1 &{} e^{-2it\theta }\rho (k)\delta _+(k)[\det \delta _+(k)]\\ 0 &{} I\\ \end{array}\right) . \end{aligned}$$
(3.10)

3.2 Extend to the augmented contour

In this section, we extend the RH problem (3.9) to an RH problem on the augmented contour. We split the spectral function \(\rho (k)\) into an analytic part and a small nonanalytic remainder, and then construct an RH problem on the basis of this decomposition. We define \(L=\{k=k_0+uk_0e^{\frac{\pi i}{4}}:u\in (-\infty ,+\infty )\}\).

In what follows, we write \(A\lesssim B\) for two quantities A and B if there exists a constant \(C>0\) such that \(|A|\leqslant CB\).

Theorem 3.1

The vector spectral functions \(\rho (k)\) and \(k\rho ^\dag (k^*)\) admit the decompositions

$$\begin{aligned} \rho (k)&=h_1^o(k)+h_1^a(k)+R_1(k),\quad k\in {\mathbb {R}},\nonumber \\ k\rho ^\dag (k^*)&=h_2^o(k)+h_2^a(k)+R_2(k),\quad k\in {\mathbb {R}}, \end{aligned}$$
(3.11)

where \(R_1(k)\) and \(R_2(k)\) are piecewise-rational functions, \(h_1^a(k)\) has an analytic continuation to L and \(h_2^a(k)\) has an analytic continuation to \(L^*\). Moreover, they admit the following estimates:

$$\begin{aligned} \vert e^{-2it\theta (k)}h_j^o(k)\vert \lesssim \frac{1}{(1+\vert k\vert ^{2})(tk_0)^{l}}, \quad k\in {\mathbb {R}}, \end{aligned}$$
(3.12)
$$\begin{aligned} \vert e^{-2it\theta (k)}h_j^a(k)\vert \lesssim \frac{1}{(1+\vert k\vert ^{2})(tk_0)^{l}}, \quad k\in L_j, \end{aligned}$$
(3.13)

for an arbitrary positive integer l, where \(j=1,2\) and \(L_1=L, L_2=L^*\).

Proof

At first, we consider the function \(\rho (k)\) for \(k\in [k_0,+\infty )\); the case \(k\in (-\infty ,k_0)\) is treated similarly. In order to replace \(\rho (k)\) by a rational function with well-controlled errors, we expand \((k+i)^{m+5}\rho (k)\) in a Taylor series around \(k_0\),

$$\begin{aligned} (k+i)^{m+5}\rho (k)&=\mu _0+\mu _1(k-k_0)+\cdots +\mu _m(k-k_0)^m\nonumber \\&\quad +\frac{1}{m!}\int _{k_0}^k((\xi +i)^{m+5}\rho (\xi ))^{(m+1)}(k-\xi )^m\, \mathrm {d}\xi . \end{aligned}$$
(3.14)

Then we define

$$\begin{aligned} R_1(k)= & {} \dfrac{\sum _{j=0}^m\mu _j(k-k_0)^j}{(k+i)^{m+5}}, \end{aligned}$$
(3.15)
$$\begin{aligned} h_1(k)= & {} \rho (k)-R_1(k). \end{aligned}$$
(3.16)

It follows immediately that

$$\begin{aligned} \left. \frac{\mathrm {d}^j\rho (k)}{\mathrm {d}k^j}\right| _{k=k_0}=\left. \frac{\mathrm {d}^jR_1(k)}{\mathrm {d}k^j}\right| _{k=k_0},\quad 0\leqslant j\leqslant m. \end{aligned}$$
(3.17)
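To see the derivative-matching property (3.17) concretely, one can take a toy scalar \(\rho \) for which \((k+i)^{m+5}\rho (k)=k^{m+2}\) is a polynomial; then the Taylor coefficients \(\mu _j\) are explicit binomials, and \(h_1=\rho -R_1\) vanishes to order \(m+1\) at \(k_0\). All numerical values below are illustrative, not from the paper:

```python
from math import comb

k0, m = 0.7, 3
# Taylor coefficients of k^(m+2) about k0: mu_j = C(m+2, j) * k0^(m+2-j).
mu = [comb(m + 2, j) * k0 ** (m + 2 - j) for j in range(m + 1)]

def rho(k):
    # toy spectral function: (k + i)^(m+5) * rho(k) = k^(m+2)
    return k ** (m + 2) / (k + 1j) ** (m + 5)

def R1(k):
    # the rational approximant of (3.15)
    return sum(mu[j] * (k - k0) ** j for j in range(m + 1)) / (k + 1j) ** (m + 5)

# h1 = rho - R1 vanishes to order m+1 = 4 at k0, as (3.17) requires.
for eps in (1e-1, 1e-2, 1e-3):
    assert abs(rho(k0 + eps) - R1(k0 + eps)) <= 2 * eps ** (m + 1)
```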

For convenience, we assume that \(m=4p+1, p\in {\mathbb {Z}}_+\). Set

$$\begin{aligned} \alpha (k)= & {} \frac{(k/k_0-1)^p}{(k-i)^{p+2}}, \end{aligned}$$
(3.18)
$$\begin{aligned} {{\tilde{\theta }}}= & {} \frac{4k_0k-2k^2}{k_0}. \end{aligned}$$
(3.19)

By the Fourier inversion, we arrive at

$$\begin{aligned} (h_1/\alpha )(k)=\int _{-\infty }^{+\infty }e^{is{{\tilde{\theta }}}}\widehat{(h_1/\alpha )}(s)\, \bar{\mathrm {d}}s, \end{aligned}$$
(3.20)

where

$$\begin{aligned} \widehat{(h_1/\alpha )}(s)= & {} \int _{k_0}^{+\infty }e^{-is{{\tilde{\theta }}}}(h_1/\alpha )(k)\, \bar{\mathrm {d}}{{\tilde{\theta }}}(k), \end{aligned}$$
(3.21)
$$\begin{aligned} \bar{\mathrm {d}}s= & {} \frac{\mathrm {d}s}{\sqrt{2\pi }},\quad \bar{\mathrm {d}}{{\tilde{\theta }}}=\frac{\mathrm {d}{{\tilde{\theta }}}}{\sqrt{2\pi }}. \end{aligned}$$
(3.22)

From (3.14), (3.16), (3.18) and (3.19), it is easy to see that

$$\begin{aligned} (h_1/\alpha )(k)=\frac{(k-k_0)^{m+1-p}}{(k-i)^{m+3-p}}g(k,k_0)=\frac{(k-k_0)^{3p+2}}{(k-i)^{3p+4}}g(k,k_0), \end{aligned}$$
(3.23)

where

$$\begin{aligned} g(k,k_0)=\frac{1}{m!}\int _0^1((k_0+u(k-k_0)-i)^{m+5}\rho (k_0+u(k-k_0)))^{(m+1)}(1-u)^m\, \mathrm {d}u. \end{aligned}$$
(3.24)

Noting that

$$\begin{aligned} \left| \frac{\mathrm {d}^j}{\mathrm {d}k^j}g(k,k_0)\right| \lesssim 1, \end{aligned}$$
(3.25)

we have

$$\begin{aligned}&\int _{k_0}^{+\infty }\left| \left( \frac{\mathrm {d}}{\mathrm {d}{{\tilde{\theta }}}}\right) ^j(h_1/\alpha )(k({{\tilde{\theta }}}))\right| ^2\, |\bar{\mathrm {d}}{{\tilde{\theta }}}|\\&\quad =\int _{k_0}^{+\infty }\left| \left( \frac{k_0}{-4k+4k_0}\frac{\mathrm {d}}{\mathrm {d}k}\right) ^j(h_1/\alpha )(k)\right| ^2\frac{4k-4k_0}{k_0}\, \bar{\mathrm {d}}k\\&\quad \lesssim \int _{k_0}^{+\infty }\left| \frac{(k-k_0)^{3p+2-2j}}{(k-i)^{3p+4}}\right| ^2(k-k_0)\, \bar{\mathrm {d}}k\lesssim 1,\quad \mathrm {for}\quad 0\leqslant j\leqslant \frac{3p+2}{2}. \end{aligned}$$

Using the Plancherel formula [30],

$$\begin{aligned} \int _{-\infty }^{+\infty }(1+s^2)^j\left| \widehat{(h_1/\alpha )}(s)\right| ^2\, \mathrm {d}s \lesssim 1, \end{aligned}$$
(3.26)

we split \(h_1(k)\) into two parts

$$\begin{aligned} h_1(k)&=\alpha (k)\int _{tk_0}^{+\infty }e^{is{{\tilde{\theta }}}}\widehat{(h_1/\alpha )}(s)\, \bar{\mathrm {d}}s+\alpha (k)\int _{-\infty }^{tk_0}e^{is{{\tilde{\theta }}}}\widehat{(h_1/\alpha )}(s)\, \bar{\mathrm {d}}s\nonumber \\&\triangleq h_1^o(k)+h_1^a(k). \end{aligned}$$
(3.27)

For \(k\geqslant k_0\), we have

$$\begin{aligned} |e^{-2it\theta }h_1^o(k)|&\leqslant |\alpha (k)|\int _{tk_0}^{+\infty }\left| \widehat{(h_1/\alpha )}(s)\right| \, \bar{\mathrm {d}}s\\&\lesssim \left| \frac{(k/k_0-1)^p}{(k-i)^{p+2}}\right| \left( \int _{tk_0}^{+\infty }(1+s^2)^{-r}\, \mathrm {d}s\right) ^{1/2}\\&\quad \left( \int _{tk_0}^{+\infty }(1+s^2)^{r}\left| \widehat{(h_1/\alpha )}(s)\right| ^2\, \mathrm {d}s\right) ^{1/2}\\&\lesssim (tk_0)^{-r+\frac{1}{2}}, \quad \mathrm {for}\quad r\leqslant \frac{3p+2}{2}. \end{aligned}$$

Noting (2.46), it is not difficult to prove that (i) \(\mathrm {Re}(i\theta )>0\) when \(\mathrm {Re}k>k_0, \mathrm {Im}k>0\) or \(\mathrm {Re}k<k_0, \mathrm {Im}k<0\); (ii) \(\mathrm {Re}(i\theta )<0\) when \(\mathrm {Re}k<k_0, \mathrm {Im}k>0\) or \(\mathrm {Re}k>k_0, \mathrm {Im}k<0\). Then \(h_1^a(k)\) has an analytic continuation to the line \(\{k=k_0+uk_0e^{\frac{\pi i}{4}}:u\in [0,+\infty )\}\). On this line,

$$\begin{aligned} |e^{-2it\theta }h_1^a(k)|&\leqslant e^{-2tk_0\mathrm {Re}(i{{\tilde{\theta }}})}\left| \frac{(k/k_0-1)^p}{(k-i)^{p+2}}\right| \int _{-\infty }^{tk_0}\left| e^{is{{\tilde{\theta }}}}\widehat{(h_1/\alpha )}(s)\right| \, \bar{\mathrm {d}}s\\&\lesssim \frac{u^pe^{-tk_0\mathrm {Re}(i{{\tilde{\theta }}})}}{|k-i|^{p+2}}\int _{-\infty }^{tk_0}e^{(s-tk_0)\mathrm {Re}(i{{\tilde{\theta }}})}\left| \widehat{(h_1/\alpha )}(s)\right| \, \mathrm {d}s\\&\lesssim \frac{u^pe^{-tk_0\mathrm {Re}(i{{\tilde{\theta }}})}}{|k-i|^{p+2}}\left( \int _{-\infty }^{tk_0}(1+s^2)^{-1}\, \mathrm {d}s\right) ^{1/2}\\&\quad \left( \int _{-\infty }^{tk_0}(1+s^2)\left| \widehat{(h_1/\alpha )}(s)\right| ^2\, \mathrm {d}s\right) ^{1/2}\\&\lesssim \frac{u^pe^{-tk_0\mathrm {Re}(i{{\tilde{\theta }}})}}{|k-i|^{p+2}}. \end{aligned}$$

Notice that

$$\begin{aligned} {{\tilde{\theta }}}(k)=\frac{-2(k-k_0)^2+2k_0^2}{k_0}, \end{aligned}$$
(3.28)

which implies \(\mathrm {Re}(i{{\tilde{\theta }}})=2u^2k_0\). So

$$\begin{aligned} |e^{-2it\theta }h_1^a(k)|\lesssim \frac{u^pe^{-2u^2k_0^2t}}{|k-i|^{p+2}}\lesssim \frac{1}{|k-i|^{2}(tk_0)^{p/2}}. \end{aligned}$$
(3.29)
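The equality of the two expressions (3.19) and (3.28) for \({{\tilde{\theta }}}\), and the value \(\mathrm {Re}(i{{\tilde{\theta }}})=2u^2k_0\) on the line \(k=k_0+uk_0e^{\frac{\pi i}{4}}\), can be verified directly (the value of \(k_0\) below is an arbitrary sample):

```python
import cmath
import math

k0 = 0.9
# (3.19) and (3.28) agree: (4*k0*k - 2*k^2)/k0 == (-2*(k - k0)^2 + 2*k0^2)/k0.
for k in (0.3 + 0.4j, 2.0 - 1.1j, 3.1 + 0j):
    lhs = (4 * k0 * k - 2 * k * k) / k0
    rhs = (-2 * (k - k0) ** 2 + 2 * k0 ** 2) / k0
    assert abs(lhs - rhs) < 1e-12

# On k = k0 + u*k0*exp(i*pi/4): (k - k0)^2 = i*u^2*k0^2, so Re(i*theta_tilde) = 2*u^2*k0.
for u in (0.5, 1.7, -2.3):
    k = k0 + u * k0 * cmath.exp(1j * math.pi / 4)
    tt = (-2 * (k - k0) ** 2 + 2 * k0 ** 2) / k0
    assert abs((1j * tt).real - 2 * u * u * k0) < 1e-9
```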

This completes the estimates of \(h_1^o(k)\) and \(h_1^a(k)\) for \(k\geqslant k_0\); the case \(k<k_0\) is similar. The same method applies to \(k\rho ^\dag (k^*)\), which yields the estimates of \(h_2^o(k)\) and \(h_2^a(k)\). Summarizing, given an arbitrary positive integer l, we choose p large enough that \(l<\min \{(3p+1)/2,p/2\}\); then (3.12) and (3.13) follow. \(\square \)

Based on the result of Theorem 3.1, we can split \(b_\pm \) further

$$\begin{aligned} b_{+}&=b^{o}_{+}b^{a}_{+}=(I+\omega ^{o}_{+})(I+\omega ^{a}_{+})\nonumber \\&=\left( \begin{array}{cc} 1&{}e^{-2it\theta }[\mathrm {det}\delta _+(k)]h_1^o(k)\delta _+(k)\\ 0&{}I_{3\times 3}\\ \end{array}\right) \nonumber \\&\quad \left( \begin{array}{cc} 1&{}e^{-2it\theta }[\mathrm {det}\delta _+(k)][h_1^a(k)+R_1(k)]\delta _+(k)\\ 0&{}I_{3\times 3}\\ \end{array}\right) , \end{aligned}$$
(3.30)
$$\begin{aligned} b_{-}&=b^{o}_{-}b^{a}_{-}=(I-\omega ^{o}_{-})(I-\omega ^{a}_{-})\nonumber \\&=\left( \begin{array}{cc} 1&{}0\\ \dfrac{e^{2it\theta }\delta _-^{-1}(k)h_2^o(k)}{\mathrm {det}\delta _-(k)}&{}I_{3\times 3}\\ \end{array}\right) \left( \begin{array}{cc} 1&{}0\\ \dfrac{e^{2it\theta }\delta _-^{-1}(k)[h_2^a(k)+R_2(k)]}{\mathrm {det}\delta _-(k)}&{}I_{3\times 3}\\ \end{array}\right) . \end{aligned}$$
(3.31)

Define the oriented contour \(\Sigma \) by \(\Sigma ={\mathbb {R}}\cup L\cup L^*\) as in Fig. 3. Set

$$\begin{aligned} M^\sharp (k;x,t)= {\left\{ \begin{array}{ll} M^\Delta (k;x,t), &{} k\in \Omega _{1}\cup \Omega _{2},\\ M^\Delta (k;x,t)(b_+^a)^{-1}, &{} k\in \Omega _{3}\cup \Omega _{4},\\ M^\Delta (k;x,t)(b_-^a)^{-1}, &{} k\in \Omega _{5}\cup \Omega _{6}.\\ \end{array}\right. } \end{aligned}$$
(3.32)

Lemma 3.1

\(M^\sharp (k;x,t)\) is the solution of the RH problem

$$\begin{aligned} {\left\{ \begin{array}{ll} M^\sharp _+(k;x,t)=M^\sharp _-(k;x,t)J^\sharp (k;x,t), &{} k\in \Sigma ,\\ M^\sharp (k;x,t)\rightarrow I, &{} k\rightarrow \infty , \end{array}\right. } \end{aligned}$$
(3.33)

where the jump matrix \(J^\sharp (k;x,t)\) satisfies

$$\begin{aligned} J^\sharp (k;x,t)= & {} (b_-^\sharp )^{-1}b_+^\sharp ,\end{aligned}$$
(3.34)
$$\begin{aligned} b_-^\sharp= & {} {\left\{ \begin{array}{ll} I, &{} k\in L,\\ b_-^a, &{} k\in L^*,\\ b_-^o, &{} k\in {\mathbb {R}},\\ \end{array}\right. }\quad b_+^\sharp = {\left\{ \begin{array}{ll} b_+^a, &{} k\in L,\\ I, &{} k\in L^*,\\ b_+^o, &{} k\in {\mathbb {R}}.\\ \end{array}\right. } \end{aligned}$$
(3.35)

Proof

According to the RH problem (3.9) and the decomposition of \(b_\pm \), we can construct the RH problem (3.33). The asymptotics of \(M^\sharp (k;x,t)\) can be derived from the convergence of \(b_\pm \) as \(k\rightarrow \infty \). Let us first confine our discussion to the domain \(\Omega _{3}\). Based on the boundedness of \(\delta (k)\) and \(\det \delta (k)\) in (3.7), and the definitions of \(h_1^a(k)\) and \(R_1(k)\) in (3.27) and (3.15), we have

$$\begin{aligned}&\vert e^{-2it\theta }[\mathrm {det}\delta (k)][h_1^a(k)+R_1(k)]\delta (k)\vert \\&\quad \leqslant \vert e^{-2it\theta }h_1^a(k)\vert +\vert e^{-2it\theta }R_1(k)\vert \\&\quad \lesssim |\alpha (k)|e^{-tk_0\mathrm {Re}(i{{\tilde{\theta }}})}\left| \int _{-\infty }^{tk_0}e^{i(s-tk_0){{\tilde{\theta }}}}\widehat{(h_1/\alpha )}(s)\, \mathrm {d}s\right| +\dfrac{\vert \sum _{j=0}^m\mu _j(k-k_0)^j\vert }{\vert (k+i)^{m+5}\vert }\\&\quad \lesssim \dfrac{1}{\vert k-i\vert ^{2}}+\dfrac{1}{\vert k+i\vert ^{5}},\quad \mathrm {for}\quad k\in \Omega _{3}. \end{aligned}$$

Then we obtain that \(M^\sharp (k;x,t)\rightarrow I\) as \(k\rightarrow \infty \) in \(\Omega _{3}\). In the other domains, the result can be proved similarly. \(\square \)

Fig. 3. The oriented contour \(\Sigma \)

In the following, we construct the solution of the above RH problem (3.33) by using the approach in [4]. Assume that

$$\begin{aligned} \omega ^\sharp _\pm =\pm (b^\sharp _\pm -I), \quad \omega ^\sharp =\omega ^\sharp _++\omega ^\sharp _-, \end{aligned}$$
(3.36)

and define the Cauchy operators \(C_\pm \) on \(\Sigma \) by

$$\begin{aligned} (C_\pm f)(k)=\int _\Sigma \frac{f(\xi )}{\xi -k_\pm }\frac{\mathrm {d}\xi }{2\pi i},\quad k\in \Sigma ,\ f\in {\mathscr {L}}^2(\Sigma ), \end{aligned}$$
(3.37)

where \(C_+f\) (resp. \(C_-f\)) denotes the boundary value from the left (resp. right) side of the oriented contour \(\Sigma \) in Fig. 3. We introduce the operator \(C_{\omega ^\sharp }:{\mathscr {L}}^2(\Sigma )+{\mathscr {L}}^\infty (\Sigma )\rightarrow {\mathscr {L}}^2(\Sigma )\) by

$$\begin{aligned} C_{\omega ^{\sharp }}f=C_+\left( f\omega ^\sharp _-\right) +C_-\left( f\omega ^\sharp _+\right) \end{aligned}$$
(3.38)

for a \(4\times 4\) matrix-valued function f. Suppose that \(\mu ^\sharp (k;x,t)\in {\mathscr {L}}^2(\Sigma )+{\mathscr {L}}^\infty (\Sigma )\) is the solution of the singular integral equation

$$\begin{aligned} \mu ^\sharp =I+C_{\omega ^\sharp }\mu ^\sharp . \end{aligned}$$

Then

$$\begin{aligned} M^\sharp (k;x,t)=I+\int _\Sigma \dfrac{\mu ^\sharp (\xi ;x,t)\omega ^\sharp (\xi ;x,t)}{\xi -k} \, \frac{\mathrm {d}\xi }{2\pi i} \end{aligned}$$
(3.39)

solves RH problem (3.33). Indeed,

$$\begin{aligned} M_\pm ^\sharp (k)&=I+C_\pm (\mu ^\sharp (k)\omega ^\sharp (k))\nonumber \\&=I+C_\pm (\mu ^\sharp (k)\omega _+^\sharp (k))+C_\pm (\mu ^\sharp (k)\omega _-^\sharp (k))\nonumber \\&=I\pm \mu ^\sharp (k)\omega _\pm ^\sharp (k)+C_{\omega ^\sharp }\mu ^\sharp (k)\nonumber \\&=\mu ^\sharp (k) b_\pm ^\sharp (k). \end{aligned}$$
(3.40)

It is obvious that

$$\begin{aligned} M_+^\sharp =\mu ^\sharp b_+^\sharp =M_-^\sharp (b_-^\sharp )^{-1}b_+^\sharp =M_-^\sharp J^\sharp . \end{aligned}$$
(3.41)
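As a pure-Python sanity check of the Plemelj relation \(C_+f-C_-f=f\) that underlies (3.37)–(3.41), one can approximate the boundary values of the Cauchy integral of a smooth decaying scalar function by small \(\pm i\varepsilon \) offsets; the test function and grid parameters below are illustrative:

```python
import math

def f(x):
    return 1.0 / (x * x + 1.0)

def cauchy(z, h=1e-4, half_width=10.0):
    """Trapezoid approximation of (1/(2*pi*i)) * int_R f(xi)/(xi - z) dxi."""
    n = int(2 * half_width / h)
    s = 0.0 + 0.0j
    for j in range(n + 1):
        xi = -half_width + j * h
        w = 0.5 if j in (0, n) else 1.0
        s += w * f(xi) / (xi - z)
    return s * h / (2j * math.pi)

k, eps = 0.3, 1e-3
# Plemelj: the jump of the Cauchy integral across R recovers f(k).
jump = cauchy(k + 1j * eps) - cauchy(k - 1j * eps)
assert abs(jump - f(k)) < 1e-2
```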

Theorem 3.2

The function \({{\tilde{q}}}(x,t)\) admits the representation

$$\begin{aligned} \tilde{q}(x,t)=\frac{1}{\pi }\left( \int _\Sigma \left( (1-C_{\omega ^\sharp })^{-1}I\right) (\xi )\omega ^\sharp (\xi )\, \mathrm {d}\xi \right) _{12}. \end{aligned}$$
(3.42)

Proof

From (2.47), (3.32) and (3.39), we arrive at

$$\begin{aligned} {{\tilde{q}}}(x,t)&=-2i\lim _{k\rightarrow \infty }\left( kM^\Delta (k;x,t)\right) _{12}\\&=-2i\lim _{k\rightarrow \infty }\left( kM^\sharp (k;x,t)\right) _{12}\\&=\frac{1}{\pi }\left( \int _\Sigma \mu ^\sharp (\xi ;x,t)\omega ^\sharp (\xi )\,\mathrm {d}\xi \right) _{12}\\&=\frac{1}{\pi }\left( \int _\Sigma ((1-C_{\omega ^\sharp })^{-1}I)(\xi )\omega ^\sharp (\xi )\,\mathrm {d}\xi \right) _{12}. \end{aligned}$$

\(\square \)

3.3 The third transformation

In this subsection, we convert the RH problem on the contour \(\Sigma \) to an RH problem on the contour \(\Sigma ^\prime =\Sigma \backslash {\mathbb {R}}=L\cup L^*\) and estimate the error between the two. From (3.30), (3.31), (3.35) and (3.36), we see that \(\omega ^\sharp \) is built from terms involving \(h_j^o(k)\), \(h_j^a(k)\) and \(R_j(k)\) \((j=1,2)\). Accordingly, we split \(\omega ^\sharp \) into three parts: \(\omega ^\sharp =\omega ^a+\omega ^b+\omega ^\prime \), where \(\omega ^a=\omega ^\sharp |_{{\mathbb {R}}}\) collects the terms involving \(h_1^o(k)\) and \(h_2^o(k)\); \(\omega ^b\) vanishes on \(\Sigma \backslash (L\cup L^*)\) and collects the terms involving \(h_1^a(k)\) and \(h_2^a(k)\); \(\omega ^\prime \) vanishes on \(\Sigma \backslash \Sigma ^\prime \) and collects the terms involving \(R_1(k)\) and \(R_2(k)\).

Let \(\omega ^e=\omega ^a+\omega ^b\). So \(\omega ^\sharp =\omega ^e+\omega ^\prime \). Then we immediately get the following estimates.

Lemma 3.2

For arbitrary positive integer l, as \(t\rightarrow \infty \), we have the estimates

$$\begin{aligned}&\Vert \omega ^a\Vert _{{\mathscr {L}}^1({\mathbb {R}})\cap {\mathscr {L}}^2({\mathbb {R}})\cap {\mathscr {L}}^\infty ({\mathbb {R}})}\lesssim (tk_0)^{-l}, \end{aligned}$$
(3.43)
$$\begin{aligned}&\Vert \omega ^b\Vert _{{\mathscr {L}}^1(L\cup L^*)\cap {\mathscr {L}}^2(L\cup L^*)\cap {\mathscr {L}}^\infty (L\cup L^*)}\lesssim (tk_0)^{-l},\end{aligned}$$
(3.44)
$$\begin{aligned}&\Vert \omega ^\prime \Vert _{{\mathscr {L}}^2(\Sigma )}\lesssim (tk_0)^{-\frac{1}{4}},\quad \Vert \omega ^\prime \Vert _{{\mathscr {L}}^1(\Sigma )}\lesssim (tk_0)^{-\frac{1}{2}}. \end{aligned}$$
(3.45)

Proof

We first consider \(\omega ^a\). To estimate \(\Vert \omega ^a\Vert _{{\mathscr {L}}^p}\), we compute \(|\omega ^a|\). Noting the boundedness of \(\delta (k)\) and \(\det \delta (k)\) in (3.7) and the estimates in Theorem 3.1, we have

$$\begin{aligned} |\omega ^a|&=\left( |e^{-2it\theta }[\det \delta (k)]h_1^o(k)\delta (k)|^2+\left| \frac{e^{2it\theta }\delta ^{-1}(k)h_2^o(k)}{\det \delta (k)}\right| ^2\right) ^{\frac{1}{2}}\\&\leqslant |e^{-2it\theta }[\det \delta (k)]h_1^o(k)\delta (k)|+\left| \frac{e^{2it\theta }\delta ^{-1}(k)h_2^o(k)}{\det \delta (k)}\right| \\&\lesssim |e^{-2it\theta }h_1^o(k)|+|e^{2it\theta }(k)h_2^o(k)|\\&\lesssim \frac{1}{(1+\vert k\vert ^{2})(tk_0)^{l}}, \end{aligned}$$

which implies that estimate (3.43) holds. Similarly, we can prove by a simple calculation that (3.44) also holds. Next, we prove estimate (3.45). Noting the definition of \(R_1(k)\) in (3.15), we have

$$\begin{aligned} |R_1|\leqslant \dfrac{\vert \sum _{j=0}^m\mu _j(k-k_0)^j\vert }{\vert (k+i)^{m+5}\vert }\lesssim \dfrac{1}{1+\vert k\vert ^{5}}. \end{aligned}$$
(3.46)

Resorting to \(\mathrm {Re}(i\theta )=2u^2k_0^2\) on \(L=\{k=k_0+uk_0e^{\frac{\pi i}{4}}:u\in (-\infty ,+\infty )\}\), we arrive at

$$\begin{aligned} \vert e^{-2it\theta }[\det \delta (k)]R_1(k)\delta (k)\vert \lesssim e^{-4tu^2k_0^2}(1+|k|^5)^{-1}. \end{aligned}$$

It is similar for \(R_2(k)\) on \(L^*\) that

$$\begin{aligned} \vert e^{2it\theta }\delta ^{-1}(k)R_2(k)[\mathrm {det}\delta (k)]^{-1}\vert \lesssim e^{-4tu^2k_0^2}(1+|k|^5)^{-1}. \end{aligned}$$
(3.47)

Then (3.45) holds through direct computations. \(\square \)
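The powers in (3.45) reflect Gaussian integrals of \(e^{-4tu^2k_0^2}\) along L: the \({\mathscr {L}}^1\) norm in u decays like \(t^{-1/2}\) and the \({\mathscr {L}}^2\) norm like \(t^{-1/4}\). A quick numerical check of the \({\mathscr {L}}^1\) scaling (the values of t and \(k_0\) are samples, not from the paper):

```python
import math

def gauss_l1(t, k0, h=1e-4, half_width=5.0):
    """Trapezoid approximation of int_R exp(-4*t*u^2*k0^2) du."""
    n = int(2 * half_width / h)
    s = 0.0
    for j in range(n + 1):
        u = -half_width + j * h
        w = 0.5 if j in (0, n) else 1.0
        s += w * math.exp(-4.0 * t * u * u * k0 * k0)
    return s * h

t, k0 = 10.0, 1.0
a, b = gauss_l1(t, k0), gauss_l1(4.0 * t, k0)
# Exact value is sqrt(pi/(4*t*k0^2)); quadrupling t halves the L^1 norm (t^{-1/2} scaling).
assert abs(a - math.sqrt(math.pi / (4.0 * t * k0 ** 2))) < 1e-8
assert abs(b / a - 0.5) < 1e-6
```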

Lemma 3.3

As \(t\rightarrow \infty \), \((1-C_{\omega ^\prime })^{-1}:{\mathscr {L}}^2(\Sigma )\rightarrow {\mathscr {L}}^2(\Sigma )\) exists and is uniformly bounded:

$$\begin{aligned} \Vert (1-C_{\omega ^\prime })^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )}\lesssim 1,\quad \Vert (1-C_{\omega ^\sharp })^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )}\lesssim 1. \end{aligned}$$
(3.48)

Proof

It follows from Proposition 2.23 and Corollary 2.25 in [9]. \(\square \)

Lemma 3.4

As \(t\rightarrow \infty \),

$$\begin{aligned} \int _\Sigma ((1-C_{\omega ^\sharp })^{-1}I)(\xi )\omega ^\sharp (\xi ) \, \mathrm {d}\xi = \int _{\Sigma ^\prime }((1-C_{\omega ^\prime })^{-1}I)(\xi )\omega ^\prime (\xi ) \, \mathrm {d}\xi +O((tk_0)^{-l}). \end{aligned}$$
(3.49)

Proof

It is easy to see that

$$\begin{aligned}&\left( (1-C_{\omega ^\sharp }\right) ^{-1}I)\omega ^\sharp \nonumber \\&\quad = \left( (1-C_{\omega ^\prime })^{-1}I\right) \omega ^\prime +\left( (1-C_{\omega ^\prime })^{-1}(C_{\omega ^\sharp }-C_{\omega ^\prime })(1-C_{\omega ^\sharp })^{-1}I\right) \omega ^\sharp \nonumber \\&\qquad +((1-C_{\omega ^\prime })^{-1}I)(\omega ^\sharp -\omega ^\prime )\nonumber \\&\quad =\left( (1-C_{\omega ^\prime })^{-1}I\right) \omega ^\prime +\omega ^e+\left( (1-C_{\omega ^\prime })^{-1}(C_{\omega ^e}I)\right) \omega ^\sharp \nonumber \\&\qquad +((1-C_{\omega ^\prime })^{-1}(C_{\omega ^\prime }I))\omega ^e\nonumber \\&\qquad +\left( (1-C_{\omega ^\prime })^{-1}C_{\omega ^e}(1-C_{\omega ^\sharp })^{-1}\right) (C_{\omega ^\sharp }I)\omega ^\sharp . \end{aligned}$$
(3.50)

From Lemma 3.2 and (3.48), we get

$$\begin{aligned}&\Vert \omega ^e\Vert _{{\mathscr {L}}^1(\Sigma )} \leqslant \Vert \omega ^a \Vert _{{\mathscr {L}}^1({\mathbb {R}})}+\Vert \omega ^b\Vert _{{\mathscr {L}}^1(L\cup L^*)}\lesssim (tk_0)^{-l},\\&\Vert \left( (1-C_{\omega ^\prime })^{-1}(C_{\omega ^e}I)\right) \omega ^\sharp \Vert _{{\mathscr {L}}^1(\Sigma )}\\&\quad \leqslant \Vert (1-C_{\omega ^\prime })^{-1} C_{\omega ^e}I\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert \omega ^\sharp \Vert _{{\mathscr {L}}^2(\Sigma )}\\&\quad \leqslant \Vert (1-C_{\omega ^\prime })^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert C_{\omega ^e}I\Vert _{{\mathscr {L}}^2(\Sigma )}\\&\times (\Vert \omega ^a\Vert _{{\mathscr {L}}^2({\mathbb {R}})}+\Vert \omega ^b\Vert _{{\mathscr {L}}^2(L\cup L^*)}+\Vert \omega ^\prime \Vert _{{\mathscr {L}}^2(\Sigma )})\\&\lesssim \Vert \omega ^e\Vert _{{\mathscr {L}}^2(\Sigma )}((tk_0)^{-l}+(tk_0)^{-l}+(tk_0)^{-\frac{1}{4}})\\&\lesssim (tk_0)^{-l-\frac{1}{4}}, \end{aligned}$$
$$\begin{aligned}&\begin{aligned}&\begin{aligned} \Vert \left( (1-C_{\omega ^\prime })^{-1}(C_{\omega ^\prime }I)\right) \omega ^e\Vert _{{\mathscr {L}}^1(\Sigma )}&\leqslant \Vert (1-C_{\omega ^\prime })^{-1} C_{\omega ^\prime }I\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert \omega ^e\Vert _{{\mathscr {L}}^2(\Sigma )}\\&\leqslant \Vert (1-C_{\omega ^\prime })^{-1}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert C_{\omega ^\prime }I\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert \omega ^e\Vert _{{\mathscr {L}}^2(\Sigma )}\\&\lesssim \Vert \omega ^\prime \Vert _{{\mathscr {L}}^2(\Sigma )}(\Vert \omega ^a\Vert _{{\mathscr {L}}^2({\mathbb {R}})}+\Vert \omega ^b\Vert _{{\mathscr {L}}^2(L\cup L^*)})\\&\lesssim (tk_0)^{-l-\frac{1}{4}}, \end{aligned}\\&\Vert \left( (1-C_{\omega ^\prime })^{-1}C_{\omega ^e}(1-C_{\omega ^\sharp })^{-1}\right) (C_{\omega ^\sharp }I)\omega ^\sharp \Vert _{{\mathscr {L}}^1(\Sigma )}\\&\quad \leqslant \Vert (1-C_{\omega ^\prime })^{-1}C_{\omega ^e}(1-C_{\omega ^\sharp })^{-1}(C_{\omega ^\sharp }I)\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert \omega ^\sharp \Vert _{{\mathscr {L}}^2(\Sigma )}\\&\quad \leqslant \Vert (1{-}C_{\omega ^\prime })^{{-}1}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert (1{-}C_{\omega ^\sharp })^{{-}1}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert C_{\omega ^e}\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert C_{\omega ^\sharp }I\Vert _{{\mathscr {L}}^2(\Sigma )}\Vert \omega ^\sharp \Vert _{{\mathscr {L}}^2(\Sigma )}\\&\quad \lesssim \Vert \omega ^e\Vert _{{\mathscr {L}}^\infty (\Sigma )}\Vert \omega ^\sharp \Vert ^2_{{\mathscr {L}}^2(\Sigma )} \lesssim (tk_0)^{-l-\frac{1}{2}}. \end{aligned} \end{aligned}$$

Substituting the above estimates into (3.50), the proof is completed. \(\square \)

Lemma 3.5

As \(t\rightarrow \infty \), the function \({{\tilde{q}}}(x,t)\) has the asymptotic estimate

$$\begin{aligned} \tilde{q}(x,t)=\frac{1}{\pi }\left( \int _{\Sigma ^\prime }((1-C_{\omega ^\prime })^{-1}I)(\xi )\omega ^\prime (\xi )\, \mathrm {d}\xi \right) _{12}+O((tk_0)^{-l}). \end{aligned}$$
(3.51)

Proof

A direct consequence of (3.42) and (3.49). \(\square \)

Set

$$\begin{aligned} M^\prime (k;x,t)=I+\int _{\Sigma ^\prime }\frac{\mu ^\prime (\xi ;x,t)\omega ^\prime (\xi ;x,t)}{\xi -k} \, \frac{\mathrm {d}\xi }{2\pi i}, \end{aligned}$$

where \(\mu ^\prime =(1-C_{\omega ^\prime })^{-1}I\). Then (3.51) is equivalent to

$$\begin{aligned} {{\tilde{q}}}(x,t)=-2i\lim _{k\rightarrow \infty }(k(M^\prime )_{12})+O((tk_0)^{-l}). \end{aligned}$$
(3.52)

Noting that \(M^\sharp (k;x,t)\) in (3.39) is the solution of RH problem (3.33), we can construct a RH problem that \(M^\prime (k;x,t)\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} M_+^\prime (k;x,t)=M_-^\prime (k;x,t)J^\prime (k;x,t), &{} k\in \Sigma ^\prime , \\ M^\prime (k;x,t)\rightarrow I, &{} k\rightarrow \infty ,\\ \end{array}\right. } \end{aligned}$$
(3.53)

where

$$\begin{aligned}&J^\prime =(b_-^\prime )^{-1}b_+^\prime =(I-\omega _-^\prime )^{-1}(I+\omega _+^\prime ),\\&\omega ^\prime =\omega _+^\prime +\omega _-^\prime ,\\&b_+^\prime = {\left\{ \begin{array}{ll} \left( \begin{array}{cc} 1 &{} e^{-2it\theta }[\det \delta (k)]R_1(k)\delta (k)\\ 0 &{} I_{3\times 3}\\ \end{array} \right) , &{} k\in L,\\ I, &{} k\in L^*,\\ \end{array}\right. }\\&b_-^\prime = {\left\{ \begin{array}{ll} I, &{} k\in L,\\ \left( \begin{array}{cc} 1 &{} 0\\ \dfrac{e^{2it\theta }\delta ^{-1}(k)R_2(k)}{\det \delta (k)} &{} I_{3\times 3}\\ \end{array} \right) , &{} k\in L^*.\\ \end{array}\right. } \end{aligned}$$

3.4 Rescaling and further reduction of the RH problems

In the previous subsection, we find that the leading-order asymptotics for solutions of the three-component DNLS equation (1.1) is an integral on \(\Sigma ^\prime \) through the stationary point \(k_0\). Next, we shall change the contour to a cross through the origin. Then we introduce the oriented contour \(\Sigma _0=\{k=uk_0e^{\frac{\pi i}{4}}:u\in {\mathbb {R}}\}\cup \{k=uk_0e^{-\frac{\pi i}{4}}:u\in {\mathbb {R}}\}\) in Fig. 4 and define the scaling operator

$$\begin{aligned} N:{\mathscr {L}}^2(\Sigma ^\prime )\rightarrow & {} {\mathscr {L}}^2(\Sigma _0)\nonumber \\ f(k)\mapsto & {} (Nf)(k)=f\left( k_0+\frac{k}{\sqrt{8tk_0}}\right) . \end{aligned}$$
(3.54)
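The change of variables (3.54) can be sketched as a higher-order function acting on scalar functions (a stand-in for the matrix-valued setting; the sample values of \(k_0\) and t are illustrative):

```python
import math

def scale(f, k0, t):
    """The operator N of (3.54): (N f)(k) = f(k0 + k / sqrt(8*t*k0))."""
    c = math.sqrt(8.0 * t * k0)
    return lambda k: f(k0 + k / c)

# Sample check with f(k) = k: N maps 0 to k0 and sqrt(8*t*k0) to k0 + 1.
k0, t = 2.0, 50.0
Nf = scale(lambda k: k, k0, t)
assert abs(Nf(0.0) - k0) < 1e-12
assert abs(Nf(math.sqrt(8.0 * t * k0)) - (k0 + 1.0)) < 1e-12
```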

Let \({{\hat{\omega }}}=N\omega ^\prime \). Then for a \(4\times 4\) matrix-valued function f,

$$\begin{aligned} NC_{\omega ^\prime }f=N\left( C_+(f\omega _-^\prime )+C_-(f\omega _+^\prime )\right) =C_{{{\hat{\omega }}}}Nf, \end{aligned}$$
(3.55)

which means \(C_{\omega ^\prime }=N^{-1}C_{{{\hat{\omega }}}}N\). A direct computation of the action of the scaling operator on the exponential factor and \(\det \delta (k)\) shows that

$$\begin{aligned} N(e^{-it\theta }\det \delta )=\delta ^0\delta ^1(k), \end{aligned}$$

where \(\delta ^0\) is independent of k and

$$\begin{aligned} \delta ^0=e^{-2itk_0^2}(8t)^\frac{i\nu }{2}e^{\chi (k_0)}. \end{aligned}$$
(3.56)
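In (3.56), the factors \(e^{-2itk_0^2}\) and \((8t)^{\frac{i\nu }{2}}\) are unimodular whenever \(\nu \) is real, so \(\vert \delta ^0\vert =\vert e^{\chi (k_0)}\vert \); a direct check with hypothetical values of t, \(k_0\) and \(\nu \) (not from the paper):

```python
import cmath

# Hypothetical sample values (not from the paper).
t, k0, nu = 50.0, 1.2, -0.05

# A positive real base raised to a purely imaginary power has modulus 1.
phase = cmath.exp(-2j * t * k0 ** 2) * (8 * t) ** (0.5j * nu)
assert abs(abs(phase) - 1.0) < 1e-12
```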

Set \({{\hat{L}}}=\{\sqrt{8tk_0}uk_0e^{\frac{\pi i}{4}}:u\in {\mathbb {R}}\}\). In a similar way as the proof of Lemma 3.35 in [9], we obtain by a straightforward computation that

$$\begin{aligned} \left| (NR_1)(k)(\delta ^1(k))^2-R_1(k_0\pm )k^{-2i\nu }e^{\frac{1}{2}ik^2}\right|\lesssim & {} \frac{\log t}{\sqrt{tk_0}},\quad k\in {{\hat{L}}}, \end{aligned}$$
(3.57)
$$\begin{aligned} \left| (NR_2)(k)(\delta ^1(k))^{-2}-R_2(k_0\pm )k^{2i\nu }e^{-\frac{1}{2}ik^2}\right|\lesssim & {} \frac{\log t}{\sqrt{tk_0}},\quad k\in {{\hat{L}}}^*, \end{aligned}$$
(3.58)

where

$$\begin{aligned} R_1(k_0+)= & {} \gamma (k_0),\quad R_1(k_0-)=-\frac{\gamma (k_0)}{1-k_0|\gamma (k_0)|^2};\\ R_2(k_0+)= & {} k_0\gamma ^\dag (k_0),\quad R_2(k_0-)=-\frac{k_0\gamma ^\dag (k_0)}{1-k_0|\gamma (k_0)|^2}. \end{aligned}$$

We then obtain the expression of \({{\hat{\omega }}}\):

$$\begin{aligned} {{\hat{\omega }}}={\hat{\omega }}_{+}= \left( \begin{array}{cc} 0 &{} (Ns_1)(k)\\ 0 &{} \mathbf{0 }_{3\times 3}\\ \end{array}\right) \end{aligned}$$
(3.59)

on \({{\hat{L}}}\), and

$$\begin{aligned} {{\hat{\omega }}}={\hat{\omega }}_{-}= \left( \begin{array}{cc} 0 &{} 0\\ (Ns_2)(k) &{} \mathbf{0 }_{3\times 3}\\ \end{array}\right) \end{aligned}$$
(3.60)

on \({{\hat{L}}}^*\), where

$$\begin{aligned} s_1(k)=e^{-2it\theta }[\det \delta (k)]R_1(k)\delta (k),\quad s_2(k)=-\dfrac{e^{2it\theta }\delta ^{-1}(k)R_2(k)}{\det \delta (k)}. \end{aligned}$$
(3.61)
Fig. 4. The oriented contour \(\Sigma _0\)

Lemma 3.6

As \(t\rightarrow \infty \) and \(k\in {{\hat{L}}}\), for an arbitrary positive integer l,

$$\begin{aligned} \vert (N{{\tilde{\delta }}})(k)\vert \lesssim t^{-l}, \end{aligned}$$
(3.62)

where

$$\begin{aligned} {{\tilde{\delta }}}(k)=e^{-2it\theta }[R_1(k)\delta (k)-R_1(k)\det \delta (k)]. \end{aligned}$$

Proof

Because \(\delta (k)\) and \(\det \delta (k)\) satisfy the RH problems (3.1) and (3.2), \({{\tilde{\delta }}}(k)\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} {{\tilde{\delta }}}_+(k)={{\tilde{\delta }}}_-(k)(1-k\vert \gamma (k)\vert ^2)+e^{-2it\theta }f(k), &{} k\in (-\infty ,k_0),\\ {{\tilde{\delta }}}(k)\rightarrow 0, &{} k\rightarrow \infty , \end{array}\right. } \end{aligned}$$
(3.63)

where \(f(k)=R_1(k)(k\vert \gamma \vert ^2I-k\gamma ^\dag \gamma )\delta _-(k)\). This RH problem can be solved as below

$$\begin{aligned} {\tilde{\delta }}(k)= & {} X(k)\int _{-\infty }^{k_0}\frac{e^{-2it\theta (\xi )}f(\xi )}{X_{+}(\xi )(\xi -k)}\, \frac{d\xi }{2\pi i},\\ X(k)= & {} \mathrm {exp}\left\{ \frac{1}{2\pi i}\int _{-\infty }^{k_0}\frac{\log (1-\xi |\gamma (\xi )|^{2})}{\xi -k}\, d\xi \right\} . \end{aligned}$$

Resorting to the definition of \(\rho (k)\) in (3.8), one deduces that \(\rho \vert \gamma \vert ^2-\rho \gamma ^\dag \gamma =0\). Therefore we have

$$\begin{aligned} R_1(k\vert \gamma \vert ^{2}I-k\gamma ^{\dag }\gamma )&=(R_1-\rho )(k\vert \gamma \vert ^{2}I-k\gamma ^{\dag }\gamma )\\&=(h_1^o+h_1^a)(k\gamma ^{\dag }\gamma -k\vert \gamma \vert ^{2}I). \end{aligned}$$

Notice that \(|k\gamma ^{\dag }\gamma -k\vert \gamma \vert ^{2}I|\) consists of the moduli of the components of \(\gamma \). Define \(L_t\) by \(\{k=k_0-\frac{1}{t}+uk_0e^{-\frac{3\pi i}{4}}:u\in [0,+\infty )\}\) as in Fig. 5. Similar to Theorem 3.1, f(k) has a decomposition: \(f(k)=f_1(k)+f_2(k)\), where \(f_2(k)\) has an analytic continuation to \(L_{t}\). Moreover, \(f_1(k)\) and \(f_2(k)\) satisfy

$$\begin{aligned} \vert e^{-2it\theta }f_{1}(k)\vert\lesssim & {} \frac{1}{(1+\vert k\vert ^{2})(tk_0)^{l}}, \quad k\in {\mathbb {R}}, \end{aligned}$$
(3.64)
$$\begin{aligned} \vert e^{-2it\theta }f_{2}(k)\vert\lesssim & {} \frac{1}{(1+\vert k\vert ^{2})(tk_0)^{l}}, \quad k\in L_{t}. \end{aligned}$$
(3.65)
Fig. 5. The contour \(L_t\)

For \(k\in {{\hat{L}}}\), we arrive at

$$\begin{aligned} (N{\tilde{\delta }})(k)&=X\left( \frac{k}{\sqrt{8tk_0}}+k_{0}\right) \int _{k_0-\frac{1}{t}}^{k_0}\frac{e^{-2it\theta (\xi )}f(\xi )}{X_+(\xi )\left( \xi -k_0-\frac{k}{\sqrt{8tk_0}}\right) }\, \frac{d\xi }{2\pi i}\\&\quad +X\left( \frac{k}{\sqrt{8tk_0}}+k_{0}\right) \int _{-\infty }^{k_0-\frac{1}{t}}\frac{e^{-2it\theta (\xi )}f_1(\xi )}{X_+(\xi )\left( \xi -k_0-\frac{k}{\sqrt{8tk_0}}\right) }\, \frac{d\xi }{2\pi i}\\&\quad +X\left( \frac{k}{\sqrt{8tk_0}}+k_{0}\right) \int _{-\infty }^{k_0-\frac{1}{t}}\frac{e^{-2it\theta (\xi )}f_2(\xi )}{X_+(\xi )\left( \xi -k_0-\frac{k}{\sqrt{8tk_0}}\right) }\, \frac{d\xi }{2\pi i}\\&=A_{1}+A_{2}+A_{3}, \end{aligned}$$

and a direct calculation shows that

$$\begin{aligned}&\vert A_1\vert \lesssim \left| \int _{k_0-\frac{1}{t}}^{k_0}\frac{e^{-2it\theta (\xi )}f(\xi )}{X_+(\xi )\left( \xi -k_0-\frac{k}{\sqrt{8tk_0}}\right) }\, \mathrm {d}\xi \right| \lesssim t^{-l-1},\\&\vert A_2\vert \lesssim \int _{-\infty }^{k_0-\frac{1}{t}}\frac{|e^{-2it\theta (\xi )}f_1(\xi )|}{\left| \xi -k_0-\frac{k}{\sqrt{8tk_0}}\right| }\, \mathrm {d}\xi \lesssim t^{-l}\sqrt{2}t\int _{0}^{+\infty }\frac{1}{1+\xi ^2}\, \mathrm {d}\xi \lesssim t^{-l+1}. \end{aligned}$$

Similarly, we can evaluate \(A_{3}\) along the contour \(L_{t}\) instead of the interval \((-\infty ,k_0-\frac{1}{t})\) because of Cauchy’s Theorem, and \(|A_{3}|\lesssim t^{-l+1}\). Finally, (3.62) is proved. \(\square \)

Corollary 3.1

When \(k\in {{\hat{L}}}^*\) and as \(t\rightarrow \infty \), for an arbitrary positive integer l,

$$\begin{aligned} \vert (N{{\hat{\delta }}})(k)\vert \lesssim t^{-l}, \end{aligned}$$
(3.66)

where \({\hat{\delta }}(k)=e^{2it\theta }[\delta ^{-1}(k)-(\det \delta (k))^{-1}I]R_2(k)\).

Here we construct \(\omega ^0\) on the contour \(\Sigma _0\). Let

$$\begin{aligned}&\omega ^0=\omega ^0_+= {\left\{ \begin{array}{ll} \left( \begin{array}{cc} 0 &{} (\delta ^0)^2k^{-2i\nu }e^{\frac{1}{2}ik^2}\gamma (k_0)\\ 0 &{} \mathbf{0 }_{3\times 3}\\ \end{array}\right) , &{} k\in \Sigma _0^2,\\ \left( \begin{array}{cc} 0 &{} -(\delta ^0)^2k^{-2i\nu }e^{\frac{1}{2}ik^2}\frac{\gamma (k_0)}{1-k_0\vert \gamma (k_0)\vert ^2}\\ 0 &{} \mathbf{0 }_{3\times 3}\\ \end{array}\right) , &{} k\in \Sigma _0^4,\\ \end{array}\right. } \end{aligned}$$
(3.67)
$$\begin{aligned}&\omega ^0=\omega ^0_-= {\left\{ \begin{array}{ll} \left( \begin{array}{cc} 0 &{} 0\\ -(\delta ^0)^{-2}k^{2i\nu }e^{-\frac{1}{2}ik^2}k_0\gamma ^\dag (k_0) &{} \mathbf{0 }_{3\times 3}\\ \end{array}\right) , &{} k\in \Sigma _0^1,\\ \left( \begin{array}{cc} 0 &{} 0\\ (\delta ^0)^{-2}k^{2i\nu }e^{-\frac{1}{2}ik^2}\frac{k_0\gamma ^\dag (k_0)}{1-k_0\vert \gamma (k_0)\vert ^2} &{} \mathbf{0 }_{3\times 3}\\ \end{array}\right) , &{} k\in \Sigma _0^3.\\ \end{array}\right. } \end{aligned}$$
(3.68)

It follows from (3.57) and (3.58) that, as \(t\rightarrow \infty \),

$$\begin{aligned} \Vert {{\hat{\omega }}}-\omega ^0\Vert _{{\mathscr {L}}^1(\Sigma _0)\cap {\mathscr {L}}^2(\Sigma _0)\cap {\mathscr {L}}^\infty (\Sigma _0)} \lesssim \frac{\log {t}}{\sqrt{tk_0}}. \end{aligned}$$
(3.69)

Theorem 3.3

As \(t\rightarrow \infty \),

$$\begin{aligned} \tilde{q}(x,t)=\frac{1}{\pi \sqrt{8tk_0}}\left( \int _{\Sigma _0}((1-C_{\omega ^0})^{-1}I)(\xi )\omega ^0(\xi )\, \mathrm {d}\xi \right) _{12}+O\left( \frac{\log t}{tk_0}\right) . \end{aligned}$$
(3.70)

Proof

Notice that

$$\begin{aligned}&\left( (1-C_{{{\hat{\omega }}}})^{-1}I\right) {{\hat{\omega }}}-\left( (1-C_{\omega ^0})^{-1}I\right) {\omega ^0}\\&\quad =\left( (1-C_{{{\hat{\omega }}}})^{-1}I\right) ({{\hat{\omega }}}-{\omega ^0})+(1-C_{{{\hat{\omega }}}})^{-1}(C_{{{\hat{\omega }}}}-C_{\omega ^0})(1-C_{\omega ^0})^{-1}I{\omega ^0}\\&\quad =({{\hat{\omega }}}-{\omega ^0})+\left( (1-C_{{{\hat{\omega }}}})^{-1}C_{{{\hat{\omega }}}}I\right) ({{\hat{\omega }}}-{\omega ^0})+(1-C_{{{\hat{\omega }}}})^{-1}(C_{{{\hat{\omega }}}}-C_{\omega ^0})I{\omega ^0}\\&\qquad +(1-C_{{{\hat{\omega }}}})^{-1}(C_{{{\hat{\omega }}}}-C_{\omega ^0})(1-C_{\omega ^0})^{-1}C_{\omega ^0}I{\omega ^0}. \end{aligned}$$
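The second line of this computation uses the second resolvent identity, which in this setting reads

$$\begin{aligned} (1-C_{{{\hat{\omega }}}})^{-1}-(1-C_{\omega ^0})^{-1}=(1-C_{{{\hat{\omega }}}})^{-1}(C_{{{\hat{\omega }}}}-C_{\omega ^0})(1-C_{\omega ^0})^{-1}, \end{aligned}$$

together with the standard expansion \((1-C)^{-1}=1+(1-C)^{-1}C\).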

Utilizing the triangle inequality and inequality (3.69), we have

$$\begin{aligned} \int _{\Sigma _0}\left( (1-C_{{{\hat{\omega }}}})^{-1}I\right) (\xi ){{\hat{\omega }}}(\xi )\,\mathrm {d}\xi =\int _{\Sigma _0}\left( (1-C_{\omega ^0})^{-1}I\right) (\xi ){\omega ^0}(\xi )\,\mathrm {d}\xi +O\left( \frac{\log {t}}{\sqrt{tk_0}}\right) . \end{aligned}$$

According to (3.55), we obtain by using a simple change of variable that

$$\begin{aligned}&\int _{\Sigma ^\prime }\left( (1-C_{\omega ^\prime })^{-1}I\right) (\xi )\omega ^\prime (\xi )\,\mathrm {d}\xi \\&\quad =\int _{\Sigma ^\prime }\left( N^{-1}(1-C_{{{\hat{\omega }}}})^{-1}NI\right) (\xi )\omega ^\prime (\xi )\,\mathrm {d}\xi \\&\quad =\int _{\Sigma ^\prime }\left( (1-C_{{{\hat{\omega }}}})^{-1}I\right) ((\xi -k_0)\sqrt{8tk_0})N\omega ^\prime ((\xi -k_0)\sqrt{8tk_0})\,\mathrm {d}\xi \\&\quad =\frac{1}{\sqrt{8tk_0}}\int _{\Sigma _0}\left( (1-C_{{{\hat{\omega }}}})^{-1}I\right) (\xi ){{\hat{\omega }}}(\xi )\,\mathrm {d}\xi \\&\quad =\frac{1}{\sqrt{8tk_0}}\int _{\Sigma _0}\left( (1-C_{\omega ^0})^{-1}I\right) (\xi )\omega ^0(\xi )\,\mathrm {d}\xi +O\left( \frac{\log t}{tk_0}\right) ,\\ \end{aligned}$$

which, together with (3.51), implies (3.70). \(\square \)

For \(k\in {\mathbb {C}}\backslash \Sigma _0\), set

$$\begin{aligned} M^0(k;x,t)=I+\int _{\Sigma _0}\frac{\left( (1-C_{\omega ^0})^{-1}I\right) (\xi )\omega ^0(\xi )}{\xi -k} \, \frac{\mathrm {d}\xi }{2\pi i}. \end{aligned}$$
(3.71)

Then \(M^0(k;x,t)\) is the solution of the RH problem

$$\begin{aligned} {\left\{ \begin{array}{ll} M^0_+(k;x,t)=M^0_-(k;x,t)J^0(k;x,t), &{} k\in \Sigma _0, \\ M^0(k;x,t)\rightarrow I, &{} k\rightarrow \infty . \end{array}\right. } \end{aligned}$$
(3.72)

where \(J^0=(b^0_-)^{-1}b^0_+=(I-\omega _-^0)^{-1}(I+\omega _+^0)\). In particular, we have

$$\begin{aligned} M^0(k)=I+\frac{M^0_1}{k}+O(k^{-2}), \quad k\rightarrow \infty , \end{aligned}$$
(3.73)

where

$$\begin{aligned} M^0_1=-\int _{\Sigma _0}\left( (1-C_{\omega ^0})^{-1}I\right) (\xi )\omega ^0(\xi )\, \frac{\mathrm {d}\xi }{2\pi i}. \end{aligned}$$
(3.74)
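Formally, (3.73) and (3.74) follow from (3.71) by expanding the Cauchy kernel for large k,

$$\begin{aligned} \frac{1}{\xi -k}=-\frac{1}{k}-\frac{\xi }{k^2}+O(k^{-3}),\quad k\rightarrow \infty , \end{aligned}$$

so that the coefficient of \(k^{-1}\) in (3.71) is exactly \(-\int _{\Sigma _0}\left( (1-C_{\omega ^0})^{-1}I\right) (\xi )\omega ^0(\xi )\, \frac{\mathrm {d}\xi }{2\pi i}\).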

Corollary 3.2

As \(t\rightarrow \infty \),

$$\begin{aligned} \tilde{q}(x,t)=-\frac{i}{\sqrt{2tk_0}}\left( M_1^0\right) _{12}+O\left( \frac{\log t}{tk_0}\right) . \end{aligned}$$
(3.75)

Proof

Substituting (3.74) into (3.70), we immediately obtain (3.75). \(\square \)

3.5 Solving the model problem

In this subsection, we focus on \(M_1^0\) and the RH problem satisfied by \(M^0(k;x,t)\). This RH problem can be transformed into a model problem, and the explicit expression of \(M_1^0\) can then be obtained in terms of the standard parabolic cylinder function. For this purpose, we introduce

$$\begin{aligned} \Psi (k)=H(k)k^{-i\nu \sigma }e^{\frac{1}{4}ik^2\sigma },\quad H(k)=(\delta ^0)^{-\sigma }M^0(k)(\delta ^0)^\sigma . \end{aligned}$$
(3.76)

It is easy to see from (3.72) that

$$\begin{aligned} \Psi _+(k)=\Psi _-(k)v(k_0),\quad v=e^{-\frac{1}{4}ik^2{{\hat{\sigma }}}}k^{i\nu \sigma }J^0(k)k^{-i\nu \sigma }. \end{aligned}$$
(3.77)

The jump matrix is independent of k along each of the four rays \(\Sigma _0^1, \Sigma _0^2, \Sigma _0^3, \Sigma _0^4\), so

$$\begin{aligned} \frac{\mathrm {d}\Psi _+(k)}{\mathrm {d}k}=\frac{\mathrm {d}\Psi _-(k)}{\mathrm {d}k}v(k_0). \end{aligned}$$
(3.78)

Combining (3.77) and (3.78), we obtain

$$\begin{aligned} \frac{\mathrm {d}\Psi _+(k)}{\mathrm {d}k}-\frac{1}{2}ik\sigma \Psi _+(k)=\left( \frac{\mathrm {d}\Psi _-(k)}{\mathrm {d}k}-\frac{1}{2}ik\sigma \Psi _-(k)\right) v(k_0). \end{aligned}$$
(3.79)

Then \((\mathrm {d}\Psi /\mathrm {d}k-\frac{1}{2}ik\sigma \Psi )\Psi ^{-1}\) has no jump discontinuity across any of the four rays. In addition, from the relation between \(\Psi (k)\) and \(H(k)\), we have

$$\begin{aligned} \left( \frac{\mathrm {d}\Psi (k)}{\mathrm {d}k}-\frac{1}{2}ik\sigma \Psi (k)\right) \Psi ^{-1}(k)&=\frac{\mathrm {d}H(k)}{\mathrm {d}k}H^{-1}(k)+\frac{ik}{2}H(k)\sigma H^{-1}(k)\\&\quad -\frac{i\nu }{k}H(k)\sigma H^{-1}(k)-\frac{1}{2}ik\sigma \\&=O(k^{-1})-\frac{i}{2}(\delta ^0)^{-\sigma }[\sigma , M^0_1](\delta ^0)^{\sigma }. \end{aligned}$$

It follows from Liouville's theorem that

$$\begin{aligned} \frac{\mathrm {d}\Psi (k)}{\mathrm {d}k}-\frac{1}{2}ik\sigma \Psi (k)=\beta \Psi (k), \end{aligned}$$
(3.80)

where

$$\begin{aligned} \beta =-\frac{i}{2}(\delta ^0)^{-\sigma }[\sigma , M^0_1](\delta ^0)^{\sigma } =\left( \begin{array}{cc} 0 &{} \beta _{12}\\ \beta _{21} &{} 0 \end{array}\right) . \end{aligned}$$

Moreover,

$$\begin{aligned} (M_1^0)_{12}=i(\delta ^0)^2\beta _{12}. \end{aligned}$$
(3.81)

Set

$$\begin{aligned} \Psi (k) =\left( \begin{array}{cc} \Psi _{11}(k) &{} \Psi _{12}(k)\\ \Psi _{21}(k) &{} \Psi _{22}(k)\\ \end{array}\right) . \end{aligned}$$

From (3.80) and its derivative we obtain

$$\begin{aligned}&\frac{\mathrm {d}^{2}\Psi _{11}(k)}{\mathrm {d}k^{2}}+\left( -\frac{1}{2}i+\frac{1}{4}k^2-\beta _{12}\beta _{21}\right) \Psi _{11}(k)=0,\\&\beta _{12}\Psi _{21}(k)=\frac{\mathrm {d}\Psi _{11}(k)}{\mathrm {d}k}-\frac{1}{2}ik\Psi _{11}(k),\\&\frac{\mathrm {d}^{2}\beta _{12}\Psi _{22}(k)}{\mathrm {d}k^{2}}+\left( \frac{1}{2}i+\frac{1}{4}k^2-\beta _{12}\beta _{21}\right) \beta _{12}\Psi _{22}(k)=0,\\&\Psi _{12}(k)=\frac{1}{\beta _{12}\beta _{21}}\left( \frac{\mathrm {d}\beta _{12}\Psi _{22}(k)}{\mathrm {d}k}+\frac{1}{2}ik\beta _{12}\Psi _{22}(k)\right) . \end{aligned}$$

The above equations can be transformed into Weber's equation by a simple change of variables. As is well known, the standard parabolic-cylinder functions \(D_{a}(\zeta )\) and \(D_{a}(-\zeta )\) constitute a fundamental set of solutions of Weber's equation

$$\begin{aligned} \frac{\mathrm {d}^{2}g(\zeta )}{\mathrm {d}\zeta ^{2}}+\left( a+\frac{1}{2}-\frac{\zeta ^{2}}{4}\right) g(\zeta )=0. \end{aligned}$$

We denote the general solution by

$$\begin{aligned} g(\zeta )=C_{1}D_{a}(\zeta )+C_{2}D_{a}(-\zeta ), \end{aligned}$$

where \(C_1\) and \(C_2\) are two arbitrary constants. Set \(a=-i\beta _{12}\beta _{21}\). Then

$$\begin{aligned} \Psi _{11}(k)= & {} c_1D_a\left( e^{-\frac{\pi i}{4}}k\right) +c_2D_a\left( e^{\frac{3\pi i}{4}}k\right) , \end{aligned}$$
(3.82)
$$\begin{aligned} \beta _{12}\Psi _{22}(k)= & {} c_3D_{-a}\left( e^{\frac{\pi i}{4}}k\right) +c_4D_{-a}\left( e^{-\frac{3\pi i}{4}}k\right) . \end{aligned}$$
(3.83)

where \(c_j\ (j=1,2,3,4)\) are constants.
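The choice \(a=-i\beta _{12}\beta _{21}\) and the arguments \(e^{-\frac{\pi i}{4}}k\) in (3.82) can be verified directly. Setting \(\zeta =e^{-\frac{\pi i}{4}}k\) and \(g(\zeta )=\Psi _{11}(k)\), so that \(k^2=i\zeta ^2\) and \(\frac{\mathrm {d}^{2}}{\mathrm {d}k^{2}}=-i\frac{\mathrm {d}^{2}}{\mathrm {d}\zeta ^{2}}\), the first equation of the system becomes

$$\begin{aligned} -i\frac{\mathrm {d}^{2}g(\zeta )}{\mathrm {d}\zeta ^{2}}+\left( -\frac{1}{2}i+\frac{1}{4}i\zeta ^2-\beta _{12}\beta _{21}\right) g(\zeta )=0, \end{aligned}$$

and multiplying by i yields Weber's equation with \(a+\frac{1}{2}=\frac{1}{2}-i\beta _{12}\beta _{21}\), i.e. \(a=-i\beta _{12}\beta _{21}\).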

We first consider the case \(\arg {k}\in (-\frac{\pi }{4},\frac{\pi }{4})\) and \(k\rightarrow \infty \). In this sector, we have

$$\begin{aligned} \Psi _{11}(k)k^{i\nu }e^{-\frac{ik^{2}}{4}}\rightarrow 1, \quad \Psi _{22}(k)k^{-i\nu }e^{\frac{ik^{2}}{4}}\rightarrow I. \end{aligned}$$

Notice that the parabolic-cylinder function admits the asymptotic expansion [35]

$$\begin{aligned} D_{a}(\zeta )= {\left\{ \begin{array}{ll} \zeta ^{a}e^{-\frac{\zeta ^{2}}{4}}(1+O(\zeta ^{-2})), &{} \left| \arg {\zeta }\right|<\frac{3\pi }{4},\\ \zeta ^{a}e^{-\frac{\zeta ^{2}}{4}}(1+O(\zeta ^{-2}))-\frac{\sqrt{2\pi }}{\Gamma (-a)}e^{a\pi i+\frac{\zeta ^{2}}{4}}\zeta ^{-a-1}(1+O(\zeta ^{-2})), &{} \frac{\pi }{4}<\arg {\zeta }<\frac{5\pi }{4},\\ \zeta ^{a}e^{-\frac{\zeta ^{2}}{4}}(1+O(\zeta ^{-2}))-\frac{\sqrt{2\pi }}{\Gamma (-a)}e^{-a\pi i+\frac{\zeta ^{2}}{4}}\zeta ^{-a-1}(1+O(\zeta ^{-2})), &{} -\frac{5\pi }{4}<\arg {\zeta }<-\frac{\pi }{4}, \end{array}\right. } \end{aligned}$$
(3.84)

as \(\zeta \rightarrow \infty \), where \(\Gamma (\cdot )\) is the Gamma function. Then we arrive at

$$\begin{aligned} \Psi _{11}(k)&=e^{\frac{\pi \nu }{4}}D_{a}\left( e^{-\frac{\pi i}{4}}k\right) , \quad \nu =\beta _{12}\beta _{21},\\ \beta _{12}\Psi _{22}(k)&=\beta _{12}e^{\frac{\pi \nu }{4}}D_{-a}\left( e^{\frac{\pi i}{4}}k\right) . \end{aligned}$$

Besides,

$$\begin{aligned} \frac{\mathrm {d}D_{a}(\zeta )}{\mathrm {d}\zeta }+\frac{\zeta }{2}D_{a}(\zeta )-aD_{a-1}(\zeta )=0. \end{aligned}$$
(3.85)
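For real order and argument, the recurrence (3.85) can be checked numerically; the sketch below uses SciPy's `pbdv`, which returns \(D_v(x)\) together with its derivative. The values `a = 0.3`, `z = 1.7` are arbitrary test values, and `pbdv` handles only real arguments, unlike the complex ones used in the text.

```python
# Sanity check of the recurrence (3.85):
#   dD_a/dz + (z/2) D_a(z) - a D_{a-1}(z) = 0
# for real order and argument, via scipy.special.pbdv,
# which returns the pair (D_v(x), dD_v/dx).
from scipy.special import pbdv

a, z = 0.3, 1.7              # arbitrary real test values
Da, dDa = pbdv(a, z)         # D_a(z) and its z-derivative
Dam1, _ = pbdv(a - 1.0, z)   # D_{a-1}(z)

residual = dDa + 0.5 * z * Da - a * Dam1
print(abs(residual))         # should be at machine-precision level
```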

Then we have

$$\begin{aligned} \beta _{12}\Psi _{21}(k)= & {} e^{\frac{\pi \nu }{4}}e^{-\frac{\pi i}{4}}aD_{a-1}\left( e^{-\frac{\pi i}{4}}k\right) ,\\ \Psi _{12}(k)= & {} \beta _{12}e^{\frac{\pi \nu }{4}}e^{\frac{3\pi i}{4}}D_{-a-1}\left( e^{\frac{\pi i}{4}}k\right) . \end{aligned}$$

For \(\arg {k}\in (\frac{\pi }{4},\frac{3\pi }{4})\) and \(k\rightarrow \infty \), we obtain

$$\begin{aligned} \Psi _{11}(k)k^{i\nu }e^{-\frac{ik^{2}}{4}}\rightarrow 1, \quad \Psi _{22}(k)k^{-i\nu }e^{\frac{ik^{2}}{4}}\rightarrow I, \end{aligned}$$

which imply

$$\begin{aligned}&\Psi _{11}(k)=e^{\frac{\pi \nu }{4}}D_{a}\left( e^{-\frac{\pi i}{4}}k\right) ,&\beta _{12}\Psi _{21}(k)=e^{\frac{\pi \nu }{4}}e^{-\frac{\pi i}{4}}aD_{a-1}\left( e^{-\frac{\pi i}{4}}k\right) ,\\&\beta _{12}\Psi _{22}(k)=\beta _{12}e^{-\frac{3\pi \nu }{4}}D_{-a}\left( e^{-\frac{3\pi i}{4}}k\right) ,&\Psi _{12}(k)=\beta _{12}e^{-\frac{3\pi \nu }{4}}e^{-\frac{\pi i}{4}}D_{-a-1}\left( e^{-\frac{3\pi i}{4}}k\right) . \end{aligned}$$

Along the ray \(\arg k=\frac{\pi }{4}\), one infers

$$\begin{aligned} \Psi _{+}(k)=\Psi _{-}(k) \left( \begin{array}{cc} 1 &{} \gamma (k_0)\\ 0 &{} I_{3\times 3}\\ \end{array}\right) . \end{aligned}$$
(3.86)

From the (1, 2) entry of this jump relation,

$$\begin{aligned}&\beta _{12}e^{\frac{\pi (-i-3\nu )}{4}}D_{-a-1}(e^{-\frac{3\pi i}{4}}k)\\&\quad =e^{\frac{\pi \nu }{4}}D_a\left( e^{-\frac{\pi i}{4}}k\right) \gamma (k_0)+\beta _{12}e^{\frac{\pi (\nu +3i)}{4}}D_{-a-1}\left( e^{\frac{\pi i}{4}}k\right) , \end{aligned}$$

and the identity satisfied by the parabolic-cylinder function [35]

$$\begin{aligned} D_{a}(\pm \zeta )=\frac{\Gamma (a+1)e^{\frac{i\pi a}{2}}}{\sqrt{2\pi }}D_{-a-1}(\pm i\zeta )+\frac{\Gamma (a+1)e^{-\frac{i\pi a}{2}}}{\sqrt{2\pi }}D_{-a-1}(\mp i\zeta ), \end{aligned}$$
(3.87)

we can express \(D_a(e^{-\frac{\pi i}{4}}k)\) in terms of \(D_{-a-1}(e^{-\frac{3\pi i}{4}}k)\) and \(D_{-a-1}(e^{\frac{\pi i}{4}}k)\). Equating the coefficients of these two linearly independent functions, we obtain

$$\begin{aligned} \beta _{12}=e^{\frac{\pi i}{4}}e^{\frac{\pi \nu }{2}}\frac{\Gamma (a+1)}{\sqrt{2\pi }}\gamma (k_0) =\frac{e^{-\frac{\pi i}{4}}e^{\frac{\pi \nu }{2}}\nu \Gamma (-i\nu )}{\sqrt{2\pi }}\gamma (k_0). \end{aligned}$$
(3.88)
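The second equality in (3.88) is just the functional equation \(\Gamma (a+1)=a\Gamma (a)\) with \(a=-i\nu \), combined with \(e^{\frac{\pi i}{4}}\cdot (-i)=e^{-\frac{\pi i}{4}}\). A quick numerical confirmation (with a hypothetical test value \(\nu =0.4\); `scipy.special.gamma` accepts complex arguments):

```python
# Check e^{i pi/4} Gamma(a+1) = e^{-i pi/4} nu Gamma(-i nu)
# with a = -i*nu, via the functional equation Gamma(a+1) = a*Gamma(a).
import numpy as np
from scipy.special import gamma

nu = 0.4                      # hypothetical test value of nu
a = -1j * nu
lhs = np.exp(1j * np.pi / 4) * gamma(a + 1)
rhs = np.exp(-1j * np.pi / 4) * nu * gamma(-1j * nu)
err = abs(lhs - rhs)
```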

From (2.31), we have

$$\begin{aligned} qq^\dag ={{\tilde{q}}} {{\tilde{q}}}^\dag ,\quad q^\dag q=W{{\tilde{q}}}^\dag \tilde{q}W^{-1}. \end{aligned}$$
(3.89)

Denote \({{\tilde{\nu }}}(k)=\frac{1}{2\pi }\log (1-k\vert \gamma (k)\vert ^2)\). Then a simple computation shows that

$$\begin{aligned} \begin{aligned} \int _{-\infty }^xqq^\dag \, \mathrm {d}x^\prime&=\int _{-\infty }^x\left| {{\tilde{q}}}\right| ^2\, \mathrm {d}x^\prime \\&=\frac{1}{4t}\int _{-\infty }^x\frac{-2\nu }{k_0}\, \mathrm {d}x^\prime +O\left( \frac{\log {t}}{\sqrt{tk_0}}\right) \\&=\int _{-\infty }^{k_0}\frac{2{{\tilde{\nu }}}(s)}{s}\, \mathrm {d}s+O\left( \frac{\log {t}}{\sqrt{tk_0}}\right) . \end{aligned} \end{aligned}$$
(3.90)

Applying a similar method to the term \(q^\dag q\), we obtain

$$\begin{aligned} W^{-1}=I-i\int _{-\infty }^{k_0}\frac{{{\tilde{\nu }}}(s)}{s(1-e^{-2\pi {{\tilde{\nu }}}(s)})}\gamma ^\dag (s)\gamma (s)\, \mathrm {d}s+O\left( \frac{\log t}{\sqrt{tk_0}}\right) . \end{aligned}$$
(3.91)

Finally, (1.2) follows from (2.31), (3.75), (3.81), (3.88), (3.90) and (3.91).