1 Introduction

As one of the most important theories for nonlinear integrable PDEs over the last 50 years, the inverse scattering transform (IST) has mostly been pursued under zero boundary conditions (ZBCs), i.e., for potentials that vanish as the spatial variable becomes large. However, recent studies have shown that nonzero boundary conditions (NZBCs) are relevant to the study of modulation instability and the generation mechanism of rogue waves [1,2,3,4]. From a mathematical viewpoint, such problems are less well characterized. A natural issue, therefore, is to formulate the IST in the NZBC case. Specifically, we focus on the N-component nonlinear Schrödinger (NLS) equation

$$\begin{aligned} \textrm{i}{\textbf{q}}_t+{\textbf{q}}_{xx}+2\kappa {\textbf{q}}{\textbf{q}}^\dag {\textbf{q}}={\textbf{0}},\quad {\textbf{q}}=(q_1,\ldots ,q_N)^T, \end{aligned}$$
(1.1)

where \(\kappa =1\) (resp. \(\kappa =-1\)) corresponds to the focusing (resp. defocusing) case. It is well known that the scalar version is the celebrated NLS equation and the 2-component case is the Manakov system. There are two main motivations for studying the multi-component case.

On the one hand, the N-component NLS equation not only possesses remarkably rich symmetries but also arises in many physical fields, such as nonlinear optics, fluid mechanics, plasma physics and multi-component Bose-Einstein condensates [5,6,7,8]. On the other hand, let us recall the research on the IST for the N-component NLS equation. The scalar case with ZBCs was developed in Ref. [9] (see also Refs. [10,11,12,13]), the 2-component case was dealt with in Ref. [7], and the theory extends to any multi-component NLS equation with ZBCs in a straightforward way [14, 15]. Unlike the case of ZBCs, the IST with NZBCs is more complicated, because the spectral parameter lies on a two-sheeted Riemann surface rather than in the complex plane. The early work on the NLS equation with NZBCs appeared in Refs. [16, 17] and was revisited in Refs. [18,19,20,21]. The defocusing Manakov system was treated in Ref. [22] and developed further in Ref. [23]. The focusing Manakov system with NZBCs was studied in Ref. [24] using similar methods. By generalizing the tensor approach, an important advance on the multi-component defocusing NLS equation with NZBCs was outlined in Ref. [25]. Recently, a more rigorous analysis of the IST for the 3-component defocusing NLS equation with NZBCs was presented in Ref. [26], where it was pointed out that, for the multi-component focusing NLS equation with NZBCs, the complete theory of the inverse scattering transform is still open. Unfortunately, the above methods cannot be applied directly to the arbitrary N-component focusing NLS equation with NZBCs. Although several developments have been made in Refs. [23,24,25,26], some crucial problems remain, the most important of which are the symmetry of the jump matrix, the existence and uniqueness of the solution of the Riemann–Hilbert (RH) problem, and the verification of the reconstruction formula.
Furthermore, it is well known that the focusing and defocusing NLS equations are fundamentally different: the IST for the focusing case is much more involved than that for the defocusing case, because there are four different fundamental domains of analyticity instead of two.

The purpose of this work is to overcome some challenging difficulties and to present a characterization of the IST for the N-component focusing NLS equation

$$\begin{aligned} \textrm{i}{\textbf{q}}_t+{\textbf{q}}_{xx}+2{\textbf{q}}{\textbf{q}}^\dag {\textbf{q}}-2q_0^2{\textbf{q}}={\textbf{0}},\quad N\geqslant 2,\quad q_0>0, \end{aligned}$$
(1.2)

with NZBCs for large \(\vert x\vert \):

$$\begin{aligned} \lim _{x\rightarrow \pm \infty }{\textbf{q}}(x,t)={\textbf{q}}_\pm ={\textbf{q}}_0\textrm{e}^{\textrm{i}\theta _\pm },\quad \text{ for } \text{ any } \text{ fixed }\ t\in {\mathbb {R}}^+, \end{aligned}$$
(1.3)

where \({\textbf{q}}_0\) is a constant complex-valued vector of modulus \(q_0 \) and \(\theta _\pm \in [0,2\pi )\). Indeed, by the two gauge transformations \({\textbf{q}}(x,t)\rightarrow {\textbf{q}}(x,t)\textrm{e}^{-2\textrm{i}q_0^2t}\) and \({\textbf{q}}(x,t)\rightarrow q_0 {\textbf{q}}( q_0x, q_0^2 t)\), Eq. (1.2) can be converted into the N-component focusing NLS equation (1.1) and the multi-component Gross–Pitaevskii equation \( \textrm{i}{\textbf{q}}_t+{\textbf{q}}_{xx}+2{\textbf{q}}{\textbf{q}}^\dag {\textbf{q}}-2{\textbf{q}}={\textbf{0}}\), respectively. In addition, given the U(N) invariance of Eq. (1.2), without loss of generality, \({\textbf{q}}_0\) can be chosen in the form \({\textbf{q}}_0=(\underbrace{0,\ldots ,0}_{N-1},q_0)^T\).
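The phase gauge transformation can be spot-checked symbolically in the scalar reduction. In the sketch below (illustrative only; `w` plays the role of a solution of (1.1), `wb` stands for its conjugate and is treated as an independent function, a standard device; the sign of the phase follows one common convention), the residuals of (1.1) and (1.2) differ only by the phase factor, so the transformation indeed maps solutions to solutions:

```python
import sympy as sp

x, t, q0 = sp.symbols('x t q0', real=True)
w = sp.Function('w')(x, t)       # solution of the scalar focusing NLS (1.1)
wb = sp.Function('wb')(x, t)     # stands for the complex conjugate of w

# residual of (1.1), scalar case: i w_t + w_xx + 2 w^2 wb
res1 = sp.I*sp.diff(w, t) + sp.diff(w, x, 2) + 2*w**2*wb

# gauge transformation q = w * exp(-2 i q0^2 t); the conjugate gets the opposite phase
q = w*sp.exp(-2*sp.I*q0**2*t)
qb = wb*sp.exp(2*sp.I*q0**2*t)

# residual of (1.2): i q_t + q_xx + 2 q^2 qb - 2 q0^2 q
res2 = sp.I*sp.diff(q, t) + sp.diff(q, x, 2) + 2*q**2*qb - 2*q0**2*q

# the two residuals differ only by the overall phase factor
assert sp.simplify(res2 - res1*sp.exp(-2*sp.I*q0**2*t)) == 0
```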

As will be shown, for the N-component focusing NLS equation with NZBCs the analysis becomes extremely difficult, so we have to introduce some new concepts and new tools. The innovation of this paper is mainly reflected in the following aspects: (i) Instead of the method in Ref. [18], two modified Lax pairs are first used to set up the Volterra integral equations and investigate the analyticity properties of the scattering matrix entries. In addition, compared with Ref. [18], a slightly weaker nonzero boundary condition is required in this context. (ii) Instead of the tensor approach, a generalized cross product operation in \({\mathbb {C}}^{N+1}\), consistent with the usual cross product in \({\mathbb {R}}^3\), is introduced to generate the auxiliary eigenfunctions. The decompositions of the auxiliary eigenfunctions and the symmetries of a complete set of eigenfunctions are established by virtue of the adjugate matrix. In particular, some essential identities, which may also be useful for other ISTs, are proved for the first time. (iii) Unlike the cases with ZBCs or the scalar case with NZBCs, it is not obvious for the multi-component cases with NZBCs that the inverse problem can be solved uniquely. In this paper, the existence and uniqueness of the solution of the inverse problem are proved by virtue of Zhou’s vanishing lemma (see Refs. [27, 28]), and the reconstruction formula is verified by the dressing method (see Ref. [29]).

The organization of this paper is as follows: In Sect. 2, we prove that the direct scattering problem is well defined for potentials \({\textbf{q}}\) such that \({\textbf{q}}(\cdot ,t)-{\textbf{q}}_\pm \) lies in an appropriate functional space. As a consequence, we obtain integral representations for the scattering data and establish the analyticity of the eigenfunctions and the scattering data. In Sect. 3, we prove that the solution of the N-component focusing NLS equation with NZBCs can be expressed in terms of the unique solution of a \(3\times 3\) block matrix RH problem, formulated directly from a combination of the eigenfunctions and the scattering data. In the reflectionless case, some exact solutions are obtained in Sect. 4.

The following basic notations will be used throughout this paper: We denote by \({\bar{z}}\) the complex conjugate of a complex number z, and denote \({\hat{z}}=-\frac{q_0^2}{z}\). When used with a matrix \({\textbf{A}}\), \(\bar{{\textbf{A}}}\) denotes the element-wise complex conjugate, \({\textbf{A}}^T\) denotes the transpose, \({\textbf{A}}^\dag \) denotes the conjugate transpose. In addition, we use \({\textbf{A}}^*\) to denote the adjugate of a square matrix \({\textbf{A}}\). We use \({\textbf{I}}\) and \({\textbf{0}}\) to denote an appropriately sized identity and zero matrix, respectively. For any (matrix-valued) function f(z), we denote \({\tilde{f}}(z)=\overline{f({\bar{z}})}\). Let

$$\begin{aligned}{} & {} D_1=\left\{ z\in {\mathbb {C}}|\textrm{Im}z>0, \vert z\vert>q_0\right\} ,\quad D_2=\left\{ z\in {\mathbb {C}}|\textrm{Im}z<0,\vert z\vert>q_0\right\} ,\\{} & {} D_3=\left\{ z\in {\mathbb {C}}|\textrm{Im}z<0, \vert z\vert<q_0\right\} , \quad D_4=\left\{ z\in {\mathbb {C}}|\textrm{Im}z>0, \vert z\vert <q_0\right\} ,\\{} & {} {\mathbb {C}}^\pm =\left\{ z\in {\mathbb {C}}|\textrm{Im}z \gtrless 0\right\} , \qquad \qquad \ \ \quad C_0=\{z\in {\mathbb {C}}|\vert z\vert =q_0\},\\{} & {} \Gamma _\pm ={\mathbb {R}}\cup ({\mathbb {C}}^\mp \cap C_0), \qquad \qquad \qquad \quad \Gamma ={\mathbb {R}}\cup C_0. \end{aligned}$$
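These domains can be made concrete with a tiny classifier; the helper below is an illustrative sketch (with \(q_0=1\)) of the conventions just introduced:

```python
import numpy as np

q0 = 1.0

def region(z, tol=1e-12):
    """Classify z according to the domains D1..D4, the circle C0, and the real line."""
    if abs(z.imag) < tol:
        return 'R'
    if abs(abs(z) - q0) < tol:
        return 'C0'
    if z.imag > 0:
        return 'D1' if abs(z) > q0 else 'D4'
    return 'D2' if abs(z) > q0 else 'D3'

assert region(2j) == 'D1'                    # upper half plane, outside C0
assert region(-2j) == 'D2'                   # lower half plane, outside C0
assert region(-0.5j) == 'D3'                 # lower half plane, inside C0
assert region(0.5j) == 'D4'                  # upper half plane, inside C0
assert region(q0*np.exp(0.3j)) == 'C0'       # on the circle of radius q0
assert region(3.0 + 0j) == 'R'               # on the real line
```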

Furthermore, for a set D in the complex plane \({\mathbb {C}}\), \({\bar{D}}\) represents its closure. For an \((N+1)\)-order square matrix \({\textbf{A}}\), unless otherwise specified, we write it in block form as \({\textbf{A}}=({\textbf{A}}_1,{\textbf{A}}_2,{\textbf{A}}_3)\), where \({\textbf{A}}_1\) represents the first column, \({\textbf{A}}_3\) the last column, and \({\textbf{A}}_2\) the remaining columns. The notation "\({\textbf{A}}(z)\) holds for \(z\in ({\mathcal {D}}_1,{\mathcal {D}}_2,{\mathcal {D}}_3)\)" means that \( {\textbf{A}}_1(z)\), \({\textbf{A}}_2(z)\) and \({\textbf{A}}_3(z)\) hold for \(z\in {\mathcal {D}}_1\), \({\mathcal {D}}_2\) and \({\mathcal {D}}_3\), respectively. Also, \({\textbf{A}}\) is sometimes rewritten in another block form as

$$\begin{aligned} {\textbf{A}}=\begin{pmatrix} {\textbf{A}}_{11}&{} {\textbf{A}}_{12}&{} {\textbf{A}}_{13}\\ {\textbf{A}}_{21}&{}{\textbf{A}}_{22}&{}{\textbf{A}}_{23}\\ {\textbf{A}}_{31}&{}{\textbf{A}}_{32}&{}{\textbf{A}}_{33}\\ \end{pmatrix}, \end{aligned}$$

where \({\textbf{A}}_{11}\) and \({\textbf{A}}_{33}\) are scalar.

Fig. 1

(Left) The regions \(D_+\) (shaded) and \(D_-\) (white) in the complex z-plane. (Right) The oriented contours for the RH problem

2 Direct problem

2.1 Lax pair, Riemann surface and uniformization

The N-component focusing NLS equation (1.2) admits the Lax pair:

$$\begin{aligned} \psi _x={\textbf{U}}\psi ,\quad \psi _t={\textbf{V}}\psi , \end{aligned}$$
(2.1)

with

$$\begin{aligned}&\textbf{U}(x,t;k)=-\textrm{i}k\sigma +\textbf{Q},\quad{} & {} \textbf{V}(x,t;k)=2\textrm{i}k^2\sigma -2k\textbf{Q}-\textrm{i}\sigma (\textbf{Q}_x-\textbf{Q}^2-q_0^2\textbf{I}),\nonumber \\ \end{aligned}$$
(2.2)
$$\begin{aligned}&\sigma =\begin{pmatrix} 1&{}\textbf{0}\\ \textbf{0}&{}-\textbf{I} \end{pmatrix},{} & {} \textbf{Q}(x,t)=\begin{pmatrix} 0&{}-\textbf{q}^\dag \\ \textbf{q}&{}\textbf{0} \end{pmatrix}, \end{aligned}$$
(2.3)

where \({\textbf{q}}=(q_1,\ldots ,q_N)^T\), and k is a constant spectral parameter. In order to introduce the Jost solutions, it is necessary to study the asymptotic spectral problems

$$\begin{aligned} \phi _x={\textbf{U}}_\pm \phi ,\quad \phi _t={\textbf{V}}_\pm \phi , \end{aligned}$$
(2.4)

with

$$\begin{aligned}&{\textbf{U}}_\pm =-\textrm{i}k\sigma +{\textbf{Q}}_\pm ,\quad {\textbf{Q}}_\pm =\begin{pmatrix} 0&{}-\textbf{q}^\dag _\pm \\ \textbf{q}_\pm &{}\textbf{0}\end{pmatrix}, \end{aligned}$$
(2.5a)
$$\begin{aligned}&{\textbf{V}}_\pm =2\textrm{i}k^2\sigma -2k{\textbf{Q}}_\pm +\textrm{i}\sigma {\textbf{Q}}_\pm ^2+\textrm{i}q_0^2\sigma . \end{aligned}$$
(2.5b)

The eigenvalues of \({\textbf{U}}_\pm \) are \(\pm \textrm{i}\lambda \) and \(\textrm{i}k\) (with multiplicity \(N-1\)), the eigenvalues of \({\textbf{V}}_\pm \) are \(\pm 2\textrm{i}k\lambda \) and \(-\textrm{i}(k^2+\lambda ^2)\) (with multiplicity \(N-1\)), where

$$\begin{aligned} \lambda (k)=\sqrt{k^2+q_0^2}. \end{aligned}$$
(2.6)

We fix the branch cut \([-\textrm{i}q_0,\textrm{i}q_0]\), away from which \(\sqrt{k^2+q_0^2}\) is well defined by setting \(\arg (k\pm \textrm{i}q_0)\in [-\frac{\pi }{2},\frac{3\pi }{2})\). The two branches, \(\lambda \) and \(-\lambda \), do not get mixed up with each other, but are interchanged in passing from one edge of the cut \([-\textrm{i}q_0,\textrm{i}q_0]\) to the other. By gluing two copies of the cut plane, one for each branch, one obtains a compactified two-sheeted Riemann surface \(\Upsilon =\left\{ (k,\lambda )\in {\mathbb {C}}^2|\lambda ^2=k^2+q_0^2\right\} \), whose first (resp. second) sheet carries the branch \(\lambda \) (resp. \(-\lambda \)). The multi-valued function \(\lambda =\lambda (k)\) becomes a single-valued function \(\lambda = \lambda ({\textbf{P}})\) of a point \({\textbf{P}}\) on the Riemann surface \(\Upsilon \): if \({\textbf{P}}=(k,\lambda )\in \Upsilon \), then \(\lambda ({\textbf{P}})=\lambda \) (the projection onto the \(\lambda \)-axis). Define the uniformization map \(z:\Upsilon \rightarrow {\mathbb {C}}\),

$$\begin{aligned} \quad z(k,\lambda )=k+\lambda , \end{aligned}$$
(2.7)

whose inverse map is

$$\begin{aligned} k=\frac{1}{2}\left( z+{\hat{z}}\right) ,\quad \lambda =\frac{1}{2}\left( z-{\hat{z}}\right) , \end{aligned}$$
(2.8)

where \({\hat{z}}=-\frac{q_0^2}{z}\). Note that the branch cut is mapped onto \(C_0\), the first (resp. second) sheet is mapped onto the exterior (resp. interior) of \(C_0\), and the point at infinity on the first (resp. second) sheet is mapped to \(z=\infty \) (resp. \(z=0\)) (see [19, 24] for further details).
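As a quick sanity check (not part of the original analysis), one can verify numerically that (2.7)–(2.8) are mutually inverse, that \(\lambda ^2=k^2+q_0^2\) holds identically on the image, and that the circle \(C_0\) corresponds to the branch cut; a minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
q0 = 1.5

# sample points z in the complex plane (avoiding z = 0, where zhat blows up)
z = rng.standard_normal(100) + 1j*rng.standard_normal(100)
z = z[np.abs(z) > 1e-3]
zhat = -q0**2 / z

# inverse map (2.8)
k = (z + zhat) / 2
lam = (z - zhat) / 2

# lambda^2 = k^2 + q0^2 on the Riemann surface, and z = k + lambda recovers z
assert np.allclose(lam**2, k**2 + q0**2)
assert np.allclose(k + lam, z)

# the circle |z| = q0 corresponds to the branch cut: there k lies on [-i q0, i q0]
zc = q0*np.exp(1j*rng.uniform(0, 2*np.pi, 50))
kc = (zc - q0**2/zc)/2
assert np.allclose(kc.real, 0, atol=1e-12) and np.all(np.abs(kc.imag) <= q0 + 1e-12)
```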

It is easy to see that two solution matrices of the asymptotic spectral problems (2.4) read

$$\begin{aligned} \phi _\pm (x,t;z)={\textbf{E}}_\pm (z)\textrm{e}^{\textrm{i}{\varvec{\Theta }}(x,t;z)}, \end{aligned}$$
(2.9)

where

$$\begin{aligned} \textbf{E}_{\pm }(z)=\textbf{I}-\frac{\textrm{i}}{z}\sigma \textbf{Q}_\pm =\begin{pmatrix} 1 &{} \textbf{0}&{} \frac{\textrm{i}\bar{q}_\pm }{z}\\ \textbf{0}&{}\textbf{I} &{} \textbf{0}\\ \frac{\textrm{i}q_\pm }{z}&{}\textbf{0}&{}1 \end{pmatrix},\quad q_\pm =q_0\textrm{e}^{\textrm{i}\theta _\pm },\end{aligned}$$
(2.10)

and \({\varvec{\Theta }}(x,t;z)\) is an \((N+1)\times (N+1)\) matrix defined by

$$\begin{aligned}{} & {} {\varvec{\Theta }}(x,t;z)= {\text {diag}}(\theta _1(x,t;z),\underbrace{\theta _2(x,t;z),\ldots ,\theta _2(x,t;z)}_{N-1},-\theta _1(x,t;z)),\nonumber \\ \end{aligned}$$
(2.11)
$$\begin{aligned}{} & {} \theta _1(x,t;z)=-\lambda x+2k\lambda t, \quad \theta _2(x,t;z)=kx-(k^2+\lambda ^2)t. \end{aligned}$$
(2.12)

Indeed, each column of \({\textbf{E}}_{\pm }(z)\) is a common eigenvector of \({\textbf{U}}_\pm \) and \({\textbf{V}}_\pm \), and

$$\begin{aligned}&\det ({\textbf{E}}_\pm (z))=1+\frac{q_0^2}{z^2}\triangleq \gamma (z), \end{aligned}$$
(2.13)
$$\begin{aligned}&{\textbf{E}}_\pm (z){\textbf{E}}_\pm ^\dag ({\bar{z}})={\textbf{E}}_\pm ^\dag ({\bar{z}}){\textbf{E}}_\pm (z)={\text {diag}}(\gamma (z),\underbrace{1,\ldots ,1}_{N-1},\gamma (z))\triangleq {\textbf{H}}(z). \end{aligned}$$
(2.14)
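The eigenvector statement and the identities (2.13)–(2.14) are easy to spot-check numerically. The numpy sketch below (illustrative only, for \(N=2\) with the U(N)-reduced boundary value \({\textbf{q}}_+=(0,q_+)^T\)) verifies that \({\textbf{U}}_+{\textbf{E}}_+=\textrm{i}{\textbf{E}}_+{\varvec{\Lambda }}\) with \({\varvec{\Lambda }}={\text {diag}}(-\lambda ,k,\lambda )\), along with (2.13) and (2.14):

```python
import numpy as np

q0, theta = 1.3, 0.7
qp = q0*np.exp(1j*theta)        # scalar boundary value q_+ (N = 2, q_0 = (0, q0)^T)
z = 0.8 + 0.6j                  # a generic point off the singular set
zhat = -q0**2/z
k, lam = (z + zhat)/2, (z - zhat)/2

sigma = np.diag([1.0, -1.0, -1.0])
Q = np.array([[0, 0, -np.conj(qp)],    # Q_+ in the block form (2.3), q = (0, q_+)^T
              [0, 0, 0],
              [qp, 0, 0]])
U = -1j*k*sigma + Q

def E(w):
    # E_+(w) from (2.10)
    return np.array([[1, 0, 1j*np.conj(qp)/w],
                     [0, 1, 0],
                     [1j*qp/w, 0, 1]])

Lam = np.diag([-lam, k, lam])          # x-derivative of the phase matrix Theta
gamma = 1 + q0**2/z**2

# columns of E_+ are eigenvectors of U_+: U_+ E_+ = i E_+ Lambda
assert np.allclose(U @ E(z), 1j*E(z) @ Lam)
# (2.13): det E_+(z) = gamma(z)
assert np.isclose(np.linalg.det(E(z)), gamma)
# (2.14): E_+(z) E_+^dag(zbar) = E_+^dag(zbar) E_+(z) = H(z)
Edag = E(np.conj(z)).conj().T
H = np.diag([gamma, 1, gamma])
assert np.allclose(E(z) @ Edag, H) and np.allclose(Edag @ E(z), H)
```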

2.2 Jost solutions and scattering matrix

We look for two fundamental solution matrices \(\psi _\pm (x,t;z)\) of (2.1), also known as Jost solutions, that satisfy the boundary conditions

$$\begin{aligned} \psi _\pm (x,t;z)= \phi _\pm (x,t;z)+o(1), \quad x\rightarrow \pm \infty ,\quad z\in (\Gamma _\pm ,{\mathbb {R}},\Gamma _\pm ). \end{aligned}$$
(2.15)

The reason we are especially interested in \((\Gamma _\pm ,{\mathbb {R}},\Gamma _\pm )\) is that there \(\psi _\pm (x,t;z)\) are oscillatory rather than exponentially decreasing or increasing for large \(\vert x\vert \).

Introduce the modified eigenfunctions

$$\begin{aligned} \mu _\pm (x,t;z)=\psi _\pm (x,t;z)\textrm{e}^{-\textrm{i}\varvec{\Theta }(x,t;z)}, \end{aligned}$$
(2.16)

by which two equivalent forms are obtained as follows:

$$\begin{aligned}{} & {} ({\textbf{E}}_\pm ^{-1}\mu _\pm )_x=[\textrm{i}{\varvec{\Lambda }}, {\textbf{E}}_\pm ^{-1}\mu _\pm ]+{\textbf{E}}_\pm ^{-1} \Delta {\textbf{Q}}_\pm \mu _\pm , \end{aligned}$$
(2.17a)
$$\begin{aligned}{} & {} ({\textbf{E}}_\mp ^{-1}\mu _\pm )_x=[\textrm{i}{\varvec{\Lambda }}, {\textbf{E}}_\mp ^{-1}\mu _\pm ]+{\textbf{E}}_\mp ^{-1} \Delta {\textbf{Q}}_\mp \mu _\pm , \end{aligned}$$
(2.17b)

where \({\varvec{\Lambda }}(z)={\text {diag}}(-\lambda ,\underbrace{k,\ldots ,k}_{N-1},\lambda )\), \(\Delta {\textbf{Q}}_\pm ={\textbf{Q}}- {\textbf{Q}}_\pm \), the (xtz)-dependence is omitted for brevity. Equation (2.17) is equivalent to

$$\begin{aligned}{} & {} (\textrm{e}^{-\textrm{i}x{\varvec{\Lambda }}}{\textbf{E}}_\pm ^{-1}\mu _\pm \textrm{e}^{\textrm{i}x{\varvec{\Lambda }}})_x=\textrm{e}^{-\textrm{i}x{\varvec{\Lambda }}}{\textbf{E}}_\pm ^{-1}\Delta {\textbf{Q}}_\pm \mu _\pm \textrm{e}^{\textrm{i}x{\varvec{\Lambda }}}, \end{aligned}$$
(2.18a)
$$\begin{aligned}{} & {} (\textrm{e}^{-\textrm{i}x{\varvec{\Lambda }}}{\textbf{E}}_\mp ^{-1}\mu _\pm \textrm{e}^{\textrm{i}x{\varvec{\Lambda }}})_x=\textrm{e}^{-\textrm{i}x{\varvec{\Lambda }}}{\textbf{E}}_\mp ^{-1}\Delta {\textbf{Q}}_\mp \mu _\pm \textrm{e}^{\textrm{i}x{\varvec{\Lambda }}}. \end{aligned}$$
(2.18b)

Integrating (2.18a) from \(\pm \infty \) to x, and noting the asymptotics

$$\begin{aligned} \mu _\pm = {\textbf{E}}_\pm +o(1),\quad x\rightarrow \pm \infty , \end{aligned}$$
(2.19)

we find that \(\mu _\pm \) satisfy the Volterra integral equations

$$\begin{aligned} \mu _\pm = {\textbf{E}}_\pm +\int _{\pm \infty }^x {\textbf{E}}_\pm \textrm{e}^{\textrm{i}(x-y){\varvec{\Lambda }}}{\textbf{E}}_\pm ^{-1}\Delta {\textbf{Q}}_\pm \mu _\pm \textrm{e}^{-\textrm{i}(x-y){\varvec{\Lambda }}}\textrm{d}y. \end{aligned}$$
(2.20)

Also, integrating (2.18b) from 0 to x, we obtain

$$\begin{aligned} \mu _\pm ={\textbf{E}}_\mp \textrm{e}^{\textrm{i}x{\varvec{\Lambda }}}{\textbf{E}}_\mp ^{-1}\mu _\pm (0,t;z)\textrm{e}^{-\textrm{i}x{\varvec{\Lambda }}}+\int _0^x {\textbf{E}}_\mp \textrm{e}^{\textrm{i}(x-y){\varvec{\Lambda }}}{\textbf{E}}_\mp ^{-1}\Delta {\textbf{Q}}_\mp \mu _\pm \textrm{e}^{-\textrm{i}(x-y){\varvec{\Lambda }}}\textrm{d}y.\nonumber \\ \end{aligned}$$
(2.21)

Combining (2.20) and (2.21), we arrive at

$$\begin{aligned} \mu _\pm&= {\textbf{E}}_\pm +\int _{\pm \infty }^x {\textbf{E}}_\pm \textrm{e}^{\textrm{i}(x-y){\varvec{\Lambda }}}{\textbf{E}}_\pm ^{-1}\Delta {\textbf{Q}}_\pm \mu _\pm \textrm{e}^{-\textrm{i}(x-y){\varvec{\Lambda }}}\textrm{d}y,\quad x\in {\mathbb {R}}^\pm , \end{aligned}$$
(2.22a)
$$\begin{aligned} \mu _\pm&={\textbf{E}}_\mp \textrm{e}^{\textrm{i}x{\varvec{\Lambda }}}{\textbf{E}}_\mp ^{-1}\mu _\pm (0,t;z)\textrm{e}^{-\textrm{i}x{\varvec{\Lambda }}}\nonumber \\&\quad +\int _{0}^x {\textbf{E}}_\mp \textrm{e}^{\textrm{i}(x-y){\varvec{\Lambda }}}{\textbf{E}}_\mp ^{-1}\Delta {\textbf{Q}}_\mp \mu _\pm \textrm{e}^{-\textrm{i}(x-y){\varvec{\Lambda }}}\textrm{d}y,\quad x\in {\mathbb {R}}^\mp , \end{aligned}$$
(2.22b)

where \(\mu _\pm (0,t;z)\), obtained from (2.22a), is used as the initial condition in (2.22b).

Remark 2.1

The reason we introduce two equivalent modified Lax pairs, or Volterra integral equations, is that \(\Delta {\textbf{Q}}_\pm \notin L^1({\mathbb {R}})\), which presents some difficulties in investigating the analyticity of \(\mu _{\pm }(x,t;z)\) for \(x\in {\mathbb {R}}\). In Refs. [23, 24], where only the integral equation (2.20) is considered, one needs another method from Ref. [18] to analyze the analyticity properties of the scattering matrix entries. However, one can avoid this problem by combining the two equivalent modified Lax pairs.

Remark 2.2

For all \(x\in {\mathbb {R}}\),

$$\begin{aligned} {\textbf{E}}_\pm (z)\textrm{e}^{\textrm{i}x{\varvec{\Lambda }}(z)}{\textbf{E}}_\pm ^{-1}(z)=\begin{pmatrix} \frac{z\textrm{e}^{-\textrm{i}\lambda x}-{\hat{z}}\textrm{e}^{\textrm{i}\lambda x}}{2\lambda }&{}\quad {\textbf{0}}&{}\quad -\frac{\textrm{i}\bar{q}_\pm (\textrm{e}^{-\textrm{i}\lambda x}-\textrm{e}^{\textrm{i}\lambda x})}{2\lambda }\\ {\textbf{0}}&{}\quad \textrm{e}^{\textrm{i}kx}{\textbf{I}}&{}\quad {\textbf{0}}\\ \frac{\textrm{i}q_\pm (\textrm{e}^{-\textrm{i}\lambda x}-\textrm{e}^{\textrm{i}\lambda x})}{2\lambda }&{}\quad {\textbf{0}}&{}\quad \frac{z\textrm{e}^{\textrm{i}\lambda x}-{\hat{z}}\textrm{e}^{-\textrm{i}\lambda x}}{2\lambda } \end{pmatrix}. \end{aligned}$$
(2.23)

At \(z=\pm \textrm{i}q_0\), although the matrices \({\textbf{E}}_\pm (z)\) are degenerate, the expressions \({\textbf{E}}_\pm (z)\textrm{e}^{\textrm{i}x{\varvec{\Lambda }}(z)}{\textbf{E}}_\pm ^{-1}(z)\) remain finite as \(z\rightarrow \pm \textrm{i}q_0\),

$$\begin{aligned} \lim _{z\rightarrow \textrm{i}q_0}{\textbf{E}}_\pm (z)\textrm{e}^{\textrm{i}x{\varvec{\Lambda }}(z)}{\textbf{E}}_\pm ^{-1}(z)= \begin{pmatrix} 1 + q_0x&{}\quad {\textbf{0}} &{}\quad -{\bar{q}}_\pm x \\ {\textbf{0}} &{}\quad \textrm{e}^{- q_0x}{\textbf{I}} &{}\quad {\textbf{0}} \\ q_\pm x&{}\quad {\textbf{0}} &{}\quad 1- q_0x \end{pmatrix}, \\ \lim _{z\rightarrow -\textrm{i}q_0}{\textbf{E}}_\pm (z)\textrm{e}^{\textrm{i}x{\varvec{\Lambda }}(z)}{\textbf{E}}_\pm ^{-1}(z)= \begin{pmatrix} 1 - q_0x&{}\quad {\textbf{0}} &{}\quad -{\bar{q}}_\pm x \\ {\textbf{0}} &{}\quad \textrm{e}^{ q_0x}{\textbf{I}} &{}\quad {\textbf{0}} \\ q_\pm x&{}\quad {\textbf{0}} &{}\quad 1+ q_0x \end{pmatrix}. \end{aligned}$$
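The finiteness of these limits can be checked numerically by evaluating (2.23) near \(z=\textrm{i}q_0\); a quick sketch (illustrative, with \(N=2\); the tolerance reflects the \(O(z-\textrm{i}q_0)\) error of the expansion):

```python
import numpy as np

q0, theta, x = 1.0, 0.4, 0.37
qp = q0*np.exp(1j*theta)

def EexpE(z):
    # E_+(z) e^{i x Lambda(z)} E_+^{-1}(z), written out entrywise as in (2.23)
    zhat = -q0**2/z
    k, lam = (z + zhat)/2, (z - zhat)/2
    em, ep = np.exp(-1j*lam*x), np.exp(1j*lam*x)
    return np.array([[(z*em - zhat*ep)/(2*lam), 0, -1j*np.conj(qp)*(em - ep)/(2*lam)],
                     [0, np.exp(1j*k*x), 0],
                     [1j*qp*(em - ep)/(2*lam), 0, (z*ep - zhat*em)/(2*lam)]])

# the limit matrix as z -> i q0, stated above
limit = np.array([[1 + q0*x, 0, -np.conj(qp)*x],
                  [0, np.exp(-q0*x), 0],
                  [qp*x, 0, 1 - q0*x]])

# (2.23) stays finite near the degenerate point and approaches the limit matrix
assert np.allclose(EexpE(1j*q0 + 1e-7), limit, atol=1e-5)
```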

In the following, we prove that the Jost solutions, or equivalently the modified eigenfunctions, are well defined. Set

$$\begin{aligned} L^{1}({\mathbb {R}}^\pm ){} & {} = \left\{ {\textbf{F}}(x)\in {\mathbb {C}}^N\left| \int _{{\mathbb {R}}^\pm }\Vert {\textbf{F}}(x)\Vert _1\textrm{d}x<+\infty \right. \right\} ,\\ L^{1,j}({\mathbb {R}}^\pm ){} & {} =\left\{ {\textbf{F}}(x)\in {\mathbb {C}}^N\left| \int _{{\mathbb {R}}^\pm }(1+\vert x\vert )^j\Vert {\textbf{F}}(x)\Vert _1\textrm{d}x<+\infty \right. \right\} , \end{aligned}$$

where \(\Vert \cdot \Vert _1\) denotes the \(\ell ^1\) vector norm.

Theorem 2.3

Suppose that \({\textbf{q}}(\cdot ,t)-{\textbf{q}}_\pm \in L^{1,1}({\mathbb {R}}^{\pm })\). Then \(\mu _+(x,t;z)\) is analytic for \(z\in (D_2,{\mathbb {C}}^+,D_3)\) and \(\mu _-(x,t;z)\) is analytic for \(z\in (D_1,{\mathbb {C}}^-,D_4)\). Moreover, \(\mu _+(x,t;z)\) is continuous up to \((\Gamma _+,{\mathbb {R}},\Gamma _+)\) and \(\mu _-(x,t;z)\) is continuous up to \((\Gamma _-,{\mathbb {R}},\Gamma _-)\).

Proof

We only give the proofs for \(\mu _{-1}(x,t;z)\) and \(\mu _{-2}(x,t;z)\); the rest of the theorem is proved similarly. First, we consider \(\mu _{-1}(x,t;z)\). Let

$$\begin{aligned} {\textbf{G}}_\pm (x;z)=\begin{pmatrix} \frac{z-{\hat{z}}\textrm{e}^{2\textrm{i}\lambda x}}{2\lambda }&{}\quad {\textbf{0}}&{}\quad -\frac{\textrm{i}\bar{q}_\pm (1-\textrm{e}^{2\textrm{i}\lambda x})}{2\lambda }\\ {\textbf{0}}&{}\quad \textrm{e}^{\textrm{i}zx}{\textbf{I}}&{}\quad {\textbf{0}}\\ \frac{\textrm{i}q_\pm (1-\textrm{e}^{2\textrm{i}\lambda x})}{2\lambda }&{}\quad {\textbf{0}}&{}\quad \frac{z\textrm{e}^{2\textrm{i}\lambda x}-{\hat{z}}}{2\lambda } \end{pmatrix}. \end{aligned}$$
(2.24)

Case i: \(x\in {\mathbb {R}}^-\). It follows from (2.22a) that

$$\begin{aligned} \mu _{-1}(x,t;z)=\begin{pmatrix} 1\\ {\textbf{0}}\\ \frac{\textrm{i}q_-}{z} \end{pmatrix}+\int _{-\infty }^x{\textbf{G}}_-(x-y;z)\Delta {\textbf{Q}}_-(y,t)\mu _{-1}(y,t;z)\textrm{d}y. \end{aligned}$$
(2.25)

We introduce a Neumann series \(\mu _{-1}(x,t;z)=\sum _{n=0}^\infty \mu _n(x,t;z)\) for the solution of (2.25), where

$$\begin{aligned}{} & {} \mu _0(x,t;z)=\begin{pmatrix} 1\\ {\textbf{0}}\\ \frac{\textrm{i}q_-}{z} \end{pmatrix},\quad \mu _{n+1}(x,t;z)=\int _{-\infty }^x{\textbf{K}}_1(x,y,t;z)\mu _{n}(y,t;z)\textrm{d}y, \nonumber \\{} & {} {\textbf{K}}_1(x,y,t;z)={\textbf{G}}_-(x-y;z)\Delta {\textbf{Q}}_-(y,t). \end{aligned}$$
(2.26)

For \(z\in D_1\), \({\textbf{G}}_-(x-y;z)\) is analytic and continuous up to the boundary \(\partial D_1\). In the integrand, \(y\leqslant x\), so for \( z\in D_1\cup \Gamma _-\), by the maximum modulus principle we conclude that

$$\begin{aligned} \Vert {\textbf{G}}_-(x-y;z)\Vert _1\leqslant c_1(1+\vert x-y\vert )\leqslant c_1(1+\vert y\vert ),\quad c_1=\max \{1,2q_0\}.\nonumber \\ \end{aligned}$$
(2.27)

Consequently,

$$\begin{aligned} \Vert {\textbf{K}}_1(x,y,t;z)\Vert _1\leqslant c_1(1+\vert y\vert )\Vert {\textbf{q}}(y,t)-{\textbf{q}}_-\Vert _1. \end{aligned}$$
(2.28)

By induction, we can prove that

$$\begin{aligned} \Vert \mu _n(x,t;z)\Vert _1\leqslant & {} \frac{2}{n!}\left( c_1\int _{-\infty }^x(1+\vert y\vert )\Vert {\textbf{q}}(y,t)-{\textbf{q}}_-\Vert _1\textrm{d}y\right) ^n \nonumber \\\leqslant & {} \frac{2c_1^n}{n!}\Vert {\textbf{q}}(\cdot ,t)-{\textbf{q}}_-\Vert _{L^{1,1}({\mathbb {R}}^{-})}^n. \end{aligned}$$
(2.29)

It follows that the infinite series converges absolutely and uniformly by comparison with an exponential series,

$$\begin{aligned} \left\| \sum _{n=0}^\infty \mu _n(x,t;z)\right\| _1\leqslant 2\exp (c_1\Vert {\textbf{q}}(\cdot ,t)-{\textbf{q}}_-\Vert _{L^{1,1}({\mathbb {R}}^-)}), \text{ for } \text{ all }\ x\in {\mathbb {R}}^-, z\in D_1\cup \Gamma _-.\nonumber \\ \end{aligned}$$
(2.30)
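The factorial bound (2.29) is what drives this convergence. Its mechanism can be illustrated on a toy scalar Volterra equation \(\mu (x)=1+\int _0^x\mu (y)\,\textrm{d}y\), whose Neumann iterates are the partial sums of \(\textrm{e}^x\); the sketch below (purely illustrative, unrelated to the vector equation (2.25)) shows the sup-error decaying factorially:

```python
import numpy as np

# toy scalar Volterra equation mu(x) = 1 + int_0^x mu(y) dy on [0, 1];
# the n-th Neumann term is x^n/n!, so partial sums converge like 1/n!
x = np.linspace(0.0, 1.0, 2001)
term = np.ones_like(x)          # mu_0 = 1
total = term.copy()
errors = []
for n in range(12):
    # mu_{n+1}(x) = int_0^x mu_n(y) dy, via a cumulative trapezoid rule
    term = np.concatenate(([0.0], np.cumsum((term[1:] + term[:-1])/2)*(x[1] - x[0])))
    total += term
    errors.append(np.max(np.abs(total - np.exp(x))))

# the sup-error decays rapidly, as the factorial estimate predicts
assert errors[-1] < 1e-6
assert all(errors[i+1] < errors[i] for i in range(6))
```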

It is easy to use the uniform convergence of the Neumann series to prove that \(\mu _{-1}(x,t;z)\) is in fact a solution of the Volterra equation (2.25) whenever \(z\in D_1\cup \Gamma _-\). The uniqueness of this solution follows from the fact that a certain power of the operator \(\int _{-\infty }^x{\textbf{K}}_1(x,y,t;z)(\cdot )\textrm{d}y\) is a contraction operator. Note that \(\mu _0(x,t;z)\) is analytic for \(z\in D_1\) and continuous up to the boundary \(\partial D_1\). For \(n\geqslant 1\) and \(z\in \bar{D}_1\),

$$\begin{aligned} \Vert {\textbf{K}}_1(x,y,t;z)\mu _{n-1}(y,t;z)\Vert _1{} & {} \leqslant \frac{2c_1^{n}}{(n-1)!}(1+\vert y\vert )\Vert {\textbf{q}}(y,t)\nonumber \\{} & {} \quad -{\textbf{q}}_-\Vert _1\Vert {\textbf{q}}(\cdot ,t)-{\textbf{q}}_-\Vert _{L^{1,1}({\mathbb {R}}^-)}^{n-1}, \end{aligned}$$
(2.31)

by the Lebesgue dominated convergence theorem,

$$\begin{aligned} \lim _{\begin{array}{c} z\rightarrow z_0\\ z,z_0\in \bar{D}_1 \end{array}}\int _{-\infty }^x{\textbf{K}}_1(x,y,t;z)\mu _{n-1}(y,t;z)\textrm{d}y=\int _{-\infty }^x{\textbf{K}}_1(x,y,t;z_0)\mu _{n-1}(y,t;z_0)\textrm{d}y.\nonumber \\ \end{aligned}$$
(2.32)

This proves that \(\mu _n(x,t;z)\) is a continuous function of z in \(\bar{D}_1\). Suppose that \(\mu _{n-1}(x,t;z)\) is analytic for \(z\in D_1\), and let C be a piecewise-smooth closed curve contained in \(D_1\). Then

$$\begin{aligned} \oint _C\mu _n(x,t;z)\textrm{d}z=\oint _C\int _{-\infty }^x{\textbf{K}}_1(x,y,t;z)\mu _{n-1}(y,t;z)\textrm{d}y\textrm{d}z. \end{aligned}$$
(2.33)

The estimate (2.31) shows that the above integrand lies in \(L^1({\mathbb {R}}^-\times C)\). By Fubini’s theorem and Cauchy’s integral theorem, we have

$$\begin{aligned} \oint _C\mu _n(x,t;z)\textrm{d}z=\int _{-\infty }^x\oint _C{\textbf{K}}_1(x,y,t;z)\mu _{n-1}(y,t;z)\textrm{d}z\textrm{d}y=0. \end{aligned}$$
(2.34)

By Morera’s theorem, \(\mu _n(x,t;z)\) is analytic for \(z\in D_1\). Since a uniformly convergent series of analytic functions converges to an analytic function, \(\mu _{-1}(x,t;z)\) is analytic for \(z\in D_1\). Also, \(\mu _{-1}(x,t;z)\) is continuous in \(\bar{D}_1\) with respect to z.

Case ii: \(x\in {\mathbb {R}}^+\). It follows from (2.22b) that

$$\begin{aligned} \frac{\mu _{-1}(x,t;z)}{1+\vert x\vert }{} & {} =\frac{{\textbf{G}}_+(x;z)}{1+\vert x\vert }\mu _{-1}(0,t;z)\nonumber \\{} & {} \quad +\int _{0}^x \frac{1+\vert y\vert }{1+\vert x\vert } {\textbf{G}}_+(x-y;z)\Delta {\textbf{Q}}_+(y,t)\frac{\mu _{-1}(y,t;z)}{1+\vert y\vert } \textrm{d}y.\nonumber \\ \end{aligned}$$
(2.35)

We introduce a Neumann series \( \frac{\mu _{-1}(x,t;z)}{1+\vert x\vert }=\sum _{n=0}^\infty \nu _n(x,t;z)\) for the solution of (2.35), where

$$\begin{aligned} \begin{aligned}&\nu _0(x,t;z)=\frac{{\textbf{G}}_+(x;z)}{1+\vert x\vert }\mu _{-1}(0,t;z),\quad \nu _{n+1}(x,t;z)=\int _{0}^x{\textbf{K}}_2(x,y,t;z)\nu _{n}(y,t;z)\textrm{d}y,\\&{\textbf{K}}_2(x,y,t;z)=\frac{1+\vert y\vert }{1+\vert x\vert } {\textbf{G}}_+(x-y;z)\Delta {\textbf{Q}}_+(y,t). \end{aligned}\nonumber \\ \end{aligned}$$
(2.36)

For \(z\in D_1\cup \Gamma _-\), similarly to (2.27), we have

$$\begin{aligned} \Vert {\textbf{G}}_+(x;z)\Vert _1\leqslant c_1(1+\vert x\vert ),\quad \Vert {\textbf{G}}_+(x-y;z)\Vert _1\leqslant c_1(1+\vert x-y\vert )\leqslant c_1(1+\vert x\vert ).\nonumber \\ \end{aligned}$$
(2.37)

Therefore,

$$\begin{aligned}{} & {} \Vert \nu _0(x,t;z)\Vert _1\leqslant 2c_1\exp (c_1\Vert {\textbf{q}}(\cdot ,t)-{\textbf{q}}_-\Vert _{L^{1,1}({\mathbb {R}}^-)})\triangleq c_2(t), \end{aligned}$$
(2.38)
$$\begin{aligned}{} & {} \Vert {\textbf{K}}_2(x,y,t;z)\Vert _1\leqslant c_1(1+\vert y\vert )\Vert {\textbf{q}}(y,t)-{\textbf{q}}_+\Vert _1. \end{aligned}$$
(2.39)

For all \(x\in {\mathbb {R}}^+\), \(z\in D_1\cup \Gamma _-\), we obtain the uniform convergence

$$\begin{aligned} \left\| \frac{\mu _{-1}(x,t;z)}{1+\vert x\vert }\right\| _1=\left\| \sum _{n=0}^\infty \nu _n(x,t;z)\right\| _1\leqslant c_2(t)\exp (c_1\Vert {\textbf{q}}(\cdot ,t)-{\textbf{q}}_+\Vert _{L^{1,1}({\mathbb {R}}^+)}).\nonumber \\ \end{aligned}$$
(2.40)

Similarly, \(\mu _{-1}(x,t;z)\) is well defined in \(D_1\cup \Gamma _-\) with respect to z, analytic for \(z\in D_1\) and continuous up to the boundary \(\partial D_1\).

Second, we consider \(\mu _{-2}(x,t;z)\). Let

$$\begin{aligned} \hat{{\textbf{G}}}_\pm (x;z)=\begin{pmatrix} \frac{z\textrm{e}^{-\textrm{i}z x}-\hat{z}\textrm{e}^{-\textrm{i}\hat{z} x}}{2\lambda }&{}\textbf{0}&{} -\frac{\textrm{i}\bar{q}_\pm (\textrm{e}^{-\textrm{i}z x}-\textrm{e}^{-\textrm{i}\hat{z} x})}{2\lambda }\\ \textbf{0}&{}\textbf{I}&{}\textbf{0}\\ \frac{\textrm{i}q_\pm (\textrm{e}^{-\textrm{i}z x}-\textrm{e}^{-\textrm{i}\hat{z} x})}{2\lambda }&{}\textbf{0}&{}\frac{z\textrm{e}^{-\textrm{i}\hat{z} x}-\hat{z}\textrm{e}^{-\textrm{i} z x}}{2\lambda } \end{pmatrix}. \end{aligned}$$
(2.41)

Note that,

$$\begin{aligned} \lim _{z\rightarrow -\textrm{i}q_0} \hat{{\textbf{G}}}_\pm (x;z)=\begin{pmatrix} (1 - q_0x)\textrm{e}^{-q_0x}&{} \textbf{0} &{} -\bar{q}_\pm x\textrm{e}^{-q_0x} \\ \textbf{0} &{}\textbf{I} &{} \textbf{0} \\ q_\pm x\textrm{e}^{-q_0x}&{} \textbf{0} &{} (1+ q_0x)\textrm{e}^{-q_0x} \end{pmatrix}, \end{aligned}$$
(2.42)

while, for \(z\in {\mathbb {R}}\),

$$\begin{aligned}{} & {} \left| \frac{\textrm{i}q_\pm (\textrm{e}^{-\textrm{i}z x}-\textrm{e}^{-\textrm{i}{\hat{z}} x})}{2\lambda }\right| \leqslant 1, \end{aligned}$$
(2.43)
$$\begin{aligned}{} & {} \left| \frac{z\textrm{e}^{-\textrm{i}z x}-{\hat{z}}\textrm{e}^{-\textrm{i}{\hat{z}} x}}{2\lambda }\right| = \left| \textrm{e}^{-\textrm{i}z x}+\frac{{\hat{z}}(\textrm{e}^{-\textrm{i}z x}-\textrm{e}^{-\textrm{i}{\hat{z}} x})}{2\lambda }\right| \leqslant 3. \end{aligned}$$
(2.44)
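Bounds (2.43)–(2.44) hold uniformly in \(x\) for real \(z\ne 0\), since then \(\hat{z}=-q_0^2/z\) is also real, so both exponentials are unimodular, while \(\vert 2\lambda \vert =\vert z\vert +q_0^2/\vert z\vert \geqslant 2q_0\). A numerical spot check (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
q0 = 1.2
qp = q0*np.exp(0.3j)

z = rng.uniform(-5, 5, 1000)
z = z[np.abs(z) > 1e-6]                  # real spectral parameter, z != 0
x = rng.uniform(-20, 20, z.size)
zhat = -q0**2/z                          # also real for real z
lam = (z - zhat)/2

# the entries bounded in (2.43) and (2.44)
a = 1j*qp*(np.exp(-1j*z*x) - np.exp(-1j*zhat*x))/(2*lam)     # bounded by 1
b = (z*np.exp(-1j*z*x) - zhat*np.exp(-1j*zhat*x))/(2*lam)    # bounded by 3

assert np.all(np.abs(a) <= 1 + 1e-9)
assert np.all(np.abs(b) <= 3 + 1e-9)
```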

Case i: \(x\in {\mathbb {R}}^-\). It follows from (2.22a) that

$$\begin{aligned} \mu _{-2}(x,t;z)=\begin{pmatrix} {\textbf{0}}\\ {\textbf{I}}\\ {\textbf{0}} \end{pmatrix}+\int _{-\infty }^x\hat{{\textbf{G}}}_-(x-y;z)\Delta {\textbf{Q}}_-(y,t)\mu _{-2}(y,t;z)\textrm{d}y. \end{aligned}$$
(2.45)

We introduce a Neumann series \(\mu _{-2}(x,t;z)=\sum _{n=0}^\infty {\hat{\mu }}_n(x,t;z)\) for the solution of (2.45), where

$$\begin{aligned} \begin{aligned}&{\hat{\mu }}_0(x,t;z)=\begin{pmatrix} {\textbf{0}}\\ {\textbf{I}}\\ {\textbf{0}} \end{pmatrix},\quad {\hat{\mu }}_{n+1}(x,t;z)=\int _{-\infty }^x\hat{{\textbf{K}}}_1(x,y,t;z){\hat{\mu }}_{n}(y,t;z)\textrm{d}y,\\&\hat{{\textbf{K}}}_1(x,y,t;z)=\hat{{\textbf{G}}}_-(x-y;z)\Delta {\textbf{Q}}_-(y,t). \end{aligned} \end{aligned}$$
(2.46)

\(\hat{{\textbf{G}}}_-(x-y;z)\) is analytic for \(z\in {\mathbb {C}}^-\backslash \{-\textrm{i}q_0\}\) and continuous in \({\mathbb {C}}^-\cup {\mathbb {R}}\). In the integrand, \(y\leqslant x\), so for \( z\in {\mathbb {C}}^-\cup {\mathbb {R}}\), by the maximum modulus principle we conclude that

$$\begin{aligned} \Vert \hat{{\textbf{G}}}_-(x-y;z)\Vert _1\leqslant 4. \end{aligned}$$
(2.47)

Consequently,

$$\begin{aligned} \Vert \hat{{\textbf{K}}}_1(x,y,t;z)\Vert _1\leqslant 4\Vert {\textbf{q}}(y,t)-{\textbf{q}}_-\Vert _1. \end{aligned}$$
(2.48)

By induction, we can prove that

$$\begin{aligned} \begin{aligned} \Vert {\hat{\mu }}_n(x,t;z)\Vert _1&\leqslant \frac{ 4^n}{n!}\left( \int _{-\infty }^x\Vert {\textbf{q}}(y,t)-{\textbf{q}}_-\Vert _1\textrm{d}y\right) ^n\\&\leqslant \frac{4^{n}}{n!}\Vert {\textbf{q}}(\cdot ,t)-{\textbf{q}}_-\Vert _{L^{1}({\mathbb {R}}^{-})}^n. \end{aligned}\nonumber \\ \end{aligned}$$
(2.49)

It follows that the infinite series converges absolutely and uniformly by comparison with an exponential series,

$$\begin{aligned} \left\| \sum _{n=0}^\infty {\hat{\mu }}_n(x,t;z)\right\| _1\leqslant \exp (4\Vert {\textbf{q}}(\cdot ,t)-{\textbf{q}}_-\Vert _{L^{1}({\mathbb {R}}^-)}), \text{ for } \text{ all }\ x\in {\mathbb {R}}^-, z\in {\mathbb {C}}^-\cup {\mathbb {R}}.\nonumber \\ \end{aligned}$$
(2.50)
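The factorial bound (2.49) and the resulting convergence can be illustrated on a scalar model of the Volterra equation (2.45). The sketch below is a toy example with an illustrative kernel \(k(y)=\textrm{e}^y\) (not the matrix system itself): the exact solution of \(\mu (x)=1+\int _{-\infty }^x \textrm{e}^y\mu (y)\textrm{d}y\) is \(\mu (x)=\exp (\textrm{e}^x)\), and the n-th Neumann iterate obeys the analogue of (2.49) with \(K(x)=\int _{-\infty }^x \textrm{e}^y\textrm{d}y\).

```python
import math
import numpy as np

def neumann_terms(x, n_terms):
    """Neumann iterates mu_n at x for mu(x) = 1 + int_{-inf}^x e^y mu(y) dy,
    computed on a grid by the trapezoidal rule (toy scalar model of (2.45))."""
    grid = np.linspace(-30.0, x, 4000)         # -30 stands in for -infinity
    kernel = np.exp(grid)                      # integrable kernel on R^-
    terms = [np.ones_like(grid)]               # mu_0 = 1
    for _ in range(n_terms):
        integrand = kernel * terms[-1]
        cum = np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid))
        terms.append(np.concatenate(([0.0], cum)))
    return [t[-1] for t in terms]              # values at the endpoint x

terms = neumann_terms(0.0, 20)
K = 1.0 - math.exp(-30.0)                      # K(0) = int_{-inf}^0 e^y dy
# analogue of the bound (2.49): |mu_n| <= K^n / n!
assert all(abs(t) <= K**n / math.factorial(n) + 1e-4
           for n, t in enumerate(terms))
# the Neumann series sums to the exact solution exp(e^0) = e
assert abs(sum(terms) - math.e) < 1e-3
```

Exactly as in (2.50), the partial sums are dominated by an exponential series, which is the comparison underlying the uniform convergence.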

The uniform convergence of the Neumann series readily implies that \(\mu _{-2}(x,t;z)\) is a solution of the Volterra equation (2.45) whenever \(z\in {\mathbb {C}}^-\cup {\mathbb {R}}\). The uniqueness of this solution follows from the fact that a suitable power of the operator \(\int _{-\infty }^x\hat{{\textbf{K}}}_1(x,y,t;z)(\cdot )\textrm{d}y\) is a contraction. Note that \({\hat{\mu }}_0(x,t;z)\) is analytic for \(z\in {\mathbb {C}}^-\) and continuous up to the boundary \({\mathbb {R}}\). For \(n\geqslant 1\) and \(z\in {\mathbb {C}}^-\cup {\mathbb {R}}\),

$$\begin{aligned} \Vert \hat{{\textbf{K}}}_1(x,y,t;z){\hat{\mu }}_{n-1}(y,t;z)\Vert _1\leqslant \frac{ 4^{n}}{(n-1)!}\Vert {\textbf{q}}(y,t)-{\textbf{q}}_-\Vert _1\Vert {\textbf{q}}(\cdot ,t)-{\textbf{q}}_-\Vert _{L^{1}({\mathbb {R}}^-)}^{n-1},\nonumber \\ \end{aligned}$$
(2.51)

by the Lebesgue dominated convergence theorem,

$$\begin{aligned} \lim _{\begin{array}{c} z\rightarrow z_0\\ z,z_0\in {\mathbb {C}}^-\cup {\mathbb {R}} \end{array}}\int _{-\infty }^x\hat{{\textbf{K}}}_1(x,y,t;z){\hat{\mu }}_{n-1}(y,t;z)\textrm{d}y=\int _{-\infty }^x\hat{{\textbf{K}}}_1(x,y,t;z_0){\hat{\mu }}_{n-1}(y,t;z_0)\textrm{d}y.\nonumber \\ \end{aligned}$$
(2.52)

This proves that \({\hat{\mu }}_n(x,t;z)\) is a continuous function of z in \({\mathbb {C}}^-\cup {\mathbb {R}}\). Suppose that \({\hat{\mu }}_{n-1}(x,t;z)\) is analytic for \(z\in {\mathbb {C}}^-\), and let C be a piecewise-smooth closed curve contained in \({\mathbb {C}}^-\). Thus,

$$\begin{aligned} \oint _C{\hat{\mu }}_n(x,t;z)\textrm{d}z=\oint _C\int _{-\infty }^x\hat{{\textbf{K}}}_1(x,y,t;z){\hat{\mu }}_{n-1}(y,t;z)\textrm{d}y\textrm{d}z. \end{aligned}$$
(2.53)

The estimate (2.51) shows that the above integrand lies in \(L^1({\mathbb {R}}^-\times C)\). By Fubini’s theorem and Cauchy’s integral theorem, we have

$$\begin{aligned} \oint _C{\hat{\mu }}_n(x,t;z)\textrm{d}z=\int _{-\infty }^x\oint _C\hat{{\textbf{K}}}_1(x,y,t;z){\hat{\mu }}_{n-1}(y,t;z)\textrm{d}z\textrm{d}y=0. \end{aligned}$$
(2.54)

By Morera’s theorem, \({\hat{\mu }}_n(x,t;z)\) is analytic for \(z\in {\mathbb {C}}^-\). Since a uniformly convergent series of analytic functions converges to an analytic function, \(\mu _{-2}(x,t;z)\) is analytic for \(z\in {\mathbb {C}}^-\). Also, \(\mu _{-2}(x,t;z)\) is continuous in \({\mathbb {C}}^-\cup {\mathbb {R}}\) with respect to z.

Case ii: \(x\in {\mathbb {R}}^+\). It follows from (2.22b) that

$$\begin{aligned} \mu _{-2}(x,t;z)=\hat{{\textbf{G}}}_+(x;z)\mu _{-2}(0,t;z)+\int _{0}^x \hat{{\textbf{G}}}_+(x-y;z)\Delta {\textbf{Q}}_+(y,t)\mu _{-2}(y,t;z) \textrm{d}y.\nonumber \\ \end{aligned}$$
(2.55)

We introduce a Neumann series \( \mu _{-2}(x,t;z)=\sum _{n=0}^\infty {\hat{\nu }}_n(x,t;z)\) for the solution of (2.55), where

$$\begin{aligned} \begin{aligned}&{\hat{\nu }}_0(x,t;z)=\hat{{\textbf{G}}}_+(x;z)\mu _{-2}(0,t;z),\\&{\hat{\nu }}_{n+1}(x,t;z)=\int _{0}^x\hat{{\textbf{K}}}_2(x,y,t;z){\hat{\nu }}_{n}(y,t;z)\textrm{d}y,\\&\hat{{\textbf{K}}}_2(x,y,t;z)=\hat{{\textbf{G}}}_+(x-y;z)\Delta {\textbf{Q}}_+(y,t). \end{aligned}\nonumber \\ \end{aligned}$$
(2.56)

For \(z\in {\mathbb {C}}^-\cup {\mathbb {R}}\), similarly to (2.47), we have

$$\begin{aligned} \Vert \hat{{\textbf{G}}}_+(x;z)\Vert _1\leqslant 4. \end{aligned}$$
(2.57)

Therefore,

$$\begin{aligned}{} & {} \Vert {\hat{\nu }}_0(x,t;z)\Vert _1\leqslant 4\exp (4\Vert {\textbf{q}}(\cdot ,t)-{\textbf{q}}_-\Vert _{L^{1}({\mathbb {R}}^-)})\triangleq c_3(t), \end{aligned}$$
(2.58)
$$\begin{aligned}{} & {} \Vert \hat{{\textbf{K}}}_2(x,y,t;z)\Vert _1\leqslant 4\Vert {\textbf{q}}(y,t)-{\textbf{q}}_+\Vert _1. \end{aligned}$$
(2.59)

For all \(x\in {\mathbb {R}}^+\), \(z\in {\mathbb {C}}^-\cup {\mathbb {R}}\), we obtain the uniform convergence

$$\begin{aligned} \left\| \mu _{-2}(x,t;z)\right\| _1=\left\| \sum _{n=0}^\infty {\hat{\nu }}_n(x,t;z)\right\| _1\leqslant c_3(t)\exp (4\Vert {\textbf{q}}(\cdot ,t)-{\textbf{q}}_+\Vert _{L^{1}({\mathbb {R}}^+)}).\nonumber \\ \end{aligned}$$
(2.60)

Similarly, \(\mu _{-2}(x,t;z)\) is well defined in \({\mathbb {C}}^-\cup {\mathbb {R}}\) with respect to z, analytic for \(z\in {\mathbb {C}}^-\) and continuous up to the boundary \({\mathbb {R}}\). \(\square \)

Remark 2.4

Unlike what happens in the defocusing case, the defect of analyticity for the focusing N-component NLS equation does not increase with the number of components. In fact, for any \(N\geqslant 2\), one has exactly N analytic eigenfunctions in each of the domains \(D_1,\ldots , D_4\), and hence, only one additional eigenfunction per domain is required to obtain a fundamental set of analytic solutions.

Lemma 2.5

Under the same hypotheses as in Theorem 2.3, for all z in the interior of their corresponding domains of analyticity, \(\mu _{\pm 2}(x,t;z)\) are bounded for all \(x\in {\mathbb {R}}\), \(\mu _{\pm 1}(x,t;z)\) and \(\mu _{\pm 3}(x,t;z)\) are bounded for \(x\in {\mathbb {R}}^\pm \), \(\frac{\mu _{\pm 1}(x,t;z)}{1+\vert x\vert }\) and \(\frac{\mu _{\pm 3}(x,t;z)}{1+\vert x\vert }\) are bounded for \(x\in {\mathbb {R}}^\mp \).

Proof

The first column of \(\mu _-(x,t;z)\) follows from (2.30) and (2.40), and \(\mu _{-2}(x,t;z)\) follows from (2.50) and (2.60). The rest of this lemma is obtained similarly. \(\square \)

Theorem 2.3 shows that \(\mu _\pm (x,t;z)\) are continuous up to \((\Gamma _\pm ,{\mathbb {R}},\Gamma _\pm )\), respectively. Moreover, formally differentiating the Volterra integral equation (2.22) with respect to z and performing a similar Neumann series analysis, one can show the following:

Corollary 2.6

Under the same hypotheses as in Theorem 2.3, \(\partial _z\mu _\pm (x,t;z)\) are continuous for \(z\in (\Gamma _\pm ,{\mathbb {R}},\Gamma _\pm )\).

Remark 2.7

It follows trivially from the above theorem that the columns of \(\psi _\pm (x,t;z)\) have the same analyticity properties as \(\mu _\pm (x,t;z)\). Moreover, if \({\textbf{q}}(\cdot ,t)-{\textbf{q}}_\pm \in L^1({\mathbb {R}}^\pm )\), the modified eigenfunctions \(\mu _{\pm }(x,t;z)\) are also analytic for \(z\in (D_2,{\mathbb {C}}^+,D_3)\), \((D_1,{\mathbb {C}}^-,D_4)\), respectively. However, \(\mu _{\pm 1}(x,t;z)\) and \(\mu _{\pm 3}(x,t;z)\) are not continuous at the branch points \(\pm \textrm{i}q_0\). Furthermore, whereas \({\textbf{q}}(\cdot ,t)-{\textbf{q}}_\pm \) was required to lie in \( L^{1,2}({\mathbb {R}}^\pm )\) in Ref. [18], in this context we require the slightly weaker condition \({\textbf{q}}(\cdot ,t)-{\textbf{q}}_\pm \in L^{1,1}({\mathbb {R}}^\pm )\).

In the following, we introduce the scattering matrix. Since \({\textbf{Q}}(x,t)-{\textbf{Q}}_\pm \) is traceless, Abel’s theorem gives \(\partial _x\det ( {\textbf{E}}_\pm ^{-1}(z)\mu _\pm (x,t;z))=0\). Therefore, we may compute the determinant of \( {\textbf{E}}_\pm ^{-1}(z)\mu _\pm (x,t;z)\) by taking the limit \(x\rightarrow \pm \infty \). Consequently, equation (2.19) implies

$$\begin{aligned} \det (\mu _\pm (x,t;z))=\det ({\textbf{E}}_\pm (z))=\gamma (z), \quad z\in {\mathbb {R}}, \end{aligned}$$
(2.61)

i.e.,

$$\begin{aligned} \det (\psi _\pm (x,t;z))=\gamma (z)\textrm{e}^{\textrm{i}(N-1)\theta _2(x,t;z)},\quad z\in {\mathbb {R}}. \end{aligned}$$
(2.62)

Since both \(\psi _+(x,t;z)\) and \(\psi _-(x,t;z)\) are fundamental solutions of the Lax pair (2.1), there exists an \((N+1)\times (N+1)\) matrix \({\textbf{s}}(z)\), independent of x and t, such that

$$\begin{aligned} \psi _-(x,t;z)=\psi _+(x,t;z){\textbf{s}}(z), \end{aligned}$$
(2.63)

where

$$\begin{aligned} {\textbf{s}}(z)=\begin{pmatrix} {\textbf{s}}_{11}(z) &{}\quad {\textbf{s}}_{12}(z) &{}\quad {\textbf{s}}_{13}(z) \\ {\textbf{s}}_{21}(z) &{}\quad {\textbf{s}}_{22}(z) &{}\quad {\textbf{s}}_{23}(z) \\ {\textbf{s}}_{31}(z) &{}\quad {\textbf{s}}_{32}(z) &{}\quad {\textbf{s}}_{33}(z) \end{pmatrix} \end{aligned}$$
(2.64)

is usually referred to as the scattering matrix. Taking the determinants of both sides of (2.63) and recalling (2.62), we obtain

$$\begin{aligned} \det ({\textbf{s}}(z))=1,\quad z\in {\mathbb {R}}. \end{aligned}$$
(2.65)

Let

$$\begin{aligned} {\textbf{S}}(z)={\textbf{s}}^{-1}(z)=\begin{pmatrix} {\textbf{S}}_{11}(z) &{}\quad {\textbf{S}}_{12}(z) &{}\quad {\textbf{S}}_{13}(z) \\ {\textbf{S}}_{21}(z) &{}\quad {\textbf{S}}_{22}(z) &{}\quad {\textbf{S}}_{23}(z) \\ {\textbf{S}}_{31}(z) &{}\quad {\textbf{S}}_{32}(z) &{}\quad {\textbf{S}}_{33}(z) \end{pmatrix}, \end{aligned}$$
(2.66)

thus,

$$\begin{aligned} \psi _+(x,t;z)=\psi _-(x,t;z){\textbf{S}}(z). \end{aligned}$$
(2.67)

Evaluating (2.63) at \((+\infty ,0)\), recalling (2.16) and (2.22), we conclude that

$$\begin{aligned} {\textbf{s}}(z)={\textbf{E}}_+^{-1}(z)\mu _-(0,0;z)+\int _{0}^{+\infty } \textrm{e}^{-\textrm{i}x{\varvec{\Lambda }}(z)}{\textbf{E}}_+^{-1}(z)\Delta {\textbf{Q}}_+(x,0)\mu _-(x,0;z) \textrm{e}^{\textrm{i}x{\varvec{\Lambda }}(z)}\textrm{d}x.\nonumber \\ \end{aligned}$$
(2.68)

Similarly,

$$\begin{aligned} {\textbf{S}}(z)={\textbf{E}}_-^{-1}(z)\mu _+(0,0;z)-\int ^{0}_{-\infty } \textrm{e}^{-\textrm{i}x{\varvec{\Lambda }}(z)}{\textbf{E}}_-^{-1}(z)\Delta {\textbf{Q}}_-(x,0)\mu _+(x,0;z) \textrm{e}^{\textrm{i}x{\varvec{\Lambda }}(z)}\textrm{d}x.\nonumber \\ \end{aligned}$$
(2.69)

From Theorem 2.3, Lemma 2.5 and the integral representations (2.68)–(2.69), the following corollary is immediate.

Corollary 2.8

Under the same hypotheses as in Theorem 2.3, \({\textbf{s}}_{11}(z), {\textbf{s}}_{13}(z), {\textbf{s}}_{31}(z)\) and \({\textbf{s}}_{33}(z)\) are well defined in \(\Gamma _-\backslash \{\textrm{i}q_0\}\), \({\textbf{S}}_{11}(z), {\textbf{S}}_{13}(z), {\textbf{S}}_{31}(z)\) and \({\textbf{S}}_{33}(z)\) are well defined in \(\Gamma _+\backslash \{-\textrm{i}q_0\}\), and the remaining scattering coefficients are well defined in \({\mathbb {R}}\). Furthermore, the following scattering coefficients can be analytically continued to the corresponding regions:

$$\begin{aligned}{} & {} {\textbf{s}}_{11}(z): D_1, \qquad {\textbf{s}}_{22}(z):{\mathbb {C}}^-, \qquad {\textbf{s}}_{33}(z):D_4, \end{aligned}$$
(2.70a)
$$\begin{aligned}{} & {} {\textbf{S}}_{11}(z):D_2, \qquad {\textbf{S}}_{22}(z):{\mathbb {C}}^+, \qquad {\textbf{S}}_{33}(z):D_3. \end{aligned}$$
(2.70b)

2.3 Auxiliary eigenfunctions

In order to pose the inverse problem, that is, to formulate an \((N+1)\times (N+1)\) matrix RH problem, we need a complete set of analytic eigenfunctions in each domain \(D_j\), \(j=1,\ldots ,4\). However, only N of the columns of \(\mu _+(x,t;z)\) and \(\mu _-(x,t;z)\) are analytic in \(D_j\). To compensate for this lack of analyticity, we introduce a “generalized cross product” for vectors in \({\mathbb {C}}^{N+1}\) as follows:

Definition 2.9

(Generalized cross product) For all \({\textbf{u}}_1,\ldots ,{\textbf{u}}_{N}\in {\mathbb {C}}^{N+1}\), let

$$\begin{aligned} {\mathcal {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_N]=\sum _{j=1}^{N+1}\det ({\textbf{u}}_1,\ldots ,{\textbf{u}}_N,{\textbf{e}}_j){\textbf{e}}_j, \end{aligned}$$
(2.71)

where \(\{{\textbf{e}}_1,\ldots ,{\textbf{e}}_{N+1}\}\) denotes the standard basis of \({\mathbb {C}}^{N+1}\).

In particular, when \(N=2\), \({\mathcal {G}}[{\textbf{u}}_1,{\textbf{u}}_2]={\textbf{u}}_1\times {\textbf{u}}_2\) is the ordinary cross product in \({\mathbb {R}}^3\). As in that case, \({\mathcal {G}}[\cdot ]\) is multi-linear and totally antisymmetric.
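Definition 2.9 is straightforward to implement directly from determinants. The sketch below (with the illustrative helper name `gcross`, not from the paper) checks that for \(N=2\) it reproduces the ordinary cross product and that it is totally antisymmetric.

```python
import numpy as np

def gcross(*us):
    """Generalized cross product (2.71): G[u_1,...,u_N] in C^{N+1}."""
    n1 = len(us) + 1                        # dimension N+1
    out = np.zeros(n1, dtype=complex)
    for j in range(n1):
        e = np.zeros(n1); e[j] = 1.0        # standard basis vector e_j
        out[j] = np.linalg.det(np.column_stack(us + (e,)))
    return out

rng = np.random.default_rng(0)
u1, u2 = rng.standard_normal(3), rng.standard_normal(3)
# N = 2: G[u1, u2] is the ordinary cross product in R^3
assert np.allclose(gcross(u1, u2), np.cross(u1, u2))
# total antisymmetry: swapping two arguments flips the sign
assert np.allclose(gcross(u2, u1), -gcross(u1, u2))
```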

Lemma 2.10

For any \({\textbf{A}}\in {\mathbb {C}}^{(N+1)\times (N+1)}\), \({\textbf{B}}\in {\mathbb {C}}^{n\times n}\ (n\leqslant N)\) and \(1\leqslant l\leqslant N-n+1\),

$$\begin{aligned}&\sum _{j=1}^N {\mathcal {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{j-1},{\textbf{A}}{\textbf{u}}_j,{\textbf{u}}_{j+1},\ldots ,{\textbf{u}}_N]=\left( \textrm{trace}({\textbf{A}}){\textbf{I}}-{\textbf{A}}^T\right) {\mathcal {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_N], \end{aligned}$$
(2.72a)
$$\begin{aligned}&{\mathcal {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{l-1},({\textbf{u}}_{l},\ldots ,{\textbf{u}}_{l+n-1}){\textbf{B}},{\textbf{u}}_{l+n},\ldots ,{\textbf{u}}_N]=\det ( {\textbf{B}}){\mathcal {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_N]. \end{aligned}$$
(2.72b)

Proof

Set

$$\begin{aligned} {\textbf{F}}(t;{\textbf{u}},{\textbf{A}})={\mathcal {G}}[({\textbf{I}}+t{\textbf{A}}){\textbf{u}}_1, \ldots ,({\textbf{I}}+t{\textbf{A}}){\textbf{u}}_N]. \end{aligned}$$
(2.73)

Since \(\det ({\textbf{I}}+t{\textbf{A}})\) is continuous in \(t\in {\mathbb {R}}\) and equals 1 at \(t=0\), there exists a positive number \(\delta \) such that \({\textbf{I}}+t{\textbf{A}}\) is invertible for \(|t|<\delta \). By the definition of \({\mathcal {G}}[\cdot ]\), the left-hand side of (2.72a) can be written as

$$\begin{aligned} \sum _{j=1}^N {\mathcal {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{j-1},{\textbf{A}}{\textbf{u}}_j,{\textbf{u}}_{j+1},\ldots ,{\textbf{u}}_N]=\left. \frac{\textrm{d}{\textbf{F}}(t;{\textbf{u}},{\textbf{A}})}{\textrm{d}t}\right| _{t=0}. \end{aligned}$$
(2.74)

On the other hand, for \(| t | < \delta \),

$$\begin{aligned} \begin{aligned} {\textbf{F}}(t;{\textbf{u}},{\textbf{A}})&=\sum _{j=1}^{N+1}\det (({\textbf{I}}+t{\textbf{A}}){\textbf{u}}_1, \ldots ,({\textbf{I}}+t{\textbf{A}}){\textbf{u}}_N,{\textbf{e}}_j){\textbf{e}}_j\\&=\det ({\textbf{I}}+t{\textbf{A}})\sum _{j=1}^{N+1}\det ({\textbf{u}}_1,\ldots ,{\textbf{u}}_N,({\textbf{I}}+t{\textbf{A}})^{-1}{\textbf{e}}_j){\textbf{e}}_j\\&\triangleq f(t;{\textbf{A}}){\textbf{F}}_1(t;{\textbf{u}},{\textbf{A}}). \end{aligned}\nonumber \\ \end{aligned}$$
(2.75)

Indeed,

$$\begin{aligned} \left. \frac{\textrm{d}f(t;{\textbf{A}})}{\textrm{d}t}\right| _{t=0}{} & {} =\textrm{trace}({\textbf{A}}), \end{aligned}$$
(2.76)
$$\begin{aligned} \left. \frac{\textrm{d}{\textbf{F}}_1(t;{\textbf{u}},{\textbf{A}})}{\textrm{d}t}\right| _{t=0}{} & {} =-\sum _{j=1}^{N+1}\det ({\textbf{u}}_1,\ldots ,{\textbf{u}}_N,{\textbf{A}}{\textbf{e}}_j){\textbf{e}}_j\nonumber \\{} & {} =-\sum _{j=1}^{N+1}\det ({\textbf{u}}_1,\ldots ,{\textbf{u}}_N,\sum _{l=1}^{N+1}({\textbf{e}}_l{\textbf{e}}_l^T){\textbf{A}}{\textbf{e}}_j){\textbf{e}}_j\nonumber \\{} & {} =-\sum _{j=1}^{N+1}\det ({\textbf{u}}_1,\ldots ,{\textbf{u}}_N,\sum _{l=1}^{N+1}({\textbf{e}}_l^T{\textbf{A}}{\textbf{e}}_j){\textbf{e}}_l){\textbf{e}}_j\nonumber \\{} & {} =-\sum _{l=1}^{N+1}\det ({\textbf{u}}_1,\ldots ,{\textbf{u}}_N,{\textbf{e}}_l)\sum _{j=1}^{N+1}{\textbf{e}}_j({\textbf{e}}_l^T{\textbf{A}}{\textbf{e}}_j)^T\nonumber \\{} & {} =-\sum _{l=1}^{N+1}\det ({\textbf{u}}_1,\ldots ,{\textbf{u}}_N,{\textbf{e}}_l)\sum _{j=1}^{N+1}({\textbf{e}}_j{\textbf{e}}_j^T){\textbf{A}}^T{\textbf{e}}_l\nonumber \\{} & {} =-{\textbf{A}}^T\sum _{l=1}^{N+1}\det ({\textbf{u}}_1,\ldots ,{\textbf{u}}_N,{\textbf{e}}_l){\textbf{e}}_l\nonumber \\{} & {} =-{\textbf{A}}^T{\mathcal {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_N]. \end{aligned}$$
(2.77)

Hence, the right-hand side of (2.72a) equals

$$\begin{aligned}{} & {} \textrm{trace}({\textbf{A}}){\mathcal {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_N]-{\textbf{A}}^T{\mathcal {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_N] \nonumber \\{} & {} \quad =\left. \frac{\textrm{d}f(t;{\textbf{A}})}{\textrm{d}t}\right| _{t=0}{\textbf{F}}_1(0;{\textbf{u}},{\textbf{A}})+f(0;{\textbf{A}})\left. \frac{\textrm{d}{\textbf{F}}_1(t;{\textbf{u}},{\textbf{A}})}{\textrm{d}t}\right| _{t=0} \nonumber \\{} & {} \quad = \left. \frac{\textrm{d}{\textbf{F}}(t;{\textbf{u}},{\textbf{A}})}{\textrm{d}t}\right| _{t=0}.\nonumber \\ \end{aligned}$$
(2.78)

In addition, the identity (2.72b) follows directly from Definition 2.9. \(\square \)
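Both identities of Lemma 2.10 are easy to confirm numerically. The sketch below (with the illustrative helper `gcross` implementing Definition 2.9) checks (2.72a) for a random complex \({\textbf{A}}\), and (2.72b) with \(n=2\), \(l=1\), in the case \(N=3\).

```python
import numpy as np

def gcross(*us):
    """Generalized cross product (2.71) in C^{N+1}."""
    n1 = len(us) + 1
    out = np.zeros(n1, dtype=complex)
    for j in range(n1):
        e = np.zeros(n1); e[j] = 1.0
        out[j] = np.linalg.det(np.column_stack(us + (e,)))
    return out

rng = np.random.default_rng(1)
N = 3
us = [rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)
      for _ in range(N)]
A = rng.standard_normal((N + 1, N + 1)) + 1j * rng.standard_normal((N + 1, N + 1))

# identity (2.72a): sum over replacements of u_j by A u_j
lhs = sum(gcross(*(us[:j] + [A @ us[j]] + us[j + 1:])) for j in range(N))
rhs = (np.trace(A) * np.eye(N + 1) - A.T) @ gcross(*us)
assert np.allclose(lhs, rhs)

# identity (2.72b) with n = 2, l = 1: replace (u1, u2) by (u1, u2) B
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
v = np.column_stack(us[:2]) @ B             # the two transformed columns
assert np.allclose(gcross(v[:, 0], v[:, 1], us[2]),
                   np.linalg.det(B) * gcross(*us))
```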

Remark 2.11

In Refs. [23, 24, 26], four identities are always introduced to construct the auxiliary eigenfunctions in \({\mathbb {C}}^3\) or \({\mathbb {C}}^4\). However, verifying four similar identities directly in a higher-dimensional space would entail a heavy computational burden. Fortunately, by exploiting the derivative of the determinant function, we have proved the identity (2.72a), which is essential for constructing the auxiliary eigenfunctions, especially in the arbitrary N-component case. In addition, the identities in Refs. [23, 24, 26] are special cases of (2.72a).

Indeed, observing that \({\textbf{Q}}^T=-\bar{{\textbf{Q}}}\), by virtue of the identity (2.72a), we can directly verify the following fact.

Proposition 2.12

Suppose that \(\psi _1(x,t;z),\ldots ,\psi _N(x,t;z)\) are N arbitrary solution vectors of the Lax pair (2.1), then

$$\begin{aligned} \Psi (x,t;z)=\textrm{e}^{\textrm{i}(N-1)\theta _2(x,t;z)}{\mathcal {G}}[{\tilde{\psi }}_1(x,t;z),\ldots ,{\tilde{\psi }}_N(x,t;z)] \end{aligned}$$
(2.79)

is a solution vector of the Lax pair (2.1), where \({\tilde{\psi }}_j(x,t;z)=\overline{\psi _j(x,t;{\bar{z}})}\) for \(j=1,\ldots ,N\).

The above functions \({\tilde{\psi }}_j(x,t;z)\ (j=1,\ldots , N+1)\) are also called the adjoint Jost solutions, whose analyticity follows immediately from the Riemann–Schwarz symmetry principle. Note that a simple relation exists between the adjoint Jost solutions and the Jost solutions of the original Lax pair (2.1):

Lemma 2.13

Under the same hypotheses as in Theorem 2.3, for \(z\in {\mathbb {R}}\),

$$\begin{aligned}&\gamma _{j_{N+1}}\psi _{\pm , j_{N+1}}(x,t;z)=\textrm{e}^{\textrm{i}(N-1)\theta _2(x,t;z)}{\mathcal {G}}[{\tilde{\psi }}_{\pm , j_1}(x,t;z),\ldots ,{\tilde{\psi }}_{\pm , j_N}(x,t;z)], \end{aligned}$$
(2.80a)
$$\begin{aligned}&\gamma _{j_{N+1}}{\tilde{\psi }}_{\pm , j_{N+1}}(x,t;z)=\textrm{e}^{\textrm{i}(1-N)\theta _2(x,t;z)}{\mathcal {G}}[\psi _{\pm , j_1}(x,t;z),\ldots ,\psi _{\pm , j_N}(x,t;z)], \end{aligned}$$
(2.80b)

where

$$\begin{aligned} \gamma _j={\left\{ \begin{array}{ll} 1,&{}j=1,N+1,\\ \gamma (z),\quad &{}j=2,\ldots ,N, \end{array}\right. } \end{aligned}$$
(2.81)

and \((j_1,\ldots ,j_{N+1})\) is an even permutation of \((1,\ldots ,N+1)\), \(\psi _{\pm ,j}\) and \({\tilde{\psi }}_{\pm ,j}\) represent the j-th columns of \(\psi _{\pm }\) and \({\tilde{\psi }}_{\pm }\), respectively.
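The linear-algebra mechanism behind Lemma 2.13 is the cofactor identity \({\mathcal {G}}[{\textbf{m}}_1,\ldots ,{\textbf{m}}_N]=\det ({\textbf{M}})\,({\textbf{M}}^{-1})^T{\textbf{e}}_{N+1}\), where \({\textbf{m}}_1,\ldots ,{\textbf{m}}_N\) are the first N columns of an invertible \((N+1)\times (N+1)\) matrix \({\textbf{M}}\); combined with (2.85), it produces the missing column up to the factors \(\gamma _j\). The sketch below checks only this generic identity (not the full lemma, which also uses the symmetries of the Lax pair), with the illustrative helper `gcross`:

```python
import numpy as np

def gcross(*us):
    """Generalized cross product (2.71) in C^{N+1}."""
    n1 = len(us) + 1
    out = np.zeros(n1, dtype=complex)
    for j in range(n1):
        e = np.zeros(n1); e[j] = 1.0
        out[j] = np.linalg.det(np.column_stack(us + (e,)))
    return out

rng = np.random.default_rng(2)
N = 3
M = rng.standard_normal((N + 1, N + 1)) + 1j * rng.standard_normal((N + 1, N + 1))
cols = [M[:, j] for j in range(N)]          # first N columns of M
lhs = gcross(*cols)
# G of N columns of M equals det(M) times the last row of M^{-1}
rhs = np.linalg.det(M) * np.linalg.inv(M)[N, :]
assert np.allclose(lhs, rhs)
```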

Definition 2.14

We introduce four new solutions of the original Lax pair (2.1):

$$\begin{aligned}&\chi _1(x,t;z)=\textrm{e}^{\textrm{i}(N-1)\theta _2(x,t;z)}{\mathcal {G}}[{\tilde{\psi }}_{+1}(x,t;z),{\tilde{\psi }}_{-2}(x,t;z)], \end{aligned}$$
(2.82a)
$$\begin{aligned}&\chi _2(x,t;z)=\textrm{e}^{\textrm{i}(N-1)\theta _2(x,t;z)}{\mathcal {G}}[{\tilde{\psi }}_{-1}(x,t;z),{\tilde{\psi }}_{+2}(x,t;z)], \end{aligned}$$
(2.82b)
$$\begin{aligned}&\chi _3(x,t;z)=\textrm{e}^{\textrm{i}(N-1)\theta _2(x,t;z)}{\mathcal {G}}[{\tilde{\psi }}_{+2}(x,t;z),{\tilde{\psi }}_{-3}(x,t;z)], \end{aligned}$$
(2.82c)
$$\begin{aligned}&\chi _4(x,t;z)=\textrm{e}^{\textrm{i}(N-1)\theta _2(x,t;z)}{\mathcal {G}}[{\tilde{\psi }}_{-2}(x,t;z),{\tilde{\psi }}_{+3}(x,t;z)], \end{aligned}$$
(2.82d)

we call \(\chi _1(x,t;z),\ldots , \chi _4(x,t;z)\) the auxiliary eigenfunctions. Analogous to the Jost eigenfunctions, the auxiliary eigenfunctions have the modified forms

$$\begin{aligned}{} & {} {\textbf{m}}_j(x,t;z)=\chi _j(x,t;z)\textrm{e}^{\textrm{i}\theta _1(x,t;z)}, \qquad \qquad j=1,2, \end{aligned}$$
(2.83a)
$$\begin{aligned}{} & {} {\textbf{m}}_j(x,t;z)=(-1)^N\chi _j(x,t;z)\textrm{e}^{-\textrm{i}\theta _1(x,t;z)}, \qquad j=3,4. \end{aligned}$$
(2.83b)

Considering the corresponding domains of analyticity of \(\psi _\pm \), by the Riemann–Schwarz symmetry principle, we obtain

Lemma 2.15

Under the same hypotheses as in Theorem 2.3, for \(j=1,\ldots ,4\), the auxiliary eigenfunction \(\chi _j(x,t;z)\) and its modified form \({\textbf{m}}_j(x,t;z)\) are analytic for \(z\in D_j\) and continuous up to the boundary \(\partial D_j\).

2.4 Symmetries

Similar to the scalar case and the 2-component case, the scattering problem admits two symmetries corresponding to the involutions: \((k,\lambda )\rightarrow ({\bar{k}},{\bar{\lambda }})\) and \((k,\lambda )\rightarrow (k,-\lambda )\), i.e., in terms of uniformization variable z: \(z\rightarrow {\bar{z}}\) and \(z\rightarrow {\hat{z}}\). Indeed, there is also another symmetry corresponding to \({\bar{z}}\rightarrow {\hat{z}}\) or \(z\rightarrow \hat{{\bar{z}}}\), which can be obtained by combining the first two symmetries.

2.4.1 First symmetry: \(z\rightarrow {\bar{z}}\)

Proposition 2.16

If \(\psi (x,t;z)\) is a matrix solution of the Lax pair (2.1), then

$$\begin{aligned} \partial _x(\psi ^\dag (x,t;{\bar{z}})\psi (x,t;z))=\partial _t(\psi ^\dag (x,t;{\bar{z}})\psi (x,t;z))={\textbf{0}}. \end{aligned}$$
(2.84)

The above statement is a straightforward consequence of the symmetries of the Lax pair (2.1). Considering the asymptotic conditions as in (2.15) for \(\psi _\pm (x,t;z)\), we deduce

$$\begin{aligned} \psi _\pm ^\dag (x,t;{\bar{z}})\psi _\pm (x,t;z)={\textbf{H}}(z). \end{aligned}$$
(2.85)

Especially,

$$\begin{aligned}{} & {} \psi ^\dag _{+1}(x,t;{\bar{z}}) \psi _{+2}(x,t;z)={\textbf{0}},\qquad z\in D_1, \end{aligned}$$
(2.86a)
$$\begin{aligned}{} & {} \psi ^\dag _{-1}(x,t;{\bar{z}}) \psi _{-2}(x,t;z)={\textbf{0}},\qquad z\in D_2, \end{aligned}$$
(2.86b)
$$\begin{aligned}{} & {} \psi ^\dag _{-3}(x,t;{\bar{z}}) \psi _{-2}(x,t;z)={\textbf{0}},\qquad z\in D_3, \end{aligned}$$
(2.86c)
$$\begin{aligned}{} & {} \psi ^\dag _{+3}(x,t;{\bar{z}}) \psi _{+2}(x,t;z)={\textbf{0}},\qquad z\in D_4. \end{aligned}$$
(2.86d)

According to (2.63) and (2.67), we have

$$\begin{aligned} {\textbf{s}}(z)={\textbf{H}}^{-1}(z)\psi _+^\dag (x,t;{\bar{z}})\psi _-(x,t;z),\quad {\textbf{S}}(z)={\textbf{H}}^{-1}(z)\psi _-^\dag (x,t;{\bar{z}})\psi _+(x,t;z).\quad \end{aligned}$$
(2.87)

Consequently, the scattering matrices \({\textbf{s}}(z)\) and \({\textbf{S}}(z)\) satisfy the symmetry

$$\begin{aligned} {\textbf{s}}(z)={\textbf{H}}^{-1}(z){\textbf{S}}^\dag ({\bar{z}}){\textbf{H}}(z). \end{aligned}$$
(2.88)

Componentwise,

$$\begin{aligned}&{\textbf{s}}_{12}(z)=\frac{{\textbf{S}}^\dag _{21}({\bar{z}})}{\gamma (z)},\quad {\textbf{s}}_{21}(z)=\gamma (z){\textbf{S}}^\dag _{12}({\bar{z}}),\qquad z\in {\mathbb {R}}, \end{aligned}$$
(2.89a)
$$\begin{aligned}&{\textbf{s}}_{23}(z)=\gamma (z){\textbf{S}}^\dag _{32}({\bar{z}}),\qquad {\textbf{s}}_{32}(z)=\frac{{\textbf{S}}^\dag _{23}({\bar{z}})}{\gamma (z)},\qquad z\in {\mathbb {R}}, \end{aligned}$$
(2.89b)
$$\begin{aligned}&{\textbf{s}}_{13}(z)=\overline{{\textbf{S}}_{31}({\bar{z}})},\quad {\textbf{s}}_{31}(z)=\overline{{\textbf{S}}_{13}({\bar{z}})},\quad z\in \Gamma _-\backslash \{\textrm{i}q_0\}. \end{aligned}$$
(2.89c)

Particularly,

$$\begin{aligned}{} & {} \overline{{\textbf{S}}_{11}({\bar{z}})}={\textbf{s}}_{11}(z)=\frac{1}{\gamma (z)}\psi ^\dag _{+1}(x,t;{\bar{z}})\psi _{-1}(x,t;z),\qquad z\in \bar{D}_1\backslash \{\textrm{i}q_0\},\qquad \end{aligned}$$
(2.90a)
$$\begin{aligned}{} & {} {\textbf{S}}^\dag _{22}({\bar{z}})={\textbf{s}}_{22}(z)=\psi ^\dag _{+2}(x,t;{\bar{z}})\psi _{-2}(x,t;z),\quad \quad z\in \bar{{\mathbb {C}}}_-, \end{aligned}$$
(2.90b)
$$\begin{aligned}{} & {} \overline{{\textbf{S}}_{33}({\bar{z}})}={\textbf{s}}_{33}(z)=\frac{1}{\gamma (z)}\psi ^\dag _{+3}(x,t;{\bar{z}})\psi _{-3}(x,t;z),\quad \quad z\in \bar{D}_4\backslash \{\textrm{i}q_0\}.\qquad \end{aligned}$$
(2.90c)

2.4.2 Second symmetry: \(z\rightarrow {\hat{z}}\)

Proposition 2.17

If \(\psi (x,t;z)\) is a matrix solution of the Lax pair (2.1), so is \(\psi (x,t;{\hat{z}})\).

Considering the asymptotic conditions in (2.15) for \(\psi _\pm (x,t;z)\), we find

$$\begin{aligned} \psi _\pm (x,t;z)=\psi _\pm (x,t;{\hat{z}})\varvec{\Pi }_\pm (z), \end{aligned}$$
(2.91)

where

$$\begin{aligned} \varvec{\Pi }_\pm (z)=\begin{pmatrix} 0 &{} \textbf{0} &{} \frac{\textrm{i}\bar{q}_\pm }{z} \\ \textbf{0} &{} \textbf{I} &{} \textbf{0} \\ \frac{\textrm{i}q_\pm }{z} &{} \textbf{0} &{} 0 \end{pmatrix}. \end{aligned}$$
(2.92)

In particular,

$$\begin{aligned}&\psi _{\pm 1}(x,t;z)=\frac{\textrm{i}q_\pm }{z} \psi _{\pm 3}(x,t;{\hat{z}}),\quad z\in \bar{D}_2\ (\text{ resp. }\ \bar{D}_1), \end{aligned}$$
(2.93a)
$$\begin{aligned}&\psi _{\pm 3}(x,t;z)=\frac{\textrm{i}{\bar{q}}_\pm }{z}\psi _{\pm 1}(x,t;{\hat{z}}), \quad z\in \bar{D}_3\ (\text{ resp. } \bar{D}_4), \end{aligned}$$
(2.93b)
$$\begin{aligned}&\psi _{\pm 2}(x,t;z)=\psi _{\pm 2}(x,t;{\hat{z}}),\quad z\in \bar{{\mathbb {C}}}_\pm , \end{aligned}$$
(2.93c)

which implies the auxiliary eigenfunctions satisfy the symmetries

$$\begin{aligned}&{\textbf{m}}_1(x,t;z)=\frac{\textrm{i}{\bar{q}}_+}{z}{\textbf{m}}_4(x,t;{\hat{z}}),\quad z\in \bar{D}_1, \end{aligned}$$
(2.94a)
$$\begin{aligned}&{\textbf{m}}_2(x,t;z)=\frac{\textrm{i}{\bar{q}}_-}{z}{\textbf{m}}_3(x,t;{\hat{z}}),\quad z\in \bar{D}_2. \end{aligned}$$
(2.94b)
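Applying the symmetry (2.91) twice forces the consistency condition \(\varvec{\Pi }_\pm ({\hat{z}})\varvec{\Pi }_\pm (z)={\textbf{I}}\). The sketch below checks this condition, assuming (as is standard for the focusing case) the involution \({\hat{z}}=-q_0^2/z\) and treating \(q_\pm \) as scalars with \(\vert q_\pm \vert =q_0\); both assumptions are for illustration only.

```python
import numpy as np

def Pi(z, q, N=2):
    """Matrix Pi(z) of (2.92), scalar-block case, size (N+1) x (N+1)."""
    P = np.zeros((N + 1, N + 1), dtype=complex)
    P[1:N, 1:N] = np.eye(N - 1)             # middle identity block
    P[0, N] = 1j * np.conj(q) / z
    P[N, 0] = 1j * q / z
    return P

q0 = 1.5
q = q0 * np.exp(0.7j)                       # boundary value with |q| = q0
z = 0.9 - 0.4j
zhat = -q0**2 / z                           # assumed focusing-case involution
assert np.allclose(Pi(zhat, q) @ Pi(z, q), np.eye(3))
```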

Combining (2.63), (2.67) with (2.91), we find

$$\begin{aligned} {\textbf{s}}({\hat{z}})={\varvec{\Pi }}_+(z){\textbf{s}}(z){\varvec{\Pi }}^{-1}_-(z),\quad {\textbf{S}}({\hat{z}})={\varvec{\Pi }}_-(z){\textbf{S}}(z){\varvec{\Pi }}^{-1}_+(z). \end{aligned}$$
(2.95)

Componentwise,

$$\begin{aligned}{} & {} {\textbf{s}}_{12}(z)=-\frac{\textrm{i}z}{q_+}{\textbf{s}}_{32}({\hat{z}}), \qquad {\textbf{s}}_{21}(z)=\frac{\textrm{i}q_-}{z}{\textbf{s}}_{23}({\hat{z}}), \qquad z\in {\mathbb {R}}, \end{aligned}$$
(2.96a)
$$\begin{aligned}{} & {} {\textbf{s}}_{32}(z)=-\frac{\textrm{i}z}{{\bar{q}}_+}{\textbf{s}}_{12}({\hat{z}}),\quad {\textbf{s}}_{23}(z)=\frac{\textrm{i}{\bar{q}}_-}{z}{\textbf{s}}_{21}({\hat{z}}), \quad z\in {\mathbb {R}}, \end{aligned}$$
(2.96b)
$$\begin{aligned}{} & {} {\textbf{s}}_{13}(z)=\frac{{\bar{q}}_-}{q_+}{\textbf{s}}_{31}({\hat{z}}),\quad {\textbf{s}}_{31}(z)=\frac{q_-}{{\bar{q}}_+}{\textbf{s}}_{13}({\hat{z}}),\quad z\in \Gamma _-\backslash \{\textrm{i}q_0\}, \end{aligned}$$
(2.96c)
$$\begin{aligned}{} & {} {\textbf{s}}_{11}(z)=\frac{q_-}{q_+}{\textbf{s}}_{33}({\hat{z}}),\quad z\in \bar{D}_1\backslash \{\textrm{i}q_0\},\quad \end{aligned}$$
(2.96d)
$$\begin{aligned}{} & {} {\textbf{s}}_{22}(z)={\textbf{s}}_{22}({\hat{z}}),\quad z\in \bar{{\mathbb {C}}}_-,\quad \end{aligned}$$
(2.96e)
$$\begin{aligned}{} & {} {\textbf{s}}_{33}(z)=\frac{{\bar{q}}_-}{{\bar{q}}_+}{\textbf{s}}_{11}({\hat{z}}),\quad z\in \bar{D}_4\backslash \{\textrm{i}q_0\}.\quad \end{aligned}$$
(2.96f)
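The componentwise relations (2.96) follow by writing out (2.95) block by block; in the scalar-block case \(N=2\) they can be spot-checked numerically. The sketch below takes a random \(3\times 3\) matrix for \({\textbf{s}}(z)\), forms \({\textbf{s}}({\hat{z}})=\varvec{\Pi }_+(z){\textbf{s}}(z)\varvec{\Pi }_-^{-1}(z)\) as in (2.95), and verifies a sample of the entries (the values of z and \(q_\pm \) are illustrative only).

```python
import numpy as np

def Pi(z, q, N=2):
    """Matrix Pi(z) of (2.92), scalar-block case, size (N+1) x (N+1)."""
    P = np.zeros((N + 1, N + 1), dtype=complex)
    P[1:N, 1:N] = np.eye(N - 1)
    P[0, N] = 1j * np.conj(q) / z
    P[N, 0] = 1j * q / z
    return P

rng = np.random.default_rng(3)
q0, z = 1.5, 0.8                                        # z real, as in (2.96a)
qp, qm = q0 * np.exp(0.3j), q0 * np.exp(-1.1j)          # illustrative q_+, q_-
s = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))   # s(z)
shat = Pi(z, qp) @ s @ np.linalg.inv(Pi(z, qm))         # s(zhat) by (2.95)

assert np.allclose(s[0, 1], -1j * z / qp * shat[2, 1])  # (2.96a), s_12
assert np.allclose(s[1, 0], 1j * qm / z * shat[1, 2])   # (2.96a), s_21
assert np.allclose(s[0, 0], qm / qp * shat[2, 2])       # (2.96d)
assert np.allclose(s[1, 1], shat[1, 1])                 # (2.96e)
```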

A similar set of relations obviously holds for the components of \({\textbf{S}}(z)\),

$$\begin{aligned}{} & {} {\textbf{S}}_{12}(z)=-\frac{\textrm{i}z}{q_-}{\textbf{S}}_{32}({\hat{z}}), \qquad {\textbf{S}}_{21}(z)=\frac{\textrm{i}q_+}{z}{\textbf{S}}_{23}({\hat{z}}), \qquad z\in {\mathbb {R}}, \end{aligned}$$
(2.97a)
$$\begin{aligned}{} & {} {\textbf{S}}_{32}(z)=-\frac{\textrm{i}z}{{\bar{q}}_-}{\textbf{S}}_{12}({\hat{z}}), \quad {\textbf{S}}_{23}(z)=\frac{\textrm{i}{\bar{q}}_+}{z}{\textbf{S}}_{21}({\hat{z}}),\quad z\in {\mathbb {R}}, \end{aligned}$$
(2.97b)
$$\begin{aligned}{} & {} {\textbf{S}}_{13}(z)=\frac{{\bar{q}}_+}{q_-}{\textbf{S}}_{31}({\hat{z}}),\quad {\textbf{S}}_{31}(z)=\frac{q_+}{{\bar{q}}_-}{\textbf{S}}_{13}({\hat{z}}),\quad z\in \Gamma _+\backslash \{-\textrm{i}q_0\}, \end{aligned}$$
(2.97c)
$$\begin{aligned}{} & {} {\textbf{S}}_{11}(z)=\frac{q_+}{q_-}{\textbf{S}}_{33}({\hat{z}}),\quad z\in \bar{D}_2\backslash \{-\textrm{i}q_0\},\quad \end{aligned}$$
(2.97d)
$$\begin{aligned}{} & {} {\textbf{S}}_{22}(z)={\textbf{S}}_{22}({\hat{z}}),\quad z\in \bar{{\mathbb {C}}}_+,\quad \end{aligned}$$
(2.97e)
$$\begin{aligned}{} & {} {\textbf{S}}_{33}(z)=\frac{{\bar{q}}_+}{{\bar{q}}_-}{\textbf{S}}_{11}({\hat{z}}),\quad z\in \bar{D}_3\backslash \{-\textrm{i}q_0\}.\quad \end{aligned}$$
(2.97f)

2.4.3 Combined symmetry

In the inverse problem, the following reflection coefficients will appear:

$$\begin{aligned}{} & {} \rho _1(z)=\frac{{\textbf{s}}_{21}(z)}{{\textbf{s}}_{11}(z)}=\gamma (z)\frac{{\textbf{S}}_{12}^\dag ({\bar{z}})}{\overline{{\textbf{S}}_{11}({\bar{z}})}},\quad \quad z\in {\mathbb {R}}, \end{aligned}$$
(2.98a)
$$\begin{aligned}{} & {} \rho _2(z)=\frac{{\textbf{s}}_{31}(z)}{{\textbf{s}}_{11}(z)}=\frac{\overline{{\textbf{S}}_{13}({\bar{z}})}}{\overline{{\textbf{S}}_{11}({\bar{z}})}},\quad \quad z\in \Gamma _-\backslash \{\textrm{i}q_0\}, \end{aligned}$$
(2.98b)
$$\begin{aligned}{} & {} \rho _3(z)={\textbf{s}}_{32}(z){\textbf{s}}^{-1}_{22}(z)=\frac{1}{\gamma (z)}{\textbf{S}}^\dag _{23}({\bar{z}})({\textbf{S}}^\dag _{22}({\bar{z}}))^{-1},\qquad z\in {\mathbb {R}}. \end{aligned}$$
(2.98c)

Considering the transform \(z\rightarrow {\hat{z}}\), we have

$$\begin{aligned}{} & {} \rho _1({\hat{z}})=-\frac{\textrm{i}z}{{\bar{q}}_+}\frac{{\textbf{s}}_{23}(z)}{{\textbf{s}}_{33}(z)}=-\frac{\textrm{i}z\gamma (z)}{{\bar{q}}_+}\frac{{\textbf{S}}^\dag _{32}({\bar{z}})}{\overline{{\textbf{S}}_{33}({\bar{z}})}},\quad \quad z\in {\mathbb {R}}, \end{aligned}$$
(2.99a)
$$\begin{aligned}{} & {} \rho _2({\hat{z}})=\frac{q_+}{{\bar{q}}_+}\frac{{\textbf{s}}_{13}(z)}{{\textbf{s}}_{33}(z)}=\frac{q_+}{{\bar{q}}_+}\frac{\overline{{\textbf{S}}_{31}({\bar{z}})}}{\overline{{\textbf{S}}_{33}({\bar{z}})}},\quad \quad z\in \Gamma _-\backslash \{\textrm{i}q_0\}, \end{aligned}$$
(2.99b)
$$\begin{aligned}{} & {} \rho _3({\hat{z}})=\frac{\textrm{i}q_+}{z}{\textbf{s}}_{12}(z){\textbf{s}}^{-1}_{22}(z)=\frac{\textrm{i}q_+}{z\gamma (z)}{\textbf{S}}^\dag _{21}({\bar{z}})({\textbf{S}}^\dag _{22}({\bar{z}}))^{-1},\quad \quad \quad z\in {\mathbb {R}}.\nonumber \\ \end{aligned}$$
(2.99c)

From (2.87)–(2.90), it follows that

$$\begin{aligned}{} & {} \rho _1(z)=\gamma (z)\frac{\psi _{+2}^\dag ( {\bar{z}})\psi _{-1}(z)}{\psi _{+1}^\dag ( {\bar{z}})\psi _{-1}(z)},\qquad z\in {\mathbb {R}}, \end{aligned}$$
(2.100a)
$$\begin{aligned}{} & {} \rho _2(z)=\frac{\psi _{+3}^\dag ( {\bar{z}})\psi _{-1}(z)}{\psi _{+1}^\dag ( {\bar{z}})\psi _{-1}(z)},\qquad z\in \Gamma _-\backslash \{\textrm{i}q_0\}, \end{aligned}$$
(2.100b)
$$\begin{aligned}{} & {} \rho _3(z)=\psi _{+3}^\dag ( {\bar{z}})\psi _{-2}(z)[\gamma (z)\psi _{+2}^\dag ( {\bar{z}})\psi _{-2}( z)]^{-1},\quad z\in {\mathbb {R}}. \end{aligned}$$
(2.100c)

The above expressions together with Corollary 2.6 give the following

Corollary 2.18

Under the same hypotheses as in Theorem 2.3, and assuming that no zeros of \({\textbf{s}}_{11}(z)\) or \(\det ({\textbf{s}}_{22}(z))\) occur on \(\Gamma _-\), we have \(\rho _1(z)\), \(\rho _3(z)\in C^1({\mathbb {R}})\) and \(\rho _2(z)\in C^1(\Gamma _-)\).

Remark 2.19

For \(z\in {\mathbb {R}}\), it follows trivially from \({\textbf{S}}(z){\textbf{s}}(z)={\textbf{I}}\) that

$$\begin{aligned}&{\textbf{S}}_{11}(z)({\textbf{S}}_{31}(z){\textbf{s}}_{12}(z)+{\textbf{S}}_{32}(z){\textbf{s}}_{22}(z)+{\textbf{S}}_{33}(z){\textbf{s}}_{32}(z))={\textbf{0}}, \end{aligned}$$
(2.101a)
$$\begin{aligned}&{\textbf{S}}_{31}(z)({\textbf{S}}_{11}(z){\textbf{s}}_{12}(z)+{\textbf{S}}_{12}(z){\textbf{s}}_{22}(z)+{\textbf{S}}_{13}(z){\textbf{s}}_{32}(z))={\textbf{0}}, \end{aligned}$$
(2.101b)

the subtraction of which yields

$$\begin{aligned} \rho _3(z)={\textbf{s}}_{32}(z){\textbf{s}}^{-1}_{22}(z)=\frac{{\textbf{S}}_{31}(z){\textbf{S}}_{12}(z)-{\textbf{S}}_{11}(z){\textbf{S}}_{32}(z)}{{\textbf{S}}_{11}(z){\textbf{S}}_{33}(z)-{\textbf{S}}_{13}(z){\textbf{S}}_{31}(z)}. \end{aligned}$$
(2.102)

Equations (2.98) and (2.99) allow us to conclude

$$\begin{aligned} \rho _3(z)=\frac{q_+\overline{\rho _2(\hat{{\bar{z}}})}\rho _1^\dag ({\bar{z}})-\textrm{i}{\hat{z}}\rho _1^\dag (\hat{{\bar{z}}})}{\gamma (z)({\bar{q}}_+-q_+\overline{\rho _2({\bar{z}}) \rho _2(\hat{{\bar{z}}})})}, \end{aligned}$$
(2.103)

which implies that the reflection coefficients \(\rho _1(z)\), \(\rho _2(z)\) and \(\rho _3(z)\) are not independent.
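In the scalar-block case \(N=2\), where every block of (2.64) and (2.66) is a scalar, the identity (2.102) reduces to a cofactor (Jacobi) identity relating \(2\times 2\) minors of \({\textbf{S}}={\textbf{s}}^{-1}\) to entries of \({\textbf{s}}\), valid for any invertible \({\textbf{s}}\). A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(4)
s = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
S = np.linalg.inv(s)
# identity (2.102), scalar blocks: s_32 / s_22 equals a ratio of 2x2 minors of S
lhs = s[2, 1] / s[1, 1]
rhs = ((S[2, 0] * S[0, 1] - S[0, 0] * S[2, 1])
       / (S[0, 0] * S[2, 2] - S[0, 2] * S[2, 0]))
assert np.allclose(lhs, rhs)
```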

2.5 Behavior at branch points

In order to properly specify growth conditions for the RH problem, we should analyze the behavior of the eigenfunctions and the scattering data near the branch points. It follows from Theorem 2.3 and Lemma 2.15 that \(\mu _{-1}(x,t;z)\), \(\mu _{+2}(x,t;z)\), \(\mu _{-3}(x,t;z)\), \({\textbf{m}}_1(x,t;z)\) and \({\textbf{m}}_4(x,t;z)\) are well defined and continuous at the branch point \(z=\textrm{i}q_0\), while \(\mu _{+1}(x,t;z)\), \(\mu _{-2}(x,t;z)\), \(\mu _{+3}(x,t;z)\), \({\textbf{m}}_2(x,t;z)\) and \({\textbf{m}}_3(x,t;z)\) are well defined and continuous at the branch point \(z=-\textrm{i}q_0\). Furthermore, it follows from (2.15) that all of the above eigenfunctions are nonzero at the branch points.

From (2.87), it follows that the scattering coefficients \({\textbf{s}}_{11}(z)\), \({\textbf{s}}_{13}(z)\), \({\textbf{s}}_{31}(z)\) and \({\textbf{s}}_{33}(z)\) have at most a simple pole at the branch point \(z=\textrm{i}q_0\).

Corollary 2.20

Under the same hypotheses as in Theorem 2.3, as \(z\rightarrow \textrm{i}q_0\),

$$\begin{aligned} \begin{aligned}&(z-\textrm{i}q_0){\textbf{s}}_{11}(z)=O(1), \qquad (z-\textrm{i}q_0){\textbf{s}}_{13}(z)=O(1),\\&(z-\textrm{i}q_0){\textbf{s}}_{31}(z)=O(1), \qquad (z-\textrm{i}q_0){\textbf{s}}_{33}(z)=O(1). \end{aligned} \end{aligned}$$
(2.104)

Similarly, the scattering coefficients \({\textbf{S}}_{11}(z)\), \({\textbf{S}}_{13}(z)\), \({\textbf{S}}_{31}(z)\) and \({\textbf{S}}_{33}(z)\) have at most a simple pole at the branch point \(z=-\textrm{i}q_0\).

Corollary 2.21

Under the same hypotheses as in Theorem 2.3, as \(z\rightarrow -\textrm{i}q_0\),

$$\begin{aligned} \begin{aligned}&(z+\textrm{i}q_0) {\textbf{S}}_{11}(z)=O(1), \qquad (z+\textrm{i}q_0){\textbf{S}}_{13}(z)=O(1),\\&(z+\textrm{i}q_0){\textbf{S}}_{31}(z)=O(1), \qquad (z+\textrm{i}q_0){\textbf{S}}_{33}(z)=O(1). \end{aligned} \end{aligned}$$
(2.105)

Considering the asymptotics of the columns of \(\psi _\pm (x,t; \mp \textrm{i}q_0)\) as \(x\rightarrow \pm \infty \), respectively, we obtain

$$\begin{aligned} \psi _{-1}(x,t;\textrm{i}q_0)=\textrm{e}^{\textrm{i}\theta _-}\psi _{-3}(x,t;\textrm{i}q_0),\quad \psi _{+1}(x,t;-\textrm{i}q_0)=-\textrm{e}^{\textrm{i}\theta _+}\psi _{+3}(x,t;-\textrm{i}q_0).\nonumber \\ \end{aligned}$$
(2.106)

It follows trivially from (2.87), (2.98) and (2.106) that

$$\begin{aligned} \lim _{z\rightarrow \textrm{i}q_0}\rho _2(z)=\lim _{z\rightarrow \textrm{i}q_0}\rho _2({\hat{z}})=-\textrm{e}^{\textrm{i}\theta _+}, \end{aligned}$$
(2.107)

which implies that the jump matrices in Sect. 3 are not singular at \(\pm \textrm{i}q_0\).

2.6 Asymptotic behavior as \(z\rightarrow \infty \) and \(z\rightarrow 0\)

In the context of the IST with ZBCs, one only needs to investigate the asymptotic behavior of the eigenfunctions and the scattering data as the spectral parameter approaches infinity. Here, however, since the two points \(\infty _{1}\) and \(\infty _2\) at infinity on the Riemann surface \(\Upsilon \) are mapped to infinity and zero in the complex \(z\) plane, respectively, we must consider the asymptotics both as \(z\rightarrow \infty \) and as \(z\rightarrow 0\). Substituting the Wentzel–Kramers–Brillouin expansions of the columns of the modified Jost solutions into (2.17) and collecting the terms of order \(O(z^j)\) as in Ref. [18], or substituting the formal expansions \(\mu _\pm (x,t;z)=\sum _{j=1}^\infty \mu _\pm ^{(j)}(x,t;z)\) into the Volterra integral equation (2.22) as in Ref. [21], we obtain the following asymptotics:

Lemma 2.22

Suppose that \({\textbf{q}}(\cdot ,t)-{\textbf{q}}_\pm \in L^{1,1}({\mathbb {R}}^{\pm })\) and that \({\textbf{q}}(\cdot ,t)\) is continuously differentiable with \({\textbf{q}}_x(\cdot ,t)\in L^{1}({\mathbb {R}}^{\pm })\). Then, as \(z\rightarrow \infty \) in the appropriate regions where each column is well defined,

$$\begin{aligned} \mu _{\pm }(x,t;z)=\begin{pmatrix} 1&{}\quad \frac{\textrm{i}{\textbf{q}}^\dag (x,t)}{z}\\ \frac{\textrm{i}{\textbf{q}}(x,t)}{z}&{} {\textbf{I}} \end{pmatrix}+O(z^{-2}). \end{aligned}$$
(2.108)

Similarly, as \(z\rightarrow 0\) in the appropriate regions where each column is well defined,

$$\begin{aligned} \mu _{\pm 1}(x,t;z)&=\begin{pmatrix} \frac{\textbf{q}^\dag (x,t) \textbf{q}_\pm }{q_0^2}\\ \frac{\textrm{i}\textbf{q}_\pm }{z} \end{pmatrix}+O(1), \end{aligned}$$
(2.109a)
$$\begin{aligned} \mu _{\pm 2}(x,t;z)&=\begin{pmatrix} -\frac{\textrm{i}z }{q_0^2}\textbf{q}^\dag (x,t)\\ \textbf{I} \end{pmatrix}({\epsilon }_1,\ldots ,{\epsilon }_{N-1})+O(z),\end{aligned}$$
(2.109b)
$$\begin{aligned} \mu _{\pm 3}(x,t;z)&=\begin{pmatrix} \frac{\textrm{i}\bar{q}_\pm }{z}\\ \frac{\textbf{q}(x,t)}{q_\pm } \end{pmatrix}+O(1), \end{aligned}$$
(2.109c)

where \(\{{{\epsilon }}_1,\ldots ,{\epsilon }_{N}\}\) represents the standard basis for \({\mathbb {R}}^{N}\).

Consequently, we can calculate the asymptotics of \({\textbf{m}}_j\ (j=1,\ldots ,4)\) from the definitions of the auxiliary eigenfunctions and of the modified eigenfunctions, respectively.

Corollary 2.23

Under the same hypotheses as in Lemma 2.22, as \(z\rightarrow \infty \) in the appropriate regions where each column is well defined,

$$\begin{aligned} {\textbf{m}}_1(x,t;z)&=\begin{pmatrix} \frac{\textrm{i}{\textbf{q}}^\dag (x,t)\epsilon _N}{z}\\ {\textbf{0}}\\ 1 \end{pmatrix}+O(z^{-2}), \end{aligned}$$
(2.110a)
$$\begin{aligned} {\textbf{m}}_2(x,t;z)&=\begin{pmatrix} \frac{\textrm{i}{\textbf{q}}^\dag (x,t)\epsilon _N}{z}\\ {\textbf{0}}\\ 1 \end{pmatrix}+O(z^{-2}). \end{aligned}$$
(2.110b)

Similarly, as \(z\rightarrow 0\) in the appropriate regions where each column is well defined,

$$\begin{aligned} {\textbf{m}}_3(x,t;z)&=\begin{pmatrix} \frac{{\textbf{q}}^\dag (x,t)\epsilon _N}{{\bar{q}}_-}\\ {\textbf{0}}\\ \frac{\textrm{i}q_-}{z} \end{pmatrix}+O(1), \end{aligned}$$
(2.111a)
$$\begin{aligned} {\textbf{m}}_4(x,t;z)&=\begin{pmatrix} \frac{{\textbf{q}}^\dag (x,t)\epsilon _N}{{\bar{q}}_+}\\ {\textbf{0}}\\ \frac{\textrm{i}q_+}{z} \end{pmatrix}+O(1). \end{aligned}$$
(2.111b)

Remark 2.24

Indeed, by virtue of the symmetries (2.93) and (2.94), Eqs. (2.109) and (2.111) can be obtained directly from Eqs. (2.108) and (2.110), respectively.

Next, it follows from the asymptotics in Lemma 2.22 and the scattering relation (2.90) that the scattering matrix entries have the following asymptotic behaviors:

Corollary 2.25

Under the same hypotheses as in Lemma 2.22, as \(z\rightarrow \infty \) in the appropriate regions where each column is well defined,

$$\begin{aligned}{} & {} {\textbf{s}}_{11}(z)=1+O(z^{-1}), \quad {\textbf{S}}_{11}(z)=1+O(z^{-1}), \end{aligned}$$
(2.112a)
$$\begin{aligned}{} & {} {\textbf{s}}_{22}(z)={\textbf{I}}+O(z^{-1}), \qquad {\textbf{S}}_{22}(z)={\textbf{I}}+O(z^{-1}). \end{aligned}$$
(2.112b)

Similarly, as \(z\rightarrow 0\) in the appropriate regions where each column is well defined,

$$\begin{aligned}{} & {} {\textbf{s}}_{22}(z)={\textbf{I}}+O(z), \qquad {\textbf{S}}_{22}(z)={\textbf{I}}+O(z), \end{aligned}$$
(2.113a)
$$\begin{aligned}{} & {} {\textbf{s}}_{33}(z)=\frac{q_+}{q_-}+O(z), \quad {\textbf{S}}_{33}(z)=\frac{ q_-}{q_+}+O(z). \end{aligned}$$
(2.113b)

Lemma 2.22 together with (2.100) immediately implies the following:

Corollary 2.26

Under the same hypotheses as in Lemma 2.22, as \(z\rightarrow \infty \),

$$\begin{aligned} \rho _1(z)=O(z^{-1}),\quad \rho _2(z)=O(z^{-1}),\quad \rho _3(z)=O(z^{-1}). \end{aligned}$$
(2.114)

3 Inverse problem

For \(z\in D_j,\ j=1,\ldots ,4\), an extra analytic eigenfunction \(\chi _j(x,t;z)\) is generated by virtue of the generalized cross product. Therefore, \(\mu _\pm (x,t;z)\) and \(\{\chi _j(x,t;z)\}_1^4\) make up a complete set of analytic eigenfunctions for solving the inverse problem. In the following, we will introduce a new operator and several identities, which play a key role in decomposing the auxiliary eigenfunctions and expressing symmetries.

3.1 Decomposition of the auxiliary eigenfunctions

Definition 3.1

For all \({\textbf{u}}_1,\ldots ,{\textbf{u}}_{N+1}\in {\mathbb {C}}^{N+1}\), define

$$\begin{aligned} {\mathscr {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N+1}]=-\sum _{l=1}^{N+1}\sum _{j=1}^{N+1}\det \begin{pmatrix} {\textbf{u}} &{}\quad {\textbf{e}}_j \\ {\textbf{e}}_l^T &{} 0 \end{pmatrix}{\textbf{e}}_j{\textbf{e}}_l^T, \end{aligned}$$
(3.1)

where \({\textbf{u}}=({\textbf{u}}_1,\ldots ,{\textbf{u}}_{N+1})\). Consequently, the \(l\)-th column of \({\mathscr {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N+1}]\) reads, for \(l=1,\ldots ,N+1\),

$$\begin{aligned} {\mathscr {G}}_l[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N+1}]= {\mathscr {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N+1}]{\textbf{e}}_l=-\sum _{j=1}^{N+1}\det \begin{pmatrix} {\textbf{u}} &{}\quad {\textbf{e}}_j \\ {\textbf{e}}_l^T &{} 0 \end{pmatrix}{\textbf{e}}_j.\nonumber \\ \end{aligned}$$
(3.2)

By direct calculations, it is easy to verify the following relation among the adjugate matrix \((\cdot )^*\), the generalized cross product \({\mathcal {G}}[\cdot ]\) and the operator \({\mathscr {G}}[\cdot ]\),

$$\begin{aligned} {\textbf{u}}^*=\begin{pmatrix} (-1)^{N}{\mathcal {G}}^T[{\textbf{u}}_2,\ldots ,{\textbf{u}}_{N+1}]\\ (-1)^{N-1}{\mathcal {G}}^T[{\textbf{u}}_1,{\textbf{u}}_3,\ldots ,{\textbf{u}}_{N+1}]\\ \vdots \\ (-1)^{N+1-j}{\mathcal {G}}^T[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{j-1},{\textbf{u}}_{j+1},\ldots ,{\textbf{u}}_{N+1}]\\ \vdots \\ {\mathcal {G}}^T[{\textbf{u}}_1,\ldots ,{\textbf{u}}_N] \end{pmatrix}={\mathscr {G}}^T[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N+1}].\nonumber \\ \end{aligned}$$
(3.3)
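Relation (3.3) states that \({\mathscr {G}}[{\textbf{u}}]\) is the cofactor matrix of \({\textbf{u}}\), so its transpose is the adjugate. This is easy to verify numerically by implementing (3.1) directly through the bordered determinants; the function name below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5  # n = N + 1

def G_script(u):
    """Operator (3.1): the (j, l) entry is minus a bordered determinant."""
    m = u.shape[0]
    G = np.zeros((m, m), dtype=complex)
    for l in range(m):
        for j in range(m):
            B = np.zeros((m + 1, m + 1), dtype=complex)
            B[:m, :m] = u
            B[j, m] = 1.0  # the appended column e_j
            B[m, l] = 1.0  # the appended row e_l^T
            G[j, l] = -np.linalg.det(B)
    return G

u = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
adj_u = np.linalg.det(u) * np.linalg.inv(u)  # adjugate of an invertible matrix

# relation (3.3): u* equals the transpose of G[u]
assert np.allclose(adj_u, G_script(u).T)
```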

Lemma 3.2

For all \({\textbf{u}}\in {\mathbb {C}}^{(N+1)\times (N+1)}\),

$$\begin{aligned} {\text {rank}}({\mathscr {G}}[{\textbf{u}}])={\left\{ \begin{array}{ll} N+1,\quad &{} {\text {rank}}({\textbf{u}})=N+1,\\ 1,&{}{\text {rank}}({\textbf{u}})=N,\\ 0,&{}{\text {rank}}({\textbf{u}})<N. \end{array}\right. } \end{aligned}$$
(3.4)
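Lemma 3.2 is the classical rank formula for the cofactor (adjugate) matrix, with which \({\mathscr {G}}[{\textbf{u}}]\) agrees up to transposition by (3.3). A quick numerical illustration with real matrices (the construction of the rank-deficient examples is ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5  # n = N + 1

def cofactor(u):
    """Cofactor matrix of u; by (3.3) it coincides with G[u]."""
    m = u.shape[0]
    C = np.zeros((m, m))
    for j in range(m):
        for l in range(m):
            minor = np.delete(np.delete(u, j, axis=0), l, axis=1)
            C[j, l] = (-1) ** (j + l) * np.linalg.det(minor)
    return C

full = rng.normal(size=(n, n))                            # generically rank n
defect1 = full.copy()
defect1[:, -1] = full[:, :-1] @ rng.normal(size=n - 1)    # rank n - 1
low = np.outer(rng.normal(size=n), rng.normal(size=n))    # rank 1 < n - 1

assert np.linalg.matrix_rank(cofactor(full)) == n         # full rank case
sv = np.linalg.svd(cofactor(defect1), compute_uv=False)
assert sv[0] > 1e-8 and sv[1] < 1e-8 * sv[0]              # rank 1 case
assert np.abs(cofactor(low)).max() < 1e-10                # zero matrix case
```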

Lemma 3.3

For all \({\textbf{u}}_1,\ldots ,{\textbf{u}}_{N},{\textbf{v}}_1,\ldots ,{\textbf{v}}_{N}\in {\mathbb {C}}^{N+1}\),

$$\begin{aligned}&{\mathscr {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N}, {\mathcal {G}}[{\textbf{v}}_1,\ldots ,{\textbf{v}}_N]]=\left( {\textbf{v}}({\textbf{u}}^T_{(1)}{\textbf{v}})^*,{\mathcal {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N}]\right) , \end{aligned}$$
(3.5a)
$$\begin{aligned}&{\mathscr {G}}[ {\mathcal {G}}[{\textbf{v}}_1,\ldots ,{\textbf{v}}_N],{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N}]=(-1)^N\left( {\mathcal {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N}],{\textbf{v}}({\textbf{u}}^T_{(1)}{\textbf{v}})^*\right) , \end{aligned}$$
(3.5b)

where \({\textbf{v}}=({\textbf{v}}_1,\ldots ,{\textbf{v}}_{N})\), \({\textbf{u}}_{(1)}=({\textbf{u}}_1,\ldots ,{\textbf{u}}_{N})\).

Proof

Firstly, we give the proof of (3.5a). Since

$$\begin{aligned} {\mathscr {G}}_{N+1}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N}, {\mathcal {G}}[{\textbf{v}}_1,\ldots ,{\textbf{v}}_N]]={\mathcal {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N}], \end{aligned}$$
(3.6)

we need to prove the remaining part

$$\begin{aligned} ({\mathscr {G}}_1,\ldots ,{\mathscr {G}}_{N})[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N}, {\mathcal {G}}[{\textbf{v}}_1,\ldots ,{\textbf{v}}_N]]={\textbf{v}}({\textbf{u}}^T_{(1)}{\textbf{v}})^*. \end{aligned}$$
(3.7)

If \({\textbf{v}}_1,\ldots ,{\textbf{v}}_N\) are linearly dependent, the identities \({\mathcal {G}}[{\textbf{v}}_1,\ldots ,{\textbf{v}}_N]={\textbf{0}}\) and \({\textbf{v}}({\textbf{u}}^T_{(1)}{\textbf{v}})^*={\textbf{0}}\) yield (3.7). Otherwise, there exists \({\textbf{v}}_{N+1}\in {\mathbb {C}}^{N+1}\) such that \({\textbf{v}}_{(1)}=({\textbf{v}}_1,\ldots ,{\textbf{v}}_{N+1})\) is nonsingular. According to (3.3), we obtain \({\mathcal {G}}[{\textbf{v}}_1,\ldots ,{\textbf{v}}_N]=({\textbf{v}}^*_{(1)})^T{\textbf{e}}_{N+1}\). Indeed,

$$\begin{aligned}&({\mathscr {G}}_1,\ldots ,{\mathscr {G}}_{N})[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N},{\mathcal {G}}[{\textbf{v}}_1,\ldots ,{\textbf{v}}_N]] \nonumber \\&\quad =({\mathscr {G}}_1,\ldots ,{\mathscr {G}}_{N})[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N}, {\mathcal {G}}[{\textbf{v}}_1,\ldots ,{\textbf{v}}_N]](\epsilon _1,\ldots ,\epsilon _N)^T \nonumber \\&\quad =\sum _{l=1}^N{\mathscr {G}}_l[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N}, {\mathcal {G}}[{\textbf{v}}_1,\ldots ,{\textbf{v}}_N]]\epsilon _l^T\qquad (\text{ by } \text{ Eqs. }\ (3.2),(3.3)) \nonumber \\&\quad =-\sum _{l=1}^N\sum _{j=1}^{N+1}\det \begin{pmatrix} {\textbf{u}}_{(1)}&{}\quad ({\textbf{v}}^*_{(1)})^T{\textbf{e}}_{N+1}&{}\quad {\textbf{e}}_j \nonumber \\ {\epsilon }_l^T&{}0 &{} 0 \end{pmatrix}{\textbf{e}}_j\epsilon _l^T \nonumber \\&\quad =-\sum _{l=1}^N\sum _{j=1}^{N+1}\frac{1}{\det ({\textbf{v}}_{(1)})}\det \begin{pmatrix} {\textbf{v}}_{(1)}^T&{}\quad {\textbf{0}} \nonumber \\ {\textbf{0}}&{}1 \end{pmatrix}\det \begin{pmatrix} {\textbf{u}}_{(1)}&{}\quad ({\textbf{v}}^*_{(1)})^T{\textbf{e}}_{N+1}&{}\quad {\textbf{e}}_j \\ {\epsilon }_l^T&{}0 &{} 0 \end{pmatrix}{\textbf{e}}_j\epsilon _l^T \nonumber \\&\quad =-\sum _{l=1}^N\sum _{j=1}^{N+1}\det \begin{pmatrix} {\textbf{v}}_{(1)}^T{\textbf{u}}_{(1)}&{}\quad {\textbf{e}}_{N+1}&{}\quad {\textbf{v}}_{(1)}^T{\textbf{e}}_j \\ {\epsilon }_l^T&{}0 &{} 0 \end{pmatrix}{\textbf{e}}_j\epsilon _l^T \nonumber \\&\quad =-\sum _{l=1}^N\sum _{j=1}^{N+1}\det \begin{pmatrix} {\textbf{v}}^T{\textbf{u}}_{(1)}&{}\quad {\textbf{v}}^T{\textbf{e}}_j \\ {\epsilon }_l^T&{}0 \end{pmatrix}{\textbf{e}}_j\epsilon _l^T\qquad (\text{ by }\ {\textbf{I}}=\sum _{n=1}^N\epsilon _n\epsilon _n^T) \nonumber \\&\quad =-\sum _{l=1}^N\sum _{j=1}^{N+1}\det \begin{pmatrix} {\textbf{v}}^T{\textbf{u}}_{(1)}&{}\quad \sum _{n=1}^N(\epsilon _n^T{\textbf{v}}^T{\textbf{e}}_j)\epsilon _n \\ {\epsilon }_l^T&{}0 \end{pmatrix}{\textbf{e}}_j\epsilon _l^T \nonumber \\&\quad =-\sum _{l=1}^N\sum 
_{n=1}^{N}\det \begin{pmatrix} {\textbf{v}}^T{\textbf{u}}_{(1)}&{}\quad \epsilon _n \\ {\epsilon }_l^T&{}0 \end{pmatrix}\sum _{j=1}^{N+1}{\textbf{e}}_j({\textbf{e}}_j^T{\textbf{v}}\epsilon _n)\epsilon _l^T \nonumber \\&\quad =-\sum _{l=1}^N\sum _{n=1}^{N}\det \begin{pmatrix} {\textbf{v}}^T{\textbf{u}}_{(1)}&{}\quad \epsilon _n \\ {\epsilon }_l^T&{}0 \end{pmatrix}{\textbf{v}}\epsilon _n\epsilon _l^T\qquad (\text{ by } \text{ Eqs. }\ (3.1), (3.3)) \nonumber \\&\quad ={\textbf{v}}{\mathscr {G}}[{\textbf{v}}^T{\textbf{u}}_{(1)}]={\textbf{v}}({\textbf{u}}_{(1)}^T{\textbf{v}})^*,\nonumber \\ \end{aligned}$$
(3.8)

where \(\{{\epsilon }_1,\ldots ,{{\epsilon }}_{N}\}\) represents the standard basis for \({\mathbb {R}}^{N}\). The identity (3.5b) is proved similarly. \(\square \)
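The identity (3.5a) is purely algebraic and can be checked numerically by implementing both \({\mathscr {G}}[\cdot ]\) from (3.1) and the generalized cross product \({\mathcal {G}}[\cdot ]\) via the cofactors appearing in (3.3); all function names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 3          # number of components; vectors live in C^{N+1}
n = N + 1

def G_script(u):
    """Operator (3.1) on an n x n matrix (entries are bordered determinants)."""
    m = u.shape[0]
    G = np.zeros((m, m), dtype=complex)
    for l in range(m):
        for j in range(m):
            B = np.zeros((m + 1, m + 1), dtype=complex)
            B[:m, :m] = u
            B[j, m] = 1.0  # the column e_j
            B[m, l] = 1.0  # the row e_l^T
            G[j, l] = -np.linalg.det(B)
    return G

def G_cross(v):
    """Generalized cross product of the N columns of the (N+1) x N matrix v."""
    m, k = v.shape  # m = k + 1
    return np.array([(-1) ** (j + k) * np.linalg.det(np.delete(v, j, axis=0))
                     for j in range(m)])

def adj(M):
    """Adjugate of an invertible matrix."""
    return np.linalg.det(M) * np.linalg.inv(M)

u = rng.normal(size=(n, N)) + 1j * rng.normal(size=(n, N))  # u_1, ..., u_N
v = rng.normal(size=(n, N)) + 1j * rng.normal(size=(n, N))  # v_1, ..., v_N

# identity (3.5a): G[u_1, ..., u_N, Gcross(v)] = (v (u^T v)*, Gcross(u))
lhs = G_script(np.column_stack([u, G_cross(v)]))
rhs = np.column_stack([v @ adj(u.T @ v), G_cross(u)])
assert np.allclose(lhs, rhs)
```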

Consequently, the relation (3.3) yields the following repeated cross product identity

$$\begin{aligned} {\mathcal {G}}[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N-1}, {\mathcal {G}}[{\textbf{v}}_1,\ldots ,{\textbf{v}}_N]] =-{\mathscr {G}}_N[{\textbf{u}}_1,\ldots ,{\textbf{u}}_{N}, {\mathcal {G}}[{\textbf{v}}_1,\ldots ,{\textbf{v}}_N]], \end{aligned}$$
(3.9)

which generalizes the triple cross product formula \({\textbf{u}}_1\times ({\textbf{v}}_1\times {\textbf{v}}_2)=({\textbf{u}}_1^T{\textbf{v}}_2){\textbf{v}}_1-({\textbf{u}}_1^T{\textbf{v}}_1){\textbf{v}}_2\) in \({\mathbb {R}}^3\).
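In the \({\mathbb {R}}^3\) case the identity (3.9) reduces to the familiar BAC-CAB rule, which can be confirmed directly:

```python
import numpy as np

rng = np.random.default_rng(3)
u1, v1, v2 = rng.normal(size=(3, 3))  # three random vectors in R^3

# triple cross product formula: u1 x (v1 x v2) = (u1 . v2) v1 - (u1 . v1) v2
lhs = np.cross(u1, np.cross(v1, v2))
rhs = np.dot(u1, v2) * v1 - np.dot(u1, v1) * v2
assert np.allclose(lhs, rhs)
```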

Lemma 3.4

Under the same hypotheses as in Theorem 2.3, the auxiliary eigenfunctions have the following decompositions:

$$\begin{aligned} \chi _1( z){} & {} ={\textbf{s}}_{11}(z)\psi _{-3}( z)-{\textbf{s}}_{13}(z)\psi _{-1}( z),\quad \quad z\in \Gamma _-\backslash \{\textrm{i}q_0\},\nonumber \\{} & {} =\det ( {\textbf{S}}_{22}(z))\psi _{+3}( z)-\psi _{+2}( z){\textbf{S}}_{22}^*(z){\textbf{S}}_{23}(z),\quad \quad z\in {\mathbb {R}},\nonumber \\ \end{aligned}$$
(3.10a)
$$\begin{aligned} \chi _2( z){} & {} ={\textbf{S}}_{11}(z)\psi _{+3}( z)-{\textbf{S}}_{13}(z)\psi _{+1}( z),\quad \quad z\in \Gamma _+\backslash \{-\textrm{i}q_0\},\nonumber \\{} & {} =\det ( {\textbf{s}}_{22}(z))\psi _{-3}( z)-\psi _{-2}( z){\textbf{s}}_{22}^*(z){\textbf{s}}_{23}(z),\quad \quad z\in {\mathbb {R}},\nonumber \\ \end{aligned}$$
(3.10b)
$$\begin{aligned} (-1)^N\chi _3( z){} & {} ={\textbf{S}}_{33}(z)\psi _{+1}( z)-{\textbf{S}}_{31}(z)\psi _{+3}( z),\quad \quad z\in \Gamma _+\backslash \{-\textrm{i}q_0\},\nonumber \\{} & {} =\det ( {\textbf{s}}_{22}(z))\psi _{-1}( z)-\psi _{-2}( z){\textbf{s}}_{22}^*(z){\textbf{s}}_{21}(z),\quad \quad z\in {\mathbb {R}},\nonumber \\ \end{aligned}$$
(3.10c)
$$\begin{aligned} (-1)^N\chi _4( z){} & {} ={\textbf{s}}_{33}(z)\psi _{-1}( z)-{\textbf{s}}_{31}(z)\psi _{-3}( z),\quad \quad z\in \Gamma _-\backslash \{\textrm{i}q_0\},\nonumber \\{} & {} =\det ( {\textbf{S}}_{22}(z))\psi _{+1}( z)-\psi _{+2}( z){\textbf{S}}_{22}^*(z){\textbf{S}}_{21}(z),\qquad z\in {\mathbb {R}},\nonumber \\ \end{aligned}$$
(3.10d)

where the \((x,t)\)-dependence is suppressed for brevity.

Proof

We suppress the \((x,t,z)\)-dependence for simplicity. For \(z\in \Gamma _-\backslash \{\textrm{i}q_0\}\),

$$\begin{aligned} \begin{aligned} \chi _1&=\textrm{e}^{\textrm{i}(N-1)\theta _2}{\mathcal {G}}[\tilde{{\textbf{S}}}_{11}{\tilde{\psi }}_{-1}+{\tilde{\psi }}_{-2}\tilde{{\textbf{S}}}_{21}+\tilde{{\textbf{S}}}_{31}{\tilde{\psi }}_{-3},{\tilde{\psi }}_{-2}]\\&=\textrm{e}^{\textrm{i}(N-1)\theta _2}\tilde{{\textbf{S}}}_{11}{\mathcal {G}}[{\tilde{\psi }}_{-1},{\tilde{\psi }}_{-2}]\\&\quad +\textrm{e}^{\textrm{i}(N-1)\theta _2}\tilde{{\textbf{S}}}_{31}{\mathcal {G}}[{\tilde{\psi }}_{-3},{\tilde{\psi }}_{-2}]\qquad (\text{ by } \text{ Eqs. }\ (2.80), (2.89), (2.90))\\&={\textbf{s}}_{11}\psi _{-3}-{\textbf{s}}_{13}\psi _{-1}. \end{aligned}\nonumber \\ \end{aligned}$$
(3.11)

For \(z\in {\mathbb {R}}\),

$$\begin{aligned} \begin{aligned} \chi _1&=\textrm{e}^{\textrm{i}(N-1)\theta _2}{\mathcal {G}}[{\tilde{\psi }}_{+1},{\tilde{\psi }}_{+1}\tilde{{\textbf{s}}}_{12}+{\tilde{\psi }}_{+2}\tilde{{\textbf{s}}}_{22}+{\tilde{\psi }}_{+3}\tilde{{\textbf{s}}}_{32}]\\&=\textrm{e}^{\textrm{i}(N-1)\theta _2}{\mathcal {G}}[{\tilde{\psi }}_{+1},{\tilde{\psi }}_{+2}\tilde{{\textbf{s}}}_{22}+{\tilde{\psi }}_{+3}\tilde{{\textbf{s}}}_{32}]\qquad \qquad \qquad (\text{ by } \text{ Eqs. }\ (3.2), (3.3))\nonumber \\&=\textrm{e}^{\textrm{i}(N-1)\theta _2}{\mathscr {G}}_{N+1}[{\tilde{\psi }}_{+1},{\tilde{\psi }}_{+2}\tilde{{\textbf{s}}}_{22}+{\tilde{\psi }}_{+3}\tilde{{\textbf{s}}}_{32},{\tilde{\psi }}_{+3}].\nonumber \\ \end{aligned} \end{aligned}$$
(3.12)

Indeed,

$$\begin{aligned}{} & {} {\mathscr {G}}[{\tilde{\psi }}_{+1},{\tilde{\psi }}_{+2}\tilde{{\textbf{s}}}_{22}+{\tilde{\psi }}_{+3}\tilde{{\textbf{s}}}_{32},{\tilde{\psi }}_{+3}]\\{} & {} \quad ={\mathscr {G}}\left[ ({\tilde{\psi }}_{+1},{\tilde{\psi }}_{+2},{\tilde{\psi }}_{+3}) \begin{pmatrix} 1 &{}\quad {\textbf{0}} &{}\quad 0 \\ {\textbf{0}} &{}\quad \tilde{{\textbf{s}}}_{22} &{}\quad {\textbf{0}} \\ 0 &{}\quad \tilde{{\textbf{s}}}_{32} &{}\quad 1 \end{pmatrix}\right] \qquad \qquad (\text{ by } \text{ Eq. }\ (3.3))\nonumber \\{} & {} \quad =\left[ \left( ({\tilde{\psi }}_{+1},{\tilde{\psi }}_{+2},{\tilde{\psi }}_{+3}) \begin{pmatrix} 1 &{}\quad {\textbf{0}} &{}\quad 0 \\ {\textbf{0}} &{}\quad \tilde{{\textbf{s}}}_{22} &{}\quad {\textbf{0}} \\ 0 &{}\quad \tilde{{\textbf{s}}}_{32} &{}\quad 1 \end{pmatrix}\right) ^*\right] ^T\qquad \quad (\text{ by } \text{ Eq. }\ (3.3))\nonumber \\{} & {} \quad ={\mathscr {G}}[{\tilde{\psi }}_{+1},{\tilde{\psi }}_{+2},{\tilde{\psi }}_{+3}]\begin{pmatrix} 1 &{}\quad {\textbf{0}} &{}\quad 0 \\ {\textbf{0}} &{}\quad \tilde{{\textbf{s}}}^T_{22} &{}\quad \tilde{{\textbf{s}}}^T_{32} \\ 0 &{}\quad {\textbf{0}} &{}\quad 1 \end{pmatrix}^*\qquad \qquad (\text{ by } \text{ Eq. }\ (2.80))\nonumber \\{} & {} \quad =\textrm{e}^{-\textrm{i}(N-1)\theta _2}(\psi _{+1},\gamma \psi _{+2},\psi _{+3})\begin{pmatrix} \det (\tilde{{\textbf{s}}}^T_{22}) &{}\quad {\textbf{0}} &{}\quad 0 \\ {\textbf{0}} &{}\quad (\tilde{{\textbf{s}}}^T_{22})^* &{}\quad -(\tilde{{\textbf{s}}}^T_{22})^*\tilde{{\textbf{s}}}^T_{32} \\ 0 &{}\quad {\textbf{0}} &{}\quad \det (\tilde{{\textbf{s}}}^T_{22}) \end{pmatrix} \quad (\text{ by } \text{ Eqs. 
}\ (2.89), (2.90))\nonumber \\{} & {} \quad =\textrm{e}^{-\textrm{i}(N-1)\theta _2}(\psi _{+1},\gamma \psi _{+2},\psi _{+3})\begin{pmatrix} \det ({\textbf{S}}_{22}) &{}\quad {\textbf{0}} &{}\quad 0 \\ {\textbf{0}} &{}\quad {\textbf{S}}_{22}^* &{}\quad -\frac{{\textbf{S}}_{22}^*{\textbf{S}}_{23} }{\gamma } \\ 0 &{}\quad {\textbf{0}} &{}\quad \det ({\textbf{S}}_{22}) \end{pmatrix}. \end{aligned}$$
(3.13)

Therefore,

$$\begin{aligned} \chi _1 =\det ({\textbf{S}}_{22})\psi _{+3}-\psi _{+2}{\textbf{S}}_{22}^*{\textbf{S}}_{23}. \end{aligned}$$
(3.14)

The remainder of Lemma 3.4 is proved similarly. \(\square \)

In the following, we will consider the inverse scattering transformation, which can be characterized in terms of a \(3\times 3\) block matrix RH problem.

3.2 Riemann–Hilbert problem

To remain consistent with the treatment of the focusing Manakov system with NZBCs in Ref. [24], we define the piecewise meromorphic function \({\textbf{M}}(x,t;z)\) as

$$\begin{aligned} \begin{aligned} {\textbf{M}}(x,t;z)&=\left( \frac{\mu _{-1}(x,t;z)}{{\textbf{s}}_{11}(z)},\mu _{+2}(x,t;z),\frac{ {\textbf{m}}_1(x,t;z)}{\det ({\textbf{S}}_{22}(z))}\right) ,\quad z\in D_1,\\ {\textbf{M}}(x,t;z)&=\left( \mu _{+1}(x,t;z),\mu _{-2}(x,t;z){\textbf{s}}_{22}^{-1}(z),\frac{ {\textbf{m}}_2(x,t;z)}{{\textbf{S}}_{11}(z)}\right) ,\quad z\in D_2,\\ {\textbf{M}}(x,t;z)&=\left( \frac{{\textbf{m}}_3(x,t;z)}{{\textbf{S}}_{33}(z)},\mu _{-2}(x,t;z){\textbf{s}}_{22}^{-1}(z),\mu _{+3}(x,t;z)\right) ,\quad z\in D_3,\\ {\textbf{M}}(x,t;z)&=\left( \frac{{\textbf{m}}_4(x,t;z)}{\det ({\textbf{S}}_{22}(z))},\mu _{+2}(x,t;z),\frac{\mu _{-3}(x,t;z)}{{\textbf{s}}_{33}(z)}\right) ,\quad z\in D_4. \end{aligned}\nonumber \\ \end{aligned}$$
(3.15)

By virtue of Lemma 3.3, we can prove the following:

Lemma 3.5

Under the same hypotheses as in Theorem 2.3, for \(z\in {\mathbb {C}}\backslash \Gamma \),

$$\begin{aligned} {\mathscr {G}}[{\textbf{M}}(x,t;z)]=\gamma (z)\tilde{{\textbf{M}}}(x,t;z){\textbf{H}}^{-1}(z), \quad \det ( {\textbf{M}}(x,t;z))=\gamma (z). \end{aligned}$$
(3.16)

Proof

We suppress the \((x,t,z)\)-dependence for simplicity. For \(z\in D_1\),

$$\begin{aligned} \begin{aligned} {\mathscr {G}}[{\textbf{M}}]&={\mathscr {G}}\left[ \frac{\mu _{-1}}{{\textbf{s}}_{11}},\mu _{+2},\frac{ {\textbf{m}}_1}{\det ({\textbf{S}}_{22})}\right] \qquad \qquad (\text{ by } \text{ Eqs. }\ (2.72b), (2.82),(2.83))\\&={\mathscr {G}}\left[ \frac{\mu _{-1}}{{\textbf{s}}_{11}},\mu _{+2}, {\mathcal {G}}\left[ \tilde{\mu }_{+1},\frac{\tilde{\mu }_{-2}}{\det ({\textbf{S}}_{22})}\right] \right] \qquad (\text{ by } \text{ Lemma }\ (3.3))\\&=\left( \left( \tilde{\mu }_{+1},\frac{\tilde{\mu }_{-2}}{\det ({\textbf{S}}_{22})}\right) \left( \begin{pmatrix} \frac{\mu ^T_{-1}}{{\textbf{s}}_{11}} \\ \mu ^T_{+2} \end{pmatrix}\left( \tilde{\mu }_{+1},\frac{\tilde{\mu }_{-2}}{\det ({\textbf{S}}_{22})}\right) \right) ^*,{\mathcal {G}}\left[ \frac{\mu _{-1}}{{\textbf{s}}_{11}},\mu _{+2}\right] \right) \quad \\&\qquad \begin{pmatrix} \text{ by } \text{ Eqs. } (2.83), (2.86),(2.90) \end{pmatrix}\\&=\left( \left( \tilde{\mu }_{+1},\frac{\tilde{\mu }_{-2}}{\det ({\textbf{S}}_{22})}\right) \begin{pmatrix} \gamma &{}\quad {\textbf{0}} \\ {\textbf{0}}&{}\frac{\tilde{{\textbf{s}}}_{22}}{\det ({\textbf{S}}_{22})} \end{pmatrix}^*,\frac{ \tilde{{\textbf{m}}}_2}{\tilde{{\textbf{S}}}_{11}}\right) \\&=\left( \left( \tilde{\mu }_{+1},\frac{\tilde{\mu }_{-2}}{\det ({\textbf{S}}_{22})}\right) \begin{pmatrix} 1&{}\quad {\textbf{0}} \\ {\textbf{0}}&{}\gamma \det ( {\textbf{S}}_{22})\tilde{{\textbf{s}}}_{22}^{-1} \end{pmatrix},\frac{ \tilde{{\textbf{m}}}_2}{\tilde{{\textbf{S}}}_{11}}\right) \\&= \left( \tilde{\mu }_{+1},\gamma \tilde{\mu }_{-2}\tilde{{\textbf{s}}}_{22}^{-1},\frac{ \tilde{{\textbf{m}}}_2}{\tilde{{\textbf{S}}}_{11}}\right) . \end{aligned} \end{aligned}$$
(3.17)

Similarly, we can prove

$$\begin{aligned} {\mathscr {G}}\left[ \mu _{+1},\mu _{-2}{\textbf{s}}_{22}^{-1},\frac{ {\textbf{m}}_2}{{\textbf{S}}_{11}}\right]&=\left( \frac{\tilde{\mu }_{-1}}{\tilde{{\textbf{s}}}_{11}},\gamma \tilde{\mu }_{+2},\frac{ \tilde{{\textbf{m}}}_1}{\det ( \tilde{{\textbf{S}}}_{22})}\right) ,\quad z\in D_2, \end{aligned}$$
(3.18a)
$$\begin{aligned} {\mathscr {G}}\left[ \frac{{\textbf{m}}_3}{{\textbf{S}}_{33}},\mu _{-2}{\textbf{s}}_{22}^{-1},\mu _{+3}\right]&=\left( \frac{\tilde{{\textbf{m}}}_4}{\det (\tilde{{\textbf{S}}}_{22})},\gamma \tilde{\mu }_{+2},\frac{\tilde{\mu }_{-3}}{\tilde{{\textbf{s}}}_{33}}\right) ,\quad z\in D_3, \end{aligned}$$
(3.18b)
$$\begin{aligned} {\mathscr {G}}\left[ \frac{{\textbf{m}}_4}{\det ({\textbf{S}}_{22})},\mu _{+2},\frac{\mu _{-3}}{{\textbf{s}}_{33}}\right]&=\left( \frac{\tilde{{\textbf{m}}}_3}{\tilde{{\textbf{S}}}_{33}},\gamma \tilde{\mu }_{-2}\tilde{{\textbf{s}}}_{22}^{-1},\tilde{\mu }_{+3}\right) ,\quad z\in D_4, \end{aligned}$$
(3.18c)

which means

$$\begin{aligned} {\mathscr {G}}[{\textbf{M}}(x,t;z)]=\gamma (z)\tilde{{\textbf{M}}}(x,t;z){\textbf{H}}^{-1}(z). \end{aligned}$$
(3.19)

For \(z\in D_1\), the (1, 1)-entry of \({\textbf{M}}^T(x,t;z){\mathscr {G}}[{\textbf{M}}(x,t;z)]=\det ({\textbf{M}}(x,t;z)){\textbf{I}}\) implies

$$\begin{aligned} \det ({\textbf{M}}(x,t;z))=\frac{1}{{\textbf{s}}_{11}(z)}\mu _{-1}^T(x,t;z)\tilde{\mu }_{+1}(x,t;z)=\gamma (z). \end{aligned}$$
(3.20)

Similarly, \(\det ({\textbf{M}}(x,t;z))=\gamma (z)\) holds for \(z\in D_2,D_3,D_4\). \(\square \)

The above lemma, together with the relation (3.3) and the symmetries (2.90), (2.93), (2.94) and (2.96), implies that the piecewise meromorphic function \({\textbf{M}}(x,t;z)\) satisfies two symmetries:

Lemma 3.6

Under the same hypotheses as in Theorem 2.3, for \(z\in {\mathbb {C}}\backslash \Gamma \),

$$\begin{aligned}{} & {} {\textbf{M}}(x,t;z)=({\textbf{M}}^\dag (x,t;{\bar{z}}))^{-1}{\textbf{H}}(z), \end{aligned}$$
(3.21a)
$$\begin{aligned}{} & {} {\textbf{M}}(x,t;z)={\textbf{M}}(x,t;{\hat{z}}){\varvec{\Pi }}_+(z). \end{aligned}$$
(3.21b)

Set

$$\begin{aligned} {\textbf{M}}_{\pm }(x,t;z)=\overset{\angle }{\lim _{\begin{array}{c} z'\rightarrow z\\ z'\in D_{\pm } \end{array}}}{\textbf{M}}(x,t;z'),\quad z\in \Gamma , \end{aligned}$$
(3.22)

where \(\overset{\angle }{\lim }\) denotes the non-tangential limit, \(D_+=D_1\cup D_3\) and \(D_-=D_2\cup D_4\). From Theorem 2.3, Lemma 2.15 and Corollaries 2.20 and 2.21, it follows that \({\textbf{M}}_{\pm }(x,t;z)\) is well defined for \(z\in \Gamma \).

Furthermore, \({\textbf{M}}(x,t;z)\) satisfies the growth conditions near the branch points \(\pm \textrm{i}q_0\):

Lemma 3.7

Under the same hypotheses as in Theorem 2.3,

$$\begin{aligned} \begin{aligned}&{\textbf{M}}_{1}(x,t;z)=O(z-\textrm{i}q_0),\qquad z\in D_1\rightarrow \textrm{i}q_0,\\&{\textbf{M}}_{3}(x,t;z)=O(z-\textrm{i}q_0),\qquad z\in D_4\rightarrow \textrm{i}q_0,\\&{\textbf{M}}_{1}(x,t;z)=O(z+\textrm{i}q_0),\qquad z\in D_3\rightarrow -\textrm{i}q_0,\\&{\textbf{M}}_{3}(x,t;z)=O(z+\textrm{i}q_0),\qquad z\in D_2\rightarrow -\textrm{i}q_0. \end{aligned} \end{aligned}$$
(3.23)

The piecewise meromorphic function \({\textbf{M}}(x,t;z)\) has the following jump across the oriented contour \(\Gamma \):

Lemma 3.8

Under the same hypotheses as in Theorem 2.3,

$$\begin{aligned} {\textbf{M}}_+(x,t;z)={\textbf{M}}_-(x,t;z){\textbf{J}}(x,t;z),\quad z\in \Gamma , \end{aligned}$$
(3.24)

with

$$\begin{aligned} {\textbf{J}}(x,t;z)=\textrm{e}^{\textrm{i}{\varvec{\Theta }}(x,t;z)}{\textbf{J}}_l(z)\textrm{e}^{-\textrm{i}{\varvec{\Theta }}(x,t;z)}, \quad z\in \Gamma _l,\quad l=1,\ldots ,4, \end{aligned}$$
(3.25)
$$\begin{aligned}{} & {} {\textbf{J}}_1(z)=\begin{pmatrix} 1+\frac{\rho _1^\dag ({\bar{z}})\rho _1(z)}{\gamma (z)}+\tilde{\rho }_2(z)\rho _2(z)&{}\quad \frac{\rho _1^\dag ({\bar{z}})}{\gamma (z)}&{}\quad \tilde{\rho }_2(z)-\rho _1^\dag ({\bar{z}})\rho _3^\dag ({\bar{z}})\\ \rho _1(z)&{}{\textbf{I}}&{}-\gamma (z)\rho _3^\dag ({\bar{z}})\\ \rho _2(z)-\rho _3(z)\rho _1(z)&{}-\rho _3(z)&{}1+\gamma (z)\rho _3(z)\rho _3^\dag ({\bar{z}}) \end{pmatrix}, \nonumber \\\end{aligned}$$
(3.26a)
$$\begin{aligned}{} & {} {\textbf{J}}_2(z)=\begin{pmatrix} 1-\frac{q_+}{\bar{q}_+}\tilde{\rho }_2(z)\tilde{\rho }_2(\hat{z})&{}\textbf{0}&{}\tilde{\rho }_2(z)\\ \textbf{0}&{}\textbf{I}&{}\textbf{0}\\ -\frac{q_+}{\bar{q}_+}\tilde{\rho }_2(\hat{z})&{}\textbf{0}&{}1 \end{pmatrix}, \end{aligned}$$
(3.26b)
$$\begin{aligned}{} & {} {\textbf{J}}_3(z)={\varvec{\Pi }}^{-1}_+(z){\textbf{J}}_1^{-1}({\hat{z}}){\varvec{\Pi }}_+(z), \end{aligned}$$
(3.26c)
$$\begin{aligned}{} & {} {\textbf{J}}_4(z)=\begin{pmatrix} 1-\frac{\bar{q}_+}{ q_+}\rho _2(z)\rho _2(\hat{z})&{}\textbf{0}&{}-\frac{\bar{q}_+}{ q_+}\rho _2(\hat{z})\\ \textbf{0}&{}\textbf{I}&{}\textbf{0}\\ \rho _2( z)&{}\textbf{0}&{}1 \end{pmatrix}, \end{aligned}$$
(3.26d)

where the oriented contours \(\Gamma _l\ (l=1,\ldots ,4)\) are depicted in Fig. 1, and \(\rho _1(z)\), \(\rho _2(z)\) and \(\rho _3(z)\) are defined in (2.98). Moreover, the jump matrix satisfies \({\textbf{J}}(x,t;\cdot )\in C^1(\Gamma )\) and tends to the identity matrix both as \(z\rightarrow \infty \) and as \(z\rightarrow 0\).

Proof

For \(z\in \Gamma _1\), denote \({\textbf{J}}_1(z)=\begin{pmatrix} \textbf{J}_{11}(z) &{} \textbf{J}_{12}(z)&{} \textbf{J}_{13}(z) \\ \textbf{J}_{21}(z) &{} \textbf{J}_{22}(z)&{} \textbf{J}_{23}(z) \\ \textbf{J}_{31}(z) &{} \textbf{J}_{32}(z)&{} \textbf{J}_{33}(z) \end{pmatrix}\). The jump (3.24) implies that

$$\begin{aligned}&\frac{\psi _{-1}}{{\textbf{s}}_{11}}=\psi _{+1}{\textbf{J}}_{11} +\psi _{-2}{\textbf{s}}_{22}^{-1}{\textbf{J}}_{21} +\frac{\chi _2}{{\textbf{S}}_{11}}{\textbf{J}}_{31}, \end{aligned}$$
(3.27a)
$$\begin{aligned}&\psi _{+2}=\psi _{+1}{\textbf{J}}_{12} +\psi _{-2}{\textbf{s}}_{22}^{-1}{\textbf{J}}_{22} +\frac{\chi _2}{{\textbf{S}}_{11}}{\textbf{J}}_{32},\nonumber \\ \end{aligned}$$
(3.27b)
$$\begin{aligned}&\frac{\chi _1}{\det ({\textbf{S}}_{22})}=\psi _{+1}{\textbf{J}}_{13} +\psi _{-2}{\textbf{s}}_{22}^{-1}{\textbf{J}}_{23} +\frac{\chi _2}{{\textbf{S}}_{11}}{\textbf{J}}_{33}. \end{aligned}$$
(3.27c)

Substituting (2.63), (2.67), (3.10a) and (3.10b) into (3.27), we find

$$\begin{aligned}&\frac{\psi _{-1}}{{\textbf{s}}_{11}}=(\psi _{-1}{\textbf{S}}_{11}+\psi _{-2}{\textbf{S}}_{21}+\psi _{-3}{\textbf{S}}_{31}){\textbf{J}}_{11} +\psi _{-2}{\textbf{s}}_{22}^{-1}{\textbf{J}}_{21} +\frac{\det ({\textbf{s}}_{22})\psi _{-3}-\psi _{-2}{\textbf{s}}_{22}^*{\textbf{s}}_{23}}{{\textbf{S}}_{11}}{\textbf{J}}_{31}, \end{aligned}$$
(3.28a)
$$\begin{aligned}&\psi _{+2}=\psi _{+1}{\textbf{J}}_{12} +(\psi _{+1}{\textbf{s}}_{12}+\psi _{+2}{\textbf{s}}_{22}+\psi _{+3}{\textbf{s}}_{32}){\textbf{s}}_{22}^{-1}{\textbf{J}}_{22} +\frac{{\textbf{S}}_{11}\psi _{+3}-{\textbf{S}}_{13}\psi _{+1}}{{\textbf{S}}_{11}}{\textbf{J}}_{32}, \end{aligned}$$
(3.28b)
$$\begin{aligned}&\psi _{+3}-\psi _{+2}{\textbf{S}}_{22}^{-1}{\textbf{S}}_{23}=\psi _{+1}{\textbf{J}}_{13} +(\psi _{+1}{\textbf{s}}_{12}+\psi _{+2}{\textbf{s}}_{22}+\psi _{+3}{\textbf{s}}_{32}){\textbf{s}}_{22}^{-1}{\textbf{J}}_{23} +\frac{{\textbf{S}}_{11}\psi _{+3}-{\textbf{S}}_{13}\psi _{+1}}{{\textbf{S}}_{11}}{\textbf{J}}_{33}. \end{aligned}$$
(3.28c)

Since \(\det (\psi _\pm )\ne 0\) for \(z\in \Gamma _1\), all columns of \(\psi _{\pm }\) are linearly independent. Comparing the coefficients yields

$$\begin{aligned} {\textbf{J}}_1=\begin{pmatrix} \frac{1}{{\textbf{S}}_{11}{\textbf{s}}_{11}}&{}\quad -\frac{{\textbf{S}}_{11}{\textbf{s}}_{12}+{\textbf{S}}_{13}{\textbf{s}}_{32}}{{\textbf{S}}_{11}}{\textbf{s}}_{22}^{-1}&{}\quad \frac{{\textbf{S}}_{13}}{{\textbf{S}}_{11}}+\frac{{\textbf{S}}_{11}{\textbf{s}}_{12}+{\textbf{S}}_{13}{\textbf{s}}_{32}}{{\textbf{S}}_{11}}{\textbf{s}}_{22}^{-1}{\textbf{S}}_{22}^{-1}{\textbf{S}}_{23}\\ -\frac{{\textbf{s}}_{22}{\textbf{S}}_{21}+{\textbf{s}}_{23}{\textbf{S}}_{31}}{{\textbf{s}}_{11}{\textbf{S}}_{11}}&{}{\textbf{I}}&{}-{\textbf{S}}_{22}^{-1}{\textbf{S}}_{23}\\ -\frac{{\textbf{S}}_{31}}{{\textbf{s}}_{11}\det ({\textbf{s}}_{22})} &{}-{\textbf{s}}_{32}{\textbf{s}}_{22}^{-1}&{}1+{\textbf{s}}_{32}{\textbf{s}}_{22}^{-1}{\textbf{S}}_{22}^{-1}{\textbf{S}}_{23} \end{pmatrix}.\nonumber \\ \end{aligned}$$
(3.29)

Recalling (2.98), (2.99) and the fact that \({\textbf{s}}(z){\textbf{S}}(z)={\textbf{S}}(z){\textbf{s}}(z)={\textbf{I}}\) for \(z\in \Gamma _1\), we readily derive (3.26a). Equations (3.26b)-(3.26d) can be obtained in a similar way, so we omit the remaining proofs. The statement that \({\textbf{J}}(x,t;\cdot )\in C^1(\Gamma )\) and that the jump matrix tends to the identity matrix both as \(z\rightarrow \infty \) and as \(z\rightarrow 0\) follows from Corollaries 2.18 and 2.26. \(\square \)

Remark 3.9

For \(z\in \Gamma \), note that

$$\begin{aligned} {\textbf{J}}(x,t;z)={\textbf{H}}^{-1}(z){\textbf{J}}^\dag (x,t;{\bar{z}}){\textbf{H}}(z),\quad {\textbf{J}}(x,t;z)={\varvec{\Pi }}^{-1}_+(z){\textbf{J}}^{-1}(x,t;{\hat{z}}){\varvec{\Pi }}_+(z),\nonumber \\ \end{aligned}$$
(3.30)

which coincide exactly with the symmetries (3.21a) and (3.21b) of \({\textbf{M}}(x,t;z)\). These symmetries play a key role in ensuring the existence and uniqueness of the solution of the RH problem and the correctness of the reconstruction formula.

Substituting the asymptotics (2.108)−(2.25) of the eigenfunctions and the scattering coefficients into (3.15), we obtain

Lemma 3.10

Under the same hypotheses as in Lemma 2.22, the piecewise meromorphic function \({\textbf{M}}(x,t;z)\) has the following asymptotic behaviors for \(z\in {\mathbb {C}}\backslash \Gamma \):

$$\begin{aligned}{} & {} {\textbf{M}}(x,t;z)={\textbf{I}}+O(z^{-1}),\qquad z\rightarrow \infty , \end{aligned}$$
(3.31a)
$$\begin{aligned}{} & {} {\textbf{M}}(x,t;z)=({\textbf{I}}+O(z)){\varvec{\Pi }}_+(z),\qquad z\rightarrow 0. \end{aligned}$$
(3.31b)

It follows from (2.90) and (2.96) that

Lemma 3.11

Under the same hypotheses as in Theorem 2.3, if \(z_0\in D_1\), then

$$\begin{aligned}{} & {} {\textbf{s}}_{11}(z_0)=0\Longleftrightarrow {\textbf{S}}_{11}({\bar{z}}_0)=0\Longleftrightarrow {\textbf{S}}_{33}(\hat{{\bar{z}}}_0)=0\Longleftrightarrow {\textbf{s}}_{33}({\hat{z}}_0)=0, \end{aligned}$$
(3.32a)
$$\begin{aligned}{} & {} \det ({\textbf{S}}_{22}(z_0))=0\Longleftrightarrow \det ({\textbf{s}}_{22}({\bar{z}}_0))=0\Longleftrightarrow \det ( {\textbf{s}}_{22}(\hat{{\bar{z}}}_0))=0\Longleftrightarrow \det ({\textbf{S}}_{22}({\hat{z}}_0))=0.\nonumber \\ \end{aligned}$$
(3.32b)

The definition (3.15) implies that \({\textbf{M}}_1(x,t;z)\) can only have singularities at the zeros of \({\textbf{s}}_{11}(z)\) and \(\det ({\textbf{S}}_{22}(z))\). Similarly, \({\textbf{M}}_2(x,t;z)\) can only have singularities at the zeros of \({\textbf{S}}_{11}(z)\) and \(\det ({\textbf{s}}_{22}(z))\), etc. Lemma 3.11 implies that these zeros appear in symmetric quartets: \(z_0\), \({\bar{z}}_0\), \({\hat{z}}_0\), \(\hat{{\bar{z}}}_0\). Therefore, we only need to study the zeros of \({\textbf{s}}_{11}(z)\) and \(\det ({\textbf{S}}_{22}(z))\) for \(z\in D_1\).

Assumption 3.12

Assume that \({\textbf{s}}_{11}(z)\) has \(N_1+N_2\) simple zeros \(\{w_l\}_1^{N_1}\) and \(\{y_m\}_1^{N_2}\) in \(D_1\), and that \(\det ({\textbf{S}}_{22}(z))\) has \(N_2+N_3\) simple zeros \(\{y_m\}_1^{N_2}\) and \(\{z_n\}_1^{N_3}\) in \(D_1\), where \(\{w_l\}_1^{N_1}\cap \{z_n\}_1^{N_3}=\emptyset \) and \(\{y_m\}_1^{N_2}\) are the common zeros of \({\textbf{s}}_{11}(z)\) and \(\det ({\textbf{S}}_{22}(z))\). Assume further that none of these zeros lies on \(\Gamma \).

Lemma 3.13

Under the same hypotheses as in Theorem 2.3, if w is a simple pole of \({\textbf{M}}(x,t;z)\), then

$$\begin{aligned} \underset{{\hat{w}}}{{\text {Res}}} {{\textbf{M}}}(x,t;z)=\frac{q_0^2}{w^2}\left( \underset{w}{{\text {Res}}} {{\textbf{M}}}(x,t;z)\right) {\varvec{\Pi }}_+({\hat{w}}). \end{aligned}$$
(3.33)

Furthermore, if there exists a matrix \({\textbf{A}}\) such that \(\underset{ w}{{\text {Res}}} {{\textbf{M}}}(x,t;z)={{\textbf{M}}}(x,t;w){\textbf{A}}\), then

$$\begin{aligned} \underset{ {\bar{w}}}{{\text {Res}}} {{\textbf{M}}}(x,t;z)=-{{\textbf{M}}}(x,t;{\bar{w}}){\textbf{H}}^{-1}({\bar{w}}){\textbf{A}}^\dag {\textbf{H}}({\bar{w}}). \end{aligned}$$
(3.34)

Proof

Since the pole is simple,

$$\begin{aligned} \begin{aligned} \underset{{\hat{w}}}{{\text {Res}}} {{\textbf{M}}}(x,t;z)&=\left. {[{\textbf{M}}}(x,t;z)(z-{\hat{w}})]\right| _{z={\hat{w}}}\qquad (\text{ by } \text{ Eq. }\ (3.21b))\\&=\left. [{{\textbf{M}}}(x,t;{\hat{z}}){\varvec{\Pi }}_+(z)(z-{\hat{w}})]\right| _{{\hat{z}}= w}\\&=\left. \left[ -\frac{z}{w} {{\textbf{M}}}(x,t;{\hat{z}}){\varvec{\Pi }}_+(z)({\hat{z}}-w)\right] \right| _{{\hat{z}}= w}\\&=\frac{q_0^2}{w^2} \left( \underset{w}{{\text {Res}}} {{\textbf{M}}}(x,t;z)\right) {\varvec{\Pi }}_+({\hat{w}}). \end{aligned}\nonumber \\ \end{aligned}$$
(3.35)
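The third line of (3.35) uses the involution \({\hat{z}}=-q_0^2/z\) (the map underlying the factor \(q_0^2/w^2\) in (3.33)), which gives the elementary identity

$$\begin{aligned} z-{\hat{w}}=z+\frac{q_0^2}{w}=\frac{wz+q_0^2}{w}=-\frac{z}{w}\left( -\frac{q_0^2}{z}-w\right) =-\frac{z}{w}({\hat{z}}-w), \end{aligned}$$

and evaluating the prefactor \(-z/w\) at \(z={\hat{w}}=-q_0^2/w\) produces the coefficient \(q_0^2/w^2\) in the last line.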

Suppose

$$\begin{aligned}&{\textbf{M}}(x,t;z)=\frac{{\textbf{M}}_{-1}(x,t)}{z-{\bar{w}}}+{\textbf{M}}_0(x,t)+O(z-{\bar{w}}), \quad z\rightarrow {\bar{w}}, \end{aligned}$$
(3.36a)
$$\begin{aligned}&{\textbf{M}}^\dag (x,t;{\bar{z}})=\frac{\check{{\textbf{M}}}_{-1}(x,t)}{z-{\bar{w}}}+\check{{\textbf{M}}}_0(x,t)+O(z-{\bar{w}}), \quad z\rightarrow {\bar{w}}.\nonumber \\ \end{aligned}$$
(3.36b)

Then

$$\begin{aligned} \begin{aligned} \check{{\textbf{M}}}_{-1}(x,t)&=[(z-{\bar{w}}){\textbf{M}}^\dag (x,t;{\bar{z}})]\vert _{z={\bar{w}}}=\left[ ({\bar{z}}-w){\textbf{M}}(x,t;{\bar{z}})\right] ^\dag \vert _{{\bar{z}}=w}\\&= \left( \underset{ w}{{\text {Res}}}\ {{\textbf{M}}}(x,t;z)\right) ^\dag ={\textbf{A}}^\dag {\textbf{M}}^\dag (x,t;w). \end{aligned} \end{aligned}$$
(3.37)

The symmetry (3.21a) implies that

$$\begin{aligned}&{\textbf{M}}_{-1}(x,t){\textbf{H}}^{-1}(z)\check{{\textbf{M}}}_{-1}(x,t)={\textbf{0}}, \end{aligned}$$
(3.38)
$$\begin{aligned}&{\textbf{M}}_0(x,t){\textbf{H}}^{-1}(z)\check{{\textbf{M}}}_{-1}(x,t)+{\textbf{M}}_{-1}(x,t){\textbf{H}}^{-1}(z)\check{{\textbf{M}}}_0(x,t)={\textbf{0}}. \end{aligned}$$
(3.39)

Substituting (3.36) into (3.39), we obtain

$$\begin{aligned} \begin{aligned} {\textbf{0}}&=\left( {\textbf{M}}(x,t;z)-\frac{{\textbf{M}}_{-1}(x,t)}{z-{\bar{w}}}+O(z-{\bar{w}})\right) {\textbf{H}}^{-1}(z)\check{{\textbf{M}}}_{-1}(x,t)\\&\quad +{\textbf{M}}_{-1}(x,t){\textbf{H}}^{-1}(z)\left( {\textbf{M}}^\dag (x,t;{\bar{z}})-\frac{\check{{\textbf{M}}}_{-1}(x,t)}{z-{\bar{w}}}+O(z-{\bar{w}})\right) \quad (\text{ by }\ (3.38))\\&={\textbf{M}}(x,t;z){\textbf{H}}^{-1}(z)\check{{\textbf{M}}}_{-1}(x,t)+{\textbf{M}}_{-1}(x,t){\textbf{H}}^{-1}(z){\textbf{M}}^\dag (x,t;{\bar{z}})+O(z-{\bar{w}}). \end{aligned}\nonumber \\ \end{aligned}$$
(3.40)

Setting \(z={\bar{w}}\), combining (3.21a) with (3.37), we obtain

$$\begin{aligned} \underset{{\bar{w}}}{{\text {Res}}} {{\textbf{M}}}(x,t;z)={\textbf{M}}_{-1}(x,t)=-{\textbf{M}}(x,t;{\bar{w}}){\textbf{H}}^{-1}({\bar{w}}){\textbf{A}}^\dag {\textbf{H}}({\bar{w}}). \end{aligned}$$
(3.41)

\(\square \)

Lemma 3.14

Suppose \({\textbf{A}}(z)\) is an \(s\times s\) matrix and w is a simple zero of \(\det ({\textbf{A}}(z))\); then

$$\begin{aligned} {\text {rank}}({\textbf{A}}(w))=s-1, \quad {\text {rank}}({\textbf{A}}^*(w))=1. \end{aligned}$$
(3.42)

Proof

The lemma follows trivially from the fact

$$\begin{aligned} {\dot{\det }}( {\textbf{A}}(w))=\textrm{trace}(\dot{{\textbf{A}}}(w){\textbf{A}}^*(w)), \end{aligned}$$
(3.43)

where “\(\cdot {}\)" denotes differentiation with respect to z. Indeed, if \({\text {rank}}({\textbf{A}}(w))\leqslant s-2\), then every \((s-1)\times (s-1)\) minor of \({\textbf{A}}(w)\) vanishes, so \({\textbf{A}}^*(w)={\textbf{0}}\) and hence \({\dot{\det }}( {\textbf{A}}(w))=0\), contradicting the simplicity of the zero; thus \({\text {rank}}({\textbf{A}}(w))=s-1\). Moreover, \({\textbf{A}}(w){\textbf{A}}^*(w)=\det ({\textbf{A}}(w)){\textbf{I}}={\textbf{0}}\), so the columns of \({\textbf{A}}^*(w)\) lie in the one-dimensional kernel of \({\textbf{A}}(w)\), whence \({\text {rank}}({\textbf{A}}^*(w))=1\). \(\square \)
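As a simple illustration with \(s=2\), take \({\textbf{A}}(z)={\text {diag}}(z-w,1)\), so that \(\det ({\textbf{A}}(z))=z-w\) has a simple zero at w. Then

$$\begin{aligned} {\textbf{A}}(w)=\begin{pmatrix} 0&{}\quad 0\\ 0&{}\quad 1 \end{pmatrix},\qquad {\textbf{A}}^*(w)=\begin{pmatrix} 1&{}\quad 0\\ 0&{}\quad 0 \end{pmatrix}, \end{aligned}$$

so \({\text {rank}}({\textbf{A}}(w))=s-1=1\) and \({\text {rank}}({\textbf{A}}^*(w))=1\), in agreement with (3.42).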

Lemma 3.15

Under the same hypotheses as in Theorem 2.3, and with the singularities as in Assumption 3.12, the following residue conditions hold:

$$\begin{aligned}&\underset{w_l}{{\text {Res}}} {{\textbf{M}}}(x,t;z)=(a_l\textrm{e}^{-2\textrm{i}\theta _1(x,t;w_l)}{{\textbf{M}}}_3(x,t;w_l),{\textbf{0}},{\textbf{0}}), \end{aligned}$$
(3.44a)
$$\begin{aligned}&\underset{{\bar{w}}_l}{{\text {Res}}} {{\textbf{M}}}(x,t;z)=({\textbf{0}},{\textbf{0}},-{\bar{a}}_l\textrm{e}^{2\textrm{i}\theta _1(x,t;{\bar{w}}_l)} {{\textbf{M}}}_1(x,t;{\bar{w}}_l)), \end{aligned}$$
(3.44b)
$$\begin{aligned}&\underset{\hat{{\bar{w}}}_l}{{\text {Res}}} {{\textbf{M}}}(x,t;z)=\bigg (-\frac{ q_+^2 }{{\bar{w}}_l^2 }{\bar{a}}_l\textrm{e}^{2\textrm{i}\theta _1(x,t;{\bar{w}}_l)} {{\textbf{M}}}_3(x,t;\hat{{\bar{w}}}_l),{\textbf{0}},{\textbf{0}}\bigg ), \end{aligned}$$
(3.44c)
$$\begin{aligned}&\underset{{\hat{w}}_l}{{\text {Res}}} {{\textbf{M}}}(x,t;z)=\bigg ({\textbf{0}},{\textbf{0}},\frac{{\bar{q}}_+^2 }{w_l^2 } a_l\textrm{e}^{-2\textrm{i}\theta _1(x,t;w_l)}{{\textbf{M}}}_1(x,t;{\hat{w}}_l)\bigg ), \end{aligned}$$
(3.44d)
$$\begin{aligned}&\underset{y_m}{{\text {Res}}}\ {{\textbf{M}}}(x,t;z)=\bigg (\textrm{e}^{\textrm{i}(\theta _2-\theta _1)(x,t;y_m)}{{\textbf{M}}}_2(x,t;y_m){\textbf{B}}_m,{\textbf{0}},{\textbf{0}}\bigg ), \end{aligned}$$
(3.45a)
$$\begin{aligned}&\underset{{\bar{y}}_m}{{\text {Res}}} {{\textbf{M}}}(x,t;z)=\bigg ({\textbf{0}},-\frac{\textrm{e}^{\textrm{i}(\theta _1-\theta _2)(x,t;{\bar{y}}_m)}}{\gamma ({\bar{y}}_m)}{{\textbf{M}}}_1(x,t;{\bar{y}}_m){\textbf{B}}_m^\dag ,{\textbf{0}}\bigg ), \end{aligned}$$
(3.45b)
$$\begin{aligned}&\underset{\hat{{\bar{y}}}_m}{{\text {Res}}} {{\textbf{M}}}(x,t;z)=\bigg ({\textbf{0}},-\frac{\textrm{i}q_+\textrm{e}^{\textrm{i}(\theta _1-\theta _2)(x,t;{\bar{y}}_m)}}{{\bar{y}}_m\gamma (\hat{{\bar{y}}}_m)}{{\textbf{M}}}_3(x,t;\hat{{\bar{y}}}_m){\textbf{B}}_m^\dag ,{\textbf{0}}\bigg ), \end{aligned}$$
(3.45c)
$$\begin{aligned}&\underset{{\hat{y}}_m}{{\text {Res}}} {{\textbf{M}}}(x,t;z)=\bigg ({\textbf{0}},{\textbf{0}},-\frac{\textrm{i}{\bar{q}}_+ \textrm{e}^{\textrm{i}(\theta _2-\theta _1)(x,t;y_m)}}{y_m}{{\textbf{M}}}_2(x,t;{\hat{y}}_m){\textbf{B}}_m\bigg ), \end{aligned}$$
(3.45d)
$$\begin{aligned}&\underset{z_n}{{\text {Res}}}{{\textbf{M}}}(x,t;z)=\bigg ({\textbf{0}},{\textbf{0}},\textrm{e}^{\textrm{i}(\theta _1+\theta _2)(x,t;z_n)}{{\textbf{M}}}_2(x,t;z_n){\textbf{C}}_n\bigg ), \end{aligned}$$
(3.46a)
$$\begin{aligned}&\underset{{\bar{z}}_n}{{\text {Res}}}{{\textbf{M}}}(x,t;z)=\bigg ({\textbf{0}},-\frac{\textrm{e}^{-\textrm{i}(\theta _1+\theta _2)(x,t;{\bar{z}}_n)}}{\gamma ({\bar{z}}_n)}{{\textbf{M}}}_3(x,t;{\bar{z}}_n){\textbf{C}}_n^\dag ,{\textbf{0}}\bigg ), \end{aligned}$$
(3.46b)
$$\begin{aligned}&\underset{\hat{{\bar{z}}}_n}{{\text {Res}}}{{\textbf{M}}}(x,t;z)=\bigg ({\textbf{0}},-\frac{\textrm{i}{\bar{q}}_+\textrm{e}^{-\textrm{i}(\theta _1+\theta _2)(x,t;{\bar{z}}_n)}}{{\bar{z}}_n\gamma (\hat{{\bar{z}}}_n)}{{\textbf{M}}}_1(x,t;\hat{{\bar{z}}}_n){\textbf{C}}_n^\dag ,{\textbf{0}}\bigg ), \end{aligned}$$
(3.46c)
$$\begin{aligned}&\underset{{\hat{z}}_n}{{\text {Res}}}{{\textbf{M}}}(x,t;z)=\bigg (-\frac{\textrm{i}q_+\textrm{e}^{\textrm{i}(\theta _1+\theta _2)(x,t;z_n)}}{z_n}{{\textbf{M}}}_2(x,t;{\hat{z}}_n){\textbf{C}}_n,{\textbf{0}},{\textbf{0}}\bigg ), \end{aligned}$$
(3.46d)

where \({\textbf{M}}(x,t;z)=({\textbf{M}}_1(x,t;z),{\textbf{M}}_2(x,t;z),{\textbf{M}}_3(x,t;z))\), the \(a_l\) are constants, \({\textbf{B}}_m\) and \({\textbf{C}}_n\) are \((N-1)\)-dimensional constant vectors, and \(l=1,\ldots , N_1\), \(m=1,\ldots , N_2\), \(n=1,\ldots , N_3\).

Proof

By virtue of Lemma 3.13 and the symmetry (3.21b), we only need to prove (3.44a), (3.45a) and (3.46a). In the following proofs, we omit the \((x,t,z)\)-dependence for brevity.

At \(z=w_l\), it follows from (3.15) and (3.16) that

$$\begin{aligned}&{\mathscr {G}}(\tilde{\mu }_{+1},\tilde{\mu }_{-2}\tilde{{\textbf{s}}}^{-1}_{22},\tilde{{\textbf{m}}}_2)=\left( \mu _{-1},\gamma \mu _{+2}{\textbf{s}}_{11},\frac{{\textbf{m}}_1}{\det ({\textbf{S}}_{22})}\right) =\left( \mu _{-1},{\textbf{0}},\frac{{\textbf{m}}_1}{\det ({\textbf{S}}_{22})}\right) , \end{aligned}$$
(3.47)
$$\begin{aligned}&\tilde{\mu }_{+1}=(-1)^N{\mathcal {G}}\left[ \mu _{+2},\frac{{\textbf{m}}_1}{\det ({\textbf{S}}_{22})}\right] , \quad \det (\tilde{\mu }_{+1},\tilde{\mu }_{-2}\tilde{{\textbf{s}}}^{-1}_{22},\tilde{{\textbf{m}}}_2)=\gamma \tilde{{\textbf{S}}}_{11}=0. \end{aligned}$$
(3.48)

Consequently, \({\text {rank}}(\tilde{\mu }_{+1},\tilde{\mu }_{-2}\tilde{{\textbf{s}}}^{-1}_{22},\tilde{{\textbf{m}}}_2)\leqslant N\) and \(\frac{{\textbf{m}}_1}{\det ({\textbf{S}}_{22})}\ne {\textbf{0}}\) (otherwise \(\tilde{\mu }_{+1}={\textbf{0}}\), a contradiction). Combining with Lemma 3.2 yields \({\text {rank}}\left( \mu _{-1},{\textbf{0}},\frac{{\textbf{m}}_1}{\det ({\textbf{S}}_{22})}\right) =1\), i.e., there exists a constant \(b_l\) independent of \((x,t,z)\) such that \(\mu _{-1}=b_l\textrm{e}^{-2\textrm{i}\theta _1}\frac{{\textbf{m}}_1}{\det ({\textbf{S}}_{22})}\). Setting \(a_l=\frac{b_l}{\dot{{\textbf{s}}}_{11}}\), we complete the proof of (3.44a).

At \(z=y_m\), it follows from (3.15) and (3.16) that

$$\begin{aligned}&{\mathscr {G}}\left( \tilde{\mu }_{+1},\tilde{\mu }_{-2},\frac{\tilde{{\textbf{m}}}_2}{\tilde{{\textbf{S}}}_{11}}\right) =\left( \frac{\det ({\textbf{S}}_{22})\mu _{-1}}{{\textbf{s}}_{11}},\gamma \mu _{+2}{\textbf{S}}^*_{22},{\textbf{m}}_1\right) , \end{aligned}$$
(3.49)
$$\begin{aligned}&{\mathscr {G}}\left( \mu _{-1},\mu _{+2},\frac{{\textbf{m}}_1}{{\textbf{s}}_{11}}\right) =\left( \frac{\det ({\textbf{S}}_{22})\tilde{\mu }_{+1}}{{\textbf{s}}_{11}},\gamma \tilde{\mu }_{-2}\tilde{{\textbf{s}}}^*_{22},\tilde{{\textbf{m}}}_2\right) , \end{aligned}$$
(3.50)
$$\begin{aligned}&\det \left( \tilde{\mu }_{+1},\tilde{\mu }_{-2},\frac{\tilde{{\textbf{m}}}_2}{\tilde{{\textbf{S}}}_{11}}\right) =\gamma \det ({\textbf{S}}_{22})=0,\quad \det \left( \mu _{-1},\mu _{+2},\frac{{\textbf{m}}_1}{{\textbf{s}}_{11}}\right) =0. \end{aligned}$$
(3.51)

Since \(\mu _{+2}\) and \(\tilde{\mu }_{-2}\) are full column rank, combining with Lemma 3.2 and Lemma 3.14 yields

$$\begin{aligned}&1\geqslant {\text {rank}}\left( \frac{\det ({\textbf{S}}_{22})\mu _{-1}}{{\textbf{s}}_{11}},\gamma \mu _{+2}{\textbf{S}}^*_{22},{\textbf{m}}_1\right) \geqslant {\text {rank}}\left( \mu _{+2}{\textbf{S}}^*_{22}\right) =1, \end{aligned}$$
(3.52)
$$\begin{aligned}&1\geqslant {\text {rank}}\left( \frac{\det ({\textbf{S}}_{22})\tilde{\mu }_{+1}}{{\textbf{s}}_{11}},\gamma \tilde{\mu }_{-2}\tilde{{\textbf{s}}}^*_{22},\tilde{{\textbf{m}}}_2\right) \geqslant {\text {rank}}\left( \tilde{\mu }_{-2}\tilde{{\textbf{s}}}^*_{22}\right) =1, \end{aligned}$$
(3.53)

i.e., there exists a constant vector \(\alpha _m\) independent of \((x,t,z)\) such that \(\mu _{-1}=\textrm{e}^{\textrm{i}(\theta _2-\theta _1)}\mu _{+2}{\textbf{S}}^*_{22}\alpha _m\). Furthermore, (3.49) and (3.53) imply \({\textbf{m}}_1={\textbf{0}}\). Setting \({\textbf{B}}_m=\frac{{\textbf{S}}^*_{22}\alpha _m}{\dot{{\textbf{s}}}_{11}}\), we complete the proof of (3.45a).

At \(z=z_n\), it follows from (3.15) and (3.16) that

$$\begin{aligned}&{\mathscr {G}}\left( \tilde{\mu }_{+1},\tilde{\mu }_{-2},\frac{\tilde{{\textbf{m}}}_2}{\tilde{{\textbf{S}}}_{11}}\right) =\left( \frac{\det ({\textbf{S}}_{22})\mu _{-1}}{{\textbf{s}}_{11}},\gamma \mu _{+2}{\textbf{S}}^*_{22},{\textbf{m}}_1\right) =\left( {\textbf{0}},\gamma \mu _{+2}{\textbf{S}}^*_{22},{\textbf{m}}_1\right) , \end{aligned}$$
(3.54)
$$\begin{aligned}&\det \left( \tilde{\mu }_{+1},\tilde{\mu }_{-2},\frac{\tilde{{\textbf{m}}}_2}{\tilde{{\textbf{S}}}_{11}}\right) =\gamma \det ({\textbf{S}}_{22})=0. \end{aligned}$$
(3.55)

Since \(\mu _{+2}\) is full column rank, combining with Lemma 3.2 and Lemma 3.14 yields

$$\begin{aligned} 1\geqslant {\text {rank}}\left( {\textbf{0}},\gamma \mu _{+2}{\textbf{S}}^*_{22},{\textbf{m}}_1\right) \geqslant {\text {rank}}\left( \mu _{+2}{\textbf{S}}^*_{22}\right) =1, \end{aligned}$$
(3.56)

i.e., there exists a constant vector \(\beta _n\) independent of \((x,t,z)\) such that \({\textbf{m}}_1=\textrm{e}^{\textrm{i}(\theta _1+\theta _2)}\mu _{+2}{\textbf{S}}^*_{22}\beta _n\). Setting \({\textbf{C}}_n=\frac{{\textbf{S}}^*_{22}\beta _n}{{\dot{\det }}({\textbf{S}}_{22})}\), we complete the proof of (3.46a). \(\square \)

The inverse problem can be formulated in terms of the following:

Riemann–Hilbert Problem 3.16

Find a matrix-valued function \({\textbf{M}}(x,t;z)\) which is sectionally meromorphic in \({\mathbb {C}}\backslash \Gamma \), has simple poles as in Assumption 3.12, satisfies the growth conditions as in Lemma 3.7, the jump conditions as in Lemma 3.8, the asymptotic behaviors as in Lemma 3.10 and the residue conditions as in Lemma 3.15.

Theorem 3.17

Under the same hypotheses as in Lemma 2.22, the matrix \({\textbf{M}}(x,t;z)\) defined by (3.15) satisfies Riemann–Hilbert problem 3.16.

3.3 Reconstruction of potential

Lemma 3.18

(Vanishing Lemma) The regular RH problem for \({\textbf{M}}(x,t;z)\) obtained from RH problem 3.16 by replacing the asymptotic conditions in Lemma 3.10 with

$$\begin{aligned}{} & {} {\textbf{M}}(x,t;z)=O(z^{-1}), \qquad z\rightarrow \infty ,\\{} & {} {\textbf{M}}(x,t;z)= O(1), \quad z\rightarrow 0, \end{aligned}$$

has only the zero solution.

Proof

Let \({\mathcal {H}}(x,t;z)={\textbf{M}}(x,t;z){\textbf{H}}^{-1}(z){\textbf{M}}^\dag (x,t;{\bar{z}})\); it follows from the growth conditions (3.23) that \({\mathcal {H}}(x,t;z)\) is well defined at \(z=\pm \textrm{i}q_0\). Also,

$$\begin{aligned}{} & {} {\mathcal {H}}_+(x,t;z)={\textbf{M}}_-(x,t;z){\textbf{J}}_1(x,t;z){\textbf{H}}^{-1}(z){\textbf{M}}_-^\dag (x,t;{\bar{z}}),\qquad z\in \Gamma _1,\nonumber \\ \end{aligned}$$
(3.57a)
$$\begin{aligned}{} & {} {\mathcal {H}}_-(x,t;z)={\textbf{M}}_-(x,t;z){\textbf{H}}^{-1}(z){\textbf{J}}^\dag _1(x,t;{\bar{z}}){\textbf{M}}_-^\dag (x,t;{\bar{z}}),\qquad z\in \Gamma _1,\nonumber \\ \end{aligned}$$
(3.57b)
$$\begin{aligned}{} & {} {\mathcal {H}}_+(x,t;z)={\textbf{M}}_-(x,t;z){\textbf{J}}_4(x,t;z){\textbf{H}}^{-1}(z){\textbf{M}}_-^\dag (x,t;{\bar{z}}),\quad z\in \Gamma _4,\nonumber \\ \end{aligned}$$
(3.57c)
$$\begin{aligned}{} & {} {\mathcal {H}}_-(x,t;z)={\textbf{M}}_-(x,t;z){\textbf{H}}^{-1}(z){\textbf{J}}^\dag _2(x,t;{\bar{z}}){\textbf{M}}_-^\dag (x,t;{\bar{z}}),\qquad z\in \Gamma _4,\nonumber \\ \end{aligned}$$
(3.57d)

where \({\textbf{J}}_{n}(x,t;z)={\textbf{J}}(x,t;z)\vert _{\Gamma _n}\), \(n=1,\ldots ,4\). The above equations imply that \({\mathcal {H}}(x,t;z)\) has no jump across \(\Gamma _1\cup \Gamma _4\), and similarly across \(\Gamma _2\cup \Gamma _3\). Since \({\mathcal {H}}(x,t;z)\) is analytic for \(z\in {\mathbb {C}}\backslash \Gamma \) and approaches \({\textbf{0}}\) as \(z\rightarrow \infty \), Liouville’s theorem gives

$$\begin{aligned} {\mathcal {H}}(x,t;z)\equiv {\textbf{0}}. \end{aligned}$$
(3.58)

From (3.26) and (3.30), we find that \({\textbf{J}}_1(x,t;z){\textbf{H}}^{-1}(z)\) is positive definite for \(z\in \Gamma _1\). Combining this with (3.57a)–(3.57d) implies \({\textbf{M}}_-(x,t;z)={\textbf{0}}\) for \(z\in \Gamma _1\), and similarly for \(z\in \Gamma _3\). As a consequence, \({\textbf{M}}(x,t;z)={\textbf{0}}\) for \(z\in {\mathbb {R}}\), which implies that \({\textbf{M}}(x,t;z)\) has no jump across \({\mathbb {R}}\). Since the zeros of an analytic function are isolated, \({\textbf{M}}(x,t;z)\equiv {\textbf{0}}\). \(\square \)
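The final step from (3.58) to \({\textbf{M}}_-(x,t;z)={\textbf{0}}\) can be made explicit. Since \(\Gamma _1\subset {\mathbb {R}}\) (so that \({\bar{z}}=z\) there), (3.57a) and (3.58) give, for each row \({\textbf{r}}_j(x,t;z)\) of \({\textbf{M}}_-(x,t;z)\),

$$\begin{aligned} {\textbf{r}}_j(x,t;z){\textbf{J}}_1(x,t;z){\textbf{H}}^{-1}(z){\textbf{r}}_j^\dag (x,t;z)=0,\qquad z\in \Gamma _1, \end{aligned}$$

and the positive definiteness of \({\textbf{J}}_1(x,t;z){\textbf{H}}^{-1}(z)\) forces every row to vanish.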

Now, we rewrite any \((N+1)\times (N+1)\) matrix \({\textbf{A}}\) in the following block form

$$\begin{aligned} {\textbf{A}}=\begin{pmatrix} {\textbf{A}}_{11}&{}\quad {\textbf{A}}_{12}&{}\quad {\textbf{A}}_{13}\\ {\textbf{A}}_{21}&{}\quad {\textbf{A}}_{22}&{}\quad {\textbf{A}}_{23} \end{pmatrix}, \end{aligned}$$
(3.59)

where \({\textbf{A}}_{11}\) and \({\textbf{A}}_{13}\) are scalar.

Theorem 3.19

(Reconstruction formula) The solution \({\textbf{M}}(x,t;z)\) of RH problem 3.16 exists and is unique; consequently,

$$\begin{aligned} {\textbf{q}}(x,t)= -\textrm{i}\lim _{z\rightarrow \infty }z{\textbf{M}}_{21}(x,t;z), \end{aligned}$$
(3.60)

solves the N-component focusing NLS equation (1.2).

Proof

Although \({\textbf{M}}(x,t;z)\) may have poles, this singular RH problem can be mapped to a regular one (see Ref. [30]). The solution of the regular RH problem 3.16 (which has no singularities) exists and is unique if and only if Vanishing Lemma 3.18 holds (see Refs. [27, 28]). In the absence of the possible poles, based on the symmetry properties of the jump matrix \({\textbf{J}}(x,t;z)\), one obtains that \({\textbf{M}}(x,t;z)\), \(({\textbf{M}}^\dag (x,t;{\bar{z}}))^{-1}{\textbf{H}}(z)\) and \({\textbf{M}}(x,t;{\hat{z}})\varvec{\Pi }_+(z)\) satisfy the same RH problem. Taking into account the uniqueness, we conclude

$$\begin{aligned} {\textbf{M}}(x,t;z)=({\textbf{M}}^\dag (x,t;{\bar{z}}))^{-1}{\textbf{H}}(z)={\textbf{M}}(x,t;{\hat{z}})\varvec{\Pi }_+(z). \end{aligned}$$
(3.61)

In the following, we shall prove that \({\textbf{q}}(x,t)\) defined by (3.60) solves the N-component focusing NLS (1.2) by virtue of the dressing method [29]. Set

$$\begin{aligned}{} & {} {\mathscr {A}}(x,t;z)=\partial _x{\textbf{M}}(x,t;z)-{\textbf{U}}(x,t;z){\textbf{M}}(x,t;z)+\textrm{i}{\textbf{M}}(x,t;z)\varvec{\Lambda }(z), \end{aligned}$$
(3.62a)
$$\begin{aligned}{} & {} {\mathscr {B}} (x,t;z)=\partial _t{\textbf{M}}(x,t;z)-{\textbf{V}}(x,t;z){\textbf{M}}(x,t;z)-\textrm{i}{\textbf{M}}(x,t;z)\varvec{\Omega }(z), \end{aligned}$$
(3.62b)

where \({\textbf{U}}(x,t;z)\) and \({\textbf{V}}(x,t;z)\) are defined by (2.2), and \({\varvec{\Omega }}(z)={\text {diag}}(-2k\lambda ,\underbrace{k^2+\lambda ^2,\ldots ,k^2+\lambda ^2}_{N-1},2k\lambda )\). Suppose \({\textbf{q}}(x,t)\) is defined by (3.60), and \({\textbf{M}}(x,t;z)\) has the asymptotic expansions

$$\begin{aligned}{} & {} {\textbf{M}}(x,t;z)={\textbf{I}}+\frac{{\textbf{M}}_1(x,t)}{z}+\frac{{\textbf{M}}_2(x,t)}{z^2}+O\left( z^{-3}\right) ,\qquad z\rightarrow \infty , \end{aligned}$$
(3.63)
$$\begin{aligned}{} & {} {\textbf{M}}(x,t;z)=\left( {\textbf{I}}+{\mathcal {M}}_1(x,t)z+{\mathcal {M}}_2(x,t)z^2+O(z^3)\right) \varvec{\Pi }_+(z),\qquad z\rightarrow 0.\nonumber \\ \end{aligned}$$
(3.64)

Combining with the symmetries in (3.61), we conclude

$$\begin{aligned} {\textbf{Q}}(x,t)=\frac{\textrm{i}}{2}[\sigma ,{\textbf{M}}_1(x,t)],\quad {\textbf{M}}_1(x,t)=-q_0^2{\mathcal {M}}_1(x,t),\quad {\textbf{M}}_2(x,t)=q_0^4{\mathcal {M}}_2(x,t).\nonumber \\ \end{aligned}$$
(3.65)

Direct calculations show that \({\mathscr {A}}(x,t;z)\) satisfies the homogeneous RH problem

$$\begin{aligned}&{\mathscr {A}}_+(x,t;z)={\mathscr {A}}_-(x,t;z){\textbf{J}}(x,t;z),\qquad z\in \Gamma ,\nonumber \\&{\mathscr {A}}(x,t;z)=O(z^{-1}), \quad z\rightarrow \infty ,\nonumber \\&{\mathscr {A}}(x,t;z)=O(1), \quad z\rightarrow 0, \end{aligned}$$
(3.66)

which by Lemma 3.18 implies

$$\begin{aligned} {\mathscr {A}}(x,t;z)\equiv {\textbf{0}}. \end{aligned}$$
(3.67)

Furthermore, substituting the asymptotic expansions (3.63)–(3.64) into (3.67) and comparing the coefficients of \(O(z^{-1})\), we obtain

$$\begin{aligned} {[}\sigma ,\partial _x{\textbf{M}}_1(x,t)]=\left[ \sigma ,-\frac{\textrm{i}}{2}[\sigma ,{\textbf{M}}_2(x,t)]+{\textbf{Q}}(x,t){\textbf{M}}_1(x,t)\right] . \end{aligned}$$
(3.68)

It follows from (3.65) and (3.68) that \({\mathscr {B}}(x,t;z)\) satisfies the homogeneous RH problem

$$\begin{aligned}&{\mathscr {B}}_+(x,t;z)={\mathscr {B}}_-(x,t;z){\textbf{J}}(x,t;z),\qquad z\in \Gamma ,\nonumber \\&{\mathscr {B}}(x,t;z)=O(z^{-1}), \quad z\rightarrow \infty ,\nonumber \\&{\mathscr {B}}(x,t;z)=O(1), \quad z\rightarrow 0, \end{aligned}$$
(3.69)

which also by Lemma 3.18 implies that

$$\begin{aligned} {\mathscr {B}}(x,t;z)\equiv {\textbf{0}}. \end{aligned}$$
(3.70)

The compatibility condition of Eqs. (3.67) and (3.70) yields that the function \({\textbf{q}}(x,t)\) defined by (3.60) solves the N-component focusing NLS equation (1.2). \(\square \)

Remark 3.20

In general, if the symmetries of the jump matrix, the residue conditions and the asymptotic conditions are sufficient, the above theorem follows naturally from (3.15). To verify that the symmetries found are adequate, one must prove the reconstruction formula. In other words, the potential can also be expressed in terms of \({\textbf{M}}_{12}(x,t;z)\), and the consistency of the two reconstructions is ensured by the symmetries.

Indeed, RH problem 3.16 can be regularized by subtracting any pole contributions and the leading order asymptotic behavior at infinity and zero:

$$\begin{aligned} \begin{aligned} {\mathcal {M}}(x,t;z)&={\textbf{M}}(x,t;z)-{\textbf{E}}_+(z)\\&-\sum _{l=1}^{N_1}\left( \frac{\underset{w_l}{{\text {Res}}}{\textbf{M}}}{z-w_l}+\frac{\underset{{\bar{w}}_l}{{\text {Res}}}{\textbf{M}}}{z-{\bar{w}}_l}+\frac{\underset{{\hat{w}}_l}{{\text {Res}}}{\textbf{M}}}{z-{\hat{w}}_l}+\frac{\underset{\hat{{\bar{w}}}_l}{{\text {Res}}}{\textbf{M}}}{z-\hat{{\bar{w}}}_l}\right) \\&-\sum _{m=1}^{N_2}\left( \frac{\underset{y_m}{{\text {Res}}}{\textbf{M}}}{z-y_m}+\frac{\underset{{\bar{y}}_m}{{\text {Res}}}{\textbf{M}}}{z-{\bar{y}}_m}+\frac{\underset{{\hat{y}}_m}{{\text {Res}}}{\textbf{M}}}{z-{\hat{y}}_m}+\frac{\underset{\hat{{\bar{y}}}_m}{{\text {Res}}}{\textbf{M}}}{z-\hat{{\bar{y}}}_m}\right) \\&-\sum _{n=1}^{N_3}\left( \frac{\underset{z_n}{{\text {Res}}}{\textbf{M}}}{z-z_n}+\frac{\underset{{\bar{z}}_n}{{\text {Res}}}{\textbf{M}}}{z-{\bar{z}}_n}+\frac{\underset{{\hat{z}}_n}{{\text {Res}}}{\textbf{M}}}{z-{\hat{z}}_n}+\frac{\underset{\hat{{\bar{z}}}_n}{{\text {Res}}}{\textbf{M}}}{z-\hat{{\bar{z}}}_n}\right) . \end{aligned}\nonumber \\ \end{aligned}$$
(3.71)

Consequently, the piecewise holomorphic function \({\mathcal {M}}(x,t;z)\) satisfies

$$\begin{aligned}&{\mathcal {M}}_+(x,t;z)-{\mathcal {M}}_-(x,t;z)={\textbf{M}}_-(x,t;z)({\textbf{J}}(x,t;z)-{\textbf{I}}),\quad z\in \Gamma , \end{aligned}$$
(3.72)
$$\begin{aligned}&{\mathcal {M}}(x,t;z)\rightarrow {\textbf{0}},\quad z\rightarrow \infty . \end{aligned}$$
(3.73)

Using the Plemelj–Sokhotski formula, we get

$$\begin{aligned} {\mathcal {M}}(x,t;z)=\frac{1}{2\pi \textrm{i}}\int _{\Gamma } \frac{{\textbf{M}}_-(x,t;\zeta )({\textbf{J}}(x,t;\zeta )-{\textbf{I}})}{\zeta -z}\textrm{d}\zeta . \end{aligned}$$
(3.74)

Equivalently, RH problem 3.16 can be solved by the system of algebraic-integral equations:

$$\begin{aligned} {\textbf{M}}_1(z)&={\textbf{E}}_{+1}(z)+\sum _{l=1}^{N_1}\left( \frac{a_l\textrm{e}^{-2\textrm{i}\theta _1(w_l)}{{\textbf{M}}}_3(w_l)}{z-w_l}-\frac{q_+^2{\bar{a}}_l\textrm{e}^{2\textrm{i}\theta _1({\bar{w}}_l)}{{\textbf{M}}}_3(\hat{{\bar{w}}}_l)}{{\bar{w}}_l^2(z-\hat{{\bar{w}}}_l)}\right) \nonumber \\&+\sum _{m=1}^{N_2}\frac{\textrm{e}^{\textrm{i}(\theta _2-\theta _1)(y_m)}{{\textbf{M}}}_2(y_m){\textbf{B}}_m}{z-y_m}-\textrm{i}q_+\sum _{n=1}^{N_3}\frac{\textrm{e}^{\textrm{i}(\theta _1+\theta _2)(z_n)}{{\textbf{M}}}_2({\hat{z}}_n){\textbf{C}}_n }{z_n(z-{\hat{z}}_n)} \nonumber \\ \end{aligned}$$
(3.75a)
$$\begin{aligned}&+\frac{1}{2\pi \textrm{i}}\int _{\Gamma } \frac{{\textbf{M}}_-(x,t;\zeta )({\textbf{J}}(x,t;\zeta )-{\textbf{I}})_1}{\zeta -z}\textrm{d}\zeta ,\nonumber \\ {\textbf{M}}_2(z)&={\textbf{E}}_{+2}(z)+\frac{1}{2\pi \textrm{i}}\int _{\Gamma } \frac{{\textbf{M}}_-(x,t;\zeta )({\textbf{J}}(x,t;\zeta )-{\textbf{I}})_2}{\zeta -z}\textrm{d}\zeta \nonumber \\&-\sum _{m=1}^{N_2}\left( \frac{\textrm{e}^{\textrm{i}(\theta _1-\theta _2)({\bar{y}}_m)}{{\textbf{M}}}_1({\bar{y}}_m){\textbf{B}}_m^\dag }{\gamma ({\bar{y}}_m)(z-{\bar{y}}_m)}+\frac{\textrm{i}q_+\textrm{e}^{\textrm{i}(\theta _1-\theta _2)({\bar{y}}_m)}{{\textbf{M}}}_3(\hat{{\bar{y}}}_m){\textbf{B}}_m^\dag }{{\bar{y}}_m\gamma (\hat{{\bar{y}}}_m)(z-\hat{{\bar{y}}}_m)}\right) \nonumber \\ \end{aligned}$$
(3.75b)
$$\begin{aligned}&-\sum _{n=1}^{N_3}\left( \frac{\textrm{e}^{-\textrm{i}(\theta _1+\theta _2)({\bar{z}}_n)}{{\textbf{M}}}_3({\bar{z}}_n){\textbf{C}}_n^\dag }{\gamma ({\bar{z}}_n)(z-{\bar{z}}_n)}+\frac{\textrm{i}{\bar{q}}_+\textrm{e}^{-\textrm{i}(\theta _1+\theta _2)({\bar{z}}_n)}{{\textbf{M}}}_1(\hat{{\bar{z}}}_n){\textbf{C}}_n^\dag }{{\bar{z}}_n\gamma (\hat{{\bar{z}}}_n)(z-\hat{{\bar{z}}}_n)}\right) ,\nonumber \\ {\textbf{M}}_3(z)&={\textbf{E}}_{+3}(z)+\sum _{l=1}^{N_1}\left( -\frac{{\bar{a}}_l\textrm{e}^{2\textrm{i}\theta _1({\bar{w}}_l)}{{\textbf{M}}}_1({\bar{w}}_l)}{z-{\bar{w}}_l}+\frac{{\bar{q}}_+^2a_l\textrm{e}^{-2\textrm{i}\theta _1( w_l)}{{\textbf{M}}}_1(\hat{ w}_l)}{ w_l^2(z-\hat{ w}_l)}\right) \nonumber \\&-i{\bar{q}}_+\sum _{m=1}^{N_2}\frac{\textrm{e}^{\textrm{i}(\theta _2-\theta _1)(y_m)}{{\textbf{M}}}_2({\hat{y}}_m){\textbf{B}}_m }{y_m(z-{\hat{y}}_m)}+\sum _{n=1}^{N_3}\frac{\textrm{e}^{\textrm{i}(\theta _1+\theta _2)(z_n)}{{\textbf{M}}}_2(z_n){\textbf{C}}_n}{z-z_n}\\\nonumber&+\frac{1}{2\pi \textrm{i}}\int _{\Gamma } \frac{{\textbf{M}}_-(x,t;\zeta )({\textbf{J}}(x,t;\zeta )-{\textbf{I}})_3}{\zeta -z}\textrm{d}\zeta \end{aligned}$$
(3.75c)

Corollary 3.21

The solution of the N-component focusing NLS equation (1.2) with NZBCs (1.3) is reconstructed as

$$\begin{aligned} {\textbf{q}}(x,t)= & {} {\textbf{q}}_+-\textrm{i}\sum _{l=1}^{N_1}\left( a_l\textrm{e}^{-2\textrm{i}\theta _1(w_l)}{{\textbf{M}}}_{23}(w_l)-\frac{q_+^2}{{\bar{w}}_l^2}{\bar{a}}_l\textrm{e}^{2\textrm{i}\theta _1({\bar{w}}_l)}{{\textbf{M}}}_{23}(\hat{{\bar{w}}}_l)\right) \nonumber \\{} & {} -\textrm{i}\sum _{m=1}^{N_2}\textrm{e}^{\textrm{i}(\theta _2-\theta _1)(y_m)}{{\textbf{M}}}_{22}(y_m){\textbf{B}}_m-\sum _{n=1}^{N_3}\frac{q_+}{z_n}\textrm{e}^{\textrm{i}(\theta _1+\theta _2)(z_n)}{{\textbf{M}}}_{22}({\hat{z}}_n){\textbf{C}}_n \nonumber \\{} & {} +\frac{1}{2\pi }\int _{\Gamma } \left[ {\textbf{M}}_-(x,t;\zeta )({\textbf{J}}(x,t;\zeta )-{\textbf{I}})\right] _{21}\textrm{d}\zeta . \end{aligned}$$
(3.76)
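The normalization of the integral term in (3.76) follows from (3.60) and (3.74): since \(z/(\zeta -z)\rightarrow -1\) as \(z\rightarrow \infty \),

$$\begin{aligned} -\textrm{i}\lim _{z\rightarrow \infty }\frac{z}{2\pi \textrm{i}}\int _{\Gamma } \frac{\left[ {\textbf{M}}_-(x,t;\zeta )({\textbf{J}}(x,t;\zeta )-{\textbf{I}})\right] _{21}}{\zeta -z}\textrm{d}\zeta =\frac{1}{2\pi }\int _{\Gamma } \left[ {\textbf{M}}_-(x,t;\zeta )({\textbf{J}}(x,t;\zeta )-{\textbf{I}})\right] _{21}\textrm{d}\zeta . \end{aligned}$$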

4 Reflectionless potential and exact solutions

We now reconstruct the potential \({\textbf{q}}(x,t)\) explicitly in the reflectionless case, i.e., \(\rho _1(z)={\textbf{0}}\) and \(\rho _2(z)=0\). In this case, there is no jump across the contour \(\Gamma \). As a consequence, the inverse problem reduces to an algebraic system

$$\begin{aligned} {\textbf{M}}_1(z){} & {} ={\textbf{E}}_{+1}(z)+\sum _{l=1}^{2N_1}\frac{\alpha _l{{\textbf{M}}}_3(\omega _l)}{z-\omega _l}+\sum _{m=1}^{N_2+N_3}\frac{{{\textbf{M}}}_2(\lambda _m){\mathcal {B}}_m}{z-\lambda _m},\quad z={\bar{\omega }}_j,{\bar{\lambda }}_n, \end{aligned}$$
(4.1a)
$$\begin{aligned}{} & {} {\textbf{M}}_2(z)={\textbf{E}}_{+2}(z)-\sum _{m=1}^{N_2+N_3}\frac{{{\textbf{M}}}_1({\bar{\lambda }}_m){\mathcal {B}}_m^\dag }{z-{\bar{\lambda }}_m+{\hat{z}}-\hat{{\bar{\lambda }}}_m},\quad z=\lambda _n, \end{aligned}$$
(4.1b)
$$\begin{aligned}{} & {} {\textbf{M}}_3(z)={\textbf{E}}_{+3}(z)-\sum _{l=1}^{2N_1}\frac{{\bar{\alpha }}_l{{\textbf{M}}}_1({\bar{\omega }}_l)}{z-{\bar{\omega }}_l}-\sum _{m=1}^{N_2+N_3}\frac{\textrm{i}{\bar{q}}_+{{\textbf{M}}}_2(\lambda _m){\mathcal {B}}_m}{\lambda _m(z-{{\hat{\lambda }}}_m)},\quad z=\omega _j,\nonumber \\ \end{aligned}$$
(4.1c)

where

$$\begin{aligned}{} & {} \omega _{l}={\left\{ \begin{array}{ll} w_l,\quad &{}l=1,\ldots ,N_1,\\ \hat{{\bar{w}}}_{l-N_1},\quad &{}l=N_1+1,\ldots ,2N_1, \end{array}\right. } \end{aligned}$$
(4.2)
$$\begin{aligned}{} & {} \alpha _{l}(x,t)={\left\{ \begin{array}{ll} a_l\textrm{e}^{-2\textrm{i}\theta _1(x,t;\omega _l)},\quad &{}l=1,\ldots ,N_1,\\ -\frac{\omega _l^2}{{\bar{q}}_+^2}\bar{a}_{l-N_1}\textrm{e}^{-2\textrm{i}\theta _1(x,t;\omega _l)},\quad &{}l=N_1+1,\ldots ,2N_1, \end{array}\right. } \end{aligned}$$
(4.3)
$$\begin{aligned}{} & {} \lambda _m={\left\{ \begin{array}{ll} y_m,\quad &{}m=1,\ldots ,N_2,\\ \hat{z}_{m-N_2},\quad &{}m=N_2+1,\ldots ,N_2+N_3, \end{array}\right. } \end{aligned}$$
(4.4)
$$\begin{aligned}{} & {} {\mathcal {B}}_m(x,t)={\left\{ \begin{array}{ll} \textrm{e}^{\textrm{i}(\theta _2-\theta _1)(x,t;\lambda _m)}{\textbf{B}}_m,\quad &{}m=1,\ldots ,N_2,\\ \frac{\textrm{i}\lambda _m}{{\bar{q}}_+} \textrm{e}^{\textrm{i}(\theta _2-\theta _1)(x,t;\lambda _m)}{\textbf{C}}_{m-N_2},\quad &{}m=N_2+1,\ldots ,N_2+N_3. \end{array}\right. } \end{aligned}$$
(4.5)
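As a consistency check of the second branch of (4.5) (recalling the involution \({\hat{z}}=-q_0^2/z\) and \(q_0^2=q_+{\bar{q}}_+\)): for \(m>N_2\), \(\lambda _m={\hat{z}}_{m-N_2}\), so

$$\begin{aligned} \frac{\textrm{i}\lambda _m}{{\bar{q}}_+}=-\frac{\textrm{i}q_0^2}{z_{m-N_2}{\bar{q}}_+}=-\frac{\textrm{i}q_+}{z_{m-N_2}}, \end{aligned}$$

matching the factor \(-\textrm{i}q_+/z_n\) appearing in (3.46d).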

4.1 Reconstruction

It follows trivially from (3.76) that

Theorem 4.1

In the reflectionless case, the solution of the N-component focusing NLS (1.2) with NZBCs (1.3) can be expressed by

$$\begin{aligned} {\textbf{q}}(x,t)={\textbf{q}}_+-\textrm{i}\sum _{l=1}^{2N_1}\alpha _l(x,t){{\textbf{M}}}_{23}(x,t;\omega _l)-\textrm{i}\sum _{m=1}^{N_2+N_3}{{\textbf{M}}}_{22}(x,t;\lambda _m){\mathcal {B}}_m(x,t).\nonumber \\ \end{aligned}$$
(4.6)

Substituting (4.1a) into (4.1b) and (4.1c), we get the following algebraic system for \(\{\alpha _j{\textbf{M}}_3(\omega _j)\}_1^{2N_1}\) and \(\{{\textbf{M}}_2(\lambda _n){\mathcal {B}}_n\}_1^{N_2+N_3}\):

$$\begin{aligned} {\textbf{M}}_2(\lambda _n){\mathcal {B}}_n&={\textbf{E}}_{+2}(\lambda _n){\mathcal {B}}_n-\sum _{m=1}^{N_2+N_3}\frac{{{\textbf{E}}}_{+1}({\bar{\lambda }}_m){\mathcal {B}}_m^\dag {\mathcal {B}}_n}{\lambda _n-{\bar{\lambda }}_m+{{\hat{\lambda }}}_n-\hat{{\bar{\lambda }}}_m}\nonumber \\&\quad -\sum _{m=1}^{N_2+N_3}\sum _{j=1}^{N_2+N_3}\frac{({\mathcal {B}}_m^\dag {\mathcal {B}}_n){{\textbf{M}}}_2(\lambda _j){\mathcal {B}}_j}{(\lambda _n-{\bar{\lambda }}_m+{{\hat{\lambda }}}_n-\hat{{\bar{\lambda }}}_m)({\bar{\lambda }}_m-\lambda _j)}\nonumber \\&\quad -\sum _{m=1}^{N_2+N_3}\sum _{l=1}^{2N_1}\frac{({\mathcal {B}}_m^\dag {\mathcal {B}}_n)\alpha _l{{\textbf{M}}}_3(\omega _l)}{(\lambda _n-{\bar{\lambda }}_m+{{\hat{\lambda }}}_n-\hat{{\bar{\lambda }}}_m)({\bar{\lambda }}_m-\omega _l)}, \end{aligned}$$
(4.7a)
$$\begin{aligned} \alpha _j{\textbf{M}}_3(\omega _j)&=\alpha _j{\textbf{E}}_{+3}(\omega _j)-\sum _{l=1}^{2N_1}\frac{\alpha _j{\bar{\alpha }}_l{{\textbf{E}}}_{+1}({\bar{\omega }}_l)}{\omega _j-{\bar{\omega }}_l}-\sum _{l=1}^{2N_1}\sum _{h=1}^{2N_1}\frac{\alpha _j{\bar{\alpha }}_l\alpha _h{{\textbf{M}}}_3(\omega _h)}{(\omega _j-{\bar{\omega }}_l)({\bar{\omega }}_l-\omega _h)}\nonumber \\&\quad -\sum _{l=1}^{2N_1}\sum _{m=1}^{N_2+N_3}\frac{\alpha _j{\bar{\alpha }}_l{{\textbf{M}}}_2(\lambda _m){\mathcal {B}}_m}{(\omega _j-{\bar{\omega }}_l)({\bar{\omega }}_l-\lambda _m)}-\sum _{m=1}^{N_2+N_3}\frac{\textrm{i}{\bar{q}}_+\alpha _j{{\textbf{M}}}_2(\lambda _m){\mathcal {B}}_m}{\lambda _m(\omega _j-{{\hat{\lambda }}}_m)}. \end{aligned}$$
(4.7b)

Solving this system and combining the result with Theorem 4.1, we arrive at the following corollary.

Corollary 4.2

In the reflectionless case, the solution of the N-component focusing NLS equation (1.2) with NZBCs (1.3) can be written as

$$\begin{aligned} {\textbf{q}}(x,t)={\textbf{q}}_+-\textrm{i}{\textbf{F}}(x,t)({\textbf{I}}+{\textbf{G}}(x,t))^{-1}{\textbf{1}}_{{\mathscr {N}}}, \end{aligned}$$
(4.8)

where \({\textbf{G}}(x,t)=(g_{mn})_{{\mathscr {N}}\times {\mathscr {N}}}\) is an \({\mathscr {N}}\times {\mathscr {N}}\) matrix-valued function, \({\textbf{F}}(x,t)=({\textbf{F}}_1,\ldots ,{\textbf{F}}_{{\mathscr {N}}})\) is an \(N\times {\mathscr {N}}\) matrix-valued function, \({\textbf{1}}_{{\mathscr {N}}}=(1,\ldots ,1)^T\) is an \({\mathscr {N}}\)-dimensional column vector, and \({\mathscr {N}}=2N_1+N_2+N_3\), with

$$\begin{aligned} g_{mn}={\left\{ \begin{array}{ll} \sum _{l=1}^{2N_1}\frac{{\bar{\alpha }}_l\alpha _n}{(\omega _m-{\bar{\omega }}_l)({\bar{\omega }}_l-\omega _n)}, &{} 1\leqslant m,n\leqslant 2N_1,\\ \frac{\textrm{i}{\bar{q}}_+\alpha _n}{q_0^2+\omega _n\lambda _{m-2N_1}}+\sum _{l=1}^{2N_1}\frac{{\bar{\alpha }}_l\alpha _n}{(\lambda _{m-2N_1}-{\bar{\omega }}_l)({\bar{\omega }}_l-\omega _n)}, &{} 1\leqslant n\leqslant 2N_1<m\leqslant {\mathscr {N}},\\ \sum _{l=1}^{N_2+N_3}\frac{({\mathcal {B}}_l^\dag {\mathcal {B}}_{n-2N_1})}{(\omega _m-{\bar{\lambda }}_l)({\bar{\lambda }}_l-\lambda _{n-2N_1}+\hat{{\bar{\lambda }}}_l-{{\hat{\lambda }}}_{n-2N_1})}, &{} 1\leqslant m\leqslant 2N_1<n\leqslant {\mathscr {N}},\\ \sum _{l=1}^{N_2+N_3}\frac{({\mathcal {B}}_l^\dag {\mathcal {B}}_{n-2N_1})}{(\lambda _{m-2N_1}-{\bar{\lambda }}_l)({\bar{\lambda }}_l-\lambda _{n-2N_1}+\hat{{\bar{\lambda }}}_l-{{\hat{\lambda }}}_{n-2N_1})}, &{} 2N_1< m,n\leqslant {\mathscr {N}}, \end{array}\right. } \end{aligned}$$
(4.9)
$$\begin{aligned} {\textbf{F}}_j={\left\{ \begin{array}{ll} \alpha _j{\textbf{E}}_{+23}(\omega _j)-\sum _{l=1}^{2N_1}\frac{\alpha _j{\bar{\alpha }}_l{{\textbf{E}}}_{+21}({\bar{\omega }}_l)}{\omega _j-{\bar{\omega }}_l},\quad &{} 1\leqslant j\leqslant 2N_1,\\ {\textbf{E}}_{+22}(\lambda _{j-2N_1}){\mathcal {B}}_{j-2N_1}-\sum _{m=1}^{N_2+N_3}\frac{{{\textbf{E}}}_{+21}({\bar{\lambda }}_m){\mathcal {B}}_m^\dag {\mathcal {B}}_{j-2N_1}}{\lambda _{j-2N_1}-{\bar{\lambda }}_m+{{\hat{\lambda }}}_{j-2N_1}-\hat{{\bar{\lambda }}}_m},\quad &{}2N_1<j\leqslant {\mathscr {N}}. \end{array}\right. } \end{aligned}$$
(4.10)
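Once \({\textbf{G}}\) and \({\textbf{F}}\) have been assembled from (4.9)–(4.10), evaluating (4.8) is a finite linear-algebra computation. The following minimal sketch (our own naming; the placeholder data below merely illustrate the shapes and do not come from actual scattering data) evaluates \({\textbf{q}}={\textbf{q}}_+-\textrm{i}\,{\textbf{F}}({\textbf{I}}+{\textbf{G}})^{-1}{\textbf{1}}_{{\mathscr {N}}}\):

```python
import numpy as np

def reconstruct_q(q_plus, F, G):
    """Evaluate the reflectionless reconstruction (4.8),
    q = q_+ - i F (I + G)^{-1} 1_scrN,
    given the N x scrN matrix F and the scrN x scrN matrix G,
    whose entries would be supplied by (4.9)-(4.10)."""
    scrN = G.shape[0]
    ones = np.ones(scrN, dtype=complex)           # the vector 1_scrN
    h = np.linalg.solve(np.eye(scrN) + G, ones)   # (I + G)^{-1} 1_scrN
    return q_plus - 1j * (F @ h)

# Toy illustration with placeholder matrices (not genuine scattering data):
rng = np.random.default_rng(0)
N, scrN = 3, 4                                    # e.g. scrN = 2*N1 + N2 + N3
G = 0.1 * (rng.standard_normal((scrN, scrN))
           + 1j * rng.standard_normal((scrN, scrN)))
F = rng.standard_normal((N, scrN)) + 1j * rng.standard_normal((N, scrN))
q_plus = np.array([0, 0, 1], dtype=complex)
q = reconstruct_q(q_plus, F, G)
```

In practice one would recompute \({\textbf{G}}(x,t)\) and \({\textbf{F}}(x,t)\) at each point \((x,t)\) and invert the small \({\mathscr {N}}\times {\mathscr {N}}\) system there.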

Remark 4.3

When \(N_2=N_3=0\), i.e., \({\mathscr {N}}=2N_1\), it follows from (4.10) that the \((m,n)\)-element of the matrix \({\textbf{F}}(x,t)\) vanishes for \(m=1,\ldots ,N-1\). Consequently, \(q_1=\cdots =q_{N-1}=0\).

4.2 Special exact solution

In the following, we explore the different possibilities for the reflectionless solution in the 3-component case. Without loss of generality, we take \(q_+=1\), \(q_-=-1\), i.e., \({\textbf{q}}_+=(0,0,1)^T\), \({\textbf{q}}_-=(0,0,-1)^T\). In the reflectionless case, similarly to the Manakov system [24], we find that

$$\begin{aligned} {\textbf{s}}_{33}(z)=\prod _{l=1}^{N_1}\frac{z-{\bar{w}}_l}{z-w_l}\frac{z-{\hat{w}}_l}{z-\hat{{\bar{w}}}_l}\prod _{m=1}^{N_2}\frac{z-{\hat{y}}_m}{z-\hat{{\bar{y}}}_m}\prod _{n=1}^{N_3}\frac{z-z_n}{z-{\bar{z}}_n}. \end{aligned}$$
(4.11)

Letting \(z\rightarrow 0\) in the above equation and comparing with the asymptotics in (2.25) yields

$$\begin{aligned} 2\sum _{l=1}^{N_1}\arg w_l+\sum _{m=1}^{N_2}\arg y_m-\sum _{n=1}^{N_3}\arg z_n=\frac{\pi }{2}+\kappa \pi ,\quad \kappa \in {\mathbb {Z}}. \end{aligned}$$
(4.12)
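As a quick numerical sanity check (the helper function and data layout below are ours, not from the text), the constraint (4.12) can be verified directly against the eigenvalues quoted in the captions of Figs. 2–5:

```python
import cmath
import math

def satisfies_4_12(ws, ys, zs, tol=1e-12):
    """Check the trace condition (4.12):
    2*sum(arg w_l) + sum(arg y_m) - sum(arg z_n) = pi/2 + kappa*pi
    for some integer kappa."""
    s = (2 * sum(cmath.phase(w) for w in ws)
         + sum(cmath.phase(y) for y in ys)
         - sum(cmath.phase(z) for z in zs))
    kappa = (s - math.pi / 2) / math.pi
    return abs(kappa - round(kappa)) < tol

# Eigenvalue data quoted in the figure captions below:
checks = [
    satisfies_4_12([1 + 1j], [], []),                        # Fig. 2, left
    satisfies_4_12([1 + 1j, 2j], [], []),                    # Fig. 2, right
    satisfies_4_12([4j / 3], [1.5j], []),                    # Fig. 3
    satisfies_4_12([], [-2 + 1.5j], [(18 + 24j) / 25]),      # Fig. 4
    satisfies_4_12([1.5j], [1.5 + 2j], [(-24 + 18j) / 25]),  # Fig. 5
]
```

For instance, for the left panel of Fig. 2 one has \(2\arg (1+\textrm{i})=\pi /2\), which matches (4.12) with \(\kappa =0\).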

Case (i) \(N_2=N_3=0\) (Fig. 2). Since \(q_1=q_2=0\), we only consider the dynamical evolution of \(q_3\).

Fig. 2

Left: \(\vert q_3\vert \) with \(N_1=1\), \(w_1=1+\textrm{i}\), \(a_1=1\); Right: \(\vert q_3\vert \) with \(N_1=2\), \(w_1=1+\textrm{i}\), \(w_2=2\textrm{i}\), \(a_1=1\), \(a_2=\textrm{i}\)

Case (ii) \(N_1=1\), \(N_2=1\), \(N_3=0\) (Fig. 3).

Fig. 3

The dynamical evolutions of the solution \({\textbf{q}}\) with \(w_1=\frac{4\textrm{i}}{3}\), \(y_1=\frac{3\textrm{i}}{2}\), \(a_1=1\), \({\textbf{B}}_1=(\textrm{i},2+\textrm{i})^T\)

Case (iii) \(N_1=0\), \(N_2=1\), \(N_3=1\) (Fig. 4).

Fig. 4

The dynamical evolutions of the solution \({\textbf{q}}\) with \(y_1=-2+\frac{3\textrm{i}}{2}\), \(z_1=\frac{18+24\textrm{i}}{25}\), \({\textbf{B}}_1=(1,\textrm{i})^T\), \({\textbf{C}}_1=z_1(-1,\textrm{i})^T\)

Case (iv) \(N_1=1\), \(N_2=1\), \(N_3=1\) (Fig. 5).

Fig. 5

The dynamical evolutions of the solution \({\textbf{q}}\) with \(\omega _1=\frac{3}{2}\textrm{i}\), \(y_1=\frac{3}{2}+2\textrm{i}\), \(z_1=\frac{18\textrm{i}-24}{25}\), \(\alpha _1=1\), \({\textbf{B}}_1=(\textrm{i},1)^T\), \({\textbf{C}}_1=z_1(\textrm{i},-1)^T\)

Remark 4.4

With the aid of the symbolic software Mathematica, we have verified that all the expressions for \({\textbf{q}}(x,t)\) in Cases \(\mathbf {(i)}\)–\(\mathbf {(iv)}\) solve the 3-component focusing NLS equation (1.2) with \(\lim _{x\rightarrow \pm \infty }{\textbf{q}}(x,t)=(0,0,\pm 1)^T\).

5 Conclusion and outlook

As we have shown, this work is considerably more involved than the scalar and 2-component cases, mainly in the need to understand the algebraic structure thoroughly. In this paper, we have introduced the idea of “block” and the generalized cross product in multi-dimensional space to develop the IST for the N-component focusing NLS equation with NZBCs, and we have characterized the inverse problem in terms of a \(3\times 3\) block matrix RH problem. Moreover, by virtue of the symmetries of the scattering data, we have verified the existence and uniqueness of the solution of the above RH problem and have proved that the reconstructed potential \({\textbf{q}}(x,t)\) solves the N-component focusing NLS equation. We expect these ideas to be useful in investigating other multi-component integrable equations. However, since each sector lacks \(N-1\) analytic eigenfunctions rather than just one, the ideas in this work cannot be applied in a straightforward way to the IST for the N-component defocusing NLS equation with NZBCs. We should remark that the IST for the multi-component defocusing NLS equation with NZBCs was recently established in Refs. [25, 26]. Although some of these ideas can be extended to the N-component case, a detailed treatment of the symmetries is still open. We believe that the idea of “block” would be useful in characterizing the symmetries. In this paper, we have considered solutions that tend to \({\textbf{q}}_0 \textrm{e}^{i\theta _\pm }\) as \(x \rightarrow \pm \infty \), where \({\textbf{q}}_0\) is a fixed vector. This is a special case; the problem with general nonzero boundary conditions is left as a topic for future work.

Since the long-time asymptotics for the focusing NLS equation with NZBCs have been analyzed in Refs. [31,32,33,34] by virtue of the nonlinear steepest descent method [35], one could try to investigate the 2-component case. It would therefore be an interesting subject to relate our previous work [36] on the 2-component coupled case with ZBCs to the techniques in this paper and in Refs. [31,32,33].