1 Introduction

The long time behavior of solutions q(x, t) of the Cauchy initial-value problem for the defocusing nonlinear Schrödinger (NLS) equation

$$\displaystyle \begin{aligned} \mathrm{i} \frac{\partial q}{\partial t} + \frac{\partial^2 q}{\partial x^2} - 2 |q|{}^2 q = 0, \end{aligned} $$
(1)

with initial data decaying for large x:

$$\displaystyle \begin{aligned} q(x,0)=q_0(x)\to 0,\quad |x|\to\infty, {} \end{aligned} $$
(2)

has been studied extensively, under various assumptions on the smoothness and decay properties of the initial data q 0 [3, 5, 6, 8, 10, 19, 20]. The asymptotic behavior takes the following form: as t → +∞, one has

$$\displaystyle \begin{aligned} q(x,t) = t^{-1/2}\alpha(z_0)\mathrm{e}^{\mathrm{i} x^2/(4t) - \mathrm{i}\nu(z_0)\ln(8t)} + \mathcal{E}(x,t), \end{aligned} $$
(3)

where \(\mathcal {E}(x,t)\) is an error term and, for \(z\in \mathbb {R}\), ν(z) and α(z) are defined by

$$\displaystyle \begin{aligned} \nu(z) := -\frac{1}{2\pi}\ln(1-|r(z)|{}^2),\quad |\alpha(z)|{}^2=\frac{1}{2}\nu(z), {} \end{aligned} $$
(4)

and

$$\displaystyle \begin{aligned} \arg(\alpha(z))=\frac{1}{\pi}\int_{-\infty}^{z}\ln(z-s)\,\mathrm{d} \ln(1-|r(s)|{}^2)+\frac{\pi}{4} + \arg(\Gamma(\mathrm{i}\nu(z)))-\arg(r(z)). {} \end{aligned} $$
(5)

Here z 0 = −x∕(4t), Γ is the gamma function, and r(z) is the so-called reflection coefficient associated to the initial data q 0. The connection between the initial data q 0(x) and the reflection coefficient r(z) is achieved through the spectral theory of the associated self-adjoint Zakharov-Shabat differential operator

$$\displaystyle \begin{aligned} \mathcal{L}:= \mathrm{i} \sigma_3\frac{\mathrm{d}}{\mathrm{d} x} + \mathbf{Q}(x),\quad \sigma_3:=\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},\quad \mathbf{Q}(x):=\begin{pmatrix} 0 & - \mathrm{i} q_0(x) \\ \mathrm{i} \overline{q_0(x)} & 0 \end{pmatrix}, \end{aligned}$$

acting in \(L^2(\mathbb {R};\mathbb {C}^2)\), as described, for example, in [6]. See also the contribution of Perry in this volume: [17, Section 2].
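As a concrete illustration of how (4)–(5) are used, the following short Python sketch evaluates ν(z 0), |α(z 0)|, and arg α(z 0) for a hypothetical sample reflection coefficient r(z) = 0.6 sech(z)e^{iz}; the sample r, the integration cutoff, and the finite-difference step are illustrative choices only and are not tied to any particular initial datum q 0.

```python
# Evaluate nu(z0), |alpha(z0)|, and arg(alpha(z0)) from (4)-(5) for a sample r.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def r(z):
    # hypothetical reflection coefficient with sup |r| = 0.6 < 1
    return 0.6 / np.cosh(z) * np.exp(1j * z)

def nu(z):
    return -np.log(1.0 - abs(r(z))**2) / (2.0 * np.pi)

def dlog(s, h=1e-6):
    # d/ds of ln(1 - |r(s)|^2), by a centered difference
    g = lambda s: np.log(1.0 - abs(r(s))**2)
    return (g(s + h) - g(s - h)) / (2.0 * h)

def arg_alpha(z0):
    # (1/pi) int_{-inf}^{z0} ln(z0 - s) d ln(1 - |r(s)|^2) + pi/4
    #   + arg Gamma(i nu(z0)) - arg r(z0); the integrand is negligible
    #   below s = -30 for this sample r
    integral, _ = quad(lambda s: np.log(z0 - s) * dlog(s), -30.0, z0, limit=200)
    return (integral / np.pi + np.pi / 4.0
            + np.angle(gamma(1j * nu(z0))) - np.angle(r(z0)))

z0 = -0.75
print("nu(z0)         =", nu(z0))
print("|alpha(z0)|    =", np.sqrt(nu(z0) / 2.0))
print("arg(alpha(z0)) =", arg_alpha(z0))
```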

The modulus |α(z 0)| of the complex amplitude α(z 0) as written in (4) was first obtained by Segur and Ablowitz [19] from trace formulæ under the assumption that q(x, t) has the form (3) where \(\mathcal {E}(x,t)\) is small for large t. Zakharov and Manakov [20] took the form (3) as an ansatz to motivate a kind of WKB analysis of the reflection coefficient r(z) and as a consequence were able to also calculate the phase of α(z 0), obtaining for the first time the phase as written in (5). Its [10] was the first to observe the key role played in the large-time behavior of q(x, t) by an “isomonodromy” problem for parabolic cylinder functions; this problem has been an essential ingredient in all subsequent studies of the large-t limit and as we shall see it is a non-commutative analogue of the Gaussian integral that produces the familiar factors of \(\sqrt {2\pi }\) in the stationary phase approximation of integrals. The first time that the form (3) itself was rigorously deduced from first principles (rather than assumed) and proven to be accurate for large t (incidentally reproducing the formulæ (4)–(5) in an ansatz-free fashion) was in the work of Deift and Zhou [3] (see [6] for a pedagogic description) who brought the recently introduced nonlinear steepest descent method [4] to bear on this problem. Indeed, under the assumption of high orders of smoothness and decay on the initial data q 0, the authors of [3] proved that \(\mathcal {E}(x,t)\) satisfies

(6)

It is reasonable to expect that any estimate of the error term \(\mathcal {E}(x,t)\) would depend on the smoothness and decay assumptions made on q 0, and so it is natural to ask what happens to the estimate (6) if the assumptions on q 0 are weakened. Early in this millennium, Deift and Zhou developed some new tools for the analysis of Riemann-Hilbert problems, originally aimed at studying the long time behavior of perturbations of the NLS equation [7]. Their methods allowed them to establish long time asymptotics for the Cauchy problem (1)–(2) with essentially minimal assumptions on the initial data [8]. Indeed, they assumed the initial data q 0 to lie in the weighted Sobolev space

(7)

It is well known that if q 0 lies in this space, then the associated reflection coefficient satisfies , where

(8)

and more generally the spectral transform \(\mathcal {R}\) associated with the Zakharov-Shabat operator \(\mathcal {L}\) is a map \(q_{0} \mapsto r=\mathcal {R}q_0\) that is a bi-Lipschitz bijection [21]. The result of [8] is then that the Cauchy problem (1)–(2) for such initial data has a unique weak solution for which (3) holds with an error term \(\mathcal {E}\left (x,t \right )\) that satisfies, for any fixed κ in the indicated range,

(9)

Subsequently, McLaughlin and Miller [13, 14] developed a method for the asymptotic analysis of Riemann-Hilbert problems in which jumps across contours are “smeared out” over a two-dimensional region in the complex plane, resulting in an equivalent \({\overline {\partial }}\) problem that is more easily analyzed. In this paper we adapt and extend this method to the Riemann-Hilbert problem of inverse-scattering associated to the Cauchy problem (1)–(2). The main point of our work is this: by using the \({\overline {\partial }}\) approach, we avoid all delicate estimates involving Cauchy projection operators in L p spaces (which are central to the work in [8]). Instead it is only necessary to estimate certain double integrals, an exercise involving nothing more than calculus. Remarkably, this elementary approach also sharpens the result obtained in [8]. Our result is as follows.

Theorem 1.1

The Cauchy problem (1)–(2) with initial data q 0 in the weighted Sobolev space defined by (7) has a unique weak solution having the form (3)–(5) in which r(z) is the reflection coefficient associated with q 0 and where the error term satisfies

(10)

The main features of this result are as follows.

  • The error estimate is an improvement over the one reported in [8], i.e., we prove that the endpoint case \(\kappa =\tfrac {1}{4}\) holds in (9). Our methods also suggest that the improved estimate (10) on the error is sharp.

  • As with the result (9) obtained in [8], the improved estimate (10) only requires the condition , i.e., it is not necessary that , but only that r lies in the classical Sobolev space and satisfies |r(z)|≤ ρ for some ρ < 1. Dropping the weighted L 2 condition on r corresponds to admitting rougher initial data q 0. For such data, the solution of the Cauchy problem is of a weaker nature, as discussed at the end of [8].

  • The new \({\overline {\partial }}\) method which is used to derive the estimate (10) affords a considerably less technical proof than previous results.

  • The method used to establish the estimate (10) is readily extended to derive a more detailed asymptotic expansion, beyond the leading term (see the remark at the end of the paper).

Given the reflection coefficient associated with initial data via the spectral transform \(\mathcal {R}\) for the Zakharov-Shabat operator \(\mathcal {L}\), the solution of the Cauchy problem for the nonlinear Schrödinger equation (1) may be described as follows. For full details, we again refer the reader to [17, Section 2]. Consider the following Riemann-Hilbert problem:

Riemann-Hilbert Problem 1

Given parameters \((x,t)\), find M = M(z) = M(z;x, t), a 2 × 2 matrix, satisfying the following conditions:

Analyticity

M is an analytic function of z in the domain \(\mathbb {C}\setminus \mathbb {R}\). Moreover, M has a continuous extension to the real axis from the upper (lower) half-plane denoted M +(z) (M −(z)) for \(z\in \mathbb {R}\).

Jump Condition

The boundary values satisfy the jump condition

(11)

where the jump matrix V M(z) is defined by

(12)

Normalization

There is a matrix M 1(x, t) such that

(13)

From the solution of this Riemann-Hilbert problem, one defines a function q(x, t) by

$$\displaystyle \begin{aligned} q(x,t) := 2 \mathrm{i} M_{1,12}(x,t). \end{aligned} $$
(14)

The fact of the matter is then that q(x, t) is the solution of the Cauchy problem (1)–(2).

Recent studies of the long-time behavior of the solution of the NLS initial-value problem (1)–(2) have involved the detailed analysis of the solution M to Riemann-Hilbert Problem 1. As regularity assumptions on the initial data q 0 are relaxed, this analysis becomes more involved, technically. The purpose of this manuscript is to carry out a complete analysis of the long-time asymptotic behavior of M under the assumption that (or really, ), as in [6], but via a \(\overline {\partial }\) approach which replaces technical harmonic analysis estimates involving Cauchy projection operators with very straightforward estimates involving some explicit double integrals.

The proof of Theorem 1.1 using the methodology of [13, 14] was originally obtained by the first two authors in 2008 [9]. Since then the technique has been used successfully to study many other related problems of large-time behavior for various integrable equations. In [2], the authors used the methods of [9] to analyze the stability of multi-dark-soliton solutions of (1). In [1], the method of [9] was used to confirm the soliton resolution conjecture for the focusing version of the NLS equation under generic conditions on the discrete spectrum. In [12], the large-time behavior of solutions of the derivative NLS equation was studied using \({\overline {\partial }}\) methods, and in [11] the same techniques were used to establish a form of the soliton resolution conjecture for this equation. Similar \({\overline {\partial }}\) methods more based on the original approach of [13, 14] have also been useful in studying some problems of nonlinear wave theory not necessarily in the realm of large time asymptotics, for instance [15], which deals with boundary-value problems for (1) in the semiclassical limit. Based on this continued interest in \({\overline {\partial }}\) methods, we decided to write this review paper containing all of the results and arguments of [9], some in a new form, as well as some additional expository material which we hope the reader might find helpful.

2 An Unorthodox Approach to the Corresponding Linear Problem

In order to motivate the \({\overline {\partial }}\) steepest descent method, we first consider the Cauchy problem for the linear equation corresponding to (1), namely

$$\displaystyle \begin{aligned} \mathrm{i}\frac{\partial q}{\partial t} +\frac{\partial^2 q}{\partial x^2}=0, \end{aligned} $$
(15)

with initial condition (2), where q 0 lies in the weighted Sobolev space defined by (7). By Fourier transform theory, if

(16)

is the Fourier transform of the initial data, then \(\hat {q}_0\), as a function of z, also lies in the same weighted Sobolev space, and the solution of the Cauchy problem is given in terms of \(\hat {q}_0\) by the integral

(17)

where θ(z;z 0) and z 0 are as defined in (12). It is worth noticing that this formula is exactly what arises from Riemann-Hilbert Problem 1 via the formula (14) if only the jump matrix V M(z) in (12) is replaced with the triangular form

(18)

in which case the solution of Riemann-Hilbert Problem 1 is explicitly given by

(19)

This shows that the reflection coefficient r(z) is a nonlinear analogue of (the complex conjugate of) the Fourier transform \(\hat {q}_0(z)\).

Assuming that z 0 = −x∕(4t) is held fixed, the method of stationary phase applies to deduce an asymptotic expansion of the integral in (17). The only point of stationary phase is z = z 0, and the classical formula of Stokes and Kelvin yields

$$\displaystyle \begin{aligned} \begin{array}{rcl} q(x,t)&\displaystyle =&\displaystyle \frac{1}{\pi}\sqrt{\frac{2\pi}{t|-2\theta''(z_0;z_0)|}}\hat{q}_0(z_0)\mathrm{e}^{-2\mathrm{i} t\theta(z_0;z_0)-\mathrm{i}\pi/4}+\mathcal{E}(x,t)\\ &\displaystyle =&\displaystyle t^{-1/2}\frac{\hat{q}_0(z_0)\mathrm{e}^{-\mathrm{i}\pi/4}}{2\sqrt{\pi}}\mathrm{e}^{\mathrm{i} x^2/(4t)}+\mathcal{E}(x,t), {} \end{array} \end{aligned} $$
(20)

where the error term \(\mathcal {E}(x,t)\) is of order t −3∕2 as t → +∞ under the best assumptions on \(\hat {q}_0\), assumptions that guarantee that the error has a complete asymptotic expansion in terms proportional via explicit oscillatory factors to descending half-integer powers of t. Deriving this expansion from first principles involves several steps, as follows.

  • One introduces a smooth partition of unity to separate contributions to the integral from points z close to z 0 and far from z 0.

  • One uses integration by parts to estimate the contributions from points z far from z 0. This requires having sufficiently many derivatives of \(\hat {q}_0(z)\), which corresponds to having sufficient decay of q 0(x).

  • One approximates \(\hat {q}_0(z)\) locally near z 0 by an analytic function with an accuracy related to the size of t and the number of terms of the expansion that are desired.

  • One uses Cauchy’s theorem to deform the path of integration for the approximating integrand to a diagonal path over the stationary phase point. The slope of the diagonal path produces the phase factor of e−iπ∕4, and the path integral of the leading term \(\hat {q}_0(z_0)\mathrm {e}^{-2\mathrm {i} t\theta (z;z_0)}\) in the local approximation of \(\hat {q}_0(z)\mathrm {e}^{-2\mathrm {i} t\theta (z;z_0)}\) is a Gaussian integral that produces the factor of \(\sqrt {\pi }\).

It is possible to implement all steps of this method assuming, say, that q 0 (and hence also \(\hat {q}_0\)) is a Schwartz-class function. However, as one reduces the regularity of q 0 it becomes impossible to obtain an expansion to all orders. More to the point, even in the presence of Schwartz-class regularity, the proof of the stationary phase expansion by the traditional methods outlined above is complicated, perhaps more so than necessary as we hope to convince the reader.
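By way of contrast, the leading-order formula (20) itself is easy to confirm numerically. The sketch below compares a brute-force quadrature of the integral (17) with the leading term of (20) for the sample transform \(\hat q_0(z)=\mathrm{e}^{-z^2}\); since the phase is needed explicitly, the sketch assumes θ(z;z 0) = 2z² − 4z 0z, a choice consistent with θ″(z 0;z 0) = 4 and with the factor e^{8t(u−z 0)v} appearing in (25) below.

```python
# Compare (17), evaluated by the composite trapezoid rule, with the leading
# term of the stationary phase formula (20).  Assumptions of this sketch:
# theta(z; z0) = 2*z**2 - 4*z0*z and q0hat(z) = exp(-z**2).
import numpy as np

def q0hat(z):
    return np.exp(-z**2)

def q_direct(t, z0):
    # q(x,t) = (1/pi) * integral of q0hat(z) exp(-2i t theta(z; z0)) dz
    z = np.linspace(-12.0, 12.0, 2_000_001)
    h = z[1] - z[0]
    f = q0hat(z) * np.exp(-2j * t * (2.0 * z**2 - 4.0 * z0 * z))
    return h * (f.sum() - 0.5 * (f[0] + f[-1])) / np.pi

def q_leading(t, z0):
    x = -4.0 * t * z0                       # so that z0 = -x/(4t) is fixed
    return (q0hat(z0) * np.exp(-1j * np.pi / 4) / (2.0 * np.sqrt(np.pi * t))
            * np.exp(1j * x**2 / (4.0 * t)))

z0 = 1.0
for t in (50.0, 200.0, 800.0):
    err = abs(q_direct(t, z0) - q_leading(t, z0))
    print(f"t = {t:5.0f}   |E(x,t)| = {err:.3e}   t^1.5 |E(x,t)| = {t**1.5 * err:.3f}")
```

The last column stabilizes as t grows, consistent with an error of order t −3∕2 for smooth, rapidly decaying \(\hat q_0\).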

To explain an alternative approach that bears fruit in the case of interest here, let Ω denote a simply-connected region in the complex plane with counter-clockwise oriented piecewise-smooth boundary ∂Ω. If f is differentiable on Ω (as a function of two real variables \(u= \operatorname {\mathrm {Re}}(z)\) and \(v= \operatorname {\mathrm {Im}}(z)\)) and extends continuously to ∂Ω, then it follows from Stokes’ theorem that

$$\displaystyle \begin{aligned} \oint_{\partial\Omega}f(u,v)\,\mathrm{d} z = \iint_\Omega 2\mathrm{i}{\overline{\partial}} f(u,v)\,\mathrm{d} A(u,v) {} \end{aligned} $$
(21)

where dA(u, v) denotes area measure in the plane and where \({\overline {\partial }}\) is the Cauchy-Riemann operator:

$$\displaystyle \begin{aligned} {\overline{\partial}}:=\frac{1}{2}\left(\frac{\partial}{\partial u}+\mathrm{i}\frac{\partial}{\partial v}\right),\quad z=u+\mathrm{i} v, {} \end{aligned} $$
(22)

which annihilates all analytic functions of z = u + iv. Now consider the diagram shown in Fig. 1. We define a function E(u, v) on Ω+ ∪ Ω− as follows:

$$\displaystyle \begin{aligned} \begin{array}{rcl} E(u,v):=&\displaystyle \cos{}(2\arg(u+\mathrm{i} v-z_0))\hat{q}_0(u) + \left(1-\cos{}(2\arg(u+\mathrm{i} v-z_0))\right)\hat{q}_0(z_0),\\ &\displaystyle u+\mathrm{i} v\in\Omega_+\cup\Omega_-. {} \end{array} \end{aligned} $$
(23)

Observe that:

  • On the boundary v = 0 (i.e., on the real z-axis), we have \(\cos {}(2\arg (u+\mathrm {i} v-z_0))\equiv 1\), so \(E(u,0)=\hat {q}_0(u)\).

  • On the boundary v = z 0 − u, we have \(\cos {}(2\arg (u+\mathrm {i} v-z_0))\equiv 0\), so \(E(u,z_0-u)=\hat {q}_0(z_0)\) which is independent of u.

The first point shows that E(u, v) is an extension of the function \(\hat {q}_0(z)\) from the real z-axis into the domain Ω+ ∪ Ω−. The second point shows that the extension evaluates to a constant on the diagonal part of the boundary of Ω+ ∪ Ω−. In the interior of Ω+ ∪ Ω−, E(u, v) inherits smoothness properties from \(\hat {q}_0(u)\). In particular, under our assumptions on q 0, we may apply Stokes’ theorem in the form (21) to the functions \(\pm E(u,v)\mathrm {e}^{-2\mathrm {i} t\theta (u+\mathrm {i} v;z_0)}\) on the domains Ω± and add up the results to obtain the formula

$$\displaystyle \begin{aligned} \begin{array}{rcl} q(x,t)&\displaystyle =&\displaystyle \frac{1}{\pi}\int_{z_0+\infty\mathrm{e}^{3\pi\mathrm{i}/4}}^{z_0+\infty\mathrm{e}^{-\mathrm{i}\pi/4}}\hat{q}_0(z_0)\mathrm{e}^{-2\mathrm{i} t\theta(z;z_0)}\,\mathrm{d} z\\ &\displaystyle &\displaystyle +\frac{1}{\pi}\iint_{\Omega_+-\Omega_-}2\mathrm{i}{\overline{\partial}}\left(E(u,v)\mathrm{e}^{-2\mathrm{i} t\theta(u+\mathrm{i} v;z_0)}\right)\,\mathrm{d} A(u,v). {} \end{array} \end{aligned} $$
(24)

The first term on the right-hand side originates from the diagonal boundary of Ω+ ∪ Ω− and because E is constant there it is an exact Gaussian integral evaluating to the explicit leading term on the right-hand side of (20). Therefore, the remaining term on the right-hand side of (24) is an exact double-integral representation of the error term \(\mathcal {E}(x,t)\) in the formula (20). Since our assumptions imply that \(\hat {q}_0\) is continuous, so that \(\hat {q}_0(z)\) is defined for all \(z\in \mathbb {R}\), the leading term in (20) certainly makes sense.

Fig. 1 The integration contour in (17) and the unbounded domains Ω+ and Ω− in the z = u + iv plane

To estimate the error term we will only use the fact that \(x q_0(x)\) is square-integrable, i.e., that \(\hat {q}_0\) lies in the (classical, unweighted) Sobolev space \(H^1(\mathbb {R})\). First note that since \(\mathrm {e}^{-2\mathrm {i} t\theta (z;z_0)}\) is an entire function of z, \({\overline {\partial }}\mathrm {e}^{-2\mathrm {i} t\theta (z;z_0)}\equiv 0\), so by the product rule it suffices to have suitable estimates of \({\overline {\partial }} E(u,v)\) for u + iv ∈ Ω±. Indeed,

$$\displaystyle \begin{aligned} \begin{aligned} &\left|\iint_{\Omega_\pm}2\mathrm{i}{\overline{\partial}}\left(E(u,v)\mathrm{e}^{-2\mathrm{i} t\theta(u+\mathrm{i} v;z_0)}\right)\,\mathrm{d} A(u,v)\right|\\ &\qquad \le 2\iint_{\Omega_\pm}|{\overline{\partial}} E(u,v)|\mathrm{e}^{2t\operatorname{\mathrm{Im}}(\theta(u+\mathrm{i} v;z_0))}\,\mathrm{d} A(u,v)\\ &\qquad = 2\iint_{\Omega_\pm}|{\overline{\partial}} E(u,v)|\mathrm{e}^{8t(u-z_0)v}\,\mathrm{d} A(u,v). \end{aligned} {} \end{aligned} $$
(25)

A direct computation using (22) gives

$$\displaystyle \begin{aligned} \begin{aligned} {\overline{\partial}} E(u,v)&={\overline{\partial}}\left[\hat{q}_0(z_0)+\cos{}(2\arg(u+\mathrm{i} v-z_0))\left(\hat{q}_0(u)-\hat{q}_0(z_0)\right)\right]\\ &=\cos{}(2\arg(u+\mathrm{i} v-z_0)){\overline{\partial}}\hat{q}_0(u)\\ &\quad + \left(\hat{q}_0(u)-\hat{q}_0(z_0)\right){\overline{\partial}}\cos{}(2\arg(u+\mathrm{i} v-z_0))\\ &=\frac{1}{2}\cos{}(2\arg(u+\mathrm{i} v-z_0))\hat{q}_0^{\prime}(u)\\ &\quad +\left(\hat{q}_0(u)-\hat{q}_0(z_0)\right){\overline{\partial}}\cos{}(2\arg(u+\mathrm{i} v-z_0)). \end{aligned} \end{aligned} $$
(26)

In polar coordinates (ρ, ϕ) centered at the point z 0 and defined by \(u=z_0+\rho \cos {}(\phi )\) and \(v=\rho \sin {}(\phi )\), the Cauchy-Riemann operator (22) takes the equivalent form

$$\displaystyle \begin{aligned} {\overline{\partial}} = \frac{\mathrm{e}^{\mathrm{i}\phi}}{2}\left(\frac{\partial}{\partial\rho}+\frac{\mathrm{i}}{\rho}\frac{\partial}{\partial\phi}\right), {} \end{aligned} $$
(27)

so, since \(\arg (u+\mathrm {i} v-z_0)=\phi \), we have

$$\displaystyle \begin{aligned} {\overline{\partial}}\cos{}(2\arg(u+\mathrm{i} v-z_0))=\frac{\mathrm{i}\mathrm{e}^{\mathrm{i}\phi}}{2\rho}\frac{\mathrm{d}}{\mathrm{d}\phi}\cos{}(2\phi)=-\frac{\mathrm{i}\mathrm{e}^{\mathrm{i}\phi}}{\rho}\sin{}(2\phi). \end{aligned} $$
(28)

Therefore we easily obtain the inequality

$$\displaystyle \begin{aligned} |{\overline{\partial}} E(u,v)|\le\frac{1}{2}|\hat{q}_0^{\prime}(u)| + \frac{|\hat{q}_0(u)-\hat{q}_0(z_0)|}{\sqrt{(u-z_0)^2+v^2}},\quad u+\mathrm{i} v\in\Omega_+\cup\Omega_-. {} \end{aligned} $$
(29)

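The inequality (29), and the extension formula (23) behind it, can be checked by finite differences; in the following sketch the data \(\hat q_0(u)=\mathrm{e}^{-u^2}\) and z 0 = 0 are arbitrary sample choices.

```python
# Finite-difference check of (23) and the bound (29) at a few points of
# Omega_+ (3 pi/4 < arg(z - z0) < pi) and Omega_- (-pi/4 < arg(z - z0) < 0).
import numpy as np

z0 = 0.0
q0hat = lambda u: np.exp(-u**2)
dq0hat = lambda u: -2.0 * u * np.exp(-u**2)

def E(u, v):
    c = np.cos(2.0 * np.angle((u - z0) + 1j * v))
    return c * q0hat(u) + (1.0 - c) * q0hat(z0)

def dbar_E(u, v, h=1e-6):
    # dbar = (d/du + i d/dv)/2, by centered differences
    return 0.5 * ((E(u + h, v) - E(u - h, v)) / (2.0 * h)
                  + 1j * (E(u, v + h) - E(u, v - h)) / (2.0 * h))

for (u, v) in [(-1.0, 0.3), (-2.5, 1.0), (0.8, -0.5), (2.0, -0.1)]:
    lhs = abs(dbar_E(u, v))
    rhs = 0.5 * abs(dq0hat(u)) + abs(q0hat(u) - q0hat(z0)) / np.hypot(u - z0, v)
    print(f"(u, v) = ({u:+.1f}, {v:+.1f})   |dbar E| = {lhs:.4f} <= {rhs:.4f}")
```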
Note that by the fundamental theorem of calculus and the Cauchy-Schwarz inequality,

(30)

so (29) implies that also

(31)

Therefore, using (31) in (25) gives

(32)

where

$$\displaystyle \begin{aligned} \begin{array}{rcl} I^\pm(x,t):=\iint_{\Omega_\pm} |\hat{q}_0^{\prime}(u)|\mathrm{e}^{8t(u-z_0)v}\,\mathrm{d} A(u,v) \quad \text{and}\\ \quad J^\pm(x,t):=\iint_{\Omega_\pm}\frac{\mathrm{e}^{8t(u-z_0)v}}{[(u-z_0)^2+v^2]^{1/4}}\,\mathrm{d} A(u,v). {} \end{array} \end{aligned} $$
(33)

The key point is that for t > 0, the exponential factors are bounded by 1 and decaying at infinity in Ω±. So, by iterated integration, Cauchy-Schwarz, and the change of variable w = t 1∕2(u − z 0),

(34)

In exactly the same way, we also get the same bound for I −(x, t). Note that K is an absolute constant. The integrals J ±(x, t) are independent of q 0 and by translation of z 0 to the origin and reflection through the origin, the integrals are also independent of x and are obviously equal. To calculate them we introduce rescaled polar coordinates by \(u=z_0+t^{-1/2}\rho \cos {}(\phi )\) and \(v=t^{-1/2}\rho \sin {}(\phi )\) to get

$$\displaystyle \begin{aligned} J^\pm(x,t)=Lt^{-3/4},\quad L:=\int_0^\infty\,\rho\mathrm{d}\rho\int_{3\pi/4}^\pi\,\mathrm{d}\phi \,\rho^{-1/2}\mathrm{e}^{8\rho^2\sin{}(\phi)\cos{}(\phi)} \end{aligned} $$
(35)

It is a calculus exercise to show that the above double integral is convergent and hence defines L as a second absolute constant.
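One way to carry out that exercise, and to put a number on L, is to reduce the ρ-integral in (35) to a Gamma function (for fixed ϕ one has 8 sin ϕ cos ϕ = −4|sin 2ϕ| < 0) and then integrate the remaining integrable singularity in ϕ numerically. The following sketch is only a numerical companion to the statement, not part of the proof.

```python
# Numerical evaluation of the absolute constant L defined in (35).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# reduction check:  int_0^inf rho^{1/2} exp(-c rho^2) d rho = Gamma(3/4)/(2 c^{3/4}),
# verified here at c = 1
lhs, _ = quad(lambda rho: np.sqrt(rho) * np.exp(-rho**2), 0.0, np.inf)
print("reduction check:", lhs, "vs", gamma(0.75) / 2.0)

# remaining phi-integral with c(phi) = 4 |sin(2 phi)|; the integrand has an
# integrable (pi - phi)^{-3/4}-type singularity at phi = pi
inner = lambda phi: gamma(0.75) / (2.0 * (4.0 * abs(np.sin(2.0 * phi)))**0.75)
L, _ = quad(inner, 3.0 * np.pi / 4.0, np.pi)
print("L =", L)
```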

It follows from these elementary calculations that if only \(x q_0(x)\) is square-integrable, then the error term \(\mathcal {E}(x,t)\) in (20) obeys the estimate

(36)

which decays as t → +∞ at exactly the same rate as in the claimed result for the nonlinear problem as formulated in Theorem 1.1. The same method can be used to obtain higher-order corrections under additional hypotheses of smoothness for the Fourier transform \(\hat {q}_0\). One simply needs to integrate by parts with respect to \(u= \operatorname {\mathrm {Re}}(z)\) in the double integral on the right-hand side of (24).

In the rest of the paper we will show that almost exactly the same elementary estimates suffice to prove the nonlinear analogue of this result, namely Theorem 1.1.

3 Proof of Theorem 1.1

We will prove Theorem 1.1 in several systematic steps. After some preliminary observations involving the jump matrix V M(z) in Riemann-Hilbert Problem 1 in Sects. 3.1 and 3.2, we shall see that the subsequent analysis of Riemann-Hilbert Problem 1 parallels our study of the associated linear problem detailed in Sect. 2. In particular we find natural analogues of the nonanalytic extension method (Sect. 3.3), of the Gaussian integral giving the leading term in the stationary phase formula (Sect. 3.4), and of the simple double integral estimates leading to the proof of its accuracy (Sect. 3.5). Finally, in Sect. 3.6 we assemble the ingredients to arrive at the formula (3) with the improved error estimate, completing the proof of Theorem 1.1.

3.1 Jump Matrix Factorization

The jump matrix V M(z) of Riemann-Hilbert Problem 1 defined in (12) can be factored in two different ways that are useful in different intervals of the jump contour as indicated:

$$\displaystyle \begin{aligned} {\mathbf{V}}_{\mathbf{M}}(z)=\begin{pmatrix}1&-\overline{r(z)}\mathrm{e}^{-2\mathrm{i} t\theta(z;z_0)}\\0 & 1\end{pmatrix}\begin{pmatrix}1 & 0\\r(z)\mathrm{e}^{2\mathrm{i} t\theta(z;z_0)} & 1\end{pmatrix},\quad z>z_0, {} \end{aligned} $$
(37)

and

$$\displaystyle \begin{aligned} {\mathbf{V}}_{\mathbf{M}}(z)=\begin{pmatrix}1 & 0\\\displaystyle\frac{r(z)\mathrm{e}^{2\mathrm{i} t\theta(z;z_0)}}{1-|r(z)|{}^2} & 1\end{pmatrix} (1-|r(z)|{}^2)^{\sigma_3}\begin{pmatrix}1 & \displaystyle -\frac{\overline{r(z)}\mathrm{e}^{-2\mathrm{i} t\theta(z;z_0)}}{1-|r(z)|{}^2}\\0 & 1\end{pmatrix},\quad z<z_0. {} \end{aligned} $$
(38)

The importance of these factorizations is that they provide an algebraic separation of the oscillatory exponential factors \(\mathrm {e}^{\pm 2\mathrm {i} t\theta (z;z_0)}\). Indeed, if the reflection coefficient r(z) is an analytic function of z, then in each case the left-most (right-most) factor has an analytic continuation into the lower (upper) half-plane near the indicated half-line that is exponentially decaying to the identity matrix as t → +∞ due to z 0 being a simple critical point of θ(z;z 0). This observation is the basis for the steepest descent method for Riemann-Hilbert problems as first formulated in [4]. In the more realistic case that r(z) is nowhere analytic, this analytic continuation method must be supplemented with careful approximation arguments that are quite detailed [8]. We will proceed differently in Sect. 3.3 below. But first we need to deal with the central diagonal factor in the factorization (38) to be used for z < z 0.
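The factorizations (37)–(38) themselves are elementary to verify; the following sketch multiplies out both factorizations for a randomly chosen value of r with |r| < 1 and a random real value of the phase 2tθ and confirms that they produce the same jump matrix.

```python
# Check that the right factorization (37) and the left factorization (38)
# multiply out to the same matrix.
import numpy as np

rng = np.random.default_rng(0)
r = 0.9 * rng.uniform() * np.exp(2j * np.pi * rng.uniform())   # |r| < 1
Ep = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))                # e^{2 i t theta}
Em = np.conj(Ep)
rb = np.conj(r)
d = 1.0 - abs(r)**2

V_37 = (np.array([[1.0, -rb * Em], [0.0, 1.0]])
        @ np.array([[1.0, 0.0], [r * Ep, 1.0]]))
V_38 = (np.array([[1.0, 0.0], [r * Ep / d, 1.0]])
        @ np.diag([d, 1.0 / d])
        @ np.array([[1.0, -rb * Em / d], [0.0, 1.0]]))

print(np.allclose(V_37, V_38))   # True: both equal
                                 # [[1 - |r|^2, -conj(r) e^{-2 i t theta}],
                                 #  [r e^{2 i t theta},              1  ]]
```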

3.2 Modification of the Diagonal Jump

We now show how the diagonal factor \((1-|r(z)|{ }^2)^{\sigma _3}\) in the jump matrix factorization (38) can be replaced with a constant diagonal matrix. Consider the complex scalar function defined by the formula

(39)

This function is important because according to the Plemelj formula, it satisfies the scalar jump conditions δ +(z;z 0) = δ −(z;z 0)(1 −|r(z)|2) for z < z 0 and δ +(z;z 0) = δ −(z;z 0) for z > z 0. Hence the diagonal matrix \(\delta (z;z_0)^{\sigma _3}\) is typically used in steepest descent theory to deal with the diagonal factor in (38). However, δ(z;z 0) has a mild singularity at z = z 0:

$$\displaystyle \begin{aligned} \delta(z;z_0)=K(z-z_0)^{\mathrm{i}\nu(z_0)}(1+o(1)),\quad z\to z_0,\quad K=K(z_0)=\text{constant}, \end{aligned} $$
(40)

where ν(z 0) is defined in (4) and the power function is interpreted as the principal branch. The use of δ(z;z 0) introduces this singularity unnecessarily into the Riemann-Hilbert analysis. In our approach we will therefore use a related function:

$$\displaystyle \begin{aligned} f(z;z_0):=c(z_0)\delta(z;z_0)(z-z_0)^{-\mathrm{i}\nu(z_0)}, {} \end{aligned} $$
(41)

where the constant c(z 0) is defined by

$$\displaystyle \begin{aligned} \begin{aligned} c(z_0):=&\exp\left(-\frac{1}{2\pi\mathrm{i}}\left[\int_{-\infty}^{z_0-1}\frac{\ln(1-|r(s)|{}^2)}{s-z_0}\,\mathrm{d} s\right.\right.\\ &\left.\left.+\int_{z_0-1}^{z_0}\frac{\ln(1-|r(s)|{}^2)-\ln(1-|r(z_0)|{}^2)}{s-z_0}\,\mathrm{d} s\right]\right)\\ =&\exp\left(\frac{1}{2\pi\mathrm{i}}\int_{-\infty}^{z_0}\ln(z_0-s)\,\mathrm{d}\ln(1-|r(s)|{}^2)\right). \end{aligned} {} \end{aligned} $$
(42)

The function f(z;z 0) has numerous useful properties that we summarize here.

Lemma 3.1 (Properties of f(z;z 0))

Suppose that \(r\in H^1(\mathbb {R})\) and there exists ρ < 1 such that |r(z)|≤ ρ holds for all \(z\in \mathbb {R}\) (conditions that follow from the assumptions on q 0 in Theorem 1.1). Then

  • The functions f(z;z 0)±1 are well-defined and analytic in z for \(\arg (z-z_0)\in (-\pi ,\pi )\).

  • The functions f(z;z 0)±1 are uniformly bounded, independently of z and z 0:

    (43)
  • The function f(z;z 0) satisfies the following asymptotic condition:

    (44)
  • The functions f(z;z 0)±2 are Hölder continuous with exponent 1∕2. In particular, f(z;z 0)±2 → 1 as z → z 0 and there is a constant K = K(ρ) > 0 such that |f(z;z 0)±2 − 1|≤ K|z − z 0|1∕2 holds whenever \(\arg (z-z_0)\in (-\pi ,\pi )\).

  • The continuous boundary values f ±(z;z 0) taken by f(z;z 0) on the real axis for z < z 0 from ±Im(z) > 0 satisfy the jump condition

    $$\displaystyle \begin{aligned} f_+(z;z_0)=f_-(z;z_0)\frac{1-|r(z)|{}^2}{1-|r(z_0)|{}^2},\quad z<z_0. \end{aligned} $$
    (45)

Proof

The assumptions imply in particular that \(\ln (1-|r(\cdot )|{ }^2)\) is integrable, so for z in a small neighborhood of each point disjoint from the integration contour, the integral in (39) is absolutely convergent and so δ(z;z 0) and δ(z;z 0)−1 are analytic functions of z on that neighborhood. The same argument shows that the first integral in the exponent of the expression (42) for c(z 0) is convergent. Since \(r\in H^1(\mathbb {R})\) implies that r(⋅) is Hölder continuous with exponent 1∕2, the condition |r(⋅)|≤ ρ < 1 further implies that \(\ln (1-|r(s)|{ }^2)\) is also Hölder continuous with exponent 1∕2, from which it follows that the second integral in the exponent of the expression (42) is also convergent. Therefore c(z 0) exists, and clearly |c(z 0)| = 1. Since the principal branch of \((z-z_0)^{\mp \mathrm {i}\nu (z_0)}\) is analytic for \(\arg (z-z_0)\in (-\pi ,\pi )\), the analyticity of f(z;z 0)±1 in the same domain follows. This proves the first statement.

In [8, Proposition 2.12] it is asserted that under the hypothesis |r(z)|≤ ρ < 1, the function δ(z;z 0) defined by (39) satisfies the uniform estimates (1 − ρ 2)1∕2 ≤|δ(z;z 0)|±1 ≤ (1 − ρ 2)−1∕2 whenever \(\arg (z-z_0)\in (-\pi ,\pi )\). If \(\arg (z-z_0)=0\), then obviously |δ(z;z 0)| = 1, so it remains to prove the estimates hold for \( \operatorname {\mathrm {Im}}(z)\neq 0\). Following [12], since \(\ln (1-\rho ^2)\le \ln (1-|r(s)|{ }^2)\le 0\), if \(u= \operatorname {\mathrm {Re}}(z)\) and \(v= \operatorname {\mathrm {Im}}(z)\) we have \( \operatorname {\mathrm {Im}}((s-z)^{-1})=v/((s-u)^2+v^2)\), so assuming v > 0,

$$\displaystyle \begin{aligned} \exp\left(\frac{v\ln(1-\rho^2)}{2\pi}\int_{-\infty}^{z_0}\frac{\mathrm{d} s}{(s-u)^2+v^2}\right)\le |\delta(u+\mathrm{i} v;z_0)|. \end{aligned} $$
(46)

Bounding the left-hand side below by extending the integration to the whole real line (using \(v\ln (1-\rho ^2)<0\)) gives the lower bound (1 − ρ 2)1∕2 ≤|δ(z;z 0)|, and by taking reciprocals, the upper bound |δ(z;z 0)|−1 ≤ (1 − ρ 2)−1∕2 for \( \operatorname {\mathrm {Im}}(z)>0\). The corresponding result for \( \operatorname {\mathrm {Im}}(z)<0\) follows by the exact symmetry \(\delta (\bar {z};z_0)^{-1}=\overline {\delta (z;z_0)}\). Combining these bounds with |c(z 0)| = 1 and the elementary inequalities \((1-\rho ^2)^{1/2}\le (1-|r(z_0)|{ }^2)^{1/2}=\mathrm {e}^{-\pi \nu (z_0)}\le |(z-z_0)^{\mathrm {i}\nu (z_0)}|\le \mathrm {e}^{\pi \nu (z_0)}=(1-|r(z_0)|{ }^2)^{-1/2}\le (1-\rho ^2)^{-1/2}\) holding for \(\arg (z-z_0)\in (-\pi ,\pi )\) then proves the second statement.

Since \(\ln (1-|r(\cdot )|{ }^2)\) is integrable, from (39) a dominated convergence argument shows that δ(z;z 0) → 1 as z →∞, provided only that the limit is taken in such a way that for some given 𝜖 > 0, dist(z, (−∞, z 0]) ≥ 𝜖. Combining this fact with (41) proves the third statement.

Analyticity implies Hölder continuity, so provided z is bounded away from the half-line (−∞, z 0], Hölder-1∕2 continuity of f(z;z 0)±2 is obvious. But, since \(\ln (1-|r(\cdot )|{ }^2)\) is Hölder continuous on \(\mathbb {R}\) with exponent 1∕2, by the Plemelj-Privalov theorem [16, §19] and a related classical result [16, §22], the functions δ(z;z 0)±1 are uniformly Hölder continuous with exponent 1∕2 in any neighborhood of the integration contour except for the endpoint z = z 0, and hence the same is true for the functions f(z;z 0)±2. However, the latter functions are better-behaved near z = z 0. To see this, note that since

(47)

we have from (39) and (41) that

$$\displaystyle \begin{aligned} f(z;z_0)^{\pm 2}&= c(z_0)^{\pm 2}(z-(z_0-1))^{\mp 2\mathrm{i}\nu(z_0)}\exp\left(\pm\frac{1}{\pi\mathrm{i}}\int_{-\infty}^{z_0-1}\frac{\ln(1-|r(s)|{}^2)}{s-z}\,\mathrm{d} s\right)\\ &\quad \cdot\exp\left(\pm\frac{1}{\pi\mathrm{i}}\int_{z_0-1}^{+\infty}\frac{h(s)\,\mathrm{d} s}{s-z}\right) {} \end{aligned} $$
(48)

where \(h(s):=\ln (1-|r(s)|{ }^2)-\ln (1-|r(z_0)|{ }^2)\) for s < z 0 and h(s) := 0 for s ≥ z 0. As the first three factors are analytic at z = z 0 while h(s) is Hölder continuous with exponent 1∕2 in a neighborhood of s = z 0, the same arguments cited above apply and yield the desired Hölder continuity of f(z;z 0)±2 near z = z 0. It only remains to show that f(z 0;z 0)±2 = 1, but this follows immediately from (42) and (48). This proves the fourth statement.

Finally, the fifth statement follows from the definition (41) of f(z;z 0) and the jump condition δ +(z;z 0) = δ −(z;z 0)(1 −|r(z)|2) for z < z 0. □
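The bounds appearing in this proof are easy to observe numerically. The sketch below assumes that δ(z;z 0) is given by the Cauchy-type integral exp((2πi)^{−1}∫_{−∞}^{z 0} ln(1 −|r(s)|²)(s − z)^{−1} ds), a representation consistent with the jump condition and with (46) (the definition (39) is authoritative), and it uses a hypothetical reflection coefficient with ρ = 0.8.

```python
# Check the bounds (1 - rho^2)^{1/2} <= |delta(z; z0)| <= (1 - rho^2)^{-1/2}
# at a few points off the real axis, for a sample r with sup |r| = rho = 0.8.
import numpy as np
from scipy.integrate import quad

r = lambda s: 0.8 / np.cosh(s) * np.exp(1j * s)
rho, z0 = 0.8, 0.3

def delta(z):
    g = lambda s: np.log(1.0 - abs(r(s))**2)
    re, _ = quad(lambda s: (g(s) / (s - z)).real, -np.inf, z0, limit=400)
    im, _ = quad(lambda s: (g(s) / (s - z)).imag, -np.inf, z0, limit=400)
    return np.exp((re + 1j * im) / (2j * np.pi))

lower, upper = (1.0 - rho**2)**0.5, (1.0 - rho**2)**-0.5
for z in (0.1 + 0.5j, -2.0 + 0.7j, 1.0 - 3.0j, 5.0 + 0.4j):
    print(f"z = {z}:  |delta| = {abs(delta(z)):.4f}  "
          f"(bounds {lower:.4f}, {upper:.4f})")
```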

Using the diagonal matrix \(f(z;z_0)^{\sigma _3}\) to conjugate the unknown M(z) of Riemann-Hilbert Problem 1 by introducing

(49)

where

$$\displaystyle \begin{aligned} \omega(z_0):=\arg(r(z_0)), {} \end{aligned} $$
(50)

it is easy to check that N(z) satisfies several conditions explicitly related to those of M(z) according to Riemann-Hilbert Problem 1. Indeed, N(z) must be a solution of the following equivalent problem.

Riemann-Hilbert Problem 2

Given parameters \((x,t)\), find N = N(z) = N(z;x, t), a 2 × 2 matrix, satisfying the following conditions:

Analyticity

N is an analytic function of z in the domain \(\mathbb {C}\setminus \mathbb {R}\). Moreover, N has a continuous extension to the real axis from the upper (lower) half-plane denoted N +(z) (N −(z)) for \(z\in \mathbb {R}\).

Jump Condition

The boundary values satisfy the jump condition

(51)

where the jump matrix V N(z) may be written in the alternate forms

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle {\mathbf{V}}_{\mathbf{N}}(z)=\begin{pmatrix}1 &\displaystyle -f(z;z_0)^{2}\overline{r(z)}\mathrm{e}^{\mathrm{i}\omega(z_0)}\mathrm{e}^{-2\mathrm{i} t[\theta(z;z_0)-\theta(z_0;z_0)]}\\0 &\displaystyle 1\end{pmatrix}\\ &\displaystyle &\displaystyle \quad \qquad \qquad \cdot \begin{pmatrix}1 &\displaystyle 0\\ f(z;z_0)^{-2}r(z)\mathrm{e}^{-\mathrm{i}\omega(z_0)}\mathrm{e}^{2\mathrm{i} t[\theta(z;z_0)-\theta(z_0;z_0)]} &\displaystyle 1\end{pmatrix},\quad z>z_0, \end{array} \end{aligned} $$
(52)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle {\mathbf{V}}_{\mathbf{N}}(z):=\begin{pmatrix}1 &\displaystyle 0 \\\displaystyle \frac{f_-(z;z_0)^{-2}r(z)\mathrm{e}^{-\mathrm{i}\omega(z_0)}\mathrm{e}^{2\mathrm{i} t[\theta(z;z_0)-\theta(z_0;z_0)]}}{1-|r(z)|{}^2} &\displaystyle 1\end{pmatrix}\\ &\displaystyle &\displaystyle \quad \cdot(1-|r(z_0)|{}^2)^{\sigma_3}\begin{pmatrix}1 &\displaystyle \displaystyle -\frac{f_+(z;z_0)^{2}\overline{r(z)}\mathrm{e}^{\mathrm{i}\omega(z_0)}\mathrm{e}^{-2\mathrm{i} t[\theta(z;z_0)-\theta(z_0;z_0)]}}{1-|r(z)|{}^2}\\0 &\displaystyle 1\end{pmatrix},\quad z<z_0, \end{array} \end{aligned} $$
(53)

where f +(z;z 0) (f −(z;z 0)) is the boundary value taken by f(z;z 0) from the upper (lower) half-plane.

Normalization

There is a matrix N 1(x, t) such that

(54)

Note that the matrix coefficient N 1(x, t) is necessarily related to the coefficient M 1(x, t) in Riemann-Hilbert Problem 1 by a diagonal conjugation:

$$\displaystyle \begin{aligned} {\mathbf{M}}_1(x,t)=\mathrm{e}^{-\mathrm{i}\omega(z_0)\sigma_3/2}\mathrm{e}^{-\mathrm{i} t\theta(z_0;z_0)\sigma_3}c(z_0)^{-\sigma_3}{\mathbf{N}}_1(x,t)c(z_0)^{\sigma_3}\mathrm{e}^{\mathrm{i} t\theta(z_0;z_0)\sigma_3}\mathrm{e}^{\mathrm{i}\omega(z_0)\sigma_3/2}. \end{aligned} $$
(55)

Therefore, the reconstruction formula (14) can be written in terms of N 1(x, t) as

$$\displaystyle \begin{aligned} q(x,t):=2\mathrm{i} \mathrm{e}^{-\mathrm{i}\omega(z_0)}\mathrm{e}^{-2\mathrm{i} t\theta(z_0;z_0)}c(z_0)^{-2}N_{1,12}(x,t). {} \end{aligned} $$
(56)

The net effect of this step is therefore to replace the non-constant diagonal central factor in (38) with its constant value at z = z 0 and to introduce power-law asymptotics at z = ∞ at the cost of slight modifications of the left-most and right-most factors in (37)–(38). In the formula (49) we have also taken the opportunity to conjugate off the constant value of θ(z;z 0) and the phase of r(z) at the critical point z = z 0.

3.3 Nonanalytic Extensions and \({\overline {\partial }}\) Steepest Descent

The key to the steepest descent method, both in its classical analytic framework and in the \({\overline {\partial }}\) setting, is to get the oscillatory factors \(\mathrm {e}^{\pm 2\mathrm {i} t\theta (z;z_0)}\) off the real axis and into appropriate sectors of the complex z-plane where they decay as t → +∞. We will accomplish this by exactly the same means as in the linear case, namely by defining non-analytic extensions of the non-oscillatory coefficients of \(\mathrm {e}^{\pm 2\mathrm {i} t\theta (z;z_0)}\) in the left-most and right-most jump matrix factors in (37)–(38) by a slight generalization of the formula (23). In reference to the diagram in Fig. 2, we define sectors

$$\displaystyle \begin{aligned} \begin{aligned} \Omega_1:&\quad 0<\arg(z-z_0)<\frac{1}{4}\pi\\ \Omega_2:&\quad \frac{1}{4}\pi<\arg(z-z_0)<\frac{3}{4}\pi\\ \Omega_3:&\quad \frac{3}{4}\pi<\arg(z-z_0)<\pi\\ \Omega_4:&\quad -\pi<\arg(z-z_0)<-\frac{3}{4}\pi\\ \Omega_5:&\quad -\frac{3}{4}\pi<\arg(z-z_0)<-\frac{1}{4}\pi\\ \Omega_6:&\quad -\frac{1}{4}\pi<\arg(z-z_0)<0. \end{aligned} \end{aligned} $$
(57)

Note that Ω3 = Ω+ and Ω6 = Ω− in reference to Fig. 1. Now we define extensions on the domains shaded in Fig. 2 by following a very similar approach as in Sect. 2:

(58)

It is easy to check that:

  • E 1(u, v) evaluates to \(f(z;z_0)^{-2}r(z)\mathrm {e}^{-\mathrm {i}\omega (z_0)}\) for z = u on the real boundary of Ω1.

  • E 3(u, v) evaluates to \(-f_+(z;z_0)^2\overline {r(z)}\mathrm {e}^{\mathrm {i}\omega (z_0)}/(1-|r(z)|{ }^2)\) for z = u on the real boundary of Ω3.

  • E 4(u, v) evaluates to \(f_-(z;z_0)^{-2}r(z)\mathrm {e}^{-\mathrm {i}\omega (z_0)}/(1-|r(z)|{ }^2)\) for z = u on the real boundary of Ω4.

  • E 6(u, v) evaluates to \(-f(z;z_0)^2\overline {r(z)}\mathrm {e}^{\mathrm {i}\omega (z_0)}\) for z = u on the real boundary of Ω6.

Thus exactly as in Sect. 2 these formulæ represent extensions of their values on the real sector boundaries into the complex plane that become constant on the diagonal sector boundaries (see (60) below), with the constant chosen in each case to ensure continuity of the extension along the interior boundary of each sector. The only essential difference between the extension formulæ (58) and the formula (23) from Sect. 2 is the way that the factors f(z;z 0)±2 are treated differently from the factors involving r(z); the reason for using f(u + iv;z 0)±2 in (58) rather than f(u;z 0)±2 will become clearer in Sect. 3.5 when we compute \({\overline {\partial }} E_j(u,v)\), j = 1, 3, 4, 6, and take advantage of the fact (see Lemma 3.1) that \({\overline {\partial }} f(u+\mathrm {i} v;z_0)^{\pm 2}\equiv 0\) in the interior of each sector.

Fig. 2 The jump contour in Riemann-Hilbert Problem 2 and the sectors Ωj, j = 1, …, 6 in the z = u + iv plane

We use the extensions to “open lenses” about the intervals z < z 0 and z > z 0 by making another substitution:

$$\displaystyle \begin{aligned} \mathbf{O}(u,v;x,t):=\begin{cases} \mathbf{N}(z;x,t)\begin{pmatrix}1&0\\E_1(u,v)\mathrm{e}^{2\mathrm{i} t[\theta(u+\mathrm{i} v;z_0)-\theta(z_0;z_0)]} & 1\end{pmatrix}^{-1},\quad z=u+\mathrm{i} v\in\Omega_1\\ \mathbf{N}(z;x,t),\qquad \qquad \qquad \qquad \qquad \qquad \quad \qquad \qquad z=u+\mathrm{i} v\in\Omega_2\\ \mathbf{N}(z;x,t)\begin{pmatrix}1&E_3(u,v)\mathrm{e}^{-2\mathrm{i} t[\theta(u+\mathrm{i} v;z_0)-\theta(z_0;z_0)]} \\0 & 1\end{pmatrix}^{-1},\quad z=u+\mathrm{i} v\in\Omega_3\\ \mathbf{N}(z;x,t)\begin{pmatrix}1&0\\E_4(u,v)\mathrm{e}^{2\mathrm{i} t[\theta(u+\mathrm{i} v;z_0)-\theta(z_0;z_0)]} & 1\end{pmatrix},\ \ \qquad z=u+\mathrm{i} v\in\Omega_4\\ \mathbf{N}(z;x,t),\qquad \qquad \qquad \qquad \qquad \qquad \quad \qquad \qquad z=u+\mathrm{i} v\in\Omega_5\\ \mathbf{N}(z;x,t)\begin{pmatrix}1&E_6(u,v)\mathrm{e}^{-2\mathrm{i} t[\theta(u+\mathrm{i} v;z_0)-\theta(z_0;z_0)]} \\ 0 & 1\end{pmatrix},\qquad z=u+\mathrm{i} v\in\Omega_6. \end{cases} {} \end{aligned} $$
(59)

Our notation O(u, v;x, t) reflects the viewpoint that unlike N(z;x, t), z = u + iv, O(u, v;x, t) is not a piecewise-analytic function in the complex plane due to the non-analytic extensions E j(u, v), j = 1, 3, 4, 6. The exponential factors in (59) all have modulus less than 1 and decay exponentially to zero as t → +∞ pointwise in the interior of each of the indicated sectors, a fact that suggests that (59) is a near-identity transformation in the limit t → +∞. We also have the following property.

Lemma 3.2 (Relation Between N and O for Large z)

Let \((x,t)\) be fixed, and suppose that \(r\in H^1(\mathbb {R})\) and that there exists a constant ρ < 1 such that |r(z)|≤ ρ holds for all \(z\in \mathbb {R}\) (conditions that hold under the assumptions of Theorem 1.1). Then \(\mathbf {O}(u,v;x,t)=\mathbf {N}(z;x,t)(\mathbb {I}+o(1))\) holds as z = u + iv →∞, where the decay of the error term is uniform with respect to direction in each sector Ω j, j = 1, …, 6.

Proof

The exponential factors in (59) also decay as z = u + iv →∞ provided that |v|→∞. Since \(r\in H^1(\mathbb {R})\) means that \((1+|\cdot |)\hat {r}(\cdot )\) is square-integrable where \(\hat {r}\) denotes the Fourier transform of r, the Cauchy-Schwarz inequality implies that also \(\hat {r}\in L^1(\mathbb {R})\). Hence by the Riemann-Lebesgue Lemma, r(u) is bounded, continuous, and tends to zero as u →∞. As 1 −|r(u)|2 ≥ 1 − ρ 2 > 0, the same properties hold for r(u)∕(1 −|r(u)|2). Since the hypotheses of Lemma 3.1 hold, f(u + iv;z 0)±2 are bounded functions, so the desired result follows from using the extension formulæ (58) in (59). □

Despite the non-analyticity of the extensions, the above proof shows also that each of the extensions E j(u, v), j = 1, 3, 4, 6, is continuous on the relevant sector and therefore O(u, v;x, t) is a piecewise-continuous function of (u, v) with jump discontinuities across the sector boundaries. We address these jump discontinuities next.

3.4 The Isomonodromy Problem of Its

Although O(u, v;x, t) is not analytic in the sectors shaded in Fig. 2 for essentially the same reason that the double integral error term in (24) does not vanish identically, the fact that the extensions E j(u, v), j = 1, 3, 4, 6, evaluate to constants on the diagonals:

$$\displaystyle \begin{aligned} \begin{aligned} E_1(u,u-z_0)=|r(z_0)|\quad &\text{and}\quad E_6(u,z_0-u)=-|r(z_0)|,\quad u>z_0,\\ E_3(u,z_0-u)=-\frac{|r(z_0)|}{1-|r(z_0)|{}^2}\quad &\text{and}\quad E_4(u,u-z_0)=\frac{|r(z_0)|}{1-|r(z_0)|{}^2},\ \ u<z_0, \end{aligned} \end{aligned} $$
(60)

implies that if we introduce the recentered and rescaled independent variable ζ := 2t 1∕2(z − z 0), the jump conditions satisfied by O(u, v;x, t) across the sector boundaries are exactly the same as those satisfied by the matrix function P(ζ;|r(z 0)|) solving the following Riemann-Hilbert problem.

Riemann-Hilbert Problem 3

Let m ∈ [0, 1) be a parameter, and seek a 2 × 2 matrix function P = P(ζ) = P(ζ;m) with the following properties:

Analyticity P(ζ) is an analytic function of ζ in the sectors \(|\arg (\zeta )|<\tfrac {1}{4}\pi \), \(\tfrac {1}{4}\pi <\pm \arg (\zeta )<\tfrac {3}{4}\pi \), and \(\tfrac {3}{4}\pi <\pm \arg (\zeta )<\pi \). It admits a continuous extension from each of these five sectors to its boundary.

Jump Conditions Denoting by P +(ζ) (resp., P (ζ)) the boundary value taken on any one of the rays of the jump contour Σ P from the left (resp., right) according to the orientation shown in Fig. 3, the boundary values are related by P +(ζ;m) = P (ζ;m)V P(ζ;m), where the jump matrix V P(ζ;m) is defined on the five rays of Σ P by

$$\displaystyle \begin{aligned} {\mathbf{V}}_{\mathbf{P}}(\zeta;m):=\begin{cases} \displaystyle \begin{pmatrix}1&0\\m\mathrm{e}^{\mathrm{i}\zeta^2} & 1\end{pmatrix},&\quad \arg(\zeta)=\frac{1}{4}\pi\\ \displaystyle \begin{pmatrix}1&-m\mathrm{e}^{-\mathrm{i}\zeta^2}\\0 & 1\end{pmatrix},&\quad \arg(\zeta)=-\frac{1}{4}\pi\\ \displaystyle \begin{pmatrix}1 & \displaystyle -\frac{m\mathrm{e}^{-\mathrm{i}\zeta^2}}{1-m^2}\\0 & 1\end{pmatrix},&\quad \arg(\zeta)=\frac{3}{4}\pi\\ \displaystyle\begin{pmatrix}1 & 0\\\displaystyle\frac{m\mathrm{e}^{\mathrm{i}\zeta^2}}{1-m^2}&1\end{pmatrix},&\quad \arg(\zeta)=-\frac{3}{4}\pi\\ (1-m^2)^{\sigma_3},&\quad \arg(-\zeta)=0. \end{cases} {} \end{aligned} $$
(61)
Fig. 3 The jump contour ΣP and jump matrix V P(ζ;m) for Riemann-Hilbert Problem 3

Normalization as ζ →∞.

This Riemann-Hilbert problem is essentially the isomonodromy problem identified by Its [10], and it is the analogue in the nonlinear setting of the Gaussian integral that is the leading term of the stationary phase expansion (24) in the linear case. Although the jump conditions for O(u, v;x, t) correspond exactly to those of P(ζ;|r(z 0)|), the scaling z↦ζ = 2t 1∕2(z − z 0) introduces an extra factor into the asymptotics as z →∞; the fact of the matter is that the matrix \((2t^{1/2})^{\mathrm {i}\nu (z_0)\sigma _3}\mathbf {O}(u,v;x,t)\) satisfies the normalization condition of P(ζ;|r(z 0)|), and the constant pre-factor has no effect on the jump conditions. Hence in Sect. 3.5 below we shall use the latter as a parametrix for the former.

However, we first develop the explicit solution of Riemann-Hilbert Problem 3. The first step is to consider the related unknown \(\mathbf {U}(\zeta ;m):=\mathbf {P}(\zeta ;m)\mathrm {e}^{-\mathrm {i}\zeta ^2\sigma _3/2}\) and observe from the conditions of Riemann-Hilbert Problem 3 that U(ζ;m) is analytic exactly in the same five sectors where P(ζ;m) is, and that it satisfies jump conditions of exactly the form (61) except that the factors \(\mathrm {e}^{\pm \mathrm {i}\zeta ^2}\) are everywhere replaced by 1; in other words, the jump matrix for U(ζ;m) on each jump ray is constant along the ray. It follows that the ζ-derivative U′(ζ;m) satisfies the same “raywise constant” jump conditions as does U(ζ;m) itself. Then, since it is easy to prove by Liouville’s theorem that any solution P(ζ;m) of Riemann-Hilbert Problem 3 has unit determinant, it follows that U(ζ;m) is invertible and a calculation shows that the function U′(ζ;m)U(ζ;m)−1 is continuous and hence by Morera’s theorem analytic in the whole ζ-plane possibly excepting ζ = 0. We will assume analyticity at the origin as well and show later that this is consistent. As an entire function of ζ, the product U′(ζ;m)U(ζ;m)−1 is potentially determined by its asymptotic behavior as ζ →∞. Assuming further that the normalization condition in Riemann-Hilbert Problem 3 means both that for some matrix coefficient P 1(m) to be determined,

(62)

hold as ζ →∞, such as would arise from term-by-term differentiation, it follows also that

(63)

as ζ →∞. Therefore the entire function U′(ζ;m)U(ζ;m)−1 is determined by Liouville’s theorem to be a linear polynomial:

$$\displaystyle \begin{aligned} \mathbf{U}'(\zeta;m)\mathbf{U}(\zeta;m)^{-1}=-\mathrm{i}\zeta\sigma_3 +\mathrm{i}[\sigma_3,{\mathbf{P}}_1(m)], \end{aligned} $$
(64)

where [A, B] := AB −BA is the matrix commutator. In other words, U(ζ;m) satisfies the first-order system of linear differential equations:

$$\displaystyle \begin{aligned} \frac{\mathrm{d}\mathbf{U}}{\mathrm{d}\zeta}(\zeta;m)=\begin{pmatrix}-\mathrm{i}\zeta & 2\mathrm{i} P_{1,12}(m)\\-2\mathrm{i} P_{1,21}(m) & \mathrm{i}\zeta\end{pmatrix}\mathbf{U}(\zeta;m). {} \end{aligned} $$
(65)

Now, another easy consequence of Liouville’s theorem is that there is at most one solution of Riemann-Hilbert Problem 3. Using the fact that m ∈ [0, 1), it is not difficult to show that if P(ζ;m) is a solution of Riemann-Hilbert Problem 3, then so is

$$\displaystyle \begin{aligned} \sigma_1\overline{\mathbf{P}(\overline{\zeta};m)}\sigma_1,\quad \text{where}\quad \sigma_1:=\begin{pmatrix}0&1\\1&0\end{pmatrix}, \end{aligned} $$
(66)

so by uniqueness it follows that \(\mathbf {P}(\zeta ;m)=\sigma _1\overline {\mathbf {P}(\overline {\zeta };m)}\sigma _1\). Combining this symmetry with the first expansion in (62) shows that \(P_{1,21}(m)=\overline {P_{1,12}(m)}\), so the differential equations can be written in the form

$$\displaystyle \begin{aligned} \frac{\mathrm{d}\mathbf{U}}{\mathrm{d}\zeta}(\zeta;m)=\begin{pmatrix}-\mathrm{i}\zeta & \beta\\\overline{\beta} & \mathrm{i}\zeta\end{pmatrix}\mathbf{U}(\zeta;m),\quad \beta=\beta(m):=2\mathrm{i} P_{1,12}(m). {} \end{aligned} $$
(67)

The constant β = β(m) is unknown, but if it is considered as a parameter, then eliminating the second row shows that the elements U 1j, j = 1, 2, of the first row satisfy Weber’s equation for parabolic cylinder functions in the form:

$$\displaystyle \begin{aligned} \frac{\mathrm{d}^2 U_{1j}}{\mathrm{d} y^2}-\left(\frac{1}{4}y^2+a\right)U_{1j}=0,\quad a:=\frac{1}{2}(1+\mathrm{i}|\beta|{}^2),\quad y:=\sqrt{2}\mathrm{e}^{-\mathrm{i}\pi/4}\zeta,\quad j=1,2. {} \end{aligned} $$
(68)

The solutions of this equation are well-documented in the Digital Library of Mathematical Functions [18, §12]. Equation (68) has particular solutions denoted U(a, ±y) and U(−a, ±iy), where U(⋅, ⋅) is a special function with well-known integral representations, asymptotic expansions, and connection formulæ.
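As a small sanity check of this step, one can confirm numerically that U(a, y) solves Weber’s equation (68) for complex values of the parameter; the sketch below uses mpmath’s pcfu (its implementation of the function U(a, z) of [18, §12]) with the arbitrary sample value |β|² = 1 and an argument y with Re(y) > 0.

```python
# Verify numerically that U(a, y) satisfies U'' - (y^2/4 + a) U = 0, i.e.
# Weber's equation (68), for a = (1 + i)/2 and y = sqrt(2) e^{-i pi/4} zeta.
import mpmath as mp

mp.mp.dps = 30
a = mp.mpc(0.5, 0.5)                 # a = (1 + i |beta|^2)/2 with |beta|^2 = 1
zeta = mp.mpf(2.3)
y = mp.sqrt(2) * mp.exp(-1j * mp.pi / 4) * zeta

U = lambda t: mp.pcfu(a, t)
residual = mp.diff(U, y, 2) - (y**2 / 4 + a) * U(y)
print("Weber residual:", mp.nstr(abs(residual), 5))   # ~ 0 to working precision
```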

The second step is to represent the elements U 1j as linear combinations of a fundamental pair of so-called numerically satisfactory solutions specially adapted to each of the five sectors of analyticity for Riemann-Hilbert Problem 3. Thus, we write

$$\displaystyle \begin{aligned} &U_{1j}(\zeta;m)\\ &=\begin{cases} \beta A_j^{(0)}U(a,y) + \beta B_j^{(0)}U(-a,\mathrm{i} y),&\ |\arg(\zeta)|<\frac{1}{4}\pi,\\ \beta A_j^{(1)}U(a,y) + \beta B_j^{(1)}U(-a,-\mathrm{i} y),&\ \frac{1}{4}\pi<\arg(\zeta)<\frac{3}{4}\pi,\\ \beta A_j^{(-1)}U(a,-y)+\beta B_j^{(-1)}U(-a,\mathrm{i} y),&\ -\frac{3}{4}\pi<\arg(\zeta)<-\frac{1}{4}\pi,\\ \beta A_j^{(2)}U(a,-y)+\beta B_j^{(2)}U(-a,-\mathrm{i} y),&\ \frac{3}{4}\pi<\arg(\zeta)<\pi,\\ \beta A_j^{(-2)}U(a,-y)+\beta B_j^{(-2)}U(-a,-\mathrm{i} y),&\ -\pi<\arg(\zeta)<-\frac{3}{4}\pi, \end{cases} {} \end{aligned} $$
(69)

and then using the first row of (67) along with identities allowing the elimination of derivatives of U [18, Eqs. 12.8.2–12.8.3] we get the following representation of the elements of the second row of U(ζ;m):

$$\displaystyle \begin{aligned} &U_{2j}(\zeta;m)\\ &=\sqrt{2}\mathrm{e}^{-\mathrm{i}\pi/4}\begin{cases} -A_j^{(0)}U(a{-}1,y){+}\mathrm{i}(a{-}\tfrac{1}{2})B_j^{(0)}U(1{-}a,\mathrm{i} y),& |\arg(\zeta)|{<}\frac{1}{4}\pi,\\ -A_j^{(1)}U(a{-}1,y){-}\mathrm{i}(a{-}\tfrac{1}{2})B_j^{(1)}U(1{-}a,-\mathrm{i} y),& \frac{1}{4}\pi{<}\arg(\zeta){<}\frac{3}{4}\pi,\\ A_j^{(-1)}U(a{-}1,-y){+}\mathrm{i}(a{-}\tfrac{1}{2})B_j^{(-1)}U(1{-}a,\mathrm{i} y),& -\frac{3}{4}\pi{<}\arg(\zeta){<}-\frac{1}{4}\pi,\\ A_j^{(2)}U(a{-}1,-y){-}\mathrm{i} (a{-}\tfrac{1}{2})B_j^{(2)}U(1{-}a,-\mathrm{i} y),& \frac{3}{4}\pi{<}\arg(\zeta){<}\pi,\\ A_j^{(-2)}U(a{-}1,-y){-}\mathrm{i}(a{-}\tfrac{1}{2})B_j^{(-2)}U(1{-}a,-\mathrm{i} y),& -\pi{<}\arg(\zeta) {<}-\frac{3}{4}\pi. \end{cases} {}\end{aligned} $$
(70)

Finally, we determine the coefficients \(A_j^{(i)}\) and \(B_j^{(i)}\) for j = 1, 2 and i = 0, ±1, ±2, as well as the value of β = β(m) so that all of the conditions of Riemann-Hilbert Problem 3 are satisfied by \(\mathbf {P}(\zeta ;m)=\mathbf {U}(\zeta ;m)\mathrm {e}^{\mathrm {i}\zeta ^2\sigma _3/2}\). The advantage of using numerically satisfactory fundamental pairs is that the asymptotic expansion [18, Eq. 12.9.1]

$$\displaystyle \begin{aligned} U(a,y)\sim\mathrm{e}^{-\tfrac{1}{4}y^2}y^{-a-\tfrac{1}{2}}\sum_{k=0}^\infty (-1)^k\frac{\left(\tfrac{1}{2}+a\right)_{2k}}{k!(2y^2)^k},\quad y\to\infty,\quad |\arg(y)|<\frac{3}{4}\pi {}\end{aligned} $$
(71)

can be used to determine from (69)–(70) the asymptotic behavior of U(ζ;m) in each sector for the purposes of comparison with the first formula in (63). This immediately shows that for consistency it is necessary to take \(A_1^{(i)}=0\) and \(B_2^{(i)}=0\) for i = 0, ±1, ±2. Next, it is useful to consider the trivial jump conditions for the first column of U(ζ;m) (across \(\arg (\zeta )=-\tfrac {1}{4}\pi \) and \(\arg (\zeta )=\tfrac {3}{4}\pi \)) and for the second column of U(ζ;m) (across \(\arg (\zeta )=\tfrac {1}{4}\pi \) and \(\arg (\zeta )=-\tfrac {3}{4}\pi \)). These imply the identities \(B_1^{(0)}=B_1^{(-1)}\), \(B_1^{(1)}=B_1^{(2)}\) (from matching the first column) and \(A_2^{(0)}=A_2^{(1)}\), \(A_2^{(-2)}=A_2^{(-1)}\) (from matching the second column). The diagonal jump condition satisfied by U(ζ;m) across the negative real axis then yields the additional identities \(B_1^{(-2)}=(1-m^2)^{-1}B_1^{(2)}\) and \(A_2^{(2)}=(1-m^2)^{-1}A_2^{(-2)}\). With this information, we have found that U(ζ;m) necessarily has the form

$$\displaystyle \begin{aligned} \mathbf{U}(\zeta;m) &=\begin{pmatrix} \beta B_1^{(0)}U(-a,\mathrm{i} y) & \beta A_2^{(0)}U(a,y)\\ \sqrt{2}\mathrm{e}^{\mathrm{i}\pi/4}(a-\tfrac{1}{2})B_1^{(0)}U(1-a,\mathrm{i} y) & \sqrt{2}\mathrm{e}^{3\pi\mathrm{i}/4}A_2^{(0)}U(a-1,y) \end{pmatrix},\\ &\quad |\arg(\zeta)|<\frac{1}{4}\pi,{} \end{aligned} $$
(72)
$$\displaystyle \begin{aligned} \mathbf{U}(\zeta;m)&=\begin{pmatrix}\beta B_1^{(1)}U(-a,-\mathrm{i} y) & \beta A_2^{(0)}U(a,y)\\ \sqrt{2}\mathrm{e}^{-3\pi\mathrm{i}/4}(a-\tfrac{1}{2})B_1^{(1)}U(1-a,-\mathrm{i} y) & \sqrt{2}\mathrm{e}^{3\pi\mathrm{i}/4}A_2^{(0)}U(a-1,y) \end{pmatrix},\\ &\quad \frac{1}{4}\pi<\arg(\zeta)<\frac{3}{4}\pi, {} \end{aligned} $$
(73)
$$\displaystyle \begin{aligned} \mathbf{U}(\zeta;m)&=\begin{pmatrix}\beta B_1^{(0)} U(-a,\mathrm{i} y) & \beta A_2^{(-1)}U(a,-y)\\ \sqrt{2}\mathrm{e}^{\mathrm{i}\pi/4}(a-\tfrac{1}{2})B_1^{(0)}U(1-a,\mathrm{i} y) & \sqrt{2}\mathrm{e}^{-\mathrm{i}\pi/4}A_2^{(-1)}U(a-1,-y) \end{pmatrix},\\ &\quad -\frac{3}{4}\pi<\arg(\zeta)<-\frac{1}{4}\pi, {}\end{aligned} $$
(74)
$$\displaystyle \begin{aligned} &\mathbf{U}(\zeta;m)\\ &=\begin{pmatrix} \beta B_1^{(1)} U(-a,-\mathrm{i} y) & \beta(1{-}m^2)^{-1}A_2^{({-}1)}U(a,{-}y)\\ \sqrt{2}\mathrm{e}^{{-}3\pi\mathrm{i}/4}(a{-}\tfrac{1}{2})B_1^{(1)}U(1{-}a,{-}\mathrm{i} y) & \sqrt{2}\mathrm{e}^{{-}\mathrm{i}\pi/4}(1{-}m^2)^{-1}A_2^{(-1)}U(a{-}1,-y) \end{pmatrix},\\ &\quad \frac{3}{4}\pi<\arg(\zeta)<\pi, {} \end{aligned} $$
(75)

and

$$\displaystyle \begin{aligned} &\mathbf{U}(\zeta;m) \\ &=\begin{pmatrix} \beta (1{-}m^2)^{-1}B_1^{(1)}U(-a,{-}\mathrm{i} y) & \beta A_2^{(-1)}U(a,{-}y)\\ \sqrt{2}\mathrm{e}^{{-}3\pi\mathrm{i}/4}(a{-}\tfrac{1}{2})(1{-}m^2)^{-1}B_1^{(1)}U(1{-}a,-\mathrm{i} y) & \sqrt{2}\mathrm{e}^{{-}\mathrm{i}\pi/4}A_2^{(-1)}U(a{-}1,{-}y) \end{pmatrix}, \\ &\quad -\pi<\arg(\zeta)<-\frac{3}{4}\pi. {} \end{aligned} $$
(76)

Appealing again to (71) now shows that U(ζ;m) agrees with the first formula in (63) up to the leading term only if the parameter a in Weber’s equation (68) satisfies

$$\displaystyle \begin{aligned} a-\frac{1}{2}=\frac{1}{2\pi\mathrm{i}}\ln(1-m^2)\quad \implies\quad |\beta|{}^2=-\frac{1}{\pi}\ln(1-m^2)>0, {}\end{aligned} $$
(77)

and the remaining constants \(A_2^{(0)}\), \(A_2^{(-1)}\), \(B_1^{(0)}\), and \(B_1^{(1)}\), are given in terms of β by

$$\displaystyle \begin{aligned} B_1^{(0)}&=\beta^{-1}(1-m^2)^{-1/8}\exp\left(\mathrm{i}\frac{1}{4\pi}\ln(2)\ln(1-m^2)\right)\\ {} A_2^{(0)}&=\frac{1}{\sqrt{2}}(1-m^2)^{-1/8}\mathrm{e}^{-3\pi\mathrm{i}/4}\exp\left(-\mathrm{i}\frac{1}{4\pi}\ln(2)\ln(1-m^2)\right)\\ {} B_1^{(1)}&=\beta^{-1}(1-m^2)^{3/8}\exp\left(\mathrm{i}\frac{1}{4\pi}\ln(2)\ln(1-m^2)\right)\\ {} A_{2}^{(-1)}&=\frac{1}{\sqrt{2}}(1-m^2)^{3/8}\mathrm{e}^{\mathrm{i}\pi/4}\exp\left(-\mathrm{i}\frac{1}{4\pi}\ln(2)\ln(1-m^2)\right). {} \end{aligned} $$
(78)

Only \(\arg (\beta )\) remains to be determined, and for this we recall the nontrivial jump conditions for the first (second) column of U(ζ;m) across the rays \(\arg (\zeta )=\tfrac {1}{4}\pi ,-\tfrac {3}{4}\pi \) (the rays \(\arg (\zeta )=-\tfrac {1}{4}\pi ,\tfrac {3}{4}\pi \)). Actually all four of these jump conditions contain equivalent information due to the fact that the cyclic product of the jump matrices in Riemann-Hilbert Problem 3 about the origin is the identity, so we just examine the transition of the first column across the ray \(\arg (\zeta )=\tfrac {1}{4}\pi \) implied by the jump conditions in Riemann-Hilbert Problem 3. Using all available information, the jump condition matches the connection formula [18, Eq. 12.2.18] if and only if

$$\displaystyle \begin{aligned} \arg(\beta)=\frac{\pi}{4}+\frac{1}{2\pi}\ln(2)\ln(1-m^2)-\arg\left(\Gamma\left(\mathrm{i}\frac{1}{2\pi}\ln(1-m^2)\right)\right). {} \end{aligned} $$
(79)

Combining this with (77) determines β=β(m) and then using (78) in (72)–(76) fully determines U(ζ;m) and hence also \(\mathbf {P}(\zeta ;m)=\mathbf {U}(\zeta ;m)\mathrm {e}^{\mathrm {i}\zeta ^2\sigma _3/2}\). This completes the construction of the necessarily unique solution of Riemann-Hilbert Problem 3. One can easily check directly that U′(ζ;m)U(ζ;m)−1 is analytic at ζ = 0, and using (71) (which is known to be a formally differentiable expansion) one confirms the asymptotic expansions (62)–(63), justifying after the fact all assumptions made to arrive at the explicit solution.

We note that for each m ∈ [0, 1), P(ζ;m) is uniformly bounded with respect to ζ, since it is locally bounded and the normalization factor in the asymptotics as ζ →∞ satisfies

$$\displaystyle \begin{aligned} (1-m^2)^{1/2}<|\zeta^{-\ln(1-m^2)/(2\pi \mathrm{i})}|<(1-m^2)^{-1/2},\quad \arg(\zeta)\in (-\pi,\pi). \end{aligned} $$
(80)

Since \(\det (\mathbf {P}(\zeta ;m))=1\), the same holds for P(ζ;m)−1. Moreover, it is not difficult to see that if ∥⋅∥ is a matrix norm, then the supremum of ∥P(ζ;m)∥ over ζ is a continuous function of m ∈ [0, 1). Therefore the estimates on P(ζ;m) and P(ζ;m)−1 hold uniformly with respect to m ∈ [0, ρ] for any ρ < 1.

3.5 The Equivalent \({\overline {\partial }}\) Problem and Its Solution for Large t

The next part of the proof of Theorem 1.1 is the nonlinear analogue of the estimation of the error \(\mathcal {E}(x,t)\) in the stationary phase formula (20) by double integrals in the z-plane. Here instead of a double integral we will have a double-integral equation arising from a \({\overline {\partial }}\)-problem. To arrive at this problem, we simply define a matrix function E(u, v;x, t) by comparing the “open lenses” matrix \((2t^{1/2})^{\mathrm {i}\nu (z_0)\sigma _3}\mathbf {O}(u,v;x,t)\) with its parametrix P(2t 1∕2(z − z 0);|r(z 0)|):

$$\displaystyle \begin{aligned} \mathbf{E}(u,v;x,t):=(2t^{1/2})^{\mathrm{i}\nu(z_0)\sigma_3}\mathbf{O}(u,v;x,t)\mathbf{P}(2t^{1/2}(u+\mathrm{i} v-z_0);|r(z_0)|)^{-1}. {} \end{aligned} $$
(81)

We claim that E(u, v;x, t) satisfies the following problem.

Problem 1

Let \(x\in\mathbb{R}\) and \(t>0\) be parameters. Find a 2 × 2 matrix function E = E(u, v) = E(u, v;x, t) with the following properties:

Continuity E is a continuous function of \((u,v)\in\mathbb{R}^2\).

Nonanalyticity E is a (weak) solution of the partial differential equation \({\overline {\partial }}\mathbf {E}(u,v)=\mathbf {E}(u,v)\mathbf {W}(u,v)\), where W(u, v) = W(u, v;x, t) is defined by

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mathbf{W}(u,v;x,t):=\mathbf{P}(2t^{1/2}(u+\mathrm{i} v-z_0);|r(z_0)|)\boldsymbol{\Delta}(u,v;x,t)\\ \cdot\mathbf{P}(2t^{1/2}(u+\mathrm{i} v-z_0);|r(z_0)|)^{-1}, {} \end{array} \end{aligned} $$
(82)

and

$$\displaystyle \begin{aligned} \boldsymbol{\Delta}(u,v;x,t):=\begin{cases} \begin{pmatrix}0&0\\-{\overline{\partial}} E_1(u,v)\mathrm{e}^{2\mathrm{i} t(\theta(u+\mathrm{i} v;z_0)-\theta(z_0;z_0))} &0\end{pmatrix} ,&\quad u+\mathrm{i} v\in\Omega_1\\ \mathbf{0},&\quad u+\mathrm{i} v\in\Omega_2\\ \begin{pmatrix}0&-{\overline{\partial}} E_3(u,v)\mathrm{e}^{-2\mathrm{i} t(\theta(u+\mathrm{i} v;z_0)-\theta(z_0;z_0))}\\ 0&0\end{pmatrix} ,&\quad u+\mathrm{i} v\in\Omega_3\\ \begin{pmatrix}0&0\\{\overline{\partial}} E_4(u,v)\mathrm{e}^{2\mathrm{i} t(\theta(u+\mathrm{i} v;z_0)-\theta(z_0;z_0))} &0\end{pmatrix} ,&\quad u+\mathrm{i} v\in\Omega_4\\ \mathbf{0},&\quad u+\mathrm{i} v\in\Omega_5\\ \begin{pmatrix}0&{\overline{\partial}} E_6(u,v)\mathrm{e}^{-2\mathrm{i} t(\theta(u+\mathrm{i} v;z_0)-\theta(z_0;z_0))}\\ 0&0\end{pmatrix} ,&\quad u+\mathrm{i} v\in\Omega_6. \end{cases} {} \end{aligned} $$
(83)

Note that W(u, v;x, t) has jump discontinuities across the sector boundaries in general.

Normalization \(\mathbf{E}(u,v)\to\mathbb{I}\) as (u, v) →∞.

To show the continuity, first note that in each of the six sectors Ωj, j = 1, …, 6, E(u, v;x, t) is continuous as a function of (u, v) up to the sector boundary. Indeed, the first factor in (81) is independent of (u, v), and the second factor in (81) has the claimed continuity because this is a property of the solution N(u + iv;x, t) of Riemann-Hilbert Problem 2 and of the change-of-variables formula (59). Finally, P(ζ;m) has unit determinant and its explicit formula in terms of parabolic cylinder functions shows that its restriction to each sector is an entire function of ζ, which guarantees the asserted continuity of the third factor in (81). Moreover, the matrices \((2t^{1/2})^{\mathrm {i}\nu (z_0)\sigma _3}\mathbf {O}(u,v;x,t)\) and P(2t 1∕2(u + iv − z 0);|r(z 0)|) satisfy exactly the same jump conditions across the six rays that form the common boundaries of neighboring sectors, from which it follows that \(\mathbf{E}_+(u,v;x,t)=\mathbf{E}_-(u,v;x,t)\) holds across each of these rays and therefore E(u, v;x, t) may be regarded as a continuous function of \((u,v)\in\mathbb{R}^2\).

To show that \({\overline {\partial }}\mathbf {E}=\mathbf {E}\mathbf {W}\) holds, one simply differentiates E(u, v;x, t) in each of the six sectors, using the fact that O(u, v;x, t) is related to N(u + iv;x, t) explicitly by (59) and that both N(u + iv;x, t) and the unit-determinant matrix function P(2t 1∕2(u + iv − z 0);|r(z 0)|) are analytic functions of u + iv in each sector, and hence are annihilated by \({\overline {\partial }}\). The region of non-analyticity of E is therefore the union of shaded sectors shown in Fig. 2.
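
In other words, writing P as shorthand for P(2t 1∕2(u + iv − z 0);|r(z 0)|), the computation just described amounts to the purely algebraic observation that within each sector

$$\displaystyle \begin{aligned} {\overline{\partial}}\mathbf{E}=(2t^{1/2})^{\mathrm{i}\nu(z_0)\sigma_3}\left({\overline{\partial}}\mathbf{O}\right)\mathbf{P}^{-1}\qquad\text{and}\qquad\mathbf{E}\mathbf{W}=(2t^{1/2})^{\mathrm{i}\nu(z_0)\sigma_3}\mathbf{O}\boldsymbol{\Delta}\mathbf{P}^{-1}, \end{aligned} $$

so that \({\overline{\partial}}\mathbf{E}=\mathbf{E}\mathbf{W}\) with W given by (82) is equivalent, sector by sector, to \({\overline{\partial}}\mathbf{O}=\mathbf{O}\boldsymbol{\Delta}\), which is exactly what the term-by-term differentiation just described produces.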

Finally, to show the normalization condition, we recall Lemma 3.2. Therefore, comparing the normalization conditions of Riemann-Hilbert Problem 2 for N(z;x, t) and of Riemann-Hilbert Problem 3 for P(ζ;m) shows that \(\mathbf{E}(u,v;x,t)\to\mathbb{I}\) as (u, v) →∞ in \(\mathbb{R}^2\).

The rest of this section is devoted to the proof of the following result.

Proposition 3.3

Suppose that \(r\in H^1(\mathbb{R})\) with |r(z)|≤ ρ for some ρ < 1. If t > 0 is sufficiently large, then for all \(x\in\mathbb{R}\) there exists a unique solution \(\mathbf{E}(u,v;x,t)\) of \({\overline {\partial }}\)-Problem 1 with the property that

$$\displaystyle \begin{aligned} {\mathbf{E}}_1(x,t):=\lim_{v\to+\infty}\mathrm{i} v\left(\mathbf{E}(0,v;x,t)-\mathbb{I}\right) \end{aligned} $$
(84)

exists and satisfies

$$\displaystyle \begin{aligned} \|{\mathbf{E}}_1(x,t)\|=\mathcal{O}(t^{-3/4})\quad\text{as } t\to+\infty,\ \text{uniformly for } x\in\mathbb{R}. \end{aligned} $$
(85)

Proof

To show that \({\overline {\partial }}\)-Problem 1 has a unique solution for t > 0 sufficiently large, and simultaneously obtain estimates for the solution E(u, v;x, t), we formulate a weakly-singular integral equation whose solution is that of \({\overline {\partial }}\)-Problem 1:

$$\displaystyle \begin{aligned} \mathbf{E}(u,v;x,t)=\mathbb{I}-\frac{1}{\pi}\iint_{\mathbb{R}^2}\frac{\mathbf{E}(U,V;x,t)\mathbf{W}(U,V;x,t)}{(U+\mathrm{i} V)-(u+\mathrm{i} v)}\,\mathrm{d} A(U,V), \end{aligned} $$
(86)

in which the identity matrix \(\mathbb{I}\) is viewed as a constant function on \(\mathbb{R}^2\). Indeed, this is a consequence of the distributional identity \({\overline {\partial }} z^{-1}=-\pi \delta \), where δ denotes the Dirac mass at the origin. We will solve the integral equation (86) in the space \(L^\infty(\mathbb{R}^2)\) by computing the corresponding operator norm of \(\mathcal{J}\) and showing that for large t > 0 it is less than 1. Thus, we begin with the elementary estimate

$$\displaystyle \begin{aligned} \|\mathbf{W}(u,v;x,t)\|\le\|\mathbf{P}(2t^{1/2}(u+\mathrm{i} v-z_0);|r(z_0)|)\|\,\|\boldsymbol{\Delta}(u,v;x,t)\|\,\|\mathbf{P}(2t^{1/2}(u+\mathrm{i} v-z_0);|r(z_0)|)^{-1}\|. \end{aligned} $$
(87)
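
Parenthetically, the fact that a (bounded, continuous) solution of (86) satisfies \({\overline{\partial}}\mathbf{E}=\mathbf{E}\mathbf{W}\) is precisely the advertised consequence of the distributional identity: applying \({\overline{\partial}}\) in the distributional sense under the integral sign in (86), with the kernel oriented as displayed there, gives

$$\displaystyle \begin{aligned} {\overline{\partial}}\mathbf{E}(u,v)=-\frac{1}{\pi}\iint_{\mathbb{R}^2}\mathbf{E}(U,V)\mathbf{W}(U,V)\,{\overline{\partial}}\left[\frac{1}{(U+\mathrm{i} V)-(u+\mathrm{i} v)}\right]\mathrm{d} A(U,V)=\mathbf{E}(u,v)\mathbf{W}(u,v), \end{aligned} $$

because \({\overline{\partial}}\) applied to the Cauchy kernel produces −π times the Dirac mass at \(U+\mathrm{i} V=u+\mathrm{i} v\).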

Using the uniform boundedness of P(ζ;m) and its inverse with respect to ζ, i.e., there exists C > 0 such that ∥P(ζ;m)∥≤ C and ∥P(ζ;m)−1∥≤ C for all \(\zeta\in\mathbb{C}\) and all m ∈ [0, ρ] with ρ < 1, the assumption |r(z)|≤ ρ < 1 gives that

$$\displaystyle \begin{aligned} \|\mathbf{W}(u,v;x,t)\|\le C^2\begin{cases}\mathrm{e}^{-8t(u-z_0)v}|{\overline{\partial}} E_1(u,v)|,&\quad z=u+\mathrm{i} v\in\Omega_1\\ \mathrm{e}^{8t(u-z_0)v}|{\overline{\partial}} E_3(u,v)|,&\quad z=u+\mathrm{i} v\in\Omega_3\\ \mathrm{e}^{-8t(u-z_0)v}|{\overline{\partial}} E_4(u,v)|,&\quad z=u+\mathrm{i} v\in\Omega_4\\ \mathrm{e}^{8t(u-z_0)v}|{\overline{\partial}} E_6(u,v)|,&\quad z=u+\mathrm{i} v\in\Omega_6, \end{cases} {} \end{aligned} $$
(88)

and of course W(u, v;x, t) ≡ 0 on Ω2 ∪ Ω5. By direct computation using (58) along with the analyticity of f(z;z 0)±2 provided by Lemma 3.1 and straightforward estimates of \(\cos {}(2\arg (u+\mathrm {i} v-z_0))\) and its \({\overline {\partial }}\)-derivative as in Sect. 2, we have the following analogues of (29):

$$\displaystyle \begin{aligned} \begin{array}{rcl} |{\overline{\partial}} E_1(u,v)| \le \frac{1}{2}|f(u+\mathrm{i} v;z_0)^{-2}||r'(u)| + \frac{|f(u+\mathrm{i} v;z_0)^{-2}r(u)-r(z_0)|}{\sqrt{(u-z_0)^2+v^2}},&\displaystyle &\displaystyle \\ z=u+\mathrm{i} v\in\Omega_1,&\displaystyle &\displaystyle \end{array} \end{aligned} $$
(89)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle |{\overline{\partial}} E_3(u,v)|\le\frac{1}{2}|f(u+\mathrm{i} v;z_0)^2|\left|\frac{\mathrm{d}}{\mathrm{d} u}\frac{\overline{r(u)}}{1-|r(u)|{}^2} \right|\\ &\displaystyle &\displaystyle {}+\left|\frac{f(u+\mathrm{i} v;z_0)^2\overline{r(u)}}{1-|r(u)|{}^2}-\frac{\overline{r(z_0)}}{1-|r(z_0)|{}^2}\right|\frac{1}{\sqrt{(u-z_0)^2+v^2}},\quad z=u+\mathrm{i} v\in\Omega_3, \end{array} \end{aligned} $$
(90)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle |{\overline{\partial}} E_4(u,v)|\le\frac{1}{2}|f(u+\mathrm{i} v;z_0)^{-2}|\left|\frac{\mathrm{d}}{\mathrm{d} u}\frac{r(u)}{1-|r(u)|{}^2}\right| \\ &\displaystyle &\displaystyle {}+\left|\frac{f(u+\mathrm{i} v;z_0)^{-2}r(u)}{1-|r(u)|{}^2}-\frac{r(z_0)}{1-|r(z_0)|{}^2}\right|\frac{1}{\sqrt{(u-z_0)^2+v^2}},\quad z=u+\mathrm{i} v\in\Omega_4, \end{array} \end{aligned} $$
(91)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} |{\overline{\partial}} E_6(u,v)|\le\frac{1}{2}|f(u+\mathrm{i} v;z_0)^2||r'(u)|+\frac{|f(u+\mathrm{i} v;z_0)^2\overline{r(u)}-\overline{r(z_0)}|}{\sqrt{(u-z_0)^2+v^2}},\\ z=u+\mathrm{i} v\in\Omega_6. \end{array} \end{aligned} $$
(92)

Note that

$$\displaystyle \begin{aligned} \left|\frac{\mathrm{d}}{\mathrm{d} u}\frac{\overline{r(u)}}{1-|r(u)|{}^2}\right|=\left|\frac{\mathrm{d}}{\mathrm{d} u}\frac{r(u)}{1-|r(u)|{}^2}\right|\le\frac{1+\rho^2}{(1-\rho^2)^2}|r'(u)| {} \end{aligned} $$
(93)

holds under the condition |r(u)|≤ ρ < 1. Also, under the same condition,

(94)

where we used Lemma 3.1 and (30), and K > 0 depends on ρ but not on z 0. Exactly the same estimate holds for \(|f(u+\mathrm {i} v;z_0)^2\overline {r(u)}-\overline {r(z_0)}|\). In the same way, but also using (93),

(95)

Therefore again using Lemma 3.1, we see that there are constants L and M depending only on the upper bound ρ < 1 for \(|r|\), on \(\|r\|_{L^2(\mathbb{R})}\), and on \(\|r'\|_{L^2(\mathbb{R})}\) such that

$$\displaystyle \begin{aligned} |{\overline{\partial}} E_j(u,v)|\le L|r'(u)| +\frac{M}{[(u-z_0)^2+v^2]^{1/4}}, \ \ z=u+\mathrm{i} v\in\Omega_j,\ \ j=1,3,4,6. {}\end{aligned} $$
(96)

Note that (96) is the nonlinear analogue of the estimate (31).
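
The inequality (93), in turn, is nothing but the quotient rule combined with the bound |r(u)|≤ ρ: since

$$\displaystyle \begin{aligned} \frac{\mathrm{d}}{\mathrm{d} u}\frac{r(u)}{1-|r(u)|{}^2}=\frac{r'(u)}{1-|r(u)|{}^2}+\frac{r(u)\left(r'(u)\overline{r(u)}+r(u)\overline{r'(u)}\right)}{(1-|r(u)|{}^2)^2}, \end{aligned} $$

the triangle inequality bounds the modulus by \([(1-|r(u)|^2)+2|r(u)|^2](1-|r(u)|^2)^{-2}|r'(u)|=(1+|r(u)|^2)(1-|r(u)|^2)^{-2}|r'(u)|\le(1+\rho^2)(1-\rho^2)^{-2}|r'(u)|\), and the complex-conjugate quotient is handled identically.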

Combining (96) with (87)–(88) shows that for some constant D independent of \((u,v)\in\mathbb{R}^2\), of \(x\in\mathbb{R}\), and of \(t>0\),

$$\displaystyle \begin{aligned} \iint_{\mathbb{R}^2}\frac{\|\mathbf{W}(U,V;x,t)\|\,\mathrm{d} A(U,V)}{\sqrt{(U-u)^2+(V-v)^2}}\le D\left(I^{[1,4]}+I^{[3,6]}+J^{[1,4]}+J^{[3,6]}\right)(u,v;x,t), \end{aligned} $$
(97)

where the four terms are analogues in the nonlinear case of the double integrals defined in (33) for the linear case:

$$\displaystyle \begin{aligned} I^{[1,4]}(u,v;x,t)&:=\iint_{\Omega_1\cup\Omega_4}\frac{|r'(U)|\mathrm{e}^{-8t(U-z_0)V}\,\mathrm{d} A(U,V)}{\sqrt{(U-u)^2+(V-v)^2}},\\ I^{[3,6]}(u,v;x,t)&:=\iint_{\Omega_3\cup\Omega_6}\frac{|r'(U)|\mathrm{e}^{8t(U-z_0)V}\,\mathrm{d} A(U,V)}{\sqrt{(U-u)^2+(V-v)^2}}, \end{aligned} $$
$$\displaystyle \begin{aligned} J^{[1,4]}(u,v;x,t)&:=\iint_{\Omega_1\cup\Omega_4}\frac{\mathrm{e}^{-8t(U-z_0)V}\,\mathrm{d} A(U,V)}{[(U-z_0)^2+V^{2}]^{1/4}\sqrt{(U-u)^2+(V-v)^2}},\quad \text{and}\\ J^{[3,6]}(u,v;x,t)&:=\iint_{\Omega_3\cup\Omega_6}\frac{\mathrm{e}^{8t(U-z_0)V}\,\mathrm{d} A(U,V)}{[(U-z_0)^2+V^{2}]^{1/4}\sqrt{(U-u)^2+(V-v)^2}}. {} \end{aligned} $$
(98)

Estimation of the integrals I [1, 4](u, v;x, t) and J [1, 4](u, v;x, t) requires nearly identical steps to the estimation of I [3, 6](u, v;x, t) and J [3, 6](u, v;x, t) (just note that the sign of the exponent always corresponds to decay in the sectors of integration), so for brevity we deal only with I [3, 6](u, v;x, t) and J [3, 6](u, v;x, t).

To estimate I [3, 6](u, v;x, t), by iterated integration we have

$$\displaystyle \begin{aligned} &I^{[3,6]}(u,v;x,t)\\ &\quad =\left[\int_0^{+\infty}\,\mathrm{d} V\int_{-\infty}^{z_0-V}\,\mathrm{d} U +\int_{-\infty}^0\,\mathrm{d} V\int_{z_0-V}^{+\infty}\,\mathrm{d} U\right]\frac{|r'(U)|\mathrm{e}^{8t(U-z_0)V}}{\sqrt{(U-u)^2+(V-v)^2}}\\ &\quad \le\left[\int_{0}^{+\infty}\,\mathrm{d} V\int_{-\infty}^{z_0-V}\,\mathrm{d} U + \int_{-\infty}^0\,\mathrm{d} V\int_{z_0-V}^{+\infty}\,\mathrm{d} U\right]\frac{|r'(U)|\mathrm{e}^{-8tV^2}}{\sqrt{(U-u)^2+(V-v)^2}}. {} \end{aligned} $$
(99)
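
The inequality step in (99) uses only the geometry of the two ranges of integration visible in the iterated integrals:

$$\displaystyle \begin{aligned} V>0,\ U\le z_0-V\ \implies\ (U-z_0)V\le -V^2,\qquad V<0,\ U\ge z_0-V\ \implies\ (U-z_0)V\le -V^2, \end{aligned} $$

so that \(\mathrm{e}^{8t(U-z_0)V}\le\mathrm{e}^{-8tV^2}\) throughout Ω3 ∪ Ω6.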

The inner integrals can be estimated by Cauchy-Schwarz, using the fact that \(r'\in L^2(\mathbb{R})\):

$$\displaystyle \begin{aligned} \int_{-\infty}^{z_0-V}\frac{|r'(U)|\,\mathrm{d} U}{\sqrt{(U-u)^2+(V-v)^2}}\le\|r'\|_{L^2(\mathbb{R})}\left(\int_{-\infty}^{+\infty}\frac{\mathrm{d} U}{(U-u)^2+(V-v)^2}\right)^{1/2}=\frac{\sqrt{\pi}\,\|r'\|_{L^2(\mathbb{R})}}{|V-v|^{1/2}}, \end{aligned} $$
(100)
and similarly for the inner integral over \((z_0-V,+\infty)\).

Thus,

$$\displaystyle \begin{aligned} I^{[3,6]}(u,v;x,t)\le\sqrt{\pi}\,\|r'\|_{L^2(\mathbb{R})}\int_{-\infty}^{+\infty}\frac{\mathrm{e}^{-8tV^2}\,\mathrm{d} V}{|V-v|^{1/2}}. \end{aligned} $$
(101)

Without loss of generality, suppose that v > 0. Then

$$\displaystyle \begin{aligned} \int_{-\infty}^{+\infty}\frac{\mathrm{e}^{-8tV^2}\,\mathrm{d} V}{|V-v|^{1/2}}=\int_{-\infty}^{0}\frac{\mathrm{e}^{-8tV^2}\,\mathrm{d} V}{\sqrt{v-V}}+\int_{0}^{v}\frac{\mathrm{e}^{-8tV^2}\,\mathrm{d} V}{\sqrt{v-V}}+\int_{v}^{+\infty}\frac{\mathrm{e}^{-8tV^2}\,\mathrm{d} V}{\sqrt{V-v}}. \end{aligned} $$
(102)

Using monotonicity of \(\sqrt {v-V}\) on V < 0 and the rescaling \(V=t^{-1/2}w\), we get for the first term:

$$\displaystyle \begin{aligned} \int_{-\infty}^0\frac{\mathrm{e}^{-8tV^2}\,\mathrm{d} V}{\sqrt{v-V}}\le\int_{-\infty}^0\frac{\mathrm{e}^{-8tV^2}\,\mathrm{d} V}{\sqrt{-V}} = t^{-1/4}\int_{-\infty}^0\frac{\mathrm{e}^{-8w^2}\,\mathrm{d} w}{\sqrt{-w}} = \mathcal{O}(t^{-1/4}). {} \end{aligned} $$
(103)

For the second term, we use the inequality \(\mathrm{e}^{-b}\le Cb^{-1/4}\) for b > 0 and the rescaling V = vw to get

$$\displaystyle \begin{aligned} \int_0^v\frac{\mathrm{e}^{{-}8tV^2}\,\mathrm{d} V}{\sqrt{v{-}V}}\le C(8t)^{{-}1/4}\int_0^v\frac{\mathrm{d} V}{\sqrt{V(v{-}V)}}=C(8t)^{{-}1/4}\int_0^1\frac{\mathrm{d} w}{\sqrt{w(1{-}w)}}{=}\mathcal{O}(t^{{-}1/4}). {} \end{aligned} $$
(104)

Using monotonicity of \(\mathrm {e}^{-8tV^2}\) on V > v and the change of variable \(V-v=t^{-1/2}w\) we get for the third term:

$$\displaystyle \begin{aligned} \int_v^{+\infty}\frac{\mathrm{e}^{-8tV^2}\,\mathrm{d} V}{\sqrt{V-v}}\le \int_v^{+\infty}\frac{\mathrm{e}^{-8t(V-v)^2}\,\mathrm{d} V}{\sqrt{V-v}} = t^{-1/4}\int_0^{+\infty}\frac{\mathrm{e}^{-8w^2}\,\mathrm{d} w}{\sqrt{w}}=\mathcal{O}(t^{-1/4}). {} \end{aligned} $$
(105)
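
For completeness, the three w-integrals produced in (103)–(105) are convergent and can even be evaluated in closed form (Gaussian and Beta integrals; their exact values are not needed here):

$$\displaystyle \begin{aligned} \int_{-\infty}^0\frac{\mathrm{e}^{-8w^2}\,\mathrm{d} w}{\sqrt{-w}}=\int_0^{+\infty}\frac{\mathrm{e}^{-8w^2}\,\mathrm{d} w}{\sqrt{w}}=\frac{\Gamma(1/4)}{2\cdot 8^{1/4}},\qquad\int_0^1\frac{\mathrm{d} w}{\sqrt{w(1-w)}}=\pi. \end{aligned} $$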

The upper bounds in (103)–(105) are all independent of v (and u), so combining them with (101)–(102) gives

$$\displaystyle \begin{aligned} I^{[3,6]}(u,v;x,t)\le C\|r'\|_{L^2(\mathbb{R})}\,t^{-1/4}, \end{aligned} $$
(106)

where C denotes an absolute constant.

To estimate J [3, 6](u, v;x, t) we again introduce iterated integrals in the same way as in (99) to obtain the inequality

$$\displaystyle \begin{aligned} J^{[3,6]}(u,v;x,t)&\le\left[\int_0^{+\infty}\,\mathrm{d} V\int_{-\infty}^{z_0-V}\,\mathrm{d} U + \int_{-\infty}^0\,\mathrm{d} V\int_{z_0-V}^{+\infty}\,\mathrm{d} U\right]\\ &\quad \cdot \frac{\mathrm{e}^{-8tV^2}}{[(U-z_0)^2+V^2]^{1/4} \sqrt{(U-u)^2+(V-v)^2}}. \end{aligned} $$
(107)

Now, to estimate the inner U-integrals we will use Hölder’s inequality with conjugate exponents p > 2 and q < 2. Thus,

$$\displaystyle \begin{aligned} \int_{-\infty}^{z_0-V}\frac{\mathrm{d} U}{[(U-z_0)^2+V^2]^{1/4}\sqrt{(U-u)^2+(V-v)^2}}\le\left(\int_{-\infty}^{+\infty}\frac{\mathrm{d} U}{[(U-z_0)^2+V^2]^{p/4}}\right)^{1/p}\left(\int_{-\infty}^{+\infty}\frac{\mathrm{d} U}{[(U-u)^2+(V-v)^2]^{q/2}}\right)^{1/q}, \end{aligned} $$
(108)
and similarly for the inner integral over \((z_0-V,+\infty)\).

Now, by the change of variable U − z 0 = |V |w,

$$\displaystyle \begin{aligned} \int_{-\infty}^{+\infty}\frac{\mathrm{d} U}{[(U-z_0)^2+V^2]^{p/4}}=|V|{}^{1-p/2}\int_{-\infty}^{+\infty}\frac{\mathrm{d} w}{(1+w^2)^{p/4}}, \end{aligned} $$
(109)

where the integral on the right-hand side is convergent as long as p > 2. Similarly, by the change of variable U − u = |V − v|w,

$$\displaystyle \begin{aligned} \int_{-\infty}^{+\infty}\frac{\mathrm{d} U}{[(U-u)^2+(V-v)^2]^{q/2}}=|V-v|{}^{1-q}\int_{-\infty}^{+\infty}\frac{\mathrm{d} w}{(1+w^2)^{q/2}}, \end{aligned} $$
(110)

where the integral on the right-hand side is convergent as long as q > 1. Hence for any conjugate exponents \(1<q<2<p<\infty\) with \(p^{-1}+q^{-1}=1\), we have for some constant C = C(p, q),

$$\displaystyle \begin{aligned} J^{[3,6]}(u,v;x,t)\le C(p,q)\int_{-\infty}^{+\infty}\mathrm{e}^{-8tV^2}|V|{}^{1/p-1/2}|V-v|{}^{1/q-1}\,\mathrm{d} V. \end{aligned} $$
(111)
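
For instance, one admissible (and in no way distinguished) choice of conjugate exponents is p = 4 and q = 4∕3, for which (111) reads

$$\displaystyle \begin{aligned} J^{[3,6]}(u,v;x,t)\le C\int_{-\infty}^{+\infty}\mathrm{e}^{-8tV^2}|V|{}^{-1/4}|V-v|{}^{-1/4}\,\mathrm{d} V; \end{aligned} $$

the general conjugate pair (p, q) is retained below only to display the role of the two convergence conditions p > 2 and q > 1.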

As before, assume without loss of generality that v > 0. Then

$$\displaystyle \begin{aligned} \int_{-\infty}^{+\infty}&\mathrm{e}^{-8tV^2}|V|{}^{1/p-1/2}|V-v|{}^{1/q-1}\,\mathrm{d} V=\int_{-\infty}^{0}\mathrm{e}^{-8tV^2}(-V)^{1/p-1/2}(v-V)^{1/q-1}\,\mathrm{d} V\\ &+\int_{0}^{v}\mathrm{e}^{-8tV^2}V^{1/p-1/2}(v-V)^{1/q-1}\,\mathrm{d} V+\int_{v}^{+\infty}\mathrm{e}^{-8tV^2}V^{1/p-1/2}(V-v)^{1/q-1}\,\mathrm{d} V. \end{aligned} $$
(112)

Using q > 1 and monotonicity of \((v-V)^{1/q-1}\) on V < 0 along with 1∕p + 1∕q = 1 and the rescaling \(V=t^{-1/2}w\) gives for the first integral

$$\displaystyle \begin{aligned} \begin{aligned} \int_{-\infty}^0\mathrm{e}^{-8tV^2}(-V)^{1/p-1/2}(v-V)^{1/q-1}\,\mathrm{d} V&\le \int_{-\infty}^0\mathrm{e}^{-8tV^2}(-V)^{1/p-1/2 +1/q-1}\,\mathrm{d} V\\ &\hspace{-2.5pc} = \int_{-\infty}^0\mathrm{e}^{-8tV^2}(-V)^{-1/2}\,\mathrm{d} V \\ &\hspace{-2.5pc}= t^{-1/4}\int_{-\infty}^0\mathrm{e}^{-8w^2}(-w)^{-1/2}\,\mathrm{d} w = \mathcal{O}(t^{-1/4}). \end{aligned} {} \end{aligned} $$
(113)

For the second integral, we again recall \(\mathrm{e}^{-b}\le Cb^{-1/4}\) for b > 0 and rescale by V = vw to get

$$\displaystyle \begin{aligned} \begin{aligned} \int_0^v\mathrm{e}^{-8tV^2}V^{1/p-1/2}(v-V)^{1/q-1}\,\mathrm{d} V&\le C(8t)^{-1/4}\int_0^v V^{1/p-1}(v-V)^{1/q-1}\,\mathrm{d} V\\ &= C(8t)^{-1/4}\int_0^1 w^{1/p-1}(1-w)^{1/q-1}\,\mathrm{d} w\\ &=\mathcal{O}(t^{-1/4}), \end{aligned} {}\end{aligned} $$
(114)

using also \(q,p<\infty\). Finally, for the third integral, we use monotonicity of \(\mathrm {e}^{-8tV^2}\) and \(V^{1/p-1/2}\) (for p > 2) on V > v and make the substitution \(V-v=t^{-1/2}w\) to get

$$\displaystyle \begin{aligned} \begin{aligned} \int_v^{+\infty}\mathrm{e}^{-8tV^2}V^{1/p-1/2}(V-v)^{1/q-1}\,\mathrm{d} V &\le \int_v^{+\infty}\mathrm{e}^{-8t(V-v)^2}(V-v)^{1/p-1/2}\\ &\quad \qquad \quad \cdot (V-v)^{1/q-1}\,\mathrm{d} V\\ &= \int_v^{+\infty}\mathrm{e}^{-8t(V-v)^2}(V-v)^{-1/2}\,\mathrm{d} V\\ &= t^{-1/4}\int_0^{+\infty}\mathrm{e}^{-8w^2}w^{-1/2}\,\mathrm{d} w = \mathcal{O}(t^{-1/4}). \end{aligned} {} \end{aligned} $$
(115)

Since the upper bounds in (113)–(115) are all independent of v (and u), combining them with (111)–(112) gives

$$\displaystyle \begin{aligned} J^{[3,6]}(u,v;x,t)\le Ct^{-1/4}, \end{aligned} $$
(116)

where C denotes an absolute constant.

Returning to (97) and taking a supremum over \((u,v)\in\mathbb{R}^2\), we see that

$$\displaystyle \begin{aligned} \|\mathcal{J}\|\le Dt^{-1/4} \end{aligned} $$
(117)

holds, where D is a constant depending only on the upper bound ρ < 1 for \(|r|\), on \(\|r\|_{L^2(\mathbb{R})}\), and on \(\|r'\|_{L^2(\mathbb{R})}\), and where \(\|\mathcal{J}\|\) denotes the norm of the weakly-singular integral operator \(\mathcal {J}\) acting in \(L^\infty(\mathbb{R}^2)\). It is a consequence of (117) that the integral equation (86) is uniquely solvable in \(L^\infty(\mathbb{R}^2)\) by convergent Neumann series for sufficiently large t > 0:

$$\displaystyle \begin{aligned} \mathbf{E}=(\mathcal{I}-\mathcal{J})^{-1}\mathbb{I}=\sum_{n=0}^{\infty}\mathcal{J}^{n}\mathbb{I}, \end{aligned} $$
(118)

where \(\mathcal {I}\) denotes the identity operator and \(\mathbb{I}\) the constant function on \(\mathbb{R}^2\) whose value is the identity matrix, and that the solution satisfies

$$\displaystyle \begin{aligned} \sup_{(u,v)\in\mathbb{R}^2}\|\mathbf{E}(u,v;x,t)-\mathbb{I}\|=\mathcal{O}(t^{-1/4}),\quad t\to+\infty, \end{aligned} $$
(119)

an estimate that is uniform with respect to \(x\in\mathbb{R}\). This proves the first assertion in Proposition 3.3.
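
Spelled out, the first assertion rests only on a geometric series: granting the operator-norm bound (117), termwise estimation of (118) gives

$$\displaystyle \begin{aligned} \sup_{(u,v)\in\mathbb{R}^2}\|\mathbf{E}(u,v;x,t)-\mathbb{I}\|\le\|\mathbb{I}\|\sum_{n=1}^{\infty}\|\mathcal{J}\|{}^{n}=\frac{\|\mathbb{I}\|\,\|\mathcal{J}\|}{1-\|\mathcal{J}\|}, \end{aligned} $$

which is \(\mathcal{O}(t^{-1/4})\) as soon as, say, \(Dt^{-1/4}\le\tfrac{1}{2}\); this is the content of (119).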

To prove the existence of the limit E 1(x, t) in (84), note that from the integral equation (86) we have

(120)

The second term satisfies

(121)

Now, following [12], let us examine the resulting double integral for u = 0, i.e., for z = u + iv restricted to the imaginary axis. Some simple trigonometry shows that

$$\displaystyle \begin{aligned} \sup_{(U,V)\in\mathrm{supp}(\mathbf{W}(\cdot,\cdot;x,t))}\sqrt{\frac{U^2+V^2}{U^2 + (V-v)^2}} = 1+\sqrt{2}\frac{|v|}{|v|-|z_0|},\quad |v|>|z_0|. {} \end{aligned} $$
(122)

Therefore, if u = 0, the double integral on the right-hand side of (121) will tend to zero as |v|→∞ by the Lebesgue dominated convergence theorem provided that \(\|\mathbf{W}(\cdot,\cdot;x,t)\|\in L^1(\mathbb{R}^2)\). Using (88) and (96), we have

$$\displaystyle \begin{aligned} \iint_{\mathbb{R}^2}\|\mathbf{W}(U,V;x,t)\|\,\mathrm{d} A(U,V)\le D\left(\widetilde{I}^{[1,4]}(x,t)+\widetilde{I}^{[3,6]}(x,t)+\widetilde{J}^{[1,4]}(x,t)+\widetilde{J}^{[3,6]}(x,t)\right), \end{aligned} $$
(123)

where (compare with (98), or better yet, (33))

$$\displaystyle \begin{aligned} \begin{aligned} \widetilde{I}^{[1,4]}(x,t)&:=\iint_{\Omega_1\cup\Omega_4}|r'(U)|\mathrm{e}^{-8t(U-z_0)V}\,\mathrm{d} A(U,V),\\ \widetilde{I}^{[3,6]}(x,t)&:=\iint_{\Omega_3\cup\Omega_6}|r'(U)|\mathrm{e}^{8t(U-z_0)V}\,\mathrm{d} A(U,V),\\ \widetilde{J}^{[1,4]}(x,t)&:=\iint_{\Omega_1\cup\Omega_4}\frac{\mathrm{e}^{-8t(U-z_0)V}\,\mathrm{d} A(U,V)}{[(U-z_0)^2+V^2]^{1/4}},\quad \text{and}\\ \widetilde{J}^{[3,6]}(x,t)&:=\iint_{\Omega_3\cup\Omega_6}\frac{\mathrm{e}^{8t(U-z_0)V}\,\mathrm{d} A(U,V)}{[(U-z_0)^2+V^2]^{1/4}}. \end{aligned} \end{aligned} $$
(124)

Noting the resemblance with the double integrals (33) analyzed in Sect. 2, we can immediately obtain the estimate

$$\displaystyle \begin{aligned} \widetilde{I}^{[1,4]}(x,t)+\widetilde{I}^{[3,6]}(x,t)+\widetilde{J}^{[1,4]}(x,t)+\widetilde{J}^{[3,6]}(x,t)\le Ct^{-3/4} \end{aligned} $$
(125)

for some constant C independent of x. Therefore, the second term on the right-hand side of (120) tends to zero as v →∞ if u = 0 (the limit is not uniform with respect to x since v is compared with z 0 in (122)). Comparing with (84), we obtain from (120) the formula

$$\displaystyle \begin{aligned} {\mathbf{E}}_1(x,t)=\frac{1}{\pi}\iint_{\mathbb{R}^2}\mathbf{E}(U,V;x,t)\mathbf{W}(U,V;x,t)\,\mathrm{d} A(U,V), \end{aligned} $$
(126)

and exactly the same argument shows that E 1(x, t) is finite and uniformly decaying as t → +∞:

$$\displaystyle \begin{aligned} \|{\mathbf{E}}_1(x,t)\|\le\frac{1}{\pi}\left(\sup_{(u,v)\in\mathbb{R}^2}\|\mathbf{E}(u,v;x,t)\|\right)\iint_{\mathbb{R}^2}\|\mathbf{W}(U,V;x,t)\|\,\mathrm{d} A(U,V)=\mathcal{O}(t^{-3/4}),\quad t\to+\infty, \end{aligned} $$
(127)

where we have used (119) and (125) and noted that the constants C and D are independent of x. This proves the second assertion in Proposition 3.3. □

3.6 The Solution of the Cauchy Problem (1)–(2) for t > 0 Large

Now we complete the proof of Theorem 1.1 by combining our previous results. The matrix function N(u + iv;x, t) agrees with O(u, v;x, t) for u = 0 and |v| sufficiently large given z 0 = −x∕(4t). Since according to (81),

$$\displaystyle \begin{aligned} \mathbf{O}(u,v;x,t)=(2t^{1/2})^{-\mathrm{i}\nu(z_0)\sigma_3}\mathbf{E}(u,v;x,t)\mathbf{P}(2t^{1/2}(u+\mathrm{i} v-z_0);|r(z_0)|), \end{aligned} $$
(128)

we compute the matrix coefficient N 1(x, t) appearing in (56) by taking a limit along the imaginary axis in (54). Thus, we obtain \({\mathbf {N}}_1(x,t)=(2t^{1/2})^{-\mathrm {i}\nu (z_0)\sigma _3}\mathbf {Q}(x,t)(2t^{1/2})^{\mathrm {i}\nu (z_0)\sigma _3}\), where (using z = u + iv)

(129)

Using (62) and Proposition 3.3 yields

$$\displaystyle \begin{aligned} \mathbf{Q}(x,t)={\mathbf{E}}_1(x,t) + \frac{1}{2}t^{-1/2}{\mathbf{P}}_1(|r(z_0)|). \end{aligned} $$
(130)

Therefore, using (56) gives the following formula for the solution of the Cauchy problem (1)–(2):

$$\displaystyle \begin{aligned} \begin{aligned} q(x,t)&=2\mathrm{i}\mathrm{e}^{-\mathrm{i}\omega(z_0)}\mathrm{e}^{-2\mathrm{i} t\theta(z_0;z_0)}c(z_0)^{-2}(2t^{1/2})^{-2\mathrm{i}\nu(z_0)}Q_{12}(x,t)\\ &= \mathrm{e}^{-\mathrm{i}\omega(z_0)}\mathrm{e}^{-2\mathrm{i} t\theta(z_0;z_0)}c(z_0)^{-2}(2t^{1/2})^{-2\mathrm{i}\nu(z_0)}\left[2\mathrm{i} E_{1,12}(x,t) + \frac{1}{2}t^{-1/2}2\mathrm{i} P_{1,12}(|r(z_0)|)\right]\\ &= \mathrm{e}^{-\mathrm{i}\omega(z_0)}\mathrm{e}^{-2\mathrm{i} t\theta(z_0;z_0)}c(z_0)^{-2}(2t^{1/2})^{-2\mathrm{i}\nu(z_0)}\left[2\mathrm{i} E_{1,12}(x,t) + \frac{1}{2}t^{-1/2}\beta(|r(z_0)|)\right], \end{aligned} {} \end{aligned} $$
(131)

where we recall \(\omega (z_0)=\arg (r(z_0))\), \(\theta (z_0;z_0)=-2z_0^2\), the definition (4) of ν(z 0), the definition (42) of c(z 0), and the definitions (77) and (79) of |β(m = |r(z 0)|)|2 and \(\arg (\beta (m=|r(z_0)|))\) respectively. Since the factors to the left of the square brackets have unit modulus, from Proposition 3.3 it follows that q(x, t) has exactly the representation (3) in which \(|\mathcal {E}(x,t)|=2|E_{1,12}(x,t)|=\mathcal {O}(t^{-3/4})\) as t → +∞, uniformly with respect to x. This completes the proof of Theorem 1.1.
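
As a consistency check on the amplitude, the modulus of the coefficient of t −1∕2 produced by (131) is \(\tfrac{1}{2}|\beta(|r(z_0)|)|\) (the prefactors having unit modulus), and by (77) and (4),

$$\displaystyle \begin{aligned} \left|\tfrac{1}{2}\beta(|r(z_0)|)\right|{}^2=\frac{1}{4}\left(-\frac{1}{\pi}\ln(1-|r(z_0)|{}^2)\right)=\frac{1}{2}\nu(z_0)=|\alpha(z_0)|{}^2, \end{aligned} $$

in agreement with the leading term of (3).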

Remark

The use of truncations of the Neumann series (118) for E(u, v;x, t) yields a corresponding asymptotic expansion of q(x, t) as t → +∞. In other words, it is straightforward (but tedious) to compute explicit corrections to the leading term in the asymptotic formula (3) by expanding \(\mathcal {E}(x,t)\). For instance, the formula (126) gives

(132)

i.e., an explicit double integral plus a remainder. Using the estimates (119) and (125) we find that the remainder term satisfies

(133)

Using this result in (131) gives in place of (3) the corrected asymptotic formula

$$\displaystyle \begin{aligned} q(x,t)=q^{(0)}(x,t) + q^{(1)}(x,t) + \mathcal{E}^{(1)}(x,t) {} \end{aligned} $$
(134)

where

$$\displaystyle \begin{aligned} q^{(0)}(x,t):=t^{-1/2}\alpha(z_0)\mathrm{e}^{\mathrm{i} x^2/(4t)-\mathrm{i}\nu(z_0)\ln(8t)} \end{aligned} $$
(135)

is the leading term in (3),

(136)

is an explicit correction (see (82)–(83)), and where \(\mathcal {E}^{(1)}(x,t)\) is an error term satisfying \(\mathcal {E}^{(1)}(x,t)=\mathcal {O}(t^{-1})\) as t → +∞ uniformly with respect to \(x\in\mathbb{R}\). Theorem 1.1 implies that the correction satisfies \(q^{(1)}(x,t)=\mathcal{O}(t^{-3/4})\) as t → +∞, but the explicit formula (136) allows for a complete analysis of the correction. For instance, we are in a position to seek reflection coefficients r(z) in the Sobolev space \(H^1(\mathbb{R})\) with |r(z)|≤ ρ < 1 for which the correction saturates the upper bound of \(\mathcal {O}(t^{-3/4})\), or to determine under which conditions on r(z) the correction term can be smaller. Under additional hypotheses the expansion (134) can be carried out to higher order, with subsequent corrections involving iterated double integrals of W, which in turn involve \({\overline {\partial }}\)-derivatives of the extensions E j, j = 1, 3, 4, 6, and the parabolic cylinder functions contained in the matrix P(ζ;m) solving Riemann-Hilbert Problem 3.