Abstract
We present a new and relatively elementary method for studying the solution of the initial-value problem for dispersive linear and integrable equations in the large-t limit, based on a generalization of steepest descent techniques for Riemann-Hilbert problems to the setting of \({\overline {\partial }}\)-problems. Expanding upon prior work (Dieng and McLaughlin, Long-time asymptotics for the NLS equation via \({\overline {\partial }}\) methods, arXiv:0805.2807, 2008) of the first two authors, we develop the method in detail for the linear and defocusing nonlinear Schrödinger equations, and show how in the case of the latter it gives sharper asymptotics than previously known under essentially minimal regularity assumptions on initial data.
1 Introduction
The long time behavior of solutions q(x, t) of the Cauchy initial-value problem for the defocusing nonlinear Schrödinger (NLS) equation
with initial data decaying for large x:
has been studied extensively, under various assumptions on the smoothness and decay properties of the initial data q 0 [3, 5, 6, 8, 10, 19, 20]. The asymptotic behavior takes the following form: as t → +∞, one has
where \(\mathcal {E}(x,t)\) is an error term and for , ν(z) and α(z) are defined by
and
Here z 0 = −x∕(4t), Γ is the gamma function, and r(z) is the so-called reflection coefficient associated to the initial data q 0. The connection between the initial data q 0(x) and the reflection coefficient r(z) is achieved through the spectral theory of the associated self-adjoint Zakharov-Shabat differential operator
acting in as described, for example, in [6]. See also the contribution of Perry in this volume: [17, Section 2].
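For orientation, the quantity ν(z) and the modulus of the complex amplitude α(z 0) take the following standard forms (cf. [3, 6]; the first is consistent with the identity \(\mathrm{e}^{-\pi\nu(z_0)}=(1-|r(z_0)|^2)^{1/2}\) used in the proof of Lemma 3.1 below, while the phase of α(z 0), which involves \(\arg\Gamma(\mathrm{i}\nu(z_0))\) and \(\arg r(z_0)\), varies in form between references and is omitted here):

```latex
\nu(z) = -\frac{1}{2\pi}\ln\bigl(1-|r(z)|^{2}\bigr),
\qquad
|\alpha(z_0)|^{2} = \frac{\nu(z_0)}{2}.
```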
The modulus |α(z 0)| of the complex amplitude α(z 0) as written in (4) was first obtained by Segur and Ablowitz [19] from trace formulæ under the assumption that q(x, t) has the form (3) where \(\mathcal {E}(x,t)\) is small for large t. Zakharov and Manakov [20] took the form (3) as an ansatz to motivate a kind of WKB analysis of the reflection coefficient r(z) and as a consequence were able to also calculate the phase of α(z 0), obtaining for the first time the phase as written in (5). Its [10] was the first to observe the key role played in the large-time behavior of q(x, t) by an “isomonodromy” problem for parabolic cylinder functions; this problem has been an essential ingredient in all subsequent studies of the large-t limit and as we shall see it is a non-commutative analogue of the Gaussian integral that produces the familiar factors of \(\sqrt {2\pi }\) in the stationary phase approximation of integrals. The first time that the form (3) itself was rigorously deduced from first principles (rather than assumed) and proven to be accurate for large t (incidentally reproducing the formulæ (4)–(5) in an ansatz-free fashion) was in the work of Deift and Zhou [3] (see [6] for a pedagogic description) who brought the recently introduced nonlinear steepest descent method [4] to bear on this problem. Indeed, under the assumption of high orders of smoothness and decay on the initial data q 0, the authors of [3] proved that \(\mathcal {E}(x,t)\) satisfies
It is reasonable to expect that any estimate of the error term \(\mathcal {E}(x,t)\) would depend on the smoothness and decay assumptions made on q 0, and so it is natural to ask what happens to the estimate (6) if the assumptions on q 0 are weakened. Early in this millennium, Deift and Zhou developed some new tools for the analysis of Riemann-Hilbert problems, originally aimed at studying the long time behavior of perturbations of the NLS equation [7]. Their methods allowed them to establish long time asymptotics for the Cauchy problem (1)–(2) with essentially minimal assumptions on the initial data [8]. Indeed, they assumed the initial data q 0 to lie in the weighted Sobolev space
It is well known that if , then the associated reflection coefficient satisfies , where
and more generally the spectral transform \(\mathcal {R}\) associated with the Zakharov-Shabat operator \(\mathcal {L}\) (6) is a map , \(q_{0} \mapsto r=\mathcal {R}q_0\) that is a bi-Lipschitz bijection [21]. The result of [8] is then that the Cauchy problem (1)–(2) for has a unique weak solution for which (3) holds with an error term \(\mathcal {E}\left (x,t \right )\) that satisfies, for any fixed κ in the indicated range,
Subsequently, McLaughlin and Miller [13, 14] developed a method for the asymptotic analysis of Riemann-Hilbert problems in which jumps across contours are “smeared out” over a two-dimensional region in the complex plane, resulting in an equivalent \({\overline {\partial }}\) problem that is more easily analyzed. In this paper we adapt and extend this method to the Riemann-Hilbert problem of inverse-scattering associated to the Cauchy problem (1)–(2). The main point of our work is this: by using the \({\overline {\partial }}\) approach, we avoid all delicate estimates involving Cauchy projection operators in L p spaces (which are central to the work in [8]). Instead it is only necessary to estimate certain double integrals, an exercise involving nothing more than calculus. Remarkably, this elementary approach also sharpens the result obtained in [8]. Our result is as follows.
Theorem 1.1
The Cauchy problem (1)–(2) with initial data q 0 in the weighted Sobolev space defined by (7) has a unique weak solution having the form (3)–(5) in which r(z) is the reflection coefficient associated with q 0 and where the error term satisfies
The main features of this result are as follows.
-
The error estimate is an improvement over the one reported in [8], i.e., we prove that the endpoint case \(\kappa =\tfrac {1}{4}\) holds in (9). Our methods also suggest that the improved estimate (10) on the error is sharp.
-
As with the result (9) obtained in [8], the improved estimate (10) only requires the condition , i.e., it is not necessary that , but only that r lies in the classical Sobolev space and satisfies |r(z)|≤ ρ for some ρ < 1. Dropping the weighted L 2 condition on r corresponds to admitting rougher initial data q 0. For such data, the solution of the Cauchy problem is of a weaker nature, as discussed at the end of [8].
-
The new \({\overline {\partial }}\) method which is used to derive the estimate (10) affords a considerably less technical proof than previous results.
-
The method used to establish the estimate (10) is readily extended to derive a more detailed asymptotic expansion, beyond the leading term (see the remark at the end of the paper).
Given the reflection coefficient associated with initial data via the spectral transform \(\mathcal {R}\) for the Zakharov-Shabat operator \(\mathcal {L}\), the solution of the Cauchy problem for the nonlinear Schrödinger equation (1) may be described as follows. For full details, we again refer the reader to [17, Section 2]. Consider the following Riemann-Hilbert problem:
Riemann-Hilbert Problem 1
Given parameters , find M = M(z) = M(z;x, t), a 2 × 2 matrix, satisfying the following conditions:
Analyticity
M is an analytic function of z in the domain . Moreover, M has a continuous extension to the real axis from the upper (lower) half-plane denoted M +(z) (M −(z)) for .
Jump Condition
The boundary values satisfy the jump condition
where the jump matrix V M(z) is defined by
Normalization
There is a matrix M 1(x, t) such that
From the solution of this Riemann-Hilbert problem, one defines a function q(x, t), , by
The fact of the matter is then that q(x, t) is the solution of the Cauchy problem (1)–(2).
Recent studies of the long-time behavior of the solution of the NLS initial-value problem (1)–(2) have involved the detailed analysis of the solution M to Riemann-Hilbert Problem 1. As regularity assumptions on the initial data q 0 are relaxed, this analysis becomes more involved, technically. The purpose of this manuscript is to carry out a complete analysis of the long-time asymptotic behavior of M under the assumption that (or really, ), as in [6], but via a \(\overline {\partial }\) approach which replaces technical harmonic analysis estimates involving Cauchy projection operators with very straightforward estimates involving some explicit double integrals.
The proof of Theorem 1.1 using the methodology of [13, 14] was originally obtained by the first two authors in 2008 [9]. Since then the technique has been used successfully to study many other related problems of large-time behavior for various integrable equations. In [2], the authors used the methods of [9] to analyze the stability of multi-dark-soliton solutions of (1). In [1], the method of [9] was used to confirm the soliton resolution conjecture for the focusing version of the NLS equation under generic conditions on the discrete spectrum. In [12], the large-time behavior of solutions of the derivative NLS equation was studied using \({\overline {\partial }}\) methods, and in [11] the same techniques were used to establish a form of the soliton resolution conjecture for this equation. Similar \({\overline {\partial }}\) methods, based more directly on the original approach of [13, 14], have also been useful in studying some problems of nonlinear wave theory not necessarily in the realm of large time asymptotics, for instance [15], which deals with boundary-value problems for (1) in the semiclassical limit. Based on this continued interest in \({\overline {\partial }}\) methods, we decided to write this review paper containing all of the results and arguments of [9], some in a new form, as well as some additional expository material which we hope the reader might find helpful.
2 An Unorthodox Approach to the Corresponding Linear Problem
In order to motivate the \({\overline {\partial }}\) steepest descent method, we first consider the Cauchy problem for the linear equation corresponding to (1), namely
with initial condition (2) for which . By Fourier transform theory, if
is the Fourier transform of the initial data, then \(\hat {q}_0\) as a function of also lies in the weighted Sobolev space , and the solution of the Cauchy problem is given in terms of \(\hat {q}_0\) by the integral
where θ(z;z 0) and z 0 are as defined in (12). It is worth noticing that this formula is exactly what arises from Riemann-Hilbert Problem 1 via the formula (14) if only the jump matrix V M(z) in (12) is replaced with the triangular form
in which case the solution of Riemann-Hilbert Problem 1 is explicitly given by
This shows that the reflection coefficient r(z) is a nonlinear analogue of (the complex conjugate of) the Fourier transform \(\hat {q}_0(z)\).
Assuming that is fixed, the method of stationary phase applies to deduce an asymptotic expansion of the integral in (17). The only point of stationary phase is z = z 0, and the classical formula of Stokes and Kelvin yields
where the error term \(\mathcal {E}(x,t)\) is of order t −3∕2 as t → +∞ under the best assumptions on \(\hat {q}_0\), assumptions that guarantee that the error has a complete asymptotic expansion in terms proportional via explicit oscillatory factors to descending half-integer powers of t. To derive this expansion from first principles consists of several steps as follows.
-
One introduces a smooth partition of unity to separate contributions to the integral from points z close to z 0 and far from z 0.
-
One uses integration by parts to estimate the contributions from points z far from z 0. This requires having sufficiently many derivatives of \(\hat {q}_0(z)\), which corresponds to having sufficient decay of q 0(x).
-
One approximates \(\hat {q}_0(z)\) locally near z 0 by an analytic function with an accuracy related to the size of t and the number of terms of the expansion that are desired.
-
One uses Cauchy’s theorem to deform the path of integration for the approximating integrand to a diagonal path over the stationary phase point. The slope of the diagonal path produces the phase factor of e−iπ∕4, and the path integral of the leading term \(\hat {q}_0(z_0)\mathrm {e}^{-2\mathrm {i} t\theta (z;z_0)}\) in the local approximation of \(\hat {q}_0(z)\mathrm {e}^{-2\mathrm {i} t\theta (z;z_0)}\) is a Gaussian integral that produces the factor of \(\sqrt {\pi }\).
It is possible to implement all steps of this method assuming, say, that q 0 (and hence also \(\hat {q}_0\)) is a Schwartz-class function. However, as one reduces the regularity of q 0 it becomes impossible to obtain an expansion to all orders. More to the point, even in the presence of Schwartz-class regularity, the proof of the stationary phase expansion by the traditional methods outlined above is complicated, perhaps more so than necessary as we hope to convince the reader.
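The classical stationary phase approximation can be illustrated numerically. In the sketch below, both the data \(\hat{q}_0\) (a Gaussian) and the phase θ(z;z 0) = (z − z 0)2 are illustrative assumptions standing in for the objects of the text; the model phase shares only the critical-point structure at z = z 0. The leading term uses the factors \(\sqrt{\pi}\) and e−iπ∕4 discussed above.

```python
import numpy as np

def stationary_phase_demo(t, z0=0.3):
    # Model data: a Schwartz-class stand-in for the Fourier transform
    # qhat_0 (an illustrative assumption, not the object of the text).
    qhat = lambda z: np.exp(-z ** 2)
    # Quadratic model phase with a single stationary point at z = z0.
    theta = lambda z: (z - z0) ** 2
    z = np.linspace(-10.0, 10.0, 800001)
    dz = z[1] - z[0]
    vals = qhat(z) * np.exp(-2j * t * theta(z))
    # Trapezoid rule; very accurate here since the integrand and all its
    # derivatives are negligible at the endpoints.
    integral = (0.5 * (vals[0] + vals[-1]) + vals[1:-1].sum()) * dz
    # Leading term: the Gaussian integral over the stationary point
    # contributes the factors sqrt(pi/(2t)) and e^{-i pi/4}.
    leading = qhat(z0) * np.sqrt(np.pi / (2 * t)) * np.exp(-1j * np.pi / 4)
    return abs(integral - leading), abs(leading)

err50, lead50 = stationary_phase_demo(50.0)
err200, _ = stationary_phase_demo(200.0)
# The discrepancy is a small fraction of the leading term and shrinks as t grows.
assert err50 < 0.05 * lead50 and err200 < err50
```

For smooth data the discrepancy decays faster than the t −1∕2 size of the leading term itself, consistent with the t −3∕2 error rate quoted above.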
To explain an alternative approach that bears fruit in the case that is of interest here, let Ω denote a simply-connected region in the complex plane with counter-clockwise oriented piecewise-smooth boundary ∂ Ω. If is differentiable (as a function of two real variables \(u= \operatorname {\mathrm {Re}}(z)\) and \(v= \operatorname {\mathrm {Im}}(z)\)) and extends continuously to ∂ Ω, then it follows from Stokes’ theorem that
where dA(u, v) denotes area measure in the plane and where \({\overline {\partial }}\) is the Cauchy-Riemann operator:
which annihilates all analytic functions of z = u + iv. Now consider the diagram shown in Fig. 1. We define a function E(u, v) on Ω+ ∪ Ω− as follows:
Observe that:
-
On the boundary v = 0 (i.e., ), we have \(\cos {}(2\arg (u+\mathrm {i} v-z_0))\equiv 1\), so \(E(u,0)=\hat {q}_0(u)\).
-
On the boundary v = z 0 − u, we have \(\cos {}(2\arg (u+\mathrm {i} v-z_0))\equiv 0\), so \(E(u,z_0-u)=\hat {q}_0(z_0)\) which is independent of u.
The first point shows that E(u, v) is an extension of the function \(\hat {q}_0(z)\) from the real z-axis into the domain Ω+ ∪ Ω−. The second point shows that the extension evaluates to a constant on the diagonal part of the boundary of Ω+ ∪ Ω−. In the interior of Ω+ ∪ Ω−, E(u, v) inherits smoothness properties from \(\hat {q}_0(u)\). In particular, under the assumption , we may apply Stokes’ theorem in the form (21) to the functions \(\pm E(u,v)\mathrm {e}^{-2\mathrm {i} t\theta (u+\mathrm {i} v;z_0)}\) on the domains Ω± and add up the results to obtain the formula
The first term on the right-hand side originates from the diagonal boundary of Ω+ ∪ Ω− and because E is constant there it is an exact Gaussian integral evaluating to the explicit leading term on the right-hand side of (20). Therefore, the remaining term on the right-hand side of (24) is an exact double-integral representation of the error term \(\mathcal {E}(x,t)\) in the formula (20). Since implies which in turn implies that \(\hat {q}_0(z)\) is defined for all , the leading term in (20) certainly makes sense.
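As a quick sanity check on the \({\overline {\partial }}\) formalism, one can verify with finite differences that the Cauchy-Riemann operator annihilates analytic functions while detecting non-analyticity; the grid point and test functions below are arbitrary illustrative choices.

```python
import numpy as np

def dbar(F, u, v, h=1e-5):
    # Cauchy-Riemann operator (1/2)(d/du + i d/dv) applied to F(u, v),
    # approximated with central finite differences.
    du = (F(u + h, v) - F(u - h, v)) / (2 * h)
    dv = (F(u, v + h) - F(u, v - h)) / (2 * h)
    return 0.5 * (du + 1j * dv)

# z^2 is analytic, so dbar annihilates it; zbar is not, and dbar(zbar) = 1.
analytic = lambda u, v: (u + 1j * v) ** 2
conjugate = lambda u, v: u - 1j * v

a = dbar(analytic, 0.7, -0.4)
c = dbar(conjugate, 0.7, -0.4)
assert abs(a) < 1e-8 and abs(c - 1) < 1e-8
```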
To estimate the error term we will only use the fact that , i.e., that \(\hat {q}_0\) lies in the (classical, unweighted) Sobolev space . First note that since \(\mathrm {e}^{-2\mathrm {i} t\theta (z;z_0)}\) is an entire function of z, \({\overline {\partial }}\mathrm {e}^{-2\mathrm {i} t\theta (z;z_0)}\equiv 0\), so by the product rule it suffices to have suitable estimates of \({\overline {\partial }} E(u,v)\) for u + iv ∈ Ω±. Indeed,
A direct computation using (22) gives
In polar coordinates (ρ, ϕ) centered at the point and defined by \(u=z_0+\rho \cos {}(\phi )\) and \(v=\rho \sin {}(\phi )\), the Cauchy-Riemann operator (22) takes the equivalent form
so as \(\arg (u+\mathrm {i} v-z_0)=\phi \) we have
Therefore we easily obtain the inequality
Note that by the fundamental theorem of calculus and the Cauchy-Schwarz inequality,
so (29) implies that also
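The step just invoked is the standard Hölder-\(\tfrac{1}{2}\) estimate: writing the increment of \(\hat{q}_0\) as an integral of its derivative and applying the Cauchy–Schwarz inequality gives

```latex
\left|\hat{q}_0(u)-\hat{q}_0(z_0)\right|
=\left|\int_{z_0}^{u}\hat{q}_0'(s)\,\mathrm{d}s\right|
\le \left\|\hat{q}_0'\right\|_{L^2(\mathbb{R})}\,|u-z_0|^{1/2}.
```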
Therefore, using (31) in (25) gives
where
The key point is that for t > 0, the exponential factors are bounded by 1 and decaying at infinity in Ω±. So, by iterated integration, Cauchy-Schwarz, and the change of variable w = t 1∕2(u − z 0),
In exactly the same way, we also get . Note that K is an absolute constant. The integrals J ±(x, t) are independent of q 0 and by translation of z 0 to the origin and reflection through the origin, the integrals are also independent of x and are obviously equal. To calculate them we introduce rescaled polar coordinates by \(u=z_0+t^{-1/2}\rho \cos {}(\phi )\) and \(v=t^{-1/2}\rho \sin {}(\phi )\) to get
It is a calculus exercise to show that the above double integral is convergent and hence defines L as a second absolute constant.
It follows from these elementary calculations that if only , then the error term \(\mathcal {E}(x,t)\) in (20) obeys the estimate
which decays as t → +∞ at exactly the same rate as in the claimed result for the nonlinear problem as formulated in Theorem 1.1. The same method can be used to obtain higher-order corrections under additional hypotheses of smoothness for the Fourier transform \(\hat {q}_0\). One simply needs to integrate by parts with respect to \(u= \operatorname {\mathrm {Re}}(z)\) in the double integral on the right-hand side of (24).
In the rest of the paper we will show that almost exactly the same elementary estimates suffice to prove the nonlinear analogue of this result, namely Theorem 1.1.
3 Proof of Theorem 1.1
We will prove Theorem 1.1 in several systematic steps. After some preliminary observations involving the jump matrix V M(z) in Riemann-Hilbert Problem 1 in Sects. 3.1 and 3.2, we shall see that the subsequent analysis of Riemann-Hilbert Problem 1 parallels our study of the associated linear problem detailed in Sect. 2. In particular we find natural analogues of the nonanalytic extension method (Sect. 3.3), of the Gaussian integral giving the leading term in the stationary phase formula (Sect. 3.4), and of the simple double integral estimates leading to the proof of its accuracy (Sect. 3.5). Finally, in Sect. 3.6 we assemble the ingredients to arrive at the formula (3) with the improved error estimate, completing the proof of Theorem 1.1.
3.1 Jump Matrix Factorization
The jump matrix V M(z) of Riemann-Hilbert Problem 1 defined in (12) can be factored in two different ways that are useful in different intervals of the jump contour as indicated:
and
The importance of these factorizations is that they provide an algebraic separation of the oscillatory exponential factors \(\mathrm {e}^{\pm 2\mathrm {i} t\theta (z;z_0)}\). Indeed, if the reflection coefficient r(z) is an analytic function of , then in each case the left-most (right-most) factor has an analytic continuation into the lower (upper) half-plane near the indicated half-line that is exponentially decaying to the identity matrix as t → +∞ due to z 0 being a simple critical point of θ(z;z 0). This observation is the basis for the steepest descent method for Riemann-Hilbert problems as first formulated in [4]. In the more realistic case that r(z) is nowhere analytic, this analytic continuation method must be supplemented with careful approximation arguments that are quite detailed [8]. We will proceed differently in Sect. 3.3 below. But first we need to deal with the central diagonal factor in the factorization (38) to be used for z < z 0.
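The algebra behind the two factorizations can be verified symbolically. The jump matrix below is written in one common convention for the defocusing problem, \(V_M=\begin{pmatrix}1-|r|^2 & -\bar{r}\,\mathrm{e}^{-2\mathrm{i}t\theta}\\ r\,\mathrm{e}^{2\mathrm{i}t\theta} & 1\end{pmatrix}\); the precise signs and conjugations in the text's formulas (12) and (37)–(38) may differ, so this is an illustrative check rather than a transcription.

```python
import sympy as sp

# Symbols: r and rbar stand for r(z) and its conjugate; E for e^{2 i t theta(z; z0)}.
r, rbar, E = sp.symbols('r rbar E')
d = 1 - r * rbar  # 1 - |r(z)|^2

# Jump matrix in one common defocusing convention (an assumption).
V = sp.Matrix([[d, -rbar / E], [r * E, 1]])

# Upper-lower factorization, useful for z > z0.
UL = sp.Matrix([[1, -rbar / E], [0, 1]]) * sp.Matrix([[1, 0], [r * E, 1]])

# Lower-diagonal-upper factorization with central factor (1-|r|^2)^{sigma_3},
# useful for z < z0.
LDU = (sp.Matrix([[1, 0], [r * E / d, 1]])
       * sp.Matrix([[d, 0], [0, 1 / d]])
       * sp.Matrix([[1, -rbar / (d * E)], [0, 1]]))

assert sp.simplify(V - UL) == sp.zeros(2, 2)
assert sp.simplify(V - LDU) == sp.zeros(2, 2)
```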
3.2 Modification of the Diagonal Jump
We now show how the diagonal factor \((1-|r(z)|{ }^2)^{\sigma _3}\) in the jump matrix factorization (38) can be replaced with a constant diagonal matrix. Consider the complex scalar function defined by the formula
This function is important because according to the Plemelj formula, it satisfies the scalar jump conditions δ +(z;z 0) = δ −(z;z 0)(1 −|r(z)|2) for z < z 0 and δ +(z;z 0) = δ −(z;z 0) for z > z 0. Hence the diagonal matrix \(\delta (z;z_0)^{\sigma _3}\) is typically used in steepest descent theory to deal with the diagonal factor in (38). However, δ(z;z 0) has a mild singularity at z = z 0:
where ν(z 0) is defined in (4) and the power function is interpreted as the principal branch. The use of δ(z;z 0) introduces this singularity unnecessarily into the Riemann-Hilbert analysis. In our approach we will therefore use a related function:
where the constant c(z 0) is defined by
The function f(z;z 0) has numerous useful properties that we summarize here.
Lemma 3.1 (Properties of f(z;z 0))
Suppose that and there exists ρ < 1 such that |r(z)|≤ ρ holds for all (as is implied by which follows from ). Then
-
The functions f(z;z 0)±1 are well-defined and analytic in z for \(\arg (z-z_0)\in (-\pi ,\pi )\).
-
The functions f(z;z 0)±1 are uniformly bounded independently of :
(43)
-
The function f(z;z 0) satisfies the following asymptotic condition:
(44)
-
The functions f(z;z 0)±2 are Hölder continuous with exponent 1∕2. In particular, f(z;z 0)±2 → 1 as z → z 0 and there is a constant K = K(ρ) > 0 such that |f(z;z 0)±2 − 1|≤ K|z − z 0|1∕2 holds whenever \(\arg (z-z_0)\in (-\pi ,\pi )\).
-
The continuous boundary values f ±(z;z 0) taken by f(z;z 0) on for z < z 0 from ±Im(z) > 0 satisfy the jump condition
$$\displaystyle \begin{aligned} f_+(z;z_0)=f_-(z;z_0)\frac{1-|r(z)|{}^2}{1-|r(z_0)|{}^2},\quad z<z_0. \end{aligned} $$
(45)
Proof
The assumptions imply in particular that , so for z in a small neighborhood of each point disjoint from the integration contour, the integral in (39) is absolutely convergent and so δ(z;z 0) and δ(z;z 0)−1 are analytic functions of z on that neighborhood. The same argument shows that the first integral in the exponent of the expression (42) for c(z 0) is convergent. Since implies that r(⋅) is Hölder continuous with exponent 1∕2, the condition |r(⋅)|≤ ρ < 1 further implies that \(\ln (1-|r(s)|{ }^2)\) is also Hölder continuous with exponent 1∕2, from which it follows that the second integral in the exponent of the expression (42) is also convergent. Therefore c(z 0) exists, and clearly |c(z 0)| = 1. Since the principal branch of \((z-z_0)^{\mp \mathrm {i}\nu (z_0)}\) is analytic for \(\arg (z-z_0)\in (-\pi ,\pi )\), the analyticity of f(z;z 0)±1 in the same domain follows. This proves the first statement.
In [8, Proposition 2.12] it is asserted that under the hypothesis |r(z)|≤ ρ < 1, the function δ(z;z 0) defined by (39) satisfies the uniform estimates (1 − ρ 2)1∕2 ≤|δ(z;z 0)|±1 ≤ (1 − ρ 2)−1∕2 whenever \(\arg (z-z_0)\in (-\pi ,\pi )\). If \(\arg (z-z_0)=0\), then obviously |δ(z;z 0)| = 1, so it remains to prove the estimates hold for \( \operatorname {\mathrm {Im}}(z)\neq 0\). Following [12], since \(\ln (1-\rho ^2)\le \ln (1-|r(s)|{ }^2)\le 0\), if \(u= \operatorname {\mathrm {Re}}(z)\) and \(v= \operatorname {\mathrm {Im}}(z)\) we have \( \operatorname {\mathrm {Im}}((s-z)^{-1})=v/((s-u)^2+v^2)\), so assuming v > 0,
Bounding the left-hand side below by extending the integration to (using \(v\ln (1-\rho ^2)<0\)) gives the lower bound (1 − ρ 2)1∕2 ≤|δ(z;z 0)|, and by taking reciprocals, the upper bound |δ(z;z 0)|−1 ≤ (1 − ρ 2)−1∕2 for \( \operatorname {\mathrm {Im}}(z)>0\). The corresponding result for \( \operatorname {\mathrm {Im}}(z)<0\) follows by the exact symmetry \(\delta (\bar {z};z_0)^{-1}=\overline {\delta (z;z_0)}\). Combining these bounds with |c(z 0)| = 1 and the elementary inequalities \((1-\rho ^2)^{1/2}\le (1-|r(z_0)|{ }^2)^{1/2}=\mathrm {e}^{-\pi \nu (z_0)}\le |(z-z_0)^{\mathrm {i}\nu (z_0)}|\le \mathrm {e}^{\pi \nu (z_0)}=(1-|r(z_0)|{ }^2)^{-1/2}\le (1-\rho ^2)^{-1/2}\) holding for \(\arg (z-z_0)\in (-\pi ,\pi )\) then proves the second statement.
Since , from (39) a dominated convergence argument shows that δ(z;z 0) → 1 as z →∞ provided only that the limit is taken in such a way that for some given 𝜖 > 0, dist(z, [−∞, z 0)) ≥ 𝜖. Combining this fact with (41) proves the third statement.
Analyticity implies Hölder continuity, so provided z is bounded away from the half-line (−∞, z 0], Hölder-1∕2 continuity of f(z;z 0)±2 is obvious. But, since \(\ln (1-|r(\cdot )|{ }^2)\) is Hölder continuous on with exponent 1∕2, by the Plemelj-Privalov theorem [16, §19] and a related classical result [16, §22], the functions δ(z;z 0)±1 are uniformly Hölder continuous with exponent 1∕2 in any neighborhood of the integration contour except for the endpoint z = z 0, and hence the same is true for the functions f(z;z 0)±2. However, the latter functions are better-behaved near z = z 0. To see this, note that since
we have from (39) and (41) that
where \(h(s):=\ln (1-|r(s)|{ }^2)-\ln (1-|r(z_0)|{ }^2)\) for s < z 0 and h(s) := 0 for s ≥ z 0. As the first three factors are analytic at z = z 0 while h(s) is Hölder continuous with exponent 1∕2 in a neighborhood of s = z 0, the same arguments cited above apply and yield the desired Hölder continuity of f(z;z 0)±2 near z = z 0. It only remains to show that f(z 0;z 0)±2 = 1, but this follows immediately from (42) and (48). This proves the fourth statement.
Finally, the fifth statement follows from the definition (41) of f(z;z 0) and the jump condition δ +(z;z 0) = δ −(z;z 0)(1 −|r(z)|2) for z < z 0. □
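The bounds in the second statement can also be probed numerically from the defining formula (39); the Gaussian profile |r(s)| = ρ e−s² below is a hypothetical sample satisfying |r(s)|≤ ρ < 1, not data from the text.

```python
import numpy as np
from scipy.integrate import quad

rho, z0 = 0.5, 0.2
absr = lambda s: rho * np.exp(-s ** 2)  # hypothetical profile with |r(s)| <= rho < 1

def delta(z):
    # delta(z; z0) = exp( (1/(2 pi i)) \int_{-inf}^{z0} ln(1-|r(s)|^2)/(s-z) ds ),
    # evaluated off the real axis so the integrand is nonsingular.
    f = lambda s: np.log(1.0 - absr(s) ** 2) / (s - z)
    re = quad(lambda s: f(s).real, -np.inf, z0)[0]
    im = quad(lambda s: f(s).imag, -np.inf, z0)[0]
    return np.exp((re + 1j * im) / (2j * np.pi))

# Check (1-rho^2)^{1/2} <= |delta(z; z0)| <= (1-rho^2)^{-1/2} at sample points.
lo, hi = np.sqrt(1 - rho ** 2), 1 / np.sqrt(1 - rho ** 2)
for z in (0.1 + 0.5j, -1.0 + 0.2j, 2.0 - 1.0j):
    assert lo <= abs(delta(z)) <= hi
```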
Using the diagonal matrix \(f(z;z_0)^{\sigma _3}\) to conjugate the unknown M(z) of Riemann-Hilbert Problem 1 by introducing
where
it is easy to check that N(z) satisfies several conditions explicitly related to those of M(z) according to Riemann-Hilbert Problem 1. Indeed, N(z) must be a solution of the following equivalent problem.
Riemann-Hilbert Problem 2
Given parameters , find N = N(z) = N(z;x, t), a 2 × 2 matrix, satisfying the following conditions:
Analyticity
N is an analytic function of z in the domain . Moreover, N has a continuous extension to the real axis from the upper (lower) half-plane denoted N +(z) (N −(z)) for .
Jump Condition
The boundary values satisfy the jump condition
where the jump matrix V N(z) may be written in the alternate forms
where f +(z;z 0) (f −(z;z 0)) is the boundary value taken by f(z;z 0) from the upper (lower) half-plane.
Normalization
There is a matrix N 1(x, t) such that
Note that the matrix coefficient N 1(x, t) is necessarily related to the coefficient M 1(x, t) in Riemann-Hilbert Problem 1 by a diagonal conjugation:
Therefore, the reconstruction formula (14) can be written in terms of N 1(x, t) as
The net effect of this step is therefore to replace the non-constant diagonal central factor in (38) with its constant value at z = z 0 and to introduce power-law asymptotics at z = ∞ at the cost of slight modifications of the left-most and right-most factors in (37)–(38). In the formula (49) we have also taken the opportunity to conjugate off the constant value of θ(z;z 0) and the phase of r(z) at the critical point z = z 0.
3.3 Nonanalytic Extensions and \({\overline {\partial }}\) Steepest Descent
The key to the steepest descent method, both in its classical analytic framework and in the \({\overline {\partial }}\) setting, is to get the oscillatory factors \(\mathrm {e}^{\pm 2\mathrm {i} t\theta (z;z_0)}\) off the real axis and into appropriate sectors of the complex z-plane where they decay as t → +∞. We will accomplish this by exactly the same means as in the linear case, namely by defining non-analytic extensions of the non-oscillatory coefficients of \(\mathrm {e}^{\pm 2\mathrm {i} t\theta (z;z_0)}\) in the left-most and right-most jump matrix factors in (37)–(38) by a slight generalization of the formula (23). In reference to the diagram in Fig. 2, we define sectors
Note that Ω3 = Ω+ and Ω6 = Ω− in reference to Fig. 1. Now we define extensions on the domains shaded in Fig. 2 by following a very similar approach as in Sect. 2:
It is easy to check that:
-
E 1(u, v) evaluates to \(f(z;z_0)^{-2}r(z)\mathrm {e}^{-\mathrm {i}\omega (z_0)}\) for on the boundary of Ω1.
-
E 3(u, v) evaluates to \(-f_+(z;z_0)^2\overline {r(z)}\mathrm {e}^{\mathrm {i}\omega (z_0)}/(1-|r(z)|{ }^2)\) for on the boundary of Ω3.
-
E 4(u, v) evaluates to \(f_-(z;z_0)^{-2}r(z)\mathrm {e}^{-\mathrm {i}\omega (z_0)}/(1-|r(z)|{ }^2)\) for on the boundary of Ω4.
-
E 6(u, v) evaluates to \(-f(z;z_0)^2\overline {r(z)}\mathrm {e}^{\mathrm {i}\omega (z_0)}\) for on the boundary of Ω6.
Thus exactly as in Sect. 2 these formulæ represent extensions of their values on the real sector boundaries into the complex plane that become constant on the diagonal sector boundaries (see (60) below), with the constant chosen in each case to ensure continuity of the extension along the interior boundary of each sector. The only essential difference between the extension formulæ (58) and the formula (23) from Sect. 2 is the way that the factors f(z;z 0)±2 are treated differently from the factors involving r(z); the reason for using f(u + iv;z 0)±2 in (58) rather than f(u;z 0)±2 will become clearer in Sect. 3.5 when we compute \({\overline {\partial }} E_j(u,v)\), j = 1, 3, 4, 6, and take advantage of the fact (see Lemma 3.1) that \({\overline {\partial }} f(u+\mathrm {i} v;z_0)^{\pm 2}\equiv 0\) in the interior of each sector.
We use the extensions to “open lenses” about the intervals z < z 0 and z > z 0 by making another substitution:
Our notation O(u, v;x, t) reflects the viewpoint that unlike N(z;x, t), z = u + iv, O(u, v;x, t) is not a piecewise-analytic function in the complex plane due to the non-analytic extensions E j(u, v), j = 1, 3, 4, 6. The exponential factors in (59) all have modulus less than 1 and decay exponentially to zero as t → +∞ pointwise in the interior of each of the indicated sectors, a fact that suggests that (59) is a near-identity transformation in the limit t → +∞. We also have the following property.
Lemma 3.2 (Relation Between N and O for Large )
Let be fixed, and suppose that and that there exists a constant ρ < 1 such that |r(z)|≤ ρ holds for all (conditions that are true for as follows from ). Then holds as z = u + iv →∞ where the decay of the error term is uniform with respect to direction in each sector Ω j, j = 1, …, 6.
Proof
The exponential factors in (59) also decay as z = u + iv →∞ provided that v →∞. Since means that \((1+|\cdot |)\hat {r}(\cdot )\) is square-integrable where \(\hat {r}\) denotes the Fourier transform of r, the Cauchy-Schwarz inequality implies that also . Hence by the Riemann-Lebesgue Lemma, r(u) is bounded, continuous, and tends to zero as u →∞. As 1 −|r(u)|2 ≥ 1 − ρ 2 > 0, the same properties hold for r(u)∕(1 −|r(u)|2). Since the hypotheses of Lemma 3.1 hold, f(u + iv;z 0)±2 are bounded functions, so the desired result follows from using extension formulæ (58) in (59). □
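The Cauchy–Schwarz step in this proof is the familiar passage from the weighted-\(L^2\) condition on \(\hat{r}\) to an \(L^1\) bound:

```latex
\int_{\mathbb{R}}|\hat{r}(\xi)|\,\mathrm{d}\xi
\le\left(\int_{\mathbb{R}}\frac{\mathrm{d}\xi}{(1+|\xi|)^{2}}\right)^{1/2}
\left(\int_{\mathbb{R}}(1+|\xi|)^{2}\,|\hat{r}(\xi)|^{2}\,\mathrm{d}\xi\right)^{1/2}
=\sqrt{2}\,\bigl\|(1+|\cdot|)\hat{r}\bigr\|_{L^{2}(\mathbb{R})},
```

so that \(\hat{r}\in L^1(\mathbb{R})\) and the Riemann-Lebesgue lemma applies.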
Despite the non-analyticity of the extensions, the above proof shows also that each of the extensions E j(u, v), j = 1, 3, 4, 6, is continuous on the relevant sector and therefore O(u, v;x, t) is a piecewise-continuous function of with jump discontinuities across the sector boundaries. We address these jump discontinuities next.
3.4 The Isomonodromy Problem of Its
Although O(u, v;x, t) is not analytic in the sectors shaded in Fig. 2 for essentially the same reason that the double integral error term in (24) does not vanish identically, the fact that the extensions E j(u, v), j = 1, 3, 4, 6, evaluate to constants on the diagonals:
implies that if we introduce the recentered and rescaled independent variable ζ := 2t 1∕2(z − z 0), the jump conditions satisfied by O(u, v;x, t) across the sector boundaries are exactly the same as those satisfied by the matrix function P(ζ;|r(z 0)|) solving the following Riemann-Hilbert problem.
Riemann-Hilbert Problem 3
Let m ∈ [0, 1) be a parameter, and seek a 2 × 2 matrix function P = P(ζ) = P(ζ;m) with the following properties:
Analyticity P(ζ) is an analytic function of ζ in the sectors \(|\arg (\zeta )|<\tfrac {1}{4}\pi \), \(\tfrac {1}{4}\pi <\pm \arg (\zeta )<\tfrac {3}{4}\pi \), and \(\tfrac {3}{4}\pi <\pm \arg (\zeta )<\pi \). It admits a continuous extension from each of these five sectors to its boundary.
Jump Conditions Denoting by P +(ζ) (resp., P −(ζ)) the boundary value taken on any one of the rays of the jump contour Σ P from the left (resp., right) according to the orientation shown in Fig. 3, the boundary values are related by P +(ζ;m) = P −(ζ;m)V P(ζ;m), where the jump matrix V P(ζ;m) is defined on the five rays of Σ P by
Normalization as ζ →∞.
This Riemann-Hilbert problem is essentially the isomonodromy problem identified by Its [10], and it is the analogue in the nonlinear setting of the Gaussian integral that is the leading term of the stationary phase expansion (24) in the linear case. Although the jump conditions for O(u, v;x, t) correspond exactly to those of P(ζ;|r(z 0)|), the scaling z↦ζ = 2t 1∕2(z − z 0) introduces an extra factor into the asymptotics as z →∞; in fact, the matrix \((2t^{1/2})^{\mathrm {i}\nu (z_0)\sigma _3}\mathbf {O}(u,v;x,t)\) satisfies the normalization condition of P(ζ;|r(z 0)|), and the constant pre-factor has no effect on the jump conditions. Hence in Sect. 3.5 below we shall use the latter as a parametrix for the former.
However, we first develop the explicit solution of Riemann-Hilbert Problem 3. The first step is to consider the related unknown \(\mathbf {U}(\zeta ;m):=\mathbf {P}(\zeta ;m)\mathrm {e}^{-\mathrm {i}\zeta ^2\sigma _3/2}\) and observe from the conditions of Riemann-Hilbert Problem 3 that U(ζ;m) is analytic in exactly the same five sectors as P(ζ;m), and that it satisfies jump conditions of exactly the form (61) except that the factors \(\mathrm {e}^{\pm \mathrm {i}\zeta ^2}\) are everywhere replaced by 1; in other words, the jump matrix for U(ζ;m) on each jump ray is constant along the ray. It follows that the ζ-derivative U′(ζ;m) satisfies the same “raywise constant” jump conditions as U(ζ;m) itself. Then, since it is easy to prove by Liouville’s theorem that any solution P(ζ;m) of Riemann-Hilbert Problem 3 has unit determinant, it follows that U(ζ;m) is invertible, and a calculation shows that the function U′(ζ;m)U(ζ;m)−1 is continuous and hence, by Morera’s theorem, analytic in the whole ζ-plane except possibly at ζ = 0. We will assume analyticity at the origin as well and show later that this is consistent. As an entire function of ζ, the product U′(ζ;m)U(ζ;m)−1 is determined by its asymptotic behavior as ζ →∞. Assuming further that the normalization condition in Riemann-Hilbert Problem 3 implies that, for some matrix coefficient P 1(m) to be determined, both expansions
hold as ζ →∞, such as would arise from term-by-term differentiation, it follows also that
as ζ →∞. Therefore the entire function is determined by Liouville’s theorem to be a linear polynomial:
where [A, B] := AB −BA is the matrix commutator. In other words, U(ζ;m) satisfies the first-order system of linear differential equations:
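In sketch form, assuming an expansion of the type \(\mathbf{P}(\zeta;m)=(\mathbb{I}+\mathbf{P}_1(m)\zeta^{-1}+\mathcal{O}(\zeta^{-2}))\zeta^{\mathrm{i}\nu\sigma_3}\) standing in for the elided normalization (62)–(63) (signs here are convention-dependent), the Liouville computation reads:

```latex
\mathbf{U}'\mathbf{U}^{-1}
=\mathbf{P}'\mathbf{P}^{-1}-\mathrm{i}\zeta\,\mathbf{P}\sigma_3\mathbf{P}^{-1}
=-\mathrm{i}\zeta\sigma_3+\mathrm{i}[\sigma_3,\mathbf{P}_1(m)]+\mathcal{O}(\zeta^{-1}),
\qquad\zeta\to\infty,
```

and since the left-hand side is entire, Liouville's theorem forces the \(\mathcal{O}(\zeta^{-1})\) remainder to vanish identically, leaving \(\mathbf{U}'(\zeta;m)=\left(-\mathrm{i}\zeta\sigma_3+\mathrm{i}[\sigma_3,\mathbf{P}_1(m)]\right)\mathbf{U}(\zeta;m)\).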
Now, another easy consequence of Liouville’s theorem is that there is at most one solution of Riemann-Hilbert Problem 3. Using the fact that m ∈ [0, 1), it is not difficult to show that if P(ζ;m) is a solution of Riemann-Hilbert Problem 3, then so is
so by uniqueness it follows that \(\mathbf {P}(\zeta ;m)=\sigma _1\overline {\mathbf {P}(\overline {\zeta };m)}\sigma _1\). Combining this symmetry with the first expansion in (62) shows that \(P_{1,21}(m)=\overline {P_{1,12}(m)}\), so the differential equations can be written in the form
The constant \(\beta :=P_{1,12}(m)\) is unknown, but if it is considered as a parameter, then eliminating the second row shows that the elements U 1j, j = 1, 2, of the first row satisfy Weber’s equation for parabolic cylinder functions in the form:
The solutions of this equation are well-documented in the Digital Library of Mathematical Functions [18, §12]. Equation (68) has particular solutions denoted U(a, ±y) and U(−a, ±iy), where U(⋅, ⋅) is a special functionFootnote 2 with well-known integral representations, asymptotic expansions, and connection formulæ.
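For reference, Weber's equation in the normalization of [18, Eq. 12.2.2], whose solutions include \(U(a,\pm y)\) and, after the substitutions \(y\mapsto\pm\mathrm{i}y\) and \(a\mapsto -a\), also \(U(-a,\pm\mathrm{i}y)\):

```latex
\frac{\mathrm{d}^{2}w}{\mathrm{d}y^{2}}-\left(\frac{y^{2}}{4}+a\right)w=0.
```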
The second step is to represent the elements U 1j as linear combinations of a fundamental pair of so-called numerically satisfactory solutions specially adapted to each of the five sectors of analyticity for Riemann-Hilbert Problem 3. Thus, we write
and then using the first row of (67) along with identities allowing the elimination of derivatives of U [18, Eqs. 12.8.2–12.8.3] we get the following representation of the elements of the second row of U(ζ;m):
Finally, we determine the coefficients \(A_j^{(i)}\) and \(B_j^{(i)}\) for j = 1, 2 and i = 0, ±1, ±2, as well as the value of β = β(m) so that all of the conditions of Riemann-Hilbert Problem 3 are satisfied by \(\mathbf {P}(\zeta ;m)=\mathbf {U}(\zeta ;m)\mathrm {e}^{\mathrm {i}\zeta ^2\sigma _3/2}\). The advantage of using numerically satisfactory fundamental pairs is that the asymptotic expansion [18, Eq. 12.9.1]
can be used to determine from (69)–(70) the asymptotic behavior of U(ζ;m) in each sector for the purposes of comparison with the first formula in (63). This immediately shows that for consistency it is necessary to take \(A_1^{(i)}=0\) and \(B_2^{(i)}=0\) for i = 0, ±1, ±2. Next, it is useful to consider the trivial jump conditions for the first column of U(ζ;m) (across \(\arg (\zeta )=-\tfrac {1}{4}\pi \) and \(\arg (\zeta )=\tfrac {3}{4}\pi \)) and for the second column of U(ζ;m) (across \(\arg (\zeta )=\tfrac {1}{4}\pi \) and \(\arg (\zeta )=-\tfrac {3}{4}\pi \)). These imply the identities \(B_1^{(0)}=B_1^{(-1)}\), \(B_1^{(1)}=B_1^{(2)}\) (from matching the first column) and \(A_2^{(0)}=A_2^{(1)}\), \(A_2^{(-2)}=A_2^{(-1)}\) (from matching the second column). The diagonal jump condition satisfied by U(ζ;m) across the negative real axis then yields the additional identities \(B_1^{(-2)}=(1-m^2)^{-1}B_1^{(2)}\) and \(A_2^{(2)}=(1-m^2)^{-1}A_2^{(-2)}\). With this information, we have found that U(ζ;m) necessarily has the form
and
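For reference, the asymptotic expansion [18, Eq. 12.9.1] invoked in these comparisons is, for \(|\arg(y)|\le\tfrac{3}{4}\pi-\delta\) with \(\delta>0\):

```latex
U(a,y)\sim\mathrm{e}^{-y^{2}/4}\,y^{-a-1/2}\sum_{s=0}^{\infty}(-1)^{s}\,
\frac{\left(\tfrac{1}{2}+a\right)_{2s}}{s!\,(2y^{2})^{s}},\qquad y\to\infty,
```

where \((\cdot)_{2s}\) is the Pochhammer symbol.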
Appealing again to (71) now shows that U(ζ;m) agrees with the first formula in (63) up to the leading term only if the parameter a in Weber’s equation (68) satisfies
and the remaining constants \(A_2^{(0)}\), \(A_2^{(-1)}\), \(B_1^{(0)}\), and \(B_1^{(1)}\), are given in terms of β by
Only \(\arg (\beta )\) remains to be determined, and for this we recall the nontrivial jump conditions for the first (second) column of U(ζ;m) across the rays \(\arg (\zeta )=\tfrac {1}{4}\pi ,-\tfrac {3}{4}\pi \) (the rays \(\arg (\zeta )=-\tfrac {1}{4}\pi ,\tfrac {3}{4}\pi \)). Actually all four of these jump conditions contain equivalent information due to the fact that the cyclic product of the jump matrices in Riemann-Hilbert Problem 3 about the origin is the identity, so we just examine the transition of the first column across the ray \(\arg (\zeta )=\tfrac {1}{4}\pi \) implied by the jump conditions in Riemann-Hilbert Problem 3. Using all available information, the jump condition matches the connection formula [18, Eq. 12.2.18] if and only if
Combining this with (77) determines β=β(m) and then using (78) in (72)–(76) fully determines U(ζ;m) and hence also \(\mathbf {P}(\zeta ;m)=\mathbf {U}(\zeta ;m)\mathrm {e}^{\mathrm {i}\zeta ^2\sigma _3/2}\). This completes the construction of the necessarily unique solution of Riemann-Hilbert Problem 3. One can easily check directly that U′(ζ;m)U(ζ;m)−1 is analytic at ζ = 0, and using (71) (which is known to be a formally differentiable expansion) one confirms the asymptotic expansions (62)–(63), justifying after the fact all assumptions made to arrive at the explicit solution.
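Although the displayed formulæ (77)–(79) are not reproduced here, the outcome of this construction in the standard normalization (compare (4)–(5) and, e.g., [3, 6]) is, up to convention-dependent branch and sign choices:

```latex
\nu(m):=-\frac{1}{2\pi}\log\left(1-m^{2}\right),\qquad
|\beta(m)|^{2}=\nu(m),\qquad
\arg(\beta(m))=\frac{\pi}{4}+\arg\Gamma(\mathrm{i}\nu(m)).
```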
We note that for each m ∈ [0, 1), P(ζ;m) is uniformly bounded with respect to \(\zeta\in\mathbb{C}\), since it is locally bounded and the normalization factor in the asymptotics as ζ →∞ satisfies
Since \(\det (\mathbf {P}(\zeta ;m))=1\), the same holds for P(ζ;m)−1. Moreover, it is not difficult to see that if ∥⋅∥ is a matrix norm, then \(\sup_{\zeta}\|\mathbf{P}(\zeta;m)\|\) is a continuous function of m ∈ [0, 1). Therefore the estimates on P(ζ;m) and P(ζ;m)−1 hold uniformly with respect to m ∈ [0, ρ] for any ρ < 1.
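The boundedness of the normalization factor rests on the elementary estimate for imaginary powers:

```latex
\left|\zeta^{\pm\mathrm{i}\nu}\right|
=\mathrm{e}^{\mp\nu\arg(\zeta)}\le\mathrm{e}^{\pi|\nu|},
\qquad \zeta\neq 0,\quad |\arg(\zeta)|\le\pi,
```

so \(\zeta^{\mathrm{i}\nu\sigma_3}\) and its inverse are bounded uniformly in ζ, with a bound that depends continuously on m through ν = ν(m).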
3.5 The Equivalent \({\overline {\partial }}\) Problem and Its Solution for Large t
The next part of the proof of Theorem 1.1 is the nonlinear analogue of the estimation of the error \(\mathcal {E}(x,t)\) in the stationary phase formula (20) by double integrals in the z-plane. Here instead of a double integral we will have a double-integral equation arising from a \({\overline {\partial }}\)-problem. To arrive at this problem, we simply define a matrix function E(u, v;x, t) by comparing the “open lenses” matrix \((2t^{1/2})^{\mathrm {i}\nu (z_0)\sigma _3}\mathbf {O}(u,v;x,t)\) with its parametrix P(2t 1∕2(z − z 0);|r(z 0)|):
We claim that E(u, v;x, t) satisfies the following problem.
\({\overline {\partial }}\)-Problem 1
Let \(x\in\mathbb{R}\) and t > 0 be parameters. Find a 2 × 2 matrix function E = E(u, v) = E(u, v;x, t), with the following properties:
Continuity E is a continuous function of \((u,v)\in\mathbb{R}^2\).
Nonanalyticity E is a (weak) solution of the partial differential equation \({\overline {\partial }}\mathbf {E}(u,v)=\mathbf {E}(u,v)\mathbf {W}(u,v)\), where W(u, v) = W(u, v;x, t) is defined by
and
Note that W(u, v;x, t) has jump discontinuities across the sector boundaries in general.
Normalization E(u, v) → I as (u, v) →∞.
To show the continuity, first note that in each of the six sectors Ωj, j = 1, …, 6, E(u, v;x, t) is continuous as a function of (u, v) up to the sector boundary. Indeed, the first factor in (81) is independent of (u, v), and the second factor in (81) has the claimed continuity because this is a property of the solution N(u + iv;x, t) of Riemann-Hilbert Problem 2 and of the change-of-variables formula (59). Finally, P(ζ;m) has unit determinant and its explicit formula in terms of parabolic cylinder functions shows that its restriction to each sector extends to an entire function of ζ, which guarantees the asserted continuity of the third factor in (81). Moreover, the matrices \((2t^{1/2})^{\mathrm {i}\nu (z_0)\sigma _3}\mathbf {O}(u,v;x,t)\) and P(2t 1∕2(u + iv − z 0);|r(z 0)|) satisfy exactly the same jump conditions across the six rays that form the common boundaries of neighboring sectors, from which it follows that E +(u, v;x, t) = E −(u, v;x, t) holds across each of these rays and therefore E(u, v;x, t) may be regarded as a continuous function of \((u,v)\in\mathbb{R}^2\).
To show that \({\overline {\partial }}\mathbf {E}=\mathbf {E}\mathbf {W}\) holds, one simply differentiates E(u, v;x, t) in each of the six sectors, using the fact that O(u, v;x, t) is related to N(u + iv;x, t) explicitly by (59) and that both N(u + iv;x, t) and the unit-determinant matrix function P(2t 1∕2(u + iv − z 0);|r(z 0)|) are analytic functions of u + iv in each sector, and hence are annihilated by \({\overline {\partial }}\). The region of non-analyticity of E is therefore the union of shaded sectors shown in Fig. 2.
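Schematically, writing \(\mathbf{M}(u,v)\) for the matrix of extensions by which O differs from an analytic factor in each shaded sector (a notation used only in this sketch) and \(c:=(2t^{1/2})^{\mathrm{i}\nu(z_0)\sigma_3}\), the computation is:

```latex
\overline{\partial}\mathbf{E}
=c\,(\overline{\partial}\mathbf{O})\,\mathbf{P}^{-1}
=c\,\mathbf{O}\,\big(\mathbf{M}^{-1}\overline{\partial}\mathbf{M}\big)\,\mathbf{P}^{-1}
=\big(c\,\mathbf{O}\,\mathbf{P}^{-1}\big)\,
\underbrace{\mathbf{P}\big(\mathbf{M}^{-1}\overline{\partial}\mathbf{M}\big)\mathbf{P}^{-1}}_{=\,\mathbf{W}}
=\mathbf{E}\,\mathbf{W}.
```

The conjugation by P here is also why the estimates in Sect. 3.5 involve bounds on both ∥P(ζ;m)∥ and ∥P(ζ;m)−1∥.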
Finally, to show the normalization condition, we recall Lemma 3.2. Therefore, comparing the normalization conditions of Riemann-Hilbert Problem 2 for N(z;x, t) and of Riemann-Hilbert Problem 3 for P(ζ;m) shows that E(u, v;x, t) → I as (u, v) →∞ in \(\mathbb{R}^2\).
The rest of this section is devoted to the proof of the following result.
Proposition 3.3
Suppose that \(r\in H^1(\mathbb{R})\) with |r(z)|≤ ρ for some ρ < 1. If t > 0 is sufficiently large, then for all \(x\in\mathbb{R}\) there exists a unique solution of \({\overline {\partial }}\)-Problem 1 with the property that
exists and satisfies
Proof
To show that \({\overline {\partial }}\)-Problem 1 has a unique solution for t > 0 sufficiently large, and simultaneously obtain estimates for the solution E(u, v;x, t), we formulate a weakly-singular integral equation whose solution is that of \({\overline {\partial }}\)-Problem 1:
in which the identity matrix is viewed as a constant function on \(\mathbb{R}^2\). Indeed, this is a consequence of the distributional identity \({\overline {\partial }} z^{-1}=-\pi \delta \), where δ denotes the Dirac mass at the origin. We will solve the integral equation (86) in the space \(L^\infty(\mathbb{R}^2)\), by computing the corresponding operator normFootnote 3 of \(\mathcal{J}\) and showing that for large t > 0 it is less than 1. Thus, we begin with the elementary estimate
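Stepping back, the reason an equation of the form (86) solves \({\overline{\partial}}\)-Problem 1 can be sketched as follows, in one normalization of the kernel consistent with the identity \({\overline{\partial}}z^{-1}=-\pi\delta\) quoted above (the exact kernel and sign in (86) are not reproduced here):

```latex
\mathbf{E}(z)=\mathbb{I}-\frac{1}{\pi}\iint_{\mathbb{R}^{2}}
\frac{\mathbf{E}(w)\mathbf{W}(w)}{z-w}\,\mathrm{d}A(w)
\;\Longrightarrow\;
\overline{\partial}\mathbf{E}(z)
=-\frac{1}{\pi}\iint_{\mathbb{R}^{2}}\mathbf{E}(w)\mathbf{W}(w)\,
\overline{\partial}_{z}\frac{1}{z-w}\,\mathrm{d}A(w)
=\mathbf{E}(z)\mathbf{W}(z),
```

while the decay of the kernel as z →∞ gives the normalization E → 𝕀.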
Using the uniform boundedness of P(ζ;m) and its inverse with respect to ζ (i.e., there exists C > 0 such that ∥P(ζ;m)∥≤ C and ∥P(ζ;m)−1∥≤ C for all \(\zeta\in\mathbb{C}\) and all m ∈ [0, ρ] with ρ < 1), the assumption |r(z)|≤ ρ < 1 gives that
and of course W(u, v;x, t) ≡ 0 on Ω2 ∪ Ω5. By direct computation using (58) along with the analyticity of f(z;z 0)±2 provided by Lemma 3.1 and straightforward estimates of \(\cos {}(2\arg (u+\mathrm {i} v-z_0))\) and its \({\overline {\partial }}\)-derivative as in Sect. 2, we have the following analogues of (29):
and
Note that
holds under the condition |r(u)|≤ ρ < 1. Also, under the same condition,
where we used Lemma 3.1 and (30), and K > 0 depends on ρ but not on z 0. Exactly the same estimate holds for \(|f(u+\mathrm {i} v;z_0)^2\overline {r(u)}-\overline {r(z_0)}|\). In the same way, but also using (93),
Therefore again using Lemma 3.1, we see that there are constants L and M depending only on the upper bound ρ < 1 for , on , and on such that
Note that (96) is the nonlinear analogue of the estimate (31).
Combining (96) with (87)–(88) shows that for some constant D independent of (u, v), x, and t,
where the four terms are analogues in the nonlinear case of the double integrals defined in (33) for the linear case:
Estimation of the integrals I [1, 4](u, v;x, t) and J [1, 4](u, v;x, t) requires steps nearly identical to those for I [3, 6](u, v;x, t) and J [3, 6](u, v;x, t) (just note that the sign of the exponent always corresponds to decay in the sectors of integration), so for brevity we deal only with I [3, 6](u, v;x, t) and J [3, 6](u, v;x, t).
To estimate I [3, 6](u, v;x, t), by iterated integration we have
The inner integrals can be estimated by Cauchy-Schwarz, using the fact that :
Thus,
Without loss of generality, suppose that v > 0. Then
Using monotonicity of \(\sqrt {v-V}\) on V < 0 and the rescaling V = t −1∕2w, we get for the first term:
For the second term, we use the inequality e−b ≤ Cb −1∕4 for b > 0 and the rescaling V = vw to get
Using monotonicity of \(\mathrm {e}^{-8tV^2}\) on V > v and the change of variable V − v = t −1∕2w we get for the third term:
The upper bounds in (103)–(104) are all independent of v (and u), so combining them with (101)–(102) gives
where C denotes an absolute constant.
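The elementary inequality \(\mathrm{e}^{-b}\le C b^{-1/4}\) for b > 0, used twice in this proof, holds with the explicit constant \(C=\sup_{b>0}b^{1/4}\mathrm{e}^{-b}=(1/4)^{1/4}\mathrm{e}^{-1/4}\approx 0.551\) (the supremum is attained at b = 1∕4, by calculus). A quick numerical sanity check of this constant (not from the chapter; just an illustration):

```python
import math

# e^{-b} <= C * b^{-1/4} for all b > 0 is equivalent to
# C = sup_{b>0} g(b) with g(b) = b^{1/4} * e^{-b};
# g'(b) = 0 gives b = 1/4, so C = (1/4)^{1/4} * e^{-1/4} ≈ 0.5507.
C_exact = 0.25**0.25 * math.exp(-0.25)

# Numerical scan over (0, 30] confirming the supremum.
C_numeric = 0.0
for k in range(1, 300001):
    b = k / 10000.0
    C_numeric = max(C_numeric, b**0.25 * math.exp(-b))

assert abs(C_numeric - C_exact) < 1e-6
```

Any constant C at least this large therefore works in (103)–(104) and again in (114).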
To estimate J [3, 6](u, v;x, t) we again introduce iterated integrals in the same way as in (99) to obtain the inequality
Now, to estimate the inner U-integrals we will use Hölder’s inequality with conjugate exponents p > 2 and q < 2. Thus,
Now, by the change of variable U − z 0 = |V |w,
where the integral on the right-hand side is convergent as long as p > 2. Similarly, by the change of variable U − u = |V − v|w,
where the integral on the right-hand side is convergent as long as q > 1. Hence for any conjugate exponents 1 < q < 2 < p < ∞ with p −1 + q −1 = 1, we have for some constant C = C(p, q),
As before, assume without loss of generality that v > 0. Then
Using q > 1 and monotonicity of (v − V )1∕q−1 on V < 0 along with 1∕p + 1∕q = 1 and the rescaling V = t −1∕2w gives for the first integral
For the second integral, we again recall e−b ≤ Cb −1∕4 for b > 0 and rescale by V = vw to get
using also q, p < ∞. Finally, for the third integral, we use monotonicity of \(\mathrm {e}^{-8tV^2}\) and V 1∕p−1∕2 (for p > 2) on V > v and make the substitution V − v = t −1∕2w to get
Since the upper bounds in (113)–(115) are all independent of v (and u), combining them with (111)–(112) gives
where C denotes an absolute constant.
Returning to (97) and taking a supremum over \((u,v)\in\mathbb{R}^2\), we see that
holds, where D is a constant depending only on the upper bound ρ < 1 for , on , and on , and where \(\|\mathcal{J}\|\) denotes the norm of the weakly-singular integral operator \(\mathcal {J}\) acting in \(L^\infty(\mathbb{R}^2)\). It is a consequence of (117) that the integral equation (86) is uniquely solvable in \(L^\infty(\mathbb{R}^2)\) by a convergent Neumann series for sufficiently large t > 0:
where \(\mathcal {I}\) denotes the identity operator and I denotes the identity matrix viewed as a constant function on \(\mathbb{R}^2\), and that the solution satisfies
an estimate that is uniform with respect to \(x\in\mathbb{R}\). This proves the first assertion in Proposition 3.3.
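The Neumann-series bounds behind (118)–(119) are the standard ones: writing (86) as \(\mathbf{E}=\mathbb{I}+\mathcal{J}[\mathbf{E}]\) with \(\|\mathcal{J}\|<1\) for large t,

```latex
\mathbf{E}=\sum_{n=0}^{\infty}\mathcal{J}^{n}[\mathbb{I}],
\qquad
\left\|\mathbf{E}-\mathbb{I}\right\|_{L^{\infty}}
\le\frac{\|\mathcal{J}\|}{1-\|\mathcal{J}\|}\le 2\,\|\mathcal{J}\|
\quad\text{once}\quad\|\mathcal{J}\|\le\tfrac{1}{2},
```

so an operator-norm bound \(\|\mathcal{J}\|=\mathcal{O}(t^{-1/4})\) from (117) transfers directly to E − 𝕀.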
To prove the existence of the limit E 1(x, t) in (84), note that from the integral equation (86) we have
The second term satisfies
Now, following [12], let us examine the resulting double integral for u = 0, i.e., for z = u + iv restricted to the imaginary axis. Some simple trigonometry shows that
Therefore, if u = 0, the double integral on the right-hand side of (121) will tend to zero as |v|→∞ by the Lebesgue dominated convergence theorem provided that \(\mathbf{W}(\cdot,\cdot;x,t)\in L^1(\mathbb{R}^2)\). Using (88) and (96), we have
where (compare with (98), or better yet, (33))
Noting the resemblance with the double integrals (33) analyzed in Sect. 2, we can immediately obtain the estimate
for some constant C independent of x. Therefore, the second term on the right-hand side of (120) tends to zero as v →∞ if u = 0 (the limit is not uniform with respect to x since v is compared with z 0 in (122)). Comparing with (84), we obtain from (120) the formula
and exactly the same argument shows that E 1(x, t) is finite and uniformly decaying as t → +∞:
where we have used (119) and (125) and noted that the constants C and D are independent of x. This proves the second assertion in Proposition 3.3. □
3.6 The Solution of the Cauchy Problem (1)–(2) for t > 0 Large
Now we complete the proof of Theorem 1.1 by combining our previous results. The matrix function N(u + iv;x, t) agrees with O(u, v;x, t) for u = 0 and |v| sufficiently large given z 0 = −x∕(4t). Since according to (81),
we compute the matrix coefficient N 1(x, t) appearing in (56) by taking a limit along the imaginary axis in (54). Thus, we obtain \({\mathbf {N}}_1(x,t)=(2t^{1/2})^{-\mathrm {i}\nu (z_0)\sigma _3}\mathbf {Q}(x,t)(2t^{1/2})^{\mathrm {i}\nu (z_0)\sigma _3}\), where (using z = u + iv)
Using (62) and Proposition 3.3 yields
Therefore, using (56) gives the following formula for the solution of the Cauchy problem (1)–(2):
where we recall \(\omega (z_0)=\arg (r(z_0))\), \(\theta (z_0;z_0)=-2z_0^2\), the definition (4) of ν(z 0), the definition (42) of c(z 0), and the definitions (77) and (79) of |β(m = |r(z 0)|)|2 and \(\arg (\beta (m=|r(z_0)|))\) respectively. Since the factors to the left of the square brackets have unit modulus, from Proposition 3.3 it follows that q(x, t) has exactly the representation (3) in which \(|\mathcal {E}(x,t)|=|E_{1,12}(x,t)|=\mathcal {O}(t^{-3/4})\) as t → +∞, uniformly with respect to x. This completes the proof of Theorem 1.1.
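For comparison with (3): in the bookkeeping of [9], which this chapter follows, the leading term takes the shape below; the constant inside the logarithm depends on the convention adopted for α(z₀), so this is recorded only as a hedged reminder of the form of the result:

```latex
q(x,t)=t^{-1/2}\,\alpha(z_0)\,
\mathrm{e}^{\mathrm{i}x^{2}/(4t)-\mathrm{i}\nu(z_0)\log(8t)}
+\mathcal{E}(x,t),
\qquad z_0=-\frac{x}{4t},\qquad
|\mathcal{E}(x,t)|=\mathcal{O}(t^{-3/4}).
```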
Remark
The use of truncations of the Neumann series (118) for E(u, v;x, t) yields a corresponding asymptotic expansion of q(x, t) as t → +∞. In other words, it is straightforward (but tedious) to compute explicit corrections to the leading term in the asymptotic formula (3) by expanding \(\mathcal {E}(x,t)\). For instance, the formula (126) gives
i.e., an explicit double integral plus a remainder. Using the estimates (119) and (125) we find that the remainder term satisfies
Using this result in (131) gives in place of (3) the corrected asymptotic formula
where
is the leading term in (3),
is an explicit correction (see (82)–(83)), and where \(\mathcal {E}^{(1)}(x,t)\) is an error term satisfying \(\mathcal {E}^{(1)}(x,t)=\mathcal {O}(t^{-1})\) as t → +∞ uniformly with respect to \(x\in\mathbb{R}\). Theorem 1.1 implies that the correction is \(\mathcal {O}(t^{-3/4})\) as t → +∞, but the explicit formula (136) allows for a complete analysis of the correction. For instance, we are in a position to seek reflection coefficients r(z) in the Sobolev space \(H^1(\mathbb{R})\) with |r(z)|≤ ρ < 1 for which the correction saturates the upper bound of \(\mathcal {O}(t^{-3/4})\), or to determine under which conditions on r(z) the correction term can be smaller. Under additional hypotheses the expansion (134) can be carried out to higher order, with subsequent corrections involving iterated double integrals of W, which in turn involve \({\overline {\partial }}\)-derivatives of the extensions E j, j = 1, 3, 4, 6, and the parabolic cylinder functions contained in the matrix P(ζ;m) solving Riemann-Hilbert Problem 3.
Notes
- 1.
Since the hypothesis on q 0 implies that (1 + |x|)q 0(x) is square-integrable, it follows by Cauchy-Schwarz that \(q_0\in L^1(\mathbb{R})\), which in turn implies that the reflection coefficient r(z) is well-defined for each \(z\in\mathbb{R}\).
- 2.
In many works on long-time asymptotics for the Cauchy problem (1)–(2) written before the Digital Library of Mathematical Functions was freely available (e.g., [8, 9]), the solution of Riemann-Hilbert Problem 3 was developed in terms of the related function \(D_\nu (y):=U(-\tfrac {1}{2}-\nu ,y)\). Since most formulæ in [18, §12] are phrased in terms of U(⋅, ⋅), we favor the latter.
- 3.
All L p norms of matrix-valued functions in this section depend on the choice of matrix norm, which we always take to be induced by a norm on \(\mathbb{C}^2\).
References
M. Borghese, R. Jenkins, and K. D. T.-R. McLaughlin, “Long time asymptotic behavior of the focusing nonlinear Schrödinger equation,” Ann. Inst. H. Poincaré Anal. Non Linéaire 35, no. 4, 887–920, 2018.
S. Cuccagna and R. Jenkins, “On the asymptotic stability of N-soliton solutions of the defocusing nonlinear Schrödinger equation,” Commun. Math. Phys. 343, 921–969, 2016.
P. Deift, A. Its, and X. Zhou, “Long-time asymptotics for integrable nonlinear wave equations,” in A. S. Fokas and V. E. Zakharov, editors, Important Developments in Soliton Theory 1980–1990, 181–204, Springer-Verlag, Berlin, 1993.
P. Deift and X. Zhou, “A steepest descent method for oscillatory Riemann–Hilbert problems. Asymptotics for the mKdV equation,” Ann. Math. 137, 295–368, 1993.
P. Deift and X. Zhou, “Long-time asymptotics for integrable systems. Higher order theory,” Comm. Math. Phys. 165, 175–191, 1994.
P. Deift and X. Zhou, Long-time behavior of the non-focusing nonlinear Schrödinger equation — A case study, volume 5 of New Series: Lectures in Math. Sci., University of Tokyo, 1994.
P. Deift and X. Zhou, “Perturbation theory for infinite-dimensional integrable systems on the line. A case study,” Acta Math. 188, no. 2, 163–262, 2002.
P. Deift and X. Zhou, “Long-time asymptotics for solutions of the NLS equation with initial data in a weighted Sobolev space,” Comm. Pure Appl. Math. 56, 1029–1077, 2003.
M. Dieng and K. D. T.-R. McLaughlin, “Long-time asymptotics for the NLS equation via \({\overline {\partial }}\) methods,” arXiv:0805.2807, 2008.
A. R. Its, “Asymptotic behavior of the solutions to the nonlinear Schrödinger equation, and isomonodromic deformations of systems of linear differential equations,” Dokl. Akad. Nauk SSSR 261, 14–18, 1981. (In Russian.)
R. Jenkins, J. Liu, P. Perry, and C. Sulem, “Soliton resolution for the derivative nonlinear Schrödinger equation,” Commun. Math. Phys., doi.org/10.1007/s00220-018-3138-4, 2018.
J. Liu, P. A. Perry, and C. Sulem, “Long-time behavior of solutions to the derivative nonlinear Schrödinger equation for soliton-free initial data,” Ann. Inst. H. Poincaré Anal. Non Linéaire 35, no. 1, 217–265, 2018.
K. D. T.-R. McLaughlin and P. D. Miller, “The \(\overline {\partial }\) steepest descent method and the asymptotic behavior of polynomials orthogonal on the unit circle with fixed and exponentially varying nonanalytic weights,” Intern. Math. Res. Papers 2006, Article ID 48673, 1–77, 2006.
K. D. T.-R. McLaughlin and P. D. Miller, “The \(\overline {\partial }\) steepest descent method for orthogonal polynomials on the real line with varying weights,” Intern. Math. Res. Notices 2008, Article ID rnn075, 1–66, 2008.
P. D. Miller and Z.-Y. Qin, “Initial-boundary value problems for the defocusing nonlinear Schrödinger equation in the semiclassical limit,” Stud. Appl. Math. 134, no. 3, 276–362, 2015.
N. I. Muskhelishvili, Singular Integral Equations, Boundary Problems of Function Theory and Their Application to Mathematical Physics, Second edition, Dover Publications, New York, 1992.
P. A. Perry, “Inverse scattering and global well-posedness in one and two space dimensions,” in P. D. Miller, P. Perry, and J.-C. Saut, editors, Nonlinear Dispersive Partial Differential Equations and Inverse Scattering, Fields Institute Communications, volume 83, 161–252, Springer, New York, 2019. https://doi.org/10.1007/978-1-4939-9806-7_4.
F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, and B. V. Saunders, eds., NIST Digital Library of Mathematical Functions, http://dlmf.nist.gov/, Release 1.0.17, 2017.
H. Segur and M. J. Ablowitz, “Asymptotic solutions and conservation laws for the nonlinear Schrödinger equation,” J. Math. Phys. 17, 710–713 (part I) and 714–716 (part II), 1976.
V. E. Zakharov and S. V. Manakov, “Asymptotic behavior of nonlinear wave systems integrated by the inverse method,” Sov. Phys. JETP 44, 106–112, 1976.
X. Zhou, “The L 2-Sobolev space bijectivity of the scattering and inverse-scattering transforms,” Comm. Pure Appl. Math. 51, 697–731, 1998.
Acknowledgements
The first two authors were supported in part by NSF grants DMS-0451495, DMS-0800979, and the second author was supported by NSF Grant DMS-1733967. The third author was supported in part by NSF grant DMS-1812625.
Copyright information
© 2019 Springer Science+Business Media, LLC, part of Springer Nature
Cite this chapter
Dieng, M., McLaughlin, K.D.TR., Miller, P.D. (2019). Dispersive Asymptotics for Linear and Integrable Equations by the \(\overline {\partial }\) Steepest Descent Method. In: Miller, P., Perry, P., Saut, JC., Sulem, C. (eds) Nonlinear Dispersive Partial Differential Equations and Inverse Scattering. Fields Institute Communications, vol 83. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-9806-7_5
Print ISBN: 978-1-4939-9805-0
Online ISBN: 978-1-4939-9806-7