1 Introduction

We first present the binormal flow framework and the results obtained for it. Then in §1.2 we describe the results for the 1-D cubic nonlinear Schrödinger equation.

1.1 Evolution of Polygonal Lines Through the Binormal Flow and Intermittency

Vortex filaments in 3-D fluids appear when vorticity is large and concentrated in a thin tube around a curve in \({\mathbb {R}}^3\). The binormal (curvature) flow, which we refer to hereafter as BF, is the classical model for the dynamics of one vortex filament. It was derived by Da Rios in 1906, in his PhD thesis advised by Levi-Civita, by using a truncated Biot–Savart law and a renormalization in time ([18]). The evolution through the binormal flow of an \({\mathbb {R}}^3\)-curve \(\chi (t)\), parametrized by arclength x, is

$$\begin{aligned} \chi _t=\chi _x\wedge \chi _{xx}. \end{aligned}$$
(1)

Keeping in mind Frenet's system for the frame of a 3-D curve, composed of the tangent, normal and binormal vectors \((T,n,b)\),

$$\begin{aligned} \left( \begin{array}{c} T\\ n\\ b \end{array}\right) _x= \left( \begin{array}{ccc} 0 &{} c &{} 0 \\ -c &{} 0 &{} \tau \\ 0 &{} -\tau &{} 0 \end{array}\right) \left( \begin{array}{c} T\\ n\\ b \end{array}\right) , \end{aligned}$$

where \(c,\tau \) are the curvature and torsion, the binormal flow can be rewritten as

$$\begin{aligned} \chi _t=c\,b. \end{aligned}$$

BF was also derived as formal asymptotics in [1], and in [12] by using the technique of matched asymptotics in the Navier–Stokes equations (i.e. balancing the cross-section of the tube with the Reynolds number). In the recent paper [27], and still under some hypotheses on the persistence of the concentration of vorticity in the tube, BF is rigorously derived; moreover, the curves considered are not necessarily smooth. This derivation is based on the existence of a correspondence between the two Hamilton–Poisson structures that give rise to the Euler equations and to BF.

Existence results were given for curves with curvature and torsion in Sobolev spaces of high order ([21, 26, 34, 42]), and more generally existence results for currents were given in the framework of a weak formulation of the binormal flow ([28]). Recently, the Cauchy problem was shown to be well-posed for curves with a corner and curvature in weighted spaces ([3]).

An important feature of BF is that the tangent vector of a solution \(\chi (t)\) solves the Schrödinger map onto \({\mathbb {S}}^2\):

$$\begin{aligned} T_t=T\wedge T_{xx}. \end{aligned}$$

Furthermore, Hasimoto remarked in [26] that the function \(u(t,x)=c(t,x)e^{i\int _0^x\tau (t,s)ds}\), which he calls the filament function, satisfies a focusing 1-D cubic nonlinear Schrödinger equation (NLS).Footnote 1 Hasimoto’s transform can be viewed as an inverse Madelung transform, sending the Gross–Pitaevskii equation to a compressible Euler equation with quantum pressure. In order to avoid issues related to vanishing curvature, Bishop parallel frames ([7, 34]) can be used, as explained in §4.3.

Several examples of evolutions of curves through the binormal flow were obtained by first finding particular solutions of the 1-D cubic NLS and then solving the corresponding Frenet equations. Some of these examples are consistent at the qualitative level with classical vortex filament dynamics, such as the line, the ring, the helix and travelling-wave-type vortices. A special case is given by the self-similar solutions of the binormal flow. They are constructed from the solutions

$$\begin{aligned} u_\alpha (t,x)=\alpha \frac{e^{i\frac{x^2}{4t}}}{\sqrt{4\pi it}}=\alpha e^{it\Delta }\delta _0(x) \end{aligned}$$
(2)

of the 1-D cubic NLS equation, renormalized in a sense specified in §1.2, with a Dirac mass \(\alpha \delta _0\) at initial time. These BF solutions are of the type \(\chi (t,x)=\sqrt{t}G(\frac{x}{\sqrt{t}})\), and form a 1-parameter family \(\{\chi _{\alpha }, \alpha \ge 0\}\), with \(\chi _\alpha (t)\) characterized by its curvature \(c_\alpha (t,x)=\frac{\alpha }{\sqrt{t}}\) and its torsion \(\tau _\alpha (t,x)=\frac{x}{2t}\). These solutions were known and used already in the 1980s ([11, 36, 37, 50]). The existence of a trace at time \(t=0\) was proved rigorously in [25], where in particular it was shown that \(\chi _\alpha (0)\) is a broken line with one corner whose angle \(\theta \) satisfies

$$\begin{aligned} \sin \left( \frac{\theta }{2}\right) =e^{-\pi \frac{\alpha ^2}{2}}. \end{aligned}$$
(3)
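For concreteness, the correspondence (3) between the self-similar parameter \(\alpha \) and the corner angle \(\theta \) can be evaluated directly; the following minimal Python sketch (the function names are ours) implements (3) and its inverse, which is the quantity entering the summability condition of Theorem 1.1 below.

```python
import numpy as np

def angle_from_alpha(alpha):
    # theta in (0, pi) determined by sin(theta/2) = exp(-pi*alpha^2/2), cf. (3)
    return 2*np.arcsin(np.exp(-np.pi*alpha**2/2))

def alpha_from_angle(theta):
    # inverse relation: alpha = sqrt(-(2/pi)*log(sin(theta/2)))
    return np.sqrt(-2/np.pi*np.log(np.sin(theta/2)))

theta = np.pi/3
print(angle_from_alpha(alpha_from_angle(theta)), theta)   # the two maps are mutually inverse
```

Note that a flat angle \(\theta =\pi \) corresponds to \(\alpha =0\) (no corner), while sharper corners correspond to larger values of \(\alpha \).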

In particular, the Dirac mass at the NLS level corresponds to the formation of a corner on the curve, but the trace \(\alpha \delta _0\) of the filament function is not the filament function \(\theta \delta _0\) of \(\chi _\alpha (0)\). This turns out to have relevant consequences regarding the lack of continuity of some norms at the time when the corner is created. In [4] it is proved that \(\Vert \widehat{T_x}(\,\cdot , t)\Vert _\infty \) is discontinuous at that time. The same proof works if this norm is replaced by the following one,

$$\begin{aligned} \sup _j\int _{4\pi j}^{4\pi (j+1)}| \widehat{T_x}( x, t)|^2\,dx, \end{aligned}$$

which fits better within the framework of Theorem 1.4 because, due to the Frenet equations, \(T_x\) is at the same level of regularity as the corresponding filament functions that solve NLS.

We shall now turn our attention precisely to the evolution of curves that can generate corners in finite time. The case of the formation and instantaneous disappearance of one corner is now well understood, thanks to the characterization of the family of self-similar solutions and to the study in [3] of the evolution of non-closed curves with one corner and with curvature in weighted \(L^2\)-based spaces. On the other hand, a planar regular polygon with M sides is expected to evolve through the binormal flow to skew polygons with Mq sides at times \(t_{p,q}=\frac{p}{2\pi q}\) for odd q; see the numerical simulations in [23, 28], and [19], where the integration of the Frenet equations at the rational times \(t_{p,q}\) is also carried out.

In the present paper we place ourselves in the framework of initial data given by polygonal lines. The results presented are an important step toward filling the gap between the case of one corner and the much more delicate issue of closed polygons.

Theorem 1.1

(Evolution of polygonal lines through the binormal flow) Let \(\chi _0\) be an arclength parametrized polygonal line with corners located at \(x\in {\mathbb {Z}}\) and with a sequence of angles \(\theta _n\in (0,\pi )\) such that the sequence defined by (cf. (3))

$$\begin{aligned} \sqrt{-\frac{2}{\pi }\log \left( \sin \left( \frac{\theta _n}{2}\right) \right) }, \end{aligned}$$
(4)

belongs to \(l^{2,3}\). Then there exists \(\chi (t)\), a smooth solution of the binormal flow (1) for \(t\ne 0\) and a solution of (1) in the weak sense on \({\mathbb {R}}\), with

$$\begin{aligned} |\chi (t,x)-\chi _0(x)|\le C\sqrt{t},\quad \forall x\in {\mathbb {R}}, |t|\le 1. \end{aligned}$$

Remark 1.2

Under suitable conditions on the initial data \(\chi _0\), the evolution can have an intermittent behaviour: Proposition 3.2 ensures that at times \(t_{p,q}=\frac{1}{2\pi }\frac{p}{q}\) the curvature of \(\chi (t)\) displays concentrations near the locations \(x\in \frac{1}{q}{\mathbb {Z}}\), while \(\chi (t)\) is almost a straight segment in between.

Remark 1.3

There is a striking difference with respect to the case of a polygonal line with just one corner, in the following sense. The trajectory in time of the corner located at \((t,x)=(0,0)\) of a self-similar solution, \(\chi _\alpha (t,0)\), is a straight line for \(t>0\), as the Frenet frame of \(\chi _\alpha (t)\) is constant at \(x=0\). In §4.11 we show that for the evolution of a polygonal line with several corners the trajectory of each corner, as t goes to 0, is a logarithmic spiral. Therefore, the presence of another corner on a non-closed curve immediately modifies the trajectory.

The proof goes as follows. In view of (3) and Hasimoto’s transform we consider an appropriate 1-D cubic NLS equation with initial data

$$\begin{aligned} \sum _{k\in {\mathbb {Z}}} \alpha _k\delta _k, \end{aligned}$$

with \(\alpha _k\) complex numbers defined in a precise way from the curvature and torsion angles of \(\chi _0\). Theorem 1.4 gives us a solution u(t) for \(t>0\). From this smooth solution on \(]0,\infty [\) we construct a smooth solution \(\chi (t)\) of the binormal flow on \(]0,\infty [\), which we prove has a limit \(\chi (0)\) at \(t=0\). The goal is then to show that, modulo a translation and a rotation, \(\chi (0)\) is \(\chi _0\). This is done in several steps. First we show that the tangent vector has a limit at \(t=0\). Secondly we show that this limit is piecewise constant, so \(\chi (0)\) is a segment for \(x\in ]n,n+1[\), \(\forall n\in {\mathbb {Z}}\). Then we prove, by analyzing the frame of the curve along paths of self-similar variables, that \(\chi (0)\) presents corners at the same locations as \(\chi _0\), with the same angles as \(\chi _0\). We recover the torsion angles of \(\chi _0\) by using a similar analysis for the modulated normal vectors \({\tilde{N}}(t,x)=e^{i\sum _{j\ne x}|\alpha _j|^2\log \frac{x-j}{\sqrt{t}}}N(t,x).\) Therefore we recover \(\chi _0\) modulo a translation and a rotation. Applying this translation and rotation to \(\chi (t)\) gives us the desired solution of the binormal flow for \(t>0\) with limit \(\chi _0\) at \(t=0\). Uniqueness holds in the class of curves having filament functions of type (10). Having used the above recipe to construct the evolution of a polygonal line for \(t>0\), we can extend \(\chi (t)\) to negative times by using the time reversibility of the equation.

1.2 The Cubic NLS on \({\mathbb {R}}\) with Initial Data Given by Several Dirac Masses

We consider the cubic nonlinear Schrödinger equation on \({\mathbb {R}}\)

$$\begin{aligned} i\partial _t u+\Delta u \pm \frac{1}{2} |u|^{2}u=0. \end{aligned}$$
(5)

We first recall the known local well-posedness results, starting with what is known in the framework of Sobolev spaces. The equation is well-posed in \(H^{s}\) for any \(s\ge 0\) ([14, 22]). On the other hand, for \(s<0\) the Cauchy problem is ill-posed: in [29] uniqueness was proved to be lost by using the Galilean transformation, and in [16] norm-inflation phenomena were displayed. We note that the threshold with respect to the scaling invariance \(\lambda u(\lambda ^2t,\lambda x)\) is \(\dot{H}^{-\frac{1}{2}}\). For \(s\le -\frac{1}{2}\) the presence of norm-inflation phenomena with loss of regularity was pointed out in [13, 31], and norm inflation around any data was proved in [43]. Finally, a control on the growth of Sobolev norms of Schwartz solutions for \(-\frac{1}{2}<s<0\), on the line or on the circle, was shown in [30] and [33].

On the other hand, well-posedness holds for data with Fourier transform in \(L^p\) spaces, \(p<+\infty \) ([15, 24, 53]). A natural choice would be to consider initial data with Fourier transform in \(L^\infty \), as the space \({\mathcal {F}}(L^\infty )\) is also invariant under rescaling.

We shall now focus on the case of initial data of Dirac mass type. Note that the Dirac mass is borderline for \(\dot{H}^{-\frac{1}{2}}\) and that it belongs to \({\mathcal {F}}(L^\infty )\). For an initial datum given by one Dirac mass, \(u(0)=\alpha \delta _0\), the equation is ill-posed. More precisely, it is shown in [29], by using the Galilean invariance, that if there exists a unique solution, then it should be, for positive times,

$$\begin{aligned} \alpha \frac{e^{\mp i\frac{|\alpha |^2}{4\pi }\log \sqrt{t}+i\frac{x^2}{4t}}}{\sqrt{4\pi it}}, \end{aligned}$$

and then the initial datum is not recovered. We note here that this issue can be avoided by a simple change of phase that leads to the equation

$$\begin{aligned} \left\{ \begin{array}{l} i\partial _t u+\Delta u \pm \frac{1}{2} \left( |u|^{2}-A(t)\right) u=0,\\ u(0)=\alpha \delta _0, \end{array}\right. \end{aligned}$$

with \(A(t)=\frac{\alpha ^2}{4\pi t}\). With this choice the equation has as a solution precisely the fundamental solution of the linear equation, \(u_\alpha (t,x)\) introduced in (2). Adding a real potential A(t) is a very natural geometric normalization, as the BF solution constructed from an NLS solution u(t, x) is the same as the one constructed from \(e^{i\phi (t)}u(t,x)\), see §4.3. This type of Wick renormalization has been used in the periodic setting in previous works such as [9, 15, 44] and [45], although the motivation in these cases came just from the need of avoiding some resonant terms that become infinite.
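As a quick sanity check of this normalization (a sketch we add here, not part of the original argument), one can verify symbolically that \(u_\alpha \) solves the free Schrödinger equation and that \(|u_\alpha (t,x)|^2=\frac{\alpha ^2}{4\pi t}\), so that the bracket \(|u|^2-A(t)\) vanishes identically along \(u_\alpha \):

```python
import sympy as sp

t, x, alpha = sp.symbols('t x alpha', positive=True)

# Fundamental solution alpha * e^{i t Delta} delta_0 from (2), writing
# 1/sqrt(4*pi*i*t) = exp(-i*pi/4)/(2*sqrt(pi*t)) (principal branch).
u = alpha*sp.exp(sp.I*x**2/(4*t))*sp.exp(-sp.I*sp.pi/4)/(2*sp.sqrt(sp.pi*t))

print(sp.simplify(sp.I*sp.diff(u, t) + sp.diff(u, x, 2)))   # expected: 0 (free equation)
print(sp.simplify(sp.Abs(u)**2 - alpha**2/(4*sp.pi*t)))     # expected: 0, i.e. |u|^2 = A(t)
```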

However, even with this geometric renormalization the problem is still ill-posed, in the sense that small regular perturbations of \(u_\alpha (t)\) at time \(t=1\) were proved in [2] to behave near \(t=0\) as \(u_\alpha (t)+e^{i\log t} f(x)\) for some \(f\in H^1\). Therefore there is a loss of phase as t goes to zero.

This loss of phase is a usual phenomenon in the setting of nonlinear Schrödinger equations when singularities are formed, and it is of course a consequence of the gauge invariance of the equation. How to continue the solution after the singularity has been formed is therefore an important issue that appears recurrently in the literature, see for example [10, 38, 39, 40].

In [3] we found a natural geometric way to continue the BF solution after the singularity, in the shape of a corner, is created. As BF is time reversible, continuing a solution uniquely for negative times requires obtaining a curve trace \(\chi (0)\) at \(t=0\) and constructing a unique solution for positive times having as limit at \(t=0\) the inversely oriented curve \(\chi (0,-s)\). Note that using just continuity arguments and the characterization result of the self-similar solutions proved in [25], one can construct in an artificial way the continuation of a self-similar solution. A more delicate issue is how to determine the curve trace and its Frenet frame at time \(t=0\) for small regular perturbations of BF self-similar solutions at some positive time; we based our analysis in [3] on the characterization result of the self-similar solutions proved in [25]. In particular, such small regular perturbations at a positive time do not break the self-similar symmetry of the singularity created at \(t=0\).

In Theorem 1.1 we prove that this procedure can be extended, not without difficulties, to the case of a polygonal line, which can be viewed as a rough perturbation of the broken line with one corner. There is no need for the line to be planar, and infinitely many corners are permitted. In this case new problems concerning the phase loss appear at the NLS and frame levels, and again the characterization of the self-similar solutions plays a crucial role.

For these reasons in this article we consider as initial data a combination of Dirac masses,

$$\begin{aligned} u(0)=\sum _{k\in {\mathbb {Z}}}\alpha _k\delta _k, \end{aligned}$$
(6)

with coefficients in weighted summation spaces:

$$\begin{aligned} \Vert \{\alpha _k\}\Vert _{l^{p,s}}<\infty , \end{aligned}$$

where

$$\begin{aligned} \Vert \{\alpha _k\}\Vert _{l^{p,s}}:=\sum _{k\in {\mathbb {Z}}}(1+|k|)^{ps}|\alpha _k|^p. \end{aligned}$$
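As a small concrete illustration (a sketch of ours, not from the original text), the weighted quantity above can be computed directly for a finitely supported sequence, mirroring the definition literally:

```python
def l_ps_weight(alpha, p=2, s=1.0):
    # sum_k (1+|k|)^{p*s} |alpha_k|^p, exactly as in the definition above;
    # alpha is a dict {integer k: complex coefficient} with finite support
    return sum((1 + abs(k))**(p*s)*abs(a)**p for k, a in alpha.items())

# e.g. three corners of equal strength at k = -1, 0, 1
print(l_ps_weight({-1: 0.5, 0: 0.5, 1: 0.5}, p=2, s=1.0))
```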

This choice of initial data has its own interest from the point of view of the Schrödinger equation because, as far as we know, for the cubic nonlinearity in one dimension the only results at this critical level of regularity are the ones in [3] mentioned above, which deal with just one Dirac mass. The case of a periodic array of Dirac deltas of the same amplitude was studied in [19], where a candidate for a solution is proposed.

The case of a combination of Dirac masses as initial data for the Schrödinger equation with nonlinearity \(|u|^{p-1}u\) and subcritical power \(p<3\) was considered in [32]. It was shown that it admits a unique solution, of the form

$$\begin{aligned} u(t,x)=\sum _{k\in {\mathbb {Z}}}A_k(t)e^{it\Delta }\delta _k(x), \end{aligned}$$
(7)

where \(\{A_k\}\in {\mathcal {C}}([0,T];l^{2,1})\cap C^1(]0,T];l^{2,1})\). As the nonlinear power approaches the critical cubic power, things look more singular. In this paper we prove that the same type of ansatz is valid for a naturally renormalized cubic equation.

Let us notice that the initial data (6) has the property

$$\begin{aligned} \widehat{u(0)}(\xi )=\sum _{k\in {\mathbb {Z}}}\alpha _ke^{-ik\xi }, \end{aligned}$$
(8)

and in particular \(\widehat{u(0)}\) is \(2\pi \)-periodic. Moreover, the condition \(\{\alpha _k\}\in l^{2,s}\) translates into \(\widehat{u(0)}\in H^s(0,2\pi )\). Conversely, every \(2\pi \)-periodic function can be decomposed as in (8), and so it represents the Fourier transform on \({\mathbb {R}}\) of a combination of Dirac masses as in (6). We denote

$$\begin{aligned}&H^s_{pF}:=\{u\in {\mathcal {S}}'({\mathbb {R}}),\,\, {\hat{u}}(\xi +2\pi )={\hat{u}}(\xi ), {\hat{u}}\in H^s(0,2\pi )\}\subset \{u\in {\mathcal {S}}'({\mathbb {R}}),\\&\{\Vert {\hat{u}}\Vert _{H^s(2\pi j,2\pi (j+1))}\}_j\in l^\infty \}, \end{aligned}$$

and

$$\begin{aligned} \Vert u\Vert _{H^s_{pF}}=\Vert {\hat{u}}\Vert _{H^s(0,2\pi )}. \end{aligned}$$
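The correspondence described above can be illustrated numerically; the following sketch (with a decaying sequence of our choosing, and one standard convention for the periodic \(H^s\) norm) checks the \(2\pi \)-periodicity of \(\widehat{u(0)}\) from (8) and compares the two weighted quantities:

```python
import numpy as np

k = np.arange(-50, 51)
alpha = 1.0/(1 + np.abs(k))**2                     # a sample l^{2,s} sequence

xi = np.linspace(0.0, 2*np.pi, 1000)
series = lambda z: (alpha[:, None]*np.exp(-1j*np.outer(k, z))).sum(axis=0)

print(np.max(np.abs(series(xi) - series(xi + 2*np.pi))))   # numerically zero: 2*pi-periodicity
s = 1.0
print(np.sum((1 + np.abs(k))**(2*s)*np.abs(alpha)**2))     # l^{2,s} weight of {alpha_k}
print(2*np.pi*np.sum((1 + k**2)**s*np.abs(alpha)**2))      # H^s(0,2*pi) norm squared of hat(u(0)), by Parseval
```

The two printed weights are comparable up to constants, reflecting the equivalence between \(\{\alpha _k\}\in l^{2,s}\) and \(\widehat{u(0)}\in H^s(0,2\pi )\).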

Our first result concerns the existence of solutions for initial data in \(H^s_{pF}\).

Theorem 1.4

(Solutions of 1-D cubic NLS linked to several Dirac masses as initial data) Let \(s>\frac{1}{2}\), \(0<\gamma <1\) and \(\{\alpha _k\}\in l^{2,s}\). We consider the 1-D cubic NLS equation:

$$\begin{aligned} \begin{array}{c} i\partial _t u +\Delta u\pm \frac{1}{2}\left( |u|^2-\frac{M}{2\pi t}\right) u=0, \end{array} \end{aligned}$$
(9)

with \(M=\sum _{k\in {\mathbb {Z}}}|\alpha _k|^2\). There exists \(T>0\) and a unique solution on (0, T) of the form

$$\begin{aligned} u(t,x)=\sum _{k\in {\mathbb {Z}}}e^{\mp i\frac{|\alpha _k|^2}{4\pi }\log \sqrt{t}}(\alpha _k+R_k(t))e^{it\Delta }\delta _k(x), \end{aligned}$$
(10)

with

$$\begin{aligned} \sup _{0<t<T}t^{-\gamma }\Vert \{R_k(t)\}\Vert _{l^{2,s}}+t\,\Vert \{\partial _t R_k(t)\}\Vert _{l^{2,s}}<C. \end{aligned}$$
(11)

Moreover, considering as initial data a finite sum of N Dirac masses

$$\begin{aligned} u(0)=\sum _{k\in {\mathbb {Z}}}\alpha _k\delta _k, \end{aligned}$$

with coefficients of equal modulus

$$\begin{aligned} |\alpha _k|=a, \end{aligned}$$
(12)

and equation (9) renormalized with \(M=(N-\frac{1}{2})a^2\), we have a unique solution

$$\begin{aligned} u(t)=e^{it\Delta }u(0)\pm ie^{it\Delta }\int _0^te^{-i\tau \Delta }\left( \left( |u(\tau )|^2-\frac{M}{2\pi \tau }\right) u(\tau )\right) \,\frac{d\tau }{2}, \end{aligned}$$

such that \(\widehat{e^{-it\Delta }u(t)}\in {\mathcal {C}}^1((-T,T),H^s(0,2\pi ))\) with

$$\begin{aligned} \Vert e^{-it\Delta }u(t)-u(0)\Vert _{H^s_{pF}}\le C t^\gamma , \quad \forall t\in (-T,T). \end{aligned}$$

Moreover, if \(s\ge 1\) then the solution is global in time.

Remark 1.5

Note that if (12) does not hold for some \(\alpha _j \), then the corresponding initial value problem is ill-posed, similarly to what was proved in [29] and [3] in the case of just one Dirac mass, as mentioned above.

Remark 1.6

It is worth noting that, applying the (reversible) pseudo-conformal transform to the solution u of (9),

$$\begin{aligned} u(t,x)=\frac{e^{i\frac{x^2}{4t}}}{\sqrt{4\pi it}}{\overline{v}}\left( \frac{1}{t},\frac{x}{t}\right) ,\qquad t>0 \end{aligned}$$

we obtain a solution v of

$$\begin{aligned} i\partial _t v +\Delta v\pm \frac{1}{8\pi t}\left( |v|^2-2M\right) v=0. \end{aligned}$$
(13)

This was the procedure we used in [3].

Imposing the ansatz (7) on u is equivalent to imposing

$$\begin{aligned} v(t,x)=\sum _{k\in {\mathbb {Z}}}\overline{A_k}\left( \frac{1}{t}\right) e^{-i\frac{tk^2}{4}+i\frac{xk}{2}}. \end{aligned}$$
(14)

Therefore, after the pseudo-conformal transform, our problem reduces to solving (13) in the periodic setting with period \(4\pi \). Note that from (7) we have that \(|\widehat{u(t)}(\xi )|\) is \(2\pi \)-periodic.

The proof of the theorem goes as follows. Plugging the general ansatz (7) into equation (9) leads to a discrete system for \(\{A_k(t)\}\), by using the fact that for fixed t the family \(e^{it\Delta }\delta _k(x)=\frac{e^{i\frac{(x-k)^2}{4t}}}{\sqrt{4\pi it}}\) is an orthonormal family of \(L^2(0,4\pi t)\). We solve the discrete system for \(\{A_k(t)\}\) by a fixed point argument, with \(R_k(t)=e^{-i\frac{|\alpha _k|^2}{4\pi }\log \sqrt{t}}A_k(t)-\alpha _k\) satisfying (11). In the case of initial data given by a finite sum of N Dirac masses with coefficients of equal modulus, and of equation (9) renormalized with \(M=(N-\frac{1}{2})a^2\), we are led to solve the same fixed point problem for \(R_k(t)=A_k(t)-\alpha _k\).
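The orthonormality used in this step can also be checked numerically; here is a rough sketch (grid size and range of k are arbitrary choices of ours) computing the Gram matrix of the family \(\{e^{it\Delta }\delta _k\}\) in \(L^2(0,4\pi t)\):

```python
import numpy as np

t = 0.5                                   # any fixed t > 0
x = np.linspace(0.0, 4*np.pi*t, 20001)
dx = x[1] - x[0]
ks = np.arange(-3, 4)                     # a few integer translates

def free_evol_delta(k):
    # e^{i t Delta} delta_k (x) = e^{i(x-k)^2/(4t)} / sqrt(4*pi*i*t)
    return np.exp(1j*(x - k)**2/(4*t))/np.sqrt(4*np.pi*1j*t)

F = np.array([free_evol_delta(k) for k in ks])
w = np.full(x.size, dx); w[0] = w[-1] = dx/2          # trapezoidal quadrature weights
gram = np.einsum('in,jn,n->ij', F, np.conj(F), w)     # <f_i, f_j> on (0, 4*pi*t)
print(np.round(np.abs(gram), 3))                      # approximately the identity matrix
```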

Remark 1.7

The resonant part of the discrete system for \(\{A_k(t)\}\) is

$$\begin{aligned} i\partial _t a_k(t)=\frac{1}{8\pi t}a_k(t)\left( 2\sum _j|a_j(t)|^2-|a_k(t)|^2-2M\right) . \end{aligned}$$

It is a non-autonomous version, with a singular time-dependent coefficient, of the resonant system of the standard 1-D cubic NLS. Indeed, for questions concerning the long-time behavior of cubic NLS one usually introduces

$$\begin{aligned} v(t)=e^{-it\Delta }u(t). \end{aligned}$$

In the 1-D periodic case the Fourier coefficients of v(t) satisfy the system

$$\begin{aligned} i\partial _t v_k(t)=\sum _{k-j_1+j_2-j_3=0}e^{-it(k^2-j_1^2+j_2^2-j_3^2)}v_{j_1}(t)\overline{v_{j_2}(t)}v_{j_3}(t), \end{aligned}$$

so that the resonant system is:

$$\begin{aligned} i\partial _t a_k(t)=a_k(t)\left( 2\sum _j|a_j(t)|^2-|a_k(t)|^2\right) . \end{aligned}$$

Of course, for the 1-D periodic NLS with data in \(H^s\), \(s>\frac{1}{2}\) (which corresponds to \(\{v_n(0)\}\in l^{2,s}\subset l^1\)), there is no issue in obtaining directly the local existence.
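For illustration, here is a minimal numerical sketch (initial values chosen by us) of the autonomous resonant system displayed just above, confirming that each \(a_k\) only rotates in phase while its modulus is conserved:

```python
import numpy as np
from scipy.integrate import solve_ivp

a0 = np.array([0.5+0.2j, -0.3j, 0.1+0.1j])          # three resonant modes

def rhs(t, y):
    # i a_k' = a_k (2*sum_j |a_j|^2 - |a_k|^2), written for real ODE solvers
    a = y[:3] + 1j*y[3:]
    m2 = np.sum(np.abs(a)**2)
    da = -1j*a*(2*m2 - np.abs(a)**2)
    return np.concatenate([da.real, da.imag])

sol = solve_ivp(rhs, (0.0, 5.0), np.concatenate([a0.real, a0.imag]), rtol=1e-10, atol=1e-12)
aT = sol.y[:3, -1] + 1j*sol.y[3:, -1]
print(np.abs(a0))
print(np.abs(aT))    # same moduli: the resonant flow reduces to phase rotations
```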

Remark 1.8

The regularity of \(\{\alpha _j\}\) might be weakened to \(l^p\) spaces only (\(p<\infty \)), see Remark 2.2. It is evident from (13) that formally

$$\begin{aligned} \partial _t\sum _j|A_j(t)|^2=0, \end{aligned}$$
(15)

and therefore the \(l^2\) norm is preserved.Footnote 2 As a matter of fact, this says that the self-similar solutions have finite mass for the 1-D cubic NLS, when the mass is appropriately defined. This has nothing to do with the complete integrability of the system, because it still works in the subcritical cases studied in [32].

Note that solving (13) for \(t\ge T_0>0\) is quite straightforward, making use of the available Strichartz estimates in the periodic setting, see [8] and also [41] for a slight modification. However, these methods do not give the behavior of the solution v when time approaches infinity, which is absolutely crucial for proving Theorem 1.1. As a consequence we are led to a more refined analysis. In view of Theorem 1.1 we consider weighted \(l^{2,s}\) spaces; this in particular will allow us to rigorously prove that (15) holds.

The paper is structured as follows. In the next section we prove Theorem 1.4, as well as the extension Theorem 2.3, concerning some cases of Dirac masses not necessarily located at integers. Section 3 contains the proof of a Talbot effect for some solutions given by Theorem 1.4. In the last section we prove Theorem 1.1.

2 The 1-D Cubic NLS with Initial Data Given by Several Dirac Masses

In this section we give the proof of Theorem 1.4.

2.1 The Fixed Point Framework

We denote \({\mathcal {N}}(u)=\frac{|u|^2u}{2}\). By plugging the ansatz (7) into equation (9) we get

$$\begin{aligned} \sum _{k\in {\mathbb {Z}}}i\partial _t A_k\left( t\right) e^{it\Delta }\delta _k= & {} {\mathcal {N}}\left( u\right) -\frac{M}{4\pi t}u={\mathcal {N}}\left( \sum _{j\in {\mathbb {Z}}}A_j\left( t\right) e^{it\Delta }\delta _j\right) \nonumber \\&-\frac{M}{4\pi t}\left( \sum _{k\in {\mathbb {Z}}}A_k\left( t\right) e^{it\Delta }\delta _k\right) . \end{aligned}$$
(16)

We have chosen here for simplicity the sign − in (9); the sign \(+\) can be treated in the same way.

The family \(e^{it\Delta }\delta _k(x)=\frac{e^{i\frac{(x-k)^2}{4t}}}{\sqrt{4\pi it}}\) is an orthonormal family of \(L^2(0,4\pi t)\), so by taking the \(L^2(0,4\pi t)\) scalar product with \(e^{it\Delta }\delta _k\) we obtain

$$\begin{aligned} i\partial _t A_k\left( t\right) =\int _0^{4\pi t} {\mathcal {N}}\left( \sum _{j\in {\mathbb {Z}}}A_j\left( t\right) \frac{e^{i\frac{\left( x-j\right) ^2}{4t}}}{\sqrt{4\pi it}}\right) \frac{e^{-i\frac{\left( x-k\right) ^2}{4t}}}{\overline{\sqrt{4\pi it}}}\,dx-\frac{M}{4\pi t}A_k\left( t\right) . \end{aligned}$$

Note that, as \(s>\frac{1}{2}\), we have \(\{A_j\}\in l^{2,s}\subset l^1\) and we can expand the cubic power to get

$$\begin{aligned} i\partial _t A_k(t)=\frac{1}{8\pi t}\sum _{k-j_1+j_2-j_3=0}e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4t}}A_{j_1}(t)\overline{A_{j_2}(t)}A_{j_3}(t)- \frac{M}{4\pi t}A_k(t).\nonumber \\ \end{aligned}$$
(17)

We note already that for a sequence of real numbers a(k) we have:

$$\begin{aligned}&\partial _t \sum _k a(k)|A_k(t)|^2\nonumber \\&\quad =\frac{1}{4\pi t}\mathfrak {I}\, \sum _{k-j_1+j_2-j_3=0}a(k)e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4t}}A_{j_1}(t)\overline{A_{j_2}(t)}A_{j_3}(t)\overline{A_k(t)} \nonumber \\&\quad =\frac{1}{8\pi ti}\left( \sum _{k-j_1+j_2-j_3=0}a(k)e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4t}}A_{j_1}(t)\overline{A_{j_2}(t)}A_{j_3}(t)\overline{A_k(t)}\right. \nonumber \\&\qquad \left. -\sum _{j_3-j_2+j_1-k=0}a(k)e^{-i\frac{j_3^2-j_2^2+j_1^2-k^2}{4t}}A_{j_2}(t)\overline{A_{j_1}(t)}A_{k}(t)\overline{A_{j_3}(t)}\right) \nonumber \\&\quad =\frac{1}{8\pi ti}\sum _{k-j_1+j_2-j_3=0}(a(k)-a(j_3))e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4t}}A_{j_1}(t)\overline{A_{j_2}(t)}A_{j_3}(t)\overline{A_k(t)} \nonumber \\&\quad =\frac{1}{16\pi ti}\sum _{k-j_1+j_2-j_3=0}(a(k)-a(j_1)+a(j_2) \nonumber \\&\qquad -a(j_3))e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4t}}A_{j_1}(t)\overline{A_{j_2}(t)}A_{j_3}(t)\overline{A_k(t)}. \end{aligned}$$
(18)

Therefore the system conserves the “mass”:

$$\begin{aligned} \sum _k|A_k(t)|^2= \sum _k|A_k(0)|^2, \end{aligned}$$
(19)

and the momentum

$$\begin{aligned} \sum _kk|A_k(t)|^2= \sum _kk|A_k(0)|^2. \end{aligned}$$
(20)
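These conservation laws also hold for truncations of (17) to a symmetric range of modes, since the relabelings behind (18) preserve such an index set; the following rough numerical sketch (truncation size, data and time interval are arbitrary choices of ours, and we integrate away from \(t=0\)) checks (19) and (20) for such a truncation:

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 4
ks = np.arange(-K, K + 1)
M = 0.5                                   # the M-term is real and only rotates phases

def rhs(t, y):
    # truncation of (17) to modes k, j1, j2, j3 in [-K, K]
    A = y[:ks.size] + 1j*y[ks.size:]
    dA = np.zeros_like(A)
    for i, k in enumerate(ks):
        s = 0.0
        for i1, j1 in enumerate(ks):
            for i2, j2 in enumerate(ks):
                j3 = k - j1 + j2
                if -K <= j3 <= K:
                    ph = (k**2 - j1**2 + j2**2 - j3**2)/(4*t)
                    s += np.exp(-1j*ph)*A[i1]*np.conj(A[i2])*A[j3 + K]
        dA[i] = -1j*(s/(8*np.pi*t) - M/(4*np.pi*t)*A[i])
    return np.concatenate([dA.real, dA.imag])

rng = np.random.default_rng(0)
A0 = 0.2*(rng.standard_normal(ks.size) + 1j*rng.standard_normal(ks.size))
sol = solve_ivp(rhs, (1.0, 2.0), np.concatenate([A0.real, A0.imag]), rtol=1e-9, atol=1e-11)
A1 = sol.y[:ks.size, -1] + 1j*sol.y[ks.size:, -1]
print(np.sum(np.abs(A0)**2), np.sum(np.abs(A1)**2))          # "mass" (19), equal up to solver tolerance
print(np.sum(ks*np.abs(A0)**2), np.sum(ks*np.abs(A1)**2))    # momentum (20), equal up to solver tolerance
```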

We split the summation indices of (17) into the following two sets:

$$\begin{aligned} NR_k= & {} \{(j_1,j_2,j_3)\in {\mathbb {Z}}^3, k-j_1+j_2-j_3=0, k^2-j_1^2+j_2^2-j_3^2\ne 0\}, \\ Res_k= & {} \{(j_1,j_2,j_3)\in {\mathbb {Z}}^3, k-j_1+j_2-j_3=0, k^2-j_1^2+j_2^2-j_3^2=0\}. \end{aligned}$$

As we are in one dimension, the second set is simply

$$\begin{aligned} Res_k=\{(k,j,j), (j,j,k), j\in {\mathbb {Z}}\}, \end{aligned}$$

as for \(k-j_1+j_2-j_3=0\) we have

$$\begin{aligned} k^2-j_1^2+j_2^2-j_3^2=2(k-j_1)(j_1-j_2). \end{aligned}$$
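This algebraic identity can be checked symbolically, for instance:

```python
import sympy as sp

k, j1, j2 = sp.symbols('k j1 j2', integer=True)
j3 = k - j1 + j2                                   # the constraint k - j1 + j2 - j3 = 0
print(sp.expand(k**2 - j1**2 + j2**2 - j3**2 - 2*(k - j1)*(j1 - j2)))   # -> 0
```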

In particular we get

$$\begin{aligned}&\sum _{k-j_1+j_2-j_3=0}e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4t}}A_{j_1}(t)\overline{A_{j_2}(t)}A_{j_3}(t)\\&\quad =\sum _{j_1,j_2\in {\mathbb {Z}}}e^{-i\frac{2(k-j_1)(j_1-j_2)}{4t}}A_{j_1}(t)\overline{A_{j_2}(t)}A_{k-j_1+j_2}(t) \\&\quad =\sum _{j_1\ne k}\sum _{j_2\ne j_1}e^{-i\frac{2(k-j_1)(j_1-j_2)}{4t}}A_{j_1}(t)\overline{A_{j_2}(t)}A_{k-j_1+j_2}(t)+\sum _{j_1\ne k}A_{j_1}(t)\overline{A_{j_1}(t)}A_{k}(t)\\&\qquad +\sum _{j_2\in {\mathbb {Z}}}A_k(t)\overline{A_{j_2}(t)}A_{j_2}(t). \end{aligned}$$

Therefore the system (16) reads

$$\begin{aligned} i\partial _t A_k(t)= & {} \frac{1}{8\pi t}\sum _{(j_1,j_2,j_3)\in NR_k}e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4t}}A_{j_1}(t)\overline{A_{j_2}(t)}A_{j_3}(t) \nonumber \\&+\frac{1}{8\pi t}A_k(t)(2\sum _j|A_j(t)|^2-|A_k(t)|^2-2M). \end{aligned}$$
(21)

As we have already noticed, this system conserves the “mass” \(\sum _j|A_j(t)|^2\). So, since \(M=\sum _j|\alpha _j|^2\), finding a solution for \(t>0\) satisfying

$$\begin{aligned} \underset{t\rightarrow 0}{\lim }|A_j(t)|=|\alpha _j|, \end{aligned}$$
(22)

is equivalent to finding a solution for \(t>0\), also satisfying (22), of the following system, which also conserves the “mass”:

$$\begin{aligned} i\partial _t A_k(t)= & {} \frac{1}{8\pi t}\sum _{(j_1,j_2,j_3)\in NR_k}e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4t}}A_{j_1}(t)\overline{A_{j_2}(t)}A_{j_3}(t)\nonumber \\&-\frac{1}{8\pi t}|A_k(t)|^2A_k(t). \end{aligned}$$
(23)

By performing the change of phase \(A_k(t)=e^{i\frac{ |\alpha _k|^2}{4\pi }\log \sqrt{t}}{\tilde{A}}_k(t)\) we get the system

$$\begin{aligned} i\partial _t {\tilde{A}}_k(t)=f_k(t)-\frac{1}{8\pi t}(|{\tilde{A}}_k(t)|^2-|\alpha _k|^2){\tilde{A}}_k(t), \end{aligned}$$
(24)

where

$$\begin{aligned}&f_k(t)=\frac{1}{8\pi t}\sum _{(j_1,j_2,j_3)\in NR_k}\nonumber \\&\quad e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4t}}e^{-i\frac{|\alpha _k|^2-|\alpha _{j_1}|^2+|\alpha _{j_2}|^2-|\alpha _{j_3}|^2}{4\pi }\log \sqrt{t}}{\tilde{A}}_{j_1}(t)\overline{{\tilde{A}}_{j_2}(t)}{\tilde{A}}_{j_3}(t). \end{aligned}$$
(25)

Now we note that a solution of (24) satisfies

$$\begin{aligned} \partial _t | \tilde{A}_k(t)|^2=2\mathfrak {I}(f_k(t)\overline{{\tilde{A}}_k(t)}), \end{aligned}$$
(26)

so obtaining a solution of (24) for \(t>0\) with

$$\begin{aligned} \underset{t\rightarrow 0}{\lim }|{\tilde{A}}_k(t)|=|\alpha _k|, \end{aligned}$$
(27)

is equivalent to obtaining a solution for \(t>0\) also satisfying (27), for the following system, that also enjoys (26):

$$\begin{aligned} i\partial _t {\tilde{A}}_k(t)=f_k(t)-\frac{1}{8\pi t}\int _0^t 2\mathfrak {I}(f_k(\tau )\overline{{\tilde{A}}_k(\tau )})d\tau \,{\tilde{A}}_k(t). \end{aligned}$$
(28)

We recall that we expect solutions behaving as \(A_k(t)=e^{i\frac{ |\alpha _k|^2}{4\pi }\log \sqrt{t}}(\alpha _k+R_k(t))\), with \(\{R_k\}\) in the space:

$$\begin{aligned} X^\gamma :=\{\{f_k\}\in {\mathcal {C}}^1((0,T),l^{2,s}),\,\,\Vert \{t^{-\gamma }f_k(t)\}\Vert _{L^\infty (0,T) l^{2,s}}+\Vert \{t\,\partial _tf_k(t)\}\Vert _{L^\infty (0,T) l^{2,s}}<\infty \},\nonumber \\ \end{aligned}$$
(29)

with T to be specified later. We also denote

$$\begin{aligned} \Vert \{f_k\}\Vert _{X^\gamma }=\Vert \{t^{-\gamma }f_k(t)\}\Vert _{L^\infty (0,T) l^{2,s}}+\Vert \{t\,\partial _tf_k(t)\}\Vert _{L^\infty (0,T) l^{2,s}}. \end{aligned}$$

To prove the theorem we shall show that we have a contraction, on a suitably chosen ball of radius \(\delta \) in \(X^\gamma \), for the operator \(\Phi \) sending \(\{R_k\}\) into

$$\begin{aligned} \Phi (\{R_k\})=\{\Phi _k(\{R_j\})\}, \end{aligned}$$

with

$$\begin{aligned} \Phi _k(\{R_j\})(t)=i\int _0^t g_k(\tau )d\tau -i\int _0^t \int _0^\tau \mathfrak {I}(g_k(s)\overline{(\alpha _{k}+R_{k}(s))})ds\,(\alpha _{k}+R_{k}(\tau ))\frac{d\tau }{4\pi \tau }, \end{aligned}$$

where

$$\begin{aligned} g_k(t)= & {} \,\frac{1}{8\pi t}\sum _{(j_1,j_2,j_3)\in NR_k}e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4t}}e^{-i\omega _{k,j_1,j_2,j_3}\log \sqrt{t}}(\alpha _{j_1}\\&+R_{j_1}(t))\overline{(\alpha _{j_2}+R_{j_2}(t))}(\alpha _{j_3}+R_{j_3}(t)), \end{aligned}$$

and \(\omega _{k,j_1,j_2,j_3}=\frac{|\alpha _k|^2-|\alpha _{j_1}|^2+|\alpha _{j_2}|^2-|\alpha _{j_3}|^2}{4\pi }\).

Finally we note that in the case of N Dirac masses with coefficients \(|\alpha _k|=a\) and equation (9) with \(M=(N-\frac{1}{2})a^2\), we get instead of (23) the equation

$$\begin{aligned} i\partial _t A_k(t)= & {} \frac{1}{8\pi t}\sum _{(j_1,j_2,j_3)\in NR_k}e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4t}}A_{j_1}(t)\overline{A_{j_2}(t)}A_{j_3}(t)\nonumber \\&-\frac{1}{8\pi t}(|A_k(t)|^2-|\alpha _k|^2)A_k(t). \end{aligned}$$
(30)

Hence we can write \(A_k(t)=\alpha _k+R_k(t)\) and the same fixed point argument works for \(\{R_k\}\).

2.2 The Fixed Point Argument Estimates

Lemma 2.1

For \(\{R_k\}\in X^\gamma \) with \(\Vert \{R_k\}\Vert _{X^\gamma }\le \delta \) we have the following estimates:

$$\begin{aligned}&\Vert \{g_k(t)\}\Vert _{l^{2,s}}\le \frac{C}{t} \bigg (\Vert \{\alpha _k\}\Vert ^3_{l^{2,s}}+t^{3\gamma }\delta ^3\bigg ), \end{aligned}$$
(31)
$$\begin{aligned}&\left\| \left\{ \int _0^t g_k(\tau )d\tau \right\} \right\| _{l^{2,s}}\le Ct(\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^3+\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^5+t^{3\gamma }(1+\Vert \{\alpha _k\}\Vert _{l^{2,s}}^2)\delta ^3\nonumber \\&\quad +\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^2\delta +t^{2\gamma }\delta ^3), \end{aligned}$$
(32)
$$\begin{aligned}&\left\| \left\{ \int _0^t g_k(\tau )\overline{(\alpha _{k}+R_{k}(\tau )})d\tau \right\} \right\| _{l^{2,s}}\le Ct(\Vert \{\alpha _k\}\Vert _{l^{2,s}}+t^{\gamma }\delta ) \nonumber \\&\quad \times (\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^3+\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^5+t^{3\gamma }(1+\Vert \{\alpha _k\}\Vert _{l^{2,s}}^2)\delta ^3+\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^2\delta +t^{2\gamma }\delta ^3).\nonumber \\ \end{aligned}$$
(33)

Proof

We note first that

$$\begin{aligned} \{M_j\}\star \{{N_j}\}\star \{P_j\}(k)=\sum _{(j_1,j_2,j_3)\in NR_k\cup Res_k} M_{j_1}N_{j_2}P_{j_3}, \end{aligned}$$

so in particular

$$\begin{aligned} \left| \sum _{(j_1,j_2,j_3)\in NR_k} M_{j_1}N_{j_2}P_{j_3}\right| \le \{|M_j|\}\star \{{|N_j|}\}\star \{|P_j|\}(k). \end{aligned}$$
(34)

We shall frequently use the following inequality:

$$\begin{aligned} \Vert \{M_j\}\star \{{N_j}\}\star \{P_j\}\Vert _{l^{\infty }}+\Vert \{M_j\}\star \{{N_j}\}\star \{P_j\}\Vert _{l^{2,s}}\le C\Vert \{M_j\}\Vert _{l^{2,s}}\Vert \{N_j\}\Vert _{l^{2,s}}\Vert \{P_j\}\Vert _{l^{2,s}}.\nonumber \\ \end{aligned}$$
(35)

The first part follows from \(l^{2,s}\subset l^1\) and the second part follows using also the weighted Young argument on two series:

$$\begin{aligned}&\Vert \{M_j\}\star \{{N_j}\}\Vert _{l^{2,s}}\le C\Vert \{M_j\}\star \{(1+|j|)^{s}N_j\}\Vert _{l^{2}}+C\Vert \{(1+|j|)^{s}M_j\}\star \{N_j\}\Vert _{l^{2}} \\&\quad \le C\Vert \{M_j\}\Vert _{l^{1}}\Vert \{(1+|j|)^{s}N_j\}\Vert _{l^{2}}+C\Vert \{(1+|j|)^{s}M_j\}\Vert _{l^{2}}\Vert \{N_j\}\Vert _{l^{1}}\\&\quad \le C\Vert \{M_j\}\Vert _{l^{2,s}}\Vert \{N_j\}\Vert _{l^{2,s}}. \end{aligned}$$

Therefore by (34) we have

$$\begin{aligned} |g_k(t)|\le & {} \frac{C}{t}\sum _{(j_1,j_2,j_3)\in NR_k}(|\alpha _{j_1}|+|R_{j_1}(t)|)(|\alpha _{j_2}|+|R_{j_2}(t)|)(|\alpha _{j_3}|+|R_{j_3}(t)|) \\\le & {} \frac{C}{t}\{|\alpha _{j}|+|R_{j}(t)|\}\star \{|\alpha _{j}|+|R_{j}(t)|\}\star \{|\alpha _{j}|+|R_{j}(t)|\}(k), \end{aligned}$$

and by (35) we get (31).

To estimate \(\int _0^t g_k(\tau )d\tau \) we perform an integration by parts, to take advantage of the non-resonant phase and to obtain integrability in time:

$$\begin{aligned}&i\int _0^t g_k(\tau )d\tau =t\sum _{(j_1,j_2,j_3)\in NR_k}\frac{e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4t}}e^{-i\omega _{k,j_1,j_2,j_3}\log \sqrt{t}}}{\pi (k^2-j_1^2+j_2^2-j_3^2)}(\alpha _{j_1} \nonumber \\&\quad +R_{j_1}(t))\overline{(\alpha _{j_2}+R_{j_2}(t))}(\alpha _{j_3}+R_{j_3}(t)) \nonumber \\&\quad -\int _0^t \sum _{(j_1,j_2,j_3)\in NR_k}\frac{e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4\tau }}}{\pi (k^2-j_1^2+j_2^2-j_3^2)} \nonumber \\&\quad \times \partial _\tau (\tau e^{-i\omega _{k,j_1,j_2,j_3}\log \sqrt{\tau }}(\alpha _{j_1}+R_{j_1}(\tau ))\overline{(\alpha _{j_2}+R_{j_2}(\tau ))}(\alpha _{j_3}+R_{j_3}(\tau )))\,d\tau .\nonumber \\ \end{aligned}$$
(36)

Indeed, for fixed t, this computation is justified by considering, for \(0<\eta <t\), the quantity \(I_k^{\eta }(t)\) defined as \(\Phi _k^1(\{R_j\})(t)\) but with the integral in time from \(\eta \) instead of 0. More precisely, \(I_k^{\eta }(t)\) is well defined as the integrand can be upper-bounded using (34) and (35) by the function \(C\frac{\Vert \{\alpha _j\}\Vert ^3_{l^{2,s}}+\Vert \{R_j(\tau )\}\Vert ^3_{l^{2,s}}}{\tau }\) which is integrable on \((\eta ,t)\). In particular the discrete summation commutes with the integration in time. Performing then integrations by parts on \(I_k^{\eta }(t)\) as above, we obtain for \(I_k^{\eta }(t)\) an expression that yields as \(\eta \rightarrow 0\) the above expression for \(\int _0^t g_k(\tau )d\tau \).

We obtain, in view of (35) and of the fact that on the non-resonant set \(|k^2-j_1^2+j_2^2-j_3^2|\ge 1\),

$$\begin{aligned}&\left| \int _0^t g_k(\tau )d\tau \right| \le Ct\{|\alpha _{j}|+|R_{j}(t)|\}\star \{|\alpha _{j}|+|R_{j}(t)|\}\star \{|\alpha _{j}|+|R_{j}(t)|\}(k) \\&\quad +C(1+\Vert \{\alpha _k\}\Vert _{l^{\infty }}^2)\int _0^t \{|\alpha _{j}|+|R_{j}(\tau )|\}\star \{|\alpha _{j}|+|R_{j}(\tau )|\}\star \{|\alpha _{j}|+|R_{j}(\tau )|\}(k)\,d\tau \\&\quad +C\int _0^t \{|\tau \,\partial _\tau R_{j}(\tau )|\}\star \{|\alpha _{j}|+|R_{j}(\tau )|\}\star \{|\alpha _{j}|+|R_{j}(\tau )|\}(k)\,d\tau . \end{aligned}$$

We perform Cauchy–Schwarz in the integral terms to get the squares for the discrete variable and we sum using (35):

$$\begin{aligned}&\left\| \left\{ \int _0^t g_k(\tau )d\tau \right\} \right\| _{l^{2,s}}^2 \le Ct^2\, \Vert \{\alpha _{j}+R_{j}(t)\}\Vert _{l^{2,s}}^6\\&\quad +C(1+\Vert \{\alpha _k\}\Vert _{l^{2,s}}^4)\,t \int _0^t \Vert \{\alpha _{j}+R_{j}(\tau )\}\Vert _{l^{2,s}}^6d\tau \\&\quad +\,Ct \int _0^t \Vert \{\alpha _{j}+R_{j}(\tau )\}\Vert _{l^{2,s}}^4\Vert \{\tau \partial _\tau R_j(\tau )\}\Vert _{l^{2,s}}^2d\tau . \end{aligned}$$

Therefore we get (32):

$$\begin{aligned}&\left\| \left\{ \int _0^t g_k(\tau )d\tau \right\} \right\| _{ l^{2,s}} \le Ct( \Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^3+\Vert \{R_{j}(t)\}\Vert _{l^{2,s}}^3+ \Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^5) \\&\qquad +\,C(1+\Vert \{\alpha _k\}\Vert _{l^{2,s}}^2)t^{1+3\gamma }\Vert \{\tau ^{-\gamma }R_{j}(\tau )\}\Vert _{L^\infty (0,T),l^{2,s}}^3 \\&\qquad +\,Ct \Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^2\Vert \{\tau \partial _\tau R_{j}(\tau )\}\Vert _{L^\infty (0,T),l^{2,s}}\\&\qquad +\,Ct^{1+2\gamma }\Vert \{\tau ^{-\gamma }R_{j}(\tau )\}\Vert _{L^\infty (0,T),l^{2,s}}^2\Vert \{\tau \partial _\tau R_{j}(\tau )\}\Vert _{L^\infty (0,T),l^{2,s}} \\&\quad \le Ct(\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^3+\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^5)+Ct^{1+3\gamma }(1+\Vert \{\alpha _k\}\Vert _{l^{2,s}}^2)\delta ^3\\&\qquad +\,Ct\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^2\delta +Ct^{1+2\gamma }\delta ^3. \end{aligned}$$

The last estimate (33) is obtained in the same way as (32), by adding in the computations the extra term \(\alpha _k+R_k(\tau )\) and by upper-bounding it in modulus by \(\Vert \{\alpha _k\}\Vert _{l^{2,s}}+\tau ^{\gamma }\delta \). \(\square \)

We now use (31) and (33) to get

$$\begin{aligned}&\left\| \left\{ \partial _t\Phi _k(\{R_j\})(t)\right\} \right\| _{l^{2,s}}\le \Vert \{ g_k(t)\}\Vert _{l^{2,s}}\\&\qquad +\left\| \left\{ \int _0^t \mathfrak {I}(g_k(s)\overline{(\alpha _{k}+R_{k}(s))})ds\,(\alpha _{k}+R_{k}(t))\right\} \right\| _{l^{2,s}}\frac{C}{t} \\&\quad \le \frac{C}{t}(\Vert \{\alpha _k\}\Vert ^3_{l^{2,s}}+t^{3\gamma }\delta ^3)+C(\Vert \{\alpha _k\}\Vert _{l^{2,s}}+t^{\gamma }\delta )^2 \\&\qquad \times (\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^3+\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^5+t^{3\gamma }(1+\Vert \{\alpha _k\}\Vert _{l^{2,s}}^2)\delta ^3+\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^2\delta +t^{2\gamma }\delta ^3). \end{aligned}$$

On the other hand,

$$\begin{aligned}&|\{\Phi _k(\{R_j\})(t)\}|\le \left| \int _0^t g_k(\tau )d\tau \right| \\&\quad +\left| \int _0^t\int _0^\tau \mathfrak {I}(g_k(s)\overline{(\alpha _{k}+R_{k}(s)})ds\,(\alpha _{k}+R_{k}(\tau ))\frac{d\tau }{4\pi \tau }\right| , \end{aligned}$$

so by Cauchy–Schwarz

$$\begin{aligned} |\{\Phi _k(\{R_j\})(t)\}|^2\le & {} C\left| \int _0^t g_k(\tau )d\tau \right| ^2\\&+\,C\sqrt{t}\int _0^t\left| \int _0^\tau \mathfrak {I}(g_k(s)\overline{(\alpha _{k}+R_{k}(s)})ds\right| ^2\,(|\alpha _{k}|^2+|R_{k}(\tau )|^2)\frac{d\tau }{\tau ^\frac{3}{2}}. \end{aligned}$$

Now we use (32) and (33) to get

$$\begin{aligned}&\Vert \{\Phi _k(\{R_j\})(t)\}\Vert _{l^{2,s}}\le Ct(\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^3+\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^5+t^{3\gamma }(1+\Vert \{\alpha _k\}\Vert _{l^{2,s}}^2)\delta ^3\\&\quad +\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^2\delta +t^{2\gamma }\delta ^3) \\&\quad +\,Ct(\Vert \{\alpha _k\}\Vert _{l^{2,s}}+t^{\gamma }\delta )^2 \\&\quad \times (\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^3+\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^5+t^{3\gamma }(1+\Vert \{\alpha _k\}\Vert _{l^{2,s}}^2)\delta ^3+\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^2\delta +t^{2\gamma }\delta ^3). \end{aligned}$$

Summarizing, we have obtained

$$\begin{aligned}&\Vert \{\Phi (\{R_k\})\}\Vert _{X^\gamma }\le C(\Vert \{\alpha _k\}\Vert ^3_{l^{2,s}}+T^{3\gamma }\delta ^3)+CT(\Vert \{\alpha _k\}\Vert _{l^{2,s}}+T^{\gamma }\delta )^2 \nonumber \\&\quad \times (\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^3+\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^5+T^{3\gamma }(1+\Vert \{\alpha _k\}\Vert _{l^{2,s}}^2)\delta ^3+\,\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^2\delta +T^{2\gamma }\delta ^3) \nonumber \\&\quad +\,CT^{1-\gamma }(\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^3+\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^5+T^{3\gamma }(1+\Vert \{\alpha _k\}\Vert _{l^{2,s}}^2)\delta ^3+\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^2\delta +T^{2\gamma }\delta ^3) \nonumber \\&\quad +\,CT^{1-\gamma }(\Vert \{\alpha _k\}\Vert _{l^{2,s}}+T^{\gamma }\delta )^2 \nonumber \\&\quad \times \, (\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^3+\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^5+T^{3\gamma }(1+\Vert \{\alpha _k\}\Vert _{l^{2,s}}^2)\delta ^3+\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}^2\delta +T^{2\gamma }\delta ^3). \end{aligned}$$
(37)

In view of (37), we can choose \(\delta \) in terms of \(\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}\), and T small with respect to \(\Vert \{\alpha _{j}\}\Vert _{l^{2,s}}\) and \(\gamma \), to obtain the stability estimate

$$\begin{aligned} \Vert \{\Phi (\{R_k\})\}\Vert _{X^\gamma }<\delta . \end{aligned}$$

The contraction estimate is obtained in the same way as the stability one. In conclusion, the fixed point argument closes, and this settles the local in time existence of the solutions of Theorem 1.4.

Remark 2.2

We notice that in (36) we just upper-bounded the inverse of the non-resonant phase by 1. One can actually exploit its decay in the discrete summations to relax the assumptions on the initial data. More precisely, for \(1\le p<\infty \) one can use:

$$\begin{aligned}&\left\| \sum _{(j_1,j_2,j_3)\in NR_k}\frac{M_{j_1}N_{j_2}P_{j_3}}{k^2-j_1^2+j_2^2-j_3^2}\right\| _{l^p}^p=\sum _k\left( \sum _{j_1,j_2; j_1\notin \{k,j_2\}}\frac{M_{j_1}N_{j_2}P_{k-j_1+j_2}}{|j_1-j_2||k-j_1|}\right) ^p \\&\quad \le C\sum _k\left( \sum _{j_1,j_2}|M_{j_1}|^p|N_{j_2}|^p|P_{k-j_1+j_2}|^p\right) \\&\quad \left( \sum _{j_1,j_2}\frac{1}{(1+|j_1-j_2|)^q(1+|k-j_1|)^q}\right) ^\frac{p}{q}, \end{aligned}$$

where q is the conjugate exponent of p. As \(1\le p<\infty \) we have \(q>1\) so

$$\begin{aligned} \left\| \sum _{(j_1,j_2,j_3)\in NR_k}\frac{M_{j_1}N_{j_2}P_{j_3}}{k^2-j_1^2+j_2^2-j_3^2}\right\| _{l^p}\le \Vert \{M_j\}\Vert _{l^p}\Vert \{N_j\}\Vert _{l^p}\Vert \{P_j\}\Vert _{l^p}. \end{aligned}$$

2.3 Global in Time Extension

We consider the local in time solution constructed previously. In the case \(s= 1\) we shall prove that the growth of \(\Vert \{\alpha _j+R_j(t)\}\Vert _{L^\infty (0,T) l^{2,1}}\) is controlled, so we can extend the solution globally in time. Global existence for \(s>1\) is obtained by considering the \(l^{2,1}\) global solution and proving the persistence of the \(l^{2,s}\) regularity.

We shall use (18) with \(a(k)=k^2\) to get a control of the “energy”:

$$\begin{aligned}&\partial _t \sum _k k^2|A_k(t)|^2\\&\quad =\mp \frac{1}{16\pi t}\sum _{k-j_1+j_2-j_3=0}(k^2-j_1^2+j_2^2-j_3^2)\\&\quad e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4t}}A_{j_1}(t)\overline{A_{j_2}(t)}A_{j_3}(t)\overline{A_k(t)} \\&\quad =\pm \frac{it}{4\pi }\sum _{k-j_1+j_2-j_3=0}\partial _t\left( e^{-i\frac{k^2-j_1^2+j_2^2-j_3^2}{4t}}\right) A_{j_1}(t)\overline{A_{j_2}(t)}A_{j_3}(t)\overline{A_k(t)}. \end{aligned}$$

By integrating from 0 to t and then using integrations by parts we get

$$\begin{aligned}&\sum _k k^2|A_k(t)|^2\le \sum _k k^2|A_k(0)|^2+Ct\sum _{k-j_1+j_2-j_3=0}|A_{j_1}(t)A_{j_2}(t)A_{j_3}(t)A_k(t)| \\&\qquad +\, C\int _0^t\sum _{k-j_1+j_2-j_3=0}|\partial _\tau (\tau A_{j_1}(\tau )\overline{A_{j_2}(\tau )}A_{j_3}(\tau )\overline{A_k(\tau ))}|d\tau \\&\quad \le \Vert \{\alpha _j\}\Vert _{l^{2,1}}^2+Ct\sum _k (|A_j(t)|\star |A_j(t)|\star |A_j(t)|)(k)|A_k(t)|\\&\qquad + \int _0^t\sum _k (|A_j(\tau )|\star |A_j(\tau )|\star |A_j(\tau )|)(k)|A_k(\tau )|d\tau \\&\qquad + \int _0^t\sum _k (\partial _\tau |A_j(\tau )|\star |A_j(\tau )|\star |A_j(\tau )|)(k)|A_k(\tau )|\tau d\tau \\&\qquad + \int _0^t\sum _k (|A_j(\tau )|\star |A_j(\tau )|\star |A_j(\tau )|)(k)\partial _\tau |A_k(\tau )|\tau d\tau . \end{aligned}$$

We shall now use the following estimate, based on the Cauchy–Schwarz inequality, on Young and Hölder estimates for weak \(l^p\) spaces, and on the fact that \(\{j^{-\frac{1}{2}}\}\in l^2_w\):

$$\begin{aligned}&\left| \sum _k \{M_j\}\star \{{N_j}\}\star \{P_j\}(k) R_k\right| \le \Vert \{M_j\}\star \{{N_j}\}\star \{P_j\}\Vert _{l^2}\Vert R_j\Vert _{l^2}\\&\quad \le C\Vert \{M_j\}\Vert _{l^1_w}\Vert \{{N_j}\}\Vert _{l^1_w}\Vert \{P_j\}\Vert _{l^2}\Vert R_j\Vert _{l^2} \\&\quad \le C\Vert \{M_j\, j^\frac{1}{2}\}\Vert _{l^2_w}\Vert \{{N_j\,j^\frac{1}{2}}\}\Vert _{l^2_w}\Vert \{P_j\}\Vert _{l^2}\Vert R_j\Vert _{l^2} \\&\quad \le C\Vert \{M_j\}\Vert _{l^2}^\frac{1}{2}\Vert \{M_j\}\Vert _{l^{2,1}}^\frac{1}{2}\Vert \{N_j\}\Vert _{l^2}^\frac{1}{2}\Vert \{N_j\}\Vert _{l^{2,1}}^\frac{1}{2}\Vert \{P_j\}\Vert _{l^2}\Vert R_j\Vert _{l^2} \end{aligned}$$

to obtain

$$\begin{aligned}&\Vert \{A_j(t)\}\Vert _{l^{2,1}}^2\le \Vert \{\alpha _j\}\Vert _{l^{2,1}}^2+Ct\Vert \{A_j(t)\}\Vert _{l^2}^3\Vert \{A_j(t)\}\Vert _{l^{2,1}} \\&\quad +\int _0^t \Vert \{A_j(\tau )\}\Vert _{l^2}^3\Vert \{A_j(\tau )\}\Vert _{l^{2,1}}d\tau \\&\quad + \int _0^t \Vert \{\partial _\tau A_j(\tau )\}\Vert _{l^2}\Vert \{A_j(\tau )\}\Vert _{l^2}^2\Vert \{A_j(\tau )\}\Vert _{l^{2,1}}\tau d\tau . \end{aligned}$$

Now we notice that for system (17) we get

$$\begin{aligned} \Vert \{\partial _\tau A_j(t)\}\Vert _{l^2}\le & {} \frac{C}{t} (\Vert \{A_j(t)\}\star \{A_j(t)\}\star \{A_j(t)\}\Vert _{l^2}+\Vert \{A_j(t)\}\Vert _{l^2}) \\\le & {} \frac{C}{t} (\Vert \{A_j(t)\}\Vert _{l^{2,1}}\Vert \{A_j(t)\}\Vert _{l^2}^2+\Vert \{A_j(t)\}\Vert _{l^2}). \end{aligned}$$

By using also the conservation of “mass” (19) we finally obtain

$$\begin{aligned}&\Vert \{A_j(t)\}\Vert _{l^{2,1}}^2\le \Vert \{\alpha _j\}\Vert _{l^{2,1}}^2+Ct\Vert \{\alpha _j\}\Vert _{l^2}^3\Vert \{A_j(t)\}\Vert _{l^{2,1}} \\&\quad +\int _0^t \Vert \{\alpha _j\}\Vert _{l^2}^3\Vert \{A_j(\tau )\}\Vert _{l^{2,1}}d\tau + \int _0^t \Vert \{\alpha _j\}\Vert _{l^2}^4\Vert \{A_j(\tau )\}\Vert _{l^{2,1}}^2d\tau . \end{aligned}$$

We thus obtain by Grönwall’s inequality a control of the growth of \(\Vert \{A_j(t)\}\Vert _{l^{2,1}}\), so the local solution can be extended globally, and the proof of Theorem 1.4 is finished.

2.4 Cases of Dirac Masses Not Necessarily Located at Integers

Some cases of Dirac masses not necessarily located at integers were treated in [32], and can be extended here to the cubic case. We denote, for doubly indexed sequences,

$$\begin{aligned} \Vert \{\alpha _{k,{\tilde{k}}}\}\Vert ^2_{l^{2,s}}:=\sum _{k,{\tilde{k}}\in {\mathbb {Z}}}(1+|k|+|{\tilde{k}}|)^{2s}|\alpha _{k,{\tilde{k}}}|^2. \end{aligned}$$

We note that a distribution \(f=\sum _{k,{\tilde{k}}\in {\mathbb {Z}}}\alpha _{k,{\tilde{k}}}\delta _{ak+b{\tilde{k}}}\) satisfies

$$\begin{aligned} {\hat{f}}(\xi )=\widehat{\sum _{k,{\tilde{k}}\in {\mathbb {Z}}}\alpha _{k,{\tilde{k}}}\delta _{ak+b{\tilde{k}}}}(\xi )=\sum _{k,{\tilde{k}}\in {\mathbb {Z}}}\alpha _{k,{\tilde{k}}}e^{-i\xi (ak+b{\tilde{k}})}, \end{aligned}$$

that can be seen as the restriction to \(\xi _1=\xi _2=\xi \) of

$$\begin{aligned} \sum _{k,{\tilde{k}}\in {\mathbb {Z}}}\alpha _{k,{\tilde{k}}}e^{-i\xi _1ak-i\xi _2b{\tilde{k}}}, \end{aligned}$$

which is the Fourier transform of

$$\begin{aligned} E(f):=\sum _{k,{\tilde{k}}\in {\mathbb {Z}}}\alpha _{k,{\tilde{k}}}\delta _{(ak,b{\tilde{k}})}. \end{aligned}$$

We denote

$$\begin{aligned} H^s_{pF;a,b}:=\left\{ u\in {\mathcal {S}}'\left( {\mathbb {R}}^2\right) ,\,\, {\hat{u}}\left( \xi _1+\frac{2\pi }{a},\xi _2\right) = {\hat{u}}\left( \xi _1,\xi _2+\frac{2\pi }{b}\right) = {\hat{u}}\left( \xi _1,\xi _2\right) ,\ {\hat{u}}\in H^s\left( \left( 0,\frac{2\pi }{a}\right) \times \left( 0,\frac{2\pi }{b}\right) \right) \right\} , \end{aligned}$$

and

$$\begin{aligned} \Vert f\Vert _{H^{s,diag}_{pF;a,b}}=\Vert \widehat{E(f)}\Vert _{H^s((0,\frac{2\pi }{a})\times (0,\frac{2\pi }{b}))}. \end{aligned}$$

Theorem 2.3

Let \(s>\frac{1}{2}\), \(T>0\) and \(\frac{1}{2}<\gamma <1\). Let \(a,b\in {\mathbb {R}}^*\) such that \(\frac{a}{b}\notin {\mathbb {Q}}\). We consider the 1-D cubic NLS equation:

$$\begin{aligned} i\partial _t u +\Delta u\pm \frac{1}{2}\left( |u|^2-\frac{M}{2\pi t}\right) u=0. \end{aligned}$$
(38)

with \(M=\sum _{k,{\tilde{k}}\in {\mathbb {Z}}}|\alpha _{k,{\tilde{k}}}|^2\) and \(\#\{(k,{\tilde{k}}),\alpha _{k,{\tilde{k}}}\ne 0\}<\infty \). There exists \(\epsilon _0>0\) such that if \(\Vert \{\alpha _{k,{\tilde{k}}}\}\Vert _{l^{2,s}}\le \epsilon _0\) then we have \(T>0\) and a unique solution on (0, T) of the form

$$\begin{aligned} u(t)=\sum _{k,{\tilde{k}}\in {\mathbb {Z}}}e^{\mp i\frac{|\alpha _{k,{\tilde{k}}}|^2}{4\pi }\log \sqrt{t}}(\alpha _{k,{\tilde{k}}}+R_{k,{\tilde{k}}}(t))e^{it\Delta }\delta _{ak+b{\tilde{k}}}, \end{aligned}$$
(39)

with the decay

$$\begin{aligned} \sup _{0<t<T}t^{-\gamma }\Vert \{R_{k,{\tilde{k}}}(t)\}\Vert _{l^{2,s}}+t\,\Vert \{\partial _t R_{k,{\tilde{k}}}(t)\}\Vert _{l^{2,s}}<C. \end{aligned}$$
(40)

Moreover, considering as initial data a finite sum of N Dirac masses

$$\begin{aligned} u(0)=\sum _{k,{\tilde{k}}\in {\mathbb {Z}}}\alpha _{k,{\tilde{k}}}\delta _{ak+b{\tilde{k}}}, \end{aligned}$$

with coefficients of the same modulus \(|\alpha _{k,{\tilde{k}}}|=a\) and equation (38) normalized with \(M=(N-\frac{1}{2})a^2\), we have a unique solution on \((-T,T)\)

$$\begin{aligned} u(t)=e^{it\Delta }u(0)\pm ie^{it\Delta }\int _0^te^{-i\tau \Delta }\left( \left( |u(\tau )|^2-\frac{M}{2\pi \tau }\right) u(\tau )\right) \,\frac{d\tau }{2}, \end{aligned}$$

such that \(\widehat{E(e^{-it\Delta }u(t))}\in {\mathcal {C}}^1((-T,T),H^s((0,\frac{2\pi }{a})\times (0,\frac{2\pi }{b})))\) with

$$\begin{aligned} \Vert e^{-it\Delta }u(t)-u(0)\Vert _{H^{s,diag}_{pF;a,b}}\le C t^\gamma , \quad \forall t\in (-T,T). \end{aligned}$$

The new phenomenon here is that if, for instance, the initial data is the sum of three Dirac masses located at 0, a and b, then we see small effects on the dense subset of \({\mathbb {R}}\) given by the group \(a{\mathbb {Z}}+ b{\mathbb {Z}}\). Another difference with respect to the previous case is that the non-resonant phases can approach zero, so we shall perform the integration by parts based on the phase only on the free term. Due to this small divisor problem we impose, on one hand, only a finite number of Dirac masses at time \(t=0\), and on the other hand a smallness condition on the data.

The proof of Theorem 2.3 goes similarly to the one of Theorem 1.4, by plugging the ansatz (39) into equation (38) and by using the orthogonality of the family \(\{\frac{e^{i\frac{(x-ak-b{\tilde{k}})^2}{4t}}}{\sqrt{4\pi it}}\}\) to get the associated system

$$\begin{aligned}&i\partial _t A_{k,{\tilde{k}}}(t) \\&\quad =\mp \frac{1}{8\pi t}\sum _{((j_1,\tilde{j_1}),(j_2,\tilde{j_2}),(j_3,\tilde{j_3}))\in NR_{k,{\tilde{k}}}}\\&\quad e^{-i\frac{(ka+{\tilde{k}}b)^2-(j_1a+\tilde{j_1}b)^2+(j_2a+\tilde{j_2}b)^2-(j_3a+\tilde{j_3}b)^2}{4t}}A_{j_1,\tilde{j_1}}(t)\overline{A_{j_2,\tilde{j_2}}(t)}A_{j_3,\tilde{j_3}}(t) \\&\qquad \pm \frac{1}{8\pi t}A_{k,{\tilde{k}}}(t)\left( 2\sum _{j,{\tilde{j}}}|A_{j,{\tilde{j}}}(t)|^2-|A_{k,{\tilde{k}}}(t)|^2-2M\right) , \end{aligned}$$

where \(NR_{k,{\tilde{k}}}\) is the set of indices such that the phase does not vanish, i.e. such that \(k-j_1+j_2-j_3=0\), \({\tilde{k}}-\tilde{j_1}+\tilde{j_2}-\tilde{j_3}=0\) and \((ka+{\tilde{k}}b)^2-(j_1a+\tilde{j_1}b)^2+(j_2a+\tilde{j_2}b)^2-(j_3a+\tilde{j_3}b)^2\ne 0\); since \(\frac{a}{b}\notin {\mathbb {Q}}\), this last condition amounts to \((j_1,\tilde{j_1})\ne (k,{\tilde{k}})\) and \((j_1,\tilde{j_1})\ne (j_2,\tilde{j_2})\). We have to solve the equivalent “mass”-conserving system:

$$\begin{aligned}&i\partial _t A_{k,{\tilde{k}}}(t) \nonumber \\&\quad =\mp \frac{1}{8\pi t}\sum _{((j_1,\tilde{j_1}),(j_2,\tilde{j_2}),(j_3,\tilde{j_3}))\in NR_{k,{\tilde{k}}}}\nonumber \\&\qquad \quad e^{-i\frac{(ka+{\tilde{k}}b)^2-(j_1a+\tilde{j_1}b)^2+(j_2a+\tilde{j_2}b)^2-(j_3a+\tilde{j_3}b)^2}{4t}}A_{j_1,\tilde{j_1}}(t)\overline{A_{j_2,\tilde{j_2}}(t)}A_{j_3,\tilde{j_3}}(t) \nonumber \\&\qquad \mp \frac{1}{8\pi t}|A_{k,{\tilde{k}}}(t)|^2A_{k,{\tilde{k}}}(t). \end{aligned}$$
(41)

We look for solutions of the form \(A_{k,{\tilde{k}}}(t)=e^{\mp i\frac{|\alpha _{k,{\tilde{k}}}|^2}{4\pi }\log \sqrt{t}}(\alpha _{k,{\tilde{k}}}+R_{k,{\tilde{k}}}(t))\), with \(\{R_{k,{\tilde{k}}}\}\in Y^\gamma \):

$$\begin{aligned} Y^\gamma :=\{\{f_{k,{\tilde{k}}}\}\in {\mathcal {C}}((0,T);l^{2,s}),\ \Vert \{t^{-\gamma }f_{k,{\tilde{k}}}(t)\}\Vert _{L^\infty (0,T) l^{2,s}}<\infty \}.\end{aligned}$$
(42)

As for Theorem 1.4, we make a fixed point argument in a ball of \(Y^\gamma \) of size depending on \(\Vert \{\alpha _{k,{\tilde{k}}}\}\Vert _{l^{2,s}}\) for the operator \(\Phi \) sending \(\{R_{k,{\tilde{k}}}\}\) into

$$\begin{aligned} \Phi (\{R_{k,{\tilde{k}}}\})=\{\Phi _{k,{\tilde{k}}}(\{R_{j,{\tilde{j}}}\})\}, \end{aligned}$$

with

$$\begin{aligned}&\Phi _{k,{\tilde{k}}}(\{R_{j,{\tilde{j}}}\})(t)\\&\quad =\mp i\int _0^t f_{k,{\tilde{k}}}(\tau )\,d\tau \pm i\int _0^t\int _0^\tau \mathfrak {I}(f_{k,{\tilde{k}}}(s)\overline{(\alpha _{k,{\tilde{k}}}+R_{k,{\tilde{k}}}(s))})\,ds\,(\alpha _{k,{\tilde{k}}}+R_{k,{\tilde{k}}}(\tau ))\frac{d\tau }{4\pi \tau }, \end{aligned}$$

where

$$\begin{aligned}&f_{k,{\tilde{k}}}(t)=\sum _{((j_1,\tilde{j_1}),(j_2,\tilde{j_2}),(j_3,\tilde{j_3}))\in NR_{k,{\tilde{k}}}}\frac{e^{-i\frac{(ka+{\tilde{k}}b)^2-(j_1a+\tilde{j_1}b)^2+(j_2a+\tilde{j_2}b)^2-(j_3a+\tilde{j_3}b)^2}{4t}}}{8\pi t} \\&\qquad \times e^{-i\frac{|\alpha _{k,{\tilde{k}}}|^2-|\alpha _{j_1,{\tilde{j}}_1}|^2+|\alpha _{j_2,{\tilde{j}}_2}|^2-|\alpha _{j_3,{\tilde{j}}_3}|^2}{4\pi }\log \sqrt{t}}(\alpha _{j_1,\tilde{j_1}}+R_{j_1,\tilde{j_1}}(t))\overline{(\alpha _{j_2,\tilde{j_2}}+R_{j_2,\tilde{j_2}}(t))}(\alpha _{j_3,\tilde{j_3}}+R_{j_3,\tilde{j_3}}(t)). \end{aligned}$$

To avoid issues related to the non-resonant phase approaching zero, we perform integrations by parts only in the free term, which involves a finite number of terms, as \(\#\{(k,{\tilde{k}}),\alpha _{k,{\tilde{k}}}\ne 0\}<\infty \). All the remaining terms contain powers of \(R_{j,{\tilde{j}}}(\tau )\), so we get integrability in time by using the Young inequalities (35) for doubly indexed sequences. However, due to the presence of terms linear in \(R_{j,{\tilde{j}}}(\tau )\), we need to impose a smallness condition on the initial data \(\Vert \{\alpha _{j,{\tilde{j}}}\}\Vert _{l^{2,s}}\). Moreover, from the cubic terms, treated without integrations by parts as previously, we need to impose \(\gamma >\frac{1}{2}\). The control of \(\Vert \{t\partial _t R_{k,{\tilde{k}}}(t)\}\Vert _{L^\infty (0,T) l^{2,s}}\) is easily obtained a posteriori, once a solution is constructed in \(Y^\gamma \).

3 The Talbot Effect

The Talbot effect for the linear and nonlinear Schrödinger equations on the torus, with initial data given by functions with bounded variation, has been extensively studied ([5, 17, 20, 46, 48, 52]). Here we place ourselves in a more singular setting on \({\mathbb {R}}\), and get closer to the Talbot effect observed in optics (see for example [6]), which is typically modeled with Dirac combs, as we consider in this paper.

As a consequence of Theorem 1.4 the solution u(t) of equation (9) with initial data

$$\begin{aligned} u(0)=\sum _{k\in {\mathbb {Z}}}\alpha _k\delta _k, \end{aligned}$$

behaves for small times like \(e^{it\Delta }u_0\). We first compute the linear evolution \(e^{it\Delta }u_0\), which displays a Talbot effect.

Proposition 3.1

(Talbot effect for linear evolutions) Let \(p\in {\mathbb {N}}\) and let \(u_0\) be such that \(\hat{u_0}\) is \(2\pi \)-periodic. For all \(t_{p,q}=\frac{1}{2\pi }\frac{p}{q}\) with q odd, and for all \(x\in {\mathbb {R}}\), we have

$$\begin{aligned}&e^{it_{p,q}\Delta }u_0(x)=\frac{1}{\sqrt{q}}\int _0^{2\pi }\hat{u_0}(\xi )e^{-it_{p,q}\xi ^2+ix\xi }\nonumber \\&\quad \sum _{l\in {\mathbb {Z}}}\sum _{m=0}^{q-1}e^{i\theta _{m,p,q}}\delta \left( x-2t_{p,q}\,\xi -l-\frac{m}{q}\right) \,d\xi , \end{aligned}$$
(43)

for some \(\theta _{m,p,q}\in {\mathbb {R}}\). We suppose now that moreover \({\hat{u}}_0\) is supported modulo \(2\pi \) only in a neighborhood of zero of radius less than \(\eta \frac{\pi }{ p}\), with \(0<\eta <1\). For a given \(x\in {\mathbb {R}}\) we define

$$\begin{aligned} \xi _{x} :=\frac{\pi q}{p} \,dist\left( x,\frac{1}{q}{\mathbb {Z}}\right) \in [0,\frac{\pi }{p}). \end{aligned}$$

Then there exists \(\theta _{x,p,q}\in {\mathbb {R}}\) such that

$$\begin{aligned} e^{it_{p,q}\Delta }u_0(x)=\frac{1}{\sqrt{q}} \,\hat{u_0}(\xi _x)\, e^{-it_{p,q}\,\xi _x^2+ix\,\xi _x+i\theta _{x,p,q}}. \end{aligned}$$
(44)

In particular \(|e^{it_{p,q}\Delta }u_0(l+\frac{m}{q})|=|e^{it_{p,q}\Delta }u_0(0)|\) and if x is at distance larger than \(\frac{\eta }{q}\) from \(\frac{1}{q}{\mathbb {Z}}\) then \(e^{it_{p,q}\Delta }u_0(x)\) vanishes.

Moreover, the solution can concentrate near \(\frac{1}{q}{\mathbb {Z}}\) in the sense that there is a family of initial data \(u_0^\lambda =\sum _{k\in {\mathbb {Z}}}\alpha _k^\lambda \delta _k\) and \(C>0\) such that

$$\begin{aligned} \left| \frac{e^{it_{p,q}\Delta }u_0^\lambda (0)}{e^{it_{p,q}\Delta }\alpha _0^\lambda \delta _0(0)}\right| \overset{\lambda \rightarrow \infty }{\longrightarrow }\infty . \end{aligned}$$
(45)

We note here that thanks to the Poisson summation formula the above proposition applies to \(u_0=\sum _{k\in {\mathbb {Z}}}\delta _k\). Therefore \(e^{it_{p,q}\Delta }u_0(x)=0\) for \(x\notin \frac{1}{q}{\mathbb {Z}}\), and is a Dirac mass otherwise, which is the classical Talbot effect. However this kind of data does not satisfy the conditions of Theorem 1.4. Nevertheless, the concentration phenomenon (45) is obtained by taking a sequence of initial data \(\{u_0^\lambda \}\) whose Fourier transform is periodic and concentrates near the integers.
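This classical Talbot effect can be visualized by truncating the sum \(\sum _{k}e^{-it_{p,q}(2\pi k)^2+i2\pi kx}\) representing \(e^{it_{p,q}\Delta }\big(\sum _k\delta _k\big)\) and checking numerically that the mass concentrates near \(\frac{1}{q}{\mathbb {Z}}\). The following sketch is ours and purely illustrative; the truncation level and the grid are arbitrary choices.

```python
import numpy as np

p, q, K = 1, 5, 300                     # rational time t_{p,q} = p/(2*pi*q); truncation |k| <= K
t = p / (2 * np.pi * q)
x = np.linspace(0.0, 1.0, 2001, endpoint=False)
k = np.arange(-K, K + 1)

# Truncated periodized free evolution of the Dirac comb at time t_{p,q}.
S = np.exp(-1j * t * (2 * np.pi * k[None, :])**2 + 2j * np.pi * k[None, :] * x[:, None]).sum(axis=1)

# Distance from x to (1/q)Z, used to separate points near and far from the revival set.
dist = np.abs(((x * q + 0.5) % 1.0) - 0.5) / q
near = np.abs(S[dist < 0.01 / q])
far = np.abs(S[dist > 0.2 / q])
print("max |S| near (1/q)Z:", round(float(near.max()), 1), "   max |S| far from (1/q)Z:", round(float(far.max()), 1))
```

The contrast between the two printed maxima illustrates the concentration near \(\frac{1}{q}{\mathbb {Z}}\); it sharpens as the truncation level K grows.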

Combined with Theorem 1.4, Proposition 3.1 ensures the persistence of the Talbot effect at the nonlinear level.

Proposition 3.2

(Talbot effect for nonlinear evolutions) Let \(p\in {\mathbb {N}}\), \(\epsilon \in (0,1)\) and \(q_\epsilon \) such that \(\epsilon ^2\sqrt{q_\epsilon }\log q_\epsilon <\frac{1}{2}\); in particular \(q_\epsilon \overset{\epsilon \rightarrow 0}{\longrightarrow }+\infty \).

Let \(u_0\) be such that \({\hat{u}}_0\) is \(2\pi \)-periodic, located modulo \(2\pi \) only in a neighborhood of zero of radius less than \(\eta \frac{\pi }{ p}\) with \(0<\eta <1\), and has Fourier coefficients such that \(\Vert \{\alpha _k\}\Vert _{l^{2,s}}\le \epsilon \) for some \(s>\frac{1}{2}\). Let \(u(t,x)\) be the solution of (9) obtained in Theorem 1.4 from \(\{\alpha _k\}\). Then for all \(t_{p,q}=\frac{1}{2\pi }\frac{p}{q}\) with \(1\le q\le q_\epsilon \) odd, and for all x at distance larger than \(\frac{\eta }{q}\) from \(\frac{1}{q}{\mathbb {Z}}\), the function \(u(t,x)\) almost vanishes in the sense:

$$\begin{aligned} |u(t_{p,q},x)|\le \epsilon . \end{aligned}$$
(46)

Moreover, the solution can concentrate near \(\frac{1}{q}{\mathbb {Z}}\) in the sense that there is a family of sequences \(\{\alpha _k^\lambda \}\) with \(\Vert \{\alpha _k^\lambda \}\Vert _{l^{2,s}}\overset{\lambda \rightarrow \infty }{\longrightarrow }0\) such that the corresponding solutions \(u_\lambda \) obtained in Theorem 1.4 satisfy

$$\begin{aligned} \left| \frac{u^\lambda (t_{p,q},0)}{e^{it_{p,q}\Delta }\alpha _0^\lambda \delta _0(0)}\right| \overset{\lambda \rightarrow \infty }{\longrightarrow }\infty . \end{aligned}$$
(47)

3.1 Proof of Propositions 3.1 and 3.2

We start by recalling the Poisson summation formula \(\sum _{k\in {\mathbb {Z}}}f_k=\sum _{k\in {\mathbb {Z}}}{\hat{f}}(2\pi k)\) for the Dirac comb:

$$\begin{aligned} \left( \sum _{k\in {\mathbb {Z}}}\delta _k\right) (x)=\sum _{k\in {\mathbb {Z}}}\delta (x-k)=\sum _{k\in {\mathbb {Z}}}e^{i2\pi kx}, \end{aligned}$$

as

$$\begin{aligned} \widehat{\delta (x-\cdot )}(2\pi k)=\int _{-\infty }^\infty e^{-i2\pi ky}\delta (x-y)\,dy=e^{-i2\pi kx}. \end{aligned}$$

The computation of the free evolution with periodic Dirac data is

$$\begin{aligned} e^{it\Delta }\left( \sum _{k\in {\mathbb {Z}}}\delta _k\right) (x)=\sum _{k\in {\mathbb {Z}}}e^{-it(2\pi k)^2+i2\pi kx}. \end{aligned}$$
(48)

For \(t=\frac{1}{2\pi }\frac{p}{q}\) we have (choosing \(M=2\pi \) in formulas (37) combined with (42) from [19])

$$\begin{aligned} e^{it\Delta }\left( \sum _{k\in {\mathbb {Z}}}\delta _k\right) (x)=\frac{1}{q}\sum _{l\in {\mathbb {Z}}}\sum _{m=0}^{q-1}G(-p,m,q)\delta \left( x-l-\frac{m}{q}\right) , \end{aligned}$$
(49)

which describes the linear Talbot effect in the periodic setting. Here \(G(-p,m,q)\) stands for the Gauss sum

$$\begin{aligned} G(-p,m,q)=\sum _{l=0}^{q-1}e^{2\pi i\frac{-pl^2+ml}{q}}. \end{aligned}$$
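Let us record numerically the dichotomy used below: for q odd (and p coprime with q) these Gauss sums have modulus \(\sqrt{q}\) for every m, while for q even they may vanish. The short script below is ours and only illustrative.

```python
import cmath
import math

def gauss_sum(p, m, q):
    # G(-p, m, q) = sum_{l=0}^{q-1} exp(2*pi*i*(-p*l^2 + m*l)/q)
    return sum(cmath.exp(2j * math.pi * (-p * l * l + m * l) / q) for l in range(q))

# For q odd and p coprime with q, |G(-p, m, q)| = sqrt(q) for every m;
# for q even the sum may vanish for some m.
for q in [3, 5, 7, 9, 4, 8]:
    mods = [round(abs(gauss_sum(1, m, q)), 6) for m in range(q)]
    print(f"q={q}: |G(-1,m,q)| = {mods},  sqrt(q) = {round(math.sqrt(q), 6)}")
```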

Now we want to compute the free evolution of the data \(u_0=\sum _{k\in {\mathbb {Z}}}\alpha _k\delta _k\). As \(\widehat{u_0}(\xi )=\sum _{k\in {\mathbb {Z}}}\alpha _k e^{-ik\xi }\) is \(2\pi \)-periodic we have:

$$\begin{aligned} e^{it\Delta }u_0(x)= & {} \frac{1}{2\pi }\int _{-\infty }^\infty e^{ix\xi }e^{-it\xi ^2}\hat{u_0}(\xi )\,d\xi =\frac{1}{2\pi }\sum _{k\in {\mathbb {Z}}}\int _{2\pi k}^{2\pi (k+1)} e^{ix\xi -it\xi ^2}\hat{u_0}(\xi )\,d\xi \\= & {} \frac{1}{2\pi }\int _0^{2\pi }\hat{u_0}(\xi )\sum _{k\in {\mathbb {Z}}}e^{ix(2\pi k+\xi )-it(2\pi k+\xi )^2}\,d\xi \\= & {} \frac{1}{2\pi }\int _0^{2\pi }\hat{u_0}(\xi )e^{-it\xi ^2+ix\xi }\sum _{k\in {\mathbb {Z}}}e^{-it\,(2\pi k)^2+i2\pi k (x- 2t\xi )}\,d\xi . \end{aligned}$$

Therefore, for \(t_{p,q}=\frac{1}{2\pi }\frac{p}{q}\) we get using (48)–(49):

$$\begin{aligned}&e^{it_{p,q}\Delta }u_0(x)\nonumber \\&\quad =\frac{1}{q}\int _0^{2\pi }\hat{u_0}(\xi )e^{-it_{p,q}\xi ^2+ix\xi }\sum _{l\in {\mathbb {Z}}}\sum _{m=0}^{q-1}G(-p,m,q)\delta \left( x-2t_{p,q}\xi -l-\frac{m}{q}\right) \,d\xi . \end{aligned}$$

For q even, \( G(-p,m,q)\) can vanish. Therefore we consider q odd. In this case \( G(-p,m,q)=\sqrt{q}e^{i\theta _{m,p,q}}\) for some \(\theta _{m,p,q}\in {\mathbb {R}}\), so we get for \(t_{p,q}=\frac{1}{2\pi }\frac{p}{q}\)

$$\begin{aligned}&e^{it_{p,q}\Delta }u_0(x)\nonumber \\&\quad =\frac{1}{\sqrt{q}}\int _0^{2\pi }\hat{u_0}(\xi )e^{-it_{p,q}\xi ^2+ix\xi }\sum _{l\in {\mathbb {Z}}}\sum _{m=0}^{q-1}e^{i\theta _{m,p,q}}\delta \left( x-2t_{p,q}\,\xi -l-\frac{m}{q}\right) \,d\xi . \end{aligned}$$

We note that for \(0\le \xi <2\pi \) we have \(0\le 2t\xi <\frac{2p}{q}\). For a given \(x\in {\mathbb {R}}\) there exists a unique \(l_x\in {\mathbb {Z}}\) and a unique \(0\le m_x<q\) such that

$$\begin{aligned} x-l_x-\frac{m_x}{q} \in [0,\frac{1}{q}). \end{aligned}$$

We denote

$$\begin{aligned} \xi _{x} :=\frac{\pi q}{p}\left( x-l_x-\frac{m_x}{q}\right) =\frac{\pi q}{p}\,d\left( x,\frac{1}{q}{\mathbb {Z}}\right) \in [0,\frac{\pi }{p}). \end{aligned}$$

In particular if \({\hat{u}}_0\) is located modulo \(2\pi \) only in a neighborhood of zero of radius less than \(\frac{\pi }{ p}\) then

$$\begin{aligned} e^{it_{p,q}\Delta }u_0(x)=\frac{1}{\sqrt{q}} \,\hat{u_0}(\xi _x)\, e^{-it_{p,q}\,\xi _x^2+ix\,\xi _x+i\theta _{x,p,q}}, \end{aligned}$$

for some \(\theta _{x,p,q}\in {\mathbb {R}}\). If moreover \({\hat{u}}_0\) is located modulo \(2\pi \) only in a neighborhood of zero of radius less than \(\eta \frac{\pi }{ p}\) with \(0<\eta <1\), then the above expression vanishes for x at distance larger than \(\frac{\eta }{ q}\) from \(\frac{1}{q}{\mathbb {Z}}\).

We are left with proving the concentration effect (45) of Proposition 3.1. We shall construct a family of sequences \(\{\alpha _k^\lambda \}\) such that \(\sum _{k\in {\mathbb {Z}}} \alpha _k^\lambda \delta _k\) concentrates in the Fourier variable near the integers. To this purpose we consider \(\psi \) a real bounded function with support in \([-\frac{1}{2},\frac{1}{2}]\) and \(\psi (0)=1\). We define

$$\begin{aligned} f^\lambda (\xi )= \lambda ^{\beta }\psi (\lambda \xi ) ,\forall \xi \in [-\pi ,\pi ], \end{aligned}$$

with \(\beta \in {\mathbb {R}}\). Thus we can decompose

$$\begin{aligned} f^\lambda (\xi )=\sum _{k\in {\mathbb {Z}}} \alpha _k^\lambda e^{ik\xi }, \end{aligned}$$

and consider

$$\begin{aligned} u_0^\lambda =\sum _{k\in {\mathbb {Z}}} \alpha _k^\lambda \delta _k. \end{aligned}$$

In particular, on \([-\pi ,\pi ]\), we have \(\widehat{u_0^\lambda }=f^\lambda \). Given \(t_{p,q}=\frac{1}{2\pi }\frac{p}{q}\), for \(\lambda >p\), the restriction of \(\widehat{u_0^\lambda }\) to \([-\pi ,\pi ]\) has support included in a neighborhood of zero of radius less than \(\eta \frac{\pi }{p}\) for a \(\eta \in ]0,1[\). We then get by (44)

$$\begin{aligned} e^{it_{p,q}\Delta }u_0^\lambda (0)=\frac{1}{\sqrt{q}} \,\widehat{u_0^\lambda }(0)\, e^{i\theta _{0,p,q}}, \end{aligned}$$

so

$$\begin{aligned} |e^{it_{p,q}\Delta }u_0^\lambda (0)|=\frac{1}{\sqrt{q}} \,|f^\lambda (0)|=\frac{1}{\sqrt{q}} \lambda ^{\beta }\psi (0)=\frac{1}{\sqrt{q}} \lambda ^{\beta }. \end{aligned}$$

On the other hand, at \(t_{p,q}=\frac{1}{2\pi }\frac{p}{q}\) we have

$$\begin{aligned} |e^{it_{p,q}\Delta }\alpha _0^\lambda \delta _0(0)|=\sqrt{\frac{4q}{p}} \,|\alpha _0^\lambda |=\sqrt{\frac{4q}{p}} \,\frac{1}{2\pi }\left| \int _{-\pi }^\pi f^\lambda (\xi )d\xi \right| =C(\psi )\sqrt{\frac{q}{p}} \lambda ^{\beta -1} \end{aligned}$$

Therefore

$$\begin{aligned} \left| \frac{e^{it_{p,q}\Delta }u_0^\lambda (0)}{e^{it_{p,q}\Delta }\alpha _0^\lambda \delta _0(0)}\right| =\frac{ \sqrt{p}}{C(\psi )q}\lambda \overset{\lambda \rightarrow \infty }{\longrightarrow }\infty , \end{aligned}$$

and the proof of Proposition 3.1 is complete.
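The scalings behind this concentration argument can be checked numerically. The sketch below is ours: it picks an arbitrary bump \(\psi \) (triangular, supported in \([-\frac{1}{2},\frac{1}{2}]\), with \(\psi (0)=1\)) and an illustrative exponent \(\beta \), computes \(\alpha _0^\lambda =\frac{1}{2\pi }\int _{-\pi }^{\pi }f^\lambda (\xi )\,d\xi \), and compares it with \(\lambda ^{\beta -1}\), while \(f^\lambda (0)=\lambda ^{\beta }\); the ratio of the two quantities grows like \(\lambda \), which is the mechanism behind (45).

```python
import numpy as np

def psi(x):
    # Triangular bump: support in [-1/2, 1/2], psi(0) = 1 (one admissible choice, ours).
    return np.maximum(0.0, 1.0 - 2.0 * np.abs(x))

beta = 0.3   # illustrative exponent

for lam in [10.0, 100.0, 1000.0]:
    # f^lambda(xi) = lam^beta * psi(lam*xi) is supported in |xi| <= 1/(2*lam), inside [-pi, pi].
    xi = np.linspace(-0.5 / lam, 0.5 / lam, 4001)
    alpha0 = np.trapz(lam**beta * psi(lam * xi), xi) / (2.0 * np.pi)   # alpha_0^lambda
    print(f"lambda={lam:7.1f}   f^lambda(0)={lam**beta:8.3f}   "
          f"|alpha_0^lambda|={abs(alpha0):.3e}   lambda^(beta-1)={lam**(beta-1):.3e}")
```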

Finally, for the first part of Proposition 3.2, as the sequence \(\{\alpha _k\}\) satisfies the conditions of Theorem 1.4,

$$\begin{aligned} u(t_{p,q},x)=\sum _{k\in {\mathbb {Z}}}e^{\mp i\frac{|\alpha _k|^2}{4\pi }\log \sqrt{t_{p,q}}}(\alpha _k+R_k(t_{p,q}))e^{it_{p,q}\Delta }\delta _k(x), \end{aligned}$$

so

$$\begin{aligned}&\left| u(t_{p,q},x)-\sum _{k\in {\mathbb {Z}}}e^{it_{p,q}\Delta }\alpha _k\delta _k(x)\right| \\&\quad \le \sum _{k\in {\mathbb {Z}}}|1-e^{\mp i\frac{|\alpha _k|^2}{4\pi }\log \sqrt{t_{p,q}}}|\,|\alpha _k|\, |e^{it_{p,q}\Delta }\delta _k(x)|+\sum _{k\in {\mathbb {Z}}}|R_k(t_{p,q})|\,|e^{it_{p,q}\Delta }\delta _k(x)|. \end{aligned}$$

From Proposition 3.1, if x is at distance larger than \(\frac{\eta }{q}\) from \(\frac{1}{q}{\mathbb {Z}}\) then \(e^{it_{p,q}\Delta }\sum _k\alpha _k\delta _k(x)\) vanishes. Also, from (37) we can choose the radius \(\delta \) of the fixed point argument for \(\{R_k\}\) to be of type \(C\Vert \{\alpha _k\}\Vert _{l^{2,s}}^3\) so we get

$$\begin{aligned} |u(t_{p,q},x)|\le \sum _{k\in {\mathbb {Z}}}|1-e^{i\frac{\mp |\alpha _k|^2}{4\pi }\log \sqrt{t_{p,q}}}||\alpha _k| \frac{C}{\sqrt{t_{p,q}}}+C\Vert \{\alpha _k\}\Vert _{l^{2,s}}^3t_{p,q}^{\gamma -\frac{1}{2}}. \end{aligned}$$

If q is such that \(\Vert \{\alpha _k\}\Vert _{l^{\infty }}^2\log q<\frac{1}{2}\) then we obtain

$$\begin{aligned} |u(t_{p,q},x)|\le C\sqrt{q}\log q\sum _{k\in {\mathbb {Z}}}|\alpha _k|^3+\frac{C}{q^{\gamma -\frac{1}{2}}}\Vert \{\alpha _k\}\Vert _{l^{2,s}}^3, \end{aligned}$$

and therefore (46) follows for \(C\sqrt{q}\log q\,\epsilon ^2<1\).

For the last part of Proposition 3.2 we proceed as for the last part of Proposition 3.1, and we suppose also that \(\psi \in H^s({\mathbb {R}})\) with \(s>\frac{1}{2}\), and impose \(\beta <\frac{1}{2}-s\). Then the condition \(\psi \in H^s({\mathbb {R}})\) ensures that \(\{\alpha ^\lambda _k\}\in l^{2,s}\), and moreover \(\Vert \{\alpha ^\lambda _k\}\Vert _{l^{2,s}}=C(\psi )\lambda ^{\beta +s-\frac{1}{2}}\overset{\lambda \rightarrow +\infty }{\longrightarrow }0\). Therefore, for \(\lambda \) large enough, by using the same estimates as above we obtain for \(\frac{1}{2}<\gamma <1\)

$$\begin{aligned} |u^\lambda (t_{p,q},0)-e^{it_{p,q}\Delta }u_0^\lambda (0)|\le & {} C\sqrt{q}\log q\Vert \{\alpha ^\lambda _k\}\Vert _{l^{2,s}}^3+\frac{C}{q^{\gamma -\frac{1}{2}}}\Vert \{\alpha ^\lambda _k\}\Vert _{l^{2,s}}^3\\\le & {} C\sqrt{q}\log q\lambda ^{3\beta +3s-\frac{3}{2}}, \end{aligned}$$

so

$$\begin{aligned} \left| \frac{u^\lambda (t_{p,q},0)}{e^{it_{p,q}\Delta }\alpha _0^\lambda \delta _0(0)}-\frac{e^{it_{p,q}\Delta }u_0^\lambda (0)}{e^{it_{p,q}\Delta }\alpha _0^\lambda \delta _0(0)}\right| \le C\log q\lambda ^{2\beta +3s-\frac{1}{2}}. \end{aligned}$$

By choosing \(\beta =\frac{3}{2}(\frac{1}{2}-s)^-\) we have \(\lambda ^{2\beta +3s-\frac{1}{2}}\ll \lambda \) so in view of (45) the divergence (47) follows.

4 Evolution of Polygonal Lines Through the Binormal Flow

In this section we prove Theorem 1.1.

4.1 Plan of the Proof

We consider equation (9) with initial data

$$\begin{aligned} u(0)=\sum _{k\in {\mathbb {Z}}} \alpha _k\delta _k, \end{aligned}$$

where the coefficients \(\alpha _k\) will be defined in §4.2 in a specific way involving geometric quantities characterizing the polygonal line \(\chi _0\). Then Theorem 1.4 gives us a solution \(u(t,x)\) for \(t>0\). From this smooth solution on \(]0,\infty [\) we construct in §4.3 a smooth solution \(\chi (t)\) of the binormal flow on \(]0,\infty [\), which has a limit \(\chi (0)\) at \(t=0\). Now the goal is to prove that the curve \(\chi (0)\) is the curve \(\chi _0\) modulo a translation and a rotation. This is done in several steps. First we show in §4.4 that the tangent vector has a limit at \(t=0\). Secondly we show in §4.5 that \(\chi (0)\) is a segment for \(x\in ]n,n+1[,\forall n\in {\mathbb {Z}}\). Then we prove in §4.7, by analyzing the frame of the curve through paths in self-similar variables, that at points \(x=k\in {\mathbb {Z}}\) the curve \(\chi (0)\) presents a corner of the same angle as \(\chi _0\). In §4.9 we recover the torsion of \(\chi _0\), using also a similar analysis for the modulated normal vectors of §4.8. Therefore we conclude in §4.10 that \(\chi (0)\) and \(\chi _0\) coincide modulo a translation and a rotation. By considering the correspondingly translated and rotated \(\chi (t)\) we obtain the desired binormal flow solution with limit \(\chi _0\) at \(t=0\). The extension to negative times is done by using time reversibility. Uniqueness holds in the class of curves having filament functions of type (7). In §4.11 we describe some properties of the binormal flow solution given by Theorem 1.1.

4.2 Designing the Coefficients of the Dirac Masses in Geometric Terms

Let \(\chi _0(x)\) be a polygonal line parametrized by arclength, having at least two consecutive corners, located at \(x=x_0\) and \(x=x_1\). We denote by \(\{x_n, n\in L\}\subset {\mathbb {R}}\) the locations of all its corners, ordered increasingly: \(x_n<x_{n+1}\). Here L stands for a finite or infinite set of consecutive integers starting at \(n=0\). We can characterize such a curve by the location of its corners \(\{x_n, n\in L\}\subset {\mathbb {R}}\) and by a triple sequence \(\{\theta _n,\tau _n,\delta _n\}_{n\in L}\) where \(\theta _n\in ]0,\pi [\), \(\tau _n\in [0,\pi ]\) and \(\delta _n\in \{-,+\}\), \(\tau _{0}=0\), \(\delta _{0}=+\), in the following way.

Let us first denote by \(T_n\in {\mathbb {S}}^2\) the tangent vector of \(\chi _0(x)\) for \(x\in ]x_n,x_{n+1}[\). For \(n\in L\) we define \(\theta _n\in ]0,\pi [\) to be the (curvature) angle between \(T_{n-1}\) and \(T_{n}\). We note that given only \(T_{n-1}\) and \(\theta _n\) we have a one-parameter family of possibilities, indexed by \([0,2\pi [\), to choose \(T_{n}\). We define \(\tau _{0}=0\), \(\delta _{0}=+\) and for \(n>0\) we define a (torsion) angle \(\tau _n\in [0,\pi ]\) at the corner located at \(x_n\) to be such that

$$\begin{aligned} \cos (\tau _{n})=\frac{T_{n-1}\wedge T_n}{|T_{n-1}\wedge T_n|}.\frac{T_n\wedge T_{n+1}}{|T_n\wedge T_{n+1}|}. \end{aligned}$$
(50)

We note now that given only \(T_{n-1},T_n\), \(\theta _n\) and \(\tau _n\) we have two possibilities to choose \(T_{n+1}\). Indeed, \(T_{n+1}\) is determined by the position of \(T_n\wedge T_{n+1}\) in the plane \(\Pi _n\) orthogonal to \(T_n\), given by the oriented frame \(T_{n-1}\wedge T_n\) and \(T_n\wedge (T_{n-1}\wedge T_n)\). Therefore we have two possibilities by orienting it with respect to \(T_{n-1}\wedge T_n\): by \(\tau _n\) or by \(-\tau _n\). We define \(\delta _n=+\) if \( (T_{n-1}\wedge T_n)\wedge (T_n\wedge T_{n+1})\) points in the same direction as \(T_n\), and \(\delta _n=-\) if it points in the opposite direction. For \(n<0\) we define similarly (torsion) angles \(\tau _n\in [0,\pi [\) at the corner located at \(x_n\).

Conversely, given L a set of consecutive integers containing 0 and 1, given an increasing sequence \(\{x_n, n\in L\}\subset {\mathbb {R}}\), and given a triple sequence \(\{\theta _n,\tau _n,\delta _n\}_{n\in L}\) where \(\theta _n\in ]0,\pi [\), \(\tau _n\in [0,\pi ]\) and \(\delta _n\in \{-,+\}\), such that \(\tau _{0}=0\), \(\delta _{0}=+\), we can determine a polygonal line \(\chi _0\), unique up to rotations and translations, in the following way. We construct first the tangent vectors, then \(\chi _0\) is obtained by setting \(\chi _0'(x)=T_n\) on \(x\in ]x_n,x_{n+1}[\). We pick a unit vector and denote it \(T_{-1}\). Then we pick a unit vector having an angle \(\theta _{0}\) with \(T_{-1}\), and we call it \(T_{0}\). Then we consider all unit vectors v having an angle \(\theta _{1}\) with \(T_{0}\). Among them, we choose the two of them such that \(T_{0}\wedge v\), which lives in the plane \(\Pi _{0}\) orthogonal to \(T_{0}\), has an angle \(\tau _{1}\) with \(T_{-1}\wedge T_{0}\). Eventually, we choose \(T_{1}\) to be the one of the two such that if \(\delta _{0}=+\) the vector \( (T_{-1}\wedge T_{0})\wedge (T_{0}\wedge v)\) points in the same direction as \(T_{0}\), and such that if \(\delta _{0}=-\) the vector \( (T_{-1}\wedge T_{0})\wedge (T_{0}\wedge v)\) points in the opposite direction of \(T_{0}\). We iterate this process to construct \(\chi _0(x)\) on \(x>x_0\), and similarly to construct \(\chi _0(x)\) on \(x<x_0\).
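The iteration just described is elementary to implement. The following sketch is ours and only illustrative: the helper names, the initial pair of tangent vectors and the sample angles are arbitrary, and the sign convention for \(\delta \) follows the description above. It builds a few tangent vectors from the data \((\theta ,\tau ,\delta )\) and checks that the prescribed curvature angles are recovered.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def next_tangent(T_prev, T_cur, theta, tau, delta):
    """Build the next tangent from the two previous ones and the angles (theta, tau, delta)."""
    e1 = unit(np.cross(T_prev, T_cur))        # direction of T_{n-1} ^ T_n
    e2 = np.cross(T_cur, e1)                  # T_n ^ (T_{n-1} ^ T_n), completing the frame of Pi_n
    sign = 1.0 if delta == '+' else -1.0
    w = np.cos(tau) * e1 + sign * np.sin(tau) * e2   # chosen direction of T_n ^ T_{n+1}
    return np.cos(theta) * T_cur + np.sin(theta) * np.cross(w, T_cur)

# Arbitrary sample data (not from the paper): an initial pair of tangents and a few angles.
T_prev = np.array([0.0, 0.0, 1.0])
T_cur = np.array([np.sin(0.4), 0.0, np.cos(0.4)])    # angle 0.4 with T_prev
tangents = [T_prev, T_cur]
for theta, tau, delta in [(0.7, 1.1, '+'), (0.5, 0.3, '-'), (0.9, 2.0, '+')]:
    tangents.append(next_tangent(tangents[-2], tangents[-1], theta, tau, delta))

# Sanity checks: unit tangents, prescribed curvature angles recovered.
for T in tangents:
    assert abs(np.linalg.norm(T) - 1.0) < 1e-9
print([round(float(np.arccos(np.dot(a, b))), 3) for a, b in zip(tangents[1:-1], tangents[2:])])
```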

Given \(\chi _0\) the polygonal line of the statement, we define \(\{x_n, n\in L\}\) the ordered set of its integer corner locations and the corresponding sequence \(\{\theta _n,\tau _n,\delta _n\}_{n\in L}\) where \(\theta _n\in ]0,\pi [\), \(\tau _n\in [0,\pi ]\) and \(\delta _n\in \{-,+\}\), \(\tau _{0}=0\), \(\delta _{0}=+\). Then we define \(\alpha _k=0\) if \(k\notin \{x_n, n\in L\} \) and if \(k=x_n\) for some \(n\in L\) we define \(\alpha _k\in {\mathbb {C}}\) in the following way. First we set

$$\begin{aligned} |\alpha _k|=\sqrt{-\frac{2}{\pi }\log \left( \sin \left( \frac{\theta _n}{2}\right) \right) }. \end{aligned}$$
(51)

Then we set \(Arg(\alpha _{x_0})=0\) and we define \(Arg(\alpha _k)\in [0,2\pi )\) to be determined by

$$\begin{aligned} \left\{ \begin{array}{c} \cos (\tau _n)=-\cos (\phi _{|\alpha _{x_n}|}-\phi _{|\alpha _{x_{n+1}}|}+\beta _n+Arg(\alpha _{x_n})-Arg(\alpha _{x_{n+1}})) ,\\ \delta _n=-sgn(\sin (\phi _{|\alpha _{x_n}|}-\phi _{|\alpha _{x_{n+1}}|}+\beta _n+Arg(\alpha _{x_n})-Arg(\alpha _{x_{n+1}}))), \end{array}\right. \end{aligned}$$
(52)

where \(\{\phi _{|\alpha _{x_n}|}\}\) are defined in Lemma 4.8 and depend only on \(|\alpha _{x_n}|\), and

$$\begin{aligned} \beta _n=(|\alpha _{x_n}|^2-|\alpha _{x_{n+1}}|^2)\log |x_n-x_{n+1}|. \end{aligned}$$
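As a quick sanity check of the modulus formula (51) (the phases in (52) require the constants \(\phi _{|\alpha |}\) of Lemma 4.8 and are not reproduced here), the following lines, which are ours and purely illustrative, compute \(|\alpha _k|\) from a few corner angles and invert back the relation \(\sin (\frac{\theta }{2})=e^{-\pi |\alpha |^2/2}\) that (51) encodes.

```python
import math

def alpha_modulus(theta):
    # Formula (51): |alpha| as a function of the curvature angle theta in (0, pi).
    return math.sqrt(-(2.0 / math.pi) * math.log(math.sin(theta / 2.0)))

for theta in [0.1, 0.5, 1.0, 2.0, 3.0]:
    a = alpha_modulus(theta)
    # Consistency check: (51) is the algebraic inverse of sin(theta/2) = exp(-pi*|alpha|^2/2).
    recovered = 2.0 * math.asin(math.exp(-math.pi * a * a / 2.0))
    print(f"theta={theta:4.2f}   |alpha|={a:6.3f}   recovered theta={recovered:6.3f}")
```

Note that \(|\alpha _k|\) blows up as the corner angle tends to zero and vanishes as the angle tends to \(\pi \).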

We consider the solution \(u(t,x)\) given by Theorem 1.4 for the sequence \(\sqrt{4\pi i} \alpha _k\) and \(\frac{1}{2}<\gamma <1\), which solves

$$\begin{aligned} \begin{array}{c} i\partial _t u +\Delta u+\frac{1}{2}\left( |u|^2-\frac{2M}{t}\right) u=0, \end{array} \end{aligned}$$
(53)

with \(M=\sum _{k\in {\mathbb {Z}}}|\alpha _k|^2\), and can be written as

$$\begin{aligned} u(t,x)=\sum _{k\in {\mathbb {Z}}}e^{-i|\alpha _k|^2\log \sqrt{t}}(\alpha _k+R_k(t))\frac{e^{i\frac{|x-k|^2}{4t}}}{\sqrt{t}}, \end{aligned}$$
(54)

with

$$\begin{aligned} \sup _{0<t<T}t^{-\gamma }\Vert \{R_k(t)\}\Vert _{l^{2,s}}+t\,\Vert \{\partial _t R_k(t)\}\Vert _{l^{2,s}}<C. \end{aligned}$$

4.3 Construction of a Solution of the Binormal Flow for \(t>0\)

Given an orthonormal basis \((v_1, v_2, v_3)\) of \({\mathbb {R}}^3\), a point \(P\in {\mathbb {R}}^3\) and a time \(t_0>0\), we construct a frame at all points \(x\in {\mathbb {R}}\) and times \(t>0\) by imposing (see Footnote 3) \((T,e_1,e_2)(t_0,0)=(v_1,v_2,v_3)\),

$$\begin{aligned} \left( \begin{array}{c} T\\ e_1\\ e_2 \end{array}\right) _t(t,x)= \left( \begin{array}{ccc} 0 &{} -\mathfrak {I}u_x &{} \mathfrak {R}u_x \\ \mathfrak {I}u_x &{} 0 &{} -\frac{|u|^2}{2}+\frac{M}{2t}\\ -\mathfrak {R}u_x &{} \frac{|u|^2}{2}-\frac{M}{ 2t} &{} 0 \end{array}\right) \left( \begin{array}{c} T\\ e_1\\ e_2 \end{array}\right) (t,x), \end{aligned}$$

and

$$\begin{aligned} \left( \begin{array}{c} T\\ e_1\\ e_2 \end{array}\right) _x(t,x)= \left( \begin{array}{ccc} 0 &{} \mathfrak {R}u &{} \mathfrak {I}u \\ -\mathfrak {R}u &{} 0 &{} 0 \\ - \mathfrak {I}u &{} 0 &{} 0 \end{array}\right) \left( \begin{array}{c} T\\ e_1\\ e_2 \end{array}\right) (t,x). \end{aligned}$$

We can already notice that \(T(t,x)\) satisfies the Schrödinger map:

$$\begin{aligned} T_t=T\wedge T_{xx}. \end{aligned}$$

We define now for all points \(x\in {\mathbb {R}}\) and times \(t>0\):

$$\begin{aligned} \chi (t,x)=P+\int _t^{t_0}(T\wedge T_{x})(\tau ,0)d\tau +\int _0^{x}T(t,s)ds. \end{aligned}$$

As \(T(t,x)\) satisfies the Schrödinger map we have \(T_t=(T\wedge T_x)_x\), so we can easily compute that \(\chi (t,x)\) satisfies the binormal flow:

$$\begin{aligned} \chi _t=T\wedge T_x=\chi _x\wedge \chi _{xx}. \end{aligned}$$

We note that there are two degrees of freedom in this construction: the choice of the orthonormal basis \((v_1, v_2, v_3)\) of \({\mathbb {R}}^3\) and the choice of the point \(P\in {\mathbb {R}}^3\). Changing these parameters amounts to rotating and translating, respectively, the solution \(\chi (t)\). The resulting evolution of curves is still a solution of the binormal flow, with the same laws of evolution in time and space for the frame.

We introduce now the complex valued normal vector

$$\begin{aligned} N(t,x)=e_1(t,x)+ie_2(t,x). \end{aligned}$$

With this vector we can write in a simpler way the laws of evolution in time and space for the frame, that will be useful in the rest of the proof:

$$\begin{aligned} T_x= & {} \mathfrak {R}u \,e_1+\mathfrak {I}u \,e_2=\mathfrak {R}({\overline{u}}\, N), \end{aligned}$$
(55)
$$\begin{aligned} N_x= & {} e_{1x}+ie_{2x}=-\mathfrak {R}u\, T-i\mathfrak {I}u \,T=-u\, T, \end{aligned}$$
(56)
$$\begin{aligned} T_t= & {} -\mathfrak {I}u_x\, e_1+\mathfrak {R}u_x\,e_2=\mathfrak {I}(\overline{u_x}\,N), \end{aligned}$$
(57)
$$\begin{aligned} N_t= & {} \mathfrak {I}u_x\,T+\left( -\frac{|u|^2}{2}+\frac{M}{2t}\right) e_2-i\mathfrak {R}u_x\,T+i\left( \frac{|u|^2}{2}-\frac{M}{2t}\right) e_1\nonumber \\= & {} -iu_x\, T+i\left( \frac{|u|^2}{2}-\frac{M}{2t}\right) N, \end{aligned}$$
(58)
$$\begin{aligned} \chi _t= & {} T\wedge T_x=T\wedge \mathfrak {R}({\overline{u}}\,N)=\mathfrak {I}({\overline{u}}\, N). \end{aligned}$$
(59)
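At a fixed time \(t>0\) the construction above can be reproduced numerically: one integrates the spatial system (55)–(56) in x and recovers the curve from \(\chi _x=T\). The sketch below is ours and only illustrative: the coefficients \(\alpha _k\) are made up, the remainders \(R_k\) are dropped from (54), the initial frame at \(x=0\) is an arbitrary orthonormal choice, and the time integral entering the definition of \(\chi \) is omitted (this only translates the curve).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Made-up coefficients alpha_k (finitely many) and a fixed positive time.
alphas = {-1: 0.3 + 0.1j, 0: 0.5 + 0.0j, 1: 0.2 - 0.2j}
t = 0.05

def u(x):
    # Truncation of (54) with the remainders R_k set to zero (illustrative only).
    return sum(np.exp(-1j * abs(a)**2 * np.log(np.sqrt(t))) * a
               * np.exp(1j * (x - k)**2 / (4 * t)) / np.sqrt(t)
               for k, a in alphas.items())

def rhs(x, y):
    # State: T (3), Re N (3), Im N (3), chi (3); equations (55), (56) and chi_x = T.
    T, N = y[:3], y[3:6] + 1j * y[6:9]
    ux = u(x)
    dT = np.real(np.conj(ux) * N)      # (55)
    dN = -ux * T                       # (56)
    return np.concatenate([dT, np.real(dN), np.imag(dN), T])

# Arbitrary orthonormal frame at x = 0, and chi set to 0 there.
y0 = np.array([1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0], dtype=float)
sol = solve_ivp(rhs, (0.0, 2.0), y0, rtol=1e-9, atol=1e-12, t_eval=np.linspace(0, 2, 201))

print("|T| at x=2 :", round(float(np.linalg.norm(sol.y[:3, -1])), 6))   # stays ~1
print("chi(t,2)   :", np.round(sol.y[9:, -1], 4))
```

The frame system preserves orthonormality, which the printed norm of T reflects up to the integration tolerance.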

In particular from (54) and (59) we have for \(0<t_1<t_2<1\):

$$\begin{aligned}&|\chi (t_2,x)-\chi (t_1,x)|=\left| \int _{t_1}^{t_2}\chi _t(t,x)dt\right| \le \int _{t_1}^{t_2}|u(t,x)|dt \\&\quad \le \int _{t_1}^{t_2}\sum _j|\alpha _j+R_j(t)|\frac{dt}{\sqrt{t}}\le C\sqrt{t_2}(\Vert \{\alpha _j\}\Vert _{l^{1}}+\Vert \{R_j\}\Vert _{L^\infty (0,1)l^{1}}). \end{aligned}$$

This implies the existence of a limit curve at \(t=0\) for all \(x\in {\mathbb {R}}\):

$$\begin{aligned} \exists \, \underset{t\rightarrow 0}{\lim }\,\chi (t,x)=:\chi (0,x). \end{aligned}$$

Moreover, \(\chi (0)\) is a continuous curve.

4.4 Existence of a Trace at \(t=0\) for the Tangent Vector

For further purposes we shall need to show that the tangent vector \(T(t,x)\) has a limit T(0, x) at \(t=0\), and moreover we shall need to get a convergence decay of self-similar type \(\frac{\sqrt{t}}{d(x,{\mathbb {Z}})}\) for x close to \({\mathbb {Z}}\). This is ensured by the following lemma.

Lemma 4.1

Let \(0<t_1<t_2<1\). If \(x\in {\mathbb {R}}\backslash \frac{1}{2}{\mathbb {Z}}\) then

$$\begin{aligned} |T(t_2,x)-T(t_1,x)|\le C(1+|x|)\sqrt{t_2}\left( \frac{1}{d(x,\frac{1}{2}{\mathbb {Z}})}+\frac{1}{d(x,{\mathbb {Z}})}\right) , \end{aligned}$$
(60)

while if \(x\in \frac{1}{2}{\mathbb {Z}}\) then

$$\begin{aligned} |T(t_2,x)-T(t_1,x)|\le C(1+|x|)\sqrt{t_2}. \end{aligned}$$
(61)

In particular for any \(x\in {\mathbb {R}}\) there exists a trace for the tangent vector at \(t=0\):

$$\begin{aligned} \exists \, \underset{t\rightarrow 0}{\lim }\,T(t,x)=:T(0,x). \end{aligned}$$
(62)

Proof

In view of (57) and (54) we have

$$\begin{aligned}&T(t_2,x)-T(t_1,x)\\&\quad =\int _{t_1}^{t_2} \mathfrak {I}(\overline{u_x}\,N(t,x)) \,dt \\&\quad =\mathfrak {I}\int _{t_1}^{t_2}\sum _je^{i|\alpha _j|^2\log \sqrt{t}}(\overline{\alpha _j+R_j(t)})\frac{e^{-i\frac{(x-j)^2}{4t}}}{\sqrt{t}}(-i)\frac{(x-j)}{2t}N(t,x)\,dt. \end{aligned}$$

In case \(j=x\) the integrand vanishes, so this term gives no contribution.

For \(j\ne x\) we perform an integration by parts that exploits the oscillatory phase to get integrability in time:

$$\begin{aligned}&T(t_2,x)-T(t_1,x)\\&\quad =\left[ \mathfrak {I}\sum _{j\ne x}e^{i|\alpha _j|^2\log \sqrt{t}}(\overline{\alpha _j+R_j(t)})\frac{e^{-i\frac{(x-j)^2}{4t}}}{\sqrt{t}}(-i)\frac{4t^2}{(x-j)^2}(-i)\frac{(x-j)}{2t}N(t,x)\right] _{t_1}^{t_2} \\&\qquad +\,2\mathfrak {I}\int _{t_1}^{t_2}\sum _{j\ne x}\frac{e^{-i\frac{(x-j)^2}{4t}}}{x-j}(\sqrt{t}\,e^{i|\alpha _j|^2\log \sqrt{t}}(\overline{\alpha _j+R_j(t)})N(t,x))_t\,dt\\&\quad =:I_0+I_1+I_2+I_3+I_4, \end{aligned}$$

where we have denoted by \(I_0\) the boundary term and by \(I_1,I_2,I_3,I_4\) the terms obtained after the differentiation in time of the quadruple product in the integral part. We consider first the boundary term

$$\begin{aligned} |I_0|\le C\sqrt{t_2}\sum _{j\ne x}|\alpha _j+R_j(t_2)|\frac{1}{|x-j|}+C\sqrt{t_1}\sum _{j\ne x}|\alpha _j+R_j(t_1)|\frac{1}{|x-j|}. \end{aligned}$$

If \(x\in {\mathbb {Z}}\) then we have

$$\begin{aligned} |I_0|\le C\sqrt{t_2}(\Vert \{\alpha _j\}\Vert _{l^{1}}+\Vert \{R_j\}\Vert _{L^\infty (0,t_2)l^{1}}), \end{aligned}$$

while if \(x\notin {\mathbb {Z}}\)

$$\begin{aligned} |I_0|\le C\frac{\sqrt{t_2}}{d(x,{\mathbb {Z}})}(\Vert \{\alpha _j\}\Vert _{l^{1}}+\Vert \{R_j\}\Vert _{L^\infty (0,t_2)l^{1}}). \end{aligned}$$

Therefore the contribution of \(I_0\) fits with the estimates in the statement of the Lemma. The terms \(I_1\) and \(I_2\) can be treated in the same way, as \(\int _{t_1}^{t_2}|(\sqrt{t}e^{-i|\alpha _k|^2\log \sqrt{t}})_t|\,dt\le C\sqrt{t_2}\). Also the term \(I_3\) can be treated similarly, as \(|\partial _t R_j(t)|\le \frac{C}{t}\) on (0, 1). We are left with the \(I_4\) term, which contains in view of (58) the expression \(N_t=-iu_x\, T+i\left( \frac{|u|^2}{2}-\frac{M}{2t}\right) N\):

$$\begin{aligned} I_4= & {} 2\mathfrak {I}\int _{t_1}^{t_2}\sum _{j\ne x}\frac{e^{-i\frac{(x-j)^2}{4t}}}{x-j}\sqrt{t}\,e^{i|\alpha _j|^2\log \sqrt{t}}(\overline{\alpha _j+R_j(t)}) \\&\times \left( -i\sum _ke^{-i|\alpha _k|^2\log \sqrt{t}}(\alpha _k+R_k(t))\,\frac{e^{i\frac{(x-k)^2}{4t}}}{\sqrt{t}}i\frac{(x-k)}{2t}T(t,x)\right. \\&\left. +i\left( \frac{\sum _{r,{\tilde{r}}}e^{-i(|\alpha _r|^2-|\alpha _{{\tilde{r}}}|^2)\log \sqrt{t}}(\alpha _r+R_r(t))(\overline{\alpha _{{\tilde{r}}}+R_{{\tilde{r}}}(t)})e^{i\frac{(x-r)^2-(x-{\tilde{r}})^2}{4t}}}{2 t} \right. \right. \\&\left. \left. -\frac{M}{2t}\right) N(t,x)\right) \,dt. \end{aligned}$$

We notice that the second term can be upper-bounded as \(I_0\). We are thus left with the first term:

$$\begin{aligned} I_{4,1}= & {} \mathfrak {I}\int _{t_1}^{t_2}\sum _{j,k\ne x}e^{-i\frac{(j-k)(j+k-2x)}{4t}}\frac{x-k}{x-j}e^{i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}(\overline{\alpha _j+R_j(t)})(\alpha _k \\&\quad +R_k(t))T(t,x)\frac{dt}{t}, \end{aligned}$$

for which we still have an obstruction for the integrability in time. The terms in the sum for which \(j=k\) have null contribution as they are real numbers. Also, in case \(2x\in {\mathbb {Z}}\), the terms in the sum for which \(j+k-2x=0\) give

$$\begin{aligned}&-\mathfrak {I}\int _{t_1}^{t_2}\sum _{k\ne x}e^{i(|\alpha _{-k+2x}|^2-|\alpha _k|^2)\log \sqrt{t}}(\overline{\alpha _{-k+2x}+R_{-k+2x}(t)})(\alpha _k+R_k(t))T(t,x)\frac{dt}{t} \\&\quad =-\mathfrak {I}\int _{t_1}^{t_2}\sum _{j\ne x}e^{i(|\alpha _j|^2-|\alpha _{-j+2x}|^2)\log \sqrt{t}}(\overline{\alpha _{j}+R_{j}(t)})(\alpha _{-j+2x}+R_{-j+2x}(t))T(t,x)\frac{dt}{t} \\&\quad =\mathfrak {I}\int _{t_1}^{t_2}\sum _{j\ne x}e^{i(-|\alpha _j|^2+|\alpha _{-j+2x}|^2)\log \sqrt{t}}(\alpha _{j}+R_{j}(t))(\overline{\alpha _{-j+2x}+R_{-j+2x}(t)})T(t,x)\frac{dt}{t} \\&\quad =\mathfrak {I}\int _{t_1}^{t_2}\sum _{k\ne x}e^{i(|\alpha _{-k+2x}|^2-|\alpha _k|^2)\log \sqrt{t}}(\overline{\alpha _{-k+2x}+R_{-k+2x}(t)})(\alpha _k+R_k(t))T(t,x)\frac{dt}{t}, \end{aligned}$$

so their contribution is null.

Therefore we have, for all \(x\in {\mathbb {R}}\),

$$\begin{aligned} I_{4,1}= & {} \mathfrak {I}\int _{t_1}^{t_2}\sum _{j,k\ne x;\,j\ne k;\,j+k\ne 2x}e^{-i\frac{(j-k)(j+k-2x)}{4t}}\frac{x-k}{x-j} \\&\times \, e^{i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}(\overline{\alpha _j+R_j(t)})(\alpha _k+R_k(t))T(t,x)\frac{dt}{t}. \end{aligned}$$

We perform an integration by parts:

$$\begin{aligned} I_{4,1}= & {} \mathfrak {I}\left[ \sum _{j,k\ne x;\,j\ne k;\,j+k\ne 2x}e^{-i\frac{(j-k)(j+k-2x)}{4t}}\frac{(-i)4t^2}{(j-k)(j+k-2x)}\frac{x-k}{x-j}\right. \\&\left. \times \, e^{i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}(\overline{\alpha _j+R_j(t)})(\alpha _k+R_k(t))\frac{T(t,x)}{t}\right] _{t_1}^{t_2} \\&+\,4\mathfrak {I}\int _{t_1}^{t_2}\sum _{j,k\ne x;\,j\ne k;\,j+k\ne 2x}e^{-i\frac{(j-k)(j+k-2x)}{4t}}\frac{i}{(j-k)(j+k-2x)}\frac{x-k}{x-j} \\&\times \, (te^{i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}(\overline{\alpha _j+R_j(t)})(\alpha _k+R_k(t))T(t,x))_t\,dt \\=: & {} I_{4,1}^0+I_{4,1}^1+I_{4,1}^2+I_{4,1}^3+I_{4,1}^4+I_{4,1}^5, \end{aligned}$$

where \(I_{4,1}^0\) stands for the boundary term and \(I_{4,1}^1,I_{4,1}^2,I_{4,1}^3,I_{4,1}^4\) and \(I_{4,1}^5\) are the terms after differentiating in time the quintuple product in the integral. For the boundary term we have

$$\begin{aligned} |I_{4,1}^0|\le & {} 4t_2\sum _{j,k\ne x;\,j\ne k;\,j+k\ne 2x}\frac{|x-k|}{|j-k||j+k-2x||x-j|}|\alpha _j+R_j(t_2)||\alpha _k+R_k(t_2)| \\&+\,4t_1\sum _{j,k\ne x;\,j\ne k;\,j+k\ne 2x}\frac{|x-k|}{|j-k||j+k-2x||x-j|}|\alpha _j+R_j(t_1)||\alpha _k+R_k(t_1)|. \end{aligned}$$

As for \(j\ne k\)

$$\begin{aligned}&\frac{|x-k|}{|j-k||j+k-2x||x-j|}\nonumber \\&\quad \le \frac{|x-j|+|j+k-2x|}{|j-k||j+k-2x||x-j|}\le \frac{1}{|j+k-2x|}+\frac{1}{|x-j|}, \end{aligned}$$
(63)

we have for \(x\in \frac{1}{2}{\mathbb {Z}}\)

$$\begin{aligned} |I_{4,1}^0|\le Ct_2(\Vert \{\alpha _j\}\Vert _{l^{1}}+\Vert \{R_j\}\Vert _{L^\infty (0,t_2)l^{1}})^2. \end{aligned}$$

while for \(x\notin \frac{1}{2}{\mathbb {Z}}\) we obtain

$$\begin{aligned} |I_{4,1}^0|\le Ct_2\left( \frac{1}{d(x,\frac{1}{2}{\mathbb {Z}})}+\frac{1}{d(x,{\mathbb {Z}})}\right) (\Vert \{\alpha _j\}\Vert _{l^{1}}+\Vert \{R_j\}\Vert _{L^\infty (0,t_2)l^{1}})^2. \end{aligned}$$

The terms \(I_{4,1}^1,I_{4,1}^2,I_{4,1}^3\) and \(I_{4,1}^4\) can be upper-bounded as \(I_{4,1}^0\), by using moreover for \(I_{4,1}^3\) and \(I_{4,1}^4\) the bound \(|\partial _t R_j(t)|\le \frac{C}{t}\) on (0, 1). The last term \(I_{4,1}^5\) involves, in view of (57),

$$\begin{aligned}&T_t(t,x)=\mathfrak {I}(\overline{u_x}\,N)(t,x)\nonumber \\&\quad =\mathfrak {I}\sum _re^{i|\alpha _r|^2\log \sqrt{t}}(\overline{\alpha _r+R_r(t)})\frac{e^{-i\frac{(x-r)^2}{4t}}}{\sqrt{t}}(-i)\frac{(x-r)}{2t}N(t,x) \end{aligned}$$

so

$$\begin{aligned} I_{4,1}^5= & {} -\frac{1}{2}\mathfrak {I}\int _{t_1}^{t_2}\sum _{j,k\ne x;\,j\ne k;\,j+k\ne 2x}e^{-i\frac{(j-k)(j+k-2x)}{4t}}\frac{i}{(j-k)(j+k-2x)}\frac{x-k}{x-j} \\&\times e^{i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}(\overline{\alpha _j+R_j(t)})(\alpha _k \\&+R_k(t))\mathfrak {R}\sum _re^{i|\alpha _r|^2\log \sqrt{t}}(\overline{\alpha _r+R_r(t)})e^{-i\frac{(x-r)^2}{4t}}(x-r)N(t,x)\,\frac{dt}{\sqrt{t}}, \end{aligned}$$

and in particular

$$\begin{aligned} |I_{4,1}^5|\le & {} C\int _{t_1}^{t_2}\sum _{j,k\ne x;\,j\ne k;\,j+k\ne 2x}\frac{|x-k|}{|j-k||j+k-2x||x-j|} \\&\times |\alpha _j+R_j(t)||\alpha _k+R_k(t)|\sum _r|\alpha _r+R_r(t)||x-r|\,\frac{dt}{\sqrt{t}}. \end{aligned}$$

We can write

$$\begin{aligned} \sum _r|\alpha _r+R_r(t)||x-r|\le C(1+|x|)(\Vert \{\alpha _j\}\Vert _{l^{2,\frac{3}{2}^+}}+\Vert \{R_j\}\Vert _{L^\infty (0,t_2)l^{2,\frac{3}{2}^+}}), \end{aligned}$$

so by using (63) we get for \(x\in \frac{1}{2}{\mathbb {Z}}\):

$$\begin{aligned}&|I_{4,1}^5|\le C(1+|x|)\sqrt{t_2}(\Vert \{\alpha _j\}\Vert _{l^{1}}\\&\quad +\Vert \{R_j\}\Vert _{L^\infty (0,t_2)l^{1}})^2(\Vert \{\alpha _j\}\Vert _{l^{2,\frac{3}{2}^+}}+\Vert \{R_j\}\Vert _{L^\infty (0,t_2)l^{2,\frac{3}{2}^+}}), \end{aligned}$$

while for \(x\notin \frac{1}{2}{\mathbb {Z}}\) we obtain:

$$\begin{aligned}&|I_{4,1}^5|\le C\sqrt{t_2}(1+|x|)\left( \frac{1}{d(x,\frac{1}{2}{\mathbb {Z}})}+\frac{1}{d(x,{\mathbb {Z}})}\right) \\&\quad \times (\Vert \{\alpha _j\}\Vert _{l^{1}}+\Vert \{R_j\}\Vert _{L^\infty (0,t_2)l^{1}})^2(\Vert \{\alpha _j\}\Vert _{l^{2,\frac{3}{2}^+}}+\Vert \{R_j\}\Vert _{L^\infty (0,t_2)l^{2,\frac{3}{2}^+}}). \end{aligned}$$

Therefore the proof of the Lemma is completed. \(\square \)

4.5 Segments of the Limit Curve at \(t=0\)

Lemma 4.2

Let \(n\in {\mathbb {Z}}\) and \(x_1,x_2\in (n,n+1)\). Then

$$\begin{aligned} T(0,x_1)=T(0,x_2). \end{aligned}$$

In particular, we recover that \(\chi (0)\) is a polygonal line, which may have corners only at integer locations.

Proof

From Lemma 4.1 we have

$$\begin{aligned} T(0,x_1)-T(0,x_2)=\underset{t\rightarrow 0}{\lim } \,(T(t,x_1)-T(t,x_2)). \end{aligned}$$
(64)

In view of (55) we compute

$$\begin{aligned}&T(t,x_1)-T(t,x_2)=\int _{x_1}^{x_2}\mathfrak {R}({\overline{u}}N(t,x))\,dx \\&\quad =\mathfrak {R}\int _{x_1}^{x_2}\sum _je^{i|\alpha _j|^2\log \sqrt{t}}(\overline{\alpha _j+R_j(t)})\frac{e^{-i\frac{(x-j)^2}{4t}}}{\sqrt{t}}\,N(t,x)\,dx. \end{aligned}$$

In this case the integral is well defined, but we need decay in time. For this purpose we perform an integration by parts, that is allowed on \((x_1,x_2)\subset (n,n+1)\):

$$\begin{aligned}&T(t,x_1)-T(t,x_2)=\left[ \mathfrak {R}\sum _{j}e^{i|\alpha _j|^2\log \sqrt{t}}(\overline{\alpha _j+R_j(t)})\frac{e^{-i\frac{(x-j)^2}{4t}}}{\sqrt{t}}i\frac{2t}{x-j}\,N(t,x)\right] _{x_1}^{x_2} \\&\qquad +2\sqrt{t}\mathfrak {I}\int _{x_1}^{x_2}\sum _{j}e^{i|\alpha _j|^2\log \sqrt{t}}(\overline{\alpha _j+R_j(t)})e^{-i\frac{(x-j)^2}{4t}}\left( \frac{1}{x-j}\,N(t,x)\right) _x\,dx \\&\quad =O(\sqrt{t})+2\sqrt{t}\mathfrak {I}\int _{x_1}^{x_2}\sum _{j}e^{i|\alpha _j|^2\log \sqrt{t}}(\overline{\alpha _j+R_j(t)})e^{-i\frac{(x-j)^2}{4t}}\frac{1}{x-j}\,N_x(t,x)\,dx. \end{aligned}$$

As by (56) we have \(N_x=-uT\),

$$\begin{aligned}&T(t,x_1)-T(t,x_2)=O(\sqrt{t}) \\&\quad -2\mathfrak {I}\sum _{j,k}e^{i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}(\overline{\alpha _j+R_j(t)})(\alpha _k+R_k(t))\\&\quad \int _{x_1}^{x_2}\frac{e^{-i\frac{(x-j)^2-(x-k)^2}{4t}}}{x-j}T(t,x)\,dx. \end{aligned}$$

The summation holds only for \(j\ne k\), as for \(j=k\) the contribution is null. Moreover, from (11) we have \(\Vert \{R_j(t)\}\Vert _{l^1}=O(t^\gamma )\), \(\gamma >1/2\), so

$$\begin{aligned}&T(t,x_1)-T(t,x_2)\\&\quad =O(\sqrt{t})-2\mathfrak {I}\sum _{j\ne k}e^{i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}\overline{\alpha _j}\alpha _ke^{i\frac{j^2-k^2}{4t}}\int _{x_1}^{x_2}\frac{e^{i\frac{(j-k)x}{2t}}}{x-j}T(t,x)\,dx. \end{aligned}$$

To get decay in time we need to perform again an integration by parts:

$$\begin{aligned}&T(t,x_1)-T(t,x_2)=O(\sqrt{t})\\&\qquad -\left[ 2\mathfrak {I}\sum _{j\ne k}e^{i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}\overline{\alpha _j}\alpha _ke^{i\frac{j^2-k^2}{4t}}\frac{e^{i\frac{(j-k)x}{2t}}}{x-j}\frac{2t}{i(j-k)}T(t,x)\right] _{x_1}^{x_2} \\&\qquad +\,4t\,\mathfrak {R}\sum _{j\ne k}e^{i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}\overline{\alpha _j}\alpha _k\frac{e^{i\frac{j^2-k^2}{4t}}}{j-k}\int _{x_1}^{x_2}e^{i\frac{(j-k)x}{2t}}\left( \frac{1}{x-j}T(t,x)\right) _x\,dx \\&\quad =O(\sqrt{t})+4t\,\mathfrak {R}\sum _{j\ne k}e^{i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}\overline{\alpha _j}\alpha _k\frac{e^{i\frac{j^2-k^2}{4t}}}{j{-}k}\int _{x_1}^{x_2}e^{i\frac{(j{-}k)x}{2t}}\frac{1}{x-j}T_x(t,x)\,dx. \end{aligned}$$

From (55) we have \(T_x=\mathfrak {R}({\overline{u}}\,N)\) so finally

$$\begin{aligned}&T(t,x_1)-T(t,x_2)=O(\sqrt{t})+4t\,\mathfrak {R}\sum _{j\ne k}e^{i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}\overline{\alpha _j}\alpha _k\frac{e^{i\frac{j^2-k^2}{4t}}}{j-k} \\&\quad \times \int _{x_1}^{x_2}\frac{e^{i\frac{(j-k)x}{2t}}}{x-j}\mathfrak {R}\left( \sum _re^{i|\alpha _r|^2\log \sqrt{t}}(\overline{\alpha _r+R_r(t)})\frac{e^{-i\frac{(x-r)^2}{4t}}}{\sqrt{ t}}\,N(t,x)\right) \,dx=O(\sqrt{t}). \end{aligned}$$

Therefore in view of (64) we have indeed

$$\begin{aligned} T(0,x_1)-T(0,x_2)=0. \end{aligned}$$

\(\square \)

4.6 Recovering Self-Similar Structures Through Self-Similar Paths

In this subsection we shall use the results in [25] that characterize all the selfsimilar solutions of BF and give their corresponding asymptotics (see Theorem 1 in [25]).

Let us denote by \(A^\pm _{|\alpha _k|}\in {\mathbb {S}}^2\) the directions of the corner generated at time \(t=0\) by the canonical self-similar solution \(\chi _{|\alpha _k|}(t,x)\) of the binormal flow of curvature \(\frac{|\alpha _k|}{\sqrt{t}}\):

$$\begin{aligned} A^\pm _{|\alpha _k|}:=\partial _x\chi _{|\alpha _k|}(0,0^\pm ). \end{aligned}$$

We recall also that the frame of the profile (i.e. \(\chi _{|\alpha _k|}(1)\)) satisfies the system

$$\begin{aligned} \left\{ \begin{array}{l} \partial _xT_{|\alpha _k|}(1,x)=\mathfrak {R}(|\alpha _k|e^{-i\frac{x^2}{4}}N_{|\alpha _k|}(1,x)),\\ \partial _xN_{|\alpha _k|}(1,x)=-|\alpha _k|e^{i\frac{x^2}{4}}T_{|\alpha _k|}(1,x), \end{array}\right. \end{aligned}$$
(65)

and that for \(x\rightarrow \pm \infty \) there exist \(B^\pm _{|\alpha _k|}\perp A^\pm _{|\alpha _k|}\), with \(\mathfrak {R}(B^\pm _{|\alpha _k|}),\mathfrak {I}(B^\pm _{|\alpha _k|})\in {\mathbb {S}}^2\), such that

$$\begin{aligned} T_{|\alpha _k|}\left( 1,x\right) =A^\pm _{|\alpha _k|}+{\mathcal {O}}\left( \frac{1}{x}\right) ,\quad e^{i|\alpha _k|^2\log |x|}N_{|\alpha _k|}\left( 1,x\right) \nonumber \\ =B^\pm _{|\alpha _k|}+{\mathcal {O}}\left( \frac{1}{|x|}\right) . \end{aligned}$$
(66)
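The asymptotics (66) are easy to observe numerically by integrating the profile system (65) over a large interval in x. The sketch below is ours: the value of \(|\alpha _k|\) is arbitrary, and the orthonormal frame chosen at \(x=0\) is an assumption of the sketch (any other admissible choice yields a rotated copy of the profile, hence rotated limits \(A^\pm \), \(B^\pm \)).

```python
import numpy as np
from scipy.integrate import solve_ivp

a = 0.5   # illustrative value of |alpha_k|

def rhs(x, y):
    # y packs T (3 reals) and N = N_re + i N_im (6 reals); this is system (65).
    T, N = y[:3], y[3:6] + 1j * y[6:]
    dT = np.real(a * np.exp(-1j * x**2 / 4) * N)
    dN = -a * np.exp(1j * x**2 / 4) * T
    return np.concatenate([dT, np.real(dN), np.imag(dN)])

# Assumed orthonormal frame of the profile at x = 0 (our normalization choice).
y0 = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0])

L = 40.0
for sign in (+1, -1):
    sol = solve_ivp(rhs, (0.0, sign * L), y0, rtol=1e-9, atol=1e-12,
                    t_eval=[sign * 0.8 * L, sign * L])
    T_far, T_farther = sol.y[:3, 0], sol.y[:3, 1]
    print(f"A^{'+' if sign > 0 else '-'} ~", np.round(T_farther, 4),
          "   drift of T between 0.8L and L:", round(float(np.linalg.norm(T_farther - T_far)), 4))
```

The small drift of T between the two sample points illustrates the \({\mathcal {O}}(1/|x|)\) convergence of the tangent vector to its limits \(A^\pm \).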

Lemma 4.3

Let \(t_n\) be a sequence of positive times converging to zero. Up to a subsequence, there exists for all \(x\in {\mathbb {R}}\) a limit

$$\begin{aligned} (T_*(x),N_*(x))=\underset{n\rightarrow \infty }{\lim }\,(T(t_n,k+x\sqrt{t_n}),e^{i|\alpha _k|^2\log \sqrt{t_n}}N(t_n,k+x\sqrt{t_n})), \end{aligned}$$

and there exists a unique rotation \(\Theta _k\) such that

$$\begin{aligned} \left\{ \begin{array}{l} T_*(x)=\Theta _k(T_{|\alpha _k|}(x)),\\ \mathfrak {R}(e^{i Arg\alpha _k}N_*(x))=\Theta _k(\mathfrak {R}(N_{|\alpha _k|}(x))),\\ \mathfrak {I}(e^{i Arg\alpha _k}N_*(x))=\Theta _k(\mathfrak {I}(N_{|\alpha _k|}(x))). \end{array}\right. \end{aligned}$$
(67)

Moreover, for \(x\rightarrow \pm \infty \)

$$\begin{aligned}&T_*\left( x\right) =\Theta _k\left( A^\pm _{|\alpha _k|}\right) +{\mathcal {O}}\left( \frac{1}{|x|}\right) ,\nonumber \\&\quad e^{i|\alpha _k|^2\log |x|}e^{i Arg\left( \alpha _k\right) }N_*\left( x\right) =\Theta _k\left( B^\pm _{|\alpha _k|}\right) +{\mathcal {O}}\left( \frac{1}{|x|}\right) . \end{aligned}$$
(68)

Proof

Let \(t_n\) be a sequence of positive times converging to zero. We introduce for \(x\in {\mathbb {R}}\) the functions

$$\begin{aligned} (T_n(x),N_n(x))=(T(t_n,k+x\sqrt{t_n}),e^{i|\alpha _k|^2\log \sqrt{t_n}}N(t_n,k+x\sqrt{t_n})). \end{aligned}$$

This sequence is uniformly bounded. In view of (55) and (56) we have

$$\begin{aligned} T_n'(x)= & {} \sqrt{t_n}\mathfrak {R}({\overline{u}}N)(t_n,k+x\sqrt{t_n}) \\= & {} \mathfrak {R}\left( \sum _je^{i|\alpha _j|^2\log \sqrt{t_n}}(\overline{\alpha _j+R_j(t_n)})e^{-i\frac{(k+x\sqrt{t_n}-j)^2}{4t_n}}\, N(t_n,k+x\sqrt{t_n})\right) , \end{aligned}$$

and

$$\begin{aligned} N_n'(x)= & {} -e^{i|\alpha _k|^2 \log \sqrt{t_n}}\sqrt{t_n}(uT)(t_n,k+x\sqrt{t_n}) \\= & {} -\sum _je^{i(|\alpha _k|^2-|\alpha _j|^2)\log \sqrt{t_n}}(\alpha _j+R_j(t_n))e^{i\frac{(k+x\sqrt{t_n}-j)^2}{4t_n}}\, T(t_n,k+x\sqrt{t_n}). \end{aligned}$$

Therefore the sequence \((T_n'(x),N_n'(x))\) is also uniformly bounded. These two facts give via Arzela–Ascoli’s theorem the existence of a limit in n (of a subsequence, that we denote again \((T_n(x),N_n(x))\)):

$$\begin{aligned} \exists \, \underset{n\rightarrow \infty }{\lim }\,(T_n(x),N_n(x))=:(T_*(x),N_*(x)). \end{aligned}$$

Moreover, as \(\Vert \{R_j(t_n)\}\Vert _{l^1}=o(1)\) as \(n\rightarrow \infty \), we can write

$$\begin{aligned} T_n'(x)= & {} \mathfrak {R}\left( \sum _je^{i|\alpha _j|^2\log \sqrt{t_n}}\overline{\alpha _j}e^{-i\frac{(k+x\sqrt{t_n}-j)^2}{4t_n}}\, N(t_n,k+x\sqrt{t_n})\right) +o(1)N_n(x) \\= & {} \mathfrak {R}(\overline{\alpha _k}e^{-i\frac{x^2}{4}}\, N_n(x))+\mathfrak {R}(f_n(x)N_n(x))+o(1)N_n(x), \end{aligned}$$

where

$$\begin{aligned} f_n(x)=\sum _{j\ne k}e^{i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t_n}}\overline{\alpha _j}e^{-i\frac{x^2}{4}+ix\frac{j-k}{2\sqrt{t_n}}-i\frac{(k-j)^2}{4t_n}}. \end{aligned}$$

For a test function \(\psi \in {\mathcal {C}}^\infty _c({\mathbb {R}})\) we have, by integrating by parts, avoiding if necessary a region of size o(1) around \(x=0\),

$$\begin{aligned}&\langle f_n(x),\psi (x)\rangle =\int \sum _{j\ne k}e^{i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t_n}}\overline{\alpha _j}e^{-i\frac{x^2}{4}+ix\frac{j-k}{2\sqrt{t_n}}-i\frac{(k-j)^2}{4t_n}}\psi (x)\,dx \\&\quad =-\int \sum _{j\ne k}e^{i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t_n}}\overline{\alpha _j}2\sqrt{t_n}\frac{e^{ix\frac{j-k}{2\sqrt{t_n}}-i\frac{(k-j)^2}{4t_n}}}{i(j-k)}\\&\quad (e^{-i\frac{x^2}{4}}\psi (x))_x\,dx=C(\psi )o(1). \end{aligned}$$

Similarly we obtain

$$\begin{aligned} N_n'(x)=-\alpha _ke^{i\frac{x^2}{4}}\, T_n(x)+g_n(x)T_n(x)+o(1)T_n(x), \end{aligned}$$

with \(g_n=o(1)\) in the weak sense. Therefore \((T_*(x),e^{i Arg(\alpha _k)}N_*(x))\) satisfies system (65) in the weak sense. As the coefficients involved in the ODE are analytic we conclude that \((T_*(x),e^{i Arg(\alpha _k)}N_*(x))\) satisfies system (65) in the strong sense, as \((T_{|\alpha _k|}(x),N_{|\alpha _k|}(x))\) does. Therefore there exists a unique rotation \(\Theta _k\) such that (67) holds. We obtain (68) as a consequence of (66). \(\square \)

4.7 Recovering the Curvature Angles of the Initial Data

Lemma 4.4

Let \(k\in {\mathbb {Z}}\). Then, with the notations of the previous subsection,

$$\begin{aligned} T(0,k^\pm )=\Theta _k(A^\pm _{|\alpha _k|}). \end{aligned}$$

In particular, in view of (3) and (51) we recover that \(\chi (0)\) is a polygonal line with corners at the same locations as \(\chi _0\), and of same angles.Footnote 4

Proof

Let \(\epsilon >0\). In view of (68) we first choose \(x>0\) large enough such that

$$\begin{aligned} |T_*(x)-\Theta _k(A^+_{|\alpha _k|})|\le \frac{\epsilon }{3}, \end{aligned}$$

and that \(\frac{C(1+|k|)}{x}\le \frac{\epsilon }{3}\), where C is the coefficient in (60). Then we choose n large enough such that \(|x\sqrt{t_n}|<\frac{1}{2}\) and that \(|T(t_n,k+x\sqrt{t_n})-T_*(x)|\le \frac{\epsilon }{3}\). The last fact is possible in view of Lemma 4.3. Finally, we have, in view of Lemma 4.2 and (60):

$$\begin{aligned}&|T(0,k^+)-\Theta _k(A^+_{|\alpha _k|})|=|T(0,k+x\sqrt{t_n})-\Theta _k(A^+_{|\alpha _k|})| \\&\quad \le |T(0,k+x\sqrt{t_n})-T(t_n,k+x\sqrt{t_n})|+|T(t_n,k+x\sqrt{t_n})-T_*(x)|\\&\qquad +|T_*(x)-\Theta _k(A^+_{|\alpha _k|})|\le \epsilon , \end{aligned}$$

so

$$\begin{aligned} T(0,k^+)=\Theta _k(A^+_{|\alpha _k|}). \end{aligned}$$

Similarly we show by taking \(x<0\) that

$$\begin{aligned} T(0,k^-)=\Theta _k(A^-_{|\alpha _k|}). \end{aligned}$$

\(\square \)

The lemma ensures that \(\chi (0)\) has corners at the same locations as \(\chi _0\), and with the same angles. To recover \(\chi _0\) up to a rotation and a translation we also need to recover the torsion properties of \(\chi _0\).

4.8 Trace and Properties of Modulated Normal Vectors

In order to recover the torsion angles we shall need to get information about \(N(t,x)\) as t goes to zero. For \(x\notin {\mathbb {Z}}\) we introduce the modulated normal vector

$$\begin{aligned} {\tilde{N}}(t,x)=e^{i\Phi (t,x)}N(t,x), \end{aligned}$$
(69)

where

$$\begin{aligned} \Phi (t,x)=\sum _j|\alpha _j|^2\log \frac{|x-j|}{\sqrt{t}}. \end{aligned}$$
(70)

We start with a lemma ensuring the existence of a limit for \({\tilde{N}}(t,x)\) at \(t=0\), with a convergence decay of self-similar type \(\frac{\sqrt{t}}{d(x,{\mathbb {Z}})}\) for x close to \({\mathbb {Z}}\).

Lemma 4.5

Let \(0<t_1<t_2<1\). For \(x\notin \frac{1}{2}{\mathbb {Z}}\) we have

$$\begin{aligned} |{\tilde{N}}(t_2,x)-{\tilde{N}}(t_1,x)|\le C(1+|x|)\sqrt{t_2}\left( \frac{1}{d(x,\frac{1}{2}{\mathbb {Z}})}+\frac{1}{d(x,{\mathbb {Z}})}\right) , \end{aligned}$$
(71)

while if \(x\in \frac{1}{2}{\mathbb {Z}}\backslash {\mathbb {Z}}\) then

$$\begin{aligned} |{\tilde{N}}(t_2,x)-{\tilde{N}}(t_1,x)|\le C(1+|x|)\sqrt{t_2}. \end{aligned}$$
(72)

In particular for any \(x\notin {\mathbb {Z}}\) there exists a trace for the modulated normal vector at \(t=0\):

$$\begin{aligned} \exists \, \underset{t\rightarrow 0}{\lim }\,{\tilde{N}}(t,x)=:{\tilde{N}}(0,x). \end{aligned}$$

Moreover for any \(x\in {\mathbb {Z}}\) there exists a trace

$$\begin{aligned} \exists \, \underset{t\rightarrow 0}{\lim }\,e^{i\sum _{j\ne x}|\alpha _j|^2\log \frac{|x-j|}{\sqrt{t}}}N(t,x), \end{aligned}$$
(73)

with a rate of convergence upper-bounded by \(C(1+|x|)\sqrt{t}\).

Proof

In view of (58) and (54) we have

$$\begin{aligned}&{\tilde{N}}(t_2,x)-{\tilde{N}}(t_1,x)=\int _{t_1}^{t_2}\left( -iu_x\, T+\,i\left( \frac{|u|^2}{2}-\frac{M}{2t}\right) N+i\Phi _t N\right) e^{i\Phi }dt \\&\quad =\int _{t_1}^{t_2}\left( -i\sum _je^{-i|\alpha _j|^2\log \sqrt{t}}(\alpha _j+R_j(t))\frac{e^{i\frac{(x-j)^2}{4t}}}{\sqrt{t}}i\frac{(x-j)}{2t}T(t,x)\right. \\&\qquad +i\sum _{j\ne k}e^{-i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}(\alpha _j\\&\qquad \left. +R_j(t))(\overline{\alpha _k+R_k(t)})\frac{e^{i\frac{(x-j)^2-(x-k)^2}{4t}}}{2t}N(t,x)+i\Phi _t N(t,x)\,\right) e^{i\Phi }dt. \end{aligned}$$

We can integrate by parts in the first term to get

$$\begin{aligned}&{\tilde{N}}(t_2,x)-{\tilde{N}}(t_1,x)\\&\quad = \left[ \sum _{j\ne x}e^{-i|\alpha _j|^2\log \sqrt{t}}(\alpha _j+R_j(t))\frac{e^{i\frac{(x-j)^2}{4t}}}{\sqrt{t}}\left( -\frac{4t^2}{i(x-j)^2}\right) \frac{(x-j)}{2t}T(t,x)\right] _{t_1}^{t_2} \\&\qquad -2i\int _{t_1}^{t_2}\sum _{j\ne x} \frac{e^{i\frac{(x-j)^2}{4t}}}{x-j} (\sqrt{t}e^{-i|\alpha _j|^2\log \sqrt{t}}(\alpha _j+R_j(t))T(t,x)e^{i\Phi })_tdt \\&\qquad +i\int _{t_1}^{t_2}\left( \sum _{j\ne x,k}e^{-i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}(\alpha _j+R_j(t))(\overline{\alpha _k+R_k(t)})\frac{e^{i\frac{(x-j)^2-(x-k)^2}{4t}}}{2t}N(t,x) \right. \\&\qquad \left. +\Phi _t N(t,x)\right) e^{i\Phi }dt. \end{aligned}$$

Having in mind the expression (57) for \(T_t\) we obtain

$$\begin{aligned}&{\tilde{N}}\left( t_2,x\right) -{\tilde{N}}\left( t_1,x\right) =O\left( \frac{\sqrt{t_2}}{d\left( x,{\mathbb {Z}}\right) }\right) \\&\quad -2i\int _{t_1}^{t_2}\sum _{j\ne x} \frac{e^{i\frac{\left( x-j\right) ^2}{4t}}}{x-j} \sqrt{t}e^{-i|\alpha _j|^2\log \sqrt{t}}\left( \alpha _j+R_j\left( t\right) \right) \\&\quad \times \mathfrak {I}\left( \sum _ke^{i|\alpha _k|^2\log \sqrt{t}}\left( \overline{\alpha _k+R_k\left( t\right) }\right) \frac{e^{-i\frac{\left( x-k\right) ^2}{4t}}}{\sqrt{t}}\left( -i\frac{x-k}{2t}\right) N\left( t,x\right) e^{i\Phi }dt \right. \\&\quad +i\int _{t_1}^{t_2}\left( \sum _{j\ne x,k}e^{-i\left( |\alpha _j|^2-|\alpha _k|^2\right) \log \sqrt{t}}\left( \alpha _j \right. \right. \\&\quad \left. \left. +R_j\left( t\right) \right) \left( \overline{\alpha _k+R_k\left( t\right) }\right) \frac{e^{i\frac{\left( x-j\right) ^2-\left( x-k\right) ^2}{4t}}}{2t}N\left( t,x\right) +\Phi _t N\left( t,x\right) \right) e^{i\Phi }dt. \end{aligned}$$

The integrands carry a factor \(\frac{1}{t}\). By writing \(\mathfrak {I}(-iz)=-\frac{z+{{\overline{z}}}}{2}\) in the first integral, we obtain terms with \(e^{i\frac{(x-j)^2-(x-k)^2}{4t}}\) or \(e^{i\frac{(x-j)^2+(x-k)^2}{4t}}\), both oscillating, except for the first one in case \(j=k\) or \(2x=j+k\). For \(x\notin \frac{1}{2}{\mathbb {Z}}\) we perform integrations by parts in all terms, except in case \(j=k\) for the first integral; these integrations by parts allow for a gain of \(t^2\), up to, at worst, terms involving \(N_t\) that are of size \(\frac{1}{t\sqrt{t}}\):

$$\begin{aligned}&{\tilde{N}}\left( t_2,x\right) -{\tilde{N}}\left( t_1,x\right) =O\left( \left( 1+|x|\right) \sqrt{t_2}\left( \frac{1}{d\left( x,\frac{1}{2}{\mathbb {Z}}\right) }+\frac{1}{d\left( x,{\mathbb {Z}}\right) }\right) \right) \\&\quad +i\int _{t_1}^{t_2}\sum _j \frac{|\alpha _j+R_j\left( t\right) |^2}{2t}N e^{i\Phi }+\Phi _t N\left( t,x\right) e^{i\Phi }dt. \end{aligned}$$

In view of the decay of \(\{R_j(t)\}\) and the expression (70) of the phase \(\Phi \) we obtain (71).

We are left with the case \(x\in \frac{1}{2}{\mathbb {Z}}\). The computation goes as above, with some extra non-oscillating terms that actually cancel:

$$\begin{aligned}&{\tilde{N}}(t_2,x)-{\tilde{N}}(t_1,x)=O((1+|x|)\sqrt{t_2}) \\&\quad +i\int _{t_1}^{t_2}\sum _{j\ne x,k;j+k=2x}e^{-i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}(\alpha _j+R_j(t))(\overline{\alpha _k+R_k(t)})\frac{x-k}{x-j}\frac{N(t,x)}{2t} e^{i\Phi }dt \\&\quad +i\int _{t_1}^{t_2}\sum _{j\ne x,k;j+k=2x}e^{-i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}(\alpha _j+R_j(t))(\overline{\alpha _k+R_k(t)})\frac{N(t,x)}{2t}e^{i\Phi }dt \\&\quad +i\int _{t_1}^{t_2}\sum _{j\ne x} \frac{|\alpha _j+R_j(t)|^2}{2t}N e^{i\Phi }+\Phi _t N(t,x)e^{i\Phi }dt=O((1+|x|)\sqrt{t_2}). \end{aligned}$$

\(\square \)

Next we shall prove that \({\tilde{N}}(0,x)\) is piecewise constant.

Lemma 4.6

Let \(n\in {\mathbb {Z}}\) and \(x_1,x_2\in (n,n+1)\). Then

$$\begin{aligned} {\tilde{N}}(0,x_1)={\tilde{N}}(0,x_2). \end{aligned}$$

Moreover, the same statement remains valid for \(x_1,x_2\in (n-1,n+1)\) if \(\alpha _n=0\).

Proof

From Lemma 4.5 we have

$$\begin{aligned} {\tilde{N}}(0,x_1)-{\tilde{N}}(0,x_2)=\underset{t\rightarrow 0}{\lim } \,({\tilde{N}}(t,x_1)-{\tilde{N}}(t,x_2)). \end{aligned}$$
(74)

In view of (56) we compute

$$\begin{aligned}&{\tilde{N}}\left( t,x_1\right) -{\tilde{N}}\left( t,x_2\right) =\int _{x_1}^{x_2}\left( -uT\left( t,x\right) +i\Phi _x N\left( t,x\right) \right) e^{i\Phi }\,dx \\&\quad =\int _{x_1}^{x_2}\left( -\sum _je^{-i|\alpha _j|^2\log \sqrt{t}}\left( \alpha _j+R_j\left( t\right) \right) \frac{e^{i\frac{\left( x-j\right) ^2}{4t}}}{\sqrt{t}}\,T\left( t,x\right) +i\Phi _x N\left( t,x\right) \right) e^{i\Phi }\,dx. \end{aligned}$$

The integral is well defined, and in view of the decay of \(\{R_j(t)\}\) we have

$$\begin{aligned}&{\tilde{N}}\left( t,x_1\right) -{\tilde{N}}\left( t,x_2\right) \\&\quad =O\left( \sqrt{t}\right) +\int _{x_1}^{x_2}\left( -\sum _je^{-i|\alpha _j|^2\log \sqrt{t}}\alpha _j\frac{e^{i\frac{\left( x-j\right) ^2}{4t}}}{\sqrt{t}}\,T\left( t,x\right) +i\Phi _x N\left( t,x\right) \right) e^{i\Phi }\,dx. \end{aligned}$$

If we are in the case \(x_1,x_2\in (n-1,n+1)\) and \(\alpha _n=0\), the quantity \(x-j\) can vanish on \((x_1,x_2)\) only for \(j=n\), but in this case the whole term vanishes as \(\alpha _n=0\). In the case \((x_1,x_2)\subset (n,n+1)\) the quantity \(x-j\) cannot vanish on \((x_1,x_2)\). Therefore to get decay in time we integrate by parts:

$$\begin{aligned}&{\tilde{N}}(t,x_1)-{\tilde{N}}(t,x_2)=O(\sqrt{t})+\left[ -\sum _je^{-i|\alpha _j|^2\log \sqrt{t}}\alpha _j\frac{e^{i\frac{(x-j)^2}{4t}}}{\sqrt{t}}\frac{4t}{2i(x-j)}T(t,x)e^{i\Phi }\right] _{x_1}^{x_2} \\&\quad +\int _{x_1}^{x_2}\sum _je^{-i|\alpha _j|^2\log \sqrt{t}}\alpha _j\frac{2\sqrt{t}}{i}e^{i\frac{(x-j)^2}{4t}}\left( \frac{1}{x-j}T(t,x)e^{i\Phi }\right) _x+i\Phi _x N(t,x)e^{i\Phi }\,dx. \end{aligned}$$

In view of formula (55) for the derivative \(T_x\) and the expression (70) of \(\Phi (t,x)\) we get

$$\begin{aligned}&{\tilde{N}}\left( t,x_1\right) -{\tilde{N}}\left( t,x_2\right) =O\left( \sqrt{t}\right) \\&\qquad +i\int _{x_1}^{x_2}\left( -2\sum _je^{-i|\alpha _j|^2\log \sqrt{t}}\alpha _je^{i\frac{\left( x-j\right) ^2}{4t}}\frac{1}{x-j}\mathfrak {R}\left( \sum _ke^{i|\alpha _k|^2\log \sqrt{t}}\overline{\alpha _k}e^{-i\frac{\left( x-k\right) ^2}{4t}}N\left( t,x\right) \right) \right. \\&\qquad \left. +\Phi _x N\left( t,x\right) \right) e^{i\Phi }\,dx \\&\quad =O\left( \sqrt{t}\right) +i\int _{x_1}^{x_2}\left( -\sum _{j,k}e^{-i\left( |\alpha _j|^2-|\alpha _k|^2\right) \log \sqrt{t}}\alpha _j\overline{\alpha _k}e^{i\frac{\left( x-j\right) ^2-\left( x-k\right) ^2}{4t}}\frac{1}{x-j}N\left( t,x\right) \right. \\&\qquad \left. +\Phi _x N\left( t,x\right) \right) e^{i\Phi }\,dx \\&\qquad -i\int _{x_1}^{x_2}\sum _{j,k}e^{-i\left( |\alpha _j|^2+|\alpha _k|^2\right) \log \sqrt{t}}\alpha _j\alpha _ke^{i\frac{\left( x-j\right) ^2+\left( x-k\right) ^2}{4t}}\frac{1}{x-j}\overline{N\left( t,x\right) }e^{i\Phi }\,dx. \end{aligned}$$

In the first integral the terms with \(j=k\) cancel with the ones from \(\Phi _x\). In the second integral the phase \((x-j)^2+(x-k)^2\) does not vanish as \((x_1,x_2)\) does not contain integers, so we can integrate by parts, use the expression (56) for \(N_x\) and gain a \(\sqrt{t}\) decay in time. We are left with

$$\begin{aligned}&{\tilde{N}}(t,x_1)-{\tilde{N}}(t,x_2)=O(\sqrt{t}) \\&\quad -i\int _{x_1}^{x_2}\sum _{j\ne k}e^{-i(|\alpha _j|^2-|\alpha _k|^2)\log \sqrt{t}}\alpha _j\overline{\alpha _k}e^{i\frac{(x-j)^2-(x-k)^2}{4t}}\frac{1}{x-j}N(t,x)e^{i\Phi }\,dx. \end{aligned}$$

If \(n\pm \frac{1}{2}\notin (x_1,x_2)\) the phase \((x-j)^2-(x-k)^2\) does not vanish, so again we can perform an integration by parts to get the decay in time. If \(n+\frac{1}{2}\in (x_1,x_2)\subset (n,n+1)\) we split the integral into three pieces: \((x_1,n+\frac{1}{2}-\sqrt{t}), (n+\frac{1}{2}-\sqrt{t},n+\frac{1}{2}+\sqrt{t})\) and \((n+\frac{1}{2}+\sqrt{t},x_2)\). On the middle segment, of size \(2\sqrt{t}\), we upper-bound the integrand by a constant. On the extremal segments we perform an integration by parts, which gives a power of \(\sqrt{t}\), as

$$\begin{aligned} \frac{1}{|(x-j)^2-(x-k)^2|}\le \frac{C}{d(2x,{\mathbb {Z}})}. \end{aligned}$$

The cases when \(n\pm \frac{1}{2}\in (x_1,x_2)\subset (n-1,n+1)\) are treated similarly. Therefore

$$\begin{aligned} {\tilde{N}}(t,x_1)-{\tilde{N}}(t,x_2)=O(\sqrt{t}), \end{aligned}$$

and in view of (74) we get the conclusion of the Lemma. \(\square \)

We end this section with a lemma that gives a link between \({\tilde{N}}(0,k^\pm )\) and the rotation \(\Theta _k\) from Lemma 4.3.

Lemma 4.7

Let \(t_n\) be a sequence of positive times converging to zero, such that

$$\begin{aligned} e^{i\sum _j|\alpha _j|^2\log (\sqrt{t_n})}=1. \end{aligned}$$
(75)

Using the notations in Lemma 4.3 we have the following relation:

$$\begin{aligned} \Theta _k(B^\pm _{|\alpha _k|})=e^{-i\sum _{j\ne k}|\alpha _j|^2\log |k-j|}\,e^{i Arg(\alpha _k)}\,{\tilde{N}}(0,k^\pm ). \end{aligned}$$

Proof

Let \(\epsilon >0\). We choose \(x>0\) large enough such that in view of (68)

$$\begin{aligned} |e^{i|\alpha _k|^2\log |x|}\,e^{i Arg(\alpha _k)}N_*(x)-\Theta _k(B^+_{|\alpha _k|})|\le \frac{\epsilon }{4}, \end{aligned}$$
(76)

and such that

$$\begin{aligned} \frac{C(1+|k|)}{x}\le \frac{\epsilon }{4}, \end{aligned}$$
(77)

where C is the coefficient in (71). Then we choose n large enough such that \(|x\sqrt{t_n}|<\frac{1}{2}\) and such that

$$\begin{aligned} |e^{-i\sum _{j\ne k} |\alpha _j|^2\log |x\sqrt{t_n}+k-j|}-e^{-i\sum _{j\ne k} |\alpha _j|^2\log |k-j|}|\le \frac{\epsilon }{4}, \end{aligned}$$
(78)

and

$$\begin{aligned} |e^{i|\alpha _k|^2\log \sqrt{t_n}}N(t_n,k+x\sqrt{t_n})-N_*(x)|\le \frac{\epsilon }{4}. \end{aligned}$$
(79)

The last fact is possible in view of Lemma 4.3. Therefore we have, in view of Lemma 4.6:

$$\begin{aligned}&I:=|e^{-i\sum _{j\ne k}|\alpha _j|^2\log |k-j|}\,e^{i Arg(\alpha _k)}\,{\tilde{N}}(0,k^+)-\Theta _k(B^+_{|\alpha _k|})| \\&\quad =|e^{-i\sum _{j\ne k}|\alpha _j|^2\log |k-j|}\,e^{i Arg(\alpha _k)}\,{\tilde{N}}(0,k+x\sqrt{t_n})-\Theta _k(B^+_{|\alpha _k|})| \\&\quad \le |{\tilde{N}}(0,k+x\sqrt{t_n})-{\tilde{N}}(t_n,k+x\sqrt{t_n})| \\&\qquad +|e^{-i\sum _{j\ne k}|\alpha _j|^2\log |k-j|}\,e^{i Arg(\alpha _k)}\,{\tilde{N}}(t_n,k+x\sqrt{t_n})-\Theta _k(B^+_{|\alpha _k|})|. \end{aligned}$$

By using the convergence (71) of Lemma 4.5 together with (77), and the definition (69) of \({\tilde{N}}\) we get

$$\begin{aligned} I\le & {} \frac{\epsilon }{4}+ |e^{-i\sum _{j\ne k}|\alpha _j|^2\log |k-j|}\,e^{i Arg(\alpha _k)}\,e^{i\sum _j|\alpha _j|^2\log \frac{|x\sqrt{t_n}+k-j|}{\sqrt{t_n}}}\,N(t_n,k+x\sqrt{t_n})\\&-\Theta _k(B^+_{|\alpha _k|})|. \end{aligned}$$

In view of (78) and (75) we have

$$\begin{aligned}&I\le \frac{2\epsilon }{4}+ |e^{i Arg(\alpha _k)}\,e^{i|\alpha _k|^2\log |x|}e^{-i\sum _{j\ne k}|\alpha _j|^2\log (\sqrt{t_n})}\,N(t_n,k+x\sqrt{t_n})-\Theta _k(B^+_{|\alpha _k|})| \\&\quad =\frac{\epsilon }{2}+ |e^{i Arg(\alpha _k)}\,e^{i|\alpha _k|^2\log |x|}e^{i|\alpha _k|^2\log (\sqrt{t_n})}\,N(t_n,k+x\sqrt{t_n})-\Theta _k(B^+_{|\alpha _k|})|. \end{aligned}$$

Finally, by (79)

$$\begin{aligned} I\le \frac{3\epsilon }{4}+ |e^{i Arg(\alpha _k)}\,e^{i|\alpha _k|^2\log |x|}\,N_*(x)-\Theta _k(B^+_{|\alpha _k|})|, \end{aligned}$$

and we conclude by (76) that

$$\begin{aligned} I\le \epsilon ,\forall \epsilon >0, \end{aligned}$$

thus

$$\begin{aligned} \Theta _k(B^+_{|\alpha _k|})=e^{-i\sum _{j\ne k}|\alpha _j|^2\log |k-j|}\,e^{i Arg(\alpha _k)}\,{\tilde{N}}(0,k^+). \end{aligned}$$

For \(x<0\) we argue similarly to get

$$\begin{aligned} \Theta _k(B^-_{|\alpha _k|})=e^{-i\sum _{j\ne k}|\alpha _j|^2\log |k-j|}\,e^{i Arg(\alpha _k)}\,{\tilde{N}}(0,k^-). \end{aligned}$$

\(\square \)
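Let us also note that sequences satisfying (75) always exist provided \(\sum _j|\alpha _j|^2\) is finite and nonzero; a minimal explicit example is

$$\begin{aligned} t_n=e^{-\frac{4\pi n}{\sum _j|\alpha _j|^2}},\qquad \text{ for which }\qquad \sum _j|\alpha _j|^2\log (\sqrt{t_n})=-2\pi n\in 2\pi {\mathbb {Z}}. \end{aligned}$$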

4.9 Recovering the Torsion of the Initial Data

Recall that in §4.2 we have denoted by \(\{x_n, n\in L\}\) the ordered set of the integer corner locations of \(\chi _0\), and by \(\{\theta _n,\tau _n,\delta _n\}_{n\in L}\) the sequence determining the curvature and torsion angles of \(\chi _0\). Lemma 4.4 ensured that \(\chi (0)\) has corners at the same locations as \(\chi _0\), and with the same angles. Let us denote by \(\{\theta _n,{\tilde{\tau }}_n,{\tilde{\delta }}_n\}_{n\in L}\) the corresponding sequence of \(\chi (0)\). To recover \(\chi _0\) up to a rotation and a translation we also need to recover the torsion properties of \(\chi _0\), i.e. to show that \({\tilde{\tau }}_n=\tau _n\) and \({\tilde{\delta }}_n=\delta _n\).

In §4.2 we have defined the torsion parameters in terms of the vector product of two consecutive tangent vectors, and in view of the way the tangent vectors of \(\chi (0)\) are described in Lemma 4.4, we are led to investigate vector products of the type \(\Theta _k(A^-_{|\alpha _k|}\wedge A^+_{|\alpha _k|})\). We start with the following lemma.

Lemma 4.8

For \(a>0\) there exists a unique \(\phi _a\in [0,2\pi )\) such that

$$\begin{aligned} \frac{A^-_a\wedge A^+_a }{|A^-_a\wedge A^+_a |}=\mathfrak {R}(e^{i\phi _a}B^+_a)=-\mathfrak {R}(e^{i\phi _a}B^-_a). \end{aligned}$$

Proof

For simplicity we drop the subindex a. We recall from (66) that the tangent vectors of the profile \(\chi (1)\) have as asymptotic directions the unit vectors \(A^\pm \), which, in view of formula (11) in [25], can be described as

$$\begin{aligned} A^+=(A_1,A_2,A_3),\quad A^-=(A_1,-A_2,-A_3). \end{aligned}$$

This parity property of the tangent vector implies similar parity properties for the normal and binormal vectors, and from (66) we also get

$$\begin{aligned} B^+=(B_1,B_2,B_3),\quad B^-=(B_1,-B_2,-B_3). \end{aligned}$$

In particular we have

$$\begin{aligned} \frac{A^-\wedge A^+}{|A^-\wedge A^+ |}=\frac{1}{\sqrt{1-A_1^2}}(0,-A_3,A_2), \end{aligned}$$

so

$$\begin{aligned} \frac{A^-\wedge A^+ }{|A^-\wedge A^+|}.B^+=\frac{1}{\sqrt{1-A_1^2}}(A_2B_3-A_3B_2)=-\frac{A^-\wedge A^+ }{|A^-\wedge A^+ |}.B^-. \end{aligned}$$
(80)

Since \(\frac{A^-\wedge A^+}{|A^-\wedge A^+ |}\) is orthogonal to \(A^+\) and \((\mathfrak {R}B^+,\mathfrak {I}B^+,A^+)\) is an orthonormal basis of \({\mathbb {R}}^3\), there exists a unique \(\phi \in [0,2\pi )\) such that

$$\begin{aligned} \frac{A^-\wedge A^+}{|A^-\wedge A^+ |}=\cos \phi \,\mathfrak {R}B^+-\sin \phi \,\mathfrak {I}B^+=\mathfrak {R}(e^{i\phi }B^+), \end{aligned}$$

which is the first equality in the statement. For the second equality, note that \(\frac{A^-\wedge A^+}{|A^-\wedge A^+ |}\) is also orthogonal to \(A^-\), and by (80) its components on \(\mathfrak {R}B^-\) and \(\mathfrak {I}B^-\) are the opposites of its components on \(\mathfrak {R}B^+\) and \(\mathfrak {I}B^+\), so that it equals \(-\cos \phi \,\mathfrak {R}B^-+\sin \phi \,\mathfrak {I}B^-=-\mathfrak {R}(e^{i\phi }B^-)\). \(\square \)
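As a consistency check, the cross product above can also be computed directly from the parity relations: since \(A^\pm =(A_1,\pm A_2,\pm A_3)\) are unit vectors,

$$\begin{aligned} A^-\wedge A^+=2A_1\,(0,-A_3,A_2),\qquad |A^-\wedge A^+|=2|A_1|\sqrt{A_2^2+A_3^2}=2|A_1|\sqrt{1-A_1^2}, \end{aligned}$$

which, when \(A_1>0\), gives the normalized expression used in the proof.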

We continue with some useful information on the connection between quantities involving the normal components at two consecutive corners of \(\chi (0)\). Recall that we have defined \(\alpha _k=0\) if \(k\notin \{x_n, n\in L\} \), and \(\alpha _k\in {\mathbb {C}}\) by (51) and (52) if \(k=x_n\) for some \(n\in L\). In particular two consecutive corners are located at \(x_n\) and \(x_{n+1}\), and the corresponding information is encoded by \(\alpha _{x_n}\) and \(\alpha _{x_{n+1}}\).

Lemma 4.9

Let \(t_n\) be a sequence of positive times converging to zero, such that the hypothesis (75) of Lemma 4.7 holds. We have the following relation concerning two consecutive corners located at \(x_n\) and \(x_{n+1}\):

$$\begin{aligned} \Theta _{x_n}(B^+_{|\alpha _{x_n}|})=e^{i\beta _n}\,e^{i (Arg(\alpha _{x_n})-Arg(\alpha _{x_{n+1}}))}\,\Theta _{x_{n+1}}(B^-_{|\alpha _{x_{n+1}}|}), \end{aligned}$$

where

$$\begin{aligned} \beta _n=(|\alpha _{x_n}|^2-|\alpha _{x_{n+1}}|^2)\log |x_n-x_{n+1}|. \end{aligned}$$

Proof

The result is a simple consequence of Lemma 4.7 and Lemma 4.6. \(\square \)

In the next lemma we recover the modulus and the sign of the torsion angles of \(\chi _0\).

Lemma 4.10

The torsion angles of \(\chi (0)\) and \(\chi _0\) coincide:

$$\begin{aligned} {\tilde{\tau }}_n=\tau _n,\quad {\tilde{\delta }}_n=\delta _n,\quad \forall n\in L. \end{aligned}$$

Proof

From the definition (50) we have

$$\begin{aligned} \cos ({\tilde{\tau }}_n)=\frac{T(0,x_{n}^-)\wedge T(0,x_{n}^+)}{|T(0,x_{n}^-)\wedge T(0,x_{n}^+)|}.\frac{T(0,x_{n+1}^-)\wedge T(0,x_{n+1}^+)}{|T(0,x_{n+1}^-)\wedge T(0,x_{n+1}^+)|}. \end{aligned}$$

Now we use Lemma 4.4:

$$\begin{aligned} \cos ({\tilde{\tau }}_n)=\Theta _{x_n}\left( \frac{A^-_{|\alpha _{x_n}|}\wedge A^+_{|\alpha _{x_n}|}}{|A^-_{|\alpha _{x_n}|}\wedge A^+_{|\alpha _{x_n}|}|}\right) .\Theta _{x_{n+1}}\left( \frac{A^-_{|\alpha _{x_{n+1}}|}\wedge A^+_{|\alpha _{x_{n+1}}|}}{|A^-_{|\alpha _{x_{n+1}}|}\wedge A^+_{|\alpha _{x_{n+1}}|}|}\right) . \end{aligned}$$

By using Lemma 4.8 we write

$$\begin{aligned} \cos ({\tilde{\tau }}_n)= & {} \Theta _{x_n}\left( \mathfrak {R}(e^{i\phi _{|\alpha _{x_n}|}}B^+_{|\alpha _{x_n}|})\right) .\Theta _{x_{n+1}}\left( -\mathfrak {R}(e^{i\phi _{|\alpha _{x_{n+1}}|}}B^-_{|\alpha _{x_{n+1}}|})\right) \\= & {} -\mathfrak {R}\left( \Theta _{x_n}(e^{i\phi _{|\alpha _{x_n}|}}B^+_{|\alpha _{x_n}|})\right) .\mathfrak {R}\left( \Theta _{x_{n+1}}(e^{i\phi _{|\alpha _{x_{n+1}}|}}B^-_{|\alpha _{x_{n+1}}|})\right) . \end{aligned}$$

Finally, by Lemma 4.9 we get

$$\begin{aligned} \cos ({\tilde{\tau }}_n)= & {} -\mathfrak {R}\left( e^{i\phi _{|\alpha _{x_n}|}+i\beta _n+i(Arg(\alpha _{x_n})-Arg(\alpha _{x_{n+1}}))}\Theta _{x_{n+1}}(B^-_{|\alpha _{x_{n+1}}|})\right) \\&.\mathfrak {R}\left( \Theta _{x_{n+1}}(e^{i\phi _{|\alpha _{x_{n+1}}|}}B^-_{|\alpha _{x_{n+1}}|})\right) . \end{aligned}$$

Since \(\mathfrak {R}B^-_{|\alpha _{x_{n+1}}|}\) and \(\mathfrak {I}B^-_{|\alpha _{x_{n+1}}|}\) are orthogonal unit vectors, and \(\Theta _{x_{n+1}}\) is a rotation, we obtain

$$\begin{aligned} \cos ({\tilde{\tau }}_n)=-\cos (\phi _{|\alpha _{x_n}|}+\beta _n+Arg(\alpha _{x_n})-Arg(\alpha _{x_{n+1}})-\phi _{|\alpha _{x_{n+1}}|}). \end{aligned}$$

Therefore, by definition (52) of \(\{Arg(\alpha _j)\}\) we get

$$\begin{aligned} \cos ({\tilde{\tau }}_n)=\cos (\tau _n), \end{aligned}$$

and in particular \({\tilde{\tau }}_n=\tau _n\).

Similarly, we compute

$$\begin{aligned}&\frac{T(0,x_{n}^-)\wedge T(0,x_{n}^+)}{|T(0,x_{n}^-)\wedge T(0,x_{n}^+)|}\wedge \frac{T(0,x_{n+1}^-)\wedge T(0,x_{n+1}^+)}{|T(0,x_{n+1}^-)\wedge T(0,x_{n+1}^+)|} \\&\quad =-\Theta _{x_{n+1}}(\mathfrak {R}B^-_{|\alpha _{x_{n+1}}|})\wedge \Theta _{x_{n+1}}(\mathfrak {I}B^-_{|\alpha _{x_{n+1}}|})\sin (\phi _{|\alpha _{x_n}|}+\beta _n+Arg(\alpha _{x_n})\\&\qquad -Arg(\alpha _{x_{n+1}})-\phi _{|\alpha _{x_{n+1}}|}). \end{aligned}$$

As \(\mathfrak {R}(B^-_{|\alpha _{x_{n+1}}|})\wedge \mathfrak {I}(B^-_{|\alpha _{x_{n+1}}|})=A^-_{|\alpha _{x_{n+1}}|}\), in view of Lemma 4.4 we get

$$\begin{aligned}&\frac{T(0,x_{n}^-)\wedge T(0,x_{n}^+)}{|T(0,x_{n}^-)\wedge T(0,x_{n}^+)|}\wedge \frac{T(0,x_{n+1}^-)\wedge T(0,x_{n+1}^+)}{|T(0,x_{n+1}^-)\wedge T(0,x_{n+1}^+)|} \\&\quad =-T(0,x_{n+1}^-)\sin (\phi _{|\alpha _{x_n}|}+\beta _n+Arg(\alpha _{x_n})-Arg(\alpha _{x_{n+1}})-\phi _{|\alpha _{x_{n+1}}|}), \end{aligned}$$

so by definition (52) of \(\{Arg(\alpha _j)\}\) we conclude \({\tilde{\delta }}_n=\delta _n\). \(\square \)
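The two elementary identities used in the computations above can be checked directly: since \(\mathfrak {R}(e^{i\psi }(u+iv))=\cos \psi \,u-\sin \psi \,v\) for orthogonal \(u,v\in {\mathbb {S}}^2\) and \(\psi \in {\mathbb {R}}\), we have

$$\begin{aligned} \mathfrak {R}\big(e^{i\psi _1}(u+iv)\big).\mathfrak {R}\big(e^{i\psi _2}(u+iv)\big)=\cos (\psi _1-\psi _2),\qquad \mathfrak {R}\big(e^{i\psi _1}(u+iv)\big)\wedge \mathfrak {R}\big(e^{i\psi _2}(u+iv)\big)=\sin (\psi _1-\psi _2)\,u\wedge v, \end{aligned}$$

applied here with \(u+iv=\Theta _{x_{n+1}}(B^-_{|\alpha _{x_{n+1}}|})\).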

4.10 End of the Existence Result Proof

From Lemma 4.4 and Lemma 4.10 we conclude that \(\chi (0)\) and \(\chi _0\) have the same characterizing sequences \(\{\theta _n,\tau _n,\delta _n\}_{n\in L}\). In view of the definition of this sequence in §4.2 we conclude that \(\chi (0)\) and \(\chi _0\) coincide modulo a rotation and a translation. This rotation and translation can be removed by changing the initial point P and the frame \((v_1,v_2,v_3)\) used in the construction of \(\chi (t)\) in §4.3. Therefore we have constructed the curve evolution in Theorem 1.1 for positive times. The extension to negative times is done by using the time reversibility of the Schrödinger equation and that of the binormal flow, i.e. by solving the binormal flow for positive times with initial data \(\chi (-s)\), which is still a polygonal line satisfying the hypothesis.
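One way to phrase the time reversibility of the binormal flow invoked here is the following elementary symmetry: if \(\chi \) solves (1), then so does \({\tilde{\chi }}(t,x):=\chi (-t,-x)\), since \({\tilde{\chi }}_x(t,x)=-\chi _x(-t,-x)\) and \({\tilde{\chi }}_{xx}(t,x)=\chi _{xx}(-t,-x)\), so that

$$\begin{aligned} {\tilde{\chi }}_t(t,x)=-\chi _t(-t,-x)=-\chi _x(-t,-x)\wedge \chi _{xx}(-t,-x)={\tilde{\chi }}_x(t,x)\wedge {\tilde{\chi }}_{xx}(t,x). \end{aligned}$$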

4.11 Further Properties of the Constructed Solution

In this last subsection we describe the trajectories in time of the corner locations \(\chi (t,k)\in {\mathbb {R}}^3\).

Lemma 4.11

Let \(k\) be such that \(\alpha _k\ne 0\), that is, \(k\) is the location of a corner of \(\chi _0\). Then there exist two orthogonal vectors \(v_1,v_2\in {\mathbb {S}}^2\) such that

$$\begin{aligned} \chi (t,k)=\chi (0,k)+\sqrt{t}\,\big(v_1\sin (M\log \sqrt{t})+v_2\cos (M\log \sqrt{t})\big)+O(t). \end{aligned}$$

Proof

From (59) and the decay of \(\{R_j(\tau )\}\) we have

$$\begin{aligned}&\chi (t,k)-\chi (0,k)=\int _0^t\mathfrak {I}({\overline{u}}N(\tau ,k))\,d\tau \\&\quad =\mathfrak {I}\int _{0}^{t}\sum _je^{i|\alpha _j|^2\log \sqrt{\tau }}(\overline{\alpha _j+R_j(\tau )})\frac{e^{-i\frac{(k-j)^2}{4\tau }}}{\sqrt{\tau }}\,N(\tau ,k)\,d\tau \\&\quad =\mathfrak {I}\int _{0}^{t}\sum _je^{i|\alpha _j|^2\log \sqrt{\tau }}\overline{\alpha _j}\frac{e^{-i\frac{(k-j)^2}{4\tau }}}{\sqrt{\tau }}\,N(\tau ,k)\,d\tau +O(t^{\frac{1}{2}+\gamma }). \end{aligned}$$

In the terms with \(j\ne k\) we perform an integration by parts to get decay in time

$$\begin{aligned}&\chi (t,k)-\chi (0,k)=\mathfrak {I}\int _{0}^{t}e^{i|\alpha _k|^2\log \sqrt{\tau }}\overline{\alpha _k}\,N(\tau ,k)\,\frac{d\tau }{\sqrt{\tau }}+O(t^{\frac{1}{2}+\gamma }) \\&\quad +\left[ \mathfrak {I}\sum _{j\ne k} e^{i|\alpha _j|^2\log \sqrt{\tau }}\overline{\alpha _j}\frac{e^{-i\frac{(k-j)^2}{4\tau }}}{\sqrt{\tau }} \frac{4\tau ^2}{i(k-j)^2}\,N(\tau ,k) \right] _0^t \\&\quad -\mathfrak {I}\int _{0}^{t} \sum _{j\ne k} 4\overline{\alpha _j}\frac{e^{-i\frac{(k-j)^2}{4\tau }}}{ i(k-j)^2}(e^{i|\alpha _j|^2\log \sqrt{\tau }}\tau \sqrt{\tau }\,N(\tau ,k))_\tau \,d\tau . \end{aligned}$$
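The integration by parts above rests on the elementary identity, valid for \(j\ne k\),

$$\begin{aligned} e^{-i\frac{(k-j)^2}{4\tau }}=\frac{4\tau ^2}{i(k-j)^2}\,\partial _\tau \Big(e^{-i\frac{(k-j)^2}{4\tau }}\Big), \end{aligned}$$

which produces the factor \(\frac{4\tau ^2}{i(k-j)^2}\) in the boundary term.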

The boundary term is of order \(O(t\sqrt{t})\). In view of (58) we get a \(\frac{1}{\tau \sqrt{\tau }}\) estimate for \(N_\tau \) so the last term is of order O(t), and we have

$$\begin{aligned} \chi (t,k)-\chi (0,k)=\mathfrak {I}\int _{0}^{t}e^{i|\alpha _k|^2\log \sqrt{\tau }}\overline{\alpha _k}\,N(\tau ,k)\,\frac{d\tau }{\sqrt{\tau }}+O(t). \end{aligned}$$

Now, from (73) in Lemma 4.5 we get the existence of \(w_1,w_2\in {\mathbb {S}}^2\) such that

$$\begin{aligned} w_1+iw_2=\underset{t\rightarrow 0}{\lim }\,e^{-i\sum _{j\ne k}|\alpha _j|^2\log \sqrt{t}}N(t,k), \end{aligned}$$

with a rate of convergence upper-bounded by \(C(1+|k|)\sqrt{t}\). This implies

$$\begin{aligned} \chi (t,k)-\chi (0,k)=\mathfrak {I}\, \overline{\alpha _k}\,(w_1+iw_2)\int _{0}^{t}e^{i\sum _j|\alpha _j|^2\log \sqrt{\tau }}\,\frac{d\tau }{\sqrt{\tau }}+O(t), \end{aligned}$$

and thus the conclusion of the Lemma. \(\square \)
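For the reader's convenience we make the last step explicit, writing \(M=\sum _j|\alpha _j|^2\) for the oscillation rate in the statement (this identification follows from the computation below):

$$\begin{aligned} \int _{0}^{t}e^{iM\log \sqrt{\tau }}\,\frac{d\tau }{\sqrt{\tau }}=\int _{0}^{t}\tau ^{\frac{iM-1}{2}}\,d\tau =\frac{2\,t^{\frac{1+iM}{2}}}{1+iM}=\frac{2\sqrt{t}\,e^{iM\log \sqrt{t}}}{1+iM}, \end{aligned}$$

so that

$$\begin{aligned} \chi (t,k)-\chi (0,k)=\sqrt{t}\,\mathfrak {I}\Big(\frac{2\,\overline{\alpha _k}\,(w_1+iw_2)}{1+iM}\,e^{iM\log \sqrt{t}}\Big)+O(t), \end{aligned}$$

which is of the oscillatory form announced in the Lemma.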