1 Introduction

In this work we consider the initial value problem (IVP) associated to the cubic nonlinear Schrödinger equation with third-order dispersion

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _{t}u+i\alpha \partial ^{2}_{x}u- \partial ^{3}_{x}u+i\beta |u|^{2}u = 0, \quad x,t \in \mathbb R, \\ u(x,0) = u_0(x), \end{array}\right. } \end{aligned}$$
(1.1)

where \(\alpha ,\beta \in \mathbb R\) and \(u = u(x, t)\) is a complex-valued function.

The equation in (1.1), also known as the extended nonlinear Schrödinger (e-NLS) equation, arises in the description of several physical phenomena, such as nonlinear pulse propagation in an optical fiber and the nonlinear modulation of a capillary gravity wave on water; for more details we refer to [1, 3, 12, 15, 18, 21, 25] and the references therein. In some of the literature this model is also known as the third-order Lugiato-Lefever equation [19]. It can also be considered as a particular case of the higher-order nonlinear Schrödinger (h-NLS) equation proposed by Hasegawa and Kodama in [14, 17] to describe the nonlinear propagation of pulses in optical fibers

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _{t}u-i\alpha \partial ^{2}_{x}u+ \partial ^{3}_{x}u-i\beta |u|^{2}u+\gamma |u|^{2}\partial _{x}u+\delta \partial _{x}(|u|^2)u = 0, \quad x,t \in \mathbb R,\\ u(x,0) = u_0(x), \end{array}\right. } \end{aligned}$$

where \(\alpha ,\beta , \gamma \in \mathbb R\), \( \delta \in \mathbb {C}\) and \(u = u(x, t)\) is a complex-valued function.

The well-posedness issues and other properties of solutions of the IVP (1.1) posed on \(\mathbb R\) or \(\mathbb T\) have been extensively studied by several authors; see, for example, [3, 6, 12, 19, 20] and the references therein. As far as we know, the best local well-posedness result for the IVP (1.1) with given data in the \(L^2\)-based Sobolev spaces \(H^s(\mathbb R)\), \(s>-\frac{1}{4}\), is the one obtained by the first author in [3]. More precisely, the following result was proved there.

Theorem 1.1

[3] Let \(u_0\in H^s(\mathbb R)\) and \(s>-\frac{1}{4}\). Then there exist \(\delta = \delta (\Vert u_0\Vert _{H^s})\) (with \(\delta (\rho )\rightarrow \infty \) as \(\rho \rightarrow 0\)) and a unique solution to the IVP (1.1) in the time interval \([0, \delta ]\). Moreover, the solution satisfies the estimate

$$\begin{aligned} \Vert u\Vert _{X_{\delta }^{s, b}}\lesssim \Vert u_0\Vert _{H^s}, \end{aligned}$$
(1.2)

where the norm \(\Vert u\Vert _{X_{\delta }^{s, b}}\) is as defined in (2.5).

To obtain this result, the author in [3] derived a trilinear estimate

$$\begin{aligned} \Vert u_1u_2\bar{u_3}\Vert _{X^{s,b'}}\lesssim \prod _{j=1}^3\Vert u_j\Vert _{X^{s,b}}, \quad 0\ge s>-\frac{1}{4},\;\; b>\frac{7}{12}, \;\, b'<\frac{s}{3}, \end{aligned}$$
(1.3)

where, for \(s,b\in \mathbb R\), \(X^{s,b}\) is the Fourier transform restriction norm space introduced by Bourgain [2] with norm

$$\begin{aligned} \Vert u\Vert _{X^{s,b}}:=\Vert \langle \xi \rangle ^s\langle \tau -\phi (\xi )\rangle ^b\widehat{u}(\xi , \tau )\Vert _{L^2_{\xi }L^2_{\tau }}, \end{aligned}$$
(1.4)

where \(\langle x\rangle :=1+|x|\) and \(\phi (\xi ) \) is the phase function associated to the e-NLS equation (1.1) (for the detailed definition, see (2.4) below). The author in [3] also showed that the crucial trilinear estimate (1.3) fails for \(s<-\frac{1}{4}\). Further, it has been proved that the data-to-solution map fails to be \(C^3\) at the origin if \(s<-\frac{1}{4}\); see Theorem 1.3, iv) in [5]. In this sense, the local well-posedness result given by Theorem 1.1 is sharp, at least by this method.

Remark 1.2

We note that the following quantity

$$\begin{aligned} E(u): = \int _{\mathbb R}|u(x,t)|^2 dx, \end{aligned}$$
(1.5)

is conserved by the flow of (1.1). Using this conserved quantity, the local solution given by Theorem 1.1 can be extended globally in time, thereby proving the global well-posedness of the IVP (1.1) in \(H^s(\mathbb R)\), whenever \(s\ge 0\).

Looking at the local well-posedness result given by Theorem 1.1 and the Remark above, it is clear that there is a gap between the local and the global well-posedness results. In other words, one may ask the following natural question: can the local solution given by Theorem 1.1 be extended globally in time for \( -\frac{1}{4}<s<0\)?

The main objective of this work is to answer the question raised in the previous paragraph, which has remained open since [3] in 2004. In other words, the main focus of this work is to investigate the global well-posedness of the IVP (1.1) for given data in the low regularity Sobolev spaces \(H^s(\mathbb R)\), \(-\frac{1}{4}<s<0\). No conserved quantities are available for data with regularity below \(L^2\), so the classical method of extending the local solution globally in time does not apply. To overcome this difficulty, we use the I-method introduced by Colliander et al. [8,9,10] and derive an almost conserved quantity to obtain the global well-posedness result for given data in the low regularity Sobolev spaces. More precisely, the main result of this work is the following.

Theorem 1.3

The IVP (1.1) is globally well-posed for any initial data \( u_0\in H^s(\mathbb R)\), \(s>-\frac{1}{4}\).

Remark 1.4

In the proof of this theorem, an almost conservation law for the second generation of the modified energy, viz.,

$$\begin{aligned} |E^2_I(u(\delta ))|\le |E^2_I(\phi )| + C N^{-\frac{7}{4}}\Vert Iu\Vert _{X^{0, \frac{1}{2}+}_{\delta }}^6 \end{aligned}$$

plays a crucial role. The decay \(N^{-\frac{7}{4}}\) is more than enough to get the required result. Behind the proof of the almost conservation law there are decay estimates for the multipliers involved. The structure of the multipliers in our case is different from the ones that appear in the case of the KdV or NLS equations, see for example [4, 9, 10]. This fact creates some extra difficulties, as can be seen in the proof of Proposition 3.3.

The well-posedness issues of the IVP (1.1) posed on the periodic domain \(\mathbb T:=\mathbb R/2\pi \mathbb Z\) have also been considered by several authors in recent times. The authors in [19] studied the IVP (1.1), assuming \(\frac{2\alpha }{3}\notin \mathbb Z\), with data \(u_0\in L^2(\mathbb T)\) and obtained the global existence of the solution. They also obtained the global attractor in \(L^2(\mathbb T)\). The local existence result obtained in [19] was further improved in [18] for given data in the Sobolev spaces \(H^s(\mathbb T)\) with \(s>-\frac{1}{6}\) (see also [25]) under the same assumption.

Taking into consideration the results in [19] and [18], there is a gap between the local and the global well-posedness results in the periodic case too. In other words, one has the following natural question: can the local solution to the IVP (1.1) posed on the periodic domain \(\mathbb T\) be extended globally in time for given data in \(H^s(\mathbb T)\), \(-\frac{1}{6}<s<0\)? Although this is a very interesting question, deriving almost conserved quantities in the periodic setting is more demanding and we will not consider it here.

Recently, other properties of solutions of the IVP (1.1) have also been studied in the literature. The authors in [20] proved that the mean-zero Gaussian measures on the Sobolev spaces \(H^s(\mathbb T)\) are quasi-invariant under the flow whenever \(s >\frac{3}{4}\). This result was further improved in [12] to the Sobolev spaces \(H^s(\mathbb T)\) with \(s>\frac{1}{2}\). Quite recently, in [6], we considered the IVP (1.1) with given data in the modulation spaces \(M_s^{2,p}(\mathbb R) \) and obtained a local well-posedness result for \(s> -\frac{1}{4}\) and \(2\le p<\infty \).

Now we present the organization of this work. In Sect. 2, we define function spaces and provide some preliminary results. In Sect. 3 we introduce multilinear estimates and an almost conservation law that are fundamental to proving the main result of this work. In Sect. 4 we provide the proof of the main result of this paper. We finish this section by recording some standard notations that will be used throughout this work.

Notations: We use c to denote various constants whose exact values are immaterial and may vary from one line to the next. We use \(A\lesssim B\) to denote an estimate of the form \(A\le cB\) and \(A\sim B\) if \(A\le cB\) and \(B\le cA\). Also, we use the notation \(a+\) to denote \(a+\epsilon \) for \(0< \epsilon \ll 1\).

2 Function Spaces and Preliminary Results

We start this section by introducing some function spaces that will be used throughout this work. For \(f:\mathbb R\times [0, T] \rightarrow \mathbb C\) we define the mixed \(L_x^pL_T^q\)-norm by

$$\begin{aligned} \Vert f\Vert _{L_x^pL_T^q} = \left( \int _{\mathbb R}\left( \int _0^T |f(x, t)|^q\,dt \right) ^{p/q}\,dx\right) ^{1/p}, \end{aligned}$$

with usual modifications when \(p = \infty \). We replace T by t if [0, T] is the whole real line \(\mathbb R\).

We use \(\widehat{f}(\xi )\) to denote the Fourier transform of f(x) defined by

$$\begin{aligned} \widehat{f}(\xi ) = c \int _{\mathbb R}e^{-ix\xi }f(x)dx \end{aligned}$$

and \(\widetilde{f}(\xi , \tau )\) to denote the space-time Fourier transform of f(x, t), defined by

$$\begin{aligned} \widetilde{f}(\xi , \tau ) = c \int _{\mathbb R^2}e^{-i(x\xi +t\tau )}f(x,t)dxdt. \end{aligned}$$

We use \(H^s\) to denote the \(L^2\)-based Sobolev space of order s with norm

$$\begin{aligned} \Vert f\Vert _{H^s(\mathbb R)} = \Vert \langle \xi \rangle ^s \widehat{f}(\xi )\Vert _{L^2_{\xi }}, \end{aligned}$$

where \(\langle \xi \rangle = 1+|\xi |\).

In order to simplify the presentation, we use the following gauge transformation, considered in [24],

$$\begin{aligned} u(x,t):= v(x-d_1t, -t)e^{i(d_2x+d_3t)}. \end{aligned}$$
(2.1)

Under this transformation, the IVP (1.1) becomes

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _{t}v+ \partial ^{3}_{x}v-i(\alpha -3d_2) \partial ^{2}_{x}v+(d_1+2\alpha d_2-3d_2^2)\partial _x v -i(d_2^3-\alpha d_2^2 +d_3) v - i\beta |v|^2v = 0,\\ v(x,0) = v_0(x):= u_0(x)e^{-id_2x}. \end{array}\right. } \end{aligned}$$
(2.2)

If one chooses \(d_1=-\frac{\alpha ^2}{3}\), \(d_2= \frac{\alpha }{3}\) and \(d_3=\frac{2\alpha ^3}{27}\) the third, fourth and fifth terms in the first equation in (2.2) vanish. Also, we note that

$$\begin{aligned} \Vert u_0\Vert _{H^s}\sim \Vert v_0\Vert _{H^s}. \end{aligned}$$

So from now on, we will consider the IVP (1.1) with \(\alpha = 0\), more precisely,

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _{t}u+ \partial ^{3}_{x}u-i\beta |u|^{2}u = 0, \quad x,t \in \mathbb R, \\ u(x,0) = u_0(x). \end{array}\right. } \end{aligned}$$
(2.3)
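As a sanity check (an illustrative aside, not part of the original argument), the cancellation of the lower-order terms in (2.2) under this choice of \(d_1, d_2, d_3\) can be verified symbolically, for instance with SymPy:

```python
import sympy as sp

alpha = sp.symbols('alpha', real=True)
# the coefficient choice d1 = -alpha^2/3, d2 = alpha/3, d3 = 2 alpha^3/27
d1, d2, d3 = -alpha**2 / 3, alpha / 3, 2 * alpha**3 / 27

# coefficients of the lower-order terms read off from (2.2)
c2 = alpha - 3 * d2                    # second-order term
c1 = d1 + 2 * alpha * d2 - 3 * d2**2   # first-order term
c0 = d2**3 - alpha * d2**2 + d3        # zeroth-order term

assert sp.simplify(c2) == 0
assert sp.simplify(c1) == 0
assert sp.simplify(c0) == 0
```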

This simplification allows us to work in the Fourier transform restriction norm space adapted to the cubic \(\tau = \xi ^3\). In what follows we formally introduce the Fourier transform restriction norm space, commonly known as Bourgain's space.

For \(s, b \in \mathbb R\), we define the Fourier transform restriction norm space \(X^{s,b}(\mathbb R\times \mathbb R)\) with norm

$$\begin{aligned} \Vert f\Vert _{ X^{s, b}} = \Vert (1+D_t)^b U(t)f\Vert _{L^{2}_{t}(H^{s}_{x})} = \Vert \langle \tau -\xi ^3\rangle ^b\langle \xi \rangle ^s \widetilde{f}(\xi , \tau )\Vert _{L^2_{\xi ,\tau }}, \end{aligned}$$
(2.4)

where \(U(t) = e^{-t\partial ^{3}_{x}}\) is the unitary group.

If \(b> \frac{1}{2}\), the Sobolev lemma implies that \( X^{s, b} \subset C(\mathbb R; H^s_x(\mathbb R)).\) For any interval I, we define the localized spaces \(X^{s,b}_I:= X^{s,b}(\mathbb R\times I)\) with norm

$$\begin{aligned} \Vert f\Vert _{ X^{s, b}(\mathbb R\times I)} = \inf \big \{\Vert g\Vert _{X^{s, b}};\; g |_{\mathbb R\times I} = f\big \}. \end{aligned}$$
(2.5)

Sometimes we use the notation \(X^{s,b}_{\delta }:= X^{s, b}(\mathbb R\times [0, \delta ])\).

We define an even cut-off function \(\psi _1 \in C^{\infty }(\mathbb R;\; \mathbb R^+)\) such that \(0\le \psi _1\le 1\) and

$$\begin{aligned} \psi _1(t) = {\left\{ \begin{array}{ll} 1, \quad |t|\le 1,\\ 0, \quad |t|\ge 2. \end{array}\right. } \end{aligned}$$
(2.6)

We also define \(\psi _T(t) = \psi _1(t/T)\), for \(0< T\le 1\).

In the following lemma we list some estimates that are crucial in the proof of the local well-posedness result; their proofs can be found in [13].

Lemma 2.1

For any \(s, b \in \mathbb R\), we have

$$\begin{aligned} \Vert \psi _1U(t)\phi \Vert _{X^{s,b}}\le C \Vert \phi \Vert _{H^s}. \end{aligned}$$
(2.7)

Further, if \(-\frac{1}{2}<b'\le 0\le b<b'+1\) and \(0\le \delta \le 1\), then

$$\begin{aligned} \Vert \psi _{\delta }\int _0^tU(t-t')f(u(t'))dt'\Vert _{X^{s,b}}\lesssim \delta ^{1-b+b'}\Vert f(u)\Vert _{X^{s, b'}}. \end{aligned}$$
(2.8)

As mentioned in the introduction, our main objective is to prove the global well-posedness result for the low regularity data. Using the \(L^2\) conservation law (1.5) we have the global well-posedness of the IVP (2.3) for given data in \(H^s(\mathbb R),\) \(s\ge 0\). So, from now on we suppose \(-\frac{1}{4}<s<0\) throughout this work.

Our aim is to derive an almost conserved quantity and use it to prove Theorem 1.3. For this, we use the I-method introduced in [10] and define the Fourier multiplier operator I by,

$$\begin{aligned} \widehat{Iu}(\xi ) = m(\xi ) \widehat{u}(\xi ), \end{aligned}$$
(2.9)

where \(m(\xi )\) is a smooth, radially symmetric and nonincreasing function given by

$$\begin{aligned} m(\xi ) = {\left\{ \begin{array}{ll} 1, \quad \quad \quad \,\,\,\,\,\,\,\;\;|\xi |< N, \\ N^{-s}|\xi |^s, \quad \quad |\xi |\ge 2N, \end{array}\right. } \end{aligned}$$
(2.10)

with \(N\gg 1\) to be fixed later.

Note that I is the identity operator on low frequencies, \(\{\xi : |\xi |< N\}\), and essentially a fractional integral operator on high frequencies. It commutes with differential operators and satisfies the following property.

Lemma 2.2

Let \(-\frac{1}{4}<s<0\) and \(N\ge 1\). Then the operator I maps \(H^s(\mathbb R)\) to \(L^2(\mathbb R)\) and

$$\begin{aligned} \Vert I f\Vert _{L^2(\mathbb R)} \lesssim N^{-s}\Vert f\Vert _{H^s(\mathbb R)}. \end{aligned}$$
(2.11)
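In frequency terms, (2.11) amounts to \(\sup _{\xi } m(\xi )\langle \xi \rangle ^{-s}\lesssim N^{-s}\). The short numerical check below illustrates this uniformity in N; it uses a simplified piecewise stand-in for m that ignores the smooth matching region \(N\le |\xi |< 2N\) of (2.10), so it is illustrative only:

```python
import numpy as np

def m(xi, N, s):
    # simplified piecewise stand-in for the smooth multiplier (2.10):
    # identity below N, N^{-s}|xi|^s above (smooth matching ignored)
    xi = np.abs(xi)
    hi = N ** (-s) * np.maximum(xi, 1.0) ** s
    return np.where(xi < N, 1.0, hi)

s = -0.2  # a sample regularity in (-1/4, 0)
for N in [16.0, 64.0, 256.0, 1024.0]:
    xi = np.linspace(0.0, 100.0 * N, 200001)
    # sup_xi m(xi) <xi>^{-s}, normalized by N^{-s}: stays O(1) in N
    ratio = np.max(m(xi, N, s) * (1.0 + xi) ** (-s)) / N ** (-s)
    assert ratio < 4.0
```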

We now record a variant of the local well-posedness result for initial data \(u_0 \in H^s\), \(0>s>-\frac{1}{4}\), such that \(Iu_0\in L^2\). More precisely, we have the following result, which will be very useful in the proof of the global well-posedness theorem.

Theorem 2.3

Let \(-\frac{1}{4}<s<0\), then for any \(u_0\) such that \(Iu_0\in L^2\), there exist \(\delta = \delta (\Vert Iu_0\Vert _{L^2})\) (with \(\delta (\rho )\rightarrow \infty \) as \(\rho \rightarrow 0\)) and a unique solution to the IVP (2.3) in the time interval \([0, \delta ]\). Moreover, the solution satisfies the estimate

$$\begin{aligned} \Vert Iu\Vert _{X_{\delta }^{0, b}}\lesssim \Vert Iu_0\Vert _{L^2}, \end{aligned}$$
(2.12)

and the local existence time \(\delta \) can be chosen satisfying

$$\begin{aligned} \delta \lesssim \Vert Iu_0\Vert _{L^2}^{-\theta }, \end{aligned}$$
(2.13)

where \(\theta >0\) is some constant.

Proof

As the operator I commutes with differential operators, the linear estimates in Lemma 2.1 that are necessary for the contraction mapping principle remain true after applying I to equation (2.3). Since the operator I does not commute with the nonlinearity, the trilinear estimate is not straightforward. However, applying the interpolation lemma (Lemma 12.1 in [11]) to (1.3), we obtain, under the same assumptions on the parameters s, b and \(b'\), that

$$\begin{aligned} \Vert I(|u|^{2}u)\Vert _{X^{0, b'}}\lesssim \Vert Iu\Vert _{X^{0,b}}^3, \end{aligned}$$
(2.14)

where the implicit constant does not depend on the parameter N appearing in the definition of the operator I.

Now, using the trilinear estimate (2.14) and the linear estimates the proof of this theorem follows exactly as in the proof of Theorem 1.1. So, we omit the details.

We finish this section by recording some known results that will be useful in our work. First, we record the following double mean value theorem (DMVT).

Lemma 2.4

(DMVT) Let \(f\in C^2(\mathbb R)\) and \(\max \{|\eta |,|\lambda |\}\ll |\xi |\). Then

$$\begin{aligned} |f(\xi +\eta +\lambda )-f(\xi +\eta )-f(\xi +\lambda )+f(\xi )|\lesssim |f''(\theta )|\,|\eta |\,|\lambda |, \end{aligned}$$

where \(|\theta | \sim |\xi |\).
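A concrete numerical illustration (with the sample weight \(f(\xi )=(1+\xi ^2)^s\), a stand-in for \(m^2\) chosen only for this example; for large \(|\xi |\) one has \(|f''(\xi )|\sim |f(\xi )|/\xi ^2\)):

```python
import numpy as np

s = -0.2
f = lambda x: (1.0 + x * x) ** s   # sample C^2 weight, decaying like m^2

xi, eta, lam = 1.0e3, 3.0, -2.0    # max(|eta|, |lam|) << |xi|
# the double difference of Lemma 2.4
dd = f(xi + eta + lam) - f(xi + eta) - f(xi + lam) + f(xi)
# |f''(theta)| is comparable to |f(xi)|/xi^2 here, up to a constant
bound = abs(f(xi)) / xi**2 * abs(eta) * abs(lam)
assert abs(dd) <= 10.0 * bound
```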

The following Strichartz-type estimates will also be useful.

Lemma 2.5

For any \(s_1 \ge -\frac{1}{4}\), \(s_2 \ge 0\) and \(b>1/2\), we have

$$\begin{aligned} \Vert u\Vert _{L_x^5 L_t^{10}}&\lesssim \Vert u\Vert _{X^{s_2,b}}, \end{aligned}$$
(2.15)
$$\begin{aligned} \Vert u\Vert _{L_x^{20/3} L_t^5}&\lesssim \Vert u\Vert _{X^{s_1,b}},\end{aligned}$$
(2.16)
$$\begin{aligned} \Vert u\Vert _{L_x^\infty L_t^\infty }&\lesssim \Vert u\Vert _{X^{s_2,b}},\end{aligned}$$
(2.17)
$$\begin{aligned} \Vert u\Vert _{L_x^2 L_t^2}&\lesssim \Vert u\Vert _{X^{0,0}},\end{aligned}$$
(2.18)
$$\begin{aligned} \Vert u\Vert _{L_t^\infty L_x^2 }&\lesssim \Vert u\Vert _{X^{0,b}}. \end{aligned}$$
(2.19)

Proof

The estimates (2.15) and (2.16) follow from

$$\begin{aligned} \Vert U(t)u_0\Vert _{L_x^5 L_t^{10}}\lesssim \Vert u_0\Vert _{L^2} \quad \text {and} \quad \Vert D_x^{\frac{1}{4}}U(t)u_0\Vert _{L_x^{20/3} L_t^{5}}\lesssim \Vert u_0\Vert _{L^2}, \end{aligned}$$

whose proofs can be found in [16]. The estimates (2.17) and (2.19) follow by embedding, and inequality (2.18) is obvious.

Lemma 2.6

Let \(n\ge 2\) be an even integer and \(f_1,\dots ,f_n \in \textbf{S}(\mathbb R)\). Then

$$\begin{aligned} \int _{\xi _1+\cdots +\xi _n=0}\widehat{f_1}(\xi _1)\widehat{\overline{f_2}}(\xi _2)\cdots \widehat{f_{n-1}}(\xi _{n-1})\widehat{\overline{f_{n}}}(\xi _{n})=\int _{\mathbb R}f_1(x)\overline{f_2}(x)\cdots f_{n-1}(x)\overline{f_{n}}(x). \end{aligned}$$

3 Almost Conservation Law

3.1 Modified Energy

Before introducing the modified energy functional, we define the notions of n-multiplier and n-linear functional.

Let \(n\ge 2\) be an even integer. An n-multiplier \(M_n(\xi _1, \dots , \xi _n)\) is a function defined on the hyperplane \(\Gamma _n:= \{(\xi _1, \dots , \xi _n);\;\xi _1+\dots +\xi _n =0\}\), endowed with the Dirac delta \(\delta (\xi _1+\cdots +\xi _n)\) as a measure.

If \(M_n\) is an n-multiplier and \(f_1, \dots , f_n\) are functions on \(\mathbb R\), we define an n-linear functional as

$$\begin{aligned} \Lambda _n(M_n;\; f_1, \dots , f_n):= \int _{\Gamma _n}M_n(\xi _1, \dots , \xi _n)\prod _{j=1}^{n}\widehat{f_j}(\xi _j). \end{aligned}$$
(3.1)

When f is a complex function and \(\Lambda _n\) is applied to n copies of the same function f, we write

$$\begin{aligned} \Lambda _n(M_n)\equiv \Lambda _n(M_n; f):=\Lambda _n(M_n;\; f,\bar{f},f,\bar{f},\dots ,f,\bar{f}). \end{aligned}$$

For \(1\le j\le n\) and \(k\ge 1\), we define the elongation \(\mathbf{{X}}_j^k(M_n)\) of the multiplier \(M_n\) to be the multiplier of order \(n+k\) given by

$$\begin{aligned} \mathbf{{X}}_j^k(M_n)(\xi _1, \cdots , \xi _{n+k}):= M_n(\xi _1,\cdots ,\xi _{j-1}, \xi _j+\cdots +\xi _{j+k}, \xi _{j+k+1}, \cdots , \xi _{n+k}). \end{aligned}$$
(3.2)
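Concretely (a purely illustrative reading of (3.2)), the elongation merges the \(k+1\) consecutive frequencies \(\xi _j,\dots ,\xi _{j+k}\) into slot j of the original multiplier:

```python
def elongation(M, n, j, k):
    """X_j^k(M): the (n+k)-multiplier of (3.2), with 1 <= j <= n."""
    def Mnk(*xi):
        assert len(xi) == n + k
        merged = sum(xi[j - 1:j + k])        # xi_j + ... + xi_{j+k}
        return M(*(xi[:j - 1] + (merged,) + xi[j + k:]))
    return Mnk

# toy 2-multiplier M_2(xi1, xi2) = xi1 * xi2 and its elongation X_1^2(M_2)
M2 = lambda a, b: a * b
X12 = elongation(M2, 2, 1, 2)                # a 4-multiplier
assert X12(1, 2, 3, 4) == (1 + 2 + 3) * 4
```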

Using the Plancherel identity, the energy E(u) defined in (1.5) can be written in terms of the n-linear functional as

$$\begin{aligned} E(u)= \Lambda _2(1). \end{aligned}$$
(3.3)

In what follows we record a lemma that gives the time derivative of the n-linear functional evaluated at a solution u of the e-NLS equation (2.3).

Lemma 3.1

Let u be a solution of the IVP (2.3) and let \(M_n\) be an n-multiplier. Then

$$\begin{aligned} \frac{d}{dt}\Lambda _n(M_n;u) = i\Lambda _n(M_n\gamma _n; u)+i\Lambda _{n+2}\Big (\sum _{j=1}^n\gamma _j^{\beta }\mathbf{{X}}_j^2(M_n); u\Big ), \end{aligned}$$
(3.4)

where \(\gamma _n = \xi _1^3+\cdots +\xi _n^3\), \(\gamma _j^{\beta }=(-1)^{j-1}\beta \) and \(\mathbf{{X}}_j^2(M_n)\) is as defined in (3.2).

Now we introduce the first modified energy

$$\begin{aligned} E^1_I(u):= E(Iu), \end{aligned}$$
(3.5)

where I is the Fourier multiplier operator defined in (2.9) with m given by (2.10). Note that for \(m\equiv 1\), \(E^1_I(u)= \Vert u\Vert _{L^2}^2 = \Vert u_0\Vert _{L^2}^2\).

Using the Plancherel identity, we can write the first modified energy in terms of the n-linear functional as

$$\begin{aligned} \begin{aligned} E^1_I(u)&= \int m(\xi )\widehat{u}(\xi )m(\xi )\bar{\widehat{u}}(\xi )d\xi \\&=\int _{\xi _1+\xi _2=0} m(\xi _1)m(\xi _2)\widehat{u}(\xi _1)\widehat{\bar{u}}(\xi _2)\\&=\Lambda _2(M_2; u), \end{aligned} \end{aligned}$$
(3.6)

where \(M_2=m_1m_2\) with \(m_j =m(\xi _j)\), \(j=1, 2\).

We define the second generation of the modified energy as

$$\begin{aligned} E^2_I(u):= E^1_I(u)+\Lambda _4(M_4;u), \end{aligned}$$
(3.7)

where the multiplier \(M_4\) is to be chosen later.

Now, using the identity (3.4), we get

$$\begin{aligned} \frac{d}{dt} E^2_I(u)&= i\Lambda _2\Big (M_2\gamma _2;u\Big )+i\Lambda _4\Big (\sum _{j=1}^2\gamma _j^{\beta }\mathbf{{X}}_j^2(M_2);u\Big )\\&\quad +i\Lambda _4\Big (M_4\gamma _4;u\Big ) + i\Lambda _6\Big (\sum _{j=1}^4\gamma _j^{\beta }\mathbf{{X}}_j^2(M_4); u\Big ). \end{aligned}$$
(3.8)

Note that \(\Lambda _2\big (M_2\gamma _2;u\big )=0\). If we choose \(M_4\) in such a way that

$$\begin{aligned} M_4\gamma _4+\sum _{j=1}^2\gamma _j^{\beta }\mathbf{{X}}_j^2(M_2)=0, \end{aligned}$$

i.e.,

$$\begin{aligned} M_4(\xi _1, \xi _2, \xi _3, \xi _4) = -\frac{\sum _{j=1}^2\gamma _j^{\beta }\mathbf{{X}}_j^2(M_2)}{\gamma _4}, \end{aligned}$$
(3.9)

then the \(\Lambda _4\)-terms cancel as well.

So, for the choice of \(M_4\) in (3.9), we have

$$\begin{aligned} \frac{d}{dt} E^2_I(u) =i\Lambda _6(M_6), \end{aligned}$$
(3.10)

where

$$\begin{aligned} M_6= \sum _{j=1}^4\gamma _j^{\beta }\mathbf{{X}}_j^2(M_4), \end{aligned}$$
(3.11)

with \(M_4\) given by (3.9).

We recall that on \(\Gamma _n\) (\(n=4,6\)), one has \(\xi _1+\cdots +\xi _n =0\). Let us introduce the notations \(\xi _i+\xi _j=\xi _{ij}\), \(\xi _{ijk} = \xi _i+\xi _j+\xi _k\) and so on.

Using the fact that m is an even function, we can symmetrize the multiplier \(M_4\) given by (3.9), to obtain

$$\begin{aligned} \delta _4\equiv \delta _4(\xi _1, \xi _2, \xi _3, \xi _4):=[M_4]_{\textrm{sym}} = \frac{\beta (m_1^2-m_2^2+m_3^2-m_4^2)}{6\xi _{12}\xi _{13}\xi _{14}}, \end{aligned}$$
(3.12)

where we have used the identity \(\xi _1^3+\xi _2^3+\xi _3^3+\xi _4^3= 3\xi _{12}\xi _{13}\xi _{14}\) on the hyperplane \(\xi _1+\xi _2+\xi _3+\xi _4=0\).
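This algebraic identity on the hyperplane can be confirmed symbolically (an illustrative aside, not part of the argument):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('xi1 xi2 xi3', real=True)
x4 = -(x1 + x2 + x3)         # enforce the hyperplane xi1+xi2+xi3+xi4 = 0

lhs = x1**3 + x2**3 + x3**3 + x4**3
rhs = 3 * (x1 + x2) * (x1 + x3) * (x1 + x4)
assert sp.expand(lhs - rhs) == 0
```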

Using the multiplier \([M_4]_{\textrm{sym}}\) given by (3.12) in (3.11) we obtain \([M_6]_{\textrm{sym}}\) in the symmetric form as follows

$$\begin{aligned} \begin{aligned}&\delta _6\equiv \delta _6(\xi _1, \xi _2, \xi _3, \xi _4, \xi _5, \xi _6):=[M_6]_{\textrm{sym}}\\&= \frac{\beta }{36}\sum _{\begin{array}{c} \{k,m,o\}=\{1,3,5\}\\ \{l,n,p\}=\{2,4,6\} \end{array}}\left[ \delta _4(\xi _{klm}, \xi _n, \xi _o, \xi _p)- \delta _4(\xi _k,\xi _{lmn}, \xi _o, \xi _p)+\delta _4(\xi _k,\xi _l,\xi _{mno}, \xi _p)-\delta _4(\xi _k, \xi _l, \xi _m, \xi _{nop})\right] . \end{aligned} \end{aligned}$$
(3.13)

Remark 3.2

In the case \(k=1\), \(l=2\), \(m=3\), \(n=4\), \(o=5\), \(p=6\), the corresponding summand of the symmetric multiplier \([M_6]_{\textrm{sym}}\) takes the following extended form:

$$\begin{aligned} \begin{aligned}&\frac{\beta ^2}{36}\Big [-\frac{m^2(\xi _{123})-m^2(\xi _4)+ m^2(\xi _5)-m^2(\xi _6)}{\xi _{56}\xi _{46}\xi _{45}}+\frac{m^2(\xi _1)-m^2(\xi _{234})+m^2(\xi _5)-m^2(\xi _6)}{\xi _{56}\xi _{15}\xi _{16}}\\&\qquad -\frac{m^2(\xi _1)-m^2(\xi _2)+m^2(\xi _{345})-m^2(\xi _{6})}{\xi _{12}\xi _{26}\xi _{16}} +\frac{m^2(\xi _1)-m^2(\xi _2)+m^2(\xi _3)-m^2(\xi _{456})}{\xi _{12}\xi _{13}\xi _{23}}\Big ]. \end{aligned} \end{aligned}$$

However, for our purpose, the expression of \(\delta _6\) given by (3.13) in terms of \(\delta _4\) is enough to obtain the required estimates, see Proposition 3.3 below.

3.2 Multilinear Estimates

In this subsection we derive some multilinear estimates associated to the symmetric multipliers \(\delta _4\) and \(\delta _6\), and use them to get some local estimates in Bourgain's space that will be useful to obtain an almost conserved quantity.

From here onwards we use the notation \(|\xi _i|=N_i\), \(m(N_i)=m_i\). Given four numbers \(N_1, N_2, N_3, N_4\), set \(\mathcal {C}=\{N_1, N_2, N_3, N_4\}\) and denote \(N_s=\max \mathcal {C}\), \(N_a=\max \mathcal {C}\setminus \{N_s\}\), \(N_t=\max \mathcal {C}\setminus \{N_s, N_a\}\), \(N_b=\min \mathcal {C}\), so that

$$\begin{aligned} N_s \ge N_a\ge N_t \ge N_b. \end{aligned}$$

Proposition 3.3

Let m be as defined in (2.10).

1) If \(|\xi _{1j}| \gtrsim N_s\) for all \(j=2,3,4\) and \(N_b\ll N_s\), then

$$\begin{aligned} |\delta _4| \sim \dfrac{m^2 (N_b)}{N_s^3}. \end{aligned}$$
(3.14)

2) If \(|\xi _{1j}| \gtrsim N_s\) for all \(j=3,4\) and \(|\xi _{12}|\ll N_s\), then

$$\begin{aligned} |\delta _4| \lesssim \dfrac{m^2 (N_b)}{\max \{N_t,N\} \,N_s^2}. \end{aligned}$$
(3.15)

3) If \(|\xi _{1j}| \ll N_s\) for \(j=2,3\), \(a\ge 0\), \(b\ge 0\), \(a+b=1\), then

$$\begin{aligned} |\delta _4| \lesssim \dfrac{m^2 (N_s)}{N_s^2|\xi _{12} |^{a}|\xi _{13} |^{b}}. \end{aligned}$$
(3.16)

4) In the other cases, we have

$$\begin{aligned} |\delta _4| \lesssim \dfrac{m^2 (N_s)}{N_s^3}. \end{aligned}$$
(3.17)
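Before turning to the proof, the first bound (3.14) can be spot-checked numerically in a representative frequency configuration (\(\xi _1=\xi _2=K\), \(\xi _3=-2K+\epsilon \), \(\xi _4=-\epsilon \), so that all \(|\xi _{1j}|\gtrsim N_s\) and \(N_b=\epsilon \ll N_s\)); the piecewise stand-in for m below ignores the smooth matching region of (2.10), so this is illustrative only:

```python
def m2(xi, N, s):
    # simplified stand-in for m(xi)^2 from (2.10): identity below N,
    # (N^{-s}|xi|^s)^2 above (the smooth matching region is ignored)
    xi = abs(xi)
    return 1.0 if xi < N else (N ** (-s) * xi ** s) ** 2

N, s, beta = 32.0, -0.2, 1.0
for K in [1.0e3, 1.0e4, 1.0e5]:
    eps = 1.0
    x1, x2, x3, x4 = K, K, -2.0 * K + eps, -eps
    # delta_4 as in (3.12)
    num = beta * (m2(x1, N, s) - m2(x2, N, s) + m2(x3, N, s) - m2(x4, N, s))
    delta4 = num / (6.0 * (x1 + x2) * (x1 + x3) * (x1 + x4))
    Ns = max(abs(x1), abs(x2), abs(x3), abs(x4))
    ratio = abs(delta4) / (m2(eps, N, s) / Ns**3)
    assert 0.01 < ratio < 100.0   # consistent with |delta_4| ~ m^2(N_b)/N_s^3
```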

Proof

Let \(f(\xi ):=m^2(\xi )\); it is an even function, nonincreasing in \(|\xi |\). From the definition of \(m(\xi )\), we have \(|f'(\xi ) |\sim \frac{m^2(\xi )}{|\xi |}\) for \(|\xi |>N\). Without loss of generality we can assume \(N_s=|\xi _1|\) and \(N_a=|\xi _2|\). As \(N_s=|\xi _2+\xi _3+\xi _4|\), we have \(N_a \sim N_s\). Also, by symmetry, we can assume \(|\xi _{12}|\le |\xi _{14}|\).

By the definition of \(\delta _4\), if \(N_s\le N\) then \(\delta _4=0\). Thus, from now on, throughout the proof, we will consider \(N_s>N\). Depending on the frequency regimes, we divide the proof into two cases, viz., \(|\xi _{13}| \gtrsim N_s\) and \(|\xi _{14}| \gtrsim N_s\); and \(|\xi _{14}| \ll N_s\) or \(|\xi _{13}| \ll N_s\).

Case A. \(|\xi _{13}| \gtrsim N_s\) and \(|\xi _{14}| \gtrsim N_s\): We further divide this case into two sub-cases.

Sub-case A1. \(|\xi _{12}| \ll N_s\): Using the standard Mean Value Theorem, we have

$$\begin{aligned} |m^2 (\xi _1)-m^2 (\xi _2)|= |f (\xi _1)-f (-\xi _2)| = |f'(\xi _{\theta _1})|\,|\xi _{12}| \end{aligned}$$
(3.18)

where \(\xi _{\theta _1}= \xi _1-\theta _1\xi _{12}\) with \(\theta _1\in (0,1)\).

Since \(|\xi _{12}|\ll N_s\) we have \(|\xi _{\theta _1}|\sim |\xi _1|\sim N_s\) and consequently \(|f'(\xi _{\theta _1})|\sim \dfrac{m^2 (N_s)}{N_s} \). Using this in (3.18), we obtain

$$\begin{aligned} \dfrac{|m^2 (\xi _1)-m^2 (\xi _2)|}{|\xi _{12}| |\xi _{13}| |\xi _{14}|}\lesssim \dfrac{m^2 (N_s)}{N_s^3}. \end{aligned}$$
(3.19)

Now, we estimate \(|m^2(\xi _3)-m^2(\xi _4)|\). First note that if \(N_t \le N\), then \(|m^2(\xi _3)-m^2(\xi _4)|=0\). Thus we will assume that \(|\xi _3|=N_t > N\). We divide into two cases.

Case 1. \(|\xi _{34}|\ll N_t\): Using the Mean Value Theorem, we get

$$\begin{aligned} |m^2 (\xi _3)-m^2 (\xi _4)|= |f (\xi _3)-f (-\xi _4)|=|f'(\xi _{\theta _2})|\,|\xi _{34}|, \end{aligned}$$
(3.20)

where \(\xi _{\theta _2}= \xi _3-\theta _2\xi _{34}\) with \(\theta _2\in (0,1)\). Since \(|\xi _{34}|\ll N_t\) we have \(|\xi _{\theta _2}|\sim |\xi _3|\sim N_t\) and consequently \(|f'(\xi _{\theta _2})|\sim \dfrac{m^2 (N_t)}{N_t} \). Using this in (3.20), we obtain

$$\begin{aligned} \dfrac{|m^2 (\xi _3)-m^2 (\xi _4)|}{{|\xi _{12}| |\xi _{13}| |\xi _{14}|}} \lesssim \dfrac{m^2 (N_t)}{N_tN_s^2}. \end{aligned}$$
(3.21)

Case 2. \(|\xi _{34}|\gtrsim N_t\): In this case, using the triangle inequality and the fact that the function \(f(\xi )=m^2(\xi )\) is nonincreasing in \(|\xi |\), we obtain from the definition of \(\delta _4\) that

$$\begin{aligned} \dfrac{|m^2(\xi _3)-m^2(\xi _4)|}{|\xi _{12}|\,|\xi _{13}|\,|\xi _{14}|\,}\lesssim \dfrac{m^2 (N_b)}{N_t\,N_s^2}. \end{aligned}$$
(3.22)

Now, combining (3.19), (3.21) and (3.22), we obtain from the definition of \(\delta _4\) in (3.12) that

$$\begin{aligned} |\delta _4| \sim \dfrac{|f(\xi _1)-f(\xi _2)+f(\xi _3)-f(\xi _4)|}{|\xi _{12}|\,|\xi _{13}|\,|\xi _{14}|\,}\lesssim \dfrac{m^2 (N_b)}{\max \{N_t,N\}\,N_s^2}. \end{aligned}$$

Sub-case A2. \(|\xi _{12}| \gtrsim N_s\): Here also, we divide into two different sub-cases.

Sub-case A21. \(N_b \gtrsim N_s\): In this case we have \(N_b\sim N_t\sim N_a\sim N_s\). Without loss of generality we can assume \(\xi _1 > 0\). Since \(\xi _1+\cdots +\xi _4=0\), the two largest frequencies must have opposite signs, i.e., \(\xi _2<0\). Indeed, suppose \(\xi _2 \ge 0\). Then we have \(\xi _{1}+\xi _{2}=:M>N_s\) and \(\xi _{3}+\xi _{4}=-M<-N_s<0\). In this situation one has \(\xi _3 \xi _4>0\), otherwise

$$\begin{aligned} \xi _{3}^2+\xi _{4}^2=M^2-2\xi _3 \xi _4\ge M^2>\xi _{1}^2+\xi _{2}^2, \end{aligned}$$

which is a contradiction. As \(\xi _3+\xi _4<0\), we conclude that \(\xi _{3}<0\) and \(\xi _{4}<0\). Now, the frequency ordering \(|\xi _2|\ge |\xi _3|\) implies

$$\begin{aligned} \xi _2=M-\xi _1\ge |\xi _3|=-\xi _3 = M+\xi _4, \end{aligned}$$

and consequently \(\xi _{14} \le 0\). On the other hand, \(|\xi _1|\ge |\xi _4| \implies \xi _1\ge -\xi _4 \implies \xi _{14}\ge 0\). Therefore, we get \(\xi _{14}=0\) contradicting the hypothesis \(|\xi _{14}| \gtrsim N_s\) of this case.

Now, for \(\xi _1>0\) and \(\xi _2 < 0\), we have

$$\begin{aligned} |m^2 (\xi _1)-m^2 (\xi _2)|= |f (\xi _1)-f (-\xi _2)|=|f'(\xi _\theta )|\,|\xi _{12}|, \end{aligned}$$
(3.23)

where \( \xi _1 \ge \xi _\theta \ge -\xi _2\), so that \(\xi _\theta \sim N_s\) and consequently \(|f'(\xi _\theta )|\sim \dfrac{m^2(N_s)}{N_s}\). Using this in (3.23), we get

$$\begin{aligned} |m^2 (\xi _1)-m^2 (\xi _2)|\lesssim m^2 (N_s). \end{aligned}$$
(3.24)

Similarly, one can also obtain

$$\begin{aligned} |m^2 (\xi _3)-m^2 (\xi _4)| \lesssim m^2 (N_s). \end{aligned}$$
(3.25)

Thus, taking into consideration (3.24) and (3.25), from the definition of \(\delta _4\), we get

$$\begin{aligned} |\delta _4| \lesssim \dfrac{m^2 (N_s)}{N_s^3}. \end{aligned}$$

Sub-case A22. \(N_b \ll N_s\): Without loss of generality we can assume \(|\xi _4|=N_b\). In this case \(|\xi _3|=|\xi _{12}+\xi _4|\sim |\xi _{12}|\sim N_s\sim |\xi _1| \sim |\xi _2|\). It follows that

$$\begin{aligned} |m^2 (\xi _1)-m^2 (\xi _2)+m^2 (\xi _3)-m^2 (\xi _4)|\sim |m^2 (\xi _4)|= |m^2 (N_b)|. \end{aligned}$$

Therefore in this case

$$\begin{aligned} |\delta _4| \sim \dfrac{m^2 (N_b)}{N_s^3}. \end{aligned}$$

Case B. \(|\xi _{14}| \ll N_s\) or \(|\xi _{13}| \ll N_s\): We divide into two sub-cases.

Sub-case B1. \(|\xi _{14}| \ll N_s\): We proceed by considering two different sub-cases.

Sub-case B11. \(|\xi _{13}|\gtrsim N_s\): In this case we necessarily have \(|\xi _{12}| \ll N_s\). Indeed, if \(|\xi _{12}| \gtrsim N_s\), then, by the assumption \(|\xi _{12}|\le |\xi _{14}|\) made at the beginning of the proof, we would get

$$\begin{aligned} |\xi _{14}|\gtrsim |\xi _{12}|\gtrsim N_s, \end{aligned}$$

but this contradicts the defining condition \(|\xi _{14}| \ll N_s\) of Case B1.

Now, for \(|\xi _{12}| \ll N_s\), using the Double Mean Value Theorem (Lemma 2.4) with \(\xi :=-\xi _1\), \(\eta :=\xi _{12}\) and \(\lambda :=\xi _{14}\), we have

$$\begin{aligned} \begin{aligned} |f (\xi +\lambda +\eta )-f (\xi +\eta )-f (\xi +\lambda )+f(\xi )|&\lesssim |f''(\xi _\theta )|\, |\xi _{12}|\, |\xi _{14}|\\&\lesssim \dfrac{m^2 (N_s)|\xi _{12}|\,|\xi _{14}|}{N_s^2}. \end{aligned} \end{aligned}$$

Hence,

$$\begin{aligned} |\delta _4| \lesssim \dfrac{m^2 (N_s)|\xi _{12}|\,|\xi _{14}|}{N_s^2}\dfrac{1}{|\xi _{13}|\,|\xi _{12}|\,|\xi _{14}|}\sim \dfrac{m^2 (N_s)}{N_s^3}. \end{aligned}$$

Sub-case B12. \(|\xi _{13}|\ll N_s\): Without loss of generality we can assume \(\xi _1\ge 0\). Recall that in this sub-case \(|\xi _{12}|\le |\xi _{14}| \ll N_s\). As \(N_s=\xi _1\), we have

$$\begin{aligned} {\left\{ \begin{array}{ll} |\xi _{12}|\ll N_s &{}\Longrightarrow \,\,\xi _2<0\quad \text {and} \quad |\xi _2|\sim N_s,\\ |\xi _{13}|\ll N_s &{}\Longrightarrow \,\,\xi _3<0\quad \text {and} \quad |\xi _3|\sim N_s,\\ |\xi _{14}|\ll N_s &{}\Longrightarrow \,\,\xi _4 <0\quad \text {and} \quad |\xi _4|\sim N_s. \end{array}\right. } \end{aligned}$$

Combining these facts, we get

$$\begin{aligned} N_s\gg |\xi _{13}|=|\xi _{24}|=|\xi _2|+|\xi _4|\sim N_s, \end{aligned}$$

which is a contradiction. Consequently this case is not possible.

Sub-case B2. \(|\xi _{13}| \ll N_s\): Taking Sub-case B1 into account, we may assume that \(|\xi _{14}| \gtrsim N_s\). Here too, we analyse two different sub-cases.

Sub-case B21. \(|\xi _{12}|\ll N_s\): In this case we have \(|\xi _1| \sim |\xi _2|\sim |\xi _3| \sim N_s\). Furthermore \( |\xi _4|=|\xi _{12}+ \xi _3|\sim N_s \). Hence

$$\begin{aligned} |\xi _1|\sim |\xi _2|\sim |\xi _3|\sim |\xi _4|\sim N_s. \end{aligned}$$

Observe that \(N_a=|\xi _2|\ge |\xi _3|\) implies \(|\xi _{12}|\le |\xi _{13}|\). In fact, if \(\xi _1 > 0\), then

$$\begin{aligned} {\left\{ \begin{array}{ll} |\xi _{12}|\ll N_s &{}\Longrightarrow \,\,\xi _2<0,\\ |\xi _{13}|\ll N_s &{}\Longrightarrow \,\,\xi _3 <0, \end{array}\right. } \end{aligned}$$

and it follows that \(\xi _{13}\ge \xi _{12}\ge 0\).

If \(\xi _1 < 0\), then

$$\begin{aligned} {\left\{ \begin{array}{ll} |\xi _{12}|\ll N_s &{}\Longrightarrow \,\,\xi _2>0,\\ |\xi _{13}|\ll N_s &{}\Longrightarrow \,\,\xi _3 >0, \end{array}\right. } \end{aligned}$$

and it follows that \(0\ge \xi _{12}\ge \xi _{13}\). Hence \(|\xi _{12}|\le |\xi _{13}|\).

On the other hand, using the Mean Value Theorem, we obtain

$$\begin{aligned} \begin{aligned} |f(\xi _{1})-f(\xi _{2})+f(\xi _{3})-f(\xi _{4})|&=|f(\xi _{1})-f(-\xi _{2})+f(\xi _{3})-f(-\xi _{4})| \\&=|\xi _{12} f'(-\xi _2+\theta _1 \xi _{12})+\xi _{34} f'(-\xi _4+\theta _2 \xi _{34})|\\&=|\xi _{12} | \,|f'(-\xi _2+\theta _1 \xi _{12})-f'(-\xi _4+\theta _2 \xi _{34})|\\&\lesssim |\xi _{12} |\,|f'(N_s)|\\&\lesssim |\xi _{12} |\,\dfrac{m^2(N_s)}{N_s}, \end{aligned} \end{aligned}$$

where \(|\theta _j|\le 1\), \(j=1,2\). From this we deduce

$$\begin{aligned} |\delta _4| \lesssim \dfrac{|\xi _{12} |\, m^2 (N_s)}{N_s}\cdot \dfrac{1}{|\xi _{12} |\, |\xi _{13}|\, |\xi _{14}|}\le \dfrac{ m^2 (N_s)}{N_s^2|\xi _{12} |^{a}|\xi _{13} |^{b}}. \end{aligned}$$

Sub-case B22. \(|\xi _{12}|\gtrsim N_s\): As \(N_a=|\xi _2|\sim |\xi _1|=N_s\sim |\xi _3|\), one has \(N_s\sim |\xi _2|=|\xi _{13}+\xi _4| \). Thus \(|\xi _4| \sim N_s\) and \(|\xi _j| \sim N_s\), \(j=1,2,3,4\). Also

$$\begin{aligned} |\xi _{24}|=|\xi _{13}|\ll N_s \Longrightarrow \,\, \xi _3\,\xi _1<0\quad \text {and} \quad \xi _2\,\xi _4 <0. \end{aligned}$$
(3.26)

Let

$$\begin{aligned} \epsilon :=\xi _{13}=-\xi _{24}. \end{aligned}$$
(3.27)

We consider the following cases.

Case 1. \(\epsilon >0\): In this case if \(\xi _1<0\), then \(\xi _3=\epsilon +|\xi _1|> |\xi _1|\) which is a contradiction because \(|\xi _3|\le |\xi _1|\). Similarly by (3.26) and (3.27) if \(\xi _2>0\), then \(|\xi _4|=\epsilon +|\xi _2|> |\xi _2|\) which is a contradiction. Therefore we can assume \(\xi _1>0\) and \(\xi _2<0\) and by (3.26) \(\xi _3<0\) and \(\xi _4>0\). One has that

$$\begin{aligned} \xi _1\ge -\xi _2\ge -\xi _3\ge \xi _4>0, \end{aligned}$$

and using (3.27)

$$\begin{aligned} \xi _1\ge \xi _4+\epsilon \ge \xi _1-\epsilon \ge \xi _4>0. \end{aligned}$$
(3.28)

Let \(b:=\xi _4-N_s\), using (3.28), we have

$$\begin{aligned} N_s\ge N_s+b+\epsilon \ge N_s-\epsilon \ge N_s+b>0, \end{aligned}$$

which implies that \(-2\epsilon \le b\le -\epsilon \). Consequently, by (3.27), \(\xi _2=-N_s-b-\epsilon \), and therefore, using the defining condition of Sub-case B22,

$$\begin{aligned} N_s \lesssim \xi _{12}=-b-\epsilon , \end{aligned}$$

which is a contradiction, since \(-2\epsilon \le b\le -\epsilon \) gives \(-b-\epsilon \le \epsilon \ll N_s\). So, this case is not possible.
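The algebra of Case 1 above can be double-checked numerically. The following sketch (an illustrative sanity check, not part of the proof) fixes hypothetical sample values of \(N_s\), \(\epsilon \) and \(b\) with \(-2\epsilon \le b\le -\epsilon \) and verifies the stated identities and the contradiction \(\xi _{12}\le \epsilon \ll N_s\).

```python
from fractions import Fraction

# Sample values for Case 1: Ns >> eps > 0 and -2*eps <= b <= -eps.
Ns, eps, b = Fraction(1000), Fraction(3), Fraction(-4)

xi1 = Ns                 # xi_1 = N_s > 0
xi3 = eps - xi1          # from xi_13 = xi_1 + xi_3 = eps, i.e. (3.27)
xi4 = Ns + b             # from the definition b = xi_4 - N_s
xi2 = -eps - xi4         # from xi_24 = xi_2 + xi_4 = -eps, i.e. (3.27)

assert xi1 + xi2 + xi3 + xi4 == 0       # convolution constraint
assert xi1 >= -xi2 >= -xi3 >= xi4 > 0   # the ordering preceding (3.28)
assert xi2 == -Ns - b - eps             # the formula for xi_2 in the text
assert xi1 + xi2 == -b - eps            # hence xi_12 = -b - eps
assert 0 <= -b - eps <= eps < Ns        # so xi_12 <= eps << N_s, contradicting xi_12 >~ N_s
print("Case 1 algebra verified")
```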

Case 2. \(\epsilon <0\): Arguing as above, if \(\xi _1>0\), then \(|\xi _3|=|\epsilon | +\xi _1>|\xi _1|\), which is a contradiction. Similarly, by (3.26), if \(\xi _2<0\), then \(\xi _4=|\epsilon | +|\xi _2|> |\xi _2|\), which is a contradiction. Therefore we can assume \(\xi _1<0\) and \(\xi _2>0\), and by (3.26), \(\xi _3>0\) and \(\xi _4<0\). Using (3.27) one has that

$$\begin{aligned} -\xi _1\ge |\epsilon |-\xi _4\ge N_s-|\epsilon | \ge -\xi _4>0. \end{aligned}$$
(3.29)

Let \(b:=\xi _4+N_s\), using (3.29), we have

$$\begin{aligned} N_s\ge N_s-b+|\epsilon |\ge N_s-|\epsilon | \ge N_s-b>0, \end{aligned}$$

which implies that \(2|\epsilon |\ge b\ge |\epsilon |\). Consequently \(\xi _2=N_s-b+|\epsilon |\) and

$$\begin{aligned} N_s \lesssim \xi _{12}=|\epsilon |-b, \end{aligned}$$

which is a contradiction, since \(|\epsilon |\le b\) gives \(|\epsilon |-b\le 0\). Therefore this case is also impossible.

Combining all the cases, we finish the proof of the proposition.

Remark 3.4

Let \(0<\epsilon \ll N_s\). An example for the Sub-case A1 is

$$\begin{aligned} \xi _1=N_s, \quad \xi _2=-N_s+\epsilon , \quad \xi _3=-\dfrac{\epsilon }{2}, \quad \xi _4=-\dfrac{\epsilon }{2}, \end{aligned}$$

another example is

$$\begin{aligned} \xi _1=N_s, \quad \xi _2=-N_s+\epsilon , \quad \xi _3=\dfrac{N_s}{2}-\dfrac{\epsilon }{2}, \quad \xi _4=-\dfrac{N_s}{2}-\dfrac{\epsilon }{2}. \end{aligned}$$

An example for the Sub-case A21 with \(\xi _1 \ge 0\) and \(\xi _2 \le 0\) is

$$\begin{aligned} \xi _1=N_s, \quad \xi _2=-\dfrac{N_s}{2}, \quad \xi _3=-\dfrac{N_s}{4}, \quad \xi _4=-\dfrac{N_s}{4}. \end{aligned}$$

An example for the Sub-case A22 is

$$\begin{aligned} \xi _1=N_s, \quad \xi _2=-\dfrac{N_s}{2}-\epsilon , \quad \xi _3=-\dfrac{N_s}{2}, \quad \xi _4=\epsilon . \end{aligned}$$

An example for the Sub-case B21 is

$$\begin{aligned} \xi _1=N_s, \quad \xi _2=-N_s+\dfrac{\epsilon }{2}, \quad \xi _3=-N_s+\dfrac{\epsilon }{2}, \quad \xi _4=N_s-\epsilon . \end{aligned}$$
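The examples above can be verified directly. The following sketch (illustrative only, with hypothetical sample values \(N_s=1024\) and \(\epsilon =1/16\)) confirms that each quadruple satisfies the convolution constraint \(\xi _1+\cdots +\xi _4=0\) and that \(|\xi _1|=N_s\) is the largest frequency.

```python
from fractions import Fraction

Ns, eps = Fraction(1024), Fraction(1, 16)  # sample values with 0 < eps << Ns

# The five example quadruples from Remark 3.4.
examples = {
    "A1 (first)":  (Ns, -Ns + eps, -eps/2, -eps/2),
    "A1 (second)": (Ns, -Ns + eps, Ns/2 - eps/2, -Ns/2 - eps/2),
    "A21":         (Ns, -Ns/2, -Ns/4, -Ns/4),
    "A22":         (Ns, -Ns/2 - eps, -Ns/2, eps),
    "B21":         (Ns, -Ns + eps/2, -Ns + eps/2, Ns - eps),
}

for name, (x1, x2, x3, x4) in examples.items():
    assert x1 + x2 + x3 + x4 == 0, name                        # frequencies sum to zero
    assert x1 == max(abs(x) for x in (x1, x2, x3, x4)), name   # N_s = |xi_1| is the largest
print("all examples satisfy the constraints")
```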

Proposition 3.5

Let \(u\in \textbf{S}(\mathbb R\times \mathbb R)\), \(0>s>-\frac{1}{4}\) and \(b>\frac{1}{2}\); then we have

$$\begin{aligned} \left| \Lambda _4(\delta _4; u(t) ) \right| \lesssim \dfrac{1}{N^{(\frac{5}{4}-3s)}}\,\Vert Iu\Vert ^4_{L^2}, \end{aligned}$$
(3.30)

and

$$\begin{aligned} \left| \int _0^\delta \Lambda _6(\delta _6; u(t) ) dt\right| \lesssim N^{-\frac{7}{4}}\Vert Iu\Vert ^6_{X_\delta ^{0,b}}. \end{aligned}$$
(3.31)

Proof

To prove (3.30), borrowing ideas from [9, 10], we first perform a Littlewood-Paley decomposition of the four factors u in \(\delta _4\) so that the \(\xi _j\) are essentially constants \(N_j\), \(j=1,2,3,4\). To recover the sum at the end we borrow a factor \(N_s^{-\epsilon }\) from the large denominator \(N_s\); often this will not be mentioned. Also, without loss of generality, we can suppose that the Fourier transforms involved in the multipliers are all positive.

Recall that for \(N_s\le N\) one has \(m(\xi _j)=1\) for all \(j=1,2,3,4\), and consequently the multiplier \(\delta _4\) vanishes. Therefore, we will consider \(N_s> N\).

In view of the estimates obtained in Proposition 3.3, we divide the proof of (3.30) in two different parts.

First part: Cases 1), 2) and 4) of Proposition 3.3. We observe that \(N_s^{\frac{1}{4}}m_s \gtrsim N^{-s}\). In fact, if \(N_s\in [N,2N]\), then \(m_s \sim 1\) and \(N_s^{\frac{1}{4}}m_s \gtrsim N_s^{-s}\gtrsim N^{-s}\). If \(N_s>2N\), then from the definition of m and the fact that \(s>-\frac{1}{4}\), we arrive at \(N_s^{\frac{1}{4}}m_s =N_s^{\frac{1}{4}}\dfrac{N^{-s}}{N_s^{-s}}=N_s^{\frac{1}{4}+s}N^{-s}\gtrsim N^{-s}\). Furthermore, we observe that \(\frac{1}{\max \{N_t,\, N\}} \le \frac{1}{N}\). Thus

$$\begin{aligned} \begin{aligned} \left| \Lambda _4(\delta _4; u(t) ) \right|&=\left| \int _{\xi _1+\cdots +\xi _4=0}\delta _4(\xi _1, \dots , \xi _4)\widehat{u_1}(\xi _1)\cdots \widehat{\overline{u_4}}(\xi _4)\right| \\&\lesssim \int _{\xi _1+\cdots +\xi _4=0}\dfrac{m^2(N_b)}{N\,N_s^2}\dfrac{\widehat{Iu_1}(\xi _1)\cdots \widehat{I\overline{u_4}}(\xi _4)}{m_1\cdots m_4}\\&\lesssim \int _{\xi _1+\cdots +\xi _4=0}\dfrac{N_s}{N\,N_s^2m_s^3}\widehat{D_x^{-\frac{1}{4}}Iu_1}(\xi _1)\cdots \widehat{D_x^{-\frac{1}{4}}I\overline{u_4}}(\xi _4)\\&\lesssim \int _{\xi _1+\cdots +\xi _4=0}\dfrac{1}{N\,N_s^{\frac{1}{4}-3s}}\widehat{D_x^{-\frac{1}{4}}Iu_1}(\xi _1)\cdots \widehat{D_x^{-\frac{1}{4}}I\overline{u_4}}(\xi _4)\\&\lesssim \dfrac{1}{N\,N_s^{\frac{1}{4}-3s}}\Vert D_x^{-1/4} Iu\Vert ^4_{L^4}\\&\lesssim \dfrac{1}{N^{(\frac{5}{4}-3s)}}\Vert Iu\Vert ^4_{L^2}, \end{aligned} \end{aligned}$$
(3.32)

where in the fourth line we used the following estimate

$$\begin{aligned} \dfrac{N_s}{N_s^2 m_s^3}=\dfrac{1}{N_s^{\frac{1}{4}}(N_s^{\frac{1}{4}}m_s)^3}\lesssim \dfrac{1}{N_s^{\frac{1}{4}-3s}}. \end{aligned}$$

Second part: Case 3) of Proposition 3.3. Recall from the first part that \(N_s^{\frac{1}{4}}m_s \gtrsim N^{-s}\). Using (3.16) with \(a=1\) and \(b=0\), and recalling the fact that \(|\xi _{12}|= |\xi _{34}|\), we get

$$\begin{aligned} \begin{aligned} \left| \Lambda _4(\delta _4; u(t) ) \right|&=\left| \int _{\xi _1+\cdots +\xi _4=0}\delta _4(\xi _1, \dots , \xi _4)\widehat{u_1}(\xi _1)\cdots \widehat{\overline{u_4}}(\xi _4)\right| \\&=\left| \int _{\xi _1+\cdots +\xi _4=0}\delta _4(\xi _1, \dots , \xi _4)\dfrac{\widehat{Iu_1}(\xi _1)\cdots \widehat{\overline{Iu_4}}(\xi _4)}{m_1\cdots m_4}\right| \\&\lesssim \int _{\xi _1+\cdots +\xi _4=0}\dfrac{1}{N_s^2 m_s^2}|\xi _{12}|^{-\frac{1}{2}}\widehat{Iu_1}(\xi _1)\widehat{\overline{Iu_2}}(\xi _2)\,|\xi _{34}|^{-\frac{1}{2}}\widehat{Iu_3}(\xi _3)\widehat{\overline{Iu_4}}(\xi _4) \\&\lesssim \int _{\mathbb R}\dfrac{1}{N_s^{3/2-2s}}D_x^{-\frac{1}{2}}(Iu_1Iu_2)\,D_x^{-\frac{1}{2}}(Iu_3Iu_4)\\&\lesssim \dfrac{1}{N_s^{\frac{3}{2}-2s}}\Vert D_x^{-\frac{1}{2}}(Iu_1Iu_2)\Vert _{L^2}\Vert D_x^{-\frac{1}{2}}(Iu_3Iu_4)\Vert _{L^2}, \end{aligned} \end{aligned}$$
(3.33)

where in the second last line we used

$$\begin{aligned} \dfrac{1}{N_s^2 m_s^2}=\dfrac{1}{N_s^{\frac{3}{2}}(N_s^{\frac{1}{4}}m_s)^2}\lesssim \dfrac{1}{N_s^{3/2-2s}}. \end{aligned}$$

Now, applying the Hardy-Littlewood-Sobolev inequality, we obtain from (3.33) that

$$\begin{aligned} \begin{aligned} \left| \Lambda _4(\delta _4; u(t) ) \right|&\lesssim \dfrac{1}{N^{(\frac{3}{2}-2s)}} \Vert Iu_1Iu_2\Vert _{L^1}\Vert Iu_3Iu_4\Vert _{L^1}\\&\lesssim \dfrac{1}{N^{(\frac{3}{2}-2s)}}\Vert Iu\Vert ^4_{L^2}. \end{aligned} \end{aligned}$$
(3.34)

Observe that the condition \(s>-\frac{1}{4}\) implies that \(\frac{5}{4}-3s<\frac{3}{2}-2s\), so the bound in (3.34) is dominated by that in (3.30), and this completes the proof of (3.30).
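The comparison of the two decay exponents is elementary and can be checked with exact rational arithmetic; the sketch below (illustrative only) verifies that the gap is exactly \(\frac{1}{4}+s\), positive precisely when \(s>-\frac{1}{4}\).

```python
from fractions import Fraction

def gap(s):
    """(3/2 - 2s) - (5/4 - 3s): the gap between the exponents in (3.34) and (3.32)."""
    return (Fraction(3, 2) - 2*s) - (Fraction(5, 4) - 3*s)

for s in [Fraction(-1, 5), Fraction(-1, 8), Fraction(-1, 100)]:
    assert gap(s) == Fraction(1, 4) + s   # the gap is exactly 1/4 + s
    assert gap(s) > 0                     # positive whenever s > -1/4

assert gap(Fraction(-1, 4)) == 0          # the two exponents coincide at s = -1/4
print("5/4 - 3s < 3/2 - 2s for all s > -1/4")
```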

Now we move on to prove (3.31). As in the proof of (3.30), we first perform a Littlewood-Paley decomposition of the six factors u in \(\delta _6\) so that the \(\xi _j\) are essentially constants \(N_j\), \(j=1, \dots , 6\). Recall that for \(N_s\le N\) one has \(m(\xi _j)=1\) for all \(j=1,\dots ,6\), and consequently the multiplier \(\delta _6\) vanishes. Therefore, we will consider \(N_s> N\). Since \(N_a\sim N_s>N\), it follows that

$$\begin{aligned} m_sN_s\gtrsim N\quad \text {and}\quad m_aN_a\gtrsim N. \end{aligned}$$

Without loss of generality we will consider only the term \(\delta _4(\xi _{123}, \xi _4, \xi _5, \xi _6)\) in the symmetrization of \(\delta _6(\xi _1, \dots , \xi _6)\), see (3.13). The estimates for the other terms are similar.

Here also, we divide the proof of (3.31) into two parts.

First part: Cases 1), 2) and 4) of Proposition 3.3. In these cases, we have

$$\begin{aligned} \begin{aligned} J&:=\left| \int _0^\delta \Lambda _6(\delta _6; u(t) ) \right| = \left| \int _0^\delta \int _{\xi _1+\cdots +\xi _6=0}\delta _4(\xi _{123}, \xi _4, \xi _5, \xi _6)\widehat{u_1}(\xi _1)\cdots \widehat{\overline{u_6}}(\xi _6)\right| \\&\lesssim \int _0^\delta \int _{\xi _1+\cdots +\xi _6=0}\dfrac{m_b^2}{\max \{N_t, N\} \,N_s^2}\,\cdot \,\dfrac{m_a m_s\widehat{u_1}(\xi _1)\cdots \widehat{\overline{u_6}}(\xi _6)}{m_a m_s}\\&\lesssim \int _0^\delta \int _{\mathbb R}\dfrac{1}{\max \{N_t, N\}\,N^2 }Iu_s Iu_a Iu_b u_t u_5 u_6\\&\lesssim \int _0^\delta \int _{\mathbb R}\dfrac{N_t^{\frac{1}{4}}}{\max \{N_t, N\}\,N^2}Iu_s Iu_a Iu_b( D_x^{-\frac{1}{4}}u_t) u_5 u_6\\&\lesssim \dfrac{1}{N^{11/4}}\Vert Iu_s\Vert _{L_x^2 L_t^2} \Vert Iu_a\Vert _{L_x^\infty L_t^\infty }\Vert Iu_b\Vert _{L_x^\infty L_t^\infty }\Vert D_x^{-\frac{1}{4}}u_t\Vert _{L_x^5 L_t^{10}}\Vert u_5\Vert _{L_x^{20/3} L_t^{5}}\Vert u_6\Vert _{L_x^{20/3} L_t^5}. \end{aligned} \end{aligned}$$
(3.35)

Using estimates from Lemma 2.5, we obtain from (3.35) that

$$\begin{aligned} \begin{aligned} J&\lesssim \dfrac{1}{N^{\frac{11}{4}}}\Vert Iu_s\Vert _{X^{0,b}_{\delta }} \Vert Iu_a\Vert _{X^{0,b}_{\delta }}\Vert Iu_b\Vert _{X^{0,b}_{\delta }}\Vert u_t\Vert _{X^{-\frac{1}{4},b}_{\delta }}\Vert u_5\Vert _{X^{-\frac{1}{4},b}_{\delta }}\Vert u_6\Vert _{X^{-\frac{1}{4},b}_{\delta }}\\&\lesssim \dfrac{1}{N^{\frac{11}{4}}}\Vert Iu_s\Vert _{X^{0,b}_{\delta }} \Vert Iu_a\Vert _{X^{0,b}_{\delta }}\Vert Iu_b\Vert _{X^{0,b}_{\delta }}\Vert Iu_t\Vert _{X^{0,b}_{\delta }}\Vert Iu_5\Vert _{X^{0,b}_{\delta }}\Vert Iu_6\Vert _{X^{0,b}_{\delta }}\\&\lesssim \dfrac{1}{N^{\frac{11}{4}}}\Vert Iu\Vert _{X^{0,b}_{\delta }}^6. \end{aligned} \end{aligned}$$
(3.36)

Second part: Case 3) of Proposition 3.3. Without loss of generality we can assume that \(|\xi _{123}|=N_s\), \(|\xi _4|=N_a\), \(|\xi _5|=N_t\) and \(|\xi _6|=N_b\). Notice that \(m_s^2\le m_t m_b\) and \(|\xi _j|\sim N_s\) for some \(j=1,2,3\); so we can assume \(|\xi _3|\sim N_s\). Using (3.16) in Proposition 3.3 with \(a=1\) and \(b=0\), we obtain

$$\begin{aligned} \begin{aligned} J:=&\left| \int _0^\delta \Lambda _6(\delta _6; u(t) ) \right| = \left| \int _0^\delta \int _{\xi _1+\cdots +\xi _6=0}\delta _4(\xi _{123}, \xi _4, \xi _5, \xi _6)\widehat{u_1}(\xi _1)\cdots \widehat{\overline{u_6}}(\xi _6)\right| \\ \lesssim&\int _0^\delta \int _{\xi _1+\cdots +\xi _6=0}\dfrac{m_t m_b}{N_s^2|\xi _{1234}|^{\frac{1}{2}}\, |\xi _{56}|^{\frac{1}{2}}}\,\cdot \,\dfrac{m_a \widehat{u_1}(\xi _1)\cdots \widehat{\overline{u_6}}(\xi _6)}{m_a }\\ \lesssim&\int _0^\delta \int _{\xi _1+\cdots +\xi _6=0}\dfrac{N_s^{\frac{1}{4}}}{N N_s}|\xi _{1234}|^{-\frac{1}{2}} \big (\widehat{u_1}(\xi _1)\widehat{\overline{u_2}}(\xi _2)\,|\xi _3|^{-\frac{1}{4}}\widehat{u_3}(\xi _3)\,\widehat{\overline{Iu_4}}(\xi _4)\big ) \,|\xi _{56}|^{-\frac{1}{2}}\big (\widehat{Iu_5}(\xi _5)\widehat{\overline{Iu_6}}(\xi _6)\big )\\ \lesssim&\dfrac{1}{N^{\frac{7}{4}}}\int _0^\delta \int _{\mathbb R}D_x^{-\frac{1}{2}}(u_1 u_2 (D_x^{-\frac{1}{4}}u_3) Iu_a) D_x^{-\frac{1}{2}}(Iu_t Iu_b)\\ \lesssim&\dfrac{1}{N^{\frac{7}{4}}}\int _0^\delta \Vert D_x^{-\frac{1}{2}}(u_1 u_2 (D_x^{-\frac{1}{4}}u_3) Iu_a) \Vert _{L^2_x}\Vert D_x^{-\frac{1}{2}}(Iu_t Iu_b)\Vert _{L^2_x}. \end{aligned} \end{aligned}$$
(3.37)

Now, applying the Hardy-Littlewood-Sobolev inequality followed by estimates from Lemma 2.5, we obtain from (3.37) that

$$\begin{aligned} \begin{aligned} J&\lesssim \dfrac{1}{N^{\frac{7}{4}}}\int _0^\delta \Vert u_1 u_2(D_x^{-\frac{1}{4}}u_3) Iu_a \Vert _{L^1_x}\Vert Iu_t Iu_b\Vert _{L^1_x}\\&\lesssim \dfrac{1}{N^{\frac{7}{4}}} \Vert u_1\Vert _{ L_x^{20/3}L_t^5}\Vert u_2\Vert _{ L_x^{20/3}L_t^5}\Vert D_x^{-\frac{1}{4}}u_3\Vert _{L_x^5 L_t^{10}}\Vert I u_a\Vert _{L_x^{2} L_t^{2}}\Vert Iu_t\Vert _{L_t^\infty L_x^2 }\Vert Iu_b\Vert _{L_t^\infty L_x^2 }\\&\lesssim \dfrac{1}{N^{\frac{7}{4}}}\Vert u_1\Vert _{X^{-\frac{1}{4},b}_{\delta }} \Vert u_2\Vert _{X^{-\frac{1}{4},b}_{\delta }}\Vert u_3\Vert _{X^{-\frac{1}{4},b}_{\delta }}\Vert Iu_a\Vert _{X^{0,b}_{\delta }}\Vert Iu_t\Vert _{X^{0,b}_{\delta }}\Vert Iu_b\Vert _{X^{0,b}_{\delta }}\\&\lesssim \dfrac{1}{N^{\frac{7}{4}}}\Vert Iu_1\Vert _{X^{0,b}_{\delta }} \Vert Iu_2\Vert _{X^{0,b}_{\delta }}\Vert Iu_3\Vert _{X^{0,b}_{\delta }}\Vert Iu_a\Vert _{X^{0,b}_{\delta }}\Vert Iu_t\Vert _{X^{0,b}_{\delta }}\Vert Iu_b\Vert _{X^{0,b}_{\delta }}\\&\lesssim \dfrac{1}{N^{\frac{7}{4}}}\Vert Iu\Vert _{X^{0,b}_{\delta }}^6. \end{aligned} \end{aligned}$$
(3.38)

3.3 Almost Conserved Quantity

We use the estimates proved in the previous subsection to obtain the following almost conservation law for the second generation of the energy.

Proposition 3.6

Let u be the solution of the IVP (2.3) given by Theorem 2.3 on the interval \([0, \delta ]\). Then the second generation of the modified energy satisfies the following estimate

$$\begin{aligned} |E^2_I(u(\delta ))|\le |E^2_I(\phi )| + C N^{-\frac{7}{4}}\Vert Iu\Vert _{X^{0, \frac{1}{2}+}_{\delta }}^6. \end{aligned}$$
(3.39)

Proof

The proof follows by combining (3.10) and (3.31).

4 Proof of the Main Results

In this section we provide the proofs of the main results of this work.

Proof of Theorem 1.3

Let \(u_0\in H^s(\mathbb R)\), \(-\frac{1}{4}<s<0\). Given any \(T>0\), we are interested in extending the local solution to the IVP (2.3) to the interval [0, T].

To simplify the analysis we use a scaling argument. If \(u(x,t)\) solves the IVP (2.3) with initial data \(u_0(x)\), then for \(1<\lambda <\infty \) so does \(u^{\lambda }(x,t)\) with initial data \(u_0^{\lambda }(x)\), where \(u^{\lambda }(x,t)= \lambda ^{-\frac{3}{2}} u(\frac{x}{\lambda }, \frac{t}{\lambda ^3})\) and \(u_0^{\lambda }(x)=\lambda ^{-\frac{3}{2}}u_0(\frac{x}{\lambda })\).

Our interest is in extending the rescaled solution \(u^{\lambda }\) to the larger time interval \([0, \lambda ^3T]\).

Observe that

$$\begin{aligned} \Vert u_0^{\lambda }\Vert _{{H}^s}\lesssim \lambda ^{-1-s}\Vert u_0\Vert _{{H}^s}. \end{aligned}$$
(4.1)

From this observation and (2.11) we have that

$$\begin{aligned} E^1_I(u_0^{\lambda })=\Vert Iu_0^{\lambda }\Vert _{L^2}^2\lesssim N^{-2s}\lambda ^{-2(1+s)}\Vert u_0\Vert _{L^2}^2. \end{aligned}$$
(4.2)
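As a sanity check of the scaling exponent in (4.1), one can compare the homogeneous \(\dot H^s\) norm of a Gaussian datum with that of its rescaling in Fourier variables. The sketch below is illustrative only: it uses the explicit transform \(\widehat{e^{-x^2/2}}(\xi )=\sqrt{2\pi }\,e^{-\xi ^2/2}\) and a midpoint quadrature, and confirms numerically that the squared norms scale like \(\lambda ^{-2(1+s)}\).

```python
import math

s, lam = -0.125, 2.0  # sample regularity s in (-1/4, 0) and scaling parameter lambda

def hs_norm_sq(fourier_sq, h=1e-3, L=20.0):
    """Midpoint-rule approximation of  int |xi|^{2s} |\\hat f(xi)|^2 dxi  over the line."""
    total, k = 0.0, 0
    while (xi := (k + 0.5) * h) < L:
        total += 2 * abs(xi) ** (2 * s) * fourier_sq(xi) * h  # even integrand: double the half-line
        k += 1
    return total

# u0(x) = e^{-x^2/2}            =>  |\hat u0(xi)|^2      = 2*pi * e^{-xi^2}
u0_sq = lambda xi: 2 * math.pi * math.exp(-xi**2)
# u0^lam(x) = lam^{-3/2} u0(x/lam)  =>  |\hat u0^lam(xi)|^2 = 2*pi * lam^{-1} * e^{-(lam*xi)^2}
u0lam_sq = lambda xi: 2 * math.pi * lam**-1 * math.exp(-(lam * xi) ** 2)

ratio = hs_norm_sq(u0lam_sq) / hs_norm_sq(u0_sq)
assert abs(ratio - lam ** (-2 * (1 + s))) < 1e-2 * ratio  # matches lambda^{-2(1+s)}
print(ratio, lam ** (-2 * (1 + s)))
```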

The number \(N\gg 1\) will be chosen suitably later. Now we choose the parameter \(\lambda =\lambda (N)\) in such a way that \(E^1_I(u_0^{\lambda })=\Vert Iu_0^{\lambda }\Vert _{L^2}^2\) becomes as small as we please. In fact, for arbitrary \(\epsilon >0\), if we choose

$$\begin{aligned} \lambda \sim N^{-\frac{s}{1+s}}, \end{aligned}$$
(4.3)

we can obtain

$$\begin{aligned} E^1_I(u_0^{\lambda })=\Vert Iu_0^{\lambda }\Vert _{L^2}^2\le \epsilon . \end{aligned}$$
(4.4)

From (4.4) and the variant of the local well-posedness result (2.13), we can guarantee that the rescaled solution \(Iu^{\lambda }\) exists in the time interval [0, 1].

Moreover, for this choice of \(\lambda \), from (3.7), (3.30) and (4.4), in the time interval [0, 1], we have

$$\begin{aligned} |E^2_I(u_0^{\lambda })|\lesssim |E^1_I(u_0^{\lambda })| +|\Lambda _4(M_4)|\lesssim \Vert Iu_0^{\lambda }\Vert _{L^2}^2 + \Vert Iu_0^{\lambda }\Vert _{L^2}^4\le \epsilon +\epsilon ^2\lesssim \epsilon . \end{aligned}$$
(4.5)

Using the almost conservation law (3.39) for the modified energy, (2.12), (4.4) and (4.5), we obtain

$$\begin{aligned} \begin{aligned} |E^2_I(u^{\lambda })(1)|&\lesssim |E^2_I(u_0^{\lambda })| +N^{-\frac{7}{4}}\Vert Iu^{\lambda }\Vert _{X_1^{0,{\frac{1}{2}+}}}^6\\&\lesssim \epsilon +N^{-\frac{7}{4}}\epsilon ^3\\&\lesssim \epsilon +N^{-\frac{7}{4}}\epsilon . \end{aligned} \end{aligned}$$
(4.6)

From (4.6), it is clear that we can iterate this process \(N^{\frac{7}{4}}\) times before doubling the modified energy \(|E^2_I(u^{\lambda })|\). Therefore, by taking \(N^{\frac{7}{4}}\) steps of size O(1), we can extend the rescaled solution to the interval \([0, N^{\frac{7}{4}}]\). As we are interested in extending the solution to the interval \([0, \lambda ^3T]\), we must select \(N=N(T)\) such that \(\lambda ^3T\le N^{\frac{7}{4}}\). Therefore, with the choice of \(\lambda \) in (4.3), we must have

$$\begin{aligned} TN^{\frac{-7-19s}{4(1+s)}}\le c. \end{aligned}$$
(4.7)

Hence, for arbitrary \(T>0\), (4.7) can be satisfied for large N if \(s>-\frac{7}{19}\), which is true because we have considered \(s>-\frac{1}{4}\). This completes the proof of the theorem.
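The exponent in (4.7) follows from elementary algebra: with \(\lambda \sim N^{-\frac{s}{1+s}}\) as in (4.3), the requirement \(\lambda ^3T\le N^{\frac{7}{4}}\) becomes \(T\le N^{\frac{7}{4}+\frac{3s}{1+s}}=N^{\frac{7+19s}{4(1+s)}}\). A quick symbolic check with exact rational arithmetic (illustrative only):

```python
from fractions import Fraction

def exponent(s):
    """Combined exponent 7/4 + 3s/(1+s) arising from lambda^3 * T <= N^{7/4}."""
    return Fraction(7, 4) + 3*s / (1 + s)

for s in [Fraction(-1, 5), Fraction(-1, 8), Fraction(-24, 100)]:
    e = exponent(s)
    assert e == Fraction(7 + 19*s, 4*(1 + s))  # equals (7+19s)/(4(1+s)), as in (4.7)
    assert e > 0                               # positive, since s > -7/19

assert exponent(Fraction(-7, 19)) == 0         # the exponent vanishes exactly at s = -7/19
print("exponent (7+19s)/(4(1+s)) verified")
```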

Remark 4.1

From the proof of Theorem 1.3 it can be seen that the global well-posedness result might hold for initial data with Sobolev regularity below \(-\frac{1}{4}\) as well, provided a local solution exists. However, as shown in [3], one cannot obtain the local well-posedness result for such data because the crucial trilinear estimate fails for \(s<-\frac{1}{4}\).

Remark 4.2

In this work we focused on the well-posedness issues for the IVPs associated to the nonlinear Schrödinger equations with third order dispersion. Mainly, we obtained the least possible Sobolev regularity requirement on the initial data that suffices to obtain a global solution. In recent times, the study of the existence of soliton solutions and their dynamics has also attracted the attention of several mathematicians and physicists. Solitons play a very important role in many fields of nonlinear science such as nonlinear optics, Bose-Einstein condensates, plasma physics, biology, fluid mechanics, and other related fields. The nonlinear Schrödinger equations with third order dispersion considered in this work are also widely studied in this context; see for example [7, 22, 23] and references therein.