1 Introduction

In this work we consider the initial value problems (IVPs) associated to two dispersive models with real analytic initial data. The first model we consider is the modified Korteweg–de Vries (mKdV) equation

$$\begin{aligned} \left\{ \begin{array}{l} \partial _t u+ \partial _x^3u+\mu u^2\partial _xu = 0, \quad x\in \mathbb {R},\; t\in \mathbb {R}, \\ u(x,0) = u_0(x), \end{array}\right. \end{aligned}$$
(1.1)

where u is a real valued function and \(\mu =\pm 1\). The next model is the cubic nonlinear Schrödinger equation with third order dispersion (tNLS equation in short)

$$\begin{aligned} \left\{ \begin{array}{l} \partial _t v+i\alpha \partial _x^2v+\beta \partial _x^3v+i\gamma |v|^2v = 0, \quad x\in \mathbb {R},\; t\in \mathbb {R}, \\ v(x,0) = v_0(x), \end{array}\right. \end{aligned}$$
(1.2)

where \(\alpha , \beta \) and \(\gamma \) are real constants and v is a complex valued function.

The mKdV equation (1.1) is a generalization of the famous KdV equation [28] and is known as focusing for \(\mu =1\) and defocusing for \(\mu =-1\). The mKdV equation appears in several physical contexts, for example, the propagation of waves in plasma [27], the dynamics of traffic flow [32], fluid mechanics [19] and nonlinear optics [30, 31], to mention a few.

The mKdV equation (1.1) has attracted much attention from both applied and theoretical perspectives. Various methods have been used to construct solutions, see for example [15, 20, 40] and references therein. It possesses an infinite number of conserved quantities, is both Hamiltonian and completely integrable, and can be solved using the inverse scattering technique [40].

Among the infinitely many conserved quantities possessed by (1.1), we highlight the mass

$$\begin{aligned} M(u)(t):=\int u^2(x,t)dx, \end{aligned}$$
(1.3)

and the energy

$$\begin{aligned} E(u)(t):=\int \Big [(\partial _xu(x,t))^2-\frac{\mu }{6} (u(x,t))^4\Big ]dx, \end{aligned}$$
(1.4)

that will be useful in this work.

The well-posedness of the IVP (1.1) with data in the \(L^2\)-based Sobolev spaces \(H^s({\mathbb {R}})\), \(s \in {\mathbb {R}}\), has long been studied in the literature, see [6, 22,23,24] and references therein. The optimal local well-posedness result for given data in \(H^s({\mathbb {R}})\), \(s\ge \frac{1}{4}\), was obtained by Kenig et al. [23], who exploited dispersion through the use of local smoothing estimates. An alternative proof of this result in \(H^{\frac{1}{4}}({\mathbb {R}})\) was given by Tao [41] using the Fourier transform restriction norm method. Using the conserved quantities (1.3) and (1.4) one can get the global well-posedness result in \(H^s({\mathbb {R}})\) for \(s\ge 1\). The global well-posedness result for low regularity data, viz. \(s >\frac{1}{4}\), was established, using the I-method, by Colliander et al. [12], and at the end-point \(s = \frac{1}{4}\), by Kishimoto [25]. For further improvements we refer to [14, 35] and references therein.

The tNLS equation (1.2), also known as the extended nonlinear Schrödinger equation, describes several physical phenomena, such as nonlinear pulse propagation in an optical fiber and the nonlinear modulation of a capillary gravity wave on water; for more details we refer to [1, 9, 13, 18, 33, 37, 43] and references therein. In some literature, this model is also known as the third order Lugiato–Lefever equation [34].

The tNLS equation can also be viewed as a particular case of the higher order nonlinear Schrödinger equation proposed by Hasegawa and Kodama [17, 26]

$$\begin{aligned} \partial _t v +i\alpha \partial _x^2 v +\beta \partial _x^3 v +i\gamma |v|^2v+\delta |v|^2\partial _x v +\epsilon v^2 \partial _x \overline{v} =0, \end{aligned}$$

where \(\gamma ,\delta ,\epsilon \in \mathbb {C}\) and \(\alpha ,\beta \in \mathbb {R}\) are constants, and \(v = v(x, t)\) is a complex valued function.

As presented in [29] (see also [9]), the \(L^2\)-norm

$$\begin{aligned} {\tilde{M}}(v)(t):=\int |v(x,t)|^2 dx, \end{aligned}$$
(1.5)

and the following quantity

$$\begin{aligned} {\tilde{E}}(v)(t):=\int v(x,t)\overline{\partial _x v(x,t)}d x, \end{aligned}$$
(1.6)

are conserved by the flow of (1.2).

The well-posedness issues and other properties of solutions of the IVP (1.2) posed on \({\mathbb {R}}\) or \(\mathbb {T}\) have been extensively studied by several authors, see for example [9, 11, 13, 34, 36] and references therein. The optimal local well-posedness result for the IVP (1.2) with given data in \(H^s({\mathbb {R}})\), \(s>-\frac{1}{4}\), was obtained in [9]. The author of [9] also proved that the crucial trilinear estimate used to obtain the local well-posedness result fails whenever \(s<-\frac{1}{4}\); in this sense, the local well-posedness result for \(s>-\frac{1}{4}\) is the best possible using this technique. Quite recently, the authors of [10] implemented the I-method to construct an almost conserved quantity and used it to obtain a sharp global well-posedness result for the IVP (1.2) with given data in \(H^s({\mathbb {R}})\), \(s>-\frac{1}{4}\).

We note that the best local well-posedness results for the IVPs (1.1) and (1.2) with given data in \(H^s({\mathbb {R}})\) were respectively obtained in [9, 41] using the Fourier restriction norm spaces introduced in [7, 8], commonly known as Bourgain's spaces. Generally speaking, the Bourgain spaces \(X^{s,b}\), \(s, b\in {\mathbb {R}}\), are very well suited to obtaining well-posedness results for low regularity Sobolev data. These spaces are defined via the norm

$$\begin{aligned} \Vert w\Vert _{X^{s,b}} = \Vert \langle \xi \rangle ^{s} \langle \tau -\phi (\xi )\rangle ^{b} |{\widehat{w}}(\xi ,\tau )|\Vert _{L_{\tau ,\xi }^2}, \end{aligned}$$
(1.7)

where \(\langle \xi \rangle := 1+|\xi |\), w is a generic function which could be u or v and \(\phi \) is the phase function associated to the linear part of the equation. In our case,

$$\begin{aligned} \phi (\xi ) ={\left\{ \begin{array}{ll} \xi ^3, &{} \text { for the mKdV equation, }\\ \alpha \xi ^2 +\beta \xi ^3, &{} \text { for the tNLS equation. } \end{array}\right. } \end{aligned}$$
(1.8)

As mentioned in the beginning, the main interest of this work is in considering the IVPs (1.1) and (1.2) with real analytic initial data \(u_0\). For this purpose we consider the initial data \(u_0\) in the Gevrey class \(G^{\sigma ,s}(\mathbb {R})\), \(\sigma >0\) and \(s\in \mathbb {R}\) defined as follows

$$\begin{aligned} G^{\sigma ,s}(\mathbb {R}):= \left\{ f\in L^2(\mathbb {R});\; \Vert f\Vert _{G^{\sigma ,s}(\mathbb {R})}^2 = \int \langle \xi \rangle ^{2s}e^{2\sigma |\xi |}|{\widehat{f}}(\xi )|^2 d{\xi } < \infty \right\} , \end{aligned}$$

where \({\widehat{f}}\) denotes the Fourier transform given by

$$\begin{aligned} {\widehat{f}}(\xi ) = c\int e^{-ix\xi }f(x)d x. \end{aligned}$$

When convenient, we also use the notation \(\mathcal {F}(f)\) to denote the Fourier transform of f. Also, we will use c or C to denote constants whose value may vary from one line to the next.

From the Paley–Wiener theorem, every function in \(G^{\sigma , s}(\mathbb {R})\) is analytic in the space variable and admits a holomorphic extension to the complex strip \(S_\sigma =\{ x+iy;\; |y|<\sigma \}\). In this context, \(\sigma \) is called the uniform radius of analyticity.
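As a concrete numerical illustration (not part of the argument of this paper), consider \(f(x)={\text {sech}}(x)\), whose Fourier transform is \(\pi \,{\text {sech}}(\pi \xi /2)\) in the normalization \(c=1\); since sech has poles at \(\pm i\pi /2\), one expects \(f\in G^{\sigma ,s}\) exactly when \(\sigma <\pi /2\), and the Gevrey integrand indeed decays or blows up accordingly:

```python
import math

def gevrey_integrand(xi, sigma, s=0.0):
    """Integrand <xi>^{2s} e^{2*sigma*|xi|} |f_hat(xi)|^2 of the G^{sigma,s}
    norm for f = sech, using f_hat(xi) = pi*sech(pi*xi/2) (normalization c = 1)."""
    f_hat = math.pi / math.cosh(math.pi * xi / 2.0)
    return (1.0 + abs(xi)) ** (2.0 * s) * math.exp(2.0 * sigma * abs(xi)) * f_hat ** 2

# sech extends holomorphically to the strip |Im z| < pi/2 (poles at +-i*pi/2):
# the integrand decays at infinity for sigma < pi/2 and blows up for sigma > pi/2.
decaying = gevrey_integrand(50.0, 1.0)    # sigma = 1.0 < pi/2
blowing_up = gevrey_integrand(50.0, 1.7)  # sigma = 1.7 > pi/2
print(decaying, blowing_up)
```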

When considering the existence of solutions to the IVPs with initial data in the Gevrey class \(G^{\sigma ,s}(\mathbb {R})\), the following two questions arise naturally. Starting with given data \(u(0)\in G^{\sigma ,s}\), can one guarantee the existence of a solution that sustains the regularity in the space variable at least for a short time? When one extends the local solution globally in time, the radius of analyticity \(\sigma (t)\) may decrease. In this situation, can one find a lower bound for the radius of analyticity \(\sigma (t)\) as \(t\rightarrow \infty \)? In recent times, these sorts of questions have attracted the attention of several mathematicians, see for example [3,4,5, 16, 21, 38, 39, 42] and references therein.

At this point, we mention the works [5, 16], where the authors obtained well-posedness results for the IVP associated to the generalized KdV equations with data in the Gevrey class \(G^{\sigma ,s}(\mathbb {R})\). In [5], the authors also obtained an algebraic lower bound for the evolution of the radius of analyticity, which turns out to be \(CT^{-12}\) for both the KdV and mKdV equations. Recently, Selberg and Silva [39] introduced a concept of almost conserved quantities and obtained \(cT^{-(\frac{4}{3}+\epsilon )}\) as a lower bound for the radius of analyticity for the KdV equation, improving the result in [5]. Quite recently, this lower bound has been further improved to \(cT^{-\frac{1}{4}}\) in [21] using the I-method. The NLS equation has also been extensively studied with data in the Gevrey class, see [2, 42] and references therein. We emphasize here the work of Tesfahun [42], where the author proved the existence of a global solution to the defocusing cubic NLS equation belonging to \(C([-T,T], G^{\sigma (T),s}(\mathbb {R}))\) for any \(T>0\) as long as \(\sigma (T)\ge cT^{-1}\). As for the KdV equation, the main ingredient to obtain this result is an almost conserved quantity derived at the \(H^1\)-level.

To motivate the present work, we mention the following. When the coefficient \(\beta =0\) in (1.2), one obtains the well-known classical NLS equation with cubic nonlinearity. When the coefficient \(\alpha =0\) in (1.2), one gets the dispersive term of the famous complex mKdV equation, but without a derivative on the cubic nonlinearity. As mentioned in the previous paragraph, one has well-posedness results for both the mKdV and the NLS equations with data in the Gevrey class, as well as lower bounds for the evolution of the radius of analyticity. So, it is natural to ask, what happens to the IVP (1.2) with data in the Gevrey class \(G^{\sigma , s}({\mathbb {R}})\) when \(\alpha \ne 0\) and \(\beta \ne 0\)? Is it possible to obtain a lower bound for the evolution of the radius of analyticity \(\sigma (t)\) as \(t\rightarrow \infty \)? The next natural question, concerning the IVP (1.1), is, can one improve the algebraic lower bound for the radius of analyticity obtained in [5]?

The main objective of this work is to provide affirmative answers to the questions posed above. For this, we will use the analytic version of Bourgain's space, the so-called Gevrey–Bourgain space, related to the mKdV and tNLS equations. Given \(\sigma \ge 0\) and \(s,b\in \mathbb {R}\), the Gevrey–Bourgain space \(X^{\sigma ,s,b}(\mathbb {R}^2)\) is defined as the closure of the Schwartz space under the norm

$$\begin{aligned} \Vert w\Vert _{X^{\sigma ,s,b}} = \Vert e^{\sigma |\xi |}\langle \xi \rangle ^{s} \langle \tau -\phi (\xi )\rangle ^{b} |{\widehat{w}}(\xi ,\tau )|\Vert _{L_{\tau ,\xi }^2}, \end{aligned}$$
(1.9)

where \(\langle \xi \rangle := 1+|\xi |\) and \(\phi \) is the phase function given by (1.8).

Also, for \(T>0\), we denote by \(X_T^{\sigma ,s,b}(\mathbb {R}^2)\) the Gevrey–Bourgain space restricted to the time interval \((-T,T)\), with norm given by

$$\begin{aligned} \Vert w\Vert _{X_T^{\sigma ,s,b}} = \inf \big \{ \Vert {\tilde{w}}\Vert _{X^{\sigma ,s,b}};\; w={\tilde{w}} \text { on } \mathbb {R}\times (-T,T)\big \}. \end{aligned}$$

For \(\sigma =0\) we recover the classical Bourgain’s space with norm given by (1.7). In this case, we simply have \(X^{0,s,b}\equiv X^{s,b}\) and \(X_T^{0, s,b}\equiv X_T^{s,b}\). We introduce the operator \(e^{\sigma |D_x|}\) given by

$$\begin{aligned} \widehat{e^{\sigma |D_x|} w}(\xi ) = e^{\sigma |\xi |}{\widehat{w}}(\xi ), \end{aligned}$$
(1.10)

so that one has

$$\begin{aligned} \Vert e^{\sigma |D_x|} w\Vert _{X^{s,b}} = \Vert w\Vert _{X^{\sigma , s, b}}. \end{aligned}$$
(1.11)

The relation (1.11) allows us to translate the results in the classical Bourgain’s spaces to the analytic version of them.
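Relation (1.11) is simply the statement that \(e^{\sigma |D_x|}\) is the Fourier multiplier with symbol \(e^{\sigma |\xi |}\). A discrete sanity check of this identity (a schematic sketch with an illustrative grid, random coefficients and the mKdV phase) is:

```python
import math
import random

def phi_mkdv(xi):
    # phase function (1.8) for the mKdV equation
    return xi ** 3

def xsb_norm(w_hat, xis, taus, s, b, phi, sigma=0.0):
    """Discrete analogue of the X^{sigma,s,b} norm (1.9): a weighted l^2 sum of
    |w_hat| over a grid of (xi, tau), with weight
    e^{sigma|xi|} <xi>^s <tau - phi(xi)>^b, where <x> = 1 + |x|."""
    total = 0.0
    for i, xi in enumerate(xis):
        for j, tau in enumerate(taus):
            weight = (math.exp(sigma * abs(xi)) * (1.0 + abs(xi)) ** s
                      * (1.0 + abs(tau - phi(xi))) ** b)
            total += (weight * abs(w_hat[i][j])) ** 2
    return math.sqrt(total)

random.seed(0)
xis = [0.5 * k for k in range(-8, 9)]
taus = [0.5 * k for k in range(-8, 9)]
w_hat = [[random.random() for _ in taus] for _ in xis]

# e^{sigma|D_x|} multiplies the Fourier transform by e^{sigma|xi|} (1.10), so
# ||e^{sigma|D_x|} w||_{X^{s,b}} equals ||w||_{X^{sigma,s,b}} -- relation (1.11).
sigma, s, b = 0.3, 0.25, 0.6
Ew_hat = [[math.exp(sigma * abs(xi)) * w_hat[i][j] for j in range(len(taus))]
          for i, xi in enumerate(xis)]
lhs = xsb_norm(Ew_hat, xis, taus, s, b, phi_mkdv)              # X^{s,b} norm of e^{sigma|D_x|}w
rhs = xsb_norm(w_hat, xis, taus, s, b, phi_mkdv, sigma=sigma)  # X^{sigma,s,b} norm of w
print(abs(lhs - rhs) / rhs)
```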

To avoid any possible confusion, we introduce the following notation to distinguish the Gevrey–Bourgain spaces related to the mKdV and tNLS equations

$$\begin{aligned} \begin{aligned}&Y^{\sigma , s,b}: = X^{\sigma , s, b}, \quad {\text {when}}\quad \phi (\xi ) = \xi ^3,\\&Z^{\sigma , s,b}: = X^{\sigma , s, b}, \quad {\text {when}}\quad \phi (\xi ) = \alpha \xi ^2+\beta \xi ^3, \end{aligned} \end{aligned}$$

and similarly for \(Y^{\sigma , s,b}_T\), \(Z^{\sigma , s,b}_T\) and when \(\sigma =0\), \(Y^{ s,b}\), \(Z^{ s,b}\), \(Y^{ s,b}_T\), \(Z^{ s,b}_T\) as well. From now on, we will use the spaces \(X^{\sigma , s, b}\) and \(X^{ s, b}\) to state and prove the results that hold for any phase function \(\phi \) given by (1.8). We will use the spaces \(Y^{\sigma , s, b}\) and \(Y^{ s, b}\) to state and prove the results that hold only for the mKdV equation and the spaces \(Z^{\sigma , s, b}\) and \(Z^{ s, b}\) to state and prove the results that hold only for the tNLS equation.

Now we are in a position to state the main results of this work. Regarding local well-posedness, we prove the following results.

Theorem 1.1

Let \(\sigma >0\) and \(s\ge \frac{1}{4}\). For each \(u_0\in G^{\sigma ,s}(\mathbb {R})\) there exists a time \(T_0=T_0(\Vert u_0\Vert _{G^{\sigma ,s}})>0\) such that the IVP (1.1) admits a unique solution u in \( C([-T_0,T_0]; G^{\sigma ,s}(\mathbb {R}))\cap Y_{T_0}^{\sigma ,s,b}\). Moreover, the data-to-solution map is locally Lipschitz.

Theorem 1.2

Let \(\sigma >0\) and \(s>-\frac{1}{4}\). For each \(v_0\in G^{\sigma ,s}(\mathbb {R})\) there exists a time \(T_0=T_0(\Vert v_0\Vert _{G^{\sigma ,s}})>0\) such that the IVP (1.2) admits a unique solution v in \(C([-T_0,T_0]; G^{\sigma ,s}(\mathbb {R}))\cap Z_{T_0}^{\sigma ,s,b}\). Moreover, the data-to-solution map is locally Lipschitz.

Remark 1.3

Note that the Gevrey spaces \(G^{\sigma ,s}\) enjoy the following inclusion

$$\begin{aligned} G^{\sigma ,s}(\mathbb {R}) \subset G^{\sigma ',s'}(\mathbb {R}),\; \text {for all }0<\sigma '<\sigma \text { and } s,s'\in \mathbb {R}. \end{aligned}$$
(1.12)

In view of this inclusion, it suffices to prove the local well-posedness theory for \(\sigma >0\) and a fixed \(s=s_0\), which in turn implies the same in \(G^{\sigma , s}({\mathbb {R}})\) for all \(\sigma >0\) and \(s\in \mathbb {R}\).
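The inclusion (1.12) holds because the exponential gap \(e^{-2(\sigma -\sigma ')|\xi |}\) between the two Gevrey weights absorbs any polynomial factor \(\langle \xi \rangle ^{2(s'-s)}\). A quick numerical check of the boundedness of the weight ratio, with illustrative parameter values, is:

```python
import math

def weight_ratio(xi, sigma, s, sigma_p, s_p):
    """Ratio of the G^{sigma',s'} weight to the G^{sigma,s} weight at frequency xi;
    the embedding (1.12) amounts to this ratio being bounded when sigma' < sigma."""
    bracket = 1.0 + abs(xi)
    num = bracket ** (2.0 * s_p) * math.exp(2.0 * sigma_p * abs(xi))
    den = bracket ** (2.0 * s) * math.exp(2.0 * sigma * abs(xi))
    return num / den

# sigma' = 0.5 < sigma = 1.0; even though s' = 3 > s = -1 adds polynomial weight,
# the exponential gap e^{-|xi|} dominates, so the ratio peaks and then decays.
ratios = [weight_ratio(0.1 * k, 1.0, -1.0, 0.5, 3.0) for k in range(2001)]
print(max(ratios), ratios[-1])
```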

In the following theorems we state the main results regarding the global solution and the evolution of the radius of analyticity.

Theorem 1.4

Let \(\sigma _0>0\), \(s\ge \frac{1}{4}\), \(u_0\in G^{\sigma _0,s}({\mathbb {R}})\) and u be the local solution to the IVP (1.1) in the defocusing case \((\mu = -1)\) given by Theorem 1.1. Then, for any \(T\ge T_0\) the local solution u extends globally in time satisfying

$$\begin{aligned} u\in C([-T,T]; G^{\sigma (T),s}), \quad \text {with}\quad \sigma (T)\ge \min \Big \{\sigma _0, cT^{-\frac{4}{3}}\Big \}, \end{aligned}$$

where c is a positive constant depending on s, \(\sigma _0\) and \(\Vert u_0\Vert _{G^{\sigma _0, s}}\).

Theorem 1.5

Let \(\sigma _0>0\), \(s>-\frac{1}{4}\), \(v_0\in G^{\sigma _0,s}({\mathbb {R}})\) and v be the local solution to the IVP (1.2) given by Theorem 1.2. Then, for any \(T\ge T_0\), the local solution v extends globally in time satisfying

$$\begin{aligned} v\in C([-T,T]; G^{\sigma (T),s}), \quad \text {with}\quad \sigma (T)\ge \min \Big \{\sigma _0, cT^{-(4+\varepsilon )}\Big \}, \end{aligned}$$

where \(\varepsilon >0\) is arbitrarily small and c is a positive constant depending on s, \(\sigma _0\) and \(\Vert v_0\Vert _{G^{\sigma _0, s}}\).

We emphasize that the result of Theorem 1.4 significantly improves the earlier result in [5] where the authors obtained \(cT^{-12}\) as a lower bound for the radius of analyticity for the mKdV equation. As far as we know, for the tNLS equation the results of Theorems 1.2 and 1.5 are the first ones in this direction.

To prove the global results and the lower bounds for the evolution of the radius of analyticity stated in Theorems 1.4 and 1.5, we derive almost conserved quantities (ACQ) in the \(G^{\sigma ,s}(\mathbb {R})\) spaces of the form (see (4.32) and (4.44) below)

$$\begin{aligned} A_{\sigma }(t) \le A_{\sigma }(0) + C \sigma ^{\theta }A_{\sigma }(0)^2, \qquad \theta >0, \end{aligned}$$
(1.13)

where \(A_{\sigma }(t)\) is defined appropriately, taking into consideration the conserved quantities.

Once the almost conserved quantity (1.13) is at hand, one can extend the local solution globally in time and obtain the lower bound for the radius of analyticity following the scheme developed in [39]. As can be seen in this process, the higher the value of \(\theta \) in (1.13), the better the lower bound for the radius of analyticity, in the sense that it decays more slowly as time advances.
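Schematically, this mechanism can be mimicked numerically: iterating the bound (1.13) once per local existence step, the number of steps (hence the reachable time \(T\)) before \(A_\sigma \) doubles scales like \(\sigma ^{-\theta }\), which inverts to \(\sigma (T)\sim T^{-1/\theta }\). The following toy iteration (illustrative constants, not the actual proof) displays this scaling:

```python
def steps_until_double(sigma, theta, A0=1.0, C=1.0):
    """Iterate a schematic version of the almost conservation bound (1.13),
    A -> A + C*sigma^theta*A^2, once per local existence step, and count
    the steps until A doubles."""
    A, n = A0, 0
    while A < 2.0 * A0:
        A += C * sigma ** theta * A * A
        n += 1
    return n

# Halving sigma multiplies the admissible number of steps (hence the reachable
# time T) by roughly 2^theta, i.e. T ~ sigma^{-theta}; inverting gives the
# lower bound sigma(T) ~ T^{-1/theta}, so a larger theta means slower decay.
for theta in (0.75, 3.0):
    ratio = steps_until_double(0.05, theta) / steps_until_double(0.1, theta)
    print(theta, ratio)
```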

Remark 1.6

As can be seen in Theorems 1.4 and 1.5, the lower bound for the radius of analyticity for the mKdV equation is better than that for the tNLS equation. The main reason for this is the level of Sobolev regularity used to construct the almost conserved quantity. The IVP (1.1) associated to the mKdV equation is locally well-posed for \(s\ge \frac{1}{4}\), so we constructed the ACQ at the \(H^1\)-level using an auxiliary trilinear estimate at the \(H^{\frac{1}{4}}\)-level. The IVP (1.2) associated to the tNLS equation is locally well-posed for \(s>-\frac{1}{4}\), so we constructed the ACQ at the \(L^2\)-level using an auxiliary trilinear estimate at the \(H^{-\frac{1}{4}+\epsilon }\)-level. The difference between the level of regularity used for the ACQ and the level of the auxiliary trilinear estimate provides the exponent \(\theta \) in (1.13), which in turn yields the decay rate of the radius of analyticity in the form \(cT^{-\frac{1}{\theta }}\). So one may naturally ask why we do not use the \(H^1\)-level to obtain the ACQ for the tNLS equation as well, in order to get a better result. Unfortunately, as already noted in [29], the conserved quantity given by (1.6) is not sign definite and cannot be used for our purpose.

This paper is organized as follows. In Sect. 2 we record some preliminary estimates and derive the trilinear estimates in the analytic Gevrey–Bourgain spaces. Section 3 is devoted to establishing the local well-posedness results with real analytic data. In Sect. 4 we introduce almost conserved quantities and find the associated decay estimates. Finally, in Sect. 5 we extend the local solution globally in time and obtain an algebraic lower bound for the radius of analyticity, as stated in Theorems 1.4 and 1.5.

2 Preliminaries and trilinear estimates

In this section we will derive some trilinear estimates that play a crucial role in the proof of the well-posedness results. We start with some classical linear estimates in the Gevrey–Bourgain spaces, whose proofs can be found in [3, 5, 16, 38], for instance. Before stating these results, let \(\psi \in C_0^\infty ((-2,2))\) be a cut-off function with \(0\le \psi \le 1\) and \(\psi (t)=1\) on \([-1,1]\), and set \(\psi _T(t)=\psi \left( \frac{t}{T}\right) \). Also, let W(t) be the unitary group given by \(\widehat{W(t)\varphi }=e^{it\phi (\xi )}\widehat{\varphi }\), where \(\phi \) is the phase function defined in (1.8).

Lemma 2.1

Let \(\sigma \ge 0\), \(s\in \mathbb {R}\), \(b>\frac{1}{2}\) and \(b-1<b'<0\). Then, for all \(0<T\le 1\) there is a constant \(c=c(s,b)>0\) such that

$$\begin{aligned} \Vert \psi (t)W(t)f(x)\Vert _{X^{\sigma ,s,b}} \le c \Vert f\Vert _{G^{\sigma ,s}}, \end{aligned}$$
(2.1)

and

$$\begin{aligned} \Bigg \Vert \psi _T(t)\int \nolimits _0^t W(t-t')w(x,t')d t'\Bigg \Vert _{X^{\sigma ,s,b}} \le cT^{1-(b-b')}\Vert w\Vert _{X^{\sigma ,s,b'}}. \end{aligned}$$
(2.2)

As we will see in the sequel, the proofs of Theorems 1.1 and 1.2 rely heavily on trilinear estimates in the Gevrey–Bourgain spaces \(X^{\sigma ,s,b}(\mathbb {R}^2)\). We start with the following trilinear estimate in the classical Bourgain spaces associated to the mKdV equation, proved in [41] (see Corollary 6.3 there).

Lemma 2.2

For all \(u_1, u_2\), \(u_3\) and \(0<\varepsilon \ll 1\), we have

$$\begin{aligned} \Vert \partial _x(u_1 u_2 u_3)\Vert _{Y^{\frac{1}{4},-\frac{1}{2}+\varepsilon }} \le C\Vert u_1\Vert _{Y^{\frac{1}{4},\frac{1}{2}+\varepsilon }}\Vert u_2\Vert _{Y^{\frac{1}{4},\frac{1}{2}+\varepsilon }}\Vert u_3\Vert _{Y^{\frac{1}{4},\frac{1}{2}+\varepsilon }}, \end{aligned}$$
(2.3)

with the constant \(C>0\) depending only on \(\varepsilon \).

Also considering the classical Bourgain’s spaces, but now associated to the tNLS equation, the following estimate was proved in [9] (see Lemma 2.2 there).

Lemma 2.3

Let \(- \frac{1}{4}< s \le 0\), \(b>\frac{7}{12}\) and \(b'<\frac{s}{3}\). Denoting \(\eta =(\xi ,\tau )\), \(\eta _1=(\xi _1,\tau _1)\) and \(\eta _2=(\xi _2,\tau _2)\), consider

$$\begin{aligned} K(\eta ,\eta _1,\eta _2) = \frac{\langle \xi \rangle ^{s}\langle \xi +\xi _1-\xi _2\rangle ^{-s}\langle \xi _2\rangle ^{-s}\langle \xi _1\rangle ^{-s}}{\langle \tau -\phi (\xi )\rangle ^{-b'}\langle \tau +\tau _1-\tau _2-\phi (\xi +\xi _1-\xi _2)\rangle ^b \langle \tau _1-\phi (\xi _1)\rangle ^b\langle \tau _2-\phi (\xi _2)\rangle ^b}, \end{aligned}$$
(2.4)

then

$$\begin{aligned} I(\xi ,\tau ):= \Vert K(\eta ,\eta _1,\eta _2)\Vert ^2_{L^2_{\eta _1,\eta _2}} \le C(s,b,b')<\infty , \end{aligned}$$
(2.5)

where \(\phi (\xi ) = \alpha \xi ^2+\beta \xi ^3\) and \(C(s,b,b')\) is a positive constant independent of \(\xi \) and \(\tau \).

We will use Lemmas 2.2 and 2.3 to derive the following trilinear estimates in the Gevrey–Bourgain’s spaces associated to the mKdV equation and tNLS equation, respectively.

Proposition 2.4

Let \(\sigma \ge 0\). Then there is \(\frac{1}{2}<b<1\) such that

$$\begin{aligned} \Vert \partial _x(u_1 u_2 u_3)\Vert _{Y^{\sigma ,\frac{1}{4},b-1}} \le C\Vert u_1\Vert _{Y^{\sigma ,\frac{1}{4},b}}\Vert u_2\Vert _{Y^{\sigma ,\frac{1}{4},b}}\Vert u_3\Vert _{Y^{\sigma ,\frac{1}{4},b}}, \end{aligned}$$
(2.6)

where \(C>0\) depends only on b.

Proof

The proof follows by applying the elementary inequality \(e^{\sigma |\xi |}\le e^{\sigma |\xi -\xi _1-\xi _2|} e^{\sigma |\xi _1|}e^{\sigma |\xi _2|}\) and the estimate (2.3) with \(e^{\sigma |D_x|} u_i\) in place of \(u_i\), \(i=1,2,3\), where \(e^{\sigma |D_x|}\) is the operator given in (1.10) (see Corollary 1 in [3] for a more detailed proof).

\(\square \)

Proposition 2.5

Let \(\sigma \ge 0\), \(-\frac{1}{4}<s\le 0\), \(b>\frac{7}{12}\) and \(b'<\frac{s}{3}\), then we have

$$\begin{aligned} \Vert v_1v_2\overline{v}_3\Vert _{Z^{\sigma ,s,b'}} \le C \Vert v_1\Vert _{Z^{\sigma ,s,b}} \Vert v_2\Vert _{Z^{\sigma ,s,b}}\Vert v_3\Vert _{Z^{\sigma ,s,b}}. \end{aligned}$$
(2.7)

Proof

As in the proof of Proposition 2.4, we use the inequality \(e^{\sigma |\xi |}\le e^{\sigma |\xi -\xi _1-\xi _2|} e^{\sigma |\xi _1|}e^{\sigma |\xi _2|}\), to obtain

$$\begin{aligned} |e^{\sigma |\xi |}\widehat{v_1v_2\overline{v}_3}(\xi ,\tau )| \le \int _{\mathbb {R}^4} |\widehat{V_1}(\xi +\xi _1-\xi _2, \tau +\tau _1-\tau _2)\widehat{V_2}(\xi _2,\tau _2)\overline{\widehat{V_3}}(\xi _1,\tau _1)| d\xi _2d\tau _2d\xi _1d\tau _1, \end{aligned}$$
(2.8)

where \(V_j=e^{\sigma |D_x|}v_j\) for \(j=1,2,3\), and we have performed the change of variables \((\xi _1,\tau _1)\rightarrow (-\xi _1,-\tau _1)\).

To simplify the exposition, let us define

$$\begin{aligned} f(\xi ,\tau ) = \langle \xi \rangle ^s\langle \tau -\phi (\xi )\rangle ^b|\widehat{V_1}|, \quad g(\xi ,\tau ) = \langle \xi \rangle ^s\langle \tau -\phi (\xi )\rangle ^b|\widehat{V_2}|, \quad h(\xi ,\tau ) = \langle \xi \rangle ^s\langle \tau -\phi (\xi )\rangle ^b|\widehat{V_3}|, \end{aligned}$$
(2.9)

so that \( \Vert f\Vert _{L^2}=\Vert v_1\Vert _{Z^{\sigma ,s,b}}\), \( \Vert g\Vert _{L^2}= \Vert v_2\Vert _{Z^{\sigma ,s,b}}\) and \( \Vert h\Vert _{L^2}= \Vert v_3\Vert _{Z^{\sigma ,s,b}}\). Also, we define

$$\begin{aligned} \eta =(\xi ,\tau ), \quad \eta _1=(\xi _1,\tau _1),\; \quad \eta _2=(\xi _2,\tau _2). \end{aligned}$$
(2.10)

Now, using these notations, the definition of the \(Z^{\sigma ,s,b'}\)-norm given in (1.9) with \(\phi (\xi ) = \alpha \xi ^2+\beta \xi ^3\) and the estimate (2.8), one can easily obtain

$$\begin{aligned} \Vert v_1v_2\overline{v_3}\Vert _{Z^{\sigma ,s,b'}} \le \Big \Vert \int _{\mathbb {R}^4} f(\eta +\eta _1-\eta _2)g(\eta _2)\overline{h}(\eta _1)K(\eta ,\eta _1,\eta _2)d \eta _1d \eta _2 \Big \Vert _{L^2_\eta }, \end{aligned}$$
(2.11)

where \(K(\eta ,\eta _1,\eta _2)\) is as in (2.4).

Applying Minkowski's inequality for integrals and Hölder's inequality, it is easy to obtain

$$\begin{aligned} \Bigg \Vert \int _{\mathbb {R}^4} f(\eta +\eta _1-\eta _2)g(\eta _2)\overline{h}(\eta _1)K(\eta ,\eta _1,\eta _2)d{\eta _1}d{\eta _2} \Bigg \Vert _{L^2_{\eta }} \le \Vert f\Vert _{L^2}\Vert g\Vert _{L^2}\Vert h\Vert _{L^2}\Vert K(\eta ,\eta _1,\eta _2)\Vert _{L^\infty _{\eta } L^2_{\eta _1,\eta _2}}. \end{aligned}$$
(2.12)

Using (2.5) from Lemma 2.3, we guarantee that

$$\begin{aligned} \Vert K(\eta ,\eta _1,\eta _2)\Vert _{L^\infty _\eta L^2_{\eta _1,\eta _2}}\le C(s,b,b')<\infty . \end{aligned}$$
(2.13)

Now, combining (2.11), (2.12) and (2.13), we arrive at

$$\begin{aligned} \Vert v_1v_2\overline{v}_3\Vert _{Z^{\sigma ,s,b'}} \le C(s,b,b')\Vert f\Vert _{L^2}\Vert g\Vert _{L^2}\Vert h\Vert _{L^2} = C\Vert v_1\Vert _{Z^{\sigma ,s,b}} \Vert v_2\Vert _{Z^{\sigma ,s,b}}\Vert v_3\Vert _{Z^{\sigma ,s,b}}, \end{aligned}$$

finishing the proof. \(\square \)

In what follows, we present some results that will be useful to derive almost conserved quantities.

Lemma 2.6

(Lemma 5 in [39]) Let \(\sigma \ge 0\), \(s\in \mathbb {R}\), \(-\frac{1}{2}<b<\frac{1}{2}\) and \(T>0\). Then, for any time interval \(I\subset [-T,T]\), we have

$$\begin{aligned} \left\| \chi _{I}w\right\| _{X^{\sigma ,s,b}}\le C\left\| w\right\| _{X_{T}^{\sigma ,s,b}}, \end{aligned}$$

where \(\chi _{I}\) is the characteristic function of I and \(C>0\) depends only on b.

Lemma 2.7

(Lemma 3 in [4]) For \(\sigma >0\), \(\theta \in [0,1]\) and \(\alpha ,\beta ,\gamma \in \mathbb {R}\), the following estimate holds

$$\begin{aligned} e^{\sigma |\alpha |}e^{\sigma |\beta |}e^{\sigma |\gamma |} - e^{\sigma |\alpha +\beta +\gamma |} \le \left[ 2\sigma \min \left\{ |\alpha |+ |\beta |, |\alpha |+ |\gamma |, |\beta |+ |\gamma | \right\} \right] ^{\theta } e^{\sigma |\alpha |}e^{\sigma |\beta |}e^{\sigma |\gamma |}. \end{aligned}$$
(2.14)
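Inequality (2.14) can also be sampled numerically as a sanity check (random \(\sigma \in (0,2]\), \(\theta \in [0,1)\), \(\alpha ,\beta ,\gamma \in [-5,5]\); a small tolerance accounts for floating point rounding):

```python
import math
import random

def lemma_2_7_holds(sigma, theta, a, b, c):
    """Check (2.14) at one sample point (a, b, c stand for alpha, beta, gamma)."""
    triple = math.exp(sigma * (abs(a) + abs(b) + abs(c)))
    lhs = triple - math.exp(sigma * abs(a + b + c))
    m = min(abs(a) + abs(b), abs(a) + abs(c), abs(b) + abs(c))
    rhs = (2.0 * sigma * m) ** theta * triple
    return lhs <= rhs * (1.0 + 1e-9) + 1e-9  # tolerance for rounding

random.seed(1)
ok = all(lemma_2_7_holds(random.uniform(0.01, 2.0), random.random(),
                         random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0),
                         random.uniform(-5.0, 5.0))
         for _ in range(10000))
print(ok)
```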

We finish this section by stating one more lemma that will be useful in obtaining the almost conserved quantity for the mKdV equation; it is an immediate consequence of the well-known Strichartz-type estimate

$$\begin{aligned} \Vert u\Vert _{L^6_xL^6_t} \le C\Vert u\Vert _{Y^{0,b}}, \;\; \text {for all } b>\frac{1}{2}. \end{aligned}$$
(2.15)

The following result is immediate from (2.15) and the generalized Hölder inequality.

Lemma 2.8

For all \(u_1\), \(u_2\), \(u_3\) and \(b> \frac{1}{2}\), we have

$$\begin{aligned} \Vert u_1 u_2 u_3\Vert _{L^2_x L^2_t} \le C \Vert u_1\Vert _{Y^{0,b}}\Vert u_2\Vert _{Y^{0,b}}\Vert u_3\Vert _{Y^{0,b}}. \end{aligned}$$
(2.16)

3 Local well-posedness: Proof of Theorems 1.1 and 1.2

In this section we prove the local well-posedness results by using the Gevrey–Bourgain’s spaces. We use the standard strategy based on a fixed point argument for the iteration map defined via the solution of the corresponding integral equation, commonly known as Duhamel’s formula,

$$\begin{aligned} w(t) =W(t)w_0 - \int _0^{t} W(t-t')f(w)(x,t') d t', \end{aligned}$$

where W(t) is the unitary group associated to the linear problem, \(w_0\) is the initial data and f(w) is the nonlinear part. This is a classical strategy in the literature (see for example [7, 23] for the KdV equation). For the sake of completeness, we provide a detailed proof for the tNLS equation; the proof for the mKdV equation follows similarly.

Proof of Theorem 1.2

Let \(\sigma >0\), \(v_0\in G^{\sigma ,s}(\mathbb {R})\) with \(-\frac{1}{4}< s \le 0\) and \(\psi _T\) be the cut-off function as defined earlier. Let us define a solution map \(\Phi _T\) for \(0<T\le 1\) given by

$$\begin{aligned} \Phi _T(v) = \psi (t)W(t)v_0 - \psi _T(t)\int _0^{t} W(t-t')(i\gamma |v|^2v)(x,t') d t'. \end{aligned}$$
(3.1)

Our main goal is to show the existence of a lifespan \(T>0\) such that \(\Phi _T\) is a contraction map on a suitable complete space. In other words, we will prove that there are \(b>\frac{1}{2}\) and \(T_0=T_0(\Vert v_0\Vert _{G^{\sigma , s}})>0\) such that \( \Phi _{T_0}: B(r)\rightarrow B(r) \) is a contraction map, where \( B(r) = \left\{ v\in Z_{T_0}^{\sigma ,s,b};\; \left\| v\right\| _{Z_{T_0}^{\sigma ,s,b}}\le r\right\} \) with \(r=2c\Vert v_0\Vert _{G^{\sigma ,s}}\) and c is a positive constant depending only on s and b.

Indeed, applying the nonlinear estimate (2.7) and the linear inequalities (2.1) and (2.2), for all \(v\in B(r)\), we obtain

$$\begin{aligned} \Vert \Phi _{T_0}(v)\Vert _{Z^{\sigma ,s,b}} \le c\Vert v_0\Vert _{G^{\sigma ,s}} +cT_0^{\frac{1}{a}}\Vert v\Vert _{Z^{\sigma ,s,b}}^3 \le \frac{r}{2} +cT_0^{\frac{1}{a}}r^3, \end{aligned}$$
(3.2)

where \(\frac{1}{a}=1-(b-b')>0\) with b and \(b'\) given as in Proposition 2.5.

By choosing

$$\begin{aligned} T_0\le (2cr^2)^{-a}, \end{aligned}$$
(3.3)

one can readily get from (3.2) that \( \Vert \Phi _{T_0}(v)\Vert _{Z^{\sigma ,s,b}}\le r\) for all \(v\in B(r)\), showing the inclusion \(\Phi _{T_0}(B(r))\subset B(r)\).

Using (2.2) once again, for all \(v_1,v_2\in B(r)\), we have

$$\begin{aligned} \Vert \Phi _{T_0}(v_1)-\Phi _{T_0}(v_2)\Vert _{Z^{\sigma ,s,b}} \le cT_0^{\frac{1}{a}}\big \Vert |v_1|^2v_1-|v_2|^2v_2\big \Vert _{Z^{\sigma ,s,b'}}. \end{aligned}$$

Since \(|v_1|^2v_1-|v_2|^2v_2 = (v_1-v_2)(|v_1|^2+\overline{v}_1v_2)+\overline{(v_1-v_2)}v_2^2\), we get

$$\begin{aligned} \Vert \Phi _{T_0}(v_1)-\Phi _{T_0}(v_2)\Vert _{Z^{\sigma ,s,b}}&\le cT_0^{\frac{1}{a}}\Vert v_1-v_2\Vert _{Z^{\sigma ,s,b}}(\Vert v_1\Vert ^2_{Z^{\sigma ,s,b}}+\Vert v_1\Vert _{Z^{\sigma ,s,b}}\Vert v_2\Vert _{Z^{\sigma ,s,b}}\\&\quad +\Vert v_2\Vert ^2_{Z^{\sigma ,s,b}})\\&\le cT_0^{\frac{1}{a}}3r^2\Vert v_1-v_2\Vert _{Z^{\sigma ,s,b}}. \end{aligned}$$

Now, if we choose \(T_0\) also satisfying

$$\begin{aligned} T_0 < (3cr^2)^{-a}, \end{aligned}$$
(3.4)

it can easily be shown that \(\Phi _{T_0}\) is a contraction map. To complete the proof, it is sufficient to choose a lifespan \(0<T_0\le 1\) satisfying (3.3) and (3.4).

Therefore, \(\Phi _{T_0}\) satisfies the desired requirements by considering

$$\begin{aligned} T_0 = \frac{c_0}{(1+\Vert v_0\Vert ^2_{G^{\sigma ,s}})^a}, \end{aligned}$$
(3.5)

for an appropriate constant \(c_0>0\) depending on s and b. Hence, \(\Phi _{T_0}\) admits a unique fixed point, which is a local in time solution of (1.2) satisfying

$$\begin{aligned} \Vert v\Vert _{Z^{\sigma ,s,b}_{T_0}} \le r = c\Vert v_0\Vert _{G^{\sigma ,s}}. \end{aligned}$$
(3.6)

Also, one can show in an analogous manner that the solution depends continuously on \(v_0\), thereby completing the proof of Theorem 1.2. \(\square \)
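As a quick sanity check on the arithmetic behind (3.3)–(3.5): with illustrative values of \(c\), \(a\) and \(c_0\) (placeholders, not the actual constants of the proof), the lifespan (3.5) satisfies both smallness conditions for data of every size, since \((1+\Vert v_0\Vert ^2)^a\) dominates \(\Vert v_0\Vert ^{2a}\):

```python
def lifespan(norm_v0, a=2.0, c0=1.0 / 145.0):
    """Illustrative version of (3.5): T0 = c0 / (1 + ||v0||^2)^a
    (a and c0 are placeholder values, not the actual constants of the proof)."""
    return c0 / (1.0 + norm_v0 ** 2) ** a

def smallness_conditions_hold(norm_v0, c=1.0, a=2.0):
    r = 2.0 * c * norm_v0                            # radius of the ball B(r)
    T0 = lifespan(norm_v0, a)
    into_ball = T0 <= (2.0 * c * r ** 2) ** (-a)     # condition (3.3)
    contraction = T0 < (3.0 * c * r ** 2) ** (-a)    # condition (3.4)
    return into_ball and contraction and T0 <= 1.0

print(all(smallness_conditions_hold(n) for n in (0.1, 1.0, 10.0, 100.0)))
```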

Proof of Theorem 1.1

The idea of the proof of this theorem is similar to that of Theorem 1.2; the only difference is that in this case we use the trilinear estimate (2.3) from Lemma 2.2. So, we omit the details. \(\square \)

Remark 3.1

The bound of the local solution given in (3.6) plays an important role in the construction of a global solution to be shown in the last section of this work. Of course, for the local solution \(u\in Y_{T_0}^{\sigma , s, b}\) of the mKdV equation we also have an analogous bound

$$\begin{aligned} \Vert u\Vert _{Y^{\sigma ,s,b}_{T_0}} \le c\Vert u_0\Vert _{G^{\sigma ,s}}. \end{aligned}$$
(3.7)

4 Almost conserved quantities

In this section we will introduce almost conserved quantities associated to the mKdV and tNLS equations and establish appropriate estimates for them. Taking into consideration the conserved quantities in (1.3), (1.4) and (1.5), we define for the mKdV equation

$$\begin{aligned} E_\sigma (t) = \Vert u(t)\Vert ^2_{G^{\sigma ,1}}-\frac{\mu }{6} \Vert e^{\sigma |D_x|}u\Vert _{L^4_x}^4, \end{aligned}$$
(4.1)

and for the tNLS equation

$$\begin{aligned} M_\sigma (t) = \Vert v(t)\Vert ^2_{G^{\sigma ,0}}. \end{aligned}$$
(4.2)

Note that for \(\sigma =0\), (4.1) and (4.2) turn out to be the conserved quantities (1.4) and (1.5), respectively. However, for \(\sigma >0\) they are no longer conserved by the flow. We will show that these quantities are almost conserved by deriving some appropriate estimates. For this purpose, we need estimates in the Bourgain space norm for the following expressions

$$\begin{aligned} F(U):= \frac{\mu }{3} \partial _x\Big [ U^3-e^{\sigma |D_x|}\big ((e^{-\sigma |D_x|} U)^3\big ) \Big ], \text { for the mKdV equation, } \end{aligned}$$
(4.3)

with \(\mu =\pm 1\) and

$$\begin{aligned} G(V):= -\Big [ |V|^2V-e^{\sigma |D_x|}\big (|e^{-\sigma |D_x|} V|^2e^{-\sigma |D_x|} V \big ) \Big ], \text { for the tNLS equation, }\nonumber \\ \end{aligned}$$
(4.4)

which is the content of the next two lemmas.

Lemma 4.1

Let F be as defined in (4.3) and \(\sigma >0\). Then, there is some \(\frac{1}{2}<b<1\) such that for all \(\ell \in [0,\frac{3}{4}]\)

$$\begin{aligned} \Vert F(U)\Vert _{L^2_xL^2_t}&\le C \sigma ^{\ell }\Vert U\Vert ^3_{Y^{1,b}},\end{aligned}$$
(4.5)
$$\begin{aligned} \Vert \partial _xF(U)\Vert _{Y^{0,b-1}}&\le C \sigma ^{\ell }\Vert U\Vert ^3_{Y^{1,b}}, \end{aligned}$$
(4.6)

for some constant \(C>0\) independent of \(\sigma \).

Proof

We start the proof by observing that

$$\begin{aligned} |\widehat{F(U)}(\xi ,\tau )| \le C|\xi | \int _{*} (1-e^{-\sigma (|\xi _1|+|\xi _2|+|\xi _3|-|\xi |)})| {\widehat{U}}(\xi _1,\tau _1)|| {\widehat{U}}(\xi _2,\tau _2)||{\widehat{U}}(\xi _3,\tau _3)|,\nonumber \\ \end{aligned}$$
(4.7)

where \(\int _*\) denotes the integral over the set \(\xi =\xi _1 +\xi _2+\xi _3\) and \(\tau =\tau _1+\tau _2+\tau _3\).

Now, from the classical inequality

$$\begin{aligned} e^x-1 \le x^{\ell }e^x, \;\text { for all }\; x\ge 0 \;\text { and }\; \ell \in [0,1], \end{aligned}$$

we get

$$\begin{aligned} 1-e^{-\sigma (|\xi _1|+|\xi _2|+|\xi _3|-|\xi |)} \le \sigma ^{\ell } (|\xi _1|+|\xi _2|+|\xi _3|-|\xi |)^{\ell }. \end{aligned}$$
(4.8)
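As a quick sanity check (an illustration only, not part of the proof), the classical inequality above, in the equivalent form \(1-e^{-x}\le x^{\ell }\) that is actually used to get (4.8), can be verified numerically on a sample grid:

```python
import math

# Check 1 - exp(-x) <= x**ell for x > 0 and ell in [0, 1]; this is the
# classical inequality e^x - 1 <= x^ell e^x after multiplying by e^{-x}.
# For x >= 1 one has x**ell >= 1, while for 0 < x < 1 one has
# 1 - exp(-x) <= x <= x**ell, which is exactly what the loop confirms.
xs = [i / 100 for i in range(1, 2001)]      # x in (0, 20]
ells = [j / 10 for j in range(11)]          # ell in {0, 0.1, ..., 1}
for x in xs:
    for ell in ells:
        assert 1 - math.exp(-x) <= x ** ell + 1e-12
print("inequality verified on the sample grid")
```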

Let \(\xi _{\max }, \xi _{\text {med}}\) and \(\xi _{\min }\) be the maximum, median and minimum values of \(\{|\xi _1|,|\xi _2|,|\xi _3|\}\). As shown in [42] (more precisely, see the proof of Lemma 7 there), we have

$$\begin{aligned} |\xi _1|+|\xi _2|+|\xi _3|-|\xi | \le 12\xi _{\text {med}}, \end{aligned}$$

and consequently the estimate (4.8) yields

$$\begin{aligned} 1-e^{-\sigma (|\xi _1|+|\xi _2|+|\xi _3|-|\xi |)} \le C\sigma ^{\ell }\xi _{\text {med}}^{\ell }. \end{aligned}$$
(4.9)
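The frequency bound quoted from [42] can likewise be probed numerically. The randomized check below (again an illustration only) confirms \(|\xi _1|+|\xi _2|+|\xi _3|-|\xi |\le 12\xi _{\text {med}}\) on the constraint set \(\xi =\xi _1+\xi _2+\xi _3\):

```python
import random

# Randomized check of |x1| + |x2| + |x3| - |x1 + x2 + x3| <= 12 * med,
# where med is the median of {|x1|, |x2|, |x3|}.  The reverse triangle
# inequality in fact gives the sharper constant 4: the left side is at
# most twice the sum of the two smallest moduli, hence at most 4 * med.
random.seed(0)
for _ in range(100_000):
    x1, x2, x3 = (random.uniform(-100.0, 100.0) for _ in range(3))
    med = sorted(abs(v) for v in (x1, x2, x3))[1]
    assert abs(x1) + abs(x2) + abs(x3) - abs(x1 + x2 + x3) <= 12 * med + 1e-9
print("frequency bound verified on random samples")
```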

Thus, using (4.9) in (4.7), we obtain

$$\begin{aligned} |\widehat{F(U)}(\xi ,\tau )| \le C\sigma ^{\ell } |\xi | \int _{*} \xi _{\text {med}}^{\ell }| {\widehat{U}}(\xi _1,\tau _1)|| {\widehat{U}}(\xi _2,\tau _2)||{\widehat{U}}(\xi _3,\tau _3)|. \end{aligned}$$
(4.10)

Now, we move to prove (4.5). In order to simplify the exposition, without loss of generality, we can consider \(|\xi _1|\le |\xi _2|\le |\xi _3|\). With this consideration, one has \( |\xi |\xi _\text {med}^{\ell } \le 3|\xi _3||\xi _2|^{\ell }, \) and consequently from (4.10) and Plancherel’s identity, we obtain

$$\begin{aligned}{} & {} \Vert F(U)\Vert _{L^2_xL^2_t} \le C\sigma ^{\ell }\Big \Vert \int _*|{\widehat{U}}(\xi _1,\tau _1)|| \widehat{D_x^{\ell }U}(\xi _2,\tau _2)||\widehat{D_xU}(\xi _3,\tau _3)|\Big \Vert _{L^2_\xi L^2_\tau }\\{} & {} \quad = C\sigma ^{\ell } \big \Vert w_1 w_2 w_3\big \Vert _{L^2_x L^2_t}, \end{aligned}$$

where \(w_1\), \(w_2\) and \(w_3\) are defined by \(\widehat{w_1}(\xi ,\tau )=|{\widehat{U}}(\xi ,\tau )|\), \(\widehat{w_2}(\xi ,\tau )=|\widehat{D_x^{\ell } U}(\xi ,\tau )|\) and \(\widehat{w_3}(\xi ,\tau )=|\widehat{D_x U}(\xi ,\tau )|\). Then, by using Lemma 2.8, we obtain

$$\begin{aligned} \Vert F(U)\Vert _{L^2_xL^2_t} \le C\sigma ^{\ell } \Vert w_1\Vert _{Y^{0,b}}\Vert w_2\Vert _{Y^{0,b}} \Vert w_3\Vert _{Y^{0,b}} \le C\sigma ^{\ell } \Vert U\Vert _{Y^{1,b}}^3, \end{aligned}$$

since \(0\le \ell \le 1\), and this finishes the proof of (4.5).

Concerning (4.6), first we observe that for \(0\le k\le 1\), one has

$$\begin{aligned} \Vert \partial _xF(U)\Vert _{Y^{0,b-1}} \le \big \Vert \langle \tau -\xi ^3\rangle ^{b-1}\langle \xi \rangle ^{k}|\xi |^{1-k}|\widehat{F(U)}(\xi ,\tau )|\big \Vert _{L^2_\xi L^2_\tau }, \end{aligned}$$
(4.11)

since \(\langle \xi \rangle ^{-k}\le |\xi |^{-k}\), for all \(\xi \ne 0\).

Assuming again \(|\xi _1|\le |\xi _2|\le |\xi _3|\) and using (4.10), we get from (4.11) that

$$\begin{aligned} \begin{aligned} \!\!\Vert \partial _xF(U)\Vert _{Y^{0,b-1}} \!&\!\le \! C\sigma ^{\ell } \Big \Vert \langle \tau -\xi ^3\rangle ^{b-1}\!\!\langle \xi \rangle ^{k} |\xi |\\&\quad \int _*|\xi _2|^{\ell }|\xi _3|^{1-k}| {\widehat{U}}(\xi _1,\tau _1)|| {\widehat{U}}(\xi _2,\tau _2)||{\widehat{U}}(\xi _3,\tau _3)| \Big \Vert _{L^2_\xi L^2_\tau }\\ \!&\!= C\sigma ^{\ell } \big \Vert \partial _x(w_1 w_2 w_4)\big \Vert _{Y^{k,b-1}}, \end{aligned} \end{aligned}$$
(4.12)

where in the first inequality we used \(|\xi |^{1-k}\le 3^{1-k} |\xi _3|^{1-k}\) and \(\widehat{w_4 }(\xi ,\tau )=|\widehat{D_x^{1-k} U}(\xi ,\tau )|\).

Now, considering \(k=\frac{1}{4}\), we can use the trilinear estimate (2.3) with \(b=\frac{1}{2}+\varepsilon \) in (4.12), to obtain

$$\begin{aligned} \Vert \partial _xF(U)\Vert _{Y^{0,b-1}} \le C\sigma ^{\ell } \Vert w_1\Vert _{Y^{\frac{1}{4},b}} \Vert w_2\Vert _{Y^{\frac{1}{4},b}}\Vert w_4\Vert _{Y^{\frac{1}{4},b}}. \end{aligned}$$
(4.13)

Finally, since \(0\le \ell \le \frac{3}{4}\) the estimate (4.13) yields

$$\begin{aligned} \Vert \partial _xF(U)\Vert _{Y^{0,b-1}} \le C\sigma ^{\ell } \Vert U\Vert _{Y^{1,b}}^3, \end{aligned}$$

which proves the desired estimate (4.6). \(\square \)

Remark 4.2

As can be inferred from the proof, the estimate (4.5) holds for \(0\le \ell \le 1\). However, the estimate (4.6) holds only for \(0\le \ell \le 3/4\). This latter restriction forces us to take \(\frac{3}{4}\) as the maximum exponent of \(\sigma \) in the almost conserved quantity, see (4.21) below.

Lemma 4.3

Let G be as in (4.4) and \(\sigma >0\). Then, for any \(\theta \in [0,\frac{1}{4})\) there is some \(\frac{1}{2}<b<1\) such that

$$\begin{aligned} \Vert G(V)\Vert _{Z^{0,b-1}} \le C\sigma ^{\theta }\Vert V\Vert ^3_{Z^{0,b}}, \end{aligned}$$
(4.14)

for some constant \(C>0\) independent of \(\sigma \).

Proof

We start by observing that

$$\begin{aligned} \begin{aligned} \big |\widehat{G(V)}(\xi ,\tau )\big |&\le C \int \! |e^{\sigma |\xi |}\!-\!e^{\sigma (|\xi -\xi _1-\xi _2|+|\xi _2|+|\xi _1|)}| |{\widehat{v}}(\xi -\xi _{1}-\xi _2,\tau -\tau _{1}-\tau _2)|\times \\&\qquad \quad \times | {\widehat{v}}(\xi _2,\tau _2) ||\overline{{\widehat{v}}}(-\xi _1,-\tau _1)|d{\xi _{2}}d{\tau _{2}}d{\xi _{1}}d{\tau _{1}}, \end{aligned}\nonumber \\ \end{aligned}$$
(4.15)

where \(v=e^{-\sigma |D_x| }V\). Using the estimate (2.14), it follows from (4.15) that

$$\begin{aligned} \begin{aligned} \big |\widehat{G(V)}(\xi ,\tau )\big |&\le C\sigma ^\theta \int _{\mathbb {R}^4} \min \{|\xi -\xi _1-\xi _2|+|\xi _1|,|\xi -\xi _1-\xi _2|+|\xi _2|,|\xi _1|+|\xi _2|\}^\theta \times \\&\qquad \times |{\widehat{V}}(\xi -\xi _{1}-\xi _2,\tau -\tau _{1}-\tau _2)| |{\widehat{V}}(\xi _2,\tau _2) | |\overline{{\widehat{V}}}(-\xi _1,-\tau _1)|d{\xi _{2}}d{\tau _{2}}d{\xi _{1}}d{\tau _{1}}. \end{aligned}\nonumber \\ \end{aligned}$$
(4.16)

Also, we have the following inequality

$$\begin{aligned}{} & {} \min \{|\xi -\xi _1-\xi _2|+|\xi _1|,|\xi -\xi _1-\xi _2|+|\xi _2|,|\xi _1|+|\xi _2|\}\nonumber \\{} & {} \quad \le 3 \frac{\langle \xi -\xi _1-\xi _2\rangle \langle \xi _2\rangle \langle \xi _1\rangle }{\langle \xi \rangle }, \end{aligned}$$
(4.17)

which is obtained from inequality (3.4) in [4] by considering \(\xi _1+\xi _2\) in place of \(\xi _1\).
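Inequality (4.17) can also be checked numerically. The randomized sketch below (with \(a=\xi -\xi _1-\xi _2\), \(b=\xi _2\), \(c=\xi _1\), so that \(\xi =a+b+c\)) is only an illustration of the bound, not a proof:

```python
import math
import random

def jb(x):
    """Japanese bracket <x> = sqrt(1 + x^2)."""
    return math.sqrt(1.0 + x * x)

# Check min{|a|+|c|, |a|+|b|, |b|+|c|} <= 3 <a><b><c> / <a+b+c>.
# The bound holds since the left side is the sum of the two smallest
# moduli, hence at most <x><y> for those two (as <x><y> >= x + y,
# because (xy - 1)^2 >= 0), while <a+b+c> <= <a> + <b> + <c> is at
# most three times the largest of <a>, <b>, <c>.
random.seed(1)
for _ in range(100_000):
    a, b, c = (random.uniform(-50.0, 50.0) for _ in range(3))
    lhs = min(abs(a) + abs(c), abs(a) + abs(b), abs(b) + abs(c))
    rhs = 3.0 * jb(a) * jb(b) * jb(c) / jb(a + b + c)
    assert lhs <= rhs + 1e-9
print("inequality (4.17) verified on random samples")
```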

Inserting (4.17) in (4.16), we obtain

$$\begin{aligned} \begin{aligned} \big |\widehat{G(V)}(\xi ,\tau )\big |&\le C\sigma ^\theta \int \langle \xi +\xi _1-\xi _2\rangle ^\theta \langle \xi _2\rangle ^\theta \langle \xi _1\rangle ^{\theta } \langle \xi \rangle ^{-\theta } |{\widehat{V}}(\xi +\xi _{1}-\xi _2,\tau +\tau _{1}-\tau _2)|\\&\quad \times |{\widehat{V}}(\xi _2,\tau _2) | |\overline{{\widehat{V}}}(\xi _1,\tau _1)|d{\xi _{2}}d{\tau _{2}}d{\xi _{1}}d{\tau _{1}}, \end{aligned}\nonumber \\ \end{aligned}$$
(4.18)

where we made the change of variables \((\xi _1,\tau _1)\rightarrow (-\xi _1,-\tau _1)\).

Now, using the notations in (2.10), one can easily obtain from (4.18) that

$$\begin{aligned} \begin{aligned} \Vert G(V)\Vert _{Z^{0,b-1}}&\le C\sigma ^\theta \Bigg \Vert \int f(\eta +\eta _1-\eta _2)f(\eta _2)\overline{f}(\eta _1)K(\eta ,\eta _1,\eta _2)d{\eta _1}d{\eta _2} \Bigg \Vert _{L^2_{\eta }}, \end{aligned}\nonumber \\ \end{aligned}$$
(4.19)

where \(f(\xi ,\tau )=\langle \tau -\phi (\xi )\rangle ^b |{\widehat{V}}(\xi ,\tau )|\) and

$$\begin{aligned}{} & {} K(\eta ,\eta _1,\eta _2)\\{} & {} = \frac{\langle \xi \rangle ^{-\theta }\langle \xi +\xi _1-\xi _2\rangle ^\theta \langle \xi _2\rangle ^\theta \langle \xi _1\rangle ^\theta }{\langle \tau -\phi (\xi )\rangle ^{1-b} \langle \tau +\tau _1-\tau _2-\phi (\xi +\xi _1-\xi _2)\rangle ^b\langle \tau _1-\phi (\xi _1)\rangle ^b\langle \tau _2-\phi (\xi _2) \rangle ^b}. \end{aligned}$$

Next, applying the same arguments used to obtain (2.12) in Proposition 2.5, we have

$$\begin{aligned}&\Bigg \Vert \int f(\eta +\eta _1-\eta _2)f(\eta _2)\overline{f}(\eta _1)K(\eta ,\eta _1,\eta _2)d{\eta _1}d{\eta _2} \Bigg \Vert _{L^2_{\eta }}\nonumber \\&\quad \le \Vert f\Vert ^3_{L^2}\Vert K(\eta ,\eta _1,\eta _2)\Vert _{L^\infty _\eta L^2_{\eta _1,\eta _2}}. \end{aligned}$$
(4.20)

Using Lemma 2.3 with \(s=-\theta \), \(\frac{7}{12}<b<\frac{11}{12}\) and \(b'=b-1\), we deduce from (4.19) and (4.20) the following estimate

$$\begin{aligned} \Vert G(V)\Vert _{Z^{0,b-1}}&\le C\sigma ^\theta \Vert f\Vert _{L^2}^3 = C\sigma ^\theta \Vert V\Vert ^3_{Z^{0,b}}, \end{aligned}$$

thereby finishing the proof of (4.14). \(\square \)

In the sequel we use the estimates obtained in Lemmas 4.1 and 4.3 to prove that the quantities \(E_{\sigma }(t)\) and \(M_{\sigma }(t)\) defined in (4.1) and (4.2) are almost conserved. This is the content of the following propositions.

Proposition 4.4

Let \(\sigma >0\) and \(\ell \in [0,\frac{3}{4}]\). There exist \(C>0\) and \(b>\frac{1}{2}\) such that for any solution \(u\in Y^{\sigma ,1,b}_T\) to the IVP (1.1) in the interval [0, T], we have

$$\begin{aligned} \sup \limits _{t\in [0,T]} E_\sigma (t) \le E_\sigma (0) + C\sigma ^\ell \Vert u\Vert ^4_{Y^{\sigma ,1,b}_T} \big (1+\Vert u\Vert ^2_{Y^{\sigma ,1,b}_T}\big ), \end{aligned}$$
(4.21)

where \(E_\sigma (t)\) is defined in (4.1).

Proof

Let \(U=e^{\sigma |D_x|}u\). First, we observe that

$$\begin{aligned} \frac{d}{dt}\big (E_\sigma (t)\big ) = 2\int U\partial _t Ud x +2\int \partial _xU\partial _x(\partial _t U)d x -\frac{2\mu }{3} \int U^3\partial _tU d x. \end{aligned}$$
(4.22)

Applying the operator \(e^{\sigma |D_x|}\) to the mKdV equation (1.1), we get

$$\begin{aligned} \partial _tU+\partial _x^3U+\mu U^2\partial _xU = F(U), \end{aligned}$$
(4.23)

where F(U) is defined as in (4.3). Using (4.23) in each term of (4.22), we obtain

$$\begin{aligned} \int U\partial _t Ud x&= -\int U\partial _x^3Ud x -\frac{\mu }{4} \int \partial _x(U^4)d x +\int UF(U)d x, \\ \int \partial _xU\partial _x(\partial _t U)d x&= -\int \partial _xU\partial _x^4Ud x -\mu \int \partial _xU\partial _x(U^2 \partial _xU)d x +\int \partial _xU\partial _x(F(U))d x,\\ \int U^3\partial _tU d x&=-\int U^3\partial _x^3U d x -\frac{\mu }{6}\int \partial _x(U^6)d x +\int U^3F(U)d x . \end{aligned}$$

It follows from integration by parts and the fact that U and all its spatial derivatives tend to zero as |x| tends to infinity (see [39] for a detailed argument) that

$$\begin{aligned} \int U\partial _t U d x&= \int UF(U)d x, \end{aligned}$$
(4.24)
$$\begin{aligned} \int \partial _xU\partial _x(\partial _t U)d x&= -\frac{\mu }{3}\int U^3 \partial _x^3Ud x +\int \partial _xU\partial _x(F(U))d x,\end{aligned}$$
(4.25)
$$\begin{aligned} \int U^3\partial _tU d x&=-\int U^3\partial _x^3U d x +\int U^3F(U)d x . \end{aligned}$$
(4.26)
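The only nontrivial cancellation here is the integration-by-parts identity \(\int \partial _xU\,\partial _x(U^2\partial _xU)\,dx = \tfrac{1}{3}\int U^3\partial _x^3U\,dx\) behind (4.25). As a sanity check (an illustration only, using the hand-computed derivatives of the sample profile \(U(x)=(1+x)e^{-x^2}\); an even profile would make both sides vanish by parity), it can be confirmed numerically:

```python
import math

# Hand-computed derivatives of the decaying test profile U = (1+x)e^{-x^2}.
def U(x):    return (1 + x) * math.exp(-x * x)
def Ux(x):   return (1 - 2*x - 2*x*x) * math.exp(-x * x)
def Uxx(x):  return (4*x**3 + 4*x**2 - 6*x - 2) * math.exp(-x * x)
def Uxxx(x): return (-8*x**4 - 8*x**3 + 24*x**2 + 12*x - 6) * math.exp(-x * x)

def trapz(f, a=-10.0, b=10.0, n=50_000):
    """Trapezoidal rule; very accurate for smooth, rapidly decaying f."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# Identity used between (4.22) and (4.25):
#   \int U_x (U^2 U_x)_x dx = (1/3) \int U^3 U_xxx dx,
# expanded via (U^2 U_x)_x = 2 U U_x^2 + U^2 U_xx.
lhs = trapz(lambda x: Ux(x) * (2 * U(x) * Ux(x)**2 + U(x)**2 * Uxx(x)))
rhs = trapz(lambda x: U(x)**3 * Uxxx(x)) / 3
assert abs(lhs - rhs) < 1e-8
print("integration-by-parts identity confirmed numerically")
```

Integrating by parts, both sides reduce to \(\int U(\partial _xU)^3\,dx\), which is what the numerical agreement reflects.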

Now, plugging (4.24), (4.25) and (4.26) in (4.22), we arrive at

$$\begin{aligned} \frac{d}{dt}E_\sigma (t)&= 2\int UF(U)d x +2\int \partial _xU\partial _x(F(U))d x -\frac{2\mu }{3} \int U^3F(U) d x. \end{aligned}$$
(4.27)

Integrating (4.27) in time over \([0,t']\) for \(0<t'\le T\), we obtain

$$\begin{aligned} E_\sigma (t')= E_\sigma (0) + R_\sigma (t'), \end{aligned}$$
(4.28)

where

$$\begin{aligned} R_\sigma (t')= & {} 2\iint \chi _{[0,t']}UF(U)d x d t + 2\iint \chi _{[0,t']}\partial _xU\partial _x(F(U))d x d t \\{} & {} \quad - \frac{2\mu }{3} \iint \chi _{[0,t']}U^3F(U) d x d t. \end{aligned}$$

Now, we move to estimate \(|R_\sigma (t')|\) for all \(0<t'\le T\). For the first and the third terms of \(R_\sigma (t')\) we use the Cauchy–Schwarz inequality, Lemmas 2.8 and 2.6 and the estimate (4.5) restricted to the time interval \([0,t']\), to obtain

$$\begin{aligned} \Big |\iint \chi _{[0,t']}UF(U)d x d t \Big | \le \Vert \chi _{[0,t']}U\Vert _{L^2_xL^2_t} \Vert \chi _{[0,t']}F(U)\Vert _{L^2_xL^2_t} \le C\sigma ^{\ell }\Vert u\Vert _{Y^{\sigma ,1,b}_T}^4\nonumber \\ \end{aligned}$$
(4.29)

and

$$\begin{aligned} \Big | \iint \chi _{[0,t']}U^3F(U) d x d t\Big | \le \Vert \chi _{[0,t']}U^3\Vert _{L^2_xL^2_t} \Vert \chi _{[0,t']}F(U)\Vert _{L^2_xL^2_t} \le C\sigma ^{\ell }\Vert u\Vert _{Y^{\sigma ,1,b}_T}^6,\nonumber \\ \end{aligned}$$
(4.30)

for all \(0<t'\le T\).

On the other hand, we apply the Cauchy–Schwarz inequality, the estimate (4.6) restricted to the time interval \([0,t']\) and Lemma 2.6 to obtain the following estimate for the second term of \(R_\sigma (t')\)

$$\begin{aligned}{} & {} \Big |\iint \chi _{[0,t']}\partial _xU\partial _xF(U)d x d t\Big |\nonumber \\{} & {} \quad \le \Vert \chi _{[0,t']}\partial _xU\Vert _{Y^{0,1-b}} \Vert \chi _{[0,t']}\partial _xF(U)\Vert _{Y^{0,b-1}} \le C\sigma ^{\ell }\Vert u\Vert _{Y^{\sigma ,1,b}_T}^4, \end{aligned}$$
(4.31)

for some \(\frac{1}{2}<b<1\).

Finally, using (4.29), (4.30) and (4.31) in (4.28) the required estimate (4.21) follows. \(\square \)

Corollary 4.5

Let \(\sigma >0\), \(\ell \in [0,3/4]\) and \(E_\sigma (t)\) as defined in (4.1). There exists \(C>0\) such that for any solution \(u\in Y^{\sigma ,1,b}_T\) to the IVP (1.1) in the defocusing case \((\mu = -1)\), we have

$$\begin{aligned} \sup \limits _{t\in [0,T]} E_\sigma (t) \le E_\sigma (0) + C\sigma ^{\ell } E_{\sigma }(0)^2 \big (1+E_{\sigma }(0)\big ), \qquad \ell \in \Big [0, \frac{3}{4}\Big ]. \end{aligned}$$
(4.32)

Proof

First note that, for \(\mu =-1\) from (4.1), we have

$$\begin{aligned} E_{\sigma }(0) = \Vert u_0\Vert ^2_{G^{\sigma ,1}}+\frac{1}{6} \Vert e^{\sigma |D_x|} u_0 \Vert _{L^4_x}^4 \ge \Vert u_0\Vert ^2_{G^{\sigma ,1}}. \end{aligned}$$
(4.33)

Now, using the estimates (3.7) and (4.33) in the almost conserved quantity (4.21), we get the required estimate (4.32). \(\square \)

Remark 4.6

Observe that, for the solution to the IVP (1.1) in the focusing case \((\mu = 1)\), from (4.1) we obtain

$$\begin{aligned} E_{\sigma }(0) = \Vert u_0\Vert ^2_{G^{\sigma ,1}}-\frac{1}{6} \Vert e^{\sigma |D_x|} u_0 \Vert _{L^4_x}^4 \le \Vert u_0\Vert ^2_{G^{\sigma ,1}}, \end{aligned}$$
(4.34)

which cannot be used to obtain an estimate of the form (4.32). As can be seen in the proof of Theorem 1.4, the estimate (4.32) plays a crucial role in our argument. For this reason, we only consider the defocusing mKdV equation to obtain the lower bound for the evolution of the radius of analyticity.

Proposition 4.7

Let \(\sigma >0\) and \(\theta \in [0,\frac{1}{4})\). There exist \(C>0\) and \(\frac{1}{2}<b<1\) such that for any solution \(v\in Z^{\sigma ,0,b}_T\) to the IVP (1.2) in the interval [0, T], we have

$$\begin{aligned} \sup \limits _{t\in [0,T]} M_\sigma (t) \le M_\sigma (0) + C\sigma ^\theta \Vert v\Vert ^4_{Z^{\sigma ,0,b}_T}, \end{aligned}$$
(4.35)

where \(M_\sigma (t)\) is defined as in (4.2).

Proof

Applying \(e^{\sigma |D_x|}\) to the tNLS equation in (1.2) and denoting \(V=e^{\sigma |D_x|}v\), we obtain

$$\begin{aligned} \partial _t V +i\alpha \partial _x^2V +\beta \partial _x^3 V + i\gamma |V|^2V = -i\gamma G(V), \end{aligned}$$
(4.36)

where G(V) is defined as in (4.4).

Now, multiplying (4.36) by \(\overline{V}\) and taking the real part, we get

$$\begin{aligned} \text {Re}(\overline{V}\partial _tV)-\alpha \text {Im}(\overline{V}\partial _x^2V) +\beta \text {Re}(\overline{V}\partial _x^3V) = \gamma \text {Im}\big (\overline{V}G(V)\big ), \end{aligned}$$
(4.37)

since \(\alpha , \beta \) and \(\gamma \) are real constants. One can infer from (4.37) that

$$\begin{aligned} \frac{1}{2} \partial _t(|V|^2) -\alpha \text {Im}(\partial _x(\overline{V}\partial _xV))+\beta \text {Re}(\overline{V}\partial _x^3V) = \gamma \text {Im}\big (\overline{V}G(V)\big ). \end{aligned}$$
(4.38)

Integrating (4.38) with respect to the space variable, we get

$$\begin{aligned}{} & {} \frac{1}{2}\frac{d}{dt}\!\int \! |V|^2d x - \alpha \!\int \! \text {Im}(\partial _x(\overline{V}\partial _xV))d x\nonumber \\{} & {} \qquad + \beta \!\int \! \text {Re}(\overline{V}\partial _x^3 V)d x = \gamma \!\int \! \text {Im}\big (\overline{V}G(V)\big )d x. \end{aligned}$$
(4.39)

Since V and all its spatial derivatives tend to zero when |x| tends to infinity, using integration by parts, we get

$$\begin{aligned} \frac{d}{dt}\!\int |V|^2d x = 2\gamma \int \text {Im}\big (\overline{V} G(V)\big )d x. \end{aligned}$$
(4.40)

Now, integrating (4.40) in time over the interval \([0,t']\) for any \(0<t'\le T\), we obtain

$$\begin{aligned} \Vert v(t')\Vert ^2_{G^{\sigma ,0}} = \Vert v(0)\Vert ^2_{G^{\sigma , 0}} + 2\gamma \text {Im}\Big ( \iint \chi _{[0,t']}(t)\overline{V} G(V)d x d t\Big ). \end{aligned}$$
(4.41)

Using Plancherel’s identity and Hölder’s inequality, we estimate the integral on the right-hand side of (4.41) as

$$\begin{aligned} \Big | \iint \chi _{[0,t']}(t) \overline{V} G(V)d x d t\Big | \le \Vert \chi _{[0,t']}G(V)\Vert _{Z^{0,b-1}} \Vert \chi _{[0,t']}V\Vert _{Z^{0,1-b}}, \end{aligned}$$
(4.42)

where \(\frac{1}{2}<b<1\) is chosen as in Lemma 4.3. Furthermore, using the fact that \(t'<T\), Lemma 2.6 and \(1-b<b\), we get

$$\begin{aligned} \Big | \iint \chi _{[0,t']}(t) \overline{V} G(V)d x d t\Big |&\le C\Vert v\Vert _{Z_{T}^{\sigma ,0,b}}\Vert G(V)\Vert _{Z^{0,b-1}_{T}}. \end{aligned}$$
(4.43)

Finally, putting together (4.41), (4.43) and (4.14) (restricted to the interval [0, T]), we conclude that one can choose \(b>\frac{1}{2}\) such that (4.35) holds. \(\square \)

Combining the bound (3.6) with the almost conserved quantity (4.35), we immediately obtain the following result.

Corollary 4.8

Let \(\sigma >0\) and \(M_\sigma (t)\) be as defined in (4.2). There exists \(C>0\) such that for any solution \(v\in Z^{\sigma ,0,b}_T\) to the IVP (1.2) in the interval [0, T], we have

$$\begin{aligned} \sup \limits _{t\in [0,T]} M_\sigma (t) \le M_\sigma (0) + C\sigma ^\theta M_{\sigma }(0)^2, \qquad \theta \in \Big [0, \frac{1}{4}\Big ). \end{aligned}$$
(4.44)

5 Global analytic solution: Proof of Theorems 1.4 and 1.5

We start by observing that if we prove the extension of the solution for \(s=s_0\) as stated in Theorems 1.4 and 1.5, then it can be proved for general \(s\in \mathbb {R}\) using the inclusion (1.12) (for more details we refer to the works [4, 39, 42]).

Also, due to the time reversibility of the mKdV and tNLS equations, it suffices to consider \(t\ge 0\). The ideas of the proofs of Theorems 1.4 and 1.5 are similar, relying on the almost conserved quantities in (4.32) and (4.44). For the sake of completeness we give the details of the proof of Theorem 1.4 and provide a sketch for Theorem 1.5.

Proof of Theorem 1.4

Let \(\ell \in [0, \frac{3}{4}]\) and \(\mu =-1\). Taking into consideration the above discussion, let \(u_0\in G^{\sigma _0, 1}({\mathbb {R}})\). For initial data with other values of s the proof follows by using the inclusion (1.12) (see [39] for a detailed argument). Also, the time reversibility of the mKdV equation allows us to consider \(t\ge 0\). So, given any \(T\ge T_0\), we will prove that the local solution u to the IVP (1.1) guaranteed by Theorem 1.1 belongs to \(C([0,T], G^{\sigma (T),1}(\mathbb {R}))\), with \( \sigma (T) = \min \left\{ \sigma _{0}, \dfrac{c}{T^{\frac{1}{\ell }}}\right\} . \)

From Theorem 1.1 one can infer the existence of a maximal lifespan \( T^{*}:=T^{*}(\Vert u_0\Vert _{G^{\sigma _{0},1}})\in (0,\infty ] \) such that \( u\in C([0,T^{*}), G^{\sigma _{0},1}(\mathbb {R})). \) We assume that \(T^{*}<\infty \), since otherwise the radius of analyticity remains \(\sigma _0\) for all time \(T\ge T_0\) and there is nothing to prove. Therefore, we just need to prove the following

$$\begin{aligned} u\in C([0,T], G^{\sigma (T),1}(\mathbb {R})), \;\; \text {for all } T\ge T^{*}. \end{aligned}$$
(5.1)

Observe that

$$\begin{aligned} E_{\sigma _0}(0)&= \Vert u_0\Vert ^2_{G^{\sigma _0,1}} + \frac{1}{6} \Vert e^{\sigma _0 |D_x|}u_0\Vert ^4_{L^4}\\&\le \Vert u_0\Vert ^2_{G^{\sigma _0,1}} + C \Vert D_x(e^{\sigma _0 |D_x|}u_0)\Vert _{L^2}\Vert e^{\sigma _0 |D_x|}u_0\Vert ^3_{L^2}\\&\le \Vert u_0\Vert ^2_{G^{\sigma _0,1}} + C \Vert u_0\Vert ^4_{G^{\sigma _0,1}}<\infty , \end{aligned}$$

where we used the Gagliardo–Nirenberg inequality. Additionally, since \(E_{\sigma _0}(0)\ge \Vert u_0\Vert ^2_{G^{\sigma _0,1}}\) from (4.33), we can take the lifespan \(T_0\) given as follows

$$\begin{aligned} T_0 = \frac{c_{0}}{\left( 1+E_{\sigma _0}(0)\right) ^{a}}, \end{aligned}$$

with \(c_{0}>0\), \(a>1\) as in (3.5).
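The Gagliardo–Nirenberg step used above, \(\Vert \varphi \Vert _{L^4}^4\le C\Vert \partial _x\varphi \Vert _{L^2}\Vert \varphi \Vert _{L^2}^3\), can be illustrated numerically for a Gaussian profile (an illustration for one sample profile only; it does not compute the constant C of the inequality):

```python
import math

def trapz(f, a=-10.0, b=10.0, n=50_000):
    """Trapezoidal rule; very accurate for smooth, rapidly decaying f."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# 1D Gagliardo-Nirenberg: ||phi||_{L^4}^4 <= C ||phi_x||_{L^2} ||phi||_{L^2}^3.
# For the Gaussian phi = e^{-x^2} the ratio of the two sides (with C = 1)
# equals exactly 1/sqrt(pi) ~ 0.564, by direct Gaussian integration.
l4_4 = trapz(lambda x: math.exp(-x * x) ** 4)
l2 = math.sqrt(trapz(lambda x: math.exp(-x * x) ** 2))
dx_l2 = math.sqrt(trapz(lambda x: (-2 * x * math.exp(-x * x)) ** 2))
ratio = l4_4 / (dx_l2 * l2 ** 3)
assert abs(ratio - 1 / math.sqrt(math.pi)) < 1e-6
print(f"||phi||_L4^4 / (||phi_x||_L2 ||phi||_L2^3) = {ratio:.3f}")
```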

The proof will be given by applying the local well-posedness result iteratively as many times as necessary to reach any given time \(T>T_0\). For this purpose, we fix the time step

$$\begin{aligned} 0<\rho = \frac{c_{0}}{\left( 1+2E_{\sigma _0}(0)\right) ^{a}} < T_0. \end{aligned}$$
(5.2)

In what follows, we will describe in detail the induction steps until obtaining the desired extension of the solution.

Extension in \([0,\rho ]\). This is the trivial step, since from Theorem 1.1, for any \(0<\sigma \le \sigma _0\), we already have \(u\in C([0,\rho ]; G^{\sigma , 1}(\mathbb {R}))\). Furthermore, the solution u satisfies

$$\begin{aligned} \sup \limits _{t\in [0,\rho ]} E_{\sigma }(t) \le E_{\sigma }(0) + C\sigma ^\ell E_{\sigma }(0)^2 \big (1+E_{\sigma }(0)\big )\nonumber \\ \le E_{\sigma }(0) + 8C\sigma ^\ell E_{\sigma _0}(0)^2 \big (1+E_{\sigma _0}(0)\big ). \end{aligned}$$
(5.3)

The above inequality follows from the bound (4.32).

Extension in \([\rho ,2\rho ]\). If we assume

$$\begin{aligned} 8C\sigma ^\ell E_{\sigma _0}(0) \big (1+E_{\sigma _0}(0)\big ) \le 1, \end{aligned}$$

from (5.3), we get

$$\begin{aligned} \Vert u(\rho )\Vert ^2_{G^{\sigma ,1}} \le E_{\sigma }(\rho ) \le \big [1+8C\sigma ^\ell E_{\sigma _0}(0) \big (1+E_{\sigma _0}(0)\big )\big ] E_{\sigma _0}(0) \le 2 E_{\sigma _0}(0), \end{aligned}$$
(5.4)

since \(\sigma \le \sigma _0\). Therefore, applying the local well-posedness result for the initial data \(u(\rho )\) instead of \(u_0\), we obtain that the solution u belongs to \(C([\rho , 2\rho ]; G^{\sigma ,1}(\mathbb {R}))\). Additionally, applying the almost conserved quantity (4.32) for the initial time \(\rho \), the bound (5.4) and considering again inequality (5.3) from the previous step, we have

$$\begin{aligned} \sup \limits _{t\in [\rho ,2\rho ]} E_{\sigma }(t)&\le E_{\sigma }(\rho ) + C\sigma ^\ell E_{\sigma }(\rho )^2 \big (1+E_{\sigma }(\rho )\big ) \\&\le E_{\sigma }(\rho ) + 8C\sigma ^\ell E_{\sigma _0}(0)^2 \big (1+E_{\sigma _0}(0)\big ) \\&\le E_{\sigma }(0) + 2\cdot 8C\sigma ^\ell E_{\sigma _0}(0)^2 \big (1+E_{\sigma _0}(0)\big ). \end{aligned}$$

Extension in \([(n-1)\rho ,n\rho ]\). More generally, assuming

$$\begin{aligned} (n-1)8C\sigma ^\ell E_{\sigma _0}(0) \big (1+E_{\sigma _0}(0)\big ) \le 1, \end{aligned}$$

we can guarantee the bound \( E_{\sigma }((n-1)\rho ) \le 2 E_{\sigma _0}(0) \), and consequently we can apply the local well-posedness result to extend the solution u to the space \(C([(n-1)\rho , n\rho ]; G^{\sigma ,1}(\mathbb {R}))\).

Proceeding in this way, the induction stops at the first integer n for which

$$\begin{aligned} n8C\sigma ^\ell E_{\sigma _0}(0) \big (1+E_{\sigma _0}(0)\big ) >1, \end{aligned}$$
(5.5)

and we have reached the time \(T=n\rho \) for the extension of the solution. Now, using \(T=n\rho \) in (5.5), we obtain

$$\begin{aligned} \frac{T}{\rho } 8C\sigma ^\ell E_{\sigma _0}(0) \big (1+E_{\sigma _0}(0)\big )>1. \end{aligned}$$
(5.6)

Note that T can be chosen as large as we want if \(\sigma \) is small enough. Furthermore, (5.2) and (5.6) imply

$$\begin{aligned} \sigma > \left[ \frac{\rho }{8CT E_{\sigma _0}(0) \big (1+E_{\sigma _0}(0)\big )} \right] ^{\frac{1}{\ell }} =: cT^{-\frac{1}{\ell }}, \end{aligned}$$
(5.7)

where c depends on \(c_{0}\), \(\sigma _{0}\), \(\ell \) and \(\Vert u_0\Vert _{G^{\sigma _0, 1}} \). Taking \(\ell =\frac{3}{4}\) in (5.7), the maximum value allowed by Proposition 4.4, finishes the proof for \(s=1\). For other values of \(s\in {\mathbb {R}}\), the proof follows using the inclusion (1.12) as described above. \(\square \)
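The bookkeeping in the iteration above can be made concrete with a schematic computation. The constants C, \(c_0\), a and the value of \(E_{\sigma _0}(0)\) below are placeholders chosen only to make the scaling visible, not the constants of the proof; the point is that the reachable time \(T=n\rho \) grows like \(\sigma ^{-\ell }\):

```python
# Schematic iteration count: with placeholder constants (NOT those of
# the proof), find the first n violating (5.5) and check that the
# reachable time T = n * rho scales like sigma**(-ell).
C, c0, a, E0, ell = 1.0, 1.0, 2.0, 1.0, 0.75
rho = c0 / (1 + 2 * E0) ** a                 # time step as in (5.2)

def reachable_time(sigma):
    n = 1
    while n * 8 * C * sigma**ell * E0 * (1 + E0) <= 1:   # negation of (5.5)
        n += 1
    return n * rho                           # T = n * rho at the first failure

T1, T2 = reachable_time(1e-4), reachable_time(1e-6)
ratio = T2 / T1
# Shrinking sigma by a factor 100 should scale T by about 100**ell ~ 31.6.
assert 25 < ratio < 40
print(f"T(1e-4) = {T1:.2f}, T(1e-6) = {T2:.2f}, ratio = {ratio:.1f}")
```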

Proof of Theorem 1.5

The proof of this theorem follows using similar steps as in the proof of Theorem 1.4. In this case we consider \(s=0\) and use (4.44) with \(\theta \in [0, \frac{1}{4})\). Finally, the proof is concluded by taking \(\theta \) arbitrarily close to \(\frac{1}{4}\). \(\square \)