4.1 Deterministic Cauchy Theory for the 3d Cubic Wave Equation

4.1.1 Introduction

In this section, we consider the cubic defocusing wave equation

$$\displaystyle \begin{aligned} (\partial_t^2 -\Delta) u+u^3=0, \end{aligned} $$
(4.1.1)

where u = u(t, x) is real valued, \(t\in \mathbb R\), \(x\in \mathbb T^3\) (the 3d torus). In (4.1.1), Δ denotes the Laplace operator, namely

$$\displaystyle \begin{aligned}\Delta=\partial_{x_1}^2+\partial_{x_2}^2+\partial_{x_3}^2\,. \end{aligned}$$

Since (4.1.1) is of second order in time, it is natural to complement it with two initial conditions

$$\displaystyle \begin{aligned} u(0,x)=u_{0}(x),\quad \partial_t u(0,x)=u_1(x)\,. \end{aligned} $$
(4.1.2)

In this section, we will be studying the local and global well-posedness of the initial value problem (4.1.1)–(4.1.2) in Sobolev spaces via deterministic methods.

The Sobolev spaces \(H^s(\mathbb T^3)\) are defined as follows. For a function f on \(\mathbb T^3\) given by its Fourier series

$$\displaystyle \begin{aligned}f(x)=\sum_{n\in\mathbb Z^3} \hat{f}(n)\, e^{in\cdot x}, \end{aligned}$$

we define the Sobolev norm \(H^s(\mathbb T^3)\) of f as

$$\displaystyle \begin{aligned}\| f \|{}_{H^s}^2=\sum_{n\in \mathbb Z^3}\langle n\rangle^{2s}\ |\hat{f}(n)|{}^2, \end{aligned}$$

where \(\langle n\rangle = (1 + |n|^2)^{1/2}\). One has that

$$\displaystyle \begin{aligned}\| f \|{}_{H^s}\approx \|D^s f\|{}_{L^2},\quad D\equiv (1-\Delta)^{1/2}\,. \end{aligned}$$
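This equivalence is immediate on the Fourier side: since the multiplier \(D^s=(1-\Delta)^{s/2}\) acts on each mode by \(\widehat{D^s f}(n)=(1+|n|^2)^{s/2}\hat{f}(n)\), Parseval's identity gives

$$\displaystyle \begin{aligned}\|D^s f\|{}_{L^2}^2=\sum_{n\in\mathbb Z^3}(1+|n|^2)^{s}\,|\hat{f}(n)|{}^2=\sum_{n\in\mathbb Z^3}\langle n\rangle^{2s}\,|\hat{f}(n)|{}^2=\|f\|{}_{H^s}^2\,. \end{aligned}$$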

For integer values of s one can also give an equivalent norm in the physical space as follows

$$\displaystyle \begin{aligned}\|f\|{}_{H^s(\mathbb T^3)}\approx \sum_{|\alpha|\leq s}\| \partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}\partial_{x_3}^{\alpha_3}f\|{}_{L^2(\mathbb T^3)}\,, \end{aligned}$$

where the summation is taken over all multi-indices \(\alpha =(\alpha _1,\alpha _2,\alpha _3)\in \mathbb N^3\).

As we shall see, it will be of importance to understand the interplay between the linear and the nonlinear part of (4.1.1). Indeed, let us first consider the Cauchy problem

$$\displaystyle \begin{aligned}\partial_t^2 u+u^3=0,\quad u(0,x)=u_{0}(x),\quad \partial_t u(0,x)=u_1(x) \end{aligned}$$

which is obtained from (4.1.1) by neglecting the Laplacian. If we set

$$\displaystyle \begin{aligned}U=(U_1,U_2)\equiv (u,\partial_t u)^t \end{aligned}$$

then the last problem can be written as

$$\displaystyle \begin{aligned}\partial_t U=F(U), \quad F(U)=(U_2,-U_1^3)^t\,. \end{aligned}$$

One may wish to solve, at least locally, the last problem via the Cauchy-Lipschitz argument in the spaces \(H^{s_1}(\mathbb T^3)\times H^{s_2}(\mathbb T^3)\). For such a purpose one should check that the vector field F(U) is locally Lipschitz on these spaces. Thanks to the Sobolev embedding \(H^s(\mathbb T^3)\subset L^\infty (\mathbb T^3)\), s > 3∕2, we can see that the map \(U_1\mapsto U_1^3\) is locally Lipschitz on \(H^s(\mathbb T^3)\), s > 3∕2. It is also easy to check that the map \(U_1\mapsto U_1^3\) is not continuous on \(H^s(\mathbb T^3)\), s < 3∕2. A more delicate argument shows that it is not continuous on \(H^{3/2}(\mathbb T^3)\) either. Therefore, if we impose that F(U) is locally Lipschitz on \(H^{s_1}(\mathbb T^3)\times H^{s_2}(\mathbb T^3)\), then we necessarily need to impose the regularity assumption \(s_1 > 3/2\). As we shall see below, the term containing the Laplacian in (4.1.1) will allow us to significantly relax this regularity assumption.
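For the reader's convenience, let us sketch the Lipschitz bound in the case s > 3∕2, where \(H^s(\mathbb T^3)\) is an algebra. Writing

$$\displaystyle \begin{aligned}u^3-\tilde{u}^3=(u-\tilde{u})(u^2+u\tilde{u}+\tilde{u}^2), \end{aligned}$$

the algebra property \(\|fg\|{}_{H^s}\leq C\|f\|{}_{H^s}\|g\|{}_{H^s}\) yields

$$\displaystyle \begin{aligned}\|u^3-\tilde{u}^3\|{}_{H^s}\leq C\|u-\tilde{u}\|{}_{H^s}\big(\|u\|{}_{H^s}^2+\|u\|{}_{H^s}\|\tilde{u}\|{}_{H^s}+\|\tilde{u}\|{}_{H^s}^2\big), \end{aligned}$$

which is the required local Lipschitz property.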

On the other hand, if we neglect the nonlinear term \(u^3\) in (4.1.1), we get the linear wave equation, which is well-posed in \(H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)\) for any \(s\in \mathbb R\), as can easily be seen from the Fourier series description of the solutions of the linear wave equation (see the next section). In other words, the absence of a nonlinearity allows us to solve the problem in arbitrarily singular Sobolev spaces.

In summary, we expect that the Laplacian term in (4.1.1) will help us to prove the well-posedness of the problem (4.1.1) in singular Sobolev spaces, while the nonlinear term \(u^3\) will be responsible for the lack of well-posedness in singular spaces.

4.1.2 Local and Global Well-Posedness in \(H^1 \times L^2\)

4.1.2.1 The Free Evolution

We first define the free evolution, i.e. the map defining the solutions of the linear wave equation

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta)u=0,\quad u(0,x)=u_0(x), \quad \partial_t u(0,x)=u_1(x). \end{aligned} $$
(4.1.3)

Using the Fourier transform and solving the corresponding second order linear ODEs, we obtain that the solutions of (4.1.3) are generated by the map S(t), defined as follows

$$\displaystyle \begin{aligned} S(t)(u_0,u_1)\equiv \cos{}(t\sqrt{-\Delta})(u_0)+\frac{\sin{}(t\sqrt{-\Delta})}{\sqrt{-\Delta}}(u_1), \end{aligned}$$

where

$$\displaystyle \begin{aligned}\cos{}(t\sqrt{-\Delta})(u_0)\equiv \sum_{n\in\mathbb Z^3} \cos{}(t|n|) \widehat{u_0}(n)\, e^{in\cdot x} \end{aligned}$$

and

$$\displaystyle \begin{aligned}\frac{\sin{}(t\sqrt{-\Delta})}{\sqrt{-\Delta}}(u_1) \equiv t\widehat{u_1}(0)+ \sum_{n\in\mathbb Z^3_{\star}} \frac{\sin{}(t|n|)}{|n|}\widehat{u_1}(n)\, e^{in\cdot x}\,,\quad \mathbb Z^3_{\star}=\mathbb Z^3\backslash\{0\}\,. \end{aligned}$$

We have that \(S(t)(u_0,u_1)\) solves (4.1.3), and if \((u_0,u_1)\in H^s\times H^{s-1}\), \(s\in \mathbb R\), then \(S(t)(u_0,u_1)\) is the unique solution of (4.1.3) in \(C(\mathbb R;H^s(\mathbb T^3))\) such that its time derivative is in \(C(\mathbb R;H^{s-1}(\mathbb T^3))\). It follows directly from the definition that the operator \(\bar {S}(t)\equiv (S(t),\partial _t S(t))\) is bounded on \(H^s\times H^{s-1}\), \(\bar {S}(0)=\mathrm {Id}\) and \(\bar {S}(t+\tau )=\bar {S}(t) \circ \bar {S}(\tau )\) for all real numbers t and τ. In the proof of the boundedness on \(H^s\times H^{s-1}\), we only use the boundedness of \(\cos {}(t|n|)\) and \(\sin {}(t|n|)\). As we shall see below, one may use the oscillations of \(\cos {}(t|n|)\) and \(\sin {}(t|n|)\) for |n|≫ 1 in order to get more involved \(L^p\), p > 2, properties of the map S(t).
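Indeed, expanding \(u(t,x)=\sum_{n\in\mathbb Z^3}\hat{u}(n,t)e^{in\cdot x}\), equation (4.1.3) decouples into the family of ODEs

$$\displaystyle \begin{aligned}\partial_t^2\hat{u}(n,t)+|n|^2\hat{u}(n,t)=0,\quad \hat{u}(n,0)=\widehat{u_0}(n),\quad \partial_t\hat{u}(n,0)=\widehat{u_1}(n), \end{aligned}$$

whose solutions are \(\hat{u}(n,t)=\cos{}(t|n|)\widehat{u_0}(n)+\frac{\sin{}(t|n|)}{|n|}\widehat{u_1}(n)\) for n ≠ 0 and \(\hat{u}(0,t)=\widehat{u_0}(0)+t\,\widehat{u_1}(0)\), in agreement with the definition of S(t).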

Let us next consider the nonhomogeneous problem

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta)u=F(t,x),\quad u(0,x)=0, \quad \partial_t u(0,x)=0. \end{aligned} $$
(4.1.4)

Using the variation of constants method (Duhamel's principle), we obtain that the solutions of (4.1.4) are given by

$$\displaystyle \begin{aligned} u(t)=\int_{0}^t \frac{\sin{}((t-\tau)\sqrt{-\Delta})}{\sqrt{-\Delta}}(F(\tau))\,d\tau\,. \end{aligned}$$

As a consequence, we obtain that the solution of the nonhomogeneous problem (4.1.4) is one derivative smoother than the source term F. More precisely, for every \(s\in \mathbb R\), the solution of (4.1.4) satisfies the bound

$$\displaystyle \begin{aligned} \|u\|{}_{L^\infty([0,1];H^{s+1}(\mathbb T^3))}\leq C\|F\|{}_{L^1([0,1];H^s(\mathbb T^3))}\,. \end{aligned} $$
(4.1.5)
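The bound (4.1.5) follows from the Minkowski inequality and an elementary multiplier estimate: for every \(n\in\mathbb Z^3_{\star}\) and every t, τ ∈ [0, 1],

$$\displaystyle \begin{aligned}\langle n\rangle^{s+1}\,\Big|\frac{\sin{}((t-\tau)|n|)}{|n|}\Big|\leq \frac{\langle n\rangle^{s+1}}{|n|}\leq C\langle n\rangle^{s}, \end{aligned}$$

while the zero mode contributes at most \(|t-\tau|\,|\widehat{F(\tau)}(0)|\leq |\widehat{F(\tau)}(0)|\); hence \(\|u(t)\|{}_{H^{s+1}}\leq C\int_0^1\|F(\tau)\|{}_{H^s}\,d\tau\) for every t ∈ [0, 1].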

4.1.2.2 The Local Well-Posedness

We state the local well-posedness result.

Proposition 4.1.1 (Local Well-Posedness)

Consider the cubic defocusing wave equation

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta)u+u^3=0\,, \end{aligned} $$
(4.1.6)

posed on \(\mathbb T^3\). There exist constants c and C such that for every \(a\in \mathbb R\), every Λ ≥ 1, every

$$\displaystyle \begin{aligned}(u_0,u_1)\in H^1(\mathbb T^3)\times L^2(\mathbb T^3) \end{aligned}$$

satisfying

$$\displaystyle \begin{aligned} \|u_0\|{}_{H^1}+\|u_1\|{}_{L^2}\leq \Lambda \end{aligned} $$
(4.1.7)

there exists a unique solution of (4.1.6) on the time interval \([a, a + c \Lambda^{-2}]\) with initial data

$$\displaystyle \begin{aligned}u(a,x)=u_0(x), \quad \partial_t u(a,x)=u_1(x)\,. \end{aligned}$$

Moreover the solution satisfies

$$\displaystyle \begin{aligned}\|(u,\partial_t u)\|{}_{L^\infty([a,a+c \Lambda^{-2}],H^1(\mathbb T^3)\times L^2(\mathbb T^3))}\leq C\Lambda, \end{aligned}$$

\((u, \partial_t u)\) is unique in the class \(L^\infty ([a,a+c \Lambda ^{-2}],H^1(\mathbb T^3)\times L^2(\mathbb T^3))\) and the dependence with respect to the initial data and with respect to the time is continuous. Finally, if

$$\displaystyle \begin{aligned}(u_0,u_1)\in H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3) \end{aligned}$$

for some s ≥ 1, then there exists \(c_s > 0\) such that

$$\displaystyle \begin{aligned}(u,\partial_t u)\in C([a,a+c_s \Lambda^{-2}];H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3))\,. \end{aligned}$$

Proof

If u(t, x) is a solution of (4.1.6) then so is u(t + a, x). Therefore, it suffices to consider the case a = 0.

Thanks to the analysis of the previous section, we obtain that we should solve the integral equation

$$\displaystyle \begin{aligned} u(t)=S(t)(u_0,u_1)-\int_{0}^t \frac{\sin{}((t-\tau)\sqrt{-\Delta})}{\sqrt{-\Delta}}(u^3(\tau))\,d\tau\,. \end{aligned} $$
(4.1.8)

Set

$$\displaystyle \begin{aligned}\Phi_{u_0,u_1}(u)\equiv S(t)(u_0,u_1)-\int_{0}^t \frac{\sin{}((t-\tau)\sqrt{-\Delta})}{\sqrt{-\Delta}}(u^3(\tau))\,d\tau. \end{aligned}$$

Then for T ∈ (0, 1], we define \(X_T\) as

$$\displaystyle \begin{aligned}X_{T}\equiv C([0,T];H^1(\mathbb T^3)), \end{aligned}$$

endowed with the natural norm

$$\displaystyle \begin{aligned}\|u\|{}_{X_T}=\sup_{0\leq t\leq T}\|u(t)\|{}_{H^1(\mathbb T^3)}\,. \end{aligned}$$

Using the boundedness properties of \(\bar {S}\) on \(H^s \times H^{s-1}\) explained in the previous section and using the Sobolev embedding \(H^1(\mathbb T^3)\subset L^6(\mathbb T^3)\), we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \|\Phi_{u_0,u_1}(u)\|{}_{X_T} &\displaystyle \leq &\displaystyle C\big(\|u_0\|{}_{H^1}+\|u_1\|{}_{L^2}+T\sup_{\tau\in[0,T]}\|u(\tau)\|{}_{L^6}^3\big) \\ &\displaystyle \leq &\displaystyle C\big(\|u_0\|{}_{H^1}+\|u_1\|{}_{L^2}+CT\|u\|{}^3_{X_T}\big). \end{array} \end{aligned} $$

It is now clear that for \(T = c\Lambda^{-2}\), c ≪ 1, the map \(\Phi _{u_0,u_1}\) sends the ball

$$\displaystyle \begin{aligned}B\equiv \{u:\|u\|{}_{X_T} \leq 2C\Lambda\} \end{aligned}$$

into itself. Moreover, by a similar argument involving the Sobolev embedding \(H^1(\mathbb T^3)\subset L^6(\mathbb T^3)\) and the Hölder inequality, we obtain the estimate

$$\displaystyle \begin{aligned} \|\Phi_{u_0,u_1}(u)-\Phi_{u_0,u_1}(\tilde{u})\|{}_{X_T} \leq CT\|u-\tilde{u}\|{}_{X_T} \big(\|u\|{}^2_{X_T} +\|\tilde{u}\|{}^2_{X_T}\big). \end{aligned} $$
(4.1.9)

Therefore, with our choice of T, we get that

$$\displaystyle \begin{aligned}\|\Phi_{u_0,u_1}(u)-\Phi_{u_0,u_1}(\tilde{u})\|{}_{X_T} \leq \frac{1}{2}\|u-\tilde{u}\|{}_{X_T}\,,\quad u,\tilde{u}\in B\,. \end{aligned}$$

Consequently the map \(\Phi _{u_0,u_1}\) is a contraction on B. The fixed point of this contraction defines the solution u on [0, T] we are looking for. The estimate of \(\|\partial _t u\|{ }_{L^2}\) follows by differentiating the Duhamel formula (4.1.8) in t. Let us now turn to the uniqueness. Let \(u,\tilde {u}\) be two solutions of (4.1.6) with the same initial data in the space \(X_T\) for some T > 0. Then for τ ≤ T, we can write, similarly to (4.1.9),

$$\displaystyle \begin{aligned} \|\Phi_{u_0,u_1}(u)-\Phi_{u_0,u_1}(\tilde{u})\|{}_{X_\tau} \leq C\tau \|u-\tilde{u}\|{}_{X_\tau} \big(\|u\|{}^2_{X_T} +\|\tilde{u}\|{}^2_{X_T}\big). \end{aligned} $$
(4.1.10)

Let us take τ such that

$$\displaystyle \begin{aligned}C\tau \big(\|u\|{}^2_{X_T} +\|\tilde{u}\|{}^2_{X_T}\big)<\frac{1}{2}. \end{aligned}$$

This fixes the value of τ. Thanks to (4.1.10), we obtain that u and \(\tilde {u}\) are the same on [0, τ]. Next, we cover the interval [0, T] by intervals of size τ and we inductively obtain that u and \(\tilde {u}\) are the same on each interval of size τ. This yields the uniqueness statement.

The continuous dependence with respect to time follows from the Duhamel formula representation (4.1.8) of the solution. The continuity with respect to the initial data follows from the estimates on the difference of two solutions we have just performed. Notice that we also obtain uniform continuity of the data-to-solution map on bounded subsets of \(H^1 \times L^2\).

Let us finally turn to the propagation of higher regularity. Let \((u_0, u_1) \in H^1 \times L^2\) satisfy (4.1.7) together with the additional regularity property \((u_0, u_1) \in H^s \times H^{s-1}\) for some s > 1. We will show that the corresponding solution remains in \(H^s \times H^{s-1}\) for (essentially) the whole time of existence. For s ≥ 1, we define \(X^s_T\) as

$$\displaystyle \begin{aligned}X^s_{T}\equiv C([0,T];H^s(\mathbb T^3)), \end{aligned}$$

endowed with the norm

$$\displaystyle \begin{aligned}\|u\|{}_{X^s_T}=\sup_{0\leq t\leq T}\|u(t)\|{}_{H^s(\mathbb T^3)}\,. \end{aligned}$$

We have that the solution with data \((u_0, u_1) \in H^s \times H^{s-1}\) remains in this space for time intervals of order \( (1+\|u_0\|{ }_{H^s}+\|u_1\|{ }_{H^{s-1}})^{-2} \), by a fixed point argument similar to the one we performed for data in \(H^1 \times L^2\). We now show that the regularity is preserved on (the longer) time intervals of order \( (1+\|u_0\|{ }_{H^1}+\|u_1\|{ }_{L^{2}})^{-2}\). Coming back to (4.1.8), we can write

$$\displaystyle \begin{aligned}\|\Phi_{u_0,u_1}(u)\|{}_{X^s_T} \leq C\big(\|u_0\|{}_{H^s}+\|u_1\|{}_{H^{s-1}}+ T\sup_{\tau\in[0,T]}\|u^3(\tau)\|{}_{H^{s-1}}\big). \end{aligned}$$

Now using the Kato-Ponce product inequality, we can obtain that for σ ≥ 0, one has the bound

$$\displaystyle \begin{aligned} \|v^3\|{}_{H^\sigma(\mathbb T^3)}\leq C\|D^\sigma v\|{}_{L^6(\mathbb T^3)}\, \|v\|{}_{L^6(\mathbb T^3)}^2\,. \end{aligned} $$
(4.1.11)

Using (4.1.11) and applying the Sobolev embedding \(H^1(\mathbb T^3)\subset L^6(\mathbb T^3)\), we infer that

$$\displaystyle \begin{aligned}\|u^3(\tau)\|{}_{H^{s-1}}\lesssim \|D^{s-1} u(\tau)\|{}_{L^6}\|u(\tau)\|{}_{L^6}^2\lesssim \|D^s u(\tau)\|{}_{L^2}\|u(\tau)\|{}_{H^1}^2\,. \end{aligned}$$

Therefore, we arrive at the bound

$$\displaystyle \begin{aligned}\|\Phi_{u_0,u_1}(u)\|{}_{X^s_T} \leq C\big(\|u_0\|{}_{H^s}+\|u_1\|{}_{H^{s-1}}+ C_{s} T\sup_{\tau\in[0,T]} \|D^s u(\tau)\|{}_{L^2}\|u(\tau)\|{}_{H^1}^2\big)\,. \end{aligned}$$

By construction of the solution, we infer that if \(T \leq c_s \Lambda^{-2}\) with \(c_s\) small enough, we have that

$$\displaystyle \begin{aligned}\|u\|{}_{X^s_T}= \|\Phi_{u_0,u_1}(u)\|{}_{X^s_T} \leq C\big(\|u_0\|{}_{H^s}+\|u_1\|{}_{H^{s-1}}\big)+ \frac{1}{2}\|u\|{}_{X^s_T} \end{aligned}$$

which implies the propagation of regularity statement for u. Strictly speaking, one should apply a bootstrap argument, starting from the propagation of the regularity on times of order \( (1+\|u_0\|{ }_{H^s}+\|u_1\|{ }_{H^{s-1}})^{-2} \) and then extending the regularity propagation to the longer interval \([0, c_s \Lambda^{-2}]\). One estimates \(\partial_t u\) in \(H^{s-1}\) similarly, by differentiating the Duhamel formula with respect to t. The continuous dependence with respect to time in \(H^s \times H^{s-1}\) follows once again from the Duhamel formula (4.1.8). This completes the proof of Proposition 4.1.1. □

Theorem 4.1.2 (Global Well-Posedness)

For every \( (u_0,u_1)\in H^1(\mathbb T^3)\times L^2(\mathbb T^3) \) the local solution of the cubic defocusing wave equation

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta)u+u^3=0\,,\quad u(0,x)=u_0(x), \quad \partial_t u(0,x)=u_1(x) \end{aligned}$$

can be extended globally in time. It is unique in the class \(C(\mathbb R ;H^1(\mathbb T^3)\times L^{2}(\mathbb T^3))\) and there exists a constant C depending only on \(\|u_0\|{ }_{H^1}\) and \(\|u_1\|{ }_{L^2}\) such that for every \(t\in \mathbb R\) ,

$$\displaystyle \begin{aligned}\|u(t)\|{}_{H^1(\mathbb T^3)}\leq C. \end{aligned}$$

If in addition \( (u_0,u_1)\in H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3) \) for some s ≥ 1, then

$$\displaystyle \begin{aligned}(u,\partial_t u)\in C(\mathbb R ;H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3))\,. \end{aligned}$$

Remark 4.1.3

One may obtain global weak solutions of the cubic defocusing wave equation for data in \(H^1 \times L^2\) via compactness arguments. The uniqueness and the propagation of regularity statements of Theorem 4.1.2 are the major differences with respect to the weak solutions.

Proof of Theorem 4.1.2

The key point is the conservation of the energy displayed in the following lemma.

Lemma 4.1.4

There exist c > 0 and C > 0 such that for every \((u_0,u_1)\in H^1(\mathbb T^3)\times L^2(\mathbb T^3)\) the local solution of the cubic defocusing wave equation, with data \((u_0, u_1)\), constructed in Proposition 4.1.1 is defined on [0, T] with

$$\displaystyle \begin{aligned}T=c(1+\|u_0\|{}_{H^1(\mathbb T^3)}+\|u_1\|{}_{L^2(\mathbb T^3)})^{-2} \end{aligned}$$

and

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle \int_{\mathbb T^3} \big((\partial_t u(t,x))^2+|\nabla_x u(t,x)|{}^2+\frac{1}{2}u^4(t,x)\big)dx\\ &\displaystyle &\displaystyle \quad =\int_{\mathbb T^3} \big((u_1(x))^2+|\nabla_x u_0(x)|{}^2+\frac{1}{2}u_0^4(x)\big)dx,\quad t\in [0,T]. \end{array} \end{aligned} $$
(4.1.12)

As a consequence, for t ∈ [0, T],

$$\displaystyle \begin{aligned}\|u(t)\|{}_{H^1(\mathbb T^3)}+\|\partial_t u(t)\|{}_{L^2(\mathbb T^3)}\leq C \big(1+\|u_0\|{}_{H^1(\mathbb T^3)}^2+\|u_1\|{}_{L^2(\mathbb T^3)} \big). \end{aligned}$$

Remark 4.1.5

Using the invariance with respect to translations in time, we can state Lemma 4.1.4 with initial data at an arbitrary initial time.

Proof of Lemma 4.1.4

We apply Proposition 4.1.1 with \(\Lambda = \|u_0\|{ }_{H^1}+\|u_1\|{ }_{L^2}\) and we take \(T = c_{10} \Lambda^{-2}\), where \(c_{10}\) is the small constant involved in the propagation of the \(H^{10} \times H^{9}\) regularity. Let \((u_{0,n}, u_{1,n})\) be a sequence in \(H^{10} \times H^{9}\) which converges to \((u_0, u_1)\) in \(H^1 \times L^2\) and such that

$$\displaystyle \begin{aligned}\|u_{0,n}\|{}_{H^1}+\|u_{1,n}\|{}_{L^2}\leq \|u_0\|{}_{H^1}+\|u_1\|{}_{L^2}\,. \end{aligned}$$

Let \(u_n(t)\) be the solution of the cubic defocusing wave equation with data \((u_{0,n}, u_{1,n})\). By Proposition 4.1.1 these solutions are defined on [0, T] and they keep their \(H^{10} \times H^{9}\) regularity on the same time interval. We multiply the equation

$$\displaystyle \begin{aligned}(\partial_t^2-\Delta)u_n+u_n^3=0 \end{aligned}$$

by \(\partial_t u_n\). Using the regularity properties of \(u_n(t)\), after integrations by parts we arrive at

$$\displaystyle \begin{aligned}\frac{d}{dt}\Big[\int_{\mathbb T^3} \big((\partial_t u_n(t,x))^2+|\nabla_x u_n(t,x)|{}^2+\frac{1}{2}u_n^4(t,x)\big)dx\Big]=0 \end{aligned}$$

which implies the identity

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle \int_{\mathbb T^3} \big((\partial_t u_n(t,x))^2+|\nabla_x u_n(t,x)|{}^2+\frac{1}{2}u_n^4(t,x)\big)dx \\ &\displaystyle &\displaystyle \quad = \int_{\mathbb T^3} \big((u_{1,n}(x))^2+|\nabla_x u_{0,n}(x)|{}^2+\frac{1}{2}u_{0,n}^4(x)\big)dx,\quad t\in [0,T]. \end{array} \end{aligned} $$
(4.1.13)

We now pass to the limit \(n\to\infty\) in (4.1.13). The right-hand side converges to

$$\displaystyle \begin{aligned}\int_{\mathbb T^3} \big((u_1(x))^2+|\nabla_x u_0(x)|{}^2+\frac{1}{2}u_0^4(x)\big)dx \end{aligned}$$

by the definition of \((u_{0,n}, u_{1,n})\) (we invoke the Sobolev embedding for the convergence of the \(L^4\) norms). The left-hand side of (4.1.13) converges to

$$\displaystyle \begin{aligned}\int_{\mathbb T^3} \big((\partial_t u(t,x))^2+|\nabla_x u(t,x)|{}^2+\frac{1}{2}u^4(t,x)\big)dx \end{aligned}$$

by the continuity of the flow map established in Proposition 4.1.1. Using the compactness of \(\mathbb T^3\) and the Hölder inequality, we have that

$$\displaystyle \begin{aligned}\|u\|{}_{L^2(\mathbb T^3)}\leq C\|u\|{}_{L^4(\mathbb T^3)}\leq C(1+\|u\|{}^2_{L^4(\mathbb T^3)}) \end{aligned}$$

and therefore

$$\displaystyle \begin{aligned}\|u(t)\|{}^2_{H^1(\mathbb T^3)}+\|\partial_t u(t)\|{}^2_{L^2(\mathbb T^3)} \end{aligned}$$

is bounded by

$$\displaystyle \begin{aligned} C\,\int_{\mathbb T^3} \big(1+ (\partial_t u(t,x))^2+|\nabla_x u(t,x)|{}^2+\frac{1}{2}u^4(t,x)\big)dx\,. \end{aligned}$$

Now, using (4.1.12) and the Sobolev inequality

$$\displaystyle \begin{aligned}\|u\|{}_{L^4(\mathbb T^3)}\leq C\|u\|{}_{H^1(\mathbb T^3)}\, , \end{aligned}$$

we obtain that for t ∈ [0, T],

$$\displaystyle \begin{aligned}\|u(t)\|{}^2_{H^1(\mathbb T^3)}+\|\partial_t u(t)\|{}^2_{L^2(\mathbb T^3)}\leq C \big( 1+\|u_0\|{}_{H^1(\mathbb T^3)}^4+\|u_1\|{}^2_{L^2(\mathbb T^3)} \big). \end{aligned}$$

This completes the proof of Lemma 4.1.4. □

Let us now complete the proof of Theorem 4.1.2. Let \((u_0,u_1)\in H^1(\mathbb T^3)\times L^2(\mathbb T^3)\). Set

$$\displaystyle \begin{aligned}T= c\big(C\big(1+\|u_0\|{}_{H^1(\mathbb T^3)}^2+\|u_1\|{}_{L^2(\mathbb T^3)} \big) \big)^{-2}, \end{aligned}$$

where the constants c and C are defined in Lemma 4.1.4. We now observe that we can use Proposition 4.1.1 and Lemma 4.1.4 on the intervals [0, T], [T, 2T], [2T, 3T], and so on, and therefore we extend the solution with data \((u_0, u_1)\) to \([0, \infty)\). By the time reversibility of the wave equation we can similarly construct the solution for negative times. More precisely, the free evolution \(S(t)(u_0, u_1)\) is well-defined for all \(t\in \mathbb R\) and one can prove in the same way the natural counterparts of Proposition 4.1.1 and Lemma 4.1.4 for negative times. The propagation of higher Sobolev regularity globally in time follows from Proposition 4.1.1, while the \(H^1\) a priori bound on the solutions follows from Lemma 4.1.4. This completes the proof of Theorem 4.1.2. □

Remark 4.1.6

One may proceed slightly differently in the proof of Theorem 4.1.2 by observing that, as a consequence of Proposition 4.1.1, if a local solution with \(H^1 \times L^2\) data blows up at a time \(T^{\star} < \infty\), then

$$\displaystyle \begin{aligned} \lim_{t\rightarrow T^{\star}} \|(u(t),\partial_t u(t))\|{}_{H^1(\mathbb T^3)\times L^{2}(\mathbb T^3)} =\infty. \end{aligned} $$
(4.1.14)

The statement (4.1.14) is in contradiction with the energy conservation law.

Remark 4.1.7

Observe that the nonlinear problem

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta)u+u^3=0 \end{aligned} $$
(4.1.15)

behaves better than the linear problem

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta)u=0 \end{aligned} $$
(4.1.16)

with respect to the \(H^1\) global in time bounds. Indeed, Theorem 4.1.2 establishes that the solutions of (4.1.15) are bounded in \(H^1\) as long as the initial data is in \(H^1 \times L^2\). On the other hand, one can consider u(t, x) = t, which is a solution of the linear wave equation (4.1.16) on \(\mathbb T^3\) with data in \(H^1 \times L^2\), and its \(H^1\) norm is clearly growing in time.

Remark 4.1.8

The sign in front of the nonlinearity is not of importance for Proposition 4.1.1. One can therefore obtain the local well-posedness of the cubic focusing wave equation

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta)u-u^3=0, \end{aligned} $$
(4.1.17)

posed on \(\mathbb T^3\), with data in \(H^1(\mathbb T^3)\times L^2(\mathbb T^3)\). However, the sign in front of the nonlinearity is of crucial importance in the proof of Theorem 4.1.2. Indeed, one has that

$$\displaystyle \begin{aligned}u(t,x)=\frac{\sqrt{2}}{1-t} \end{aligned}$$

is a solution of (4.1.17), posed on \(\mathbb T^3\), with data \((\sqrt {2},\sqrt {2})\), which is not defined globally in time (it blows up in \(H^1 \times L^2\) at t = 1).
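The blow-up can be checked directly: since this u is constant in x, one has Δu = 0 and

$$\displaystyle \begin{aligned}\partial_t^2\Big(\frac{\sqrt{2}}{1-t}\Big)=\frac{2\sqrt{2}}{(1-t)^3}=\Big(\frac{\sqrt{2}}{1-t}\Big)^3, \end{aligned}$$

so that (4.1.17) is satisfied, while \(\|u(t)\|{}_{L^2(\mathbb T^3)}\to\infty\) as \(t\to 1^{-}\).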

4.1.3 The Strichartz Estimates

In the previous section, we solved globally in time the cubic defocusing wave equation in \(H^1 \times L^2\). One may naturally ask whether it is possible to extend these results to the more singular Sobolev spaces \(H^s \times H^{s-1}\) for some s < 1. It turns out that this is possible by invoking more refined properties of the map S(t) defining the free evolution. The proof of these properties uses in an essential way the time oscillations in S(t), which can be quantified via the \(L^p\), p > 2, mapping properties of S(t) (cf. [19, 30]).

Theorem 4.1.9 (Strichartz Inequality for the Wave Equation)

Let \((p,q)\in \mathbb R^2\) be such that \(2 < p \leq \infty\) and \( \frac {1}{p}+\frac {1}{q}=\frac {1}{2}\). Then we have the estimate

$$\displaystyle \begin{aligned}\|S(t)(u_0,u_1)\|{}_{L^p([0,1];L^q(\mathbb T^3))}\leq C\big(\|u_0\|{}_{H^{\frac{2}{p}}(\mathbb T^3)}+\|u_1\|{}_{H^{\frac{2}{p}-1}(\mathbb T^3)}\big). \end{aligned}$$

We shall use that the solutions of the wave equation satisfy a finite propagation speed property which will allow us to deduce the result of Theorem 4.1.9 from the corresponding Strichartz estimate for the wave equation on the Euclidean space. Consider therefore the wave equation

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta)u=0,\quad u(0,x)=u_0(x), \quad \partial_t u(0,x)=u_1(x), \end{aligned} $$
(4.1.18)

where now the spatial variable x belongs to \(\mathbb R^3\) and the initial data \((u_0, u_1)\) belong to \(H^s(\mathbb R^3)\times H^{s-1}(\mathbb R^3)\). Using the Fourier transform on \(\mathbb R^3\), we can solve (4.1.18) and obtain that the solutions are generated by the map \(S_e(t)\), defined as

$$\displaystyle \begin{aligned} S_{e}(t)(u_0,u_1)\equiv \cos{}(t\sqrt{-\Delta_{\mathbb R^3}})(u_0)+\frac{\sin{}(t\sqrt{-\Delta_{\mathbb R^3}})}{\sqrt{-\Delta_{\mathbb R^3}}}(u_1), \end{aligned}$$

where for \(u_0\) and \(u_1\) in the Schwartz class,

$$\displaystyle \begin{aligned}\cos{}(t\sqrt{-\Delta_{\mathbb R^3}})(u_0)\equiv \int_{\mathbb R^3}\cos{}(t|\xi|) \widehat{u_0}(\xi)\, e^{i\xi\cdot x}d\xi \end{aligned}$$

and

$$\displaystyle \begin{aligned}\frac{\sin{}(t\sqrt{-\Delta_{\mathbb R^3}})}{\sqrt{-\Delta_{\mathbb R^3}}}(u_1) \equiv \int_{\mathbb R^3} \frac{\sin{}(t|\xi|)}{|\xi|}\widehat{u_1}(\xi)\, e^{i\xi\cdot x}d\xi\,, \end{aligned}$$

where \( \widehat {u_0}\) and \(\widehat {u_1}\) are the Fourier transforms of \(u_0\) and \(u_1\) respectively. By density, one then extends \(S_e(t)(u_0, u_1)\) to a bounded map from \(H^s(\mathbb R^3)\times H^{s-1}(\mathbb R^3)\) to \(H^s(\mathbb R^3)\) for any \(s\in \mathbb R\). The next statement displays the finite propagation speed property of \(S_e(t)\).

Proposition 4.1.10 (Finite Propagation Speed)

Let \((u_0,u_1)\in H^s(\mathbb R^3)\times H^{s-1}(\mathbb R^3)\) for some s ≥ 0 be such that

$$\displaystyle \begin{aligned}\mathrm{supp}(u_0)\cup \mathrm{supp}(u_1)\subset \{x\in\mathbb R^3\,:\, |x-x_0|\leq R\}, \end{aligned}$$

for some R > 0 and \(x_0\in \mathbb R^3\). Then for t ≥ 0,

$$\displaystyle \begin{aligned}\mathrm{supp}(S_{e}(t)(u_0,u_1))\subset \{x\in\mathbb R^3\,:\, |x-x_0|\leq t+R\}. \end{aligned}$$

Proof

The statement of Proposition 4.1.10 (and an even more precise localisation property) follows from the Kirchhoff formula representation of the solutions of the three dimensional wave equation. Here we will present another proof, which has the advantage of extending to arbitrary dimensions and to variable coefficient settings. By the invariance of the wave equation with respect to spatial translations, we can assume that \(x_0 = 0\). We need to prove Proposition 4.1.10 only for (say) s ≥ 100, which ensures by the Sobolev embedding that the solutions we study are of class \(C^2(\mathbb R^4)\). We can then treat the case of an arbitrary \((u_0,u_1)\in H^s(\mathbb R^3)\times H^{s-1}(\mathbb R^3)\), s ≥ 0, by observing that

$$\displaystyle \begin{aligned} \rho_{\varepsilon}\star S_{e}(t)(u_0,u_1) =S_{e}(t)(\rho_{\varepsilon}\star u_0,\rho_{\varepsilon }\star u_1), \end{aligned} $$
(4.1.19)

where \(\rho_\varepsilon(x) = \varepsilon^{-3}\rho(x/\varepsilon)\), \(\rho \in C^\infty _0(\mathbb R^3)\), 0 ≤ ρ ≤ 1, \(\int \rho =1\). It suffices then to pass to the limit ε → 0 in (4.1.19). Indeed, for \(\varphi \in C^\infty _0(|x|>t+R)\), \(S_e(t)(\rho_\varepsilon \star u_0, \rho_\varepsilon \star u_1)(\varphi)\) is zero for ε small enough, while \(\rho_\varepsilon \star S_e(t)(u_0, u_1)(\varphi)\) converges to \(S_e(t)(u_0, u_1)(\varphi)\).

Therefore, in the remainder of the proof of Proposition 4.1.10, we shall assume that \(S_e(t)(u_0, u_1)\) is a \(C^2\) solution of the 3d wave equation. The main point in the proof is the following lemma.

Lemma 4.1.11

Let \(x_0\in \mathbb R^3\), r > 0 and let \(S_e(t)(u_0, u_1)\) be a \(C^2\) solution of the 3d linear wave equation. Suppose that \(u_0(x) = u_1(x) = 0\) for \(|x - x_0|\leq r\). Then \(S_e(t)(u_0, u_1) = 0\) in the cone C defined by

$$\displaystyle \begin{aligned}C=\{(t,x)\in \mathbb R^{4}\,:\, 0\leq t\leq r,\, \, |x-x_0|\leq r-t \}. \end{aligned}$$

Proof

Let \(u(t, x) = S_e(t)(u_0, u_1)\). For t ∈ [0, r], we set

$$\displaystyle \begin{aligned}E(t)\equiv \frac{1}{2}\int_{B(x_0,r-t)}\big( (\partial_t u)^2(t,x)+|\nabla_x u(t,x)|{}^2 \big)dx, \end{aligned}$$

where \(B(x_0,r-t)=\{x\in \mathbb R^3\,:\, |x-x_0|\leq r-t \}\). Then using the Gauss-Green theorem and the equation solved by u, we obtain that

$$\displaystyle \begin{aligned}\dot{E}(t)= -\frac{1}{2}\int_{\partial B}\big( (\partial_t u)^2(t,y)+|\nabla_x u(t,y)|{}^2-2\partial_t u(t,y)\nabla_x u(t,y)\cdot \nu(y) \big)dS(y), \end{aligned}$$

where \(\partial B\equiv \{x\in \mathbb R^3\,:\, |x-x_0|= r-t \}\), dS(y) is the surface element of ∂B and ν(y) is the outer unit normal to ∂B. We clearly have

$$\displaystyle \begin{aligned}2\partial_t u(t,y)\nabla_x u(t,y)\cdot \nu(y) \leq (\partial_t u)^2(t,y)+|\nabla_x u(t,y)|{}^2, \end{aligned}$$

which implies that \(\dot {E}(t)\leq 0\). Since E(0) = 0, we obtain that E(t) = 0 for every t ∈ [0, r]. This in turn implies that u(t, x) is constant in C. We also know that u(0, x) = 0 for \(|x - x_0|\leq r\). Therefore u(t, x) = 0 in C. This completes the proof of Lemma 4.1.11. □
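In the computation of \(\dot{E}(t)\) above, the Gauss-Green step can be spelled out as follows. Differentiating E(t) (the boundary of \(B(x_0,r-t)\) shrinks with unit speed, which produces the surface term with the minus sign) gives

$$\displaystyle \begin{aligned}\dot{E}(t)=\int_{B(x_0,r-t)}\big(\partial_t u\,\partial_t^2 u+\nabla_x u\cdot\nabla_x\partial_t u\big)dx-\frac{1}{2}\int_{\partial B}\big((\partial_t u)^2+|\nabla_x u|{}^2\big)dS, \end{aligned}$$

and, since \(\partial_t^2 u=\Delta u\), Green's identity \(\int_{B}\big(\partial_t u\,\Delta u+\nabla_x u\cdot\nabla_x\partial_t u\big)dx=\int_{\partial B}\partial_t u\,(\nabla_x u\cdot\nu)\,dS\) turns the bulk term into the remaining surface term.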

Let us now complete the proof of Proposition 4.1.10. Let \(t_0 > 0\) and \(y\in \mathbb R^3\) be such that \(|y| > R + t_0\). We need to show that \(u(t_0, y) = 0\). Consider the cone C defined by

$$\displaystyle \begin{aligned}C=\{(t,x)\in \mathbb R^{4}\,:\, 0\leq t\leq t_0,\,\, |x-y|\leq t_0-t \}. \end{aligned}$$

Set \(B\equiv C\cap \{(t,x)\in \mathbb R^4: t=0 \}\). We have that

$$\displaystyle \begin{aligned}B=\{(t,x)\in \mathbb R^{4}\,:\, t=0,\,\, |x-y|\leq t_0 \} \end{aligned}$$

and therefore, by the definition of \(t_0\) and y, we have that

$$\displaystyle \begin{aligned} B\cap \{(t,x)\in \mathbb R^{4}\,:\, t=0,\,\, |x|\leq R \}=\emptyset. \end{aligned} $$
(4.1.20)

Therefore \(u(0, x) = \partial_t u(0, x) = 0\) for \(|x - y|\leq t_0\). Using Lemma 4.1.11, we obtain that u(t, x) = 0 in C. In particular \(u(t_0, y) = 0\). This completes the proof of Proposition 4.1.10. □

Using Proposition 4.1.10 and a decomposition of the initial data associated with a partition of unity corresponding to a covering of \(\mathbb T^3\) by sufficiently small balls, we obtain that the result of Theorem 4.1.9 is a consequence of the following statement.

Proposition 4.1.12 (Local in Time Strichartz Inequality for the Wave Equation on \(\mathbb R^3\))

Let \((p,q)\in \mathbb R^2\) be such that \(2 < p \leq \infty\) and \(\frac {1}{p}+\frac {1}{q}=\frac {1}{2}\). Then we have the estimate

$$\displaystyle \begin{aligned}\|S_{e}(t)(u_0,u_1)\|{}_{L^p([0,1];L^q(\mathbb R^3))}\leq C\big(\|u_0\|{}_{H^{\frac{2}{p}}(\mathbb R^3)}+\|u_1\|{}_{H^{\frac{2}{p}-1}(\mathbb R^3)}\big). \end{aligned}$$

Proof

Let \(\chi \in C^\infty _0(\mathbb R^3)\) be such that χ(x) = 1 for |x| < 1. We then define the Fourier multiplier χ(D x) by

$$\displaystyle \begin{aligned} \chi(D_x)(f)=\int_{\mathbb R^3}\chi(\xi) \widehat{f}(\xi)\, e^{i\xi\cdot x}d\xi. \end{aligned} $$
(4.1.21)

Using a suitable Sobolev embedding in \(\mathbb R^3\), we obtain that for every \(\sigma \in \mathbb R\),

$$\displaystyle \begin{aligned}\big\| \frac{\sin{}(t\sqrt{-\Delta_{\mathbb R^3}})}{\sqrt{-\Delta_{\mathbb R^3}}}(\chi(D_x) u_1) \big\|{}_{L^p([0,1];L^q(\mathbb R^3))} \leq C\|u_1\|{}_{H^{\sigma}(\mathbb R^3)}\,. \end{aligned}$$

Therefore, by splitting u 1 as

$$\displaystyle \begin{aligned}u_1=\chi(D_x)(u_1)+(1-\chi(D_x))(u_1) \end{aligned}$$

and by expressing the \(\sin \) and \(\cos \) functions as combinations of exponentials, we observe that Proposition 4.1.12 follows from the following statement.

Proposition 4.1.13

Let \((p,q)\in \mathbb R^2\) be such that \(2 < p \leq \infty\) and \( \frac {1}{p}+\frac {1}{q}=\frac {1}{2}\). Then we have the estimate

$$\displaystyle \begin{aligned}\big\|e^{\pm it\sqrt{-\Delta_{\mathbb R^3}}}(f)\big\|{}_{L^p([0,1];L^q(\mathbb R^3))}\leq C\|f\|{}_{H^{\frac{2}{p}}(\mathbb R^3)}\,. \end{aligned}$$

Remark 4.1.14

Let us make an important remark. As a consequence of Proposition 4.1.13 and a suitable Sobolev embedding, we obtain the estimate

$$\displaystyle \begin{aligned} \big\|e^{\pm it\sqrt{-\Delta_{\mathbb R^3}}}(f)\big\|{}_{L^2([0,1];L^\infty(\mathbb R^3))}\leq C\|f\|{}_{H^{s}(\mathbb R^3)}\,, \quad s>1. \end{aligned} $$
(4.1.22)

Therefore, we obtain that for \(f\in H^s(\mathbb R^3)\), s > 1, the function \(e^{ it\sqrt {-\Delta _{\mathbb R^3}}}(f)\), which is a priori defined as an element of \(C([0,1];H^s(\mathbb R^3))\), has the remarkable property that

$$\displaystyle \begin{aligned}e^{ it\sqrt{-\Delta_{\mathbb R^3}}}(f)\in L^\infty(\mathbb R^3) \end{aligned}$$

for almost every t ∈ [0, 1]. Recall that the Sobolev embedding requires the condition s > 3∕2 in order to ensure that an \(H^s(\mathbb R^3)\) function is in \(L^\infty (\mathbb R^3)\). Therefore, one may wish to see (4.1.22) as an almost sure in t improvement (with 1∕2 derivative) of the Sobolev embedding \(H^{\frac {3}{2}+}(\mathbb R^3)\subset L^\infty (\mathbb R^3)\), under the evolution of the linear wave equation.

Proof of Proposition 4.1.13

Consider a Littlewood-Paley decomposition of the identity

$$\displaystyle \begin{aligned} \mathrm{Id}= P_{0}+\sum_{N}P_{N}, \end{aligned} $$
(4.1.23)

where the summation is taken over the dyadic values of N, i.e. N = 2^j, j = 0, 1, 2, … and \(P_0\), \(P_N\) are Littlewood-Paley projectors. More precisely, they are defined as the Fourier multipliers \(P_0=\psi _0(D_x)\) and, for N ≥ 1, \(P_N = \psi (D_x/N)\), where \(\psi _0\in C_0^\infty (\mathbb R^3)\) and \(\psi \in C_0^\infty (\mathbb R^3\backslash \{0\})\) are suitable functions such that (4.1.23) holds. The maps \(\psi (D_x/N)\) are defined similarly to (4.1.21) by

$$\displaystyle \begin{aligned}\psi(D_x/N)(f)=\int_{\mathbb R^3}\psi(\xi/N) \widehat{f}(\xi)\, e^{i\xi\cdot x}d\xi. \end{aligned}$$
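In a discrete setting such Fourier multipliers act by pointwise multiplication on the frequency side. The following 1d periodic sketch (with a sharp cutoff standing in for the smooth bumps χ and ψ of the text; all numerical choices are illustrative) shows the mechanism:

```python
import numpy as np

# Discrete analogue of the Fourier multiplier psi(D_x/N) on a 1d torus:
# multiply each Fourier coefficient by the symbol evaluated at the integer
# frequency.  The sharp cutoff below is a toy stand-in for a smooth bump.
M = 256
x = 2 * np.pi * np.arange(M) / M
f = np.cos(3 * x) + np.cos(40 * x)        # low + high frequency component

def multiplier(f_vals, symbol):
    """Apply the Fourier multiplier symbol(k) to a periodic grid function."""
    k = np.fft.fftfreq(M, d=1.0 / M)      # integer frequencies of the grid
    return np.real(np.fft.ifft(symbol(k) * np.fft.fft(f_vals)))

# A low-pass multiplier with cutoff N = 10 keeps the mode k = 3 and
# removes the mode k = 40.
low = multiplier(f, lambda k: (np.abs(k) <= 10).astype(float))
assert np.allclose(low, np.cos(3 * x), atol=1e-10)
```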

Set

$$\displaystyle \begin{aligned}u(t,x)\equiv e^{\pm it\sqrt{-\Delta_{\mathbb R^3}}}(f)\,. \end{aligned}$$

Our goal is to evaluate \(\|u\|{ }_{L^p([0,1];L^q(\mathbb R^3))}\). Thanks to the Littlewood-Paley square function theorem, we have that

$$\displaystyle \begin{aligned} \|u\|{}_{L^q(\mathbb R^3)}\approx \Big\| \big(|P_0 u|{}^2+\sum_{N}|P_N u|{}^2\big)^{\frac{1}{2}} \Big\|{}_{L^q(\mathbb R^3)}\,. \end{aligned} $$
(4.1.24)

The proof of (4.1.24) can be obtained as a combination of the Mikhlin-Hörmander multiplier theorem and the Khinchin inequality for Bernoulli variables. Using the Minkowski inequality, since p ≥ 2 and q ≥ 2, we can write

$$\displaystyle \begin{aligned} \|u\|{}_{L^p_{t}L^q_{x}}\lesssim \|P_0 u\|{}_{L^p_t L^q_{x}}+\|P_N u\|{}_{L^p_t L^q_x l^2_{N}}\leq \|P_0 u\|{}_{L^p_t L^q_{x}}+\|P_N u\|{}_{l^2_N L^p_t L^q_x } \end{aligned} $$
(4.1.25)

Therefore, it suffices to prove that for every \(\psi \in C_0^\infty (\mathbb R^3\backslash \{0\})\) there exists C > 0 such that for every N and every \(f\in L^2(\mathbb R^3)\),

$$\displaystyle \begin{aligned} \| \psi(D_x/N) e^{\pm it\sqrt{-\Delta_{\mathbb R^3}}}(f)\|{}_{L^p([0,1];L^q(\mathbb R^3))}\leq CN^{\frac{2}{p}}\|f\|{}_{L^2(\mathbb R^3)}\, . \end{aligned} $$
(4.1.26)

Indeed, suppose that (4.1.26) holds true. Then, we define \(\tilde {P}_{N}\) as \(\tilde {P}_N=\tilde {\psi }(D_x/N)\), where \(\tilde {\psi }\in C_0^\infty (\mathbb R^3\backslash \{0\})\) is such that \(\tilde {\psi }\equiv 1\) on the support of ψ. Then \(P_N=\tilde {P}_{N} P_{N}\). Now, coming back to (4.1.25), using the Sobolev inequality to evaluate \( \|P_0 u\|{ }_{L^p_t L^q_{x}}\) and (4.1.26) to evaluate \(\|P_N u\|{ }_{l^2_N L^p_t L^q_x }\), we arrive at the bound

$$\displaystyle \begin{aligned}\|u\|{}_{L^p_{t}L^q_{x}}\lesssim \|f\|{}_{L^2}+ \|N^{\frac{2}{p}}\|P_{N} f\|{}_{ L^2_x}\|{}_{l^2_N}\lesssim \|f\|{}_{H^{\frac{2}{p}}}\,. \end{aligned}$$

Therefore, it remains to prove (4.1.26). Set

$$\displaystyle \begin{aligned}T\equiv \psi(D_x/N) e^{\pm it\sqrt{-\Delta_{\mathbb R^3}}}\,. \end{aligned}$$

Our goal is to study the mapping properties of T from \(L^2_{x}\) to \(L^p_tL^q_{x}\). We can write

$$\displaystyle \begin{aligned} \|Tf\|{}_{L^p_tL^q_{x}}=\sup_{\|G\|{}_{L^{p'}_tL^{q'}_{x}}\leq 1}\big | \int_{t,x}Tf\overline{G} \big|, \end{aligned} $$
(4.1.27)

where \(\frac {1}{p}+\frac {1}{p'}=\frac {1}{q}+\frac {1}{q'}=1\). Note that in order to write (4.1.27) the values 1 and +∞ of p and q are allowed. Next, we can write

$$\displaystyle \begin{aligned} \int_{t,x}Tf\overline{G}=\int_{x}f\overline{T^\star G}, \end{aligned} $$
(4.1.28)

where \(T^\star \) is defined by

$$\displaystyle \begin{aligned}T^{\star}G\equiv \int_{0}^1 \psi(D_x/N) e^{\mp i\tau\sqrt{-\Delta_{\mathbb R^3}}}G(\tau)d\tau\,. \end{aligned}$$

Indeed, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{t,x}Tf\overline{G} &\displaystyle = &\displaystyle \int_{0}^{1}\int_{\mathbb R^3} \psi(D_x/N) e^{\pm it\sqrt{-\Delta_{\mathbb R^3}}} f \, \overline{G(t)}dxdt \\ &\displaystyle = &\displaystyle \int_{0}^{1}\int_{\mathbb R^3} f\, \overline{ \psi(D_x/N) e^{\mp it\sqrt{-\Delta_{\mathbb R^3}}}G(t)} dxdt \\ &\displaystyle = &\displaystyle \int_{\mathbb R^3} f\, \int_{0}^1 \overline{ \psi(D_x/N) e^{\mp it\sqrt{-\Delta_{\mathbb R^3}}}G(t)}dt\, dx\,. \end{array} \end{aligned} $$

Therefore (4.1.28) follows. But thanks to the Cauchy-Schwarz inequality we can write

$$\displaystyle \begin{aligned}|\int_{x}f\overline{T^\star G}|\leq \|f\|{}_{L^2_{x}}\|T^\star G\|{}_{L^2_{x}}\,. \end{aligned}$$

Therefore, in order to prove (4.1.26), it suffices to prove the bound

$$\displaystyle \begin{aligned}\|T^\star G\|{}_{L^2_{x}}\lesssim N^{\frac{2}{p}} \|G\|{}_{L^{p'}_t L^{q'}_{x}}\,. \end{aligned}$$

Next, we can write

$$\displaystyle \begin{aligned} \begin{array}{rcl} \|T^\star G\|{}^2_{L^2_{x}} &\displaystyle = &\displaystyle \int_{x} T^\star G\,\overline{T^\star G} \\ &\displaystyle = &\displaystyle \int_{t,x}T(T^{\star}(G))\overline{G} \\ &\displaystyle \leq &\displaystyle \|T(T^{\star}(G))\|{}_{L^p_t L^{q}_{x}} \|G\|{}_{L^{p'}_t L^{q'}_{x}}\,. \end{array} \end{aligned} $$

Therefore, estimate (4.1.26) would follow from the estimate

$$\displaystyle \begin{aligned} \|T(T^{\star}(G))\|{}_{L^p_t L^{q}_{x}} \lesssim N^{\frac{4}{p}}\|G\|{}_{L^{p'}_t L^{q'}_{x}}\,. \end{aligned} $$
(4.1.29)

An advantage of (4.1.29) with respect to (4.1.26) is that we have the same number of variables on both sides of the estimate. Coming back to the definitions of T and \(T^\star \), we can write

$$\displaystyle \begin{aligned}T(T^{\star}(G))= \int_{0}^1 \psi^2(D_x/N) e^{\pm i(t-\tau)\sqrt{-\Delta_{\mathbb R^3}}}G(\tau)d\tau\,. \end{aligned}$$

Now by using the triangle inequality, for a fixed t ∈ [0, 1], we can write

$$\displaystyle \begin{aligned} \|T(T^{\star}(G))\|{}_{L^q_{x}}\leq \int_{0}^1\ \big\| \psi^2(D_x/N) e^{\pm i(t-\tau)\sqrt{-\Delta_{\mathbb R^3}}}G(\tau) \big\|{}_{L^q_{x}}d\tau. \end{aligned} $$
(4.1.30)

On the other hand, using the Fourier transform, we can write

$$\displaystyle \begin{aligned}\psi^2(D_x/N) e^{\pm it\sqrt{-\Delta_{\mathbb R^3}}}(f) =\int_{\mathbb R^3} \psi^2(\xi/N)e^{\pm it |\xi|}e^{ix\cdot\xi}\hat{f}(\xi)d\xi\,. \end{aligned}$$

Therefore,

$$\displaystyle \begin{aligned}\psi^2(D_x/N) e^{\pm it\sqrt{-\Delta_{\mathbb R^3}}}(f) =\int_{\mathbb R^3} K(t,x-x')f(x')dx', \end{aligned}$$

where

$$\displaystyle \begin{aligned}K(t,x-x')=\int_{\mathbb R^3} \psi^2(\xi/N)e^{\pm it |\xi|}e^{i(x-x')\cdot\xi}d\xi\,. \end{aligned}$$

A simple change of variable leads to

$$\displaystyle \begin{aligned}K(t,x-x')= N^3\int_{\mathbb R^3} \psi^2(\xi)e^{\pm it N|\xi|}e^{iN(x-x')\cdot\xi}d\xi\,. \end{aligned}$$

In order to estimate K(t, x − x′), we invoke the following proposition.

Proposition 4.1.15 (Soft Stationary Phase Estimate)

Let d ≥ 1. For every Λ > 0 and every integer N ≥ 1 there exists C > 0 such that for every λ ≥ 1, every \(a\in C^{\infty }_0(\mathbb R^d)\) satisfying

$$\displaystyle \begin{aligned}\sup_{|\alpha|\leq 2N}\sup_{x\in\mathbb R^d}|\partial^{\alpha}a(x)|\leq \Lambda, \end{aligned}$$

every \(\varphi \in C^{\infty }(\mathrm{supp}(a))\) satisfying

$$\displaystyle \begin{aligned} \sup_{2\leq |\alpha|\leq 2N+2}\, \sup_{x\in {\mathrm{supp}(a)} }|\partial^{\alpha}\varphi(x)|\leq \Lambda \end{aligned} $$
(4.1.31)

one has the bound

$$\displaystyle \begin{aligned} \Big|\int_{\mathbb R^d}e^{i\lambda\varphi(x)}a(x)dx\Big|\leq C\int_{\mathrm{supp}(a)}\frac{dx}{(1+\lambda|\nabla\varphi(x)|{}^{2})^{N}}\,. \end{aligned} $$
(4.1.32)

Remark 4.1.16

Observe that in (4.1.31), we do not require upper bounds for the first derivatives of φ.

We will give the proof of Proposition 4.1.15 later. Let us first show how to use it in order to complete the proof of (4.1.26). We claim that

$$\displaystyle \begin{aligned} |K(t,x-x')|\lesssim N^3 (tN)^{-1}=N^2t^{-1}\,. \end{aligned} $$
(4.1.33)

Estimate (4.1.33) trivially follows from the expression defining K(t, x − x′) for |tN|≤ 1 (one simply ignores the oscillatory factor). For |tN|≥ 1, using Proposition 4.1.15 (with \(a=\psi ^2\), N = 2 and d = 3), we get the bound

$$\displaystyle \begin{aligned}|K(t,x-x')|\lesssim N^3 \int_{\mathrm{supp}(\psi)}\frac{d\xi}{(1+|tN||\nabla\varphi(\xi)|{}^{2})^{2}}\,, \end{aligned}$$

where

$$\displaystyle \begin{aligned}\varphi(\xi)=\pm |\xi|+\frac{(x-x')\cdot\xi}{t}\,. \end{aligned}$$

Observe that φ is \(C^\infty \) on the support of ψ and moreover it satisfies the assumptions of Proposition 4.1.15. We next observe that

$$\displaystyle \begin{aligned} \int_{\mathrm{supp}(\psi)}\frac{d\xi}{(1+|tN||\nabla\varphi(\xi)|{}^{2})^{2}} \lesssim (tN)^{-1}\,. \end{aligned} $$
(4.1.34)

Indeed, since \(\nabla \varphi (\xi )=\pm \frac {\xi }{|\xi |}+t^{-1}(x-x')\), one can split the domain of integration into regions on each of which there are two distinct indices \(j_1, j_2\in \{1,2,3\}\) allowing one to perform the change of variables

$$\displaystyle \begin{aligned}\eta_{j_1}=\partial_{\xi_{j_1}}\varphi(\xi),\quad \eta_{j_2}=\partial_{\xi_{j_2}}\varphi(\xi), \end{aligned}$$

with a non-degenerate Hessian. More precisely, we have

$$\displaystyle \begin{aligned}\det\left( \begin{array}{ll} \partial_{\xi_1}^2\varphi(\xi) & \partial_{\xi_1,\xi_2}^2\varphi(\xi) \\ \partial_{\xi_1,\xi_2}^2\varphi(\xi) & \partial_{\xi_2}^2\varphi(\xi) \end{array} \right)=\frac{\xi_3^2}{|\xi|{}^4} \end{aligned}$$

which is non-degenerate for \(\xi _3\neq 0\). Therefore for \(\xi _3\neq 0\), we can choose \(j_1=1\) and \(j_2=2\). Similarly, for \(\xi _1\neq 0\), we can choose \(j_1=2\) and \(j_2=3\), and for \(\xi _2\neq 0\), we can choose \(j_1=1\) and \(j_2=3\). Therefore, using that the support of ψ does not contain the origin, after splitting the domain of integration into three such regions, choosing the two “good” variables and neglecting the integration with respect to the remaining variable, we obtain that

$$\displaystyle \begin{aligned}\int_{\mathrm{supp}(\psi)}\frac{d\xi}{(1+|tN||\nabla\varphi(\xi)|{}^{2})^{2}} \lesssim \int_{\mathbb R^2} \frac{d \eta_{j_1} d \eta_{j_2}}{\big(1+|tN|\, (|\eta_{j_1}|{}^{2} +|\eta_{j_2}|{}^2)\big)^{2}} \lesssim (tN)^{-1}\,. \end{aligned}$$

Thus, we have (4.1.34) which in turn implies (4.1.33).
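The determinant formula behind the choice of “good” variables can be checked symbolically; here is a small sympy verification (since the affine part of the phase has zero Hessian, it suffices to take φ(ξ) = |ξ|):

```python
import sympy as sp

# Check the Hessian determinant used to select the "good" variables:
# for phi(xi) = |xi|, the 2x2 Hessian block in (xi_1, xi_2) has
# determinant xi_3^2 / |xi|^4.
x1, x2, x3 = sp.symbols('xi1 xi2 xi3', real=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)
H = sp.Matrix([[sp.diff(r, a, b) for b in (x1, x2)] for a in (x1, x2)])
det = sp.simplify(H.det())
assert sp.simplify(det - x3**2 / r**4) == 0
```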

Thanks to (4.1.33), we arrive at the estimate

$$\displaystyle \begin{aligned}\big\| \psi^2(D_x/N) e^{\pm i(t-\tau)\sqrt{-\Delta_{\mathbb R^3}}}G(\tau)\big\|{}_{L^\infty_{x}}\lesssim N^2|t-\tau|{}^{-1}\|G(\tau)\|{}_{L^1_{x}}\,. \end{aligned}$$

On the other hand, we also have the trivial bound

$$\displaystyle \begin{aligned}\big\| \psi^2(D_x/N) e^{\pm i(t-\tau)\sqrt{-\Delta_{\mathbb R^3}}}G(\tau)\big\|{}_{L^2_{x}}\lesssim \|G(\tau)\|{}_{L^2_{x}}\,. \end{aligned}$$

Therefore, using the basic Riesz-Thorin interpolation theorem, we arrive at the bound

$$\displaystyle \begin{aligned}\big\| \psi^2(D_x/N) e^{\pm i(t-\tau)\sqrt{-\Delta_{\mathbb R^3}}}G(\tau)\big\|{}_{L^q_{x}}\lesssim \frac{N^{\frac{4}{p}}}{|t-\tau|{}^{\frac{2}{p}}} \|G(\tau)\|{}_{L^{q'}_{x}}\,. \end{aligned}$$
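The exponent bookkeeping in this interpolation can be verified with exact arithmetic; the following check (sample values of p only, purely illustrative) confirms that the weight θ = 2∕p reproduces the stated exponents:

```python
from fractions import Fraction as F

# Riesz-Thorin bookkeeping: interpolate the L^1 -> L^infty bound
# (constant N^2 |t-tau|^{-1}) with the L^2 -> L^2 bound (constant 1),
# with interpolation weight theta.  For theta = 2/p this yields the
# L^{q'} -> L^q bound with constant N^{4/p} |t-tau|^{-2/p}.
for p in (F(3), F(4), F(6), F(10)):     # sample exponents with p > 2
    q = 1 / (F(1, 2) - 1 / p)           # from 1/p + 1/q = 1/2
    theta = 2 / p
    assert 1 / q == (1 - theta) / 2     # target space: 1/q = theta/inf + (1-theta)/2
    assert 1 - 1 / q == theta + (1 - theta) / 2  # source: 1/q' = theta/1 + (1-theta)/2
    assert (2 * theta, theta) == (4 / p, 2 / p)  # exponents of N and |t-tau|
```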

Therefore coming back to (4.1.30), we get

$$\displaystyle \begin{aligned}\|T(T^{\star}(G))\|{}_{L^q_{x}}\lesssim \int_{0}^1 \frac{N^{\frac{4}{p}}}{|t-\tau|{}^{\frac{2}{p}}} \big\| G(\tau)\big\|{}_{L^{q'}_{x}}d\tau\,. \end{aligned}$$

Therefore, the estimate (4.1.29) would follow from the one dimensional estimate

$$\displaystyle \begin{aligned} \big\|\int_{\mathbb R} \frac{f(\tau)}{|t-\tau|{}^{\frac{2}{p}}} d\tau\big\|{}_{L^p(\mathbb R)}\lesssim \|f\|{}_{L^{p'}(\mathbb R)}\,. \end{aligned} $$
(4.1.35)

Thanks to our assumption, one has \(\frac {2}{p}<1\) and also

$$\displaystyle \begin{aligned}1+\frac{1}{p}=\frac{1}{p'}+\frac{2}{p}\,. \end{aligned}$$

Therefore estimate (4.1.35) is precisely the Hardy-Littlewood-Sobolev inequality (cf. [29]). This completes the proof of (4.1.26), once we provide the proof of Proposition 4.1.15.

Proof of Proposition 4.1.15

We follow [17]. Consider the first order differential operator defined by

$$\displaystyle \begin{aligned}L\equiv \frac{1}{i(1+\lambda|\nabla \varphi|{}^2)}\sum_{j=1}^{d}\partial_{j}\varphi \partial_{j} + \frac{1}{1+\lambda|\nabla \varphi|{}^2}, \end{aligned}$$

which satisfies \(L(e^{i\lambda \varphi })=e^{i\lambda \varphi }\). We have that

$$\displaystyle \begin{aligned}\int_{\mathbb R^d}e^{i\lambda\varphi(x)}a(x)dx= \int_{\mathbb R^d}L(e^{i\lambda\varphi(x)})a(x)dx = \int_{\mathbb R^d}e^{i\lambda\varphi(x)} \tilde{L}(a(x))dx, \end{aligned}$$

where \(\tilde {L}\) is defined by

$$\displaystyle \begin{aligned} \begin{array}{rcl} \tilde{L}(u)&\displaystyle =&\displaystyle -\sum_{j=1}^{d}\frac{\partial_{j}\varphi}{i(1+\lambda|\nabla\varphi|{}^2)}\partial_{j}u\\ &\displaystyle &\displaystyle +\,\Big(-\sum_{j=1}^{d}\frac{\partial_{j}^{2}\varphi}{i(1+\lambda|\nabla\varphi|{}^2)}+ \sum_{j=1}^{d}\frac{2\lambda\partial_{j}\varphi\,(\nabla\varphi\cdot\nabla\partial_{j}\varphi)}{i(1+\lambda|\nabla\varphi|{}^2)^2}\Big)u +\frac{1}{1+\lambda|\nabla\varphi|{}^2}u\,. \end{array} \end{aligned} $$

As a consequence, we get the bound

$$\displaystyle \begin{aligned}\Big|\int_{\mathbb R^d}e^{i\lambda\varphi(x)}a(x)dx\Big|\leq \int_{\mathbb R^d} | \tilde{L}^N a|, \end{aligned} $$
(4.1.36)

where \(N\in \mathbb {N}\). To conclude, we need to estimate the coefficients of \(\tilde {L}\). We shall use the notation \(\langle u\rangle = ( 1 + |u|{ }^2)^{\frac 12}\) and we set \(\lambda =\mu ^2\). At first, we consider

$$\displaystyle \begin{aligned}F(x)= Q( \mu^2 | \nabla \varphi(x) |{}^2), \quad Q(u)= { 1 \over 1 + u }, \, u\geq 0.\end{aligned}$$

We clearly have

$$\displaystyle \begin{aligned} F \lesssim \langle \mu \nabla \varphi\rangle^{-2} \end{aligned} $$
(4.1.37)

and we shall estimate the derivatives of F. Set

$$\displaystyle \begin{aligned}\Lambda^k(x) = \sup_{ 2 \leq |\alpha | \leq k} |\partial^\alpha \varphi(x) |. \end{aligned}$$

We have the following statement.

Lemma 4.1.17

For |α| = k ≥ 1, we have the bound

$$\displaystyle \begin{aligned} | \partial^\alpha F(x) | \lesssim C( \Lambda^{k+1}(x)) \Big( { 1 \over \langle\mu \nabla \varphi(x)\rangle^2} + { \mu^k \over \langle\mu \nabla \varphi(x)\rangle^{k+2}}\Big), \end{aligned} $$
(4.1.38)

where \(C:\mathbb R^{+}\rightarrow \mathbb R^{+}\)is a suitable continuous increasing function (which can change from line to line and can always be taken of the form C(t) = (1 + t)Mfor a sufficiently large M).

Proof

Using an induction on k, we get that \(\partial ^\alpha F\) for |α| = k ≥ 1 is a linear combination of terms of the form

$$\displaystyle \begin{aligned}T_{q}= Q^{(m)}( \mu^2 |\nabla \varphi|{}^2) \Big(\partial^{\gamma_{1}} ( \mu^2 |\nabla \varphi|{}^2) \Big)^{q_{1}} \cdots \Big( \partial^{\gamma_{k}} (\mu^2 |\nabla \varphi |{}^2)\Big)^{q_{k}}\end{aligned}$$

where

$$\displaystyle \begin{aligned} q_{1}+ \cdots + q_{k}= m \quad \mathrm{and} \quad \sum |\gamma_{i}| q_{i}= k,\quad q_i\geq 0. \end{aligned} $$
(4.1.39)

Since \(|Q^{(m)}(u)| \lesssim \langle u\rangle ^{ -m- 1} ,\) we get

$$\displaystyle \begin{aligned}|T_{q}|\lesssim {1 \over \langle\mu \nabla \varphi \rangle^{ 2}} \left( {\mu \over \langle\mu \nabla \varphi \rangle}\right)^{2m} \Big| \Big(\partial^{\gamma_{1}} ( |\nabla \varphi|{}^2) \Big)^{q_{1}} \cdots \Big( \partial^{\gamma_{k}} ( |\nabla \varphi |{}^2)\Big)^{q_{k}} \Big|.\end{aligned}$$

Moreover, by the Leibniz formula,

$$\displaystyle \begin{aligned} |\partial^{\gamma_{i}} (|\nabla \varphi|{}^2)|\leq \left\{ \begin{array}{l l } C(\Lambda^{2})|\nabla \varphi|, & \text{ if } |\gamma_i|=1,\\ C(\Lambda^{|\gamma_i|+1})(|\nabla \varphi|+1), & \text{ if } |\gamma_i|>1. \end{array} \right. \end{aligned}$$

We therefore have the following bound for \(T_q\):

$$\displaystyle \begin{aligned} \begin{array}{rcl} |T_{q}| &\displaystyle \lesssim &\displaystyle C(\Lambda^{k+1}) {1 \over \langle\mu \nabla \varphi \rangle^{ 2}} \left( {\mu \over \langle\mu \nabla \varphi \rangle}\right)^{2m} \left( |\nabla \varphi|{}^{m} + |\nabla \varphi|{}^{\sum_{|\gamma_i|=1}q_i}\right)\\ &\displaystyle \lesssim&\displaystyle C(\Lambda^{k+1}){1 \over \langle\mu \nabla \varphi \rangle^{ 2}}\left[ \left( {\mu \over \langle\mu \nabla \varphi \rangle}\right)^{m} + \left({\mu \over \langle\mu \nabla \varphi \rangle} \right)^{m+\sum_{|\gamma_i|>1}q_i}\right]. \end{array} \end{aligned} $$

Next, by using (4.1.39), we note that

$$\displaystyle \begin{aligned}m+\sum_{|\gamma_i|>1}q_i = \sum_{|\gamma_i|>1}2q_i + \sum_{|\gamma_i|=1}q_i \leq \sum |\gamma_{i}| q_{i}= k.\end{aligned}$$

Therefore, we get

$$\displaystyle \begin{aligned} |T_{q}| \lesssim C(\Lambda^{k+1}) \Big( { 1 \over \langle\mu \nabla \varphi\rangle^2} + { \mu^k \over \langle\mu \nabla \varphi\rangle^{k+2}}\Big). \end{aligned}$$

This completes the proof of Lemma 4.1.17. □

We are now in position to prove the following statement.

Lemma 4.1.18

For \(N\in \mathbb {N}\), we can write \(\tilde {L}^N\) in the form

$$\displaystyle \begin{aligned}\tilde{L}^Nu= \sum_{ |\alpha | \leq N} a^{(N)}_{\alpha} \partial^\alpha u \end{aligned} $$
(4.1.40)

with the estimates

$$\displaystyle \begin{aligned} |a_{\alpha}^{(N)}(x)| \lesssim C(\Lambda^{ N+ 2}(x) ) { 1 \over \langle \mu \nabla \varphi(x) \rangle^N } \end{aligned} $$
(4.1.41)

and more generally for |β| = k,

$$\displaystyle \begin{aligned} |\partial^\beta a_{\alpha}^{(N)}(x)| \lesssim C(\Lambda^{ N+ k + 2}(x) )\Big( { 1 \over \langle \mu \nabla \varphi(x) \rangle^N } + {\mu^k \over\langle \mu \nabla \varphi(x)\rangle^{N+k}} \Big). \end{aligned} $$
(4.1.42)

Proof

We reason by induction on N. First, we notice that \(\tilde {L}\) is of the form

$$\displaystyle \begin{aligned}\tilde{L}= \sum_{j=1}^d a_{j } \partial_{j} + b,\end{aligned}$$

where

$$\displaystyle \begin{aligned}a_{j}= i \partial_{j} \varphi F, \quad b= F+ i \sum_{j=1}^d \partial_{j}\big( \partial_{j}\varphi F \big)= F+ \sum_{j=1}^d \partial_{j} a_{j}.\end{aligned}$$

Consequently, by using (4.1.37), we get that

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} &\displaystyle &\displaystyle |a_{j}| \lesssim { 1 \over \mu } { 1 \over \langle \mu \nabla \varphi \rangle} \end{array} \end{aligned} $$
(4.1.43)

and by the Leibniz formula, since \(\partial ^\alpha a_{j}\) for |α|≥ 1 is a linear combination of terms of the form

$$\displaystyle \begin{aligned}( \partial^{\beta}\partial_{j} \varphi) \partial^\gamma F, \quad |\beta|+ |\gamma| = |\alpha|, \end{aligned}$$

we get by using (4.1.38) that for |α| = k ≥ 1,

$$\displaystyle \begin{aligned} |\partial^\alpha a_{j} | \lesssim C(\Lambda^{ k + 1}) \Big( { 1 \over \langle \mu \nabla \varphi \rangle} + { \mu^{k-1} \over \langle \mu \nabla \varphi \rangle^{k+1}} \Big). \end{aligned} $$
(4.1.44)

Consequently, we also find thanks to (4.1.44), (4.1.38) that for |α| = k ≥ 0,

$$\displaystyle \begin{aligned} |\partial^\alpha b | \lesssim C(\Lambda^{k+2}) \Big( { 1 \over \langle \mu \nabla \varphi \rangle} + { \mu^{k} \over \langle \mu \nabla \varphi \rangle^{k+2}} \Big). \end{aligned} $$
(4.1.45)

Using (4.1.44) and (4.1.45), we obtain that the assertion of the lemma holds true for N = 1. Next, let us assume that it holds at order N. We have

$$\displaystyle \begin{aligned}(\tilde{L})^{N+1} u = \sum_{j=1}^d \sum_{ |\alpha |\leq N} \Big(a_{j} a^{(N)}_{\alpha} \partial_{j} \partial^\alpha u + a_{j} \partial_{j} a^{(N)}_{\alpha} \partial^\alpha u\Big) + \sum_{ |\alpha | \leq N } b a^{(N)}_{\alpha} \partial^\alpha u.\end{aligned}$$

Consequently, we get that the coefficients are of the form

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle a_{\alpha}^{(N+1)}= a_{j} a_{\beta}^{(N)} , \quad |\alpha|=N+1, \, |\beta|=N, \\ &\displaystyle &\displaystyle a_{\alpha}^{(N+1)} = a_{j} \partial_{j} a_{\beta}^{(N)} + a_{j} a_{\gamma}^{(N)} + b a_{\delta}^{(N)}, \quad |\beta|= |\delta |= |\alpha|, \, |\gamma|= |\alpha |-1. \end{array} \end{aligned} $$

Therefore, by using (4.1.43) and (4.1.42), we get that (4.1.41) is true for N + 1. In order to prove (4.1.42) for N + 1, we need to evaluate \(\partial ^\gamma a_{\alpha }^{(N+1)}\). The estimate of the contribution of all terms except \(\partial ^\gamma (a_{j} \partial _{j} a_{\beta }^{(N)})\) follows directly from the induction hypothesis. In order to estimate \(\partial ^\gamma ( a_{j} \partial _{j} a_{\beta }^{(N)})\), we need to invoke (4.1.43) and (4.1.44) and the induction hypothesis. This completes the proof of Lemma 4.1.18. □

Finally, thanks to (4.1.36) and Lemma 4.1.18, we get

$$\displaystyle \begin{aligned}\Big| \int_{\mathbb{R}^d} e^{i \lambda \varphi(x)} a(x)\, dx \Big| \lesssim K \int_{\mathrm{supp(a)}} { dx \over {( 1 + \lambda |\nabla \varphi |{}^2) }^{N\over 2} }, \end{aligned}$$

where

$$\displaystyle \begin{aligned}K\equiv (\sup_{x\in \mathrm{supp(a)} } \Lambda^{N+2}(x)) \big(\sup_{x\in\mathbb R^d} \sup_{ | \alpha | \leq N} | \partial^\alpha a(x)|\big). \end{aligned}$$

Applying this bound with N replaced by 2N yields (4.1.32). This completes the proof of Proposition 4.1.15. □

This completes the proof of Proposition 4.1.13. □

This completes the proof of Proposition 4.1.12. □

Remark 4.1.19

If, in the proof of the Strichartz estimates, we used the triangle inequality instead of the square function theorem and the Young inequality instead of the Hardy-Littlewood-Sobolev inequality, we would obtain slightly less precise estimates. These estimates are sufficient to get all sub-critical well-posedness results. However, in the case of initial data with critical Sobolev regularity, the finer arguments using the square function and the Hardy-Littlewood-Sobolev inequality are essential.

4.1.4 Local Well-Posedness in \(H^s\times H^{s-1}\), s ≥ 1∕2

In this section, we shall use the Strichartz estimates in order to improve the well-posedness result of Proposition 4.1.1. We shall be able to consider initial data in the more singular Sobolev spaces \(H^s\times H^{s-1}\), s ≥ 1∕2. We start with a definition.

Definition 4.1.20

For 0 ≤ s < 1, a couple of real numbers (p, q) with \(\frac 2 s\leq p\leq + \infty \) is called s-admissible if

$$\displaystyle \begin{aligned}\frac{1}{p}+\frac{3}{q}=\frac{3}{2}-s. \end{aligned}$$
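As a concrete illustration of the definition, one can solve the admissibility relation for q; the helper below is hypothetical and not part of the text:

```python
from fractions import Fraction as F

# s-admissibility from Definition 4.1.20: 1/p + 3/q = 3/2 - s with
# 2/s <= p <= +infinity.  The helper solves the relation for q.
def admissible_q(s, p):
    """Return the q making (p, q) s-admissible."""
    return 3 / (F(3, 2) - s - 1 / p)

# For s = 1/2 the endpoint p = 2/s = 4 gives q = 4, and the p = +infinity
# endpoint (1/p = 0) gives 3/q = 1, i.e. q = 3.
assert admissible_q(F(1, 2), F(4)) == 4
assert 3 / (F(3, 2) - F(1, 2)) == 3
```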

For T > 0, 0 ≤ s < 1, we define the spaces

$$\displaystyle \begin{aligned} X^s_T= C ([0, T]; H^s( \mathbb T^3)) \bigcap_{(p,q) \ s\text{-admissible}} L^p((0, T) ; L^q(\mathbb T^3)) \end{aligned} $$
(4.1.46)

and its “dual space”

$$\displaystyle \begin{aligned} Y^s_T= \bigcup_ {(p,q) \ s\text{-admissible}} L^{p'}((0, T) ; L^{q'}(\mathbb T^3)) \end{aligned} $$
(4.1.47)

(p′, q′) being the conjugate couple of (p, q), equipped with their natural norms (notice that to define these spaces, we can keep only the extremal couples corresponding to p = 2∕s and p = +∞ respectively).

We can now state the non-homogeneous Strichartz estimates for the three dimensional wave equation on the torus \(\mathbb T^3\).

Theorem 4.1.21

For every 0 < s < 1 and every s-admissible couple (p, q), there exists C > 0 such that for every T ∈ ]0, 1], every \(F\in Y^{1-s}_T\) and every \((u_0,u_1)\in H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)\) one has

$$\displaystyle \begin{aligned} \|S(t)(u_0,u_1)\|{}_{X^s_T}\leq C(\|u_0\|{}_{H^s(\mathbb T^3)}+\|u_1\|{}_{H^{s-1}(\mathbb T^3)}) \end{aligned} $$
(4.1.48)

and

$$\displaystyle \begin{aligned} \Big\|\int_0^t \frac {\sin{}((t-\tau) \sqrt{ - \Delta})}{\sqrt{- \Delta}} (F(\tau)) d\tau\Big\|{}_{X^s_T}\leq C \|F\|{}_{Y^{1-s}_T} \end{aligned} $$
(4.1.49)

Proof

Thanks to the Hölder inequality, in order to prove (4.1.48), it suffices to consider the two endpoint cases for p, i.e. p = 2∕s and p = +∞ (the estimate in \(C ([0, T]; H^s( \mathbb T^3))\) is straightforward). The case p = 2∕s follows from Theorem 4.1.9. The case p = +∞ results from the Sobolev embedding. This ends the proof of (4.1.48).

Let us next turn to (4.1.49). We first observe that

$$\displaystyle \begin{aligned} \Big\|\int_0^t \frac {\sin{}((t-\tau) \sqrt{ - \Delta})}{\sqrt{- \Delta}} (F(\tau)) d\tau\Big\|{}_{ C ([0, T]; H^s( \mathbb T^3)) }\leq C \|F\|{}_{Y^{1-s}_T} \end{aligned} $$
(4.1.50)

follows by duality from (4.1.48). Thanks to (4.1.50), we obtain that it suffices to show

$$\displaystyle \begin{aligned} \Big\|\int_0^t \frac {\sin{}((t-\tau) \sqrt{ - \Delta})}{\sqrt{- \Delta}} (F(\tau)) d\tau\Big\|{}_{ L^{p_1}_T L^{q_1}}\leq C \|F\|{}_{ L^{p_2^{\prime}}_T L^{q_2^{\prime}}}, \end{aligned} $$
(4.1.51)

where \((p_1, q_1)\) is s-admissible, \((p_2^{\prime },q_2^{\prime })\) is such that \((p_2, q_2)\) is (1 − s)-admissible, and where for shortness we set

$$\displaystyle \begin{aligned}L^p_T L^q \equiv L^{p}((0, T) ; L^{q}(\mathbb T^3)). \end{aligned}$$

Denote by \(\Pi _0\) the projector on the zero Fourier mode on \(\mathbb T^3\), i.e.

$$\displaystyle \begin{aligned}\Pi_0(f)=(2\pi)^{-3}\int_{\mathbb T^3}f(x)dx\,. \end{aligned}$$

We have the bound

$$\displaystyle \begin{aligned}\Big\|\int_0^t \frac {\sin{}((t-\tau) \sqrt{ - \Delta})}{\sqrt{- \Delta}} (\Pi_0 F(\tau)) d\tau\Big\|{}_{L^p_T L^q} \leq C\|F\|{}_{L^1((0, T) ; L^1(\mathbb T^3))}\,. \end{aligned}$$

By the Hölder inequality

$$\displaystyle \begin{aligned} \|F\|{}_{L^1((0, T) ; L^1(\mathbb T^3))}\leq C \|F\|{}_{ L^{p_2^{\prime}}_T L^{q_2^{\prime}}} \end{aligned}$$

and therefore, it suffices to show the bound

$$\displaystyle \begin{aligned} \Big\|\int_0^t \frac {\sin{}((t-\tau) \sqrt{ - \Delta})}{\sqrt{- \Delta}} (\Pi_0^{\perp}F(\tau)) d\tau\Big\|{}_{L^{p_1}_TL^{q_1}}\leq C \|F\|{}_{ L^{p_2^{\prime}}_T L^{q_2^{\prime}} }, \end{aligned} $$
(4.1.52)

where

$$\displaystyle \begin{aligned}\Pi_0^{\perp}\equiv 1-\Pi_0\,. \end{aligned}$$

By writing the \(\sin \) function as a sum of exponentials, we obtain that (4.1.52) follows from

$$\displaystyle \begin{aligned} \Big\|\int_0^t e^{\pm i(t-\tau)\sqrt{-\Delta}} ((- \Delta)^{-\frac{1}{2}}\Pi_0^{\perp}F(\tau)) d\tau\Big\|{}_{ L^{p_1}_T L^{q_1} }\leq C \|F\|{}_{ L^{p_2^{\prime}}_T L^{q_2^{\prime}} }\,. \end{aligned} $$
(4.1.53)

Observe that \((- \Delta )^{-\frac {1}{2}}\Pi _0^{\perp }\) is well defined as a bounded operator from \(H^s(\mathbb T^3)\) to \(H^{s+1}(\mathbb T^3)\). Set

$$\displaystyle \begin{aligned}K\equiv e^{\pm it \sqrt{-\Delta}}\Pi_{0}^\perp\,. \end{aligned}$$

Thanks to (4.1.48), by writing

$$\displaystyle \begin{aligned}e^{\pm it \sqrt{-\Delta}}\Pi_{0}^\perp= \cos{}(t\sqrt{-\Delta})\Pi_{0}^\perp\pm i\sin{}( t\sqrt{-\Delta}) (- \Delta)^{-\frac{1}{2}} \Pi_{0}^\perp (- \Delta)^{\frac{1}{2}}\, , \end{aligned}$$

we see that the map K is bounded from \(H^s(\mathbb T^3)\) to \(X^s_T\). Consequently, the dual map \(K^*\), defined by

$$\displaystyle \begin{aligned}K^*(F) = \int_{0}^T e^{\mp i\tau \sqrt{-\Delta}}\Pi_{0}^\perp(F(\tau))d\tau \end{aligned}$$

is bounded from \(Y^s_T\) to \(H^{-s}(\mathbb T^3)\). Using the last property with s replaced by 1 − s (which remains in ]0, 1[ if s ∈ ]0, 1[), we obtain the following sequence of continuous mappings

$$\displaystyle \begin{aligned} L^{p_2^{\prime}}_T L^{q_2^{\prime}} \stackrel{K^{\star}}{\longrightarrow}H^{s-1}(\mathbb T^3) \stackrel{ (- \Delta)^{-\frac{1}{2}}\Pi_0^{\perp}}{\longrightarrow}H^{s}(\mathbb T^3) \stackrel{K}{\longrightarrow} L^{p_1}_T L^{q_1}\,. \end{aligned} $$
(4.1.54)

On the other hand, we have

$$\displaystyle \begin{aligned}\big(K \circ ((- \Delta)^{-\frac{1}{2}}\Pi_0^{\perp}) \circ K^*\big)(F) = \int_0^T e^{\pm i(t-\tau)\sqrt{-\Delta}} ((- \Delta)^{-\frac{1}{2}}\Pi_0^{\perp}F(\tau)) d\tau\,. \end{aligned}$$

Therefore, we obtain the bound

$$\displaystyle \begin{aligned} \Big\|\int_0^T e^{\pm i(t-\tau)\sqrt{-\Delta}} ((- \Delta)^{-\frac{1}{2}}\Pi_0^{\perp}F(\tau)) d\tau\Big\|{}_{ L^{p_1}_T L^{q_1} }\leq C \|F\|{}_{L^{p_2^{\prime}}_T L^{q_2^{\prime}}}\,. \end{aligned} $$
(4.1.55)

The passage from (4.1.55) to (4.1.53) can be done by using the Christ-Kiselev [11] argument, as we explain below. By a density argument it suffices to prove (4.1.53) for \(F\in C^{\infty }( [0,T]\times \mathbb T^3 )\). We can of course also assume that

$$\displaystyle \begin{aligned}\|F\|{}_{L^{p_2^{\prime}}_T L^{q_2^{\prime}}}=1. \end{aligned}$$

For n ≥ 1 an integer and \(m = 0, 1, \cdots , 2^n\), we define \(t_{n,m}\) as

$$\displaystyle \begin{aligned}\int_{0}^{t_{n,m}}\|F(\tau)\|{}_{L^{q_2^{\prime}}(\mathbb T^3)}^{p_2^{\prime}}d\tau=m2^{-n}\,. \end{aligned}$$

Of course \(0=t_{n,0}\leq t_{n,1}\leq \cdots \leq t_{n,2^n}=T\). Next, we observe that for 0 ≤ α < β ≤ 1 there is a unique n such that \(\alpha \in [2m2^{-n}, (2m+1)2^{-n})\) and \(\beta \in [(2m+1)2^{-n}, (2m+2)2^{-n})\) for some \(m\in \{0, 1, \cdots , 2^{n-1}-1\}\). Indeed, this can be checked by writing the representations of α and β in base 2 (the number n corresponds to the first different digit of α and β). Therefore, if we denote by \(\chi _{\tau <t}(\tau ,t)\) the characteristic function of the set {(τ, t) : 0 ≤ τ < t ≤ T} then we can write

$$\displaystyle \begin{aligned} \chi_{\tau<t}(\tau,t)=\sum_{n=1}^\infty\sum_{m=0}^{2^{n-1}-1} \chi_{n,2m}(\tau)\chi_{n,2m+1}(t), \end{aligned} $$
(4.1.56)

where \(\chi _{n,m}\) (\(m = 0, 1, \cdots , 2^n-1\)) denotes the characteristic function of the interval \([t_{n,m}, t_{n,m+1})\). Indeed, in order to achieve (4.1.56), it suffices to apply the previous observation for every (τ, t) with 0 ≤ τ < t ≤ T, with α and β defined as

$$\displaystyle \begin{aligned}\alpha=\int_{0}^{\tau}\|F(s)\|{}_{L^{q_2^{\prime}}(\mathbb T^3)}^{p_2^{\prime}}ds,\quad \beta=\int_{0}^t\|F(s)\|{}_{L^{q_2^{\prime}}(\mathbb T^3)}^{p_2^{\prime}}ds\,. \end{aligned}$$
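The uniqueness of the dyadic pair (n, m) can be tested by brute force with exact rational arithmetic; the following sketch (helper name illustrative) mirrors the base-2 argument:

```python
import math
from fractions import Fraction as F

def dyadic_pairs(alpha, beta, n_max=40):
    """All (n, m) with alpha in [2m*2^-n, (2m+1)*2^-n) and
    beta in [(2m+1)*2^-n, (2m+2)*2^-n); the claim is there is exactly one."""
    pairs = []
    for n in range(1, n_max + 1):
        fa = math.floor(alpha * 2**n)     # alpha lies in [fa*2^-n, (fa+1)*2^-n)
        fb = math.floor(beta * 2**n)
        if fa % 2 == 0 and fb == fa + 1:  # fa = 2m even, fb = 2m + 1
            pairs.append((n, fa // 2))
    return pairs

# The binary expansions of 3/10 and 3/5 differ already in the first digit,
# so the unique pair is n = 1, m = 0.
assert dyadic_pairs(F(3, 10), F(3, 5)) == [(1, 0)]
assert len(dyadic_pairs(F(24, 100), F(26, 100))) == 1
```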

Therefore, thanks to (4.1.56), we can write

$$\displaystyle \begin{aligned}\int_0^t e^{\pm i(t-\tau)\sqrt{-\Delta}} ((- \Delta)^{-\frac{1}{2}}\Pi_0^{\perp}F(\tau)) d\tau \end{aligned}$$

as

$$\displaystyle \begin{aligned}\sum_{n=1}^\infty\sum_{m=0}^{2^{n-1}-1} \chi_{n,2m+1}(t) \int_0^{T} e^{\pm i(t-\tau)\sqrt{-\Delta}} ((- \Delta)^{-\frac{1}{2}}\Pi_0^{\perp}\chi_{n,2m}(\tau)F(\tau)) d\tau\,. \end{aligned}$$

The goal is to evaluate the \(L^{p_1}_T L^{q_1}\) norm of the last expression. Using that for a fixed n, the functions \(\chi _{n,2m+1}(t)\) have disjoint supports, we obtain that the \(L^{p_1}_T L^{q_1}\) norm of the last expression can be estimated by

$$\displaystyle \begin{aligned}\sum_{n=1}^\infty \Big( \sum_{m=0}^{2^{n-1}-1} \big\| \int_0^{T} e^{\pm i(t-\tau)\sqrt{-\Delta}} ((- \Delta)^{-\frac{1}{2}}\Pi_0^{\perp}\chi_{n,2m}(\tau)F(\tau)) d\tau \big\|{}^{p_1}_{L^{p_1}_T L^{q_1}} \Big)^{\frac{1}{p_1}}\,. \end{aligned}$$

Now, using (4.1.55), we obtain that the last expression is bounded by

$$\displaystyle \begin{aligned} C \sum_{n=1}^\infty \Big( \sum_{m=0}^{2^{n-1}-1} \big\| \chi_{n,2m}(\tau)F(\tau) \big\|{}^{p_1}_{L^{p_2^{\prime}}_T L^{q_2^{\prime}}} \Big)^{\frac{1}{p_1}}\,. \end{aligned} $$
(4.1.57)

By definition

$$\displaystyle \begin{aligned}\big\| \chi_{n,2m}(\tau)F(\tau) \big\|{}^{p_2^{\prime}}_{L^{p_2^{\prime}}_T L^{q_2^{\prime}}}=2^{-n} \end{aligned}$$

and therefore (4.1.57) is equal to

$$\displaystyle \begin{aligned}C \sum_{n=1}^\infty \Big( \sum_{m=0}^{2^{n-1}-1} 2^{-\frac{n p_1}{p_2^{\prime}}} \Big)^{\frac{1}{p_1}} \leq C \sum_{n=1}^\infty 2^{n(\frac{1}{p_1}-\frac{1}{p_2^{\prime}})} \,. \end{aligned}$$

The last series is convergent since by the definition of admissible pairs it follows that \(p_2^{\prime }<2<p_1\). Therefore we proved that (4.1.55) indeed implies (4.1.53). This completes the proof of Theorem 4.1.21. □

We can now use Theorem 4.1.21 in order to get the following improvement of Proposition 4.1.1.

Theorem 4.1.22 (Low Regularity Local Well-Posedness)

Let s > 1∕2. Consider the cubic defocusing wave equation

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta)u+u^3=0\,, \end{aligned} $$
(4.1.58)

posed on \(\mathbb T^3\). There exist positive constants γ, c and C such that for every Λ ≥ 1, every

$$\displaystyle \begin{aligned}(u_0,u_1)\in H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3) \end{aligned}$$

satisfying

$$\displaystyle \begin{aligned} \|u_0\|{}_{H^s}+\|u_1\|{}_{H^{s-1}}\leq \Lambda \end{aligned} $$
(4.1.59)

there exists a unique solution of (4.1.58) on the time interval [0, T], \(T\geq c\Lambda^{-\gamma}\), with initial data

$$\displaystyle \begin{aligned}u(0,x)=u_0(x), \quad \partial_t u(0,x)=u_1(x)\,. \end{aligned}$$

Moreover the solution satisfies

$$\displaystyle \begin{aligned}\|(u,\partial_t u)\|{}_{L^\infty([0,T],H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3))}\leq C\Lambda, \end{aligned}$$

u is unique in the class \(X^s_T\) described in Definition 4.1.20, and the dependence on the initial data and on time is continuous. More precisely, if u and \(\tilde {u}\) are two solutions of (4.1.58) with initial data satisfying (4.1.59) then

$$\displaystyle \begin{aligned} &\|(u-\tilde{u},\partial_t u-\partial_t \tilde{u})\|{}_{L^\infty([0,T],H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3))} \\ &\quad \leq C\big( \|u(0)-\tilde{u}(0)\|{}_{H^s(\mathbb T^3)}+\|\partial_t u(0)-\partial_t \tilde{u}(0)\|{}_{H^{s-1}(\mathbb T^3)} \big). \end{aligned} $$
(4.1.60)

Finally, if

$$\displaystyle \begin{aligned}(u_0,u_1)\in H^\sigma(\mathbb T^3)\times H^{\sigma-1}(\mathbb T^3) \end{aligned}$$

for some \(\sigma \geq s\) then there exists \(c_{\sigma} > 0\) such that

$$\displaystyle \begin{aligned}(u,\partial_t u)\in C([0,c_{\sigma} \Lambda^{-\gamma}];H^\sigma(\mathbb T^3)\times H^{\sigma-1}(\mathbb T^3))\,. \end{aligned}$$

Proof

We shall suppose that s ∈ (1∕2, 1), the case s ≥ 1 being already treated in Proposition 4.1.1. As in the proof of Proposition 4.1.1, we solve the integral equation

$$\displaystyle \begin{aligned} u(t)=S(t)(u_0,u_1)-\int_{0}^t \frac{\sin{}((t-\tau)\sqrt{-\Delta})}{\sqrt{-\Delta}}(u^3(\tau))\,d\tau \end{aligned}$$

by a fixed point argument. Recall that

$$\displaystyle \begin{aligned}\Phi_{u_0,u_1}(u)= S(t)(u_0,u_1)-\int_{0}^t \frac{\sin{}((t-\tau)\sqrt{-\Delta})}{\sqrt{-\Delta}}(u^3(\tau))\,d\tau. \end{aligned}$$

We shall estimate \(\Phi _{u_0,u_1}(u)\) in the spaces \(X_T^s\) introduced in Definition 4.1.20. Thanks to Theorem 4.1.21

$$\displaystyle \begin{aligned}\|S(t)(u_0,u_1)\|{}_{X^s_T}\leq C(\|u_0\|{}_{H^s(\mathbb T^3)}+\|u_1\|{}_{H^{s-1}(\mathbb T^3)})\,. \end{aligned}$$

Another use of Theorem 4.1.21 gives

$$\displaystyle \begin{aligned}\Big\| \int_{0}^t \frac{\sin{}((t-\tau)\sqrt{-\Delta})}{\sqrt{-\Delta}}(u^3(\tau))\,d\tau \Big\|{}_{X^s_T} \leq C\|u^3\|{}_{L^{\frac{2}{1+s}}_T L^{\frac{2}{2-s}}} = C\|u\|{}^3_{L^{\frac{6}{1+s}}_T L^{\frac{6}{2-s}}} \,. \end{aligned}$$

Observe that the couple \((\frac {2}{1+s}, \frac {2}{2-s})\) is the dual of \((\frac {2}{1-s}, \frac {2}{s})\) which is the end point (1 − s)-admissible couple. We also observe that if (p, q) is an s-admissible couple then \(\frac {1}{q}\) ranges in the interval \( [\frac {1}{2}-\frac {s}{2},\frac {1}{2}-\frac {s}{3}]. \) The assumption s ∈ (1∕2, 1) implies

$$\displaystyle \begin{aligned}\frac{1}{2}-\frac{s}{2}<\frac{2-s}{6}<\frac{1}{2}-\frac{s}{3}\,. \end{aligned}$$

Therefore \(q^{\star }\equiv \frac {6}{2-s}\) is such that there exists \(p^{\star}\) such that \((p^{\star}, q^{\star})\) is an s-admissible couple. By definition \(p^{\star}\) is such that

$$\displaystyle \begin{aligned}\frac{1}{p^{\star}}+\frac{3}{q^{\star}}=\frac{3}{2}-s\,. \end{aligned}$$

The last relation implies that

$$\displaystyle \begin{aligned}\frac{1}{p^{\star}}=\frac{1}{2}-\frac{s}{2}\,. \end{aligned}$$

Now, using the Hölder inequality in time, we obtain

$$\displaystyle \begin{aligned}\|u\|{}_{L^{\frac{6}{1+s}}_T L^{\frac{6}{2-s}}}\leq T^{\frac{2s-1}{3}} \|u\|{}_{L^{p^{\star}}_T L^{q^{\star}}} \end{aligned}$$

which in turn implies

$$\displaystyle \begin{aligned}\Big\| \int_{0}^t \frac{\sin{}((t-\tau)\sqrt{-\Delta})}{\sqrt{-\Delta}}(u^3(\tau))\,d\tau \Big\|{}_{X^s_T} \leq CT^{2s-1}\|u\|{}_{X^s_T}^3\,. \end{aligned}$$

Consequently

$$\displaystyle \begin{aligned}\|\Phi_{u_0,u_1}(u)\|{}_{X^s_T}\leq C(\|u_0\|{}_{H^s(\mathbb T^3)}+\|u_1\|{}_{H^{s-1}(\mathbb T^3)}) + CT^{2s-1}\|u\|{}_{X^s_T}^3\,. \end{aligned}$$

A similar argument yields

$$\displaystyle \begin{aligned} \|\Phi_{u_0,u_1}(u) -\Phi_{u_0,u_1}(v) \|{}_{X^s_T}\leq CT^{2s-1} \big(\|u\|{}_{X^s_T}^2+\|v\|{}^2_{X^s_T}\big)\|u-v\|{}_{X^s_T} \,. \end{aligned} $$
(4.1.61)

Now, one obtains the existence and the uniqueness statements as in the proof of Proposition 4.1.1. Estimate (4.1.60) follows from (4.1.61) and a similar estimate obtained after differentiation of the Duhamel formula with respect to t. The propagation of regularity statement can be obtained as in the proof of Proposition 4.1.1. This completes the proof of Theorem 4.1.22. □
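The exponent bookkeeping in the proof can be verified with exact rational arithmetic. A small sketch for the sample value s = 3∕4 (any s ∈ (1∕2, 1) behaves in the same way):

```python
from fractions import Fraction as F

s = F(3, 4)                      # sample regularity in (1/2, 1)
q_star = F(6, 1) / (2 - s)       # q* = 6/(2-s)

# scaling relation for s-admissible couples: 1/p + 3/q = 3/2 - s
inv_p_star = F(3, 2) - s - 3 / q_star
assert inv_p_star == F(1, 2) - s / 2               # 1/p* = 1/2 - s/2

# 1/q* lies strictly inside the admissible range [1/2 - s/2, 1/2 - s/3]
assert F(1, 2) - s / 2 < 1 / q_star < F(1, 2) - s / 3

# Hoelder in time from L^{6/(1+s)}_T to L^{p*}_T loses T^theta with
# theta = (1+s)/6 - 1/p*
theta = (1 + s) / 6 - inv_p_star
assert theta == (2 * s - 1) / 3

# cubing u turns T^theta into the contraction factor T^{3 theta} = T^{2s-1}
assert 3 * theta == 2 * s - 1
```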

Concerning the uniqueness statement, we also have the following corollary which results from the proof of Theorem 4.1.22.

Corollary 4.1.23

Let s > 1∕2. Let (p , q ) be the s-admissible couple defined by

$$\displaystyle \begin{aligned}p^\star=\frac{2}{1-s},\quad q^{\star}\equiv \frac{6}{2-s}\,. \end{aligned}$$

Then the solution constructed in Theorem 4.1.22 is unique in the class

$$\displaystyle \begin{aligned}L^{p^\star}([0,T];L^{q^{\star}}(\mathbb T^3))\,. \end{aligned}$$

Remark 4.1.24

As a consequence of Theorem 4.1.22, we have that for each \( (u_0,u_1)\in H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3) \) there is a solution with a maximal time of existence \(T^{\star}\), and if \(T^{\star}<\infty\) then necessarily

$$\displaystyle \begin{aligned} \lim_{t\rightarrow T^{\star}} \|(u(t),\partial_t u(t))\|{}_{H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)} =\infty. \end{aligned} $$
(4.1.62)

One can also prove a suitable local well-posedness result in the case s = 1∕2, but in this case the dependence of the existence time on the initial data is more involved. Here is a precise statement.

Theorem 4.1.25

Consider the cubic defocusing wave equation

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta)u+u^3=0\,, \end{aligned} $$
(4.1.63)

posed on \(\mathbb T^3\) . For every

$$\displaystyle \begin{aligned}(u_0,u_1)\in H^{\frac{1}{2}}(\mathbb T^3)\times H^{-\frac{1}{2}}(\mathbb T^3) \end{aligned}$$

there exists a time T > 0 and a unique solution of (4.1.63) in

$$\displaystyle \begin{aligned}L^4([0,T]\times\mathbb T^3)\cap C([0,T];H^{\frac{1}{2}}(\mathbb T^3)), \end{aligned}$$

with initial data

$$\displaystyle \begin{aligned}u(0,x)=u_0(x), \quad \partial_t u(0,x)=u_1(x)\,. \end{aligned}$$

Proof

For T > 0, using the Strichartz estimates of Theorem 4.1.21, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \|\Phi_{u_0,u_1}(u)\|{}_{L^{4}([0,T]\times\mathbb T^3)} &\displaystyle \leq &\displaystyle \|S(t)(u_0,u_1)\|{}_{L^{4}([0,T]\times\mathbb T^3)}+C\|u^3\|{}_{L^{4/3}([0,T]\times\mathbb T^3)} \\ &\displaystyle = &\displaystyle \|S(t)(u_0,u_1)\|{}_{L^{4}([0,T]\times\mathbb T^3)} + C\|u\|{}^3_{L^{4}([0,T]\times\mathbb T^3)}\,. \end{array} \end{aligned} $$

Similarly, we get

$$\displaystyle \begin{aligned} &\|\Phi_{u_0,u_1}(u) -\Phi_{u_0,u_1}(v) \|{}_{L^{4}([0,T]\times\mathbb T^3)} \\&\quad \leq C \big(\|u\|{}_{L^{4}([0,T]\times\mathbb T^3)}^2+\|v\|{}^2_{L^{4}([0,T]\times\mathbb T^3)}\big)\|u-v\|{}_{ L^{4}([0,T]\times\mathbb T^3) } \,. \end{aligned} $$

Since the couple (4, 4) is \(\frac{1}{2}\)-admissible, \(\|S(t)(u_0,u_1)\|{}_{L^{4}([0,T]\times\mathbb T^3)}\) is finite and, by dominated convergence, tends to zero as \(T\to 0\). Therefore, if T is small enough, we can construct the solution by a fixed point argument in \(L^{4}([0,T]\times \mathbb T^3)\). In addition, the Strichartz estimates of Theorem 4.1.21 yield that the obtained solution belongs to \(C([0,T];H^{\frac {1}{2}}(\mathbb T^3))\). This completes the proof of Theorem 4.1.25. □

Remark 4.1.26

Observe that for data in \(H^{\frac {1}{2}}(\mathbb T^3)\times H^{-\frac {1}{2}}(\mathbb T^3)\) we no longer have the small factor \(T^{\kappa}\), κ > 0, in the estimates for \(\Phi _{u_0,u_1}\). As a consequence, the dependence of the existence time T on the data (u 0, u 1) is much less explicit. In particular, we can no longer conclude that the existence time is the same for a fixed ball in \(H^{\frac {1}{2}}(\mathbb T^3)\times H^{-\frac {1}{2}}(\mathbb T^3)\), and therefore we do not have the blow-up criterion (4.1.62) (with s = 1∕2).

4.1.5 A Constructive Way of Seeing the Solutions

In the proof of Theorem 4.1.22, we used the contraction mapping principle in order to construct the solutions. Therefore, one can define the solutions in a constructive way via the Picard iteration scheme. More precisely, for \((u_0,u_1)\in H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)\), we define the sequence \((u^{(n)})_{n\geq 0}\) by \(u^{(0)} = 0\) and, for a given \(u^{(n)}\), n ≥ 0, we define \(u^{(n+1)}\) as the solution of the linear wave equation

$$\displaystyle \begin{aligned}(\partial_t^2-\Delta)u^{(n+1)}+(u^{(n)})^3=0,\quad u^{(n+1)}(0)=u_0,\,\, \partial_{t}u^{(n+1)}(0)=u_1. \end{aligned}$$

Thanks to (the proof of) Theorem 4.1.22, the sequence \((u^{(n)})_{n\geq 0}\) converges in \(X^s_T\), and in particular in \(C([0,T];H^s(\mathbb T^3))\), for

$$\displaystyle \begin{aligned}T\approx (\|u(0)\|{}_{H^s(\mathbb T^3)}+\|\partial_t u(0)\|{}_{H^{s-1}(\mathbb T^3)})^{-\gamma},\quad \gamma>0. \end{aligned}$$

One has that

$$\displaystyle \begin{aligned}u^{(1)}=S(t)(u_0,u_1) \end{aligned}$$

and for n ≥ 1,

$$\displaystyle \begin{aligned}u^{(n+1)}=u^{(1)}+{\mathcal T}(u^{(n)},u^{(n)},u^{(n)}), \end{aligned}$$

where the trilinear map \({\mathcal T}\) is defined as

$$\displaystyle \begin{aligned}{\mathcal T}(u,v,w)= -\int_{0}^t \frac{\sin{}((t-\tau)\sqrt{-\Delta})}{\sqrt{-\Delta}}(u(\tau) v(\tau) w(\tau))\,d\tau. \end{aligned}$$

One then may compute

$$\displaystyle \begin{aligned}u^{(2)}=u^{(1)}+{\mathcal T}(u^{(1)},u^{(1)},u^{(1)}). \end{aligned}$$

The expression for u (3) is then

$$\displaystyle \begin{aligned} u^{(3)}&=u^{(1)}+{\mathcal T}(u^{(1)},u^{(1)},u^{(1)}) +3{\mathcal T}(u^{(1)},u^{(1)},{\mathcal T}(u^{(1)},u^{(1)},u^{(1)})) \\ &\quad +3{\mathcal T}(u^{(1)},{\mathcal T}(u^{(1)},u^{(1)},u^{(1)}),{\mathcal T}(u^{(1)},u^{(1)},u^{(1)})) \\ &\quad + {\mathcal T}({\mathcal T}(u^{(1)},u^{(1)},u^{(1)}),{\mathcal T}(u^{(1)},u^{(1)},u^{(1)}),{\mathcal T}(u^{(1)},u^{(1)},u^{(1)})). \end{aligned} $$

We now observe that for n ≥ 2, the nth Picard iteration \(u^{(n)}\) is a sum, over j from 1 to \(3^{n-1}\), of j-linear expressions of \(u^{(1)}\). Moreover the first \(3^{n-2}\) terms of this sum contain the (n − 1)th iteration. Therefore the solution can be seen as an infinite sum of multi-linear expressions of \(u^{(1)}\). The Strichartz inequalities we proved can be used to show that for s ≥ 1∕2,

$$\displaystyle \begin{aligned}\|{\mathcal T}(u,v,w)\|{}_{H^s(\mathbb T^3)}\lesssim \|u\|{}_{H^s(\mathbb T^3)}\|v\|{}_{H^s(\mathbb T^3)}\|w\|{}_{H^s(\mathbb T^3)}\,. \end{aligned}$$

The last estimate can be used to analyse the multi-linear expressions in the expansion and to show its convergence. Observe that we do not exploit any regularising effect in the terms of the expansion. The ill-posedness result of the next section will basically show that such an effect is in fact not possible. In our probabilistic approach in the next section, we will exploit that the trilinear term in the expression defining the solution is more regular in the scale of the Sobolev spaces than the linear one, almost surely with respect to a probability measure on \(H^s\), s < 1∕2.
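The combinatorial structure of the expansion can be made concrete with a short sketch: representing each Picard iterate by the multiset of degrees of its multi-linear terms in \(u^{(1)}\) (and ignoring the analytic content of \({\mathcal T}\)), one recovers the displayed expansion of \(u^{(3)}\) and the top degree \(3^{n-1}\):

```python
def next_iterate(degrees):
    # u^(n+1) = u^(1) + T(u^(n), u^(n), u^(n)); applying the trilinear map T
    # to terms of degrees a, b, c produces a term of degree a + b + c
    new = {1: 1}
    for a, ca in degrees.items():
        for b, cb in degrees.items():
            for c, cc in degrees.items():
                new[a + b + c] = new.get(a + b + c, 0) + ca * cb * cc
    return new

u1 = {1: 1}                  # u^(1) = S(t)(u_0, u_1): a single linear term
u2 = next_iterate(u1)
u3 = next_iterate(u2)
assert u2 == {1: 1, 3: 1}
# u^(3): one linear term, one cubic, three quintic, three septic, one nonic,
# matching the displayed expansion with A = T(u^(1), u^(1), u^(1))
assert u3 == {1: 1, 3: 1, 5: 3, 7: 3, 9: 1}
# the top degree of u^(n) is 3^(n-1)
assert max(u3) == 9 and max(next_iterate(u3)) == 27
```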

4.1.6 Global Well-Posedness in H s × H s−1, for Some s < 1

One may naturally ask whether the solutions obtained in Theorem 4.1.22 can be extended globally in time. Observe that one cannot use the argument of Theorem 4.1.2 because there is no a priori bound available at the \(H^s\), s ≠ 1, regularity level. One however has the following partial answer.

Theorem 4.1.27 (Low Regularity Global Well-Posedness)

Let s > 13∕18. Then the local solution obtained in Theorem 4.1.22 can be extended globally in time.

For the proof of Theorem 4.1.27, we refer to [18, 27, 40]. Here, we only present the main idea (introduced in [14]). Let \((u_0,u_1)\in H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)\) for some s ∈ (1∕2, 1). Let T ≫ 1. For N ≥ 1, we define a smooth Fourier multiplier \(I_N\) acting as 1 on frequencies \(n\in \mathbb Z^3\) with |n|≤ N and as \(N^{1-s}|n|^{s-1}\) on frequencies |n|≥ 2N. A concrete choice of \(I_N\) is \(I_N(D) = I((-\Delta)^{1/2}/N)\), where I(x) is a smooth function which equals 1 for x ≤ 1 and which equals \(x^{s-1}\) for x ≥ 2. In other words, I(x) is one for x close to zero and decays like \(x^{s-1}\) for x ≫ 1. We choose N = N(T) such that, for the times of the local existence, the modified energy (which is well-defined in \(H^s \times H^{s-1}\))

$$\displaystyle \begin{aligned}\int_{\mathbb T^3}\Big( (\partial_t I_N u)^2+(\nabla I_N u)^2+\frac{1}{2}(I_N u)^4 \Big) \end{aligned}$$

does not vary much. This allows us to extend the local solutions up to time T ≫ 1. The analysis contains two steps: a local existence argument for \(I_N u\) under the assumption that the modified energy remains below a fixed size, and an energy increase estimate which is the substitute for the energy conservation used in the proof of Theorem 4.1.2. More precisely, we choose N as \(N = T^{\gamma}\) for some γ = γ(s) ≥ 1. With this choice of N the initial size of the modified energy is \(T^{\gamma(1-s)}\). The local well-posedness argument ensures that \(I_N u\) (and thus u as well) exists on a time interval of size \(T^{-\beta}\) for some β > 0, as long as the modified energy remains \(\lesssim T^{\gamma (1-s)}\). The main part of the analysis is to get an energy increase estimate showing that on the local existence time the modified energy does not increase by more than \(T^{-\alpha}\) for some α > 0. In order to arrive at time T we need to iterate ≈ \(T^{1+\beta}\) times the local existence argument. In order to ensure that at each step of the iteration the modified energy remains \(\lesssim T^{\gamma (1-s)}\), we need to impose the condition

$$\displaystyle \begin{aligned} T^{1+\beta}\, T^{-\alpha}\lesssim T^{\gamma(1-s)}, \quad T\gg 1. \end{aligned} $$
(4.1.64)

As long as (4.1.64) is satisfied, we can extend the local solutions globally in time. The condition (4.1.64) imposes the lower bound on s involved in the statement of Theorem 4.1.27. One may conjecture that the global well-posedness in Theorem 4.1.27 holds for any s > 1∕2.
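At the heart of the definition of the modified energy is the multiplier bound \(I(|n|/N)\langle n\rangle^{1-s}\lesssim N^{1-s}\), uniformly in n, which gives \(\|I_N u\|{}_{H^1}\lesssim N^{1-s}\|u\|{}_{H^s}\). A quick numerical sanity check, with the simplified non-smooth profile I(x) = min(1, x^{s−1}) standing in for the smooth cut-off (s = 0.8 is an illustrative sample value):

```python
s = 0.8        # sample regularity in (1/2, 1)

def I(x):      # simplified profile: 1 for x <= 1, x^(s-1) beyond (not smooth)
    return 1.0 if x <= 1.0 else x ** (s - 1.0)

for N in (8, 64, 512):
    # sup over frequencies of I(|n|/N) <n>^(1-s) / N^(1-s) stays bounded,
    # which is the multiplier form of ||I_N u||_{H^1} <~ N^(1-s) ||u||_{H^s}
    ratios = [I(n / N) * (1 + n * n) ** ((1 - s) / 2) / N ** (1 - s)
              for n in range(100 * N)]
    assert max(ratios) < 2.0
```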

4.1.7 Local Ill-Posedness in H s × H s−1, s ∈ (0, 1∕2)

It turns out that the restriction s > 1∕2 in Theorem 4.1.22 is optimal. Recall that the classical notion of well-posedness in the Hadamard sense requires the existence, the uniqueness and the continuous dependence with respect to the initial data. A very classical example of the failure of continuous dependence with respect to the initial data for a PDE is the initial value problem for the Laplace equation with data in Sobolev spaces. Indeed, consider

$$\displaystyle \begin{aligned} (\partial_t^2+\partial_x^2)v=0,\quad v\,:\, \mathbb R_t\times \mathbb T_{x}\longrightarrow \mathbb R. \end{aligned} $$
(4.1.65)

Equation (4.1.65) has, for every n, the explicit solution

$$\displaystyle \begin{aligned}v_{n}(t,x)=e^{-\sqrt{n}}\sinh{}(nt)\cos{}(nx). \end{aligned}$$

Then for every \((s_1,s_2)\in \mathbb R^2\), v n satisfies

$$\displaystyle \begin{aligned}\|(v_n(0),\partial_t v_n(0))\|{}_{H^{s_1}(\mathbb T)\times H^{s_2}(\mathbb T)}\lesssim e^{-\sqrt{n}}n^{\max(s_1,s_2+1)}\longrightarrow 0, \end{aligned}$$

as n tends to +∞, but for t ≠ 0,

$$\displaystyle \begin{aligned}\|(v_n(t),\partial_t v_n(t))\|{}_{ H^{s_1}(\mathbb T)\times H^{s_2}(\mathbb T) }\gtrsim e^{n|t|}\,e^{-\sqrt{n}} n^{\min(s_1,s_2+1)} \longrightarrow +\infty, \end{aligned}$$

as n tends to +∞. Consequently (4.1.65) is not well-posed in \( H^{s_1}(\mathbb T)\times H^{s_2}(\mathbb T) \) for every \((s_1,s_2)\in \mathbb R^2\) because of the lack of continuous dependence with respect to the initial data (0, 0).
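The two opposite limits can be observed numerically. A minimal sketch (with the illustrative choices \(s_1=s_2=0\) and t = 0.1, using the dominant factors displayed above as norm proxies):

```python
import math

def data_size(n, s1=0.0, s2=0.0):
    # size of (v_n(0), d_t v_n(0)): e^{-sqrt(n)} n^{max(s1, s2+1)}
    return math.exp(-math.sqrt(n)) * n ** max(s1, s2 + 1.0)

def solution_size(n, t, s1=0.0, s2=0.0):
    # lower bound at time t: e^{n|t|} e^{-sqrt(n)} n^{min(s1, s2+1)}
    return math.exp(n * abs(t)) * math.exp(-math.sqrt(n)) * n ** min(s1, s2 + 1.0)

sizes_0 = [data_size(n) for n in (100, 400, 1600)]
sizes_t = [solution_size(n, 0.1) for n in (100, 400, 1600)]
assert sizes_0[0] > sizes_0[1] > sizes_0[2]   # data tends to 0
assert sizes_t[0] < sizes_t[1] < sizes_t[2]   # solution norm blows up
```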

It turns out that a similar phenomenon happens in the context of the cubic defocusing wave equation with low regularity initial data. As we shall see below the mechanism giving the lack of continuous dependence is however quite different compared to (4.1.65). Here is the precise statement.

Theorem 4.1.28

Let us fix s ∈ (0, 1∕2) and \((u_0,u_1)\in C^{\infty }(\mathbb T^3)\times C^\infty (\mathbb T^3)\). Then there exist δ > 0, a sequence \((t_n)_{n=1}^{\infty }\)of positive numbers tending to zero and a sequence \((u_n(t,x))_{n=1}^{\infty }\)of \(C(\mathbb R;C^{\infty }(\mathbb T^3))\)functions such that

$$\displaystyle \begin{aligned}(\partial_{t}^{2}-\Delta)u_n+u_n^3=0 \end{aligned}$$

with

$$\displaystyle \begin{aligned}\|(u_{n}(0)-u_0,\partial_t u_n(0)-u_1)\|{}_{H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)} \leq C [\log(n)]^{-\delta}\rightarrow_{n\rightarrow + \infty} 0 \end{aligned}$$

but

$$\displaystyle \begin{aligned}\|(u_{n}(t_n),\partial_t u_n(t_n))\|{}_{H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)}\geq C[ \log(n)]^{\delta}\rightarrow_{n\rightarrow + \infty} +\infty. \end{aligned}$$

In particular, for every T > 0,

$$\displaystyle \begin{aligned}\lim_{n\rightarrow + \infty} \|(u_n(t),\partial_t u_n(t))\|{}_{L^\infty([0,T];H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3))}=+\infty. \end{aligned}$$

Proof of Theorem 4.1.28

We follow [6, 12, 48]. Consider

$$\displaystyle \begin{aligned} (\partial_{t}^{2}-\Delta)u+u^3=0 \end{aligned} $$
(4.1.66)

subject to initial conditions

$$\displaystyle \begin{aligned} (u_0(x)+\kappa_{n}n^{\frac{3}{2}-s}\varphi(nx), u_1(x)),\qquad n\gg 1\,, \end{aligned} $$
(4.1.67)

where φ is a nontrivial bump function on \(\mathbb R^3\) and

$$\displaystyle \begin{aligned}\kappa_{n}\equiv [\log(n)]^{-\delta_1}, \end{aligned}$$

with \(\delta_1 > 0\) to be fixed later. Observe that for n ≫ 1, we can see φ(nx) as a \(C^{\infty}\) function on \(\mathbb T^3\).

Thanks to Theorem 4.1.2, we obtain that (4.1.66) with data given by (4.1.67) has a unique global smooth solution which we denote by \(u_n\). Moreover \(u_n\in C(\mathbb R;C^{\infty }(\mathbb T^3))\) thanks to the propagation of the higher Sobolev regularity and the Sobolev embeddings.

Next, we consider the ODE

$$\displaystyle \begin{aligned} V''+V^3=0,\quad V(0)=1,\,\, V'(0)=0. \end{aligned} $$
(4.1.68)

Lemma 4.1.29

The Cauchy problem (4.1.68) has a global smooth (non constant) solution V (t) which is periodic.

Proof

One defines the solution of (4.1.68) locally in time by an application of the Cauchy-Lipschitz theorem. In order to extend the solutions globally in time, we multiply (4.1.68) by \(V'\). This gives that the solutions of (4.1.68) satisfy

$$\displaystyle \begin{aligned}\frac{d}{dt}\big( (V'(t))^2+\frac{1}{2}(V(t))^4 \big)=0 \end{aligned}$$

and therefore taking into account the initial conditions, we get

$$\displaystyle \begin{aligned} (V'(t))^2+\frac{1}{2}(V(t))^4=\frac{1}{2}\,. \end{aligned} $$
(4.1.69)

The relation (4.1.69) implies that \((V(t), V'(t))\) cannot go to infinity in finite time. Therefore the local solution of (4.1.68) is defined globally in time. Let us finally show that V (t) is periodic in time. We first observe that thanks to (4.1.69), |V (t)|≤ 1 for all times t. Therefore t = 0 is a local maximum of V (t). We claim that there is \(t_0 > 0\) such that \(V'(t_0) = 0\). Indeed, otherwise V (t) is decreasing on [0, +∞), which implies that \(V'(t) \leq 0\) and from (4.1.69), we deduce

$$\displaystyle \begin{aligned}V'(t)=-\sqrt{\frac{1-(V(t))^4}{2}}\,. \end{aligned}$$

Integrating the last relation between zero and a positive t 0 gives

$$\displaystyle \begin{aligned}t_0=\sqrt{2}\int_{V(t_0)}^{1}\frac{dv}{\sqrt{1-v^4}} \, . \end{aligned}$$

Therefore

$$\displaystyle \begin{aligned}t_0\leq \sqrt{2}\int_{-1}^{1}\frac{dv}{\sqrt{1-v^4}} \end{aligned}$$

and we get a contradiction for t 0 ≫ 1. Hence, we indeed have that there is \(t_0 > 0\) such that \(V'(t_0) = 0\). Coming back to (4.1.69) and using that \(V(t_0) < 1\), we deduce that \(V(t_0) = -1\). Therefore t = t 0 is a local minimum of V (t). We now can show exactly as before that there exists \(t_1 > t_0\) such that \(V'(t_1) = 0\) and \(V(t_1) > -1\). Once again using (4.1.69), we infer that \(V(t_1) = 1\), i.e. \(V(0) = V(t_1)\) and \(V'(0) = V'(t_1)\). By the uniqueness part of the Cauchy-Lipschitz theorem, we obtain that V  is periodic with period t 1. This completes the proof of Lemma 4.1.29. □
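The qualitative behaviour established in the proof — conservation of (4.1.69), |V|≤ 1, a first minimum at V = −1 and return to the initial state — can be checked numerically. A minimal RK4 sketch (the step size and time horizon are illustrative choices):

```python
def rk4_step(V, W, h):
    # system form of (4.1.68): V' = W, W' = -V^3
    def f(v, w):
        return w, -v ** 3
    k1 = f(V, W)
    k2 = f(V + h / 2 * k1[0], W + h / 2 * k1[1])
    k3 = f(V + h / 2 * k2[0], W + h / 2 * k2[1])
    k4 = f(V + h * k3[0], W + h * k3[1])
    V += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    W += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return V, W

h, V, W = 1e-3, 1.0, 0.0
traj = [(0.0, V, W)]
for i in range(8000):                 # integrate on [0, 8]
    V, W = rk4_step(V, W, h)
    traj.append(((i + 1) * h, V, W))

energies = [w * w + 0.5 * v ** 4 for _, v, w in traj]
assert all(abs(e - 0.5) < 1e-8 for e in energies)     # (V')^2 + V^4/2 = 1/2
assert all(abs(v) <= 1.0 + 1e-8 for _, v, _ in traj)  # |V| <= 1
assert min(v for _, v, _ in traj) < -0.999            # V reaches -1 ...
assert max(v for t, v, _ in traj if t > 6) > 0.999    # ... and returns to 1
```

The numerically observed period is close to 7.4, consistent with the integral bound obtained above.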

We next denote by v n the solution of

$$\displaystyle \begin{aligned} \partial_{t}^2v_{n}+v_{n}^{3}=0,\quad (v_{n}(0),\partial_{t}v_{n}(0))= (\kappa_{n}n^{\frac{3}{2}-s}\varphi(nx), 0). \end{aligned} $$
(4.1.70)

It is now clear that

$$\displaystyle \begin{aligned}v_{n}(t,x)=\kappa_{n}n^{\frac{3}{2}-s}\varphi(nx)V\big(t\kappa_{n}n^{\frac{3}{2}-s}\varphi(nx)\big). \end{aligned}$$

In the next lemma, we collect the needed bounds on v n.

Lemma 4.1.30

Let

$$\displaystyle \begin{aligned}t_{n}\equiv [\log(n)]^{\delta_2}n^{-(\frac{3}{2}-s)} \end{aligned}$$

for some δ 2 > δ 1. Then, we have the following bounds for t ∈ [0, t n],

$$\displaystyle \begin{aligned} \|\Delta(v_n)(t,\cdot)\|{}_{H^1(\mathbb T^3)}\leq C[\log(n)]^{3\delta_2}n^{3-s}, \end{aligned} $$
(4.1.71)
$$\displaystyle \begin{aligned} \| \Delta(v_n)(t,\cdot)\|{}_{L^2(\mathbb T^3)}\leq C[\log(n)]^{2\delta_2}n^{2-s}\,, \end{aligned} $$
(4.1.72)
$$\displaystyle \begin{aligned} \|\nabla^{k}v_{n}(t,\cdot)\|{}_{L^{\infty}(\mathbb T^3)}\leq C[\log(n)]^{k\delta_2}n^{\frac{3}{2}-s+k}\,,\quad k=0,1,\dots \end{aligned} $$
(4.1.73)

Finally, there exists n 0 ≫ 1 such that for \(n \geq n_0\),

$$\displaystyle \begin{aligned} \|v_{n}(t_n,\cdot)\|{}_{H^{s}(\mathbb T^3)}\geq C\kappa_{n}(t_n\kappa_{n}n^{\frac{3}{2}-s})^{s} =C[\log(n)]^{-(s+1)\delta_1+s\delta_2}\,. \end{aligned} $$
(4.1.74)

Proof

Estimates (4.1.71) and (4.1.72) follow from the general bound

$$\displaystyle \begin{aligned} \|v_n(t,\cdot)\|{}_{H^\sigma(\mathbb T^3)}\leq C\kappa_n (t_n\kappa_{n}n^{\frac{3}{2}-s})^{\sigma}n^{\sigma-s}\,, \end{aligned} $$
(4.1.75)

where t ∈ [0, t n] and σ ≥ 0. For integer values of σ, the bound (4.1.75) is a direct consequence of the definition of v n. For fractional values of σ one needs to invoke an elementary interpolation inequality in the Sobolev spaces. Estimate (4.1.73) follows directly from the definition of v n. The proof of (4.1.74) is slightly more delicate. We first observe that for n ≫ 1, we have the lower bound

$$\displaystyle \begin{aligned} \|v_n(t_n,\cdot)\|{}_{H^1(\mathbb T^3)}\geq c\kappa_n (t_n\kappa_{n}n^{\frac{3}{2}-s})n^{1-s}\,. \end{aligned} $$
(4.1.76)

Now, we can obtain (4.1.74) by invoking (4.1.75) (with σ = 2), the lower bound (4.1.76) and the interpolation inequality

$$\displaystyle \begin{aligned}\|v_{n}(t_n,\cdot)\|{}_{H^1(\mathbb T^3)}\leq \|v_{n}(t_n,\cdot)\|{}^{\theta}_{H^s(\mathbb T^3)} \|v_{n}(t_n,\cdot)\|{}^{1-\theta}_{H^2(\mathbb T^3)} \end{aligned}$$

for some θ > 0. It remains therefore to show (4.1.76). After differentiating once the expression defining v n, we see that (4.1.76) follows from the following statement.

Lemma 4.1.31

Consider a smooth not identically zero periodic function V  and a non trivial bump function \(\phi \in C^\infty _0( \mathbb {R}^d)\). Then there exist c > 0 and λ 0 ≥ 1 such that for every λ > λ 0

$$\displaystyle \begin{aligned}\| \phi (x) V(\lambda \phi(x))\|{}_{L^2(\mathbb{R}^d)} \geq c \,. \end{aligned}$$

Proof

We can suppose that the period of V  is 2πL for some L > 0. Consider the Fourier expansion of V ,

$$\displaystyle \begin{aligned}V(t) = \sum_{n\in \mathbb{Z}}v_n e^{i\frac{n}{L} t},\quad |v_n| \leq C_N (1+|n|)^{-N}\,. \end{aligned}$$

We can assume that there is an open ball B of \(\mathbb R^d\) such that for some c 0 > 0, \(|\partial _{x_1}\phi (x)|\geq c_0\) on B. Let 0 ≤ ψ ≤ 1 be a non trivial \(C^\infty _0(B)\) function. We can write

$$\displaystyle \begin{aligned}\| \phi (x) V(\lambda \phi(x))\|{}_{L^2(\mathbb{R}^d)}^2 \geq \| \psi(x)\phi (x) V(\lambda \phi(x))\|{}_{L^2(B)}^2 = I_1+I_2, \end{aligned}$$

where

$$\displaystyle \begin{aligned}I_{1}= \sum_{n\in \mathbb{Z}} |v_n|{}^2 \int_{B}(\psi(x)\phi(x))^2 dx, \end{aligned}$$

and

$$\displaystyle \begin{aligned}I_{2}= \sum_{n_1\neq n_2}v_{n_1}\overline{v_{n_2}}\int_{B} e^{i\lambda\frac{n_1-n_2}{L}\phi(x)}\, (\psi(x)\phi(x))^2 \,dx. \end{aligned}$$

Clearly I 1 > 0 is independent of λ. On the other hand

$$\displaystyle \begin{aligned}e^{i\lambda\frac{n_1-n_2}{L}\phi(x)} = \frac{L}{i\lambda (n_1-n_2)\partial_{x_1}\phi(x)} \partial_{x_1}\big( e^{i\lambda\frac{n_1-n_2}{L}\phi(x)} \big). \end{aligned}$$

Therefore, after an integration by parts, we obtain that \(|I_{2}|\lesssim \lambda ^{-1}\). Hence, for λ large enough, \(I_1+I_2\geq I_1/2>0\), which gives the desired lower bound. This completes the proof of Lemma 4.1.31. □

This completes the proof of Lemma 4.1.30. □
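The decay \(|I_2|\lesssim \lambda^{-1}\) obtained by integration by parts can be observed numerically. A one-dimensional sketch with concrete (hypothetical) choices of the bumps φ and ψ, where \(|\phi'|\) is bounded below on the support of ψ:

```python
import math

def phi(x):   # bump function on (-1, 1)
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def psi(x):   # bump supported in (0.2, 0.6), where |phi'| is bounded below
    u = (x - 0.4) / 0.2
    return math.exp(-1.0 / (1.0 - u * u)) if abs(u) < 1 else 0.0

def osc_integral(lam, n=40000):
    # trapezoid rule for the integral of e^{i lam phi(x)} (psi(x) phi(x))^2
    a, b = 0.2, 0.6
    h = (b - a) / n
    re = im = 0.0
    for k in range(n + 1):
        x = a + k * h
        g = (psi(x) * phi(x)) ** 2
        w = 0.5 if k in (0, n) else 1.0
        re += w * g * math.cos(lam * phi(x))
        im += w * g * math.sin(lam * phi(x))
    return complex(re * h, im * h)

base = abs(osc_integral(0.0))    # the non-oscillatory mass (I_1 analogue)
assert base > 1e-4
for lam in (200.0, 400.0, 800.0, 1600.0):
    assert lam * abs(osc_integral(lam)) < 20.0   # decay like 1/lambda
```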

We next consider the semi-classical energy

$$\displaystyle \begin{aligned} E_{n}(u)&\equiv n^{-(1-s)}\big(\|\partial_{t}u\|{}_{L^{2}(\mathbb T^3)}^{2}+ \|\nabla u\|{}_{L^{2}(\mathbb T^3)}^{2}\big)^{\frac{1}{2}} \\&\quad + n^{-(2-s)}\big(\|\partial_{t}u\|{}_{H^{1}(\mathbb T^3)}^{2}+\|\nabla u\|{}_{H^{1}(\mathbb T^3)}^{2}\big)^{\frac{1}{2}}\,. \end{aligned} $$

We are going to show that for very small times u n and v n + S(t)(u 0, u 1) are close with respect to E n but these small times are long enough to get the needed amplification of the H s norm. We emphasise that this amplification is a phenomenon only related to the solution of (4.1.70). Here is the precise statement.

Lemma 4.1.32

There exist ε > 0, δ 2 > 0 and C > 0 such that for δ 1 < δ 2, if we set

$$\displaystyle \begin{aligned}t_{n}\equiv [\log(n)]^{\delta_2}n^{-(\frac{3}{2}-s)} \end{aligned}$$

then for every n ≫ 1, every t ∈ [0, t n],

$$\displaystyle \begin{aligned}E_{n}\big( u_{n}(t)-v_{n}(t)-S(t)(u_0,u_1) \big) \leq Cn^{-\varepsilon}\,. \end{aligned}$$

Moreover,

$$\displaystyle \begin{aligned} \|u_{n}(t)-v_n(t)-S(t)(u_0,u_1)\|{}_{H^{s}(\mathbb T^3)}\leq Cn^{-\varepsilon}\,. \end{aligned} $$
(4.1.77)

Proof

Set u L = S(t)(u 0, u 1) and w n = u n − u L − v n. Then w n solves the equation

$$\displaystyle \begin{aligned} (\partial_{t}^{2}-\Delta)w_n=\Delta v_n-3v_{n}^{2}(u_L+w_{n})-3v_{n}(u_L+w_{n})^{2}-(u_L+w_{n})^{3}, \end{aligned} $$
(4.1.78)

with initial data

$$\displaystyle \begin{aligned} (w_{n}(0,\cdot),\partial_{t}w_{n}(0,\cdot))=(0,0)\,. \end{aligned}$$

Set

$$\displaystyle \begin{aligned}F\equiv \Delta v_n-3v_{n}^{2}(u_L+w_{n})-3v_{n}(u_L+w_{n})^{2}-(u_L+w_{n})^{3}\,. \end{aligned}$$

Multiplying Eq. (4.1.78) by \(\partial_t w_n\) and integrating over \(\mathbb T^3\) gives

$$\displaystyle \begin{aligned} \Big| \frac{d}{dt} \big(\|\partial_{t}w_n(t)\|{}_{L^{2}(\mathbb T^3)}^{2}+ \|\nabla w_n(t)\|{}_{L^{2}(\mathbb T^3)}^{2}\big) \Big| \lesssim \|\partial_t w_n(t)\|{}_{L^2(\mathbb T^3)} \|F(t)\|{}_{L^2(\mathbb T^3)} \end{aligned}$$

which in turn implies

$$\displaystyle \begin{aligned} \Big| \frac{d}{dt} \big(\|\partial_{t}w_n(t)\|{}_{L^{2}(\mathbb T^3)}^{2}+ \|\nabla w_n(t)\|{}_{L^{2}(\mathbb T^3)}^{2}\big)^{\frac{1}{2}} \Big| \lesssim \|F(t)\|{}_{L^2(\mathbb T^3)}\,. \end{aligned} $$
(4.1.79)

Similarly, by first differentiating (4.1.78) with respect to the spatial variables, we get the bound

$$\displaystyle \begin{aligned} \Big|\frac{d}{dt}\big(\|\partial_{t}w_n(t)\|{}_{H^{1}(\mathbb T^3)}^{2}+\|\nabla w_n(t)\|{}_{H^{1}(\mathbb T^3)}^{2}\big)^{\frac{1}{2}}\Big| \lesssim \|F(t)\|{}_{H^1(\mathbb T^3)}\,. \end{aligned} $$
(4.1.80)

Now, using (4.1.79) and (4.1.80), we obtain the estimate

$$\displaystyle \begin{aligned} \Big|\frac{d}{dt}\Big(E_{n}(w_n(t))\Big)\Big| \leq C n^{-(2-s)}\|F(t)\|{}_{H^1(\mathbb T^3)}+Cn^{-(1-s)}\|F(t)\|{}_{L^2(\mathbb T^3)}\,. \end{aligned}$$

Therefore using (4.1.71), (4.1.72), we get

$$\displaystyle \begin{aligned} \Big|\frac{d}{dt}\Big(E_{n}(w_n(t))\Big)\Big| &\leq C\Big([\log(n)]^{3\delta_2}n \\ &\quad +n^{-(2-s)}\|G(t,\cdot)\|{}_{H^1(\mathbb T^3)}+n^{-(1-s)}\|G(t,\cdot)\|{}_{L^2(\mathbb T^3)}\Big)\,, \end{aligned} $$
(4.1.81)

where G ≡ G 1 + G 2 with

$$\displaystyle \begin{aligned}G_1= -3v_n^2 u_L- 3v_n u_L^2-u_L^3 \end{aligned}$$

and

$$\displaystyle \begin{aligned}G_2 = -3(u_L+v_n)^2 w_n - 3(u_L+v_n) w_n^2-w_n^3. \end{aligned}$$

Since \(u_L\in C^{\infty }(\mathbb {R}\times \mathbb T^3)\) is independent of n, using (4.1.73) and (4.1.75) we can estimate G 1 as follows

$$\displaystyle \begin{aligned}n^{-(l-s)}\|G_1(t,\cdot)\|{}_{H^{l-1}(\mathbb T^3)}\lesssim [\log n]^{\delta_2}n^{\frac{1}{2}-s} \lesssim [\log(n)]^{3\delta_2}n,\quad l=1,2. \end{aligned}$$

Writing for t ∈ [0, t n],

$$\displaystyle \begin{aligned}w_n(t,x) = \int_0^t \partial_t w_n(\tau, x) d\tau, \end{aligned}$$

we obtain

$$\displaystyle \begin{aligned} \|w_{n}(t,\cdot)\|{}_{H^k(\mathbb T^3)}\leq C[\log(n)]^{\delta_2}n^{-(\frac{3}{2}-s)} \sup_{0\leq \tau\leq t}\|\partial_{t}w_{n}(\tau,\cdot)\|{}_{H^{k}(\mathbb T^3)}\,. \end{aligned} $$
(4.1.82)

Set

$$\displaystyle \begin{aligned}e_{n}(w_n(t))\equiv \sup_{0\leq \tau\leq t}E_{n}(w_n(\tau))\,. \end{aligned}$$

Observe that e n(w n(t)) is increasing. Using (4.1.82) (with k = 0, 1), (4.1.73) and the Leibniz rule, we get that for t ∈ [0, t n] and for l = 1, 2,

$$\displaystyle \begin{aligned} n^{-(l-s)}\|(u_{L}(t)+ v_{n}(t))^{2}w_{n}(t)\|{}_{H^{l-1}(\mathbb T^3)}\leq C[\log(n)]^{l\delta_2}n^{\frac{3}{2}-s} e_{n}(w_n(t))\,. \end{aligned}$$

Thanks to the Gagliardo-Nirenberg inequality, and (4.1.82) with k = 0, we get for t ∈ [0, t n],

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \|w_{n}(t,\cdot)\|{}_{L^{\infty}(\mathbb T^3)} &\displaystyle \leq &\displaystyle C\|w_{n}(t,\cdot)\|{}_{H^2(\mathbb T^3)}^{\frac{3}{4}}\|w_n(t,\cdot)\|{}_{L^2(\mathbb T^3)}^{\frac{1}{4}} \\ &\displaystyle \leq &\displaystyle Cn^{\frac{3}{2}-s}e_{n}(w_{n}(t))\,. \end{array} \end{aligned} $$
(4.1.83)

Hence, we can use (4.1.83) to treat the quadratic and cubic terms in w n and to get the bound

$$\displaystyle \begin{aligned} n^{-(l-s)}\| G_2(t,\cdot)\|{}_{H^{l-1}(\mathbb T^3)}\leq C[\log(n)]^{l\delta_2}n^{\frac{3}{2}-s} \big(e_{n}(w_n(t))+[e_{n}(w_n(t))]^3\big)\,. \end{aligned}$$

Therefore, coming back to (4.1.81), we get for t ∈ [0, t n],

$$\displaystyle \begin{aligned} \Big|\frac{d}{dt}\Big(E_{n}(w_n(t))\Big)\Big| & \leq C[\log(n)]^{3\delta_2}n \\ &\quad +C[\log(n)]^{2\delta_2}n^{\frac{3}{2}-s}\big(e_{n}(w_n(t))+[e_{n}(w_n(t))]^3\big)\,. \end{aligned} $$

We now observe that

$$\displaystyle \begin{aligned}\frac{d}{dt} \big(e_{n}(w_n(t))\big) \leq \Big|\frac{d}{dt}\Big(E_{n}(w_n(t))\Big)\Big|, \end{aligned}$$

which follows directly from the definition of \(e_n\). Therefore, we have the bound

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \frac{d}{dt}\Big( e_{n}(w_n(t)) \Big) &\displaystyle \leq&\displaystyle C[\log(n)]^{3\delta_2}n \\ &\displaystyle &\displaystyle +C[\log(n)]^{2\delta_2}n^{\frac{3}{2}-s}\big(e_{n}(w_n(t))+[e_{n}(w_n(t))]^3\big)\,. \end{array} \end{aligned} $$
(4.1.84)

We first suppose that \(e_n(w_n(t)) \leq 1\). This property holds for small values of t since \(E_n(w_n(0)) = e_n(w_n(0)) = 0\). In addition, the estimate for \(e_n(w_n(t))\) we are looking for is much stronger than \(e_n(w_n(t)) \leq 1\). Therefore, once we prove the desired estimate under the assumption \(e_n(w_n(t)) \leq 1\), a bootstrap argument yields the estimate without this assumption.

Estimate (4.1.84) yields that for t ∈ [0, t n],

$$\displaystyle \begin{aligned}\frac{d}{dt} (e_{n}(w_{n}(t))) \leq C[\log(n)]^{3\delta_2}n +C[\log(n)]^{2\delta_2}n^{\frac{3}{2}-s} e_{n}(w_{n}(t)) \end{aligned}$$

and consequently

$$\displaystyle \begin{aligned}\frac{d}{dt} \Big( e^{ -Ct [\log(n)]^{2\delta_2}n^{\frac{3}{2}-s} } e_{n}(w_{n}(t)) \Big) \leq C[\log(n)]^{3\delta_2}\, n\, e^{ -Ct [\log(n)]^{2\delta_2}n^{\frac{3}{2}-s} }\,. \end{aligned}$$

An integration of the last estimate gives that for t ∈ [0, t n],

$$\displaystyle \begin{aligned} \begin{array}{rcl} e_{n}(w_{n}(t)) &\displaystyle \leq &\displaystyle C \big( [\log(n)]^{\delta_2}n^{s-\frac{1}{2}} \big) e^{Ct[\log(n)]^{2\delta_2}n^{\frac{3}{2}-s}} \\ &\displaystyle \leq &\displaystyle C \big( [\log(n)]^{\delta_2}n^{s-\frac{1}{2}} \big) e^{C[\log(n)]^{3\delta_2}}\,. \end{array} \end{aligned} $$

(one should see \(\delta_2\) as \(3\delta_2-2\delta_2\) and \(s-\frac{1}{2}\) as \(1-(\frac{3}{2}-s)\)). Since s < 1∕2, by taking δ 2 > 0 small enough, we obtain that there exists ε > 0 such that for t ∈ [0, t n],

$$\displaystyle \begin{aligned}E_{n}(w_n(t))\leq Cn^{-\varepsilon} \end{aligned}$$

and in particular one has for t ∈ [0, t n],

$$\displaystyle \begin{aligned} \|\partial_{t}w_{n}(t,\cdot)\|{}_{L^2(\mathbb T^3)}+\|\nabla w_{n}(t,\cdot)\|{}_{L^2(\mathbb T^3)}\leq Cn^{1-s-\varepsilon}\,. \end{aligned} $$
(4.1.85)

We next estimate \(\|w_{n}(t,\cdot )\|{ }_{L^2}\). We may write for t ∈ [0, t n],

$$\displaystyle \begin{aligned}\|w_{n}(t,\cdot)\|{}_{L^2(\mathbb T^3)}= \|\int_0^t \partial_tw_{n}(\tau,\cdot) d\tau \|{}_{L^2(\mathbb T^3)}\leq ct_{n}\sup_{0\leq\tau\leq t}\|\partial_{t}w_{n}(\tau,\cdot)\|{}_{L^2(\mathbb T^3)}\,. \end{aligned}$$

Thanks to (4.1.85) and the definition of t n, we get

$$\displaystyle \begin{aligned}\|w_{n}(t,\cdot)\|{}_{L^2(\mathbb T^3)}\leq C[\log(n)]^{\delta_2}n^{-(\frac{3}{2}-s)}n^{1-s}n^{-\varepsilon}\,. \end{aligned}$$

Therefore, since s < 1∕2,

$$\displaystyle \begin{aligned} \|w_{n}(t,\cdot)\|{}_{L^2(\mathbb T^3)}\leq Cn^{-s-\varepsilon}\,. \end{aligned} $$
(4.1.86)

An interpolation between (4.1.85) and (4.1.86) yields (4.1.77). This completes the proof of Lemma 4.1.32. □

Using Lemma 4.1.32, we may write

$$\displaystyle \begin{aligned}\|u_{n}(t_n,\cdot)\|{}_{H^{s}(\mathbb T^3)}\geq \|v_{n}(t_n,\cdot)\|{}_{H^{s}(\mathbb T^3)} -C-Cn^{-\varepsilon}\,. \end{aligned}$$

Recall that (4.1.74) yields

$$\displaystyle \begin{aligned} \|v_{n}(t_n,\cdot)\|{}_{H^{s}(\mathbb T^3)}\geq C[\log(n)]^{-(s+1)\delta_1+s\delta_2}\,, \end{aligned}$$

provided n ≫ 1. Therefore, by choosing δ 1 small enough (depending on δ 2 fixed in Lemma 4.1.32), we obtain that there exists δ > 0 such that

$$\displaystyle \begin{aligned}\|v_n(t_n,\cdot)\|{}_{H^s(\mathbb T^3)}\geq C [\log(n)]^{\delta},\quad n\gg 1 \end{aligned}$$

which in turn implies that

$$\displaystyle \begin{aligned}\|u_n(t_n,\cdot)\|{}_{H^s(\mathbb T^3)}\geq C [\log(n)]^{\delta},\quad n\gg 1\,. \end{aligned}$$

This completes the proof of Theorem 4.1.28. □

Theorem 4.1.28 implies that the Cauchy problem associated with the cubic defocusing wave equation,

$$\displaystyle \begin{aligned}(\partial_{t}^{2}-\Delta)u+u^3=0 \end{aligned}$$

is ill-posed in \(H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)\) for s < 1∕2: continuous dependence on the data fails at every \(C^{\infty }(\mathbb T^3)\times C^{\infty }(\mathbb T^3)\) initial datum.

For future reference, we also state the following consequence of Theorem 4.1.28.

Theorem 4.1.33

Let us fix s ∈ (0, 1∕2), T > 0 and

$$\displaystyle \begin{aligned}(u_0,u_1)\in H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)\,. \end{aligned}$$

Then there exists a sequence \((u_n(t,x))_{n=1}^{\infty }\) of \(C(\mathbb R;C^{\infty }(\mathbb T^3))\) functions such that

$$\displaystyle \begin{aligned}(\partial_{t}^{2}-\Delta)u_n+u_n^3=0 \end{aligned}$$

with

$$\displaystyle \begin{aligned}\lim_{n\rightarrow + \infty} \|(u_{n}(0)-u_0,\partial_t u_n(0)-u_1)\|{}_{H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)}=0 \end{aligned}$$

but

$$\displaystyle \begin{aligned}\lim_{n\rightarrow + \infty} \|(u_n(t),\partial_t u_n(t))\|{}_{L^\infty([0,T];H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3))}=+\infty. \end{aligned}$$

Proof

Let \((u_{0,m}, u_{1,m})_{m=1}^{\infty }\) be a sequence of \(C^{\infty }(\mathbb T^3)\times C^{\infty }(\mathbb T^3)\) functions such that

$$\displaystyle \begin{aligned}\lim_{m\rightarrow + \infty} \|(u_0-u_{0,m},u_1-u_{1,m})\|{}_{H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)}=0\,. \end{aligned}$$

For a fixed m, we apply Theorem 4.1.28 in order to find a sequence \((u_{m,n}(t,x))_{n=1}^{\infty }\) of \(C(\mathbb R;C^{\infty }(\mathbb T^3))\) functions such that

$$\displaystyle \begin{aligned}(\partial_{t}^{2}-\Delta)u_{m,n}+u_{m,n}^3=0 \end{aligned}$$

with

$$\displaystyle \begin{aligned}\lim_{n\rightarrow + \infty} \|(u_{m,n}(0)-u_{0,m},\partial_t u_{m,n}(0)-u_{1,m})\|{}_{H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)}=0 \end{aligned}$$

and for every m ≥ 1,

$$\displaystyle \begin{aligned} \lim_{n\rightarrow + \infty} \|(u_{m,n}(t),\partial_t u_{m,n}(t))\|{}_{L^\infty([0,T];H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3))}=+\infty. \end{aligned} $$
(4.1.87)

Now, using the triangle inequality, we obtain that for every l ≥ 1 there is M 0(l) such that for every m ≥ M 0(l) there is N 0(m) such that for every n ≥ N 0(m),

$$\displaystyle \begin{aligned}\|(u_{m,n}(0)-u_{0},\partial_t u_{m,n}(0)-u_{1})\|{}_{H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)}<\frac{1}{l}\,. \end{aligned}$$

Thanks to (4.1.87), we obtain that for every m ≥ 1 there exists N 1(m) ≥ N 0(m) such that for every n ≥ N 1(m),

$$\displaystyle \begin{aligned}\|(u_{m,n}(t),\partial_t u_{m,n}(t))\|{}_{L^\infty([0,T];H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3))} >l\,. \end{aligned}$$

We now observe that

$$\displaystyle \begin{aligned}u_{l}(t,x)\equiv u_{M_0(l),N_1(M_0(l))}(t,x),\quad l=1,2, 3,\cdots \end{aligned}$$

is a sequence of solutions of the cubic defocusing wave equation satisfying the conclusions of Theorem 4.1.33. □

Remark 4.1.34

It is worth mentioning that we arrive at a sharp local well-posedness result for the cubic wave equation without overly complicated technicalities because we do not need a smoothing effect to recover losses of derivatives, either in the nonlinearity or in the inhomogeneous Strichartz estimates. The X s, b spaces of Bourgain are an efficient tool for dealing with these two difficulties. These developments go beyond the scope of these lectures.

4.1.8 Extensions to More General Nonlinearities

One may consider the wave equation with a more general nonlinearity than the cubic one. Namely, let us consider the nonlinear wave equation

$$\displaystyle \begin{aligned} (\partial_{t}^{2}-\Delta)u+|u|{}^{\alpha}u=0, \end{aligned} $$
(4.1.88)

posed on \(\mathbb T^3\), where α > 0 measures the “degree” of the nonlinearity. If u(t, x) is a solution of (4.1.88) posed on \(\mathbb R^3\), then so is \(u_{\lambda }(t,x)=\lambda ^{\frac {2}{\alpha }}u(\lambda t,\lambda x)\). Moreover

$$\displaystyle \begin{aligned}\|u_{\lambda}(t,\cdot)\|{}_{H^s}\approx \lambda^{\frac{2}{\alpha}}\lambda^{s}\lambda^{-\frac{3}{2}}\|u(\lambda t,\cdot)\|{}_{H^s} \end{aligned}$$

which implies that H s with \(s=\frac {3}{2}-\frac {2}{\alpha }\) is the critical Sobolev regularity for (4.1.88). Based on this scaling argument, one may expect that for \(s>\frac {3}{2}-\frac {2}{\alpha }\) the Cauchy problem associated with (4.1.88) is well-posed in H s × H s−1 and that for \(s<\frac {3}{2}-\frac {2}{\alpha }\) it is ill-posed in H s × H s−1. In this section, we have verified that this is indeed the case for α = 2. For 2 < α < 4, a small modification of the proof of Theorem 4.1.22 shows that (4.1.88) is locally well-posed in H s × H s−1 for \(s\in (\frac {3}{2}-\frac {2}{\alpha },\alpha )\). Then, as in the proof of Theorem 4.1.2, we can show that (4.1.88) is globally well-posed in H 1 × L 2. Moreover, a small modification of the proof of Theorem 4.1.28 shows that for \(s\in (0,\frac {3}{2}-\frac {2}{\alpha })\) the Cauchy problem for (4.1.88) is locally ill-posed in H s × H s−1. For α = 4, we can prove a local well-posedness statement for (4.1.88) as in Theorem 4.1.25. The global well-posedness in H 1 × L 2 for α = 4 is much more delicate than the globalisation argument of Theorem 4.1.2. It is however possible to show that (4.1.88) is globally well-posed in H 1 × L 2 (see [20, 21, 41, 42]). The new global information for α = 4, in addition to the conservation of the energy, is the Morawetz estimate, which is a quantitative way to contradict the blow-up criterion. For α > 4, the Cauchy problem associated with (4.1.88) is still locally well-posed in H s × H s−1 for some \(s>\frac {3}{2}-\frac {2}{\alpha }\). The global well-posedness (i.e. global existence, uniqueness and propagation of regularity) of (4.1.88) for α > 4 is an outstanding open problem. For α > 4, the argument used in Theorem 4.1.28 may allow one to construct weak solutions in H 1 × L 2 with initial data in H σ for \(1<\sigma <\frac {3}{2}-\frac {2}{\alpha }\) which lose their H σ regularity. See [28] for such a result for (4.1.88) posed on \(\mathbb R^3\).
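The exponent arithmetic above can be sanity-checked mechanically. The following sketch (illustrative only; the helper names are ours) computes \(s_c(\alpha )=\frac {3}{2}-\frac {2}{\alpha }\) and verifies that the power of λ in the homogeneous Sobolev scaling identity vanishes exactly at s = s c:

```python
# Critical regularity for (d_t^2 - Delta)u + |u|^alpha u = 0 in dimension 3.
# Helper names are ours, for illustration only.

def critical_s(alpha):
    """Scaling-critical Sobolev index s_c = 3/2 - 2/alpha."""
    return 1.5 - 2.0 / alpha

def scaling_exponent(alpha, s):
    """Power of lambda in ||u_lambda(t)||_{Hdot^s} = lambda^e ||u(lambda t)||_{Hdot^s}."""
    return 2.0 / alpha + s - 1.5

# alpha = 2 (cubic): s_c = 1/2;  alpha = 4 (energy-critical): s_c = 1.
assert critical_s(2.0) == 0.5 and critical_s(4.0) == 1.0
for alpha in (2.0, 3.0, 4.0, 8.0):
    assert abs(scaling_exponent(alpha, critical_s(alpha))) < 1e-12
```

In particular the case α = 2 treated in this section has s c = 1∕2, matching the threshold of Theorems 4.1.22 and 4.1.28.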

4.2 Probabilistic Global Well-Posedness for the 3d Cubic Wave Equation in H s, s ∈ [0, 1]

4.2.1 Introduction

Consider again the Cauchy problem for the cubic defocusing wave equation

$$\displaystyle \begin{aligned} \begin{gathered} (\partial_t^2-\Delta) u+u^3=0,\quad u:\mathbb R \times \mathbb T^3\rightarrow \mathbb R, \\ u|{}_{t=0}=u_0,\,\, \partial_t u|{}_{t=0}=u_1,\quad \quad (u_0,u_1)\in {\mathcal H}^s(\mathbb T^3), \end{gathered} \end{aligned} $$
(4.2.1)

where

$$\displaystyle \begin{aligned}{\mathcal H}^s(\mathbb T^3)\equiv H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)\,. \end{aligned}$$

In the previous section, we have shown that (4.2.1) is (at least locally in time) well-posed in \({\mathcal H}^s(\mathbb T^3)\), s ≥ 1∕2. The main ingredient in the proof for s ∈ [1∕2, 1) was the Strichartz estimates for the linear wave equation. We have also shown that for s ∈ (0, 1∕2) the Cauchy problem (4.2.1) is ill-posed in \({\mathcal H}^s(\mathbb T^3)\).

One may however ask whether some sort of well-posedness for (4.2.1) survives for s < 1∕2. We will show below that this is indeed possible, if we agree to “randomise” the initial data. This means that we will endow \({\mathcal H}^s(\mathbb T^3)\), s ∈ (0, 1∕2) with suitable probability measures and we will show that the Cauchy problem (4.2.1) is well-posed in a suitable sense for initial data (u 0, u 1) on a set of full measure.

Let us now describe these measures. Starting from \((u_0,u_1)\in {\mathcal H}^s\) given by their Fourier series

$$\displaystyle \begin{aligned}u_{j}(x)=a_{j}+\sum_{n\in\mathbb Z^3_{\star}}\Big(b_{n,j}\,\cos{}(n\cdot x)+c_{n,j}\sin{}(n\cdot x)\Big), \quad j=0,1, \end{aligned}$$

we define \(u_{j}^\omega \) by

$$\displaystyle \begin{aligned} u_{j}^\omega(x)=\alpha_{j}(\omega)a_{j}+\sum_{n\in\mathbb Z^3_{\star}}\Big(\beta_{n,j}(\omega)b_{n,j}\,\cos{}(n\cdot x)+\gamma_{n,j}(\omega)c_{n,j}\sin{}(n\cdot x)\Big), \end{aligned} $$
(4.2.2)

where (α j(ω), β n,j(ω), γ n,j(ω)), \(n\in \mathbb Z^3_{\star }\), j = 0, 1 is a sequence of real random variables on a probability space \((\Omega ,p,{\mathcal F})\). We assume that the random variables \((\alpha _j,\beta _{n,j},\gamma _{n,j})_{n\in \mathbb Z^3_{\star },j=0,1}\) are independent identically distributed real random variables with a distribution θ satisfying

$$\displaystyle \begin{aligned} \exists\, c>0,\quad \forall\,\gamma\in\mathbb R,\quad \int_{-\infty}^{\infty}e^{\gamma x}d\theta(x) \leq e^{c\gamma^2} \end{aligned} $$
(4.2.3)

(notice that under the assumption (4.2.3) the random variables are necessarily of mean zero). Typical examples (see Remark 4.2.13 below) of random variables satisfying (4.2.3) are the standard Gaussians, i.e.

$$\displaystyle \begin{aligned}d\theta(x)=(2\pi)^{-\frac{1}{2}}e^{-\frac{x^2}{2}}dx \end{aligned}$$

(with equality in (4.2.3) for c = 1∕2) or the Bernoulli variables

$$\displaystyle \begin{aligned}d\theta(x)=\frac{1}{2}(\delta_{-1}+\delta_{1})\,. \end{aligned}$$

An advantage of the Bernoulli randomisation is that it keeps the \({\mathcal H}^s\) norm of the original function. The Gaussian randomisation has the advantage of “generating” a dense set in \({\mathcal H}^s\) via the map

$$\displaystyle \begin{aligned} \omega \in \Omega \longmapsto (u_0^\omega, u_1^\omega)\in{\mathcal H}^s \end{aligned} $$
(4.2.4)

for most of \((u_0,u_1)\in {\mathcal H}^s\) (see Proposition 4.2.2 below).
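The norm preservation under the Bernoulli randomisation can be seen concretely on a truncated series of the form (4.2.2): flipping the signs of the coefficients leaves \(\sum _n\langle n\rangle ^{2s}(b_{n}^2+c_{n}^2)\) unchanged. A minimal numerical sketch (the names and the truncation are ours):

```python
import random

random.seed(0)
s = 0.3  # Sobolev index

# Toy truncated Fourier data: for each nonzero frequency n in a box of Z^3,
# a cosine coefficient b_n and a sine coefficient c_n.
modes = [(n1, n2, n3) for n1 in range(-3, 4) for n2 in range(-3, 4)
         for n3 in range(-3, 4) if (n1, n2, n3) != (0, 0, 0)]
b = {n: random.gauss(0, 1) for n in modes}
c = {n: random.gauss(0, 1) for n in modes}

def hs_norm_sq(b, c, s):
    # ||u||_{H^s}^2 up to a constant: sum over modes of <n>^{2s} (b_n^2 + c_n^2).
    return sum((1 + n[0]**2 + n[1]**2 + n[2]**2) ** s * (b[n]**2 + c[n]**2)
               for n in modes)

# Bernoulli randomisation: multiply each coefficient by an independent sign.
b_rand = {n: random.choice((-1, 1)) * b[n] for n in modes}
c_rand = {n: random.choice((-1, 1)) * c[n] for n in modes}

assert abs(hs_norm_sq(b, c, s) - hs_norm_sq(b_rand, c_rand, s)) < 1e-9
```

A Gaussian randomisation, by contrast, changes the norm almost surely while keeping it finite, which is consistent with Proposition 4.2.2 below.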

Definition 4.2.1

For fixed \((u_0, u_1) \in \mathcal {H}^s\), the map (4.2.4) is a measurable map from \((\Omega ,{\mathcal F})\) to \({\mathcal H}^s\) endowed with the Borel sigma algebra since the partial sums form a Cauchy sequence in \(L^2(\Omega ;{\mathcal H}^s)\). Thus (4.2.4) endows the space \({\mathcal H}^s(\mathbb T^3)\) with a probability measure which is the direct image of p. Let us denote this measure by \(\mu _{(u_0, u_1)}\). Then

$$\displaystyle \begin{aligned}\forall\, A \subset \mathcal{H}^s,\,\, \mu_{(u_0, u_1)} (A)= p ( \omega\in \Omega\,:\, (u_0^\omega, u_1^\omega) \in A). \end{aligned}$$

Denote by \({\mathcal M}^s\) the set of measures obtained following this construction:

$$\displaystyle \begin{aligned}{\mathcal M}^s= \bigcup_{(u_0, u_1) \in \mathcal{H}^s} \{ \mu_{(u_0, u_1)}\}\,. \end{aligned}$$

Here are two basic properties of these measures.

Proposition 4.2.2

For any s′ > s, if \((u_0, u_1) \notin \mathcal {H}^{s'}\), then

$$\displaystyle \begin{aligned}\mu_{(u_0, u_1)}( \mathcal{H}^{s'})=0\,. \end{aligned}$$

In other words, the randomisation (4.2.4) does not regularise in the scale of the L 2-based Sobolev spaces (this fact is obvious for the Bernoulli randomisation). Next, if (u 0, u 1) have all their Fourier coefficients different from zero and if \(\mathrm {supp}(\theta )=\mathbb R\), then \(\mathrm {supp}(\mu _{(u_0, u_1)})={\mathcal H^s}\). In other words, under these assumptions, for any \((w_0, w_1)\in \mathcal {H}^s\) and any 𝜖 > 0,

$$\displaystyle \begin{aligned} \mu_{(u_0, u_1)} ( \{ (v_0, v_1)\in\mathcal{H}^s\,:\, \| (w_0, w_1) - (v_0, v_1)\|{}_{\mathcal{H}^s} < \epsilon \}) >0, \end{aligned} $$
(4.2.5)

or in yet other words, any set of full \(\mu _{(u_0, u_1)}\)-measure is dense in \(\mathcal {H}^s\) .

We have the following global existence and uniqueness result for typical data with respect to an element of \({\mathcal M}^s\).

Theorem 4.2.3 (Existence and Uniqueness)

Let us fix s ∈ (0, 1) and \(\mu \in {\mathcal M}^s\). Then, there exists a set \(\varSigma \subset {\mathcal H}^s(\mathbb T^3)\) of full μ-measure such that for every (v 0, v 1)  ∈ Σ, there exists a unique global solution v of the nonlinear wave equation

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta)v+v^3=0,\quad (v(0),\partial_t v(0))=(v_0,v_1) \end{aligned} $$
(4.2.6)

satisfying

$$\displaystyle \begin{aligned}(v(t),\partial_t v(t)) \in \big(S(t)(v_0, v_1),\partial_t S(t)(v_0, v_1)\big)+ C(\mathbb R; H^1(\mathbb T^3) \times L^2(\mathbb T^3)). \end{aligned}$$

Furthermore, if we denote by

$$\displaystyle \begin{aligned}\Phi(t) (v_0, v_1)\equiv (v(t),\partial_t v(t))\end{aligned}$$

the flow thus defined, the set Σ is invariant by the map Φ(t), namely

$$\displaystyle \begin{aligned}\Phi(t)(\varSigma)=\varSigma,\qquad \forall\, t\in \mathbb R. \end{aligned}$$

The next statement gives quantitative bounds on the solutions.

Theorem 4.2.4 (Quantitative Bounds)

Let us fix s ∈ (0, 1) and \(\mu \in {\mathcal M}^s\). Let Σ be the set constructed in Theorem 4.2.3. Then for every ε > 0 there exist C, δ > 0 such that for every (v 0, v 1) ∈ Σ, there exists M > 0 such that the global solution to (4.2.6) constructed in Theorem 4.2.3 satisfies

$$\displaystyle \begin{aligned}v(t)= S(t) \Pi_0^{\perp}(v_0, v_1)+ w(t), \end{aligned}$$

with

$$\displaystyle \begin{aligned} \|(w(t), \partial_t w(t)) \|{}_{\mathcal{H}^1(\mathbb T^3)} \leq C (M+ |t|)^{\frac {1-s} s + \varepsilon} \end{aligned}$$

and

$$\displaystyle \begin{aligned} \mu ((v_0,v_1) \,:\,M>\lambda) \leq C e^{-\lambda^\delta}. \end{aligned}$$

Remark 4.2.5

Recall that Π0 is the orthogonal projector on the zero Fourier mode and \(\Pi _0^{\perp } = {\text{Id}} - \Pi _0\).

We now further discuss the uniqueness of the obtained solutions. For s > 1∕2, we have the following statement.

Theorem 4.2.6 (Unique Limit of Smooth Solutions for s > 1∕2)

Let s ∈ (1∕2, 1). With the notations of the statement of Theorem 4.2.3, let us fix an initial datum (v 0, v 1) ∈ Σ with a corresponding global solution v(t). Let \((v_{0,n},v_{1,n})_{n=1}^{\infty }\) be a sequence of elements of \({\mathcal H}^1(\mathbb T^3)\) such that

$$\displaystyle \begin{aligned}\lim_{n\rightarrow\infty}\|(v_{0,n}-v_0,v_{1,n}-v_1)\|{}_{{\mathcal H}^s(\mathbb T^3)}=0\,. \end{aligned}$$

Denote by v n(t) the solution of the cubic defocusing wave equation with data (v 0,n, v 1,n) defined in Theorem 4.1.2. Then for every T > 0,

$$\displaystyle \begin{aligned}\lim_{n\rightarrow\infty}\|(v_{n}(t)-v(t),\partial_t v_{n}(t)-\partial_t v(t))\|{}_{L^\infty([0,T]; {\mathcal H}^s(\mathbb T^3))}=0\,. \end{aligned}$$

Thanks to Theorem 4.1.33, we know that for s ∈ (0, 1∕2) the result of Theorem 4.2.6 cannot hold true! We only have a partial statement.

Theorem 4.2.7 (Unique Limit of Particular Smooth Solutions for s < 1∕2)

Let s ∈ (0, 1∕2). With the notations of the statement of Theorem 4.2.3, let us fix an initial datum (v 0, v 1) ∈ Σ with a corresponding global solution v(t). Let \((v_{0,n},v_{1,n})_{n=1}^{\infty }\) be the sequence of \(C^\infty (\mathbb T^3)\times C^\infty (\mathbb T^3)\) functions defined by the usual regularisation by convolution, i.e.

$$\displaystyle \begin{aligned}v_{0,n}=v_{0}\star \rho_n,\quad v_{1,n}=v_{1}\star \rho_n\,, \end{aligned}$$

where \((\rho _n)_{n=1}^{\infty }\) is an approximate identity. Denote by v n(t) the solution of the cubic defocusing wave equation with data (v 0,n, v 1,n) defined in Theorem 4.1.2. Then for every T > 0,

$$\displaystyle \begin{aligned}\lim_{n\rightarrow\infty}\|(v_{n}(t)-v(t),\partial_t v_{n}(t)-\partial_t v(t))\|{}_{L^\infty([0,T]; {\mathcal H}^s(\mathbb T^3))}=0\,. \end{aligned}$$

Remark 4.2.8

We emphasise that the result of Theorem 4.1.33 applies for the elements of Σ. More precisely, thanks to Theorem 4.1.33, we have that for every (v 0, v 1) ∈ Σ there is a sequence \((v_{0,n},v_{1,n})_{n=1}^{\infty }\) of elements of \(C^\infty (\mathbb T^3)\times C^\infty (\mathbb T^3)\) such that

$$\displaystyle \begin{aligned}\lim_{n\rightarrow\infty}\|(v_{0,n}-v_0, v_{1,n}-v_1)\|{}_{{\mathcal H}^s(\mathbb T^3)}=0 \end{aligned}$$

but such that if we denote by v n(t) the solution of the cubic defocusing wave equation with data (v 0,n, v 1,n) defined in Theorem 4.1.2 then for every T > 0,

$$\displaystyle \begin{aligned}\lim_{n\rightarrow\infty}\|(v_{n}(t),\partial_t v_{n}(t))\|{}_{L^\infty([0,T]; {\mathcal H}^s(\mathbb T^3))}=\infty\,. \end{aligned}$$

Therefore the choice of the particular regularisation of the initial data in Theorem 4.2.7 is of key importance. It would be interesting to classify the “admissible” regularisations, i.e. those for which a statement such as Theorem 4.2.7 holds.

Remark 4.2.9

We can also see the solutions constructed in Theorem 4.2.3 as the (unique) limit, as N tends to infinity, of the solutions of the following truncated versions of the cubic defocusing wave equation:

$$\displaystyle \begin{aligned}(\partial_t^2-\Delta)S_N u+S_{N}((S_N u)^3)=0, \end{aligned}$$

where S N is a Fourier multiplier localising on modes of size ≤ N. The convergence of a subsequence can be obtained by a compactness argument (cf. [9]). The convergence of the whole sequence, however, requires strong solution techniques.

The next question is whether some sort of continuous dependence with respect to the initial data survives in the context of Theorem 4.2.3. In order to state our result concerning the continuous dependence with respect to the initial data, we recall that for any event B (of non-vanishing probability) the conditional probability p(⋅|B) is the natural probability measure supported by B, defined by

$$\displaystyle \begin{aligned}p( A\vert B) = \frac{ p (A\cap B) } {p( B)}\,. \end{aligned}$$
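As an elementary illustration of this formula (a toy example of our own, unrelated to the measures constructed above), take two fair dice, A = “the sum equals 8” and B = “the first die is even”; then p(A ∩ B) = 3∕36, p(B) = 1∕2 and p(A|B) = 1∕6:

```python
from fractions import Fraction
from itertools import product

# Uniform probability on the finite sample space of two fair dice.
omega = list(product(range(1, 7), repeat=2))

def prob(event):
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

def cond_prob(a, b):
    # p(A | B) = p(A and B) / p(B)
    return prob(lambda w: a(w) and b(w)) / prob(b)

A = lambda w: w[0] + w[1] == 8   # the sum is 8
B = lambda w: w[0] % 2 == 0      # the first die is even

assert cond_prob(A, B) == Fraction(1, 6)
```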

We have the following statement.

Theorem 4.2.10 (Conditioned Continuous Dependence)

Let us fix s ∈ (0, 1), let A > 0, let \(B_A\equiv (V\in {\mathcal H}^s : \|V\|{ }_{{\mathcal H}^s}\leq A)\) be the closed ball of radius A centered at the origin of \({\mathcal H}^s\), and let T > 0. Let \(\mu \in {\mathcal M}^s\) and suppose that θ (the law of our random variables) is symmetric. Let Φ(t) be the flow of the cubic wave equation defined μ-almost everywhere in Theorem 4.2.3. Then for ε, η > 0, we have the bound

$$\displaystyle \begin{aligned} \mu\otimes\mu\Big( (V,V^{\prime})\,:\, \|\Phi(t)V-\Phi(t)V^{\prime}\|{}_{X_{T}}>\varepsilon \,\Big\vert\, \|V-V^{\prime}\|{}_{{\mathcal H}^s}<\eta,\ (V,V^{\prime})\in B_A\times B_A \Big)\leq g(\varepsilon,\eta), \end{aligned} $$
(4.2.7)

where \( X_{T}\equiv (C ([0,T]; \mathcal {H}^s)\cap L^4([0,T]\times \mathbb T^3))\times C([0,T];H^{s-1}) \) and g(ε, η) is such that

$$\displaystyle \begin{aligned}\lim_{\eta\rightarrow 0}g(\varepsilon,\eta)=0,\qquad \forall\,\varepsilon>0. \end{aligned}$$

Moreover, if for s ∈ (0, 1∕2) we assume in addition that the support of μ is the whole \({\mathcal H}^s\) (which is true if, in the definition of the measure μ, we have \(a_{j}, b_{n,j}, c_{n,j} \neq 0\) for all \(n \in \mathbb {Z}^3_{\star }\), j = 0, 1, and the support of the distribution of the random variables is \(\mathbb {R}\)), then there exists ε > 0 such that for every η > 0 the left hand-side in (4.2.7) is positive.

A probability measure θ on \(\mathbb R\) is called symmetric if

$$\displaystyle \begin{aligned}\int_{\mathbb R}f(x)d\theta(x)=\int_{\mathbb R}f(-x)d\theta(x),\quad \forall\,\, f\in L^1(d\theta). \end{aligned}$$

A real random variable is called symmetric if its distribution is a symmetric measure on \(\mathbb R\).

The result of Theorem 4.2.10 says that as soon as η ≪ ε, among initial data which are η-close to each other, the probability of finding two for which the corresponding solutions to (4.2.1) do not remain ε-close to each other is very small. The last part of the statement says that the deterministic version of the uniform continuity property (4.2.7) fails, so that one cannot dispense with a probabilistic approach in the question of continuous dependence (in \({\mathcal H}^s\), s < 1∕2) with respect to the data. The ill-posedness result of Theorem 4.1.28 will be of importance in the proof of the last part of Theorem 4.2.10.

4.2.2 Probabilistic Strichartz Estimates

Lemma 4.2.11

Let \((l_n(\omega ))_{n=1}^{\infty }\) be a sequence of real, independent random variables with associated sequence of distributions \((\theta _{n})_{n=1}^{\infty }\) . Assume that θ n satisfy the property

$$\displaystyle \begin{aligned} \exists\, c>0\,:\, \forall\,\gamma\in\mathbb R,\,\forall\, n\geq 1,\, \Big|\int_{-\infty}^{\infty}e^{\gamma x}d\theta_{n}(x)\Big|\leq e^{c\gamma^{2}}\,. \end{aligned} $$
(4.2.8)

Then there exists α > 0 such that for every λ > 0 and every sequence \((c_n)_{n=1}^{\infty }\in l^2\) of real numbers,

$$\displaystyle \begin{aligned} p\Big(\omega\,:\,\big|\sum_{n=1}^{\infty} c_n l_n(\omega) \big|>\lambda\Big)\leq 2 e^{-\frac{\alpha\lambda^2}{\sum_{n}c_n^2}} \, . \end{aligned} $$
(4.2.9)

As a consequence there exists C > 0 such that for every p ≥ 2, every \((c_n)_{n=1}^{\infty }\in l^2\) ,

$$\displaystyle \begin{aligned} \big\|\sum_{n=1}^{\infty} c_n l_n(\omega) \big\|{}_{L^{p}(\Omega)} \leq C\sqrt{p}\big(\sum_{n=1}^{\infty}c_n^2\big)^{1/2} . \end{aligned} $$
(4.2.10)

Remark 4.2.12

The property (4.2.8) is equivalent to assuming that the θ n are of mean zero and that

$$\displaystyle \begin{aligned} \exists\, c>0,\, C>0\,:\, \forall\,\gamma\in\mathbb R,\,\forall\, n\geq 1,\, \Big|\int_{-\infty}^{\infty}e^{\gamma x}d\theta_{n}(x)\Big|\leq C\, e^{c\gamma^{2}}\,. \end{aligned} $$
(4.2.11)

Remark 4.2.13

Let us notice that (4.2.8) is readily satisfied if \((l_n(\omega ))_{n=1}^{\infty }\) are standard real Gaussian or standard Bernoulli variables. Indeed, in the Gaussian case

$$\displaystyle \begin{aligned}\int_{-\infty}^{\infty}e^{\gamma x}d\theta_{n}(x)= \int_{-\infty}^{\infty}e^{\gamma x}\, e^{-x^2/2}\frac{dx}{\sqrt{2\pi}} =e^{\gamma^2/2}\,. \end{aligned}$$

In the case of Bernoulli variables one can obtain that (4.2.8) is satisfied by invoking the inequality

$$\displaystyle \begin{aligned}\frac{e^{\gamma }+e^{-\gamma }}{2}\leq e^{\gamma^2/2},\quad \forall\, \gamma\in\mathbb R. \end{aligned}$$

More generally, we can observe that (4.2.11) holds if θ n is compactly supported.
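Both observations can be confirmed numerically. The sketch below (our own; plain trapezoidal quadrature) checks that the Gaussian computation above gives \(e^{\gamma ^2/2}\) and that the Bernoulli bound \(\cosh \gamma \leq e^{\gamma ^2/2}\) holds on a range of values of γ:

```python
import math

def gaussian_mgf(gamma, lo=-20.0, hi=20.0, n=200000):
    # Trapezoidal approximation of int e^{gamma x} e^{-x^2/2} dx / sqrt(2 pi).
    h = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        x = lo + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.exp(gamma * x - 0.5 * x * x)
    return h * total / math.sqrt(2.0 * math.pi)

for gamma in (0.0, 0.5, 1.0, 2.0):
    target = math.exp(0.5 * gamma * gamma)
    # Gaussian case: equality with e^{gamma^2/2}.
    assert abs(gaussian_mgf(gamma) - target) < 1e-6 * target
    # Bernoulli case: cosh(gamma) = (e^gamma + e^{-gamma})/2 <= e^{gamma^2/2}.
    assert math.cosh(gamma) <= target
```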

Remark 4.2.14

In the Gaussian case, we can see Lemma 4.2.11 as a very particular case of the L p smoothing properties of the Hartree-Fock heat flow (see e.g. [44, Section 3] for more details on this issue).

Proof of Lemma 4.2.11

For t > 0 to be determined later, using the independence and (4.2.8), we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{\Omega}\, e^{t\sum_{n\geq 1}c_n l_n(\omega)}dp(\omega) &\displaystyle = &\displaystyle \prod_{n\geq 1}\int_{\Omega}e^{t c_n l_n(\omega)}dp(\omega) \\ &\displaystyle = &\displaystyle \prod_{n\geq 1}\int_{-\infty}^{\infty}e^{tc_n x}\, d\theta_{n}(x) \\ &\displaystyle \leq &\displaystyle \prod_{n\geq 1}e^{c(t c_n)^2}= e^{(ct^2)\sum_{n}c_n^2}\, . \end{array} \end{aligned} $$

Therefore

$$\displaystyle \begin{aligned}e^{(ct^2)\sum_{n}c_n^2}\geq e^{t\lambda}\,\,\, p\,(\omega\,:\,\sum_{n\geq 1} c_n l_n(\omega)>\lambda) \end{aligned}$$

or equivalently,

$$\displaystyle \begin{aligned}p\,(\omega\,:\,\sum_{n\geq 1} c_n l_n(\omega)>\lambda)\leq e^{(ct^2)\sum_{n}c_n^2}\,\,\, e^{-t\lambda}\, . \end{aligned}$$

We choose t as

$$\displaystyle \begin{aligned}t\equiv \frac{\lambda}{2c \sum_{n}c_n^2}\,. \end{aligned}$$

Hence

$$\displaystyle \begin{aligned}p\,(\omega\,:\,\sum_{n\geq 1} c_n l_n(\omega)>\lambda)\leq e^{-\frac{\lambda^2}{4c\sum_{n}c_n^2}}\, . \end{aligned}$$

In the same way (replacing c n by − c n), we can show that

$$\displaystyle \begin{aligned}p\,(\omega\,:\,\sum_{n\geq 1} c_n l_n(\omega)<-\lambda)\leq e^{-\frac{\lambda^2}{4c\sum_{n}c_n^2}} \end{aligned}$$

which completes the proof of (4.2.9). To deduce (4.2.10), we write

$$\displaystyle \begin{aligned} \begin{array}{rcl} \|\sum_{n=1}^{\infty} c_n l_n(\omega)\|{}^p_{L^p(\Omega)} &\displaystyle = &\displaystyle {p} \int_0^{+\infty} p(\omega\,:\, |\sum_{n=1}^{\infty} c_n l_n(\omega)|>\lambda ) \lambda^{p-1} d \lambda \\ &\displaystyle \leq &\displaystyle Cp \int_0^{+\infty} \lambda^{p-1} e^{-\frac{c \lambda^2}{\sum_n c_n^2}} d\lambda \\ &\displaystyle \leq &\displaystyle Cp (C\sum_n c_n^2)^{\frac p 2}\int_0^{+\infty} \lambda^{p-1} e^{-\frac{\lambda^2} 2} d\lambda \\ &\displaystyle \leq &\displaystyle C (Cp \sum_n c_n^2)^{\frac p 2} \end{array} \end{aligned} $$

which completes the proof of Lemma 4.2.11. □
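In the Gaussian case the estimate (4.2.9) can be tested against an exact formula: \(\sum _n c_n l_n\) is then a centered Gaussian of variance \(\sum _n c_n^2\), whose tail is given by the complementary error function. The sketch below (our own) checks that this exact tail sits below a bound of the form (4.2.9) with α = 1∕2, the value corresponding to c = 1∕2 in (4.2.8):

```python
import math

def gaussian_sum_tail(lam, coeffs):
    # Exact: for X = sum c_n g_n with g_n iid N(0,1), X ~ N(0, sum c_n^2),
    # so p(|X| > lam) = erfc(lam / sqrt(2 * sum c_n^2)).
    var = sum(c * c for c in coeffs)
    return math.erfc(lam / math.sqrt(2.0 * var))

def chernoff_bound(lam, coeffs, alpha=0.5):
    # Right-hand side of (4.2.9); alpha = 1/2 corresponds to c = 1/2 in (4.2.8).
    var = sum(c * c for c in coeffs)
    return 2.0 * math.exp(-alpha * lam * lam / var)

coeffs = [1.0 / (k + 1) for k in range(50)]  # an l^2 sequence
for lam in (0.5, 1.0, 2.0, 4.0):
    assert gaussian_sum_tail(lam, coeffs) <= chernoff_bound(lam, coeffs)
```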

As a consequence of Lemma 4.2.11, we get the following “probabilistic” Strichartz estimates.

Theorem 4.2.15

Let us fix s ∈ (0, 1) and let \(\mu \in {\mathcal M}^s\)be induced via the map (4.2.4) from the couple \((u_0,u_1)\in {\mathcal H}^s\). Let us also fix σ ∈ (0, s], 2 ≤ p 1 < +∞, 2 ≤ p 2 ≤ +∞ and \(\delta >1+ \frac 1 {p_1}\). Then there exists a positive constant C such that for every p ≥ 2,

$$\displaystyle \begin{aligned} \Big\| \|\langle t \rangle ^{- \delta} S(t) (v_0,v_1)\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))} \Big\|{}_{L^p({\mu})} \leq C\sqrt{p} \|(u_0, u_1)\|{}_{\mathcal{H}^{\sigma}(\mathbb T^3)}\,. \end{aligned} $$
(4.2.12)

As a consequence, for every T > 0 and p 1 ∈ [1, ∞), p 2 ∈ [2, ∞],

$$\displaystyle \begin{aligned} \|S(t) (v_0,v_1)\|{}_{L^{p_1}([0,T] ; L^{p_2}( \mathbb T^3))}<\infty ,\quad \mu\mathit{\text{ - almost surely.}} \end{aligned} $$
(4.2.13)

Moreover, there exist two positive constants C and c such that for every λ > 0,

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle \mu\Big((v_0,v_1)\in {\mathcal H}^s\,:\, \|\langle t \rangle ^{- \delta} S(t) (v_0,v_1)\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))}> \lambda \Big)\\ &\displaystyle &\displaystyle \quad \leq C\exp\Bigl(- \frac {c\lambda^2} {\|(u_0, u_1)\|{}^2_{\mathcal{H}^{\sigma}(\mathbb T^3)}}\Bigr)\,. \end{array} \end{aligned} $$
(4.2.14)

Remark 4.2.16

Observe that (4.2.13) applied for p 2 = ∞ displays an improvement of 3∕2 derivatives with respect to the Sobolev embedding, which is stronger than the improvement obtained by the (deterministic) Strichartz estimates (see Remark 4.1.14). The proof of Theorem 4.2.15 exploits the random oscillations of the initial data, while the proof of the deterministic Strichartz estimates exploits in a crucial (and subtle) manner the time oscillations of S(t). In the proof of Theorem 4.2.15, we simply neglect these time oscillations.

Remark 4.2.17

In the proof of Theorem 4.2.15, we shall make use of the Sobolev spaces \(W^{\sigma ,q}(\mathbb T^3)\), σ ≥ 0, q ∈ (1, ∞), defined via the norm

$$\displaystyle \begin{aligned}\|u\|{}_{W^{\sigma,q}(\mathbb T^3)}=\|(1-\Delta)^{\sigma/2}u\|{}_{L^q(\mathbb T^3)}\,. \end{aligned}$$

Proof of Theorem 4.2.15

We have that

$$\displaystyle \begin{aligned}\Big\| \|\langle t \rangle ^{- \delta}\Pi_0 S(t) (v_0,v_1)\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))} \Big\|{}_{L^p({\mu})} \end{aligned}$$

equals

$$\displaystyle \begin{aligned} \Big\| \|\langle t \rangle ^{- \delta} (\alpha_{0}(\omega)a_0+t\alpha_1(\omega) a_1) \|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))} \Big\|{}_{L^p_{\omega}}\,. \end{aligned} $$
(4.2.15)

A trivial application of Lemma 4.2.11 implies that

$$\displaystyle \begin{aligned}\|\alpha_j(\omega)\|{}_{L^p_{\omega}} \leq C\sqrt{p},\quad j=0,1. \end{aligned}$$

Therefore, using that δ > 1 + 1∕p 1 the expression (4.2.15) can be bounded by

$$\displaystyle \begin{aligned}(2\pi)^{\frac{3}{p_2}} \Big\| \|\langle t \rangle ^{- \delta} (\alpha_{0}(\omega)a_0+t\alpha_1(\omega) a_1) \|{}_{L^{p_1} ( \mathbb R_t )} \Big\|{}_{L^p_{\omega}} \leq C\sqrt{p}(|a_0|+|a_1|)\,. \end{aligned}$$

Therefore, it remains to estimate

$$\displaystyle \begin{aligned}\Big\| \|\langle t \rangle ^{- \delta}\Pi_0^{\perp} S(t) (v_0,v_1)\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))} \Big\|{}_{L^p({\mu})}\,. \end{aligned}$$

By the Hölder inequality on \(\mathbb T^3\), we observe that it suffices to estimate

$$\displaystyle \begin{aligned}\Big\| \|\langle t \rangle ^{- \delta}\Pi_0^{\perp} S(t) (v_0,v_1)\|{}_{L^{p_1} ( \mathbb R_t ; L^{\infty}( \mathbb T^3))} \Big\|{}_{L^p({\mu})}\,. \end{aligned}$$

Let q < ∞ be such that σ > 3∕q. Then by the Sobolev embedding \(W^{\sigma ,q}(\mathbb T^3)\subset C^0(\mathbb T^3)\), we have

$$\displaystyle \begin{aligned}\|\Pi_0^{\perp} S(t) (v_0,v_1)\|{}_{L^\infty(\mathbb T^3)}\leq C \|(1-\Delta)^{\sigma/2}\Pi_0^{\perp} S(t) (v_0,v_1)\|{}_{L^q(\mathbb T^3)}\,. \end{aligned}$$

Therefore, we need to estimate

$$\displaystyle \begin{aligned}\Big\| \|\langle t \rangle ^{- \delta} (1-\Delta)^{\sigma/2} \Pi_0^{\perp} S(t) (v_0,v_1)\|{}_{L^{p_1} ( \mathbb R_t ; L^{q}( \mathbb T^3))} \Big\|{}_{L^p({\mu})} \end{aligned}$$

which equals

$$\displaystyle \begin{aligned} \Big\| \|\langle t \rangle ^{- \delta} (1-\Delta)^{\sigma/2} \Pi_0^{\perp} S(t) (u_0^\omega,u_1^\omega)\|{}_{L^{p_1} ( \mathbb R_t ; L^{q}( \mathbb T^3))} \Big\|{}_{L^p_{\omega}}\,. \end{aligned} $$
(4.2.16)

By using the Hölder inequality in ω, we observe that it suffices to evaluate the last quantity only for \(p>\max (p_1,q)\). For such values of p, using the Minkowski inequality, we can estimate (4.2.16) by

$$\displaystyle \begin{aligned} \Big\| \big\| \langle t \rangle ^{- \delta} (1-\Delta)^{\sigma/2} \Pi_0^{\perp} S(t) (u_0^\omega,u_1^\omega) \big\|{}_{L^p_{\omega}} \Big\|{}_{L^{p_1} ( \mathbb R_t ; L^{q}( \mathbb T^3))}\,. \end{aligned} $$
(4.2.17)

Now, we can write \((1-\Delta )^{\sigma /2} \Pi _0^{\perp } S(t) (u_0^\omega ,u_1^\omega )\) as

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum_{n\in\mathbb Z^3_{\star}} \langle n\rangle^{\sigma}\Big(\big(\beta_{n,0}(\omega)b_{n,0}\cos{}(t|n|)+\beta_{n,1}(\omega)b_{n,1}\frac{\sin{}(t|n|)}{|n|}\big)\cos{}(n\cdot x)\\ &\displaystyle &\displaystyle \qquad \qquad +\big(\gamma_{n,0}(\omega)c_{n,0}\cos{}(t|n|)+\gamma_{n,1}(\omega)c_{n,1}\frac{\sin{}(t|n|)}{|n|}\big)\sin{}(n\cdot x)\Big), \end{array} \end{aligned} $$

with

$$\displaystyle \begin{aligned}\sum_{n\in\mathbb Z^3_{\star}} \langle n\rangle^{2\sigma}\Big(|b_{n,0}|{}^2+|c_{n,0}|{}^2+|n|{}^{-2}(|b_{n,1}|{}^2+|c_{n,1}|{}^2)\Big) \leq C\|(u_0, u_1)\|{}_{\mathcal{H}^\sigma(\mathbb T^3)}^2\,. \end{aligned} $$

Now using (4.2.10) of Lemma 4.2.11 and the boundedness of \(\sin \) and \(\cos \) functions, we obtain that (4.2.17) can be bounded by

$$\displaystyle \begin{aligned} C\Big\| \langle t \rangle ^{- \delta} C\sqrt{p}\|(u_0, u_1)\|{}_{\mathcal{H}^\sigma(\mathbb T^3)} \Big\|{}_{L^{p_1} ( \mathbb R_t ; L^{q}( \mathbb T^3))}\,. \end{aligned} $$
(4.2.18)

Since δ > 1 + 1∕p 1, we can estimate (4.2.18) by

$$\displaystyle \begin{aligned}C\sqrt{p}\|(u_0, u_1)\|{}_{\mathcal{H}^\sigma(\mathbb T^3)}\,. \end{aligned} $$

This completes the proof of (4.2.12). Let us finally show how (4.2.12) implies (4.2.14). Using the Tchebichev inequality and (4.2.12), we have that

$$\displaystyle \begin{aligned}\mu\Big((v_0,v_1)\in {\mathcal H}^s\,:\, \|\langle t \rangle ^{- \delta} S(t) (v_0,v_1)\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))}> \lambda \Big)\ \end{aligned} $$

is bounded by

$$\displaystyle \begin{aligned}\lambda^{-p} \Big\| \|\langle t \rangle ^{- \delta} S(t) (v_0,v_1)\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))} \Big\|{}_{L^p({\mu})}^p\leq \big(C\lambda^{-1}\sqrt{p}\|(u_0, u_1)\|{}_{\mathcal{H}^\sigma(\mathbb T^3)} \big)^p\,. \end{aligned} $$

We now choose p as

$$\displaystyle \begin{aligned} C\lambda^{-1}\sqrt{p}\|(u_0, u_1)\|{}_{\mathcal{H}^\sigma(\mathbb T^3)}= \frac 1 2 \Leftrightarrow p= \frac{ \lambda^2 \|(u_0, u_1)\|{}^{-2}_{\mathcal{H}^\sigma(\mathbb T^3)} } {4C^2} , \end{aligned}$$

which yields (4.2.14). This completes the proof of Theorem 4.2.15. □
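To see the optimisation numerically: the Chebyshev step gives a tail bound of the form \((C\lambda ^{-1}\sqrt{p}\,K)^{p}\) with \(K=\|(u_0,u_1)\|{ }_{\mathcal {H}^\sigma }\), and the choice of p above makes the base equal to 1∕2, so the tail is \(2^{-p}=e^{-c\lambda ^{2}/K^{2}}\) with \(c=\log 2/(4C^{2})\). A Python sketch with illustrative constants:

```python
import math

def tail_bound(lam: float, K: float, C: float, p: float) -> float:
    # the Chebyshev bound (C * sqrt(p) * K / lam)**p
    return (C * math.sqrt(p) * K / lam) ** p

C, K, lam = 2.0, 1.0, 40.0
p = lam ** 2 * K ** -2 / (4 * C ** 2)      # the optimising choice of p
base = C * math.sqrt(p) * K / lam          # equals 1/2 by construction
c = math.log(2) / (4 * C ** 2)
print(base)                                # 0.5
print(tail_bound(lam, K, C, p))            # 2**(-p)
print(math.exp(-c * lam ** 2 / K ** 2))    # the same Gaussian tail
```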

The proof of Theorem 4.2.15 also implies the following statement.

Theorem 4.2.18

Let us fix s ∈ (0, 1) and let \(\mu \in {\mathcal M}^s\)be induced via the map (4.2.4) from the couple \((u_0,u_1)\in {\mathcal H}^s\). Let us also fix p ≥ 2, σ ∈ (0, s] and q < ∞ such that σ > 3∕q. Then for every T > 0,

$$\displaystyle \begin{aligned} \|S(t) (v_0,v_1)\|{}_{L^{p}([0,T] ; W^{\sigma,q}( \mathbb T^3))}<\infty ,\quad \mu\mathit{\text{ - almost surely.}} \end{aligned} $$
(4.2.19)

4.2.3 Regularisation Effect in the Picard Iteration Expansion

Consider the Cauchy problem

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta) u+u^3=0,\quad u|{}_{t=0}=u_0,\,\, \partial_t u|{}_{t=0}=u_1, \end{aligned} $$
(4.2.20)

where \((u_0, u_1)\) is a typical element of the support of \(\mu \in {\mathcal M}^s\), s ∈ (0, 1). According to the discussion in Sect. 4.1.5 of the previous section, for small times depending on \((u_0, u_1)\), we can hope to represent the solution of (4.2.20) as

$$\displaystyle \begin{aligned}u=\sum_{j=1}^{\infty}Q_{j}(u_0,u_1), \end{aligned}$$

where \(Q_j\) is homogeneous of degree j in \((u_0, u_1)\). We have that

$$\displaystyle \begin{aligned} \begin{array}{rcl} Q_1(u_0,u_1) &\displaystyle = &\displaystyle S(t)(u_0,u_1), \\ Q_{2}(u_0,u_1) &\displaystyle = &\displaystyle 0, \\ Q_{3}(u_0,u_1) &\displaystyle = &\displaystyle -\int_{0}^t \frac{\sin{}((t-\tau)\sqrt{-\Delta})}{\sqrt{-\Delta}} \big( S(\tau)(u_0,u_1) \big)^3d\tau, \end{array} \end{aligned} $$

etc. We have that, μ almost surely, \(Q_1\notin H^{\sigma }\) for σ > s. However, using the probabilistic Strichartz estimates of the previous section, we have that for T > 0,

$$\displaystyle \begin{aligned}\|Q_3(u_0,u_1)\|{}_{L^\infty_TH^1(\mathbb T^3)}\lesssim \| S(t)(u_0,u_1)\|{}_{L^3_T L^6(\mathbb T^3)}^3<\infty,\quad \mu\text{-almost surely.} \end{aligned}$$

Therefore the second nontrivial term in the formal expansion defining the solution is more regular than the initial data! The strategy will therefore be to write the solution of (4.2.20) as

$$\displaystyle \begin{aligned}u=Q_1(u_0,u_1)+v, \end{aligned}$$

where \(v \in H^1\), and to solve the equation for v by the methods described in the previous section. In the case of the cubic nonlinearity the deterministic analysis used to solve the equation for v is particularly simple; it is in fact very close to the analysis in the proof of Proposition 4.1.1. For more complicated problems the analysis of the equation for v could involve more advanced deterministic arguments. We refer to [4], where a similar strategy is used in the context of the nonlinear Schrödinger equation, and to [16], where it is used in the context of stochastic PDEs.
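For the record, the formulas for the \(Q_j\) follow by inserting the ansatz \(u=\sum_{j\geq 1}Q_j\) into the Duhamel formulation of (4.2.20) and collecting the terms homogeneous of degree j in \((u_0,u_1)\), which gives, for j ≥ 2,

$$\displaystyle \begin{aligned}Q_{j}=-\sum_{j_1+j_2+j_3=j}\int_{0}^t \frac{\sin{}((t-\tau)\sqrt{-\Delta})}{\sqrt{-\Delta}}\, Q_{j_1}(\tau)Q_{j_2}(\tau)Q_{j_3}(\tau)\,d\tau. \end{aligned}$$

Since \(Q_1\) has degree 1 and the nonlinearity is cubic, only odd degrees appear; in particular \(Q_{2k}=0\) for every k, and \(Q_3\) is the first correction to the free evolution.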

This argument is not restricted to \(Q_3\). One can imagine situations in which, for some m > 3, \(Q_m\) is the first element in the expansion whose regularity fits well into a deterministic analysis. Then we can equally well look for the solution in the form

$$\displaystyle \begin{aligned} u=\sum_{j=1}^{m-1} Q_j(u_0,u_1)+v, \end{aligned} $$
(4.2.21)

and treat v by a deterministic analysis. It is worth noticing that such a situation occurs in the work on parabolic PDEs with a singular random source term [22,23,24]. In these works, in expansions of type (4.2.21), the random initial data \((u_0, u_1)\) is replaced by the random source term (the white noise). Let us also mention that in the case of parabolic equations the deterministic smoothing comes from elliptic regularity estimates, while in the context of the wave equation we basically rely on the smoothing estimate (4.1.5).

4.2.4 The Local Existence Result

Proposition 4.2.19

Consider the problem

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta)v+(f+v)^3=0\,. \end{aligned} $$
(4.2.22)

There exists a constant C such that for every time interval I = [a, b] of length 1, every Λ ≥ 1 and every \((v_0, v_1, f)\in H^1\times L^2\times L^3(I, L^6)\) satisfying

$$\displaystyle \begin{aligned}\|v_0\|{}_{H^1}+\|v_1\|{}_{L^2}+\|f\|{}^3_{L^3(I,L^6)}\leq \Lambda \end{aligned}$$

there exists a unique solution of (4.2.22) on the time interval \([a, a + C^{-1}\Lambda ^{-2}]\) with initial data

$$\displaystyle \begin{aligned}v(a,x)=v_0(x), \quad \partial_t v(a,x)=v_1(x)\,. \end{aligned}$$

Moreover the solution satisfies \( \|(v,\partial _t v)\|{ }_{L^\infty ([a,a+C^{-1} \Lambda ^{-2}],H^1\times L^2)}\leq C\Lambda \), the couple \((v, \partial _t v)\) is unique in the class \(L^\infty ([a, a + C^{-1}\Lambda ^{-2}], H^1\times L^2)\), and it depends continuously on time.

Proof

The proof is very similar to the proof of Proposition 4.1.1. By translation invariance in time, we can suppose that I = [0, 1]. We can rewrite the problem as

$$\displaystyle \begin{aligned} v(t)=S(t)(v_0,v_1)-\int_{0}^t \frac{\sin{}((t-\tau)\sqrt{-\Delta})}{\sqrt{-\Delta}}\big(f(\tau)+v(\tau)\big)^3d\tau\,. \end{aligned} $$
(4.2.23)

Set

$$\displaystyle \begin{aligned}\Phi_{v_0,v_1,f}(v)\equiv S(t)(v_0,v_1)-\int_{0}^t \frac{\sin{}((t-\tau)\sqrt{-\Delta})}{\sqrt{-\Delta}}\big(f(\tau)+v(\tau)\big)^3d\tau. \end{aligned}$$

Then for T ∈ (0, 1], using the Sobolev embedding \(H^1(\mathbb T^3)\subset L^6(\mathbb T^3)\), we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \|\Phi_{v_0,v_1,f}(v)\|{}_{L^\infty([0,T],H^1)}\\ &\displaystyle &\displaystyle \quad \leq C\big(\|v_0\|{}_{H^1}+\|v_1\|{}_{L^2}+\int_0^T\|f(\tau)\|{}_{L^6}^3d\tau\big)+ T\sup_{\tau\in[0,T]}\|v(\tau)\|{}_{L^6}^3\\ &\displaystyle &\displaystyle \quad \leq C\big(\|v_0\|{}_{H^1}+\|v_1\|{}_{L^2}+\|f\|{}^3_{L^3(I,L^6)}\big)+T\|v\|{}^3_{L^\infty([0,T],H^1)}. \end{array} \end{aligned} $$

It is now clear that for \(T\approx \Lambda ^{-2}\) the map \(\Phi _{v_0,v_1,f}\) sends the ball

$$\displaystyle \begin{aligned}\{v:\|v\|{}_{L^\infty([0,T],H^1)}\leq C\Lambda\} \end{aligned}$$

into itself. Moreover, by a similar argument, we obtain that this map is a contraction on the same ball. Thus we obtain the existence part and the bound on v in \(H^1\). The estimate of \(\|\partial _t v\|{ }_{L^2}\) follows by differentiating in t the Duhamel formula (4.2.23). This completes the proof of Proposition 4.2.19. □
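The contraction argument can be illustrated on a zero-dimensional toy analogue of (4.2.23), dropping the wave propagator: the scalar integral equation \(v(t)=v_0-\int_0^t(f(\tau )+v(\tau ))^3\,d\tau \), for which the Picard iterates of the analogue of Φ converge on a short time interval. A Python sketch with illustrative data (all names and parameters are ours):

```python
import math

def picard(v0, f, T, n=400, iters=40):
    # Iterate v -> Phi(v) with Phi(v)(t) = v0 - int_0^t (f(s) + v(s))^3 ds,
    # discretised by the left-endpoint rule; a contraction for T small.
    dt = T / n
    ts = [i * dt for i in range(n + 1)]
    v = [v0] * (n + 1)                                  # initial guess
    for _ in range(iters):
        new, acc = [v0], 0.0
        for i in range(1, n + 1):
            acc += dt * (f(ts[i - 1]) + v[i - 1]) ** 3
            new.append(v0 - acc)
        v = new
    return v

# with f constant, u = f + v solves u' = -u^3, whence u(t) = u(0)/sqrt(1 + 2 u(0)^2 t)
f0, v0, T = 0.1, 0.5, 0.5
v = picard(v0, lambda t: f0, T)
exact = (f0 + v0) / math.sqrt(1 + 2 * (f0 + v0) ** 2 * T) - f0
print(v[-1], exact)
```

The contraction property only holds for T small in terms of the size of the data, matching the local-in-time nature of Proposition 4.2.19.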

4.2.5 Global Existence

In this section, we complete the proof of Theorem 4.2.3. We seek v in the form \(v(t) = S(t)(v_0, v_1) + w(t)\). Then w solves

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta)w+(S(t)(v_0, v_1)+w)^3=0,\quad w\mid_{t=0} =0, \quad \partial_t w\mid_{t=0} = 0. \end{aligned} $$
(4.2.24)

Thanks to Theorems 4.2.15 and 4.2.18, we have that μ-almost surely,

$$\displaystyle \begin{aligned} \begin{aligned} g(t) &= \|S(t)(v_0, v_1)\|{}^3_{L^6( \mathbb T^3)}\in L^1_{\text{loc}} ( \mathbb R_t), \\ f(t) &= \|S(t)(v_0, v_1)\|{}_{W^{\sigma,q}( \mathbb T^3)}\in L^1_{\text{loc}} ( \mathbb R_t), \end{aligned} \end{aligned} $$
(4.2.25)

where σ ∈ (0, s] and q < ∞ satisfy σ > 3∕q. The local existence for (4.2.24) follows from Proposition 4.2.19 and the first estimate in (4.2.25). We also deduce from Proposition 4.2.19 that, as long as the \(H^1\times L^2\) norm of \((w, \partial _t w)\) remains bounded, the solution w of (4.2.24) exists. Set

$$\displaystyle \begin{aligned}{\mathcal E}(w(t)) = \frac 1 2 \int_{\mathbb T^3}\big( (\partial_t w)^2 + |\nabla_x w|{}^2 + \frac 1 2 w^4\big) dx\,. \end{aligned}$$

Using the equation solved by w, we now compute

$$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{d} {dt} {\mathcal E}(w(t)) &\displaystyle = &\displaystyle \int_{\mathbb T^3}\big( \partial_t w \partial_t^2 w + \nabla_x \partial _t w \cdot \nabla_x w + \partial_t w\, w ^3 \big)dx \\ &\displaystyle = &\displaystyle \int_{\mathbb T^3} \partial_t w \Bigl(\partial_t^2 w -\Delta w + w^3\Bigr) dx \\ &\displaystyle = &\displaystyle \int_{\mathbb T^3} \partial_t w \Bigl(w^3- (S(t)(v_0, v_1)+ w)^3\Bigr) dx. \end{array} \end{aligned} $$

Now, using the Cauchy-Schwarz inequality, the Hölder inequalities and the Sobolev embedding \(W^{\sigma ,q}(\mathbb T^3)\subset C^0(\mathbb T^3)\), we can write

$$\displaystyle \begin{aligned} \begin{aligned} \frac{d} {dt} {\mathcal E}(w(t)) &\leq C \big({\mathcal E}(w(t))\big)^{1/2} \|w^3- (S(t)(v_0, v_1)+ w)^3\|{}_{L^2( \mathbb T^3)} \\ &\leq C \big({\mathcal E}(w(t))\big)^{1/2}\\ &\quad \times \Bigl(\|S(t)(v_0, v_1)\|{}^3_{L^6( \mathbb T^3)} + \|S(t)(v_0, v_1)\|{}_{L^\infty( \mathbb T^3)} \|w^2\|{}_{L^2( \mathbb T^3)} \Bigr)\\ &\leq C \big({\mathcal E}(w(t))\big)^{1/2}\\ &\quad \times \Bigl(\|S(t)(v_0, v_1)\|{}^3_{L^6( \mathbb T^3)} + \|S(t)(v_0, v_1)\|{}_{W^{\sigma,q}( \mathbb T^3)} \|w^2\|{}_{L^2( \mathbb T^3)} \Bigr)\\ &\leq C \big({\mathcal E}(w(t))\big)^{1/2} \Big(g(t) + f(t) \big({\mathcal E}(w(t))\big)^{1/2} \Big) \end{aligned} \end{aligned}$$

and consequently, by the Gronwall inequality and (4.2.25), w exists globally in time.

This completes the proof of the existence and uniqueness part of Theorem 4.2.3. Let us now turn to the construction of an invariant set. Define the sets

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Theta &\displaystyle \equiv&\displaystyle \big\{ (v_0,v_1)\in {\mathcal H}^s\,:\,\|S(t)(v_0, v_1)\|{}^3_{L^6( \mathbb T^3)}\in L^1_{\text{loc}} ( \mathbb R_t), \\ &\displaystyle &\displaystyle \quad \|S(t)(v_0, v_1)\|{}_{W^{\sigma,q}( \mathbb T^3)}\in L^1_{\text{loc}} ( \mathbb R_t) \big\} \end{array} \end{aligned} $$

and \( \varSigma \equiv \Theta +{\mathcal H}^1. \) Then Σ is of full μ measure for every \(\mu \in {\mathcal M}^s\), since so is Θ. We have the following proposition.

Proposition 4.2.20

Assume that s > 0 and let us fix \(\mu \in {\mathcal M}^s\). Then, for every (v 0, v 1) ∈ Σ, there exists a unique global solution

$$\displaystyle \begin{aligned}(v(t),\partial_t v(t)) \in (S(t)(v_0, v_1),\partial_t S(t)(v_0, v_1))+ C(\mathbb R; H^1(\mathbb T^3) \times L^2(\mathbb T^3))\end{aligned}$$

of the nonlinear wave equation

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta)v+v^3=0,\quad (v(0,x),\partial_{t} v(0,x))=(v_0(x),v_1(x))\, . \end{aligned} $$
(4.2.26)

Moreover, for every \(t\in \mathbb R\), \((v(t), \partial _t v(t))\in \varSigma \), and thus by the time reversibility Σ is invariant under the flow of (4.2.26).

Proof

By assumption, we can write \((v_0,v_1)=(\tilde {v}_0,\tilde {v}_1)+(w_0,w_1)\) with \((\tilde {v}_0,\tilde {v}_1)\in \Theta \) and \((w_0,w_1)\in {\mathcal H}^1\). We seek v in the form

$$\displaystyle \begin{aligned}v(t)= S(t) (\tilde{v}_0, \tilde{v}_1) +w(t)\,. \end{aligned}$$

Then w solves

$$\displaystyle \begin{aligned} (\partial_t^2-\Delta_{\mathbb T^3})w+(S(t)(\tilde{v}_0, \tilde{v}_1)+w)^3=0,\quad w\mid_{t=0} =w_0, \quad \partial_t w\mid_{t=0} = w_1\,. \end{aligned}$$

Now, exactly as before, we obtain that

$$\displaystyle \begin{aligned}\frac{d} {dt} {\mathcal E}(w(t)) \leq C \big({\mathcal E}(w(t))\big)^{1/2} \Big(g(t) + f(t) \big({\mathcal E}(w(t))\big)^{1/2} \Big), \end{aligned}$$

where

$$\displaystyle \begin{aligned}g(t)= \|S(t)(\tilde{v}_0, \tilde{v}_1)\|{}^3_{L^6( \mathbb T^3)},\quad f(t) = \|S(t)(\tilde{v}_0, \tilde{v}_1)\|{}_{W^{\sigma,q}( \mathbb T^3)}. \end{aligned}$$

Therefore thanks to the Gronwall lemma, using that \({\mathcal E}(w(0))\) is well defined, we obtain the global existence for w. Thus the solution of (4.2.26) can be written as

$$\displaystyle \begin{aligned}v(t)= S(t) (\tilde{v}_0, \tilde{v}_1) +w(t),\quad (w,\partial_t w)\in C(\mathbb R;{\mathcal H}^1). \end{aligned}$$

Coming back to the definition of Θ, we observe that

$$\displaystyle \begin{aligned}S(t)(\Theta)=\Theta. \end{aligned}$$

Thus \((v(t), \partial _t v(t))\in \varSigma \).

This completes the proof of Proposition 4.2.20 and, with it, that of Theorem 4.2.3. □

4.2.6 Unique Limits of Smooth Solutions

In this section, we present the proofs of Theorems 4.2.6 and 4.2.7.

Proof of Theorem 4.2.6

Thanks to Theorem 4.2.3, the Sobolev embeddings and Theorem 4.2.15 we obtain that

$$\displaystyle \begin{aligned}(v,\partial_t v)\in C(\mathbb R;{\mathcal H}^s(\mathbb T^3)) \end{aligned}$$

and

$$\displaystyle \begin{aligned}v\in L^{p^\star}_{loc}(\mathbb R;L^{q^{\star}}(\mathbb T^3))\,, \end{aligned}$$

where \((p^\star , q^\star )\) are as in Corollary 4.1.23 (observe that \(q^\star \leq 6\)). Once we have this information, the proof of Theorem 4.2.6 follows from Theorem 4.1.22 (here we use the assumption s > 1∕2) and Corollary 4.1.23. Indeed, let us fix T > 0 and let Λ be such that

$$\displaystyle \begin{aligned}\sup_{0\leq t\leq T}\|(v(t),\partial_t v(t))\|{}_{{\mathcal H}^s(\mathbb T^3)}<\Lambda-1\,. \end{aligned}$$

Let τ > 0 be the time of existence associated with Λ in Theorem 4.1.22. We now cover the interval [0, T] with intervals of length τ and, using iteratively the continuous dependence statement of Theorem 4.1.22 and the uniqueness statement given by Corollary 4.1.23, we obtain that

$$\displaystyle \begin{aligned}\lim_{n\rightarrow\infty}\|(v_{n}(t)-v(t),\partial_t v_{n}(t)-\partial_t v(t))\|{}_{L^\infty([0,T]; {\mathcal H}^s(\mathbb T^3))}=0\,. \end{aligned}$$

This completes the proof of Theorem 4.2.6. □

We now turn to the proof of Theorem 4.2.7 which is slightly more delicate.

Proof of Theorem 4.2.7

For (v 0, v 1) ∈ Σ we decompose the solution as

$$\displaystyle \begin{aligned}v(t)=S(t)(v_0,v_1)+w(t),\quad w(0)=0,\, \partial_t w(0)=0. \end{aligned}$$

Similarly, we decompose the solutions issued from (v 0,n, v 1,n) as

$$\displaystyle \begin{aligned}v_n(t)=S(t)(v_{0,n},v_{1,n})+w_n(t),\quad w_n(0)=0,\, \partial_t w_n(0)=0. \end{aligned}$$

Using the energy estimates of the previous section, we obtain that

$$\displaystyle \begin{aligned}\frac{d} {dt} {\mathcal E}(w_n(t)) \leq C \big({\mathcal E}(w_n(t))\big)^{1/2} \Big(g_n(t) + f_n(t) \big({\mathcal E}(w_n(t))\big)^{1/2} \Big), \end{aligned}$$

where

$$\displaystyle \begin{aligned}g_n(t) = \|S(t)(v_{0,n}, v_{1,n})\|{}^3_{L^6( \mathbb T^3)},\quad f_n(t) = \|S(t)(v_{0,n}, v_{1,n})\|{}_{W^{\sigma,q}( \mathbb T^3)}. \end{aligned}$$

Therefore

$$\displaystyle \begin{aligned}({\mathcal E}(w_n(t)))^{1/2}\leq C \big( \int_{0}^t g_n(\tau)d\tau \big) e^{\int_0^t f_{n}(\tau) d\tau} \,. \end{aligned}$$

Using that

$$\displaystyle \begin{aligned} S(t)(v_{0,n}, v_{1,n})=\rho_n \star \big(S(t)(v_0,v_1)\big), \end{aligned} $$
(4.2.27)

and the fact that (v 0, v 1) ∈ Σ, we obtain that

$$\displaystyle \begin{aligned}\lim_{n\rightarrow\infty} \int_0^t g_n(\tau)d\tau=\int_0^t g(\tau)d\tau,\quad \lim_{n\rightarrow\infty} \int_0^t f_n(\tau)d\tau= \int_0^t f(\tau)d\tau, \end{aligned}$$

where g(t) and f(t) are defined in (4.2.25). Therefore, we obtain that for every T > 0 there is C > 0 such that for every n,

$$\displaystyle \begin{aligned} \sup_{0\leq t\leq T}\|(w_n(t),\partial_t w_n(t))\|{}_{{\mathcal H}^1(\mathbb T^3)}\leq C. \end{aligned} $$
(4.2.28)

Next, we observe that w and \(w_n\) solve the equations

$$\displaystyle \begin{aligned}(\partial_t^2-\Delta)w+(S(t)(v_0,v_1)+w)^3=0 \end{aligned} $$

and

$$\displaystyle \begin{aligned}(\partial_t^2-\Delta)w_n+(S(t)(v_{0,n},v_{1,n})+w_n)^3=0. \end{aligned} $$

Therefore

$$\displaystyle \begin{aligned}(\partial_t^2-\Delta)(w-w_n)=-\big((S(t)(v_0,v_1)+w)^3-(S(t)(v_{0,n},v_{1,n})+w_n)^3\big). \end{aligned} $$

We multiply the last equation by \(\partial _t(w-w_n)\) and, using the Sobolev embedding \(H^1(\mathbb T^3)\subset L^6(\mathbb T^3)\) and the Hölder inequality, we arrive at the bound

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \frac{d}{dt}\|(w-w_n,\partial_ t w -\partial_t w_n) \|{}_{{\mathcal H}^1(\mathbb T^3)}\\ &\displaystyle &\displaystyle \quad \leq C\big( \|S(t)(v_0-v_{0,n}, v_1-v_{1,n})\|{}_{L^6(\mathbb T^3)}+\|w-w_n\|{}_{H^1(\mathbb T^3)} \big)\\ &\displaystyle &\displaystyle \qquad \times \Big( \|S(t)(v_{0}, v_{1})\|{}^2_{L^6(\mathbb T^3)}+ \|S(t)(v_{0,n}, v_{1,n})\|{}^2_{L^6(\mathbb T^3)}\\ &\displaystyle &\displaystyle \qquad \qquad + \|w\|{}^2_{H^1(\mathbb T^3)}+\|w_n\|{}^2_{H^1(\mathbb T^3)} \Big). \end{array} \end{aligned} $$

Using (4.2.28) and the properties of the solutions obtained in Theorem 4.2.3, we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \frac{d}{dt} \|(w-w_n,\partial_ t w -\partial_t w_n) \|{}_{{\mathcal H}^1(\mathbb T^3)}\\ &\displaystyle &\displaystyle \quad \leq C\big( \|S(t)(v_0-v_{0,n}, v_1-v_{1,n})\|{}_{L^6(\mathbb T^3)}+\|w-w_n\|{}_{H^1(\mathbb T^3)} \big)\\ &\displaystyle &\displaystyle \qquad \times \Big( \|S(t)(v_{0}, v_{1})\|{}^2_{L^6(\mathbb T^3)} + \|S(t)(v_{0,n}, v_{1,n})\|{}^2_{L^6(\mathbb T^3)} + C \Big). \end{array} \end{aligned} $$

The last inequality implies the following bound for t ∈ [0, T],

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle \| (w(t)-w_n(t),\partial_ t w(t) -\partial_t w_n(t)) \|{}_{{\mathcal H}^1}\\ &\displaystyle &\displaystyle \quad \leq C \int_{0}^t \|S(\tau)(v_0-v_{0,n}, v_1-v_{1,n})\|{}_{L^6}\\ &\displaystyle &\displaystyle \qquad \times\big(\|S(\tau)(v_{0}, v_{1})\|{}^2_{L^6}+\|S(\tau)(v_{0,n}, v_{1,n})\|{}^2_{L^6}+C\big) d\tau\\ &\displaystyle &\displaystyle \qquad \times\exp\left( \int_{0}^t(\|S(\tau)(v_{0}, v_{1})\|{}^2_{L^6}+\|S(\tau)(v_{0,n}, v_{1,n})\|{}^2_{L^6}+C)d\tau\right).\qquad \quad \end{array} \end{aligned} $$
(4.2.29)

More precisely, we used that if x(t) ≥ 0 satisfies the differential inequality

$$\displaystyle \begin{aligned}\dot{x}(t)\leq C z(t)(y(t)+x(t)),\quad x(0)=0, \end{aligned}$$

for some z(t) ≥ 0 and y(t) ≥ 0 then

$$\displaystyle \begin{aligned}x(t)\leq C\int_{0}^t y(\tau)z(\tau) d\tau \exp\big(\int_{0}^t z(\tau)d\tau\big)\,. \end{aligned}$$
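This Gronwall-type bound can be sanity checked numerically by integrating the saturated equation \(\dot {x} = Cz(t)(y(t)+x)\), whose solution dominates any x satisfying the inequality, and comparing with the closed-form bound (we take C = 1, so that the constant appearing in the exponential is immaterial). A Python sketch with illustrative choices of y and z:

```python
import math

def integrate(C, z, y, T, n=200000):
    # forward Euler for x' = C * z(t) * (y(t) + x), x(0) = 0
    dt, x, t = T / n, 0.0, 0.0
    for _ in range(n):
        x += dt * C * z(t) * (y(t) + x)
        t += dt
    return x

C, T = 1.0, 2.0
z = lambda t: 1.0
y = lambda t: 1.0
x_T = integrate(C, z, y, T)        # for these choices the exact solution is e^T - 1
bound = C * T * math.exp(T)        # C * int_0^T y z dtau * exp(int_0^T z dtau)
print(x_T, bound)
```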

Coming back to (4.2.29) and using the Hölder inequality, we get for t ∈ [0, T],

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle \|(w(t)-w_n(t),\partial_ t w(t) -\partial_t w_n(t))\|{}_{{\mathcal H}^1}\\ &\displaystyle &\displaystyle \quad \leq C \|S(t)(v_0-v_{0,n}, v_1-v_{1,n})\|{}_{L^2_{T}L^6}\\ &\displaystyle &\displaystyle \qquad \times\big( \|S(t)(v_{0}, v_{1})\|{}^2_{L^4_T L^6}+\|S(t)(v_{0,n}, v_{1,n})\|{}^2_{L^4_TL^6}+C\big)\\ &\displaystyle &\displaystyle \qquad \times\exp\left( \int_{0}^t (\|S(\tau)(v_{0}, v_{1})\|{}^2_{L^6} + \|S(\tau)(v_{0,n}, v_{1,n})\|{}^2_{L^6} + C)d\tau \right).\qquad \quad \end{array} \end{aligned} $$
(4.2.30)

Recalling (4.2.27), we obtain that for \(1\leq p<\infty \),

$$\displaystyle \begin{aligned}\lim_{n\rightarrow\infty}\int_{0}^T \|S(\tau)(v_0-v_{0,n}, v_1-v_{1,n})\|{}^p_{L^6(\mathbb T^3)}d\tau=0. \end{aligned}$$

Therefore (4.2.30) implies that

$$\displaystyle \begin{aligned}\lim_{n\rightarrow\infty} \|(w(t)-w_n(t),\partial_ t w(t) -\partial_t w_n(t)) \|{}_{ L^\infty([0,T]; {\mathcal H}^1(\mathbb T^3))}=0\,. \end{aligned}$$

Recall that

$$\displaystyle \begin{aligned}v(t)=S(t)(v_0,v_1)+w(t),\quad v_n(t)=S(t)(v_{0,n},v_{1,n})+w_n(t). \end{aligned}$$

Using once again (4.2.27) and

$$\displaystyle \begin{aligned}\partial_t S(t)(v_{0,n}, v_{1,n})=\rho_n \star \big(\partial_t S(t)(v_0,v_1)\big) \end{aligned}$$

we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \lim_{n\rightarrow\infty} \| ( S(t)(v_0,v_1)-S(t)(v_{0,n},v_{1,n}), \\ &\displaystyle &\displaystyle \quad \partial_t S(t)(v_0,v_1)- \partial_t S(t)(v_{0,n},v_{1,n}) ) \|{}_{L^\infty([0,T]; {\mathcal H}^s(\mathbb T^3))}=0 \end{array} \end{aligned} $$

and consequently

$$\displaystyle \begin{aligned}\lim_{n\rightarrow\infty} \|(v(t)-v_n(t),\partial_ t v(t) -\partial_t v_n(t)) \|{}_{L^\infty([0,T]; {\mathcal H}^s(\mathbb T^3))}=0\,. \end{aligned}$$

This completes the proof of Theorem 4.2.7. □

Remark 4.2.21

In the proof of Theorem 4.2.7, we essentially used that the regularisation by convolution works equally well in \(H^s\) and \(L^p\) (p < ∞) and that it commutes with Fourier multipliers such as the free evolution S(t). Any other regularisation respecting these two properties would produce smooth solutions converging to the singular dynamics constructed in Theorem 4.2.3.

4.2.7 Conditioned Large Deviation Bounds

In this section, we prove conditioned large deviation bounds which are the main tool in the proof of Theorem 4.2.10.

Proposition 4.2.22

Let \(\mu \in {\mathcal M}^s\), s ∈ (0, 1), and suppose that the real random variable with distribution θ involved in the definition of μ is symmetric. Then for \(\delta > 1+ \frac { 1} {p_1} \), \(2 \leq p_1 < \infty \) and \(2 \leq p_2 \leq \infty \) there exist positive constants c, C such that for every positive ε, λ, Λ and A,

$$\displaystyle \begin{aligned} &\mu\otimes\mu\Big( ((v_0,v_1),(v^{\prime}_0,v^{\prime}_1))\in {\mathcal H}^s\times {\mathcal H}^s\,:\,\ \\ &\qquad \qquad \|\langle t \rangle ^{- \delta} S(t) (v_0- v^{\prime}_0,v_1- v^{\prime}_1)\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))}> \lambda \\ &\qquad \qquad {\,\mathit{\text{ or }}} \|\langle t \rangle ^{- \delta} S(t) (v_0+ v^{\prime}_0,v_1+ v^{\prime}_1)\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))}\\ &\qquad \qquad > \Lambda \Big\vert \|(v_0- v^{\prime}_0,v_1- v^{\prime}_1)\|{}_{\mathcal{H}^s(\mathbb T^3)}\leq \varepsilon \\ &\qquad \qquad \mathrm{\,and\,} \|(v_0+ v^{\prime}_0,v_1+ v^{\prime}_1)\|{}_{\mathcal{H}^s(\mathbb T^3)}\leq A \Big) \leq C\Big(e^{-c\frac{\lambda^2}{\varepsilon^2}}+ e^{-c\frac{\Lambda^2}{A^2}}\Big). \end{aligned} $$
(4.2.31)

We shall make use of the following elementary lemmas.

Lemma 4.2.23

For j = 1, 2, let \(E_j\) be two Banach spaces endowed with measures \(\mu _j\). Let \(f: E_1\times E_2\rightarrow \mathbb C\) and \(g_1,g_2:E_2\rightarrow \mathbb C\) be three measurable functions. Then

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \mu_1\otimes\mu_2 \Big( (x_1,x_2)\in E_1\times E_2\,:\, |f(x_1,x_2)|> \lambda\Big\vert\, |g_1(x_2)|\leq \varepsilon,\,\, |g_2(x_2)|\leq A \Big) \\ &\displaystyle &\displaystyle \quad \leq \sup_{x_2\in E_2, |g_1(x_2)|\leq \varepsilon, |g_2(x_2)|\leq A } \mu_1(x_1\in E_1\,:\, |f(x_1,x_2)|>\lambda)\,, \end{array} \end{aligned} $$

where by \(\sup \) we mean the essential supremum.

Lemma 4.2.24

Let \(g_1\) and \(g_2\) be two independent identically distributed real random variables with symmetric distribution. Then \(g_1 \pm g_2\) have symmetric distributions. Moreover, if h is a Bernoulli random variable independent of \(g_1\), then \(hg_1\) has the same distribution as \(g_1\).

Proof of Proposition 4.2.22

Define

$$\displaystyle \begin{aligned}{\mathcal E}\equiv \mathbb R\times \mathbb R^{\mathbb Z^3_{\star}}\times \mathbb R^{\mathbb Z^3_{\star}}, \end{aligned}$$

equipped with the natural Banach space structure coming from the \(\ell ^\infty \) norm. We endow \({\mathcal E}\) with a probability measure \(\mu _0\) defined via the map

$$\displaystyle \begin{aligned}\omega\mapsto \Big( k_{0}(\omega), \big(l_{n}(\omega)\big)_{n\in\mathbb Z^3_{\star}}, \big(h_{n}(\omega)\big)_{n\in\mathbb Z^3_{\star}} \Big) , \end{aligned}$$

where \(\big (k_{0}, (l_{n})_{n\in \mathbb Z^3_{\star }}, (h_{n})_{n\in \mathbb Z^3_{\star }}\big )\) is a system of independent Bernoulli variables.

For \(h=\big (x,(y_n)_{n\in \mathbb Z^3_{\star }},(z_n)_{n\in \mathbb Z^3_{\star }}\big )\in {\mathcal E}\) and

$$\displaystyle \begin{aligned}u(x)=a+\sum_{n\in\mathbb Z^3_{\star}}\Big(b_{n}\cos{}(n\cdot x)+c_{n}\sin{}(n\cdot x)\Big), \end{aligned}$$

we define the operation ⊙ by

$$\displaystyle \begin{aligned}h\odot u\equiv ax+\sum_{n\in\mathbb Z^3_{\star}}\Big(b_{n} y_{n}\cos{}(n\cdot x)+c_{n}z_{n}\sin{}(n\cdot x)\Big). \end{aligned}$$

Let us first evaluate the quantity

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle \mu\otimes\mu\Big( ((v_0,v_1),(v^{\prime}_0,v^{\prime}_1))\in {\mathcal H}^s\times {\mathcal H}^s\,:\,\ \\ &\displaystyle &\displaystyle \qquad \qquad \|\langle t \rangle ^{- \delta} S(t) (v_0- v^{\prime}_0,v_1- v^{\prime}_1)\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))}> \lambda \Big\vert \\ &\displaystyle &\displaystyle \qquad \qquad \|(v_0- v^{\prime}_0,v_1- v^{\prime}_1)\|{}_{\mathcal{H}^s(\mathbb T^3)}\leq \varepsilon\\ &\displaystyle &\displaystyle \qquad \qquad \mathrm{\,and\,} \|(v_0+ v^{\prime}_0,v_1+ v^{\prime}_1)\|{}_{\mathcal{H}^s(\mathbb T^3)}\leq A \Big). \end{array} \end{aligned} $$
(4.2.32)

Observe that, thanks to Lemma 4.2.24, (4.2.32) equals

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle \mu\otimes\mu\otimes\mu_0\otimes\mu_0\Big( ((v_0,v_1),(v^{\prime}_0,v^{\prime}_1), (h_0,h_1))\in {\mathcal H}^s\times {\mathcal H}^s\times {\mathcal E}\times {\mathcal E}\,:\,\ \\ &\displaystyle &\displaystyle \quad \|\langle t \rangle ^{- \delta} S(t) (h_0\odot (v_0-v^{\prime}_0),h_1\odot (v_1- v^{\prime}_1))\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))}> \lambda \Big\vert \\ &\displaystyle &\displaystyle \quad \|(h_0\odot(v_0- v^{\prime}_0),h_1\odot(v_1- v^{\prime}_1))\|{}_{\mathcal{H}^s(\mathbb T^3)}\leq \varepsilon \\ &\displaystyle &\displaystyle \quad \mathrm{\,and\,} \|(h_0\odot(v_0+ v^{\prime}_0),h_1\odot(v_1+ v^{\prime}_1))\|{}_{\mathcal{H}^s(\mathbb T^3)}\leq A \Big). \end{array} \end{aligned} $$
(4.2.33)

Since the \(H^s(\mathbb T^3)\) norm of a function f depends only on the absolute value of its Fourier coefficients, we deduce that (4.2.33) equals

$$\displaystyle \begin{aligned} &\mu\otimes\mu\otimes\mu_0\otimes\mu_0\Big( ((v_0,v_1),(v^{\prime}_0,v^{\prime}_1), (h_0,h_1))\in {\mathcal H}^s\times {\mathcal H}^s\times {\mathcal E}\times {\mathcal E}\,:\,\ \\ &\quad \|\langle t \rangle ^{- \delta} S(t) (h_0\odot (v_0-v^{\prime}_0),h_1\odot (v_1- v^{\prime}_1))\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))}> \lambda \Big\vert \\ &\quad \|(v_0- v^{\prime}_0,v_1- v^{\prime}_1)\|{}_{\mathcal{H}^s(\mathbb T^3)}\leq \varepsilon \mathrm{\,and\,} \|(v_0+ v^{\prime}_0,v_1+ v^{\prime}_1)\|{}_{\mathcal{H}^s(\mathbb T^3)}\leq A \Big). \end{aligned} $$
(4.2.34)

We now apply Lemma 4.2.23 with μ 1 = μ 0 ⊗ μ 0 and μ 2 = μ ⊗ μ to get that (4.2.34) is bounded by

$$\displaystyle \begin{aligned} &\sup_{ \|(v_0- v^{\prime}_0,v_1- v^{\prime}_1)\|{}_{\mathcal{H}^s(\mathbb T^3)}\leq \varepsilon} \mu_0\otimes\mu_0\Big( (h_0,h_1)\in {\mathcal E}\times {\mathcal E}\,:\,\ \\ &\quad \|\langle t \rangle ^{- \delta} S(t) (h_0\odot (v_0-v^{\prime}_0),h_1\odot (v_1- v^{\prime}_1))\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))}> \lambda \Big). \end{aligned} $$
(4.2.35)

We now apply Theorem 4.2.15 (with Bernoulli variables) to obtain that (4.2.32) is bounded by \( C\exp (-c\frac {\lambda ^2}{\varepsilon ^2}). \) A very similar argument gives that

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \mu\otimes\mu\Big( ((v_0,v_1),(v^{\prime}_0,v^{\prime}_1))\in {\mathcal H}^s\times {\mathcal H}^s\,:\,\ \\ &\displaystyle &\displaystyle \qquad \qquad \|\langle t \rangle ^{- \delta} S(t) (v_0+ v^{\prime}_0,v_1+ v^{\prime}_1)\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))}> \Lambda \Big\vert \\ &\displaystyle &\displaystyle \qquad \qquad \|(v_0- v^{\prime}_0,v_1- v^{\prime}_1)\|{}_{\mathcal{H}^s(\mathbb T^3)}\leq \varepsilon\\ &\displaystyle &\displaystyle \qquad \qquad \mathrm{\,and\,} \|(v_0+ v^{\prime}_0,v_1+ v^{\prime}_1)\|{}_{\mathcal{H}^s(\mathbb T^3)}\leq A \Big) \end{array} \end{aligned} $$

is bounded by \( C\exp (-c\frac {\Lambda ^2}{A^2}). \) This completes the proof of Proposition 4.2.22. □

4.2.8 End of the Proof of the Conditioned Continuous Dependence

In this section, we complete the proof of Theorem 4.2.10. According to (a variant of) Proposition 4.2.22, we have that for any

$$\displaystyle \begin{aligned}2\leq p_1<+\infty,\, 2\leq p_2 \leq + \infty,\, \delta > 1+ \frac 1 {p_1},\, \eta\in(0,1),\, \end{aligned}$$

one has

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \mu\otimes\mu\Big( (V_0,V_1)\in {\mathcal H}^s\times {\mathcal H}^s\,:\,\ \|\langle t \rangle ^{- \delta} S(t) (V_0- V_1)\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))}> \eta^{\frac{1}{2}} \\ &\displaystyle &\displaystyle \qquad \qquad {\,\text{ or }} \|\langle t \rangle ^{- \delta} S(t) (V_0 )\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))}> \log \log\log(\eta^{-1}) \\ &\displaystyle &\displaystyle \qquad \qquad {\,\text{ or }} \|\langle t \rangle ^{- \delta} S(t) (V_1 )\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))}> \log \log\log(\eta^{-1}) \Big\vert \\ &\displaystyle &\displaystyle \qquad \qquad \|V_0- V_1\|{}_{\mathcal{H}^s(\mathbb T^3)}< \eta \mathrm{\,\,\,and\,\,\,} \|V_j\|{}_{\mathcal{H}^s(\mathbb T^3)}\leq A,\,j=0,1 \Big) \longrightarrow 0, \end{array} \end{aligned} $$

as η → 0. Therefore, we can also suppose that

$$\displaystyle \begin{aligned} \|\langle t \rangle ^{- \delta} S(t) (V_0- V_1)\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))}\leq \eta^{\frac{1}{2}} \end{aligned} $$
(4.2.36)

and

$$\displaystyle \begin{aligned} \|\langle t \rangle ^{- \delta} S(t) (V_j )\|{}_{L^{p_1} ( \mathbb R_t ; L^{p_2}( \mathbb T^3))}\leq \log \log\log(\eta^{-1}),\, j=0,1, \end{aligned} $$
(4.2.37)

when we estimate the needed conditional probability.

We therefore need to estimate the difference of two solutions under the assumptions (4.2.36) and (4.2.37), in the regime η ≪ 1. Let

$$\displaystyle \begin{aligned}v_{j}(t)=S(t)(V_j)+w_{j}(t), \quad j=0,1 \end{aligned}$$

be two solutions of the cubic wave equation with data \(V_j\). We thus have

$$\displaystyle \begin{aligned}(w_{j}(0),\partial_{t}w_{j}(0))=(0,0). \end{aligned}$$

Applying the energy estimate, performed several times in this section, for j = 0, 1, we get the bound

$$\displaystyle \begin{aligned}\frac{d}{dt}\mathcal{E}^{1/2}(w_{j}(t))\leq C\Big( \|S(t)(V_j)\|{}^3_{L^{6}(\mathbb T^3)} + \|S(t)(V_j)\|{}_{L^{\infty}(\mathbb T^3)}\mathcal{E}^{1/2}(w_{j}(t)) \Big), \end{aligned}$$

and therefore, under the assumptions (4.2.36) and (4.2.37), for t ∈ [0, T] one has

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \mathcal{E}^{1/2}(w_{j}(t)) &\displaystyle \leq &\displaystyle C_{T}\,e^{C_T\log \log\log(\eta^{-1})} (\log\log\log(\eta^{-1}))^3 \\ &\displaystyle \leq &\displaystyle C_{T}[\log(\eta^{-1})]^{\frac{1}{20}}, \end{array} \end{aligned} $$
(4.2.38)

where here and in the sequel we denote by \(C_T\) different constants depending only on T (but independent of η).

We next estimate the difference \(w_0-w_1\). Using the equations solved by \(w_0\) and \(w_1\), we infer that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle \frac{d}{dt}\|w_0(t,\cdot)-w_1(t,\cdot)\|{}^2_{{\mathcal H}^1(\mathbb T^3)}\\ &\displaystyle &\displaystyle \quad \leq 2 \Big| \int_{\mathbb T^3}\partial_{t}(w_0(t,x)-w_1(t,x))(\partial_t^2-\Delta)(w_0(t,x)-w_1(t,x))dx \Big|\\ &\displaystyle &\displaystyle \quad \leq C \|w_0(t,\cdot)-w_1(t,\cdot)\|{}_{{\mathcal H}^1(\mathbb T^3)} \\ &\displaystyle &\displaystyle \qquad \|(w_0+S(t)(V_0))^3-(w_1+S(t)(V_1))^3\|{}_{L^2(\mathbb T^3)}\,, \end{array} \end{aligned} $$
(4.2.39)

where for shortness we denote \(\|(u, \partial _t u)\|{ }_{{\mathcal H}^1}\) simply by \(\|u\|{ }_{{\mathcal H}^1}\).

Thanks to (4.2.39) and the Sobolev embedding \(H^1(\mathbb T^3)\subset L^6(\mathbb T^3)\), we get that

$$\displaystyle \begin{aligned}\frac{d}{dt}\|w_0(t,\cdot)-w_1(t,\cdot)\|{}_{{\mathcal H}^1(\mathbb T^3)} \end{aligned}$$

is bounded by

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle C\Big(\|w_0(t,\cdot)-w_1(t,\cdot)\|{}_{{\mathcal H}^1(\mathbb T^3)}+\|S(t)(V_0-V_1)\|{}_{L^6(\mathbb T^3)}\Big) \\ &\displaystyle &\displaystyle \quad \Big(\|w_0(t,\cdot)\|{}_{H^1(\mathbb T^3)}^2+\|w_1(t,\cdot)\|{}_{H^1(\mathbb T^3)}^2 \\ &\displaystyle &\displaystyle \qquad +\|S(t)(V_0)\|{}_{L^6(\mathbb T^3)}^2+\|S(t)(V_1)\|{}_{L^6(\mathbb T^3)}^2\Big). \end{array} \end{aligned} $$

Therefore, using (4.2.38) and the Gronwall lemma, under the assumptions (4.2.36) and (4.2.37), for t ∈ [0, T],

$$\displaystyle \begin{aligned} \begin{array}{rcl} \|w_0(t,\cdot)-w_1(t,\cdot)\|{}_{{\mathcal H}^1(\mathbb T^3)} &\displaystyle \leq &\displaystyle C_{T}\eta^{\frac{1}{2}}[\log(\eta^{-1})]^{\frac{1}{10}}\, e^{C_{T}[\log(\eta^{-1})]^{\frac{1}{10}}} \\ &\displaystyle \leq &\displaystyle C_{T}\eta^{\frac{1}{4}}\,. \end{array} \end{aligned} $$

In particular by the Sobolev embedding

$$\displaystyle \begin{aligned}\|w_0-w_1\|{}_{L^4([0,T]\times\mathbb T^3)}\leq C_{T}\eta^{\frac{1}{4}}, \end{aligned}$$

and therefore under the assumption (4.2.36),

$$\displaystyle \begin{aligned}\|v_0-v_1\|{}_{L^4([0,T]\times\mathbb T^3)}\leq C_{T}\eta^{\frac{1}{4}}\,. \end{aligned}$$
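The absorption of the logarithmic factors into the smaller power of η in the last step can be checked numerically: the exponential of a power of a logarithm grows slower than any power of η^{−1}, so the bound divided by η^{1∕4} tends to zero as η → 0. A minimal sketch (the value C = 5 is an arbitrary stand-in for C_T):

```python
import math

def ratio(eta, C=5.0):
    """Gronwall bound divided by eta**(1/4):
    eta**(1/2) * L**(1/10) * exp(C * L**(1/10)) / eta**(1/4),
    with L = log(1/eta).  Tends to 0 as eta -> 0."""
    L = math.log(1.0 / eta)
    return eta ** 0.25 * L ** 0.1 * math.exp(C * L ** 0.1)

samples = [1e-20, 1e-40, 1e-60, 1e-80]
values = [ratio(e) for e in samples]
```

For η = 10^{−20} the ratio is already small and it decays rapidly along the sample points, illustrating why the constant C_T can be absorbed for η small enough.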

In summary, we obtained that for a fixed ε > 0, the μ ⊗ μ measure of the set of pairs (V 0, V 1) such that

$$\displaystyle \begin{aligned}\|\Phi(t)(V_0)-\Phi(t)(V_1)\|{}_{X_{T}}>\varepsilon \end{aligned}$$

under the conditions (4.2.36), (4.2.37) and \(\|V_0-V_1\|{ }_{{\mathcal H}^s}<\eta \) is zero, provided η > 0 is sufficiently small. Therefore, the left hand side of (4.2.7) tends to zero as η → 0. This ends the proof of the first part of Theorem 4.2.10.

For the second part of the proof of Theorem 4.2.10, we argue by contradiction. Suppose that for every ε > 0 there exist η > 0 and a set Σ of full μ ⊗ μ measure such that

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \forall\, (V,V')\in\varSigma\cap (B_{A}\times B_{A}),\, \|V-V'\|{}_{{\mathcal H}^s}<\eta\\ &\displaystyle &\displaystyle \quad \implies \| \Phi(t) ( V) - \Phi(t) ( V') \|{}_{X_T} <\varepsilon. \end{array} \end{aligned} $$

Let us apply the previous statement with ε = 1∕n, n = 1, 2, 3, …, which produces full measure sets Σ(n). Set

$$\displaystyle \begin{aligned}\varSigma_1\equiv\bigcap_{n=1}^{\infty}\varSigma(n). \end{aligned}$$

Then Σ 1 is of full μ ⊗ μ measure and we have that

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle \forall\,\varepsilon>0,\, \exists\,\eta>0,\,\,\, \forall\, (V,V')\in\varSigma_1\cap (B_{A}\times B_{A}),\, \\ &\displaystyle &\displaystyle \quad \|V-V'\|{}_{{\mathcal H}^s}<\eta \implies\,\, \| \Phi(t) (V) - \Phi(t) ( V') \|{}_{X_T} <\varepsilon. \end{array} \end{aligned} $$
(4.2.40)

Next for \(V\in {\mathcal H}^s\) we define \({\mathcal A}(V)\subset {\mathcal H}^s\) by

$$\displaystyle \begin{aligned}{\mathcal A}(V)\equiv \{V'\in {\mathcal H}^s\,:\, (V,V')\in \varSigma_1\}. \end{aligned}$$

According to the Fubini theorem, there exists a set \({\mathcal E}\subset {\mathcal H}^s\) of full μ measure such that for every \(V\in {\mathcal E}\) the set \({\mathcal A}(V)\) is of full μ measure.

We are going to extend Φ(t) to a uniformly continuous map on B A. For that purpose, we first extend Φ(t) to a uniformly continuous map on a dense subset of B A. Let \((V_j)_{j\in \mathbb {N}}\) be a dense subset of B A for the \(\mathcal {H}^s\) topology. For \(j\in \mathbb {N}\), we can construct by induction a sequence (V j,n) such that

$$\displaystyle \begin{aligned} V_{j,n}\in B_{A}\cap {\mathcal E}\cap\, \bigcap_{m<n}{\mathcal A}(V_{j,m}) \cap\, \bigcap_{l<j, q\in\mathbb N}{\mathcal A}(V_{l,q}) ,\quad \|V_{j,n}-V_j\|{}_{{\mathcal H}^s}<1/n. \end{aligned}$$

Indeed, the induction assumption guarantees that the set

$$\displaystyle \begin{aligned}{\mathcal E}\cap\, \bigcap_{m<n}{\mathcal A}(V_{j,m}) \cap\, \bigcap_{l<j, q\in \mathbb N}{\mathcal A}(V_{l,q}) \end{aligned}$$

has measure 1 (as a countable intersection of sets of measure 1) and is consequently dense. Notice that by construction, we have

$$\displaystyle \begin{aligned} (V_{k,n}, V_{l,m}) \in \varSigma_1, \forall\, k< l, \forall \,n,m \in \mathbb{N}, \text{ and } \forall\, k=l, n<m. \end{aligned} $$
(4.2.41)

Using (4.2.41) for k = l, we obtain according to (4.2.40) that for any fixed k, the sequence \((\Phi (t)(V_{k,n}))_{n\in \mathbb {N}}\) is a Cauchy sequence in X T and we can define \(\overline {\Phi }(t)(V_k)\) as its limit. Using again (4.2.41), now for k ≠ l, we see according to (4.2.40) that the map \(\overline {\Phi }(t) \) is uniformly continuous on the dense set \(\{V_j\,:\,j\in \mathbb {N}\}\). Therefore \(\overline {\Phi }(t)\) can be extended by density to a uniformly continuous map on the whole of B A. Let us denote by \(\overline {\Phi (t)}\) this extension of Φ(t) to B A. We therefore have

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle \forall\,\varepsilon>0,\, \exists\,\eta>0,\,\,\, \forall\, V,V'\in B_{A},\, \\ &\displaystyle &\displaystyle \quad \|V-V'\|{}_{{\mathcal H}^s}<\eta \implies\,\, \| \overline{\Phi(t)} ( V) - \overline{\Phi(t)} ( V') \|{}_{X_T} <\varepsilon. \end{array} \end{aligned} $$
(4.2.42)

We have the following lemma.

Lemma 4.2.25

For \(V\in (C^{\infty }(\mathbb T^3)\times C^{\infty }(\mathbb T^3))\cap B_A\), we have that \(\overline {\Phi (t)}(V)=(u,u_t)\), where u is the unique classical solution on [0, T] of

$$\displaystyle \begin{aligned}(\partial_{t}^2-\Delta)u+u^3=0,\quad (u(0),\partial_{t}u(0))=V. \end{aligned}$$

Proof

Let us first show that the first component of

$$\displaystyle \begin{aligned}\overline{\Phi(t)}(V)\equiv(\overline{\Phi_1(t)}(V),\overline{\Phi_2(t)}(V)) \end{aligned}$$

is a solution of the cubic wave equation. Observe that by construction, necessarily \(\overline {\Phi _2(t)}(V)=\partial _t \overline {\Phi _1(t)}(V)\) in the distributional sense (in \({\mathcal D}'((0,T)\times \mathbb T^3)\)).

Again by construction, we have that

$$\displaystyle \begin{aligned}V=\lim_{n\rightarrow\infty}V_{n}\,, \end{aligned}$$

in \({\mathcal H}^s\) where V n are such that

$$\displaystyle \begin{aligned} (\partial_{t}^2-\Delta)(\Phi_1(t)(V_n))+(\Phi_1(t)(V_n))^3=0, \end{aligned} $$
(4.2.43)

with the notation Φ(t) = ( Φ1(t), Φ2(t)). In addition,

$$\displaystyle \begin{aligned}\overline{\Phi(t)}(V)=\lim_{n\rightarrow\infty}\Phi(t)(V_{n})\,, \end{aligned}$$

in X T. We therefore have that

$$\displaystyle \begin{aligned}(\partial_{t}^2-\Delta)(\overline{\Phi_1(t)}(V))=\lim_{n\rightarrow\infty}(\partial_{t}^2-\Delta)(\Phi_1(t)(V_n)), \end{aligned}$$

in the distributional sense. Moreover, coming back to the definition of X T, we also obtain that

$$\displaystyle \begin{aligned}(\overline{\Phi_1(t)}(V))^3=\lim_{n\rightarrow\infty}(\Phi_1(t)(V_n))^3, \end{aligned}$$

in \(L^{4/3}([0,T]\times \mathbb T^3)\). Therefore, passing to the limit n → ∞ in (4.2.43), we obtain that \(\overline {\Phi _1(t)}(V)\) solves the cubic wave equation (with data V ). Moreover, since \((\overline {\Phi _1(t)}(V))^3\in L^{4/3}([0,T]\times \mathbb T^3)\), it also satisfies the Duhamel formulation of the equation.

Let us denote by u(t), t ∈ [0, T] the classical solution of

$$\displaystyle \begin{aligned}(\partial_{t}^2-\Delta)u+u^3=0,\quad (u(0),\partial_{t}u(0))=V, \end{aligned}$$

defined by Theorem 4.1.2. Set \(v\equiv \overline {\Phi _1(t)}(V)\). Since our previous analysis has shown that v is a solution of the cubic wave equation, we have that

$$\displaystyle \begin{aligned} (\partial_{t}^2-\Delta)(u-v)+u^3-v^3=0,\quad ((u-v)(0),\partial_{t}(u-v)(0))=(0,0)\,. \end{aligned} $$
(4.2.44)

We now invoke the L^4 − L^{4∕3} nonhomogeneous estimates for the three dimensional wave equation. Namely, thanks to Theorem 4.1.21, there exists a constant C (depending on T) such that for every interval I ⊂ [0, T], the solution of the wave equation

$$\displaystyle \begin{aligned}(\partial_{t}^2-\Delta)w=F,\quad (w(0),\partial_{t}w(0))=(0,0) \end{aligned}$$

satisfies

$$\displaystyle \begin{aligned} \|w\|{}_{L^4(I\times\mathbb T^3)}\leq C\|F\|{}_{L^{4/3}(I\times\mathbb T^3)}\,. \end{aligned} $$
(4.2.45)

Applying (4.2.45) in the context of (4.2.44) together with the Hölder inequality yields the bound

$$\displaystyle \begin{aligned} \|u-v\|{}_{L^4(I\times\mathbb T^3)}\leq C \big(\|u\|{}_{L^{4}(I\times\mathbb T^3)}^2+\|v\|{}_{L^4(I\times\mathbb T^3)}^2\big) \|u-v\|{}_{L^{4}(I\times\mathbb T^3)}\,. \end{aligned} $$
(4.2.46)

Since \(u,v\in L^{4}([0,T]\times \mathbb T^3)\), we can find a partition of [0, T] into intervals I 1, …, I l such that

$$\displaystyle \begin{aligned}C\big(\|u\|{}_{L^{4}(I_j\times\mathbb T^3)}^2+\|v\|{}_{L^4(I_j\times\mathbb T^3)}^2\big)<\frac{1}{2},\quad j=1,\dots, l. \end{aligned}$$

We now apply (4.2.46) with I = I j, j = 1, …, l to conclude that u = v on I 1, then on I 2 and so on up to I l which gives that u = v on [0, T]. Thus \(u=\overline {\Phi _1(t)}(V)\) and therefore also \(\partial _t u=\overline {\Phi _2(t)}(V)\). This completes the proof of Lemma 4.2.25. □
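The interval-splitting step at the end of the proof is constructive: since the L⁴ norm over [0, T] is finite, its fourth power is an integrable function of time, and a greedy scan of [0, T] produces finitely many intervals on each of which the relevant quantity stays below 1∕2. A small sketch of this splitting (the grid, integrand and threshold are illustrative placeholders, not quantities from the proof):

```python
import math

def small_norm_partition(t, g, threshold=0.5):
    """Split [t[0], t[-1]] into consecutive intervals on which the
    integral of the nonnegative function g (sampled on the grid t)
    stays below `threshold`.  Returns index pairs (a, b)."""
    cuts = [0]
    acc = 0.0
    for k in range(len(t) - 1):
        piece = 0.5 * (g[k] + g[k + 1]) * (t[k + 1] - t[k])  # trapezoid rule
        if acc + piece >= threshold and k > cuts[-1]:
            cuts.append(k)   # close the current interval before exceeding
            acc = 0.0
        acc += piece
    cuts.append(len(t) - 1)
    return [(cuts[i], cuts[i + 1]) for i in range(len(cuts) - 1)]

# toy integrand standing in for the density of C(||u||^2 + ||v||^2)-type terms
t = [i / 2000 for i in range(2001)]
g = [10.0 * (1.0 + math.sin(2 * math.pi * x) ** 2) for x in t]
parts = small_norm_partition(t, g)
```

Each returned interval carries integral below the threshold, and the intervals tile [0, T], which is exactly what the contraction-on-each-piece argument needs.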

It remains now to apply Lemma 4.2.25 to the sequence of smooth data in the statement of Theorem 4.1.28 to get a contradiction with (4.2.42). More precisely, if (U n) is the sequence involved in the statement of Theorem 4.1.28, then Theorem 4.1.28 asserts that \(\overline {\Phi (t)}(U_n)\) tends to infinity in \(L^{\infty }([0,T];{\mathcal H}^s)\), while (4.2.42) implies that the same sequence tends to zero in the same space \(L^{\infty }([0,T];{\mathcal H}^s)\). This completes the proof of Theorem 4.2.10.

4.2.9 Extensions to More General Nonlinearities

In the remarkable work by Oh-Pocovnicu [35, 36] (based on the previous contributions [2, 38]) it is shown that the result of Theorem 4.2.3 can be extended to the energy critical equation

$$\displaystyle \begin{aligned}(\partial_t^2-\Delta)v+v^5=0\,. \end{aligned}$$

This equation is H 1 critical and the data is a typical element with respect to \(\mu \in {\mathcal M}^s\), s > 1∕2. We refer also to [31, 43] for extensions of Theorem 4.2.3 to nonlinearities between cubic and quintic.

4.2.10 Notes

For the case s = 0 and the proof of the quantitative bounds displayed by Theorem 4.2.4, we refer to [8]. For the proof of Proposition 4.2.2, we refer to [6, Appendix B] and [8, Appendix B2]. The probabilistic part of our analysis relies only on linear bounds such as Lemma 4.2.11. In other situations, multi-linear versions of these bounds are of importance (see [4, 13, 33]). The above mentioned work by Oh-Pocovnicu relies on a much more involved deterministic analysis (such as concentration compactness) and also on a significant extension of the probabilistic energy bound used in the proof of Theorem 4.2.3.

Our starting point and main motivation toward the probabilistic well-posedness results presented in this section was the ill-posedness result of Theorem 4.1.28 of the previous section. As already mentioned the method of proof has some similarities with the earlier work [16] or with the even earlier work of Bourgain [4] on the invariance of the Gibbs measure associated with the nonlinear Schrödinger equation

$$\displaystyle \begin{aligned} (i\partial_t+\Delta)u=|u|{}^2 u, \end{aligned} $$
(4.2.47)

posed on the two dimensional torus. The main purpose of [4] is to show the invariance of the Gibbs measure, and as a byproduct one gets the global existence and uniqueness of solutions of (4.2.47) with suitable random data belonging a.s. to \(H^{-\varepsilon }(\mathbb T^2)\) for every ε > 0 but missing a.s. \(L^2(\mathbb T^2)\). At the time of writing of [4], statements such as Theorem 4.1.28 or Theorem 4.1.33 were not known in the context of (4.2.47). In the recent work [34], the analogue of Theorems 4.1.28 and 4.1.33 in the context of (4.2.47) is obtained. Most likely, the analysis of [4] can be adapted in order to get the analogue of Theorem 4.2.7 in the context of (4.2.47). As a consequence, it appears that one can view from the same perspective (4.2.47) with data on the support of the Gibbs measure and the cubic defocusing wave equation with random data of supercritical regularity presented in these lectures. We plan to address this issue in a future work.

4.3 Random Data Global Well-Posedness with Data of Supercritical Regularity via Invariant Measures

In the previous section, we presented a method to construct global in time solutions for the cubic defocusing wave equation posed on the three dimensional torus with random data of supercritical regularity (\(H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)\), s ∈ (0, 1∕2)). These solutions are unique in a suitable sense and depend continuously (in a conditional sense) on the initial data. The method we used is based on a local in time result showing that even if the data is of supercritical regularity, we can find a local solution written as a “free evolution” (keeping the Sobolev regularity of the initial data) plus “a remainder of higher regularity”. The remainder is then regular enough to be treated by the deterministic methods for the equation. The globalisation was done by establishing an energy bound for the remainder in a probabilistic manner; here the energy conservation law is of course the key structure allowing us to perform the analysis. Moreover, we have shown that the problem is ill-posed with data of this supercritical regularity, which in turn implied the impossibility of seeing the constructed flow as the unique extension of the flow of regular solutions.

In this section, we present another method to construct global in time solutions for a defocusing wave equation with data of supercritical regularity. The construction of local solutions is based on the same principle as in the previous section, i.e. we shall again write the solution as a “free evolution” plus “a remainder of higher regularity”. However, the globalisation is done by a different argument (due to Bourgain [3, 4]) based on exploiting the invariance of the Gibbs measure associated with the equation. The Gibbs measure is constructed starting from the energy conservation law, and therefore this conservation law is again the key structure allowing us to perform the global in time analysis. This method of globalisation by invariant measures works only for a very particular choice of the initial data, and in this sense it is much less general than the method presented in the previous section. On the other hand, the method based on invariant measures gives strong macroscopic information about the constructed flow, namely precise information on the evolution of the measure along the flow. The method presented in the previous section gives essentially no information about the evolution in time, under the constructed flow, of the measures in \({\mathcal M}^s\). We shall come back to this issue in the next section.

Our model to present the method of globalisation via invariant measures will be the radial nonlinear wave equation posed on the unit ball of \(\mathbb R^3\), with Dirichlet boundary conditions. Let Θ be the unit ball of \(\mathbb R^3\). Consider the nonlinear wave equation with Dirichlet boundary condition posed on Θ,

$$\displaystyle \begin{aligned} (\partial_{t}^{2}-\mathbf{\Delta})w+|w|{}^{\alpha}w=0,\quad (w,\partial_t w)|{}_{t=0}=(f_1,f_2), \qquad \alpha>0, \end{aligned} $$
(4.3.1)

subject to Dirichlet boundary conditions

$$\displaystyle \begin{aligned}w\mid_{\mathbb R_t \times \partial \Theta} =0, \end{aligned}$$

with radial real valued initial data (f 1, f 2).

We now perform some algebraic manipulations on (4.3.1) allowing us to write it as a first order equation in t. Set \(u\equiv w+i{\sqrt { - \mathbf {\Delta }}^{-1}}{\partial _t w}\). Observe that \(\sqrt { - \mathbf {\Delta }}^{-1}\) is well-defined because 0 is not in the spectrum of the Dirichlet Laplacian. Then u solves the equation

$$\displaystyle \begin{aligned} (i\partial_{t}-\sqrt{ - \mathbf{\Delta}})u-\sqrt{ - \mathbf{\Delta}}^{-1}\big(|\mathrm{Re}(u)|{}^{\alpha}\mathrm{Re}(u)\big) =0,\quad u|{}_{t=0}=u_{0}, \end{aligned} $$
(4.3.2)

with \(u|{ }_{\mathbb R\times \partial \Theta }=0\), where \(u_0=f_1+i\sqrt { - \mathbf {\Delta }}^{-1}f_2\). We consider (4.3.2) for data in the (complex) Sobolev spaces \(H^s_{rad}(\Theta )\) of radial functions.

Equation (4.3.2) is (formally) a Hamiltonian equation on L 2( Θ) with Hamiltonian

$$\displaystyle \begin{aligned} \frac{1}{2}\|\sqrt{ - \mathbf{\Delta}} (u)\|{}_{L^2(\Theta)}^{2}+ \frac{1}{\alpha+2}\|\mathrm{Re}(u)\|{}_{L^{\alpha+2}(\Theta)}^{\alpha+2} \end{aligned} $$
(4.3.3)

which is (formally) conserved by the flow of (4.3.2).

Let us next discuss the measure describing the initial data set. For s < 1∕2, we define the measure μ on \(H^{s}_{rad}(\Theta )\) as the image measure under the map from a probability space \((\Omega ,{\mathcal A},p)\) to \(H^{s}_{rad}(\Theta )\) equipped with the Borel sigma algebra, defined by

$$\displaystyle \begin{aligned} \omega\longmapsto \sum_{n=1}^{\infty}\frac{ h_{n}(\omega)+i l_{n}(\omega) }{n\pi}e_{n}\,, \end{aligned} $$
(4.3.4)

where \(((h_{n},l_{n}))_{n=1}^{\infty }\) is a sequence of independent standard real Gaussian random variables. In (4.3.4), the functions \((e_n)_{n=1}^{\infty }\) are the radial eigenfunctions of the Dirichlet Laplacian on Θ, associated with eigenvalues (πn)2. The eigenfunctions e n have the following explicit form

$$\displaystyle \begin{aligned}e_n(r)=\frac{\sin{}(n\pi r)}{r},\quad 0\leq r\leq 1. \end{aligned}$$

They are the analogues of \(\cos {}(n\cdot x)\) and \(\sin {}(n\cdot x)\), \(n\in \mathbb Z^3\) used in the analysis on \(\mathbb T^3\) in the previous section. One has that \(\mu (H^{1/2}_{rad}(\Theta ))=0\). By the method described in the previous section one may show that (4.3.2) is ill-posed in \(H^s_{rad}(\Theta )\) for \(s<\frac {3}{2}-\frac {2}{\alpha }\). Therefore for α > 2 the map (4.3.4) describes functions of supercritical Sobolev regularity (i.e. \(H^s_{rad}(\Theta )\) with s smaller than \(\frac {3}{2}-\frac {2}{\alpha }\)). The situation is therefore similar to the analysis of the cubic defocusing wave equation on \(\mathbb T^3\) with data in H s × H s−1, s < 1∕2 considered in the previous section. As in the previous section, we can still get global existence and uniqueness for (4.3.2), almost surely with respect to μ.
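The random series (4.3.4) is easy to simulate. The eigenfunctions are orthogonal with \(\|e_n\|^2_{L^2(\Theta)} = 4\pi\int_0^1 \sin^2(n\pi r)\,dr = 2\pi\), so a truncated sample satisfies E‖u‖²_{L²(Θ)} = (4∕π)Σ_{n≤N} n^{−2}, which a Monte Carlo experiment reproduces. A sketch (the truncation N, sample count M and seed are arbitrary choices):

```python
import math
import random

random.seed(0)
N, M = 50, 4000                      # truncation and number of samples
# coefficient weights ||e_n||^2 / (n pi)^2 = 2 pi / (n pi)^2
w = [2 * math.pi / (n * math.pi) ** 2 for n in range(1, N + 1)]

def sample_norm_sq():
    # ||u||_{L^2}^2 = sum_n (h_n^2 + l_n^2) ||e_n||^2 / (n pi)^2, by orthogonality
    return sum(wn * (random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2) for wn in w)

mean = sum(sample_norm_sq() for _ in range(M)) / M
exact = (4 / math.pi) * sum(1.0 / n ** 2 for n in range(1, N + 1))
```

The empirical mean matches the analytic value (≈ 2.07 for N = 50) to within sampling error, consistent with the series defining an L²(Θ)-valued random variable.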

Theorem 4.3.1

Let s < 1∕2. Suppose that α < 3. Let us fix a real number p such that \(\max (4,2\alpha )<p<6\). Then there exists a full μ measure set \(\varSigma \subset H^s_{rad}(\Theta )\) such that for every u 0 ∈ Σ there exists a unique global solution of (4.3.2)

$$\displaystyle \begin{aligned}u\in C(\mathbb R,H^s_{rad}(\Theta))\cap L^p_{loc}( \mathbb{R}_t;L^p(\Theta))\,. \end{aligned}$$

The solution can be written as

$$\displaystyle \begin{aligned}u(t)=S(t)(u_0)+v(t), \end{aligned}$$

where \(S(t) = e^{-it\sqrt { -\mathbf {\Delta }}}\) is the free evolution and \(v(t)\in H^\sigma _{rad}(\Theta )\) for some σ > 1∕2. Moreover

$$\displaystyle \begin{aligned}\|u(t)\|{}_{H^s(\Theta)} \leq C(s) \big(\log(2+|t|)\big)^{\frac 1 2}\,. \end{aligned}$$

The proof of Theorem 4.3.1 is based on the following local existence result.

Proposition 4.3.2

For a given positive number α < 3 we choose a real number p such that \(\max (4,2\alpha )<p<6\). Then we fix a real number σ by \(\sigma =\frac {3}{2}-\frac {4}{p}\). There exist C > 0, c ∈ (0, 1], γ > 0 such that for every R ≥ 1, if we set T = cR^{−γ}, then for every radially symmetric u 0 satisfying

$$\displaystyle \begin{aligned}\|S(t)u_0\|{}_{L^p((0,2)\times\Theta)}\leq R \end{aligned}$$

there exists a unique solution u of (4.3.2) such that

$$\displaystyle \begin{aligned}u(t)=S(t)u_0+v(t) \end{aligned}$$

with \(v\in X^{\sigma }_{T}\) (the Strichartz spaces defined in the previous section). Moreover

$$\displaystyle \begin{aligned}\|v\|{}_{X^\sigma_T}\leq CR. \end{aligned}$$

In particular, since S(t) is 2-periodic in time and thanks to the Strichartz estimates,

$$\displaystyle \begin{aligned}\sup_{t\in[-T,T]}\|S(\tau)u(t)\|{}_{L^p(\tau\in (0,2);L^p(\Theta))}\leq CR\,. \end{aligned}$$

In addition, if u 0 ∈ H s( Θ) with s < σ, then

$$\displaystyle \begin{aligned}\|u(t)\|{}_{H^s(\Theta)}\leq \|S(t)u_0\|{}_{H^s(\Theta)}+\|v(t)\|{}_{H^s(\Theta)}\leq \|u_0\|{}_{H^s(\Theta)}+CR\,. \end{aligned}$$

Using probabilistic Strichartz estimates for S(t) as we did in the previous section, we can deduce the following corollary of Proposition 4.3.2.

Proposition 4.3.3

Under the assumptions of Proposition 4.3.2, there is a set Σ of full μ measure such that for every u 0 ∈ Σ there is T > 0 and a unique solution of (4.3.2) on [0, T] in

$$\displaystyle \begin{aligned}C([0,T],H^s_{rad}(\Theta))\cap L^p_{loc}( \mathbb{R}_t;L^p(\Theta)). \end{aligned}$$

Moreover for every T ≤ 1 there is a set Σ T ⊂ Σ such that

$$\displaystyle \begin{aligned}\mu(\varSigma_T)\geq 1-Ce^{-c/T^\delta},\quad c>0,\, \delta>0 \end{aligned}$$

and such that for every u 0 ∈ Σ T the time of existence is at least T.

Let us next define the Gibbs measure associated with (4.3.2). Using [1, Theorem 4], we have that for α < 4 the quantity

$$\displaystyle \begin{aligned} \|\sum_{n=1}^{\infty}\frac{ h_{n}(\omega)+i l_{n}(\omega) }{n\pi}e_{n}\|{}_{L^{\alpha+2}(\Theta)} \end{aligned} $$
(4.3.5)

is finite almost surely. Moreover, the restriction α < 4 is optimal because for α = 4 the quantity (4.3.5) is infinite almost surely. Therefore, for α < 4, we can define a nontrivial measure ρ as the image measure on \(H^s_{rad}(\Theta )\) under the map (4.3.4) of the measure

$$\displaystyle \begin{aligned} \exp\Big(-\frac{1}{\alpha+2} \|\sum_{n=1}^{\infty}\frac{ h_{n}(\omega)}{n\pi}e_{n}\|{}_{L^{\alpha+2}(\Theta)}^{\alpha+2}\Big)dp(\omega)\,. \end{aligned} $$
(4.3.6)

The measure ρ is the Gibbs measure associated with (4.3.2) and it can be formally seen as

$$\displaystyle \begin{aligned} \exp\Big( -\frac{1}{2}\|\sqrt{ - \mathbf{\Delta}} (u)\|{}_{L^2(\Theta)}^{2}- \frac{1}{\alpha+2}\|\mathrm{Re}(u)\|{}_{L^{\alpha+2}(\Theta)}^{\alpha+2} \Big)du, \end{aligned}$$

where a renormalisation of

$$\displaystyle \begin{aligned}\exp\Big( -\frac{1}{2}\|\sqrt{ - \mathbf{\Delta}} (u)\|{}_{L^2(\Theta)}^{2} \Big)du, \end{aligned}$$

corresponds to the measure μ and

$$\displaystyle \begin{aligned}\exp\Big(-\frac{1}{\alpha+2}\|\mathrm{Re}(u)\|{}_{L^{\alpha+2}(\Theta)}^{\alpha+2}\Big), \end{aligned}$$

corresponds to the density in (4.3.6). Thanks to the conservation of the Hamiltonian (4.3.3), the measure ρ is expected to be invariant under the flow of (4.3.2). This expectation is also supported by the fact that the vector field defining (4.3.2) is (formally) divergence free. This fact follows again from the Hamiltonian structure of (4.3.2).

Observe that if a Borel set A ⊂ H s( Θ) is of full ρ measure then A is also of full μ measure. Therefore, it suffices to solve (4.3.2) globally in time for u 0 in a set of full ρ measure.

We now explain how the local existence result of Proposition 4.3.2 can be combined with invariant measure considerations in order to get global existence of the solution. The details can be found in [7]. Consider a truncated version of (4.3.2)

$$\displaystyle \begin{aligned} (i\partial_{t}-\sqrt{ - \mathbf{\Delta}})u- S_N\big( \sqrt{ - \mathbf{\Delta}}^{-1}\big(|S_N \mathrm{Re}(u)|{}^{\alpha}S_N\mathrm{Re}(u)\big) \big) =0, \end{aligned} $$
(4.3.7)

where S N is a suitable “projector” tending to the identity as N goes to infinity. Let us denote by ΦN(t) the flow of (4.3.7). This flow is well-defined for a fixed N because for frequencies ≫ N it is simply the linear flow and for the remaining frequencies one can use that (4.3.7) has the preserved energy

$$\displaystyle \begin{aligned} \frac{1}{2}\|\sqrt{ - \mathbf{\Delta}} (u)\|{}_{L^2(\Theta)}^{2}+ \frac{1}{\alpha+2}\|S_N \mathrm{Re}(u)\|{}_{L^{\alpha+2}(\Theta)}^{\alpha+2}\,. \end{aligned} $$
(4.3.8)

The energy (4.3.8) allows us to define an approximated Gibbs measure ρ N. One has that ρ N is invariant under ΦN(t) by the Liouville theorem and the invariance of complex Gaussians under rotations (for the frequencies ≫ N). In addition, ρ N converges in a strong sense to ρ as N → ∞.

Let us fix T ≫ 1 and a small ε > 0. Our goal is to find a set whose complement has ρ measure smaller than ε and such that for initial data in this set we can solve (4.3.2) up to time T.

The local existence theory implies that as long as

$$\displaystyle \begin{aligned} \|S(t)u\|{}_{L^p(\tau\in (0,2);L^p(\Theta))}\leq R,\quad R\geq 1 \end{aligned} $$
(4.3.9)

we can define the solution of the true equation with datum u for times of order R^{−γ}, γ > 0, and the bound (4.3.9) is propagated; moreover, on the interval of existence this solution is the limit as N → ∞ of the solutions of the truncated equation (4.3.7) with the same datum.

Our goal is to show that with a suitably chosen R = R(T, ε) we can propagate the bound (4.3.9) for the solutions of the approximated equation (4.3.7) (for N ≫ 1) up to time T, for initial data in a set whose complement has ρ measure \(\lesssim \varepsilon \).

For R > 1, we define the set B R as

$$\displaystyle \begin{aligned}B_R=\{u\,:\, \|S(t)u\|{}_{L^p(\tau\in (0,2);L^p(\Theta))}\leq R\}. \end{aligned}$$

As mentioned the (large) number R will be determined depending on T and ε. Thanks to the probabilistic Strichartz estimates for S(t), we have the bound

$$\displaystyle \begin{aligned} \rho(B_R^{c})<e^{-\kappa R^2} \end{aligned} $$
(4.3.10)

for some κ > 0. Let τ ≈ R^{−γ} be the local existence time associated to R given by Proposition 4.3.2. Define the set B by

$$\displaystyle \begin{aligned} B=\bigcap_{k=0}^{[T/\tau]}\Phi_N(-k\tau)(B_R)\, . \end{aligned} $$
(4.3.11)

Thanks to the local theory, we can propagate (4.3.9) for data in B up to time T. On the other hand, using the invariance of ρ N under ΦN(t) and (4.3.10), we obtain that

$$\displaystyle \begin{aligned}\rho_N(B^c)\lesssim T R^\gamma e^{-\kappa R^2}. \end{aligned}$$

We now choose R so that

$$\displaystyle \begin{aligned}T R^\gamma e^{-\kappa R^2}\sim \varepsilon. \end{aligned}$$

In other words

$$\displaystyle \begin{aligned}R \sim \Big(\log\big( \frac{T}{\varepsilon}\big)\Big)^{\frac{1}{2}}\,. \end{aligned}$$

This fixes the value of R. With this choice of R and N ≫ 1, we have ρ(B c) < ε, and for data in the set B defined by (4.3.11) the bound (4.3.9) holds up to time T. Now we can pass to the limit N → ∞ thanks to the above mentioned consequence of the local theory, and hence define the solution of the true equation (4.3.2) up to time T for data in a set whose complement has ρ measure smaller than ε.
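The choice of R can be made concrete: solving T R^γ e^{−κR²} = ε for R gives, to leading order, R ∼ (κ^{−1} log(T∕ε))^{1∕2}, the correction coming from the factor R^γ being of lower order. A numerical sketch (the values of κ, γ, T and ε are arbitrary illustrative choices):

```python
import math

def solve_R(T, eps, kappa=1.0, gamma=2.0):
    """Bisection for T * R**gamma * exp(-kappa * R**2) = eps; the
    left-hand side is decreasing in R on [1, 100] for these parameters."""
    f = lambda R: T * R ** gamma * math.exp(-kappa * R ** 2) - eps
    lo, hi = 1.0, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

T, eps = 1e6, 1e-6
R = solve_R(T, eps)
R_leading = math.sqrt(math.log(T / eps))   # leading-order prediction, kappa = 1
```

The exact root and the leading-order formula agree to within a few percent here, which is why only the square root of the logarithm survives in the displayed asymptotics.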

We now apply the last conclusion with T = 2^j and ε∕2^j in place of ε. This produces a set Σ j,ε such that ρ((Σ j,ε)c) < ε∕2^j and for u 0 ∈ Σ j,ε we can solve (4.3.2) up to time 2^j. We next set

$$\displaystyle \begin{aligned}\varSigma_{\varepsilon}=\bigcap_{j=1}^{\infty} \varSigma_{j,\varepsilon}\,. \end{aligned}$$

Clearly, we have ρ((Σ ε)c) < ε and for u 0 ∈ Σ ε, we can define a global solution of (4.3.2). Finally

$$\displaystyle \begin{aligned}\varSigma=\bigcup_{j=1}^\infty \varSigma_{2^{-j}} \end{aligned}$$

is a set of full ρ measure on which we can define globally the solutions of (4.3.2). The previous construction also retains enough information to obtain the claimed uniqueness property.

Remark 4.3.4

In [5], a part of the result of Theorem 4.3.1 was extended to α < 4 which is the full range of the definition of the measure ρ.

Remark 4.3.5

The previous discussion has shown that we have two methods to globalise the solutions in the context of random data well-posedness for the nonlinear wave equation. The one of the previous section is based on energy estimates, while the method of this section is based on invariant measure considerations. It is worth mentioning that these two methods are also employed in the context of singular stochastic PDEs. More precisely, in [32] the globalisation is done via the (more flexible) method of energy estimates, while in [25] one globalises by exploiting invariant measure considerations.

4.4 Quasi-Invariant Measures

4.4.1 Introduction

4.4.1.1 Motivation

In Sect. 4.2, for each s ∈ (0, 1) we introduced a family of measures \({\mathcal M}^s\) on the Sobolev space \({\mathcal H}^s(\mathbb T^3)=H^s(\mathbb T^3)\times H^{s-1}(\mathbb T^3)\). Then for each \(\mu \in {\mathcal M}^s\), we succeeded in defining a unique global flow Φ(t) of the cubic defocusing wave equation a.s. with respect to μ. This result is of interest for the solvability of the Cauchy problem associated with the cubic defocusing wave equation for data in \({\mathcal H}^s(\mathbb T^3)\), especially for s < 1∕2, because for these regularities this Cauchy problem is ill-posed in the Hadamard sense in \({\mathcal H}^s(\mathbb T^3)\). On the other hand, the methods of Sect. 4.2 give no information about the transport by Φ(t) of the measures in \({\mathcal M}^s\), even for large s. Of course, \({\mathcal M}^s\) can be defined for any \(s\in \mathbb R\), and for s ≥ 1 the global existence a.s. with respect to an element of \({\mathcal M}^s\) follows from Theorem 4.1.2. The question of the transport of the measures of \({\mathcal M}^s\) under Φ(t) is of interest in the context of the macroscopic description of the flow of the cubic defocusing wave equation. It is no longer only a low regularity issue, and the answer to this question is a priori not clear at all, even for regular solutions.

On the other hand, in Sect. 4.3, we constructed a very particular (Gaussian) measure μ on the Sobolev spaces of radial functions on the unit ball of \(\mathbb R^3\) such that a.s. with respect to this measure the nonlinear defocusing wave equation with nonlinear term |u|αu, α ∈ (2, 3), has a well defined dynamics. The typical Sobolev regularity on the support of this measure is supercritical, and thus again this result is of interest concerning the individual behaviour of the trajectories. This result is also of interest concerning the macroscopic description of the flow because, by the methods of Sect. 4.3, we can also prove that the measure transported by the flow is absolutely continuous with respect to μ. Unfortunately, the method of Sect. 4.3 is restricted to a very particular initial distribution with data of low regularity.

Motivated by the previous discussion, a natural question to ask is what can be said for the transport of the measures of \({\mathcal M}^s\) under the flow of the cubic defocusing wave equation. In this section we discuss some recent progress on this question.

4.4.1.2 Statement of the Result

Consider the cubic defocusing wave equation

$$\displaystyle \begin{aligned} (\partial_t^2 -\Delta) u+u^3=0, \end{aligned} $$
(4.4.1)

where \(u :\mathbb R\times \mathbb T^d\rightarrow \mathbb R\). We rewrite (4.4.1) as the first order system

$$\displaystyle \begin{aligned} \partial_t u=v,\quad \partial_t v=\Delta u-u^3. \end{aligned} $$
(4.4.2)

As we already know, if (u, v) is a smooth solution of (4.4.2) then

$$\displaystyle \begin{aligned}\frac{d}{dt}H(u(t),v(t))=0, \end{aligned}$$

where

$$\displaystyle \begin{aligned} H(u,v)=\frac{1}{2}\int_{\mathbb T^d}\big(v^2+|\nabla u|{}^2\big)+\frac{1}{4}\int_{\mathbb T^d}u^4\,. \end{aligned} $$
(4.4.3)

Thanks to Theorem 4.1.2, for d ≤ 3 the Cauchy problem associated with (4.4.2) is globally well-posed in \({\mathcal H}^s(\mathbb T^d)= H^s(\mathbb T^d)\times H^{s-1}(\mathbb T^d)\), s ≥ 1. Denote by \(\Phi (t):{\mathcal H}^s(\mathbb T^d)\rightarrow {\mathcal H}^s(\mathbb T^d)\) the resulting flow. As we already mentioned, we are interested in the statistical description of Φ(t). Let μ s,d be the measure formally defined by

$$\displaystyle \begin{aligned}d \mu_{s,d} = Z_{s,d}^{-1} e^{-\frac 12 \| (u,v)\|{}_{{\mathcal H}^{s+1}}^2} du dv \end{aligned}$$

or

$$\displaystyle \begin{aligned}d \mu_{s,d} = Z_{s,d}^{-1} \prod_{n \in \mathbb Z^d} e^{-\frac 12 \langle n\rangle ^{2(s+1)} |\widehat u_n|{}^2} e^{-\frac 12 \langle n\rangle^{2s} |\widehat v_n|{}^2} d\widehat u_n d\widehat v_n\,, \end{aligned}$$

where \(\widehat u_n\) and \(\widehat v_n\) denote the Fourier transforms of u and v respectively. Recall that \(\langle n\rangle =(1+|n|{ }^2)^{\frac {1}{2}}\).

Rigorously one can define the Gaussian measure μ s,d as the induced probability measure under the map

$$\displaystyle \begin{aligned}\omega \longmapsto (u^\omega(x), v^\omega(x)) \end{aligned}$$

with

$$\displaystyle \begin{aligned} u^\omega(x) = \sum_{n \in \mathbb Z^d} \frac{g_n(\omega)}{\langle n\rangle^{s+1}}e^{in\cdot x},\quad v^\omega(x) = \sum_{n \in \mathbb Z^d} \frac{h_n(\omega)}{\langle n\rangle^{s}}e^{in\cdot x} \,\,. \end{aligned} $$
(4.4.4)

In (4.4.4), \(( g_n )_{n \in \mathbb Z^d}\), \(( h_n )_{n \in \mathbb Z^d}\) are two sequences of “standard” complex Gaussian random variables, such that \(g_n=\overline {g_{-n}}\), \(h_n=\overline {h_{-n}}\) and such that {g n, h n} are independent, modulo the central symmetry. The measures μ s,d can be seen as special cases of the measures in \({\mathcal M}^s\) considered in Sect. 4.2. The partial sums of the series in (4.4.4) are a Cauchy sequence in \(L^2(\Omega ; \mathcal {H}^{\sigma }(\mathbb T^d))\) for every \(\sigma <s+1-\frac {d}{2}\) and therefore one can see μ s,d as a probability measure on \({\mathcal H}^\sigma \) for a fixed \(\sigma <s+1-\frac {d}{2}\). Therefore, thanks to the results of Sect. 4.2, for d ≤ 3, the flow Φ(t) can be extended μ s,d almost surely, provided \(s>\frac {d}{2}-1\). We have the following result.
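Before stating it, the regularity claim for the series (4.4.4) can be checked in a toy simulation. For d = 1 and s = 0, the u-component satisfies E‖u^ω‖²_{H^σ} = Σ_n ⟨n⟩^{2σ−2(s+1)}, which is finite precisely for σ < s + 1 − d∕2 = 1∕2. A sketch with σ = 0 (the truncation, sample count and seed are arbitrary; g_n is taken standard complex with \(g_{-n}=\overline {g_n}\)):

```python
import math
import random

random.seed(1)
s, sigma, N, M = 0.0, 0.0, 50, 4000

def weight(n):
    return (1 + n * n) ** (sigma - (s + 1))   # <n>^{2 sigma - 2(s+1)}

def sample_Hsigma_sq():
    # ||u||_{H^sigma}^2 = sum_n <n>^{2 sigma} |g_n|^2 / <n>^{2(s+1)};
    # the reality constraint g_{-n} = conj(g_n) doubles each n >= 1 term
    total = weight(0) * random.gauss(0, 1) ** 2
    for n in range(1, N + 1):
        gn_sq = 0.5 * (random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2)  # E|g_n|^2 = 1
        total += 2 * weight(n) * gn_sq
    return total

mean = sum(sample_Hsigma_sq() for _ in range(M)) / M
exact = weight(0) + 2 * sum(weight(n) for n in range(1, N + 1))
```

The empirical mean of the (truncated) H^σ norm squared matches the analytic sum, consistent with μ s,d charging \({\mathcal H}^\sigma \) for σ < s + 1 − d∕2.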

Theorem 4.4.1

Let s ≥ 0 be an integer. Then the measure μ s,1 is quasi-invariant under the flow of (4.4.2).

We recall that given a measure space (X, μ), we say that μ is quasi-invariant under a transformation T : X → X if the transported measure T ∗μ = μ ∘ T −1 and μ are equivalent, i.e. mutually absolutely continuous. The proof of Theorem 4.4.1 is essentially contained in the analysis of [45].
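Returning to the series (4.4.4), the threshold \(\sigma <s+1-\frac {d}{2}\) is easy to probe numerically: taking expectations with \(\mathbb E|g_n|{}^2=\mathbb E|h_n|{}^2=1\) gives \(\mathbb E\| (u^\omega ,v^\omega )\|{}_{{\mathcal H}^\sigma }^2=2\sum _{n\in \mathbb Z^d}\langle n\rangle ^{2(\sigma -s-1)}\), which is finite exactly when 2(s + 1 − σ) > d. A minimal sketch, with the illustrative choice d = 2, s = 1 (not values singled out in the text):

```python
import itertools

def expected_sq_norm(d, s, sigma, R):
    """Partial sum (over |n|_inf <= R) of 2 * <n>^{2(sigma - s - 1)}, i.e. the
    truncated value of E || (u, v) ||_{H^sigma}^2 for the series (4.4.4)."""
    total = 0.0
    for n in itertools.product(range(-R, R + 1), repeat=d):
        n2 = sum(c * c for c in n)
        total += 2.0 * (1.0 + n2) ** (sigma - s - 1.0)
    return total

d, s = 2, 1  # threshold: sigma < s + 1 - d/2 = 1
below = [expected_sq_norm(d, s, 0.5, R) for R in (10, 20, 40)]  # converges
at = [expected_sq_norm(d, s, 1.0, R) for R in (10, 20, 40)]     # log-divergence
```

The partial sums for σ = 0.5 stabilise, while at the endpoint σ = 1 each doubling of the truncation radius adds a fixed amount, the signature of logarithmic divergence.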

For d = 2 the situation is much more complicated. Recently in [37], we were able to prove the following statement.

Theorem 4.4.2

Let s ≥ 2 be an even integer. Then the measure μ s,2 is quasi-invariant under the flow of (4.4.2).

We expect that by using the methods of Sect. 4.2, one can extend the result of Theorem 4.4.2 to all s > 0, not necessarily an integer.

It would be interesting to decide whether one can extend the result of Theorem 4.4.2 to the three dimensional case. It may be that the type of renormalisations employed in the context of singular stochastic PDEs or in QFT becomes useful in this context.

From now on we consider d = 2 and we denote μ s,2 simply by μ s.

4.4.1.3 Relation to Cameron-Martin Type Results

In probability theory, there is an extensive literature on the transport properties of Gaussian measures under linear and nonlinear transformations. The statements of Theorems 4.4.1 and 4.4.2 can be seen as results of this kind for the nonlinear transformation defined by the flow map of the cubic defocusing wave equation. The most classical result concerning the transport property of Gaussian measures is the theorem of Cameron-Martin [10], which gives an optimal answer for shifts. In the context of the measures μ s, the Cameron-Martin theorem says that for a fixed \((h_1,h_2)\in {\mathcal H}^{\sigma }\), σ < s, the transport of μ s under the shift

$$\displaystyle \begin{aligned}(u,v)\longmapsto (u,v)+(h_1,h_2), \end{aligned}$$

is absolutely continuous with respect to μ s if and only if \((h_1,h_2)\in {\mathcal H}^{s+1}\).
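The dichotomy in the Cameron-Martin theorem can be illustrated mode by mode. For product Gaussians, the Hellinger affinity between \(\mathcal N(0,\sigma _n^2)\) and its shift \(\mathcal N(h_n,\sigma _n^2)\) is \(\exp (-|h_n|{}^2/(8\sigma _n^2))\), so the product over n stays positive (equivalence) exactly when \(\sum _n |h_n|{}^2\langle n\rangle ^{2(s+1)}<\infty \), i.e. \(h\in H^{s+1}\). A sketch with the hypothetical shift profile h n = 〈n〉−α (for s = 1 the H s+1 threshold on \(\mathbb T^2\) is α > s + 2 = 3):

```python
import itertools

def log_affinity(s, alpha, R):
    """Truncated log-Hellinger affinity between the u-marginal of mu_s and its
    shift by h_n = <n>^{-alpha}: each mode contributes -|h_n|^2 / (8 var_n)
    with var_n = <n>^{-2(s+1)}."""
    tot = 0.0
    for n in itertools.product(range(-R, R + 1), repeat=2):
        w = 1.0 + n[0] ** 2 + n[1] ** 2              # <n>^2
        tot += -(w ** (s + 1.0 - alpha)) / 8.0       # -|h_n|^2 <n>^{2(s+1)} / 8
    return tot

s = 1  # threshold: alpha > s + 2 = 3
inside = [log_affinity(s, 3.5, R) for R in (10, 20, 40)]   # h in H^{s+1}
outside = [log_affinity(s, 3.0, R) for R in (10, 20, 40)]  # h not in H^{s+1}
```

For α = 3.5 the log-affinity converges (the affinity stays bounded away from 0: equivalence); at α = 3 it drifts to −∞ (mutual singularity).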

If we denote by S(t) the free evolution associated with (4.4.2) then for \((u,v)\in {\mathcal H}^\sigma \), we classically have that the flow of the nonlinear wave equation can be decomposed as

$$\displaystyle \begin{aligned} \Phi(t)(u,v)=S(t)\big((u,v)+(h_1,h_2)\big), \end{aligned} $$
(4.4.5)

where \((h_1,h_2)= (h_1(u,v),h_2(u,v)) \in {\mathcal H}^{\sigma +1}\). In other words, there is exactly one derivative of smoothing and no more. Of course, if σ < s then σ + 1 < s + 1, and therefore the result of Theorem 4.4.2 displays fine properties of the vector field generating Φ(t). More precisely, if in (4.4.5) \((h_1,h_2)\in {\mathcal H}^{\sigma +1}\) were fixed (independent of (u, v)), then the transported measures would not be absolutely continuous with respect to μ s!

Let us next compare the result of Theorem 4.4.2 with a result of Ramer [39]. For σ < s, let us consider an invertible map Ψ on \({\mathcal H}^{\sigma }(\mathbb T^2)\) of the form

$$\displaystyle \begin{aligned}\varPsi(u,v)= (u,v)+F(u,v), \end{aligned}$$

where \(F:{\mathcal H}^\sigma (\mathbb T^2)\rightarrow {\mathcal H}^{s+1}(\mathbb T^2)\). Under some further assumptions, the most important being that

$$\displaystyle \begin{aligned}DF(u,v):{\mathcal H}^{s+1}(\mathbb T^2)\rightarrow {\mathcal H}^{s+1}(\mathbb T^2) \end{aligned}$$

is a Hilbert-Schmidt map, the analysis of [39] implies that μ s is quasi-invariant under Ψ. A typical example of such an F is

$$\displaystyle \begin{aligned}F(u,v)=\varepsilon (1-\Delta)^{-1-\delta}(u^2,v^2),\quad \delta>0,\,\, |\varepsilon|\ll 1, \end{aligned}$$

i.e. 2-smoothing is needed in order to ensure the Hilbert-Schmidt assumption. The approach of Ramer is therefore far from applicable in the context of the flow map of the nonlinear wave equation, which, as we saw, exhibits only 1-smoothing.

Let us finally discuss the Cruzeiro generalisation of the Cameron-Martin theorem. In [15], Ana Bela Cruzeiro considered a general equation of the form

$$\displaystyle \begin{aligned} \partial_t u=X(u), \end{aligned} $$
(4.4.6)

where X is an infinite dimensional vector field. She proved that μ s is quasi-invariant under the flow of (4.4.6) under a number of assumptions, the most important being of the type:

$$\displaystyle \begin{aligned} \int_{H^{\sigma}(\mathbb T^2)}e^{\mathrm{div} (X(u))}d\mu_s(u)<\infty\,. \end{aligned} $$
(4.4.7)

The problem is how to check the abstract assumption (4.4.7) in concrete examples. Very roughly speaking, the result of Theorem 4.4.2 amounts to verifying assumptions of type (4.4.7) “in practice”.

4.4.2 Elements of the Proof

In this section, we present some of the key steps in the proof of Theorem 4.4.2.

4.4.2.1 An Equivalent Gaussian Measure

Since the quadratic part of (4.4.3) does not control the L 2 norm of u, we will prove the quasi-invariance for the equivalent measure \(\widetilde { \mu }_{s} \) defined as the induced probability measure under the map

$$\displaystyle \begin{aligned} \omega \in \Omega \longmapsto (u^\omega(x), v^\omega(x)) \end{aligned}$$

with

$$\displaystyle \begin{aligned} u^\omega(x) = \sum_{n \in \mathbb Z^2} \frac{g_n(\omega)} { (1+|n|{}^2+|n|{}^{2s+2})^{\frac{1}{2}} }e^{in\cdot x},\quad v^\omega(x) = \sum_{n \in \mathbb Z^2} \frac{h_n(\omega)}{ (1+|n|{}^{2s})^{\frac{1}{2}} }e^{in\cdot x}\,. \end{aligned}$$

Formally \(\widetilde { \mu }_{s} \) can be seen as

$$\displaystyle \begin{aligned}Z^{-1} e^{-\frac 12 \int v^2- \frac 12 \int (D^s v)^2 - \frac 12 \int u^2 - \frac 12 \int |\nabla u|{}^2 - \frac 12 \int (D^{s+1} u)^2}du dv, \end{aligned}$$

where

$$\displaystyle \begin{aligned}D\equiv \sqrt{-\Delta}\,. \end{aligned}$$

As we shall see below, the expression

$$\displaystyle \begin{aligned} \frac 12 \int_{\mathbb T^2} v^2+\frac 12 \int_{\mathbb T^2} (D^s v)^2 + \frac 12 \int_{\mathbb T^2} u^2 + \frac 12 \int_{\mathbb T^2} |\nabla u|{}^2 + \frac 12 \int_{\mathbb T^2} (D^{s+1} u)^2 \end{aligned} $$
(4.4.8)

is the main part of the quadratic part of the renormalised energy in the context of the nonlinear wave equation (4.4.2). Using the result of Kakutani [26], we can show that for s > 1∕2 the Gaussian measures μ s and \(\widetilde { \mu }_{s} \) are equivalent.
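Kakutani's criterion states that two centered Gaussian product measures with mode variances a n and b n are equivalent if and only if \(\sum _n (a_n/b_n-1)^2<\infty \). Here \(a_n=\langle n\rangle ^{-2(s+1)}\) and \(b_n=(1+|n|{}^2+|n|{}^{2s+2})^{-1}\), so that a n∕b n − 1 = O(|n|−2s), which is square-summable on \(\mathbb Z^2\) precisely when s > 1∕2. This is straightforward to confirm numerically (the values s = 1 and s = 1∕4 below are illustrative choices):

```python
import itertools

def kakutani_sum(s, R):
    """Partial sum of (a_n / b_n - 1)^2 over |n|_inf <= R, with
    a_n = <n>^{-2(s+1)} and b_n = (1 + |n|^2 + |n|^{2s+2})^{-1}."""
    tot = 0.0
    for n in itertools.product(range(-R, R + 1), repeat=2):
        n2 = float(n[0] ** 2 + n[1] ** 2)
        ratio = (1.0 + n2 + n2 ** (s + 1.0)) / (1.0 + n2) ** (s + 1.0)
        tot += (ratio - 1.0) ** 2
    return tot

good = [kakutani_sum(1.0, R) for R in (10, 20, 40)]   # s = 1 > 1/2: converges
bad = [kakutani_sum(0.25, R) for R in (10, 20, 40)]   # s = 1/4 < 1/2: diverges
```

For s = 1 the partial sums stabilise (equivalence of μ s and \(\widetilde {\mu }_s\)), while for s = 1∕4 they grow without bound.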

4.4.2.2 The Renormalised Energies

Consider the truncated wave equation

$$\displaystyle \begin{aligned} \partial_t u=v,\quad \partial_t v=\Delta u-\pi_N((\pi_N u)^3), \end{aligned} $$
(4.4.9)

where π N is the Dirichlet projector on frequencies \(n\in \mathbb Z^2\) such that |n|≤ N. If (u, v) is a solution of (4.4.9) then

$$\displaystyle \begin{aligned} \partial_t \Big[\frac 12 \int_{\mathbb T^2} (D^s v_N)^2 + \frac 12 \int_{\mathbb T^2} (D^{s+1} u_N)^2 \Big] = \int_{\mathbb T^2} (D^{2s} v_N)(-u_N^3)\,, \end{aligned}$$

where (u N, v N) = (π Nu, π Nv). Clearly ∂ tu N = v N. Observe that for s = 0, we recover the conservation of the truncated energy H N(u, v), defined by

$$\displaystyle \begin{aligned} H_N(u,v)\equiv H(\pi_N u,\pi_N v)\,. \end{aligned}$$
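This conservation is easy to confirm with a small spectral simulation. The sketch below is our own illustration, with some simplifying assumptions not taken from the text: a square frequency cutoff |n|∞ ≤ N instead of the Euclidean one, classical RK4 time stepping, low-frequency initial data (so that u = π Nu for all times and H N(u, v) = H(u, v)), and H taken as \(\frac 12\int (v^2+|\nabla u|{}^2)+\frac 14\int u^4\) with the normalised measure, consistent with (4.4.13):

```python
import numpy as np

M, N, dt, steps = 32, 3, 0.005, 200
k = np.fft.fftfreq(M, d=1.0 / M)                  # integer Fourier frequencies
KX, KY = np.meshgrid(k, k, indexing="ij")
lap = -(KX ** 2 + KY ** 2)                        # Fourier symbol of the Laplacian
mask = np.maximum(np.abs(KX), np.abs(KY)) <= N    # square cutoff pi_N (assumption)

def proj(f):                                      # pi_N acting on a real field
    return np.real(np.fft.ifft2(np.fft.fft2(f) * mask))

def rhs(u, v):                                    # right-hand side of (4.4.9)
    lap_u = np.real(np.fft.ifft2(np.fft.fft2(u) * lap))
    return v, lap_u - proj(proj(u) ** 3)

def energy(u, v):                                 # H(u, v), normalised measure
    ux = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(u)))
    uy = np.real(np.fft.ifft2(1j * KY * np.fft.fft2(u)))
    return 0.5 * np.mean(v ** 2 + ux ** 2 + uy ** 2) + 0.25 * np.mean(u ** 4)

x = 2 * np.pi * np.arange(M) / M
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = np.cos(X) * np.sin(Y), 0.5 * np.sin(2 * X)  # frequencies <= N
E0 = energy(u, v)

for _ in range(steps):                            # classical RK4 in time
    k1u, k1v = rhs(u, v)
    k2u, k2v = rhs(u + 0.5 * dt * k1u, v + 0.5 * dt * k1v)
    k3u, k3v = rhs(u + 0.5 * dt * k2u, v + 0.5 * dt * k2v)
    k4u, k4v = rhs(u + dt * k3u, v + dt * k3v)
    u = u + dt * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
    v = v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6

drift = abs(energy(u, v) - E0)                    # integrator error only
```

Since the data are band-limited, the grid averages are spectrally exact, and the residual drift is purely the O(dt 4) error of the integrator.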

For s ≥ 2, an even integer, using the Leibniz rule, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{\mathbb T^2} (D^{2s} v_N)(-u_N^3)&\displaystyle =&\displaystyle -3\int_{\mathbb T^2} D^sv_N\, D^s u_N\, u_N^2\\ &\displaystyle &\displaystyle + \sum_{\substack{ |\alpha|+|\beta|+|\gamma|=s\\ |\alpha|,|\beta|,|\gamma|<s }} c_{\alpha,\beta,\gamma} \int_{\mathbb T^2} D^sv_N\,\partial^\alpha u_N \partial^\beta u_N \partial^\gamma u_N, \end{array} \end{aligned} $$

for some unessential constants c α,β,γ.

It will be convenient in the sequel to normalise the integration on \(\mathbb T^2\) to a probability measure; the integrals below are therefore taken with respect to the Lebesgue measure multiplied by (2π)−2. We can write

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle -3\int_{\mathbb T^2} D^sv_N\, D^s u_N\, u_N^2= - \frac 32 \partial_t \bigg[\int_{\mathbb T^2} (D^s u_N)^2u_N^2 \bigg] + 3 \int_{\mathbb T^2} (D^s u_N)^2\, v_N \,u_N\\ &\displaystyle &\displaystyle \quad = - \frac 32 \partial_t\bigg[ \int_{\mathbb T^2} \Pi_{0}^\perp [(D^su_N)^2] \, \Pi_{0}^\perp[u_N^2] \bigg] + 3 \int_{\mathbb T^2} \Pi_{0}^\perp[(D^su_N)^2]\, \Pi_{0}^\perp[v_N \, u_N] \\ &\displaystyle &\displaystyle \qquad - \frac 32 \partial_t \bigg[\int_{\mathbb T^2} (D^s u_N)^2 \int_{\mathbb T^2} u_N^2\bigg] + 3 \int_{\mathbb T^2} (D^s u_N)^2\int_{\mathbb T^2} v_N\, u_N , \end{array} \end{aligned} $$
(4.4.10)

where \(\Pi _{0}^\perp \) is again the projector on the nonzero frequencies, i.e.

$$\displaystyle \begin{aligned}(\Pi_{0}^\perp(f))(x)=f(x)-\int_{\mathbb T^2} f(y)dy\,. \end{aligned}$$

The last two terms on the right-hand side of (4.4.10) are problematic because

$$\displaystyle \begin{aligned}\lim_{N\rightarrow\infty} \mathbb E_{\widetilde{\mu}_s} \bigg[\int_{\mathbb T^2} (D^s \pi_{N}u)^2\bigg]=+\infty\,. \end{aligned}$$

Therefore, we need to use a renormalisation in the definitions of the energies. Define σ N by

$$\displaystyle \begin{aligned} \sigma_N = \mathbb E_{\widetilde{\mu}_s} \bigg[\int_{\mathbb T^2} (D^s \pi_{N}u)^2\bigg] = \sum_{\substack{n \in \mathbb Z^2\\|n|\leq N}} \frac{|n|{}^{2s}}{1+|n|{}^2+|n|{}^{2s+2}}\sim \log N \,. \end{aligned}$$
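The logarithmic divergence is elementary to confirm numerically: the summand behaves like |n|−2, so doubling N adds approximately \(2\pi \log 2\) to σ N. A quick illustrative check, with s = 2:

```python
def sigma_N(N, s):
    """sigma_N = sum over 0 < |n| <= N of |n|^{2s} / (1 + |n|^2 + |n|^{2s+2})."""
    tot = 0.0
    for nx in range(-N, N + 1):
        for ny in range(-N, N + 1):
            n2 = float(nx * nx + ny * ny)
            if 0 < n2 <= N * N:
                tot += n2 ** s / (1.0 + n2 + n2 ** (s + 1))
    return tot

s = 2
values = {N: sigma_N(N, s) for N in (50, 100, 200)}
increment = values[200] - values[100]   # should be close to 2*pi*log(2) ~ 4.36
```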

Then, we have

$$\displaystyle \begin{aligned} - \frac 32 & \partial_t \bigg[\int_{\mathbb T^2} (D^s u_N)^2 \int_{\mathbb T^2} u_N^2\bigg] + 3 \int_{\mathbb T^2} (D^s u_N)^2\int_{\mathbb T^2} v_N\, u_N \\ & = - \frac 32 \partial_t \bigg[\bigg(\int_{\mathbb T^2} (D^s u_N)^2- \sigma_N\bigg)\int_{\mathbb T^2} u_N^2\bigg] + 3 \bigg( \int_{\mathbb T^2} (D^s u_N)^2 - \sigma_N\bigg)\int_{\mathbb T^2} v_N\, u_N. \end{aligned} $$

Now, the term

$$\displaystyle \begin{aligned}\int_{\mathbb T^2} (D^s u_N)^2- \sigma_N \end{aligned}$$

is a good term because thanks to Wiener chaos estimates, we have the bound

$$\displaystyle \begin{aligned}\Big\| \int_{\mathbb T^2} (D^s \pi_N u)^2- \sigma_N \Big\|{}_{L^p(d \widetilde{\mu}_s(u,v))}\leq C p, \end{aligned}$$

where the constant C is independent of p and N. We define \(\tilde {H}_{s, N}(u,v)\) by

$$\displaystyle \begin{aligned} \tilde{H}_{s, N}(u,v) = \frac 12 \int_{\mathbb T^2} (D^s v)^2 + \frac 12 \int_{\mathbb T^2} (D^{s+1} u)^2+ \frac 32 \int_{\mathbb T^2} (D^s u)^2 u^2 - \frac 32 \sigma_N \int_{\mathbb T^2} u^2\,. \end{aligned}$$

We can summarise the previous analysis as follows: if (u, v) is a solution of (4.4.9) then

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \partial_t \tilde{H}_{s, N}(u_N,v_N)&\displaystyle =&\displaystyle 3 \int_{\mathbb T^2} \Pi_{0}^\perp[ (D^s u_N)^2 ] \, \Pi_{0}^\perp [v_N \, u_N ]\\ &\displaystyle &\displaystyle +\sum_{\substack{ |\alpha|+|\beta|+|\gamma|=s\\ |\alpha|,|\beta|,|\gamma|<s }} c_{\alpha,\beta,\gamma} \int_{\mathbb T^2} D^sv_N\,\partial^\alpha u_N \partial^\beta u_N \partial^\gamma u_N\\ &\displaystyle &\displaystyle +\,3 \bigg( \int_{\mathbb T^2} (D^s u_N)^2 - \sigma_N\bigg)\int_{\mathbb T^2} v_N\, u_N. \end{array} \end{aligned} $$
(4.4.11)

All terms in the right hand-side of (4.4.11) are suitable for a perturbative analysis. We finally define the full modified energy H s,N(u, v) as

$$\displaystyle \begin{aligned}H_{s, N}(u,v)=\tilde{H}_{s, N}(u,v)+H(u,v)+ \frac{1}{2} \int_{\mathbb T^2} u^2, \end{aligned}$$

where H is defined by (4.4.3). The quadratic part of H s,N (except the renormalisation term which is morally quartic) is now given by (4.4.8). Therefore in order to prove the quasi-invariance it will be of crucial importance to study the variation in time of H s,N. Here is the main quantitative bound used in the proof of Theorem 4.4.2.

Theorem 4.4.3

Let s ≥ 2 be an even integer and let us denote by Φ N(t) the flow of

$$\displaystyle \begin{aligned}\partial_t u=v,\quad \partial_t v=\Delta u-\pi_N((\pi_N u)^3)\,. \end{aligned}$$

Then for every r > 0 there is a constant C such that for every p ≥ 2 and every N ≥ 1,

$$\displaystyle \begin{aligned}\Big(\int_{H_N(u,v)\leq r} \Big|\partial_t H_{s, N}(\pi_N\Phi_N(t)(u,v))\vert_{t=0} \Big|{}^p d\widetilde{\mu}_s(u,v)\Big)^{\frac{1}{p}}\leq Cp. \end{aligned}$$

4.4.2.3 On the Proof of Theorem 4.4.3

Using Eq. (4.4.9), we have that

$$\displaystyle \begin{aligned}\partial_t H_{s, N}(u_N,v_N) =\partial_t\tilde{H}_{s, N}(u_N,v_N) +\int_{\mathbb T^2} u_N v_N. \end{aligned}$$

Therefore, coming back to (4.4.11), we obtain

$$\displaystyle \begin{aligned}\partial_t H_{s, N}(\pi_N\Phi_N(t)(u,v))\vert_{t=0}= \int_{\mathbb T^2}\pi_N u\pi_N v+ Q_1(u,v)+Q_2(u,v)+Q_3(u,v), \end{aligned}$$

where

$$\displaystyle \begin{aligned} \begin{array}{rcl} Q_1(u,v)&\displaystyle =&\displaystyle 3 \int_{\mathbb T^2} \Pi_{0}^\perp[ (D^s \pi_N u)^2 ] \, \Pi_{0}^\perp [\pi_N v \, \pi_N u], \\ Q_2(u,v)&\displaystyle =&\displaystyle \sum_{\substack{ |\alpha|+|\beta|+|\gamma|=s\\ |\alpha|,|\beta|,|\gamma|<s }} c_{\alpha,\beta,\gamma} \int_{\mathbb T^2} D^s \pi_N v\,\partial^\alpha \pi_N u \partial^\beta \pi_N u \partial^\gamma \pi_N u, \\ Q_3(u,v)&\displaystyle =&\displaystyle 3 \bigg( \int_{\mathbb T^2} (D^s u_N)^2 - \sigma_N\bigg)\int_{\mathbb T^2} \pi_N v\, \pi_N u. \end{array} \end{aligned} $$

Let us first consider

$$\displaystyle \begin{aligned} \int_{\mathbb T^2} \pi_N u \pi_N v\,. \end{aligned} $$
(4.4.12)

We need to estimate (4.4.12) under the restriction

$$\displaystyle \begin{aligned} \int_{\mathbb T^2}(|\nabla\pi_N u|{}^2+(\pi_N v)^2+\frac{1}{2}(\pi_N u)^4)\leq 2r. \end{aligned} $$
(4.4.13)

Using the compactness of \(\mathbb T^2\), one can see that under the restriction (4.4.13),

$$\displaystyle \begin{aligned}\big|\int_{\mathbb T^2} \pi_N u\pi_N v\big|\leq \| \pi_N u\|{}_{L^2(\mathbb T^2)} \|\pi_N v\|{}_{L^2(\mathbb T^2)} \leq C\|\pi_N u\|{}_{L^4(\mathbb T^2)}\|\pi_N v\|{}_{L^2(\mathbb T^2)}\leq Cr^{\frac{3}{4}}\,. \end{aligned}$$

Let us next consider Q 3(u, v). For r > 0, we define μ s,r,N as

$$\displaystyle \begin{aligned}d \mu_{s,r,N}(u,v)= \chi_{H_N(u,v) \leq r} \, d \widetilde{\mu}_s(u,v)\,, \end{aligned}$$

where \( \chi _{H_N(u,v) \leq r} \) stands for the characteristic function of the set

$$\displaystyle \begin{aligned}\{(u,v)\,:\, H_N(u,v)\leq r\}. \end{aligned}$$

The goal is to show that

$$\displaystyle \begin{aligned}\|Q_3(u,v)\|{}_{L^p(d\mu_{s,r,N}(u,v))}\leq Cp, \end{aligned}$$

with a constant C independent of N and p. Since we already checked that under (4.4.13),

$$\displaystyle \begin{aligned}\Big| \int_{\mathbb T^2} \pi_N v\, \pi_N u \Big| \leq C_r, \end{aligned}$$

we obtain that

$$\displaystyle \begin{aligned} \begin{array}{rcl} \|Q_3(u,v)\|{}_{L^p(d\mu_{s,r,N}(u,v))}&\displaystyle \leq&\displaystyle C_r\, \Big\| \int_{\mathbb T^2} (D^s \pi_N u)^2 - \sigma_N\Big\|{}_{L^p(d\mu_{s,r,N}(u,v))}\\ &\displaystyle \leq&\displaystyle C_r\, \Big\| \int_{\mathbb T^2} (D^s \pi_N u)^2 - \sigma_N\Big\|{}_{L^p(d\widetilde{\mu}_{s}(u,v))}\,. \end{array} \end{aligned} $$

On the other hand

$$\displaystyle \begin{aligned}\Big\| \int_{\mathbb T^2} (D^s \pi_N u)^2 - \sigma_N\Big\|{}_{L^p(d\widetilde{\mu}_{s}(u,v))}= \Big\|\sum_{\substack{n \in \mathbb Z^2\\|n|\leq N}} \frac{(|g_n(\omega)|{}^2 - 1)|n|{}^{2s}}{1+|n|{}^2+|n|{}^{2s+2}} \Big\|{}_{L^p(\Omega)} \end{aligned}$$

and by using Wiener chaos estimates, we have

$$\displaystyle \begin{aligned}\Big\|\sum_{\substack{n \in \mathbb Z^2\\|n|\leq N}} \frac{(|g_n(\omega)|{}^2 - 1)|n|{}^{2s}}{1+|n|{}^2+|n|{}^{2s+2}} \Big\|{}_{L^p(\Omega)} \leq Cp \Big\|\sum_{\substack{n \in \mathbb Z^2\\|n|\leq N}} \frac{(|g_n(\omega)|{}^2 - 1)|n|{}^{2s}}{1+|n|{}^2+|n|{}^{2s+2}} \Big\|{}_{L^2(\Omega)} \leq Cp \end{aligned}$$

which provides the needed bound for Q 3(u, v).
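The uniformity in N behind this bound reduces to the square-summability of the weights \(w_n=|n|{}^{2s}/(1+|n|{}^2+|n|{}^{2s+2})\): since the variables |g n|2 − 1 are independent (modulo conjugation) with mean zero, the L 2(Ω) norm squared of the sum is comparable to \(\sum _{|n|\leq N} w_n^2\), and w n ≲|n|−2 makes this bounded uniformly in N; Wiener chaos hypercontractivity then upgrades the L 2 bound to L p with the linear factor p. A quick numerical check of the L 2 step (illustrative sketch, s = 2):

```python
import numpy as np

def variance_proxy(N, s):
    """Sum of w_n^2 over |n|_inf <= N, w_n = |n|^{2s} / (1 + |n|^2 + |n|^{2s+2});
    up to a constant this is the squared L^2(Omega) norm of the renormalised
    quadratic sum, since the summands (|g_n|^2 - 1) w_n are uncorrelated."""
    r = np.arange(-N, N + 1, dtype=float)
    n2 = r[:, None] ** 2 + r[None, :] ** 2
    w = np.where(n2 > 0, n2 ** s / (1.0 + n2 + n2 ** (s + 1)), 0.0)
    return float(np.sum(w ** 2))

s = 2
partial = [variance_proxy(N, s) for N in (50, 100, 200)]   # stabilises in N
```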

The analysis of

$$\displaystyle \begin{aligned}Q_1(u,v)= 3 \int_{\mathbb T^2} \Pi_{0}^\perp[ (D^s \pi_N u)^2 ] \, \Pi_{0}^\perp [\pi_N v \, \pi_N u] \end{aligned}$$

is the most delicate part of the analysis and relies on subtle multi-linear arguments. The analysis of Q 2(u, v) follows similar lines.

Basically, we are allowed to have outputs such as

$$\displaystyle \begin{aligned}\| D^\sigma u\|{}_{L^\infty(\mathbb T^2)},\quad \sigma<s \end{aligned}$$

with a loss \(\sqrt {p}\), and H N(u, v) with no loss in p. The outputs H N(u, v) follow from deterministic analysis, which is why they cost nothing in p, but they are regularity consuming.

We observe that a naive Hölder inequality approach clearly fails. A purely probabilistic argument based solely on Wiener chaos estimates also fails, because the output power of p is too large. The basic strategy is therefore to perform a multi-scale analysis, properly redistributing the derivative losses so that the contribution of the Wiener chaos estimate never carries more than a quadratic weight.

When analysing the 4-linear expression defining Q 1(u, v), we suppose that

$$\displaystyle \begin{aligned}D^s \pi_N u,\,\, D^s \pi_N u,\,\, \pi_N v,\,\, \pi_N u \end{aligned}$$

are localised at dyadic frequencies N 1, N 2, N 3, N 4 respectively.

We first consider the case when \(N_4\gtrsim (\max (N_1,N_2))^{\frac {1}{100}}\). In this case we exchange some regularity of D sπ Nu with that of π Nu and we perform the naive linear analysis.

Therefore, in the analysis of Q 1(u, v) we can suppose that

$$\displaystyle \begin{aligned}N_4\ll (\max(N_1,N_2))^{\frac{1}{100}}. \end{aligned}$$

In this case, we have that

$$\displaystyle \begin{aligned}\max(N_1,N_2)\sim \max(N_j,\, j=1,2,3,4). \end{aligned}$$

By symmetry, we can suppose that \(N_1=\max (N_1,N_2)\). We next consider the case

$$\displaystyle \begin{aligned}N_3\ll N_1^{1-a},\quad a=a(s)\ll 1\,. \end{aligned}$$

In this case, we perform a bi-linear Wiener chaos estimate and we have some gain of regularity in the localisation of \( \Pi _{0}^\perp [ (D^s \pi _N u)^2 ]\). Finally, we consider the case

$$\displaystyle \begin{aligned}N_1\sim \max(N_j,\, j=1,2,3,4),\,\, N_4\ll (\max(N_1,N_2))^{\frac{1}{100}},\,\, N_3\gtrsim N_1^{1-a}\,. \end{aligned}$$

In this case, we perform a tri-linear Wiener chaos estimate and we have enough gain of regularity in the localisation of

$$\displaystyle \begin{aligned}\Pi_{0}^\perp[ (D^s \pi_N u)^2 ]\pi_N v\,. \end{aligned}$$

This essentially explains the argument leading to the key estimate of Theorem 4.4.3. We refer to [37] for the details.

4.4.2.4 On the Soft Analysis

We can observe that

$$\displaystyle \begin{aligned} H_{s, N}(u,v) =(4.4.8) + \frac 32 \int_{\mathbb T^2} (D^s u)^2 u^2 - \frac 32 \sigma_N \int_{\mathbb T^2} u^2 +\frac 14 \int_{\mathbb T^2}u^4\,. \end{aligned}$$

By classical arguments from QFT, we can define

$$\displaystyle \begin{aligned}\lim_{ N\rightarrow\infty } \Big( \frac 32 \int (D^s \pi_N u)^2 (\pi_N u)^2 - \frac 32 \sigma_N \int (\pi_N u)^2 \Big) \end{aligned}$$

in \(L^p(d\widetilde {\mu }_s(u,v))\), p < ∞. Denote this limit by R(u). Essentially speaking, once we have the key estimate, we study the quasi-invariance of

$$\displaystyle \begin{aligned} \chi_{H(u,v) \leq r}\, e^{- R(u)-\frac 14 \int u^4}\,d\widetilde{\mu}_s(u,v) \end{aligned} $$
(4.4.14)

by soft analysis techniques.

Let us finally explain the importance of the loss p in the key estimate of Theorem 4.4.3. Denote by x(t) the measure at time t of the evolution of a set having zero measure with respect to (4.4.14). Essentially speaking, using the key estimate and the arguments introduced in [46, 47], we obtain that x(t) satisfies the estimate

$$\displaystyle \begin{aligned} \dot{x}(t)\leq Cp(x(t))^{1-\frac{1}{p}},\quad x(0)=0\,. \end{aligned} $$
(4.4.15)

Integrating the last estimate leads to x(t) ≤ (Ct)p. Taking the limit p →∞, we infer that x(t) = 0 for 0 ≤ t < 1∕C. Since C is an absolute constant, we can iterate the argument and show that x(t) vanishes identically. Observe that this argument would not work if in (4.4.15) we had p α, α > 1, instead of p. In order to make the previous reasoning rigorous, we need to use some more or less standard approximation arguments; we refer to [45] and [37] for the details of such reasoning.
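For completeness, the integration step behind x(t) ≤ (Ct)p is the following routine computation (formal at x = 0; the singularity there is handled by the approximation arguments just mentioned):

```latex
% Set y(t) = x(t)^{1/p}. Then, by (4.4.15),
\dot{y}(t) = \frac{1}{p}\, x(t)^{\frac{1}{p}-1}\,\dot{x}(t)
  \leq \frac{1}{p}\, x(t)^{\frac{1}{p}-1}\; C\,p\; x(t)^{1-\frac{1}{p}} = C .
% Since y(0) = 0, integration gives y(t) \leq Ct, that is x(t) \leq (Ct)^p.
```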