1 Introduction

We consider the question of pointwise almost everywhere (a.e.) convergence of solutions to the cubic nonlinear Schrödinger equation (NLS) on \(\mathbb {T}^2\), namely

$$\displaystyle \begin{aligned} \left\{ \begin{array}{lll} i \partial_t u + \Delta u &=& \pm |u|{}^2 u , \\ u(x,0) &=& f(x). \end{array} \right. \end{aligned} $$
(1)

If f ∈ Hs, for what s do we have that u(x, t) → f(x) as t → 0 for (Lebesgue) almost every x?

In the linear Euclidean setting, namely when the linear Schrödinger equation is posed on \(\mathbb {R}^d\), this question was first posed by Carleson [8]. He proved (Lebesgue) a.e. convergence of eit Δ f(x) to f(x) for \(f \in H^s(\mathbb {R})\) with \(s \geq \frac 14\). Dahlberg–Kenig [11] showed that this one-dimensional result is sharp, proving the necessity of the regularity condition \(s\geq \frac 14\) in any dimension. The (considerably more difficult) higher dimensional problem has been studied by many authors [1, 4, 10, 12, 16, 20, 22,23,24, 26, 28, 29, 31, 32, 34]. Recently, Bourgain [5] proved that \(s \geq \frac {d}{2(d+1)}\) is necessary (see also [21, 24] for some refinements of this result). This has been proved to be sharp, up to the endpoint, by Du–Guth–Li [15] on \(\mathbb {R}^2\) and by Du–Zhang [14] in higher dimensions (the endpoint case is still open in dimensions d ≥ 2).

In the periodic case, much less is known. When d = 1, Moyua–Vega [27] proved the sufficiency of \(s > \frac 13\) and the necessity of \(s \geq \frac 14\). Their proof, based on Strichartz estimates, has been extended to dimension d = 2 in [35] and to higher dimensions in [9]. In fact, together with recent improvements in periodic Strichartz estimates [6], one can show that \(s > \frac {d}{d+2}\) is a sufficient condition for almost everywhere convergence to the initial data. On the other hand, there are several counterexamples showing that we have the same necessary conditions as on \(\mathbb {R}^d\) [9, 17, 27], namely the necessity of \(s \geq \frac {d}{2(d+1)}\); in particular, one can “adapt” the counterexamples from \(\mathbb {R}^d\) to the periodic setting. At the moment, in the periodic case, almost everywhere convergence when \(s\in \left [ \frac {d}{2(d+1)}, \frac {d}{d+2} \right ]\) remains an open question.
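For concreteness, in the case d = 2 treated below, these two thresholds evaluate to

$$\displaystyle \begin{aligned} \frac{d}{2(d+1)} \bigg|{}_{d=2} = \frac 13 \, , \qquad \frac{d}{d+2} \bigg|{}_{d=2} = \frac 12 \, , \end{aligned}$$

so that on \(\mathbb {T}^2\) almost everywhere convergence holds for s > 1∕2, fails for s < 1∕3, and is open for \(s \in \left [ \frac 13, \frac 12 \right ]\).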

In the first part of this chapter, we show how to extend the a.e. convergence statement

$$\displaystyle \begin{aligned} \lim_{t \to 0} e^{it \Delta} f(x) = f(x), \qquad {\mbox{for a.e.}\ x \in \mathbb{T}^2\ \mbox{and for all}\ f \in H^{s}(\mathbb{T}^2), s > 1/2}\end{aligned} $$
(2)

to the case of the cubic equation. The following is a special case of Theorem 1.1 in [9].

Theorem 1

If \(f \in H^{s}(\mathbb {T}^2)\) with s > 1∕2 and u is the corresponding solution to (1), then

$$\displaystyle \begin{aligned} \lim_{t \to 0} u(x, t) = f(x) \quad \mathit{\mbox{for a.e.}}\ x \in \mathbb{T}^2 \, . \end{aligned} $$
(3)

Remark 1

By the proof, it will be clear that any improvement of the amount of Sobolev regularity that is sufficient for the convergence of the linear Schrödinger flow on \(\mathbb {T}^2\) would imply an analogous improvement in the statement of Theorem 1 as well.

In the second part of this chapter, we consider probabilistic improvements to the convergence problem. More precisely, we will show that a randomization of the Fourier coefficients of the initial data guarantees a better pointwise behavior of the associated linear (and also nonlinear) evolution. To explain why we may expect this, it is worth mentioning that counterexamples to the linear pointwise convergence problem in the periodic setting have been constructed in [17] using as a building block for the initial datum the tensor product of Dirichlet kernels

$$\displaystyle \begin{aligned} \prod_{\ell =1, \ldots, d} \, \sum_{k_{\ell} \in \mathbb{Z}, \, |k_\ell| \leq N} \, e^{ik_{\ell} \cdot x_{\ell} }, \qquad x :=(x_1, \ldots, x_d), \end{aligned} $$
(4)

where N ≫ 1 is a large frequency parameter. It is worth recalling that the pointwise convergence problem is essentially equivalent to establishing an \(L^2(\mathbb {T}^2)\) estimate for the maximal Schrödinger operator

$$\displaystyle \begin{aligned} \bigg\| \sup_{t \in [0,1]} |e^{it\Delta} f| \bigg\|{}_{L^2(\mathbb{T}^2)} \lesssim \| f \|{}_{H^s(\mathbb{T}^2)}. \end{aligned} $$
(5)

It has been observed in [17, 27] that (5) behaves particularly badly on data of the form (4). It is in fact seen to be false for \(s <\frac {d}{2(d+1)}\), taking N →∞. The moral is that the bad counterexamples are characterized by having a very rigid structure: the Fourier coefficients in (4) are indeed all equal to 1. This suggests considering as “good” initial data the following randomized Fourier series

$$\displaystyle \begin{aligned} f^\omega(x) = \sum_{n\in \mathbb{Z}^d} \frac{g_n^\omega}{\langle n \rangle^{\frac{d}2+\alpha}} e^{in \cdot x} \, , \qquad \alpha >0 \, , \end{aligned} $$
(6)

where \(g_n^\omega \) are independent (complex) standard Gaussian variables. The Japanese brackets are defined as usual as \(\langle \cdot \rangle = (1 + |\cdot |{ }^2)^{\frac {1}{2}}\).

It is easy to see that if we fix \(t \in \mathbb {R}\), then eit Δ fω(x) belongs to \(\bigcap _{s < \alpha }H^{s}(\mathbb {T}^d)\) \(\mathbb P\)-almost surely (a.s.), where \(\mathbb P\) is the probability measure induced by the sequence \(\{ g_n^\omega \}_{n \in \mathbb {Z}^d}\). Thus we are working at the Hα level. In fact, more is true, namely that eit Δ fω(x) belongs to \(\bigcap _{s < \alpha }C^{s}(\mathbb {T}^d)\), \(\mathbb P\)-a.s.; in particular, eit Δ fω is \(\mathbb {P}\)-a.s. a continuous function of the x variable. On the other hand, the randomization does not improve the regularity, in the sense that \(\| f^\omega \|{ }_{H^\alpha (\mathbb {T}^d)} =\infty \) also holds \(\mathbb {P}\)-a.s.; see for example Remark 1.2 in [7] and the introduction of [25].
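Indeed, since the free evolution does not change the modulus of the Fourier coefficients, using \(\mathbb {E} |g_n^\omega |{ }^2 = 1\) one computes

$$\displaystyle \begin{aligned} \mathbb{E} \, \| e^{it \Delta} f^\omega \|{}^2_{H^s(\mathbb{T}^d)} = \sum_{n\in \mathbb{Z}^d} \frac{\langle n \rangle^{2s}}{\langle n \rangle^{d + 2\alpha}} \, \mathbb{E} |g_n^\omega|{}^2 = \sum_{n\in \mathbb{Z}^d} \frac{1}{\langle n \rangle^{d + 2(\alpha - s)}} \, , \end{aligned}$$

and the last series is finite precisely when s < α; the \(\mathbb {P}\)-a.s. divergence of the Hα norm reflects the divergence of this series at s = α, combined with the independence of the \(g_n^\omega \).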

We have the following improved pointwise (a.e.) convergence result for randomized initial data. The following is the first part of Theorem 1.3 in [9].

Proposition 1

Let α > 0, and let fω be of the form (6). We have \(\mathbb {P}\) -a.s. the following. For all \(t \in \mathbb {R}\) , the free solution eit Δ fω belongs to \(\bigcap _{s < \alpha }C^{s}(\mathbb {T}^d)\) and

$$\displaystyle \begin{aligned} e^{it\Delta} f^\omega(x) \to f^\omega(x) \quad \mathit{\mbox{as}}\ t \to 0 \end{aligned}$$

uniformly in \(x \in \mathbb {T}^d\).

Finally, we want to prove a similar statement for the cubic NLS (1). In fact, it will be more convenient to work with the Wick-ordered version of the equation (WNLS)

$$\displaystyle \begin{aligned} \left\{ \begin{array}{lll} i \partial_t u + \Delta u &=& \mathcal{N}(u) , \\ u(x,0) &=& f(x), \end{array} \right. \end{aligned} $$
(7)

where

$$\displaystyle \begin{aligned} \mathcal{N}(u) := \left( |u|{}^2 - 2 \mu \right) u \, , \qquad \mu := \frac{1}{(2\pi)^d} \int_{\mathbb{T}^d} |u(x,t)|{}^2 \, dx \end{aligned} $$
(8)

(recall that μ is a conserved quantity). Since solutions to WNLS are related to those of the cubic NLS by multiplication with the factor ei2μt, the study of pointwise convergence for WNLS turns out to be equivalent to that for NLS. The following is the second part of Theorem 1.3 in [9].
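The gauge relation is elementary to verify: if v solves (1) (say with the + sign; the − sign is analogous) and u := ei2μtv, then |u| = |v|, the quantity μ is constant in time, and

$$\displaystyle \begin{aligned} i \partial_t u + \Delta u = - 2 \mu \, u + e^{i 2\mu t} \left( i \partial_t v + \Delta v \right) = - 2 \mu \, u + |u|{}^2 u = \mathcal{N}(u) \, , \end{aligned}$$

so u solves (7); in particular, u and v have the same pointwise convergence behavior as t → 0.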

Theorem 2

Let d = 2, α > 0, and let fω be of the form (6). Let u be the solution to WNLS (7) with initial data fω . We have \(\mathbb {P}\) -almost surely:

$$\displaystyle \begin{aligned} \lim_{t \to 0} u(x, t) = f^{\omega}(x) \quad \mathit{\mbox{for a.e.}}\ x \in \mathbb{T}^2 \, . \end{aligned} $$
(9)

Thus the same is true for solutions to the cubic NLS.

1.1 Notations and Terminology

For a fixed \(p\in \mathbb {R}\), we often use the notation p+ := p + ε, p− := p − ε, where ε is any sufficiently small strictly positive real number. When the same inequality contains two such quantities, we compare them with the following notation: p + ⋯+ := p + ε ⋅ (number of +), p −⋯− := p − ε ⋅ (number of −). We will use C > 0 to denote various constants depending only on fixed parameters, such as the dimension d. The value of C may differ from line to line. Let A, B > 0. We write \(A \lesssim B\) if A ≤ CB for such a constant C > 0. We write A ≳ B if \(B \lesssim A\) and A ∼ B when \(A \lesssim B\) and A ≳ B. We write A ≪ B if A ≤ cB for c > 0 sufficiently small (and depending only on fixed parameters) and A ≫ B if B ≪ A. We denote \(A \wedge B := \min (A, B)\) and \(A \vee B := \max (A, B)\). We refer to the following inequality:

$$\displaystyle \begin{aligned} \| D^s P_N f \|{}_{L^{q}} \lesssim N^{s + \frac{d}{p} - \frac{d}{q} } \| P_N f \|{}_{L^p}, \quad 1 \leq p \leq q \leq \infty \, , \end{aligned}$$

simply as the Bernstein inequality. Here, P N is the frequency projection onto the annulus |ξ|∼ N.

It is useful to recall that the Strichartz estimates are the main tool to study the nonlinear Schrödinger flow. We recall the periodic Strichartz estimates from [2, 6]:

$$\displaystyle \begin{aligned} \|e^{it\Delta} P_N f \|{}_{L^{p}_{x,t}(\Omega^{d+1})} \lesssim N^{\frac{d}{2} - \frac{d+2}{p} + } \| P_N f \|{}_{L^2_{x}(\Omega^d)}, \quad p \geq 2 \left(\frac{d+2}{d}\right). \end{aligned} $$
(10)
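For d = 2, which is the relevant case for Theorem 1, the admissibility condition in (10) reads p ≥ 4, and at the endpoint p = 4 the estimate becomes

$$\displaystyle \begin{aligned} \|e^{it\Delta} P_N f \|{}_{L^{4}_{x,t}(\Omega^{3})} \lesssim N^{0+} \| P_N f \|{}_{L^2_{x}(\Omega^2)} \, , \end{aligned}$$

which is the source of the 0+ losses appearing in the multilinear estimates of Lemmas 3 and 4 below.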

2 Proof of Theorem 1

Recall that the flow of (1) is locally well defined for initial data \(f \in H^s(\mathbb {T}^2)\) with s > 0 [2]. The solutions are constructed via a fixed-point argument in the restriction space \(X^{s, b}_{\delta }\) for δ > 0 sufficiently small (depending polynomially on the \(H^s(\mathbb {T}^2)\) norm of f). We recall that

$$\displaystyle \begin{aligned} \| F \|{}_{X^{s, b}_{\delta}} := \inf_{ G = F \ \mbox{on} \ t \in [0, \delta]} \| G \|{}_{X^{s, b}}, \end{aligned}$$

where

$$\displaystyle \begin{aligned} \| F \|{}^2_{X^{s, b}} := \int_{\mathbb{R}} \sum_{n \in \mathbb{Z}^{d}} \langle \tau + |n|{}^2 \rangle^{2b} \langle n \rangle^{2s} | \widehat{F}(n, \tau)|{}^2 d \tau \end{aligned}$$

and \(\widehat {F}\) is the space–time Fourier transform of F.

Let \(\Phi ^N_t \) be the flow associated to the truncated NLS equation

$$\displaystyle \begin{aligned} i \partial_t \Phi^N_t f + \Delta \Phi^N_t f = \operatorname{P}_{\leq N} \mathcal{N} (\Phi^N_t f )\, , \end{aligned} $$
(11)

with initial datum \(\Phi ^N_0 f := \operatorname {P}_{\leq N} f\). We denote by \(\operatorname {P}_{\leq N}\) the frequency projection onto the ball of radius N centered at the origin. We write \(\Phi _t f := \Phi ^\infty _t f\) for the flow of the NLS equation with initial datum \(f = \operatorname {P}_{\infty } f\). We also denote \(\operatorname {P}_{>N} := \operatorname {P}_{\infty } - \operatorname {P}_{\leq N}\) and, as already mentioned, \(\operatorname {P}_{N} := \operatorname {P}_{\leq N} - \operatorname {P}_{\leq N/2}\).
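Explicitly, in terms of Fourier coefficients, these projections act as

$$\displaystyle \begin{aligned} \operatorname{P}_{\leq N} f(x) = \sum_{|n| \leq N} \widehat{f}(n) \, e^{i n \cdot x} \, , \qquad \operatorname{P}_{N} f(x) = \sum_{N/2 < |n| \leq N} \widehat{f}(n) \, e^{i n \cdot x} \, . \end{aligned}$$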

A similar well-posedness result holds for the truncated flow, uniformly in \(N \in \mathbb {N}\). Of course, at fixed N, since Eq. (11) is finite-dimensional, one can construct a global flow in an elementary way using the Cauchy–Lipschitz theorem for ODEs and the conservation of \(\| \Phi ^N_t f \|{ }_{L^2(\mathbb {T}^2)}\) (which holds for all \(N \in \mathbb {N}\)). However, in the following, we will need (as usual in the study of NLS) a control of \(\Phi ^N_t f\) uniform over N. This is not elementary and will be ensured by the local well-posedness theory in the restriction space.

As already recalled, the main tool in the study of the pointwise convergence properties of the linear Schrödinger equation is the maximal Schrödinger operator

$$\displaystyle \begin{aligned} x \to \sup_{0 \leq t \leq \delta} |e^{it\Delta } f(x) |, \qquad \delta >0. \end{aligned}$$

Assume indeed that for some δ ∈ (0, 1], one has

$$\displaystyle \begin{aligned} \left\| \sup_{0 \leq t \leq \delta} |e^{it\Delta } f(x) | \right\|{}_{L^2_x(\mathbb{T}^2)} \lesssim \| f \|{}_{H^{s}_x(\mathbb{T}^2)} \, , \end{aligned} $$
(12)

Then it is not hard to see that eit Δ f(x) → f(x) as t → 0 for almost every (with respect to the Lebesgue measure) \(x \in \mathbb {T}^2\). The proof is a straightforward modification of the argument that we will use to prove Proposition 2 below.

In the nonlinear setting, we need a (nonlinear) replacement of (12). A convenient replacement is the maximal estimate (13).

Proposition 2

Let \(f \in L^2(\mathbb {T}^2)\) be such that

$$\displaystyle \begin{aligned} \lim_{N \to \infty} \left\| \sup_{0 \leq t \leq \delta} | \Phi_t f(x) - \Phi^N_t f(x) | \right\|{}_{L^2_x(\mathbb{T}^2)} = 0. \end{aligned} $$
(13)

Then Φ tf(x) → f(x) as t → 0 for almost every \(x \in \mathbb {T}^2\).

From the proof, it will be clear that in (13) we can replace the L2 norm with a weak L1 norm. However, it is usually convenient to work in the L2 setting.

Proof

To prove Proposition 2, we decompose the difference as follows:

$$\displaystyle \begin{aligned} | \Phi_t f(x) - f(x) | & \leq | \Phi_t f(x) - \Phi^N_t f(x) | + | \Phi^N_t f(x) - \operatorname{P}_{\leq N} f(x) | + | \operatorname{P}_{> N} f(x) | \end{aligned} $$
(14)

and pass to the limit t → 0. Since the truncated equation (11) is a finite-dimensional ODE with solutions continuous in time, the second term on the right-hand side vanishes in the limit, namely

$$\displaystyle \begin{aligned} \lim_{t \to 0} \Phi^N_t f(x) = \operatorname{P}_{\leq N} f(x) \, , \end{aligned}$$

for all \(x \in \mathbb {T}^2\). So we arrive at

$$\displaystyle \begin{aligned} \limsup_{t \to 0} |\Phi_t f - f | \leq \limsup_{t \to 0} | \Phi_t f - \Phi^N_t f | + | \operatorname{P}_{> N} f | \, . \end{aligned}$$

Let λ > 0. Using the Chebyshev inequality,

$$\displaystyle \begin{aligned} | \{ x \in \mathbb{T}^2 : \limsup_{t \to 0} | \Phi_t f - f | > \lambda \} | \lesssim \lambda^{-2} \left( \left\| \sup_{0 \leq t \leq \delta} | \Phi_t f - \Phi^N_t f | \right\|{}^2_{L^2(\mathbb{T}^2)} + \| \operatorname{P}_{> N} f \|{}^2_{L^2(\mathbb{T}^2)} \right) , \end{aligned}$$

where |⋅| is the Lebesgue measure. On the other hand, we have \( \|\operatorname {P}_{> N} f \|{ }_{L^2(\mathbb {T}^2)} \to 0 \) as N →∞ (since \(f \in L^{2}(\mathbb {T}^2)\)) and

$$\displaystyle \begin{aligned} \lim_{N \to \infty} \left\| \sup_{0 \leq t \leq \delta} | \Phi_t f - \Phi^N_t f | \right\|{}_{L^2(\mathbb{T}^2)} = 0 \end{aligned}$$

by assumption (13). Thus we arrive at

$$\displaystyle \begin{aligned} | \{ x \in \mathbb{T}^2 : \limsup_{t \to 0} | \Phi_t f - f | > \lambda \} | = 0, \end{aligned}$$

and the statement follows taking the union over λ = 1∕k, \(k \in \mathbb {N}\). □

It is not easy to verify the condition (13) directly. However, we can take advantage of a simple lemma that allows us to embed a suitable restriction space into the relevant maximal space, namely the space induced by the norm

$$\displaystyle \begin{aligned} \bigg\| \sup_{t \in [0, \delta]} |F(x, t)| \bigg\|{}_{L^2_x(\mathbb{T}^2)}, \qquad F: (x, t) \in \mathbb{T}^{2} \times \mathbb{R} \to F(x, t) \in \mathbb{C}. \end{aligned}$$

In other words, we can bound the \(L^{2}_x(\mathbb {T}^2)\) norm of the associated maximal function

$$\displaystyle \begin{aligned} x \to \sup_{0 \leq t \leq \delta} |F(x,t)|\end{aligned}$$

with an appropriate \(X^{s,b}_{\delta }\) norm of F. In fact, this is a rather general property of the restriction spaces \(X^{s,b}_{\delta }\) with \(b > \frac 12\). The proof can be found in [30, Lemma 2.9], in the non-periodic case. The argument adapts to the periodic case as well.

Lemma 1

Let \(b > \frac 12\) , and let Y  be a Banach space of functions

$$\displaystyle \begin{aligned} F: (x, t) \in \Omega^{d} \times \mathbb{R} \to F(x, t) \in \mathbb{C} \, .\end{aligned}$$

Let \(\alpha \in \mathbb {R}\) . Assume

$$\displaystyle \begin{aligned} \| e^{i \alpha t} e^{it\Delta} f(x) \|{}_{Y} \leq C \| f \|{}_{H^s(\Omega^d)} \, , \end{aligned} $$
(15)

with a constant C > 0 uniform over \(\alpha \in \mathbb {R}\) . Then

$$\displaystyle \begin{aligned} \| F \|{}_{Y} \leq C \| F \|{}_{X^{s,b}} \, . \end{aligned}$$
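For the reader’s convenience, we sketch the standard argument. Any F can be written as a superposition of modulated free evolutions,

$$\displaystyle \begin{aligned} F(x,t) = \int_{\mathbb{R}} e^{it \lambda} \, e^{it\Delta} f_{\lambda}(x) \, d\lambda \, , \qquad \widehat{f_{\lambda}}(n) := \widehat{F}(n, \lambda - |n|{}^2) \, , \end{aligned}$$

so that, by the Minkowski integral inequality, the hypothesis (15) (which is uniform in the modulation parameter), and the Cauchy–Schwarz inequality in λ,

$$\displaystyle \begin{aligned} \| F \|{}_{Y} \leq C \int_{\mathbb{R}} \| f_{\lambda} \|{}_{H^s(\Omega^d)} \, d\lambda \leq C \left( \int_{\mathbb{R}} \langle \lambda \rangle^{-2b} \, d\lambda \right)^{\frac 12} \left( \int_{\mathbb{R}} \langle \lambda \rangle^{2b} \| f_{\lambda} \|{}^2_{H^s(\Omega^d)} \, d\lambda \right)^{\frac 12} \lesssim \| F \|{}_{X^{s,b}} \, , \end{aligned}$$

where the change of variables \(\lambda = \tau + |n|{}^2\) identifies the last integral with \(\| F \|{ }^2_{X^{s,b}}\), and \(b > \frac 12\) guarantees that \(\int _{\mathbb {R}} \langle \lambda \rangle ^{-2b} d\lambda \) is finite.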

Using Lemma 1 with

$$\displaystyle \begin{aligned} \| F \|{}_{Y} = \left\| \sup_{0 \leq t \leq \delta} | F(x,t) | \right\|{}_{L^{2}_x(\mathbb{T}^2)} \end{aligned}$$

and the fact that the maximal estimate (12) holds for s > 1∕2, we have the following:

Lemma 2

Let \(b > \frac 12\) and s > 1∕2. We have

$$\displaystyle \begin{aligned} \left\| \sup_{0\leq t \leq \delta} | F(x,t) | \right\|{}_{L^{2}_x(\mathbb{T}^2)} \lesssim \| F \|{}_{X^{s, b}_{\delta} } \, . \end{aligned} $$
(16)

We will combine the following lemma with the embedding from Lemma 2 to verify the maximal estimate hypothesis of Proposition 2 for the cubic NLS on \(\mathbb {T}^2\).

Lemma 3

Let d = 2 and s > 0. Then

$$\displaystyle \begin{aligned} \| \mathcal{N} (u) - \mathcal{N} (v) \|{}_{X^{s, - \frac 12 + + }} \lesssim \left( \| u \|{}^{2}_{X^{s, \frac 12 + }} + \| v \|{}^{2}_{X^{s, \frac 12 + }} \right) \| u - v \|{}_{X^{s, \frac 12 + }}. \end{aligned} $$
(17)

In fact, Lemma 3 is a consequence of the following slightly more general statement (that will be useful later) due to Bourgain [3].

Lemma 4

Let d = 2 and s > 0. Let M 1 ≥ M 2 ≥ M 3 be dyadic scales. Then

$$\displaystyle \begin{aligned} \| (\operatorname{P}_{M_1}F) (\operatorname{P}_{M_2} G) (\operatorname{P}_{M_3} H) \|{}_{X^{s, -\frac 12++}} \\ \lesssim \| \operatorname{P}_{M_1} F \|{}_{X^{s, \frac 12+}} \| \operatorname{P}_{M_2} G \|{}_{X^{0+, \frac 12+}} \| \operatorname{P}_{M_3} H \|{}_{X^{0, \frac 12+}}. \end{aligned} $$
(18)

We denote \(R_0 = \| f \|{ }_{H^s(\mathbb {T}^2)}\). Hereafter η will be a smooth cut-off of [0, 1]. Taking δ = δ(R 0) < 1 sufficiently small and combining (25), (26), (27), and Lemma 3, one can show that the map

$$\displaystyle \begin{aligned} \Gamma (u(x,t)) = \eta(t) e^{it \Delta} \operatorname{P}_{\leq N} f(x) - i \eta(t) \int_{0}^{t} e^{i(t-t')\Delta} \operatorname{P}_{\leq N} \mathcal N (u(x,t')) dt' \end{aligned} $$
(19)

is a contraction on the ball \(\{ u \; : \; \| u \|{ }_{X^{s, \frac 12+}_{\delta }}\leq 2R_0\}\), for all \(N \in 2^{\mathbb {N}} \cup \{ \infty \}\). This is a standard argument, so we omit the proof (see for instance [18, Section 3.5.1]). Moreover, a similar computation is part of the proof of Theorem 1. However, we stress that the value of δ is uniform in \(N \in 2^{\mathbb {N}} \cup \{ \infty \}\). In particular, we have

$$\displaystyle \begin{aligned} \| \Phi^N_t f \|{}_{X^{s, \frac 12+}_{\delta}} \leq 2 R_0, \qquad \mbox{for all } \, N \in 2^{\mathbb{N}} \cup \{ \infty \} \, . \end{aligned} $$
(20)
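For orientation, the key estimate behind this claim is, schematically, the following chain: for u in the above ball, by (25), (26), (27), and Lemma 3 (applied with v = 0),

$$\displaystyle \begin{aligned} \| \Gamma (u) \|{}_{X^{s, \frac 12+}_{\delta}} \leq C \| f \|{}_{H^s(\mathbb{T}^2)} + C \delta^{0+} \| \mathcal N (u) \|{}_{X^{s, -\frac 12++}_{\delta}} \leq C R_0 + C \delta^{0+} (2R_0)^3 \, , \end{aligned}$$

and the right-hand side is ≤ 2R 0 once \(\delta ^{0+} R_0^2\) is small enough (up to harmless adjustments of the absolute constants); the contraction property for differences follows in the same way from Lemma 3.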

We are now ready to prove Theorem 1.

2.1 Proof of Theorem 1

By Lemma 2, we have

$$\displaystyle \begin{aligned} \left\| \sup_{0 \leq t \leq \delta} | \Phi_t f(x) - \Phi^N_t f(x) | \right\|{}_{L^2_x(\mathbb{T}^2)} \lesssim \| \Phi_t f - \Phi^N_t f \|{}_{X^{s,\frac 12+}_{\delta}} \, . \end{aligned}$$

Thus, by Proposition 2, it suffices to show that the right-hand side goes to zero as N →∞. For t ∈ [0, δ], we have (see (19))

$$\displaystyle \begin{aligned} & \Phi_t f(x) - \Phi^N_t f(x) \\ & = \eta(t) e^{it \Delta} \operatorname{P}_{>N} f(x) - i \eta(t) \int_{0}^{t} e^{i(t-t')\Delta} \left( \mathcal N (\Phi_{t'} f(x)) - \operatorname{P}_{\leq N} \mathcal N (\Phi^N_{t'} f(x)) \right) dt'. \end{aligned} $$

Then using (25) and (26), we have

$$\displaystyle \begin{aligned} \| \Phi_t f - \Phi^N_t f \|{}_{X^{s,\frac 12+}_{\delta}} \lesssim \| \operatorname{P}_{>N} f \|{}_{H^{s}(\mathbb{T}^2)} + \| \mathcal N (\Phi_t f) - \operatorname{P}_{\leq N} \mathcal N (\Phi^N_t f) \|{}_{X^{s, -\frac 12+}_{\delta}} \, . \end{aligned} $$
(21)

To handle the nonlinear contribution, we further decompose

$$\displaystyle \begin{aligned} \mathcal N (\Phi_t f) - \operatorname{P}_{\leq N} \mathcal N (\Phi^N_t f ) = \operatorname{P}_{\leq N} \left( \mathcal N (\Phi_t f) - \mathcal N (\Phi^N_t f ) \right) + \operatorname{P}_{>N} \mathcal N (\Phi_t f) \, \end{aligned}$$

so that

$$\displaystyle \begin{aligned} \| \Phi_t f - \Phi^N_t f \|{}_{X^{s,\frac 12+}_{\delta}} & \lesssim \| \operatorname{P}_{>N} f \|{}_{H^{s}(\mathbb{T}^2)} + \| \operatorname{P}_{>N} \mathcal N (\Phi_t f) \|{}_{X^{s, -\frac 12+}_{\delta}} \\ & \qquad + \| \operatorname{P}_{\leq N} \left( \mathcal N (\Phi_t f) - \mathcal N (\Phi^N_t f) \right) \|{}_{X^{s, -\frac 12+}_{\delta}} \, . \end{aligned} $$
(22)

Then by (27), Lemma 3, and (20), we get

$$\displaystyle \begin{aligned} & \| \operatorname{P}_{\leq N} \left( \mathcal N (\Phi_t f) - \mathcal N (\Phi^N_t f ) \right) \|{}_{X^{s, -\frac 12+}_{\delta}} \lesssim \delta^{0+} R_{0}^{2} \| \Phi_{t} f - \Phi^N_{t} f \|{}_{X^{s, \frac 12+}_{\delta}} \, , \end{aligned} $$
(23)

where we recall \(R_{0} = \| f \|{ }_{H^s(\mathbb {T}^2)}\). Plugging (23) into (22), taking δ = δ(R 0) small enough, and absorbing

$$\displaystyle \begin{aligned} \delta^{0+} R_{0}^{2} \| \Phi_{t} f - \Phi^N_{t} f \|{}_{X^{s, \frac 12+}_{\delta}} \leq \frac 12 \| \Phi_{t} f - \Phi^N_{t} f \|{}_{X^{s, \frac 12+}_{\delta}} \end{aligned}$$

into the left-hand side, we arrive at

$$\displaystyle \begin{aligned} \| \Phi_t f - \Phi^N_t f \|{}_{X^{s,\frac 12+}_{\delta}} & \lesssim \| \operatorname{P}_{>N} f \|{}_{H^{s}(\mathbb{T}^2)} + \| \operatorname{P}_{>N} \mathcal N (\Phi_{t} f) \|{}_{X^{s, -\frac 12+}_{\delta}}. \end{aligned} $$
(24)

The right-hand side of (24) goes to zero as N →∞ since \(f \in H^{s}(\mathbb {T}^2)\) and \(\mathcal N (\Phi _{t} f) \in X^{s, -\frac 12+}_{\delta }\); in fact, applying Lemma 3 with v = 0 and recalling (20), we have

$$\displaystyle \begin{aligned} \| \mathcal N (\Phi_{t} f) \|{}_{X^{s, -\frac 12+}_{\delta}} \lesssim \| \Phi_{t} f \|{}^3_{X^{s, \frac 12+}_{\delta}} \lesssim R_0^3 \, . \end{aligned}$$

This concludes the proof of (3).

We conclude this section by recalling some well-known properties of restriction spaces that we have used (and that we will use in the rest of the paper). Recall that η is a smooth cut-off of the unit interval.

Lemma 5

Let \(s \in \mathbb {R}\) . Then

$$\displaystyle \begin{aligned} \| \eta (t) e^{it \Delta} f(x) \|{}_{X^{s, \frac 12+}} \lesssim \| f \|{}_{H^{s}(\Omega^d)} \, , \end{aligned} $$
(25)
$$\displaystyle \begin{aligned} \left\| \eta(t) \int_0^t e^{i(t-t') \Delta} F(\cdot,t') dt' \right\|{}_{X^{s, \frac 12 + }} \lesssim \| F \|{}_{X^{s, - \frac 12 + }} \, , \end{aligned} $$
(26)
$$\displaystyle \begin{aligned} \| F \|{}_{X^{s, - \frac 12 + }_{\delta}} \lesssim \delta^{0+} \| F \|{}_{X^{s, - \frac 12 ++}_{\delta}} \, . \end{aligned} $$
(27)

3 Proof of Proposition 1

Here we prove almost surely uniform convergence of the randomized Schrödinger flow to the initial datum, at the H0+ level, namely Proposition 1. Thus our goal is to show that eit Δ fω → fω as t → 0 uniformly over \(x \in \mathbb {T}^d\) and \(\mathbb {P}\)-almost surely for data fω defined as

$$\displaystyle \begin{aligned} f^\omega(x) = \sum_{n \in \mathbb{Z}^d} \frac{g_n^\omega}{\langle n \rangle^{\frac{d}{2}+ \alpha}} e^{i n \cdot x}, \qquad x\in \mathbb{T}^d \, , \end{aligned} $$
(28)

where α > 0 and each \(g_n^\omega \) is complex and independently drawn from a standard normal distribution. In fact, the argument we present works for independent \(g_n^\omega \) drawn from any distribution with sufficient decay of the tails (for instance, sub-Gaussian is enough). This will not be the case in Theorem 2, where we will need to take advantage of the hypercontractivity of (multilinear forms of) normal distributions. However, for definiteness, we only present the standard normal case in this section as well.

Fix \(t \in \mathbb {R}\). We have that \(\mathbb {P}\)-almost surely

$$\displaystyle \begin{aligned} e^{it \Delta} f^\omega \in \bigcap_{s <\alpha} H^s(\mathbb{T}^d). \end{aligned}$$

This is an immediate consequence of (44) below, taking the union over ε > 0. In fact, for all \(t \in \mathbb {R}\), we have \(\mathbb {P}\)-almost surely

$$\displaystyle \begin{aligned} e^{it \Delta} f^\omega \in \bigcap_{s <\alpha} C^s(\mathbb{T}^d); \end{aligned}$$

thus in particular, eit Δ fω are \(\mathbb {P}\)-almost surely continuous functions of the x variable. This is a consequence of the higher integrability property (34) below, from which one can easily deduce uniform convergence as N →∞ of the partial sums P ≤Neit Δ fω, with probability larger than 1 − ε. So the limit eit Δ fω is continuous with the same probability, and the almost sure continuity follows taking the union over ε > 0.
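More precisely, on the event where (34) holds for all dyadic N simultaneously (up to an N0+ loss; see Remark 2 below), one has

$$\displaystyle \begin{aligned} \sum_{N \in 2^{\mathbb{N}}} \| \operatorname{P}_N\! e^{it \Delta} f^\omega \|{}_{L^{\infty}_x(\mathbb{T}^d)} \lesssim \left( - \ln \varepsilon \right)^{1/2} \sum_{N \in 2^{\mathbb{N}}} N^{-\alpha+} < \infty \, , \end{aligned}$$

so that the partial sums P ≤Neit Δ fω converge uniformly, and their limit is continuous.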

Before completing the proof of Proposition 1, we recall a few lemmata. We start by recalling the following well-known concentration bound:

Lemma 6 ([7, Lemma 3.1])

There exists a constant C such that

$$\displaystyle \begin{aligned} \bigg\| \sum_{n \in \mathbb{Z}^d} g_n^\omega \; a_n \bigg\|{}_{L^r_\omega} \leq C r^{\frac{1}{2}} \| a_n\|{}_{\ell^2_n(\mathbb{Z}^d)} \end{aligned} $$
(29)

for all r ≥ 2 and \(\{ a_n \} \in \ell ^2(\mathbb {Z}^d)\).

Using (29) with \(a_n = e^{i n \cdot x - i |n|{ }^2 t} \langle n \rangle ^{-\frac {d}{2} - \alpha }\), we obtain for r ≥ 2 that, for fω as in (28),

$$\displaystyle \begin{aligned} \| \operatorname{P}_N\! e^{it \Delta} f^\omega \|{}_{L^r_\omega} \leq C r^{\frac 12} N^{-\alpha} \, , \end{aligned} $$
(30)

with a constant uniform in \(t \in \mathbb {R}\). From this, we also have improved \(L^p_x\) estimates for randomized data.

Lemma 7

Let p ∈ [2, ∞). Assume fω is as in (28). There exist constants C and c, independent of \(t \in \mathbb {R}\) , such that

$$\displaystyle \begin{aligned} \mathbb{P}(\| \operatorname{P}_N\! e^{it \Delta} f^\omega\|{}_{L^p_x(\mathbb{T}^d)} > \lambda) \leq Ce^{-cN^{2\alpha}\lambda^2} \, . \end{aligned} $$
(31)

Thus

$$\displaystyle \begin{aligned} \mathbb{P}(\| \operatorname{P}_N\! e^{it \Delta} f^\omega\|{}_{L^\infty_x(\mathbb{T}^d)} > \lambda) \leq Ce^{-cN^{2\alpha-}\lambda^2} \, . \end{aligned} $$
(32)

In particular, for any ε > 0 sufficiently small, we have

$$\displaystyle \begin{aligned} \| \operatorname{P}_N\! e^{it \Delta} f^\omega \|{}_{L^p_x(\mathbb{T}^d)} \lesssim N^{-\alpha} \left( - \ln \varepsilon \right)^{1/2}, \quad N \in 2^{\mathbb{Z}} \end{aligned} $$
(33)

and

$$\displaystyle \begin{aligned} \| \operatorname{P}_N\! e^{it \Delta} f^\omega \|{}_{L^{\infty}_x(\mathbb{T}^d)} \lesssim N^{-\alpha +} \left( - \ln \varepsilon \right)^{1/2} , \quad N \in 2^{\mathbb{Z}} \, , \end{aligned} $$
(34)

with probability at least 1 − ε.

Proof

We prove (31); then (32) follows by the Bernstein inequality. By Minkowski’s inequality and Lemma 6 above, we have for any r ≥ p ≥ 2

$$\displaystyle \begin{aligned} \left( \int \|\operatorname{P}_N\! e^{it \Delta} f^\omega\|{}_{L^p_x(\mathbb{T}^d)}^r d \mathbb{P}(\omega) \right)^{\frac{1}{r}} \leq \Bigl\| \| \operatorname{P}_N\! e^{it \Delta} f^\omega\|{}_{L^r_\omega} \Bigr\|{}_{L^p_x(\mathbb{T}^d)} \leq C N^{-\alpha}r^{\frac{1}{2}}, \end{aligned} $$

which is enough to conclude that \(\|\operatorname {P}_N\! e^{it \Delta } f^\omega \|{ }_{L^p_x(\mathbb {T}^d)}\) is a sub-Gaussian random variable satisfying the tail bound (31). □
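The moment-to-tail step is standard; we sketch it. By the Chebyshev inequality, for every r ≥ p,

$$\displaystyle \begin{aligned} \mathbb{P}(\| \operatorname{P}_N\! e^{it \Delta} f^\omega\|{}_{L^p_x(\mathbb{T}^d)} > \lambda) \leq \left( \frac{C N^{-\alpha} r^{\frac 12}}{\lambda} \right)^{r} = e^{-r} \qquad \mbox{for the choice} \ r = (eC)^{-2} N^{2\alpha} \lambda^2 \, , \end{aligned}$$

which gives (31) in the range \(N^{2\alpha }\lambda ^2 \gtrsim 1\) where this choice of r is admissible (r ≥ p); in the complementary range, (31) holds trivially after enlarging the constant on the right-hand side.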

Note that using (31)–(32), the triangle inequality

$$\displaystyle \begin{aligned} \|P_{>N} (\cdot) \| \leq \sum_{M \in 2^{\mathbb{N}} : M > N} \|P_{M} (\cdot) \| , \end{aligned}$$

the union bound, and the fact that

$$\displaystyle \begin{aligned} \sum_{M \in 2^{\mathbb{N}} : M > N} e^{-cM^{2\alpha} k^{-2}} \lesssim e^{-cN^{2\alpha}k^{-2}}, \end{aligned}$$

we see that, for all \(t \in \mathbb {R}\) and α > 0, we have (for p < ∞)

$$\displaystyle \begin{aligned} \mathbb{P} \left( \| e^{it\Delta} \operatorname{P}_{>N} f^\omega \|{}_{L^{p}_{x}(\mathbb{T}^{d})} > \lambda \right) \lesssim e^{-cN^{2\alpha} \lambda^2} \end{aligned} $$
(35)
$$\displaystyle \begin{aligned} \mathbb{P} \left( \| e^{it\Delta} \operatorname{P}_{>N} f^\omega \|{}_{L^{\infty}_{x}(\mathbb{T}^{d})} > \lambda \right) \lesssim e^{-cN^{2\alpha-} \lambda^2}. \end{aligned} $$
(36)

Remark 2

Proceeding as we did to prove (35)–(36), we also easily see that the exceptional set where (33)–(34) are not valid can be chosen to be the same for all \(N \in \mathbb {N}\), paying an N0+ loss on the right-hand side of the estimates.

Proceeding as in the proof of Lemma 7, we also obtain improved Strichartz estimates for randomized data.

Lemma 8

Let p ∈ [2, ∞). Assume fω is as in (28). Then we have

$$\displaystyle \begin{aligned} \mathbb{P}\left( \| e^{it\Delta} \operatorname{P}_N\! f^\omega \|{}_{L^p_{x,t}(\mathbb{T}^{d+1})} > \lambda \right) \leq C e^{-cN^{2\alpha}\lambda^2}. \end{aligned} $$
(37)

Thus

$$\displaystyle \begin{aligned} \mathbb{P}\left( \| e^{it\Delta} \operatorname{P}_N\! f^\omega \|{}_{L^\infty_{x,t}(\mathbb{T}^{d+1})} > \lambda \right) \leq C e^{-c N^{2\alpha-}\lambda^2}. \end{aligned} $$
(38)

In particular, for any ε > 0 sufficiently small, we have

$$\displaystyle \begin{aligned} \| e^{it\Delta} \operatorname{P}_N\! f^\omega \|{}_{L^p_{x,t}(\mathbb{T}^{d+1})} \lesssim N^{-\alpha} \left( - \ln \varepsilon \right)^{1/2} , \quad N \in 2^{\mathbb{Z}} \end{aligned} $$
(39)

and

$$\displaystyle \begin{aligned} \| e^{it\Delta} \operatorname{P}_N\! f^\omega \|{}_{L^\infty_{x,t}(\mathbb{T}^{d+1})} \lesssim N^{-\alpha +} \left( - \ln \varepsilon \right)^{1/2} , \quad N \in 2^{\mathbb{Z}} \, , \end{aligned} $$
(40)

with probability at least 1 − ε.

The bounds (37)–(38) imply

$$\displaystyle \begin{aligned} \mathbb{P} \left( \| e^{it\Delta} \operatorname{P}_{>N} f^\omega \|{}_{L^{p}_{x,t}(\mathbb{T}^{d+1})} > \lambda \right) \lesssim e^{-cN^{2\alpha} \lambda^2} \end{aligned} $$
(41)
$$\displaystyle \begin{aligned} \mathbb{P} \left( \| e^{it\Delta} \operatorname{P}_{>N} f^\omega \|{}_{L^{\infty}_{x,t}(\mathbb{T}^{d+1})} > \lambda \right) \lesssim e^{-cN^{2\alpha-} \lambda^2} \end{aligned} $$
(42)

exactly as (31)–(32) imply (35)–(36). We also have an analogue of Remark 2:

Remark 3

The exceptional set where (39)–(40) are not valid can be chosen to be the same for all \(N \in \mathbb {N}\), paying an N0+ loss on the right-hand side of the estimates.

Fix \(t \in \mathbb {R}\). Later we will also need the following bound for the Hs norm of eit Δ fω with s < α. This is a well-known fact that we recall: applying again (29), now with \(a_n = e^{i n \cdot x - i |n|{ }^2 t} \langle n \rangle ^{-\frac {d}{2} - \alpha +s}\), we get for r ≥ 2

$$\displaystyle \begin{aligned} \| \operatorname{P}_N \langle D \rangle^s \! e^{it\Delta} f^\omega \|{}_{L^r_\omega} \leq C r^{\frac 12} N^{s-\alpha}, \qquad s <\alpha \, . \end{aligned}$$

Here 〈D〉 denotes the Fourier multiplier operator with symbol 〈n〉. Proceeding as in the proof of Lemma 7, we also obtain

$$\displaystyle \begin{aligned} \mathbb{P}\left( \| \langle D \rangle^s \operatorname{P}_N\! e^{it\Delta} f^\omega \|{}_{L^2_{x}(\mathbb{T}^d)} > \lambda \right) \leq C e^{-cN^{2(\alpha-s)}\lambda^2}, \qquad s <\alpha, \end{aligned} $$
(43)

and in particular, for any ε > 0 sufficiently small

$$\displaystyle \begin{aligned} \| e^{it \Delta} f^\omega \|{}_{H^s_{x}(\mathbb{T}^d)} \lesssim \left( - \ln \varepsilon \right)^{1/2} \qquad s < \alpha, \quad t \in \mathbb{R} \, , \end{aligned} $$
(44)

with probability at least 1 − ε. Again the constant is uniform in \(t \in \mathbb {R}\).

We are now ready to complete the proof of Proposition 1.

3.1 Proof of Proposition 1

Invoking the Borel–Cantelli lemma, it is enough to show that

$$\displaystyle \begin{aligned} \mathbb{P} \left( \limsup_{t \to 0} \| e^{it\Delta} f^\omega - f^\omega \|{}_{L^{\infty}_x(\mathbb{T}^d)} > 1 / k \right) \lesssim \gamma_k, \end{aligned} $$
(45)

for a summable sequence \(\{\gamma _k \}_{k \in \mathbb {N}}\). Let us decompose

$$\displaystyle \begin{aligned} | e^{it\Delta} f^\omega - f^\omega | \leq | e^{it\Delta} \operatorname{P}_{>N} f^\omega | + | e^{it\Delta} \operatorname{P}_{\leq N} f^\omega - \operatorname{P}_{\leq N} f^\omega | + | \operatorname{P}_{>N} f^\omega |. \end{aligned} $$
(46)

Using (36) (with t = 0) and (42), we see that

$$\displaystyle \begin{aligned} \| e^{it\Delta} \operatorname{P}_{>N} f^\omega \|{}_{L^{\infty}_{x, t}(\mathbb{T}^{d+1})} + \| \operatorname{P}_{>N} f^\omega \|{}_{L^{\infty}_{x}(\mathbb{T}^d)} \leq \frac{1}{2k} \end{aligned} $$
(47)

holds for all ω outside an exceptional set of measure \(\lesssim e^{-cN^{2\alpha } k^{-2}}\). We choose N = N k via the identity \(N_k^{2\alpha } = k^3 \), in such a way that \(e^{-cN_k^{2\alpha } k^{-2}} = e^{-ck}\) is summable (over \(k \in \mathbb {N}\)). Let s∗ > d∕2. Since

$$\displaystyle \begin{aligned} e^{it\Delta} \operatorname{P}_{\leq N_k} f^\omega - \operatorname{P}_{\leq N_k} f^\omega = \sum_{|n| \leq N_k} (e^{-it |n|{}^2} - 1) e^{in \cdot x} \hat{f^{\omega}}(n) , \end{aligned}$$

using Cauchy–Schwarz, the summability of \(\langle n \rangle ^{-2s^*}\) (over \(n \in \mathbb {Z}^{d}\)) and (44) with s = 0, t = 0 (in the last inequality), we get

$$\displaystyle \begin{aligned} \| e^{it\Delta} \operatorname{P}_{\leq N_k} f^\omega - \operatorname{P}_{\leq N_k} f^\omega \|{}_{L^{\infty}_x(\mathbb{T}^d)} & \lesssim \sup_{|n| \leq N_k } | e^{-it |n|{}^2} - 1 | \left( \sum_{|n| \leq N_k} \langle n \rangle^{2s^*} |\hat{f^{\omega}}(n)|{}^2 \right)^{1/2} \\ & \lesssim |t| (N_k)^{s^*+2} \| f^{\omega} \|{}_{L^2} \lesssim |t|(N_k)^{s^* + 2} k^{1/2} \, , \end{aligned} $$
(48)

for ω outside an exceptional set of probability \(\lesssim e^{-cN_k^{2\alpha } k^{-2}} = e^{-ck}\). From the previous inequality, looking at t so small that \(|t|(N_k)^{s^* + 2} k^{1/2} \leq 1/k\), we have

$$\displaystyle \begin{aligned} \mathbb{P} \left( \limsup_{t \to 0} \| e^{it\Delta} \operatorname{P}_{\leq N_k} f^\omega - \operatorname{P}_{\leq N_k} f^\omega \|{}_{L^{\infty}_x (\mathbb{T}^d)} > 1 / k \right) \lesssim e^{-ck}. \end{aligned} $$
(49)

Combining (47) and (49) and recalling the decomposition (46), the proof is concluded.

\(\Box \)

4 Proof of Theorem 2

In this section, we consider the cubic Wick-ordered NLS (7) on \(\mathbb {T}^d\) (d = 1, 2) as in the work of Bourgain [3]; namely, we look at the nonlinearity \(\mathcal {N}(u)\) defined in (8).

We are interested again in randomized initial data, i.e., fω is taken to be of the form (28). Recall (see (44)) that such data is \(\mathbb {P}\)-almost surely in Hs for all s < α and

$$\displaystyle \begin{aligned} \| f^\omega \|{}_{H^{s}} \lesssim \left( - \ln \varepsilon \right)^{1/2}, \quad s < \alpha \, , \end{aligned} $$
(50)

with probability at least 1 − ε, for all ε ∈ (0, 1) sufficiently small. Since we work with any α > 0, we are considering initial data in H0+. We approximate Eq. (7) as in (11), for all \(N \in 2^{\mathbb {N}} \cup \{ \infty \}\). Recall that \(\Phi ^{N}_t f^{\omega }\) denotes the associated flow, with initial datum

$$\displaystyle \begin{aligned} \Phi^{N}_0 f^{\omega} := \operatorname{P}_{\leq N} f^{\omega} = \sum_{|n| \leq N} \frac{g_n^\omega}{\langle n \rangle^{\frac{d}2+ \alpha}} e^{i n \cdot x} \, . \end{aligned}$$

We write \(\Phi _t f^{\omega } = \Phi ^{\infty }_t f^{\omega }\) for the flow of (7) with datum \(f^{\omega } = \operatorname {P}_{\infty } f^{\omega }\).

The relevant choice of σ in the following statement is \(\sigma = \frac 12 -\) (we will use this to prove Theorem 2).

Proposition 3

Let d = 1, 2 and α > 0. Let \(N \in 2^{\mathbb {N}} \cup \{ \infty \}\) . For all \(\sigma \in [0, \frac 12)\) , the following holds. Assume

$$\displaystyle \begin{aligned} u = u(I) + u(II), \quad u(I) = e^{it \Delta} \operatorname{P}_{\leq N} f^{\omega}, \quad \| u(II) \|{}_{X^{\alpha + \sigma, \frac 12+}} < 1 \end{aligned} $$
(51)

and the same for v. Then

$$\displaystyle \begin{aligned} \| \mathcal{N} (u) \|{}_{X^{\alpha + \sigma, - \frac 12+}} \lesssim \left( - \ln \varepsilon \right)^{3/2} \end{aligned} $$
(52)
$$\displaystyle \begin{aligned} \| \mathcal{N} (u) - \mathcal{N} (v) \|{}_{X^{\alpha + \sigma, - \frac 12++}} \lesssim \left( - \ln \varepsilon \right) \| u - v \|{}_{X^{\alpha + \sigma, \frac 12+}} \end{aligned} $$
(53)

for initial data of the form (28), with probability at least 1 − ε, for all ε ∈ (0, 1) sufficiently small. If we take u as in (51) and we instead assume

$$\displaystyle \begin{aligned} v = v(I) + u(II), \quad v(I) = e^{it \Delta} f^{\omega}, \quad \| u(II) \|{}_{X^{\alpha + \sigma, \frac 12+}} < 1 \, , \end{aligned}$$

we have

$$\displaystyle \begin{aligned} \| \mathcal{N} (u) - \mathcal{N} (v) \|{}_{X^{\alpha + \sigma, - \frac 12++}} \lesssim N^{-\alpha} \, . \end{aligned} $$
(54)

Remark 4

Recall that α indicates the regularity of the initial datum. We are denoting by σ the amount of smoothing one can prove for the Wick-ordered cubic nonlinearity \(\mathcal N\). More precisely, since the initial data (28) belong to Hα, one can interpret this statement as saying that, with arbitrarily large probability, \(\mathcal N\) is σ smoother than fω. Since any \(\sigma < \frac 12\) is permissible, we reach \(\frac 12 -\) smoothing for \(\mathcal N\) and, combining with (26), also for the Duhamel contribution \(\Phi _{t}^N f^\omega - e^{it\Delta } P_{\leq N}f^\omega \).

In fact, a stronger statement than Proposition 3 has been proved in [13], namely that the remainder can be further decomposed into a sum of two terms. The first one, to which we refer as paracontrolled, lies in \(X^{\frac 12 -, \frac 12+}\) but has a precise random structure. The second one is a smoother deterministic remainder that lies in \(X^{1-, \frac 12+}\).

Here we only explain how to get Proposition 3 for the first Picard iteration, namely when u = u(I) (that is, when the remainder u(II) vanishes). Recall that η is a smooth cut-off of the unit interval. Let us fix α > 0. Using (26), (27), and Proposition 3, one can show that for all δ > 0 sufficiently small, the following holds. For all \(N \in 2^{\mathbb {N}} \cup \{ \infty \}\), the map

$$\displaystyle \begin{aligned} \Gamma^N(u) := \eta(t) e^{it\Delta} \operatorname{P}_{\leq N} f^{\omega} - i \eta(t) \int_0^t e^{i(t-s) \Delta} \operatorname{P}_{\leq N} \mathcal{N} (u (\cdot, s)) \, ds \end{aligned} $$
(55)

is a contraction on the set

$$\displaystyle \begin{aligned} \left\{ e^{it \Delta} \operatorname{P}_{\leq N} f^{\omega} + g, \quad \| g \|{}_{X^{\alpha + \sigma, \frac 12+}_{\delta}} < 1 \right\} \end{aligned} $$
(56)

equipped with the \(X^{\alpha + \sigma , \frac 12+}_{\delta }\) norm, outside an exceptional set (we call it a δ-exceptional set) of initial data of probability smaller than \(e^{-\delta ^{-\gamma }}\), with γ > 0 a given small constant. Notice that this holds uniformly over \(N \in 2^{\mathbb {N}} \cup \{ \infty \}\). Again, this is a standard routine calculation that we omit (see for instance [18, Section 3.5.1]). We only explain how to find the relation between the local existence time δ and the size of the exceptional set. Given any ε ∈ (0, 1) sufficiently small, using (26), (27), and Proposition 3, we have

$$\displaystyle \begin{aligned} \| \Gamma^N(u) - \eta(t) e^{it\Delta} \operatorname{P}_{\leq N} f^{\omega} \|{}_{X^{\alpha + \sigma, \frac 12+}_{\delta} } \lesssim \delta^{0+} \left( - \ln \varepsilon \right)^{3/2} \, , \end{aligned}$$

for all fω outside an exceptional set of probability smaller than ε. Choosing δ such that \(\varepsilon = e^{-\delta ^{-\gamma }}\) with γ > 0 a fixed small constant, we have \(C \delta ^{0+} \left ( - \ln \varepsilon \right )^{3/2} < 1\) for all δ > 0 sufficiently small. Note that the measure \(e^{-\delta ^{-\gamma }}\) of the δ-exceptional set converges to zero as δ → 0. In particular, for ω outside the δ-exceptional set, the fixed point \(\Phi ^N_t f^{\omega }\) of the map (55) belongs to the set (56), namely

$$\displaystyle \begin{aligned} \| \Phi^N_t f^{\omega} - e^{it\Delta} \operatorname{P}_{\leq N} f^{\omega} \|{}_{X^{\alpha + \sigma, \frac 12+}_{\delta} } < 1, \qquad N \in 2^{\mathbb{N}} \cup \{ \infty \} \, . \end{aligned} $$
(57)
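The arithmetic behind this choice of ε is simply that the logarithmic factor becomes a small negative power of δ:

$$\displaystyle \begin{aligned} \delta^{0+} \left( - \ln \varepsilon \right)^{3/2} = \delta^{\varepsilon_0} \, \delta^{-\frac{3\gamma}{2}} = \delta^{\varepsilon_0 - \frac{3\gamma}{2}} \to 0 \qquad \mbox{as} \ \delta \to 0 \, , \end{aligned}$$

provided \(\gamma < \frac 23 \varepsilon _0\), where \(\delta ^{\varepsilon _0}\) denotes the small positive power of δ implicit in the notation δ0+.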

We are now ready to prove Theorem 2.

4.1 Proof of Theorem 2

It suffices to show that

$$\displaystyle \begin{aligned} \lim_{N \to \infty} \left\| \sup_{0 \leq t \leq \delta} | \Phi_{t} f^{\omega} (x) - \Phi^{N}_{t} f^{\omega} (x) | \right\|{}_{L^{2}_x(\mathbb{T}^2)} = 0 \end{aligned} $$
(58)

for all fω outside a δ-exceptional set A δ. Note indeed that (58) implies that, given fω, we can find \(\mathbb {P}\)-almost surely a δ ω (which depends on ω) such that (58) is satisfied. Indeed, if we could not do so, this would mean that fω ∈⋂δ>0A δ, and the probability of this event is zero since \(\mathbb {P}(A_\delta ) \to 0\) as δ → 0.

Once we have (58) with δ = δ ω, we have \(\mathbb {P}\)-almost surely

$$\displaystyle \begin{aligned} \lim_{t \to 0} \Phi_{t} f^{\omega}(x) - f^{\omega}(x) = 0, \qquad \mbox{for a.e.}\ x \in \mathbb{T}^2 \, , \end{aligned}$$

as claimed, simply invoking Proposition 2.

In order to prove (58), we decompose

$$\displaystyle \begin{aligned} |\Phi_t f^{\omega} - \Phi^N_t f^{\omega} | \leq |e^{it \Delta} \operatorname{P}_{>N} f^{\omega}| + |\Phi_t f^{\omega} - e^{it \Delta} f^{\omega} - ( \Phi^N_t f^{\omega} - e^{it \Delta} \operatorname{P}_{\leq N} f^{\omega} ) | \, . \end{aligned}$$

Thus, recalling the decay of the high-frequency linear term given by (36), it remains to show that

$$\displaystyle \begin{aligned} \lim_{N \to \infty} \left\| \sup_{0 \leq t \leq \delta} |\Phi_t f^{\omega} - e^{it \Delta} f^{\omega} - ( \Phi^N_t f^{\omega} - e^{it \Delta} \operatorname{P}_{\leq N} f^{\omega} ) | \right\|{}_{L^{2}(\mathbb{T}^2)} = 0 \, , \end{aligned} $$
(59)

for all fω outside a δ-exceptional set.

For any α > 0, we can choose σ sufficiently close to \(\frac 12\) that

$$\displaystyle \begin{aligned} \frac 12 < \alpha + \sigma \, . \end{aligned} $$
(60)

Thus, using the Xs, b space embedding from Lemma 2, it suffices to prove

$$\displaystyle \begin{aligned} \lim_{N \to \infty} \left\| w - w^N \right\|{}_{X^{\alpha + \sigma,\frac 12+}_{\delta}} = 0 \, , \end{aligned} $$
(61)

where

$$\displaystyle \begin{aligned} w^N := \Phi^N_t f^{\omega} - e^{it\Delta} \operatorname{P}_{\leq N} f^{\omega}, \qquad w := w^{\infty} \, . \end{aligned}$$

Notice that by (57), we have

$$\displaystyle \begin{aligned} \| w^N \|{}_{X^{\alpha + \sigma,\frac 12+}_{\delta}} < 1, \qquad N \in 2^{\mathbb{N}} \cup \{ \infty \} \, . \end{aligned}$$

Since for t ∈ [0, δ], we have

$$\displaystyle \begin{aligned} w - w^N = - i \eta(t) \int_{0}^{t} e^{i(t-t')\Delta} \left( \mathcal N (\Phi_{t'} f^{\omega}) - \operatorname{P}_{\leq N} \mathcal N (\Phi^N_{t'} f^{\omega}) \right) dt' \, , \end{aligned} $$
(62)

using (26), (27), we get

$$\displaystyle \begin{aligned} \| w - w^N \|{}_{X^{\alpha + \sigma,\frac 12+}_{\delta}} \lesssim \delta^{0+} \| \mathcal N (\Phi_{t} f) - \operatorname{P}_{\leq N} \mathcal N (\Phi^N_{t} f) \|{}_{X^{\alpha + \sigma, - \frac 12++}_{\delta}} \, . \end{aligned} $$
(63)

We decompose

$$\displaystyle \begin{aligned} \mathcal N (\Phi_{t} f) & - \operatorname{P}_{\leq N} \mathcal N (\Phi^N_{t} f) = \\ & \operatorname{P}_{\leq N} \left( \mathcal N (e^{it\Delta} \operatorname{P}_{\leq N} f^{\omega} + w) - \mathcal N (e^{it\Delta} \operatorname{P}_{\leq N} f^{\omega} + w^N) \right) + \mbox{Remainders} \, , \end{aligned} $$
(64)

where

$$\displaystyle \begin{aligned} \mbox{Remainders} := \operatorname{P}_{\leq N} \left( \mathcal N (e^{it\Delta} f^{\omega} + w) - \mathcal N (e^{it\Delta} \operatorname{P}_{\leq N} f^{\omega} + w) \right) + \operatorname{P}_{> N} \mathcal N (\Phi_{t} f) \, . \end{aligned}$$

Notice that by (52), (54), we have

$$\displaystyle \begin{aligned} \| \mbox{Remainders} \|{}_{X^{\alpha + \sigma, -\frac 12++}_{\delta}} \to 0 \quad \mbox{as}\ N \to \infty \, , \end{aligned} $$
(65)

with probability at least 1 − ε. Using (53), we can estimate

$$\displaystyle \begin{aligned} \| \operatorname{P}_{\leq N} \left( \mathcal N (e^{it\Delta} \operatorname{P}_{\leq N} f^{\omega} + w) \right. - \mathcal N (e^{it\Delta} \operatorname{P}_{\leq N} f^{\omega} + & \left. w^N) \right) \|{}_{X^{\alpha + \sigma, -\frac 12++}_{\delta}} \\ & \lesssim \left( - \ln \varepsilon \right) \left\| w - w^N \right\|{}_{X^{\alpha + \sigma,\frac 12+}_{\delta}}, \end{aligned} $$
(66)

and (63), (64), (66) give

$$\displaystyle \begin{aligned} \left\| w - w^N \right\|{}_{X^{\alpha + \sigma,\frac 12+}_{\delta}} \lesssim \delta^{0+} \left( - \ln \varepsilon \right) \left\| w - w^N \right\|{}_{ X^{\alpha + \sigma,\frac 12+}_{\delta}} + \left\| \mbox{Remainders} \right\|{}_{X^{\alpha + \sigma,-\frac 12++}_{\delta}}\end{aligned} $$
(67)

with probability at least 1 − ε. Since with our choice of \(\varepsilon = e^{-\delta ^{-\gamma }}\), we have \(C \delta ^{0+} \left ( - \ln \varepsilon \right )^{3/2} < 1\), we can absorb the first term on the right-hand side into the left-hand side, and we still have that (65) holds outside a δ-exceptional set. Thus letting N →, the proof of (9) is complete.

\(\Box \)

Remark 5

It is worth remarking that, compared with, for instance, [3], the procedure that allows us to promote a statement valid outside a δ-exceptional set A δ for arbitrarily small δ > 0 to a statement that is valid with probability one is far easier. In particular, it does not involve any control on the evolution of the (Gaussian) measure induced by the random Fourier series. This is because we are considering a property that has to be verified only at time t = 0 a.s., rather than in a time interval containing t = 0, as in [3].

We now give some hints on the proof of the smoothing estimates given in Proposition 3.

4.2 Proof of Proposition 3

Again it is worth recalling that an even stronger statement than Proposition 3 has been proved in [13]. Here we show how to handle the first Picard iterate. Notice that the Wick-ordered nonlinearity can be written as

$$\displaystyle \begin{aligned} \mathcal{N}(u(x, \cdot)) = \sum_{n_2 \neq n_1, n_3} \widehat{u}(n_1) \widehat{\overline{u}}(n_2) \widehat{u}(n_3) e^{i (n_1 - n_2 + n_3) \cdot x} - \sum_{n} \widehat{u}(n) |\widehat{u}(n)|{}^2 e^{i n \cdot x}, \end{aligned} $$
(68)

where we are looking at the nonlinear term for fixed time and \(\widehat {u}(\cdot )\) denotes the space Fourier coefficients. Looking at a similar expansion for the difference \(\mathcal {N}(u) - \mathcal {N}(v)\), it is easy to see that we can deduce Proposition 3 from the slightly more general Lemma 9 given below: it implies the desired statements once we choose, for j = 1, 2, 3,

$$\displaystyle \begin{aligned} u_{j} (n_j) = u (n_j), \;\; v(n_j), \;\; \text{ or } \;\; u(n_j) - v(n_j) \, . \end{aligned}$$

\(\Box \)

We will give a proof of the following lemma in the fully random case J j = I for j = 1, 2, 3, which corresponds to the study of the first Picard iterate. Comparing with [9] (and [13]), there is a simplification coming from the fact that our fω is slightly more regular, namely we consider α > 0 instead of α = 0.

Lemma 9

Let d = 1, 2 and α > 0. Let \(N \in 2^{\mathbb {N}} \cup \{ \infty \}\) . For all \(\sigma \in [0, \frac 12)\) , the following holds. Assume for j = 1, 2, 3

$$\displaystyle \begin{aligned} u_j(I) = e^{it \Delta} \operatorname{P}_{\leq N} f^{\omega}, \qquad \| u_j(II) \|{}_{X^{\alpha + \sigma, \frac 12+} } < 1 . \end{aligned} $$
(69)

Let J j ∈{I, II}, j = 1, 2, 3. Then, for all ε ∈ (0, 1) sufficiently small, we have the following:

$$\displaystyle \begin{aligned} \| \mathcal N (u_1(J_1), \overline{u_2}(J_2), u_3(J_3)) \|{}_{X^{\alpha + \sigma, - \frac 12+}} \lesssim \left( - \ln \varepsilon \right)^{3/2} \, , \end{aligned} $$
(70)

and more precisely,

$$\displaystyle \begin{aligned} \| \mathcal N (u_1(II), \overline{u_2}(J_2), u_3(J_3)) \|{}_{X^{\alpha + \sigma, - \frac 12++}} \lesssim \left( - \ln \varepsilon \right) \| u_1(II) \|{}_{X^{\alpha + \sigma, \frac 12+}} \, , \end{aligned} $$
(71)
$$\displaystyle \begin{aligned} \| \mathcal N (u_1(J_1), \overline{u_2}(II), u_3(J_3)) \|{}_{X^{\alpha + \sigma, - \frac 12++}} \lesssim \left( - \ln \varepsilon \right) \| u_2(II) \|{}_{X^{\alpha + \sigma, \frac 12+}} \, , \end{aligned} $$
(72)

with probability at least 1 − ε. Moreover, if in (69) we replace for some j = j∗ the projection operator \(\operatorname {P}_{\leq N}\) by \(\operatorname {P}_{> N}\) , then the estimate (70) with \(J_{j^*} = I\) holds with an extra factor N−α on the right-hand side.

Remark 6

Saying that these estimates hold with probability at least 1 − ε means, more precisely, that they hold for all ω outside an exceptional set of probability ≤ ε. Moreover, this set can be chosen to be independent of \(N \in 2^{\mathbb {N}} \cup \{ \infty \}\).

Remark 7

Notice that by the symmetry n 1 ↔ n 3 the estimate (71) implies an analogous estimate for u 3(II).

Here we only consider the case J j = I for j = 1, 2, 3, namely the case in which all the contributions are linear random evolutions. We prove the bound (70) relative to this case and to N = ∞. Moreover, we split the nonlinearity as a difference of two terms (see (68))

$$\displaystyle \begin{aligned} \mathcal{N}_1 (u_1(J_1), \overline{u_2}(J_2), u_3(J_3)) &= \sum_{n_2 \neq n_1, n_3} \widehat{u_1(J_1)}(n_1) \widehat{\overline{u_2}(J_2)}(n_2) \widehat{u_3(J_3)}(n_3) e^{i (n_1 - n_2 + n_3) \cdot x} \, , \\ \mathcal{N}_2 (u_1(J_1), \overline{u_2}(J_2), u_3(J_3)) &= \sum_{n} \widehat{u_1(J_1)}(n) \widehat{\overline{u_2}(J_2)}(n) \widehat{u_3(J_3)}(n) e^{i n \cdot x} \, , \end{aligned} $$

and we prove (70) only for \(\mathcal N_{1}\), which is the most challenging contribution. The proof for \(\mathcal {N}_2\) is indeed elementary.

To prove (70), it will be useful to recall that the space–time Fourier transform of eit Δ fω is

$$\displaystyle \begin{aligned} \widehat{e^{it \Delta} f^\omega }(n, \tau) = \frac{g_n^{\omega}}{\langle n \rangle^{\frac{d}{2} + \alpha} } \delta(\tau + |n|{}^2) \, , \end{aligned}$$

where δ is the delta function. So a direct computation gives

$$\displaystyle \begin{aligned} \| e^{it \Delta} f^\omega \|{}_{X^{0 + , \frac{1}{2} + } }^{2} = \sum_{n} \frac{ |g_n^{\omega}|{}^2 }{\langle n \rangle^{d + 2 \alpha -}} \, , \end{aligned}$$

which, recalling \(\int |g_n^{\omega }|{ }^2 d \omega =1\), immediately implies

$$\displaystyle \begin{aligned} \| \| e^{it \Delta} f^\omega \|{}_{X^{0 + , \frac{1}{2} + } } \|{}_{L^{2}_{\omega}}^2 = \sum_{n} \frac{1}{\langle n \rangle^{d + 2 \alpha -}} < \infty \, . \end{aligned}$$

Since we can expand \(\| e^{it \Delta } f^\omega \|{ }^2_{X^{0 + , \frac {1}{2} + } }\) as a bilinear form in the Gaussian variables \(g_n^{\omega }\), we get by Gaussian hypercontractivity, for every q ≥ 2,

$$\displaystyle \begin{aligned} \| \| e^{it \Delta} f^\omega \|{}_{X^{0 + , \frac{1}{2} + } } \|{}_{L^{q}_{\omega}}^2 \lesssim q \sum_{n} \frac{1}{\langle n \rangle^{d + 2 \alpha -}} \leq C_q < \infty \, . \end{aligned}$$

Proceeding essentially as in the proofs of Lemmas 7 and 8 (recall also Remarks 2 and 3), this allows us to prove the pointwise bound

$$\displaystyle \begin{aligned} \| e^{it \Delta} f^\omega \|{}_{X^{0 + , \frac{1}{2} + } } \lesssim \sqrt{\ln \left( \frac{1}{\varepsilon} \right)} , \end{aligned} $$
(73)

with probability larger than 1 − ε for all sufficiently small ε > 0.
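We sketch how (73) follows from the moment bound above. By the Chebyshev inequality, for q ≥ 2,

$$\displaystyle \begin{aligned} \mathbb{P} \left( \| e^{it \Delta} f^\omega \|{}_{X^{0 + , \frac{1}{2} + } } > \lambda \right) \leq \lambda^{-2q} \, \Big\| \| e^{it \Delta} f^\omega \|{}_{X^{0 + , \frac{1}{2} + } } \Big\|{}_{L^{2q}_{\omega}}^{2q} \leq \left( \frac{C q}{\lambda^2} \right)^{q} \leq e^{-c \lambda^2} \end{aligned}$$

for the choice q ∼ λ2 (and λ large); taking \(\lambda \sim \sqrt {\ln (1/\varepsilon )}\) then makes the right-hand side smaller than ε.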

Let N, N 1, N 2, N 3 be dyadic scales. We denote by \(\tilde N\) the maximum of N 1, N 2, N 3. First we perform a reduction to remove frequencies that are far from the paraboloid. More precisely, we denote by \(\operatorname {P}_{A}\) the space–time Fourier projection onto the set A, and our goal is to reduce

$$\displaystyle \begin{aligned} & \sum_{N_1, N_2, N_3} \| \mathcal N_{1} \left( \operatorname{P}_{N_1} u_1 (I) , \operatorname{P}_{N_2} \overline{u_2} (I) , \operatorname{P}_{N_3} u_3 (I) \right) \|{}_{X^{\alpha + \sigma, -\frac 12++}}^2 \\ &= \sum_{N, N_1, N_2, N_3} N^{2\alpha + 2\sigma} \| \operatorname{P}_N \mathcal N_{1} \left( \operatorname{P}_{N_1} u_1 (I) , \operatorname{P}_{N_2} \overline{u_2} (I) , \operatorname{P}_{N_3} u_3 (I) \right) \|{}_{X^{0, -\frac 12++}}^2 \end{aligned} $$
(74)

to

$$\displaystyle \begin{aligned} \sum_{N, N_1, N_2, N_3 } N^{2\alpha + 2\sigma } \| \operatorname{P}_{N} \operatorname{P}_{\left\{ \langle \tau + |n|{}^2 \rangle \leq \tilde N^{\frac{11}{10}} \right\} } \mathcal N_{1} \left( \operatorname{P}_{N_1} u_1 (I) , \operatorname{P}_{N_2} \overline{u_2} (I) , \operatorname{P}_{N_3} u_3 (I) \right) \|{}_{X^{0, -\frac 12++}}^2.\end{aligned} $$
(75)

To obtain this reduction, it is sufficient to show that the projection of the nonlinearity onto the complementary set is appropriately bounded, i.e., that

$$\displaystyle \begin{aligned} & \sum_{N, N_1, N_2, N_3 } N^{2\alpha + 2\sigma} \| \operatorname{P}_{N} \operatorname{P}_{\left\{ \langle \tau + |n|{}^2 \rangle > \tilde N^{\frac{11}{10}} \right\} } \mathcal N_{1} \left( \operatorname{P}_{N_1} u_1 (I) , \operatorname{P}_{N_2} \overline{u_2} (I) , \operatorname{P}_{N_3} u_3 (I) \right) \|{}_{X^{0, -\frac 12++}}^2 \\ & \qquad \qquad \qquad \qquad \lesssim \left( - \ln \varepsilon \right)^3 \end{aligned} $$
(76)

with probability at least 1 − ε. To do so, we abbreviate

$$\displaystyle \begin{aligned} \mathcal N_{1}^{N_1, N_2, N_3} \left( \cdot \right) := \mathcal N_{1} \left( \operatorname{P}_{N_1} u_1 (I) , \operatorname{P}_{N_2} \overline{u_2} (I) , \operatorname{P}_{N_3} u_3 (I) \right), \end{aligned}$$

and we bound

$$\displaystyle \begin{aligned} & \sum_{ N_1, N_2, N_3 } N^{2\alpha + 2\sigma} \| \operatorname{P}_{N} \operatorname{P}_{\left\{ \langle \tau + |n|{}^2 \rangle > \tilde N^{\frac{11}{10}} \right\} } \mathcal N_{1}^{N_1, N_2, N_3} \|{}_{X^{0, -\frac 12++}}^2 \\ & \sim N^{2\alpha + 2\sigma} \sum_{ \substack{ N_1, N_2, N_3 \\ n \sim N } } \int \frac{ \chi_{ \{ \langle \tau + |n|{}^{2} \rangle > \tilde N^{\frac{11}{10}} \} } }{ \langle \tau + |n|{}^{2} \rangle^{1--}} \left| \widehat{\mathcal N_{1}^{N_1, N_2, N_3} (\cdot)} (n, \tau) \right|{}^2 \, d \tau \\ & \lesssim N^{2 \alpha +2 \sigma - 1 - \frac{1}{10} + 3(0+) } \sum_{ \substack{ N_1, N_2, N_3 \\ n \sim N } } \int \left| \widehat{\mathcal N_{1}^{N_1, N_2, N_3} (\cdot)} (n, \tau) \right|{}^2 \, d \tau \\ & \sim N^{2 \alpha - \frac{1}{20}} \sum_{ N_1, N_2, N_3 } \| \operatorname{P}_N \mathcal N_{1}^{N_1, N_2, N_3} \|{}_{L^{2}_{x,t}}^2 \, , \end{aligned} $$
(77)

recalling that σ < 1∕2 (here in fact we may have more smoothing than \(\frac 12-\)). We have used the fact that at least one of the frequency scales N j has to be comparable to N; otherwise, the contribution is zero by orthogonality. In particular, we have \(N \lesssim \tilde N\) (recall that \(\tilde N= \max (N_1, N_2, N_3)\)). In order to continue the estimate, we assume for definiteness that N 1 ∼ N. The other possible case is N 2 ∼ N (since everything is symmetric under n 1 ↔ n 3), and one can indeed immediately check that the estimate (78) below is still valid in this case, with obvious changes. Thus, using Hölder’s inequality, the improved Strichartz inequality (40) for randomized functions (for the L∞ norm of u 1(I)), and the Strichartz inequality (10) (for the L4 norms of u 2(I) and u 3(I)), we obtain

$$\displaystyle \begin{aligned} &\| \operatorname{P}_N \mathcal N_{1}^{N_1, N_2, N_3} \|{}_{L^{2}_{x,t}}^2\\ & \leq \| \operatorname{P}_{N_1} u_1 (I) \|{}^2_{L^\infty_{x,t}} \| \operatorname{P}_{N_2} \overline{u_2} (I) \|{}^2_{L^{4}_{x,t} } \| \operatorname{P}_{N_3} u_3 (I) \|{}^2_{L^{4}_{x,t} } \\ & \lesssim \left( - \ln \varepsilon \right) N_1^{-2\alpha} \| \operatorname{P}_{N_2} \overline{u_2} (I) \|{}^2_{L^{4}_{x,t} } \| \operatorname{P}_{N_3} u_3 (I) \|{}^2_{L^{4}_{x,t} } \\ & \lesssim \left( - \ln \varepsilon \right) N^{-2\alpha} \| \operatorname{P}_{N_2} \overline{u_2} (I) \|{}^2_{X^{0+, \frac 12+} } \| \operatorname{P}_{N_3} u_3 (I) \|{}^2_{X^{0+, \frac 12+} }, \end{aligned} $$
(78)

this holds on a set of probability larger than 1 − ε, and this set may be chosen to be independent of \(N_1 \in 2^{\mathbb {N}} \cup \{ \infty \}\) (see Remark 3) and thus of \(N \in 2^{\mathbb {N}} \cup \{ \infty \}\). Plugging (78) into (77), summing over the N j, and using (73), we arrive at the needed bound

$$\displaystyle \begin{aligned} \mbox{LHS of} \ (76) & \lesssim \left( - \ln \varepsilon \right) \sum_{N, N_1} N^{-\frac{1}{20}} \| u_2 (I) \|{}^2_{X^{0+, \frac 12+} } \| u_3 (I) \|{}^2_{X^{0+, \frac 12+} } \\ & \lesssim \left( - \ln \varepsilon \right)^3 \sum_{N, N_1} N^{-\frac{1}{40}} N_1^{-\frac{1}{40}} \lesssim \left( - \ln \varepsilon \right)^3. \end{aligned} $$

Note that in (78), we could also use a weaker bound, replacing the L4 norms with L∞ norms, which in the fully random case J j = I for all j are controlled by invoking (40) for all j = 1, 2, 3. However, the L4 bound is more robust since it works also in the other cases, where the contributions are not all random (namely if some J j is of the form II).

So we have reduced to (75). We have

$$\displaystyle \begin{aligned} \operatorname{P}_N & \operatorname{P}_{\left\{ \langle \tau + |n|{}^2 \rangle \leq \tilde N^{\frac{11}{10}} \right\} } \mathcal N_{1}^{N_1, N_2, N_3} (\cdot) \\ & = \operatorname{P}_N \operatorname{P}_{\left\{ \langle \tau + |n|{}^2 \rangle \leq \tilde N^{\frac{11}{10}} \right\} } \left( \sum_{|n_j| \sim N_j} e^{i x \cdot(n_1 -n_2 + n_3)} e^{-it (|n_1|{}^2 - |n_2|{}^2 + |n_3|{}^2)} \right) \\ & \qquad \qquad \qquad \times \frac{g_{n_1}^\omega}{\langle n_1 \rangle^{1+\alpha}} \frac{\overline{g_{n_2}^\omega}}{\langle n_2 \rangle^{1+\alpha}} \frac{g_{n_3}^\omega}{\langle n_3 \rangle^{1+\alpha}}. \end{aligned} $$
(79)

Thus we see that (75) satisfies the desired inequality (70) as long as we can bound

$$\displaystyle \begin{aligned} & N^{2\alpha + 2\sigma} \bigg\| \sum_{N_1, N_2, N_3} \operatorname{P}_N \operatorname{P}_{\left\{ \langle \tau + |n|{}^2 \rangle \leq \tilde N^{\frac{11}{10}} \right\} } \left( \sum_{|n_j| \sim N_j} e^{i x \cdot(n_1 -n_2 + n_3)} e^{-it (|n_1|{}^2 - |n_2|{}^2 + |n_3|{}^2)} \right) \\ & \qquad \qquad \qquad \quad \quad \times \frac{g_{n_1}^\omega}{\langle n_1 \rangle^{1+\alpha}} \frac{\overline{g_{n_2}^\omega}}{\langle n_2 \rangle^{1+\alpha}} \frac{g_{n_3}^\omega}{\langle n_3 \rangle^{1+\alpha}} \bigg\|{}^2_{X^{0, -\frac 12++}} \lesssim \left( - \ln \varepsilon \right)^3 N^{0-} \, , \end{aligned} $$
(80)

on a set of probability larger than 1 − ε.
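
For convenience, we record the elementary computation behind the next reduction: writing \(\mathcal{F}\) for the space–time Fourier transform (as below), for fixed \(m \in \mathbb{Z}^2\) and \(\lambda \in \mathbb{R}\) one has, up to harmless normalization constants (depending on the convention for \(\mathcal{F}\)),

$$\displaystyle \begin{aligned} \mathcal{F} \bigl( e^{i x \cdot m} \, e^{- i t \lambda} \bigr)(n, \tau) = \delta_{n, m} \, \delta( \tau + \lambda ) \, . \end{aligned}$$

Applying this with m = n 1 − n 2 + n 3 and λ = |n 1|2 −|n 2|2 + |n 3|2, and summing over the frequencies, gives the identity below.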

Since

$$\displaystyle \begin{aligned} & \mathcal{F}\Bigl( e^{i x \cdot(n_1 -n_2 + n_3)} e^{-it (|n_1|{}^2 - |n_2|{}^2 + |n_3|{}^2)} \Bigr)(n, \tau) \\ & \qquad \qquad \qquad \qquad = \sum_{n_1 - n_2 + n_3 = n} \delta (\tau + |n_1|{}^2 - |n_2|{}^2 + |n_3|{}^2) \, , \end{aligned} $$
(81)

where \(\mathcal {F}\) is the space–time Fourier transform and δ is the delta function, we reduce (80) to showing that

$$\displaystyle \begin{aligned} N^{2\alpha + 2\sigma} \sum_{N_1, N_2, N_3} \sum_{|n| \sim N } \int \frac{ \chi_{ \{ \langle \tau + |n|{}^2 \rangle \leq \tilde N^{\frac{11}{10}} \} } }{ \langle \tau + |n|{}^2 \rangle^{1--}} \\ \times \left| \sum_{ \substack{|n_j| \sim N_j, \, n_2 \neq n_1, n_3 \\ n = n_1 -n_2 + n_3 \\ \tau + |n_1|{}^2 - |n_2|{}^2 + |n_3|{}^2 =0 }} \frac{g_{n_1}^\omega}{\langle n_1 \rangle^{1+\alpha}} \frac{\overline{g_{n_2}^\omega}}{\langle n_2 \rangle^{1+\alpha}} \frac{g_{n_3}^\omega}{\langle n_3 \rangle^{1+\alpha}} \right|{}^2 \, d \tau \lesssim \left( - \ln \varepsilon \right)^3 N^{0-}, \end{aligned} $$
(82)

with probability at least 1 − ε. Letting

$$\displaystyle \begin{aligned} \mu := |n|{}^2 + \tau = |n|{}^2 - |n_1|{}^2 + |n_2|{}^2 - |n_3|{}^2 \, \end{aligned}$$

(the second identity holds over the integration set, since we have a factor

$$\displaystyle \begin{aligned} \delta (\tau + |n_1|{}^2 - |n_2|{}^2 + |n_3|{}^2)\end{aligned}$$

in the integrand) and recalling that \(N \lesssim \tilde N\), this follows from

$$\displaystyle \begin{aligned} N^{2\alpha + 2\sigma} \sum_{N_1, N_2, N_3} \sum_{|n| \sim N } \int \frac{ \chi_{ \{ \langle \mu \rangle \leq \tilde N^{\frac{11}{10}} \} } }{ \langle \mu \rangle^{1--}} \\ \times \left| \sum_{ \substack{|n_j| \sim N_j, \, n_2 \neq n_1, n_3 \\ n = n_1 -n_2 + n_3 \\ -|n|{}^2 + |n_1|{}^2 - |n_2|{}^2 + |n_3|{}^2 = \mu }} \frac{g_{n_1}^\omega}{\langle n_1 \rangle^{1+\alpha}} \frac{\overline{g_{n_2}^\omega}}{\langle n_2 \rangle^{1+\alpha}} \frac{g_{n_3}^\omega}{\langle n_3 \rangle^{1+\alpha}} \right|{}^2 \, d \tau \lesssim \left( - \ln \varepsilon \right)^3 N^{0-}, \end{aligned} $$
(83)

with probability at least 1 − ε. Using Hölder’s inequality in the μ variable (note that \(\int_{\langle \mu \rangle \leq \tilde N^{11/10}} \langle \mu \rangle^{-(1--)} \, d\mu \lesssim \tilde N^{0+}\), which accounts for the factor \(\tilde N^{0+}\) below), we reduce to proving (here we use the symmetry μ ↔−μ)

$$\displaystyle \begin{aligned} N^{2\alpha + 2\sigma} \tilde{N}^{0+} \sum_{N_1, N_2, N_3} \sup_{ |\mu| \lesssim \tilde N^{\frac{11}{10} } } \sum_{|n| \sim N } \\ \times \left| \sum_{ \substack{|n_j| \sim N_j, \, n_2 \neq n_1, n_3 \\ n = n_1 -n_2 + n_3 \\ \mu = |n|{}^2 - |n_1|{}^2 + |n_2|{}^2 - |n_3|{}^2 }} \frac{g_{n_1}^\omega}{\langle n_1 \rangle^{1+\alpha}} \frac{\overline{g_{n_2}^\omega}}{\langle n_2 \rangle^{1+\alpha}} \frac{g_{n_3}^\omega}{\langle n_3 \rangle^{1+\alpha}} \right|{}^2 \lesssim \left( - \ln \varepsilon \right)^3 N^{0-}, \end{aligned} $$
(84)

with probability at least 1 − ε. We rewrite (84) as

$$\displaystyle \begin{aligned} N^{2\alpha + 2\sigma} \tilde{N}^{0+} \sum_{N_1, N_2, N_3} \sup_{ |\mu| \lesssim \tilde N^{\frac{11}{10} } } \sum_{|n| \sim N} \left| \sum_{ R_{n}(n_1, n_2, n_3) } \frac{g_{n_1}^\omega}{\langle n_1 \rangle^{1+\alpha}} \frac{\overline{g_{n_2}^\omega}}{\langle n_2 \rangle^{1+\alpha}} \frac{g_{n_3}^\omega}{\langle n_3 \rangle^{1+\alpha}} \right|{}^2 \\ \lesssim \left( - \ln \varepsilon \right)^3 N^{0-} \, , \end{aligned} $$
(85)

where for fixed n, μ we have denoted

$$\displaystyle \begin{aligned} R_{n}(n_1, n_2, n_3) : = & \Big\{ (n_1, n_2, n_3) \in (\mathbb{Z}^2)^3 \, : \, |n_j| \sim N_j, j = 1, 2, 3, \\ & n_{2} \neq n_1, n_3, \, n_1 -n_2 + n_3 = n, \, \mu = |n|{}^2 - |n_1|{}^2 + |n_2|{}^2 - |n_3|{}^2 \Big\} \, . \end{aligned} $$
(86)

The set R n(⋅) also depends on μ (like all the sets we will define below); however, we omit this dependence to simplify the notation. Notice that in the definition of R n(⋅) the condition

$$\displaystyle \begin{aligned} |n|{}^2 - |n_1|{}^2 + |n_2|{}^2 - |n_3|{}^2 = \mu \end{aligned}$$

can be equivalently replaced by

$$\displaystyle \begin{aligned} 2 (n_1 - n_2 ) \cdot (n_3 - n_2) = \mu \, . \end{aligned}$$
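
Indeed, substituting n = n 1 − n 2 + n 3 and expanding the square, one checks directly that

$$\displaystyle \begin{aligned} |n|{}^2 - |n_1|{}^2 + |n_2|{}^2 - |n_3|{}^2 & = 2 |n_2|{}^2 - 2 \, n_1 \cdot n_2 - 2 \, n_2 \cdot n_3 + 2 \, n_1 \cdot n_3 \\ & = 2 (n_1 - n_2 ) \cdot (n_3 - n_2) \, . \end{aligned}$$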

We also note that we have reduced to a case in which at least one of the frequencies N 1, N 3 is comparable to \(\tilde N\). Indeed, if both \(N_1 \ll \tilde N\) and \(N_3 \ll \tilde N\), we must have \(N_2 = \tilde N\) and \(|\mu| \sim N_2^2 = \tilde N^2\), which contradicts the fact that \( |\mu| \lesssim \tilde N^{\frac {11}{10}}\). Since the roles of N 1 and N 3 are symmetric (they always correspond to the frequencies of the Fourier coefficients of u 1, u 3), hereafter we assume that

$$\displaystyle \begin{aligned} N_1 \sim \tilde N \gtrsim N. \end{aligned}$$
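
For completeness, the size claim in the excluded case follows from the factorized form of μ: if \(|n_1|, |n_3| \ll |n_2| \sim \tilde N\), then in

$$\displaystyle \begin{aligned} \mu = 2 (n_1 - n_2 ) \cdot (n_3 - n_2) = 2 |n_2|{}^2 - 2 \, n_1 \cdot n_2 - 2 \, n_2 \cdot n_3 + 2 \, n_1 \cdot n_3 \end{aligned}$$

the first term is \(\sim \tilde N^2\), while by the Cauchy–Schwarz inequality each of the remaining terms is \(\ll \tilde N^2\); hence \(|\mu| \sim \tilde N^2\).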

To estimate (85), it will also be useful to introduce the set

$$\displaystyle \begin{aligned} S(n_1, n_2, n_3) : = & \Big\{ (n_1, n_2, n_3) \in (\mathbb{Z}^2)^3 \, : \, |n_j| \sim N_j, j = 1, 2, 3, \\ & n_{2} \neq n_1, n_3, \, \mu = 2 (n_1 - n_2 ) \cdot (n_3 - n_2) \Big\} \, . \end{aligned} $$
(87)

We recall that the Gaussian variables contract in the following way:

$$\displaystyle \begin{aligned} \int g_{n}^\omega g_{n'}^\omega d \mathbb{P}(\omega) = 0, \qquad \int g_{n}^\omega \overline{g_{n'}^\omega} d \mathbb{P}(\omega) = \left\{ \begin{array}{lll} 0 & \mbox{if} & n \neq n' \\ 1 & \mbox{if} & n = n' \end{array} \right. \, . \end{aligned} $$
(88)
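
As a quick numerical sanity check of (88) (not part of the argument; we assume here that the \(g_n^\omega\) are independent standard complex Gaussians, normalized so that \(\int |g_n^\omega|^2 \, d\mathbb{P} = 1\)), one can verify the contractions empirically:

```python
# Monte Carlo check of the contractions (88), assuming the g_n are independent
# standard complex Gaussians: g = (X + iY)/sqrt(2) with X, Y ~ N(0,1).
import numpy as np

rng = np.random.default_rng(0)
M = 200_000  # number of samples
g = (rng.standard_normal((M, 2)) + 1j * rng.standard_normal((M, 2))) / np.sqrt(2)
g_n, g_m = g[:, 0], g[:, 1]  # two independent coefficients (n and n', n != n')

print(abs(np.mean(g_n * g_n)))           # ~ 0 : E[g_n g_{n'}] = 0, case n = n'
print(abs(np.mean(g_n * g_m)))           # ~ 0 : E[g_n g_{n'}] = 0, case n != n'
print(abs(np.mean(g_n * np.conj(g_m))))  # ~ 0 : E[g_n conj(g_{n'})] = 0, n != n'
print(np.mean(np.abs(g_n) ** 2))         # ~ 1 : E[|g_n|^2] = 1, case n = n'
```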

Together with the fact that the sum is restricted to n 1, n 3 ≠ n 2 and is symmetric under n 1 ↔ n 3, we get

$$\displaystyle \begin{aligned} \int & \left| \sum_{ R_{n}(n_1, n_2, n_3) } \frac{g_{n_1}^\omega}{\langle n_1 \rangle^{1+\alpha}} \frac{ \overline{ g_{n_2}^\omega}}{\langle n_2 \rangle^{1+\alpha}} \frac{g_{n_3}^\omega}{\langle n_3 \rangle^{1+\alpha}} \right|{}^2 d \mathbb{P}(\omega) \\ & = 2 \sum_{ R_{n}(n_1, n_2, n_3)} \frac{1}{\langle n_1 \rangle^{2 \alpha +2}} \frac{ 1 }{\langle n_2 \rangle^{2 \alpha +2}} \frac{1}{\langle n_3 \rangle^{2 \alpha +2}} \sim 2 \sum_{ R_{n}(n_1, n_2, n_3)} N_1^{-2 \alpha -2} N_2^{-2 \alpha -2} N_3^{-2 \alpha -2} \, . \end{aligned} $$
(89)
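
In more detail, here is a sketch of the contraction count behind the factor 2, using that the \(g_n^\omega\) are independent standard complex Gaussians (so that \(\int (g_n^\omega)^2 \, d\mathbb{P} = 0\) and \(\int |g_n^\omega|^4 \, d\mathbb{P} = 2\)). Expanding the square gives

$$\displaystyle \begin{aligned} \int \left| \sum_{ R_{n}(n_1, n_2, n_3) } \cdots \right|{}^2 d \mathbb{P}(\omega) = \sum_{ (n_1, n_2, n_3), \, (n_1', n_2', n_3') \in R_n } \frac{ \int g_{n_1}^\omega \overline{g_{n_2}^\omega} g_{n_3}^\omega \, \overline{g_{n_1'}^\omega} g_{n_2'}^\omega \overline{g_{n_3'}^\omega} \, d \mathbb{P}(\omega) }{ \langle n_1 \rangle^{1+\alpha} \langle n_2 \rangle^{1+\alpha} \langle n_3 \rangle^{1+\alpha} \langle n_1' \rangle^{1+\alpha} \langle n_2' \rangle^{1+\alpha} \langle n_3' \rangle^{1+\alpha} } \, , \end{aligned}$$

and by (88) and independence the expectation vanishes unless the multiset of unconjugated frequencies {n 1, n 3, n 2′} coincides with the conjugated one {n 2, n 1′, n 3′}; since n 2 ≠ n 1, n 3, this forces n 2 = n 2′ and then {n 1, n 3} = {n 1′, n 3′}. When n 1 ≠ n 3, this leaves exactly the two pairings (n 1′, n 3′) = (n 1, n 3) and (n 3, n 1), each contributing 1; when n 1 = n 3, the single diagonal term carries \(\int |g_{n_1}^\omega|^4 \, d\mathbb{P} = 2\). In both cases, the factor 2 in (89) results.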

In other words, the \(L^2(d\mathbb{P})\) norm of the Gaussian trilinear form is controlled by the square root of the right-hand side of (89). Using the hypercontractivity of the Gaussians (see [19, 33]), we can promote this to an \(L^q(d\mathbb{P})\) bound, with a multiplicative factor q3∕2. Then, using the Minkowski integral inequality and the Bernstein inequality (as we did in Sect. 3), this also gives a (uniform) pointwise bound

$$\displaystyle \begin{aligned} & \left| \sum_{ R_{n}(n_1, n_2, n_3) } \frac{g_{n_1}^\omega}{\langle n_1 \rangle^{1+\alpha}} \frac{ \overline{ g_{n_2}^\omega}}{\langle n_2 \rangle^{1+\alpha}} \frac{g_{n_3}^\omega}{\langle n_3 \rangle^{1+\alpha}} \right|{}^2 \\ & \lesssim (- \ln \varepsilon)^3 N_1^{0+} \sum_{ R_{n}(n_1, n_2, n_3)} N_1^{-2 \alpha -2} N_2^{-2 \alpha -2} N_3^{-2 \alpha -2} \, , \end{aligned} $$
(90)

with an extra \(N_1^{0+}\) loss, valid for ω outside an exceptional set of probability at most ε (again, proceeding as in Sect. 3, we see that this exceptional set can be chosen to be independent of N, as required).
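
For the reader’s convenience, here is the standard large-deviation computation behind this step (a sketch: write X ω for the trilinear Gaussian sum and σ2 for the sum on the right-hand side of (89), so that hypercontractivity gives \(\|X^\omega\|_{L^q(d\mathbb{P})} \lesssim q^{3/2} \sigma\) for q ≥ 2). By Chebyshev’s inequality,

$$\displaystyle \begin{aligned} \mathbb{P} \left( |X^\omega| > \lambda \right) \leq \lambda^{-q} \, \| X^\omega \|{}^q_{L^q(d\mathbb{P})} \leq \left( \frac{C q^{3/2} \sigma}{\lambda} \right)^{q} \, ; \end{aligned}$$

choosing \(q = \ln \frac 1\varepsilon\) and \(\lambda = e C q^{3/2} \sigma\) makes the right-hand side \(e^{-q} = \varepsilon\), so that \(|X^\omega| \lesssim (- \ln \varepsilon)^{3/2} \sigma\) outside a set of probability at most ε. Squaring produces the factor \((- \ln \varepsilon)^3\) in (90), while the extra \(N_1^{0+}\) provides the room needed to make the estimate uniform in the remaining parameters, as in Sect. 3.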

We finally distinguish the two remaining possibilities. First, we restrict the summation to triples (n 1, n 2, n 3) ∈ R n(n 1, n 2, n 3) with n 1 ≠ n 3 (with a small abuse, we do not introduce additional notation for this restriction). In this case we get, with probability > 1 − ε, the following estimate:

$$\displaystyle \begin{aligned} & \sum_{|n| \sim N} \left| \sum_{R_{n}(n_1, n_2, n_3)} \frac{g_{n_1}^\omega}{\langle n_1 \rangle^{1+\alpha}} \frac{ \overline{ g_{n_2}^\omega}}{\langle n_2 \rangle^{1+\alpha}} \frac{g_{n_3}^\omega}{\langle n_3 \rangle^{1+ \alpha}} \right|{}^2 \\ & \lesssim (- \ln \varepsilon)^3 \sum_{|n| \sim N} \sum_{ R_{n}(n_1, n_2, n_3)} N_1^{-2 \alpha -2} N_2^{-2 \alpha -2} N_3^{-2 \alpha -2}\\ & \lesssim (- \ln \varepsilon)^3 \sum_{ S(n_1, n_2, n_3)} N_1^{-2 \alpha -2} N_2^{-2 \alpha -2} N_3^{-2 \alpha -2}\\ & \lesssim (- \ln \varepsilon)^3 N_1^{-2 \alpha -2 } N_2^{-2\alpha -2} N_3^{-2\alpha -2} \# S(n_1, n_2, n_3) \\ & \lesssim (- \ln \varepsilon)^3 N_1^{-2 \alpha -1 } N_2^{-2\alpha} N_3^{-2\alpha} \, , \end{aligned} $$
(91)

where we used that if n 1 ≠ n 3, then

$$\displaystyle \begin{aligned} \# S(n_1, n_2, n_3) \lesssim N_1 N_2^2 N_3^2 \, ; \end{aligned}$$

this is because once we have fixed n 2, n 3 in \(N_2^2 N_3^2\) possible ways, we remain with at most N 1 choices for n 1 by the relation μ = 2(n 1 − n 2) ⋅ (n 3 − n 2). This fact has a clear geometric interpretation, namely that this relation forces the (two-dimensional) lattice point n 1 to belong to the portion of a line that lies inside a ball of radius \(\lesssim N_1\) (and there are \(\lesssim N_1\) such lattice points n 1).
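
This count is easy to test numerically; below is a small brute-force sketch (the specific points n 2, n 3 and the value of μ are arbitrary choices for illustration):

```python
# Brute-force check that the lattice points n1 with |n1| <= N1 satisfying
# 2*(n1 - n2).(n3 - n2) = mu lie on a line, so their number grows like O(N1).
def count_line_points(n2, n3, mu, N1):
    d3 = (n3[0] - n2[0], n3[1] - n2[1])  # fixed direction n3 - n2
    count = 0
    for x in range(-N1, N1 + 1):
        for y in range(-N1, N1 + 1):
            if x * x + y * y > N1 * N1:
                continue  # keep |n1| <= N1
            if 2 * ((x - n2[0]) * d3[0] + (y - n2[1]) * d3[1]) == mu:
                count += 1
    return count

for N1 in (16, 32, 64, 128):
    # illustrative, arbitrary choices of n2, n3, mu
    print(N1, count_line_points(n2=(3, -2), n3=(7, 5), mu=8, N1=N1))
# the printed counts grow (at most) linearly in N1
```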

The second possibility is that we sum over (n 1, n 2, n 3) ∈ R n(n 1, n 2, n 3) such that n 1 = n 3. Under this restriction, μ = 2|n 1 − n 2|2, which implies that once we have chosen n 2 in \(N_{2}^2\) possible ways, we are left with \(\lesssim \mu ^{0+} \lesssim N_1^{0++}\) choices for n 1 = n 3 (since a circle of radius \(\sqrt{\mu /2}\) centered at the lattice point n 2 contains \(\lesssim \mu ^{0+}\) lattice points, by the divisor bound). This gives an even better bound than the one above.
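
Again this can be sanity-checked numerically; the sketch below counts the lattice points on the circle \(|n_1 - n_2|{}^2 = \mu/2\), i.e., the representations of an integer as a sum of two squares (the sample values of μ∕2 are arbitrary illustrative choices):

```python
# Count lattice points on a circle of radius sqrt(m): representations of m as
# a sum of two squares. Their number grows slower than any positive power of m.
import math

def count_circle_points(m):
    """Number of (a, b) in Z^2 with a^2 + b^2 = m."""
    if m < 0:
        return 0
    r = math.isqrt(m)
    return sum(1 for a in range(-r, r + 1)
                 for b in range(-r, r + 1) if a * a + b * b == m)

for m in (25, 325, 1105, 32045):  # arbitrary illustrative values of mu/2
    print(m, count_circle_points(m))
# even the last value, 32045 = 5 * 13 * 17 * 29, yields only 64 points,
# consistent with the divisor-type bound m^{0+}
```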

Thus, summing (91) over N 2, N 3 and recalling that \(N_1 \sim \tilde N \gtrsim N\), we have bounded, with probability > 1 − ε, the expression (85) by

$$\displaystyle \begin{aligned} \left( - \ln \varepsilon \right)^3 N^{2\alpha + 2\sigma} \sum_{N_1 \gtrsim N} N_1^{0+} N_1^{-2 \alpha -1 } \lesssim \left( - \ln \varepsilon \right)^3 N^{2 \sigma - 1 + 0+} \lesssim \left( - \ln \varepsilon \right)^3 N^{0-} \, , \end{aligned}$$

where we used \(\sigma < \frac 12\).