Abstract
We consider NLS on \(\mathbb {T}^2\) with multiplicative spatial white noise and nonlinearity between cubic and quartic. We prove almost sure global existence, uniqueness and convergence of solutions to a family of properly regularized and renormalized approximating equations. In particular, we extend a previous result by A. Debussche and H. Weber available in the cubic and sub-cubic setting.
1 Introduction
1.1 Statement of the main result
We are interested in the following family of NLS with multiplicative spatial white noise:
where \(\xi (x,\omega )\) is space white noise, \(2\le p\le 3\), \(\lambda \le 0\), and we identify \(\mathbb {T}^2\) with \((-\pi ,\pi )\times (-\pi , \pi )\). We work for simplicity with a defocusing nonlinearity, but the results of this paper can be extended to the focusing case under a smallness assumption on the initial datum. Our main aim is an improvement on the range of the nonlinearity p, from the case \(p=2\) achieved by A. Debussche and H. Weber in [4], to the larger range \(2\le p \le 3\). We basically follow the approach of [4], the main novelty being the introduction of modified energies in the context of (1.1). These energies allow us to cover a larger range of p in (1.1). They also have the potential to be useful in the future for the study of the growth of high Sobolev norms in the context of (1.1).
We assume that \(\xi (x,\omega )\) is real valued and has vanishing zero Fourier mode (equivalently, it has mean zero with respect to x). This assumption is not essential, however, because one may remove the zero mode of \(\xi \) from the equation via the transform \(u\mapsto e^{it\hat{\xi }(0)}u\). Therefore, in the sequel we will assume that \(\xi (x,\omega )\) is given by the following random Fourier series:
where \(x\in \mathbb {T}^2\) and \((g_n(\omega ))\) are identically distributed standard complex Gaussians on the probability space \((\Omega , {\mathcal F},p)\). We suppose that \((g_n(\omega ))_{n\ne 0}\) are independent, modulo the relation \(\overline{g_n}(\omega )=g_{-n}(\omega )\) (so that \(\xi \) is a.s. a real valued distribution).
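For orientation, a series of this type takes the form sketched below; the normalization (chosen so that \(\mathbb {E}|g_n|^2=1\) is consistent with the constant \(C_\varepsilon =\sum _{n\ne 0}\rho ^2(\varepsilon n)/|n|^2\) appearing in Theorem 1.1) is an assumption of this sketch.

```latex
% Sketch of the random Fourier series for spatial white noise on T^2,
% assuming the normalization E|g_n|^2 = 1:
\xi (x,\omega )=\sum _{n\in \mathbb {Z}^2,\, n\ne 0} g_n(\omega )\, e^{i n\cdot x},
\qquad x\in \mathbb {T}^2 .
```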
Since the white noise \(\xi (x,\omega )\) is not a classical function, it is important to properly define what we mean by a solution of (1.1). The nature of the initial datum \(u_0(x)\) is also of importance in this discussion but even for \(u_0(x)\in C^\infty (\mathbb {T}^2)\) it is not clear what we mean by a solution of (1.1). Let us therefore suppose first that \(u_0(x)\in C^\infty (\mathbb {T}^2)\). Since it is well known how to solve (1.1) with \(\xi (x,\omega )\in C^\infty (\mathbb {T}^2)\) and \(u_0(x)\in C^\infty (\mathbb {T}^2)\), it is natural to consider the following regularized problems:
where \(\xi _\varepsilon (x,\omega )=\chi _{\varepsilon }(x)*\xi (x,\omega )\), \(\varepsilon \in (0,1)\), is a regularization of \(\xi \) by convolution with \(\chi _{\varepsilon }(x)=\varepsilon ^{-2}\chi (x/\varepsilon )\), where \(\chi (x)\) is smooth with support in \(\{|x|<1/2\}\) and \(\int _{\mathbb {T}^2}\chi \, dx=1\). Then we have
where \(\rho =\hat{\chi }\) is the Fourier transform on \(\mathbb {R}^2\) of \(\chi \).
Unfortunately, we do not know how to pass to the limit \(\varepsilon \rightarrow 0\) in (1.2) (even for \(u_0(x)\in C^\infty (\mathbb {T}^2)\)), and it may be that this limit is quite singular in general. Our analysis will show that we can pass to the limit almost surely w.r.t. \(\omega \) only if we take a well chosen random approximation of the datum \(u_0(x)\) in (1.2) and if we properly renormalize the phase of the solution \(u_\varepsilon (t,x,\omega )\). Following [4] and [8] we introduce the smoothed potential \(Y=\Delta ^{-1} \xi \) and its \(C^\infty \) regularization \(Y_\varepsilon =\Delta ^{-1} \xi _\varepsilon \), namely:
and
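On the Fourier side the mollification acts as the multiplier \(\rho (\varepsilon n)\) and \(\Delta ^{-1}\) divides by \(-|n|^2\); under the same normalization as above, one therefore expects the sketch:

```latex
% Sketch: Fourier-side form of the regularized noise and of the potentials
\xi _\varepsilon (x,\omega )=\sum _{n\ne 0}\rho (\varepsilon n)\, g_n(\omega )\, e^{i n\cdot x},
\qquad
Y(x,\omega )=-\sum _{n\ne 0}\frac{g_n(\omega )}{|n|^2}\, e^{i n\cdot x},
\qquad
Y_\varepsilon (x,\omega )=-\sum _{n\ne 0}\frac{\rho (\varepsilon n)\, g_n(\omega )}{|n|^2}\, e^{i n\cdot x}.
```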
We now consider the regularized problems:
where we assume that almost surely w.r.t. \(\omega \) we have \(e^{Y(x,\omega )} u_0(x)\in H^2(\mathbb {T}^2)\). Notice that under this assumption the problem (1.6) has, almost surely w.r.t. \(\omega \), a unique classical global solution \(u_\varepsilon (t,x,\omega )\in {\mathcal C}(\mathbb {R}; H^2(\mathbb {T}^2))\) (see [2, 13]). Here is our main result.
Theorem 1.1
Assume \(p\in [2,3]\), \(\lambda \le 0\), and let \(u_0(x)\) be such that \(e^{Y(x,\omega )} u_0(x) \in H^2(\mathbb {T}^2)\) a.s. Then there exists an event \(\Sigma \subset \Omega \) such that \(p(\Sigma )=1\) and for every \(\omega \in \Sigma \) there exists
such that for every \(T>0\) and \(\gamma \in [0,2)\) we have:
where \(C_\varepsilon =\sum _{n\in \mathbb {Z}^2,n\ne 0}\, \frac{\rho ^2(\varepsilon n)}{|n|^2}\) and \(u_\varepsilon (t,x,\omega )\) are solutions to (1.6). Moreover for \(\gamma \in [0,1)\) and \(\omega \in \Sigma \) we have
The limits obtained in Theorem 1.1 are by definition what we may wish to call solutions of (1.1) with datum \(u_0(x)\). Observe that \(|u_\varepsilon (t,x,\omega )|\) has a well defined limit, while the phase of \(u_\varepsilon (t,x,\omega )\) must be suitably renormalized by the diverging constants \(C_\varepsilon \) in order to get a limit. We also point out that the meaning of the constants \(C_\varepsilon \), introduced in the statement of Theorem 1.1, is explained in Sect. 2, where the renormalization procedure is presented.
It is worth mentioning that while (1.7) holds for \(\gamma \in [0,2)\), in (1.8) we assume \(\gamma \in [0,1)\). This restriction is technical: in order to estimate the Sobolev norm of the absolute value of a Sobolev function, we use the diamagnetic inequality which, to the best of our knowledge, works only up to \(H^1\) regularity.
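For the reader's convenience, the diamagnetic inequality invoked here is the standard pointwise bound, stated here for orientation:

```latex
\big |\nabla |u|(x)\big |\le |\nabla u(x)| \quad \text{a.e.},
\qquad \text{so that} \qquad
\big \Vert |u| \big \Vert _{H^1}\le \Vert u\Vert _{H^1}.
```

It controls the absolute value only up to one derivative, which is the source of the restriction \(\gamma <1\) in (1.8).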
In a future work we plan to extend the result of Theorem 1.1 to any \(p<\infty \) by exploiting the dispersive properties of the Schrödinger equation on a compact spatial domain established in [2]. At present we are only able to do so for potentials slightly more regular than the white noise. In fact, we shall not need to exploit the construction of [2] in its full strength, because we will only need an \(\varepsilon \)-improvement of the Sobolev embedding. This means that we will need to perform the WKB construction of [2] for solutions oscillating at frequency \(h^{-1}\) only up to times \(h^{2-\delta }\), \(\delta >0\), which are much shorter than the times of order h achieved in [2]. As already mentioned, even if we succeed in incorporating the dispersive effect into the analysis of (1.1), the modified energy method, used indirectly in the proof of Theorem 1.1 (and more directly in the proof of Theorem 1.2 below), would still be essentially needed in order to get polynomial bounds on higher Sobolev norms of the obtained solutions, similar to the ones obtained in [11] in the case without a white noise potential.
In Theorem 1.1 the initial datum \(u_0(x)\) is well-prepared, because it is supposed to satisfy \(e^{Y(x,\omega )} u_0(x) \in H^2(\mathbb {T}^2)\) a.s. It would be interesting to decide whether a suitable application of the I-method introduced in [3] may allow one to remove this assumption of well-prepared data. For this purpose, one should establish the limiting property by using energies at level \(H^s\), for a suitable \(s<1\).
1.2 The gauge transform
In the sequel we perform some formal computations that allow us to introduce heuristically a rather useful transformation. Following [4] and [8] we introduce the new unknown:
where u is formally assumed to be a solution to (1.1) and \(Y=\Delta ^{-1} \xi \). In order to clarify the relevance of this transformation, first notice that by direct computation the equation solved (at least formally) by v is the following one:
Notice that the quantity \(|\nabla Y|^2\) is not well defined since \(\nabla Y\), even if it is one derivative more regular than \(\xi \), still has negative Sobolev regularity. However, this issue can be settled by a renormalization (see below and Sect. 2 for more details). On the other hand, (1.10) looks more complicated than (1.1), since a perturbation of order one is added to the linear part of the equation. Nevertheless, we have the advantage that the coefficients involved in the new equation are more regular than the spatial white noise \(\xi \) that appears in (1.1).
Another relevant advantage that comes from the new variable v is related to the conservation of the Hamiltonian. Recall that the conservation laws play a key role in the analysis of nonlinear Schrödinger equations. In particular in the context of (1.1) the quadratic part of the conserved energy is given by
The key feature of the transformation (1.9) is that there is a cancellation between the two terms in (1.11); this cancellation is the main point in the definition of a suitable self-adjoint realisation of \(\Delta +\xi \) (see [7] and the references therein). Indeed, let us compute (1.11) in the new variable v: we have \(u=e^{-Y} v\) and (1.11) becomes
which after some elementary manipulations can be written as:
Thanks to the choice \(\Delta Y=\xi \) we get a cancellation of the white noise potential leading to
Notice that now the potential energy w.r.t. the new variable v involves the potential \(|\nabla Y(x)|^2\) which is (morally) one derivative more regular compared with the white noise.
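The cancellation can be checked directly, assuming (as is standard) that the quadratic energy (1.11) is \(\int |\nabla u|^2-\int \xi |u|^2\): with \(u=e^{-Y}v\), and integrating by parts using \(\Delta Y=\xi \),

```latex
% Kinetic term in the new variable:
\int _{\mathbb {T}^2}|\nabla u|^2
=\int _{\mathbb {T}^2} e^{-2Y}\big (|\nabla v|^2-\nabla Y\cdot \nabla |v|^2+|\nabla Y|^2|v|^2\big ),
\\
% Potential term, after one integration by parts:
-\int _{\mathbb {T}^2}\xi |u|^2
=-\int _{\mathbb {T}^2}(\Delta Y)\, e^{-2Y}|v|^2
=\int _{\mathbb {T}^2} e^{-2Y}\big (\nabla Y\cdot \nabla |v|^2-2|\nabla Y|^2|v|^2\big ).
```

Summing, the mixed terms cancel and one is left with \(\int e^{-2Y}\big (|\nabla v|^2-|\nabla Y|^2|v|^2\big )\), which contains only the (morally) more regular potential \(|\nabla Y|^2\).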
Motivated by the previous discussion, we observe that if \(u_\varepsilon (t,x,\omega )\) is a solution to
where \(\xi _\varepsilon (x,\omega )\) is defined by (1.3), then the transformed function
satisfies
Here we have \(Y_\varepsilon (x,\omega )\) given by (1.5) and \(:|\nabla Y_\varepsilon |^2:(x,\omega )\) is defined as follows:
where
is the same constant as the one appearing in Theorem 1.1. One can show that almost surely w.r.t. \(\omega \) we have the following convergence, in spaces with negative regularity:
where
(see Sect. 2 for details).
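Given the constant \(C_\varepsilon \) of Theorem 1.1, the renormalized square in (1.13) is presumably the Wick-type subtraction (again under the normalization \(\mathbb {E}|g_n|^2=1\)):

```latex
:\!|\nabla Y_\varepsilon |^2\!:(x,\omega )
= |\nabla Y_\varepsilon (x,\omega )|^2 - C_\varepsilon ,
\qquad
C_\varepsilon =\sum _{n\ne 0}\frac{\rho ^2(\varepsilon n)}{|n|^2}
=\mathbb {E}\,|\nabla Y_\varepsilon (x,\omega )|^2 ,
```

and \(:\!|\nabla Y|^2\!:\) in (1.15) is then its limit as \(\varepsilon \rightarrow 0\) in spaces \(W^{-s,q}\) of negative regularity.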
The main idea to establish Theorem 1.1 is to look for the convergence of \(v_\varepsilon \) as \(\varepsilon \rightarrow 0\), and hence to get information on \(u_\varepsilon \) by going back via the transformation (1.12).
Theorem 1.2
Assume \(p\in [2,3]\), \(\lambda \le 0\), and let \(u_0(x)\) be such that \(e^{Y(x,\omega )} u_0(x) \in H^2(\mathbb {T}^2)\) a.s. Then there exists an event \(\Sigma \subset \Omega \) such that \(p(\Sigma )=1\) and for every \(\omega \in \Sigma \) there exists
such that for every fixed \(T>0\) and \(\gamma \in [0,2)\) we have:
Here we have denoted by \(v_\varepsilon (t,x,\omega )\) for \(\omega \in \Sigma \) the unique global solution in the space \({\mathcal C} (\mathbb {R};H^2(\mathbb {T}^2))\) of the following problem:
and \( v(t,x, \omega )\) denotes for \(\omega \in \Sigma \) the unique global solution in the space \({\mathcal C} (\mathbb {R}; H^\gamma (\mathbb {T}^2))\), for \(\gamma \in (1, 2)\), of the following limit problem:
where in both Cauchy problems (1.16) and (1.17) \(v_0(x)=e^{Y(x,\omega )} u_0(x)\), \(\omega \in \Sigma \).
The result of Theorem 1.2 for \(p=2\), with a weaker mode of convergence, was established in [4]. Here we still follow the strategy developed in [4], which can be summarized as follows:
(1) A priori bounds for the \(H^2\)-norm of \(v_\varepsilon \);
(2) Convergence of the special sequence \((v_{2^{-k}})\) a.s. w.r.t. \(\omega \) in \({\mathcal C}([-T,T];H^\gamma (\mathbb {T}^2))\) for every \(T>0\);
(3) Convergence of the whole family \((v_{\varepsilon })\) a.s. w.r.t. \(\omega \) in \({\mathcal C}([-T,T];H^\gamma (\mathbb {T}^2))\) for every \(T>0\);
(4) Pathwise uniqueness of solutions to (1.17).
In contrast with [4], we do not use pathwise uniqueness in the convergence procedure of steps (2) and (3). The main novelty in this paper is that we can extend the \(H^2\) bounds in step (1) to the range of nonlinearity \(2\le p\le 3\). The key tool compared with [4] is the use of suitable energies in conjunction with the Brezis-Gallouët inequality. This technique is inspired by [10, 11, 13]. As already mentioned, another difference compared with [4] is that we establish the convergence of solutions of the regularized problems to the solution of the limit problem almost surely, rather than in the weaker sense of convergence in probability.
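We recall the Brezis-Gallouët inequality on \(\mathbb {T}^2\) (see [1]), namely the logarithmic estimate

```latex
\Vert w\Vert _{L^\infty (\mathbb {T}^2)}\;\lesssim\;
\Vert w\Vert _{H^1}\Big (1+\log \big (1+\Vert w\Vert _{H^2}\big )\Big )^{1/2}.
```

Its logarithmic loss is compatible with the Gronwall-type lemmas of Sect. 3, which yield double exponential rather than exponential bounds.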
It would be interesting to decide whether the modified energy argument developed in this paper can be useful in order to improve the range of the nonlinearity in [5], where the NLS with multiplicative space white noise on the whole space is considered. Another question concerns the possibility of implementing our approach of modified energies in the context of the formalism used in [7].
1.3 Notations
Next we fix some notation. We denote by \(L^q\), \(W^{s,q}\), \(H^\gamma \) the spaces \(L^q(\mathbb {T}^2)\), \(W^{s,q}(\mathbb {T}^2)\), \(H^\gamma (\mathbb {T}^2)\). Let us give the precise definition of \(W^{s,q}\) we use. The linear operator \(D^s\) is defined by
where \(\langle n\rangle =(1+|n|^2)^{\frac{1}{2}}\). Then we define \(W^{s,q}\) via the norm
We also use the following notation for weighted Lebesgue spaces: \(\Vert f\Vert _{L^q(w)}^q=\int _{\mathbb {T}^2} |f|^q w \, dx\), where \(w\ge 0\) is a weight. We shall denote by \(x=(x_1, x_2)\) the generic point in \(\mathbb {T}^2\); \(\nabla \) will be the full gradient operator w.r.t. the space variables and \(\partial _i\) the partial derivative w.r.t. \(x_i\). To simplify the presentation we denote by \(\int _{\mathbb {T}^2} h\) the integral with respect to the Lebesgue measure \(\int _{\mathbb {T}^2} h \, dx\).
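For completeness, the operator \(D^s\) and the \(W^{s,q}\) norm above are presumably the standard Fourier multiplier definitions:

```latex
D^s f=\sum _{n\in \mathbb {Z}^2}\langle n\rangle ^s\,\hat{f}(n)\, e^{i n\cdot x},
\qquad
\Vert f\Vert _{W^{s,q}}=\Vert D^s f\Vert _{L^q(\mathbb {T}^2)},
\qquad
\langle n\rangle =(1+|n|^2)^{\frac{1}{2}} .
```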
Starting from Sect. 3, we will denote by \(C(\omega )\) a generic random variable finite on the event of full probability defined in Proposition 3.1. The random constant \(C(\omega )\) will be allowed to change from line to line in our computations. For every \(q\in [1,\infty ]\) we denote by \(q'\) the conjugate Hölder exponent. We shall use the notation \(\lesssim \) to denote an inequality \(\le \) up to a positive multiplicative constant C, which in turn may depend harmlessly on contextual parameters. In some cases we shall drop the dependence of functions on the variables \((t, x , \omega )\) when it is clear from the context.
1.4 Plan of the remaining part of the paper
In the next section, we present some stochastic analysis considerations. Section 3 is devoted to the basic bounds resulting from the Hamiltonian structure and to some variants of the Gronwall lemma. Section 4 contains the key bounds at the \(H^2\) level; the proof of the algebraic Proposition 4.1 stated there is postponed to the last section. In Sect. 5, we present the proof of Theorem 1.2, while Sect. 6 is devoted to the proof of Theorem 1.1. Finally, Sect. 7 contains the proof of Proposition 4.1.
2 Probabilistic results
In this section we collect a series of results concerning the probabilistic object Y and its regularized version \(Y_\varepsilon \) (see (1.5)). The main point is that all the needed probabilistic properties are established a.s., which is essential in order to obtain the a.s. convergence in Theorems 1.1 and 1.2. We shall need in the rest of the paper some special random constants that will be combinations of the ones involved in Proposition 2.1.
First we justify the introduction of the constant \(C_\varepsilon \) in (1.14) as follows. By definition of \(Y_{\varepsilon }(x,\omega )\) (see (1.5)) we have
whose zero Fourier coefficient is the random constant
Hence the constant \(C_\varepsilon \) defined in (1.14) is the average over \(\Omega \) of the zero Fourier mode defined above. We shall prove that a.s. w.r.t. \(\omega \) the functions \(:|\nabla Y_\varepsilon |^2:(x,\omega )\) defined in (1.13) converge as \(\varepsilon \rightarrow 0\), in the topology of \(W^{-s,q}\) for \(s\in (0,1)\) and \(q\in (1, \infty )\), to the limit object \(:|\nabla Y|^2:(x,\omega )\) defined by (1.15).
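Indeed, squaring the Fourier series of \(\nabla Y_\varepsilon \) and isolating the frequency-zero contribution gives (under the normalization \(\mathbb {E}|g_n|^2=1\) assumed in this sketch):

```latex
% Product of the series with its conjugate:
|\nabla Y_\varepsilon (x,\omega )|^2
=\sum _{n,m\ne 0}\frac{(n\cdot m)\,\rho (\varepsilon n)\rho (\varepsilon m)}{|n|^2|m|^2}\,
 g_n(\omega )\overline{g_m(\omega )}\, e^{i(n-m)\cdot x},
\\
% Zero mode = diagonal n = m, whose expectation is C_eps:
\sum _{n\ne 0}\frac{\rho ^2(\varepsilon n)}{|n|^2}\,|g_n(\omega )|^2,
\qquad
\mathbb {E}\sum _{n\ne 0}\frac{\rho ^2(\varepsilon n)}{|n|^2}|g_n|^2=C_\varepsilon .
```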
Next we gather the key probabilistic properties that we need in the rest of the paper.
Proposition 2.1
Let \(s\in (0,1)\) and \(q\in (1,\infty )\) be given. There exists an event \(\Sigma _0\subset \Omega \) such that \(p(\Sigma _0)=1\) and for every \(\omega \in \Sigma _0\) there exists a finite constant \(C(\omega )>0\) such that:
-
we have the following uniform bound:
$$\begin{aligned} \sup _{\varepsilon \in (0,1)} \big \{\Vert e^{\pm Y_\varepsilon }(x,\omega )\Vert _{L^\infty }, \Vert e^{\pm Y_\varepsilon }(x,\omega )\Vert _{W^{s,q}},\Vert \nabla Y_\varepsilon (x,\omega )\Vert _{L^q} |\ln \varepsilon |^{-1},\\ \Vert :|\nabla Y_\varepsilon |^2:(x,\omega ) \Vert _{L^q} |\ln \varepsilon |^{-2}, \Vert :|\nabla Y_\varepsilon |^2: (x,\omega )\Vert _{W^{-s,q}} \big \}<C(\omega ); \end{aligned}$$
-
for a suitable \(\kappa >0\) we have:
$$\begin{aligned} \Vert Y_\varepsilon (x,\omega )-Y(x,\omega )\Vert _{W^{s,q}}<C(\omega ) \varepsilon ^\kappa , \end{aligned}$$
(2.1)
in particular by choosing \(sq>2\) we get by Sobolev embedding
$$\begin{aligned} \Vert Y_\varepsilon (x,\omega )-Y(x,\omega )\Vert _{L^\infty }<C(\omega ) \varepsilon ^\kappa \end{aligned}$$
and also
$$\begin{aligned} \Vert e^{-pY_\varepsilon (x,\omega )}-e^{-pY(x,\omega )}\Vert _{L^\infty }<C(\omega ) \varepsilon ^\kappa , \quad \forall p\in \mathbb {R}; \end{aligned}$$
-
for a suitable \(\kappa >0\) we have
$$\begin{aligned} \Vert \nabla Y_\varepsilon (x,\omega ) - \nabla Y(x,\omega ) \Vert _{W^{-s,q}}< C(\omega ) \varepsilon ^\kappa , \end{aligned}$$
(2.2)
and
$$\begin{aligned} \Vert :|\nabla Y_\varepsilon |^2:(x,\omega ) - :|\nabla Y|^2:(x,\omega ) \Vert _{W^{-s,q}}< C(\omega ) \varepsilon ^\kappa . \end{aligned}$$
(2.3)
Let us observe that, since the condition on s is open, by using the Sobolev embedding we can include the case \(q=\infty \) in (2.1), (2.2), (2.3).
We shall split the proof of Proposition 2.1 into several propositions. The following result will be important to pass information from a suitable discrete sequence \(\varepsilon _N\) to the continuous parameter \(\varepsilon \). Notice that the independence property of \((g_n)\) is not used in its proof.
Lemma 2.2
Let \(\gamma >0\) be fixed. Then there exists an event \(\Sigma _1\) with full measure such that for every \(\omega \in \Sigma _1\) there exists \(K>0\) such that
Proof
We first prove
Notice that
It remains to observe that by gaussianity
and we conclude (2.5) by elementary considerations.
Next we introduce
and
Notice that (2.4) holds for \(\omega \in \Sigma _1\) for a suitable K, and moreover \(\Sigma _1\) has full measure by (2.5). \(\square \)
Proposition 2.3
Let \(s\in (0,1)\) and \(q\in (1,\infty )\) be fixed. There exists an event \(\tilde{\Sigma }\subset \Omega \) such that \(p(\tilde{\Sigma })=1\) and for every \(\omega \in \tilde{\Sigma }\) there exists \(C(\omega )<\infty \) such that:
for a suitable \(\kappa >0\). Moreover we have
Proof
First we notice that by (2.6) and Sobolev embedding we have a.s.
Hence (2.7) follows from the following computation
and by noticing that by (2.8) we have a.s. \(\sup _{\varepsilon } \Vert Y_\varepsilon (x,\omega )\Vert _{L^\infty }<\infty \).
Next we split the proof of (2.6) in two steps.
First step: proof of (2.6) for \(\varepsilon =\varepsilon _N=N^{-1}\)
For every \(p\ge q\) we combine the Minkowski inequality and a standard bound between the \(L^p\) and \(L^2\) norms of gaussians in order to get:
To justify the last inequality notice that by independence of \(g_n(\omega )\) we have for every fixed \(x\in \mathbb {T}^2\) the following estimates:
where we have used the following consequence of the mean value theorem
Next by combining (2.9) and Lemma 4.5 of [14], we can write
where the positive constants \(c,C>0\) are independent of N. The right-hand side of (2.10) is summable in N. Therefore, we can use the Borel-Cantelli lemma to conclude that there exists a full measure set \(\Sigma _2\subset \Omega \) such that for every \(\omega \in \Sigma _2\) there exists \(N_0(\omega )\in \mathbb {N}\) such that
and hence
Second step: proof of (2.6) for \(\varepsilon \in (0,1)\)
Let us set \( \tilde{\Sigma }=\Sigma _1\cap \Sigma _2\ \) (where \(\Sigma _2\) is given in the first step and \(\Sigma _1\) in Lemma 2.2), then \(p(\tilde{\Sigma })=1\) and we will show that for every \(\omega \in \tilde{\Sigma }\) we have the desired property.
For every \(\varepsilon \in (0,1)\) there exists \(N\in \mathbb {N}\) such that
where \(\varepsilon _N=N^{-1}\) is as in the previous step. We claim that
for every \(\gamma >0\). Once the estimate above is established then the proof of (2.6) for \(\omega \in \tilde{\Sigma }\) follows by recalling (2.11) and the Minkowski inequality:
Hence we get (2.6) for \(\kappa =\frac{1-s}{2}\) provided that we choose \(\gamma >0\) small enough.
Next we prove (2.13). Due to (2.4) and the Minkowski inequality for every \(\omega \in \Sigma _1\) we have some \(K>0\) such that:
On the other hand by the mean value theorem and (2.12) we get
therefore by using the rapid decay of \(\nabla \rho \) we get for every \(L>0\) the following bound
We can now estimate the r.h.s. in (2.14) as follows:
Hence going back to (2.14) we get
and we conclude (2.13) since we are assuming (2.12). \(\square \)
Proposition 2.4
Let \(q\in (1,\infty )\) be fixed. There exists an event \(\tilde{\Sigma }\subset \Omega \) such that \(p(\tilde{\Sigma })=1\) and for every \(\omega \in \tilde{\Sigma }\) there exists \(C(\omega )<\infty \) such that:
and
Proof
It is easy to deduce (2.17) from (2.16). In fact we have the following trivial estimate
and we conclude by noticing that \(C_\varepsilon \lesssim |\ln \varepsilon |\). Next, we focus on the proof of (2.16) that we split in two steps.
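To justify the bound \(C_\varepsilon \lesssim |\ln \varepsilon |\) used just above, one can compare the sum with an integral, using that \(\rho =\hat{\chi }\) is bounded and rapidly decaying:

```latex
C_\varepsilon
=\sum _{0<|n|\lesssim \varepsilon ^{-1}}\frac{\rho ^2(\varepsilon n)}{|n|^2}
+\sum _{|n|\gtrsim \varepsilon ^{-1}}\frac{\rho ^2(\varepsilon n)}{|n|^2}
\;\lesssim\;
\int _{1\le |x|\le \varepsilon ^{-1}}\frac{dx}{|x|^2}+O(1)
\;=\;2\pi \ln \tfrac{1}{\varepsilon }+O(1).
```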
First step: proof of (2.16) for \(\varepsilon =\varepsilon _N=N^{-\delta }\), \(\delta \in (0,1)\)
Once again, by combining the Minkowski inequality and a standard bound between the \(L^p\) and \(L^2\) norms of gaussians we get for \(p\ge q\):
The last step follows since, by orthogonality of \(g_n(\omega )\), for every \(x\in \mathbb {T}^2\) we have:
where \(L>0\) is any number and we used the fast decay of \(\rho \). Using (2.18), we obtain that
satisfies \( \Vert F\Vert _{L^p(\Omega )}\le C\sqrt{p} \) and using Lemma 4.5 of [14] (with \(N=1\), according to the notations of [14]), we can write
where the positive constants \(c,C>0\) are independent of N and \(\lambda >0\) is chosen large enough in such a way that the right-hand side of (2.19) is summable in N. Therefore, we can use the Borel-Cantelli lemma to conclude that there exists a full measure set \(\Sigma _2\subset \Omega \) such that for every \(\omega \in \Sigma _2\) there exists \(N_0(\omega )\in \mathbb {N}\) such that
and hence for every \(\omega \in \Sigma _2\) we have
Second step: proof of (2.16) for \(\varepsilon \in (0,1)\)
Exactly as in the proof of Proposition 2.3 it is sufficient to estimate
where
with \(\varepsilon _N=N^{-\delta }\), provided that \(\omega \) belongs to the event \(\Sigma _1\) given in Lemma 2.2 and \(\delta >0\) is small enough. By Lemma 2.2 we have that for every \(\omega \in \Sigma _1\) there exists a constant \(K>0\) such that
Next notice that by combining the mean value theorem with the strong decay of \(\rho \) we get for every fixed \(L>0\):
Then we can estimate
and hence by (2.22) we get
for a suitable \(\alpha >0\), provided that we choose \(\delta , \gamma >0\) small enough. By (2.21) we get
and in particular we get (2.20). \(\square \)
Proposition 2.5
Let \(s\in (0,1)\) and \(q\in (1,\infty )\) be fixed. There exists an event \(\tilde{\Sigma }\subset \Omega \) such that \(p(\tilde{\Sigma })=1\) and for every \(\omega \in \tilde{\Sigma }\) there exists \(C(\omega )<\infty \) such that
for a suitable \(\kappa >0\).
Proof
We split again the proof of (2.23) in two steps.
First step: proof of (2.23) for \(\varepsilon =\varepsilon _N=N^{-\delta }\), \(\delta \in (0,1)\)
Notice that
For every \(p\ge q\), by the Minkowski inequality and using hypercontractivity (see [12]) to estimate the \(L^p\) norm of a bilinear form of the gaussian vector \((g_n)\), we get:
Now, we observe that for a complex gaussian g and two nonnegative integers \(k_1\), \(k_2\), we have that \(\mathbb {E}(g^{k_1}\bar{g}^{k_2})=0\), unless \(k_1=k_2\). Therefore, by the independence of \((g_n)\), modulo \(g_n=\overline{g_{-n}}\), for every fixed \(x\in \mathbb {T}^2\), we can write:
where the implicit constant is independent of x. Concerning II we notice first that by the mean value theorem
and by interpolation with the trivial bound
we get
Then we can estimate
Next we estimate I. First notice that
and hence by the mean value theorem we get
that by interpolation with the trivial bound
implies
Hence we can evaluate I as follows:
where we have used
which in turn follows from the discrete Young inequality provided that \(2\theta <s\). Going back to (2.24) we get that for \(\delta \in (0,1)\) and \(\theta \in (0,\frac{s}{2})\) one has the bound
This estimate, in conjunction with Lemma 4.5 of [14] implies
where the positive constants \(c,C>0\) are independent of N. Since the r.h.s. is summable in N we can apply the Borel-Cantelli lemma and deduce the existence of an event \(\Sigma _2\) with full measure and such that for every \(\omega \in \Sigma _2\) there exists \(N_0(\omega )>0\) with the property
and hence
Second step: proof of (2.23) for \(\varepsilon \in (0,1)\)
We consider a generic \(\varepsilon >0\) and we select N in such a way that \(\varepsilon _{N+1}<\varepsilon \le \varepsilon _N\) where \(\varepsilon _N=N^{-\delta }\) as in the first step. By the Minkowski inequality and the previous step it is sufficient to prove
for a suitable \(\alpha >0\), for \(\omega \in \Sigma _1\) where \(\Sigma _1\) is given in Lemma 2.2. By Lemma 2.2 we deduce that almost surely there exists a finite constant \(K>0\) such that (2.4) occurs. Then we have
By looking at the argument to prove (2.25) we get:
and hence we can continue the estimate above as follows
Concerning the first sum on the r.h.s. in (2.28) (notice that by symmetry we can assume \(|n_1|<|n_2|\)) we can estimate as follows:
For the second sum on the r.h.s. in (2.28) we use the fast decay of \(\rho \) and hence for every \(L>0\) we have
By using again the fast decay of \(\rho \) we can estimate the third sum on the r.h.s. in (2.28) as follows (we can assume by symmetry \(|n_1|<N^{2\delta }\le |n_2|\)):
The proof of (2.26) is now complete provided we choose L large and \(\delta ,\gamma \) small enough. \(\square \)
We conclude this section by noticing that the analysis performed here may allow the extension of the results of [9] to a continuous family of approximating problems.
3 Some useful facts
Next we provide a result that will be useful in the sequel. The proof is inspired by [4]; nevertheless, we provide it for the sake of completeness. From a technical viewpoint, the minor difference is that our proof involves Sobolev spaces, while the one in [4] uses Hölder spaces.
Proposition 3.1
Let \(v_\varepsilon (t,x, \omega )\) be as in Theorem 1.2. Then there exists an event \(\Sigma \subset \Omega \) such that \(p(\Sigma )=1\), \(\Sigma \subset \Sigma _0\) where \(\Sigma _0\) is the event in Proposition 2.1 and for every \(\omega \in \Sigma \) there exists a finite constant \(C(\omega )>0\) such that:
and
Proof
We introduce the event \(\Sigma \) as follows
where \(\Sigma _0\) is the event provided by Proposition 2.1.
We now state the fundamental conservation laws satisfied by \(v_{\varepsilon }\). We have the mass conservation
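Since (1.6) conserves the \(L^2\) norm of \(u_\varepsilon \) and \(v_\varepsilon =e^{Y_\varepsilon }u_\varepsilon \) by the gauge (1.12), the conserved mass in the v variable should read (a sketch consistent with that gauge):

```latex
\int _{\mathbb {T}^2}|v_\varepsilon (t)|^2\, e^{-2Y_\varepsilon }\, dx
=\int _{\mathbb {T}^2}|u_\varepsilon (t)|^2\, dx
=\int _{\mathbb {T}^2}|v_\varepsilon (0)|^2\, e^{-2Y_\varepsilon }\, dx .
```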
and the energy conservation
Of course, the conservation laws (3.4) and (3.5) give the key global information in our analysis. By using (3.4) and Proposition 2.1, we get
In order to control \(\Vert \nabla v_\varepsilon (t) \Vert _{L^2}\) we first notice that by duality and by Lemma 2.2 in [9] (see also [6] and the references therein), and by using Proposition 2.1 we get for \(s\in (0,1)\):
where \(\frac{1}{q'}=\frac{1}{2} + \frac{1}{r}=\frac{2}{3} + \frac{1}{l}\), \(\frac{2}{3}=\frac{1}{m}+ \frac{1}{2}\) and \(q<\infty \) is large enough. We now fix \(s=\frac{1}{2}\). By using now interpolation and the Sobolev embedding we get
By combining this estimate with (3.5) (and by using that we are assuming \(\lambda \le 0\)) we get
which in turn by interpolation, Sobolev embedding and (3.6) implies
where \({\mathcal P}\) denotes a polynomial function and we have used Proposition 2.1 to estimate a.s. \(\sup _{\varepsilon \in (0,1)} \Vert e^{-Y_\varepsilon }\Vert _{L^\infty }<C(\omega )\). We therefore have the bound
where the random constant \(C(\omega )\) is finite for every \(\omega \in \Sigma \). We conclude the proof by the classical Young inequality. \(\square \)
In the sequel we shall need suitable versions of the Gronwall lemma. Although they are very classical, we prefer to state them, in particular to emphasize how the estimates depend on the constants involved. We also mention that the estimates below are implicitly used in [4]; however, for the sake of clarity, we prefer to give below the precise statements that we need.
Proposition 3.2
Let f(t) be a non-negative real valued function such that for \(t\in [0, \infty )\):
where \(A, B, C\in (1, \infty )\). Then we have the following upper bound
Proof
Notice that by assumption
Therefore
Hence
which implies, after integration between 0 and t:
By taking twice the exponential we obtain
Coming back to (3.7), we get the needed bound. \(\square \)
Proposition 3.3
Let f(t) be a non-negative real valued function for \(t\in [0, \infty )\), such that:
where \(A, B\in (0, \infty )\). Then we have the following upper bound:
Proof
We notice that \(\frac{d}{dt}(e^{-Bt} f(t))\le Ae^{-Bt}\) and hence
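Under the natural reading of the hypothesis as \(f'(t)\le A+Bf(t)\), the integration can be completed as follows:

```latex
\frac{d}{dt}\big (e^{-Bt}f(t)\big )\le Ae^{-Bt}
\;\Longrightarrow\;
e^{-Bt}f(t)-f(0)\le \frac{A}{B}\big (1-e^{-Bt}\big )
\;\Longrightarrow\;
f(t)\le e^{Bt}f(0)+\frac{A}{B}\big (e^{Bt}-1\big ).
```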
\(\square \)
4 Modified energy for the gauged NLS on \(\mathbb {T}^2\) and \(H^2\) a-priori bounds
In the sequel \(v_\varepsilon (t,x,\omega )\) will denote the unique solution to:
where \(\lambda \in \mathbb {R}\), \(p\ge 2\).
Proposition 4.1
We have the identity
where
and the energies \({\mathcal F}_{\varepsilon }, {\mathcal G}_{\varepsilon }, \mathcal H_{\varepsilon }\) are defined as follows on a generic time dependent function w(t, x). The kinetic energy is defined by
The potential energy is defined by
Finally, the lack of exact conservation is measured by the functional
Remark 4.2
Notice that in the linear case (namely (4.1) with \(\lambda =0\)) we get the following exact conservation law:
The proof of Proposition 4.1 will be presented in the last section of the paper. Next, we estimate \({\mathcal H}_\varepsilon (v_\varepsilon )\) and the lower order terms in the energy \({\mathcal E}_\varepsilon (v_\varepsilon )\); they will play a crucial role in obtaining the key \(H^2\) a-priori bound for \(v_\varepsilon \). In the sequel we shall assume that \(v_\varepsilon \) solves (4.1) with \(\lambda \le 0\) and \(p\in [2,3]\). In particular, we are allowed to use Proposition 3.1 in order to control a.s. \(\Vert v_\varepsilon (t,x)\Vert _{H^1}\) uniformly w.r.t. \(\varepsilon \) and t.
Proposition 4.3
Let \(\Sigma \subset \Omega \) be the event of full probability, obtained in Proposition 3.1. Then there exists a random variable \(C(\omega )\) finite on \(\Sigma \) such that for every \(\varepsilon \in (0,\frac{1}{2})\):
Proof
By using the Hölder inequality, the Leibniz rule and the diamagnetic inequality \(|\partial _t |u||\le |\partial _t u|\), we get that the first three terms in \({\mathcal H}_\varepsilon (v_\varepsilon )\) can be estimated by:
which by the Brezis-Gallouët inequality (see [1]) can be estimated as follows:
and by using the equation solved by \(v_\varepsilon (t,x)\):
Next we recall a family of estimates that will be useful to control I, II, III, IV. We shall also use, without further comment, Propositions 2.1 and 3.1. We have the Gagliardo-Nirenberg type inequality
Indeed, using the Sobolev embedding \(H^{\frac{1}{2}}\subset L^4\), we can write
It remains to observe that
Therefore we have (4.2). Now, using (4.2) we get:
and also
Next notice that
where we have used the Sobolev embedding. Again by the Sobolev embedding we get:
Finally notice that
Based on the estimates above we get:
and also
where we used the Young inequality. We conclude with the following estimates:
and
Summarizing we can control the first three terms in \({\mathcal H}_\varepsilon (v_\varepsilon )\). Concerning the last term in the expression of \({\mathcal H}_\varepsilon (v_\varepsilon )\) we can estimate it as follows:
where we have used Propositions 2.1 and 3.1 in conjunction with the Sobolev embedding to control \(\Vert v_\varepsilon \Vert _{L^{8p}}\). Hence by the Gagliardo-Nirenberg inequality (4.2) and by using the equation solved by \(v_\varepsilon \) we can continue as follows
and by the Sobolev embedding
The conclusion is now straightforward. \(\square \)
Proposition 4.4
Let \(\Sigma \subset \Omega \) be the event of full probability obtained in Proposition 3.1. For every \(\delta >0\) and \(\omega \in \Sigma \) there exists a finite constant \(C(\omega , \delta )>0\) such that for every \(\varepsilon \in (0,\frac{1}{2})\):
and
Proof
We estimate the terms involved in the expression \({\mathcal F}_{\varepsilon }(v_\varepsilon )-\int _{\mathbb {T}^2} |\Delta v_\varepsilon |^2 e^ {-2Y_\varepsilon }\). Since the arguments are quite similar to the ones used along the proof of Proposition 4.3, we skip the details. Using Propositions 2.1 and 3.1, we can write
Next notice that the third and fourth terms in the energy \({\mathcal F}_{\varepsilon }\) can be estimated by
By similar arguments and Sobolev embedding we get:
Next we estimate the other term to be controlled:
where we have used the Sobolev embedding. We conclude the proof of (4.3) by the following estimates:
where we have used again the Sobolev embedding. Next, we prove (4.4). The first, second and third terms in the definition of \({\mathcal G}_\varepsilon (v_\varepsilon )\) can be estimated essentially by the same argument. Let us focus on the first one:
where we have used again the Sobolev embedding, the Gagliardo-Nirenberg inequality and Propositions 2.1 and 3.1. Concerning the fourth term in the definition of \({\mathcal G}_\varepsilon (v_\varepsilon )\) we get by the Hölder inequality and the Sobolev embedding
where we have used again Propositions 2.1 and 3.1. Finally we focus on the last term in the definition of \({\mathcal G}_\varepsilon (v_\varepsilon )\) that can be estimated as follows
where we have used Propositions 2.1 and 3.1. \(\square \)
As already mentioned in the introduction, we carefully follow the approach in [4] along the proof of Theorem 1.2, the main novelty being the following \(H^2\) a priori bound, which we extend to the regime of nonlinearity \(2\le p\le 3\). We next focus on the proof of the following proposition (to be compared with Proposition 4.2 in [4]), which is the most important result of this section.
Proposition 4.5
Let \(\Sigma \subset \Omega \) be the event of full probability obtained in Proposition 3.1, and let \(T>0\) be fixed. Then there exists a random variable \(C(\omega )>0\), finite for every \(\omega \in \Sigma \), such that for every \(\varepsilon \in (0,\frac{1}{2})\),
Proof of Proposition 4.5
We only consider positive times t. The case \(t<0\) can be treated similarly. We shall prove the following estimate
for a suitable random constant which is finite a.s.; the conclusion then follows by
By Proposition 4.1 after integration in time and by using Propositions 4.3 and 4.4 (where we choose \(\delta >0\) small in such a way that we can absorb on the l.h.s. the factor \(\Vert e^{-Y_\varepsilon }\Delta v_\varepsilon (t)\Vert _{L^2}^2\)) we can write:
Notice also that by Proposition 4.4 one can show the following bound for every \(\omega \) belonging to the event given in Proposition 3.1:
Hence, by recalling that \(\frac{p-1}{2}\le 1\), we deduce from (4.5) the following bound
We can apply Proposition 3.2 and the conclusion follows. \(\square \)
5 Proof of Theorem 1.2
5.1 Convergence of the approximate solutions
Proposition 5.1
Let \(T>0\) be fixed and \(v_\varepsilon (t,x, \omega )\) be as in Theorem 1.2. Then there exists \( v(t,x, \omega )\in {\mathcal C}(\mathbb {R};H^\gamma )\) such that
Proof
We shall only consider positive times, the analysis for negative times being similar. Let us fix \(T>0\). Set
Then the equation solved by r is the following one:
We multiply the equation by \(e^{-2Y_{\varepsilon _1}(x)}\bar{r}(t,x)\) and take the imaginary part; then we get
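Schematically, if the principal part of the equation for r is the operator \(H_{\varepsilon _1}=-e^{2Y_{\varepsilon _1}}\nabla \cdot (e^{-2Y_{\varepsilon _1}}\nabla \,\cdot \,)\), which is symmetric on the weighted space \(L^2(e^{-2Y_{\varepsilon _1}}dx)\), then the second-order term gives no contribution after taking imaginary parts, and one is left with an identity of the following type (this is a sketch under the stated structural assumption on the equation):

```latex
\frac{1}{2}\frac{d}{dt}\int_{\mathbb{T}^2} e^{-2Y_{\varepsilon_1}}\,|r|^2\,dx
\;=\;
\operatorname{Im}\int_{\mathbb{T}^2} e^{-2Y_{\varepsilon_1}}\,\bar r\,\mathcal{N}\,dx,
```

where \(\mathcal{N}\) collects the remaining lower order and nonlinear terms, which are then estimated as the contributions I, II, III below.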
From now on we choose \(\omega \in \Sigma \) where the event \(\Sigma \) is defined as in Proposition 3.1. We estimate I by using duality and Lemma 2.2 in [9] (see also the proof of Proposition 3.1):
where \(\frac{1}{q'}=\frac{1}{q_1}+\frac{1}{q_2}+\frac{1}{q_3}\) and \(s\in (0,1), q\in (1, \infty )\). Next notice that by choosing \(s\in (0,1)\) small enough, by using Sobolev embedding and by recalling Propositions 2.1, 3.1 and 4.5 we get
By a similar argument we can estimate II as follows:
and hence by using Sobolev embedding and by recalling Propositions 2.1, 3.1 and 4.5 we get for s small enough
The estimate of the term III is rather classical and can be done by using the Brezis-Gallouët inequality (see [1]). More precisely we get:
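We recall that, in its standard two-dimensional form, the Brezis-Gallouët inequality reads:

```latex
\|u\|_{L^\infty(\mathbb{T}^2)}
\;\lesssim\;
\|u\|_{H^1}\,\Big(1+\log\big(1+\|u\|_{H^2}\big)\Big)^{1/2}.
```

The logarithmic loss in \(\Vert u\Vert _{H^2}\) is what makes the Gronwall argument below work despite the lack of a uniform \(L^\infty \) bound.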
where we have used at the last step Proposition 3.1. In order to control \(\Vert v_{\varepsilon _1}(t)\Vert _{H^2}\) we use Proposition 4.5 and we get
Next, arguing as in the estimate of III, we get by combining Propositions 3.1 and 4.5
Finally, by the Hölder inequality and Propositions 2.1 and 3.1, we estimate
Summarizing we obtain
Next we split the proof in two steps.
First step: \(v_{2^{-k}}(t,x,\omega )\overset{k\rightarrow \infty }{\longrightarrow }v(t,x,\omega )\) for every \(\omega \in \Sigma \).
We consider \(r=v_{2^{-(k+1)}} - v_{2^{-k}}\). Then by combining Proposition 3.3 and (5.2) (where we choose \(\varepsilon _1 =2^{-(k+1)}\) and \( \varepsilon _2=2^{-k}\)) we get:
By recalling that for every \(\omega \in \Sigma \) we have \(\sup _k \Vert e^{2Y_{2^{-(k+1)}}}\Vert _{L^\infty }<C(\omega )<\infty \) we deduce that the bound above implies
By combining this estimate with interpolation and with Proposition 4.5 we deduce for every \(\gamma \in [0,2)\) the following bound
where \(\tilde{\kappa }, \tilde{p}>0\) are constants that depend on the interpolation inequality. It is easy to check that
and therefore \((v_{2^{-k}})\) is a Cauchy sequence in \({\mathcal C}([0,T];H^\gamma )\) and we conclude.
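The Cauchy property follows from the summability of the increments. Assuming a bound of the form \(\sup _{[0,T]}\Vert v_{2^{-(k+1)}}-v_{2^{-k}}\Vert _{H^\gamma }\le C(\omega )\,2^{-k\kappa }\) for some \(\kappa >0\), as provided by the previous estimate, a telescoping argument gives, for \(m>k\):

```latex
\sup_{t\in[0,T]}\|v_{2^{-m}}(t)-v_{2^{-k}}(t)\|_{H^\gamma}
\;\le\; C(\omega)\sum_{j=k}^{m-1} 2^{-j\kappa}
\;\le\; C(\omega)\,\frac{2^{-k\kappa}}{1-2^{-\kappa}}
\;\overset{k\rightarrow\infty}{\longrightarrow}\; 0.
```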
Second step: \(v_{\varepsilon }(t,x,\omega )\overset{\varepsilon \rightarrow 0}{\longrightarrow }v(t,x,\omega )\) for every \(\omega \in \Sigma \).
For every \(\varepsilon \in (2^{-(k+1)}, 2^{-k})\) we introduce \(r=v_{\varepsilon }- v_{2^{-k}}\). Then by combining (5.2) (where we choose \(\varepsilon _1=\varepsilon \) and \(\varepsilon _2=2^{-k}\)) with Proposition 3.3 and arguing as above we get
and hence (recall that \(\varepsilon \in (2^{-(k+1)}, 2^{-k})\))
We conclude by recalling the first step. \(\square \)
5.2 Uniqueness for (1.17)
It follows from the analysis of the previous section that \(v_{\varepsilon }\) converges almost surely to a solution of (1.17). We next prove the uniqueness of this solution.
Proposition 5.2
Let \(\Sigma \subset \Omega \) be the full measure event defined in Proposition 3.1 and \(T>0\). For every \(\omega \in \Sigma \) there exists at most one solution \( v(t,x)\in {\mathcal C}([0,T];H^\gamma )\) to (1.17) for \(\gamma >1\).
Proof
Assume \( v_1(t,x)\) and \( v_2(t,x)\) are two solutions, then we consider the difference \(r(t,x)= v_1(t,x)- v_2(t,x)\) which solves
Next we multiply the equation by \(e^{-2Y_\varepsilon (x)} \bar{r}(t,x)\) where \(\varepsilon \in (0,1)\), we integrate by parts and we take the imaginary part, finally we get:
By the Sobolev embedding \(H^\gamma \subset L^\infty \) we get
For the term I we get by duality and Lemma 2.2 in [9] (see the proof of Proposition 3.1 for more details) the following estimate
where \(s\in (0,1), q\in (1, \infty )\), \(\frac{1}{q'}=\frac{1}{q_1}+ \frac{1}{q_2}+\frac{1}{2}\) and we have used Proposition 2.1 at the second step. By Sobolev embedding, provided that we choose s small enough, and Proposition 2.1 one can show that
Summarizing we get
We deduce by Proposition 3.3 that
and hence by passing to the limit \(\varepsilon \rightarrow 0\) we deduce \(\int _{\mathbb {T}^2} e^{-2Y} |r(t)|^2=0\). \(\square \)
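The closing Gronwall step admits the following schematic summary, under the assumption that Proposition 3.3 is a Gronwall-type lemma: a differential inequality of the form

```latex
\frac{d}{dt}\int_{\mathbb{T}^2} e^{-2Y_\varepsilon}\,|r(t)|^2\,dx
\;\le\; C(\omega)\int_{\mathbb{T}^2} e^{-2Y_\varepsilon}\,|r(t)|^2\,dx,
\qquad r(0)=0,
```

forces \(\int _{\mathbb {T}^2} e^{-2Y_\varepsilon }|r(t)|^2=0\) for all \(t\in [0,T]\); letting \(\varepsilon \rightarrow 0\) and using that \(e^{-2Y}>0\) a.e. then yields \(r\equiv 0\), i.e. \(v_1=v_2\).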
6 Proof of Theorem 1.1
The proof of (1.7) follows by combining the transformation (1.12) with Theorem 1.2. From now on we shall denote by \(\Sigma \subset \Omega \) the event of full probability given by the intersection of the ones defined in Theorem 1.2 and in Proposition 3.1. In order to prove (1.8) we first show
Notice that from (1.7) and the Sobolev embedding, we get
and hence by the triangle inequality in \(\mathbb {C}\),
Next we prove
Since
we get in particular
and hence by the diamagnetic inequality
On the other hand by (6.4) we have
Next notice that \(e^{Y_{\varepsilon }} |u_{\varepsilon }(t)|=|v_{\varepsilon }(t)|\), so that by the diamagnetic inequality \(\Vert e^{Y_{\varepsilon }} |u_{\varepsilon }(t)|\Vert _{H^1}\le \Vert v_{\varepsilon }(t)\Vert _{H^1}\). Summarizing,
By interpolation between the uniform bound (6.8) and (6.2) we get (6.3).
Finally we prove (1.8). We show first the following fact
which in turn implies by (6.1) the following convergence
We shall establish the following equivalent form of (6.9):
We first focus on the case \(\gamma =0\). In this case we get (6.11) by combining the following facts: we have the convergence \(Y_\varepsilon (x) \overset{\varepsilon \rightarrow 0}{\longrightarrow }Y(x)\) for every \(\omega \in \Sigma \) in the \(L^\infty \) topology (see Proposition 2.1); we have the following bound
and hence \(\Vert u_{\varepsilon }(t)\Vert _{L^2}\) is bounded for every \(\omega \in \Sigma \) by (6.7) and Proposition 2.1.
In order to establish (6.11) for \(\gamma \in (0,1)\) it is sufficient to interpolate between the convergence for \(\gamma =0\) (already established above) with the uniform bound
In order to establish this bound it is sufficient to notice that for every \(\omega \in \Sigma \)
and to recall that \(H^\gamma \cap L^\infty \) is an algebra. We recall that the boundedness of \(\Vert |u_{\varepsilon }(t)|\Vert _{H^\gamma \cap L^\infty }\) follows, on one hand, by combining (6.7) with
where \(s>1\). On the other hand we have the following computation:
where we have used Proposition 2.1 and hence we get the desired uniform bound since by the diamagnetic inequality
and we conclude by (6.7).
Let us now establish (1.8). Notice that by combining (6.2), (6.3) and (6.11) we have:
Hence (1.8) in the case \(\gamma =0\) and the \(L^\infty \) convergence, follow from (6.13) since \(e^{-Y}\in L^\infty \) for every \(\omega \in \Sigma \) (see Proposition 2.1). To prove (1.8) in the general case \(\gamma \in (0,1)\) it is sufficient to make interpolation between \(\gamma =0\) and the bound
which in turn implies, thanks to the fact that \({H^\gamma \cap L^\infty }\) is an algebra, that the quantity \(\Vert |u_{\varepsilon }(t)| - e^{-Y}| v(t)| \Vert _{H^\gamma }\) is uniformly bounded for every \(\omega \in \Sigma \). The proof of (6.14) follows by combining the estimate (6.12), the bound \(\Vert v(t)\Vert _{L^\infty }\lesssim \Vert v(t)\Vert _{H^s}<C\) for \(s\in (1,2)\) (where we used (6.7) in the last inequality), the bound (6.6), and finally the properties of Y (see Proposition 2.1). This completes the proof of Theorem 1.1.
7 Proof of Proposition 4.1
In the sequel, we use the following simplified notation: \(v=v_\varepsilon (t,x)\), \(Y=Y_\varepsilon (x)\) and \({\mathcal E}={\mathcal E}_{\varepsilon }\). Moreover we denote by \((\cdot , \cdot )\) the \(L^2\) scalar product. We also drop the explicit dependence of the functions involved on the variables (t, x), in order to make the computations more compact.
We are interested in constructing a suitable energy with the following structure
By using the equation solved by v we have the following identity:
Notice that
Moreover we have
and using again the equation
and hence by (7.2) we get
Summarizing we get from the previous chain of identities
On the other hand we can compute
and hence
By combining (7.5), (7.6) and (7.7) we get
Next notice that
Summarizing we get
Next by using the equation we compute the first and last term on the r.h.s. in (7.10) as follows:
Finally we show that the third term on the r.h.s. in (7.10) can be written as a total derivative w.r.t. time variable:
We conclude the proof of Proposition 4.1 by combining (7.1), (7.10), (7.11), (7.12).
Change history
18 July 2022
Missing Open Access funding information has been added in the Funding Note.
References
Brezis, H., Gallouët, T.: Nonlinear Schrödinger evolution equations. Nonlinear Anal. 4(4), 677–681 (1980)
Burq, N., Gérard, P., Tzvetkov, N.: Strichartz inequalities and the nonlinear Schrödinger equation on compact manifolds. Amer. J. Math. 126, 569–605 (2004)
Colliander, J., Keel, M., Staffilani, G., Takaoka, H., Tao, T.: Almost conservation laws and global rough solutions to a nonlinear Schrödinger equation. Math. Res. Lett. 9, 659–682 (2002)
Debussche, A., Weber, H.: The Schrödinger equation with spatial white noise potential. Electron. J. Probab. 23, no. 28, 16 pp. (2018)
Debussche, A., Martin, J.: Solution to the stochastic Schrödinger equation on the full space. Nonlinearity 32(4), 1147–1174 (2019)
Gubinelli, M., Koch, H., Oh, T.: Renormalization of the two-dimensional stochastic nonlinear wave equation. Trans. Amer. Math. Soc. 370, 7335–7359 (2018)
Gubinelli, M., Ugurcan, B., Zachhuber, I.: Semilinear evolution equations for the Anderson Hamiltonian in two and three dimensions. Stoch. Partial Differ. Equ. Anal. Comput. 8(1), 82–149 (2020)
Hairer, M., Labbé, C.: A simple construction of the continuum parabolic Anderson model on \(\mathbf{R}^{2}\). Electron. Commun. Probab. 20, no. 43, 11 pp. (2015)
Oh, T., Pocovnicu, O., Tzvetkov, N.: Probabilistic local well-posedness of the cubic nonlinear wave equation in negative Sobolev spaces. arXiv:1904.06792 [math.AP]
Ozawa, T., Visciglia, N.: An improvement on the Brezis-Gallouët technique for 2D NLS and 1D half-wave equation. Ann. Inst. H. Poincaré Anal. Non Linéaire 33(4), 1069–1079 (2016)
Planchon, F., Tzvetkov, N., Visciglia, N.: On the growth of Sobolev norms for NLS on 2- and 3-dimensional manifolds. Anal. PDE 10, 1123–1147 (2017)
Simon, B.: The \(P(\varphi )_2\) Euclidean (quantum) field theory. Princeton Series in Physics. Princeton University Press, Princeton, NJ (1974), xx+392 pp.
Tsutsumi, M.: On smooth solutions to the initial boundary value problem for the nonlinear Schrödinger equation in two space dimensions. Nonlinear Anal. TMA 13, 1051–1056 (1989)
Tzvetkov, N.: Construction of a Gibbs measure associated to the periodic Benjamin-Ono equation. Probab. Theory Relat. Fields 146, 481–514 (2010)
Funding
Open access funding provided by Università di Pisa within the CRUI-CARE Agreement.
The first author was supported by ANR Grant ODA (ANR-18-CE40-0020-01); the second author acknowledges the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
Tzvetkov, N., Visciglia, N. Two dimensional nonlinear Schrödinger equation with spatial white noise potential and fourth order nonlinearity. Stoch PDE: Anal Comp 11, 948–987 (2023). https://doi.org/10.1007/s40072-022-00251-z