The stochastic Burgers equation (SBE) on the one dimensional torus \(\mathbb{T }=(-\pi ,\pi ]\) is the SPDE

$$\begin{aligned} \mathrm{d }u_t = \frac{1}{2} \partial _\xi ^2 u_t(\xi ) \mathrm{d }t + \frac{1}{2} \partial _\xi (u_t(\xi ))^2 \mathrm{d }t + \partial _\xi \mathrm{d }W_t \end{aligned}$$
(1)

where \(W_t\) is a cylindrical white noise on the Hilbert space \(H={L^2_0(\mathbb{T })}\) of square integrable, mean zero real functions on \(\mathbb{T }\). It has the form \(W_t(\xi ) = \sum _{k\in \mathbb{Z }_0} e_k(\xi ) \beta ^k_t\) with \(\mathbb{Z }_0 = \mathbb{Z }\backslash \{0\}\) and \(e_k(\xi )=e^{i k\xi }/\sqrt{2\pi }\), where \(\{\beta _t^k\}_{t\ge 0, k\in \mathbb{Z }_0}\) is a family of complex Brownian motions such that \((\beta ^k_t)^*=\beta ^{-k}_t\) and with covariance \(\mathbb{E }[\beta _t^k \beta _t^q ]=2t\,\mathbb{I }_{q+k=0}\). Formally the solution \(u\) of Eq. (1) is the derivative of the solution of the Kardar–Parisi–Zhang (KPZ) equation
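The conjugacy constraint \((\beta ^k_t)^*=\beta ^{-k}_t\) guarantees that \(W_t\) is real-valued. A minimal numerical sketch (truncation level, normalization of the complex modes and all names are our own choices, not from the paper): sample the modes for \(k>0\), obtain the negative modes by conjugation, and check that the resulting field has no imaginary part.

```python
import numpy as np

rng = np.random.default_rng(0)
K, t = 64, 1.0                       # illustrative truncation level and time
xi = np.linspace(-np.pi, np.pi, 257)

# beta^k_t for k > 0: complex Gaussians with independent real and imaginary
# parts (an assumed normalization); the modes for k < 0 are then fixed by
# the constraint (beta^k)^* = beta^{-k}.
beta_pos = rng.normal(0, np.sqrt(t), K) + 1j * rng.normal(0, np.sqrt(t), K)

W = np.zeros_like(xi, dtype=complex)
for k in range(1, K + 1):
    e_k = np.exp(1j * k * xi) / np.sqrt(2 * np.pi)
    # the k-th and (-k)-th terms are complex conjugates of each other
    W += e_k * beta_pos[k - 1] + np.conj(e_k * beta_pos[k - 1])

assert np.max(np.abs(W.imag)) < 1e-12   # the truncated field is real
```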

$$\begin{aligned} \mathrm{d }h_t = \frac{1}{2} \partial _\xi ^2 h_t(\xi ) \mathrm{d }t + \frac{1}{2} (\partial _\xi h_t(\xi ))^2 \mathrm{d }t + \mathrm{d }W_t \end{aligned}$$
(2)

which is believed to capture the macroscopic behavior of a large class of surface growth phenomena [20].

The main difficulty with Eq. (1) is the rough nonlinearity, which is incompatible with the distributional nature of the typical trajectories of the process. Indeed, at least formally, Eq. (1) preserves the white noise on \(H\), and the square in the non-linearity is almost surely \(+\infty \) on the white noise. Additive renormalizations in the form of Wick products are not enough to cure this singularity [9].

In [7], Bertini and Giacomin, studying the scaling limits of the fluctuations of an interacting particle system, show that a particular regularization of (1) converges in law to a limiting process \(u^\mathrm{hc }_t(\xi )=\partial _\xi \log Z_t(\xi )\) (referred to as the Hopf–Cole solution), where \(Z\) is the solution of the stochastic heat equation with multiplicative space–time white noise

$$\begin{aligned} \mathrm{d }Z_t = \frac{1}{2} \partial _\xi ^2 Z_t(\xi ) \mathrm{d }t + Z_t(\xi ) \mathrm{d }W_t(\xi ) . \end{aligned}$$
(3)

The Hopf–Cole solution is believed to be the correct physical solution of (1); however, until recently a rigorous notion of solution to Eq. (1) was lacking, so the issue of uniqueness remained open.
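The formal computation behind the Hopf–Cole map is worth recording as a sketch (making it rigorous is precisely the renormalization problem): set \(h_t = \log Z_t\) and compute as if \(Z\) were smooth, ignoring the Itô correction \(-(\mathrm{d }Z_t)^2/(2Z_t^2)\), which is divergent for space–time white noise. Since

$$\begin{aligned} \partial _\xi ^2 \log Z_t = \frac{\partial _\xi ^2 Z_t}{Z_t} - \Big (\frac{\partial _\xi Z_t}{Z_t}\Big )^2 = \frac{\partial _\xi ^2 Z_t}{Z_t} - (\partial _\xi h_t)^2 , \end{aligned}$$

dividing (3) by \(Z_t\) formally gives

$$\begin{aligned} \mathrm{d }h_t = \frac{\mathrm{d }Z_t}{Z_t} = \frac{1}{2} \frac{\partial _\xi ^2 Z_t}{Z_t} \mathrm{d }t + \mathrm{d }W_t = \frac{1}{2} \partial _\xi ^2 h_t \,\mathrm{d }t + \frac{1}{2} (\partial _\xi h_t)^2 \,\mathrm{d }t + \mathrm{d }W_t , \end{aligned}$$

which is (2); differentiating in \(\xi \) then formally yields (1) for \(u = \partial _\xi h\).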

Jara and Gonçalves [15] introduced a notion of energy solution for Eq. (1) and showed that the macroscopic current fluctuations of a large class of weakly non-reversible particle systems on \(\mathbb{Z }\) obey the Burgers equation in this sense. Moreover, their results show that the Hopf–Cole solution is also an energy solution of Eq. (1).

More recently, Hairer [18] obtained a complete existence and uniqueness result for KPZ. In this remarkable paper the theory of controlled rough paths is used to give meaning to the nonlinearity, and a careful analysis of the series expansion of the candidate solutions allows him to give a consistent meaning to the equation and to obtain a uniqueness result. In particular, Hairer’s solution coincides with the Hopf–Cole ansatz.

In this paper we take a different approach to the problem. We want to point out the regularizing effect of the linear stochastic part of the equation on the non-linear part. This is linked to some similar remarks of Assing [3, 4] and to the approach of Jara and Gonçalves [15]. Our point of view is also motivated by similar analyses in the PDE and SPDE context where the noise or a dispersive term provides enough regularization to treat some non-linear term: there are examples involving the stochastic transport equation [12], the periodic Korteweg–de Vries equation [5, 17] and the fast rotating Navier–Stokes equation [6]. In particular, in the paper [17] it is shown how, in the context of the periodic Korteweg–de Vries equation, an appropriate notion of controlled solution can make sense of the non-linear term in a space of distributions. This point of view also has links with the approach via controlled paths to the theory of rough paths [16].

With our approach we are not able to obtain uniqueness for the SBE above, so we resort to studying the more general equation (SBE\(_\theta \)):

$$\begin{aligned} \mathrm{d }u_t = - A^\theta u_t \mathrm{d }t + F(u_t) \mathrm{d }t + A^{\theta /2} \mathrm{d }W_t \end{aligned}$$
(4)

where \(F(u_t)(\xi )=\partial _\xi (u_t(\xi ))^2\), \(-A\) is the Laplacian with periodic boundary conditions, \(\theta \ge 0\), and the initial condition is taken to be white noise. In the case \(\theta =1\) we essentially recover the stationary case of the SBE above (modulo a mismatch in the noise term which does not affect its law).

For any \(\theta \ge 0\) we introduce a class \(\mathcal R _\theta \) of distributional processes “controlled” by the noise, in the sense that these processes have a small-time behaviour similar to that of the stationary Ornstein–Uhlenbeck process \(X\) which solves the linear part of the dynamics:

$$\begin{aligned} \mathrm{d }X_t = - A^\theta X_t \mathrm{d }t + A^{\theta /2} \mathrm{d }W_t, \end{aligned}$$
(5)

where \(X_0\) is white noise. When \(\theta > 1/2\) we are able to show that the time integral of the non-linear term appearing in SBE\(_\theta \) is well defined, namely that for all \(v\in \mathcal R _\theta \)

$$\begin{aligned} A^v_t = \int \limits _0 ^t F(v_s) \mathrm{d }s \end{aligned}$$
(6)

is a well-defined process with continuous paths in a space of distributions on \(\mathbb{T }\) of specific regularity. Note that this process is not necessarily of finite variation with respect to the time parameter, even when tested with smooth test functions.

The existence of the drift process (6) allows us to formulate the SBE\(_\theta \) equation naturally in the space \(\mathcal R _\theta \) of controlled processes and gives a notion of solution quite similar to the energy solutions introduced by Jara and Gonçalves [15]. Existence of (probabilistically) weak solutions will be established for any \(\theta > 1/2\), that is, well below the KPZ regime. The precise notion of solution will be described below. We are also able to easily show pathwise uniqueness when \(\theta > 5/4\), but the case \(\theta =1\) still seems (well) out of reach for this technique. In particular, the question of pathwise uniqueness is tightly linked with that of existence of strong solutions, and the key estimates which will allow us to handle the drift (6) are not strong enough to give a control on the difference of two solutions (with the same noise) or on the sequence of Galerkin approximations.

Similar regularization phenomena are studied in [12] for stochastic transport equations and in [10] for infinite dimensional SDEs. This is also linked to the fundamental paper of Kipnis and Varadhan [21] on the CLT for additive functionals and to the Lyons–Zheng representation for diffusions with singular drifts [13, 14, 23].

Plan In Sect. 1 we define the class of controlled paths and recall some results of the stochastic calculus via regularization which are needed to handle the Itô formula for controlled processes. Section 2 is devoted to introducing our main tool, which is a moment estimate for an additive functional of a stationary Dirichlet process in terms of the quadratic variations of suitable forward and backward martingales. In Sect. 3 we use this estimate to provide uniform bounds for the drift of any stationary solution. These bounds are used in Sect. 4 to prove tightness of the approximations when \(\theta > 1/2\) and to show existence of controlled solutions of the SBE via Galerkin approximations. Finally, in Sect. 5 we prove our pathwise uniqueness result in the case \(\theta > 5/4\). In Sect. 6 we discuss related results for the model introduced in [9].

Notations We write \(X \lesssim _{a,b,\ldots } Y\) if there exists a positive constant \(C\) depending only on \(a,b,\ldots \) such that \(X \le C Y\). We write \(X \sim _{a,b,\ldots } Y\) iff \(X\lesssim _{a,b,\ldots } Y \lesssim _{a,b,\ldots } X\).

We let \(\mathcal S \) be the space of smooth test functions on \(\mathbb{T }\), \(\mathcal S ^{\prime }\) the space of distributions and \(\langle \cdot ,\cdot \rangle \) the corresponding duality.

On the Hilbert space \(H={L^2_0(\mathbb{T })}\) the family \(\{e_k\}_{k\in \mathbb{Z }_0}\) is a complete orthonormal basis. On \(H\) we consider the space \(\mathcal C yl\) of smooth cylinder functions which depend only on finitely many coordinates in the basis \(\{e_k\}_{k\in \mathbb{Z }_0}\), and for \(\varphi \in \mathcal C yl\) we consider the gradient \(D \varphi : H\rightarrow H\) defined as \(D \varphi (x) = \sum _{k\in \mathbb{Z }_0} D_k \varphi (x) e_k\), where \(D_k = \partial _{x_k}\) and \(x_k = \langle e_k,x \rangle \) are the coordinates of \(x\).

For any \(\alpha \in \mathbb{R }\) define the space \(\mathcal{F }L^{p,\alpha }\) of functions on the torus for which

$$\begin{aligned} \,\,|x|_{\mathcal{F }L^{p,\alpha }} \!=\! \left[ \sum _{k\in \mathbb{Z }_0} (|k|^\alpha |x_k|)^p\right] ^{1/p}<+\infty \, \text{ if } p<\infty \text{ and } \, |x|_{\mathcal{F }L^{\infty ,\alpha }} \!=\! \sup _{k\in \mathbb{Z }_0} |k|^\alpha |x_k| <+\infty . \end{aligned}$$

We will use the notation \(H^\alpha = \mathcal{F }L^{2,\alpha }\) for the usual Sobolev spaces of periodic functions on \(\mathbb{T }\). We let \(A=-\partial _\xi ^2\) and \(B=\partial _\xi \), as unbounded operators acting on \(H\) with domains \(H^2\) and \(H^{1}\) respectively. Note that \(\{e_k\}_{k\in \mathbb{Z }_0}\) is a basis of eigenvectors of \(A\), for which we denote by \(\{\lambda _k = |k|^2 \}_{k\in \mathbb{Z }_0}\) the associated eigenvalues. The operator \(A^\theta \) is then defined by \(A^\theta e_k = |k|^{2\theta }e_k\) with domain \(H^{2\theta }\). The linear operator \(\Pi _N: H \rightarrow H\) is the projection on the subspace generated by \(\{e_k\}_{k\in \mathbb{Z }_0, |k|\le N}\).
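In Fourier coordinates all of these objects are diagonal, so a small sketch can make the notation concrete (the coefficient-dictionary representation and the function names are ours, not the paper's):

```python
import numpy as np

# Represent x = sum_k x_k e_k by its nonzero coefficients {k: x_k}, k in Z_0.
def fl_norm(x, p, alpha):
    """|x|_{FL^{p,alpha}} = (sum_k (|k|^alpha |x_k|)^p)^(1/p), sup for p = inf."""
    vals = [abs(k) ** alpha * abs(xk) for k, xk in x.items()]
    return max(vals) if p == np.inf else sum(v ** p for v in vals) ** (1 / p)

def A_theta(x, theta):   # A^theta acts as the multiplier |k|^(2 theta)
    return {k: abs(k) ** (2 * theta) * xk for k, xk in x.items()}

def B(x):                # B = d/dxi acts as the multiplier ik
    return {k: 1j * k * xk for k, xk in x.items()}

def Pi(x, N):            # Galerkin projection onto the modes |k| <= N
    return {k: xk for k, xk in x.items() if abs(k) <= N}

x = {1: 1.0, -1: 1.0, 2: 0.5, -2: 0.5}   # a real trigonometric polynomial
print(fl_norm(x, 2, 1))                  # the H^1 norm of x
```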

Denote \(\mathcal{C }_T V = C([0,T],V)\) the space of continuous functions from \([0,T]\) to the Banach space \(V\) endowed with the supremum norm and with \(\mathcal{C }^\gamma _T V = C^\gamma ([0,T],V)\) the subspace of \(\gamma \)-Hölder continuous functions in \(\mathcal{C }_T V\) with the \(\gamma \)-Hölder norm.

1 Controlled processes

We introduce a space of stationary processes which “look like” an Ornstein–Uhlenbeck process. The law at any fixed time of these processes will be given by the canonical Gaussian cylindrical measure \(\mu \) on \(H\), which we consider as a Gaussian measure on \(H^{\alpha }\) for any \(\alpha <-1/2\). This measure is fully characterized by the equation

$$\begin{aligned} \int e^{i \langle \psi ,x \rangle }\mu (\mathrm{d }x) = e^{-\langle \psi ,\psi \rangle /2}, \qquad \forall \psi \in H ; \end{aligned}$$

or alternatively by the integration by parts formula

$$\begin{aligned} \int D_k \varphi (x) \mu (\mathrm{d }x) = \int x_{-k} \varphi (x) \mu (\mathrm{d }x),\qquad \forall k\in \mathbb{Z }_0, \varphi \in \mathcal C yl. \end{aligned}$$

Definition 1

(Controlled process) For any \(\theta \ge 0\) let \(\mathcal R _\theta \) be the space of stationary stochastic processes \((u_t)_{0 \le t \le T}\) with continuous paths in \(\mathcal S ^{\prime }\) such that

  i)

    the law of \(u_t\) is the white noise \(\mu \) for all \(t\in [0,T]\);

  ii)

    there exists a process \(\mathcal{A }\in C([0,T],\mathcal S ^{\prime })\) of zero quadratic variation such that \(\mathcal{A }_0 = 0\) and satisfying the equation

    $$\begin{aligned} u_t(\varphi ) = u_0(\varphi ) + \int \limits _0^t u_s(-A^\theta \varphi ) \mathrm{d }s+\mathcal{A }_t(\varphi ) + M_t(\varphi ) \end{aligned}$$
    (7)

    for any test function \(\varphi \in \mathcal S \), where \(M_t(\varphi )\) is a martingale with respect to the filtration generated by \(u\) with quadratic variation \([M(\varphi )]_t = 2t\Vert A^{\theta /2} \varphi \Vert _{L^2_0(\mathbb{T })}^2\);

  iii)

    the reversed processes \(\hat{u}_t = u_{T-t}\), \(\hat{\mathcal{A }}_t = -\mathcal{A }_{T-t}\) satisfy the same equation with respect to their own filtration (the backward filtration of \(u\)).

For controlled processes we will prove that, if \(\theta >1/2\), the Burgers drift is well defined by approximating it and passing to the limit. Let \(\rho :\mathbb{R }\rightarrow \mathbb{R }\) be a positive smooth test function with unit integral and set \(\rho ^{\varepsilon }(\xi )=\rho (\xi /{\varepsilon })/{\varepsilon }\) for all \({\varepsilon }>0\). For simplicity in the proofs we require that the function \(\rho \) has a Fourier transform \(\hat{\rho }\) supported in some ball and such that \(\hat{\rho }= 1\) in a smaller ball. This is a technical condition which is easy to remove, but we refrain from doing so here so as not to obscure the main line of the arguments.
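In Fourier variables, convolution with \(\rho ^{\varepsilon }\) is just multiplication by \(\hat{\rho }({\varepsilon }k)\): low modes \(|k|\lesssim 1/{\varepsilon }\) are untouched and high modes are killed. A sketch with our own choice of cutoff (the paper only requires \(\hat{\rho }\) compactly supported and \(\hat{\rho }=1\) near the origin):

```python
import math

def rho_hat(r):
    """A smooth even cutoff equal to 1 on [-1,1] and 0 outside [-2,2] (our choice)."""
    r = abs(r)
    if r <= 1.0:
        return 1.0
    if r >= 2.0:
        return 0.0
    # standard smooth transition built from bump-type exponentials
    f = lambda u: math.exp(-1.0 / u) if u > 0 else 0.0
    s = r - 1.0
    return f(1.0 - s) / (f(1.0 - s) + f(s))

def mollify_mode(x_k, k, eps):
    """k-th Fourier mode of rho^eps * u: convolution acts as a Fourier multiplier."""
    return rho_hat(eps * k) * x_k

eps = 0.01
assert mollify_mode(1.0, 50, eps) == 1.0    # |eps*k| = 0.5 <= 1: mode untouched
assert mollify_mode(1.0, 300, eps) == 0.0   # |eps*k| = 3 >= 2: mode killed
```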

Lemma 1

If \(u\in \mathcal R _\theta \) and if \(\theta >1/2\) then almost surely

$$\begin{aligned} \lim _{{\varepsilon }\rightarrow 0} \int \limits _0^t F(\rho ^{\varepsilon }* u_s) \mathrm{d }s \end{aligned}$$

exists in the space \(C([0,T],\mathcal{F }L^{\infty ,\zeta })\) for some \(\zeta <0\). We denote by \(\int _0^t F( u_s) \mathrm{d }s\) the resulting process with values in \(C([0,T],\mathcal{F }L^{\infty ,\zeta })\).

Proof

The proof is postponed to Sect. 3. \(\square \)

It will turn out that for this process we have good control of its space and time regularity, together with some exponential moment estimates. It is then relatively natural to define solutions of Eq. (4) by the following self-consistency condition.

Definition 2

(Controlled solution) Let \(\theta >1/2\), then a process \(u\in \mathcal R _\theta \) is a controlled solution of SBE\(_\theta \) if almost surely

$$\begin{aligned} \mathcal{A }_t(\varphi ) = \langle \varphi , \int \limits _0^t F(u_s) \mathrm{d }s \rangle \end{aligned}$$
(8)

for any test function \(\varphi \in \mathcal S \) and any \(t\in [0,T]\).

Note that these controlled solutions are a generalization of the notion of probabilistically weak solutions of SBE\(_\theta \). The key point is that the drift term is not given explicitly as a function of the solution itself but characterized by the self-consistency relation (8). In this sense controlled solutions are to be understood as a couple \((u,\mathcal A )\) of processes satisfying compatibility relations.

An analogy which could be familiar to the reader is that with a diffusion on a bounded domain with reflected boundary where the solution is described by a couple of processes \((X,L)\) representing the position of the diffusing particle and its local time at the boundary [22].

Note also that there is no requirement on \(\mathcal A \) to be adapted to \(u\). Our analysis below cannot exclude the possibility that \(\mathcal A \) contains some further randomness and that the solutions are strictly weak, that is, not adapted to the filtration generated by the martingale term and the initial condition.

2 The Itô trick

In order to prove the regularization properties of controlled processes we will need some stochastic calculus, in particular an Itô formula and some estimates for martingales. Let us recall some basic elements here. In this section \(u\) will always be a controlled process in \(\mathcal R _\theta \). For any test function \(\varphi \in \mathcal S \) the processes \((u_t(\varphi ))_{t}\) and \((\hat{u}_t(\varphi ))_{t}\) are Dirichlet processes: sums of a martingale and a zero quadratic variation process. Note that we do not want to assume controlled processes to be semimartingales (even when tested with smooth functions). This is compatible with the regularity of our solutions, and there is no indication that solutions of SBE\(_\theta \), even with \(\theta =1\), are distributional semimartingales. A suitable notion of stochastic calculus which is valid for a large class of processes, and in particular for Dirichlet processes, is the stochastic calculus via regularization developed by Russo and Vallois [24]. In this approach the Itô formula can be extended to Dirichlet processes. In particular if \((X^i)_{i=1,\ldots ,k}\) is an \(\mathbb{R }^k\)-valued Dirichlet process and \(g\) is a \(C^2(\mathbb{R }^k;\mathbb{R })\) function then

$$\begin{aligned} g(X_t) = g(X_0) + \sum _{i=1}^k\int \limits _0^t \partial _i g(X_s) \mathrm{d }^- X^i_s + \frac{1}{2} \sum _{i,j=1}^k \int \limits _0^t \partial ^2_{i,j} g(X_s) \mathrm{d }^- [X^i,X^j]_s \end{aligned}$$

where \(\mathrm{d }^-\) denotes the forward integral and \([X,X]\) the quadratic covariation of the vector process \(X\). Decomposing \(X=M+N\) as the sum of a martingale \(M\) and a zero quadratic variation process \(N\) we have \([X,X]=[M,M]\) and

$$\begin{aligned} \begin{aligned} g(X_t)&=g(X_0) + \sum _{i=1}^k \int \limits _0^t \partial _i g(X_s) \mathrm{d }^- M^i_s + \sum _{i=1}^k \int \limits _0^t \partial _i g(X_s) \mathrm{d }^- N^i_s\\&\quad + \sum _{i,j=1}^k\frac{1}{2} \int \limits _0^t \partial ^2_{i,j} g(X_s) \mathrm{d }^- [M^i,M^j]_s \end{aligned} \end{aligned}$$

where now \(\mathrm{d }^- M\) coincides with the usual Itô integral and \([M,M]\) is the usual quadratic variation of the martingale \(M\). The integral \(\int _0^t \partial _i g(X_s) \mathrm{d }^- N^i_s\) is well defined due to the fact that all the other terms in this formula are well defined. The case where the function \(g\) depends explicitly on time can be handled by the above formula by considering time as an additional (0-th) component of the process \(X\) and using the fact that \([X^i,X^0]=0\) for all \(i=1,\ldots ,k\). In the computations which follow we will only need to apply the Itô formula to smooth functions.

Let us denote by \(L_0\) the generator of the Ornstein–Uhlenbeck process associated to the operator \(A^\theta \):

$$\begin{aligned} L_0 \varphi (x) = \sum _{k \in \mathbb Z _0} |k|^{2\theta } \big (- x_k D_k \varphi (x) + \tfrac{1}{2} D_{-k}D_k \varphi (x)\big ). \end{aligned}$$
(9)

Consider now a smooth cylinder function \(h:[0,T]\times \Pi _N H\rightarrow \mathbb{R }\). The Itô formula for the finite quadratic variation process \((u^N_t = \Pi _N u_t)_t\) gives

$$\begin{aligned} h(t,u^N_t)=h(0,u^N_0)+\int \limits _0^t (\partial _s + L^N_0) h(s,u^N_s) \mathrm{d }s +\int \limits _0^t D h(s,u^N_s) \mathrm{d }\Pi _N \mathcal{A }_s + M^+_t \end{aligned}$$

where

$$\begin{aligned} L^N_0 h(s,x) = \sum _{k\in \mathbb{Z }_0 : |k|\le N} |k|^{2\theta } \big (- x_{k} D_k h(s,x) + \tfrac{1}{2} D_k D_{-k} h(s,x)\big ) \end{aligned}$$

is the restriction of the operator \(L_0\) to \(\Pi _N H\) and where the martingale part denoted \(M^+\) has quadratic variation given by \([ M^+ ]_t = \int _0^t \mathcal{E }^\theta _N(h(s,\cdot ))(u^N_s) \mathrm{d }s\), where

$$\begin{aligned} \mathcal{E }_N^\theta (\varphi )(x) = \frac{1}{2} \sum _{k\in \mathbb{Z }_0: |k|\le N}|k|^{2\theta } |D_k \varphi (x)|^2 . \end{aligned}$$

Similarly the Itô formula on the backward process reads

$$\begin{aligned} h(T-t,u^N_{T-t})&= h(T,u^N_T)+ \int \limits _0^{t} (-\partial _s + L^N_0) h(T-s,u^N_{T-s}) \mathrm{d }s\\&\quad - \int \limits _0^{t} D h(T-s,u^N_{T-s}) \mathrm{d }\Pi _N \mathcal{A }_{T-s} + M^-_t \end{aligned}$$

with \( [ M^- ]_t = \int \limits _0^t \mathcal{E }^\theta _N(h(T-s,\cdot ))(u^N_{T-s}) \mathrm{d }s \) so we have the key equality

$$\begin{aligned} \int \limits _0^t 2 L_0^N h(s,u^N_{s})\mathrm{d }s= -M^+_t + M^-_{T-t}-M^-_T. \end{aligned}$$
(10)
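Let us record schematically where (10) comes from. Adding the forward Itô formula on \([0,t]\) to the backward one evaluated between the times \(T-t\) and \(T\), the left hand sides cancel, the \(\partial _s\)-terms cancel because they appear with opposite signs, and the two \(\mathrm{d }\Pi _N \mathcal{A }\)-integrals cancel because the backward formula carries \(\hat{\mathcal{A }} = -\mathcal{A }_{T-\cdot }\); only the \(L^N_0\)-terms add up, leaving

$$\begin{aligned} 0 = \int \limits _0^t 2 L_0^N h(s,u^N_{s})\mathrm{d }s + M^+_t + \big (M^-_T - M^-_{T-t}\big ), \end{aligned}$$

which rearranges into (10).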

Equality (10) represents the time integral of \(L^N_0 h\) as a sum of martingales, which allows better control. To this martingale representation we can apply the Burkholder–Davis–Gundy inequalities to prove the following bound.

Lemma 2

(Itô trick) Let \(h : [0,T]\times \Pi _N H \rightarrow \mathbb{R }\) be a cylinder function. Then for any \(p \ge 1\),

$$\begin{aligned} \left\| \sup _{t\in [0,T]}\left| \int \limits _0^t L_0 h(s,\Pi _N u_s) \mathrm{d }s\right| \right\| _{L^p(\mathbb{P }_\mu )} \lesssim _p T^{1/2} \sup _{s\in [0,T]}\left\| \mathcal{E }^\theta (h(s,\cdot )) \right\| ^{1/2}_{L^{p/2}(\mu )}\qquad \end{aligned}$$
(11)

where \( \mathcal{E }^\theta (\varphi )(x) = \frac{1}{2} \sum _{k\in \mathbb{Z }_0}|k|^{2\theta } |D_k \varphi (x)|^2 \). In the particular case \(h(s,x)= e^{a(T- s)}\tilde{h}(x)\) for some \(a\in \mathbb{R }\) we have the improved estimate

$$\begin{aligned} \begin{aligned} \left\| \int \limits _0^T e^{a(T-s)} L_0 \tilde{h}(\Pi _N u_s) \mathrm{d }s\right\| _{L^p(\mathbb{P }_\mu )} \lesssim _p \left( \frac{e^{2aT}-1}{2a}\right) ^{1/2} \left\| \mathcal{E }^\theta (\tilde{h}) \right\| ^{1/2}_{L^{p/2}(\mu )} .\quad \end{aligned} \end{aligned}$$
(12)

Proof

$$\begin{aligned}&\left\| \sup _{t\in [0,T]}\left| \int \limits _0^t 2 L_0^N h(s,u_s) \mathrm{d }s\right| \right\| _{L^p(\mathbb{P }_\mu )} \!\le \! \left\| \sup _{t\in [0,T]}|M^+_t| \right\| _{L^p(\mathbb{P }_\mu )}\!+\! 2 \left\| \sup _{t\in [0,T]}|M^-_t| \right\| _{L^p(\mathbb{P }_\mu )}\\&\quad \lesssim _p \left\| \langle M^+\rangle _T \right\| _{L^{p/2}(\mathbb{P }_\mu )}^{1/2}+\left\| \langle M^-\rangle _T \right\| _{L^{p/2}(\mathbb{P }_\mu )}^{1/2} \lesssim _p \left\| \int \limits _0^T \mathcal{E }^\theta (h(s,\cdot ))(u_s) \mathrm{d }s \right\| _{L^{p/2}(\mathbb{P }_\mu )}^{1/2}\\&\quad \lesssim _p \left( \int \limits _0^T\left\| \mathcal{E }^\theta (h(s,\cdot ))(u_s) \right\| _{L^{p/2}(\mathbb{P }_\mu )} \mathrm{d }s\right) ^{1/2} \lesssim _p T^{1/2} \sup _{s\in [0,T]} \left\| \mathcal{E }^\theta (h(s,\cdot )) \right\| _{L^{p/2}(\mu )}^{1/2} .\quad \end{aligned}$$

For the convolution bound (12) we estimate as follows:

$$\begin{aligned} \begin{aligned} \left\| \int \limits _0^T e^{a(T-s)} 2 L_0^N \tilde{h}(u_s) \mathrm{d }s\right\| _{L^p(\mathbb{P }_\mu )}&\lesssim _p \left( \int \limits _0^T e^{2a(T-s)}\mathrm{d }s\right) ^{1/2} \left\| \mathcal{E }^\theta (\tilde{h})(u_0) \right\| ^{1/2}_{L^{p/2}(\mathbb{P }_\mu )} \\&\lesssim _p \left( \frac{e^{2aT}-1}{2a}\right) ^{1/2} \left\| \mathcal{E }^\theta (\tilde{h}) \right\| ^{1/2}_{L^{p/2}(\mu )} \end{aligned} \end{aligned}$$

\(\square \)

The bound (11) in the present form (with the use of the backward martingale to remove the drift part) was inspired by [8, Lemma 4.4].

Lemma 3

(Exponential integrability) Let \(h : [0,T]\times \Pi _N H \rightarrow \mathbb{R }\) be a cylinder function. Then

$$\begin{aligned} \mathbb{E }\sup _{t\in [0,T]}e^{2 \int _0^t L_0^N h(s,\Pi _N u_s) \mathrm{d }s} \lesssim \mathbb{E }e^{8 \int _0^T \mathcal{E }^\theta (h(s,\cdot ))(u_s) \mathrm{d }s } \end{aligned}$$
(13)

Proof

Let \(M^\pm \) be, as above, the (Brownian) martingales in the representation of the integral \(\int _0^t L_0^N h(s,\Pi _N u_s) \mathrm{d }s\). By the Cauchy–Schwarz inequality,

$$\begin{aligned} \mathbb{E }\sup _{t\in [0,T]}e^{2 \int _0^t L_0^N h(s,\Pi _N u_s) \mathrm{d }s} \le \left[ \mathbb{E }\sup _{t\in [0,T]}e^{2 M^+_t}\right] ^{1/2} \left[ \mathbb{E }\sup _{t\in [0,T]}e^{2 (M^-_T-M^-_{T-t})}\right] ^{1/2}. \end{aligned}$$

By Novikov’s criterion \( e^{4 M^+_t - 8 \langle M^+\rangle _t } \) is a martingale for \(t\in [0,T]\) if \(\mathbb{E }e^{8 \langle M^+\rangle _T} < \infty \). In this case

$$\begin{aligned} \mathbb{E }\sup _{t\in [0,T]}e^{2 M^+_t}&\le \mathbb{E }\sup _{t\in [0,T]}(e^{2 M^+_t- 4 \langle M^+\rangle _t}\sup _{t\in [0,T]} e^{ 4 \langle M^+\rangle _t })\\&\le \left[ \mathbb{E }\sup _{t\in [0,T]} e^{4 M^+_t- 8 \langle M^+\rangle _t}\right] ^{1/2} \left[ \mathbb{E }e^{8 \langle M^+\rangle _T }\right] ^{1/2} \end{aligned}$$

and by Doob’s inequality we get that the previous expression is bounded by

$$\begin{aligned} \left[ \mathbb{E }e^{4 M^+_T- 8 \langle M^+\rangle _T}\right] ^{1/2} \left[ \mathbb{E }e^{8 \langle M^+\rangle _T }\right] ^{1/2} \le \left[ \mathbb{E }e^{8 \langle M^+\rangle _T }\right] ^{1/2}. \end{aligned}$$

Reasoning similarly for \(M^-\) we obtain that

$$\begin{aligned} \mathbb{E }\sup _{t\in [0,T]}e^{2 \int _0^t L_0^N h(s,\Pi _N u_s) \mathrm{d }s} \lesssim \mathbb{E }e^{8 \langle M^+\rangle _T } = \mathbb{E }e^{8 \int _0^T \mathcal{E }^\theta (h(s,\cdot ))(u_s) \mathrm{d }s }. \end{aligned}$$

\(\square \)

3 Estimates on the Burgers drift

In this section we provide the key estimates on the Burgers drift via the quadratic variations of the forward and backward martingales in its decomposition. Let \(F(x)(\xi ) = B (x(\xi ))^2\) and \(F_N(x) = F(\Pi _N x)\). Define

$$\begin{aligned} H_N(x) = -\int \limits _0^\infty F_N(e^{-A^\theta t}x)\mathrm{d }t \end{aligned}$$

and consider \(L_0 H_N(x)\), with \(L_0\) acting on each Fourier coordinate of \(H_N(x)\). Remark that the second order part of \(L_0\) does not contribute to \(L_0 H_N\) since

$$\begin{aligned} D_k D_{-k} F(\Pi _N e^{-A^\theta t} x)=0 \end{aligned}$$

for each \(k\in \mathbb{Z }_0\). Indeed

$$\begin{aligned}&D_{-k} D_k F(\Pi _N e^{-A^\theta t} x) \!=\! B [D_{-k} D_k (\Pi _N e^{-A^\theta t} x)^2]\!=\!2 B D_{-k} [(\Pi _N e^{-A^\theta t} x) (\Pi _N e^{-A^\theta t} e_k)]\\&\quad =2 [B(\Pi _N e^{-A^\theta t} e_{-k}) (\Pi _N e^{-A^\theta t} e_k)+(\Pi _N e^{-A^\theta t} e_{-k}) B(\Pi _N e^{-A^\theta t} e_k)] = 0 \end{aligned}$$

Then it is easy to check that

$$\begin{aligned}&L_0 H_N(\Pi _N x) = -\langle A^\theta x, D H_N(\Pi _N x)\rangle = 2 \int \limits _0^\infty B [(e^{-A^\theta t}\Pi _N x)(A^\theta e^{-A^\theta t} \Pi _N x) ]\mathrm{d }t\\&\quad = -\int \limits _0^\infty \frac{\mathrm{d }}{\mathrm{d }t}B [(e^{-A^\theta t}\Pi _N x)^2 ]\mathrm{d }t = B (\Pi _N x)^2=F(\Pi _N x) \end{aligned}$$

since \(\lim _{t\rightarrow \infty } B [(e^{-A^\theta t}\Pi _N x)^2 ] = 0\). Denote by \((x_k)_{k\in \mathbb{Z }_0}\) and \((H_N(x)_k)_{k\in \mathbb{Z }_0}\) the coordinates of \(x=\sum _{k\in \mathbb{Z }_0} x_k e_k\) and \(H_N(x)=\sum _{k\in \mathbb{Z }_0} H_N(x)_k e_k\) in the canonical basis \((e_k)_{k\in \mathbb{Z }_0}\). Then a direct computation gives an explicit formula for \(H_N(x)\):

$$\begin{aligned} (H_{N}(x))_k = 2 ik \sum _{k_1,k_2 : k=k_1+k_2} \frac{\mathbb{I }_{|k|,|k_1|,|k_2|\le N}}{|k_1|^{2\theta }+|k_2|^{2\theta }} x_{k_1} x_{k_2}. \end{aligned}$$

Let us denote by \((H_{N}(x))_k^{\pm }\) respectively the real and imaginary parts of this quantity: \((H_{N}(x))_k^{\pm }= ((H_{N}(x))_k\pm (H_{N}(x))_{-k})/(2 i^{\pm })\) where \(i^+=1\) and \(i^-=i\). Now

$$\begin{aligned} (H_{N}(x))^\pm _k = i^{\mp }k \sum _{k_1,k_2 : k=k_1+k_2} \frac{\mathbb{I }_{|k|,|k_1|,|k_2|\le N}}{|k_1|^{2\theta }+|k_2|^{2\theta }} (x_{k_1} x_{k_2}\mp x_{-k_1} x_{-k_2}) \end{aligned}$$

and recall that \( \mathcal{E }^\theta ((H_N)^\pm _k)(x) = \frac{1}{2}\sum _{q\in \mathbb{Z }_0} |q|^{2\theta } |D_q (H_{N})^\pm _k(x)|^2 \).

Lemma 4

For \(\lambda >0\) small enough we have

$$\begin{aligned} \sup _{k\in \mathbb{Z }_0} \mathbb{E }\exp \left[ \lambda |k|^{2\theta -3} \mathcal{E }^\theta ((H_N)^\pm _k)(u_0)\right] \lesssim 1 \end{aligned}$$
(14)

and

$$\begin{aligned} \sup _{1\le M \le N}\sup _{k\in \mathbb{Z }_0} \mathbb{E }\exp \left[ \lambda |k|^{-2} M^{2\theta -1} \mathcal{E }^\theta ((H_N-H_M)^\pm _k)(u_0)\right] \lesssim 1. \end{aligned}$$
(15)

Proof

We start by computing \(\mathcal{E }^\theta ((H_N)^\pm _k)\): noting that

$$\begin{aligned} D_q (H_{N})^\pm _k(x)= i^\mp k \left[ \frac{\mathbb{I }_{|k|,|q|,|k-q|\le N}}{|q|^{2\theta }+|k-q|^{2\theta }} x_{k-q}\mp \frac{\mathbb{I }_{|k|,|q|,|k+q|\le N}}{|q|^{2\theta }+|k+q|^{2\theta }} x_{-k-q}\right] \end{aligned}$$

we have

$$\begin{aligned} \mathcal{E }^\theta ((H_N)^\pm _k)(x)&= \sum _{q\in \mathbb{Z }_0} |k|^2 |q|^{2\theta }\left[ 2 \frac{\mathbb{I }_{|k|,|q|,|k-q|\le N}}{(|q|^{2\theta }+|k-q|^{2\theta })^2} |x_{k-q}|^2 \right. \\&\quad \left. \mp \frac{\mathbb{I }_{|k|,|q|,|k-q|\le N}}{|q|^{2\theta }+|k-q|^{2\theta }} \frac{\mathbb{I }_{|k|,|q|,|k+q|\le N}}{|q|^{2\theta }+|k+q|^{2\theta }} (x_{k-q} x_{k+q}+x_{-k+q} x_{-k-q}) \right] \end{aligned}$$

which gives the bound

$$\begin{aligned}&\mathcal{E }^\theta ((H_N)^\pm _k)(x)\lesssim |k|^2 \sum _{\begin{array}{c} k_1,k_2 :k_1+k_2=k\\ |k|,|k_1|,|k_2|\le N \end{array}} \frac{|k_1|^{2\theta }\mathbb{I }_{|k|,|k_1|,|k_2|\le N}}{(|k_1|^{2\theta }+|k_2|^{2\theta })^2}|x_{k_2}|^2 \\&\quad \lesssim |k|^2 \sum _{\begin{array}{c} k_1,k_2 :k_1+k_2=k\\ |k|,|k_1|,|k_2|\le N \end{array}} \frac{\mathbb{I }_{|k|,|k_1|,|k_2|\le N}}{|k_1|^{2\theta }+|k_2|^{2\theta }} |x_{k_2}|^2 = \sum _{\begin{array}{c} k_1,k_2 :k_1+k_2=k\\ |k|,|k_1|,|k_2|\le N \end{array}}c(k,k_1,k_2) |x_{k_2}|^2 = h_N(x) \end{aligned}$$

where \(c(k,k_1,k_2) = |k|^2/(|k_1|^{2\theta }+|k_2|^{2\theta })\). Let

$$\begin{aligned} I_N(k) = \sum _{\begin{array}{c} k_1,k_2 :k_1+k_2=k\\ |k|,|k_1|,|k_2|\le N \end{array}} c(k,k_1,k_2) \end{aligned}$$

and note that the sum in \(I_N(k)\) can be bounded by the equivalent integral giving (uniformly in \(N\))

$$\begin{aligned} I_N(k) \lesssim |k|^{2} \int \limits _{\mathbb{R }} \frac{\mathrm{d }q}{|q|^{2\theta }+|k-q|^{2\theta }} = |k|^{3-2\theta } \int \limits _{\mathbb{R }} \frac{\mathrm{d }q}{|q|^{2\theta }+|1-q|^{2\theta }} \lesssim |k|^{3-2\theta } \end{aligned}$$

since the last integral is finite for \(\theta > 1/2\). Then
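The scaling \(I_N(k)\lesssim |k|^{3-2\theta }\) is easy to confirm numerically (a sanity check with our own truncation parameters, not part of the proof): for \(\theta =1\) the comparison with the integral suggests \(I_N(k)\approx \pi |k|\), so doubling \(k\) should roughly double \(I_N(k)\).

```python
import numpy as np

def I(k, N, theta=1.0):
    """I_N(k): |k|^2 times the sum of 1/(|k1|^2th + |k2|^2th) over k1 + k2 = k,
    k1, k2 in Z_0, |k1|, |k2| <= N."""
    k1 = np.arange(-N, N + 1)
    k2 = k - k1
    mask = (k1 != 0) & (k2 != 0) & (np.abs(k2) <= N)
    return k ** 2 * np.sum(1.0 / (np.abs(k1[mask]) ** (2 * theta)
                                  + np.abs(k2[mask]) ** (2 * theta)))

N = 4000
ratio = I(32, N) / I(16, N)     # expected close to 2^(3 - 2*theta) = 2 for theta = 1
assert 1.8 < ratio < 2.2
```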

$$\begin{aligned}&\mathbb{E }e^{\lambda |k|^{2\theta -3}\mathcal{E }^\theta ((H_N)^\pm _k)(u_0)} \le \mathbb{E }e^{\lambda C|k|^{2\theta -3} h_N(u_0)}\\&\quad \le \! \sum _{\begin{array}{c} k_1,k_2 :k_1+k_2=k\\ |k|,|k_1|,|k_2|\le N \end{array}}\!\!\!\!\! c(k,k_1,k_2) \mathbb{E }\frac{e^{\lambda C |k|^{2\theta -3} I_N(k) |(u_0)_{k_2}|^2}}{I_N(k)} \!\le \! \sum _{\begin{array}{c} k_1,k_2 :k_1+k_2=k \\ |k|,|k_1|,|k_2|\le N \end{array}}\!\!\!\!\! c(k,k_1,k_2) \mathbb{E }\frac{e^{\lambda C^{\prime }|(u_0)_{k_2}|^2}}{I_N(k)} \end{aligned}$$

where we used the previous bound to say that \(C|k|^{2\theta -3} I_N(k) \le C^{\prime }\) uniformly in \(k\). Recall that \((u_0)_k\) has a (complex) Gaussian distribution with mean zero and unit variance. Therefore for \(\lambda \) small enough \(\mathbb{E }e^{\lambda C^{\prime }|(u_0)_{k_2}|^2}\lesssim 1\) uniformly in \(k_2\), so that

$$\begin{aligned} \mathbb{E }e^{\lambda |k|^{2\theta -3}\mathcal{E }^\theta ((H_N)^\pm _k)(u_0)} \lesssim 1. \end{aligned}$$
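Behind this uniformity is the elementary Gaussian exponential moment, which we record for convenience (here we use that, under our normalization of \(\mu \), the real and imaginary parts of \((u_0)_{k_2}\) are independent centered Gaussians of variance \(1/2\)): for a real centered Gaussian \(g\) with variance \(\sigma ^2\),

$$\begin{aligned} \mathbb{E }\, e^{\lambda g^2} = (1-2\lambda \sigma ^2)^{-1/2} \text{ for } \lambda < \frac{1}{2\sigma ^2}, \qquad \text{ so that }\quad \mathbb{E }\, e^{\lambda |(u_0)_{k_2}|^2} = (1-\lambda )^{-1} \text{ for } \lambda <1 , \end{aligned}$$

uniformly in \(k_2\).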

This establishes the claimed exponential bound for \(\mathcal{E }^\theta ((H_N(x))_k^\pm )\). Similarly we have

$$\begin{aligned} \mathcal{E }^\theta ((H_N\!-\!H_M)^\pm _k)(x)\!\lesssim \! \sum _{k_1,k_2 :k_1+k_2\!=\!k} (\mathbb{I }_{|k|,|k_1|,|k_2|\le N}\!-\!\mathbb{I }_{|k|,|k_1|,|k_2|\le M})^2 c(k,k_1,k_2) | x_{k_2}|^2 . \end{aligned}$$

Let

$$\begin{aligned} I_{N,M}(k) =\sum _{k_1,k_2 :k_1+k_2=k}(\mathbb{I }_{|k|,|k_1|,|k_2|\le N}-\mathbb{I }_{|k|,|k_1|,|k_2|\le M})^2 c(k,k_1,k_2) \end{aligned}$$

and note that, for \(N\ge M\),

$$\begin{aligned} (\mathbb{I }_{|k|,|k_1|,|k_2|\le N}-\mathbb{I }_{|k|,|k_1|,|k_2|\le M}) \lesssim \mathbb{I }_{|k|,|k_1|,|k_2|\le N}(\mathbb{I }_{|k|> M}+\mathbb{I }_{|k_1|> M}+\mathbb{I }_{|k_2|> M}). \end{aligned}$$

Then, by estimating the sums with the corresponding integrals, after easy simplifications we are left with the following bound

$$\begin{aligned} I_{N,M}(k)\lesssim |k|^{2}\mathbb{I }_{|k|> M}\int \limits _{\mathbb{R }}\frac{\mathrm{d }q}{|q|^{2\theta }+|k-q|^{2\theta }} + |k|^{2}\int \limits _{\mathbb{R }}\frac{\mathbb{I }_{|q|> M}\mathrm{d }q}{|q|^{2\theta }+|k-q|^{2\theta }} \end{aligned}$$

The first integral in the r.h.s. is easily handled by

$$\begin{aligned} |k|^{2} \mathbb{I }_{|k|> M}\int \limits _{\mathbb{R }} \frac{\mathrm{d }q}{|q|^{2\theta }+|k-q|^{2\theta }}\lesssim |k|^{3-2\theta } \mathbb{I }_{|k|> M} \lesssim |k|^2 M^{1-2\theta } \end{aligned}$$

since \(\theta > 1/2\). For the second we have the analogous bound

$$\begin{aligned} |k|^{2} \int \limits _{\mathbb{R }}\frac{\mathbb{I }_{|q|> M}\mathrm{d }q}{|q|^{2\theta }+|k-q|^{2\theta }}\lesssim |k|^{2} \int \limits _{\mathbb{R }}\frac{\mathbb{I }_{|q|> M}\mathrm{d }q}{|q|^{2\theta }}\lesssim |k|^2 M^{1-2\theta } \end{aligned}$$

which concludes the proof. \(\square \)

Using Lemma 2 and the estimates contained in Lemma 4 we are led to the next set of more refined estimates for the drift and its small-scale contributions.

Lemma 5

Let \( G^M_t = \int _0^t F_{M}(u_s) \mathrm{d }s \). For any \(M\le N\) we have

$$\begin{aligned} \Vert \sup _{t\in [0,T]}\left| (G^M_t)_k\right| \Vert _{L^p(\mathbb{P }_\mu )}\lesssim _p |k| M T , \end{aligned}$$
(16)
$$\begin{aligned} \Vert \sup _{t\in [0,T]}\left| (G^M_t)_k\right| \Vert _{L^p(\mathbb{P }_\mu )}\lesssim _p |k|^{3/2-\theta } T^{1/2} , \end{aligned}$$
(17)
$$\begin{aligned} \Vert \sup _{t\in [0,T]}\left| (G^M_t)_k-(G^N_t)_k\right| \Vert _{L^p(\mathbb{P }_\mu )}\lesssim _p |k| T^{1/2} M^{1/2-\theta } , \end{aligned}$$
(18)
$$\begin{aligned} \sup _{M\ge 0}\Vert \sup _{t\in [0,T]}\left| (G^M_t)_k\right| \Vert _{L^p(\mathbb{P }_\mu )}\lesssim _p |k| T^{2\theta /(1+2\theta )} . \end{aligned}$$
(19)

Proof

The Gaussian measure \(\mu \) satisfies the hypercontractivity estimate (see for example [19]): for any complex-valued finite order polynomial \(P(x)\in \mathcal C yl\) we have

$$\begin{aligned} \left\| P(x) \right\| _{L^p(\mu )}\lesssim _p \left\| P(x) \right\| _{L^2(\mu )}. \end{aligned}$$
(20)

Then we have \((F_M(x))_k = ik \sum _{k_1+k_2=k} \mathbb{I }_{|k_1|,|k_2|\le M} x_{k_1} x_{k_2}\) and for all \(k\ne 0\)

$$\begin{aligned} \int |(F_M(x))_k|^2 \mu (\mathrm{d }x)&= |k|^2 \sum _{k_1+k_2=k} \sum _{k^{\prime }_1+k^{\prime }_2=k} \mathbb{I }_{|k_1|,|k_2|,|k^{\prime }_1|,|k^{\prime }_2|\le M}\int x_{k_1} x_{k_2} x_{k^{\prime }_1}^* x_{k^{\prime }_2}^* \mu (\mathrm{d }x)\\&\lesssim |k|^2 M^2 . \end{aligned}$$

This allows us to obtain the bound (16). Indeed

$$\begin{aligned}&\Vert \sup _{t\in [0,T]}\left| (G^M_t)_k \right| \Vert _{L^p(\mathbb{P }_\mu )} \lesssim \int \limits _0^T \left\| (F_{M}(u_s))_k \right\| _{L^p(\mathbb{P }_\mu )}\mathrm{d }s\\&\quad \lesssim T \left\| (F_{M}(\cdot ))_k \right\| _{L^p(\mu )} \lesssim _p T \left\| (F_{M}(\cdot ))_k \right\| _{L^2(\mu )} \lesssim _p |k| M T. \end{aligned}$$

For the bound (17) we use the fact that \(L_0 H_M = F_M\) and Lemma 2 to get

$$\begin{aligned} \Vert \sup _{t\in [0,T]} \left| (G^M_t)_k\right| \Vert _{L^p(\mathbb{P }_\mu )}\lesssim _p T^{1/2} \sup _{t\in [0,T]} \Vert \mathcal{E }^\theta (H_M(\cdot )) \Vert _{L^{p/2}(\mu )}^{1/2} \lesssim |k|^{3/2-\theta } T^{1/2} \end{aligned}$$

where we used the first energy estimate (14) of Lemma 4 and the fact that \( \Vert Q\Vert _{L^p(\mu )}^p \lesssim _p \int [e^{Q(x)^+}+e^{Q(x)^-}] \mu (\mathrm{d }x) \) where again \(Q^\pm \) are the real and imaginary parts of \(Q\). The bound (18) is obtained in the same way using the second energy estimate (15). Finally the last bound (19) is obtained from the previous two by taking \(0\le N\le M\), decomposing \(F_M(x) = F_N(x)-F_{N,M}(x)\):

$$\begin{aligned} \Vert \sup _{t\in [0,T]} \left| (G^M_t)_k\right| \Vert _{L^p(\mathbb{P }_\mu )} \!&\le \! \Vert \sup _{t\in [0,T]} \left| (G^N_t)_k\right| \Vert _{L^p(\mathbb{P }_\mu )}\!+\!\Vert \sup _{t\in [0,T]} \left| (G^M_t)_k-(G^N_t)_k\right| \Vert _{L^p(\mathbb{P }_\mu )}\\&\lesssim _p |k| ( N T+ N^{1/2-\theta } T^{1/2}) \end{aligned}$$

and making the optimal choice \(N \sim T^{-1/(1+2\theta )}\). \(\square \)
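The optimization step can be illustrated numerically: minimizing \(N T + N^{1/2-\theta } T^{1/2}\) over \(N\) does reproduce the rate \(T^{2\theta /(1+2\theta )}\). A minimal sketch (the values of \(\theta \) and \(T\) are arbitrary illustrations, not from the text):

```python
# Minimize f(N) = N*T + N^(1/2-θ)*sqrt(T) over integer N and compare the
# minimum with the rate T^(2θ/(1+2θ)) attained near N ~ T^(-1/(1+2θ)).
theta = 0.8
ratios = []
for T in (1e-2, 1e-4, 1e-6):
    best = min(N * T + N ** (0.5 - theta) * T ** 0.5 for N in range(1, 100_000))
    ratios.append(best / T ** (2 * theta / (1 + 2 * theta)))
print(ratios)  # bounded ratios: the optimized bound matches the claimed rate
```

The ratio of the optimized value to \(T^{2\theta /(1+2\theta )}\) stays bounded as \(T\rightarrow 0\), as the lemma asserts.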

Analogous estimates also hold for the functions obtained by convolution with the semigroup \(e^{-A^\theta t}\).

Lemma 6

Let

$$\begin{aligned} \tilde{G}^M_t = \int \limits _0^t e^{-A^\theta (t-s)} F_{M}(u_s) \mathrm{d }s \end{aligned}$$

then for any \(M\le N\) we have

$$\begin{aligned}&\Vert (\tilde{G}^M_t)_k\Vert _{L^p(\mathbb{P }_\mu )}\lesssim _p |k| M \left( \frac{1-e^{-2|k|^{2\theta } t}}{2|k|^{2\theta }}\right) \end{aligned}$$
(21)
$$\begin{aligned}&\Vert (\tilde{G}^M_t)_k\Vert _{L^p(\mathbb{P }_\mu )}\lesssim _p |k|^{3/2-\theta } \left( \frac{1-e^{-2|k|^{2\theta } t}}{2|k|^{2\theta }}\right) ^{1/2} \end{aligned}$$
(22)
$$\begin{aligned}&\Vert (\tilde{G}^M_t)_k-(\tilde{G}^N_t)_k\Vert _{L^p(\mathbb{P }_\mu )}\lesssim _p |k| M^{1/2-\theta } \left( \frac{1-e^{-2|k|^{2\theta } t}}{2|k|^{2\theta }}\right) ^{1/2} \end{aligned}$$
(23)

Proof

The proof follows the line of Lemma 5 using Eq. (12) instead of Eq. (11). \(\square \)
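The factor \((1-e^{-2|k|^{2\theta } t})/(2|k|^{2\theta })\) appearing in (21)–(23) is the Itô-isometry variance of a single convolved mode \(\int _0^t e^{-\lambda (t-s)}\mathrm{d }\beta _s\) with \(\lambda =|k|^{2\theta }\). A quick Monte Carlo sketch of this identity (the parameters are illustrative, not from the text):

```python
import math, random

random.seed(1)
lam, t = 3.0, 1.0              # lam stands in for |k|^{2θ}
n_steps, n_samples = 200, 5000
dt = t / n_steps

# Estimate Var( int_0^t e^{-lam(t-s)} d beta_s ) by simulating the Ito integral.
acc = 0.0
for _ in range(n_samples):
    stoch_conv = sum(math.exp(-lam * (t - (i + 0.5) * dt)) * random.gauss(0.0, math.sqrt(dt))
                     for i in range(n_steps))
    acc += stoch_conv ** 2
mc_var = acc / n_samples
exact_var = (1 - math.exp(-2 * lam * t)) / (2 * lam)  # the factor in (21)-(23)
print(mc_var, exact_var)
```

The Monte Carlo estimate agrees with the closed-form variance up to sampling error.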

Corollary 1

For all sufficiently small \({\varepsilon }> 0\)

$$\begin{aligned} \sup _{N\ge 0}\Vert (\tilde{G}^N_t)_k - (\tilde{G}^N_s)_k\Vert _{L^p(\mathbb{P }_\mu )} \lesssim _{p} |k|^{3/2-2\theta +2{\varepsilon }\theta } (t-s)^{\varepsilon }\end{aligned}$$
(24)

Proof

To control the time regularity of the drift convolution we consider \(0\le s \le t\) and decompose

$$\begin{aligned}&\Vert (\tilde{G}^N_t)_k - (\tilde{G}^N_s)_k\Vert _{L^p(\mathbb{P }_\mu )}\\&\quad \le \Vert \int \limits _s^t (e^{-A^\theta (t-r)} F_{N}(u_r))_k \mathrm{d }r\Vert _{L^p(\mathbb{P }_\mu )} + (1-e^{-|k|^{2\theta }(t-s)})\Vert (\tilde{G}^N_s)_k\Vert _{L^p(\mathbb{P }_\mu )}\\&\quad \quad \lesssim |k|^{3/2-\theta } (t-s)^{1/2} +|k|^{3/2-2\theta }(1-e^{-|k|^{2\theta }(t-s)}) \lesssim |k|^{3/2-\theta } (t-s)^{1/2} \end{aligned}$$

Moreover a direct consequence of Eq. (22) is

$$\begin{aligned} \sup _{t\in [0,T]} \Vert (\tilde{G}^N_t)_k \Vert _{L^p(\mathbb{P }_\mu )}\lesssim _p |k|^{3/2-2\theta } \end{aligned}$$

which gives us a uniform estimate of the form

$$\begin{aligned} \Vert (\tilde{G}^N_t)_k - (\tilde{G}^N_s)_k\Vert _{L^p(\mathbb{P }_\mu )} \le \Vert (\tilde{G}^N_t)_k\Vert _{L^p(\mathbb{P }_\mu )}+\Vert (\tilde{G}^N_s)_k\Vert _{L^p(\mathbb{P }_\mu )} \lesssim _{p} |k|^{3/2-2\theta } \end{aligned}$$

By interpolation we get the claimed bound. \(\square \)
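The interpolation is between the bound \(|k|^{3/2-\theta }(t-s)^{1/2}\) with weight \(2{\varepsilon }\) and the uniform bound \(|k|^{3/2-2\theta }\) with weight \(1-2{\varepsilon }\). The exponent bookkeeping can be checked mechanically (a throwaway numerical verification, not part of the proof):

```python
import random

# Check: 2e*(3/2-θ) + (1-2e)*(3/2-2θ) = 3/2 - 2θ + 2eθ, while the time
# exponent becomes (1/2)*2e = e, matching the bound (24).
random.seed(0)
max_err = 0.0
for _ in range(1000):
    theta = random.uniform(0.5, 2.0)
    eps = random.uniform(0.0, 0.25)
    lhs = 2 * eps * (1.5 - theta) + (1 - 2 * eps) * (1.5 - 2 * theta)
    rhs = 1.5 - 2 * theta + 2 * eps * theta
    max_err = max(max_err, abs(lhs - rhs))
print(max_err)  # numerically zero: the exponents match identically
```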

Remark 1

All these \(L^p\) estimates can be replaced with equivalent exponential estimates. For example it is not difficult to prove that for small \(\lambda \) we have

$$\begin{aligned} \sup _{t\in [0,T]}\sup _{k\in \mathbb{Z }_0}\mathbb{E }\exp \left( \lambda |k|^{2\theta -3/2}(\tilde{G}^N_t)^\pm _k \right) \lesssim 1 \end{aligned}$$

where \((\cdot )^\pm \) denote, as before, the real and imaginary parts, respectively.

At this point we are in position to prove Lemma 1 on the existence of the Burgers’ drift for controlled processes.

Proof

(of Lemma 1) Let \(\mathcal B ^{\varepsilon }_t = \int _0^t F(\rho ^{\varepsilon }* u_s) \mathrm{d }s\). We start by noting that since \(\hat{\rho }\) has bounded support we have \(\rho ^{\varepsilon }* (\Pi _N u_s) = \rho ^{\varepsilon }* u_s\) for all \(N \ge C/{\varepsilon }\), for some constant \(C\) and \({\varepsilon }\) small enough. Moreover all the computations we made for \(F_N\) remain true for the functions \(F_{{\varepsilon },N}(x) = F(\rho ^{\varepsilon }* \Pi _N x)\), so we have estimates analogous to those in Lemma 5 for \(G^{{\varepsilon },M}_t = \int _0^t F(\rho ^{\varepsilon }* \Pi _M u_s) \mathrm{d }s\). Taking \({\varepsilon }>{\varepsilon }^{\prime }>0\) and \(N\ge C/{\varepsilon }\), \(M\ge C/{\varepsilon }^{\prime }\) and \(M\ge N\) we have

$$\begin{aligned}&\left\| \sup _{t\in [0,T]}\left| (\mathcal B ^{\varepsilon }_t)_k-(\mathcal B ^{{\varepsilon }^{\prime }}_t)_k\right| \right\| _{L^p(\mathbb{P }_\mu )} = \left\| \sup _{t\in [0,T]}\left| (G^{{\varepsilon },N}_t)_k-(G^{{\varepsilon }^{\prime },M}_t)_k\right| \right\| _{L^p(\mathbb{P }_\mu )}\\&\quad \lesssim _p |k| T^{1/2} M^{1/2-\theta } \lesssim _p |k| T^{1/2} ({\varepsilon }^{\prime })^{\theta -1/2} \end{aligned}$$

uniformly in \({\varepsilon },{\varepsilon }^{\prime },N,M\). This easily implies that the sequence of processes \((\mathcal B ^{\varepsilon })_{{\varepsilon }}\) converges almost surely to a limit in \(C(\mathbb{R }_+,\mathcal{F }L^{-1-{\varepsilon },\infty })\) if \(\theta >1/2\). By similar arguments it can be shown that the limit does not depend on the function \(\rho \). \(\square \)
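The support argument at the start of the proof is transparent on the Fourier side: \((\rho ^{\varepsilon }*u)_k = \hat{\rho }({\varepsilon }k)\, u_k\), so when \(\hat{\rho }\) is supported in \([-C,C]\) the mollification kills every mode with \(|k|>C/{\varepsilon }\) and the projection \(\Pi _N\) becomes invisible for \(N\ge C/{\varepsilon }\). A toy check (the tent-shaped \(\hat{\rho }\) below is an arbitrary stand-in with bounded support):

```python
import random

random.seed(2)
C = 1.0
def rho_hat(x):
    # A compactly supported multiplier, vanishing for |x| >= C (illustrative choice).
    return max(0.0, 1.0 - abs(x) / C)

eps, K = 0.1, 100
u = {k: random.gauss(0, 1) for k in range(-K, K + 1)}  # random Fourier data
N = int(C / eps)  # any N >= C/eps works

# Mollification with and without the Galerkin projection Pi_N:
full = {k: rho_hat(eps * k) * u[k] for k in u}
proj = {k: rho_hat(eps * k) * (u[k] if abs(k) <= N else 0.0) for k in u}
diff = max(abs(full[k] - proj[k]) for k in u)
print(diff)  # exactly 0: the projection is invisible under the mollifier
```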

4 Existence of controlled solutions

Fix \(\alpha < -1/2\) and consider the SDE on \(H^\alpha \) given by

$$\begin{aligned} \mathrm{d }u^N_t =- A^\theta u^N_t\mathrm{d }t + F_N(u^N_t)\mathrm{d }t + A^{\theta /2}\mathrm{d }W_t, \end{aligned}$$
(25)

where \(F_N : H\rightarrow H\) is defined by \(F_N(x) = \frac{1}{2} \Pi _N B(\Pi _N x)^2\). Global solutions of this equation starting from any \(u_0^N\in H^\alpha \) can be constructed as follows. Let \((Z_t)_{t\ge 0}\) be the unique OU process on \(H^\alpha \) which satisfies the SDE

$$\begin{aligned} \mathrm{d }Z_t = - A^\theta Z_t \mathrm{d }t + A^{\theta /2} \mathrm{d }W_t. \end{aligned}$$
(26)

with initial condition \(Z_0 = u^N_0\). Let \((v^N_t)_{t\ge 0}\) be the unique solution, taking values in the finite dimensional vector space \(\Pi _N H\), of the following SDE

$$\begin{aligned} \mathrm{d }v^N_t = - A^\theta v^N_t \mathrm{d }t + F_N(v^N_t)\mathrm{d }t + A^{\theta /2}\mathrm{d }\Pi _N W_t, \end{aligned}$$

with initial condition \(v^N_0 = \Pi _N u^N_0\). Note that this SDE has global solutions despite the quadratic nonlinearity. Indeed the vector field \(F_N\) preserves the \(H\) norm:

$$\begin{aligned} \langle v^N_t,F_N(v^N_t)\rangle = \frac{1}{2}\langle v^N_t,B (v^N_t)^2\rangle = \frac{1}{3} \int \limits _\mathbb{T }\partial _\xi (v^N_t(\xi ))^3\,\mathrm{d }\xi = 0 \end{aligned}$$

and by Itô formula we have

$$\begin{aligned} \mathrm{d }\Vert v^N_t\Vert _H^2&= 2 \langle v^N_t,- A^\theta v^N_t \mathrm{d }t + F_N(v^N_t)\mathrm{d }t + A^{\theta /2}\mathrm{d }\Pi _N W_t \rangle + C_N \mathrm{d }t\\&= -2 \Vert A^{\theta /2} v^N_t\Vert ^2_H \mathrm{d }t + 2 \langle v^N_t, A^{\theta /2}\mathrm{d }\Pi _N W_t \rangle + C_N \mathrm{d }t \end{aligned}$$

where \(C_N = [A^{\theta /2}\Pi _N W]_t = \sum _{0<|k|\le N} |k|^{2\theta }\). From this equation we easily obtain that for any initial condition \(v^N_0\) the process \((\Vert v^N_t\Vert _H)_{t\in [0,T]}\) is almost surely finite for any \(T \ge 0\) which implies that the unique solution \((v^N_t)_{t \ge 0}\) can be extended to arbitrary intervals of time. Setting \(u^N_t = v^N_t + (1-\Pi _N)Z_t\) we obtain a global solution of Eq. (25). Moreover the diffusion \((u^N_t)_{t\ge 0}\) has generator

$$\begin{aligned} L_N \varphi (x) = L_0\varphi (x)+ \sum _{k\in \mathbb{Z }_0, |k|\le N} (F_N(x))_k D_k \varphi (x) \end{aligned}$$

where \(L_0\) is the generator of the Ornstein–Uhlenbeck process defined in Eq. (9), which satisfies the integration by parts formula \( \mu [\varphi L_0 \varphi ] = \mu [ \mathcal{E }(\varphi )] \) for \(\varphi \in \mathcal C yl\). This diffusion preserves the Gaussian measure \(\mu \). Indeed if we take \(u_0^N\) distributed according to the white noise \(\mu \) we have that \(((1-\Pi _N)Z_t)_{t \ge 0}\) is independent of \((v^N_t)_{t\ge 0}\). Moreover \(Z_t\) has law \(\mu \) for any \(t\ge 0\), and an easy argument for the finite dimensional diffusion \((v^N_t)_{t\ge 0}\) shows that for any \(t\ge 0\) the random variable \(v^N_t\) is distributed according to \(\mu ^N = (\Pi _N)_* \mu \): the push forward of the measure \(\mu \) with respect to the projection \(\Pi _N\).
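The cancellation \(\langle v^N, F_N(v^N)\rangle =0\), which rules out energy production by the nonlinearity, can also be checked numerically. The sketch below assumes the Fourier-side convention \((F_N(x))_k = \tfrac{ik}{2}\sum _{k_1+k_2=k} x_{k_1}x_{k_2}\) with the symmetric cutoff \(0<|k|,|k_1|,|k_2|\le N\) (the constant is immaterial for the cancellation), applied to a random field with \(x_{-k}=x_k^*\):

```python
import random

random.seed(0)
N = 8
# Random Fourier data of a real, mean-zero field: x_{-k} = conj(x_k), x_0 = 0.
x = {k: complex(random.gauss(0, 1), random.gauss(0, 1)) for k in range(1, N + 1)}
x.update({-k: x[k].conjugate() for k in range(1, N + 1)})

def F(x, N):
    # (F_N(x))_k = (i k / 2) * sum_{k1+k2=k, 0<|k1|,|k2|<=N} x_{k1} x_{k2}
    out = {}
    for k in range(-N, N + 1):
        if k == 0:
            continue
        s = sum(x[k1] * x[k - k1] for k1 in range(-N, N + 1)
                if k1 != 0 and k - k1 != 0 and abs(k - k1) <= N)
        out[k] = 0.5j * k * s
    return out

Fx = F(x, N)
inner_abs = abs(sum(x[k].conjugate() * Fx[k] for k in Fx))  # |<x, F_N(x)>_H|
print(inner_abs)  # numerically zero
```

The cancellation is exact for the symmetric cutoff: relabelling the constraint as \(k_1+k_2+k_3=0\), the sum is proportional to \(\sum (k_1+k_2+k_3)\, x_{k_1}x_{k_2}x_{k_3}=0\).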

We will use the fact that \(u^N\) satisfies the mild equation [11]

$$\begin{aligned} u^N_t = e^{-A^\theta t} u_0 + \int \limits _0^t e^{-A^\theta (t-s)} F_N(u^N_s) \mathrm{d }s + A^{\theta /2} \int \limits _0^t e^{-A^\theta (t-s)} \mathrm{d }W_s \end{aligned}$$
(27)

where the stochastic convolution in the r.h.s is given by

$$\begin{aligned} A^{\theta /2} \int \limits _0^t e^{-A^\theta (t-s)} \mathrm{d }W_s = \sum _{k\in \mathbb{Z }_0} |k|^{\theta } e_k \int \limits _0^t e^{-|k|^{2\theta } (t-s)} \mathrm{d }\beta ^k_s . \end{aligned}$$

Lemma 7

Let

$$\begin{aligned} \mathcal{A }_t^{N}=\int \limits _0^t F_{N}(u^{N}_s) \mathrm{d }s ,\qquad \tilde{\mathcal{A }}_t^{N}=\int \limits _0^t e^{-A^\theta (t-s)} F_{N}(u^{N}_s) \mathrm{d }s . \end{aligned}$$

and set \(\sigma =(3/2-2\theta )_+\). The family of laws of the processes \(\{(u^N,\mathcal{A }^N,\tilde{\mathcal{A }}^N,W)\}_N\) is tight in the space of continuous functions with values in \(\mathcal X =\mathcal{F }L^{\infty ,\sigma -{\varepsilon }}\times \mathcal{F }L^{\infty ,3/2-\theta -{\varepsilon }}\times \mathcal{F }L^{\infty ,3/2-2\theta -{\varepsilon }}\times \mathcal{F }L^{\infty ,-{\varepsilon }}\) for all small \({\varepsilon }>0\).

Proof

The estimate (24) in the previous section readily gives that for any small \({\varepsilon }>0\) and sufficiently large \(p\)

$$\begin{aligned} \mathbb{E }_\mu \left[ \sum _{k\in \mathbb{Z }_0} |k|^{-(3/2-2\theta +3\theta {\varepsilon }) p} \left( |(\tilde{\mathcal{A }}_t^{N}-\tilde{\mathcal{A }}_s^{N})_k| \right) ^p\right] \lesssim _{p,{\varepsilon }} \sum _{k\in \mathbb{Z }_0} |k|^{-\theta {\varepsilon }p} |t-s|^{p {\varepsilon }} \lesssim |t\!-\!s|^{p {\varepsilon }} \end{aligned}$$

These estimates show that the family of processes \(\{ \tilde{\mathcal{A }}^{N}\}_{N}\) is tight in \(C([0,T],\mathcal{F }L^{\infty ,\alpha })\) for \(\alpha =3/2-2\theta +3\theta {\varepsilon }\) and sufficiently small \({\varepsilon }>0\). An analogous argument using the estimate (17) shows that the family of processes \(\{ \mathcal{A }^{N}\}_{N}\) is tight in \(C^\gamma ([0,T],\mathcal{F }L^{\infty ,\beta })\) for any \(\gamma <1/2\) and \(\beta < 3/2-\theta \). It is not difficult to show that the stochastic convolution \(\int _0^t e^{-A^\theta (t-s)} A^{\theta /2} \mathrm{d }W_s\) belongs to \(C([0,T],\mathcal{F }L^{\infty ,1-\theta -{\varepsilon }})\) for all small \({\varepsilon }>0\). Taking into account the mild equation (27) we find that the processes \(\{(u^{N}_t)_{t\in [0,T]}\}_{N}\) are tight in \(C([0,T],\mathcal{F }L^{\infty ,\sigma -{\varepsilon }})\). \(\square \)

We are now ready to prove our main theorem on existence of (probabilistically weak) controlled solutions to the generalized SBE.

Theorem 1

There exists a probability space and a quadruple of processes \((u,\mathcal{A },\tilde{\mathcal{A }},W)\) with continuous trajectories in \(\mathcal X \) such that \(W\) is a cylindrical Brownian motion in \(H\), \(u\) is a controlled process and they satisfy

$$\begin{aligned} u_t = u_0 + \mathcal{A }_t - \int \limits _0^t A^\theta u_s \mathrm{d }s + B W_t = e^{-A^\theta t} u_0 + \tilde{\mathcal{A }}_t + \int \limits _0^t e^{-A^\theta (t-s)} B \mathrm{d }W_s\qquad \end{aligned}$$
(28)

where, as space distributions,

$$\begin{aligned} \mathcal{A }_t = \lim _{M \rightarrow \infty }\int \limits _0^t F_{M}(u_s) \mathrm{d }s \quad \text{ and } \quad \tilde{\mathcal{A }}_t = \int \limits _0^t e^{-A^\theta (t-s)} \mathrm{d }\mathcal{A }_s . \end{aligned}$$
(29)

this last integral being defined as a Young integral.

Proof

Let us first prove (29). By tightness of the laws of \(\{(u^N,\mathcal{A }^N,\tilde{\mathcal{A }}^N , W)\}_N\) in \(C(\mathbb R ;\mathcal X )\) we can extract a subsequence which converges weakly (in the probabilistic sense) to a limit point in \(C(\mathbb R ;\mathcal X )\). By the Skorokhod embedding theorem, up to a change of the probability space, we can assume that this subsequence, which we call \(\{N_n\}_{n\ge 1}\), converges almost surely to a limit \(u = \lim _n u^{N_n} \in C(\mathbb R ;\mathcal X )\). Then

$$\begin{aligned}&\int \limits _0^t F_{M}(u_s) \mathrm{d }s = \int \limits _0^t (F_{M}(u_s) - F_{M}(u^{N_n}_s))\mathrm{d }s\\&\quad +\int \limits _0^t (F_{M}(u^{N_n}_s) - F_{N_n}(u^{N_n}_s))\mathrm{d }s +\int \limits _0^t F_{N_n}(u^{N_n}_s) \mathrm{d }s . \end{aligned}$$

But now, in \(C(\mathbb{R }_+,\mathcal{F }L^{\infty ,3/2-\theta -{\varepsilon }})\) we have the almost sure limit

$$\begin{aligned} \lim _n \int \limits _0^\cdot F_{N_n}(u^{N_n}_s) \mathrm{d }s =\lim _n \mathcal{A }^{N_n}_\cdot = \mathcal{A }_\cdot \end{aligned}$$

and, always almost surely in \(C(\mathbb{R }_+,\mathcal{F }L^{\infty ,3/2-\theta -{\varepsilon }})\), we have also

$$\begin{aligned} \lim _n\int \limits _0^\cdot (F_{M}(u_s) - F_{M}(u^{N_n}_s)) \mathrm{d }s = 0 , \end{aligned}$$

since the functional \(F_M\) depends only on a finite number of components of \(u\) and \(u^{N_n}\), and we have the convergence of \(u^{N_n}\) to \(u\) in \(C(\mathbb{R };\mathcal{F }L^{\infty ,\sigma -{\varepsilon }})\), and thus distributionally, uniformly in time. Moreover, for all \(k\in \mathbb{Z }_0\),

$$\begin{aligned} \lim _M \sup _{N_n : M<N_n} \left\| \sup _{t\in [0,T]} \left| \int \limits _0^t (F_{M}(u^{N_n}_s) - F_{N_n}(u^{N_n}_s))_k \mathrm{d }s\right| \right\| _{L^p(\mathbb{P }_\mu )}= 0. \end{aligned}$$

By the a priori estimates, \(\mathcal{A }^{N_n}\) converges to \(\mathcal{A }\) in \(C^\gamma (\mathcal{F }L^{\infty ,3/2-\theta -{\varepsilon }})\) for all \(\gamma <1/2\) and \({\varepsilon }> 0\), so that we can use Young integration to define \(\int _0^t e^{-A^\theta (t-s)} \mathrm{d }\mathcal{A }^{N_n}_s\) as a space distribution and to obtain its distributional convergence (for example for each of its Fourier components) to \(\int _0^t e^{-A^\theta (t-s)} \mathrm{d }\mathcal{A }_s\). At this point Eq. (28) is a simple consequence. The backward processes \(\hat{u}^{N_n}_{t}=u^{N_n}_{T-t}\) and \(\hat{\mathcal{A }}^{N_n}_t = -\mathcal{A }^{N_n}_{T-t}\) converge to \(\hat{u}_{t}=u_{T-t}\) and \(\hat{\mathcal{A }}_t = -\mathcal{A }_{T-t}\) respectively. Moreover, note that \(\mathcal{A }\), as a distributional process, has trajectories which are Hölder continuous of any exponent smaller than \(2\theta /(1+2\theta )>1/2\), as a consequence of the estimate (19); this directly implies that \(\mathcal{A }\) has zero quadratic variation. So \(u\) is a controlled process in the sense of our definition. \(\square \)

5 Uniqueness for \(\theta >5/4\)

In this section we prove a simple pathwise uniqueness result for controlled solutions, valid when \(\theta > 5/4\). Note that to each controlled solution \(u\) there is naturally associated a cylindrical Brownian motion \(W\) on \(H\), given by the martingale part of the controlled decomposition (7). Pathwise uniqueness is then understood in the following sense.

Definition 3

SBE\(_\theta \) has pathwise uniqueness if, given two controlled processes \(u,\tilde{u}\in \mathcal R _\theta \) on the same probability space which generate the same Brownian motion \(W\) and such that \(\tilde{u}_0 = u_0\) almost surely, there exists a negligible set \(\mathcal N \) such that for all \(\varphi \in \mathcal S \) and \(t\ge 0\), \(\{u_t(\varphi ) \ne \tilde{u}_t(\varphi )\} \subseteq \mathcal N \).

Theorem 2

The generalized SBE has pathwise uniqueness when \(\theta >5/4\).

Proof

Let \(u\) be a controlled solution to the equation and let \(u^N\) be the Galerkin approximations defined above with respect to the cylindrical Brownian motion \(W\) obtained from the martingale part of the decomposition of \(u\) as a controlled process. We will prove that \(u^N \rightarrow u\) almost surely in \(C(\mathbb{R }_+;\mathcal{F }L^{2\theta -3/2-2{\varepsilon },\infty })\) for any small \({\varepsilon }>0\). Since Galerkin approximations have unique strong solutions we have \(\tilde{u}^N = u^N\) almost surely and in the limit \(\tilde{u} = u\) in \(C(\mathbb{R }_+;\mathcal{F }L^{2\theta -3/2-2{\varepsilon },\infty })\) almost surely. This will imply the claim by taking as negligible set in the definition of pathwise uniqueness the set \(\mathcal N =\{\sup _{t\ge 0}\Vert u_t-\tilde{u}_t\Vert _{\mathcal{F }L^{2\theta -3/2-2{\varepsilon },\infty }}>0\}\). Let us proceed to prove that \(u^N \rightarrow u\). By bilinearity,

$$\begin{aligned} F_N \left( u_s \right) - F_N \left( u^N_s \right) =F_N ( \Pi _N u_s+u^N_s,\Delta ^N_s) \end{aligned}$$

and the difference \(\Delta ^N = \Pi _N ( u - u^N )\) satisfies the equation

$$\begin{aligned} \Delta ^N_t = \Pi _N \int \limits _0^t e^{- A^{\theta } ( t - s )}F_N ( \Pi _N u_s+u^N_s,\Delta ^N_s) \mathrm{d }s +\varphi ^N_t \end{aligned}$$

where

$$\begin{aligned} \varphi ^N_t = \int \limits _0^t e^{- A^{\theta }\left( t - s\right) }\left( F\left( u_s \right) - F_N \left( u_s\right) \right) \mathrm{d }s . \end{aligned}$$

Note that

$$\begin{aligned} \Vert \sup _{t \in [ 0, T ]}|(\varphi ^N_t )_k|\Vert _{L^p(\mathbb P _{\mu } )}\lesssim _p\min (| k |^{1 - \theta }N^{1/ 2 -\theta },| k |^{3/2 - 2 \theta }) \end{aligned}$$

which by interpolation gives

$$\begin{aligned} \Vert \sup _{t \in [ 0, T ]}|(\varphi ^N_t )_k | \Vert _{L^p(\mathbb P _{\mu })}\lesssim _p | k |^{3/2 - 2 \theta +\varepsilon }N^{-\varepsilon } \end{aligned}$$

for any small \({\varepsilon }>0\). Now let

$$\begin{aligned} \Phi _N = \sup _{k\in \mathbb{Z }_0}\sup _{t \in [ 0, T ]}|k|^{2\theta -3/2-2{\varepsilon }} | ( \varphi ^N_t )_k | \end{aligned}$$

then

$$\begin{aligned} \mathbb{E }\sum _{N>1}N \Phi _N^p&\le \sum _{N>1}N \sum _{k\in \mathbb{Z }_0}\sup _{t \in [ 0, T ]}|k|^{p(2\theta -3/2-2{\varepsilon })}\mathbb{E }|( \varphi ^N_t )_k |^p\\&\lesssim _p \sum _{N>1}N^{1-{\varepsilon }p}\sum _{k\in \mathbb{Z }_0} |k|^{-p{\varepsilon }}<+\infty \end{aligned}$$

for \(p\) large enough, which implies that almost surely \( \Phi _N \lesssim _{p,\omega } N^{-1/p} \). For the other term we have

$$\begin{aligned} \sup _{t\in [0,T]}\left| \left( \int \limits _0^t e^{- A^{\theta } \left( t - s \right) } F_N \left( \Pi _N u + u^N, \Delta _N \right) \mathrm{d }s \right) _k\right| \lesssim A_N |k|^{3/2-2\theta +2{\varepsilon }} Q_T \end{aligned}$$

where \( A_N = \sup _{t \in \left[ 0, T \right] } \sup _k \left| k \right| ^{2 \theta - 3/2 - 2\varepsilon } \left| \left( \Delta ^N_t \right) _k \right| \) and

$$\begin{aligned} Q_T = \sup _{t\in [0,T]} |k|^{2\theta -1/2-2{\varepsilon }} \int \limits _0^t e^{- |k|^{2\theta } \left( t - s \right) } \sum _{q\in \mathbb{Z }_0} |( \Pi _N u_s + u^N_s)_q| |k-q|^{3/2-2\theta +2{\varepsilon }} \mathrm{d }s \end{aligned}$$

This gives

$$\begin{aligned} A_N \leqslant Q_T A_N + \Phi _N. \end{aligned}$$

Since \(3/2-2 \theta <-1 \) (that is \(\theta > 5/4\)), we have the estimate:

$$\begin{aligned} Q_T \lesssim \sup _{t\in [0,T]} |k|^{2\theta -1/2-2{\varepsilon }}\left[ \int \limits _0^t e^{- p^{\prime } |k|^{2\theta }\left( t - s \right) }\mathrm{d }s \right] ^{1/p^{\prime }} \left[ \int \limits _0^T\sum _{q\in \mathbb{Z }_0}\frac{ |(\Pi _N u_s + u^N_s)_q|^{p}}{ |k-q|^{-3/2+2\theta -2{\varepsilon }}} \mathrm{d }s \right] ^{1/p} \end{aligned}$$

valid for some \(p> 1\) (with \(1/p^{\prime }+1/p=1\)). Then

$$\begin{aligned} Q_T \lesssim |k|^{2\theta -1/2-2{\varepsilon }-2\theta /p^{\prime }}\left[ \int \limits _0^T \sum _{q\in \mathbb{Z }_0}\frac{ |( \Pi _N u_s + u^N_s)_q|^{p}}{ |k-q|^{-3/2+2\theta -2{\varepsilon }}}\mathrm{d }s \right] ^{1/p} \end{aligned}$$

and taking \(p\) large enough such that \(2\theta -1/2-2{\varepsilon }-2\theta /p^{\prime }\le 0\) we obtain

$$\begin{aligned} Q_T \lesssim _p \left[ \int \limits _0^T \sum _{q\in \mathbb{Z }_0}\frac{ |(\Pi _N u_s + u^N_s)_q|^{p}}{ |k-q|^{-3/2+2\theta -2{\varepsilon }}} \mathrm{d }s \right] ^{1/p} \end{aligned}$$

By the stationarity of the processes \(u\) and \(u^N\) and the fact that their marginal laws are the white noise we have

$$\begin{aligned} \mathbb{E }[ Q_T^p] \lesssim _p \int \limits _0^T \sum _{q\in \mathbb{Z }_0}\frac{\mathbb{E }|(\Pi _N u_s + u^N_s)_q|^{p}}{ |k-q|^{-3/2+2\theta -2{\varepsilon }}} \mathrm{d }s = T \sum _{q\in \mathbb{Z }_0}\frac{1}{ |k-q|^{-3/2+2\theta -2{\varepsilon }}} \lesssim _p T \end{aligned}$$

Then by a simple Borel–Cantelli argument, almost surely \(Q_{1/n} \lesssim _{p,\omega } n^{-1+1/p}\). Putting together the estimates for \(\Phi _N\) and that for \(Q_{1/n}\) we see that there exists a (random) \(T\) such that \(C Q_T\le 1/2\) almost surely and that for this \(T\): \(A_N \leqslant 2 \Phi _N\), which given the estimate on \(\Phi _N\) implies that \(A_N \rightarrow 0\) as \(N\rightarrow \infty \) almost surely and that the solution of the equation is unique and is the (almost-sure) limit of the Galerkin approximations. \(\square \)

6 Alternative equations

The technique of the present paper extends straightforwardly to some other modifications of the SBE.

6.1 Regularization of the convective term

Consider for example the equation

$$\begin{aligned} \mathrm{d }u_t = - A u_t \mathrm{d }t + A^{-\sigma }F(A^{-\sigma } u_t) \mathrm{d }t + B \mathrm{d }W_t \end{aligned}$$
(30)

which is the equation considered by Da Prato, Debussche and Tubaro in [9]. Letting \(F_\sigma (x) = A^{-\sigma }F(A^{-\sigma } x)\), denoting by \(H_\sigma \) the corresponding solution of the Poisson equation and following the same strategy as above we obtain the same bounds

$$\begin{aligned} \mathcal{E }((H_{\sigma ,N})^\pm _k)(x)\lesssim \sum _{\begin{array}{c} k_1,k_2 :k_1+k_2=k\\ |k|,|k_1|,|k_2|\le N \end{array}}c_\sigma (k,k_1,k_2) |x_{k_2}|^2 \end{aligned}$$

where \(c_\sigma (k,k_1,k_2) = |k|^{2-4\sigma }/[|k_1|^{4\sigma }|k_2|^{4\sigma }(|k_1|^{2}+|k_2|^{2})]\). This quantity can then be bounded in terms of the sum

$$\begin{aligned} I_{\sigma ,N}(k) = \sum _{\begin{array}{c} k_1,k_2 :k_1+k_2=k\\ |k|,|k_1|,|k_2|\le N \end{array}}c_\sigma (k,k_1,k_2) \lesssim |k|^{1-12\sigma } \end{aligned}$$

From this we can recover bounds similar to those exploited above. For example

$$\begin{aligned} \left\| \int \limits _0^t (e^{-A(t-s)}F_{\sigma ,M}(u_s))_k \mathrm{d }s \right\| _{L^p(\mathbb{P }_\mu )}\lesssim _p |k|^{-1/2-6\sigma } \end{aligned}$$

In particular we have existence of weak controlled solutions when \(8\sigma +2>1\), that is \(\sigma >-1/8\), and pathwise uniqueness when \(-1/2-6\sigma <-1\), that is \(\sigma > 1/12\). This is an improvement over the result in [9], which has uniqueness for \(\sigma >1/8\).
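The exponent arithmetic behind these thresholds is elementary but easy to slip on; a mechanical check (the sample values of \(\sigma \) are arbitrary):

```python
# With θ = 1, the energy estimate gives I_{σ,N}(k) of order |k|^{1-12σ};
# taking a square root and the |k|^{-1} gain of the semigroup kernel yields
# the exponent (1-12σ)/2 - 1 = -1/2 - 6σ quoted above.
exp_checks = [abs((1 - 12 * s) / 2 - 1 - (-0.5 - 6 * s)) < 1e-12
              for s in (-0.2, -0.125, 0.0, 1 / 12, 0.2)]
# Existence threshold: 8σ + 2 > 1  iff  σ > -1/8.
exist_ok = all((8 * s + 2 > 1) == (s > -1 / 8) for s in (-0.2, -0.13, -0.12, 0.0, 0.1))
# Uniqueness threshold: -1/2 - 6σ < -1  iff  σ > 1/12.
uniq_ok = all((-0.5 - 6 * s < -1) == (s > 1 / 12) for s in (0.0, 0.08, 0.09, 0.2))
print(all(exp_checks), exist_ok, uniq_ok)
```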

6.2 The Sasamoto–Spohn discrete model

Another application of the above techniques is to the analysis of the discrete approximation to the SBE proposed by Sasamoto and Spohn in [25]. Their model is the following:

$$\begin{aligned} \mathrm{d }u_j&= (2N+1) (u_j^2+u_j u_{j+1}-u_{j-1}u_j-u^2_{j-1})\mathrm{d }t \\ \nonumber&\quad +\;(2N+1)^2(u_{j+1}-2 u_j+u_{j-1})\mathrm{d }t + (2N+1)^{3/2} (\mathrm{d }B_j - \mathrm{d }B_{j-1}) \end{aligned}$$
(31)

for \(j=1,\ldots ,2N+1\) with periodic boundary conditions \(u_0=u_{2N+1}\), and where the processes \((B_j)_{j=1,\ldots ,2N+1}\) are a family of independent standard Brownian motions with \(B_0=B_{2N+1}\). This model has to be thought of as a discretization of the dynamics of the periodic velocity field \(u(x)\) with \(x\in (-\pi ,\pi ]\) sampled on a grid of mesh size \(1/(2N+1)\), that is \(u_j = u(\xi ^N_j)\) with \(\xi ^N_j = -\pi +2\pi (j/(2N+1))\). This also fixes the scaling factors for the different contributions to the dynamics if we want that, at least formally, this equation converges to a limit described by a SBE. Passing to Fourier variables \(\hat{u}(k) = (2N+1)^{-1}\sum _{j=0}^{2N} e^{i \xi ^N_j k} u_j\) for \(k\in \mathbb{Z }^N\) with \(\mathbb{Z }^N = \mathbb{Z }\cap [-N,N]\) and imposing that \(\hat{u}(0)=0\), that is, considering only evolutions with zero mean velocity, we get the system of ODEs:

$$\begin{aligned} \mathrm{d }\hat{u}_t(k) = F^\flat _N(\hat{u}_t)_k \mathrm{d }t - |g_N(k)|^2 \hat{u}_t(k) \mathrm{d }t + (2N+1)^{1/2} g_N(k) \mathrm{d }\hat{B}_t(k) \end{aligned}$$

for \(k\in \mathbb{Z }_0^N=\mathbb{Z }_0\cap [-N,N]\), where \(g_N(k)=(2N+1)(1-e^{i k/(2N+1)})\),

$$\begin{aligned} F^\flat _N(u_t)_k = \sum _{\begin{array}{c} k_1,k_2\in \mathbb{Z }^N_0\\ k_1+k_2=k \end{array}} \hat{u}_t(k_1)\hat{u}_t(k_2)[ g_N(k)-g_N(k)^*+g_N(k_1)-g_N(k_2)^*] \end{aligned}$$

and \((\hat{B}_\cdot (k))_{k\in \mathbb{Z }_0^N}\) is a family of centred complex Brownian motions such that \(\hat{B}(k)^* = \hat{B}(-k)\) and with covariance \(\mathbb{E }\hat{B}_t(k) \hat{B}_t(-l) = \mathbb{I }_{k=l} t (2N+1)^{-1}\). If we then let \(\beta (k) = (2N+1)^{1/2} \hat{B}(k)\) we obtain a family of complex BM of covariance \(\mathbb{E }\beta _t(k) \beta _t(-l) = t \mathbb{I }_{k=l} \). The generator \(L^\flat _N\) of this stochastic dynamics is given by

$$\begin{aligned} L^{\flat }_N \varphi ( x) = \sum _{k\in \mathbb{Z }^N_0} F^\flat _N(x)_k D_k \varphi ( x)+L^{g_N}_N \varphi (x) \end{aligned}$$

with

$$\begin{aligned} L^{g_N}_N \varphi ( x) = \sum _{k\in \mathbb{Z }^N_0} |g_N(k)|^2 (- x_k D_{k}+ D_{-k} D_k) \varphi ( x) \end{aligned}$$

the generator of the OU process corresponding to the linear part associated with the multiplier \(g_N\). It is easy to check that the complete dynamics preserves the (discrete) white noise measure, indeed

$$\begin{aligned} \sum _{k\in \mathbb{Z }_0^N} x_{-k} F^\flat _N(x)_k = \sum _{\begin{array}{c} k,k_1,k_2\in \mathbb{Z }_0^N\\ k+k_1+k_2=0 \end{array}} x_{k} x_{k_1} x_{k_2}[ g_N(k)^*-g_N(k)+g_N(k_1)-g_N(k_2)^*] =0 \end{aligned}$$

since the symmetrization of the r.h.s. with respect to the permutations of the variables \(k,k_1,k_2\) yields zero. Then, defining suitable controlled processes with respect to the linear part of this equation, we can prove our a priori estimates on additive functionals, which are now controlled by the quantity

$$\begin{aligned} \mathcal{E }^{g_N}((H_{g_N,N})^\pm _k)(x)\lesssim \sum _{\begin{array}{c} k_1,k_2 :k_1+k_2=k\\ |k|,|k_1|,|k_2|\le N \end{array}}c_{g_N}(k,k_1,k_2) |x_{k_2}|^2 \end{aligned}$$

with \(c_{g_N}(k,k_1,k_2) = |g_N(k)|^2/(|g_N(k_1)|^{2}+|g_N(k_2)|^{2})\). Moreover noting that

$$\begin{aligned} |g_N(k)|^2 = 2 (2N+1)^2(1-\cos (2\pi k/(2N+1))) \sim |k|^2 \end{aligned}$$

uniformly in \(N\), it is possible to estimate this energy in the same way we did before in the case \(\theta =1\) and obtain that the family of stationary solutions of Eq. (31) is tight in \(C([0,T],\mathcal{F }L^{\infty ,-{\varepsilon }})\) for all \({\varepsilon }>0\). Moreover, using the fact that \(g_N(k) \rightarrow ik\) as \(N\rightarrow \infty \) uniformly for bounded \(k\) and that

$$\begin{aligned} \begin{aligned} \pi _M F^\flat _N(\pi _M x)_k \!&=\!\sum _{\begin{array}{c} k_1,k_2\in \mathbb{Z }^N_0\\ k_1+k_2=k \end{array}}\mathbb{I }_{|k|,|k_1|,|k_2|\le M}x_{k_1} x_{k_2}[ g_N(k)-g_N(k)^*+g_N(k_1)\!-\!g_N(k_2)^*]\\&\quad \rightarrow 3 i k \sum _{\begin{array}{c} k_1,k_2\in \mathbb{Z }_0\\ k_1+k_2=k \end{array}}\mathbb{I }_{|k|,|k_1|,|k_2|\le M}x_{k_1} x_{k_2}= 3 F_M(x)_k \end{aligned} \end{aligned}$$

it is easy to check that any accumulation point is a controlled solution of the SBE (4).

7 2D stochastic Navier–Stokes equation

We consider the problem of stationary solutions to the 2D stochastic Navier–Stokes equation considered in [1] (see also [2]). We would like to deal with the invariant measure obtained by formally taking the kinetic energy of the fluid and considering the associated Gibbs measure. However this measure is quite singular and we need a bit of hyperviscosity in the equation to make our estimates work.

7.1 The setting

Fix \(\sigma >0\) and consider the following stochastic differential equation

$$\begin{aligned} \mathrm{d }(u_{t})_k = - |k|^{2+2\sigma } (u_{t})_k \mathrm{d }t + B_k(u_t) \mathrm{d }t + |k|^{\sigma } \mathrm{d }\beta ^k_t \end{aligned}$$
(32)

where \((\beta ^k)_{k\in \mathbb{Z }^2\backslash \{0\}}\) is a family of complex BMs for which \((\beta ^k)^* = \beta ^{-k}\) and \(\mathbb{E }[\beta ^k \beta ^q] = \mathbb{I }_{q+k=0}\), \(u\) is a stochastic process with continuous trajectories in the space of distributions on the two dimensional torus \(\mathbb{T }^2\),

$$\begin{aligned} B_k(x) = \sum _{k_1+k_2=k} b(k,k_1,k_2) x_{k_1} x_{k_2} \end{aligned}$$

where \(x: \mathbb{Z }^2\backslash \{0\}\rightarrow \mathbb{C }\) is such that \(x_{-k} = x_k^*\) and

$$\begin{aligned} b(k,k_1,k_2) = \frac{(k^\bot \cdot k_1)(k \cdot k_2)}{k^2} \end{aligned}$$

with \((\xi ,\eta )^\bot = (\eta ,-\xi ) \in \mathbb{R }^2\). Apart from the two-dimensional setting and the different covariance structure of the linear part, this problem has the same structure as the one dimensional SBE we considered before. Note that to make sense of it (and in order to construct controlled solutions) we can consider the Galerkin approximations constructed as follows. Fix \(N\) and solve the finite dimensional problem

$$\begin{aligned} \mathrm{d }(u^N_{t})_k = - |k|^{2+2\sigma } (u^N_{t})_k \mathrm{d }t + B^N_k(u^N_t) \mathrm{d }t + |k|^{\sigma } \mathrm{d }\beta ^k_t \end{aligned}$$
(33)

for \(k \in \mathbb{Z }^2_N = \{k \in \mathbb{Z }^2 : |k| \le N\}\), where

$$\begin{aligned} B^N_k(x) =\mathbb{I }_{|k|\le N} \sum _{\begin{array}{c} k_1+k_2=k\\ |k_1|\le N, |k_2|\le N \end{array}} b(k,k_1,k_2) x_{k_1} x_{k_2} \end{aligned}$$
(34)

The generator of the process \(u^N\) is given by \( L^N\varphi (x) = L^0\varphi (x) +\sum _{k\in \mathbb{Z }^2\backslash \{0\}} B^N_k (x) D_k \varphi (x) \) where

$$\begin{aligned} L^0 \varphi (x) =\frac{1}{2}\sum _{k\in \mathbb{Z }^2\backslash \{0\}} |k|^{2\sigma }( D_{-k}D_k\varphi (x) -|k|^2 x_k D_k\varphi (x)) \end{aligned}$$

is the generator of a suitable OU flow. Note moreover that the kinetic energy of \(u\), given by \(E(x) = \sum _k |k|^{2} |x_k|^2\), is invariant under the flow generated by \(B^N\). Moreover \(D_{k} B^N_k(x) = 0 \) since \(x_k\) does not enter the expression of \(B^N_k(x)\), so the vector fields \(B^N\) also leave the measure \(\prod _{k \in \mathbb{Z }^2_N\backslash \{0\}} dx_k \) invariant. Then the (complex) Gaussian measure

$$\begin{aligned} \gamma (dx) = \prod _{k \in \mathbb{Z }^2\backslash \{0\}} Z_k e^{-|k|^2 |x_k|^2} dx_k \end{aligned}$$

is invariant under the flow generated by \(B^N\). (This measure should be understood as restricted to the set \(\{x \in \mathbb{C }^{\mathbb{Z }^2\backslash \{0\}} : x_{-k} = \overline{x_k} \}\).) The measure \(\gamma \) is also invariant for the \(u^N\) diffusion since it is invariant both for \(B^N\) and for the OU process generated by \(L^0\). Introduce standard Sobolev norms \(\Vert x\Vert _s^2 = \sum _{k \in \mathbb{Z }^2\backslash \{0\}} |k|^{2s} |x_k|^2\), where the index \(s\) is not to be confused with the hyperviscosity parameter \(\sigma \), and denote by \(H^s\) the space of elements \(x\) with \(\Vert x \Vert _{s}<\infty \). The measure \(\gamma \) is the Gaussian measure associated to \(H^1\) and is supported on any \(H^s\) with \(s<0\):

$$\begin{aligned} \int \Vert x\Vert _s^2 \gamma (dx) = \sum _{k \in \mathbb{Z }^2\backslash \{0\}} |k|^{2s-2} < \infty \end{aligned}$$

so \((\gamma , H^1, \cap _{\varepsilon > 0}H^{-\varepsilon })\) is an abstract Wiener space in the sense of Gross. Note that the vector field \(B_k(x)\) is not defined on the support of \(\gamma \). To make sense of controlled solutions to this equation we need to control

$$\begin{aligned} \mathcal{E }((H_{N})^\pm _k)(x)\lesssim \sum _{\begin{array}{c} k_1,k_2 :k_1+k_2=k\\ |k|,|k_1|,|k_2|\le N \end{array}}c_{\text{ ns }}(k,k_1,k_2) |x_{k_2}|^2 \end{aligned}$$

with \(c_{\text{ ns }}(k,k_1,k_2) = |k_1|^{2\sigma } |k_1|^2|k_2|^2/(|k_1|^{2+2\sigma }+|k_2|^{2+2\sigma })^2\). The stationary expectation of this term can be estimated by

$$\begin{aligned} I_N(k)&= \sum _{\begin{array}{c} k_1,k_2 :k_1+k_2=k\\ |k|,|k_1|,|k_2|\le N \end{array}}c_{\text{ ns }}(k,k_1,k_2) |k_2|^{-2} \lesssim \sum _{\begin{array}{c} k_1,k_2 :k_1+k_2=k\\ |k|,|k_1|,|k_2|\le N \end{array}}\frac{|k_1|^{2+2\sigma }}{ (|k_1|^{2+2\sigma }+|k_2|^{2+2\sigma })^2}\\&\lesssim \sum _{\begin{array}{c} k_1,k_2 :k_1+k_2=k\\ |k|,|k_1|,|k_2|\le N \end{array}}\frac{1}{ |k_1|^{2+2\sigma }+|k_2|^{2+2\sigma }}\lesssim |k|^{-2\sigma } \end{aligned}$$

for any \(\sigma > 0\), uniformly in \(N\). This estimate allows us to apply our machinery and obtain stationary controlled solutions to this equation.
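As a quick numerical sanity check of this bound (ours, not part of the argument in [1]), one can evaluate the truncated sum \(I_N(k)\) directly and observe that \(|k|^{2\sigma } I_N(k)\) stays of order one as \(|k|\) grows, with a constant that does not depend on the cutoff:

```python
# numerical check that I_N(k) * |k|^{2 sigma} stays bounded (here sigma = 1/2)
sigma = 0.5
N = 40  # cutoff; the bound |k|^{-2 sigma} does not depend on N

def I_N(k, N, sigma):
    # sum over k1 + k2 = k with 0 < |k1|, |k2| <= N of
    # |k1|^{2+2 sigma} / (|k1|^{2+2 sigma} + |k2|^{2+2 sigma})^2
    kx, ky = k
    total = 0.0
    for a in range(-N, N + 1):
        for b in range(-N, N + 1):
            c, d = kx - a, ky - b
            if (a, b) == (0, 0) or (c, d) == (0, 0):
                continue
            n1sq, n2sq = a * a + b * b, c * c + d * d
            if n1sq > N * N or n2sq > N * N:
                continue
            p1 = n1sq ** (1 + sigma)   # |k1|^{2+2 sigma}
            p2 = n2sq ** (1 + sigma)   # |k2|^{2+2 sigma}
            total += p1 / (p1 + p2) ** 2
    return total

ratios = {m: I_N((m, 0), N, sigma) * m ** (2 * sigma) for m in (1, 2, 4, 8)}
print(ratios)  # roughly constant in m, consistent with I_N(k) ~ |k|^{-2 sigma}
```

The scaling is visible already at moderate \(|k|\): away from lattice effects the summand is a function of \(k_1/|k|\) alone, which is where the \(|k|^{-2\sigma }\) decay comes from.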