Abstract
We study quasi-linear stochastic partial differential equations with discontinuous drift coefficients. Existence and uniqueness of a solution is already known under weaker conditions on the drift, but we are interested in the regularity of the solution in terms of Malliavin calculus. We prove that when the drift is bounded and measurable the solution is directional Malliavin differentiable.
1 Introduction
We consider the quasi-linear stochastic partial differential equation
with the initial condition \(u(0,x) = u_0(x)\), \(u_0 \in C([0,1])\). We will consider Neumann boundary conditions,
In (1) \(\frac{\partial ^2}{ \partial t \partial x} W(t,x)\) denotes space–time white noise and we assume \(b{: }\, \mathbb {R} \rightarrow \mathbb {R}\) is bounded and measurable, i.e. we allow for discontinuities.
Existence and uniqueness of a strong solution to (1) is already known under weaker conditions on the drift. More specifically, in [5] the authors prove existence and uniqueness of a strong solution to (1) when b is allowed to be of linear growth.
In this paper we restrict ourselves to bounded drift, but in return we show that the solution enjoys additional regularity. Indeed, the solution is Malliavin differentiable in every direction \(h \in L^2([0,T] \times [0,1])\), denoted \(D^hu(t,x)\). Although we are not yet able to prove existence of the usual Malliavin derivative, i.e.
such that \(\int _0^T \int _0^1 D_{\theta , \xi } u(t,x) h(\theta ,\xi ) d\xi d\theta = D^hu(t,x)\), this paper has some major contributions:
-
This work shows that the solution behaves more regularly than the drift alone would suggest. The classical way of studying Malliavin calculus for S(P)DE's is to show that the solutions 'inherit' regularity from the coefficients. In the current paper we show that S(P)DE's possess regularity beyond that of their coefficients.
-
It is an example of an infinite-dimensional generalization of [7], where the authors show that the SDE
$$\begin{aligned} dX_t = b(t,X_t)dt + dB_t, \, \, X_0 = x \in \mathbb {R}^d \end{aligned}$$(2)
with bounded and measurable drift has a unique Malliavin differentiable strong solution, using a new technique.
-
Very recently, the authors of [1] show that there is strong uniqueness (and thus strong existence) in the Hilbert-space valued SDE
$$\begin{aligned} dX_t = ( AX_t + B(t,X_t))dt + dW_t \in H \end{aligned}$$
when \(B{: }\, [0,T] \times H \rightarrow H\) is bounded and measurable, thus proving a generalization of the famous results by Veretennikov [9] and Zvonkin [11] to SPDE's. The current paper suggests that the technique in [7] could be used to show that the solutions obtained in [1] are even Malliavin differentiable. See also [3], where the authors prove Malliavin differentiability in the case of Hölder-continuous drift.
-
The Malliavin calculus is tailored to investigate regularity properties of densities of random variables. Perhaps the best-known explicit formula for this is the following: for a random variable \(F \in \mathbb {D}^{1,2}\) and \(h \in H\) such that \(\langle DF, h\rangle \ne 0\) and \(\frac{h}{\langle DF,h\rangle } \in \text { dom}\,\delta \) (the domain of the Skorohod operator), the density of F is continuous and given by
$$\begin{aligned} p_F(x) = E\left[ 1_{(F > x)} \delta \left( \frac{h}{\langle DF,h\rangle } \right) \right] . \end{aligned}$$(3)
See [8], Proposition 2.1.1 and Exercise 2.1.3, for details and precise formulations. In the above we note that only the directional Malliavin derivative appears.
Even though only the directional derivative appears in (3), we are not able to prove this formula for u(t, x) with the techniques of this paper. In fact it requires \(\frac{h}{D^hu(t,x)}\) to be in the domain of the Skorohod operator, and this typically requires the second-order Malliavin derivative of u(t, x). In the finite-dimensional case, i.e. for SDE's, there are examples of discontinuous drift coefficients where the solution is once, but not twice, Malliavin differentiable. This suggests that (3) is out of reach for discontinuous coefficients.
On a more positive note we can prove, using the directional Malliavin derivative, that the support of the density is connected. For details, see Corollary 6.1.
Let us briefly explain the idea of the proof. Assume first that \(b \in C^1\) and u solves (1). The directional Malliavin derivative should then satisfy, for any direction \(h \in L^2([0,T] \times (0,1))\),
For a fixed sample path we regard the above equation as a deterministic equation, and the Feynman–Kac formula lets us solve it as a functional of \(\int _0^tb'(u(s,\cdot ))ds\). Precisely because the solution of (1) is very irregular as a function of t, its local time \(L(t, \cdot )\) is continuously differentiable in the spatial variable. Therefore we can write
where we have used integration by parts.
The main estimate is the following.
Theorem 1.1
Suppose \(b \in C_c^1\). There exists a continuous increasing function \(C: \mathbb {R}_+ \rightarrow \mathbb {R}_+\) such that for any \(h \in L^2([0,T] \times [0,1])\) we have
Finally we approximate a general b by smooth functions and use comparison to generate strong convergence (in \(L^2(\Omega )\)) of the corresponding sequence of solutions to the solution of (1). Since we can bound the corresponding sequence of (directional) derivatives, we arrive at the main result of this paper.
Theorem 1.2
Assume b is bounded and measurable. Denote by u the solution of (1). Then for every \(h \in L^2([0,T] \times [0,1])\) we have
The paper is organized as follows: In Sect. 2 we introduce the Malliavin calculus and the related results we need. In Sect. 3 we rigorously formulate equation (1). In Sect. 4 we prove that the local time of the solution to (1) with \(b=0\) has nice regularity properties. We then study (1) when the drift is smooth in Sect. 5 and use the results from Sect. 4 to obtain derivative-free estimates; the proof of Theorem 1.1 is also found in Sect. 5. Finally, Theorem 1.2 is proved in Sect. 6.
2 Basic concepts of Malliavin calculus
Let \((\Omega , \mathcal {F}, P)\) be a complete probability space. We assume that \(\mathcal {F}\) is the completion of \( \sigma \{ W(h){: }\, h \in L^2([0,T] \times [0,1]) \}\) with the P-null sets. Here \(W: L^2([0,T] \times [0,1]) \rightarrow L^2(\Omega )\) is a linear mapping such that W(h) is a centered Gaussian random variable. The covariance is given by \(E[W(h)W(g)] = \langle h, g \rangle \) where the right hand side denotes the inner product in \(L^2([0,T] \times [0,1])\).
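To fix ideas, the isonormal family W(h) can be sketched numerically: approximate the white noise on \([0,T] \times [0,1]\) by independent cell increments and W(h) by a Riemann sum against them; the covariance identity \(E[W(h)W(g)] = \langle h, g \rangle \) is then visible by Monte Carlo. A purely illustrative Python sketch (grid sizes and the test functions h, g are arbitrary choices):

```python
import numpy as np

# White noise on [0,T] x [0,1] approximated by iid N(0, dt*dx) cell
# increments; W(h) is then the Riemann sum of h against these increments.
rng = np.random.default_rng(0)
T, nt, nx, M = 1.0, 40, 25, 20_000
dt, dx = T / nt, 1.0 / nx
t_mid = (np.arange(nt) + 0.5) * dt           # midpoints of the time cells

h = np.ones((nt, nx))                        # h(t,x) = 1
g = np.tile(t_mid[:, None], (1, nx))         # g(t,x) = t

dW = rng.normal(0.0, np.sqrt(dt * dx), size=(M, nt, nx))
Wh = np.einsum('mij,ij->m', dW, h)           # M samples of W(h)
Wg = np.einsum('mij,ij->m', dW, g)

inner = np.sum(h * g) * dt * dx              # <h,g> = int_0^1 t dt = 0.5
print(np.mean(Wh * Wg), inner)               # empirical covariance ~= 0.5
```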
We have the orthogonal Wiener chaos decomposition
where \(H_n := span\{ I_n(f){: }\, f \in L^2(([0,T] \times [0,1])^n)\}\) and \(I_n(f)\) is the n-fold Wiener-Itô integral of f. For a random variable \(F \in L^2(\Omega )\) with Wiener chaos decomposition \(F = \sum _{n=0}^{\infty } I_n(f_n)\) we have
We call a random variable F smooth if it is of the form
for \(h_1, \ldots h_n \in L^2([0,T] \times [0,1])\) and \(f \in C^{\infty }_p (\mathbb {R}^n)\)—the smooth functions with polynomial growth. For such a random variable we define the Malliavin derivative
as an element of \(L^2 (\Omega ; L^2([0,T] \times [0,1]))\). We denote by \(\mathbb {D}^{1,2}\) the closure of the set of smooth random variables with respect to the norm
Furthermore we define the directional Malliavin derivative in the direction \(h \in L^2([0,T] \times [0,1])\) as
and by \(\mathbb {D}^{h,2}\) the closure of the set of smooth random variables with respect to the norm
The integration by parts formula
is well known, and can be found in [8]. This shows that the operator \(D^h\) is closable on \(L^2(\Omega )\) with domain \(\mathbb {D}^{h,2}\). Moreover, using approximation and the two preceding facts, one can prove the chain rule \(D^h g(F) = g'(F)D^hF\) for all \(g \in C^{\infty }_c(\mathbb {R})\) and \(F \in \mathbb {D}^{h,2}\).
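The one-dimensional shadow of this integration-by-parts formula is Stein's identity \(\sigma ^2 E[f'(Z)] = E[Z f(Z)]\) for \(Z \sim \mathcal {N}(0,\sigma ^2)\): taking \(F = f(W(h))\) gives \(D^hF = f'(W(h))\Vert h\Vert ^2\) and \(E[D^hF] = E[F\, W(h)]\). A quick Monte Carlo check of the scalar identity (the test function f and the value of \(\sigma ^2\) are arbitrary choices):

```python
import numpy as np

# Scalar analogue of E[D^h F] = E[F W(h)]: for Z = W(h) ~ N(0, sigma2)
# and F = f(Z), D^h F = f'(Z) * sigma2, so the formula reduces to
# Stein's identity  sigma2 * E[f'(Z)] = E[Z f(Z)].
rng = np.random.default_rng(0)
sigma2 = 2.0                      # plays the role of ||h||^2 in L^2
Z = rng.normal(0.0, np.sqrt(sigma2), size=2_000_000)

f = lambda z: z ** 3              # smooth test function
df = lambda z: 3 * z ** 2

lhs = sigma2 * df(Z).mean()       # the E[D^h F] side
rhs = (Z * f(Z)).mean()           # the E[F W(h)] side
print(lhs, rhs)                   # both close to 3 * sigma2**2 = 12
```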
The following characterization of \(\mathbb {D}^{h,2}\) is obtained by modifying the proof of Proposition 1.2.1 in [8]:
Proposition 2.1
For \(F = \sum _{n =0}^{\infty } I_n(f_n) \in L^2(\Omega )\) we have that F belongs to \(\mathbb {D}^{h,2}\) if and only if
in which case the above is equal to \(E[(D^hF)^2]\).
Let us prove the following technical result, which is inspired by Lemma 1.2.3 in [8]:
Lemma 2.2
Suppose \(\{ F_N \}_{N \ge 1} \subset \mathbb {D}^{h,2}\) is such that
-
\(F_N \rightarrow F\) in \(L^2(\Omega )\)
-
\(\sup _{N \ge 1} E[(D^hF_N)^2] < \infty \) .
Then \(F \in \mathbb {D}^{h,2}\) and \(D^hF_{N}\) converges to \(D^hF\) in the weak topology of \(L^2(\Omega )\).
Proof
We write
and
Since \(\{ D^hF_N \}_{N \ge 1}\) is bounded in \(L^2(\Omega )\) we may extract a subsequence \(D^hF_{N_k}\) converging in the weak topology to some element \(\alpha = \sum _{n=0}^{\infty } I_n ( \alpha _n)\). We note that
and we see that \(\langle f_{n,N_k}, h \rangle \) converges weakly in \(L^2(([0,T] \times [0,1])^{n-1})\) to \(\alpha _n\). It follows that \(\alpha _n\) coincides with \(\langle f_n, h \rangle \) and we have
which is finite by assumption. From Proposition 2.1 we have \(F \in \mathbb {D}^{h,2}\).
If we take any other weakly converging subsequence of \(\{ D^hF_N \}_{N \ge 1}\), its limit must, by the preceding argument, equal \(D^hF\). This implies that the full sequence converges weakly. \(\square \)
Suppose now that \(F \in L^2(\Omega )\) is such that for all \(h \in L^2([0,T] \times [0,1])\) we have \(F \in \mathbb {D}^{h,2}\) and \(D^hF = 0\). It follows from Proposition 2.1 that \(f_n = 0\) a.e. for all \(n \ge 1\). Consequently \(F = E[F]\). Let now \(A \in \mathcal {F}\) and assume \(1_A \in \mathbb {D}^{h,2}\) for all \(h \in L^2([0,T] \times [0,1])\). From the chain rule applied to \(f \in C^{\infty }_0(\mathbb {R})\) such that \(f(x) = x^2\) on, say, \(\{x \in \mathbb {R} : |x| < 2\}\), we get
which implies that \(1_A = E[1_A]\) and this is only possible if \(P(A) =0\) or \(P(A) =1\). This observation together with Lemma 2.2 leads to the following.
Proposition 2.3
Assume \(F \in \mathbb {D}^{h,2}\) for all \(h \in L^2([0,T] \times [0,1])\) and F has a density p. Then the support of p is connected.
Proof
Assume the support of p can be written as two disjoint connected sets, A and B. Let \(\psi _M\) be a sequence of smooth functions such that \(0 \le \psi _M(x) \le 1\) and
Moreover, we assume \(\sup _M \Vert \psi _M' \Vert _{\infty } < \infty \). For M large enough, \(A_M := A \cap \{x \in \mathbb {R} : |x| \le M \}\) and \(B_M := B \cap \{x \in \mathbb {R} : |x| \le M \}\) are both non-empty. Let \(f_M\) be a smooth function such that \(0 \le f_M \le 1\), \(f_M(x) = 1\) for \(x \in A_M\) and \(f_M(x) = 0\) for \(x \in B_M\). Using the density of F we observe
which gives \(D^h (f_M(F) \psi _M(F)) = f_M(F) \psi _M'(F) D^hF\). From Lemma 2.2 we see that \(1_A = \lim _{M \rightarrow \infty } f_M(F)\psi _M(F) \in \mathbb {D}^{h,2}\), which is only possible if \(P(A) = 1\) or \(P(A) =0\), meaning that either A or B (respectively) is the entire support of p. \(\square \)
3 Framework and solutions
With the notation from the previous section, we define \(W(t,A) := W(1_{[0,t] \times A})\), which is the white noise on \([0,T] \times [0,1]\), and for \(h \in L^2([0,T] \times [0,1])\) the Wiener–Itô integral w.r.t. dW(t, x) is equal to
Throughout this paper we will assume we have a filtration \(\{ \mathcal {F}_t \}_{t \in [0,T]}\), where \(\mathcal {F}_t\) is generated by \(\{ W(s,A) : (s,A) \in [0,t] \times \mathcal {B}([0,1]) \}\) augmented with the set of P-null sets.
We denote by G(t, x, y) the fundamental solution to the heat equation, i.e.
with boundary conditions \(\frac{\partial }{\partial x} G(t,0,y) = \frac{\partial }{\partial x} G(t,1,y) =0\) and \(\lim _{t \rightarrow 0} G(t,x,y) = \delta _{x}(y)\)—the Dirac delta distribution in x.
It is well known that
and there exist positive constants c and C such that uniformly in \(t' < t\) and \(x \in [0,1]\) we have
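For concreteness: if, as in [5], the operator is \(\frac{\partial ^2}{\partial x^2}\) on [0, 1], then G admits the cosine eigenfunction expansion \(G(t,x,y) = 1 + 2\sum _{n \ge 1} e^{-n^2\pi ^2 t} \cos (n\pi x)\cos (n\pi y)\). A hedged Python sketch under this assumption (the truncation level is an arbitrary choice) evaluates the series and checks two standard properties:

```python
import numpy as np

# Neumann heat kernel on [0,1] for the operator d^2/dx^2, via its cosine
# eigenfunction expansion (eigenfunctions 1, sqrt(2) cos(n pi x)):
#   G(t,x,y) = 1 + 2 * sum_{n>=1} exp(-n^2 pi^2 t) cos(n pi x) cos(n pi y)
def G(t, x, y, n_terms=200):
    n = np.arange(1, n_terms + 1)
    return 1.0 + 2.0 * np.sum(
        np.exp(-(n * np.pi) ** 2 * t) * np.cos(n * np.pi * x) * np.cos(n * np.pi * y)
    )

t, x = 0.05, 0.3
y_mid = (np.arange(2000) + 0.5) / 2000        # midpoint quadrature grid
mass = np.mean([G(t, x, y) for y in y_mid])   # int_0^1 G(t,x,y) dy
print(mass)                                   # ~= 1: Neumann BCs conserve mass
print(G(t, 0.2, 0.7) - G(t, 0.7, 0.2))        # ~= 0: symmetry in (x,y)
```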
Assume we are given a bounded and measurable function \(b{: }\, \mathbb {R} \rightarrow \mathbb {R}\). By a solution to our main SPDE, (1), we shall mean an adapted and continuous random field u(t, x) such that
4 Local time estimates
The local time of a process \((X_t)_{t \in [0,T]}\) is defined as follows: we define the occupation measure
where \(|\cdot |\) denotes the Lebesgue measure. The process X has local time on [0, t] if \(\mu _t\) is absolutely continuous w.r.t. Lebesgue measure, and the local time, \(L(t, \cdot )\), is defined as the corresponding Radon–Nikodym derivative, i.e.
The local time satisfies the occupation time density formula
for any bounded and measurable \(f: \mathbb {R} \rightarrow \mathbb {R}\).
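The occupation time density formula is easy to see in action: discretise a Brownian path, bin its positions to estimate the occupation density, and compare the two sides. A minimal illustrative sketch (path length, bins, and the test function f are arbitrary choices):

```python
import numpy as np

# Local time of a (discretised) Brownian path via the occupation measure:
# bin the path positions, divide by the bin width, and test the occupation
# time density formula  int_0^t f(X_s) ds = int f(y) L(t,y) dy.
rng = np.random.default_rng(1)
t, n = 1.0, 1_000_000
dt = t / n
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

edges = np.linspace(-4.0, 4.0, 801)             # bin width 0.01
counts, _ = np.histogram(X[:-1], bins=edges)
width = edges[1] - edges[0]
L = counts * dt / width                          # occupation density L(t, y)
centers = 0.5 * (edges[:-1] + edges[1:])

f = lambda y: np.exp(-y ** 2)                    # bounded measurable test f
lhs = np.sum(f(X[:-1])) * dt                     # time-integral side
rhs = np.sum(f(centers) * L) * width             # density side
print(abs(lhs - rhs))                            # small: the formula holds
```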
The aim of this section is to study local times of the driftless stochastic heat equation
with Neumann boundary conditions. We assume \(u_0 = 0\) for simplicity. The solution is given by
where G is the fundamental solution of the heat equation.
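As a numerical illustration (the explicit Euler scheme, grid sizes and sample count below are our own choices, not taken from the paper, and we again assume the operator is \(\frac{\partial ^2}{\partial x^2}\)), the driftless equation can be simulated by finite differences, with the cell noise scaled as \(\mathcal {N}(0, dt/dx)\); the marginal u(t, x) is then centered Gaussian, and at \(t=0.1\), \(x=1/2\) its variance is close to the eigenfunction-series value \(\approx 0.14\):

```python
import numpy as np

# Explicit finite-difference scheme for the driftless stochastic heat
# equation du = u_xx dt + dW on [0,1] with Neumann boundary and u(0,.) = 0.
# Space-time white noise enters each grid cell as N(0, dt/dx).
rng = np.random.default_rng(2)
N, T, M = 40, 0.1, 1500                  # space cells, final time, sample paths
dx = 1.0 / N
dt = 0.25 * dx ** 2                      # comfortably inside the CFL limit
u = np.zeros((M, N + 1))
for _ in range(int(round(T / dt))):
    lap = np.empty_like(u)
    lap[:, 1:-1] = u[:, 2:] - 2.0 * u[:, 1:-1] + u[:, :-2]
    lap[:, 0] = 2.0 * (u[:, 1] - u[:, 0])        # reflected ghost node at x=0
    lap[:, -1] = 2.0 * (u[:, -2] - u[:, -1])     # reflected ghost node at x=1
    u += dt / dx ** 2 * lap + rng.normal(0.0, np.sqrt(dt / dx), size=u.shape)

mid = u[:, N // 2]                       # samples of u(T, 1/2)
print(mid.mean(), mid.var())             # mean ~ 0; variance ~ 0.14
```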
Fix \( x \in [0,1] \) and let \(\omega \in C([0,T]; [-x,1-x])\). We are interested in the stochastic process
Notice that we are not expanding the dynamics in t of the composition of u and \(\omega \). Indeed, \(x \mapsto u(t,x)\) is P-a.s. not differentiable, so it is not clear how such dynamics would evolve. Worse still, there is no Itô formula for this process.
Nevertheless, \(X_t\) is a Gaussian process and we have for \(t> t'\)
from (4).
Theorem 4.1
Suppose \(N_t\) is a Gaussian process such that
Then, there exists a local time \(L^N(t,\cdot )\) of N which admits the following representation
Moreover, \(L^N(t,\cdot )\) is \(\lfloor p \rfloor \) times differentiable, where \(\lfloor p \rfloor \) denotes the integer part of p.
For a proof, see e.g. [4], Theorem 28.1 or [10], Lemma 8.1.
With \(X_t\) as before, we see that the local time of \(X_t\) is in \(C^1\). Moreover, \(X_t\) satisfies the following strong local non-determinism:
Lemma 4.2
For all \(t_1< \cdots< t_n < t \in [0,1]\) we have
Proof
The conditional variance of \(X_t\) given \(X_{t_1}, \ldots , X_{t_n}\) is the square of the distance, in the Hilbert space \(L^2(\Omega )\), between \(X_t\) and the subspace \(span\{X_{t_1}, \ldots , X_{t_n}\}\). We have that for \(\alpha _1, \ldots , \alpha _n \in \mathbb {R}\)
We get that
We have the following estimates on the local time
Lemma 4.3
There exists a constant C such that for all even integers m,
Proof
We note that it is sufficient to prove that
for all real numbers h.
Since we assume m is even, we have
Above we have used the change of variables \(u_m = v_m\) and \(u_j = v_j - v_{j+1}\), that is \(u = Mv\) where
For notational convenience we have used \(X_{s_0} = y\) and \(v_{m+1} = 0\).
Taking the expectation we get
where we have used the local non-determinism in the second-to-last inequality, and \(Var(X_{s_j} - X_{s_{j-1}}) \ge c \sqrt{s_j - s_{j-1}}\) in the last.
We write
where \(X \sim \mathcal {N}(0, c^{-1}\Sigma )\), and we have defined \((\Sigma )_{j,k} = \delta _{j,k} ( s_j - s_{j-1})^{-1/2}\). Let \(Y = MX\), so that \(Y \sim \mathcal {N}(0,c^{-1}M \Sigma M^T)\) and it follows from [6] that
Above, per(A) denotes the permanent of the matrix A. Consequently
Using Hölder’s inequality we get
One can check that there exists a constant \(C_1>0\), such that
when \(p < 4\). We can find a constant \(C_2 > 0\) such that
when \(q < 2\). The proof is technical and is postponed to the Appendix.
This gives
for an appropriate constant C, and we choose \(p=3\) and \(q = 3/2\) to get the result. \(\square \)
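For reference, the permanent entering the proof above, \(\mathrm{per}(A) = \sum _{\sigma } \prod _i a_{i,\sigma (i)}\) (the Leibniz expansion of the determinant with all signs set to \(+1\)), can be brute-forced for small matrices; a throwaway \(O(n \cdot n!)\) implementation:

```python
from itertools import permutations
from math import prod

# per(A) = sum over permutations sigma of prod_i A[i][sigma(i)] --
# the determinant's Leibniz expansion without the alternating signs.
def per(A):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

print(per([[1, 2], [3, 4]]))             # 1*4 + 2*3 = 10
print(per([[1] * 4 for _ in range(4)]))  # all-ones 4x4 matrix: 4! = 24
```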
We are ready to conclude this section with its most central result:
Proposition 4.4
There exists a constant \(C > 0\) such that for all integers m
Proof
We begin by noting that for any \(p \ge 1\) we have \( E[ |X_t^*|^p] < \infty \), where we have defined \(X_t^* := \sup _{0 \le s \le t} |X_s|\). To see this, note that we may regard u(t, x) as a \(C([0,T] \times [0,1])\)-valued Gaussian random variable. From [2] we get that \(E[\Vert u\Vert _{\infty }^p] < \infty \) for all \(p \ge 1\), so that
We may write
For the first term we can estimate
For the second term, we note from (6) that the support of \(L^X(t,\cdot )\) is included in the interval \([ - X_t^*, X_t^* ]\). This gives
Above we have denoted \(B = \{ y \in \mathbb {R}^m | \, |y_j| \ge 1 \, \forall j \, \}\). We use the estimate
where we have used Chebyshev's inequality in the last step. This gives
The result follows from Lemma 4.3. \(\square \)
5 Derivative free estimates
In this section we assume that \(b \in C^1_c (\mathbb {R})\) and denote by u the solution to (1).
Since b is continuously differentiable it is well known that u(t, x) is Malliavin differentiable, and we have
Let now \(h \in C^2([0,T] \times [0,1])\). Then the random field
satisfies the following linear equation
or, equivalently
with initial condition \(v(0,x) = 0\) and Neumann boundary conditions.
If we let \(\mu _x\) denote the measure on \((C([0,T]), \mathcal {B}(C([0,T])))\) such that \(\omega \mapsto \omega (s)\) is a doubly reflected (in 0 and 1) Brownian motion starting in x, then we get from the Feynman–Kac formula that the above equation is uniquely solved by
Lemma 5.1
There exists an increasing continuous function \(C: [0,\infty ) \rightarrow [0, \infty )\) such that
Proof
Define the measure \(\tilde{P}\) by
Then \(\tilde{P}\) is a probability measure and under \(\tilde{P}\),
is space–time white noise. Under this measure we have that u is Gaussian, and more precisely
From (8) we double the variables to get
Now we write
Denote by L(r, y) the local time of the process \((u(t-s, \omega (s)))_{s \in [0,r]}\). From the occupation time density formula and integration by parts:
From Proposition 4.4 we have
which converges by Stirling’s formula.
It is easy to see that we can bound \(\tilde{E}[Z^{-2}]\) by a function only depending on \(\Vert b\Vert _{\infty }\).
Combining the above we get
for an appropriate function C, and the result follows. \(\square \)
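The Girsanov step at the beginning of the proof has a transparent one-dimensional analogue: for \(X_t = B_t + \mu t\), the density \(Z = \exp (-\mu B_T - \mu ^2 T/2)\) turns X into a standard Brownian motion. A Monte Carlo sanity check at the terminal time (the values of \(\mu \) and T are arbitrary):

```python
import numpy as np

# 1-d Girsanov check: under dP~ = Z dP with Z = exp(-mu*B_T - mu^2*T/2),
# the drifted terminal value X_T = B_T + mu*T is distributed as N(0, T).
rng = np.random.default_rng(3)
mu, T, M = 1.0, 1.0, 2_000_000
B_T = rng.normal(0.0, np.sqrt(T), size=M)
Z = np.exp(-mu * B_T - 0.5 * mu ** 2 * T)
X_T = B_T + mu * T

print(np.mean(Z))               # ~= 1: Z is a probability density
print(np.mean(Z * X_T ** 2))    # ~= T = 1: second moment of N(0,T) under P~
```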
In the above we assumed that \(h \in C^2([0,T] \times [0,1])\). We may extend this to \(h \in L^2([0,T] \times [0,1])\), which is exactly Theorem 1.1.
Proof of Theorem 1.1
We know that the random variable \(\omega \mapsto \omega (r)\) has density \(G(r,x,\cdot )\) under \(\mu _x\). From Lemma 5.1 we see that for \(h \in C^2([0,T] \times [0,1])\), by Hölder’s inequality
Consequently we may extend the linear operator
by continuity. The result follows. \(\square \)
6 Directional derivatives when the drift is discontinuous
In [5] the authors successfully generalize the famous results by Zvonkin [11] and Veretennikov [9] to infinite dimension, i.e. they show that (1) has a unique strong solution when b is bounded and measurable. In fact, they show that this holds true even when the drift is of linear growth.
Let us briefly explain the idea of the proof; let b be bounded and measurable and define for \(n \in \mathbb {N}\)
where \(\rho \) is a non-negative smooth function with compact support in \(\mathbb {R}\) such that \(\int _{\mathbb {R}} \rho (y)dy = 1\).
We let
and
so that \(\tilde{b}_{n,k}\) is Lipschitz. Denote by \(\tilde{u}_{n,k}(t,x)\) the unique solution to (1) when we replace b by \(\tilde{b}_{n,k}\). Then one can use comparison to show that
where \(u_n(t,x)\) solves (1) when we replace b by \(B_n\). Furthermore,
where u(t, x) is a solution to (1). For details see [5].
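The mollification step above can be made concrete. Take \(\rho \) to be the standard bump function and, as a purely illustrative choice of discontinuous drift (not the drift of [5]), \(b = \mathrm{sign}\); computing \(b_n = b * \rho _n\) by quadrature shows the smooth approximations converging to b away from the discontinuity while staying bounded by \(\Vert b\Vert _{\infty }\):

```python
import numpy as np

# Standard compactly supported mollifier on (-1, 1), normalised below.
def rho(y):
    out = np.zeros_like(y)
    m = np.abs(y) < 1.0
    out[m] = np.exp(-1.0 / (1.0 - y[m] ** 2))
    return out

_grid = np.linspace(-1.0, 1.0, 4001)
_Z = np.sum(rho(_grid)) * (_grid[1] - _grid[0])   # unit-mass normalisation

def b_n(x, n, b=np.sign):
    # (b * rho_n)(x) with rho_n(y) = n * rho(n * y), by quadrature;
    # b = sign is an illustrative bounded, discontinuous drift.
    y = np.linspace(-1.0 / n, 1.0 / n, 2001)
    dy = y[1] - y[0]
    return np.sum(b(x - y) * n * rho(n * y) / _Z) * dy

for n in (1, 10, 100):
    print(n, b_n(0.25, n))    # -> sign(0.25) = 1 once 1/n < 0.25
```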
We are ready to prove our main theorem:
Proof of Theorem 1.2
From the discussion above we know that we have \(u_n(t,x) \rightarrow u(t,x)\) in \(L^2(\Omega )\). From Lemma 5.1 we see that
for any \(h \in L^2([0,T] \times [0,1])\). It follows from Lemma 2.2 that \(u(t,x) \in \mathbb {D}^{h,2}\). \(\square \)
As an application we can prove the following.
Corollary 6.1
For all (t, x) there exists a density of the solution u(t, x). Moreover its support is connected.
Proof of Corollary 6.1
Existence of a density follows easily without using Malliavin calculus. Let \(A \subset \mathbb {R}\) have zero Lebesgue measure. Then
and the first factor above is zero since u(t, x) has a density under \(\tilde{P}\)—it is Gaussian. Consequently \(P \circ (u(t,x))^{-1}\) is absolutely continuous w.r.t. Lebesgue measure, and hence there exists a density.
From Proposition 2.3 we can conclude that the support of the density is connected. \(\square \)
References
Da Prato, G., Flandoli, F., Priola, E., Röckner, M.: Strong uniqueness for stochastic evolution equations in Hilbert spaces with unbounded measurable drift. Ann. Probab. 41(5), 3306–3344 (2013)
Fernique, X.: Régularité des trajectoires des fonctions aléatoires gaussiennes. École d'Été de Probabilités de Saint-Flour IV-1974. Lecture Notes in Mathematics, vol. 480, pp. 1–96. Springer, Berlin (1975)
Flandoli, F., Nilssen, T., Proske, F.: Malliavin differentiability and strong solutions for a class of SDE in Hilbert spaces. University of Oslo series. https://www.duo.uio.no/handle/10852/38087
Geman, D., Horowitz, J.: Occupation densities. Ann. Probab. 8(1), 1–67 (1980)
Gyöngy, I., Pardoux, E.: On quasi-linear stochastic partial differential equations. Probab. Theory Relat. Fields 94, 413–425 (1993)
Li, W., Wei, A.: A Gaussian inequality for expected absolute products. J. Theor. Probab. 25(1), 92–99 (2012)
Menoukeu-Pamen, O., Meyer-Brandis, T., Nilssen, T., Proske, F., Zhang, T.: A variational approach to the construction and Malliavin differentiability of strong solutions of SDE’s. Math. Ann. 357(2), 761–799 (2013)
Nualart, D.: The Malliavin Calculus and Related Topics. Springer, Berlin (1995)
Veretennikov, A.Y.: On the strong solutions of stochastic differential equations. Theory Probab. Appl. 24, 354–366 (1979)
Xiao, Y.: Sample Path Properties of Anisotropic Gaussian Random Fields. A Minicourse on Stochastic Partial Differential Equations. Lecture Notes in Mathematics, vol. 1962, pp. 145–212. Springer, Berlin (2009)
Zvonkin, A.K.: A transformation of the state space of a diffusion process that removes the drift. Math. USSR (Sbornik) 22, 129–149 (1974)
Acknowledgments
The author would like to thank the editors for suggesting improvement of the paper, as well as an anonymous referee for careful reading of the paper and helpful corrections. In addition, the author would like to thank Frank Proske for fruitful discussions and proofreading. Funded by Norwegian Research Council (Project 230448/F20).
Appendix
Consider the matrices \(\Sigma \) and M from Sect. 4. The purpose of this section is to show that the function \(f_m(s_1, \ldots , s_m) := \mathrm{per}( M \Sigma M^T)\) is such that for \(\beta \in (0,1)\) we have
for some constant \(C = C(t,\beta )\).
We start by noting that
where
and \(b_j = -(s_{j} - s_{j-1})^{-1/2}\) for \(j = m, \ldots , 2\) and \(b_{1} = -s_1^{-1/2}\).
Using the definition of the permanent of a matrix we see that we have the following recursive relation
with
We write \(f_m(s_1,\ldots ,s_m) = p_m(s_1^{-1/2}, (s_2 - s_{1})^{-1/2}, \ldots ,(s_m - s_{m-1})^{-1/2})\) where \(p_m\) is the polynomial recursively defined by
with
If we denote by \(deg_{x_i}p_m\) the degree of the polynomial in the variable \(x_i\), for \(i=1, \ldots , m\), we see from the recursive relation that
Moreover, if we denote by \(\gamma _m\) the number of terms in this polynomial, it is clear from the recursive relation that
and
Hence \(\gamma _m \le C^m\) for C large enough.
It follows that we may write
where the sum is taken over all multiindices \(\alpha \in \mathbb {N}^m\) with \(\alpha _i \le 2\) and \(\alpha _1 \le 1\). Here we have denoted \(x^{\alpha } = x_1^{\alpha _1} \ldots x_m^{\alpha _m}\). Moreover, there are at most \(C^m\) terms in this sum with C as above and one can show that \(|c_{\alpha }| \le 3^m\) for all \(\alpha \).
Consequently
Since \(\frac{\beta \alpha _i}{2} < 1\) for all \(i = 1, \ldots , m\), each of the above terms are integrable over \( 0 < s_1 < \cdots < s_m < t\), and there are at most \(C^m\) such terms. The result follows.
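The integrability over the simplex can in fact be made explicit: for exponents \(a_j < 1\) one has the Dirichlet-type formula (a standard iterated Beta-function computation; here \(s_0 = 0\))

$$\begin{aligned} \int _{0<s_1< \cdots< s_m < t} \prod _{j=1}^m (s_j - s_{j-1})^{-a_j} \, ds_1 \cdots ds_m = t^{\,m - \sum _{j=1}^m a_j} \, \frac{\prod _{j=1}^m \Gamma (1-a_j)}{\Gamma \left( m+1- \sum _{j=1}^m a_j\right) }. \end{aligned}$$

For \(m=1\) this reduces to \(\int _0^t s^{-a} ds = \frac{t^{1-a}}{1-a}\), since \(\Gamma (2-a) = (1-a)\Gamma (1-a)\); applied with \(a_j = \frac{\beta \alpha _j}{2} < 1\), it bounds each term in the sum above.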
Nilssen, T. Quasi-linear stochastic partial differential equations with irregular coefficients: Malliavin regularity of the solutions. Stoch PDE: Anal Comp 3, 339–359 (2015). https://doi.org/10.1007/s40072-015-0053-y