Abstract
In this chapter, we collect several results which are used in the book, but whose presentation we have preferred to postpone until now. A first section presents notations and elementary results on matrices. The second section presents some elements of nonlinear and convex analysis.
6.1 Introduction
In this chapter, we collect several results which are used in the book, but whose presentation we have preferred to postpone until now. A first section presents notations and elementary results on matrices. The second section presents some elements of nonlinear and convex analysis. It is mainly used in Chap. 4. The third section presents Gronwall’s inequality, both in the forward and in the backward time direction, together with various original extensions of this inequality to stochastic processes. The most important stochastic inequalities are given in Propositions 6.71, 6.74, 6.80. Section four presents the notion of viscosity solutions of nonlinear PDEs, and establishes three different uniqueness results for viscosity solutions of PDEs which appear in previous chapters of this book. These are variants of more or less known results scattered in the literature. We could not possibly cover all types of elliptic and parabolic equations (and systems of equations) with various types of boundary conditions. But we believe that the reader can adapt our arguments to all situations considered in Chaps. 3–5 of the book.
Finally a last section is devoted to giving hints for the solutions to some of the exercises from the book.
6.2 Annex A: Vectors and Matrices
Denote by \(\mathbb{R}^{d\times k}\) the linear space of matrices \(A\,=\,\left(a_{i,j}\right)_{d\times k}\), where \(a_{i,j}\,\in \,\mathbb{R}\). If k = 1 then \(\mathbb{R}^{d\times 1}\) is the Euclidean space \(\mathbb{R}^{d}\). Denote by \(A^{{\ast}} = \left(a_{j,i}\right)_{k\times d}\) the transpose of A.
Let \(x = \left(x_{i}\right)_{i=\overline{1,d}} \in \mathbb{R}^{d}\) and \(y = \left(y_{i}\right)_{i=\overline{1,d}} \in \mathbb{R}^{d}\). The usual inner product on \(\mathbb{R}^{d}\) is given by
$$\displaystyle{\left\langle x,y\right\rangle =\sum \limits _{i=1}^{d}x_{i}y_{i}}$$
and the norm
$$\displaystyle{\left\vert x\right\vert = \left\langle x,x\right\rangle ^{1/2} = \Big(\sum \limits _{i=1}^{d}\left\vert x_{i}\right\vert ^{2}\Big)^{1/2}.}$$
We also introduce the notation \(x^{+}:= \left(x_{i}^{+}\right)_{d\times 1}\).
The tensor product of the two vectors x and y is the bilinear form \(x \otimes y: \mathbb{R}^{d} \times \mathbb{R}^{d} \rightarrow \mathbb{R}\) defined by
$$\displaystyle{\left(x \otimes y\right)\left(u,v\right) = \left\langle x,u\right\rangle \left\langle y,v\right\rangle,\quad u,v \in \mathbb{R}^{d}.}$$
Hence one can identify \(x \otimes y\) with the matrix
$$\displaystyle{xy^{{\ast}} = \left(x_{i}y_{j}\right)_{d\times d}.}$$
If \(A = \left(a_{i,j}\right)_{d\times d}\) and \(\left\{\mathbf{u}_{1},\ldots,\mathbf{u}_{d}\right\}\) is an orthonormal basis of \(\mathbb{R}^{d}\), that is
$$\displaystyle{\left\langle \mathbf{u}_{i},\mathbf{u}_{j}\right\rangle =\delta _{i,j} = \left\{\begin{array}{@{}l@{\quad }l@{}} 1,\quad &\text{ if }i = j, \\ 0,\quad &\text{ if }i\neq j, \end{array} \right.}$$
we define
$$\displaystyle{\mathbf{Tr\ }A =\sum \limits _{i=1}^{d}\left\langle A\mathbf{u}_{i},\mathbf{u}_{i}\right\rangle.}$$
The “Trace” is independent of the basis \(\left\{\mathbf{u}_{1},\ldots,\mathbf{u}_{d}\right\}\) and
$$\displaystyle{\mathbf{Tr\ }A =\sum \limits _{i=1}^{d}a_{i,i}.}$$
Moreover if \(A,B \in \mathbb{R}^{d\times d}\) then one verifies that
$$\displaystyle{\mathbf{Tr}\left(\mathit{AB}\right) = \mathbf{Tr}\left(\mathit{BA}\right)\quad \text{ and}\quad \mathbf{Tr}\left(A^{{\ast}}\right) = \mathbf{Tr\ }A.}$$
Let \(A = \left(a_{i,j}\right)_{d\times k} \in \mathbb{R}^{d\times k}\), \(B = \left(b_{i,j}\right)_{d\times k} \in \mathbb{R}^{d\times k}\). We define the inner product on \(\mathbb{R}^{d\times k}\) by
$$\displaystyle{\left\langle A,B\right\rangle = \mathbf{Tr}\left(\mathit{AB}^{{\ast}}\right) =\sum \limits _{i=1}^{d}\sum \limits _{j=1}^{k}a_{i,j}b_{i,j}}$$
and the norm
$$\displaystyle{\left\vert A\right\vert = \left\langle A,A\right\rangle ^{1/2} = \left(\mathbf{Tr}\left(\mathit{AA}^{{\ast}}\right)\right)^{1/2} = \Big(\sum \limits _{i,j}\left\vert a_{i,j}\right\vert ^{2}\Big)^{1/2}.}$$
We have
$$\displaystyle{\left\vert \left\langle A,B\right\rangle \right\vert \leq \left\vert A\right\vert \left\vert B\right\vert \quad \text{ and}\quad \left\vert \mathit{Ax}\right\vert \leq \left\vert A\right\vert \left\vert x\right\vert,\;\forall \ x \in \mathbb{R}^{k}.}$$
We note that the above matrix norm is not the operator norm
$$\displaystyle{\left\Vert A\right\Vert =\sup \left\{\left\vert \mathit{Ax}\right\vert: \left\vert x\right\vert \leq 1\right\},}$$
since \(\left\Vert I_{d}\right\Vert = 1\neq \sqrt{d} = \left\vert I_{d}\right\vert\). Note that
$$\displaystyle{\left\Vert A\right\Vert \leq \left\vert A\right\vert \leq \sqrt{d}\left\Vert A\right\Vert.}$$
We denote by \(\mathbb{S}^{d\times d} \subset \mathbb{R}^{d\times d}\) the set of symmetric matrices. If \(Q,P \in \mathbb{S}^{d\times d}\), we say that Q ≤ P if \(\left\langle Qx,x\right\rangle \leq \left\langle Px,x\right\rangle\), for all \(x \in \mathbb{R}^{d}\); Q is semipositive definite if Q ≥ 0.
\(Q \in \mathbb{S}^{d\times d}\) is semipositive definite if and only if there exists an orthonormal basis \(\left\{\mathbf{v}_{1},\ldots,\mathbf{v}_{d}\right\}\) of \(\mathbb{R}^{d}\) and \(\left\{\lambda _{1},\ldots,\lambda _{d}\right\} \subset [0,\infty [\), such that
$$\displaystyle{Q\mathbf{v}_{i} =\lambda _{i}\mathbf{v}_{i},\quad i = \overline{1,d},\quad \text{ that is}\quad Q =\sum \limits _{i=1}^{d}\lambda _{i}\,\mathbf{v}_{i} \otimes \mathbf{v}_{i}.}$$
Then \(\mathbf{Tr\ }Q =\sum \limits _{ i=1}^{d}\lambda _{i}\) and for all \(A \in \mathbb{R}^{d\times d}\) we have
$$\displaystyle{\mathbf{Tr}\left(\mathit{AQ}\right) =\sum \limits _{i=1}^{d}\lambda _{i}\left\langle A\mathbf{v}_{i},\mathbf{v}_{i}\right\rangle.}$$
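The identities of this Annex are easy to confirm numerically. The following sketch (assuming NumPy is available; the random matrices are illustrative) checks \(\mathbf{Tr}(\mathit{AB}) = \mathbf{Tr}(\mathit{BA})\), compares the matrix norm \(\left\vert \cdot \right\vert\) with the operator norm \(\left\Vert \cdot \right\Vert\), and verifies the spectral decomposition of a semipositive definite matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d))
B = rng.standard_normal((d, d))

# Tr(AB) = Tr(BA) and Tr(A*) = Tr(A)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.trace(A.T), np.trace(A))

# |A| = (Tr A A*)^(1/2) is the Frobenius norm; ||A|| is the operator norm
frob = np.sqrt(np.trace(A @ A.T))
oper = np.linalg.norm(A, 2)                  # largest singular value
assert np.isclose(frob, np.linalg.norm(A, "fro"))
assert oper <= frob + 1e-12 and frob <= np.sqrt(d) * oper + 1e-12

I = np.eye(d)
assert np.isclose(np.linalg.norm(I, 2), 1.0)             # ||I_d|| = 1
assert np.isclose(np.linalg.norm(I, "fro"), np.sqrt(d))  # |I_d| = sqrt(d)

# Q = A A* is semipositive definite: orthonormal eigenbasis, eigenvalues >= 0
Q = A @ A.T
lam, V = np.linalg.eigh(Q)                   # columns of V: orthonormal basis
assert np.all(lam >= -1e-10)
assert np.isclose(np.trace(Q), lam.sum())    # Tr Q = sum of the eigenvalues
# Q recovered from the tensor products v_i (x) v_i
assert np.allclose(Q, sum(l * np.outer(v, v) for l, v in zip(lam, V.T)))
```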
6.3 Annex B: Elements of Nonlinear Analysis
6.3.1 Notations
As references for this Annex, see e.g. [2] or [12]. Throughout this Annex \(\mathbb{H}\) is a real separable Hilbert space with norm \(\left\vert \cdot \right\vert\) and scalar product \(\left\langle \cdot,\cdot \right\rangle\).
Let \(\left(\mathbb{X},\left\Vert \cdot \right\Vert _{\mathbb{X}}\right)\) be a real Banach space with dual \(\left(\mathbb{X}^{{\ast}},\left\Vert \cdot \right\Vert _{\mathbb{X}^{{\ast}}}\right)\). The duality pairing \(\left(\mathbb{X}^{{\ast}}, \mathbb{X}\right)\) is also denoted \(\left\langle \cdot,\cdot \right\rangle\); hence if \(x \in \mathbb{X}\) and \(\hat{x} \in \mathbb{X}^{{\ast}}\), then by \(\left\langle \hat{x},x\right\rangle\) and \(\left\langle x,\hat{x}\right\rangle\) we understand the value, \(\hat{x}\left(x\right)\), of \(\hat{x}\) at x.
Given \(x \in \mathbb{X}\), \(\hat{x} \in \mathbb{X}^{{\ast}}\) and the sequences \(x_{n} \in \mathbb{X}\), \(\hat{x}_{n} \in \mathbb{X}^{{\ast}}\) we say that, as n → ∞:
$$\displaystyle{\begin{array}{ll} x_{n} \rightarrow x, &\text{ if }\left\Vert x_{n} - x\right\Vert _{\mathbb{X}} \rightarrow 0; \\ x_{n}\mathop{ \rightarrow }\limits^{ w}x, &\text{ if }\left\langle \hat{y},x_{n}\right\rangle \rightarrow \left\langle \hat{y},x\right\rangle \text{ for all }\hat{y} \in \mathbb{X}^{{\ast}}; \\ \hat{x}_{n}\mathop{ \rightarrow }\limits^{ w^{{\ast}} }\hat{x},&\text{ if }\left\langle \hat{x}_{n},y\right\rangle \rightarrow \left\langle \hat{x},y\right\rangle \text{ for all }y \in \mathbb{X}. \end{array} }$$
6.3.2 Maximal Monotone Operators
Let \(\mathbb{X}\) and \(\mathbb{Y}\) be Banach spaces. A multivalued operator \(A: \mathbb{X} \rightrightarrows \mathbb{Y}\) (a point-to-set operator \(A: \mathbb{X} \rightarrow 2^{\mathbb{Y}}\)) will also be regarded as a subset of \(\mathbb{X} \times \mathbb{Y}\) setting, for \(A \subset \mathbb{X} \times \mathbb{Y}\),
$$\displaystyle{A\left(x\right) = \mathit{Ax} = \left\{y \in \mathbb{Y}: \left(x,y\right) \in A\right\}.}$$
Define
$$\displaystyle{D\left(A\right) = \mathrm{Dom}\left(A\right) = \left\{x \in \mathbb{X}: \mathit{Ax}\neq \varnothing \right\}\quad \text{ and}\quad R\left(A\right) =\bigcup \limits _{x\in D\left(A\right)}\mathit{Ax},}$$
and define \(A^{-1}: \mathbb{Y} \rightrightarrows \mathbb{X}\) to be the point-to-set operator defined by \(x \in A^{-1}\left(y\right)\) if \(y \in A\left(x\right)\).
We give some definitions:
-
\(A: \mathbb{X} \rightrightarrows \mathbb{X}^{{\ast}}\) is monotone if
$$\displaystyle{\left\langle y_{1} - y_{2},x_{1} - x_{2}\right\rangle \geq 0,\text{ for all }\,\left(x_{1},y_{1}\right) \in A,\,\left(x_{2},y_{2}\right) \in A.}$$ -
\(A: \mathbb{X} \rightrightarrows \mathbb{X}^{{\ast}}\) is a maximal monotone operator if A is a monotone operator and it is maximal in the set of monotone operators: that is,
$$\displaystyle{\left\langle v - y,u - x\right\rangle \geq 0\;\;\;\forall \,\left(x,y\right) \in A,\;\;\Longrightarrow\;\left(u,v\right) \in A.}$$ -
\(\mathbf{J}_{X}: \mathbb{X} \rightrightarrows \mathbb{X}^{{\ast}}\) defined by
$$\displaystyle\begin{array}{rcl} \mathbf{J}_{X}\left(x\right)& =& \left\{\hat{x}: \left\Vert \hat{x}\right\Vert _{{\ast}}^{2} = \left\Vert x\right\Vert ^{2} = \left\langle \hat{x},x\right\rangle \right\} {}\\ & =& \left\{\hat{x}: \left\langle \hat{x},y - x\right\rangle + \frac{1} {2}\left\Vert x\right\Vert ^{2} \leq \frac{1} {2}\left\Vert y\right\Vert ^{2},\;\;\forall \ y \in \mathbb{X}\right\} {}\\ \end{array}$$is called the duality mapping; if \(\mathbb{X} = \mathbb{H}\) is a Hilbert space then \(\mathbf{J}_{X}\left(x\right) = \mathbf{I}_{\mathbb{H}}\left(x\right) = x\) for all \(x \in \mathbb{H}\).
-
\(A: \mathbb{X} \rightrightarrows \mathbb{Y}\) is locally bounded at \(x_{0} \in D(A)\) if there exists a neighborhood V of \(x_{0}\) such that \(A\left(V \right) = \left\{y \in \mathbb{Y}: \exists \ x \in D(A) \cap V,\text{ s.t. }y \in \mathit{Ax}\right\}\) is bounded in \(\mathbb{Y}\).
We have:
Proposition 6.1 (Rockafellar).
Let \(\mathbb{X}\) be a reflexive Banach space. Then \(A: \mathbb{X} \rightrightarrows \mathbb{X}^{{\ast}}\) is a maximal monotone operator if and only if A is a monotone operator and
$$\displaystyle{R\left(A + \mathbf{J}_{X}\right) = \mathbb{X}^{{\ast}}.}$$
Proposition 6.2.
Let \(A: \mathbb{H} \rightrightarrows \mathbb{H}\) be a maximal monotone operator. Then:
-
(a)
A is a closed subset of \(\mathbb{H} \times \mathbb{H};\) moreover if \(\left(x_{n},y_{n}\right) \in A\) and
$$\displaystyle{\begin{array}{l} x_{n} \rightarrow x\;\ \text{ (strongly) in }\mathbb{H}\text{ and}\;y_{n}\;\mathop{ \rightarrow }\limits^{ w}y\quad \;\text{ (weakly) in }\mathbb{H},\quad \text{ or} \\ x_{n}\mathop{ \rightarrow }\limits^{ w}x,\quad \text{ and}\quad \quad y_{n} \rightarrow y,\;\;\text{ or} \\ x_{n}\mathop{ \rightarrow }\limits^{ w}x,\quad \;y_{n}\mathop{ \rightarrow }\limits^{ w}y,\quad \text{ and }\overline{\lim }_{n}\left\langle x_{n},y_{n}\right\rangle \leq \left\langle x,y\right\rangle,\end{array} }$$then \(\left(x,y\right) \in A\) ;
-
(b)
\(\overline{D\left(A\right)}\) and \(\overline{R\left(A\right)}\) are convex subsets of \(\mathbb{H}\) ;
-
(c)
Ax is a convex closed subset of \(\mathbb{H}\) for all \(x \in D\left(A\right)\) ;
-
(d)
A is locally bounded on \(\mathrm{int}\left(D\left(A\right)\right)\), that is: for every \(u_{0} \in \mathrm{ int}\left(\mathrm{Dom}\left(A\right)\right)\) there exists an \(r_{0} > 0\) such that
$$\displaystyle{\bar{B}\left(u_{0},r_{0}\right)\mathop{ =}\limits^{ \mathit{def }}\left\{u_{0} + r_{0}v: \left\vert v\right\vert \leq 1\right\} \subset \mathrm{ Dom}\left(A\right)}$$and
$$\displaystyle{A_{u_{0},r_{0}}^{\#}\mathop{ =}\limits^{ \mathit{def }}\sup \left\{\left\vert \hat{u}\right\vert:\hat{ u} \in A\left(u_{ 0} + r_{0}v\right),\;\left\vert v\right\vert \leq 1\right\} < \infty.}$$
Proposition 6.3.
-
1.
If \(A: \mathbb{H} \rightarrow \mathbb{H}\) is a single-valued monotone hemicontinuous operator then A is maximal monotone ( \(A: \mathbb{H} \rightarrow \mathbb{H}\) is hemicontinuous if the function \(t\longrightarrow \left\langle A\left(x + tz\right),y\right\rangle: \mathbb{R} \rightarrow \mathbb{R}\) is continuous for all \(x,y,z \in \mathbb{H}\) ).
-
2.
If \(A,B \subset \mathbb{H} \times \mathbb{H}\) are maximal monotone sets and \(\mathrm{int}\left(D\left(A\right)\right) \cap D\left(B\right)\neq \varnothing \) , then \(A + B\mathop{ =}\limits^{ \mathit{def }}\left\{\left(x,y + z\right): \left(x,y\right) \in A,\ \left(x,z\right) \in B\right\}\) is maximal monotone in \(\mathbb{H} \times \mathbb{H}\) .
Let \(A \subset \mathbb{H} \times \mathbb{H}\) be a maximal monotone operator. Then for each \(\varepsilon > 0\) the operators
$$\displaystyle{J_{\varepsilon }x = \left(I +\varepsilon A\right)^{-1}x\quad \text{ and}\quad A_{\varepsilon }x = \frac{1} {\varepsilon } \left(x - J_{\varepsilon }x\right)}$$
from \(\mathbb{H}\) to \(\mathbb{H}\) are single-valued. The operator \(A_{\varepsilon }\) is called Yosida’s approximation of the operator A. In [2, 12] we can find the proof of the following properties:
Proposition 6.4.
Let \(A: \mathbb{H} \rightrightarrows \mathbb{H}\) be a maximal monotone operator. Then:
-
(j)
For all \(\varepsilon,\delta > 0\) and for all \(\;x,y \in \mathbb{H}\)
$$\displaystyle{\begin{array}{r@{\quad }l} i)\quad &\quad \left(J_{\varepsilon }x,A_{\varepsilon }x\right) \in A, \\ \mathit{ii})\quad &\quad \left\vert J_{\varepsilon }x - J_{\varepsilon }y\right\vert \leq \left\vert x - y\right\vert, \\ \mathit{iii})\quad &\quad \left\vert A_{\varepsilon }x - A_{\varepsilon }y\right\vert \leq \dfrac{1} {\varepsilon } \left\vert x - y\right\vert, \\ \mathit{iv})\quad &\quad \left\vert J_{\varepsilon }x - J_{\delta }x\right\vert \leq \left\vert \varepsilon -\delta \right\vert \left\vert A_{\delta }x\right\vert, \\ \mathit{v})\quad &\quad \left\vert J_{\varepsilon }x\right\vert \leq \left\vert x\right\vert + (1 + \left\vert \varepsilon -1\right\vert )\left\vert J_{1}0\right\vert, \\ \mathit{vi})\quad &\quad A_{\varepsilon }: \mathbb{H} \rightarrow \mathbb{H}\quad \text{ is a maximal monotone operator.}\end{array} }$$ -
(jj)
If \(\varepsilon _{n} \rightarrow 0\), \(x_{n}\mathop{ \rightarrow }\limits^{ w}x\), \(A_{\varepsilon _{n}}x_{n}\mathop{ \rightarrow }\limits^{ w}y\) and
$$\displaystyle{\limsup \limits _{n,m\rightarrow \infty }\left\langle x_{n} - x_{m},A_{\varepsilon _{n}}x_{n} - A_{\varepsilon _{m}}x_{m}\right\rangle \leq 0,}$$then \(\left(x,y\right) \in A\) and \(\lim \limits _{n,m\rightarrow \infty }\left\langle x_{n} - x_{m},A_{\varepsilon _{n}}x_{n} - A_{\varepsilon _{m}}x_{m}\right\rangle = 0\) .
-
(jjj)
\(\lim \limits _{\varepsilon \searrow 0}J_{\varepsilon }x =\Pr _{\overline{D\left(A\right)}}x,\;\forall x \in \mathbb{H}\) and
$$\displaystyle{\lim \limits _{\varepsilon \searrow 0}x_{\varepsilon } = x \in D\left(A\right)\quad \Rightarrow \quad \lim \limits _{\varepsilon \searrow 0}J_{\varepsilon }x_{\varepsilon } = x.}$$( \(\Pr _{\overline{D\left(A\right)}}x\) is the orthogonal projection of x on \(\overline{D\left(A\right)}\) .)
-
(jv)
\(\lim \limits _{\varepsilon \searrow 0}A_{\varepsilon }x =\Pr _{\mathit{Ax}}\left\{0\right\}\mathop{ =}\limits^{ \mathit{def }}\;A^{0}x \in \mathit{Ax}\) , for all \(x \in D\left(A\right)\) .
-
(v)
\(\left\vert A_{\varepsilon }x\right\vert\) is monotone decreasing in \(\varepsilon > 0\) , and when \(\varepsilon \searrow 0\)
$$\displaystyle{\left\vert A_{\varepsilon }\left(x\right)\right\vert \nearrow \left\{\begin{array}{@{}l@{\quad }l@{}} \left\vert A^{0}\left(x\right)\right\vert,\quad &\text{ if}\quad x \in D\left(A\right), \\ +\infty, \quad &\text{ if}\quad x\notin D\left(A\right).\end{array} \right.}$$ -
(vj)
\(\left\vert J_{\varepsilon }x - x\right\vert =\varepsilon \left\vert A_{\varepsilon }x\right\vert \leq \varepsilon \left\vert A^{0}x\right\vert \leq \varepsilon \left\vert z\right\vert\) , for all \(\left(x,z\right) \in A\) .
-
(vjj)
For all \(x \in \mathbb{H}\),
$$\displaystyle\begin{array}{rcl} \left\vert J_{\varepsilon }x - x\right\vert & \leq &\left\vert J_{\varepsilon }x - J_{\varepsilon }\left(J_{1}x\right)\right\vert + \left\vert J_{\varepsilon }\left(J_{1}x\right) - J_{1}x\right\vert + \left\vert J_{1}x - x\right\vert {}\\ & \leq & 2\left\vert J_{1}x - x\right\vert +\varepsilon \left\vert A^{0}\left(J_{ 1}x\right)\right\vert. {}\\ \end{array}$$ -
(vjjj)
For all \(x \in \mathbb{H}\) and \(y \in \mathrm{ Dom}\left(A\right)\)
$$\displaystyle\begin{array}{rcl} \left\vert J_{\varepsilon }x - J_{\delta }y\right\vert & \leq &\left\vert x - y\right\vert + \left\vert \varepsilon -\delta \right\vert \left\vert A_{\delta }y\right\vert {}\\ & \leq &\left\vert x - y\right\vert + \left\vert \varepsilon -\delta \right\vert \left\vert A^{0}y\right\vert. {}\\ \end{array}$$
The operator A is uniquely defined by its principal section \(A^{0}x\mathop{ =}\limits^{ \mathit{def }}\Pr _{\mathit{Ax}}\left\{0\right\}\) in the following sense: if \(\left(x,y\right) \in \overline{D\left(A\right)} \times \mathbb{H}\) is such that
$$\displaystyle{\left\langle y - A^{0}u,x - u\right\rangle \geq 0,\quad \text{ for all }u \in D\left(A\right),}$$
then \(\left(x,y\right) \in A\).
Proposition 6.5.
Let \(A: \mathbb{H} \rightrightarrows \mathbb{H}\) be a maximal monotone operator.
-
I.
If \(\bar{B}\left(x_{0},r_{0}\right) \subset \mathrm{ Dom}\left(A\right)\) and
$$\displaystyle{A_{x_{0},r_{0}}^{\#}\mathop{ =}\limits^{ \mathit{def }}\sup \left\{\left\vert \hat{u}\right\vert:\hat{ u} \in A\left(x_{ 0} + r_{0}v\right),\;\left\vert v\right\vert \leq 1\right\},}$$then
$$\displaystyle{ r_{0}\left\vert \hat{x}\right\vert \leq \left\langle \hat{x},x - x_{0}\right\rangle + A_{x_{0},r_{0}}^{\#}\left\vert x - x_{ 0}\right\vert + r_{0}A_{x_{0},r_{0}}^{\#},\quad \forall \,\left(x,\hat{x}\right) \in A. }$$(6.2) -
II.
If there exist \(r_{0} > 0\), \(x_{0} \in \mathbb{H}\) and \(a_{0},\hat{a}_{0} \geq 0\) such that
$$\displaystyle{r_{0}\left\vert \hat{x}\right\vert \leq \left\langle \hat{x},x - x_{0}\right\rangle + a_{0}\left\vert x - x_{0}\right\vert +\hat{ a}_{0},\quad \forall \,\left(x,\hat{x}\right) \in A,}$$then there exists a \(b_{0} \geq 0\) such that for all \(x \in \mathbb{H}\) , for all \(\varepsilon \in \left]0,1\right]\) :
$$\displaystyle{ r_{0}\left\vert A_{\varepsilon }x\right\vert \leq \left\langle A_{\varepsilon }x,x - x_{0}\right\rangle + a_{0}\left\vert x - x_{0}\right\vert + b_{0}. }$$(6.3)If \(x_{0} \in \mathrm{ Dom}\left(A\right)\) and \(0 \in \mathit{Ax}_{0}\), then \(b_{0} =\hat{ a}_{0}\) .
Proof.
-
I.
By the monotonicity of A we have, for all \(\left(x,\hat{x}\right) \in A\), \(\left\vert v\right\vert \leq 1\) and \(\hat{y} \in A\left(x_{0} + r_{0}v\right)\):
$$\displaystyle\begin{array}{rcl} r_{0}\left\langle \hat{x},v\right\rangle & \leq & r_{0}\left\langle \hat{x},v\right\rangle + \left\langle \hat{x} -\hat{ y},x -\left(x_{0} + r_{0}v\right)\right\rangle {}\\ & =& \left\langle \hat{x},x - x_{0}\right\rangle -\left\langle \hat{y},x - x_{0}\right\rangle + r_{0}\left\langle \hat{y},v\right\rangle {}\\ & \leq &\left\langle \hat{x},x - x_{0}\right\rangle + A_{x_{0},r_{0}}^{\#}\left\vert x - x_{ 0}\right\vert + r_{0}A_{x_{0},r_{0}}^{\#}, {}\\ \end{array}$$which yields (6.2) by taking the supremum over \(\left\vert v\right\vert \leq 1\).
-
II.
Since \(A_{\varepsilon }\left(x\right) \in A\left(J_{\varepsilon }\left(x\right)\right)\), it follows that
$$\displaystyle\begin{array}{rcl} r_{0}\left\vert A_{\varepsilon }x\right\vert & \leq &\left\langle A_{\varepsilon }x,J_{\varepsilon }\left(x\right) - x_{0}\right\rangle + a_{0}\left\vert J_{\varepsilon }\left(x\right) - x_{0}\right\vert +\hat{ a}_{0} {}\\ & \leq &\left\langle A_{\varepsilon }x,x - x_{0}\right\rangle + a_{0}\left[\left\vert J_{\varepsilon }\left(x\right) - J_{\varepsilon }\left(x_{0}\right)\right\vert + \left\vert J_{\varepsilon }\left(x_{0}\right) - x_{0}\right\vert \right] +\hat{ a}_{0} {}\\ & \leq &\left\langle A_{\varepsilon }x,x - x_{0}\right\rangle + a_{0}\left\vert x - x_{0}\right\vert + a_{0}\left\vert J_{\varepsilon }\left(x_{0}\right) - x_{0}\right\vert +\hat{ a}_{0}. {}\\ \end{array}$$Hence the inequality (6.3) holds for \(b_{0} = a_{0}\left[2\left\vert J_{1}x_{0} - x_{0}\right\vert + \left\vert A^{0}\left(J_{1}x_{0}\right)\right\vert \right] +\hat{ a}_{0}\). If 0 ∈ Ax 0 then \(J_{\varepsilon }\left(x_{0}\right) = x_{0}\) and \(b_{0} =\hat{ a}_{0}\).
■
Proposition 6.6.
If A is a maximal monotone set in \(\mathbb{H} \times \mathbb{H}\) and \(\mathcal{A}\subset L^{2}\left(0,T; \mathbb{H}\right) \times L^{2}\left(0,T; \mathbb{H}\right)\) is defined by
$$\displaystyle{\left(x,y\right) \in \mathcal{A}\quad \text{ if}\quad y\left(t\right) \in A\left(x\left(t\right)\right),\ \mathit{dt}\text{ -a.e. on }\left(0,T\right),}$$
then \(\mathcal{A}\) is a maximal monotone set in \(L^{2}\left(0,T; \mathbb{H}\right) \times L^{2}\left(0,T; \mathbb{H}\right)\) .
6.3.3 Stochastic Monotone Functions
Let \(\left(\Omega,\mathcal{F}, \mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0}\right)\) be a complete stochastic basis and
$$\displaystyle{F: \Omega \times \left[0,+\infty \right[ \times \mathbb{R}^{d} \times \mathbb{R}^{d\times k} \rightarrow \mathbb{R}^{d}}$$
such that
-
\(\diamond \) \(F\left(\cdot,\cdot,y,z\right)\) is \(\mathcal{P}\)-m.s.p. for every \(\left(y,z\right) \in \mathbb{R}^{d} \times \mathbb{R}^{d\times k}\);
-
\(\diamond \) for all \(y,y^{{\prime}}\in \mathbb{R}^{d},\;z,z^{{\prime}}\in \mathbb{R}^{d\times k},\;t \geq 0\):
$$\displaystyle{\left\langle y - y^{{\prime}},F(t,y,z) - F(t,y^{{\prime}},z)\right\rangle \leq 0,\;\; \mathbb{P}\text{ -a.s.};}$$ -
\(\diamond \) for all \(z,z^{{\prime}}\in \mathbb{R}^{d\times k},\;t \geq 0\):
$$\displaystyle{y\longmapsto F(t,y,z): \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}\,\text{ is continuous, }\mathbb{P}\text{ -a.s.};}$$ -
\(\diamond \) there exists a \(\mathcal{P}\)-m.s.p. \(\ell: \Omega \times \left[0,+\infty \right[ \rightarrow \mathbb{R}_{+}\) such that for all \(y \in \mathbb{R}^{d}\), \(z,z^{{\prime}}\in \mathbb{R}^{d\times k},\;t \geq 0\):
$$\displaystyle{\left\vert F(t,y,z) - F(t,y,z^{{\prime}})\right\vert \leq \ell_{ t}\left\vert z - z^{{\prime}}\right\vert,\;\; \mathbb{P}\text{ -a.s.}}$$
Since \(y\mapsto - F(t,y,z): \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}\) is a monotone continuous operator (hence also a maximal monotone operator), it follows that for every \(\varepsilon > 0\) and \(\left(\omega,t,y,z\right) \in \Omega \times [0,T] \times \mathbb{R}^{d} \times \mathbb{R}^{d\times k}\) there exists a unique \(J_{\varepsilon } = J_{\varepsilon }\left(\omega,t,y,z\right) \in \mathbb{R}^{d}\) such that
$$\displaystyle{J_{\varepsilon } = y +\varepsilon F\left(t,J_{\varepsilon },z\right).}$$
The Yosida approximation of F is defined by
$$\displaystyle{F_{\varepsilon }\left(t,y,z\right)\mathop{ =}\limits^{ \mathit{def }}F\left(t,J_{\varepsilon }\left(t,y,z\right),z\right) = \frac{1} {\varepsilon } \left(J_{\varepsilon }\left(t,y,z\right) - y\right).}$$
Note that \(F_{\varepsilon } = F_{\varepsilon }(t,y,z)\) is the unique solution of
$$\displaystyle{F_{\varepsilon } = F\left(t,y +\varepsilon F_{\varepsilon },z\right).}$$
The functions \(J_{\varepsilon }\left(\cdot,\cdot,y,z\right)\), \(F_{\varepsilon }\left(\cdot,\cdot,y,z\right): \Omega \times \left[0,T\right] \rightarrow \mathbb{R}^{d}\) are \(\mathcal{P}\)-m.s.p. for every \(\left(y,z\right) \in \mathbb{R}^{d} \times \mathbb{R}^{d\times k}\) and we have:
Proposition 6.7.
For all \(\varepsilon,\delta > 0\), \(\forall \ t \in \left[0,T\right]\), \(\forall \ y,y^{{\prime}}\in \mathbb{R}^{d}\), \(\forall \ z,z^{{\prime}}\in \mathbb{R}^{d\times k}\) :
$$\displaystyle{\begin{array}{r@{\quad }l} a)\quad &\quad \left\vert J_{\varepsilon }(t,y,z) - J_{\varepsilon }(t,y^{{\prime}},z^{{\prime}})\right\vert \leq \left\vert y - y^{{\prime}}\right\vert +\varepsilon \ell _{t}\left\vert z - z^{{\prime}}\right\vert, \\ b)\quad &\quad \left\vert J_{\varepsilon }(t,0,0)\right\vert \leq \varepsilon \left\vert F(t,0,0)\right\vert, \\ c)\quad &\quad \left\langle y - y^{{\prime}},F_{\varepsilon }(t,y,z) - F_{\varepsilon }(t,y^{{\prime}},z^{{\prime}})\right\rangle \leq \ell _{t}\left\vert z - z^{{\prime}}\right\vert \left\vert y - y^{{\prime}}\right\vert, \\ d)\quad &\quad \left\vert F_{\varepsilon }(t,y,z) - F_{\varepsilon }(t,y^{{\prime}},z^{{\prime}})\right\vert \leq \dfrac{2} {\varepsilon } \left\vert y - y^{{\prime}}\right\vert +\ell _{t}\left\vert z - z^{{\prime}}\right\vert, \\ e)\quad &\quad \left\vert F_{\varepsilon }(t,y,z)\right\vert \leq \left\vert F(t,y,z)\right\vert \quad \text{ and}\quad \lim \limits _{\varepsilon \searrow 0}F_{\varepsilon }(t,y,z) = F(t,y,z),\end{array} }$$(6.5)
and
$$\displaystyle{\left\vert J_{\varepsilon }(t,y,z) - J_{\delta }(t,y^{{\prime}},z^{{\prime}})\right\vert \leq \left\vert y - y^{{\prime}}\right\vert + \left\vert \varepsilon -\delta \right\vert \left\vert F\left(t,J_{\varepsilon },z\right)\right\vert +\delta \ell _{t}\left\vert z - z^{{\prime}}\right\vert }$$(6.6)
and
$$\displaystyle\begin{array}{rcl} \left\langle y - y^{{\prime}},F_{\varepsilon }(t,y,z) - F_{\delta }(t,y^{{\prime}},z^{{\prime}})\right\rangle & \leq &-\varepsilon \left\vert F_{\varepsilon }\left(t,y,z\right)\right\vert ^{2} -\delta \left\vert F_{\delta }\left(t,y^{{\prime}},z^{{\prime}}\right)\right\vert ^{2} {}\\ & & +\left(\varepsilon +\delta \right)\left\langle F_{\varepsilon }\left(t,y,z\right),F_{\delta }(t,y^{{\prime}},z^{{\prime}})\right\rangle {}\\ & & +\ell_{t}\left[\varepsilon \left\vert F\left(t,y,z\right)\right\vert +\delta \left\vert F\left(t,y^{{\prime}},z^{{\prime}}\right)\right\vert + \left\vert y - y^{{\prime}}\right\vert \right]\left\vert z - z^{{\prime}}\right\vert. {}\\ \end{array}$$(6.7)
Proof.
-
(a):
If \(J = J_{\varepsilon }(t,y,z)\), \(J^{{\prime}} = J_{\varepsilon }(t,y^{{\prime}},z^{{\prime}})\), then
$$\displaystyle\begin{array}{rcl} & & \left\vert J - J^{{\prime}}\right\vert ^{2} {}\\ & & \qquad =\varepsilon \left\langle F\left(t,J,z\right) - F\left(t,J^{{\prime}},z^{{\prime}}\right),J - J^{{\prime}}\right\rangle + \left\langle y - y^{{\prime}},J - J^{{\prime}}\right\rangle {}\\ &&\qquad =\varepsilon \left\langle F\left(t,J,z\right) - F\left(t,J^{{\prime}},z\right),J - J^{{\prime}}\right\rangle {}\\ &&\qquad \qquad +\varepsilon \left\langle F\left(t,J^{{\prime}},z\right) - F\left(t,J^{{\prime}},z^{{\prime}}\right),J - J^{{\prime}}\right\rangle + \left\langle y - y^{{\prime}},J - J^{{\prime}}\right\rangle {}\\ &&\qquad \leq \varepsilon \left[\ell_{t}\left\vert z - z^{{\prime}}\right\vert \left\vert J - J^{{\prime}}\right\vert \right] + \left\vert y - y^{{\prime}}\right\vert \left\vert J - J^{{\prime}}\right\vert {}\\ \end{array}$$and (6.5-a) follows.
-
(b):
With the notation \(J^{0} = J\left(t,0,0\right)\),
$$\displaystyle{\left\vert J^{0}\right\vert ^{2} =\varepsilon \left\langle F\left(t,J^{0},0\right),J^{0}\right\rangle \leq \varepsilon \left\langle F\left(t,0,0\right),J^{0}\right\rangle \leq \varepsilon \left\vert F\left(t,0,0\right)\right\vert \left\vert J^{0}\right\vert }$$which gives (6.5-b).
-
(c):
We have
$$\displaystyle\begin{array}{rcl} & & \left\langle F_{\varepsilon }\left(t,y,z\right) - F_{\varepsilon }\left(t,y^{{\prime}},z^{{\prime}}\right),y - y^{{\prime}}\right\rangle {}\\ && = \frac{1} {\varepsilon } \left\langle J_{\varepsilon }(t,y,z) - J_{\varepsilon }(t,y^{{\prime}},z^{{\prime}}),y - y^{{\prime}}\right\rangle -\frac{1} {\varepsilon } \left\vert y - y^{{\prime}}\right\vert ^{2} {}\\ & & \leq \frac{1} {\varepsilon } \left[\vert y - y^{{\prime}}\vert +\varepsilon \ell _{ t}\left\vert z - z^{{\prime}}\right\vert \right]\left\vert y - y^{{\prime}}\right\vert -\frac{1} {\varepsilon } \left\vert y - y^{{\prime}}\right\vert ^{2} {}\\ & & \leq \ell_{t}\left\vert z - z^{{\prime}}\right\vert \left\vert y - y^{{\prime}}\right\vert {}\\ \end{array}$$that is (6.5-c).
-
(d):
From \(\left(a\right)\) and the definition of \(F_{\varepsilon }\) the inequality \(\left(d\right)\) clearly follows.
-
(e):
The properties follow from those of the Yosida approximation, \(A_{\varepsilon }\), of a maximal monotone operator A (here \(A_{\varepsilon }\left(y\right) = -F_{\varepsilon }(t,y,z)\) for \(\left(\omega,t,z\right)\) fixed).
-
(6.6):
Let \(J_{\varepsilon } = J_{\varepsilon }(t,y,z)\) and \(J_{\delta }^{{\prime}} = J_{\delta }(t,y^{{\prime}},z^{{\prime}})\). Then
$$\displaystyle\begin{array}{rcl} \left\vert J_{\varepsilon } - J_{\delta }^{{\prime}}\right\vert ^{2}& =& \left(\varepsilon -\delta \right)\left\langle F\left(t,J_{\varepsilon },z\right),J_{\varepsilon } - J_{\delta }^{{\prime}}\right\rangle +\delta \left\langle F\left(t,J_{\varepsilon },z\right) - F\left(t,J_{\delta }^{{\prime}},z^{{\prime}}\right),J_{\varepsilon } - J_{\delta }^{{\prime}}\right\rangle {}\\ & & +\left\langle y - y^{{\prime}},J_{\varepsilon } - J_{\delta }^{{\prime}}\right\rangle {}\\ & \leq & \left\vert \varepsilon -\delta \right\vert \left\vert F\left(t,J_{\varepsilon },z\right)\right\vert \left\vert J_{\varepsilon } - J_{\delta }^{{\prime}}\right\vert +\delta \ell _{t}\left\vert z - z^{{\prime}}\right\vert \left\vert J_{\varepsilon } - J_{\delta }^{{\prime}}\right\vert + \left\vert y - y^{{\prime}}\right\vert \left\vert J_{\varepsilon } - J_{\delta }^{{\prime}}\right\vert {}\\ \end{array}$$and (6.6) follows.
-
(6.7):
Now, we have
$$\displaystyle\begin{array}{rcl} & & \left\langle J_{\varepsilon } - J_{\delta }^{{\prime}},F_{\varepsilon }(t,y,z) - F_{\delta }(t,y^{{\prime}},z^{{\prime}})\right\rangle {}\\ & & \qquad = \left\langle J_{\varepsilon } - J_{\delta }^{{\prime}},F(t,J_{\varepsilon },z) - F(t,J_{\delta }^{{\prime}},z^{{\prime}})\right\rangle {}\\ & & \qquad \leq 0 + \left\langle J_{\varepsilon } - J_{\delta }^{{\prime}},F(t,J_{\delta }^{{\prime}},z) - F(t,J_{\delta }^{{\prime}},z^{{\prime}})\right\rangle {}\\ & & \qquad \leq \left\vert J_{\varepsilon } - J_{\delta }^{{\prime}}\right\vert \ell_{ t}\left\vert z - z^{{\prime}}\right\vert {}\\ &&\qquad \leq \ell_{t}\left[\varepsilon \left\vert F\left(t,y,z\right)\right\vert +\delta \left\vert F\left(t,y^{{\prime}},z^{{\prime}}\right)\right\vert + \left\vert y - y^{{\prime}}\right\vert \right]\left\vert z - z^{{\prime}}\right\vert {}\\ \end{array}$$and then
$$\displaystyle\begin{array}{rcl} & & \left\langle y - y^{{\prime}},F_{\varepsilon }(t,y,z) - F_{\delta }(t,y^{{\prime}},z^{{\prime}})\right\rangle {}\\ & & = \left\langle J_{\varepsilon } -\varepsilon F_{\varepsilon }\left(t,y,z\right) - J_{\delta }^{{\prime}} +\delta F_{\delta }\left(t,y^{{\prime}},z^{{\prime}}\right),F_{\varepsilon }(t,y,z) - F_{\delta }(t,y^{{\prime}},z^{{\prime}})\right\rangle {}\\ & & \leq -\varepsilon \left\vert F_{\varepsilon }\left(t,y,z\right)\right\vert ^{2} -\delta \left\vert F_{\delta }\left(t,y^{{\prime}},z^{{\prime}}\right)\right\vert ^{2} + \left(\varepsilon +\delta \right)\left\langle F_{\varepsilon }\left(t,y,z\right),F_{\delta }(t,y^{{\prime}},z^{{\prime}})\right\rangle {}\\ & & +\ell_{t}\left[\varepsilon \left\vert F\left(t,y,z\right)\right\vert +\delta \left\vert F\left(t,y^{{\prime}},z^{{\prime}}\right)\right\vert + \left\vert y - y^{{\prime}}\right\vert \right]\left\vert z - z^{{\prime}}\right\vert {}\\ \end{array}$$that is (6.7). ■
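The resolvent construction above can be sketched numerically for one concrete monotone case. The choice \(F(y) = -y^{3}\), with no \(\left(\omega,t,z\right)\)-dependence, is an illustrative assumption: \(y\mapsto - F(y) = y^{3}\) is continuous and monotone, the implicit equation \(J = y +\varepsilon F(t,J,z)\) becomes \(J +\varepsilon J^{3} = y\) (strictly increasing in J, solvable by bisection), and \(F_{\varepsilon } = \left(J_{\varepsilon } - y\right)/\varepsilon\).

```python
# Resolvent and Yosida approximation for F(y) = -y**3 (illustrative choice).

def F(y):
    return -y**3

def resolvent(eps, y, tol=1e-12):
    """Solve J = y + eps*F(J), i.e. J + eps*J**3 = y, by bisection
    (the map J -> J + eps*J**3 is strictly increasing on R)."""
    lo, hi = -abs(y) - 1.0, abs(y) + 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mid + eps * mid**3 < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def F_eps(eps, y):
    """Yosida approximation: F_eps(y) = (J_eps(y) - y)/eps = F(J_eps(y))."""
    return (resolvent(eps, y) - y) / eps

ys = [-2.0, -0.5, 0.0, 0.4, 1.3]
for eps in (1.0, 0.1, 0.01):
    for y in ys:
        J = resolvent(eps, y)
        # J_eps solves the implicit equation, and F_eps = F(J_eps)
        assert abs(J + eps * J**3 - y) < 1e-9
        assert abs(F_eps(eps, y) - F(J)) < 1e-6
        for yp in ys:
            # non-expansiveness of the resolvent in y (z is fixed here,
            # so no l_t term appears)
            assert abs(J - resolvent(eps, yp)) <= abs(y - yp) + 1e-9
            # monotonicity: <y - y', F_eps(y) - F_eps(y')> <= 0
            assert (F_eps(eps, y) - F_eps(eps, yp)) * (y - yp) <= 1e-9
```

The two inner assertions are the one-dimensional, z-free versions of the bounds in Proposition 6.7.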
If we define
then we have the following:
Proposition 6.8.
For all \(\varepsilon > 0\) , \(p,a > 1\), \(r_{0} \geq 0\), \(y \in \mathbb{R}^{d}\), \(z \in \mathbb{R}^{d\times k}\), \(t \in \left[0,T\right]\) :
where
Proof.
Let \(0 \leq r_{0} \leq 1\). The monotonicity property of \(F_{\varepsilon }\) implies that for all \(\left\vert u\right\vert \leq 1\):
and, consequently, \(\forall \,\left\vert u\right\vert \leq 1\):
The inequality (6.8) follows by taking the sup of the left-hand side over all vectors u such that \(\left\vert u\right\vert \leq 1\). ■
Finally we give some convergence results.
Let \(F: \Omega \times \left[0,T\right] \times \mathbb{R}^{d} \rightarrow \mathbb{R}^{d^{{\prime}} }\) be a function satisfying
Proposition 6.9.
Assume that F satisfies (6.9). Let
be such that
Then
Moreover if for some p,α > 0:
, then
If, in addition, \(x\longmapsto - F\left(t,x\right)\) is a monotone operator and \(F_{\varepsilon } = F_{\varepsilon }(t,x)\), \(\varepsilon > 0\) , is the Yosida approximation of F ( \(F_{\varepsilon }\) is the unique solution of \(F(\omega,t,x +\varepsilon F_{\varepsilon }) = F_{\varepsilon }\) ) then \(\forall \ q \in ]0,p[\) :
Proof.
Let \(\varepsilon _{n} \rightarrow 0\) such that
Then by the Lebesgue dominated convergence theorem
Since the convergence in probability is given by a metric, by reductio ad absurdum we infer that
Also, if \(C_{p} < \infty \), then Fatou’s lemma clearly yields (6.11-c\(_{1}\)).
-
I.
Denote by C a positive constant, independent of \(\varepsilon _{n}\), which may change from line to line. Let
$$\displaystyle{\left(\Delta _{n} =\right)\Delta _{\varepsilon _{n}}\mathop{ =}\limits^{ { \mathit{def }}}\int _{0}^{T}\left\vert F\left(s,X_{ s}^{\varepsilon _{n} }\right) - F\left(s,X_{s}\right)\right\vert ^{\alpha }\mathit{ds}.}$$Then by the Lebesgue dominated convergence theorem \(\Delta _{n} \rightarrow 0,\quad \mathbb{P}\text{ -a.s.}\), and
$$\displaystyle{\mathbb{E}\Delta _{n}^{p} \leq C.}$$Since
$$\displaystyle\begin{array}{rcl} \mathbb{E}\Delta _{n}^{q}& \leq & \mathbb{E}\left(\Delta _{ n}^{q}\mathbf{1}_{ \Delta _{n}\leq R}\right) + \mathbb{E}\left(\Delta _{n}^{q}\frac{\Delta _{n}^{p-q}} {R^{p-q}} \mathbf{1}_{\Delta _{n}>R}\right) {}\\ & \leq & \mathbb{E}\left(\Delta _{n}^{q}\mathbf{1}_{ \Delta _{n}\leq R}\right) + \frac{C} {R^{p-q}}, {}\\ \end{array}$$it follows that
$$\displaystyle{0\, \leq \limsup _{\varepsilon _{n}\rightarrow 0}\mathbb{E}\Delta _{n}^{q} \leq \frac{C} {R^{p-q}}\quad \forall R > 0,}$$that is \(\lim _{\varepsilon _{n}\rightarrow 0}\mathbb{E}\Delta _{n}^{q} = 0\) and by reductio ad absurdum the full sequence \(\Delta _{\varepsilon }\) has the property (6.11-c\(_{2}\)).
-
II.
Since \(\left\vert F_{\varepsilon }\left(t,X_{t}^{\varepsilon }\right)\right\vert \leq \left\vert F(t,X_{t}^{\varepsilon })\right\vert\), on a subsequence
$$\displaystyle\begin{array}{rcl} \lim _{\varepsilon _{n}\rightarrow 0}F_{\varepsilon _{n}}\left(t,X_{t}^{\varepsilon _{n} }\right)& =& \lim _{\varepsilon _{n}\rightarrow 0}F\left(t,X_{t}^{\varepsilon _{n} } +\varepsilon _{ n}F_{\varepsilon _{n}}\left(t,X_{t}^{\varepsilon _{n} }\right)\right) {}\\ & =& F(t,X_{t}),\;\; \mathbb{P}\text{ -a.s.} {}\\ \end{array}$$and then the convergence result, (6.12), follows in exactly the same manner with \(\Delta _{\varepsilon _{n}}:=\int _{ 0}^{T}\left\vert F_{\varepsilon _{n}}\left(s,X_{s}^{\varepsilon _{n}}\right) - F\left(s,X_{s}\right)\right\vert ^{\alpha }\mathit{ds}\).
■
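The truncation trick in step I of the proof (the bound \(\mathbb{E}\Delta _{n}^{q} \leq \mathbb{E}\left(\Delta _{n}^{q}\mathbf{1}_{\Delta _{n}\leq R}\right) + C/R^{p-q}\)) can be illustrated numerically. The two-point law below is an illustrative assumption: \(D_{n}\) equals \(n^{1/p}\) with probability 1∕n and 1∕n otherwise, so \(D_{n} \rightarrow 0\) in probability and \(\sup _{n}\mathbb{E}D_{n}^{p} \leq 2\), yet the essential supremum of \(D_{n}\) blows up; still \(\mathbb{E}D_{n}^{q} \rightarrow 0\) for every q < p.

```python
# Truncation/interpolation trick: E[D^q] <= E[D^q 1_{D<=R}] + E[D^p]/R^(p-q).

p, q = 2.0, 1.0   # q < p, as in the proposition

def moment(n, r):
    """E[D_n^r], computed exactly for the two-point law above."""
    return (1.0 / n) * n ** (r / p) + (1.0 - 1.0 / n) * (1.0 / n) ** r

def truncated_moment(n, r, R):
    """E[D_n^r 1_{D_n <= R}], computed exactly."""
    big, small = n ** (1.0 / p), 1.0 / n
    out = 0.0
    if small <= R:
        out += (1.0 - 1.0 / n) * small ** r
    if big <= R:
        out += (1.0 / n) * big ** r
    return out

# the p-th moments are bounded (this is the constant C of the proof)
C = max(moment(n, p) for n in range(1, 2001))
assert C <= 2.0 + 1e-12

R = 2.0
for n in (10, 100, 1000, 2000):
    # the truncation inequality, valid for every R > 0
    assert moment(n, q) <= truncated_moment(n, q, R) + C / R ** (p - q) + 1e-12

# and E[D_n^q] -> 0 even though E[D_n^p] does not
qs = [moment(n, q) for n in (10, 100, 1000, 10000)]
assert all(a > b for a, b in zip(qs, qs[1:])) and qs[-1] < 0.02
```

Letting R grow with n (as the proof does after taking limsup) is what turns the bound into convergence of the q-th moments.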
6.3.4 Compactness Results
Let \(I \subset \mathbb{R}\) be an interval. Denote by \(C\left(I; \mathbb{R}^{d}\right)\) the space of continuous functions \(g: I \rightarrow \mathbb{R}^{d}\). If \(I = \left[a,b\right]\) then \(C\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) is a separable Banach space with respect to the norm \(\left\Vert \cdot \right\Vert _{\left[a,b\right]}\), where if \(g: \left[a,b\right] \rightarrow \mathbb{R}^{d}\) we define
$$\displaystyle{\left\Vert g\right\Vert _{\left[a,b\right]} =\sup \left\{\left\vert g\left(t\right)\right\vert: t \in \left[a,b\right]\right\}.}$$
If \(\left[a,b\right] = \left[0,t\right]\) then we write \(\left\Vert g\right\Vert _{t} = \left\Vert g\right\Vert _{\left[0,t\right]}\).
For \(g \in C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\) we define for \(t \in \left[0,T\right]\) and \(\varepsilon \geq 0\):
$$\displaystyle{\mathbf{m}_{g}\left(t,\varepsilon \right) =\sup \left\{\left\vert g\left(s\right) - g\left(t\right)\right\vert: s \in \left[0,T\right],\ \left\vert s - t\right\vert \leq \varepsilon \right\},}$$
the modulus of continuity at t, and
$$\displaystyle{\mathbf{m}_{g}\left(\varepsilon \right) =\sup \left\{\left\vert g\left(t\right) - g\left(s\right)\right\vert: t,s \in \left[0,T\right],\ \left\vert t - s\right\vert \leq \varepsilon \right\},}$$
the modulus of uniform continuity.
We also introduce the notation
Note that
and
If \(\mathcal{M}\subset C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\) and \(\varepsilon > 0\), then
$$\displaystyle{\mathbf{m}_{\mathcal{M}}\left(t,\varepsilon \right) =\sup \limits _{g\in \mathcal{M}}\mathbf{m}_{g}\left(t,\varepsilon \right)}$$
and
$$\displaystyle{\mathbf{m}_{\mathcal{M}}\left(\varepsilon \right) =\sup \limits _{g\in \mathcal{M}}\mathbf{m}_{g}\left(\varepsilon \right).}$$
Theorem 6.10 (Arzelà–Ascoli).
Let \(\mathcal{M}\subset C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\). Then the following three conditions are equivalent:
-
(A)
\(\mathcal{M}\) is relatively compact in \(C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\) ;
-
(B)
- \(\left(B_{1}\right)\) :
-
(equicontinuity) : \(\lim \limits _{\varepsilon \rightarrow 0}\mathbf{m}_{\mathcal{M}}\left(t,\varepsilon \right) = 0\), \(\forall \ t \in \left[0,T\right]\) ;
- \(\left(B_{2}\right)\) :
-
(bounded images) for each \(t \in \left[0,T\right]\) the set \(\mathcal{M}\left(t\right) = \left\{g\left(t\right): g \in \mathcal{M}\right\}\) is bounded in \(\mathbb{R}^{d}\) ;
-
(C)
- \(\left(C_{1}\right)\) :
-
(uniform equicontinuity) : \(\lim \limits _{\varepsilon \rightarrow 0}\mathbf{m}_{\mathcal{M}}\left(\varepsilon \right) = 0\) ;
- \(\left(C_{2}\right)\) :
-
the set \(\left\{g\left(t\right): t \in \left[0,T\right],\;g \in \mathcal{M}\right\}\) is bounded in \(\mathbb{R}^{d}\) .
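The uniform modulus \(\mathbf{m}_{\mathcal{M}}\left(\varepsilon \right)\) in condition \(\left(C_{1}\right)\) can be approximated on a grid. The two families below are illustrative assumptions: \(g_{n}\left(t\right) =\sin \left(t + 1/n\right)\) is uniformly equicontinuous (all members are 1-Lipschitz), while \(h_{n}\left(t\right) =\sin \left(\mathit{nt}\right)\) is not.

```python
# Grid-based sketch of the moduli in the Arzela-Ascoli theorem.
import math

T, N = 1.0, 400
grid = [i * T / N for i in range(N + 1)]

def modulus(family, eps):
    """Grid version of m_M(eps) = sup_{g in M} sup_{|t-s|<=eps} |g(t)-g(s)|."""
    k = max(1, int(eps * N / T))   # number of grid steps that fit in [0, eps]
    return max(
        abs(g(grid[i]) - g(grid[i - j]))
        for g in family
        for i in range(1, N + 1)
        for j in range(1, min(k, i) + 1)
    )

equi = [lambda t, n=n: math.sin(t + 1.0 / n) for n in range(1, 11)]
non_equi = [lambda t, n=n: math.sin(n * t) for n in range(1, 41)]

# (C_1) holds for the 1-Lipschitz family: m_M(eps) <= eps -> 0
for eps in (0.1, 0.02, 0.005):
    assert modulus(equi, eps) <= eps + 1e-12
# ...but fails for sin(n*t): the modulus does not vanish with eps, because
# sin(40*t) already oscillates fully inside windows of length 0.1
assert modulus(non_equi, 0.1) > 1.0
```

Condition \(\left(C_{2}\right)\) holds trivially for both families (all values lie in [-1,1]); only equicontinuity separates them.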
Theorem 6.11 (Kolmogorov–Riesz–Weil).
Let \(p \in \left[1,\infty \right[\). A set \(\mathcal{S}\subset L^{p}\left(0,T; \mathbb{R}^{d}\right)\) is relatively compact in \(L^{p}\left(0,T; \mathbb{R}^{d}\right)\) if and only if:
-
(j)
(p-equi-integrability)
$$\displaystyle{\lim _{\varepsilon \searrow 0}\left[\sup _{g\in \mathcal{S}}\int _{\varepsilon }^{T-\varepsilon }\left\vert g\left(t+\varepsilon \right) - g\left(t\right)\right\vert ^{p}\mathit{dt}\right] = 0,}$$
(jj)
(boundedness):
$$\displaystyle{\sup _{g\in \mathcal{S}}\int _{0}^{T}\left\vert g\left(t\right)\right\vert ^{p}\mathit{dt} < \infty.}$$
(For the proofs of these last two theorems see, for example, the book of Vrabie [70].)
Clearly we have:
Corollary 6.12.
Let M > 0 and \(\gamma _{n} \searrow 0\), \(\varepsilon _{n} \searrow 0\) be two sequences.
-
a)
Then the set
$$\displaystyle\begin{array}{rcl} \mathcal{K}_{1}& =& \left\{z \in L^{2}(0,T; \mathbb{R}^{d}): \int _{ 0}^{T}\left\vert z\left(t\right)\right\vert ^{2}\mathit{dt} \leq M,\right. {}\\ & & \left.\sup _{0\leq \theta \leq \varepsilon _{n}}\int _{0}^{T-\varepsilon _{n} }\left\vert z\left(t+\theta \right) - z\left(t\right)\right\vert ^{2}\mathit{dt} \leq \gamma _{ n},\;\forall n \in \mathbb{N}^{{\ast}}\right\} {}\\ \end{array}$$is a compact subset of \(L^{2}(0,T; \mathbb{R}^{d})\) .
-
b)
If \(N_{n} = \left[\frac{T} {\varepsilon _{n}} \right]\) and \(t_{i} = \frac{(i-1)T} {N_{n}}\) , for \(1 \leq i \leq N_{n}\), \(n \geq 1\), then the set
$$\displaystyle\begin{array}{rcl} \mathcal{K}_{2}& =& \left\{z \in C([0,T]; \mathbb{R}^{d}): \left\vert z\left(0\right)\right\vert \leq M,\right. {}\\ & & \left.\sup \limits _{1\leq i\leq N_{n}}\sup \limits _{0<\theta \leq \varepsilon _{n}}\left\vert z\left(t_{i}+\theta \right) - z\left(t_{i}\right)\right\vert \leq \gamma _{n},\forall n \in \mathbb{N}^{{\ast}}\right\} {}\\ \end{array}$$is a compact subset of \(C([0,T]; \mathbb{R}^{d})\) (here \(z_{t}\) is extended outside of \(\left[0,T\right]\) by continuity: \(z_{s} = z_{T}\) for \(s \geq T\) and \(z_{s} = z_{0}\) for \(s \leq 0\)).
6.3.5 Bounded Variation Functions
Let \(\left[a,b\right]\) be a closed interval from \(\mathbb{R}\) and \(\mathcal{D}_{\left[a,b\right]}\) be the set of all partitions
$$\displaystyle{\Delta:\; a = t_{0} < t_{1} < \cdots < t_{n} = b.}$$
Define \(\left\Vert \Delta \right\Vert =\sup \left\{t_{i+1} - t_{i}: 0 \leq i \leq n - 1\right\}\).
Let
$$\displaystyle{V _{\Delta }\left(k\right) =\sum _{i=0}^{n-1}\left\vert k\left(t_{i+1}\right) - k\left(t_{i}\right)\right\vert }$$
be the variation of k corresponding to the partition \(\Delta \in \mathcal{D}_{\left[a,b\right]}\). We define the total variation of k on \(\left[a,b\right]\) by
$$\displaystyle{\left\updownarrow k\right\updownarrow _{\left[a,b\right]} =\sup \left\{V _{\Delta }\left(k\right): \Delta \in \mathcal{D}_{\left[a,b\right]}\right\},}$$
and if \(\left[a,b\right] = \left[0,T\right]\) then we write \(\left\updownarrow k\right\updownarrow _{T} = \left\updownarrow k\right\updownarrow _{\left[0,T\right]}\).
Proposition 6.13.
If \(k \in C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\) and \(\overline{\Delta }_{N} \in \mathcal{D}_{\left[0,T\right]}\) is the dyadic partition
$$\displaystyle{\overline{\Delta }_{N}:\; t_{i} = \frac{iT} {2^{N}},\quad 0 \leq i \leq 2^{N},}$$
then
$$\displaystyle{\lim \limits _{N\nearrow \infty }V _{\overline{\Delta }_{N}}\left(k\right) = \left\updownarrow k\right\updownarrow _{T}.}$$
Proof.
Clearly \(V _{\overline{\Delta }_{ N}}\left(k\right)\) is increasing with respect to N and \(V _{\overline{\Delta }_{ N}}\left(k\right) \leq \left\updownarrow k\right\updownarrow _{T}\).
Let \(\Delta \in \mathcal{D}_{\left[a,b\right]}\) be arbitrary
and \(j_{i} = \left[\frac{t_{i}} {T}2^{N}\right]\) be the integer part of \(\frac{t_{i}} {T}2^{N}\). Then
and passing to the limit for \(N \nearrow \infty \) we obtain
Hence \(\lim \limits _{N\nearrow \infty }V _{\overline{\Delta }_{ N}}\left(k\right) = \left\updownarrow k\right\updownarrow _{T}\). ■
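As a concrete numerical illustration of Proposition 6.13 (with our own choice of k, not an example from the text): for \(k(t) = \cos(2\pi t)\) on \([0,1]\) the total variation is 4, since k decreases from 1 to −1 on \([0,\frac{1}{2}]\) and increases back on \([\frac{1}{2},1]\), and the dyadic variations attain it:

```python
import numpy as np

# Variations over dyadic partitions t_i = i*T/2**N increase to the total
# variation (Proposition 6.13).  For k(t) = cos(2*pi*t) on [0,1] the total
# variation equals 4.  (Our own illustrative choice of k.)

def dyadic_variation(k, N: int, T: float = 1.0) -> float:
    """V_{Delta_N}(k) = sum_i |k(t_{i+1}) - k(t_i)| over t_i = i*T/2**N."""
    t = np.linspace(0.0, T, 2 ** N + 1)
    return float(np.sum(np.abs(np.diff(k(t)))))

k = lambda t: np.cos(2.0 * np.pi * t)
vals = [dyadic_variation(k, N) for N in range(1, 8)]

assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))  # increasing in N
assert abs(vals[-1] - 4.0) < 1e-9                           # limit = TV = 4
```

Here the dyadic points contain \(t = \frac{1}{2}\), where k turns around, so the variation is already exact for every \(N \geq 1\).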
Definition 6.14.
A function \(k: \left[a,b\right] \rightarrow \mathbb{R}^{d}\) has bounded variation on \(\left[a,b\right]\) if \(\left\updownarrow k\right\updownarrow _{\left[a,b\right]} < \infty \). The space of bounded variation functions on \(\left[a,b\right]\) will be denoted by \(\mathit{BV }\left(\left[a,b\right]; \mathbb{R}^{d}\right)\).
If \(x \in C\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) and \(k \in \mathit{BV }\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) then the Riemann–Stieltjes integral is defined by
$$\displaystyle{\int _{a}^{b}\left\langle x\left(t\right),\mathit{dk}\left(t\right)\right\rangle =\lim _{\left\Vert \Delta \right\Vert \rightarrow 0}\sum _{i=0}^{n-1}\left\langle x\left(\tau _{i}\right),k\left(t_{i+1}\right) - k\left(t_{i}\right)\right\rangle,}$$
where the integral is independent of the arbitrary choice of \(\tau _{i} \in \left[t_{i},t_{i+1}\right]\).
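A quick numerical sketch of a Riemann–Stieltjes sum (our own toy example, with scalar \(x(t)=t\), \(k(t)=t^{2}\), so \(\int_{0}^{1} x\,dk = \int_{0}^{1} t \cdot 2t\,\mathit{dt} = \frac{2}{3}\)), showing the independence of the tags \(\tau_i\):

```python
import numpy as np

# Riemann-Stieltjes sums with x(t) = t and k(t) = t**2 on [0,1]:
# int_0^1 x dk = int_0^1 t * 2t dt = 2/3, whatever tags tau_i are chosen.
# (Our own toy example, scalar case d = 1.)

def stieltjes_sum(x, k, partition, tags):
    """sum_i  x(tau_i) * ( k(t_{i+1}) - k(t_i) )."""
    return float(np.sum(x(tags) * np.diff(k(partition))))

t = np.linspace(0.0, 1.0, 10_001)           # a fine uniform partition
left, right, mid = t[:-1], t[1:], 0.5 * (t[:-1] + t[1:])

x = lambda s: s
k = lambda s: s ** 2
sums = [stieltjes_sum(x, k, t, tags) for tags in (left, right, mid)]

for s in sums:
    assert abs(s - 2.0 / 3.0) < 1e-3        # every tag choice gives ~ 2/3
```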
The Riemann–Stieltjes integral satisfies
$$\displaystyle{\left\vert \int _{a}^{b}\left\langle x\left(t\right),\mathit{dk}\left(t\right)\right\rangle \right\vert \leq \left(\sup _{t\in \left[a,b\right]}\left\vert x\left(t\right)\right\vert \right)\left\updownarrow k\right\updownarrow _{\left[a,b\right]}.}$$
Proposition 6.15.
Equipped with the norm
$$\displaystyle{\left\Vert k\right\Vert _{\mathit{BV }} = \left\vert k\left(a\right)\right\vert + \left\updownarrow k\right\updownarrow _{\left[a,b\right]},}$$
the space \(\mathit{BV }\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) is a Banach space. An element k of \(\mathit{BV }\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) can be identified with the following linear continuous mapping on \(C\left(\left[a,b\right]; \mathbb{R}^{d}\right)\):
$$\displaystyle{x\longmapsto \int _{a}^{b}\left\langle x\left(t\right),\mathit{dk}\left(t\right)\right\rangle.}$$
With this identification, \(\mathit{BV }\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) is the dual of the space \(C\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) .
Proposition 6.16 (Helly–Bray).
Let \(n \in \mathbb{N}^{{\ast}}\), \(x_{n},x \in C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\), \(k_{n} \in \mathit{BV }\left(\left[0,T\right]; \mathbb{R}^{d}\right)\), \(k: \left[0,T\right] \rightarrow \mathbb{R}^{d}\) , such that
$$\displaystyle{\lim _{n\rightarrow \infty }\left\Vert x_{n} - x\right\Vert _{T} = 0,\quad \lim _{n\rightarrow \infty }k_{n}\left(t\right) = k\left(t\right),\;\forall \,t \in \left[0,T\right],\quad \sup _{n}\left\updownarrow k_{n}\right\updownarrow _{T} \leq M.}$$
Then \(k \in \mathit{BV }\left(\left[0,T\right]; \mathbb{R}^{d}\right)\), \(\left\updownarrow k\right\updownarrow _{T} \leq M\) , and \(\forall \,0 \leq s \leq t \leq T\) :
$$\displaystyle{\lim _{n\rightarrow \infty }\int _{s}^{t}\left\langle x_{n}\left(r\right),\mathit{dk}_{n}\left(r\right)\right\rangle =\int _{s}^{t}\left\langle x\left(r\right),\mathit{dk}\left(r\right)\right\rangle.}$$
In particular \(k_{n}\mathop{ \rightarrow }\limits^{ w^{{\ast}}}k\) in \(\mathit{BV }\left(\left[0,T\right]; \mathbb{R}^{d}\right)\) , that is for all \(y \in C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\) :
$$\displaystyle{\lim _{n\rightarrow \infty }\int _{0}^{T}\left\langle y\left(r\right),\mathit{dk}_{n}\left(r\right)\right\rangle =\int _{0}^{T}\left\langle y\left(r\right),\mathit{dk}\left(r\right)\right\rangle.}$$
Proof.
First let \(\Delta _{N} \in \mathcal{D}_{\left[0,T\right]}\) be a sequence such that
From the definition of \(\left\updownarrow \cdot \right\updownarrow _{T}\) we have
Since \(k_{n}\left(t\right) \rightarrow k\left(t\right)\) for all \(t \in \left[0,T\right]\), it follows that \(V _{\Delta _{N}}\left(k_{n}\right) \rightarrow V _{\Delta _{N}}\left(k\right)\). Hence
and passing to the limit as \(N \nearrow \infty \) we obtain
Let \(\varepsilon > 0\) and let \(\Delta:\; 0 = t_{0} < t_{1} < \cdots < t_{N} = T\) be a partition with \(t_{i} \in \left[0,T\right]\), \(\left\Vert \Delta \right\Vert =\sup \left\{t_{i+1} - t_{i}: 0 \leq i \leq N - 1\right\} \leq \varepsilon\). For \(x_{i} = x\left(t_{i}\right)\), \(k_{i} = k\left(t_{i}\right)\), define
$$\displaystyle{S_{\Delta }\left(x,k\right) =\sum _{i=0}^{N-1}\left\langle x_{i},k_{i+1} - k_{i}\right\rangle }$$
and \(\mathbf{m}_{x}: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\),
$$\displaystyle{\mathbf{m}_{x}\left(\delta \right) =\sup \left\{\left\vert x\left(t\right) - x\left(s\right)\right\vert: t,s \in \left[0,T\right],\;\left\vert t - s\right\vert \leq \delta \right\},}$$
the modulus of continuity of x on \(\left[0,T\right]\).
We have
Indeed
Then
Now we obtain the estimate
Since \(k_{n}\left(t\right) \rightarrow k\left(t\right)\) for all \(t \in \left[0,T\right]\), it follows that \(\lim _{n\rightarrow \infty }\left\vert S_{\Delta }\left(x,k_{n} - k\right)\right\vert = 0\) and
Hence the limit \(\lim \limits _{n\rightarrow \infty }\int _{s}^{t}\left\langle x_{ n}\left(r\right),\mathit{dk}_{n}\left(r\right)\right\rangle\) exists, as does
Now, let \(\alpha \in C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\), \(\left\Vert \alpha \right\Vert _{T} \leq 1\). Then
and passing to \(\sup _{\left\Vert \alpha \right\Vert _{T}\leq 1}\) we obtain
■
We now give some other auxiliary results used in the book:
Proposition 6.17.
Let \(A: \mathbb{R}^{d} \rightrightarrows \mathbb{R}^{d}\) be a maximal monotone operator and \(\mathcal{A}: C\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right) \rightrightarrows \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right)\) be defined by:
Then the relation (6.15) is equivalent to: for all \(u,\hat{u} \in C(\mathbb{R}_{+}; \mathbb{R}^{d})\) such that \(\left(u(r),\hat{u}(r)\right) \in A,\,\forall \,r \geq 0\)
and \(\mathcal{A}\) is a monotone operator, that is:
\(\text{ for all}\ \left(x,k\right),\left(y,\ell\right) \in \mathcal{A}\)
Moreover \(\mathcal{A}\) is a maximal monotone operator.
Proof.
Let \(u,\hat{u} \in C\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right)\) be such that \(\left(u(r),\hat{u}(r)\right) \in A,\,\forall \,r \geq 0\). Then
The implication is obtained for \(u\left(r\right) = z\) and \(\hat{u}\left(r\right) =\hat{ z}\).
Let \(\left(x,k\right),(y,\ell) \in \mathcal{A}\) be arbitrary. Then for all \(u,\hat{u} \in C(\mathbb{R}_{+}; \mathbb{R}^{d})\) such that \(\left(u(r),\hat{u}(r)\right) \in A,\,\forall \,r \geq 0\) we have for all 0 ≤ s ≤ t,
We put here
and
where \(J_{\varepsilon }(z) = \left(I +\varepsilon A\right)^{-1}(z)\), \(A_{\varepsilon }\left(z\right) = \dfrac{1} {\varepsilon } \left(z - J_{\varepsilon }\left(z\right)\right)\). Since A is a maximal monotone operator on \(\mathbb{R}^{d}\) it follows that \(\overline{D(A)}\) is convex and \(\lim \limits _{\varepsilon \rightarrow 0}\varepsilon A_{\varepsilon }\left(u\right) = 0,\,\forall \,\,u \in \overline{D(A)}\). Also for all \(a \in D\left(A\right)\)
Adding the inequalities term by term we obtain:
Passing to \(\lim _{\varepsilon \searrow 0}\) we obtain \(\int \nolimits _{s}^{t}\left\langle y\left(r\right) - x\left(r\right),d\ell\left(r\right) -\mathit{dk}\left(r\right)\right\rangle \geq 0\). \(\mathcal{A}\) is a maximal monotone operator since if \(\left(y,\ell\right) \in C\left(\mathbb{R}_{+};\overline{D(A)}\right) \times \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right)\) satisfies
then this last inequality is satisfied for all \(\left(x,k\right)\) of the form \(\left(x\left(t\right),k\left(t\right)\right) = \left(z,\hat{z}t\right)\), where \(\left(z,\hat{z}\right) \in A\), and consequently (from the definition of \(\mathcal{A}\)) \(\left(y,\ell\right) \in \mathcal{A}\). The proof is complete. ■
Remark 6.18.
Often we restrict the realization to
and we write (for this case) \(\mathit{dk}\left(t\right) \in A\left(x\left(t\right)\right)\left(\mathit{dt}\right)\) if
Proposition 6.19.
Let \(A \subset \mathbb{R}^{d} \times \mathbb{R}^{d}\) be a maximal monotone subset and \(\mathcal{A}\) be the realization of A on \(C\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right) \times \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right)\) defined by (6.15). Assume that \(\mathrm{int}\left(\mathrm{Dom}\left(A\right)\right)\neq \varnothing \). Let \(u_{0} \in \mathrm{ int}\left(\mathrm{Dom}\left(A\right)\right)\) and r 0 > 0 be such that \(\bar{B}\left(u_{0},r_{0}\right) \subset \mathrm{ Dom}\left(A\right)\). Then
and for all \(\left(x,k\right) \in \mathcal{A}\) :
as signed measures on \(\mathbb{R}_{+}\). Moreover there exists a constant b 0 > 0 such that
for all 0 ≤ s ≤ t ≤ T, \(y \in C\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right)\) and \(0 <\varepsilon \leq 1\) .
Proof.
Since A is locally bounded on \(\mathrm{int}\left(\mathrm{Dom}\left(A\right)\right)\), it follows that for \(u_{0} \in \mathrm{ int}\left(\mathrm{Dom}\left(A\right)\right)\), there exists an r 0 > 0 such that \(u_{0} + r_{0}v \in \mathrm{ int}\left(\mathrm{Dom}\left(A\right)\right)\) for all \(\left\vert v\right\vert \leq 1\) and
Let \(0 \leq s = t_{0} < t_{1} <\ldots < t_{n} = t \leq T\), \(\max _{i}\left(t_{i+1} - t_{i}\right) =\delta _{n} \rightarrow 0\).
We put in (6.15) z = u 0 + r 0 v. Then
and we obtain
for all \(\left\vert v\right\vert \leq 1\). Hence
and adding term by term for i = 0 to i = n − 1 the inequality
holds and clearly (6.17) follows.
Setting in (6.3) \(x = y\left(r\right)\), x 0 = u 0 and integrating from s to t the inequality (6.18) follows. ■
In the book we often use energy-type equalities, which we describe in the next lemma.
Lemma 6.20.
Let \(x,k,m \in C\left(\left[0,\infty \right[; \mathbb{R}^{d}\right)\), \(k \in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}^{d}\right)\), \(k\left(0\right) = m\left(0\right) = 0\) be such that
$$\displaystyle{x\left(t\right) = x_{0} - k\left(t\right) + m\left(t\right),\quad \forall \,t \geq 0.}$$
Then
-
(I):
For all t ≥ 0 and for all \(u \in \mathbb{R}^{d}\) :
$$\displaystyle{ \begin{array}{r} \left\vert x\left(t\right) - m\left(t\right) - u\right\vert ^{2} + 2\int _{0}^{t}\left\langle x\left(r\right) - u,\mathit{dk}\left(r\right)\right\rangle \\ = \left\vert x_{0} - u\right\vert ^{2} + 2\int _{0}^{t}\left\langle m\left(r\right),\mathit{dk}\left(r\right)\right\rangle.\end{array} }$$(6.19) -
(II):
For all 0 ≤ s ≤ t:
$$\displaystyle{ \begin{array}{r} \left\vert x\left(t\right) - x\left(s\right) - m\left(t\right) + m\left(s\right)\right\vert ^{2} + 2\int _{s}^{t}\left\langle x\left(r\right) - x\left(s\right),\mathit{dk}\left(r\right)\right\rangle \\ = 2\int _{s}^{t}\left\langle m\left(r\right) - m\left(s\right),\mathit{dk}\left(r\right)\right\rangle.\end{array} }$$(6.20)
Proof.
-
(I):
We have
$$\displaystyle\begin{array}{rcl} & & \left\vert x\left(t\right) - m\left(t\right) - u\right\vert ^{2} {}\\ & & = \left\vert x_{0} - k\left(t\right) - u\right\vert ^{2} {}\\ & & = \left\vert x_{0} - u\right\vert ^{2} + 2\int _{ 0}^{t}\left\langle x_{ 0} - k\left(r\right) - u,d\left(x_{0} - k - u\right)\left(r\right)\right\rangle {}\\ & & = \left\vert x_{0} - u\right\vert ^{2} + 2\int _{ 0}^{t}\left\langle x\left(r\right) - m\left(r\right) - u,-\mathit{dk}\left(r\right)\right\rangle {}\\ & & = \left\vert x_{0} - u\right\vert ^{2} + 2\int _{ 0}^{t}\left\langle m\left(r\right),\mathit{dk}\left(r\right)\right\rangle - 2\int _{ 0}^{t}\left\langle x\left(r\right) - u,\mathit{dk}\left(r\right)\right\rangle, {}\\ \end{array}$$that is (6.19).
-
(II):
From (6.19) we have for u = 0
$$\displaystyle{\begin{array}{r} \left\vert x\left(t\right) - m\left(t\right)\right\vert ^{2} -\left\vert x\left(s\right) - m\left(s\right)\right\vert ^{2} + 2\int _{ s}^{t}\left\langle x\left(r\right),\mathit{dk}\left(r\right)\right\rangle \\ = 2\int _{s}^{t}\left\langle m\left(r\right),\mathit{dk}\left(r\right)\right\rangle.\end{array} }$$But \(k\left(t\right) - k\left(s\right) = m\left(t\right) - x\left(t\right) - m\left(s\right) + x\left(s\right)\),
$$\displaystyle{\begin{array}{r} \left\vert x\left(t\right) - m\left(t\right)\right\vert ^{2} = \left\vert x\left(t\right) - x\left(s\right) - m\left(t\right) + m\left(s\right)\right\vert ^{2} + \left\vert x\left(s\right) - m\left(s\right)\right\vert ^{2} \\ - 2\left\langle x\left(s\right) - m\left(s\right),k\left(t\right) - k\left(s\right)\right\rangle \end{array} }$$and
$$\displaystyle\begin{array}{rcl} 2\int _{s}^{t}\left\langle m\left(r\right),\mathit{dk}\left(r\right)\right\rangle & =& 2\int _{ s}^{t}\left\langle m\left(r\right)-m\left(s\right),\mathit{dk}\left(r\right)\right\rangle +2\left\langle m\left(s\right),k\left(t\right) - k\left(s\right)\right\rangle, {}\\ 2\int _{s}^{t}\left\langle x\left(r\right),\mathit{dk}\left(r\right)\right\rangle & =& 2\int _{ s}^{t}\left\langle x\left(r\right)-x\left(s\right),\mathit{dk}\left(r\right)\right\rangle + 2\left\langle x\left(s\right),k\left(t\right) - k\left(s\right)\right\rangle. {}\\ \end{array}$$Hence, the equality (6.20) holds. ■
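The energy equality (6.19) can be sanity-checked numerically in the smooth scalar case (the functions and constants below are our own illustrative choices, with \(\mathit{dk}(t) = k'(t)\,\mathit{dt}\)):

```python
import numpy as np

# Numerical check of (6.19) with x(t) = x0 - k(t) + m(t), m(t) = sin t,
# k(t) = t**2 (so dk = 2t dt), x0 = 1, u = 0.3, on [0, 1].  The identity is
# exact; the two sides agree up to quadrature error.  (Our own toy data.)

def trap(y, t):
    """Trapezoid rule for int y dt on the grid t."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

t = np.linspace(0.0, 1.0, 200_001)
x0, u = 1.0, 0.3
m, k, dk = np.sin(t), t ** 2, 2.0 * t        # dk(t) = k'(t)
x = x0 - k + m                                # the structural relation

lhs = (x[-1] - m[-1] - u) ** 2 + 2.0 * trap((x - u) * dk, t)
rhs = (x0 - u) ** 2 + 2.0 * trap(m * dk, t)
assert abs(lhs - rhs) < 1e-8
```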
Finally we give an approximation result via Stieltjes integrals.
Lemma 6.21.
Let
-
\(Q: \left[0,T\right] \rightarrow \mathbb{R}\) be a strictly increasing continuous function such that \(Q\left(0\right) = 0\),
-
\(f,\gamma: \left[0,T\right] \rightarrow \mathbb{R}^{d}\) be bounded measurable functions,
-
\(\varphi: \mathbb{R}^{d} \rightarrow ] -\infty,+\infty ]\) be a proper convex lower semicontinuous function.
If
then as \(\varepsilon \rightarrow 0_{+}\)
If \(f: \left[0,T\right] \rightarrow \mathbb{R}^{d}\) is a continuous function, then moreover
Remark 6.22.
The same conclusions are true if we replace \(f_{\varepsilon }\left(t\right)\) by
Proof of Lemma 6.21 \(\left(j\right)\).
Obviously we have
But
since
Therefore the following limit exists
and \(\left(j\right)\) follows. In the case where f is continuous, it is sufficient to write
where \(t_{\varepsilon }:= Q^{-1}\left(Q\left(t\right) -\sqrt{Q\left(\varepsilon \right)}\right) \rightarrow t\) as \(\varepsilon \rightarrow 0\), and \(t_{\varepsilon } < t\).
\(\left(\,\mathit{jj}\right)\) We have
Using Remark 6.22 we have
By Lebesgue’s dominated convergence theorem and the lower semicontinuity of \(\varphi\) we conclude that
■
6.3.6 Semicontinuity
Let \(\left(\mathbb{X},\rho \right)\) be a metric space.
Definition 6.23.
A function \(f: \mathbb{X} \rightarrow \overline{\mathbb{R}}\) is lower semicontinuous (l.s.c.) at \(x \in \mathbb{X}\) if
$$\displaystyle{\liminf _{y\rightarrow x}f\left(y\right) \geq f\left(x\right),}$$
i.e. for all \(\varepsilon > 0\) there exists a \(\delta =\delta \left(\varepsilon,x\right) > 0\) such that \(\rho \left(x,y\right) <\delta\) implies \(f\left(y\right) \geq f\left(x\right)-\varepsilon\). The function f is l.s.c. if it is l.s.c. at all \(x \in \mathbb{X}\).
A function \(g: \mathbb{X} \rightarrow \overline{\mathbb{R}}\) is upper semicontinuous (u.s.c.) if − g is l.s.c.
Proposition 6.24.
The following assertions are equivalent:
-
(i)
\(f: \mathbb{X} \rightarrow \overline{\mathbb{R}}\) is lower semicontinuous;
-
(ii)
the set \(\left\{x \in \mathbb{X}: f\left(x\right) \leq a\right\}\) is closed in \(\mathbb{X}\) , for all \(a \in \mathbb{R}\) .
It is easy to prove that:
-
\(\blacktriangle \) If \(g_{n}: \mathbb{X} \rightarrow \overline{\mathbb{R}}\), \(n \in \mathbb{N}\), are l.s.c. functions and
$$\displaystyle{g(x) =\sup \{ g_{n}(x): n \in \mathbb{N}\},}$$then \(g: \mathbb{X} \rightarrow \overline{\mathbb{R}}\) is a l.s.c. function.
-
\(\blacktriangle \) If \(f: \mathbb{X} \rightarrow ] -\infty,+\infty ]\) is a l.s.c. function, then f is bounded from below on compact subsets of \(\mathbb{X}\).
Lemma 6.25.
Let \(\left(\mathbb{X},\rho \right)\) be a metric space. If \(f: \mathbb{X} \rightarrow \overline{\mathbb{R}}\) is bounded below on bounded subsets of \(\mathbb{X}\) , then there exists a continuous function \(\mu: \mathbb{X} \rightarrow \mathbb{R}\) such that
$$\displaystyle{\mu \left(x\right) \leq f\left(x\right),\quad \forall \,x \in \mathbb{X}.}$$
Proof.
Let \(n \in \mathbb{N}^{{\ast}}\) and \(a \in \mathbb{X}\). Define
Then \(\mu _{n} \in \mathbb{R}\). Define \(\mu: \mathbb{X} \rightarrow \mathbb{R}\) such that, if \(n - 1 \leq \rho \left(x,a\right) < n\)
The function μ is continuous on \(\mathbb{X}\) and
■
Proposition 6.26.
Let \(\left(\mathbb{X},\rho \right)\) be a separable metric space. If \(f: \mathbb{X} \rightarrow ] -\infty,+\infty ]\) is a l.s.c. function and \(\mu: \mathbb{X} \rightarrow \mathbb{R}\) is a continuous function such that
then there exists a sequence of continuous functions \(f_{n}: \mathbb{X} \rightarrow \mathbb{R}\), \(n \in \mathbb{N}^{{\ast}}\) , such that for all \(x \in \mathbb{X}\)
Proof.
Using only the boundedness from below (6.22) we shall show that there exists a sequence of continuous functions \(f_{n}: \mathbb{X} \rightarrow \mathbb{R}\), \(n \in \mathbb{N}^{{\ast}}\), such that
and such that for all \(x \in \mathbb{X}\) there exists a sequence \(y_{n} \rightarrow x\) satisfying
Then the result follows using the lower semicontinuity of f:
Let us prove (6.23) and (6.24). Let \(n,i \in \mathbb{N}^{{\ast}}\) and \(a \in \mathbb{X}\). Define
Then \(-\infty \leq \mu _{i,n} < +\infty \). Define \(\psi _{n}\left(\cdot,a\right): \mathbb{X} \rightarrow \mathbb{R}\) such that, if \(\frac{i-1} {n} \leq \rho \left(y,a\right) < \frac{i} {n}\), \(i \in \mathbb{N}^{{\ast}}\), then
For each \(a \in \mathbb{X}\) the function \(\psi _{n}\left(\cdot,a\right)\) is continuous on \(\mathbb{X}\) and
Let \(A_{1} \subset A_{2} \subset \ldots \subset A_{n} \subset \ldots\) be finite sets such that \(A={\bigcup\limits_{n\in\mathbb{N}^{\ast}}}A_{n}\) is a dense subset of \(\mathbb{X}\). Define \(f_{n}: \mathbb{X} \rightarrow \mathbb{R}\)
Clearly \(f_{n},\;n \in \mathbb{N}^{{\ast}}\), are continuous functions and
Let \(x \in \mathbb{X}\) be arbitrary. Then there exist \(a_{n} \in A\) and \(k_{n} \geq n\) such that
If \(\mu _{1,n}\left(a_{n}\right) \in \mathbb{R}\), then from the definition of \(\mu _{1,n}\left(a_{n}\right)\), there exists \(y_{n} \in B(a_{n}, \frac{1} {n})\) such that
If \(\mu _{1,n}\left(a_{n}\right) = -\infty \) then, once again from the definition of \(\mu _{1,n}\left(a_{n}\right)\), there exists \(y_{n} \in B(a_{n}, \frac{1} {n})\) such that
We remark that
and consequently y n → x. The proof is complete. ■
We also have the following:
Proposition 6.27.
If \(f: \mathbb{X} \rightarrow \mathbb{R}\) is a continuous function and \(f_{n}: \mathbb{X} \rightarrow \mathbb{R}\), \(n \in \mathbb{N}^{{\ast}}\) , are lower semicontinuous functions such that for all \(x \in \mathbb{X}\) :
$$\displaystyle{f_{n}\left(x\right) \leq f_{n+1}\left(x\right)\;\;\text{ and}\;\;\lim _{n\rightarrow \infty }f_{n}\left(x\right) = f\left(x\right),}$$
then for every compact set \(K \subset \mathbb{X}\)
$$\displaystyle{ \lim _{n\rightarrow \infty }\sup _{x\in K}\left\vert f\left(x\right) - f_{n}\left(x\right)\right\vert = 0. }$$(6.25)
Proof.
For each \(\varepsilon > 0\), \(G_{n} = \left\{x \in \mathbb{X}: f\left(x\right) - f_{n}\left(x\right) <\varepsilon \right\}\) is an open subset of \(\mathbb{X}\) and
Hence, by the compactness of K, there exists an \(n \in \mathbb{N}^{{\ast}}\) such that K ⊂ G n and the uniform convergence (6.25) follows. ■
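A concrete instance of Proposition 6.27 on \(K = [0,1]\) (our own example, not from the text): the l.s.c. functions \(f_{n}(x) = x\,\mathbf{1}_{\{x > 1/n\}}\) increase pointwise to the continuous \(f(x) = x\), and the sup-gap over K is exactly \(1/n\), so the convergence is uniform:

```python
import numpy as np

# Dini-type uniform convergence on the compact K = [0,1]:
# f_n(x) = x if x > 1/n, else 0, is l.s.c., increases in n, and converges
# pointwise to f(x) = x; the sup-gap equals 1/n -> 0.  (Our own example.)

xs = np.linspace(0.0, 1.0, 100_001)
f = xs

def f_n(n: int) -> np.ndarray:
    return np.where(xs > 1.0 / n, xs, 0.0)

gaps = [float(np.max(f - f_n(n))) for n in (10, 100, 1000)]
for n, gap in zip((10, 100, 1000), gaps):
    assert abs(gap - 1.0 / n) < 1e-3    # sup-gap is 1/n, up to the grid
assert gaps[0] > gaps[1] > gaps[2]      # and decreases to 0
```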
We now give some examples (as exercises for the reader) of lower semicontinuous functions that are used in the book.
Example 6.28.
Let \(\left(\mathbb{X},\rho \right)\) be a separable metric space and \(E \subset \mathbb{X}\). Then E is a closed subset of \(\mathbb{X}\) if and only if the function
is a l.s.c. function on \(\mathbb{X}\).
Example 6.29.
Let \(\left(\mathbb{X},\rho \right)\) be a separable metric space. Let 0 ≤ s < t ≤ T. If \(f: \mathbb{X} \rightarrow ] -\infty,+\infty ]\) is a l.s.c. function bounded below on bounded subsets of \(\mathbb{X}\) and \(\Phi: C\left(\left[0,T\right]; \mathbb{X}\right) \rightarrow ] -\infty,+\infty ]\) is defined by
then \(\Phi \) is a l.s.c. function.
Let 0 ≤ s < t ≤ T. Let \(\mathcal{D}_{\left[s,t\right]}\) be the set of partitions \(\Delta:\; s = r_{0} < r_{1} < \cdots < r_{n} = t\), and
Define the total variation of k on \(\left[s,t\right]\) by
Then as a sup of continuous functions:
Example 6.30.
The mapping \(k\longmapsto \left\updownarrow k\right\updownarrow _{\left[s,t\right]}: C\left(\left[0,T\right]; \mathbb{X}\right) \rightarrow \left[0,\infty \right]\) is a l.s.c. function.
Finally we present Ekeland’s principle (see [26], or [4], p. 29, Th. 3.2):
Lemma 6.31 (Ekeland).
Let \((\mathbb{X},\rho )\) be a complete metric space and \(J: \mathbb{X} \rightarrow ] -\infty,+\infty ]\) be a proper lower-semicontinuous function bounded from below. Then for any \(\varepsilon > 0\) there exists an \(x_{\varepsilon } \in \mathbb{X}\) such that:
6.3.7 Convex Functions
6.3.7.1 Definitions: Properties
Let \(\left(\mathbb{X},\left\Vert \cdot \right\Vert \right)\) be a real Banach space and \(\left(\mathbb{X}^{{\ast}},\left\Vert \cdot \right\Vert _{{\ast}}\right)\) its dual. A function \(\varphi: \mathbb{X} \rightarrow ] -\infty,+\infty ]\) is convex if
$$\displaystyle{\varphi \left(\lambda x + \left(1-\lambda \right)y\right) \leq \lambda \varphi \left(x\right) + \left(1-\lambda \right)\varphi \left(y\right),\quad \forall \,x,y \in \mathbb{X},\;\forall \,\lambda \in \left[0,1\right].}$$
Denote by
$$\displaystyle{\mathrm{Dom}\left(\varphi \right) = \left\{x \in \mathbb{X}:\varphi \left(x\right) < +\infty \right\}}$$
the domain of \(\varphi\) and
$$\displaystyle{\partial \varphi \left(x\right) = \left\{x^{{\ast}}\in \mathbb{X}^{{\ast}}: \left\langle x^{{\ast}},y - x\right\rangle +\varphi \left(x\right) \leq \varphi \left(y\right),\;\forall \,y \in \mathbb{X}\right\}}$$
the subdifferential of the function \(\varphi\) at x. We say that \(\varphi\) is proper if \(\mathrm{Dom}(\varphi )\neq \varnothing \). Clearly if \(\varphi\) is a convex function then \(\mathrm{Dom}(\varphi )\) is a convex subset of \(\mathbb{X}\).
Theorem 6.32.
If \(\mathbb{X}\) is a Banach space and \(\varphi: \mathbb{X} \rightarrow ] -\infty,+\infty ]\) is a proper convex l.s.c. function then
$$\displaystyle{\mathrm{Dom}\left(\partial \varphi \right) = \left\{x \in \mathbb{X}: \partial \varphi \left(x\right)\neq \varnothing \right\}}$$
is non-empty and \(\partial \varphi: \mathbb{X} \rightrightarrows \mathbb{X}^{{\ast}}\) is a maximal monotone operator.
If K is a convex subset of \(\mathbb{X}\) then the function \(I_{K}: \mathbb{X} \rightarrow ] -\infty,+\infty ]\) defined by
$$\displaystyle{I_{K}\left(x\right) = \left\{\begin{array}{ll} 0, &\text{ if }x \in K,\\ +\infty, &\text{ if } x\notin K,\end{array} \right.}$$
is a convex function called the convex indicator function of K.
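It may be useful to record a standard fact about this example (well known, stated here for convenience): the subdifferential of \(I_{K}\) at a point of K is the normal cone to K at that point,

```latex
\partial I_{K}\left(x\right)
  = N_{K}\left(x\right)
  = \left\{x^{\ast}\in \mathbb{X}^{\ast}:
      \left\langle x^{\ast},y-x\right\rangle \leq 0,\ \forall\, y \in K\right\},
  \quad x \in K,
\qquad
\partial I_{K}\left(x\right) = \varnothing,\quad x \notin K.
```

This follows directly from the definition of the subdifferential, since \(I_{K}\left(y\right) - I_{K}\left(x\right) = 0\) for \(x,y \in K\).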
Recall, from [71] Chapter 2, the following:
Proposition 6.33.
Let \(g: \mathbb{R} \rightarrow ] -\infty,+\infty ]\) be a convex function. Then:
-
(a)
Dom (g) is an interval in \(\mathbb{R}\) ;
-
(b)
the left derivative \(g_{-}^{{\prime}}:\mathrm{ Dom}(g) \rightarrow \left[-\infty,+\infty \right]\) and the right derivative \(g_{+}^{{\prime}}:\mathrm{ Dom}(g) \rightarrow \left[-\infty,+\infty \right]\) are well defined increasing functions and they satisfy:
$$\displaystyle{\begin{array}{r@{\quad }l} \left(j\right)\;\quad &g_{+}^{{\prime}}\left(r\right) \leq \dfrac{g\left(s\right) - g\left(r\right)} {s - r} \leq g_{-}^{{\prime}}\left(s\right),\;\;\forall \ r,s \in \mathrm{ Dom}(g),\ r < s; \\ \left(\,\mathit{jj}\right)\;\quad &g_{-}^{{\prime}}\left(r\right) \leq g_{+}^{{\prime}}\left(r\right),\;\;\forall \ r \in \mathrm{ Dom}(g); \\ \left(\,\mathit{jjj}\right)\;\quad &g_{-}^{{\prime}}\;\text{ is left continuous and }g_{+}^{{\prime}}\;\text{ is right continuous on }\mathrm{int}(\mathrm{Dom}(g)); \\ \left(\,\mathit{jv}\right)\;\quad &u \in \left[g_{-}^{{\prime}}\left(r\right),g_{+}^{{\prime}}\left(r\right)\right] \cap \mathbb{R}\;\Longleftrightarrow\;u\left(s - r\right) \leq g\left(s\right) - g\left(r\right),\;\forall \ s \in \mathbb{R}; \\ \left(\mathit{v}\right)\;\quad &\left\{r \in \mathrm{ Dom}(g): g_{-}^{{\prime}}\left(r\right)\neq g_{+}^{{\prime}}\left(r\right)\right\}\;\text{ is at most countable;}\end{array} }$$ -
(c)
g is locally Lipschitz continuous on \(\mathrm{int}\left(\mathrm{Dom}(g)\right)\) ;
-
(d)
\(A \subset \mathbb{R} \times \mathbb{R}\) is a maximal monotone operator if and only if there exists a proper convex l.s.c. function \(j: \mathbb{R} \rightarrow ] -\infty,+\infty ]\) such that \(A = \partial j\).
Note that if \( \varphi \) is a proper convex lower semicontinuous (l.s.c.) function then:
-
\(\varphi\) is bounded from below by an affine function, that is \(\exists \) \(v \in \mathbb{X}^{{\ast}}\) and \(a \in \mathbb{R}\) such that
$$\displaystyle{\varphi \left(x\right) \geq \left\langle v,x\right\rangle + a,\text{ for all }x \in \mathbb{X},}$$and, moreover, if \(\mathbb{X}\) is reflexive and \(\lim \limits _{\left\Vert x\right\Vert \rightarrow \infty }\varphi \left(x\right) = +\infty \) then there exists an \(x_{0} \in \mathrm{ Dom}(\varphi )\) such that
$$\displaystyle{\varphi \left(x\right) \geq \varphi \left(x_{0}\right),\text{ for all }x \in \mathbb{X};}$$ -
(Fenchel–Moreau theorem on biconjugate functions)
$$\displaystyle{\varphi \left(x\right) =\varphi ^{{\ast}{\ast}}\left(x\right) =\sup \left\{\left\langle x,x^{{\ast}}\right\rangle -\varphi ^{{\ast}}\left(x^{{\ast}}\right): x^{{\ast}}\in \mathbb{X}^{{\ast}}\right\},}$$where \(\varphi ^{{\ast}}: \mathbb{X}^{{\ast}}\rightarrow \bar{\mathbb{R}}\) is the conjugate of the function \(\varphi\), i.e.
$$\displaystyle{\varphi ^{{\ast}}\left(x^{{\ast}}\right) =\sup \left\{\left\langle u,x^{{\ast}}\right\rangle -\varphi \left(u\right): u \in \mathrm{ Dom}(\varphi )\right\};}$$ -
\(\varphi\) is continuous on \(\mathrm{int}\left(\mathrm{Dom}(\varphi )\right)\);
-
\(\partial \varphi: \mathbb{X} \rightrightarrows \mathbb{X}^{{\ast}}\) is maximal monotone;
-
\(\mathrm{int}\left(\mathrm{Dom}\left(\varphi \right)\right) =\mathrm{ int}\left(\mathrm{Dom}\left(\partial \varphi \right)\right)\) and \(\overline{\mathrm{Dom}\left(\partial \varphi \right)} = \overline{\mathrm{Dom}\left(\varphi \right)}\).
We have the following instance of Jensen’s inequality.
Lemma 6.34.
Let \(\varphi: \mathbb{R}^{d} \rightarrow ] -\infty,+\infty ]\) be a proper convex lower semicontinuous function. If \(a,b \in \mathbb{R}\) , a < b, \(y \in L^{\infty }\left(a,b; \mathbb{R}^{d}\right)\) and \(\rho \in L^{1}\left(a,b; \mathbb{R}_{+}\right)\) such that \(\int _{a}^{b}\rho \left(r\right)\mathit{dr} = 1\) , then
$$\displaystyle{\varphi \left(\int _{a}^{b}\rho \left(r\right)y\left(r\right)\mathit{dr}\right) \leq \int _{a}^{b}\rho \left(r\right)\varphi \left(y\left(r\right)\right)\mathit{dr}.}$$
Proof.
Since \(\varphi\) is a proper convex l.s.c. function, there exists a set \(\Gamma \subset \mathbb{R}^{d} \times \mathbb{R}\) such that
$$\displaystyle{\varphi \left(x\right) =\sup \left\{\left\langle v,x\right\rangle +\gamma: \left(v,\gamma \right) \in \Gamma \right\},\quad \forall \,x \in \mathbb{R}^{d},}$$
and we have, for every \(\left(v,\gamma \right) \in \Gamma \),
$$\displaystyle{\left\langle v,\int _{a}^{b}\rho \left(r\right)y\left(r\right)\mathit{dr}\right\rangle +\gamma =\int _{a}^{b}\rho \left(r\right)\left[\left\langle v,y\left(r\right)\right\rangle +\gamma \right]\mathit{dr} \leq \int _{a}^{b}\rho \left(r\right)\varphi \left(y\left(r\right)\right)\mathit{dr},}$$
and the result follows passing to \(\sup _{\left(v,\gamma \right)\in \Gamma }\). ■
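Jensen's inequality of Lemma 6.34 can be checked numerically in a simple scalar case (our own illustrative choices: \(\varphi(x) = x^{2}\), \(\rho(r) = 2r\) on \((0,1)\), \(y(r) = \cos r\)):

```python
import numpy as np

# Jensen (Lemma 6.34): phi( int rho*y ) <= int rho*phi(y) with phi(x) = x**2,
# rho(r) = 2r (a density on (0,1)) and y(r) = cos r.  (Our own toy data.)

def trap(y, t):
    """Trapezoid rule for int y dt on the grid t."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

r = np.linspace(0.0, 1.0, 100_001)
rho = 2.0 * r                               # integrates to 1 on [0,1]
y = np.cos(r)
phi = lambda v: v ** 2

assert abs(trap(rho, r) - 1.0) < 1e-8       # rho is indeed a density
lhs = phi(trap(rho * y, r))
rhs = trap(rho * phi(y), r)
assert lhs <= rhs + 1e-12                   # Jensen's inequality holds
```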
6.3.7.2 Regularization of Convex Functions
Let \(\left(\mathbb{H},\left\vert \cdot \right\vert \right)\) be a real separable Hilbert space and \(\varphi: \mathbb{H} \rightarrow ] -\infty,+\infty ]\) be a proper convex l.s.c. function. The Moreau regularization \(\varphi _{\varepsilon }\) of the convex l.s.c. function \(\varphi\) is defined by
$$\displaystyle{\varphi _{\varepsilon }\left(x\right) =\inf \left\{ \dfrac{1} {2\varepsilon }\left\vert x - z\right\vert ^{2} +\varphi \left(z\right): z \in \mathbb{H}\right\}.}$$
The function \(\varphi _{\varepsilon }\) is a convex function of class \(C^{1}\) on \(\mathbb{H}\); the gradient \(\nabla \varphi _{\varepsilon }\) is a Lipschitz function on \(\mathbb{H}\) with the Lipschitz constant equal to \(\varepsilon ^{-1}\). If we define \(J_{\varepsilon }x\) as the unique point where the above infimum is attained, that is
$$\displaystyle{\varphi _{\varepsilon }\left(x\right) = \dfrac{1} {2\varepsilon }\left\vert x - J_{\varepsilon }x\right\vert ^{2} +\varphi \left(J_{\varepsilon }x\right),}$$
then one can easily prove (see e.g. Brezis [12], Barbu [2], Rockafellar [65] or Zălinescu [71]) that for all \(x \in \mathbb{H}\) and \(\varepsilon > 0\):
then one can easily prove (see e.g. Brezis [12], Barbu [2], Rockafellar [65] or Zălinescu [71]) that for all \(x \in \mathbb{H}\) and \(\varepsilon > 0\):
-
1.
\(\varphi _{\varepsilon }(x) = \dfrac{\varepsilon } {2}\vert \nabla \varphi _{\varepsilon }(x)\vert ^{2} +\varphi (J_{\varepsilon }x)\),
-
2.
\(\varphi (J_{\varepsilon }x) \leq \varphi _{\varepsilon }(x) \leq \varphi (x)\),
-
3.
\(\nabla \varphi _{\varepsilon }\left(x\right) = \partial \varphi _{\varepsilon }\left(x\right)\) and
$$\displaystyle\begin{array}{rcl} \varphi (J_{\varepsilon }x)& \leq &\varphi _{\varepsilon }(x) {}\\ & \leq &\varphi _{\varepsilon }(z) + \left\langle x - z,\nabla \varphi _{\varepsilon }(x)\right\rangle {}\\ & \leq &\varphi (z) + \left\langle x - z,\nabla \varphi _{\varepsilon }(x)\right\rangle,\;\forall \;z \in \mathbb{H}, {}\\ \end{array}$$ -
4.
\(\nabla \varphi _{\varepsilon }(x) \in \partial \varphi (J_{\varepsilon }x)\) i.e.
$$\displaystyle{\left\langle \nabla \varphi _{\varepsilon }(x),z - J_{\varepsilon }x\right\rangle +\varphi (J_{\varepsilon }x) \leq \varphi (z),\,\forall \,z \in \mathbb{H}.}$$Hence \(J_{\varepsilon }x = \left(I +\varepsilon \partial \varphi \right)^{-1}\left(x\right)\) and \(\nabla \varphi _{\varepsilon }(x) = A_{\varepsilon }\left(x\right)\), where A is the maximal monotone operator \(\partial \varphi;\) \(\nabla \varphi _{\varepsilon }\) is called the Moreau–Yosida approximation of\(\partial \varphi\).
-
5.
If \(\left(u_{0},\hat{u}_{0}\right) \in \partial \varphi\), then for all \(y \in \mathbb{H}\)
$$\displaystyle{ \left\{\begin{array}{l} \left(a\right)\quad \left\vert \nabla \varphi _{\varepsilon }\left(u_{0}\right)\right\vert \leq \left\vert \hat{u}_{0}\right\vert, \\ \left(b\right)\quad 0 \leq \varphi \left(u_{0}\right) -\varphi _{\varepsilon }\left(u_{0}\right) \leq \varphi \left(u_{0}\right) -\varphi \left(J_{\varepsilon }u_{0}\right) \leq \varepsilon \left\vert \hat{u}_{0}\right\vert ^{2}, \\ \left(c\right)\quad \left\vert J_{\varepsilon }\left(y\right)\right\vert \leq \left\vert y - u_{0}\right\vert +\varepsilon \left\vert \hat{u}_{0}\right\vert + \left\vert u_{0}\right\vert, \\ \left(d\right)\quad \varphi \left(J_{\varepsilon }y\right) \geq \varphi \left(u_{0}\right) -\left\vert \hat{u}_{0}\right\vert \left\vert y - u_{0}\right\vert -\varepsilon \left\vert \hat{u}_{0}\right\vert ^{2}, \\ \left(e\right)\quad \dfrac{\varepsilon } {2}\left\vert \nabla \varphi _{\varepsilon }\left(y\right)\right\vert ^{2} \leq \varphi _{\varepsilon }\left(y\right) -\varphi \left(u_{0}\right) + \left\vert \hat{u}_{0}\right\vert \left\vert y - u_{0}\right\vert +\varepsilon \left\vert \hat{u}_{0}\right\vert ^{2}.\end{array} \right. }$$(6.26)Indeed \(\left\vert \nabla \varphi _{\varepsilon }\left(u_{0}\right)\right\vert = \left\vert A_{\varepsilon }\left(u_{0}\right)\right\vert \leq \left\vert A^{0}\left(u_{0}\right)\right\vert \leq \left\vert \hat{u}_{0}\right\vert \), where \(A = \partial \varphi \), and
$$\displaystyle\begin{array}{rcl} -\varepsilon \left\vert \hat{u}_{0}\right\vert ^{2}& \leq &-\varepsilon \left\langle \hat{u}_{ 0},\nabla \varphi _{\varepsilon }\left(u_{0}\right)\right\rangle {}\\ & =& \left\langle \hat{u}_{0},J_{\varepsilon }u_{0} - u_{0}\right\rangle {}\\ & \leq &\varphi \left(J_{\varepsilon }u_{0}\right) -\varphi \left(u_{0}\right) {}\\ & \leq &\varphi _{\varepsilon }\left(u_{0}\right) -\varphi \left(u_{0}\right) {}\\ & \leq & 0. {}\\ \end{array}$$For the inequality \(\left(c\right)\) we have
$$\displaystyle{\left\vert J_{\varepsilon }\left(y\right)\right\vert \leq \left\vert J_{\varepsilon }\left(y\right) - J_{\varepsilon }\left(u_{0}\right)\right\vert + \left\vert J_{\varepsilon }\left(u_{0}\right) - u_{0}\right\vert + \left\vert u_{0}\right\vert,}$$and therefore
$$\displaystyle\begin{array}{rcl} \varphi \left(J_{\varepsilon }y\right)& \geq &\varphi \left(u_{0}\right) + \left\langle \hat{u}_{0},J_{\varepsilon }\left(y\right) - u_{0}\right\rangle {}\\ & \geq &\varphi \left(u_{0}\right) -\left\vert \hat{u}_{0}\right\vert \left\vert J_{\varepsilon }\left(y\right) - J_{\varepsilon }\left(u_{0}\right)\right\vert -\left\vert \hat{u}_{0}\right\vert \left\vert J_{\varepsilon }\left(u_{0}\right) - u_{0}\right\vert {}\\ \end{array}$$which yields \(\left(d\right)\).
For the last inequality, \(\left(e\right)\), we have
$$\displaystyle\begin{array}{rcl} \frac{\varepsilon } {2}\left\vert \nabla \varphi _{\varepsilon }\left(y\right)\right\vert ^{2}& =& \varphi _{\varepsilon }\left(y\right) -\varphi \left(J_{\varepsilon }y\right) {}\\ & \leq &\varphi _{\varepsilon }\left(y\right) -\varphi \left(u_{0}\right) + \left\vert \hat{u}_{0}\right\vert \left\vert y - u_{0}\right\vert +\varepsilon \left\vert \hat{u}_{0}\right\vert ^{2}. {}\\ \end{array}$$ -
6.
If \(0 =\varphi (0) \leq \varphi (x)\), \(\forall x \in \mathbb{H}\), it is easy to verify that, moreover
$$\displaystyle\begin{array}{rcl} \begin{array}{r@{\quad }l} j)\quad &\quad 0 \in \partial \varphi \left(0\right),\,\;0 =\varphi _{\varepsilon }(0) \leq \varphi _{\varepsilon }(x),\,\;J_{\varepsilon }\left(0\right) = \nabla \varphi _{\varepsilon }\left(0\right) = 0, \\ \mathit{jj})\quad &\quad \dfrac{\varepsilon } {2}\vert \nabla \varphi _{\varepsilon }(x)\vert ^{2} \leq \varphi _{\varepsilon }(x) \leq \left\langle \nabla \varphi _{\varepsilon }(x),x\right\rangle,\quad \forall x \in \mathbb{H}, \\ \mathit{jjj})\quad &\quad \vert \nabla \varphi _{\varepsilon }(x)\vert \leq \dfrac{1} {\varepsilon } \vert x\vert,\text{ and }0 \leq \varphi _{\varepsilon }(x) \leq \dfrac{1} {2\varepsilon }\vert x\vert ^{2}\,,\;\forall x \in \mathbb{H}, \\ \mathit{jv})\quad &\quad \left\langle \nabla \varphi _{\varepsilon }(x),x - y\right\rangle \geq -\varphi (J_{\varepsilon }x) -\varepsilon \left\langle \nabla \varphi _{\varepsilon }(x),\nabla \varphi _{\varepsilon }(y)\right\rangle,\;\forall x,y \in \mathbb{H}.\end{array} & & {}\end{array}$$(6.27)
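The identities above can be made concrete for \(\varphi(x) = \left\vert x\right\vert\) on \(\mathbb{H} = \mathbb{R}\) (our own example): \(J_{\varepsilon}\) is then the soft-thresholding map and \(\varphi_{\varepsilon}\) the Huber function, namely \(J_{\varepsilon}x = \mathrm{sign}(x)\max(\left\vert x\right\vert - \varepsilon, 0)\), \(\nabla\varphi_{\varepsilon}(x) = (x - J_{\varepsilon}x)/\varepsilon\) and \(\varphi_{\varepsilon}(x) = x^{2}/(2\varepsilon)\) if \(\left\vert x\right\vert \leq \varepsilon\), \(\left\vert x\right\vert - \varepsilon/2\) otherwise. A numerical check of properties 1 and 2:

```python
import numpy as np

# Moreau-Yosida regularization of phi(x) = |x| on R (our own choice of phi):
# J_eps is soft-thresholding and phi_eps is the Huber function; we verify
# property 1 (phi_eps = eps/2 |grad|^2 + phi(J_eps x)), property 2
# (phi(J_eps x) <= phi_eps <= phi), and the 1/eps-Lipschitz bound on grad.

eps = 0.5
x = np.linspace(-3.0, 3.0, 1201)

J = np.sign(x) * np.maximum(np.abs(x) - eps, 0.0)     # J_eps x
grad = (x - J) / eps                                  # grad phi_eps(x)
phi_eps = np.where(np.abs(x) <= eps, x ** 2 / (2 * eps), np.abs(x) - eps / 2)

# 1. phi_eps(x) = (eps/2)|grad phi_eps(x)|^2 + phi(J_eps x)
assert np.allclose(phi_eps, 0.5 * eps * grad ** 2 + np.abs(J))
# 2. phi(J_eps x) <= phi_eps(x) <= phi(x)
assert np.all(np.abs(J) <= phi_eps + 1e-12)
assert np.all(phi_eps <= np.abs(x) + 1e-12)
# grad phi_eps is 1/eps-Lipschitz (finite differences on the uniform grid)
h = x[1] - x[0]
assert np.max(np.abs(np.diff(grad))) <= h / eps + 1e-12
```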
If for a fixed a ≥ 0
or equivalently the function
is convex, too, then by the definition of \(J_{\varepsilon }\) and the monotonicity of the operator \(\partial \varphi\) we have \(\forall \,r \in \left]0,1\right[\):
and then
for all \(x,y \in \mathbb{H}\), \(r \in (0,1),\ \ \varepsilon,\delta > 0\) such that \(0 \leq a(1 - r)\varepsilon \leq r,\ 0 \leq a(1 - r)\delta \leq \ r\).
Let \(u_{0} \in \mathbb{H}\) and r 0 ≥ 0 be such that
Note that if
then we have for all \(\left(x,\hat{x}\right) \in \partial \varphi\)
and in particular for r 0 = 0
Let us prove (6.29). For \(\left(x,\hat{x}\right) \in \partial \varphi\) and \(\left\vert v\right\vert \leq 1\) we have
and consequently
which yields (6.29-a) taking the \(\sup _{\left\vert v\right\vert \leq 1}\).
On the other hand for all arbitrary \(\hat{u}_{0} \in \partial \varphi \left(u_{0}\right)\),
which yields
Hence for all \(\left\vert v\right\vert \leq 1\):
which yields (6.29-b).
Observing that \(\nabla \varphi _{\varepsilon }(x) \in \partial \varphi (J_{\varepsilon }x)\), we have
But
and
Hence for all \(\varepsilon \in ]0,1]\), \(x \in \mathbb{H}\) and \(\hat{u}_{0} \in \partial \varphi (u_{0})\):
In particular for u 0 = 0 and \(\hat{u}_{0} = 0\) we obtain
-
\(\blacklozenge\) If \(\varphi \left(x\right) \geq \varphi \left(0\right) = 0\), for all \(x \in \mathbb{H}\) and
$$\displaystyle{\varphi _{r_{0}}^{\#} =\sup \left\{\varphi \left(r_{ 0}v\right): \left\vert v\right\vert \leq 1\right\} < \infty,}$$then:
$$\displaystyle{ \begin{array}{l@{\quad }l} a)\quad &\quad r_{0}\left\vert \hat{x}\right\vert +\varphi (x) \leq \left\langle \hat{x},x\right\rangle +\varphi _{ r_{0}}^{\#},\quad \forall \,\left(x,\hat{x}\right) \in \partial \varphi, \\ b)\quad &\quad r_{0}\vert \nabla \varphi _{\varepsilon }(x)\vert +\varphi (J_{\varepsilon }x) +\varepsilon \vert \nabla \varphi _{\varepsilon }(x)\vert ^{2} \leq \left\langle \nabla \varphi _{\varepsilon }(x),x\right\rangle +\varphi _{ r_{0}}^{\#}, \\ \quad &\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \forall \,\varepsilon > 0,\forall \,x \in \mathbb{H}.\end{array} }$$(6.32)
6.3.7.3 Convex Functions on \(C([0,T]; \mathbb{R}^{d})\)
Proposition 6.35.
If \(\varphi: \mathbb{R}^{d} \rightarrow ] -\infty,+\infty ]\) is a proper convex l.s.c. function and
\(\Phi:\) \(C([0,T]; \mathbb{R}^{d}) \rightarrow ] -\infty,+\infty ]\),
then
- \(c_{1}\)):
-
\(\Phi \) is a proper convex l.s.c. function,
- c2):
-
\(\partial \Phi (x)\;\mathop{ =}\limits^{ \mathit{def }}\bigg\{k \in \ \mathit{BV }([0,T]; \mathbb{R}^{d})\) :
$$\displaystyle{\left.\int \nolimits _{0}^{T}\left\langle y(r) - x(r),\mathit{dk}(r)\right\rangle + \Phi (x) \leq \Phi (y),\;\,\forall \,y \in C([0,T]; \mathbb{R}^{d})\right\}}$$is a maximal monotone operator.
Proof.
We shall prove only the maximal property of the operator \(\partial \Phi \), since the other properties are immediate. Let \(\mathbb{X} =\) \(C([0,T]; \mathbb{R}^{d})\). Then the dual space is \(\mathbb{X}^{{\ast}} =\) \(\mathit{BV }([0,T]; \mathbb{R}^{d})\). Let
The function \(\psi (z) = \Phi (z) + \dfrac{1} {2}\left\Vert z - x\right\Vert _{\mathbb{X}}^{2} -\left\langle k,z\right\rangle\) defined on \(\mathbb{X}\) is a proper convex l.s.c. function. Furthermore, there exists a \(c \in \mathbb{R}\) such that \(\Phi (z) \geq c,\forall \,z \in \mathbb{X}\). By Ekeland’s principle there exists a \(z_{\varepsilon } \in \mathbb{X}\) such that
Then \(0 \in \partial \tilde{\psi }\) (\(z_{\varepsilon })\), which means
where\(\,F: \mathbb{X} \rightrightarrows \mathbb{X}^{{\ast}}\) is the duality mapping and \(\left\Vert \theta _{\varepsilon }\right\Vert _{\mathbb{X}^{{\ast}}} \leq 1\). Multiplying by \(z_{\varepsilon } - x\) we have \(\;\left\langle \zeta _{\varepsilon } - k,\,z_{\varepsilon } - x\right\rangle + \left\Vert \ z_{\varepsilon } - x\right\Vert _{\mathbb{X}}^{2} + \sqrt{\varepsilon }\left\langle \theta _{\varepsilon },\,z_{\varepsilon } - x\right\rangle = 0\), for some \(\zeta _{\varepsilon } \in \partial \Phi (\ z_{\varepsilon })\), which implies by (6.34) \(\,\left\Vert \ z_{\varepsilon } - x\right\Vert _{\mathbb{X}} \leq \sqrt{\varepsilon }\). Hence \(z_{\varepsilon }\mathop{ \rightarrow }\limits^{ \mathbb{X}}x\), and by (6.35) \(\,\zeta _{\varepsilon }\mathop{ \rightarrow }\limits^{ \mathbb{X}^{{\ast}}}k\), as \(\varepsilon \rightarrow 0\). From the definition of the subdifferential operator: \(\left\langle \zeta _{\varepsilon },\,y - z_{\varepsilon }\right\rangle + \Phi (\) \(z_{\varepsilon }) \leq \Phi (y)\), \(\forall \,y \in \mathbb{X}\) and passing to the limit as \(\varepsilon \rightarrow 0\) we obtain \(\left(x,k\right) \in \partial \Phi \). ■
Proposition 6.36.
If \(\varphi: \mathbb{R}^{d} \rightarrow ] -\infty,+\infty ]\) is a proper convex l.s.c. function, \(\Phi \) is defined by (6.33), \(x \in C([0,T]; \mathbb{R}^{d})\) and \(k \in C([0,T]; \mathbb{R}^{d}) \cap \mathit{BV }([0,T]; \mathbb{R}^{d})\) , then the following assertions are equivalent:
Proof.
We shall show that \(a_{1} \Leftrightarrow a_{2} \Rightarrow a_{3} \Rightarrow a_{4} \Rightarrow a_{5} \Rightarrow a_{2}\).
\(a_{2} \Rightarrow a_{1}\): is evident.
\(a_{1} \Rightarrow a_{2}\): Let \(y \in C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\). We extend \(y\left(t\right) = y\left(0\right)\) for t ≤ 0 and \(y\left(t\right) = y\left(T\right)\) for t ≥ T. The same extension will be considered for the functions x and k. To prove \(a_{2}\)) it is sufficient to consider the case 0 < s < t < T.
Since \(\varphi\) is bounded from below by an affine function, from \(a_{1}\)) we deduce that \(\varphi \left(x\right) \in L^{1}\left(0,T\right)\).
Let \(n_{0} \in \mathbb{N}^{{\ast}}\) be such that \(0 < \frac{1} {n_{0}} < s < t < t + \frac{1} {n_{0}} < T\) and let \(n \geq n_{0}\). Let \(u \in [s,t]\). From \(\left(a_{1}\right)\) we have for \(z = y\left(u\right)\)
Integrating on \(\left[s,t\right]\) with respect to u we deduce that
By Fatou’s Lemma we have
On the other hand by the Lebesgue dominated convergence theorem
Passing to \(\liminf _{n\rightarrow +\infty }\) in (6.37) \(\left(a_{2}\right)\) follows.
\(a_{2} \Rightarrow a_{3}\): is obtained by adding the following inequalities term by term:
\(a_{3} \Rightarrow a_{4}\): is proved in Proposition 6.17 since \(A = \partial \varphi\) is maximal monotone.
\(a_{4} \Rightarrow a_{5}\): Let \((\tilde{x},\tilde{k}) \in \partial \Phi \) be arbitrary. Hence for all \(y,\,\hat{y} \in \) \(C([0,T]; \mathbb{R}^{d})\), \(\,\left(y(r),\hat{y}(r)\right) \in \partial \varphi\) we have: \((y,\int _{0}^{\cdot }\hat{y}\mathit{dt}) \in \partial \Phi \) and
Since \(A = \partial \varphi\) is maximal monotone, by Proposition 6.17 we have
where \((\tilde{x},\tilde{k}) \in \partial \Phi \) is arbitrary. But by Proposition 6.35, \(\partial \Phi \) is a maximal monotone operator. Hence \(\left(x,k\right) \in \partial \Phi \).
\(a_{5} \Rightarrow a_{2}\): Let a, b ≥ 0 be such that \(\varphi (y) + a\left\vert y\right\vert + b \geq 0\). From \(a_{5}\)) it follows that \(\varphi (x) \in L^{1}(0,T)\). Let \(\alpha _{n} \in C([0,T];\ \mathbb{R})\), \(0 \leq \alpha _{n} \leq 1\), and \(\alpha _{n} \nearrow \mathbf{1}_{\left]s,t\right[}\). In \(a_{5}\)) we put \(y\left(r\right)\) \(:=\, (1 -\alpha _{n}\left(r\right))x\left(r\right) +\alpha _{n}\left(r\right)y\left(r\right)\). So we have
and furthermore
Passing to the limit as n → ∞, \(a_{2}\)) follows. ■
Proposition 6.37.
If \(\varphi: \mathbb{R}^{d} \rightarrow ] -\infty,+\infty ]\) is a proper convex l.s.c. function and
\(\tilde{\Phi }\) : \(L^{2}(\Omega;C([0,T]; \mathbb{R}^{d})) \rightarrow ] -\infty,+\infty ]\),
then
-
a)
\(\tilde{\Phi }\) is a proper convex l.s.c. function,
-
b)
\(\partial \tilde{\Phi }(x)\;\mathop{ =}\limits^{ \mathit{def }}\left\{K \in L^{2}(\Omega;\ \mathit{BV }([0,T]; \mathbb{R}^{d})): \mathbb{E}\int \nolimits _{0}^{T}\left\langle Y _{ t} - X_{t},\mathit{dK}_{t}\right\rangle \right.\)
$$\displaystyle{\left.+\mathbb{E}\int \nolimits _{0}^{T}\varphi (X_{ t})\mathit{dt} \leq \mathbb{E}\int \nolimits _{0}^{T}\varphi (Y _{ t})\mathit{dt},\;\,\forall \,Y \in \ L^{2}(\Omega;C([0,T]; \mathbb{R}^{d}))\right\}}$$is a maximal monotone operator,
-
c)
\(K \in \partial \tilde{\Phi }(x)\) iff \(K_{\cdot }\left(\omega \right) \in \partial \Phi (X_{\cdot }(\omega )),\; \mathbb{P}\text{ -a.s.}\;\omega \in \Omega \) , with \(\partial \Phi \) characterized in Proposition 6.36.
Proof.
The assertions a) and b) are obtained in the same manner as c1) and c2) from Proposition 6.35. The point c) follows from b) by putting \(Y:= X1_{A^{c}} + Y 1_{A}\), where \(A \in \mathcal{F}\) is arbitrary. ■
Proposition 6.38.
Let \(\varphi: \mathbb{R}^{d} \rightarrow ] -\infty,+\infty ]\) be a proper convex l.s.c. function such that \(\mathrm{int}\left(\mathrm{Dom}\left(\varphi \right)\right)\neq \varnothing \). Let
\(\Phi \) : \(C([0,T]; \mathbb{R}^{d}) \rightarrow ] -\infty,+\infty ]\) be defined by (6.33). Let \(\left(u_{0},\hat{u}_{0}\right) \in \partial \varphi\) , r 0 ≥ 0 and
Then for all 0 ≤ s ≤ t ≤ T and \(\left(x,k\right) \in \partial \Phi \) :
Moreover for all 0 ≤ s ≤ t ≤ T and for all \(\left(x,k\right) \in \partial \Phi \) :
Proof.
Let \(0 \leq s = t_{0} < t_{1} <\ldots < t_{n} = t \leq T\), \(\max _{i}\left(t_{i+1} - t_{i}\right) =\delta _{n} \rightarrow 0\). By (6.36-\(a_{1}\)) with \(z = u_{0} + r_{0}v\) we obtain
for all \(\left\vert v\right\vert \leq 1\). Hence
and adding term by term for i = 0 to i = n − 1 we have
which clearly yields (6.39). The second inequality (6.40) now follows, using the fact that
for all \(x \in \mathbb{R}^{d}\) and \(\left(u_{0},\hat{u}_{0}\right) \in \partial \varphi\). ■
Remark 6.39.
Since \(\varphi\) is locally bounded on \(\mathrm{int}(\mathrm{Dom}\varphi )\), it follows that for
there exist \(r_{0} > 0\) and \(M_{0} \geq 0\) such that
6.3.8 Semiconvex Functions
Let \(\varphi: \mathbb{R}^{d} \rightarrow \left]-\infty,+\infty \right]\).
Define
We say that \(\varphi\) is a proper function if \(\mathrm{Dom}\left(\varphi \right)\neq \varnothing \) and \(\mathrm{Dom}\left(\varphi \right)\) has no isolated points.
Definition 6.40.
The (Fréchet) subdifferential of \(\varphi\) at \(x \in \mathbb{R}^{d}\) is defined by
if \(x \in \mathrm{ Dom}\left(\varphi \right)\), and \(\partial ^{-}\varphi \left(x\right) = \varnothing \), if \(x\notin \mathrm{Dom}\left(\varphi \right)\).
Example 6.41.
If E is a non-empty closed subset of \(\mathbb{R}^{d}\) and
then \(\varphi\) is l.s.c. and, by a result of Colombo and Goncharov [17] (valid for any closed subset E of a Hilbert space), we have
where \(N_{E}\left(x\right)\) is the closed normal cone at E in \(x \in \mathrm{ Bd}\left(E\right)\)
and
is the distance of a point \(z \in \mathbb{R}^{d}\) to E.
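For instance, when E is the closed unit disk in \(\mathbb{R}^{2}\) (an illustrative choice of ours, not from the text), \(d_{E}\), the projection and the inclusion \(z -\hat{ z} \in N_{E}\left(\hat{z}\right)\) discussed below can be computed explicitly; a minimal sketch:

```python
import math

def d_E(z):          # distance to E = the closed unit disk in R^2
    return max(math.hypot(*z) - 1.0, 0.0)

def pi_E(z):         # projection onto E (unique here, since E is convex)
    r = math.hypot(*z)
    return z if r <= 1.0 else (z[0] / r, z[1] / r)

z = (3.0, 4.0)                       # |z| = 5, hence d_E(z) = 4
p = pi_E(z)                          # the boundary point (0.6, 0.8)
v = (z[0] - p[0], z[1] - p[1])       # z - pi_E(z), expected to lie in N_E(p)
assert abs(d_E(z) - 4.0) < 1e-12
assert abs(v[0] * p[1] - v[1] * p[0]) < 1e-12    # v is parallel to the outward normal p
assert v[0] * p[0] + v[1] * p[1] > 0             # and points outward
```

Here \(N_{E}\left(p\right) = \left\{\lambda p:\lambda \geq 0\right\}\) at a boundary point p of the disk, which the two final checks confirm for this z.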
Denote
Definition 6.42.
A closed set \(E \subset \mathbb{R}^{d}\) is γ–semiconvex, γ ≥ 0, if for all \(x \in \mathrm{ Bd}\left(E\right)\) there exists an \(\hat{x}\neq 0\) such that
Note that if E is a semiconvex set, then
Definition 6.43.
\(\varphi: \mathbb{R}^{d} \rightarrow ] -\infty,+\infty ]\) is a semiconvex function if there exist ρ, γ ≥ 0 such that
-
(a)
\(\overline{\mathrm{Dom}\left(\varphi \right)}\) is γ–semiconvex;
-
(b)
\(\mathrm{Dom}\left(\partial ^{-}\varphi \right)\neq \varnothing;\)
-
(c)
for all \(y \in \mathbb{R}^{d}\) and for all \(\left(x,\hat{x}\right) \in \partial ^{-}\varphi\)
$$\displaystyle{\left\langle \hat{x},y - x\right\rangle +\varphi \left(x\right) \leq \varphi \left(y\right) + \left(\rho +\gamma \left\vert \hat{x}\right\vert \right)\left\vert y - x\right\vert ^{2}.}$$
A function \(\varphi\) satisfying the properties of this definition will sometimes be called a (ρ, γ)–semiconvex function, or a γ–semiconvex function (since the second parameter is the most important one).
Note that a convex function is a \(\left(\rho,\gamma \right)\)–semiconvex function for all ρ, γ ≥ 0.
A set E is γ–semiconvex iff \(I_{E}\) is a (0, γ)–semiconvex function.
If we write the definition of semiconvexity for a fixed \(\left(x_{0},\hat{x}_{0}\right) \in \partial ^{-}\varphi\), then it is clear that we have:
Proposition 6.44.
If \(\varphi: \mathbb{R}^{d} \rightarrow \left]-\infty,+\infty \right]\) is a semiconvex function, then there exists an a ≥ 0 such that
In particular \(\varphi\) is bounded below on bounded subsets of \(\mathbb{R}^{d}\) .
The following properties also hold:
Proposition 6.45.
Let \(\varphi: \mathbb{R}^{d} \rightarrow \left]-\infty,+\infty \right]\) be a semiconvex function. If there exist \(u_{0} \in \mathrm{ Dom}\left(\varphi \right)\), \(r_{0},M_{0} > 0\) such that
then there exist ρ 0 > 0 and b ≥ 0 such that
and moreover there exist \(M \geq 0\) and \(\delta _{0} \in \left]0,r_{0}\right]\) such that
Proof.
Let \(\left(x,\hat{x}\right) \in \partial ^{-}\varphi\). Then for all \(\left\vert v\right\vert \leq 1\) and λ ∈ [0, 1]:
which yields
Taking the \(\sup _{\left\vert v\right\vert \leq 1}\), we deduce for \(\lambda = 1/(1 + 2\gamma r_{0})\):
that is (6.41).
Moreover if \(\left\vert x - u_{0}\right\vert \leq \delta _{0} = 1 \wedge \frac{\rho _{0}} {2\left(1+b\right)} \wedge r_{0}\), then
and (6.42) follows. ■
Let E be a non-empty closed subset of \(\mathbb{R}^{d}\) and \(\varepsilon > 0\). We denote by
the open \(\varepsilon\)-neighbourhood of E and
the closed \(\varepsilon\)-neighbourhood of E.
Given \(z \in \mathbb{R}^{d}\), we denote by \(\varPi _{E}\left(z\right)\) the set of elements x ∈ E with \(\left\vert z - x\right\vert = d_{E}\left(z\right)\). We remark that \(\varPi _{E}\left(z\right)\) is always non-empty since E is non-empty and closed. We also note that if \(z \in \mathbb{R}^{d}\) and \(\hat{z} \in \varPi _{E}\left(z\right),\) then \(z -\hat{ z} \in N_{E}\left(\hat{z}\right).\) This follows from the fact that for \(0 <\varepsilon < 1\) we have
and
We recall the notations
Definition 6.46.
We say that E satisfies the “uniform exterior ball condition” (abbreviated UEBC) if
-
\(N_{E}\left(x\right)\neq \left\{0\right\}\) for all \(x \in \mathrm{ Bd}\left(E\right)\),
-
\(\exists \,r_{0} > 0\) such that, \(\forall \) \(x \in \mathrm{ Bd}\left(E\right)\) and \(\forall \ u \in N_{E}\left(x\right)\) with \(\left\vert u\right\vert = r_{0}\):
$$\displaystyle{d_{E}\left(x + u\right) = r_{0}\quad \text{ or equivalently}\quad B\left(x + u,r_{0}\right) \cap E = \varnothing,}$$(in this case we say that E satisfies the \(r_{0}\)-UEBC).
Note that for all \(v \in N_{E}\left(x\right)\), \(\left\vert v\right\vert \leq r_{0}\), we also have
Indeed since
and
then (6.43) follows.
It is clear that, under the uniform exterior ball condition with ball radius r 0, for all \(z \in \mathbb{R}^{d}\) with \(d_{E}\left(z\right) < r_{0}\), the set \(\varPi _{E}\left(z\right)\) is a singleton. The unique element of \(\varPi _{E}\left(z\right)\) is called the projection of z on E, and it is denoted by \(\pi _{E}\left(z\right)\).
We have the following characterization of the notion of the uniform exterior ball condition:
Lemma 6.47.
Let E be a non-empty closed subset of \(\mathbb{R}^{d}\). The following assertions are equivalent:
-
(i)
E satisfies the uniform exterior ball condition;
-
(ii)
E is a semiconvex subset of \(\mathbb{R}^{d}\) , that is ∃γ ≥ 0 and for all \(x \in \mathrm{ Bd}\left(E\right)\) there exists an \(\hat{x}\neq 0\) such that
$$\displaystyle{\left\langle \hat{x},y - x\right\rangle \leq \gamma \left\vert \hat{x}\right\vert \left\vert y - x\right\vert ^{2},\quad \text{ for all }y \in E,}$$(in this case \(\hat{x} \in N_{E}\left(x\right)\) follows);
-
(iii)
\(\exists \gamma \geq 0,\forall x,y \in \mathrm{ Bd}\left(E\right),\forall \lambda \in ]0,1[\) :
$$\displaystyle{d_{E}\left(\left(1-\lambda \right)x +\lambda y\right) \leq 4\lambda \left(1-\lambda \right)\gamma \left\vert x - y\right\vert ^{2};}$$ -
(iii’)
\(\exists \gamma \geq 0,\forall x,y \in E,\forall \lambda \in ]0,1[:\)
$$\displaystyle{d_{E}\left(\left(1-\lambda \right)x +\lambda y\right) \leq 4\lambda \left(1-\lambda \right)\gamma \left\vert x - y\right\vert ^{2};}$$ -
(iv)
\(\exists \gamma \geq 0,\forall x,y \in \mathrm{ Bd}(E)\) :
$$\displaystyle{d_{E}\left(\frac{x + y} {2} \right) \leq \gamma \left\vert x - y\right\vert ^{2},}$$ -
(iv’)
\(\exists \gamma \geq 0,\forall x,y \in E\) :
$$\displaystyle{d_{E}\left(\frac{x + y} {2} \right) \leq \gamma \left\vert x - y\right\vert ^{2},}$$ -
(v)
∃ δ > 0 and μ > 0 such that the function
$$\displaystyle{x\longrightarrow \psi _{E}^{\mu }\left(x\right)\mathop{ =}\limits^{ \mathit{def }}d_{ E}\left(x\right) +\mu \left\vert x\right\vert ^{2}: U_{\delta }\left(E\right) \rightarrow \mathbb{R}}$$is convex on each convex subset of \(U_{\delta }\left(E\right)\) .
Proof.
We first remark that the conditions \(\left(\mathit{ii}\right),\left(\mathit{iii}\right),\left(\mathit{iii}^{{\prime}}\right),\left(\mathit{iv}\right),\left(\mathit{iv}^{{\prime}}\right)\) are satisfied for γ = 0 if and only if E is convex; the convex sets satisfy the r-UEBC for all r > 0.
Step I. \(\left(i\right) \Leftrightarrow (\mathit{ii})\)
\(\left(i\right) \Rightarrow \left(\mathit{ii}\right)\): Let \(x \in \mathrm{ Bd}\left(E\right)\) and \(\hat{x} \in N_{E}\left(x\right)\), \(\hat{x}\neq 0\). Then there exists an r 0 > 0 such that
We have for all y ∈ E and \(\gamma = \frac{1} {2r_{0}}\)
\(\left(\mathit{ii}\right) \Rightarrow \left(i\right)\): Let r 0 > 0 be such that 2γ r 0 ≤ 1. Let \(x \in \mathrm{ Bd}\left(E\right)\) be arbitrary and \(u = r_{0}\dfrac{\hat{x}} {\left\vert \hat{x}\right\vert }\). Then
Hence
that is E satisfies the r 0-uniform exterior ball condition.
From this equivalence we have that
Step II. \(\left(\mathit{iii}\right) \Leftrightarrow (\mathit{iii}^{{\prime}})\).
We have to prove only \(\left(\mathit{iii}\right) \Rightarrow (\mathit{iii}^{{\prime}})\). Let x, y ∈ E and 0 < λ < 1. Let \(u_{\lambda } = \left(1-\lambda \right)x +\lambda y = x +\lambda \left(y - x\right)\). If u λ ∈ E, then
If \(u_{\lambda }\notin E\), then there exist 0 < α < λ < β < 1 such that
and
We have
and consequently
Step III. \(\left(\mathit{iii}^{{\prime}}\right) \Rightarrow (\mathit{iv}^{{\prime}}) \Rightarrow (\mathit{iv}) \Rightarrow \left(i\right) \Rightarrow \left(\mathit{iii}\right)\).
\(\left(\mathit{iii}^{{\prime}}\right) \Rightarrow (\mathit{iv}^{{\prime}}) \Rightarrow (\mathit{iv})\) as particular cases: (iv ′) for \(\left(\mathit{iii}^{{\prime}}\right)\) and (iv) for (iv ′).
\((\mathit{iv})\Longrightarrow\left(i\right)\): We argue by contradiction. We can assume γ > 0. We suppose that there is some \(z \in \mathbb{R}^{d}\) in the \(r_{0}\)-neighbourhood of E such that, for two different \(x,y \in \mathrm{ Bd}\left(E\right)\),
Under this hypothesis the vectors \(z -\frac{1} {2}\left(x + y\right) = \frac{1} {2}\left[\left(z - y\right) + \left(z - x\right)\right]\) and \(2\left(x - y\right) = 2\left[\left(z - y\right) -\left(z - x\right)\right]\) are orthogonal and, consequently,
Let \(u \in \varPi _{E}\left(\frac{1} {2}\left(x + y\right)\right)\). Then, from condition (iv) we obtain
Hence, we have
from which we easily deduce that
which is a contradiction. Consequently, condition (iv) implies the \(\frac{1} {2\gamma }\)-uniform exterior ball condition.
\((i)\Longrightarrow\left(\mathit{iii}\right)\): Let us now suppose that E satisfies the uniform exterior ball condition with an \(r_{0}\)-ball. Let \(x,y \in \mathrm{ Bd}\left(E\right)\). In a first step we assume that x, y are two different elements such that \(0 < \left\vert x - y\right\vert \leq r_{0}\). Let λ ∈ ]0, 1[ be such that \(x_{\lambda } = x +\lambda \left(y - x\right)\notin E\) (if there is no such λ, we are done), and let \(\overline{x}_{\lambda } \in \varPi _{E}\left(x_{\lambda }\right)\). We fix any \(u_{\lambda } \in N_{E}\left(\overline{x}_{\lambda }\right)\), \(\left\vert u_{\lambda }\right\vert = r_{0}\) and put \(z_{\lambda } = \overline{x}_{\lambda } + u_{\lambda }\). Then, due to condition \(\left(i\right)\), \(\left\vert v - z_{\lambda }\right\vert \geq r_{0}\), for all v ∈ E. In particular, we have
We also observe that
and
Hence,
On the other hand, for \(\gamma \geq 1/\left(2r_{0}\right)\),
Consequently, \(d_{E}\left(x_{\lambda }\right) = \left\vert x_{\lambda } -\overline{x}_{\lambda }\right\vert \leq 4\lambda \left(1-\lambda \right)\gamma \left\vert x - y\right\vert ^{2}\), if \(\gamma \geq 1/\left(4r_{0}\right)\).
In order to complete the proof, we still have to consider the case of \(x,y \in \mathrm{ Bd}\left(E\right)\) with \(\left\vert x - y\right\vert > r_{0}\). In this case, for \(\gamma \geq 1/\left(2r_{0}\right)\), we have
This proves that under the r 0-uniform exterior ball condition the statement (iii) holds with \(\gamma \geq 1/\left(2r_{0}\right)\).
Step IV. \(\left(\mathit{v}\right) \Rightarrow (\mathit{iii}) \Rightarrow (\mathit{v})\).
\(\left(\mathit{v}\right)\Longrightarrow\left(\mathit{iii}\right)\): Let \(\lambda \in \left(0,1\right)\) and \(x,y \in \mathrm{ Bd}\left(E\right)\) with \(\left\vert x - y\right\vert <\delta\). Then \(x,y \in B\left(x;\delta \right) = \left\{z \in \mathbb{R}^{d}: \left\vert z - x\right\vert <\delta \right\} \subset U_{\delta }\left(E\right)\), and, consequently,
By subtracting \(\mu \left\vert \lambda x + \left(1-\lambda \right)y\right\vert ^{2}\) on the left-hand and the right-hand sides of this inequality we obtain
On the other hand, if \(x,y \in \mathrm{ Bd}\left(E\right)\) are such that \(\left\vert x - y\right\vert \geq \delta\), then
This shows that (iii) is fulfilled for \(\gamma \geq \frac{1} {2\delta } \vee \frac{\mu } {4}\).
\(\left(\mathit{iii}\right)\Longrightarrow\left(\mathit{v}\right)\): We fix any \(\delta \in \left(0,r_{0}\right)\), and we recall that \(\pi _{E}: \overline{U}_{\delta }\left(E\right) \rightarrow E\) is Lipschitz continuous with Lipschitz constant \(L_{\delta } = r_{0}/\left(r_{0}-\delta \right)\). Let \(\lambda \in \left(0,1\right)\) and \(u,v \in \overline{U}_{\delta }\left(E\right)\) be such that \(\left(1-\lambda \right)u +\lambda v \in \overline{U}_{\delta }\left(E\right)\). For simplicity of notation we put \(x =\pi _{E}\left(u\right)\), \(y =\pi _{E}\left(v\right)\), \(z_{\lambda } = \left(1-\lambda \right)u +\lambda v\), and \(\overline{z}_{\lambda } = \left(1-\lambda \right)x +\lambda y\). Then,
Hence, for \(\mu \geq 4\gamma L_{\delta }^{2}\),
This proves that ψ E μ is convex on each convex subset of \(\overline{U}_{\delta }\left(E\right)\). ■
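The equivalence \(\left(i\right) \Leftrightarrow (\mathit{iv})\) can be observed numerically. In the sketch below (our illustration, not from the text) we take E to be the unit circle \(S^{1}\), a non-convex closed set satisfying the 1-UEBC, and verify the midpoint condition (iv) with \(\gamma = 1/\left(2r_{0}\right) = 1/2\):

```python
import math

gamma = 0.5                  # candidate constant gamma = 1/(2 r0), here r0 = 1

def d_E(z):                  # distance to E = the unit circle S^1 (non-convex, closed)
    return abs(math.hypot(*z) - 1.0)

def on_circle(t):
    return (math.cos(t), math.sin(t))

# midpoint condition (iv): d_E((x + y)/2) <= gamma |x - y|^2 on Bd(E) = E = S^1
for i in range(200):
    for j in range(200):
        x, y = on_circle(2 * math.pi * i / 200), on_circle(2 * math.pi * j / 200)
        m = ((x[0] + y[0]) / 2, (x[1] + y[1]) / 2)
        assert d_E(m) <= gamma * ((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2) + 1e-12
```

For this E the inequality can also be verified by hand: with \(\left\vert x - y\right\vert = 2\sin \left(\theta /2\right)\) one has \(d_{E}\left(\frac{x+y} {2} \right) = 1 -\cos \left(\theta /2\right) \leq \frac{1} {2}\left\vert x - y\right\vert ^{2}\).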
Corollary 6.48.
If E is a closed subset of \(\mathbb{R}^{d}\) and satisfies the \(r_{0}\)-uniform exterior ball condition, then for all x ∈ E
and \(\varphi = I_{E}\) is a \((0, \dfrac{1} {2r_{0}})\) –semiconvex l.s.c. function. Moreover \(N_{E}\left(x\right) = \partial ^{-}I_{E}\left(x\right)\) .
Let \(r_{0} > 0\). The set E satisfies the \(r_{0}\)-uniform exterior ball condition if and only if E is \(\frac{1} {2r_{0}}\)–semiconvex.
We recall the following well-known property of the projection.
Lemma 6.49.
Suppose that E satisfies the uniform exterior ball condition with ball radius \(r_{0}\) and \(\varepsilon \in ]0,r_{0}[\). Then the projection \(\pi _{E}\) restricted to \(\overline{U}_{\varepsilon }\left(E\right)\) (the closed \( \varepsilon \) -neighbourhood of E) is Lipschitz with Lipschitz constant \(L_{\varepsilon } = r_{0}/\left(r_{0}-\varepsilon \right)\) , and the function \(d_{E}^{2}\) is of class \(C^{1}\) on \(\overline{U}_{\varepsilon }\left(E\right)\) with
for all \(z \in \overline{U}_{\varepsilon }\left(E\right)\) .
Proof.
To simplify we denote \(\pi =\pi _{E}\) and \(d = d_{E}\). Let \(x,y \in \overline{U}_{\varepsilon }\left(E\right).\) Then we have \(x -\pi \left(x\right) \in N_{E}\left(\pi \left(x\right)\right)\), \(y -\pi \left(y\right) \in N_{E}\left(\pi \left(y\right)\right)\) and
Hence
To obtain the second part of the lemma it is sufficient to show that there exists a positive constant \(C = C_{\varepsilon,r_{0}}\) such that
We have
Since
and
the inequality (6.46) follows from this and (6.45). ■
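Both conclusions of Lemma 6.49 can be tested numerically. In this sketch (our illustration, with E the unit circle, which satisfies the \(r_{0}\)-UEBC for \(r_{0} = 1\), and \(\varepsilon = 1/2\)) we check the Lipschitz constant \(L_{\varepsilon } = r_{0}/\left(r_{0}-\varepsilon \right) = 2\) and the gradient formula \(\nabla d_{E}^{2}\left(z\right) = 2\left(z -\pi _{E}\left(z\right)\right)\) by finite differences:

```python
import math
import random

r0, eps = 1.0, 0.5           # the unit circle satisfies the r0-UEBC with r0 = 1

def pi_E(z):                 # projection onto S^1, single-valued where d_E(z) < r0
    r = math.hypot(*z)
    return (z[0] / r, z[1] / r)

def dE2(z):                  # squared distance to S^1
    return (math.hypot(*z) - 1.0) ** 2

L = r0 / (r0 - eps)          # Lipschitz constant L_eps = 2 claimed by the lemma
random.seed(0)
for _ in range(1000):        # random points of the closed eps-neighbourhood of S^1
    t1, t2 = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    r1, r2 = random.uniform(r0 - eps, r0 + eps), random.uniform(r0 - eps, r0 + eps)
    z = (r1 * math.cos(t1), r1 * math.sin(t1))
    w = (r2 * math.cos(t2), r2 * math.sin(t2))
    assert math.dist(pi_E(z), pi_E(w)) <= L * math.dist(z, w) + 1e-9

# the C^1 formula grad d_E^2(z) = 2 (z - pi_E(z)), checked by central differences
z, h = (0.8, 0.3), 1e-6
g_fd = ((dE2((z[0] + h, z[1])) - dE2((z[0] - h, z[1]))) / (2 * h),
        (dE2((z[0], z[1] + h)) - dE2((z[0], z[1] - h))) / (2 * h))
g_ex = (2 * (z[0] - pi_E(z)[0]), 2 * (z[1] - pi_E(z)[1]))
assert max(abs(g_fd[0] - g_ex[0]), abs(g_fd[1] - g_ex[1])) < 1e-5
```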
6.3.9 Differential Equations
Let \(\mathbb{H}\) be a separable real Hilbert space. If \(A: \mathbb{H} \rightrightarrows \mathbb{H}\) is a maximal monotone operator, \(u_{0} \in \overline{D\left(A\right)}\), \(f \in L^{1}\left(0,T; \mathbb{H}\right)\), then the strong solution of the Cauchy problem
is defined as a function \(u \in C\left(\left[0,T\right]; \mathbb{H}\right)\) satisfying:
and we shall write \(u = \mathcal{S}\left(A;u_{0},f\right)\). Note that the strong solution is unique when it exists. Indeed if u, v are two solutions corresponding to \(\left(u_{0},f\right)\), \(\left(v_{0},g\right)\), respectively, then
and by the monotonicity of A it follows that
Using Gronwall’s inequality (Lemma 6.63, Annex C) we obtain
We recall from Barbu [3], p. 31, that the following proposition holds:
Proposition 6.50.
If A is a maximal monotone operator on \(\mathbb{H}\) , \(u_{0} \in D\left(A\right)\) and \(f \in W^{1,1}\left(\left[0,T\right]; \mathbb{H}\right)\) , then the Cauchy problem (6.47) has a unique strong solution \(u \in W^{1,\infty }\left(\left[0,T\right]; \mathbb{H}\right)\). Moreover if \(A_{\varepsilon }\) is the Yosida approximation of the operator A and \(u_{\varepsilon }\) is the solution of the approximate equation
then for all \(\left(x_{0},y_{0}\right) \in A\) there exists a constant \(C = C\left(\alpha,T,x_{0},y_{0}\right) > 0\) such that
- c 1):
-
\(\left\Vert u_{\varepsilon }\right\Vert _{C\left(\left[0,T\right];\mathbb{H}\right)} \leq C\left(1 + \left\vert u_{0}\right\vert + \left\Vert f\right\Vert _{L^{1}(0,T;\mathbb{H})}\right)\) , and
- c2):
-
\(\lim \limits _{\varepsilon \searrow 0}u_{\varepsilon } = u\) in \(C\left(\left[0,T\right]; \mathbb{H}\right)\) .
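The convergence c2) can be observed in a simple example of our own (not from the text): take \(A = \partial \varphi\) with \(\varphi \left(x\right) = \left\vert x\right\vert \) on \(\mathbb{H} = \mathbb{R}\), \(f = 0\), \(u_{0} = 1\). The strong solution of \(u^{{\prime}} + \mathrm{sign}\left(u\right) \ni 0\) is \(u\left(t\right) = \left(1 - t\right)^{+}\), and the solutions of the approximate equation with the Yosida approximation \(A_{\varepsilon }\) approach it as \(\varepsilon \searrow 0\):

```python
import math

def J(eps, x):                    # resolvent J_eps = (I + eps*dphi)^(-1) for phi = |.|
    return math.copysign(max(abs(x) - eps, 0.0), x)

def A_eps(eps, x):                # Yosida approximation A_eps(x) = (x - J_eps x)/eps
    return (x - J(eps, x)) / eps

def u_eps_at_T(eps, u0=1.0, T=2.0, n=100000):
    h, u = T / n, u0
    for _ in range(n):            # explicit Euler for u' + A_eps(u) = 0, u(0) = u0
        u -= h * A_eps(eps, u)
    return u

# the strong solution of u' + sign(u) ∋ 0, u(0) = 1 is u(t) = max(1 - t, 0),
# so u(2) = 0; the approximate values u_eps(2) tend to 0 as eps decreases
errs = [abs(u_eps_at_T(eps)) for eps in (0.5, 0.1, 0.02)]
assert errs[0] > errs[1] > errs[2] and errs[2] < 1e-3
```

The time discretization is only a convenience here; the point is the monotone decrease of the error in \(\varepsilon\), in line with c2).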
We introduce the notation
From Barbu [2] (Chap. IV, p. 197, Theorem 2.5) we recall:
Proposition 6.51.
Let A be a maximal monotone operator on \(\mathbb{H}\) such that
If \(u_{0} \in \overline{D\left(A\right)}\) and \(f \in W^{1,1}\left(\left[0,T\right]; \mathbb{H}\right)\) , then the Cauchy problem (6.47) has a unique strong solution \(u \in W^{1,1}\left(\left[0,T\right]; \mathbb{H}\right)\) .
By the continuity property (6.48) one can generalize the notion of the solution of Eq. (6.47) as follows:
-
\(\blacklozenge\) u is a generalized solution of the Cauchy problem (6.47) with
$$\displaystyle{u_{0} \in \overline{D\left(A\right)},\quad f \in L^{1}\left(0,T; \mathbb{H}\right),}$$(and we shall write \(u = \mathcal{G}\mathcal{S}\left(A;u_{0},f\right)\)) if
-
\(\diamond \) \(u \in C\left(\left[0,T\right]; \mathbb{H}\right)\) and
-
\(\diamond \) there exist \(u_{0n} \in D\left(A\right)\), \(f_{n} \in W^{1,1}\left(\left[0,T\right]; \mathbb{H}\right)\) such that
$$\displaystyle{\begin{array}{l@{\quad }l} a)\quad &\quad u_{0n} \rightarrow u_{0}\quad \;\text{ in }\mathbb{H}, \\ b)\quad &\quad f_{n} \rightarrow f\;\quad \text{ in }L^{1}\left(0,T; \mathbb{H}\right), \\ c)\quad &\quad u_{n} = \mathcal{S}\left(A;u_{0n},f_{n}\right) \rightarrow u\quad \text{ in }C\left(\left[0,T\right]; \mathbb{H}\right).\end{array} }$$
-
Clearly we have:
Proposition 6.52.
If A is a maximal monotone operator on \(\mathbb{H}\), \(u_{0} \in \overline{D\left(A\right)}\) and \(f \in L^{1}\left(0,T; \mathbb{H}\right)\) , then the Cauchy problem (6.47) has a unique generalized solution \(u \in C\left(\left[0,T\right]; \mathbb{H}\right)\). Moreover if \(u = \mathcal{G}\mathcal{S}\left(A;u_{0},f\right)\) and \(v = \mathcal{G}\mathcal{S}\left(A;v_{0},g\right)\) then
and for all \(\left(x_{0},\hat{x}_{0}\right) \in A\) there exists a constant \(C = C\left(T,x_{0},\hat{x}_{0}\right) > 0\) such that
In the case when \(\mathrm{int}\left(D\left(A\right)\right)\neq \varnothing \) one can give supplementary properties of generalized solutions.
Proposition 6.53.
Let \(A \subset \mathbb{H} \times \mathbb{H}\) be a maximal monotone operator such that
Let \(u_{0} \in \overline{D\left(A\right)}\) and \(f \in L^{1}\left(0,T; \mathbb{H}\right)\). Then:
-
I.
there exists a unique pair (u,k) such that
$$\displaystyle{\left(P_{A}\right): \left\{\begin{array}{l@{\quad }l} a)\quad &\quad u \in C([0,T]; \mathbb{H}),\quad u(t) \in \overline{D(A)}\;\;\forall t \in [0,T],\;u(0) = u_{0}, \\ b)\quad &\quad k \in C([0,T]; \mathbb{H}) \cap \mathit{BV }([0,T]; \mathbb{H}),\;k(0) = 0, \\ c)\quad &\quad u(t) + k\left(t\right) = u_{0} + \int \nolimits _{0}^{t}f(s)\mathit{ds},\;\forall t \in [0,T], \\ d)\quad &\quad \int _{s}^{t}\left\langle u\left(r\right) - x,\mathit{dk}\left(r\right) -\hat{ x}\mathit{dr}\right\rangle \geq 0, \\ \quad &\quad \quad \quad \quad \quad \quad \quad \quad \forall \,0 \leq s \leq t \leq T,\;\forall \left(x,\hat{x}\right) \in A;\end{array} \right.}$$ -
II.
\(u = \mathcal{G}\mathcal{S}\left(A;u_{0},f\right)\) if and only if u is solution of the problem \(\left(P_{A}\right)\) ;
-
III.
the following estimate holds:
$$\displaystyle{\left\Vert u\right\Vert _{C\left(\left[0,T\right];\mathbb{H}\right)}^{2} + \left\Vert k\right\Vert _{\mathit{ BV}\left(\left[0,T\right];\mathbb{H}\right)} \leq C\left(1 + \left\vert u_{0}\right\vert ^{2} + \left\Vert f\right\Vert _{ L^{1}(0,T;\mathbb{H})}^{2}\right),}$$where C is a positive constant independent of u 0 and f.
Proof.
Uniqueness. If \(\left(u,k\right)\) and \(\left(v,\ell\right)\) are two solutions of the problem \(\left(P_{A}\right)\) corresponding to \(\left(u_{0},f\right)\), \(\left(v_{0},g\right)\) respectively, then
But by Proposition 6.17, the monotonicity of A and \(\left(P_{A} - d\right)\) we have
Hence
which yields (6.49) and, in particular, the uniqueness follows.
Existence. Let \(u_{0n} \in D\left(A\right)\), \(f_{n} \in W^{1,1}\left(\left[0,T\right]; \mathbb{H}\right)\) such that
Let \(u_{n} = \mathcal{S}\left(A;u_{0n},f_{n}\right)\) be the strong solution corresponding to \(\left(A;u_{0n},f_{n}\right)\). Hence there exists an \(h_{n} \in L^{1}\left(0,T; \mathbb{H}\right)\) such that \(h_{n}\left(t\right) \in Au_{n}\left(t\right)\), a. e. \(t \in \left]0,T\right[\) and denoting \(k_{n}\left(t\right) = \int _{0}^{t}h_{ n}\left(s\right)\mathit{ds}\) we have
Let \(x_{0} \in \mathrm{ int}\left(D\left(A\right)\right)\) and \(\hat{x}_{0} \in A\left(x_{0}\right)\). Then
Since
we infer
By the Gronwall type inequality from Lemma 6.63, Annex C, we obtain
where \(C = C\left(x_{0},\hat{x}_{0},T\right) > 0\).
By Proposition 6.5 we have \(a.e.\;t \in \left]0,T\right[\):
and then
with C a constant depending on \(x_{0},\hat{x}_{0},T,M_{0},r_{0}\).
Hence \(k_{n}\left(t\right) = \int _{0}^{t}h_{ n}\left(s\right)\mathit{ds}\) is bounded in \(\mathit{BV }\left(\left[0,T\right]; \mathbb{H}\right)\). Then there exists a \(k \in \mathit{BV }\left(\left[0,T\right]; \mathbb{H}\right)\) such that on a subsequence also denoted by k n we have
The sequence \(\left(u_{n}\right)_{n\in \mathbb{N}^{{\ast}}}\) is a Cauchy sequence in \(C\left(\left[0,T\right]; \mathbb{H}\right)\) since if \(u_{m} = \mathcal{S}\left(A;u_{0m},f_{m}\right)\) then
Then there exists a \(u \in C\left(\left[0,T\right]; \mathbb{H}\right)\) such that
Passing to the limit in (6.51), we obtain that \(\left(u,k\right)\) satisfies \(\left(P_{A}\right)\). The proof is complete. ■
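For \(\mathbb{H} = \mathbb{R}\) and \(A = \partial I_{\left[0,\infty \right[}\) (the normal cone to the half-line, a maximal monotone operator with \(\mathrm{int}\left(D\left(A\right)\right)\neq \varnothing \)), the pair \(\left(u,k\right)\) of problem \(\left(P_{A}\right)\) is given by the classical Skorokhod reflection. The following discrete sketch is our illustration (the forcing f is an arbitrary choice) and checks conditions a), c) and d) of \(\left(P_{A}\right)\):

```python
import math

n, T, u0 = 2000, 3.0, 0.5
h = T / n
f = lambda t: -2.0 * math.cos(2.0 * t)     # arbitrary forcing pushing u below 0

y = [u0]                                   # driving path y(t) = u0 + int_0^t f(s) ds
for i in range(n):
    y.append(y[-1] + h * f(i * h))

# Skorokhod reflection on [0, +inf): u = y + L with L(t) = max(0, sup_{s<=t} -y(s)),
# and k = -L, so that u(t) + k(t) = u0 + int_0^t f(s) ds, i.e. condition (P_A)-c)
u, k, L = [], [], 0.0
for v in y:
    L = max(L, -v)
    u.append(v + L)
    k.append(-L)

assert min(u) >= 0.0                       # (P_A)-a): u stays in cl(D(A)) = [0, +inf)
assert all(abs(u[i] + k[i] - y[i]) < 1e-12 for i in range(n + 1))      # (P_A)-c)
# (P_A)-d) tested for (x, xhat) = (1, 0) in A: dk <= 0 and dk charges only {u = 0},
# hence int (u(r) - 1) (dk(r) - 0 dr) >= 0
assert sum((u[i] - 1.0) * (k[i + 1] - k[i]) for i in range(n)) >= -1e-9
```

Here k is non-increasing and moves only on the contact set \(\left\{u = 0\right\}\), which is exactly the membership \(\mathit{dk}\left(t\right) \in A\,u\left(t\right)\mathit{dt}\) for this A.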
Just as the assumption \(\mathrm{int}\left(D\left(A\right)\right)\neq \varnothing \) has a smoothing effect, as we saw in Proposition 6.51, a maximal monotone operator of subdifferential type \(A = \partial \varphi\) also has a smoothing effect.
Consider the differential equation
where
Proposition 6.54.
If \(u_{0} \in \overline{D\left(\partial \varphi \right)}\left(= \overline{\mathrm{Dom}\left(\varphi \right)}\right)\) and \(f \in L^{2}\left(0,T; \mathbb{H}\right)\) , then the Cauchy problem (6.52) has a unique strong solution. Moreover \(u \in W^{1,2}\left(\delta,T; \mathbb{H}\right)\), \(\forall \delta > 0\), \(\sqrt{ t}\dfrac{\mathit{du}} {\mathit{dt}} \in L^{2}\left(0,T; \mathbb{H}\right)\), \(\varphi \left(u\right) \in L^{1}\left(0,T\right)\) and if \(u_{0} \in \mathrm{ Dom}\left(\varphi \right)\) , then \(\dfrac{\mathit{du}} {\mathit{dt}} \in L^{2}\left(0,T; \mathbb{H}\right)\) and \(\varphi \left(u\right) \in L^{\infty }\left(0,T\right)\) .
Consider now the Cauchy problem
where
and
Hence for all \(\left(x,\hat{x}\right) \in \partial ^{-}\varphi\)
We denote here by \(\partial ^{-}\varphi \left(x\right)\) the Fréchet subdifferential given in Definition 6.40. Recall that \(E\subset \mathbb{R}^{d}\) is locally closed if for all x ∈ E, there exists a δ > 0 such that \(E \cap \overline{B}\left(x,\delta \right)\) is closed.
From Degiovanni–Marino–Tosques [21] and Rossi–Savaré [66] we have:
Proposition 6.55.
Let the assumptions (6.54) and (6.55) be satisfied. Then there exist \(h \in L^{2}\left(0,T; \mathbb{R}^{d}\right)\) and a unique absolutely continuous function \(x: \left[0,T\right] \rightarrow \mathrm{ Dom}\left(\varphi \right)\) such that:
and
Moreover \(a.e.\,\;t,s \in \left]0,T\right[,\;s < t\) :
and there exists a positive constant C T (independent of x 0 and g) such that
Remark 6.56.
If we put
then
that is \(\left(x,k\right)\) is the solution of the generalized Skorohod problem \(\left(x_{0},m,\partial ^{-}\varphi \right)\) with \(m\left(t\right) = \int _{0}^{t}g\left(s\right)\mathit{ds}\) (see Definition 4.29).
6.3.10 Auxiliary Results
Proposition 6.57.
If \(g \in L^{1}\left(0,T\right)\) and
then
Proof.
Consider the continuous function \(t\longmapsto G\left(t\right) = \int _{0}^{t}\left\vert g\left(s\right)\right\vert \mathit{ds}\) and let \(\mathbf{m}_{G}\left(\varepsilon \right)\) be the modulus of continuity of G on \(\left[0,T\right]\). We have for all \(t \in \left[0,T\right]\) and λ > 0:
which yields the result. ■
We now give a variant of the Banach fixed point theorem.
Let \(\left\{\left(\mathbb{V}_{a},d_{a}\right): a \geq 0\right\}\) be a family of complete metric spaces such that for all 0 ≤ a ≤ b:
with a continuous embedding. Let
and assume \(\mathbb{V}\neq \varnothing \). Then \(\mathbb{V}\) is a complete metric space with respect to the metric
and if \(x_{n},x \in \mathbb{V}\), \(n \in \mathbb{N}^{{\ast}}\), then as n → ∞,
Lemma 6.58.
Let \(\Gamma: \mathbb{V} \rightarrow \mathbb{V}\) be a mapping satisfying: there exists an \(a_{0} \geq 0\) and, for all \(a \geq a_{0}\), a \(\delta _{a} \in \left]0,1\right[\) such that
Then \(\Gamma \) has a unique fixed point, i.e. there exists a unique \(x \in \mathbb{V}\) such that
(Banach’s fixed point theorem corresponds to the case \(\left(\mathbb{V}_{a},d_{a}\right) \equiv \left(\mathbb{V}_{0},d_{0}\right)\) for all a ≥ 0.)
Proof.
We define
Then by induction we deduce that
and
for all \(a \geq a_{0}\), \(n,p \in \mathbb{N}^{{\ast}}\). Hence there exists a unique \(x^{\left(a\right)} \in \mathbb{V}_{a}\) such that as n → ∞
Moreover by the continuity of the embedding \(\mathbb{V}_{a} \subset \mathbb{V}_{b}\) for 0 ≤ b ≤ a, we infer
Consequently \(x^{\left(a\right)} = x^{\left(a_{0}\right)}\) for all \(a \geq a_{0}\), \(x\mathop{ =}\limits^{ \mathit{def }}x^{\left(a_{0}\right)} \in \mathbb{V}\) and for \(a \geq a_{0}\)
which yields
The fixed point x is unique, since if \(x,y \in \mathbb{V}\) are two fixed points, then for \(a \geq a_{0}\)
and x = y follows. ■
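In the classical case \(\left(\mathbb{V}_{a},d_{a}\right) \equiv \left(\mathbb{V}_{0},d_{0}\right)\), Lemma 6.58 reduces to Banach's theorem and the proof to the usual Picard iteration \(x_{n+1} = \Gamma x_{n}\). A minimal numerical sketch of that special case; the contraction \(\Gamma x =\cos x\) is an illustrative choice, not taken from the text:

```python
import math

def fixed_point(gamma, x0, tol=1e-12, max_iter=1000):
    """Picard iteration x_{n+1} = gamma(x_n) for a contraction gamma."""
    x = x0
    for _ in range(max_iter):
        x_next = gamma(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# gamma(x) = cos(x) is a contraction on [0, 1] (|gamma'(x)| = |sin x| <= sin 1 < 1),
# so Banach's theorem yields a unique fixed point, approached geometrically fast.
x_star = fixed_point(math.cos, 1.0)
```

The geometric convergence rate visible here is exactly the estimate \(d_{a}\left(x_{n+p},x_{n}\right) \leq \delta _{a}^{n}\,d_{a}\left(x_{1},x_{0}\right)/\left(1 -\delta _{a}\right)\) used in the proof above.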
6.4 Annex C: Deterministic and Stochastic Inequalities
6.4.1 Deterministic Inequalities
Proposition 6.59 (Stieltjes–Gronwall Inequality).
Let \(K: \left[0,T\right] \rightarrow \mathbb{R}\) be a continuous increasing function, \(a: \left[0,T\right] \rightarrow \left[0,\infty \right[\) be an increasing function and \(x: \left[0,T\right] \rightarrow \mathbb{R}\) be a measurable function such that
If
then
Proof.
I. Note that if \(\alpha,\beta _{0},\beta _{1},\ldots,\beta _{n}\) and \(z_{0},z_{1},\ldots,z_{n} \in \mathbb{R}\) satisfy
then
Indeed, introducing the sequence
by induction
follows.
Let
Clearly g is an increasing function and
Let \(0 < t_{1} < \ldots < t_{n} = t\) be such that
Let \(g_{i} = g\left(t_{i}\right)\), \(c_{0} = 0\), \(c_{i} =\int _{ t_{i-1}}^{t_{i}}\mathit{dK}\left(r\right) = K\left(t_{i}\right) - K\left(t_{i-1}\right) \leq \gamma _{n}\). We have
which yields
for all \(i \in \left\{1,2,\ldots,n\right\}\). Hence
The inequality (6.56) follows by letting n → ∞. ■
For \(K\left(t\right) =\int _{ 0}^{t}b\left(r\right)\mathit{dr}\), where \(b: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\) is a locally integrable function, the following lemma holds.
Corollary 6.60 (Gronwall Inequality).
Let \(a: \left[0,T\right] \rightarrow \left[0,\infty \right[\) be an increasing function and \(x,b: \left[0,T\right] \rightarrow \mathbb{R}\) , b ≥ 0, be integrable functions such that
If
then
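In its classical integral form, Gronwall's inequality asserts that if \(x\left(t\right) \leq a\left(t\right) +\int _{0}^{t}b\left(s\right)x\left(s\right)\mathit{ds}\) with a increasing and b ≥ 0, then \(x\left(t\right) \leq a\left(t\right)e^{\int _{0}^{t}b\left(s\right)\mathit{ds}}\). A discrete sanity check of this bound; the grid and the particular choices of a and b below are illustrative assumptions:

```python
import math
import random

random.seed(0)

# Grid on [0, T]; a is increasing, b >= 0 (illustrative choices).
n, T = 1000, 2.0
dt = T / n
b = [random.uniform(0.0, 2.0) for _ in range(n)]
a = [1.0 + 0.5 * i * dt for i in range(n + 1)]

# Worst case of the hypothesis (equality): x_i = a_i + sum_{j<i} b_j x_j dt.
x = [0.0] * (n + 1)
integral = 0.0
for i in range(n + 1):
    x[i] = a[i] + integral
    if i < n:
        integral += b[i] * x[i] * dt

# Discrete Gronwall bound: x_i <= a_i * exp(sum_{j<i} b_j dt).  It holds exactly
# here, since prod_j (1 + b_j dt) <= exp(sum_j b_j dt).
B = 0.0
ok = True
for i in range(n + 1):
    ok = ok and x[i] <= a[i] * math.exp(B) + 1e-9
    if i < n:
        B += b[i] * dt
```

The discrete recursion mirrors the partition argument in the proof of Proposition 6.59, with \(c_{i} = b_{i}\,dt\) playing the role of the increments \(K\left(t_{i}\right) - K\left(t_{i-1}\right)\).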
Corollary 6.61 (Backward Stieltjes–Gronwall Inequality).
Let \(\tilde{K}: \left[0,T\right] \rightarrow \mathbb{R}\) be a continuous increasing function, \(\tilde{a}: \left[0,T\right] \rightarrow \left[0,\infty \right[\) be a decreasing function and \(y: \left[0,T\right] \rightarrow \mathbb{R}\) be a measurable function such that
If
then
Proof.
Let \(x\left(t\right) = y\left(T - t\right)\), \(a\left(t\right) =\tilde{ a}\left(T - t\right)\) and \(K\left(t\right) =\tilde{ K}\left(T\right) -\tilde{ K}\left(T - t\right)\). Then
and by Proposition 6.59
that is (6.58) replacing t by T − t. ■
In particular for \(K\left(t\right) =\int _{ 0}^{t}b\left(r\right)\mathit{dr}\), we have:
Corollary 6.62 (Backward Gronwall Inequality).
Let \(\tilde{a}: \left[0,T\right] \rightarrow \left[0,\infty \right[\) be a decreasing function and \(y,b: \left[0,T\right] \rightarrow \mathbb{R}\) , b ≥ 0, be integrable functions such that
If
then
We now give some other deterministic inequalities used in the book.
Lemma 6.63.
Let \(\alpha,\beta \in L_{\mathit{loc}}^{1}\left(\left[0,\infty \right[\right)\) .
-
I.
If α ≥ 0 a.e. and \(x: \left[0,\infty \right[ \rightarrow \mathbb{R}^{d}\) is an absolutely continuous function such that
$$\displaystyle{\left\langle x^{{\prime}}\left(t\right),x\left(t\right)\right\rangle \leq \alpha \left(t\right)\left\vert x\left(t\right)\right\vert +\beta \left(t\right)\left\vert x\left(t\right)\right\vert ^{2},\quad \text{ a.e. }t \geq 0,\ }$$then
$$\displaystyle{ \left\vert x\left(t\right)\right\vert \leq \left\vert x\left(\tau \right)\right\vert e^{\int _{\tau }^{t}\beta \left(s\right)\mathit{ds} } +\int _{ \tau }^{t}\alpha \left(s\right)e^{\int _{s}^{t}\beta \left(r\right)\mathit{dr} }\mathit{ds} }$$(6.60)for all 0 ≤τ ≤ t.
-
II.
If α,β ≥ 0 a.e., \(a: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\) is an increasing function and \(\varphi: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\) is a continuous function such that \(\forall \,t \geq 0\)
$$\displaystyle{\varphi ^{2}\left(t\right) \leq a\left(t\right) + 2\int _{ 0}^{t}\alpha \left(s\right)\varphi \left(s\right)\mathit{ds} + 2\int _{ 0}^{t}\beta \left(s\right)\varphi ^{2}\left(s\right)\mathit{ds},}$$then
$$\displaystyle{ \varphi \left(t\right) \leq \sqrt{a\left(t\right)}e^{\int _{0}^{t}\beta \left(s\right)\mathit{ds} } +\int _{ 0}^{t}\alpha \left(s\right)e^{\int _{s}^{t}\beta \left(r\right)\mathit{dr} }\mathit{ds},\;\forall \,t \geq 0. }$$(6.61)
Proof.
-
I.
Let \(u_{\varepsilon }\left(t\right) = \left\vert x\left(t\right)\right\vert ^{2}e^{-2\int _{0}^{t}\beta \left(s\right)\mathit{ds} }+\varepsilon\), \(\varepsilon > 0\). Then
$$\displaystyle\begin{array}{rcl} u_{\varepsilon }^{{\prime}}\left(t\right)& =& 2\left\langle x^{{\prime}}\left(t\right),x\left(t\right)\right\rangle e^{-2\int _{0}^{t}\beta \left(s\right)\mathit{ds} } - 2\beta \left(t\right)\left\vert x\left(t\right)\right\vert ^{2}e^{-2\int _{0}^{t}\beta \left(s\right)\mathit{ds} } {}\\ & \leq & 2\alpha \left(t\right)\left\vert x\left(t\right)\right\vert e^{-2\int _{0}^{t}\beta \left(s\right)\mathit{ds} } {}\\ & \leq & 2\alpha \left(t\right)\sqrt{u_{\varepsilon }\left(t\right)}e^{-\int _{0}^{t}\beta \left(s\right)\mathit{ds} }, {}\\ \end{array}$$which yields
$$\displaystyle\begin{array}{rcl} \frac{d} {\mathit{dt}}\left(\sqrt{u_{\varepsilon }\left(t\right)}\right)& =& \frac{u_{\varepsilon }^{{\prime}}\left(t\right)} {2\sqrt{u_{\varepsilon }\left(t\right)}} {}\\ & \leq &\alpha \left(t\right)e^{-\int _{0}^{t}\beta \left(s\right)\mathit{ds} }. {}\\ \end{array}$$Hence
$$\displaystyle{\sqrt{u_{\varepsilon }\left(t\right)} \leq \sqrt{u_{\varepsilon }\left(\tau \right)} +\int _{ \tau }^{t}\alpha \left(s\right)e^{-\int _{0}^{s}\beta \left(r\right)\mathit{dr} }\mathit{ds}.}$$Passing to the limit as \(\varepsilon \searrow 0\) the inequality (6.60) follows.
-
II.
Let \(\theta \in \left[0,T\right]\) be fixed and
$$\displaystyle{x\left(t\right) = \left(a\left(\theta \right) + 2\int _{0}^{t}\alpha \left(s\right)\varphi \left(s\right)\mathit{ds} + 2\int _{ 0}^{t}\beta \left(s\right)\varphi ^{2}\left(s\right)\mathit{ds}\right)^{1/2}.}$$Then for all \(t \in \left[0,\theta \right]\):
$$\displaystyle{\varphi ^{2}\left(t\right) \leq a\left(\theta \right) + 2\int _{ 0}^{t}\alpha \left(s\right)\varphi \left(s\right)\mathit{ds} + 2\int _{ 0}^{t}\beta \left(s\right)\varphi ^{2}\left(s\right)\mathit{ds} = x^{2}\left(t\right),}$$and
$$\displaystyle\begin{array}{rcl} x^{{\prime}}\left(t\right)x\left(t\right)& =& \alpha \left(t\right)\varphi \left(t\right) +\beta \left(t\right)\varphi ^{2}\left(t\right) {}\\ & \leq &\alpha \left(t\right)x\left(t\right) +\beta \left(t\right)x^{2}\left(t\right), {}\\ \end{array}$$which implies, by the first part, that for \(t \in \left[0,\theta \right]\):
$$\displaystyle{\varphi \left(t\right) \leq x\left(t\right) \leq x\left(0\right)e^{\int _{0}^{t}\beta \left(s\right)\mathit{ds} } +\int _{ 0}^{t}\alpha \left(s\right)e^{\int _{s}^{t}\beta \left(r\right)\mathit{dr} }\mathit{ds},\,}$$which is (6.61) if we choose t = θ. ■
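In part I of Lemma 6.63, the scalar case \(x^{{\prime}} =\alpha +\beta x\) with x ≥ 0 satisfies the hypothesis with equality, so the bound (6.60) is attained. A numerical sketch comparing an Euler integration of the ODE with a Riemann-sum evaluation of the right-hand side of (6.60); the functions α, β below are illustrative choices:

```python
import math

# Equality case of Lemma 6.63 (I) in dimension one: for x >= 0 and
# x'(t) = alpha(t) + beta(t) x(t), one has <x'(t), x(t)> = alpha|x| + beta|x|^2.
alpha = lambda s: 0.5 + 0.2 * s              # alpha >= 0 (illustrative)
beta = lambda s: 0.3 * math.sin(s) ** 2      # beta >= 0 (illustrative)

n, t_end = 20000, 2.0
dt = t_end / n

# Explicit Euler integration of the ODE from x(0) = 1.
x = 1.0
for i in range(n):
    s = i * dt
    x += (alpha(s) + beta(s) * x) * dt

# Right-hand side of (6.60) with tau = 0, evaluated by Riemann sums:
#   |x(0)| e^{int_0^t beta} + int_0^t alpha(s) e^{int_s^t beta} ds.
cum = [0.0] * (n + 1)
for i in range(n):
    cum[i + 1] = cum[i] + beta(i * dt) * dt
bound = 1.0 * math.exp(cum[n]) + sum(
    alpha(i * dt) * math.exp(cum[n] - cum[i]) * dt for i in range(n))
```

Up to discretization error, the two quantities coincide, confirming that (6.60) is the variation-of-constants formula in the equality case.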
Corollary 6.64.
If α,β ≥ 0 a.e., \(\tilde{a}: \left[0,T\right] \rightarrow \left[0,\infty \right[\) is a decreasing function and \(\psi: \left[0,T\right] \rightarrow \left[0,\infty \right[\) is a continuous function such that \(\forall \,t \in \left[0,T\right]\) :
then
Proof.
Note that \(\forall \,t \in \left[0,T\right]\):
Hence by (6.61)
which clearly yields (6.62) replacing T − t by t. ■
If \(f,g \in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[\right)\left(= \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}\right)\right)\), we say that \(df\left(s\right) \leq dg\left(s\right)\) as signed measures on \(\left[0,\infty \right[\) if
-
d1.
$$\displaystyle{\int _{t}^{s}\varphi \left(r\right)df\left(r\right) \leq \int _{ t}^{s}\varphi \left(r\right)dg\left(r\right),}$$
for all 0 ≤ t ≤ s and for all continuous function \(\varphi: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\), or equivalently
-
d2.
\(f\left(s\right) - f\left(t\right) =\int _{ t}^{s}df\left(r\right) \leq \int _{t}^{s}dg\left(r\right) = g\left(s\right) - g\left(t\right),\quad \forall \,0 \leq t \leq s\), or equivalently
-
d3.
\(h\left(s\right) = f\left(s\right) - g\left(s\right)\) is a decreasing function on \(\left[0,\infty \right[\).
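The equivalence of d2 and d3 is immediate (both say that the increments of f never exceed those of g), and d1 follows by approximation with step functions. A discrete sanity check on sampled paths; the functions f, g, φ below are illustrative choices:

```python
import math

# Sampled paths on a grid: f(s) = sin(s), g(s) = s, so h = f - g has
# h'(s) = cos(s) - 1 <= 0, i.e. df <= dg as signed measures (condition d3).
n, T = 1000, 5.0
grid = [i * T / n for i in range(n + 1)]
f = [math.sin(s) for s in grid]
g = list(grid)

# d3: h = f - g is decreasing.
h = [fi - gi for fi, gi in zip(f, g)]
d3 = all(h[i + 1] <= h[i] + 1e-12 for i in range(n))

# d2: f(s) - f(t) <= g(s) - g(t) for all grid points t <= s.
d2 = all(f[j] - f[i] <= g[j] - g[i] + 1e-12
         for i in range(n + 1) for j in range(i, n + 1))

# d1: Stieltjes sums int phi df <= int phi dg for a nonnegative phi.
phi = [1.0 + math.cos(s) for s in grid]      # phi >= 0 (illustrative)
int_f = sum(phi[i] * (f[i + 1] - f[i]) for i in range(n))
int_g = sum(phi[i] * (g[i + 1] - g[i]) for i in range(n))
d1 = int_f <= int_g + 1e-12
```

Note that d1 holds here term by term in the Stieltjes sums, which is exactly how d2 implies d1 in the continuous setting.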
Lemma 6.65.
Let \(x,N,V \in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[\right)\). If
or equivalently
as signed measures on \(\left[0,\infty \right[\) , then for all 0 ≤ t ≤ s:
Proof.
We have
and the result follows. ■
Corollary 6.66.
Let \(\alpha,\beta \in L_{\mathit{loc}}^{1}\left(\left[0,\infty \right[\right)\) and \(y: \left[0,\infty \right[ \rightarrow \mathbb{R}\) be a continuous function such that
then
Proof.
By Lemma 6.65 and
the result follows. ■
Finally we have:
Proposition 6.67.
Let \(x \in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}^{d}\right)\) and \(V \in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}\right)\) be continuous functions. Let \(R,N: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\) be continuous increasing functions. If
as signed measures on \(\left[0,\infty \right[\) , then for all 0 ≤ t ≤ T:
and
If R = 0 then for all 0 ≤ t ≤ s:
Proof.
Let \(u_{\varepsilon }\left(r\right) = \left\vert x\left(r\right)\right\vert ^{2}e^{-2V _{r}}+\varepsilon\), \(\varepsilon > 0\). We have as signed measures on \(\left[0,\infty \right[\)
If R = 0 then
and consequently
which yields (6.65) passing to the limit as \(\varepsilon \rightarrow 0\).
If R ≠ 0 then
Hence for all t ≤ τ ≤ T
and the results follow. ■
6.4.2 Stochastic Inequalities
In this subsection \(\left\{B_{t}: t \geq 0\right\}\) is a k-dimensional Brownian motion with respect to a given stochastic basis \(\left(\Omega,\mathcal{F}, \mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0}\right)\).
Proposition 6.68 (Stochastic Gronwall Inequality).
Let
-
\(\diamond \) \(a,b: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\) be measurable deterministic functions and
-
\(\diamond \) \(H,\alpha,\beta,\gamma,\delta: \Omega \times \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\) be stochastic processes, where H is a continuous stochastic process. If for all t ≥ 0
$$\displaystyle{ \left\vert X_{t}\right\vert + \left\vert U_{t}\right\vert \leq \left\vert H_{t}\right\vert + \int _{0}^{t}\left(\alpha _{ s} + a\left(s\right)\left\vert X_{s}\right\vert \right)\mathit{ds} + \left\vert \int _{0}^{t}G_{ s}\mathit{dB}_{s}\right\vert,\; \mathbb{P}\text{ -a.s.}, }$$(6.66)where
$$\displaystyle{\begin{array}{l@{\quad }l} \mathit{i}) \quad &\quad X,U \in S_{d}^{0},\,\quad G \in \varLambda _{d\times k}^{0}, \\ \mathit{ii})\quad &\quad \left\vert G_{t}\right\vert \leq \beta _{t} + b\left(t\right)\left\vert X_{t}\right\vert,\;\quad d\mathbb{P} \otimes \mathit{dt}\text{ -}a.e.,\end{array} }$$then for all q ≥ 1 there exists a positive constant C q such that for all T ≥ 0:
$$\displaystyle{ \begin{array}{l} \mathbb{E}\sup \limits _{t\in \left[0,T\right]}\left\vert X_{t}\right\vert ^{q}+\mathbb{E}\sup \limits _{t\in \left[0,T\right]}\left\vert U_{t}\right\vert ^{q} \leq \left[\mathbb{E}\left\Vert H\right\Vert _{T}^{q}+\mathbb{E}\left(\int _{0}^{T}\alpha _{ s}\mathit{ds}\right)^{q}\right. \\ \quad \quad \quad \left.+\mathbb{E}\left(\int _{0}^{T}\beta _{ s}^{2}\mathit{ds}\right)^{q/2}\right] \times \exp \left\{C_{ q}\left[1 + T^{q-1}\int _{ 0}^{T}\left(a^{q}\left(s\right)+b^{2q}\left(s\right)\right)\mathit{ds}\right]\right\}.\end{array} }$$(6.67)In particular if the right-hand side of the inequality (6.67) is finite then
$$\displaystyle{X,U \in S_{d}^{q},\quad \quad G \in \varLambda _{ d\times k}^{q}.}$$
Proof.
Clearly we can assume that the right-hand side of the inequality (6.67) is finite. Denote by \(C_{q}\) various positive constants depending only on q, which may change from one line to the next. For each n ≥ 1, we define the stopping time
Note that for all positive stochastic processes Z,
By the convexity of the function \(\varphi \left(r\right) = \left\vert r\right\vert ^{q}\) we have
By the Burkholder–Davis–Gundy and Hölder inequalities:
Also
Hence, defining
we have
Using Gronwall’s inequality (6.57) we obtain
where
Since \(1 + xe^{ax} \leq e^{\left(a+1\right)x}\), for all x ≥ 0, it follows that
We also have
for some \(\hat{C}_{q,t}\) independent of n. Passing to the limit in (6.68)–(6.70) as \(n \rightarrow \infty \), we obtain \(X,U \in S_{d}^{q}\left[0,T\right]\), \(G \in \varLambda _{d\times k}^{q}\left(0,T\right)\) and (6.67) follows. ■
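The elementary inequality \(1 + xe^{ax} \leq e^{\left(a+1\right)x}\) invoked in the proof follows, for \(a,x \geq 0\) (as in the present application), from \(1 + x \leq e^{x}\) and \(e^{ax} \geq 1\):

```latex
1 + xe^{ax} \;\leq\; e^{ax} + xe^{ax} \;=\; \left(1 + x\right)e^{ax} \;\leq\; e^{x}\,e^{ax} \;=\; e^{\left(a+1\right)x},
\qquad a,\,x \geq 0.
```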
Proposition 6.69.
Let \(\delta \in \left\{-1,1\right\}\). Let \(\left\{B_{t}: t \geq 0\right\}\) be a k-dimensional Brownian motion. Let \(Y,K,V: \Omega \times \mathbb{R}_{+} \rightarrow \mathbb{R}\) and \(G: \Omega \times \mathbb{R}_{+} \rightarrow \mathbb{R}^{k}\) be progressively measurable stochastic processes such that
If for all 0 ≤ t ≤ s,
then
Proof.
Denoting
we obtain
Hence
is a decreasing function, and hence
and integrating from t to s
Now by (6.71) we obtain the conclusions. ■
6.4.3 Forward Stochastic Inequalities
In this subsection \(\left\{B_{t}: t \geq 0\right\}\) is a k-dimensional Brownian motion with respect to a stochastic basis \(\left(\Omega,\mathcal{F}, \mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0}\right)\).
We shall derive some estimates on the local semimartingale \(X \in S_{d}^{0}\) of the form
where
-
\(\diamond \) \(K \in S_{d}^{0}\); \(K_{\cdot }\in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}^{d}\right),\;K_{0} = 0,\; \mathbb{P}\text{ -a.s.}\);
-
\(\diamond \) \(G \in \varLambda _{d\times k}^{0}\).
Notation 6.70.
Let p ≥ 1 and \(m_{p}\mathop{ =}\limits^{ \mathit{def }}1 \vee \left(p - 1\right)\) .
Proposition 6.71.
Let \(X \in S_{d}^{0}\) be a local semimartingale of the form (6.72). Assume there exist p ≥ 1, a \(\mathcal{P}\) -m.i.c.s.p. D and a \(\mathcal{P}\) -m.b-v.c.s.p. V, \(D_{0} = V _{0} = 0\), such that as signed measures on \(\left[0,\infty \right[\)
then for all 0 ≤ t ≤ s:
Moreover for all δ ≥ 0, 0 ≤ t ≤ s:
The proof of this Proposition is contained in the proof of the next Proposition.
Remark 6.72.
Since by (2.27)
we see that the condition (6.73) yields
We now formulate a more general assumption.
\(\left(\mathbf{FB}\right)\) There exist
-
p ≥ 1, λ ≥ 0,
-
three \(\mathcal{P}\)-m.i.c.s.p. D, R, N, \(D_{0} = R_{0} = N_{0} = 0\), and
-
a \(\mathcal{P}\)-m.b-v.c.s.p. V, \(V _{0} = 0\),
such that, as signed measures on \(\left[0,\infty \right[\):
Remark 6.73.
From the condition (6.76), we deduce that
Proposition 6.74.
Let \(X \in S_{d}^{0}\) be a local semimartingale of the form (6.72). Assume that there exist p ≥ 1 and λ > 1 such that \(\left(\mathbf{FB}\right)\) is satisfied. Then there exists a positive constant \(C_{p,\lambda }\) depending only on \(\left(p,\lambda \right)\) such that for all δ ≥ 0 and 0 ≤ t ≤ s:
If we set δ = 0 in (6.77), we obtain the following:
Corollary 6.75.
Under the assumption \(\left(\mathbf{FB}\right)\) , for all 0 ≤ t ≤ s:
Proof (of Proposition 6.74).
In view of the monotone convergence theorem it suffices to treat the case δ > 0, which we assume from now on.
To simplify, we define
and
We remark that
Step 1. General calculation.
We begin by assuming a condition which is more general than the assumptions (6.73) and (6.76), namely that there exists a γ ≥ 0 such that
Since by Itô’s formula
it follows from the inequality (2.28) in Corollary 2.28 that for all 0 ≤ t ≤ s and any stopping time θ
But
hence we deduce that
and using the assumption (6.79) it follows that for any stopping time θ and for all 0 ≤ t ≤ s, \(\mathbb{P}\text{ -}a.s.\):
Since for all T > 0:
it follows that for all 0 ≤ t ≤ s:
For each \(n \in \mathbb{N}^{{\ast}}\) we define the stopping time
Note that for θ = θ n
is a martingale and consequently, for all 0 ≤ t ≤ s:
Step 2. Proof of the inequality (6.75).
In view of the first step, the assumption (6.73) yields (6.80) with γ = 0 and R = N = 0, from which we deduce
and passing to the limit as n → ∞ (the first two terms converge monotonically and the third one converges a.s.) the estimate (6.75) follows in view of Remark 6.73, since R = 0.
Step 3. Proof of the inequality (6.77).
\(\left(\mathbf{A}\right)\) Let γ > 0. From (6.80) we have
By the Burkholder–Davis–Gundy inequality
for all λ > 0. Hence
Let \(\gamma = 9p\lambda \), λ > 1. By Hölder's inequality
We deduce from the above that
The argument used in order to take the limit in (6.82) yields as n → ∞:
\(\left(\mathbf{B}\right)\) From (6.80) for p = 2, γ = 1 and θ = θ n we have
which yields
By the Burkholder–Davis–Gundy inequality (2.8)
Hence
We take the limit as n → ∞ in the last inequality and the estimate (6.77) follows from (6.84), (6.85), Remark 6.73 and the identity
This last fact follows from the increasing monotonicity of the function
The proof is complete. ■
We shall give a supplementary result in the case when R, N, V are deterministic functions.
Corollary 6.76.
Let \(X \in S_{d}^{0}\) be a local semimartingale of the form
where
-
\(\diamond \) \(K \in S_{d}^{0}\); \(K_{\cdot }\in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}^{d}\right),\;K_{0} = 0,\; \mathbb{P}\text{ -a.s.}\);
-
\(\diamond \) \(G \in \varLambda _{d\times k}^{0}\) .
Assume that there exist
-
p ≥ 1, \(m_{p}\mathop{ =}\limits^{ \mathit{def }}1 \vee \left(p - 1\right);\)
-
two continuous increasing deterministic functions \(R,N: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\), \(R\left(0\right) = N\left(0\right) = 0\) , and
-
a continuous deterministic function with bounded variation \(V: \left[0,\infty \right[ \rightarrow \mathbb{R}\), \(V \left(0\right) = 0\),
such that as signed measures on \(\left[0,\infty \right[\) :
Define
Then for all δ ≥ 0 and 0 ≤ t ≤ s:
In particular for \(\delta \searrow 0\) and 0 = t ≤ s:
for all α,λ > 0.
Proof.
Starting from (6.80), we follow the first steps of the proof of Proposition 6.74, but now
and \(\theta =\theta _{n}\) is defined similarly.
From the inequality (2.28) in Corollary 2.28, we have for all 0 ≤ t ≤ s
Taking into account that
and passing to the limit as n → ∞ we have for all 0 ≤ t ≤ s:
that is
By Gronwall’s inequality (Proposition 6.69), we have for all 0 ≤ t ≤ s:
and the inequality (6.87) follows. The inequality (6.88-b) clearly follows from (6.88-a) using the elementary inequality
■
Let X, \(\hat{X} \in S_{d}^{0}\) be two semimartingales given by
where
-
\(\diamond \) \(K,\hat{K} \in S_{d}^{0}\);
-
\(\diamond \) \(K_{\cdot }\left(\omega \right)\), \(\hat{K}_{\cdot }\left(\omega \right) \in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}^{d}\right),\;K_{0}\left(\omega \right) =\hat{ K}_{0}\left(\omega \right) = 0,\; \mathbb{P}\text{ -a.s.}\;\omega \in \Omega \);
-
\(\diamond \) \(G,\hat{G} \in \varLambda _{d\times k}^{0}\).
\(\left(FB^{{\prime}}\right)\): Assume there exist p ≥ 1 and λ ≥ 0 and a \(\mathcal{P}\)-m.b-v.c.s.p. V, \(V _{0} = 0\), such that as measures on \(\left[0,\infty \right[\):
$$\displaystyle{ \left\langle X_{t} -\hat{ X}_{t},\mathit{dK}_{t} - d\hat{K}_{t}\right\rangle + \left(\frac{1} {2}m_{p} + 9p\lambda \right)\left\vert G_{t} -\hat{ G}_{t}\right\vert ^{2}\mathit{dt} \leq \vert X_{ t} -\hat{ X}_{t}\vert ^{2}\mathit{dV }_{ t}. }$$(6.90)
Corollary 6.77.
Let p ≥ 1 and A be a \(\mathcal{P}\) -m.i.c.s.p., \(A_{0} = 0\).
-
(I)
If the assumption (6.90) is satisfied with λ = 0, then for all δ ≥ 0, 0 ≤ t ≤ s:
$$\displaystyle{\begin{array}{r} \mathbb{E}^{\mathcal{F}_{t}} \frac{e^{-p\left(V_{s}+A_{s}\right)}\left\vert X_{ s}-\hat{X}_{s}\right\vert ^{p}} {\left(1+\delta e^{-2\left(V_{r}+A_{r}\right)}\left\vert X_{r}-\hat{X}_{r}\right\vert ^{2}\right)^{p/2}} + \mathbb{E}^{\mathcal{F}_{t}}\int _{t}^{s} \frac{e^{-p\left(V_{r}+A_{r}\right)}\left\vert X_{ r}-\hat{X}_{r}\right\vert ^{p}} {\left(1+\delta e^{-2\left(V_{r}+A_{r}\right)}\left\vert X_{r}-\hat{X}_{r}\right\vert ^{2}\right)^{\left(p+2\right)/2}} dA_{r} \\ \leq \frac{e^{-p\left(V_{t}+A_{t}\right)}\left\vert X_{ t}-\hat{X}_{t}\right\vert ^{p}} {\left(1+\delta e^{-2\left(V_{t}+A_{t}\right)}\left\vert X_{t}-\hat{X}_{t}\right\vert ^{2}\right)^{p/2}},\;\; \mathbb{P} - a.s.\end{array} }$$In particular for δ = 0
$$\displaystyle{\begin{array}{r} \mathbb{E}^{\mathcal{F}_{t}}e^{-p\left(V _{s}+A_{s}\right)}\left\vert X_{ s} -\hat{ X}_{s}\right\vert ^{p} + \mathbb{E}^{\mathcal{F}_{t}}\int _{ t}^{s}e^{-p\left(V _{r}+A_{r}\right)}\left\vert X_{ r} -\hat{ X}_{r}\right\vert ^{p}dA_{ r} \\ \leq e^{-p\left(V _{t}+A_{t}\right)}\left\vert X_{t} -\hat{ X}_{t}\right\vert ^{p},\;\; \mathbb{P}\text{ -a.s.},\end{array} }$$for all 0 ≤ t ≤ s.
-
(II)
If the assumption (6.90) is satisfied with λ > 1, then there exists a positive constant \(C_{p,\lambda }\) depending only on \(\left(p,\lambda \right)\) such that for all δ ≥ 0, 0 ≤ t ≤ s:
$$\displaystyle{\begin{array}{r} \mathbb{E}^{\mathcal{F}_{t}} \frac{\left\Vert e^{-V-A}\left(X-\hat{X}\right)\right\Vert _{\left[t,s\right]}^{p}} {\left(1+\delta \left\Vert e^{-V-A}\left(X-\hat{X}\right)\right\Vert _{\left[t,s\right]}^{2}\right)^{p/2}} + \mathbb{E}^{\mathcal{F}_{t}}\bigg(\int _{t}^{s} \frac{e^{-2\left(V_{r}+A_{r}\right)}\left\vert X_{ r}-\hat{X}_{r}\right\vert ^{2}} {\left(1+\delta e^{-2V_{r}-2A_{r}}\left\vert X_{r}-\hat{X}_{r}\right\vert ^{2}\right)^{2}} dA_{r}\bigg)^{p/2} \\ \leq C_{p,\lambda } \frac{e^{-p\left(V_{t}+A_{t}\right)}\left\vert X_{ t}-\hat{X}_{t}\right\vert ^{p}} {\left(1+\delta e^{-2\left(V_{t}+A_{t}\right)}\left\vert X_{t}-\hat{X}_{t}\right\vert ^{2}\right)^{p/2}},\;\; \mathbb{P}\text{ -a.s.}\end{array} }$$In particular for δ = 0
$$\displaystyle{\begin{array}{r} \mathbb{E}^{\mathcal{F}_{t}}\left\Vert e^{-V -A}\left(X -\hat{ X}\right)\right\Vert _{\left[t,s\right]}^{p} + \mathbb{E}^{\mathcal{F}_{t}}\bigg(\int _{t}^{s}e^{-2\left(V _{r}+A_{r}\right)}\left\vert X_{r} -\hat{ X}_{r}\right\vert ^{2}dA_{r}\bigg)^{p/2} \\ \leq C_{p,\lambda }\ e^{-p\left(V _{t}+A_{t}\right)}\left\vert X_{t} -\hat{ X}_{t}\right\vert ^{p},\;\; \mathbb{P}\text{ -a.s.},\end{array} }$$for all 0 ≤ t ≤ s.
Proof.
Since the assumption (6.90) is equivalent to
with
the results clearly follow from Propositions 6.71 and 6.74 applied to the identity
■
Since
we have:
Corollary 6.78.
If the assumption (6.90) is satisfied with λ > 1 and p ≥ 1, then there exists a positive constant \(C_{p,\lambda }\) depending only on \(\left(p,\lambda \right)\) such that \(\mathbb{P}\text{ -}a.s.\)
for all 0 ≤ t ≤ s.
6.4.4 Backward Stochastic Inequalities
Let \(\left\{B_{t}: t \geq 0\right\}\) be a k-dimensional Brownian motion with respect to a given stochastic basis \(\left(\Omega,\mathcal{F}, \mathbb{P},\{\mathcal{F}_{t}^{B}\}_{t\geq 0}\right)\), where \(\mathcal{F}_{t}^{B}\) is the natural filtration associated to \(\left\{B_{t}: t \geq 0\right\}\).
Notation 6.79.
For p > 1 define
In this subsection we shall derive some estimates on \(\left(Y,Z\right) \in S_{m}^{0} \times \varLambda _{m\times k}^{0}\) satisfying for all T ≥ 0 and \(t \in \left[0,T\right]\):
where \(K \in S_{m}^{0}\) and \(K_{\cdot }\left(\omega \right) \in \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{m}\right),\; \mathbb{P}\text{ -a.s.}\;\omega \in \Omega \).
We note that if the interval \(\left[0,T\right]\) is fixed then the equality (6.91) will be extended to \(\mathbb{R}_{+}\) by \(Y _{s} = Y _{T}\), \(K_{s} = K_{T}\) and \(Z_{s} = 0\) for all s > T.
Proposition 6.80.
Let \(\left(Y,Z\right) \in S_{m}^{0} \times \varLambda _{m\times k}^{0}\) satisfy
where \(K \in S_{m}^{0}\) and \(K_{\cdot }\left(\omega \right) \in \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{m}\right),\; \mathbb{P}\text{ -a.s.}\;\omega \in \Omega \) .
Assume given
-
\(\blacktriangle \) three \(\mathcal{P}\) -m.i.c.s.p. D, R, N, \(D_{0} = R_{0} = N_{0} = 0\),
-
\(\blacktriangle \) a \(\mathcal{P}\) -m.b-v.c.s.p. V, \(V _{0} = 0\),
-
\(\blacktriangle \) two stopping times τ and σ such that 0 ≤τ ≤σ < ∞.
-
(A)
If λ < 1, q > 0 and
$$\displaystyle{\mathit{dD}_{t} + \left\langle Y _{t},\mathit{dK}_{t}\right\rangle \leq \mathit{dR}_{t} + \vert Y _{t}\vert \mathit{dN}_{t} + \vert Y _{t}\vert ^{2}\mathit{dV }_{ t} + \dfrac{\lambda } {2}\left\vert Z_{t}\right\vert ^{2}\mathit{dt},}$$then there exists a positive constant \(C_{q,\lambda }\), depending only on \(\left(q,\lambda \right)\), such that
$$\displaystyle{ \begin{array}{l} \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\sigma }e^{2V _{r}}\mathit{dD}_{r}\right)^{q/2} + \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\sigma }e^{2V _{r}}\left\vert Z_{r}\right\vert ^{2}\mathit{dr}\right)^{q/2} \\ {r} { \leq C_{q,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\left[\sup \limits _{s\in \left[\tau,\sigma \right]}\left\vert e^{V _{s}}Y _{s}\right\vert ^{q} + \left(\int _{\tau }^{\sigma }e^{2V _{s}}\mathit{dR}_{s}\right)^{q/2} + \left(\int _{\tau }^{\sigma }e^{V _{s}}\mathit{dN}_{s}\right)^{q}\right],} \\ {r} {\quad \mathbb{P}\text{ -a.s.}\;}\end{array} }$$(6.92) -
(B)
If λ < 1 < p,
$$\displaystyle{ \begin{array}{rl} \left(i\right)&\quad \mathit{dD}_{t} + \left\langle Y _{t},\mathit{dK}_{t}\right\rangle \leq \left(\mathbf{1}_{p\geq 2}\mathit{dR}_{t} + \vert Y _{t}\vert \mathit{dN}_{t} + \vert Y _{t}\vert ^{2}\mathit{dV }_{t}\right) + \dfrac{n_{p}} {2} \lambda \left\vert Z_{t}\right\vert ^{2}\mathit{dt}, \\ \left(\mathit{ii}\right)&\quad \mathbb{E}\ \sup \limits _{s\in \left[\tau,\sigma \right]}e^{pV _{s}}\left\vert Y _{s}\right\vert ^{p} < \infty,\end{array} }$$(6.93)then there exists a positive constant \(C_{p,\lambda }\), depending only on \(\left(p,\lambda \right)\), such that \(\mathbb{P}\text{ -a.s.}\),
$$\displaystyle{ \begin{array}{l} \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\sup \limits _{s\in \left[\tau,\sigma \right]}\left\vert e^{V _{s}}Y _{s}\right\vert ^{p}\right) + \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\sigma }e^{2V _{s}}\mathit{dD}_{s}\right)^{p/2} \\ {r} { + \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\sigma }e^{2V _{s}}\left\vert Z_{s}\right\vert ^{2}\mathit{ds}\right)^{p/2}} \\ {r} {\;\; + \mathbb{E}^{\mathcal{F}_{\tau }}\int _{\tau }^{\sigma }e^{pV _{s} }\left\vert Y _{s}\right\vert ^{p-2}\mathbf{1}_{Y _{s}\neq 0}\mathit{dD}_{s} + \mathbb{E}^{\mathcal{F}_{\tau }}\int _{\tau }^{\sigma }e^{pV _{s} }\left\vert Y _{s}\right\vert ^{p-2}\mathbf{1}_{Y _{s}\neq 0}\left\vert Z_{s}\right\vert ^{2}\mathit{ds}} \\ \leq C_{p,\lambda }\ \mathbb{E}^{\mathcal{F}_{\tau }}\left[\left\vert e^{V _{\sigma }}Y _{\sigma }\right\vert ^{p} + \left(\int _{\tau }^{\sigma }e^{2V _{s}}\mathbf{1}_{p\geq 2}\mathit{dR}_{s}\right)^{p/2} + \left(\int _{\tau }^{\sigma }e^{V _{s}}\mathit{dN}_{s}\right)^{p}\right].\end{array} }$$(6.94)
Proof.
Step I.
By the Itô formula, we have for all 0 ≤ t ≤ s:
Since
we get
Consider stopping times \(0 \leq \tau \leq \sigma < \infty \) and define
We have \(\tau \leq \theta _{n} \leq \sigma\) and \(\theta _{n} \nearrow \sigma\) \(\mathbb{P}\)-a.s. Replacing t by τ and s by \(\theta _{n}\) in (6.95) we obtain
Moreover, by Minkowski’s inequality we infer for all q > 0
But by the Burkholder–Davis–Gundy and Cauchy–Schwarz inequalities, we get
Since \(\int _{\tau }^{\theta _{n}}e^{2V _{r}}\left\vert Z_{r}\right\vert ^{2}\mathit{dr}\) is finite, from (6.96) we infer
By the monotone convergence theorem as n → ∞ the inequality (6.92) follows.
Step II. Let us first assume that p ≥ 1.
Noting that
then by the inequality (2.30) from Corollary 2.30 we get, for p ≥ 1 and for all stopping times \(t \in \left[\tau,\sigma \right]\)
We note that the right-hand side of (6.98) is finite \(\mathbb{P}\text{ -}a.s.\) and consequently
By the assumption (6.93)
It follows that
where
and
Note that \(\left\{M_{s}: s \in \left[0,T\right]\right\}\) is a martingale since
Therefore from (6.99),
From here we assume that p > 1. From (6.99) we also get
Since
we obtain from (6.99) that
By the Burkholder–Davis–Gundy inequality (2.8) and (6.102):
Plugging this last estimate into (6.103) we obtain, with another constant \(C_{p,\lambda }\),
We deduce from (6.102) and (6.104)
But
Hence
Now letting n → ∞, by the Beppo Levi monotone convergence theorem for the first member and by the Lebesgue dominated convergence theorem for the right-hand side of the inequality, we conclude (6.94) (using of course the first step: inequality (6.92)).
The proof is complete. ■
Corollary 6.81.
Let \(\left(Y,Z\right) \in S_{m}^{0} \times \varLambda _{m\times k}^{0}\) satisfy
where \(K \in S_{m}^{0}\) and \(K_{\cdot }\left(\omega \right) \in \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{m}\right),\; \mathbb{P}\text{ -a.s.}\;\omega \in \Omega \) .
Assume given
-
\(\blacktriangle \) D and N are \(\mathcal{P}\) -m.i.c.s.p., \(N_{0} = 0\),
-
\(\blacktriangle \) V is a \(\mathcal{P}\) -m.b-v.c.s.p., \(V _{0} = 0\),
-
\(\blacktriangle \) τ, θ and σ are three stopping times such that 0 ≤τ ≤θ ≤σ < ∞.
If
then
and for all 0 < α < 1
Proof.
From (6.101) for p = 1 we deduce, using the definition (6.100) of \(U_{s}\), that
and the inequality (6.105) follows as n → ∞. Moreover
and by the martingale inequality (1.11-A 3) from Theorem 1.60 we infer
The inequality (6.106) is now a consequence of (6.92). ■
Corollary 6.82.
Let \(\left(Y,Z\right) \in S_{m}^{0} \times \varLambda _{m\times k}^{0}\) satisfy
where \(K \in S_{m}^{0}\) and \(K_{\cdot }\left(\omega \right) \in \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{m}\right),\; \mathbb{P}\text{ -a.s.}\;\omega \in \Omega \) .
Assume given
-
\(\blacktriangle \) a \(\mathcal{P}\) -m.b-v.c.s.p. V, \(V _{0} = 0\),
-
\(\blacktriangle \) τ, θ and σ are three stopping times such that 0 ≤τ ≤θ ≤σ < ∞.
If λ < 1 ≤ p, \(n_{p} = 1 \wedge \left(p - 1\right)\) and
then for all 1 ≤ q ≤ p,
If p > 1 then
and if p = 1 (and n p = 0) then for all 0 < α < 1,
Proof.
Since
the inequality (6.101) with p replaced by q yields (6.107). The next two inequalities follow from Proposition 6.80 and Corollary 6.81, respectively. ■
Corollary 6.83.
Let p ≥ 1 and \(\left(V _{t}\right)_{t\geq 0}\) be a bounded variation continuous progressively measurable stochastic process with \(V _{0} = 0\). Let T > 0 and \(\eta: \Omega \rightarrow \mathbb{R}^{m}\) be a random variable such that \(\mathbb{E}\left(\sup _{r\in \left[0,T\right]}e^{pV _{r}}\left\vert \eta \right\vert ^{p}\right) < \infty \). If \(\left(\xi,\zeta \right) \in S_{m}^{p}\left[0,T\right] \times \varLambda _{m\times k}^{p}\left[0,T\right]\) satisfies
(the pair \(\left(\xi,\zeta \right)\) exists and is unique by the martingale representation theorem: Corollary 2.44), then there exists a \(C = C\left(p\right) > 0\) such that for all \(t \in \left[0,T\right]\), for p > 1
and for p = 1
Proof.
We see at once that the stochastic pair \(\left(\xi,\zeta \right)\) satisfies the equation
Define the stochastic process \(\tilde{V }_{t} =\sup _{s\in \left[0,t\right]}V _{s}\); then \(\tilde{V }\) is increasing, continuous, progressively measurable and \(\tilde{V }_{0} = 0\). Since for all \(t \in \left[0,T\right]\)
by Proposition 1.56 we infer for all p > 1
and consequently by Proposition 6.80-B (for \(\left(Y,Z\right) = \left(\xi,\zeta \right)\) with λ = 0, K = R = N = 0, \(\mathit{dD}_{t} = \left\vert \xi _{t}\right\vert ^{2}d\tilde{V }\)) the inequality (6.109) follows; we also use that \(V \leq \tilde{ V }\) and
In the case p = 1 we have for all 0 < α < 1, by Proposition 1.56
and by Proposition 6.80-A
From (6.111) we also see that \(\mathbb{E}\left\vert e^{\tilde{V }_{t}}\xi _{t}\right\vert \leq \mathbb{E}\left(e^{\tilde{V }_{T}}\left\vert \eta \right\vert \right)\) and therefore
From (6.112)–(6.114) the inequality (6.110) follows. ■
Let \(\left(Y,Z\right),(\hat{Y },\hat{Z}) \in S_{m}^{0}\left[0,T\right] \times \varLambda _{m\times k}^{0}\left(0,T\right)\) satisfy for all \(t \in \left[0,T\right]\):
and respectively
where
-
\(\diamond \) \(K,\hat{K} \in S_{m}^{0},\)
-
\(\diamond \) \(K_{\cdot }\left(\omega \right)\), \(\hat{K}_{\cdot }\left(\omega \right) \in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}^{m}\right),\; \mathbb{P}\text{ -a.s.}\;\omega \in \Omega \).
Assume there exist λ < 1 ≤ p and V a \(\mathcal{P}\)-m.b-v.c.s.p., V 0 = 0, such that as signed measures on \(\left[0,T\right]\):
where \(n_{p} = 1 \wedge \left(p - 1\right)\).
Corollary 6.84.
Let λ < 1 ≤ p be given. Let the assumption (6.115) be satisfied and \(\left\{A_{t}: t \geq 0\right\}\) be a \(\mathcal{P}\) -m.i.c.s.p., A 0 = 0, such that
Then for all 0 ≤ t ≤ T,
Moreover if p > 1, then
where C p,λ is a positive constant depending only on \(\left(p,\lambda \right)\) .
Proof.
The results clearly follow from Corollary 6.82 and the inequality (6.94) from Proposition 6.80, applied to
satisfying
with
■
6.5 Annex D: Viscosity Solutions
The aim of this section is to introduce the notion of viscosity solutions to second order elliptic and parabolic PDEs and to give uniqueness results for such solutions. This notion, invented by Crandall and Lions, allows one to say that a continuous function satisfies a PDE without any differentiability requirement on that function. It was designed specifically for nonlinear equations, for which the notion of a weak solution in the sense of distributions is not convenient. We use it here for linear and semilinear equations.
This section is divided into four parts. In the first part, we state the main definitions of viscosity solutions to elliptic and parabolic PDEs (or systems of PDEs). We prove three uniqueness results in the next three parts. We do not prove any existence results, since such results for the equations considered in this book are provided by our probabilistic formulas. Concerning uniqueness, it would be too long and repetitive to give a uniqueness result for each PDE considered in this book. The last three parts of this section give uniqueness results, corresponding to three large classes of semilinear PDEs or systems of PDEs. All other relevant results can be proved by combining the arguments given in those three proofs.
The first uniqueness result concerns an elliptic PDE with Dirichlet boundary condition at the boundary of a bounded set. We shall also explain how the proof can be adapted to the parabolic case. The second result treats the case of a system of parabolic PDEs in the whole space. Finally the third result concerns a parabolic PDE with subdifferential operators and nonlinear Neumann boundary condition.
We refer to the well-known “user’s guide” of Crandall et al. [18] for more details, which complements the material presented here.
6.5.1 Definitions
Let \(\mathcal{O}\) be a locally closed subset of \(\mathbb{R}^{d}\), that is, for all \(x \in \mathcal{O}\) there exists a δ > 0 such that \(\mathcal{O}\cap \overline{B}\left(x,\delta \right)\) is closed.
A function \(h: \mathcal{O}\subset \mathbb{R}^{d} \rightarrow \mathbb{R}\) is lower semicontinuous and we write \(h \in \mathit{LSC}\left(\mathcal{O}\right)\) if there exist \(\{h_{n},\ n \geq 1\} \subset C(\mathcal{O})\) such that
The function \(h: \mathcal{O}\subset \mathbb{R}^{d} \rightarrow \mathbb{R}\) is upper semicontinuous and we write \(h \in \mathit{USC}\left(\mathcal{O}\right)\) if − h is lower semicontinuous.
In particular for all R > 0 we have
6.5.1.1 Elliptic PDE
Consider the PDE
where
and \(\mathbb{S}^{d}\) denotes the set of symmetric d × d matrices.
Definition 6.85.
-
(i)
\(\;u \in \mathit{USC}(\mathcal{O})\) is a viscosity sub-solution of (6.116) if for any \(\varphi \in C^{2}(\mathcal{O})\) and \(\hat{x} \in \mathcal{O}\) a local maximum of \(u-\varphi\):
$$\displaystyle{\Phi (\hat{x},u(\hat{x}),D\varphi (\hat{x}),D^{2}\varphi (\hat{x})) \leq 0.}$$ -
(ii)
\(\;u \in \mathit{LSC}(\mathcal{O})\) is a viscosity super-solution of (6.116) if for any \(\varphi \in C^{2}(\mathcal{O})\) and \(\hat{x} \in \mathcal{O}\) a local minimum of \(u-\varphi\):
$$\displaystyle{\Phi (\hat{x},u(\hat{x}),D\varphi (\hat{x}),D^{2}\varphi (\hat{x})) \geq 0.}$$ -
(iii)
\(\;u \in C\left(\mathcal{O}\right)\) is a viscosity solution of (6.116) if it is both a viscosity sub- and super-solution.
In these definitions we can also assume that \(u\left(\hat{x}\right) =\varphi \left(\hat{x}\right)\) since we can translate \(\varphi\).
Note that the class of PDEs for which probabilistic formulas are given in this book is the class of semilinear equations, where the function \(\Phi \) has the following particular form
In Definition 6.85 we can replace a local maximum (minimum) by a strict global maximum (minimum).
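As a concrete illustration of Definition 6.85 (our toy example, not taken from the text), consider the eikonal equation \(\vert u^{{\prime}}(x)\vert - 1 = 0\) on \(\left(-1,1\right)\): the function u(x) = 1 −|x| is a classical example of a viscosity solution which is not differentiable at 0. The sub-solution test at the kink says that any C² function φ touching u from above at 0 must have |φ′(0)|≤ 1, so that |φ′(0)|− 1 ≤ 0; the super-solution test is vacuous there, since no C² function touches u from below at 0. The following numerical sketch checks the admissible slopes:

```python
import numpy as np

# Eikonal equation |u'| - 1 = 0 on (-1, 1); u(x) = 1 - |x| (our toy example).
def u(x):
    return 1.0 - np.abs(x)

xs = np.linspace(-1e-3, 1e-3, 2001)       # neighbourhood of the kink x = 0

def touches_from_above(p, c):
    """Does phi(x) = u(0) + p*x + (c/2)*x**2 satisfy u - phi <= 0 near 0?"""
    return bool(np.all(u(xs) - (u(0.0) + p*xs + 0.5*c*xs**2) <= 1e-12))

# Slopes of admissible test functions at the kink: exactly |p| <= 1,
# and each of them satisfies the sub-solution inequality |p| - 1 <= 0.
admissible = [p for p in np.linspace(-2.0, 2.0, 41) if touches_from_above(p, 10.0)]
assert all(abs(p) - 1.0 <= 1e-9 for p in admissible)
assert touches_from_above(1.0, 10.0) and touches_from_above(-1.0, 10.0)
```

The quadratic coefficient c = 10 is an arbitrary choice; any c > 0 gives the same set of admissible slopes at the kink.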
Remark 6.86.
Let \(\mathcal{O}\) be an open subset of \(\mathbb{R}^{d}\) and \(u \in C^{2}(\mathcal{O})\).
-
(i)
If u is a viscosity solution of (6.116), then u is a classical solution.
-
(ii)
If u is a classical solution of (6.116) and \(\Phi \) satisfies the degenerate ellipticity condition
$$\displaystyle{X \leq Y \Rightarrow \Phi (x,r,p,X) \geq \Phi (x,r,p,Y ),\,\forall \,x,r,p,}$$then u is a viscosity solution.
Definition 6.87.
A function \(u \in \mathit{USC}(\mathcal{O})\) satisfies the maximum principle if for all \(\varphi \in C^{2}(\mathcal{O})\) and all open subsets \(D \subseteq \mathcal{O}\) the inequality
implies that at every \(\hat{x} \in D\) which is a local maximum of \(u-\varphi\):
Proposition 6.88.
Let \(\mathcal{O}\) be an open subset of \(\mathbb{R}^{d}\) and
Then each viscosity sub-solution u satisfies the maximum principle.
Proof.
If we assume that there exist \(\varphi \in C^{2}(\mathcal{O})\), an open subset \(D \subseteq \mathcal{O}\) such that
and \(\hat{x} \in D\) a local maximum of \(u-\varphi\) such that \(u\left(\hat{x}\right) \geq \varphi \left(\hat{x}\right)\) then
since u is a sub-solution. Hence necessarily \(u(\hat{x}) <\varphi (\hat{x})\). ■
We next introduce the notion of a proper function (in the sense of the theory of viscosity solutions, which should not be confused with the notion of proper convex function), for which the notion of a viscosity solution makes sense.
Definition 6.89.
A continuous function
is said to be proper, if \(\Phi \) satisfies:
-
(1)
Monotonicity condition
$$\displaystyle{r \leq s\; \Rightarrow \; \Phi (x,r,p,X) \leq \Phi (x,s,p,X),\,\forall \,x,p,X,}$$and
-
(2)
Degenerate ellipticity condition
$$\displaystyle{ X \leq Y \Rightarrow \Phi (x,r,p,X) \geq \Phi (x,r,p,Y ),\,\forall \,x,r,p. }$$(6.118)
Definition 6.85 of a viscosity solution can be reformulated in terms of subjets and superjets of u.
Definition 6.90.
Let \(\mathcal{O}\) be a locally closed subset of \(\mathbb{R}^{d}\), \(u: \mathcal{O}\rightarrow \mathbb{R}\) and \(x \in \mathcal{O}\).
-
(i)
\((p,X) \in \mathbb{R}^{d} \times \mathbb{S}^{d}\) is a superjet to u at x if
$$\displaystyle{\mathop{\lim \sup }\limits_{\mathcal{O}\ni y \rightarrow x}\tfrac{u(y)-u(x)-\langle p,y-x\rangle -\frac{1} {2} \langle X(y-x),y-x\rangle } {\vert y-x\vert ^{2}} \leq 0.}$$The set of superjets to u at x will be denoted \(J_{\mathcal{O}}^{2,+}u(x)\).
-
(ii)
\((p,X) \in \mathbb{R}^{d} \times \mathbb{S}^{d}\) is a subjet to u at x if
$$\displaystyle{\mathop{\lim \inf }\limits_{\mathcal{O}\ni y \rightarrow x}\tfrac{u(y)-u(x)-\langle p,y-x\rangle -\frac{1} {2} \langle X(y-x),y-x\rangle } {\vert y-x\vert ^{2}} \geq 0.}$$The set of subjets to u at x will be denoted \(J_{\mathcal{O}}^{2,-}u(x)\).
If \(\mathcal{O} = \mathbb{R}^{d}\), then the index \(\mathcal{O}\) will be omitted.
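To make Definition 6.90 concrete, here is a small numerical sanity check (our sketch): for a C² function, the pair \(\left(u^{{\prime}}(x_{0}),u^{{\prime\prime}}(x_{0})\right)\) belongs to both \(J^{2,+}u(x_{0})\) and \(J^{2,-}u(x_{0})\), because the second-order Taylor quotient tends to 0; and enlarging the second-order part preserves the superjet inequality.

```python
import numpy as np

# Taylor quotient from Definition 6.90 for u = sin at x0 = 0 (our example).
u = np.sin
x0 = 0.0
p, X = np.cos(x0), -np.sin(x0)            # (u'(x0), u''(x0)) = (1, 0)

ys = np.concatenate([x0 + np.logspace(-6, -1, 60),
                     x0 - np.logspace(-6, -1, 60)])
quot = (u(ys) - u(x0) - p*(ys - x0) - 0.5*X*(ys - x0)**2) / (ys - x0)**2

# For a C^2 function the quotient tends to 0, so (p, X) lies in both
# J^{2,+}u(x0) and J^{2,-}u(x0).
assert np.max(np.abs(quot)) < 0.02
# With (p, X + 1) the quotient becomes quot - 1/2 <= 0: the superjet
# property survives when the second-order part is enlarged.
assert np.max(quot - 0.5) <= 0.0
```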
Proposition 6.91.
Let \(\mathcal{O}\) be a locally closed subset of \(\mathbb{R}^{d}\).
-
(i)
Let \(u \in \mathit{USC}\left(\mathcal{O}\right)\) and \(\tilde{x} \in \mathcal{O}\).
-
(a)
If \(\left(p,X\right) \in J_{\mathcal{O}}^{2,+}u(\tilde{x})\) , then there exists a \(\varphi \in C^{2}(\mathcal{O})\) such that \(u(\tilde{x}) =\varphi (\tilde{x})\),
$$\displaystyle{\left(p,X\right) = \left(\varphi _{x}^{{\prime}}(\tilde{x}),\varphi _{ xx}^{{\prime\prime}}(\tilde{x})\right)}$$and \(\tilde{x}\) is a strict global maximum of \(u-\varphi\) in \(\mathcal{O}\) .
-
(b)
If \(\varphi \in C^{2}(\mathcal{O})\) and \(\tilde{x}\) is a local maximum of \(u-\varphi\) in \(\mathcal{O}\) , then
$$\displaystyle{\left(\varphi _{x}^{{\prime}}(\tilde{x}),\varphi _{ xx}^{{\prime\prime}}(\tilde{x})\right) \in J_{ \mathcal{O}}^{2,+}u(\tilde{x}).}$$
-
(a)
-
(ii)
Let \(u \in \mathit{LSC}\left(\mathcal{O}\right)\) and \(\tilde{x} \in \mathcal{O}\).
-
(a)
If \(\left(p,X\right) \in J_{\mathcal{O}}^{2,-}u(\tilde{x})\) , then there exists a \(\varphi \in C^{2}(\mathcal{O})\) such that \(u(\tilde{x}) =\varphi (\tilde{x})\),
$$\displaystyle{\left(p,X\right) = \left(\varphi _{x}^{{\prime}}(\tilde{x}),\varphi _{ xx}^{{\prime\prime}}(\tilde{x})\right)}$$and \(\tilde{x}\) is a strict global minimum of \(u-\varphi\) in \(\mathcal{O}\) .
-
(b)
If \(\varphi \in C^{2}(\mathcal{O})\) and \(\tilde{x}\) is a local minimum of \(u-\varphi\) in \(\mathcal{O}\) , then
$$\displaystyle{\left(\varphi _{x}^{{\prime}}(\tilde{x}),\varphi _{ xx}^{{\prime\prime}}(\tilde{x})\right) \in J_{ \mathcal{O}}^{2,-}u(\tilde{x}).}$$
-
(a)
Proof.
It is sufficient to prove (i) since \(J_{\mathcal{O}}^{2,-}u(\tilde{x}) = -J_{\mathcal{O}}^{2,+}\left(-u\right)(\tilde{x})\). Also the equivalence is clear if \(\tilde{x}\) is an isolated point of \(\mathcal{O}\).
Let \(\tilde{x}\) be a non-isolated point of \(\mathcal{O}\).
\(\left(\Rightarrow \right)\): Let \(\left(p,X\right) \in J_{\mathcal{O}}^{2,+}u(\tilde{x})\). Then there exists a strictly increasing function \(\rho =\rho ^{\left(\tilde{x}\right)}: [0,+\infty [\rightarrow [0,+\infty [\), ρ(0+) = 0 such that \(\forall y \in \mathcal{O}\)
One can define ρ by
Let
and β(r) = 0 if r ≤ 0. Then \(r\rho (\sqrt{r}) <\beta (r) < 8r\rho (8\sqrt{r})\) for all r > 0, \(\beta \in C^{2}(\left]0,\infty \right[)\), β(0+) = β ′(0+) = 0 and
Define \(\varphi \in C^{2}(\mathbb{R}^{d})\) by
Then
and \(\tilde{x}\) is a strict global maximum of \(u-\varphi\) since for \(y \in \mathcal{O}\setminus \left\{\tilde{x}\right\}\):
\(\left(\Leftarrow \right)\): Let \(\varphi \in C^{2}(\mathcal{O})\) and \(\tilde{x}\) be a local maximum of \(u-\varphi\). Let
By Taylor’s formula
■
Corollary 6.92.
Let \(\mathcal{O}\) be a locally closed subset of \(\mathbb{R}^{d}\).
-
(i)
\(\;u \in \mathit{USC}(\mathcal{O})\) is a viscosity sub-solution of (6.116) iff for any \(x \in \mathcal{O}\) and \((p,X) \in J_{\mathcal{O}}^{2,+}u(x)\)
$$\displaystyle{\Phi (x,u(x),p,X) \leq 0.}$$ -
(ii)
\(\;u \in \mathit{LSC}(\mathcal{O})\) is a viscosity super-solution of (6.116) iff for any \(x \in \mathcal{O}\) and \((p,X) \in J_{\mathcal{O}}^{2,-}u(x)\)
$$\displaystyle{\Phi (x,u(x),p,X) \geq 0.}$$
Definition 6.93.
Let \(u: \mathcal{O}\rightarrow \mathbb{R}\) and \(x \in \mathcal{O}\).
\(\overline{J}_{\mathcal{O}}^{2,+}u(x)\) (respect. \(\overline{J}_{\mathcal{O}}^{2,-}u(x)\)) is the set of \((p,X) \in \mathbb{R}^{d} \times \mathbb{S}^{d}\) such that there exists a sequence \((x_{n},p_{n},X_{n}) \in \mathcal{O}\times \mathbb{R}^{d} \times \mathbb{S}^{d}\), \(n \in \mathbb{N}^{{\ast}}\), with the properties
and
6.5.1.2 Systems of PDEs
Backward stochastic differential equations naturally give probabilistic formulas for systems of PDEs, not just for single PDEs.
Let \(\mathcal{O}\) be an open subset of \(\mathbb{R}^{d}\), \(\Phi \in C(\overline{\mathcal{O}}\times \mathbb{R}^{m} \times \mathbb{R}^{d} \times \mathbb{S}^{d}; \mathbb{R}^{m})\). We want to explain what we mean by the fact that \(u \in C(\mathcal{O}, \mathbb{R}^{m})\) solves in the viscosity sense the following systems of PDEs
Note that the various equations are coupled only through the vector u(x): the i-th equation depends upon all the coordinates of u(x), but only on the derivatives \(Du_{i}(x)\) and \(D^{2}u_{i}(x)\) of its i-th coordinate. This is essential for the following definition to make sense.
Definition 6.94.
Let \(\mathcal{O}\) be a locally closed subset of \(\mathbb{R}^{d}\).
-
(i) \(u \in \mathit{USC}(\mathcal{O})\) is a viscosity sub-solution of (6.120) if
$$\displaystyle{\Phi _{i}(x,u(x),p,X) \leq 0\ \text{ for }x \in \mathcal{O},\ 1 \leq i \leq m,\ (p,X) \in \overline{J}_{\mathcal{O}}^{2,+}u_{ i}(x).}$$ -
(ii) \(u \in \mathit{LSC}(\mathcal{O})\) is a viscosity super-solution of (6.120) if
$$\displaystyle{\Phi _{i}(x,u(x),p,X) \geq 0\ \text{ for }x \in \mathcal{O},\ 1 \leq i \leq m,\ (p,X) \in \overline{J}_{\mathcal{O}}^{2,-}u_{ i}(x).}$$ -
(iii) \(u \in C(\mathcal{O})\) is a viscosity solution of (6.120) if it is both a viscosity sub- and super-solution.
6.5.1.3 Boundary Conditions
We now discuss the formulation of the boundary condition in the framework of viscosity solutions. Suppose for simplicity that the boundary \(\partial \mathcal{O}\) of the open set \(\mathcal{O}\) is of class C 1 and that \(\mathcal{O}\) satisfies the uniform exterior ball condition. We shall consider two types of boundary conditions, namely:
-
Dirichlet boundary conditions, of the form
$$\displaystyle{u(x) -\kappa (x) = 0,\quad x \in \partial \mathcal{O};}$$ -
Nonlinear Neumann boundary conditions, of the form
$$\displaystyle{\langle n(x),\mathit{Du}(x)\rangle + G(x,u(x)) = 0,\quad x \in \partial \mathcal{O},}$$where n(x) denotes the outward normal vector to the boundary \(\partial \mathcal{O}\) at x.
Consider the function
defined in the case of the Dirichlet boundary condition by
and in the case of the Neumann boundary condition by
where \(G \in C(\partial \mathcal{O}\times \mathbb{R})\) and r → G(x, r) is assumed to be nonincreasing for all \(x \in \partial \mathcal{O}\). The correct formulation of the boundary value problem
is as follows.
Definition 6.95.
Let \(\mathcal{O}\) be an open subset of \(\mathbb{R}^{d}\), \(\Phi \in C(\overline{\mathcal{O}}\times \mathbb{R} \times \mathbb{R}^{d} \times \mathbb{S}^{d})\) be proper and \(\Gamma \in C(\mathcal{O}\times \mathbb{R} \times \mathbb{R}^{d})\) be as defined above.
-
(i) \(u \in \mathit{USC}(\overline{\mathcal{O}})\) is a viscosity sub-solution of (6.121) if
$$\displaystyle{\left\{\begin{array}{l} \Phi (x,u(x),p,X) \leq 0\ \text{ for }x \in \mathcal{O},\ (p,X) \in \overline{J}_{\overline{\mathcal{O}}}^{2,+}u(x), \\ \Phi (x,u(x),p,X) \wedge \Gamma (x,u(x),p) \leq 0\ \text{ for }x \in \partial \mathcal{O},\ (p,X) \in \overline{J}_{\overline{\mathcal{O}}}^{2,+}u(x).\end{array} \right.}$$ -
(ii) \(u \in \mathit{LSC}(\overline{\mathcal{O}})\) is a viscosity super-solution of (6.121) if
$$\displaystyle{\left\{\begin{array}{l} \Phi (x,u(x),p,X) \geq 0\ \text{ for }x \in \mathcal{O},\ (p,X) \in \overline{J}_{\overline{\mathcal{O}}}^{2,-}u(x), \\ \Phi (x,u(x),p,X) \vee \Gamma (x,u(x),p) \geq 0\ \text{ for }x \in \partial \mathcal{O},\ (p,X) \in \overline{J}_{\overline{\mathcal{O}}}^{2,-}u(x).\end{array} \right.}$$ -
(iii) \(u \in C(\overline{\mathcal{O}})\) is a viscosity solution of (6.121) if it is both a viscosity sub- and super-solution.
6.5.1.4 Parabolic PDEs
One might think that a parabolic PDE is an elliptic PDE with one more variable, namely time t. However, because we are considering equations with first derivatives in t only, the variable t plays a specific role. In particular, there will be a boundary condition either at the initial point or at the final point of the time interval, not at both.
Given \(\mathcal{O}\subset \mathbb{R}^{d}\) and \(\Phi \in C([0,T] \times \mathcal{O}\times \mathbb{R} \times \mathbb{R}^{d} \times \mathbb{S}^{d})\), we consider the parabolic equation
where, as previously, Du stands for the vector of first order partial derivatives with respect to the \(x_{i}\)'s, and \(D^{2}u\) for the matrix of second order partial derivatives with respect to \(x_{i}\) and \(x_{j}\), 1 ≤ i,j ≤ d. Only in the case \(\mathcal{O} = \mathbb{R}^{d}\) can we hope that the above parabolic PDE is well posed. If \(\mathcal{O}\not =\mathbb{R}^{d}\), some boundary condition is needed. This will be discussed later.
We denote by \(\mathcal{P}_{\mathcal{O}}^{2,+}\) and \(\mathcal{P}_{\mathcal{O}}^{2,-}\) the parabolic analogs of \(J_{\mathcal{O}}^{2,+}\) and \(J_{\mathcal{O}}^{2,-}\). More specifically, for \(\mathcal{O}\) a locally compact subset of \(\mathbb{R}^{d}\), T > 0, denoting \(\mathcal{O}_{T} = (0,T) \times \mathcal{O}\), if \(u: \mathcal{O}_{T} \rightarrow \mathbb{R}\), 0 < s, t < T, \(x,y \in \mathcal{O}\), \((p,q,X) \in \mathbb{R} \times \mathbb{R}^{d} \times \mathbb{S}^{d}\), we say that \((p,q,X) \in \mathcal{P}_{\mathcal{O}}^{2,+}u(t,x)\), whenever
Moreover \(\mathcal{P}_{\mathcal{O}}^{2,-}u = -\mathcal{P}_{\mathcal{O}}^{2,+}(-u)\). The corresponding definitions of \(\overline{\mathcal{P}}_{\mathcal{O}}^{2,+}u(t,x)\) and \(\overline{\mathcal{P}}_{\mathcal{O}}^{2,-}u(t,x)\) are now clear.
We now give a definition of the notion of a viscosity solution of equation (6.122).
Definition 6.96.
With the above notation:
-
(i) \(u \in \mathit{USC}([0,T) \times \mathcal{O})\) is a viscosity sub-solution of Eq. (6.122) if u(0, x) ≤ κ(x), \(x \in \mathcal{O}\) and
$$\displaystyle{p + \Phi (t,x,u(t,x),q,X) \leq 0,\text{ for }(t,x) \in \mathcal{O}_{T},(p,q,X) \in \mathcal{P}_{\mathcal{O}}^{2,+}u(t,x).}$$ -
(ii) \(u \in \mathit{LSC}([0,T) \times \mathcal{O})\) is a viscosity super-solution of Eq. (6.122) if u(0, x) ≥ κ(x), \(x \in \mathcal{O}\) and
$$\displaystyle{p + \Phi (t,x,u(t,x),q,X) \geq 0,\text{ for }(t,x) \in \mathcal{O}_{T},(p,q,X) \in \mathcal{P}_{\mathcal{O}}^{2,-}u(t,x).}$$ -
(iii) \(u \in C([0,T) \times \mathcal{O})\) is a viscosity solution of (6.122) if it is both a sub- and a super-solution.
We remark that u(t, x) solves the parabolic PDE (6.122) if and only if v(t, x) = e λ t u(t, x) solves the same equation with \(\Phi \) replaced by \(\Phi +\lambda r\), which in the case where \(\Phi \) has the form (6.117) is proper iff r → λ r − F(t, x, r, q) is increasing for any (t, x, q). The fact that this is true for some λ is one of our standing assumptions on F for existence and uniqueness of the solution to the associated BSDE.
Note that we also consider parabolic PDEs with a final condition (at time t = T) rather than an initial condition (at time t = 0). In that case, the equation becomes
and the condition u(0, x) ≤ κ(x) (resp. u(0, x) ≥ κ(x)) becomes u(T, x) ≤ κ(x) (resp. u(T, x) ≥ κ(x)).
Finally we explain what we mean by a viscosity solution of the parabolic PDE
where \(\partial \varphi\) is the subdifferential of the convex lower semicontinuous function \(\varphi: \mathbb{R} \rightarrow (-\infty,+\infty ]\).
A sub-solution is a function \(u \in \mathit{USC}(\mathcal{O}_{T})\) which is such that for any \((t,x) \in \mathcal{O}_{T}\), \(u(t,x) \in \mathrm{ Dom}(\varphi )\) and whenever \((p,q,X) \in \mathcal{P}_{\mathcal{O}}^{2,+}u(t,x)\),
where \(\varphi _{-}^{{\prime}}(r)\) is the left derivative of \(\varphi\) at the point r. A super-solution is defined similarly with the usual changes, the left derivative of \(\varphi\) being replaced by its right derivative.
6.5.2 A First Uniqueness Result
Let \(\mathcal{O}\) be an open subset of \(\mathbb{R}^{d}\) and \(\Phi \in C(\mathcal{O}\times \mathbb{R} \times \mathbb{R}^{d} \times \mathbb{S}^{d})\).
The basic assumptions of this subsection are:
-
\(\left(A_{1}\right)\) Super-monotonicity: there exists a δ > 0 such that for all \(x \in \mathcal{O}\), \(p \in \mathbb{R}^{d}\), \(X \in \mathbb{S}^{d}\), \(r_{1},r_{2} \in \mathbb{R}\):
$$\displaystyle{r_{1} \leq r_{2}\; \Rightarrow \; \Phi (x,r_{2},p,X) - \Phi (x,r_{1},p,X) \geq \left(r_{2} - r_{1}\right)\ \delta,}$$and
-
\(\left(A_{2}\right)\) Super-degenerate-ellipticity: for all R > 0 there exists an increasing function \(\mathbf{m}_{R}: \mathbb{R}_{+} \rightarrow \mathbb{R}_{+}\), \(\mathbf{m}_{R}\left(0+\right) = 0\) such that if α > 0, \(X,Y \in \mathbb{S}^{d}\) and
$$\displaystyle{ \left(\begin{array}{cc} X & 0\\ 0 & - Y\end{array} \right) \leq 3\alpha \left(\begin{array}{cc} I & - I\\ - I & I\end{array} \right), }$$(6.123)or equivalently
$$\displaystyle{\left\langle Xz,z\right\rangle -\left\langle Y w,w\right\rangle \leq 3\alpha \left\vert z - w\right\vert ^{2},\;\;\forall \ z,w \in \mathbb{R}^{d},}$$then for all \(x,y \in \mathcal{O}\cap \overline{B\left(0,R\right)}\), \(r \in \mathbb{R}\):
$$\displaystyle{ \Phi (y,r,\alpha (x - y),Y ) - \Phi (x,r,\alpha (x - y),X) \leq \mathbf{m}_{R}\left(\left\vert x - y\right\vert +\alpha \left\vert x - y\right\vert ^{2}\right). }$$(6.124)
Note that if X and Y satisfy (6.123) then X ≤ Y (setting z = w).
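The implication just noted can be checked mechanically. The sketch below (in Python with numpy; the symmetric matrix S and the value of α are arbitrary choices of ours) assembles the block matrices of (6.123) for one admissible pair (X,Y) and verifies both the block inequality and the comparison X ≤ Y obtained by taking z = w:

```python
import numpy as np

d, alpha = 3, 2.0
S = np.diag([0.3, -0.2, 0.1])        # any fixed symmetric matrix (our choice)
X = S - 0.5*np.eye(d)                # X, Y built so that (6.123) holds
Y = S + 0.5*np.eye(d)

J = np.block([[np.eye(d), -np.eye(d)], [-np.eye(d), np.eye(d)]])
M = np.block([[X, np.zeros((d, d))], [np.zeros((d, d)), -Y]])
gap = 3*alpha*J - M                  # (6.123) says gap is positive semidefinite

assert np.min(np.linalg.eigvalsh(gap)) >= -1e-10
# Taking z = w in <Xz,z> - <Yw,w> <= 3*alpha*|z - w|^2 yields X <= Y:
assert np.min(np.linalg.eigvalsh(Y - X)) >= -1e-10
```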
In the particular case of the function \(\Phi \) given by (6.117), the super-monotonicity of \(\Phi \) is a consequence of the same property for − F. As for the super degenerate ellipticity, we have the following:
Lemma 6.97.
If g is globally Lipschitz, f is globally monotone, and − F satisfies (6.124), then \(\Phi \) is super-degenerate-elliptic.
Proof.
The global monotonicity of f implies that
Now consider the term involving g. We take advantage of (6.123) and the Lipschitz continuity of g:
■
Theorem 6.98 (Comparison Principle).
Let \(\mathcal{O}\) be a bounded open subset of \(\mathbb{R}^{d}\) and assume that \(\Phi \,: \mathcal{O}\times \mathbb{R} \times \mathbb{R}^{d} \times \mathbb{S}^{d} \rightarrow \mathbb{R}\) satisfies \(\left(A_{1}\right)\) and \(\left(A_{2}\right)\). If
then
We first prove auxiliary results.
Lemma 6.99.
Given \(u,v \in C(\bar{\mathcal{O}})\) , α > 0, we define
Let \((\hat{x},\hat{y})\) be a local maximum in \(\mathcal{O}\times \mathcal{O}\) of ψ α. Then there exist \(X,Y \in \mathbb{S}^{d}\) such that
-
(j)
\((\alpha (\hat{x} -\hat{ y}),X) \in \bar{ J}_{\mathcal{O}}^{2,+}u(\hat{x})\),
-
(jj)
\((\alpha (\hat{x} -\hat{ y}),Y ) \in \bar{ J}_{\mathcal{O}}^{2,-}v(\hat{y})\),
-
(jjj)
\(\left(\begin{array}{lr} X & 0\\ 0 & - Y\end{array} \right) \leq 3\alpha \left(\begin{array}{rr} I & - I\\ - I & I\end{array} \right)\) .
Proof.
We shall use the notation
It is sufficient to prove the proposition in the case where \(\mathcal{O} = \mathbb{R}^{d}\), \(\hat{x} =\hat{ y} = 0\), u(0) = v(0) = 0, (0, 0) is a global maximum of ψ α , and u and − v are bounded from above. Hence we may assume that for all \(x,y \in \mathbb{R}^{d}\),
and we need to show that there exist X, Y ∈ S d such that
-
( j’)
\((0,X) \in \bar{ J}^{2,+}u(0)\),
-
( jj’)
\((0,Y ) \in \bar{ J}^{2,-}v(0)\),
-
( jjj’)
\(\left(\begin{array}{lr} X & 0\\ 0 & - Y\end{array} \right) \leq 3A\).
With the notations \(\bar{x} = \left(\begin{array}{c} x\\ y \end{array} \right)\), \(\bar{\xi }= \left(\begin{array}{c} \xi \\ \eta \end{array} \right)\), we deduce from Schwarz’s inequality that (with the notation \(\Vert A\Vert \mathop{ =}\limits^{ \mathit{def }}\sup \{\vert \left\langle A\bar{\xi },\bar{\xi }\right\rangle \vert;\vert \bar{\xi }\vert \leq 1\}\)):
Hence if \(B\mathop{ =}\limits^{ \mathit{def }}3A = A + \frac{1} {\alpha } A^{2}\), \(\lambda \mathop{=}\limits^{ \mathit{def }}\alpha +\Vert A\Vert\), and \(w(\bar{x})\mathop{ =}\limits^{ \mathit{def }}u(x) - v(y)\), (6.125) implies
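The identity \(B = 3A = A + \frac{1}{\alpha }A^{2}\) is not generic: it uses the special structure \(A =\alpha \left(\begin{array}{cc} I & - I\\ - I & I\end{array} \right)\) coming from the quadratic penalization, for which A² = 2αA. A quick numerical confirmation (our sketch, with arbitrary d and α):

```python
import numpy as np

d, alpha = 2, 1.5
I = np.eye(d)
A = alpha * np.block([[I, -I], [-I, I]])   # penalization Hessian

# A @ A = 2*alpha*A, hence A + (1/alpha) A^2 = 3A
assert np.allclose(A @ A, 2*alpha*A)
assert np.allclose(A + (1/alpha)*(A @ A), 3*A)
```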
We now introduce inf- and sup-convolutions. Let
where
Since a supremum (resp. an infimum) of convex (resp. concave) functions is convex (resp. concave), the mappings
are convex, while
is concave. Hence \(\hat{w}\), \(\hat{u}\) and \(-\hat{v}\) are “semi-convex”, i.e. they are the sum of a convex function and a function of class \(C^{2}\). Note that the hyphen is here on purpose, in order to distinguish this notion from the notion of semiconvex functions introduced in Sect. 4.3.
Moreover:
and from (6.126)
hence
and consequently
If \(\hat{w}\) were smooth, we could deduce that there exists an \(\mathcal{X} \in S_{2d}\) such that \((0,\mathcal{X}) \in J^{2}\hat{w}(0)\) and \(\mathcal{X} \leq B\). Since \(\hat{w}\) is only semi-convex, one uses Alexandrov's theorem (which says that a semi-convex function is a.e. twice differentiable) together with a lemma due to R. Jensen to show that the above statement remains essentially true: it holds provided the first condition is weakened to \((0,\mathcal{X}) \in \bar{ J}^{2}\hat{w}(0)\). We refer to the user's guide [18] for more details. Now, since \(\hat{w}(\bar{\xi }) =\hat{ u}(\xi ) -\hat{ v}(\eta )\), it is not hard to deduce that \(\mathcal{X} = \left(\begin{array}{lr} X & 0\\ 0 & - Y \end{array} \right)\), with \((0,X) \in \bar{ J}^{2}\hat{u}(0)\), \((0,Y ) \in \bar{ J}^{2}\hat{v}(0)\).
The magical property of sup-convolution is that this is enough to conclude that \((0,X) \in \bar{ J}^{2,+}u(0)\) and \((0,Y ) \in \bar{ J}^{2,-}v(0)\), which is a consequence of the next Lemma. ■
Lemma 6.100.
Let λ > 0, \(u \in C(\mathbb{R}^{d})\) be bounded from above, and
If \(\eta,q \in \mathbb{R}^{d}\) , X ∈ S d and \((q,X) \in J^{2,+}\hat{u}(\eta )\) , then (q,X) ∈ J 2,+ u(η + q∕λ).
Proof.
We assume that \((q,X) \in J^{2,+}\hat{u}(\eta )\). Let \(y \in \mathbb{R}^{d}\) be such that
Then for any \(x,\zeta \in \mathbb{R}^{d}\),
If we choose ζ = x − y +η, then we deduce from the above that
On the other hand, choosing x = y and ζ = η +α(λ(η − y) + q), we obtain that
The first inequality says that (q, X) ∈ J 2, + u(y), while the second, with α < 0 small enough in absolute value, implies that \(y =\eta +\frac{q} {\lambda }\). The result is proved. ■
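Lemma 6.100 can be visualized numerically. In the sketch below (Python/numpy; the normalization \(\hat{u}(\eta ) =\sup _{x}\left(u(x) - \frac{\lambda }{2}\left\vert x-\eta \right\vert ^{2}\right)\) is our assumption, since the displayed formula is not reproduced here) we take u(x) = −x², compute the slope q of \(\hat{u}\) at a point η by finite differences, and check that the maximizing point of the sup-convolution is exactly η + q∕λ, where moreover u′ equals q:

```python
import numpy as np

lam = 1.0
u = lambda x: -x**2                 # continuous and bounded from above

# Sup-convolution on a grid (lambda/2 normalization assumed).
xs = np.linspace(-10.0, 10.0, 200001)
def u_hat(eta):
    return np.max(u(xs) - 0.5*lam*(xs - eta)**2)

eta, h = 0.9, 1e-4
q = (u_hat(eta + h) - u_hat(eta - h)) / (2*h)    # slope of u_hat at eta

# Point where the supremum defining u_hat(eta) is attained.
y_star = xs[np.argmax(u(xs) - 0.5*lam*(xs - eta)**2)]

# Lemma 6.100: a jet of u_hat at eta is a jet of u at eta + q/lam, which
# is exactly the maximizing point y_star; here u'(y_star) = -2*y_star = q.
assert abs(y_star - (eta + q/lam)) < 1e-2
assert abs(-2*y_star - q) < 1e-2
```

For this u one can solve by hand: the maximizer is x* = η∕3 and \(\hat{u}(\eta ) = -\eta ^{2}/3\), so q = −2η∕3 and η + q∕λ = η∕3 = x*, in agreement with the numerics.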
We shall also need the following:
Lemma 6.101.
Let \(\mathcal{O}\) be a locally closed subset of \(\mathbb{R}^{d}\), \(\Phi \in \mathit{USC}(\mathcal{O})\), \(\varPsi \in \mathit{LSC}(\mathcal{O})\) , Ψ ≥ 0, \(\varepsilon > 0\) and
If \(\lim \limits _{\varepsilon \rightarrow 0}M_{\varepsilon }\) exists in \(\mathbb{R}\) and \(x_{\varepsilon } \in \mathcal{O}\) satisfies
then
Moreover if \(\hat{x} \in \mathcal{O}\) and there exists a sequence \(\varepsilon _{n} \rightarrow 0\) such that \(x_{\varepsilon _{n}} \rightarrow \hat{ x}\) , then
Proof.
Let \(\alpha _{\varepsilon } = M_{\varepsilon } - \Phi (x_{\varepsilon }) + \dfrac{1} {\varepsilon } \varPsi (x_{\varepsilon })\). Note that for \(0 <\varepsilon <\delta\) we have \(M_{\varepsilon } \leq M_{\delta }\) and
Then
and (6.127) follows. Moreover by the lower semicontinuity of Ψ
Using now the upper semicontinuity of \(\Phi \) we have
The result follows. ■
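A toy example may help to see what Lemma 6.101 asserts. In the following sketch (our example: Φ(x) = 1 − (x − 1)², Ψ(x) = x², both on [−2,2]) the maximizers \(x_{\varepsilon }\) of \(\Phi -\varepsilon ^{-1}\varPsi\) drift to the zero set of Ψ, the penalty \(\varepsilon ^{-1}\varPsi (x_{\varepsilon })\) vanishes, and \(M_{\varepsilon }\) decreases to \(\sup \left\{\Phi (x):\varPsi (x) = 0\right\}\):

```python
import numpy as np

# Penalization in the spirit of Lemma 6.101 (our toy data):
# Phi(x) = 1 - (x-1)^2, Psi(x) = x^2 >= 0, M_eps = max(Phi - Psi/eps).
phi = lambda x: 1.0 - (x - 1.0)**2
psi = lambda x: x**2
xs = np.linspace(-2.0, 2.0, 400001)

for eps in [1e-1, 1e-2, 1e-3]:
    vals = phi(xs) - psi(xs)/eps
    x_eps = xs[np.argmax(vals)]
    # the maximizer is x_eps = eps/(1+eps): it drifts to {Psi = 0} = {0}
    assert abs(x_eps - eps/(1.0 + eps)) < 1e-4
    # the penalty (1/eps) Psi(x_eps) vanishes as eps -> 0
    assert psi(x_eps)/eps <= eps + 1e-4
    # M_eps = eps/(1+eps) decreases to sup {Phi : Psi = 0} = Phi(0) = 0
    assert abs(np.max(vals) - eps/(1.0 + eps)) < 1e-3
```

Here the closed-form maximizer \(x_{\varepsilon } =\varepsilon /(1+\varepsilon )\) is obtained by setting the derivative of \(\Phi -\varepsilon ^{-1}\varPsi\) to zero, which makes the three asserted limits easy to read off.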
Proof of the comparison principle.
Assume that
Let \(\varepsilon > 0\) and
Clearly for \(\delta >\varepsilon\), \(M_{\delta } \geq M_{\varepsilon } \geq u(x) - v(x)\), \(\forall \ x \in \overline{\mathcal{O}}\) and consequently \(M_{\varepsilon }\) converges in \(\mathbb{R}\) as \(\varepsilon \rightarrow 0\),
Since \(\overline{\mathcal{O}}\) is compact and (x, y)↦u(x) − v(y) is upper semicontinuous on \(\overline{\mathcal{O}}\times \overline{\mathcal{O}}\), there exists \((x_{\varepsilon },y_{\varepsilon }) \in \overline{\mathcal{O}}\times \overline{\mathcal{O}}\) such that
By Lemma 6.101, with \(\Phi (x,y) = u(x) - v(y)\) and \(\varPsi (x,y) = \dfrac{1} {2}\left\vert x - y\right\vert ^{2}\) we obtain
We now conclude that there exists an \(\varepsilon _{0} > 0\) such that
Since \(u\left(x\right) \leq v\left(x\right),\;\;\forall \ x \in \partial \mathcal{O}\) and whenever \(\varepsilon _{n} \rightarrow 0\) and \(x_{\varepsilon _{n}},y_{\varepsilon _{n}} \rightarrow \hat{ x}\), it follows that
By Lemma 6.99, for \(0 <\varepsilon \leq \varepsilon _{0}\) there exist \(X_{\varepsilon }\), \(Y _{\varepsilon } \in \mathbb{S}^{d}\) such that
and the inequality (jjj) in Lemma 6.99 reads here
Let R > 0 be such that \(\mathcal{O}\subset B\left(0,R\right)\). From \(\left(A_{2}\right)\) with \(\alpha =\varepsilon ^{-1}\), we deduce that
and since \(u(x_{\varepsilon }) > v(y_{\varepsilon })\) for \(\varepsilon\) small enough, we deduce from \(\left(A_{1}\right)\) that
It follows that
Since u is a viscosity sub-solution and v is a viscosity super-solution of the equation \(\Phi = 0\), we deduce from (6.129) that
Hence
then also
and letting \(\varepsilon \rightarrow 0\), we infer the contradiction
The Theorem is established. ■
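The doubling-of-variables device used in this proof is easy to experiment with. The sketch below (our illustration; we assume the standard penalization \(\psi _{\alpha }\left(x,y\right) = u(x) - v(y) -\frac{\alpha }{2}\left\vert x - y\right\vert ^{2}\), whose display is not reproduced here) maximizes ψ α on a grid for u = sin, v = cos on [0,π], and watches the two maximizing points merge onto the diagonal maximum of u − v as α grows:

```python
import numpy as np

# Doubling of variables: psi_alpha(x, y) = u(x) - v(y) - (alpha/2)|x - y|^2.
u = lambda x: np.sin(x)
v = lambda y: np.cos(y)
xs = np.linspace(0.0, np.pi, 2001)
X, Y = np.meshgrid(xs, xs, indexing='ij')

gaps = {}
for alpha in [1.0, 100.0, 10000.0]:
    psi = u(X) - v(Y) - 0.5*alpha*(X - Y)**2
    i, j = np.unravel_index(np.argmax(psi), psi.shape)
    gaps[alpha] = abs(xs[i] - xs[j])

# the two maximizing points merge as alpha grows ...
assert gaps[10000.0] <= gaps[1.0]
assert gaps[10000.0] < 0.01
# ... and the doubled maximum collapses onto the maximum of u - v,
# attained at x = 3*pi/4 where sin(x) - cos(x) = sqrt(2)
assert abs(xs[i] - 3*np.pi/4) < 0.05
```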
We deduce from this theorem the uniqueness of the viscosity solution for the Dirichlet problem.
Corollary 6.102.
Under the assumptions of Theorem 6.98, if \(u,v \in C\left(\overline{\mathcal{O}}\right)\) are two viscosity solutions of \(\Phi = 0\) on \(\mathcal{O}\) then
This Corollary proves that our probabilistic formula provides the unique solution of the corresponding elliptic PDE, satisfying the Dirichlet boundary condition in the classical sense. However it follows from Theorem 7.9 in [18] that it is also the unique solution in the larger class of those solutions satisfying the Dirichlet boundary condition in the (relaxed) viscosity sense.
Let us now indicate how the above proof can be modified, in order to treat the case of a parabolic PDE with Dirichlet condition at the boundary of a bounded set.
Let \(\mathcal{O}\) be a bounded open subset of \(\mathbb{R}^{d}\). Consider the Cauchy–Dirichlet problem
where \(\kappa \in C([0,T[\times \overline{\mathcal{O}})\).
The notion of a viscosity solution to (6.130) is expressed as in Definition 6.96, adding the requirement u(t, x) ≤ κ(t, x) (resp. ≥ ) for \((t,x) \in (0,T) \times \partial \mathcal{O}\) for u to be a sub-solution (resp. a super-solution).
We have the comparison principle:
Theorem 6.103.
Let \(\Phi \in C(\left[0,T\right] \times \overline{\mathcal{O}}\times \mathbb{R} \times \mathbb{R}^{d} \times \mathbb{S}^{d})\) be a proper function satisfying \(\left(A_{1}\right)\) and \(\left(A_{2}\right)\) for each fixed \(t \in \left[0,T\right[\) , with the same δ and m R. If \(u \in \mathit{USC}(\left[0,T\right) \times \overline{\mathcal{O}})\) is a viscosity sub-solution of (6.130) and \(v \in \mathit{LSC}(\left[0,T\right) \times \overline{\mathcal{O}})\) is a viscosity super-solution of (6.130) then
An essential tool for the proof of this Theorem is the parabolic analog of Lemma 6.99, which is as follows:
Lemma 6.104.
Given \(u,v \in C(\mathcal{O}_{T})\) , α > 0, let
Let \((\hat{t},\hat{x},\hat{y})\) be a local maximum of ψ α in \((0,T) \times \mathcal{O}\times \mathcal{O}\). Suppose moreover that there is an r > 0 such that for every M > 0 there is a C with the property that whenever \((p,q,X) \in \mathcal{P}_{\mathcal{O}}^{2,+}u(t,x)\), \(\vert x -\hat{ x}\vert + \vert t -\hat{ t}\vert \leq r\) and |u(t,x)| + |q| + |X|≤ M, then p ≤ C, and the same is true if we replace \(\mathcal{P}_{\mathcal{O}}^{2,+}u(t,x)\) by \(-\mathcal{P}_{\mathcal{O}}^{2,-}v(t,x)\). Then there exist \(p \in \mathbb{R}\), \(X,Y \in \mathbb{S}^{d}\) such that
-
(j)
\((p,\alpha (\hat{x} -\hat{ y}),X) \in \overline{\mathcal{P}}_{\mathcal{O}}^{2,+}u(\hat{t},\hat{x})\),
-
(jj)
\((-p,\alpha (\hat{x} -\hat{ y}),Y ) \in \overline{\mathcal{P}}_{\mathcal{O}}^{2,-}v(\hat{t},\hat{y})\),
-
(jjj)
\(\left(\begin{array}{lr} X & 0\\ 0 & - Y\end{array} \right) \leq 3\alpha \left(\begin{array}{rr} I & - I\\ - I & I\end{array} \right)\) .
Proof of the Theorem.
We only sketch the proof. We first observe that it suffices to prove that \(\tilde{u}(t,x) = u(t,x) -\varepsilon /(T - t) \leq v(t,x)\) for all \((t,x) \in (0,T) \times \mathcal{O}\) and all \(\varepsilon > 0\). Now \(\tilde{u}\) satisfies
From now on we write u instead of \(\tilde{u}\). We argue by contradiction, assuming that \(\max _{(0,T)\times \mathcal{O}}[u - v] =\delta > 0\). Let \((\hat{t},\hat{x},\hat{y})\) be a local maximum of ψ α (t, x, y) from Lemma 6.104, and write
From our standing assumption, M α ≥ δ > 0. It is not hard to show that for α large enough, \(0 <\hat{ t} < T\), \(\hat{x},\hat{y} \in \mathcal{O}\). Arguing as in the proof of Theorem 6.98, with the help this time of Lemma 6.104, we conclude that there exist \(p \in \mathbb{R}\), \(X,Y \in \mathbb{S}^{d}\) and c > 0 such that
while
We deduce that
from which a contradiction follows. ■
6.5.3 A Second Uniqueness Result
We are given a continuous and globally monotone \(f\,:\, \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}\) and a globally Lipschitz \(g: \mathbb{R}^{d} \rightarrow \mathbb{R}^{d\times d}\) together with
such that, for each 1 ≤ i ≤ m, F i (t, x, y, z) depends on the matrix z only through its i-th column z i . As already explained, this assumption is essential for the notion of a viscosity solution of the system of partial differential equations to be considered below to make sense. We assume specifically that for some constants C, p > 0:
-
(A.2i)
| F(t, x, 0, 0, 0) | ≤ C(1 + | x | p), | κ(x) | ≤ C(1 + | x | p),
-
(A.2ii)
F = F(t, x, y, z) is globally Lipschitz in (y, z), uniformly in (t, x).
Remark 6.105.
In the case of systems of equations, it does not seem possible to weaken the Lipschitz continuity of F in y to a monotonicity condition as we do in the case m = 1.
Under the assumptions (A.2i) and (A.2ii), for each \(t \in [0,T]\) and \(x \in \mathbb{R}^{d}\), we consider the system of PDEs
where
The notion of a viscosity solution for such a system is easily deduced from a combination of Definitions 6.94 and 6.96.
We can replace “global maximum point” or “global minimum point” by “strict global maximum point” or “strict global minimum point”. The proof of this claim is very simple and we leave it as an exercise for the reader.
Now we give a uniqueness result for (6.131). This result is obtained under the following additional assumption:
-
(A.2 iii)
\(\vert F(t,x,r,p) - F(t,y,r,p)\vert \leq \mathbf{m}_{R}(\vert x - y\vert (1 + \vert p\vert )),\)
for all \(x,y \in \mathbb{R}^{d}\) such that | x | ≤ R, | y | ≤ R, \(r \in \mathbb{R}^{m}\), \(p \in \mathbb{R}^{d}\), where for each R > 0, \(\mathbf{m}_{R} \in C(\mathbb{R}_{+})\) is increasing and \(\mathbf{m}_{R}(0) = 0\).
Our result is the following:
Theorem 6.106.
Assume that f,g satisfy (A.2). Then there exists at most one viscosity solution u of (6.131) such that
uniformly for t ∈ [0,T], for some δ > 0.
Remark 6.107.
Notice that any function which has at most a polynomial growth at infinity satisfies (6.132).
The growth condition (6.132) is optimal to obtain such a uniqueness result for (6.131). Indeed, consider the equation
then u is a solution of (6.133) if and only if the function \(v(t,y) = u(t,e^{y})\) is a solution of the Heat Equation
But it is well-known that, for the Heat Equation, the uniqueness holds in the class of solutions v satisfying
uniformly for \(t \in [0,T]\), for some δ > 0. And (6.135) gives back (6.132) for (6.133) since y = log(x).
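The equivalence rests on the chain rule for the substitution \(y =\log x\), x > 0 (a sketch; the exact form of (6.133) is not reproduced here):

```latex
v(t,y) = u(t,e^{y}), \qquad
\partial_{y}v(t,y) = e^{y}\,\partial_{x}u(t,e^{y}) = x\,\partial_{x}u(t,x),
```

```latex
\partial_{yy}v(t,y) = e^{2y}\,\partial_{xx}u(t,e^{y}) + e^{y}\,\partial_{x}u(t,e^{y})
= x^{2}\,\partial_{xx}u(t,x) + x\,\partial_{x}u(t,x),
```

so a second-order term \(x^{2}\partial_{xx}u\) becomes \(\partial_{yy}v -\partial_{y}v\), and a growth condition on v in the variable y translates into the corresponding condition (6.132) on u in the variable \(x = e^{y}\).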
Let us finally mention that, in our case, the growth condition (6.132) is mainly a consequence of the assumptions on the coefficients of the differential operator and in particular on a = gg ∗; under the assumptions of Theorem 6.106, the matrix a has, a priori, a quadratic growth at infinity. If a is assumed to have a linear growth at infinity, an easy adaptation of the proof of Theorem 6.106 shows that the uniqueness holds in the class of solutions satisfying
uniformly for \(t \in [0,T]\), for some δ > 0.
Proof of Theorem 6.106.
Let u and v be two viscosity solutions of (6.131). The proof consists of two steps. We first show that u − v and v − u are viscosity sub-solutions of an integral partial differential system; then we build a suitable sequence of smooth super-solutions of this system to show that | u − v | = 0 in \([0,T] \times \mathbb{R}^{d}\). Here and below, we denote by | ⋅ | the sup norm in \(\mathbb{R}^{m}\).
Lemma 6.108.
Let u be a sub-solution and v a super-solution of (6.131). Then the function ω:= u − v is a viscosity sub-solution of the system
for 1 ≤ i ≤ k, where \(\tilde{K}\) is the Lipschitz constant of F in (r,p).
Proof.
Let \(\varphi \in C^{2}([0,T] \times \mathbb{R}^{d})\) and let \((t_{0},x_{0}) \in (0,T) \times \mathbb{R}^{d}\) be a strict global maximum point of \(\omega _{i}-\varphi\) for some 1 ≤ i ≤ k.
We introduce the function
where n is a parameter which will tend to infinity.
Since (t 0, x 0) is a strict global maximum point of \(u_{i} - v_{i}-\varphi\), by a classical argument in the theory of viscosity solutions, there exists a sequence (t n , x n , y n ) such that:
-
(i)
(t n , x n , y n ) is a global maximum point of ψ n in \([0,T] \times (\overline{B}_{R})^{2}\), where B R is a ball with a large radius R;
-
(ii)
(t n , x n ), (t n , y n ) → (t 0, x 0) as n → ∞;
-
(iii)
\(n\vert x_{n} - y_{n}\vert ^{2}\) is bounded and tends to zero as n → ∞.
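Property (iii) can be illustrated by a toy one-dimensional computation (the functions u, v below are arbitrary illustrative choices, not from the text): maximizing the penalized function \(\psi _{n}(x,y) = u(x) - v(y) - n\vert x - y\vert ^{2}\) on a grid, the penalty term \(n\vert x_{n} - y_{n}\vert ^{2}\) shrinks to zero as n grows.

```python
import numpy as np

def penalized_max(n, grid):
    # maximise psi_n(x, y) = u(x) - v(y) - n|x - y|^2 on a product grid,
    # with the illustrative choices u(x) = sin x, v(y) = sin y - 0.1
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    psi = np.sin(X) - (np.sin(Y) - 0.1) - n * (X - Y) ** 2
    i, j = np.unravel_index(np.argmax(psi), psi.shape)
    return grid[i], grid[j]

grid = np.linspace(-np.pi, np.pi, 801)
penalties = []
for n in (1, 10, 100, 1000):
    xn, yn = penalized_max(n, grid)
    penalties.append(n * (xn - yn) ** 2)  # the term in property (iii)
```

For large n the maximizing pair is forced onto the diagonal x = y, which is exactly the mechanism exploited in the doubling-of-variables argument.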
It follows from a variant of Lemma 6.104, see also Theorem 8.3 in the user’s guide [18], that there exist \(X,Y \in \mathbb{S}^{d}\) such that
where
Modifying if necessary ψ n by adding terms of the form χ(x) and χ(y) with supports in \(B_{R/2}^{c}\), we may assume that (t n , x n , y n ) is a global maximum point of ψ n in \(([0,T] \times \mathbb{R}^{d})^{2}\). Since u and v are respectively sub and super-solutions of (6.131), we have
and
The computation of Lemma 6.97 yields
Finally, we consider the difference between the nonlinear terms
The first term on the right-hand side comes from (A.2 iii): we have denoted by m the modulus \(\mathbf{m}_{R}\) which appears in this assumption for R large enough. The last two terms come from the Lipschitz continuity of \(F_{i}\) with respect to its last two variables.
We notice that
because of the Lipschitz continuity of g and that
Now we subtract the viscosity inequalities for u and v: thanks to the above estimates, we can write the obtained inequality in the following way
where we have gathered in \(\omega _{1}(n)\) all the terms of the form \(n\vert x_{n} - y_{n}\vert ^{2}\) and \(\vert x_{n} - y_{n}\vert \); \(\omega _{1}(n) \rightarrow 0\) when n tends to ∞. To conclude we let n → ∞. Since \((t_{n},x_{n}),(t_{n},y_{n}) \rightarrow (t_{0},x_{0})\), we obtain:
and therefore ω is a sub-solution of the desired equation. ■
Now we are going to build suitable smooth super-solutions for the equation (6.136).
Lemma 6.109.
For any δ > 0, there exists a constant \(C_{1} > 0\) such that the function
where
satisfies
for 1 ≤ i ≤ k, where \(t_{1} = T -\delta /C_{1}\).
Proof.
We first estimate the term Kχ, the main point being its dependence on x. For simplicity of notation, we denote below by C all the positive constants which enter into these estimates. These constants depend only on δ and on the bounds on the coefficients of the equations.
We first give estimates on the first and second derivatives of ψ: easy computations yield
and
These estimates imply that, if \(t \in [t_{1},T]\)
and, in the same way
It is worth noticing that, because of our choice of t 1, the above estimates do not depend on C 1.
Since gg ∗ and \(\langle f(x),x\rangle\) grow at most quadratically at infinity, we have
Since ψ(x) ≥ 1 in \(\mathbb{R}^{d}\), by using the Cauchy–Schwarz inequality, it is clear that for \(C_{1}\) large enough the quantity in the brackets is positive and the proof is complete. ■
To conclude the proof, we are going to show that ω = u − v satisfies
for any α > 0. Then we will let α tend to zero.
To prove this inequality, we first remark that because of (6.132)
uniformly for t ∈ [0, T], for some δ > 0. From now on we choose δ in the definition of χ such that this holds. Then \(\vert \omega _{i}\vert -\alpha \chi\) is bounded from above in \([t_{1},T] \times \mathbb{R}^{d}\) for any 1 ≤ i ≤ k and
is attained at some point (t 0, x 0) and for some i 0.
We first remark that, since | ⋅ | is the sup norm in \(\mathbb{R}^{m}\), we have
and \(\vert \omega _{i_{0}}(t_{0},x_{0})\vert = \vert \omega (t_{0},x_{0})\vert \). We may assume without loss of generality that \(\vert \omega _{i_{0}}(t_{0},x_{0})\vert > 0\), otherwise we are done.
There are two cases: either \(\omega _{i_{0}}(t_{0},x_{0}) > 0\) or \(\omega _{i_{0}}(t_{0},x_{0}) < 0\). We treat the first case, the second one is treated in a similar way since the roles of u and v are symmetric.
From the maximum point property, we deduce that
and this inequality can be interpreted as the property for the function \(\omega _{i_{0}}-\phi\) to have a global maximum point at (t 0, x 0), where
Since ω is a viscosity sub-solution of (6.136), if \(t_{0} \in [t_{1},T[\), we have
But the left-hand side of this inequality is nothing but
since \(\omega _{i_{0}}(t_{0},x_{0}) = \vert \omega (t_{0},x_{0})\vert \); so, by Lemma 6.109, we have a contradiction. Therefore t 0 = T and since | ω(T, x) | = 0, we have
Letting α tend to zero, we obtain
Applying the same argument successively on the intervals \([t_{2},t_{1}]\) where \(t_{2} = (t_{1} -\delta /C_{1})^{+}\), then, if \(t_{2} > 0\), on \([t_{3},t_{2}]\) where \(t_{3} = (t_{2} -\delta /C_{1})^{+}\), and so on, we finally obtain that
and the proof is complete.
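The final covering step can be sketched numerically: since each step shortens the interval by the fixed amount δ∕C₁, the times \(t_{k+1} = (t_{k} -\delta /C_{1})^{+}\) reach 0 after finitely many steps, so the comparison proved on each subinterval extends to [0,T] (the values of T, δ, C₁ below are purely illustrative).

```python
def interval_endpoints(T, delta, C1):
    # successive left endpoints t_1 = T - delta/C1, t_{k+1} = (t_k - delta/C1)^+
    step = delta / C1
    times = [T]
    while times[-1] > 0.0:
        times.append(max(times[-1] - step, 0.0))
    return times

pts = interval_endpoints(T=1.0, delta=0.3, C1=1.0)
# finitely many endpoints, terminating at 0
```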
6.5.4 A Third Uniqueness Result
Let D be an open connected bounded subset of \(\mathbb{R}^{d}\) of the form
where \(\phi \in C_{b}^{3}\left(\mathbb{R}^{d}\right)\), \(\left\vert \nabla \phi \left(x\right)\right\vert = 1\), for all \(x \in \mathrm{ Bd}\left(D\right)\).
We define the outward normal derivative of v at the point \(x \in \mathrm{ Bd}\left(D\right)\) by
The aim of this section is to prove uniqueness of a viscosity solution for the following parabolic variational inequality (PVI) with a mixed nonlinear multivalued Neumann–Dirichlet boundary condition:
where the operator \(\mathcal{A}_{t}\) is given by
We will make the following assumptions:
-
(I)
The functions
$$\displaystyle{ \begin{array}{l} f: \left[0,\infty \right) \times \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}, \\ g: \left[0,\infty \right) \times \mathbb{R}^{d} \rightarrow \mathbb{R}^{d\times d}, \\ F: \left[0,\infty \right) \times \overline{D} \times \mathbb{R} \times \mathbb{R}^{d} \rightarrow \mathbb{R}, \\ G: \left[0,\infty \right) \times \mathrm{ Bd}\left(D\right) \times \mathbb{R} \rightarrow \mathbb{R}, \\ \kappa: \overline{D} \rightarrow \mathbb{R}\;\;\;\;\end{array} }$$(6.138)are continuous.
We assume that for all T > 0, there exist \(\alpha \in \mathbb{R}\) and L, β, γ ≥ 0 (which can depend on T) such that \(\forall t \in \left[0,T\right],\;\forall x,\tilde{x} \in \mathbb{R}^{d}\):
$$\displaystyle{ \langle f\left(t,x\right) - f\left(t,\tilde{x}\right), \frac{x -\tilde{ x}} {\vert x -\tilde{ x}\vert }\rangle ^{+} +\big \vert g\left(t,x\right) - g\left(t,\tilde{x}\right)\big\vert \leq L\left\vert x -\tilde{ x}\right\vert,\quad }$$(6.139)and \(\ \forall t \in \left[0,T\right]\), \(\forall x \in \overline{D}\), \(x^{{\prime}}\in \mathrm{ Bd}\left(D\right)\), \(y,\tilde{y} \in \mathbb{R},z,\tilde{z} \in \mathbb{R}^{d}\):
$$\displaystyle{ \begin{array}{rl} \left(i\right)\ &(y -\tilde{ y})\left(F(t,x,y,z) - F(t,x,\tilde{y},z)\right) \leq \alpha \vert y -\tilde{ y}\vert ^{2}, \\ \left(\mathit{ii}\right)\ &\big\vert F(t,x,y,z) - F(t,x,y,\tilde{z})\big\vert \leq \beta \vert z -\tilde{ z}\vert, \\ \left(\mathit{iii}\right)\ &\big\vert F(t,x,y,0)\big\vert \leq \gamma \big (1 + \vert y\vert \big), \\ \left(\mathit{iv}\right)\ &(y -\tilde{ y})\left(G(t,x^{{\prime}},y) - G(t,x^{{\prime}},\tilde{y})\right) \leq \alpha \vert y -\tilde{ y}\vert ^{2}, \\ \left(\mathit{v}\right)\ &\big\vert G(t,x^{{\prime}},y)\big\vert \leq \gamma \left(1 + \vert y\vert \right).\end{array} }$$(6.140)In fact, the conditions (6.140-i) and (6.140-iv) mean that, for all \(t \in \left[0,T\right]\), \(x \in \overline{D}\), \(x^{{\prime}}\in \mathrm{ Bd}\left(D\right)\), \(z \in \mathbb{R}^{d}\),
$$\displaystyle{r\mapsto \alpha r - F\left(t,x,r,z\right)\quad \text{ and}\quad r\mapsto \alpha r - G\left(t,x^{{\prime}},r\right)}$$are increasing functions.
-
(II)
We assume that
$$\displaystyle{ \begin{array}{rl} \left(i\right)\ &\varphi,\psi: \mathbb{R} \rightarrow (-\infty,+\infty ]\text{ are proper convex l.s.c. functions,} \\ \left(\mathit{ii}\right)\ &\varphi \left(y\right) \geq \varphi \left(0\right) = 0\text{ and }\psi \left(y\right) \geq \psi \left(0\right) = 0,\ \forall \;y \in \mathbb{R},\end{array} }$$(6.141)and there exists a positive constant M such that
$$\displaystyle{ \begin{array}{rl} \left(i\right)\ &\Big\vert \varphi \big(\kappa (x)\big)\Big\vert \leq M,\;\;\forall x \in \overline{D}, \\ \left(\mathit{ii}\right)\ &\Big\vert \psi \big(\kappa (x)\big)\Big\vert \leq M,\;\;\forall x \in \mathrm{ Bd}\left(D\right).\end{array} }$$(6.142)
Remark 6.110.
Condition (6.141-ii) is generally satisfied after a translation of both the functions \(\varphi\), ψ and their arguments.
We define
and we will use the same notions with \(\varphi\) replaced by ψ.
At every point \(y \in \mathrm{ Dom}\left(\varphi \right)\) we have
where \(\varphi _{-}^{{\prime}}(y)\) and \(\varphi _{+}^{{\prime}}(y)\) are resp. the left and right derivatives of \(\varphi\) at y.
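For instance, for the convex function \(\varphi (y) = \vert y\vert \), one-sided difference quotients recover \(\varphi _{-}^{{\prime}}(0) = -1 \leq \varphi _{+}^{{\prime}}(0) = 1\), so the subdifferential at 0 is the interval [−1,1] (a numerical sketch; the choice of φ is illustrative).

```python
def right_derivative(phi, y, h=1e-8):
    # forward difference quotient approximating phi'_+(y)
    return (phi(y + h) - phi(y)) / h

def left_derivative(phi, y, h=1e-8):
    # backward difference quotient approximating phi'_-(y)
    return (phi(y) - phi(y - h)) / h

phi = abs                 # convex, kink at 0
dplus = right_derivative(phi, 0.0)
dminus = left_derivative(phi, 0.0)
# dminus = -1.0 <= dplus = 1.0
```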
For the reader’s convenience we recall here from Sect. 5.8 the definition of a viscosity solution of the parabolic variational inequality (6.137). We define
Definition 6.111.
Let \(u: \left[0,\infty \right) \times \overline{D} \rightarrow \mathbb{R}\) be a continuous function, which satisfies \(u(0,x) =\kappa \left(x\right),\;\forall \ x \in \overline{D}\).
-
(a)
u is a viscosity sub-solution of (6.137) if:
$$\displaystyle{\left\vert \ \begin{array}{l} u(t,x) \in \mathrm{ Dom}\left(\varphi \right),\ \ \forall (t,x) \in (0,\infty ) \times \overline{D}, \\ u(t, x) \in \mathrm{ Dom }\left(\psi \right),\ \ \ \forall (t, x) \in (0, \infty ) \times \mathrm{ Bd}\left(D\right), \end{array} \right.}$$and for any \(\left(t,x\right) \in (0,\infty ) \times \overline{D}\), any \((p,q,X) \in \mathcal{P}^{2,+}u(t,x)\):
$$\displaystyle\begin{array}{rcl} \left\{\begin{array}{l} p + \Phi \left(t,x,u(t,x),q,X\right) +\varphi _{ -}^{{\prime}}\left(u(t,x)\right) \leq 0\ \;if\;x \in D, \\ \min \Big\{p + \Phi \left(t,x,u(t,x),q,X\right) +\varphi _{ -}^{{\prime}}\left(u(t,x)\right), \\ \quad \quad \quad \quad \Gamma (t,x,u(t,x),q) +\psi _{ -}^{{\prime}}\left(u(t,x)\right)\Big\} \leq 0\;\ if\;x \in \mathrm{ Bd}\left(D\right).\end{array} \right.& & {}\end{array}$$(6.143) -
(b)
The viscosity super-solution of (6.137) is defined in a similar manner as above, with \(\mathcal{P}^{2,+}\) replaced by \(\mathcal{P}^{2,-}\), the left derivative replaced by the right derivative, min by max, and the inequalities ≤ by ≥ .
-
(c)
A continuous function \(u: \left[0,\infty \right) \times \overline{D} \rightarrow \mathbb{R}\) is a viscosity solution of (6.137) if it is both a viscosity sub- and super-solution.
We now present the main result of this section.
Theorem 6.112.
Let the assumptions (6.138)–(6.142) be satisfied. If moreover the function
and there exists a continuous function \(\mathbf{m}: [0,\infty ) \rightarrow [0,\infty )\) , \(\mathbf{m}\left(0\right) = 0\) , such that
then the parabolic variational inequality (6.137) has at most one viscosity solution.
Proof.
It is sufficient to prove uniqueness on a fixed arbitrary interval \(\left[0,T\right]\).
Also, it suffices to prove that if u is a sub-solution and v is a super-solution such that \(u(0,x) = v(0,x) =\kappa \left(x\right)\), \(x \in \overline{D}\), then u ≤ v.
Clearly by adding a constant we may assume that ϕ(x) ≥ 0 on \(\overline{D}\).
For \(\lambda =\alpha ^{+} + 1\) and \(\delta,\varepsilon,c > 0\) let
Let
Clearly \(r \rightarrow \tilde{ \Phi }\left(t,x,r,q,X\right)\) is an increasing function for all \(\left(t,x,q,X\right) \in \left[0,T\right] \times \mathbb{R}^{d} \times \mathbb{R}^{d} \times \mathbb{S}^{d}\). Moreover, since
then for any δ > 0, we can choose \(c = c\left(\delta \right) > 0\) such that c(δ) → 0 as δ → 0 and for all δ, \(\varepsilon > 0\),
We will prove that \(\bar{u} \leq \bar{ v}\) for all δ > 0, \(\varepsilon > 0\), c = c(δ). This will imply u ≤ v on \([0,T) \times \overline{D}\) by letting \(\delta,\varepsilon \rightarrow 0\). The result will follow, since T is arbitrary.
Using the two last properties, assumption (6.144) and the fact that the left and right derivative of \(\varphi\) and ψ are increasing we infer that \(\bar{u}\) satisfies in the viscosity sense:
Analogously we see that \(\bar{v}\) satisfies in the viscosity sense:
For simplicity of notation we write from now on u, v instead of \(\bar{u},\bar{v}\) respectively.
We now assume that
By an argument similar to that of Theorem 6.103, see Theorem 4.2 in [56] for more details, there exists \((\hat{t},\hat{x}) \in \left(0,T\right] \times \mathrm{ Bd}\left(D\right)\) such that
We now let
where
Let \(\left(t_{n},x_{n},y_{n}\right)\) be a maximum point of ψ n .
We observe that \(u\left(t,x\right) - v\left(t,x\right) -\left\vert x -\hat{ x}\right\vert ^{4} -\vert t -\hat{ t}\vert ^{4}\) has \((\hat{t},\hat{x})\) as its unique maximum point. Then, by Lemma 6.101, we have that as n → ∞
But the domain D satisfies the uniform exterior sphere condition:
where \(S\left(x,r_{0}\right)\) denotes the closed ball of radius r 0 centered at x.
Then
or equivalently
If \(x_{n} \in \mathrm{ Bd}\left(D\right)\), we have, using the form of ρ n given by (6.150) and (6.152), that
Then (6.151) and the lower semicontinuity property of \(\psi _{-}^{{\prime}}\) imply that along a subsequence {x n } which belongs to ∂ D:
Analogously if y n ∈ ∂ D we infer
From Lemma 6.104 we deduce that there exists
such that
and
where \(A = D_{x,y}^{2}\rho _{n}\left(t_{n},x_{n},y_{n}\right)\). From (6.150) we have
Then (6.155) becomes
where δ n → 0.
Then from (6.147), (6.148) together with (6.154) and (6.153), we deduce that for n large enough
and
Subtracting the last two inequalities, we deduce that
By (6.149) and (6.151) there exists an N ≥ 1 such that for all n ≥ N, the above holds together with
and consequently
Combining this with (6.157), we deduce that
where \(\omega _{n} \rightarrow 0\) as n → ∞. Note that we have used the assumptions (6.145), (6.151) and (6.158), the fact that \(r\mapsto \lambda r - F(t,x,r,z)\) is increasing, and the Lipschitz continuity of F with respect to its last variable.
From (6.156), \(\forall \) \(q,\tilde{q} \in \mathbb{R}^{d}\),
Hence by the same computation as in Lemma 6.97 we obtain
and consequently taking the limit in the above set of inequalities yields
which is a contradiction.
Then
■
6.6 Annex E: Hints for Some Exercises
Chapter 1
Exercise 1.7
By Proposition 1.34 we have
Setting here \(g\left(u\right) = \mathbf{1}_{(-\infty,a]}\left(u\right)\), the second assertion follows.
Exercise 1.15: Let N > 0 and let \((\varepsilon _{n})_{n}\) be a sequence with \(\varepsilon _{n} \searrow 0\) as n → ∞. Then
Exercise 1.16: Let us write \(S_{n}^{\left(p\right)} = S_{\Delta _{n}}^{\left(p\right)}\left(B_{ \cdot };\left[s,t\right]\right)\). The results are consequences of the following inequalities (see Proposition 1.86 for the first one) combined with Proposition 1.14 and Proposition 1.7:
and
where
is the modulus of continuity of \(\left\{B_{u}: u \in \left[s,t\right]\right\}\).
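The modulus of continuity can be sketched on a simulated path (the time grid, horizon and random seed below are arbitrary): for a sampled path, \(m(h) =\sup _{\vert u-v\vert \leq h}\vert B_{u} - B_{v}\vert \) is nondecreasing in h and small for small h.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
dt = 1.0 / N
# sampled Brownian path on [0, 1] via cumulative Gaussian increments
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), N))])

def modulus(B, lag):
    # sup of |B_u - B_v| over grid points at most `lag` steps apart
    return max(np.max(np.abs(B[k:] - B[:-k])) for k in range(1, lag + 1))

m_small, m_large = modulus(B, 10), modulus(B, 200)
# m_small <= m_large: the sup is taken over a larger set of pairs
```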
Exercise 1.17: Applying the inequality (1.25) with \(\alpha = \frac{1} {2} - \frac{\varepsilon } {2}\) and \(p = \dfrac{2} {\varepsilon }\), we deduce that for all \(s,t \in [0,T]\)
where
Let \(1 \leq q \leq \frac{2} {\varepsilon } \leq p\). By Lyapunov’s inequality and Minkowski’s inequality (1.24) from Exercise 1.2 we obtain
since
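Lyapunov's inequality \(\left(\mathbb{E}\vert X\vert ^{q}\right)^{1/q} \leq \left(\mathbb{E}\vert X\vert ^{p}\right)^{1/p}\) for q ≤ p can be checked on a sample; it holds exactly for the empirical measure (the distribution and the exponents below are arbitrary illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.abs(rng.normal(size=100_000))  # |X| for a standard normal sample

def lp_norm(x, p):
    # empirical L^p norm (E|X|^p)^{1/p}
    return np.mean(x ** p) ** (1.0 / p)

norm_q = lp_norm(x, 2.0)  # q = 2
norm_p = lp_norm(x, 4.0)  # p = 4 >= q
# norm_q <= norm_p by the power-mean inequality
```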
Exercise 1.19: Deduce from the proof of Theorem 1.40 that for any 0 < δ < b∕a, there exists a constant K = K(M, T, a, b, δ) such that for all \(\varepsilon,\lambda > 0\),
and conclude that (ii) in Theorem 1.46 is satisfied.
Exercise 1.20 \(\left(2\right)\) By Lemma 1.73 and Proposition 1.65, we infer that \((U_{t}^{\left(\lambda \right)})_{t\in \left[0,T\right]}\) and \((Z_{t}^{\left(\lambda \right)})_{t\in \left[0,T\right]}\) are continuous martingales.
\(\left(3\right)\) Let the stopping time \(\tau _{n} =\inf \left\{t \geq 0: \left\vert M_{t}\right\vert + \langle M\rangle _{t} \geq n\right\}\). Then \(\left\{Z_{t\wedge \tau _{n}}^{\left(\lambda \right)};t \geq 0\right\}\) is a martingale and for all 0 ≤ s ≤ t
\(\left(5\right)\) By Proposition 1.59 with \(\varphi \left(x\right) = e^{ax}\), \(\left\{e^{aM_{t\wedge \theta }};t \geq 0\right\}\) is a sub-martingale and the inequality follows by Doob’s inequality (Theorem 1.60) and Hölder’s inequality.
\(\left(6\right)\) The inequality yields that \(\left\{Z_{t\wedge \theta _{n}}^{\left(\lambda \right)};n \in \mathbb{N}^{{\ast}}\right\}\) is uniformly integrable and consequently \(\mathbb{E}\ Z_{t}^{\left(\lambda \right)} =\lim _{n\rightarrow \infty }\mathbb{E}Z_{t\wedge \theta _{n}}^{\left(\lambda \right)} = 1\).
\(\left(7\right)\) In the inequality from \(\left(6\right)\) with \(A = \Omega \), one passes to the limit as n → ∞ and then \(\lambda \nearrow 1\).
\(\left(8\right)\) By the Cauchy–Schwarz inequality and \(\mathbb{E}\,Z_{T} \leq 1\), we have \(\mathbb{E}\left(e^{\frac{1} {2} M_{T}}\right) = \mathbb{E}\left(\sqrt{Z_{T}}\,e^{\frac{1} {4} \langle M\rangle _{T}}\right) \leq \left(\mathbb{E}\ Z_{T}\right)^{1/2}\left(\mathbb{E}\left(e^{\frac{1} {2} \langle M\rangle _{T}}\right)\right)^{1/2} \leq \left(\mathbb{E}\left(e^{\frac{1} {2} \langle M\rangle _{T}}\right)\right)^{1/2} < \infty \).
Chapter 2
Exercise 2.1: \(\left(\Leftarrow \right)\): From the theory of the Riemann–Stieltjes integral we know that if \(g \in \mathit{BV }\left[0,T\right]\), then \(S_{n}\left(f\right)\) converges (to the Riemann–Stieltjes integral \(\int _{0}^{T}f\left(t\right)dg\left(t\right)\)).
\(\left(\Rightarrow \right)\): Let \(S_{n}\left(f\right)\) be convergent for all \(f \in C\left[0,T\right]\). Then \(S_{n}: C\left[0,T\right] \rightarrow \mathbb{R}\) is a bounded linear operator such that
and by the Banach–Steinhaus Theorem
where \(\left\Vert S_{n}\right\Vert =\sup \left\{\left\vert S_{n}\left(f\right)\right\vert: \left\vert \!\left\vert \!\left\vert f\right\vert \!\right\vert \!\right\vert _{T} \leq 1\right\}\). For a fixed n we can construct \(h_{n} \in C\left[0,T\right]\) such that \(h_{n}\left(t_{i}^{n}\right) =\mathrm{ sign}\left\{g\left(t_{i+1}^{n}\right) - g\left(t_{i}^{n}\right)\right\}\) and \(\left\vert \!\left\vert \!\left\vert h_{n}\right\vert \!\right\vert \!\right\vert _{T} = 1\). Hence
and as a consequence g is of finite variation.
Note (Banach–Steinhaus Theorem).
Let X be a Banach space and let Y be a normed linear space. Let S i : X → Y, i ∈ I, be a family of bounded linear operators. If for each x ∈ X the set \(\left\{S_{i}\left(x\right): i \in I\right\}\) is bounded then the set \(\left\{\left\Vert S_{i}\right\Vert: i \in I\right\}\) is bounded.
Remark: This is not a contradiction since the subsequence \(\left\{n_{k}\right\}\) depends on f.
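The construction of \(h_{n}\) can be illustrated numerically: for a fixed partition, a function taking the values \(h_{n}\left(t_{i}^{n}\right) =\mathrm{ sign}\left\{g\left(t_{i+1}^{n}\right) - g\left(t_{i}^{n}\right)\right\}\), with sup norm 1, turns the Riemann–Stieltjes sum into the variation of g along the partition (the function g and the partition below are illustrative choices).

```python
import numpy as np

t = np.linspace(0.0, 1.0, 11)        # a partition of [0, 1]
g = np.cos(3 * np.pi * t)            # some continuous integrator g
dg = np.diff(g)                      # increments g(t_{i+1}) - g(t_i)
h_at_ti = np.sign(dg)                # values h_n(t_i^n), so |h_n| <= 1
S_n = float(np.sum(h_at_ti * dg))    # Riemann-Stieltjes sum S_n(h_n)
variation = float(np.sum(np.abs(dg)))  # variation of g along the partition
# S_n equals the variation, which is the key step in the proof
```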
Exercise 2.3: If \(\mathcal{E}\) is the linear subspace of \(L^{2}(\mathbb{R}_{+})\) consisting of those functions f of the form:
then H[B] is the closure of \(\{\mathbb{B}(f),\,f \in \mathcal{E}\}\), which coincides with \(\{\mathbb{B}(f),\,f \in L^{2}(\mathbb{R}_{+})\}\). Moreover the set \(\{B_{t} = \mathbb{B}(\mathbf{1}_{[0,t]}),\,t > 0\}\) is total in H[B].
Exercise 2.4: Let \(s \in \left[0,T\right]\). We have
Since \(\mathbb{E}\vert B_{t}\vert = \sqrt{\frac{2t} {\pi }}\), it follows that \(\int _{0}^{\infty }\vert f^{{\prime}}(t)\vert \vert B_{t}\vert \,\mathit{dt} < \infty \quad a.s.\), and
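The identity \(\mathbb{E}\vert B_{t}\vert = \sqrt{2t/\pi }\) used above is easy to verify by Monte Carlo simulation (the value of t and the sample size below are arbitrary).

```python
import math
import numpy as np

rng = np.random.default_rng(42)
t = 2.0
# B_t ~ N(0, t), so |B_t| is half-normal with mean sqrt(2t/pi)
samples = np.abs(rng.normal(0.0, math.sqrt(t), 1_000_000))
empirical = float(samples.mean())
exact = math.sqrt(2 * t / math.pi)
# empirical and exact agree to Monte Carlo accuracy
```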
Exercise 2.5: Note that
and for \(x \in \left[1,2\right]\)
and therefore for all \(x \in \left[1,2\right]\),
The relation (2.67) follows by taking the limit as \(\varepsilon \rightarrow 0\) in Itô’s formula for \(\varphi _{\varepsilon }\left(X_{t}\right)\).
Chapter 3
Exercise 3.1: Consider the equation
By Theorem 3.27, it has a unique solution \(X \in S_{d}^{0}\) and from the inequality (3.18) we clearly have (3.131)
where \(X \in S_{d}^{0}\) is the solution of the Eq. (6.159). The inequality (3.132) shows us that \(Y _{t} = e^{at} \tfrac{\left\vert X_{t}\right\vert ^{p}} {\left(1+\delta \left\vert X_{t}\right\vert ^{2}\right)^{p/2}}\) is a super-martingale, and then (3.134) follows.
Exercise 3.3: First deduce the following from the stochastic Gronwall inequalities (Annex C)
Exercise 3.9: \(\left(1i\right)\) We clearly have
for all C, t ≥ 0 if and only if 0 ≤ b < 2.
\(\left(1\mathit{ii}\right)\) If 0 ≤ a < 2, then by Jensen’s inequality
If − 1 < a < 0, then by Corollary 2.30 we have
Hence by (2.62-b)
\(\left(1\mathit{iii}\right)\)
\(\left(1\mathit{iv}\right)\) Observe that for every α ∈ ]0, 1[, there exists a C α > 0 such that
and consequently \(\left(\mathit{iii}\right)\) follows from \(\left(\mathit{ii}\right)\).
\(\left(2\right)\). Existence follows in both cases from Lemma 2.49 and Girsanov’s Theorem 2.51. Uniqueness in law on \(\left(\Omega,\mathcal{F}_{n\wedge \tilde{T}_{n}}\right)\) (resp. \(\left(\Omega,\mathcal{F}_{n\wedge \hat{T}_{n}}\right)\)) follows again from Girsanov’s Theorem, where
It remains to note that \(\tilde{T}_{n} \rightarrow \infty \), \(\hat{T}_{n} \rightarrow \infty \), as n → ∞.
Exercise 3.10 The function \(F: \mathbb{R} \rightarrow \mathbb{R}\), \(F\left(x\right) = f\left(x\right)\sqrt{\left\vert x\right\vert }\mathrm{sign}\left(x\right)\) is locally monotone and \(xF\left(x\right) \leq 0\), but it is not locally Lipschitz.
Chapter 4
Exercise 4.1
-
1.
The existence and the uniqueness of the solution \(X^{n} \in S^{2}\left[0,T\right]\) follows from Theorem 3.17; by the comparison result from Proposition 3.12 we have \(X_{t}^{n+1} \geq X_{t}^{n}\), for all \(t \in \left[0,T\right]\), \(\mathbb{P}\text{ -}a.s.\)
-
2.
Let L and ℓ be the Lipschitz constants of f and, respectively, g. We have
$$\displaystyle{X_{t}^{n} - 1 = \left(x - 1\right) +\int _{ 0}^{t}\mathit{dK}_{ s}^{n} +\int _{ 0}^{t}g(X_{ s}^{n})\mathit{dB}_{ s},}$$with \(\mathit{dK}_{s}^{n} = \left[f(X_{s}^{n}) + n(X_{s}^{n})^{-}\right]\mathit{ds}\) and \(G_{s}^{n} = g(X_{s}^{n})\). Since
$$\displaystyle{\mathit{dD}_{t}^{n} + \left(X_{ t}^{n} - 1\right)\mathit{dK}_{ t}^{n} + \left\vert G_{ t}^{n}\right\vert ^{2}\mathit{dt} \leq \mathit{dR}_{ t} + \left\vert X_{t}^{n} - 1\right\vert ^{2}\mathit{dV }_{ t},}$$where \(D_{t}^{n} = n\int _{0}^{t}\!\left[(X_{ s}^{n})^{-}\right]^{2}\mathit{ds} + n\int _{ 0}^{t}(X_{ s}^{n})^{-}\mathit{ds}\), \(R_{t}=\left(\dfrac{1} {2}\left\vert f\left(1\right)\right\vert ^{2}+2\left\vert g\left(1\right)\right\vert ^{2}\right)t\) and \(V _{t}=\left(L + \dfrac{1} {2} + 2\ell^{2}\right)t\), it follows by (6.78) (with p = 2 and λ = 1∕18) that
$$\displaystyle{\mathbb{E}\sup _{t\in \left[0,T\right]}\left\vert X_{t}^{n} - 1\right\vert ^{2} + \mathbb{E}\left(\int _{ 0}^{T}n\left[(X_{ t}^{n})^{-}\right]^{2}\mathit{dt}\right) + \mathbb{E}\left(\int _{ 0}^{T}n(X_{ t}^{n})^{-}\mathit{dt}\right) \leq C_{ 2}.}$$ -
3.
Since, moreover, \(\left(X_{t}^{n}\right)^{-}\geq \left(X_{t}^{n+1}\right)^{-}\) for all \(t \in \left[0,T\right]\), \(\mathbb{P}\text{ -a.s.}\), it follows that \(\lim _{n\rightarrow \infty }\left(X_{t}^{n}\right)^{-} = 0\), \(d\mathbb{P} \otimes \mathit{dt} - a.e\). By Itô’s formula for \(\left[\left(X_{t}^{n}\right)^{-}\right]^{2}\) (see Proposition 2.35), we deduce \(\mathbb{E}\sup \limits _{0\leq t\leq T}\ \vert \left(X_{t}^{n}\right)^{-}\vert ^{2} \rightarrow 0\), as n → ∞.
-
4.
Since
$$\displaystyle\begin{array}{rcl} & & \left(X_{t}^{n} - X_{ t}^{m}\right)\left[f\left(X_{ t}^{n}\right) - f\left(X_{ t}^{m}\right) + n(X_{ t}^{n})^{-}- m(X_{ t}^{m})^{-}\right]\mathit{dt} {}\\ & & \qquad \quad + \left\vert g\left(X_{t}^{n}\right) - g\left(X_{ t}^{m}\right)\right\vert ^{2}\mathit{dt} {}\\ & & \qquad \leq \left(n + m\right)\left[(X_{t}^{n})^{-}(X_{ t}^{m})^{-}\right]\mathit{dt} + \left(L +\ell ^{2}\right)\left\vert X_{ t}^{n} - X_{ t}^{m}\right\vert \mathit{dt}, {}\\ \end{array}$$we see, by (3.138), that
$$\displaystyle\begin{array}{rcl} \mathbb{E}\sup _{t\in \left[0,T\right]}\left\vert X_{t}^{n} - X_{ t}^{m}\right\vert ^{2}& \leq & C\mathbb{E}\int _{ 0}^{T}\left(n + m\right)\left[(X_{ t}^{n})^{-}(X_{ t}^{m})^{-}\right]\mathit{dt} {}\\ & \leq & C\mathbb{E}\left(\mathbb{E}\sup _{t\in \left[0,T\right]}\left[(X_{t}^{m})^{-}\right]^{2}\right)^{1/2}\left[\mathbb{E}\left(\int _{ 0}^{T}n(X_{ t}^{n})^{-}\mathit{dt}\right)^{2}\right]^{1/2} {}\\ & & \,+\,C\mathbb{E}\left(\mathbb{E}\sup _{t\in \left[0,T\right]}\left[(X_{t}^{n})^{-}\right]^{2}\right)^{1/2}\left[\mathbb{E}\left(\!\int _{ 0}^{T}m(X_{ t}^{m})^{-}\mathit{dt}\right)^{2}\right]^{1/2} {}\\ & & \rightarrow 0,\quad \text{ as }n,m \rightarrow \infty. {}\\ \end{array}$$ -
9.
It is sufficient to prove that the SDE
$$\displaystyle{X_{t} = x + \int _{0}^{t}\left(f\left(X_{ s}\right) + \left[f\left(0\right)\right]^{-}\mathbf{1}_{ X_{s}=0}\right)\mathit{ds} + \int _{0}^{t}g\left(X_{ s}\right)\mathit{dB}_{s}}$$has a unique positive solution \(X \in S^{2}\left[0,T\right]\). The uniqueness of positive solutions follows from
$$\displaystyle{\begin{array}{r} \left(X_{s} -\hat{ X}_{s}\right)\left[f\left(X_{s}\right) + \left[f\left(0\right)\right]^{-}\mathbf{1}_{X_{s}=0} - f\left(\hat{X}_{s}\right) -\left[f\left(0\right)\right]^{-}\mathbf{1}_{\hat{X}_{s}=0}\right] \\ + \left\vert g\left(X_{s}\right) - g\left(\hat{X}_{s}\right)\right\vert ^{2} \\ {l} { \leq \left(L +\ell ^{2}\right)\left\vert X_{s} -\hat{ X}_{s}\right\vert ^{2}}\end{array} }$$and Corollary 6.77. The existence of a positive solution follows from the approximating equation
$$\displaystyle{X_{t}^{\varepsilon } = x + \int _{ 0}^{t}\left[f\left(X_{ s}^{\varepsilon }\right) + \left[f\left(0\right)\right]^{-}\left(1 -\frac{\left\vert X_{s}^{\varepsilon }\right\vert } {\varepsilon } \right)^{+}\right]\mathit{ds} + \int _{ 0}^{t}g\left(X_{ s}^{\varepsilon }\right)\mathit{dB}_{ s}.}$$Note that \(\tilde{X}^{\varepsilon } = 0\) is the unique solution of the SDE
$$\displaystyle{\tilde{X}_{t}^{\varepsilon } = 0 + \int _{ 0}^{t}\left[f\left(\tilde{X}_{ s}^{\varepsilon }\right) - f\left(0\right)\left(1 -\frac{\left\vert \tilde{X}_{s}^{\varepsilon }\right\vert } {\varepsilon } \right)^{+}\right]\mathit{ds} + \int _{ 0}^{t}g\left(\tilde{X}_{ s}^{\varepsilon }\right)\mathit{dB}_{ s},}$$and \(f\left(0\right) + \left[f\left(0\right)\right]^{-}\left(1 -\frac{\left\vert 0\right\vert } {\varepsilon } \right)^{+} \geq 0\), which yields (by Proposition 3.12) \(X_{t}^{\varepsilon } \geq 0\).
-
10.
By Remark 2.27 we have for all t ≥ 0,
$$\displaystyle{0 = \int _{0}^{t}\mathbf{1}_{ X_{s}=y}g^{2}\left(X_{ s}\right)\mathit{ds} = g^{2}\left(y\right)\int _{ 0}^{t}\mathbf{1}_{ X_{s}=y}\mathit{ds}.}$$
Exercise 4.2: On each interval \(\mathbf{I}_{i}^{n}\) the equations from the scheme (4.149) have unique adapted solutions \(U^{n}\), \(V^{n}\) and \(Y^{n}\), respectively; \(U^{n}\) is absolutely continuous; \(H_{\cdot }^{n} = F_{1}(\cdot,U_{\cdot }^{n}) - \dfrac{d} {\mathit{dt}}U_{\cdot }^{n} \in L^{1}(\Omega \times ]0,T[)\). Let \(K_{t}^{n} = \int _{0}^{t}H_{ s}^{n}\mathit{ds}\). To prove (4.150) the steps are:
-
1.
\(\begin{array}{c} \mathbb{E}\left(\left\vert U_{t}^{n}\right\vert ^{4} + \left\vert V _{t}^{n}\right\vert ^{4} + \left\vert Y _{t}^{n}\right\vert ^{4} + \left\vert X_{t}^{n}\right\vert ^{4} + \left\updownarrow K^{n}\right\updownarrow _{t}^{2}\right) \leq C\left(1 + \mathbb{E}\vert H_{0}\vert ^{4}\right);\end{array}\)
-
2.
\(\begin{array}{c} \mathbb{E}\sup \limits _{t\in \left[0,T\right]}\left\vert V _{t}^{n} - U_{t}^{n}\right\vert ^{4} \leq \dfrac{C} {n^{3}}\left(1 + \mathbb{E}\vert H_{0}\vert ^{4}\right);\end{array}\)
-
3.
\(\begin{array}{c} \mathbb{E}\sup \limits _{t\in \left[0,T\right]}\left\vert Y _{t}^{n} - U_{t}^{n}\right\vert ^{4} + \mathbb{E}\sup \limits _{t\in \left[0,T\right]}\vert X_{t}^{n} - U_{t}^{n}\vert ^{4} \leq \dfrac{C} {n} \left(1 + \mathbb{E}\vert H_{0}\vert ^{4}\right);\end{array}\)
-
4.
\(\begin{array}{c} \mathbb{E}\sup \limits _{t\in \left[0,T\right]}\left\vert Y _{t}^{n} - U_{ t}^{n}\right\vert ^{2} \leq \dfrac{C} {\sqrt{n}}\left(1 + \mathbb{E}\vert H_{0}\vert ^{4}\right); \end{array}\)
-
5.
Let \(t \in \mathbf{I}_{i}^{n}\). By Itô’s formula for \(\left\vert Y _{t}^{n} - X_{t}\right\vert ^{2}\) and the above estimates we obtain (4.150).
Exercise 4.3: In the same manner as the estimate from Proposition 4.8 is obtained, we derive using Proposition 6.74 the boundedness of approximating quantities. Then estimating, via the same Proposition 4.8, \(X^{\varepsilon } - X\) and \(\hat{X}^{\varepsilon } - X\) and using Proposition 6.9 the convergence results follow.
Exercise 4.4: For the first four questions, choose the control in feedback form as follows:
For the last question, choose
Exercise 4.5: The equivalence follows easily from Example 4.79.
Exercise 4.6: 1&2 Let \(\hat{x} \in \varPi _{E}\left(x\right)\) and \(\hat{y} \in \varPi _{E}\left(y\right)\). Then
3. Let 0 < λ < 1 and \(x,y \in \mathbb{R}^{d}\). Put z = λ x + (1 −λ)y. Then there exists a \(\hat{z} \in \, E\) such that \(d_{K}(z) =\Vert z -\hat{ z}\Vert\). Hence
4. According to Alexandrov’s Theorem (1939),Footnote 1 the function \(x\mapsto \left\vert x\right\vert ^{2} - d_{K}^{2}(x)\) is almost everywhere twice differentiable, consequently so is \(x\mapsto d_{K}^{2}(x)\).
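Step 3 above, the convexity of the distance function to a convex set, can be checked numerically, here for the closed unit ball of \(\mathbb{R}^{2}\) (an illustrative choice of the set E; the sampling is arbitrary).

```python
import numpy as np

def dist_to_unit_ball(x):
    # distance to the closed unit ball: max(|x| - 1, 0)
    return max(float(np.linalg.norm(x)) - 1.0, 0.0)

rng = np.random.default_rng(7)
ok = True
for _ in range(1000):
    x = rng.normal(size=2) * 3
    y = rng.normal(size=2) * 3
    lam = float(rng.uniform())
    z = lam * x + (1 - lam) * y
    # convexity: d(lam*x + (1-lam)*y) <= lam*d(x) + (1-lam)*d(y)
    lhs = dist_to_unit_ball(z)
    rhs = lam * dist_to_unit_ball(x) + (1 - lam) * dist_to_unit_ball(y)
    ok = ok and (lhs <= rhs + 1e-12)
```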
Chapter 5
Exercise 5.1
Let p ≥ 2, δ ≥ 0 and consider the Banach space
where
and the complete metric space \(\mathbb{V}_{m,k}^{p}\left(0,T\right) =\bigcap _{\delta \geq 0}\mathbb{V}_{m,k}^{\delta,p}\left(0,T\right)\).
Using Lemma 6.58 we show that the mapping \(\Gamma: \mathbb{V}_{m,k}^{p}\left(0,T\right) \rightarrow \mathbb{V}_{m,k}^{p}\left(0,T\right)\) given by
has a unique fixed point in \(\mathbb{V}_{m,k}^{p}\left(0,T\right)\). First \(\Gamma \) is well defined because by Corollary 2.45 \(\mathbb{E}\ \sup _{t\in [0,T]}e^{p\delta V _{t}}\left\vert Y _{t}\right\vert ^{p} < \infty \) and by the inequality
and Proposition 5.2 we get \(\left\Vert \left(Y,Z\right)\right\Vert _{\delta V }^{p} < \infty \) for all δ > 1.
From the inequality
and Proposition 5.2 we obtain
which tells us that there exists a \(\delta _{0} > 1\) such that \(\Gamma \) is a strict contraction on \(\left(\mathbb{V}_{m,k}^{p}\left(0,T\right),\left\Vert \ \cdot \ \right\Vert _{\delta V }\right)\) for all \(\delta \geq \delta _{0}\); consequently, by Lemma 6.58, \(\Gamma \) has a unique fixed point in \(\mathbb{V}_{m,k}^{p}\left(0,T\right)\).
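The weighted-norm trick used in this fixed-point argument has a simple deterministic analogue which may help fix ideas. The sketch below (the names `f`, `gamma`, `weighted_norm` and the ODE in place of the BSDE are ours, not the book's) checks numerically that the Picard map becomes a strict contraction for the norm \(\sup _{t}e^{-\delta t}\left\vert y(t)\right\vert \) as soon as δ exceeds the Lipschitz constant, so Banach's fixed point theorem applies:

```python
import numpy as np

# Deterministic analogue of the weighted-norm contraction argument:
# for the Picard map (Gamma y)(t) = y0 + int_0^t f(y(s)) ds with f
# L-Lipschitz, the norm ||y||_delta = sup_t e^{-delta t}|y(t)| makes
# Gamma a strict contraction once delta > L.
L = 3.0                       # Lipschitz constant of f
f = lambda y: L * np.sin(y)   # an L-Lipschitz nonlinearity
T, n = 1.0, 2000
t = np.linspace(0.0, T, n + 1)
dt = T / n
y0 = 1.0

def gamma(y):
    """Picard map: y0 + cumulative (trapezoidal) integral of f(y)."""
    integrand = f(y)
    integral = np.concatenate(
        ([0.0], np.cumsum((integrand[:-1] + integrand[1:]) / 2) * dt))
    return y0 + integral

def weighted_norm(y, delta):
    """sup_t e^{-delta t} |y(t)| on the grid."""
    return np.max(np.exp(-delta * t) * np.abs(y))

delta = 2 * L                 # any delta > L works
y, z = np.zeros(n + 1), np.full(n + 1, 0.5)
ratio = (weighted_norm(gamma(y) - gamma(z), delta)
         / weighted_norm(y - z, delta))
assert ratio < 1.0            # Gamma is a strict contraction in this norm

# fixed-point iteration converges to the solution of y' = f(y), y(0) = y0
for _ in range(60):
    y = gamma(y)
residual = weighted_norm(gamma(y) - y, delta)
assert residual < 1e-8
```

The same mechanism is at work above: the exponential weight \(e^{p\delta V_{t}}\) absorbs the Lipschitz constant of the generator once δ is large enough, which is exactly how \(\delta _{0}\) arises.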
Exercise 5.3: Since
we obtain, by Proposition 5.2, with N = 0, V = 0, λ = 0, that
Exercise 5.5: We apply the existence and uniqueness result from Theorem 5.27 and the comparison result from Theorem 5.33 for the BSDE
with 0 ≤ η ≤ 1.
Exercise 5.7: Assume that E is not convex. We shall show there exists a bounded continuous function \(g: \mathbb{R}^{k} \rightarrow E\) such that
If E is not convex, we can find \(a,b \in \mathrm{Bd}\left(E\right)\) such that \(a\neq b\) and \(a +\lambda \left(b - a\right)\notin E\) for all \(\lambda \in \left]0,1\right[\). Let \(\delta = \frac{1}{4}\,d_{E}\!\left(\frac{a+b}{2}\right) > 0\). Define \(g: \mathbb{R}^{k} \rightarrow E\) by \(g\left(x^{\left(1\right)},x^{\left(2\right)},\ldots,x^{\left(k\right)}\right) = a + \left(b - a\right)\mathbf{1}_{(-\infty,1]}\left(x^{\left(1\right)}\right)\). By Exercise 1.7 we have
where
We also have
if \(t \in \,[T - \frac{\delta } {M},T],\) where M > 0 denotes the bound of F.
Then for all \(t \in \,[T - \frac{\delta } {M},T],\)
Therefore
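A small numerical illustration of the construction in Exercise 5.7, for the hypothetical choice of E as the unit circle in \(\mathbb{R}^{2}\) (our choice, not the book's): the circle is a nonconvex closed set, every strict convex combination of \(a = (1,0)\) and \(b = (-1,0)\) lies outside it, and the sampled distance function recovers \(\delta = \frac{1}{4}d_{E}\left(\frac{a+b}{2}\right) = \frac{1}{4}\):

```python
import numpy as np

# E = unit circle in R^2, sampled densely; d_E is approximated by the
# minimum distance to the sample points.
theta = np.linspace(0.0, 2 * np.pi, 100_000, endpoint=False)
E = np.stack([np.cos(theta), np.sin(theta)], axis=1)

def d_E(x):
    """Approximate distance from x to the set E."""
    return np.min(np.linalg.norm(E - x, axis=1))

a, b = np.array([1.0, 0.0]), np.array([-1.0, 0.0])

# every strict convex combination a + lam*(b - a) lies outside E
for lam in (0.25, 0.5, 0.75):
    assert d_E(a + lam * (b - a)) > 0.1

delta = d_E((a + b) / 2) / 4
assert abs(delta - 0.25) < 1e-6   # d_E(midpoint) = 1, so delta = 1/4

def g(x):
    """The E-valued map from the hint, switching on x^(1)."""
    return a + (b - a) * float(x[0] <= 1.0)

assert np.allclose(g(np.array([0.0, 0.0])), b)   # x^(1) <= 1 -> b
assert np.allclose(g(np.array([2.0, 0.0])), a)   # x^(1) >  1 -> a
```

Any other nonconvex closed set with two boundary points whose connecting segment leaves the set would serve equally well; the circle merely makes δ explicit.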
Notes
1. Alexandrov, Alexandr Danilovich: The existence almost everywhere of the second differential of a convex function and some associated properties of convex surfaces (in Russian), Učenye Zapiski Leningrad. Gos. Univ. Ser. Math. 37 (1939), no. 6, 3–35.
© 2014 Springer International Publishing Switzerland
Pardoux, E., Răşcanu, A. (2014). Annexes. In: Stochastic Differential Equations, Backward SDEs, Partial Differential Equations. Stochastic Modelling and Applied Probability, vol 69. Springer, Cham. https://doi.org/10.1007/978-3-319-05714-9_6
Print ISBN: 978-3-319-05713-2. Online ISBN: 978-3-319-05714-9.