6.1 Introduction

In this chapter, we collect several results which are used throughout the book, but whose presentation we have preferred to postpone until now. The first section presents notation and elementary results on matrices. The second section presents some elements of nonlinear and convex analysis; it is mainly used in Chap. 4. The third section presents Gronwall’s inequality, both in the forward and in the backward time direction, together with various original extensions of this inequality to stochastic processes. The most important stochastic inequalities are given in Propositions 6.71, 6.74 and 6.80. The fourth section presents the notion of viscosity solutions of nonlinear PDEs, and establishes three different uniqueness results for viscosity solutions of PDEs which appear in previous chapters of this book. These are variants of more or less known results scattered in the literature. We could not possibly cover all types of elliptic and parabolic equations (and systems of equations) with the various types of boundary conditions. But we believe that the reader can adapt our arguments to all situations considered in Chaps. 3–5 of the book.

Finally, a last section is devoted to hints for the solutions of some of the exercises from the book.

6.2 Annex A: Vectors and Matrices

Denote by \(\mathbb{R}^{d\times k}\) the linear space of matrices \(A\,=\,\left(a_{i,j}\right)_{d\times k}\), where \(a_{i,j}\,\in \,\mathbb{R}\). If k = 1 then \(\mathbb{R}^{d\times 1}\) is the Euclidean space \(\mathbb{R}^{d}\). Denote by \(A^{{\ast}} = \left(a_{j,i}\right)_{k\times d}\) the transposed matrix of A.

Let \(x = \left(x_{i}\right)_{i=\overline{1,d}} \in \mathbb{R}^{d}\) and \(y = \left(y_{i}\right)_{i=\overline{1,d}} \in \mathbb{R}^{d}\). The usual inner product on \(\mathbb{R}^{d}\) is given by

$$\displaystyle{\left\langle x,y\right\rangle = x_{1}y_{1} + x_{2}y_{2} + \cdots + x_{d}y_{d} = x^{{\ast}}y}$$

and the norm

$$\displaystyle{\vert x\vert = \sqrt{\left\langle x, x\right\rangle } = \left(x_{1}^{2} + x_{ 2}^{2} + \cdots + x_{ d}^{2}\right)^{1/2} = \sqrt{x^{{\ast} } x}.}$$

We also introduce the notation \(x^{+}:= \left(x_{i}^{+}\right)_{d\times 1}\).

The tensor product of the two vectors x and y is the bilinear form \(x \otimes y: \mathbb{R}^{d} \times \mathbb{R}^{d} \rightarrow \mathbb{R}\) defined by

$$\displaystyle{\left(x \otimes y\right)\left(u,v\right) = \left\langle x,u\right\rangle \left\langle y,v\right\rangle = u^{{\ast}}\left(\mathit{xy}^{{\ast}}\right)v.}$$

Hence one can identify

$$\displaystyle{x \otimes y = \left(x_{i}y_{j}\right)_{d\times d} = \mathit{xy}^{{\ast}}.}$$
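For readers who wish to experiment, this identification can be checked numerically. The following NumPy sketch (an illustration of ours, not part of the text) verifies it on random data:

```python
import numpy as np

# Illustrative check of (x ⊗ y)(u, v) = <x,u><y,v> = u*(xy*)v on random vectors.
rng = np.random.default_rng(0)
d = 4
x, y, u, v = (rng.standard_normal(d) for _ in range(4))

lhs = np.dot(x, u) * np.dot(y, v)     # <x,u><y,v>
M = np.outer(x, y)                    # the matrix (x_i y_j)_{d×d} = x y*
rhs = u @ M @ v                       # u*(xy*)v

ok_tensor = bool(np.isclose(lhs, rhs))
```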

If \(A = \left(a_{i,j}\right)_{d\times d}\) and \(\left\{\mathbf{u}_{1},\ldots,\mathbf{u}_{d}\right\}\) is an orthonormal basis of \(\mathbb{R}^{d}\), that is

$$\displaystyle{\left\langle \mathbf{u}_{i}\mathbf{,u}_{j}\right\rangle =\delta _{i,j} = \left\{\begin{array}{@{}l@{\quad }l@{}} 1\quad &\text{ if }i = j,\\ 0\quad &\text{ if } i\neq j, \end{array} \right.}$$

we define

$$\displaystyle{\mathbf{Tr\ }A =\mathrm{ Trace}\left(A\right) =\sum _{ i=1}^{d}\left\langle A\mathbf{u}_{ i}\mathbf{,u}_{i}\right\rangle.}$$

The “Trace” is independent of the basis \(\left\{\mathbf{u}_{1},\ldots,\mathbf{u}_{d}\right\}\) and

$$\displaystyle{\mathbf{Tr}A =\sum _{ i=1}^{d}a_{\mathit{ ii}} = \mathbf{Tr}A^{{\ast}}.}$$

Moreover if \(A,B \in \mathbb{R}^{d\times d}\) then one verifies that

$$\displaystyle{\mathbf{Tr}\left(\mathit{AB}\right) = \mathbf{Tr}\left(\mathit{BA}\right) = \mathbf{Tr}\left(A^{{\ast}}B^{{\ast}}\right) = \mathbf{Tr}\left(B^{{\ast}}A^{{\ast}}\right).}$$
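These trace identities are easy to verify numerically; the following sketch (ours, purely illustrative) checks all four on random matrices:

```python
import numpy as np

# Illustrative check of Tr(AB) = Tr(BA) = Tr(A*B*) = Tr(B*A*) on random matrices.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

traces = [np.trace(A @ B), np.trace(B @ A),
          np.trace(A.T @ B.T), np.trace(B.T @ A.T)]
ok_trace = bool(np.allclose(traces, traces[0]))
```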

Let \(A = \left(a_{i,j}\right)_{d\times k} \in \mathbb{R}^{d\times k}\), \(B = \left(b_{i,j}\right)_{d\times k} \in \mathbb{R}^{d\times k}\). We define the inner product on \(\mathbb{R}^{d\times k}\) by

$$\displaystyle\begin{array}{rcl} \left\langle A,B\right\rangle & =& \mathbf{Tr}\left(A^{{\ast}}B\right) = \mathbf{Tr}\left(\mathit{AB}^{{\ast}}\right) {}\\ & =& \sum \limits _{i=1}^{d}\sum \limits _{ j=1}^{k}a_{\mathit{ ij}}b_{\mathit{ij}} {}\\ \end{array}$$

and the norm

$$\displaystyle{\left\vert A\right\vert = \sqrt{\mathbf{Tr } \left(A^{{\ast} } A \right)} = \sqrt{\mathbf{Tr } \left(\mathit{AA } ^{{\ast} } \right)} = \left(\sum \limits _{i=1}^{d}\sum \limits _{ j=1}^{k}a_{\mathit{ ij}}^{2}\right)^{1/2}.}$$

We have

$$\displaystyle{\begin{array}{l@{\quad }l} a)\quad &\quad \left\vert \mathit{AB}\right\vert \leq \left\vert A\right\vert \left\vert B\right\vert, \\ b) \quad &\quad \left\vert \mathit{Ax}\right\vert \leq \left\vert A\right\vert \left\vert x\right\vert, \\ c)\quad &\quad \left\vert A\right\vert = \left\vert A^{{\ast}}\right\vert, \\ d)\quad &\quad \mathbf{Tr}\left(x \otimes y\right) = \left\langle x,y\right\rangle, \\ e)\quad &\quad \mathbf{Tr}\left[\left(x \otimes y\right)\mathit{AB}^{{\ast}}\right] = \left\langle x,\mathit{BA}^{{\ast}}y\right\rangle, \\ f)\quad &\quad \mathbf{Tr}\left[\left(x \otimes x\right)\mathit{AA}^{{\ast}}\right] = \left\vert A^{{\ast}}x\right\vert ^{2}, \\ g)\quad &\quad \left\vert x \otimes y\right\vert = \left\vert x\right\vert \left\vert y\right\vert. \end{array} }$$
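Each of the properties a)–g) can be tested on random data. The sketch below (our NumPy illustration, not part of the text; the shapes in a) and b) are chosen so the products are defined) checks all seven:

```python
import numpy as np

# Illustrative check of properties a)-g) of the Frobenius norm and trace.
rng = np.random.default_rng(2)
d, k = 4, 3
A = rng.standard_normal((d, k))
B = rng.standard_normal((d, k))
x = rng.standard_normal(d)
y = rng.standard_normal(d)
fro = np.linalg.norm  # Frobenius norm on matrices, Euclidean norm on vectors

ok_a = fro(A @ B.T) <= fro(A) * fro(B.T) + 1e-12                          # a)
ok_b = fro(A.T @ x) <= fro(A) * fro(x) + 1e-12                            # b), for A*
ok_c = np.isclose(fro(A), fro(A.T))                                       # c)
ok_d = np.isclose(np.trace(np.outer(x, y)), x @ y)                        # d)
ok_e = np.isclose(np.trace(np.outer(x, y) @ A @ B.T), x @ (B @ A.T @ y))  # e)
ok_f = np.isclose(np.trace(np.outer(x, x) @ A @ A.T), fro(A.T @ x) ** 2)  # f)
ok_g = np.isclose(fro(np.outer(x, y)), fro(x) * fro(y))                   # g)
ok_all = all([ok_a, ok_b, ok_c, ok_d, ok_e, ok_f, ok_g])
```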

We note that the above matrix norm is not the operator norm

$$\displaystyle{\left\Vert A\right\Vert =\sup \left\{\left\vert \mathit{Ax}\right\vert: \left\vert x\right\vert \leq 1\right\},}$$

since \(\left\Vert I_{d}\right\Vert = 1\neq \sqrt{d} = \left\vert I_{d}\right\vert\). Note, however, that

$$\displaystyle{\left\Vert A\right\Vert \leq \left\vert A\right\vert.}$$
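The relation between the two norms, and the identity-matrix example separating them, can be seen numerically (our NumPy sketch, not part of the text):

```python
import numpy as np

# Illustrative comparison of the operator norm ‖A‖ and the Frobenius norm |A|.
rng = np.random.default_rng(3)
d = 4
A = rng.standard_normal((d, d))

op = np.linalg.norm(A, 2)    # ‖A‖ = sup{|Ax| : |x| ≤ 1} (largest singular value)
fro = np.linalg.norm(A)      # |A| = sqrt(Tr(A*A))
ok_leq = bool(op <= fro + 1e-12)

I = np.eye(d)                # ‖I_d‖ = 1 while |I_d| = sqrt(d)
ok_id = bool(np.isclose(np.linalg.norm(I, 2), 1.0)) and \
        bool(np.isclose(np.linalg.norm(I), np.sqrt(d)))
```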

We denote by \(\mathbb{S}^{d\times d} \subset \mathbb{R}^{d\times d}\) the set of symmetric matrices. If \(Q,P \in \mathbb{S}^{d\times d}\), we say that Q ≤ P if \(\left\langle Qx,x\right\rangle \leq \left\langle Px,x\right\rangle\) for all \(x \in \mathbb{R}^{d}\); Q is positive semidefinite if Q ≥ 0.

\(Q \in \mathbb{S}^{d\times d}\) is positive semidefinite if and only if there exists an orthonormal basis \(\left\{\mathbf{v}_{1},\ldots,\mathbf{v}_{d}\right\}\) of \(\mathbb{R}^{d}\) and \(\left\{\lambda _{1},\ldots,\lambda _{d}\right\} \subset [0,\infty [\), such that

$$\displaystyle{Q\mathbf{v}_{i} =\lambda _{i}\mathbf{v}_{i},\;\forall \ i \in \overline{1,d}.}$$

Then \(\mathbf{Tr\ }Q =\sum \limits _{ i=1}^{d}\lambda _{i}\) and for all \(A \in \mathbb{R}^{d\times d}\) we have

$$\displaystyle{ \mathbf{Tr}\left(\mathit{AQ}\right) =\sum \limits _{ i=1}^{d}\lambda _{ i}\left\langle A\mathbf{v}_{i},\mathbf{v}_{i}\right\rangle \leq \left\Vert A\right\Vert \mathbf{Tr}Q \leq \left\vert A\right\vert \mathbf{Tr}Q. }$$
(6.1)
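Both the eigenvalue representation of \(\mathbf{Tr}\left(\mathit{AQ}\right)\) and the inequality (6.1) can be checked on random data; here is our illustrative NumPy sketch (not part of the text):

```python
import numpy as np

# Illustrative check of (6.1): for symmetric Q ≥ 0 with eigenpairs (λ_i, v_i),
# Tr(AQ) = Σ λ_i <A v_i, v_i> ≤ ‖A‖ Tr(Q) ≤ |A| Tr(Q).
rng = np.random.default_rng(4)
d = 5
M = rng.standard_normal((d, d))
Q = M @ M.T                          # positive semidefinite by construction
A = rng.standard_normal((d, d))

lam, V = np.linalg.eigh(Q)           # Q v_i = λ_i v_i with λ_i ≥ 0
trAQ = np.trace(A @ Q)
trAQ_eig = sum(l * (V[:, i] @ A @ V[:, i]) for i, l in enumerate(lam))
ok_repr = bool(np.isclose(trAQ, trAQ_eig))

op, fro, trQ = np.linalg.norm(A, 2), np.linalg.norm(A), np.trace(Q)
ok_61 = bool(trAQ <= op * trQ + 1e-9) and bool(op * trQ <= fro * trQ + 1e-9)
```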

6.3 Annex B: Elements of Nonlinear Analysis

6.3.1 Notations

As references for this Annex, see e.g. [2] or [12]. Throughout this Annex, \(\mathbb{H}\) is a real separable Hilbert space with norm \(\left\vert \cdot \right\vert\) and scalar product \(\left\langle \cdot,\cdot \right\rangle\).

Let \(\left(\mathbb{X},\left\Vert \cdot \right\Vert _{\mathbb{X}}\right)\) be a real Banach space with dual \(\left(\mathbb{X}^{{\ast}},\left\Vert \cdot \right\Vert _{\mathbb{X}^{{\ast}}}\right)\). The duality pairing between \(\mathbb{X}^{{\ast}}\) and \(\mathbb{X}\) is also denoted \(\left\langle \cdot,\cdot \right\rangle\); hence if \(x \in \mathbb{X}\) and \(\hat{x} \in \mathbb{X}^{{\ast}}\), then by \(\left\langle \hat{x},x\right\rangle\) and \(\left\langle x,\hat{x}\right\rangle\) we understand the value, \(\hat{x}\left(x\right)\), of \(\hat{x}\) at x.

Given \(x \in \mathbb{X}\), \(\hat{x} \in \mathbb{X}^{{\ast}}\) and the sequences \(x_{n} \in \mathbb{X}\), \(\hat{x}_{n} \in \mathbb{X}^{{\ast}}\), we say that, as \(n \rightarrow \infty \),

$$\displaystyle\begin{array}{rcl} & & x_{n} \rightarrow x\ \left(\mathit{strongly}\right)\ \text{ in}\ \mathbb{X}\ \text{ if}\ \left\Vert x_{n} - x\right\Vert _{\mathbb{X}} \rightarrow 0, {}\\ & & x_{n}\mathop{ \rightharpoonup }\limits^{ w}x\ \left(\mathit{weakly}\right)\ \text{ in}\ \mathbb{X}\ \text{ if}\ \left\langle \hat{y},x_{n}\right\rangle \rightarrow \left\langle \hat{y},x\right\rangle,\ \text{ for all}\ \hat{y} \in \mathbb{X}^{{\ast}}, {}\\ & & \hat{x}_{n}\mathop{ \rightharpoonup }\limits^{ w^{{\ast}}}\hat{x}\ \left(\mathit{weak}\,\mathit{star}\right)\ \text{ in}\ \mathbb{X}^{{\ast}}\ \text{ if}\ \left\langle \hat{x}_{ n},y\right\rangle \rightarrow \left\langle \hat{x},y\right\rangle,\ \text{ for all}\ y \in \mathbb{X}. {}\\ \end{array}$$

6.3.2 Maximal Monotone Operators

Let \(\mathbb{X}\) and \(\mathbb{Y}\) be Banach spaces. A multivalued operator \(A: \mathbb{X} \rightrightarrows \mathbb{Y}\) (a point-to-set operator \(A: \mathbb{X} \rightarrow 2^{\mathbb{Y}}\)) will also be regarded as a subset of \(\mathbb{X} \times \mathbb{Y}\), by setting, for \(A \subset \mathbb{X} \times \mathbb{Y}\),

$$\displaystyle{\mathit{Ax} = \left\{y \in \mathbb{Y}:\, \left(x,y\right) \in A\right\}.}$$

Define

$$\displaystyle\begin{array}{rcl} D(A)& =& \text{ Dom}\left(A\right) = \left\{x \in \mathbb{X}:\, \mathit{Ax}\neq \varnothing \right\}\text{ \textendash the domain of }A, {}\\ R\left(A\right)& =& \left\{y \in \mathbb{Y}: \exists \ x \in D(A),\text{ s.t. }y \in \mathit{Ax}\right\}\text{ \textendash the range of }A, {}\\ \end{array}$$

and let \(A^{-1}: \mathbb{Y} \rightrightarrows \mathbb{X}\) be the point-to-set operator given by \(x \in A^{-1}\left(y\right)\) if \(y \in A\left(x\right)\).

We give some definitions:

  • \(A: \mathbb{X} \rightrightarrows \mathbb{X}^{{\ast}}\) is monotone if

    $$\displaystyle{\left\langle y_{1} - y_{2},x_{1} - x_{2}\right\rangle \geq 0,\text{ for all }\,\left(x_{1},y_{1}\right) \in A,\,\left(x_{2},y_{2}\right) \in A.}$$
  • \(A: \mathbb{X} \rightrightarrows \mathbb{X}^{{\ast}}\) is a maximal monotone operator if A is a monotone operator and it is maximal in the set of monotone operators: that is,

    $$\displaystyle{\left\langle v - y,u - x\right\rangle \geq 0\;\;\;\forall \,\left(x,y\right) \in A,\;\;\Longrightarrow\;\left(u,v\right) \in A.}$$
  • \(\mathbf{J}_{X}: \mathbb{X} \rightrightarrows \mathbb{X}^{{\ast}}\) defined by

    $$\displaystyle\begin{array}{rcl} \mathbf{J}_{X}\left(x\right)& =& \left\{\hat{x}: \left\Vert \hat{x}\right\Vert _{{\ast}}^{2} = \left\Vert x\right\Vert ^{2} = \left\langle \hat{x},x\right\rangle \right\} {}\\ & =& \left\{\hat{x}: \left\langle \hat{x},y - x\right\rangle + \frac{1} {2}\left\Vert x\right\Vert ^{2} \leq \frac{1} {2}\left\Vert y\right\Vert ^{2},\;\;\forall \ y \in \mathbb{X}\right\} {}\\ \end{array}$$

    is called the duality mapping; if \(\mathbb{X} = \mathbb{H}\) is a Hilbert space then \(\mathbf{J}_{X}\left(x\right) = \mathbf{I}_{\mathbb{H}}\left(x\right) = x\) for all \(x \in \mathbb{H}\).

  • \(A: \mathbb{X} \rightrightarrows \mathbb{Y}\) is locally bounded at \(x_{0} \in D(A)\) if there exists a neighborhood V of \(x_{0}\) such that \(A\left(V \right) = \left\{y \in \mathbb{Y}: \exists \ x \in D(A) \cap V,\text{ s.t. }y \in \mathit{Ax}\right\}\) is bounded in \(\mathbb{Y}\).

We have:

Proposition 6.1 (Rockafellar).

Let \(\mathbb{X}\) be a reflexive Banach space. Then \(A: \mathbb{X} \rightrightarrows \mathbb{X}^{{\ast}}\) is a maximal monotone operator if and only if A is a monotone operator and

$$\displaystyle{R\left(\mathbf{J}_{X} +\varepsilon A\right) = \mathbb{X}^{{\ast}},\text{ for all }\varepsilon > 0.}$$

Proposition 6.2.

Let \(A: \mathbb{H} \rightrightarrows \mathbb{H}\) be a maximal monotone operator. Then:

  1. (a)

    A is a closed subset of \(\mathbb{H} \times \mathbb{H};\) moreover if \(\left(x_{n},y_{n}\right) \in A\) and

    $$\displaystyle{\begin{array}{l} x_{n} \rightarrow x\;\ \text{ (strongly) in }\mathbb{H}\text{ and}\;y_{n}\;\mathop{ \rightarrow }\limits^{ w}y\quad \;\text{ (weakly) in }\mathbb{H},\quad \text{ or} \\ x_{n}\mathop{ \rightarrow }\limits^{ w}x,\quad \text{ and}\quad \quad y_{n} \rightarrow y,\;\;\text{ or} \\ x_{n}\mathop{ \rightarrow }\limits^{ w}x,\quad \;y_{n}\mathop{ \rightarrow }\limits^{ w}y,\quad \text{ and }\overline{\lim }_{n}\left\langle x_{n},y_{n}\right\rangle \leq \left\langle x,y\right\rangle,\end{array} }$$

    then \(\left(x,y\right) \in A\) ;

  2. (b)

    \(\overline{D\left(A\right)}\) and \(\overline{R\left(A\right)}\) are convex subsets of \(\mathbb{H}\) ;

  3. (c)

    Ax is a convex closed subset of \(\mathbb{H}\) for all \(x \in D\left(A\right)\) ;

  4. (d)

    A is locally bounded on \(\mathrm{int}\left(D\left(A\right)\right)\), that is: for every \(u_{0} \in \mathrm{ int}\left(\mathrm{Dom}\left(A\right)\right)\) there exists an \(r_{0} > 0\) such that

    $$\displaystyle{\bar{B}\left(u_{0},r_{0}\right)\mathop{ =}\limits^{ \mathit{def }}\left\{u_{0} + r_{0}v: \left\vert v\right\vert \leq 1\right\} \subset \mathrm{ Dom}\left(A\right)}$$

    and

    $$\displaystyle{A_{u_{0},r_{0}}^{\#}\mathop{ =}\limits^{ \mathit{def }}\sup \left\{\left\vert \hat{u}\right\vert:\hat{ u} \in A\left(u_{ 0} + r_{0}v\right),\;\left\vert v\right\vert \leq 1\right\} < \infty.}$$

Proposition 6.3.

  1. 1.

    If \(A: \mathbb{H} \rightarrow \mathbb{H}\) is a single-valued monotone hemicontinuous operator then A is maximal monotone ( \(A: \mathbb{H} \rightarrow \mathbb{H}\) is hemicontinuous if the function \(t\longrightarrow \left\langle A\left(x + tz\right),y\right\rangle: \mathbb{R} \rightarrow \mathbb{R}\) is continuous for all \(x,y,z \in \mathbb{H}\) ).

  2. 2.

    If \(A,B \subset \mathbb{H} \times \mathbb{H}\) are maximal monotone sets and \(\mathrm{int}\left(D\left(A\right)\right) \cap D\left(B\right)\neq \varnothing \) , then \(A + B\mathop{ =}\limits^{ \mathit{def }}\left\{\left(x,y + z\right): \left(x,y\right) \in A,\ \left(x,z\right) \in B\right\}\) is maximal monotone in \(\mathbb{H} \times \mathbb{H}\) .

Let \(A \subset \mathbb{H} \times \mathbb{H}\) be a maximal monotone operator. Then for each \(\varepsilon > 0\) the operators

$$\displaystyle{J_{\varepsilon }x = (I +\varepsilon A)^{-1}(x)\text{ and }A_{\varepsilon }\left(x\right) = \frac{1} {\varepsilon } (x - J_{\varepsilon }x)}$$

from \(\mathbb{H}\) to \(\mathbb{H}\) are single-valued. The operator \(A_{\varepsilon }\) is called the Yosida approximation of the operator A. In [2, 12] one can find the proofs of the following properties:

Proposition 6.4.

Let \(A: \mathbb{H} \rightrightarrows \mathbb{H}\) be a maximal monotone operator. Then:

  1. (j)

    For all \(\varepsilon,\delta > 0\) and for all \(\;x,y \in \mathbb{H}\)

    $$\displaystyle{\begin{array}{r@{\quad }l} i)\quad &\quad \left(J_{\varepsilon }x,A_{\varepsilon }x\right) \in A, \\ \mathit{ii})\quad &\quad \left\vert J_{\varepsilon }x - J_{\varepsilon }y\right\vert \leq \left\vert x - y\right\vert, \\ \mathit{iii})\quad &\quad \left\vert A_{\varepsilon }x - A_{\varepsilon }y\right\vert \leq \dfrac{1} {\varepsilon } \left\vert x - y\right\vert, \\ \mathit{iv})\quad &\quad \left\vert J_{\varepsilon }x - J_{\delta }x\right\vert \leq \left\vert \varepsilon -\delta \right\vert \left\vert A_{\delta }x\right\vert, \\ \mathit{v})\quad &\quad \left\vert J_{\varepsilon }x\right\vert \leq \left\vert x\right\vert + (1 + \left\vert \varepsilon -1\right\vert )\left\vert J_{1}0\right\vert, \\ \mathit{vi})\quad &\quad A_{\varepsilon }: \mathbb{H} \rightarrow \mathbb{H}\quad \text{ is a maximal monotone operator.}\end{array} }$$
  2. (jj)

    If \(\varepsilon _{n} \rightarrow 0\), \(x_{n}\mathop{ \rightarrow }\limits^{ w}x\), \(A_{\varepsilon _{n}}x_{n}\mathop{ \rightarrow }\limits^{ w}y\) and

    $$\displaystyle{\limsup \limits _{n,m\rightarrow \infty }\left\langle x_{n} - x_{m},A_{\varepsilon _{n}}x_{n} - A_{\varepsilon _{m}}x_{m}\right\rangle \leq 0,}$$

    then \(\left(x,y\right) \in A\) and \(\lim \limits _{n,m\rightarrow \infty }\left\langle x_{n} - x_{m},A_{\varepsilon _{n}}x_{n} - A_{\varepsilon _{m}}x_{m}\right\rangle = 0\) .

  3. (jjj)

    \(\lim \limits _{\varepsilon \searrow 0}J_{\varepsilon }x =\Pr _{\overline{D\left(A\right)}}x,\;\forall x \in \mathbb{H}\) and

    $$\displaystyle{\lim \limits _{\varepsilon \searrow 0}x_{\varepsilon } = x \in D\left(A\right)\quad \Rightarrow \quad \lim \limits _{\varepsilon \searrow 0}J_{\varepsilon }x_{\varepsilon } = x.}$$

    ( \(\Pr _{\overline{D\left(A\right)}}x\) is the orthogonal projection of x on \(\overline{D\left(A\right)}\) .)

  4. (jv)

    \(\lim \limits _{\varepsilon \searrow 0}A_{\varepsilon }x =\Pr _{\mathit{Ax}}\left\{0\right\}\mathop{ =}\limits^{ \mathit{def }}\;A^{0}x \in \mathit{Ax}\) , for all \(x \in D\left(A\right)\) .

  5. (v)

    \(\left\vert A_{\varepsilon }x\right\vert\) is monotone decreasing in \(\varepsilon > 0\) , and when \(\varepsilon \searrow 0\)

    $$\displaystyle{\left\vert A_{\varepsilon }\left(x\right)\right\vert \nearrow \left\{\begin{array}{@{}l@{\quad }l@{}} \left\vert A^{0}\left(x\right)\right\vert,\quad &\text{ if}\quad x \in D\left(A\right), \\ +\infty, \quad &\text{ if}\quad x\notin D\left(A\right).\end{array} \right.}$$
  6. (vj)

    \(\left\vert J_{\varepsilon }x - x\right\vert =\varepsilon \left\vert A_{\varepsilon }x\right\vert \leq \varepsilon \left\vert A^{0}x\right\vert \leq \varepsilon \left\vert z\right\vert\) , for all \(\left(x,z\right) \in A\) .

  7. (vjj)

    For all \(x \in \mathbb{H}\),

    $$\displaystyle\begin{array}{rcl} \left\vert J_{\varepsilon }x - x\right\vert & \leq &\left\vert J_{\varepsilon }x - J_{\varepsilon }\left(J_{1}x\right)\right\vert + \left\vert J_{\varepsilon }\left(J_{1}x\right) - J_{1}x\right\vert + \left\vert J_{1}x - x\right\vert {}\\ & \leq & 2\left\vert J_{1}x - x\right\vert +\varepsilon \left\vert A^{0}\left(J_{ 1}x\right)\right\vert. {}\\ \end{array}$$
  8. (vjjj)

    For all \(x \in \mathbb{H}\) and \(y \in \mathrm{ Dom}\left(A\right)\)

    $$\displaystyle\begin{array}{rcl} \left\vert J_{\varepsilon }x - J_{\delta }y\right\vert & \leq &\left\vert x - y\right\vert + \left\vert \varepsilon -\delta \right\vert \left\vert A_{\delta }y\right\vert {}\\ & \leq &\left\vert x - y\right\vert + \left\vert \varepsilon -\delta \right\vert \left\vert A^{0}y\right\vert. {}\\ \end{array}$$
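In the simplest one-dimensional example these objects can be computed in closed form and several of the listed properties checked numerically. The NumPy sketch below (our illustration, not from the text) takes \(A = \partial \left\vert \cdot \right\vert\), the subdifferential of \(x\mapsto \left\vert x\right\vert\) on \(\mathbb{R}\), whose resolvent is soft-thresholding:

```python
import numpy as np

# Illustrative 1-D example: A = ∂|·| is maximal monotone on ℝ. Its resolvent
# J_ε = (I + εA)^{-1} is soft-thresholding, and A_ε x = (x - J_ε x)/ε = clip(x/ε, -1, 1).
def J(eps, x):
    return np.sign(x) * np.maximum(np.abs(x) - eps, 0.0)

def A_eps(eps, x):
    return (x - J(eps, x)) / eps

xs = np.linspace(-3.0, 3.0, 61)
eps, delta = 0.5, 0.1

# (j)-ii): the resolvent is non-expansive
ok_nonexp = all(abs(J(eps, a) - J(eps, b)) <= abs(a - b) + 1e-12
                for a in xs for b in xs)

# (v): |A_ε x| increases as ε decreases (toward |A^0 x|)
ok_monot = bool(np.all(np.abs(A_eps(delta, xs)) >= np.abs(A_eps(eps, xs)) - 1e-12))

# (vj): |J_ε x - x| = ε|A_ε x| ≤ ε|A^0 x|, where here A^0 x = sign(x)
A0 = np.sign(xs)
ok_vj = bool(np.all(np.abs(J(eps, xs) - xs) <= eps * np.abs(A0) + 1e-12))
```

Here \(A^{0}x =\Pr _{\mathit{Ax}}\left\{0\right\}\) equals \(\mathrm{sign}(x)\) for \(x\neq 0\) and 0 at \(x = 0\), in line with (jv).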

The operator A is uniquely defined by its principal section \(A^{0}x\mathop{ =}\limits^{ \mathit{def }}\Pr _{\mathit{Ax}}\left\{0\right\}\) in the following sense: if \(\left(x,y\right) \in \overline{D\left(A\right)} \times \mathbb{H}\) such that

$$\displaystyle{\left\langle y - A^{0}u,x - u\right\rangle \geq 0,\text{ for all }u \in D\left(A\right)}$$

then \(\left(x,y\right) \in A\).

Proposition 6.5.

Let \(A: \mathbb{H} \rightrightarrows \mathbb{H}\) be a maximal monotone operator.

  1. I.

    If \(\bar{B}\left(x_{0},r_{0}\right) \subset \mathrm{ Dom}\left(A\right)\) and

    $$\displaystyle{A_{x_{0},r_{0}}^{\#}\mathop{ =}\limits^{ \mathit{def }}\sup \left\{\left\vert \hat{u}\right\vert:\hat{ u} \in A\left(x_{ 0} + r_{0}v\right),\;\left\vert v\right\vert \leq 1\right\},}$$

    then

    $$\displaystyle{ r_{0}\left\vert \hat{x}\right\vert \leq \left\langle \hat{x},x - x_{0}\right\rangle + A_{x_{0},r_{0}}^{\#}\left\vert x - x_{ 0}\right\vert + r_{0}A_{x_{0},r_{0}}^{\#},\quad \forall \,\left(x,\hat{x}\right) \in A. }$$
    (6.2)
  2. II.

    If there exist \(x_{0} \in \mathbb{H}\) and \(a_{0},\hat{a}_{0} \geq 0\) such that

    $$\displaystyle{r_{0}\left\vert \hat{x}\right\vert \leq \left\langle \hat{x},x - x_{0}\right\rangle + a_{0}\left\vert x - x_{0}\right\vert +\hat{ a}_{0},\quad \forall \,\left(x,\hat{x}\right) \in A,}$$

    then there exists a \(b_{0} \geq 0\) such that for all \(x \in \mathbb{H}\) and all \(\varepsilon \in \left]0,1\right]\) :

    $$\displaystyle{ r_{0}\left\vert A_{\varepsilon }x\right\vert \leq \left\langle A_{\varepsilon }x,x - x_{0}\right\rangle + a_{0}\left\vert x - x_{0}\right\vert + b_{0}. }$$
    (6.3)

    If \(x_{0} \in \mathrm{ Dom}\left(A\right)\) and \(0 \in Ax_{0}\), then \(b_{0} =\hat{ a}_{0}\) .

Proof.

  1. I.

    By the monotonicity of A we have, for all \(\left(x,\hat{x}\right) \in A\), all \(\left\vert v\right\vert \leq 1\) and all \(\hat{y} \in A\left(x_{0} + r_{0}v\right)\):

    $$\displaystyle\begin{array}{rcl} r_{0}\left\langle \hat{x},v\right\rangle & \leq & r_{0}\left\langle \hat{x},v\right\rangle + \left\langle \hat{x} -\hat{ y},x -\left(x_{0} + r_{0}v\right)\right\rangle {}\\ & =& \left\langle \hat{x},x - x_{0}\right\rangle -\left\langle \hat{y},x - x_{0}\right\rangle + r_{0}\left\langle \hat{y},v\right\rangle {}\\ & \leq &\left\langle \hat{x},x - x_{0}\right\rangle + A_{x_{0},r_{0}}^{\#}\left\vert x - x_{ 0}\right\vert + r_{0}A_{x_{0},r_{0}}^{\#}, {}\\ \end{array}$$

    Taking the supremum over \(\left\vert v\right\vert \leq 1\) yields (6.2).

  2. II.

    Since \(A_{\varepsilon }\left(x\right) \in A\left(J_{\varepsilon }\left(x\right)\right)\), it follows that

    $$\displaystyle\begin{array}{rcl} r_{0}\left\vert A_{\varepsilon }x\right\vert & \leq &\left\langle A_{\varepsilon }x,J_{\varepsilon }\left(x\right) - x_{0}\right\rangle + a_{0}\left\vert J_{\varepsilon }\left(x\right) - x_{0}\right\vert +\hat{ a}_{0} {}\\ & \leq &\left\langle A_{\varepsilon }x,x - x_{0}\right\rangle + a_{0}\left[\left\vert J_{\varepsilon }\left(x\right) - J_{\varepsilon }\left(x_{0}\right)\right\vert + \left\vert J_{\varepsilon }\left(x_{0}\right) - x_{0}\right\vert \right] +\hat{ a}_{0} {}\\ & \leq &\left\langle A_{\varepsilon }x,x - x_{0}\right\rangle + a_{0}\left\vert x - x_{0}\right\vert + a_{0}\left\vert J_{\varepsilon }\left(x_{0}\right) - x_{0}\right\vert +\hat{ a}_{0}. {}\\ \end{array}$$

    Hence the inequality (6.3) holds for \(b_{0} = a_{0}\left[2\left\vert J_{1}x_{0} - x_{0}\right\vert + \left\vert A^{0}\left(J_{1}x_{0}\right)\right\vert \right] +\hat{ a}_{0}\). If 0 ∈ Ax 0 then \(J_{\varepsilon }\left(x_{0}\right) = x_{0}\) and \(b_{0} =\hat{ a}_{0}\).

 ■ 

Proposition 6.6.

If A is a maximal monotone set in \(\mathbb{H} \times \mathbb{H}\) and \(\mathcal{A}\subset L^{2}\left(0,T; \mathbb{H}\right) \times L^{2}\left(0,T; \mathbb{H}\right)\) is defined by

$$\displaystyle{\mathcal{A} = \left\{\left(x,\hat{x}\right) \in L^{2}\left(0,T; \mathbb{H}\right) \times L^{2}\left(0,T; \mathbb{H}\right): \left(x\left(t\right),\hat{x}\left(t\right)\right) \in A,\;a.e.\,t \in \left]0,T\right[\right\},}$$

then \(\mathcal{A}\) is a maximal monotone set in \(L^{2}\left(0,T; \mathbb{H}\right) \times L^{2}\left(0,T; \mathbb{H}\right)\) .

6.3.3 Stochastic Monotone Functions

Let \(\left(\Omega,\mathcal{F}, \mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0}\right)\) be a complete stochastic basis and

$$\displaystyle{F: \Omega \times \left[0,+\infty \right[ \times \mathbb{R}^{d} \times \mathbb{R}^{d\times k} \rightarrow \mathbb{R}^{d}}$$

such that

  • \(F\left(\cdot,\cdot,y,z\right)\) is \(\mathcal{P}\)-m.s.p. for every \(\left(y,z\right) \in \mathbb{R}^{d} \times \mathbb{R}^{d\times k}\);

  • for all \(y,y^{{\prime}}\in \mathbb{R}^{d},\;z,z^{{\prime}}\in \mathbb{R}^{d\times k},\;t \geq 0\):

    $$\displaystyle{\left\langle y - y^{{\prime}},F(t,y,z) - F(t,y^{{\prime}},z)\right\rangle \leq 0,\;\; \mathbb{P}\text{ -a.s.};}$$
  • for all \(z \in \mathbb{R}^{d\times k}\) and \(t \geq 0\):

    $$\displaystyle{y\longmapsto F(t,y,z): \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}\,\text{ is continuous, }\mathbb{P}\text{ -a.s.};}$$
  • there exists a \(\mathcal{P}\)-m.s.p. \(\ell: \Omega \times \left[0,+\infty \right[ \rightarrow \mathbb{R}_{+}\) such that for all \(y \in \mathbb{R}^{d}\), \(z,z^{{\prime}}\in \mathbb{R}^{d\times k},\;t \geq 0\):

    $$\displaystyle{\left\vert F(t,y,z) - F(t,y,z^{{\prime}})\right\vert \leq \ell_{ t}\left\vert z - z^{{\prime}}\right\vert,\;\; \mathbb{P}\text{ -a.s.}}$$

Since \(y\mapsto - F(t,y,z): \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}\) is a monotone continuous operator (hence also a maximal monotone operator), it follows that for every \(\varepsilon > 0\) and \(\left(\omega,t,y,z\right) \in \Omega \times [0,T] \times \mathbb{R}^{d} \times \mathbb{R}^{d\times k}\) there exists a unique \(J_{\varepsilon } = J_{\varepsilon }\left(\omega,t,y,z\right) \in \mathbb{R}^{d}\) such that

$$\displaystyle{J_{\varepsilon } -\varepsilon F(\omega,t,J_{\varepsilon },z) = y.}$$

The Yosida approximation of F is defined by

$$\displaystyle{F_{\varepsilon }(t,y,z)\mathop{ =}\limits^{ \mathit{def }}\frac{1} {\varepsilon } \left(J_{\varepsilon }(t,y,z) - y\right) = F(t,J_{\varepsilon }(t,y,z),z).}$$

Note that \(F_{\varepsilon } = F_{\varepsilon }(t,y,z)\) is the unique solution of

$$\displaystyle{ F(\omega,t,y +\varepsilon F_{\varepsilon },z) = F_{\varepsilon }. }$$
(6.4)
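To make (6.4) concrete, here is a hedged one-dimensional sketch (d = k = 1) with the illustrative choice \(F(t,y,z) = -y^{3} + z\) (ours, not from the text), which satisfies the monotonicity and Lipschitz assumptions with \(\ell_{t} \equiv 1\); the resolvent equation \(J_{\varepsilon } -\varepsilon F(t,J_{\varepsilon },z) = y\) is solved by bisection:

```python
import numpy as np

# Illustrative 1-D choice F(t,y,z) = -y**3 + z: monotone decreasing in y,
# Lipschitz in z. J_ε(y,z) is the unique root of εJ³ + J = y + εz.
def F(y, z):
    return -y**3 + z

def J_eps(eps, y, z):
    g = lambda J: J + eps * J**3 - (y + eps * z)   # strictly increasing in J
    lo, hi = -10.0, 10.0
    for _ in range(200):                           # bisection to machine precision
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def F_eps(eps, y, z):
    return (J_eps(eps, y, z) - y) / eps

y, z, eps = 1.5, 0.3, 0.2
J = J_eps(eps, y, z)
# definition (6.4): F(y + εF_ε, z) = F_ε, since y + εF_ε = J_ε
ok_fix = bool(np.isclose(F(J, z), F_eps(eps, y, z), atol=1e-6))
# property (e) of Proposition 6.7: |J_ε - y| = ε|F_ε| ≤ ε|F(t,y,z)|
ok_e = bool(abs(J - y) <= eps * abs(F(y, z)) + 1e-6)
# property (f): F_ε → F as ε → 0
ok_f = bool(abs(F_eps(1e-4, y, z) - F(y, z)) < 1e-2)
```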

The functions \(J_{\varepsilon }\left(\cdot,\cdot,y,z\right)\), \(F_{\varepsilon }\left(\cdot,\cdot,y,z\right): \Omega \times \left[0,T\right] \rightarrow \mathbb{R}^{d}\) are \(\mathcal{P}\)-m.s.p. for every \(\left(y,z\right) \in \mathbb{R}^{d} \times \mathbb{R}^{d\times k}\) and we have:

Proposition 6.7.

For all \(\varepsilon,\delta > 0\), \(\forall \ t \in \left[0,T\right]\), \(\forall \ y,y^{{\prime}}\in \mathbb{R}^{d}\), \(\forall \ z,z^{{\prime}}\in \mathbb{R}^{d\times k}\) :

$$\displaystyle{ \begin{array}{l@{\quad }l} \left(a\right)\;\quad &\vert J_{\varepsilon }(t,y,z) - J_{\varepsilon }(t,y^{{\prime}},z^{{\prime}})\vert \leq \vert y - y^{{\prime}}\vert +\varepsilon \ell _{t}\left\vert z - z^{{\prime}}\right\vert, \\ \left(b\right)\; \quad &\left\vert J_{\varepsilon }(t,0,0)\right\vert \leq \varepsilon \left\vert F\left(t,0,0\right)\right\vert, \\ \left(c\right)\;\quad &\left\langle F_{\varepsilon }\left(t,y,z\right) - F_{\varepsilon }\left(t,y^{{\prime}},z^{{\prime}}\right),y - y^{{\prime}}\right\rangle \leq \ell_{t}\left\vert z - z^{{\prime}}\right\vert \left\vert y - y^{{\prime}}\right\vert, \\ \left(d\right)\;\quad &\left\vert F_{\varepsilon }\left(t,y,z\right) - F_{\varepsilon }\left(t,y^{{\prime}},z^{{\prime}}\right)\right\vert \leq \dfrac{2} {\varepsilon } \vert y - y^{{\prime}}\vert +\ell _{t}\left\vert z - z^{{\prime}}\right\vert, \\ \left(e\right)\;\quad &\vert J_{\varepsilon }(t,y,z) - y\vert \leq \varepsilon \left\vert F_{\varepsilon }\left(t,y,z\right)\right\vert \leq \varepsilon \left\vert F\left(t,y,z\right)\right\vert, \\ \left(f\right)\;\quad &\lim \limits _{\varepsilon \rightarrow 0}F_{\varepsilon }\left(t,y,z\right) = F\left(t,y,z\right),\end{array} }$$
(6.5)
$$\displaystyle{ \vert J_{\varepsilon }(t,y,z) - J_{\delta }(t,y^{{\prime}},z^{{\prime}})\vert \leq \left\vert y - y^{{\prime}}\right\vert +\delta \ \ell_{ t}\left\vert z - z^{{\prime}}\right\vert + \left\vert \varepsilon -\delta \right\vert \left\vert F\left(t,y,z\right)\right\vert }$$
(6.6)

and

$$\displaystyle{ \begin{array}{l} \left\langle y - y^{{\prime}},F_{\varepsilon }(t,y,z) - F_{\delta }(t,y^{{\prime}},z^{{\prime}})\right\rangle +\varepsilon \left\vert F_{\varepsilon }\left(t,y,z\right)\right\vert ^{2} +\delta \left\vert F_{\delta }\left(t,y^{{\prime}},z^{{\prime}}\right)\right\vert ^{2} \\ \quad \quad \leq \left(\varepsilon +\delta \right)\left\langle F_{\varepsilon }\left(t,y,z\right),F_{\delta }(t,y^{{\prime}},z^{{\prime}})\right\rangle \\ \quad \quad \quad \quad +\ell _{t}\left[\left\vert y - y^{{\prime}}\right\vert +\varepsilon \left\vert F\left(t,y,z\right)\right\vert +\delta \left\vert F\left(t,y^{{\prime}},z^{{\prime}}\right)\right\vert \right]\left\vert z - z^{{\prime}}\right\vert. \end{array} }$$
(6.7)

Proof.

  1. (a):

    If \(J = J_{\varepsilon }(t,y,z)\), \(J^{{\prime}} = J_{\varepsilon }(t,y^{{\prime}},z^{{\prime}})\), then

    $$\displaystyle\begin{array}{rcl} & & \left\vert J - J^{{\prime}}\right\vert ^{2} {}\\ & & \qquad =\varepsilon \left\langle F\left(t,J,z\right) - F\left(t,J^{{\prime}},z^{{\prime}}\right),J - J^{{\prime}}\right\rangle + \left\langle y - y^{{\prime}},J - J^{{\prime}}\right\rangle {}\\ &&\qquad =\varepsilon \left\langle F\left(t,J,z\right) - F\left(t,J^{{\prime}},z\right),J - J^{{\prime}}\right\rangle {}\\ &&\qquad \qquad +\varepsilon \left\langle F\left(t,J^{{\prime}},z\right) - F\left(t,J^{{\prime}},z^{{\prime}}\right),J - J^{{\prime}}\right\rangle + \left\langle y - y^{{\prime}},J - J^{{\prime}}\right\rangle {}\\ &&\qquad \leq \varepsilon \left[\ell_{t}\left\vert z - z^{{\prime}}\right\vert \left\vert J - J^{{\prime}}\right\vert \right] + \left\vert y - y^{{\prime}}\right\vert \left\vert J - J^{{\prime}}\right\vert {}\\ \end{array}$$

    and (6.5-a) follows.

  2. (b):

    With the notation \(J^{0} = J_{\varepsilon }\left(t,0,0\right)\),

    $$\displaystyle{\left\vert J^{0}\right\vert ^{2} =\varepsilon \left\langle F\left(t,J^{0},0\right),J^{0}\right\rangle \leq \varepsilon \left\langle F\left(t,0,0\right),J^{0}\right\rangle \leq \varepsilon \left\vert F\left(t,0,0\right)\right\vert \left\vert J^{0}\right\vert }$$

    which gives (6.5-b).

  3. (c):

    We have

    $$\displaystyle\begin{array}{rcl} & & \left\langle F_{\varepsilon }\left(t,y,z\right) - F_{\varepsilon }\left(t,y^{{\prime}},z^{{\prime}}\right),y - y^{{\prime}}\right\rangle {}\\ && = \frac{1} {\varepsilon } \left\langle J_{\varepsilon }(t,y,z) - J_{\varepsilon }(t,y^{{\prime}},z^{{\prime}}),y - y^{{\prime}}\right\rangle -\frac{1} {\varepsilon } \left\vert y - y^{{\prime}}\right\vert ^{2} {}\\ & & \leq \frac{1} {\varepsilon } \left[\vert y - y^{{\prime}}\vert +\varepsilon \ell _{ t}\left\vert z - z^{{\prime}}\right\vert \right]\left\vert y - y^{{\prime}}\right\vert -\frac{1} {\varepsilon } \left\vert y - y^{{\prime}}\right\vert ^{2} {}\\ & & \leq \ell_{t}\left\vert z - z^{{\prime}}\right\vert \left\vert y - y^{{\prime}}\right\vert {}\\ \end{array}$$

    that is (6.5-c).

  4. (d):

    From \(\left(a\right)\) and the definition of \(F_{\varepsilon }\) the inequality \(\left(d\right)\) clearly follows.

  5. (e):

    The properties follow from those of the Yosida approximation, \(A_{\varepsilon }\), of a maximal monotone operator A (here \(A_{\varepsilon }\left(y\right) = -F_{\varepsilon }(t,y,z)\) for \(\left(\omega,t,z\right)\) fixed).

  6. (6.6):

    Let \(J_{\varepsilon } = J_{\varepsilon }(t,y,z)\) and \(J_{\delta }^{{\prime}} = J_{\delta }(t,y^{{\prime}},z^{{\prime}})\). Then

    $$\displaystyle\begin{array}{rcl} \left\vert J_{\varepsilon } - J_{\delta }^{{\prime}}\right\vert ^{2}& =& \left(\varepsilon -\delta \right)\left\langle F\left(t,J_{\varepsilon },z\right),J_{\varepsilon } - J_{\delta }^{{\prime}}\right\rangle +\delta \left\langle F\left(t,J_{\varepsilon },z\right) - F\left(t,J_{\delta }^{{\prime}},z^{{\prime}}\right),J_{\varepsilon } - J_{\delta }^{{\prime}}\right\rangle {}\\ & & +\left\langle y - y^{{\prime}},J_{\varepsilon } - J_{\delta }^{{\prime}}\right\rangle {}\\ & \leq &\left\vert \varepsilon -\delta \right\vert \left\vert F\left(t,J_{\varepsilon },z\right)\right\vert \left\vert J_{\varepsilon } - J_{\delta }^{{\prime}}\right\vert +\delta \ell _{t}\left\vert z - z^{{\prime}}\right\vert \left\vert J_{\varepsilon } - J_{\delta }^{{\prime}}\right\vert + \left\vert y - y^{{\prime}}\right\vert \left\vert J_{\varepsilon } - J_{\delta }^{{\prime}}\right\vert {}\\ \end{array}$$

    and (6.6) follows.

  7. (6.7):

    Now, we have

    $$\displaystyle\begin{array}{rcl} & & \left\langle J_{\varepsilon } - J_{\delta }^{{\prime}},F_{\varepsilon }(t,y,z) - F_{\delta }(t,y^{{\prime}},z^{{\prime}})\right\rangle {}\\ & & \qquad = \left\langle J_{\varepsilon } - J_{\delta }^{{\prime}},F(t,J_{\varepsilon },z) - F(t,J_{\delta }^{{\prime}},z^{{\prime}})\right\rangle {}\\ & & \qquad \leq 0 + \left\langle J_{\varepsilon } - J_{\delta }^{{\prime}},F(t,J_{\delta }^{{\prime}},z) - F(t,J_{\delta }^{{\prime}},z^{{\prime}})\right\rangle {}\\ & & \qquad \leq \left\vert J_{\varepsilon } - J_{\delta }^{{\prime}}\right\vert \ell_{ t}\left\vert z - z^{{\prime}}\right\vert {}\\ &&\qquad \leq \ell_{t}\left[\varepsilon \left\vert F\left(t,y,z\right)\right\vert +\delta \left\vert F\left(t,y^{{\prime}},z^{{\prime}}\right)\right\vert + \left\vert y - y^{{\prime}}\right\vert \right]\left\vert z - z^{{\prime}}\right\vert {}\\ \end{array}$$

    and then

    $$\displaystyle\begin{array}{rcl} & & \left\langle y - y^{{\prime}},F_{\varepsilon }(t,y,z) - F_{\delta }(t,y^{{\prime}},z^{{\prime}})\right\rangle {}\\ & & = \left\langle J_{\varepsilon } -\varepsilon F_{\varepsilon }\left(t,y,z\right) - J_{\delta }^{{\prime}} +\delta F_{\delta }\left(t,y^{{\prime}},z^{{\prime}}\right),F_{\varepsilon }(t,y,z) - F_{\delta }(t,y^{{\prime}},z^{{\prime}})\right\rangle {}\\ & & \leq -\varepsilon \left\vert F_{\varepsilon }\left(t,y,z\right)\right\vert ^{2} -\delta \left\vert F_{\delta }\left(t,y^{{\prime}},z^{{\prime}}\right)\right\vert ^{2} + \left(\varepsilon +\delta \right)\left\langle F_{\varepsilon }\left(t,y,z\right),F_{\delta }(t,y^{{\prime}},z^{{\prime}})\right\rangle {}\\ & & +\ell_{t}\left[\varepsilon \left\vert F\left(t,y,z\right)\right\vert +\delta \left\vert F\left(t,y^{{\prime}},z^{{\prime}}\right)\right\vert + \left\vert y - y^{{\prime}}\right\vert \right]\left\vert z - z^{{\prime}}\right\vert {}\\ \end{array}$$

    that is (6.7). ■ 

If we define

$$\displaystyle{F_{R}^{\#}\left(t\right)\mathop{ =}\limits^{ \mathit{def }}\sup \limits _{\left\vert y\right\vert \leq R}\vert F(t,y,0)\vert,}$$

then we have the following:

Proposition 6.8.

For all \(\varepsilon > 0\), \(p,a > 1\), \(r_{0} \geq 0\), \(y \in \mathbb{R}^{d}\), \(z \in \mathbb{R}^{d\times k}\), \(t \in \left[0,T\right]\) :

$$\displaystyle{ \begin{array}{l} r_{0}\left\vert F_{\varepsilon }\left(t,y,z\right)\right\vert + \left\langle F_{\varepsilon }\left(t,y,z\right),y\right\rangle \leq r_{0}\left(F_{r_{0}}^{\#}\left(t\right) + r_{0} \dfrac{a} {2n_{p}}\left(\ell_{t}\right)^{2}\right) \\ \;\;\;\;\;\;\;\;\; + \left(F_{r_{0}}^{\#}\left(t\right) + r_{0} \dfrac{a} {n_{p}}\left(\ell_{t}\right)^{2}\right)\left\vert y\right\vert + \dfrac{a} {2n_{p}}\left(\ell_{t}\right)^{2}\left\vert y\right\vert ^{2} + \dfrac{n_{p}} {2a}\left\vert z\right\vert ^{2},\;a.s., \end{array} }$$
(6.8)

where

$$\displaystyle{n_{p}\mathop{ =}\limits^{ \mathit{def }}1 \wedge \left(p - 1\right).}$$

Proof.

Let \(0 \leq r_{0} \leq 1\). The monotonicity property of \(F_{\varepsilon }\) implies that for all \(\left\vert u\right\vert \leq 1\):

$$\displaystyle{\left\langle F_{\varepsilon }\left(t,r_{0}u,z\right) - F_{\varepsilon }\left(t,y,z\right),r_{0}u - y\right\rangle \leq 0,}$$

and, consequently, \(\forall \,\left\vert u\right\vert \leq 1\):

$$\displaystyle\begin{array}{rcl} & & r_{0}\left\langle F_{\varepsilon }\left(t,y,z\right),-u\right\rangle + \left\langle F_{\varepsilon }\left(t,y,z\right),y\right\rangle {}\\ & & \leq \left\vert F_{\varepsilon }\left(t,r_{0}u,z\right)\right\vert \left\vert y - r_{0}u\right\vert {}\\ & & \leq \left\vert F_{\varepsilon }\left(t,r_{0}u,0\right)\right\vert \left(\left\vert y\right\vert + r_{0}\right) +\ell _{t}\left\vert z\right\vert \left(\left\vert y\right\vert + r_{0}\right) {}\\ & & \leq \left\vert F\left(t,r_{0}u,0\right)\right\vert \left(\left\vert y\right\vert + r_{0}\right) + \dfrac{a} {2n_{p}}\left(\ell_{t}\right)^{2}\left(\left\vert y\right\vert + r_{ 0}\right)^{2} + \dfrac{n_{p}} {2a}\left\vert z\right\vert ^{2}. {}\\ \end{array}$$

The inequality (6.8) follows by taking the sup of the left-hand side over all vectors u such that \(\left\vert u\right\vert \leq 1\). ■ 

Finally we give some convergence results.

Let \(F: \Omega \times \left[0,T\right] \times \mathbb{R}^{d} \rightarrow \mathbb{R}^{d^{{\prime}} }\) be a function satisfying

$$\displaystyle{ \begin{array}{r@{\quad }l} i)\;\quad &F\left(\cdot,\cdot,x\right)\quad \text{ is }\mathcal{F}\otimes \mathcal{B}_{\left[0,T\right]}\text{ -}{ \mathit{measurable, }}\forall \,x \in \mathbb{R}^{d}, \\ \mathit{ii})\;\quad &F\left(\omega,t,\cdot \right)\quad \text{ is continuous }d\mathbb{P} \otimes \mathit{dt}\text{ -}a.e.\;\left(\omega,t\right) \in \Omega \times \left[0,T\right], \\ \mathit{iii})\;\quad &\exists \ \alpha > 0\text{ such that}\int _{0}^{T}\left(F_{ R}^{\#}\left(t\right)\right)^{\alpha }\mathit{dt} < +\infty,\quad \mathbb{P}\text{ -a.s.},\ \forall \,R > 0. \end{array} }$$
(6.9)

Proposition 6.9.

Assume that F satisfies (6.9). Let

$$\displaystyle{X^{\varepsilon },X \in L^{0}\left(\Omega;C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\right)}$$

be such that

$$\displaystyle{\sup _{t\in \left[0,T\right]}\left\vert X_{t}^{\varepsilon } - X_{ t}\right\vert \mathop{\longrightarrow}\limits_{\varepsilon \rightarrow 0}^{\mathit{prob}.}0\text{.}}$$

Then

$$\displaystyle\begin{array}{rcl} & & \sup _{t\in \left[0,T\right]}\left\vert \int _{0}^{t}F\left(s,X_{ s}^{\varepsilon }\right)\mathit{ds} -\int _{ 0}^{t}F\left(s,X_{ s}\right)\mathit{ds}\right\vert {}\\ & & \leq \int _{0}^{T}\left\vert F\left(s,X_{ s}^{\varepsilon }\right) - F\left(s,X_{ s}\right)\right\vert \mathit{ds}\;\mathop{\longrightarrow}\limits_{\varepsilon \rightarrow 0}^{\mathit{prob}.}\;0. {}\\ \end{array}$$

Moreover if for some p,α > 0:

$$\displaystyle{ C_{p,\alpha }\mathop{ =}\limits^{ \mathit{def }}\sup _{0<\varepsilon \leq \varepsilon _{0}}\mathbb{E}\left(\int _{0}^{T}\left\vert F\left(t,X_{t}^{\varepsilon }\right)\right\vert ^{\alpha }\mathit{dt}\right)^{p} < +\infty }$$
(6.10)

then

$$\displaystyle{ \begin{array}{l@{\quad }l} c_{1})\quad &\quad \mathbb{E}\left(\int _{0}^{T}\left\vert F\left(t,X_{ t}\right)\right\vert ^{\alpha }\mathit{dt}\right)^{p} \leq C_{ p,\alpha }, \\ c_{2})\quad &\quad \mathbb{E}\left(\int _{0}^{T}\left\vert F\left(t,X_{ t}^{\varepsilon }\right) - F\left(t,X_{ t}\right)\right\vert ^{\alpha }\mathit{dt}\right)^{q}\mathop{\longrightarrow}\limits_{\varepsilon \rightarrow 0}^{}0,\quad \forall \,q \in ]0,p[. \end{array} }$$
(6.11)

If, in addition, \(x\longmapsto - F\left(t,x\right)\) is a monotone operator and \(F_{\varepsilon } = F_{\varepsilon }(t,x)\), \(\varepsilon > 0\) , is the Yosida approximation of F ( \(F_{\varepsilon }\) is the unique solution of \(F(\omega,t,x +\varepsilon F_{\varepsilon }) = F_{\varepsilon }\) ) then \(\forall \ q \in ]0,p[\) :

$$\displaystyle{ \mathbb{E}\left(\int _{0}^{T}\left\vert F_{\varepsilon }\left(t,X_{ t}^{\varepsilon }\right) - F\left(t,X_{ t}\right)\right\vert ^{\alpha }\mathit{dt}\right)^{q}\mathop{\longrightarrow}\limits_{\varepsilon \rightarrow 0}^{}0. }$$
(6.12)

Proof.

Let \(\varepsilon _{n} \rightarrow 0\) such that

$$\displaystyle{\lim _{\varepsilon _{n}\rightarrow 0}\sup _{t\in \left[0,T\right]}\left\vert X_{t}^{\varepsilon _{n} } - X_{t}\right\vert = 0,\quad \mathbb{P}\text{ -a.s.}}$$

Then by the Lebesgue dominated convergence theorem

$$\displaystyle{\lim _{\varepsilon _{n}\rightarrow 0}\int _{0}^{T}\left\vert F\left(s,X_{ s}^{\varepsilon _{n} }\right) - F\left(s,X_{s}\right)\right\vert ^{\alpha }\mathit{ds} = 0,\quad \mathbb{P}\text{ -a.s.}}$$

Since the convergence in probability is given by a metric, by reductio ad absurdum we infer that

$$\displaystyle{\int _{0}^{T}\left\vert F\left(s,X_{ s}^{\varepsilon }\right) - F\left(s,X_{ s}\right)\right\vert ^{\alpha }\mathit{ds}\mathop{\longrightarrow}\limits_{\varepsilon \rightarrow 0}^{\mathit{prob}.}0.}$$

Also, if \(C_{p,\alpha } < \infty \), then Fatou’s lemma clearly yields (6.11-\(c_{1}\)).

  1. I.

    Denote by C positive constants independent of \(\varepsilon _{n}\). Let

    $$\displaystyle{\left(\Delta _{n} =\right)\Delta _{\varepsilon _{n}}\mathop{ =}\limits^{ { \mathit{def }}}\int _{0}^{T}\left\vert F\left(s,X_{ s}^{\varepsilon _{n} }\right) - F\left(s,X_{s}\right)\right\vert ^{\alpha }\mathit{ds}.}$$

    Then by the Lebesgue dominated convergence theorem \(\Delta _{n} \rightarrow 0,\quad \mathbb{P}\text{ -a.s.}\), and

    $$\displaystyle{\mathbb{E}\Delta _{n}^{p} \leq C.}$$

    Since

    $$\displaystyle\begin{array}{rcl} \mathbb{E}\Delta _{n}^{q}& \leq & \mathbb{E}\left(\Delta _{ n}^{q}\mathbf{1}_{ \Delta _{n}\leq R}\right) + \mathbb{E}\left(\Delta _{n}^{q}\frac{\Delta _{n}^{p-q}} {R^{p-q}} \mathbf{1}_{\Delta _{n}>R}\right) {}\\ & \leq & \mathbb{E}\left(\Delta _{n}^{q}\mathbf{1}_{ \Delta _{n}\leq R}\right) + \frac{C} {R^{p-q}}, {}\\ \end{array}$$

    it follows that

    $$\displaystyle{0\, \leq \limsup _{\varepsilon _{n}\rightarrow 0}\mathbb{E}\Delta _{n}^{q} \leq \frac{C} {R^{p-q}}\quad \forall R > 0,}$$

    that is \(\lim _{\varepsilon _{n}\rightarrow 0}\mathbb{E}\Delta _{n}^{q} = 0\) and by reductio ad absurdum the full sequence \(\Delta _{\varepsilon }\) has the property (6.11-\(c_{2}\)).

  2. II.

    Since \(\left\vert F_{\varepsilon }\left(t,X_{t}^{\varepsilon }\right)\right\vert \leq \left\vert F(t,X_{t}^{\varepsilon })\right\vert\), on a subsequence

    $$\displaystyle\begin{array}{rcl} \lim _{\varepsilon _{n}\rightarrow 0}F_{\varepsilon _{n}}\left(t,X_{t}^{\varepsilon _{n} }\right)& =& \lim _{\varepsilon _{n}\rightarrow 0}F\left(t,X_{t}^{\varepsilon _{n} } +\varepsilon _{ n}F_{\varepsilon _{n}}\left(t,X_{t}^{\varepsilon _{n} }\right)\right) {}\\ & =& F(t,X_{t}),\;\; \mathbb{P}\text{ -a.s.} {}\\ \end{array}$$

    and then the convergence result, (6.12), follows in exactly the same manner with \(\Delta _{\varepsilon _{n}}:=\int _{ 0}^{T}\left\vert F_{\varepsilon _{n}}\left(s,X_{s}^{\varepsilon _{n}}\right) - F\left(s,X_{s}\right)\right\vert ^{\alpha }\mathit{ds}\).

 ■ 
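The defining relation \(F(\omega,t,x +\varepsilon F_{\varepsilon }) = F_{\varepsilon }\) can be illustrated numerically. The following Python sketch uses the hypothetical scalar choice \(F(y) = -y^{3}\) (so that \(y\longmapsto -F(y)\) is monotone, as assumed above) and solves the fixed-point equation by bisection; it then checks the two properties used in the proof, namely \(\left\vert F_{\varepsilon }(y)\right\vert \leq \left\vert F(y)\right\vert \) and \(F_{\varepsilon }(y) \rightarrow F(y)\) as \(\varepsilon \rightarrow 0\).

```python
# Yosida-type approximation of y -> F(y) with -F monotone (illustrative sketch;
# the choice F(y) = -y**3 is hypothetical, not from the book).
# F_eps(y) is defined as the unique solution u of  F(y + eps*u) = u.

def F(y):
    return -y ** 3            # decreasing, so y -> -F(y) is monotone (increasing)

def F_eps(y, eps, lo=-1e6, hi=1e6, tol=1e-12):
    """Solve u = F(y + eps*u) by bisection on g(u) = u - F(y + eps*u),
    which is strictly increasing in u, hence has a unique root."""
    def g(u):
        return u - F(y + eps * u)
    for _ in range(200):      # bisection: g(lo) < 0 < g(hi)
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

y = 1.5
for eps in (1.0, 0.1, 0.01, 0.001):
    u = F_eps(y, eps)
    # the approximation is dominated by F ...
    assert abs(u) <= abs(F(y)) + 1e-9
# ... and converges to F as eps -> 0
assert abs(F_eps(y, 1e-4) - F(y)) < 1e-2
```

Bisection applies here because \(u\longmapsto u - F(y +\varepsilon u)\) is strictly increasing whenever \(-F\) is monotone.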

6.3.4 Compactness Results

Let \(I \subset \mathbb{R}\) be an interval. Denote by \(C\left(I; \mathbb{R}^{d}\right)\) the space of continuous functions \(g: I \rightarrow \mathbb{R}^{d}\). If \(I = \left[a,b\right]\) then \(C\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) is a separable Banach space with respect to the norm \(\left\Vert \cdot \right\Vert _{\left[a,b\right]}\), where if \(g: \left[a,b\right] \rightarrow \mathbb{R}^{d}\) we define

$$\displaystyle{\left\Vert g\right\Vert _{\left[a,b\right]} =\sup \left\{\left\vert g\left(s\right)\right\vert: a \leq s \leq b\right\}.}$$

If \(\left[a,b\right] = \left[0,t\right]\) then

$$\displaystyle{\left\Vert g\right\Vert _{t}\mathop{ =}\limits^{ \mathit{def }}\left\Vert g\right\Vert _{\left[0,t\right]} =\sup \left\{\left\vert g\left(s\right)\right\vert: 0 \leq s \leq t\right\}.}$$

For \(g \in C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\) we define for \(t \in \left[0,T\right]\) and \(\varepsilon \geq 0\):

$$\displaystyle{\mathbf{m}_{g}\left(t,\varepsilon \right) =\sup \left\{\left\Vert g\left(t\right) - g\left(s\right)\right\Vert _{\mathbb{R}^{d}}: \left\vert t - s\right\vert \leq \varepsilon,\;s \in \left[0,T\right]\right\}}$$

the modulus of continuity at t, and

$$\displaystyle{\mathbf{m}_{g}\left(\varepsilon \right) = \mathbf{m}\left(\varepsilon;g\right) =\sup \left\{\left\vert g\left(t\right) - g\left(s\right)\right\vert: \left\vert t - s\right\vert \leq \varepsilon,\;t,s \in \left[0,T\right]\right\}}$$

the modulus of uniform continuity.

We also introduce the notation

$$\displaystyle{\boldsymbol{\mu }_{g}\left(\varepsilon \right) =\boldsymbol{\mu } \left(\varepsilon;g\right) =\varepsilon +\mathbf{m}_{g}\left(\varepsilon \right).}$$

Note that

$$\displaystyle{ \begin{array}{l@{\quad }l} m_{1})\quad &\quad 0 = \mathbf{m}_{g}\left(0\right) \leq \mathbf{m}_{g}\left(\varepsilon \right) \leq \mathbf{m}_{g}\left(\delta \right) \leq 2\left\Vert g\right\Vert _{T},\ \quad \forall 0 <\varepsilon <\delta, \\ m_{2})\quad &\quad 0 =\boldsymbol{\mu } _{g}\left(0\right) <\boldsymbol{\mu } _{g}\left(\varepsilon \right) <\boldsymbol{\mu } _{g}\left(\delta \right),\ \quad \forall 0 <\varepsilon <\delta, \\ m_{3})\quad &\quad \mathbf{m}_{g}\left(\varepsilon +\delta \right) \leq \mathbf{m}_{g}\left(\varepsilon \right) + \mathbf{m}_{g}\left(\delta \right),\quad \forall \,\varepsilon,\delta \geq 0, \\ m_{4})\quad &\quad \lim \limits _{\varepsilon \searrow 0}\mathbf{m}_{g}\left(\varepsilon \right) =\lim \limits _{\varepsilon \searrow 0}\boldsymbol{\mu }_{g}\left(\varepsilon \right) = 0 \end{array} }$$
(6.13)

and

$$\displaystyle{\left\vert \mathbf{m}_{g}\left(\varepsilon \right) -\mathbf{m}_{h}\left(\delta \right)\right\vert \leq 2\left\Vert g - h\right\Vert _{T} + \mathbf{m}_{g}\left(\left\vert \varepsilon -\delta \right\vert \right).}$$
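On a uniform grid the quantities \(\mathbf{m}_{g}\) can be approximated directly; the following Python sketch (the sample path \(g(t) =\sin 10t\) is a hypothetical choice) checks the monotonicity \(m_{1}\)) and the subadditivity \(m_{3}\)) of (6.13) numerically.

```python
# Discrete sketch of the modulus of uniform continuity m_g(eps),
# evaluated on a uniform grid over [0, T] (a grid approximation of the sup).
import math

T, N = 1.0, 1000
ts = [i * T / N for i in range(N + 1)]
g = [math.sin(10 * t) for t in ts]          # hypothetical sample path

def m(eps):
    """sup |g(t) - g(s)| over grid points with |t - s| <= eps."""
    k = round(eps * N / T)                  # index gap corresponding to eps
    return max((abs(g[i] - g[j])
                for i in range(N + 1)
                for j in range(max(0, i - k), i + 1)), default=0.0)

# m_1): monotone in eps and bounded by 2*sup|g|
assert m(0.01) <= m(0.05) <= m(0.2) <= 2 * max(abs(v) for v in g)
# m_3): subadditivity m(eps + delta) <= m(eps) + m(delta)
assert m(0.03 + 0.02) <= m(0.03) + m(0.02) + 1e-12
```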

If \(\mathcal{M}\subset C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\) and \(\varepsilon > 0\), then

$$\displaystyle{\begin{array}{lll} \mathbf{m}_{\mathcal{M}}\left(t,\varepsilon \right)&\mathop{ =}\limits^{ \mathit{def }}&\sup \left\{\mathbf{m}_{g}\left(t,\varepsilon \right): g \in \mathcal{M}\right\}, \\ \left\Vert \mathcal{M}\right\Vert _{T} &\mathop{ =}\limits^{ \mathit{def }}&\sup \left\{\left\Vert g\right\Vert _{T}: g \in \mathcal{M}\right\}, \\ \mathcal{M}\left(t\right) &\mathop{ =}\limits^{ \mathit{def }}&\left\{g\left(t\right): g \in \mathcal{M}\right\}, \end{array} }$$

and

$$\displaystyle\begin{array}{rcl} & & \mathbf{m}_{\mathcal{M}}\left(\varepsilon \right)\mathop{ =}\limits^{ \mathit{def }}\sup \left\{\mathbf{m}_{g}\left(\varepsilon \right): g \in \mathcal{M}\right\}, {}\\ & & \boldsymbol{\mu }_{\mathcal{M}}\left(\varepsilon \right)\mathop{ =}\limits^{ \mathit{def }}\varepsilon + \mathbf{m}_{\mathcal{M}}\left(\varepsilon \right). {}\\ \end{array}$$

Theorem 6.10 (Arzelà–Ascoli).

Let \(\mathcal{M}\subset C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\). Then the following three conditions are equivalent:

  1. (A)

    \(\mathcal{M}\) is relatively compact in \(C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\) ;

  2. (B)
    \(\left(B_{1}\right)\) :

    (equicontinuity) : \(\lim \limits _{\varepsilon \rightarrow 0}\mathbf{m}_{\mathcal{M}}\left(t,\varepsilon \right) = 0\), \(\forall \ t \in \left[0,T\right]\) ;

    \(\left(B_{2}\right)\) :

    (bounded images) for each \(t \in \left[0,T\right]\) the set \(\mathcal{M}\left(t\right) = \left\{g\left(t\right): g \in \mathcal{M}\right\}\) is bounded in \(\mathbb{R}^{d}\) ;

  3. (C)
    \(\left(C_{1}\right)\) :

    (uniform equicontinuity) : \(\lim \limits _{\varepsilon \rightarrow 0}\mathbf{m}_{\mathcal{M}}\left(\varepsilon \right) = 0\) ;

    \(\left(C_{2}\right)\) :

    the set \(\left\{g\left(t\right): t \in \left[0,T\right],\;g \in \mathcal{M}\right\}\) is bounded in \(\mathbb{R}^{d}\) .

Theorem 6.11 (Kolmogorov–Riesz–Weil).

Let \(p \in \left[1,\infty \right[\). A set \(\mathcal{S}\subset L^{p}\left(0,T; \mathbb{R}^{d}\right)\) is relatively compact in \(L^{p}\left(0,T; \mathbb{R}^{d}\right)\) if and only if:

  1. (j)

    (p-equi-integrability)

    $$\displaystyle{\lim _{\varepsilon \searrow 0}\left[\sup _{g\in \mathcal{S}}\int _{\varepsilon }^{T-\varepsilon }\left\Vert g\left(t+\varepsilon \right) - g\left(t\right)\right\Vert _{ \mathbb{R}^{d}}^{p}\mathit{dt}\right] = 0,}$$
  2. (jj)

    (boundedness):

    $$\displaystyle{\sup _{g\in \mathcal{S}}\int _{0}^{T}\left\vert g\left(t\right)\right\vert ^{p}\mathit{dt} < \infty.}$$

(For the proofs of the last two theorems see, for example, the book of Vrabie [70].)
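The \(p\)-equi-integrability condition (j) can be tested numerically on a concrete family. The Python sketch below (with the hypothetical equi-Lipschitz family \(g_{a}(t) =\cos \mathit{at}\), \(a \in \left\{1,3,7\right\}\), and \(p = 2\)) approximates \(\int _{\varepsilon }^{T-\varepsilon }\left\vert g(t+\varepsilon ) - g(t)\right\vert ^{p}\mathit{dt}\) by a Riemann sum and observes that its supremum over the family decreases to \(0\).

```python
# Numerical sketch of the p-equi-integrability quantity in Theorem 6.11 (j)
# for a small hypothetical family S = {cos(a*t) : a in {1, 3, 7}} on [0, T].
import math

T, N, p = 1.0, 2000, 2
family = [lambda t, a=a: math.cos(a * t) for a in (1.0, 3.0, 7.0)]

def shift_modulus(g, eps):
    """Riemann-sum approximation of int_eps^{T-eps} |g(t+eps) - g(t)|^p dt."""
    h = T / N
    ts = [eps + i * h for i in range(N) if eps + i * h <= T - eps]
    return sum(abs(g(t + eps) - g(t)) ** p * h for t in ts)

def sup_shift(eps):
    return max(shift_modulus(g, eps) for g in family)

# the modulus decreases to 0 along eps -> 0 for this equi-Lipschitz family
assert sup_shift(0.001) < sup_shift(0.01) < sup_shift(0.1)
assert sup_shift(0.001) < 1e-3
```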

Clearly we have:

Corollary 6.12.

Let M > 0 and \(\gamma _{n} \searrow 0\), \(\varepsilon _{n} \searrow 0\) be two sequences.

  1. a)

    Then the set

    $$\displaystyle\begin{array}{rcl} \mathcal{K}_{1}& =& \left\{z \in L^{2}(0,T; \mathbb{R}^{d}): \int _{ 0}^{T}\left\vert z\left(t\right)\right\vert ^{2}\mathit{dt} \leq M,\right. {}\\ & & \left.\sup _{0\leq \theta \leq \varepsilon _{n}}\int _{0}^{T-\varepsilon _{n} }\left\vert z\left(t+\theta \right) - z\left(t\right)\right\vert ^{2}\mathit{dt} \leq \gamma _{ n},\;\forall n \in \mathbb{N}^{{\ast}}\right\} {}\\ \end{array}$$

    is a compact subset of \(L^{2}(0,T; \mathbb{R}^{d})\) .

  2. b)

    If \(N_{n} = \left[\frac{T} {\varepsilon _{n}} \right]\) and \(t_{i} = \frac{(i-1)T} {N_{n}}\) , for \(1 \leq i \leq N_{n}\), \(n \geq 1\), then the set

    $$\displaystyle\begin{array}{rcl} \mathcal{K}_{2}& =& \left\{z \in C([0,T]; \mathbb{R}^{d}): \left\vert z\left(0\right)\right\vert \leq M,\right. {}\\ & & \left.\sup \limits _{1\leq i\leq N_{n}}\sup \limits _{0<\theta \leq \varepsilon _{n}}\left\vert z\left(t_{i}+\theta \right) - z\left(t_{i}\right)\right\vert \leq \gamma _{n},\forall n \in \mathbb{N}^{{\ast}}\right\} {}\\ \end{array}$$

    is a compact subset of \(C([0,T]; \mathbb{R}^{d})\) (here \(z_{t}\) is extended outside of \([0,T]\) by continuity: \(z_{s} = z_{T}\) for \(s \geq T\) and \(z_{s} = z_{0}\) for \(s \leq 0\)).

6.3.5 Bounded Variation Functions

Let \(\left[a,b\right]\) be a closed interval of \(\mathbb{R}\) and \(\mathcal{D}_{\left[a,b\right]}\) be the set of all partitions

$$\displaystyle{\Delta:\; a = t_{0} < t_{1} < \cdots < t_{n} = b,\quad n = n_{\Delta } \in \mathbb{N}^{{\ast}}.}$$

Define \(\left\Vert \Delta \right\Vert =\sup \left\{t_{i+1} - t_{i}: 0 \leq i \leq n - 1\right\}\).

For \(k: \left[a,b\right] \rightarrow \mathbb{R}^{d}\), let

$$\displaystyle{V _{\Delta }\left(k\right)\mathop{ =}\limits^{ \mathit{def.}}\sum _{i=0}^{n-1}\left\vert k\left(t_{ i+1}\right) - k\left(t_{i}\right)\right\vert }$$

be the variation of k corresponding to the partition \(\Delta \in \mathcal{D}_{\left[a,b\right]}\). We define the total variation of k on \(\left[a,b\right]\) by

$$\displaystyle\begin{array}{rcl} \left\updownarrow k\right\updownarrow _{\left[a,b\right]}& =& \sup _{\Delta \in \mathcal{D}_{\left[a,b\right]}}V _{\Delta }\left(k\right) {}\\ & =& \sup \left\{\sum _{i=0}^{n_{\Delta }-1}\left\vert k\left(t_{ i+1}\right) - k\left(t_{i}\right)\right\vert: \Delta \in \mathcal{D}_{\left[a,b\right]}\right\} {}\\ \end{array}$$

and if \(\left[a,b\right] = \left[0,T\right]\) then

$$\displaystyle{\left\updownarrow k\right\updownarrow _{T}:= \left\updownarrow k\right\updownarrow _{\left[0,T\right]}\,.}$$

Proposition 6.13.

If \(k \in C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\) and \(\overline{\Delta }_{N} \in \mathcal{D}_{\left[0,T\right]}\)

$$\displaystyle{\overline{\Delta }_{N}: \left\{0 = \frac{0} {2^{N}}T < \frac{1} {2^{N}}T < \cdots < \frac{2^{N} - 1} {2^{N}} T < \frac{2^{N}} {2^{N}}T = T\right\},}$$

then

$$\displaystyle{V _{\overline{\Delta }_{ N}}\left(k\right) \nearrow \left\updownarrow k\right\updownarrow _{T}\quad \text{ as }N \nearrow \infty.}$$

Proof.

Clearly \(V _{\overline{\Delta }_{ N}}\left(k\right)\) is increasing with respect to N and \(V _{\overline{\Delta }_{ N}}\left(k\right) \leq \left\updownarrow k\right\updownarrow _{T}\).

Let \(\Delta \in \mathcal{D}_{\left[0,T\right]}\) be arbitrary

$$\displaystyle{\Delta:\; 0 = t_{0} < t_{1} < \cdots < t_{n_{\Delta }} = T,}$$

and let \(j_{i} = \left[\frac{t_{i}} {T}2^{N}\right]\) denote the integer part of \(\frac{t_{i}} {T}2^{N}\). Then

$$\displaystyle\begin{array}{rcl} & & V _{\Delta }\left(k\right) =\sum \limits _{ i=0}^{n_{\Delta }-1}\left\vert k\left(t_{ i+1}\right) - k\left(t_{i}\right)\right\vert {}\\ & & \qquad \leq \sum \limits _{i=0}^{n_{\Delta }-1}\left[\left\vert k\left(t_{ i+1}\right) - k\left(\frac{j_{i+1}T} {2^{N}} \right)\right\vert + \left\vert k\left(\frac{j_{i+1}T} {2^{N}} \right) - k\left(\frac{j_{i}T} {2^{N}} \right)\right\vert \right. {}\\ & & \left.\qquad \quad + \left\vert k\left(\frac{j_{i}T} {2^{N}} \right) - k\left(t_{i}\right)\right\vert \right] {}\\ & & \qquad \leq 2n_{\Delta }\mathbf{m}_{k}\left( \frac{T} {2^{N}}\right) + V _{\overline{\Delta }_{ N}}\left(k\right) {}\\ \end{array}$$

and passing to the limit for \(N \nearrow \infty \) we obtain

$$\displaystyle{V _{\Delta }\left(k\right) \leq \lim _{N\nearrow \infty }V _{\overline{\Delta }_{ N}}\left(k\right) \leq \left\updownarrow k\right\updownarrow _{T},\quad \forall \Delta \in \mathcal{D}_{\left[0,T\right]}.}$$

Hence \(\lim \limits _{N\nearrow \infty }V _{\overline{\Delta }_{ N}}\left(k\right) = \left\updownarrow k\right\updownarrow _{T}\). ■ 
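Proposition 6.13 can be checked numerically: for a smooth path the dyadic sums \(V _{\overline{\Delta }_{N}}\left(k\right)\) increase to the total variation. The Python sketch below uses the hypothetical choice \(k(t) =\sin 3\pi t\) on \(\left[0,1\right]\), whose total variation is \(6\) (the path runs \(0 \rightarrow 1 \rightarrow -1 \rightarrow 1 \rightarrow 0\)).

```python
# Dyadic approximation of total variation (sketch of Proposition 6.13):
# V over the partition {i*T/2^N} is nondecreasing in N and tends to TV(k).
import math

T = 1.0
k = lambda t: math.sin(3 * math.pi * t)     # hypothetical path; TV on [0,1] is 6

def dyadic_variation(N):
    n = 2 ** N
    pts = [k(i * T / n) for i in range(n + 1)]
    return sum(abs(pts[i + 1] - pts[i]) for i in range(n))

vals = [dyadic_variation(N) for N in range(2, 14)]
# refining a partition can only increase the variation sum ...
assert all(v1 <= v2 + 1e-12 for v1, v2 in zip(vals, vals[1:]))
# ... and the dyadic sums converge to the total variation
assert abs(vals[-1] - 6.0) < 1e-3
```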

Definition 6.14.

A function \(k: \left[a,b\right] \rightarrow \mathbb{R}^{d}\) has bounded variation on \(\left[a,b\right]\) if \(\left\updownarrow k\right\updownarrow _{\left[a,b\right]} < \infty \). The space of bounded variation functions on \(\left[a,b\right]\) will be denoted by \(\mathit{BV }\left(\left[a,b\right]; \mathbb{R}^{d}\right)\).

If \(x \in C\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) and \(k \in \mathit{BV }\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) then the Riemann–Stieltjes integral is defined by

$$\displaystyle{\int _{a}^{b}\left\langle x\left(t\right),\mathit{dk}\left(t\right)\right\rangle =\lim _{\left\Vert \Delta \right\Vert \rightarrow 0}\sum _{i=0}^{n_{\Delta }-1}\left\langle x\left(\tau _{ i}\right),k\left(t_{i+1}\right) - k\left(t_{i}\right)\right\rangle,}$$

where the limit exists and is independent of the choice of \(\tau _{i} \in \left[t_{i},t_{i+1}\right]\).

The Riemann–Stieltjes integral satisfies

$$\displaystyle{\left\vert \int _{a}^{b}\left\langle x\left(t\right),\mathit{dk}\left(t\right)\right\rangle \right\vert \leq \left\Vert x\right\Vert _{\left[a,b\right]}\ \left\updownarrow k\right\updownarrow _{\left[a,b\right]}\,.}$$
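A Riemann–Stieltjes sum with left endpoints \(\tau _{i} = t_{i}\) already illustrates both the definition and the bound above. In the Python sketch below the hypothetical scalar choices \(x(t) =\cos t\) and \(k(t) = t^{2}\) on \(\left[0,1\right]\) give \(\left\updownarrow k\right\updownarrow _{\left[0,1\right]} = 1\), since \(k\) is increasing.

```python
# Riemann-Stieltjes sum S = sum x(tau_i) * (k(t_{i+1}) - k(t_i)) for scalar x, k,
# together with the bound |int x dk| <= ||x||_[a,b] * TV(k)  (sketch on [0,1]).
import math

a, b, n = 0.0, 1.0, 100000
x = lambda t: math.cos(t)
k = lambda t: t * t            # smooth, so dk = 2t dt and TV(k) = k(1) - k(0) = 1

h = (b - a) / n
S = sum(x(a + i * h) * (k(a + (i + 1) * h) - k(a + i * h)) for i in range(n))

# reference value: midpoint quadrature of int x(t) * 2t dt
exact = sum(x(a + (i + 0.5) * h) * 2 * (a + (i + 0.5) * h) * h for i in range(n))
sup_x = 1.0                    # ||cos||_[0,1] = 1, attained at t = 0
tv_k = k(b) - k(a)             # k is increasing on [0,1]

assert abs(S - exact) < 1e-3   # the RS sums converge as the mesh shrinks
assert abs(S) <= sup_x * tv_k  # the bound |int x dk| <= ||x|| * TV(k)
```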

Proposition 6.15.

Equipped with the norm

$$\displaystyle{\left\Vert k\right\Vert _{\mathit{BV}\left(\left[a,b\right];\mathbb{R}^{d}\right)}:= \left\vert k\left(a\right)\right\vert + \left\updownarrow k\right\updownarrow _{\left[a,b\right]},}$$

the space \(\mathit{BV }\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) is a Banach space. An element k of \(\mathit{BV }\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) can be identified with the following linear continuous mapping on \(C\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) :

$$\displaystyle{x\longmapsto \left\langle x\left(a\right),k\left(a\right)\right\rangle +\int _{ a}^{b}\left\langle x\left(t\right),\mathit{dk}\left(t\right)\right\rangle.}$$

With this identification, \(\mathit{BV }\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) is the dual of the space \(C\left(\left[a,b\right]; \mathbb{R}^{d}\right)\) .

Proposition 6.16 (Helly–Bray).

Let \(n \in \mathbb{N}^{{\ast}}\), \(x_{n},x \in C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\), \(k_{n} \in \mathit{BV }\left(\left[0,T\right]; \mathbb{R}^{d}\right)\), \(k: \left[0,T\right] \rightarrow \mathbb{R}^{d}\) , such that

$$\displaystyle{\begin{array}{r@{\quad }l} \left(i\right)\quad &\quad x_{n} \rightarrow x\quad \text{ in }C\left(\left[0,T\right]; \mathbb{R}^{d}\right), \\ \left(\mathit{ii}\right)\quad &\quad k_{n}\left(t\right) \rightarrow k\left(t\right),\;\forall \ t \in \left[0,T\right]\quad \text{ and} \\ \left(\mathit{iii}\right)\quad &\quad \sup \limits _{n\in \mathbb{N}^{{\ast}}}\left\updownarrow k_{n}\right\updownarrow _{T} = M < +\infty. \end{array} }$$

Then \(k \in \mathit{BV }\left(\left[0,T\right]; \mathbb{R}^{d}\right)\), \(\left\updownarrow k\right\updownarrow _{T} \leq M\) , and \(\forall \,0 \leq s \leq t \leq T\) :

$$\displaystyle{\begin{array}{r@{\quad }l} \left(j\right)\quad &\quad \int _{s}^{t}\left\langle x_{ n}\left(r\right),\mathit{dk}_{n}\left(r\right)\right\rangle \rightarrow \int _{s}^{t}\left\langle x\left(r\right),\mathit{dk}\left(r\right)\right\rangle,\;\text{ as }n \rightarrow \infty, \\ \left(\,\mathit{jj}\right)\quad &\quad \int _{s}^{t}\left\vert x\left(r\right)\right\vert d\left\updownarrow k\right\updownarrow _{ r} \leq \liminf \limits _{n\rightarrow +\infty }\int _{s}^{t}\left\vert x_{ n}\left(r\right)\right\vert d\left\updownarrow k_{n}\right\updownarrow _{r}. \end{array} }$$

In particular \(k_{n}\mathop{ \rightarrow }\limits^{ w^{{\ast}}}k\) in \(\mathit{BV }\left(\left[0,T\right]; \mathbb{R}^{d}\right)\) , that is for all \(y \in C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\) :

$$\displaystyle{\int _{0}^{T}\left\langle y\left(t\right),\mathit{dk}_{ n}\left(t\right)\right\rangle \rightarrow \int _{0}^{T}\left\langle y\left(t\right),\mathit{dk}\left(t\right)\right\rangle.}$$

Proof.

First let \(\Delta _{N} \in \mathcal{D}_{\left[0,T\right]}\) be a sequence such that

$$\displaystyle{V _{\Delta _{N}}\left(k\right) \nearrow \left\updownarrow k\right\updownarrow _{T}\quad \text{ as }N \nearrow \infty.}$$

From the definition of \(\left\updownarrow \cdot \right\updownarrow _{T}\) we have

$$\displaystyle{V _{\Delta _{N}}\left(k_{n}\right) \leq \left\updownarrow k_{n}\right\updownarrow _{T} \leq M.}$$

Since \(k_{n}\left(t\right) \rightarrow k\left(t\right)\) for all \(t \in \left[0,T\right]\), it follows that \(V _{\Delta _{N}}\left(k_{n}\right) \rightarrow V _{\Delta _{N}}\left(k\right)\). Hence

$$\displaystyle{V _{\Delta _{N}}\left(k\right) \leq M\quad \text{ for all }N \in \mathbb{N}^{{\ast}}}$$

and passing to the limit as \(N \nearrow \infty \) we obtain

$$\displaystyle{\left\updownarrow k\right\updownarrow _{T} \leq M.}$$

Let \(\varepsilon > 0\) and consider a partition

$$\displaystyle{\Delta:\; s = t_{0} < t_{1} < \cdots < t_{N} = t,\quad N = N_{\Delta } \in \mathbb{N}^{{\ast}},}$$

with \(t_{i} \in \left[0,T\right]\), \(\left\Vert \Delta \right\Vert =\sup \left\{t_{i+1} - t_{i}: 0 \leq i \leq N - 1\right\} \leq \varepsilon\). For \(x_{i} = x\left(t_{i}\right)\), \(k_{i} = k\left(t_{i}\right)\), define

$$\displaystyle{S_{\Delta }\left(x,k\right) =\sum _{ i=0}^{N-1}\left\langle x_{ i},k_{i+1} - k_{i}\right\rangle }$$

and \(\mathbf{m}_{x}: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\)

$$\displaystyle{\mathbf{m}_{x}\left(\varepsilon \right) =\sup \left\{\left\vert x\left(r\right) - x\left(s\right)\right\vert: \left\vert r - s\right\vert \leq \varepsilon,\;r,s \in \left[0,T\right]\right\}}$$

the modulus of continuity of x on \(\left[0,T\right]\).

We have

$$\displaystyle{ \left\vert \int _{s}^{t}\left\langle x\left(r\right),\mathit{dk}\left(r\right)\right\rangle - S_{ \Delta }\left(x,k\right)\right\vert \leq \mathbf{m}_{x}\left(\varepsilon \right)\left\updownarrow k\right\updownarrow _{T}\,. }$$
(6.14)

Indeed

$$\displaystyle\begin{array}{rcl} \left\vert \int _{s}^{t}\left\langle x\left(r\right),\mathit{dk}\left(r\right)\right\rangle - S_{ \Delta }\left(x,k\right)\right\vert & =& \left\vert \sum _{i=0}^{N-1}\int _{ t_{i}}^{t_{i+1} }\left\langle x\left(r\right),\mathit{dk}\left(r\right)\right\rangle -\int _{t_{i}}^{t_{i+1} }\left\langle x_{i},\mathit{dk}\left(r\right)\right\rangle \right\vert {}\\ & \leq & \sum _{i=0}^{N-1}\left\vert \int _{ t_{i}}^{t_{i+1} }\left\langle x\left(r\right) - x_{i},\mathit{dk}\left(r\right)\right\rangle \right\vert {}\\ & \leq & \mathbf{m}_{x}\left(\left\Vert \Delta \right\Vert \right)\sum _{i=0}^{N-1}\left\updownarrow k\right\updownarrow _{\left[t_{i},t_{i+1}\right]} {}\\ & \leq & \mathbf{m}_{x}\left(\varepsilon \right)\left\updownarrow k\right\updownarrow _{T}\,. {}\\ \end{array}$$

Then

$$\displaystyle{\begin{array}{l} \left\vert \int _{s}^{t}\left\langle x\left(r\right),d\left(k_{ n}\left(r\right) - k\left(r\right)\right)\right\rangle - S_{\Delta }\left(x,k_{n} - k\right)\right\vert \leq \mathbf{m}_{x}\left(\left\Vert \Delta \right\Vert \right)\left\updownarrow k_{n} - k\right\updownarrow _{T} \\ \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \leq \mathbf{m}_{x}\left(\varepsilon \right)\left[\left\updownarrow k_{n}\right\updownarrow _{T} + \left\updownarrow k\right\updownarrow _{T}\right]. \end{array} }$$

Now we obtain the estimate

$$\displaystyle\begin{array}{rcl} & & \left\vert \int _{s}^{t}\left\langle x_{ n}\left(r\right),\mathit{dk}_{n}\left(r\right)\right\rangle -\int _{s}^{t}\left\langle x\left(r\right),\mathit{dk}\left(r\right)\right\rangle \right\vert {}\\ & & \qquad = \left\vert \int _{s}^{t}\left\langle x_{ n}\left(r\right) - x\left(r\right),\mathit{dk}_{n}\left(r\right)\right\rangle + \int _{s}^{t}\left\langle x\left(r\right),\mathit{dk}_{ n}\left(r\right) -\mathit{dk}\left(r\right)\right\rangle \right\vert {}\\ & & \qquad \leq \left\Vert x_{n} - x\right\Vert _{T}\ \left\updownarrow k_{n}\right\updownarrow _{T} + \left\vert \int _{s}^{t}\left\langle x\left(r\right),\mathit{dk}_{ n}\left(r\right) -\mathit{dk}\left(r\right)\right\rangle \right\vert {}\\ & & \qquad \leq \left\Vert x_{n} - x\right\Vert _{T}\ \left\updownarrow k_{n}\right\updownarrow _{T} + \mathbf{m}_{x}\left(\varepsilon \right)\left[\left\updownarrow k_{n}\right\updownarrow _{T} + \left\updownarrow k\right\updownarrow _{T}\right] + \left\vert S_{\Delta }\left(x,k_{n} - k\right)\right\vert. {}\\ \end{array}$$

Since \(k_{n}\left(t\right) \rightarrow k\left(t\right)\) for all \(t \in \left[0,T\right]\), it follows that \(\lim _{n\rightarrow \infty }\left\vert S_{\Delta }\left(x,k_{n} - k\right)\right\vert = 0\) and

$$\displaystyle{\mathop{\lim \sup }\limits_{n \rightarrow \infty }\left\vert \int _{s}^{t}\left\langle x_{ n}\left(r\right),\mathit{dk}_{n}\left(r\right)\right\rangle -\int _{s}^{t}\left\langle x\left(r\right),\mathit{dk}\left(r\right)\right\rangle \right\vert \leq 2M\,\mathbf{m}_{ x}\left(\varepsilon \right),\quad \forall \varepsilon > 0.}$$

Hence the limit \(\lim \limits _{n\rightarrow \infty }\int _{s}^{t}\left\langle x_{ n}\left(r\right),\mathit{dk}_{n}\left(r\right)\right\rangle\) exists and

$$\displaystyle{\lim _{n\rightarrow \infty }\int _{s}^{t}\left\langle x_{ n}\left(r\right),\mathit{dk}_{n}\left(r\right)\right\rangle = \int _{s}^{t}\left\langle x\left(r\right),\mathit{dk}\left(r\right)\right\rangle.}$$

Now, let \(\alpha \in C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\), \(\left\Vert \alpha \right\Vert _{T} \leq 1\). Then

$$\displaystyle\begin{array}{rcl} \int _{s}^{t}\left\vert x\left(r\right)\right\vert \left\langle \alpha \left(r\right),\mathit{dk}\left(r\right)\right\rangle & =& \lim _{ n\rightarrow \infty }\int _{s}^{t}\left\vert x_{ n}\left(r\right)\right\vert \left\langle \alpha \left(r\right),\mathit{dk}_{n}\left(r\right)\right\rangle {}\\ & \leq & \liminf _{n\rightarrow +\infty }\int _{s}^{t}\left\vert x_{ n}\left(r\right)\right\vert d\left\updownarrow k_{n}\right\updownarrow _{r} {}\\ \end{array}$$

and passing to \(\sup _{\left\Vert \alpha \right\Vert _{T}\leq 1}\) we obtain

$$\displaystyle{\int _{s}^{t}\left\vert x\left(r\right)\right\vert d\left\updownarrow k\right\updownarrow _{ r} \leq \liminf _{n\rightarrow +\infty }\int _{s}^{t}\left\vert x_{ n}\left(r\right)\right\vert d\left\updownarrow k_{n}\right\updownarrow _{r}.}$$

 ■ 

We now give some other auxiliary results used in the book:

Proposition 6.17.

Let \(A: \mathbb{R}^{d} \rightrightarrows \mathbb{R}^{d}\) be a maximal monotone operator and \(\mathcal{A}: C\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right) \rightrightarrows \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right)\) be defined by:

$$\displaystyle{\left(x,k\right) \in \mathcal{A}\quad if\quad x \in C\left(\mathbb{R}_{+};\overline{D(A)}\right),\;\;k \in \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right)\ \text{ and}}$$
$$\displaystyle{ \int _{s}^{t}\left\langle x\left(r\right) - z,\mathit{dk}\left(r\right) -\hat{ z}\mathit{dr}\right\rangle \geq 0,\quad \forall \,\left(z,\hat{z}\right) \in A,\;\forall \,0 \leq s \leq t. }$$
(6.15)

Then the relation (6.15) is equivalent to: for all \(u,\hat{u} \in C(\mathbb{R}_{+}; \mathbb{R}^{d})\) such that \(\left(u(r),\hat{u}(r)\right) \in A,\,\forall \,r \geq 0\)

$$\displaystyle{ \int \nolimits _{s}^{t}\left\langle x(r) - u(r),\mathit{dk}(r) -\hat{ u}(r)\mathit{dr}\right\rangle \geq 0,\quad \,\forall \,0 \leq s \leq t, }$$
(6.16)

and \(\mathcal{A}\) is a monotone operator, that is:

\(\text{ for all}\ \left(x,k\right),\left(y,\ell\right) \in \mathcal{A}\)

$$\displaystyle{\int _{s}^{t}\left\langle x\left(r\right) - y\left(r\right),\mathit{dk}\left(r\right) - d\ell\left(r\right)\right\rangle \geq 0,\quad \forall \,0 \leq s \leq t.}$$

Moreover \(\mathcal{A}\) is a maximal monotone operator.

Proof.

(6.15) ⇒ (6.16):

Let \(u,\hat{u} \in C\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right)\) be such that \(\left(u(r),\hat{u}(r)\right) \in A\), \(\forall \,r \geq 0\). Then

$$\displaystyle\begin{array}{rcl} & & \int \nolimits _{s}^{t}\left\langle x(r) - u(r),\mathit{dk}(r) -\hat{ u}(r)\mathit{dr}\right\rangle {}\\ & & =\lim _{n\rightarrow \infty }\int \nolimits _{s}^{t}\left\langle x(r) - u(\frac{\lfloor nr\rfloor } {n} ),\mathit{dk}(r) -\hat{ u}(\frac{\lfloor nr\rfloor } {n} )\mathit{dr}\right\rangle \geq 0. {}\\ \end{array}$$

(6.16) ⇒ (6.15):

The implication is obtained by taking \(u\left(r\right) = z\) and \(\hat{u}\left(r\right) =\hat{ z}\).

Let \(\left(x,k\right),(y,\ell) \in \mathcal{A}\) be arbitrary. Then for all \(u,\hat{u} \in C(\mathbb{R}_{+}; \mathbb{R}^{d})\) such that \(\left(u(r),\hat{u}(r)\right) \in A,\,\forall \,r \geq 0\) we have for all 0 ≤ s ≤ t,

$$\displaystyle{\begin{array}{l} \int \nolimits _{s}^{t}\left\langle y(r) - u(r),d\ell\left(r\right) -\hat{ u}\left(r\right)\mathit{dr}\right\rangle \geq 0, \\ \int \nolimits _{s}^{t}\left\langle x(r) - u(r),\mathit{dk}\left(r\right) -\hat{ u}\left(r\right)\mathit{dr}\right\rangle \geq 0.\end{array} }$$

We put here

$$\displaystyle{u\left(r\right) = J_{\varepsilon }\left(\frac{x\left(r\right) + y\left(r\right)} {2} \right) = \frac{x\left(r\right) + y\left(r\right)} {2} -\varepsilon A_{\varepsilon }\left(\frac{x\left(r\right) + y\left(r\right)} {2} \right)}$$

and 

$$\displaystyle{\hat{u}\left(r\right) = A_{\varepsilon }\left(\frac{x\left(r\right) + y\left(r\right)} {2} \right),}$$

where \(J_{\varepsilon }(z) = \left(I +\varepsilon A\right)^{-1}(z)\), \(A_{\varepsilon }\left(z\right) = \dfrac{1} {\varepsilon } \left(z - J_{\varepsilon }\left(z\right)\right)\). Since A is a maximal monotone operator on \(\mathbb{R}^{d}\) it follows that \(\overline{D(A)}\) is convex and \(\lim \limits _{\varepsilon \rightarrow 0}\varepsilon A_{\varepsilon }\left(u\right) = 0\) for all \(u \in \overline{D(A)}\). Also for all \(a \in D\left(A\right)\)

$$\displaystyle{\varepsilon \left\vert A_{\varepsilon }\left(u\right)\right\vert \leq \varepsilon \left\vert A_{\varepsilon }\left(u\right) - A_{\varepsilon }\left(a\right)\right\vert +\varepsilon \left\vert A_{\varepsilon }\left(a\right)\right\vert \leq \left\vert u - a\right\vert +\varepsilon \left\vert A^{0}\left(a\right)\right\vert.}$$

Adding the inequalities term by term we obtain:

$$\displaystyle{0 \leq \dfrac{1} {2}\!\int \nolimits _{s}^{t}\left\langle y\left(r\right)-x\left(r\right),d\ell\left(r\right)-\mathit{dk}\left(r\right)\right\rangle +\varepsilon \!\int \nolimits _{ s}^{t}\left\langle A_{\varepsilon }(\frac{x\left(r\right)+y\left(r\right)} {2} ),d\ell\left(r\right)+\mathit{dk}\left(r\right)\right\rangle.}$$

Letting \(\varepsilon \searrow 0\) we obtain \(\int \nolimits _{s}^{t}\left\langle y\left(r\right) - x\left(r\right),d\ell\left(r\right) -\mathit{dk}\left(r\right)\right\rangle \geq 0\). Finally, \(\mathcal{A}\) is a maximal monotone operator: indeed, if \(\left(y,\ell\right) \in C\left(\mathbb{R}_{+};\overline{D(A)}\right) \times \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right)\) satisfies

$$\displaystyle{\int \nolimits _{s}^{t}\left\langle y\left(r\right) - x\left(r\right),d\ell\left(r\right) -\mathit{dk}\left(r\right)\right\rangle \geq 0,\quad \forall \left(x,k\right) \in \mathcal{A},}$$

then this last inequality is satisfied for all \(\left(x,k\right)\) of the form \(\left(x\left(t\right),k\left(t\right)\right) = \left(z,\hat{z}t\right)\), where \(\left(z,\hat{z}\right) \in A\), and consequently (from the definition of \(\mathcal{A}\)) \(\left(y,\ell\right) \in \mathcal{A}\). The proof is complete. ■ 
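The resolvent \(J_{\varepsilon }\) and the Yosida approximation \(A_{\varepsilon }\) used in the proof can be computed explicitly in simple cases. The Python sketch below (an illustration, not from the text) takes the scalar maximal monotone operator \(A(x) = x^{3}\), solves the resolvent equation \(u +\varepsilon u^{3} = z\) by bisection, and checks numerically that \(\varepsilon A_{\varepsilon }(z) \rightarrow 0\) while \(A_{\varepsilon }(z) \rightarrow A(z)\):

```python
def resolvent(z, eps, tol=1e-12):
    """J_eps(z): the unique root u of u + eps*u**3 = z (A(u) = u**3 is monotone,
    so the left-hand side is strictly increasing and bisection applies)."""
    lo, hi = -abs(z) - 1.0, abs(z) + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + eps * mid ** 3 < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def yosida(z, eps):
    """A_eps(z) = (z - J_eps(z)) / eps."""
    return (z - resolvent(z, eps)) / eps

z = 2.0
# eps * |A_eps(z)| decreases to 0 as eps -> 0 ...
vals = [eps * abs(yosida(z, eps)) for eps in (1.0, 0.1, 0.01, 0.001)]
assert all(b < a for a, b in zip(vals, vals[1:]))
# ... while A_eps(z) converges to A(z) = z**3
assert abs(yosida(z, 1e-6) - z ** 3) < 1e-2
```

Since A here is single valued, \(A^{0} = A\); the identity \(A_{\varepsilon }(z) = \left(J_{\varepsilon }(z)\right)^{3}\) follows directly from the resolvent equation.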

Remark 6.18.

Often we restrict the realization to

$$\displaystyle{C\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right) \times \left[C\left(\mathbb{R}_{ +}; \mathbb{R}^{d}\right)\bigcap \mathit{BV }_{ 0,\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right)\right]}$$

and we write (for this case) \(\mathit{dk}\left(t\right) \in A\left(x\left(t\right)\right)\left(\mathit{dt}\right)\) if

$$\displaystyle{\begin{array}{l@{\quad }l} \left(a_{1}\right)\quad &\quad x \in C\left(\mathbb{R}_{+};\overline{\mathrm{Dom}(A)}\right), \\ \left(a_{2}\right)\quad &\quad k \in C\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right)\bigcap \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right),\;\;k\left(0\right) = 0, \\ \left(a_{3}\right)\quad &\quad \left\langle x\left(t\right) - u,\,\mathit{dk}\left(t\right) -\hat{ u}\mathit{dt}\right\rangle \geq 0,\quad \text{ on }\mathbb{R}_{+},\;\;\forall \,\left(u,\hat{u}\right) \in A. \end{array} }$$

Proposition 6.19.

Let \(A \subset \mathbb{R}^{d} \times \mathbb{R}^{d}\) be a maximal monotone subset and \(\mathcal{A}\) be the realization of A on \(C\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right) \times \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right)\) defined by (6.15). Assume that \(\mathrm{int}\left(\mathrm{Dom}\left(A\right)\right)\neq \varnothing \). Let \(u_{0} \in \mathrm{ int}\left(\mathrm{Dom}\left(A\right)\right)\) and \(r_{0} > 0\) be such that \(\bar{B}\left(u_{0},r_{0}\right) \subset \mathrm{ Dom}\left(A\right)\). Then

$$\displaystyle{A_{u_{0},r_{0}}^{\#}\mathop{ =}\limits^{ \mathit{def }}\sup \left\{\left\vert \hat{u}\right\vert:\hat{ u} \in Au,\;u \in \bar{ B}\left(u_{ 0},r_{0}\right)\right\} < \infty,}$$

and for all \(\left(x,k\right) \in \mathcal{A}\) :

$$\displaystyle{ r_{0}d\left\updownarrow k\right\updownarrow _{t} \leq \left\langle x\left(t\right) - u_{0},\mathit{dk}\left(t\right)\right\rangle + \left(A_{u_{0},r_{0}}^{\#}\left\vert x\left(t\right) - u_{ 0}\right\vert + r_{0}A_{u_{0},r_{0}}^{\#}\right)\mathit{dt} }$$
(6.17)

as signed measures on \(\mathbb{R}_{+}\). Moreover there exists a constant \(b_{0} > 0\) such that

$$\displaystyle{ \begin{array}{r} r_{0}\int _{s}^{t}\left\vert A_{\varepsilon }y\left(r\right)\right\vert \mathit{dr} \leq \int _{ s}^{t}\left\langle y\left(r\right) - u_{ 0},A_{\varepsilon }y\left(r\right)\right\rangle \mathit{dr} \\ + A_{u_{0},r_{0}}^{\#}\int _{s}^{t}\left\vert y\left(r\right) - u_{ 0}\right\vert \mathit{dr} + b_{0}\left(t - s\right), \end{array} }$$
(6.18)

for all 0 ≤ s ≤ t ≤ T, \(y \in C\left(\mathbb{R}_{+}; \mathbb{R}^{d}\right)\) and \(0 <\varepsilon \leq 1\) .

Proof.

Since A is locally bounded on \(\mathrm{int}\left(\mathrm{Dom}\left(A\right)\right)\), it follows that for \(u_{0} \in \mathrm{ int}\left(\mathrm{Dom}\left(A\right)\right)\) there exists an \(r_{0} > 0\) such that \(u_{0} + r_{0}v \in \mathrm{ int}\left(\mathrm{Dom}\left(A\right)\right)\) for all \(\left\vert v\right\vert \leq 1\) and

$$\displaystyle{A_{u_{0},r_{0}}^{\#}\mathop{ =}\limits^{ \mathit{def }}\sup \left\{\left\vert \hat{z}\right\vert:\hat{ z} \in Az,\;z \in \bar{ B}\left(u_{ 0},r_{0}\right)\right\} < \infty.}$$

Let \(0 \leq s = t_{0} < t_{1} <\ldots < t_{n} = t \leq T\), \(\max _{i}\left(t_{i+1} - t_{i}\right) =\delta _{n} \rightarrow 0\).

We put \(z = u_{0} + r_{0}v\) in (6.15). Then

$$\displaystyle{\int _{t_{i}}^{t_{i+1} }\left\langle x\left(r\right) -\left(u_{0} + r_{0}v\right),\mathit{dk}\left(r\right) -\hat{ z}\mathit{dr}\right\rangle \geq 0,\quad \forall \,\left\vert v\right\vert \leq 1,\;\forall \,0 \leq s \leq t \leq T,}$$

and we obtain

$$\displaystyle\begin{array}{rcl} & & r_{0}\left\langle k\left(t_{i+1}\right) - k\left(t_{i}\right),v\right\rangle {}\\ & & \leq \int _{t_{i}}^{t_{i+1} }\left\langle x\left(r\right) - u_{0},\mathit{dk}\left(r\right)\right\rangle + A_{u_{0},r_{0}}^{\#}\int _{ t_{i}}^{t_{i+1} }\left\vert x\left(r\right) - u_{0}\right\vert \mathit{dr} + r_{0}A_{u_{0},r_{0}}^{\#}\left(t_{ i+1} - t_{i}\right), {}\\ \end{array}$$

for all \(\left\vert v\right\vert \leq 1\). Hence

$$\displaystyle\begin{array}{rcl} & & r_{0}\left\vert k\left(t_{i+1}\right) - k\left(t_{i}\right)\right\vert {}\\ & & \leq \int _{t_{i}}^{t_{i+1} }\left\langle x\left(r\right) - u_{0},\mathit{dk}\left(r\right)\right\rangle + A_{u_{0},r_{0}}^{\#}\int _{ t_{i}}^{t_{i+1} }\left\vert x\left(r\right) - u_{0}\right\vert \mathit{dr} + r_{0}A_{u_{0},r_{0}}^{\#}\left(t_{ i+1} - t_{i}\right) {}\\ \end{array}$$

and, summing from \(i = 0\) to \(i = n - 1\), the inequality

$$\displaystyle\begin{array}{rcl} r_{0}\sum _{i=0}^{n-1}\left\vert k\left(t_{ i+1}\right) - k\left(t_{i}\right)\right\vert & \leq & \int _{s}^{t}\left\langle x\left(t\right) - u_{ 0},\mathit{dk}\left(t\right)\right\rangle {}\\ & & +A_{u_{0},r_{0}}^{\#}\int _{ s}^{t}\left\vert x\left(r\right) - u_{ 0}\right\vert \mathit{dr} + \left(t - s\right)r_{0}A_{u_{0},r_{0}}^{\#}, {}\\ \end{array}$$

holds, from which (6.17) clearly follows.

Setting \(x = y\left(r\right)\), \(x_{0} = u_{0}\) in (6.3) and integrating from s to t, the inequality (6.18) follows. ■ 

In this book we often use energy-type equalities, which we describe in the next lemma.

Lemma 6.20.

Let \(x,k,m \in C\left(\left[0,\infty \right[; \mathbb{R}^{d}\right)\) with \(k \in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}^{d}\right)\) and \(k\left(0\right) = m\left(0\right) = 0\) be such that

$$\displaystyle{x\left(t\right) + k\left(t\right) = x_{0} + m\left(t\right),\ \forall \ t \geq 0.}$$

Then

  1. (I):

    For all t ≥ 0 and for all \(u \in \mathbb{R}^{d}\) :

    $$\displaystyle{ \begin{array}{r} \left\vert x\left(t\right) - m\left(t\right) - u\right\vert ^{2} + 2\int _{0}^{t}\left\langle x\left(r\right) - u,\mathit{dk}\left(r\right)\right\rangle \\ = \left\vert x_{0} - u\right\vert ^{2} + 2\int _{0}^{t}\left\langle m\left(r\right),\mathit{dk}\left(r\right)\right\rangle.\end{array} }$$
    (6.19)
  2. (II):

    For all 0 ≤ s ≤ t:

    $$\displaystyle{ \begin{array}{r} \left\vert x\left(t\right) - x\left(s\right) - m\left(t\right) + m\left(s\right)\right\vert ^{2} + 2\int _{s}^{t}\left\langle x\left(r\right) - x\left(s\right),\mathit{dk}\left(r\right)\right\rangle \\ = 2\int _{s}^{t}\left\langle m\left(r\right) - m\left(s\right),\mathit{dk}\left(r\right)\right\rangle.\end{array} }$$
    (6.20)

Proof.

  1. (I):

    We have

    $$\displaystyle\begin{array}{rcl} & & \left\vert x\left(t\right) - m\left(t\right) - u\right\vert ^{2} {}\\ & & = \left\vert x_{0} - k\left(t\right) - u\right\vert ^{2} {}\\ & & = \left\vert x_{0} - u\right\vert ^{2} + 2\int _{ 0}^{t}\left\langle x_{ 0} - k\left(r\right) - u,d\left(x_{0} - k - u\right)\left(r\right)\right\rangle {}\\ & & = \left\vert x_{0} - u\right\vert ^{2} + 2\int _{ 0}^{t}\left\langle x\left(r\right) - m\left(r\right) - u,-\mathit{dk}\left(r\right)\right\rangle {}\\ & & = \left\vert x_{0} - u\right\vert ^{2} + 2\int _{ 0}^{t}\left\langle m\left(r\right),\mathit{dk}\left(r\right)\right\rangle - 2\int _{ 0}^{t}\left\langle x\left(r\right) - u,\mathit{dk}\left(r\right)\right\rangle, {}\\ \end{array}$$

    that is (6.19).

  2. (II):

Writing (6.19) with \(u = 0\) at times t and s and subtracting, we obtain

    $$\displaystyle{\begin{array}{r} \left\vert x\left(t\right) - m\left(t\right)\right\vert ^{2} -\left\vert x\left(s\right) - m\left(s\right)\right\vert ^{2} + 2\int _{ s}^{t}\left\langle x\left(r\right),\mathit{dk}\left(r\right)\right\rangle \\ = 2\int _{s}^{t}\left\langle m\left(r\right),\mathit{dk}\left(r\right)\right\rangle.\end{array} }$$

    But \(k\left(t\right) - k\left(s\right) = m\left(t\right) - x\left(t\right) - m\left(s\right) + x\left(s\right)\),

    $$\displaystyle{\begin{array}{r} \left\vert x\left(t\right) - m\left(t\right)\right\vert ^{2} = \left\vert x\left(t\right) - x\left(s\right) - m\left(t\right) + m\left(s\right)\right\vert ^{2} + \left\vert x\left(s\right) - m\left(s\right)\right\vert ^{2} \\ - 2\left\langle x\left(s\right) - m\left(s\right),k\left(t\right) - k\left(s\right)\right\rangle \end{array} }$$

    and

    $$\displaystyle\begin{array}{rcl} 2\int _{s}^{t}\left\langle m\left(r\right),\mathit{dk}\left(r\right)\right\rangle & =& 2\int _{ s}^{t}\left\langle m\left(r\right)-m\left(s\right),\mathit{dk}\left(r\right)\right\rangle +2\left\langle m\left(s\right),k\left(t\right) - k\left(s\right)\right\rangle, {}\\ 2\int _{s}^{t}\left\langle x\left(r\right),\mathit{dk}\left(r\right)\right\rangle & =& 2\int _{ s}^{t}\left\langle x\left(r\right)-x\left(s\right),\mathit{dk}\left(r\right)\right\rangle + 2\left\langle x\left(s\right),k\left(t\right) - k\left(s\right)\right\rangle. {}\\ \end{array}$$

    Hence, the equality (6.20) holds. ■ 
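For smooth data the Stieltjes integrals in (6.19) reduce to ordinary ones, and the identity can be verified numerically. In the sketch below the choices \(m(t) =\sin t\), \(k(t) = t^{2}\) are merely illustrative:

```python
import math

def check_energy_identity(x0, u, t, n=20_000):
    """Numerically verify (6.19) for smooth scalar data (illustrative choice):
    m(r) = sin r, k(r) = r**2 (so dk(r) = 2r dr) and x = x0 + m - k."""
    m, k = math.sin, (lambda r: r * r)
    x = lambda r: x0 + m(r) - k(r)
    h = t / n
    int_x = int_m = 0.0
    # midpoint rule for the Riemann-Stieltjes integrals against dk(r) = 2r dr
    for i in range(n):
        r = h * (i + 0.5)
        int_x += (x(r) - u) * 2.0 * r * h   # <x(r) - u, dk(r)>
        int_m += m(r) * 2.0 * r * h         # <m(r), dk(r)>
    lhs = (x(t) - m(t) - u) ** 2 + 2.0 * int_x
    rhs = (x0 - u) ** 2 + 2.0 * int_m
    return lhs, rhs

lhs, rhs = check_energy_identity(x0=1.0, u=0.3, t=2.0)
assert abs(lhs - rhs) < 1e-5
```

The identity holds exactly for any smooth m and k; the tolerance only absorbs the quadrature error.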

Finally we give an approximation result via Stieltjes integrals.

Lemma 6.21.

Let

  • \(Q: \left[0,T\right] \rightarrow \mathbb{R}\) be a strictly increasing continuous function such that \(Q\left(0\right) = 0\),

  • \(f,\gamma: \left[0,T\right] \rightarrow \mathbb{R}^{d}\) be bounded measurable functions,

  • \(\varphi: \mathbb{R}^{d} \rightarrow ] -\infty,+\infty ]\) be a proper convex lower semicontinuous function.

If

$$\displaystyle{f_{\varepsilon }\left(t\right) = f\left(0\right)e^{\frac{-Q\left(t\right)} {Q\left(\varepsilon \right)} } + \frac{1} {Q\left(\varepsilon \right)}\int _{0}^{t}e^{\frac{Q\left(r\right)-Q\left(t\right)} {Q\left(\varepsilon \right)} }f\left(r\right)\mathit{dQ}\left(r\right),\quad t \in \left[0,T\right]\text{, }\varepsilon > 0}$$

then as \(\varepsilon \rightarrow 0_{+}\)

$$\displaystyle{\begin{array}{ll} \left(j\right) &\quad f_{\varepsilon }\left(r\right)\mathop{\longrightarrow}\limits_{}^{\;\;\;\;}f\left(r\right),\text{ a.e. }r \in \left[0,T\right], \\ \left(\,\mathit{jj}\right)&\quad \int _{t}^{s}\varphi \left(f_{\varepsilon }\left(r\right)\right)\gamma \left(r\right)\mathit{dQ}\left(r\right)\mathop{\longrightarrow}\limits_{}^{\;\;\;\;}\int _{ t}^{s}\varphi \left(f\left(r\right)\right)\gamma \left(r\right)\mathit{dQ}\left(r\right),\;\forall \left[t,s\right] \subset \left[0,T\right].\end{array} }$$

If, moreover, \(f: \left[0,T\right] \rightarrow \mathbb{R}^{d}\) is continuous, then

$$\displaystyle{\sup _{t\in \left[0,T\right]}\left\vert f_{\varepsilon }\left(t\right)-f\left(t\right)\right\vert \mathop{\longrightarrow}\limits_{}^{\;\;\;\;}0.}$$
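In the special case \(Q(t) = t\) (so that \(Q(\varepsilon ) =\varepsilon \)) the smoothing \(f_{\varepsilon }\) can be evaluated by quadrature. The sketch below (an illustration, with \(f(r) = r\)) exhibits the uniform convergence, at rate \(O(\varepsilon )\) for Lipschitz f:

```python
import math

def smooth(f, t, eps, n=4000):
    """f_eps(t) = f(0)*exp(-t/eps) + (1/eps) * int_0^t exp((r-t)/eps) f(r) dr,
    i.e. the kernel of Lemma 6.21 specialised to Q(t) = t; trapezoidal rule."""
    h = t / n
    total = 0.0
    for i in range(n + 1):
        r = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp((r - t) / eps) * f(r)
    return f(0.0) * math.exp(-t / eps) + h * total / eps

f = lambda r: r  # continuous test function; here f_eps(t) = t - eps*(1 - exp(-t/eps))
for eps in (0.1, 0.01):
    sup_err = max(abs(smooth(f, t, eps) - f(t)) for t in (0.25, 0.5, 1.0))
    assert sup_err <= eps + 1e-3  # uniform O(eps) error for this Lipschitz f
```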

Remark 6.22.

The same conclusions are true if we replace \(f_{\varepsilon }\left(t\right)\) by

$$\displaystyle{g_{\varepsilon }\left(t\right) = f\left(T\right)e^{\frac{Q\left(t\right)-Q\left(T\right)} {Q\left(\varepsilon \right)} } + \frac{1} {Q\left(\varepsilon \right)}\int _{t}^{T}e^{\frac{Q\left(t\right)-Q\left(r\right)} {Q\left(\varepsilon \right)} }f\left(r\right)\mathit{dQ}\left(r\right),\;t \in \left[0,T\right].}$$

Proof of Lemma 6.21 \(\left(j\right)\).

Obviously we have

$$\displaystyle{ \begin{array}{l} \int _{0}^{t} \frac{1} {Q\left(\varepsilon \right)}e^{\frac{Q\left(r\right)-Q\left(t\right)} {Q\left(\varepsilon \right)} }f\left(r\right)\mathit{dQ}\left(r\right) = \int _{\frac{-Q\left(t\right)} {Q\left(\varepsilon \right)} }^{0}e^{u}f\left(\left(Q^{-1}\left(\mathit{uQ}\left(\varepsilon \right) + Q\left(t\right)\right)\right)\right)\mathit{du} \\ = \int _{\frac{-Q\left(t\right)} {Q\left(\varepsilon \right)} }^{0}e^{u}\left[f\left(Q^{-1}\left(\mathit{uQ}\left(\varepsilon \right) + Q\left(t\right)\right)\right) - f\left(Q^{-1}\left(Q\left(t\right)\right)\right)\right]\mathit{du}+f\left(t\right)\int _{\frac{ -Q\left(t\right)} {Q\left(\varepsilon \right)} }^{0}e^{u}\mathit{du}. \end{array} }$$
(6.21)

But, with \(C =\sup _{r\in \left[0,T\right]}\left\vert f\left(r\right)\right\vert \),

$$\displaystyle{\begin{array}{l} \limsup \limits _{\varepsilon \rightarrow 0}\Big\vert \int _{\frac{-Q\left(t\right)} {Q\left(\varepsilon \right)} }^{0}e^{u}\left[f\left(Q^{-1}\left(\mathit{uQ}\left(\varepsilon \right) + Q\left(t\right)\right)\right) - f\left(Q^{-1}\left(Q\left(t\right)\right)\right)\right]\mathit{du}\Big\vert \\ \leq \limsup \limits _{\varepsilon \rightarrow 0}\int _{-\infty }^{0}e^{u}\left\vert f\left(Q^{-1}\left(\left(\mathit{uQ}\left(\varepsilon \right) + Q\left(t\right)\right) \vee 0\right)\right) - f\left(Q^{-1}\left(Q\left(t\right)\right)\right)\right\vert \mathit{du} \\ \leq 2C\int _{-\infty }^{-n}e^{u}\mathit{du} + \int _{ -n}^{0}e^{u}\left\vert f\left(Q^{-1}\left(\left(\mathit{uQ}\left(\varepsilon \right) + Q\left(t\right)\right) \vee 0\right)\right) - f\left(Q^{-1}\left(Q\left(t\right)\right)\right)\right\vert \mathit{du} \\ \leq 2Ce^{-n} +\limsup \limits _{\varepsilon \rightarrow 0}\int _{-n}^{0}\left\vert f\left(Q^{-1}\left(\left(\mathit{uQ}\left(\varepsilon \right) + Q\left(t\right)\right) \vee 0\right)\right) - f\left(Q^{-1}\left(Q\left(t\right)\right)\right)\right\vert \mathit{du} \\ \leq 2Ce^{-n},\text{ for all }n, \end{array} }$$

since

$$\displaystyle{\lim _{\delta \rightarrow 0}\int _{\alpha }^{\beta }\left\vert f\left(Q^{-1}\left(s +\delta u\right)\right) - f\left(Q^{-1}\left(s\right)\right)\right\vert \mathit{du} = 0,\;\text{ a.e.}}$$

Therefore, for a.e. \(t \in \left[0,T\right]\),

$$\displaystyle{\lim \limits _{\varepsilon \rightarrow 0}\Big\vert \int _{\frac{-Q\left(t\right)} {Q\left(\varepsilon \right)} }^{0}e^{u}\left[f\left(Q^{-1}\left(\mathit{uQ}\left(\varepsilon \right) + Q\left(t\right)\right)\right) - f\left(t\right)\right]\mathit{du}\Big\vert = 0,}$$

and \(\left(j\right)\) follows. In the case where f is continuous, it is sufficient to write

$$\displaystyle{\begin{array}{l} f_{\varepsilon }\left(t\right) = f\left(0\right)e^{\frac{-Q\left(t\right)} {Q\left(\varepsilon \right)} } + \frac{1} {Q\left(\varepsilon \right)}\int _{0}^{t}e^{\frac{Q\left(r\right)-Q\left(t\right)} {Q\left(\varepsilon \right)} }f\left(r\right)\mathit{dQ}\left(r\right) \\ = f\left(0\right)e^{\frac{-Q\left(t\right)} {Q\left(\varepsilon \right)} } + \frac{1} {Q\left(\varepsilon \right)}\int _{0}^{t_{\varepsilon }}e^{\frac{Q\left(r\right)-Q\left(t\right)} {Q\left(\varepsilon \right)} }f\left(r\right)\mathit{dQ}\left(r\right) + \frac{1} {Q\left(\varepsilon \right)}\int _{t_{\varepsilon }}^{t}e^{\frac{Q\left(r\right)-Q\left(t\right)} {Q\left(\varepsilon \right)} }f\left(r\right)\mathit{dQ}\left(r\right), \end{array} }$$

where \(t_{\varepsilon }:= Q^{-1}\left(Q\left(t\right) -\sqrt{Q\left(\varepsilon \right)}\right)\mathop{\longrightarrow}\limits_{}^{\;\;\;\;}t\), as \(\varepsilon \rightarrow 0\), and \(t_{\varepsilon } < t\).

\(\left(\,\mathit{jj}\right)\) We have

$$\displaystyle\begin{array}{rcl} & & \int _{t}^{s}\varphi \left(f_{\varepsilon }\left(r\right)\right)\gamma \left(r\right)\mathit{dQ}\left(r\right) {}\\ & & \qquad \leq \int _{t}^{s}e^{\frac{-Q\left(r\right)} {Q\left(\varepsilon \right)} }\varphi \left(f\left(0\right)\right)\gamma \left(r\right)\mathit{dQ}\left(r\right) {}\\ & & \qquad \qquad + \int _{t}^{s}\left(\int _{ 0}^{r} \frac{1} {Q\left(\varepsilon \right)}e^{\frac{Q\left(u\right)-Q\left(r\right)} {Q\left(\varepsilon \right)} }\varphi \left(f\left(u\right)\right)\mathit{dQ}\left(u\right)\right)\gamma \left(r\right)\mathit{dQ}\left(r\right) {}\\ & & \qquad =\varphi \left(f\left(0\right)\right)\int _{t}^{s}e^{\frac{-Q\left(r\right)} {Q\left(\varepsilon \right)} }\gamma \left(r\right)\mathit{dQ}\left(r\right) {}\\ & & \qquad \qquad + \int _{0}^{s}\left(\int _{ 0}^{s} \frac{1} {Q\left(\varepsilon \right)}e^{\frac{Q\left(u\right)-Q\left(r\right)} {Q\left(\varepsilon \right)} }\varphi \left(f\left(u\right)\right)\mathbf{1}_{\left[0,r\right]}\left(u\right)\mathit{dQ}\left(u\right)\right)\gamma \left(r\right)\mathit{dQ}\left(r\right) {}\\ & & \qquad \qquad -\int _{0}^{t}\left(\int _{ 0}^{t} \frac{1} {Q\left(\varepsilon \right)}e^{\frac{Q\left(u\right)-Q\left(r\right)} {Q\left(\varepsilon \right)} }\varphi \left(f\left(u\right)\right)\mathbf{1}_{\left[0,r\right]}\left(u\right)\mathit{dQ}\left(u\right)\right)\gamma \left(r\right)\mathit{dQ}\left(r\right) {}\\ & & \qquad =\varphi \left(f\left(0\right)\right)\int _{t}^{s}e^{\frac{-Q\left(r\right)} {Q\left(\varepsilon \right)} }\gamma \left(r\right)\mathit{dQ}\left(r\right) {}\\ & & \qquad \qquad + \int _{0}^{s}\left(\varphi \left(f\left(u\right)\right)\int _{ 0}^{s} \frac{1} {Q\left(\varepsilon \right)}e^{\frac{Q\left(u\right)-Q\left(r\right)} {Q\left(\varepsilon \right)} }\mathbf{1}_{\left[u,s\right]}\left(r\right)\gamma \left(r\right)\mathit{dQ}\left(r\right)\right)\mathit{dQ}\left(u\right) {}\\ & & \qquad \qquad -\int _{0}^{t}\left(\varphi \left(f\left(u\right)\right)\int _{ 0}^{t} \frac{1} {Q\left(\varepsilon \right)}e^{\frac{Q\left(u\right)-Q\left(r\right)} {Q\left(\varepsilon \right)} }\mathbf{1}_{\left[u,t\right]}\left(r\right)\gamma \left(r\right)\mathit{dQ}\left(r\right)\right)\mathit{dQ}\left(u\right). {}\\ \end{array}$$

Using Remark 6.22 we have

$$\displaystyle\begin{array}{rcl} \lim \limits _{\varepsilon \rightarrow 0}\int _{u}^{s} \frac{1} {Q\left(\varepsilon \right)}e^{\frac{Q\left(u\right)-Q\left(r\right)} {Q\left(\varepsilon \right)} }\gamma \left(r\right)\mathit{dQ}\left(r\right)& =& \lim \limits _{\varepsilon \rightarrow 0}\int _{u}^{t} \frac{1} {Q\left(\varepsilon \right)}e^{\frac{Q\left(u\right)-Q\left(r\right)} {Q\left(\varepsilon \right)} }\gamma \left(r\right)\mathit{dQ}\left(r\right) {}\\ & =& \gamma \left(u\right),\;\text{ a.e.} {}\\ \end{array}$$

By Lebesgue’s dominated convergence theorem and the lower semicontinuity of \(\varphi\) we conclude that

$$\displaystyle\begin{array}{rcl} \int _{t}^{s}\varphi \left(f\left(r\right)\right)\gamma \left(r\right)\mathit{dQ}\left(r\right)& \leq & \liminf _{\varepsilon \rightarrow 0}\int _{t}^{s}\varphi \left(f_{\varepsilon }\left(r\right)\right)\gamma \left(r\right)\mathit{dQ}\left(r\right) {}\\ & \leq & \limsup _{\varepsilon \rightarrow 0}\int _{t}^{s}\varphi \left(f_{\varepsilon }\left(r\right)\right)\gamma \left(r\right)\mathit{dQ}\left(r\right) {}\\ & \leq & \int _{t}^{s}\varphi \left(f\left(r\right)\right)\gamma \left(r\right)\mathit{dQ}\left(r\right). {}\\ \end{array}$$

 ■ 

6.3.6 Semicontinuity

Let \(\left(\mathbb{X},\rho \right)\) be a metric space.

Definition 6.23.

A function \(f: \mathbb{X} \rightarrow \overline{\mathbb{R}}\) is lower semicontinuous (l.s.c.) at \(x \in \mathbb{X}\) if

$$\displaystyle{f\left(x\right) \leq \liminf _{y\rightarrow x}f\left(y\right),}$$

i.e. for all \(\varepsilon > 0\) there exists a \(\delta =\delta \left(\varepsilon,x\right) > 0\) such that \(\rho \left(x,y\right) <\delta\) implies \(f\left(y\right) \geq f\left(x\right)-\varepsilon\). The function f is l.s.c. if it is l.s.c. at all \(x \in \mathbb{X}\).

A function \(g: \mathbb{X} \rightarrow \overline{\mathbb{R}}\) is upper semicontinuous (u.s.c.) if − g is l.s.c.

Proposition 6.24.

The following assertions are equivalent:

  1. (i)

    \(f: \mathbb{X} \rightarrow \overline{\mathbb{R}}\) is lower semicontinuous;

  2. (ii)

    the set \(\left\{x \in \mathbb{X}: f\left(x\right) \leq a\right\}\) is closed in \(\mathbb{X}\) , for all \(a \in \mathbb{R}\) .

It is easy to prove that:

  • \(\blacktriangle \) If \(g_{n}: \mathbb{X} \rightarrow \overline{\mathbb{R}}\), \(n \in \mathbb{N}\), are l.s.c. functions and

    $$\displaystyle{g(x) =\sup \{ g_{n}(x): n \in \mathbb{N}\},}$$

    then \(g: \mathbb{X} \rightarrow \overline{\mathbb{R}}\) is a l.s.c. function.

  • \(\blacktriangle \) If \(f: \mathbb{X} \rightarrow ] -\infty,+\infty ]\) is a l.s.c. function, then f is bounded from below on compact subsets of \(\mathbb{X}\).

Lemma 6.25.

Let \(\left(\mathbb{X},\rho \right)\) be a metric space. If \(f: \mathbb{X} \rightarrow \overline{\mathbb{R}}\) is bounded below on bounded subsets of \(\mathbb{X}\) , then there exists a continuous function \(\mu: \mathbb{X} \rightarrow \mathbb{R}\) such that

$$\displaystyle{\mu \left(x\right) \leq f\left(x\right),\;\;\;\text{ for all }x \in \mathbb{X}.}$$

Proof.

Let \(n \in \mathbb{N}^{{\ast}}\) and \(a \in \mathbb{X}\). Define

$$\displaystyle{\mu _{n} = n \wedge \inf \{ f(x):\rho (x,a) \leq n\}.}$$

Then \(\mu _{n} \in \mathbb{R}\). Define \(\mu: \mathbb{X} \rightarrow \mathbb{R}\) such that, if \(n - 1 \leq \rho \left(x,a\right) < n\)

$$\displaystyle{\mu \left(x\right) =\mu _{n} - 2\left[\rho \left(x,a\right) -\left(n -\frac{1} {2}\right)\right]^{+}\left(\mu _{ n} -\mu _{n+1}\right).}$$

The function μ is continuous on \(\mathbb{X}\) and

$$\displaystyle{\mu \left(x\right) \leq f\left(x\right)\quad \text{ for all }x \in \mathbb{X}.}$$

 ■ 

Proposition 6.26.

Let \(\left(\mathbb{X},\rho \right)\) be a separable metric space. If \(f: \mathbb{X} \rightarrow ] -\infty,+\infty ]\) is a l.s.c. function and \(\mu: \mathbb{X} \rightarrow \mathbb{R}\) is a continuous function such that

$$\displaystyle{ \mu \left(x\right) \leq f\left(x\right),\;\;\;\text{ for all }x \in \mathbb{X}, }$$
(6.22)

then there exists a sequence of continuous functions \(f_{n}: \mathbb{X} \rightarrow \mathbb{R}\), \(n \in \mathbb{N}^{{\ast}}\) , such that for all \(x \in \mathbb{X}\)

$$\displaystyle{\mu \left(x\right) \leq f_{1}\left(x\right) \leq \ldots \leq f_{n}\left(x\right) \leq \ldots \leq f\left(x\right)\quad \text{ and}\quad \lim _{n\rightarrow \infty }f_{n}\left(x\right) = f\left(x\right).}$$

Proof.

Using only the boundedness from below (6.22) we shall show that there exists a sequence of continuous functions \(f_{n}: \mathbb{X} \rightarrow \mathbb{R}\), \(n \in \mathbb{N}^{{\ast}}\), such that

$$\displaystyle{ \mu \left(y\right) \leq f_{1}\left(y\right) \leq \ldots \leq f_{n}\left(y\right) \leq \ldots \leq f\left(y\right),\quad \text{ for all }y \in \mathbb{X}, }$$
(6.23)

and such that for all \(x \in \mathbb{X}\) there exists a sequence \(y_{n} \rightarrow x\) satisfying

$$\displaystyle{ n \wedge \left(f\left(y_{n}\right) - \frac{1} {n}\right) \leq \sup _{j\in \mathbb{N}^{{\ast}}}f_{j}\left(x\right) \leq f\left(x\right),\quad \text{ for all }n \in \mathbb{N}^{{\ast}}. }$$
(6.24)

Then the result follows using the lower semicontinuity of f:

$$\displaystyle{f\left(x\right) \leq \liminf _{n\rightarrow +\infty }\left[n \wedge \left(f\left(y_{n}\right) - \frac{1} {n}\right)\right] \leq \sup _{j\in \mathbb{N}^{{\ast}}}f_{j}\left(x\right) \leq f\left(x\right).}$$

Let us prove (6.23) and (6.24). Let \(n,i \in \mathbb{N}^{{\ast}}\) and \(a \in \mathbb{X}\). Define

$$\displaystyle{\mu _{i,n} =\mu _{i,n}\left(a\right) = n \wedge \inf \left\{f\left(y\right):\rho \left(y,a\right) < \frac{i} {n}\right\}.}$$

Then \(-\infty \leq \mu _{i,n} < +\infty \). Define \(\psi _{n}\left(\cdot,a\right): \mathbb{X} \rightarrow \mathbb{R}\) such that, if \(\frac{i-1} {n} \leq \rho \left(y,a\right) < \frac{i} {n}\), \(i \in \mathbb{N}^{{\ast}}\), then

$$\displaystyle{\begin{array}{r} \psi _{n}\left(y,a\right) =\mu \left(y\right) \vee \mu _{i,n} - 2n\left[\rho \left(y,a\right) -\left(i -\dfrac{1} {2}\right) \dfrac{1} {n}\right]^{+} \\ \times \left[\left(\mu \left(y\right) \vee \mu _{i,n}\right) -\left(\mu \left(y\right) \vee \mu _{i+1,n}\right)\right]. \end{array} }$$

For each \(a \in \mathbb{X}\) the function \(\psi _{n}\left(\cdot,a\right)\) is continuous on \(\mathbb{X}\) and

$$\displaystyle{\psi _{n}\left(y,a\right) \leq f\left(y\right)\quad \text{ for all }y \in \mathbb{X}.}$$

Let \(A_{1} \subset A_{2} \subset \ldots \subset A_{n} \subset \ldots\) be finite sets such that \(A={\bigcup\limits_{n\in\mathbb{N}^{\ast}}}A_{n}\) is a dense subset of \(\mathbb{X}\). Define \(f_{n}: \mathbb{X} \rightarrow \mathbb{R}\)

$$\displaystyle{f_{n}\left(y\right) =\max _{k\in \overline{1,n}}\left[\max _{a\in A_{n}}\ \psi _{k}\left(y,a\right)\right],\quad y \in \mathbb{X}.}$$

Clearly \(f_{n},\;n \in \mathbb{N}^{{\ast}}\), are continuous functions and

$$\displaystyle{f_{1}\left(y\right) \leq \ldots \leq f_{n}\left(y\right) \leq \ldots \leq f\left(y\right),\;\;\forall \ y \in \mathbb{X}.}$$

Let \(x \in \mathbb{X}\) be arbitrary. Then there exist \(a_{n} \in A\) and \(k_{n} \geq n\) such that

$$\displaystyle{\rho \left(x,a_{n}\right) < \frac{1} {2n}\quad \text{ and}\quad a_{n} \in A_{k_{n}}.}$$

If \(\mu _{1,n}\left(a_{n}\right) \in \mathbb{R}\), then from the definition of \(\mu _{1,n}\left(a_{n}\right)\), there exists \(y_{n} \in B(a_{n}, \frac{1} {n})\) such that

$$\displaystyle{n \wedge \left(f\left(y_{n}\right) - \frac{1} {n}\right) \leq \mu _{1,n}\left(a_{n}\right) \leq \psi _{n}\left(x,a_{n}\right) \leq f_{k_{n}}\left(x\right) \leq \sup _{j\in \mathbb{N}^{{\ast}}}f_{j}\left(x\right) \leq f\left(x\right).}$$

If \(\mu _{1,n}\left(a_{n}\right) = -\infty \) then, once again from the definition of \(\mu _{1,n}\left(a_{n}\right)\), there exists \(y_{n} \in B(a_{n}, \frac{1} {n})\) such that

$$\displaystyle{n \wedge \left(f\left(y_{n}\right) - \frac{1} {n}\right) \leq f_{1}\left(x\right) \leq \sup _{j\in \mathbb{N}^{{\ast}}}f_{j}\left(x\right) \leq f\left(x\right).}$$

We remark that

$$\displaystyle{\rho \left(y_{n},x\right) \leq \rho \left(y_{n},a_{n}\right) +\rho \left(a_{n},x\right) \leq \frac{3} {2n}}$$

and consequently \(y_{n} \rightarrow x\). The proof is complete. ■ 
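A related classical construction (not the one used in the proof, which works without any boundedness assumption beyond (6.22)) is the inf-convolution \(f_{n}\left(x\right) =\inf _{y\in \mathbb{X}}\left[f\left(y\right) + n\rho \left(x,y\right)\right]\), which for f bounded below produces the same kind of monotone approximation by continuous (indeed Lipschitz) functions. A discretized sketch, with the infimum taken over a finite grid:

```python
def inf_convolution(f, grid, x, n):
    """f_n(x) = inf_y [ f(y) + n*|x - y| ]: n-Lipschitz, f_n <= f, increasing
    in n, and f_n(x) -> f(x) for l.s.c. f bounded below (grid-discretized inf)."""
    return min(f(y) + n * abs(x - y) for y in grid)

# l.s.c. test function with a jump: f(x) = 0 for x <= 0, f(x) = 1 for x > 0
f = lambda x: 0.0 if x <= 0 else 1.0
grid = [i / 1000.0 for i in range(-2000, 2001)]  # fine grid on [-2, 2]

x = 0.5
vals = [inf_convolution(f, grid, x, n) for n in (1, 2, 4, 8, 100)]
assert all(a <= b for a, b in zip(vals, vals[1:]))  # monotone in n
assert all(v <= f(x) for v in vals)                 # f_n <= f
assert abs(vals[-1] - f(x)) < 1e-6                  # f_n(x) -> f(x)
```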

We also have the following:

Proposition 6.27.

If \(f: \mathbb{X} \rightarrow \mathbb{R}\) is a continuous function and \(f_{n}: \mathbb{X} \rightarrow \mathbb{R}\), \(n \in \mathbb{N}^{{\ast}}\) , are lower semicontinuous functions such that for all \(x \in \mathbb{X}\) :

$$\displaystyle{f_{1}\left(x\right) \leq f_{2}\left(x\right) \leq \ldots \leq f_{n}\left(x\right) \leq \ldots \leq f\left(x\right)\quad \text{ and}\quad \lim _{n\rightarrow \infty }f_{n}\left(x\right) = f(x),}$$

then for every compact set \(K \subset \mathbb{X}\)

$$\displaystyle{ \lim _{n\rightarrow \infty }\left[\sup _{x\in K}\left\vert f_{n}\left(x\right) - f\left(x\right)\right\vert \right] = 0. }$$
(6.25)

Proof.

For each \(\varepsilon > 0\), \(G_{n} = \left\{x \in \mathbb{X}: f\left(x\right) - f_{n}\left(x\right) <\varepsilon \right\}\) is an open subset of \(\mathbb{X}\) and

$$\displaystyle{K \subset \mathbb{X} =\bigcup \limits _{n\in \mathbb{N}^{{\ast}}} \uparrow G_{n}.}$$

Hence, by the compactness of K, there exists an \(n \in \mathbb{N}^{{\ast}}\) such that \(K \subset G_{n}\), and the uniform convergence (6.25) follows. ■ 

We now give some examples (as exercises for the reader) of lower semicontinuous functions that are used in the book.

Example 6.28.

Let \(\left(\mathbb{X},\rho \right)\) be a separable metric space and \(E \subset \mathbb{X}\). Then E is a closed subset of \(\mathbb{X}\) if and only if the function

$$\displaystyle{I_{E}\left(x\right) = \left\{\begin{array}{ll} 0,\;\; &\text{ if }x \in E,\\ + \infty, &\text{ otherwise,} \end{array} \right.}$$

is a l.s.c. function on \(\mathbb{X}\).

Example 6.29.

Let \(\left(\mathbb{X},\rho \right)\) be a separable metric space. Let 0 ≤ s < t ≤ T. If \(f: \mathbb{X} \rightarrow ] -\infty,+\infty ]\) is a l.s.c. function bounded below on bounded subsets of \(\mathbb{X}\) and \(\Phi: C\left(\left[0,T\right]; \mathbb{X}\right) \rightarrow ] -\infty,+\infty ]\) is defined by

$$\displaystyle{\Phi \left(x\right) = \left\{\begin{array}{l@{\quad }l} \int _{s}^{t}f\left(x\left(r\right)\right)\mathit{dr},\;\;\quad &\text{ if }f\left(x\right) \in L^{1}\left(0,T\right) \\ + \infty, \quad &\;\quad \quad \quad \text{ otherwise} \end{array} \right.}$$

then \(\Phi \) is a l.s.c. function.

Let 0 ≤ s < t ≤ T. Let \(\mathcal{D}_{\left[s,t\right]}\) be the set of partitions \(\Delta:\; s = r_{0} < r_{1} < \cdots < r_{n} = t\), and

$$\displaystyle{V _{\Delta }\left(k\right)\mathop{ =}\limits^{ \mathit{def.}}\sum _{i=0}^{n-1}\rho \left(k\left(r_{ i}\right),k\left(r_{i+1}\right)\right).}$$

Define the total variation of k on \(\left[s,t\right]\) by

$$\displaystyle{\left\updownarrow k\right\updownarrow _{\left[s,t\right]} =\sup _{\Delta \in \mathcal{D}_{\left[s,t\right]}}V _{\Delta }\left(k\right).}$$

Then, as a supremum of continuous functions, we have:

Example 6.30.

The mapping \(k\longmapsto \left\updownarrow k\right\updownarrow _{\left[s,t\right]}: C\left(\left[0,T\right]; \mathbb{X}\right) \rightarrow \left[0,\infty \right]\) is a l.s.c. function.
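Since uniform dyadic partitions are nested, the sums \(V _{\Delta }\left(k\right)\) increase under refinement and approximate the total variation from below. A numerical sketch (illustrative choice: \(k(t) =\sin t\) on \(\left[0,3\right]\), whose total variation is \(2 -\sin 3\)):

```python
import math

def variation(k, s, t, n):
    """V_Delta(k) for the uniform partition of [s, t] into n subintervals."""
    pts = [s + (t - s) * i / n for i in range(n + 1)]
    return sum(abs(k(b) - k(a)) for a, b in zip(pts, pts[1:]))

k = math.sin
vals = [variation(k, 0.0, 3.0, 2 ** j) for j in range(1, 12)]
# V_Delta increases under (dyadic) refinement ...
assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
# ... and converges to the total variation: sin rises to 1 on [0, pi/2],
# then falls to sin 3, so the total variation equals 1 + (1 - sin 3)
assert abs(vals[-1] - (2.0 - math.sin(3.0))) < 1e-4
```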

Finally we present Ekeland’s principle (see [26], or [4], p. 29, Th. 3.2):

Lemma 6.31 (Ekeland).

Let \((\mathbb{X},\rho )\) be a complete metric space and \(J: \mathbb{X} \rightarrow ] -\infty,+\infty ]\) be a proper lower-semicontinuous function bounded from below. Then for any \(\varepsilon > 0\) there exists an \(x_{\varepsilon } \in \mathbb{X}\) such that:

$$\displaystyle{\left\{\begin{array}{l} J(x_{\varepsilon }) \leq \inf \limits _{x\in \mathbb{X}}J(x) +\varepsilon \quad \text{ and} \\ J(x_{\varepsilon }) < J(x) + \sqrt{\varepsilon }\rho (x_{\varepsilon },x),\quad \forall x \in \mathbb{X}\setminus \{x_{\varepsilon }\}.\end{array} \right.}$$
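The content of the principle is visible already for \(J(x) = e^{-x}\) on \(\mathbb{X} = \left[0,\infty \right[\), where the infimum 0 is not attained: the hand-picked point \(x_{\varepsilon } = -\ln \varepsilon \) satisfies both conclusions, as the sketch below checks on a grid (an illustration, not a proof):

```python
import math

# J(x) = exp(-x) on X = [0, infinity): inf J = 0 is not attained.
# The point x_eps = -ln(eps) satisfies both conclusions of Ekeland's principle.
eps = 1e-2
J = lambda x: math.exp(-x)
x_eps = -math.log(eps)

assert J(x_eps) <= 0.0 + eps + 1e-12   # J(x_eps) <= inf J + eps
# strict Ekeland inequality, checked on a grid over (0, 40]
for i in range(1, 4001):
    x = i * 0.01
    if abs(x - x_eps) > 1e-12:
        assert J(x_eps) < J(x) + math.sqrt(eps) * abs(x - x_eps)
```

For \(x > x_{\varepsilon }\) the second inequality reduces to \(\sqrt{\varepsilon }\left(1 - e^{-\delta }\right) <\delta \) with \(\delta = x - x_{\varepsilon }\), which holds since \(1 - e^{-\delta } <\delta \).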

6.3.7 Convex Functions

6.3.7.1 Definitions: Properties

Let \(\left(\mathbb{X},\left\Vert \cdot \right\Vert \right)\) be a real Banach space and \(\left(\mathbb{X}^{{\ast}},\left\Vert \cdot \right\Vert _{{\ast}}\right)\) its dual. A function \(\varphi: \mathbb{X} \rightarrow ] -\infty,+\infty ]\) is convex if

$$\displaystyle{\varphi \left((1-\lambda )x +\lambda y\right) \leq (1-\lambda )\varphi \left(x\right) +\lambda \varphi \left(y\right),\text{ for all }x,y \in \mathbb{X}\;\text{ and }\lambda \in ]0,1[.}$$

Denote by

$$\displaystyle{\mathrm{Dom}(\varphi ) =\{ x \in \mathbb{X}:\varphi (x) < +\infty \}}$$

the domain of \(\varphi\) and

$$\displaystyle{\partial \varphi (x) =\{\hat{ x} \in \mathbb{X}^{{\ast}}: \left\langle \hat{x},z - x\right\rangle +\varphi (x) \leq \varphi (z),\;\forall \,z \in \mathbb{X}\}}$$

the subdifferential of the function \(\varphi\) at x. We say that \(\varphi\) is proper if \(\mathrm{Dom}(\varphi )\neq \varnothing \). Clearly if \(\varphi\) is a convex function then \(\mathrm{Dom}(\varphi )\) is a convex subset of \(\mathbb{X}\).

Theorem 6.32.

If \(\mathbb{X}\) is a Banach space and \(\varphi: \mathbb{X} \rightarrow ] -\infty,+\infty ]\) is a proper convex l.s.c. function then

$$\displaystyle{\mathrm{Dom}\left(\partial \varphi \right)\mathop{ =}\limits^{ \mathit{def }}\left\{x \in \mathbb{X}: \partial \varphi (x)\neq \varnothing \right\}}$$

is non-empty and \(\partial \varphi: \mathbb{X} \rightrightarrows \mathbb{X}^{{\ast}}\) is a maximal monotone operator.

If K is a convex subset of \(\mathbb{X}\) then the function \(I_{K}: \mathbb{X} \rightarrow ] -\infty,+\infty ]\) defined by

$$\displaystyle{I_{K}\left(x\right) = \left\{\begin{array}{@{}l@{\quad }l@{}} 0, \quad &\text{ if }x \in K,\\ +\infty,\quad &\text{ if } x \in \mathbb{X}\setminus K, \end{array} \right.}$$

is a convex function called the convex indicator function of K.

Recall, from [71] Chapter 2, the following:

Proposition 6.33.

Let \(g: \mathbb{R} \rightarrow ] -\infty,+\infty ]\) be a convex function. Then:

  1. (a)

    Dom (g) is an interval in \(\mathbb{R}\) ;

  2. (b)

    the left derivative \(g_{-}^{{\prime}}:\mathrm{ Dom}(g) \rightarrow \left[-\infty,+\infty \right]\) and the right derivative \(g_{+}^{{\prime}}:\mathrm{ Dom}(g) \rightarrow \left[-\infty,+\infty \right]\) are well defined increasing functions and they satisfy:

    $$\displaystyle{\begin{array}{r@{\quad }l} \left(j\right)\;\quad &g_{+}^{{\prime}}\left(r\right) \leq \dfrac{g\left(s\right) - g\left(r\right)} {s - r} \leq g_{-}^{{\prime}}\left(s\right),\;\;\forall \ r,s \in \mathrm{ Dom}(g),\ r < s; \\ \left(\,\mathit{jj}\right)\;\quad &g_{-}^{{\prime}}\left(r\right) \leq g_{+}^{{\prime}}\left(r\right),\;\;\forall \ r \in \mathrm{ Dom}(g); \\ \left(\,\mathit{jjj}\right)\;\quad &g_{-}^{{\prime}}\;\text{ is left continuous and }g_{+}^{{\prime}}\;\text{ is right continuous on }\mathrm{int}(\mathrm{Dom}(g)); \\ \left(\,\mathit{jv}\right)\;\quad &u \in \left[g_{-}^{{\prime}}\left(r\right),g_{+}^{{\prime}}\left(r\right)\right] \cap \mathbb{R}\;\Longleftrightarrow\;u\left(s - r\right) \leq g\left(s\right) - g\left(r\right),\;\forall \ s \in \mathbb{R}; \\ \left(\mathit{v}\right)\;\quad &\left\{r \in \mathrm{ Dom}(g): g_{-}^{{\prime}}\left(r\right)\neq g_{+}^{{\prime}}\left(r\right)\right\}\;\text{ is at most countable;}\end{array} }$$
  3. (c)

    g is locally Lipschitz continuous on \(\mathrm{int}\left(\mathrm{Dom}(g)\right)\) ;

  4. (d)

    \(A \subset \mathbb{R} \times \mathbb{R}\) is a maximal monotone operator if and only if there exists a proper convex l.s.c. function \(j: \mathbb{R} \rightarrow ] -\infty,+\infty ]\) such that \(A = \partial j\).
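Items (j) and (jv) of Proposition 6.33 can be checked numerically on the standard example \(g(r) = \left\vert r\right\vert \) (this sketch is ours, not from the text); here \(g_{-}^{{\prime}}(0) = -1\), \(g_{+}^{{\prime}}(0) = 1\), and every \(u \in [-1,1]\) supports g at 0:

```python
# Illustration (not from the text): the convex function g(r) = |r| has
# one-sided derivatives g'_-(r) = -1 for r <= 0, g'_+(r) = 1 for r >= 0,
# and sign(r) away from the origin.

def g(r):
    return abs(r)

def g_minus(r):  # left derivative of |r|
    return -1.0 if r <= 0 else 1.0

def g_plus(r):   # right derivative of |r|
    return 1.0 if r >= 0 else -1.0

pts = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]

# (j): difference quotients are squeezed between the one-sided derivatives.
for r in pts:
    for s in pts:
        if r < s:
            q = (g(s) - g(r)) / (s - r)
            assert g_plus(r) <= q + 1e-12 and q <= g_minus(s) + 1e-12

# (jv): every u in [g'_-(0), g'_+(0)] = [-1, 1] satisfies
# u * (s - 0) <= g(s) - g(0) for all s.
for u in [-1.0, -0.3, 0.0, 0.7, 1.0]:
    assert all(u * s <= g(s) + 1e-12 for s in pts)
```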

Note that if \( \varphi \) is a proper convex lower semicontinuous (l.s.c.) function then:

  • \(\varphi\) is bounded from below by an affine function, that is \(\exists \)\(v \in \mathbb{X}^{{\ast}}\) and \(a \in \mathbb{R}\) such that

    $$\displaystyle{\varphi \left(x\right) \geq \left\langle v,x\right\rangle + a,\text{ for all }x \in \mathbb{X},}$$

    and, moreover, if \(\mathbb{X}\) is reflexive and \(\lim \limits _{\left\Vert x\right\Vert \rightarrow \infty }\varphi \left(x\right) = +\infty \) then there exists an \(x_{0} \in \mathrm{ Dom}(\varphi )\) such that

    $$\displaystyle{\varphi \left(x\right) \geq \varphi \left(x_{0}\right),\text{ for all }x \in \mathbb{X};}$$
  • (Fenchel–Moreau theorem on biconjugate functions)

    $$\displaystyle{\varphi \left(x\right) =\varphi ^{{\ast}{\ast}}\left(x\right) =\sup \left\{\left\langle x,x^{{\ast}}\right\rangle -\varphi ^{{\ast}}\left(x^{{\ast}}\right): x^{{\ast}}\in \mathbb{X}^{{\ast}}\right\},}$$

    where \(\varphi ^{{\ast}}: \mathbb{X}^{{\ast}}\rightarrow \bar{\mathbb{R}}\) is the conjugate of the function \(\varphi\), i.e.

    $$\displaystyle{\varphi ^{{\ast}}\left(x^{{\ast}}\right) =\sup \left\{\left\langle u,x^{{\ast}}\right\rangle -\varphi \left(u\right): u \in \mathrm{ Dom}(\varphi )\right\};}$$
  • \(\varphi\) is continuous on \(\mathrm{int}\left(\mathrm{Dom}(\varphi )\right)\);

  • \(\partial \varphi: \mathbb{X} \rightrightarrows \mathbb{X}^{{\ast}}\) is maximal monotone;

  • \(\mathrm{int}\left(\mathrm{Dom}\left(\varphi \right)\right) =\mathrm{ int}\left(\mathrm{Dom}\left(\partial \varphi \right)\right)\) and \(\overline{\mathrm{Dom}\left(\partial \varphi \right)} = \overline{\mathrm{Dom}\left(\varphi \right)}\).

We have the following instance of Jensen’s inequality.

Lemma 6.34.

Let \(\varphi: \mathbb{R}^{d} \rightarrow ] -\infty,+\infty ]\) be a proper convex lower semicontinuous function. If \(a,b \in \mathbb{R}\) , a < b, \(y \in L^{\infty }\left(a,b; \mathbb{R}^{d}\right)\) and \(\rho \in L^{1}\left(a,b; \mathbb{R}_{+}\right)\) such that \(\int _{a}^{b}\rho \left(r\right)\mathit{dr} = 1\) , then

$$\displaystyle{\varphi \left(\int _{a}^{b}\rho \left(r\right)y\left(r\right)\mathit{dr}\right) \leq \int _{ a}^{b}\rho \left(r\right)\varphi \left(y\left(r\right)\right)\mathit{dr}.}$$

Proof.

Since \(\varphi \) is a proper convex l.s.c. function, it is the supremum of its affine minorants, that is, there exists a set \(\Gamma \subset \mathbb{R}^{d} \times \mathbb{R}\) such that

$$\displaystyle{\varphi \left(x\right) =\sup \left\{\left\langle v,x\right\rangle +\gamma: \left(v,\gamma \right) \in \Gamma \right\},}$$

we have, for every \(\left(v,\gamma \right) \in \Gamma \),

$$\displaystyle{\big\langle v,\int _{a}^{b}\rho \left(r\right)y\left(r\right)\mathit{dr}\big\rangle +\gamma =\int _{ a}^{b}\rho \left(r\right)\left[\left\langle v,y\left(r\right)\right\rangle +\gamma \right]\mathit{dr} \leq \int _{ a}^{b}\rho \left(r\right)\varphi \left(y\left(r\right)\right)\mathit{dr}}$$

and the result follows passing to \(\sup _{\left(v,\gamma \right)\in \Gamma }\). ■ 
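A quick numerical sanity check of Lemma 6.34 (an illustration of ours, not part of the proof): take \(\varphi (x) = x^{2}\), \([a,b] = [0,1]\), \(\rho \equiv 1\) (a probability density on \([0,1]\)) and \(y(r) = r\); then the left-hand side is \(1/4\) and the right-hand side is \(1/3\):

```python
# Numerical check of Jensen's inequality (Lemma 6.34) with phi(x) = x^2,
# rho = 1 on [0, 1] (so int rho = 1) and y(r) = r, via the midpoint rule.

N = 10_000
a, b = 0.0, 1.0
h = (b - a) / N
grid = [a + (i + 0.5) * h for i in range(N)]  # midpoints of subintervals

phi = lambda x: x * x
rho = lambda r: 1.0   # constant density, integrates to 1 on [0, 1]
y = lambda r: r

lhs = phi(sum(rho(r) * y(r) * h for r in grid))   # phi(int rho * y)  ~ 1/4
rhs = sum(rho(r) * phi(y(r)) * h for r in grid)   # int rho * phi(y)  ~ 1/3
assert lhs <= rhs
```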

6.3.7.2 Regularization of Convex Functions

Let \(\left(\mathbb{H},\left\vert \cdot \right\vert \right)\) be a real separable Hilbert space and \(\varphi: \mathbb{H} \rightarrow ] -\infty,+\infty ]\) be a proper convex l.s.c. function. The Moreau regularization \(\varphi _{\varepsilon }\) of the convex l.s.c. function \(\varphi\) is defined by

$$\displaystyle{\varphi _{\varepsilon }(x) =\inf \, \left\{\frac{1} {2\varepsilon }\vert z - x\vert ^{2} +\varphi (z);z \in \mathbb{H}\right\},\quad \varepsilon > 0.}$$

The function \(\varphi _{\varepsilon }\) is a convex function of class C 1 on \(\mathbb{H}\); the gradient \(\nabla \varphi _{\varepsilon }\) is a Lipschitz function on \(\mathbb{H}\) with the Lipschitz constant equal to \(\varepsilon ^{-1}\). If we define:

$$\displaystyle{J_{\varepsilon }x = x -\varepsilon \nabla \varphi _{\varepsilon }(x),}$$

then one can easily prove (see e.g. Brezis [12], Barbu [2], Rockafellar [65] or Zălinescu [71]) that for all \(x \in \mathbb{H}\) and \(\varepsilon > 0\):

  1. 1.

    \(\varphi _{\varepsilon }(x) = \dfrac{\varepsilon } {2}\vert \nabla \varphi _{\varepsilon }(x)\vert ^{2} +\varphi (J_{\varepsilon }x)\),

  2. 2.

    \(\varphi (J_{\varepsilon }x) \leq \varphi _{\varepsilon }(x) \leq \varphi (x)\),

  3. 3.

    \(\nabla \varphi _{\varepsilon }\left(x\right) = \partial \varphi _{\varepsilon }\left(x\right)\) and

    $$\displaystyle\begin{array}{rcl} \varphi (J_{\varepsilon }x)& \leq &\varphi _{\varepsilon }(x) {}\\ & \leq &\varphi _{\varepsilon }(z) + \left\langle x - z,\nabla \varphi _{\varepsilon }(x)\right\rangle {}\\ & \leq &\varphi (z) + \left\langle x - z,\nabla \varphi _{\varepsilon }(x)\right\rangle,\;\forall \;z \in \mathbb{H}, {}\\ \end{array}$$
  4. 4.

    \(\nabla \varphi _{\varepsilon }(x) \in \partial \varphi (J_{\varepsilon }x)\) i.e.

    $$\displaystyle{\left\langle \nabla \varphi _{\varepsilon }(x),z - J_{\varepsilon }x\right\rangle +\varphi (J_{\varepsilon }x) \leq \varphi (z),\,\forall \,z \in \mathbb{H}.}$$

    Hence \(J_{\varepsilon }x = \left(I +\varepsilon \partial \varphi \right)^{-1}\left(x\right)\) and \(\nabla \varphi _{\varepsilon }(x) = A_{\varepsilon }\left(x\right)\), where A is the maximal monotone operator \(\partial \varphi \); \(\nabla \varphi _{\varepsilon }\) is called the Moreau–Yosida approximation of \(\partial \varphi \).

  5. 5.

    If \(\left(u_{0},\hat{u}_{0}\right) \in \partial \varphi\), then for all \(y \in \mathbb{H}\)

    $$\displaystyle{ \left\{\begin{array}{l} \left(a\right)\quad \left\vert \nabla \varphi _{\varepsilon }\left(u_{0}\right)\right\vert \leq \left\vert \hat{u}_{0}\right\vert, \\ \left(b\right)\quad 0 \leq \varphi \left(u_{0}\right) -\varphi _{\varepsilon }\left(u_{0}\right) \leq \varphi \left(u_{0}\right) -\varphi \left(J_{\varepsilon }u_{0}\right) \leq \varepsilon \left\vert \hat{u}_{0}\right\vert ^{2}, \\ \left(c\right)\quad \left\vert J_{\varepsilon }\left(y\right)\right\vert \leq \left\vert y - u_{0}\right\vert +\varepsilon \left\vert \hat{u}_{0}\right\vert + \left\vert u_{0}\right\vert, \\ \left(d\right)\quad \varphi \left(J_{\varepsilon }y\right) \geq \varphi \left(u_{0}\right) -\left\vert \hat{u}_{0}\right\vert \left\vert y - u_{0}\right\vert -\varepsilon \left\vert \hat{u}_{0}\right\vert ^{2}, \\ \left(e\right)\quad \dfrac{\varepsilon } {2}\left\vert \nabla \varphi _{\varepsilon }\left(y\right)\right\vert ^{2} \leq \varphi _{\varepsilon }\left(y\right) -\varphi \left(u_{0}\right) + \left\vert \hat{u}_{0}\right\vert \left\vert y - u_{0}\right\vert +\varepsilon \left\vert \hat{u}_{0}\right\vert ^{2}.\end{array} \right. }$$
    (6.26)

    Indeed \(\left\vert \nabla \varphi _{\varepsilon }\left(u_{0}\right)\right\vert = \left\vert A_{\varepsilon }\left(u_{0}\right)\right\vert \leq \left\vert A^{0}\left(u_{0}\right)\right\vert \leq \left\vert \hat{u}_{0}\right\vert \) and

    $$\displaystyle\begin{array}{rcl} -\varepsilon \left\vert \hat{u}_{0}\right\vert ^{2}& \leq &-\varepsilon \left\langle \hat{u}_{ 0},\nabla \varphi _{\varepsilon }\left(u_{0}\right)\right\rangle {}\\ & =& \left\langle \hat{u}_{0},J_{\varepsilon }u_{0} - u_{0}\right\rangle {}\\ & \leq &\varphi \left(J_{\varepsilon }u_{0}\right) -\varphi \left(u_{0}\right) {}\\ & \leq &\varphi _{\varepsilon }\left(u_{0}\right) -\varphi \left(u_{0}\right) {}\\ & \leq & 0. {}\\ \end{array}$$

    For the inequality \(\left(c\right)\) we have

    $$\displaystyle{\left\vert J_{\varepsilon }\left(y\right)\right\vert \leq \left\vert J_{\varepsilon }\left(y\right) - J_{\varepsilon }\left(u_{0}\right)\right\vert + \left\vert J_{\varepsilon }\left(u_{0}\right) - u_{0}\right\vert + \left\vert u_{0}\right\vert,}$$

    and therefore

    $$\displaystyle\begin{array}{rcl} \varphi \left(J_{\varepsilon }y\right)& \geq &\varphi \left(u_{0}\right) + \left\langle \hat{u}_{0},J_{\varepsilon }\left(y\right) - u_{0}\right\rangle {}\\ & \geq &\varphi \left(u_{0}\right) -\left\vert \hat{u}_{0}\right\vert \left\vert J_{\varepsilon }\left(y\right) - J_{\varepsilon }\left(u_{0}\right)\right\vert -\left\vert \hat{u}_{0}\right\vert \left\vert J_{\varepsilon }\left(u_{0}\right) - u_{0}\right\vert {}\\ \end{array}$$

    which yields \(\left(d\right)\).

    For the last inequality, \(\left(e\right)\), we have

    $$\displaystyle\begin{array}{rcl} \frac{\varepsilon } {2}\left\vert \nabla \varphi _{\varepsilon }\left(y\right)\right\vert ^{2}& =& \varphi _{\varepsilon }\left(y\right) -\varphi \left(J_{\varepsilon }y\right) {}\\ & \leq &\varphi _{\varepsilon }\left(y\right) -\varphi \left(u_{0}\right) + \left\vert \hat{u}_{0}\right\vert \left\vert y - u_{0}\right\vert +\varepsilon \left\vert \hat{u}_{0}\right\vert ^{2}. {}\\ \end{array}$$
  6. 6.

    If \(0 =\varphi (0) \leq \varphi (x)\), \(\forall x \in \mathbb{H}\), it is easy to verify that, moreover

    $$\displaystyle\begin{array}{rcl} \begin{array}{r@{\quad }l} j)\quad &\quad 0 \in \partial \varphi \left(0\right),\,\;0 =\varphi _{\varepsilon }(0) \leq \varphi _{\varepsilon }(x),\,\;J_{\varepsilon }\left(0\right) = \nabla \varphi _{\varepsilon }\left(0\right) = 0, \\ \mathit{jj})\quad &\quad \dfrac{\varepsilon } {2}\vert \nabla \varphi _{\varepsilon }(x)\vert ^{2} \leq \varphi _{\varepsilon }(x) \leq \left\langle \nabla \varphi _{\varepsilon }(x),x\right\rangle,\quad \forall x \in \mathbb{H}, \\ \mathit{jjj})\quad &\quad \vert \nabla \varphi _{\varepsilon }(x)\vert \leq \dfrac{1} {\varepsilon } \vert x\vert,\text{ and }0 \leq \varphi _{\varepsilon }(x) \leq \dfrac{1} {2\varepsilon }\vert x\vert ^{2}\,,\;\forall x \in \mathbb{H}, \\ \mathit{jv})\quad &\quad \left\langle \nabla \varphi _{\varepsilon }(x),x - y\right\rangle \geq -\varphi (J_{\varepsilon }y) -\varepsilon \left\langle \nabla \varphi _{\varepsilon }(x),\nabla \varphi _{\varepsilon }(y)\right\rangle,\;\forall x,y \in \mathbb{H}.\end{array} & & {}\end{array}$$
    (6.27)
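As a concrete illustration (ours, not from the text): for \(\mathbb{H} = \mathbb{R}\) and \(\varphi (x) = \left\vert x\right\vert \), the Moreau regularization \(\varphi _{\varepsilon }\) is the Huber function, \(\nabla \varphi _{\varepsilon }\) is the clipped slope \(x/\varepsilon \), and \(J_{\varepsilon }\) is soft-thresholding; properties 1., 2. and (6.27-jjj) can then be verified pointwise:

```python
# Illustration (not from the text): Moreau regularization of phi(x) = |x|.
# Closed forms: phi_eps is Huber, grad phi_eps is the clipped slope, and
# J_eps x = x - eps * grad phi_eps(x) is soft-thresholding.

def phi(x):
    return abs(x)

def phi_eps(x, eps):
    # inf_z |z - x|^2 / (2 eps) + |z|
    return x * x / (2 * eps) if abs(x) <= eps else abs(x) - eps / 2

def grad_phi_eps(x, eps):
    return x / eps if abs(x) <= eps else (1.0 if x > 0 else -1.0)

def J_eps(x, eps):
    return x - eps * grad_phi_eps(x, eps)

eps = 0.5
for x in [-3.0, -0.6, -0.2, 0.0, 0.3, 0.5, 2.0]:
    g = grad_phi_eps(x, eps)
    # Property 1.: phi_eps(x) = (eps/2)|grad phi_eps(x)|^2 + phi(J_eps x).
    assert abs(phi_eps(x, eps) - (eps / 2 * g * g + phi(J_eps(x, eps)))) < 1e-12
    # Property 2.: phi(J_eps x) <= phi_eps(x) <= phi(x).
    assert phi(J_eps(x, eps)) <= phi_eps(x, eps) + 1e-12 <= phi(x) + 1e-12
    # (6.27-jjj): |grad phi_eps(x)| <= |x|/eps, 0 <= phi_eps(x) <= x^2/(2 eps).
    assert abs(g) <= abs(x) / eps + 1e-12
    assert 0.0 <= phi_eps(x, eps) <= x * x / (2 * eps) + 1e-12
```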

If for a fixed a ≥ 0

$$\displaystyle{\left\langle \hat{x} -\hat{ y},x - y\right\rangle \geq a\vert x - y\vert ^{2},\ \forall \,\left(x,\hat{x}\right),\left(y,\hat{y}\right) \in \partial \varphi,}$$

or equivalently the function

$$\displaystyle{\psi \left(x\right) =\varphi \left(x\right) -\frac{a} {2}\vert x\vert ^{2}}$$

is convex, too, then by the definition of \(J_{\varepsilon }\) and the monotonicity of the operator \(\partial \varphi\) we have \(\forall \,r \in \left]0,1\right[\):

$$\displaystyle\begin{array}{rcl} & & a\left[(1 - r)\vert x - y\vert ^{2} -\dfrac{1 - r} {r} \vert \varepsilon \nabla \varphi _{\varepsilon }(x) -\delta \nabla \varphi _{\delta }(y)\vert ^{2}\right] {}\\ & & \quad \leq a\vert J_{\varepsilon }x - J_{\delta }y\vert ^{2} {}\\ & & \quad \leq \left\langle \nabla \varphi _{\varepsilon }(x) -\nabla \varphi _{\delta }(y),J_{\varepsilon }x - J_{\delta }y\right\rangle {}\\ & & \quad = \left\langle \nabla \varphi _{\varepsilon }(x) -\nabla \varphi _{\delta }(y),x - y\right\rangle -\varepsilon \vert \nabla \varphi _{\varepsilon }(x)\vert ^{2} -\delta \vert \nabla \varphi _{\delta }(y)\vert ^{2} {}\\ & & \quad \qquad \qquad \qquad \qquad \qquad \quad + \left(\varepsilon +\delta \right)\left\langle \nabla \varphi _{\varepsilon }(x),\nabla \varphi _{\delta }(y)\right\rangle {}\\ \end{array}$$

and then

$$\displaystyle{ \begin{array}{r@{\quad }l} a)\quad &\left\langle \nabla \varphi _{\varepsilon }(x) -\nabla \varphi _{\varepsilon }(y),x - y\right\rangle \geq a(1 - r)\vert x - y\vert ^{2}, \\ b)\quad &\left\langle \nabla \varphi _{\varepsilon }(x) -\nabla \varphi _{\delta }(y),x - y\right\rangle \geq a(1 - r)\vert x - y\vert ^{2} -\left(\varepsilon +\delta \right)\vert \nabla \varphi _{\varepsilon }(x)\vert \vert \nabla \varphi _{\delta }(y)\vert \end{array} }$$
(6.28)

for all \(x,y \in \mathbb{H}\), \(r \in (0,1),\ \ \varepsilon,\delta > 0\) such that \(0 \leq a(1 - r)\varepsilon \leq r,\ 0 \leq a(1 - r)\delta \leq \ r\).
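A minimal check of (6.28-a) on a case where everything is explicit (our example, not from the text): for \(\varphi (x) = x^{2}/2\) on \(\mathbb{H} = \mathbb{R}\) (so a = 1), one computes \(\varphi _{\varepsilon }(x) = x^{2}/(2(1+\varepsilon ))\) and \(\nabla \varphi _{\varepsilon }(x) = x/(1+\varepsilon )\), and (6.28-a) holds with equality at the borderline choice \(r =\varepsilon /(1+\varepsilon )\):

```python
# Illustration (not from the text): phi(x) = x^2/2 is strongly convex with
# a = 1; its Yosida approximation is grad phi_eps(x) = x / (1 + eps), and
# (6.28-a) reads (x - y)^2 / (1 + eps) >= (1 - r) (x - y)^2, which is exactly
# the admissibility condition a * (1 - r) * eps <= r.

a = 1.0
eps = 0.1
r = eps / (1 + eps)              # smallest admissible r for this eps
assert a * (1 - r) * eps <= r + 1e-15

grad_phi_eps = lambda x: x / (1 + eps)
for x, y in [(-2.0, 1.0), (0.5, 3.0), (-1.0, -0.25)]:
    lhs = (grad_phi_eps(x) - grad_phi_eps(y)) * (x - y)
    assert lhs >= a * (1 - r) * (x - y) ** 2 - 1e-12
```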

Let \(u_{0} \in \mathbb{H}\) and r 0 ≥ 0 be such that

$$\displaystyle{\left\{u_{0} + r_{0}v: \left\vert v\right\vert \leq 1\right\} \subset \mathrm{ Dom}\varphi.}$$

Note that if

$$\displaystyle{\varphi _{u_{0},r_{0}}^{\#}\mathop{ =}\limits^{ \mathit{def }}\sup \left\{\varphi \left(u_{ 0} + r_{0}v\right): \left\vert v\right\vert \leq 1\right\} < \infty,}$$

then we have for all \(\left(x,\hat{x}\right) \in \partial \varphi\)

$$\displaystyle{ \begin{array}{l@{\quad }l} a)\;\quad &r_{0}\vert \hat{x}\vert +\varphi (x) \leq \left\langle \hat{x},x - u_{0}\right\rangle +\varphi _{ u_{0},r_{0}}^{\#},\quad \forall \,\left(x,\hat{x}\right) \in \partial \varphi, \\ b)\;\quad &r_{0}\vert \hat{x}\vert + \left\vert \varphi (x) -\varphi \left(u_{0}\right)\right\vert \leq \left\langle \hat{x},x - u_{0}\right\rangle + 2\left\vert \left(\partial \varphi \right)^{0}\left(u_{0}\right)\right\vert \left\vert x - u_{0}\right\vert +\varphi _{ u_{0},r_{0}}^{\#} -\varphi \left(u_{0}\right)\end{array} }$$
(6.29)

and in particular for r 0 = 0

$$\displaystyle{ \left\vert \varphi (x) -\varphi \left(u_{0}\right)\right\vert \leq \left\langle \hat{x},x - u_{0}\right\rangle + 2\left\vert \left(\partial \varphi \right)^{0}\left(u_{ 0}\right)\right\vert \left\vert x - u_{0}\right\vert. }$$
(6.30)

Let us prove (6.29). For \(\left(x,\hat{x}\right) \in \partial \varphi\) and \(\left\vert v\right\vert \leq 1\) we have

$$\displaystyle{\left\langle \hat{x},u_{0} + r_{0}v - x\right\rangle +\varphi (x) \leq \varphi \left(u_{0} + r_{0}v\right) \leq \varphi _{u_{0},r_{0}}^{\#}}$$

and consequently

$$\displaystyle{r_{0}\left\langle \hat{x},v\right\rangle +\varphi (x) \leq \left\langle \hat{x},x - u_{0}\right\rangle +\varphi _{ u_{0},r_{0}}^{\#}}$$

which yields (6.29-a) taking the \(\sup _{\left\vert v\right\vert \leq 1}\).

On the other hand for all arbitrary \(\hat{u}_{0} \in \partial \varphi \left(u_{0}\right)\),

$$\displaystyle{\left\langle \hat{u}_{0},x - u_{0}\right\rangle +\varphi \left(u_{0}\right) \leq \varphi (x),}$$

which yields

$$\displaystyle{\left\vert \varphi (x) -\varphi \left(u_{0}\right)\right\vert \leq \varphi (x) -\varphi \left(u_{0}\right) + 2\left\vert \hat{u}_{0}\right\vert \left\vert x - u_{0}\right\vert.}$$

Hence for all \(\left\vert v\right\vert \leq 1\):

$$\displaystyle\begin{array}{rcl} r_{0}\left\langle \hat{x},v\right\rangle + \left\vert \varphi (x) -\varphi \left(u_{0}\right)\right\vert & \leq & \left\langle \hat{x},x - u_{0}\right\rangle + 2\left\vert \hat{u}_{0}\right\vert \left\vert x - u_{0}\right\vert {}\\ & & +\varphi _{u_{0},r_{0}}^{\#} -\varphi \left(u_{ 0}\right) {}\\ \end{array}$$

which yields (6.29-b).

Observing that \(\nabla \varphi _{\varepsilon }(x) \in \partial \varphi (J_{\varepsilon }x)\), we have

$$\displaystyle\begin{array}{rcl} r_{0}\vert \nabla \varphi _{\varepsilon }(x)\vert + \left\vert \varphi (J_{\varepsilon }x) -\varphi \left(u_{0}\right)\right\vert & \leq & \left\langle \nabla \varphi _{\varepsilon }(x),J_{\varepsilon }x - u_{0}\right\rangle + 2\left\vert \hat{u}_{0}\right\vert \left\vert J_{\varepsilon }x - u_{0}\right\vert {}\\ & & +\varphi _{u_{0},r_{0}}^{\#} -\varphi \left(u_{ 0}\right). {}\\ \end{array}$$

But

$$\displaystyle{\left\langle \nabla \varphi _{\varepsilon }(x),J_{\varepsilon }x - u_{0}\right\rangle = \left\langle \nabla \varphi _{\varepsilon }(x),x - u_{0}\right\rangle -\varepsilon \vert \nabla \varphi _{\varepsilon }(x)\vert ^{2}}$$

and

$$\displaystyle\begin{array}{rcl} \left\vert J_{\varepsilon }x - u_{0}\right\vert & \leq & \left\vert J_{\varepsilon }x - J_{\varepsilon }u_{0}\right\vert + \left\vert J_{\varepsilon }u_{0} - u_{0}\right\vert {}\\ & \leq & \left\vert x - u_{0}\right\vert +\varepsilon \left\vert \hat{u}_{0}\right\vert. {}\\ \end{array}$$

Hence for all \(\varepsilon \in ]0,1]\), \(x \in \mathbb{H}\) and \(\hat{u}_{0} \in \partial \varphi (u_{0})\):

$$\displaystyle{ \begin{array}{l} r_{0}\vert \nabla \varphi _{\varepsilon }(x)\vert + \left\vert \varphi (J_{\varepsilon }x) -\varphi \left(u_{0}\right)\right\vert +\varepsilon \vert \nabla \varphi _{\varepsilon }(x)\vert ^{2} \\ \quad \leq \left\langle \nabla \varphi _{\varepsilon }(x),x - u_{0}\right\rangle + 2\left\vert \hat{u}_{0}\right\vert \left\vert x - u_{0}\right\vert + \left[2\left\vert \hat{u}_{0}\right\vert ^{2} +\varphi _{ u_{0},r_{0}}^{\#} -\varphi \left(u_{0}\right)\right].\end{array} }$$
(6.31)

In particular for u 0 = 0 and \(\hat{u}_{0} = 0\) we obtain

  • If \(\varphi \left(x\right) \geq \varphi \left(0\right) = 0\) for all \(x \in \mathbb{H}\) and

    $$\displaystyle{\varphi _{r_{0}}^{\#} =\sup \left\{\varphi \left(r_{ 0}v\right): \left\vert v\right\vert \leq 1\right\} < \infty,}$$

    then:

    $$\displaystyle{ \begin{array}{l@{\quad }l} a)\quad &\quad r_{0}\left\vert \hat{x}\right\vert +\varphi (x) \leq \left\langle \hat{x},x\right\rangle +\varphi _{ r_{0}}^{\#},\quad \forall \,\left(x,\hat{x}\right) \in \partial \varphi, \\ b)\quad &\quad r_{0}\vert \nabla \varphi _{\varepsilon }(x)\vert +\varphi (J_{\varepsilon }x) +\varepsilon \vert \nabla \varphi _{\varepsilon }(x)\vert ^{2} \leq \left\langle \nabla \varphi _{\varepsilon }(x),x\right\rangle +\varphi _{ r_{0}}^{\#}, \\ \quad &\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \forall \,\varepsilon > 0,\forall \,x \in \mathbb{H}.\end{array} }$$
    (6.32)
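Inequality (6.32-b) can be verified numerically on the example \(\varphi (x) = \left\vert x\right\vert \) (our illustration, not from the text), for which \(\varphi _{r_{0}}^{\#} = r_{0}\) and the maps \(\nabla \varphi _{\varepsilon }\), \(J_{\varepsilon }\) have the explicit clipped/soft-thresholding forms:

```python
# Illustration (not from the text): check (6.32-b) for phi(x) = |x|, where
# phi(0) = 0 <= phi and phi_{r0}^# = sup{|r0 * v| : |v| <= 1} = r0.

def grad_phi_eps(x, eps):
    # Gradient of the Moreau regularization of |.|: clipped slope.
    return x / eps if abs(x) <= eps else (1.0 if x > 0 else -1.0)

def J_eps(x, eps):
    return x - eps * grad_phi_eps(x, eps)   # soft-thresholding

r0 = 0.7
for eps in [0.1, 0.5, 1.0]:
    for x in [-2.0, -0.3, 0.0, 0.4, 1.5]:
        g = grad_phi_eps(x, eps)
        # (6.32-b): r0 |grad| + phi(J_eps x) + eps |grad|^2 <= <grad, x> + r0.
        lhs = r0 * abs(g) + abs(J_eps(x, eps)) + eps * g * g
        rhs = g * x + r0
        assert lhs <= rhs + 1e-12
```

For \(\left\vert x\right\vert >\varepsilon \) the two sides are equal, so the bound is sharp in this example.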

6.3.7.3 Convex Functions on \(C([0,T]; \mathbb{R}^{d})\)

Proposition 6.35.

If \(\varphi: \mathbb{R}^{d} \rightarrow ] -\infty,+\infty ]\) is a proper convex l.s.c. function and

\(\Phi:\) \(C([0,T]; \mathbb{R}^{d}) \rightarrow ] -\infty,+\infty ]\),

$$\displaystyle{ \Phi (x) = \left\{\begin{array}{l} \int \nolimits _{0}^{T}\varphi (x(t))\mathit{dt},\,\,{ \mathit{if }}\varphi \left(x\right) \in L^{1}(0,T), \\ + \infty,\;\quad { \mathit{otherwise,}} \end{array} \right. }$$
(6.33)

then

\(c_{1}\)):

\(\Phi \) is a proper convex l.s.c. function,

\(c_{2}\)):

\(\partial \Phi (x)\;\mathop{ =}\limits^{ \mathit{def }}\bigg\{k \in \ \mathit{BV }([0,T]; \mathbb{R}^{d})\) :

$$\displaystyle{\left.\int \nolimits _{0}^{T}\left\langle y(r) - x(r),\mathit{dk}(r)\right\rangle + \Phi (x) \leq \Phi (y),\;\,\forall \,y \in C([0,T]; \mathbb{R}^{d})\right\}}$$

is a maximal monotone operator.

Proof.

We shall prove only the maximality of the operator \(\partial \Phi \), since the other properties are immediate. Let \(\mathbb{X} = C([0,T]; \mathbb{R}^{d})\). Then the dual space is \(\mathbb{X}^{{\ast}} = \mathit{BV }([0,T]; \mathbb{R}^{d})\). Let \(\left(x,k\right) \in \mathbb{X} \times \mathbb{X}^{{\ast}}\) be such that

$$\displaystyle\begin{array}{rcl} \left\langle k-\zeta,x - z\right\rangle \geq 0,\,\text{ for all }\left(z,\zeta \right) \in \partial \Phi.& &{}\end{array}$$
(6.34)

The function \(\psi (z) = \Phi (z) + \dfrac{1} {2}\left\Vert z - x\right\Vert _{\mathbb{X}}^{2} -\left\langle k,z\right\rangle\) defined on \(\mathbb{X}\) is a proper convex l.s.c. function. Furthermore, there exists a \(c \in \mathbb{R}\) such that \(\Phi (z) \geq c,\forall \,z \in \mathbb{X}\). By Ekeland’s principle there exists a \(z_{\varepsilon } \in \mathbb{X}\) such that

$$\displaystyle\begin{array}{rcl} \psi (\mathit{\ }z_{\varepsilon })& \rightarrow & \inf \,\{\psi (z): z \in \mathbb{X}\}, {}\\ \psi (\mathit{\ }z_{\varepsilon })& \leq & \psi (z) + \sqrt{\varepsilon }\left\Vert z -\mathit{\ }z_{\varepsilon }\right\Vert _{\mathbb{X}} =\tilde{\psi } (z)\,,\,\,\forall \,z \in \mathbb{X}. {}\\ \end{array}$$

Then \(0 \in \partial \tilde{\psi }\) (\(z_{\varepsilon })\), which means

$$\displaystyle{ \partial \Phi (\mathit{\ }z_{\varepsilon }) + F(\mathit{\ }z_{\varepsilon } - x) - k + \sqrt{\varepsilon }\theta _{\varepsilon }\ni 0, }$$
(6.35)

where \(F: \mathbb{X} \rightrightarrows \mathbb{X}^{{\ast}}\) is the duality mapping and \(\left\Vert \theta _{\varepsilon }\right\Vert _{\mathbb{X}^{{\ast}}} \leq 1\). Pairing (6.35) with \(z_{\varepsilon } - x\) we have \(\left\langle \zeta _{\varepsilon } - k,\,z_{\varepsilon } - x\right\rangle + \left\Vert z_{\varepsilon } - x\right\Vert _{\mathbb{X}}^{2} + \sqrt{\varepsilon }\left\langle \theta _{\varepsilon },\,z_{\varepsilon } - x\right\rangle = 0\), for some \(\zeta _{\varepsilon } \in \partial \Phi (z_{\varepsilon })\), which implies by (6.34) that \(\left\Vert z_{\varepsilon } - x\right\Vert _{\mathbb{X}} \leq \sqrt{\varepsilon }\). Hence \(z_{\varepsilon }\mathop{ \rightarrow }\limits^{ \mathbb{X}}x\) and, by (6.35), \(\zeta _{\varepsilon }\mathop{ \rightarrow }\limits^{ \mathbb{X}^{{\ast}}}k\), as \(\varepsilon \rightarrow 0\). From the definition of the subdifferential operator, \(\left\langle \zeta _{\varepsilon },\,y - z_{\varepsilon }\right\rangle + \Phi (z_{\varepsilon }) \leq \Phi (y)\), \(\forall \,y \in \mathbb{X}\), and passing to the limit as \(\varepsilon \rightarrow 0\) we obtain \(\left(x,k\right) \in \partial \Phi \). ■ 

Proposition 6.36.

If \(\varphi: \mathbb{R}^{d} \rightarrow ] -\infty,+\infty ]\)  is a proper convex l.s.c. function, \(\Phi \) is defined by (6.33), \(x \in C([0,T]; \mathbb{R}^{d})\) and \(k \in C([0,T]; \mathbb{R}^{d}) \cap \mathit{BV }([0,T]; \mathbb{R}^{d})\) , then the following assertions are equivalent:

$$\displaystyle{ \begin{array}{l@{\quad }l} a_{1})\quad &\int \nolimits _{s}^{t}\left\langle z - x(r),\mathit{dk}(r)\right\rangle + \int \nolimits _{ s}^{t}\varphi (x(r))\mathit{dr} \leq (t - s)\varphi (z), \\ \quad &\qquad \forall \,z \in \mathbb{R}^{d},\;\forall \,0 \leq s \leq t \leq T, \\ a_{2})\quad &\int \nolimits _{s}^{t}\left\langle y(r) - x(r),\mathit{dk}(r)\right\rangle + \int \nolimits _{ s}^{t}\varphi (x(r))\mathit{dr} \leq \int \nolimits _{ s}^{t}\varphi (y(r))\mathit{dr}, \\ \quad &\qquad \forall \,y \in C([0,T]; \mathbb{R}^{d}),\;\forall \,0 \leq s \leq t \leq T, \\ a_{3})\quad &\int \nolimits _{s}^{t}\left\langle x(r) - z,\mathit{dk}(r) -\hat{ z}\mathit{dr}\right\rangle \geq 0,\;\forall \,\left(z,\hat{z}\right) \in \partial \varphi,\;\forall \,0 \leq s \leq t \leq T, \\ a_{4})\quad &\int \nolimits _{s}^{t}\left\langle x(r) - y(r),\mathit{dk}(r) -\hat{ y}(r)\mathit{dr}\right\rangle \geq 0,\;\forall \,y,\hat{y} \in C([0,T]; \mathbb{R}^{d}) \\ \quad &\qquad \text{ with }\left(y(r),\hat{y}(r)\right) \in \partial \varphi,\;\forall \,r \in [0,T],\;\forall \,0 \leq s \leq t \leq T, \\ a_{5})\quad &\left(x,k\right) \in \partial \Phi,\text{ that is, }\forall \,y \in C([0,T]; \mathbb{R}^{d}): \\ \quad &\qquad \int \nolimits _{0}^{T}\left\langle y(r) - x(r),\mathit{dk}(r)\right\rangle + \int \nolimits _{ 0}^{T}\varphi (x(r))\mathit{dr} \leq \int \nolimits _{ 0}^{T}\varphi (y(r))\mathit{dr}. \end{array} }$$
(6.36)

Proof.

We shall show that \(a_{1} \Leftrightarrow a_{2} \Rightarrow a_{3} \Rightarrow a_{4} \Rightarrow a_{5} \Rightarrow a_{2}\).

a 2 ⇒ a 1: is evident.

\(a_{1} \Rightarrow a_{2}\): Let \(y \in C\left(\left[0,T\right]; \mathbb{R}^{d}\right)\). We extend \(y\left(t\right) = y\left(0\right)\) for t ≤ 0 and \(y\left(t\right) = y\left(T\right)\) for t ≥ T. The same extension will be considered for the functions x and k. To prove a 2) it is sufficient to consider the case 0 < s < t < T.

Since \(\varphi\) is bounded from below by an affine function, from a 1) we deduce that \(\varphi \left(x\right) \in L^{1}\left(0,T\right)\).

Let \(n_{0} \in \mathbb{N}^{{\ast}}\) be such that \(0 < \frac{1} {n_{0}} < s < t < t + \frac{1} {n_{0}} < T\) and n ≥ n 0. Let \(u \in [s,t]\). From \(\left(a_{1}\right)\) we have for \(z = y\left(u\right)\)

$$\displaystyle{\int _{u-1/n}^{u}\left\langle y\left(u\right) - x\left(r\right),\mathit{dk}\left(r\right)\right\rangle +\int _{ u-1/n}^{u}\varphi \left(x\left(r\right)\right)\mathit{dr} \leq \frac{1} {n}\ \varphi (y(u)).}$$

Integrating on \(\left[s,t\right]\) with respect to u we deduce that

$$\displaystyle{ \begin{array}{r} \int _{s}^{t}\left(n\int _{ u-1/n}^{u}\left\langle y\left(u\right) - x\left(r\right),\mathit{dk}\left(r\right)\right\rangle \right)\mathit{du} + \int _{ s}^{t}\left(n\int _{ u-1/n}^{u}\varphi \left(x\left(r\right)\right)\mathit{dr}\right)\mathit{du} \\ \leq \int _{s}^{t}\varphi \left(y\left(u\right)\right)\mathit{du}. \end{array} }$$
(6.37)

By Fatou’s Lemma we have

$$\displaystyle{\int _{s}^{t}\varphi \left(x\left(u\right)\right)\mathit{du} \leq \liminf _{ n\rightarrow +\infty }\int _{s}^{t}\left(n\int _{ u-1/n}^{u}\varphi \left(x\left(r\right)\right)\mathit{dr}\right)\mathit{du}.}$$

On the other hand by the Lebesgue dominated convergence theorem

$$\displaystyle\begin{array}{rcl} & & \int _{s}^{t}\left(n\int _{ u-1/n}^{u}\left\langle y\left(u\right) - x\left(r\right),\mathit{dk}\left(r\right)\right\rangle \right)\mathit{du} {}\\ & & = \int _{-\infty }^{+\infty }\left(\int _{ -\infty }^{+\infty }n\mathbf{1}_{\left[s,t\right]}\left(u\right)\mathbf{1}_{\left[u-1/n,u\right]}\left(r\right)\left\langle y\left(u\right) - x\left(r\right),\mathit{dk}\left(r\right)\right\rangle \right)\mathit{du} {}\\ & & = \int _{-\infty }^{+\infty }\left\langle \int _{ -\infty }^{+\infty }n\mathbf{1}_{\left[s,t\right]}\left(u\right)\mathbf{1}_{\left[r,r+1/n\right]}\left(u\right)\left[y\left(u\right) - x\left(r\right)\right]\mathit{du},\mathit{dk}\left(r\right)\right\rangle {}\\ & & = \int _{-\infty }^{+\infty }\left\langle n\int _{ r}^{r+1/n}\mathbf{1}_{\left[s,t\right]}\left(u\right)\left[y\left(u\right) - x\left(r\right)\right]\mathit{du},\mathit{dk}\left(r\right)\right\rangle {}\\ & & \rightarrow \int _{-\infty }^{+\infty }\mathbf{1}_{\left[s,t\right]}\left(r\right)\left\langle y\left(r\right) - x\left(r\right),\mathit{dk}\left(r\right)\right\rangle,\quad \text{ as }n \rightarrow \infty. {}\\ \end{array}$$

Passing to \(\liminf _{n\rightarrow +\infty }\) in (6.37) \(\left(a_{2}\right)\) follows.

a 2 ⇒ a 3: is obtained by adding the following inequalities term by term:

$$\displaystyle\begin{array}{rcl} \int \nolimits _{s}^{t}\left\langle z - x(r),\mathit{dk}(r)\right\rangle +\int \nolimits _{ s}^{t}\varphi (x(r))\mathit{dr}& \leq & \int \nolimits _{ s}^{t}\varphi (z)\mathit{dr} {}\\ \int \nolimits _{s}^{t}\left\langle x(r) - z,\hat{z}\right\rangle \mathit{dr} +\int \nolimits _{ s}^{t}\varphi (z)\mathit{dr}& \leq & \int \nolimits _{ s}^{t}\varphi (x(r))\mathit{dr}. {}\\ \end{array}$$

a 3 ⇒ a 4 : is proved in Proposition 6.17 since \(A = \partial \varphi\) is maximal monotone.

a 4 ⇒ a 5: Let \((\tilde{x},\tilde{k}) \in \partial \Phi \) be arbitrary. For all \(y,\,\hat{y} \in C([0,T]; \mathbb{R}^{d})\) with \(\left(y(r),\hat{y}(r)\right) \in \partial \varphi \) for all \(r \in [0,T]\), we have \((y,\int _{0}^{\cdot }\hat{y}\mathit{dt}) \in \partial \Phi \) and

$$\displaystyle{\begin{array}{l} \int \nolimits _{0}^{T}\left\langle \tilde{x}(r) - y(r),d\tilde{k}\left(r\right) -\hat{ y}\left(r\right)\mathit{dr}\right\rangle \geq 0\quad (\partial \Phi \text{ is monotone}), \\ \int \nolimits _{0}^{T}\left\langle x(r) - y(r),\mathit{dk}\left(r\right) -\hat{ y}\left(r\right)\mathit{dr}\right\rangle \geq 0\;\,(\text{ by }a_{ 4}). \end{array} }$$

Since \(A = \partial \varphi\) is maximal monotone, by Proposition 6.17 we have

$$\displaystyle{\int \nolimits _{0}^{T}\left\langle \tilde{x}\left(t\right) - x\left(t\right),d\tilde{k}\left(t\right) -\mathit{dk}\left(t\right)\right\rangle \geq 0,}$$

where \((\tilde{x},\tilde{k}) \in \partial \Phi \) is arbitrary. But by Proposition 6.35, \(\partial \Phi \) is a maximal monotone operator. Hence \(\left(x,k\right) \in \partial \Phi \).

a 5 ⇒ a 2: Let \(a,b \geq 0\) be such that \(\varphi (y) + a\left\vert y\right\vert + b \geq 0\) for all \(y \in \mathbb{R}^{d}\). From a 5) it follows that \(\varphi (x) \in L^{1}(0,T)\). Let \(\alpha _{n} \in C([0,T];\ \mathbb{R})\), \(0 \leq \alpha _{n} \leq 1\), be such that \(\alpha _{n} \nearrow \mathbf{1}_{\left]s,t\right[}\). In a 5) we replace y by \((1 -\alpha _{n})x +\alpha _{n}y\). So we have

$$\displaystyle{\int \nolimits _{0}^{T}\left\langle (y - x)\alpha _{ n},\mathit{dk}(r)\right\rangle +\int \nolimits _{ 0}^{T}\varphi \left(x\right)\mathit{dr} \leq \int \nolimits _{ 0}^{T}\left(\left(1 -\alpha _{ n}\right)\varphi \left(x\right) +\alpha _{n}\varphi \left(y\right)\right)\mathit{dr}}$$

and furthermore

$$\displaystyle\begin{array}{rcl} & & \int \nolimits _{0}^{T}\left\langle \left(y - x\right)\alpha _{ n},\mathit{dk}\left(r\right)\right\rangle + \int \nolimits _{0}^{T}\alpha _{ n}\varphi \left(x\right)\mathit{dr} {}\\ & & \qquad \qquad \qquad \leq \int \nolimits _{0}^{T}\alpha _{ n}\varphi \left(y\right)\mathit{dr} {}\\ & & \qquad \qquad \qquad \leq \int \nolimits _{0}^{T}\alpha _{ n}\left(\varphi \left(y\right) + a\left\vert y\right\vert + b\right)\mathit{dr} -\int \nolimits _{0}^{T}\alpha _{ n}\left(a\left\vert y\right\vert + b\right)\mathit{dr} {}\\ & & \qquad \qquad \qquad \leq \int \nolimits _{0}^{T}1_{ [s,t]}\left(r\right)\left(\varphi \left(y\right) + a\left\vert y\right\vert + b\right)\mathit{dr} -\int \nolimits _{0}^{T}\alpha _{ n}\left(a\left\vert y\right\vert + b\right)\mathit{dr} {}\\ & & \qquad \qquad \qquad \leq \int \nolimits _{s}^{t}\varphi \left(y\right)\mathit{dr} + \int \nolimits _{ 0}^{T}\left(1_{ [s,t]} -\alpha _{n}\right)\left(a\left\vert y\right\vert + b\right)\mathit{dr}. {}\\ \end{array}$$

Passing to the limit as n → , a 2) follows. ■ 

Proposition 6.37.

If \(\varphi: \mathbb{R}^{d} \rightarrow ] -\infty,+\infty ]\) is a proper convex l.s.c. function and

\(\tilde{\Phi }\) : \(L^{2}(\Omega;C([0,T]; \mathbb{R}^{d})) \rightarrow ] -\infty,+\infty ]\),

$$\displaystyle{ \tilde{\Phi }(x) = \left\{\begin{array}{l} \mathbb{E}\int \nolimits _{0}^{T}\varphi (x(t))\mathit{dt},\,\,{ \mathit{if }}\varphi \left(x\right) \in L^{1}(\Omega \times ]0,T[), \\ + \infty,\;\quad \quad { \mathit{otherwise}} \end{array} \right. }$$
(6.38)

then

  1. a)

    \(\tilde{\Phi }\) is a proper convex l.s.c. function,

  2. b)

    \(\partial \tilde{\Phi }(X)\;\mathop{ =}\limits^{ \mathit{def }}\left\{K \in L^{2}(\Omega;\ \mathit{BV }([0,T]; \mathbb{R}^{d})): \mathbb{E}\int \nolimits _{0}^{T}\left\langle Y _{ t} - X_{t},\mathit{dK}_{t}\right\rangle \right.\)

    $$\displaystyle{\left.+\mathbb{E}\int \nolimits _{0}^{T}\varphi (X_{ t})\mathit{dt} \leq \mathbb{E}\int \nolimits _{0}^{T}\varphi (Y _{ t})\mathit{dt},\;\,\forall \,Y \in \ L^{2}(\Omega;C([0,T]; \mathbb{R}^{d}))\right\}}$$

    is a maximal monotone operator,

  3. c)

    \(K \in \partial \tilde{\Phi }(X)\) iff \(K_{\cdot }\left(\omega \right) \in \partial \Phi (X_{\cdot }(\omega )),\; \mathbb{P}\text{ -a.s.}\;\omega \in \Omega \) , with \(\partial \Phi \) characterized in Proposition 6.36.

Proof.

The assertions a) and b) are obtained in the same manner as c1) and c2) from Proposition 6.35. The point c) follows from b) by taking \(Y:= X1_{A^{c}} + Y 1_{A}\), where \(A \in \mathcal{F}\) is arbitrary. ■ 

Proposition 6.38.

Let \(\varphi: \mathbb{R}^{d} \rightarrow ] -\infty,+\infty ]\) be a proper convex l.s.c. function such that \(\mathrm{int}\left(\mathrm{Dom}\left(\varphi \right)\right)\neq \varnothing \). Let

\(\Phi: C([0,T]; \mathbb{R}^{d}) \rightarrow ] -\infty,+\infty ]\) be defined by (6.33). Let \(\left(u_{0},\hat{u}_{0}\right) \in \partial \varphi\), \(r_{0} \geq 0\) and

$$\displaystyle{\varphi _{u_{0},r_{0}}^{\#}\mathop{ =}\limits^{ \mathit{def }}\sup \left\{\varphi \left(u_{ 0} + r_{0}v\right): \left\vert v\right\vert \leq 1\right\}.}$$

Then for all 0 ≤ s ≤ t ≤ T and \(\left(x,k\right) \in \partial \Phi \) :

$$\displaystyle{ \begin{array}{l} r_{0}\left(\left\updownarrow k\right\updownarrow _{t} -\left\updownarrow k\right\updownarrow _{s}\right) + \int _{s}^{t}\varphi (x(r))\mathit{dr} \\ \quad \quad \quad \leq \int _{s}^{t}\left\langle x\left(r\right) - u_{ 0},\mathit{dk}\left(r\right)\right\rangle + \left(t - s\right)\varphi _{u_{0},r_{0}}^{\#}.\end{array} }$$
(6.39)

Moreover for all 0 ≤ s ≤ t ≤ T and for all \(\left(x,k\right) \in \partial \Phi \) :

$$\displaystyle{ \begin{array}{l} r_{0}\left(\left\updownarrow k\right\updownarrow _{t} -\left\updownarrow k\right\updownarrow _{s}\right) + \int _{s}^{t}\left\vert \varphi (x(r)) -\varphi \left(u_{ 0}\right)\right\vert \mathit{dr} \leq \int _{s}^{t}\left\langle x\left(r\right) - u_{ 0},\mathit{dk}\left(r\right)\right\rangle \\ \quad \quad \quad + \int \nolimits _{s}^{t}\left(2\left\vert \hat{u}_{ 0}\right\vert \left\vert x(r) - u_{0}\right\vert +\varphi _{ u_{0},r_{0}}^{\#} -\varphi \left(u_{ 0}\right)\right)\mathit{dr}.\end{array} }$$
(6.40)

Proof.

Let \(0 \leq s = t_{0} < t_{1} <\ldots < t_{n} = t \leq T\), \(\max _{i}\left(t_{i+1} - t_{i}\right) =\delta _{n} \rightarrow 0\). Writing (6.36-a1) with \(z = u_{0} + r_{0}v\), we obtain

$$\displaystyle\begin{array}{rcl} r_{0}\left\langle k\left(t_{i+1}\right) - k\left(t_{i}\right),v\right\rangle + \int _{t_{i}}^{t_{i+1} }\varphi (x(r))\mathit{dr}& \leq & \int _{t_{i}}^{t_{i+1} }\left\langle x(r) - u_{0},\mathit{dk}(r)\right\rangle {}\\ & +& \left(t_{i+1} - t_{i}\right)\varphi _{u_{0},r_{0}}^{\#}, {}\\ \end{array}$$

for all \(\left\vert v\right\vert \leq 1\). Hence

$$\displaystyle\begin{array}{rcl} r_{0}\left\vert k\left(t_{i+1}\right) - k\left(t_{i}\right)\right\vert + \int _{t_{i}}^{t_{i+1} }\varphi (x(r))\mathit{dr}& \leq & \int _{t_{i}}^{t_{i+1} }\left\langle x(r) - u_{0},\mathit{dk}(r)\right\rangle {}\\ & +& \left(t_{i+1} - t_{i}\right)\varphi _{u_{0},r_{0}}^{\#} {}\\ \end{array}$$

and adding term by term for i = 0 to i = n − 1 we have

$$\displaystyle{r_{0}\sum _{i=0}^{n-1}\left\vert k\left(t_{ i+1}\right) - k\left(t_{i}\right)\right\vert +\int _{ s}^{t}\varphi (x(r))\mathit{dr} \leq \int _{ s}^{t}\left\langle x\left(r\right) - u_{ 0},\mathit{dk}\left(r\right)\right\rangle + \left(t - s\right)\varphi _{u_{0},r_{0}}^{\#},}$$

which clearly yields (6.39). The second inequality (6.40) now follows, using the fact that

$$\displaystyle{\left\vert \varphi (x) -\varphi \left(u_{0}\right)\right\vert \leq \varphi (x) -\varphi \left(u_{0}\right) + 2\left\vert \hat{u}_{0}\right\vert \left\vert x - u_{0}\right\vert,}$$

for all \(x \in \mathbb{R}^{d}\) and \(\left(u_{0},\hat{u}_{0}\right) \in \partial \varphi\). ■ 

Remark 6.39.

Since \(\varphi\) is locally bounded on \(\mathrm{int}(\mathrm{Dom}\varphi )\), it follows that for

$$\displaystyle{u_{0} \in \mathrm{ int}(\mathrm{Dom}\varphi )\left[=\mathrm{ int}\left(\mathrm{Dom}\left(\partial \varphi \right)\right)\right],}$$

there exist \(r_{0} > 0\) and \(M_{0} \geq 0\) such that

$$\displaystyle{\sup \left\{\left\vert \varphi \left(u_{0} + r_{0}v\right)\right\vert: \left\vert v\right\vert \leq 1\right\} \leq M_{0}.}$$

6.3.8 Semiconvex Functions

Let \(\varphi: \mathbb{R}^{d} \rightarrow \left]-\infty,+\infty \right]\).

Define

$$\displaystyle{\mathrm{Dom}\left(\varphi \right) = \left\{v \in \mathbb{R}^{d}:\varphi \left(v\right) < +\infty \right\}.}$$

We say that \(\varphi\) is a proper function if \(\mathrm{Dom}\left(\varphi \right)\neq \varnothing \) and \(\mathrm{Dom}\left(\varphi \right)\) has no isolated points.

Definition 6.40.

The (Fréchet) subdifferential of \(\varphi\) at \(x \in \mathbb{R}^{d}\) is defined by

$$\displaystyle{\partial ^{-}\varphi \left(x\right) = \left\{\hat{x} \in \mathbb{R}^{d}: \quad \mathop{\lim \inf }\limits_{y \rightarrow x}\dfrac{\varphi \left(y\right) -\varphi \left(x\right) -\left\langle \hat{x},y - x\right\rangle } {\left\vert y - x\right\vert } \geq 0\right\},}$$

if \(x \in \mathrm{ Dom}\left(\varphi \right)\), and \(\partial ^{-}\varphi \left(x\right) = \varnothing \), if \(x\notin \mathrm{Dom}\left(\varphi \right)\).

Example 6.41.

If E is a non-empty closed subset of \(\mathbb{R}^{d}\) and

$$\displaystyle{\varphi \left(x\right) = I_{E}\left(x\right) = \left\{\begin{array}{c@{\quad }l} 0, \quad &\text{ if }x \in E,\\ + \infty,\quad &\text{ if } x\notin E, \end{array} \right.}$$

then \(\varphi\) is l.s.c. and, by a result of Colombo and Goncharov [17] (valid for any closed subset E of a Hilbert space), we have

$$\displaystyle\begin{array}{rcl} \partial ^{-}I_{ E}\left(x\right)& =& \{\hat{x} \in \mathbb{R}^{d}: \quad \limsup _{ y\rightarrow x,\ y\in E}\dfrac{\left\langle \hat{x},y - x\right\rangle } {\left\vert y - x\right\vert } \leq 0\} {}\\ & =& \left\{\begin{array}{ll} 0, &\quad \text{ if }x \in \mathrm{ int}\left(E\right), \\ N_{E}\left(x\right),&\quad \text{ if }x \in \mathrm{ Bd}\left(E\right), \\ \varnothing, &\quad \text{ if }x\notin E, \end{array} \right.{}\\ \end{array}$$

where \(N_{E}\left(x\right)\) is the closed normal cone to E at \(x \in \mathrm{ Bd}\left(E\right)\):

$$\displaystyle{N_{E}\left(x\right)\mathop{ =}\limits^{ \mathit{def }}\left\{u \in \mathbb{R}^{d}:\lim _{\varepsilon \searrow 0}\frac{d_{E}\left(x +\varepsilon u\right)} {\varepsilon } = \left\vert u\right\vert \right\}}$$

and

$$\displaystyle{d_{E}\left(z\right)\mathop{ =}\limits^{ \mathit{def }}\inf \left\{\left\vert z - x\right\vert: x \in E\right\}}$$

is the distance of a point \(z \in \mathbb{R}^{d}\) to E.
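For intuition, these objects are easy to compute when E is a finite set of points. The following is a minimal numerical sketch (an editorial illustration only, assuming NumPy is available; the sampled circle and the test point are our own choices, not taken from the text):

```python
import numpy as np

def d_E(z, E):
    # distance of z to the finite set E (each row of E is a point of R^d)
    return np.min(np.linalg.norm(E - z, axis=1))

def Pi_E(z, E):
    # one nearest point of E to z (a selection from the set Pi_E(z))
    return E[np.argmin(np.linalg.norm(E - z, axis=1))]

# E: 360 points sampled on the unit circle in R^2
theta = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
E = np.stack([np.cos(theta), np.sin(theta)], axis=1)

z = np.array([2.0, 0.0])
print(d_E(z, E))    # 1.0: the sample (1, 0) is the nearest point
print(Pi_E(z, E))   # [1. 0.]
```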

Denote

$$\displaystyle{\begin{array}{l} a)\quad \mathrm{Dom}\left(\partial ^{-}\varphi \right) = \left\{x \in \mathbb{R}^{d}: \partial ^{-}\varphi \left(x\right)\neq \varnothing \right\}, \\ b)\quad \partial ^{-}\varphi = \left\{\left(x,\hat{x}\right):\; x \in \mathrm{ Dom}\left(\partial ^{-}\varphi \right),\;\hat{x} \in \partial ^{-}\varphi \left(x\right)\right\}. \end{array} }$$

Definition 6.42.

A closed set \(E \subset \mathbb{R}^{d}\) is γ–semiconvex, γ ≥ 0, if for all \(x \in \mathrm{ Bd}\left(E\right)\) there exists an \(\hat{x}\neq 0\) such that

$$\displaystyle{\left\langle \hat{x},y - x\right\rangle \leq \gamma \left\vert \hat{x}\right\vert \left\vert y - x\right\vert ^{2},\quad \text{ for all }y \in E.}$$

Note that if E is a semiconvex set, then

$$\displaystyle{\partial ^{-}I_{ E}\left(x\right) =\{\hat{ x} \in \mathbb{R}^{d}: \left\langle \hat{x},y - x\right\rangle \leq \gamma \left\vert \hat{x}\right\vert \left\vert y - x\right\vert ^{2},\quad \text{ for all }y \in E\}.}$$

Definition 6.43.

\(\varphi: \mathbb{R}^{d} \rightarrow ] -\infty,+\infty ]\) is a semiconvex function if there exist ρ, γ ≥ 0 such that

  1. (a)

    \(\overline{\mathrm{Dom}\left(\varphi \right)}\) is γ–semiconvex;

  2. (b)

    \(\mathrm{Dom}\left(\partial ^{-}\varphi \right)\neq \varnothing;\)

  3. (c)

    for all \(y \in \mathbb{R}^{d}\) and for all \(\left(x,\hat{x}\right) \in \partial ^{-}\varphi\)

    $$\displaystyle{\left\langle \hat{x},y - x\right\rangle +\varphi \left(x\right) \leq \varphi \left(y\right) + \left(\rho +\gamma \left\vert \hat{x}\right\vert \right)\left\vert y - x\right\vert ^{2}.}$$

A function \(\varphi\) satisfying the properties of this definition will sometimes be called a (ρ, γ)–semiconvex function, or a γ–semiconvex function (since the second parameter is the most important one).

Note that a convex function is a \(\left(\rho,\gamma \right)\)–semiconvex function for all ρ, γ ≥ 0.

A set E is γ–semiconvex iff I E is a (0, γ)–semiconvex function.
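As a concrete smooth (and non-convex) instance of Definition 6.43, consider \(\varphi \left(x\right) = -\left\vert x\right\vert ^{2}\), whose Fréchet subdifferential at x is \(\left\{-2x\right\}\); an elementary computation shows it is \(\left(1,0\right)\)–semiconvex. A numerical check of property (c) (an editorial illustration only, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)

# phi(x) = -|x|^2 has Fréchet subdifferential {-2x}; we test property (c)
# of Definition 6.43 with rho = 1, gamma = 0:
#   <x_hat, y - x> + phi(x) <= phi(y) + rho |y - x|^2.
phi = lambda x: -np.dot(x, x)
rho = 1.0

for _ in range(1000):
    x, y = rng.normal(size=(2, 3))   # random points of R^3
    x_hat = -2.0 * x
    lhs = np.dot(x_hat, y - x) + phi(x)
    rhs = phi(y) + rho * np.dot(y - x, y - x)
    assert lhs <= rhs + 1e-12
print("phi(x) = -|x|^2 satisfies (c) with rho = 1, gamma = 0 on all samples")
```

In fact the two sides are equal here, since \(\left\vert y\right\vert ^{2} -\left\vert x\right\vert ^{2} - 2\left\langle x,y - x\right\rangle = \left\vert y - x\right\vert ^{2}\).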

If we write the definition of semiconvexity for a fixed \(\left(x_{0},\hat{x}_{0}\right) \in \partial ^{-}\varphi\), then it is clear that we have:

Proposition 6.44.

If \(\varphi: \mathbb{R}^{d} \rightarrow \left]-\infty,+\infty \right]\) is a semiconvex function, then there exists an a ≥ 0 such that

$$\displaystyle{\varphi \left(y\right) + a\left\vert y\right\vert ^{2} + a \geq 0,\;\;\forall \ y \in \mathbb{R}^{d}.}$$

In particular \(\varphi\) is bounded below on bounded subsets of \(\mathbb{R}^{d}\) .

The following properties also hold:

Proposition 6.45.

Let \(\varphi: \mathbb{R}^{d} \rightarrow \left]-\infty,+\infty \right]\) be a semiconvex function. If there exist \(u_{0} \in \mathrm{ Dom}\left(\varphi \right)\), \(r_{0},M_{0} > 0\) such that

$$\displaystyle{\varphi \left(u_{0} + r_{0}v\right) \leq M_{0},\;\;\forall \ \left\vert v\right\vert \leq 1,}$$

then there exist \(\rho _{0} > 0\) and \(b \geq 0\) such that

$$\displaystyle{ \rho _{0}\left\vert \hat{x}\right\vert \leq \left\langle \hat{x},x - u_{0}\right\rangle + b + b\left(1 + \left\vert \hat{x}\right\vert \right)\left\vert x - u_{0}\right\vert ^{2},\;\;\forall \ \left(x,\hat{x}\right) \in \partial ^{-}\varphi }$$
(6.41)

and moreover there exist \(M \geq 0\) and \(\delta _{0} \in \left]0,r_{0}\right]\) such that

$$\displaystyle{ \left\vert \hat{x}\right\vert \leq M,\;\;\forall \ x \in \bar{ B}\left(u_{0},\delta _{0}\right) \subset \mathrm{ Dom}\left(\varphi \right)\text{ and }\hat{x} \in \partial ^{-}\varphi \left(x\right). }$$
(6.42)

Proof.

Let \(\left(x,\hat{x}\right) \in \partial ^{-}\varphi\). Then for all \(\left\vert v\right\vert \leq 1\) and λ ∈ [0, 1]:

$$\displaystyle{\left\langle \hat{x},\left(u_{0} + r_{0}\lambda v\right) - x\right\rangle +\varphi \left(x\right) \leq \varphi \left(u_{0} + r_{0}\lambda v\right) + \left(\rho +\gamma \left\vert \hat{x}\right\vert \right)\left\vert \left(u_{0} + r_{0}\lambda v\right) - x\right\vert ^{2},}$$

which yields

$$\displaystyle{r_{0}\lambda \left\langle \hat{x},v\right\rangle \leq \left\langle \hat{x},x - u_{0}\right\rangle + \left(a\left\vert x\right\vert ^{2} + a\right) + M_{ 0} + 2\left(\rho +\gamma \left\vert \hat{x}\right\vert \right)\left[\left\vert x - u_{0}\right\vert ^{2} + r_{ 0}^{2}\lambda ^{2}\right].}$$

Taking the supremum over \(\left\vert v\right\vert \leq 1\) and choosing \(\lambda = 1/\left(1 + 2\gamma r_{0}\right)\), we deduce:

$$\displaystyle{ \frac{r_{0}} {\left(1 + 2\gamma r_{0}\right)^{2}}\left\vert \hat{x}\right\vert \leq \left\langle \hat{x},x - u_{0}\right\rangle + C + C\left(1 + \left\vert \hat{x}\right\vert \right)\left\vert x - u_{0}\right\vert ^{2}}$$

which is (6.41) with \(\rho _{0} = r_{0}/\left(1 + 2\gamma r_{0}\right)^{2}\) and \(b = C\).

Moreover if \(\left\vert x - u_{0}\right\vert \leq \delta _{0} = 1 \wedge \frac{\rho _{0}} {2\left(1+b\right)} \wedge r_{0}\), then

$$\displaystyle\begin{array}{rcl} \rho _{0}\left\vert \hat{x}\right\vert & \leq & \left\langle \hat{x},x - u_{0}\right\rangle + b + b\left(1 + \left\vert \hat{x}\right\vert \right)\left\vert x - u_{0}\right\vert ^{2} {}\\ & \leq & \left(\delta _{0} + b\delta _{0}^{2}\right)\left\vert \hat{x}\right\vert + b + b\delta _{ 0}^{2} {}\\ & \leq & \delta _{0}\left(1 + b\right)\left\vert \hat{x}\right\vert + 2b {}\\ & \leq & \frac{\rho _{0}} {2}\left\vert \hat{x}\right\vert + 2b {}\\ \end{array}$$

and (6.42) follows. ■ 

Let E be a non-empty closed subset of \(\mathbb{R}^{d}\) and \(\varepsilon > 0\). We denote by

$$\displaystyle{U_{\varepsilon }\left(E\right)\mathop{ =}\limits^{ \mathit{def }}\left\{y \in \mathbb{R}^{d}: d_{ E}\left(y\right) <\varepsilon \right\}}$$

the open \(\varepsilon\)-neighbourhood of E and

$$\displaystyle{\overline{U}_{\varepsilon }\left(E\right)\mathop{ =}\limits^{ \mathit{def }}\left\{z \in \mathbb{R}^{d}: d_{ E}\left(z\right) \leq \varepsilon \right\}}$$

the closed \(\varepsilon\)-neighbourhood of E.

Given \(z \in \mathbb{R}^{d}\), we denote by \(\varPi _{E}\left(z\right)\) the set of elements x ∈ E with \(\left\vert z - x\right\vert = d_{E}\left(z\right)\). We remark that \(\varPi _{E}\left(z\right)\) is always non-empty since E is non-empty and closed. We also note that if \(z \in \mathbb{R}^{d}\) and \(\hat{z} \in \varPi _{E}\left(z\right),\) then \(z -\hat{ z} \in N_{E}\left(\hat{z}\right).\) This follows from the fact that for \(0 <\varepsilon < 1\) we have

$$\displaystyle\begin{array}{rcl} d_{E}\left(\hat{z} +\varepsilon \left(z -\hat{ z}\right)\right)& =& d_{E}\left(z\right) + d_{E}\left(\hat{z} +\varepsilon \left(z -\hat{ z}\right)\right) - d_{E}\left(z\right) {}\\ & \geq & \left\vert z -\hat{ z}\right\vert -\left\vert \hat{z} +\varepsilon \left(z -\hat{ z}\right) - z\right\vert {}\\ & =& \varepsilon \left\vert z -\hat{ z}\right\vert {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} d_{E}\left(\hat{z} +\varepsilon \left(z -\hat{ z}\right)\right)& =& d_{E}\left(\hat{z} +\varepsilon \left(z -\hat{ z}\right)\right) - d_{E}\left(\hat{z}\right) {}\\ & \leq & \varepsilon \left\vert z -\hat{ z}\right\vert. {}\\ \end{array}$$
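The two estimates above pin down \(d_{E}\left(\hat{z} +\varepsilon \left(z -\hat{ z}\right)\right) =\varepsilon \left\vert z -\hat{ z}\right\vert\) for \(0 <\varepsilon < 1\). This identity is easy to verify numerically (an editorial illustration only, assuming NumPy), taking E to be the closed unit disk in \(\mathbb{R}^{2}\), for which \(d_{E}\left(w\right) =\max \left(\left\vert w\right\vert - 1,0\right)\) and the nearest point of E to an exterior z is z∕|z|:

```python
import numpy as np

def d_E(w):
    # E = closed unit disk in R^2: d_E(w) = max(|w| - 1, 0)
    return max(np.linalg.norm(w) - 1.0, 0.0)

z = np.array([3.0, 4.0])           # |z| = 5, so z lies outside E
z_hat = z / np.linalg.norm(z)      # the nearest point of E to z

for eps in (0.1, 0.5, 0.9):
    lhs = d_E(z_hat + eps * (z - z_hat))
    rhs = eps * np.linalg.norm(z - z_hat)
    assert abs(lhs - rhs) < 1e-9   # d_E(ẑ + ε(z - ẑ)) = ε|z - ẑ|
print("identity verified for eps = 0.1, 0.5, 0.9")
```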

We recall the notations

$$\displaystyle\begin{array}{rcl} B\left(y,r\right)& =& \left\{u \in \mathbb{R}^{d}: \left\vert u - y\right\vert < r\right\},\;\;\text{ and} {}\\ \overline{B}\left(y,r\right)& =& \left\{u \in \mathbb{R}^{d}: \left\vert u - y\right\vert \leq r\right\}. {}\\ \end{array}$$

Definition 6.46.

We say that E satisfies the “uniform exterior ball condition” (abbreviated UEBC) if

  • \(N_{E}\left(x\right)\neq \left\{0\right\}\) for all \(x \in \mathrm{ Bd}\left(E\right)\),

  • there exists \(r_{0} > 0\) such that, for all \(x \in \mathrm{ Bd}\left(E\right)\) and all \(u \in N_{E}\left(x\right)\) with \(\left\vert u\right\vert = r_{0}\):

    $$\displaystyle{d_{E}\left(x + u\right) = r_{0}\quad \text{ or equivalently}\quad B\left(x + u,r_{0}\right) \cap E = \varnothing,}$$

    (in this case we say that E satisfies r 0-UEBC).

Note that for all \(v \in N_{E}\left(x\right)\), \(\left\vert v\right\vert \leq r_{0}\), we also have

$$\displaystyle{ d_{E}\left(x + v\right) = \left\vert v\right\vert. }$$
(6.43)

Indeed since

$$\displaystyle{0 \leq d_{E}\left(x + v\right) = d_{E}\left(x + v\right) - d_{E}\left(x\right) \leq \left\vert v\right\vert }$$

and

$$\displaystyle\begin{array}{rcl} \left\vert v\right\vert & =& r_{0} + \left(\left\vert v\right\vert - r_{0}\right) {}\\ & =& d_{E}\left(x + \frac{r_{0}} {\left\vert v\right\vert } v\right) + \left(\left\vert v\right\vert - r_{0}\right) {}\\ & \leq & \left\vert d_{E}\left(x + \frac{r_{0}} {\left\vert v\right\vert } v\right) - d_{E}\left(x + v\right)\right\vert + d_{E}\left(x + v\right) + \left(\left\vert v\right\vert - r_{0}\right) {}\\ & \leq & \left(\frac{r_{0}} {\left\vert v\right\vert } - 1\right)\left\vert v\right\vert + d_{E}\left(x + v\right) + \left(\left\vert v\right\vert - r_{0}\right) {}\\ & =& d_{E}\left(x + v\right), {}\\ \end{array}$$

(6.43) follows.

It is clear that, under the uniform exterior ball condition with ball radius \(r_{0}\), for all \(z \in \mathbb{R}^{d}\) with \(d_{E}\left(z\right) < r_{0}\), the set \(\varPi _{E}\left(z\right)\) is a singleton. The unique element of \(\varPi _{E}\left(z\right)\) is called the projection of z on E, and it is denoted by \(\pi _{E}\left(z\right)\).
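A small numerical sketch of this single-valuedness (an editorial illustration only, assuming NumPy): the unit circle \(S^{1}\) in \(\mathbb{R}^{2}\) satisfies the 1-UEBC, and wherever \(d_{E}\left(z\right) = \left\vert 1 -\left\vert z\right\vert \right\vert < 1\) the projection is z∕|z|:

```python
import numpy as np

# E = unit circle S^1 in R^2; it satisfies the 1-UEBC.  Wherever
# d_E(z) = |1 - |z|| < r_0 = 1 (i.e. 0 < |z| < 2) the projection is z/|z|.
def pi_E(z):
    n = np.linalg.norm(z)
    assert 0.0 < n < 2.0, "single-valued only where d_E(z) < r_0"
    return z / n

z = np.array([0.3, 0.4])   # inside the circle, d_E(z) = 0.5 < 1
print(pi_E(z))             # [0.6 0.8]
# At z = 0, by contrast, d_E(0) = 1 = r_0 and Pi_E(0) = E: uniqueness fails.
```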

We have the following characterization of the notion of the uniform exterior ball condition:

Lemma 6.47.

Let E be a non-empty closed subset of \(\mathbb{R}^{d}\). The following assertions are equivalent:

  1. (i)

    E satisfies the uniform exterior ball condition;

  2. (ii)

    E is a semiconvex subset of \(\mathbb{R}^{d}\), that is, there exists a γ ≥ 0 such that for all \(x \in \mathrm{ Bd}\left(E\right)\) there exists an \(\hat{x}\neq 0\) with

    $$\displaystyle{\left\langle \hat{x},y - x\right\rangle \leq \gamma \left\vert \hat{x}\right\vert \left\vert y - x\right\vert ^{2},\quad \text{ for all }y \in E,}$$

    (in this case \(\hat{x} \in N_{E}\left(x\right)\) follows);

  3. (iii)

    \(\exists \gamma \geq 0,\forall x,y \in \mathrm{ Bd}\left(E\right),\forall \lambda \in ]0,1[\) :

    $$\displaystyle{d_{E}\left(\left(1-\lambda \right)x +\lambda y\right) \leq 4\lambda \left(1-\lambda \right)\gamma \left\vert x - y\right\vert ^{2};}$$
  4. (iii’)

    \(\exists \gamma \geq 0,\forall x,y \in E,\forall \lambda \in ]0,1[:\)

    $$\displaystyle{d_{E}\left(\left(1-\lambda \right)x +\lambda y\right) \leq 4\lambda \left(1-\lambda \right)\gamma \left\vert x - y\right\vert ^{2};}$$
  5. (iv)

    \(\exists \gamma \geq 0,\forall x,y \in \mathrm{ Bd}(E)\) :

    $$\displaystyle{d_{E}\left(\frac{x + y} {2} \right) \leq \gamma \left\vert x - y\right\vert ^{2},}$$
  6. (iv’)

    \(\exists \gamma \geq 0,\forall x,y \in E\) :

    $$\displaystyle{d_{E}\left(\frac{x + y} {2} \right) \leq \gamma \left\vert x - y\right\vert ^{2},}$$
  7. (v)

    ∃ δ > 0 and μ > 0 such that the function

    $$\displaystyle{x\longrightarrow \psi _{E}^{\mu }\left(x\right)\mathop{ =}\limits^{ \mathit{def }}d_{ E}\left(x\right) +\mu \left\vert x\right\vert ^{2}: U_{\delta }\left(E\right) \rightarrow \mathbb{R}}$$

    is convex on each convex subset of \(U_{\delta }\left(E\right)\) .
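Before the proof, a numerical sanity check of condition (iv′) (an editorial illustration only, assuming NumPy): the unit circle \(S^{1}\) in \(\mathbb{R}^{2}\) is a non-convex set satisfying the 1-UEBC, hence by (6.44) it is \(\frac{1}{2}\)–semiconvex, and (iv′) with \(\gamma = \frac{1}{2}\) can be tested on random pairs of its points:

```python
import numpy as np

rng = np.random.default_rng(0)

# E = unit circle S^1: it satisfies the 1-UEBC, hence by (6.44) it is
# gamma-semiconvex with gamma = 1/(2 r_0) = 1/2.  We test condition (iv'):
#     d_E((x + y)/2) <= gamma |x - y|^2   for x, y in E.
def d_E(w):
    return abs(1.0 - np.linalg.norm(w))   # distance of w to the circle

gamma = 0.5
for _ in range(1000):
    a, b = rng.uniform(0.0, 2 * np.pi, size=2)
    x = np.array([np.cos(a), np.sin(a)])
    y = np.array([np.cos(b), np.sin(b)])
    mid = 0.5 * (x + y)
    assert d_E(mid) <= gamma * np.dot(x - y, x - y) + 1e-12
print("condition (iv') holds on all sampled pairs")
```

The extreme case is an antipodal pair: the midpoint is the origin, \(d_{E}\left(0\right) = 1\), while \(\gamma \left\vert x - y\right\vert ^{2} = 2\).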

Proof.

We first remark that the conditions \(\left(\mathit{ii}\right),\left(\mathit{iii}\right),\left(\mathit{iii}^{{\prime}}\right),\left(\mathit{iv}\right),\left(\mathit{iv}^{{\prime}}\right)\) are satisfied with γ = 0 if and only if E is convex; convex sets satisfy the r-UEBC for every r > 0.

Step I. \(\left(i\right) \Leftrightarrow (\mathit{ii})\)

\(\left(i\right) \Rightarrow \left(\mathit{ii}\right)\): Let \(x \in \mathrm{ Bd}\left(E\right)\) and \(\hat{x} \in N_{E}\left(x\right)\), \(\hat{x}\neq 0\). Then there exists an \(r_{0} > 0\) such that

$$\displaystyle{d_{E}\left(x + \frac{r_{0}} {\left\vert \hat{x}\right\vert } \hat{x}\right) = r_{0}.}$$

We have for all y ∈ E and \(\gamma = \frac{1} {2r_{0}}\)

$$\displaystyle\begin{array}{rcl} \gamma \left\vert \hat{x}\right\vert \left\vert y - x\right\vert ^{2} -\left\langle \hat{x},y - x\right\rangle & =& \frac{1} {2r_{0}}\left\vert \hat{x}\right\vert \left[\left\vert y -\left(x + \frac{r_{0}} {\left\vert \hat{x}\right\vert } \hat{x}\right)\right\vert ^{2} - r_{ 0}^{2}\right] {}\\ & \geq & \frac{1} {2r_{0}}\left\vert \hat{x}\right\vert \left[d_{E}^{2}\left(x + \frac{r_{0}} {\left\vert \hat{x}\right\vert } \hat{x}\right) - r_{0}^{2}\right] {}\\ & =& 0. {}\\ \end{array}$$

\(\left(\mathit{ii}\right) \Rightarrow \left(i\right)\): Let \(r_{0} > 0\) be such that \(2\gamma r_{0} \leq 1\). Let \(x \in \mathrm{ Bd}\left(E\right)\) be arbitrary and \(u = r_{0}\dfrac{\hat{x}} {\left\vert \hat{x}\right\vert }\). Then

$$\displaystyle\begin{array}{rcl} \left\vert u\right\vert ^{2}& =& r_{ 0}^{2} {}\\ & \leq & r_{0}^{2} + \frac{2r_{0}} {\left\vert \hat{x}\right\vert } \left[\gamma \left\vert \hat{x}\right\vert \left\vert y - x\right\vert ^{2} -\left\langle \hat{x},y - x\right\rangle \right] {}\\ & \leq & \left\vert y -\left(x + u\right)\right\vert ^{2},\quad \forall \ y \in E. {}\\ \end{array}$$

Hence

$$\displaystyle{\left\vert u\right\vert = r_{0} \leq d_{E}\left(x + u\right) \leq \left\vert u\right\vert,}$$

that is, E satisfies the \(r_{0}\)-uniform exterior ball condition.

From this equivalence we have that

$$\displaystyle{ E\text{ is }r_{0} -\mathit{UEBC} \Leftrightarrow E\text{ is } \frac{1} {2r_{0}}\text{ \textendash semiconvex.} }$$
(6.44)

Step II. \(\left(\mathit{iii}\right) \Leftrightarrow (\mathit{iii}^{{\prime}})\).

We only have to prove \(\left(\mathit{iii}\right) \Rightarrow (\mathit{iii}^{{\prime}})\), since the converse implication is immediate from \(\mathrm{Bd}\left(E\right) \subset E\). Let x, y ∈ E and 0 < λ < 1. Let \(u_{\lambda } = \left(1-\lambda \right)x +\lambda y = x +\lambda \left(y - x\right)\). If \(u_{\lambda } \in E\), then

$$\displaystyle{d_{E}\left(\left(1-\lambda \right)x +\lambda y\right) = 0 \leq 4\lambda \left(1-\lambda \right)\gamma \left\vert x - y\right\vert ^{2}.}$$

If \(u_{\lambda }\notin E\), then there exist \(0 \leq \alpha <\lambda <\beta \leq 1\) such that

$$\displaystyle{u_{\rho } = x +\rho \left(y - x\right)\notin E,\quad \text{ for all }\alpha <\rho <\beta }$$

and

$$\displaystyle{u_{\alpha } = x +\alpha \left(y - x\right) \in E,\quad u_{\beta } = x +\beta \left(y - x\right) \in E.}$$

We have

$$\displaystyle{u_{\lambda } = u_{\alpha } + \frac{\lambda -\alpha } {\beta -\alpha }\left(u_{\beta } - u_{\alpha }\right)}$$

and consequently

$$\displaystyle\begin{array}{rcl} d_{E}\left(\left(1-\lambda \right)x +\lambda y\right)& =& d_{E}\left(u_{\lambda }\right) {}\\ & \leq & 4\frac{\lambda -\alpha } {\beta -\alpha }\left(1 -\frac{\lambda -\alpha } {\beta -\alpha }\right)\gamma \left\vert u_{\beta } - u_{\alpha }\right\vert ^{2} {}\\ & \leq & 4\left(\lambda -\alpha \right)\left(\beta -\lambda \right)\gamma \left\vert y - x\right\vert ^{2} {}\\ & \leq & 4\lambda \left(1-\lambda \right)\gamma \left\vert x - y\right\vert ^{2}. {}\\ \end{array}$$

Step III. \(\left(\mathit{iii}^{{\prime}}\right) \Rightarrow (\mathit{iv}^{{\prime}}) \Rightarrow (\mathit{iv}) \Rightarrow \left(i\right) \Rightarrow \left(\mathit{iii}\right)\).

\(\left(\mathit{iii}^{{\prime}}\right) \Rightarrow (\mathit{iv}^{{\prime}})\) follows by taking \(\lambda = 1/2\) in \(\left(\mathit{iii}^{{\prime}}\right)\), and \(\left(\mathit{iv}^{{\prime}}\right) \Rightarrow (\mathit{iv})\) since \(\mathrm{Bd}\left(E\right) \subset E\).

\((\mathit{iv})\Longrightarrow\left(i\right)\): We argue by contradiction. We may assume γ > 0. Suppose that there is some \(z \in \mathbb{R}^{d}\) in the \(r_{0}\)-neighbourhood of E such that, for two different \(x,y \in \mathrm{ Bd}\left(E\right)\),

$$\displaystyle{\left\vert z - x\right\vert = \left\vert z - y\right\vert = d_{E}\left(z\right) < r_{0} = \frac{1} {2\gamma }.}$$

Under this hypothesis the vectors \(z -\frac{1} {2}\left(x + y\right) = \frac{1} {2}\left[\left(z - y\right) + \left(z - x\right)\right]\) and \(2\left(x - y\right) = 2\left[\left(z - y\right) -\left(z - x\right)\right]\) are orthogonal and, consequently,

$$\displaystyle{d_{E}^{2}\left(z\right) = \left\vert z - x\right\vert ^{2} = \left\vert z -\frac{1} {2}\left(x + y\right)\right\vert ^{2} + 4\left\vert y - x\right\vert ^{2}\text{.}}$$

Let \(u \in \varPi _{E}\left(\frac{1} {2}\left(x + y\right)\right)\). Then, from condition (iv) we obtain

$$\displaystyle\begin{array}{rcl} \gamma \left\vert x - y\right\vert ^{2}& \geq & d_{ E}\left(\frac{1} {2}\left(x + y\right)\right) = \left\vert \left(\frac{1} {2}\left(x + y\right)\right) - u\right\vert {}\\ & \geq & \left\vert z - u\right\vert -\left\vert z -\left(\frac{1} {2}\left(x + y\right)\right)\right\vert {}\\ & \geq & d_{E}\left(z\right) -\left\vert z -\left(\frac{1} {2}\left(x + y\right)\right)\right\vert {}\\ & =& \sqrt{\left\vert z - \frac{1} {2}\left(x + y\right)\right\vert ^{2} + 4\left\vert y - x\right\vert ^{2}} -\left\vert z -\left(\frac{1} {2}\left(x + y\right)\right)\right\vert. {}\\ \end{array}$$

Hence, we have

$$\displaystyle{\left\vert z -\frac{1} {2}\left(x + y\right)\right\vert ^{2} + 4\left\vert y - x\right\vert ^{2} \leq \left[\gamma \left\vert x - y\right\vert ^{2} + \left\vert z -\left(\frac{1} {2}\left(x + y\right)\right)\right\vert \right]^{2}\text{,}}$$

from which we easily deduce that

$$\displaystyle\begin{array}{rcl} 4& \leq & \gamma ^{2}\left\vert y - x\right\vert ^{2} + 2\gamma \left\vert z -\frac{1} {2}\left(x + y\right)\right\vert {}\\ & \leq & \gamma ^{2}\left[\left\vert z - x\right\vert + \left\vert z - y\right\vert \right]^{2} +\gamma \left[\left\vert z - x\right\vert + \left\vert z - y\right\vert \right] {}\\ & <& 2, {}\\ \end{array}$$

which is a contradiction. Consequently, condition (iv) implies the \(\frac{1} {2\gamma }\)-uniform exterior ball condition.

\((i)\Longrightarrow\left(\mathit{iii}\right)\): Let us now suppose that E satisfies the uniform exterior ball condition with ball radius \(r_{0}\). Let \(x,y \in \mathrm{ Bd}\left(E\right)\). In a first step we assume that x, y are two different elements such that \(0 < \left\vert x - y\right\vert \leq r_{0}\). Let λ ∈ ]0, 1[ be such that \(x_{\lambda } = x +\lambda \left(y - x\right)\notin E\) (if there is no such λ, we are done), and let \(\overline{x}_{\lambda } \in \varPi _{E}\left(x_{\lambda }\right)\). We choose \(u_{\lambda } = r_{0}\left(x_{\lambda } -\overline{x}_{\lambda }\right)/\left\vert x_{\lambda } -\overline{x}_{\lambda }\right\vert \in N_{E}\left(\overline{x}_{\lambda }\right)\), so that \(\left\vert u_{\lambda }\right\vert = r_{0}\), and put \(z_{\lambda } = \overline{x}_{\lambda } + u_{\lambda }\). Then, due to condition \(\left(i\right)\), \(\left\vert v - z_{\lambda }\right\vert \geq r_{0}\), for all v ∈ E. In particular, we have

$$\displaystyle{\left\vert x - z_{\lambda }\right\vert \geq r_{0},\quad \text{ and }\left\vert y - z_{\lambda }\right\vert \geq r_{0}\text{.}}$$

We also observe that

$$\displaystyle{\left\vert x_{\lambda } -\overline{x}_{\lambda }\right\vert = d_{E}\left(x_{\lambda }\right) \leq \left\vert x_{\lambda } - x\right\vert =\lambda \left\vert y - x\right\vert \leq r_{0} = \left\vert z_{\lambda } -\overline{x}_{\lambda }\right\vert \text{,}}$$

and

$$\displaystyle{\alpha = \frac{\left\langle x - z_{\lambda },y - z_{\lambda }\right\rangle } {\left\vert x - z_{\lambda }\right\vert \left\vert y - z_{\lambda }\right\vert } \in \left[0,1\right].}$$

Hence,

$$\displaystyle\begin{array}{rcl} & & \left\vert x_{\lambda } -\overline{x}_{\lambda }\right\vert {}\\ & & = r_{0} -\left\vert z_{\lambda } - x_{\lambda }\right\vert {}\\ & & = r_{0} -\sqrt{\left(1-\lambda \right) ^{2 } \left\vert x - z_{\lambda } \right\vert ^{2 } +\lambda ^{2 } \left\vert y - z_{\lambda } \right\vert ^{2 } + 2\lambda \left(1-\lambda \right) \left\vert x - z_{\lambda } \right\vert \left\vert y - z_{\lambda }\right\vert \alpha } {}\\ & & \leq r_{0}\left(1 -\sqrt{\left(1-\lambda \right) ^{2 } +\lambda ^{2 } + 2\lambda \left(1-\lambda \right)\alpha }\right) {}\\ & & = r_{0}\left(1 -\sqrt{1 - 2\lambda \left(1-\lambda \right) \left(1-\alpha \right)}\right) {}\\ & & \leq r_{0}\left[1 -\left(1 - 2\lambda \left(1-\lambda \right)\left(1-\alpha \right)\right)\right] = 2r_{0}\lambda \left(1-\lambda \right)\left(1-\alpha \right). {}\\ \end{array}$$

On the other hand, for \(\gamma \geq 1/\left(2r_{0}\right)\),

$$\displaystyle\begin{array}{rcl} & & 4\lambda \left(1-\lambda \right)\gamma \left\vert x - y\right\vert ^{2} {}\\ & & = 4\lambda \left(1-\lambda \right)\gamma \left(\left\vert x - z_{\lambda }\right\vert ^{2} + \left\vert y - z_{\lambda }\right\vert ^{2} - 2\left\vert x - z_{\lambda }\right\vert \left\vert y - z_{\lambda }\right\vert \alpha \right) {}\\ & & \geq 8\lambda \left(1-\lambda \right)\gamma \left\vert x - z_{\lambda }\right\vert \left\vert y - z_{\lambda }\right\vert \left(1-\alpha \right) {}\\ & & \geq 8\lambda \left(1-\lambda \right)\gamma r_{0}^{2}\left(1-\alpha \right) \geq 2r_{ 0}\lambda \left(1-\lambda \right)\left(1-\alpha \right). {}\\ \end{array}$$

Consequently, \(d_{E}\left(x_{\lambda }\right) = \left\vert x_{\lambda } -\overline{x}_{\lambda }\right\vert \leq 4\lambda \left(1-\lambda \right)\gamma \left\vert x - y\right\vert ^{2}\), if \(\gamma \geq 1/\left(4r_{0}\right)\).

In order to complete the proof, we still have to consider the case of \(x,y \in \mathrm{ Bd}\left(E\right)\) with \(\left\vert x - y\right\vert > r_{0}\). In this case, for \(\gamma \geq 1/\left(2r_{0}\right)\), we have

$$\displaystyle\begin{array}{rcl} & & d_{E}\left(x +\lambda \left(y - x\right)\right)\left(= d_{E}\left(y -\left(1-\lambda \right)\left(y - x\right)\right)\right) {}\\ & & \leq \left[\lambda \wedge \left(1-\lambda \right)\right]\left\vert x - y\right\vert \leq 2\lambda \left(1-\lambda \right)\left\vert x - y\right\vert {}\\ & & \leq 4\lambda \left(1-\lambda \right) \frac{1} {2r_{0}}\left\vert x - y\right\vert ^{2}. {}\\ \end{array}$$

This proves that under the \(r_{0}\)-uniform exterior ball condition the statement (iii) holds for every \(\gamma \geq 1/\left(2r_{0}\right)\).

Step IV. \(\left(\mathit{v}\right) \Rightarrow (\mathit{iii}) \Rightarrow (\mathit{v})\).

\(\left(\mathit{v}\right)\Longrightarrow\left(\mathit{iii}\right)\): Let \(\lambda \in \left(0,1\right)\) and \(x,y \in \mathrm{ Bd}\left(E\right)\) with \(\left\vert x - y\right\vert <\delta\). Then \(x,y \in B\left(x,\delta \right) = \left\{z \in \mathbb{R}^{d}: \left\vert z - x\right\vert <\delta \right\} \subset U_{\delta }\left(E\right)\), and, consequently,

$$\displaystyle\begin{array}{rcl} & & d_{E}\left(\lambda x + \left(1-\lambda \right)y\right) +\mu \left\vert \lambda x + \left(1-\lambda \right)y\right\vert ^{2} {}\\ & & =\psi _{ E}^{\mu }\left(\lambda x + \left(1-\lambda \right)y\right) \leq \lambda \psi _{ E}^{\mu }\left(x\right) + \left(1-\lambda \right)\psi _{ E}^{\mu }\left(y\right) {}\\ & & =\lambda \mu \left\vert x\right\vert ^{2} + \left(1-\lambda \right)\mu \left\vert y\right\vert ^{2}. {}\\ \end{array}$$

Subtracting \(\mu \left\vert \lambda x + \left(1-\lambda \right)y\right\vert ^{2}\) from both sides of this inequality we obtain

$$\displaystyle{d_{E}\left(\lambda x + \left(1-\lambda \right)y\right) \leq \lambda \left(1-\lambda \right)\mu \left\vert x - y\right\vert ^{2}.}$$

On the other hand, if \(x,y \in \mathrm{ Bd}\left(E\right)\) are such that \(\left\vert x - y\right\vert \geq \delta\), then

$$\displaystyle{d_{E}\left(\lambda x + \left(1-\lambda \right)y\right) \leq \left[\lambda \wedge \left(1-\lambda \right)\right]\left\vert x - y\right\vert \leq \frac{2} {\delta } \lambda \left(1-\lambda \right)\left\vert x - y\right\vert ^{2}.}$$

This shows that (iii) is fulfilled for \(\gamma \geq \frac{1} {2\delta } \vee \frac{\mu } {4}\).

\(\left(\mathit{iii}\right)\Longrightarrow\left(\mathit{v}\right)\): We fix any \(\delta \in \left(0,r_{0}\right)\), and we recall that \(\pi _{E}: \overline{U}_{\delta }\left(E\right) \rightarrow E\) is Lipschitz continuous with Lipschitz constant \(L_{\delta } = r_{0}/\left(r_{0}-\delta \right)\). Let \(\lambda \in \left(0,1\right)\) and \(u,v \in \overline{U}_{\delta }\left(E\right)\) be such that \(\left(1-\lambda \right)u +\lambda v \in \overline{U}_{\delta }\left(E\right)\). For simplicity of notation we put \(x =\pi _{E}\left(u\right)\), \(y =\pi _{E}\left(v\right)\), \(z_{\lambda } = \left(1-\lambda \right)u +\lambda v\), and \(\overline{z}_{\lambda } = \left(1-\lambda \right)x +\lambda y\). Then,

$$\displaystyle\begin{array}{rcl} d_{E}\left(\left(1-\lambda \right)u +\lambda v\right)& =& d_{E}\left(z_{\lambda }\right) {}\\ & \leq & \left\vert z_{\lambda } -\pi _{E}\left(\overline{z}_{\lambda }\right)\right\vert {}\\ & \leq & \left\vert z_{\lambda } -\overline{z}_{\lambda }\right\vert + \left\vert \overline{z}_{\lambda } -\pi _{E}\left(\overline{z}_{\lambda }\right)\right\vert {}\\ & \leq & \left(1-\lambda \right)d_{E}\left(u\right) +\lambda d_{E}\left(v\right) + d_{E}\left(\overline{z}_{\lambda }\right) {}\\ & \leq & \left(1-\lambda \right)d_{E}\left(u\right) +\lambda d_{E}\left(v\right) + 4\lambda \left(1-\lambda \right)\gamma \left\vert x - y\right\vert ^{2} {}\\ & \leq & \left(1-\lambda \right)d_{E}\left(u\right) +\lambda d_{E}\left(v\right) + 4\lambda \left(1-\lambda \right)\gamma L_{\delta }^{2}\left\vert u - v\right\vert ^{2}. {}\\ \end{array}$$

Hence, for \(\mu \geq 4\gamma L_{\delta }^{2}\),

$$\displaystyle\begin{array}{rcl} \psi _{E}^{\mu }\left(\left(1-\lambda \right)u +\lambda v\right)& =& d_{ E}\left(\left(1-\lambda \right)u +\lambda v\right) +\mu \left\vert \left(1-\lambda \right)u +\lambda v\right\vert ^{2} {}\\ & \leq & \left(1-\lambda \right)\left[d_{E}\left(u\right) +\mu \left\vert u\right\vert ^{2}\right] +\lambda \left[d_{ E}\left(v\right) +\mu \left\vert v\right\vert ^{2}\right] {}\\ & =& \left(1-\lambda \right)\psi _{E}^{\mu }\left(u\right) +\lambda \psi _{ E}^{\mu }\left(v\right). {}\\ \end{array}$$

This proves that \(\psi _{E}^{\mu }\) is convex on each convex subset of \(\overline{U}_{\delta }\left(E\right)\). ■ 

Corollary 6.48.

If E is a closed subset of \(\mathbb{R}^{d}\) and satisfies the \(r_{0}\)-uniform exterior ball condition, then for all x ∈ E

$$\displaystyle{N_{E}\left(x\right) = \left\{\hat{x} \in \mathbb{R}^{d}: \left\langle \hat{x},y - x\right\rangle \leq \frac{1} {2r_{0}}\left\vert \hat{x}\right\vert \left\vert y - x\right\vert ^{2};\quad \forall y \in E\right\}}$$

and \(\varphi = I_{E}\) is a \((0, \dfrac{1} {2r_{0}})\) –semiconvex l.s.c. function. Moreover \(N_{E}\left(x\right) = \partial ^{-}I_{E}\left(x\right)\) .

Let \(r_{0} > 0\). The set E satisfies the \(r_{0}\)-uniform exterior ball condition if and only if E is \(\frac{1} {2r_{0}}\)–semiconvex.

We recall the following well-known property of the projection.

Lemma 6.49.

Suppose that E satisfies the uniform exterior ball condition with ball radius \(r_{0}\) and \(\varepsilon \in ]0,r_{0}[\). Then the projection \(\pi _{E}\) restricted to \(\overline{U}_{\varepsilon }\left(E\right)\) (the closed \( \varepsilon \) -neighbourhood of E) is Lipschitz with Lipschitz constant \(L_{\varepsilon } = r_{0}/\left(r_{0}-\varepsilon \right)\) , and the function \(d_{E}^{2}\) is of class \(C^{1}\) on \(\overline{U}_{\varepsilon }\left(E\right)\) with

$$\displaystyle{\frac{1} {2}\nabla d_{E}^{2}\left(z\right) = z -\pi _{ E}\left(z\right),\quad \text{ and}\quad z -\pi _{E}\left(z\right) \in N_{E}\left(\pi _{E}\left(z\right)\right),}$$

for all \(z \in \overline{U}_{\varepsilon }\left(E\right)\) .

Proof.

To simplify we denote π = π E and d = d E . Let \(x,y \in \overline{U}_{\varepsilon }\left(E\right).\) Then we have \(x -\pi \left(x\right) \in N_{E}\left(\pi \left(x\right)\right)\), \(y -\pi \left(y\right) \in N_{E}\left(\pi \left(y\right)\right)\) and

$$\displaystyle\begin{array}{rcl} \left\vert \pi \left(x\right) -\pi \left(y\right)\right\vert ^{2}& =& \left\langle y -\pi \left(y\right),\pi \left(x\right) -\pi \left(y\right)\right\rangle + \left\langle x -\pi \left(x\right),\pi \left(y\right) -\pi \left(x\right)\right\rangle {}\\ & & +\left\langle x - y,\pi \left(x\right) -\pi \left(y\right)\right\rangle {}\\ & \leq & \frac{\varepsilon } {r_{0}}\left\vert \pi \left(x\right) -\pi \left(y\right)\right\vert ^{2} + \left\vert x - y\right\vert \left\vert \pi \left(x\right) -\pi \left(y\right)\right\vert. {}\\ \end{array}$$

Hence

$$\displaystyle{ \left\vert \pi \left(x\right) -\pi \left(y\right)\right\vert \leq \frac{r_{0}} {r_{0}-\varepsilon }\left\vert x - y\right\vert. }$$
(6.45)

To obtain the second part of the lemma it is sufficient to show that there exists a positive constant \(C = C_{\varepsilon,r_{0}}\) such that

$$\displaystyle{ -C\left\vert y - x\right\vert ^{2} \leq d^{2}\left(y\right) - d^{2}\left(x\right) - 2\left\langle x -\pi \left(x\right),y - x\right\rangle \leq C\left\vert y - x\right\vert ^{2}. }$$
(6.46)

We have

$$\displaystyle\begin{array}{rcl} & & d^{2}\left(y\right) - d^{2}\left(x\right) - 2\left\langle x -\pi \left(x\right),y - x\right\rangle {}\\ & & = \left\vert \left(y - x\right) + \left(x -\pi \left(x\right)\right) +\pi \left(x\right) -\pi \left(y\right)\right\vert ^{2} -\left\vert x -\pi \left(x\right)\right\vert ^{2} {}\\ & & \quad - 2\left\langle x -\pi \left(x\right),y - x\right\rangle {}\\ & & = \left\vert y - x\right\vert ^{2} + \left\vert \pi \left(x\right) -\pi \left(y\right)\right\vert ^{2} + 2\left\langle y - x,\pi \left(x\right) -\pi \left(y\right)\right\rangle {}\\ & & \quad + 2\left\langle x -\pi \left(x\right),\pi \left(x\right) -\pi \left(y\right)\right\rangle. {}\\ \end{array}$$

Since

$$\displaystyle{\left\langle x -\pi \left(x\right),\pi \left(x\right) -\pi \left(y\right)\right\rangle \geq - \frac{\varepsilon } {2r_{0}}\left\vert \pi \left(y\right) -\pi \left(x\right)\right\vert ^{2}}$$

and

$$\displaystyle\begin{array}{rcl} & & \left\langle x -\pi \left(x\right),\pi \left(x\right) -\pi \left(y\right)\right\rangle {}\\ & & \leq \left\langle y -\pi \left(y\right),\pi \left(x\right) -\pi \left(y\right)\right\rangle + \left\langle x - y,\pi \left(x\right) -\pi \left(y\right)\right\rangle {}\\ & & \leq \frac{\varepsilon } {2r_{0}}\left\vert \pi \left(x\right) -\pi \left(y\right)\right\vert ^{2} + \left\vert x - y\right\vert \left\vert \pi \left(x\right) -\pi \left(y\right)\right\vert {}\\ \end{array}$$

the inequality (6.46) follows from this and (6.45). ■ 
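The Lipschitz bound (6.45) is easy to check numerically on a concrete set. A minimal sketch, under the illustrative assumption (ours, not the text's) that E is the unit circle in \(\mathbb{R}^{2}\), which satisfies the uniform exterior ball condition with \(r_{0} = 1\) and has \(\pi _{E}\left(z\right) = z/\left\vert z\right\vert \) for \(z\neq 0\):

```python
import numpy as np

# Illustrative choice (an assumption of this sketch): E = unit circle in R^2,
# so r0 = 1 and the projection is pi_E(z) = z/|z| on the annulus neighbourhood.
r0, eps = 1.0, 0.5
L_eps = r0 / (r0 - eps)          # Lipschitz constant claimed by Lemma 6.49: 2

def proj(z):
    return z / np.linalg.norm(z)

rng = np.random.default_rng(0)
worst = 0.0
for _ in range(10_000):
    # sample x, y in the closed eps-neighbourhood of E (norm in [1-eps, 1+eps])
    x, y = rng.normal(size=(2, 2))
    x *= rng.uniform(1 - eps, 1 + eps) / np.linalg.norm(x)
    y *= rng.uniform(1 - eps, 1 + eps) / np.linalg.norm(y)
    worst = max(worst, np.linalg.norm(proj(x) - proj(y)) / np.linalg.norm(x - y))
```

The worst observed ratio approaches \(L_{\varepsilon } = 2\) for pairs near the inner radius \(1-\varepsilon \), so the constant \(r_{0}/\left(r_{0}-\varepsilon \right)\) is sharp in this example.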

6.3.9 Differential Equations

Let \(\mathbb{H}\) be a separable real Hilbert space. If \(A: \mathbb{H} \rightrightarrows \mathbb{H}\) is a maximal monotone operator, \(u_{0} \in \overline{D\left(A\right)}\), \(f \in L^{1}\left(0,T; \mathbb{H}\right)\), then the strong solution of the Cauchy problem

$$\displaystyle{ \left\{\begin{array}{l} \dfrac{\mathit{du}\left(t\right)} {\mathit{dt}} + Au\left(t\right) \ni f\left(t\right),\quad a.e.\;t \in \left]0,T\right[, \\ u\left(0\right) = u_{0}, \end{array} \right. }$$
(6.47)

is defined as a function \(u \in C\left(\left[0,T\right]; \mathbb{H}\right)\) satisfying:

$$\displaystyle{\begin{array}{r@{\quad }l} i)\quad &\quad u\left(t\right) \in D\left(A\right)\;\;a.e.\;t \in \left]0,T\right[, \\ \mathit{ii})\quad &\quad \exists \,h = h^{\left(u\right)} \in L^{1}\left(0,T; \mathbb{H}\right)\text{ such that }h\left(t\right) \in Au\left(t\right)\text{, }a.e.\ t \in \left]0,T\right[,\text{ and} \\ \quad &\quad \quad \quad \quad u\left(t\right) + \int _{0}^{t}h\left(s\right)\mathit{ds} = u_{ 0} + \int _{0}^{t}f\left(s\right)\mathit{ds},\;\forall \ t \in \left[0,T\right],\end{array} }$$

and we shall write \(u = \mathcal{S}\left(A;u_{0},f\right)\). Note that the strong solution is unique when it exists. Indeed if u, v are two solutions corresponding to \(\left(u_{0},f\right)\), \(\left(v_{0},g\right)\), respectively, then

$$\displaystyle\begin{array}{rcl} & & \left\vert u\left(t\right) - v\left(t\right)\right\vert ^{2} + 2\int _{ 0}^{t}\left\langle h^{\left(u\right)}\left(s\right) - h^{\left(v\right)}\left(s\right),u\left(s\right) - v\left(s\right)\right\rangle \mathit{ds} {}\\ & & = \left\vert u_{0} - v_{0}\right\vert ^{2} + 2\int _{ 0}^{t}\left\langle f\left(s\right) - g\left(s\right),u\left(s\right) - v\left(s\right)\right\rangle \mathit{ds} {}\\ \end{array}$$

and by the monotonicity of A it follows that

$$\displaystyle{\left\vert u\left(t\right) - v\left(t\right)\right\vert ^{2} \leq \left\vert u_{ 0} - v_{0}\right\vert ^{2} + 2\int _{ 0}^{t}\left\vert f\left(s\right) - g\left(s\right)\right\vert \left\vert u\left(s\right) - v\left(s\right)\right\vert \mathit{ds}.}$$

Using Gronwall’s inequality (Lemma 6.63, Annex C) we obtain

$$\displaystyle{ \left\vert u\left(t\right) - v\left(t\right)\right\vert \leq \left\vert u_{0} - v_{0}\right\vert +\int _{ 0}^{t}\left\vert f\left(s\right) - g\left(s\right)\right\vert \mathit{ds}. }$$
(6.48)

We recall from Barbu [3], p. 31, that the following proposition holds:

Proposition 6.50.

If A is a maximal monotone operator on \(\mathbb{H}\), \(u_{0} \in D\left(A\right)\) and \(f \in W^{1,1}\left(\left[0,T\right]; \mathbb{H}\right)\) , then the Cauchy problem (6.47) has a unique strong solution \(u \in W^{1,\infty }\left(\left[0,T\right]; \mathbb{H}\right)\). Moreover if \(A_{\varepsilon }\) is the Yosida approximation of the operator A and \(u_{\varepsilon }\)  is the solution of the approximate equation

$$\displaystyle{\frac{\mathit{du}_{\varepsilon }} {\mathit{dt}} + A_{\varepsilon }u_{\varepsilon } = f,\quad u_{\varepsilon }\left(0\right) = u_{0},}$$

then for all \(\left(x_{0},y_{0}\right) \in A\) there exists a constant \(C = C\left(\alpha,T,x_{0},y_{0}\right) > 0\)  such that

c1):

\(\left\Vert u_{\varepsilon }\right\Vert _{C\left(\left[0,T\right];\mathbb{H}\right)} \leq C\left(1 + \left\vert u_{0}\right\vert + \left\Vert f\right\Vert _{L^{1}(0,T;\mathbb{H})}\right)\) , and

c2):

\(\lim \limits _{\varepsilon \searrow 0}u_{\varepsilon } = u\) in \(C\left(\left[0,T\right]; \mathbb{H}\right)\) .
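As an illustration of the Yosida approximation (our own example, not taken from the text), consider the maximal monotone operator \(A = \partial \left\vert \cdot \right\vert \) on \(\mathbb{R}\): the resolvent \(J_{\varepsilon } = \left(I +\varepsilon A\right)^{-1}\) is soft-thresholding, and \(A_{\varepsilon } = \left(I - J_{\varepsilon }\right)/\varepsilon \) clips \(x/\varepsilon \) to \(\left[-1,1\right]\), giving a single-valued, monotone, \(1/\varepsilon \)-Lipschitz approximation of the sign graph:

```python
import numpy as np

# Illustrative operator (our assumption): A = subdifferential of |.| on R.
# Resolvent J_eps(x) = soft-threshold(x, eps); Yosida A_eps = (I - J_eps)/eps.
def resolvent(x, eps):
    return np.sign(x) * np.maximum(np.abs(x) - eps, 0.0)

def yosida(x, eps):
    return (x - resolvent(x, eps)) / eps

eps = 0.1
x = np.linspace(-2.0, 2.0, 401)
a = yosida(x, eps)               # equals clip(x/eps, -1, 1)

# A_eps is monotone, bounded by 1 in absolute value, and 1/eps-Lipschitz;
# the maximal slope 1/eps is attained on |x| <= eps.
lips = np.max(np.abs(np.diff(a) / np.diff(x)))
```

Note that \(A_{\varepsilon }\left(x\right) \in A\left(J_{\varepsilon }\left(x\right)\right)\): here \(\left\vert A_{\varepsilon }\left(x\right)\right\vert = 1\) exactly when \(J_{\varepsilon }\left(x\right)\neq 0\).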

We introduce the notation

$$\displaystyle{\begin{array}{r} W^{1,p}\left(\left[0,T\right]; \mathbb{H}\right) =\Big\{ f: \exists \ a \in \mathbb{H},\;g \in L^{p}\left(0,T; \mathbb{H}\right)\text{ such that} \\ f\left(t\right) = a + \int _{0}^{t}g\left(s\right)\mathit{ds},\;\forall \ t \in \left[0,T\right]\Big\}. \end{array} }$$

From Barbu [2] (Chap. IV, p. 197, Theorem 2.5) we recall:

Proposition 6.51.

Let A be a maximal monotone operator on \(\mathbb{H}\) such that

$$\displaystyle{\mathrm{int}\left(D\left(A\right)\right)\neq \varnothing.}$$

 If \(u_{0} \in \overline{D\left(A\right)}\)  and \(f \in W^{1,1}\left(\left[0,T\right]; \mathbb{H}\right)\) , then the Cauchy problem (6.47) has a unique strong solution \(u \in W^{1,1}\left(\left[0,T\right]; \mathbb{H}\right)\) .

By the continuity property (6.48) one can generalize the notion of the solution of Eq. (6.47) as follows:

  • \(\blacklozenge\) u is a generalized solution of the Cauchy problem (6.47) with

    $$\displaystyle{u_{0} \in \overline{D\left(A\right)},\quad f \in L^{1}\left(0,T; \mathbb{H}\right),}$$

    (and we shall write \(u = \mathcal{G}\mathcal{S}\left(A;u_{0},f\right)\)) if

    • \(\diamond \) \(u \in C\left(\left[0,T\right]; \mathbb{H}\right)\) and

    • \(\diamond \) there exist \(u_{0n} \in D\left(A\right)\), \(f_{n} \in W^{1,1}\left(\left[0,T\right]; \mathbb{H}\right)\) such that

      $$\displaystyle{\begin{array}{l@{\quad }l} a)\quad &\quad u_{0n} \rightarrow u_{0}\quad \;\text{ in }\mathbb{H}, \\ b)\quad &\quad f_{n} \rightarrow f\;\quad \text{ in }L^{1}\left(0,T; \mathbb{H}\right), \\ c)\quad &\quad u_{n} = \mathcal{S}\left(A;u_{0n},f_{n}\right) \rightarrow u\quad \text{ in }C\left(\left[0,T\right]; \mathbb{H}\right).\end{array} }$$

Clearly we have:

Proposition 6.52.

If A is a maximal monotone operator on \(\mathbb{H}\), \(u_{0} \in \overline{D\left(A\right)}\) and \(f \in L^{1}\left(0,T; \mathbb{H}\right)\), then the Cauchy problem (6.47) has a unique generalized solution \(u \in C\left(\left[0,T\right]; \mathbb{H}\right)\). Moreover if \(u = \mathcal{G}\mathcal{S}\left(A;u_{0},f\right)\) and \(v = \mathcal{G}\mathcal{S}\left(A;v_{0},g\right)\) then

$$\displaystyle{ \left\vert u\left(t\right) - v\left(t\right)\right\vert \leq \left\vert u_{0} - v_{0}\right\vert +\int _{ 0}^{t}\left\vert f\left(s\right) - g\left(s\right)\right\vert \mathit{ds} }$$
(6.49)

and for all \(\left(x_{0},\hat{x}_{0}\right) \in A\) there exists a constant \(C = C\left(T,x_{0},\hat{x}_{0}\right) > 0\)  such that

$$\displaystyle{ \left\Vert u\right\Vert _{C\left(\left[0,T\right];\mathbb{H}\right)} \leq C\left(1 + \left\vert u_{0}\right\vert + \left\Vert f\right\Vert _{L^{1}(0,T;\mathbb{H})}\right). }$$
(6.50)

In the case when \(\mathrm{int}\left(D\left(A\right)\right)\neq \varnothing \) one can give supplementary properties of generalized solutions.

Proposition 6.53.

Let \(A \subset \mathbb{H} \times \mathbb{H}\) be a maximal monotone operator such that

$$\displaystyle{\mathrm{int}\left(D\left(A\right)\right)\neq \varnothing.}$$

Let \(u_{0} \in \overline{D\left(A\right)}\)  and \(f \in L^{1}\left(0,T; \mathbb{H}\right)\). Then:

  I.

    there exists a unique pair (u,k) such that

    $$\displaystyle{\left(P_{A}\right): \left\{\begin{array}{l@{\quad }l} a)\quad &\quad u \in C([0,T]; \mathbb{H}),\quad u(t) \in \overline{D(A)}\;\;\forall t \in [0,T],\;u(0) = u_{0}, \\ b)\quad &\quad k \in C([0,T]; \mathbb{H}) \cap \mathit{BV }([0,T]; \mathbb{H}),\;k(0) = 0, \\ c)\quad &\quad u(t) + k\left(t\right) = u_{0} + \int \nolimits _{0}^{t}f(s)\mathit{ds},\;\forall t \in [0,T], \\ d)\quad &\quad \int _{s}^{t}\left\langle u\left(r\right) - x,\mathit{dk}\left(r\right) -\hat{ x}\mathit{dr}\right\rangle \geq 0, \\ \quad &\quad \quad \quad \quad \quad \quad \quad \quad \forall \,0 \leq s \leq t \leq T,\;\forall \left(x,\hat{x}\right) \in A;\end{array} \right.}$$
  II.

    \(u = \mathcal{G}\mathcal{S}\left(A;u_{0},f\right)\) if and only if u is solution of the problem \(\left(P_{A}\right)\) ;

  III.

    the following estimate holds:

    $$\displaystyle{\left\Vert u\right\Vert _{C\left(\left[0,T\right];\mathbb{H}\right)}^{2} + \left\Vert k\right\Vert _{\mathit{ BV}\left(\left[0,T\right];\mathbb{H}\right)} \leq C\left(1 + \left\vert u_{0}\right\vert ^{2} + \left\Vert f\right\Vert _{ L^{1}(0,T;\mathbb{H})}^{2}\right),}$$

    where C is a positive constant independent of u 0 and f.

Proof.

Uniqueness. If \(\left(u,k\right)\) and \(\left(v,\ell\right)\) are two solutions of the problem \(\left(P_{A}\right)\) corresponding to \(\left(u_{0},f\right)\), \(\left(v_{0},g\right)\) respectively, then

$$\displaystyle\begin{array}{rcl} & & \left\vert u\left(t\right) - v\left(t\right)\right\vert ^{2} + 2\int _{ 0}^{t}\left\langle \mathit{dk}\left(s\right) - d\ell\left(s\right),u\left(s\right) - v\left(s\right)\right\rangle \mathit{ds} {}\\ & & = \left\vert u_{0} - v_{0}\right\vert ^{2} + 2\int _{ 0}^{t}\left\langle f\left(s\right) - g\left(s\right),u\left(s\right) - v\left(s\right)\right\rangle \mathit{ds}. {}\\ \end{array}$$

But by Proposition 6.17, the monotonicity of A and condition d) of \(\left(P_{A}\right)\) we have

$$\displaystyle{\int _{0}^{t}\left\langle \mathit{dk}\left(s\right) - d\ell\left(s\right),u\left(s\right) - v\left(s\right)\right\rangle \mathit{ds} \geq 0.}$$

Hence

$$\displaystyle{\left\vert u\left(t\right) - v\left(t\right)\right\vert ^{2} \leq \left\vert u_{ 0} - v_{0}\right\vert ^{2} + 2\int _{ 0}^{t}\left\vert f\left(s\right) - g\left(s\right)\right\vert \left\vert u\left(s\right) - v\left(s\right)\right\vert \mathit{ds},}$$

which yields (6.49) and, in particular, the uniqueness follows.

Existence. Let \(u_{0n} \in D\left(A\right)\), \(f_{n} \in W^{1,1}\left(\left[0,T\right]; \mathbb{H}\right)\) such that

$$\displaystyle{u_{0n} \rightarrow u_{0}\quad \;\text{ in }\mathbb{H}\text{ and }f_{n} \rightarrow f\;\quad \text{ in }L^{1}\left(0,T; \mathbb{H}\right).}$$

Let \(u_{n} = \mathcal{S}\left(A;u_{0n},f_{n}\right)\) be the strong solution corresponding to \(\left(A;u_{0n},f_{n}\right)\). Hence there exists an \(h_{n} \in L^{1}\left(0,T; \mathbb{H}\right)\) such that \(h_{n}\left(t\right) \in Au_{n}\left(t\right)\), a.e. \(t \in \left]0,T\right[\), and, denoting \(k_{n}\left(t\right) = \int _{0}^{t}h_{ n}\left(s\right)\mathit{ds}\), we have

$$\displaystyle{ \begin{array}{l@{\quad }l} a)\quad &\quad u_{n}\left(t\right) + k_{n}\left(t\right) = u_{0n} + \int _{0}^{t}f_{ n}\left(s\right)\mathit{ds},\quad \forall \,t \in \left[0,T\right], \\ b)\quad &\quad \int _{s}^{t}\left\langle u_{ n}\left(r\right) - x,\mathit{dk}_{n}\left(r\right) -\hat{ x}\mathit{dr}\right\rangle \geq 0, \\ \quad &\quad \quad \quad \quad \quad \quad \quad \forall \,0 \leq s \leq t \leq T,\;\forall \left(x,\hat{x}\right) \in A. \end{array} }$$
(6.51)

Let \(x_{0} \in \mathrm{ int}\left(D\left(A\right)\right)\) and \(\hat{x}_{0} \in A\left(x_{0}\right)\). Then

$$\displaystyle\begin{array}{rcl} & & \left\vert u_{n}\left(t\right) - x_{0}\right\vert ^{2} + 2\int _{ 0}^{t}\left\langle h_{ n}\left(s\right),u_{n}\left(s\right) - x_{0}\right\rangle \mathit{ds} {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad = \left\vert u_{0n} - x_{0}\right\vert ^{2} + 2\int _{ 0}^{t}\left\langle f_{ n}\left(s\right),u_{n}\left(s\right) - x_{0}\right\rangle \mathit{ds}. {}\\ \end{array}$$

Since

$$\displaystyle{\left\langle h_{n}\left(s\right),u_{n}\left(s\right) - x_{0}\right\rangle \geq \left\langle \hat{x}_{0},u_{n}\left(s\right) - x_{0}\right\rangle,}$$

we infer

$$\displaystyle{\left\vert u_{n}\left(t\right) - x_{0}\right\vert ^{2} \leq \left\vert u_{ 0n} - x_{0}\right\vert ^{2} + 2\int _{ 0}^{t}\left[\left\vert f_{ n}\left(s\right)\right\vert + \left\vert \hat{x}_{0}\right\vert \right]\left\vert u_{n}\left(s\right) - x_{0}\right\vert \mathit{ds}.}$$

By the Gronwall type inequality from Lemma 6.63, Annex C, we obtain

$$\displaystyle\begin{array}{rcl} \left\vert u_{n}\left(t\right) - x_{0}\right\vert & \leq & \left\vert u_{0n} - x_{0}\right\vert +\int _{ 0}^{T}\left\vert f_{ n}\left(s\right)\right\vert \mathit{ds} + T\left\vert \hat{x}_{0}\right\vert {}\\ & \leq & C\left[1 + \left\vert u_{0n}\right\vert +\int _{ 0}^{T}\left\vert f_{ n}\left(s\right)\right\vert \mathit{ds}\right], {}\\ \end{array}$$

where \(C = C\left(x_{0},\hat{x}_{0},T\right) > 0\).

By Proposition 6.5 we have \(a.e.\;t \in \left]0,T\right[\):

$$\displaystyle{r_{0}\left\vert h_{n}\left(t\right)\right\vert \leq \left\langle h_{n}\left(t\right),u_{n}\left(t\right) - x_{0}\right\rangle + M_{0}\left\vert u_{n}\left(t\right) - x_{0}\right\vert + r_{0}M_{0},}$$

and then

$$\displaystyle\begin{array}{rcl} & & 2r_{0}\int _{0}^{t}\left\vert h_{ n}\left(s\right)\right\vert \mathit{ds} {}\\ & & \qquad \qquad \leq \left\vert u_{0n} - x_{0}\right\vert ^{2} + 2\int _{ 0}^{t}\left(\left\vert f_{ n}\left(s\right)\right\vert + M_{0}\right)\left\vert u_{n}\left(s\right) - x_{0}\right\vert \mathit{ds} + 2r_{0}M_{0}T {}\\ & & \qquad \qquad \leq C\left[1 + \left\vert u_{0n}\right\vert ^{2} + \left(\int _{ 0}^{T}\left\vert f_{ n}\left(s\right)\right\vert \mathit{ds}\right)^{2}\right] {}\\ \end{array}$$

with C a constant depending on \(x_{0},\hat{x}_{0},T,M_{0},r_{0}\).

Hence \(k_{n}\left(t\right) = \int _{0}^{t}h_{ n}\left(s\right)\mathit{ds}\) is bounded in \(\mathit{BV }\left(\left[0,T\right]; \mathbb{H}\right)\). Then there exists a \(k \in \mathit{BV }\left(\left[0,T\right]; \mathbb{H}\right)\) such that on a subsequence also denoted by k n we have

$$\displaystyle{k_{n}\mathop{ \rightarrow }\limits^{ w^{{\ast}}}k\quad \text{ in }\mathit{BV }\left(\left[0,T\right]; \mathbb{H}\right).}$$

The sequence \(\left(u_{n}\right)_{n\in \mathbb{N}^{{\ast}}}\) is a Cauchy sequence in \(C\left(\left[0,T\right]; \mathbb{H}\right)\) since if \(u_{m} = \mathcal{S}\left(A;u_{0m},f_{m}\right)\) then

$$\displaystyle{\sup _{t\in \left[0,T\right]}\left\vert u_{n}\left(t\right) - u_{m}\left(t\right)\right\vert \leq \left\vert u_{0n} - u_{0m}\right\vert +\int _{ 0}^{T}\left\vert f_{ n}\left(s\right) - f_{m}\left(s\right)\right\vert \mathit{ds}.}$$

Then there exists a \(u \in C\left(\left[0,T\right]; \mathbb{H}\right)\) such that

$$\displaystyle{u_{n} \rightarrow u\quad \quad \text{ in }C\left(\left[0,T\right]; \mathbb{H}\right).}$$

Passing to the limit in (6.51), we obtain that \(\left(u,k\right)\) satisfies \(\left(P_{A}\right)\). The proof is complete. ■ 

Just as the assumption \(\mathrm{int}\left(D\left(A\right)\right)\neq \varnothing \) has a smoothing effect, as we saw in Proposition 6.51, the maximal monotone operator \(A = \partial \varphi\) also has a smoothing effect.

Consider the differential equation

$$\displaystyle{ \left\{\begin{array}{l} \dfrac{\mathit{du}\left(t\right)} {\mathit{dt}} + \partial \varphi \left(u\left(t\right)\right) \ni f\left(t\right),\quad a.e.\;t \in \left]0,T\right[, \\ u\left(0\right) = u_{0}, \end{array} \right. }$$
(6.52)

where

$$\displaystyle{\varphi: \mathbb{H} \rightarrow ] -\infty,+\infty ]\quad \text{ is a proper convex l.s.c. function.}}$$

Proposition 6.54.

If \(u_{0} \in \overline{D\left(\partial \varphi \right)}\left(= \overline{\mathrm{Dom}\left(\varphi \right)}\right)\) and \(f \in L^{2}\left(0,T; \mathbb{H}\right)\) , then the Cauchy problem (6.52) has a unique strong solution. Moreover \(u \in W^{1,2}\left(\delta,T; \mathbb{H}\right)\), \(\forall \delta > 0\), \(\sqrt{ t}\dfrac{\mathit{du}} {\mathit{dt}} \in L^{2}\left(0,T; \mathbb{H}\right)\), \(\varphi \left(u\right) \in L^{1}\left(0,T\right)\) and if \(u_{0} \in \mathrm{ Dom}\left(\varphi \right)\) , then \(\dfrac{\mathit{du}} {\mathit{dt}} \in L^{2}\left(0,T; \mathbb{H}\right)\) and \(\varphi \left(u\right) \in L^{\infty }\left(0,T\right)\) .
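The smoothing described in Proposition 6.54 is what the implicit Euler (proximal) discretisation of (6.52) exploits: each step solves \(u_{k+1} + h\,\partial \varphi \left(u_{k+1}\right) \ni u_{k} + hf_{k}\), i.e. \(u_{k+1} = \left(I + h\partial \varphi \right)^{-1}\left(u_{k} + hf_{k}\right)\). A minimal sketch with our illustrative choice \(\varphi \left(u\right) = \left\vert u\right\vert \) on \(\mathbb{R}\) and f = 0 (an assumption of this sketch, not an example from the text), whose exact strong solution from \(u_{0} = 1\) is \(u\left(t\right) = \left(1 - t\right)^{+}\):

```python
# Proximal step for du/dt + d|u| (subdifferential) with f = 0: u_next solves
# u_next + h*sign(u_next) ∋ u_k, which is soft-thresholding by h.
def soft(x, h):
    return max(abs(x) - h, 0.0) * (1.0 if x >= 0 else -1.0)

h = 0.01
u = 1.0
traj = [u]
for _ in range(200):               # integrate on [0, 2]
    u = soft(u, h)
    traj.append(u)

# exact solution: u(t) = max(1 - t, 0); it reaches 0 at t = 1 and then
# stays there, since 0 belongs to the subdifferential of |.| at 0
```

The scheme reproduces the exact solution on the grid here; in general the resolvent step is well defined for any proper convex l.s.c. \(\varphi \), which is the discrete counterpart of the existence statement above.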

Consider now the Cauchy problem

$$\displaystyle{ \left\{\begin{array}{l} \dfrac{\mathit{dx}\left(t\right)} {\mathit{dt}} + \partial ^{-}\varphi \left(x\left(t\right)\right) \ni g\left(t\right),\quad \quad a.e.\;t \in \left]0,T\right[, \\ x\left(0\right) = x_{0}, \end{array} \right. }$$
(6.53)

where

$$\displaystyle{ \begin{array}{l@{\quad }l} \left(i\right) \quad &\quad \varphi: \mathbb{R}^{d} \rightarrow \left]-\infty,+\infty \right]\text{ is a proper l.s.c.}\left(\rho,\gamma \right)\text{ \textendash semiconvex function,} \\ \left(\mathit{ii}\right)\quad &\quad \mathrm{Dom}\left(\varphi \right)\;\text{ is a locally closed subset of }\mathbb{R}^{d}, \end{array} }$$
(6.54)

and

$$\displaystyle{ \begin{array}{r@{\quad }l} \left(i\right)\quad &\quad x_{0} \in \mathrm{ Dom}\left(\varphi \right), \\ \left(\mathit{ii}\right)\quad &\quad g \in L^{2}\left(0,T; \mathbb{R}^{d}\right). \end{array} }$$
(6.55)

Hence for all \(\left(x,\hat{x}\right) \in \partial ^{-}\varphi\)

$$\displaystyle{\left\langle \hat{x},z - x\right\rangle +\varphi \left(x\right) \leq \varphi \left(z\right) + \left(\rho +\gamma \left\vert \hat{x}\right\vert \right)\left\vert z - x\right\vert ^{2},\quad \forall \,z \in \mathbb{R}^{d}.}$$

We denote here by \(\partial ^{-}\varphi \left(x\right)\) the Fréchet subdifferential given in Definition 6.40. Recall that \(E\subset \mathbb{R}^{d}\) is locally closed if for all x ∈ E, there exists a δ > 0 such that \(E \cap \overline{B}\left(x,\delta \right)\) is closed.

From Degiovanni–Marino–Tosques [21] and Rossi–Savaré [66] we have:

Proposition 6.55.

Let the assumptions (6.54) and (6.55) be satisfied. Then there exist \(h \in L^{2}\left(0,T; \mathbb{R}^{d}\right)\) and a unique absolutely continuous function \(x: \left[0,T\right] \rightarrow \mathrm{ Dom}\left(\varphi \right)\) such that:

$$\displaystyle{\begin{array}{l@{\quad }l} \left(a\right)\quad &\quad \int _{0}^{T}\left[\left\vert x^{{\prime}}\left(t\right)\right\vert ^{2} + \left\vert \varphi \left(x\left(t\right)\right)\right\vert \right]\mathit{dt} < \infty, \\ \left(b\right)\quad &\quad x\left(t\right) \in \mathrm{ Dom}\left(\partial ^{-}\varphi \right),\quad \quad a.e.\;t \in \left]0,T\right[, \\ \left(c\right)\quad &\quad h\left(t\right) \in \partial ^{-}\varphi \left(x\left(t\right)\right),\quad \quad a.e.\;t \in \left]0,T\right[, \end{array} }$$

and

$$\displaystyle{\left(P_{g}\right): \left\{\begin{array}{l} x^{{\prime}}\left(t\right) + h\left(t\right) = g\left(t\right),\quad \quad a.e.\;t \in \left]0,T\right[ \\ x\left(0\right) = x_{0}. \end{array} \right.}$$

Moreover \(a.e.\,\;t,s \in \left]0,T\right[,\;s < t\) :

$$\displaystyle{\int _{s}^{t}\left\vert x^{{\prime}}\left(r\right)\right\vert ^{2}\mathit{dr} =\varphi \left(x\left(s\right)\right) -\varphi \left(x\left(t\right)\right) + \int _{ s}^{t}\left(g\left(r\right),x^{{\prime}}\left(r\right)\right)\mathit{dr}}$$

and there exists a positive constant \(C_{T}\) (independent of \(x_{0}\) and g) such that

$$\displaystyle{\left\Vert x\right\Vert _{T} + \left\Vert \varphi \left(x\right)\right\Vert _{T} + \int _{0}^{T}\left\vert x^{{\prime}}\left(r\right)\right\vert ^{2}\mathit{dr} \leq C_{ T}\left(\left\vert x_{0}\right\vert ^{2} +\varphi ^{+}\left(x_{ 0}\right) + \int _{0}^{T}\left\vert g\left(r\right)\right\vert ^{2}\mathit{dr}\right).}$$

Remark 6.56.

If we put

$$\displaystyle{k\left(t\right) = \int _{0}^{t}h\left(s\right)\mathit{ds}}$$

then

$$\displaystyle{\left(\mathit{GSP}\right): \left\{\begin{array}{l@{\quad }l} j)\; \quad &k \in \mathit{BV }\left(\left[0,T\right]; \mathbb{R}^{d}\right),\ k\left(0\right) = 0\text{,} \\ \mathit{jj})\;\quad &x\left(t\right) + k\left(t\right) = x_{0} + \int _{0}^{t}g\left(s\right)\mathit{ds},\ \forall \ t \in \left[0,T\right], \\ \mathit{jv})\;\quad &\forall \,0 \leq s \leq t,\;\forall y: \left[0,\infty \right[ \rightarrow \mathbb{R}^{d}\text{ continuous:} \\ \quad &\begin{array}{l} \,\int _{s}^{t}\left\langle y\left(r\right) - x\left(r\right),\mathit{dk}\left(r\right)\right\rangle + \int _{ s}^{t}\varphi \left(x\left(r\right)\right)\mathit{dr} \\ \quad \quad \leq \int _{s}^{t}\varphi \left(y\left(r\right)\right)\mathit{dr} + \int _{ s}^{t}\left\vert y\left(r\right) - x\left(r\right)\right\vert ^{2}\left(\rho \mathit{dr} +\gamma d\left\updownarrow k\right\updownarrow _{ r}\right), \end{array} \end{array} \right.}$$

that is \(\left(x,k\right)\) is the solution of the generalized Skorohod problem \(\left(x_{0},m,\partial ^{-}\varphi \right)\) with \(m\left(t\right) = \int _{0}^{t}g\left(s\right)\mathit{ds}\) (see Definition 4.29).

6.3.10 Auxiliary Results

Proposition 6.57.

If \(g \in L^{1}\left(0,T\right)\) and

$$\displaystyle{\rho _{\lambda }\left(t\right) = e^{-\lambda t}\int _{ 0}^{t}\left\vert g\left(s\right)\right\vert e^{\lambda s}\mathit{ds},\quad t \in \left[0,T\right],\;\lambda > 0,}$$

then

$$\displaystyle{\lim _{\lambda \rightarrow \infty }\left[\sup _{t\in \left[0,T\right]}\rho _{\lambda }\left(t\right)\right] = 0.}$$

Proof.

Consider the continuous function \(t\longmapsto G\left(t\right) = \int _{0}^{t}\left\vert g\left(s\right)\right\vert \mathit{ds}\) and let \(\mathbf{m}_{G}\left(\varepsilon \right)\) be the modulus of continuity of G on \(\left[0,T\right]\). We have for all \(t \in \left[0,T\right]\) and λ > 0:

$$\displaystyle\begin{array}{rcl} 0& \leq & \rho _{\lambda }\left(t\right) = e^{-\lambda t}\left[\int _{ 0}^{\left(t-\sqrt{1/\lambda }\right)^{+} }\left\vert g\left(s\right)\right\vert e^{\lambda s}\mathit{ds} + \int _{ \left(t-\sqrt{1/\lambda }\right)^{+ }}^{t}\left\vert g\left(s\right)\right\vert e^{\lambda s}\mathit{ds}\right] {}\\ & \leq & e^{-\lambda t}e^{\lambda \left(t-\sqrt{1/\lambda }\right)^{+} }\int _{0}^{\left(t-\sqrt{1/\lambda }\right)^{+} }\left\vert g\left(s\right)\right\vert \mathit{ds} + e^{-\lambda t}e^{\lambda t}\mathbf{m}_{ G}(\sqrt{1/\lambda }) {}\\ & \leq & e^{-\sqrt{\lambda }}G\left(T\right) + \mathbf{m}_{ G}(\sqrt{1/\lambda }), {}\\ \end{array}$$

which yields the result. ■ 
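A quick numerical sanity check of Proposition 6.57, a sketch under the illustrative choice g ≡ 1 (for which \(\rho _{\lambda }\left(t\right) = \left(1 - e^{-\lambda t}\right)/\lambda \leq 1/\lambda \)):

```python
import numpy as np

def sup_rho(lam, g, T=1.0, n=400):
    # sup over a grid of rho_lam(t) = ∫_0^t |g(s)| e^{-lam(t-s)} ds; keeping
    # the factor e^{-lam*t} inside the integrand avoids overflow for large lam
    t = np.linspace(0.0, T, n + 1)
    gs = np.abs(np.array([g(s) for s in t]))
    sup = 0.0
    for i in range(1, n + 1):
        y = gs[: i + 1] * np.exp(-lam * (t[i] - t[: i + 1]))
        # composite trapezoidal rule on the uniform grid
        sup = max(sup, float(np.sum(y[1:] + y[:-1]) * (T / n) / 2.0))
    return sup

# sup_t rho_lam(t) should shrink roughly like 1/lam as lam grows
vals = [sup_rho(lam, g=lambda s: 1.0) for lam in (1.0, 10.0, 100.0)]
```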

We now give a variant of the Banach fixed point theorem.

Let \(\left\{\left(\mathbb{V}_{a},d_{a}\right): a \geq 0\right\}\) be a family of complete metric spaces such that for all 0 ≤ a ≤ b:

$$\displaystyle{\mathbb{V}_{b} \subset \mathbb{V}_{a}}$$

with a continuous embedding. Let

$$\displaystyle{\mathbb{V} = \bigcap _{a\geq 0}\mathbb{V}_{a} = \bigcap _{a\in \mathbb{N}^{{\ast}}}\mathbb{V}_{a},}$$

and assume \(\mathbb{V}\neq \varnothing \). Then \(\mathbb{V}\) is a complete metric space with respect to the metric

$$\displaystyle{\rho \left(x,y\right) =\sum _{a\in \mathbb{N}} \frac{1} {2^{a}} \frac{d_{a}\left(x,y\right)} {1 + d_{a}\left(x,y\right)},}$$

and if \(x_{n},x \in \mathbb{V}\), \(n \in \mathbb{N}^{{\ast}}\), then as \(n \rightarrow \infty \),

$$\displaystyle{x_{n} \rightarrow x\;\text{ in }\mathbb{V}\quad \Longleftrightarrow\quad x_{n} \rightarrow x\;\text{ in }\mathbb{V}_{a},\ \forall \ a \geq 0.}$$

Lemma 6.58.

Let \(\Gamma: \mathbb{V} \rightarrow \mathbb{V}\) be a mapping satisfying: there exists an \(a_{0} \geq 0\) and for all \(a \geq a_{0}\) there exists a \(\delta _{a} \in ]0,1[\) such that

$$\displaystyle{d_{a}\left(\Gamma \left(x\right),\Gamma \left(y\right)\right) \leq \delta _{a}\ d_{a}\left(x,y\right),\text{ for all }x,y \in \mathbb{V}.}$$

Then \(\Gamma \) has a unique fixed point, i.e. there exists a unique \(x \in \mathbb{V}\) such that

$$\displaystyle{x = \Gamma \left(x\right).}$$

(Banach’s fixed point theorem corresponds to the case \(\left(\mathbb{V}_{a},d_{a}\right) \equiv \left(\mathbb{V}_{0},d_{0}\right)\) for all a ≥ 0.)

Proof.

We define

$$\displaystyle{x_{0} \in \mathbb{V},\quad x_{n+1} = \Gamma \left(x_{n}\right).}$$

Then by induction we deduce that

$$\displaystyle{x_{n} \in \mathbb{V},\quad \text{ for all }n \in \mathbb{N},}$$

and

$$\displaystyle{d_{a}\left(x_{n+p},x_{n}\right) \leq \frac{\delta _{a}^{n}} {1 -\delta _{a}}d_{a}\left(x_{1},x_{0}\right),}$$

for all \(a \geq a_{0}\), \(n,p \in \mathbb{N}^{{\ast}}\). Hence there exists a unique \(x^{\left(a\right)} \in \mathbb{V}_{a}\) such that as \(n \rightarrow \infty \)

$$\displaystyle{x_{n} \rightarrow x^{\left(a\right)}\quad \text{ in }\mathbb{V}_{ a}.}$$

Moreover by the continuity of the embedding \(\mathbb{V}_{a} \subset \mathbb{V}_{b}\) for 0 ≤ b ≤ a, we infer

$$\displaystyle{x_{n} \rightarrow x^{\left(a\right)}\quad \text{ in }\mathbb{V}_{ b}.}$$

Consequently \(x^{\left(a\right)} = x^{\left(a_{0}\right)}\) for all \(a \geq a_{0}\), \(x\mathop{ =}\limits^{ \mathit{def }}x^{\left(a_{0}\right)} \in \mathbb{V}\) and for \(a \geq a_{0}\)

$$\displaystyle\begin{array}{rcl} d_{a}\left(x,\Gamma \left(x\right)\right)& \leq & d_{a}\left(x,x_{n+1}\right) + d_{a}\left(\Gamma \left(x_{n}\right),\Gamma \left(x\right)\right) {}\\ & \leq & d_{a}\left(x,x_{n+1}\right) +\delta _{a}d_{a}\left(x_{n},x\right) {}\\ & & \rightarrow 0,\;\;\text{ as }n \rightarrow \infty, {}\\ \end{array}$$

which yields

$$\displaystyle{x = \Gamma \left(x\right).}$$

The fixed point x is unique, since if \(x,y \in \mathbb{V}\) are two fixed points, then for \(a \geq a_{0}\)

$$\displaystyle{d_{a}\left(x,y\right) = d_{a}\left(\Gamma \left(x\right),\Gamma \left(y\right)\right) \leq \delta _{a}d_{a}\left(x,y\right)}$$

and x = y follows. ■ 
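Lemma 6.58 is tailor-made for Picard iteration under weighted metrics. A minimal numerical sketch of the classical example (our illustration, not from the text): on \(\mathbb{V}_{a} = C\left(\left[0,T\right]; \mathbb{R}\right)\) with \(d_{a}\left(x,y\right) =\sup _{t}e^{-\mathit{at}}\left\vert x\left(t\right) - y\left(t\right)\right\vert \), the map \(\left(\Gamma x\right)\left(t\right) = 1 + \int _{0}^{t}x\left(s\right)\mathit{ds}\) satisfies \(d_{a}\left(\Gamma x,\Gamma y\right) \leq a^{-1}d_{a}\left(x,y\right)\), so \(\delta _{a} = 1/a < 1\) for all \(a > a_{0} = 1\), and the fixed point is \(e^{t}\):

```python
import numpy as np

T, n = 1.0, 1000
t = np.linspace(0.0, T, n + 1)
h = T / n

def gamma(x):
    # (Γx)(t_i) = 1 + ∫_0^{t_i} x(s) ds, via the cumulative trapezoidal rule
    incr = (x[1:] + x[:-1]) * h / 2.0
    return 1.0 + np.concatenate(([0.0], np.cumsum(incr)))

x = np.zeros(n + 1)                 # any starting point in V
for _ in range(60):
    x = gamma(x)

err = float(np.max(np.abs(x - np.exp(t))))   # fixed point is e^t
```

The residual error is dominated by the trapezoidal discretisation, not by the fixed-point iteration, which converges geometrically in every \(d_{a}\).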

6.4 Annex C: Deterministic and Stochastic Inequalities

6.4.1 Deterministic Inequalities

Proposition 6.59 (Stieltjes–Gronwall Inequality).

Let \(K: \left[0,T\right] \rightarrow \mathbb{R}\) be a continuous increasing function, \(a: \left[0,T\right] \rightarrow \left[0,\infty \right[\) be an increasing function and \(x: \left[0,T\right] \rightarrow \mathbb{R}\) be a measurable function such that

$$\displaystyle{\int _{0}^{T}\left\vert x\left(r\right)\right\vert \mathit{dK}\left(r\right) < \infty.}$$

If

$$\displaystyle{x\left(t\right) \leq a\left(t\right) +\int _{ 0}^{t}x\left(r\right)\mathit{dK}\left(r\right),\quad \forall \ t \in \left[0,T\right],}$$

then

$$\displaystyle{ x\left(t\right) \leq a\left(t\right)e^{K\left(t\right)-K\left(0\right)},\quad \forall \ t \in \left[0,T\right]. }$$
(6.56)

Proof.

I. Note that if \(\alpha,\beta _{0},\beta _{1},\ldots,\beta _{n} \geq 0\) and \(z_{0},z_{1},\ldots,z_{n} \in \mathbb{R}\) satisfy

$$\displaystyle{\begin{array}{l} z_{0} \leq \alpha, \\ z_{i} \leq \alpha +\beta _{0}z_{0} +\beta _{1}z_{1} + \cdots +\beta _{i-1}z_{i-1},\;1 \leq i \leq n, \end{array} }$$

then

$$\displaystyle{z_{i} \leq \alpha e^{\beta _{0}+\beta _{1}+\cdots +\beta _{i-1}+\beta _{i}}.}$$

Indeed, associating the sequence

$$\displaystyle{\begin{array}{l} x_{0} =\alpha,\quad x_{i} =\alpha +\beta _{0}x_{0} +\beta _{1}x_{1} + \cdots +\beta _{i-1}x_{i-1}\,,\;1 \leq i \leq n, \end{array} }$$

by induction

$$\displaystyle{z_{i} \leq x_{i} =\alpha \left(1 +\beta _{0}\right)\left(1 +\beta _{1}\right)\cdots \left(1 +\beta _{i-1}\right) \leq \alpha e^{\beta _{0}+\beta _{1}+\cdots +\beta _{i-1}}}$$

follows.

Let

$$\displaystyle{g\left(t\right) = a\left(t\right) +\int _{ 0}^{t}x^{+}\left(r\right)\mathit{dK}\left(r\right).}$$

Clearly g is an increasing function and

$$\displaystyle{x\left(t\right) \leq x^{+}\left(t\right) \leq g\left(t\right) \leq a\left(t\right) +\int _{ 0}^{t}g\left(r\right)\mathit{dK}\left(r\right).}$$

Let \(0 = t_{0} < t_{1} < \cdots < t_{n} = t\) be such that

$$\displaystyle{1 >\max \left\{K\left(t_{i}\right) - K\left(t_{i-1}\right): i \in \overline{1,n}\right\}\left(\mathop{=}\limits^{ \mathit{def }}\gamma _{n}\right) \rightarrow 0,\quad \text{ as }n \rightarrow \infty.}$$

Let \(g_{i} = g\left(t_{i}\right)\), \(c_{0} = 0\), \(c_{i} =\int _{ t_{i-1}}^{t_{i}}\mathit{dK}\left(r\right) = K\left(t_{i}\right) - K\left(t_{i-1}\right) \leq \gamma _{n}\). We have

$$\displaystyle\begin{array}{rcl} g_{i}& \leq & a\left(t_{i}\right) +\sum _{ j=1}^{i}\int _{ t_{j-1}}^{t_{j} }g\left(r\right)\mathit{dK}\left(r\right) {}\\ & \leq & a\left(t\right) +\sum _{ j=1}^{i}g_{ j}\int _{t_{j-1}}^{t_{j} }\mathit{dK}\left(r\right) {}\\ & \leq & a\left(t\right) + \left(c_{0}g_{0} + c_{1}g_{1} + \cdots + c_{i-1}g_{i-1}\right) +\gamma _{n}g_{i}, {}\\ \end{array}$$

which yields

$$\displaystyle{g_{i} \leq \frac{a\left(t\right)} {1 -\gamma _{n}} + \frac{c_{0}} {1 -\gamma _{n}}g_{0} + \frac{c_{1}} {1 -\gamma _{n}}g_{1} + \cdots + \frac{c_{i-1}} {1 -\gamma _{n}}g_{i-1}}$$

for all \(i \in \left\{1,2,\ldots,n\right\}\). Hence

$$\displaystyle\begin{array}{rcl} x\left(t\right)& \leq & g\left(t\right) = g_{n} \leq \frac{a\left(t\right)} {1 -\gamma _{n}}\exp \left[ \frac{1} {1 -\gamma _{n}}\sum _{j=0}^{n}c_{ j}\right] {}\\ & =& \frac{a\left(t\right)} {1 -\gamma _{n}}\exp \left[ \frac{1} {1 -\gamma _{n}}\left[K\left(t\right) - K\left(0\right)\right]\right]. {}\\ \end{array}$$

The inequality (6.56) follows by letting \(n \rightarrow \infty \). ■ 
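The discrete inequality in Step I of the proof is easy to stress-test numerically. A small sketch (our illustration): we pick arbitrary nonnegative \(\beta _{j}\) and define z by the equality recursion, the worst case, then compare with the claimed exponential bound:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.0
beta = rng.uniform(0.0, 0.2, size=50)   # arbitrary beta_j >= 0

# worst case: z_i = alpha + sum_{j<i} beta_j z_j (equality), which gives
# z_i = alpha * prod_{j<i} (1 + beta_j)
z = [alpha]
for i in range(1, 51):
    z.append(alpha + float(np.dot(beta[:i], z[:i])))

# claimed bound: z_i <= alpha * exp(beta_0 + ... + beta_{i-1})
bounds = alpha * np.exp(np.concatenate(([0.0], np.cumsum(beta))))
ok = all(zi <= bi + 1e-12 for zi, bi in zip(z, bounds))
```

The comparison \(1+\beta \leq e^{\beta }\) term by term is exactly why the bound holds, so `ok` is true for any choice of nonnegative \(\beta _{j}\).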

For \(K\left(t\right) =\int _{ 0}^{t}b\left(r\right)\mathit{dr}\), where \(b: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\) is a locally integrable function, we obtain the following corollary.

Corollary 6.60 (Gronwall Inequality).

Let \(a: \left[0,T\right] \rightarrow \left[0,\infty \right[\) be an increasing function and \(x,b: \left[0,T\right] \rightarrow \mathbb{R}\) , b ≥ 0, be integrable functions such that

$$\displaystyle{\int _{0}^{T}b\left(t\right)\left\vert x\left(t\right)\right\vert \mathit{dt} < \infty.}$$

If

$$\displaystyle{x\left(t\right) \leq a\left(t\right) +\int _{ 0}^{t}b\left(s\right)x\left(s\right)\mathit{ds},\quad \forall \ t \in \left[0,T\right],}$$

then

$$\displaystyle{ x\left(t\right) \leq a\left(t\right)\exp \left(\int _{0}^{t}b\left(s\right)\mathit{ds}\right),\quad \forall \ t \in \left[0,T\right]. }$$
(6.57)
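As a numerical illustration of (6.57) (our addition, not part of the text), take \(a(t) = 1 + t\) and \(b \equiv 1\). The function \(x(t) = 2e^{t} - 1\) solves \(x' = 1 + x\), \(x(0) = 1\), hence satisfies the hypothesis with equality, \(x(t) = a(t) +\int_{0}^{t}x(s)\,\mathit{ds}\), and (6.57) then gives \(x(t) \leq (1+t)e^{t}\), with equality at \(t = 0\).

```python
import math

def check_gronwall(T=2.0, n=20_000):
    """Verify hypothesis and conclusion of (6.57) for x(t) = 2 e^t - 1."""
    h = T / n
    integral = 0.0                     # left Riemann sum for int_0^t x(s) ds
    worst_gap = float("inf")
    for k in range(n + 1):
        t = k * h
        x = 2.0 * math.exp(t) - 1.0
        # hypothesis x(t) <= a(t) + int_0^t b x ds (slack for quadrature error)
        assert x <= (1.0 + t) + integral + 5e-3
        bound = (1.0 + t) * math.exp(t)        # a(t) exp(int_0^t b(s) ds)
        assert x <= bound + 1e-12              # conclusion (6.57)
        worst_gap = min(worst_gap, bound - x)
        integral += x * h
    return worst_gap

print(check_gronwall())   # smallest slack in the bound, attained at t = 0
```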

Corollary 6.61 (Backward Stieltjes–Gronwall Inequality).

Let \(\tilde{K}: \left[0,T\right] \rightarrow \mathbb{R}\) be a continuous increasing function, \(\tilde{a}: \left[0,T\right] \rightarrow \left[0,\infty \right[\) be a decreasing function and \(y: \left[0,T\right] \rightarrow \mathbb{R}\) be a measurable function such that

$$\displaystyle{\int _{0}^{T}\left\vert y\left(r\right)\right\vert d\tilde{K}\left(r\right) < \infty.}$$

If

$$\displaystyle{y\left(t\right) \leq \tilde{ a}\left(t\right) +\int _{ t}^{T}y\left(r\right)d\tilde{K}\left(r\right),\quad \forall \ t \in \left[0,T\right],}$$

then

$$\displaystyle{ y\left(t\right) \leq \tilde{ a}\left(t\right)e^{\tilde{K}\left(T\right)-\tilde{K}\left(t\right)},\quad \forall \ t \in \left[0,T\right]. }$$
(6.58)

Proof.

Let \(x\left(t\right) = y\left(T - t\right)\), \(a\left(t\right) =\tilde{ a}\left(T - t\right)\) and \(K\left(t\right) =\tilde{ K}\left(T\right) -\tilde{ K}\left(T - t\right)\). Then

$$\displaystyle{x\left(t\right) \leq a\left(t\right) +\int _{ 0}^{t}x\left(r\right)\mathit{dK}\left(r\right),\quad \forall \ t \in \left[0,T\right],}$$

and by Proposition 6.59

$$\displaystyle{x\left(t\right) \leq a\left(t\right)e^{K\left(t\right)-K\left(0\right)},\quad \forall \ t \in \left[0,T\right],}$$

that is, (6.58) after replacing t by T − t. ■ 

In particular for \(K\left(t\right) =\int _{ 0}^{t}b\left(r\right)\mathit{dr}\), we have:

Corollary 6.62 (Backward Gronwall Inequality).

Let \(\tilde{a}: \left[0,T\right] \rightarrow \left[0,\infty \right[\) be a decreasing function and \(y,b: \left[0,T\right] \rightarrow \mathbb{R}\) , b ≥ 0, be integrable functions such that

$$\displaystyle{\int _{0}^{T}b\left(t\right)\left\vert y\left(t\right)\right\vert \mathit{dt} < \infty.}$$

If

$$\displaystyle{y\left(t\right) \leq \tilde{ a}\left(t\right) +\int _{ t}^{T}b\left(s\right)y\left(s\right)\mathit{ds},\quad \forall \ t \in \left[0,T\right],}$$

then

$$\displaystyle{ y\left(t\right) \leq \tilde{ a}\left(t\right)\exp \left(\int _{t}^{T}b\left(s\right)\mathit{ds}\right),\quad \forall \ t \in \left[0,T\right]. }$$
(6.59)
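The time-reversal substitution in the proof of Corollary 6.61 can be seen numerically. In the sketch below (our addition, not part of the text) we take \(\tilde{a} \equiv 1\) and \(b \equiv 1\) on \([0,T]\); then \(y(t) = e^{T-t}\) attains equality both in the hypothesis, \(y(t) = 1 +\int_{t}^{T}y(s)\,\mathit{ds}\), and in the backward bound (6.59), \(y(t) \leq e^{T-t}\).

```python
import math

def check_backward(T=1.0, n=100_000):
    """Verify hypothesis and conclusion of (6.59) for y(t) = e^{T - t}."""
    h = T / n
    tail = 0.0                        # int_t^T y(s) ds, built from t = T down
    for k in range(n, -1, -1):
        t = k * h
        yt = math.exp(T - t)          # equality case of the hypothesis
        # hypothesis (slack for the Riemann-sum quadrature error)
        assert yt <= 1.0 + tail + 1e-3
        assert yt <= 1.0 * math.exp(T - t) + 1e-12   # conclusion (6.59)
        tail += yt * h                # add the piece on [t, t + h]
    return tail                       # approximates int_0^T y = e - 1

print(check_backward())
```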

We now give some other deterministic inequalities used in the book.

Lemma 6.63.

Let \(\alpha,\beta \in L_{\mathit{loc}}^{1}\left(\left[0,\infty \right[\right)\) .

  I.

    If α ≥ 0 a.e. and \(x: \left[0,\infty \right[ \rightarrow \mathbb{R}^{d}\) is an absolutely continuous function such that

    $$\displaystyle{\left\langle x^{{\prime}}\left(t\right),x\left(t\right)\right\rangle \leq \alpha \left(t\right)\left\vert x\left(t\right)\right\vert +\beta \left(t\right)\left\vert x\left(t\right)\right\vert ^{2},\quad \text{ a.e. }t \geq 0,\ }$$

    then

    $$\displaystyle{ \left\vert x\left(t\right)\right\vert \leq \left\vert x\left(\tau \right)\right\vert e^{\int _{\tau }^{t}\beta \left(s\right)\mathit{ds} } +\int _{ \tau }^{t}\alpha \left(s\right)e^{\int _{s}^{t}\beta \left(r\right)\mathit{dr} }\mathit{ds} }$$
    (6.60)

    for all 0 ≤τ ≤ t.

  II.

    If α,β ≥ 0 a.e., \(a: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\) is an increasing function and \(\varphi: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\) is a continuous function such that \(\forall \,t \geq 0\)

    $$\displaystyle{\varphi ^{2}\left(t\right) \leq a\left(t\right) + 2\int _{ 0}^{t}\alpha \left(s\right)\varphi \left(s\right)\mathit{ds} + 2\int _{ 0}^{t}\beta \left(s\right)\varphi ^{2}\left(s\right)\mathit{ds},}$$

    then

    $$\displaystyle{ \varphi \left(t\right) \leq \sqrt{a\left(t\right)}e^{\int _{0}^{t}\beta \left(s\right)\mathit{ds} } +\int _{ 0}^{t}\alpha \left(s\right)e^{\int _{s}^{t}\beta \left(r\right)\mathit{dr} }\mathit{ds},\;\forall \,t \geq 0. }$$
    (6.61)

Proof.

  I.

    Let \(u_{\varepsilon }\left(t\right) = \left\vert x\left(t\right)\right\vert ^{2}e^{-2\int _{0}^{t}\beta \left(s\right)\mathit{ds} }+\varepsilon\), \(\varepsilon > 0\). Then

    $$\displaystyle\begin{array}{rcl} u_{\varepsilon }^{{\prime}}\left(t\right)& =& 2\left\langle x^{{\prime}}\left(t\right),x\left(t\right)\right\rangle e^{-2\int _{0}^{t}\beta \left(s\right)\mathit{ds} } - 2\beta \left(t\right)\left\vert x\left(t\right)\right\vert ^{2}e^{-2\int _{0}^{t}\beta \left(s\right)\mathit{ds} } {}\\ & \leq & 2\alpha \left(t\right)\left\vert x\left(t\right)\right\vert e^{-2\int _{0}^{t}\beta \left(s\right)\mathit{ds} } {}\\ & \leq & 2\alpha \left(t\right)\sqrt{u_{\varepsilon }\left(t\right)}e^{-\int _{0}^{t}\beta \left(s\right)\mathit{ds} }, {}\\ \end{array}$$

    which yields

    $$\displaystyle\begin{array}{rcl} \frac{d} {\mathit{dt}}\left(\sqrt{u_{\varepsilon }\left(t\right)}\right)& =& \frac{u_{\varepsilon }^{{\prime}}\left(t\right)} {2\sqrt{u_{\varepsilon }\left(t\right)}} {}\\ & \leq &\alpha \left(t\right)e^{-\int _{0}^{t}\beta \left(s\right)\mathit{ds} }. {}\\ \end{array}$$

    Hence

    $$\displaystyle{\sqrt{u_{\varepsilon }\left(t\right)} \leq \sqrt{u_{\varepsilon }\left(\tau \right)} +\int _{ \tau }^{t}\alpha \left(s\right)e^{-\int _{0}^{s}\beta \left(r\right)\mathit{dr} }\mathit{ds}.}$$

    Passing to the limit as \(\varepsilon \searrow 0\) the inequality (6.60) follows.

  II.

    Let \(\theta \in \left[0,T\right]\) be fixed and

    $$\displaystyle{x\left(t\right) = \left(a\left(\theta \right) + 2\int _{0}^{t}\alpha \left(s\right)\varphi \left(s\right)\mathit{ds} + 2\int _{ 0}^{t}\beta \left(s\right)\varphi ^{2}\left(s\right)\mathit{ds}\right)^{1/2}.}$$

    Then for all \(t \in \left[0,\theta \right]\):

    $$\displaystyle{\varphi ^{2}\left(t\right) \leq a\left(\theta \right) + 2\int _{ 0}^{t}\alpha \left(s\right)\varphi \left(s\right)\mathit{ds} + 2\int _{ 0}^{t}\beta \left(s\right)\varphi ^{2}\left(s\right)\mathit{ds} = x^{2}\left(t\right),}$$

    and

    $$\displaystyle\begin{array}{rcl} x^{{\prime}}\left(t\right)x\left(t\right)& =& \alpha \left(t\right)\varphi \left(t\right) +\beta \left(t\right)\varphi ^{2}\left(t\right) {}\\ & \leq &\alpha \left(t\right)x\left(t\right) +\beta \left(t\right)x^{2}\left(t\right), {}\\ \end{array}$$

    which implies, by the first part, that for \(t \in \left[0,\theta \right]\):

    $$\displaystyle{\varphi \left(t\right) \leq x\left(t\right) \leq x\left(0\right)e^{\int _{0}^{t}\beta \left(s\right)\mathit{ds} } +\int _{ 0}^{t}\alpha \left(s\right)e^{\int _{s}^{t}\beta \left(r\right)\mathit{dr} }\mathit{ds},\,}$$

    which is (6.61) if we choose t = θ. ■ 
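Part II of Lemma 6.63 admits a sharp numerical illustration (our addition, not part of the text): with \(\alpha =\beta = 1\) and \(a \equiv 1\), the function \(\varphi(t) = 2e^{t} - 1\) attains equality in the hypothesis, \(\varphi^{2}(t) = 1 + 2\int_{0}^{t}\varphi + 2\int_{0}^{t}\varphi^{2}\), and also in the bound (6.61), \(\varphi(t) \leq e^{t} +\int_{0}^{t}e^{t-s}\mathit{ds} = 2e^{t} - 1\).

```python
import math

def check_6_61(T=1.0, n=50_000):
    """Verify the equality case of Lemma 6.63 (II) for phi(t) = 2 e^t - 1."""
    h = T / n
    i1 = i2 = 0.0                  # left Riemann sums for int phi, int phi^2
    for k in range(n + 1):
        t = k * h
        phi = 2.0 * math.exp(t) - 1.0
        # hypothesis phi^2 <= a + 2 int phi + 2 int phi^2 (quadrature slack)
        assert phi * phi <= 1.0 + 2.0 * i1 + 2.0 * i2 + 1e-2
        bound = math.exp(t) + (math.exp(t) - 1.0)  # sqrt(a) e^{int b} + int a e^{int b}
        assert phi <= bound + 1e-12                # conclusion (6.61)
        i1 += phi * h
        i2 += phi * phi * h
    return phi                     # value at t = T, equal to the bound

print(check_6_61())
```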

Corollary 6.64.

If α,β ≥ 0 a.e., \(\tilde{a}: \left[0,T\right] \rightarrow \left[0,\infty \right[\) is a decreasing function and \(\psi: \left[0,T\right] \rightarrow \left[0,\infty \right[\) is a continuous function such that \(\forall \,t \in \left[0,T\right]\) :

$$\displaystyle{\psi ^{2}\left(t\right) \leq \tilde{ a}\left(t\right) + 2\int _{ t}^{T}\alpha \left(s\right)\psi \left(s\right)\mathit{ds} + 2\int _{ t}^{T}\beta \left(s\right)\psi ^{2}\left(s\right)\mathit{ds},\;}$$

then

$$\displaystyle{ \psi \left(t\right) \leq \sqrt{\tilde{a}\left(t\right)}e^{\int _{t}^{T}\beta \left(s\right)\mathit{ds} } +\int _{ t}^{T}\alpha \left(s\right)e^{\int _{t}^{s}\beta \left(r\right)\mathit{dr} }\mathit{ds},\;\;\forall \,t \in \left[0,T\right]. }$$
(6.62)

Proof.

Note that \(\forall \,t \in \left[0,T\right]\):

$$\displaystyle\begin{array}{rcl} & & \psi ^{2}\left(T - t\right) {}\\ & & \leq \tilde{ a}\left(T - t\right) + 2\int _{T-t}^{T}\alpha \left(s\right)\psi \left(s\right)\mathit{ds} + 2\int _{ T-t}^{T}\beta \left(s\right)\psi ^{2}\left(s\right)\mathit{ds} {}\\ & & =\tilde{ a}\left(T - t\right) + 2\int _{0}^{t}\alpha \left(T - s\right)\psi \left(T - s\right)\mathit{ds} + 2\int _{ 0}^{t}\beta \left(T - s\right)\psi ^{2}\left(T - s\right)\mathit{ds}. {}\\ \end{array}$$

Hence by (6.61)

$$\displaystyle{\psi \left(T - t\right) \leq \sqrt{\tilde{a}\left(T - t\right)}\,e^{\int _{0}^{t}\beta \left(T-s\right)\mathit{ds} } +\int _{ 0}^{t}\alpha \left(T - s\right)e^{\int _{s}^{t}\beta \left(T-r\right)\mathit{dr} }\mathit{ds},}$$

which clearly yields (6.62) after replacing T − t by t. ■ 

If \(f,g \in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[\right)\left(= \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}\right)\right)\), we say that \(df\left(s\right) \leq dg\left(s\right)\) as signed measures on \(\left[0,\infty \right[\) if

  d1.
    $$\displaystyle{\int _{t}^{s}\varphi \left(r\right)df\left(r\right) \leq \int _{ t}^{s}\varphi \left(r\right)dg\left(r\right),}$$

    for all 0 ≤ t ≤ s and every continuous function \(\varphi: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\), or equivalently

  d2.

    \(f\left(s\right) - f\left(t\right) =\int _{ t}^{s}df\left(r\right) \leq \int _{t}^{s}dg\left(r\right) = g\left(s\right) - g\left(t\right),\quad \forall \,0 \leq t \leq s\), or equivalently

  d3.

    \(h\left(s\right) = f\left(s\right) - g\left(s\right)\) is a decreasing function on \(\left[0,\infty \right[\).

Lemma 6.65.

Let \(x,N,V \in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[\right)\). If

$$\displaystyle{x\left(s\right) \leq x\left(t\right) +\int _{ t}^{s}\left[\mathit{dN}\left(r\right) + x\left(r\right)\mathit{dV }\left(r\right)\right],\quad \forall \ 0 \leq t \leq s,}$$

or equivalently

$$\displaystyle{\mathit{dx}\left(r\right) \leq \mathit{dN}\left(r\right) + x\left(r\right)\mathit{dV }\left(r\right)}$$

as signed measures on \(\left[0,\infty \right[\) , then for all 0 ≤ t ≤ s:

$$\displaystyle{ e^{-V _{s} }x\left(s\right) \leq x\left(t\right)e^{-V _{t} } +\int _{ t}^{s}e^{-V _{r} }\mathit{dN}\left(r\right). }$$
(6.63)

Proof.

We have

$$\displaystyle\begin{array}{rcl} d\left(x\left(r\right)e^{-V _{r} }\right)& =& e^{-V _{r} }\mathit{dx}\left(r\right) - e^{-V _{r} }x\left(r\right)\mathit{dV }\left(r\right) {}\\ & \leq & e^{-V _{r} }\mathit{dN}\left(r\right) {}\\ \end{array}$$

and the result follows. ■ 
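The integrating-factor mechanism of Lemma 6.65 can be checked in the simplest equality case (our addition, not part of the text): take \(N(r) = r\), \(V(r) = r\) and \(x(0) = 0\), so that \(dx = \mathit{dN} + x\,\mathit{dV}\) gives \(x(t) = e^{t} - 1\); then (6.63) with \(t = 0\) reads \(e^{-s}x(s) \leq x(0) +\int_{0}^{s}e^{-r}\mathit{dN}(r) = 1 - e^{-s}\), with equality.

```python
import math

def check_6_63(n=1000):
    """Verify (6.63) for the equality case dx = dN + x dV, N(r) = V(r) = r."""
    worst = 0.0
    for k in range(n + 1):
        t = k / n
        x = math.exp(t) - 1.0          # solution of x' = 1 + x, x(0) = 0
        lhs = math.exp(-t) * x         # e^{-V_t} x(t)
        rhs = 1.0 - math.exp(-t)       # x(0) e^{-V_0} + int_0^t e^{-r} dN(r)
        assert lhs <= rhs + 1e-12      # inequality (6.63), here an equality
        worst = max(worst, abs(lhs - rhs))
    return worst

print(check_6_63())   # maximal deviation, zero up to rounding
```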

Corollary 6.66.

Let \(\alpha,\beta \in L_{\mathit{loc}}^{1}\left(\left[0,\infty \right[\right)\) and \(y: \left[0,\infty \right[ \rightarrow \mathbb{R}\) be a continuous function. If

$$\displaystyle{y\left(t\right) \leq y\left(s\right) +\int _{ t}^{s}\left[\alpha \left(r\right) +\beta \left(r\right)y\left(r\right)\right]\mathit{dr},\quad \forall \ 0 \leq t \leq s,}$$

then

$$\displaystyle{ e^{\int _{0}^{t}\beta \left(u\right)\mathit{du} }y\left(t\right) \leq y\left(s\right)e^{\int _{0}^{s}\beta \left(u\right)\mathit{du} } +\int _{ t}^{s}\alpha \left(r\right)e^{\int _{0}^{r}\beta \left(u\right)\mathit{du} }\mathit{dr}. }$$
(6.64)

Proof.

By Lemma 6.65 and

$$\displaystyle{\left(-y\left(s\right)\right) \leq \left(-y\left(t\right)\right) +\int _{ t}^{s}\left[\alpha \left(r\right) -\beta \left(r\right)\left(-y\left(r\right)\right)\right]\mathit{dr},}$$

the result follows. ■ 

Finally we have:

Proposition 6.67.

Let \(x \in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}^{d}\right)\) and \(V \in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}\right)\) be continuous functions. Let \(R,N: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\) be continuous increasing functions. If

$$\displaystyle{\left\langle x\left(t\right),\mathit{dx}\left(t\right)\right\rangle \leq \mathit{dR}\left(t\right) + \left\vert x\left(t\right)\right\vert \mathit{dN}\left(t\right) + \left\vert x\left(t\right)\right\vert ^{2}\mathit{dV }\left(t\right)}$$

as signed measures on \(\left[0,\infty \right[\) , then for all 0 ≤ t ≤ T:

$$\displaystyle{\left\Vert e^{-V }x\right\Vert _{\left[t,T\right]} \leq 2\left[\left\vert e^{-V \left(t\right)}x\left(t\right)\right\vert + \left(\int _{ t}^{T}e^{-2V \left(s\right)}\mathit{dR}\left(s\right)\right)^{1/2} +\int _{ t}^{T}e^{-V \left(s\right)}\mathit{\mathit{dN}}\left(s\right)\right]}$$

and

$$\displaystyle{\left\Vert x\right\Vert _{\left[t,T\right]} \leq 2e^{\left\updownarrow V \right\updownarrow _{\left[t,T\right]} }\left[\left\vert x\left(t\right)\right\vert + \sqrt{R\left(T\right) - R\left(t\right)} + \left(N\left(T\right) - N\left(t\right)\right)\right].}$$

If R = 0 then for all 0 ≤ t ≤ s:

$$\displaystyle{ \left\vert x\left(s\right)\right\vert \leq e^{V \left(s\right)-V \left(t\right)}\left\vert x\left(t\right)\right\vert + \int _{ t}^{s}e^{V \left(s\right)-V \left(r\right)}\mathit{dN}\left(r\right). }$$
(6.65)

Proof.

Let \(u_{\varepsilon }\left(r\right) = \left\vert x\left(r\right)\right\vert ^{2}e^{-2V _{r}}+\varepsilon\), \(\varepsilon > 0\). We have as signed measures on \(\left[0,\infty \right[\)

$$\displaystyle\begin{array}{rcl} \mathit{du}_{\varepsilon }\left(r\right)& =& -2e^{-2V \left(r\right)}\left\vert x\left(r\right)\right\vert ^{2}\mathit{dV }\left(r\right) + 2e^{-2V \left(r\right)}\left\langle x\left(r\right),\mathit{dx}\left(r\right)\right\rangle {}\\ & \leq & 2e^{-2V \left(r\right)}\mathit{dR}\left(r\right) + 2e^{-2V \left(r\right)}\left\vert x\left(r\right)\right\vert \mathit{dN}\left(r\right) {}\\ & \leq & 2e^{-2V \left(r\right)}\mathit{dR}\left(r\right) + 2e^{-V \left(r\right)}\sqrt{u_{\varepsilon }\left(r\right)}\mathit{dN}\left(r\right). {}\\ \end{array}$$

If R = 0 then

$$\displaystyle{d\left(\sqrt{u_{\varepsilon }\left(r\right)}\right) = \frac{\mathit{du}_{\varepsilon }\left(r\right)} {2\sqrt{u_{\varepsilon }\left(r\right)}} \leq e^{-V \left(r\right)}\mathit{dN}\left(r\right),}$$

and consequently

$$\displaystyle{\sqrt{u_{\varepsilon }\left(s\right)} \leq \sqrt{u_{\varepsilon }\left(t\right)} + \int _{t}^{s}e^{-V \left(r\right)}\mathit{dN}\left(r\right),}$$

which yields (6.65) passing to the limit as \(\varepsilon \rightarrow 0\).

If R ≠ 0 then

$$\displaystyle\begin{array}{rcl} & & e^{-2V \left(s\right)}\left\vert x\left(s\right)\right\vert ^{2} {}\\ & & \leq e^{-2V \left(t\right)}\left\vert x\left(t\right)\right\vert ^{2} + 2\int _{ t}^{s}e^{-2V \left(r\right)}\mathit{dR}\left(r\right) + 2\int _{ t}^{s}e^{-2V \left(r\right)}\left\vert x\left(r\right)\right\vert \mathit{dN}\left(r\right) {}\\ & & \leq e^{-2V \left(t\right)}\left\vert x\left(t\right)\right\vert ^{2} + 2\int _{ t}^{s}e^{-2V \left(r\right)}\mathit{dR}\left(r\right) + 2\left\Vert e^{-V }x\right\Vert _{\left[t,T\right]}\int _{t}^{s}e^{-V \left(r\right)}\mathit{dN}\left(r\right) {}\\ & & \leq \left\vert e^{-V \left(t\right)}x\left(t\right)\right\vert ^{2}+2\int _{ t}^{T}e^{-2V \left(r\right)}\mathit{dR}\left(r\right) + \frac{1} {2}\left\Vert e^{-V }x\right\Vert _{\left[t,T\right]}^{2}+2\left(\int _{ t}^{T}e^{-V \left(r\right)}\mathit{dN}\left(r\right)\right)^{2}. {}\\ \end{array}$$

Hence for all t ≤ τ ≤ T

$$\displaystyle\begin{array}{rcl} e^{-2V \left(\tau \right)}\left\vert x\left(\tau \right)\right\vert ^{2}& \leq & \left\Vert e^{-V }x\right\Vert _{\left[t,T\right]}^{2} {}\\ & \leq & 2e^{-2V \left(t\right)}\left\vert x\left(t\right)\right\vert ^{2} + 4\int _{ t}^{T}e^{-2V \left(s\right)}\mathit{dR}\left(s\right) + 4\left(\int _{ t}^{T}e^{-V \left(s\right)}\mathit{dN}\left(s\right)\right)^{2} {}\\ \end{array}$$

and the results follow. ■ 

6.4.2 Stochastic Inequalities

In this subsection \(\left\{B_{t}: t \geq 0\right\}\) is a k-dimensional Brownian motion with respect to a given stochastic basis \(\left(\Omega,\mathcal{F}, \mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0}\right)\).

Proposition 6.68 (Stochastic Gronwall Inequality).

Let

  • \(a,b: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\) be measurable deterministic functions and

  • \(H,\alpha,\beta,\gamma,\delta: \Omega \times \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\) be stochastic processes, where H is a continuous stochastic process. If for all t ≥ 0

    $$\displaystyle{ \left\vert X_{t}\right\vert + \left\vert U_{t}\right\vert \leq \left\vert H_{t}\right\vert + \int _{0}^{t}\left(\alpha _{ s} + a\left(s\right)\left\vert X_{s}\right\vert \right)\mathit{ds} + \left\vert \int _{0}^{t}G_{ s}\mathit{dB}_{s}\right\vert,\; \mathbb{P}\text{ -a.s.}, }$$
    (6.66)

    where

    $$\displaystyle{\begin{array}{l@{\quad }l} \mathit{i}) \quad &\quad X,U \in S_{d}^{0},\,\quad G \in \varLambda _{d\times k}^{0}, \\ \mathit{ii})\quad &\quad \left\vert G_{t}\right\vert \leq \beta _{t} + b\left(t\right)\left\vert X_{t}\right\vert,\;\quad d\mathbb{P} \otimes \mathit{dt}\text{ -}a.e.,\end{array} }$$

    then for all q ≥ 1 there exists a positive constant \(C_{q}\) such that for all T ≥ 0:

    $$\displaystyle{ \begin{array}{l} \mathbb{E}\sup \limits _{t\in \left[0,T\right]}\left\vert X_{t}\right\vert ^{q}+\mathbb{E}\sup \limits _{t\in \left[0,T\right]}\left\vert U_{t}\right\vert ^{q} \leq \left[\mathbb{E}\left\Vert H\right\Vert _{T}^{q}+\mathbb{E}\left(\int _{0}^{T}\alpha _{ s}\mathit{ds}\right)^{q}\right. \\ \quad \quad \quad \left.+\mathbb{E}\left(\int _{0}^{T}\beta _{ s}^{2}\mathit{ds}\right)^{q/2}\right] \times \exp \left\{C_{ q}\left[1 + T^{q-1}\int _{ 0}^{T}\left(a^{q}\left(s\right)+b^{2q}\left(s\right)\right)\mathit{ds}\right]\right\}.\end{array} }$$
    (6.67)

    In particular if the right-hand side of the inequality (6.67) is finite then

    $$\displaystyle{X,U \in S_{d}^{q},\quad \quad G \in \varLambda _{ d\times k}^{q}.}$$

Proof.

Clearly we can assume that the right-hand side of the inequality (6.67) is finite. We denote by \(C_{q}\) various constants which depend only on q and may change from one line to another. For each n ≥ 1, we define the stopping time

$$\displaystyle{\tau _{n}\left(\omega \right) =\inf \left\{t \geq 0: \left\vert X_{t}\left(\omega \right)\right\vert \geq n\right\} \wedge n.}$$

Note that for all positive stochastic processes Z,

$$\displaystyle{\int _{0}^{t\wedge \tau _{n} }\left\vert X_{s}\right\vert ^{p}Z_{ s}\mathit{ds} =\int _{ 0}^{t\wedge \tau _{n} }\left\vert X_{s\wedge \tau _{n}}\right\vert ^{p}Z_{ s}\mathit{ds} \leq \int _{0}^{t}\sup _{ r\in \left[0,s\right]}\left\vert X_{r\wedge \tau _{n}}\right\vert ^{p}Z_{ s}\mathit{ds}.}$$

By the convexity of the function \(\varphi \left(r\right) = \left\vert r\right\vert ^{q}\) we have

$$\displaystyle\begin{array}{rcl} & & \left\vert X_{t\wedge \tau _{n}}\right\vert ^{q} + \left\vert U_{ t\wedge \tau _{n}}\right\vert ^{q} \leq 2^{q}\left\Vert H\right\Vert _{ t\wedge \tau _{n}}^{q} + 4^{q}\left\vert \int _{ 0}^{t\wedge \tau _{n} }\left(\alpha _{s} + a_{s}\left\vert X_{s}\right\vert \right)\mathit{ds}\right\vert ^{q} {}\\ & & \quad + 4^{q}\left\vert \int _{ 0}^{t\wedge \tau _{n} }G_{s}\mathit{\mathit{dB}}_{s}\right\vert ^{q}. {}\\ \end{array}$$

By the Burkholder–Davis–Gundy and Hölder inequalities:

$$\displaystyle\begin{array}{rcl} & & 4^{q}\mathbb{E}\sup \limits _{ s\in \left[0,t\right]}\left\vert \int _{0}^{s\wedge \tau _{n} }G_{r}\mathit{dB}_{r}\right\vert ^{q} \leq C_{ q}\mathbb{E}\left[\left(\int _{0}^{t\wedge \tau _{n} }\left\vert G_{s}\right\vert ^{2}\mathit{ds}\right)^{q/2}\right] {}\\ & & \quad \leq C_{q}\mathbb{E}\left(2\int _{0}^{t}\beta _{ s}^{2}\mathit{ds} + 2\int _{ 0}^{t}b^{2}\left(s\right)\left\vert X_{ s\wedge \tau _{n}}\right\vert ^{2}\mathit{ds}\right)^{q/2} {}\\ & & \quad \leq C_{q}\mathbb{E}\left(\int _{0}^{t}\beta _{ s}^{2}\mathit{ds}\right)^{q/2} + C_{ q}\mathbb{E}\Big[\sup \limits _{s\in \left[0,t\right]}\left\vert X_{s\wedge \tau _{n}}\right\vert ^{q/2}\left(\int _{ 0}^{t}b^{2}\left(s\right)\sup \limits _{ r\in \left[0,s\right]}\left\vert X_{r\wedge \tau _{n}}\right\vert \mathit{ds}\right)^{q/2}\Big] {}\\ & & \leq \dfrac{1} {4}\mathbb{E}\sup \limits _{s\in \left[0,t\right]}\left\vert X_{s\wedge \tau _{n}}\right\vert ^{q} + C_{ q}\mathbb{E}\left(\int _{0}^{t}\beta _{ s}^{2}\mathit{ds}\right)^{q/2} + C_{ q}\mathbb{E}\left(\int _{0}^{t}b^{2}\left(s\right)\sup \limits _{ r\in \left[0,s\right]}\left\vert X_{r\wedge \tau _{n}}\right\vert \mathit{ds}\right)^{q} {}\\ & & \quad \leq \dfrac{1} {4}\mathbb{E}\sup \limits _{s\in \left[0,t\right]}\left\vert X_{s\wedge \tau _{n}}\right\vert ^{q} + C_{ q}\mathbb{E}\left(\int _{0}^{t}\beta _{ s}^{2}\mathit{ds}\right)^{q/2} {}\\ & & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad + C_{q}t^{q-1}\int _{ 0}^{t}b^{2q}\left(s\right)\mathbb{E}\sup \limits _{ r\in \left[0,s\right]}\left\vert X_{r\wedge \tau _{n}}\right\vert ^{q}\mathit{ds}. {}\\ \end{array}$$

Also

$$\displaystyle\begin{array}{rcl} & & 4^{q}\left\vert \int _{ 0}^{t\wedge \tau _{n} }\left(\alpha _{s} + a_{s}\left\vert X_{s}\right\vert \right)\mathit{ds}\right\vert ^{q} {}\\ & & \quad \leq C_{q}\left(\int _{0}^{t}\alpha _{ s}\mathit{ds}\right)^{q} + C_{ q}\left(\int _{0}^{t}a\left(s\right)\left\vert X_{ s\wedge \tau _{n}}\right\vert \mathit{ds}\right)^{q} {}\\ & & \quad \leq C_{q}\left(\int _{0}^{t}\alpha _{ s}\mathit{ds}\right)^{q} + C_{ q}t^{q-1}\int _{ 0}^{t}a^{q}\left(s\right)\sup \limits _{ r\in \left[0,s\right]}\left\vert X_{r\wedge \tau _{n}}\right\vert ^{q}\mathit{ds}. {}\\ \end{array}$$

Hence, defining

$$\displaystyle{K_{q,t} = \mathbb{E}\left\Vert H\right\Vert _{t}^{q} + \mathbb{E}\left[\left(\int _{ 0}^{t}\alpha _{ s}\mathit{ds}\right)^{q} + \left(\int _{ 0}^{t}\beta _{ s}^{2}\mathit{ds}\right)^{q/2}\right],}$$

we have

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}\sup \limits _{s\in \left[0,t\right]}\left\vert X_{s\wedge \tau _{n}}\right\vert ^{q} + \mathbb{E}\sup \limits _{ s\in \left[0,t\right]}\left\vert U_{s\wedge \tau _{n}}\right\vert ^{q} \leq 2\mathbb{E}\sup \limits _{ s\in \left[0,t\right]}\left[\left\vert X_{s\wedge \tau _{n}}\right\vert ^{q} + \left\vert U_{ s\wedge \tau _{n}}\right\vert ^{q}\right] {}\\ & & \leq C_{q}K_{q,t} + C_{q}t^{q-1}\int _{ 0}^{t}\left(a^{q}\left(s\right) + b^{2q}\left(s\right)\right)\mathbb{E}\sup \limits _{ r\in \left[0,s\right]}\left\vert X_{r\wedge \tau _{n}}\right\vert ^{q}\mathit{ds}. {}\\ \end{array}$$

Using Gronwall’s inequality (6.57) we obtain

$$\displaystyle{ \mathbb{E}\sup \limits _{s\in \left[0,t\right]}\left\vert X_{s\wedge \tau _{n}}\right\vert ^{q} \leq C_{ q}K_{q,t}e^{C_{q}A_{q}\left(t\right)} < \infty, }$$
(6.68)

where

$$\displaystyle{A_{q}\left(t\right) = t^{q-1}\int _{ 0}^{t}\left(a^{q}\left(s\right) + b^{2q}\left(s\right)\right)\mathit{ds}.}$$

Since \(1 + xe^{ax} \leq e^{\left(a+1\right)x}\), for all x ≥ 0, it follows that

$$\displaystyle{ \begin{array}{l} \mathbb{E}\sup \limits _{s\in \left[0,t\right]}\left\vert U_{s\wedge \tau _{n}}\right\vert ^{q} \leq C_{q}\left[K_{q,t} + A_{q}\left(t\right)C_{q}K_{q,t}e^{C_{q}A_{q}\left(t\right)}\right] \\ \quad \quad \quad \quad \quad \quad \leq C_{q}K_{q,t}e^{2C_{q}A_{q}\left(t\right)} < \infty. \end{array} }$$
(6.69)

We also have

$$\displaystyle{ \mathbb{E}\left[\left(\int _{0}^{t\wedge \tau _{n} }\left\vert G_{s}\right\vert ^{2}\mathit{ds}\right)^{q/2}\right] \leq \hat{ C}_{ q,t} < \infty }$$
(6.70)

for some \(\hat{C}_{q,t}\) independent of n. Passing to the limit in (6.68)–(6.70) as \(n \rightarrow \infty \), we obtain \(X,U \in S_{d}^{q}\left[0,T\right]\), \(G \in \varLambda _{d\times k}^{q}\left(0,T\right)\) and (6.67) follows. ■ 
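The hypothesis (6.66) can be exhibited on simulated paths. The sketch below is our own addition (not part of the text): it runs an Euler scheme for the linear SDE \(X_{t} = x_{0} +\int_{0}^{t}X_{s}\mathit{ds} + B_{t}\), which satisfies (6.66) pathwise with \(H \equiv x_{0}\), \(U = 0\), \(\alpha = 0\), \(a \equiv 1\), \(G \equiv 1\) (so \(\beta = 1\), \(b = 0\)) by the triangle inequality; the constants and step size are illustrative choices.

```python
import math
import random

def simulate_and_check(x0=1.0, T=1.0, n=1000, seed=0):
    """Euler scheme for X_t = x0 + int_0^t X ds + B_t; check (6.66) pathwise."""
    rng = random.Random(seed)
    h = T / n
    X, B, integral = x0, 0.0, 0.0
    sup_X = abs(x0)
    for _ in range(n):
        integral += abs(X) * h             # int_0^t |X_s| ds (left sum)
        dB = rng.gauss(0.0, math.sqrt(h))  # Brownian increment
        X = X + X * h + dB                 # Euler step
        B += dB
        # discrete form of (6.66): |X_t| <= |x0| + int_0^t |X_s| ds + |B_t|
        assert abs(X) <= abs(x0) + integral + abs(B) + 1e-9
        sup_X = max(sup_X, abs(X))
    return sup_X

print(simulate_and_check())   # pathwise supremum of |X| on [0, T]
```

The bound (6.67) then controls the q-th moment of this supremum; the simulation only verifies the pathwise hypothesis, since the constant \(C_{q}\) is not explicit.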

Proposition 6.69.

Let \(\delta \in \left\{-1,1\right\}\). Let \(\left\{B_{t}: t \geq 0\right\}\) be a k-dimensional Brownian motion. Let \(Y,K,V: \Omega \times \mathbb{R}_{+} \rightarrow \mathbb{R}\) and \(G: \Omega \times \mathbb{R}_{+} \rightarrow \mathbb{R}^{k}\) be progressively measurable stochastic processes such that

$$\displaystyle{\begin{array}{rl@{\quad }l} i)\;\;&\quad &Y,K,V \;\text{ are continuous stochastic processes,} \\ \mathit{ii})\;\;&\quad &V _{\cdot },K_{\cdot }\in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}\right),\;V _{0} = K_{0} = 0,\; \mathbb{P}\text{ -a.s.}, \\ \mathit{iii})\;\;&\quad &\int _{t}^{s}\left\vert G_{ r}\right\vert ^{2}\mathit{dr} < \infty,\; \mathbb{P}\text{ -a.s.},\;\forall \ 0 \leq t \leq s. \end{array} }$$

If for all 0 ≤ t ≤ s,

$$\displaystyle{\delta \left(Y _{t} - Y _{s}\right) \leq \int _{t}^{s}\left(\mathit{dK}_{ r} + Y _{r}\mathit{dV }_{r}\right) +\int _{ t}^{s}\left\langle G_{ r},\mathit{dB}_{r}\right\rangle,\;\; \mathbb{P}\text{ -a.s.},}$$

then

$$\displaystyle{\delta \left(Y _{t}e^{\delta V _{t} } - Y _{s}e^{\delta V _{s} }\right) \leq \int _{t}^{s}e^{\delta V _{r} }\mathit{dK}_{r} +\int _{ t}^{s}e^{\delta V _{r} }\left\langle G_{r},\mathit{dB}_{r}\right\rangle,\;\; \mathbb{P}\text{ -a.s.}}$$

Proof.

Denoting

$$\displaystyle{ M_{s} =\int _{ 0}^{s}\left\langle G_{ r},\mathit{dB}_{r}\right\rangle,\;\tilde{Y }_{s} = -M_{s} -\delta Y _{s}, }$$
(6.71)

we obtain

$$\displaystyle{\tilde{Y }_{s} \leq \tilde{ Y }_{t} +\int _{ t}^{s}\left[\mathit{dK}_{ r} + \left(-\delta \tilde{Y }_{r} -\delta M_{r}\right)\mathit{dV }_{r}\right].}$$

Hence

$$\displaystyle{s\longmapsto L_{s}\mathop{ =}\limits^{ \mathit{def }}\tilde{Y }_{s} -\int _{0}^{s}\left[\mathit{dK}_{ r} + \left(-\delta \tilde{Y }_{r} -\delta M_{r}\right)\mathit{dV }_{r}\right]}$$

is a decreasing function and then

$$\displaystyle\begin{array}{rcl} d\left(\tilde{Y }_{s}e^{\delta V _{s} }\right)& =& \left\{dL_{s} + \left[\mathit{dK}_{s} + \left(-\delta \tilde{Y }_{s} -\delta M_{s}\right)\mathit{dV }_{s}\right]\right\}e^{\delta V _{s} } +\delta \ \tilde{Y }_{s}e^{\delta V _{s} }\mathit{dV }_{s} {}\\ & \leq & -\delta M_{s}e^{\delta V _{s} }\mathit{dV }_{s} + e^{\delta V _{s} }\mathit{dK}_{s} {}\\ \end{array}$$

and integrating from t to s

$$\displaystyle\begin{array}{rcl} \tilde{Y }_{s}e^{\delta V _{s} }& \leq & \tilde{Y }_{t}e^{\delta V _{t} } -\int _{t}^{s}\delta M_{ r}e^{\delta V _{r} }\mathit{dV }_{r} +\int _{ t}^{s}e^{\delta V _{r} }\mathit{dK}_{r} {}\\ & =& \tilde{Y }_{t}e^{\delta V _{t} } - M_{s}e^{\delta V _{s} } + M_{t}e^{\delta V _{t} } +\int _{ t}^{s}e^{\delta V _{r} }\left\langle G_{r},\mathit{dB}_{r}\right\rangle +\int _{ t}^{s}e^{\delta V _{r} }\mathit{dK}_{r}. {}\\ \end{array}$$

Now by (6.71) we obtain the conclusions. ■ 

6.4.3 Forward Stochastic Inequalities

In this subsection \(\left\{B_{t}: t \geq 0\right\}\) is a k-dimensional Brownian motion with respect to a stochastic basis \(\left(\Omega,\mathcal{F}, \mathbb{P},\{\mathcal{F}_{t}\}_{t\geq 0}\right)\).

We shall derive some estimates on the local semimartingale \(X \in S_{d}^{0}\) of the form

$$\displaystyle{ X_{t} = X_{0} + K_{t} +\int _{ 0}^{t}G_{ s}\mathit{dB}_{s},\;\,t \geq 0,\quad \mathbb{P}\text{ -a.s.}, }$$
(6.72)

where

  • \(K \in S_{d}^{0}\); \(K_{\cdot }\in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}^{d}\right),\;K_{0} = 0,\; \mathbb{P}\text{ -a.s.}\);

  • \(G \in \varLambda _{d\times k}^{0}\).

Notation 6.70.

Let p ≥ 1 and \(m_{p}\mathop{ =}\limits^{ \mathit{def }}1 \vee \left(p - 1\right)\) .

Proposition 6.71.

Let \(X \in S_{d}^{0}\) be a local semimartingale of the form (6.72). Assume there exist p ≥ 1, a \(\mathcal{P}\) -m.i.c.s.p. D and a \(\mathcal{P}\) -m.b-v.c.s.p. V, with \(D_{0} = V _{0} = 0\), such that as signed measures on \(\left[0,\infty \right[\)

$$\displaystyle{ \mathit{dD}_{t} + \left\langle X_{t},\mathit{dK}_{t}\right\rangle + \frac{1} {2}m_{p}\left\vert G_{t}\right\vert ^{2}\mathit{dt} \leq \vert X_{ t}\vert ^{2}\mathit{dV }_{ t},\;\; \mathbb{P}\text{ -a.s.}, }$$
(6.73)

then for all 0 ≤ t ≤ s:

$$\displaystyle{ \mathbb{E}^{\mathcal{F}_{t} }\left\vert e^{-V _{s} }X_{s}\right\vert ^{p} + p\ \mathbb{E}^{\mathcal{F}_{t} }\int _{t}^{s}e^{-pV _{r} }\left\vert X_{r}\right\vert ^{p-2}\mathit{dD}_{ r} \leq \left\vert e^{-V _{t} }X_{t}\right\vert ^{p},\; \mathbb{P}\text{ -a.s.} }$$
(6.74)

Moreover for all δ ≥ 0, 0 ≤ t ≤ s:

$$\displaystyle{ \begin{array}{r} \mathbb{E}^{\mathcal{F}_{t}} \frac{\left\vert e^{-V_{s}}X_{ s}\right\vert ^{p}} {\left(1+\delta \left\vert e^{-V_{s}}X_{s}\right\vert ^{2}\right)^{p/2}} + p\ \mathbb{E}^{\mathcal{F}_{t}}\int _{t}^{s} \frac{e^{-pV_{r}}\left\vert X_{ r}\right\vert ^{p-2}} {\left(1+\delta \left\vert e^{-V_{r}}X_{r}\right\vert ^{2}\right)^{\left(p+2\right)/2}} \mathit{dD}_{r} \\ \leq \frac{\left\vert e^{-V_{t}}X_{ t}\right\vert ^{p}} {\left(1+\delta \left\vert e^{-V_{t}}X_{t}\right\vert ^{2}\right)^{p/2}},\; \mathbb{P}\text{ -a.s.}\end{array} }$$
(6.75)

The proof of this Proposition is contained in the proof of the next Proposition.

Remark 6.72.

Since by (2.27)

$$\displaystyle{\mathbf{1}_{X_{t}=0}\left\vert G_{t}\right\vert ^{2}\mathit{dt} = 0,}$$

we see that the condition (6.73) yields

$$\displaystyle{\mathbf{1}_{X_{t}=0}\mathit{dD}_{t} = 0.}$$

We now formulate a more general assumption.

\(\left(\mathbf{FB}\right)\)There exist

  • p ≥ 1, λ ≥ 0,

  • three \(\mathcal{P}\)-m.i.c.s.p. D, R, N, with \(D_{0} = R_{0} = N_{0} = 0\), and

  • a \(\mathcal{P}\)-m.b-v.c.s.p. V, with \(V _{0} = 0\),

such that, as signed measures on \(\left[0,\infty \right[\):

$$\displaystyle{ \begin{array}{r} \mathit{dD}_{t} + \left\langle X_{t},\mathit{dK}_{t}\right\rangle +\Big (\dfrac{1} {2}m_{p} + 9p\lambda \Big)\left\vert G_{t}\right\vert ^{2}\mathit{dt}\quad \quad \\ \leq \mathbf{1}_{p\geq 2}\mathit{dR}_{t} + \vert X_{t}\vert \mathit{dN}_{t} + \vert X_{t}\vert ^{2}\mathit{dV }_{t}.\end{array} }$$
(6.76)

Remark 6.73.

From the condition (6.76), we deduce that

$$\displaystyle\begin{array}{rcl} \mathbf{1}_{X_{t}=0}\mathit{dD}_{t}& =& 0,\quad \text{ if }1 \leq p < 2,\;\text{ and} {}\\ \mathbf{1}_{X_{t}=0}\mathit{dD}_{t}& \leq & \mathbf{1}_{X_{t}=0}\mathit{dR}_{t} \leq \mathit{dR}_{t},\quad \text{ if }p \geq 2. {}\\ \end{array}$$

Proposition 6.74.

Let \(X \in S_{d}^{0}\) be a local semimartingale of the form (6.72). Assume that there exist p ≥ 1 and λ > 1 such that \(\left(\mathbf{FB}\right)\) is satisfied. Then there exists a positive constant \(C_{p,\lambda }\) depending only on \(\left(p,\lambda \right)\) such that for all δ ≥ 0 and 0 ≤ t ≤ s:

$$\displaystyle{ \begin{array}{rrl} \mathbb{E}^{\mathcal{F}_{t}} \frac{\left\Vert e^{-V }X\right\Vert _{\left[t,s\right]}^{p}} {\left(1+\delta \left\Vert e^{-V }X\right\Vert _{\left[t,s\right]}^{2}\right)^{p/2}} & +&\mathbb{E}^{\mathcal{F}_{t}}\int _{t}^{s} \frac{e^{-pV_{r}}\left\vert X_{ r}\right\vert ^{p-2}} {\left(1+\delta \left\vert e^{-V_{r}}X_{r}\right\vert ^{2}\right)^{\left(p+2\right)/2}} \mathit{dD}_{r} \\ &+&\mathbb{E}^{\mathcal{F}_{t}}\bigg(\int _{t}^{s} \frac{e^{-2V_{r}}} {\left(1+\delta \left\vert e^{-V_{r}}X_{r}\right\vert ^{2}\right)^{2}} \mathit{dD}_{r}\bigg)^{p/2} \\ & +&\mathbb{E}^{\mathcal{F}_{t}}\bigg(\int _{t}^{s} \tfrac{e^{-2V_{r}}} {\left(1+\delta \left\vert e^{-V_{r}}X_{r}\right\vert ^{2}\right)^{2}} \left\vert G_{r}\right\vert ^{2}\mathit{dr}\bigg)^{p/2} \\ \leq C_{p,\lambda }\bigg[ \frac{\left\vert e^{-V_{t}}X_{ t}\right\vert ^{p}} {\left(1+\delta \left\vert e^{-V_{t}}X_{t}\right\vert ^{2}\right)^{p/2}} & +&\mathbb{E}^{\mathcal{F}_{t}}\left(\int _{t}^{s}e^{-2V _{r}}\mathbf{1}_{p\geq 2}\mathit{dR}_{r}\right)^{p/2} \\ & +&\mathbb{E}^{\mathcal{F}_{t}}\left(\int _{t}^{s}e^{-V _{r}}\mathit{dN}_{r}\right)^{p}\bigg],\;\; \mathbb{P}\text{ -a.s.}\end{array} }$$
(6.77)

If we set δ = 0 in (6.77), we obtain the following:

Corollary 6.75.

Under the assumption \(\left(\mathbf{FB}\right)\) , for all 0 ≤ t ≤ s:

$$\displaystyle{ \begin{array}{rrl} \mathbb{E}^{\mathcal{F}_{t}}\left\Vert e^{-V }X\right\Vert _{\left[t,s\right]}^{p}&+&\mathbb{E}^{\mathcal{F}_{t}}\int _{t}^{s}e^{-pV _{r}}\left\vert X_{r}\right\vert ^{p-2}\ \mathit{dD}_{r} \\ &+&\mathbb{E}^{\mathcal{F}_{t}}\bigg(\int _{t}^{s}e^{-2V _{r}}\mathit{dD}_{r}\bigg)^{p/2} \\ & +&\mathbb{E}^{\mathcal{F}_{t}}\bigg(\int _{t}^{s}e^{-2V _{r}}\left\vert G_{r}\right\vert ^{2}\mathit{dr}\bigg)^{p/2} \\ \leq C_{p,\lambda }\bigg[\left\vert e^{-V _{t}}X_{t}\right\vert ^{p}&+&\mathbb{E}^{\mathcal{F}_{t}}\left(\int _{t}^{s}e^{-2V _{r}}\mathbf{1}_{p\geq 2}\mathit{dR}_{r}\right)^{p/2} \\ & +&\mathbb{E}^{\mathcal{F}_{t}}\left(\int _{t}^{s}e^{-V _{r}}\mathit{dN}_{r}\right)^{p}\bigg],\;\; \mathbb{P}\text{ -a.s.}\end{array} }$$
(6.78)

Proof (of Proposition 6.74).

In view of the monotone convergence theorem it suffices to treat the case δ > 0, which we assume from now on.

To simplify, we define

$$\displaystyle{\begin{array}{lll} J_{r}&\mathop{ =}\limits^{ \mathit{def }}& \dfrac{\left\vert e^{-V _{r}}X_{r}\right\vert } {\left(1 +\delta \left\vert e^{-V _{r}}X_{r}\right\vert ^{2}\right)^{1/2}} \\ & \leq &\dfrac{1} {\sqrt{\delta }}, \end{array} }$$

and

$$\displaystyle\begin{array}{rcl} \hat{J}_{r}^{\left(p\right)}& \mathop{=}\limits^{ \mathit{def }}& \dfrac{\left\vert e^{-V _{r}}X_{ r}\right\vert ^{p-2}\mathbf{1}_{ X_{r}\neq 0}} {\left(1 +\delta \left\vert e^{-V _{r}}X_{r}\right\vert ^{2}\right)^{\left(p+2\right)/2}} {}\\ & = & J_{r}^{p-2} \dfrac{\mathbf{1}_{X_{r}\neq 0}} {\left(1 +\delta \left\vert e^{-V _{r}}X_{r}\right\vert ^{2}\right)^{2}}. {}\\ \end{array}$$

We remark that

$$\displaystyle{\hat{J}_{r}^{\left(p\right)}e^{-V _{r} }\left\vert X_{r}\right\vert \leq J_{r}^{p-1}\mathbf{1}_{ X_{r}\neq 0}\;\;\text{ and }\hat{J}_{r}^{\left(p\right)}e^{-2V _{r} }\left\vert X_{r}\right\vert ^{2} \leq J_{ r}^{p}.}$$
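Both pointwise bounds reduce to comparing the exponents of \(1 +\delta a^{2}\), where \(a = e^{-V_{r}}\left\vert X_{r}\right\vert\): indeed \(\left(p + 2\right)/2 \geq \left(p - 1\right)/2\) and \(\left(p + 2\right)/2 \geq p/2\). As a quick numerical sanity check (an illustration only, not part of the proof; the sampled values of p, δ and the grid for a are arbitrary):

```python
import numpy as np

# Illustration only: a stands for e^{-V_r}|X_r| > 0 (so 1_{X_r != 0} = 1);
# p and delta are arbitrary sample values.
for p in (1.0, 2.0, 3.0):
    for delta in (0.5, 2.0):
        a = np.linspace(1e-6, 50.0, 4001)
        J = a / np.sqrt(1.0 + delta * a**2)
        Jhat = a ** (p - 2) / (1.0 + delta * a**2) ** ((p + 2) / 2.0)
        # the two pointwise bounds from the remark
        assert np.all(Jhat * a <= J ** (p - 1) + 1e-12), (p, delta)
        assert np.all(Jhat * a**2 <= J**p + 1e-12), (p, delta)
print("Jhat*a <= J^(p-1) and Jhat*a^2 <= J^p on every sampled grid")
```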

Step 1. General calculation.

We begin by assuming a condition which is more general than the assumptions (6.73) and (6.76), namely that there exists a γ ≥ 0 such that

$$\displaystyle{ \begin{array}{r} \mathit{dD}_{r} + \left\langle X_{r},\mathit{dK}_{r}\right\rangle +\Big (\dfrac{m_{p}} {2} +\gamma \Big)\left\vert G_{r}\right\vert ^{2}\mathit{dr} \\ \leq \mathbf{1}_{p\geq 2}\mathit{dR}_{r} + \vert X_{r}\vert \mathit{dN}_{r} + \vert X_{r}\vert ^{2}\mathit{dV }_{r}. \end{array} }$$
(6.79)

Since by Itô’s formula

$$\displaystyle{e^{-V _{t} }X_{t} = X_{0} + \int _{0}^{t}\left(e^{-V _{r} }\mathit{dK}_{r} - e^{-V _{r} }X_{r}\mathit{dV }_{r}\right) + \int _{0}^{t}e^{-V _{r} }G_{r}\mathit{dB}_{r},}$$

it follows from the inequality (2.28) in Corollary 2.28 that for all 0 ≤ t ≤ s and any stopping time θ

$$\displaystyle\begin{array}{rcl} J_{s\wedge \theta }^{p}& \leq & J_{ t\wedge \theta }^{p} + p\int _{ t}^{s}\mathbf{1}_{ r<\theta }\ \hat{J}_{r}^{\left(p\right)}e^{-2V _{r} }\left\langle X_{r},G_{r}\mathit{dB}_{r}\right\rangle {}\\ & & +p\int _{t}^{s}\mathbf{1}_{ r<\theta }\ \hat{J}_{r}^{\left(p\right)}e^{-2V _{r} }\left[\left\langle X_{r},\mathit{dK}_{r} - X_{r}\mathit{dV }_{r}\right\rangle + \dfrac{1} {2}m_{p}\left\vert G_{r}\right\vert ^{2}\mathit{dr}\right],\ a.s. {}\\ \end{array}$$

But

$$\displaystyle{J_{s}^{p}\mathbf{1}_{ s<\theta } \leq J_{s\wedge \theta }^{p};}$$

hence we deduce that

$$\displaystyle{\begin{array}{l} J_{s}^{p}\mathbf{1}_{s<\theta } + p\int _{t}^{s}\mathbf{1}_{ r<\theta }\ \hat{J}_{r}^{\left(p\right)}e^{-2V _{r}}\mathit{dD}_{ r} + p\gamma \int _{t}^{s}\mathbf{1}_{ r<\theta }\ \hat{J}_{r}^{\left(p\right)}e^{-2V _{r}}\left\vert G_{ r}\right\vert ^{2}\mathit{dr} \\ \quad \leq J_{t\wedge \theta }^{p} + p\int _{t}^{s}\mathbf{1}_{ r<\theta }\ \hat{J}_{r}^{\left(p\right)}e^{-2V _{r}}\left\langle X_{ r},G_{r}\mathit{dB}_{r}\right\rangle \\ \quad \quad + p\int _{t}^{s}\mathbf{1}_{ r<\theta }\ \hat{J}_{r}^{\left(p\right)}e^{-2V _{r}}\left[\mathit{dD}_{ r} + \left\langle X_{r},\mathit{dK}_{r} - X_{r}\mathit{dV }_{r}\right\rangle + \left(\dfrac{1} {2}m_{p}+\gamma \right)\left\vert G_{r}\right\vert ^{2}\mathit{dr}\right], \end{array} }$$

and using the assumption (6.79) it follows that for any stopping time θ and for all 0 ≤ t ≤ s, \(\mathbb{P}\text{ -}a.s.\):

$$\displaystyle{ \begin{array}{l} J_{s}^{p}\mathbf{1}_{s<\theta } + p\int _{t}^{s}\mathbf{1}_{ r<\theta }\ \hat{J}_{r}^{\left(p\right)}e^{-2V _{r}}\mathit{dD}_{ r} + p\gamma \int _{t}^{s}\mathbf{1}_{ r<\theta }\ \hat{J}_{r}^{\left(p\right)}e^{-2V _{r}}\left\vert G_{ r}\right\vert ^{2}\mathit{dr} \\ \quad \leq J_{t\wedge \theta }^{p} + p\int _{t}^{s}\mathbf{1}_{ r<\theta }\ \hat{J}_{r}^{\left(p\right)}e^{-2V _{r}}\left\langle X_{ r},G_{r}\mathit{dB}_{r}\right\rangle \\ \quad \quad \quad + p\int _{t}^{s}\mathbf{1}_{ r<\theta }\ J_{r}^{p-2}\mathbf{1}_{ X_{r}\neq 0}e^{-2V _{r}}\mathbf{1}_{ p\geq 2}\mathit{dR}_{r} + p\int _{t}^{s}\mathbf{1}_{ r<\theta }\ J_{r}^{p-1}\mathbf{1}_{ X_{r}\neq 0}e^{-V _{r}}\mathit{dN}_{ r}.\end{array} }$$
(6.80)

Since for all T > 0:

$$\displaystyle\begin{array}{rcl} \int _{0}^{T}\left\vert \hat{J}_{ r}^{\left(p\right)}e^{-2V _{r} }X_{r}^{{\ast}}G_{ r}\right\vert ^{2}\mathit{dr}& \leq & \sup _{ r\in \left[0,T\right]}\left[e^{-pV _{r} }\left\vert X_{r}\right\vert ^{p-1}\right]^{2}\int _{ 0}^{T}\left\vert G_{ r}\right\vert ^{2}\mathit{dr} {}\\ & <& \infty,\quad \mathbb{P}\text{ -a.s.}, {}\\ \end{array}$$

it follows that for all 0 ≤ t ≤ s:

$$\displaystyle{\int _{t}^{s}\ \hat{J}_{ r}^{\left(p\right)}e^{-2V _{r} }\mathit{dD}_{r} +\gamma \int _{t}^{s}\hat{J}_{ r}^{\left(p\right)}e^{-2V _{r} }\left\vert G_{r}\right\vert ^{2}\mathit{dr} < \infty,\;\;a.s.}$$

For each \(n \in \mathbb{N}^{{\ast}}\) we define the stopping time

$$\displaystyle{ \begin{array}{c} \theta _{n} =\inf \bigg\{ t \geq 0: \int _{0}^{t}J_{ r}^{p-2}\mathbf{1}_{ X_{r}\neq 0}e^{-2V _{r}}\mathbf{1}_{ p\geq 2}\mathit{dR}_{r} + \int _{0}^{t}J_{ r}^{p-1}\mathbf{1}_{ X_{r}\neq 0}e^{-V _{r}}\mathit{dN}_{ r} \\ + \int _{0}^{t}\left\vert \hat{J}_{ r}^{\left(p\right)}e^{-2V _{r}}X_{ r}^{{\ast}}G_{ r}\right\vert ^{2}\mathit{dr} \geq n\bigg\}. \end{array} }$$
(6.81)

Note that for \(\theta =\theta _{n}\)

$$\displaystyle{M_{t}^{n} = p\int _{ 0}^{t}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(p\right)}\left\langle e^{-V _{r} }X_{r},e^{-V _{r} }G_{r}\mathit{dB}_{r}\right\rangle }$$

is a martingale and consequently, for all 0 ≤ t ≤ s:

$$\displaystyle{\mathbb{E}^{\mathcal{F}_{t} }\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\ \hat{J}_{r}^{\left(p\right)}e^{-2V _{r} }\mathit{dD}_{r} +\gamma \mathbb{E}^{\mathcal{F}_{t} }\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\ \hat{J}_{r}^{\left(p\right)}e^{-2V _{r} }\left\vert G_{r}\right\vert ^{2}\mathit{dr} < \infty,\;\;a.s.}$$

Step 2. Proof of the inequality (6.75).

In view of the first step, the assumption (6.73) yields (6.80) with γ = 0 and R = N = 0, from which we deduce

$$\displaystyle{ \mathbb{E}^{\mathcal{F}_{t} }J_{s}^{p}\mathbf{1}_{ s<\theta _{n}} + p\mathbb{E}^{\mathcal{F}_{t} }\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(p\right)}e^{-2V _{r} }\mathit{dD}_{r} \leq J_{t\wedge \theta _{n}}^{p},\;\text{ a.s.,} }$$
(6.82)

and passing to the limit as \(n \rightarrow \infty \) (the first two terms converge monotonically and the third one converges a.s.), the estimate (6.75) follows in view of Remark 6.73, since R = 0.

Step 3. Proof of the inequality (6.77).

\(\left(\mathbf{A}\right)\) Let γ > 0. From (6.80) we have

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}^{\mathcal{F}_{t} }\sup _{r\in \left[t,s\right]}\left(J_{r}^{p}\mathbf{1}_{ r<\theta _{n}}\right) + p\mathbb{E}^{\mathcal{F}_{t} }\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(p\right)}e^{-2V _{r} }\mathit{dD}_{r} {}\\ & & \qquad + p\gamma \mathbb{E}^{\mathcal{F}_{t} }\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(p\right)}\left\vert e^{-V _{r} }G_{r}\right\vert ^{2}\mathit{dr} {}\\ & & \leq 2J_{t\wedge \theta _{n}}^{p} + 2p\mathbb{E}^{\mathcal{F}_{t} }\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}J_{r}^{p-2}\mathbf{1}_{ X_{r}\neq 0}e^{-2V _{r} }\mathbf{1}_{p\geq 2}\mathit{dR}_{r} {}\\ & & \qquad + 2p\mathbb{E}^{\mathcal{F}_{t} }\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}J_{r}^{p-1}\mathbf{1}_{ X_{r}\neq 0}e^{-V _{r} }\mathit{dN}_{r} {}\\ & & \qquad + 2p\mathbb{E}^{\mathcal{F}_{t} }\sup _{u\in \left[t,s\right]}\left\vert \int _{t}^{u}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(p\right)}e^{-2V _{r} }X_{r}^{{\ast}}G_{ r}\mathit{dB}_{r}\right\vert. {}\\ \end{array}$$

By the Burkholder–Davis–Gundy inequality

$$\displaystyle\begin{array}{rcl} & & 2p\ \mathbb{E}^{\mathcal{F}_{t} }\sup \limits _{u\in \left[t,s\right]}\left\vert \int _{t}^{u}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(p\right)}\left\langle e^{-V _{r} }X_{r},e^{-V _{r} }G_{r}\mathit{dB}_{r}\right\rangle \right\vert {}\\ & & \leq 6p\ \mathbb{E}^{\mathcal{F}_{t} }\sqrt{\int _{t }^{s } \left\vert \mathbf{1} _{ r<\theta _{n}}\hat{J}_{r}^{\left(p\right)}e^{-2V _{r}}X_{r}^{{\ast}}G_{r}\right\vert ^{2}\mathit{dr}} {}\\ & & \leq 6p\ \mathbb{E}^{\mathcal{F}_{t} }\left[\sqrt{\sup \limits _{r\in \left[t,s \right] } \hat{J}_{r }^{\left(p\right) }e^{-2V _{r } } \left\vert X_{r } \right\vert ^{2 } \mathbf{1} _{r<\theta _{ n}}}\sqrt{\int _{t }^{s } \mathbf{1} _{ r<\theta _{n}}\hat{J}_{r}^{\left(p\right)}\left\vert e^{-V _{r}}G_{r}\right\vert ^{2}\mathit{dr}}\right] {}\\ & & \leq \dfrac{1} {\lambda } \ \mathbb{E}^{\mathcal{F}_{t} }\sup \limits _{r\in \left[t,s\right]}\left(J_{r}^{p}\mathbf{1}_{ r<\theta _{n}}\right) + 9p^{2}\lambda \ \mathbb{E}^{\mathcal{F}_{t} }\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(p\right)}\left\vert e^{-V _{r} }G_{r}\right\vert ^{2}\mathit{dr}, {}\\ \end{array}$$

for all λ > 0. Hence

$$\displaystyle{\begin{array}{l} \left(1 -\dfrac{1} {\lambda } \right)\mathbb{E}^{\mathcal{F}_{t}}\sup \limits _{r\in \left[t,s\right]}\left(J_{r}^{p}\mathbf{1}_{r<\theta _{ n}}\right) + p\mathbb{E}^{\mathcal{F}_{t}}\int _{ t}^{s}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(p\right)}e^{-2V _{r}}\mathit{dD}_{ r} \\ \quad \quad \quad \quad + p\left(\gamma -9p\lambda \right)\mathbb{E}^{\mathcal{F}_{t}}\int _{t}^{s}\mathbf{1}_{r<\theta _{ n}}\hat{J}_{r}^{\left(p\right)}\left\vert e^{-V _{r}}G_{ r}\right\vert ^{2}\mathit{dr} \\ \leq 2J_{t\wedge \theta _{n}}^{p} + 2p\ \mathbb{E}^{\mathcal{F}_{t}}\int _{t}^{s}\mathbf{1}_{r<\theta _{ n}}J_{r}^{p-2}\mathbf{1}_{ X_{r}\neq 0}e^{-2V _{r}}\mathbf{1}_{ p\geq 2}\mathit{dR}_{r} \\ \quad \quad \quad \quad + 2p\ \mathbb{E}^{\mathcal{F}_{t}}\int _{t}^{s}\mathbf{1}_{r<\theta _{ n}}J_{r}^{p-1}\mathbf{1}_{ X_{r}\neq 0}e^{-V _{r}}\mathit{dN}_{ r}. \end{array} }$$

Let γ = 9p λ, λ > 1. By Hölder’s inequality

$$\displaystyle\begin{array}{rcl} & & \quad 2p\mathbb{E}^{\mathcal{F}_{t} }\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}J_{r}^{p-2}\mathbf{1}_{ X_{r}\neq 0}e^{-2V _{r} }\mathbf{1}_{p\geq 2}\mathit{dR}_{r} + 2p\mathbb{E}^{\mathcal{F}_{t} }\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}J_{r}^{p-1}\mathbf{1}_{ X_{r}\neq 0}e^{-V _{r} }\mathit{dN}_{r} {}\\ & & \quad \leq 2p\mathbb{E}^{\mathcal{F}_{t} }\left[\sup \limits _{r\in \left[t,s\right]}\left(J_{r}^{p-2}\mathbf{1}_{ X_{r}\neq 0}\mathbf{1}_{p\geq 2}\mathbf{1}_{r<\theta _{n}}\right)\int _{t}^{s}e^{-2V _{r} }\mathbf{1}_{p\geq 2}\mathit{dR}_{r}\right] {}\\ & & \quad \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + 2p\mathbb{E}^{\mathcal{F}_{t} }\left[\sup \limits _{r\in \left[t,s\right]}\left(J_{r}^{p-1}\mathbf{1}_{ X_{r}\neq 0}\mathbf{1}_{r<\theta _{n}}\right)\int _{t}^{s}e^{-V _{r} }\mathit{dN}_{r}\right] {}\\ & & \quad \leq \dfrac{1} {2}\left(1 -\dfrac{1} {\lambda } \right)\mathbb{E}^{\mathcal{F}_{t} }\sup \limits _{r\in \left[t,s\right]}\left(J_{r}^{p}\mathbf{1}_{ r<\theta _{n}}\right) + C_{p,\lambda }\mathbb{E}^{\mathcal{F}_{t} }\ \left(\int _{t}^{s}e^{-2V _{r} }\mathbf{1}_{p\geq 2}\mathit{dR}_{r}\right)^{p/2} {}\\ & & \quad \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + C_{p,\lambda }\mathbb{E}^{\mathcal{F}_{t} }\ \left(\int _{t}^{s}e^{-V _{r} }\mathit{dN}_{r}\right)^{p}. {}\\ \end{array}$$

We deduce from the above that

$$\displaystyle\begin{array}{rcl} \begin{array}{l} \mathbb{E}^{\mathcal{F}_{t}}\sup \limits _{r\in \left[t,s\right]}\left(J_{r}^{p}\mathbf{1}_{r<\theta _{ n}}\right) + \mathbb{E}^{\mathcal{F}_{t}}\int _{ t}^{s}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(p\right)}e^{-2V _{r}}\mathit{dD}_{ r} \\ \; \leq C_{p,\lambda }\ \left[J_{t\wedge \theta _{n}}^{p} + \mathbb{E}^{\mathcal{F}_{t}}\ \left(\int _{t}^{s}e^{-2V _{r}}\mathbf{1}_{p\geq 2}\mathit{dR}_{r}\right)^{p/2} + \mathbb{E}^{\mathcal{F}_{t}}\ \left(\int _{t}^{s}e^{-V _{r}}\mathit{dN}_{r}\right)^{p}\right].\end{array} & &{}\end{array}$$
(6.83)

The argument used to take the limit in (6.82) yields, as \(n \rightarrow \infty \):

$$\displaystyle{ \begin{array}{l} \mathbb{E}^{\mathcal{F}_{t}}\sup \limits _{r\in \left[t,s\right]}J_{r}^{p} + \mathbb{E}^{\mathcal{F}_{t}}\int _{t}^{s}\hat{J}_{r}^{\left(p\right)}e^{-2V _{r}}\mathit{dD}_{r} \leq C_{p,\lambda }\ \Big[J_{t}^{p} \\ \; + \mathbb{E}^{\mathcal{F}_{t}}\ \left(\int _{t}^{s}e^{-2V _{r}}\mathbf{1}_{p\geq 2}\mathit{dR}_{r}\right)^{p/2} + \mathbb{E}^{\mathcal{F}_{t}}\ \left(\int _{t}^{s}e^{-V _{r}}\mathit{dN}_{r}\right)^{p}\Big].\end{array} }$$
(6.84)

\(\left(\mathbf{B}\right)\) From (6.80) with p = 2, γ = 1 and \(\theta =\theta _{n}\) we have

$$\displaystyle{\begin{array}{l} J_{s\wedge \theta _{n}}^{2} + 2\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\ \hat{J}_{r}^{\left(2\right)}e^{-2V _{r}}\mathit{dD}_{ r} + 2\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\ \hat{J}_{r}^{\left(2\right)}\left\vert e^{-V _{r}}G_{ r}\right\vert ^{2}\mathit{dr} \\ \quad \leq J_{t\wedge \theta _{n}}^{2} + 2\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\ \mathbf{1}_{X_{r}\neq 0}e^{-2V _{r}}\mathit{dR}_{ r} + 2\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\ J_{r}\mathbf{1}_{X_{r}\neq 0}e^{-V _{r}}\mathit{dN}_{ r} \\ \quad \quad + 2\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\ \hat{J}_{r}^{\left(2\right)}\left\langle e^{-V _{r}}X_{ r},e^{-V _{r}}G_{ r}\mathit{dB}_{r}\right\rangle, \end{array} }$$

which yields

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}^{\mathcal{F}_{t} }\left(\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(2\right)}e^{-2V _{r} }\mathit{dD}_{r}\right)^{p/2} + \mathbb{E}^{\mathcal{F}_{t} }\left(\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(2\right)}\left\vert e^{-V _{r} }G_{r}\right\vert ^{2}\mathit{dr}\right)^{p/2} {}\\ & & \quad \quad \leq C_{p}\ J_{t}^{p} + C_{ p}\ \mathbb{E}^{\mathcal{F}_{t} }\left(\int _{t}^{s}e^{-2V _{r} }\mathbf{1}_{p\geq 2}\mathit{dR}_{r}\right)^{p/2} {}\\ & & \quad \quad \quad \quad \quad \quad + C_{p}\ \mathbb{E}^{\mathcal{F}_{t} }\sup \limits _{r\in \left[t,s\right]}\left(J_{r}^{p/2}\mathbf{1}_{ r<\theta _{n}}\right)\left(\int _{t}^{s}e^{-V _{r} }\mathit{dN}_{r}\right)^{p/2} {}\\ & & \quad \quad \quad \quad \quad \quad + C_{p}\ \mathbb{E}^{\mathcal{F}_{t} }\sup \limits _{u\in \left[t,s\right]}\left\vert \int _{t}^{u}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(2\right)}\left\langle e^{-V _{r} }X_{r},e^{-V _{r} }G_{r}\mathit{dB}_{r}\right\rangle \right\vert ^{p/2}. {}\\ \end{array}$$

By the Burkholder–Davis–Gundy inequality (2.8)

$$\displaystyle\begin{array}{rcl} & & C_{p}\ \mathbb{E}^{\mathcal{F}_{t} }\sup _{u\in \left[t,s\right]}\left\vert \int _{t}^{u}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(2\right)}\left\langle e^{-V _{r} }X_{r},e^{-V _{r} }G_{r}\mathit{dB}_{r}\right\rangle \right\vert ^{p/2} {}\\ & & \leq C_{p}^{{\prime}}\ \mathbb{E}^{\mathcal{F}_{t} }\left(\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\left\vert \hat{J}_{r}^{\left(2\right)}e^{-2V _{r} }\left\vert X_{r}^{{\ast}}G_{ r}\right\vert \right\vert ^{2}\mathit{dr}\right)^{p/4} {}\\ & & \leq C_{p}^{{\prime}}\mathbb{E}^{\mathcal{F}_{t} }\sup _{r\in \left[t,s\right]}\left(J_{r}^{p/2}\mathbf{1}_{ r<\theta _{n}}\right)\left(\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(2\right)}\left\vert e^{-V _{r} }G_{r}\right\vert ^{2}\mathit{dr}\right)^{p/4} {}\\ & & \leq C_{p}^{{\prime\prime}}\ \mathbb{E}^{\mathcal{F}_{t} }\sup _{r\in \left[t,s\right]}\left(J_{r}^{p}\mathbf{1}_{ r<\theta _{n}}\right) + \frac{1} {2}\mathbb{E}^{\mathcal{F}_{t} }\left(\int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\hat{J}_{r}^{\left(2\right)}\left\vert e^{-V _{r} }G_{r}\right\vert ^{2}\mathit{dr}\right)^{p/2}. {}\\ \end{array}$$

Hence

$$\displaystyle\begin{array}{rcl} \begin{array}{l} \mathbb{E}^{\mathcal{F}_{t}}\left(\int _{t}^{s}\mathbf{1}_{r<\theta _{ n}}\hat{J}_{r}^{\left(2\right)}e^{-2V _{r}}\mathit{dD}_{ r}\right)^{p/2} + \dfrac{1} {2}\mathbb{E}^{\mathcal{F}_{t}}\left(\int _{t}^{s}\mathbf{1}_{r<\theta _{ n}}\hat{J}_{r}^{\left(2\right)}\left\vert e^{-V _{r}}G_{ r}\right\vert ^{2}\mathit{dr}\right)^{p/2} \\ \leq C_{p}\ \left[\mathbb{E}^{\mathcal{F}_{t}}\sup \limits _{r\in \left[t,s\right]}J_{r}^{p} + \mathbb{E}^{\mathcal{F}_{t}}\left(\int _{t}^{s}e^{-2V _{r}}\mathbf{1}_{p\geq 2}\mathit{dR}_{r}\right)^{p/2} + \mathbb{E}^{\mathcal{F}_{t}}\left(\int _{t}^{s}e^{-V _{r}}\mathit{dN}_{r}\right)^{p}\right].\end{array} & &{}\end{array}$$
(6.85)

We take the limit as \(n \rightarrow \infty \) in the last inequality; the estimate (6.77) then follows from (6.84), (6.85), Remark 6.73 and the identity

$$\displaystyle{\sup _{r\in \left[t,s\right]}J_{r}^{p} = \frac{\left\Vert e^{-V }X\right\Vert _{\left[t,s\right]}^{p}} {\left(1 +\delta \left\Vert e^{-V }X\right\Vert _{\left[t,s\right]}^{2}\right)^{p/2}}.}$$

This last identity holds because the following function is increasing:

$$\displaystyle{r\mapsto \frac{r^{p}} {\left(1 +\delta r^{2}\right)^{p/2}}: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[.}$$

The proof is complete. ■ 
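The monotonicity fact used in the last step is immediate, since the derivative of \(r\mapsto r^{p}/\left(1 +\delta r^{2}\right)^{p/2}\) equals \(pr^{p-1}/\left(1 +\delta r^{2}\right)^{\left(p+2\right)/2} \geq 0\). A short numerical check (illustration only; the sampled values of p, δ and the grid are arbitrary):

```python
import numpy as np

# Illustration only: p >= 1 and delta >= 0 are arbitrary sample values.
for p in (1.0, 2.0, 3.5):
    for delta in (0.0, 0.5, 2.0):
        r = np.linspace(0.0, 10.0, 2001)
        f = r**p / (1.0 + delta * r**2) ** (p / 2.0)
        # increments along an increasing grid must be nonnegative
        assert np.all(np.diff(f) >= 0.0), (p, delta)
print("r -> r^p/(1+delta*r^2)^(p/2) is nondecreasing on every sampled grid")
```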

We now give a supplementary result for the case where R, N and V are deterministic functions.

Corollary 6.76.

Let \(X \in S_{d}^{0}\) be a local semimartingale of the form

$$\displaystyle{X_{t} = X_{0} + K_{t} +\int _{ 0}^{t}G_{ s}\mathit{dB}_{s},\;\,t \geq 0,\quad \mathbb{P}\text{ -a.s.},}$$

where

  • \(\diamond \) \(K \in S_{d}^{0}\) ; \(K_{\cdot }\in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}^{d}\right),\;K_{0} = 0,\; \mathbb{P}\text{ -a.s.}\) ;

  • \(\diamond \) \(G \in \varLambda _{d\times k}^{0}\) .

Assume that there exist

  • p ≥ 1, \(m_{p}\mathop{ =}\limits^{ \mathit{def }}1 \vee \left(p - 1\right);\)

  • two continuous increasing deterministic functions \(R,N: \left[0,\infty \right[ \rightarrow \left[0,\infty \right[\), \(R\left(0\right) = N\left(0\right) = 0\) , and

  • a continuous deterministic function with bounded variation \(V: \left[0,\infty \right[ \rightarrow \mathbb{R}\), \(V \left(0\right) = 0\),

such that as signed measures on \(\left[0,\infty \right[\) :

$$\displaystyle{ \left\langle X_{t},\mathit{dK}_{t}\right\rangle + \dfrac{1} {2}m_{p}\left\vert G_{t}\right\vert ^{2}\mathit{dt} \leq \mathbf{1}_{ p\geq 2}\mathit{dR}\left(t\right) + \vert X_{t}\vert \mathit{dN}\left(t\right) + \vert X_{t}\vert ^{2}\mathit{dV }\left(t\right). }$$
(6.86)

Define

$$\displaystyle\begin{array}{rcl} Q\left(t\right)& =& 2R\left(t\right)\mathbf{1}_{p\geq 2} + N\left(t\right), {}\\ P\left(t\right)& =& \left(p - 2\right)R\left(t\right)\mathbf{1}_{p\geq 2} + \left(p - 1\right)N\left(t\right) + pV \left(t\right)\quad \text{ and} {}\\ M\left(t\right)& =& \int _{0}^{t}e^{-P\left(r\right)}\mathit{dQ}\left(r\right). {}\\ \end{array}$$

Then for all δ ≥ 0 and 0 ≤ t ≤ s:

$$\displaystyle{ \mathbb{E} \dfrac{\left\vert X_{s}\right\vert ^{p}e^{-P\left(s\right)}} {\left(1 +\delta \left\vert X_{s}\right\vert ^{2}\right)^{p/2}} \leq \mathbb{E} \dfrac{\left\vert X_{t}\right\vert ^{p}e^{-P\left(t\right)}} {\left(1 +\delta \left\vert X_{t}\right\vert ^{2}\right)^{p/2}} + M\left(s\right) - M\left(t\right)\text{.} }$$
(6.87)

In particular for \(\delta \searrow 0\) and 0 = t ≤ s:

$$\displaystyle{ \begin{array}{l@{\quad }l} \left(a\right)\quad &\quad e^{-P\left(s\right)}\mathbb{E}\ \left\vert X_{s}\right\vert ^{p} \leq \mathbb{E}\ \left\vert X_{0}\right\vert ^{p} + M\left(s\right), \\ \left(b\right)\quad &\quad \int _{0}^{\infty }e^{-P\left(s\right)-\alpha M\left(s\right)-\lambda s}\left(\mathbb{E}\ \left\vert X_{ s}\right\vert ^{p}\right)\mathit{ds} \leq \dfrac{1} {\lambda } \left(\mathbb{E}\ \left\vert X_{0}\right\vert ^{p} + \dfrac{1} {\alpha } \right)\end{array} }$$
(6.88)

for all α,λ > 0.

Proof.

We follow the first steps of the proof of Proposition 6.74, up to (6.80), but now with

$$\displaystyle{J_{r} = \dfrac{\left\vert X_{r}\right\vert } {\left(1 +\delta \left\vert X_{r}\right\vert ^{2}\right)^{1/2}}\quad \text{ and}\quad \hat{J}_{r}^{\left(p\right)} = \dfrac{\left\vert X_{r}\right\vert ^{p-2}\mathbf{1}_{ X_{r}\neq 0}} {\left(1 +\delta \left\vert X_{r}\right\vert ^{2}\right)^{\left(p+2\right)/2}}}$$

and \(\theta =\theta _{n}\) is defined similarly.

From the inequality (2.28) in Corollary 2.28, we have for all 0 ≤ t ≤ s

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}\ \left(J_{s}^{p}\mathbf{1}_{ s<\theta _{n}}\right) {}\\ & & \leq \mathbb{E}\ J_{s\wedge \theta _{n}}^{p} {}\\ & & \leq \mathbb{E}\ J_{t\wedge \theta _{n}}^{p} + p\mathbb{E}\ \int _{ t}^{s}\mathbf{1}_{ r<\theta _{n}}\ \hat{J}_{r}^{\left(p\right)}\left[\left\langle X_{ r},\mathit{dK}_{r}\right\rangle + \dfrac{1} {2}m_{p}\left\vert G_{r}\right\vert ^{2}\mathit{dr}\right] {}\\ & & \leq \mathbb{E}\ J_{t\wedge \theta _{n}}^{p} {}\\ & & \quad + p\mathbb{E}\ \int _{t}^{s}\mathbf{1}_{ r<\theta _{n}}\ \left[J_{r}^{p-2}\mathbf{1}_{ X_{r}\neq 0}\mathbf{1}_{p\geq 2}\mathit{dR}\left(r\right) + J_{r}^{p-1}\mathbf{1}_{ X_{r}\neq 0}\mathit{dN}\left(r\right) + J_{r}^{p}\mathit{dV }\left(r\right)\right]. {}\\ \end{array}$$

Taking into account that

$$\displaystyle{J_{r}^{p-2}\mathbf{1}_{ X_{r}\neq 0} \leq \frac{2} {p} + \frac{p - 2} {p} J_{r}^{p}\quad \text{ and}\quad J_{ r}^{p-1}\mathbf{1}_{ X_{r}\neq 0} \leq \frac{1} {p} + \frac{p - 1} {p} J_{r}^{p},}$$

and passing to the limit as \(n \rightarrow \infty \) we have for all 0 ≤ t ≤ s:

$$\displaystyle{\begin{array}{l} \mathbb{E}\ J_{s}^{p} \leq \mathbb{E}\ J_{t}^{p} + 2\int _{t}^{s}\mathbf{1}_{ p\geq 2}\mathit{dR}\left(r\right) + \int _{t}^{s}\mathit{dN}\left(r\right) \\ \quad \quad + \int _{t}^{s}\left[\left(p - 2\right)\mathbf{1}_{ p\geq 2}\mathit{dR}\left(r\right) + \left(p - 1\right)\mathit{dN}\left(r\right) + p\,\mathit{dV }\left(r\right)\right]\mathbb{E}\left(J_{r}^{p}\right), \end{array} }$$

that is

$$\displaystyle{\mathbb{E}\ J_{s}^{p} \leq \mathbb{E}\ J_{ t}^{p} + \int _{ t}^{s}\mathit{dQ}\left(r\right) + \int _{ t}^{s}\mathbb{E}\left(J_{ r}^{p}\right)dP\left(r\right).}$$

By Gronwall’s inequality (Proposition 6.69), we have for all 0 ≤ t ≤ s:

$$\displaystyle{e^{-P\left(s\right)}\mathbb{E}\ J_{ s}^{p} \leq e^{-P\left(t\right)}\mathbb{E}\ J_{ t}^{p} +\int _{ t}^{s}e^{-P\left(r\right)}\mathit{dQ}\left(r\right),}$$

and the inequality (6.87) follows. The inequality (6.88-b) clearly follows from (6.88-a) using the elementary inequality

$$\displaystyle{\mathit{ye}^{-x-\alpha y-\lambda s} \leq \frac{1} {\alpha } e^{-\lambda s},\;\;\;\text{ for all }x,y,s,\lambda \geq 0\text{ and }\alpha > 0.}$$

 ■ 
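The elementary inequality invoked above follows from \(\mathit{ye}^{-\alpha y} \leq \sup _{u\geq 0}\mathit{ue}^{-\alpha u} = 1/\left(\alpha e\right) \leq 1/\alpha \). A small numerical check (illustration only; the sampled values are arbitrary):

```python
import itertools
import math

# Illustration only: sampled x, y, s >= 0, lam >= 0 and alpha > 0 are arbitrary.
xs = [0.0, 0.3, 2.0]
ys = [0.0, 0.5, 1.0, 7.0]
ss = [0.0, 1.0]
lams = [0.1, 1.0]
alphas = [0.25, 1.0, 3.0]

for x, y, s, lam, alpha in itertools.product(xs, ys, ss, lams, alphas):
    lhs = y * math.exp(-x - alpha * y - lam * s)
    rhs = math.exp(-lam * s) / alpha
    assert lhs <= rhs, (x, y, s, lam, alpha)
print("y*exp(-x - alpha*y - lam*s) <= exp(-lam*s)/alpha on the sampled grid")
```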

Let X, \(\hat{X} \in S_{d}^{0}\) be two semimartingales given by

$$\displaystyle{ \begin{array}{c} X_{t} = X_{0} + K_{t} +\int _{ 0}^{t}G_{s}\mathit{dB}_{s},\;t \geq 0, \\ \hat{X}_{t} =\hat{ X}_{0} +\hat{ K}_{t} +\int _{ 0}^{t}\hat{G}_{s}\mathit{dB}_{s},\;t \geq 0, \end{array} }$$
(6.89)

where

  • \(\diamond \) \(K,\hat{K} \in S_{d}^{0}\);

  • \(\diamond \) \(K_{\cdot }\left(\omega \right)\), \(\hat{K}_{\cdot }\left(\omega \right) \in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}^{d}\right),\;K_{0}\left(\omega \right) =\hat{ K}_{0}\left(\omega \right) = 0,\; \mathbb{P}\text{ -a.s.}\;\omega \in \Omega \);

  • \(\diamond \) \(G,\hat{G} \in \varLambda _{d\times k}^{0}\).

    \(\left(FB^{{\prime}}\right)\): Assume there exist p ≥ 1, λ ≥ 0 and a \(\mathcal{P}\)-m.b-v.c.s.p. V, \(V_{0} = 0\), such that as measures on \(\left[0,\infty \right[\):

    $$\displaystyle{ \left\langle X_{t} -\hat{ X}_{t},\mathit{dK}_{t} - d\hat{K}_{t}\right\rangle + \left(\frac{1} {2}m_{p} + 9p\lambda \right)\left\vert G_{t} -\hat{ G}_{t}\right\vert ^{2}\mathit{dt} \leq \vert X_{ t} -\hat{ X}_{t}\vert ^{2}\mathit{dV }_{ t}. }$$
    (6.90)

Corollary 6.77.

Let p ≥ 1 and A be a \(\mathcal{P}\) -m.i.c.s.p., \(A_{0} = 0\).

  1. (I)

    If the assumption (6.90) is satisfied with λ = 0, then for all δ ≥ 0, 0 ≤ t ≤ s:

    $$\displaystyle{\begin{array}{r} \mathbb{E}^{\mathcal{F}_{t}} \frac{e^{-p\left(V_{s}+A_{s}\right)}\left\vert X_{ s}-\hat{X}_{s}\right\vert ^{p}} {\left(1+\delta e^{-2\left(V_{r}+A_{r}\right)}\left\vert X_{r}-\hat{X}_{r}\right\vert ^{2}\right)^{p/2}} + \mathbb{E}^{\mathcal{F}_{t}}\int _{t}^{s} \frac{e^{-p\left(V_{r}+A_{r}\right)}\left\vert X_{ r}-\hat{X}_{r}\right\vert ^{p}} {\left(1+\delta e^{-2\left(V_{r}+A_{r}\right)}\left\vert X_{r}-\hat{X}_{r}\right\vert ^{2}\right)^{\left(p+2\right)/2}} dA_{r} \\ \leq \frac{e^{-p\left(V_{t}+A_{t}\right)}\left\vert X_{ t}-\hat{X}_{t}\right\vert ^{p}} {\left(1+\delta e^{-2\left(V_{t}+A_{t}\right)}\left\vert X_{t}-\hat{X}_{t}\right\vert ^{2}\right)^{p/2}},\;\; \mathbb{P} - a.s.\end{array} }$$

    In particular for δ = 0

    $$\displaystyle{\begin{array}{r} \mathbb{E}^{\mathcal{F}_{t}}e^{-p\left(V _{s}+A_{s}\right)}\left\vert X_{ s} -\hat{ X}_{s}\right\vert ^{p} + \mathbb{E}^{\mathcal{F}_{t}}\int _{ t}^{s}e^{-p\left(V _{r}+A_{r}\right)}\left\vert X_{ r} -\hat{ X}_{r}\right\vert ^{p}dA_{ r} \\ \leq e^{-p\left(V _{t}+A_{t}\right)}\left\vert X_{t} -\hat{ X}_{t}\right\vert ^{p},\;\; \mathbb{P}\text{ -a.s.},\end{array} }$$

    for all 0 ≤ t ≤ s.

  2. (II)

    If the assumption (6.90) is satisfied with λ > 1, then there exists a positive constant \(C_{p,\lambda }\) depending only on \(\left(p,\lambda \right)\) such that for all δ ≥ 0, 0 ≤ t ≤ s:

    $$\displaystyle{\begin{array}{r} \mathbb{E}^{\mathcal{F}_{t}} \frac{\left\Vert e^{-V-A}\left(X-\hat{X}\right)\right\Vert _{\left[t,s\right]}^{p}} {\left(1+\delta \left\Vert e^{-V-A}\left(X-\hat{X}\right)\right\Vert _{\left[t,s\right]}^{2}\right)^{p/2}} + \mathbb{E}^{\mathcal{F}_{t}}\bigg(\int _{t}^{s} \frac{e^{-2\left(V_{r}+A_{r}\right)}\left\vert X_{ r}-\hat{X}_{r}\right\vert ^{2}} {\left(1+\delta e^{-2V_{r}-2A_{r}}\left\vert X_{r}-\hat{X}_{r}\right\vert ^{2}\right)^{2}} dA_{r}\bigg)^{p/2} \\ \leq C_{p,\lambda } \frac{e^{-p\left(V_{t}+A_{t}\right)}\left\vert X_{ t}-\hat{X}_{t}\right\vert ^{p}} {\left(1+\delta e^{-2\left(V_{t}+A_{t}\right)}\left\vert X_{t}-\hat{X}_{t}\right\vert ^{2}\right)^{p/2}},\;\; \mathbb{P}\text{ -a.s.}\end{array} }$$

    In particular for δ = 0

    $$\displaystyle{\begin{array}{r} \mathbb{E}^{\mathcal{F}_{t}}\left\Vert e^{-V -A}\left(X -\hat{ X}\right)\right\Vert _{\left[t,s\right]}^{p} + \mathbb{E}^{\mathcal{F}_{t}}\bigg(\int _{t}^{s}e^{-2\left(V _{r}+A_{r}\right)}\left\vert X_{r} -\hat{ X}_{r}\right\vert ^{2}dA_{r}\bigg)^{p/2} \\ \leq C_{p,\lambda }\ e^{-p\left(V _{t}+A_{t}\right)}\left\vert X_{t} -\hat{ X}_{t}\right\vert ^{p},\;\; \mathbb{P}\text{ -a.s.},\end{array} }$$

    for all 0 ≤ t ≤ s.

Proof.

Since the assumption (6.90) is equivalent to

$$\displaystyle{\begin{array}{r} \mathit{dD}_{t} + \left\langle X_{t} -\hat{ X}_{t},\mathit{dK}_{t} - d\hat{K}_{t}\right\rangle + \left(\dfrac{1} {2}m_{p} + 9p\lambda \right)\left\vert G_{t} -\hat{ G}_{t}\right\vert ^{2}\mathit{dt} \\ \leq \vert X_{t} -\hat{ X}_{t}\vert ^{2}d\left(V _{t} + A_{t}\right), \end{array} }$$

with

$$\displaystyle{D_{t} = \int _{0}^{t}\left\vert X_{ r} -\hat{ X}_{r}\right\vert ^{2}dA_{ r},}$$

the results clearly follow from Propositions 6.71 and 6.74 applied to the identity

$$\displaystyle{X_{t} -\hat{ X}_{t} = X_{0} -\hat{ X}_{0} + \left(K_{t} -\hat{ K}_{t}\right) +\int _{ 0}^{t}\left(G_{ s} -\hat{ G}_{s}\right)\mathit{dB}_{s}.}$$

 ■ 

Since

$$\displaystyle{\frac{1} {2}\left(r \wedge 1\right) \leq \frac{r} {\left(1 + r^{2}\right)^{1/2}} \leq r \wedge 1,\;\;\;\forall \ r \geq 0,}$$

we have:

Corollary 6.78.

If the assumption (6.90) is satisfied with λ > 1 and p ≥ 1, then there exists a positive constant \(C_{p,\lambda }\) depending only on \(\left(p,\lambda \right)\) such that \(\mathbb{P}\text{ -}a.s.\)

$$\displaystyle{\mathbb{E}^{\mathcal{F}_{t} }\left[1 \wedge \left\Vert e^{-V }\left(X -\hat{ X}\right)\right\Vert _{\left[t,s\right]}^{p}\right] \leq C_{ p,\lambda }\left[1 \wedge \left\vert e^{-V _{t} }\left(X_{t} -\hat{ X}_{t}\right)\right\vert ^{p}\right],}$$

for all 0 ≤ t ≤ s.
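The bound \(\frac{1}{2}\left(r \wedge 1\right) \leq r/\left(1 + r^{2}\right)^{1/2} \leq r \wedge 1\) used to derive Corollary 6.78 can also be checked numerically (illustration only; the sampling grid is arbitrary):

```python
import math

# Illustration only: r sampled on [0, 20]; the grid choice is arbitrary.
for i in range(2001):
    r = 0.01 * i
    mid = r / math.sqrt(1.0 + r * r)
    cap = min(r, 1.0)
    assert 0.5 * cap <= mid <= cap, r
print("min(r,1)/2 <= r/sqrt(1+r^2) <= min(r,1) holds for all sampled r")
```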

6.4.4 Backward Stochastic Inequalities

Let \(\left\{B_{t}: t \geq 0\right\}\) be a k-dimensional Brownian motion with respect to a given stochastic basis \(\left(\Omega,\mathcal{F}, \mathbb{P},\{\mathcal{F}_{t}^{B}\}_{t\geq 0}\right)\), where \(\mathcal{F}_{t}^{B}\) is the natural filtration associated to \(\left\{B_{t}: t \geq 0\right\}\).

Notation 6.79.

For p > 1 define

$$\displaystyle{n_{p}\mathop{ =}\limits^{ \mathit{def }}1 \wedge \left(p - 1\right)\text{.}}$$

In this subsection we shall derive some estimates on \(\left(Y,Z\right) \in S_{m}^{0} \times \varLambda _{m\times k}^{0}\) satisfying for all T ≥ 0 and \(t \in \left[0,T\right]\):

$$\displaystyle{ Y _{t} = Y _{T} + \left(K_{T} - K_{t}\right) -\int _{t}^{T}Z_{ s}\mathit{dB}_{s},\; \mathbb{P}\text{ -a.s.}, }$$
(6.91)

where \(K \in S_{m}^{0}\) and \(K_{\cdot }\left(\omega \right) \in \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{m}\right),\; \mathbb{P}\text{ -a.s.}\;\omega \in \Omega \).

We note that if the interval \(\left[0,T\right]\) is fixed, then the equality (6.91) is extended to \(\mathbb{R}_{+}\) by setting \(Y_{s} = Y_{T}\), \(K_{s} = K_{T}\) and \(Z_{s} = 0\) for all s > T.

Proposition 6.80.

Let \(\left(Y,Z\right) \in S_{m}^{0} \times \varLambda _{m\times k}^{0}\) satisfy

$$\displaystyle{Y _{t} = Y _{T} +\int _{ t}^{T}\mathit{dK}_{ s} -\int _{t}^{T}Z_{ s}\mathit{dB}_{s},\;0 \leq t \leq T,\quad \mathbb{P}\text{ -a.s.},}$$

where \(K \in S_{m}^{0}\) and \(K_{\cdot }\left(\omega \right) \in \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{m}\right),\; \mathbb{P}\text{ -a.s.}\;\omega \in \Omega \).

Assume given

  • \(\blacktriangle \)  three \(\mathcal{P}\) -m.i.c.s.p. D, R, N, \(D_{0} = R_{0} = N_{0} = 0\),

  • \(\blacktriangle \)  a \(\mathcal{P}\) -m.b-v.c.s.p. V, \(V_{0} = 0\),

  • \(\blacktriangle \)  two stopping times τ and σ such that \(0 \leq \tau \leq \sigma < \infty \).

  1. (A)

    If λ < 1, q > 0 and

    $$\displaystyle{\mathit{dD}_{t} + \left\langle Y _{t},\mathit{dK}_{t}\right\rangle \leq \mathit{dR}_{t} + \vert Y _{t}\vert \mathit{dN}_{t} + \vert Y _{t}\vert ^{2}\mathit{dV }_{ t} + \dfrac{\lambda } {2}\left\vert Z_{t}\right\vert ^{2}\mathit{dt},}$$

    then there exists a positive constant \(C_{q,\lambda }\), depending only on \(\left(q,\lambda \right)\), such that

    $$\displaystyle{ \begin{array}{l} \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\sigma }e^{2V _{r}}\mathit{dD}_{r}\right)^{q/2} + \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\sigma }e^{2V _{r}}\left\vert Z_{r}\right\vert ^{2}\mathit{dr}\right)^{q/2} \\ \quad \leq C_{q,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\left[\sup \limits _{s\in \left[\tau,\sigma \right]}\left\vert e^{V _{s}}Y _{s}\right\vert ^{q} + \left(\int _{\tau }^{\sigma }e^{2V _{s}}\mathit{dR}_{s}\right)^{q/2} + \left(\int _{\tau }^{\sigma }e^{V _{s}}\mathit{dN}_{s}\right)^{q}\right],\quad \mathbb{P}\text{ -a.s.} \end{array} }$$
    (6.92)
  2. (B)

    If λ < 1 < p,

    $$\displaystyle{ \begin{array}{rl} \left(i\right)&\quad \mathit{dD}_{t} + \left\langle Y _{t},\mathit{dK}_{t}\right\rangle \leq \left(\mathbf{1}_{p\geq 2}\mathit{dR}_{t} + \vert Y _{t}\vert \mathit{dN}_{t} + \vert Y _{t}\vert ^{2}\mathit{dV }_{t}\right) + \dfrac{n_{p}} {2} \lambda \left\vert Z_{t}\right\vert ^{2}\mathit{dt}, \\ \left(\mathit{ii}\right)&\quad \mathbb{E}\ \sup \limits _{s\in \left[\tau,\sigma \right]}e^{pV _{s}}\left\vert Y _{s}\right\vert ^{p} < \infty,\end{array} }$$
    (6.93)

    then there exists a positive constant \(C_{p,\lambda }\), depending only on \(\left(p,\lambda \right)\), such that \(\mathbb{P}\text{ -a.s.}\),

    $$\displaystyle{ \begin{array}{l} \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\sup \limits _{s\in \left[\tau,\sigma \right]}\left\vert e^{V _{s}}Y _{s}\right\vert ^{p}\right) + \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\sigma }e^{2V _{s}}\mathit{dD}_{s}\right)^{p/2} + \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\sigma }e^{2V _{s}}\left\vert Z_{s}\right\vert ^{2}\mathit{ds}\right)^{p/2} \\ \quad + \mathbb{E}^{\mathcal{F}_{\tau }}\int _{\tau }^{\sigma }e^{pV _{s}}\left\vert Y _{s}\right\vert ^{p-2}\mathbf{1}_{Y _{s}\neq 0}\mathit{dD}_{s} + \mathbb{E}^{\mathcal{F}_{\tau }}\int _{\tau }^{\sigma }e^{pV _{s}}\left\vert Y _{s}\right\vert ^{p-2}\mathbf{1}_{Y _{s}\neq 0}\left\vert Z_{s}\right\vert ^{2}\mathit{ds} \\ \quad \leq C_{p,\lambda }\ \mathbb{E}^{\mathcal{F}_{\tau }}\left[\left\vert e^{V _{\sigma }}Y _{\sigma }\right\vert ^{p} + \left(\int _{\tau }^{\sigma }e^{2V _{s}}\mathbf{1}_{p\geq 2}\mathit{dR}_{s}\right)^{p/2} + \left(\int _{\tau }^{\sigma }e^{V _{s}}\mathit{dN}_{s}\right)^{p}\right]. \end{array} }$$
    (6.94)

Proof.

Step I.

By the Itô formula, we have for all 0 ≤ t ≤ s:

$$\displaystyle\begin{array}{rcl} \left\vert Y _{t}\right\vert ^{2}e^{2V _{t} } +\int _{ t}^{s}e^{2V _{r} }\left\vert Z_{r}\right\vert ^{2}\mathit{dr}& =& \left\vert Y _{ s}\right\vert ^{2}e^{2V _{s} } + 2\int _{t}^{s}e^{2V _{r} }\left(\left\langle Y _{r},\mathit{dK}_{r}\right\rangle -\left\vert Y _{r}\right\vert ^{2}\mathit{dV }_{ r}\right) {}\\ & & -2\int _{t}^{s}e^{2V _{r} }\left\langle Y _{r},Z_{r}\mathit{dB}_{r}\right\rangle,\quad a.s. {}\\ \end{array}$$

Since

$$\displaystyle{\left\langle Y _{r},\mathit{dK}_{r}\right\rangle -\left\vert Y _{r}\right\vert ^{2}\mathit{dV }_{ r} \leq -\mathit{dD}_{r} + \mathit{dR}_{r} + \vert Y _{r}\vert \mathit{dN}_{r} + \frac{\lambda } {2}\left\vert Z_{r}\right\vert ^{2}\mathit{dr},}$$

we get

$$\displaystyle{ \begin{array}{l} \left\vert Y _{t}\right\vert ^{2}e^{2V _{t}} + 2\int _{t}^{s}e^{2V _{r}}\mathit{dD}_{r} + \left(1-\lambda \right)\int _{t}^{s}e^{2V _{r}}\left\vert Z_{r}\right\vert ^{2}\mathit{dr} \\ \; \leq \left\vert Y _{s}\right\vert ^{2}e^{2V _{s}} + 2\int _{t}^{s}e^{2V _{r}}\mathit{dR}_{r} + 2\int _{t}^{s}e^{V _{r}}\vert Y _{r}\vert \mathit{dN}_{r} - 2\int _{t}^{s}e^{2V _{r}}\left\langle Y _{r},Z_{r}\mathit{dB}_{r}\right\rangle. \end{array} }$$
(6.95)

Let τ and σ be two stopping times with 0 ≤ τ ≤ σ < ∞, and define

$$\displaystyle{\begin{array}{r} \theta _{n} =\sigma \wedge \inf \Big\{s \geq \tau: \left\Vert e^{V }Y - e^{V _{\tau }}Y _{\tau }\right\Vert _{s} + \int _{\tau }^{s\vee \tau }e^{2V _{r} }\mathit{dD}_{r} +\int _{ \tau }^{s}e^{2V _{r}}\left\vert Z_{r}\right\vert ^{2}\mathit{dr} \\ + \int _{\tau }^{s\vee \tau }e^{2V _{r} }\mathit{dR}_{r} + \int _{\tau }^{s\vee \tau }e^{V _{r} }\mathit{dN}_{r} \geq n\Big\}. \end{array} }$$

We have \(\tau \leq \theta _{n} \leq \sigma\) and \(\theta _{n} \nearrow \sigma\), \(\mathbb{P}\)-a.s. Replacing t by τ and s by \(\theta _{n}\) in (6.95), we obtain

$$\displaystyle\begin{array}{rcl} & & 2\int _{\tau }^{\theta _{n}}e^{2V _{r}}\mathit{dD}_{r} + \left(1-\lambda \right)\int _{\tau }^{\theta _{n}}e^{2V _{r}}\left\vert Z_{r}\right\vert ^{2}\mathit{dr} {}\\ & & \quad \leq \left\vert Y _{\theta _{n}}\right\vert ^{2}e^{2V _{\theta _{n}} } + 2\int _{\tau }^{\theta _{n}}e^{2V _{r}}\left(\mathit{dR}_{r} + \vert Y _{r}\vert \mathit{dN}_{r}\right) {}\\ & & \quad \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; - 2\int _{\tau }^{\theta _{n}}e^{2V _{r}}\left\langle Y _{r},Z_{r}\mathit{dB}_{r}\right\rangle {}\\ & & \quad \leq \left\vert Y _{\theta _{n}}\right\vert ^{2}e^{2V _{\theta _{n}} } +\sup \limits _{r\in \left[\tau,\sigma \right]}\mathbf{1}_{\left[\tau,\theta _{n}\right]}\left(r\right)\left\vert e^{V _{r} }Y _{r}\right\vert ^{2} + 2\int _{\tau }^{\theta _{n} }e^{2V _{r} }\mathit{dR}_{r} {}\\ & & \quad \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \left(\int _{\tau }^{\theta _{n}}e^{V _{r}}\mathit{dN}_{r}\right)^{2} - 2\int _{\tau }^{\sigma }\mathbf{1}_{\left[\tau,\theta _{ n}\right]}\left(r\right)e^{2V _{r} }\left\langle Y _{r},Z_{r}\mathit{dB}_{r}\right\rangle. {}\\ \end{array}$$
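The second inequality above absorbs the cross terms via the elementary Young inequality \(2ab \leq a^{2} + b^{2}\) (applied to \(2\int e^{V_{r}}\vert Y_{r}\vert \mathit{dN}_{r}\)). A quick numerical sanity check of this inequality, illustrative only (the random samples and seed are arbitrary):

```python
import random

# Young inequality 2ab <= a^2 + b^2, used above to bound
# 2 * int e^{V} |Y| dN by sup|e^V Y|^2 + (int e^V dN)^2.
random.seed(0)
pairs = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(1000)]
ok = all(2 * a * b <= a * a + b * b + 1e-12 for a, b in pairs)
```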

Moreover, by Minkowski’s inequality we infer for all q > 0

$$\displaystyle\begin{array}{rcl} \begin{array}{l} \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\theta _{n}}e^{2V _{r}}\mathit{dD}_{r}\right)^{q/2} + \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\theta _{n}}e^{2V _{r}}\left\vert Z_{r}\right\vert ^{2}\mathit{dr}\right)^{q/2} \\ \leq C_{q,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ \sup \limits _{r\in \left[\tau,\sigma \right]}\left\vert e^{V _{r}}Y _{r}\right\vert ^{q} + C_{q,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\sigma }e^{2V _{r}}\mathit{dR}_{r}\right)^{q/2} \\ + C_{q,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\sigma }e^{V _{r}}\mathit{dN}_{r}\right)^{q} + C_{q,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ \left\vert \int _{\tau }^{\sigma }\mathbf{1}_{\left[\tau,\theta _{ n}\right]}\left(r\right)e^{2V _{r}}\left\langle Y _{ r},Z_{r}\mathit{dB}_{r}\right\rangle \right\vert ^{q/2}.\end{array} & &{}\end{array}$$
(6.96)

But by the Burkholder–Davis–Gundy and Cauchy–Schwarz inequalities, we get

$$\displaystyle\begin{array}{rcl} & & C_{q,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ \left\vert \int _{ \tau }^{\sigma }\mathbf{1}_{\left[\tau,\theta _{n}\right]}\left(r\right)e^{2V _{r} }\left\langle Y _{r},Z_{r}\mathit{dB}_{r}\right\rangle \right\vert ^{q/2} {}\\ & & \quad \leq C_{q,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\sigma }\mathbf{1}_{\left[\tau,\theta _{n}\right]}\left(r\right)e^{4V _{r} }\left\vert Y _{r}\right\vert ^{2}\left\vert Z_{ r}\right\vert ^{2}\mathit{dr}\right)^{q/4} {}\\ & & \quad \leq C_{q,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ \left[\sup \limits _{ r\in \left[\tau,\sigma \right]}\left(\mathbf{1}_{\left[\tau,\theta _{n}\right]}\left(r\right)\left\vert e^{V _{r} }Y _{r}\right\vert ^{q/2}\right)\left(\int _{\tau }^{\sigma }\mathbf{1}_{\left[\tau,\theta _{n}\right]}\left(r\right)e^{2V _{r} }\left\vert Z_{r}\right\vert ^{2}\mathit{dr}\right)^{q/4}\right] {}\\ & & \quad \leq C_{q,\lambda }^{{\prime}}\mathbb{E}^{\mathcal{F}_{\tau }}\ \sup \limits _{ r\in \left[\tau,\theta _{n}\right]}\left\vert e^{V _{r} }Y _{r}\right\vert ^{q} + \dfrac{1} {2}\mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\theta _{n} }e^{2V _{r} }\left\vert Z_{r}\right\vert ^{2}\mathit{dr}\right)^{q/2}. {}\\ \end{array}$$

Since \(\int _{\tau }^{\theta _{n}}e^{2V _{r}}\left\vert Z_{r}\right\vert ^{2}\mathit{dr}\) is finite, from (6.96) we infer

$$\displaystyle\begin{array}{rcl} \begin{array}{l} \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\theta _{n}}e^{2V _{r}}\mathit{dD}_{r}\right)^{q/2} + \dfrac{1} {2}\mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\theta _{n}}e^{2V _{r}}\left\vert Z_{r}\right\vert ^{2}\mathit{dr}\right)^{q/2} \\ \quad \leq C_{q,\lambda }\left[\mathbb{E}^{\mathcal{F}_{\tau }}\ \sup \limits _{r\in \left[\tau,\sigma \right]}\left\vert e^{V _{r}}Y _{r}\right\vert ^{q} + \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\sigma }e^{2V _{r}}\mathit{dR}_{r}\right)^{q/2} + \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\sigma }e^{V _{r}}\mathit{dN}_{r}\right)^{q}\right].\end{array} & &{}\end{array}$$
(6.97)

By the monotone convergence theorem, letting n → ∞, the inequality (6.92) follows.

Step II. Let us first assume that p ≥ 1.

Noting that

$$\displaystyle{e^{V _{t} }Y _{t} = Y _{0} -\int _{0}^{t}e^{V _{r} }\left(\mathit{dK}_{r} - Y _{r}\mathit{dV }_{r}\right) + \int _{0}^{t}e^{V _{r} }Z_{r}\mathit{dB}_{r},}$$

then by the inequality (2.30) from Corollary 2.30 we get, for p ≥ 1 and for all stopping times \(t \in \left[\tau,\sigma \right]\)

$$\displaystyle{ \begin{array}{l} e^{pV _{t}}\left\vert Y _{t}\right\vert ^{p} + \dfrac{p} {2}n_{p}\int _{t}^{\theta _{n} }e^{pV _{r}}\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{Y _{ r}\neq 0}\left\vert Z_{r}\right\vert ^{2}\mathit{dr} \leq e^{pV _{\theta _{n}}}\left\vert Y _{\theta _{ n}}\right\vert ^{p} \\ \;\;\;\;\;\;\;\;\;\; + p\int _{t}^{\theta _{n} }e^{pV _{r}}\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{Y _{ r}\neq 0}\left(\left\langle Y _{r},\mathit{dK}_{r}\right\rangle -\left\vert Y _{r}\right\vert ^{2}\mathit{dV }_{ r}\right) \\ \;\;\;\;\;\;\;\;\;\; - p\int _{t}^{t\vee \theta _{n} }e^{pV _{r}}\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{Y _{ r}\neq 0}\left\langle Y _{r},Z_{r}\mathit{dB}_{r}\right\rangle.\end{array} }$$
(6.98)

We note that the right-hand side of (6.98) is finite \(\mathbb{P}\text{ -}a.s.\) and consequently

$$\displaystyle{0 \leq n_{p}\int _{\tau }^{\theta _{n} }e^{pV _{r} }\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{ Y _{r}\neq 0}\left\vert Z_{r}\right\vert ^{2}\mathit{dr} < \infty,\quad \mathbb{P}\text{ -a.s.}}$$

By the assumption (6.93)

$$\displaystyle{\left\langle Y _{r},\mathit{dK}_{r}\right\rangle -\left\vert Y _{r}\right\vert ^{2}\mathit{dV }_{ r} \leq -\mathit{dD}_{r} + \left(\mathbf{1}_{p\geq 2}\mathit{dR}_{r} + \vert Y _{r}\vert \mathit{dN}_{r}\right) + \frac{n_{p}} {2} \lambda \left\vert Z_{r}\right\vert ^{2}\mathit{dr}.}$$

It follows that

$$\displaystyle{ \begin{array}{l} e^{pV _{t}}\left\vert Y _{t}\right\vert ^{p} + p\int _{t}^{\sigma }\mathbf{1}_{\left[\tau,\theta _{ n}\right]}\left(r\right)e^{pV _{r}}\left\vert Y _{ r}\right\vert ^{p-2}\mathbf{1}_{ Y _{r}\neq 0}\mathit{dD}_{r} \\ \;\;\;\;\;\;\;\; + \dfrac{p} {2}n_{p}\left(1-\lambda \right)\int _{t}^{\sigma }\mathbf{1}_{\left[\tau,\theta _{n}\right]}\left(r\right)e^{pV _{r}}\left\vert Y _{ r}\right\vert ^{p-2}\mathbf{1}_{ Y _{r}\neq 0}\left\vert Z_{r}\right\vert ^{2}\mathit{dr} \\ \;\;\;\;\;\;\;\; \leq e^{pV _{\theta _{n}}}\left\vert Y _{\theta _{ n}}\right\vert ^{p} + \left(U_{\theta _{ n}} - U_{t}\right) -\left(M_{\theta _{n}} - M_{t}\right), \end{array} }$$
(6.99)

where

$$\displaystyle{ U_{s} = p\int _{0}^{s}\mathbf{1}_{\left[\tau,\theta _{n}\right]}\left(r\right)e^{pV _{r} }\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{ Y _{r}\neq 0}\left(\mathbf{1}_{p\geq 2}\mathit{dR}_{r} + \vert Y _{r}\vert \mathit{dN}_{r}\right) }$$
(6.100)

and

$$\displaystyle{M_{s} = p\int _{0}^{s}\mathbf{1}_{\left[\tau,\theta _{n}\right]}\left(r\right)e^{pV _{r} }\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{ Y _{r}\neq 0}\left\langle Y _{r},Z_{r}\mathit{dB}_{r}\right\rangle.}$$

Note that \(\left\{M_{s}: s \in \left[0,T\right]\right\}\) is a martingale since

$$\displaystyle\begin{array}{rcl} \mathbb{E}\sqrt{\left\langle M\right\rangle _{T}}& \leq & p\mathbb{E}\ \ \left(\int _{\tau }^{\theta _{n}}e^{2pV _{r}}\left\vert Y _{r}\right\vert ^{2p-4}\mathbf{1}_{Y _{ r}\neq 0}\left\vert Y _{r}\right\vert ^{2}\left\vert Z_{ r}\right\vert ^{2}\mathit{dr}\right)^{1/2} {}\\ & \leq & p\mathbb{E}\left[\left(\left\vert e^{V _{\tau }}Y _{\tau }\right\vert + n\right)^{p-1}\left(\int _{\tau }^{\theta _{n} }e^{2V _{r} }\left\vert Z_{r}\right\vert ^{2}\mathit{dr}\right)^{1/2}\right] {}\\ & \leq & C_{p}\left(\mathbb{E}\left\vert e^{V _{\tau }}Y _{\tau }\right\vert ^{p-1} + n^{p-1}\right)\sqrt{n}. {}\\ \end{array}$$

Therefore from (6.99),

$$\displaystyle{ e^{pV _{\tau }}\left\vert Y _{\tau }\right\vert ^{p} \leq \mathbb{E}^{\mathcal{F}_{\tau }}e^{pV _{\theta _{n}} }\left\vert Y _{\theta _{n}}\right\vert ^{p} + \mathbb{E}^{\mathcal{F}_{\tau }}\left(U_{\theta _{ n}} - U_{\tau }\right). }$$
(6.101)

From here on we assume that p > 1. From (6.99) we also get

$$\displaystyle{ \begin{array}{l} p\mathbb{E}^{\mathcal{F}_{\tau }}\int \nolimits _{\tau }^{\theta _{n} }e^{pV _{r}}\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{Y _{ r}\neq 0}\mathit{dD}_{r} \\ \quad \quad \quad + \dfrac{p} {2}n_{p}\left(1-\lambda \right)\mathbb{E}^{\mathcal{F}_{\tau }}\int _{\tau }^{\theta _{n} }e^{pV _{r}}\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{Y _{ r}\neq 0}\left\vert Z_{r}\right\vert ^{2}\mathit{dr} \\ \leq \mathbb{E}^{\mathcal{F}_{\tau }}\ e^{pV _{\theta _{n}}}\left\vert Y _{\theta _{ n}}\right\vert ^{p} + \mathbb{E}^{\mathcal{F}_{\tau }}\ U_{\theta _{ n}}. \end{array} }$$
(6.102)

Since

$$\displaystyle{\sup _{t\in \left[\tau,\sigma \right]}\left\vert M_{\sigma } - M_{t}\right\vert \leq 2\sup _{t\in \left[\tau,\sigma \right]}\left\vert M_{t} - M_{\tau }\right\vert = 2\sup _{t\in \left[\tau,\sigma \right]}\left\vert M_{t}\right\vert,}$$

we obtain from (6.99) that

$$\displaystyle{ \begin{array}{l} \mathbb{E}^{\mathcal{F}_{\tau }}\ \sup \limits _{t\in \left[\tau,\theta _{n}\right]}\left(e^{pV _{t}}\left\vert Y _{t}\right\vert ^{p}\right) \\ \quad \leq \mathbb{E}^{\mathcal{F}_{\tau }}\ e^{pV _{\theta _{n}}}\left\vert Y _{\theta _{ n}}\right\vert ^{p} + \mathbb{E}^{\mathcal{F}_{\tau }}\left(U_{\theta _{ n}} - U_{\tau }\right) + 2\mathbb{E}^{\mathcal{F}_{\tau }}\sup \limits _{ t\in \left[\tau,\theta _{n}\right]}\left\vert M_{t}\right\vert. \end{array} }$$
(6.103)
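The pathwise bound used just above, \(\sup_{t}\left\vert M_{\sigma } - M_{t}\right\vert \leq 2\sup_{t}\left\vert M_{t} - M_{\tau }\right\vert\), is a pure triangle-inequality fact. A sketch checking it on a sampled path; the Gaussian random walk and the choices τ = 0, σ = final index are illustrative assumptions, not part of the proof:

```python
import random

# Check sup_t |M_sigma - M_t| <= 2 * sup_t |M_t - M_tau| on one sampled
# path, with tau = 0 and sigma = last index (triangle inequality gives
# |M_sigma - M_t| <= |M_sigma - M_tau| + |M_t - M_tau|).
random.seed(1)
M = [0.0]
for _ in range(500):
    M.append(M[-1] + random.gauss(0.0, 1.0))
lhs = max(abs(M[-1] - m) for m in M)
rhs = 2 * max(abs(m - M[0]) for m in M)
```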

By the Burkholder–Davis–Gundy inequality (2.8) and (6.102):

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}^{\mathcal{F}_{\tau }}\sup \limits _{ t\in \left[\tau,\sigma \right]}\left\vert M_{t}\right\vert {}\\ & & \leq 3p\ \mathbb{E}^{\mathcal{F}_{\tau }}\ \left(\int _{\tau }^{\sigma }\mathbf{1}_{\left[\tau,\theta _{n}\right]}\left(r\right)e^{2pV _{r} }\left\vert Y _{r}\right\vert ^{2p-4}\mathbf{1}_{ Y _{r}\neq 0}\left\vert Y _{r}\right\vert ^{2}\left\vert Z_{ r}\right\vert ^{2}\mathit{dr}\right)^{1/2} {}\\ & & \leq 3p\ \mathbb{E}^{\mathcal{F}_{\tau }}\ \left[\sup _{ r\in \left[\tau,\theta _{n}\right]}e^{\left(p/2\right)V _{r} }\left\vert Y _{r}\right\vert ^{p/2}\left(\int _{\tau }^{\theta _{n} }e^{pV _{r} }\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{ Y _{r}\neq 0}\left\vert Z_{r}\right\vert ^{2}\mathit{dr}\right)^{1/2}\right] {}\\ & & \leq \frac{1} {4}\mathbb{E}^{\mathcal{F}_{\tau }}\ \sup _{ r\in \left[\tau,\theta _{n}\right]}\left(e^{pV _{r} }\left\vert Y _{r}\right\vert ^{p}\right) + C_{ p}\ \mathbb{E}^{\mathcal{F}_{\tau }}\ \int _{ \tau }^{\theta _{n} }e^{pV _{r} }\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{ Y _{r}\neq 0}\left\vert Z_{r}\right\vert ^{2}\mathit{dr} {}\\ & & \leq \frac{1} {4}\mathbb{E}^{\mathcal{F}_{\tau }}\ \sup _{ r\in \left[\tau,\theta _{n}\right]}e^{pV _{r} }\left\vert Y _{r}\right\vert ^{p} + C_{ p,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ e^{pV _{\theta _{n}} }\left\vert Y _{\theta _{n}}\right\vert ^{p} + C_{ p,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ \left(U_{\theta _{ n}} - U_{\tau }\right). {}\\ \end{array}$$

Plugging this last estimate into (6.103) we obtain, with another constant \(C_{p,\lambda }\),

$$\displaystyle{ \mathbb{E}^{\mathcal{F}_{\tau }}\ \sup _{ r\in \left[\tau,\theta _{n}\right]}e^{pV _{r} }\left\vert Y _{r}\right\vert ^{p} \leq C_{ p,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ e^{pV _{\theta _{n}} }\left\vert Y _{\theta _{n}}\right\vert ^{p} + C_{ p,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ \left(U_{\theta _{ n}} - U_{\tau }\right). }$$
(6.104)
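The absorption of the supremum in the third inequality of the Burkholder–Davis–Gundy estimate above rests on the weighted Young inequality \(ab \leq \tfrac{1}{4}a^{2} + b^{2}\), suitably scaled; for instance \(3p\,ab \leq \tfrac{1}{4}a^{2} + 9p^{2}b^{2}\). A numerical check; the value p = 2.5 and the samples are arbitrary illustrations:

```python
import random

# Weighted Young inequality 3p*a*b <= (1/4)*a^2 + 9*p^2*b^2
# (AM-GM: (1/4)a^2 + 9p^2 b^2 >= 2*sqrt(9p^2/4)*ab = 3p*ab).
random.seed(2)
p = 2.5
pairs = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(1000)]
ok = all(3 * p * a * b <= 0.25 * a * a + 9 * p * p * b * b + 1e-9
         for a, b in pairs)
```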

We deduce from (6.102) and (6.104)

$$\displaystyle{\begin{array}{r} \mathbb{E}^{\mathcal{F}_{\tau }}\ \sup \limits _{r\in \left[\tau,\theta _{n}\right]}e^{pV _{r}}\left\vert Y _{r}\right\vert ^{p} + \mathbb{E}^{\mathcal{F}_{\tau }}\int _{\tau }^{\theta _{n}}e^{pV _{r}}\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{Y _{ r}\neq 0}\mathit{dD}_{r} \\ \;\;\;\; + \mathbb{E}^{\mathcal{F}_{\tau }}\int _{\tau }^{\theta _{n} }e^{pV _{r}}\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{Y _{ r}\neq 0}\left\vert Z_{r}\right\vert ^{2}\mathit{dr} \\ \leq C_{p,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ e^{pV _{\theta _{n}}}\left\vert Y _{\theta _{ n}}\right\vert ^{p} + C_{ p,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ U_{\theta _{ n}}. \end{array} }$$

But

$$\displaystyle{\begin{array}{l} C_{p,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\left(U_{\theta _{n}} - U_{\tau }\right) \\ \leq C_{p,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\left[\sup \limits _{r\in \left[\tau,\theta _{n}\right]}\left[e^{\left(p-2\right)V _{r}}\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{Y _{ r}\neq 0}\mathbf{1}_{p\geq 2}\right]\int _{\tau }^{\theta _{n} }e^{2V _{r}}\mathbf{1}_{p\geq 2}\mathit{dR}_{r}\right] \\ \quad \quad \quad \quad \quad \quad \quad + \mathbb{E}^{\mathcal{F}_{\tau }}\left[\sup \limits _{r\in \left[\tau,\theta _{n}\right]}\left[e^{\left(p-1\right)V _{r}}\left\vert Y _{r}\right\vert ^{p-1}\mathbf{1}_{Y _{ r}\neq 0}\right]\int _{\tau }^{\theta _{n} }e^{V _{r}}\mathit{dN}_{r}\right] \\ \leq \dfrac{1} {2}\mathbb{E}^{\mathcal{F}_{\tau }}\ \sup \limits _{r\in \left[\tau,\theta _{n}\right]}e^{pV _{r}}\left\vert Y _{r}\right\vert ^{p} + C_{p,\lambda }^{{\prime}}\mathbb{E}^{\mathcal{F}_{\tau }}\ \Big(\int _{\tau }^{\sigma }e^{2V _{r}}\mathbf{1}_{p\geq 2}\mathit{dR}_{r}\Big)^{p/2} \\ \quad + C_{p,\lambda }^{{\prime}}\mathbb{E}^{\mathcal{F}_{\tau }}\ \Big(\int _{\tau }^{\sigma }e^{V _{r}}\mathit{dN}_{r}\Big)^{p}. \end{array} }$$

Hence

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}^{\mathcal{F}_{\tau }}\ \sup \limits _{ r\in \left[\tau,\theta _{n}\right]}e^{pV _{r} }\left\vert Y _{r}\right\vert ^{p} + \mathbb{E}^{\mathcal{F}_{\tau }}\int _{ \tau }^{\theta _{n} }e^{pV _{r} }\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{ Y _{r}\neq 0}\mathit{dD}_{r} {}\\ & & \;\;\;\;\quad \quad + \mathbb{E}^{\mathcal{F}_{\tau }}\int _{ \tau }^{\theta _{n} }e^{pV _{r} }\left\vert Y _{r}\right\vert ^{p-2}\mathbf{1}_{ Y _{r}\neq 0}\left\vert Z_{r}\right\vert ^{2}\mathit{dr} {}\\ & & \leq C_{p,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ e^{pV _{\theta _{n}} }\left\vert Y _{\theta _{n}}\right\vert ^{p} + C_{ p,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\ \Big(\int _{\tau }^{\sigma }e^{2V _{r} }\mathbf{1}_{p\geq 2}\mathit{dR}_{r}\Big)^{p/2} {}\\ & & \;\;\;\;\quad \quad + C_{p,\lambda }\mathbb{E}^{\mathcal{F}_{\tau }}\Big(\int _{\tau }^{\sigma }e^{V _{r} }\mathit{dN}_{r}\Big)^{p}. {}\\ \end{array}$$

Now letting n → ∞, by the Beppo Levi monotone convergence theorem for the left-hand side and by the Lebesgue dominated convergence theorem for the right-hand side of the inequality, we conclude (6.94) (using, of course, the first step: inequality (6.92)).

The proof is complete. ■ 

Corollary 6.81.

Let \(\left(Y,Z\right) \in S_{m}^{0} \times \varLambda _{m\times k}^{0}\) satisfy

$$\displaystyle{Y _{t} = Y _{T} +\int _{ t}^{T}\mathit{dK}_{ s} -\int _{t}^{T}Z_{ s}\mathit{dB}_{s},\;0 \leq t \leq T,\quad \mathbb{P}\text{ -a.s.},}$$

where \(K \in S_{m}^{0}\) and \(K_{\cdot }\left(\omega \right) \in \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{m}\right),\; \mathbb{P}\text{ -a.s.}\;\omega \in \Omega \) .

Assume given

  • \(\blacktriangle \)  D and N are \(\mathcal{P}\) -m.i.c.s.p., \(N_{0} = 0\),

  • \(\blacktriangle \)  V a \(\mathcal{P}\) -m.b-v.c.s.p., \(V_{0} = 0\),

  • \(\blacktriangle \)  τ, θ and σ are three stopping times such that 0 ≤ τ ≤ θ ≤ σ < ∞.

If

$$\displaystyle{\begin{array}{l@{\quad }l} \left(a\right)\quad &\quad \mathit{dD}_{t} + \left\langle Y _{t},\mathit{dK}_{t}\right\rangle \leq \vert Y _{t}\vert \mathit{dN}_{t} + \vert Y _{t}\vert ^{2}\mathit{dV }_{t}, \\ \left(b\right)\quad &\quad \mathbb{E}\ \sup \limits _{s\in \left[\tau,\sigma \right]}\left\vert e^{V _{s}}Y _{s}\right\vert < \infty, \end{array} }$$

then

$$\displaystyle{ e^{V _{\tau }}\left\vert Y _{\tau }\right\vert \leq \mathbb{E}^{\mathcal{F}_{\tau }}e^{V _{\sigma }}\left\vert Y _{\sigma }\right\vert + \mathbb{E}^{\mathcal{F}_{\tau }}\int _{ \tau }^{\sigma }e^{V _{r} }\mathit{dN}_{r} }$$
(6.105)

and for all 0 < α < 1

$$\displaystyle{ \begin{array}{r} \sup \limits _{\theta \in \left[\tau,\sigma \right]}\left[\mathbb{E}\left(e^{V _{\theta }}\left\vert Y _{\theta }\right\vert \right)\right]^{\alpha } + \mathbb{E}\left(\sup \limits _{s\in \left[\tau,\sigma \right]}\ \left\vert e^{V _{s}}Y _{s}\right\vert ^{\alpha }\right) + \mathbb{E}\left(\int _{\tau }^{\sigma }e^{2V _{r}}\left\vert Z_{r}\right\vert ^{2}\mathit{dr}\right)^{\alpha /2} \\ + \mathbb{E}\left(\int _{\tau }^{\sigma }e^{2V _{r}}\mathit{dD}_{r}\right)^{\alpha /2} \\ \leq C_{\alpha }\left[\Big(\mathbb{E}\left(e^{V _{\sigma }}\left\vert Y _{\sigma }\right\vert \right)\Big)^{\alpha } + \left(\mathbb{E}\int _{\tau }^{\sigma }e^{V _{r}}\mathit{dN}_{r}\right)^{\alpha }\right].\end{array} }$$
(6.106)

Proof.

From (6.101) for p = 1 we deduce, using the definition (6.100) of \(U_{s}\), that

$$\displaystyle\begin{array}{rcl} e^{V _{\tau }}\left\vert Y _{\tau }\right\vert & \leq & \mathbb{E}^{\mathcal{F}_{\tau }}e^{V _{\theta _{n}} }\left\vert Y _{\theta _{n}}\right\vert + \mathbb{E}^{\mathcal{F}_{\tau }}U_{\theta _{ n}} {}\\ & \leq & \mathbb{E}^{\mathcal{F}_{\tau }}e^{V _{\theta _{n}} }\left\vert Y _{\theta _{n}}\right\vert + \mathbb{E}^{\mathcal{F}_{\tau }}\int _{ \tau }^{\sigma }e^{V _{r} }\mathit{dN}_{r} {}\\ \end{array}$$

and the inequality (6.105) follows as n → ∞. Moreover

$$\displaystyle{\sup \limits _{\theta \in \left[\tau,\sigma \right]}\mathbb{E}\left(e^{V _{\theta }}\left\vert Y _{\theta }\right\vert \right) \leq \mathbb{E}\left(e^{V _{\sigma }}\left\vert Y _{\sigma }\right\vert \right) + \mathbb{E}\int _{\tau }^{\sigma }e^{V _{r} }\mathit{dN}_{r}}$$

and by the martingale inequality (1.11-\(A_{3}\)) from Theorem 1.60 we infer

$$\displaystyle{\mathbb{E}\left(\sup \limits _{s\in \left[\tau,\sigma \right]}\ \left\vert e^{V _{s} }Y _{s}\right\vert ^{\alpha }\right) \leq \frac{1} {1-\alpha }\left[\mathbb{E}\left(e^{V _{\sigma }}\left\vert Y _{\sigma }\right\vert + \int _{\tau }^{\sigma }e^{V _{r} }\mathit{dN}_{r}\right)\right]^{\alpha }.}$$

The inequality (6.106) is now a consequence of (6.92). ■ 

Corollary 6.82.

Let \(\left(Y,Z\right) \in S_{m}^{0} \times \varLambda _{m\times k}^{0}\) satisfy

$$\displaystyle{Y _{t} = Y _{T} +\int _{ t}^{T}\mathit{dK}_{ s} -\int _{t}^{T}Z_{ s}\mathit{dB}_{s},\;0 \leq t \leq T,\quad \mathbb{P}\text{ -a.s.},}$$

where \(K \in S_{m}^{0}\) and \(K_{\cdot }\left(\omega \right) \in \mathit{BV }_{\mathit{loc}}\left(\mathbb{R}_{+}; \mathbb{R}^{m}\right),\; \mathbb{P}\text{ -a.s.}\;\omega \in \Omega \) .

Assume given

  • \(\blacktriangle \)  a \(\mathcal{P}\) -m.b-v.c.s.p. V, \(V_{0} = 0\),

  • \(\blacktriangle \)  τ, θ and σ are three stopping times such that 0 ≤ τ ≤ θ ≤ σ < ∞.

If λ < 1 ≤ p, \(n_{p} = 1 \wedge \left(p - 1\right)\) and

$$\displaystyle{\begin{array}{l@{\quad }l} \left(a\right)\quad &\quad \left\langle Y _{t},\mathit{dK}_{t}\right\rangle \leq \vert Y _{t}\vert ^{2}\mathit{dV }_{t} + \dfrac{n_{p}} {2} \lambda \left\vert Z_{t}\right\vert ^{2}\mathit{dt}, \\ \left(b\right)\quad &\quad \mathbb{E}\ \sup \limits _{s\in \left[\tau,\sigma \right]}e^{pV _{s}}\left\vert Y _{s}\right\vert ^{p} < \infty, \end{array} }$$

then for all 1 ≤ q ≤ p,

$$\displaystyle{ e^{qV _{\tau }}\left\vert Y _{\tau }\right\vert ^{q} \leq \mathbb{E}^{\mathcal{F}_{\tau }}e^{qV _{\sigma }}\left\vert Y _{\sigma }\right\vert ^{q},\;\; \mathbb{P}\text{ -}a.s. }$$
(6.107)

If p > 1 then

$$\displaystyle{\mathbb{E}\ \sup \limits _{s\in \left[\tau,\sigma \right]}e^{pV _{s} }\left\vert Y _{s}\right\vert ^{p} + \mathbb{E}\ \left(\int _{\tau }^{\sigma }e^{2V _{r} }\left\vert Z_{r}\right\vert ^{2}\mathit{dr}\right)^{p/2} \leq C_{ p,\lambda }\mathbb{E}\Big(e^{pV _{\sigma }}\left\vert Y _{\sigma }\right\vert ^{p}\Big),}$$

and if p = 1 (and \(n_{p} = 0\)), then for all 0 < α < 1,

$$\displaystyle{ \begin{array}{c} \sup \limits _{\theta \in \left[\tau,\sigma \right]}\Big(\mathbb{E}e^{V _{\theta }}\left\vert Y _{\theta }\right\vert \Big)^{\alpha } + \mathbb{E}\ \sup \limits _{s\in \left[\tau,\sigma \right]}e^{\alpha V _{s}}\left\vert Y _{s}\right\vert ^{\alpha } + \mathbb{E}\ \left(\int _{\tau }^{\sigma }e^{2V _{r}}\left\vert Z_{r}\right\vert ^{2}\mathit{dr}\right)^{\alpha /2} \\ \leq C_{\alpha }\Big(\mathbb{E}\ e^{V _{\sigma }}\left\vert Y _{\sigma }\right\vert \Big)^{\alpha }.\end{array} }$$
(6.108)

Proof.

Since

$$\displaystyle{\mathbb{E}\ \sup \limits _{s\in \left[\tau,\sigma \right]}e^{qV _{s} }\left\vert Y _{s}\right\vert ^{q} \leq \left(\mathbb{E}\ \sup \limits _{ s\in \left[\tau,\sigma \right]}e^{pV _{s} }\left\vert Y _{s}\right\vert ^{p}\right)^{q/p} < \infty,}$$

the inequality (6.101) with p replaced by q yields (6.107). The next two inequalities follow from Proposition 6.80 and Corollary 6.81, respectively. ■ 
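The first inequality in the proof is the Lyapunov/Jensen inequality \(\mathbb{E}\,\xi^{q} \leq \left(\mathbb{E}\,\xi^{p}\right)^{q/p}\) for \(0 < q \leq p\) and \(\xi \geq 0\). It holds verbatim for empirical averages (the power mean inequality), which makes a direct numerical check possible; the exponents and samples below are arbitrary illustrations:

```python
import random

# Power mean inequality behind E[X^q] <= (E[X^p])^{q/p} for 0 < q <= p:
# apply Jensen with the concave map t -> t^{q/p} to Y = X^p; it holds
# exactly for empirical means of nonnegative samples.
random.seed(3)
p, q = 3.0, 1.5
xs = [random.uniform(0, 5) for _ in range(1000)]
mean_q = sum(x ** q for x in xs) / len(xs)
mean_p = sum(x ** p for x in xs) / len(xs)
```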

Corollary 6.83.

Let p ≥ 1 and let \(\left(V _{t}\right)_{t\geq 0}\) be a bounded variation continuous progressively measurable stochastic process with \(V_{0} = 0\). Let T > 0 and let \(\eta: \Omega \rightarrow \mathbb{R}^{m}\) be a random variable such that \(\mathbb{E}\left(\sup _{r\in \left[0,T\right]}e^{pV _{r}}\left\vert \eta \right\vert ^{p}\right) < \infty \). If \(\left(\xi,\zeta \right) \in S_{m}^{p}\left[0,T\right] \times \varLambda _{m\times k}^{p}\left[0,T\right]\) satisfies

$$\displaystyle{\xi _{s} = \mathbb{E}^{\mathcal{F}_{T} }\eta -\int _{s}^{T}\zeta _{ r}\mathit{dB}_{r},\;s \in \left[0,T\right],\text{ a.s.}}$$

(the pair \(\left(\xi,\zeta \right)\) exists and is unique by the martingale representation theorem: Corollary 2.44), then there exists a constant \(C = C\left(p\right) > 0\) such that, for all \(t \in \left[0,T\right]\) and p > 1,

$$\displaystyle\begin{array}{rcl} \mathbb{E}^{\mathcal{F}_{t} }\sup \limits _{s\in \left[t,T\right]}e^{pV _{s} }\left\vert \xi _{s}\right\vert ^{p} + \mathbb{E}^{\mathcal{F}_{t} }\ \left(\int _{t}^{T}e^{2V _{s} }\left\vert \zeta _{s}\right\vert ^{2}\mathit{ds}\right)^{p/2} \leq C_{ p}\,\mathbb{E}^{\mathcal{F}_{t} }\left(\sup _{r\in \left[0,T\right]}e^{pV _{r} }\left\vert \eta \right\vert ^{p}\right)& &{}\end{array}$$
(6.109)

and for p = 1

$$\displaystyle{ \begin{array}{r} \sup \limits _{s\in \left[0,T\right]}\Big(\mathbb{E}e^{V _{s}}\left\vert \xi _{s}\right\vert \Big)^{\alpha } + \mathbb{E}\ \sup _{t\in \left[0,T\right]}\left\vert e^{V _{t}}\xi _{t}\right\vert ^{\alpha } + \mathbb{E}\ \left(\int _{0}^{T}e^{2V _{s}}\left\vert \zeta _{s}\right\vert ^{2}\mathit{ds}\right)^{\alpha /2} \\ \leq C_{\alpha }\,\Big(\mathbb{E}\left(\sup _{t\in \left[0,T\right]}e^{V _{t}}\left\vert \eta \right\vert \right)\Big)^{\alpha },\quad \text{ for all }0 <\alpha < 1.\end{array} }$$
(6.110)

Proof.

We see at once that the pair \(\left(\xi,\zeta \right)\) satisfies the equation

$$\displaystyle{\xi _{t} =\xi _{T} -\int _{t}^{T}\zeta _{ s}\mathit{dB}_{s},\;t \in \left[0,T\right],\;{ \mathit{ a.s.}}}$$

Define the stochastic process \(\tilde{V }_{t} =\sup _{s\in \left[0,t\right]}V _{s}\); then \(\tilde{V }\) is an increasing continuous progressively measurable process with \(\tilde{V }_{0} = 0\). Since for all \(t \in \left[0,T\right]\)

$$\displaystyle{ \mathbb{E}^{\mathcal{F}_{t} }\left\vert e^{\tilde{V }_{t} }\xi _{t}\right\vert = \left\vert e^{\tilde{V }_{t} }\xi _{t}\right\vert = e^{\tilde{V }_{t} }\left\vert \mathbb{E}^{\mathcal{F}_{t} }\eta \right\vert \leq \mathbb{E}^{\mathcal{F}_{t} }e^{\tilde{V }_{t} }\left\vert \eta \right\vert \leq \mathbb{E}^{\mathcal{F}_{t} }e^{\tilde{V }_{T} }\left\vert \eta \right\vert }$$
(6.111)

by Proposition 1.56 we infer for all p > 1

$$\displaystyle{\mathbb{E}\left\Vert \xi e^{\tilde{V }}\right\Vert _{ \left[0,T\right]}^{p} \leq \left( \dfrac{p} {p - 1}\right)^{p}\mathbb{E}\left(e^{p\tilde{V }_{T} }\left\vert \eta \right\vert ^{p}\right) < \infty }$$

and consequently by Proposition 6.80-B (for \(\left(Y,Z\right) = \left(\xi,\zeta \right)\) with λ = 0, K = R = N = 0, \(\mathit{dD}_{t} = \left\vert \xi _{t}\right\vert ^{2}d\tilde{V }_{t}\)) the inequality (6.109) follows; we also use that \(V \leq \tilde{ V }\) and

$$\displaystyle{\mathbb{E}^{\mathcal{F}_{t} }\left\vert e^{\tilde{V }_{T} }\xi _{T}\right\vert ^{p} = \mathbb{E}^{\mathcal{F}_{t} }\left[\left\vert e^{\tilde{V }_{T} }\mathbb{E}^{\mathcal{F}_{T} }\eta \right\vert ^{p}\right] \leq \mathbb{E}^{\mathcal{F}_{t} }\left\vert e^{\tilde{V }_{T} }\eta \right\vert ^{p}.}$$

In the case p = 1 we have for all 0 < α < 1, by Proposition 1.56

$$\displaystyle{ \mathbb{E}\ \sup _{t\in \left[0,T\right]}e^{\alpha V _{t} }\left\vert \xi _{t}\right\vert ^{\alpha } \leq \mathbb{E}\ \sup _{ t\in \left[0,T\right]}\left\vert e^{\tilde{V }_{t} }\xi _{t}\right\vert ^{\alpha } \leq \frac{1} {1-\alpha }\left(\mathbb{E}e^{\tilde{V }_{T} }\left\vert \eta \right\vert \right)^{\alpha } }$$
(6.112)

and by Proposition 6.80-A

$$\displaystyle{ \mathbb{E}\ \left(\int _{0}^{T}e^{2\tilde{V }_{s} }\left\vert \zeta _{s}\right\vert ^{2}\mathit{ds}\right)^{\alpha /2} \leq C_{ 1}\mathbb{E}\ \sup _{t\in \left[0,T\right]}\left\vert e^{\tilde{V }_{t} }\xi _{t}\right\vert ^{\alpha }. }$$
(6.113)

From (6.111) we also see that \(\mathbb{E}\left\vert e^{\tilde{V }_{t}}\xi _{t}\right\vert \leq \mathbb{E}\left(e^{\tilde{V }_{T}}\left\vert \eta \right\vert \right)\) and therefore

$$\displaystyle{ \sup _{t\in \left[0,T\right]}\mathbb{E}\left\vert e^{\tilde{V }_{t} }\xi _{t}\right\vert \leq \mathbb{E}\left(e^{\tilde{V }_{T} }\left\vert \eta \right\vert \right). }$$
(6.114)

From (6.112)–(6.114) the inequality (6.110) follows. ■ 
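The auxiliary process \(\tilde{V}_{t} =\sup _{s\in \left[0,t\right]}V _{s}\) in the proof above is simply the running maximum of V. A minimal sketch on a sampled path; the Gaussian increments and seed are arbitrary illustrations:

```python
import random
from itertools import accumulate

# Running supremum Vtilde_t = sup_{s<=t} V_s of a sampled path V:
# it is nondecreasing, starts at V_0, and dominates V pointwise.
random.seed(4)
V = [0.0]
for _ in range(300):
    V.append(V[-1] + random.gauss(0.0, 0.1))
Vtilde = list(accumulate(V, max))
```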

Let \(\left(Y,Z\right),(\hat{Y },\hat{Z}) \in S_{m}^{0}\left[0,T\right] \times \varLambda _{m\times k}^{0}\left(0,T\right)\) satisfy, for all \(t \in \left[0,T\right]\):

$$\displaystyle{Y _{t} = Y _{T} +\int _{ t}^{T}\mathit{dK}_{ s} -\int _{t}^{T}Z_{ s}\mathit{dB}_{s},\quad \mathbb{P}\text{ -a.s.},}$$

and respectively

$$\displaystyle{\hat{Y }_{t} =\hat{ Y }_{T} +\int _{ t}^{T}d\hat{K}_{ s} -\int _{t}^{T}\hat{Z}_{ s}\mathit{dB}_{s},\quad \mathbb{P}\text{ -a.s.},}$$

where

  • \(\diamond \) \(K,\hat{K} \in S_{m}^{0},\)

  • \(\diamond \) \(K_{\cdot }\left(\omega \right)\), \(\hat{K}_{\cdot }\left(\omega \right) \in \mathit{BV }_{\mathit{loc}}\left(\left[0,\infty \right[; \mathbb{R}^{m}\right),\; \mathbb{P}\text{ -a.s.}\;\omega \in \Omega \).

Assume there exist λ < 1 ≤ p and a \(\mathcal{P}\)-m.b-v.c.s.p. V with \(V_{0} = 0\), such that, as signed measures on \(\left[0,T\right]\):

$$\displaystyle{ \langle Y _{t} -\hat{ Y }_{t},\mathit{dK}_{t} - d\hat{K}_{t}\rangle \leq \vert Y _{t} -\hat{ Y }_{t}\vert ^{2}\mathit{dV }_{ t} + \frac{n_{p}} {2} \lambda \left\vert Z_{t} -\hat{ Z}_{t}\right\vert ^{2}\mathit{dt}, }$$
(6.115)

where \(n_{p} = 1 \wedge \left(p - 1\right)\).

Corollary 6.84.

Let λ < 1 ≤ p be given. Let the assumption (6.115) be satisfied and let \(\left\{A_{t}: t \geq 0\right\}\) be a \(\mathcal{P}\) -m.i.c.s.p. with \(A_{0} = 0\), such that

$$\displaystyle{\mathbb{E}\ \sup \limits _{t\in \left[0,T\right]}\left(e^{p\left(A_{t}+V _{t}\right)}\left\vert Y _{ t} -\hat{ Y }_{t}\right\vert ^{p}\right) < \infty.}$$

Then for all 0 ≤ t ≤ T,

$$\displaystyle{e^{pV _{t} }\left\vert Y _{t} -\hat{ Y }_{t}\right\vert ^{p} \leq \mathbb{E}^{\mathcal{F}_{t} }\left(e^{pV _{T} }\left\vert Y _{T} -\hat{ Y }_{T}\right\vert ^{p}\right),\;\; \mathbb{P}\text{ -a.s.}}$$

Moreover if p > 1, then

$$\displaystyle{\begin{array}{l} \mathbb{E}^{\mathcal{F}_{t}}\ \left(\sup \limits _{s\in \left[t,T\right]}e^{p\left(A_{s}+V _{s}\right)}\left\vert Y _{s} -\hat{ Y }_{s}\right\vert ^{p}\right) + \mathbb{E}^{\mathcal{F}_{t}}\ \int _{t}^{T}e^{p\left(A_{s}+V _{s}\right)}\left\vert Y _{s} -\hat{ Y }_{s}\right\vert ^{p}dA_{s} \\ + \mathbb{E}^{\mathcal{F}_{t}}\ \left(\int _{t}^{T}e^{2\left(A_{s}+V _{s}\right)}\left\vert Y _{s} -\hat{ Y }_{s}\right\vert ^{2}dA_{s}\right)^{p/2} + \mathbb{E}^{\mathcal{F}_{t}}\ \left(\int _{t}^{T}e^{2\left(A_{s}+V _{s}\right)}\left\vert Z_{s} -\hat{ Z}_{s}\right\vert ^{2}\mathit{ds}\right)^{p/2} \\ \quad \leq C_{p,\lambda }\mathbb{E}^{\mathcal{F}_{t}}\ e^{p\left(A_{T}+V _{T}\right)}\left\vert Y _{T} -\hat{ Y }_{T}\right\vert ^{p},\;\; \mathbb{P}\text{ -a.s.}, \end{array} }$$

where \(C_{p,\lambda }\) is a positive constant depending only on \(\left(p,\lambda \right)\) .

Proof.

The results clearly follow from Corollary 6.82 and the inequality (6.94) from Proposition 6.80, applied to

$$\displaystyle{Y _{t} -\hat{ Y }_{t} = Y _{T} -\hat{ Y }_{T} +\int _{ t}^{T}d\left(K_{ s} -\hat{ K}_{s}\right) -\int _{t}^{T}\left(Z_{ s} -\hat{ Z}_{s}\right)\mathit{dB}_{s},}$$

satisfying

$$\displaystyle{\mathit{dD}_{t} +\langle Y _{t} -\hat{ Y }_{t},\mathit{dK}_{t} - d\hat{K}_{t}\rangle \leq \vert Y _{t} -\hat{ Y }_{t}\vert ^{2}d\left(A_{ t} + V _{t}\right) + \frac{n_{p}} {2} \lambda \left\vert Z_{t} -\hat{ Z}_{t}\right\vert ^{2}\mathit{dt}}$$

with

$$\displaystyle{\mathit{dD}_{t} = \vert Y _{t} -\hat{ Y }_{t}\vert ^{2}dA_{ t}.}$$

 ■ 

6.5 Annex D: Viscosity Solutions

The aim of this section is to introduce the notion of viscosity solutions to second order elliptic and parabolic PDEs and to give uniqueness results for such solutions. This notion, introduced by Crandall and Lions, allows one to say that a continuous function satisfies a PDE without any differentiability requirement on that function. It was designed specifically for nonlinear equations, for which the notion of weak solutions in the sense of distributions is not convenient. We use it here for linear and semilinear equations.

This section is divided into four parts. In the first part, we state the main definitions of viscosity solutions to elliptic and parabolic PDEs (or systems of PDEs). In the next three parts we prove three uniqueness results, corresponding to three large classes of semilinear PDEs or systems of PDEs. We do not prove any existence results, since such results for the equations considered in this book are provided by our probabilistic formulas. Concerning uniqueness, it would be too long and repetitive to give a result for each PDE considered in this book; all other relevant results can be proved by combining the arguments given in these three proofs.

The first uniqueness result concerns an elliptic PDE with Dirichlet boundary condition at the boundary of a bounded set. We shall also explain how the proof can be adapted to the parabolic case. The second result treats the case of a system of parabolic PDEs in the whole space. Finally the third result concerns a parabolic PDE with subdifferential operators and nonlinear Neumann boundary condition.

We refer to the well-known “user’s guide” of Crandall et al. [18] for more details, which complements the material presented here.

6.5.1 Definitions

Let \(\mathcal{O}\) be a locally closed subset of \(\mathbb{R}^{d}\), that is, for every \(x \in \mathcal{O}\) there exists a δ > 0 such that \(\mathcal{O}\cap \overline{B}\left(x,\delta \right)\) is closed.

A function \(h: \mathcal{O}\subset \mathbb{R}^{d} \rightarrow \mathbb{R}\) is lower semicontinuous, and we write \(h \in \mathit{LSC}\left(\mathcal{O}\right)\), if there exists a sequence \(\{h_{n},\ n \geq 1\} \subset C(\mathcal{O})\) such that

$$\displaystyle{h_{1}\left(x\right) \leq \cdots \leq h_{n}(x) \leq \cdots \leq h(x)\;\text{ and }\lim _{n\rightarrow \infty }h_{n}(x) = h(x),\;\forall \ x \in \mathcal{O}.}$$

The function \(h: \mathcal{O}\subset \mathbb{R}^{d} \rightarrow \mathbb{R}\) is upper semicontinuous and we write \(h \in \mathit{USC}\left(\mathcal{O}\right)\) if − h is lower semicontinuous.

In particular for all R > 0 we have

$$\displaystyle{\begin{array}{l@{\quad }l} (i)\; \quad &\inf \limits _{x\in \mathcal{O},\ \left\vert x\right\vert \leq R}h(x) > -\infty,\;\;\text{ if }h \in \mathit{LSC}\left(\mathcal{O}\right), \\ (\mathit{ii})\;\quad &\sup \limits _{x\in \mathcal{O},\ \left\vert x\right\vert \leq R}h(x) < \infty,\;\;\text{ if }h \in \mathit{USC}\left(\mathcal{O}\right). \end{array} }$$
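The monotone approximation in this definition is easy to illustrate numerically. The sketch below is our own example (not part of the text): it takes the lower semicontinuous step function \(h = \mathbf{1}_{(0,\infty )}\) and the continuous approximations \(h_{n}(x) =\min (1,\max (0,\mathit{nx}))\), and checks monotonicity in n and pointwise convergence on a grid.

```python
import numpy as np

def h(x):
    # lower semicontinuous step: 0 for x <= 0, 1 for x > 0
    # (liminf of h at the jump point 0 equals 0 = h(0), so h is LSC)
    return (x > 0).astype(float)

def h_n(x, n):
    # continuous approximations, nondecreasing in n
    return np.clip(n * x, 0.0, 1.0)

xs = np.linspace(-1.0, 1.0, 2001)
ns = [1, 2, 5, 10, 100, 10_000]

# monotonicity in n:  h_1 <= h_2 <= ... <= h
for n1, n2 in zip(ns[:-1], ns[1:]):
    assert np.all(h_n(xs, n1) <= h_n(xs, n2) + 1e-15)
assert np.all(h_n(xs, ns[-1]) <= h(xs) + 1e-15)

# pointwise convergence h_n(x) -> h(x) at each fixed grid point
assert np.max(np.abs(h_n(xs, 10**8) - h(xs))) < 1e-6
```

Note that the convergence is pointwise but not uniform, which is exactly why the limit h can fail to be continuous.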

6.5.1.1 Elliptic PDE

Consider the PDE

$$\displaystyle{ \Phi (x,u(x),\mathit{Du}(x),D^{2}u(x)) = 0,\;\ x \in \mathcal{O}, }$$
(6.116)

where

$$\displaystyle{\Phi \,: \mathcal{O}\times \mathbb{R} \times \mathbb{R}^{d} \times \mathbb{S}^{d} \rightarrow \mathbb{R},}$$

and \(\mathbb{S}^{d}\) denotes the set of symmetric d × d matrices.

Definition 6.85.

  1. (i)

    \(\;u \in \mathit{USC}(\mathcal{O})\) is a viscosity sub-solution of (6.116) if for any \(\varphi \in C^{2}(\mathcal{O})\) and \(\hat{x} \in \mathcal{O}\) a local maximum of \(u-\varphi\):

    $$\displaystyle{\Phi (\hat{x},u(\hat{x}),D\varphi (\hat{x}),D^{2}\varphi (\hat{x})) \leq 0.}$$
  2. (ii)

    \(\;u \in \mathit{LSC}(\mathcal{O})\) is a viscosity super-solution of (6.116) if for any \(\varphi \in C^{2}(\mathcal{O})\) and \(\hat{x} \in \mathcal{O}\) a local minimum of \(u-\varphi\):

    $$\displaystyle{\Phi (\hat{x},u(\hat{x}),D\varphi (\hat{x}),D^{2}\varphi (\hat{x})) \geq 0.}$$
  3. (iii)

    \(\;u \in C\left(\mathcal{O}\right)\) is a viscosity solution if it is both a viscosity sub- and super-solution.

In these definitions we can also assume that \(u\left(\hat{x}\right) =\varphi \left(\hat{x}\right)\) since we can translate \(\varphi\).

Note that the class of PDEs for which probabilistic formulas are given in this book is the class of semilinear equations, where the function \(\Phi \) has the following particular form

$$\displaystyle{ \Phi (x,r,p,X) = -\dfrac{1} {2}\mathbf{Tr}\left[g(x)g^{{\ast}}(x)X\right] -\langle f(x),p\rangle - F\left(x,r,p\right). }$$
(6.117)

In Definition 6.85 we can replace local maximum (minimum) by strict global maximum (minimum).

Remark 6.86.

Let \(\mathcal{O}\) be an open subset of \(\mathbb{R}^{d}\) and \(u \in C^{2}(\mathcal{O})\).

  1. (i)

    If u is a viscosity solution of (6.116), then u is a classical solution.

  2. (ii)

    If u is a classical solution of (6.116) and \(\Phi \) satisfies the degenerate ellipticity condition

    $$\displaystyle{X \leq Y \Rightarrow \Phi (x,r,p,X) \geq \Phi (x,r,p,Y ),\,\forall \,x,r,p,}$$

    then u is a viscosity solution.

Definition 6.87.

A function \(u \in \mathit{USC}(\mathcal{O})\) satisfies the maximum principle if for all \(\varphi \in C^{2}(\mathcal{O})\) and all open subsets \(D \subseteq \mathcal{O}\) the inequality

$$\displaystyle{\Phi (x,\varphi (x),D\varphi (x),D^{2}\varphi (x)) > 0,\,\forall x \in D}$$

implies that at every \(\hat{x} \in D\) which is a local maximum of \(u-\varphi\):

$$\displaystyle{u\left(\hat{x}\right) <\varphi \left(\hat{x}\right).}$$

Proposition 6.88.

Let \(\mathcal{O}\) be an open subset of \(\mathbb{R}^{d}\) and

$$\displaystyle{r \leq s\; \Rightarrow \; \Phi (x,r,p,X) \leq \Phi (x,s,p,X),\,\forall \,x,p,X.}$$

Then each viscosity sub-solution u satisfies the maximum principle.

Proof.

If we assume that there exist \(\varphi \in C^{2}(\mathcal{O})\), an open subset \(D \subseteq \mathcal{O}\) such that

$$\displaystyle{\Phi (x,\varphi (x),D\varphi (x),D^{2}\varphi (x)) > 0,\,\forall x \in D,}$$

and \(\hat{x} \in D\) a local maximum of \(u-\varphi\) such that \(u\left(\hat{x}\right) \geq \varphi \left(\hat{x}\right)\) then

$$\displaystyle{\Phi (\hat{x},\varphi (\hat{x}),D\varphi (\hat{x}),D^{2}\varphi (\hat{x})) \leq \Phi (\hat{x},u(\hat{x}),D\varphi (\hat{x}),D^{2}\varphi (\hat{x})) \leq 0,}$$

where the first inequality follows from the monotonicity of \(\Phi \) in r and the second holds since u is a sub-solution; this contradicts the assumption. Hence necessarily \(u(\hat{x}) <\varphi (\hat{x})\). ■ 

We next introduce the notion of a proper function (in the sense of the theory of viscosity solutions, which should not be confused with the notion of proper convex function), for which the notion of a viscosity solution makes sense.

Definition 6.89.

A continuous function

$$\displaystyle{\Phi \,: \mathcal{O}\times \mathbb{R} \times \mathbb{R}^{d} \times \mathbb{S}^{d} \rightarrow \mathbb{R}}$$

is said to be proper if \(\Phi \) satisfies:

  1. (1)

    Monotonicity condition

    $$\displaystyle{r \leq s\; \Rightarrow \; \Phi (x,r,p,X) \leq \Phi (x,s,p,X),\,\forall \,x,p,X,}$$

    and

  2. (2)

    Degenerate ellipticity condition

    $$\displaystyle{ X \leq Y \Rightarrow \Phi (x,r,p,X) \geq \Phi (x,r,p,Y ),\,\forall \,x,r,p. }$$
    (6.118)

Definition 6.85 of a viscosity solution can be reformulated in terms of subjets and superjets of u.

Definition 6.90.

Let \(\mathcal{O}\) be a locally closed subset of \(\mathbb{R}^{d}\), \(u: \mathcal{O}\rightarrow \mathbb{R}\) and \(x \in \mathcal{O}\).

  1. (i)

    \((p,X) \in \mathbb{R}^{d} \times \mathbb{S}^{d}\) is a superjet to u at x if

    $$\displaystyle{\mathop{\lim \sup }\limits_{\mathcal{O}\ni y \rightarrow x}\tfrac{u(y)-u(x)-\langle p,y-x\rangle -\frac{1} {2} \langle X(y-x),y-x\rangle } {\vert y-x\vert ^{2}} \leq 0.}$$

    The set of superjets to u at x will be denoted \(J_{\mathcal{O}}^{2,+}u(x)\).

  2. (ii)

    \((p,X) \in \mathbb{R}^{d} \times \mathbb{S}^{d}\) is a subjet to u at x if

    $$\displaystyle{\mathop{\lim \inf }\limits_{\mathcal{O}\ni y \rightarrow x}\tfrac{u(y)-u(x)-\langle p,y-x\rangle -\frac{1} {2} \langle X(y-x),y-x\rangle } {\vert y-x\vert ^{2}} \geq 0.}$$

    The set of subjets to u at x will be denoted \(J_{\mathcal{O}}^{2,-}u(x)\).

If \(\mathcal{O} = \mathbb{R}^{d}\), then the index \(\mathcal{O}\) will be omitted.
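For a smooth function, any pair \(\left(u^{{\prime}}(x_{0}),u^{{\prime\prime}}(x_{0}) + c\right)\) with c ≥ 0 should lie in \(J^{2,+}u(x_{0})\). The following sketch is our own numerical illustration (the example u = sin, x₀ = 0.3 and c = 1 are assumptions of ours): it evaluates the second-order Taylor quotient from Definition 6.90 on a fine grid near x₀ and checks that it stays negative, consistent with the limsup condition.

```python
import numpy as np

u = np.sin
x0 = 0.3
p = np.cos(x0)            # u'(x0)
X = -np.sin(x0) + 1.0     # u''(x0) + 1, a candidate superjet "Hessian"

y = x0 + np.linspace(-0.1, 0.1, 4001)
y = y[np.abs(y - x0) > 1e-12]          # exclude y = x0 itself
num = u(y) - u(x0) - p * (y - x0) - 0.5 * X * (y - x0) ** 2
quot = num / (y - x0) ** 2

# as y -> x0 the quotient tends to (u''(x0) - X)/2 = -1/2 < 0,
# so the superjet inequality holds with room to spare
assert quot.max() < 0.0
```

Taking c < 0 instead would push the quotient above 0 near x₀, showing that one cannot enlarge the second-order term downwards: superjets are "test expansions from above".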

Proposition 6.91.

Let \(\mathcal{O}\) be a locally closed subset of \(\mathbb{R}^{d}\).

  1. (i)

    Let \(u \in \mathit{USC}\left(\mathcal{O}\right)\) and \(\tilde{x} \in \mathcal{O}\).

    1. (a)

      If \(\left(p,X\right) \in J_{\mathcal{O}}^{2,+}u(\tilde{x})\) , then there exists a \(\varphi \in C^{2}(\mathcal{O})\) such that \(u(\tilde{x}) =\varphi (\tilde{x})\),

      $$\displaystyle{\left(p,X\right) = \left(\varphi _{x}^{{\prime}}(\tilde{x}),\varphi _{ xx}^{{\prime\prime}}(\tilde{x})\right)}$$

      and \(\tilde{x}\) is a strict global maximum of \(u-\varphi\) in \(\mathcal{O}\) .

    2. (b)

      If \(\varphi \in C^{2}(\mathcal{O})\) and \(\tilde{x}\) is a local maximum of \(u-\varphi\) in \(\mathcal{O}\) , then

      $$\displaystyle{\left(\varphi _{x}^{{\prime}}(\tilde{x}),\varphi _{ xx}^{{\prime\prime}}(\tilde{x})\right) \in J_{ \mathcal{O}}^{2,+}u(\tilde{x}).}$$
  2. (ii)

    Let \(u \in \mathit{LSC}\left(\mathcal{O}\right)\) and \(\tilde{x} \in \mathcal{O}\).

    1. (a)

      If \(\left(p,X\right) \in J_{\mathcal{O}}^{2,-}u(\tilde{x})\) , then there exists a \(\varphi \in C^{2}(\mathcal{O})\) such that \(u(\tilde{x}) =\varphi (\tilde{x})\),

      $$\displaystyle{\left(p,X\right) = \left(\varphi _{x}^{{\prime}}(\tilde{x}),\varphi _{ xx}^{{\prime\prime}}(\tilde{x})\right)}$$

      and \(\tilde{x}\) is a strict global minimum of \(u-\varphi\) in \(\mathcal{O}\) .

    2. (b)

      If \(\varphi \in C^{2}(\mathcal{O})\) and \(\tilde{x}\) is a local minimum of \(u-\varphi\) in \(\mathcal{O}\) , then

      $$\displaystyle{\left(\varphi _{x}^{{\prime}}(\tilde{x}),\varphi _{ xx}^{{\prime\prime}}(\tilde{x})\right) \in J_{ \mathcal{O}}^{2,-}u(\tilde{x}).}$$

Proof.

It is sufficient to prove (i) since \(J_{\mathcal{O}}^{2,-}u(\tilde{x}) = -J_{\mathcal{O}}^{2,+}\left(-u\right)(\tilde{x})\). Also the equivalence is clear if \(\tilde{x}\) is an isolated point of \(\mathcal{O}\).

Let \(\tilde{x}\) be a non-isolated point of \(\mathcal{O}\).

\(\left(\Rightarrow \right)\): Let \(\left(p,X\right) \in J_{\mathcal{O}}^{2,+}u(\tilde{x})\). Then there exists a strictly increasing function \(\rho =\rho ^{\left(\tilde{x}\right)}: [0,+\infty [\rightarrow [0,+\infty [\), ρ(0+) = 0 such that \(\forall y \in \mathcal{O}\)

$$\displaystyle{ u(y) \leq u(\tilde{x}) +\langle p,y -\tilde{ x}\rangle + \frac{1} {2}\langle X(y -\tilde{ x}),y -\tilde{ x}\rangle +\rho (\vert y -\tilde{ x}\vert )\vert y -\tilde{ x}\vert ^{2}. }$$
(6.119)

One can define ρ by

$$\displaystyle{\rho \left(r\right) = r +\sup _{y\in \mathcal{O},\ \left\vert y-\tilde{x}\right\vert \leq r}\tfrac{\left(u(y)-u(\tilde{x})-\langle p,y-\tilde{x}\rangle -\frac{1} {2} \langle X(y-\tilde{x}),y-\tilde{x}\rangle \right)^{+}} {\vert y-\tilde{x}\vert ^{2}}.}$$

Let

$$\displaystyle{\beta (r) = \frac{1} {r^{2}}\int _{r}^{2r}\int _{ r_{2}}^{2r_{2} }\int _{r_{1}}^{2r_{1} }\rho \left(\sqrt{\tau }\right)d\tau \mathit{dr}_{1}\mathit{dr}_{2},\quad \text{ for }r > 0,}$$

and β(r) = 0 if r ≤ 0. Then \(r\rho (\sqrt{r}) <\beta (r) < 8r\rho (8\sqrt{r})\) for all r > 0, \(\beta \in C^{2}(\left]0,\infty \right[)\), \(\beta (0+) =\beta ^{{\prime}}(0+) = 0\) and

$$\displaystyle{\lim _{r\searrow 0}r\beta ^{{\prime\prime}}\left(r\right) = 0.}$$
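The sandwich inequality for β can be checked numerically. The sketch below is our own illustration: we assume the particular modulus ρ(s) = s (for which β(r) is a constant multiple of \(r^{3/2}\)), evaluate the triple integral by nested trapezoidal quadrature, and verify \(r\rho (\sqrt{r}) <\beta (r) < 8r\rho (8\sqrt{r})\) at a few values of r.

```python
import numpy as np

rho = lambda s: s          # assumed modulus; any increasing rho with rho(0+)=0 works

def trap(y, x):
    # elementary trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def beta(r, n=120):
    def inner(r1):   # integral of rho(sqrt(tau)) over [r1, 2*r1]
        tau = np.linspace(r1, 2.0 * r1, n)
        return trap(rho(np.sqrt(tau)), tau)
    def middle(r2):  # integral of inner over [r2, 2*r2]
        r1 = np.linspace(r2, 2.0 * r2, n)
        return trap(np.array([inner(t) for t in r1]), r1)
    r2 = np.linspace(r, 2.0 * r, n)
    return trap(np.array([middle(t) for t in r2]), r2) / r**2

for r in [0.01, 0.1, 1.0, 4.0]:
    b = beta(r)
    # sandwich: r*rho(sqrt(r)) < beta(r) < 8*r*rho(8*sqrt(r))
    assert r * rho(np.sqrt(r)) < b < 8.0 * r * rho(8.0 * np.sqrt(r))
```

The point of β is that it dominates the modulus term \(r^{2}\rho (r)\) while being C² away from 0 with \(\beta (0+) =\beta ^{{\prime}}(0+) = 0\), so it can be absorbed into a C² test function.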

Define \(\varphi \in C^{2}(\mathbb{R}^{d})\) by

$$\displaystyle{\varphi (y)\mathop{ =}\limits^{ \mathit{def }}u(\tilde{x}) +\langle p,y -\tilde{ x}\rangle + \frac{1} {2}\langle X(y -\tilde{ x}),y -\tilde{ x}\rangle +\beta (\vert y -\tilde{ x}\vert ^{2}).}$$

Then

$$\displaystyle{\varphi _{x}^{{\prime}}(\tilde{x}) = p\;\;\;\text{ and }\varphi _{ xx}^{{\prime\prime}}(\tilde{x}) = X}$$

and \(\tilde{x}\) is a strict global maximum of \(u-\varphi\) since for \(y \in \mathcal{O}\setminus \left\{\tilde{x}\right\}\):

$$\displaystyle\begin{array}{rcl} u(y) -\varphi (y)& \leq & \rho (\vert y -\tilde{ x}\vert )\vert y -\tilde{ x}\vert ^{2} -\beta (\vert y -\tilde{ x}\vert ^{2}) {}\\ & <& 0 =\varphi (\tilde{x}) - u(\tilde{x}). {}\\ \end{array}$$

\(\left(\Leftarrow \right)\): Let \(\varphi \in C^{2}(\mathcal{O})\) and \(\tilde{x}\) be a local maximum of \(u-\varphi\). Let

$$\displaystyle{\psi \left(y\right) =\varphi \left(y\right) -\varphi \left(\tilde{x}\right) + u\left(\tilde{x}\right).}$$

By Taylor’s formula

$$\displaystyle\begin{array}{rcl} 0& =& \ \lim _{y\rightarrow \tilde{x}}\tfrac{\psi (y)-\psi (\tilde{x})-\langle \psi _{x}^{{\prime}}(\tilde{x}),y-\tilde{x}\rangle -\frac{1} {2} \langle \psi _{xx}^{{\prime\prime}}(\tilde{x})(y-\tilde{x}),y-\tilde{x}\rangle } {\vert y-\tilde{x}\vert ^{2}} {}\\ & \geq & \ \mathop{\lim \sup }\limits_{y \rightarrow \tilde{ x},y \in \mathcal{O}}\tfrac{u(y)-u(\tilde{x})-\langle \varphi _{x}^{{\prime}}(\tilde{x}),y-\tilde{x}\rangle -\frac{1} {2} \langle \varphi _{xx}^{{\prime\prime}}(\tilde{x})(y-\tilde{x}),y-\tilde{x}\rangle } {\vert y-\tilde{x}\vert ^{2}}. {}\\ \end{array}$$

 ■ 

Corollary 6.92.

Let \(\mathcal{O}\) be a locally closed subset of \(\mathbb{R}^{d}\).

  1. (i)

    \(\;u \in \mathit{USC}(\mathcal{O})\) is a viscosity sub-solution of (6.116) iff for any \(x \in \mathcal{O}\) and \((p,X) \in J_{\mathcal{O}}^{2,+}u(x)\)

    $$\displaystyle{\Phi (x,u(x),p,X) \leq 0.}$$
  2. (ii)

    \(\;u \in \mathit{LSC}(\mathcal{O})\) is a viscosity super-solution of (6.116) iff for any \(x \in \mathcal{O}\) and \((p,X) \in J_{\mathcal{O}}^{2,-}u(x)\)

    $$\displaystyle{\Phi (x,u(x),p,X) \geq 0.}$$

Definition 6.93.

Let \(u: \mathcal{O}\rightarrow \mathbb{R}\) and \(x \in \mathcal{O}\).

\(\overline{J}_{\mathcal{O}}^{2,+}u(x)\) (resp. \(\overline{J}_{\mathcal{O}}^{2,-}u(x)\)) is the set of \((p,X) \in \mathbb{R}^{d} \times \mathbb{S}^{d}\) such that there exists a sequence \((x_{n},p_{n},X_{n}) \in \mathcal{O}\times \mathbb{R}^{d} \times \mathbb{S}^{d}\), \(n \in \mathbb{N}^{{\ast}}\), with the properties

$$\displaystyle{(p_{n},X_{n}) \in J_{\mathcal{O}}^{2,+}u(x_{ n}),\;\text{ (resp. }(p_{n},X_{n}) \in J_{\mathcal{O}}^{2,-}u(x_{ n})\text{ )},\;\forall \ n \in \mathbb{N}^{{\ast}},}$$

and

$$\displaystyle{(x_{n},u(x_{n}),p_{n},X_{n}) \rightarrow (x,u(x),p,X),\;\;\text{ as }n \rightarrow \infty.}$$

6.5.1.2 Systems of PDEs

Backward stochastic differential equations naturally give probabilistic formulas for systems of PDEs, not just for single PDEs.

Let \(\mathcal{O}\) be an open subset of \(\mathbb{R}^{d}\), \(\Phi \in C(\overline{\mathcal{O}}\times \mathbb{R}^{m} \times \mathbb{R}^{d} \times \mathbb{S}^{d}; \mathbb{R}^{m})\). We want to explain what we mean by the fact that \(u \in C(\mathcal{O}, \mathbb{R}^{m})\) solves in the viscosity sense the following system of PDEs

$$\displaystyle{ \Phi _{i}(x,u(x),\mathit{Du}_{i}(x),D^{2}u_{ i}(x)) = 0,\ 1 \leq i \leq m,\ x \in \mathcal{O}. }$$
(6.120)

Note that the various equations are coupled only through the vector u(x). The i-th equation depends upon all the coordinates of u(x), but only on the first and second order derivatives \(\mathit{Du}_{i}(x)\) and \(D^{2}u_{i}(x)\) of its i-th coordinate. This is essential for the following definition to make sense.

Definition 6.94.

Let \(\mathcal{O}\) be a locally closed subset of \(\mathbb{R}^{d}\).

  • (i) \(u \in \mathit{USC}(\mathcal{O})\) is a viscosity sub-solution of (6.120) if

    $$\displaystyle{\Phi _{i}(x,u(x),p,X) \leq 0\ \text{ for }x \in \mathcal{O},\ 1 \leq i \leq m,\ (p,X) \in \overline{J}_{\mathcal{O}}^{2,+}u_{ i}(x).}$$
  • (ii) \(u \in \mathit{LSC}(\mathcal{O})\) is a viscosity super-solution of (6.120) if

    $$\displaystyle{\Phi _{i}(x,u(x),p,X) \geq 0\ \text{ for }x \in \mathcal{O},\ 1 \leq i \leq m,\ (p,X) \in \overline{J}_{\mathcal{O}}^{2,-}u_{ i}(x).}$$
  • (iii) \(u \in C(\mathcal{O})\) is a viscosity solution of (6.120) if it is both a viscosity sub- and super-solution.

6.5.1.3 Boundary Conditions

We now discuss the formulation of the boundary condition in the framework of viscosity solutions. Suppose for simplicity that the boundary \(\partial \mathcal{O}\) of the open set \(\mathcal{O}\) is of class C 1 and that \(\mathcal{O}\) satisfies the uniform exterior ball condition. We shall consider two types of boundary conditions, namely:

  • Dirichlet boundary conditions, of the form

    $$\displaystyle{u(x) -\kappa (x) = 0,\quad x \in \partial \mathcal{O};}$$
  • Nonlinear Neumann boundary conditions, of the form

    $$\displaystyle{\langle n(x),\mathit{Du}(x)\rangle + G(x,u(x)) = 0,\quad x \in \partial \mathcal{O},}$$

    where n(x) denotes the outward normal vector to the boundary \(\partial \mathcal{O}\) at x.

Consider the function

$$\displaystyle{\Gamma: \partial \mathcal{O}\times \mathbb{R} \times \mathbb{R}^{d} \rightarrow \mathbb{R}}$$

defined in the case of the Dirichlet boundary condition by

$$\displaystyle{\Gamma (x,r,p) = r -\kappa (x),}$$

and in the case of the Neumann boundary condition by

$$\displaystyle{\Gamma (x,r,p) =\langle n(x),p\rangle - G(x,r),}$$

where \(G \in C(\partial \mathcal{O}\times \mathbb{R})\) and r → G(x, r) is assumed to be nonincreasing for all \(x \in \partial \mathcal{O}\). The correct formulation of the boundary value problem

$$\displaystyle{ \left\{\begin{array}{l} \Phi (x,u(x),\mathit{Du}(x),D^{2}u(x)) = 0,\ x \in \mathcal{O}, \\ \Gamma (x,u(x),\mathit{Du}(x)) = 0,\ x \in \partial \mathcal{O},\end{array} \right. }$$
(6.121)

is as follows.

Definition 6.95.

Let \(\mathcal{O}\) be an open subset of \(\mathbb{R}^{d}\), \(\Phi \in C(\overline{\mathcal{O}}\times \mathbb{R} \times \mathbb{R}^{d} \times \mathbb{S}^{d})\) be proper and \(\Gamma \in C(\partial \mathcal{O}\times \mathbb{R} \times \mathbb{R}^{d})\) be as defined above.

  • (i) \(u \in \mathit{USC}(\overline{\mathcal{O}})\) is a viscosity sub-solution of (6.121) if

    $$\displaystyle{\left\{\begin{array}{l} \Phi (x,u(x),p,X) \leq 0\ \text{ for }x \in \mathcal{O},\ (p,X) \in \overline{J}_{\overline{\mathcal{O}}}^{2,+}u(x), \\ \Phi (x,u(x),p,X) \wedge \Gamma (x,u(x),p) \leq 0\ \text{ for }x \in \partial \mathcal{O},\ (p,X) \in \overline{J}_{\overline{\mathcal{O}}}^{2,+}u(x).\end{array} \right.}$$
  • (ii) \(u \in \mathit{LSC}(\overline{\mathcal{O}})\) is a viscosity super-solution of (6.121) if

    $$\displaystyle{\left\{\begin{array}{l} \Phi (x,u(x),p,X) \geq 0\ \text{ for }x \in \mathcal{O},\ (p,X) \in \overline{J}_{\overline{\mathcal{O}}}^{2,-}u(x), \\ \Phi (x,u(x),p,X) \vee \Gamma (x,u(x),p) \geq 0\ \text{ for }x \in \partial \mathcal{O},\ (p,X) \in \overline{J}_{\overline{\mathcal{O}}}^{2,-}u(x).\end{array} \right.}$$
  • (iii) \(u \in C(\overline{\mathcal{O}})\) is a viscosity solution of (6.121) if it is both a viscosity sub- and super-solution.

6.5.1.4 Parabolic PDEs

One might think that a parabolic PDE is an elliptic PDE with one more variable, namely time t. However, because we are considering equations with first derivatives in t only, the variable t plays a specific role. In particular, there will be a boundary condition either at the initial point or at the final point of the time interval, not at both.

Given \(\mathcal{O}\subset \mathbb{R}^{d}\) and \(\Phi \in C([0,T] \times \mathcal{O}\times \mathbb{R} \times \mathbb{R}^{d} \times \mathbb{S}^{d})\), we consider the parabolic equation

$$\displaystyle{ \left\{\begin{array}{l} \dfrac{\partial u} {\partial t} (t,x) + \Phi (t,x,u(t,x),\mathit{Du}(t,x),D^{2}u(t,x)) = 0,\ 0 < t < T,\ x \in \mathcal{O}, \\ u(0,x) =\kappa (x),\ x \in \mathcal{O}, \end{array} \right. }$$
(6.122)

where, as previously, \(\mathit{Du}\) stands for the vector of first order partial derivatives with respect to the \(x_{i}\)'s, and \(D^{2}u\) for the matrix of second order partial derivatives with respect to \(x_{i}\) and \(x_{j}\), \(1 \leq i,j \leq d\). Only in the case \(\mathcal{O} = \mathbb{R}^{d}\) can we hope that the above parabolic PDE is well posed. If \(\mathcal{O}\not =\mathbb{R}^{d}\), some boundary condition is needed. This will be discussed later.

We denote by \(\mathcal{P}_{\mathcal{O}}^{2,+}\) and \(\mathcal{P}_{\mathcal{O}}^{2,-}\) the parabolic analogs of \(J_{\mathcal{O}}^{2,+}\) and \(J_{\mathcal{O}}^{2,-}\). More specifically, for \(\mathcal{O}\) a locally compact subset of \(\mathbb{R}^{d}\) and T > 0, denoting \(\mathcal{O}_{T} = (0,T) \times \mathcal{O}\), if \(u: \mathcal{O}_{T} \rightarrow \mathbb{R}\), 0 < s, t < T, \(x,y \in \mathcal{O}\), \((p,q,X) \in \mathbb{R} \times \mathbb{R}^{d} \times \mathbb{S}^{d}\), we say that \((p,q,X) \in \mathcal{P}_{\mathcal{O}}^{2,+}u(t,x)\) whenever

$$\displaystyle{u(s,y) \leq u(t,x) + p(s - t) +\langle q,y - x\rangle + \frac{1} {2}\langle X(y - x),y - x\rangle }$$
$$\displaystyle{\quad + o(\vert s - t\vert + \vert y - x\vert ^{2}).}$$

Moreover \(\mathcal{P}_{\mathcal{O}}^{2,-}u = -\mathcal{P}_{\mathcal{O}}^{2,+}(-u)\). The corresponding definitions of \(\overline{\mathcal{P}}_{\mathcal{O}}^{2,+}u(t,x)\) and \(\overline{\mathcal{P}}_{\mathcal{O}}^{2,-}u(t,x)\) are now clear.

We now give a definition of the notion of a viscosity solution of equation (6.122).

Definition 6.96.

With the above notation:

  • (i) \(u \in \mathit{USC}([0,T) \times \mathcal{O})\) is a viscosity sub-solution of Eq. (6.122) if u(0, x) ≤ κ(x), \(x \in \mathcal{O}\) and

    $$\displaystyle{p + \Phi (t,x,u(t,x),q,X) \leq 0,\text{ for }(t,x) \in \mathcal{O}_{T},(p,q,X) \in \mathcal{P}_{\mathcal{O}}^{2,+}u(t,x).}$$
  • (ii) \(u \in \mathit{LSC}([0,T) \times \mathcal{O})\) is a viscosity super-solution of Eq. (6.122) if u(0, x) ≥ κ(x), \(x \in \mathcal{O}\) and

    $$\displaystyle{p + \Phi (t,x,u(t,x),q,X) \geq 0,\text{ for }(t,x) \in \mathcal{O}_{T},(p,q,X) \in \mathcal{P}_{\mathcal{O}}^{2,-}u(t,x).}$$
  • (iii) \(u \in C([0,T) \times \mathcal{O})\) is a viscosity solution of (6.122) if it is both a sub- and a super-solution.

We remark that u(t, x) solves the parabolic PDE (6.122) if and only if \(v(t,x) = e^{\lambda t}u(t,x)\) solves the same equation with \(\Phi \) replaced by \(\Phi +\lambda r\), which in the case where \(\Phi \) has the form (6.117) is proper iff \(r\mapsto \lambda r - F(t,x,r,q)\) is increasing for any (t, x, q). The fact that this is true for some λ is one of our standing assumptions on F for the existence and uniqueness of the solution to the associated BSDE.

Note that we also consider parabolic PDEs with a final condition (at time t = T) rather than an initial condition (at time t = 0). In that case, the equation becomes

$$\displaystyle{-\frac{\partial u} {\partial t} (t,x) + \Phi (t,x,u(t,x),\mathit{Du}(t,x),D^{2}u(t,x)) = 0,}$$

and the condition u(0, x) ≤ κ(x) (resp. u(0, x) ≥ κ(x)) becomes u(T, x) ≤ κ(x) (resp. u(T, x) ≥ κ(x)).

Finally we explain what we mean by a viscosity solution of the parabolic PDE

$$\displaystyle{\frac{\partial u} {\partial t} (t,x) + \Phi (t,x,u(t,x),\mathit{Du}(t,x),D^{2}u(t,x)) + \partial \varphi (u(t,x)) \ni 0,}$$

where \(\partial \varphi\) is the subdifferential of the convex lower semicontinuous function \(\varphi: \mathbb{R} \rightarrow (-\infty,+\infty ]\).

A sub-solution is a function \(u \in \mathit{USC}(\mathcal{O}_{T})\) such that for any \((t,x) \in \mathcal{O}_{T}\), \(u(t,x) \in \mathrm{ Dom}(\varphi )\) and whenever \((p,q,X) \in \mathcal{P}_{\mathcal{O}}^{2,+}u(t,x)\),

$$\displaystyle{p + \Phi (t,x,u(t,x),q,X) +\varphi _{ -}^{{\prime}}(u(t,x)) \leq 0,}$$

where \(\varphi _{-}^{{\prime}}(r)\) is the left derivative of \(\varphi\) at the point r. A super-solution is defined similarly with the usual changes, the left derivative of \(\varphi\) being replaced by its right derivative.

6.5.2 A First Uniqueness Result

Let \(\mathcal{O}\) be an open subset of \(\mathbb{R}^{d}\) and \(\Phi \in C(\mathcal{O}\times \mathbb{R} \times \mathbb{R}^{d} \times \mathbb{S}^{d})\).

The basic assumptions of this subsection are:

  • \(\left(A_{1}\right)\) Super-monotonicity: there exists a δ > 0 such that for all \(x \in \mathcal{O}\), \(p \in \mathbb{R}^{d}\), \(X \in \mathbb{S}^{d}\), \(r_{1},r_{2} \in \mathbb{R}\):

    $$\displaystyle{r_{1} \leq r_{2}\; \Rightarrow \; \Phi (x,r_{2},p,X) - \Phi (x,r_{1},p,X) \geq \left(r_{2} - r_{1}\right)\ \delta,}$$

    and

  • \(\left(A_{2}\right)\) Super-degenerate-ellipticity: for all R > 0 there exists an increasing function \(\mathbf{m}_{R}: \mathbb{R}_{+} \rightarrow \mathbb{R}_{+}\), \(\mathbf{m}_{R}\left(0+\right) = 0\) such that if α > 0, \(X,Y \in \mathbb{S}^{d}\) and

    $$\displaystyle{ \left(\begin{array}{cc} X & 0\\ 0 & - Y\end{array} \right) \leq 3\alpha \left(\begin{array}{cc} I & - I\\ - I & I\end{array} \right), }$$
    (6.123)

    or equivalently

    $$\displaystyle{\left\langle Xz,z\right\rangle -\left\langle Y w,w\right\rangle \leq 3\alpha \left\vert z - w\right\vert ^{2},\;\;\forall \ z,w \in \mathbb{R}^{d},}$$

    then for all \(x,y \in \mathcal{O}\cap \overline{B\left(0,R\right)}\), \(r \in \mathbb{R}\):

    $$\displaystyle{ \Phi (y,r,\alpha (x - y),Y ) - \Phi (x,r,\alpha (x - y),X) \leq \mathbf{m}_{R}\left(\left\vert x - y\right\vert +\alpha \left\vert x - y\right\vert ^{2}\right). }$$
    (6.124)

Note that if X and Y satisfy (6.123) then X ≤ Y (setting z = w).
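The implication can be verified numerically. The sketch below is our own example (d = 1, α = 1, X = 1, Y = 2 are assumed choices): it checks the block inequality (6.123) by an eigenvalue computation, the equivalent quadratic-form statement on random vectors, and the ordering obtained by setting z = w.

```python
import numpy as np

alpha = 1.0
X = np.array([[1.0]])
Y = np.array([[2.0]])
I = np.eye(1)

D = np.block([[X, 0 * I], [0 * I, -Y]])          # diag(X, -Y)
M = 3 * alpha * np.block([[I, -I], [-I, I]])

# (6.123): M - D must be positive semidefinite
assert np.linalg.eigvalsh(M - D).min() >= -1e-12

# equivalent quadratic-form statement, on random z, w
rng = np.random.default_rng(0)
for _ in range(1000):
    z, w = rng.normal(size=1), rng.normal(size=1)
    assert z @ X @ z - w @ Y @ w <= 3 * alpha * np.sum((z - w) ** 2) + 1e-9

# setting z = w gives <(X - Y)z, z> <= 0 for all z, i.e. X <= Y
assert np.linalg.eigvalsh(X - Y).max() <= 1e-12
```

In d = 1 the block inequality reduces to \(2z^{2} - 6\mathit{zw} + 5w^{2} \geq 0\) for this pair, a positive definite quadratic form, which is why the eigenvalue test passes with strict margin.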

In the particular case of the function \(\Phi \) given by (6.117), the super-monotonicity of \(\Phi \) is a consequence of the same property for − F. As for the super-degenerate-ellipticity, we have the following:

Lemma 6.97.

If g is globally Lipschitz, f is globally monotone, and − F satisfies (6.124), then \(\Phi \) is super-degenerate-elliptic.

Proof.

The global monotonicity of f implies that, for some constant \(\mu \in \mathbb{R}\),

$$\displaystyle{-\langle f(y) - f(x),\alpha (x - y)\rangle \leq \mu \alpha \vert x - y\vert ^{2}.}$$

Now consider the term involving g. We take advantage of (6.123) and the Lipschitz continuity of g:

$$\displaystyle\begin{array}{rcl} & \mathbf{Tr}\left[gg^{{\ast}}(x)X\right] -\mathbf{Tr}\left[gg^{{\ast}}(y)Y \right] = \mathbf{Tr}\left[g^{{\ast}}(x)Xg(x) - g^{{\ast}}(y)Y g(y)\right]& {}\\ & =\sum _{ i=1}^{d}\left[\langle Xg(x)e_{i},g(x)e_{i}\rangle -\langle Y g(y)e_{i},g(y)e_{i}\rangle \right] & {}\\ & \leq 3\alpha \sum _{i=1}^{d}\left\vert g(x)e_{i} - g(y)e_{i}\right\vert ^{2} & {}\\ & \leq C\alpha \vert x - y\vert ^{2}. & {}\\ \end{array}$$

 ■ 
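The algebraic identity behind the trace computation above, \(\mathbf{Tr}\left[gg^{{\ast}}X\right] =\sum _{i}\langle Xge_{i},ge_{i}\rangle\) where the sum runs over the columns \(ge_{i}\) of g, can be checked on random data. The sketch below is our own; the dimensions d = 3, k = 2 are an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 3, 2
g = rng.normal(size=(d, k))       # plays the role of g(x)
S = rng.normal(size=(d, d))
X = (S + S.T) / 2                 # symmetric, plays the role of X

# Tr[g g* X] = Tr[g* X g] = sum over columns of <X g e_i, g e_i>
lhs = np.trace(g @ g.T @ X)
rhs = sum(g[:, i] @ X @ g[:, i] for i in range(k))
assert abs(lhs - rhs) < 1e-12
```

This is just the cyclic invariance of the trace written columnwise; it is what turns the matrix inequality (6.123) into a sum of scalar inequalities in the proof.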

Theorem 6.98 (Comparison Principle).

Let \(\mathcal{O}\) be a bounded open subset of \(\mathbb{R}^{d}\) and assume that \(\Phi \,: \mathcal{O}\times \mathbb{R} \times \mathbb{R}^{d} \times \mathbb{S}^{d} \rightarrow \mathbb{R}\) satisfies \(\left(A_{1}\right)\) and \(\left(A_{2}\right)\). If

$$\displaystyle{\begin{array}{r@{\quad }l} \left(j\right)\;\quad &u \in \mathit{USC}\left(\overline{\mathcal{O}}\right)\text{ is a sub-solution of }\Phi = 0\text{ in }\mathcal{O}, \\ \left(\,\mathit{jj}\right)\;\quad &v \in \mathit{LSC}(\overline{\mathcal{O}})\text{ is a super-solution of }\Phi = 0\text{ in }\mathcal{O}, \\ \left(\,\mathit{jjj}\right)\;\quad &u\left(x\right) \leq v\left(x\right),\;\;\forall \ x \in \partial \mathcal{O}, \end{array} }$$

then

$$\displaystyle{u\left(x\right) \leq v\left(x\right)\;\;\forall \ x \in \overline{\mathcal{O}}.}$$

We first prove auxiliary results.

Lemma 6.99.

Given \(u,v \in C(\bar{\mathcal{O}})\) , α > 0, we define

$$\displaystyle{\psi _{\alpha }(x,y) = u(x) - v(y) - \frac{\alpha } {2}\vert x - y\vert ^{2}.}$$

Let \((\hat{x},\hat{y})\) be a local maximum in \(\mathcal{O}\times \mathcal{O}\) of ψ α. Then there exist \(X,Y \in \mathbb{S}^{d}\) such that

  1. (j)

    \((\alpha (\hat{x} -\hat{ y}),X) \in \bar{ J}_{\mathcal{O}}^{2,+}u(\hat{x})\),

  2. (jj)

    \((\alpha (\hat{x} -\hat{ y}),Y ) \in \bar{ J}_{\mathcal{O}}^{2,-}v(\hat{y})\),

  3. (jjj)

    \(\left(\begin{array}{lr} X & 0\\ 0 & - Y\end{array} \right) \leq 3\alpha \left(\begin{array}{rr} I & - I\\ - I & I\end{array} \right)\) .

Proof.

We shall use the notation

$$\displaystyle{A =\alpha \left(\begin{array}{rr} I & - I\\ - I & I \end{array} \right).}$$

It is sufficient to prove the proposition in the case where \(\mathcal{O} = \mathbb{R}^{d}\), \(\hat{x} =\hat{ y} = 0\), u(0) = v(0) = 0, (0, 0) is a global maximum of ψ α , and u, − v are bounded from above. Hence we may assume that for all \(x,y \in \mathbb{R}^{d}\),

$$\displaystyle{ u(x) - v(y) \leq \frac{1} {2}\bigg\langle A\binom{x}{y},\binom{x}{y}\bigg\rangle, }$$
(6.125)

and we need to show that there exist \(X,Y \in \mathbb{S}^{d}\) such that

  1. ( j’)

    \((0,X) \in \bar{ J}^{2,+}u(0)\),

  2. ( jj’)

    \((0,Y ) \in \bar{ J}^{2,-}v(0)\),

  3. ( jjj’)

    \(\left(\begin{array}{lr} X & 0\\ 0 & - Y\end{array} \right) \leq 3A\).

With the notations \(\bar{x} = \left(\begin{array}{c} x\\ y \end{array} \right)\), \(\bar{\xi }= \left(\begin{array}{c} \xi \\ \eta \end{array} \right)\), we deduce from Schwarz’s inequality that (with the notation \(\Vert A\Vert \mathop{ =}\limits^{ \mathit{def }}\sup \{\vert \left\langle A\bar{\xi },\bar{\xi }\right\rangle \vert;\vert \bar{\xi }\vert \leq 1\}\)):

$$\displaystyle\begin{array}{rcl} \left\langle A\bar{x},\bar{x}\right\rangle & =& \left\langle A\bar{\xi },\bar{\xi }\right\rangle + \left\langle A(\bar{x}-\bar{\xi }),\bar{x}-\bar{\xi }\right\rangle + 2\left\langle \bar{x}-\bar{\xi },A\bar{\xi }\right\rangle {}\\ & \leq & \left\langle A\bar{\xi },\bar{\xi }\right\rangle + \frac{1} {\alpha } \vert A\bar{\xi }\vert ^{2} + (\alpha +\Vert A\Vert )\vert \bar{x} -\bar{\xi }\vert ^{2} {}\\ & \leq & \left\langle (A + \frac{1} {\alpha } A^{2})\bar{\xi },\bar{\xi }\right\rangle + (\alpha +\Vert A\Vert )\vert \bar{x} -\bar{\xi }\vert ^{2}. {}\\ \end{array}$$

Hence if \(B\mathop{ =}\limits^{ \mathit{def }}3A = A + \frac{1} {\alpha } A^{2}\), \(\lambda \mathop{=}\limits^{ \mathit{def }}\alpha +\Vert A\Vert\), and \(w(\bar{x})\mathop{ =}\limits^{ \mathit{def }}u(x) - v(y)\), (6.125) implies

$$\displaystyle{ w(\bar{x}) - \frac{\lambda } {2}\vert \bar{x} -\bar{\xi }\vert ^{2} \leq \frac{1} {2}\left\langle B\bar{\xi },\bar{\xi }\right\rangle. }$$
(6.126)

We now introduce inf- and sup-convolutions. Let

$$\displaystyle\begin{array}{rcl} \hat{w}(\bar{\xi })& \mathop{=}\limits^{ \mathit{def }}& \sup _{\bar{x}}(w(\bar{x}) - \frac{\lambda } {2}\vert \bar{x} -\bar{\xi }\vert ^{2}) {}\\ & = & \hat{u}(\xi ) -\hat{ v}(\eta ), {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} \hat{u}(\xi )& =& \sup _{x}(u(x) - \frac{\lambda } {2}\vert x -\xi \vert ^{2}), {}\\ \hat{v}(\eta )& =& \inf _{y}(v(y) + \frac{\lambda } {2}\vert y -\eta \vert ^{2}). {}\\ \end{array}$$

Since a supremum (resp. an infimum) of convex (resp. concave) functions is convex (resp. concave), the mappings

$$\displaystyle{\bar{\xi }\rightarrow \hat{ w}(\bar{\xi }) + \frac{\lambda } {2}\vert \bar{\xi }\vert ^{2},\quad \text{ and }\quad \xi \rightarrow \hat{ u}(\xi ) + \frac{\lambda } {2}\vert \xi \vert ^{2}}$$

are convex, while

$$\displaystyle{\eta \rightarrow \hat{ v}(\eta ) - \frac{\lambda } {2}\vert \eta \vert ^{2}}$$

is concave. Hence \(\hat{w}\), \(\hat{u}\) and \(-\hat{v}\) are “semi-convex”, i.e. they are the sum of a convex function and a function of class \(C^{2}\). Note that the hyphen is here on purpose, in order to distinguish this notion from the notion of semiconvex functions, as introduced in Sect. 4.3.
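Sup-convolution is easy to experiment with numerically. The sketch below is our own illustration (the example u = sin and λ = 1 are assumed choices): it computes \(\hat{u}\) by brute-force maximization on a grid and checks that \(\hat{u} \geq u\) and that \(\xi \mapsto \hat{u}(\xi ) + \frac{\lambda } {2}\vert \xi \vert ^{2}\) is convex, being a supremum of affine functions of ξ.

```python
import numpy as np

lam = 1.0
x = np.linspace(-10, 10, 20001)
u = np.sin(x)

xi = np.linspace(-2, 2, 401)
# u_hat(xi) = sup_x ( u(x) - (lam/2)|x - xi|^2 ), by brute force over the x-grid
u_hat = np.array([np.max(u - 0.5 * lam * (x - t) ** 2) for t in xi])

# taking x = xi in the sup shows u_hat >= u
assert np.all(u_hat >= np.sin(xi) - 1e-9)

# u_hat(xi) + (lam/2)|xi|^2 = sup_x ( u(x) + lam<x,xi> - (lam/2)|x|^2 )
# is a sup of affine functions of xi, hence convex:
v = u_hat + 0.5 * lam * xi ** 2
second_diff = v[2:] - 2 * v[1:-1] + v[:-2]
assert second_diff.min() >= -1e-5       # nonnegative up to grid error
```

Increasing λ makes \(\hat{u}\) track u more closely (the quadratic penalty localizes the supremum), at the price of a larger semi-convexity constant; this is the trade-off exploited in the proof.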

Moreover:

$$\displaystyle{\hat{w}(0) \geq w(0) = 0,}$$

and from (6.126)

$$\displaystyle{\hat{w}(\bar{\xi }) \leq \frac{1} {2}\left\langle B\bar{\xi },\bar{\xi }\right\rangle,}$$

hence

$$\displaystyle{\hat{w}(0) \leq 0,}$$

and consequently

$$\displaystyle{\hat{w}(0) =\max \nolimits _{\bar{\xi }}\left(\hat{w}(\bar{\xi }) -\frac{1} {2}\left\langle B\bar{\xi },\bar{\xi }\right\rangle \right).}$$

If \(\hat{w}\) were smooth, we could deduce that there exists an \(\mathcal{X} \in \mathbb{S}^{2d}\) such that \((0,\mathcal{X}) \in J^{2}\hat{w}(0)\) and \(\mathcal{X} \leq B\). Since \(\hat{w}\) is only semi-convex, one invokes Alexandrov’s theorem (which says that a semi-convex function is a.e. twice differentiable) together with a lemma due to R. Jensen; these show that the above statement remains essentially true, provided the first condition is weakened to \((0,\mathcal{X}) \in \bar{ J}^{2}\hat{w}(0)\). We refer to the user’s guide [18] for more details. Now, since \(\hat{w}(\bar{\xi }) =\hat{ u}(\xi ) -\hat{ v}(\eta )\), it is not hard to deduce that \(\mathcal{X} = \left(\begin{array}{lr} X & 0\\ 0 & - Y \end{array} \right)\), with \((0,X) \in \bar{ J}^{2}\hat{u}(0)\) and \((0,Y ) \in \bar{ J}^{2}\hat{v}(0)\).

The magical property of sup-convolution is that this is enough to conclude that \((0,X) \in \bar{ J}^{2,+}u(0)\) and \((0,Y ) \in \bar{ J}^{2,-}v(0)\), which is a consequence of the next Lemma. ■ 

Lemma 6.100.

Let λ > 0, \(u \in C(\mathbb{R}^{d})\) be bounded from above, and

$$\displaystyle{\hat{u}(\zeta ) =\sup _{x\in \mathbb{R}^{d}}(u(x) - \frac{\lambda } {2}\vert x -\zeta \vert ^{2}).}$$

If \(\eta,q \in \mathbb{R}^{d}\) , \(X \in \mathbb{S}^{d}\) and \((q,X) \in J^{2,+}\hat{u}(\eta )\) , then \((q,X) \in J^{2,+}u(\eta +q/\lambda )\).

Proof.

We assume that \((q,X) \in J^{2,+}\hat{u}(\eta )\). Let \(y \in \mathbb{R}^{d}\) be such that

$$\displaystyle{\hat{u}(\eta ) = u(y) - \frac{\lambda } {2}\vert y -\eta \vert ^{2}.}$$

Then for any \(x,\zeta \in \mathbb{R}^{d}\),

$$\displaystyle\begin{array}{rcl} u(x) - \frac{\lambda } {2}\vert x -\zeta \vert ^{2}& \leq & \hat{u}(\zeta ) {}\\ & \leq & \hat{u}(\eta ) + \left\langle q,\zeta -\eta \right\rangle + \frac{1} {2}\left\langle X(\zeta -\eta ),\zeta -\eta \right\rangle + o(\vert \zeta -\eta \vert ^{2}) {}\\ & =& u(y) - \frac{\lambda } {2}\vert y -\eta \vert ^{2} + \left\langle q,\zeta -\eta \right\rangle {}\\ & & +\frac{1} {2}\left\langle X(\zeta -\eta ),\zeta -\eta \right\rangle + o(\vert \zeta -\eta \vert ^{2}) {}\\ & =& u(y) - \frac{\lambda } {2}\vert y -\eta \vert ^{2} + \left\langle q,\zeta -\eta \right\rangle + O(\vert \zeta -\eta \vert ^{2}). {}\\ \end{array}$$

If we choose \(\zeta = x - y +\eta\), then we deduce from the above that

$$\displaystyle{u(x) \leq u(y) + \left\langle q,x - y\right\rangle + \frac{1} {2}\left\langle X(x - y),x - y\right\rangle + o(\vert x - y\vert ^{2}).}$$

On the other hand, choosing x = y and \(\zeta =\eta +\alpha (\lambda (\eta -y) + q)\), we obtain that

$$\displaystyle{0 \leq \alpha \vert \lambda (\eta -y) + q\vert ^{2} + O(\alpha ^{2}).}$$

The first inequality says that \((q,X) \in J^{2,+}u(y)\), while the second, with α < 0 small enough in absolute value, implies that \(y =\eta +\frac{q} {\lambda }\). The result is proved. ■ 
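As a purely numerical illustration of the sup-convolution (the choices \(u(x) = -\vert x\vert\) and λ = 2 below are ours, for illustration only), the following sketch computes \(\hat{u}\) by brute force on a grid and checks that \(\hat{u} \geq u\) and that \(z\mapsto \hat{u}(z) + \frac{\lambda }{2}z^{2}\) is convex (being a supremum of affine functions of z), which is exactly the semiconvexity exploited above:

```python
lam = 2.0
def u(x): return -abs(x)          # bounded from above, merely Lipschitz

xs = [i * 0.005 - 5.0 for i in range(2001)]   # inner maximization grid
zs = [j * 0.05 - 3.0 for j in range(121)]     # evaluation points

def u_hat(z):
    # sup-convolution: u_hat(z) = sup_x ( u(x) - lam/2 * |x - z|^2 )
    return max(u(x) - 0.5 * lam * (x - z) ** 2 for x in xs)

vals = [u_hat(z) for z in zs]

# (1) u_hat >= u (take x = z in the supremum)
assert all(v >= u(z) - 1e-9 for v, z in zip(vals, zs))

# (2) z -> u_hat(z) + lam/2 * z^2 is a max of affine functions of z,
#     hence convex; check discrete second differences are nonnegative.
w = [v + 0.5 * lam * z * z for v, z in zip(vals, zs)]
assert all(w[i-1] - 2*w[i] + w[i+1] >= -1e-8 for i in range(1, len(w) - 1))
```

In particular \(\hat{u}\) acquires one-sided second-order bounds even though u itself is only continuous.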

We shall also need the following:

Lemma 6.101.

Let \(\mathcal{O}\) be a locally closed subset of \(\mathbb{R}^{d}\), \(\Phi \in \mathit{USC}(\mathcal{O})\), \(\varPsi \in \mathit{LSC}(\mathcal{O})\) , Ψ ≥ 0, \(\varepsilon > 0\) and

$$\displaystyle{M_{\varepsilon } =\sup _{x\in \mathcal{O}}\left\{\Phi \left(x\right) -\frac{1} {\varepsilon } \varPsi \left(x\right)\right\}.}$$

If \(\lim \limits _{\varepsilon \rightarrow 0}M_{\varepsilon }\) exists in \(\mathbb{R}\) and \(x_{\varepsilon } \in \mathcal{O}\) satisfies

$$\displaystyle{\lim \limits _{\varepsilon \rightarrow 0}\left[M_{\varepsilon } - \Phi (x_{\varepsilon }) + \frac{1} {\varepsilon } \varPsi (x_{\varepsilon })\right] = 0,}$$

then

$$\displaystyle{ \lim \limits _{\varepsilon \rightarrow 0}\frac{\varPsi (x_{\varepsilon })} {\varepsilon } = 0. }$$
(6.127)

Moreover if \(\hat{x} \in \mathcal{O}\) and there exists an \(\varepsilon _{n} \rightarrow 0\) such that \(x_{\varepsilon _{n}} \rightarrow \hat{ x}\) , then

$$\displaystyle{ \varPsi (\hat{x}) = 0,\;\text{ and}\;\lim \limits _{\varepsilon \rightarrow 0}M_{\varepsilon } = \Phi (\hat{x}) =\sup \left\{\Phi \left(x\right): x \in \mathcal{O},\varPsi \left(x\right) = 0\right\}. }$$
(6.128)

Proof.

Let \(\alpha _{\varepsilon } = M_{\varepsilon } - \Phi (x_{\varepsilon }) + \dfrac{1} {\varepsilon } \varPsi (x_{\varepsilon })\). Note that for \(0 <\varepsilon <\delta\) we have \(M_{\varepsilon } \leq M_{\delta }\) and

$$\displaystyle{M_{2\varepsilon } \geq \Phi \left(x_{\varepsilon }\right) -\frac{1} {2\varepsilon }\varPsi \left(x_{\varepsilon }\right) = M_{\varepsilon } -\alpha _{\varepsilon } + \frac{1} {2\varepsilon }\varPsi \left(x_{\varepsilon }\right).}$$

Then

$$\displaystyle{\frac{\varPsi (x_{\varepsilon })} {\varepsilon } \leq 2\left(M_{2\varepsilon } - M_{\varepsilon } +\alpha _{\varepsilon }\right)}$$

and (6.127) follows. Moreover by the lower semicontinuity of Ψ

$$\displaystyle{0 \leq \varPsi (\hat{x}) \leq \liminf _{\varepsilon _{n}\rightarrow 0}\varPsi (x_{\varepsilon _{n}}) = 0.}$$

Using now the upper semicontinuity of \(\Phi \) we have

$$\displaystyle\begin{array}{rcl} \Phi (\hat{x})& \geq & \limsup _{\varepsilon _{n}\rightarrow 0}\Phi (x_{\varepsilon _{n}}) {}\\ & =& \limsup _{\varepsilon _{n}\rightarrow 0}\left[M_{\varepsilon _{n}} -\alpha _{\varepsilon _{n}} + \frac{1} {\varepsilon _{n}}\varPsi \left(x_{\varepsilon _{n}}\right)\right] {}\\ & =& \lim _{\varepsilon \rightarrow 0}M_{\varepsilon } {}\\ & \geq & \sup \left\{\Phi \left(x\right): x \in \mathcal{O},\varPsi \left(x\right) = 0\right\} {}\\ & \geq & \Phi \left(\hat{x}\right). {}\\ \end{array}$$

The result follows. ■ 
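A numerical illustration of Lemma 6.101, with hypothetical choices \(\Phi (x) = -(x-1)^{2}\) (USC), \(\varPsi (x) = x^{2} \geq 0\) (LSC) and \(\mathcal{O} = [-2,2]\): here the exact maximizer is \(x_{\varepsilon } = \varepsilon /(1+\varepsilon )\), so \(\varPsi (x_{\varepsilon })/\varepsilon \rightarrow 0\) as in (6.127), and \(M_{\varepsilon } \rightarrow \Phi (0) = -1 = \sup \{\Phi (x):\varPsi (x) = 0\}\) as in (6.128):

```python
def Phi(x): return -(x - 1.0) ** 2     # USC function to maximize
def Psi(x): return x * x               # nonnegative LSC penalty

xs = [i * 1e-4 - 2.0 for i in range(40001)]   # grid on O = [-2, 2]

def maximize(eps):
    # maximizer of Phi(x) - Psi(x)/eps over the grid
    return max(xs, key=lambda x: Phi(x) - Psi(x) / eps)

for eps in (1e-1, 1e-2, 1e-3):
    x_eps = maximize(eps)
    # exact maximizer is eps/(1+eps), so Psi(x_eps)/eps ~ eps -> 0  (6.127)
    assert Psi(x_eps) / eps <= 2 * eps + 1e-3
    M_eps = Phi(x_eps) - Psi(x_eps) / eps
    # M_eps -> Phi(0) = sup{Phi : Psi = 0} = -1                     (6.128)
    assert abs(M_eps - Phi(0.0)) <= 3 * eps
```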

Proof of the comparison principle.

Assume that

$$\displaystyle{M\mathop{ =}\limits^{ \mathit{def }}\sup _{x\in \overline{\mathcal{O}}}\left\{u\left(x\right) - v\left(x\right)\right\} > 0.}$$

Let \(\varepsilon > 0\) and

$$\displaystyle{M_{\varepsilon }\mathop{ =}\limits^{ \mathit{def }}\sup _{\left(x,y\right)\in \overline{\mathcal{O}}\times \overline{\mathcal{O}}}\left[u(x) - v(y) -\dfrac{1} {2\varepsilon }\left\vert x - y\right\vert ^{2}\right].}$$

Clearly for \(\delta >\varepsilon\), \(M_{\delta } \geq M_{\varepsilon } \geq u(x) - v(x)\), \(\forall \ x \in \overline{\mathcal{O}}\) and consequently \(M_{\varepsilon }\) converges in \(\mathbb{R}\) as \(\varepsilon \rightarrow 0\),

$$\displaystyle{M_{\varepsilon } \geq M > 0\;\;\text{ and }\lim _{\varepsilon \rightarrow 0}M_{\varepsilon } \geq M.}$$

Since \(\overline{\mathcal{O}}\) is compact and (x, y)↦u(x) − v(y) is upper semicontinuous on \(\overline{\mathcal{O}}\times \overline{\mathcal{O}}\), there exists \((x_{\varepsilon },y_{\varepsilon }) \in \overline{\mathcal{O}}\times \overline{\mathcal{O}}\) such that

$$\displaystyle{u(x_{\varepsilon }) - v(y_{\varepsilon }) -\dfrac{1} {2\varepsilon }\left\vert x_{\varepsilon } - y_{\varepsilon }\right\vert ^{2} = M_{\varepsilon }.}$$

By Lemma 6.101, with \(\Phi (x,y) = u(x) - v(y)\) and \(\varPsi (x,y) = \dfrac{1} {2}\left\vert x - y\right\vert ^{2}\) we obtain

$$\displaystyle{\lim \limits _{\varepsilon \rightarrow 0}\dfrac{1} {\varepsilon } \left\vert x_{\varepsilon } - y_{\varepsilon }\right\vert ^{2} = 0.}$$

We now conclude that there exists an \(\varepsilon _{0} > 0\) such that

$$\displaystyle{x_{\varepsilon },y_{\varepsilon } \in \mathcal{O},\;\text{ for all }0 <\varepsilon \leq \varepsilon _{0}.}$$

Indeed, \(u\left(x\right) \leq v\left(x\right)\) for all \(x \in \partial \mathcal{O}\), while whenever \(\varepsilon _{n} \rightarrow 0\) and \(x_{\varepsilon _{n}},y_{\varepsilon _{n}} \rightarrow \hat{ x}\), it follows from (6.128) that

$$\displaystyle\begin{array}{rcl} \lim \limits _{\varepsilon \rightarrow 0}M_{\varepsilon }& =& u\left(\hat{x}\right) - v\left(\hat{x}\right) {}\\ & =& \sup \left\{\Phi \left(x,y\right): \left(x,y\right) \in \overline{\mathcal{O}}\times \overline{\mathcal{O}},\varPsi \left(x,y\right) = 0\right\} {}\\ & =& \sup _{x\in \overline{\mathcal{O}}}\left\{u\left(x\right) - v\left(x\right)\right\} {}\\ & >& 0. {}\\ \end{array}$$

By Lemma 6.99, for \(0 <\varepsilon \leq \varepsilon _{0}\) there exist \(X_{\varepsilon }\), \(Y _{\varepsilon } \in \mathbb{S}^{d}\) such that

$$\displaystyle{ \begin{array}{l} \left(\dfrac{1} {\varepsilon } \varPsi _{x}^{{\prime}}(x_{\varepsilon },y_{\varepsilon }),X_{\varepsilon }\right) \in \overline{J}_{\mathcal{O}}^{2,+}u(x_{\varepsilon }),\;\;\text{ and} \\ \left(-\dfrac{1} {\varepsilon } \varPsi _{y}^{{\prime}}(x_{\varepsilon },y_{\varepsilon }),Y _{\varepsilon }\right) \in \overline{J}_{\mathcal{O}}^{2,-}v(y_{\varepsilon }) \end{array} }$$
(6.129)

and the inequality (jjj) in Lemma 6.99 reads here

$$\displaystyle{\left(\begin{array}{cc} X_{\varepsilon }& 0\\ 0 & - Y _{\varepsilon } \end{array} \right) \leq \frac{3} {\varepsilon } \left(\begin{array}{cc} I & - I\\ - I & I \end{array} \right).}$$

Let R > 0 be such that \(\mathcal{O}\subset B\left(0,R\right)\). From \(\left(A_{2}\right)\) with \(\alpha =\varepsilon ^{-1}\), we deduce that

$$\displaystyle\begin{array}{rcl} & & \Phi \left(y_{\varepsilon },v(y_{\varepsilon }), \dfrac{x_{\varepsilon } - y_{\varepsilon }} {\varepsilon },Y _{\varepsilon }\right) - \Phi \left(x_{\varepsilon },v(y_{\varepsilon }), \dfrac{x_{\varepsilon } - y_{\varepsilon }} {\varepsilon },X_{\varepsilon }\right) {}\\ & & \leq \mathbf{m}_{R}\left(\left\vert x_{\varepsilon } - y_{\varepsilon }\right\vert + \dfrac{1} {\varepsilon } \left\vert x_{\varepsilon } - y_{\varepsilon }\right\vert ^{2}\right), {}\\ \end{array}$$

and since \(u(x_{\varepsilon }) > v(y_{\varepsilon })\) for \(\varepsilon\) small enough, we deduce from \(\left(A_{1}\right)\) that

$$\displaystyle\begin{array}{rcl} & & \Phi \left(x_{\varepsilon },v(y_{\varepsilon }), \dfrac{x_{\varepsilon } - y_{\varepsilon }} {\varepsilon },X_{\varepsilon }\right) - \Phi \left(x_{\varepsilon },u(x_{\varepsilon }), \dfrac{x_{\varepsilon } - y_{\varepsilon }} {\varepsilon },X_{\varepsilon }\right) {}\\ & & \leq \delta \left[v(y_{\varepsilon }) - u(x_{\varepsilon })\right] {}\\ & & = -\delta \left[\dfrac{1} {2\varepsilon }\left\vert x_{\varepsilon } - y_{\varepsilon }\right\vert ^{2} + M_{\varepsilon }\right]. {}\\ \end{array}$$

It follows that

$$\displaystyle\begin{array}{rcl} & & \Phi \left(y_{\varepsilon },v(y_{\varepsilon }), \frac{x_{\varepsilon } - y_{\varepsilon }} {\varepsilon },Y _{\varepsilon }\right) - \Phi \left(x_{\varepsilon },u(x_{\varepsilon }), \frac{x_{\varepsilon } - y_{\varepsilon }} {\varepsilon },X_{\varepsilon }\right) {}\\ & & \leq \mathbf{m}_{R}\left(\left\vert x_{\varepsilon } - y_{\varepsilon }\right\vert + \frac{1} {\varepsilon } \left\vert x_{\varepsilon } - y_{\varepsilon }\right\vert ^{2}\right) -\delta \left[\dfrac{1} {2\varepsilon }\left\vert x_{\varepsilon } - y_{\varepsilon }\right\vert ^{2} + M_{\varepsilon }\right]. {}\\ \end{array}$$

Since u is a viscosity sub-solution and v is a viscosity super-solution of the equation \(\Phi = 0\), we deduce from (6.129) that

$$\displaystyle{\Phi \left(x_{\varepsilon },u(x_{\varepsilon }), \frac{x_{\varepsilon } - y_{\varepsilon }} {\varepsilon },X_{\varepsilon }\right) \leq 0 \leq \Phi \left(y_{\varepsilon },v(y_{\varepsilon }), \frac{x_{\varepsilon } - y_{\varepsilon }} {\varepsilon },Y _{\varepsilon }\right).}$$

Hence

$$\displaystyle{0 \leq \mathbf{m}_{R}\left(\left\vert x_{\varepsilon } - y_{\varepsilon }\right\vert + \frac{1} {\varepsilon } \left\vert x_{\varepsilon } - y_{\varepsilon }\right\vert ^{2}\right) -\delta \left[\dfrac{1} {2\varepsilon }\left\vert x_{\varepsilon } - y_{\varepsilon }\right\vert ^{2} + M_{\varepsilon }\right],}$$

then also

$$\displaystyle{0 <\delta \ M \leq \delta \ M_{\varepsilon } \leq \mathbf{m}_{R}\left(\left\vert x_{\varepsilon } - y_{\varepsilon }\right\vert + \frac{1} {\varepsilon } \left\vert x_{\varepsilon } - y_{\varepsilon }\right\vert ^{2}\right) - \dfrac{\delta } {2\varepsilon }\left\vert x_{\varepsilon } - y_{\varepsilon }\right\vert ^{2}}$$

and letting \(\varepsilon \rightarrow 0\), we infer the contradiction

$$\displaystyle{0 <\delta \ M \leq 0.}$$

The Theorem is established. ■ 

We deduce from this theorem the uniqueness of the viscosity solution for the Dirichlet problem.

Corollary 6.102.

Under the assumptions of Theorem 6.98, if \(u,v \in C\left(\overline{\mathcal{O}}\right)\) are two viscosity solutions of \(\Phi = 0\) on \(\mathcal{O}\) then

$$\displaystyle{u\left(x\right) = v\left(x\right),\;\forall \ x \in \partial \mathcal{O}\;\;\;\Longrightarrow\;\;\;u\left(x\right) = v\left(x\right),\;\forall \ x \in \overline{\mathcal{O}}.}$$

This Corollary proves that our probabilistic formula provides the unique solution of the corresponding elliptic PDE, satisfying the Dirichlet boundary condition in the classical sense. However it follows from Theorem 7.9 in [18] that it is also the unique solution in the larger class of those solutions satisfying the Dirichlet boundary condition in the (relaxed) viscosity sense.

Let us now indicate how the above proof can be modified, in order to treat the case of a parabolic PDE with Dirichlet condition at the boundary of a bounded set.

Let \(\mathcal{O}\) be a bounded open subset of \(\mathbb{R}^{d}\). Consider the Cauchy–Dirichlet problem

$$\displaystyle{ \left\{\begin{array}{l} \dfrac{\partial u} {\partial t} + \Phi (t,x,u,u_{x}^{{\prime}},u_{xx}^{{\prime\prime}}) = 0\;\;\text{ in }\left]0,T\right[ \times \mathcal{O}, \\ u(t,x) =\kappa (t,x),\;\;\left(t,x\right) \in \left]0,T\right[ \times \partial \mathcal{O}, \\ u(0,x) =\kappa \left(0,x\right)\;\;x \in \overline{\mathcal{O}}, \end{array} \right. }$$
(6.130)

where \(\kappa \in C([0,T[\times \overline{\mathcal{O}})\).

The notion of a viscosity solution to (6.130) is expressed as in Definition 6.96, adding the requirement u(t, x) ≤ κ(t, x) (resp. ≥ ) for \((t,x) \in (0,T) \times \partial \mathcal{O}\) for u to be a sub-solution (resp. a super-solution).

We have the comparison principle:

Theorem 6.103.

Let \(\Phi \in C(\left[0,T\right] \times \overline{\mathcal{O}}\times \mathbb{R} \times \mathbb{R}^{d} \times \mathbb{S}^{d})\) be a proper function satisfying \(\left(A_{1}\right)\) and \(\left(A_{2}\right)\) for each fixed \(t \in \left[0,T\right[\) , with the same δ and \(\mathbf{m}_{R}\). If \(u \in \mathit{USC}(\left[0,T\right) \times \overline{\mathcal{O}})\) is a viscosity sub-solution of (6.130) and \(v \in \mathit{LSC}(\left[0,T\right) \times \overline{\mathcal{O}})\) is a viscosity super-solution of (6.130) then

$$\displaystyle{u\left(t,x\right) \leq v\left(t,x\right),\;\;\text{ for all }\left(t,x\right) \in \left[0,T\right) \times \overline{\mathcal{O}}.}$$

An essential tool for the proof of this Theorem is the parabolic analog of Lemma 6.99, which is as follows:

Lemma 6.104.

Given \(u,v \in C(\mathcal{O}_{T})\) , α > 0, let

$$\displaystyle{\psi _{\alpha }(t,x,y) = u(t,x) - v(t,y) - \frac{\alpha } {2}\vert x - y\vert ^{2}.}$$

Let \((\hat{t},\hat{x},\hat{y})\) be a local maximum of ψ α in \((0,T) \times \mathcal{O}\times \mathcal{O}\). Suppose moreover that there is an r > 0 such that for every M > 0 there is a C with the property that whenever \((p,q,X) \in \mathcal{P}_{\mathcal{O}}^{2,+}u(t,x)\), \(\vert x -\hat{ x}\vert + \vert t -\hat{ t}\vert \leq r\) and |u(t,x)| + |q| + |X|≤ M, then p ≤ C, and the same is true if we replace \(\mathcal{P}_{\mathcal{O}}^{2,+}u(t,x)\) by \(-\mathcal{P}_{\mathcal{O}}^{2,-}v(t,x)\). Then there exist \(p \in \mathbb{R}\), \(X,Y \in \mathbb{S}^{d}\) such that

  1. (j)

    \((p,\alpha (\hat{x} -\hat{ y}),X) \in \overline{\mathcal{P}}_{\mathcal{O}}^{2,+}u(\hat{t},\hat{x})\),

  2. (jj)

    \((-p,\alpha (\hat{x} -\hat{ y}),Y ) \in \overline{\mathcal{P}}_{\mathcal{O}}^{2,-}v(\hat{t},\hat{y})\),

  3. (jjj)

    \(\left(\begin{array}{lr} X & 0\\ 0 & - Y\end{array} \right) \leq 3\alpha \left(\begin{array}{rr} I & - I\\ - I & I\end{array} \right)\) .

Proof of the Theorem.

We only sketch the proof. We first observe that it suffices to prove that \(\tilde{u}(t,x) = u(t,x) -\varepsilon /(T - t) \leq v(t,x)\) for all \((t,x) \in (0,T) \times \mathcal{O}\) and all \(\varepsilon > 0\). Now \(\tilde{u}\) satisfies

$$\displaystyle{\left\{\begin{array}{l} \dfrac{\partial \tilde{u}} {\partial t} (t,x) + \Phi (t,x,\tilde{u}(t,x),D\tilde{u}(t,x),D^{2}\tilde{u}(t,x)) \leq - \dfrac{\varepsilon } {(T - t)^{2}}, \\ \lim \limits _{t\rightarrow T}\tilde{u}(t,x) = -\infty.\end{array} \right.}$$

From now on we write u instead of \(\tilde{u}\). We want to contradict the assumption that \(\max _{(0,T)\times \mathcal{O}}[u - v] =\delta > 0\). Let \((\hat{t},\hat{x},\hat{y})\) be a local maximum of ψ α (t, x, y) from Lemma 6.104, and write

$$\displaystyle{M_{\alpha } = u(\hat{t},\hat{x}) - v(\hat{t},\hat{y}) - \frac{\alpha } {2}\vert \hat{x} -\hat{ y}\vert ^{2}.}$$

From our standing assumption, \(M_{\alpha } \geq \delta > 0\). It is not hard to show that for α large enough, \(0 <\hat{ t} < T\) and \(\hat{x},\hat{y} \in \mathcal{O}\). Arguing as in the proof of Theorem 6.98, with the help this time of Lemma 6.104, we conclude that there exist \(p \in \mathbb{R}\), \(X,Y \in \mathbb{S}^{d}\) and c > 0 such that

$$\displaystyle\begin{array}{rcl} p + \Phi (\hat{t},\hat{x},u(\hat{t},\hat{x}),\alpha (\hat{x} -\hat{ y}),X)& \leq & -c, {}\\ -p + \Phi (\hat{t},\hat{y},v(\hat{t},\hat{y}),\alpha (\hat{x} -\hat{ y}),Y )& \geq & 0, {}\\ \end{array}$$

while

$$\displaystyle{\left(\begin{array}{lr} X & 0\\ 0 & - Y \end{array} \right) \leq 3\alpha \left(\begin{array}{rr} I & - I\\ - I & I \end{array} \right).}$$

We deduce that

$$\displaystyle\begin{array}{rcl} c& \leq & \Phi (\hat{t},\hat{y},v(\hat{t},\hat{y}),\alpha (\hat{x} -\hat{ y}),Y ) - \Phi (\hat{t},\hat{x},u(\hat{t},\hat{x}),\alpha (\hat{x} -\hat{ y}),X) {}\\ & \leq & m(\alpha \vert \hat{x} -\hat{ y}\vert ^{2} + \vert \hat{x} -\hat{ y}\vert ), {}\\ \end{array}$$

from which a contradiction follows. ■ 

6.5.3 A Second Uniqueness Result

We are given a continuous and globally monotone \(f\,:\, \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}\) and a globally Lipschitz \(g: \mathbb{R}^{d} \rightarrow \mathbb{R}^{d\times d}\) together with

$$\displaystyle{\kappa \in C(\mathbb{R}^{d}; \mathbb{R}^{m}),\quad \text{ and }F \in C([0,T] \times \mathbb{R}^{d} \times \mathbb{R}^{m} \times \mathbb{R}^{m\times d}; \mathbb{R}^{m})}$$

such that, for each 1 ≤ i ≤ m, \(F_{i}(t,x,y,z)\) depends on the matrix z only through its i-th column \(z_{i}\). As already explained, this assumption is essential for the notion of a viscosity solution of the system of partial differential equations considered below to make sense. We assume specifically that for some constants C, p > 0:

  1. (A.2i)

     \(\vert F(t,x,0,0)\vert \leq C(1 + \vert x\vert ^{p})\),  \(\vert \kappa (x)\vert \leq C(1 + \vert x\vert ^{p})\),

  2. (A.2ii)

    F = F(t, x, y, z) is globally Lipschitz in (y, z), uniformly in (t, x).

Remark 6.105.

In the case of systems of equations, it does not seem possible to weaken the Lipschitz continuity of F in y to a monotonicity condition as we do in the case m = 1.

Under the assumptions (A.2i) and (A.2ii), for each \(t \in [0,T]\) and \(x \in \mathbb{R}^{d}\), we consider the system of PDEs

$$\displaystyle{ \left\{\begin{array}{r} -\dfrac{\partial u_{i}} {\partial t} (t,x) + \Phi _{i}(t,x,u(t,x),\mathit{Du}_{i}(t,x),D^{2}u_{i}(t,x)) = 0, \\ (t,x) \in [0,T] \times \mathbb{R}^{d},\quad 1 \leq i \leq m, \\ {l}{u_{i}(T,x) =\kappa _{i}(x),\;x \in \mathbb{R}^{d},\quad 1 \leq i \leq m,} \end{array} \right. }$$
(6.131)

where

$$\displaystyle{\Phi _{i}(t,x,r,q,X) = -\frac{1} {2}Tr[gg^{{\ast}}(x)X] -\langle f(x),q\rangle - F_{ i}(t,x,r,q).}$$

The notion of a viscosity solution for such a system is easily deduced from a combination of Definitions 6.94 and 6.96.

We can replace “global maximum point” or “global minimum point” by “strict global maximum point” or “strict global minimum point”. The proof of this claim is very simple and we leave it as an exercise for the reader.

Now we give a uniqueness result for (6.131). This result is obtained under the following additional assumption:

  1. (A.2 iii)

    \(\vert F(t,x,r,p) - F(t,y,r,p)\vert \leq \mathbf{m}_{R}(\vert x - y\vert (1 + \vert p\vert )),\)

for all \(x,y \in \mathbb{R}^{d}\) such that | x | ≤ R, | y | ≤ R, \(r \in \mathbb{R}^{m}\), \(p \in \mathbb{R}^{d}\), where for each R > 0, \(\mathbf{m}_{R} \in C(\mathbb{R}_{+})\) is increasing and \(\mathbf{m}_{R}(0) = 0\).

Our result is the following:

Theorem 6.106.

Assume that f,g satisfy (A2). Then there exists at most one viscosity solution u of (6.131) such that

$$\displaystyle{ \lim _{\vert x\vert \rightarrow +\infty }\vert u(t,x)\vert e^{-\delta \left[\log (\vert x\vert )\right]^{2} } = 0, }$$
(6.132)

uniformly for t ∈ [0,T], for some δ > 0.

Remark 6.107.

Notice that any function which has at most polynomial growth at infinity satisfies (6.132).

The growth condition (6.132) is optimal for obtaining such a uniqueness result for (6.131). Indeed, consider the equation

$$\displaystyle{ \frac{\partial u} {\partial t} -\frac{x^{2}} {2} \frac{\partial ^{2}u} {\partial x^{2}} -\frac{x} {2} \frac{\partial u} {\partial x} = 0\quad \text{ in}\;(0,T) \times (0,+\infty ), }$$
(6.133)

then u is a solution of (6.133) if and only if the function \(v(t,y) = u(t,e^{y})\) is a solution of the Heat Equation

$$\displaystyle{ \frac{\partial v} {\partial t} -\frac{1} {2} \frac{\partial ^{2}v} {\partial y^{2}} = 0\quad \text{ in}\;(0,T) \times \mathbb{R}. }$$
(6.134)

But it is well-known that, for the Heat Equation, the uniqueness holds in the class of solutions v satisfying

$$\displaystyle{ \lim _{\vert y\vert \rightarrow +\infty }\vert v(t,y)\vert e^{-\delta \vert y\vert ^{2} } = 0, }$$
(6.135)

uniformly for \(t \in [0,T]\), for some δ > 0. And (6.135) gives back (6.132) for (6.133) since y = log(x).
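The chain rule behind this equivalence reads \(v_{t} -\frac{1}{2}v_{yy} = u_{t} -\frac{x^{2}}{2}u_{xx} -\frac{x}{2}u_{x}\) at \(x = e^{y}\), since \(v_{y} = xu_{x}\) and \(v_{yy} = xu_{x} + x^{2}u_{xx}\). This identity can be checked by finite differences; the smooth test function below is an arbitrary choice, not from the text:

```python
import math

# Check that v(t, y) = u(t, e**y) turns the operator of (6.133) into the
# heat operator of (6.134):
#     v_t - (1/2) v_yy  =  u_t - (x**2/2) u_xx - (x/2) u_x   at x = e**y.
def u(t, x): return t * x * x + math.sin(x)   # arbitrary smooth test function
def v(t, y): return u(t, math.exp(y))

h = 1e-4  # central finite-difference step
def d_t(f, t, s):  return (f(t + h, s) - f(t - h, s)) / (2 * h)
def d_s(f, t, s):  return (f(t, s + h) - f(t, s - h)) / (2 * h)
def d_ss(f, t, s): return (f(t, s + h) - 2 * f(t, s) + f(t, s - h)) / h**2

t0, y0 = 0.7, 0.3
x0 = math.exp(y0)
lhs = d_t(v, t0, y0) - 0.5 * d_ss(v, t0, y0)
rhs = d_t(u, t0, x0) - 0.5 * x0**2 * d_ss(u, t0, x0) - 0.5 * x0 * d_s(u, t0, x0)
assert abs(lhs - rhs) < 1e-5
```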

Let us finally mention that, in our case, the growth condition (6.132) is mainly a consequence of the assumptions on the coefficients of the differential operator, and in particular on \(a = gg^{\ast}\); under the assumptions of Theorem 6.106, the matrix a has, a priori, quadratic growth at infinity. If a is assumed to have linear growth at infinity, an easy adaptation of the proof of Theorem 6.106 shows that the uniqueness holds in the class of solutions satisfying

$$\displaystyle{\lim _{\vert x\vert \rightarrow +\infty }\vert u(t,x)\vert e^{-\delta \vert x\vert } = 0,}$$

uniformly for \(t \in [0,T]\), for some δ > 0.

Proof of Theorem 6.106.

Let u and v be two viscosity solutions of (6.131). The proof consists of two steps. We first show that uv and vu are viscosity sub-solutions of an integral partial differential system; then we build a suitable sequence of smooth super-solutions of this system to show that | uv |  = 0 in \([0,T] \times \mathbb{R}^{d}\). Here and below, we denote by | ⋅ | the sup norm in \(\mathbb{R}^{m}\).

Lemma 6.108.

Let u be a sub-solution and v a super-solution of (6.131). Then the function ω:= u − v is a viscosity sub-solution of the system

$$\displaystyle{ -\frac{\partial \omega _{i}} {\partial t} -\mathcal{A}\omega _{i} -\tilde{ K}\left[\vert \omega \vert + \vert \nabla \omega _{i}g\vert \right] = 0\text{ in }[0,T] \times \mathbb{R}^{d}, }$$
(6.136)

for 1 ≤ i ≤ m, where \(\tilde{K}\) is the Lipschitz constant of F in (r,p).

Proof.

Let \(\varphi \in C^{2}([0,T] \times \mathbb{R}^{d})\) and let \((t_{0},x_{0}) \in (0,T) \times \mathbb{R}^{d}\) be a strict global maximum point of \(\omega _{i}-\varphi\) for some 1 ≤ i ≤ m.

We introduce the function

$$\displaystyle{\psi _{n}(t,x,y) = u_{i}(t,x) - v_{i}(t,y) - n\vert x - y\vert ^{2} -\varphi (t,x),}$$

where n will eventually be sent to infinity.

Since (t 0, x 0) is a strict global maximum point of \(u_{i} - v_{i}-\varphi\), by a classical argument in the theory of viscosity solutions, there exists a sequence (t n , x n , y n ) such that:

  1. (i)

    (t n , x n , y n ) is a global maximum point of ψ n in \([0,T] \times (\overline{B}_{R})^{2}\), where \(B_{R}\) is a ball with a large radius R;

  2. (ii)

    (t n , x n ), (t n , y n ) → (t 0, x 0) as n → ∞;

  3. (iii)

    \(n\vert x_{n} - y_{n}\vert ^{2}\) is bounded and tends to zero as n → ∞.

It follows from a variant of Lemma 6.104, see also Theorem 8.3 in the user’s guide [18], that there exist \(X,Y \in \mathbb{S}^{d}\) such that

$$\displaystyle{\left( \frac{\partial \varphi } {\partial t}(t_{n},x_{n}),\,q_{n} + D\varphi (t_{n},x_{n}),X\right) \in \mathcal{P}^{2,+}u_{ i}(t_{n},x_{n})}$$
$$\displaystyle{(0,q_{n},Y ) \in \mathcal{P}^{2,-}v_{ i}(t_{n},y_{n})}$$
$$\displaystyle{\left(\begin{array}{*{10}c} X & 0\\ 0 &-Y \end{array} \right) \leq 4n\left(\begin{array}{*{10}c} I &-I\\ -I & I \end{array} \right)+\left(\begin{array}{*{10}c} D^{2}\varphi (t_{ n},x_{n})&0 \\ 0 &0 \end{array} \right),}$$

where

$$\displaystyle{\quad q_{n} = 2n(x_{n} - y_{n}).}$$

Modifying if necessary ψ n by adding terms of the form χ(x) and χ(y) with supports in \(B_{R/2}^{c}\), we may assume that (t n , x n , y n ) is a global maximum point of ψ n in \(([0,T] \times \mathbb{R}^{d})^{2}\). Since u and v are respectively sub and super-solutions of (6.131), we have

$$\displaystyle{-\frac{\partial \varphi } {\partial t}(t_{n},x_{n}) -\frac{1} {2}\mathit{Tr}(a(x_{n})X) -\left\langle f(x_{n}),\,q_{n} + D\varphi (t_{n},x_{n})\right\rangle }$$
$$\displaystyle{-F_{i}(t_{n},x_{n},u(t_{n},x_{n}),\,(q_{n} + D\varphi (t_{n},x_{n}))g(x_{n})) \leq 0}$$

and

$$\displaystyle{-\frac{1} {2}\mathit{Tr}(a(y_{n})Y ) -\left\langle f(y_{n}),q_{n}\right\rangle - F_{i}(t_{n},y_{n},v(t_{n},y_{n}),q_{n}g(y_{n})) \geq 0.}$$

The computation of Lemma 6.97 yields

$$\displaystyle\begin{array}{rcl} & \frac{1} {2}\mathit{Tr}[\mathit{gg}^{{\ast}}(x_{ n})X] -\frac{1} {2}Tr[gg^{{\ast}}(y_{ n})Y ] +\langle f(x_{n}) - f(y_{n}),q_{n}\rangle & {}\\ & \leq n\vert x_{n} - y_{n}\vert ^{2} + \mathit{Tr}[\mathit{gg}^{{\ast}}(x_{n})D^{2}\varphi (t_{n},x_{n})]. & {}\\ \end{array}$$

Finally, we consider the difference between the nonlinear terms

$$\displaystyle\begin{array}{rcl} & F_{i}(t_{n},x_{n},\,u(t_{n},x_{n}),\,(q_{n} + D\varphi (t_{n},x_{n}))g(x_{n})) - F_{i}(t_{n},y_{n},\,v(t_{n},y_{n}),\,q_{n}g(y_{n}))& {}\\ & \leq \mathbf{m}(\vert x_{n} - y_{n}\vert (1 + \vert q_{n}g(y_{n})\vert )) +\tilde{ K}\vert u(t_{n},x_{n}) - v(t_{n},y_{n})\vert & {}\\ & \quad +\tilde{ K}\vert q_{n}(g(x_{n}) - g(y_{n})) + D\varphi (t_{n},x_{n})g(x_{n})\vert. & {}\\ \end{array}$$

The first term on the right-hand side comes from (A.2 iii): we have denoted by m the modulus \(\mathbf{m}_{R}\) which appears in this assumption for R large enough. The last two terms come from the Lipschitz continuity of F i with respect to its last two variables.

We notice that

$$\displaystyle{\vert q_{n}(g(x_{n}) - g(y_{n}))\vert \leq Cn\vert x_{n} - y_{n}\vert ^{2},}$$

because of the Lipschitz continuity of g and that

$$\displaystyle{\vert x_{n} - y_{n}\vert \cdot \vert q_{n}g(y_{n})\vert \leq Cn\vert x_{n} - y_{n}\vert ^{2}.}$$

Now we subtract the viscosity inequalities for u and v: thanks to the above estimates, we can write the obtained inequality in the following way

$$\displaystyle{-\frac{\partial \varphi } {\partial t}(t_{n},x_{n}) -\mathcal{A}\varphi (t_{n},x_{n}) -\tilde{ K}\vert u(t_{n},x_{n}) - v(t_{n},y_{n})\vert -\tilde{ K}\vert D\varphi (t_{n},x_{n})g(x_{n})\vert \leq \omega _{1}(n),}$$

where we have gathered in \(\omega _{1}(n)\) all the terms of the form \(n\vert x_{n} - y_{n}\vert ^{2}\) and \(\vert x_{n} - y_{n}\vert \); \(\omega _{1}(n) \rightarrow 0\) as n tends to ∞. To conclude we let n → ∞. Since (t n , x n ), (t n , y n ) → (t 0, x 0), we obtain:

$$\displaystyle{-\frac{\partial \varphi } {\partial t}(t_{0},x_{0}) -\mathcal{A}\varphi (t_{0},x_{0}) -\tilde{ K}\vert \omega (t_{0},x_{0})\vert -\tilde{ K}\vert D\varphi (t_{0},x_{0})g(x_{0})\vert \leq 0,}$$

and therefore ω is a sub-solution of the desired equation. ■ 

Now we are going to build suitable smooth super-solutions for the equation (6.136).

Lemma 6.109.

For any δ > 0, there exists a \(C_{1} > 0\) such that the function

$$\displaystyle{\chi (t,x) =\exp \left[(C_{1}(T - t) + \delta )\psi (x)\right]}$$

where

$$\displaystyle{\psi (x) = \left[\log \left((\vert x\vert ^{2} + 1)^{1/2}\right) + 1\right]^{2}\;,}$$

satisfies

$$\displaystyle{-\frac{\partial \chi } {\partial t} -\mathcal{A}\chi -\tilde{ K}\chi -\tilde{ K}\vert D\chi g\vert > 0\text{ in }[t_{1},T] \times \mathbb{R}^{d}}$$

for 1 ≤ i ≤ m, where \(t_{1} = T -\delta /C_{1}\).

Proof.

We first estimate the terms involving χ and its derivatives, the main point being their dependence on x. For the sake of simplicity of notation, we denote below by C all the positive constants which enter into these estimates. These constants depend only on δ and on the bounds on the coefficients of the equations.

We first give estimates on the first and second derivatives of ψ: easy computations yield

$$\displaystyle{\vert D\psi (x)\vert \leq \frac{2[\psi (x)]^{1/2}} {(\vert x\vert ^{2} + 1)^{1/2}} \leq 4\quad \text{ in}\,\;\mathbb{R}^{d}\;,}$$

and

$$\displaystyle{\vert D^{2}\psi (x)\vert \leq \frac{C(1 + [\psi (x)]^{1/2})} {\vert x\vert ^{2} + 1} \quad \text{ in}\,\;\mathbb{R}^{d}\;.}$$

These estimates imply that, if \(t \in [t_{1},T]\)

$$\displaystyle\begin{array}{rcl} \vert D\chi (t,x)\vert & \leq & (C_{1}(T - t) + \delta )\chi (t,x)\vert D\psi (x)\vert {}\\ &\leq & C\chi (t,x) \frac{[\psi (x)]^{1/2}} {(\vert x\vert ^{2} + 1)^{1/2}}\;, {}\\ \end{array}$$

and, in the same way

$$\displaystyle{\vert D^{2}\chi (t,x)\vert \leq C\chi (t,x) \frac{\psi (x)} {\vert x\vert ^{2} + 1}\;.}$$

It is worth noticing that, because of our choice of t 1, the above estimates do not depend on C 1.

Since \(gg^{\ast}\) and \(\langle f(x),x\rangle\) grow at most quadratically at infinity, we have

$$\displaystyle\begin{array}{rcl} & & -\frac{\partial \chi } {\partial t}(t,x) -\mathcal{A}\chi (t,x) -\tilde{ K}\chi (t,x) -\tilde{ K}\vert D\chi (t,x)g(x)\vert {}\\ & &\geq \chi {\biggl [ C_{1}\psi (x) - C\psi (x) - C \frac{\psi (x)} {\vert x\vert ^{2} + 1} -\tilde{ K} - C\tilde{K}[\psi (x)]^{1/2} - C\tilde{K} \frac{[\psi (x)]^{1/2}} {(\vert x\vert ^{2} + 1)^{1/2}}\biggr ]}. {}\\ \end{array}$$

Since ψ(x) ≥ 1 in \(\mathbb{R}^{d}\), using the Cauchy–Schwarz inequality, it is clear that for \(C_{1}\) large enough the quantity in the brackets is positive, and the proof is complete. ■ 
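The gradient bound \(\vert D\psi (x)\vert \leq 2[\psi (x)]^{1/2}/(\vert x\vert ^{2} + 1)^{1/2} \leq 4\) can be confirmed numerically in dimension one (the grid and tolerance below are illustrative): \(\psi ^{\prime}(x) = 2[\psi (x)]^{1/2}\,x/(x^{2} + 1)\) and \(\vert x\vert \leq (x^{2} + 1)^{1/2}\) give the first inequality, while \(r\mapsto (\log r + 1)/r\) attains its maximum 1 at r = 1 on \([1,\infty )\), so the bound 4 holds with room to spare:

```python
import math

# psi(x) = (log((x**2 + 1)**0.5) + 1)**2 in dimension d = 1
def psi(x):
    return (math.log(math.sqrt(x * x + 1)) + 1) ** 2

def dpsi(x):
    # exact derivative: 2 * (log(sqrt(x^2+1)) + 1) * x / (x^2 + 1)
    return 2 * (math.log(math.sqrt(x * x + 1)) + 1) * x / (x * x + 1)

for i in range(-500, 501):
    x = i * 0.1                 # sample points in [-50, 50]
    bound = 2 * math.sqrt(psi(x)) / math.sqrt(x * x + 1)
    assert abs(dpsi(x)) <= bound + 1e-12   # |psi'| <= 2 sqrt(psi)/sqrt(x^2+1)
    assert bound <= 4.0                    # ... <= 4
```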

To conclude the proof, we are going to show that ω = uv satisfies

$$\displaystyle{\vert \omega (t,x)\vert \leq \alpha \chi (t,x)\;\text{ in }[0,T] \times \mathbb{R}^{d}}$$

for any α > 0. Then we will let α tend to zero.

To prove this inequality, we first remark that because of (6.132)

$$\displaystyle{\lim _{\vert x\vert \rightarrow +\infty }\vert \omega (t,x)\vert e^{-\delta [\log ((\vert x\vert ^{2}+1)^{1/2})]^{2} } = 0}$$

uniformly for t ∈ [0, T], for some δ > 0. From now on we choose δ in the definition of χ such that this holds. Then \(\vert \omega _{i}\vert -\alpha \chi\) is bounded from above in \([t_{1},T] \times \mathbb{R}^{d}\) for any 1 ≤ i ≤ m and

$$\displaystyle{M =\max _{1\leq i\leq m}\max _{[t_{1},T]\times \mathbb{R}^{d}}(\vert \omega _{i}\vert -\alpha \chi )(t,x)e^{-\tilde{K}(T-t)}}$$

is attained at some point (t 0, x 0) and for some i 0.

We first remark that, since | ⋅ | is the sup norm in \(\mathbb{R}^{m}\), we have

$$\displaystyle{M =\max _{[t_{1},T]\times \mathbb{R}^{d}}(\vert \omega \vert -\alpha \chi )(t,x)e^{-\tilde{K}(T-t)}}$$

and \(\vert \omega _{i_{0}}(t_{0},x_{0})\vert = \vert \omega (t_{0},x_{0})\vert \). We may assume without loss of generality that \(\vert \omega _{i_{0}}(t_{0},x_{0})\vert > 0\), otherwise we are done.

There are two cases: either \(\omega _{i_{0}}(t_{0},x_{0}) > 0\) or \(\omega _{i_{0}}(t_{0},x_{0}) < 0\). We treat the first case, the second one is treated in a similar way since the roles of u and v are symmetric.

From the maximum point property, we deduce that

$$\displaystyle{\omega _{i_{0}}(t,x) -\alpha \chi (t,x) \leq (\omega _{i_{0}}-\alpha \chi )(t_{0},x_{0})e^{-\tilde{K}(t-t_{0})}}$$

and this inequality can be interpreted as the property for the function \(\omega _{i_{0}}-\phi\) to have a global maximum point at (t 0, x 0), where

$$\displaystyle{\phi (t,x) =\alpha \chi (t,x) + (\omega _{i_{0}}-\alpha \chi )(t_{0},x_{0})e^{-\tilde{K}(t-t_{0})}.}$$

Since ω is a viscosity sub-solution of (6.136), if \(t_{0} \in [t_{1},T[\), we have

$$\displaystyle{-\frac{\partial \phi } {\partial t}(t_{0},x_{0}) -\mathcal{A}\phi (t_{0},x_{0}) -\tilde{ K}\vert \omega (t_{0},x_{0})\vert -\tilde{ K}\vert D\phi (t_{0},x_{0})g(x_{0})\vert \leq 0.}$$

But the left-hand side of this inequality is nothing but

$$\displaystyle{\alpha \left[-\frac{\partial \chi } {\partial t}(t_{0},x_{0}) -\mathcal{A}\chi (t_{0},x_{0}) -\tilde{ K}\chi (t_{0},x_{0}) -\tilde{ K}\vert D\chi (t_{0},x_{0})g(x_{0})\vert \right],}$$

since \(\omega _{i_{0}}(t_{0},x_{0}) = \vert \omega (t_{0},x_{0})\vert \); so, by Lemma 6.109, we have a contradiction. Therefore t 0 = T and since | ω(T, x) |  = 0, we have

$$\displaystyle{\vert \omega (t,x)\vert -\alpha \chi (t,x) \leq 0\text{ in }[t_{1},T] \times \mathbb{R}^{d}.}$$

Letting α tend to zero, we obtain

$$\displaystyle{\vert \omega (t,x)\vert = 0\text{ in }[t_{1},T] \times \mathbb{R}^{d}.}$$

Applying successively the same argument on the intervals \([t_{2},t_{1}]\), where \(t_{2} = (t_{1} -\delta /C_{1})^{+}\), and then, if \(t_{2} > 0\), on \([t_{3},t_{2}]\), where \(t_{3} = (t_{2} -\delta /C_{1})^{+}\), etc., we finally obtain that

$$\displaystyle{\vert \omega (t,x)\vert = 0\;\text{ in }[0,T] \times \mathbb{R}^{d}}$$

and the proof is complete.
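The successive-intervals argument terminates because each step advances by the fixed amount \(\delta /C_{1}\), so \([0,T]\) is covered after \(\lceil TC_{1}/\delta \rceil\) steps; a tiny sketch with hypothetical numerical values:

```python
import math

# Iterate t_{k+1} = (t_k - delta/C1)^+ starting from t_0 = T; the proof
# applies the comparison argument once on each interval [t_{k+1}, t_k].
T, delta, C1 = 1.0, 0.3, 2.0    # hypothetical values, for illustration
t, steps = T, 0
while t > 0:
    t = max(t - delta / C1, 0.0)
    steps += 1
assert t == 0.0
assert steps == math.ceil(T * C1 / delta)   # finitely many intervals
```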

6.5.4 A Third Uniqueness Result

Let D be an open connected bounded subset of \(\mathbb{R}^{d}\) of the form

$$\displaystyle{D = \left\{x \in \mathbb{R}^{d}:\phi \left(x\right) < 0\right\},\;\;\mathrm{Bd}\left(D\right) = \left\{x \in \mathbb{R}^{d}:\phi \left(x\right) = 0\right\},}$$

where \(\phi \in C_{b}^{3}\left(\mathbb{R}^{d}\right)\), \(\left\vert \nabla \phi \left(x\right)\right\vert = 1\), for all \(x \in \mathrm{ Bd}\left(D\right)\).

We define the outward normal derivative of v at the point \(x \in \mathrm{ Bd}\left(D\right)\) by

$$\displaystyle{\frac{\partial v\left(x\right)} {\partial n} =\sum _{ j=1}^{d}\frac{\partial \phi \left(x\right)} {\partial x_{j}} \frac{\partial v\left(x\right)} {\partial x_{j}} = \left\langle \nabla \phi \left(x\right),\nabla v\left(x\right)\right\rangle.}$$

The aim of this section is to prove uniqueness of a viscosity solution for the following parabolic variational inequality (PVI) with a mixed nonlinear multivalued Neumann–Dirichlet boundary condition:

$$\displaystyle{ \left\{\begin{array}{r} \dfrac{\partial u(t,x)} {\partial t} -\mathcal{A}_{t}u\left(t,x\right) + \partial \varphi \left(u(t,x)\right) \ni F\left(t,x,u(t,x),(\nabla ug)(t,x)\right), \\ t > 0,\;x \in D, \\ {l}{\dfrac{\partial u(t,x)} {\partial n} + \partial \psi \left(u(t,x)\right) \ni G\left(t,x,u(t,x)\right),\;\;t > 0,\;x \in \mathrm{ Bd}\left(D\right),} \\ {l}{u(0,x) =\kappa (x),\;\ x \in \overline{D},}\end{array} \right. }$$
(6.137)

where the operator \(\mathcal{A}_{t}\) is given by

$$\displaystyle{\mathcal{A}_{t}v(x) = \dfrac{1} {2}\mathrm{Tr}\big[g(t,x)g^{{\ast}}(t,x)D^{2}v(x)\big] +\big\langle f(t,x),\nabla v(x)\big\rangle.}$$

We will make the following assumptions:

  1. (I)

    The functions

    $$\displaystyle{ \begin{array}{l} f: \left[0,\infty \right) \times \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}, \\ g: \left[0,\infty \right) \times \mathbb{R}^{d} \rightarrow \mathbb{R}^{d\times d}, \\ F: \left[0,\infty \right) \times \overline{D} \times \mathbb{R} \times \mathbb{R}^{d} \rightarrow \mathbb{R}, \\ G: \left[0,\infty \right) \times \mathrm{ Bd}\left(D\right) \times \mathbb{R} \rightarrow \mathbb{R}, \\ \kappa: \overline{D} \rightarrow \mathbb{R}\;\;\;\;\end{array} }$$
    (6.138)

    are continuous.

    We assume that for all T > 0, there exist \(\alpha \in \mathbb{R}\) and L, β, γ ≥ 0 (which can depend on T) such that \(\forall t \in \left[0,T\right],\;\forall x,\tilde{x} \in \mathbb{R}^{d}\):

    $$\displaystyle{ \langle f\left(t,x\right) - f\left(t,\tilde{x}\right), \frac{x -\tilde{ x}} {\vert x -\tilde{ x}\vert }\rangle ^{+} +\big \vert g\left(t,x\right) - g\left(t,\tilde{x}\right)\big\vert \leq L\left\vert x -\tilde{ x}\right\vert,\quad }$$
    (6.139)

    and \(\ \forall t \in \left[0,T\right]\), \(\forall x \in \overline{D}\), \(x^{{\prime}}\in \mathrm{ Bd}\left(D\right)\), \(y,\tilde{y} \in \mathbb{R},z,\tilde{z} \in \mathbb{R}^{d}\):

    $$\displaystyle{ \begin{array}{rl} \left(i\right)\ &(y -\tilde{ y})\left(F(t,x,y,z) - F(t,x,\tilde{y},z)\right) \leq \alpha \vert y -\tilde{ y}\vert ^{2}, \\ \left(\mathit{ii}\right)\ &\big\vert F(t,x,y,z) - F(t,x,y,\tilde{z})\big\vert \leq \beta \vert z -\tilde{ z}\vert, \\ \left(\mathit{iii}\right)\ &\big\vert F(t,x,y,0)\big\vert \leq \gamma \big (1 + \vert y\vert \big), \\ \left(\mathit{iv}\right)\ &(y -\tilde{ y})\left(G(t,x^{{\prime}},y) - G(t,x^{{\prime}},\tilde{y})\right) \leq \alpha \vert y -\tilde{ y}\vert ^{2}, \\ \left(\mathit{v}\right)\ &\big\vert G(t,x^{{\prime}},y)\big\vert \leq \gamma \left(1 + \vert y\vert \right).\end{array} }$$
    (6.140)

    In fact, the conditions (6.140-i) and (6.140-iv) mean that, for all \(t \in \left[0,T\right]\), \(x \in \overline{D}\), \(x^{{\prime}}\in \mathrm{ Bd}\left(D\right)\), \(z \in \mathbb{R}^{d}\),

$$\displaystyle{r\mapsto \alpha r - F\left(t,x,r,z\right)\quad \text{ and}\quad r\mapsto \alpha r - G\left(t,x^{{\prime}},r\right)}$$

    are increasing functions.
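Indeed, taking \(y = r\), \(\tilde{y} =\tilde{ r}\) in (6.140-i) gives, for \(r >\tilde{ r}\),

$$\displaystyle{F\left(t,x,r,z\right) - F\left(t,x,\tilde{r},z\right) \leq \alpha \left(r -\tilde{ r}\right),}$$

that is, \(\alpha r - F\left(t,x,r,z\right) \geq \alpha \tilde{r} - F\left(t,x,\tilde{r},z\right)\); the argument for G based on (6.140-iv) is identical.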

  2. (II)

    We assume that

    $$\displaystyle{ \begin{array}{rl} \left(i\right)\ &\varphi,\psi: \mathbb{R} \rightarrow (-\infty,+\infty ]\text{ are proper convex l.s.c. functions,} \\ \left(\mathit{ii}\right)\ &\varphi \left(y\right) \geq \varphi \left(0\right) = 0\text{ and }\psi \left(y\right) \geq \psi \left(0\right) = 0,\ \forall \;y \in \mathbb{R},\end{array} }$$
    (6.141)

    and there exists a positive constant M such that

    $$\displaystyle{ \begin{array}{rl} \left(i\right)\ &\Big\vert \varphi \big(\kappa (x)\big)\Big\vert \leq M,\;\;\forall x \in \overline{D}, \\ \left(\mathit{ii}\right)\ &\Big\vert \psi \big(\kappa (x)\big)\Big\vert \leq M,\;\;\forall x \in \mathrm{ Bd}\left(D\right).\end{array} }$$
    (6.142)

Remark 6.110.

Condition (6.141-ii) is generally satisfied after a translation of both the functions \(\varphi\), ψ and their arguments.

We define

$$\displaystyle{\begin{array}{l} \mathrm{Dom}\left(\varphi \right) = \left\{u \in \mathbb{R}:\varphi \left(u\right) < \infty \right\}, \\ \partial \varphi \left(u\right) = \left\{\hat{u} \in \mathbb{R}:\hat{ u}\left(v - u\right) +\varphi \left(u\right) \leq \varphi \left(v\right),\forall v \in \mathbb{R}\right\}, \\ \mathrm{Dom}\left(\partial \varphi \right) = \left\{u \in \mathbb{R}: \partial \varphi \left(u\right)\neq \varnothing \right\}, \\ \left(u,\hat{u}\right) \in \partial \varphi \Leftrightarrow u \in \mathrm{ Dom}\left(\partial \varphi \right),\;\;\hat{u} \in \partial \varphi \left(u\right), \end{array} }$$

and we will use the same notions with \(\varphi\) replaced by ψ.

At every point \(y \in \mathrm{ Dom}\left(\varphi \right)\) we have

$$\displaystyle{\partial \varphi (y) = \mathbb{R} \cap \big [\varphi _{-}^{{\prime}}(y),\varphi _{ +}^{{\prime}}(y)\big],}$$

where \(\varphi _{-}^{{\prime}}(y)\) and \(\varphi _{+}^{{\prime}}(y)\) are resp. the left and right derivatives of \(\varphi\) at y.
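As an illustration (not part of the original argument), take \(\varphi \left(y\right) = \left\vert y\right\vert\): then \(\varphi _{-}^{{\prime}}(0) = -1\), \(\varphi _{+}^{{\prime}}(0) = 1\), and

$$\displaystyle{\partial \varphi (y) = \left\{\begin{array}{ll} \left\{-1\right\}, &\text{ if }y < 0, \\ \left[-1,1\right],&\text{ if }y = 0, \\ \left\{1\right\}, &\text{ if }y > 0.\end{array} \right.}$$

The intersection with \(\mathbb{R}\) in the general formula matters only when a one-sided derivative is infinite (with the usual conventions at the endpoints of the domain): for \(\varphi (y) = -\sqrt{y}\) if \(y \geq 0\) and \(\varphi (y) = +\infty \) otherwise, one has \(\varphi _{+}^{{\prime}}(0) = -\infty \) and \(\partial \varphi (0) = \varnothing \).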

For the reader’s convenience we recall here from Sect. 5.8 the definition of a viscosity solution of the parabolic variational inequality (6.137). We define

$$\displaystyle{\Phi \left(t,x,r,q,X\right):= -\frac{1} {2}\mathrm{Tr}\left((\mathit{gg}^{{\ast}})(t,x)X\right) -\langle f(t,x),q\rangle - F\left(t,x,r,\mathit{qg}(t,x)\right),}$$
$$\displaystyle{\Gamma (t,x,r,q):=\langle \nabla \phi (x),q\rangle - G(t,x,r).}$$

Definition 6.111.

Let \(u: \left[0,\infty \right) \times \overline{D} \rightarrow \mathbb{R}\) be a continuous function, which satisfies \(u(0,x) =\kappa \left(x\right),\;\forall \ x \in \overline{D}\).

  1. (a)

    u is a viscosity sub-solution of (6.137) if:

    $$\displaystyle{\left\vert \ \begin{array}{l} u(t,x) \in \mathrm{ Dom}\left(\varphi \right),\ \ \forall (t,x) \in (0,\infty ) \times \overline{D}, \\ u(t,x) \in \mathrm{ Dom}\left(\psi \right),\ \ \forall (t,x) \in (0,\infty ) \times \mathrm{ Bd}\left(D\right), \end{array} \right.}$$

    and for any \(\left(t,x\right) \in (0,\infty ) \times \overline{D}\), any \((p,q,X) \in \mathcal{P}^{2,+}u(t,x)\):

    $$\displaystyle\begin{array}{rcl} \left\{\begin{array}{l} p + \Phi \left(t,x,u(t,x),q,X\right) +\varphi _{ -}^{{\prime}}\left(u(t,x)\right) \leq 0\ \;if\;x \in D, \\ \min \Big\{p + \Phi \left(t,x,u(t,x),q,X\right) +\varphi _{ -}^{{\prime}}\left(u(t,x)\right), \\ \quad \quad \quad \quad \Gamma (t,x,u(t,x),q) +\psi _{ -}^{{\prime}}\left(u(t,x)\right)\Big\} \leq 0\;\ if\;x \in \mathrm{ Bd}\left(D\right).\end{array} \right.& & {}\end{array}$$
    (6.143)
  2. (b)

    The viscosity super-solution of (6.137) is defined in a similar manner as above, with \(\mathcal{P}^{2,+}\) replaced by \(\mathcal{P}^{2,-}\), the left derivative replaced by the right derivative, min by max, and the inequalities ≤ by ≥ .

  3. (c)

    A continuous function \(u: \left[0,\infty \right) \times \overline{D} \rightarrow \mathbb{R}\) is a viscosity solution of (6.137) if it is both a viscosity sub- and super-solution.

We now present the main result of this section.

Theorem 6.112.

Let the assumptions (6.138)–(6.142) be satisfied. If moreover the function

$$\displaystyle{ r \rightarrow G(t,x,r)\text{ is decreasing for all }t \geq 0\text{, }x \in \mathrm{ Bd}\left(D\right), }$$
(6.144)

and there exists a continuous function \(\mathbf{m}: [0,\infty ) \rightarrow [0,\infty )\) with \(\mathbf{m}\left(0\right) = 0\) , such that

$$\displaystyle{ \begin{array}{l} \big\vert F(t,x,r,q) - F(t,y,r,q)\big\vert \leq \mathbf{m}\left(\left\vert x - y\right\vert \left(1 + \left\vert q\right\vert \right)\right), \\ \forall \ t \geq 0,\;x,y \in \overline{D},\;q \in \mathbb{R}^{d}, \end{array} }$$
(6.145)

then the parabolic variational inequality (6.137) has at most one viscosity solution.

Proof.

It is sufficient to prove uniqueness on a fixed arbitrary interval \(\left[0,T\right]\).

Also, it suffices to prove that if u is a sub-solution and v is a super-solution such that \(u(0,x) = v(0,x) =\kappa \left(x\right)\), \(x \in \overline{D}\), then u ≤ v.

Since only \(\nabla \phi\) and \(D^{2}\phi\) enter the argument below, by adding a constant to ϕ we may and do assume that ϕ(x) ≥ 0 on \(\overline{D}\).

For λ = α + + 1 and \(\delta,\varepsilon,c > 0\) let

$$\displaystyle{\begin{array}{l} \bar{u}\left(t,x\right) = e^{-\lambda t}u\left(t,x\right) -\delta \phi (x) - c \\ \bar{v}\left(t,x\right) = e^{-\lambda t}v\left(t,x\right) +\delta \phi (x) + c + \dfrac{\varepsilon } {T - t}. \end{array} }$$

Let

$$\displaystyle{ \begin{array}{r} \tilde{\Phi }\left(t,x,r,q,X\right) =\lambda r -\dfrac{1} {2}\mathrm{Tr}\big[\left(gg^{{\ast}}\right)(t,x)X\big] -\big\langle f(t,x),q\big\rangle \\ \quad - e^{-\lambda t}F\big(t,x,e^{\lambda t}r,e^{\lambda t}\mathit{qg}\left(t,x\right)\big), \\ {l}{\tilde{\Gamma }(t,x,r,q) =\langle \nabla \phi (x),q\rangle - e^{-\lambda t}G(t,x,e^{\lambda t}r).} \end{array} }$$
(6.146)

Clearly \(r \rightarrow \tilde{ \Phi }\left(t,x,r,q,X\right)\) is an increasing function for all \(\left(t,x,q,X\right) \in \left[0,T\right] \times \mathbb{R}^{d} \times \mathbb{R}^{d} \times \mathbb{S}^{d}\). Moreover, since

$$\displaystyle{\sup _{\left(t,x\right)\in \left[0,T\right]\times \overline{D}}\left\{\vert \phi \left(x\right)\vert + \vert D\phi \left(x\right)\vert + \vert D^{2}\phi \left(x\right)\vert + \left\vert f(t,x)\right\vert + \left\vert g(t,x)\right\vert \right\} < \infty,}$$

for any δ > 0 we can choose \(c = c\left(\delta \right) > 0\) such that \(c(\delta ) \rightarrow 0\) as \(\delta \rightarrow 0\), and for all δ, \(\varepsilon > 0\),

$$\displaystyle{\tilde{\Phi }\left(t,x,r,q,X\right) \leq \tilde{ \Phi }(t,x,r +\delta \phi +c,q +\delta D\phi,X +\delta D^{2}\phi ),}$$
$$\displaystyle{\tilde{\Phi }(t,x,r -\delta \phi -c - \frac{\varepsilon } {T - t},q -\delta D\phi,X -\delta D^{2}\phi ) \leq \tilde{ \Phi }\left(t,x,r,q,X\right).}$$
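For instance, the monotonicity of \(\tilde{\Phi }\) in r claimed above can be checked directly: by (6.140-i), for \(r \geq \tilde{ r}\),

$$\displaystyle{\tilde{\Phi }\left(t,x,r,q,X\right) -\tilde{ \Phi }\left(t,x,\tilde{r},q,X\right) =\lambda \left(r -\tilde{ r}\right) - e^{-\lambda t}\Big(F\big(t,x,e^{\lambda t}r,e^{\lambda t}qg(t,x)\big) - F\big(t,x,e^{\lambda t}\tilde{r},e^{\lambda t}qg(t,x)\big)\Big) \geq \left(\lambda -\alpha \right)\left(r -\tilde{ r}\right) \geq r -\tilde{ r} \geq 0,}$$

since \(\lambda =\alpha ^{+} + 1 \geq \alpha +1\).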

We will prove that \(\bar{u} \leq \bar{ v}\) for all δ > 0, \(\varepsilon > 0\), c = c(δ). This will imply u ≤ v on \([0,T) \times \overline{D}\) by letting \(\delta,\varepsilon \rightarrow 0\). The result will follow, since T is arbitrary.

Using the last two properties, assumption (6.144) and the fact that the left and right derivatives of \(\varphi\) and ψ are increasing, we infer that \(\bar{u}\) satisfies in the viscosity sense:

$$\displaystyle{ \left\{\begin{array}{l} \dfrac{\partial \bar{u}} {\partial t} (t,x) +\tilde{ \Phi }\left(t,x,\bar{u}(t,x),D\bar{u}(t,x),D^{2}\bar{u}\left(t,x\right)\right) + e^{-\lambda t}\varphi _{-}^{{\prime}}\big(e^{\lambda t}\bar{u}(t,x)\big) \leq 0 \\ {r} {\text{ if}\;x \in D,t > 0} \\ \min \bigg\{\dfrac{\partial \bar{u}} {\partial t} (t,x) +\tilde{ \Phi }\left(t,x,\bar{u}(t,x),D\bar{u}(t,x),D^{2}\bar{u}(t,x)\right) + e^{-\lambda t}\varphi _{-}^{{\prime}}\big(e^{\lambda t}\bar{u}(t,x)\big), \\ \tilde{\Gamma }\left(t,x,\bar{u}(t,x),D\bar{u}(t,x)\right) +\delta +e^{-\lambda t}\psi _{-}^{{\prime}}\big(e^{\lambda t}\bar{u}(t,x)\big)\bigg\} \leq 0 \\ {r} {\text{ if}\;x \in \mathrm{ Bd}\left(D\right),t > 0.\quad \quad \quad \quad \quad \quad } \end{array} \right. }$$
(6.147)

Analogously we see that \(\bar{v}\) satisfies in the viscosity sense:

$$\displaystyle{ \left\{\begin{array}{l} \dfrac{\partial \bar{v}} {\partial t} (t,x) +\tilde{ \Phi }\left(t,x,\bar{v}(t,x),D\bar{v}(t,x),D^{2}\bar{v}(t,x)\right) \\ + e^{-\lambda t}\varphi _{+}^{{\prime}}\big(e^{\lambda t}\bar{v}(t,x)\big) - \dfrac{\varepsilon } {(T - t)^{2}} \geq 0,\;\text{ if}\;x \in D,\ t > 0, \\ \max \bigg\{\dfrac{\partial \bar{v}} {\partial t} (t,x) +\tilde{ \Phi }\left(t,x,\bar{v}(t,x),D\bar{v}(t,x),D^{2}\bar{v}(t,x)\right) + e^{-\lambda t}\varphi _{+}^{{\prime}}\big(e^{\lambda t}\bar{v}(t,x)\big) \\ {r} { - \dfrac{\varepsilon } {(T - t)^{2}},\tilde{\Gamma }\left(t,x,\bar{v}(t,x),D\bar{v}(t,x)\right) -\delta +e^{-\lambda t}\psi _{+}^{{\prime}}\big(e^{\lambda t}\bar{v}(t,x)\big)\bigg\} \geq 0} \\ {r} {\text{ if}\;x \in \mathrm{ Bd}\left(D\right),t > 0.} \end{array} \right. }$$
(6.148)

For simplicity of notation we write from now on u, v instead of \(\bar{u},\bar{v}\) respectively.

We now assume that

$$\displaystyle{ \mathop{\max }\limits_{\left[0,T\right] \times \overline{D}}\left(u - v\right)^{+} > 0. }$$
(6.149)

By an argument similar to that of Theorem 6.103, see Theorem 4.2 in [56] for more details, there exists \((\hat{t},\hat{x}) \in \left(0,T\right] \times \mathrm{ Bd}\left(D\right)\) such that

$$\displaystyle{u(\hat{t},\hat{x}) - v(\hat{t},\hat{x}) =\mathop{\max }\limits_{ \left[0,T\right] \times \overline{D}}\left(u - v\right)^{+} > 0.}$$

We now let

$$\displaystyle{\psi _{n}\left(t,x,y\right) = u\left(t,x\right) - v\left(t,y\right) -\rho _{n}\left(t,x,y\right)\text{, with }\left(t,x,y\right) \in \left[0,T\right] \times \overline{D} \times \overline{D},}$$

where

$$\displaystyle{ \begin{array}{l} \rho _{n}\left(t,x,y\right) = \dfrac{n} {2} \left\vert x - y\right\vert ^{2} + e^{-\lambda \hat{t}}G\big(\hat{t},\hat{x},e^{\lambda \hat{t}}u(\hat{t},\hat{x})\big)\big\langle \nabla \phi \left(\hat{x}\right),x - y\big\rangle + \left\vert x -\hat{ x}\right\vert ^{4} \\ + \vert t -\hat{ t}\vert ^{4} - e^{-\lambda \hat{t}}\psi _{-}^{{\prime}}\big(e^{\lambda \hat{t}}u(\hat{t},\hat{x})\big)\big\langle \nabla \phi \left(\hat{x}\right),x - y\big\rangle. \end{array} }$$
(6.150)

Let \(\left(t_{n},x_{n},y_{n}\right)\) be a maximum point of \(\psi _{n}\).

We observe that \(u\left(t,x\right) - v\left(t,x\right) -\left\vert x -\hat{ x}\right\vert ^{4} -\vert t -\hat{ t}\vert ^{4}\) has \((\hat{t},\hat{x})\) as its unique maximum point. Then, by Lemma 6.101, we have that as \(n \rightarrow \infty \)

$$\displaystyle{ \begin{array}{l} t_{n} \rightarrow \hat{ t},\;x_{n} \rightarrow \hat{ x},\;y_{n} \rightarrow \hat{ x},\;n\left\vert x_{n} - y_{n}\right\vert ^{2} \rightarrow 0, \\ u\left(t_{n},x_{n}\right) \rightarrow u(\hat{t},\hat{x}),\;v\left(t_{n},y_{n}\right) \rightarrow v(\hat{t},\hat{x}).\end{array} }$$
(6.151)

But the domain D satisfies the uniform exterior sphere condition:

$$\displaystyle{\exists \ r_{0} > 0\text{ such that }S\big(x + r_{0}\nabla \phi \left(x\right),r_{0}\big) \cap D = \varnothing \text{, for all }x \in \mathrm{ Bd}\left(D\right),}$$

where \(S\left(x,r_{0}\right)\) denotes the closed ball of radius r 0 centered at x.

Then

$$\displaystyle{\big\vert y - x - r_{0}\nabla \phi \left(x\right)\big\vert ^{2} \geq r_{ 0}^{2}\text{, for }x \in \mathrm{ Bd}\left(D\right)\text{, }y \in \overline{D},}$$

or equivalently

$$\displaystyle{ \big\langle \nabla \phi \left(x\right),y - x\big\rangle \leq \frac{1} {2r_{0}}\left\vert y - x\right\vert ^{2}\text{ for }x \in \mathrm{ Bd}\left(D\right)\text{, }y \in \overline{D}. }$$
(6.152)
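The equivalence is just an expansion of the square: since \(\left\vert \nabla \phi \left(x\right)\right\vert = 1\) on \(\mathrm{Bd}\left(D\right)\),

$$\displaystyle{\big\vert y - x - r_{0}\nabla \phi \left(x\right)\big\vert ^{2} = \left\vert y - x\right\vert ^{2} - 2r_{ 0}\big\langle \nabla \phi \left(x\right),y - x\big\rangle + r_{0}^{2},}$$

and comparing with \(r_{0}^{2}\) yields (6.152).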

If \(x_{n} \in \mathrm{ Bd}\left(D\right)\), we have, using the form of ρ n given by (6.150) and (6.152), that

$$\displaystyle\begin{array}{rcl} & & \tilde{\Gamma }\big(t_{n},x_{n},u\left(t_{n},x_{n}\right),D_{x}\rho _{n}\left(t_{n},x_{n},y_{n}\right)\big) =\tilde{ \Gamma }\Big(t_{n},x_{n},u\left(t_{n},x_{n}\right),n\left(x_{n} - y_{n}\right) {}\\ & & \quad \quad + e^{-\lambda \hat{t}}G\big(\hat{t},\hat{x},e^{\lambda \hat{t}}u(\hat{t},\hat{x})\big)\nabla \phi \left(\hat{x}\right) + 4\left\vert x_{ n} -\hat{ x}\right\vert ^{2}\left(x_{ n} -\hat{ x}\right) {}\\ & & \quad \quad - e^{-\lambda \hat{t}}\psi _{ -}^{{\prime}}\big(e^{\lambda \hat{t}}u(\hat{t},\hat{x})\big)\nabla \phi \left(\hat{x}\right)\Big) {}\\ & & \quad \geq - \dfrac{n} {2r_{0}}\left\vert x_{n} - y_{n}\right\vert ^{2} + e^{-\lambda \hat{t}}G\big(\hat{t},\hat{x},e^{\lambda \hat{t}}u(\hat{t},\hat{x})\big)\big\langle \nabla \phi \left(\hat{x}\right),\nabla \phi \left(x_{ n}\right)\big\rangle {}\\ & & \quad \quad - e^{-\lambda t_{n} }G\big(t_{n},x_{n},e^{\lambda t_{n} }u\left(t_{n},x_{n}\right)\big) + 4\left\vert x_{n} -\hat{ x}\right\vert ^{2}\big\langle \nabla \phi \left(x_{ n}\right),x_{n} -\hat{ x}\big\rangle {}\\ & & \quad \quad - e^{-\lambda \hat{t}}\psi _{ -}^{{\prime}}\big(e^{\lambda \hat{t}}u(\hat{t},\hat{x})\big)\big\langle \nabla \phi \left(\hat{x}\right),\nabla \phi \left(x_{ n}\right)\big\rangle. {}\\ \end{array}$$

Then (6.151) and the lower semicontinuity property of ψ imply that along a subsequence {x n } which belongs to ∂ D:

$$\displaystyle\begin{array}{rcl} \liminf \limits _{n\rightarrow \infty }\Big[\tilde{\Gamma }\left(t_{n},x_{n},u\left(t_{n},x_{n}\right),D_{x}\rho _{n}\left(t_{n},x_{n},y_{n}\right)\right)+\delta + e^{-\lambda t_{n} }\psi _{-}^{{\prime}}\left(e^{\lambda t_{n} }u(t_{n},x_{n})\right)\Big] > 0.& &{}\end{array}$$
(6.153)

Analogously if y n  ∈ ∂ D we infer

$$\displaystyle\begin{array}{rcl} \limsup \limits _{n\rightarrow \infty }\!\Big[\!\tilde{\Gamma }\left(t_{n},y_{n},v\left(t_{n},y_{n}\right),-D_{y}\rho _{n}\left(t_{n},x_{n},y_{n}\right)\right)-\delta +e^{-\lambda t_{n} }\psi _{+}^{{\prime}}\left(e^{\lambda t_{n} }v(t_{n},y_{n})\right)\!\Big]\! < 0.& &{}\end{array}$$
(6.154)

From Lemma 6.104 we deduce that there exists

$$\displaystyle{\left(p,X,Y \right) \in \mathbb{R} \times \mathbb{S}^{d} \times \mathbb{S}^{d},}$$

such that

$$\displaystyle{\begin{array}{c} \big(p,D_{x}\rho _{n}\left(t_{n},x_{n},y_{n}\right),X\big) \in \overline{\mathcal{P}}^{2,+}u(t_{n},x_{n}), \\ \big(p,-D_{y}\rho _{n}\left(t_{n},x_{n},y_{n}\right),Y \big) \in \overline{\mathcal{P}}^{2,-}v(t_{n},y_{n}), \end{array} }$$

and

$$\displaystyle{ \left(\begin{array}{*{10}c} X & \ \ 0\\ 0 &-Y \end{array} \right) \leq A+ \frac{1} {n}A^{2}, }$$
(6.155)

where \(A = D_{x,y}^{2}\rho _{n}\left(t_{n},x_{n},y_{n}\right)\). From (6.150) we have

$$\displaystyle{\begin{array}{l} A = n\left(\begin{array}{*{10}c} \ \ I &\ -I\\ -I & \ \ \ I \end{array} \right) + O\big(\left\vert x_{n} -\hat{ x}\right\vert ^{2}\big), \\ A^{2} = 2n^{2}\left(\begin{array}{*{10}c} \ \ I &\ -I\\ -I & \ \ \ I \end{array} \right) + O\big(n\left\vert x_{n} -\hat{ x}\right\vert ^{2} + \left\vert x_{n} -\hat{ x}\right\vert ^{4}\big). \end{array}}$$

Then (6.155) becomes

$$\displaystyle{ \left(\begin{array}{*{10}c} X & \ \ 0\\ 0 &-Y \end{array} \right) \leq 3n\left(\begin{array}{*{10}c} \ \ I &\ -I\\ -I & \ \ \ I \end{array} \right)+\delta _{n}\left(\begin{array}{*{10}c} I &\ \ 0\\ 0 &\ \ I \end{array} \right), }$$
(6.156)

where δ n  → 0.
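Inequality (6.156) follows from the expansions of A and \(A^{2}\) above: since \(x_{n} \rightarrow \hat{ x}\),

$$\displaystyle{A + \frac{1} {n}A^{2} = 3n\left(\begin{array}{*{10}c} \ \ I &\ -I\\ -I & \ \ \ I \end{array} \right) + O\Big(\left\vert x_{n} -\hat{ x}\right\vert ^{2} + \frac{1} {n}\left\vert x_{n} -\hat{ x}\right\vert ^{4}\Big),}$$

and the remainder term is bounded in norm by some \(\delta _{n} \rightarrow 0\).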

Then from (6.147), (6.148) together with (6.154) and (6.153), we deduce that for n large enough

$$\displaystyle{p +\tilde{ \Phi }\big(t_{n},x_{n},u\left(t_{n},x_{n}\right),D_{x}\rho _{n}\left(t_{n},x_{n},y_{n}\right),X\big) + e^{-\lambda t_{n} }\varphi _{-}^{{\prime}}\big(e^{\lambda t_{n} }u(t_{n},x_{n})\big) \leq 0,}$$

and

$$\displaystyle\begin{array}{rcl} & & p +\tilde{ \Phi }\big(t_{n},y_{n},v\left(t_{n},y_{n}\right),-D_{y}\rho _{n}\left(t_{n},x_{n},y_{n}\right),Y \big) + e^{-\lambda t_{n} }\varphi _{+}^{{\prime}}\big(e^{\lambda t_{n} }v(t_{n},y_{n})\big) {}\\ & & \geq \dfrac{\varepsilon } {(T - t_{n})^{2}}. {}\\ \end{array}$$

Subtracting the last two inequalities, we deduce that

$$\displaystyle{ \begin{array}{l} \dfrac{\varepsilon } {(T - t_{n})^{2}} \\ \leq \tilde{ \Phi }\big(t_{n},y_{n},v\left(t_{n},y_{n}\right),-D_{y}\rho _{n}\left(t_{n},x_{n},y_{n}\right),Y \big) + e^{-\lambda t_{n}}\varphi _{+}^{{\prime}}\big(e^{\lambda t_{n}}v(t_{n},y_{n})\big) \\ \ \ -\tilde{ \Phi }\big(t_{n},x_{n},u\left(t_{n},x_{n}\right),D_{x}\rho _{n}\left(t_{n},x_{n},y_{n}\right),X\big) - e^{-\lambda t_{n}}\varphi _{-}^{{\prime}}\big(e^{\lambda t_{n}}u(t_{n},x_{n})\big).\end{array} }$$
(6.157)

By (6.149) and (6.151) there exists an N ≥ 1 such that for all n ≥ N, the above holds together with

$$\displaystyle{ u(t_{n},x_{n}) > v(t_{n},y_{n}), }$$
(6.158)

and consequently

$$\displaystyle{e^{-\lambda t_{n} }\varphi _{-}^{{\prime}}\big(e^{\lambda t_{n} }u(t_{n},x_{n})\big) \geq e^{-\lambda t_{n} }\varphi _{+}^{{\prime}}\big(e^{\lambda t_{n} }v(t_{n},y_{n})\big).}$$

Combining this with (6.157), we deduce that

$$\displaystyle{\begin{array}{l} \dfrac{\varepsilon } {(T - t_{n})^{2}} \leq \tilde{ \Phi }\left(t_{n},y_{n},v\left(t_{n},y_{n}\right),-D_{y}\rho _{n}\left(t_{n},x_{n},y_{n}\right),Y \right) \\ \;\;\;\;\;\; -\tilde{ \Phi }\left(t_{n},x_{n},u\left(t_{n},x_{n}\right),D_{x}\rho _{n}\left(t_{n},x_{n},y_{n}\right),X\right) \\ \;\;\;\;\; \leq \dfrac{1} {2}\mathrm{Tr}\big[\left(gg^{{\ast}}\right)(t_{n},x_{n})X -\left(gg^{{\ast}}\right)(t_{n},y_{n})Y \big] + Cn\vert x_{n} - y_{n}\vert ^{2} +\omega _{n},\end{array} }$$

where \(\omega _{n} \rightarrow 0\) as \(n \rightarrow \infty \). Note that we have used the assumption (6.145), (6.151), (6.158), the fact that \(r\mapsto \lambda r - F(t,x,r,z)\) is increasing, and the Lipschitz continuity of F with respect to its last variable.

From (6.156), \(\forall \) \(q,\tilde{q} \in \mathbb{R}^{d}\),

$$\displaystyle{\left\langle Xq,q\right\rangle -\left\langle Y \tilde{q},\tilde{q}\right\rangle \leq 3n\left\vert q -\tilde{ q}\right\vert ^{2} +\big (\left\vert q\right\vert ^{2} + \left\vert \tilde{q}\right\vert ^{2}\big)\delta _{ n}.}$$

Hence by the same computation as in Lemma 6.97 we obtain

$$\displaystyle{\mathrm{Tr}\big[\left(\mathit{gg}^{{\ast}}\right)(t_{ n},x_{n})X -\left(\mathit{gg}^{{\ast}}\right)(t_{ n},y_{n})Y \big]}$$
$$\displaystyle{\leq 3C\ n\left\vert x_{n} - y_{n}\right\vert ^{2} +\big (\left\vert g(t_{ n},x_{n})\right\vert ^{2} + \left\vert g(t_{ n},y_{n})\right\vert ^{2}\big)\delta _{ n},}$$

and consequently taking the limit in the above set of inequalities yields

$$\displaystyle{ \dfrac{\varepsilon } {(T -\hat{ t})^{2}} \leq 0,}$$

which is a contradiction.

Then

$$\displaystyle{u\left(t,x\right) \leq v\left(t,x\right),\;\forall \left(t,x\right) \in \left[0,T\right] \times \overline{D}.}$$

 ■ 

6.6 Annex E: Hints for Some Exercises

Chapter  1

Exercise 1.7

By Proposition 1.34 we have

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left(g\left(B_{T}\right)\vert \mathcal{F}_{t}\right)& =& \mathbb{E}\left(g\left(\frac{B_{T} - B_{t}} {\sqrt{T - t}} \sqrt{T - t} + B_{t}\right)\vert \mathcal{F}_{t}\right) {}\\ & =& \int _{\mathbb{R}}g\left(x\sqrt{T - t} + B_{t}\right)\rho \left(x\right)\mathit{dx}. {}\\ \end{array}$$

Setting here \(g\left(u\right) = \mathbf{1}_{(-\infty,a]}\left(u\right)\), the second assertion follows.
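To make the last step explicit (assuming, as is standard, that ρ denotes the standard normal density), the choice \(g = \mathbf{1}_{(-\infty,a]}\) yields

$$\displaystyle{\mathbb{P}\left(B_{T} \leq a\,\vert \,\mathcal{F}_{t}\right) =\int _{\mathbb{R}}\mathbf{1}_{(-\infty,a]}\left(x\sqrt{T - t} + B_{t}\right)\rho \left(x\right)\mathit{dx} = \Phi \Big(\frac{a - B_{t}} {\sqrt{T - t}}\Big),}$$

where \(\Phi\) here denotes the standard normal distribution function.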

Exercise 1.15: Let N > 0 and let \(\left(\varepsilon _{n}\right)_{n\geq 1}\) be a sequence with \(\varepsilon _{n} \searrow 0\) as \(n \rightarrow \infty \). Then

$$\displaystyle\begin{array}{rcl} \mathbb{P}\left(\limsup _{n\rightarrow +\infty }\frac{\left\vert B_{t+\varepsilon _{n}} - B_{t}\right\vert } {\varepsilon _{n}} > N\right)& =& \mathbb{P}\left(\bigcap \nolimits _{n\geq 1} \downarrow \left(\bigcup \nolimits _{k\geq n}\left(\left\vert B_{t+\varepsilon _{k}} - B_{t}\right\vert > N\varepsilon _{k}\right)\right)\right) {}\\ & =& \lim _{n\rightarrow \infty }\mathbb{P}\left(\bigcup \nolimits _{k\geq n}\left(\left\vert B_{t+\varepsilon _{k}} - B_{t}\right\vert > N\varepsilon _{k}\right)\right) {}\\ & \geq & \liminf _{n\rightarrow +\infty }\mathbb{P}\left(\left\vert B_{t+\varepsilon _{n}} - B_{t}\right\vert > N\varepsilon _{n}\right) {}\\ & =& \liminf _{n\rightarrow +\infty }\mathbb{P}\left(\left\vert B_{1}\right\vert > N\sqrt{\varepsilon _{n}}\right) {}\\ & =& \mathbb{P}\left(\left\vert B_{1}\right\vert > 0\right) {}\\ & =& 1. {}\\ \end{array}$$

Exercise 1.16: Let us write \(S_{n}^{\left(p\right)} = S_{\Delta _{n}}^{\left(p\right)}\left(B_{ \cdot };\left[s,t\right]\right)\). The results are consequences of the following inequalities (see Proposition 1.86 for the first one) combined with Proposition 1.14 and Proposition 1.7:

$$\displaystyle{\mathbb{E}\left[S_{n}^{\left(2\right)} -\left(t - s\right)\right]^{2} = \text{ Var}\left(S_{ n}^{\left(2\right)}\right) \leq 2\left\Vert \Delta _{ n}\right\Vert \left(t - s\right)}$$

and

$$\displaystyle\begin{array}{rcl} S_{\Delta _{n}}^{\left(p\right)}& \leq & S_{ \Delta _{n}}^{\left(2\right)} \times \left(\mathbf{m}_{ B}\left(\left\Vert \Delta _{n}\right\Vert \right)\right)^{p-2},\;\;\text{ for }p > 2,\;\;\text{ and} {}\\ S_{\Delta _{n}}^{\left(2\right)}& \leq & S_{ \Delta _{n}}^{\left(p\right)} \times \left(\mathbf{m}_{ B}\left(\left\Vert \Delta _{n}\right\Vert \right)\right)^{2-p},\;\;\text{ for }1 \leq p < 2, {}\\ \end{array}$$

where

$$\displaystyle{\mathbf{m}_{B}\left(\delta \right) =\sup \left\{\left\vert B_{u} - B_{v}\right\vert: u,v \in \left[s,t\right],\;\left\vert u - v\right\vert \leq \delta \right\},}$$

is the modulus of continuity of \(\left\{B_{u}: u \in \left[s,t\right]\right\}\).

Exercise 1.17: Applying the inequality (1.25) with \(\alpha = \frac{1} {2} - \frac{\varepsilon } {2}\) and \(p = \dfrac{2} {\varepsilon }\), we deduce that for all \(s,t \in [0,T]\)

$$\displaystyle{\left\vert X_{t}\left(\omega \right) - X_{s}\left(\omega \right)\right\vert \leq \xi \left(\omega \right)T^{\varepsilon }\left\vert t - s\right\vert ^{\frac{1} {2} -\varepsilon },}$$

where

$$\displaystyle{\xi \left(\omega \right) =\xi _{\varepsilon,T}\left(\omega \right) = \left\{\begin{array}{ll} 0, &\text{ if }T = 0, \\ \dfrac{C_{\varepsilon }} {T^{\varepsilon }}\left(\int _{0}^{T}\int _{ 0}^{T}\dfrac{\left\vert B_{u}\left(\omega \right) - B_{r}\left(\omega \right)\right\vert ^{\frac{2} {\varepsilon } }} {\left\vert u - r\right\vert ^{\frac{1} {\varepsilon } }} \mathit{du}\mathit{dr}\right)^{ \frac{\varepsilon }{ 2} },&\text{ if }T > 0. \end{array} \right.}$$

Let \(1 \leq q \leq \frac{2} {\varepsilon } \leq p\). By Lyapunov’s inequality and Minkowski’s inequality (1.24) from Exercise 1.2 we obtain

$$\displaystyle\begin{array}{rcl} \left\Vert \xi \right\Vert _{L^{q}\left(\Omega,\mathcal{F},\mathbb{P}\right)}& \leq & \left\Vert \xi \right\Vert _{L^{p}\left(\Omega,\mathcal{F},\mathbb{P}\right)} {}\\ & =& \frac{C_{\varepsilon }} {T^{\varepsilon }}\left\Vert \int _{0}^{T}\int _{ 0}^{T}\dfrac{\left\vert B_{u} - B_{r}\right\vert ^{\frac{2} {\varepsilon } }} {\left\vert u - r\right\vert ^{\frac{1} {\varepsilon } }} \mathit{du}\mathit{dr}\right\Vert _{L^{\varepsilon p/2}\left(\Omega,\mathcal{F},\mathbb{P}\right)}^{ \frac{\varepsilon }{ 2} } {}\\ & \leq & \frac{C_{\varepsilon }} {T^{\varepsilon }}\left(\int _{0}^{T}\int _{ 0}^{T}\frac{\left\Vert \left\vert B_{u} - B_{r}\right\vert ^{\frac{2} {\varepsilon } }\right\Vert _{L^{\varepsilon p/2}\left(\Omega,\mathcal{F},\mathbb{P}\right)}} {\left\vert u - r\right\vert ^{\frac{1} {\varepsilon } }} \mathit{du}\mathit{dr}\right)^{ \frac{\varepsilon }{ 2} } {}\\ & =& C_{\varepsilon,p}, {}\\ \end{array}$$

since

$$\displaystyle\begin{array}{rcl} \left\Vert \left\vert B_{u} - B_{r}\right\vert ^{\frac{2} {\varepsilon } }\right\Vert _{L^{\varepsilon p/2 }\left(\Omega,\mathcal{F},\mathbb{P}\right)}& =& \left(\mathbb{E}\left\vert B_{u} - B_{r}\right\vert ^{p}\right)^{\frac{2} {\varepsilon p} } {}\\ & =& \left(C_{p}\left\vert u - r\right\vert ^{p/2}\right)^{\frac{2} {\varepsilon p} }. {}\\ \end{array}$$

Exercise 1.19: Deduce from the proof of Theorem 1.40 that for any \(0 <\delta < b/a\), there exists a constant K = K(M, T, a, b, δ) such that for all \(\varepsilon,\lambda > 0\),

$$\displaystyle{\mathbb{P}\left(\mathbf{m}_{X^{n}}\left(\varepsilon;\left[0,T\right]\right) \geq \lambda \right) \leq \frac{1} {\lambda ^{a}}\mathbb{E}\left(\mathbf{m}_{X^{n}}^{a}\left(\varepsilon;\left[0,T\right]\right)\right) \leq \frac{K} {\lambda ^{a}} \varepsilon ^{b-a\delta }}$$

and conclude that (ii) in Theorem 1.46 is satisfied.

Exercise 1.20 \(\left(2\right)\) By Lemma 1.73 and Proposition 1.65, we infer that \((U_{t}^{\left(\lambda \right)})_{t\in \left[0,T\right]}\) and \((Z_{t}^{\left(\lambda \right)})_{t\in \left[0,T\right]}\) are continuous martingales.

\(\left(3\right)\) Let the stopping time \(\tau _{n} =\inf \left\{t \geq 0: \left\vert M_{t}\right\vert + \left\langle M\right\rangle _{t} \geq n\right\}\). Then \(\left\{Z_{t\wedge \tau _{n}}^{\left(\lambda \right)};t \geq 0\right\}\) is a martingale and for all 0 ≤ s ≤ t

$$\displaystyle{\mathbb{E}^{\mathcal{F}_{s} }Z_{t}^{\left(\lambda \right)} \leq \liminf _{ n\rightarrow +\infty }\mathbb{E}^{\mathcal{F}_{s} }Z_{t\wedge \tau _{n}}^{\left(\lambda \right)} =\liminf _{ n\rightarrow +\infty }Z_{s\wedge \tau _{n}}^{\left(\lambda \right)} = Z_{ s}^{\left(\lambda \right)}.}$$

\(\left(5\right)\) By Proposition 1.59 with \(\varphi \left(x\right) = e^{ax}\), \(\left\{e^{aM_{t\wedge \theta }};t \geq 0\right\}\) is a sub-martingale and the inequality follows by Doob’s inequality (Theorem 1.60) and Hölder’s inequality.

\(\left(6\right)\) The inequality yields that \(\left\{Z_{t\wedge \theta _{n}}^{\left(\lambda \right)};n \in \mathbb{N}^{{\ast}}\right\}\) is uniformly integrable and consequently \(\mathbb{E}\ Z_{t}^{\left(\lambda \right)} =\lim _{n\rightarrow \infty }\mathbb{E}Z_{t\wedge \theta _{n}}^{\left(\lambda \right)} = 1\).

\(\left(7\right)\) In the inequality from \(\left(6\right)\) with \(A = \Omega \), one passes to the limit as \(n \rightarrow \infty \) and then lets \(\lambda \nearrow 1\).

\(\left(8\right)\) By the Cauchy–Schwarz inequality, \(\mathbb{E}\left(e^{\frac{1} {2} M_{T}}\right) = \mathbb{E}\left(\sqrt{Z_{T}}\,e^{\frac{1} {4} \left\langle M\right\rangle _{T}}\right) \leq \left(\mathbb{E}\ Z_{T}\right)^{1/2}\left(\mathbb{E}\left(e^{\frac{1} {2} \left\langle M\right\rangle _{T}}\right)\right)^{1/2} \leq \left(\mathbb{E}\left(e^{\frac{1} {2} \left\langle M\right\rangle _{T}}\right)\right)^{1/2} < \infty \).

Chapter  2

Exercise 2.1: \(\left(\Leftarrow \right)\): From the theory of the Riemann–Stieltjes integral we know that if \(g \in \mathit{BV }\left[0,T\right]\), then \(S_{n}\left(f\right)\) converges (to the Riemann–Stieltjes integral \(\int _{0}^{T}f\left(t\right)dg\left(t\right)\)).

\(\left(\Rightarrow \right)\): Let \(S_{n}\left(f\right)\) be convergent for all \(f \in C\left[0,T\right]\). Then \(S_{n}: C\left[0,T\right] \rightarrow \mathbb{R}\) is a bounded linear operator such that

$$\displaystyle{\sup _{n\geq 1}\left\vert S_{n}\left(f\right)\right\vert < \infty,}$$

and by the Banach–Steinhaus Theorem

$$\displaystyle{\sup _{n\geq 1}\left\Vert S_{n}\right\Vert = M < \infty,}$$

where \(\left\Vert S_{n}\right\Vert =\sup \left\{\left\vert S_{n}\left(f\right)\right\vert: \left\vert \!\left\vert \!\left\vert f\right\vert \!\right\vert \!\right\vert _{T} \leq 1\right\}\). For a fixed n we can construct \(h_{n} \in C\left[0,T\right]\) such that \(h_{n}\left(t_{i}^{n}\right) =\mathrm{ sign}\left\{g\left(t_{i+1}^{n}\right) - g\left(t_{i}^{n}\right)\right\}\) and \(\left\vert \!\left\vert \!\left\vert h_{n}\right\vert \!\right\vert \!\right\vert _{T} = 1\). Hence

$$\displaystyle{\sum _{i=0}^{n-1}\left\vert g\left(t_{ i+1}^{n}\right) - g\left(t_{ i}^{n}\right)\right\vert = S_{ n}\left(h_{n}\right) \leq \left\Vert S_{n}\right\Vert \leq M,}$$

and as a consequence g is of finite variation.

Note (Banach–Steinhaus Theorem).

Let X be a Banach space and let Y be a normed linear space. Let S i : X → Y, i ∈ I, be a family of bounded linear operators. If for each x ∈ X the set \(\left\{S_{i}\left(x\right): i \in I\right\}\) is bounded then the set \(\left\{\left\Vert S_{i}\right\Vert: i \in I\right\}\) is bounded.

Remark: This is not a contradiction since the subsequence \(\left\{n_{k}\right\}\) depends on f.

Exercise 2.3: If \(\mathcal{E}\) is the linear subspace of \(L^{2}(\mathbb{R}_{+})\) consisting of those functions f of the form:

$$\displaystyle{f =\sum _{ i=0}^{n-1}a_{ i}\mathbf{1}_{\left[t_{i},t_{i+1}\right[},\quad n \in \mathbb{N}^{{\ast}};0 = t_{ 0} < t_{1} < \cdots < t_{n};\ a_{i} \in \mathbb{R},\ i \leq n,}$$

then H[B] is the closure of \(\{\mathbb{B}(f),\,f \in \mathcal{E}\}\), which coincides with \(\{\mathbb{B}(f),\,f \in L^{2}(\mathbb{R}_{+})\}\). Moreover the set \(\{B_{t} = \mathbb{B}(\mathbf{1}_{[0,t]}),\,t > 0\}\) is total in H[B].

Exercise 2.4: Let \(s \in \left[0,T\right]\). We have

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left[\left(\int _{0}^{T}f(t)\,\mathit{dB}_{ t} +\int _{ 0}^{T}f^{{\prime}}(t)B_{ t}\,\mathit{dt}\right)\,B_{s}\right]& =& \int _{0}^{s}f(t)\,\mathit{dt} +\int _{ 0}^{T}f^{{\prime}}(t)(s \wedge t)\,\mathit{dt} {}\\ & =& f(T)s. {}\\ \end{array}$$

Since \(\mathbb{E}\vert B_{t}\vert = \sqrt{\frac{2t} {\pi }}\), it follows that \(\int _{0}^{\infty }\vert f^{{\prime}}(t)\vert \vert B_{t}\vert \,\mathit{dt} < \infty \quad a.s.\), and

$$\displaystyle\begin{array}{rcl} \int _{0}^{\infty }f^{2}(t)\,\mathit{dt}& =& \int _{ 0}^{\infty }\left(\int _{ 0}^{\infty }\mathbf{1}_{ [t,\infty [}\left(u\right)f^{{\prime}}(u)\,\mathit{du}\int _{ 0}^{\infty }\mathbf{1}_{ [t,\infty [}\left(v\right)f^{{\prime}}(v)\,\mathit{dv}\right)\,\mathit{dt} {}\\ & \leq & \int _{0}^{\infty }\int _{ 0}^{\infty }(u \wedge v)\vert f^{{\prime}}(u)\vert \vert f^{{\prime}}(v)\vert \,\mathit{du}\,\mathit{dv} {}\\ & \leq & \int _{0}^{\infty }\int _{ 0}^{\infty }\sqrt{uv}\vert f^{{\prime}}(u)\vert \vert f^{{\prime}}(v)\vert \,\mathit{du}\,\mathit{dv} {}\\ & =& \left(\int _{0}^{\infty }\sqrt{u}\vert f^{{\prime}}(u)\vert \,\mathit{du}\right)^{2} < \infty. {}\\ \end{array}$$

Exercise 2.5: Note that

$$\displaystyle{g^{{\prime}}\left(x\right) = 30\left(x - 1\right)^{2}\left(2 - x\right)^{2} \geq 0\quad \text{ and}\quad g^{{\prime\prime}}\left(x\right) = 60\left(x - 1\right)\left(2 - x\right)\left(3 - 2x\right),}$$

and for \(x \in \left[1,2\right]\)

$$\displaystyle{0 \leq \left(x - 1\right)\left(2 - x\right) \leq \left(\frac{x - 1 + 2 - x} {2} \right)^{2} = \frac{1} {4},}$$

and therefore for all \(x \in \left[1,2\right]\),

$$\displaystyle{0 \leq g^{{\prime}}\left(x\right) \leq 2,\quad \left\vert g^{{\prime\prime}}\left(x\right)\right\vert \leq 15.}$$

The relation (2.67) follows by taking the limit as \(\varepsilon \rightarrow 0\) in Itô’s formula for \(\varphi _{\varepsilon }\left(X_{t}\right)\).
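The two bounds can also be checked numerically; the sketch below evaluates \(g^{\prime}\) and \(g^{\prime\prime}\) on a grid of \(\left[1,2\right]\) (the grid resolution is arbitrary):

```python
def g1(x):
    """g'(x) = 30 (x - 1)^2 (2 - x)^2."""
    return 30 * (x - 1) ** 2 * (2 - x) ** 2

def g2(x):
    """g''(x) = 60 (x - 1)(2 - x)(3 - 2x)."""
    return 60 * (x - 1) * (2 - x) * (3 - 2 * x)

xs = [1 + i / 10_000 for i in range(10_001)]   # grid on [1, 2]
assert all(0 <= g1(x) <= 2 for x in xs)
assert all(abs(g2(x)) <= 15 for x in xs)
```

In fact the maximum of \(g^{\prime}\) on \(\left[1,2\right]\) is \(30/16 = 15/8\), attained at \(x = 3/2\).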

Chapter 3

Exercise 3.1: Consider the equation

$$\displaystyle{ \begin{array}{l} X_{t} =\xi +\int _{0}^{t}F\left(s,X_{s}\right)\mathit{ds} \\ \quad \quad \quad \quad + \int _{0}^{t}\left(-\mu \left(s\right) -\dfrac{m_{p}} {2} \ell^{2}\left(s\right) -\dfrac{a} {p}\right)X_{s}\mathit{ds} +\int _{ 0}^{t}G\left(s,X_{s}\right)\mathit{dB}_{s}\text{.} \end{array} }$$
(6.159)

By Theorem 3.27, it has a unique solution \(X \in S_{d}^{0}\), and from the inequality (3.18) we clearly have (3.131) with

$$\displaystyle{ U_{t} = \left(-\mu \left(t\right) -\dfrac{m_{p}} {2} \ell^{2}\left(t\right) -\dfrac{a} {p}\right)X_{t}, }$$
(6.160)

where \(X \in S_{d}^{0}\) is the solution of the Eq. (6.159). The inequality (3.132) shows us that \(Y _{t} = e^{at} \tfrac{\left\vert X_{t}\right\vert ^{p}} {\left(1+\delta \left\vert X_{t}\right\vert ^{2}\right)^{p/2}}\) is a super-martingale, and (3.134) follows.

Exercise 3.3: First deduce the following from the stochastic Gronwall inequalities (Annex C)

$$\displaystyle{\mathbb{E}\sup _{t\in \left[0,T\right]}\left\vert X_{t}^{\varepsilon } - X_{ t}\right\vert ^{p} \leq C_{ p}\mathbb{E}\left(\int _{0}^{T}\left\vert F_{\varepsilon }\left(r,X_{ r}\right) - F\left(r,X_{r}\right)\right\vert \mathit{dr}\right)^{p}e^{C_{p}\int _{0}^{T}\left[\mu ^{+}\left(r\right)+\ell^{2}\left(r\right)\right]\mathit{dr} }.}$$

Exercise 3.9: \(\left(1i\right)\) We clearly have

$$\displaystyle{\mathbb{E}\ \exp \left(C\left\vert x + B_{t}\right\vert ^{b}\right) = \frac{1} {\left(2\pi \right)^{d/2}}\int _{\mathbb{R}^{d}}\exp \left(C\left\vert x + \sqrt{t}u\right\vert ^{b} -\frac{\left\vert u\right\vert ^{2}} {2} \right)\mathit{du} < \infty,}$$

for all C, t ≥ 0 if and only if 0 ≤ b < 2.

\(\left(1\mathit{ii}\right)\) If 0 ≤ a < 2, then by Jensen’s inequality

$$\displaystyle{\mathbb{E}\ \exp \left(C\int _{0}^{t}\left\vert x + B_{ s}\right\vert ^{a}\mathit{ds}\right) \leq \frac{1} {t}\int _{0}^{t}\mathbb{E}\exp \left(Ct\left\vert x + B_{ s}\right\vert ^{a}\right)\mathit{ds} < \infty.}$$

If − 1 < a < 0, then by Corollary 2.30 we have

$$\displaystyle{\left\vert x + B_{t}\right\vert ^{a+2} = \left(a + 2\right)\int _{ 0}^{t}\left\vert x+B_{ s}\right\vert ^{a}\left\langle x+B_{ s},\mathit{dB}_{s}\right\rangle + \frac{\left(a+2\right)\left(a+1\right)} {2} \int _{0}^{t}\left\vert x + B_{ s}\right\vert ^{a}\mathit{ds}.}$$

Hence by (2.62-b)

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}\exp \left(C\int _{0}^{t}\left\vert x + B_{ s}\right\vert ^{a}\mathit{ds}\right) {}\\ & & \leq \left[\mathbb{E}\exp \left(C_{1}\left\vert x + B_{t}\right\vert ^{a+2}\right)\right]^{1/2}\left[\mathbb{E}\exp \left(C_{ 2}\int _{0}^{t}\left\vert x + B_{ s}\right\vert ^{a}\left\langle x + B_{ s},\mathit{dB}_{s}\right\rangle \right)\right]^{1/2} {}\\ & & \leq \left[\mathbb{E}\exp \left(C_{1}\left\vert x + B_{t}\right\vert ^{a+2}\right)\right]^{1/2}\left[\mathbb{E}\exp \left(2C_{ 2}\int _{0}^{t}\left\vert x + B_{ s}\right\vert ^{2a+2}\mathit{ds}\right)\right]^{1/2} {}\\ & & < \infty. {}\\ \end{array}$$

\(\left(1\mathit{iii}\right)\)

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}\left\{\exp \left[C\log ^{2}\left(\vert x + B_{ t}\vert \right)\right]\right\} {}\\ & & \geq \int _{\left[0,1\right]^{k}}e^{C\log ^{2}\left\vert u\right\vert } \frac{1} {\left(2\pi t\right)^{k/2}}e^{-\left\vert u-x\right\vert ^{2}/2t }\mathit{du} {}\\ & & \geq \frac{1} {\left(2\pi t\right)^{k/2}}e^{-\left(k+\left\vert x\right\vert \right)^{2}/2t }\int _{\left[0,1\right]^{k}}e^{\frac{C} {4} \log ^{2}\left(u_{ 1}^{2}+\cdots +u_{ k}^{2}\right) }\mathit{du}_{1}\ldots \mathit{du}_{k} {}\\ & & \geq C_{k,t,x}\int _{0}^{1}e^{C\log ^{2}u_{ 1}}\mathit{du}_{1} {}\\ & & = C_{k,t,x}\int _{0}^{\infty }e^{Cy^{2} }e^{-y}\mathit{dy} {}\\ & & = \infty. {}\\ \end{array}$$
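The last equality above is the change of variable \(u_{1} = e^{-y}\), made explicit:

```latex
% change of variable u_1 = e^{-y}, so that du_1 = -e^{-y} dy
\int_{0}^{1} e^{C \log^{2} u_{1}}\, du_{1}
   = \int_{0}^{\infty} e^{C y^{2}}\, e^{-y}\, dy
   = \infty,
\qquad \text{since } C y^{2} - y \to \infty \text{ as } y \to \infty\ (C > 0).
```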

\(\left(1\mathit{iv}\right)\) Observe that for every α ∈ ]0, 1[, there exists a C α  > 0 such that

$$\displaystyle{\log ^{2}\left\vert x\right\vert \leq C_{\alpha } + \left\vert x\right\vert + \left\vert x\right\vert ^{-\alpha }}$$

and consequently \(\left(1\mathit{iv}\right)\) follows from \(\left(1\mathit{ii}\right)\).
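The constant \(C_{\alpha }\) is not made explicit; as an illustration one can search for it numerically. In the sketch below \(\alpha = 1/2\), the grid is arbitrary, and the value \(17\) for \(C_{1/2}\) is only what this particular search suggests:

```python
import math

def gap(x, alpha=0.5):
    """log^2(x) - x - x^(-alpha); the inequality asserts an upper bound."""
    return math.log(x) ** 2 - x - x ** (-alpha)

# log-spaced grid; outside it the terms -x and -x^(-alpha) dominate
grid = [10 ** (k / 100) for k in range(-800, 401)]   # from 1e-8 to 1e4
sup_gap = max(gap(x) for x in grid)
print(sup_gap)   # the search suggests C_{1/2} = 17 suffices
assert sup_gap < 17
```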

\(\left(2\right)\). Existence follows in both cases from Lemma 2.49 and Girsanov’s Theorem 2.51. Uniqueness in law on \(\left(\Omega,\mathcal{F}_{n\wedge \tilde{T}_{n}}\right)\) (resp. \(\left(\Omega,\mathcal{F}_{n\wedge \hat{T}_{n}}\right)\)) follows again from Girsanov’s Theorem, where

$$\displaystyle\begin{array}{rcl} \tilde{T}_{n}& =& \inf \left\{t > 0: \int _{0}^{t}\left\vert g\left(X_{ s}\right)\right\vert ^{2}\log ^{2}(\vert X_{ s}\vert )\mathit{ds} > n\right\}, {}\\ \hat{T}_{n}& =& \inf \left\{t > 0: \int _{0}^{t}\left\vert g\left(X_{ s}\right)\right\vert ^{2}\vert X_{ s}\vert ^{a}\mathit{ds} > n\right\}. {}\\ \end{array}$$

It remains to note that \(\tilde{T}_{n} \rightarrow \infty \) and \(\hat{T}_{n} \rightarrow \infty \) as \(n \rightarrow \infty \).

Exercise 3.10: The function \(F: \mathbb{R} \rightarrow \mathbb{R}\), \(F\left(x\right) = f\left(x\right)\sqrt{\left\vert x\right\vert }\mathrm{sign}\left(x\right)\), is locally monotone and satisfies \(xF\left(x\right) \leq 0\), but it is not locally Lipschitz.

Chapter 4

Exercise 4.1

  1.

    The existence and the uniqueness of the solution \(X^{n} \in S^{2}\left[0,T\right]\) follows from Theorem 3.17; by the comparison result from Proposition 3.12 we have \(X_{t}^{n+1} \geq X_{t}^{n}\), for all \(t \in \left[0,T\right]\), \(\mathbb{P}\text{ -}a.s.\)

  2.

    Let \(L\) and \(\ell\) be the Lipschitz constants of \(f\) and \(g\), respectively. We have

    $$\displaystyle{X_{t}^{n} - 1 = \left(x - 1\right) +\int _{ 0}^{t}\mathit{dK}_{ s}^{n} +\int _{ 0}^{t}g(X_{ s}^{n})\mathit{dB}_{ s},}$$

    with \(\mathit{dK}_{s}^{n} = \left[f(X_{s}^{n}) + n(X_{s}^{n})^{-}\right]\mathit{ds}\) and \(G_{s}^{n} = g(X_{s}^{n})\). Since

    $$\displaystyle{\mathit{dD}_{t}^{n} + \left(X_{ t}^{n} - 1\right)\mathit{dK}_{ t}^{n} + \left\vert G_{ t}^{n}\right\vert ^{2}\mathit{dt} \leq \mathit{dR}_{ t} + \left\vert X_{t}^{n} - 1\right\vert ^{2}\mathit{dV }_{ t},}$$

    where \(D_{t}^{n} = n\int _{0}^{t}\!\left[(X_{ s}^{n})^{-}\right]^{2}\mathit{ds} + n\int _{ 0}^{t}(X_{ s}^{n})^{-}\mathit{ds}\), \(R_{t}=\left(\dfrac{1} {2}\left\vert f\left(1\right)\right\vert ^{2}+2\left\vert g\left(1\right)\right\vert ^{2}\right)t\) and \(V _{t}=\left(L + \dfrac{1} {2} + 2\ell^{2}\right)t\), it follows by (6.78) (with p = 2 and λ = 1∕18) that

    $$\displaystyle{\mathbb{E}\sup _{t\in \left[0,T\right]}\left\vert X_{t}^{n} - 1\right\vert ^{2} + \mathbb{E}\left(\int _{ 0}^{T}n\left[(X_{ t}^{n})^{-}\right]^{2}\mathit{dt}\right) + \mathbb{E}\left(\int _{ 0}^{T}n(X_{ t}^{n})^{-}\mathit{dt}\right) \leq C_{ 2}.}$$
  3.

    Since, moreover, \(\left(X_{t}^{n}\right)^{-}\geq \left(X_{t}^{n+1}\right)^{-}\) for all \(t \in \left[0,T\right]\), \(\mathbb{P}\text{ -a.s.}\), it follows that \(\lim _{n\rightarrow \infty }\left(X_{t}^{n}\right)^{-} = 0\), \(d\mathbb{P} \otimes \mathit{dt} - a.e\). By Itô’s formula for \(\left[\left(X_{t}^{n}\right)^{-}\right]^{2}\) (see Proposition 2.35), we deduce that \(\mathbb{E}\sup \limits _{0\leq t\leq T}\ \vert \left(X_{t}^{n}\right)^{-}\vert ^{2} \rightarrow 0\) as \(n \rightarrow \infty \).

  4.

    Since

    $$\displaystyle\begin{array}{rcl} & & \left(X_{t}^{n} - X_{ t}^{m}\right)\left[f\left(X_{ t}^{n}\right) - f\left(X_{ t}^{m}\right) + n(X_{ t}^{n})^{-}- m(X_{ t}^{m})^{-}\right]\mathit{dt} {}\\ & & \qquad \quad + \left\vert g\left(X_{t}^{n}\right) - g\left(X_{ t}^{m}\right)\right\vert ^{2}\mathit{dt} {}\\ & & \qquad \leq \left(n + m\right)\left[(X_{t}^{n})^{-}(X_{ t}^{m})^{-}\right]\mathit{dt} + \left(L +\ell ^{2}\right)\left\vert X_{ t}^{n} - X_{ t}^{m}\right\vert ^{2}\mathit{dt}, {}\\ \end{array}$$

    we see, by (3.138), that

    $$\displaystyle\begin{array}{rcl} \mathbb{E}\sup _{t\in \left[0,T\right]}\left\vert X_{t}^{n} - X_{ t}^{m}\right\vert ^{2}& \leq & C\mathbb{E}\int _{ 0}^{T}\left(n + m\right)\left[(X_{ t}^{n})^{-}(X_{ t}^{m})^{-}\right]\mathit{dt} {}\\ & \leq & C\left(\mathbb{E}\sup _{t\in \left[0,T\right]}\left[(X_{t}^{m})^{-}\right]^{2}\right)^{1/2}\left[\mathbb{E}\left(\int _{ 0}^{T}n(X_{ t}^{n})^{-}\mathit{dt}\right)^{2}\right]^{1/2} {}\\ & & \,+\,C\left(\mathbb{E}\sup _{t\in \left[0,T\right]}\left[(X_{t}^{n})^{-}\right]^{2}\right)^{1/2}\left[\mathbb{E}\left(\!\int _{ 0}^{T}m(X_{ t}^{m})^{-}\mathit{dt}\right)^{2}\right]^{1/2} {}\\ & & \rightarrow 0,\quad \text{ as }n,m \rightarrow \infty. {}\\ \end{array}$$
  9.

    It is sufficient to prove that the SDE

    $$\displaystyle{X_{t} = x + \int _{0}^{t}\left(f\left(X_{ s}\right) + \left[f\left(0\right)\right]^{-}\mathbf{1}_{ X_{s}=0}\right)\mathit{ds} + \int _{0}^{t}g\left(X_{ s}\right)\mathit{dB}_{s}}$$

    has a unique positive solution \(X \in S^{2}\left[0,T\right]\). The uniqueness of positive solutions follows from

    $$\displaystyle{\begin{array}{r} \left(X_{s} -\hat{ X}_{s}\right)\left[f\left(X_{s}\right) + \left[f\left(0\right)\right]^{-}\mathbf{1}_{X_{s}=0} - f\left(\hat{X}_{s}\right) -\left[f\left(0\right)\right]^{-}\mathbf{1}_{\hat{X}_{s}=0}\right] \\ + \left\vert g\left(X_{s}\right) - g\left(\hat{X}_{s}\right)\right\vert ^{2} \leq \left(L +\ell ^{2}\right)\left\vert X_{s} -\hat{ X}_{s}\right\vert ^{2}\end{array} }$$

    and Corollary 6.77. The existence of a positive solution follows from the approximating equation

    $$\displaystyle{X_{t}^{\varepsilon } = x + \int _{ 0}^{t}\left[f\left(X_{ s}^{\varepsilon }\right) + \left[f\left(0\right)\right]^{-}\left(1 -\frac{\left\vert X_{s}^{\varepsilon }\right\vert } {\varepsilon } \right)^{+}\right]\mathit{ds} + \int _{ 0}^{t}g\left(X_{ s}^{\varepsilon }\right)\mathit{dB}_{ s}.}$$

    Note that \(\tilde{X}^{\varepsilon } = 0\) is the unique solution of the SDE

    $$\displaystyle{\tilde{X}_{t}^{\varepsilon } = 0 + \int _{ 0}^{t}\left[f\left(\tilde{X}_{ s}^{\varepsilon }\right) - f\left(0\right)\left(1 -\frac{\left\vert \tilde{X}_{s}^{\varepsilon }\right\vert } {\varepsilon } \right)^{+}\right]\mathit{ds} + \int _{ 0}^{t}g\left(\tilde{X}_{ s}^{\varepsilon }\right)\mathit{dB}_{ s},}$$

    and \(f\left(0\right) + \left[f\left(0\right)\right]^{-}\left(1 -\frac{\left\vert 0\right\vert } {\varepsilon } \right)^{+} \geq 0\), which yields (by Proposition 3.12) \(X_{t}^{\varepsilon } \geq 0\).

  10.

    By Remark 2.27 we have for all t ≥ 0,

    $$\displaystyle{0 = \int _{0}^{t}\mathbf{1}_{ X_{s}=y}g^{2}\left(X_{ s}\right)\mathit{ds} = g^{2}\left(y\right)\int _{ 0}^{t}\mathbf{1}_{ X_{s}=y}\mathit{ds}.}$$

Exercise 4.2: On each interval \(\mathbf{I}_{i}^{n}\) the equations from the scheme (4.149) have unique adapted solutions U n, V n and Y n, respectively; \(U^{n}\) is absolutely continuous and \(H_{\cdot }^{n} = F_{1}(\cdot,U_{\cdot }^{n}) - \dfrac{d} {\mathit{dt}}U_{\cdot }^{n} \in L^{1}(\Omega \times ]0,T[)\). Let \(K_{t}^{n} = \int _{0}^{t}H_{ s}^{n}\mathit{ds}\). To prove (4.150) the steps are:

  1. 1.

    \(\begin{array}{c} \mathbb{E}\left(\left\vert U_{t}^{n}\right\vert ^{4} + \left\vert V _{t}^{n}\right\vert ^{4} + \left\vert Y _{t}^{n}\right\vert ^{4} + \left\vert X_{t}^{n}\right\vert ^{4} + \left\updownarrow K^{n}\right\updownarrow _{t}^{2}\right) \leq C\left(1 + \mathbb{E}\vert H_{0}\vert ^{4}\right);\end{array}\)

  2. 2.

    \(\begin{array}{c} \mathbb{E}\sup \limits _{t\in \left[0,T\right]}\left\vert V _{t}^{n} - U_{t}^{n}\right\vert ^{4} \leq \dfrac{C} {n^{3}}\left(1 + \mathbb{E}\vert H_{0}\vert ^{4}\right);\end{array}\)

  3. 3.

    \(\begin{array}{c} \mathbb{E}\sup \limits _{t\in \left[0,T\right]}\left\vert Y _{t}^{n} - U_{t}^{n}\right\vert ^{4} + \mathbb{E}\sup \limits _{t\in \left[0,T\right]}\vert X_{t}^{n} - U_{t}^{n}\vert ^{4} \leq \dfrac{C} {n} \left(1 + \mathbb{E}\vert H_{0}\vert ^{4}\right);\end{array}\)

  4. 4.

    \(\begin{array}{c} \mathbb{E}\sup \limits _{t\in \left[0,T\right]}\left\vert Y _{t}^{n} - U_{ t}^{n}\right\vert ^{2} \leq \dfrac{C} {\sqrt{n}}\left(1 + \mathbb{E}\vert H_{0}\vert ^{4}\right); \end{array}\)

  5. 5.

    Let \(t \in \mathbf{I}_{i}^{n}\). By Itô’s formula for \(\left\vert Y _{t}^{n} - X_{t}\right\vert ^{2}\) and the above estimates we obtain (4.150).

Exercise 4.3: Arguing as for the estimate in Proposition 4.8, we derive, using Proposition 6.74, the boundedness of the approximating quantities. Then, estimating \(X^{\varepsilon } - X\) and \(\hat{X}^{\varepsilon } - X\) in the same manner and using Proposition 6.9, the convergence results follow.

Exercise 4.4: For the first four questions, choose the control in feedback form as follows:

$$\displaystyle{U_{s} = -\left(\mu \left(s\right) + \dfrac{1} {2}m_{p}\ell^{2}\left(s\right) + \dfrac{a} {p}\right)\left(X_{s} - x_{0}\right).}$$

For the last question, choose

$$\displaystyle{\tilde{U}_{s} = -\left[\mu \left(s\right) + \left(\dfrac{1} {2}m_{p} + 9p\lambda \right)\ell^{2}\left(s\right) + \dfrac{a} {p}\right]\left(\tilde{X}_{s} - x_{0}\right).}$$

Exercise 4.5: The equivalence follows easily from Example 4.79.

Exercise 4.6: 1 & 2. Let \(\hat{x} \in \varPi _{E}\left(x\right)\) and \(\hat{y} \in \varPi _{E}\left(y\right)\). Then

$$\displaystyle\begin{array}{rcl} d_{E}^{2}(x) - d_{ E}^{2}(y)& \leq & \left\vert x -\hat{ y}\right\vert ^{2} -\left\vert y -\hat{ y}\right\vert ^{2} {}\\ & =& \left\vert x - y\right\vert ^{2} + 2\left\langle x - y,y -\hat{ y}\right\rangle {}\\ & \leq & \left\vert x - y\right\vert \left(\left\vert x - y\right\vert + 2\left\vert y -\hat{ y}\right\vert \right). {}\\ \end{array}$$

3. Let 0 < λ < 1 and \(x,y \in \mathbb{R}^{d}\). Put z = λ x + (1 −λ)y. Then there exists a \(\hat{z} \in \, E\) such that \(d_{E}(z) = \left\vert z -\hat{ z}\right\vert\). Hence

$$\displaystyle\begin{array}{rcl} \left\vert z\right\vert ^{2} - d_{ E}^{2}(z)& =& \left\vert z\right\vert ^{2} -\left\vert z -\hat{ z}\right\vert ^{2} {}\\ & =& 2\left\langle z,\hat{z}\right\rangle -\left\vert \hat{z}\right\vert ^{2} {}\\ & =& \lambda \left(2\left\langle x,\hat{z}\right\rangle -\left\vert \hat{z}\right\vert ^{2}\right) + (1-\lambda )\left(2\left\langle y,\hat{z}\right\rangle -\left\vert \hat{z}\right\vert ^{2}\right) {}\\ & =& \lambda (\left\vert x\right\vert ^{2} -\left\vert x -\hat{ z}\right\vert ^{2}) + (1-\lambda )(\left\vert y\right\vert ^{2} -\left\vert y -\hat{ z}\right\vert ^{2}) {}\\ & \leq & \lambda (\left\vert x\right\vert ^{2} - d_{ E}^{2}(x)) + (1-\lambda )(\left\vert y\right\vert ^{2} - d_{ E}^{2}(y)). {}\\ \end{array}$$

4. According to Alexandrov’s Theorem (1939), the function \(x\mapsto \left\vert x\right\vert ^{2} - d_{E}^{2}(x)\) is almost everywhere twice differentiable, and consequently so is \(x\mapsto d_{E}^{2}(x)\).
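Step 3 says precisely that \(x\mapsto \left\vert x\right\vert ^{2} - d_{E}^{2}(x)\) is convex: it equals \(\sup _{z\in E}\left(2\left\langle x,z\right\rangle -\left\vert z\right\vert ^{2}\right)\), a supremum of affine functions. Below is a randomized numerical check on a finite (hence nonconvex) set \(E \subset \mathbb{R}^{2}\); the set and sample sizes are arbitrary:

```python
import random

E = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]   # a finite, nonconvex set

def d2(p):
    """Squared distance from p to E."""
    return min((p[0] - z[0]) ** 2 + (p[1] - z[1]) ** 2 for z in E)

def h(p):
    """h(p) = |p|^2 - d_E^2(p) = max over z in E of 2<p, z> - |z|^2."""
    return p[0] ** 2 + p[1] ** 2 - d2(p)

rng = random.Random(1)
for _ in range(1_000):
    x = (rng.uniform(-5, 5), rng.uniform(-5, 5))
    y = (rng.uniform(-5, 5), rng.uniform(-5, 5))
    lam = rng.random()
    z = (lam * x[0] + (1 - lam) * y[0], lam * x[1] + (1 - lam) * y[1])
    # the convexity inequality of step 3, with floating-point slack
    assert h(z) <= lam * h(x) + (1 - lam) * h(y) + 1e-9
```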

Chapter 5

Exercise 5.1

Let p ≥ 2 and δ ≥ 0, and consider the Banach space

$$\displaystyle{\mathbb{V}_{m,k}^{\delta,p}\left(0,T\right)\mathop{ =}\limits^{ \mathit{def }}\left\{\left(Y,Z\right) \in S_{ m}^{0}\left[0,T\right] \times \varLambda _{ m\times k}^{0}\left(0,T\right): \left\Vert \left(Y,Z\right)\right\Vert _{\delta V } < \infty \right\},}$$

where

$$\displaystyle{\begin{array}{r} \left\Vert \left(Y,Z\right)\right\Vert _{\delta V }^{p}\mathop{ =}\limits^{ \mathit{def }}\mathbb{E}\sup \limits _{s\in \left[0,T\right]}e^{\delta pV _{s}}\left\vert Y _{s}\right\vert ^{p} + \mathbb{E}\ \left(\int _{0}^{T}e^{2\delta V _{s}}\left\vert Y _{s}\right\vert ^{2}L_{s}\mathit{dQ}_{s}\right)^{p/2} \\ + \mathbb{E}\ \left(\int _{0}^{T}e^{2\delta V _{s} }\left\vert Z_{s}\right\vert ^{2}\mathit{ds}\right)^{p/2},\end{array} }$$

and the complete metric space \(\mathbb{V}_{m,k}^{p}\left(0,T\right) =\bigcap _{\delta \geq 0}\mathbb{V}_{m,k}^{\delta,p}\left(0,T\right)\).

Using Lemma 6.58 we show that the mapping \(\Gamma: \mathbb{V}_{m,k}^{p}\left(0,T\right) \rightarrow \mathbb{V}_{m,k}^{p}\left(0,T\right)\) given by

$$\displaystyle{\left\{\begin{array}{l} \left(Y,Z\right) = \Gamma \left(X,U\right) \\ Y _{t} =\eta +\int _{t}^{T}\Phi \left(s,X_{ s},U_{s}\right)\mathit{dQ}_{s}\ -\int _{t}^{T}Z_{ s}\mathit{dB}_{s}\end{array} \right.}$$

has a unique fixed point in \(\mathbb{V}_{m,k}^{p}\left(0,T\right)\). First, \(\Gamma \) is well defined: by Corollary 2.45 we have \(\mathbb{E}\ \sup _{t\in [0,T]}e^{p\delta V _{t}}\left\vert Y _{t}\right\vert ^{p} < \infty \), and by the inequality

$$\displaystyle{\begin{array}{l} \left\vert Y _{s}\right\vert ^{2}L_{s}\mathit{dQ}_{s} + \left\langle Y _{s},\Phi \left(s,X_{s},U_{s}\right)\right\rangle \mathit{dQ}_{s} \\ \leq \frac{1} {4\left(\delta -1\right)}\left\vert X_{s}\right\vert ^{2}L_{ s}\mathit{dQ}_{s} + \frac{1} {2\delta }\left\vert U_{s}\right\vert ^{2}\mathit{ds} + \left\vert Y _{ s}\right\vert \left\vert \Phi \left(s,0,0\right)\right\vert \mathit{dQ}_{s} + \left\vert Y _{s}\right\vert ^{2}\delta \mathit{dV }_{ s},\;\forall \delta > 1 \end{array} }$$

and Proposition 5.2 we get \(\left\Vert \left(Y,Z\right)\right\Vert _{\delta V }^{p} < \infty \) for all δ > 1.

From the inequality

$$\displaystyle{\begin{array}{l} \left\vert Y _{s} - Y _{s}^{{\prime}}\right\vert ^{2}L_{s}\mathit{dQ}_{s} + \left\langle Y _{s} - Y _{s}^{{\prime}},\Phi \left(s,X_{s},U_{s}\right) - \Phi \left(s,X_{s}^{{\prime}},U_{s}^{{\prime}}\right)\right\rangle \mathit{dQ}_{s} \\ \leq \frac{1} {2\delta }\left\vert U_{s} - U_{s}^{{\prime}}\right\vert ^{2}\mathit{ds} + \frac{1} {4\left(\delta -1\right)}\left\vert X_{s} - X_{s}^{{\prime}}\right\vert ^{2}L_{ s}\mathit{dQ}_{s} + \left\vert Y _{s} - Y _{s}^{{\prime}}\right\vert ^{2}\delta \mathit{dV }_{ s},\;\forall \delta > 1 \end{array} }$$

and Proposition 5.2 we obtain

$$\displaystyle{\left\Vert \left(Y - Y ^{{\prime}},Z - Z^{{\prime}}\right)\right\Vert _{\delta V }^{p} \leq \frac{C_{p}} {\left(\delta -1\right)^{p/2}}\left\Vert \left(X,U\right) -\left(X^{{\prime}},U^{{\prime}}\right)\right\Vert _{\delta V }^{p},\;\forall \delta > 1}$$

which tells us there exists a δ 0 > 1 such that \(\Gamma \) is a strict contraction on \(\left(\mathbb{V}_{m,k}^{p}\left(0,T\right),\left\Vert \ \cdot \ \right\Vert _{\delta V }\right)\), for all δ ≥ δ 0, and consequently, by Lemma 6.58, \(\Gamma \) has a unique fixed point in \(\mathbb{V}_{m,k}^{p}\left(0,T\right)\).

Exercise 5.3: Since

$$\displaystyle\begin{array}{rcl} & & \left(Y _{t}^{\varepsilon } - Y _{ t}^{\delta }\right)\left(G_{\varepsilon }\left(t,Y _{ t}^{\varepsilon },Z_{ t}^{\varepsilon }\right) - G_{\delta }\left(t,Y _{ t}^{\delta },Z_{ t}^{\delta }\right)\right) {}\\ & & \leq L\left\vert Y _{t}^{\varepsilon } - Y _{ t}^{\delta }\right\vert \left(2 + \left\vert Y _{ t}^{\varepsilon }\right\vert + \left\vert Y _{ t}^{\delta }\right\vert + \left\vert Z_{ t}^{\varepsilon }\right\vert + \left\vert Z_{ t}^{\delta }\right\vert \right) {}\\ \end{array}$$

we obtain, by Proposition 5.2, with N = 0, V = 0, λ = 0, that

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}\left(\sup \limits _{s\in \left[0,T\right]}\left\vert Y _{s}^{\varepsilon } - Y _{ s}^{\delta }\right\vert ^{p}\right) + \mathbb{E}\ \left(\int _{ 0}^{T}\left\vert Z_{ s}^{\varepsilon } - Z_{ s}^{\delta }\right\vert ^{2}\mathit{ds}\right)^{p/2} {}\\ & & \leq C_{p}\mathbb{E}\ \left(\int _{0}^{T}L\left\vert Y _{ s}^{\varepsilon } - Y _{ s}^{\delta }\right\vert \left(2 + \left\vert Y _{ s}^{\varepsilon }\right\vert + \left\vert Y _{ s}^{\delta }\right\vert + \left\vert Z_{ s}^{\varepsilon }\right\vert + \left\vert Z_{ s}^{\delta }\right\vert \right)\mathit{ds}\right)^{p/2} {}\\ & & \leq C_{p}\left[\mathbb{E}\left(\sup \limits _{s\in \left[0,T\right]}\left\vert Y _{s}^{\varepsilon } - Y _{ s}^{\delta }\right\vert ^{p}\right)\right]^{1/2} {}\\ & & \times \left[\mathbb{E}\ \left(\int _{0}^{T}L\left(2 + \left\vert Y _{ s}^{\varepsilon }\right\vert + \left\vert Y _{ s}^{\delta }\right\vert + \left\vert Z_{ s}^{\varepsilon }\right\vert + \left\vert Z_{ s}^{\delta }\right\vert \right)\mathit{ds}\right)^{p}\right]^{1/2}. {}\\ \end{array}$$

Exercise 5.5: We apply the existence and uniqueness result from Theorem 5.27 and the comparison result from Theorem 5.33 for the BSDE

$$\displaystyle{Y _{t} =\eta +\int _{t}^{T}Y _{ s}\left(1 - Y _{s}^{+}\right)\mathit{ds} -\int _{ t}^{T}\left\langle Z_{ s},\mathit{dB}_{s}\right\rangle }$$

with 0 ≤ η ≤ 1.
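For deterministic \(\eta \) the solution is deterministic as well (with \(Z = 0\)), and \(Y\) solves the backward ODE \(y_{t} =\eta +\int _{t}^{T}y_{s}\left(1 - y_{s}^{+}\right)\mathit{ds}\). An explicit Euler scheme run backwards in time (step size arbitrary) illustrates the comparison statement that the solution stays in \(\left[0,1\right]\):

```python
def solve_backward(eta, T=1.0, n=10_000):
    """Explicit Euler for y_t = eta + int_t^T y_s (1 - max(y_s, 0)) ds,
    stepping from t = T down to t = 0; returns the value at t = 0."""
    h = T / n
    y = eta
    for _ in range(n):
        y = y + h * y * (1.0 - max(y, 0.0))
    return y

for eta in (0.0, 0.3, 0.7, 1.0):
    y0 = solve_backward(eta)
    assert 0.0 <= y0 <= 1.0   # 0 and 1 are stationary, and [0, 1] is preserved
```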

Exercise 5.7: Assume that E is not convex. We shall show there exists a bounded continuous function \(g: \mathbb{R}^{k} \rightarrow E\) such that

$$\displaystyle{\mathbb{P}\left(\left\{Y _{t}\notin E\right\}\right) > 0,\,\,\text{ for some}\,t \in \, [0,T].}$$

If E is not convex, we can find \(a,b \in \mathrm{ Bd}\left(E\right)\) such that \(a\neq b\) and \(a +\lambda \left(b - a\right)\notin E\) for all λ ∈ ]0, 1[. Let \(\delta = \frac{1} {4}d_{E}(\frac{a+b} {2} ) > 0\). Define \(g: \mathbb{R}^{k} \rightarrow E\) by \(g\left(x^{\left(1\right)},x^{\left(2\right)},\ldots,x^{\left(k\right)}\right) = a + \left(b - a\right)\mathbf{1}_{(-\infty,1]}\left(x^{\left(1\right)}\right)\). By Exercise 1.7 we have

$$\displaystyle{\mathbb{E}\left(g\left(B_{T}^{\left(1\right)}\right)\vert \mathcal{F}_{ t}\right) = a + \left(b - a\right)\Phi \left(\dfrac{1 - B_{t}^{\left(1\right)}} {\sqrt{T - t}} \right),}$$

where

$$\displaystyle{\Phi (r):= \frac{1} {\sqrt{2\pi }}\int _{-\infty }^{r}e^{-\frac{x^{2}} {2} }\mathit{dx},\;r \in \mathbb{R}.}$$

We also have

$$\displaystyle{\left\vert Y _{t} - \mathbb{E}[g\left(B_{T}^{\left(1\right)}\right)\vert \mathcal{F}_{ t}]\right\vert \leq \mathbb{E}\left[\left.\left\vert \int _{t}^{T}F_{ s}\mathit{ds}\right\vert \,\right\vert \mathcal{F}_{t}\right] \leq M\left(T - t\right) \leq \delta,}$$

if \(t \in \,[T - \frac{\delta } {M},T],\) where \(M > 0\) is a bound for \(\left\vert F\right\vert \).

Then for all \(t \in \,[T - \frac{\delta } {M},T],\)

$$\displaystyle\begin{array}{rcl} \left\vert Y _{t} -\frac{a + b} {2} \right\vert & \leq & \left\vert Y _{t} - \mathbb{E}[g\left(B_{T}^{\left(1\right)}\right)\vert \mathcal{F}_{ t}]\right\vert +\left\vert \mathbb{E}[g\left(B_{T}^{\left(1\right)}\right)\vert \mathcal{F}_{ t}] -\frac{a + b} {2} \right\vert {}\\ & \leq & \delta +\left\vert b - a\right\vert \left\vert \Phi \left(\dfrac{1 - B_{t}^{\left(1\right)}} {\sqrt{T - t}} \right)-\frac{1} {2}\right\vert. {}\\ \end{array}$$

Therefore

$$\displaystyle\begin{array}{rcl} 0& <& \mathbb{P}\left[\left\vert \Phi \left(\dfrac{1 - B_{t}^{\left(1\right)}} {\sqrt{T - t}} \right)-\frac{1} {2}\right\vert \leq \frac{\delta } {\left\vert b - a\right\vert }\right] {}\\ & \leq & \mathbb{P}\left(\left\vert Y _{t} -\frac{a + b} {2} \right\vert \leq 2\delta \right) {}\\ & \leq & \mathbb{P}\left(Y _{t}\notin E\right). {}\\ \end{array}$$
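The conditional-expectation formula used in this exercise rests on \(\mathbb{P}\left(B_{T}^{\left(1\right)} \leq 1\,\vert \,B_{t}^{\left(1\right)} = b\right) = \Phi \left(\left(1 - b\right)/\sqrt{T - t}\right)\), which can be checked by simulation; the values of \(b,t,T\) and the sample size below are arbitrary:

```python
import math
import random

def phi(r):
    """Standard normal cdf, via the error function."""
    return 0.5 * (1.0 + math.erf(r / math.sqrt(2.0)))

def cond_prob_mc(b, t, T, n=200_000, seed=3):
    """Monte Carlo estimate of P(B_T <= 1 | B_t = b):
    the increment B_T - B_t is N(0, T - t), independent of F_t."""
    rng = random.Random(seed)
    s = math.sqrt(T - t)
    hits = sum(1 for _ in range(n) if b + s * rng.gauss(0.0, 1.0) <= 1.0)
    return hits / n

b, t, T = 0.4, 0.25, 1.0
print(cond_prob_mc(b, t, T), phi((1.0 - b) / math.sqrt(T - t)))
```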