Abstract
Using a new Green-type function, we present a study of an optimal control problem in which the dynamics are governed by a second order ordinary differential equation (SODE) with an m-point boundary condition.
JEL Classification: C61, C73
Mathematics Subject Classification (2010): 34A60, 34B15, 47H10, 45N05
Key words
- Differential game
- Green function
- m-Point boundary
- Optimal control
- Pettis
- Strategy
- Sweeping process
- Viscosity
1 Introduction
The pioneering works concerning control systems governed by second order ordinary differential equations (SODE) with a three point boundary condition are developed in [2, 16]. In this paper we present some new applications of the Green function introduced in [11] to the study of the viscosity problem in optimal control theory, where the dynamics are governed by an (SODE) with m-point boundary condition. The paper is organized as follows. In Sect. 2 we recall and summarize the properties of a new Green function (Lemma 2.1), with application to a second order differential equation with m-point boundary condition in a separable Banach space E of the form
Here γ is positive, f ∈ L E 1([0, 1]), m is an integer > 3, \(0 \leq \tau <\eta _{1} <\eta _{2} < \cdot \cdot \cdot <\eta _{m-2} < 1\), and α i ∈ R \(\left (i = 1,2,\ldots,m - 2\right )\) satisfy the condition
and u τ, x, f is the W E 2, 1([τ, 1]) trajectory solution to (SODE) associated with f ∈ L E 1([0, 1]) starting at the point x ∈ E at time τ ∈ [0, 1[. By Lemma 2.1, u τ, x, f and \(\dot{u}_{\tau,x,f}\) are represented, respectively, by
$$\displaystyle{u_{\tau,x,f}(t) = e_{\tau,x}(t) +\int _{ \tau }^{1}G_{\tau }(t,s)f(s)ds,\qquad \dot{u}_{\tau,x,f}(t) =\dot{ e}_{\tau,x}(t) +\int _{ \tau }^{1}\frac{\partial G_{\tau }} {\partial t} (t,s)f(s)ds,}$$
where G τ is the Green function defined in Lemma 2.1 with
$$\displaystyle{e_{\tau,x}(t) = x + A_{\tau }(1 -\sum _{i=1}^{m-2}\alpha _{ i})(1 -\exp (-\gamma (t-\tau )))x.}$$
We stress that the existence and uniqueness results and the integral representation formulas for the solution of (SODE) and its derivative via the new Green function are of central importance in this work. Indeed, they allow us to treat several new applications to optimal control problems, as well as viscosity solutions for the value function governed by (SODE) with m-point boundary condition. In Sect. 3, we treat an optimal control problem governed by (SODE) in a separable Banach space
where Γ is a measurable and integrably bounded convex compact valued mapping and S Γ 1 is the set of all integrable selections of \(\Gamma \). We show the compactness of the solution set and the existence of an optimal control for the problem
These results lead naturally to the problem of viscosity for the value function associated with this class of (SODE), which is presented in Sect. 4. In Sect. 5 we deal with a class of (SODE) with Pettis integrable second member. Existence and compactness of the solution set are also provided. Open problems concerning differential games governed by (SODE) and (ODE) with strategies are given in Sect. 6. We finish the paper by providing an application to the dynamic programming principle (DPP) and the viscosity property for the value function associated with a sweeping process related to a model in Mathematical Economics [25].
2 Existence and Uniqueness
Let E be a separable Banach space. We denote by E ∗ the topological dual of E; \(\overline{B}_{E}\) is the closed unit ball of E; \(\mathcal{L}([0,1])\) is the σ-algebra of Lebesgue measurable sets on [0, 1]; λ = dt is the Lebesgue measure on [0, 1]; \(\mathcal{B}(E)\) is the σ-algebra of Borel subsets of E. By L E 1([0, 1]), we denote the space of all Lebesgue–Bochner integrable E-valued functions defined on [0, 1]. Let C E ([0, 1]) be the Banach space of all continuous functions u: [0, 1] → E endowed with the sup-norm and let C E 1([0, 1]) be the Banach space of all functions u ∈ C E ([0, 1]) with continuous derivative, endowed with the norm
We also denote by W E 2, 1([0, 1]) the space of all continuous functions in C E ([0, 1]) whose first derivatives are continuous and whose second weak derivatives belong to L E 1([0, 1]).
We recall and summarize a new Green type function given in [11] that is a key ingredient in the statement of the problems under consideration.
Lemma 2.1.
Let \(0 \leq \tau <\eta _{1} <\eta _{2} < \cdot \cdot \cdot <\eta _{m-2} < 1\) , let γ > 0, let m > 3 be an integer, and let α i ∈ R \(\left (i = 1,\ldots,m - 2\right )\) satisfy the condition
Let E be a separable Banach space and let G τ : [τ,1] × [τ,1] → R be the function defined by
$$\displaystyle{G_{\tau }(t,s) = \left \{\begin{array}{ll} \frac{1} {\gamma }\left (1 -\exp (-\gamma (t - s))\right ),&\tau \leq s \leq t \leq 1\\ 0, &\tau \leq t < s \leq 1\end{array} \right.+\frac{A_{\tau }} {\gamma } \left (1 -\exp (-\gamma (t-\tau ))\right )\phi _{\tau }(s),}$$
where
$$\displaystyle{A_{\tau } = \left [\sum _{i=1}^{m-2}\alpha _{ i} - 1 +\exp (-\gamma (1-\tau )) -\sum _{i=1}^{m-2}\alpha _{ i}\exp (-\gamma (\eta _{i}-\tau ))\right ]^{-1}}$$
and
$$\displaystyle{\phi _{\tau }(s) = 1 -\exp (-\gamma (1 - s)) -\sum _{j=i}^{m-2}\alpha _{ j}\left (1 -\exp (-\gamma (\eta _{j} - s))\right ),\quad s \in [\eta _{i-1},\eta _{i}[,\ i = 1,\ldots,m - 1,}$$
with the conventions η 0 = τ, η m−1 = 1 and the empty sum (for i = m − 1) equal to 0.
Then the following assertions hold
-
(i)
For every fixed s ∈ [τ,1], the function G τ (.,s) is right derivable on [τ,1[ and left derivable on ]τ,1]. Its derivative is given by
$$\displaystyle\begin{array}{rcl} & & \left (\dfrac{\partial G_{\tau }} {\partial t} \right )_{+}(t,s) = \left \{\begin{array}{lll} \exp (-\gamma (t - s)),&\tau \leq s \leq t < 1\\ 0, &\tau \leq t < s < 1\end{array} \right. \\ & & \qquad \qquad \qquad \qquad + A_{\tau }\exp (-\gamma (t-\tau ))\phi _{\tau }(s), {}\end{array}$$(2.4)$$\displaystyle{ \left (\frac{\partial G_{\tau }} {\partial t} \right )_{-}(t,s) = \left \{\begin{array}{lll} \exp (-\gamma (t - s)),&\tau \leq s < t \leq 1\\ 0, &\tau < t \leq s \leq 1\end{array} \right.+A_{\tau }\exp (-\gamma (t-\tau ))\phi _{\tau }(s). }$$(2.5) -
(ii)
G τ (⋅,⋅) and \(\frac{\partial G_{\tau }} {\partial t} (\cdot,\cdot )\) satisfy
$$\displaystyle{\left \vert G_{\tau }(t,s)\right \vert \leq M_{G_{\tau }}\ \mathrm{and}\ \ \left \vert \frac{\partial G_{\tau }} {\partial t} (t,s)\right \vert \leq M_{G_{\tau }},\quad \forall (t,s) \in [\tau,1] \times [\tau,1],}$$where
$$\displaystyle{M_{G_{\tau }} =\max \{ \gamma ^{-1},1\}\left [1 + \vert A_{\tau }\vert \left (1 +\sum _{ i=1}^{m-2}\vert \alpha _{ i}\vert \right )\right ].}$$ -
(iii)
If u ∈ W E 2,1 ([τ,1]) with u(τ) = x and \(u(1) =\sum _{ i=1}^{m-2}\alpha _{i}u(\eta _{i})\) , then
$$\displaystyle{u(t) = e_{\tau,x}(t) +\int _{ \tau }^{1}G_{\tau }(t,s)(\ddot{u}(s) +\gamma \dot{ u}(s))ds,\quad \forall t \in [\tau,1],}$$where
$$\displaystyle{e_{\tau,x}(t) = x + A_{\tau }(1 -\sum _{i=1}^{m-2}\alpha _{ i})(1 -\exp (-\gamma (t-\tau )))x.}$$ -
(iv)
Let f ∈ L E 1 ([τ,1]) and let u f : [τ,1] → E be the function defined by
$$\displaystyle{u_{f}(t) = e_{\tau,x}(t) +\int _{ \tau }^{1}G_{\tau }(t,s)f(s)ds,\quad \forall t \in [\tau,1].}$$Then we have
$$\displaystyle{u_{f}(\tau ) = x,\quad u_{f}(1) =\sum _{ i=1}^{m-2}\alpha _{ i}u_{f}(\eta _{i}).}$$Further the function u f is derivable on [τ,1] and its derivative \(\dot{u}_{f}\) is defined by
$$\displaystyle{\dot{u}_{f}(t) =\lim _{h\rightarrow 0}\frac{u_{f}(t + h) - u_{f}(t)} {h} =\dot{ e}_{\tau,x}(t) +\int _{ \tau }^{1}\frac{\partial G_{\tau }} {\partial t} (t,s)f(s)ds,}$$with
$$\displaystyle{\dot{e}_{\tau,x}(t) =\gamma A_{\tau }(1 -\sum _{i=1}^{m-2}\alpha _{ i})\exp (-\gamma (t-\tau ))x.}$$ -
(v)
If f ∈ L E 1 ([τ,1]), the function \(\dot{u}_{f}\) is scalarly derivable, and its weak derivative \(\ddot{u}_{f}\) satisfies
$$\displaystyle{\ddot{u}_{f}(t) +\gamma \dot{ u}_{f}(t) = f(t)\quad a.e.\quad t \in [\tau,1].}$$
Proof.
-
(i)
Let \(s \in \left [\tau,1\right ]\) and t \(\in \left [\tau,1\right ].\) We consider the following two cases.
Case 1 t ≠ s. For every small h > 0 with \(h <\min \left \{\left \vert t - s\right \vert,1 - t\right \},\) we have
$$\displaystyle\begin{array}{rcl} & & \frac{G_{\tau }\left (t + h,s\right ) - G_{\tau }\left (t,s\right )} {h} = \left \{\begin{array}{ll} \left (\gamma h\right )^{-1}\exp \left (-\gamma \left (t - s\right )\right )\left (1 -\exp \left (-\gamma h\right )\right ), \\ \quad \tau \leq s < t < 1 \\ 0, \\ \quad \tau \leq t < s \leq 1\end{array} \right. {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad + A_{\tau }\exp \left (-\gamma \left (t-\tau \right )\right ) {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \times \left (1 -\exp \left (-\gamma h\right )\right )\left (\gamma h\right )^{-1}\phi _{ \tau }\left (s\right ). {}\\ \end{array}$$Hence \(G_{\tau }\left (\cdot,s\right )\) is right derivable at \(t \in [\tau,1[\setminus \left \{s\right \}\) and
$$\displaystyle\begin{array}{rcl} & & \left (\frac{\partial G_{\tau }} {\partial t} \right )_{+}\left (t,s\right ) = \left \{\begin{array}{ll} \exp \left (-\gamma \left (t - s\right )\right ),\text{ }&\tau \leq s < t < 1\\ 0, &\tau \leq t < s \leq 1\end{array} \right. {}\\ & & \qquad \qquad \qquad \qquad + A_{\tau }\exp \left (-\gamma \left (t-\tau \right )\right )\phi _{\tau }\left (s\right ). {}\\ \end{array}$$Similarly, it is not difficult to check that \(G_{\tau }\left (\cdot,s\right )\) is left derivable at \(t \in ]\tau,1]\setminus \left \{s\right \}\) and
$$\displaystyle\begin{array}{rcl} & & \left (\frac{\partial G_{\tau }} {\partial t} \right )_{-}\left (t,s\right ) = \left \{\begin{array}{ll} \exp \left (-\gamma \left (t - s\right )\right ),\text{ }&\tau \leq s < t \leq 1\\ 0, &\tau < t < s \leq 1\end{array} \right. {}\\ & & \qquad \qquad \qquad \qquad + A_{\tau }\exp \left (-\gamma \left (t-\tau \right )\right )\phi _{\tau }\left (s\right ). {}\\ \end{array}$$Case 2 t = s. Given 0 < h < 1 − s. We have
$$\displaystyle\begin{array}{rcl} & & \frac{G_{\tau }\left (t + h,s\right ) - G_{\tau }\left (t,s\right )} {h} = \left (\gamma h\right )^{-1}\left (1 -\exp \left (-\gamma h\right )\right ) {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad + A_{\tau }\exp \left (-\gamma \left (t-\tau \right )\right )\left (1 -\exp \left (-\gamma h\right )\right ) {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \times \left (\gamma h\right )^{-1}\phi _{ \tau }\left (s\right ), {}\\ \end{array}$$hence
$$\displaystyle{\left (\frac{\partial G_{\tau }} {\partial t} \right )_{+}\left (s,s\right ) = 1 + A_{\tau }\exp \left (-\gamma \left (s-\tau \right )\right )\phi _{\tau }\left (s\right ).}$$Now given 0 < h < s −τ. We have
$$\displaystyle\begin{array}{rcl} & & \frac{G_{\tau }\left (t - h,s\right ) - G_{\tau }\left (t,s\right )} {h} = A_{\tau }\exp \left (-\gamma \left (t-\tau \right )\right ) {}\\ & & \qquad \qquad \qquad \qquad \qquad \quad \times \left (1 -\exp \left (-\gamma h\right )\right )\left (\gamma h\right )^{-1}\phi _{ \tau }\left (s\right ), {}\\ \end{array}$$hence
$$\displaystyle{\left (\frac{\partial G_{\tau }} {\partial t} \right )_{-}\left (s,s\right ) = A_{\tau }\exp \left (-\gamma \left (s-\tau \right )\right )\phi _{\tau }\left (s\right ).}$$ -
(ii)
It is easy to see that \(\left \vert \phi _{\tau }(s)\right \vert \leq 1 +\sum _{ i=1}^{m-2}\left \vert \alpha _{i}\right \vert \) for all s ∈ [0, 1]. So, from the definition of G τ we deduce that for all s, t ∈ [τ, 1]
$$\displaystyle{\left \vert G_{\tau }(t,s)\right \vert \leq \frac{1} {\gamma } \left [1 + \left \vert A_{\tau }\right \vert \left (1 +\sum _{ i=1}^{m-2}\left \vert \alpha _{ i}\right \vert \right )\right ] \leq M_{G_{\tau }}.}$$Similarly we deduce that for all s, t ∈ [τ, 1]
$$\displaystyle{\left \vert \frac{\partial G_{\tau }} {\partial t} (t,s)\right \vert \leq 1 + \left \vert A_{\tau }\right \vert \left \vert \phi _{\tau }(s)\right \vert \leq 1 + \left \vert A_{\tau }\right \vert \left (1 +\sum _{ i=1}^{m-2}\left \vert \alpha _{ i}\right \vert \right ) \leq M_{G_{\tau }}.}$$ -
(iii)
Let x ∗ ∈ E ∗. By definition of G τ , for all t ∈ [τ, 1], we have
$$\displaystyle\begin{array}{rcl} & & \left \langle x^{{\ast}},\int _{\tau }^{1}G_{\tau }(t,s)\ddot{u}(s)ds\right \rangle =\int _{ \tau }^{1}\left \langle x^{{\ast}},G_{\tau }(t,s)\ddot{u}(s)\right \rangle ds {}\\ & & = \frac{1} {\gamma } \int _{\tau }^{t}(1 -\exp (-\gamma (t - s)))\left \langle x^{{\ast}},\ddot{u}(s)\right \rangle ds {}\\ & & \quad + \frac{A_{\tau }} {\gamma } (1 -\exp (-\gamma (t-\tau )))\int _{\tau }^{1}\left \langle x^{{\ast}},\phi _{\tau }(s)\ddot{u}(s)\right \rangle ds. {}\\ \end{array}$$On the other hand
$$\displaystyle\begin{array}{rcl} & & \int _{\tau }^{t}\left (1 -\exp (-\gamma (t - s))\right )\langle x^{{\ast}},\ddot{u}(s)\rangle ds {}\\ & & \quad = \left (\exp (-\gamma (t-\tau )) - 1\right )\langle x^{{\ast}},\dot{u}(\tau )\rangle +\gamma \int _{ \tau }^{t}\exp (-\gamma (t - s))\langle x^{{\ast}},\dot{u}(s)\rangle ds {}\\ \end{array}$$and \(\int _{\tau }^{1}\langle x^{{\ast}},\phi _{\tau }(s)\ddot{u}(s)\rangle ds = I_{1} + I_{2}\) where
$$\displaystyle\begin{array}{rcl} & & I_{1} =\sum _{ i=1}^{m-1}\int _{ \eta _{i-1}}^{\eta _{i} }\left (1 -\exp (-\gamma (1 - s))\right )\langle x^{{\ast}},\ddot{u}(s)\rangle ds {}\\ & & \quad = \left (\exp (-\gamma (1-\tau )) - 1\right )\langle x^{{\ast}},\dot{u}(\tau )\rangle +\gamma \int _{ \tau }^{1}\exp (-\gamma (1 - s))\langle x^{{\ast}},\dot{u}(s)\rangle ds {}\\ & & I_{2} = -\sum _{i=1}^{m-2}\sum _{ j=i}^{m-2}\alpha _{ j}\int _{\eta _{i-1}}^{\eta _{i} }(1 -\exp (-\gamma (\eta _{j} - s)))\langle x^{{\ast}},\ddot{u}(s)\rangle ds {}\\ & & \quad = -\sum _{i=1}^{m-2}\alpha _{ i}\left (\exp (-\gamma (\eta _{i}-\tau )) - 1\right )\langle x^{{\ast}},\dot{u}(\tau )\rangle {}\\ & & \qquad -\gamma \sum _{i=1}^{m-2}\sum _{ j=i}^{m-2}\alpha _{ j}\int _{ \eta _{i-1}}^{\eta _{i} }\exp (-\gamma (\eta _{j} - s))\langle x^{{\ast}},\dot{u}(s)\rangle ds {}\\ \end{array}$$with η 0 = τ, η m−1 = 1.
Hence
$$\displaystyle\begin{array}{rcl} & & \left \langle x^{{\ast}},\int _{\tau }^{1}G_{\tau }(t,s)(\ddot{u}(s) +\gamma \dot{ u}(s))ds\right \rangle {}\\ & & \quad = \frac{1} {\gamma } (\exp (-\gamma (t-\tau )) - 1)\langle x^{{\ast}},\dot{u}(\tau )\rangle {}\\ & & \qquad + \frac{A_{\tau }} {\gamma } (1 -\exp (-\gamma (t-\tau )))\langle x^{{\ast}},\dot{u}(\tau )\rangle {}\\ & & \qquad \times \left [\exp (-\gamma (1-\tau )) - 1 -\sum _{i=1}^{m-2}\alpha _{ i}\left (\exp (-\gamma (\eta _{i}-\tau )) - 1\right )\right ] {}\\ & & \qquad +\int _{ \tau }^{t}\langle x^{{\ast}},\dot{u}(s)\rangle ds + A_{\tau }\left (1 -\exp (-\gamma t)\right )\sum _{ i=1}^{m-2}\left (1 -\sum _{ j=i}^{m-2}\alpha _{ j}\right ) {}\\ & & \qquad \times \int _{\eta _{i-1}}^{\eta _{i} }\langle x^{{\ast}},\dot{u}(s)\rangle ds. {}\\ \end{array}$$This implies that
$$\displaystyle{\langle x^{{\ast}},\int _{ 0}^{1}G_{\tau }(t,s)(\ddot{u}(s) +\gamma \dot{ u}(s))ds\rangle =\langle x^{{\ast}},u(t) - e_{\tau,x}(t)\rangle,\quad \forall t \in [\tau,1].}$$Since this equality holds for every x ∗ ∈ E ∗, we get
$$\displaystyle{u(t) = e_{\tau,x}(t) +\int _{ \tau }^{1}G_{\tau }(t,s)(\ddot{u}(s) +\gamma \dot{ u}(s))ds,\quad \forall t \in [\tau,1].}$$ -
(iv)
Let f ∈ L E 1([τ, 1]) and \(u_{f}(t) = e_{\tau,x}(t) +\int _{ \tau }^{1}G_{\tau }(t,s)f(s)ds, \forall t \in [\tau,1]\). Then, by definition of G τ in (i), we have u f (τ) = x and
$$\displaystyle\begin{array}{rcl} & & u_{f}(1) = e_{\tau,x}(1) + \frac{1} {\gamma } \int _{\tau }^{1}(1 -\exp (-\gamma (1 - s)))f(s)ds {}\\ & & \qquad \qquad + \frac{A_{\tau }} {\gamma } (1 -\exp (-\gamma (1-\tau )))\int _{\tau }^{1}\phi _{ \tau }(s)f(s)ds {}\\ & & \qquad \,\,\, = e_{\tau,x}(1) + \frac{1} {\gamma } \int _{\tau }^{1}\left [1 -\exp (-\gamma (1 - s)) -\phi _{\tau }(s)\right ]f(s)ds {}\\ & & \qquad \qquad + \frac{1} {\gamma } \left [A_{\tau }(1 -\exp (-\gamma (1-\tau ))) + 1\right ]\int _{\tau }^{1}\phi _{ \tau }(s)f(s)ds {}\\ & & \qquad \,\,\, = e_{\tau,x}(1) + \frac{1} {\gamma } \sum _{i=1}^{m-2}\sum _{ j=i}^{m-2}\alpha _{ j}\int _{\eta _{i-1}}^{\eta _{i} }(1 -\exp (-\gamma (\eta _{j} - s)))f(s)ds {}\\ & & \qquad \quad + \frac{A_{\tau }} {\gamma } \sum _{i=1}^{m-2}\alpha _{ i}(1 -\exp (-\gamma (\eta _{i}-\tau )))\int _{\tau }^{1}\phi _{ \tau }(s)f(s)ds {}\\ & & \qquad \,\,\, = e_{\tau,x}(1) + \frac{1} {\gamma } \sum _{i=1}^{m-2}\alpha _{ i}\int _{\tau }^{\eta _{i} }(1 -\exp (-\gamma (\eta _{i} - s)))f(s)ds {}\\ & & \qquad \qquad + \frac{A_{\tau }} {\gamma } \sum _{i=1}^{m-2}\alpha _{ i}(1 -\exp (-\gamma (\eta _{i}-\tau )))\int _{\tau }^{1}\phi _{ \tau }(s)f(s)ds. {}\\ \end{array}$$From the definition of e τ, x (t) and A τ , we deduce that
$$\displaystyle\begin{array}{rcl} e_{\tau,x}(1)& =& x + A_{\tau }\left (1 -\sum _{i=1}^{m-2}\alpha _{ i}\right )\left (1 -\exp (-\gamma (1-\tau ))\right )x {}\\ & =& A_{\tau }\left [A_{\tau }^{-1} + 1 -\exp (-\gamma (1-\tau )) +\sum _{ i=1}^{m-2}\alpha _{ i}\left (\exp (-\gamma (1-\tau )) - 1\right )\right ]x {}\\ & =& A_{\tau }\left [\sum _{i=1}^{m-2}\alpha _{ i}\exp (-\gamma (1-\tau )) -\sum _{i=1}^{m-2}\alpha _{ i}\exp (-\gamma (\eta _{i}-\tau ))\right ]x {}\\ \end{array}$$and
$$\displaystyle\begin{array}{rcl} & & e_{\tau,x}(\eta _{i}) = x + A_{\tau }\left (1 -\sum _{j=1}^{m-2}\alpha _{ j}\right )\left (1 -\exp (-\gamma (\eta _{i}-\tau ))\right )x {}\\ & & = A_{\tau }\left [A_{\tau }^{-1} + 1 -\exp (-\gamma (\eta _{ i}-\tau )) -\sum _{j=1}^{m-2}\alpha _{ j} +\exp (-\gamma (\eta _{i}-\tau ))\sum _{j=1}^{m-2}\alpha _{ j}\right ]x {}\\ & & = A_{\tau }\left [\exp \left (-\gamma (1-\tau )\right ) -\exp (-\gamma (\eta _{i}-\tau )) +\exp (-\gamma (\eta _{i}-\tau ))\sum _{j=1}^{m-2}\alpha _{ j}\right. {}\\ & & \quad \left.+\sum _{j=1}^{m-2}\alpha _{ j}\exp (-\gamma (\eta _{j}-\tau ))\right ]x. {}\\ \end{array}$$Hence we deduce that
$$\displaystyle\begin{array}{rcl} & & \sum _{i=1}^{m-2}\alpha _{ i}e_{\tau,x}(\eta _{i}) {}\\ & & \quad \quad = A_{\tau }\left [\sum _{i=1}^{m-2}\alpha _{ i}\exp \left (-\gamma (1-\tau )\right )\right. -\sum _{i=1}^{m-2}\alpha _{ i}\exp (-\gamma (\eta _{i}-\tau )) {}\\ & & \qquad \quad + \left.\left (\sum _{j=1}^{m-2}\alpha _{ j}\right )\sum _{i=1}^{m-2}\alpha _{ i}\exp (-\gamma (\eta _{i}-\tau )) -\left (\sum _{i=1}^{m-2}\alpha _{ i}\right )\right. {}\\ & & \qquad \quad \times \left.\sum _{j=1}^{m-2}\alpha _{ j}\exp (-\gamma (\eta _{j}-\tau ))\right ]x = e_{\tau,x}(1). {}\\ \end{array}$$So, by combining the above relations, we get
$$\displaystyle\begin{array}{rcl} & & u_{f}(1) =\sum _{ i=1}^{m-2}\alpha _{ i}e_{\tau,x}(\eta _{i}) + \frac{1} {\gamma } \sum _{i=1}^{m-2}\alpha _{ i}\int _{\tau }^{\eta _{i} }(1 -\exp (-\gamma (\eta _{i} - s)))f(s)ds {}\\ & & \qquad \qquad + \frac{A_{\tau }} {\gamma } \sum _{i=1}^{m-2}\alpha _{ i}(1 -\exp (-\gamma (\eta _{i}-\tau )))\int _{\tau }^{1}\phi _{ \tau }(s)f(s)ds {}\\ & & \qquad \,\,\, =\sum _{ i=1}^{m-2}\alpha _{ i}\left [e_{\tau,x}(\eta _{i}) + \frac{1} {\gamma } \int _{\tau }^{\eta _{i} }(1 -\exp (-\gamma (\eta _{i} - s)))f(s)ds\right. {}\\ & & \qquad \qquad + \left.\frac{A_{\tau }} {\gamma } (1 -\exp (-\gamma (\eta _{i}-\tau )))\int _{\tau }^{1}\phi _{ \tau }(s)f(s)ds\right ] {}\\ & & \qquad \,\,\, =\sum _{ i=1}^{m-2}\alpha _{ i}u_{f}(\eta _{i}). {}\\ \end{array}$$On the other hand, by the same arguments as in [2] we can conclude that u f is derivable and its derivative \(\dot{u}_{f}\) is defined by
$$\displaystyle{\dot{u}_{f}(t) =\dot{ e}_{\tau,x}(t) +\int _{ \tau }^{1}\frac{\partial G_{\tau }} {\partial t} (t,s)f(s)ds,\quad \forall t \in [\tau,1].}$$ -
(v)
Indeed, let t ∈ [τ, 1]. Using the expression of \(\frac{\partial G_{\tau }} {\partial t}\) in (i) we have
$$\displaystyle\begin{array}{rcl} & & \dot{u}_{f}(t) =\dot{ e}_{\tau,x}(t) +\int _{ \tau }^{t}\exp (-\gamma (t - s))f(s)ds {}\\ & & \qquad \quad + A_{\tau }\exp (-\gamma (t-\tau ))\int _{\tau }^{1}\phi _{ \tau }(s)f(s)ds. {}\\ \end{array}$$Whence
$$\displaystyle\begin{array}{rcl} & & \langle x^{{\ast}},\ddot{u}_{ f}(t)\rangle = \frac{d} {dt}\langle x^{{\ast}},\dot{u}_{ f}(t)\rangle {}\\ & & =\langle x^{{\ast}},\ddot{e}_{\tau,x}(t)\rangle + \frac{d} {dt}\int _{\tau }^{t}\exp (-\gamma (t - s))\langle x^{{\ast}},f(s)\rangle ds {}\\ & & \quad - A_{\tau }\gamma \exp (-\gamma (t-\tau ))\int _{\tau }^{1}\langle x^{{\ast}},\phi _{\tau }(s)f(s)\rangle ds {}\\ & & =\langle x^{{\ast}},\ddot{e}_{\tau,x}(t)\rangle +\langle x^{{\ast}},f(t)\rangle -\gamma \int _{\tau }^{t}\exp (-\gamma (t - s))\langle x^{{\ast}},f(s)\rangle ds {}\\ & & \quad - A_{\tau }\gamma \exp (-\gamma (t-\tau ))\int _{\tau }^{1}\langle x^{{\ast}},\phi _{\tau }(s)f(s)\rangle ds. {}\\ \end{array}$$We also note that \(\ddot{e}_{\tau,x}(t) = -\gamma \dot{e}_{\tau,x}(t)\). Therefore
$$\displaystyle{\langle x^{{\ast}},\ddot{u}_{ f}(t)\rangle =\langle x^{{\ast}},f(t)\rangle -\langle x^{{\ast}},\gamma \dot{u}_{ f}(t)\rangle.}$$This implies that \(\dot{u}_{f}\) is scalarly derivable and
$$\displaystyle{\ddot{u}_{f}(t) +\gamma \dot{ u}_{f}(t) = f(t)\quad a.e.\quad t \in [\tau,1].}$$□
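As a quick numerical sanity check of Lemma 2.1 in the scalar case E = R, the following self-contained Python sketch implements G τ and e τ,x and verifies the boundary conditions of item (iv) by the trapezoidal rule. The closed forms of A τ and φ τ used below are reconstructed from the computations in the proof above and should be read as assumptions; all numerical data (γ, η i , α i , f) are ours.

```python
import numpy as np

# Numerical sanity check of Lemma 2.1 for E = R (a sketch, not part of the paper).
# The closed forms of A_tau and phi_tau are reconstructed from the proof above.
gamma, tau, x0 = 2.0, 0.0, 1.0
eta = np.array([0.3, 0.5, 0.8])      # eta_1 < eta_2 < eta_3, so m = 5 > 3
alpha = np.array([0.2, -0.1, 0.3])   # alpha_i, chosen so that A_tau is finite

# A_tau^{-1} = sum_i alpha_i - 1 + e^{-gamma(1-tau)} - sum_i alpha_i e^{-gamma(eta_i-tau)}
A = 1.0 / (alpha.sum() - 1.0 + np.exp(-gamma * (1 - tau))
           - np.sum(alpha * np.exp(-gamma * (eta - tau))))

knots = np.concatenate(([tau], eta, [1.0]))  # eta_0 = tau, eta_{m-1} = 1

def phi(s):
    # phi_tau(s) = 1 - e^{-gamma(1-s)} - sum_{j>=i} alpha_j (1 - e^{-gamma(eta_j-s)})
    # on [eta_{i-1}, eta_i[ (empty sum on the last interval)
    k = min(np.searchsorted(knots, s, side="right") - 1, len(alpha))
    return (1 - np.exp(-gamma * (1 - s))
            - np.sum(alpha[k:] * (1 - np.exp(-gamma * (eta[k:] - s)))))

def G(t, s):
    # Volterra part plus the correction enforcing the m-point boundary condition
    volterra = (1 - np.exp(-gamma * (t - s))) / gamma if s <= t else 0.0
    return volterra + (A / gamma) * (1 - np.exp(-gamma * (t - tau))) * phi(s)

def e_term(t):
    # e_{tau,x}(t) = x + A_tau (1 - sum_i alpha_i)(1 - e^{-gamma(t-tau)}) x
    return x0 + A * (1 - alpha.sum()) * (1 - np.exp(-gamma * (t - tau))) * x0

f = lambda s: np.cos(3 * s)          # a continuous right-hand side

def u(t, n=4001):
    # u_f(t) = e_{tau,x}(t) + int_tau^1 G(t,s) f(s) ds, trapezoidal rule
    s = np.linspace(tau, 1.0, n)
    v = np.array([G(t, si) * f(si) for si in s])
    return e_term(t) + np.sum((v[1:] + v[:-1]) / 2 * np.diff(s))

print(abs(u(tau) - x0))                                    # initial condition
print(abs(u(1.0) - np.dot(alpha, [u(ei) for ei in eta])))  # m-point condition
```

Both residuals are at quadrature-error level, so the sketch is consistent with u f (τ) = x and u f (1) = Σ i α i u f (η i ) for this nontrivial choice of data.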
The following result is a direct application of Lemma 2.1.
Lemma 2.2.
With the notations of Lemma 2.1 , let \(0 \leq \tau <\eta _{1} <\eta _{2} < \cdot \cdot \cdot <\eta _{m-2} < 1\) , let γ > 0, let m > 3 be an integer, and let α i ∈ R \(\left (i = 1,\ldots,m - 2\right )\) satisfy (1.1.1) . Let f ∈ C E ([τ,1]) (resp. f ∈ L E 1 ([τ,1])). Then the m-point boundary problem
has a unique C E 2 ([τ,1])-solution (resp. W E 2,1 ([τ,1])-solution) which is given by the integral representation formulas
where
Remark.
It is clear that the Green function G τ depends on τ. When τ = 0, (1.1.1) reduces to
where m is an integer > 3, \(0 <\eta _{1} <\eta _{2} < \cdot \cdot \cdot <\eta _{m-2} < 1\), and α i ∈ R \(\left (i = 1,2,\ldots,m - 2\right )\). Then the m-point boundary problem
has a unique C E 2([0, 1])-solution (resp. W E 2, 1([0, 1])-solution), u x, f , with integral representation formulas
where
This remark and its notation will be used in the next section.
3 Existence of Optimal Controls
Let us recall the following denseness result based on the Lyapunov convexity theorem. See, e.g., [12, 28].
Proposition 3.1.
Let E be a separable Banach space. Let Γ: [0,T] → cwk(E) be a convex weakly compact valued measurable and integrably bounded mapping. Let \(ext(\varGamma ): t\mapsto ext(\varGamma (t))\) where \(ext(\Gamma (t))\) is the set of extreme points of \(\Gamma (t)(t \in [0,T])\) . Then the set \(S_{\Gamma }^{1}\) of all integrable selections of \(\Gamma \) is convex and \(\sigma (L_{E}^{1},L_{E^{{\ast}}}^{\infty })\) -compact and the set of all integrable selections \(S_{ext(\Gamma )}^{1}\) of \(ext(\Gamma )\) is dense in \(S_{\Gamma }^{1}\) with respect to this topology.
In this section we will assume that the hypotheses and notations of Lemma 2.1 hold with τ = 0.
Theorem 3.1.
With the hypotheses and notations of Proposition 3.1 , let E be a separable Banach space and let \( \Gamma : [0,T]\rightarrow ck(E) \) be a convex compact valued measurable and integrably bounded mapping. Let us consider the following (SODE)
Then the set \(\{u_{f}: f \in S_{\varGamma }^{1}\}\) of W E 2,1 ([0,1])-solutions to (SODE) Γ is compact in C E 1 ([0,1]) and the set \(\{u_{g}: g \in S_{ext(\varGamma )}^{1}\}\) of W E 2,1 ([0,1])-solutions to (SODE) ext(Γ) is dense in the compact set \(\{u_{f}: f \in S_{\Gamma }^{1}\}\) of W E 2,1 ([0,1])-solutions to \((\mathit{SODE})_{\Gamma }\).
Proof.
Step 1. Compactness of the solution set \(\{u_{f}: f \in S_{\varGamma }^{1}\}\) in C E 1([0, 1]).
Let \((u_{f_{n}})\) be a sequence of W E 2, 1([0, 1])-solutions to (SODE) Γ . As \(S_{\Gamma }^{1}\) is \(\sigma (L_{E}^{1},L_{E^{{\ast}}}^{\infty })\)-compact, by the Eberlein–Šmulian theorem, we may assume that (f n ) \(\sigma (L_{E}^{1},L_{E^{{\ast}}}^{\infty })\)-converges to \(f_{\infty }\in S_{\varGamma }^{1}\). From the properties of the Green function G 0 in Lemma 2.1 (by taking τ = 0) we have, for each n ∈ N,
with
On the other hand, from definition of the Green function G 0 in Lemma 2.1(iv) and (3.1.1), it is not difficult to show that \(\{u_{f_{n}}: n \in \mathbf{N}\}\) is equicontinuous in C E ([0, 1]). Indeed, let t, t′ ∈ [0, 1], from (3.1.1) and (iv), we have the estimate
Further, for each t ∈ [0, 1] \(\{u_{f_{n}}(t): n \in \mathbf{N}\}\) is relatively compact because it is included in the norm compact set \(e_{x}(t) +\int _{ 0}^{1}G_{0}(t,s)\varGamma (s)ds\) (see e.g. [12, 14]). So by Ascoli’s theorem, \(\{u_{f_{n}}: n \in \mathbf{N}\}\) is relatively compact in C E ([0, 1]). Similarly using the properties of \(\frac{\partial G_{0}} {\partial t}\) in Lemma 2.1 and (3.1.2) we deduce that \(\left \{\dot{u}_{f_{n}}: n \in \mathbf{N}\right \}\) is equicontinuous in \(C_{E}\left (\left [0,1\right ]\right )\). In addition, the set \(\left \{\dot{u}_{f_{n}}\left (t\right ): n \in \mathbf{N}\right \}\) is included in the compact set \(\dot{e}_{x}(t) +\int _{ 0}^{1}\frac{\partial G_{0}} {\partial t} \left (t,s\right )\varGamma \left (s\right )ds\). So \(\left \{\dot{u}_{f_{n}}: n \in \mathbf{N}\right \}\) is relatively compact in C E ([0, 1]) by Ascoli’s theorem. From the above facts, we deduce that there exists a subsequence of \(\left (u_{f_{n}}\right )_{n\in \mathbf{N}}\) still denoted by \(\left (u_{f_{n}}\right )_{n\in \mathbf{N}}\) which converges uniformly to u ∞ ∈ C E ([0, 1]) with \(u^{\infty }\left (0\right ) = x,\) \(u^{\infty }\left (1\right ) =\sum _{ i=1}^{m-2}\alpha _{i}u^{\infty }\left (\eta _{i}\right )\). Similarly, we may assume that \(\left (\dot{u}_{f_{n}}\right )\) converges uniformly to \(v^{\infty }\in C_{E}([0,1])\). Furthermore, by the above facts, it is easy to see that \(\left (\ddot{u}_{f_{n}}\right )\) \(\sigma (L_{E}^{1},L_{E^{{\ast}}}^{\infty })\)-converges to \(w^{\infty }\in L_{E}^{1}\left (\left [0,1\right ]\right )\). For every t ∈ [0, 1], using the representation formula (3.1.1), we have
From (3.1.4) and Lemma 2.1(iv), we deduce that u ∞ is derivable and its derivative \(\dot{u}^{\infty }\) is given by
Now using the integral representation formula (3.1.2) we have, for every t ∈ [0, 1],
so that by (3.1.5) and (3.1.6) we get \(v^{\infty } =\dot{ u}^{\infty }\). Now invoking Lemma 2.1(v) and using (3.1.4) we get
Thus we get \(\ddot{u}^{\infty }(t) = w^{\infty }(t)\,a.e.\,t \in [0,1]\) so that by (3.1.4)
Step 2. Main fact: u ∞ coincides with the W E 2, 1([0, 1])-solution \(u_{f_{\infty }}\) associated with \(f_{\infty }\in S_{\varGamma }^{1}\) to
Remember that
and by the above fact, \((\ddot{u}_{f_{n}} +\gamma \dot{ u}_{f_{n}})\) converges weakly in \(L_{E}^{1}([0,1])\) to \(\ddot{u}^{\infty } +\gamma \dot{ u}^{\infty }\). Let \(v \in L_{E^{{\ast}}}^{\infty }([0,1])\). Multiply scalarly the equation
by v(t) and integrating on [0, 1] yields
It is clear that
so that
Using (3.1.7), (3.1.8), and (3.1.10) and uniqueness of solutions we get \(u^{\infty } = u_{f_{\infty }}\). This proves the first part of the theorem, while the second part follows from Proposition 3.1 and the integral representation formulas. □
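The density part of Theorem 3.1 can be illustrated numerically. The sketch below is our own toy construction, not the paper's: we take Γ(t) ≡ [−1, 1] ⊂ R, so that ext(Γ(t)) = {−1, +1}, and measure how far the trajectory contribution of the bang-bang selections f n (s) = sign(sin(2πns)) lies from that of the selection f ≡ 0; only the Volterra part (1∕γ)(1 − e −γ(t−s)) of the kernel G 0 is used, as a stand-in for the full Green function.

```python
import numpy as np

# Toy illustration of the density part of Theorem 3.1 (our construction):
# f_n(s) = sign(sin(2 pi n s)) is a bang-bang selection of Gamma(t) = [-1, 1].
# The Volterra-kernel part of u_{f_n} - u_0 is
#   int_0^t (1/gamma)(1 - e^{-gamma(t-s)}) f_n(s) ds,
# rewritten by parts as int_0^t e^{-gamma(t-s)} W_n(s) ds, where the primitive
# W_n(s) = int_0^s f_n is an exact triangular wave of height 1/(2n).
gamma = 2.0
s = np.linspace(0.0, 1.0, 2001)

def traj_dist(n, t):
    p = 1.0 / n                      # period of the square wave f_n
    ss = s[s <= t]
    r = np.mod(ss, p)
    W = p / 2 - np.abs(r - p / 2)    # primitive of f_n, computed exactly
    v = np.exp(-gamma * (t - ss)) * W
    return np.sum((v[1:] + v[:-1]) / 2 * np.diff(ss))   # trapezoidal rule

def sup_dist(n):
    return max(abs(traj_dist(n, t)) for t in np.linspace(0.0, 1.0, 41))

d = {n: sup_dist(n) for n in (2, 8, 32)}
print(d)  # uniform distance to the trajectory of f = 0 shrinks roughly like 1/n
```

The distances decay as the switching frequency grows: weak-* convergence of the controls translates into uniform convergence of the trajectories, which is exactly the mechanism behind the density statement.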
Now comes a direct application to the existence of optimal controls for the problem
Theorem 3.2.
Under the hypotheses and notations of Theorem 3.1 , problem (*) – (**) admits an optimal control.
Proof.
Let us set \(m:=\inf _{f\in S_{\varGamma }^{1}}\int _{0}^{1}J(t,u_{f}(t),\dot{u}_{f}(t),\ddot{u}_{f}(t))dt\). Let us consider a minimizing sequence \((u_{f_{n}},\dot{u}_{f_{n}},\ddot{u}_{f_{n}})\), that is
Since (f n ) is relatively weakly compact in L E 1([0, 1]), we may assume that (f n ) converges weakly in L E 1([0, 1]) to \(\overline{f}\). Applying the arguments in the proof of Theorem 3.1 shows that \((u_{f_{n}})\) converges uniformly to \(u_{\overline{f}}\), \((\dot{u}_{f_{n}})\) converges uniformly to \(\dot{u}_{\overline{f}}\), and \((\ddot{u}_{f_{n}})\) \(\sigma (L_{E}^{1},L_{E^{{\ast}}}^{\infty })\) -converges to \(\ddot{u}_{\overline{f}}\) with
Now applying the lower semicontinuity of integral functionals ([14], Theorem 8.1.6) yields
Hence we conclude that
□
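For intuition, here is a brute-force finite-dimensional analogue of Theorem 3.2. It is a sketch with invented data: the dynamics are the initial value surrogate ü + γu̇ = f with u(0) = 1, u̇(0) = 0 rather than the m-point problem, controls are piecewise-constant selections of Γ(t) ≡ [−1, 1], and the cost is ∫ 0 1 u(t) 2 dt.

```python
import itertools

# Toy analogue of Theorem 3.2 (our construction): controls are piecewise-constant
# selections of Gamma(t) = [-1, 1] on four subintervals; the state obeys
# u'' + gamma u' = f, u(0) = 1, u'(0) = 0; the cost is J = int_0^1 u(t)^2 dt.
gamma, h, steps = 2.0, 0.005, 200

def cost(pieces):
    u, v, J = 1.0, 0.0, 0.0
    for k in range(steps):
        f = pieces[min(4 * k // steps, 3)]          # piecewise-constant control
        J += h * u * u                              # left Riemann sum of u^2
        u, v = u + h * v, v + h * (-gamma * v + f)  # explicit Euler step
    return J

# brute force over a finite grid of selections, including the extreme points +-1
candidates = list(itertools.product([-1.0, 0.0, 1.0], repeat=4))
costs = {c: cost(c) for c in candidates}
best = min(costs, key=costs.get)
print(best, costs[best])  # the minimum is attained; the zero control costs 1
```

Over this finite candidate set the infimum is attained, and the optimal control strictly beats the zero control; the theorem asserts the corresponding statement over all of S Γ 1 via the compactness of the trajectory set.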
From now on we will assume that the hypotheses and notations of Lemma 2.1 hold.
4 Viscosity Property of the Value Function
The results given in Sect. 3 lead naturally to the problem of viscosity for the value function associated with a second order differential inclusion. Similar results dealing with ordinary differential equations (ODE) and evolution inclusions with control measures are available in [2, 7, 14, 16]. In this section we treat a new value function problem in the context of second order ordinary differential equations (SODE) with m-point boundary condition. Assume that E is a separable Banach space, Z is a convex compact subset of E, and S Z 1 is the set of all Lebesgue measurable mappings f: [0, 1] → Z (alias measurable selections of the constant mapping Z). For each f ∈ S Z 1, let us denote by u τ, x, f the trajectory solution associated with the control f ∈ S Z 1 starting from x at time τ ∈ [0, η 1[ to
with the integral representation formulas
and
where the coefficient A τ and the Green function G τ are given in Lemma 2.1.
By the above considerations and Lemma 2.1(ii), it is easy to check that the derivatives \(\dot{u}_{\tau,x,f}\), f ∈ S Z 1, are uniformly majorized by a continuous function c τ : [τ, 1] → R +, namely
It is worth mentioning that integral representation formulas (4.1) and (4.2) will be useful in the study of the value function we present below. Let us mention a useful lemma that is borrowed from ([16], Lemma 6.3) and ([7], Lemma 3.1).
Lemma 4.1.
Assume that (1.1.1) is satisfied. Let \((t_{0},x_{0}) \in [0,\eta _{1}[\times E\) and let Z be a convex compact subset in E. Let Λ: [0,T] × E × Z → R be an upper semicontinuous function such that the restriction of Λ to [0,T] × B × Z is bounded on any bounded subset B of E. If
for some η > 0, then there exists σ > 0 such that
where \(u_{t_{0},x_{0},f}\) is the trajectory solution associated with the control f ∈ S Z 1 starting from x 0 at time t 0 to
Proof.
By hypothesis, one has \(\max _{z\in Z}\varLambda (t_{0},x_{0},z) < -\eta < 0\). As Λ is upper semicontinuous, so is the function
Hence there is \(\varepsilon > 0\) such that
for \(0 \leq t - t_{0} \leq \varepsilon\) and \(\vert \vert x - x_{0}\vert \vert \leq \varepsilon\). As \(\dot{u}_{t_{0},x_{0},f}\) is uniformly bounded for all f ∈ S Z 1 and for all t ∈ [t 0, 1] by using the estimate (4.3) we can take σ > 0 such that \(\vert \vert u_{t_{0},x_{0},f}(t) - u_{t_{0},x_{0},f}(t_{0})\vert \vert \leq \varepsilon\) for all t ∈ [t 0, t 0 +σ] and for all f ∈ S Z 1. Then by integrating
for all f ∈ S Z 1 and the result follows. □
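The mechanism of Lemma 4.1 is elementary and can be checked on a toy example. All data below (Λ, η, σ, and the control set Z = {−1, +1}) are our own choices, not from the paper: since Λ ≤ −η near (t 0 , x 0 ) and velocities are bounded, the integral of Λ along any trajectory on [t 0 , t 0 + σ] stays below a strictly negative threshold, uniformly in the control.

```python
import numpy as np

# Toy check of the mechanism of Lemma 4.1 (illustrative data, not the paper's):
# Lam(t, x, z) = -1 + 0.1|t - t0| + 0.1|x - x0| is continuous (hence upper
# semicontinuous) with Lam(t0, x0, z) = -1 < -eta for eta = 0.5; trajectories of
# x' = f with f(t) in Z = {-1, +1} have velocity bounded by 1.
rng = np.random.default_rng(0)
t0, x0, eta_bd, h, n = 0.2, 1.0, 0.5, 0.001, 500
sigma = n * h                                   # sigma = 0.5

def Lam(t, x, z):
    return -1.0 + 0.1 * abs(t - t0) + 0.1 * abs(x - x0)

worst = -np.inf
for _ in range(50):                             # 50 random bang-bang controls
    f = rng.choice([-1.0, 1.0], size=n)
    x, t, integral = x0, t0, 0.0
    for fk in f:                                # Euler step and Riemann sum of Lam
        integral += h * Lam(t, x, fk)
        x += h * fk
        t += h
    worst = max(worst, integral)

print(worst)  # below the strictly negative level -sigma * eta / 2, for every f
```

Since |t − t 0 | ≤ σ and |x − x 0 | ≤ σ along every trajectory, here Λ ≤ −0.9 on the whole window, so each integral is at most −0.9σ, uniformly in f ∈ S Z 1 , as in the lemma.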
For simplicity we deal first with a dynamic programming principle (DPP) for a value function V J related to a bounded continuous function J: [0, 1] × E × Z → R associated with
The following result is of importance in the statement of viscosity.
Theorem 4.1 (Dynamic Programming Principle).
Assume that (1.1.1) holds. Let x ∈ E, 0 ≤τ < η 1 < ⋯ < η m−2 < 1, and let σ > 0 be such that τ + σ < η 1. Assume that \( J : [0,1]\times E \times E \rightarrow \mathbf{R} \) is bounded and continuous and that J(t,x,⋅) is convex on E for every \( (t, x) \in [0, 1] \times E \). Let us consider the value function
where u τ,x,f is the trajectory solution on [τ,1] associated with the control f ∈ S Z 1 starting from x at time τ to
Then the following hold
with
where \(v_{\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ),g}\) denotes the trajectory solution on [τ + σ,1] associated with the control g ∈ S Z 1 starting from u τ,x,f (τ + σ) at time τ + σ to
Proof.
Let
For any f ∈ S Z 1, we have
By the definition of \(V _{J}(\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ))\) we have
It follows that
By taking the supremum on S Z 1 in this inequality we get
Let us prove the converse inequality.
Main Fact: \(f \rightarrow V _{J}(\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ))\) is lower semicontinuous on S Z 1 (endowed with the \(\sigma (L_{E}^{1},L_{E^{{\ast}}}^{\infty })\)-topology).
Let us focus on the expression of \(V _{J}(\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ))\)
where \(v_{\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ),g}\) denotes the trajectory solution on [τ +σ, 1] associated with the control g ∈ S Z 1 starting from u τ, x, f (τ +σ) at time τ +σ to (SODE) (4.5). By the integral representation formulas (4.1) (4.2) given above we have
with
It is already seen in the proof of Step 1 of Theorem 3.1 that \(f\mapsto u_{\tau,x,f}\) from S Z 1 into C E ([τ, 1]) is continuous when S Z 1 is endowed with the \(\sigma (L_{E}^{1},L_{E^{{\ast}}}^{\infty })\) topology and C E ([τ, 1]) is endowed with the norm of uniform convergence; namely, when f n \(\sigma (L_{E}^{1},L_{E^{{\ast}}}^{\infty })\)-converges to f ∈ S Z 1, then \(u_{\tau,x,f_{n}}\) converges uniformly to u τ, x, f . This entails that
for every t ∈ [τ, 1]. Further, when g n \(\sigma (L_{E}^{1},L_{E^{{\ast}}}^{\infty })\)-converges to g ∈ S Z 1, by compactness of Z, and the boundedness property of G τ+σ (t, s) in Lemma 2.1, it is not difficult to check that
for every t ∈ [τ, 1]. Therefore
for every t ∈ [τ, 1]. Hence in view of ([14], Theorem 8.1.6) we deduce that
is lower semicontinuous on \(S_{Z}^{1} \times S_{Z}^{1}\) using the above fact and the convexity assumption on the integrand J(t, x, . ). Consequently \(f \rightarrow V _{J}(\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ))\) is lower semicontinuous on S Z 1. Hence the mapping
is lower semicontinuous on S Z 1. Since S Z 1 is weakly compact in L E 1([0, 1]), there is \(f^{1} \in S_{Z}^{1}\) such that
Similarly there is \(g^{2} \in S_{Z}^{1}\) such that
where \(v_{\tau +\sigma,u_{\tau,x,f^{1}}(\tau +\sigma ),g^{2}}(t)\) denotes the trajectory solution on [τ +σ, 1] associated with the control \(g^{2} \in S_{Z}^{1}\) starting from \(u_{\tau,x,f^{1}}(\tau +\sigma )\) at time τ +σ to
Let us set
Then \(\overline{f} \in S_{Z}^{1}\) (because S Z 1 is decomposable). Let \(w_{\tau,x,\overline{f}}\) be the trajectory solution on [τ, 1] associated with \(\overline{f} \in S_{Z}^{1}\), that is
By uniqueness of solution we have
Coming back to the expression of V J and W J we have
□
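The dynamic programming principle just established can be illustrated on a drastically simplified toy model: a first-order scalar dynamic u̇ = f with finitely many controls, in place of the (SODE) dynamics of the theorem. Everything below (the grid, the control set {−1, 0, 1} and the running reward J(t, x, f) = x) is an illustrative assumption. Exact backward induction on a grid satisfies the DPP identity by construction, and its value approaches the continuous one.

```python
from itertools import product

# Toy first-order analogue of the DPP (illustrative assumptions throughout):
# V(tau, x) = sup_f int_tau^1 J(t, u(t), f(t)) dt,  u' = f,  f(t) in {-1, 0, 1}.
# Discretizing with step h keeps every reachable state on the grid x = j*h.
N = 20
h = 1.0 / N
J = lambda t, x, f: x                       # illustrative running reward

V = [dict() for _ in range(N + 1)]
for j in range(-N, N + 1):
    V[N][j] = 0.0                           # terminal condition
for k in range(N - 1, -1, -1):              # exact backward induction
    for j in range(-N, N + 1):
        V[k][j] = max(h * J(k * h, j * h, f) + V[k + 1][j + f]
                      for f in (-1, 0, 1) if -N <= j + f <= N)
v0 = V[0][0]

# DPP check at sigma = 4*h: V(0, 0) = sup over first-leg controls of
# (running reward on [0, sigma]) + V(sigma, endpoint).
k_sig = 4
best = float("-inf")
for c in product((-1, 0, 1), repeat=k_sig):
    j, reward = 0, 0.0
    for k, f in enumerate(c):
        reward += h * J(k * h, j * h, f)
        j += f
    best = max(best, reward + V[k_sig][j])

# Optimal play pushes u upward, u(t) = t, so v0 is the left Riemann sum
# of int_0^1 t dt, i.e. 0.475 for N = 20.
assert abs(v0 - 0.475) < 1e-12
assert abs(best - v0) < 1e-12               # the DPP identity holds
```

In the theorem itself the supremum is taken over S Z 1 and the concatenated control \(\overline{f}\) exploits decomposability; the toy version only mirrors the two-stage structure of the identity.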
Here are our results on viscosity solutions for the value function.
Theorem 4.2 (Viscosity Subsolution).
Assume that E is a separable Hilbert space. Assume (1.1.1) and that J: [0,1] × E × E → R is bounded and continuous with J(t,x,.) convex on E for every (t,x) ∈ [0,1] × E. Let us consider the value function
where u τ,x,f is the trajectory solution on [τ,1] associated with the control f ∈ S Z 1 starting from x ∈ E at time τ to
Then V J satisfies a viscosity property: for any \(\varphi \in C^{1}([0,1] \times E)\) such that \(V _{J}-\varphi\) reaches a local maximum at \((t_{0},x_{0}) \in [0,\eta _{1}[\times E\) , we have
Proof.
Assume by contradiction that there exists a \(\varphi \in C^{1}([0,1] \times E)\) such that \(V _{J}-\varphi\) reaches a local maximum at \((t_{0},x_{0}) \in [0,\eta _{1}[\times E\) for which
for some η > 0. Applying Lemma 4.1, by taking
yields σ > 0 such that
where \(u_{t_{0},x_{0},f}\) is the trajectory solution associated with the control f ∈ S Z 1 starting from x 0 at time t 0 to
Applying the dynamic programming principle (Theorem 4.1) gives
Since \(V _{J}-\varphi\) has a local maximum at (t 0, x 0), for small enough σ
for all f ∈ S Z 1. By (4.2.2), for each n ∈ N, there is \(f^{n} \in S_{Z}^{1}\) such that
From (4.2.3) and (4.2.4) we deduce that
Therefore we have
As \(\varphi \in C^{1}([0,1] \times E)\)
Applying the integral representation formulas (4.1) and (4.2) gives
with
where the coefficient \(A_{t_{0}}\) and the Green function \(G_{t_{0}}\) are defined in Lemma 2.1. Then from (4.2.6) we get the estimate
Since f n(s) ∈ Z for all s ∈ [t 0, 1], it follows that
for all t, s ∈ [t 0, 1]. From (4.2.7) and this inclusion we get
Inserting the estimate (4.2.8) into (4.2.5), we get
By combining (4.2.1) and (4.2.9) we get the estimate
Therefore we have that \(0 < \frac{\sigma \eta } {2} < \frac{1} {n}\) for every n ∈ N. Passing to the limit when n goes to ∞ in the preceding inequality yields a contradiction. □
5 Optimal Control Problem in Pettis Integration
We provide in this section some results on optimal control problems governed by an (SODE) with m-point boundary condition where the controls are Pettis-integrable. Here E is a separable Banach space. We recall and summarize some needed results on Pettis integrability. Let f: [0, 1] → E be a scalarly integrable function, that is, for every x ∗ ∈ E ∗, the scalar function \(t\mapsto \langle x^{{\ast}},f(t)\rangle\) is Lebesgue-integrable on [0, 1]. A scalarly integrable function f: [0, 1] → E is Pettis-integrable if, for every Lebesgue-measurable set A in [0, 1], the weak integral ∫ A f(t)dt defined by \(\langle x^{{\ast}},\int _{A}f(t)dt\rangle =\int _{A}\langle x^{{\ast}},f(t)\rangle \,dt\) for all x ∗ ∈ E ∗ belongs to E. We denote by P E 1([0, 1], dt) the space of all Pettis-integrable functions f: [0, 1] → E endowed with the Pettis norm \(\vert \vert f\vert \vert _{Pe} =\sup _{x^{{\ast}}\in \overline{B}_{ E^{{\ast}}}}\int _{0}^{1}\vert \langle x^{{\ast}},f(t)\rangle \vert dt.\) A mapping f: [0, 1] → E is Pettis-integrable iff the set \(\{\langle x^{{\ast}},f\rangle: \vert \vert x^{{\ast}}\vert \vert \leq 1\}\) is uniformly integrable in the space L R 1([0, 1], dt). More generally, a convex compact valued mapping Γ: [0, 1] ⇉ E is scalarly integrable if, for every x ∗ ∈ E ∗, the scalar function \(t\mapsto \delta ^{{\ast}}(x^{{\ast}},\varGamma (t))\) is Lebesgue-integrable on [0, 1]; Γ is Pettis-integrable if the set \(\{\delta ^{{\ast}}(x^{{\ast}},\varGamma (.)): \vert \vert x^{{\ast}}\vert \vert \leq 1\}\) is uniformly integrable in the space L R 1([0, 1], dt). In view of ([6], Theorem 4.2; or [14], Cor. 6.3.3), the set S Γ Pe of all Pettis-integrable selections of a convex compact valued Pettis-integrable mapping \(\Gamma: [0,1] \rightrightarrows E\) is sequentially \(\sigma (P_{E}^{1},L^{\infty }\otimes E^{{\ast}})\)-compact. We refer to [19] for related results on the integration of Pettis-integrable multifunctions.
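To make the Pettis norm concrete, here is a small numerical sketch in the finite-dimensional case E = R², where Pettis and Bochner integrability coincide, so this only illustrates the formula, not the infinite-dimensional subtleties. The choice f(t) = (1, t) is an assumption made for the example.

```python
import math

def pettis_norm_R2(f, n_dirs=720, n_quad=1000):
    """Approximate ||f||_Pe = sup_{||x*|| <= 1} int_0^1 |<x*, f(t)>| dt for
    f: [0,1] -> R^2 by scanning unit dual vectors x* = (cos a, sin a)."""
    h = 1.0 / n_quad
    samples = [f((k + 0.5) * h) for k in range(n_quad)]   # midpoint rule
    best = 0.0
    for i in range(n_dirs):
        a = 2.0 * math.pi * i / n_dirs
        ca, sa = math.cos(a), math.sin(a)
        best = max(best, h * sum(abs(ca * u + sa * v) for (u, v) in samples))
    return best

f = lambda t: (1.0, t)                 # illustrative choice of f
pe = pettis_norm_R2(f)
# The Pettis norm always dominates the norm of the weak integral (1, 1/2):
assert pe >= math.hypot(1.0, 0.5) - 1e-3
```

For this particular f a maximizing direction keeps \(\langle x^{\ast},f(t)\rangle\) of constant sign on [0, 1], so the supremum equals the norm of the weak integral, \(\sqrt{1.25} \approx 1.118\); in general one only has \(\vert \vert \int f\vert \vert \leq \vert \vert f\vert \vert _{Pe}\).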
We provide some useful lemmas.
Lemma 5.1.
Let G : [0,1] × [0,1] → R be a mapping with the following properties
-
(i)
for each t \( \in [0, 1], G(t,\,.) \) is Lebesgue-measurable on [0,1],
-
(ii)
for each s \( \in [0, 1], G(., s) \) is continuous on [0,1],
-
(iii)
there is a constant M > 0 such that \( |G(t, s)| \leq M \) for all \( (t, s) \in [0, 1] \times [0, 1] \).
Let f: [0,1] → E be a Pettis-integrable mapping. Then the mapping
is continuous from [0,1] into E, that is, u f ∈ C E ([0,1]).
Proof.
Let (t n ) be a sequence in [0, 1] such that t n → t ∈ [0, 1]. Then we have the estimate
As the sequence ( | G(t n , . ) − G(t, . ) | ) is bounded in L R ∞([0, 1]) and pointwise converges to 0, it converges to 0 uniformly on uniformly integrable subsets of L R 1([0, 1]) in view of a lemma due to Grothendieck [24]; in other terms, it converges to 0 with respect to the Mackey topology τ(L ∞, L 1), see also [5] for a more general result concerning the Mackey topology for bounded sequences in \(L_{E{\ast}}^{\infty }\). Since the set \(\{\vert \langle x^{{\ast}},f(s)\rangle \vert: \vert \vert x^{{\ast}}\vert \vert \leq 1\}\) is uniformly integrable in L R 1([0, 1]), the second term in the above estimate goes to 0 when t n → t, showing that u f is continuous on [0, 1] with respect to the norm topology of E. □
The following is a generalization of Lemma 5.1.
Lemma 5.2.
Let G: [0,1] × [0,1] → R be a mapping with the following properties
-
(i)
for each t \( \in [0, 1], G(t,\,.) \) is Lebesgue-measurable on [0,1],
-
(ii)
for each s \( \in [0, 1], G(., s) \) is continuous on [0,1],
-
(iii)
there is a constant M > 0 such that \( |G(t, s)| \leq M \) for all \( (t, s) \in [0, 1] \times [0, 1] \).
Let Γ: [0,1] ⇉ E be a convex compact valued measurable and Pettis-integrable mapping. Then the set
is equicontinuous in C E ([0,1]).
Proof.
By Lemma 5.1 it is clear that
Let us check the equicontinuity property. Indeed, let t, t k ∈ [0, 1] be such that t k → t; we have the estimate
As the sequence ( | G(t k , . ) − G(t, . ) | ) is bounded in L R ∞([0, 1]) and the set \(\{\vert \delta ^{{\ast}}(x^{{\ast}},\varGamma (.))\vert: \vert \vert x^{{\ast}}\vert \vert \leq 1\}\) is uniformly integrable in L R 1([0, 1]), by invoking again the Grothendieck lemma [24] as in the proof of Lemma 5.1, the second term goes to 0 when t k → t, showing that \(\{u_{f}: f \in S_{\varGamma }^{Pe}\}\) is equicontinuous in C E ([0, 1]). □
The following lemma is crucial in the statement of the (SODE) with Pettis-integrable second member and m-point boundary condition. Here we suppose that the hypotheses and notations of Lemma 2.1 hold.
Lemma 5.3.
Let x ∈ E, let G τ be the Green function and let e τ,x and \(\dot{e}_{\tau,x}\) be as in Lemma 2.1,
and let f be a Pettis-integrable function. Let us consider the mapping
Then the following assertions hold
-
(1)
u τ,x,f is continuous, i.e., u τ,x,f ∈ C E ([0,1]),
-
(2)
\(u_{\tau,x,f}(\tau ) = x,\quad u_{\tau,x,f}(1) =\sum _{ i=1}^{m-2}\alpha _{i}u_{\tau,x,f}(\eta _{i})\),
-
(3)
The function u τ,x,f is scalarly derivable, that is, for every x ∗ ∈ E ∗ , the scalar function \(\langle x^{{\ast}},u_{\tau,x,f}\rangle\) is derivable and its weak derivative \(\dot{u}_{\tau,x,f}\) satisfies
$$\displaystyle{\dot{u}_{\tau,x,f}(t) =\dot{ e}_{\tau,x}(t) +\int _{ \tau }^{1}\frac{\partial G_{\tau }} {\partial t} (t,s)f(s)ds,\quad \tau \in [0,\eta _{1}[,\quad t \in [\tau,1].}$$ -
(4)
The function \(\dot{u}_{\tau,x,f}\) is continuous and scalarly derivable, that is, for every x ∗ ∈ E ∗ , the scalar function \(\langle x^{{\ast}},\dot{u}_{\tau,x,f}\rangle\) is derivable and its weak derivative \(\ddot{u}_{\tau,x,f}\) satisfies
$$\displaystyle{\ddot{u}_{\tau,x,f}(t) +\gamma u_{\tau,x,f}(t) = f(t)\quad a.e.\quad t \in [\tau,1].}$$
Proof.
(1) Since e τ, x ∈ C E ([0, 1]) and G τ is a Carathéodory and bounded function, u τ, x, f is continuous on [τ, 1] with respect to the norm topology of E in view of Lemma 5.1. (2) follows from Lemma 2.1(iv). (3)–(4) Similarly, using the property of \(\frac{\partial G_{\tau }} {\partial t}\) in Lemma 2.1 we infer that \(t\mapsto \int _{\tau }^{1}\frac{\partial G_{\tau }} {\partial t} (t,s)f(s)ds\) is continuous on [τ, 1] with respect to the norm topology of E in view of Lemma 5.1 and so is the mapping \(t\mapsto \dot{e}_{\tau,x}(t) +\int _{ \tau }^{1}\frac{\partial G_{\tau }} {\partial t} (t,s)f(s)ds\). Now (3)–(4) follow from the computation used in (iv)–(v) in Lemma 2.1. □
By W P, E 2, 1([τ, 1]) we denote the space of all continuous functions in C E ([τ, 1]) such that their first weak derivatives are continuous and their second weak derivatives are Pettis-integrable on [τ, 1]. By Lemma 5.3, given a Pettis-integrable function f: [τ, 1] → E (shortly, f ∈ P E 1([τ, 1])), the (SODE)
admits a unique W P, E 2, 1([τ, 1])-solution with integral representation formulas
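Since the Green function of Lemma 2.1 is not reproduced here, a quick scalar sanity check (E = R) of existence, uniqueness and the boundary behaviour can be done in closed form. All data below (γ = 1, constant f ≡ c, τ = 0, m = 4, η = (0.25, 0.5), α = (0.4, 0.4)) are illustrative assumptions, not the paper's.

```python
import math

# Scalar illustration (E = R) of the m-point problem  u'' + gamma*u = f,
# u(0) = x,  u(1) = sum_i alpha_i u(eta_i), with illustrative data.
gamma, x, c = 1.0, 2.0, 3.0
etas, alphas = (0.25, 0.5), (0.4, 0.4)

# General solution for gamma = 1: u(t) = A cos t + B sin t + c, since the
# constant u_p = c is a particular solution of u'' + u = c.
# u(0) = x gives A = x - c; the m-point condition fixes B.
A = x - c
num = A * (sum(a * math.cos(e) for a, e in zip(alphas, etas)) - math.cos(1.0)) \
    + c * (sum(alphas) - 1.0)
den = math.sin(1.0) - sum(a * math.sin(e) for a, e in zip(alphas, etas))
B = num / den                       # den != 0 here, so the solution is unique

u = lambda t: A * math.cos(t) + B * math.sin(t) + c

# Check both boundary conditions and the equation (u'' = -A cos t - B sin t).
assert abs(u(0.0) - x) < 1e-12
assert abs(u(1.0) - sum(a * u(e) for a, e in zip(alphas, etas))) < 1e-12
t0 = 0.7
assert abs((-A * math.cos(t0) - B * math.sin(t0)) + gamma * u(t0) - c) < 1e-12
```

The nondegeneracy of `den` plays the role, in this toy case, of the compatibility condition on the α i recalled in the introduction.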
The following result provides the compactness of solutions for a class of (SODE) with m (m > 3) point boundary condition and Pettis-integrable controls.
Theorem 5.1.
Let E be a separable Banach space and let Γ: [0,1] → ck(E) be a convex compact valued measurable and Pettis-integrable mapping. Let us consider the following
Then the set \(\{u_{\tau,x,f}: f \in S_{\varGamma }^{Pe}\}\) of \(W_{P,E}^{2,1}([\tau,1])\) -solutions to (SODE) Γ is compact in C E ([τ,1]).
Proof.
Let \((u_{\tau,x,f_{n}})\) be a sequence of \(W_{P,E}^{2,1}([\tau,1])\)-solutions to (SODE) Γ . As S Γ Pe is sequentially \(\sigma (P_{E}^{1},L^{\infty }\otimes E^{{\ast}})\)-compact, by extracting a subsequence we may assume that (f n ) converges with respect to the \(\sigma (P_{E}^{1},L^{\infty }\otimes E^{{\ast}})\) topology to \(f_{\infty }\in S_{\varGamma }^{Pe}\). Using Lemma 5.3, we have, for each n ∈ N,
From the properties of the Green function G τ in Lemma 2.1, (5.1.1) and Lemma 5.2, we infer that \(\{u_{\tau,x,f_{n}}: n \in \mathbf{N}\}\) is equicontinuous in C E ([0, 1]). Further, for each t ∈ [τ, 1], \(\{u_{\tau,x,f_{n}}(t): n \in \mathbf{N}\}\) is relatively compact because it is included in the norm compact set \(e_{\tau,x}(t) +\int _{ 0}^{1}G_{\tau }(t,s)\varGamma (s)ds\) (see e.g. [12, 14]). So by Ascoli’s theorem, \(\{u_{\tau,x,f_{n}}: n \in \mathbf{N}\}\) is relatively compact in C E ([τ, 1]). Similarly, using the properties of \(\frac{\partial G_{\tau }} {\partial t}\) in Lemma 2.1, (5.1.2) and Lemma 5.2, we deduce that \(\left \{\dot{u}_{\tau,x,f_{n}}: n \in \mathbf{N}\right \}\) is equicontinuous in \(C_{E}\left (\left [\tau,1\right ]\right )\). In addition, the set \(\left \{\dot{u}_{\tau,x,f_{n}}\left (t\right ): n \in \mathbf{N}\right \}\) is included in the compact set \(\dot{e}_{\tau,x}(t) +\int _{ 0}^{1}\frac{\partial G_{\tau }} {\partial t} \left (t,s\right )\varGamma \left (s\right )ds\). So \(\left \{\dot{u}_{\tau,x,f_{n}}: n \in \mathbf{N}\right \}\) is relatively compact in C E ([τ, 1]) by Ascoli’s theorem. From the above facts, we deduce that there exists a subsequence of \(\left (u_{\tau,x,f_{n}}\right )_{n\in \mathbf{N}}\), still denoted by \(\left (u_{\tau,x,f_{n}}\right )_{n\in \mathbf{N}}\), which converges uniformly to \(u^{\infty }\in C_{E}([\tau,1])\) with \(u^{\infty }\left (\tau \right ) = x\) and \(u^{\infty }\left (1\right ) =\sum _{ i=1}^{m-2}\alpha _{i}u^{\infty }\left (\eta _{i}\right )\). Similarly, we may assume that \(\left (\dot{u}_{\tau,x,f_{n}}\right )\) converges uniformly to v ∞ ∈ C E ([τ, 1]). Furthermore, by the above facts, it is easy to see that \(\left (\ddot{u}_{\tau,x,f_{n}}\right )\) converges \(\sigma (P_{E}^{1},L^{\infty }\otimes E^{{\ast}})\) to a Pettis integrable function \(w^{\infty }\in P_{E}^{1}([\tau,1])\). For every t ∈ [τ, 1], using the representation formula (5.1.1), we have
From (5.1.4) and Lemma 5.3, we deduce that u ∞ is scalarly derivable and its weak derivative \(\dot{u}^{\infty }\) is given by
Now using the integral representation formula (5.1.2) we have, for every t ∈ [τ, 1],
so that by (5.1.5) and (5.1.6) we get \(v^{\infty } =\dot{ u}^{\infty }\). Now using (5.1.4) and invoking Lemma 5.3(4) we get
Thus we get \(\ddot{u}^{\infty }(t) = w^{\infty }(t)\,a.e.\,t \in [\tau,1]\) so that
Step 2. Main fact: u ∞ coincides with the \(W_{P,E}^{2,1}([\tau,1])\)-solution \(u_{f_{\infty }}\) associated with \(f_{\infty }\in S_{\varGamma }^{Pe}\) to
Remember that
and by the above fact, \((\ddot{u}_{\tau,x,f_{n}} +\gamma u_{\tau,x,f_{n}})\) \(\sigma (P_{E}^{1},L^{\infty }\otimes E^{{\ast}})\)-converges in P E 1([τ, 1]) to \(\ddot{u}^{\infty } +\gamma u^{\infty }\). Let \(v = h \otimes x^{{\ast}}\in L^{\infty }([\tau,1]) \otimes E^{{\ast}}\). Multiplying scalarly the equation
by v(t) and integrating on [τ, 1] yields
It is clear that
so that by invoking the separability of E
Using (5.1.7), (5.1.8), and (5.1.10) and uniqueness of solution we obtain \(u^{\infty } = u_{f_{\infty }}.\) □
Remark.
In the context of control theory, we have established, in the proof of Theorem 5.1, the dependence of the trajectory solution with respect to the Pettis controls. Namely, with the notations of Theorem 5.1, if \(u_{\tau,x,f_{n}}\) is the W P, E 2, 1([τ, 1])-solution of
and if (f n ) \(\sigma (P_{E}^{1},L^{\infty }\otimes E^{{\ast}})\)-converges to \(f_{\infty }\in S_{\varGamma }^{Pe}\), then \((u_{\tau,x,f_{n}})\) converges uniformly to \(u_{\tau,x,f_{\infty }}\), \((\dot{u}_{\tau,x,f_{n}})\) converges uniformly to \(\dot{u}_{\tau,x,f_{\infty }}\) and \((\ddot{u}_{\tau,x,f_{n}})\) \(\sigma (P_{E}^{1},L^{\infty }\otimes E^{{\ast}})\)-converges to \(\ddot{u}_{\tau,x,f_{\infty }}\) where \(u_{\tau,x,f_{\infty }}\) is the W P, E 2, 1([τ, 1])-solution of
The above remark is of importance since it allows us to prove further results. Here is an application to the existence of a W P, E 2, 1([τ, 1])-solution of an (SODE) with m-point boundary condition.
Theorem 5.2.
Let F: [0,1] × (E × E) → E be a Carathéodory mapping satisfying
for all (t,x,y) ∈ [0,1] × E × E where Γ: [0,1] ⇉ E is a convex compact valued Pettis-integrable mapping. Then the (SODE)
has a W P,E 2,1 ([τ,1])-solution.
Proof.
Let us set
Then Theorem 5.1 shows that \(\mathcal{X}\) is compact and convex in C E ([τ, 1]). For each \(u \in \mathcal{X}\), let us set
In view of Lemma 5.3, Φ(u) is nonempty. Let us prove that the mapping \(\varPhi: \mathcal{X} \rightarrow \mathcal{X}\) is continuous. Let (u n , v n ) ∈ Graph Φ be such that u n → u and v n → v in \(\mathcal{X}\). We need to check that v = Φ(u). Taking account of the particular structure of \(\mathcal{X}\) and the remark after Theorem 5.1, we have that \(\dot{u}_{n} \rightarrow \dot{ u}\) uniformly and \(\ddot{u}_{n}\) \(\sigma (P_{E}^{1},L^{\infty }\otimes E^{{\ast}})\)-converges to \(\ddot{u}\), and that \(\dot{v}_{n} \rightarrow \dot{ v}\) uniformly and \(\ddot{v}_{n}\) \(\sigma (P_{E}^{1},L^{\infty }\otimes E^{{\ast}})\)-converges to \(\ddot{v}\). Multiplying scalarly the equality
by h(t) ⊗ x ∗ where \(h \in L_{\mathbf{R}^{+}}^{\infty }([\tau,1])\) and \(x^{{\ast}}\in \overline{B}_{E^{{\ast}}}\) and integrating on [τ, 1] gives
By passing to the limit when n → ∞ in (5.2.1) we get
by the Lebesgue dominated convergence theorem, because
for all (t, x, y) ∈ [0, 1] × E × E. By (5.2.2) we deduce that
Whence we get
for every \(x^{{\ast}}\in \overline{B}_{E^{{\ast}}}\). By taking a dense sequence \((e_{k}^{{\ast}})_{k\in \mathbf{N}}\) in \(\overline{B}_{E^{{\ast}}}\) for the Mackey topology we get
for all k ∈ N. Finally we get
proving that Graph Φ is compact. By applying the Kakutani–Ky Fan fixed point theorem to Φ, we find \(u \in \mathcal{X}\) such that u = Φ(u), which is a W P, E 2, 1([τ, 1])-solution of the (SODE) under consideration. □
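The fixed-point scheme behind the proof can be sketched numerically in the scalar case E = R. Everything below is an illustrative assumption: γ = 1, the small-Lipschitz right-hand side F(t,u,u̇) = 0.1 sin u + 0.05 cos u̇, η = (0.25, 0.5), α = (0.4, 0.4), and a finite-difference solve standing in for the Green-function representation. With these data the Picard iteration u ↦ Φ(u) contracts, so it converges; the theorem itself uses a fixed-point argument on a compact convex set rather than a contraction.

```python
import math

# Scalar (E = R) sketch of u_{k+1} = Phi(u_k), where Phi(u) solves
# v'' + gamma*v = F(t, u, u') with v(0) = x and v(1) = sum_i alpha_i v(eta_i).
N = 20
h = 1.0 / N
gamma, x = 1.0, 1.0
bnd = [(5, 0.4), (10, 0.4)]          # (grid index of eta_i, alpha_i): eta = 0.25, 0.5
F = lambda t, u, du: 0.1 * math.sin(u) + 0.05 * math.cos(du)

def solve(rhs):
    """Finite-difference solve of v'' + gamma*v = rhs with the m-point condition."""
    n = N + 1
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    A[0][0], b[0] = 1.0, x                       # v(0) = x
    for i in range(1, N):
        A[i][i - 1] = A[i][i + 1] = 1.0 / h ** 2
        A[i][i] = -2.0 / h ** 2 + gamma
        b[i] = rhs[i]
    A[N][N] = 1.0                                # v(1) - sum alpha_i v(eta_i) = 0
    for j, a in bnd:
        A[N][j] -= a
    for col in range(n):                         # Gaussian elimination, partial pivot
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p], b[col], b[p] = A[p], A[col], b[p], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for cc in range(col, n):
                A[r][cc] -= m * A[col][cc]
            b[r] -= m * b[col]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        v[r] = (b[r] - sum(A[r][cc] * v[cc] for cc in range(r + 1, n))) / A[r][r]
    return v

u = [x] * (N + 1)
for _ in range(60):                              # small Lipschitz constant: contraction
    du = [(u[1] - u[0]) / h] + \
         [(u[i + 1] - u[i - 1]) / (2 * h) for i in range(1, N)] + \
         [(u[N] - u[N - 1]) / h]
    new = solve([F(i * h, u[i], du[i]) for i in range(N + 1)])
    delta = max(abs(p - q) for p, q in zip(new, u))
    u = new

assert delta < 1e-9                              # converged to a fixed point
assert abs(u[0] - x) < 1e-9                      # u(0) = x
assert abs(u[N] - 0.4 * (u[5] + u[10])) < 1e-9   # m-point boundary condition
```

The fixed point satisfies the discrete analogue of u = Φ(u), i.e. the boundary value problem with right-hand side evaluated along u itself.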
The compactness in C E ([τ, 1]) of
are of importance and rely on some delicate arguments in the pioneering works [1, 2] involving the Pettis uniform integrability condition, the Grothendieck lemma characterizing the Mackey topology for bounded sets in L R ∞ [24], and other compactness results. Second order differential inclusions with three point boundary condition in the case where the second member is a Pettis-integrable convex compact valued multifunction were initiated in [2]. At this point, a second order differential inclusion with upper semicontinuous convex compact valued multifunction and three point boundary condition of the form
is available in [2, 27]. Taking the above facts into account, one may state the validity of Theorem 5.2 when F is a convex compact valued upper semicontinuous mapping. Since we do not focus on differential inclusions in this paper, we only mention a closure type lemma which may be of independent interest and solves this problem.
Theorem 5.3.
Let F: [0,1] × E × E → E be a convex compact valued upper semicontinuous mapping satisfying
for all (t,x,y) ∈ [0,1] × E × E, where Γ: [0,1] ⇉ E is a convex compact valued Pettis-integrable mapping. Let \((u_{n},v_{n}) \in \mathcal{X}\times \mathcal{X}\) be such that u n → u and v n → v in \(\mathcal{X}\) and that
for all n ∈ N and for all t ∈ [τ,1]. Then we have \(\ddot{v}(t) +\gamma v(t) \in F(t,u(t),\dot{u}(t))\) a.e.
Proof.
Let h ⊗ x ∗ be given, where \(h \in L_{\mathbf{R}^{+}}^{\infty }([\tau,1])\) and \(x^{{\ast}}\in \overline{B}_{E^{{\ast}}}\). From
we have
Integrating this inequality on [τ, 1] yields
Repeating the arguments of the proof of Theorem 5.2, we have that \(\dot{u}_{n} \rightarrow \dot{ u}\) uniformly and \(\ddot{u}_{n}\) \(\sigma (P_{E}^{1},L^{\infty }\otimes E^{{\ast}})\)-converges to \(\ddot{u}\) and that \(\dot{v}_{n} \rightarrow \dot{ v}\) uniformly and \(\ddot{v}_{n}\) \(\sigma (P_{E}^{1},L^{\infty }\otimes E^{{\ast}})\)-converges to \(\ddot{v}\). Then by passing to the limit when n → ∞ in (5.3.1) we get
because
for all (t, x, y) ∈ [0, 1] × E × E and the mapping F is upper semicontinuous. By (5.3.2) we deduce that
Whence we get
for every \(x^{{\ast}}\in \overline{B}_{E^{{\ast}}}\). By taking a dense sequence \((e_{k}^{{\ast}})_{k\in \mathbf{N}}\) in \(\overline{B}_{E^{{\ast}}}\) for the Mackey topology we get
for all k ∈ N so that
□
6 Open Problems: Differential Game Governed by (SODE), (ODE) and Sweeping Process with Strategies
To finish the paper we discuss some viscosity problems in a differential game governed by a class of (ODE) with strategies, in the line of Elliott [20], Elliott–Kalton [21] and Evans–Souganidis [22]. For simplicity we assume that E is a separable Hilbert space. Let us consider two compact subsets Y and Z in E. Set
Denote by Γ(τ) the set of all strategies α: Z(τ) → Y (τ) and by Δ(τ) the set of all strategies β: Y (τ) → Z(τ). Let us be given a Carathéodory integrable mapping F: [0, 1] × (Y × Z) → E such that F(t, y, z) ∈ K(t) for all (t, y, z) ∈ [0, 1] × Y × Z, where K: [0, 1] ⇉ E is a convex compact valued integrably bounded mapping, and a bounded continuous integrand J: [0, 1] × E × Y × Z → R, and let us define the upper–lower value function
where u τ, x, α(z), z is the trajectory W E 2, 1([τ, 1])-solution of second order differential game
with the integral representation formulas
and similarly u τ, x, y, β(y) is the trajectory W E 2, 1([τ, 1])-solution of second order differential game
We aim to generalize the viscosity problem in Theorem 4.2 to the case of strategies in the following
Proposition 6.1.
Let J: [0,1] × E × Y × Z → R be a bounded continuous integrand, τ,σ ∈ [0,1] such that τ ∈ [0,η1[ and τ + σ < 1 and let us consider the upper value function
where u τ,x,α(z),z is the trajectory \( W_{E}^{2,1}([ \tau, 1]) \)-solution of second order differential game
Then U J satisfies a sub-viscosity property: for any \(\varphi \in C^{1}([0,1] \times E)\) such that \(U_{J}-\varphi\) reaches a local maximum at \((t_{0},x_{0}) \in [0,\eta _{1}[\times E\) , we have
provided that U J satisfies the DPP
Proof.
Assume there is a \(\varphi \in C^{1}([0,1] \times E)\) such that \(U_{J}-\varphi\) reaches a local maximum at \((t_{0},x_{0}) \in [0,\eta _{1}[\times E\) for which
Hence there exists some η > 0 such that
Set
Then we have
Hence there exists some \(\overline{z} \in Z\) such that
Since the mapping
is continuous, there is \(\varepsilon > 0\) such that
for \(0 \leq t - t_{0} \leq \varepsilon\) and \(\vert \vert x - x_{0}\vert \vert \leq \varepsilon\). As \(\dot{u}_{t_{0},x_{0},\alpha (z),z}\) is estimated by
with c ∈ C R ([t 0, 1]) for all z ∈ Z(t 0) and for all α ∈ Γ(t 0) in view of the above integral representation formula, we can choose σ > 0 such that \(\vert \vert u_{t_{0},x_{0},\alpha (z),z}(t) - u_{t_{0},x_{0},\alpha (z),z}(t_{0})\vert \vert \leq \int _{t_{0}}^{t_{0}+\sigma }c(t)dt \leq \varepsilon\) for all t ∈ [t 0, t 0 +σ], for all z ∈ Z(t 0) and for all α ∈ Γ(t 0). Then the constant control \(\overline{z}(t) = \overline{z},\forall t \in [t_{0},1]\) belongs to Z(t 0) and \(\alpha (\overline{z})\) belongs to Y (t 0) for all α ∈ Γ(t 0), so that by integrating we have
for all α ∈ Γ(t 0). Thus
As U J satisfies the DPP property, we have
Hence, for every n ∈ N, there exists α n ∈ Γ(t 0) such that
Since \(U_{J}-\varphi\) has a local maximum at (t 0, x 0), for small enough σ
for any trajectory solution \(u_{t_{0},x_{0},\alpha (z),z}\) associated with control (α(z), z) (α ∈ Γ(t 0), z ∈ Z). From (6.1.4) and (6.1.5) we deduce
Thus we have
But
and
because \(u_{t_{0},x_{0},\alpha ^{n}(\overline{z}),\overline{z}}\) is the W 2, 1([τ, 1])-solution to (SODE)
From (6.1.6) and (6.1.7) we deduce
Using (6.1.3) and (6.1.8) it follows that \(0 < \frac{\sigma \eta } {2} < \frac{1} {n}\) for every n ∈ N. Passing to the limit when n goes to ∞ in the preceding inequality yields a contradiction. □
The viscosity property for the lower–upper value function is an open problem in the present context. Proposition 6.1 is a step forward in the problem under consideration. Compare with earlier results in the literature dealing with viscosity problems governed by (ODE) in R n involving differential games and strategies, e.g. [4, 20, 22], evolution inclusions involving Young control measures, e.g. [7, 8, 13–17], and relaxation and Bolza problems governed by (SODE), e.g. [3, 9–11]. In order to illustrate the comparison, let us come back to a differential game governed by an ordinary differential equation (ODE). Let \(\mathcal{M}_{+}^{1}(Y )\) and \(\mathcal{M}_{+}^{1}(Z)\) be the sets of all probability Radon measures on the compact metric spaces Y and Z, respectively, endowed with the narrow topology, so that \(\mathcal{M}_{+}^{1}(Y )\) and \(\mathcal{M}_{+}^{1}(Z)\) are compact metrizable. Consider the space of Young measures (alias relaxed controls)
and as above denote by Γ(τ) the set of all strategies \(\alpha: \mathcal{Z}(\tau ) \rightarrow \mathcal{Y}(\tau )\) and Δ(τ) the set of all strategies \(\beta: \mathcal{Y}(\tau ) \rightarrow \mathcal{Z}(\tau )\). Let J: [0, 1] × (E × Y × Z) → R be a bounded Carathéodory integrand and let F: [0, 1] × (E × Y × Z) → E be a Carathéodory mapping satisfying F(t, x, y, z) ∈ K(t) for all (t, x, y, z) ∈ [0, 1] × E × Y × Z where K: [0, 1] ⇉ E is a convex compact valued integrably bounded mapping and a Lipschitz type condition \(\vert \vert F(t,x_{1},y,z) - F(t,x_{2},y,z)\vert \vert \leq \lambda \vert \vert x_{1} - x_{2}\vert \vert \) for all (t, x 1, y, z), (t, x 2, y, z) in [0, 1] × E × Y × Z. Then one may consider the lower value function
where u τ, x, μ, β(μ) is the absolutely continuous solution to (ODE)
and the upper value function
where u τ, x, α(ν), ν is the absolutely continuous solution to (ODE)
and state the viscosity properties for these functions. In the sequel, we will make some additional assumptions on J and F, namely, J and F are continuous and the family (J(. , . , y, z))(y, z) ∈ Y ×Z is equicontinuous and the family (F(. , . , y, z))(y, z) ∈ Y ×Z is equicontinuous.
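The gain from relaxed controls can be seen on a classical chattering example (a toy scalar problem under illustrative assumptions, unrelated to the specific game above): minimize \(\int _{0}^{1}u(t)^{2} + (1 - f(t)^{2})^{2}\,dt\) with u̇ = f, u(0) = 0 and f(t) ∈ [−1,1]. No ordinary control attains the value 0, but fast ±1 oscillations, whose limit is the constant Young measure ½(δ₋₁ + δ₊₁), drive the cost to 0.

```python
def cost(n, steps=50):
    """Cost of the bang-bang control with n full oscillations between +1 and -1,
    computed with an explicit Euler scheme (left Riemann sum for the integral)."""
    h = 1.0 / (2 * n * steps)
    u, total, sign = 0.0, 0.0, 1.0
    for _ in range(2 * n):
        for _ in range(steps):
            total += h * (u * u + (1.0 - sign * sign) ** 2)
            u += h * sign
        sign = -sign
    return total

# The constant control f = 0 keeps u = 0 but pays the full penalty (cost 1),
# while chattering controls approach the relaxed value 0 like O(1/n^2).
assert cost(1) > cost(10) > cost(100)
assert cost(100) < 1e-3
```

The sequence of oscillating ordinary controls narrowly converges, as Young measures, to ½(δ₋₁ + δ₊₁), which is exactly the kind of limit object the spaces \(\mathcal{Y}(\tau )\), \(\mathcal{Z}(\tau )\) above are designed to capture.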
Proposition 6.2.
Let J: [0,1] × E × Y × Z → R be a bounded continuous integrand, and let us consider the upper value function
Let us consider the Hamiltonian
Then U J is a viscosity solution to the HJB equation \(\frac{\partial U} {\partial t} + H^{+}(t,x,\nabla U) = 0\) , that is, for any \(\varphi \in C^{1}([0,1] \times E)\) for which \(U_{J}-\varphi\) reaches a local maximum at \( (t_{0},\,x_{0}) \in [0,\,1] \times E \) we have
and for any \(\varphi \in C^{1}([0,1] \times E)\) for which \(U_{J}-\varphi\) reaches a local minimum at \( (t_{0},\,x_{0}) \in [0,\,1] \times E \), we have
provided that U J satisfies the DPP
Proof.
See Proposition 6.1 and ([14], Theorem 8.3.12). We will sketch the proof. Assume there is a \(\varphi \in C^{1}([0,1] \times E)\) such that \(U_{J}-\varphi\) reaches a local maximum at (t 0, x 0) ∈ [0, 1] × E for which
Hence there exists some η > 0 such that
Set
Then we have
Hence there exists some \(\overline{\nu } \in \mathcal{M}_{+}^{1}(Z)\) such that
Since the mapping
is continuous there is \(\varepsilon > 0\) such that
for \(0 \leq t - t_{0} \leq \varepsilon\) and \(\vert \vert x - x_{0}\vert \vert \leq \varepsilon\). As \(\dot{u}_{t_{0},x_{0},\alpha (\nu ),\nu }\) is estimated by \(\vert \vert \dot{u}_{t_{0},x_{0},\alpha (\nu ),\nu }(t)\vert \vert \leq \vert K(t)\vert \) with \(\vert K\vert \in L_{\mathbf{R}}^{1}([t_{0},1])\) for all \(\nu \in \mathcal{Z}(t_{0})\) and for all α ∈ Γ(t 0), we can choose σ > 0 such that \(\vert \vert u_{t_{0},x_{0},\alpha (\nu ),\nu }(t) - u_{t_{0},x_{0},\alpha (\nu ),\nu }(t_{0})\vert \vert \leq \int _{t_{0}}^{t_{0}+\sigma }\vert K(t)\vert dt \leq \varepsilon\) for all t ∈ [t 0, t 0 +σ], for all \(\nu \in \mathcal{Z}(t_{0})\) and for all α ∈ Γ(t 0). Then the constant control \(\overline{\nu }_{t} = \overline{\nu },\forall t \in [t_{0},1]\) belongs to \(\mathcal{Z}(t_{0})\) and \(\alpha (\overline{\nu })\) belongs to \(\mathcal{Y}(t_{0})\) for all α ∈ Γ(t 0), so that by integrating we have
for all α ∈ Γ(t 0). Thus
As U J satisfies the DPP, we have
Hence, for every n ∈ N, there exists α n ∈ Γ(t 0) such that
Since \(U_{J}-\varphi\) has a local maximum at (t 0, x 0), for small enough σ
for any trajectory solution \(u_{t_{0},x_{0},\alpha (\nu ),\nu }\) associated with control (α(ν), ν) \((\alpha \in \varGamma (t_{0}),\nu \in \mathcal{Z}(t_{0}))\). From (6.2.4) and (6.2.5) we deduce
Thus we have
But
and
so that by combining with (6.2.7)
From (6.2.6) and (6.2.8) we deduce
Using (6.2.3) and (6.2.9) it follows that \(0 < \frac{\sigma \eta } {2} < \frac{1} {n}\) for every n ∈ N. Passing to the limit when n goes to ∞ in the preceding inequality yields a contradiction.
Next assume that \(U_{J}-\varphi\) has a local minimum at (t 0, x 0) ∈ [0, 1] × E. We must prove that
and so will assume the contrary that
Arguing as in ([14], Lemma 8.3.11(b)) shows that, for all sufficiently small σ > 0, there exists some α ∈ Γ(t 0) such that
for all \(\nu \in \mathcal{Z}(t_{0})\). According to the DPP property we have
Hence, for every n ∈ N, there exists \(\nu ^{n} \in \mathcal{Z}(t_{0})\) such that
Since \(U_{J}-\varphi\) has a local minimum at (t 0, x 0), for small enough σ
for any trajectory solution \(u_{t_{0},x_{0},\alpha (\nu ),\nu }\) associated with control (α(ν), ν) \((\alpha \in \varGamma (t_{0}),\nu \in \mathcal{Z}(t_{0}))\). From (6.2.11) and (6.2.12) we deduce
Thus we have
But
and
so that from (6.2.14)
From (6.2.13) and (6.2.15) we deduce
Using (6.2.10) and (6.2.16) it follows that \(\frac{1} {n} \geq \frac{\sigma \eta } {2} > 0\) for every n ∈ N. Passing to the limit when n goes to ∞ in the preceding inequality yields a contradiction. □
Taking into account the sweeping process introduced by J.J. Moreau [26] and its modelling in mathematical economics [25], we finish the paper with an application to the DPP and viscosity property for the value function associated with a sweeping process. Compare with Theorem 3.5 in [17], dealing with a sweeping process involving Young measure controls, and with Theorem 4.2, dealing with (SODE). Here E is a separable Hilbert space.
Proposition 6.3.
Let \( C\,:\,[0,\,T] \rightarrow ck(E) \) be a convex compact valued L-Lipschitzean mapping:
Let Z be a convex compact subset in E and let \(\mathcal{S}_{Z}^{1}\) be the set of all integrable mappings \( f\, :\, [0,\,T] \rightarrow Z \). Assume that \( J\, :\, [0,\,T] \times E \times E \rightarrow \mathbf{R} \) is bounded and continuous and that J(t,x,.) is convex for every \( (t,\,x) \in [0,\,T] \times E \). Let us consider the value function
where u τ,x,f is the trajectory solution on [τ,T] associated with the control \(f \in \mathcal{S}_{Z}^{1}\) starting from x ∈ E at time τ to the sweeping process \((\mathcal{P}\mathcal{S}\mathcal{W})(C;f;x)\)
and the Hamiltonian
where \( M := L + 2|Z| \), \( (t, x, \rho) \in [0, T] \times E \times E \), and \( \partial d_{C(t)}(x) \) denotes the subdifferential of the distance function \(x\mapsto d_{C(t)}(x)\) . Then V J has the DPP property
and is a viscosity subsolution to the HJB equation
that is, for any \(\varphi \in C^{1}([0,T] \times E)\) for which \(V _{J}-\varphi\) reaches a local maximum at \( (t_{0},\,x_{0}) \in [0, T] \times E \), we have
Proof.
We first prove that V J has the DPP property by applying the continuous dependence of the solution on the state and the control (see Lemma 6.1 below) and the lower semicontinuity of the integral functional ([14], Theorem 8.1.6). We omit the proof of Lemma 6.1 because it is an adaptation of the proof of Lemma 4.1 in [17].
Lemma 6.1.
Let \(u_{\tau,x^{n},f^{n}}\) be the trajectory solution on [τ,T] associated with the control \(f^{n} \in \mathcal{S}_{Z}^{1}\) starting from x n ∈ E at time τ to the sweeping process \((\mathcal{P}\mathcal{S}\mathcal{W})(C;f^{n};x^{n})\)
-
(a)
If (x n ) converges to x ∞ and f n converges \(\sigma (L_{E}^{1},L_{E}^{\infty })\) to f ∞ , then \(u_{\tau,x^{n},f^{n}}\) converges uniformly to \(u_{\tau,x^{\infty },f^{\infty }}\) , which is the Lipschitz solution of the sweeping process \((\mathcal{P}\mathcal{S}\mathcal{W})(C;f^{\infty };x^{\infty })\)
$$\displaystyle{\left \{\begin{array}{ll} -\dot{ u}_{\tau,x^{\infty },f^{\infty }}(t) - f^{\infty }(t) \in N_{C(t)}(u_{\tau,x^{\infty },f^{\infty }}(t)) \\ u_{\tau,x^{\infty },f^{\infty }}(\tau ) = x^{\infty }\in C(\tau )\end{array} \right.}$$ -
(b)
Let \( J \,: \,[0,\,T] \times (E \times E) \rightarrow \,] -\infty,\,+\infty] \) be a normal integrand such that J(t,x,.) is convex on E for all (t,x) ∈ [0,T] × E and that
$$\displaystyle{J(t,u_{\tau,x^{n},f^{n}}(t),f^{n}(t)) \geq \beta _{ n}(t)}$$for all n ∈ N and for all t ∈ [0,T], for some uniformly integrable sequence \((\beta _{n})_{n\in \mathbf{N}}\) in \( L_{R}^{1}([0,\, T]) \); then we have
$$\displaystyle{\liminf _{n\rightarrow \infty }\int _{\tau }^{T}J(t,u_{\tau,x^{n},f^{n}}(t),f^{n}(t))\,dt \geq \int _{\tau }^{T}J(t,u_{\tau,x^{\infty },f^{\infty }}(t),f^{\infty }(t))\,dt.}$$
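Before resuming the proof, the sweeping dynamics of Lemma 6.1 can be simulated by Moreau's catching-up algorithm [26]: discretize \(-\dot{u}(t) - f(t) \in N_{C(t)}(u(t))\) by projecting, at each step, the freely advanced point onto the current set. The scalar sketch below, with the moving set C(t) = [t − 1/2, t + 1/2] (1-Lipschitz) and the constant control f ≡ 1, is an illustrative assumption, not data from the proposition.

```python
# Moreau's catching-up scheme for  -u'(t) - f(t) in N_{C(t)}(u(t)),  u(0) = x,
# in the scalar case E = R:  u_{k+1} = proj_{C(t_{k+1})}(u_k - h f(t_k)).
T, N = 1.0, 1000
h = T / N
proj = lambda y, lo, hi: min(max(y, lo), hi)   # projection onto [lo, hi]
f = lambda t: 1.0                              # illustrative constant control

u = 0.0                                        # x = 0 lies in C(0) = [-0.5, 0.5]
for k in range(N):
    t_next = (k + 1) * h
    u = proj(u - h * f(k * h), t_next - 0.5, t_next + 0.5)

# Free motion u(t) = -t hits the left boundary t - 0.5 at t = 0.25 and is
# then swept along it, so u(1) = 0.5.
assert abs(u - 0.5) < 2 * h
```

Until the constraint activates, the scheme reproduces u̇ = −f; afterwards the normal cone term keeps u(t) on the boundary of the moving set, which is exactly the continuity-of-the-trajectory behaviour that Lemma 6.1(a) propagates through limits of controls and initial states.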
Let us focus on the expression of \(V _{J}(\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ))\)
where \(v_{\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ),g}\) denotes the trajectory solution on [τ +σ, T] associated with the control g ∈ S Z 1 starting from u τ, x, f (τ +σ) at time τ +σ.
Main Fact: \(f \rightarrow V _{J}(\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ))\) is lower semicontinuous on S Z 1 (endowed with the \(\sigma (L_{E}^{1},L_{E}^{\infty })\)-topology). Let \((f_{n},g_{n}) \in \mathcal{S}_{Z}^{1} \times \mathcal{S}_{Z}^{1}\) such that \(f_{n} \rightarrow f \in \mathcal{S}_{Z}^{1}\) and \(g_{n} \rightarrow g \in \mathcal{S}_{Z}^{1}\). By Lemma 6.1, \(u_{\tau,x,f_{n}}\) converges uniformly to u τ, x, f and \(v_{\tau +\sigma,u_{\tau,x,f_{ n}}(\tau +\sigma ),g_{n}}\) converges uniformly to \(v_{\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ),g}\) so that by invoking the lower semicontinuity of integral functional ([14], Theorem 8.1.6) we get
proving that the mapping \(f\mapsto \int _{\tau }^{\tau +\sigma }J(t,u_{\tau,x,f}(t),f(t))\,dt\) is lower semicontinuous on \(\mathcal{S}_{Z}^{1}\) and that the mapping \((f,g)\mapsto \int _{\tau +\sigma }^{T}J(t,v_{\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ),g}(t),g(t))\,dt\) is lower semicontinuous on \(\mathcal{S}_{Z}^{1} \times \mathcal{S}_{Z}^{1}\). It follows that the mapping \(f\mapsto V _{J}(\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ))\) is lower semicontinuous on \(\mathcal{S}_{Z}^{1}\), and so is the mapping \(f\mapsto \int _{\tau }^{\tau +\sigma }J(t,u_{\tau,x,f}(t),f(t))\,dt + V _{J}(\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ))\). Now the DPP property for \(V_{J}\) follows along the same lines as the proof of Theorem 4.1. This fact allows us to obtain the required viscosity property. Let us recall the following
Lemma 6.2.
Let \((t_{0},x_{0}) \in [0,T] \times E\) and let Z be a convex compact subset of E. Let \(\Lambda: [0,T] \times E \times Z \rightarrow \mathbf{R}\) be an upper semicontinuous function such that, for every bounded subset B of E, the restriction of \(\Lambda \) to [0,T] × B × Z is bounded. If
for some η > 0, then there exists σ > 0 such that
where \(u_{t_{0},x_{0},f}\) is the trajectory solution associated with the control \( f \in S_{Z}^{1} \) starting from x 0 at time t 0 to
Assume, by contradiction, that there exist \(\varphi \in C^{1}([0,T] \times E)\) and a point \((t_{0},x_{0}) \in [0,T] \times E\) for which
Applying Lemma 6.2 by taking
provides some σ > 0 such that
where \(u_{t_{0},x_{0},f}\) is the trajectory solution associated with the control \(f \in \mathcal{S}_{Z}^{1}\) starting from x 0 at time t 0 to the sweeping process \((\mathcal{P}\mathcal{S}\mathcal{W})(C;f;x_{0})\)
Applying the dynamic programming principle gives
Since \(V _{J}-\varphi\) has a local maximum at (t 0, x 0), for small enough σ
for all \(f \in \mathcal{S}_{Z}^{1}\). For each n ∈ N, there exists \(f^{n} \in \mathcal{S}_{Z}^{1}\) such that
From (6.3.3) and (6.3.4) we deduce that
Therefore we have
As \(\varphi \in C^{1}([0,T] \times E)\) we have
Since \(u_{t_{0},x_{0},f^{n}}\) is the trajectory solution starting from x 0 at time t 0 to the sweeping process \((\mathcal{P}\mathcal{S}\mathcal{W})(C;f^{n};x_{0})\)
by the classical properties of the normal cone to a convex set and the estimate \(\vert \vert \dot{u}_{t_{0},x_{0},f^{n}}(t) - f^{n}(t)\vert \vert \leq L + 2\vert Z\vert = M\), we get
so that (6.3.6) yields
Inserting the estimate (6.3.7) into (6.3.5) we get
so that (6.3.1) and (6.3.8) give \(0 < \frac{\sigma \eta } {2} < \frac{1} {n}\) for all n ∈ N. Letting n go to ∞ in this inequality gives a contradiction. □
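The dynamic programming principle used throughout this proof can be checked exactly in a finite, discrete-time toy model. The sketch below is an illustrative analogue only: the dynamics, the integrand and the control set are ad hoc choices, not the data of the chapter; brute-force enumeration stands in for the infimum over \(\mathcal{S}_{Z}^{1}\).

```python
from itertools import product

# Discrete-time analogue of the value function:
#   V(tau, x) = inf over controls of  sum_{t=tau}^{T-1} J(t, x_t, f_t),
# with toy dynamics x_{t+1} = x_t + f_t standing in for the trajectory
# u_{tau,x,f}; the finite set Z plays the role of the control set.
Z = (-1, 0, 1)
T = 4

def J(t, x, f):
    # running cost, convex in the control variable f
    return (x - 1) ** 2 + 0.1 * f * f

def step(x, f):
    return x + f

def V(tau, x):
    """Value at (tau, x), by brute force over all control sequences."""
    best = float("inf")
    for controls in product(Z, repeat=T - tau):
        cost, y = 0.0, x
        for t, f in zip(range(tau, T), controls):
            cost += J(t, y, f)
            y = step(y, f)
        best = min(best, cost)
    return best

# Dynamic programming principle with intermediate time tau + sigma:
#   V(tau, x) = min over the first sigma controls of
#     [ cost on [tau, tau+sigma) + V(tau+sigma, state at tau+sigma) ]
tau, x0, sigma = 0, 0, 2
dpp_rhs = min(
    J(tau, x0, f0)
    + J(tau + 1, step(x0, f0), f1)
    + V(tau + sigma, step(step(x0, f0), f1))
    for f0, f1 in product(Z, repeat=sigma)
)
```

Because `V` is itself defined as an infimum over whole control sequences, splitting the horizon at `tau + sigma` must reproduce the same value, which is precisely the DPP identity.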
The viscosity problem governed by the sweeping process with strategies and Young measures,
where the integrand J, the upper value function U J , and the data Y, Z and F are defined as in Proposition 6.2, is an open problem. Further related results, dealing with continuous and bounded variation (BVC) solutions of sweeping processes governed by continuous mappings with nonempty interior closed convex values, are available in [17, 18].
Notes
- 1.
It is necessary to write out completely the expression of the trajectory \(v_{\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ),g}\), which depends on \((f,g) \in S_{Z}^{1} \times S_{Z}^{1}\), in order to obtain the lower semicontinuous dependence of \(V _{J}(\tau +\sigma,u_{\tau,x,f}(\tau +\sigma ))\) on f ∈ S Z 1.
References
Amrani, A., Castaing, C., Valadier, M.: Convergence in Pettis norm under extreme points condition. Vietnam J. Math. 26(4), 323–335 (1998)
Azzam, D.L., Castaing, C., Thibault, L.: Three point boundary value problems for second order differential inclusions in Banach spaces. Control Cybern. 31, 659–693 (2002). Well-posedness in optimization and related topics (Warsaw, 2001)
Azzam, D.L., Makhlouf, A., Thibault, L.: Existence and relaxation theorem for a second order differential inclusion. Numer. Funct. Anal. Optim. 31, 1103–1119 (2010)
Bardi, M., Capuzzo Dolcetta, I.: Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations. Birkhäuser, Boston (1997)
Castaing, C.: Topologie de la convergence uniforme sur les parties uniformément intégrables de L E 1 et théorème de compacité faible dans certains espaces du type Köthe-Orlicz. Sém. Anal. Convexe 10, 5.1–5.27 (1980)
Castaing, C.: Weak compactness and convergences in Bochner and Pettis integration. Vietnam J. Math. 24(3), 241–286 (1996)
Castaing, C., Marcellin, S.: Evolution inclusions with pln functions and application to viscosity and controls. J. Nonlinear Convex Anal. 8(2), 227–255 (2007)
Castaing, C., Raynaud de Fitte, P.: On the fiber product of Young measures with applications to a control problem with measures. Adv. Math. Econ. 6, 1–38 (2004)
Castaing, C., Truong, L.X.: Second order differential inclusions with m-points boundary condition. J. Nonlinear Convex Anal. 12(2), 199–224 (2011)
Castaing, C., Truong, L.X.: Some topological properties of solutions set in a second order inclusion with m-point boundary condition. Set Valued Var. Anal. 20, 249–277 (2012)
Castaing, C., Truong, L.X.: Bolza, relaxation and viscosity problems governed by a second order differential equation. J. Nonlinear Convex Anal. 14(2), 451–482 (2013)
Castaing, C., Valadier, M.: Convex analysis and measurable multifunctions. In: Lecture Notes in Mathematics, vol. 580. Springer, Berlin (1977)
Castaing, C., Jofre, A., Salvadori, A.: Control problems governed by functional evolution inclusions with Young measures. J. Nonlinear Convex Anal. 5(1), 131–152 (2004)
Castaing, C., Raynaud de Fitte, P., Valadier, M.: Young Measures on Topological Spaces. With Applications in Control Theory and Probability Theory. Kluwer Academic, Dordrecht (2004)
Castaing, C., Jofre, A., Syam, A.: Some limit results for integrands and Hamiltonians with application to viscosity. J. Nonlinear Convex Anal. 6(3), 465–485 (2005)
Castaing, C., Raynaud de Fitte, P., Salvadori, A.: Some variational convergence results with application to evolution inclusions. Adv. Math. Econ. 8, 33–73 (2006)
Castaing, C., Monteiro Marques, M.D.P., Raynaud de Fitte, P.: On an optimal control problem governed by the sweeping process (2013). Preprint
Castaing, C., Monteiro Marques, M.D.P., Raynaud de Fitte, P.: On a Skorohod problem (2013). Preprint
El Amri, K., Hess, C.: On the Pettis integral of closed valued multifunction. Set Valued Anal. 8, 329–360 (2000)
Elliott, R.J.: Viscosity Solutions and Optimal Control. Pitman, London (1977)
Elliott, R.J., Kalton, N.J.: Cauchy problems for certain Isaacs-Bellman equations and games of survival. Trans. Am. Math. Soc. 198, 45–72 (1974)
Evans, L.C., Souganidis, P.E.: Differential games and representation formulas for solutions of Hamilton-Jacobi-Isaacs equations. Indiana Univ. Math. J. 33, 773–797 (1984)
Godet-Thobie, C., Satco, B.: Decomposability and uniform integrability in Pettis integration. Quaestiones Math. 29, 39–58 (2006)
Grothendieck, A.: Espaces vectoriels topologiques. Publicação da Sociedade de Matemática de São Paulo (1964)
Henry, C.: An existence theorem for a class of differential equation with multivalued right-hand side. J. Math. Anal. Appl. 41, 179–186 (1973)
Moreau, J.J.: Evolution problem associated with a moving set in Hilbert space. J. Differ. Equ. 26, 347–374 (1977)
Satco, B.: Second order three boundary value problem in Banach spaces via Henstock and Henstock-Kurzweil-Pettis integral. J. Math. Anal. Appl. 332, 912–933 (2007)
Valadier, M.: Some bang-bang theorems. In: Multifunctions and Integrands: Stochastic Analysis, Approximation and Optimization, Proceedings, Catania, 1983. Lecture Notes in Mathematics, vol. 1091, pp. 225–234. Springer, Berlin (1984)
© 2014 Springer Japan
Castaing, C., Godet-Thobie, C., Truong, L.X., Satco, B. (2014). Optimal Control Problems Governed by a Second Order Ordinary Differential Equation with m-Point Boundary Condition. In: Kusuoka, S., Maruyama, T. (eds) Advances in Mathematical Economics Volume 18. Advances in Mathematical Economics, vol 18. Springer, Tokyo. https://doi.org/10.1007/978-4-431-54834-8_1