1 Introduction

We shall consider state-constrained two-player differential games in which the dynamics are decoupled in the following sense: a first system is exclusively controlled by the first player, acting through measurable functions u,

$$\begin{aligned} \left\{ \begin{array}{lll} \dot{y}(t) =f_1(t,y(t),u(t)), \quad \mathrm{for \ a.e.\ } t\in [t_0,T] \\ u(t)\in U \quad \text{ for } \text{ a.e. } \;\;t\in [t_0,T]\\ y(t_0) =y_0 \in A_1 \\ y(t)\in A_1 \quad \text{ for } \text{ all } \;\; t\in [t_0,T] \ , \end{array} \right. \end{aligned}$$
(1)

whereas the second player intervenes, through measurable functions v, in the dynamics of a second control system

$$\begin{aligned} \left\{ \begin{array}{lll} \dot{z}(t) =f_2(t,z(t),v(t)), \quad \mathrm{for \ a.e.\ } t\in [t_0,T] \\ v(t)\in V \quad \text{ for } \text{ a.e. } \;\; t\in [t_0,T]\\ z(t_0) = z_0 \in A_2 \\ z(t)\in A_2 \quad \text{ for } \text{ all } \;\; t\in [t_0,T] \ . \end{array} \right. \end{aligned}$$
(2)

Here, \(T>0\) is a fixed final time, \(f_1(.,.,.): \mathbb {R}\times \mathbb {R}^{n_1} \times \mathbb {R}^{m_1} \rightarrow \mathbb {R}^{n_1}\) and \(f_2(.,.,.): \mathbb {R}\times \mathbb {R}^{n_2} \times \mathbb {R}^{m_2} \rightarrow \mathbb {R}^{n_2}\) are given functions, and \(U \subset \mathbb {R}^{m_1}\) and \(V\subset \mathbb {R}^{m_2}\) are given sets.

Associated with any initial data \((t_0,x_0)=(t_0,y_0,z_0)\in [0,T]\times A_1 \times A_2\), \(A_1\) (resp. \(A_2\)) being a nonempty closed subset of \(\mathbb {R}^{n_1}\) (resp. \(\mathbb {R}^{n_2}\)), and with any couple of controls \((u(\cdot ), v(\cdot ))\) we shall consider the following cost functional:

$$\begin{aligned} J(t_0,x_0;u(\cdot ),v(\cdot )) := \int _{t_0}^T [L_1(t,x(t),u(t)) + L_2\big (t,x(t),v(t))] \ dt + g\big (x(T)\big ), \end{aligned}$$
(3)

in which \(x(t)=x[t_0,x_0;u(\cdot ),v(\cdot )](t)\) (\(=(y[t_0,y_0;u(\cdot )](t),z[t_0,z_0;v(\cdot )](t)) \)) denotes the solution of systems (1) and (2) associated with the controls \((u,v)\). Set \(n:=n_1+n_2\). The functions \(L_1:\mathbb {R}\times \mathbb {R}^n\times \mathbb {R}^{m_1}\rightarrow \mathbb {R}\) and \(L_2:\mathbb {R}\times \mathbb {R}^n\times \mathbb {R}^{m_2}\rightarrow \mathbb {R}\) are called Lagrangians (or running costs) and \(g:\mathbb {R}^n\rightarrow \mathbb {R}\) is the final cost. We shall consider a differential game in which the first player wants to minimize the functional J(.), while the second player’s goal is to maximize J(.).
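To make the objects above concrete, here is a minimal numerical sketch of how a pair of open-loop controls generates a trajectory of the decoupled systems (1)-(2) and the cost (3). All data are toy assumptions for illustration only (not taken from the paper): \(n_1=n_2=1\), \(U=V=[-1,1]\), \(f_1=u\), \(f_2=v\), \(L_1=y^2\), \(L_2=-z^2\), \(g(y,z)=y-z\).

```python
# Toy data, assumed purely for illustration (not the paper's examples):
# n1 = n2 = 1, U = V = [-1, 1], f1(t, y, u) = u, f2(t, z, v) = v,
# L1(t, x, u) = y^2, L2(t, x, v) = -z^2, g(y, z) = y - z.
def simulate_and_cost(t0, y0, z0, u_fn, v_fn, T=1.0, steps=1000):
    """Forward-Euler integration of the decoupled systems (1)-(2) and
    evaluation of the Bolza cost (3) for given open-loop controls."""
    dt = (T - t0) / steps
    y, z, cost = y0, z0, 0.0
    for k in range(steps):
        t = t0 + k * dt
        u, v = u_fn(t), v_fn(t)
        cost += (y**2 - z**2) * dt   # running costs L1 + L2
        y += u * dt                  # dy/dt = f1(t, y, u) = u
        z += v * dt                  # dz/dt = f2(t, z, v) = v
    return cost + (y - z)            # plus the final cost g(x(T))

# With u = -1 and v = 0 the first state decays linearly from 1,
# so J approximates the integral of (1 - t)^2 over [0, 1], i.e. 1/3.
J = simulate_and_cost(0.0, 1.0, 0.0, lambda t: -1.0, lambda t: 0.0)
```

The scheme is only first-order accurate; refining `steps` tightens the approximation of the integral in (3).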

For each starting point \((y_0,z_0)\in A_1\times A_2\) and subinterval \([t_0,T_0]\subset [0,T]\), we define the set of admissible controls for the two players as follows:

$$\begin{aligned} \mathcal{U} ([t_0,T_0],y_0)&:=\left\{ u(\cdot ):[t_0, T_0] \rightarrow U\; \textrm{measurable} \; | \; y[t_0,y_0;u(\cdot )](t) \in A_1 \; \forall t \in [t_0, T_0] \right\} ; \\ \mathcal{V} ([t_0,T_0],z_0)&:=\left\{ v(\cdot ):[t_0, T_0]\rightarrow V\; \textrm{measurable} \; | \; z[t_0,z_0;v(\cdot )](t) \in A_2 \; \forall t \in [t_0, T_0] \right\} . \end{aligned}$$

When \(T_0=T\), which is often the case under consideration, we shall use the simplified notation:

$$\begin{aligned} \mathcal{U} (t_0,y_0)&:=\left\{ u(\cdot ):[t_0, T] \rightarrow U\; \textrm{measurable} \; | \; y[t_0,y_0;u(\cdot )](t) \in A_1 \; \forall t \in [t_0, T] \right\} ; \\ \mathcal{V} (t_0,z_0)&:=\left\{ v(\cdot ):[t_0, T]\rightarrow V\; \textrm{measurable} \; | \; z[t_0,z_0;v(\cdot )](t) \in A_2 \; \forall t \in [t_0, T] \right\} . \end{aligned}$$

Our standing assumptions guarantee that, for all \(x_0=(y_0,z_0)\in A_1\times A_2\) and \(t_0 \in [0,T]\), we have

$$\begin{aligned} \mathcal{U} (t_0,y_0) \ne \emptyset \; \textrm{and} \; \mathcal{V} (t_0,z_0) \ne \emptyset . \end{aligned}$$

As is customary in the theory of differential games, we consider the upper value function \(V^\sharp \) and the lower value function \(V^\flat \). In the definition of \(V^\sharp \) and \(V^\flat \) we shall make use of nonanticipative strategies, in the Varaiya–Roxin–Elliott–Kalton sense. To recall this notion, take, for instance, initial data \((t_1,y_0) \in [0,T]\times A_1\) and \((t_2,z_0) \in [0,T]\times A_2\). We say that a mapping \(\alpha :\mathcal{V}(t_2,z_0)\rightarrow \mathcal{U}(t_1,y_0)\) is a nonanticipative strategy for the first player if, for any \(\tau \in [0,T-t_2]\) and any controls \(v_1(\cdot )\) and \(v_2(\cdot )\) belonging to \(\mathcal{V}(t_2,z_0)\) which coincide a.e. on \([t_2,t_2+\tau ]\), the controls \(\alpha (v_1(\cdot ))\) and \(\alpha (v_2(\cdot ))\) coincide a.e. on \([t_1,(t_1+\tau )\wedge T]\). Nonanticipative strategies \(\beta \) for the second player are defined analogously. For \(t_0\in [0,T]\) and \(x_0=(y_0,z_0)\in A_1\times A_2\), we denote by \(S_U(t_0,x_0)\) and \(S_V(t_0,x_0)\) the sets of nonanticipative strategies for the first and the second player, respectively.
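The nonanticipativity requirement becomes very concrete on a time grid: a discretized strategy is a map between control sequences whose first k outputs depend only on the first k inputs. The following sampling-based check is an illustration of the definition (an assumption-laden sketch, not part of the paper's framework), contrasting a causal strategy with an anticipative one.

```python
import numpy as np

def is_nonanticipative(alpha, n_steps=20, trials=200, seed=0):
    """Empirically test the Varaiya-Roxin-Elliott-Kalton property on a
    time grid: if v1 and v2 agree on the first k steps, alpha(v1) and
    alpha(v2) must agree on the first k steps.  (A sampling check over
    random control pairs, not a proof.)"""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        v1 = rng.uniform(-1, 1, n_steps)
        k = int(rng.integers(1, n_steps))
        v2 = v1.copy()
        v2[k:] = rng.uniform(-1, 1, n_steps - k)   # v1 == v2 on steps 0..k-1
        a1, a2 = alpha(v1), alpha(v2)
        if not np.allclose(a1[:k], a2[:k]):
            return False
    return True

# Causal strategy: the output at step j uses only v[0..j] (a running mean).
causal = lambda v: -np.cumsum(v) / np.arange(1, len(v) + 1)
# Anticipative strategy: every output depends on the final value of v.
cheating = lambda v: np.full(len(v), v[-1])
```

The causal map passes the check, while the anticipative one is rejected as soon as the two inputs diverge in their resampled tails.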

Now, the lower value \(V^\flat \) is defined by:

$$\begin{aligned} V^\flat (t_0,x_0):=\inf _{\alpha \in S_U(t_0,x_0)}\sup _{v(\cdot )\in \mathcal{V} (t_0,z_0)} J(t_0,x_0;\alpha (v(\cdot )),v(\cdot ))\;, \end{aligned}$$
(4)

where the functional J(.) is defined in (3). The upper value function is defined as follows:

$$\begin{aligned} V^\sharp (t_0,x_0):= \sup _{\beta \in S_V(t_0,x_0)}\inf _{u(\cdot )\in \mathcal{U} (t_0,y_0)} J(t_0,x_0;u(\cdot ),\beta (u(\cdot ))) \ . \end{aligned}$$
(5)
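For intuition, consider a one-shot (matrix) analogue of (4)-(5) — a drastic simplification of the continuous-time setting, offered only as an illustration. Since the player using a strategy effectively responds to the opponent's control, the lower value reduces to a sup-inf and the upper value to an inf-sup of the cost matrix, and the two need not coincide:

```python
import numpy as np

# One-shot matrix analogue of the value definitions (an assumption for
# illustration, not the paper's continuous-time game): the player who
# uses a (nonanticipative) strategy sees the opponent's choice first.
def lower_upper_values(J):
    """J[i, j] = cost when player 1 plays u_i and player 2 plays v_j.
    Lower value: the minimizer responds to v  ->  sup_v inf_u J.
    Upper value: the maximizer responds to u  ->  inf_u sup_v J."""
    lower = J.min(axis=0).max()   # sup over v of inf over u
    upper = J.max(axis=1).min()   # inf over u of sup over v
    return lower, upper

# A matching-pennies-type cost: no value, lower < upper.
J = np.array([[0.0, 1.0],
              [1.0, 0.0]])
```

Here the lower value is 0 and the upper value is 1, illustrating that extra conditions (such as the Isaacs condition discussed below) are needed for a game to admit a value.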

In the absence of state constraints, differential games have been widely investigated using different approaches, cf. [3, 5, 16, 18, 22, 28, 30]. The case in which the dynamics of the players are decoupled and each player has to ensure that his own state variable does not violate his own state constraint is a classical situation for a number of applications (cf. Isaacs’ book [25] for classical examples of games with decoupled dynamics).

State constrained problems are in general more difficult to treat: the main issue is that both players have to use admissible controls and strategies, and the set of controls available to each player depends on the initial position of the state variable. To address this problem it is useful to establish Filippov-type results (also called distance estimate results) in a nonanticipative way (see the discussion in [11, 12]). State constrained differential games with coupled dynamical constraints have been investigated in [27] and [12]. Koike [27], under implicit uniform controllability assumptions and considering inner Hamiltonians, shows that the upper and lower value functions are viscosity solutions of the associated Hamilton–Jacobi–Isaacs equation and provides a comparison result. In [12] the implicit controllability assumptions of [27] are replaced by directly verifiable constraint qualifications (inward pointing conditions); it is shown that a nonanticipative Filippov-type result can be derived (for dynamic constraints merely measurable in time and state constraints with \(\mathcal{C}^1\) boundaries) and, as a consequence, that the upper and lower values are locally Lipschitz continuous; they are also the unique viscosity solutions of the related Hamilton–Jacobi–Isaacs equation; moreover, conditions under which the game admits a value are provided.

Decoupled state constrained differential games have been considered in [6, 17] for pursuit-evasion problems, in [11] for Bolza problems, and in [1] for exit cost problems. Imposing some uniform controllability assumptions, Bardi, Koike and Soravia [6] show that the game admits a (continuous) value and, imposing additional constraint qualifications, they provide a comparison theorem. Using different (viability) techniques, Cardaliaguet, Quincampoix and Saint-Pierre [17], under suitable assumptions (which do not involve controllability conditions), demonstrate that the game has a value in the class of lower semicontinuous functions. The existence result of [17] was subsequently extended to Bolza problems in [11], in which a nonanticipative Filippov-type theorem is also proved (for state constraints with a \(\mathcal{C}^2\) boundary); this is used to show that the value is locally Lipschitz continuous (when the final cost is locally Lipschitz continuous). Bagagiolo, Maggistro and Zoppello [1] investigate exit cost differential games on domains with \(\mathcal{C}^2\) boundary and provide an existence and uniqueness result; the continuity of the values follows from a nonanticipative Filippov-type result, which is proved for linear dynamic constraints and state constraints represented by half-spaces (the boundaries are hyperplanes). For an application of the results obtained in [1] to a discontinuous hybrid model we refer the reader to [2].

Numerical schemes for differential games (in the presence or absence of state constraints) have been developed by Falcone, cf. [23] and the references therein (see also his papers in collaboration with Bardi, Bottacin, Soravia and Cristiani [3, 4, 21]). For an alternative approach we also suggest the work of Cardaliaguet, Quincampoix and Saint-Pierre, cf. [16]. For different methods that transform a reference state constrained differential game problem into a state-constraint-free problem, see [24].

In our paper, we first establish that, under a set of assumptions which allow the dynamic constraints and the Lagrangians to be possibly discontinuous w.r.t. the time variable (more precisely, \(f_1\), \(f_2\), \(L_1\) and \(L_2\) are supposed to be merely of bounded variation in t), \(V^\flat \) and \(V^\sharp \) are locally Lipschitz continuous. The main difficulty here is the presence of the state constraint, and it is then necessary to provide a new nonanticipative Filippov-type theorem, which holds for general closed sets and is a crucial tool for constructing admissible controls and strategies. Then, we show that \(V^\flat \) and \(V^\sharp \) are solutions in the viscosity sense of the Hamilton–Jacobi–Isaacs equation associated with the reference problem. We highlight that we focus on fundamental properties of the lower and upper value functions, in particular their Lipschitz regularity and their characterization as viscosity solutions, even when the problem data exhibit only a bounded variation behaviour with respect to the time variable. This investigation introduces a novel perspective in the literature on differential games, even for state constraint free problems, extending key insights from recent findings obtained in the context of optimal control [8, 9, 14] and the calculus of variations [10], where problems with data of bounded variation with respect to the time variable have been considered. Following [27] (cf. [26] for optimal control problems) and [6, 11, 17], the Hamilton–Jacobi–Isaacs equation shall involve an inner Hamiltonian: this is the (standard inf-sup) Hamiltonian modified on the boundary of the state constraint set, taking into account only inward pointing (w.r.t. the state constraint set) vectors which belong to the convexified velocity sets of each player.
The last step is a comparison theorem, which we prove by imposing additional continuity properties on the dynamic functions \(f_1\) and \(f_2\) and on the Lagrangians \(L_1\) and \(L_2\). As a result we obtain that the differential game has a value. The comparison theorem provided here extends to differential games with decoupled dynamics the comparison result proved in [31] for optimal control problems; it is based on the stability properties of the interior of the Clarke tangent cone (cf. [29]).

2 Standing Assumptions

We shall assume that the data involved in systems (1) and (2) and the cost (3) above satisfy the following hypotheses:

  1. (H1):

    \(f_1(.,y,.)\) is \(\mathcal{L}\times \mathcal{B}^{m_1}\) measurable, \(f_2(.,z,.)\) is \(\mathcal{L}\times \mathcal{B}^{m_2}\) measurable, \(L_1(.,x=(y,z),.)\) is \(\mathcal{L} \times \mathcal{B}^{m_1}\) measurable and \(L_2(.,x=(y,z),.)\) is \(\mathcal{L} \times \mathcal{B}^{m_2}\) measurable, for each y and z (here \(\mathcal{L}\) denotes the Lebesgue subsets of \(\mathbb {R}\) and \(\mathcal{B}^{m}\) the Borel subsets of \( \mathbb {R}^{m}\)); \(U \subset \mathbb {R}^{m_1}\) and \(V\subset \mathbb {R}^{m_2}\) are closed sets;

  2. (H2):
    1. (i)

      there exists \(c_f \in L^{1}(0,T)\) such that

      $$\begin{aligned} \hspace{0.7 in} |f_1(t,y,u)|\,\le \, c_f(t)(1+|y|), \;\; |f_2(t,z,v)| \,\le \, c_f(t)(1+|z|) \end{aligned}$$

      for all \((y,z,u,v) \in \mathbb {R}^{n_1}\times \mathbb {R}^{n_2}\times U\times V\; \; \text{ and } \text{ for } \text{ a.e. } t\in [0,T]\),

    2. (ii)

      for every \(R_{0}>0\), there exists \(c_0>0\) such that

      $$\begin{aligned} \hspace{-1.2cm} |f_1(t,y,u)|\vee |f_2(t,z,v)| \,\le \, c_0 \quad \text{ for } \text{ all } (t,x=(y,z),u,v) \in [0,T]\times R_0\mathbb {B}\times U \times V, \end{aligned}$$
  3. (H3):

    for every \(R_{0}>0\), there exist a modulus of continuity \(\omega _f:\mathbb {R}_+ \rightarrow \mathbb {R}_+\) and \(k_{f} \in L^{1}(0,T)\) such that

    $$\begin{aligned} |f_1(t,y',u) - f_1(t,y,u)|\le \omega _f(|y-y'|), \;\; |f_2(t,z',v) - f_2(t,z,v)|\le \omega _f(|z-z'|) \end{aligned}$$

    \(\text{ for } \text{ all } \, y, y' \in \mathbb {R}^{n_1}, \ z,z'\in \mathbb {R}^{n_2}\) with \(|y|, |y'|, |z|, |z'| \le R_0\), \(u\in U, \; v\in V, \; \text{ and } t \in [0,T]\), and

    $$\begin{aligned} |f_1(t,y',u) - f_1(t,y,u)|\le k_f(t) |y-y'|, \; \; |f_2(t,z,v) - f_2(t,z',v)|\le k_f(t) |z-z'| \end{aligned}$$

    \(\text{ for } \text{ all } \, y, y' \in \mathbb {R}^{n_1}, \ z,z'\in \mathbb {R}^{n_2}\) with \(|y|, |y'|, |z|, |z'| \le R_0\), \(u\in U, \; v\in V, \; \text{ and } \text{ a.e. } t \in [0,T]\),

  4. (H4):

    for every \(R_{0}>0\), there exist a modulus of continuity \(\omega _L:\mathbb {R}_+ \rightarrow \mathbb {R}_+\) and \(k_{L} \in L^{1}(0,T)\) such that (here \(x=(y,z)\))

    $$\begin{aligned} & |L_1 (t, x', u) - L_1 (t, x, u)| \vee |L_2(t, x', v) - L_2(t, x, v)| \le \omega _L(|x-x'|) \\ & \quad \text{ for } \text{ all } \, x, x' \in R_0\mathbb {B}, u\in U, v\in V,\,\text{ and } t \in [0,T], \\ & |L_1(t,x',u) - L_1(t,x,u)| \vee |L_2(t,x',v) - L_2(t,x,v)| \le k_L(t) |x-x'| \ \text{ for } \text{ all } \, x, x' \in R_0\mathbb {B}, \ \\ & \quad u\in U, \; v\in V,\,\text{ and } \text{ a.e. } t\in [0,T] \,, \end{aligned}$$

    and

    $$\begin{aligned} |L_1 (t, x, u)| \vee |L_2(t, x, v) | \le c_0 \text{ for } \text{ all } \ x\in R_0\mathbb {B},\ u\in U,\ v\in V \text{ and } t\in [0,T]. \end{aligned}$$
  5. (BV):

    for every \(R_0 >0\), \(f_1(.,y,u)\), \(f_2(.,z,v)\), \(L_1(.,x,u)\) and \(L_2(.,x,v)\) have bounded variation uniformly over \(x=(y,z) \in R_0 \mathbb {B}\), \(u\in U\) and \(v\in V\) in the following sense: there exists a non-decreasing function of bounded variation \(\eta :[0,T] \rightarrow [0,\infty )\) such that

    $$\begin{aligned} & |f_1(s,y,u) - f_1(t,y,u)|\ \vee \ |f_2(s,z,v) - f_2(t,z,v)|\ \\ & \vee \ |L_1 (s, x, u) - L_1 (t, x, u)| \vee |L_2(s, x, v) - L_2(t, x, v)|\le \eta (t) - \eta (s), \end{aligned}$$

    for every \([s,t] \subset [0,T]\), \(u\in U\), \(v\in V\), and \(x=(y,z) \in R_0 \mathbb {B}\).

  6. (H5):

    for every \(R_0 >0\), there exists \(k_g\ge 0\) such that \(|g(x)-g(x')| \le k_g|x-x'|\) for all \( x, x' \in R_0\mathbb {B}\);

  7. (IPC):

    (Convexified Inward Pointing Condition) for each \(t \in [0,T)\), \(s \in (0,T]\), \(y \in \partial A_1\), and \(z \in \partial A_2\),

    $$\begin{aligned} \textrm{co}\,f_1(t^+,y,U)\cap \textrm{int}\, T_{A_1}(y)\;\not =\; \emptyset \,, \quad \textrm{co}\,f_1(s^-,y,U) \cap \textrm{int}\, T_{A_1}(y)\;\not =\; \emptyset \, \end{aligned}$$

    and

    $$\begin{aligned} \textrm{co}\,f_2(t^+,z,V)\cap \textrm{int}\, T_{A_2}(z)\;\not =\; \emptyset \,, \quad \textrm{co}\,f_2(s^-,z,V) \cap \textrm{int}\, T_{A_2}(z)\;\not =\; \emptyset \,. \end{aligned}$$

Here, \(T_{A}(x)\) denotes the Clarke tangent cone to the set A at x, defined by

$$\begin{aligned} \begin{aligned} T_A(x) :=\Big \{&\eta \; : \; \text { for any sequences } x_i \xrightarrow {A} x \text { and } t_i \downarrow 0, \text { there exists } \{w_i\} \subset A \\&\text { such that } t_i^{-1}(w_i-x_i)\rightarrow \eta \Big \}. \end{aligned} \end{aligned}$$

\(\mathbb {B}\) denotes the closed unit ball of the relevant Euclidean space; \(\textrm{co}\, D\) is the convex hull of the set D. For \(a,b\in \mathbb {R}\), we write \(a\wedge b:=\min \{a,b\}\) and \(a\vee b:=\max \{a,b\}\). The limits in (IPC) are understood in the sense of Kuratowski (see for instance [32] for details on this notion):

$$\begin{aligned} f_1(t^+,y,U) = \lim _{t'\downarrow t} f_1(t',y,U), \quad f_1(s^-,y,U)=\lim _{s'\uparrow s} f_1(s',y,U), \end{aligned}$$

and similarly for \(f_2\).
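As a concrete instance of (IPC), consider the half-space constraint \(A=\{x\in \mathbb {R}^2 \; : \; x_1\ge 0\}\), for which the Clarke tangent cone at a boundary point is \(T_A(x)=\{\eta \; : \; \eta _1\ge 0\}\), with interior \(\{\eta \; : \; \eta _1>0\}\). Since the first component of any convex combination of velocities is the same convex combination of their first components, \(\textrm{co}\,f(t,x,U)\) meets \(\textrm{int}\, T_A(x)\) exactly when some velocity has a strictly positive first component. A sketch with assumed toy dynamics (not taken from the paper):

```python
import numpy as np

# IPC check for the half-space A = {x in R^2 : x_1 >= 0}: at a boundary
# point, int T_A(x) = {eta : eta_1 > 0}.  For this linear constraint,
# co f(t,x,U) meets int T_A(x) iff some sampled velocity (hence some
# point of the convex hull) has a strictly positive first component.
def ipc_holds(f, t, x, U_samples):
    vels = np.array([f(t, x, u) for u in U_samples])
    return bool((vels[:, 0] > 0).any())

# Hypothetical dynamics f(t, x, u) = (u, 1) with U = [-1, 1]:
# the velocity (1, 1) points strictly into A, so (IPC) holds.
f = lambda t, x, u: np.array([u, 1.0])
U_samples = np.linspace(-1.0, 1.0, 21)
```

For nonlinear boundaries this one-coordinate test no longer suffices; it exploits the fact that the tangent cone of a half-space is described by a single linear functional.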

3 The Hamilton–Jacobi–Isaacs Equation

We first introduce the Hamiltonian functions of interest in this paper, starting from the (un-max-minimized) Hamiltonian:

$$\begin{aligned} & \mathcal{H}(t,x=(y,z),p=(p_y,p_z),u,v):= - f_1(t,y,u) \cdot p_y \\ & \quad - f_2(t,z,v) \cdot p_z - L_1 (t,x,u) - L_2 (t,x,v). \end{aligned}$$

We observe that, owing to the particular structure of the game (which is decoupled with respect to the controls), the inf-sup and sup-inf Hamiltonians coincide, i.e. the Isaacs condition holds. We denote by H the resulting Hamiltonian function:

$$\begin{aligned} H (t,x,p):= \inf _{v\in V} \sup _{u\in U} \mathcal{H}(t,x,p,u,v) = \sup _{u\in U} \inf _{v\in V} \mathcal{H}(t,x,p,u,v) \ . \end{aligned}$$
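The mechanism behind the Isaacs condition can be checked numerically on finite control grids: since \(\mathcal{H}\) splits as a sum of a term in u and a term in v, the inf-sup and sup-inf coincide exactly. A sketch with toy data assumed for illustration (\(f_1=u\), \(f_2=v\), \(L_1=|u|\), \(L_2=0\)):

```python
import numpy as np

# Toy decoupled Hamiltonian (assumed data, not the paper's):
# H(t,x,p,u,v) = -u*p_y - v*p_z - |u|, which splits as a(u) + b(v),
# so inf_v sup_u and sup_u inf_v agree exactly, even on finite grids.
def hamiltonian_values(p_y, p_z, U, V):
    H = np.array([[-u * p_y - v * p_z - abs(u) for v in V] for u in U])
    inf_sup = H.max(axis=0).min()   # inf over v of sup over u
    sup_inf = H.min(axis=1).max()   # sup over u of inf over v
    return inf_sup, sup_inf

U = np.linspace(-1, 1, 41)
V = np.linspace(-1, 1, 41)
```

For a Hamiltonian coupling u and v inside one term, the two quantities would in general differ; the equality here is exactly the separability exploited in the text.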

We set also

$$\begin{aligned} Q_1(t,(y,z)):= (\textrm{co}\,f_1, L_1)(t^+,(y,z),U), \quad \quad Q_2(t,(y,z)):= (\textrm{co}\,f_2, L_2)(t^+,(y,z),V) \end{aligned}$$

and

$$\begin{aligned} & G_1(t,(y,z)):= \{ (e_1,\ell _1)\in Q_1(t,(y,z)) \; : \; e_1 \in \textrm{int} T_{A_1}(y) \}, \\ & G_2(t,(y,z)):= \{ (e_2,\ell _2)\in Q_2(t,(y,z)) \; : \; e_2 \in \textrm{int} T_{A_2}(z) \}. \end{aligned}$$

This allows us to introduce the inner Hamiltonian

$$\begin{aligned} H_{in}(t,(y,z),p=(p_y,p_z)):=\inf _{(e_2,\ell _2)\in G_2(t,(y,z))} \sup _{(e_1,\ell _1)\in G_1(t,(y,z))} [- e_1 \cdot p_y - e_2 \cdot p_z - \ell _1 - \ell _2 ], \end{aligned}$$

which is defined on \([0,T]\times (A_1\times A_2)\times (\mathbb {R}^{n_1}\times \mathbb {R}^{n_2})\). Observe that the Isaacs condition is still satisfied:

$$\begin{aligned} \begin{aligned} H_{in}(t,(y,z),p=(p_y,p_z))= & \inf _{(e_2,\ell _2)\in G_2(t,(y,z))} \sup _{(e_1,\ell _1)\in G_1(t,(y,z))} [- e_1 \cdot p_y - e_2 \cdot p_z - \ell _1 - \ell _2 ] \\= & \sup _{(e_1,\ell _1)\in G_1(t,(y,z))} \inf _{(e_2,\ell _2)\in G_2(t,(y,z))} [- e_1 \cdot p_y - e_2 \cdot p_z - \ell _1 - \ell _2 ] \ . \end{aligned} \end{aligned}$$
(6)

We aim to characterize the lower and upper value functions as generalized solutions in the viscosity sense to the following Hamilton–Jacobi–Isaacs equation:

$$\begin{aligned} \left\{ \begin{aligned} - \partial _t W(t,x) + H_{in}\Big (t,x,\partial _x W(t,x) \Big )&= 0 \qquad \textrm{on } \; [0,T)\times (A_1\times A_2) \\ W(T,x)&= g(x) \qquad \textrm{on } \; A_1\times A_2 . \end{aligned} \right. \end{aligned}$$
(7)

The inner Hamiltonian function \(H_{in}\) involved in Eq. (7) can be discontinuous. One way to overcome this difficulty, when we consider the notion of viscosity solution, is to make use of the upper and lower semicontinuous envelopes of the Hamiltonian \(H_{in}\) (see for instance [6, 7, 20]). We recall that the upper and lower semicontinuous envelopes of a function \(\Phi (t,x,p)\), written respectively \(\Phi ^*\) and \(\Phi _*\), are defined by

$$\begin{aligned} \Phi ^*(t,x,p):= \limsup _{(t^\prime , x^\prime , p^\prime )\rightarrow (t,x,p)}\Phi (t^\prime , x^\prime , p^\prime ) \end{aligned}$$

and

$$\begin{aligned} \Phi _*(t,x,p):= \liminf _{(t^\prime , x^\prime , p^\prime )\rightarrow (t,x,p)}\Phi (t^\prime , x^\prime , p^\prime ). \end{aligned}$$

(The limits here are taken at points where \(\Phi \) is defined.)
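On a grid, the envelopes can be approximated by taking local maxima and minima over small neighbourhoods of each node; the following sketch (an illustration with a one-dimensional step function, not the paper's Hamiltonian) shows the typical behaviour at a discontinuity point, where the upper envelope picks the larger one-sided limit and the lower envelope the smaller one.

```python
import numpy as np

# Grid approximation of the usc/lsc envelopes of a sampled function:
# at each node take the max (resp. min) over the neighbouring nodes.
def envelopes(phi_vals, window=1):
    n = len(phi_vals)
    upper = np.array([phi_vals[max(0, i - window): i + window + 1].max()
                      for i in range(n)])
    lower = np.array([phi_vals[max(0, i - window): i + window + 1].min()
                      for i in range(n)])
    return upper, lower

ts = np.linspace(-1, 1, 201)            # node 100 sits at the jump t = 0
phi = np.where(ts >= 0, 1.0, 0.0)       # step function: 0 for t < 0, 1 for t >= 0
upper, lower = envelopes(phi)
```

At the jump, the upper envelope equals 1 and the lower envelope equals 0, while away from the jump both coincide with the function itself.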

Definition 3.1

(Viscosity super/sub solutions of (7)) A continuous function \(w: [0,T] \times (A_1\times A_2) \longrightarrow \mathbb {R}\) is called a viscosity supersolution of (7) on \([0,T)\times (A_1\times A_2)\) if \(w(T,x)=g(x)\) for all \(x\in (A_1\times A_2)\) and it satisfies the following property: for any test function \(\varphi \in \mathcal{C}^1 \), if \(w- \varphi \) has a local minimum at \((t_0,x_0)\in [0,T) \times (A_1\times A_2)\) (relative to \([0,T]\times (A_1\times A_2)\)), then

$$\begin{aligned} - \partial _t \varphi (t_0,x_0) + (H_{in})^* (t_0,x_0, \partial _x \varphi (t_0,x_0))\; \ge 0. \end{aligned}$$

A continuous function \(w: [0,T] \times (A_1\times A_2) \longrightarrow \mathbb {R}\) is called a viscosity subsolution of (7) on \([0,T)\times (A_1\times A_2)\) if \(w(T,x)=g(x)\) for all \(x\in (A_1\times A_2)\) and it satisfies the following property: for any test function \(\varphi \in \mathcal{C}^1\), if \(w- \varphi \) has a local maximum at \((t_0,x_0)\in [0,T) \times (A_1\times A_2)\) (relative to \([0,T]\times (A_1\times A_2)\)), then

$$\begin{aligned} - \partial _t \varphi (t_0,x_0) + (H_{in})_* (t_0,x_0, \partial _x \varphi (t_0,x_0))\; \le 0. \end{aligned}$$

Definition 3.2

(Viscosity solution of (7)) Consider the Hamilton–Jacobi–Isaacs equation (7). Then we say that a continuous function is a viscosity solution of (7) if it is (at the same time) a supersolution on \( [0,T) \times A_1 \times A_2\) and a subsolution on \([0,T) \times A_1 \times A_2 \) of (7).

A central point in the analysis of the value functions is the possibility of constructing admissible controls and strategies in a nonanticipative way. This is the objective of the next section.

4 State Constrained Control Systems: Nonanticipative Constructions of Admissible Controls

Consider the state-constrained control system, described as follows:

$$\begin{aligned} & \dot{x} (t) \, =\, f(t,x(t),u(t)) \; \text{ a.e. } \; t \in [0,T] \end{aligned}$$
(8)
$$\begin{aligned} & u(t)\, \in \, U \; \text{ a.e. } \; t \in [0,T] \end{aligned}$$
(9)
$$\begin{aligned} & x(t)\, \in \, A \; \quad \text{ for } \text{ all } \; t \in [0,T] \,, \end{aligned}$$
(10)

in which \(f(.,.,.):\mathbb {R}\times \mathbb {R}^N \times \mathbb {R}^{m} \rightarrow \mathbb {R}^N\) is a given function, \(A\subset \mathbb {R}^N\) is a given closed set, and \(U\subset \mathbb {R}^m\) is a given closed set.

We shall refer to a couple (x(.), u(.)), comprising a measurable function \(u(.): [0,T]\rightarrow \mathbb {R}^{m}\) and an absolutely continuous function \(x(.): [0,T]\rightarrow \mathbb {R}^{N}\) which satisfy \(\dot{x}(t)=f(t,x(t),u(t))\) and \(u(t)\in U\) a.e., as a process (on [0, T]). The function x(.) is called a state trajectory and the function u(.) is called a control function. If x(.) satisfies the state constraint (10), the process is ‘admissible’.

We shall assume that the control system data satisfy the hypotheses (H1)\('\)–(H3)\('\), (BV)\('\) and (IPC)\('\), which are the formulations of the hypotheses (H1)–(H3), (BV) and (IPC) for the control system (8)–(10), in which \(f_1, A_1, m_1\) and \(n_1\) (or \(f_2, A_2, m_2\) and \(n_2\)) are replaced by \(f\), \(A\), \(m\) and \(N\). Observe that the inward pointing condition now takes the form

(IPC)\('\)::

(Convexified Inward Pointing Condition) for each \(t \in [0,T)\), \(s \in (0,T]\), \(x \in \partial A\),

$$\begin{aligned} \textrm{co}\,f(t^+,x,U)\cap \textrm{int}\, T_{A}(x)\;\not =\; \emptyset \,, \quad \textrm{co}\,f(s^-,x,U) \cap \textrm{int}\, T_{A}(x)\;\not =\; \emptyset \, . \end{aligned}$$

Employing the \(L^\infty \)-metric on the set of trajectories, we derive estimates which are linear w.r.t. the distance between the left-end points of a reference process and of its approximating process. This result, often referred to as a nonanticipative Filippov theorem or ‘distance estimates’, guarantees the possibility of constructing approximating admissible controls (and trajectories) in a nonanticipative way and, therefore, of building up suitable nonanticipative strategies (which is crucial when dealing with the differential games considered in this paper).

Theorem 4.1

Take a closed nonempty set \(A\subset \mathbb {R}^N\). Fix \(r_0>0\) and define \(R(t):=(r_0+1)\exp \left( \int _0^t c_f(s)ds\right) -1\). Assume hypotheses (H1)\('\)-(H3)\('\), (BV)\('\), (IPC)\('\) for \(R_0:=R(T)\). Then there exist constants \(k_0>0\), \(K_0>0\), \(\delta _0>0\), \(\rho _0>0\), (whose magnitude depends only on the parameter \(r_0\) and the data of assumptions (H1)\('\)-(H3)\('\), (BV)\('\), (IPC)\('\)), with the following property: given any \((t_1,\xi _1) \in [0,T]\times \left( A \cap R(t_1)\mathbb {B}\right) \) and admissible process \((x_1(.),u_1(.))\) on \([t_1,T]\) such that \(x_1(t_1)=\xi _1\), for any \((t_2,\xi _2)\in [0,T]\times \left( A \cap R(t_2)\mathbb {B}\right) \), there exists an admissible process \((x_2(.),u_2(.))\) on \([t_2,T]\) with \(x_2(t_2)=\xi _2\) such that the construction of \(u_2(.)\) is nonanticipative,

$$\begin{aligned} & x_2(t)\;\in \; \textrm{int}\, A\quad \text{ for } \text{ all } t \in (t_2,T]\ , \text { and } \end{aligned}$$
(11)
$$\begin{aligned} & \Vert x_1(.+t_1\!-\!t_2)- x_2(.)\Vert _{L^\infty (t_2,T_2)} \;\le \; K_0 \,\left( |\xi _1 - \xi _2| + |t_1\!-\!t_2|\right) \;, \end{aligned}$$
(12)

where \(T_2:=(T+t_2\!-\!t_1)\wedge T\).

Moreover, if \(\rho :=(1+\eta (T))\, \exp \left( \int _0^{T} k_f(t)\ dt\right) (|\xi _1-\xi _2|+|t_1\!-\!t_2|)\le \rho _0\), then there exists a finite sequence \(\{\tau _1,\dots , \tau _M\}\) (with \(M\le T/\delta _0+1\)) such that \(t_2\le \tau _1\), \(\tau _1+\delta _0\le \tau _2\),..., \(\tau _j+\delta _0\le \tau _{j+1},\dots , \tau _M< T_2\), and the control \(u_2\) on \([t_2,T_2]\) has the following structure:

$$\begin{aligned} u_2(t):=\left\{ \begin{array}{ll} \bar{u}_j(t) & \text { if } t\in [\tau _j,(\tau _j+k_0\rho )\wedge T_2] \\ u_1(t-k_0\rho +t_1-t_2) & \text { if } t\in (\tau _j+k_0\rho ,(\tau _j+\delta _0)\wedge T_2]\quad \text { for }\quad j=1,\dots ,M \\ u_1(t+t_1-t_2) & \text { otherwise,} \end{array}\right. \end{aligned}$$

for some control functions \(\bar{u}_j:[\tau _j,(\tau _j+k_0\rho )\wedge T_2]\rightarrow U\), \(j=1,\dots ,M\).
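The structure of \(u_2\) can be sketched on a time grid: near each corrective time \(\tau _j\) an inward-steering control is inserted for a duration \(k_0\rho \), after which the reference control \(u_1\) is replayed with a shifted argument. In the following illustration the corrective controls \(\bar{u}_j\) are replaced by a constant placeholder and all numerical values are assumptions:

```python
import numpy as np

# Sketch of the concatenated control u_2 of Theorem 4.1 on a uniform
# grid.  The corrective controls bar_u_j are stood in by the constant 0;
# taus, k0*rho, delta0 and the reference u1 are all placeholders.
def build_u2(u1, t1, t2, taus, k0_rho, delta0, T2, dt=1e-3):
    ts = np.arange(t2, T2, dt)
    u2 = np.array([u1(t + t1 - t2) for t in ts])        # default branch
    for tau in taus:
        in_patch = (ts >= tau) & (ts <= min(tau + k0_rho, T2))
        u2[in_patch] = 0.0                              # stand-in for bar_u_j(t)
        replay = (ts > tau + k0_rho) & (ts <= min(tau + delta0, T2))
        u2[replay] = np.array([u1(t - k0_rho + t1 - t2) for t in ts[replay]])
    return ts, u2
```

With \(u_1(t)=t\), \(t_1=t_2=0\), a single \(\tau _1=0.2\), \(k_0\rho =0.05\) and \(\delta _0=0.3\), the output reproduces \(u_1\) before 0.2, is the placeholder on \([0.2,0.25]\), replays \(u_1\) with a lag of 0.05 on \((0.25,0.5]\), and reverts to \(u_1\) afterwards.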

Remark 4.2

A scrutiny of the proof of Theorem 4.1 yields further useful information about the existence of admissible controls for processes emerging from given initial data \((t_0,\xi _0)\in [0,T)\times A\) (even when we are not interested in comparing them with admissible processes having different initial data, which is the purpose of the statement of Theorem 4.1). Indeed, we can always fix a control function \(u_0\) on [0, T]. Then, for any given initial data \((t_0,\xi _0)\in [0,T)\times A\), we can consider a positive number \(\rho \), which now has an expression different from (14) below and can be written in terms of the state constraint violation of the trajectory \(x_0(.)\) associated with the control \(u_0\) on \([t_0,T]\) (cf. [15] for the details in the context of differential inclusions, the adaptation of which to control systems is straightforward). The analysis along the lines of the proof of Theorem 4.1 (cf. [15] for differential inclusions) then provides an admissible pair \((\bar{x},\bar{u})\) such that \(\bar{x}(t_0)=\xi _0\), \(\bar{x}(t)\in \textrm{int}A\) for all \(t\in (t_0,T]\) and, if \(\xi _0\) is ‘close’ to the boundary \(\partial A\) and a reference vector \(v\in \textrm{co} f(t_0^+,\xi _0,U)\) is given, then the control \(\bar{u}\) can be constructed in such a manner that, for each \(\varepsilon >0\), we can find \(\sigma >0\) such that

$$\begin{aligned} \left| \int _{t_0}^{t_0+\sigma } \left[ f(s,\bar{x}(s),\bar{u}(s))-v\right] \,ds \right| \le \varepsilon . \end{aligned}$$
(13)

When we consider our reference differential game problem and assumptions (H1)–(H3), (BV) and (IPC) are satisfied, this translates into the fact that, for every initial datum \((t_0,x_0=(y_0,z_0))\in [0,T)\times A_1\times A_2\), the sets of admissible controls \(\mathcal{U}(t_0,y_0)\) and \(\mathcal{V}(t_0,z_0)\) are nonempty and, in particular, we can always find controls \(\bar{u}\in \mathcal{U}(t_0,y_0)\) and \(\bar{v}\in \mathcal{V}(t_0,z_0)\) satisfying the properties described above.

Proof of Theorem 4.1

We fix a control function \(u_0\) on [0, T]. Fix also any \(r_0>0\). Assume that the function f and the set A in the theorem statement satisfy (H1)\('\), (H2)\('\), (H3)\('\), (IPC)\('\) and (BV)\('\) with constant \(c_0 >0\) and functions \(c_f, k_{f} \in L^1(0,T)\), for \(R_0:=R(T)\). Take any \(t_1\in [0,T]\). The constants \(R_0\) and \(c_0\) bound, respectively, the magnitudes and the velocities of all processes \((x,u)\) on the subinterval \([t_1,T]\subset [0,T]\) starting from \(R(t_1)\mathbb {B}\). Let \(\bar{\eta }>0\) and \(\eta (.)\) be the constant and the modulus of variation appearing in (BV)\('\). Take any \(\xi _i \in A \cap R(t_i)\mathbb {B}\), \(i=1,2\), with \(\xi _1\ne \xi _2\) (otherwise there is nothing to prove).

Let \((x_1,u_1)\) be any given admissible process on \([t_1,T]\). We shall first construct, in a nonanticipative way, an admissible process \((x_2,u_2)\) on \([t_2,T_2]\) such that (11) and (12) are satisfied (Steps 1–5). In Step 6, we provide its extension to \([t_2,T]\) (when \(T_2<T\)).

Step 1. (A reduction argument).

The reduction techniques and the preliminary analysis used in [15] can be easily adapted to our control system (8) (considering the multivalued map \((t,x)\leadsto F(t,x):=f(t,x,U)\)), allowing us to restrict our attention, without loss of generality, to the case when

(i): \(\xi _2 \in ( \partial A + \frac{\bar{\eta }}{2} \mathbb {B}) \cap A \cap R(t_2)\mathbb {B}\) and

$$\begin{aligned} \rho :=(1+\eta (T))\, \exp \left( \int _0^{T} k_f(t)\ dt\right) (|\xi _1-\xi _2|+|t_1\!-\!t_2|)\le \rho _0, \text { for some } \rho _0>0, \end{aligned}$$
(14)

(ii): we consider a subinterval \([t_2,\tilde{T}_2]\subset [0,T]\) (in place of \([t_2,T_2]\)) such that \(\tilde{T}_2-t_{2} \le \delta _0\) for some \(\delta _0>0\),

(iii): \(\eta (\tilde{T}_2)-\eta (t_{2}) \le \gamma .\)

Here, \(\rho _0>0\), \(\delta _0>0\) and \(\gamma >0\) are constants which depend only on the data given in the assumptions, see (16) and (17)–(18) below.

Using well-known stability properties of the interior of the Clarke tangent cone [29], owing to [15, Lemma 5], we can, if necessary, modify the reference function f at a finite number of times \(\{\sigma _i \}\subset [0,T]\) and obtain a new function \(\tilde{f}\) which satisfies the following property: there exist \(\epsilon \in (0,1)\) and \(\bar{\eta }>0\) (we can arrange that \(\bar{\eta }\) is the same constant appearing in (BV)\('\); otherwise we can always reduce its size) such that, given any \((t,x) \in [0,T]\times \big ( ( \partial A + \bar{\eta }\mathbb {B}) \cap R_0\mathbb {B}\cap A \big )\), we can find a vector \(v \in \textrm{co}\, \tilde{f}(t,x,U)\) such that

$$\begin{aligned} x'+ [0,\epsilon ](v+ \epsilon \mathbb {B}) \;\subset \; A, \quad \text{ for } \text{ all } x' \in (x+ \epsilon \mathbb {B})\cap A. \end{aligned}$$
(15)

We also know that a process \((x,u)\) for the dynamics governed by \(\tilde{f}\) is also a process for f, and vice-versa. Therefore, it is not restrictive to continue our analysis assuming that, for any \((t,x)\in [0,T]\times \big ( ( \partial A + \bar{\eta }\mathbb {B}) \cap R_0\mathbb {B}\cap A \big )\), we can find \(v\in \textrm{co}\, f(t,x,U)\) such that (15) is true for all \(x' \in (x+\epsilon \mathbb {B})\cap A\).

Let \(\omega : [0, T]\rightarrow [0, \infty )\) be the function

$$\begin{aligned} \omega (\delta ) := \sup \left( \int _{I}k_{f}(s)ds \right) , \end{aligned}$$

where the supremum is taken over sub-intervals \(I\subset [0,T]\) of length not greater than \(\delta \). Observe that \(\omega (.)\) is well-defined on [0, T], for \(k_{f} \in L^1(0,T)\), and that \(\omega (\delta ) \rightarrow 0\), as \(\delta \downarrow 0\).
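The modulus \(\omega (.)\) can be approximated numerically for a given integrable \(k_f\) by scanning all windows of prescribed length over a cumulative integral. A sketch with the assumed sample \(k_f(s)=s^{-1/2}\in L^1(0,1)\) (chosen purely for illustration because it is unbounded yet integrable):

```python
import numpy as np

# Grid approximation of omega(delta) = sup over intervals I of length
# <= delta of the integral of k_f over I, for the sample k_f(s) = 1/sqrt(s).
def omega(delta, n=100_000):
    ds = 1.0 / n
    s = (np.arange(n) + 0.5) * ds            # midpoint quadrature nodes
    kf = 1.0 / np.sqrt(s)                    # k_f in L^1(0,1), unbounded at 0
    cum = np.concatenate(([0.0], np.cumsum(kf) * ds))
    w = int(round(delta / ds))               # window length in grid cells
    if w == 0:
        return 0.0
    return float((cum[w:] - cum[:-w]).max()) # best window of length delta
```

For this \(k_f\) the mass concentrates near 0, so \(\omega (\delta )\approx \int _0^\delta s^{-1/2}ds = 2\sqrt{\delta }\), and \(\omega (\delta )\rightarrow 0\) as \(\delta \downarrow 0\), as stated in the text.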

Fix \(k_0\ge 1\) such that \(k_0 > \epsilon ^{-1}\) and take constants \(\delta _0 >0\), \(\rho _0 >0\) and \(\gamma >0\) in such a manner that

$$\begin{aligned} \delta _0 \le \epsilon ,\quad \rho _0+c_0 \delta _0< \epsilon ,\quad k_0 \rho _0 < \delta _0,\quad \rho _0 \le \bar{\eta }, \quad 4 \delta _0 c_0 \le \bar{\eta }, \end{aligned}$$
(16)

and

$$\begin{aligned} 4e^{\omega (\delta _0)}(\gamma + c_0\omega (\delta _0)) < \epsilon , \end{aligned}$$
(17)
$$\begin{aligned} \left[ \delta _0/2 + c_0 k_0 \omega (\delta _0) + k_0e^{\omega (\delta _0)}\big (\gamma (1+\omega (\delta _0))+c_0 \omega (\delta _0) (3+ \omega (\delta _0))\big )\right] \rho + \gamma \delta _0 < (k_0 \epsilon -1)\rho \;. \end{aligned}$$
(18)

Set \(K:=(1+\eta (T))\, \exp \left( \int _0^{T} k_f(t)\ dt\right) \) so that \(\rho = K (|\xi _1-\xi _2|+|t_1\!-\!t_2|)\).
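To see that constants satisfying (16) and (17) can always be found, one may argue as follows (an illustrative sketch, not part of the original argument; the specific ordering of choices is ours):

```latex
% Fix k_0 > \epsilon^{-1} (so that k_0\epsilon - 1 > 0).  Since
% \omega(\delta) \to 0 as \delta \downarrow 0, first shrink \delta_0 until
%   \delta_0 \le \epsilon, \qquad 4\,\delta_0 c_0 \le \bar\eta, \qquad
%   4\,e^{\omega(\delta_0)}\, c_0\,\omega(\delta_0) \;<\; \epsilon/2 .
% Then choose \gamma > 0 with
%   4\,e^{\omega(\delta_0)}\,\gamma \;<\; \epsilon/2 ,
% which gives (17), and finally pick \rho_0 > 0 so small that
%   k_0\rho_0 < \delta_0, \qquad \rho_0 + c_0\delta_0 < \epsilon, \qquad
%   \rho_0 \le \bar\eta ,
% which gives (16).  Condition (18) is then obtained by a further
% shrinking of \delta_0, \gamma and \rho_0, exploiting k_0\epsilon > 1.
```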

Step 2. (Admissible control construction – first part of the time interval).

Since we can restrict attention to a situation in which (i) is valid, we can find a vector \(v \in \textrm{co}\, f(t_{2},\xi _2,U)\) satisfying property (15) for \((t,x)=(t_{2},\xi _2)\). Now, consider the arc \(y :[t_{2},\tilde{T}_2]\rightarrow \mathbb {R}^{n}\) such that \(y (t_{2})=\xi _2\) and \(\dot{y}(t)\;=\;v\). It immediately follows that, for all \(t \in [t_{2},(t_{2}+k_0\rho )\wedge T]\),

$$\begin{aligned} y(t)\;=\;\xi _2+ (t-t_2)\, v\;. \end{aligned}$$
(19)

Recalling that \(c_0\) is an upper bound on the magnitudes of both \(v\) and \(\Vert \dot{x}_1 \Vert _{L^{\infty }}\), we deduce that

$$\begin{aligned} \Vert x_1(.+t_1\!-\!t_2) - y(.) \Vert _{L^{\infty }(t_{2},(t_{2}+(T-t_1)\wedge k_0\rho )\wedge \tilde{T}_2)} \;\le \; 2 c_0k_0 \rho \;. \end{aligned}$$
(20)

In addition, from (BV)\('\) we also obtain that, for all \(s \in [t_{2}, (t_{2}+k_0 \rho )\wedge \tilde{T}_2]\),

$$\begin{aligned} d_{\textrm{co}\, f(s,y(s),U)}(\dot{y}(s))\le & (\eta (s)-\eta (t_{2})) + k_f(s)|y (s)- y (t_2)| \nonumber \\\le & (\eta (s)-\eta (t_{2})) + k_{f}(s)c_0\, (s-t_2)\;. \end{aligned}$$
(21)

Invoking Filippov’s Existence Theorem (cf. [32, Thm. 2.4.3]), in which we take as reference multifunction \(\widetilde{F}(t,x):=\textrm{co}\,F(t,x)=\textrm{co}\,f(t,x,U)\), and bearing in mind condition (21), we can find an \(\widetilde{F}\)-trajectory \(\tilde{x} \) on \([t_{2},(t_{2}+k_0\rho )\wedge \tilde{T}_2]\) with \(\tilde{x}(t_2)=y(t_2)=\xi _2\) and such that, for any \(t \in [t_{2},(t_{2}+k_0\rho )\wedge \tilde{T}_2]\)

$$\begin{aligned} \Vert \tilde{x} -y \Vert _{L^{\infty }(t_{2},t)} \;\le \; e^{\omega (\delta _0)}(\gamma + c_0\,\omega (\delta _0) )\, (t-t_{2}) \;. \end{aligned}$$
(22)
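For the reader's convenience, the form of Filippov's Existence Theorem being applied can be summarized as follows (an informal restatement; the precise hypotheses and constants are those of [32, Thm. 2.4.3]):

```latex
% If y is an arc on [a,b] with
%   d_{\widetilde F(t,y(t))}(\dot y(t)) \;\le\; \varrho(t)
%   \quad \text{a.e. } t \in [a,b],
% and x \mapsto \widetilde F(t,x) is k_f(t)-Lipschitz, then there exists an
% \widetilde F-trajectory x with x(a)=y(a) and, for t \in [a,b],
\| x - y \|_{L^\infty(a,t)}
  \;\le\; e^{\int_a^t k_f(s)\,ds}\,\int_a^t \varrho(s)\,ds .
% In the present application a = t_2, \varrho(s) is the right-hand side
% of (21), e^{\int k_f} \le e^{\omega(\delta_0)} because t - t_2 \le \delta_0,
% and (assuming, as the notation suggests, that \gamma bounds the increments
% \eta(s)-\eta(t_2) for s - t_2 \le \delta_0)
%   \int_{t_2}^t \varrho(s)\,ds
%     \;\le\; (\gamma + c_0\,\omega(\delta_0))\,(t - t_2),
% which yields (22).
```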

It follows that for all \(t\in [t_2,(t_2+k_0\rho )\wedge \tilde{T}_2]\),

$$\begin{aligned} \begin{aligned} \tilde{x}(t)&\in y(t) + e^{\omega (\delta _0)}(\gamma + c_0\,\omega (\delta _0))(t-t_2)\mathbb {B}, \text { from } (22), \\&\subset \xi _2 + (t-t_2) (v+e^{\omega (\delta _0)}(\gamma + c_0\, \omega (\delta _0))\mathbb {B}), \text { from } (19),\\&\subset \xi _2 + (t-t_2) (v+\epsilon /4\, \mathbb {B}) , \text { from } (17) \\&\subset \textrm{int} A, \text { from } (14)--(16). \end{aligned} \end{aligned}$$
(23)

We take a decreasing sequence \(\{\sigma _k\}_{k\ge 1}\) in \((t_2,(t_2+k_0\rho )\wedge \tilde{T}_2]\) such that \(\sigma _1:=(t_2+k_0\rho )\wedge \tilde{T}_2\) and \(\sigma _k \downarrow t_2\) as \(k\rightarrow +\infty \). Observe that, since \(\tilde{x}(t)\in \textrm{int} A\) for all \(t\in (t_2,(t_2+k_0\rho )\wedge \tilde{T}_2]\), there exists a decreasing sequence \(\epsilon _k \downarrow 0\), with \(\epsilon _k \in (0,(\epsilon (\sigma _k-t_2))\wedge (\delta _0 \rho ))\), for all \(k\ge 1\), such that

$$\begin{aligned} \tilde{x}(\tau ) + \epsilon _k\, \mathbb {B} \subset \textrm{int} A \text { for all } \tau \in [\sigma _k,\sigma _1]. \end{aligned}$$

Employing the techniques used in the proof of [13, Lemma 5.2, Step 3], we can find an F–trajectory \(x_2(.)\) on \([t_2,(t_2+k_0\rho )\wedge \tilde{T}_2]\) such that \(x_2(t_2)=y(t_2)=\xi _2\) and, for each \(k\ge 2\),

$$\begin{aligned} x_2(t) \in \tilde{x}(t) + \frac{\epsilon _k}{2}\, \mathbb {B} \subset \textrm{int} A, \text { for all } t\in (\sigma _k,\sigma _{k-1}]. \end{aligned}$$
(24)

In particular we have:

$$\begin{aligned} x_2(\sigma _1) \in \tilde{x}(\sigma _1) + \frac{\epsilon _1}{2}\, \mathbb {B} \subset y(\sigma _1) + \left[ \frac{\delta _0}{2}+e^{\omega (\delta _0)}(\gamma + c_0\, \omega (\delta _0))k_0\right] \rho \, \mathbb {B}. \end{aligned}$$
(25)

Using Filippov’s Selection Theorem (cf. [32, Thm. 2.3.13]), we can find a control \(\bar{u}_2:[t_{2},(t_{2}+k_0\rho )\wedge \tilde{T}_{2}] \rightarrow U\) such that \((x_2,\bar{u} _2)\) is a process on \([t_{2},(t_{2}+k_0\rho )\wedge \tilde{T}_{2}]\).

Observe that, in fact, from the analysis of this step we can deduce a much stronger property: for each initial datum \((t,\xi )\), where \(t\in [0,T]\) and

$$\begin{aligned} \xi \in \left( \partial A +\frac{\bar{\eta }}{2}\, \mathbb {B}\right) \cap A\cap R(t)\mathbb {B}, \end{aligned}$$
(26)

we can construct a control \(\bar{u}:[t,(t+k_0\rho )\wedge T]\rightarrow U\) such that the associated process \((\bar{x},\bar{u})\) emerging from \(\bar{x}(t)=\xi \) satisfies the condition \(\bar{x}(s)\in \textrm{int} A\) for all \(s\in (t,(t+k_0\rho )\wedge T]\) and

$$\begin{aligned} \bar{x}((t+k_0\rho )\wedge T)\in \bar{y}((t+k_0\rho )\wedge T) + \left[ \frac{\delta _0}{2}+e^{\omega (\delta _0)}(\gamma + c_0\, \omega (\delta _0))k_0\right] \rho \, \mathbb {B}, \end{aligned}$$

where \(\bar{y}(s):=\xi +(s-t)v\), \(s\in [t,(t+k_0\rho )\wedge T]\) and \(v\in \textrm{co}\, f(t,\xi ,U)\cap \textrm{int} T_A(\xi )\) is taken according to Step 1.

Therefore, for each initial datum \((t,\xi )\) such that (26) is satisfied, we consider the associated (fixed) control obtained in this step, which is admissible on \([t,(t+k_0\rho )\wedge T]\).

Step 3. (Admissible control construction – second part of the time interval and distance estimates).

Observe that, given any control u on \([t_1,T]\) such that the process (x(.), u(.)) with starting point \(x(t_1)=\xi _1\) is admissible on \([t_1,T]\), if we consider the process \((\hat{x}(.),u(.+t_1\!-\!t_2))\) on \([t_2,T_2]\) with \(\hat{x}(t_2)=\xi _2\) (which, in general, is not necessarily admissible), then from Gronwall’s inequality we have

$$\begin{aligned} \begin{aligned} \max _{t \in [t_{2},T_{2}]}d_{A}(\hat{x}(t))&\le \Vert x(.+t_1\!-\!t_2)-\hat{x}(.)\Vert _{L^{\infty }(t_{2},T_{2})} \\&\le \exp \left( \int _0^{T} k_f(t)\ dt\right) (1+\eta (T))\left( |\xi _1-\xi _2|+|t_1\!-\!t_2|\right) \; (= \rho ). \end{aligned} \end{aligned}$$
(27)

Take now an admissible process \((x_1(.),u_1(.))\) on \([t_1,T]\) such that \(x_1(t_1)=\xi _1\). Define a new control

$$\begin{aligned} u_2(t)\;:=\; \left\{ \begin{array}{ll} \bar{u} _2 (t) & \text{ if } t \in [t_{2},(t_{2}+k_0\rho )\wedge \tilde{T}_2] \\ u_1(t\!+\!t_1\!-\!t_2-k_0 \rho ) & \text{ if } t \in (t_{2}+k_0\rho ,\tilde{T}_2]\;. \end{array} \right. \end{aligned}$$

Write \((x_2,u_2)\) for the process associated with the control \(u_2\) with starting point \(x_2(t_2)=\xi _2\). It follows that, for any \(t \in [t_{2},(t_{2}+k_0\rho )\wedge \tilde{T}_{2}]\),

$$\begin{aligned} \begin{aligned} |x_1(t\!+\!t_1\!-\!t_2) - x_2(t) |&\le \displaystyle \int _{t_2}^t |f(s+t_1\!-\!t_2,x_1(s+t_1\!-\!t_2), u_1(s+t_1\!-\!t_2)) \\&\qquad \qquad \qquad \qquad \qquad -f(s,x_2(s), u_2(s))|ds +| \xi _1 - \xi _2| \\&\le 2c_0 k_0 \rho + | \xi _1 - \xi _2| \\&\le (2c_0 k_0K +1) \left( | \xi _1 - \xi _2| + |t_1\!-\!t_2|\right) \ . \end{aligned} \end{aligned}$$
(28)

We now consider the case in which \(t_{2}+k_0 \rho <\tilde{T}_{2}\). Write \((\hat{x}_2(.),u_1(.+t_1\!-\!t_2))\) for the process associated with the control \(u_1(.+t_1\!-\!t_2)\) with starting point \(\hat{x} _2(t_2)=\xi _2\). From (27), applied to \((\hat{x}_2, u_1(.+t_1\!-\!t_2))\), we deduce that \(\max _{t\in [t_2,T_2]} d_A(\hat{x}_2(t))\le \rho \). For all \(t \in [t_{2}+k_0\rho ,\tilde{T}_{2}]\) we have

$$\begin{aligned} \begin{aligned} |\hat{x}_2&(t) - x_2(t)| = \bigg |\int _{t_2}^t f(s,\hat{x} _2(s), u_1(s+t_1\!-\!t_2)) ds - \int _{t_2}^t f(s,x_2(s), u_2(s)) ds\bigg | \\&= \bigg |\int _{t_2}^t f(s,\hat{x} _2(s), u_1(s+t_1\!-\!t_2)) ds - \int _{t_2}^{t_2+k_0\rho } f(s,x_2(s), \bar{u}_2 (s)) ds \\&\quad - \int _{t_2+k_0\rho }^t f(s,x_2(s), u_1(s+t_1\!-\!t_2-k_0\rho )) ds \bigg |\\&\le \int _{t - k_0\rho }^t |f(s,x_2(s), u_1(s+t_1\!-\!t_2)) | ds + \int _{t_2}^{t_2+k_0\rho } |f(s,x_2(s), \bar{u}_2(s))| ds \\&\quad + \int _{t_2}^t |f(s,\hat{x} _2(s), u_1(s+t_1\!-\!t_2)) - f(s,x_2(s), u_1(s+t_1\!-\!t_2))| ds \\&\quad + \int _{t_2}^{t-k_0\rho } |f(s, x_2(s), u_1(s+t_1\!-\!t_2)) - f(s+k_0\rho ,x_2(s), u_1(s+t_1\!-\!t_2))| ds \\&\quad + \int _{t_2}^{t-k_0\rho }\! |f(s\!+\!k_0\rho , x_2(s), u_1(s\!+\!t_1\!-\!t_2)) - f(s\!+\!k_0\rho ,x_2(s\!+\!k_0\rho ), u_1(s\!+\!t_1\!-\!t_2))| ds \\&\le 2c_0k_0\rho + \int _{t_2}^t k_f(s)|\hat{x} _2(s) - x_2(s)| ds + \int _{t_2}^{t-k_0\rho } (\eta (s+k_0\rho ) - \eta (s))ds \\&\quad + \int _{t_2}^{t-k_0 \rho } k_f(s)|x_2(s+k_0\rho )-x_2(s)|ds \\&\le (2c_0 +\gamma + \omega (\delta _0)c_0) k_0\rho + \int _{t_2}^t k_f(s)|\hat{x} _2(s) - x_2(s)| ds. \end{aligned} \end{aligned}$$
(29)

Then, from Gronwall’s inequality (in the integral form), we deduce that, for all \(t \in [t_{2}, \tilde{T}_{2}]\),

$$\begin{aligned} |\hat{x} _2(t) - x_2(t)|\le e^{\omega (\delta _0)} (2c_0 +\gamma + \omega (\delta _0)c_0) k_0\rho . \end{aligned}$$
(30)
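The integral form of Gronwall's inequality used here is the standard one; we record it for completeness:

```latex
% If \phi \ge 0 satisfies, for all t \in [a,b],
%   \phi(t) \;\le\; c + \int_a^t k(s)\,\phi(s)\,ds
% with k \in L^1(a,b) nonnegative, then
\phi(t) \;\le\; c\,\exp\!\Big(\int_a^t k(s)\,ds\Big),
\qquad t \in [a,b] .
% Applying this with \phi(t) = |\hat x_2(t) - x_2(t)|,
% c = (2c_0+\gamma+\omega(\delta_0)c_0)k_0\rho and k = k_f, and using
% \int_{t_2}^{\tilde T_2} k_f(s)\,ds \le \omega(\delta_0)
% (since \tilde T_2 - t_2 \le \delta_0), we arrive at (30).
```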

Take any \(t \in (t_{2}+k_0 \rho ,\tilde{T}_2]\); from the estimates above we obtain

$$\begin{aligned} \begin{aligned} |x_1(t\!+\!t_1\!-\!t_2)&-x_2(t)|\le |x_1(t\!+\!t_1\!-\!t_2)- \hat{x}_2(t)| + |\hat{x}_2(t) - x_2(t)| \\&\le e^{\omega (\delta _0)}(1+\eta (T))(|\xi _1-\xi _2|+|t_1\!-\!t_2|) + e^{\omega (\delta _0)}(2c_0+\gamma +\omega (\delta _0)c_0)k_0\rho \\&\le e^{\omega (\delta _0)}\left[ 1+\eta (T) + (2c_0+\gamma +\omega (\delta _0)c_0)k_0K\right] (| \xi _1 - \xi _2| + |t_2\!-\!t_1|)\;, \end{aligned} \end{aligned}$$
(31)

from which we deduce the required estimate:

$$\begin{aligned} \Vert x_1(.+t_1\!-\!t_2) -x_2(.) \Vert _{L^{\infty }(t_{2},\tilde{T}_2)}\;\le \; K_0 (| \xi _1 - \xi _2| + |t_1\!-\!t_2|), \end{aligned}$$
(32)

where

$$\begin{aligned} K_0 :=e^{\omega (\delta _0)}\left[ 1+\eta (T) + (2c_0+\gamma +\omega (\delta _0)c_0)k_0K\right] . \end{aligned}$$

Step 4. (The process is admissible on the second part of the time interval).

From (24) we know that \(x_2(t)\in \textrm{int}A\) for all \(t\in (t_2,(t_2+k_0\rho )\wedge \tilde{T}_2]\). So to complete the proof we proceed assuming that \(t_2+k_0\rho <\tilde{T}_2\). Define the arc \(\hat{y}:[t_2,\tilde{T}_2]\rightarrow \mathbb {R}^n\) as follows

$$\begin{aligned} \hat{y}(t) :=\left\{ \begin{array}{ll} y(t) = \xi _2 + v(t-t_2) & \text { if } t\in [t_2,t_2+k_0\rho ) \\ \xi _2 + k_0\rho v + \int _{t_2+k_0\rho }^t\dot{\hat{x}}_2(s-k_0\rho )ds & \text { if } t\in [t_2+k_0\rho ,\tilde{T}_2] \end{array}\right. . \end{aligned}$$
(33)

Observe that, when \(t\in [t_2+k_0\rho ,\tilde{T}_2]\), we have \(\hat{y}(t)=k_0\rho v + \hat{x}_2(t-k_0\rho )\). Writing \(z(t)\) for a projection onto \(A\) of \(\hat{x}_2(t-k_0\rho )\), we have \(|\hat{x}_2(t-k_0\rho )-z(t)|=d_A(\hat{x}_2(t-k_0\rho ))\le \Vert \hat{x}_2-x_1(.+t_1\!-\!t_2) \Vert _{L^\infty (t_2+k_0\rho ,\tilde{T}_2)} \le \rho \), and we deduce that

$$\begin{aligned} \hat{y}(t)\in z(t) +k_0\rho v + \rho \mathbb {B} \text { for all } t \in [t_2+k_0\rho , \tilde{T}_2]. \end{aligned}$$
(34)

Notice also that, for all \(t\in [t_2+k_0\rho , \tilde{T}_2]\), making use of (25) and (30) (recall that here \(\sigma _1=t_2+k_0\rho \)), we obtain

$$\begin{aligned} |&x_2(t)-\hat{y}(t)| \le |x_2(t_2+k_0\rho ) -y(t_2+k_0\rho )|\nonumber \\&\qquad + \bigg |\int _{t_2+k_0\rho }^{t} \Big [f(s,x_2(s),u_2(s)) - f(s-k_0\rho ,\hat{x}_2(s-k_0\rho ),u_1(s-k_0\rho +t_1\!-\!t_2))\Big ] ds\bigg | \nonumber \\&\le \left[ \delta _0/2 + e^{\omega (\delta _0)}(\gamma +c_0 \omega (\delta _0))k_0 \right] \rho + \int _{t_2+k_0\rho }^{t} (\eta (s)-\eta (s-k_0\rho ))ds \\&\qquad + \int _{t_2+k_0\rho }^{t} k_f(s)|x_2(s)-x_2(s-k_0\rho )+x_2(s-k_0\rho )-\hat{x}_2(s-k_0\rho )|ds \nonumber \\&\le \left[ \delta _0/2 + e^{\omega (\delta _0)}(\gamma +c_0 \omega (\delta _0))k_0 + c_0 \omega (\delta _0)k_0 + \omega (\delta _0)e^{\omega (\delta _0)}(2c_0+\gamma +\omega (\delta _0)c_0)k_0\right] \rho + \gamma \delta _0\nonumber \end{aligned}$$
(35)

Since \(|\hat{x}_2(t-k_0 \rho ) -\hat{x}_2(t_{2})|\;\le \; c_0 (\tilde{T}_2-t_{2})\) for all \(t \in (t_{2}+k_0 \rho , \tilde{T}_{2}]\), appealing once again to (16), we also have

$$\begin{aligned} |z(t) - \hat{x}_2(t_{2})|=|z(t)-\hat{x}_2(t-k_0 \rho )+\hat{x}_2(t-k_0 \rho )-\xi _2|\;\le \; \rho +c_0 \delta _0 \le \rho _0+c_0\delta _0< \epsilon . \end{aligned}$$

Thus bearing in mind (15) and (16), we see that

$$\begin{aligned} z(t) + k_0 \rho (v + \epsilon \mathbb {B}) \;\subset \; A\;, \end{aligned}$$

and, owing to (34),

$$\begin{aligned} \hat{y}(t)+ (k_0 \epsilon - 1) \rho \mathbb {B}\;\subset \; A\;. \end{aligned}$$

Taking into account (18) and the last inclusion, we deduce that \(x_2(t) \in \textrm{int}\, A\) for all \(t\in [t_2+k_0\rho ,\tilde{T}_2]\) in this case as well, confirming all the assertions of the theorem.

Step 5. (Iteration, nonanticipativity).

With the help of the reduction argument of Step 1 we constructed (in Steps 2 and 3) an admissible process on the interval \([t_2,\tilde{T}_2]\) of length at most \(\delta _0>0\); the magnitude of \(\delta _0\) depends only on the data of the problem and on the choice of the radius \(r_0>0\) (which regulates the size of the region from which the processes are supposed to emerge). We recall that the reduction argument of Step 1 (we refer the reader to [15] and [13] for full details) allows us to restrict attention to subintervals of length smaller than \(\delta _0\): if \(T_2-t_2>\delta _0\), we partition \([t_2,T_2]\) into a family \(\{[\sigma _0^i,\sigma _1^i]\}_{i=1}^{M_0}\) of \(M_0\) contiguous intervals, each of length at most \(\delta _0\), where \(M_0\) is the smallest integer such that \(M_0^{-1}(T_2-t_2)\le \delta _0\), \(\sigma _0^1=t_2\), \(\sigma _1^{M_0}=T_2\), \(\sigma _1^i=\sigma _0^i+\delta _0\) for all \(i=1,\dots ,M_0-1\) and \(\sigma _1^{M_0}-\sigma _0^{M_0}\le \delta _0\). If the starting point \(\xi _2\) belongs to \(\left( \partial A+\frac{\bar{\eta }}{2}\mathbb {B}\right) \cap A\cap R(t_2)\mathbb {B}\), then we construct an admissible process \((x_2,u_2)\) on \([\sigma _0^1=t_2,\sigma _1^1]\) according to Steps 1–4. On the other hand, if \(\xi _2\in (A\cap R(t_2)\mathbb {B})\setminus \left( \partial A+\frac{\bar{\eta }}{2}\mathbb {B}\right) \), then we just consider the admissible control \(u_2(.):=u_1(.+t_1-t_2)\) on \([\sigma _0^1,\sigma _1^1]\). In a subsequent stage we extend the obtained process \((x_2,u_2)\) to \([t_2,\sigma _1^2]\), taking into account the position of the new initial condition \(x_2(\sigma _0^2):=x_2(\sigma _1^1)\) and applying the criterion employed above: the control depends on whether \(x_2(\sigma _0^2)\in \left( \partial A + \frac{\bar{\eta }}{2}\mathbb {B}\right) \cap A\) or not (observe that it necessarily belongs to \(R(\sigma _0^2)\mathbb {B}\)).
This tells us that, to build up an admissible process \((x_2,u_2)\) on the full time interval \([t_2,T_2]\), it is necessary to apply the construction displayed in the previous steps only a finite number of times \(M\le M_0\).
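As a concrete (purely illustrative) instance of the partition above: if \(T_2-t_2=1\) and \(\delta _0=0.3\), then \(M_0=4\) is the smallest integer with \(M_0^{-1}(T_2-t_2)\le \delta _0\) (since \(1/3>0.3\ge 1/4\)), and the intervals are

```latex
[\sigma_0^1,\sigma_1^1] = [t_2,\, t_2+0.3], \quad
[\sigma_0^2,\sigma_1^2] = [t_2+0.3,\, t_2+0.6], \quad
[\sigma_0^3,\sigma_1^3] = [t_2+0.6,\, t_2+0.9], \quad
[\sigma_0^4,\sigma_1^4] = [t_2+0.9,\, T_2],
% the first three of length \delta_0 = 0.3 and the last one of
% length 0.1 \le \delta_0 .
```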

We write \(\tau _j\in [t_2,T_2]\), for \(j=1,\dots ,M\), for the initial time of each interval of length at most \(\delta _0\) on which we employ the above construction of Steps 1–4. Observe that we have \(t_2\le \tau _1<\dots<\tau _M<T_2\) and \(\tau _{j+1}-\tau _j\ge \delta _0\) for all \(j=1,\dots ,M-1\). Whenever \(t\notin \cup _{j=1}^M \left[ \tau _j,(\tau _j+\delta _0)\wedge T_2\right] \) we have \(u_2(t):=u_1(t+t_1-t_2)\). Therefore we shall end up with an admissible control \(u_2\) on \([t_2,T_2]\) that has the following structure:

$$\begin{aligned} u_2(t):=\left\{ \begin{array}{cl} \bar{u}_j(t) & \text { if } t\in [\tau _j,(\tau _j+k_0\rho )\wedge T_2] \\ u_1(t-k_0\rho +t_1-t_2) & \text { if } t\in (\tau _j+k_0\rho ,(\tau _j+\delta _0)\wedge T_2] \text { for } j=1,\dots ,M \\ u_1(t+t_1-t_2) & \text { otherwise,} \end{array}\right. \end{aligned}$$

for some control functions \(\bar{u}_j:[\tau _j,(\tau _j+k_0\rho )\wedge T_2]\rightarrow U\), \(j=1,\dots ,M\). Observe that the control \(u_2\) is constructed starting from \(u_1\) (shifted by the quantity \(t_1-t_2\)) and is modified on the intervals \([\tau _j,(\tau _j+\delta _0)\wedge T_2]\) according to Steps 1–4.

Now, we show that this construction is nonanticipative. Take two admissible processes \((x_1,u_1)\) and \((x_1',u_1')\) on \([t_1,T]\) such that \(x_1(t_1)=x_1'(t_1)=\xi _1\). Take any \(\sigma \in [0,T-t_1]\). It is not restrictive to consider only the following two situations:

  (i) \(t_2+\sigma \in [t_2,\tau _1]\) and \(t_2<\tau _1\);

  (ii) \(t_2+\sigma \in [\tau _1,(\tau _1+\delta _0)\wedge T_2]\),

since the analysis for all the other cases can be easily traced back to these ones.

Case (i) : \(t_2+\sigma \in [t_2,\tau _1]\) (\(t_2<\tau _1\)). In this case if \(u_1=u_1'\) a.e. on \([t_1,t_1+\sigma ]\), then clearly for a.e. \(t\in [t_2,(t_2+\sigma )\wedge T_2]\) we have \(u_2(t)=u_1(t+t_1-t_2)=u_1'(t+t_1-t_2)=u_2'(t)\).

Case (ii) : \(t_2+\sigma \in [\tau _1,(\tau _1+\delta _0)\wedge T_2]\). Suppose that \(u_1=u_1'\) a.e. on \([t_1,t_1+\sigma ]\). If \(t_2+\sigma \in [\tau _1,\tau _1+k_0\rho ]\) then we have

$$\begin{aligned} \left\{ \begin{array}{ll} u_2(t)=u_2'(t) \text { a.e. on } [t_2,\tau _1] \\ u_2(t)=\bar{u}_1(t) = u_2'(t) \text { a.e. } t\in [\tau _1,\tau _1+k_0\rho ] \end{array}\right. \end{aligned}$$
(36)

On the other hand, when \(t_2+\sigma \in [\tau _1+k_0\rho ,(\tau _1+\delta _0)\wedge T_2]\), in addition to (36) we have \(u_2(t)=u_1(t-k_0\rho +t_1-t_2)=u_1'(t-k_0\rho +t_1-t_2)=u_2'(t)\) a.e. on \([\tau _1+k_0\rho ,(t_2+\sigma )\wedge T_2]\). In both cases, we obtain that the mapping that provides the control \(u_2\) is nonanticipative.
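For clarity, we record the notion used in the case analysis above: a map between control spaces, say \(\beta :\mathcal{U}([t_0,T],\xi )\rightarrow \mathcal{V}([t_0,T],\zeta )\), is nonanticipative precisely when, for every \(\sigma \in [0,T-t_0]\),

```latex
u = u' \ \text{a.e. on } [t_0,\, t_0+\sigma]
\quad \Longrightarrow \quad
\beta(u) = \beta(u') \ \text{a.e. on } [t_0,\, t_0+\sigma].
% The composition of nonanticipative maps with a common time scale is
% again nonanticipative; the delicate point dealt with in the text is
% that the constructed maps shift the time variable by t_1 - t_2.
```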

Step 6. (Construction completion).

If \(T_2=T\) (which means that \(t_2\!-\!t_1\ge 0\)), then the construction of the admissible process \((x_2,u_2)\) is complete. Otherwise, if \(T_2<T\) (i.e. \(T_2=T+t_2\!-\!t_1\) and \(t_2<t_1\)), we have to extend the process \((x_2,u_2)\) on \([t_2,T_2]\) obtained above to the interval \([t_2,T]\). Observe that in this extension procedure we restrict attention to condition (11) alone, since estimate (12) involves only the restriction of the trajectory \(x_2(.)\) to the time interval \([t_2,T_2]\).

We consider the process \((w_2,{u_0}_{\mid [T_2,T]})\) on \([T_2,T]\) such that \(w_2(T_2)=x_2(T_2)\), where \(u_0\) is the control (on [0, T]) that we fixed at the beginning of the proof. Since \(w_2(T_2) \in \textrm{int}A\) and \(T-T_2=t_1\!-\!t_2<\delta _0\), if \(x_2(T_2)\in (\textrm{int}A)\smallsetminus \left( \partial A+\frac{\bar{\eta }}{2} \mathbb {B}\right) \), then from (16) we deduce that \(w_2(t)\in \textrm{int}A\) for all \(t\in [T_2,T]\). In this case we extend \((x_2,u_2)\) to \([t_2,T]\) obtaining \(x_2(t)=w_2(t)\) and \(u_2(t)=u_0(t)\) on \([T_2,T]\).

On the other hand, if \(x_2(T_2)\in (\textrm{int} A)\cap \left( \partial A + \frac{\bar{\eta }}{2}\mathbb {B}\right) \), then there exists a vector \(v_2 \in \textrm{co} f(T_2,x_2(T_2),U)\) which satisfies condition (15). Then employing exactly the argument displayed in Step 2 on the whole interval \([T_2,T]\) (in place of \([T_2,(T_2+k_0\rho )\wedge T]\)), we can find a process \((\bar{x}_2,\bar{u}_2)\) on \([T_2,T]\) such that \(\bar{x}_2(T_2)=x_2(T_2)\) and \(\bar{x}_2(t)\in \textrm{int}A\) for all \(t\in [T_2,T]\). Therefore, in this situation, the extension of \((x_2,u_2)\) will be obtained setting \((x_2,u_2)=(\bar{x}_2,\bar{u}_2)\) on \([T_2,T]\).

The nonanticipative property of our construction in the last interval \([T_2,T]\) (when \(T-T_2=t_1-t_2>0\)) follows from the fact that we have two possible situations: either \(x_2(T_2)\in \textrm{int}A\setminus \left( \partial A+\frac{\bar{\eta }}{2}\mathbb {B}\right) \) and we complete \(u_2\) with the (fixed) control \(u_0\), or \(x_2(T_2)\in \partial A +\frac{\bar{\eta }}{2}\mathbb {B}\) and then we extend \(u_2\) using the control \(\bar{u}\) provided by Step 2.

Finally, observe that if

$$\begin{aligned} (1+\eta (T))\exp \left( \int _0^T k_f(t)dt\right) \left( |t_1\!-\!t_2|+|\xi _1-\xi _2|\right) >\rho _0 \end{aligned}$$
(37)

then, to construct the admissible process \((x_2,u_2)\), we merely make use of the argument of Step 6 (repeating it at most a finite number of times). The analysis of Step 6 provides a state trajectory \(x_2(.)\) on \([t_2,T]\) satisfying \(x_2(t_2)=\xi _2\) and condition (11), but it gives no information on the \(L^\infty \) distance between the two trajectories \(x_1(.)\) and \(x_2(.)\). However, when the distance between the initial data is big enough and (37) is in force, we immediately deduce that

$$\begin{aligned} \Vert x_1(.+t_1\!-\!t_2)-x_2(.)\Vert _{L^\infty (t_2,T_2)}\le \frac{4}{\rho _0} c_0TK(1+\eta (T))\left( |t_1\!-\!t_2|+|\xi _1-\xi _2|\right) . \end{aligned}$$

Then, possibly adjusting the constant \(K_0\) we deduce in this case also the validity of (12). \(\square \)

5 Regularity Properties of the Lower and Upper Value Functions

Proposition 5.1

(Lipschitz continuity) Let \(A_1\subset \mathbb {R}^{n_1}\) and \(A_2\subset \mathbb {R}^{n_2}\) be nonempty closed sets. Suppose that assumptions (H1)–(H5), (BV) and (IPC) are satisfied. Then the lower value function \(V^\flat \) and the upper value function \(V^\sharp \) are locally Lipschitz continuous on \([0,T] \times A_1\times A_2\).

Proof

Fix a real number \(r_0>0\). Take any \((t_1,x_1=(\xi _1,\zeta _1))\), \((t_2,x_2=(\xi _2,\zeta _2)) \in [0,T]\times (A\cap r_0 \mathbb {B})\). Define \(T_1:=(T+t_1-t_2)\wedge T\), \(T_2:=(T+t_2-t_1)\wedge T\) and take \(R_0:=\exp \left( \int _0^{T}c_f(t)dt \right) (r_0+1)\).

Let \(\varepsilon >0\) be a given number. From the definition of the upper value \(V^\sharp \), we can find a nonanticipative strategy \(\beta _2 \in S_V(t_2,x_2)\) such that

$$\begin{aligned} V^\sharp (t_2,x_2) \le ~ \inf _{u\in \mathcal{U}(t_2,\xi _2)}J(t_2,x_2;u(\cdot ),\beta _2(u)(\cdot )) + \varepsilon . \end{aligned}$$
(38)

We consider the nonanticipative map \(\Psi : \mathcal{U}([t_1,T],\xi _1) \rightarrow \mathcal{U}([t_2,T],\xi _2)\) provided by Theorem 4.1 applied to the control system \(\dot{y} =f_1(t,y,u)\), which associates with any admissible process \((y_1, u_1)\) on \([t_1,T]\) such that \(y_1(t_1)=\xi _1\) an admissible process \((y_2,u_2=\Psi (u_1))\) such that \(y_2(t_2)=\xi _2\), \(y_2(t)\in \textrm{int}(A)\) for all \(t\in (t_2,T]\) and, for a constant \(K_0\) depending only on \(r_0\),

$$\begin{aligned} \Vert y_1(.+t_1\!-\!t_2)-y_2(.)\Vert _{L^\infty (t_2,T_2)} \le K_0(|\xi _1-\xi _2|+|t_1\!-\!t_2|). \end{aligned}$$
(39)

We also know that we can restrict our attention to the case in which

$$\begin{aligned} \begin{aligned} \rho _\xi&:=(1+\eta (T))\exp \left( \int _0^Tk_{f}(t)dt \right) \left( |\xi _1-\xi _2|+|t_1\!-\!t_2|\right) \le \rho _0;\\ \rho _\zeta&:=(1+\eta (T))\exp \left( \int _0^Tk_{f}(t)dt \right) \left( |\zeta _1-\zeta _2|+|t_1\!-\!t_2|\right) \le \rho _0 \end{aligned} \end{aligned}$$
(40)

for some \(\rho _0> 0\), and let \(\delta _0\) be such that \(\rho _\xi ,\rho _\zeta \le \delta _0/k_0\). We also have the information that the control \(u_2\) has the following structure:

$$\begin{aligned} u_2(t)= \left\{ \begin{array}{ll} \bar{u}_j(t) & \text { if } t\in [\tau _j,(\tau _j+k_0\rho _\xi )\wedge T_2] \\ u_1(t-k_0\rho _\xi +t_1\!-\!t_2) & \text { if } t\in (\tau _j+k_0\rho _\xi ,(\tau _j+\delta _0)\wedge T_2] \end{array} \right. \end{aligned}$$

for some control functions \(\bar{u}_j:[\tau _j,(\tau _j+k_0\rho _\xi )\wedge T_2]\rightarrow U\), \(j=1,\dots ,M \).

Theorem 4.1 also provides a nonanticipative map \(\Phi :\mathcal{V}([t_2,T],\zeta _2)\rightarrow \mathcal{V}([t_1,T],\zeta _1)\) (related to the control system \(\dot{z}=f_2(t,z,v)\)) such that, for any admissible process \((z_2,v_2)\) on \([t_2,T]\) with \(z_2(t_2)=\zeta _2\), we have an admissible process \((z_1,v_1=\Phi (v_2))\) on \([t_1,T]\) satisfying

$$\begin{aligned} \Vert z_2(.+t_2\!-\!t_1)-z_1(.)\Vert _{L^\infty (t_1,T_1)}\le K_0\left( |\zeta _1-\zeta _2|+|t_1\!-\!t_2| \right) \end{aligned}$$
(41)

and, again since (40) is in force, we have

$$\begin{aligned} v_1(t)= \left\{ \begin{array}{ll} \bar{v}_i(t) & \text { if } t\in [\sigma _i,(\sigma _i+k_0\rho _\zeta )\wedge T_1] \\ v_2(t-k_0\rho _\zeta +t_2\!-\!t_1) & \text { if } t\in (\sigma _i+k_0\rho _\zeta , (\sigma _i+\delta _0)\wedge T_1] \end{array} \right. \end{aligned}$$

for some \(\bar{v}_i:[\sigma _i,(\sigma _i+k_0\rho _\zeta )\wedge T_1]\rightarrow V\), \(i=1,\dots ,N \). Observe that the composition of the nonanticipative maps \(\Phi \), \(\beta _2\) and \(\Psi \) provides a nonanticipative strategy \(\beta _1:=\Phi \circ \beta _2\circ \Psi : \mathcal{U}([t_1,T],\xi _1) \rightarrow \mathcal{V}([t_1,T],\zeta _1)\). We emphasize that the strategies \(\Psi \) and \(\Phi \) provided by Theorem 4.1 are such that the composition \(\Phi \circ \beta _2\circ \Psi \,(=\beta _1)\) is also nonanticipative, and the situation when \(t_1\ne t_2\) does not generate an issue. Indeed, it is immediate to see that the map \(\beta _1\) is nonanticipative when \(t_2\le t_1\). So we restrict our attention to the case when \(t_1<t_2\) (that is, \(T_1:=T+t_1-t_2<T\)) and \(|t_1-t_2|\) is small enough. Take two admissible controls \(u_1,u_2\in \mathcal{U}([t_1,T];\xi _1)\) and any \(\sigma \in [0,T-t_1]\). If \(t_1+\sigma \in [t_1,T_1]\) and \(u_1=u_2\) a.e. on \([t_1,t_1+\sigma ]\), then it is immediate to see that \(\beta _1(u_1)=\beta _1(u_2)\) a.e. on \([t_1,t_1+\sigma ]\). Suppose now that \(t_1+\sigma \in (T_1,T]\). Then the trajectories \(\tilde{y}_1\) and \(\tilde{y}_2\) associated respectively with \(\beta _1(u_1)\) and \(\beta _1(u_2)\) are such that \(\tilde{y}_1(T_1)=\tilde{y}_2(T_1)\). Therefore, from Step 6 of the proof of Theorem 4.1, we know that on \([T_1,T]\) we use either a given (fixed) control \(u_0\) (when \(\tilde{y}_1(T_1)=\tilde{y}_2(T_1)\in (\textrm{int} A_1)\setminus \left( \partial A_1+\frac{\bar{\eta }}{2}\mathbb {B}\right) \)) or a particular control \(\bar{u}\) constructed in Step 2 of the proof of Theorem 4.1 (when \(\tilde{y}_1(T_1)=\tilde{y}_2(T_1)\in (\textrm{int} A_1)\cap \left( \partial A_1+\frac{\bar{\eta }}{2}\mathbb {B}\right) \)). In either case, we have \(\beta _1(u_1)=\beta _1(u_2)\) a.e. also on \([T_1,t_1+\sigma ]\).

From the definition of \(V^\sharp \) we have \(V^\sharp (t_1,x_1)\ge \inf _{u\in \mathcal{U}(t_1,\xi _1)} J(t_1,x_1;u(\cdot ), \beta _1(u)(\cdot )) - \varepsilon \) and, therefore, there exists a control \(\hat{u}_1\in \mathcal{U}(t_1,\xi _1)\) such that

$$\begin{aligned} V^\sharp (t_1,x_1)\ge J(t_1,x_1;\hat{u}_1(\cdot ), \beta _1(\hat{u}_1)(\cdot )) - \varepsilon . \end{aligned}$$
(42)

Write \((\hat{y}_1, \hat{u}_1)\) and \((\hat{z}_1, \beta _1(\hat{u}_1))\) for the associated admissible processes such that \(\hat{y}_1(t_1)=\xi _1\) and \(\hat{z}_1(t_1)=\zeta _1\). Consider also the admissible control \(\hat{u}_2=\Psi (\hat{u}_1)\in \mathcal{U}(t_2,\xi _2)\). From (38) we deduce that

$$\begin{aligned} V^\sharp (t_2,x_2)\le J(t_2,x_2;\hat{u}_2(\cdot ), \beta _2(\hat{u}_2)(\cdot )) + \varepsilon . \end{aligned}$$
(43)

Denote by \((\hat{y}_2, \hat{u}_2)\) and \((\hat{z}_2, \beta _2(\hat{u}_2))\) the corresponding admissible processes with \(\hat{x}_2(t_2):=(\hat{y}_2(t_2),\hat{z}_2(t_2)) =x_2:=(\xi _2,\zeta _2)\). Write \(L := L_1 + L_2\). In view of (42) and (43), it follows that

$$\begin{aligned} \begin{aligned}&V^\sharp (t_2,x_2) - V^\sharp (t_1,x_1) \le J(t_2,x_2;\hat{u}_2(\cdot ), \beta _2(\hat{u}_2)(\cdot )) - J(t_1,x_1; \hat{u}_1(\cdot ), \beta _1(\hat{u}_1)(\cdot )) + 2\varepsilon \\&=\int _{t_2}^T L(t,\hat{x}_2(t), \hat{u}_2(t), \beta _2(\hat{u}_2)(t))dt -\int _{t_1}^T L(t,\hat{x}_1(t), \hat{u}_1(t), \beta _1(\hat{u}_1)(t))dt \\&\qquad +g( \hat{x}_2(T)) -g( \hat{x}_1(T)) + 2\varepsilon \\&=\int _{t_2}^{T_2} L(t,\hat{x}_2(t), \hat{u}_2(t), \beta _2(\hat{u}_2)(t))dt +\int _{T_2}^{T} L(t,\hat{x}_2(t), \hat{u}_2(t), \beta _2(\hat{u}_2)(t))dt \\&\quad -\int _{t_2}^{T_2} L(t\!+\!t_1\!-\!t_2,\hat{x}_1(t\!+\!t_1\!-\!t_2), \hat{u}_1(t\!+\!t_1\!-\!t_2), \beta _1(\hat{u}_1)(t\!+\!t_1\!-\!t_2))dt \\&\quad -\int _{T_1}^{T} L(t,\hat{x}_1(t), \hat{u}_1(t), \beta _1(\hat{u}_1)(t))dt +g(\hat{x}_2(T)) -g(\hat{x}_1(T)) + 2\varepsilon . \end{aligned} \end{aligned}$$
(44)

From (39) (resp. (41)) which is valid for \(\hat{y}_1\) and \(\hat{y}_2\) (resp. \(\hat{z}_1\) and \(\hat{z}_2\)) we obtain

$$\begin{aligned} \begin{aligned} \Vert \hat{y}_1(.+t_1\!-\!t_2)-\hat{y}_2(.)\Vert _{L^\infty (t_2,T_2)} \le K_0 \left( |\xi _1-\xi _2|+|t_2\!-\!t_1| \right) . \\ (\text {resp. } \ \Vert \hat{z}_2(.+t_2\!-\!t_1)-\hat{z}_1(.)\Vert _{L^\infty (t_1,T_1)} \le K_0 \left( |\zeta _1-\zeta _2|+|t_2\!-\!t_1| \right) . ) \end{aligned} \end{aligned}$$
(45)

In particular we deduce that

$$\begin{aligned} |(\hat{y}_2(T),\hat{z}_2(T))-(\hat{y}_1(T),\hat{z}_1(T))| \le \sqrt{2}c_0|t_2\!-\!t_1| + \sqrt{2}K_0 \left( |\xi _1-\xi _2|+ |\zeta _1-\zeta _2| + |t_1\!-\!t_2|\right) . \end{aligned}$$
(46)

In addition, since \(|L|\le c_0\) along the reference trajectories, we have

$$\begin{aligned} \begin{aligned} \bigg |\int _{T_2}^T L(t,\hat{x}_2(t),\hat{u}_2(t), \beta _2(\hat{u}_2)(t))dt -\int _{T_1}^{T} L(t,\hat{x}_1(t), \hat{u}_1(t), \beta _1(\hat{u}_1)(t))dt \bigg | \le 2c_0 |t_2\!-\!t_1|. \end{aligned} \end{aligned}$$
(47)

It remains to provide an estimate of the term

$$\begin{aligned}&\Delta :=\bigg |\int _{t_2}^{T_2} L(t,\hat{x}_2(t), \hat{u}_2(t), \beta _2(\hat{u}_2)(t))dt \\&\qquad -\int _{t_2}^{T_2} L(t\!+\!t_1\!-\!t_2, \hat{x}_1(t\!+\!t_1\!-\!t_2),\hat{u}_1(t\!+\!t_1\!-\!t_2), \beta _1(\hat{u}_1)(t\!+\!t_1\!-\!t_2))dt \bigg | \\&\quad \le \bigg |\int _{t_2}^{T_2} \big [L(t,\hat{x}_2(t), \hat{u}_2(t), \beta _2(\hat{u}_2)(t))-L(t,\hat{x}_1(t\!+\!t_1\!-\!t_2), \hat{u}_1(t\!+\!t_1\!-\!t_2), \beta _1(\hat{u}_1)(t\!+\!t_1\!-\!t_2)) \big ] dt\bigg | \\&\qquad +\int _{t_2}^{T_2} \big [\eta (t\!+\!t_1\!-\!t_2)\vee \eta (t)-\eta (t\!+\!t_1\!-\!t_2)\wedge \eta (t)\big ]dt \\&\quad \le \bigg |\int _{t_2}^{T_2} \big [L(t,\hat{x}_2(t), \hat{u}_2(t), \beta _2(\hat{u}_2)(t)) -L(t,\hat{x}_2(t), \hat{u}_1(t\!+\!t_1\!-\!t_2),\beta _1(\hat{u}_1)(t\!+\!t_1\!-\!t_2)) \big ] dt\bigg | \\&\qquad + \int _{t_2}^{T_2} |L(t,\hat{x}_2(t), \hat{u}_1(t\!+\!t_1\!-\!t_2), \beta _1(\hat{u}_1)(t\!+\!t_1\!-\!t_2)) \\&\qquad - L(t,\hat{x}_1(t\!+\!t_1\!-\!t_2), \hat{u}_1(t\!+\!t_1\!-\!t_2), \beta _1(\hat{u}_1)(t\!+\!t_1\!-\!t_2))|dt+\eta (T)|t_1\!-\!t_2| \\&\quad \le \bigg |\int _{t_2}^{T_2} \big [ L(t,\hat{x}_2(t), \Psi (\hat{u}_1)(t), \beta _2(\hat{u}_2)(t)) -L(t,\hat{x}_2(t), \hat{u}_1(t\!+\!t_1\!-\!t_2),\beta _2(\hat{u}_2)(t)) \big ] dt\bigg | \\&\qquad + \bigg |\int _{t_2}^{T_2} \big [ L(t,\hat{x}_2(t), \hat{u}_1(t\!+\!t_1\!-\!t_2),\beta _2(\hat{u}_2)(t)) - L(t,\hat{x}_2(t), \hat{u}_1(t\!+\!t_1\!-\!t_2),\Phi (\beta _2(\hat{u}_2))(t\!+\!t_1\!-\!t_2))\big ]dt\bigg | \\&\qquad + \int _{t_2}^{T_2} k_L(t) |(\hat{y}_2(t),\hat{z}_2(t)) -(\hat{y}_2(t\!+\!t_1\!-\!t_2),\hat{z}_2(t\!+\!t_1\!-\!t_2))|dt + \eta (T)|t_1\!-\!t_2| \\&\quad \le \eta (T)|t_1\!-\!t_2| +\sqrt{2}K_0K_L \left( |\xi _1-\xi _2| + |\zeta _1-\zeta _2| + |t_1\!-\!t_2| \right) \\&\qquad +\sum _{j=1}^M \bigg | \int _{\tau _j}^{(\tau _j+\delta _0)\wedge T_2} L_1(t,\hat{x}_2(t),\Psi (\hat{u}_1)(t)) dt - \int _{\tau _j}^{(\tau _j+\delta _0)\wedge T_2} L_1(t,\hat{x}_2(t),\hat{u}_1(t\!+\!t_1\!-\!t_2))dt 
\bigg | \\&\qquad +\sum _{i=1}^N \bigg | \int _{\sigma _i}^{(\sigma _i+\delta _0)\wedge T_2} L_2(t,\hat{x}_2(t),\beta _2(\hat{u}_2)(t)) dt - \int _{\sigma _i}^{(\sigma _i+\delta _0)\wedge T_2} L_2(t,\hat{x}_2(t),\Phi (\beta _2(\hat{u}_2))(t\!+\!t_1\!-\!t_2))dt \bigg |. \end{aligned}$$

where \(K_L:=\int _0^T k_L(t)dt\). Indeed, for each \(j=1,\dots ,M\), we have

$$\begin{aligned}&\bigg |\int _{\tau _j}^{(\tau _j+\delta _0)\wedge T_2} \big [ L_1(t,\hat{x}_2(t),\Psi (\hat{u}_1)(t)) - L_1(t,\hat{x}_2(t),\hat{u}_1(t\!+\!t_1\!-\!t_2)) \big ]dt\bigg | \\&\le \bigg |\int _{\tau _j}^{\tau _j+\delta _0-k_0\rho _\xi } L_1(t+k_0\rho _\xi ,\hat{x}_2(t+k_0\rho _\xi ),\hat{u}_1(t\!+\!t_1\!-\!t_2))dt \\&\qquad -\int _{\tau _j+k_0\rho _\xi }^{\tau _j+\delta _0} L_1(t,\hat{x}_2(t),\hat{u}_1(t\!+\!t_1\!-\!t_2)) dt\bigg | +2c_0k_0\rho _\xi \\&\le \int _{\tau _j}^{\tau _j+\delta _0-k_0\rho _\xi } \big | L_1(t+k_0\rho _\xi ,\hat{x}_2(t+k_0\rho _\xi ),\hat{u}_1(t\!+\!t_1\!-\!t_2)) -L_1(t,\hat{x}_2(t),\hat{u}_1(t\!+\!t_1\!-\!t_2)) \big |dt \\ &+\bigg | \int _{\tau _j}^{\tau _j+\delta _0-k_0\rho _\xi } L_1(t,\hat{x}_2(t),\hat{u}_1(t\!+\!t_1\!-\!t_2)) dt -\int _{\tau _j+k_0\rho _\xi }^{\tau _j+\delta _0} L_1(t,\hat{x}_2(t),\hat{u}_1(t\!+\!t_1\!-\!t_2))dt \bigg | \\&\quad +2c_0k_0\rho _\xi \\&\le 4c_0k_0\rho _\xi + \sqrt{2}c_0 K_L k_0 \rho _\xi + \eta (T)k_0\rho _\xi \end{aligned}$$

and

$$\begin{aligned}&\bigg |\int _{\sigma _i}^{(\sigma _i+\delta _0)\wedge T_2} \big [ L_2(t,\hat{x}_2(t),\beta _2(\hat{u}_2)(t)) - L_2(t,\hat{x}_2(t),\Phi (\beta _2(\hat{u}_2))(t\!+\!t_1\!-\!t_2)) \big ]dt\bigg | \\&\le \bigg |\int _{\sigma _i+k_0\rho _\zeta }^{\sigma _i+\delta _0} L_2(t,\hat{x}_2(t),\beta _2(\hat{u}_2)(t)) - \int _{\sigma _i}^{\sigma _i+\delta _0-k_0\rho _\zeta } L_2(t+k_0\rho _\zeta ,\hat{x}_2(t+k_0\rho _\zeta ),\beta _2(\hat{u}_2)(t)) dt\bigg | \\&\quad +2c_0k_0\rho _\zeta \\&\le \bigg |\int _{\sigma _i+k_0\rho _\zeta }^{\sigma _i+\delta _0} L_2(t,\hat{x}_2(t),\beta _2(\hat{u}_2)(t))dt -\int _{\sigma _i}^{\sigma _i+\delta _0-k_0\rho _\zeta } L_2(t,\hat{x}_2(t),\beta _2(\hat{u}_2)(t))dt\bigg | \\&\quad +2c_0k_0\rho _\zeta \\&+ \int _{\sigma _i}^{\sigma _i+\delta _0-k_0\rho _\zeta } \big | L_2(t,\hat{x}_2(t),\beta _2(\hat{u}_2)(t))- L_2(t+k_0\rho _\zeta ,\hat{x}_2(t+k_0\rho _\zeta ),\beta _2(\hat{u}_2)(t))\big | dt \\&\le 4c_0k_0\rho _\zeta + \sqrt{2}c_0 K_L k_0 \rho _\zeta + \eta (T)k_0\rho _\zeta . \end{aligned}$$

Using (40), we finally obtain the estimate:

$$\begin{aligned} \Delta \le \Big (\sqrt{2}K_0K_L +2M_0k_0\big (4c_0+\sqrt{2}c_0K_L+\eta (T)\big )(1+\eta (T))K + \eta (T)\Big ) \\ \times \left( |\xi _1-\xi _2| + |\zeta _1-\zeta _2| + |t_1\!-\!t_2| \right) \end{aligned}$$

with \(M_0\ge M\vee N\) and \(K\ge \int _0^T k_{f}(t)dt\).

Exchanging the roles of \(V^\flat (t_1,x_1)\) and \(V^\flat (t_2,x_2)\) in the inequality above, we obtain

$$\begin{aligned} |V^\flat (t_1,x_1) - V^\flat (t_2,x_2)|~\le ~ K^\flat (|t_1\!-\!t_2| + |x_1-x_2|)\ , \end{aligned}$$

for some constant \(K^\flat >0\), which depends only on the data of the problem, confirming the proposition statement. \(\square \)

6 Solutions of the Hamilton–Jacobi–Isaacs Equations

Write \( L := L_1 + L_2.\)

Proposition 6.1

(Dynamic programming principle) Let \(A_1\subset \mathbb {R}^{n_1}\) and \(A_2\subset \mathbb {R}^{n_2}\) be closed nonempty sets. Assume (H1)–(H4), (BV) and (IPC). For any \((t_0,x_0=(y_0,z_0))\in [0,T]\times A_1\times A_2\) and for all \(\sigma \in (0,T-t_0]\) we have

$$\begin{aligned} \begin{array}{c} \displaystyle V^\flat (t_0,x_0)\, = \,\inf _{\alpha \in S_U(t_0,x_0)}\sup _{v\in \mathcal{V}(t_0,z_0)} \Big \{\int _{t_0}^{t_0+\sigma } L(t, x[t_0,x_0;\alpha (v),v](t),\alpha (v)(t),v(t) ) \ dt + \\ \displaystyle + V^\flat \big (t_0+\sigma ,x[t_0,x_0;\alpha (v),v](t_0+\sigma )\big ) \Big \} \ , \end{array} \end{aligned}$$
(48)

and

$$\begin{aligned} \begin{array}{c} \displaystyle V^\sharp (t_0,x_0)\, = \,\sup _{\beta \in S_V(t_0,x_0)}\inf _{u\in \mathcal{U}(t_0,y_0)} \Big \{\int _{t_0}^{t_0+\sigma } L(t, x[t_0,x_0;u,\beta (u)](t),u(t),\beta (u)(t) ) \ dt + \\ \displaystyle + V^\sharp \big (t_0+\sigma ,x[t_0,x_0;u,\beta (u)](t_0+\sigma )\big ) \Big \} \ . \end{array} \end{aligned}$$
(49)

Proposition 6.1 can be proved by adopting standard arguments already employed in the state constraint-free case (cf. [5, 22]); its proof is therefore omitted.
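Although the proof is standard, the mechanics of the dynamic programming principle (48) are easy to visualize in a discrete-time toy model. The sketch below is ours and is not the paper's construction: it assumes one-dimensional decoupled dynamics, finite control sets, an Isaacs-type min–max step for the lower value, and purely illustrative names (`DT`, `N`, `U`, `V`, `g`, `L`). With \(\sigma \) equal to one time step, the dynamic programming identity reduces to the backward recursion itself.

```python
from functools import lru_cache

# Discrete-time sketch of the dynamic programming principle (48), under
# simplifying assumptions NOT taken from the paper: dynamics y' = u, z' = v
# with u, v in {-1, +1}, time step DT, running cost L(y, z) = |y - z|
# (playing the role of L1 + L2), terminal cost g(y, z) = (y - z)**2, and an
# Isaacs-type min-max resolution of the lower value at each step.
DT, N = 0.25, 8          # time step and number of steps (T = N * DT)
U = V = (-1.0, 1.0)      # finite control sets for the two players

def g(y, z):             # terminal cost
    return (y - z) ** 2

def L(y, z):             # running cost (state-dependent, for simplicity)
    return abs(y - z)

@lru_cache(maxsize=None)
def value(k, y, z):
    """Lower value at step k and state (y, z), by backward induction."""
    if k == N:
        return g(y, z)
    return min(max(L(y, z) * DT + value(k + 1, y + u * DT, z + v * DT)
                   for v in V)
               for u in U)

# Discrete analogue of (48) with sigma = one time step: the value at the
# initial state equals the optimized one-step cost plus the value at k = 1.
y0, z0 = 0.0, 1.0
lhs = value(0, y0, z0)
rhs = min(max(L(y0, z0) * DT + value(1, y0 + u * DT, z0 + v * DT)
              for v in V)
          for u in U)
assert abs(lhs - rhs) < 1e-12
```

The assertion checks precisely the discrete analogue of (48): optimizing the running cost over the first step and then continuing with the value function reproduces the value at the initial data.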

Theorem 6.2

Let \(A_1\subset \mathbb {R}^{n_1}\) and \(A_2\subset \mathbb {R}^{n_2}\) be closed nonempty sets. Assume that conditions (H1)–(H5), (BV) and (IPC) are satisfied. Then, the lower value function \(V^\flat \) and the upper value function \(V^\sharp \) are viscosity solutions on \([0,T)\times A_1\times A_2\) of the HJI equation (7).

Proof

We show here only that \(V^\flat \) is a viscosity solution of the HJI equation (7). The proof for \(V^\sharp \) is similar so we omit it.

  1. (i)

    Recall that \(n=n_1+n_2\). Take any \((t_0,x_0=(y_0,z_0))\in [0,T)\times A_1\times A_2\) and \(\varphi \in C^1(\mathbb {R}\times \mathbb {R}^{n},\mathbb {R})\) such that \(V^\flat -\varphi \) has a local minimum at \((t_0,(y_0,z_0))\) (relative to \([0,T]\times A_1\times A_2\)). We can assume that \(\varphi (t_0,(y_0,z_0))=V^\flat (t_0,(y_0,z_0))\), and that there exists \(r>0\) such that \(V^\flat (t,(y,z))\ge \varphi (t,(y,z))\) for all \((t,(y,z))\in \left( (t_0,(y_0,z_0)) + r \mathbb {B}\right) \cap \left( [0,T]\times A_1\times A_2\right) \). Suppose, by contradiction, that there exists \(\theta >0\) such that

    $$\begin{aligned} \left( H_{in}\right) ^*\left( t_0,(y_0,z_0),\nabla _{y,z}\varphi (t_0,(y_0,z_0))\right) -\nabla _t \varphi (t_0,(y_0,z_0))\le -\theta . \end{aligned}$$

    From the definition of the upper semicontinuous envelope, we have

    $$\begin{aligned} \begin{aligned} \left( H_{in}\right) ^*&(t_0,(y_0,z_0),\nabla _{y,z}\varphi (t_0,(y_0,z_0))) \\&\ge \limsup \limits _{y\xrightarrow []{\textrm{int}A_1}y_0} H_{in}(t_0,(y,z_0),\nabla _{y,z}\varphi (t_0,(y,z_0))) \\&=\limsup \limits _{y\xrightarrow []{\textrm{int}A_1}y_0} \inf _{(e_2,\ell _2)\in G_2(t_0,(y,z_0))} \sup _{(e_1,\ell _1)\in G_1(t_0,(y,z_0))} [- e_1 \cdot p_y - e_2 \cdot p_z - \ell _1 - \ell _2 ] \\&= \inf _{(e_2,\ell _2)\in G_2(t_0,(y_0,z_0))} \sup _{(e_1,\ell _1)\in Q_1(t_0,(y_0,z_0))} [- e_1 \cdot p_y - e_2 \cdot p_z - \ell _1 - \ell _2 ]. \end{aligned} \end{aligned}$$
    (50)

    where \((p_y,p_z):=\nabla _{y,z} \varphi (t_0,(y_0,z_0))\). Then we can select \((\tilde{e}_2,\tilde{\ell }_2)\in (\textrm{co} \ f_2,L_2)(t_0^+,(y_0,z_0),V)\) such that \(\tilde{e}_2 \in \textrm{int}\, T_{A_2}(z_0)\) and

    $$\begin{aligned} \sup _{(e_1,\ell _1)\in Q_1(t_0,(y_0,z_0))} [- e_1 \cdot p_y - \tilde{e}_2 \cdot p_z - \ell _1 - \tilde{\ell }_2 ] -\nabla _t \varphi (t_0,(y_0,z_0)) \le -\theta . \end{aligned}$$

    Using the stability properties of the interior of the Clarke tangent cone and the arguments of the proof of Theorem 4.1 (see Remark 4.2), we can find an admissible control \(\tilde{v}\in \mathcal{V}(t_0,z_0)\) and \(\sigma _0\in (0,T-t_0)\) such that, for every strategy \(\alpha \in S_U(t_0,x_0)\), we have

    $$\begin{aligned} \int _{t_0}^{t_0+\sigma _0} \Big [ \mathcal {H} (s,\tilde{x}(s), \nabla _x\varphi (s,\tilde{x}(s)), \alpha (\tilde{v}) (s), \tilde{v} (s) ) - \nabla _t \varphi (s,\tilde{x}(s)) \Big ] \ ds \le - \frac{\theta }{2} \ , \end{aligned}$$

    where \(\tilde{x}(s):=x[t_0,x_0; \alpha (\tilde{v}) , \tilde{v}](s)\). Now, applying the Dynamic Programming Principle and standard arguments (cf. the proof of [12, Theorem 4.3]) we arrive at a contradiction.

  2. (ii)

    Let \((t_0,x_0=(y_0,z_0))\) be a local maximum (relative to \([0,T]\times A_1\times A_2\)) for \(V^\flat - \varphi \), where \(\varphi \in C^1(\mathbb {R}\times \mathbb {R}^n,\mathbb {R})\), and suppose there exists \(r>0\) such that \(V^\flat (t,(y,z))\le \varphi (t,(y,z))\) for all \((t,(y,z))\in \left( (t_0,(y_0,z_0)) + r \mathbb {B}\right) \cap \left( [0,T]\times A_1\times A_2\right) \). Suppose, by contradiction, that there exists \(\theta >0\) such that

    $$\begin{aligned} \left( H_{in}\right) _*(t_0,(y_0,z_0),\nabla _{y,z}\varphi (t_0,(y_0,z_0)))-\nabla _t \varphi (t_0,(y_0,z_0))\ge \theta . \end{aligned}$$

    Then from the definition of the lower semicontinuous envelope

    $$\begin{aligned} \begin{aligned}&\left( H_{in}\right) _*(t_0,(y_0,z_0),\nabla _{y,z}\varphi (t_0,(y_0,z_0))) \\&\quad \le \liminf \limits _{z\xrightarrow []{\textrm{int}A_2}z_0} H_{in}(t_0,(y_0,z),\nabla _{y,z}\varphi (t_0,(y_0,z))) \\ &\quad = \liminf \limits _{z\xrightarrow []{\textrm{int}A_2}z_0} \inf _{(e_2,\ell _2)\in G_2(t_0,(y_0,z))} \sup _{(e_1,\ell _1)\in G_1(t_0,(y_0,z))} [- e_1 \cdot p_y - e_2 \cdot p_z - \ell _1 - \ell _2 ] \\ &\quad = \inf _{(e_2,\ell _2)\in Q_2(t_0,(y_0,z_0))} \sup _{(e_1,\ell _1)\in G_1(t_0,(y_0,z_0))} [- e_1 \cdot p_y - e_2 \cdot p_z - \ell _1 - \ell _2 ] \end{aligned} \end{aligned}$$

    where \((p_y,p_z):=\nabla _{y,z} \varphi (t_0,(y_0,z_0))\). It follows that we can choose \((\tilde{e}_1,\tilde{\ell }_1)\in (\textrm{co}\ f_1,L_1)(t_0^+,(y_0,z_0),U)\) such that \(\tilde{e}_1 \in \textrm{int} T_{A_1}(y_0)\) and

    $$\begin{aligned} \inf _{(e_2,\ell _2)\in Q_2(t_0,(y_0,z_0))} [- \tilde{e}_1 \cdot p_y - e_2 \cdot p_z - \tilde{\ell }_1 - \ell _2 ] -\nabla _t \varphi (t_0,(y_0,z_0)) \ge \theta . \end{aligned}$$

    Remark 4.2 tells us that, associated with the initial data \((t_0,y_0)\) and the vector \(\tilde{e}_1\in \textrm{co}f_1(t_0^+,y_0,U)\cap \textrm{int}\,T_{A_1}(y_0)\), we can construct an admissible control \(\bar{u}_0\in \mathcal{U}(t_0,y_0)\) such that an estimate like the one in formula (13) holds. Consider the (constant) strategy \(\alpha _0\in \mathcal{S}_{U}(t_0,x_0)\) such that \(\alpha _0(v):=\bar{u}_0\) for all \(v\in \mathcal{V}(t_0,z_0)\). Then \(\alpha _0\) is clearly nonanticipative. Moreover, we can find \(\sigma _0\in (0,T-t_0)\) such that, for all \(v\in \mathcal{V}(t_0,z_0)\),

    $$\begin{aligned} \int _{t_0}^{t_0+\sigma _0} \Big [ \mathcal {H} (s,\tilde{x}(s), \nabla _x\varphi (s,\tilde{x}(s)), \alpha _0 ( v) (s), v (s) ) - \nabla _t \varphi (s,\tilde{x}(s)) \Big ] \ ds \ge \theta /2, \end{aligned}$$

    where \(\tilde{x}(s):=x[t_0,x_0; \alpha _0 ( v) , v](s)\). Invoking again the Dynamic Programming Principle and known arguments (cf. the proof of [12, Theorem 4.3]) we arrive at a contradiction.

\(\square \)

Under the hypotheses (H1)–(H5), (BV) and (IPC), Proposition 5.1 ensures that the lower value function \(V^\flat \) and the upper value function \(V^\sharp \) are locally Lipschitz continuous on \([0,T]\times A_1\times A_2\). Moreover, Theorem 6.2 establishes that \(V^\flat \) and \(V^\sharp \) are viscosity solutions of (7) on \([0,T)\times A_1\times A_2\). Therefore, we can summarize these results in the following theorem.

Theorem 6.3

Let \(A_1\subset \mathbb {R}^{n_1}\) and \(A_2\subset \mathbb {R}^{n_2}\) be nonempty closed sets. Assume that conditions (H1)–(H5), (BV) and (IPC) are satisfied. Then

  1. (i)

    the lower value function \(V^\flat \) is locally Lipschitz continuous and is a viscosity solution of (7);

  2. (ii)

    the upper value function \(V^\sharp \) is locally Lipschitz continuous and is a viscosity solution of (7).

Imposing some additional assumptions, we obtain the existence and uniqueness of the value for the reference differential game.

Theorem 6.4

Assume that (H1)–(H5), (BV) and (IPC) hold true. Suppose in addition that

  1. (i)

    \(A_1\subset \mathbb {R}^{n_1}\), \(A_2\subset \mathbb {R}^{n_2}\), \(U\subset \mathbb {R}^{m_1}\) and \(V \subset \mathbb {R}^{m_2}\) are compact nonempty sets;

  2. (ii)

    \(f_1\) is locally Lipschitz continuous w.r.t. \((t,y)\) and continuous in \(u\); \(L_1\) is continuous;

  3. (iii)

    \(f_2\) is locally Lipschitz continuous w.r.t. \((t,z)\) and continuous in \(v\); \(L_2\) is continuous.

Then \(V:=V^\flat = V^\sharp \) is the unique viscosity solution on \([0, T )\times A_1\times A_2\) of (7).

Proof of Theorem 6.4

In this case, since Theorem 6.3 holds, the lower value function \(V^\flat \) is continuous on \([0,T)\times A_1\times A_2\) and is a subsolution and a supersolution of (7). Observe that condition (iv) of Theorem 7.1 (below) is satisfied as a consequence of the validity of assumption (IPC). Also, from this theorem, we deduce that for any viscosity solution W of (7), we have \(W\le V^\flat \) and \(V^\flat \le W\) on \([0,T )\times A_1\times A_2\). This proves the uniqueness of the viscosity solution of (7). Since \(V^\sharp \) is a viscosity solution of (7), we obtain that \(V^\flat =V^\sharp \) on \([0, T )\times A_1\times A_2\) and \(V:=V^\flat =V^\sharp \) is the unique viscosity solution of (7). \(\square \)

7 A Comparison Result

Theorem 7.1

Assume that conditions (H1)–(H5) and (BV) are satisfied. Suppose also that

  1. (i)

    \(A_1\subset \mathbb {R}^{n_1}\), \(A_2\subset \mathbb {R}^{n_2}\), \(U\subset \mathbb {R}^{m_1}\) and \(V \subset \mathbb {R}^{m_2}\) are compact nonempty sets;

  2. (ii)

    \(f_1\) is locally Lipschitz continuous w.r.t. \((t,y)\) and continuous in \(u\); \(L_1\) is continuous;

  3. (iii)

    \(f_2\) is locally Lipschitz continuous w.r.t. \((t,z)\) and continuous in \(v\); \(L_2\) is continuous;

  4. (iv)

    for all \(y \in \partial A_1\) and \(z \in \partial A_2\),

    $$\begin{aligned} \textrm{int}\, T_{A_1}(y)\;\not =\; \emptyset , \quad \textrm{int}\, T_{A_2}(z)\;\not =\; \emptyset . \end{aligned}$$

Consider two continuous functions \(W_1,W_2 : [0,T]\times A_1\times A_2\rightarrow \mathbb {R}\) satisfying the following properties

  1. (v)

    \(W_1(t,(y,z))\) is a viscosity subsolution of the (HJI) equation (7);

  2. (vi)

    \(W_2(t,(y,z))\) is a viscosity supersolution of the (HJI) equation (7);

  3. (vii)

    \(W_1(T,.)=W_2(T,.) \; (=g(.))\) on \(A_1\times A_2\).

Then we obtain:

$$\begin{aligned} W_1(t,(y,z))\le W_2(t,(y,z)), \quad \forall (t,y,z)\in [0,T]\times A_1\times A_2 \ . \end{aligned}$$

Proof

Step 1. Suppose that

$$\begin{aligned} \max _{(t,(y,z))\in [0,T]\times A_1\times A_2} \Big \{ W_1(t,(y,z))-W_2(t,(y,z)) \Big \}>0, \end{aligned}$$

then there exists \((t_0,(y_0,z_0))\in [0,T)\times A_1\times A_2\) such that

$$\begin{aligned} W_1(t_0,(y_0,z_0))-W_2(t_0,(y_0,z_0))>0. \end{aligned}$$
(51)

There exists a constant \(M\le 0\) such that \(W_1,W_2\ge M\) on \([0,T]\times A_1\times A_2\) and \(L-1\ge M\) on \([0,T]\times A_1\times A_2\times U\times V\). Set \(c:=1-MT\ (>0)\) and define the functions \(\tilde{W}_1\), \(\tilde{W}_2\) and \(\tilde{H}_{in}\) as follows (obtained by a Kruzkov-type transform):

$$\begin{aligned} \begin{aligned}&\tilde{W}_i(s,(y,z)):=\frac{1}{1+s} \log \Big (W_i(T-s,(y,z))+M(T-s)-M+c \Big ),\qquad i=1,2 \\&\tilde{H}_{in}(s,(y,z),w,p_t,p_y,p_z):=\inf _{(e_2,\ell _2)\in G_2(T-s,(y,z))} \sup _{(e_1,\ell _1)\in G_1(T-s,(y,z))} \Big [(1+s)p_t \\&\quad -(1+s) p_y\cdot e_1 -(1+s)p_z\cdot e_2 - \big ( \ell _1 + \ell _2 - M\big )/e^{(1+s)w} \Big ]. \end{aligned} \end{aligned}$$
(52)

Set \(s_0:=T-t_0\). Observe that \(\tilde{W}_1(0,(y,z))=\tilde{W}_2(0,(y,z))=\tilde{g}(y,z)\), with \(\tilde{g}(y,z):=\log (g(y,z)-M+1)\), and

$$\begin{aligned} \max _{(s,(y,z))\in [0,T]\times A_1\times A_2} \left\{ \tilde{W}_1(s,(y,z)) - \tilde{W}_2(s,(y,z)) \right\} \ge \tilde{W}_1(s_0,(y_0,z_0)) - \tilde{W}_2(s_0,(y_0,z_0)) >0. \end{aligned}$$
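These definitions are well posed: as a brief check (ours, using only the stated bounds \(W_i\ge M\), \(M\le 0\) and \(c=1-MT\)), the argument of the logarithm in (52) never drops below 1,

```latex
\begin{aligned}
W_i(T-s,(y,z)) + M(T-s) - M + c
  &\ge M + M(T-s) - M + (1 - MT) \\
  &= 1 - Ms \;\ge\; 1, \qquad s\in[0,T],
\end{aligned}
```

so each \(\tilde{W}_i\) is real-valued and nonnegative.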

Since \(\tilde{W}_1,\tilde{W}_2\) are continuous on the compact set \([0,T]\times A_1\times A_2\) and \(\tilde{W}_1(0,(y,z))=\tilde{W}_2(0,(y,z))\), we can find a point \((\bar{s},(\bar{y},\bar{z}))\in (0,T]\times A_1\times A_2\) such that

$$\begin{aligned} \max _{(s,(y,z))\in [0,T]\times A_1\times A_2} \left\{ \tilde{W}_1(s,(y,z)) - \tilde{W}_2(s,(y,z)) \right\} = \tilde{W}_1(\bar{s},(\bar{y},\bar{z})) - \tilde{W}_2(\bar{s},(\bar{y},\bar{z}))=:\alpha (>0). \end{aligned}$$

Lemma 7.2

Suppose that the assumptions of Theorem 7.1 are satisfied. Then \(\tilde{W}_1\) and \(\tilde{W}_2\) are, respectively, a viscosity subsolution and a viscosity supersolution on \((0,T]\times A_1\times A_2\) of

$$\begin{aligned} \left\{ \begin{aligned} W(s,(y,z)) + \tilde{H}_{in}\Big (s,(y,z),W(s,(y,z)),\partial _{s,y,z} W(s,(y,z)) \Big )&= 0 \qquad \textrm{on } \; (0,T)\times (A_1\times A_2) \\ W(0,(y,z))&= \tilde{g}(y,z) \qquad \textrm{on } \; A_1\times A_2 . \end{aligned} \right. \end{aligned}$$
(53)

Proof

Assume that \((s_0,(y_0,z_0)) \in (0,T]\times A_1\times A_2\) is a local minimizer for \(\tilde{W}_2-\psi \), where \(\psi \in \mathcal{C}^1 \), and that \(\tilde{W}_2(s,(y,z))\ge \psi (s,(y,z))\) for all \((s,(y,z))\) in a neighbourhood of \((s_0,(y_0,z_0))\). Then \((t_0:=T-s_0,(y_0,z_0))\) is a local minimizer for \(W_2-\varphi \), where

$$\begin{aligned} \varphi (t,(y,z)):=e^{(1+T-t)\psi (T-t,(y,z))}-Mt+M-c. \end{aligned}$$
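A short computation (added here for the reader's convenience; it is implicit in the text) confirms that this \(\varphi \) exactly undoes the Kruzkov-type transform (52): evaluating at \(t=T-s\),

```latex
\begin{aligned}
\varphi(T-s,(y,z)) + M(T-s) - M + c
  &= \Big(e^{(1+s)\psi(s,(y,z))} - M(T-s) + M - c\Big) + M(T-s) - M + c \\
  &= e^{(1+s)\psi(s,(y,z))},
\end{aligned}
```

so \(\frac{1}{1+s}\log \big (\varphi (T-s,(y,z))+M(T-s)-M+c\big )=\psi (s,(y,z))\), which is why minimizers of \(\tilde{W}_2-\psi \) correspond to minimizers of \(W_2-\varphi \).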

Since \(W_2\) is a supersolution of (7), we have

$$\begin{aligned} H_{in}(T-s_0,(y_0,z_0),p_0)-\nabla _t \varphi (T-s_0,(y_0,z_0))\ge 0, \end{aligned}$$

where \(p_0:=\nabla _{y,z}\varphi (T-s_0,(y_0,z_0)))\), that is

$$\begin{aligned} \Big \{&\inf _{(e_2,\ell _2)\in G_2(T-s_0,(y_0,z_0))} \big [-p_z\cdot e_2 - \ell _2 \big ] \\ +&\sup _{(e_1,\ell _1)\in G_1(T-s_0,(y_0,z_0))} \big [-p_y\cdot e_1 -\ell _1 \big ] \Big \} -\nabla _t \varphi (T-s_0,(y_0,z_0))\ge 0 \ . \end{aligned}$$

We deduce, writing \(q_0:=\nabla _{y,z}\psi (s_0,(y_0,z_0))\), that

$$\begin{aligned}&(1+s_0)e^{(1+s_0)\psi (s_0,(y_0,z_0))}\times \Big \{ \inf _{(e_2,\ell _2)\in G_2(T-s_0,(y_0,z_0))} \big [-q_z\cdot e_2 - \ell _2 \big ] \\&+ \sup _{(e_1,\ell _1)\in G_1(T-s_0,(y_0,z_0))} \big [-q_y\cdot e_1 -\ell _1 \big ] \Big \} \\&+e^{(1+s_0)\psi (s_0,(y_0,z_0))}[\psi (s_0,(y_0,z_0)) +(1+s_0)\nabla _t \psi (s_0,(y_0,z_0))]+M\ge 0. \end{aligned}$$

It follows that

$$\begin{aligned}&\inf _{(e_2,\ell _2)\in G_2(T-s_0,(y_0,z_0))} \sup _{(e_1,\ell _1)\in G_1(T-s_0,(y_0,z_0))} \big [ -(1+s_0)q_y\cdot e_1 \\&\quad -(1+s_0)q_z\cdot e_2 - (\ell _1 + \ell _2 -M)/e^{(1+s_0)\psi (s_0,(y_0,z_0))}\big ] \\&\quad \quad +\psi (s_0,(y_0,z_0))+(1+s_0)\nabla _t \psi (s_0,(y_0,z_0))\ge 0 \end{aligned}$$

and so

$$\begin{aligned} \tilde{H}_{in}(s_0,(y_0,z_0),\tilde{W}_2(s_0,(y_0,z_0)),\nabla _{s,y,z}\psi (s_0,(y_0,z_0)))\ge 0. \end{aligned}$$

This confirms that \(\tilde{W}_2\) is a viscosity supersolution of (53). Similar arguments show that \(\tilde{W}_1\) is a viscosity subsolution of (53). This concludes the proof of Lemma 7.2. \(\square \)

We continue the proof of the theorem by constructing suitable test functions.

Step 2. For \(s,t\in [0,T], \ y,y'\in A_1,\ z,z'\in A_2\) we set

$$\begin{aligned} \phi _n(s,t,y,y',z,z'):=&\tilde{W}_1(s,(y,z))-\tilde{W}_2(t,(y',z')) -n^2\left| y-y'-\frac{1}{n}\bar{\xi }_1\right| ^2 -n^2\left| z'-z-\frac{1}{n}\bar{\xi }_2\right| ^2 \\&-|y'-\bar{y}|^2-|z-\bar{z}|^2-|t-\bar{s}|^2-n^2|s-t|^2, \end{aligned}$$

where \(\bar{\xi }_1\in \textrm{int}T_{A_1}(\bar{y})\) and \(\bar{\xi }_2\in \textrm{int} T_{A_2}(\bar{z})\). Then, there exist constants \(\delta \in (0,1)\) and \(\eta \in (0,1)\) such that (cf. [29])

$$\begin{aligned} y+(0,\delta ](\bar{\xi }_1 +\eta \mathbb {B})\subset \textrm{int}A_1,\ \forall y\in (\bar{y}+2\delta \mathbb {B})\cap A_1 \end{aligned}$$

and

$$\begin{aligned} z+(0,\delta ](\bar{\xi }_2 +\eta \mathbb {B})\subset \textrm{int}A_2,\ \forall z\in (\bar{z}+2\delta \mathbb {B})\cap A_2. \end{aligned}$$

Since \(\phi _n\) is continuous and \([0,T]\times A_1\times A_2\) is a compact set, for each \(n\ge 1\), there exists a maximizer \((s_n,t_n,y_n,y_n',z_n,z_n')\) for \(\phi _n\) on \(([0,T]\times A_1\times A_2)^2\).

We know that

$$\begin{aligned} \phi _n(s_n,t_n,y_n,y_n',z_n,z_n') \ge \phi _n(\bar{s},\bar{s},\bar{y}+\frac{1}{n}\bar{\xi }_1,\bar{y},\bar{z},\bar{z}+\frac{1}{n}\bar{\xi }_2) \end{aligned}$$

that is

$$\begin{aligned} \begin{aligned}&n^2\left| y_n-y_n'-\frac{1}{n}\bar{\xi }_1\right| ^2 + n^2\left| z_n'-z_n-\frac{1}{n}\bar{\xi }_2\right| ^2 + n^2\left| s_n-t_n\right| ^2 +|y_n'-\bar{y}|^2 +|z_n-\bar{z}|^2 +|t_n-\bar{s}|^2 \\&\le \tilde{W}_1(s_n,(y_n,z_n)) - \tilde{W}_2(t_n,(y_n',z_n')) -\big (\tilde{W}_1(\bar{s},(\bar{y}+\frac{1}{n}\bar{\xi }_1,\bar{z})) - \tilde{W}_2(\bar{s},(\bar{y}, \bar{z}+\frac{1}{n}\bar{\xi }_2))\big ). \end{aligned} \end{aligned}$$

Thus, using also the fact that \(0\le \tilde{W}_1(\bar{s},(\bar{y},\bar{z})) - \tilde{W}_2(\bar{s},(\bar{y}, \bar{z}))- \big (\tilde{W}_1(s_n,(y_n,z_n))-\tilde{W}_2(s_n,(y_n,z_n))\big ) \) (recall that \((\bar{s},(\bar{y},\bar{z}))\) is a maximizer for \(\tilde{W}_1- \tilde{W}_2\) on \([0,T]\times A_1\times A_2\)), we obtain

$$\begin{aligned} \begin{aligned} n^2&\left| y_n-y_n'-\frac{1}{n}\bar{\xi }_1\right| ^2 +n^2\left| z_n'-z_n-\frac{1}{n}\bar{\xi }_2\right| ^2 +n^2|s_n-t_n|^2 +|y_n'-\bar{y}|^2 +|z_n-\bar{z}|^2 +|t_n-\bar{s}|^2\\&\le \omega \left( |(t_n-s_n, y_n-y_n', z_n- z_n')| \right) + \omega \left( \left| \frac{1}{n}\bar{\xi }_1\right| \right) + \omega \left( \left| \frac{1}{n}\bar{\xi }_2\right| \right) \\&\le C, \end{aligned} \end{aligned}$$

for some constant \(C>0\). Here, \(\omega :\mathbb {R}_+\rightarrow \mathbb {R}_+\) denotes a common modulus of continuity for the continuous functions \(\tilde{W}_1\) and \(\tilde{W}_2\). This yields

  • \(|y_n-y_n'-\frac{1}{n}\bar{\xi }_1|, \ |z_n'-z_n-\frac{1}{n}\bar{\xi }_2|, \ |s_n-t_n| \le \frac{\sqrt{C}}{n}\);

  • \(|y_n-y_n'|\le \frac{1}{n}\left( \sqrt{C}+|\bar{\xi }_1| \right) \);

  • \(|z_n-z_n'|\le \frac{1}{n}\left( \sqrt{C}+|\bar{\xi }_2| \right) \).
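The last two bounds follow from the first via the triangle inequality; for instance (a one-line check we add),

```latex
|y_n - y_n'|
  \;\le\; \Big|y_n - y_n' - \tfrac{1}{n}\bar{\xi}_1\Big| + \tfrac{1}{n}|\bar{\xi}_1|
  \;\le\; \tfrac{1}{n}\big(\sqrt{C} + |\bar{\xi}_1|\big),
```

and similarly for \(|z_n-z_n'|\).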

Therefore we obtain that \(y_n\rightarrow \bar{y}\), \(y_n'\rightarrow \bar{y}\), \(z_n\rightarrow \bar{z}\), \(z_n'\rightarrow \bar{z}\), \(s_n\rightarrow \bar{s}\) and \(t_n\rightarrow \bar{s}\) as \(n \rightarrow +\infty \). Moreover, taking \(\bar{n} \ge 1/\delta \) large enough, we also have

$$\begin{aligned} y_n \in \textrm{int}A_1 \quad \text{ and } \quad z_n' \in \textrm{int}A_2, \quad \text{ for } \text{ all } n\ge \bar{n} . \end{aligned}$$

Step 3. For each \(n\ge \bar{n}\) we consider the maps

$$\begin{aligned} \psi _n^1(s,(y,z))&:=\tilde{W}_2(t_n,(y_n',z_n')) +n^2\left| y-y_n'-\frac{1}{n}\bar{\xi }_1\right| ^2 +n^2\left| z_n'-z-\frac{1}{n}\bar{\xi }_2\right| ^2 +n^2|s-t_n|^2 \\&+|y_n'-\bar{y}|^2 +|z-\bar{z}|^2 +|t_n-\bar{s}|^2, \\ \psi _n^2(t,(y',z'))&:=\tilde{W}_1(s_n,(y_n,z_n)) -n^2\left| y_n-y'-\frac{1}{n}\bar{\xi }_1\right| ^2 -n^2\left| z'-z_n-\frac{1}{n}\bar{\xi }_2\right| ^2 -n^2|s_n-t|^2 \\&-|y'-\bar{y}|^2 -|z_n-\bar{z}|^2 -|t-\bar{s}|^2. \end{aligned}$$

Observe that \(\psi _n^1\) and \(\psi _n^2\) are \(\mathcal{C}^1\), \(\tilde{W}_1-\psi _n^1\) has a (local) maximum at \((s_n,(y_n,z_n))\) relative to \([0,T]\times A_1\times A_2\) and \(\tilde{W}_2-\psi _n^2\) has a (local) minimum at \((t_n,(y_n',z_n'))\) relative to \([0,T]\times A_1\times A_2\). Bearing in mind that \(\tilde{W}_1\) and \(\tilde{W}_2\) are respectively viscosity subsolution and supersolution on \((0,T]\times A_1\times A_2\) of (53), and using basic properties of the lower and upper semicontinuous envelopes and the regularity assumptions on \(f_1\), \(f_2\), \(L_1\) and \(L_2\), we have

$$\begin{aligned} \begin{aligned} \tilde{W}_1(s_n&,(y_n,z_n)) - \tilde{W}_2(t_n,(y_n',z_n')) \\&\le \inf _{v\in V} \sup _{u\in U} \Big [ 2(1+ t_n)[n^2( s_n - t_n) - ( t_n - \bar{s})] \\&-(1+ t_n)[2n^2( y_n- y_n'-\frac{1}{n}\bar{\xi }_1)-( y_n'-\bar{y})]\cdot f_1(T- t_n, y_n', u) \\&\quad +(1+ t_n)[2n^2( z_n'- z_n-\frac{1}{n}\bar{\xi }_2)]\cdot f_2(T- t_n, z_n', v) \\&- \frac{(L_1(T- t_n,(y_n',z_n'),u) + L_2(T- t_n,(y_n',z_n'),v) -M)}{e^{(1+ t_n)\tilde{W}_2( t_n,( y_n', z_n'))}}\Big ] \\&- \inf _{v\in V} \sup _{u\in U} \Big [ 2(1+ s_n)[n^2( s_n - t_n) ] \\&-(1+ s_n)[2n^2( y_n- y_n'-\frac{1}{n}\bar{\xi }_1)]\cdot f_1(T- s_n, y_n, u) \\&\quad +(1+ s_n)[2n^2( z_n'- z_n-\frac{1}{n}\bar{\xi }_2)+( z_n-\bar{z})]\cdot f_2(T- s_n, z_n, v) \\&-\frac{(L_1(T- s_n,(y_n,z_n),u) + L_2(T- s_n,(y_n,z_n),v) -M)}{e^{(1+ s_n)\tilde{W}_1( s_n,( y_n, z_n))}}\Big ]. \end{aligned} \end{aligned}$$

Taking the limit as \(n\rightarrow +\infty \) and extracting a subsequence if necessary (we do not relabel), we deduce that, for some \(\bar{u} \in U\) and \(\bar{v} \in V\),

$$\begin{aligned} & 0 < \frac{\alpha }{2}\le \tilde{W}_{1}(\bar{s},(\bar{y},\bar{z})) - \tilde{W}_{2} (\bar{s},(\bar{y},\bar{z})) \\ & \quad \le \big ( L_{1}(T-\bar{s}, (\bar{y},\bar{z}),\bar{u}) + L_{2}(T-\bar{s}, (\bar{y},\bar{z}),\bar{v})-M \big ) \times \left( \frac{1}{e^{(1+\bar{s})\tilde{W}_{1}(\bar{s},(\bar{y},\bar{z}))}} - \frac{1}{e^{(1+\bar{s})\tilde{W}_{2}(\bar{s},(\bar{y},\bar{z}))}}\right) . \end{aligned}$$

Since \(L - 1\ge M\) and \(\tilde{W}_{1}(\bar{s},(\bar{y},\bar{z}))>\tilde{W}_{2}(\bar{s},(\bar{y},\bar{z}))\), the right-hand side above is negative, which gives a contradiction. It follows that \(W_{1} \le W_{2}\) on \([0,T]\times A_{1} \times A_{2}\). This confirms the theorem statement. \(\square \)