1 Introduction

We shall consider a two-player differential game with state constraints. The dynamic constraint for the game, which relates the state trajectory x(.) to the control actions u(.) and v(.) of the two players, takes the form:

$$\begin{aligned} \left\{ \begin{array}{lll} \dot{x}(t) =f(x(t),u(t),v(t)), \quad {\mathrm{for}} \; {\mathrm{a.e.}} t\in [t_0,T] \\ u(t)\in U \quad \text{ for } \text{ a.e. } \;\;t\in [t_0,T]\\ v(t)\in V \quad \text{ for } \text{ a.e. } \;\;t\in [t_0,T]\\ x(t_0) =x_0 \in A \\ x(t)\in A \quad \text{ for } \text{ all } \;\; t\in [t_0,T] \ . \end{array} \right. \end{aligned}$$
(1)

Here, \(T>0\), \(t_0 \in [0,T]\), \(f:\mathbb {R}^n\times \mathbb {R}^{m_1}\times \mathbb {R}^{m_2}\rightarrow \mathbb {R}^n\) is a given function, \(A\subset \mathbb {R}^n\), \(U \subset \mathbb {R}^{m_1}\) and \(V\subset \mathbb {R}^{m_2}\) are given closed sets. For any left end-point \(x_0 \in A \), any initial time \(t_0\in [0,T]\) and any choice of measurable controls \((u(.), v(.)):[t_0,T]\rightarrow U \times V\), we denote by \(x[t_0,x_0;u(.),v(.)](.)\) the solution of system (1) (which always exists and is unique under our assumptions).

For arbitrary initial data \((t_0,x_0)\in [0,T]\times A \), we associate with the controls u(.) and v(.), chosen by the two players, the following cost functional:

$$\begin{aligned} J(t_0,x_0;u(.),v(.)):= \int _{t_0}^T L\big (t,x(t),u(t),v(t)\big ) \ dt + g\big (x(T)\big ), \end{aligned}$$
(2)

in which \(x(t)=x[t_0,x_0;u(.),v(.)](t)\). The function \(L:\mathbb {R}\times \mathbb {R}^n\times \mathbb {R}^{m_1}\times \mathbb {R}^{m_2}\rightarrow \mathbb {R}\) is called the Lagrangian (or running cost) and \(g:\mathbb {R}^n\rightarrow \mathbb {R}\) is the final cost.

The aim of the first player is to choose u(.) to minimize the cost functional, and that of the second player to choose v(.) to maximize it. These objectives are, of course, in conflict. The precise nature of the ‘controls’ (nonanticipative strategies or open-loop controls) employed by the players will be specified below.
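
To fix ideas, the following minimal sketch (an illustration only, not part of the formal development) shows how a pair of sampled open-loop controls generates a state trajectory of (1) and the associated cost (2) via an explicit Euler scheme; the names `simulate_cost`, `f`, `L`, `g` are hypothetical placeholders, and feasibility of the state constraint \(x(t)\in A\) must be checked separately.

```python
import numpy as np

# Sketch: Euler integration of the dynamics (1) and evaluation of the
# cost (2) for sampled open-loop controls u, v (illustration only).
def simulate_cost(f, L, g, t0, T, x0, u, v):
    N = len(u)                            # len(u) == len(v) time steps
    dt = (T - t0) / N
    t, x, J = t0, np.asarray(x0, dtype=float), 0.0
    for k in range(N):
        J += dt * L(t, x, u[k], v[k])     # running cost, rectangle rule
        x = x + dt * f(x, u[k], v[k])     # Euler step for x' = f(x, u, v)
        t += dt                           # note: x(t) in A is NOT enforced here
    return J + g(x)                       # add the final cost g(x(T))
```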

We shall assume that the data for system (1) and the cost (2) satisfy the following four hypotheses, which we shall refer to as ‘standing hypotheses’:

  1. (H1):

    \(U \subset \mathbb {R}^{m_1}\) and \(V\subset \mathbb {R}^{m_2}\) are closed sets; f(x, ., .) is \(\mathcal{B}^{m_1}\times \mathcal{B}^{m_2}\) measurable for each x; L(t, x, ., .) is \(\mathcal{B}^{m_1}\times \mathcal{B}^{m_2}\) measurable for each (t, x);

    (here, \(\mathcal{B}^{m_i}\) is the collection of Borel sets of \(\mathbb {R}^{m_i}\))

  2. (H2):

    there exists \(M>0\) such that

    \(|f(x,u,v)| \,\le \, M(1+|x|)\) and \(|L(t,x,u,v)| \,\le \, M(1+|x|)\), for all \((t,x) \in [0,T]\times \mathbb {R}^{n},\, u \in U,\, v \in V\);

  3. (H3):

    for every \(R>0\), there exist \(k_f>0\), \(k_g>0\) and \(k_L>0\) such that

    \(|f(x,u,v)-f(x',u,v)| \,\le \, k_{f}|x-x'|, \quad \text{ for } \text{ all } x, x' \in R \mathbb {B}\), \(u\in U\) and \(v\in V\);

    \(|L(t,x,u,v)-L(t',x',u,v)| \,\le \, k_{L}(|t-t'|+|x-x'|)\),

    \(\text{ for } \text{ all } t,t'\in [0,T], \, x, x' \in R\mathbb {B}\), \(u\in U\) and \(v\in V\);

    \(|g(x)-g(x')| \le k_g|x-x'|\), for all \( x, x' \in R \mathbb {B}\);

    (here, \(\mathbb {B}\) is the closed unit ball in Euclidean space, and \(R \mathbb {B}\) is the closed ball of radius R)

  4. (H4):

    A is a set with non-empty interior having the representation:

    $$\begin{aligned} A \,=\, \{ x \in \mathbb {R}^{n} \; : \; h(x)\le 0 \} \; , \end{aligned}$$
    (3)

    for a given function \(h(.): \mathbb {R}^{n} \rightarrow \mathbb {R}\) of class \(C^{1+}\) (i.e. h is a \(C^{1}\) function with locally Lipschitz gradient)  such that \( \nabla h(x) \ne 0 \) whenever \(h(x) =0\).

We shall also find it necessary, at different stages of our investigation, to invoke one or the other of the following hypotheses:

  1. (CQ1):

    for every \(R>0\) there exists \(\eta >0\) such that

    $$\begin{aligned} \sup _{v\in V}\inf _{u \in U} \nabla h(x)\cdot f(x,u, v) \le - \eta , \quad x \in \partial A \cap R\mathbb {B}\ ; \end{aligned}$$
  2. (CQ2):

    for every \(R>0\) there exists \(\eta >0\) such that

    $$\begin{aligned} \sup _{u\in U}\inf _{v \in V} \nabla h(x)\cdot f(x, u,v) \le - \eta , \quad x \in \partial A \cap R\mathbb {B}\ . \end{aligned}$$

We shall refer to (CQ1) and (CQ2) as inward pointing conditions for player 1 and player 2, respectively. We shall also invoke a strengthening of the standing hypotheses, namely:

  1. (H5):

    \(U \subset \mathbb {R}^{m_1}\) and \(V\subset \mathbb {R}^{m_2}\) are compact sets; f(x, ., .) is continuous for each x; L(t, x, ., .) is continuous for each (t, x).

Given a subinterval \([t_0,T_0]\subset [0,T]\) we shall write

$$\begin{aligned}&\mathcal{U} [t_0,T_0] ~ := ~ \left\{ u(.):[t_0, T_0] \rightarrow U\; {\mathrm{measurable}} \right\} ; \\&\mathcal{V} [t_0,T_0] ~ := ~ \left\{ v(.):[t_0, T_0]\rightarrow V\; {\mathrm{measurable}} \right\} . \end{aligned}$$

If, in addition, we are given a point \(x_0 \in A\), we shall consider the sets of admissible (also referred to as feasible) controls for the left end-point \(x_0\) on the subinterval \([t_0,T_0]\subset [0,T]\), defined as follows:

$$\begin{aligned}&{\mathcal{AD}} ([t_0,T_0],x_0) \\&\quad := \left\{ (u(.),v(.))\in \mathcal{U} [t_0,T_0] \times \mathcal{V} [t_0,T_0] \; : \; x[t_0,x_0;u(.),v(.)](t)\in A, \;\; \forall \, t \in [t_0, T_0] \right\} \ ; \\&\mathcal{U} ([t_0,T_0],x_0) \\&\quad := \left\{ u(.)\in \mathcal{U} [t_0,T_0] \; : \; \exists \,v(.) \; \text{ s.t. } (u(.),v(.))\in {\mathcal{AD}} ([t_0,T_0],x_0) \right\} ; \\&\mathcal{V} ([t_0,T_0],x_0) \\&\quad := \left\{ v(.)\in \mathcal{V} [t_0,T_0] \; : \; \exists \,u(.) \; \text{ s.t. } (u(.),v(.))\in {\mathcal{AD}} ([t_0,T_0],x_0) \right\} . \end{aligned}$$

We shall often consider the case in which \(T_0=T\). In this case we will employ a simplified notation in which \(T_0=T\) is not explicitly written:

$$\begin{aligned} \mathcal{U} (t_0,x_0) :=\mathcal{U} ([t_0,T],x_0) \quad \text{ and } \quad \mathcal{V} (t_0,x_0) :=\mathcal{V} ([t_0,T],x_0) \ . \end{aligned}$$

We shall see that, under assumptions (H1)-(H4), if in addition (CQ1) (respectively (CQ2)) is satisfied, then, for all \(x_0\in A\) and \( t_0 \in [0,T]\), we have (cf. Proposition 3.4 below):

$$\begin{aligned} \mathcal{V} (t_0,x_0)= \mathcal{V} [t_0,T] \;\;\;\; {\mathrm{and }} \;\;\;\; \mathcal{U} ([t_0,T],x_0) \ne \emptyset \ \end{aligned}$$

(respectively

$$\begin{aligned} \mathcal{U} (t_0,x_0)= \mathcal{U} [t_0,T] \;\;\;\; {\mathrm{and }} \;\;\;\; \mathcal{V} ([t_0,T],x_0) \ne \emptyset ) \ . \end{aligned}$$

We consider the upper value \(V^\sharp \) and the lower value \(V^\flat \) (see the definitions (4) and (5) below), whose definition involves the notion of nonanticipative strategies (in the Varaiya–Roxin–Elliott–Kalton sense). More precisely, fix an initial datum \((t_0,x_0)\in [0,T]\times A \). We recall that a map \(\alpha :\mathcal{V}(t_0,x_0)\rightarrow \mathcal{U}(t_0,x_0)\) is a non-anticipative strategy for the first player at the point \(x_0\) if, for any \(\tau \in [0,T-t_0]\) and for all controls \(v_1(.)\) and \(v_2(.)\) belonging to \(\mathcal{V}(t_0,x_0)\) which coincide a.e. on \([t_0,t_0+\tau ]\), \(\alpha (v_1)(.)\) and \(\alpha (v_2)(.)\) coincide a.e. on \([t_0,t_0+\tau ]\). Analogously we can define non-anticipative strategies \(\beta \) for the second player. Taking the time interval of reference \( [t_0,T_0]\subset [0,T]\), we shall use the following notation for the admissible strategies of, respectively, player 1 and player 2:

$$\begin{aligned}&S_U([t_0,T_0],x_0):=\{ \alpha : \mathcal{V}([t_0,T_0],x_0)\rightarrow \mathcal{U}([t_0,T_0],x_0) \; \text{ s.t. } \; \alpha (.) \text{ is } \text{ nonanticipative } \\&\quad \text{ and } \; x[t_0,x_0;\alpha (v),v](t)\in A \; \text{ for } \text{ all } t \in [t_0,T_0] \; \text{ and } \text{ for } \text{ all } v\in \mathcal{V}([t_0,T_0],x_0) \; \} \end{aligned}$$

and

$$\begin{aligned}&S_V([t_0,T_0],x_0):=\{ \beta : \mathcal{U}([t_0,T_0],x_0)\rightarrow \mathcal{V}([t_0,T_0],x_0) \text{ s.t. } \beta (.) \text{ is } \text{ nonanticipative } \\&\quad \text{ and } \; x[t_0,x_0;u,\beta (u)](t)\in A \; \text{ for } \text{ all } t \in [t_0,T_0] \; \text{ and } \text{ for } \text{ all } u\in \mathcal{U}([t_0,T_0],x_0) \; \}. \end{aligned}$$

When \(T_0=T\), to simplify notation we write \(S_U(t_0,x_0)\) and \(S_V(t_0,x_0)\) for the sets of admissible nonanticipative strategies for the first and second player, respectively.
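
In discrete time, nonanticipativity is simply causality: the kth value of \(\alpha (v)\) may depend only on the values of v up to step k. The following sketch (with a hypothetical one-step-delay ‘matching’ rule and a made-up control set \(U=[-1,1]\)) illustrates the definition and checks it on two controls that coincide on an initial subinterval.

```python
import numpy as np

# Sketch of a nonanticipative map on sampled controls: u[k] depends only
# on v[0..k-1] (a hypothetical delay-and-clip strategy; U = [-1, 1] is
# made up for illustration).
def alpha(v):
    u = np.zeros_like(v)
    u[1:] = np.clip(-v[:-1], -1.0, 1.0)   # u[k] uses only the past of v
    return u

rng = np.random.default_rng(0)
v1 = rng.uniform(-1.0, 1.0, 50)
v2 = v1.copy()
v2[30:] = rng.uniform(-1.0, 1.0, 20)      # v1, v2 coincide on steps 0..29
assert np.array_equal(alpha(v1)[:31], alpha(v2)[:31])  # so do alpha(v1), alpha(v2)
```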

The lower value \(V^\flat \) is then defined by:

$$\begin{aligned} V^\flat (t_0,x_0):=\inf _{\alpha \in S_U(t_0,x_0)}\sup _{v(.)\in \mathcal{V}(t_0,x_0)} J(t_0,x_0;\alpha (v)(.),v(.))\;. \end{aligned}$$
(4)

The upper value \(V^\sharp \) is defined, analogously, by:

$$\begin{aligned} V^\sharp (t_0,x_0):= \sup _{\beta \in S_V(t_0,x_0)}\inf _{u(.)\in \mathcal{U} (t_0,x_0)} J(t_0,x_0;u(.),\beta (u)(.)) \ . \end{aligned}$$
(5)

Our goal in this paper is to prove, under hypotheses (H1)–(H5) and the constraint qualification (CQ1), that the lower value function is a Lipschitz continuous function that is characterized as the unique uniformly continuous viscosity solution of an appropriate Hamilton–Jacobi–Isaacs (HJI) equation. Analogous properties of the upper value function are established when (CQ2) replaces (CQ1). We also establish that, under the standing hypotheses, when the Isaacs condition, (CQ1) and (CQ2), supplemented by (CQ3) (or (CQ4)) below, are satisfied, the value functions coincide, i.e. the game has a value.

The zero-sum differential games literature has, since its inception [16], focused heavily on conditions under which the game has a value. But the lower game is of independent importance, because of its relevance to robust controller design for bounded disturbances. This is a well-established field of control engineering design which, it is argued, is better suited to applications involving low frequency, persistent disturbances than approaches based on stochastic models. One well-developed approach to robust design for bounded disturbances, based on a ‘worst case’ analysis of the effects of the disturbance, involves solving a lower game, in which the controller and disturbance in the controller design problem are interpreted as the first and second control in the lower game, respectively; the lower cost function provides an assessment of the quality of a chosen robust controller design strategy, regarding disturbance suppression and other objectives [4]. By including a state constraint set in the formulation of the lower game, we can take account of the presence of a ‘safe region’ of the state space. Here, the requirement that the chosen feedback control maintains the state in the safe region, regardless of the disturbance, is built into the design specifications.

A link between the lower value function and viscosity solutions to an HJI equation, with potential application to robust control design, is established in this paper only when the constraint qualification (CQ1) is satisfied. (CQ1) requires that there exist control values driving the state into the interior of the state constraint region whatever the disturbance; in effect, it requires that the disturbances are ‘matched’ to the control action and that the control actuator is designed with sufficient power to dominate the anticipated bounded disturbances. Robust control design methodologies based on such assumptions are standard in the robust control literature, see, e.g. [13].

Aspects of the state-constrained zero-sum differential games in this paper that deal with the existence of a value are less relevant to practical control engineering. This is because existence of a value is established only under the assumption that (CQ1), (CQ2) and (CQ3) (or (CQ4)) are satisfied. This is a highly restrictive hypothesis, since (CQ1) and (CQ2), which require that, simultaneously, the control dominates the disturbance and the disturbance dominates the control, are in some sense in conflict. Fortunately, as we have argued, it is only the lower game that is relevant to robust controller design.

Our treatment of state-constrained zero-sum differential games differs from earlier work by Koike [17] on this subject in three respects. First, we replace the implicit uniform controllability hypotheses in Koike by directly verifiable constraint qualifications (CQ1) or (CQ2). Second, we show that the lower and upper value functions are the unique viscosity solutions of the HJI equations for the lower and upper games, respectively, in a simple sense that avoids redefinition of the Hamiltonians on the boundary of the state constraint set, as well as consideration of the lower and upper envelope solutions employed in [17]. Third, we provide conditions under which the game has a value.

The paper is organized as follows. We state our main results in Sect. 2. Section 3 is devoted to nonanticipative constructions of feasible controls and strategies. In Sect. 4 we establish the Lipschitz regularity of the upper and lower values, characterizing them as constrained viscosity solutions of a Hamilton–Jacobi–Isaacs equation. We provide a comparison result in Sect. 5.

2 Main Results

In this section we state our main results, which characterize the lower value, the upper value, and the value functions as generalized solutions in the viscosity sense to the following HJI equation:

$$\begin{aligned} \left\{ \begin{array}{ll} - \partial _t W(t,x) + H\Big (t,x,\partial _x W(t,x) \Big )= 0 &{}\quad {\mathrm{on }} \; [0,T)\times A \\ W(T,x)= g(x)&{} \quad {\mathrm{on }} \; A , \end{array} \right. \end{aligned}$$
(6)

in which the function \(H:\mathbb {R}\times \mathbb {R}^n\times \mathbb {R}^n\rightarrow \mathbb {R}\) is the Hamiltonian associated with either the lower or upper game.

Take the (un-max-minimized) Hamiltonian function for the differential game associated with (1)–(2) to be:

$$\begin{aligned} \mathcal{H}(t,x,p,u,v):= - f(x,u,v) \cdot p - L (t,x,u,v)\ . \end{aligned}$$

The lower Hamiltonian and the upper Hamiltonian are defined respectively to be

$$\begin{aligned} H^\flat (t,x,p):= \inf _{v\in V} \sup _{u\in U} \mathcal{H}(t,x,p,u,v) \ , \end{aligned}$$
(7)

and

$$\begin{aligned} H^\sharp (t,x,p):= \sup _{u\in U} \inf _{v\in V} \mathcal{H}(t,x,p,u,v) \ . \end{aligned}$$
(8)

Clearly we have \(H^\sharp \le H^\flat \).
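
When U and V are finite, the extrema in (7) and (8) are attained and the two Hamiltonians can be computed by direct enumeration. The sketch below (an illustration only) does this for the data of Example 1 in this section (f(x,u,v)=u+v, L=−u+v, U={−3,1}, V={−2,2}); since \(\mathcal{H}\) then separates into a u-term and a v-term, the two Hamiltonians actually coincide, and in general the check only asserts \(H^\sharp \le H^\flat \).

```python
import numpy as np

# Sketch: brute-force evaluation of the lower and upper Hamiltonians
# (7)-(8) for finite control sets (data borrowed from Example 1 below).
U, V = [-3.0, 1.0], [-2.0, 2.0]
f = lambda x, u, v: u + v                  # dynamics of Example 1
L = lambda t, x, u, v: -u + v              # running cost of Example 1

def calH(t, x, p, u, v):                   # un-max-minimized Hamiltonian
    return -f(x, u, v) * p - L(t, x, u, v)

def H_flat(t, x, p):                       # (7): inf over v of sup over u
    return min(max(calH(t, x, p, u, v) for u in U) for v in V)

def H_sharp(t, x, p):                      # (8): sup over u of inf over v
    return max(min(calH(t, x, p, u, v) for v in V) for u in U)

for p in np.linspace(-2.0, 2.0, 9):
    assert H_sharp(0.0, 0.0, p) <= H_flat(0.0, 0.0, p) + 1e-12
```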

We adopt the following definition of viscosity supersolution and subsolution for state constrained problems.

Definition 2.1

  1. (i)

    [Viscosity supersolution] A continuous function \(W: [0,T] \times A \longrightarrow \mathbb {R}\) is a viscosity supersolution on \(D \subset [0,T) \times A\) of the Hamilton-Jacobi-Isaacs equation (6) if, for any test function \(\varphi :\mathbb {R}\times \mathbb {R}^n \rightarrow \mathbb {R}\) of class \( \mathcal{C}^1 \) such that \(W- \varphi \) has a local minimum (relative to \([0,T] \times A\)) at \((t_0,x_0)\in D \), we have

    $$\begin{aligned} - \partial _t \varphi (t_0,x_0) + H (t_0,x_0, \partial _x \varphi (t_0,x_0))\; \ge 0. \end{aligned}$$
  2. (ii)

    [Viscosity subsolution] A continuous function \(W: [0,T] \times A \longrightarrow \mathbb {R}\) is a viscosity subsolution on \(D \subset [0,T) \times A\) of (6) if, for any \(\varphi :\mathbb {R}\times \mathbb {R}^n \rightarrow \mathbb {R}\) of class \( \mathcal{C}^1 \) such that \(W- \varphi \) has a local maximum (relative to \([0,T] \times A\)) at \((t_0,x_0)\in D \), we have

    $$\begin{aligned} - \partial _t \varphi (t_0,x_0) + H (t_0,x_0, \partial _x \varphi (t_0,x_0))\; \le 0. \end{aligned}$$

We say that a continuous function W is a viscosity solution on \(D \subset [0,T) \times A\) of (6) if it is simultaneously a supersolution and a subsolution of (6) on D.

Theorem 2.2

Assume that conditions (H1)–(H4) are satisfied.

  1. (i)

    Suppose, in addition, that (CQ1) is verified. Then, the lower value function \(V^\flat \) is locally Lipschitz continuous.

  2. (ii)

    Suppose, in addition, that (CQ1) and (H5) are verified. Then \(V^\flat \) is a viscosity supersolution on \([0,T) \times A\) and a viscosity subsolution on \([0,T)\times {\mathrm{int}} \ A\) of equation (6) in which \(H=H^\flat \).

  3. (iii)

    Suppose, in addition, that (CQ1) and (H5) are verified, and A is compact. Then \(V^\flat \) is the unique function which is simultaneously a supersolution on \([0,T) \times A\) and a subsolution on \([0,T) \times {\mathrm{int}} A \) of (6), with \(H=H^\flat \), in the class of uniformly continuous functions.

Observe that \(V^\flat \) is a constrained viscosity solution of (6) in the sense of [21] (cf. [2]).

Theorem 2.3

Assume that (H1)–(H4) are satisfied.

  1. (i)

    Suppose, in addition, that (CQ2) is verified. Then, the upper value function \(V^\sharp \) is locally Lipschitz continuous.

  2. (ii)

    Suppose, in addition, that (CQ2) and (H5) are verified. Then \(V^\sharp \) is a viscosity supersolution on \([0,T) \times {\mathrm{int}} \ A\) and a viscosity subsolution on \([0,T)\times A\) of equation (6) in which \(H=H^\sharp \).

  3. (iii)

    Suppose, in addition, that (CQ2) and (H5) are verified, and A is compact. Then \(V^\sharp \) is the unique function which is simultaneously a viscosity supersolution on \([0,T) \times {\mathrm{int}} \ A\) and a viscosity subsolution on \([0,T)\times A\) of equation (6), with \(H=H^\sharp \), in the class of uniformly continuous functions.

The proofs of these theorems, which are given in later sections of the paper, are built up in several stages. To be more precise, properties (i) and (ii) of Theorems 2.2 and 2.3 follow respectively from Proposition 4.2 and Theorem 4.3 below. The comparison Theorem 5.1 establishes part (iii) of Theorems 2.2 and 2.3.

We highlight the fact that, for the proof of the Lipschitz regularity of the two value functions, assumption (H5) is not required. It is invoked only when we seek to interpret the values as viscosity subsolutions or supersolutions of (6).

We introduce additional constraint qualifications to derive further properties of the game values.

  1. (CQ3):

    For every \(t_0\in [0,T]\), \(x_0\in \partial A\) and \(p_0\in \mathbb {R}^n\) there exist \(u_0\in U\) and \(v_0\in V\) such that

    1. (i)

      \({{\mathcal {H}}} (t_0,x_0,p_0,u_0,v_0) = \inf _{v\in V}\sup _{u\in U} {{\mathcal {H}}} (t_0,x_0,p_0,u,v)\) and

    2. (ii)

      \( \nabla h(x_0)\cdot f(x_0,u_0, v_0)< 0 \ .\)

  2. (CQ4):

    For every \(t_0\in [0,T]\), \(x_0\in \partial A\) and \(p_0\in \mathbb {R}^n\) there exist \(u_0\in U\) and \(v_0\in V\) such that

    1. (i)

      \({{\mathcal {H}}} (t_0,x_0,p_0,u_0,v_0) = \sup _{u\in U} \inf _{v\in V} {{\mathcal {H}}} (t_0,x_0,p_0,u,v)\) and

    2. (ii)

      \( \nabla h(x_0)\cdot f(x_0,u_0, v_0)< 0 \ .\)

Observe that the two conditions (CQ3) and (CQ4) coincide when the Isaacs condition, namely

$$\begin{aligned} H ^\flat (t,x,p) =H ^ \sharp (t,x,p ) , \quad \forall \; (t,x,p)\in [0,T)\times A \times \mathbb {R}^n \,, \end{aligned}$$
(9)

is satisfied. (This is the case, for instance, when the dynamics and the running cost are separated in the controls, i.e. \(f(x,u,v)=f_1(x,u)+f_2(x,v)\) and \(L(t,x,u,v)=L_1(t,x,u)+L_2(t,x,v)\): \(\mathcal{H}\) then splits into the sum of a u-term and a v-term, so the order of the extremizations in (7) and (8) is immaterial.)

Theorem 2.4

Assume that (H1)–(H5), (CQ1) and (CQ2) hold true. Suppose, moreover, that the Isaacs condition (9) and (CQ3) (or, equivalently, (CQ4)) are satisfied. Then the game has a value, namely \(V ^\sharp = V ^\flat \), and \( V:=V ^\sharp = V ^\flat \) is a viscosity solution on \([0,T)\times A\) of (6) with \(H=H^\flat =H^\sharp \). If, in addition, A is compact, then V is the unique continuous viscosity solution on \([0,T)\times A\) of (6) with \(H=H^\flat =H^\sharp \).

This theorem is a direct consequence of Corollary 4.6 and Theorem 5.1.

Remark 2.5

  1. (i)

    The assertions of Theorems 2.2 and 2.3 remain valid if we replace the Hamiltonians \(H^\flat \) and \(H^\sharp \) with the Hamiltonians \(H_-\) and \(H_+\), respectively, where:

    $$\begin{aligned}&H_-(t,x,p):= \inf _{v\in V} \sup _{u\in U(x,v)} \mathcal{H}(t,x,p,u,v) \ , \quad \text{ and } \\&H_+(t,x,p):= \sup _{u\in U} \inf _{v\in V(x,u)} \mathcal{H}(t,x,p,u,v) \ . \end{aligned}$$

    Here, \(U(x,v):=\{ u\in U| \; f(x,u, v)\in T_{A}(x)\}\) and \(V(x,u):=\{ v\in V| \; f(x,u,v)\in T_{A}(x)\}\), in which \(T_{A}(x)\) denotes the Clarke tangent cone to the set A at x. Notice that \(H_-\) and \(H_+\) are in general discontinuous; therefore, in this case, the notion of viscosity solution requires consideration of the (upper and lower) semicontinuous envelopes of the Hamiltonians \(H_-\) and \(H_+\), indicated by the upper and lower ‘\(*\)’ notation (cf. [2, 17]). But using the fact that, under our hypotheses, we have

    $$\begin{aligned}&H_-(t,x,p)= H^\flat (t,x,p), \\&\quad H_+(t,x,p)= H^\sharp (t,x,p) \quad \text{ for } \text{ all } (t,x,p) \in [0,T]\times {\mathrm{int }}A \times \mathbb {R}^n \ , \end{aligned}$$

    and

    $$\begin{aligned}&{H_-} ^ * (t,x,p) = H^\flat (t,x,p), \\&\quad {H_+}_* (t,x,p) = H^\sharp (t,x,p) \quad \text{ for } \text{ all } (t,x,p) \in [0,T]\times \partial A \times \mathbb {R}^n \ , \end{aligned}$$

    we can easily reduce the analysis to the case in which we consider the reference lower and upper Hamiltonians \(H^\flat \) and \(H^\sharp \). We also observe that, if all the assumptions of Theorem 2.4 are satisfied (including the Isaacs condition (9), stated precisely with \(H^\flat \) and \(H^\sharp \)), then the value \( V:=V ^\sharp = V ^\flat \) is still the unique bounded uniformly continuous viscosity solution on \([0,T) \times A\) of (6) with \(H=H_-\) and, also, with \(H=H_+\).

  2. (ii)

    Similar results were earlier proved in [5], but only in the case of separated dynamics.

  3. (iii)

    Observe that, if the standing hypotheses are merely supplemented by the Isaacs condition and (CQ1)–(CQ2), from Corollary 4.5 and Theorem 5.1 we can only conclude that \(V ^\sharp \le V ^\flat \) on \([0,T]\times A\). To obtain the existence of a value for the game we need to impose also (CQ3) (or (CQ4)). Although (CQ3) and (CQ4) are very strong conditions, the value of the game may fail to exist if they are not imposed. This is illustrated by Example 1 below.

Example 1

Consider the following two-player differential game with state-constraints:

$$\begin{aligned} \left\{ \begin{array}{lll} \dot{x}(t) =u(t)+v(t), \quad &{} \text{ for } \text{ a.e. } \;\; t\in [t_0,1] \\ u(t)\in U:=\{-3,+1\} \quad &{} \text{ for } \text{ a.e. } \;\;t\in [t_0,1]\\ v(t)\in V:=\{-2,+2\} \quad &{} \text{ for } \text{ a.e. } \;\;t\in [t_0,1]\\ x(t_0) =x_0 \in A \\ x(t)\in A := \{ x\in \mathbb {R}\; : \; x\le 0 \} \quad &{} \text{ for } \text{ all } \;\; t\in [t_0,1] \ . \end{array} \right. \end{aligned}$$

For arbitrary initial data \((t_0,x_0)\in [0,1]\times A \) and controls u(.) and v(.), we consider the following cost functional:

$$\begin{aligned} J(t_0,x_0;u(.),v(.)) := \int _{t_0}^1 (-u(t) + v(t)) \ dt \ . \end{aligned}$$

Observe that, for this example, the Isaacs condition and the two constraint qualifications (CQ1) and (CQ2) are satisfied, but (CQ3) (and, equivalently here, (CQ4)) is violated. We claim that we have the strict inequality:

$$\begin{aligned} V^\flat (0,0)> V^\sharp (0,0) \ . \end{aligned}$$

Proof of the claim

Notice that

$$\begin{aligned}&\mathcal{U}(0,0)= \{ w(.) \in L^\infty \; : \; w(t)\in \{-3,+1\} \; a.e. \}, \\&\mathcal{V}(0,0)= \{ w(.) \in L^\infty \; : \; w(t)\in \{-2,+2\} \; a.e. \}. \end{aligned}$$

Consider the lower game. Take any \(\epsilon >0\). Then, there exists a non-anticipative strategy \(\alpha \in S_U(0,0)\) such that \(x[0,0; \alpha (v),v](t)\in A\) for all \(t\in [0,1]\) and \(v(.) \in \mathcal{V}(0,0)\), and

$$\begin{aligned} V^\flat (0,0) \ge \sup _{v(.) \in \mathcal{V}(0,0)} \int _0^1 ( -\alpha (v)(t) + v(t) ) dt - \epsilon \ . \end{aligned}$$

We also know that, for any \(v(.) \in \mathcal{V}(0,0)\), \(u(.)=\alpha (v)(.)\) must satisfy the state constraint and hence

$$\begin{aligned} \int _0^1 u(t) \ dt \le - \int _0^1 v(t) \ dt \ . \end{aligned}$$

It follows that

$$\begin{aligned} \sup _{v(.) \in \mathcal{V}(0,0)} J(0,0;\alpha (v)(.),v(.)) \ge \sup _{v(.) \in \mathcal{V}(0,0)} \left( 2 \times \int _0^1 v(t) \ dt \right) = 4 \ . \end{aligned}$$

Since \(\epsilon >0\) is arbitrary, we deduce that

$$\begin{aligned} V^\flat (0,0) \ge 4 \ . \end{aligned}$$

Consider now the upper game. Take any \(\epsilon >0\). Then, there exists a non-anticipative strategy \(\beta \in S_V(0,0)\) such that \(x[0,0; u,\beta (u)](t)\in A\) for all \(t\in [0,1]\) and \(u(.) \in \mathcal{U}(0,0)\), and

$$\begin{aligned} V^\sharp (0,0) \le \inf _{u(.) \in \mathcal{U}(0,0)} \int _0^1 ( - u(t) + \beta (u)(t) ) dt + \epsilon \ . \end{aligned}$$

For any \(u(.) \in \mathcal{U}(0,0)\), \(v(.)=\beta (u)(.)\) must satisfy

$$\begin{aligned} \int _0^1 v(t) \ dt \le - \int _0^1 u(t) \ dt \ . \end{aligned}$$

It follows that

$$\begin{aligned} \inf _{u(.) \in \mathcal{U}(0,0)} J(0,0;u(.),\beta (u)(.)) \le \inf _{u(.) \in \mathcal{U}(0,0)} \left( - 2 \times \int _0^1 u(t) \ dt \right) = -2 \ . \end{aligned}$$

Since \(\epsilon >0\) is arbitrary, we obtain that

$$\begin{aligned} V^\sharp (0,0) \le - 2 \ , \end{aligned}$$

confirming the claim. \(\square \)
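
The gap can also be observed numerically. The following crude discrete-time dynamic programming sketch (our own illustration, not a construction from the paper) uses a uniform grid of step 1/N; for the lower value the minimizing player reacts to v within each time step, and for the upper value the maximizing player reacts to u, with the reacting player bearing the state constraint, as in the definitions (4) and (5). The reported values are discretization-dependent, but for moderate N they come out close to 4 and −2, consistent with the bounds \(V^\flat (0,0)\ge 4\) and \(V^\sharp (0,0)\le -2\) just established, and they exhibit the strict gap.

```python
from functools import lru_cache

N = 100                        # time steps on [0, 1]; step length 1/N
U, V = (-3, 1), (-2, 2)        # control sets of Example 1

@lru_cache(maxsize=None)
def lower(k, n):
    """Discrete lower value at step k, state x = n/N (u reacts to v)."""
    if k == N:
        return 0.0
    best = float('-inf')
    for v in V:
        feas = [u for u in U if n + u + v <= 0]   # keep x <= 0
        if feas:
            best = max(best, min((-u + v) / N + lower(k + 1, n + u + v)
                                 for u in feas))
    return best

@lru_cache(maxsize=None)
def upper(k, n):
    """Discrete upper value at step k, state x = n/N (v reacts to u)."""
    if k == N:
        return 0.0
    best = float('inf')
    for u in U:
        feas = [v for v in V if n + u + v <= 0]   # keep x <= 0
        if feas:
            best = min(best, max((-u + v) / N + upper(k + 1, n + u + v)
                                 for v in feas))
    return best

print(lower(0, 0), upper(0, 0))   # roughly 4.0 and -2.0: a strict gap
```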

Remark 2.6

Koike [17] also provides viscosity solution characterizations of the upper and lower values. To be specific, [17] establishes that they are unique viscosity solutions, up to the boundary of the state constraint, of the upper and lower HJI equations, defined in terms of the upper and lower semicontinuous envelopes of the Hamiltonians \(H_{-}\) and \(H_{+}\), under a controllability condition. (Some information about the relation between our definitions of viscosity solutions and those of Koike is provided by Remark 2.5.) Direct comparisons with Theorems 2.2 and 2.3 are not possible: examples can be given in which (CQ1) and (CQ2) are satisfied but Koike’s controllability condition is not, and vice versa. Our hypotheses have the advantage, however, that they are expressed in terms of the defining sets and functions of the dynamic game formulation, and are therefore more amenable to direct verification than Koike’s hypotheses, which are implicit in nature. We mention that, in Example 1, (CQ1) and (CQ2) and also Koike’s controllability hypotheses are satisfied. Example 1 therefore illustrates that, in Koike’s framework too, the value of the dynamic game may fail to exist.

3 State Constrained Control Systems: Nonanticipative Constructions of Feasible Controls

Consider the state-constrained control system, described as follows:

$$\begin{aligned}&\dot{x} (t) \, =\, {\tilde{f}} (t,x(t),u(t)) \; \text{ a.e. } \; t \in [0,T] \end{aligned}$$
(10)
$$\begin{aligned}&u(t)\, \in \, {\widetilde{U}} (t) \nonumber \\&x(t)\, \in \, A \; \quad \text{ for } \text{ all } \; t \in [0,T] \,, \end{aligned}$$
(11)

in which \({\tilde{f}}(.,.,.):\mathbb {R}\times \mathbb {R}^n \times \mathbb {R}^{m} \rightarrow \mathbb {R}^n\) is a given function, and \({\widetilde{U}}(.): \mathbb {R}\leadsto \mathbb {R}^m\) is a given multifunction.

We shall refer to a couple (x(.), u(.)), comprising a measurable function \(u(.): I\rightarrow \mathbb {R}^{m}\) and an absolutely continuous function \(x(.): I \rightarrow \mathbb {R}^{n}\) which satisfy \(\dot{x}(t)={\tilde{f}}(t,x(t),u(t))\) and \(u(t)\in {\widetilde{U}} (t)\) a.e., as a process (on the subinterval \(I\subset [0,T]\)). The function x(.) is called a state trajectory and the function u(.) is called a control function. If x(.) satisfies the state constraint (11), the process is ‘feasible’.

We shall assume that the control system data satisfy the following hypotheses, in which \(r_{0}\) is some positive real number. There exist constants \(\rho >0\), \(\eta >0\) and \(M>0\), and \({\tilde{k}} _f(.)\in L^1([0,T],\mathbb {R})\) such that:

(H1)\('\)::

\({\tilde{f}} (.,x,.)\) is \(\mathcal{L}\times \mathcal{B}^{m}\) measurable for each x, where \(\mathcal{L}\) is the collection of Lebesgue measurable sets of \(\mathbb {R}\) and \(\mathcal{B}^{m}\) the collection of Borel sets of \( \mathbb {R}^{m}\); the set-valued map \({\widetilde{U}} (.)\) has Borel-measurable graph.

(H2)\('\)::

\(|{\tilde{f}}(t,x,u)| \,\le \, M(1+|x|) \quad \text{ for } \text{ all } (t,x) \in [0,T]\times \mathbb {R}^{n},\, u \in {\widetilde{U}} (t)\) .

(H3)\('\)::

\(|{\tilde{f}}(t,x,u) - {\tilde{f}}(t,x',u)| \,\le \, {\tilde{k}} _{f}(t)|x-x'| \quad \text{ for } \text{ all } t\in [0,T], \, x, x' \in e^{MT}(1 + r_0)\mathbb {B}\) and \(u\in {\widetilde{U}} (t)\).

(CQ)::

(Inward pointing condition)

$$\begin{aligned} \inf _{u \in {\widetilde{U}} (t)} \nabla h(x)\cdot {\tilde{f}} (t,x,u) \le - \eta \ , \end{aligned}$$

for all \((t,x) \in [0,T] \times e^{MT}(1 + r_0)\mathbb {B}\) for which \(- \rho \le h(x)\le 0 \).

Employing the \(L^\infty \)-metric on the set of trajectories, and the (Ekeland) metric:

$$\begin{aligned} d_I(u_1(.),u_2(.)) ~=~ {\mathrm{meas}} ~ \{ t\in I \; : \; u_1(t) \ne u_2(t) \} \ , \end{aligned}$$

on the set of controls, we derive estimates which are linear w.r.t. the distance between the left end-points of a reference process and an approximating process. Such estimates, often referred to as nonanticipative Filippov-type theorems (cf. [6, 7]), ensure that it is possible to construct approximating feasible controls (and trajectories) in a nonanticipative way, and, therefore, to build up suitable nonanticipative strategies.
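
For sampled controls, the Ekeland metric is simply the step length times the number of grid cells on which the two controls differ. A minimal sketch (grid approximation; illustration only):

```python
import numpy as np

# Sketch: the Ekeland metric d_I(u1, u2) = meas{ t in I : u1(t) != u2(t) },
# approximated for controls sampled on a uniform grid of step dt.
def ekeland_distance(u1, u2, dt):
    return dt * np.count_nonzero(u1 != u2)

u1 = np.array([1.0, 1.0, -3.0, 1.0, -3.0])
u2 = np.array([1.0, -3.0, -3.0, 1.0, 1.0])
print(ekeland_distance(u1, u2, dt=0.2))   # 0.4: the controls differ on 2 cells
```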

Here, we restrict attention to the case in which the boundary of A is smooth, illustrating how the approach suggested in [8] can be extended to obtain Filippov-type theorems. The basic idea is to modify the control on a suitable measurable set whose measure is bounded above by a number depending linearly on the distance between the two distinct left end-points. Imposing stronger inward pointing conditions (as in [9] and [15]), one can construct controls with similar properties for state constraint sets that are merely assumed to be closed.

Theorem 3.1

[Nonanticipative Filippov’s Theorem] Fix \(r_0>0\). Assume that hypotheses (H1)\('\)–(H3)\('\), (H4) and (CQ) are satisfied. Then there exists a constant \(K>0\) (whose magnitude depends only on the parameter \(r_0\) and the data of assumptions (H1)\('\)-(H3)\('\), (H4) and (CQ)) with the following property: given any \((\tau ,y_1) \in [0,T]\times \big (((r_0+1)e^{M\tau }-1)\mathbb {B}\cap A \big )\) and feasible process \((x_1(.),u_1(.))\) on \([\tau ,T]\) such that \(x_1(\tau )=y_1\), for any \(y_2\in A \cap ((r_0+1)e^{M\tau }-1)\mathbb {B}\), and any \(d>0\) such that \(d\ge |y_2 - y_1|\), there exists a feasible process \((x_2(.),u_2(.))\) on \([\tau ,T]\) with \(x_2(\tau )=y_2\) such that the construction of \(u_2(.)\) is nonanticipative, and

$$\begin{aligned}&x_2(t) \in {\mathrm{int}} \ A, \;\; \text{ for } \text{ all } t\in (\tau ,T]\ , \end{aligned}$$
(12)
$$\begin{aligned}&d_{[{\tau ,T]}}(u_1(.),u_2(.))\;\le \; K \, d \ , \end{aligned}$$
(13)
$$\begin{aligned}&\Vert x_1(.)- x_2(.)\Vert _{L^\infty (\tau ,T)} \;\le \; K \, d \;. \end{aligned}$$
(14)

We start by proving a local version of the theorem in the form of the following lemma.

Lemma 3.2

Fix \(r_0>0\). Assume that (H1)\('\)–(H3)\('\), (H4) and (CQ) are satisfied. Then, there exist constants \(\delta _0>0\), \(K_0>0\) and \(K_1>1\) (depending only on the parameter \(r_0\) and the data provided by assumptions (H1)\('\)-(H3)\('\), (H4) and (CQ)) satisfying the following property: take any \(\tau \in [0,T]\), \(y,y'\in A\cap ((r_0+1)e^{M\tau }-1)\mathbb {B}\), any \(d>0\) such that \(d\ge |y - y'|\), and any feasible process (x(.), u(.)) on \([\tau ,T]\) with \(x (\tau )=y\). Then, there exists a process \((x'(.),u'(.))\) on \([\tau , T]\) such that \(x' (\tau ) = y'\),

$$\begin{aligned} x'(t) \in {\mathrm{int}} \ A \;\;\; \text{ for } \text{ all } \ t\in (\tau ,(\tau +\delta _0) \wedge T]\ , \end{aligned}$$

the construction of \(u'(.)\) is nonanticipative, and

$$\begin{aligned}&d_{[\tau ,T]}(u(.),u'(.))\;\le \; K_0 ~ d \ ,\\&\Vert x(.)- x'(.)\Vert _{L^\infty (\tau ,T)} \le K_1 \Big ( |y - y' | + d_{[\tau ,T]}(u(.),u'(.)) \Big )\;. \end{aligned}$$

Proof

Fix any \(r_0>0\), \(\tau \in [0,T]\), \(y ,y' \in A\cap ((r_0+1)e^{M\tau }-1)\mathbb {B}\), and any \(d>0\) such that \(d\ge |y - y'|\). The number \(R_0 := e^{MT}(1 + r_{0})\) is an upper bound on the magnitude of all state trajectories x(.) on \([\tau ,T]\) such that \(x(\tau )\in ((r_0+1)e^{M\tau }-1)\mathbb {B}\), and, from (H2)\('\), \(M(1+R_0)\) is an upper bound for the corresponding velocities \(\dot{x}(.)\). Write \(k_h\) and \(k_h'\) for the Lipschitz constants of h(.) and \(\nabla h(.)\), respectively, on \(R_0 \mathbb {B}\). Denote by \(\omega (.)\) a modulus of continuity for the function \(t\rightarrow \int _0^t {\tilde{k}} _f(s)\ ds\).

Assumptions (H1)\('\)–(H3)\('\), (H4) and (CQ) guarantee also the existence of \(\rho _0>0\) and \(\bar{\delta } >0\) with the following property: for any \(\tau \in [0,T]\) and any process (x(.), u(.)) on \([\tau ,(\tau + \bar{\delta }) \wedge T]\) with \(x(\tau ) \in A\cap ((r_0+1)e^{M\tau }-1)\mathbb {B}\), one of the following two cases occurs: either

Case 1: \(h(x(\tau )) \le - \rho _0\); if we have this initial condition, then \( x(t) \in A \; \text{ for } \text{ all } t\in [\tau ,(\tau + \bar{\delta }) \wedge T]\);

or

Case 2: \(- \rho _0 < h(x(\tau )) \le 0 \); in this case there exists a control \(\bar{u}(.) :[\tau ,(\tau +\bar{\delta }) \wedge T] \rightarrow \mathbb {R}^m\) such that \(\bar{u}(t) \in {\widetilde{U}} (t)\) a.e. and

$$\begin{aligned} \nabla h(x(t)) \cdot {\tilde{f}} (t,x(t),\bar{u}(t)) \le - \eta \;\; \; a.e.\; [\tau ,(\tau + \bar{\delta }) \wedge T]\, . \end{aligned}$$

We define the continuous function \(\beta (.):\mathbb {R}_+\rightarrow \mathbb {R}_+\) as follows:

$$\begin{aligned} \beta (\delta )\;:=\; 2M(1+R_0)\, \left( k_{h}'M(1+R_0)\delta + k_{h}\omega (\delta ) \right) \,e^{\int _0^T {\tilde{k}} _f(s)\ ds}\, . \end{aligned}$$
(15)

Since \(\beta (.)\) is monotone increasing and \(\beta (0)=0\), we can choose \(\delta _0:=\delta (r_{0})\in (0, \bar{\delta })\) such that \( \beta (\delta _0) < \eta \,\). We also select \(K_0:= K(r_0)>0\) such that

$$\begin{aligned} K_0>\frac{k_h e^{\int _0^T {\tilde{k}} _f(s)\ ds}}{\eta - \beta (\delta _0)}\;. \end{aligned}$$
(16)

Now we fix any \(\tau \in [0,T]\) and a feasible process (x(.), u(.)) on \([\tau ,T]\) such that \(x(\tau )=y\). Write \(\bar{\tau }:= (\tau +\delta _{0})\wedge T\) to simplify notation. We shall construct a second process \((x'(.),u'(.))\) on \([\tau ,T]\) such that \(x'(\tau )=y'\),

$$\begin{aligned} x'(t) \in {\mathrm{int}} \ A \quad \forall \ t \in (\tau , \bar{\tau }] \end{aligned}$$

and

$$\begin{aligned}&d_{[\tau ,T]}(u(.),u'(.))\;\le \; K_0\, d \,, \\&\Vert x(.)- x'(.)\Vert _{L^\infty (\tau ,T)} \le K_1 \Big ( |y - y' | + d_{[\tau ,T]}(u(.),u'(.)) \Big )\;. \end{aligned}$$

Consider the (not necessarily feasible) process \(({\hat{x}}(.),u(.))\) with left end-point \(\hat{x}(\tau )=y'\). We shall assume that Case 2 occurs for, otherwise, \((x'(.),u'(.))=(\hat{x}(.),u(.))\) already has the desired properties. Notice that from the Gronwall inequality (cf. for instance [22, Lemma 2.4.4]) we obtain:

$$\begin{aligned} ||x(.) - {\hat{x}} (.)||_{L^{\infty }(\tau ,t)} \,\le \, e^{\int _0^T {\tilde{k}} _f(s)\ ds}\, |y-y'|\,, \quad \forall t\in [\tau ,T]\ . \end{aligned}$$
(17)

For all \(t \in [\tau ,\bar{\tau }]\) we denote by \(\mathcal{A}(t)\) the measurable set

$$\begin{aligned} \mathcal{A}(t)\;\,=\; \left\{ s\in [\tau , t] \; : \; \frac{d}{ds}h({\hat{x}} (s)) \ge 0 \; \right\} , \end{aligned}$$
(18)

where \(\frac{d}{ds}h({\hat{x}} (s))\) is the total derivative of the map \(s\rightarrow h({\hat{x}} (s))\). Consider now the process \((x'(.),u'(.))\) on \([\tau ,T]\) such that \(x'(\tau )=y'\) and

$$\begin{aligned} u'(t)\;=\; \left\{ \begin{array}{ll} \bar{u}(t) &{} \text{ if } \;\;\;t\in [\tau ,\bar{\tau }], \;\;\; \frac{d}{dt}h({\hat{x}} (t)) \ge 0 \;\;\; \text{ and } \;\; \; {\mathrm{meas}} \{\mathcal{A}(t)\} ~< ~ K_0 d \ \\ u(t)&{} \text{ otherwise } \ , \end{array} \right. \end{aligned}$$
(19)

where the control \({\bar{u}}(t) \in {\widetilde{U}} (t) \) satisfies the following inward pointing property (recall that we are in case 2):

$$\begin{aligned} \nabla h({\hat{x}} (t)) \cdot {\tilde{f}} (t,{\hat{x}} (t),\bar{u}(t)) \le - \eta \;\; \; a.e.\; t \in [\tau , \bar{\tau }]\, . \end{aligned}$$

Observe that the construction above is nonanticipative, in the sense that if two controls \(u_1(.)\) and \(u_2(.)\) coincide a.e. on some time interval \([\tau ,\tau ^*]\subset [\tau ,T]\), then the corresponding controls \(u_1'(.)\) and \(u_2'(.)\), defined by (19), coincide a.e. on \([\tau ,\tau ^*]\).
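
A discrete-time sketch of the modification (19) may help (our illustration only, with a scalar state and an Euler grid; `f`, `grad_h`, `ubar`, `y2` and `budget`, the latter playing the role of \(K_0 d\), are placeholders). Note that `u_new[k]` depends only on \(u(.)\) up to step k, which is precisely the nonanticipativity just observed.

```python
import numpy as np

# Sketch of the control modification (19): scalar state, Euler grid
# (illustration only; f, grad_h, ubar, y2, budget are placeholder data).
def modified_control(u, ubar, y2, f, grad_h, dt, budget):
    xhat = y2                       # \hat x has left end-point y2 and is
    u_new, used = u.copy(), 0.0     # driven by the ORIGINAL control u
    for k in range(len(u)):
        outward = grad_h(xhat) * f(xhat, u[k]) >= 0.0   # d/dt h(\hat x) >= 0
        if outward and used < budget:
            u_new[k] = ubar[k]      # switch to the inward-pointing control
            used += dt              # measure of the modified set stays <= budget
        xhat = xhat + dt * f(xhat, u[k])   # advance \hat x with u, not u_new
    return u_new
```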

Set

$$\begin{aligned} {\bar{\sigma }} := \sup \{ t\in [\tau ,\bar{\tau }] \; : \; {\mathrm{meas}} \{\mathcal{A}(t)\}< K_0 d \} \, . \end{aligned}$$

Notice that

$$\begin{aligned} d_{[\tau ,T]}(u(.),u'(.))=d_{[\tau ,\bar{\tau }]}(u(.),u'(.))\,\le \, {\mathrm{meas}} \{\mathcal{A}({\bar{\sigma }} )\} \,\le \,K_0 d \;, \end{aligned}$$

and, owing to Gronwall’s Inequality, we also have

$$\begin{aligned}&||x'(.) - {\hat{x}} (.)||_{L^{\infty }(\tau ,t)} \le 2M (1+R_0)\, e^{\int _0^T {\tilde{k}} _f(s)\ ds}\, d_{[\tau ,\bar{\tau }]}(u(.),u'(.)) \nonumber \\&\quad \le 2M (1+R_0)\, e^{\int _0^T {\tilde{k}} _f(s)\ ds}\, \text{ meas }\, \{[\tau , t \wedge {\bar{\sigma }}]\cap \mathcal{A}(t)\}\,, \quad \forall t\in [\tau ,T]\,. \end{aligned}$$
(20)

Then, from (17) and (20), taking \(K_1:= (1+ 2M (1+R_0))\, e^{\int _0^T {\tilde{k}} _f(s)\ ds}\), we also derive

$$\begin{aligned} ||x(.) - x' (.)||_{L^{\infty }(\tau ,t)} ~ \le ~ K_1 \Big ( |y - y' | + d_{[\tau ,T]}(u(.),u'(.)) \Big )\;, \quad \forall t\in [\tau ,T]\,. \end{aligned}$$

It remains to prove that the process \((x'(.),u'(.))\) is feasible on \([\tau ,\bar{\tau }]\), and more precisely \(x'(t) \in {\mathrm{int}} \ A\), for all \(t \in (\tau , \bar{\tau }]\).

Clearly, for each \(t \in [\tau , \bar{\tau }]\), we obtain

$$\begin{aligned} h(x'(t))= & {} h(x'(\tau )) + \int _{\tau }^{t}\nabla h(x'(s))\cdot {\tilde{f}} (s,x'(s),u'(s))ds \nonumber \\\le & {} h(x'(\tau )) + \int _{\tau }^{t}\nabla h(\hat{x}(s))\cdot {\tilde{f}}(s,x'(s),u'(s))ds \nonumber \\&+\, k_{h}' M(1+R_0)\, |t-\tau |\, ||x'(.)-\hat{x}(.)||_{L^{\infty }(\tau ,t)} \nonumber \\\le & {} h(x'(\tau )) + \int _{\tau }^{t}\nabla h(\hat{x}(s))\cdot {\tilde{f}} (s,\hat{x}(s),u'(s))\ ds \nonumber \\&+\, \Big ( k_{h}' M(1+R_0) |t-\tau |+ k_{h} \omega (|t-\tau |) \Big )\,~ ||x'(.)-\hat{x}(.)||_{L^{\infty }(\tau ,t)}\;. \end{aligned}$$
(21)

We claim that \(h(x'(t))< 0\) for all \(t \in (\tau ,\bar{\tau }]\). We consider the following two possible cases:

Case (i): \(t \in [\tau ,{\bar{\sigma }}]\). We have

$$\begin{aligned}&\int _{\tau }^{t}\nabla h(\hat{x}(s))\cdot {\tilde{f}} (s,\hat{x}(s),u'(s))\ ds \;\le \; \int _{[\tau ,t]\backslash \mathcal{A}(t)} \nabla h({\hat{x}} (s)) \cdot {\tilde{f}} (s,\hat{x}(s),u(s)) \ ds \\&\qquad +\,\int _{[\tau ,t]\cap \mathcal{A}(t)} \nabla h({\hat{x}} (s)) \cdot {\tilde{f}} (s,\hat{x}(s),\bar{u}(t)) \ ds \\&\quad < \; 0\,-\,\eta \, \text{ meas }\,\{ \mathcal{A} (t)\}\, . \end{aligned}$$

As a consequence, from the definition of \(\beta (\delta _0)\) and inequalities (20) and (21), it follows that

$$\begin{aligned} h(x'(t)) \;< \; -(\eta - \beta (\delta _{0})) \text{ meas }\,\{ \mathcal{A}(t) \} \; < \;0\, . \end{aligned}$$
(22)

Case (ii): \(t \in [{\bar{\sigma }}, \bar{\tau }]\). In this case, since \( h({\hat{x}} (\tau )) = h(x'(\tau ))\), we have that

$$\begin{aligned}&\int _{\tau }^{t}\nabla h(\hat{x}(s))\cdot {\tilde{f}} (s,\hat{x}(s),u'(s))\ ds \; = \; \int _{\tau }^{t} \nabla h({\hat{x}} (s)) \cdot {\tilde{f}} (s,\hat{x}(s),u(s)) \ ds \\&\qquad +\,\int _{\tau }^{t} \nabla h({\hat{x}} (s)) \cdot [{\tilde{f}} (s,\hat{x}(s),u'(s)) -{\tilde{f}} (s,\hat{x}(s),u(s))] \ ds \\&\quad \le h(\hat{x}(t))-h({\hat{x}} (\tau )) \,-\, \eta \text{ meas }\,\{ \mathcal{A}({\bar{\sigma }}) \} = h(\hat{x}(t)) - h(x'(\tau ))\,-\, \eta ~ K_0 d \;. \end{aligned}$$

From this estimate, (16) and inequalities (17), (20) and (21), we derive that

$$\begin{aligned} h(x'(t))\le & {} k_h||x(.)-\hat{x}(.)||_{L^{\infty }(\tau ,t)} \;-\; (\eta - \beta (\delta _{0}))\, K_0 \, d \\\le & {} [k_h e^{\int _0^T {\tilde{k}} _f(s)\ ds} \;-\; (\eta - \beta (\delta _{0}))\, K_0] \, d \\< & {} 0\,. \end{aligned}$$

This confirms our claim and the proof of the lemma is now complete. \(\square \)

Proof of Theorem 3.1

Assume that (H1)\('\)–(H3)\('\), (H4) and (CQ) are satisfied, and fix \(r_0>0\). Then consider the constants \(\delta _0>0\), \(K_0>0\) and \(K_1>1\) provided by Lemma 3.2. Choose \(N_0\) to be the first integer i for which \(\tau +i\delta _{0}\ge T\). Observe that \(N_0\) satisfies \(N_0\le T/\delta _{0}+1\), and does not depend on the choice of the reference time \(\tau \), the left end-points \(y_1\) and \(y_2\), or the process \((x_1(.),u_1(.))\).

We recursively apply Lemma 3.2 to the reference process \((x(.),u(.))=(x_1(.),u_1(.))\) on \([\tau ,T]\), to obtain a finite sequence of processes \((x_{i}'(.),u_{i}'(.))\) on \([\tau ,T]\), \(i=1,\ldots ,N_0\), where the ith process is an extension of the \((i-1)\)th process, is feasible on \([\tau ,(\tau +i\delta _0)\wedge T]\) (more precisely, \(x_{i}'(t)\in {\mathrm{int }} \ A\) for all \(t\in (\tau ,(\tau +i\delta _0)\wedge T]\)), and the ith control \(u_{i}'(.)\) is constructed in a nonanticipative way. We shall finally take \((x_2(.),u_2(.)):= (x_{N_0}'(.),u_{N_0}'(.))\). This process, then, is such that \(x_2(\tau )=y_2\), \(x_2(t)\in {\mathrm{int }} \ A\) for all \(t \in (\tau ,T]\), and satisfies all the properties required in the statement of the theorem.

Writing \(\tau _0=\tau \), \(\tau _i= \tau +(i\delta _{0})\), \(y=x(\tau _0)=y_1\), \(y'=x_1'(\tau _0)=y_2\), we have, for \(i=1,\ldots ,N_0\)

$$\begin{aligned}&d_{[\tau _{i-1},T]}(u(.),u_{i}'(.)) \;\le \; K_0\, d_{i-1} \\&\Vert x(.)- x_i'(.)\Vert _{L^\infty (\tau _{i-1},T)} \le K_1 \Big ( |x(\tau _{i-1}) - x_{i}'(\tau _{i-1})| + d_{[\tau _{i-1},T]}(u(.),u_{i}'(.)) \Big ) \ , \end{aligned}$$

where \(d_{i-1} = |x(\tau _{i-1}) - x_{i}'(\tau _{i-1})|\), for \(i=1,\ldots ,N_0\), and \(d_0=d \; (\ge |y-y'|)\). Therefore, for \(i=1,\ldots , N_0\), we obtain

$$\begin{aligned} d_{[\tau _{i-1},T]}(u(.),u_{i}'(.)) \;\le \; K_0\,K_1^{i-1} \, (1+K_0)^{i-1} d \end{aligned}$$
(23)

and

$$\begin{aligned} \Vert x(.)- x_i'(.)\Vert _{L^\infty (\tau _{i-1},T)} \;\le \; K_1^{i-1} \, (1+K_0)^{i-1} d. \end{aligned}$$
(24)

As a consequence we derive, from (23),

$$\begin{aligned} d_{[\tau ,T]}(u(.),u_{N_0}'(.))\le & {} \sum _{i=1}^{N_0} d_{[\tau _{i-1},T]}(u(.),u_{i}'(.)) \\\le & {} \sum _{i=1}^{N_0} K_0\,K_1^{i-1} \, (1+K_0)^{i-1}~ d \\\le & {} K_1^{N_0} \, (1+K_0)^{N_0}~d \ , \end{aligned}$$

and, from (24),

$$\begin{aligned} \Vert x(.)- x_{N_0}'(.)\Vert _{L^\infty (\tau ,T)}\le & {} \max _{i=1, \dots , N_0} \Vert x(.)- x_i'(.)\Vert _{L^\infty (\tau _{i-1},T)} \\\le & {} K_1^{N_0} \, (1+K_0)^{N_0}~d \ . \end{aligned}$$

Thus, taking \(K\,:=\, K_1^{N_0} \, (1+K_0)^{N_0}\), all the assertions of Theorem 3.1 are confirmed. \(\square \)

Assume that we are given initial data and a control function (not necessarily feasible). A modification of the proof of Theorem 3.1 provides a nonanticipative construction of a feasible control, for the reference initial data, which satisfies some specific inward pointing properties, at the (sometimes ‘affordable’) price of giving up the distance estimates between the controls involved. In particular we obtain that, for any initial data, the set of feasible processes is non-empty.

Proposition 3.3

Fix any \(r_0>0\). Assume that (H1)\('\)–(H3)\('\), (H4) and (CQ) are satisfied. Take any initial time \(t_0 \in [0,T]\) and any \(x_0 \in A \cap (e^{Mt_0}(r_0+1) -1) \mathbb {B}\). Consider a measurable function u(.) defined on \([t_0,T]\) with \( u(t)\in {\widetilde{U}} (t)\) a.e. We can construct a feasible process \(({\tilde{x}}(.), {\tilde{u}}(.))\) on \([t_0,T]\) such that \({\tilde{x}}(t_0)=x_0\), \({\tilde{x}} (t) \in {\mathrm{int }} \ A\) for all \(t\in (t_0,T]\), and the map \(u(.)\rightarrow \gamma _{t_0,x_0}(u(.)):={\tilde{u}}(.)\) is nonanticipative.

Proof

Consider the process (x(.), u(.)) on \([t_0,T]\) such that \( x(t_0)=x_0\). The construction of a feasible process \(({\tilde{x}}(.), {\tilde{u}}(.))\) follows the lines of the proofs of Lemma 3.2 and Theorem 3.1, in which we do not fix an upper bound on the measure of the set where we modify the reference control u(.). More precisely, starting from (x(.), u(.)), we can construct a selection \({\bar{u}} (.)\) as in Case 2 of the proof of Lemma 3.2. Let \(M_0>0\) be an upper bound for the velocities of all trajectories having \(x_0\) as left end-point. Take \(\delta _0\in (0,\min \{ \frac{\rho }{2k_hM_0} ; \frac{\eta }{M_0^2k_h'+k_hk_fM_0}\})\) such that \(\beta (\delta _0)<\eta \). Here, \(k_h\) and \(k_h'\) are the Lipschitz constants for h(.) and \(\nabla h(.)\), respectively, on \(e^{MT}(1+r_0) \mathbb {B}\), \(\rho \) and \(\eta \) are the positive constants provided by (CQ), and \(\beta (.)\) is the positive function defined in (15) (cf. the proof of Lemma 3.2). It is not restrictive to assume that \(-\rho /2 < h(x_0) \le 0\). Then, we consider the process \(({\tilde{x}}(.),{\tilde{u}}(.))\) on \([t_0,T]\) such that \({\tilde{x}}(t_0)=x_0\) and

$$\begin{aligned} {\tilde{u}} (t)\;:=\; \left\{ \begin{array}{ll} \bar{u}(t) &{} \text{ if } \;\; t\in [t_0,T]\; \text{ and } \; \frac{d}{dt}h( x (t)) > - \eta \\ u(t)&{} \text{ otherwise } \ , \end{array} \right. \end{aligned}$$

where the control function \({\bar{u}}(.) \in {\widetilde{U}} (.) \) is chosen to satisfy the inward pointing condition:

$$\begin{aligned} \nabla h( x (t)) \cdot {\tilde{f}} (t, x (t),\bar{u}(t)) \le - \eta \;\; \; a.e.\; t \in [t_0, T]\, . \end{aligned}$$

Observe that the construction of \({\tilde{u}}(.)\) is nonanticipative and, for all \(t \in [t_0, (t_0+\delta _0)\wedge T]\),

$$\begin{aligned} d_{[t_0,t]}(u(.),{\tilde{u}} (.)) \, = \, {\mathrm{meas}} \left\{ {\widetilde{\mathcal{A}}} (t ):=\left\{ s\in [t_0,t] \; : \; \frac{d}{ds}h( x (s)) > - \eta \right\} \right\} \;. \end{aligned}$$

Gronwall’s inequality yields

$$\begin{aligned}&||x(.) - {\tilde{x}} (.)||_{L^{\infty }(t_0,t)} \le 2M (1+R_0)\, e^{\int _0^T {\tilde{k}} _f(s)\ ds}\, d_{[t_0,t]}(u(.),{\tilde{u}} (.)) \\&\quad = 2M (1+R_0)\, e^{\int _0^T {\tilde{k}} _f(s)\ ds}\, {\mathrm{meas}} \{{\widetilde{\mathcal{A}}} (t )\} \ . \end{aligned}$$

The same analysis leading to inequalities (21) and (22) now gives:

$$\begin{aligned} h({\tilde{x}}(t)) \; < \;0 \quad \text{ for } \text{ all } t \in {(t_0, (t_0+\delta _0)\wedge T]} \, . \end{aligned}$$

The recursive argument appearing in the proof of Theorem 3.1 may be used, once again, to extend the process \(({\tilde{x}}(.),{\tilde{u}}(.) )\), preserving non-anticipativity. The proof of the proposition is complete. \(\square \)

3.1 Construction of Nonanticipative Maps and Strategies

Consider system (1), and fix an initial time \(t_0 \in [0,T] \) and a control \({\tilde{v}} (.)\in \mathcal{V} [t_0,T]\). Observe that assumptions (H1)–(H3) and (CQ1) guarantee that conditions (H1)\('\)–(H3)\('\) and (CQ) are satisfied by the function

$$\begin{aligned} {\tilde{f}} (t,x,u) :=f(x,u,{\tilde{v}} (t)) \ . \end{aligned}$$
(25)

The same property is valid when we fix a control \({\tilde{u}} (.)\in \mathcal{U} [t_0,T]\) and consider the function \((t,x,v)\rightarrow f(x,{\tilde{u}} (t),v)\). A consequence of Proposition 3.3 is the following proposition.

Proposition 3.4

Assume that (H1)–(H4) are satisfied. Take any \((t_0,x_0)\in [0,T]\times A\).

  1. (i)

    If in addition (CQ1) is satisfied, then

    $$\begin{aligned} \mathcal{V} (t_0,x_0)= \mathcal{V} [t_0,T], \;\; \;\; \mathcal{U} ([t_0,T],x_0) \ne \emptyset \;\;\;\; {\mathrm{and }} \;\;\;\; S_U(t_0,x_0)\ne \emptyset \ . \end{aligned}$$

    In particular, there exists a nonanticipative strategy \(\Gamma _{t_0,x_0} \in S_U(t_0,x_0)\) such that for any \(v(.) \in \mathcal{V} (t_0,x_0)\) we have \( x (t) \in {\mathrm{int }} \ A\) for all \(t\in (t_0,T]\), where \(x(t):=x[t_0,x_0;\Gamma _{t_0,x_0}(v) (.), v(.)](t)\), for \(t\in [t_0,T]\).

  2. (ii)

    If in addition (CQ2) is satisfied, then

    $$\begin{aligned} \mathcal{U} (t_0,x_0)= \mathcal{U} [t_0,T], \;\; \;\; \mathcal{V} ([t_0,T],x_0) \ne \emptyset \;\;\;\; {\mathrm{and }} \;\;\;\; S_V(t_0,x_0)\ne \emptyset \ . \end{aligned}$$

    In particular, there exists a nonanticipative strategy \({\tilde{\Gamma }} _{t_0,x_0} \in S_V(t_0,x_0)\) such that for any \(u(.) \in \mathcal{U} (t_0,x_0)\) we have \( x (t) \in {\mathrm{int }} \ A\) for all \(t\in (t_0,T]\), where \(x(t):=x[t_0,x_0;u(.), {\tilde{\Gamma }} _{t_0,x_0}(u)(.)](t)\), for \(t\in [t_0,T]\).

Proof

We show only (i), since (ii) can be proved in a similar way. Fix \({\hat{u}} \in \mathcal{U} [t_0,T]\). Take any \(v(.)\in \mathcal{V} [t_0,T]\). Then we can consider

$$\begin{aligned} {\tilde{f}} (t,x,u) :=f(x,u, v (t)) \end{aligned}$$

and apply Proposition 3.3 to obtain a feasible control \(\gamma _{t_0,x_0}({\hat{u}})(.)\in \mathcal{U} ([t_0,T],x_0)\). We define

$$\begin{aligned} \Gamma _{t_0,x_0}(v)(.):= \gamma _{t_0,x_0}({\hat{u}})(.) \ . \end{aligned}$$
(26)

It immediately follows that \(\mathcal{V} (t_0,x_0)= \mathcal{V} [t_0,T]\), \(\mathcal{U} ([t_0,T],x_0) \ne \emptyset \) and \( x (t) \in {\mathrm{int }} \ A\) for all \(t\in (t_0,T]\), where \(x(.):=x[t_0,x_0;\Gamma _{t_0,x_0}(v) (.), v(.)](.)\). It remains to show that \(\Gamma _{t_0,x_0}\) is nonanticipative. Take \(v_1(.), v_2(.)\in \mathcal{V} [t_0,T]\) such that \(v_1(.)=v_2(.)\) a.e. on \([t_0,t_0+\tau ]\), for some \(\tau \in [0,T-t_0]\). Write \({\tilde{f}} _1 (t,x,u) :=f(x,u, v_1 (t))\) and \({\tilde{f}} _2 (t,x,u) :=f(x,u, v_2 (t))\). Let \(\gamma ^1_{t_0,x_0}\) and \(\gamma ^2_{t_0,x_0}\) be the nonanticipative maps provided by Proposition 3.3 and associated respectively with \({\tilde{f}} _1\) and \({\tilde{f}} _2\). Since \({\tilde{f}} _1 (t,x,u)= {\tilde{f}} _2 (t,x,u)\) a.e. on \([t_0,t_0+\tau ]\), it immediately follows that

$$\begin{aligned} \gamma ^1_{t_0,x_0}({\hat{u}})(t)=\gamma ^2_{t_0,x_0}({\hat{u}})(t), \quad \text{ for } \text{ a.e. } \; t \in [t_0,t_0+\tau ] \end{aligned}$$

and therefore

$$\begin{aligned} \Gamma _{t_0,x_0}(v_1)(t)=\Gamma _{t_0,x_0}(v_2)(t), \quad \text{ for } \text{ a.e. } \; t \in [t_0,t_0+\tau ], \end{aligned}$$

confirming that \(\Gamma _{t_0,x_0} \in S_U(t_0,x_0)\). \(\square \)

In Sect. 1 we introduced the notion of nonanticipative strategies \(S_U\) and \(S_V\) for the two players controlling system (1). We shall also make use of an extension of this concept, introducing a notion of nonanticipative maps between sets of admissible controls, not necessarily employed by different players, and having possibly different initial times. Take a pair of initial data points \((t_1,x_1), (t_2,x_2)\in [0,T]\times A\). If conditions (H1)–(H4) and (CQ1) are satisfied, from Proposition 3.4 we know that \(\mathcal{U} (t_1,x_1)\ne \emptyset \) and \(\mathcal{U} (t_2,x_2)\ne \emptyset \). A map \(\gamma : \mathcal{U} (t_1,x_1) \rightarrow \mathcal{U} (t_2,x_2)\) is called nonanticipative if, for any \(\tau \in [0,T-t_1]\) and any \(u,u'\in \mathcal{U} (t_1,x_1) \) such that \(u(.)=u'(.)\) a.e. on \([t_1,t_1+\tau ]\), we have \(\gamma (u)(.)=\gamma (u')(.)\) a.e. on \([t_2,(t_2+\tau )\wedge T]\). An analogous notion applies to maps \(\gamma ' : \mathcal{V} (t_1,x_1) \rightarrow \mathcal{V} (t_2,x_2)\).

We draw attention to an immediate consequence of Theorem 3.1 and Propositions 3.3 and 3.4, summarized in the following corollary.

Corollary 3.5

Fix any \(r_0>0\). Assume that (H1)–(H4) and (CQ1) are satisfied. Then there exists a constant \(K>0\) such that for any initial time \(t_0 \in [0,T]\), for any \(x_1\), \(x_2 \in A \cap r_0 \mathbb {B}\), and for any \(v (.)\in \mathcal{V} [t_0,T]\), we can find a nonanticipative map \(\gamma _v:\mathcal{U} (t_0,x_1) \longrightarrow \mathcal{U} (t_0,x_2)\) with the following property: for any \( u_1(.)\in \mathcal{U} (t_0,x_1)\) we have

$$\begin{aligned}&d_{[t_0,T]}(u_1(.),\gamma _v(u_1)(.))\;\le \; K\, |x_1-x_2|\,, \end{aligned}$$
(27)
$$\begin{aligned}&\Vert x_1(.) - x_2(.)\Vert _{L^\infty (t_0,T)} \le K |x_1 - x_2 | \;, \end{aligned}$$
(28)

where \(x_1(.)\) and \(x_2(.)\) are the trajectories associated with the admissible controls \((u_1(.),v(.))\in {\mathcal{AD}}(t_0,x_1)\) and \((u_2(.):=\gamma _v(u_1)(.),v(.))\in {\mathcal{AD}}(t_0,x_2)\) for system (1).

Remark 3.6

Corollary 3.5 has an obvious symmetric counterpart in which we can take any \(x_1\), \(x_2 \in A \cap r_0 \mathbb {B}\) and any \(u(.)\in \mathcal{U} [t_0,T]\), obtaining a nonanticipative map \(\gamma _u:\mathcal{V} (t_0,x_1) \longrightarrow \mathcal{V} (t_0,x_2)\) and estimates as (27) and (28).

4 Properties of the Lower and Upper Value Functions

Proposition 4.1

(Dynamic Programming Principle) Assume (H1)–(H4). For any \((t_0,x_0)\in [0,T]\times A\) and for all \(\sigma \in (0,T-t_0]\) we have the following properties:

  1. (i)

    if in addition (CQ1) is satisfied, then

    $$\begin{aligned}&V^\flat (t_0,x_0) = \inf _{\alpha \in S_U([t_0, t_0+\sigma ] ,x_0)}\sup _{v\in \mathcal{V}(t_0,x_0)}\quad \left\{ \int _{t_0}^{t_0+\sigma } L(t, x[t_0,x_0;\alpha (v),v](t), \right. \nonumber \\&\qquad \qquad \qquad \quad \left. \alpha (v)(t),v(t) ) \ dt + V^\flat \big (t_0+\sigma ,x[t_0,x_0;\alpha (v),v](t_0+\sigma )\big ) \right\} . \nonumber \\ \end{aligned}$$
    (29)
  2. (ii)

    if in addition (CQ2) is satisfied, then

    $$\begin{aligned}&V^\sharp (t_0,x_0) = \sup _{\beta \in S_V([t_0, t_0+\sigma ],x_0)}\inf _{u\in \mathcal{U}(t_0,x_0)}\quad \Big \{\int _{t_0}^{t_0+\sigma } L(t, x[t_0,x_0;u,\beta (u)](t), \nonumber \\&\qquad \qquad \qquad \quad u(t),\beta (u)(t) ) \ dt + V^\sharp \big (t_0+\sigma ,x[t_0,x_0;u,\beta (u)](t_0+\sigma )\big ) \Big \}. \nonumber \\ \end{aligned}$$
    (30)

The proof of Proposition 4.1 is based on standard arguments and there is no substantial difference from the case without state constraints (cf. [2, 11, 14, 17, 19]).

Proposition 4.2

(Lipschitz continuity) Suppose that assumptions (H1)–(H4) and (CQ1) (respectively (CQ2)) are satisfied. Then, the lower value function \(V^\flat \) (respectively the upper value function \(V^\sharp \)) is locally Lipschitz continuous on \([0,T] \times A\).

Proof

We prove the local Lipschitz regularity just for \(V^\flat \), since the case for \(V^\sharp \) can be treated in a similar way.

Fix any \(r_0>0\). Let \((t_1,x_1),(t_2,x_2)\in [0,T]\times (A\cap r_0\mathbb {B})\). Write \(R_0:=e^{MT}(1+r_0)\), \(T_1:=(T+t_1-t_2)\wedge T\) and \(T_2:=(T+t_2-t_1)\wedge T\). For any given \(\varepsilon >0\), invoking the definition of \(V^\flat \), there exists a nonanticipative strategy \(\alpha _1 \in S_U(t_1,x_1)\) such that

$$\begin{aligned} V^\flat (t_1,x_1) + \varepsilon ~\ge ~ \sup _{v\in \mathcal{V}(t_1,x_1)}J(t_1,x_1,\alpha _1(v), v)\;. \end{aligned}$$

We claim that there exist a constant \(K>0\), which depends only on the data of the problem, a nonanticipative map \(\phi :\mathcal{V}(t_2,x_2)\rightarrow \mathcal{V}(t_1,x_1)\) and a nonanticipative strategy \(\alpha _2\in S_U(t_2,x_2)\) such that, for all \(v(.) \in \mathcal{V}(t_2,x_2)= \mathcal{V}[t_2,T]\), we have:

$$\begin{aligned} d_{[t_1,T_1]}(\alpha _1(\phi (v))(.),\alpha _2(v)(.-t_1+t_2)) \;\le \; K\, |x_1-x_2|\,, \end{aligned}$$
(31)

and

$$\begin{aligned} \Vert x_1(.)- x_2(.-t_1+t_2)\Vert _{L^\infty (t_1,T_1)} \; \le \; K |x_1 - x_2 | \;, \end{aligned}$$
(32)

in which \(x_1(t):=x[t_1,x_1;\alpha _1(\phi (v)), \phi (v)](t)\) and \(x_2(t):=x[t_2,x_2;\alpha _2(v),v](t)\).

We start by defining the nonanticipative map \(\phi :\mathcal{V}(t_2,x_2)\rightarrow \mathcal{V}(t_1,x_1)\). We observe that only one of the following two cases can occur: either \(T_1<T\) or \(T_1=T\).

If \(T_1<T\) (i.e. \(T_2=T\)), then we fix a measurable function \({\hat{v}} \in \mathcal{V}[T_1,T]\) and, for all \(v(.) \in \mathcal{V}(t_2,x_2)=\mathcal{V}[t_2,T]\), we set

$$\begin{aligned} \phi (v)(t)\;:=\; \left\{ \begin{array}{ll} v(t-t_1+t_2) &{} \;\;\;\;\; \text{ on } \;\;\;\; \; [t_1,T_1] \ , \\ {\hat{v}} (t) &{} \;\;\;\;\; \text{ on } \;\;\;\; \; (T_1,T] \ , \end{array} \right. \end{aligned}$$
(33)

On the other hand, if \(T_1=T\), then, for all \(v(.) \in \mathcal{V}(t_2,x_2)=\mathcal{V}[t_2,T]\), we write

$$\begin{aligned} \phi (v)(.):=v(.-t_1+t_2) |_{[t_1,T]}. \end{aligned}$$

Observe that the map \(\mathcal{V}(t_2,x_2)\rightarrow {\mathcal{AD}}(t_1,x_1)\) defined by

$$\begin{aligned} v\rightarrow (\alpha _1(\phi (v)), \phi (v)) \end{aligned}$$

is nonanticipative and can be modified giving a nonanticipative map \(\mathcal{V}(t_2,x_2) \rightarrow {\mathcal{AD}}(t_2,x_1)\) as follows:

$$\begin{aligned} v\rightarrow (\tilde{\alpha }_1(\phi (v)), v) \ , \end{aligned}$$

in which, if \(T_1<T\), then

$$\begin{aligned} \tilde{\alpha }_1(\phi (v))(.):= {\alpha }_1(\phi (v)) (. - t_2 + t_1) |_{[t_2,T]} \ . \end{aligned}$$

Otherwise, if \(T_1=T\) (i.e. \(T_2<T\)), then we set

$$\begin{aligned} \tilde{\alpha }_1(\phi (v))(t)\;=\; \left\{ \begin{array}{ll} \alpha _1(\phi (v))(t - t_2 + t_1) &{} \;\;\;\;\; \text{ on } \;\;\;\; \; [t_2,T_2] \ , \\ \Gamma _{T_2,x_2(T_2)}(v)(t) &{} \;\;\;\;\; \text{ on } \;\;\;\; \; (T_2,T] \ , \end{array} \right. \end{aligned}$$
(34)

in which \(x_2(T_2):=x[t_2,x_2; \alpha _1(\phi (v)),v](T_2)\) and \(\Gamma _{T_2,x_2(T_2)}\) is the nonanticipative map defined in (26) (for a fixed measurable function \({\hat{u}} \in \mathcal{U}[T_2,T]\)).

Given any \(v \in \mathcal{V}(t_2,x_2)\) we consider the nonanticipative map \(\gamma _v:\mathcal{U}(t_2,x_2)\rightarrow \mathcal{U}(t_2,x_1)\) provided by Corollary 3.5. It follows that the map \(\mathcal{V}(t_2,x_2)\rightarrow {\mathcal{AD}}(t_2,x_2)\) defined by

$$\begin{aligned} v\rightarrow (\gamma _v(\tilde{\alpha }_1(\phi (v))), v) \ \end{aligned}$$

is nonanticipative and, setting \(\alpha _2(v):=\gamma _v(\tilde{\alpha }_1(\phi (v)))\), we obtain a strategy satisfying all the requirements of the claim and, in particular, estimates (31) and (32).

Using again the definition of \(V^\flat \) and employing the strategy \(\alpha _2\in S_U(t_2,x_2)\) constructed above, we have

$$\begin{aligned} V^\flat (t_2,x_2) ~\le ~ \sup _{v\in \mathcal{V}(t_2,x_2)} \ J(t_2,x_2,\alpha _2(v),v)\;. \end{aligned}$$

Therefore, there exists a feasible control \({\bar{v}} _2\in \mathcal{V}(t_2,x_2)=\mathcal{V}[t_2,T]\) such that

$$\begin{aligned} V^\flat (t_2,x_2) \le J(t_2,x_2,\alpha _2({\bar{v}} _2),{\bar{v}} _2) +\varepsilon \;. \end{aligned}$$

Write \({\bar{x}} _1(t):=x[t_1,x_1;\alpha _1(\phi ({\bar{v}} _2)), \phi ({\bar{v}} _2)](t)\) and \({\bar{x}} _2(t):=x[t_2,x_2;\alpha _2({\bar{v}} _2),{\bar{v}} _2](t)\). Owing to the inequalities above, bearing in mind estimates (31)–(32), the Lipschitz continuity properties of L and g (see condition (H3)) and the upper bound \(M(1+R_0)\) for both the velocity of trajectories emanating from \(A\cap r_0\mathbb {B}\) and for the Lagrangian L along these trajectories (see (H2)), some routine analysis yields

$$\begin{aligned}&V^\flat (t_2,x_2) - V^\flat (t_1,x_1) ~ \le ~ J(t_2,x_2,\alpha _2({\bar{v}} _2),{\bar{v}} _2) \\&\qquad - J(t_1,x_1, \alpha _1(\phi ({\bar{v}} _2)), \phi ({\bar{v}} _2)) + 2 \varepsilon \\&\quad \le \int _{t_2}^{T} L(t, {\bar{x}} _2(t),\alpha _2({\bar{v}} _2)(t),{\bar{v}} _2(t) ) \ dt \\&\qquad - \int _{t_1}^{T} L(t, {\bar{x}} _1(t),\alpha _1(\phi ({\bar{v}} _2))(t), \phi ({\bar{v}} _2)(t) ) \ dt + g({\bar{x}} _1(T)) - g({\bar{x}} _2(T)) + 2\varepsilon \\&\quad \le \int _{t_1}^{T_1} |L(t, {\bar{x}} _1(t),\alpha _1(\phi ({\bar{v}} _2))(t), \phi ({\bar{v}} _2)(t) ) \\&\qquad -\, L(t+t_2-t_1, {\bar{x}} _2(t+t_2-t_1),\alpha _2({\bar{v}} _2)(t+t_2-t_1),{\bar{v}} _2(t+t_2-t_1) )|\ dt\\&\qquad +\, M(1+R_0)|t_1-t_2|+ k_gK|x_1-x_2| + k_gM(1+R_0)|t_1-t_2| + 2\varepsilon \\&\quad \le \int _{t_1}^{T_1} |L(t, {\bar{x}} _1(t),\alpha _1(\phi ({\bar{v}} _2))(t), \phi ({\bar{v}} _2)(t) ) \\&\qquad - L(t, {\bar{x}} _1(t),\alpha _2({\bar{v}} _2)(t+t_2-t_1),\phi ({\bar{v}} _2)(t) )|\ dt \\&\qquad + \int _{t_1}^{T_1} | L(t, {\bar{x}} _1(t),\alpha _2({\bar{v}} _2)(t+t_2-t_1),\phi ({\bar{v}} _2)(t) ) \\&\qquad - \, L(t+t_2-t_1, {\bar{x}} _2(t+t_2-t_1),\alpha _2({\bar{v}} _2)(t+t_2-t_1),{\bar{v}} _2(t+t_2-t_1) )|\ dt\\&\qquad +\, k_gK|x_1-x_2| + M(1+R_0)(1+k_g)|t_1-t_2| + 2\varepsilon \\&\quad \le \int _{t_1}^{T_1} k_L(|t_1-t_2| + \Vert {\bar{x}} _1(.)- {\bar{x}} _2(.-t_1+t_2)\Vert _{L^\infty (t_1,T_1)} ) \ dt + k_gK|x_1-x_2| \\&\qquad + M(1+R_0) d_{[t_1,T_1]}(\alpha _1(\phi ({\bar{v}} _2))(.),\alpha _2({\bar{v}} _2)(.-t_1+t_2)) \\&\qquad +\, M(1+R_0)(1+k_g)|t_1-t_2| + 2\varepsilon \\&\quad \le Tk_L(|t_1-t_2| + K|x_1-x_2| ) + k_gK|x_1-x_2| \\&\qquad +\, M(1+R_0)\big [(1+k_g)|t_1-t_2| + \, K|x_1-x_2|\big ] + 2\varepsilon \;. \end{aligned}$$

Exchanging the roles of \((t_1,x_1)\) and \((t_2,x_2)\) in the above inequalities and letting \(\varepsilon \downarrow 0\), we finally obtain

$$\begin{aligned} |V^\flat (t_1,x_1) - V^\flat (t_2,x_2)|~\le ~ K^\flat (|t_1-t_2| + |x_1-x_2|)\ , \end{aligned}$$

for some constant \(K^\flat >0\) (which depends only on \(r_0\) and the data of the differential game), confirming the proposition statement. \(\square \)
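For the record, an admissible (by no means optimal) constant can be read off from the last chain of inequalities: with \(k_L\) and \(k_g\) the constants of (H3) corresponding to \(R=R_0\), one may take, for instance,

$$\begin{aligned} K^\flat \;:=\; T k_L (1+K) \;+\; k_g K \;+\; M(1+R_0)\big (1+k_g+K\big ) \ . \end{aligned}$$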

4.1 Solutions of Hamilton–Jacobi–Isaacs equations

Theorem 4.3

Assume that conditions (H1)–(H5) are satisfied.

  1. (i)

    If in addition (CQ1) is satisfied, then the lower value function \(V^\flat \) is a viscosity supersolution on \([0,T) \times A\) and a viscosity subsolution on \([0,T)\times {\mathrm{int}} \ A\) of equation (6) in which \(H=H^\flat \);

  2. (ii)

    If in addition (CQ2) is satisfied, then the upper value function \(V^\sharp \) is a viscosity supersolution on \([0,T) \times {\mathrm{int}} \ A\) and a viscosity subsolution on \([0,T)\times A\) of equation (6) in which \(H=H^\sharp \).

Proof

We start by observing that the continuity of \(H^\flat \) and \(H^\sharp \) follows immediately from the Berge Maximum Theorem (cf. [1, Theorem 1.4.16]). We shall prove below part (i). Part (ii) is proved by applying part (i) to characterize \(-V^{\sharp }(t,x)\) as the lower value function of a modified game, in which the original cost J(., ., ., .) is replaced by \(-J(.,.,.,.)\), and using the fact that a subgradient of \(-V^{\sharp }\) is expressible as \(-(\xi _0, \xi _1)\), in which \((\xi _0, \xi _1)\) is a supergradient of \(V^{\sharp }\), and vice versa. Here, sub- and supergradients are understood in the sense of gradients of minorizing or majorizing \(\mathcal{C}^1\) test functions, as in Definition 2.1.
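To make the reduction in part (ii) explicit: directly from the definition of the upper value,

$$\begin{aligned} -V^{\sharp }(t_0,x_0) \;=\; \inf _{\beta \in S_V(t_0,x_0)}\, \sup _{u\in \mathcal{U}(t_0,x_0)} \big \{ -J(t_0,x_0;u,\beta (u)) \big \} \ , \end{aligned}$$

which is a lower value function for the game with cost \(-J\) and the roles of the two players exchanged, so part (i) applies to it.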

Step 1 \(V^\flat \) is a viscosity supersolution on \([0,T) \times A\) of equation (6) with \(H=H^\flat \).

Take any \((t_0,x_0)\in [0,T)\times A\) and let \(\varphi :\mathbb {R}\times \mathbb {R}^n\rightarrow \mathbb {R}\) be a \({\mathcal {C}}^1\) function such that \(V^\flat - \varphi \) has a local minimum at \((t_0,x_0)\) (relative to \([0,T]\times A\)). It is not restrictive to assume that \(V^\flat (t_0,x_0) = \varphi (t_0,x_0)\) and so there exists \(r_0\in (0,1)\) such that \(V^\flat (t,x) \ge \varphi (t,x)\), for all \((t,x) \in \big ( (t_0,x_0) + r_0 \mathbb {B}\big ) \cap \big ( [0,T]\times A \big )\). Suppose in contradiction that, for some \(\theta >0\), we have

$$\begin{aligned} H^\flat (t_0,x_0,\partial _x \varphi (t_0,x_0)) - \partial _t \varphi (t_0,x_0) \le - \theta \ , \end{aligned}$$
(35)

and so

$$\begin{aligned} \inf _{v\in V}\sup _{u\in U} {{\mathcal {H}}} (t_0,x_0,\partial _x \varphi (t_0,x_0),u,v) - \partial _t \varphi (t_0,x_0) \le - \theta \ . \end{aligned}$$

Take \(v_0\in V\) such that

$$\begin{aligned} \sup _{u\in U} {{\mathcal {H}}} (t_0,x_0,\partial _x \varphi (t_0,x_0),u,v_0) - \partial _t \varphi (t_0,x_0) \le - \theta \ , \end{aligned}$$

and, by continuity of the functions involved in the expression above, after reducing the size of \(r_0>0\) if necessary, we obtain

$$\begin{aligned} \sup _{u\in U} {{\mathcal {H}}} (t,x,\partial _x \varphi (t,x),u,v_0) - \partial _t \varphi (t,x) \le - \frac{\theta }{2} \ , \end{aligned}$$
(36)

for all \((t,x) \in \big ( (t_0,x_0) + r_0 \mathbb {B}\big ) \cap \big ( [0,T]\times A \big )\). Define the control \({\tilde{v}} (.) \equiv v_0\). From Proposition 3.4 we know that \( {\tilde{v}} \in {\mathcal {V}}(t_0,x_0)\). Since an upper bound can be established for the speed of all trajectories emanating from \(\big ( (t_0,x_0) + \mathbb {B}\big ) \cap \big ( [0,T]\times A \big )\), from (36) we deduce that there exists \(\sigma _0\in (0,T-t_0)\) such that, for every strategy \(\alpha \in S_U(t_0,x_0)\) and for every \(s\in [t_0, t_0+\sigma _0]\), we have

$$\begin{aligned} {{\mathcal {H}}} (s,x(s), \partial _x\varphi (s,x(s)), \alpha ({\tilde{v}}) (s), {\tilde{v}} (s) ) - \partial _t \varphi (s,x(s)) \le -\frac{\theta }{2} \ , \end{aligned}$$

where \(x(s):=x[t_0,x_0; \alpha ({\tilde{v}}) , {\tilde{v}}](s)\). It follows that, for every strategy \(\alpha \in S_U(t_0,x_0)\) and for every \(\sigma \in [0,\sigma _0]\), we obtain

$$\begin{aligned} \int _{t_0}^{t_0+\sigma } \Big [ {{\mathcal {H}}} (s,{\tilde{x}}(s), \partial _x\varphi (s,{\tilde{x}}(s)), \alpha ({\tilde{v}}) (s), {\tilde{v}} (s) ) - \partial _t \varphi (s,{\tilde{x}}(s)) \Big ] \ ds \le - \frac{\theta \sigma }{2} \ , \end{aligned}$$
(37)

where \({\tilde{x}}(s):=x[t_0,x_0; \alpha ({\tilde{v}}) , {\tilde{v}}](s)\).
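For definiteness, the \(\sigma _0\) used above can be chosen as follows (here \(c_0>0\) denotes the speed bound mentioned above; the symbol is ours, introduced only for this remark):

$$\begin{aligned} \sigma _0 \;\le \; \min \Big \{ \frac{r_0}{2(1+c_0)} \, , \; \frac{T-t_0}{2} \Big \} \ . \end{aligned}$$

Indeed, for \(s\in [t_0,t_0+\sigma _0]\) we then have \(|s-t_0|+|x(s)-x_0| \le (1+c_0)\sigma _0 \le r_0/2\), so \((s,x(s))\) remains in the set on which (36) holds.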

On the other hand, invoking the Dynamic Programming Principle (Proposition 4.1), writing \(x(s):=x[t_0,x_0; \alpha ( v) , v](s)\), we deduce:

$$\begin{aligned} \displaystyle 0= & {} \inf _{\alpha \in S_U([t_0, t_0+\sigma ],x_0)}\sup _{v\in \mathcal{V}(t_0,x_0)} \left\{ \int _{t_0}^{t_0+\sigma } L(s, x(s),\alpha (v)(s),v(s) ) \ ds \right. \\ \displaystyle&\left. +\, V^\flat \big (t_0+\sigma ,x(t_0+\sigma )\big ) - V^\flat (t_0,x_0)\right\} \\ \displaystyle\ge & {} \inf _{\alpha \in S_U([t_0, t_0+\sigma ],x_0)}\sup _{v\in \mathcal{V}(t_0,x_0)} \left\{ \int _{t_0}^{t_0+\sigma } L(s, x(s),\alpha (v)(s),v(s) ) \ ds \right. \\ \displaystyle&\left. +\, \varphi \big (t_0+\sigma ,x(t_0+\sigma )\big ) - \varphi (t_0,x_0)\right\} \\ \displaystyle= & {} \inf _{\alpha \in S_U([t_0, t_0+\sigma ],x_0)}\sup _{v\in \mathcal{V}(t_0,x_0)} \left\{ \int _{t_0}^{t_0+\sigma } \Big [ L(s, x(s),\alpha (v)(s),v(s) ) \right. \\&\left. +\, \partial _x\varphi (s,x(s)) \cdot f(x(s), \alpha ( v) (s), v (s) ) +\partial _t \varphi (s,x(s)) \Big ] \ ds \right\} \\ \displaystyle= & {} \inf _{\alpha \in S_U([t_0, t_0+\sigma ],x_0)}\sup _{v\in \mathcal{V}(t_0,x_0)} \left\{ - \, \int _{t_0}^{t_0+\sigma } \Big [ {{\mathcal {H}}} (s,x(s), \partial _x\varphi (s,x(s)), \alpha ( v) (s), v (s) ) \right. \\&\left. - \, \partial _t \varphi (s,x(s)) \Big ] \ ds \right\} \\ \displaystyle\ge & {} \inf _{\alpha \in S_U([t_0, t_0+\sigma ],x_0)} \left\{ -\, \int _{t_0}^{t_0+\sigma } \Big [ {{\mathcal {H}}} (s,{\tilde{x}}(s), \partial _x\varphi (s,{\tilde{x}}(s)), \alpha ({\tilde{v}}) (s), {\tilde{v}} (s) ) \right. \\&\left. -\, \partial _t \varphi (s,{\tilde{x}}(s)) \Big ] \ ds \right\} \ . \end{aligned}$$

This relation together with (37) provides a contradiction.

Step 2 \(V^\flat \) is a viscosity subsolution on \([0,T)\times {\mathrm{int}} \ A\) of equation (6) with \(H=H^\flat \).

Take any \((t_0,x_0)\in [0,T)\times {\mathrm{int}} \ A\) and let \(\varphi :\mathbb {R}\times \mathbb {R}^n\rightarrow \mathbb {R}\) be a \({\mathcal {C}}^1\) function such that \(V^\flat - \varphi \) has a local maximum at \((t_0,x_0)\) (relative to \([0,T]\times A\)). Again it is not restrictive to assume that \(V^\flat (t_0,x_0) = \varphi (t_0,x_0)\) and so there exists \(r_0\in (0,1)\) such that \(\big ( (t_0,x_0) + r_0 \mathbb {B}\big ) \cap \big ( [0,T]\times A \big ) \subset [0,T]\times {\mathrm{int}} \ A\), and \(V^\flat (t,x) \le \varphi (t,x)\) for all \((t,x) \in \big ( (t_0,x_0) + r_0 \mathbb {B}\big ) \cap \big ( [0,T]\times A \big )\). Suppose in contradiction that, for some \(\theta >0\), we have

$$\begin{aligned} H^\flat (t_0,x_0,\partial _x \varphi (t_0,x_0)) - \partial _t \varphi (t_0,x_0) \ge \theta \ \end{aligned}$$
(38)

that is

$$\begin{aligned} \inf _{v\in V}\sup _{u\in U} {{\mathcal {H}}} (t_0,x_0,\partial _x \varphi (t_0,x_0),u,v) - \partial _t \varphi (t_0,x_0) \ge \theta \ . \end{aligned}$$

We also have that the multifunction

$$\begin{aligned} v\leadsto G(v):=\arg \max \{ {{\mathcal {H}}} (t_0,x_0,\partial _x \varphi (t_0,x_0),u,v) - \partial _t \varphi (t_0,x_0) ~ | ~ u\in U \} \end{aligned}$$

takes as values (non-empty) compact subsets of U and is upper semicontinuous (owing to Berge's Theorem). Then, classical results on set-valued maps (cf. [1, Prop. 8.2.1]) imply that G(.) is Borel measurable and, therefore, there exists a Borel-measurable selection \(\bar{u}(v)\in G(v)\) (cf. [1, Thm. 8.1.3] for a measurable selection theorem). Consider any control \( v \in {\mathcal {V}}(t_0,x_0)\). Since \(\bar{u}\) is Borel measurable and v(.) is Lebesgue measurable, the composition (defined on \([t_0,T]\))

$$\begin{aligned} t\rightarrow \bar{u}(v(t)) \quad \text{ is } \text{ measurable } \text{ on } \;\; [t_0,T] \end{aligned}$$

and takes values in U. Observe also, from the construction above, that

$$\begin{aligned} {{\mathcal {H}}} (t_0,x_0,\partial _x \varphi (t_0,x_0),\bar{u}(v(t)),v(t)) - \partial _t \varphi (t_0,x_0) \ge \theta \ . \end{aligned}$$

Write \({\bar{x}}(.):=x[t_0,x_0; \bar{u}(v) , v](.)\). From the regularity of \(\varphi \) and of the functions involved in the definition of \({{\mathcal {H}}}\), and since there exists an upper bound for the speed of all trajectories starting from \(x_0\), we can find \(\sigma _0\in (0,(T-t_0)\wedge r_0/2)\) such that

$$\begin{aligned} {\bar{x}}(s) \in x_0 + \frac{r_0}{2} \mathbb {B}\subset {\mathrm{int}} \ A, \quad \text{ for } \text{ all } s\in [t_0,t_0 + \sigma _0], \end{aligned}$$

and

$$\begin{aligned} {{\mathcal {H}}}(s,{\bar{x}}(s), \partial _x \varphi (s,{\bar{x}}(s)) , \bar{u}(v(s)), v (s)) - \partial _t \varphi (s,{\bar{x}}(s)) \ge \frac{\theta }{2}, \quad \text{ for } \text{ all } s\in [t_0,t_0 + \sigma _0] . \end{aligned}$$
(39)

Define now the map \(\bar{\alpha }: \mathcal{V}(t_0,x_0) \rightarrow \mathcal{U}(t_0,x_0)\) as follows

$$\begin{aligned} \bar{\alpha }(v)(t) := \left\{ \begin{array}{ll} \bar{u}(v(t))\quad \forall t\in [t_0,t_0+\sigma _0)\\ \Gamma _{t_0+\sigma _0,{\bar{x}} (t_0+\sigma _0)}(v)(t)\quad \forall t\in [t_0+\sigma _0, T] \ , \end{array}\right. \end{aligned}$$

where \(\Gamma _{t_0+\sigma _0,x}\in S_U(t_0+\sigma _0,x)\) is the strategy defined as in (26) (we can always fix a measurable function \({\hat{u}} \in \mathcal{U}[t_0+\sigma _0,T]\) to construct \(\Gamma _{t_0+\sigma _0,x}\), for all \(x\in A\)). Clearly \(\bar{\alpha }\in S_U(t_0 ,x_0)\), and also \(\bar{\alpha }\in S_U([t_0,t_0+\sigma ] ,x_0)\) for all \(\sigma \in [0,\sigma _0]\). Now, for a given control \( v \in {\mathcal {V}}(t_0,x_0)\), we still use the notation \({\bar{x}}(.):=x[t_0,x_0; \bar{\alpha } ( v) , v](.)\) for the (admissible) trajectory, which now involves the strategy-control pair \((\bar{\alpha }, v)\). From (39) we immediately deduce that

$$\begin{aligned}&{{\mathcal {H}}}(s,{\bar{x}}(s), \partial _x \varphi (s,{\bar{x}}(s)) , \bar{\alpha } (v) (s), v (s)) \\&\quad - \partial _t \varphi (s,{\bar{x}}(s)) \ge \frac{\theta }{2}, \quad \text{ for } \text{ all } s\in [t_0,t_0 + \sigma _0] . \end{aligned}$$

Therefore, for all \(\sigma \in [0,\sigma _0]\), integrating on \([t_0,t_0 + \sigma ]\), we obtain that

$$\begin{aligned} \inf _{v\in \mathcal{V}(t_0,x_0)} \int _{t_0}^{t_0+\sigma } \Big [ {{\mathcal {H}}} (s,{\bar{x}}(s), \partial _x\varphi (s, {\bar{x}}(s)), \bar{\alpha } (v) (s), v (s)) - \partial _t \varphi (s,{\bar{x}}(s)) \Big ] \ ds \ge \frac{\theta \sigma }{2} \ . \end{aligned}$$
(40)

But the Dynamic Programming Principle (Proposition 4.1) yields (writing \(x(s):=x[t_0,x_0; \alpha ( v) , v](s)\)):

$$\begin{aligned} \displaystyle 0= & {} \inf _{\alpha \in S_U([t_0,t_0+\sigma ],x_0)}\sup _{v\in \mathcal{V}(t_0,x_0)} \left\{ \int _{t_0}^{t_0+\sigma } L(s, x(s),\alpha (v)(s),v(s) ) \ ds \right. \\ \displaystyle&\left. +\, V^\flat \big (t_0+\sigma ,x(t_0+\sigma )\big ) - V^\flat (t_0,x_0)\right\} \\ \displaystyle\le & {} \inf _{\alpha \in S_U([t_0,t_0+\sigma ],x_0)}\sup _{v\in \mathcal{V}(t_0,x_0)} \left\{ \int _{t_0}^{t_0+\sigma } L(s, x(s),\alpha (v)(s),v(s) ) \ ds \right. \\ \displaystyle&\left. +\, \varphi \big (t_0+\sigma ,x(t_0+\sigma )\big ) - \varphi (t_0,x_0)\right\} \\ \displaystyle= & {} \inf _{\alpha \in S_U([t_0,t_0+\sigma ],x_0)}\sup _{v\in \mathcal{V}(t_0,x_0)} \left\{ - \int _{t_0}^{t_0+\sigma } \Big [ {{\mathcal {H}}} (s,x(s), \partial _x\varphi (s,x(s)), \alpha ( v) (s), v (s) ) \right. \\ \displaystyle&\left. - \, \partial _t \varphi (s,x(s)) \Big ] \ ds \right\} \\ \displaystyle\le & {} \sup _{v\in \mathcal{V}(t_0,x_0)} \left\{ - \int _{t_0}^{t_0+\sigma } \Big [ {{\mathcal {H}}} (s,{\bar{x}}(s), \partial _x\varphi (s,{\bar{x}}(s)), \bar{\alpha } ( v) (s), v (s) ) \right. \\&\left. - \, \partial _t \varphi (s,{\bar{x}}(s)) \Big ] \ ds \right\} \ . \end{aligned}$$

We have arrived at a contradiction to (40) and the proof is complete. \(\square \)

Proposition 4.4

Assume that conditions (H1)–(H5) are satisfied.

  1. (i)

    If in addition (CQ1) and (CQ3) are satisfied, then the lower value function \(V^\flat \) is simultaneously a viscosity supersolution and a viscosity subsolution on \([0,T) \times A\) of equation (6) in which \(H=H^\flat \);

  2. (ii)

    If in addition (CQ2) and (CQ4) are satisfied, then the upper value function \(V^\sharp \) is simultaneously a viscosity supersolution and a viscosity subsolution on \([0,T)\times A\) of equation (6) in which \(H=H^\sharp \).

Proof

Again, we provide just the proof of (i), since (ii) can be derived by arguing in a similar way. The fact that \(V^\flat \) is a viscosity supersolution on \([0,T) \times A\) of equation (6) with \(H=H^\flat \) has already been established in Theorem 4.3. It remains to prove that \(V^\flat \) is a viscosity subsolution on \([0,T)\times A\) of equation (6) with \(H=H^\flat \).

Take any \((t_0,x_0)\in [0,T)\times A\) and let \(\varphi :\mathbb {R}\times \mathbb {R}^n\rightarrow \mathbb {R}\) be a \({\mathcal {C}}^1\) function such that \(V^\flat - \varphi \) has a local maximum at \((t_0,x_0)\) (relative to \([0,T]\times A\)). We can restrict attention to the case \(x_0\in \partial A\): if \(x_0 \in {\mathrm{int}} \ A\), the analysis is the same as in Step 2 of the proof of Theorem 4.3. It is not restrictive to assume that \(V^\flat (t_0,x_0) = \varphi (t_0,x_0)\) and that there exists \(r_0\in (0,1)\) such that \(V^\flat (t,x) \le \varphi (t,x)\) for all \((t,x) \in \big ( (t_0,x_0) + r_0 \mathbb {B}\big ) \cap \big ( [0,T]\times A \big )\). Suppose in contradiction that, for some \(\theta >0\), we have

$$\begin{aligned}&\theta \le H^\flat (t_0,x_0,\partial _x \varphi (t_0,x_0)) - \partial _t \varphi (t_0,x_0) \\&\quad = \inf _{v\in V}\sup _{u\in U} {{\mathcal {H}}} (t_0,x_0,\partial _x \varphi (t_0,x_0),u,v) - \partial _t \varphi (t_0,x_0) \ . \end{aligned}$$

Invoking (CQ3), we can find \(u_0\in U\), \(v_0\in V\) and \(\eta _0>0\) such that, possibly reducing the radius \(r_0\) and using the continuity of \({{\mathcal {H}}}\), \(\partial _t \varphi \) and \(\partial _x \varphi \), we obtain

$$\begin{aligned} {{\mathcal {H}}} (t,x,\partial _x \varphi (t,x),u_0,v_0) - \partial _t \varphi (t,x) \ge \frac{\theta }{2} \ , \end{aligned}$$
(41)

and

$$\begin{aligned} \nabla h(x)\cdot f(x,u_0, v_0)< -\eta _0 \ , \end{aligned}$$
(42)

for all \((t,x) \in \big ( (t_0,x_0) + r_0 \mathbb {B}\big ) \cap \big ( [0,T]\times A \big )\). We also know that we can find \(\sigma _0\in (0,(T-t_0)\wedge r_0/2)\) such that all trajectories x(.) starting from \(x_0\) satisfy the inclusion \(x(s) \in x_0 + \frac{r_0}{2} \mathbb {B}\) for all \(s\in [t_0,t_0 + \sigma _0]\). Fix the control \({\tilde{u}}(.)\equiv u_0\) and consider the admissible strategy for player one \(\alpha _0 \in S_U(t_0,x_0)\) defined as in (26) of the proof of Proposition 3.4, taking \({\tilde{u}}(.)\equiv u_0\) as reference control:

$$\begin{aligned} \alpha _0(v)(.):=\Gamma _{t_0,x_0}(v)(.)= \gamma _{t_0,x_0}({\tilde{u}}\equiv u_0 )(.) \ . \end{aligned}$$

Observe that \(\alpha _0\in S_U([t_0, t_0+\sigma _0],x_0)\) and, from (42) and the construction of \(\gamma _{t_0,x_0}({\tilde{u}}\equiv u_0 )\), for the (constant) control \({\tilde{v}} (.)\equiv v_0\in \mathcal{V}(t_0,x_0)\) we obtain \(\alpha _0({\tilde{v}} \equiv v_0)(s)=u_0\) for a.e. \(s\in [t_0,t_0 + \sigma _0]\).
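The role of (42) is precisely to keep the resulting trajectory inside A; here is a sketch of the standard argument, assuming (as the presence of \(\nabla h\) in (42) suggests) that h is \({\mathcal {C}}^1\) near \(\partial A\). As long as \({\tilde{x}} _0(s) \in (x_0 + r_0 \mathbb {B}) \cap A\),

$$\begin{aligned} \frac{d}{ds}\, h\big ({\tilde{x}} _0(s)\big ) \;=\; \nabla h({\tilde{x}} _0(s)) \cdot f({\tilde{x}} _0(s),u_0,v_0) \;<\; -\eta _0 \ , \end{aligned}$$

so \(h({\tilde{x}} _0(s)) \le h(x_0) \le 0\) on \([t_0,t_0+\sigma _0]\), and the state constraint cannot be violated there.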

Therefore, invoking once again the Dynamic Programming Principle Proposition 4.1 and using inequality (41) (writing \(x(.):=x[t_0,x_0; \alpha ( v) , v](.)\), \({\tilde{x}}(.):=x[t_0,x_0; \alpha _0 ( v) , v](.)\) and \({\tilde{x}} _0(.):=x[t_0,x_0; \alpha _0 ({\tilde{v}} \equiv v_0) , {\tilde{v}} \equiv v_0](.)\) to make the notation simpler), for all \(\sigma \in [0, \sigma _0]\) we arrive at the following sequence of inequalities:

$$\begin{aligned} \displaystyle 0\le & {} \inf _{\alpha \in S_U([t_0,t_0+\sigma ],x_0)}\sup _{v\in \mathcal{V}(t_0,x_0)} \left\{ - \int _{t_0}^{t_0+\sigma } \Big [ {{\mathcal {H}}} (s,x(s), \partial _x\varphi (s,x(s)), \alpha ( v) (s), v (s) ) \right. \\ \displaystyle&\left. -\, \partial _t \varphi (s,x(s)) \Big ] \ ds \right\} \\ \displaystyle\le & {} \sup _{v\in \mathcal{V}(t_0,x_0)} \left\{ - \int _{t_0}^{t_0+\sigma } \Big [ {{\mathcal {H}}} (s,{\tilde{x}}(s), \partial _x\varphi (s,{\tilde{x}}(s)), \alpha _0 ( v) (s), v (s) ) \right. \\&\left. - \, \partial _t \varphi (s,{\tilde{x}}(s)) \Big ] \ ds \right\} \\ \displaystyle\le & {} \left\{ - \int _{t_0}^{t_0+\sigma } \Big [ {{\mathcal {H}}} (s,{\tilde{x}} _0(s), \partial _x\varphi (s,{\tilde{x}} _0(s)), \alpha _0 ( {\tilde{v}} ) (s), {\tilde{v}} (s) ) - \, \partial _t \varphi (s,{\tilde{x}} _0(s)) \Big ] \ ds \right\} \\ \displaystyle\le & {} - \frac{\theta }{2} \sigma <0 \ . \end{aligned}$$

This is a contradiction and the proof is complete. \(\square \)

Immediate consequences of Theorem 4.3 and Proposition 4.4 are the following corollaries.

Corollary 4.5

Assume that (H1)–(H5), (CQ1) and (CQ2) hold true. Suppose moreover that the Isaacs condition

$$\begin{aligned} H^\flat (t,x,p) =H^\sharp (t,x,p ) , \quad \forall \; (t,x,p)\in [0,T)\times A \times \mathbb {R}^n \end{aligned}$$

is satisfied. Then,

  1. (i)

the lower value function \(V^\flat \) is a viscosity supersolution on \([0,T) \times A\) and a viscosity subsolution on \([0,T)\times {\mathrm{int}} \ A\) of equation (6) also with \(H={H^\sharp }\);

  2. (ii)

    the upper value function \(V^\sharp \) is a viscosity supersolution on \([0,T) \times {\mathrm{int}} \ A\) and a viscosity subsolution on \([0,T)\times A\) of equation (6) also with \(H=H^\flat \).

Corollary 4.6

Assume that (H1)–(H5) and (CQ1)–(CQ2) hold true. Suppose moreover that the Isaacs condition

$$\begin{aligned} H^\flat (t,x,p) =H^\sharp (t,x,p ) , \quad \forall \; (t,x,p)\in [0,T)\times A \times \mathbb {R}^n \end{aligned}$$

and (CQ3) (or equivalently (CQ4)) are satisfied. Then,

  1. (i)

the lower value function \(V^\flat \) is a viscosity solution on \([0,T) \times A\) of equation (6) also with \(H={H^\sharp }\);

  2. (ii)

    the upper value function \(V^\sharp \) is a viscosity solution on \([0,T) \times A\) of equation (6) also with \(H=H^\flat \).

5 Comparison and Uniqueness Results

Theorem 5.1

Assume that conditions (H1)–(H5) are satisfied and that A is compact.

  1. (i)

    Suppose in addition that (CQ1) is satisfied and consider equation (6) in which we take \(H=H^\flat \). Take two continuous functions \(W_1, \; W_2 :[0,T] \times A \longrightarrow \mathbb {R}\) satisfying the following properties

    1. (a)

      \(W_1(T,.)=W_2(T,.) \; (=g(.))\) on A;

    2. (b)

      \(W_1(t,x)\) is a viscosity subsolution on \([0,T) \times \mathrm{int } \ A\) of equation (6);

    3. (c)

      \(W_2(t,x)\) is a viscosity supersolution on \([0,T) \times A\) of equation (6).

    Then we obtain:

    $$\begin{aligned} W_1 (t,x)\le W_2(t,x), \quad \forall (t,x)\in [0,T]\times A \ . \end{aligned}$$
  2. (ii)

Suppose in addition that (CQ2) is satisfied and consider equation (6) in which we take \(H=H^\sharp \). Take two continuous functions \(W_1, \; W_2 :[0,T] \times A \longrightarrow \mathbb {R}\) satisfying properties (a)–(c) of (i) above. Then we obtain:

    $$\begin{aligned} W_1 (t,x)\le W_2(t,x), \quad \forall (t,x)\in [0,T]\times A \ . \end{aligned}$$

Proof

We shall prove only part (i) of Theorem 5.1, in which we take the Hamiltonian to be \(H=H^\flat \). The proof of part (ii) is similar.

Step 1 Suppose, in contradiction, that \(\sup _{(t,x)\in [0,T]\times A} \{W_1(t,x) - W_2(t,x)\} >0\). Then, since \(W_1(T,.)=W_2(T,.)\), there would exist a point \(({\bar{s}} _0,{\bar{x}} _0)\in [0,T)\times A\) such that:

$$\begin{aligned} {W}_1({\bar{s}} _0,{\bar{x}} _0) - {W}_2({\bar{s}} _0,{\bar{x}} _0) >0 \ . \end{aligned}$$
(43)

We now follow an approach based on a Kruzkov-type transform. Let \(M\in \mathbb {R}\) be a lower bound for both \(W_1\) and \(W_2\). We can choose the constant M in such a manner that \(M\le 0\) and

$$\begin{aligned} L(s, x, u, v) - M \ge 1 \ , \quad \text{ for } \text{ all } u\in U, \; v \in V \ , \end{aligned}$$
(44)

for all \(s\in [0,T]\), and \(x \in A\). Take a positive constant c such that \(c\ge 1-MT\). Define the functions

$$\begin{aligned} \widetilde{W}_i(t,x) := \frac{1}{1+t} \log \Big (W_i(T-t,x) + M(T-t) - M + c \Big ),\qquad i=1,2 \ , \end{aligned}$$

and the Hamiltonian \(\widetilde{H}^\flat : \mathbb {R}^{1+n+1+1+n}\rightarrow \mathbb {R}\):

$$\begin{aligned} {{\widetilde{H}^\flat }(t,x,w,p_t,p_x) } := \inf _{v\in V}\sup _{u\in U} {\widetilde{\mathcal{H}}} (t,x,w,p_t,p_x,u,v) \ , \end{aligned}$$

where

$$\begin{aligned} {\widetilde{\mathcal{H}}} (t,x,w,p_t,p_x,u,v):= & {} (1+t) p_t -(1+t) f(x,u,v) \cdot p_x \\&\quad - \frac{L(T-t,x,u,v)-M}{ e^{(1+t) w}} \ . \end{aligned}$$
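Note that the \(\widetilde{W}_i\) are well defined: since \(W_i \ge M\) on \([0,T]\times A\), \(M\le 0\) and \(c\ge 1-MT\), we have, for all \((t,x)\in [0,T]\times A\),

$$\begin{aligned} W_i(T-t,x) + M(T-t) - M + c \;\ge \; M(T-t) + c \;\ge \; MT + c \;\ge \; 1 \ , \end{aligned}$$

so the argument of the logarithm is bounded below by 1 and, in particular, \(\widetilde{W}_i \ge 0\).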

Write \({\bar{t}} _0:=T-{\bar{s}} _0 \; (>0)\). Observe that, passing to the functions \(\widetilde{W}_i\) (which are continuous on \( [0,T]\times A\)), from (43) and the compactness of A there would exist a number \(\sigma >0\) and a point \(({\bar{t}},{\bar{x}})\in (0,T]\times A\) such that:

$$\begin{aligned} \sigma:= & {} \widetilde{W}_1({\bar{t}},{\bar{x}}) - \widetilde{W}_2({\bar{t}},{\bar{x}}) = \max _{(t,x)\in [0,T]\times A}(\widetilde{W}_1(t,x) - \widetilde{W}_2(t,x)) \nonumber \\\ge & {} \widetilde{W}_1({\bar{t}} _0,{\bar{x}} _0) - \widetilde{W}_2({\bar{t}} _0,{\bar{x}} _0)>0 \ . \end{aligned}$$
(45)

We restrict attention to the case '\({\bar{x}} \in \partial A\)', since the case '\({\bar{x}} \in {\mathrm{int}} \ A\)' is similar, but simpler (cf. [2, 14]).

We claim that \(\widetilde{W}_1\) is a viscosity subsolution on \((0,T)\times {\mathrm{int}} \ A \), and \(\widetilde{W}_2\) a viscosity supersolution on \((0,T]\times A\), of

$$\begin{aligned} \left\{ \begin{array}{ll} W(t,x) + {\widetilde{H}^\flat }\Big (t , x , W(t,x),\partial _t W(t,x), \partial _x W(t,x) \Big )= 0&{} \quad {\mathrm{on }} \; (0,T)\times A\\ W(0,x)= {\tilde{g}} (x)&{} \quad {\mathrm{on }} \; A \ , \end{array} \right. \end{aligned}$$
(46)

where \({\tilde{g}} (x):=\log (g(x) - M + c)\).
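It is useful to record the inverse of the transform, obtained by substituting \(t=T-s\) in the definition of \(\widetilde{W}_i\):

$$\begin{aligned} W_i(s,x) \;=\; e^{(1+T-s)\, \widetilde{W}_i(T-s,x)} \;-\; Ms \;+\; M \;-\; c \ , \qquad i=1,2 \ ; \end{aligned}$$

this identity dictates the form of the test function \(\varphi \) used in the verification below.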

Indeed, assume that \((t_0, x_0)\in (0,T]\times A\) is a local minimizer for \(\widetilde{W}_2-\psi \) where \(\psi \) is a \({\mathcal {C}}^1\) function. It is not restrictive to suppose that \((\widetilde{W}_2-\psi )(t_0, x_0)=0\) and, so, locally we have \(\widetilde{W}_2\ge \psi \). Consider the \({\mathcal {C}}^1\) test function \(\varphi (s,x):=e^{(1+T-s)\psi (T-s,x)} -Ms +M -c \). From the monotonicity of the exponential function, we obtain that \((T-t_0, x_0)\) is a local minimizer for \((t,x)\rightarrow (W_2-\varphi )(t,x)\). Consequently, since \(W_2\) is a viscosity supersolution of (6), we have

$$\begin{aligned} H^\flat (T-t_0,x_0,\partial _x \varphi (T-t_0,x_0)) - \partial _t \varphi (T-t_0,x_0) \ge 0 \ , \end{aligned}$$

which yields

$$\begin{aligned} \inf _{v\in V}\sup _{u\in U} {{\mathcal {H}}} (T-t_0,x_0,\partial _x \varphi (T-t_0,x_0),u,v) - \partial _s \varphi (T-t_0,x_0) \ge 0 \ . \end{aligned}$$
(47)

On the other hand, from the definition of \(\varphi \), we also have:

$$\begin{aligned} \partial _s \varphi (T-t_0,x_0)= & {} - e^{(1+ t_0)\psi (t_0,x_0)}~ \left[ (1+ t_0){\partial _t} \psi (t_0,x_0) + \psi (t_0,x_0) \right] - M \ ;\\ {\partial _x}\varphi (T-t_0,x_0)= & {} e^{(1+ t_0 )\psi ( t_0, x_0)}~ (1+t_0) ~ {\partial _x}\psi (t_0,x_0) \ . \end{aligned}$$

Substituting these quantities in (47), we obtain

$$\begin{aligned} 0\le & {} \psi (t_0,x_0)+ \inf _{v\in V}\sup _{u\in U} \Big \{ (1+t_0) {\partial _t} \psi (t_0,x_0) \\&-(1+t_0) f(x_0,u,v) \cdot {\partial _x}\psi (t_0,x_0) - \frac{L(T-t_0,x_0,u,v)-M}{e^{(1+t_0) \psi (t_0,x_0)}} \Big \} \\= & {} \psi (t_0,x_0)+ {{\widetilde{H}^\flat }\big (t_0,x_0,\psi (t_0,x_0),{\partial _t} \psi (t_0,x_0),{\partial _x}\psi (t_0,x_0)\big ) } \ . \end{aligned}$$

This yields that \(\widetilde{W}_2\) is a viscosity supersolution of (46) on \((0,T] \times A\).

The fact that \(\widetilde{W}_1\) is a viscosity subsolution of (46) on \((0,T)\times {\mathrm{int}} \ A \) can be shown by similar techniques, so we omit the details.

Step 2 This step consists in selecting suitable test functions. Define \({\bar{\xi }}:=-\nabla h({\bar{x}})\; (\in {\mathrm{int }}\ T_{A}(\bar{x}))\). From the characterization of the interior of the Clarke tangent cone \(T_A(\bar{x})\) of A at \(\bar{x}\) (cf. [20]), we can find constants \(\delta \in (0,1)\) and \(\eta \in (0,1)\) such that

$$\begin{aligned} z+(0,\delta ]({\bar{\xi }} + \eta \mathbb {B}) ~ \subset ~ {\mathrm{int}} A \ , \quad \text{ for } \text{ all } z\in ({\bar{x}} + 2 \delta \mathbb {B})\cap A \ . \end{aligned}$$

Write \(\omega (.):\mathbb {R}_+ \rightarrow \mathbb {R}_+\) for a common modulus of continuity for the functions \(\widetilde{W}_i\). Notice that \(\omega (.)\) can be taken to be bounded above by a constant \(C>0\), since the \(\widetilde{W}_i\) are bounded on \([0,T]\times A\).

For each fixed \(n\in \mathbb {N}\), we define the continuous (bounded from above) function \(\phi _n\) of the variables \(s,t\in [0,T]\), \(x,y\in A\):

$$\begin{aligned} \phi _n(s,t,x,y):= & {} \widetilde{W}_1(s,x) - \widetilde{W}_2(t,y) - n^2 \left| x-y-\frac{1}{n} {\bar{\xi }} \right| ^2 - n^2\left| s-t +\frac{1}{\sqrt{n}}\right| ^2 \nonumber \\&- |y-{\bar{x}}|^2 - |t-{\bar{t}}|^2 \ . \end{aligned}$$
(48)

Let \((s_n,t_n,x_n,y_n)\in [0,T]^2\times A^2\) be a maximum point of \(\phi _n\), which exists since A is compact:

$$\begin{aligned} \phi _n (s_n,t_n,x_n,y_n)= \max _{(s,t,x,y)\in [0,T]^2\times A^2 } \phi _n(s,t,x,y) \ . \end{aligned}$$

Observe that, for all \(n > \max \{ \frac{1}{\delta }; \left( \frac{4}{{\bar{t}}}\right) ^2 \} \), we have \(({\bar{t}} - \frac{1}{\sqrt{n}},{\bar{x}} + \frac{1}{n} {\bar{\xi }}) \in (0,T) \times {\mathrm{int}} A\) and

$$\begin{aligned} \phi _n (s_n,t_n,x_n,y_n)\ge \phi _n \left( {\bar{t}} - \frac{1}{\sqrt{n}},{\bar{t}} ,{\bar{x}} + \frac{1}{n} {\bar{\xi }} ,{\bar{x}}\right) \ . \end{aligned}$$
(49)

We claim that

$$\begin{aligned} \lim _{n\rightarrow \infty } x_n = \lim _{n\rightarrow \infty } y_n = {\bar{x}} \quad \text{ and } \lim _{n\rightarrow \infty }s_n = \lim _{n\rightarrow \infty } t_n ={\bar{t}} \ . \end{aligned}$$

Indeed, from (48) and (49), for each \(n > \max \{ \frac{1}{\delta }; \left( \frac{4}{{\bar{t}}}\right) ^2 \} \) we obtain

$$\begin{aligned} 0\le & {} \widetilde{W}_1(s_n,x_n) - \widetilde{W}_2 (t_n,y_n) - \left[ \widetilde{W}_1\left( {\bar{t}} - \frac{1}{\sqrt{n}}, {\bar{x}}+ \frac{1}{n} {\bar{\xi }}\right) - \widetilde{W}_2({\bar{t}}, {\bar{x}}) \right] \\&- n^2 \left| x_n - y_n - \frac{1}{n} {\bar{\xi }} \right| ^2 - n^2\left| s_n - t_n + \frac{1}{\sqrt{n}}\right| ^2 - |y_n-{\bar{x}}|^2 - |t_n-{\bar{t}}|^2 \ . \end{aligned}$$

As a consequence, since from (45) the term \((\widetilde{W}_1(s_n,x_n) - \widetilde{W}_2 (s_n,x_n)) - ( \widetilde{W}_1({\bar{t}}, {\bar{x}}) - \widetilde{W}_2({\bar{t}}, {\bar{x}}) )\) is \(\le 0\), and making use of the modulus of continuity \(\omega (.)\) (which is bounded by C), we deduce that

$$\begin{aligned}&n^2 \left| x_n - y_n - \frac{1}{n} {\bar{\xi }} \right| ^2 + n^2\left| s_n - t_n + \frac{1}{\sqrt{n}}\right| ^2 + |y_n-{\bar{x}}|^2 + |t_n-{\bar{t}}|^2 \nonumber \\&\quad \le \omega (|(s_n - t_n , x_n - y_n)|) + \omega \left( \left| \left( \frac{1}{\sqrt{n}} , \frac{{\bar{\xi }}}{n}\right) \right| \right) \nonumber \\&\quad \le 2C \ . \end{aligned}$$
(50)

It follows that

$$\begin{aligned}&\left| x_n - y_n - \frac{1}{n} {\bar{\xi }} \right| , \; \left| s_n - t_n + \frac{1}{\sqrt{n}}\right| \le \frac{\sqrt{2C}}{n} \ , \\&|y_n-{\bar{x}}|, \; |t_n-{\bar{t}}| \le \sqrt{2C} \ , \\&|x_n - y_n | \le \frac{1}{n} \left( \sqrt{2C} + |{\bar{\xi }}|\right) \ , \\&|s_n - t_n | \le \frac{1}{\sqrt{n}}\left( \frac{\sqrt{2C}}{\sqrt{n}} + 1\right) \ . \end{aligned}$$

Extracting a subsequence (which we do not relabel), we obtain in the limit

$$\begin{aligned} \lim _{n\rightarrow \infty } x_n = \lim _{n\rightarrow \infty } y_n = {\tilde{x}} \quad \text{ and } \lim _{n\rightarrow \infty }s_n = \lim _{n\rightarrow \infty } t_n ={\tilde{t}} \ , \end{aligned}$$

for some \(({\tilde{t}}, {\tilde{x}})\in [0,T]\times A\). Using again (50), and observing that

$$\begin{aligned} \lim _{n\rightarrow \infty } \left[ \omega (|(s_n - t_n , x_n - y_n)|) + \omega \left( \left| \left( \frac{1}{\sqrt{n}} , \frac{{\bar{\xi }}}{n}\right) \right| \right) \right] = 0 \ , \end{aligned}$$
(51)

we deduce that \(({\tilde{t}}, {\tilde{x}})=({\bar{t}}, {\bar{x}})\).

Step 3 Take \({\bar{n}}\in \mathbb {N}\) large enough such that

$$\begin{aligned} {\bar{n}} > \max \left\{ \frac{1}{\delta }; \left( \frac{4}{{\bar{t}}}\right) ^2 \right\} \ , \end{aligned}$$

and, for all \(n\ge {\bar{n}}\), we have:

$$\begin{aligned} \omega (|(s_n - t_n , x_n - y_n)|) + \omega \left( \left| \left( \frac{1}{\sqrt{n}} , \frac{{\bar{\xi }}}{n}\right) \right| \right) \le \min \left\{ \eta ^2; \delta ^2; \frac{\bar{t}^2}{4} \right\} \ . \end{aligned}$$
(52)

Considering (50) with this choice of \({\bar{n}}\), we see that, for all \(n\ge {\bar{n}}\),

$$\begin{aligned}&0<t_n \ , \quad 0<s_n<T, \quad y_n\in ({\bar{x}} + \delta \mathbb {B})\cap A \ ,\\&x_n \in y_n + \frac{1}{n} [{\bar{\xi }} + \eta \mathbb {B}] \subset y_n + (0,\delta ]({\bar{\xi }} + \eta \mathbb {B}) \subset {\mathrm{int }} A \ . \end{aligned}$$

As a consequence \((s_n,x_n)\in (0,T)\times {\mathrm{int}} \ A\), and \((t_n,y_n)\in (0,T]\times A\).

Fix \(n\in \mathbb {N}\) with \(n\ge {\bar{n}}\). We define the test functions \(\psi _1, \psi _2:[0,T]\times \mathbb {R}^n \rightarrow \mathbb {R}\) as follows:

$$\begin{aligned} \psi _1(s,x):= & {} \widetilde{W}_2(t_n,y_n) + n^2 \left| x - y_n - \frac{1}{n} {\bar{\xi }} \right| ^2 + n^2 \left| s - t_n + \frac{1}{\sqrt{n}}\right| ^2 \\&+ |y_n-{\bar{x}}|^2 + |t_n-{\bar{t}}|^2 \ , \end{aligned}$$

and

$$\begin{aligned} \psi _2(t,y):= & {} \widetilde{W}_1(s_n,x_n) - n^2 \left| x_n - y - \frac{1}{n} {\bar{\xi }} \right| ^2 - n^2 \left| s_n - t + \frac{1}{\sqrt{n}}\right| ^2 \\&- |y - {\bar{x}}|^2 - |t - {\bar{t}}|^2 \ . \end{aligned}$$

Observe that the point \((s_n,x_n)\) is a local maximizer for the function \(\widetilde{W}_1-\psi _1\) (on \([0,T]\times A\)). Since \(\widetilde{W}_1\) is a viscosity subsolution of (46) on \((0,T)\times {\mathrm{int}} \ A\), it immediately follows that

$$\begin{aligned} \widetilde{W}_1(s_n,x_n) + {\widetilde{H}^\flat }\Big (s_n,x_n , \widetilde{W}_1(s_n,x_n),\partial _s \psi _1(s_n,x_n), \partial _x \psi _1(s_n,x_n) \Big ) \le 0 \ . \end{aligned}$$
(53)

Similarly, since \((t_n,y_n)\) is a local minimizer for the function \(\widetilde{W}_2-\psi _2\) on \([0,T]\times A\), and \(\widetilde{W}_2\) is a viscosity supersolution of (46) on \((0,T]\times A\), we also have

$$\begin{aligned} 0 \le \widetilde{W}_2(t_n,y_n) + {\widetilde{H}^\flat }\Big (t_n,y_n , \widetilde{W}_2(t_n,y_n),\partial _t \psi _2(t_n,y_n), \partial _y \psi _2(t_n,y_n) \Big ) \ . \end{aligned}$$
(54)

Taking into account inequalities (53) and (54), for all \(n\ge {\bar{n}}\) we obtain:

$$\begin{aligned} \widetilde{W}_1(s_n,x_n) - \widetilde{W}_2(t_n,y_n)\le & {} {\widetilde{H}^\flat }\Big (t_n,y_n , \widetilde{W}_2(t_n,y_n),\partial _t \psi _2(t_n,y_n), \partial _y \psi _2(t_n,y_n) \Big ) \\&- {\widetilde{H}^\flat }\Big (s_n,x_n , \widetilde{W}_1(s_n,x_n),\partial _s \psi _1(s_n,x_n), \partial _x \psi _1(s_n,x_n) \Big ) \ . \end{aligned}$$

Letting \(n\rightarrow +\infty \) and taking into account the regularity properties of all the functions involved (\(\widetilde{W}_i\) and \(\psi _i\) for \(i=1,2\), and \({\widetilde{H}^\flat }\)), it follows that

$$\begin{aligned} 0 < \sigma= & {} \widetilde{W}_1({\bar{t}}, {\bar{x}}) - \widetilde{W}_2({\bar{t}}, {\bar{x}}) \nonumber \\\le & {} {\widetilde{H}^\flat }\Big ({\bar{t}}, {\bar{x}} , \widetilde{W}_2({\bar{t}}, {\bar{x}}),\partial _t \psi _2({\bar{t}}, {\bar{x}}), \partial _x \psi _2({\bar{t}}, {\bar{x}}) \Big ) \nonumber \\&- {\widetilde{H}^\flat }\Big ({\bar{t}}, {\bar{x}} , \widetilde{W}_1({\bar{t}}, {\bar{x}}),\partial _t \psi _1({\bar{t}}, {\bar{x}}), \partial _x \psi _1({\bar{t}}, {\bar{x}}) \Big ) \ . \end{aligned}$$
(55)

Therefore, considering the expressions for the derivatives of \(\psi _1\) and \(\psi _2\), bearing in mind the limit behaviour of the sequences \((s_n,x_n)\) and \((t_n,y_n)\) (cf. also (50) and (51)) and using the definition of the Hamiltonian \({\widetilde{H}^\flat }\), inequality (55) implies, for suitable \({\bar{u}} \in U\) and \({\bar{v}} \in V\),

$$\begin{aligned} 0 < \sigma\le & {} -\frac{ L(T-{\bar{t}}, {\bar{x}},{\bar{u}},{\bar{v}})-M}{e^{(1+{\bar{t}}) \widetilde{W}_2({\bar{t}}, {\bar{x}})}} + \frac{L(T-{\bar{t}}, {\bar{x}}, {\bar{u}}, {\bar{v}}) - M }{e^{(1+{\bar{t}}) \widetilde{W}_1({\bar{t}}, {\bar{x}})}}\\= & {} \big ( L(T-{\bar{t}}, {\bar{x}},{\bar{u}},{\bar{v}})-M \big ) \frac{e^{(1+{\bar{t}}) \widetilde{W}_2({\bar{t}}, {\bar{x}})} - e^{(1+{\bar{t}}) \widetilde{W}_1({\bar{t}}, {\bar{x}})}}{e^{(1+{\bar{t}}) (\widetilde{W}_1({\bar{t}}, {\bar{x}})+ \widetilde{W}_2({\bar{t}}, {\bar{x}}))}} \ . \end{aligned}$$

From (44) it follows that

$$\begin{aligned} e^{(1+{\bar{t}}) \widetilde{W}_2({\bar{t}}, {\bar{x}})} - e^{(1+{\bar{t}}) \widetilde{W}_1({\bar{t}}, {\bar{x}})} > 0 \ , \end{aligned}$$

that is, \(\widetilde{W}_2({\bar{t}}, {\bar{x}}) > \widetilde{W}_1({\bar{t}}, {\bar{x}})\). But this contradicts (45), which asserts \(\widetilde{W}_1({\bar{t}}, {\bar{x}}) - \widetilde{W}_2({\bar{t}}, {\bar{x}}) = \sigma > 0\).

We conclude that \(W_1(t,x) - W_2(t,x) \le 0\) for all \((t,x)\in [0,T]\times A\), confirming the assertions of the theorem. \(\square \)

Observe that, in Step 2 of the proof of Theorem 5.1, we can replace the continuous function \(\phi _n\) defined in (48) by the function \(\tilde{\phi } _n\) (still depending on the variables \(s,t\in [0,T]\), \(x,y\in A\)), in which we have exchanged x and y in the quadratic term, writing \(n^2 |y-x-\frac{1}{n} {\bar{\xi }} |^2\) in place of \(n^2 |x-y-\frac{1}{n} {\bar{\xi }} |^2\):

$$\begin{aligned} \tilde{\phi } _n(s,t,x,y):= & {} \widetilde{W}_1(s,x) - \widetilde{W}_2(t,y) - n^2 \left| y-x-\frac{1}{n} {\bar{\xi }} \right| ^2 - n^2\left| s-t +\frac{1}{\sqrt{n}}\right| ^2 \\&- |y-{\bar{x}}|^2 - |t-{\bar{t}}|^2 \ . \end{aligned}$$

It is then easy to check that this minor modification produces a sequence \(\{(s_n,t_n,x_n,y_n)\}\) in \([0,T]^2\times A^2\) such that, this time, we have \(y_n \in {\mathrm{int}} \ A\) for each n large enough. This accords with the fact that \((t_n,y_n)\) is a local minimizer for the function \(\widetilde{W}_2- \tilde{\psi } _2\) (where \(\tilde{\psi } _2\) is obtained from \(\psi _2\) by the corresponding modification), and now \(\widetilde{W}_2\) need only be a viscosity supersolution of (46) on \((0,T]\times {\mathrm{int}} \ A\). Then, arguing as in the proof of Theorem 5.1, we obtain the following proposition.

Proposition 5.2

Assume that conditions (H1)–(H5) are satisfied and that A is compact. Then all the assertions of Theorem 5.1 remain valid when we replace (b) and (c) by (b)\('\) and (c)\('\) below:

(b)\('\):

\(W_1(t,x)\) is a viscosity subsolution on \([0,T) \times A\) of equation (6);

(c)\('\):

\(W_2(t,x)\) is a viscosity supersolution on \([0,T) \times {\mathrm{int}} \ A\) of equation (6).