1 Introduction

The present work investigates a system of Hamilton–Jacobi–Bellman equations on multi-domains. Consider a partition of \(\mathbb {R}^{d}\) into disjoint subdomains \((\varOmega_i)_{i=1,\ldots,m}\) with

$$\mathbb {R}^d=\overline{\varOmega }_1\cup\cdots\cup\overline{ \varOmega }_m, \qquad\varOmega_i\cap\varOmega_j= \emptyset\quad \mbox{for }i\neq j. $$

Consider a collection of Hamilton–Jacobi–Bellman (HJB) equations

$$ \left\{\begin{array}{l@{\quad }l} -\partial_t u(t,x)+H_i(x,Du(t,x))=0, & \mbox{for}\ t\in(0,T),\ x\in \varOmega _i,\\[2pt] u(T,x)=\varphi (x), & \mbox{for}\ x\in \varOmega _i, \end{array}\right. $$
(1.1)

with the Hamiltonians \(H_i\) satisfying standard assumptions, and where \(\varphi :\mathbb {R}^{d}\to \mathbb {R}\) is a Lipschitz continuous function. We address the question of which conditions should be imposed on the interfaces (i.e., the intersections of the sets \(\overline{\varOmega}_{i}\)) in order to obtain existence and uniqueness of the solution, and of what the precise notion of solution should be.

In order to identify a global solution satisfying (1.1) on each subdomain \(\varOmega_i\), one can define a global HJB equation with a Hamiltonian H defined on the whole \(\mathbb {R}^{d}\) by \(H(x,p)=H_i(x,p)\) whenever \(x\in\varOmega_i\). However, H cannot be expected to be continuous, and its definition on the interfaces between the subdomains \(\varOmega_i\) is not clear.

The notion of viscosity solution was introduced by Crandall and Lions to give a precise meaning to HJ equations with continuous Hamiltonians. This notion was extended to the discontinuous case by Ishii (see [14]), and later to the case where the Hamiltonian is measurable with respect to the space variable (see [7]). The main difficulty remains the uniqueness of the viscosity solution when the Hamiltonian is not continuous.

In [16], a stationary HJ equation with a discontinuous Lagrangian is studied, where the Hamiltonian is of the type H(x,p)+g(x) with H continuous and g discontinuous. A uniqueness result is proved there under rather restrictive assumptions on g. In [7], the viscosity notion is extended to HJ equations with space-measurable Hamiltonians, and a uniqueness result is established under a transversality assumption. Roughly speaking, this transversality condition amounts to saying that the behavior of the solution on the interfaces is not relevant and can be ignored. In the present work, we consider more general situations where the transversality condition may not be satisfied. Our aim is to derive the junction conditions that have to be imposed on the interfaces in order to guarantee the existence and uniqueness of the viscosity solution of (1.1).

Let us mention that the first work dealing with the case where the whole space is separated into two subdomains by one interface is [4], where general results on the viscosity sense and the uniqueness of the solution are analyzed. Even though [4] considers a stationary equation, it shares the same difficulties as the problem we study here. Our approach is completely different from the one used in [4] and seems easy to generalize to two- or multi-domain problems. Other papers related to HJB equations with discontinuous Hamiltonians are [1, 13], where HJB equations are studied on networks (unions of a finite number of half-lines with a single common point), motivated by traffic flow problems. An inspiring result is the strong comparison principle of [13], which leads to uniqueness by considering an HJ equation at the junction point.

In the present work, we investigate the junction conditions on the interfaces. To this end, using the Filippov regularization of the multifunctions \(F_i\), we introduce a particular optimal control problem on \(\mathbb {R}^{d}\). The main feature of this control problem is that its value function is a solution of the system (1.1). By investigating the transmission conditions satisfied by the value function on the interfaces between the subdomains \(\varOmega_i\), we obtain the equations that must hold on the interfaces. The system (1.1) is then completed by these interface equations, and the existence and uniqueness of the solution is guaranteed. No transversality requirement is needed in this paper. The main idea developed here follows the concept of essential Hamiltonian introduced in [5], and provides a new viscosity notion that is quite different from the notion of Ishii [14]. This new definition gives a precise meaning to the transmission conditions between the \(\varOmega_i\) and provides the uniqueness of the viscosity solution.

The paper is organized as follows. In Sect. 2, the setting of the problem is described and the main results are presented. Section 3 is devoted to the link with an optimal control problem and the study of the properties of the value function. The proofs of the main results are given in Sect. 4.

2 Main Results

2.1 Setting of the Problem

Consider the following structure on \(\mathbb {R}^{d}\): given \(m\in \mathbb {N}\), let \(\{\varOmega_1,\ldots,\varOmega_m\}\) be a finite collection of \(C^2\) open d-manifolds embedded in \(\mathbb {R}^{d}\). For each i=1,…,m, the closure of \(\varOmega_i\) is denoted by \(\overline{\varOmega }_{i}\). Assume that this collection of manifolds satisfies the following:

(H1) :

\(\displaystyle \begin{cases} \mbox{(i)}&\mathbb {R}^d=\bigcup_{i=1}^m \overline{\varOmega }_i\quad \mbox{and}\quad \varOmega _i\cap \varOmega _j=\emptyset\quad \mbox{when }i\not= j,\ i,j\in\{1,\ldots, m\}; \\ \mbox{(ii)} &\mbox{Each}\ \overline{\varOmega }_i\mbox{ is proximally smooth and wedged.} \\ \end{cases} \)

The concepts of proximal smoothness and wedgedness are introduced in [9]. For any set \(\varOmega \subseteq \mathbb {R}^{d}\), we recall that \(\overline{\varOmega }\) being proximally smooth means that the signed distance function to \(\overline{\varOmega }\) is differentiable on a tubular neighborhood of \(\overline {\varOmega }\), and that \(\overline{\varOmega }\) being wedged means that the interior of the tangent cone of \(\overline{\varOmega }\) is nonempty at each point of \(\overline{\varOmega }\). The precise definitions and properties are presented in Appendix B.

Let \(\varphi :\mathbb {R}^{d}\rightarrow \mathbb {R}\) be a given function satisfying:

(H2) :

φ is a bounded Lipschitz continuous function.

Let T>0 be a given final time. For i=1,…,m, consider the following system of Hamilton–Jacobi (HJ) equations:

$$ \begin{cases}\displaystyle -\partial_t u(t,x)+H_i(x,Du(t,x))=0, & \mbox{for}\ t\in(0,T),\ x\in \varOmega _i,\\ u(T,x)=\varphi (x), & \mbox{for}\ x\in \varOmega _i. \end{cases} $$
(2.1)

The system above imposes a classical HJ equation on each d-manifold \(\varOmega_i\). However, it carries no information on the boundaries of the d-manifolds, which are the junctions between the \(\varOmega_i\). We then address the question of which conditions should be imposed on these boundaries in order to obtain existence and uniqueness of a solution to the whole system.

In the sequel, we call the singular subdomains contained in the boundaries of the d-manifolds the interfaces. Let \(\ell\in \mathbb {N}\) be the number of interfaces, and denote by \(\varGamma_j\), j=1,…,ℓ, the interfaces, which are also open embedded manifolds with dimensions strictly smaller than d. Assume that the interfaces satisfy the following:

(H3) :

\(\displaystyle \begin{cases} \mbox{(i)}\hspace*{-4pt} &\mathbb {R}^d= (\bigcup_{i=1}^m \varOmega _i )\cup(\bigcup_{j=1}^{\ell} \varGamma_j ),\ \ \varGamma_j\cap\varGamma_k=\emptyset,\ \ j\not= k,\ j,k=1,\ldots ,\ell; \\ \mbox{(ii)}\hspace*{-4pt} &\mbox{If } \varGamma_j\cap\overline{\varOmega }_i\neq\emptyset, \ \ \mbox{then }\varGamma_j\subseteq\overline{\varOmega }_i,\ \ \mbox{for}\ i=1,\ldots , m,\ j=1,\ldots,\ell; \\ \mbox{(iii)}\hspace*{-4pt} &\mbox{If } \varGamma_k\cap\overline{\varGamma}_j\neq\emptyset ,\ \ \mbox{then }\varGamma_k\subseteq\overline{\varGamma}_j,\ \ \mbox{for}\ j,k\in \{1,\ldots,\ell\}; \\ \mbox{(iv)}\hspace*{-4pt} &\mbox{Each }\overline{\varGamma}_j\mbox{ is proximally smooth and relatively wedged.} \end{cases} \)

For any open embedded manifold Γ with dimension p<d, \(\overline{\varGamma}\) is said to be relatively wedged if, at each point of \(\overline{\varGamma}\), the relative interior (in \(\mathbb {R}^{p}\)) of the tangent cone of \(\overline{\varGamma}\) is nonempty; see Appendix B for the precise definition.

Example 1

A simple example is shown in Fig. 1 with d=1, m=2 and ℓ=1. Here \(\mathbb {R}=\varOmega _{1}\cup\varGamma_{1}\cup \varOmega _{2}\) with

$$\varOmega _1=\{x:x<0\},\qquad \varOmega _2=\{x:x>0\},\qquad \varGamma_1=\{0\}. $$

Note that \(\varOmega_1\), \(\varOmega_2\) are two one-dimensional manifolds, and the only interface is the zero-dimensional manifold \(\varGamma_1\).

Fig. 1: A multi-domain in 1d

Other possible examples in \(\mathbb {R}^{2}\) are depicted in Fig. 2.

Fig. 2: Other possible examples in \(\mathbb {R}^{2}\)

We are particularly interested in HJ equations with Hamiltonians \(H_{i}:\overline{\varOmega }_{i}\times \mathbb {R}^{d}\rightarrow \mathbb {R}\), i=1,…,m, of the following Bellman form: for \((x,q)\in\overline{\varOmega }_{i}\times \mathbb {R}^{d}\),

$$ H_i(x,q)=\sup_{p\in F_i(x)}\{-p\cdot q\}, $$

where \(F_{i}:\overline{\varOmega }_{i}\rightsquigarrow \mathbb {R}^{d}\) are multifunctions defined on \(\overline{\varOmega }_{i}\) and satisfy the following assumptions:

(H4) :

\(\displaystyle \begin{cases} {\mbox{(i)}} &\forall x\in\overline{\varOmega }_i,\quad F_i(x) \mbox{ is a nonempty, convex, and compact set;} \\ {\mbox{(ii)}} & F_i \mbox{ is Lipschitz continuous on }\varOmega _i\mbox{ with respect to the}\\&\mbox{Hausdorff metric;} \\ {\mbox{(iii)}} &\exists \mu>0 \mbox{ so that}\quad \max\{|p|:p\in F_i(x)\} \leq\mu(1+\|x\|) \forall x\in\overline{\varOmega }_i; \\ {\mbox{(iv)}} & \exists \delta>0 \mbox{ so that}\quad \forall x\in \overline{\varOmega }_i,\ \delta\overline{B(0,1)}\subseteq F_i(x). \end{cases} \)

The hypotheses (H4)(i)–(iii) are classical for the study of HJB equations, whereas (H4)(iv) is a strong controllability assumption. Although this controllability assumption is restrictive, we use it here to ensure the continuity of solutions of the system (2.1). The continuity property plays an important role in our analysis, but it can be obtained under weaker assumptions than (H4)(iv); see [15].

Remark 2.1

For simplicity, we define the multifunctions \(F_i\) on \(\overline{\varOmega }_{i}\). In fact, if \(F_i\) is only defined on \(\varOmega_i\) and satisfies (H4), it can be extended to the whole \(\overline{\varOmega }_{i}\) by its local Lipschitz continuity.

2.2 Essential Hamiltonian

The main goal of this work is to identify the junction conditions that ensure the uniqueness of the solution for the HJ system (2.1). In [7], the uniqueness of the solution of space-measurable HJ equations has been studied under a special condition, called a "transversality" condition. Roughly speaking, in the case of problem (2.1) this transversality condition would mean that the interfaces can be ignored and that the behavior of the solution on the interfaces is not relevant. Here we assume no transversality condition and we analyze the behavior of the solution on the interfaces.

First of all, in order to define a multifunction on the whole \(\mathbb {R}^{d}\), a natural idea is to consider the Filippov regularization [11] of \((F_i)_{i=1,\ldots,m}\). For this, consider the multifunction \(G:\mathbb {R}^{d}\rightsquigarrow \mathbb {R}^{d}\) given by:

$$\forall x\in \mathbb {R}^d,\quad G(x):=\mathrm{co} \bigl\{F_i(x): i\in\{1,\ldots,m\}, x\in\overline{\varOmega }_i\bigr\}. $$

G is the smallest upper semicontinuous (usc) envelope of \((F_i)_{i=1,\ldots,m}\) such that \(G(x)=F_i(x)\) for \(x\in\varOmega_i\). Consider the Hamiltonian associated to G:

$$H_G(x,q)=\sup_{p\in G(x)}\{-p\cdot q\}. $$

If \(H_G(\cdot,q)\) were Lipschitz continuous, one could define the HJB equations on the interfaces with the Hamiltonian \(H_G\), and the uniqueness result would follow from the classical theory. However, G is not Lipschitz continuous in general, and the characterization by means of HJB equations is no longer valid; see [10].

The next step is to define the multifunctions on the interfaces \(\varGamma_j\). We first recall the notion of tangent cone. For any \(C^2\) smooth set \(\mathcal {C}\subseteq \mathbb {R}^{p}\) with 1≤p≤d, the tangent cone \(\mathcal {T}_{\mathcal {C}}(x)\) at \(x\in \mathcal {C}\) is defined as

$$\mathcal {T}_{\mathcal {C}}(x)=\biggl\{v\in \mathbb {R}^p:\mathop{\lim \inf}\limits _{t\rightarrow 0^+} \frac{d_{\mathcal {C}}(x+tv)}{t}=0\biggr\}, $$

where \(d_{\mathcal {C}}(\cdot)\) is the distance function to \(\mathcal {C}\). For j=1,…,ℓ, we define the multifunction \(\widetilde {G}_{j}:\varGamma_{j}\rightsquigarrow \mathbb {R}^{d}\) on the interface \(\varGamma_j\) by

$$\forall x\in\varGamma_j,\quad \widetilde {G}_j(x):=G(x)\cap \mathcal {T}_{\varGamma_j}(x). $$

Note that \(\mathcal {T}_{\varGamma_{j}}(x)\) agrees with the tangent space of \(\varGamma_j\) at x, and the dimension of \(\mathcal {T}_{\varGamma_{j}}(x)\) is strictly smaller than d. For \(\widetilde {G}_{j}\) we have the following regularity result, whose proof is postponed to Appendix A.

Lemma 2.2

Under the assumptions (H1), (H2) and (H4), \(\widetilde {G}_{j}(\cdot):\varGamma_{j}\rightsquigarrow \mathbb {R}^{d}\) is locally Lipschitz continuous on \(\varGamma_j\).

Throughout this paper, for the sake of notational simplicity, for k=1,…,m+ℓ we set

$$\mathcal {M}_k= \begin{cases}\displaystyle \varOmega _k, & \mbox{for}\ k=1,\ldots,m;\\ \varGamma_{k-m}, & \mbox{for}\ k=m+1,\ldots,m+\ell, \end{cases} $$

and we define a new multifunction \(F^{\mathit {new}}:\mathbb {R}^{d}\rightsquigarrow \mathbb {R}^{d}\) piecewise by

$$F^{\mathit {new}}_k(x):= \begin{cases}\displaystyle F_k(x), & \mbox{for}\ x\in \mathcal {M}_k,\ k=1,\ldots,m;\\ \widetilde {G}_{k-m}(x), & \mbox{for}\ x\in \mathcal {M}_k,\ k=m+1,\ldots,m+\ell. \end{cases} $$

In the sequel, we will also need the "essential multifunction" \(F^{E}\), which is used in the junction conditions:

Definition 2.3

(The essential multifunction)

The essential multifunction \(F^{E}:\mathbb {R}^{d}\rightsquigarrow \mathbb {R}^{d}\) is defined by

$$F^{E}(x):= \bigcup_{k\in\{1,\ldots,m+\ell\}} \bigl\{F^{E}_k(x): x\in\overline{\mathcal {M}}_k \bigr\},\quad \forall x\in \mathbb {R}^d, $$

where \(F^{E}_{k}:\overline{\mathcal {M}}_{k}\rightsquigarrow \mathbb {R}^{d}\) is defined by

$$F^{E}_k(x)=F^{\mathit {new}}_k(x)\cap \mathcal {T}_{\overline{\mathcal {M}}_k}(x),\quad \mbox{\textit{for}} \ x\in\overline{\mathcal {M}}_k. $$

\(F^{E}\) is called the essential velocity multifunction in [5]. According to the definition, \(F^{E}(x)\) collects, for each subdomain \(\mathcal {M}_k\) with \(x\in\overline{\mathcal {M}}_k\), the dynamics of \(F^{\mathit{new}}_k(x)\) that are tangent (or inward) to \(\overline{\mathcal {M}}_k\) at x. We note that

$$F^{E}|_{\mathcal {M}_i}=F_i,\quad \mbox{for}\ i=1,\ldots,m,\quad \mbox{and}\quad F^{E}(x)\subseteq G(x),\quad \mbox{for}\ x\in \mathbb {R}^d. $$

Example 2

Suppose the following dynamic data for the domain in Example 1:

$$F_1(x)=\biggl[-\frac{1}{2},1\biggr],\quad \forall x\in \varOmega _1,\quad \mbox{and}\quad F_2(x)=\biggl[-1,\frac{1}{2} \biggr],\quad \forall x\in \varOmega _2. $$

On this simple example, one can easily see that G and F E are different on the interface {0}:

$$G(0)=[-1,1],\qquad F^{E}(0)=\biggl[-\frac{1}{2},\frac{1}{2}\biggr]. $$
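This interface computation can be reproduced mechanically. The following sketch is ours, not the paper's: sets are encoded as closed intervals [lo, hi], and the tangent cones at x=0 are hard-coded from the geometry of Example 1. It evaluates G(0) and \(F^{E}(0)\):

```python
# Interval sketch of Example 2 (illustrative; tangent cones at x = 0 are
# hard-coded from the geometry of Example 1).

def convex_hull(*intervals):
    """Convex hull of a union of closed intervals."""
    return (min(lo for lo, _ in intervals), max(hi for _, hi in intervals))

def intersect(a, b):
    """Intersection of two closed intervals, or None if empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

F1 = (-0.5, 1.0)                      # F_1 on Omega_1 = (-inf, 0)
F2 = (-1.0, 0.5)                      # F_2 on Omega_2 = (0, +inf)

# Tangent cones at the interface point x = 0:
T_Om1 = (float("-inf"), 0.0)          # closure of Omega_1
T_Om2 = (0.0, float("inf"))           # closure of Omega_2
T_Gam = (0.0, 0.0)                    # Gamma_1 = {0}, zero-dimensional

G0 = convex_hull(F1, F2)              # Filippov regularization G(0)

pieces = [intersect(F1, T_Om1),       # tangential part of F_1: [-1/2, 0]
          intersect(F2, T_Om2),       # tangential part of F_2: [0, 1/2]
          intersect(G0, T_Gam)]       # part along Gamma_1: {0}
FE0 = convex_hull(*pieces)            # their union is already an interval here

print(G0, FE0)                        # -> (-1.0, 1.0) (-0.5, 0.5)
```

Here the union of the three tangential pieces happens to be an interval, so taking the convex hull of the pieces returns it unchanged.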

Now, define the “essential” Hamiltonian \(H^{E}:\mathbb {R}^{d}\times \mathbb {R}^{d}\rightarrow \mathbb {R}\) by:

$$ H^{E}(x,q)=\sup_{p\in F^{E}(x)}\{-p\cdot q\},\quad \forall (x,q) \in \mathbb {R}^d\times \mathbb {R}^d. $$

We point out that on each d-manifold Ω i, for each \(q\in \mathbb {R}^{d}\)

$$H^E(x,q)=H_i(x,q),\quad \mbox{whenever}\ x\in \varOmega _i. $$

In general, H E is not Lipschitz continuous with respect to the first variable. Some properties of H E will be discussed in Sect. 3.
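On the data of Example 2 one can also check numerically that \(H^E\) is strictly smaller than \(H_G\) at the interface. The intervals below are the values G(0) and \(F^{E}(0)\) from Example 2; the small evaluation routine is our illustration:

```python
# Bellman Hamiltonian H(x, q) = sup_{p in F(x)} {-p . q} over an interval
# F(x) = [lo, hi]: the supremum of a linear function is attained at an
# endpoint of the interval.
def hamiltonian(interval, q):
    lo, hi = interval
    return max(-lo * q, -hi * q)

G0, FE0 = (-1.0, 1.0), (-0.5, 0.5)    # G(0) and F^E(0) from Example 2

for q in (-2.0, 1.0, 3.0):
    # H_G(0, q) = |q|, while H^E(0, q) = |q| / 2
    print(q, hamiltonian(G0, q), hamiltonian(FE0, q))
```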

2.3 Main Results

We now state the main existence and uniqueness result.

Theorem 2.4

Assume that (H1)(H4) hold. The following system:

$$ -\partial_t u(t,x)+H_i\bigl(x,Du(t,x)\bigr)=0,\quad \mbox{for}\ t\in(0,T),\ x\in \varOmega _i,\ i=1,\ldots,m, $$
(2.2a)
$$ -\partial_t u(t,x)+H^{E}\bigl(x,Du(t,x)\bigr)=0,\quad \mbox{for}\ t\in(0,T),\ x\in \varGamma_j,\ j=1,\ldots,\ell, $$
(2.2b)
$$ u(T,x)=\varphi (x),\quad \mbox{for}\ x\in \mathbb {R}^d, $$
(2.2c)

has a unique viscosity solution in the sense of Definition 2.6.

Note that the system (2.2a)–(2.2c) can be rewritten as

$$ \begin{cases}\displaystyle -\partial_t u(t,x)+H^E(x,Du(t,x))=0, & \mbox{for}\ t\in(0,T),\ x\in \mathbb {R}^d,\\ u(T,x)=\varphi (x), & \mbox{for}\ x\in \mathbb {R}^d, \end{cases} $$

which is an HJB equation on the whole space with a discontinuous Hamiltonian H E.
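For intuition, this reformulated equation can be discretized. Below is a minimal semi-Lagrangian sketch (our illustration, not a scheme from the paper) for the one-dimensional data of Example 2; the terminal cost φ(x)=|x|, the horizon T=0.5, the grid and the velocity sampling are all assumptions made for the demo. The control interpretation gives the exact value v(t,x)=max(|x|−(T−t),0), which the scheme reproduces at sample points away from the interface:

```python
import numpy as np

# Semi-Lagrangian sketch for -u_t + H^E(x, u_x) = 0, u(T, .) = phi,
# via the discrete DPP  u(t, x) = min_{p in F^E(x)} u(t + dt, x + p * dt).
# Assumptions for the demo: phi(x) = |x|, T = 0.5, grid, velocity sampling.
T, dt = 0.5, 0.01
x = np.linspace(-2.0, 2.0, 201)          # dx = 0.02; the grid contains x = 0
u = np.abs(x)                            # assumed terminal cost phi(x) = |x|

def FE(xi):
    """Essential dynamics of Example 2 as an interval [lo, hi]."""
    if xi < 0.0:
        return (-0.5, 1.0)               # F_1 on Omega_1
    if xi > 0.0:
        return (-1.0, 0.5)               # F_2 on Omega_2
    return (-0.5, 0.5)                   # F^E(0) at the interface

for _ in range(int(round(T / dt))):
    new = np.empty_like(u)
    for i, xi in enumerate(x):
        lo, hi = FE(xi)
        ps = np.linspace(lo, hi, 16)     # sampled admissible velocities
        new[i] = np.interp(xi + ps * dt, x, u).min()
    u = new

# Control interpretation: v(0, x) = max(|x| - T, 0); sample x = 0.8, 1.6.
print(u[np.argmin(np.abs(x - 0.8))], u[np.argmin(np.abs(x - 1.6))])
```

The scheme needs no special treatment of the interface beyond using \(F^{E}(0)\) there, which is precisely the point of the reformulation.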

Before giving the definition of viscosity solution, we need the following notion of extended differentials.

Definition 2.5

(Extended differential)

Let \(\phi:(0,T)\times \mathbb {R}^{d}\rightarrow \mathbb {R}\) be a continuous function, and let \(\mathcal {M}\subseteq \mathbb {R}^{d}\) be an open \(C^2\) embedded manifold in \(\mathbb {R}^{d}\). Suppose that \(\phi\in C^{1}((0,T)\times \mathcal {M})\). Then we define the differential of ϕ at any \((t,x)\in(0,T)\times\overline{\mathcal {M}}\) by

$$\nabla_{\overline{\mathcal {M}}}\phi(t,x):=\lim_{x_n\rightarrow x,x_n\in \mathcal {M}} \bigl( \phi_t(t,x_n),D\phi(t,x_n) \bigr). $$

Since ∇ϕ is continuous on \((0,T)\times \mathcal {M}\), the differential defined above is nothing but the continuous extension of ∇ϕ to the whole \(\overline{\mathcal {M}}\).

Definition 2.6

(Viscosity solution)

Let \(u:(0,T]\times \mathbb {R}^{d}\rightarrow \mathbb {R}\) be a bounded locally Lipschitz continuous function. For any \(x\in \mathbb {R}^{d}\), let \(\mathbb{I}(x):=\{i: x\in\overline{\mathcal {M}}_{i}\}\) be the corresponding index set.

  1. (i)

    We say that u is a supersolution of (2.2a)–(2.2b) if for any \((t_{0},x_{0})\in(0,T)\times \mathbb {R}^{d}\) and any \(\phi\in C^{1}((0,T)\times \mathbb {R}^{d})\) such that \(u-\phi\) attains a local minimum at (t 0,x 0), we have

    $$-\phi_t(t_0,x_0)+H^E\bigl(x_0,D \phi(t_0,x_0)\bigr)\geq0. $$
  2. (ii)

    We say that u is a subsolution of (2.2a)–(2.2b) if for any \((t_{0},x_{0})\in(0,T)\times \mathbb {R}^{d}\), any \(k\in \mathbb{I}(x_0)\), and any continuous \(\phi:(0,T)\times \mathbb {R}^{d}\rightarrow \mathbb {R}\) with \(\phi|_{(0,T)\times \mathcal {M}_{k}}\) of class \(C^1\) such that \(u-\phi\) attains a local maximum at (t 0,x 0) relative to \((0,T)\times\overline{\mathcal {M}}_{k}\), we have

    $$-q_t+\sup_{p\in F^{E}_k(x_0)}\{-p\cdot q_x\}\leq 0,\quad \mbox{\textit{with}}\ (q_t,q_x)=\nabla_{\overline{\mathcal {M}}_k} \phi(t_0,x_0). $$
  3. (iii)

    We say that u is a viscosity solution of (2.2a)–(2.2c) if u is both a supersolution and a subsolution, and u satisfies the final condition

    $$u(T,x)=\varphi (x),\quad \forall x\in \mathbb {R}^d. $$

2.4 Comments

Problem (2.1) is formally linked to hybrid control problems in which the dynamics depend on the region of the state space. Theorem 2.4 provides a new characterization of the value function of hybrid control problems without transition cost. More details are presented in Sect. 3.

Another application related to the problem addressed in this paper concerns traffic flow, where the multi-domain structure consists of one-dimensional half-lines joined at a junction point. On each half-line an HJ equation describes the density of the traffic, and it is interesting to understand what happens at the junction point. See [13] for more details.

A similar topic, with one interface (a hyperplane) separating two subdomains, has been studied in [4]. That work deals with an infinite horizon problem, which leads to stationary HJB equations with a running cost. In that context, a complete analysis of the uniqueness of solutions is provided in [4].

In the present work, we consider a more general situation where the intersections of the domains are interfaces with dimensions ranging from d−1 down to zero. In order to focus only on the difficulties arising from this general structure, we consider time-dependent equations without running cost. The presence of running costs gives rise to further difficulties, which will be addressed in [15].

Optimal control problems on stratified domains have been studied by Bressan–Hong [6] and Barnard–Wolenski [5]. Stratified domains are multi-domains equipped with dynamic data on each subdomain under some structural conditions. The work [5] focuses on flow invariance on stratified structures. The junction condition established in our work is inspired by the notion of essential dynamics introduced in [5].

3 Link with Optimal Control Problems

Recall that for classical optimal control problems of Mayer type, the value function can be characterized as the unique viscosity solution of equations of the type (2.1) with Lipschitz continuous Hamiltonians. In our setting, the multifunctions \(F_i\) are defined separately on \(\overline{\varOmega }_{i}\). A first idea would be to consider a "regularization" of the \(F_i\). However, the regularized multifunction G is only usc in general, and this is not enough to guarantee the existence and uniqueness of a solution of (2.1). So, in order to link the Hamilton–Jacobi equations with a Mayer optimal control problem, we first need to give a proper meaning to the global trajectories driven by the dynamics \((F_i)_{i=1,\ldots,m}\). Consider the following differential inclusion:

$$ \begin{cases}\displaystyle \dot{y}(s)\in G(y(s)), & \mbox{for}\ s\in(t,T),\\ y(t)=x. \end{cases} $$
(3.1)

Since G is usc with convex compact values, (3.1) admits an absolutely continuous solution defined on [t,T]. For any \((t,x)\in[0,T]\times \mathbb {R}^{d}\), we denote the set of such absolutely continuous trajectories by

$$S_{[t,T]}(x):=\bigl\{y_{t,x}: y_{t,x}\ \mbox{satisfies}\ (3.1)\bigr\}. $$

Now consider the following Mayer’s problem

$$ v(t,x):=\min\bigl\{\varphi \bigl(y(T)\bigr), y(\cdot)\in S_{[t,T]}(x)\bigr\}. $$
(3.2)

Since G is usc with convex compact values, the set S [t,T](x) of absolutely continuous arcs is compact in \(C(t,T;\mathbb {R}^{d})\) (see [2, p. 60, Theorem 1]). Hence problem (3.2) has an optimal solution for any t∈[0,T], \(x\in \mathbb {R}^{d}\).

As in the classical case, v satisfies a dynamic programming principle (DPP).

Proposition 3.1

Assume that (H1)(H3) hold. Then for any \((t,x)\in[0,T]\times \mathbb {R}^{d}\)the following holds.

  1. (i)

    The super-optimality. \(\exists \bar{y}_{t,x}\in S_{[t,T]}(x)\) such that

    $$v(t,x)\geq v\bigl(t+h,\bar{y}_{t,x}(t+h)\bigr),\quad \mbox{\textit{for}}\ h \in[0,T-t]. $$
  2. (ii)

    The sub-optimality. \(\forall y_{t,x}\in S_{[t,T]}(x)\),

    $$v(t,x)\leq v\bigl(t+h,y_{t,x}(t+h)\bigr),\quad \mbox{\textit{for}}\ h\in[0,T-t]. $$

An important fact resulting from the assumptions (H2) and (H4)(iv) is the local Lipschitz continuity of the value function v.

Proposition 3.2

Assume that (H1)(H4) hold. Then the value functionvis locally Lipschitz continuous on \([0,T]\times \mathbb {R}^{d}\).

Proof

For any t∈[0,T], we first prove that v(t,⋅) is locally Lipschitz continuous on \(\mathbb {R}^{d}\). Let \(x,z\in \mathbb {R}^{d}\) and, without loss of generality, suppose that

$$v(t,x)\geq v(t,z). $$

There exists \(\overline{y}_{t,z}\in S_{[t,T]}(z)\) such that

$$v(t,z)=\varphi \bigl(\overline{y}_{t,z}(T)\bigr). $$

We set

$$h=\frac{\|x-z\|}{\delta},\qquad \xi(s)=x+\delta\frac{z-x}{\|z-x\|}(s-t)\quad \mbox{for}\ s \in[t,t+h]. $$

Note that ξ(t)=x and ξ(t+h)=z. Thanks to the controllability assumption (H4)(iv), we can define the following admissible trajectory

$$ \widetilde{y}_{t,x}(s)= \begin{cases}\displaystyle \xi(s), & \mbox{for}\ s\in[t,t+h],\\ \overline{y}_{t,z}(s-h), & \mbox{for}\ s\in[t+h,T]. \end{cases} $$

Denoting by \(L_{\varphi }>0\) the Lipschitz constant of φ, we have

$$ \begin{aligned} v(t,x)-v(t,z)&\leq \varphi \bigl(\widetilde{y}_{t,x}(T) \bigr)-\varphi \bigl(\overline{y}_{t,z}(T)\bigr) \\ &\leq L_{\varphi }\bigl\|\widetilde{y}_{t,x}(T)-\overline{y}_{t,z}(T)\bigr\| \\ &\leq L_{\varphi }\bigl\|\overline{y}_{t,z}(T-h)-\overline{y}_{t,z}(T)\bigr\| \\ &\leq L_{\varphi }\|G\|h=\frac{L_{\varphi }\|G\|}{\delta}\|x-z\|, \end{aligned} $$

whence the local Lipschitz continuity of v(t,⋅) follows.
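The concatenation used in this estimate can be traced numerically. In the sketch below (our illustration; δ, the speed bound M standing in for ‖G‖, the horizon and the sample arc \(\overline{y}\) are all made-up data), we check that the concatenated trajectory steers from x to z in time h=‖x−z‖/δ and that the final gap is bounded by Mh:

```python
import math

# Numerical trace of the concatenated trajectory in the proof above.
# Illustrative data: delta is the controllability radius from (H4)(iv),
# M plays the role of the velocity bound ||G||, and y_bar is an arbitrary
# sample arc from z with |y_bar'| <= M.
delta, M, t, T = 0.5, 1.0, 0.0, 1.0
x, z = 0.3, 0.1

h = abs(x - z) / delta                        # time needed to steer x -> z

def xi(s):                                    # straight run at speed delta
    return x + delta * ((z - x) / abs(z - x)) * (s - t)

def y_bar(s):                                 # sample trajectory from z
    return z + math.sin(M * (s - t))          # speed bounded by M

def y_tilde(s):                               # the concatenation of the proof
    return xi(s) if s <= t + h else y_bar(s - h)

gap = abs(y_tilde(T) - y_bar(T))              # = |y_bar(T - h) - y_bar(T)|
print(h, gap, M * h)                          # gap is bounded by M * h
```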

Next, for fixed \(x\in \mathbb {R}^{d}\), we prove the Lipschitz continuity of v(⋅,x) on [0,T]. For any t,s∈[0,T], suppose without loss of generality that t<s. By the super-optimality, there exists \(y^{\mathit{op}}\in S_{[t,T]}(x)\) such that

$$v(t,x)=v\bigl(s,y^{\mathit{op}}(s)\bigr). $$

Then

$$ \bigl|v(t,x)-v(s,x)\bigr|=\bigl|v\bigl(s,y^{\mathit{op}}(s)\bigr)-v(s,x)\bigr|\leq L_v \|G\|(s-t), $$

where \(L_v\) is the local Lipschitz constant of v(s,⋅). This completes the proof. □

Remark 3.3

Assumption (H4)(iv) plays an important role in our proof of the Lipschitz continuity of the value function. However, it is worth mentioning that Lipschitz continuity can also hold in cases where (H4)(iv) fails. In Example 1, if one takes \(F_1=F_2\) to be Lipschitz continuous dynamics, then the value function is Lipschitz continuous without any controllability assumption. For multi-domain problems, weaker controllability assumptions are analyzed in [15].

The following result analyzes the structure of the dynamics and clarifies the behavior of the trajectories.

Proposition 3.4

Suppose \(y(\cdot):[t,T]\rightarrow \mathbb {R}^{d}\) is an absolutely continuous arc. Then the following are equivalent.

  1. (i)

    y(⋅) satisfies (3.1);

  2. (ii)

    For each k=1,…,m+ℓ, y(⋅) satisfies y(t)=x and

    $$\dot{y}(s)\in F^{\mathit {new}}_k\bigl(y(s)\bigr),\quad \mbox{\textit{a.e. whenever}}\ y(s)\in \mathcal {M}_k, $$
  3. (iii)

    y(⋅) satisfies

    $$\begin{cases}\displaystyle \dot{y}(s)\in F^{E}(y(s)) & \mbox{\textit{for}}\ s\in(t,T),\\ y(t)=x. \end{cases} $$

Proof

It is clear that (ii) implies (i) since \(F^{\mathit {new}}_{k}(x)\subseteq G(x)\) whenever \(x\in \mathcal {M}_{k}\). So assume that (i) holds, and let us show that (ii) holds as well.

The proof is essentially the same as that of Proposition 2.1 in [5]. For any k=1,…,m+ℓ, let \(J_{k}:=\{s\in[t,T]:y(s)\in \mathcal {M}_{k}\}\); without loss of generality, suppose that the Lebesgue measure of \(J_k\) is nonzero. We set

$$\tilde{J}_k:=\bigl\{s\in J_k: \dot{y}(s)\ \mbox{exists in}\ G\bigl(y(s)\bigr)\ \mbox{and}\ s\ \mbox{is a Lebesgue point of } J_k\bigr\}. $$

It is clear that \(\tilde{J}_{k}\) has full measure in \(J_k\). For any \(s\in \tilde{J}_{k}\), being a Lebesgue point implies that there exists a sequence {s n} with \(s\neq s_{n}\in\tilde{J}_{k}\) for all n and s n→s as n→∞. Since \(y(s_{n})\in \mathcal {M}_{k}\), we have

$$\dot{y}(s)=\lim_{n\rightarrow \infty}\frac{y(s_n)-y(s)}{s_n-s}\in \mathcal {T}_{\mathcal {M}_k} \bigl(y(s)\bigr). $$

Then by the definition of \(F^{\mathit {new}}_{k}\), we have

$$\dot{y}(s)\in G\bigl(y(s)\bigr)\cap \mathcal {T}_{\mathcal {M}_k}\bigl(y(s)\bigr)= F^{\mathit {new}}_k\bigl(y(s)\bigr),\quad \forall s\in\tilde{J}_k, $$

which proves (ii).

It is clear that (ii) ⇒ (iii) ⇒ (i) since \(F^{\mathit {new}}_{k}(\cdot)\subseteq F^{E}(\cdot)\subseteq G(\cdot)\), which ends the proof. □

Proposition 3.4 will be very useful in the characterization of the super-optimality and the sub-optimality by HJ equations involving the essential Hamiltonian H E.

3.1 The Supersolution Property

The following proposition characterizes the super-optimality by means of supersolutions of an HJB equation; this is a classical result, since G is usc.

Proposition 3.5

Suppose \(u:[0,T]\times \mathbb {R}^{d}\rightarrow \mathbb {R}\) is continuous. Then u satisfies the super-optimality if and only if for any \((t_{0},x_{0})\in(0,T)\times \mathbb {R}^{d}\) and any \(\phi\in C^{1}((0,T)\times \mathbb {R}^{d})\) such that \(u-\phi\) attains a local minimum at (t 0,x 0), we have

$$ -\phi_t(t_0,x_0)+H_G \bigl(x_0, D\phi(t_0,x_0)\bigr)\geq0. $$
(3.3)

Proof

This is a straightforward consequence of Theorem 3.2 and Lemma 4.3 in [12] (see also [3]). □

Due to the structure of the dynamics described in Proposition 3.4, it is possible to replace G by \(F^E\) and obtain a more precise HJB inequality, since the sets of trajectories driven by G and by \(F^E\) coincide. The difficulty here is that, in general, \(F^E\) is not usc.

First, we have the following result concerning the dynamics of optimal trajectories.

Lemma 3.6

Let y(⋅)∈S [t,T](x) be an absolutely continuous arc along which the value function v satisfies the super-optimality. For any \(p\in \mathbb {R}^{d}\) such that there exists a sequence t n→0+ with \(\frac {y(t+t_{n})-x}{t_{n}}\rightarrow p\), denoting by \(\text {co }F^{E}(x)\) the convex hull of \(F^{E}(x)\), we have

$$p\in \text {co }F^{E}(x). $$

The proof of Lemma 3.6 is presented in Appendix A. In the next theorem, we use Lemma 3.6 to show that a function satisfying the super-optimality is a supersolution of the more precise HJB equation involving \(H^E\), rather than only of (3.3) involving \(H_G\), even though \(F^E\) is not usc.

Theorem 3.7

Suppose \(u:[0,T]\times \mathbb {R}^{d}\rightarrow \mathbb {R}\) is continuous and u(T,x)=φ(x) for all \(x\in \mathbb {R}^{d}\). Then u satisfies the super-optimality if and only if u is a supersolution of (2.2a)–(2.2c), i.e. for any \((t_{0},x_{0})\in(0,T)\times \mathbb {R}^{d}\) and any \(\phi\in C^{1}((0,T)\times \mathbb {R}^{d})\) such that \(u-\phi\) attains a local minimum at (t 0,x 0), we have

$$-\phi_t(t_0,x_0)+\sup_{p\in F^{E}(x_0)} \bigl\{-p\cdot D\phi(t_0,x_0)\bigr\}\geq0. $$

Proof

(⇒) Let \((t_{0},x_{0})\in(0,T)\times \mathbb {R}^{d}\) and let \(\phi\in C^{1}((0,T)\times \mathbb {R}^{d})\) be such that \(u-\phi\) attains a local minimum at (t 0,x 0). Let \(\bar{y}_{t_{0},x_{0}}\) be the trajectory along which u satisfies the super-optimality. By the same argument as in Proposition 3.5, we obtain for small h>0

$$\frac{1}{h}\bigl(\phi(t_0,x_0)-\phi \bigl(t_0+h,\bar{y}_{t_0,x_0}(t_0+h)\bigr)\bigr)\geq 0, $$

i.e.,

$$ \frac{1}{h}\int^h_0 \bigl[- \phi_t\bigl(t_0+s,\bar{y}_{t_0,x_0}(t_0+s) \bigr) -D\phi\bigl(t_0+s,\bar{y}_{t_0,x_0}(t_0+s)\bigr) \cdot\dot{\bar{y}}_{t_0,x_0}(t_0+s) \bigr]ds\geq0. $$

Up to a subsequence, let h n→0+ be such that \(x_{n}:=\bar {y}_{t_{0},x_{0}}(t_{0}+h_{n})\) satisfies \(\frac{x_{n}-x_0}{h_{n}}\rightarrow p\) for some \(p\in \mathbb {R}^{d}\). We then get

$$-\phi_t(t_0,x_0)-p\cdot D \phi(t_0,x_0)\geq0. $$

Lemma 3.6 leads to

$$ p\in \text {co }F^{E}(x_0). $$
(3.4)

Then we deduce that

$$-\phi_t(t_0,x_0)+\sup_{p\in \text {co }F^{E}(x_0)} \bigl\{-p\cdot D\phi(t_0,x_0)\bigr\}\geq0. $$

Since the supremum of the linear function \(p\mapsto -p\cdot D\phi(t_0,x_0)\) over \(\text {co }F^{E}(x_{0})\) coincides with its supremum over \(F^{E}(x_{0})\),

$$-\phi_t(t_0,x_0)+\sup_{p\in F^{E}(x_0)} \bigl\{-p\cdot D\phi(t_0,x_0)\bigr\}\geq0. $$

(⇐) Let \((t_{0},x_{0})\in(0,T)\times \mathbb {R}^{d}\) and \(\phi\in C^{1}((0,T)\times \mathbb {R}^{d})\) be such that \(u-\phi\) attains a local minimum at (t 0,x 0). Since u is a supersolution, we have

$$-\phi_t(t_0,x_0)+\sup_{p\in F^E(x_0)} \bigl\{-p\cdot D\phi(t_0,x_0)\bigr\}\geq0. $$

Since \(F^{E}(x_0)\subseteq G(x_0)\), we deduce that

$$-\phi_t(t_0,x_0)+\sup_{p\in G(x_0)} \bigl\{-p\cdot D\phi(t_0,x_0)\bigr\}\geq0. $$

Then we deduce the desired result by Proposition 3.5. □

3.2 The Subsolution Property

As mentioned before, if G were Lipschitz continuous, one could characterize the sub-optimality by the opposite HJB inequality:

$$-u_t(t_0,x_0)+H_G \bigl(x_0,Du(t_0,x_0)\bigr)\leq0 $$

in the viscosity sense. However, G is only usc on the interfaces, and the characterization using \(H_G\) fails because some dynamics in G are not "essential": for some p∈G(x), no trajectory starting from x uses the dynamic p. For instance, in Example 2, at the point 0 we have G(0)=[−1,1]. Consider the dynamic p=1∈G(0): a trajectory y starting from 0 with velocity 1 would immediately enter \(\varOmega_2\), and hence would not be admissible, since 1 is not contained in the dynamics \(F_2\).
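This admissibility argument can be made concrete with one explicit Euler step. The sketch below is our illustration: the step size and the one-step admissibility test are simplifications, not the paper's formal definitions. It flags which velocities at the interface x=0 of Example 2 are essential:

```python
# One explicit Euler step from the interface x = 0 of Example 2. The step
# size dt and this one-step admissibility test are illustrative
# simplifications, not the paper's formal definitions.
def F(xi):
    if xi < 0.0:
        return (-0.5, 1.0)           # F_1 on Omega_1
    if xi > 0.0:
        return (-1.0, 0.5)           # F_2 on Omega_2
    return None                      # the interface is handled separately

def essential_from_zero(p, dt=1e-3):
    y = 0.0 + p * dt                 # state just after leaving x = 0
    if y == 0.0:
        return True                  # staying on the interface is allowed
    lo, hi = F(y)
    return lo <= p <= hi             # p must still belong to the local dynamics

print(essential_from_zero(1.0))      # False: 1 is not in F_2 = [-1, 1/2]
print(essential_from_zero(0.5))      # True:  1/2 is in F_2
print(essential_from_zero(-1.0))     # False: -1 is not in F_1 = [-1/2, 1]
```

Consistently with Example 2, the velocities flagged as essential here are exactly those in \(F^{E}(0)=[-\frac{1}{2},\frac{1}{2}]\).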

In the sequel, we replace G by the essential dynamic multifunction \(F^E\), thereby eliminating the useless non-essential dynamics. Note that \(F^E\) is in general not Lipschitz continuous either. The significant role of \(F^E\) is shown in the following result.

Lemma 3.8

For any p∈F E(x), there exist τ>t and a solution y(⋅) of (3.1) which is C 1 on [t,τ] with \(\dot{y}(t)=p\).

Proof

This is a partial result of [5, Proposition 5.1]. For the convenience of the reader, a sketch of the proof is given in Appendix B. □

More precisely, Lemma 3.8 can be rewritten as:

Lemma 3.9

Let k∈{1,…,m +} and \(x\in\overline{\mathcal {M}}_{k}\). Then for any \(p\in F^{E}_{k}(x)\), there exist τ>t and a trajectory y(⋅) of (3.1) which is C 1 on [t,τ] with \(\dot{y}(t)=p\) and \(y(s)\in \overline{\mathcal {M}}_{k}\) for s∈[t,τ].

The following two results give the characterization of sub-optimality by HJB inequalities.

Proposition 3.10

Let \(u:[0,T]\times \mathbb {R}^{d}\rightarrow \mathbb {R}\) be locally Lipschitz continuous with u(T,x)=φ(x) for all \(x\in \mathbb {R}^{d}\). Suppose that u satisfies the sub-optimality; then u is a subsolution of (2.2a)–(2.2c) in the sense of Definition 2.6.

Proof

Given \((t_{0},x_{0})\in[0,T]\times \mathbb {R}^{d}\), for any \(k\in\mathbb{I}(x_{0})\) and \(p\in F^{E}_{k}(x_{0})\), by Lemma 3.9 there exist h>0 and a solution y(⋅) of (3.1), C 1 on [t 0,t 0+h], with \(\dot {y}(t_{0})=p\), \(y(t_{0})=x_{0}\) and \(y(s)\in\overline{\mathcal {M}}_{k}\) for all \(s\in [t_{0},t_{0}+h]\). By the sub-optimality of u,

$$u(t_0,x_0)\leq u\bigl(t_0+h,y(t_0+h) \bigr). $$

For any \(\phi\in C^{0}((0,T)\times \mathbb {R}^{d})\cap C^{1}((0,T)\times \mathcal {M}_{k})\) such that u−ϕ attains a local maximum at (t 0,x 0) on \((0,T)\times \overline{\mathcal {M}}_{k}\), we have

$$u\bigl(t_0+h,y(t_0+h)\bigr)-\phi\bigl(t_0+h,y(t_0+h) \bigr)\leq u(t_0,x_0)-\phi(t_0,x_0). $$

Then we deduce that

$$\frac{1}{h}\bigl(\phi(t_0,x_0)-\phi \bigl(t_0+h,y(t_0+h)\bigr)\bigr)\leq0. $$

By taking h→0 we have

$$-q_t-p\cdot q_x\leq0,\quad \mbox{for any}\ p\in F^{E}_k(x_0),\ (q_t,q_x)\in \nabla_{\overline{\mathcal {M}}_k}\phi(t_0,x_0), $$

i.e.

$$ -q_t+\sup_{p\in F^{E}_k(x_0)}\{-p\cdot q_x\}\leq 0. $$

 □

We now present an example illustrating that H E is the proper Hamiltonian for the subsolution characterization of the value function.

Example 3

Consider again the same 1d structure as in Example 1 and Example 2, i.e. \(\mathbb {R}=\varOmega _{1}\cup \varOmega _{2}\cup\varGamma_{1}\) with

$$\varOmega _1=(-\infty,0),\qquad \varOmega _2=(0,+\infty),\qquad \varGamma_1=\{0\}, $$

and the dynamics

$$F_1(x)=\biggl[-\frac{1}{2},1\biggr],\quad \forall x\in \varOmega _1,\quad \mbox{and}\quad F_2(x)=\biggl[-1,\frac{1}{2} \biggr],\quad \forall x\in \varOmega _2. $$

At the point 0, the convexified dynamics are G(0)=[−1,1] and the essential dynamics are \(F^{E}(0)=[-\frac{1}{2},\frac{1}{2}]\). Let T>0 be a given final time and take the final cost function φ 2(x)=x. Then from any initial data \((t,x)\in[0,T]\times \mathbb {R}\), the optimal strategy is to go to the left as fast as possible. Thus the value function is given by

$$v_2(t,x):=\min\bigl\{\varphi _2\bigl(y_{t,x}(T) \bigr)\bigr\}= \begin{cases} x-\frac{1}{2}(T-t) & x\leq0,\\ -\frac{1}{2}(T-t-x) & 0\leq x\leq T-t,\\ x-(T-t) & x\geq T-t. \end{cases} $$

At the point (t,x)=(0,0), \(\partial_{t} v_{2}(0,0)=\frac{1}{2}\), \(Dv_{2}(0,0^{-})=1\), \(Dv_{2}(0,0^{+})=\frac {1}{2}\), so that \(D^{+}v_{2}(0,0)=[\frac{1}{2},1]\). Then for every \(q\in D^{+}v_{2}(0,0)\) we have

$$-\partial_t v_2(0,0)+\max_{p\in F^E(0)}\{-p\cdot q\}=-\frac {1}{2}+\frac{q}{2}\leq0, $$

while for \(q=1\in D^{+}v_{2}(0,0)\),

$$-\partial_t v_2(0,0)+\max_{p\in G(0)}\{-p\cdot q\}=-\frac {1}{2}+1=\frac{1}{2}> 0. $$

We see that the subsolution property fails if F E is replaced by the larger set G.
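As a cross-check of the closed form above, one can simulate the candidate optimal strategy numerically. The sketch below is illustrative (function names are not from the paper): it runs a forward Euler scheme along the leftmost admissible velocity, which is optimal here since φ 2 is increasing.

```python
def v2_closed_form(t, x, T):
    # Piecewise formula for v_2 from Example 3.
    if x <= 0:
        return x - 0.5 * (T - t)
    if x <= T - t:
        return -0.5 * (T - t - x)
    return x - (T - t)

def v2_simulated(t, x, T, h=1e-4):
    # Forward Euler along the leftmost admissible velocity:
    # min F_2 = -1 while x > 0, min F_1 = -1/2 once x <= 0.
    y = x
    for _ in range(int(round((T - t) / h))):
        y += (-1.0 if y > 0 else -0.5) * h
    return y  # equals phi_2(y(T)) since phi_2(x) = x
```

On sample points such as T=1 and (t,x)∈{(0,−0.5),(0,0.3),(0.2,1.5)}, the two values agree up to the discretization error.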

Proposition 3.10 indicates that any function satisfying the sub-optimality is a subsolution of (2.2a)–(2.2c). The converse result needs more elaborate arguments. The difficulty arises mainly from handling the trajectories oscillating near the interfaces, i.e. trajectories that cross the interfaces infinitely many times in finite time and thus exhibit a “Zeno”-type effect. The proofs of Proposition 3.11 and Theorem 3.12 contain the details on how to construct “nice” approximate trajectories that deal with such Zeno-type trajectories.

We first give the following result, which contains the key argument for handling Zeno-type trajectories.

Proposition 3.11

Let u be a Lipschitz continuous subsolution of (2.2a)–(2.2c). Suppose \(\mathcal {M}_{k}\) is a subdomain and \(\mathcal {M}\) is a union of subdomains with \(\mathcal {M}_{k}\subseteq\overline{\mathcal {M}}\). Assume \(\mathcal {M}\) has the following property: for every trajectory y(⋅) of (3.1) defined on [a,b]⊆[t,t+h] with \(y(\cdot)\subseteq \mathcal {M}\), we have

$$ u\bigl(a,y(a)\bigr)\leq u\bigl(b,y(b)\bigr). $$
(3.5)

Then for any trajectory y(⋅) of (3.1) defined on [a,b]⊆[t,t+h] lying totally within \(\mathcal {M}_{k}\cup \mathcal {M}\), we have

$$u\bigl(a,y(a)\bigr)\leq u\bigl(b,y(b)\bigr). $$

Proof

Here we adapt an idea introduced in [5] in the context of stratified control problems. Let y(⋅) be a trajectory of (3.1) with \(y(\cdot)\subseteq \mathcal {M}_{k}\cup \mathcal {M}\), and assume (3.5) holds. Without loss of generality, suppose that \(y(a)\in \mathcal {M}_{k}\) and \(y(b)\in \mathcal {M}_{k}\). By (H3), we have \(\mathcal {M}_{k}\cap \mathcal {M}=\emptyset\). Let \(J:=\{s\in[a,b]:y(s)\notin \mathcal {M}_{k}\}\), which is an open set and hence can be written as

$$J=\bigcup_{n=1}^{\infty} (a_n,b_n) $$

where the intervals are pairwise disjoint. For a fixed integer p, we set

$$J_{ p} := \bigcup_{n=1}^{ p} (a_n,b_n), $$

which after re-indexing can be assumed to satisfy

$$b_0:=a\leq a_1<b_1\leq a_2 < b_2\leq\dots\leq a_{ p} < b_{ p}\leq a_{ p+1}:=b. $$

Choose p sufficiently large so that

$$ {\mathit {meas}}(J\backslash J_{ p})<\frac{r}{2 e^{LT}\|G\|}, $$

where ∥G∥ is an upper bound of the norm of any velocity that may appear, and r>0 is given by

$$r:=\inf_{\substack{s\in[b_0,b]\\ w\in\overline{\mathcal {M}}_k\backslash \mathcal {M}_k}}\bigl\| y(s)-w\bigr\|. $$

For n=1,…,p, we have \(y(s)\in \mathcal {M}\) for s∈(a n,b n). Let ε>0 be small enough that [a n+ε,b n−ε]⊆(a n,b n); then by (3.5)

$$u\bigl(a_n+\varepsilon ,y(a_n+\varepsilon )\bigr)\leq u\bigl(b_n- \varepsilon ,y(b_n-\varepsilon )\bigr). $$

Taking ε→0 and by the continuity of u and y(⋅), we deduce that

$$u\bigl(a_n,y(a_n)\bigr)\leq u\bigl(b_n,y(b_n) \bigr). $$

Next we deal with y(⋅) restricted to [b n,a n+1]. For n=0,…,p, by Proposition 3.4, \(\dot{y}(s)\in F^{\mathit {new}}_{k} (y(s) )\) for almost all s∈[b n,a n+1]∖J. For n=0,…,p, set ε n:=meas([b n,a n+1]∩J), and note that \(\sum_{n=0}^{ p}\varepsilon_{n}={\mathit {meas}}(J\backslash J_{ p})\). We estimate how far y(⋅) is from a trajectory lying in \(\mathcal {M}_{k}\) with dynamics \(F^{\mathit {new}}_{k}\) by

$$\xi_n:=\int_{b_n}^{a_{n+1}}\text {dist}\bigl(\dot{y}(s),F^{\mathit {new}}_k \bigl(y(s) \bigr) \bigr) ds \leq2 \|G\| \varepsilon_n. $$
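The last bound holds because the integrand vanishes off J (there \(\dot{y}(s)\in F^{\mathit {new}}_{k}(y(s))\) by Proposition 3.4) and is at most 2∥G∥ on J, both \(\dot{y}(s)\) and every element of \(F^{\mathit {new}}_{k}(y(s))\) being bounded in norm by ∥G∥:

$$\xi_n=\int_{[b_n,a_{n+1}]\cap J}\text {dist}\bigl(\dot{y}(s),F^{\mathit {new}}_k \bigl(y(s) \bigr) \bigr) ds \leq2\|G\|\,{\mathit {meas}}\bigl([b_n,a_{n+1}]\cap J\bigr)=2\|G\|\varepsilon_n. $$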

By the Filippov approximation theorem (see [8, Theorem 3.1.6] and also [9, Proposition 3.2]), there exists a trajectory z n(⋅) of \(F^{\mathit {new}}_{k}\) defined on the interval [b n,a n+1] that lies in \(\mathcal {M}_{k}\) with z n(b n)=y(b n) and satisfies

$$ \bigl\|z_n(a_{n+1})-y(a_{n+1}) \bigr\|\leq e^{L(a_{n+1}-b_n)}\xi_n \leq2\|G\|e^{L(a_{n+1}-b_n)} \varepsilon_n. $$
(3.6)

Since u is a subsolution of (2.2a)–(2.2c), for any \(x\in \mathcal {M}_{k}\), noting that \(F^{\mathit {new}}_{k}(x)\subseteq \mathcal {T}_{\mathcal {M}_{k}}(x)\) and \(\mathcal {T}_{\mathcal {M}_{k}}(x)=\mathcal {T}_{\overline{\mathcal {M}}_{k}}(x)\), Definition 2.6 yields

$$ -\partial_t \phi(t,x)+\sup_{p\in F^{\mathit {new}}_k(x)} \bigl\{-p\cdot D\phi(t,x)\bigr\}\leq0 $$
(3.7)

for any \(\phi\in C^{0}((0,T)\times \mathbb {R}^{d})\cap C^{1}((0,T)\times \mathcal {M}_{k})\) such that u−ϕ attains a local maximum at (t,x) on \((0,T)\times \mathcal {M}_{k}\). Since z n(⋅) lies in \(\mathcal {M}_{k}\) on [b n,a n+1] and is driven by the Lipschitz dynamics \(F^{\mathit {new}}_{k}\), (3.7) implies that the sub-optimality of u is satisfied along \(z_{n}(\cdot)|_{[b_{n},a_{n+1}]}\), i.e.

$$u\bigl(b_n,z_n(b_n)\bigr)\leq u \bigl(a_{n+1},z_n(a_{n+1})\bigr). $$

Then by (3.6) we have

$$ \begin{aligned} u\bigl(b_n,y(b_n)\bigr)&= u \bigl(b_n,z_n(b_n)\bigr)\leq u \bigl(a_{n+1},z_n(a_{n+1})\bigr) \\ &\leq u\bigl(a_{n+1},y(a_{n+1})\bigr)+2L_u\|G \|e^{L(a_{n+1}-b_n)}\varepsilon _n. \end{aligned} $$

We set ε p:=meas(JJ p), and we deduce that

$$ \begin{aligned} u\bigl(a,y(a)\bigr)&\leq u\bigl(a_1,y(a_1) \bigr)+2L_u\|G\|e^{L(a_1-b_0)}\varepsilon _0 \\ &\leq u\bigl(a_2,y(a_2)\bigr)+2L_u\|G \|e^{L(a_2-b_0)}(\varepsilon _0+\varepsilon _1) \\ &\vdots \\ &\leq u\bigl(a_{ p+1},y(a_{ p+1})\bigr)+2L_u\|G \|e^{L(a_{p+1}-b_0)}\varepsilon ^p \\ &= u\bigl(b,y(b)\bigr)+2L_u\|G\|e^{L(b-a)}\varepsilon ^p. \end{aligned} $$

By taking p→+∞, we have ε p→0 and the desired result is obtained. □

Theorem 3.12

Suppose u is a locally Lipschitz continuous subsolution of (2.2a)–(2.2c). Then u satisfies the sub-optimality, i.e. for any trajectory y(⋅)∈S [t,T](x), one has

$$u(t,x)\leq u\bigl(t+h,y(t+h)\bigr),\quad \forall h\in[0,T-t]. $$

Proof

Let \(\mathcal {M}\) be a union of subdomains (manifolds or interfaces). Let \(\bar{d}_{\mathcal {M}}\in\{0,\ldots,d\}\) be the minimal dimension of the subdomains in \(\mathcal {M}\). We claim that for any h∈[0,Tt] and any trajectory y(⋅) of (3.1) lying totally within \(\mathcal {M}\), we have

$$ u\bigl(a,y(a)\bigr)\leq u\bigl(b,y(b)\bigr),\quad \mbox{for any}\ [a,b]\subseteq[t,t+h]. $$
(3.8)

The proof of (3.8) is based on an induction argument on the minimal dimension \(\bar{d}_{\mathcal {M}}\):

(HR) :

for \(\tilde{d}\in\{1,\ldots,d\}\), suppose that for any \(\mathcal {M}\) with \(\bar{d}_{\mathcal {M}}\geq\tilde{d}\) and for any trajectory y(⋅) that lies within \(\mathcal {M}\), (3.8) holds.

Step (1): Let us first check the case when \(\tilde{d}=d\). In this case, \(\bar{d}_{\mathcal {M}}=d\), then \(\mathcal {M}\) is a union of d-manifolds which are disjoint by (H1). For any trajectory y(⋅) of (3.1) lying within \(\mathcal {M}\), since y(⋅) is continuous, y(⋅) lies entirely in one of the d-manifolds, denoted by Ω i. The subsolution property of u implies that

$$-\partial_t u(t,x)+\sup_{p\in F_i(x)}\bigl\{-p\cdot Du(t,x)\bigr\}\leq0 $$

holds in the viscosity sense. Since the dynamics on Ω i are given by F i, which is Lipschitz continuous, the classical theory implies that u satisfies the sub-optimality along y(⋅), and (3.8) holds true.

Step (2): Now assume that (HR) is true for \(\tilde{d}\in\{1,\ldots,d\}\), and let us prove that (HR) is true for \(\tilde{d}-1\). In this case, the minimal dimension of the subdomains in \(\mathcal {M}\) is \(\bar{d}_{\mathcal {M}}=\tilde{d}-1\). As the induction hypothesis, assume that (3.8) holds for any trajectory that lies within a union of subdomains each of dimension at least \(\tilde{d}\). Three cases can occur.

  • If \(\mathcal {M}\) contains only one subdomain, i.e. \(\mathcal {M}=\mathcal {M}_{k}\) with dimension \(\bar{d}_{\mathcal {M}}\) for some k∈{1,…,m+}, then for any trajectory y(⋅) lying within \(\mathcal {M}_{k}\), the subsolution property of u implies that u satisfies the sub-optimality along y(⋅) since the dynamics \(F^{\mathit {new}}_{k}\) is Lipschitz continuous on \(\mathcal {M}_{k}\).

  • If \(\mathcal {M}\) contains more than one subdomain and \(\mathcal {M}\) is connected, let \(\mathcal {M}'_{1},\ldots,\mathcal {M}'_{p}\) be all the subdomains contained in \(\mathcal {M}\) with dimension \(\bar{d}_{\mathcal {M}}\). Then \(\widetilde{\mathcal {M}}:=\mathcal {M}\backslash(\cup _{k=1}^{p}\mathcal {M}'_{k} )\) is a union of subdomains each of dimension at least \(\tilde{d}\). We note that \(\mathcal {M}'_{k}\subseteq\overline{\widetilde{\mathcal {M}}}\) for each k=1,…,p. By the induction hypothesis and Proposition 3.11, (3.8) holds for any trajectory lying entirely within \(\widetilde{\mathcal {M}}\cup \mathcal {M}'_{1}\). Applying Proposition 3.11 again with \(\widetilde{\mathcal {M}}\cup \mathcal {M}'_{1}\) and \(\mathcal {M}'_{2}\), (3.8) holds for any trajectory lying entirely within \(\widetilde {\mathcal {M}}\cup \mathcal {M}'_{1}\cup \mathcal {M}'_{2}\). Continuing this process, we finally obtain that (3.8) holds for any trajectory lying entirely within \(\mathcal {M}=\widetilde{\mathcal {M}}\cup(\cup_{k=1}^{p}\mathcal {M}'_{k} )\).

  • If \(\mathcal {M}\) is not connected, then any trajectory y(⋅) lying within \(\mathcal {M}\), being continuous, lies within one connected component of \(\mathcal {M}\). By the same argument as above, (3.8) holds for y(⋅), and the induction step is complete.

Finally, to complete the proof of the theorem, we remark that for any trajectory y(⋅) of (3.1), by considering \(\mathcal {M}=\mathbb {R}^{d}\) with \(\bar{d}_{\mathcal {M}}=0\) and taking a=t, b=t+h in (3.8), we have

$$u(t,x)\leq u\bigl(t+h,y(t+h)\bigr), $$

which ends the proof. □

4 Proof of Theorem 2.4

Since v satisfies the super-optimality and the sub-optimality, by Theorem 3.7 and Proposition 3.10, v is a viscosity solution of (2.2a)–(2.2c).

The uniqueness follows from the comparison principle below.

Proposition 4.1

Suppose that \(u:[0,T]\times \mathbb {R}^{d}\rightarrow \mathbb {R}\) is Lipschitz continuous and u(T,x)=φ(x) for any \(x\in \mathbb {R}^{d}\).

  (i) If u satisfies the super-optimality, then v(t,x)≤u(t,x) for all \((t,x)\in[0,T]\times \mathbb {R}^{d}\);

  (ii) If u satisfies the sub-optimality, then v(t,x)≥u(t,x) for all \((t,x)\in[0,T]\times \mathbb {R}^{d}\).

Proof

(i) For any \((t,x)\in[0,T]\times \mathbb {R}^{d}\), by the super-optimality of u, there exists a trajectory \(\overline{y}_{t,x}\) such that

$$u(t,x)\geq u\bigl(T,\overline{y}_{t,x}(T)\bigr)=\varphi \bigl( \overline{y}_{t,x}(T)\bigr). $$

By the sub-optimality of v, we have

$$v(t,x)\leq v\bigl(T,\overline{y}_{t,x}(T)\bigr)=\varphi \bigl( \overline{y}_{t,x}(T)\bigr). $$

Then we deduce that

$$v(t,x)\leq u(t,x). $$

(ii) The claim follows by the same argument, now using the super-optimality of v and the sub-optimality of u. □

5 Conclusion

In this paper, we have studied the system (1.1) in a general framework of multi-domains with several interfaces. The existence and uniqueness of the solution are established under junction conditions on the interfaces. These conditions are derived by considering a control problem whose value function satisfies the system (1.1) on each subdomain Ω i. The analysis of this value function indicates what information should be imposed on the interfaces in order to guarantee a continuous solution of the system.