1 Introduction

The optimal control problems of ordinary and partial differential equations have been studied extensively since the middle of the last century. Generally, the control systems considered in this literature are well posed (see, for instance, [1]). However, it is well known that optimal control problems whose control systems are non-well-posed are much more challenging than the well-posed ones. In the case where a state equation admits more than one solution, the state variable does not depend continuously on the control variable. Hence, it is not clear how to carry out a sensitivity analysis of the state with respect to the control, and one cannot obtain the variations of the state with respect to the control as in [1]. Lions [2] first studied optimal control problems of non-monotone elliptic systems without state constraints, and Bonnans and Casas [3] considered the cases where state constraints are involved. In [2, 3], in order to overcome the difficulties caused by the non-well-posedness, the first step was to penalize the original problem by removing the nonlinear term from the state equation and regarding it as a part of the state constraints. It was then proved that the penalized problem has at least one solution, and necessary conditions for a solution of this penalized problem were given. Finally, necessary conditions for an optimal pair of the original problem were obtained by passing to the limit. Following this technique, several authors [4,5,6] discussed more general state equations.

Recently, Lin [7] studied the extendability of solutions and an optimal control problem after quenching for some ordinary differential equations. Quenching of a solution means that the derivative of the solution blows up at a finite time while the solution itself remains bounded. It was shown in [7] that a solution which quenches for the first time at a finite time may behave in different ways: it may not be extendable after quenching, it may be extendable uniquely, or it may have at least two extended solutions. An optimal control problem was also studied in [7]: among all extended control-solution pairs of a given extendable solution which quenches for the first time at a finite time, one seeks the pair with minimal energy for a certain cost functional. This cost functional is defined on a time interval whose left endpoint is the first quenching time of the aforementioned extendable solution and whose right endpoint is a given time greater than the left one. The Pontryagin maximum principle was established for this problem. This control problem differs from the classical one, since the extended solutions may be multiple.

Time optimal control is one of the important research areas of control theory. To the best of our knowledge, there is only a small amount of literature on time optimal control problems where the control systems are not well posed. In recent years, using different techniques, several researchers studied the existence of, and the necessary conditions for, the optimal quenching/blowup time for some special controlled systems (see [9,10,11,12]).

Motivated by the methods used in our paper [7], we study in this paper a time optimal control problem for a class of ordinary differential equations which may have multiple solutions. Making use of a sequence of approximate, well-posed time optimal control problems, we establish the Pontryagin maximum principle for the autonomous system with multiple solutions. The existence of optimal controls, under the assumption that the system is affine and the control set is convex, is also obtained.

From the point of view of applied science, classical time optimal control problems can be used to describe a missile with one warhead hitting a target in the shortest time. Since the 1960s, the technology of one missile carrying multiple independent re-entry vehicles (warheads) has been developed. When the missile is launched, the flight trajectories of the independent vehicles it carries are different, while their targets can be the same area. It is natural to minimize the first time at which these vehicles hit the target. This can be described by a time optimal control problem with multiple solutions.

This paper differs from the above-mentioned references as follows: (i) References [2,3,4,5,6] deal with time-independent control systems governed by elliptic partial differential equations, whereas this paper concerns time-dependent control systems. (ii) References [9,10,11,12] examine blowup or quenching time optimal control problems for ordinary differential equations with unique solutions. Although we studied in [7] an optimal control problem for ordinary differential equations with multiple solutions after quenching, that problem involves no terminal state constraints, and the purpose of the controls after quenching is to minimize a corresponding cost functional. To the best of our knowledge, this is the first paper on time optimal control problems for differential equations with multiple solutions. (iii) The solutions of the control system in [7] quench at a finite time, which is caused by the discontinuity of the nonlinear term of the system. Since the nonlinear term of the system considered in this paper is continuous with respect to the state variable, the solutions of this system and their time derivatives are bounded. Nevertheless, the system may still possess multiple solutions.

2 Definitions and Examples of Equations with Multiple Solutions

For convenience, for an \(\mathbb {R}^n\)-valued vector function \(f=(f^1\ f^2\ \ldots \ f^n)^T\) on a domain of \(\mathbb {R}^n\), \(f_t\) and \(f_y\) are defined as in [7, 12].

In this paper, the following controlled system will be considered,

$$\begin{aligned} \displaystyle \frac{{\hbox {d}}y(t)}{{\hbox {d}}t}= & {} f(t,y(t),u(t)),\ t>0, \end{aligned}$$
(1)
$$\begin{aligned} \displaystyle y(0)= & {} y_0, \end{aligned}$$
(2)

where the state y is a vector-valued function and the control u takes values in a set \(U\subset {\mathbb {R}}^m\).

We make the following assumptions throughout the paper.

Assumption 2.1

\(U\subset {\mathbb {R}}^m\) is a bounded and closed set.

Assumption 2.2

The target Q is a non-empty, closed, and convex set in \(\mathbb {R}^n\), \(y_0\in \mathbb {R}^n\) with \(y_0\not \in Q\).

Assumption 2.3

f(t, y, u) is measurable in \(t\in [0,+\infty [\) and continuous in \((y,u)\in \mathbb {R}^n\times U\). The point \(y_1\in \mathbb {R}^n\), with \(y_1\not \in Q\), is the singular point of f(t, y, u) with respect to y. There exist constants \(0<\alpha <1\), \(L_1>0\) and \(L_2>0\) such that

$$\begin{aligned} |f(t,y,u)|\le L_1|y|^\alpha +L_2,\ \forall \ (t,y,u)\in [0,+\infty [\times \mathbb {R}^n\times U. \end{aligned}$$

Moreover, for any connected set \(E\subset \subset [0,+\infty [\times (\mathbb {R}^n\setminus \{y_1\})\), there exist a constant \(L_E>0\) and a uniform modulus of continuity \(\omega _E\) such that

$$\begin{aligned} |f(t,y,u)-f(t,x,v)|\le L_E|y-x|+\omega _E(|u-v|), \ \forall \ (t,x,v),\ (t,y,u)\in E\times U. \end{aligned}$$

Assumption 2.4

f(t, y, u) is continuously differentiable in \(y\in \mathbb {R}^n\setminus \{y_1\}\). For any \(r>0\), there exists \(L_r>0\) such that

$$\begin{aligned} |f_y(t,y,u)|\le L_r, \ \text{ if }\ t\in [0,+\infty [,\ |y-y_1|\ge r\ \text{ and }\ u\in U. \end{aligned}$$

Moreover, for any connected set \(E\subset \subset [0,+\infty [\times (\mathbb {R}^n\setminus \{y_1\})\), there exists a uniform modulus of continuity \({\widetilde{\omega }}_E\) such that

$$\begin{aligned} |f_y(t,y,u)-f_y(t,x,v)|\le {\widetilde{\omega }}_E(|y-x|+|u-v|), \ \forall \ (t,x,v),\ (t,y,u)\in E\times U. \end{aligned}$$

Now, set \(\mathcal {U}:=\big \{u:\ [0,+\infty [\rightarrow U;\ u\ \text{ is } \text{ Lebesgue } \text{ measurable }\big \}.\)

Definition 2.1

Let \(y_0\in \mathbb {R}^n\) and \(u\in \mathcal {U}\). We say that a function \(y(\cdot \ ;y_0,u)\) in \(C([0,T];\mathbb {R}^n)\) is a solution to (1)–(2) with \(T>0\) iff

$$\begin{aligned}&y(t;y_0,u)=y_0+\int _0^t f(\tau ,y(\tau ;y_0,u),u(\tau )){\hbox {d}}\tau ,\ \ t\in [0,T]. \end{aligned}$$

Under Assumptions 2.1–2.4, (1)–(2) may possess multiple solutions. For instance, consider the following equation (taken from [13]) with \(n=1\) and \(y_0=y_1=0\),

$$\begin{aligned} \displaystyle \frac{{\hbox {d}}y(t)}{{\hbox {d}}t}= & {} y^{2/3}(t),\ t>0, \end{aligned}$$
(3)
$$\begin{aligned} \displaystyle y(0)= & {} 0. \end{aligned}$$
(4)

It is easy to check that (3)–(4) possesses an infinite number of solutions given by \(y(t)\equiv 0\), and \( y_c(t) =0,\ 0\le t\le c,\ y_c(t)= (t-c)^3/27,\ t> c, \) where c is an arbitrary positive constant.
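Indeed, one checks directly that, for \(t>c\),

$$\begin{aligned} \frac{{\hbox {d}}}{{\hbox {d}}t}\Big (\frac{(t-c)^3}{27}\Big )=\frac{(t-c)^2}{9}=\Big (\frac{(t-c)^3}{27}\Big )^{2/3}, \end{aligned}$$

while both branches of \(y_c\), together with their first derivatives, vanish at \(t=c\); hence each \(y_c\) belongs to \(C^1\) and satisfies (3)–(4).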

We can also construct another example. Consider the following equation with \(n=1\), \(y_0=-1\) and \(y_1=0\),

$$\begin{aligned} \displaystyle \frac{{\hbox {d}}y(t)}{{\hbox {d}}t}= & {} y^{2/3}(t)+u(t),\ t>0, \end{aligned}$$
(5)
$$\begin{aligned} \displaystyle y(0)= & {} -1, \end{aligned}$$
(6)

where \( u(t) =1,\ 0\le t\le 3(1-\pi /4),\ \text{ and }\ u(t)=0,\ t>3(1-\pi /4).\)

In order to discuss (5)–(6), we first consider the following equation,

$$\begin{aligned} \displaystyle \frac{{\hbox {d}}\widetilde{y}(t)}{{\hbox {d}}t}= & {} \widetilde{y}^{2/3}(t)+1,\ t>0, \end{aligned}$$
(7)
$$\begin{aligned} \displaystyle \widetilde{y}(0)= & {} -1. \end{aligned}$$
(8)

Since the function \(g(s,t):=s^{2/3}+1\) is locally Lipschitz with respect to s and bounded in the domain \(G=\,]-2,0[\,\times \,]-\infty ,+\infty [\), it follows from the existence and continuation theory of ordinary differential equations (cf. [8]) that (7)–(8) has a unique solution \(\widetilde{y}\), which can be extended up to the boundary of G. Because \(g(s,t)>0\) for \((s,t)\in \mathbb {R}\times ]-\infty ,+\infty [\), we deduce from (7) that the solution \(\widetilde{y}\) to (7)–(8) is strictly increasing on its interval of existence. Together with (8), this implies that \(\widetilde{y}\) can be extended until it equals 0, that is, until it reaches the boundary of G, at some finite time \(\widetilde{t}>0\).

Now, we claim that \(\widetilde{t}=3(1-\pi /4)\). Indeed, it follows from (7) that

$$\begin{aligned} \int _0^{t} \frac{{\hbox {d}}\widetilde{y}(\tau )}{\widetilde{y}^{2/3}(\tau )+1}=\int _0^t {\hbox {d}}\tau ,\ 0<t<\widetilde{t}. \end{aligned}$$
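
Evaluating the left-hand side via the substitution \(s=\widetilde{y}^{1/3}\), so that \({\hbox {d}}\widetilde{y}=3s^2\,{\hbox {d}}s\), gives the antiderivative

$$\begin{aligned} \int \frac{{\hbox {d}}\widetilde{y}}{\widetilde{y}^{2/3}+1}=\int \frac{3s^2}{s^2+1}\,{\hbox {d}}s=3\int \Big (1-\frac{1}{1+s^2}\Big ){\hbox {d}}s=3\big (s-\arctan s\big )+\text{ const}. \end{aligned}$$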

By Definition 2.1, \(\widetilde{y}\) takes values in \(\mathbb {R}^1\). Here and throughout the remainder of this section, the calculations are carried out in the framework of real numbers. Since \(\widetilde{y}(0)=-1\) by (8), and the real cube root of \(-1\) is \(-1\), it holds that \(\widetilde{y}^{1/3}(0)=-1\). Then, we have

$$\begin{aligned} 3\big (\widetilde{y}^{1/3}(t)-\arctan \widetilde{y}^{1/3}(t)\big )=&t+3\left( \widetilde{y}^{1/3}(0)-\arctan \widetilde{y}^{1/3}(0)\right) \nonumber \\ =&t-3(1-\pi /4),\ 0<t<\widetilde{t}. \end{aligned}$$
(9)

Set \(\widetilde{g}(s):=3(s^{1/3}-\arctan s^{1/3}),\ s\in \mathbb {R}\). Then, \(\widetilde{g}'(s)=1/(1+s^{2/3})>0\) for \(s\ne 0\). Thus, \(\widetilde{g}(s)=0\) if and only if \(s=0\). This and (9) imply \(\widetilde{t}=3(1-\pi /4)\) and \(\widetilde{y}(3(1-\pi /4))=0\). Hence, (5)–(6) also possesses an infinite number of solutions, given by

$$\begin{aligned} y(t)=\left\{ \begin{array}{ll} \widetilde{y}(t),\ 0\le t\le 3(1-\pi /4),\\ 0,\ t>3(1-\pi /4), \end{array}\right. \end{aligned}$$

and, for an arbitrary positive constant c,

$$\begin{aligned} y_c(t)=\left\{ \begin{array}{ll} \widetilde{y}(t),\ 0\le t\le 3(1-\pi /4),\\ 0,\ 3(1-\pi /4)<t\le c+3(1-\pi /4),\\ (t-3(1-\pi /4)-c)^3/27,\ t>c+3(1-\pi /4). \end{array}\right. \end{aligned}$$
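As an illustrative numerical sketch (not part of the analysis above), one can check the hitting time \(3(1-\pi /4)\approx 0.6438\) by integrating (5)–(6) directly. The script below assumes SciPy is available; the waiting time c and the tolerances are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

t_star = 3 * (1 - np.pi / 4)  # predicted first time at which y reaches y_1 = 0

def rhs(t, y):
    # real cube root, so y^{2/3} is well defined for y < 0; u = 1 on [0, t_star]
    return np.cbrt(y[0]) ** 2 + 1.0

sol = solve_ivp(rhs, [0.0, t_star], [-1.0], rtol=1e-10, atol=1e-12)
print(sol.y[0, -1])  # approximately 0: the trajectory reaches y_1 = 0 at t_star

# Beyond t_star the control is u = 0 and the solution can branch: it may stay
# at 0 forever, or leave 0 after an arbitrary waiting time c along a cubic arc.
c = 0.5  # arbitrary illustrative waiting time
t = np.linspace(t_star, t_star + 2.0, 5)
print(np.where(t <= t_star + c, 0.0, (t - t_star - c) ** 3 / 27))
```

Both printed branches satisfy (5) with \(u=0\) on \(]3(1-\pi /4),+\infty [\), illustrating the non-uniqueness.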

3 Pontryagin’s Maximum Principle

Set

$$\begin{aligned} \mathcal {T}:=\big \{(T, y, u)\in ]0,+\infty [\times C([0,T];\mathbb {R}^n)\times \mathcal {U};\ \ (1){-}(2)\ \text{ holds } \text{ on }\ [0,T]\big \} \end{aligned}$$

and \(\mathcal {T}_{ad}:=\big \{(T, y, u)\in \mathcal {T};\ y(T)\in Q\big \}.\)

In this paper, we consider the following time optimal control problem,

Problem (P) Find \((t^*, y^*, u^*)\in \mathcal {T}_{ad}\) such that \(t^*=\inf \limits _{(T, y, u)\in \mathcal {T}_{ad}} T.\)

\(t^*\) is called the optimal time for Problem (P). A control \(u^*\in \mathcal {U}\) and a function \(y^*\in C([0,t^*];\mathbb {R}^n)\) such that \((t^*, y^*, u^*)\in \mathcal {T}_{ad}\) are called an optimal control and an optimal state corresponding to \(u^*\) for Problem (P), respectively.

In this section, we establish the necessary conditions for Problem (P). Since (1)–(2) may possess multiple solutions, and \(f_y\) may be unbounded in a neighborhood of \(y_1\) (cf. the examples in Section 2), it is difficult to obtain Pontryagin's maximum principle for Problem (P) in the non-autonomous case of (1)–(2). However, we can obtain the following necessary conditions for Problem (P) in the autonomous case, which is denoted by \((P_a)\).

Consider the following autonomous system,

$$\begin{aligned} \displaystyle \frac{{\hbox {d}}y(t)}{{\hbox {d}}t}= & {} f(y(t),u(t)),\ t>0, \end{aligned}$$
(10)
$$\begin{aligned} \displaystyle y(0)= & {} y_0. \end{aligned}$$
(11)

First, the following theorem concerns the case where \(y_0=y_1\).

Theorem 3.1

Suppose that Assumptions 2.1–2.4 hold, \(y_0=y_1\), \(t^*\) is the optimal time, and \(({u}^*, {y}^*)\) is an optimal pair for Problem (\({P_a}\)). Then, there exists a non-trivial function \({{\psi }}\in C(]0,t^* ];\mathbb {R}^n)\) satisfying

$$\begin{aligned} \displaystyle&\frac{{\hbox {d}}{{\psi }}(t)}{{\hbox {d}}t}=-f_y({y}^*(t),{u}^*(t)){{\psi }}(t) ,\ t\in ]0,t^* ], \end{aligned}$$
(12)
$$\begin{aligned}&\langle {\psi }(t),f({y}^*(t),{u}^*(t))\rangle =\max \limits _{u\in U}\langle {\psi }(t),f({y}^*(t),u)\rangle , \ \text{ a.e. }\ t\in ]0,t^* ], \end{aligned}$$
(13)
$$\begin{aligned}&\langle {\psi }(t^*),z-y^*(t^*)\rangle \ge 0, \ \text{ for } \text{ each }\ z\in Q, \end{aligned}$$
(14)
$$\begin{aligned}&\langle {\psi }(t),f({y}^*(t),{u}^*(t))\rangle \ge 0,\ \text{ a.e. }\ t\in ]0,t^* ]. \end{aligned}$$
(15)

Proof

Under the assumptions of this theorem, we first claim that

$$\begin{aligned} {y}^*(t)\ne y_1,\ \text{ for } \text{ each }\ t\in ]0,t^*]. \end{aligned}$$
(16)

Since \(y^*(t^*)\in Q\) and \(y_1\not \in Q\), it holds that \(y^*(t^*)\ne y_1\). On the other hand, suppose that there exists a time \(t_1\in ]0,t^*[\) such that \({y}^*(t_1)= y_1\). Set \(\widehat{v}(t):=u^*(t+t_1)\), \(0\le t\le t^*-t_1\). Since (10)–(11) is an autonomous system, \(\widehat{y}(t):= y^*(t+t_1)\), \(0\le t\le t^*-t_1\), is a solution to (10)–(11) corresponding to \(y_0=y_1\) and the control \(\widehat{v}\). Since \(y^*(t^*)\in Q\), we have \(\widehat{y}(t^*-t_1)\in Q\), with \(t^*-t_1<t^*\). This contradicts the fact that \(t^*\) is the optimal time of Problem (\(P_a\)), and thus (16) holds.

Next, we complete the proof of Theorem 3.1 in the following two steps.

Step 1. We construct an approximate control problem for Problem \(({P_a})\).

For each \(\delta \in ]0,t^*[\) and for each \(u\in \mathcal {U}\), consider the following system,

$$\begin{aligned} \displaystyle \frac{{\hbox {d}}y(t)}{{\hbox {d}}t}= & {} f(y(t),u(t)),\ t>\delta , \end{aligned}$$
(17)
$$\begin{aligned} \displaystyle y(\delta )= & {} {y}^*(\delta ). \end{aligned}$$
(18)

Since \(y_0=y_1\), the solutions to (10)–(11) may be multiple [cf. example (3)–(4)]. However, by Assumptions 2.1–2.4 and (16), the local solution to (17)–(18) is unique. This property enables us to construct an approximate control problem for Problem \(({P_a})\). Set

$$\begin{aligned} \mathcal {T}^\delta :=\big \{(T, y, u)\in ]\delta ,+\infty [\times C([\delta ,T];\mathbb {R}^n)\times \mathcal {U};\ (17){-}(18) \ \text{ holds } \text{ on }\ [\delta ,T]\big \} \end{aligned}$$

and \(\mathcal {T}^\delta _{ad}:=\big \{({T}, y, u)\in \mathcal {T}^\delta ;\ y(T)\in Q\big \}.\)

Consider the following approximate control problem for Problem \(({P_a})\),

Problem (\({P}^\delta \)) Find \((t^*_\delta , y^*_\delta , u^*_\delta )\in \mathcal {T}_{ad}^\delta \) such that \(t^*_\delta =\inf \limits _{(T, y, u)\in \mathcal {T}_{ad}^\delta } T.\)

We claim that

$$\begin{aligned} t^*_\delta =t^*, \ \text{ and } \text{ the } \text{ restriction } \text{ of }\ (u^*, y^*)\ \text{ to }\ [\delta ,t^*]\ \text{ is } \text{ an } \text{ optimal } \text{ pair } \text{ of } \text{ Problem } \ (P^\delta ). \end{aligned}$$
(19)

Otherwise, there exist a \(t_\delta \) with \(\delta<t_\delta <t^*\) and a control \(u_\delta \in \mathcal {U}\) such that there exists a solution \(y_\delta \) of (17)–(18) corresponding to \(u_\delta \), satisfying \(y_\delta (t_\delta )\in Q\).

Set \(\widehat{u}_\delta (t):=\left\{ \begin{array}{ll}{u}^*(t),\ t\in [0,\delta [,\\ u_\delta (t),\ t\in [\delta , t_\delta ]\end{array}\right. \ \text{ and }\ \widehat{y}_\delta (t):=\left\{ \begin{array}{ll}y^*(t),\ t\in [0,\delta [,\\ y_\delta (t),\ t\in [\delta , t_\delta ]. \end{array}\right. \)

Then, it is easy to check that \((t_\delta , \widehat{y}_\delta , \widehat{u}_\delta )\in {\mathcal {T}}_{ad}\) with \(\widehat{y}_\delta (t_\delta )\in Q\) and \(t_\delta <t^*\). This contradicts the fact that \(t^*\) is the optimal time of Problem \((P_a)\).

Step 2. We get the necessary conditions for the optimal pair \(({u}^*,y^*)\).

Since U is a bounded and closed set which may not be convex, we introduce \(u^{\rho }\) as follows: \( u^{\rho }(t)= u^*(t), \ t\in [\delta ,t^*]\setminus E_{\rho },\ u^{\rho }(t)=u(t),\ t\in E_{\rho }, \) where \(\rho \in ]0,1[\), \(E_\rho \subset [\delta ,t^*]\) with \(\text{ meas }(E_\rho )=\rho (t^*-\delta )\), and \(u\in \mathcal {U}\). Then, by (16), we can use an argument similar to that in the proof of expression (28) in Proposition 3.1 of [7] to prove the following fact: there exists a \(\rho _0>0\) such that for each \(\rho \) in \(]0,\rho _0[\), (17)–(18) has a unique solution \(y^\rho \) corresponding to \(u^\rho \) on \([\delta ,t^*]\), satisfying \(y^\rho (t)\ne y_1,\ t\in [\delta ,t^*]\), and \(y^\rho \rightarrow y^*\ \text{ uniformly } \text{ in } \ [\delta ,t^*]\) as \(\rho \rightarrow 0\). Hence, Problem (\(P^\delta \)) can be viewed as a classical time optimal control problem. Since, by (19), \({t}^*\) is the optimal time and \(({u}^*, {y}^*)\) is an optimal pair of Problem (\({P^\delta }\)), it follows from the classical theory of time optimal control problems (cf. [1]) that there exists a non-trivial function \({{\psi }^\delta }\in C([\delta ,t^* ];\mathbb {R}^n)\) satisfying

$$\begin{aligned} \displaystyle \frac{{\hbox {d}}{{\psi ^\delta }}(t)}{{\hbox {d}}t}&=-f_y({y}^*(t),{u}^*(t)){{\psi }^\delta }(t) ,\ \ t\in [\delta ,t^* ], \end{aligned}$$
(20)
$$\begin{aligned} \langle {\psi }^\delta (t),f({y}^*(t),{u}^*(t))\rangle&=\max \limits _{u\in U}\langle {\psi ^\delta }(t),f({y}^*(t),u)\rangle ,\ \text{ a.e. }\ t\in [\delta ,t^* ], \end{aligned}$$
(21)
$$\begin{aligned} \langle {\psi }^\delta (t^*),z-y^*(t^*)\rangle&\ge 0, \ \text{ for } \text{ each }\ z\in Q, \end{aligned}$$
(22)
$$\begin{aligned} \langle {\psi ^\delta }(t),f({y}^*(t),{u}^*(t))\rangle&\ge 0,\ \text{ a.e. }\ t\in [\delta ,t^* ]. \end{aligned}$$
(23)

Since \({{\psi }^\delta }\) is non-trivial, we may assume that

$$\begin{aligned} |{{\psi }^\delta }(t^*)|=1,\ \text{ for } \text{ each }\ \delta \in ]0,t^*[. \end{aligned}$$
(24)

Then, by Assumption 2.4, (16) and (20), we can choose a sequence \(\{\delta _n\}_{n=1}^\infty \subset ]0,t^*[\) with \(\delta _n\rightarrow 0\) such that \(\{\psi ^{\delta _{n}}\}\) converges uniformly on \([\eta , t^*]\) for each \(\eta \in ]0,t^*[\), as \(n\rightarrow +\infty \).

Let \({\psi }(t)=\lim \limits _{n\rightarrow +\infty }\psi ^{\delta _{n}}(t)\), \(t\in ]0,t^*]\). Then, by (20)–(24), we obtain that \(\psi \) is non-trivial, and (12)–(15) hold. This completes the proof of Theorem 3.1. \(\square \)

Next, Pontryagin’s maximum principle for Problem (\(P_a\)) in the case where \(y_0\ne y_1\) will be considered. We need the following lemma. Set

$$\begin{aligned} \mathcal {T}_1:=\big \{(T, y, u)\in ]0,+\infty [\times C([0,T];\mathbb {R}^n)\times \mathcal {U};\ (10){-}(11) \ \text{ holds } \text{ on }\ [0,T]\big \} \end{aligned}$$

and \(\mathcal {T}_{ad}^1:=\big \{(T, y, u)\in \mathcal {T}_1;\ y(T)=y_1\big \}.\)

Problem (\(P_1\)) Find \((\overline{s}, \overline{y}, \overline{u})\in \mathcal {T}_{ad}^1\) such that \(\overline{s}=\inf \limits _{(T, y, u)\in \mathcal {T}_{ad}^1} T.\)

Lemma 3.1

Suppose that Assumptions 2.1–2.4 hold and \(y_0\ne y_1\). If \(\overline{s}\) is the optimal time and \(({\overline{u}}, {\overline{y}})\) is an optimal pair of Problem (\({P_1}\)), then there exists a non-trivial function \({{\psi }}\in C([0,\overline{s} [;\mathbb {R}^n)\) satisfying

$$\begin{aligned} \displaystyle \frac{{\hbox {d}}{{\psi }}(t)}{{\hbox {d}}t}= & {} -f_y({\overline{y}}(t),{\overline{u}}(t)){{\psi }}(t) ,\ \ t\in [0,\overline{s} [,\\ \langle {\psi }(t),f({\overline{y}}(t),{\overline{u}}(t))\rangle= & {} \max \limits _{u\in U}\langle {\psi }(t),f({\overline{y}}(t),u)\rangle ,\ \text{ a.e. }\ t\in [0,\overline{s} [,\\ \langle {\psi }(t),f({\overline{y}}(t),{\overline{u}}(t))\rangle\ge & {} 0,\ \text{ a.e. }\ t\in [0,\overline{s} [. \end{aligned}$$

Sketch of the proof of Lemma 3.1

Since \(\overline{y}\) is the optimal state of Problem (\(P_1\)), it holds that \( {\overline{y}}(t)\ne y_1,\ \text{ for } \text{ any }\ t\in [0,\overline{s}[.\) For each \(\delta \in ]0,\overline{s}[\), set

$$\begin{aligned} \overline{\mathcal {T}}_{ad}^\delta :=\big \{({T}, y, u)\in \mathcal {T}_1;\ y(T)=\overline{y}(\overline{s}-\delta )\big \}. \end{aligned}$$

Consider the following approximate control problem for Problem \(({P_1})\),

Problem (\({\overline{P}}^\delta \)) Find \((\overline{s}_\delta , \overline{y}_\delta , \overline{u}_\delta )\in \overline{\mathcal {T}}_{ad}^\delta \) such that \(\overline{s}_\delta =\inf \limits _{(T, y, u)\in \overline{\mathcal {T}}_{ad}^\delta } T.\)

Then, we can use an argument similar to that in the proof of (19) to show that \(\overline{s}_\delta =\overline{s}\) and \((\overline{u}, \overline{y})\) is an optimal pair of Problem \((\overline{P}^\delta )\). Furthermore, we can argue as in the proof of (20)–(24), which are the necessary conditions for Problem (\(P^\delta \)), to obtain Pontryagin's maximum principle for Problem (\({\overline{P}}^\delta \)) on \([0,\overline{s}-\delta ]\). Finally, Lemma 3.1 follows by letting \(\delta \rightarrow 0\). \(\square \)

Suppose that \(t^*\) is the optimal time, and \(({u}^*, {y}^*)\) is an optimal pair for Problem (\({P_a}\)) in the case where \(y_0\ne y_1\). This problem can be divided into the following two cases:

Case (i) \({y}^*\) does not pass through the point \(y_1\) at any time in \([0,t^*]\).

In this case, we can argue as in Step 2 of the proof of Theorem 3.1, where Problem (\(P^\delta \)) was shown to be a classical time optimal control problem, to prove that Problem (\({P_a}\)) is itself a classical time optimal control problem, and the classical maximum principle for Problem (\({P_a}\)) follows.

Case (ii) \({y}^*\) passes through the point \(y_1\) at some time.

Theorem 3.2

Suppose that Assumptions 2.1–2.4 hold, \(y_0\ne y_1\), \(t^*\) is the optimal time, and \(({u}^*, {y}^*)\) is an optimal pair of Problem (\(P_a\)). Moreover, suppose that the trajectory \(y^*\) passes through the point \(y_1\) in \([0,t^*]\). Then, there exist a number \(s^*\in ]0, t^* [\), and two non-trivial functions \(\psi _1\) in \(C([0,s^*[; \mathbb {R}^{n})\) and \(\psi _2\) in \(C(]s^*, t^* ]; \mathbb {R}^{n})\) such that

$$\begin{aligned} \displaystyle \frac{{\hbox {d}}{{\psi _1}}(t)}{{\hbox {d}}t}&=-f_y({y}^*(t),{u}^*(t)){{\psi _1}}(t) ,\ \ t\in [0,s^*[,\\ \displaystyle \frac{{\hbox {d}}{{\psi _2}}(t)}{{\hbox {d}}t}&=-f_y({y}^*(t),{u}^*(t)){{\psi _2}}(t) ,\ \ t\in ]s^*,t^*] ,\\ \langle {\psi }(t),f({y}^*(t),{u}^*(t))\rangle&=\max \limits _{u\in U}\langle {\psi }(t),f({y}^*(t),u)\rangle , \ \text{ a.e. }\ t\in [0,t^* ],\\ \langle {\psi }(t^*),z-y^*(t^*)\rangle&\ge 0, \ \text{ for } \text{ each }\ z\in Q, \end{aligned}$$

where \(\psi (t)=\psi _1(t),\ t\in [0, s^* [,\) and \(\psi (t)= \psi _2(t),\ t\in ]s^*,t^* ].\) Moreover,

$$\begin{aligned} \langle {\psi }(t),f({y}^*(t),{u}^*(t))\rangle \ge 0,\ \text{ a.e. }\ t\in [0,t^* ]. \end{aligned}$$

Proof

Since \(y^*(t^*)\in Q\), \(y_1\not \in Q\) and \(y_0\ne y_1\), \(y^*\) can only pass through the point \(y_1\) in \(]0,t^*[\). Suppose that \(s^*\in ]0,t^*[\) is the first time such that \(y^*(s^*)=y_1\). Then, \(y^*\) satisfies (17)–(18) with \(\delta \) replaced by \(s^*\) and \(y^*(\delta )\) replaced by \(y_1\). Since this system is autonomous, we can use an argument similar to that in the proof of (16) to show that \({y}^*(t)\ne y_1\) for each \(t\in ]s^*,t^*[\), which implies that the trajectory \({y}^*\) passes through the point \(y_1\) only once in \(]0,t^*[\), namely at \(s^*\). Then, we can use an argument similar to that in the proof of (19) in Theorem 3.1 to show that \(s^*\) is the optimal time and \((u^*, y^*)\) (restricted to \([0,s^*]\)) is an optimal pair for Problem \((P_1)\), and we obtain a necessary condition for the optimal time \(s^*\) by Lemma 3.1.

On the other hand, consider the following system,

$$\begin{aligned} \displaystyle \frac{{\hbox {d}}y(t)}{{\hbox {d}}t}= & {} f(y(t),u(t)),\ t>s^*, \end{aligned}$$
(25)
$$\begin{aligned} \displaystyle y(s^*)= & {} y_1. \end{aligned}$$
(26)

Set

$$\begin{aligned} \mathcal {T}^2:=\big \{&(T, y, u)\in ]s^*,+\infty [\times C([s^*,T];\mathbb {R}^n)\times \mathcal {U}; \ (25){-}(26)\ \text{ holds } \text{ on }\ [s^*,T]\big \}, \end{aligned}$$

and \(\mathcal {T}_{ad}^2:=\big \{(T, y, u)\in \mathcal {T}^2;\ y(T)\in Q\big \}\).

Problem (\(P_2\)) Find \((\widetilde{t}^*, \widetilde{y}^*, \widetilde{u}^*)\in \mathcal {T}_{ad}^2\) such that \(\widetilde{t}^*=\inf \limits _{(T, y, u)\in \mathcal {T}_{ad}^2} T.\)

It is also easy to check that \(t^*\) is the optimal time and \((u^*, y^*)\) (restricted to \([s^*,t^*]\)) is an optimal pair for Problem \((P_2)\). Then, we can use an argument similar to that in the proof of (12)–(15) in Theorem 3.1 to obtain Pontryagin's maximum principle for Problem \((P_2)\), from which, together with Lemma 3.1, we obtain Theorem 3.2. \(\square \)

4 Existence of Optimal Controls

Since (1)–(2) may have multiple solutions, the existence of optimal controls for Problem (P) is a delicate question. However, if the system is affine, namely, f(t, y, u) is of the form \(g(t,y)+B(t)u\), and the control set U is convex, then we have the following existence result for Problem (P).

Theorem 4.1

Suppose that f(t, y, u) is of the form \(g(t,y)+B(t)u\), where \(B\in L^\infty (]0,+\infty [;\mathbb {R}^{n\times m})\), and that Assumptions 2.1–2.3 hold. Moreover, assume that U is convex and \(\mathcal {T}_{ad}\ne \emptyset \). Then, there exists at least one optimal control for Problem (P).

In order to prove Theorem 4.1, we need the following lemma, which is Corollary 1.12.2 in [13].

Lemma 4.1

Assume that:

(i) \(\phi (x)\) is a non-negative and bounded function on \([x_0,x_0+a]\);

(ii) \(\psi (x)\) is a continuous non-decreasing function on \([x_0,x_0+a]\);

(iii) \(g(z)\ge 0\) is a continuous non-decreasing function for \(0\le z<+\infty \), there exists a \(\phi _0>0\) such that \(g(z)> 0\) for \(z\ge \phi _0\), and \(g(\phi (x))\) is integrable.

Set \(G(\phi )=\displaystyle \int _{\phi _0}^{\phi }\frac{{\hbox {d}}z}{g(z)},\ \phi \ge \phi _0 \), and let \(G^{-1}\) denote the inverse function of G. Then, for an arbitrary positive constant \(\widetilde{k}\), if there exists a number \(x_1\in ]x_0,x_0+a]\) such that \(G(\widetilde{k})+\psi (x)-\psi (x_0)\) belongs to the domain of \(G^{-1}\) whenever \(x_0\le x \le x_1\), and the inequality

$$\begin{aligned} \phi (x)\le \widetilde{k}+\int _{x_0}^{x} g(\phi (t)){\hbox {d}}\psi (t),\ x_0\le x\le x_0+a \end{aligned}$$

holds, then

$$\begin{aligned} \phi (x)\le G^{-1}(G(\widetilde{k})+\psi (x)-\psi (x_0)),\ x_0\le x\le x_1. \end{aligned}$$

Proof of Theorem 4.1

Since we assume that \(\mathcal {T}_{ad}\) is not empty, it follows that \(t^* < +\infty \). Thus, by the definition of \(t^*\), we can find a sequence \(\{u_k\}_{k=1}^\infty \) in \(\mathcal {U}\) with the following properties: (i) \(t_1\ge t_2\ge \cdots \ge t_k\ge \cdots \) and \(t_k \rightarrow t^*\) as \({k\rightarrow +\infty }\); (ii) \(y(t_k;y_0,u_k)\in Q\) for all k, where for each k, \(y(\cdot \ ;y_0,u_k)\) is a solution to (1)–(2) corresponding to \(u_k\) on \([0,t_k]\).

By Property (i), we may assume that

$$\begin{aligned} 0\le t_k\le t^*+1, \ \text{ for } \text{ all }\ k. \end{aligned}$$
(27)

Because U is bounded, there exist a function \({u}^*\) in \(L^2( ]0,t^*+1[; \mathbb {R}^m)\) and a subsequence of \(\{u_k\}_{k=1}^\infty \), still denoted in the same way, such that \(u_k\rightharpoonup {u}^*\ \text{ weakly } \text{ in } \ L^2( ]0,t^*+1[; \mathbb {R}^m), \ \text{ as } \ k\rightarrow +\infty . \) Take a point \(u_0\in U\). We extend the function \({u}^*\) by setting it equal to \(u_0\) on the interval \([t^*+1, +\infty [\), and denote the extension again by \({u}^*\). Since U is closed and convex, Mazur's lemma implies that \(u^*(t)\in U\) for a.e. t, so \(u^*\in \mathcal {U}\) (see the definition of U in Assumption 2.1 and the definition of \(\mathcal {U}\) before Definition 2.1 in Section 2).

For each fixed k, write \(y_k\) for \(y(\cdot \ ;y_0,u_k)\). Then, we have

$$\begin{aligned}&y_k(t)=y_0+\int _0^t \big \{g(\tau ,y_k(\tau ))+B(\tau )u_k(\tau )\big \}{\hbox {d}}\tau ,\ \ t\in [0,t_k], \end{aligned}$$

from which, together with Assumption 2.3 and (27), it follows that

$$\begin{aligned} |y_k(t)| \le |y_0|+L_1\int _0^t|y_k(\tau )|^\alpha {\hbox {d}}\tau +C,\ \ t\in [0,t_k], \end{aligned}$$
(28)

where \(C\ge 1\) is a constant independent of k.

In Lemma 4.1, take \(\widetilde{k}=|y_0|+C\), \(\psi (t)\equiv t\), \(g(z)=L_1z^{\alpha },\ z\ge 0\), \(\phi (t)=|y_k(t)|\), \(t\in [0,t_k]\), \(x_0=0\), \(a=t_k\), and \(\phi _0=1\). Thus, we have

$$\begin{aligned} G(\phi )&=\int _1^\phi \frac{z^{-\alpha }}{L_1}{\hbox {d}}z=\frac{\phi ^{1-\alpha }-1}{L_1(1-\alpha )},\ \phi \ge 1,\\ G^{-1}(\eta )&=(L_1(1-\alpha )\eta +1)^{\frac{1}{1-\alpha }},\ \eta \ge 0,\\ G(\widetilde{k})+\psi (t)-\psi (0)&=\frac{(|y_0|+C)^{1-\alpha }-1}{L_1(1-\alpha )}+t, \ t\in [0,t_k]. \end{aligned}$$

It is clear that for each \(t\in [0,t_k]\), \(G(\widetilde{k})+\psi (t)-\psi (0)\) belongs to the domain of \(G^{-1}\). Then, it holds by (27)–(28) and Lemma 4.1 that for each k,

$$\begin{aligned} |y_k(t)|\le G^{-1}(G(\widetilde{k})+t)\le \big ((|y_0|+C)^{1-\alpha }+(t^*+1)L_1(1-\alpha )\big )^{\frac{1}{1-\alpha }}, \ t\in [0, t_k]. \end{aligned}$$

Hence, there exists a constant \(M>0\) such that

$$\begin{aligned} |y_k(t)|\le M, \ \text{ for } \text{ all }\ k\ \text{ and } \text{ for } \text{ each }\ t\in [0,t_k]. \end{aligned}$$
(29)
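
As a brief numerical aside (not part of the proof), the explicit bound obtained above from Lemma 4.1 is exactly the solution of the comparison equation \(\varphi '=L_1\varphi ^{\alpha }\), \(\varphi (0)=|y_0|+C\). The following Python sketch, with hypothetical constants standing in for \(L_1\), \(\alpha \) and \(|y_0|+C\), checks this agreement (SciPy assumed available).

```python
import numpy as np
from scipy.integrate import solve_ivp

L1, alpha, k = 2.0, 0.5, 1.5  # hypothetical constants; k plays the role of |y_0| + C

def comparison(t, phi):
    # comparison equation phi' = L1 * phi^alpha saturating the integral bound (28)
    return L1 * phi[0] ** alpha

T = 3.0
sol = solve_ivp(comparison, [0.0, T], [k], dense_output=True, rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, T, 7)
closed_form = (k ** (1 - alpha) + L1 * (1 - alpha) * t) ** (1 / (1 - alpha))
print(np.max(np.abs(sol.sol(t)[0] - closed_form)))  # ~ 0: the two expressions agree
```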

Next, by (29) and Assumption 2.3, the derivatives \({\hbox {d}}y_k/{\hbox {d}}t\) are uniformly bounded, so \(\{y_k\}_{k=1}^{\infty }\) is equicontinuous on \([0,t^*]\). Thus, by the Arzelà–Ascoli theorem, there exist a subsequence of \(\{y_k\}_{k=1}^{\infty }\), still denoted in the same way, and a function \(y^*\in C([0,t^*];\mathbb {R}^n)\), such that

$$\begin{aligned} y_k(\cdot )\rightarrow y^*\ \text{ uniformly } \text{ in }\ [0,t^*],\ \text{ as }\ k\rightarrow +\infty , \end{aligned}$$
(30)
Passing to the limit in the integral equation for \(y_k\), using (30), the continuity of g, and the weak convergence of \(\{u_k\}\) (which suffices here because the system is affine in u), we obtain

$$\begin{aligned} y^*(t)=y_0+\int _0^t \big \{g(\tau ,y^*(\tau ))+B(\tau )u^*(\tau )\big \}{\hbox {d}}\tau ,\ \ t\in [0,t^*]. \end{aligned}$$

That is, \(y^*\) is a solution to (1)–(2) corresponding to \(u^*\) in \([0,t^*]\).

On the other hand, it holds from Assumption 2.3 and (29) that

$$\begin{aligned} |y_k(t_k)-y^*(t^*)|\le&|y_k(t_k)-y_k(t^*)|+|y_k(t^*)-y^*(t^*)|\\ \le&\int _{t^*}^{t_k} |g(\tau ,y_k(\tau ))+B(\tau )u_k(\tau )|{\hbox {d}}\tau +|y_k(t^*)-y^*(t^*)|\\ \le&\big (L_1M^\alpha +C_1\big )(t_k-t^*)+|y_k(t^*)-y^*(t^*)|, \end{aligned}$$

where \(C_1>0\) is a constant. From this, Property (i), and (30), we have

$$\begin{aligned}&|y_k(t_k)-y^*(t^*)|\rightarrow 0,\ \text{ as }\ k\rightarrow +\infty . \end{aligned}$$
(31)

Since Q is closed, we get by Property (ii) and (31) that \(y^*(t^*)\in Q.\) This implies that \(u^*\) is an optimal control and completes the proof of Theorem 4.1. \(\square \)

5 Conclusions

In this work, a time optimal control problem for a class of ordinary differential equations with multiple solutions was studied.

We first gave two examples of equations with an infinite number of solutions. Then, Pontryagin's maximum principle for the autonomous controlled system, Problem (\(P_a\)), was established under Assumptions 2.1–2.4. The non-autonomous case was not considered in this paper and deserves further research. We then obtained the existence of optimal controls for Problem (P) under Assumptions 2.1–2.3 and the assumptions that the system is affine and the control set U is convex. How to prove the existence of optimal controls of Problem (P) for general systems, and in the case where U is non-convex, is a more complex and interesting problem.

The controllability problem, that is, whether \(\mathcal {T}_{ad}=\emptyset \) or \(\mathcal {T}_{ad}^1=\emptyset \), is an interesting problem, since \(y_1\) is a singular point. When \(y_0\ne y_1\), we proved the Pontryagin maximum principle under the assumption that \({y}^*\) passes through the point \(y_1\) in \([0,t^*]\). How to determine whether \({y}^*\) passes through \(y_1\) is worthy of further study.