1 Introduction

We consider a finite-dimensional control system

$$\begin{aligned} {\left\{ \begin{array}{ll}\dot{y}(t)=f(y(t),u(t)),&{}\text { for a.e. }t>0, \\ y(0)=x,\\ u(t)\in U,&{}\text { for a.e. }t>0.\end{array}\right. }\end{aligned}$$
(1)

where U is a given compact subset of \(\mathbb R^m\), \(x\in \mathbb R^d\), \(u(\cdot )\in \mathscr {U}:=\{v:[0,+\infty [\rightarrow U\text { such that }v \text { is measurable}\},\) and \(f:\mathbb R^d\times U\rightarrow \mathbb R^d\) is continuous on \((\mathbb R^d{\setminus } S)\times U\), where \(S\subseteq \mathbb R^d\) is the closed target set introduced below, and such that for every compact \(K\subseteq \mathbb R^d{\setminus } S\) there exists \(L=L_K>0\) with

$$\begin{aligned} \Vert f(x,u)-f(y,u)\Vert \le L_K\Vert x-y\Vert ,\text { for all }x,y\in K,\,u\in U. \end{aligned}$$

Given a closed subset \(S\subseteq \mathbb R^d\), called the target set, the minimum time function \(T:\mathbb R^d\rightarrow [0,+\infty ]\) is defined as follows:

$$\begin{aligned} T(x):=\inf \{t>0:\, \exists \,y(\cdot ) \text { solution of (1) satisfying } y(0)=x,\, y(t)\in S\},\end{aligned}$$
(2)

where we set \(\inf \emptyset =+\infty \) by convention.

We are interested in the following property, called small-time local attainability (STLA): given \(T>0\) there exists an open set \(V\subseteq \mathbb R^d\) such that \(V\supseteq S\) and \(T(x)\le T\) for all \(x\in V\). This amounts to saying that for every fixed time \(T>0\) there is a neighborhood of the target whose points can be steered to the target itself along admissible trajectories of the system in a time less than T. STLA may also be formulated as follows: for every \(\bar{x}\in \partial S\) there exist \(\delta _{\bar{x}}>0\) and a continuous function \(\omega _{\bar{x}}:[0,+\infty [\rightarrow [0,+\infty [\) such that \(\omega _{\bar{x}}(r)\rightarrow 0\) as \(r\rightarrow 0\) and \(T(x)\le \omega _{\bar{x}}(d_S(x))\) for all \(x\in B(\bar{x},\delta _{\bar{x}})\), where \(d_S(\cdot )\) denotes the Euclidean distance function from S.

STLA has been studied by several authors, and it turned out that estimates of this type also have consequences for the regularity of the minimum time function. One of the most important results along this line was found in [6], where it was proved that a controllability condition known as Petrov’s condition yields an estimate \(T(x)\le \omega _{\bar{x}}(d_S(x))\) with \(\omega _{\bar{x}}(r)=C_{\bar{x}}r\), for some \(C_{\bar{x}}>0\), and that this is equivalent to local Lipschitz continuity of \(T(\cdot )\) in \(V{\setminus } S\) for a suitable neighborhood V of S.

For a compact target S, Petrov’s condition can be formulated as follows: there exist \(\delta ,\mu >0\), such that for every \(x\in \mathbb R^d{\setminus } S\) whose distance \(d_S(x)\) from S is less than \(\delta \) there exist \(u\in U\) and a point \(\bar{x}\in S\) with \(\Vert x-\bar{x}\Vert =d_S(x)\) and

$$\begin{aligned} \langle x-\bar{x}, f(x,u)\rangle \le -\mu d_S(x).\end{aligned}$$
(3)

From a geometric point of view, the underlying idea is the following: for every point near S, there is an admissible velocity pointing toward S sufficiently fast, i.e., whose component in the direction of S is sufficiently large. Since Petrov’s condition involves only admissible velocities (i.e., the first-order term in the expansion of the trajectories), we refer to it as a first-order condition for STLA.
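To make condition (3) concrete, the following Python snippet checks it numerically for a toy system that is not taken from the paper: the single integrator \(\dot{y}=u\) with U the closed unit ball of \(\mathbb R^2\) and target S the closed unit disk, for which (3) holds with \(\mu =1\) by choosing the control pointing from x toward its projection onto S.

    # Illustrative check of Petrov's condition (3) for a toy system (not from
    # the paper): dot y = u, U = closed unit ball in R^2, S = closed unit disk.
    # For x outside S the projection is xbar = x/|x|, and the admissible
    # control u = -(x - xbar)/|x - xbar| gives <x - xbar, f(x,u)> = -d_S(x),
    # i.e. condition (3) with mu = 1.
    import numpy as np

    rng = np.random.default_rng(0)

    def d_S(x):
        return max(np.linalg.norm(x) - 1.0, 0.0)

    def petrov_holds(x, mu=1.0):
        xbar = x / np.linalg.norm(x)                    # projection onto S
        u = -(x - xbar) / np.linalg.norm(x - xbar)      # |u| <= 1, admissible
        f_xu = u                                        # dynamics f(x, u) = u
        return np.dot(x - xbar, f_xu) <= -mu * d_S(x) + 1e-12

    radii = rng.uniform(1.001, 1.5, size=1000)          # points outside S
    angles = rng.uniform(0.0, 2 * np.pi, size=1000)
    points = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)
    print(all(petrov_holds(x) for x in points))         # expected: True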

If we assume that the distance is smooth in a neighborhood of S (outside S), we can also give another version of Petrov’s condition: for every x near S we require the existence of an admissible \(C^1\)-trajectory \(\gamma _x(\cdot )\) of (1) satisfying \(\gamma _x(0)=x\) and \(\dfrac{d}{dt}(d_S \circ \gamma _x)(0)<-\mu \). Accordingly, due to the smoothness of \(\gamma _x\) and \(d_S\), we also have \(\dfrac{d}{dt}(d_S \circ \gamma _x)(t)<-\mu \) for \(t>0\) sufficiently small. This formulation highlights the infinitesimal decrease property of the distance along at least one admissible trajectory, which is contained in (3).

Natural steps toward the generalization of this condition are the following:

  1.

    consider, instead of the distance, its square, since it is well known that the square of the distance enjoys more regularity properties than the distance itself.

  2.

    take an integral version of the infinitesimal decreasing property, thus obtaining

    $$\begin{aligned} d^2_S(\gamma _x(t))-d^2_S(x)<-\dfrac{\mu }{2} t d_S(x)+o(t). \end{aligned}$$
  3.

    notice that instead of \(\gamma _x(t)\) we can consider any point \(y_t\in \mathscr {R}_{x}(t)\), where

    $$\begin{aligned} \mathscr {R}_{x}(t):=\{z\in \mathbb R^d:\, \text {there exists a trajectory }\gamma \text { of (1) with }\gamma (0)=x,\,\gamma (t)=z\}. \end{aligned}$$

    The crucial fact is that the map \(t\mapsto y_t\) is not required to be an admissible trajectory, even though \(y_t\in \mathscr {R}_{x}(t)\) for all t. Such curves will be called \(\mathscr {A}\)-trajectories starting from x.

In this way the problem is reduced to estimating the rate of decrease of the distance along \(\mathscr {A}\)-trajectories. The first paper in which this point of view was introduced is [1], where all the above generalizations were performed. More precisely, it is assumed that there exists \(\mu >0\) such that for every x near S and t sufficiently small we can find an \(\mathscr {A}\)-trajectory (called an R-trajectory in the original paper) \(y_t\) such that

$$\begin{aligned} y_t=x+a(t;x)+t^{\alpha }A(x)+o(t^{\alpha };x), \end{aligned}$$

where

  1.

    \(a(\cdot )\), \(A(\cdot )\) are smooth functions,

  2.

    the remainder satisfies a uniform estimate \(\Vert o(t^{\alpha };x)\Vert \le Kt^{\alpha +\beta }\) with \(K,\beta \) suitable positive constants independent of x,

  3.

    \(\Vert a(\cdot )\Vert \) is bounded from above by \(M t^sd_S(x)\), where M and s are suitable positive constants,

  4.

    there exists a point \(\bar{x}\in S\) with \(\Vert x-\bar{x}\Vert =d_S(x)\) and \(\langle x-\bar{x}, A(x)\rangle \le -\mu d_S(x)\).

Roughly speaking, we require the infinitesimal decrease property of Petrov’s condition for the essential leading term of at least one \(\mathscr {A}\)-trajectory, which is now a term of order \(\alpha \ge 1\). The name “essential leading term”, introduced in [1], is motivated by the fact that \(\Vert a(\cdot )\Vert \) vanishes as x approaches S. By the equivalence between Petrov’s condition and local Lipschitz continuity of \(T(\cdot )\) we can no longer expect an estimate of the form \(T(x)\le Cd_S(x)\) in the case \(\alpha >1\); however, it turns out that a similar estimate holds true, yielding \(T(x)\le Cd^{1/\alpha }_S(x)\). We refer to these conditions as higher-order Petrov-like conditions for STLA.
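A standard illustration of the case \(\alpha =2\) (not one of the examples treated in the cited papers) is the double integrator \(\dot{x}_1=x_2\), \(\dot{x}_2=u\), \(u\in [-1,1]\), with target \(S=\{0\}\subseteq \mathbb R^2\). From a point \(x=(x_1,0)\) with \(x_1>0\) every admissible velocity \((x_2,u)=(0,u)\) is orthogonal to \(x-\bar{x}=(x_1,0)\), so Petrov’s condition (3) fails; however, applying \(u=-1\) on \([0,t/2]\) and \(u=+1\) on \([t/2,t]\) produces the point

$$\begin{aligned} y_t=x+t^{2}A(x),\qquad A(x)=\left( -\tfrac{1}{4},0\right) , \end{aligned}$$

so that \(\langle x-\bar{x}, A(x)\rangle =-\tfrac{1}{4}d_S(x)\). A direct computation shows that the minimum time from \((x_1,0)\) equals \(2\sqrt{x_1}\), in agreement with the estimate \(T(x)\le Cd^{1/2}_S(x)\).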

In [4] the case was treated in which the constant \(\mu \) appearing in Petrov’s condition is a function \(\mu =\mu (d_S(x))\) allowed to slowly vanish as \(d_S(x)\rightarrow 0\). This case was not covered by [1], where \(\mu \) was assumed to be constant. From a geometric point of view, this means that we are allowed to arrive tangentially at the target. An estimate \(T(x)\le Cd^\beta _S(x)\) was obtained, also involving the dependence of \(\mu (\cdot )\) on \(d_S(\cdot )\), but under additional geometric assumptions on the target; these were removed in a later paper [2] by Krastanov, where the results of [1, 4] are subsumed in a unique formulation, still under strong smoothness hypotheses on the terms appearing in the expression of \(y_t\) and allowing a decay of \(r\mapsto \mu (r)\) only as suitable powers of r.

The recent paper [5] weakened some of the smoothness assumptions required in [1, 2] on the terms appearing in the expression of the \(\mathscr {A}\)-trajectory \(t\mapsto y_t\), but, in exchange, the authors assumed more regularity of the target set than in [2]. Under even stronger regularity, [5] also defines a generalized curvature by means of suitable higher-order generalized gradients of the distance function. This makes it possible to consider not only the first-order expansion of the distance along an \(\mathscr {A}\)-trajectory, but also second-order effects, further improving the sufficient conditions for STLA. This was in the spirit of [1], in which it was pointed out that STLA cannot be reduced to attainability of the single points of the target, but must also take into account the geometric properties of the target.

We present here an STLA result removing the smoothness assumptions on the terms appearing in the expression of the \(\mathscr {A}\)-trajectory \(t\mapsto y_t\), as in [5], but without the additional regularity hypotheses on the target set used in [5], thus fully generalizing the results of [2], also in the presence of the additional state constraint \(y(t)\in \overline{\varOmega }\), where \(\varOmega \) is an open subset of \(\mathbb R^d\) with \(\varOmega {\setminus } S\ne \emptyset \).

In general, a complete description of the set of \(\mathscr {A}\)-trajectories, on which higher-order conditions must be checked, turns out to be very difficult. For control-affine systems of the form

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \dot{x}(t)=f_0(x(t))+\sum _{i=1}^{M}u_i(t)f_i(x(t)),\\ x(0)=x,\end{array}\right. }\end{aligned}$$
(4)

where the \(f_i(\cdot )\) are smooth vector fields and the \(u_i:[0,+\infty [\rightarrow [-1,1]\) are measurable, additional information on \(\mathscr {A}\)-trajectories can be obtained by studying the Lie algebra generated by \(\{f_i\}_{i=1,\dots ,M}\), as performed in various degrees of generality in the papers [1, 2, 4, 5]. In the forthcoming paper [3] the analysis of such systems is carried out also in the presence of state constraints, in order to provide explicit higher-order conditions for STLA.

The paper is structured as follows: in Sect. 2 we formulate and prove the main result on STLA, and in Sect. 3 we compare this result with some other similar results from [2, 5].

2 A General Result on STLA

Throughout the paper, given a set \(Z\subseteq \mathbb R^d\) and a positive number \(\delta \), we set \(Z_\delta =\{y\in \mathbb R^d:\, d_Z(y)\le \delta \}\); moreover, we denote by \(\partial ^Pd_S(x)\) the proximal superdifferential of \(d_S\) at x. Given an open set \(\varOmega \subseteq \mathbb R^d\), we consider the state constrained system, i.e., we add to system (1) the condition \(y(t)\in \overline{\varOmega }\). Consequently, we can define the state constrained reachable set from \(x_0\in \overline{\varOmega }\) at time \(\tau \ge 0\):

$$\mathscr {R}_{x_0}^{\varOmega }(\tau ):=\Bigl \{y(\tau ):\, y(\cdot )\text { is a solution of (1) defined on }[0,\tau ]\text { with }y([0,\tau ])\subseteq \overline{\varOmega }\Bigr \}.$$

The state constrained minimum time function from \(x_0\in \overline{\varOmega }\) is

$$T_{\varOmega }(x_0):= {\left\{ \begin{array}{ll}+\infty ,&{}\text { if }\mathscr {R}_{x_0}^{\varOmega }(\tau )\cap S=\emptyset \text { for all }\tau \ge 0,\\ \inf \{\tau \ge 0:\,\mathscr {R}_{x_0}^{\varOmega }(\tau )\cap S\ne \emptyset \},&{}\text { otherwise}. \end{array}\right. }$$
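Although the paper develops no algorithm, a minimal Python sketch may help to fix the two definitions above; the system, the sets S and \(\varOmega \), and all names below are illustrative assumptions. Sampling piecewise-constant controls and discarding trajectories that leave \(\overline{\varOmega }\) yields an upper bound on \(T_{\varOmega }(x_0)\).

    # Hypothetical sketch: upper bound on T_Omega(x0) by sampling
    # piecewise-constant controls (explicit Euler steps).
    # Toy data: dot y = u, |u| <= 1 in R^2; S = disk of radius 0.2 around (1, 0);
    # Omega = {y : y_2 > -0.1}, so its closure is {y : y_2 >= -0.1}.
    import numpy as np

    rng = np.random.default_rng(1)
    DT, N_STEPS, N_SAMPLES = 0.01, 400, 2000

    def in_S(y):
        return np.linalg.norm(y - np.array([1.0, 0.0])) <= 0.2

    def in_Omega_closure(y):
        return y[1] >= -0.1

    def min_time_upper_bound(x0):
        best = np.inf
        for _ in range(N_SAMPLES):
            y, t, u = np.array(x0, dtype=float), 0.0, np.zeros(2)
            for k in range(N_STEPS):
                if k % 20 == 0:                       # resample the control
                    u = rng.normal(size=2)
                    u /= max(np.linalg.norm(u), 1.0)  # keep |u| <= 1
                y = y + DT * u
                t += DT
                if not in_Omega_closure(y):           # left the feasible set
                    break
                if in_S(y):                           # reached the target
                    best = min(best, t)
                    break
        return best                                   # +inf if S never reached

    print(min_time_upper_bound([0.0, 0.0]))           # upper bound; exact value is 0.8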

Lemma 1

Let \(\delta >0\) be a constant, \(\lambda :\mathbb R^2\rightarrow \mathbb R\), \(\theta :\mathbb R\rightarrow \mathbb R\) be continuous functions such that

  1.

    \(r\mapsto \dfrac{\theta (r)r}{\lambda (\theta (r),r)}\) is bounded from above by a nonincreasing function \(\beta (\cdot )\in L^1(]0,\delta [)\);

  2.

    \(\lambda (\theta (r),r)>0\) for \(0<r<\delta \), and \(\lambda (0,r)=0\) for \(r>0\).

Consider any sequence \(\{r_i\}_{i\in \mathbb N}\) in \([0,\delta ]\) satisfying for all \(i\in \mathbb N\):

(\(S_1\)) \(r_{i+1}^2-r_{i}^2\le -\lambda (\theta (r_i),r_i)\),                (\(S_2\)) \(\theta (r_i)\ne 0\) implies \(r_i\ne 0\).

Then we have: a) \(r_i\rightarrow 0\); b) \(\displaystyle \sum _{i=0}^{\infty }\theta (r_i)\le 2\int _{0}^{r_0}\beta (r)\,dr.\)

Proof

According to \((S_1)\), the sequence \(\{r_i\}_{i\in \mathbb N}\) is nonincreasing and bounded from below, thus it admits a limit \(r_{\infty }\) satisfying \(0\le r_{\infty }<\delta \). Assume by contradiction that \(r_{\infty } >0\). By passing to the limit as \(i\rightarrow +\infty \) in \((S_1)\), since \(\lambda (\cdot ,\cdot )\in C^0\) and \(\lambda (\theta (r_i),r_i)\ge 0\), we obtain \(\displaystyle 0=\lambda (\theta (r_{\infty }),r_{\infty })\), contradicting the assumptions on \(\lambda \); thus \(r_{\infty }=0\). Since, if \(\theta (r_i)\ne 0\), we have \(r_i\ne 0\) and \(\dfrac{r_i^2-r_{i+1}^2}{\lambda (\theta (r_i),r_i)}\ge 1\), we obtain

$$\begin{aligned} \sum _{i=0}^{\infty }\theta (r_i)&=\sum _{\begin{array}{c} i=0\\ \theta (r_i)\ne 0 \end{array}}^{\infty }\theta (r_i) \le \sum _{\begin{array}{c} i=0\\ \theta (r_i)\ne 0 \end{array}}^{\infty }\dfrac{\theta (r_i)}{\lambda (\theta (r_i),r_i)}(r_{i}^2-r_{i+1}^2)\\&\le \sum _{\begin{array}{c} i=0\\ \theta (r_i)\ne 0 \end{array}}^{\infty }\dfrac{\theta (r_i)}{\lambda (\theta (r_i),r_i)}(r_{i}+r_{i+1})(r_{i}-r_{i+1}) \le 2\sum _{\begin{array}{c} i=0\\ \theta (r_i)\ne 0 \end{array}}^{\infty }\dfrac{\theta (r_i)r_i}{\lambda (\theta (r_i),r_i)}(r_{i}-r_{i+1})\\&\le 2\sum _{\begin{array}{c} i=0\\ \theta (r_i)\ne 0 \end{array}}^{\infty }\beta (r_i)(r_{i}-r_{i+1})\le 2\int _{0}^{r_0}\beta (r)dr, \end{aligned}$$

recalling the monotonicity property of \(r\mapsto \beta (r)\).
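The following Python snippet (an illustrative check, not part of the original argument) verifies the conclusions of Lemma 1 for one admissible choice of the data, namely \(\theta (r)=r/2\) and \(\lambda (t,r)=tr/2\), for which \(\theta (r)r/\lambda (\theta (r),r)\equiv 2=:\beta (r)\).

    # Illustrative check of Lemma 1 with theta(r) = r/2 and lambda(t, r) = t*r/2:
    # then lambda(theta(r), r) = r**2/4 > 0 for r > 0, lambda(0, r) = 0, and
    # theta(r)*r / lambda(theta(r), r) = 2 =: beta(r), a nonincreasing function
    # in L^1(]0, delta[).
    def lam(t, r):
        return t * r / 2.0

    def theta(r):
        return r / 2.0

    r = 1.0                            # r_0
    total_theta = 0.0
    for _ in range(200):
        total_theta += theta(r)
        # take equality in (S_1): the least favorable admissible sequence
        r = max(r * r - lam(theta(r), r), 0.0) ** 0.5

    bound = 2 * 2.0 * 1.0              # 2 * integral_0^{r_0} beta(r) dr = 4 * r_0
    print(r, total_theta, bound)       # r ~ 0 and total_theta <= bound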

Theorem 1

(General attainability). Consider the system (1). Let \(\delta _0>0\) be a positive constant, \(\sigma ,\mu :[0,+\infty [\times [0,+\infty [\rightarrow [0,+\infty [\), and \(\tau ,\theta :[0,+\infty [\rightarrow [0,+\infty [\) be continuous functions. Let \(Q:[0,+\infty [\times \mathbb R^d\rightarrow [0,+\infty [\) be a function such that \(t\mapsto Q(t,x)\) is continuous for every \(x\in S_{\delta _0}{\setminus } S\).

We assume that:

  (1)

    \(\tau (r)=0\) iff \(r=0\), \(0<\theta (r)\le \tau (r)\) for every \(0<r<\delta _0\);

  (2)

    for any \(x\in (S_{\delta _0}\cap \overline{\varOmega }){\setminus } S\) and \(0<t\le \tau (d_S(x))\) the following holds

    (2.a)

      \(\mathscr {R}_{x}^{\varOmega }(t)\cap S_{2\delta _0}\ne \{x\},\)

    (2.b)

      if \(\mathscr {R}_{x}^{\varOmega }(t)\cap S=\emptyset \), there exists \(y_t\in \mathscr {R}_{x}^{\varOmega }(t)\cap B(x,\chi (t,d_S(x)))\) with

      $$\begin{aligned} \min _{\zeta \in \partial ^Pd_S(x)}\langle d_S(x)\zeta , y_t-x\rangle +\Vert y_t-x\Vert ^2 \le -\mu (t,d_S(x))+ \sigma (t,d_S(x)); \end{aligned}$$
    (2.c)

      if S is not compact, then \(\left( \mathscr {R}_{x}^{\varOmega }(t)\cap S_{2\delta _0}\right) {\setminus } S\subseteq B(0,Q(t,x))\);

  (3)

    the continuous function \(\lambda :[0,+\infty [\times [0,+\infty [\rightarrow \mathbb R\), defined as \(\lambda (t,r):=2\mu (t,r)-2\sigma (t,r)\), satisfies the following properties:

    (3.a)

      \(0<2\lambda (\theta (r),r)<r^2\), \(\lambda (0,r)=0\) for all \(0<r < \delta _0\);

    (3.b)

      \(r\mapsto \dfrac{\theta (r)r}{\lambda (\theta (r),r)}\) is bounded from above by a nonincreasing function \(\beta (\cdot )\in L^1(]0,\delta _0[)\).

Then, if we set \(\displaystyle \omega (r_0):= 2\int _{0}^{r_0}\beta (r)\,dr\), we have that \(T_\varOmega (x)\le \omega (d_S(x))\) for any \(x\in S_{\delta _0}\cap \overline{\varOmega }\).

Before proving the result, we make some remarks on the assumptions. Assumption (2.a) requires that from every x in the feasible set and sufficiently near S we can move while remaining inside the feasible set and not too far from S. Moreover, given a time \(t<T_\varOmega (x)\) (thus \(\mathscr {R}_x^{\varOmega }(t)\cap S=\emptyset \)), in (2.b) we assume the existence of a point \(y_t\) in the reachable set, not too far from x, such that the square of the distance from the target is decreased by at least \(\lambda (t,d_S(x))/2=\mu (t,d_S(x))-\sigma (t,d_S(x))\), while (2.c) prevents the points constructed in the proof from escaping to infinity when S is not compact. Assumption (3) requires \(\lambda \) to satisfy the hypotheses of Lemma 1, which will yield the conclusion.

Proof

(of Theorem 1). We define a sequence of points and times \(\{(x_i,t_i,r_i)\}_{i\in \mathbb N}\) by induction as follows. We choose \(x_0\in (S_{\delta _0} \cap \overline{\varOmega }){\setminus } S\), and set \(r_0=d_S(x_0)\), \(t_0=\min \{T_{\varOmega }(x_0),\theta (r_0)\}\). Suppose we have defined \(x_i\), \(t_i\), \(r_i\). We distinguish the following cases:

  1.

    if \(x_i\in S\), we define \(x_{i+1}=x_i\), \(t_{i+1}=0\), \(r_{i+1}=0\).

  2.

    if \(x_i\notin S\) and \(t_i\ge T_{\varOmega }(x_i)\), then in particular \(T_{\varOmega }(x_i)<+\infty \), thus we can choose \(x_{i+1}\in \mathscr {R}^\varOmega _{x_i}(T_{\varOmega }(x_i))\cap S\) and define \(r_{i+1}=0\), \(t_{i+1}=0\).

  3.

    if \(x_i\notin S\) and \(t_i<T_{\varOmega }(x_i)\), we choose \(x_{i+1}\in \mathscr {R}_{x_i}^{\varOmega }(t_i)\) such that

    $$\begin{aligned} \min _{\zeta _i\in \partial ^Pd_S(x_{i})}\langle r_i \zeta _i, x_{i+1}-x_i\rangle +\Vert x_{i+1}-x_i\Vert ^2 \le -\mu (t_i,r_i)+ \sigma (t_i,r_i), \end{aligned}$$

    and define \(r_{i+1}=d_S(x_{i+1})\), \(t_{i+1}=\min \{T_{\varOmega }(x_{i+1}),\theta (r_{i+1})\}\). According to the semiconcavity of \(d_S^2(\cdot )\) (with semiconcavity constant 2), and recalling that \(\zeta \in \partial ^P d_S(x)\) iff \(2\zeta d_S(x) \in \partial ^P d^2_S(x)\), we have that there exists \(\zeta _i\in \partial ^P d_S(x_i)\) such that

    $$\begin{aligned} r_{i+1}^2&-r_i^2\le \langle 2 \zeta _i r_i,x_{i+1}-x_i\rangle +2\Vert x_{i+1}-x_i\Vert ^2\le -\lambda (t_i,r_i). \end{aligned}$$
    (5)

    We notice that in this case \(x_{i+1}\notin S\) since \(x_{i+1}\in \mathscr {R}_{x_i}^{\varOmega }(t_{i})\) and \(t_i= \theta (r_i)<T_{\varOmega }(x_i)\), thus \(t_{i+1}>0\) and \(r_{i+1}>0\).

The assumptions of Lemma 1 are satisfied:

  1.

    \(r^2_{i+1}-r^2_i\le -\lambda ( \theta (r_i),r_i)\),

  2.

    it is obvious that \(\theta (r_i)\ne 0\) implies \(r_i\ne 0\). Indeed, assume that \(r_i=0\). Since \(0\le \theta (r)\le \tau (r)\), and \(\tau (r)=0\) iff \(r=0\), we have \(\theta (0)=0\).

  3.

    by assumption, there exists a nonincreasing \(\beta \in L^1(]0,\delta _0[)\) such that \(\dfrac{\theta (r)r}{\lambda ( \theta (r),r)}\le \beta (r)\).

Applying Lemma 1, we have that a) \(r_i\rightarrow 0\), b) \(\displaystyle \sum _{i=0}^{\infty } \theta (r_i)\le 2\int _{0}^{r_0}\beta (r)\,dr.\) Since \(\displaystyle \sum _{i=0}^{\infty }t_i\le \sum _{i=0}^{\infty } \theta (r_i)\), we have \(\displaystyle \sum _{i=0}^{\infty }t_i\le 2\int _{0}^{r_0}\beta (r)\,dr\). If S is compact, since \(d_S(x_i)\rightarrow 0\), we have that \(\{x_i\}_{i\in \mathbb N}\) is bounded. If S is not compact, for every \(j\in \mathbb N\) we notice that \(x_j\in \overline{\left( S_{2\delta _0}\cap \mathscr {R}_{x_0}^{\varOmega }\left( \sum _{i=0}^{j}t_i\right) \right) {\setminus } S} \subset \overline{B\left( 0,Q\left( \sum _{i=0}^{j}t_i ,x_0\right) \right) }.\) Since \(\sum _{i=0}^{j}t_i\) converges, there exists \(R>0\) such that \(Q\left( \displaystyle \sum _{i=0}^{j}t_i,x_0\right) \le R\) for all \(j\in \mathbb N\), thus also in this case \(\{x_i\}_{i\in \mathbb N}\) is bounded. Up to a subsequence, still denoted by \(\{x_i\}_{i\in \mathbb N}\), there exists \(\bar{x}\in \mathbb R^d\) such that \(x_i\rightarrow \bar{x}\). Since \(d_S(x_i)\rightarrow 0\), we have \(\bar{x}\in S\) and so \(T_{\varOmega }(x_0)\le \displaystyle \sum _{i=0}^{\infty }t_i\le \omega (d_S(x_0))\), which concludes the proof.
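As an illustration of the construction used in the proof (a toy example assumed for this sketch, not taken from the paper), consider \(\dot{y}=u\), \(u\in [-1,1]\), \(S=\{0\}\) and \(\varOmega =\mathbb R\). For \(x>0\) the choice \(y_t=x-t\) satisfies (2.b) with \(\mu (t,r)=tr\) and \(\sigma (t,r)=t^2\), so \(\lambda (t,r)=2tr-2t^2\); taking \(\theta (r)=\tau (r)=r/4\) gives \(\theta (r)r/\lambda (\theta (r),r)=2/3=:\beta (r)\) and \(\omega (r_0)=4r_0/3\). The following Python lines reproduce the resulting iteration.

    # Toy run of the iterative construction in the proof of Theorem 1:
    # dot y = u, u in [-1, 1], S = {0}, no active state constraint.
    # Choices (assumed for this sketch): y_t = x - t, mu(t, r) = t*r,
    # sigma(t, r) = t**2, theta(r) = tau(r) = r/4, beta(r) = 2/3.
    def theta(r):
        return r / 4.0

    x = 0.3                        # x_0 > 0, so r_0 = d_S(x_0) = 0.3
    elapsed = 0.0
    while x > 1e-9:
        t = theta(x)               # step duration t_i = theta(r_i)
        x = x - t                  # selected point x_{i+1} in the reachable set
        elapsed += t               # accumulate the times t_i

    omega = 4.0 * 0.3 / 3.0        # omega(d_S(x_0)) = 0.4
    print(elapsed, omega)          # elapsed -> T(x_0) = 0.3 <= omega = 0.4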

At this level the state constraints play no role, since their presence is hidden in Assumption (2) of Theorem 1, which requires the knowledge of at least an approximation of the reachable set in time t. In the control-affine case (4) this can be obtained by studying the Lie algebra generated by the vector fields appearing in the dynamics, since, as is well known, noncommutativity of the flows of such vector fields generates further directions along which the system can move, and so more \(\mathscr {A}\)-trajectories. Indeed, up to a higher-order error, such \(\mathscr {A}\)-trajectories can be described by means of their generating Lie brackets at the initial point, and so it is possible to impose the decrease condition of Assumption (2) of Theorem 1 directly on such Lie brackets. This gives a tool to check it in many interesting cases. State constraints may reduce the number of feasible \(\mathscr {A}\)-trajectories generated by Lie bracket operations, since in order to construct each of them we have to concatenate several flows, thus possibly exiting from the feasible region after a certain time. This problem can be faced, for instance, by imposing a sort of inward pointing condition (see e.g. [3]) in order to prevent such a situation, forcing all the flows involved in the construction of the bracket to remain inside the feasible region. Finally, we notice that the distance to the boundary of the feasible region may be estimated by a semiconcavity inequality similar to the one used to estimate the decrease of the distance from the target; thus, at each step of the construction in the proof of Theorem 1, it is possible to estimate also the distance from the boundary of the feasible region.
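As a small illustration of how noncommutativity generates new directions (a generic driftless example assumed for this sketch, not one of the systems studied in the cited papers), take \(f(x)=(1,0)\) and \(g(x)=(0,x_1)\) on \(\mathbb R^2\), so that \([f,g](x)=(0,1)\); concatenating the flows of f, g, \(-f\), \(-g\), each for a time t, yields an \(\mathscr {A}\)-trajectory (over the total time 4t) whose leading term is \(t^2[f,g](x)\).

    # Commutator of flows for f(x) = (1, 0) and g(x) = (0, x1) on R^2
    # (illustrative): following f, g, -f, -g for time t each lands at
    # x + t^2 * [f, g](x) + o(t^2), with [f, g](x) = (0, 1) here; the
    # resulting curve is an A-trajectory although it is not itself an
    # admissible trajectory of the driftless system.
    def flow_f(x, t):
        return (x[0] + t, x[1])              # exact flow of f

    def flow_g(x, t):
        return (x[0], x[1] + t * x[0])       # exact flow of g

    def bracket_endpoint(x, t):
        y = flow_f(x, t)
        y = flow_g(y, t)
        y = flow_f(y, -t)
        y = flow_g(y, -t)
        return y

    x0, t = (0.5, -0.2), 0.05
    print(bracket_endpoint(x0, t))           # (0.5, -0.1975)
    print((x0[0], x0[1] + t * t))            # x0 + t^2 * [f, g](x0), same point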

3 Comparison with Other Results

Example 1

The ground space is \(\mathbb R\), and we set \(S=\{z_k:\,k\in \mathbb N\}\cup \{0\}\). Since S does not satisfy the internal sphere condition, the results of [5] cannot be applied. Take \(U=[-1,1]\) and define \(f(x,u)=\frac{u}{\log |x|}\) for \(0<|x|<1/2\). We have that \(f\in C^{1,1}_{\mathrm {loc}}((S_{1/2}{\setminus } S)\times [-1,1])\) and w.l.o.g. we can extend it to a function in \(C^{1,1}_{\mathrm {loc}}((\mathbb R{\setminus } S)\times [-1,1])\), still denoted by f. Clearly, for any \(0<\bar{x}<1/2\) the optimal control corresponds to \(u(t)\equiv 1\), and for \(-1/2<\bar{x}<0\) the optimal control is \(-1\). We restrict our attention to \(x>0\) due to the symmetry of the system.

Consider now any \(\mathscr {A}\)-trajectory \(\sigma _{\bar{x}}(\cdot )\) starting from \(\bar{x}\) of the form \(\sigma _{\bar{x}}(t)=\bar{x}+a(t,\bar{x})+t^\alpha A(\bar{x})+ o(t^\alpha ,\bar{x})\), where \(A(\cdot )\) is a Lipschitz continuous map and \(\Vert a(t,x)\Vert \le t^s c(x)\) for some \(s>0\) and a Lipschitz map \(c(\cdot )\) satisfying \(c(x)\rightarrow 0\) when \(d_S(x)\rightarrow 0\), i.e., of the same structure as in [1, 2]. If \(\sigma _{\bar{x}}(t)>\bar{x}\) for all \(t>\tau _0\), the \(\mathscr {A}\)-trajectory does not approach the target. Excluding this case, and up to a time shift, we can restrict to \(\mathscr {A}\)-trajectories satisfying \(0\le \sigma _{\bar{x}}(t)\le \bar{x}\) for \(t>0\). In particular, we have that \(|\sigma _{\bar{x}}(t)-\bar{x}|\le \frac{2t}{|\log \bar{x}|}\), since all trajectories contained in \([0,\bar{x}]\) have modulus of speed which cannot exceed \(\frac{1}{|\log \bar{x}|}\). By letting \(\bar{x}\rightarrow 0^+\), we obtain for all \(t>0\) that \(\Vert t^\alpha A(0)+ o(t^\alpha ,0)\Vert \le 0\); thus, dividing by \(t^\alpha \) and letting \(t\rightarrow 0^+\), we obtain \(A(0)=0\) and hence, by Lipschitz continuity of \(A(\cdot )\), \(|A(x)|\le C|x|\). In particular, the results of [1] cannot be applied because the essential leading term vanishes as we approach the target. Theorem 3.1, which is the main result of [2], requires the existence of \(0\le \lambda <\displaystyle \frac{2\alpha }{2\alpha -1}\) and \(\delta >0\) such that, for the \(\mathscr {A}\)-trajectory \(\sigma _{\bar{x}}(\cdot )\), \(\langle x-\pi _S(x),A(x)\rangle \le -\delta d_S^\lambda (x)\), where \(\pi _S(x)\) is the projection of x on S. For \(x\ge 0\), we obtain \(d_S(x)=x\) and \(\pi _S(x)=0\), and together with \(|A(x)|\le C|x|\) this implies \(\lambda \ge 2\); but \(\dfrac{2\alpha }{2\alpha -1}\le 2\), since it is assumed that \(\alpha \ge 1\), thus also this result cannot be applied.

We now consider the optimal solution \(\gamma _{\bar{x}}(t)=\bar{x}+\frac{t}{\log \bar{x}}+o(t)\) corresponding to the control \(u=1\), and take \(y_{\bar{x}}(t)=\bar{x}+\frac{t}{2\log \bar{x}}\). It can be easily proved that \(\bar{x}>y_{\bar{x}}(t)>\gamma _{\bar{x}}(t)\), and from this that \(y_{\bar{x}}(\cdot )\) is an \(\mathscr {A}\)-trajectory. The assumptions of Theorem 1 are satisfied with \(\delta _0=1/2\), \(\theta (r)=\tau (r)=r|\log r|\), \(\mu (t,r)=\frac{rt}{2|\log r|}\), \(\sigma (t,r)=\frac{t^2}{4\log ^2 r}\), \(\beta (r)=2|\log r|\), providing the estimate \(T(x)\le 4(x-x\log x)\). Indeed, we can compute exactly the minimum time function in this case, which turns out to be \(T(x)=x-x\log x\) for \(0<x<1/2\) (and in general \(T(x)=|x|-|x|\log |x|\) for \(|x|<1/2\)).
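The closed form above can be double-checked numerically (an illustrative verification, not part of the example): with \(u\equiv 1\) and \(0<x<1/2\) we have \(\dot{x}=1/\log x<0\), hence \(dt=|\log x|\,dx\) along the optimal trajectory and \(T(\bar{x})=\int _0^{\bar{x}}|\log s|\,ds=\bar{x}-\bar{x}\log \bar{x}\), which the following Python lines approximate and compare with the bound \(4(\bar{x}-\bar{x}\log \bar{x})\).

    # Illustrative check of Example 1: with u = 1 and 0 < x < 1/2 the dynamics
    # is dx/dt = 1/log(x) < 0, so dt = |log x| dx along the optimal trajectory
    # and T(xbar) = integral_0^xbar |log s| ds = xbar - xbar*log(xbar).
    import math

    def T_numeric(xbar, n=200000):
        # midpoint rule for the integral (the singularity at 0 is integrable)
        h = xbar / n
        return sum(abs(math.log((k + 0.5) * h)) * h for k in range(n))

    xbar = 0.3
    T_exact = xbar - xbar * math.log(xbar)        # = 0.6612...
    T_bound = 4.0 * (xbar - xbar * math.log(xbar))
    print(T_numeric(xbar), T_exact, T_bound)      # numeric ~ exact <= bound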